Ironically, Google is the safer bet and this might have been a correct decision from Tim Apple.
AI is still changing at a rapid pace, and OpenAI is no longer the only game in town at the top. On top of that, their finances are... something we'll be hearing about over the next year, and Sam Altman is an incredibly unscrupulous person with past actions and decisions catching up to him. Not exactly the partner you want.
At this point your AI on Apple devices doesn't need to be revolutionary; it needs to work and be better than the current situation, which is not difficult.
Gemini 3 is quite good for the general public, Google has the money to keep playing the AI game, and it also played ball with Apple. OpenAI only has one or two of those going for it.
Possibly. Until online features are added, marking a slow decline into the everything-as-a-subscription-service world (E-ASS). Look at Apple vs Google stock price for the carrot.
Even if I turn it off, how do I remove it and reclaim the space I paid for?
The delusion of the “Mac just works” crowd is only matched by the delusion of the Linux “year of the Desktop” crowd.
They share the same trait of “it works on my machine, I like it, therefore it’s my identity and everything else is wrong”
I've used Linux regularly as a second OS for more than 15 years. Driver compatibility improved; software design and quality didn't. In fact, it suffers from more or less the same problems as other mainstream OSes.
Linux on the desktop has been winning because the others got bad at a faster pace, I’m not sure there is anything to be celebrated.
It's probably not worth it to say more than that my experience simply differs from yours. I've found it incredibly unproductive to quibble with people who have jumped, right out of the gate, to the conclusion that some difference of opinion must stem from some kind of identity-justification/confirmation-bias delusion. This seems to be the most common mindless kneejerk criticism people reach for these days when they're engaging not with the person they're talking to, but with a strawman or stereotype they believe that person represents, which in turn seems to be the most common failure mode of internet argumentation in general.

It's interesting to see how the real phenomenon of confirmation bias, and some relatively well-respected theories about opinion-as-identity, have jumped from the psychological literature to being basically pervasive thought-terminating cliches. But I like writing out my thoughts, so... against my better judgment I'll write 'em out here. As a treat.
My experience with the linux ecosystem overall, which seems consistent with that of the person you're responding to from what little information that post gives, has been of consistent improvement over a long timescale, with an increasingly capable stack of open-source software whose exact pieces have shifted with various community and maintainer dramas and the natural process of the birth of new projects and death of old ones over time. I've found that I have my preferences within that ecosystem; for instance, I settled on archlinux as a distro about ten years ago and haven't really seen a strong reason to switch, despite periodically working with other popular ones in the course of a career as a software engineer and researcher. I have strong reasons to prefer a modular, composable operating system that I control, so I wouldn't consider using proprietary software if there's a working FOSS alternative. This is a bias for sure!

But I find my frustration with these things has decreased in aggregate over time, even as I've changed tools and suffered switching costs for it numerous times, and dealt with the general hostility with which a lot of manufacturers seem to view open-source software running on their hardware, and their attempts to make this more difficult. However, the aggregate experience of proprietary software users seems to have significantly degraded over the same period. They generally insist that this is still worth it to them over doing what I do, and again I've been in enough dumb internet arguments to know that it's not worthwhile to do more than gently suggest that alternatives exist and may be worth trying unless I know them personally.
I do get a window into proprietary ecosystems nonetheless, because I still don't feel I can replace the use cases required of me on mobile phones with an open-source alternative yet, and I have seen my frustrations steadily increase over time with both these and the SaaS products I've been required to use for work. I also got frustrated enough with game consoles that I've entirely switched over to using PCs, running linux, for any games I want to play. At every turn, I have found that while computers are always error-prone in some way or another, and using them extensively will result in some frustration, this is significantly less when I have more control over the computer, and it has become less rather than more frequent as open-source projects mature. Not only has my own experience with proprietary products followed the opposite pattern; more and more people talking about tech companies with scorn rather than effusive praise, yelling at their phones, and the public discourse adopting terms like "platform decay", "enshittification", "tech rot", etc. all suggest that this is a general trend rather than my biases.
Again, your mileage may vary, but I do find it odd that you are so immediately dismissive of this perspective, accusing a pretty innocuous comment of reactionary identity-defense basically immediately, without engaging at all. If you're inclined to listen to a zealot like me at all, I would only urge you to consider why you have assumed this so quickly, and why you are so adamant that this is the only sort of person who could form such an opinion.
Even most toy databases "built in a weekend" can be very stable for years if:
- No edge-case is thrown at them
- No part of the system is stressed (software modules, OS, firmware, hardware)
- No plug is pulled
Crank the requests to 11 or import a billion rows of data with another billion relations and watch what happens. The main problem isn't the system refusing to serve a request or throwing "No soup for you!" errors, it's data corruption and/or wrong responses.
I really never understood how people could store very important information in ES like it was a database.
Even if they didn't understand what ES is and what a "normal" database is, I'm sure some of those people ran into issues where their "db" either got corrupted or lost data, even while testing and building their system around it. This was general knowledge at the time; it was no secret that from time to time things got corrupted and indexes needed to be rebuilt.
It doesn't happen all the time, but way more than zero times, and it's understandable, because Lucene is not a DB engine or "DB grade" storage engine; its maintainers had other, more important things to solve in their domain.
So when I read stories of data loss and things going south, I don't have sympathy for anyone involved other than the unsuspecting final clients. These people knew, or more or less knew, and chose to ignore it and be lazy.
> I really never understood how people could store very important information in ES like it was a database.
I agree.
It's been a while since I touched it, but as far as I can remember ES has never pretended to be your primary store of information. It was mostly juniors who reached for it for transaction processing, and I had to disabuse them of the notion that it was fit for purpose there.
ES is for building a searchable replica of your data. Every ES deployment I made or consulted sourced its data from some other durable store, and the only thing that wrote to it were replication processes or backfills.
We had something like this to scale out for higher throughput. Just tens of thousands of requests per second required 100+ nodes, simply because each query would incur an expensive scatter and gather.
I’ve no experience with Elastic, but I think what they’re getting at is that in Elastic the indexes actually are your data, because that’s all it does given the purpose it was built for, whereas in Postgres indexes are, well, indexes; that is, derived data, not the source of truth.
Usually in companies, people have a main durable store of information that is then streamed to other databases, which store a transformation of this data with some augmentation.
These new data stores don't usually require that level of durability or reliability.
Good decision for a change; now we're looking at the execution track record and the ability to stick with it...
Yeah, that's where the bad news starts.
They have a tendency to go from trend to trend, always as a "me too, I'm here" player. Deliver first and stick with it; Mozilla's fund of goodwill is long gone, so there's little reason to get excited about "mission statements".
Stoicism is like recommending a couple of drinks (literally) to a "normal" person with mild social anxiety who needs to go out into the world and live life.
It works and it's good advice.
Unfortunately it gets recommended to everybody at every point in their lives, which includes alcoholics and people in crisis.
In a more direct way: stop with this "no emotion", "I'm a fortress" bullshit. It only helps a narrow group of people in specific circumstances of their lives but wreaks havoc on everybody else, because it's misplaced and mostly a lie, or at least a very incomplete picture.
"In most organizations, knowledge increases as you go up the hierarchy. CEOs understand their business better than middle managers. "
I chuckled at this one.
I'll give the author the benefit of the doubt and imagine he was referring to the act of running a "business"/agenda in parallel with the business that is conducted day to day by normal people.
Yes, employees and managers can be doing the business of selling paper while the CEO is conducting the business of inflating the stock and massaging the numbers to fulfill the objective the board gave him privately, because the owner wants to sell the business to buy a bigger boat and a nice apartment in NYC for his angel of a daughter.
As CEO of Htmx, I would like to express my gratitude for your ongoing support as we continue our journey of strategic execution and operational excellence. I remain steadfast in my commitment to delivering incremental yet impactful value to our stakeholders, optimizing synergies where possible, and increasing market share in a manner that will look excellent in future investor updates.
Can't wait for all the profound and fulfilling work ahead of us in 2026!
As CEO of htmx I agree with everything you said. The synergies we are developing within the web development ecosystem continue to make us a leader in the space.
What's holding me back from trying out HTMX is that people seem to be hitting roadblocks with it when it comes to larger or more complex codebases. Is HTMX suitable for larger enterprise applications? Or is it, as some people have suggested - perhaps cynically - a simple lightweight replacement of jQuery?
For a start, it doesn't have to be a replacement. You can progressively add it in. I work at a very very large organisation with a multi-million line codebase and we splash htmx here and there where it is useful (and where a full blown SPA would be too much to set up). We don't have to ditch any other FE tooling in favour of htmx - htmx "just works" nicely alongside everything else.
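For a flavor of what "splashing htmx here and there" can look like, here's a minimal sketch of adding it to an existing server-rendered page; the script tag and the `/fragments/notifications` endpoint are made-up placeholders for illustration, not anything from the parent comment:

```html
<!-- htmx is added with a single script tag, no build step required -->
<script src="https://unpkg.com/htmx.org@2"></script>

<!-- an existing page gains async behavior attribute by attribute:
     clicking issues a GET and swaps the returned HTML into the panel -->
<button hx-get="/fragments/notifications"
        hx-target="#notification-panel"
        hx-swap="innerHTML">
  Refresh notifications
</button>
<div id="notification-panel"><!-- server-rendered HTML lands here --></div>
```

Nothing else on the page has to change, which is why it can coexist with whatever FE tooling is already there.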
many enterprise applications are "wide" complex (that is, lots of screens, but relatively simple individually) where the complexity can mainly live server side in the domain model, and hypermedia is great for these
hypermedia isn't always as good for "deep" complex apps, with complicated individual screens, because server round trips are often unacceptable from a latency perspective for them, here client-side scripting of some sort is a better solution. You can use islands for this situation to mix the two models.
I have seen people rewrite entire application from React to htmx.
It works, but the architecture required is a tad different. Also, you need Alpine as a complementary library for the reactive parts. (I mean, you could do a lot just with htmx, but I find Alpine more convenient in many places when I need to work with JSON, since I don't control the whole backend and JSON isn't really a first-class citizen of htmx.)
The beauty of it is that you don't _need_ Alpine at all, Alpine just comes up because it's popular, it solves the problem of lightweight inline scripting, and it integrates relatively seamlessly with htmx.
If you don't want to use Alpine for whatever reason, you can just write your own javascript, you can use hyperscript, you can use some other inline scripting library.
> when I need to work with json - since I don't control all backend and json isn't really a first class citizen of htmx
Yeah, if you can't make the backend return HTML, you're worse off if you want to use htmx.
There are extensions [1][2] for receiving and rendering JSON responses with htmx (though I haven't used them), but I totally understand it starting to feel like a worse fit.
If you’re using Alpine already, then is there a good reason to use HTMX over alpine Ajax? They both look quite similar to me, but I don’t do enough front end work to tell the difference.
Htmx offers more flexibility than Alpine Ajax. Here's an example: htmx allows using relative selectors, which allow you to target elements relative to the triggering element in the DOM tree. This gives us a lot of power for swapping in pieces of UI without having to make up ids for lots of elements.
I have a blog post in the works for this feature, here's a small code sample I made to show the idea:
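The sample didn't survive into this thread, so here's a hedged reconstruction of the idea using htmx's documented relative target selectors (`closest`, `next`, etc.); the `/contacts/42` endpoint is invented for illustration:

```html
<!-- delete a table row without assigning every row a unique id:
     "closest tr" resolves relative to the button that triggered the request -->
<tr>
  <td>Jane Doe</td>
  <td>
    <button hx-delete="/contacts/42"
            hx-target="closest tr"
            hx-swap="outerHTML">
      Delete
    </button>
  </td>
</tr>
```

On a 2xx response the whole `<tr>` is replaced by whatever the server returns (an empty body removes it), with no id bookkeeping on the client.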
I have tried using each of the libraries exclusively to better understand their limits. Over time I arrived at the following observations:
- htmx is more straightforward (because a lot of the magic basically happens in the backend) and helps a lot to keep some sanity.
- Alpine shines when you need more composition or reactivity in the frontend, but it gets verbose quickly. When you feel you are reimplementing the web, it means you went too far.
For pagination, page structure, big tables, confirmation after a POST, etc., I usually go with htmx. For modals, complex form composition (especially when you need to populate dropdowns from different APIs), and fancy animations, I prefer Alpine. (I probably could do that with htmx by wrapping it in a backend, but it's often more flexible to do it in the frontend directly.)
To me, the main reason I use these libraries is that what I write today will still be valid in 5 years without having to rewrite the whole thing, and that matters since I have to maintain most of what I write.
So, instead of using one JavaScript library with an entire ecosystem of tools that work together, you use two separate uncoordinated JavaScript libraries? Why do you think that's better?
Different libraries composing well together is the default assumption in most of software development. Only in Javascript have people given up on that and accepted that libraries don't work together unless they've been specifically designed, or at least given a compatibility layer, for the framework they're being used in.
Qt widgets don't work together with GTK widgets, and nobody considers this a crisis. I'm pretty sure you can't use Unreal engine stuff in Unity. GUIs require a lot of stuff to compose together seamlessly, and it's hard to do that in a universal way.
HTMX achieves its composability by declining to have opinions about the hard parts. React's ecosystem exists because it abstracts client-side state synchronization, and that inherent complexity doesn't just disappear. When you still have to handle the impedance mismatch between "replace this HTML fragment" and "keep track of what the user is doing", you haven't escaped the complexity. You've just moved it to your server, and you've traded a proven, opinionated framework's solution for a bespoke one that you have to maintain yourself.
If anything, the DOM being a shared substrate means JS frameworks are closer to interoperable than native GUI toolkits ever were. At least you can mount a React component and a Vue component in the same document. They're incompatible with each other because they're each managing local state, event handling, and rendering in an integrated way. However, you can still communicate between them using DOM events. An HTMX date picker may compose better, but that's just because it punts the integration to you.
Ecosystems have their downsides too. Just a small example, no htmx users were impacted by the React Flight Protocol vulnerabilities. Many htmx users have no-build setups: no npm, no package.json, nothing. We don't have to worry about the security vulnerability treadmill and packages and tools arbitrarily breaking and no longer building after some time passes. We just drive the entire webapp from the backend, and it just works.
One never uses just one JS lib :) The JS ecosystem always comes with lots of tools, and libs, and bells, and whistles.
I like Elm for this reason. Fewer choices. Zero runtime errors (I know errors are possible in contrived examples, but I've seen it hold, and many teams have said the promise holds true after many years of use in production).
HTMX would work well with jQuery. But Alpine seems to be more popular in the HTMX crowd. I'd say Alpine is a good replacement for jQuery in my conceptual model.
HTMX just means: send incomplete HTML documents over the wire. (Something that has been done for a long time, but was always frowned upon by the API-first and SPA movements, and for good reasons: ugly APIs, and architecturally less compatible with SPAs.)
I've found that just using vanilla JavaScript for these handlers and simple state management also works fine. If you are using a template, each HTML page can have a little JS section at the bottom with glue logic, and it's super easy to read and maintain.
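As a sketch of that pattern (the `#load-more` button and the `/api/items` endpoint are hypothetical, and the assumption here is that the server returns `<li>` fragments):

```html
<button id="load-more">Load more</button>
<ul id="items"></ul>

<script>
  // page-local glue logic: fetch an HTML fragment and append it
  document.getElementById('load-more').addEventListener('click', async () => {
    const res = await fetch('/api/items?page=2');  // hypothetical endpoint
    const html = await res.text();                 // server returns <li> fragments
    document.getElementById('items').insertAdjacentHTML('beforeend', html);
  });
</script>
```

No library, no build step, and anyone reading the page sees the entire behavior in one place.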
I think the reason htmx and Alpine are both popular is that they both get added as HTML attributes. So a lot of the time you really feel like you're just writing HTML.
I mean, there's a reason people made client-side frameworks in the first place. Distributed state synchronization is really, really hard to do manually.
I think HTMX is really well designed for what it is, but I struggle to think of an occasion when it would be the best option. If you have any amount of interactivity, you'll get better DX with a full client side framework. If you don't have much interactivity, then you don't even need a JavaScript library in the first place. Just write a static website.
For the vast majority of web apps (including the ones that are built with SPA frameworks now), "how do I do distributed state synchronization" is an example of the XY problem. Most of the time, you don't actually need to write an entire separate app that understands your domain and synchronizes state with the backend, you need something that allows your users to trigger network requests and for the HTML displayed to them to be updated based on the response. Hypermedia is fully capable of solving that problem, completely sidestepping the need to solve the sort of state synchronization problem you mention.
> If you display mutable derivations of the server-side state in more than one place on the client, you're synchronizing it.
Again, this is the XY problem. Your actual requirement isn't "display mutable derivations of the server-side state in more than one place on the client", it's "update two parts of the DOM in response to user action". You can usually accomplish this with HTMX just fine by either using out of band swaps or swapping out a mutual parent element, depending on your actual needs. You can think of this as state synchronization if you really want to, but it's meaningfully different and significantly easier. Your frontend state isn't a synchronized partial copy of the backend state requiring custom software to manage, it's a projection from that state with embedded possible transitions/mutations.
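A hedged sketch of the out-of-band variant (the ids and values are invented): the server's response to, say, a `POST /cart` contains the main fragment plus any extra elements flagged with `hx-swap-oob`, which htmx swaps by id wherever they live in the DOM.

```html
<!-- main swap target: replaces the element the request was wired to -->
<div id="cart-contents">
  <!-- ...updated cart rows... -->
</div>

<!-- out-of-band fragment: replaces the matching #cart-count element
     elsewhere on the page (e.g. a header badge) -->
<span id="cart-count" hx-swap-oob="true">3</span>
```

Both parts of the DOM update from one response, with no client-side state store coordinating them.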
> If you're not displaying mutable derivations of server-side state in more than one place on the client, then you don't need HTMX.
Even if you think HTMX isn't a good solution, and limiting ourselves to swapping out a single element, it very clearly enables a lot of behavior that just isn't possible with standard HTML hypermedia controls (links and forms). Things like active search, infinite scroll, etc. cannot be done with vanilla HTML, because you can only trigger HTTP requests with a small subset of events, and if you do trigger one you must replace the entire page.
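Active search, for example, reduces to a few attributes. This follows the pattern in the htmx docs; the `/search` endpoint is a placeholder:

```html
<!-- issue a GET as the user types, debounced by 300ms, only when the
     value actually changed, and swap the results fragment in below -->
<input type="search" name="q"
       hx-get="/search"
       hx-trigger="input changed delay:300ms"
       hx-target="#search-results">
<div id="search-results"></div>
```

The `input` event is not one of the events plain HTML can use to trigger a request, which is exactly the gap being described.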
OOB swaps are exactly what I'm talking about. That's imperative state management that would be easier with a client side framework. You shouldn't have to manually write out every single part of the app that relies on a single state transition. Do you really think that's scalable? Why would you choose to do that instead of using a tool that completely negates that concern?
You have to manually write that out either way. In the SPA/reactive paradigm, you have to specify that multiple parts of the UI depend on the same part of the state, vs sending down those multiple parts of the UI from the backend.
I'd also argue that if you look at the interactions on web apps, you'll find the number of cases where you would actually need an OOB swap is more limited than you might be thinking.
This isn't just a hypothetical. I have written apps both ways, including porting a few from a SPA framework to a hypermedia based solution. It allowed me to sidestep several major sources of complexity.
The reasons, however, are not as valid anymore in 2026.
Plain HTML has lots of extra features that did not exist in 2010 (form validation and input types, canvas, fetch, the history API), and some shortcomings have disappeared (like the 'Flash of Unstyled Content').
Endless scrolling (made popular by Facebook/react) used to be heavy on the browser and sometimes made mobile devices unresponsive. That is not an issue anymore.
Tbh, I can't name a single issue we have today that requires large client-side frameworks for a fix.
I use it in one place, where it makes sense: I want the server to template something that works interactively/asynchronously. The rest of my current app is, thank God, oldskool SSR HTML request-response over an SQL db.
I've shipped multiple projects running HTMX, and I generally like it.
Grain of salt too, I'm typically a "DevOps engineer", and I generally lean towards backend development. What I mean to say is that I don't know react and I don't want to.
My understanding of it is that HTMX is a library, whereas React is a framework. With a library, you need to figure out the structure yourself, and that sometimes makes things more difficult since it's another responsibility. This is likely where things fail for the large enterprise apps _not_ using a framework, since structuring the codebase for an enterprise application (and convincing your colleagues to like it) is genuinely difficult and there's no way around that.
> as some people have suggested - perhaps cynically - a simple lightweight replacement of jQuery?
I don't even see this as cynical; I think it's a relatively fair assessment. A key difference is that jQuery has its own language to learn, whereas htmx is pretty much a few extra HTML tag attributes.
I'd recommend you just try HTMX out when you have an opportunity to write something small and full stack, you might like it a lot.
I can’t tell if calling themselves the CEO of Htmx(sic) is satire or meant to be taken seriously. Heck, I can’t tell if this entire post is satire or meant to be taken seriously.
Maybe that’s the point? In the “it’s satire” vein, the “htmx.org” URL points to a… X profile for @htmx_org where the display title is “CEO of National Champs (same thing)”, and which has (from a logged-out perspective) a lot of programmer memes centered on htmx.
In the “it’s a serious post” vein, unfortunately a non-trivial number of HN-linked posts contain verbiage like:
> I would like to express my gratitude for your ongoing support as we continue our journey of strategic execution and operational excellence. I remain steadfast in my commitment to delivering incremental yet impactful value to our stakeholders, optimizing synergies where possible, and increasing market share in a manner that will look excellent in future investor updates.
> Can't wait for all the profound and fulfilling work ahead of us in 2026.
And those sentiments are not wholly and consistently criticized as the BS they are, so it’s plausible to believe this about a JavaScript library.
For all his insightful takes about everything under the Sun, Dan's cynicism and skewed view of "Europe" show in this letter.
It's not that all his takes are wrong; it's the exaggeration, the doom and gloom, and a certain dismissiveness, or some unresolved personal issues he has with "Europeans".
The irony is not lost on me that Dan acts as smug and dismissive as he accuses Europeans of being.
Regarding the whole "Degrowth" thing: yes, Europe has those people, they found their gold in governmental entities, and they entertain the rich. But... that's exactly what happens in the US too, and Dan, knowledgeable as he is, should know this was mostly an American academia export; he just needs to talk with some people in the very same colleges he regularly sets foot in.
Also, he should take a hint from his own observation that historically liberal societies have fared much better than autocratic ones, even if the latter are very focused and appear to make progress very quickly. Having a few mega-billionaires directing what the populace does or doesn't do might not be as smart a move as it sounds. We'll see when the AI musical chairs stops.
Btw, Europe has been dead and on the brink of destruction for a few centuries by now. And according to experts the EU is about to collapse 3 or 4 times a year - minimum.