I don't buy it, I've used LLMs (well, mostly sonnet 4.5 and sometimes gpt5) in a variety of front-end frameworks (react, vue, htmx) and they do just fine. As usual, requires a lot of handholding and care to get good results, but I've found this is true for react codebases just as much as anything else.
> As usual, requires a lot of handholding and care to get good results, but I've found this is true for react codebases just as much as anything else.
I think you and others in this thread have either just skimmed the article or just read the headline. The point isn't that you can't use LLMs for other languages, it's that the creators of these tools AREN'T using other languages for them. Yes, LLMs can write Angular. But if there's less data to train on, the results won't be as good. And because of this, it's creating a snowball effect.
Not your parent commenter but their point was clear to me.
To me, they don't buy the argument that the snowball effect is significant enough to overcome technical merits of different frontend frameworks.
And I'll add that older libraries like React have at least one disadvantage: there's a lot of outdated React code out there that AI is being trained on.
> I wonder if React has something to keep AI on their toes about best practices.
Ahh, I wouldn't hold my breath.
And to your point, I guess another thing Svelte has is its compatibility with just vanilla JS, meaning (I think) it doesn't necessarily have to be "Svelte" code to still work with Svelte.
I don't buy the premise - that LLMs being trained on more React code than other frameworks is going to cause the collapse of alternatives. The data presented in the article isn't very convincing to me - it's absolute numbers, it's not a zero-sum game, and besides LLM coding is the worst it's ever going to be. Hypothetically, even if the data was convincing (showing a massively increasing relative share of React usage since LLMs entered the scene), I don't think it's sensible to extrapolate from current trends about LLM coding anyway. This stuff is barely a few years old and we want to make confident predictions about it?
> I don't buy the premise - that LLMs being trained on more React code than other frameworks is going to cause the collapse of alternatives
But if fewer people are exposed to those frameworks, then surely that means they will be less popular? I'm struggling to understand your argument.
> The data presented in the article isn't very convincing to me - it's absolute numbers, it's not a zero-sum game,
Of course it is. If I'm using React to build a site, I'm not using Svelte to build it. If fewer people are using a framework, there will be less funding. If more people use it, more money.
> I don't think it's sensible to extrapolate from current trends about LLM coding anyway.
The actual tools themselves are using React. Bolt, a UI design LLM, uses React by default. I don't even think there's an option to use a different framework right now. These tools have taken over the industry, and have absolutely exploded in popularity in the few years they've been available. This is going to create a snowball effect.
> This stuff is barely a few years old and we want to make confident predictions about it?
I don't think you read the article as closely as you think you did. Saying "React has probably spiked in popularity because LLMs use it by default" isn't that controversial. And it's true. And I don't think it's a long shot to say "If there's less data associated with a framework, it'll be less likely to be used by these tools and then less likely to be used at all." In fact, it feels like a pretty obvious conclusion.
We can ignore what is clearly happening (which even as a React dev I don't want because it WILL limit my future options) or work to make sure those tools are offering other defaults.
> But if less people are exposed to those frameworks, then surely that means they will be less popular?
I agree, but I don't think the data suggests that is what's happening. The data presented in the article shows only that the number of new sites made with React has increased greatly since LLMs arrived on the scene. But there's a base rate fallacy here - we aren't shown data for any other frameworks!
>Of course it is.
That's not what I mean by a zero-sum game. There isn't a fixed number of websites that different frameworks are taking a share of (this would be a zero-sum game). The number of websites itself has massively increased since LLMs arrived on the scene. You can very quickly spin up 100 new sites using your new framework without all the other frameworks "losing" 100 sites, you know what I mean? Similarly I think the number of people making websites has exploded for the same reason.
And this is another explanation for the data in the article - that there are simply way more sites being created now that it's so trivial for anyone to make one. Have a look at the StackExchange links I gave in my last comment. There isn't much evidence there that React is overwhelming the industry (especially amongst professional devs), although I grant you it would be difficult to measure if it were true.
> The actual tools themselves are using React. [...] These tools have taken over the industry.
Yes, but so have plenty of other tools that don't use React by default, like Claude Code or Codex. There are plenty of new websites being made across all of the major frameworks.
> I don't think you read the article as closely as you think you do.
Do you mind cutting it out with the ad hominem attacks? I've been nothing but respectful to you, and in each of your replies you've made little jabs at me about "not understanding the article". I just disagree with you, friend, be nice =)
Why is this on the front page of hacker news? Hopefully that comes across as a genuine question and not snark. I mean as an ex-mathematician I'm thrilled, but schemes are an incredibly abstract object used in an incredibly abstract branch of mathematics (algebraic geometry).
Interesting, yeah. I guess he was the mathematical equivalent of the "rogue" archetype. Brilliant, did things in his own way, total lack of respect for authority, shrouded in mystery. I can definitely see the appeal =)
in many educational systems, aptitude in math (the more abstract, the better) is conflated with intelligence. so maybe many of us have internalized we should valorize it?
Sometimes, the best way to learn about abstruse topics one has a passing curiosity in is to upvote what pops up on HN and hope that some nerd might drop by and comment with a simplified intuitive picture for plebs :-)
These days, some nerds prefer to ask AI to confirm their "precious" intuitions of why schemes might be needed in the first place. To fix the problems with certain basic geometric notions of old timers? They are then so spooked that the AI instantly validates those intuitions without any relevant citations whatsoever that they decide not to comment
But still leave warnings to gung-ho nerds in the form of low-code exercises
That's a theory, but I think it's more likely that the few people in the world who deeply understand schemes are locked in the basement of a mathematics department somewhere, and not on hacker news =P
> That's a theory, but I think it's more likely that the few people in the world who deeply understand schemes are locked in the basement of a mathematics department somewhere, and not on hacker news =P
I rather think that, because of the very low career prospects in research, quite a lot of people who are good in this area left research and took a job in finance or at some Silicon Valley company, and thus might actually at least sometimes have a look at what happens on Hacker News. :-)
I think you overestimate how many people exist in the world with a professional interest in algebraic geometry! The vast majority of mathematicians have no idea how to compute with schemes (and there aren't that many of them to begin with).
Even though I am from a different area of mathematics, I know quite a few people who work(ed) in algebraic geometry (and at the university where I graduated there wasn't even an academic chair for (Grothendieck-style) algebraic geometry).
The number of people I know who would love to learn this material is many orders of magnitude larger (just to give an arbitrary example: a pretty smart person who studied physics but, for various reasons, neither had any career prospects in research nor found a fulfilling job, and who out of boredom decided he would love to get deeply into Grothendieck-style algebraic geometry).
I guess we hang out in different academic circles. I met a single algebraic geometer in my whole academic career. But people are into very different stuff where I come from, which may have biased me (topology, number theory and category theory for the most part, and a lot of relativity/fluid dynamics on the applied side of the department). Based on rough estimates from papers published on arxiv over the last few years, I (very) conservatively estimate there are ~5000 working algebraic geometers in the world right now.
> The amount of people I know who would love to learn this material [...]
I am one of them =) but my point wasn't really about people who want to learn the material (which I assume includes many orders of magnitude more humans) it was about people who already deeply understand it.
It's hard to help GP but I'm gonna try (pls forgive me):
I believe that the masses don't have a deep understanding of Schemes because of enemy action by the sufficiently advanced stupidity (aka loneliness) of the intelligent :)
Their interest is "pro" and they are not a hypothesis
(& I'd NOT bet against that they understand deeper than Sturmfels and his students)
Schemes (like cat theory) have become a sort of religion-- it's sad because Grothendieck himself might not have understood them intuitively.. and it won't be the first time.. Feynman didn't understand Path Integrals, nor Archimedes integration!! BECAUSE they were all loners whose first resort was WRITING LETTERS
Ps: as with Jobs.. I hesitate to call Buzzard a full-time salesman
If you want to hang out in meatspace: do you have a public key?
This is very cool as a science experiment, but if you're interested in getting the best results (for you) you should just taste as you cook. We're born with high-fidelity chemical and tactile sensors - use them!
Can you provide examples of software that should have literally been a spreadsheet or an ETL? Not to call you out specifically, but this feels like "I could have written that in a weekend". Personally, whenever I have felt that way about a project it turned out I was just missing 95% of the business context/domain knowledge (part of the reason I think rewrites are a bad idea - Chesterton's fence).
I can give you one - a billing system with reporting features that suck dreadfully compared to what can be done with a spreadsheet. Why not "just" let the user download a CSV and then do whatever they want with it?
I was working on this project and I kept suggesting it. It was seen as inferior and yet the system we were going to produce was far inferior in every way.
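For what it's worth, the "just let them download a CSV" approach really is tiny. Here's a minimal sketch - the field names and rows are entirely invented for illustration, not taken from the actual billing system:

```python
import csv
import io

# Invented example rows standing in for the billing system's data.
invoices = [
    {"invoice_id": "INV-001", "customer": "Acme", "amount": 1250.00},
    {"invoice_id": "INV-002", "customer": "Globex", "amount": 430.50},
]

def to_csv(rows):
    """Serialize billing rows to CSV so users can slice them however
    they like in a spreadsheet, instead of relying on built-in reports."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["invoice_id", "customer", "amount"])
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

print(to_csv(invoices))
```

A dozen lines, and the user gets every report a spreadsheet can express, versus whatever fixed set of reports we would have shipped.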
Is LLM output the kind of clever we're talking about here? I always thought the quote was about abstraction astronautics, not large amounts of dumb just-do-it code.
I love Go and have played it a lot in person, but I always struggle to get games online, even on OGS. Feels like the online community is very small compared to chess (which is now my boardgame of choice, basically for this reason). Has this changed? Are there better sites now where a beginner can find matches without waiting half an hour or more?
To be clear, there are 140 active games right now. That 21k number is active “correspondence” games where moves can take a day and games can take months.
What are the kinds of things real engineers do that we could learn from? I hear this a lot ("programmers aren't real engineers") and I'm sympathetic, honestly, but I don't know where to start improving in that regard.
This is off the cuff, but comparing software & software systems to things like buildings, bridges, or real-world infrastructure, there are three broad gaps, I think:
1. We don't have a good sense of the "materials" we're working with - when you're putting up a building, you know the tensile strength of the materials you're working with, how many girders you need to support this much weight/stress, etc. We don't have the same for our systems - every large scale system is effectively designed clean-sheet. We may have prior experience and intuition, but we don't have models, and we can't "prove" our designs ahead of time.
2. Following on the above, we don't have professional standards or certifications. Anyone can call themselves a software engineer, and we don't have a good way of actually testing for competence or knowledge. We don't really do things like apprenticeships or any kind of formalized process of ensuring someone has the set of professional skills required to do something like write the software that's going to be controlling 3 tons of metal moving at 80MPH.
3. We rely too heavily on the ability to patch after the fact - when a bridge or a building requires an update after construction is complete, it's considered a severe fuckup. When a piece of software does, that's normal. By and large, this has historically been fine, because a website going down isn't a huge issue, but when we're talking about things like avionics suites - or even things like Facebook, which is the primary media channel for a large segment of the population - there are real-world effects to all the bugs we're fixing in 2.0.
Again, by and large most of this has mostly been fine, because the stakes were pretty low, but software's leaked into the real world now, and our "move fast and break things" attitude isn't really compatible with physical objects.
There's a corollary to the combination of 1 & 3. Software is by its nature extremely mutable. That in turn means that it gets repurposed and shoehorned into things that were never part of the original design.
You cannot build a bridge that could independently reassemble itself to an ocean liner or a cargo plane. And while civil engineering projects add significant margins for reliability and tolerance, there is no realistic way to re-engineer a physical construction to be able to suddenly sustain 100x its previously designed peak load.
In successful software systems, similar requirement changes are the norm.
I'd also like to point out that software and large-scale construction have one rather surprising thing in common: both require constant maintenance from the moment they are "ready". Or indeed, even earlier. To think that physical construction projects are somehow delivered complete is a romantic illusion.
> You cannot build a bridge that could independently reassemble itself to an ocean liner or a cargo plane.
Unless you are building with a toy system of some kind. There are safety and many other reasons civil engineers do not use some equivalent of Lego bricks. It may be time for software engineering also to grow up.
Right, your number 1 is quite compelling to me - a lack of standard vocabulary for describing architecture/performance. Most programmers I work with (myself included sometimes) aren't even aware of the kinds of guarantees they can get from databases, queues, or other primitives in our system.
On the other hand 3 feels like throwing the baby out with the bathwater to me. Being so malleable is definitely one of the great features of software versus the physical world. We should surely use that to our advantage, no? But maybe in general we don't spend enough energy designing safe ways to do this.
> 3. We rely too heavily on the ability to patch after the fact...
I agree on all points, and to build on the last: making a 2.0 or a complete software rewrite is known to be even more hazardous. There are no guarantees the new version is better in any regard. Which makes the expertise resemble that needed for other highly complex systems, like medical care.
Which is why we need to understand the patient, develop soft skills and empathy, follow the Agile manifesto... the list could go on. Not an easy task when you consider you're also likely to be fighting your execs' shiny-object syndrome and all the constant hype surrounding tech.
What concerns me the most is that a bridge, or road, or building has a limited number of environmental changes that can impact its stability. Software feels like it has an infinite number of dependencies (explicit and implicit) that are constantly changing: toolchains, libraries, operating systems, network availability, external services.
Yeah, I think safety factors and concepts like redundancy have pretty good counterparts in software. Slightly embarrassed to say I don't know what they are for my current project!
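As a toy illustration of what an explicit "safety factor" could look like in capacity planning - all numbers, names, and defaults here are made up, not from any real project:

```python
import math

def required_instances(peak_rps, per_instance_rps, safety_factor=2.0, spares=1):
    """Provision for expected peak load times an engineering-style safety
    factor, plus N+1-style spare capacity to tolerate instance failure."""
    base = math.ceil(peak_rps * safety_factor / per_instance_rps)
    return base + spares

# 900 req/s peak, 200 req/s per instance, 2x safety factor, one spare.
print(required_instances(peak_rps=900, per_instance_rps=200))  # -> 10
```

Civil engineers write this kind of margin into code (the legal kind); we mostly leave it to intuition, if we do it at all.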
Act like creating a merge-request to main can expose you to bankruptcy or put you in jail. AKA investigate the impact of a diff on all the failure modes of the software.
I’m not disparaging it, just actualizing it and sharing that thought. If you don’t understand that most modern “tools” and “services” are gamified, then yes I suppose I seem like a huge jerk.
The author literally talks about managing a team of multiple agents, and LLM services requiring the purchase of "tokens" is similar to popping a token into an arcade machine.
"Hacker culture never took root in the AI gold rush because the LLM 'coders' saw themselves not as hackers and explorers, but as temporarily understaffed middle-managers"
Also hacking really doesn’t have anything to do with generating poorly structured documents that compile into some sort of visual mess that needs fixing. Hacking is the analysis and circumvention of systems. Sometimes when hacking we piece together some shitty code to accomplish a circumvention task, but rarely is the code representative of the entire hack. LLMs just make steps of a hack quicker to complete. At a steep cost.