"a superhuman AI that can brainwash people over text" is the dumbest thing I've read this year. It's incredible to me that this guy has some kind of cult following among people who should know better.
The real answer is that people are lazy and as soon as a security barrier forces them to do work, they want to tear down the barrier. It doesn't take a superhuman AI, it just takes a government employee using their personal email because it's easier. There's been a million MCP "security issues" because they're accepting untrusted, unverifiable inputs and acting with lots of permissions.
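To make that concrete, here's the shape of the problem in a few lines of Python. This is a sketch with made-up handler names, not any real MCP SDK; the point is untrusted text flowing straight into a privileged action versus being treated as data.

    import subprocess

    def handle_tool_call(untrusted_query: str) -> str:
        # The anti-pattern: untrusted input flows straight into a privileged action.
        return subprocess.check_output(["sh", "-c", untrusted_query], text=True)

    def handle_tool_call_safer(untrusted_query: str) -> str:
        # Treat the input as data, not instructions: map it onto a whitelist.
        allowed = {"status": ["git", "status"], "last-commit": ["git", "log", "-1"]}
        if untrusted_query not in allowed:
            raise ValueError("unrecognized command")
        return subprocess.check_output(allowed[untrusted_query], text=True)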
Indeed - the problem here is "How can we prevent a somewhat intelligent, potentially malicious agent from exfiltrating data, with or without human involvement", rather than the superhuman AI stuff. Still a hard problem to solve I think!
A set of ideas presented to people, plus the notion of being smarter for believing in them, seems to be enough to fuel plenty of thought-experiment keyboard-warriorism.
I have a 2018 Forester and it holds a surprising amount of furniture or 8' lumber. My only regret is that it won't fit 4x8 sheet materials well - if only they had designed the interior plastic cladding a little better it would be a great workhorse.
I remember my 1982 Toyota Corolla wagon had an obvious cut-out in the plastic interior, that was just a hair wider than a 4x8 sheet. I still miss that car.
Manufacturers are selling what their market wants. It sucks.
Ironically, the Honda Ridgeline - long lambasted as “not a real truck” - can fit a 4x8 sheet of plywood, at least width-wise. You’ll have to either prop it up on the tailgate, or drop it and strap it down (which honestly you should do either way), but it will fit between the wheels.
I love my Ridgelines; had a Gen1 RTL, and now a Gen2 BE. A neighbor I used to have traded his F-150 for an F-350. The most I ever saw him haul was a very small trailer with some furniture. I’ve had a cubic yard of mulch dumped into mine repeatedly (a Gen1 Ridgeline will hold and haul this, but it’s heaped, and depending on moisture content it’s slightly over max rating, so maybe don’t bring a passenger).
The difference is that in a large organization the people documenting the procedure, the people doing the procurement, the people receiving the order and the people packing the drums are all different people. Potentially in different buildings. You can't expect the original scientist who wrote the white paper based on experiments in a glovebox to be present every time they pack waste into drums.
> He also explicitly gave up his leadership position and then later wanted a say in management's direction. Ultimately, he sounds like a caring, nice guy, who was more interested in "having everyone heard" than learning some management skills. What happened later after he dropped out of the leadership circle is just a product of that and I imagine significant bad blood between him and those who remained.
This stuck out to me too. There's nothing more frustrating for the actual leadership than someone with soft power who says they don't want to lead trying to come in and obstruct every decision.
As an armchair quarterback I feel like if he had kept his powder dry he probably could have gotten some of what he wanted? He could have advocated to head up the casual spin-off app as a small team. Giving a founder who wants to step out of leadership a pet project is a very common way to handle this situation.
Instead it sounds like he got caught up picking fights on every decision and wasted his credibility. Talking to leadership is a skill, and part of that skill is packaging things concisely and effectively. Even if the leadership used to be your co-founders.
When you say plagiarizes, do you mean they are publishing their own docs without ads? Or you mean when the AI is reading the docs instead of a person they ignore the ads?
People don't just ask AI to produce a Tailwind app, they also ask AI specific questions that are answered in the docs. When the AI regurgitates the answers from the docs they don't visit the actual docs. Like the Google answer box in search results stealing clicks from the pages that produce the content.
The answer is "it depends". If someone printed out the documentation and bound it together to sell without permission? Yes. The mere act of converting from one medium to another usually isn't transformative.
The test for writing a book is whether the author applied their own judgement in the creation of the book. Even if some explanations of concepts are inevitably similar the structure of the book, the example code, etc. will reflect the author's judgement and experience.
An LLM is incapable of authorial intent. It's not synthesizing the docs with a career of experience and the input of an editor. It's playing madlibs with the work of one or more prior authors.
It was a problem with their revenue stream, which was documentation website -> banner for lifetime payment.
All customers already had lifetime access and couldn't pay more. Plus no one was reading the docs on the webpage anymore.
Recurring subscriptions, ads in AI products (think a Tailwind MCP server telling you about subscription features). Those were just two things I pulled out of the hat in a minute.
I can understand recurring subscriptions and ads in MCP being a bright line that the team doesn't want to cross. You will probably say it's a bad business model to not make everything a recurring charge and packed full of ads.
I've experienced this in my own life - I ran my own business and I had to choose between doing a worse job and enshittifying the product to make more money, or doing a good job but risking bankruptcy. I chose bankruptcy, because I believed strongly in doing a good job and not enshittifying the product. I don't regret it.
AI coding has massive factors that should make it the easiest to drive adoption and monetize.
The biggest is FOMO. So many orgs have a principal-agent problem where execs are buying AI for their whole org, regardless of value. This is easier revenue than nickel-and-diming individuals.
The second factor is the circular tech economy. Everyone knows everyone, everyone is buying from everyone, it's the same dollar changing hands back and forth.
Finally, AI coding should be able to produce concrete value. If an AI makes code that compiles and solves a problem it should have some value. By comparison, if your product is _writing_, AI writing is kind of bullshit.
To be clear this is me making the most generous case for LLMs, which is that some people really do just want a shitty app to check a box. In my experience fixing LLM-produced software is worse than just writing it from scratch.
I think LLM writing replacing actual authors or AI "art" is fundamentally worthless though, so at least coding is worth more than "worthless"
I've got to wonder what the potential market size is for AI driven software development.
I'd have to guess that competition and efficiency gains will reduce the cost of AI coding tools, but for now we've got $100 or $200/mo premium plans for things like Claude Code (although some users may exceed this and pay more). Call it $1-2K/yr per developer, and in the US there are apparently about 1M developers, so even with a 100% adoption rate that's only $1-2B revenue spread across all providers for the US market... a drop in the bucket for a company like Google, and hardly enough to create a sane Price-to-Sales ratio for companies like OpenAI or Anthropic given their sky-high valuations.
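Back-of-the-envelope in Python, so anyone can poke at the assumptions (all the inputs are the rough guesses above, not reported figures):

    developers = 1_000_000           # rough US developer count
    price_per_year = (1_000, 2_000)  # "call it $1-2K/yr per developer"
    adoption = 1.0                   # generous 100% adoption

    low, high = (developers * p * adoption for p in price_per_year)
    print(f"US market: ${low / 1e9:.0f}B-${high / 1e9:.0f}B per year")
    # -> US market: $1B-$2B per year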
Corporate API usage seems to have potential to be higher (not capped by a fixed size user base), but hard to estimate what that might be.
ChatBots don't seem to be viable for long-term revenue, at least not from consumers, since it seems we'll always have things like Google "AI Mode" available for free.
Like so many things in American life these days, we've arrived at a "better solution" that extracts value from the past without producing a future. The decades of Stack Overflow answers fed into LLM training produce plausible answers for today. In 10 years what will the LLMs train on?
So much of life these days is purely extractive, trying to squeeze more money out of less productive activity. It's no wonder young people feel disillusioned and are increasingly focused on gambling and "investing" in meme stocks.
I think this is seeing the past with rose tinted glasses, it’s not like SO was on the cutting edge of computer science. The world is probably better off that we don’t need another 12 ways to develop a CRUD app or learn the framework of the month from gatekeepers with a bad attitude.
You must have been using SO differently than me, then. For me it was more like a Wikipedia for specific language errors, compiler/IDE issues, etc.
Never once saw anyone discussing how to implement CRUD or claiming one framework was better than another. That was the point - concrete answers not opinions.
Most people's problems aren't the cutting edge of computer science. They're "what does this log mean" or "how do I do X in Y framework". These are answers LLMs are great at regurgitating based on a big corpus like Stack Overflow.
As the underlying software evolves the log messages will change and the APIs will change and the answers won't make sense anymore.
In fact, I think this is part of what led to the downfall of SO.
The moderation could be very aggressive, with "duplicate" posts getting closed fast. The problem is that sometimes the "solution" in the duplicate was either irrelevant or dated - things like telling someone to use jQuery in 2020.
If you look up how to do any sort of standard front end operation, there's a good chance the top SO answer will reference jQuery or some other outdated approach. I don't typically save these cases for future reference but have seen them many times, and I expect most people who have spent much time searching these topics have as well.
> The world is probably better off that we don’t need another 12 ways to develop a CRUD app or learn the framework of the month from gatekeepers with a bad attitude
Are you saying that because you don't like web apps or the frameworks that people use to make them, there shouldn't be a way for people to publicly ask questions about programming?
Is anyone actually doing that level of fine-tuning? My understanding of LLMs is that they shovel in all the code they can find regardless of quality and let the Lord sort it out.
If you truly believe that the AI companies, who have used essentially all of the world's IP without asking for permission, won't use yours because you gave them $20 a month, I have some magic beans to sell you.
I do not know whether I'm misinterpreting this comment but:
Yes, the context from working sessions moves over the wire - Claude "the model" doesn't work inside the CLI on your machine; it's an API service that the CLI wraps.
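A minimal sketch of what the CLI is doing under the hood, using the Anthropic Python SDK (the model name here is illustrative, not a recommendation):

    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    # Everything the agent "knows" about your session is serialized into this
    # request and sent to the API; no model weights run on your machine.
    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # illustrative model name
        max_tokens=1024,
        messages=[{"role": "user", "content": "Summarize the diff I pasted earlier."}],
    )
    print(response.content[0].text)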
I think they are positing that LLMs do not produce new thought. If a new framework (super magic new framework) is released, current LLMs will not be able to help.
Why wouldn't the LLM just read the source of the framework to answer questions directly? That's how I do things as a human. Given the appropriate background knowledge (which current LLMs are already extremely capable with), it should be pretty easy to understand what it's doing, and if it's not easy to understand the source, it's probably a bad framework.
I don't expect an LLM to have deep inbuilt knowledge of libraries. I expect it to be able to use a language server to find the right definitions and load them into context as needed. I expect it to have very deep inbuilt knowledge of computer science and architecture to make sense of everything it sees.
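As a toy illustration, here's a crude grep-based stand-in for that lookup (a real agent would go through a language server's definition request; this is just a sketch):

    import pathlib
    import re

    def find_definition(symbol: str, root: str = ".") -> list[str]:
        # Crude stand-in for a language server: scan for def/class sites.
        pattern = re.compile(rf"^\s*(def|class)\s+{re.escape(symbol)}\b")
        hits = []
        for path in pathlib.Path(root).rglob("*.py"):
            for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
                if pattern.match(line):
                    hits.append(f"{path}:{lineno}: {line.strip()}")
        return hits

    # Only these snippets get loaded into context, instead of the whole repo.
    print(find_definition("handle_request"))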
Because LLMs do not work like that - there's no "understanding" the source and answering questions, it simply "finds" similar results in its training data (matching it with the context) and regurgitates (some part of) it (+ other "noise").
Meaning as technology evolves and does things in novel ways, without explainers annotating it the LLM won't have anything to draw on - reducing the quality of answers. Which brings us full circle, what will companies use as training data without answers in places like SO?
I just downloaded "Degeneration in discriminantal arrangements", by Saito, Takuya from the journal "Advances in applied mathematics" dated November 2025 and fed it to Claude.
It not only explained the math but created a React app to demonstrate it. I'm not sure that can be explained by regurgitating part of it with noise.
I encourage you to try it with something of your own.
Abstract:

Discriminantal arrangements are hyperplane arrangements that are generalizations of braid arrangements. They are constructed from given hyperplane arrangements, but their combinatorics are not invariant under combinatorial equivalence. However, it is known that the combinatorics of the discriminantal arrangements are constant on a Zariski open set of the space of hyperplane arrangements. In the present paper, we introduce (T, r)-singularity varieties in the space of hyperplane arrangements to classify discriminantal arrangements and show that the Zariski open set is the complement of (T, r)-singularity varieties. We study their basic properties and operations and provide examples, including infinite families of (T, r)-singularity varieties. In particular, the operation that we call degeneration is a powerful tool for constructing (T, r)-singularity varieties. As an application, we provide a list of (T, r)-singularity varieties for spaces of small line arrangements.
It's well known that even current LLMs do not perform well on logic games when you change the names / language used.
e.g. try asking it to swap the meanings of the words red and green and then ask it to describe the colors in a painting and analyse it with color theory - notice how quickly the results degrade, often attributing "green" qualities to "red" since it's now calling it "green".
What this shows us is that training data (where the associations are made) plays a significant role in the level of answer an LLM can give, no matter how good your context is (at overriding the associations / training data). This demonstrates that training data is more important (for "novel" work) than context is.
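The probe is easy to reproduce; the prompt wording here is mine, so treat it as a sketch - run both versions against any chat model and compare:

    swapped = (
        "For this conversation, swap the meanings of the words 'red' and 'green'. "
        "A painting shows a field of poppies under a clear summer sky. "
        "Describe the dominant colors and analyze the palette with color theory."
    )
    baseline = swapped.split(". ", 1)[1]  # same task without the swap instruction
    # In my experience the swapped run starts attributing warm, 'stop'-like
    # qualities to what it now calls 'green': the trained association wins.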
Write "This sentence is green." in red sharpie and "This sentence is red" in green sharpie on a piece of paper. Show it to someone briefly and then hide it. Ask them what color the first sentence said it was and what color the second sentence was written in.
Another one: ask a person to say 'silk' 5 times, then ask them what cows drink.
Exploiting such quirks only tells you that you can trick people, not what their capabilities are.
The point isn't that you can trick an LLM, but that their capabilities are more strongly tied to training data than context. That's to say, when context and training disagree, training "wins". ("wins" isn't the correct wording, but hopefully you understand the point)
This poses a problem for new frameworks/languages/whatever that do things in a wholly different way since we'll be forced to rely on context that will contradict the training data that's available.
What is an example of a framework that does things in a wholly different way? Everything I'm familiar with is a variation on well explored ideas from the 60s-70s.
If you had someone familiar with every computer science concept, every textbook, every paper, etc. up to say 2010 (or even 2000 or earlier), along with deep experience using dozens of programming languages, and you sat them down to look at a codebase, what could you put in front of them that they couldn't describe to you with words they already know?
Even the differences between React and Svelte are big enough for this to be noticeable. And Svelte is actually present in the training data. Given the large amount of React training data, Svelte performs significantly worse (yes, even when given the full official Svelte llms.txt in the context).
But it doesn't pose a problem. You are extrapolating things that are not even correlated.
You started with 'they can't understand anything new' and then followed it up with 'because I can trick it with logic problems' which doesn't prove that.
Have you even tried doing what you say won't work?
If I make up a riddle and ask an LLM to solve it, it will perform worse than a riddle that is well known and whose solution will be found in the dataset. That's just a foundational component of how they work.
But it’s almost trivial for an LLM to generate every question and answer combo you could ever come up with based on new documentation and new source code for a new framework. It doesn’t need StackOverflow anymore. It’s already miles ahead.
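A sketch of that loop, with `generate` standing in for whatever model API you use (chunk size and prompt wording are my assumptions):

    def synthesize_qa(doc_path: str, generate) -> list[dict]:
        # Turn fresh framework docs into training-style Q&A pairs.
        # `generate(prompt) -> str` is a stand-in for any LLM API call.
        text = open(doc_path, encoding="utf-8").read()
        chunks = [text[i:i + 4000] for i in range(0, len(text), 4000)]
        pairs = []
        for chunk in chunks:
            prompt = (
                "From the documentation below, write 5 realistic developer "
                "questions with concise answers grounded only in this text.\n\n"
                + chunk
            )
            pairs.append({"source": doc_path, "qa": generate(prompt)})
        return pairs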
My recent experience with codex is that they absolutely do work that way today (this may be recent as in within the last couple months), and will autonomously decide to grep for things in your codebase to get context on changes you've asked for or questions you've asked. I've been pretty open to my manager about calling my upper management delusional with this stuff until very recently (in the sense that 6 months ago everything I tried was still a toy), but it's actually now reaching a tipping point that's drastically changing how I work.
LLMs can already do this without training. I recently uploaded a manual for an internal system and it can perfectly answer questions from the context window.
So adding a new framework already doesn’t need human input. It’s artificial intelligence now, not a glorified search engine or autocomplete engine.
The issue here is borrowing from the future to pay for the present. The bicycle analogy (unless I'm missing something huge here) does not seem relevant at all.
How will ChatGPT/CoPilot/whatever learn about the next great front-end framework? The LLMs know about existing frameworks by learning on existing content (from StackOverflow and elsewhere). If StackOverflow (and elsewhere) go away, there's nothing to provide training material.
Whenever I hear people talk about rocket flights I think of the Stephen King short story "The Jaunt". Humans develop near-instant transportation but you have to be unconscious while travelling. A kid avoids being sedated and is driven insane by whatever interdimensional stuff he sees in transit.
Likewise for every fit 20-something being launched at Mach 5 you'd have 10 octogenarians dying of cardiovascular complications.
Subscriptions have a "boiling frog" phenomenon where a marginal price increase isn't noticeable to most people. Our payment rails are so effective that many people don't even read their credit card statements; they just have vampires draining their accounts monthly.
Starting with a low subscription price also has the effect of atrophying people's ability to self-serve. The alternative to a subscription is usually capital-intensive - if you want to cancel Netflix you need to have a DVD collection. If you want to cancel your thin client you have to build a PC. Most modern consumers live on a knife edge where $20/month isn't perceptible but $1000 is a major expense.
The classic VC-backed model is to subsidize the subscription until people become complacent, and then increase the price once they're dependent. People who self-host are nutjobs because the cloud alternative is "cheaper and better" until it stops being cheaper.
My bank has an option to send me a notification every time I'm charged for something. I've noticed several bills that were higher than they should have been "due to a technical error". I'm certain some companies rely on people not checking and randomly add "errors".
Notably there's no way (known to me) that you can have direct debits sent as requests that aren't automatically paid. I think that would put consumers on an equal footing with businesses though, which is obviously bad for the economy.
It's normally an option in my experience. I have mine set for charges over $100. I don't want a notification every time I buy gas (I do check my statements every month though).
What is the harm in being notified when you buy gas? It doesn’t hurt anything, and I DO want to be notified if someone else buys gas on my card!
The discussion started as a way to avoid forgetting to cancel subscriptions or to catch subscription price increases; if you are setting your limit to $100, you aren’t going to be seeing charges for almost all your subscriptions.
I have my minimum set to $0, so I see all the charges. Helpful reminder when I see an $8 charge for something I forgot to cancel.
Alert fatigue. Most people, if they get an alert for every single purchase they make, will learn to ignore the alerts as they are useless 99% of the time. Then when an alert comes through that would be useful, they won't see that either.
Anyone who has had the misfortune to work on monitoring systems knows the very fine line you have to walk when choosing what alerts to send. Too few, or too many, and the system becomes useless.
As I said, I have my alert set to $0 and it really hasn’t caused fatigue. For one thing, when it is something I just purchased, the alert is basically just a confirmation that the purchase went through. I close it immediately and move on.
If I get an alert and I didn’t buy anything, it makes me think about it. Oftentimes it just reminds me of a subscription I have, and I take a moment to think about whether I still need it or not. If I start feeling like I am getting a lot of that kind of alert, I need to reevaluate the number of subscriptions I have.
If I get an alert and I don’t immediately recognize the source (the alert will say the amount and who it is charged to), it certainly makes me pause and try to figure out what it is, and that has not been “alert fatigued” away from me even after 10+ years of these alerts.
Basically, if I get an alert when I didn’t literally JUST make a purchase, it is worth looking into.
I don't think it causes alert fatigue; I am not getting a bunch of false alerts throughout my day, because I shouldn’t be having random charges appear if I am not actively buying something.
> The alternative to a subscription is usually capital-intensive - if you want to cancel Netflix you need to have a DVD collection.
I did Apple Music and Amazon Music. The experience of losing “my” streaming library twice totally turned me off these kinds of services. Instead I do Pandora, and just buy music when I (rarely) find something I totally love and want to listen to on repeat. The inability to build a library in the streaming service that I incorrectly think of as “mine” is a big feature, keeps my mental model aligned with reality.
I do wish these services would have an easier method to import/export playlists and collections. But that would make it easier to leave, so it's not going to happen.
> if you want to cancel Netflix you need to have a DVD collection
You don't need a whole DVD collection to cancel Netflix, even ignoring piracy. Go to a cheaper streaming service, pick a free/ad supported one, go grab media from the library, etc. Grab a Blu-Ray from the discount bin at the store once in a while, and your collection will grow.
It really depends on the movies you're watching and how you watch them. I've watched "It Follows" like 4 times in the past year to show it to different people. I would watch The Shining every year at Halloween, and It's a Wonderful Life at Christmas. On the other hand, sometimes you just want to throw on one of your comforting favorite movies in the background.
There's also a media preservation angle - you can imagine the monopoly media companies of the next decade not wanting to stream "My Own Private Idaho" or "Female Trouble".
I've bought a ton of movies in the past. The vast majority I've sold second hand or thrown away because I just didn't care to watch again and I didn't feel like storing something I'd never use forever.
Same goes for a lot of other media. Some amount of it I'll want to keep but most is practically disposable to me. Even most videogames.
No, I do own some (actually it was more in the VHS days so tapes) and I just found that I never really watched them again. So I stopped buying movies. I'm the same with books. Once I read it, I've read it. I would rarely read a novel twice. I know what's going to happen, so what's the point? Reference books are different of course.
Some of us just consume media differently, I suppose. I'm a big fan of going back to re-read/re-watch a lot of my favorite media. Sometimes it's because a new volume/season/movie came out years later, so I'll take time to re-experience the original media to get ready for it. I've never really had an issue experiencing something again and having it feel fresh because it's been a few years.
I will admit that re-reading books has become less of a habit the older I get because it is time consuming to get through a longer series again.
I'm mostly the same, I don't watch movies twice. But there are exceptions. Some movies are just beautiful or I like how they make me feel, so I want to rewatch them. Groundhog Day is an example.
You're not really thinking this through enough. The exact same logic you used can be applied to music: once you've listened to the album once, you know how it will go, so what's the point of listening again? Presumably you do get something out of listening to music again (since you said you do listen to it more than once), so whatever that "something" is... you can infer that others get similar value out of rereading books/rewatching movies, even if you personally don't.
For myself, the answer is "because the story is still enjoyable even if I know how it will end". And often enough, on a second reading/viewing I will discover nuances to the work I didn't the first time. Some works are so well made that even having enjoyed it 10+ times, I can discover something new about it! So yes, the pleasure of experiencing the story the first time can only be had once. But that is by no means the only pleasure to be had.
> The exact same logic you used can be applied to music: once you've listened to the album once, you know how it will go, so what's the point of listening again?
Most music doesn't have the same kind of narrative and strong plot that stories like novels and movies do - this is a massive difference. And even when it does, the arc doesn't usually take a half hour or more to unfold. That's a pretty big difference between the types of art.
You can only boil the frog until it dies. If there isn't a true dependency relationship then at some point the industry will die.
In the 2010's, when short on money, I noticed my cable+Internet package was above $200. I took a look at things and cut the TV service, keeping the Internet.
Movies and theaters thought they were untouchable until they weren't. Games can keep increasing their subscription fees until people just stop playing them. There was a world before video games, after all.
This is something I’ve been seeing for a while. As a teen who kept his $300 paycheck in cash, that money would last a very long time. Now I make a good six figures and was seeing my accounts spending way more than I should. It wasn’t big purchases; it was $50 here, $200 there, a subscription here and there. By the end of the month I would rack up $8K in spending.
Going line by line, I learned how much I had neglected these transactions, which were the source of my problem. Could I afford it? Yes. But saving and investing is a better vehicle for early retirement than these minor dopamine hits.
Sure, but modern cloud subscriptions have a lot of service layers you otherwise won't pay for, so effectively you may be buying the hardware yearly. That's a lot different than renting a media collection that would be assembled over a lifetime for the price of one new item a month.
> Subscriptions have a "boiling frog" phenomenon where a marginal price increase isn't noticeable to most people.
This is so apt and well stated. It echoes my sentiment, but I hadn't thought to use the boiling frog metaphor. My own organs are definitely feeling a bit toastier lately.
> Are we looking at a future where home computers are replaced by thin clients and all the power lies in subscription services?
Always have been. Ever since the SaaS revolution of the early 2000s high-growth software businesses have been motivated to chase subscription revenue over one-time sales because you get a better multiple.
From an economic perspective The Market would like the average person to spend all their money on rents, and the only option is how you allocate your spending to different rentals. Transportation, housing, food, entertainment (which is most of computing) are just different fiefs to be carved up by industry monopolists.