I'm recycling below a comment[1] I made a few days ago that is very similar in tone to what the article conveys in longer form. Unlike the author, I believe that the LLM revolution is indeed the most transformative change in technology that my generation and younger ones have witnessed.
Whether it's going to amount to much more remains to be seen. I do agree with the author there, but unlike him I'm somewhat hopeful, and perhaps more credulous.
1. "As someone born in 1975 I always felt until the last couple of years that I had been stuck in a long period of stagnation compared to an earlier generation. My grandmother who was born in the 1910s got to witness adoption of electricity, mass transit, radio, television, telephony, jet flights and even space exploration before I was born.
Feels like now is a bit of a catchup after pretty tepid period that was most of my life."
Yeah, it's quite sad where we landed. Circa 2004-2006, while the internet was mostly open and accessible, I mentally grouped "the internet" into two buckets. There was the real web plus Usenet plus email, and then there was "facebook" with its weird garden wall and exclusive invites or some such shit. I didn't think of facebook as being "on the web" even though it used the HTTP protocol. It was highly unusual then to have any web content behind a registration wall.
So hardly anyone considered facebook to be a part of "the web". It was its own weird duck. Twenty years later, most people only frequent this "weird" part of the internet - this limited ensemble of paid and unpaid walled gardens.
Your claim that 'hardly anyone considered facebook part of the web' is incorrect. Facebook became popular a bit after the web had already become quite mainstream, and the idea of signing up for online services was not foreign to most of those folks. Now, AOL/CompuServe and the like were more often considered non-web.
Yeah, those didn't count either. AOL and CompuServe were not even available outside the USA in the late nineties. With AOL I'm quite sure nobody considered it part of the web: its pages didn't have URLs early on, just AOL "keywords". CompuServe also wasn't using HTTP, I believe; it was some kind of commercial WAN that was pitched as a competitor to the internet, no?
Similarly with Twitter: I signed up in, I think, 2007, and only used it via SMS for the next several years, until they finally discontinued that. Once I switched to the web/app version I was frankly appalled.
And by lack of taste I don't mean McMansions. The entire country is a little bit of a corporate dystopia. It's the end result of capitalism running with very little restraint. Sure, lots of people make great paycheques. But cities look and feel like crap, lack good mass transit, lack human scale; public education is on the ropes; healthcare is rationed according to level of wealth rather than need; and people make individual choices that are just textbook cases of the Tragedy of the Commons. Good (at least in the short term) for them individually, and disastrous for society as a whole.
This is not terribly informative until expenses and safety nets are taken into account. Someone living in the Netherlands may have that 20% lower median income, but being able to rely on public healthcare and to get around without a personal vehicle does wonders for one's sense of peace and agency. That likely counts a lot more towards personal wellbeing than the additional dollars in your account, especially when health concerns can turn into financial concerns quite quickly.
The comment I am responding to is "because America's not rich; like 100 people here just have more money than most countries", not whatever you think I am responding to.
As someone born in 1975, I always felt until the last couple of years that I had been stuck in a long period of stagnation compared to an earlier generation. My grandmother, who was born in the 1910s, got to witness the adoption of electricity, mass transit, radio, television, telephony, jet flights and even space exploration before I was born.
Feels like now is a bit of a catch-up after the pretty tepid period that was most of my life.
I've been working with it for the last couple of hours. I don't see it as a massive change from the behaviours observed with Opus 4.6. It seems to exhibit similar blind spots: a very autist-like, one-track mind that won't consider alternative approaches unless actually prompted to. Even then it still seems to limit its lateral thinking to the centre of the distribution of likely paths. In a sense it's like a first-class mediocrity engine that never tires and rarely executes ideas poorly, but never shows any brilliance either.
But it did. What used to be unworthy of building is now totally doable. I can roll out one-off, ad-hoc UIs for our customer to navigate their bespoke data sets without worrying too much about expending a lot of development time to throw up a visualization page that may get discarded when its usefulness expires. So it has expanded the realm of the worthwhile, if not necessarily the realm of the possible. At least not yet.
I agree. AI clearly expanded the set of things that are worth building, especially small or previously unjustifiable work. What I was trying to say is that once those things become real systems, the old constraints show up again. You still have to understand boundaries, failure modes, and how to operate what got produced.
Yes, no denying this. In time this will change too, though. I'm slowly letting Claude do some targeted maintenance work in AWS in non-critical environments. Over time I will let it perform reversible operations in production, but that time has not come yet. And it still too often takes silly approaches to problems that have more elegant solutions. These days it has the personality of an ambitious, restless junior who always wants to get stuff done but doesn't have the depth of thought to just handle it all by itself. But the times they are a-changing.
Your advice beautifully reinforces OP's case.
We now have to use auxiliary apps with more predictable behaviour because designers made such a mess of things.
This fascinates me because I tend to use Claude Code in _very_ long sessions, driving it until its context window is exhausted, at which point I grab most of the session history from the terminal window and paste it right back into the context. This usually fills the context window right back up to 80-100K tokens. It seems a lot more successful, especially at keeping track of recent developments, than the built-in compaction does.
For some reason I get the best results this way. I know it's unorthodox, but with my approach the agent seems to learn about the ongoing concerns as it stays 'in the loop', and I prefer to have it use its minion agents to do grunt work like grepping sources or log files. That way the main context stays free of monotonous blobs. I like having it act as a coding/troubleshooting companion rather than a minion to delegate short bursts of work to. I believe it's because I rarely feed it a big block of data to parse in a single prompt, or let it grep incessantly in the main context, that I don't get hit by the dreaded 'context rot'.
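If anyone wants to replicate this, here's a minimal sketch of the re-injection step, assuming you've already dumped the terminal scrollback to a file. The ~4 characters/token ratio and the helper script itself are my own assumptions for illustration, not anything Claude Code ships with:

    # trim_transcript.py - keep roughly the last ~100K tokens of a dumped
    # Claude Code session so it can be pasted into a fresh context.
    import sys

    APPROX_CHARS_PER_TOKEN = 4  # rough heuristic, not the real tokenizer
    TARGET_TOKENS = 100_000     # matches the 80-100K refill described above

    def trim_transcript(path: str, target_tokens: int = TARGET_TOKENS) -> str:
        with open(path, encoding="utf-8", errors="replace") as f:
            text = f.read()
        budget = target_tokens * APPROX_CHARS_PER_TOKEN
        if len(text) <= budget:
            return text
        tail = text[-budget:]
        # cut at a line boundary so the pasted context starts cleanly
        return tail[tail.find("\n") + 1:]

    if __name__ == "__main__":
        sys.stdout.write(trim_transcript(sys.argv[1]))

Then something like "python trim_transcript.py session.log | pbcopy" (macOS; xclip on Linux) and paste the result straight into the new session.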
This little study seems to line up with my experiences.
However, when source material is fed into the context they lie less, right? So at this point isn't it just a battle of the nines until it's called "good enough"?
I also wonder: if I leave my secretary with a ream of papers and ask him for a summary, how many will he actually read and understand vs skim and then bullshit about? It seems like the capacity for frailty exists in both "species".
1. "As someone born in 1975 I always felt until the last couple of years that I had been stuck in a long period of stagnation compared to an earlier generation. My grandmother who was born in the 1910s got to witness adoption of electricity, mass transit, radio, television, telephony, jet flights and even space exploration before I was born. Feels like now is a bit of a catchup after pretty tepid period that was most of my life."
reply