> Ever since ChatGPT came out, the tech world has jumped on a new hype train—just like it did before with crypto, NFTs, and the metaverse.
Personally, I eschewed all of those except LLMs; I'm convinced this one's for real. People can use "hype" to mean a number of things, though.
AGI by the end of the year? Hype.
Decimation of white-collar jobs? Hype.
Fundamentally new paradigm and tech the world will have to adapt to? Not hype at all.
> Eventually, these companies tried something new: agents.
Yeah, that one's still on the hype shelf for me.
> This floods social media and websites with low-quality, copy-paste content.
No! Welp, there goes social media. ;-)
> ChatGPT has around 500 million weekly active users, but only around 20 million of them actually pay for a subscription. That means the vast majority of people think it’s not worth $20 a month.
You could say the same for YouTube (where the free-to-paid ratio is likely even more lopsided).
When you offer a free version, don't be surprised if most users (meekly raises hand) save their pocket money for something else; 20 million out of 500 million is roughly a 4% conversion rate, which is par for the course for freemium products. These are early days, and people are still sussing out what the thing can do for them.
> To me, Apple stands out. They aren’t trying to build a chatbot that knows everything. Instead, they’re focused on using AI as a tool to help people interact with their apps and data in smarter ways.
That feels like Apple in gap-filling mode: trying to show the world they're doing something while smart people are trying to figure out what Apple really ought to be doing.
They design their own silicon, so they could perhaps build a dedicated LLM client architecture that runs on-device. It makes you wonder what Jony Ive (and investors) could possibly be thinking when Apple could easily pivot and own the aiPhone market.
Waiting for my own aiPhone someday — with encrypted history saved to my cloud account. Wondering what it will be like for future generations who will have had a personal confidant for decades — since they were teenagers…
This is the appropriate view, but the market's gone all in on hype, so we must deal with the consequences when the hype doesn't deliver. It was the same for me with the blockchain/web3 craze: I saw interesting, game-changing applications, but nothing anywhere near what the market was going nuts for. The funniest thing is that hype train seamlessly transitioned to "AGI" without batting an eye or an ounce of shame.
The area I was briefly interested in was the fintech/lending space. Not an expert, though. I saw some cool ideas come out of it at the time and went through several interview rounds with a company in that space.
This hype cycle is definitely more like the original internet hype cycle of the late 90s, back when "the high street was going to disappear", "we'd all be hanging out in VR", or the internet would "democratize", well, anything.
Clearly a lot did change, but most of the bolder predictions still ended up not coming true.
Well, I think that the vast majority of those predictions about the internet did come true. My main evidence is the extent to which the COVID lockdowns didn't harm the productivity of the typical company: the fact that all of the operations just moved online would have been unimaginable in the early 90s, except by avid sci-fi readers.
I would have loved to see somebody predict that mass WFH would only be possible within the confines of a pandemic and would be shut down shortly after.
Or that, minus a few exceptions like Blockbuster, the majority of high street/mall stores would still exist in spite of online shopping being given more favourable tax treatment.
Or that democratic institutions would end up being eroded by the toxic spam that popped up when the barrier to entry for publishing was lowered.
So much so that I'm somewhat surprised I keep reading articles that say, essentially, "Don't believe market-speak from someone who is trying to sell you something."
This. AI replacing all software engineers? Hype. Me, an SE with 20 years of experience, using AI to do the heavy lifting of refactoring 2,500 code points and the corresponding tests so that I can focus on what the correct solution is? That works very nicely.
Or just last week, I managed to do some research into how (we think) the brain works, and draft a position paper based on the research and my thoughts in a day or two, instead of weeks.
I don't care about AGI, or whether LLMs might approach that, or whether LLMs 'are just autocompletion'. It lets me focus on the things that matter more in my work. It still has a long way to go, but Claude Code is pretty decent at doing menial jobs for me already.
It's a power saw or a screw gun, but not a Tunnel Boring Machine. When the task is big enough, it goes a little off the rails. With guidance and persistence, you can churn out a lot of code with barely more than supervisory effort.
I somewhat agree with both your and the GP's perspectives. It's getting more hype than it has earned, as is the promise that this path leads to AGI, given that 10× larger models yield diminishing returns on performance. But it's not vaporware: it can produce fluent text faster and cheaper than humans, so it doesn't go in the "why are they buying?" bin with NFTs.
The questions getting lost in the middle are "do we need to churn out even more code and trust that it's been reviewed?" and "is using this going to semi-permanently disable most of the knowledge workers?"
If there's even a chance that my executive functions and related mental faculties are degraded by using LLMs, then I would rather not. I try it a little and keep a finger on the pulse of the community that is going all-in on it. If it does transform into something that's 99% accurate, with a knob letting me dictate the volume of output, I'll put more effort into learning how to hold it. And hopefully by then we'll be able to confirm or refute any of the long-term side effects.
Agreed. But the OP's point is that LLMs are not being presented as a productivity tool for developers, in the vein of an advanced IDE. They're being presented as the "solution to everything".
Here’s the problem: I don’t believe you. And if I don’t, many others must also not believe your productivity claims.
Why should that matter? Because the more people automatically associate the use of AI in software engineering with professional fraud, the more likely they are to assume that the genuinely good work you're doing is fake, as soon as you admit you're using it.
The more grandiose the productivity claims are, the less likely it is that they could have been tested in a responsible way.
The only thing that could save you is if enough devs clap their hands to keep Tinkerbell alive: a conspiracy of fellows who promise not to ask uncomfortable questions about the fairies who do your job for you.
>Fundamentally new paradigm and tech the world will have to adapt to? Not hype at all.
I generally agree with your statements. But I personally tend to think of the various ML flavors as only a natural evolution of the same Turing/von Neumann paradigms. Neural networks simulate aspects of cognition but don't redefine the computation model: they are vectorized functions, optimized using classical gradient descent on finite machines.
Training and inference pipelines are composed of matrix multiplications, activation functions, and classical control flow—all fully describable by conventional programming languages and Turing Machines. AI, no matter how sophisticated, does not violate or transcend this model. In fact, LLMs like ChatGPT are fully emulatable by Turing machines given sufficient memory and time.
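To make that concrete, here's a minimal numpy sketch (all names, shapes, and the learning rate are invented for illustration) of one training step of a tiny two-layer network: the whole thing is matrix multiplications, an elementwise activation, and a plain gradient descent update.

```python
# Hypothetical toy example, not any production model: one training step
# of a tiny two-layer network, to show the pipeline reduces to matrix
# multiplications, elementwise activations, and ordinary control flow.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 8)) * 0.1   # first-layer weights
W2 = rng.normal(size=(8, 1)) * 0.1   # second-layer weights
x = rng.normal(size=(16, 4))         # a batch of 16 inputs
y = rng.normal(size=(16, 1))         # regression targets
lr = 0.01                            # learning rate

# Forward pass: matmul -> activation -> matmul.
h = np.maximum(x @ W1, 0.0)          # ReLU activation
pred = h @ W2
loss = np.mean((pred - y) ** 2)      # mean squared error

# Backward pass: the chain rule, again just matmuls and elementwise ops.
d_pred = 2.0 * (pred - y) / len(x)
d_W2 = h.T @ d_pred
d_h = d_pred @ W2.T
d_W1 = x.T @ (d_h * (h > 0))         # ReLU gradient gates the signal

# Classical gradient descent update.
W1 -= lr * d_W1
W2 -= lr * d_W2
```

Real LLM stacks add attention, normalization, and massive parallelism on top, but none of that steps outside this computational model.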
(*) Not playing the curmudgeon here, mind you, only trying to keep the perspective, as hype around "AI" often blurs the distinction between paradigm and application.
You're right. I meant "new paradigm" though more in regard to its societal adoption — not that the number crunching going on in the GPUs was some new tech.
Umm, it's a pretty fundamental thesis (the Church–Turing thesis, which is assumed to be true) that all computation is equivalent to Turing machines. Unless we're very wrong about the fundamentals of CS here, we'll never see anything that is more powerful, computationally speaking, than a Turing machine.
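To make the model concrete, here's a toy sketch (the machine, rule names, and blank symbol are all hypothetical, just for illustration): a finite rule table driving a head over a tape is all a Turing machine is, and the claim is that every effective computation reduces to this.

```python
# A toy Turing machine stepper (hypothetical example, for illustration).
def run_tm(rules, tape, state="start", head=0, max_steps=10_000):
    """rules maps (state, symbol) -> (new_state, new_symbol, move)."""
    tape = dict(enumerate(tape))          # sparse tape; "_" is the blank
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, "_")
        state, tape[head], move = rules[(state, symbol)]
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape))

# Example rule table: flip every bit, then halt at the first blank.
flip = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}
print(run_tm(flip, "1011"))  # -> "0100_"
```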