I think you aren't understanding the meaning of the word bubble here. No one can deny the impact LLMs can have, but they still have limits.
And the term bubble is used here as an economic phenomenon. This is about the money OpenAI is planning to spend, which it doesn't have. So much money is being poured in here, but most users won't pay the outrageous sums that will actually be needed for these LLMs to run. The break-even point looks so far off that you can't even think about actual profitability.
After the bubble bursts, we will still have all the research that was done, the hardware left over, and smaller LLMs for people to run on-device.
the real innovation is that neural networks are generalized learning machines. LLMs are neural networks applied to human language. The implications of world models + LLMs will take them farther
The neural net was invented in the 1940s, and language models date back to the 1960s. It's 2025 and we're still using an 80-year-old architecture. Call me cynical, but I don't understand how we're going to avoid the physical limitations of GPUs and of the data we train AIs on. We've pretty much exhausted the latter, and the former is going to hit sooner rather than later. At that point we'll be left with an approach that hasn't changed much since WW2, and our only path forward will run straight into a physical limit.
Even in 2002, my CS profs were talking about how GAI was a long time off, because we had been trying for decades to innovate on neural nets and language models, and nothing better had been created despite some of the smartest people on the planet trying.
they didn't have the compute or the data to make use of NNs. but theoretically NNs made sense even back then, and many people thought they could give rise to intelligent machines. they were probably right, and it's a shame they didn't live to see what's happening right now
> they didn't have the compute or the data to make use of NNs
The compute and data are both limitations of NNs.
We've already gotten really close to the data limit: we aren't generating enough useful content as a species, and the existing stuff has all been slurped up.
Standard laws of physics restrict the compute side, just as they do with CPUs. Eventually you cannot pack heat-generating components any closer together, because they interfere with each other; miniaturization runs into hard physical limits.
No, GAI will require new architectures, and no one has come up with one in nearly a century.
We have evidence that general intelligence can be produced by a bunch of biological neurons in the brain, and modern computers can process similar amounts of data, so it's a matter of figuring out how to wire it up, as it were.
Despite being their namesake, biological neurons operate quite differently from artificial neural nets. I believe we have yet to successfully model even the nervous system of the nematode, with its paltry 302 neurons.
dude, who cares about data and compute limits. those can be solved with human ingenuity. the problem of creating a generalized learning algorithm has been solved. a digital god has been summoned
I'm old enough to have heard this before, once or thrice.
It's always different this time.
More seriously: there are decent arguments that LLMs have an upper bound of usefulness, and that we're not necessarily any closer to transcending it with a different AI technology than we were 10 or 30 years ago.
The LLMs we have, even if they are approaching an upper bound, are a big deal. They're very interesting and have lots of applications. Those applications might be net-negative or net-positive; it will probably vary by circumstance. But they might not become what you're extrapolating them into.
I love ChatGPT 5 and Claude, but they aren't as big of a deal as going from no internet to having the internet.
That, I think, is the entire mistake of this bubble. We confused what we do have with some kind of science fiction fantasy, and then worked backwards from that fantasy as if it were inevitable.
If anything, the lack of use cases is what is most interesting about LLMs. Then again, "AI" can do anything. Probabilistic language models? Kind of limited.