An LLM is an algorithm. You could obtain the same result as a SOTA LLM with pen and paper; it would just take an enormous amount of long, laborious effort. That's one reason why LLMs do not have cognition.
They also don't reason, think, or do any of the other myriad things attributed to them. I hate the platitudes heaped on LLMs: it's at PhD level, it can now answer math olympiad questions. It answers them by statistical pattern recognition!
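To make "statistical pattern recognition" concrete, here is a minimal toy sketch: a bigram model that predicts the next word purely from co-occurrence counts. This is a vastly simplified stand-in for what an LLM does at scale (real models learn far richer statistics), but the principle of picking likely continuations from training data is the same.

```python
# Toy next-word prediction as pure statistics: a bigram model.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the rat".split()

# Count which word follows which in the training text.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict(word):
    """Return the most frequent continuation seen after `word`."""
    return bigrams[word].most_common(1)[0][0]

print(predict("the"))  # "cat" -- it followed "the" more often than "mat" or "rat"
```

No reasoning is involved: the model simply reproduces the statistically dominant pattern in its training data.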
If you read a little further in the article, the main point is _not_ that AI is useless, but rather that it is a regular technology, not AGI god-building. A valuable one, but not infinite growth.
> but rather that it is a regular technology, not AGI god-building. A valuable one, but not infinite growth.
AGI is a lot of things, a lot of ever moving targets, but it's never (under any sane definition) "infinite growth". That's already ASI territory / singularity and all that stuff. I see more and more people mixing the two, and arguing against ASI being a thing, when talking about AGI. "Human level competences" is AGI. Super-human, ever improving, infinite growth - that's ASI.
Whether and when we reach AGI is for everyone to decide for themselves. I sometimes like to think about it this way: how many decades would you have to go back before people from that time would call what we have today "AGI"?
Once you have AGI, you can presumably automate AI R&D, and it seems to me that the recursive self-improvement that begets ASI isn't that far away from that point.
We already have AGI - it's called humans - and frankly it's no magic bullet for AI progress.
Meta just laid 600 of them off.
All this talk of AGI, ASI, super-intelligence, and recursive self-improvement etc is just undefined masturbatory pipe dreams.
For now it's all about LLMs and agents, and you will not see anything fundamentally new until this approach has been accepted as having reached the point of diminishing returns.
The snake oil salesmen will soon tell you that they've cracked continual learning, but it'll just be memory, and still won't be the AI intern that learns on the job.
Maybe in 5 years we'll see "AlphaThought" that does a better job of reasoning.
Humans aren't really being put to work upgrading the underlying design of their own brains, though. And 5 years is a blink of an eye. My five-year-old will barely even be turning ten years old by then.
AlphaEvolve is a system for evolving symbolic computer programs.
Not everything that DeepMind works on (such as AlphaGo, AlphaFold) is directly, or even indirectly, part of a push towards AGI. They seem to genuinely want to accelerate scientific research, and for Hassabis personally this seems to be his primary goal; it might have remained his only goal if Google hadn't merged Google Brain with DeepMind and forced more of a product/profit focus.
DeepMind do appear to be defining, and approaching, "AGI" differently than the rest of the pack, who are LLM-scaling true believers, but exactly what their vision is for an AGI architecture, at varying timescales, remains to be seen.
Hassabis has talked about AGI in a lot of interviews, as have members of his DeepMind team, and of course current and former Alphabet employees - the most prominent being Schmidt. He definitely thinks it is coming and has said we should prepare for it. Just search for his interviews on AI and you'll find a bunch of them.
I'm not so sure. I, for one, do not think purely by talking to myself. I do that sometimes, but a lot of the time when I am working through something, I have many more dimensions to my thought than inner speech.
But Meta specifically needs returns from AI products to justify the capex. Google and Microsoft, for example, have profitable cloud businesses from which they can rent out GPU compute. Meta's bet is far riskier.
As the Facebook generation dies out, so does Facebook. I just don't see it. Meta will have to keep buying up competition and hope that the ad market stays a racket forever. The only reason Meta is still relevant is advertising, same as Google. Eventually enough people will recognize it for what it is: anti-competitive enshittification to preserve multi-billion-dollar companies whose products and services suck so badly you'd have a hard time paying people to use them if they were startups today.