Hacker News | hagbarth's comments

Ah yes, proving a negative. What makes you sure a stone is not capable of cognition?

An LLM is an algorithm. You could obtain the same result as a SOTA LLM with pen and paper; it would just take a lot of long, laborious effort. That's ONE reason why LLMs do not have cognition.
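To make the pen-and-paper point concrete, here is a toy sketch (all weights and the three-word vocabulary are invented for illustration, not from any real model) showing that a next-token prediction step is nothing but multiplication, addition, and exponentiation. A real LLM does the same kind of arithmetic with billions of parameters:

```python
import math

vocab = ["the", "cat", "sat"]

# Hypothetical 2-dimensional embedding for each token.
embed = {"the": [1.0, 0.0], "cat": [0.0, 1.0], "sat": [1.0, 1.0]}

# Output projection: 2 inputs -> 3 logits, one per vocabulary word.
W = [[0.2, 1.5, 0.1],
     [0.3, 0.2, 1.4]]

def next_token(token):
    x = embed[token]
    # Matrix multiply: logits[j] = sum_i x[i] * W[i][j]
    logits = [sum(x[i] * W[i][j] for i in range(2)) for j in range(3)]
    # Softmax turns logits into probabilities.
    exps = [math.exp(l) for l in logits]
    probs = [e / sum(exps) for e in exps]
    # Greedy decoding: pick the most probable next token.
    return vocab[max(range(3), key=lambda j: probs[j])]

print(next_token("cat"))  # -> sat
```

Every step here is hand-computable arithmetic; scale is the only thing separating it from a frontier model's forward pass.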

Also, they don't reason, or think, or do any of the other myriad things attributed to LLMs. I hate the platitudes given to LLMs: it's at PhD level, it can now answer Math Olympiad questions. It answers them by statistical pattern recognition!


Revenue is all up and, as far as I can see, beating expectations.

> People think that because AI cannot replace a senior dev, it's a worthless con.

Quite the strawman. There are many points between “worthless” and “worth hundreds of billions to trillions in investment”.


If you read a little further in the article, the main point is _not_ that AI is useless, but rather that it is a regular technology, not AGI god-building. A valuable one, but not infinite growth.


> But rather that it is a regular technology, not AGI god-building. A valuable one, but not infinite growth.

AGI is a lot of things, a lot of ever-moving targets, but it's never (under any sane definition) "infinite growth". That's already ASI territory, the singularity and all that. I see more and more people mixing the two, arguing against ASI being a thing when they're talking about AGI. "Human-level competence" is AGI. Super-human, ever-improving, infinite growth: that's ASI.

Whether and when we reach AGI is left for everyone to decide. I sometimes like to think about it this way: how many decades would you have to go back before people from that time would call what we have today "AGI"?


Sam Altman has been banging[1] the ASI drum for a while now. I don't think it's a stretch to say that this is the vision he is selling.

[1] - https://ia.samaltman.com/#:~:text=we%20will%20have-,superint...


Once you have AGI, you can presumably automate AI R&D, and it seems to me that the recursive self-improvement that begets ASI isn't that far away from that point.


We already have AGI - it's called humans - and frankly it's no magic bullet for AI progress.

Meta just laid 600 of them off.

All this talk of AGI, ASI, super-intelligence, recursive self-improvement, etc. amounts to undefined, masturbatory pipe dreams.

For now it's all about LLMs and agents, and you will not see anything fundamentally new until this approach has been accepted as having reached the point of diminishing returns.

The snake oil salesmen will soon tell you that they've cracked continual learning, but it'll just be memory, and it still won't be the AI intern that learns on the job.

Maybe in 5 years we'll see "AlphaThought" that does a better job of reasoning.


Humans aren't really being put to work upgrading the underlying design of their own brains, though. And 5 years is a blink of an eye. My five-year-old will barely even be turning ten years old by then.


Assuming the recursive self-improvement doesn't run into physical hardware limits.

Like, we can theoretically build a spaceship that accelerates to 99.9999% of c: just a constant 1 g acceleration engine with "enough fuel".

Of course the problem is that "enough fuel" = more mass than is available in our solar system.
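A rough back-of-envelope check of that fuel claim, using the standard relativistic rocket equation (mass ratio = exp(atanh(β) · c / v_exhaust)); the exhaust-velocity figure for a chemical rocket is an assumed ballpark, not a specific engine:

```python
import math

C = 299_792_458.0   # speed of light, m/s
beta = 0.999999     # target speed as a fraction of c

# Best case: a perfect photon rocket (exhaust velocity = c).
photon_mass_ratio = math.exp(math.atanh(beta))

# Chemical rocket, assumed exhaust velocity ~4.5 km/s.
# The ratio itself overflows a float, so report its base-10 logarithm.
v_e = 4_500.0
log10_chem_ratio = math.atanh(beta) * (C / v_e) / math.log(10)

print(f"photon rocket fuel:payload mass ratio ~ {photon_mass_ratio:,.0f}")
print(f"chemical rocket mass ratio ~ 10^{log10_chem_ratio:,.0f}")
```

Even the idealized photon rocket needs roughly 1,400 kg of fuel per kg of payload; with chemical exhaust the ratio is on the order of 10^200,000, while the entire Sun is only about 2 × 10^30 kg. "More mass than the solar system" is, if anything, an understatement.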

ASI might have a similar problem.


I believe AlphaFold, AlphaEvolve, etc. are _not_ attempts to get to AGI. The whole article is a case against chasing AGI, not against ML or LLMs overall.


AlphaEvolve is a general system which works in many domains. How is that not a step towards general intelligence?

And it is effectively a loop around an LLM.

But my point is that we have evidence that Demis Hassabis knows his shit. Doubting him based on a general vibe is not smart.


AlphaEvolve is a system for evolving symbolic computer programs.

Not everything that DeepMind works on (such as AlphaGo, AlphaFold) is directly, or even indirectly, part of a push towards AGI. They seem to genuinely want to accelerate scientific research, and for Hassabis personally this seems to be his primary goal; it might have remained his only goal if Google hadn't merged Google Brain with DeepMind and forced more of a product/profit focus.

DeepMind do appear to be defining, and approaching, "AGI" differently than the rest of the pack, who are LLM-scaling true believers, but exactly what their vision is for an AGI architecture, at varying timescales, remains to be seen.


Has he, his team, or DeepMind used any AGI rhetoric, even just as advertising?


Hassabis has talked about AGI in a lot of interviews. So have members of his DeepMind team, and of course current and former Alphabet employees, the most prominent being Schmidt. He definitely thinks it is coming and has said we should prepare for it. Just search for his interviews on AI and you'll find a bunch of them.


This entire thread: people who are too lazy to read basic info about AI companies but have an opinion about "AGI rhetoric".

What do you think OpenAI was founded in a response to?


Musk’s wild-eyed AGI visions and hysteria towards the sober-minded, research-focused efforts of DeepMind / Demis Hassabis?


"You're absolutely right!"


I'm not so sure. I, for one, do not think purely by talking to myself. I do that sometimes, but a lot of the time when I am working through something, I have many more dimensions to my thought than inner speech.


But Meta specifically needs returns from AI products to justify the capex. Google and Microsoft, for example, have profitable cloud businesses through which they can rent out GPU compute. Meta’s bet is far riskier.


True. But then again they own the consumer side.

If Meta hadn’t invested in AI recommendations a while back they would have lost against TikTok big time.


As the Facebook generation dies out, so does Facebook. I just don't see it. Meta will have to keep buying up competitors and hope the ad market stays a racket forever. The only reason Meta is still relevant is advertising, same as Google. Eventually enough people will realize it for what it is: anti-competitive enshittification to preserve multi-billion-dollar companies whose products and services suck so badly you'd have a hard time paying people to use them if they were startups today.


While Facebook does seem to be declining somewhat in use among my younger friend circle, Instagram and WhatsApp seem to be bigger than ever.


Being skeptical of all the numbers I see, it still seems Instagram is on roughly even footing with TikTok for upcoming generations.

I don’t doubt they may destroy their own product (like Google search), but I do think it’s going to take a long, long time.


And now Threads, which is apparently quietly growing.


Maybe the plan is to buy up all the companies that currently pay them for ads?


Very vibes-based take.


I agree with that. It's a bet they have to make. They are just in a worse position to make it than some of the other companies.


They will still need to learn to recognise whether the output from AI is good or not.


Version control is different since it’s collaboration with the rest of the org.

The rest: if they are just as productive as others, I would not care one bit. Tool use as a metric is just bad.

