AlphaEvolve is a system for evolving symbolic computer programs.
Not everything that DeepMind works on (such as AlphaGo and AlphaFold) is directly, or even indirectly, part of a push towards AGI. They seem to genuinely want to accelerate scientific research, and for Hassabis personally this seems to be his primary goal; it might have remained his only goal if Google hadn't merged Google Brain with DeepMind and forced more of a product/profit focus.
DeepMind do appear to be defining, and approaching, "AGI" differently than the rest of the pack, who are LLM-scaling true believers, but exactly what their vision is for an AGI architecture, at varying timescales, remains to be seen.
Hassabis has talked about AGI in a lot of interviews. So have members of his DeepMind team, and of course current and former Alphabet employees - the most prominent being Schmidt. He definitely thinks it is coming and has said we should prepare for it. Just search for his interviews on AI and you'll find a bunch of them.
Yeah, in reality it seems that DeepMind are more the good guys, at least in comparison to the others.
You can argue about whether the pursuit of "AGI" (however you care to define it) is a positive for society, or even whether LLMs are, but the AI companies are all pursuing this, so that doesn't set them apart.
What makes DeepMind different is that they are at least also trying to use AI/ML for things like AlphaFold that are a positive, and Hassabis appears genuinely passionate about the use of AI/ML to accelerate scientific research.
It seems that some of the other AI companies are now belatedly trying to at least appear interested in scientific research, but whether this is just PR posturing or something they will dedicate substantial resources to, and be successful at, remains to be seen. It's hard to see OpenAI, planning to release SexChatGPT, as being sincerely committed to anything other than making themselves a huge pile of money.
> "While AlphaEvolve is currently being applied across math and computing, its *general* nature means it can be applied to any problem whose solution can be described as an algorithm, and automatically verified. We believe AlphaEvolve could be transformative across many more areas such as material science, drug discovery, sustainability and wider technological and business applications."
Is that not general enough for you? Or not intelligent enough?
Do you imagine AGI as a robot and not as datacenter solving all kinds of problems?
> Do you imagine AGI as a robot and not as datacenter solving all kinds of problems?
AGI means it can replace basically all human white collar work; AlphaEvolve can't do that, while average humans can. White collar work is mostly done by average humans, after all, so if average humans can learn it then an AGI should be able to as well.
An easier test is that an AGI must be able to beat most computer games without being trained on those games. Average humans can beat most computer games without anyone telling them how to do it: they play and learn until they beat it 40 hours later.
AGI was always defined as an AI that could do what typical humans can do, like learn a new domain to become a professional, or play and beat most video games. If the AI can't study to become a professional then it's not as smart or general as an average human, so unless it can replace most professionals it's not an AGI, because you can train a human of average intelligence to become a professional in most domains.
AlphaEvolve demonstrates that Google can build a system which can be trained to do very challenging intelligent tasks (e.g. research-level math).
Isn't it just an optimization problem from this point? E.g. right now training takes a lot of hardware and time. If they make it so efficient that training can happen in a matter of minutes and cost only a few dollars, won't that satisfy your criterion?
I'm not saying AlphaEvolve is "AGI", but it looks odd to deny it's a step towards AGI.
I think most people would agree that AlphaEvolve is not AGI, but any AGI system must be a bit like AlphaEvolve, in the sense that it must be able to iteratively interact with an external system towards some sort of goal stated both abstractly and using some metrics.
I like to think that the fundamental difference between AlphaEvolve and your typical genetic / optimization algorithms is the ability to work with the context of its goal in an abstract manner, instead of just the derivatives of the cost function with respect to the inputs, thus being able to tackle problems with mind-boggling dimensionality.
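The loop being described can be sketched in a few lines. This is a toy illustration, not AlphaEvolve's actual implementation: the `mutate` function here flips a random bit, standing in for the step where an LLM rewrites a candidate program given the goal in natural language, and `evaluate` is a trivial automated verifier (in the real system it would compile and run the candidate and measure a metric). The point is that the search only ever calls the evaluator - no gradients are computed anywhere.

```python
import random

random.seed(0)  # deterministic for the sake of the example

def evaluate(candidate: str) -> float:
    """Toy automated verifier: score a candidate (here just a bitstring)
    by how many positions match a target. A real evaluator would run
    the candidate program and measure something like speed or accuracy."""
    target = "1111111111"
    return sum(a == b for a, b in zip(candidate, target))

def mutate(candidate: str) -> str:
    """Stand-in for the LLM proposal step: flip one random bit.
    AlphaEvolve instead asks an LLM to rewrite the candidate, guided
    by an abstract description of the goal."""
    i = random.randrange(len(candidate))
    flipped = "0" if candidate[i] == "1" else "1"
    return candidate[:i] + flipped + candidate[i + 1:]

def evolve(seed: str, generations: int = 200) -> str:
    """Propose, verify, keep the best - no cost-function derivatives."""
    best = seed
    for _ in range(generations):
        candidate = mutate(best)
        if evaluate(candidate) >= evaluate(best):
            best = candidate
    return best

best = evolve("0000000000")
print(best, evaluate(best))
```

Swapping the bit-flip for an LLM that understands the goal abstractly is what lets this kind of loop search spaces (like "all programs") where gradient-based methods have nothing to differentiate.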
The "context window" seems to be a fundamental blocker preventing LLMs from replacing a white collar worker, absent some fundamental breakthrough to solve it.
* AlphaFold - SotA protein folding
* AlphaEvolve + other stuff accelerating research mathematics: https://arxiv.org/abs/2511.02864
* "An AI system to help scientists write expert-level empirical software" - demonstrating SotA results for many kinds of scientific software
So what's the "fantasy" here, the actual lab delivering results or a sob story about "data workers" and water?