I'm not sure I've ever read a piece written with so much certainty and arrogance about a field that is completely unexplored.
Just as an example:
"The basic premise of intelligence explosion — that a “seed AI” will arise, with greater-than-human problem solving ability, leading to a sudden, recursive, runaway intelligence improvement loop — is false."
From the little that we know as of today, I call bullshit.
Even AlphaGo, which is arguably a quite primordial AI, managed to achieve superhuman performance in a ridiculously short amount of time, just by playing against itself.
And it simply crushed the collective effort of human players who had honed their strategies for literally millennia, in what is considered one of the most difficult games.
I don't think the author has any insight at all into what a general AI will be.
AlphaGo is good at playing Go. It can't do anything else. That's the point the author is making at the start of the article: that intelligence develops by focusing on specific tasks.
There's a lot that is hardly substantiated in the OP, but the truth is that just because you have a machine that's smart enough to play Go better than any human being doesn't mean you can anticipate a machine that can learn to play the bassoon better than any human being.
The argument about the no free lunch theorem is informative and one of the few good points in the article. An algorithm that is good at X is eventually going to be pretty bad at Y. A superintelligence would have to beat humans in all possible X, even the ones it would be really bad at. And that sounds like an impossibility.
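For reference, a rough LaTeX rendering of the Wolpert-Macready no-free-lunch result this refers to (their 1997 notation, where d^y_m is the sequence of cost values observed after m evaluations): summed over all objective functions f, any two algorithms perform identically.

    % No-free-lunch (Wolpert & Macready, 1997), sketched:
    % averaged over every objective function f, the distribution of
    % observed costs after m evaluations is identical for any two
    % search algorithms a_1 and a_2.
    \sum_{f} P\left(d^{y}_{m} \mid f, m, a_1\right)
      = \sum_{f} P\left(d^{y}_{m} \mid f, m, a_2\right)

So being "good at X" is a statement about which subset of problems an algorithm's biases fit; it can't hold over all problems at once.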
AlphaGo had a trivial simulator for the game of Go, so it could play millions of self-play games. Reality is not so easy: we would need a simulator for reality, not for Go, in order to reach human level. The author emphasizes that the agent needs a nurturing, challenging environment, with difficult problems that are within its capabilities to solve, in order to become more intelligent. In other words, the AI needs good data to train on, and that is the bottleneck, not the AI itself. That's why there can be no AGI - there is no universal data.
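To make the simulator point concrete, here is a minimal, hypothetical sketch (toy names and toy rules - stones are never captured, so this is not real Go) of why self-play is so cheap when the environment is a perfect simulator:

    import random

    # Toy stand-in for a board-game simulator: exact rules, essentially
    # free to run. (GoLikeEnv is a hypothetical illustration, not
    # AlphaGo's actual environment.)
    class GoLikeEnv:
        def __init__(self, size=9):
            self.size = size
            self.board = [[0] * size for _ in range(size)]
            self.to_move = 1

        def legal_moves(self):
            return [(r, c) for r in range(self.size)
                    for c in range(self.size) if self.board[r][c] == 0]

        def step(self, move):
            r, c = move
            self.board[r][c] = self.to_move
            self.to_move = -self.to_move
            return not self.legal_moves()  # "game over" when board is full

    def self_play_game(env):
        # One full game takes microseconds; millions are affordable.
        # A "simulator of reality" offers no such cheap reset-and-replay.
        moves = 0
        while not env.step(random.choice(env.legal_moves())):
            moves += 1
        return moves

    lengths = [self_play_game(GoLikeEnv()) for _ in range(1000)]
    print(f"{len(lengths)} games, avg length {sum(lengths) / len(lengths):.1f}")

The asymmetry is the whole point: the toy environment restarts for free, while every trial against the real world runs at real-world speed and cost.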
This certainty comes from being more knowledgeable in the field of Reinforcement Learning than the vast majority of his readers. It's not hot air, I think he has good reasons, but they can't be expressed so easily. I got this intuition after reading many RL papers and I completely agree with him. In fact I am grateful to him for expressing this intuition better than I could have.
The main idea: it's the environment, not the brain/neural net, that is the bottleneck. Intelligence is situated: limited by the complexity of the environment and the problem it has to solve. You can't have a singularity in a vat. The environment matters most.
The human environment puts a hard limit on intelligence in our society. If we can create richer environments, intelligence could increase, but not exponentially; it would still be limited, even in the new environment. The exponential trend of AI is at most a sigmoid.
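In symbols (my own framing, not the author's): logistic growth is indistinguishable from exponential growth early on, but it saturates at whatever capacity K the environment supports.

    % Logistic ("sigmoid") growth: for I << K the right-hand side is
    % approximately r*I, i.e. exponential growth; as I approaches the
    % environment's carrying capacity K, growth stalls.
    \frac{dI}{dt} = r I \left(1 - \frac{I}{K}\right)
    \qquad\Longrightarrow\qquad
    I(t) = \frac{K}{1 + \frac{K - I_0}{I_0}\, e^{-r t}}

A richer environment raises K; it doesn't remove it.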
Plus - I think the community needs some hard truth, and the hype is way off right now. That's probably why he was so sure of himself - it was community service.
You're missing an important thing - that Go is trivial to simulate. AlphaGo can play millions of self-play games relatively cheaply. A human-level intelligence would need to run trials against a simulation of the real world, which is impossible to create as of yet. That is why the author kept insisting on the environment (the world, or a simulator): the environment is the bottleneck. You can think of the environment as a dynamic dataset. You know that data is crucial in AI; thus, a lack of sufficiently complex data would hamper AGI.
The fact that there is no "universal environment" means there can be no general intelligence. There can be just environment specific intelligences (situated intelligence, as the author said). The concept of AGI is just a reification of narrow AI - an illusion, there is no such thing.
Probably you don't; it's likely an unverifiable proposition. But "better at ethics" is a completely different question from "better at implementing a given ethical stance".
The people frightened of intelligence explosion are worried about something like an AI version of existentialism: a mind that accepts some moral system without even trying to justify it, and then optimizes accordingly. It's certainly possible to just accept as axiomatic ethical standards which don't come from any intrinsic feature of the world.
I've seen lots of essays (not this one) claim that morality will "inherently" emerge from intelligence, which I think is absurd. Shit, my moral views aren't an 'inherent' product of anything except my evolution-shaped brain that feels empathy.
That's where I think "AI won't be like humans so it's fine!" essays screw up so catastrophically; "not like humans" is exactly what people are worried about.
> I've seen lots of essays (not this one) claim that morality will "inherently" emerge from intelligence, which I think is absurd. Shit, my moral views aren't an 'inherent' product of anything except my evolution-shaped brain that feels empathy.
Your evolution-shaped brain is what an AI will become too. It won't start out superhumanly smart; it will evolve to an equal level, then keep going. Morals are basically lower-level approximations which are beneficial for survival, and AIs will certainly evolve those too.
AI will reach parity with humans before it exceeds them, agreed. But intelligence parity is not the same as "matching my evolution-shaped brain". Feeling empathy for other humans is not a requirement for intelligence; I routinely make supposedly-irrational choices to help humans in non-reciprocal settings because I feel empathy. There's no reason to think an AI that can write code as well as me will feel the same.
> Morals are basically lower-level approximations which are beneficial for survival and AI's will certainly evolve those too.
AI will evolve approximations which are beneficial for survival. We agree there. But why would a bunch of approximations that helped my ancestors survive in low-tech communal environments benefit a strong AI trained with high tech, minimal survival needs, and no "peer group"?
I fully expect strong AI will pursue some set of real-world goals which aren't justified except by the anthropic principle. But the assumption that those goals will match human goals seems to completely ignore the fact that the first strong AI will live in a vastly different environment than the first human.
> AI will reach parity with humans before it exceeds them, agreed.
Intelligence is a high-dimensional space, and the constraints on AI look very little like those on humans. It is unlikely there will ever be a time when AI is similarly smart, as opposed to differently smart to a similar, context-dependent magnitude.
A good point, and I should have been clearer about that.
I do think it's reasonable to talk about intelligence 'growing', and consequently about one intelligence 'surpassing' another. But AI's methods of thinking certainly won't be human, and it may reach human-parity on different metrics at very different times. Hell, we're seeing some of that already: AI can do I/O and data processing at superhuman speeds, but humans can still extract much more knowledge from a small amount of data.
Your morals are not universal, so of course I wouldn't expect an AI to adhere to them. But I also don't adhere to your morals and I'm just another human.
Being just another human still implies you don't relax by setting yourself on fire. Specific instincts are not quite universal, but they very much shape behavior.
Oh ok - if your point is simply that an AI will probably have some set of goals which could be called values, we agree. I was attacking the common claim that superintelligence will inherently develop 'friendly' values.
The orthogonality thesis doesn't say AI will lack goals, only that those goals may be totally unrelated to the sort of beliefs you or I would recognize as morals.
Actually, I am saying that there are some universal morals which AI will also come to understand. They may not be your morals, but I think they are compatible with mine.
The morals we have work; that's why humans developed them. They seem to essentially amount to giving a little leeway to allow cooperation, even if that leaves you open to being taken advantage of: avoiding a constant race to win, and giving up some personal gain for the benefit of the group.
It's mostly long-term game theory - and AIs will be subject to the same laws of math, physics, and group dynamics. If they don't try to get along, they'll reach an equilibrium of constant war. Those groups of AIs that develop morals will avoid that and surpass the other groups.
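A minimal sketch of that claim using the textbook iterated prisoner's dilemma payoffs (the standard values, nothing AI-specific): unconditional defectors lock into the "constant war" equilibrium, while conditional cooperators do far better over repeated play.

    # Iterated prisoner's dilemma: (my_payoff, their_payoff) per round.
    PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
              ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

    def always_defect(opponent_history):
        return 'D'

    def tit_for_tat(opponent_history):
        # Cooperate first, then mirror the opponent's last move.
        return opponent_history[-1] if opponent_history else 'C'

    def play(strat_a, strat_b, rounds=200):
        hist_a, hist_b = [], []   # each strategy sees the *other's* moves
        score_a = score_b = 0
        for _ in range(rounds):
            a, b = strat_a(hist_b), strat_b(hist_a)
            pa, pb = PAYOFF[(a, b)]
            score_a += pa
            score_b += pb
            hist_a.append(a)
            hist_b.append(b)
        return score_a, score_b

    print(play(always_defect, always_defect))  # (200, 200): constant war
    print(play(tit_for_tat, tit_for_tat))      # (600, 600): cooperation pays
    print(play(tit_for_tat, always_defect))    # (199, 204): exploited once only

"Morals" here fall out as the strategies that do well in the long run, which is roughly the point above.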
This kind of stuff is pretty tricky. If you only account for average human suffering, not only do you fail to account for happiness, but you also fall into the trap of concluding that it's best to kill everyone who is suffering.
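A toy numeric illustration of that trap (the numbers are invented, purely for the arithmetic):

    # Average-suffering trap: removing the worst-off "improves" the
    # average metric while obviously making the world worse.
    suffering = [9, 8, 2, 1]                      # hypothetical per-person scores
    print(sum(suffering) / len(suffering))        # 5.0 before "optimizing"
    survivors = [s for s in suffering if s < 5]   # eliminate the above-average sufferers
    print(sum(survivors) / len(survivors))        # 1.5 after: metric down, ethics worse

The metric improves precisely because the measured population changed, which is the failure mode being described.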
Even without assuming some value system, we can measure progress in ethics by accumulating a growing set of necessarily false moral propositions. For instance, propositions which necessarily contradict themselves.
It is not completely unexplored. We already turned a superintelligence loose to create thinking machines (von Neumann and the von Neumann architecture). The only data point to date indicates that doing so doesn't lead to an intelligence explosion.
"The basic premise of intelligence explosion — that a “seed AI” will arise, with greater-than-human problem solving ability, leading to a sudden, recursive, runaway intelligence improvement loop — is false."
From the little that we know as of today I call bullshit. Even Alpha Go, that is arguably a quite primordial AI, managed to achieve super-human performance in a ridiculously short amount of time, just playing against itself. And it simply crashed all the collective effort of the human players that honed their strategies for literally millennia in what is considered one of the most difficult games. I don't think the author has any insight at all on what a general AI will be.