1) Progress doesn't happen linearly; it's a very coarse stepwise function (remember AlphaGo?). 2) Acceptance that an AGI has actually been created is also non-linear - look at the current definitely-not-AGI LLMs: some people think it's close, some say absolutely not, these are just big matrices of numbers regurgitating words in plausible combinations and that's all - and these people won't change their minds easily one way or the other, so declaring that AGI has been delivered will be very controversial even if we could agree on the definition.
It would have to be reproducible: one second, a snippet of a video clip of something; the next, the thing happening. You rewind it and play it again. Scrub through and try to determine a different branch. At one timestamp, separation, peace, unknowing. In the next, flames and smoke and a flashing blur.
But no--the state of the world in that second versus the next could not have been predicted. It just happened, just so, and it would take longer to calculate the determinism than to adjust to the new.
> Google Lambda and Bing/Sydney are not at least incrementally closer to evidence?
No, in the same way that boiling water in a kettle is not incrementally closer to inventing the fuel for a rocket ship.
You can, in hindsight, draw a linear connection: water boiling -> steam engine -> fuel engine -> aeronautics -> space travel.
But the first time someone boiled water, if you had said that got us closer to standing on the moon, I think people would have called you mad. There is a non-trivial chance that all our current work in AI is functionally useless in terms of creating AGI; it might take a complete breakthrough in ML, tech not invented yet, or it might not be possible at all.
Yep, and there was much more promising science at the time checking for the Ether, and for the magical qualities of gravity-defying physics.
Who is to say that our current ML models are not Ether studies, utterly useless?
If we crack AGI in 10 years, maybe our current models are the fuel engine. But if they are not even close, we might be boiling water. Or we might be wrong altogether; that's why I said your observation that those projects are getting us closer isn't right, at least not yet.
We have lots of examples of generally intelligent systems, they're common and they evolve by themselves, so the idea that it might not be possible at all just seems ridiculous.
Large language models might not be capable of general intelligence by themselves, but a model network that includes a LLM as part of a generalized game playing model would certainly have the potential to get there.
I think it is fair to say that "not at all possible" is perhaps too strong. As for the potential of particular approaches, I am less certain we have, or even know, what we need.
But we also have examples of things all around us we cannot make ourselves (e.g., a simple animal or lifeform from scratch).
If we can define what "generally intelligent" means in concrete terms, we can absolutely create it. The problem is that we don't have a good understanding/shared idea of what we're trying to achieve. We can create models that can learn to play many different games using the same weights, solve IQ/logic tests and pass the Turing test consistently, but people are going to move the goalposts and say those are just complex mashups of autocomplete and flowcharts.
To be honest I don't think AI skeptics are ever going to believe until there's a Skynet trying to exterminate them.
I don't share the view that all that is holding us back from creating AGI is a definition. Btw, that synthetic genome is a long way from making even something like a little nematode.
I rather think AGI "idealists" (maybe there is a better word) will forever hold up any algorithmic advance as a sign of the imminent creation of AGI proper.
That's not AGI. An AGI would be able to tell you they don't know things, ask for clarification as to why you think they're wrong, etc.
ChatGPT just spouts out a different wild guess when you tell it it's wrong, but doesn't learn from its mistake—not even within a single chat. Sydney just goes full psychopath.
Not necessarily - just as not every human knows pharmacology, genetics, robotics, or the other specialties that could be used for self-improvement, including social engineering and programming.
What you're describing is a particular kind of superintelligence, the recursively self-improving kind.
Between that and general intelligence is another gap.
Humanity as a whole can improve itself indefinitely, there is little doubt about that. For robots, there is no distinction between a single robot and robotuity, as software can be replicated cheaply and quickly.
That's a big assumption. No one has built an AGI yet, so we don't know whether an AGI would be capable of improving its own code exponentially. Any exponential growth function quickly runs into hard physical limits.
This seems dumb to me. It's like if the Japanese had had the clairvoyant ability to know that nuclear bombs were being built and would be used in the immediate future and said "We can ignore it until it exists." No. You leave Hiroshima and Nagasaki now, not when the freaking bombs are above your head.
We don't have the clairvoyant ability to know that AGI will exist in any future near enough to worry about. It might or it might not - and even if it exists we could get its nature wrong and worry about the wrong things.
This isn't congruent with the AI research community consensus. There's debate over many points, but that we will develop AGI is seen as a foregone conclusion almost universally. The most pessimistic place it closer to 2100, others say 5 years, most say <25 years. But nobody is saying it won't happen at all, or even that it won't happen within the lifetimes of anyone currently living. At best we would be kicking the can down the road and making it our children's or grandchildren's problem.
Notably the people closest to the fire, those advancing SOTA AI capabilities and those deepest into alignment/existential risk research, both say it is coming this decade. They predict very different outcomes (crudely summarized: utopia vs extinction), but they both predict the event horizon is on our doorstep.
You're right that we might get it wrong, but getting things wrong with a new technology pretty much always means negative outcomes. When you're figuring out rocket science and you realize you worried about the wrong things, your rocket doesn't somehow still succeed and make it to space; it blows up in a fireball. It might not blow up because of the specific thing you were paying attention to, but it still blew up.
So what. Research community "consensus" is meaningless when it comes to something that doesn't exist and which we don't even have a clear path to build. It's just a bunch of pontificating clowns engaging in idle speculation. Total waste of time, like trying to predict what space aliens will look like or whatever. It makes for fun sci-fi stories but it's not something that serious people actually care about.
Repeating lies and denigration does not make them true.
Why should we believe you over the experts for whom this is their life's work? What are your credentials? Where is your experience? You seem to think they're unserious clowns so clearly you have a good reason.
You are certainly welcome to leave Earth right now if you like. But the notion that we should waste time worrying about some hypothetical future AGI is just dumb.