Part of the difficulty here is that the current discussion around AI / LLMs has a lot of similarities with crypto, in the sense that there is a lot of nonsense out there. Unlike crypto, there is obviously some value, as opposed to none, so it's not all grift. But like you I find it hard to tell the two apart.
Fundamentally for me I can't get over the idea that extraordinary claims require extraordinary evidence, and so far I haven't really seen any evidence. Certainly not any that I would consider extraordinary.
It's like saying that if a magician worked _really really hard_ on improving, evolving and revolutionising their "cut the assistant in half and put them back together" trick, they'll eventually be able to actually cut them in half and put them back together.
I have not seen a convincing reason to think that the path that is being walked down ends up at actual intelligence, in the same way that there is no convincing reason to think the path magicians walk down ends up in actual magic.
> It's like saying that if a magician worked _really really hard_ on improving, evolving and revolutionising their "cut the assistant in half and put them back together" trick, they'll eventually be able to actually cut them in half and put them back together.
So, surgery?
As the stage magicians Penn and Teller (well, just Penn) said, stage magic is about making something very hard look easy, so much so that your audience simply doesn't even imagine the real effort you put into it.
Better analogy here would be asking if we're trying to summon cargo gods with hand-carved wooden "radios".
No, not surgery. We didn't arrive at surgery by working really hard on a magic trick. I'm also reasonably sure surgery is not at the point where you can cut someone entirely in half and put them back together.
Since you brought up Penn and Teller, take the bullet catch. They are not actually catching a bullet. No matter how hard they work on perfecting their bullet catch trick, this will not allow them to catch a real bullet shot from a real gun in their real teeth. Working on the trick is not the journey where the end point is doing the actual thing you're representing in the trick.
Surgery is at a point where you can take a living human's heart out without killing them, and replace it with one you got out of a corpse. And if the waiting list for donated corpse hearts is too long, you can have yours switched off for months while a machine pumps for you.
Lungs, kidneys, liver, lots can be successfully transplanted in similar ways.
> Surgery wasn't gotten to by way of working really hard on a magic trick
My point is the other way around: if you wanted a magic trick where you could literally cut someone in half for real and then put them back together, that is literally "surgery".
> They are not actually catching a bullet. No matter how hard they work on perfecting their bullet catch trick, this will not allow them to catch a real bullet shot from a real gun in their real teeth. Working on the trick is not the journey where the end point is doing the actual thing you're representing in the trick.
For some tricks, sure.
For others… is the nail gun memorising trick faked for safety reasons? I mean, I sure would fake it, and I assume their insurance requires at least some safety interlock that isn't visible, but as Penn says, you can genuinely memorise the sequence while doing the trick.
We never expected that there even could be a magic trick that came so close to mimicking human intelligence without actually being it. I think there are only so many ways that matter can be arranged to perform such tricks; we're putting lots of work into exploring that design space more thoroughly than ever before, and sooner or later we'll stumble on the same magic tricks that we humans are running under the hood.
> and sooner or later we'll stumble on the same magic tricks that we humans are running under the hood.
Right, so this is the extraordinary claims bit. I'm not an expert in any of the required fields, to be clear, but I'm just not seeing the path, and no one as yet has written a clear and concise explainer on it.
So my presumption, given past experience, is that it is hype designed to drive investment plus hopium, not something that is based on any actual reasoned thought process.
Sure, but evolution isn't an actual reasoned thought process either, and it still managed to produce us without even having humans as an explicit goal; we just popped out of the process by accident as a way to be effective at surviving in the wild.
That's the usual way of things. Most of the time research progress is made at the boundary without anybody seeing the path. It happens more like slips in a fault line; little sudden steps forward in one area that nobody anticipated, and which create new strains elsewhere along the fault line, as those discoveries open up new attacks on old problems. Gradually the problem yields but not according to anybody's big plan.
In that sort of situation, the rate of progress is affected by two things: how many people are working at the frontier figuring out the problems that prevent the field from progressing, and how much economic pressure there is to exploit each new solution that gets identified. When the economic pressure is low, new breakthroughs mostly stay in the lab and circulate slowly. Researchers will come up with ideas that could solve the problem but don't have resources to test every one. But when the pressure is great, each new breakthrough quickly scales up, and more ideas get tested in parallel.
Sometimes a bunch of progress does happen on a schedule, as part of a master-planned research effort, like the Manhattan or Apollo projects, or semiconductor lithography R&D schedules. In those cases the main pathway is known at the outset, but there are a bunch of novel engineering sub-problems to solve along the way. Most research doesn't happen that way though. And even when it does, to anybody on the outside who doesn't themselves see the route laid out from a high altitude, it looks the same. And even in these cases, there may be a few big-picture questions that they aren't sure about until late into the project, resulting in multiple paths being tried at once to improve the chance of success.
I think there are several hard, fundamental problems currently being grappled with that stand between today's AI and AGI, and unlike Sam Altman I don't think scaling will be enough to overcome them. But I do think there are now tremendous forces being deployed to grapple with them, and tremendous pressures being built up behind that, so any slips along the fault line could yield rapid movement forward.
Is this hype to drive investment? Depends who you're listening to. If they're an executive at NVIDIA or OpenAI then sure, it probably is. But not all of it. One of the main advocates for the view that I share is Eliezer Yudkowsky, who has been talking about this since before AI was on any CEO's radar. His latest book is called "If Anyone Builds It, Everyone Dies". I'm not sure how he could phrase his concerns in any less-appealing way to an investor.