This is true. The original-series (and later) Star Trek computers being able to interpret normal idiomatic human speech and act upon it was, to those in the know, hilariously unrealistic until very suddenly, just recently, it wasn't. Pretty cool.
pretty much all of the classical ideas of what an AI could do can be done with our existing capabilities. and yet, people continue to live as if the world has not changed
"AI" has been doing that since the 1950s though. The problem is that each time we define something and say "only an intelligent machine can X" we find out that X is woefully inadequate as an example of real intelligence. Like hilariously so. e.g. "play chess" - seemed perfectly reasonable at the time, but clearly 1980s budget chess computers are not "intelligent" in any very useful way regardless of how Sci Fi they were in the 40s.
Not OP, but I think that's because "humanlike abstract thinking and informal reasoning is completely unnatural to how computers work, and it's borderline impossible to make them do that" was, in my eyes, by far the biggest AI roadblock.
And we've made it past that. LLMs of today reason a lot like humans do.
They understand natural language, read subtext, grasp the implications. NLP used to be the dreaded "final boss" of AI research - and now, what remains of it is a pair of smoking boots.
What's more, LLMs aren't just adept at language. They take their understanding of language and run with it. Commonsense reasoning, coding, math, cocktail recipes - LLMs are way better than they have any right to be at a range of tasks so diverse it makes your head spin.
You can't witness this, grasp what you see, and remain confident that "AGI isn't possible".