> [...] a large number of places where LLMs make apparently-effective tools that have negative long-term consequences (see: anything involving learning a new skill, [...]
Don't people learn from imperfect teachers all the time?
Yes, they do. In fact, an imperfect teacher can sometimes induce more learning than a polished one. And that's what is insidious about learning from AI: it looks like something we've seen before, something where we know how to make it useful and even turn the gaps and inadequacies to our advantage.
AI can be effective for learning a new skill, but you have to be constantly on your guard to prevent it from hacking your brain and making you helpless and useless. AI isn't the parent holding your bicycle, giving you a push, and letting go when you're ready. It's the welded-on training wheels that become larger and more structurally necessary until the bike can't roll forward at all without them. It feeds you the lie that all you need is the theory; you never need to apply it, because the AI will do that for you, so don't worry your pretty little head over it. AI teaches you that if something requires effort, you're just not relying on the AI enough. The path to success goes only through AI, and the people who try to build their own skills without it are suckers, because the AI can effortlessly create things 100x bigger, better, and more complex.
Personally, I still believe that human + AI hybrids have enormous potential. It's just that using AI constantly pushes you away from beneficial hybridization and toward dependency. You have to keep fighting your innate impulses, because it hacks them to your detriment.
I'd actually like to see an AI trained not to give answers, but to seek out the point where it gets you 90% of the way there and then steadfastly refuse to give you the last 10%. An AI built with the goal not of producing artifacts or answers, but of producing learning and growth in the user. (Then again, I'd like to see the same thing in an educational system...)
> Personally, I still believe that human + AI hybrids have enormous potential. [...]
That was true in chess for a long time, but for at least the last 20 years or so, nearly any time the human deviates from what the AI suggests, it's a mistake.
It turns out that, with enough effort, "chess" can be lumped in with "arithmetic": the machines are just better at it. We're continually finding new things in that category, including things we never would have guessed. But that doesn't mean everything belongs there. At least right now, very little does.
Even the things AI has gotten best at, like coding, are nowhere near that category yet. AI-written text and code are still crap compared to what humans can write. Both can often look better superficially, but the closer you look, and the less a human guided it, the worse you discover it is.
I suspect you are comparing AI output to the best human output?
Chess bots could beat the vast majority of humans at their game long before they could beat the world champion.
Similarly, AI-generated code, text, images, etc. are getting more and more competitive with what regular humans can produce, especially if you take speed and cost into account.