Taking the example of Egyptian archeology: if you're reading the work of someone who is well regarded as an expert in the field, you can trust their word far more than you can trust the word of an AI, even if the AI is given the very text you're reading.
This is a pretty massive difference between the two, and your narrative is part of why AI is proving to be so harmful for education in general. Delusional dreamers and greedy CEOs talking about AI being able to do "PhD level work" have potentially ruined a significant chunk of the next generation by convincing them they are genuinely learning when they ask AI "a few questions" and take the answers at face value, instead of struggling through the material to build true understanding.
There needs to be a reasonable chance of correctness. At least the local toddlers around here don’t randomly provide a solution to a problem that would take me hours to find but only minutes to validate.
>I'll take a potential solution I can validate over no idea whatsoever of my own any day.
If you have to validate what the LLM says, I assume you'd do that by researching primary sources and works by other experts. At that point, the LLM did nothing except charge you for a few tokens before you went down the usual research path. I could see LLMs being good for providing an outline of what you'd need to research, which is definitely helpful but not in a singularity way.
> If you have to validate what the LLM says, I assume you'd do that by researching primary sources and works by other experts.
For research, yes, and the utility there is a bit more limited. They’re still great at digesting and contextualizing dozens or hundreds of sources in a few minutes, which would otherwise take me hours.
But what I mean by “easily testable” is usually writing code. If I already have good failing tests, verification is indeed very cheap. (It essentially boils down to checking whether the LLM hacked around the test cases or even deleted some.)
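To make that concrete, here's a minimal sketch of what that check can look like (the paths and commands are hypothetical, not from this thread): snapshot the test files before handing the task to the LLM, confirm they're byte-identical afterwards, and only then treat a green test run as evidence.

```python
import hashlib
import pathlib
import subprocess

def digest(paths):
    """Hash every test file so we can tell if any were edited or deleted."""
    return {p: hashlib.sha256(p.read_bytes()).hexdigest()
            for p in sorted(paths)}

# Hypothetical layout: tests live under tests/ and follow pytest naming.
before = digest(pathlib.Path("tests").rglob("test_*.py"))

# ... let the LLM modify the implementation here ...

after = digest(pathlib.Path("tests").rglob("test_*.py"))
if before != after:
    raise SystemExit("Test files changed or disappeared -- reject the patch.")

# Only now does a passing suite mean the solution is real.
subprocess.run(["pytest", "-q"], check=True)
```

The whole point is that the expensive part (writing good failing tests) is done up front by a human, so the per-patch validation stays cheap.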
> At that point, the LLM did nothing […]
I’d pay actual money for a junior dev or research assistant who could read, summarize, and come up with proofs of concept at the level of current LLMs, at any hour of the day and without getting bored, but I’ve got the feeling $20/month wouldn’t be appealing to most candidates.
All of the information available from an LLM (and probably more) is available in books or published on the internet. They can go to a library and read a book. They can be fairly certain books written by subject matter experts aren’t just made up.