I don’t know if it’s my fault or what, but my “LLMs will obviously improve” comment specifically does not mean “LLMs will stop hallucinating.” I hate the AI fad (or maybe I’m just annoyed by it), but I’ve seen enough to know these things are powerful and will get better with all the money people are throwing at them. You’d have to be willfully ignoring reality not to have been exposed to this stuff recently.

What I think is actually happening is that some people have simply taken the stance that an AI model can never be useful if it ever hallucinates; since models will probably always hallucinate to some degree or under some conditions, they conclude the models will never be useful. End of story.

I agree it’s stupid to try to reason inductively that AI models will stop hallucinating, but that was never actually my argument.
