
> LLMs giving confidently incorrect information like this is so much worse

And even in the face of how pervasive this is, I see more and more people just accepting LLM output at face value.

Recently there was a post about using an “LLM” to do sentiment analysis on 600k orange site posts, without even a modicum of irony.

Neither the post nor its comments mention the word “hallucination” even once, but I’ve seen downvotes used to silence dissent here often enough that maybe any dissenting voices were simply buried.

https://news.ycombinator.com/item?id=41241124


