Not necessarily. The LLM doesn't know what it can answer until it tries. So in some cases it may be better to make the attempt and then later characterize it as a hallucination, so the error doesn't spill over and produce even more incoherent output. The chatbot admitting that it "hallucinated" is a strong signal to itself that part of the preceding text is literal nonsense and can't be trusted, and that it needs to take a different approach.

