
That would be nice, but I cynically suspect it's not something LLMs are constitutionally able to provide.

Since they don't actually model facts or contradictions, adding prompt text like "provide alternatives" is in effect more like "add weight to future tokens and words that correlate with what happened in documents where someone was asked to provide alternatives."

So the linguistic forms of cautious equivocation are easy to evoke, but reliably getting the logical content might be impossible.
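You can actually watch this happen at the logit level. Here's a rough sketch (assuming the HuggingFace transformers library, with GPT-2 as a stand-in model; the prompts and candidate words are just illustrative) comparing the next-token distribution with and without a hedging instruction:

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    def next_token_prob(prompt, word):
        # Probability the model assigns to `word` as the immediate next
        # token. Uses only the first subword piece, which is crude but
        # fine for a demo.
        ids = tok(prompt, return_tensors="pt").input_ids
        with torch.no_grad():
            logits = model(ids).logits[0, -1]
        probs = torch.softmax(logits, dim=-1)
        return probs[tok.encode(" " + word)[0]].item()

    plain  = "The capital of Australia is"
    hedged = "Please consider alternatives. The capital of Australia is"

    for word in ("Canberra", "probably", "either"):
        print(word, next_token_prob(plain, word), next_token_prob(hedged, word))

The instruction shifts probability mass toward hedging words regardless of whether the underlying claim actually warrants hedging, which is exactly the form-without-content problem.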



I agree; it's unlikely we'll be able to get LLMs to provide "informed uncertainty," because they can't interrogate any internal confidence in the correctness of their output.

But I wonder if tuning the output to avoid definitive statements would be beneficial from a UX perspective.
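The nearest thing to "internal confidence" that current models expose is per-token likelihood, and the mismatch is easy to demonstrate. A rough sketch, again assuming the HuggingFace transformers library and GPT-2 (prompt illustrative): the score below measures how fluent the continuation is, not whether the claim is correct.

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    ids = tok("The capital of Australia is", return_tensors="pt").input_ids
    out = model.generate(ids, max_new_tokens=8, do_sample=False,
                         output_scores=True, return_dict_in_generate=True,
                         pad_token_id=tok.eos_token_id)

    # Mean log-probability of the generated tokens: a fluency score
    # that is often (mis)read as confidence in the claim itself.
    new_tokens = out.sequences[0, ids.shape[1]:]
    logprobs = [torch.log_softmax(s[0], dim=-1)[t].item()
                for s, t in zip(out.scores, new_tokens)]
    print(tok.decode(new_tokens), sum(logprobs) / len(logprobs))

A model can assign high probability to a confidently worded wrong answer, which is why surfacing hedged phrasing in the UX might matter more than the raw numbers.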


I think it would help curb people over-trusting the model, yeah.

Heck, imagine how terrible the opposite would be: "When answering, be totally confident and assertive about your conclusions."



