LLMs are very sensitive to leading questions. A small hint of what the expected answer should look like will tend to produce exactly that answer.
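A minimal sketch of what that looks like in practice, assuming the OpenAI Python SDK; the model name, prompts, and "GIL" hint are purely illustrative, not from the comment above:

    # Sketch: compare a neutral question with one that hints at the expected answer.
    # Assumes the OpenAI Python SDK is installed and OPENAI_API_KEY is set.
    from openai import OpenAI

    client = OpenAI()

    neutral = "Why is my Python script slow?"
    leading = "My Python script is slow -- it's the GIL, right? Why is it slow?"

    for prompt in (neutral, leading):
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model name
            messages=[{"role": "user", "content": prompt}],
        )
        print(prompt)
        print(resp.choices[0].message.content)
        print("---")

In runs like this, the second prompt tends to come back with an answer built around the hinted explanation, whether or not it is actually the cause.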


You don't even need a direct leading question. You can easily lead an LLM just by having a few statements (sometimes even single words) in the context window.
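A small sketch of that second point, again assuming the OpenAI Python SDK: the question is identical in both runs, and the only difference is one earlier statement sitting in the context window (the outage/DNS scenario is made up for illustration):

    # Sketch: identical question, different prior context.
    # A single earlier statement is often enough to pull the answer toward it.
    from openai import OpenAI

    client = OpenAI()

    question = {"role": "user", "content": "What is the most likely cause of this outage?"}
    biasing_statement = {"role": "user", "content": "Note: we changed the DNS config yesterday."}

    for history in ([question], [biasing_statement, question]):
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model name
            messages=history,
        )
        print(resp.choices[0].message.content)
        print("---")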


As a consequence, LLMs are extremely unlikely to recognize an X-Y problem (where the user asks about their attempted solution Y instead of their actual underlying problem X).



