If only we had some kind of tool that could give you a great summary of his points…


Especially a reliable one, that I could trust not to hallucinate extra bullet points!


I question whether you’ve used a SOTA LLM recently.


Like o3, which hallucinates more than earlier models (as I learned from this piece)? https://techcrunch.com/2025/04/18/openais-new-reasoning-ai-m...
