
I think you're generalizing broadly from your own experience to the ability of the researchers publishing these millions of papers, of which, as of 2024, at least ~13% were LLM-assisted.

Seems important to know. LLMs lie, mislead, change meaning, and completely ignore my prompt regularly. If I'm just trying to get my work out the door and I'm in the editing phase, I don't trust myself to catch errors introduced by an LLM in my own work.

There is a lack of awareness that what matters in text is the meaning it conveys, not just grammatical correctness. Involving people, not word machines, is the right way to improve content for publication.



