LLMs aren't logical machines, so any non-trivial bug fix is likely to just introduce more bugs.

It's a bit of a misunderstanding of how LLMs are supposed to be used.

One caveat: if you're very untalented, it might still be able to handle very common patterns successfully.
