Hacker Newsnew | past | comments | ask | show | jobs | submitlogin

With part of the problem being that it's literally impossible to sanitize LLM input, not just difficult. So if you have these capabilities at all, you can expect to always be vulnerable.


Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: