
People break out of prompts all the time, though. Are the devs working on these systems not aware of that?

It's common wisdom that sanitizing SQL query parameters at the application level, instead of letting the database handle them through parameterized queries, is unwise because you may get it wrong. What makes people think an LLM, which is immensely more complex and even non-deterministic in some ways, is going to do a perfect job of sanitizing input? To use the cliché response to every LLM criticism: "it's sanitizing input just like a human would."
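A minimal sketch of that distinction, using Python's sqlite3 with a made-up users table purely for illustration:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
    conn.execute("INSERT INTO users VALUES (1, 'alice')")

    user_input = "alice' OR '1'='1"  # classic injection attempt

    # Fragile: application-level "sanitizing" via string building.
    # Any gap in the escaping logic lets the payload reach the database.
    unsafe_sql = f"SELECT * FROM users WHERE name = '{user_input}'"
    print(conn.execute(unsafe_sql).fetchall())  # returns every row

    # Robust: a parameterized query hands the value to the driver,
    # which treats it strictly as data, never as SQL.
    safe_sql = "SELECT * FROM users WHERE name = ?"
    print(conn.execute(safe_sql, (user_input,)).fetchall())  # returns nothing

The point of the analogy: with SQL there is a mechanism that keeps data and instructions separate by construction; with a prompt there isn't, so "sanitizing" user text before handing it to the model is the fragile path all over again.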



I think it's reasonably safe to assume they're not, or they wouldn't design a system this way.



