
Yes totally. This really is no different than any other code injection vulnerability. Only allow symbols that you expect, and don't concatenate user input and logic unless the bounds between the two are guaranteed to be explicit.
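To make "explicit bounds" concrete, here's a minimal sketch (my own illustration, using sqlite3) of the classic version of this rule: the same lookup built by concatenation versus with a parameterized query, where the input travels out-of-band as data and is never parsed as logic.

```python
import sqlite3

# Toy in-memory database for the demonstration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

user_input = "alice' OR '1'='1"

# Unsafe: user input is concatenated into the logic; the bound
# between data and query is not explicit, so the input is parsed as SQL.
unsafe = "SELECT * FROM users WHERE name = '" + user_input + "'"
print(len(conn.execute(unsafe).fetchall()))  # injection succeeds: 1 row

# Safe: the placeholder keeps the boundary explicit; the driver passes
# the input as a value, so the quote trick matches nothing.
safe = "SELECT * FROM users WHERE name = ?"
print(len(conn.execute(safe, (user_input,)).fetchall()))  # 0 rows
```

The point of the analogy: SQL got a genuine out-of-band channel for data. Prompts, so far, have not.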


> don't concatenate user input and logic unless the bounds between the two are guaranteed to be explicit.

Well, that's kind of the whole problem - LLM-based agents inherently work by literally concatenating logic with user input, and the bounds aren't guaranteed to be explicit. There is ongoing discussion about how to implement such bounds, but we don't have a good solution yet.


> don't concatenate user input and logic unless the bounds between the two are guaranteed to be explicit

Which is achieved how for an LLM?


this is not correct



