Hacker Newsnew | past | comments | ask | show | jobs | submitlogin

In my testing with GPT-4 (via API), it handles this fine if I split the prompt into multiple user inputs and provide clear instruction (provide a summary and nothing else). But ya, there are an infinite way to attack this.


Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: