
I remember reading a comment a while ago saying you can only trust an LLM with sensitive info if you can guarantee that the output will only be viewed by people who either already had access to that sensitive info, or cannot control any of the inputs to the LLM.
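Expressed as a guard function, the rule is a per-viewer disjunction. A minimal sketch (all names here are hypothetical, not from any real library):

    # Safe only if each viewer of the LLM's output either already has
    # access to the sensitive data, or cannot control any LLM input.
    def safe_to_show(viewers, has_access, controls_input):
        return all(has_access(v) or not controls_input(v) for v in viewers)

    # Example: "alice" already has access; "bob" doesn't, but also
    # can't steer any of the inputs, so showing both is acceptable.
    ok = safe_to_show(
        viewers={"alice", "bob"},
        has_access=lambda v: v == "alice",
        controls_input=lambda v: v == "alice",
    )
    assert ok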


Uhm... duh?

> or cannot control any of the inputs to the llm

Seeing as LLMs are non-deterministic, I think even this is not enough of a restriction: even with fully trusted inputs, the model can still surface the sensitive info in ways you didn't anticipate.



