I remember reading a comment a while ago about only being able to trust an LLM with sensitive info if you can guarantee that the output will only be viewed by people who already had access to that info, or who cannot control any of the inputs to the LLM.
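The rule can be sketched as a simple access check. This is just a hypothetical illustration (the function and parameter names are made up, not from any real policy framework): an LLM may handle sensitive data if every viewer of the output is already cleared for it, or if none of the viewers can influence the LLM's inputs.

```python
def llm_may_see_sensitive(viewers, has_access, controls_input):
    """Hypothetical check for the trust rule described above.

    viewers: ids of everyone who will see the LLM's output.
    has_access: ids already cleared for the sensitive info.
    controls_input: ids who can control any input to the LLM.
    """
    # Safe if all viewers already had access to the sensitive info...
    all_cleared = all(v in has_access for v in viewers)
    # ...or if no viewer can inject anything into the LLM's inputs.
    no_injection = all(v not in controls_input for v in viewers)
    return all_cleared or no_injection

# One viewer lacks access, but nobody controls the inputs: allowed.
print(llm_may_see_sensitive({"alice", "bob"}, {"alice"}, set()))         # True
# A viewer without access can also inject input: not allowed.
print(llm_may_see_sensitive({"alice", "bob"}, {"alice"}, {"bob"}))       # False
```

The point of the second case is prompt injection: someone who both controls inputs and sees outputs can steer the model into leaking data they were never supposed to have.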