They literally announced their partnership with OpenAI today, and I've seen no sign of this data being "publicly auditable" - can you share this with me?
All the stuff that works on your private data uses Apple's own models, which run either on-device or in Apple's private cloud (and they are making that private cloud auditable).
The OpenAI stuff is firewalled off into a separate "ask ChatGPT to write me this thing" kind of feature.
> I've seen no sign of this data being "publicly auditable" - can you share this with me?
They announced it in the same keynote as the OpenAI partnership (and stated that sharing your data with OpenAI would be opt-in, not opt-out).
WTF are you talking about? The guy literally said that to connect to Apple Intelligence servers, the client side verifies a publicly registered audit trail for the server. He then followed up by saying that ChatGPT will not retain session information about who the data came from.
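For the skeptics, here's roughly what that client-side check amounts to as I understood the keynote. This is a hypothetical Python sketch, not Apple's actual Private Cloud Compute API; the log contents and function names are made up for illustration.

```python
import hashlib

# Hypothetical public transparency log: hashes of the server software
# images Apple publishes for researchers to audit.
PUBLIC_AUDIT_LOG = {
    hashlib.sha256(b"pcc-node-image-v1").hexdigest(),
    hashlib.sha256(b"pcc-node-image-v2").hexdigest(),
}

def server_is_auditable(attested_measurement: str) -> bool:
    """Refuse to talk to servers whose software isn't in the public log."""
    return attested_measurement in PUBLIC_AUDIT_LOG

# The server presents a measurement of the software it's running
# (signature verification over the attestation is elided here);
# the client checks it against the published log before sending data.
measurement = hashlib.sha256(b"pcc-node-image-v2").hexdigest()
assert server_is_auditable(measurement)
```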
Apple's big thing is privacy; I doubt they'd randomly lie about that.
This still runs on external hardware, which can be spoofed at the demand of authorities. It may be private in the sense that Apple itself won't monetize it, but your data certainly won't be safe.
I can't speak to Apple's or $your_government's trustworthiness, but mTLS wouldn't protect against an attack where Apple collaborates with a data requester: it authenticates both endpoints, but the server still decrypts everything (sketch below).
There are people and orgs out there who (justifiably or not) are paranoid enough that they factor this into their threat model.
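To make that concrete, here's a rough sketch of an mTLS client setup in Python (the file paths are placeholders, not real certs). Both sides authenticate each other, but nothing about the handshake constrains what the server does with the plaintext afterwards.

```python
import ssl

# Standard TLS: the client verifies the server's certificate chain.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.load_verify_locations("server_ca.pem")  # placeholder CA bundle

# The "m" in mTLS: the client also presents a certificate so the
# server can authenticate it in return. Placeholder paths.
ctx.load_cert_chain("client_cert.pem", "client_key.pem")

# Both identity checks can pass and the channel can be perfectly
# secure in transit; the server still terminates TLS and handles
# your data in plaintext. If it cooperates with a data requester,
# mTLS offers no protection at that point.
```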
This is a bit academic right now, but it's also worth mentioning that in the coming years, as quantum computing becomes more practical, snapshots of data encrypted with quantum-unsafe cryptography, or with symmetric keys negotiated via quantum-unsafe key exchange (like most Diffie-Hellman schemes), will become much easier to decrypt. Whether a motivated bad actor has access to the quantum infrastructure needed to do this at scale is another question, though.
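A toy illustration of that "harvest now, decrypt later" risk, in Python with the `cryptography` package. X25519 stands in for the Diffie-Hellman schemes mentioned above, and `recover_private_key` is a hypothetical stand-in for a future quantum attack (Shor's algorithm); everything else runs today.

```python
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

# --- Today: a normal handshake that a passive attacker records ---
client_priv = X25519PrivateKey.generate()
server_priv = X25519PrivateKey.generate()

recorded_client_pub = client_priv.public_key()  # visible on the wire
recorded_server_pub = server_priv.public_key()  # visible on the wire

# Both sides derive the same symmetric session key from the exchange.
session_key = client_priv.exchange(recorded_server_pub)
assert session_key == server_priv.exchange(recorded_client_pub)

# --- Years later: hypothetical quantum attack on the recorded keys ---
def recover_private_key(recorded_public_key):
    # Shor's algorithm on a large quantum computer could recover the
    # private scalar from the public point; classically this is the
    # discrete log problem and considered infeasible.
    raise NotImplementedError("requires a cryptographically relevant QC")

# If recover_private_key ever works, the recorded traffic alone yields
# the session key, and every stored ciphertext decrypts:
#   broken_priv = recover_private_key(recorded_client_pub)
#   assert broken_priv.exchange(recorded_server_pub) == session_key
```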