Right, but the Apple paper I read [1] said that if you did not have iCloud Photos (iow, "put it on the Internet") turned on, CSAM scanning would not occur.
So, how is it different, again?
1. I can't link the paper because, apparently, Apple took it offline. But it was widely reported on.
Generating an encrypted voucher based on a match against a known CSAM image is no more spying than the device cataloguing images by description.
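For concreteness, the matching step being described here is a form of perceptual hashing: the device hashes an image and checks it against a database of hashes of known material. This is a toy sketch of that general technique, not Apple's actual NeuralHash (which is neural-network based) or its voucher cryptography; all names and values below are illustrative.

```python
# Toy perceptual-hash matcher, illustrating the general technique
# (NOT Apple's actual NeuralHash or safety-voucher protocol).

def average_hash(pixels):
    """Toy perceptual hash: 1 bit per pixel, thresholded at the mean.
    `pixels` is a flat list of grayscale values."""
    mean = sum(pixels) / len(pixels)
    return [1 if p > mean else 0 for p in pixels]

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return sum(x != y for x, y in zip(a, b))

def matches(candidate, known_hashes, threshold=2):
    """Flag when a candidate hash is within `threshold` bits of any
    entry in the known-image database -- approximate, not exact, match."""
    return any(hamming(candidate, h) <= threshold for h in known_hashes)

# Hash of a hypothetical "known" image:
known = [average_hash([10, 200, 30, 220, 15, 210, 25, 205,
                       12, 198, 28, 215, 11, 202, 31, 208])]

# A slightly altered copy (re-encoded, lightly edited) still matches,
# because the per-pixel values shift but the bit pattern survives:
altered = average_hash([12, 195, 33, 218, 14, 207, 22, 201,
                        10, 200, 30, 212, 13, 199, 29, 210])
print(matches(altered, known))  # True
```

Note that the database itself is just a list of opaque hashes: the matcher has no notion of *what* the known images depict, which is exactly why the contents of that list are a policy question, not a technical one.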
For a technical forum, the unwillingness of individuals to read up on the system is perplexing. So far most of the arguments I've read amount to constructing a strawman and then beating it to death.
Is there anything that prevents non-CSAM images from being added to this catalogue? As I understand it, the only thing stopping that is a promise from Apple - one that can be steamrolled by a government request.
>For a technical forum the lack of willingness of individuals to read into the system is perplexing.
Admittedly, I may have missed it... But can you point out to me where this system cannot be expanded to non-CSAM material?
As a totally wild example, is this technology restricted, in some technical manner, from scanning for images which display a certain political leader as Winnie the Pooh?
+ Approximate matching. They might want to have images almost similar also to be flagged.
+ Scope creep. Particular images of wanted people, areas where they might be living, shots of joints, etc.
+ Mistakes. Accidentally flagging honest citizens, and the bureaucracy that will follow. "Innocent until proven guilty" doesn't square with preemptively scanning someone's personal devices.
Signed ~(Mostly)Happy google-less LineageOS user