They deserve to be raked over the coals for this, there's no world where their current design is a "good" or "right" one.
Child abuse is a serious problem, but building a surveillance panopticon is not an acceptable solution to it. Better investment in education, health care, and reporting hotlines is the way forward to stop this issue at its source.
I think one of the problems here is that reality and perception don't align:
- Apple has over a billion devices out there.
- Child abuse is a rare problem, but with over a billion devices, there will be enough of it for a lot of newsworthy stories.
- Child pornography takes just one abused child for an arbitrary number of viewers. Arguably, by the time you're limiting the number of viewers, most of the harm has been done.
On the whole, I'm not quite sure how the Apple plan will protect actual children from rape (except to somewhat reduce the secondary harm of distribution). I can clearly see how it will protect Apple from bad press, though -- people won't use iPhones to record that.
On the other hand, an investment in education, health care, reporting, and enforcement could significantly reduce the amount of child abuse, but with 7 billion people in the world, no expense would bring it to zero. So long as it's not zero, the potential for bad press is there. Indeed, usually if something happens a few times per year, it receives more bad press than if it happens a few times per day.
Apple has every incentive to (1) be seen as doing something and (2) do things which protect its brand value. Apple has no incentive to invest in education, health care, reporting, and enforcement. Those seem like good things to do, but if anything, if a scandal comes up, those sorts of things are used to say "See, Apple knew, and was trying to buy an out."
As a footnote, if we value all children equally, a lot of this is super-cheap. This is a good movie:
And the problem it portrays could probably be solved for the cost of a few Apple engineers' salaries, plus a focused, targeted effort to identify child prostitutes, help their families out of the economic conditions that force those kids into prostitution, and get those kids into schools instead.
I'm guessing the $100k raised from this film will do more to protect kids than this whole Apple initiative will do.
- Ford has a large number of cars out there.
- Drunk driving is a rare problem, but with a large number of cars, there will be enough cases for there to be newsworthy stories.
- Drunk driving just takes one driver to create an arbitrary number of deaths.
We would not accept having breathalyzers in every car.
Or to bring it closer to the child abuse problem: would we accept cameras that take pictures of the occupants of the car to make sure that the minors in the car are not being trafficked?
There's a stipulation just above that portion of the bill where the Secretary of Transportation can determine that it is not possible to 'passively' detect whether a driver is impaired and decline to issue that rule, so long as they issue a report to Congress explaining why.
And I trust Buttigieg to give the issue a solid looking over, but aren't breathalyzers pretty well established as a positive indicator of driver impairment?
Though requiring the driver to blow into a straw doesn't seem particularly "passive"--whatever that means.
That is less invasive than making you blow. But there will always be edge cases.
Imagine a medical condition that makes it look like you are impaired. Now, you have to go to the dealer with a doctor's note to get this system disabled. Or when you want to rent a car.
Or consider a case where driving impaired would be better than the alternative. You and a friend are camping in the woods out of cell range; you both have some beers, then one of you trips and gets a deep cut on the leg. Now you have to wait a couple of hours before he can drive you to where you can get a cell signal, and hope you don't bleed out.
> We would not accept having breathalyzers in every car.
Funny you would bring that up. I think the new infrastructure bill requires that for cars built after 2029 (or some other "future, but not that far" date).
That's not the same thing at all. This would be like your car reporting you to authorities if you get into it drunk, turn the key, and step on the gas. It does nothing unless you've committed a crime.
All of the photos that you upload are scanned and hashed. All of the hashes are either sent out for comparison to the database or checked locally. (I do not know which.) That means that for every picture you want to upload to iCloud, you must prove it is not abusive material.
So the equivalent is that for every single trip you take, you must prove you are not under the influence.
That's not true. The photos you upload are hashed, yes, but they're not scanned. Only the hashes are compared and that's done locally. Apple never gets any of your content so your equivalency is completely false. Signatures only get sent if the hashes match known CSAM. Therefore, it's like your car reporting you if and only if you've broken the law.
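To make the mechanics concrete, here's a toy sketch of the "report only on a match" behavior, in Python. It is not Apple's implementation: the real system uses NeuralHash (a perceptual hash tolerant of resizing and recompression) and a private set intersection protocol rather than a plain hash set, and the voucher_for_upload function and its payload are hypothetical names for illustration.

    import hashlib

    # Toy stand-in for the on-device matching step. SHA-256 and a plain
    # set are simplifications; the real design uses a perceptual hash
    # and a blinded database the device cannot read.
    known_hashes = {hashlib.sha256(b"image flagged in database").hexdigest()}

    def voucher_for_upload(photo_bytes: bytes):
        """Return a 'safety voucher' only when the photo matches."""
        digest = hashlib.sha256(photo_bytes).hexdigest()
        if digest in known_hashes:
            return {"matched_hash": digest}  # hypothetical payload
        return None  # non-matching photos produce nothing reportable

    print(voucher_for_upload(b"image flagged in database"))  # voucher dict
    print(voucher_for_upload(b"holiday photo"))              # None

Under this model a non-matching photo yields nothing to send; the disagreement upthread is over whether having every photo pass through the check at all already amounts to "proving innocence".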
The equivalent is that for every single trip you take on public roads, you must prove you are following the public road rules - like you do by having to first obtain a driving license, a registered car, car insurance, an MOT (in the UK), road tax (UK), and medical approval if you have certain health conditions.
If you're going to pay to use a hired car, expect to have to show the car hire company sufficient proof that you won't expose them to unnecessary risks. If you're going to pay to use a hired server to store your photos, why shouldn't you demonstrate to the owner that you aren't going to misuse their services or break their terms of service or break the law?
If you want to drive your car on your land, it doesn't need any of that.
So we should mandate a scanner in the car that makes you input your planned route, takes your driver's license, and has a camera to do facial recognition. It will then connect to a DMV database to verify the information is correct, then to the insurance database to verify coverage, then to the tax database to make sure that has been paid, and then to a medical database to make sure you don't have any disqualifying conditions and haven't been prescribed any medicine that says not to operate heavy machinery.
If you are going to hire someone else's car[1], you will need to provide them with your driver's license and the person at the desk will do "face recognition" to check whether it's your license, and they will check with some kind of database - at least their own to see if you've been banned from their premises, maybe a DMV one or their insurance to see if you have points on your license for previous driving related convictions which will affect their decision to lend you a car. Since it's their car they will deal with tax, but they will ask you if you have medical conditions which will affect your driving (or make you read the terms and sign that you haven't). And they will do all this in advance of you hiring their car, and after you're done they will check over the car looking to see if you misused it, and will keep a record of use so if they get informed about a speeding ticket or parking fine in future, it goes to you to pay it.
So ... this is your hellish dystopia, your "boot stomping on a human face forever", Hertz rent-a-car?
[1] analogous to you using Apple's iCloud servers.
Trucks have tachographs, which record that drivers aren't driving too long and are taking sufficient breaks.
> "It will then connect to a DMV database that verifies the information is correct and then to the insurance database to verify coverage."
Wouldn't it be nice to know that if you're in an accident, the other party can't simply say "I'm not insured lol" and drive away and leave you and your insurance to pick up all the costs?
> On the whole, I'm not quite sure how the Apple plan will protect actual children from rape (except to somewhat reduce the secondary harm of distribution).
You bring up the distinction between "possession offenses" (i.e., a person who has CSAM content) and "hands-on offenses" (i.e., a person who abuses children and possibly, but not necessarily, produces CSAM). Detecting possession offenses (as Apple's system does) has the second-order effect of finding hands-on offenders, because hands-on offenders tend to also collect CSAM and form large libraries of it. So finding a CSAM collection is the best way to find a hands-on offender and stop their abuse. Ideally, victims would always disclose their abuse so that the traditional investigatory process could handle it -- but child sexual abuse is special in that offenders are skilled at manipulating children and families in order to avoid detection.
I think that the case of USA v. Rosenchein [0] is a good example because it shows the ins and outs of how the company->NCMEC->law enforcement system tends to work and how it leads to hands-on offenders. It's higher profile than most, perhaps because the defendant (a surgeon), seems to have plenty of resources for fighting the conviction on constitutional grounds (as opposed to actually claiming innocence). But the mechanism leading to the prosecution is by no means exceptional.
No. This is not true, and I think I provided a good reference to that effect (it's really quite a good documentary too). A US surgeon engaging in child abuse is a statistical anomaly in the world of child sexual abuse. The best way to find child sexual abuse is to hop onto an airplane, and go to a region of the developing world where child sexual abuse is rampant.
It's not at all hard to find such places. Many children are abused at scale, globally. I think few of those kids are getting filmed or turned into CSAM.
I'm also not at all sold on your claim that hands-on offenders tend to collect CSAM material, but we have no way to know.
I am sold on the idea that the best way to reduce actual abuse involves some combination of measures such as:
1) Fighting poverty; a huge amount of exploitation is for simple economic reasons; people need to eat
2) Providing social supports, where kids know what's not okay, and have trusted individuals they can report it to
3) Effective enforcement everywhere (not just rich countries)
4) Places for such kids to escape to, which are safe and decent. Kids won't report if the alternative is worse
... and so on. In other words, building out a basic social safety net for everyone.
We already live in a police state. The federal, state, and local infrastructure and resources are mind-bogglingly massive. They have laws granting them near-carte-blanche powers.
We are citizens of our country and we deserve a dignified existence. We are supposed to have rights, and they're being worn away, formally and informally, by our governments and megacorps acting like NGOs.
I'm sympathetic to the overwhelming horrors of drunks, drunk driving, violent actors, child abuse, child porn, economic crimes, etc.
I've done my calculus: I got my vaccine and I wear my mask in the current circumstances of our pandemic. But by a similar calculus, consider what Apple has planned to subject a huge portion of our population to, by dint of their market share in mobile and messaging. I personally can't accept the forces at play in this Apple decision, and I'm continually baffled by those who think this is overblown.
Have you imagined what a near-future Mars colony will be like? You can't live on the surface, so it will be as high-tech and enclosed and cramped as a space station; an air-tight pressure vessel with no escape. It will have limited energy and resources so there will likely be rationing. It will be vulnerable to any pressure breach or loss of power, so can take no risks with mechanical failure, bad actors, disease spread, etc. so it will likely be sensored and surveilled all over. It will likely be funded in large part or entirely by private investors. Musk has estimated $500k for a ticket to go there and people have estimated $3Bn/year for 30 years to keep a base running with no economic return from that.
No government, no police, no Wild West "run them out of town" option. You think they're going to want to spend $500,000 return flight cost to send potential criminals away or just "let them be" in an environment like that?
The idea that you might be able to go there and "demand your freedom" without being a billionaire owner of the colony is ill-thought-out. Subjects will have no leverage and no options, and leaders will have billions sunk into it and demand obedience like a Navy Submarine.
Yes, I've thought about it. I was kind of hoping for a better suggestion.
However, I'd rather voluntarily subject myself to a dictatorship like that than believe all my life I have rights that are sacred, only to look up and find myself in an authoritarian panopticon.
I do harbor fantasies of some day collaborating on a new system of government, or at least laying the groundwork. It's not going to be Musk's planet forever, and the first generation of Martians will be volunteers who want the project to succeed. Which makes it more like the 13 original colonies than the Wild West.
> Child pornography takes just one abused child for an arbitrary number of viewers.
This is the thing that privacy advocates seem to ignore. Measures taken to reduce child abuse won't reduce the circulation of whatever CSAM does get created.
Some even seem to think, à la the ACLU, that viewing child abuse material is a victimless crime, and that only the creators of the CSAM should be punished.
I think it’s actually a good way to look at the problem from a different, broader perspective that isn't the standpoint of the average HN user or privacy-minded individual. Also, it interprets Apple's decision in the wider framework of their B2C business. Apple's privacy engineers don't have the luxury of being as radical as their critics when it comes to making a decision like this.
Given this state of things, have they picked the lesser of two evils to solve the thorny problem of CSAM detection? I think it’s fair to say yes, they did, while still criticizing them for it (which is what they were of course expecting anyway).
Nope. If scanning were implemented off-device, in iCloud, as everyone else does it, maybe. But this is an intrusion on privacy at a new "on-device surveillance" level, and Apple deserves the hostile reaction.
No amount of apologetics or "technical" explanation can remove this from reality now.
They are betting heavily on their "core" demographics to trust them automatically and without any form of critical thinking.
If this implementation has no effect on Apple's bottom line, things are over.
We will live in a badly implemented version of Minority Report.
If Apple wants to get the same detection ability as server-side, they'll have no choice* but to expand and lock down client-side much more than they publicized. At which point this method is not the lesser evil at all.
* Think about what happens to CSAM uploaded to iCloud before NCMEC tags it. This has to happen for each new piece of CSAM, since NCMEC can't tag what it hasn't seen yet.
Surely Apple and NCMEC want to be able to catch these perps (which they easily would have with server-side). Doing it client-side requires expansion of scanning to do much more.
> Given this state of things, have they picked the lesser of two evils to solve the thorny problem of CSAM detection? I think it’s fair to say yes, they did,
The first option is not to encrypt data at all (the current state; server-side encryption doesn't count); the second option is to use end-to-end encryption with a hidden backdoor. They found a third way: lock themselves out of most of the data, so that, for example, the FBI can't ask them to show some arbitrary images.
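For what it's worth, Apple's published design describes the lock-out as threshold secret sharing: each matching image's voucher carries a share of the account's decryption key, so the key (and the flagged content) only becomes recoverable once enough matches accumulate. A minimal Shamir-style sketch in Python, with toy field and threshold parameters rather than Apple's real ones:

    import random

    # Shamir t-of-n secret sharing over a prime field. P and T are toy
    # values for illustration, not Apple's parameters.
    P = 2**127 - 1   # prime modulus for the field
    T = 3            # shares needed to recover the secret

    def make_shares(secret: int, n: int):
        # Random degree-(T-1) polynomial with f(0) = secret
        coeffs = [secret] + [random.randrange(P) for _ in range(T - 1)]
        f = lambda x: sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
        return [(x, f(x)) for x in range(1, n + 1)]

    def recover(shares):
        # Lagrange interpolation at x = 0
        secret = 0
        for xj, yj in shares:
            num = den = 1
            for xm, _ in shares:
                if xm != xj:
                    num = num * (-xm) % P
                    den = den * (xj - xm) % P
            secret = (secret + yj * num * pow(den, -1, P)) % P
        return secret

    key = 123456789
    shares = make_shares(key, n=10)     # e.g., one share per matching image
    assert recover(shares[:T]) == key   # threshold met: key recoverable
    # Fewer than T shares reveal nothing about the key.

Below the threshold the shares are information-theoretically useless, which is what lets them claim they've locked themselves out of non-matching accounts.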
>Apple’s privacy engineers don’t have the luxury of being radical
Not doing anything anti-consumer that the law doesn't force you to do is "radical"? I know you're not an astroturfer, but I had to double check because this is textbook astroturfing tactics.
Apple simply does not have to do this; as far as I'm concerned, it's obvious they're either currying political favors or being incompetent. It's perfectly fine if they want to run it on their own unencrypted servers; they absolutely don't have to overstep into their users' devices.
But the point is, once you accept something noble and difficult like "preventing CSAM" as your primary overriding goal, then there's nothing that's too far or too extreme if it will help you with your noble goal.
Five years ago, the idea of Apple scanning photos on your phone would have been absurd.
Five years from now, what will people think about hotels installing AI-powered cameras in every room? The vendor swears they only start recording when they detect an act of abuse. It sounds absurd now, but where do you draw the line?
Several (maybe five) years ago, Apple launched an on-device neural net to categorize your photos, and it scans all of them, whether they are in the cloud or not. The difference is that we don't know where this information is stored. Still, nobody is worried about that feature, which allows more than this newly added CSAM functionality does. If someone wanted to misuse it covertly, there is no difference between now and the future, because all we have is trust. Speculation suddenly rises when familiar political reasons are mentioned.
It does not really matter whether the scanning happens on-device or in iCloud in this situation, because you always have to trust their closed-source system.
Google has scanned your images in the cloud, unencrypted, since 2009, but now that Apple makes the situation better, it is suddenly bad. All the tools have been out there already. There are really no other options for getting more privacy than this, but people refuse to see that.
Well, there is voting. Vote for people who put privacy over everything. That would make everything easy.
> Google has scanned your images since 2009 in the cloud unencrypted, but now when Apple makes situation better, it is suddenly bad.
At the moment Apple's scanning policy is about the same as it was before. They claim they're only scanning photos if iCloud photos are enabled. The change they're advertising is doing the actual scanning process locally.
The problem is twofold. The first is that Apple went from scanning only explicitly uploaded content to local content. Since they've decided to intrude on local content once "for the children", it's not out of the realm of possibility (if not likely) that they will make further intrusions in the future for prima facie noble reasons. Are third-party apps going to be restricted from saving data unless they allow access to Apple's CSAM scanner? Will it start scanning texts or e-mails tomorrow, letting any rando flood a person's phone with CSAM and get them arrested? Adding a local scanning system like this is a slippery slope.
The second problem is the opaqueness of the system. This has multiple sub-problems. While the NCMEC has a laudable goal, involving them in the CSAM scanning process involves an outsize level of trust I don't think they have earned. They have law enforcement's unfortunate disdain for personal privacy coupled with a fanatical devotion to their cause. They believe their actions are always correct and just so long as they supposedly serve their goal of "protecting children".
Due to the opaque nature of their content library it's not crazy to think repressive regimes will get self-serving content added to the source libraries for CSAM scanning. There's plenty of places where homosexuality is punishable by death and even mildly anti-government content will land you in jail. Obviously you and I can't go look at NCMEC/ICMEC CSAM libraries to check for falsely added content. So how are we supposed to trust a system run by fanatics to not have simple errors?
Which leads to the other opaqueness sub-problem. Apple's design is interesting, if not laudable, but it is closed source and full of black boxes. PhotoDNA, NeuralHash, and the like are not published algorithms anyone can verify. We don't even have a way of knowing whether some image of ours has tripped a false positive, and we have to trust that Apple's unknown "threshold" isn't 1. So not only does the public, the subject of these new intrusions, have no way of auditing the database, they have no way of auditing the code or process. A stupid bug in the scanning system could get a user reported to Apple, which we then have to trust not to forward them to law enforcement (and not to have additional bugs in its reporting system) and ruin their life.
So I am concerned with scope creep and bugs/false positives. I can live with a bug that causes video playback to stutter or a black box system in Maps that gives me the wrong hours for a restaurant. It's much harder to live with bugs that can get me arrested or even killed thanks to trigger happy police. Apple's system might be technically adept but their promises of future behavior aren't trustworthy since they've already changed their behavior with this new system.
> At the moment Apple's scanning policy is about the same as it was before. They claim they're only scanning photos if iCloud photos are enabled. The change they're advertising is doing the actual scanning process locally.
The major difference is that they no longer have the access to other images that they used to have. Images now leave the device encrypted. They used to be plaintext in the eyes of Apple.
> The problem is twofold. The first is that Apple went from scanning only explicitly uploaded content to local content. Since they've decided to intrude on local content once "for the children", it's not out of the realm of possibility (if not likely) that they will make further intrusions in the future for prima facie noble reasons. Are third-party apps going to be restricted from saving data unless they allow access to Apple's CSAM scanner? Will it start scanning texts or e-mails tomorrow, letting any rando flood a person's phone with CSAM and get them arrested? Adding a local scanning system like this is a slippery slope.
Emails have already been scanned in the cloud for a long time. The rest is only speculation, and contrary to what they have said. It might be hard to trust, but in closed systems trust is all we have. We should be worried when they actually say they'll do that, or actually start doing it.
> There's plenty of places where homosexuality is punishable by death and even mildly anti-government content will land you in jail.
It is fair not to trust third parties (NCMEC/ICMEC), but Apple is responsible for making the algorithm and testing it. Misuse must be part of their tests at this level. iCloud photos used to be plaintext, so this hasn't changed from that perspective. If there is evidence that they are scanning other images outside of iCloud as well, then we should get the pitchforks and torches.
> We don't even have a way of knowing if some image we have tripped a false positive and have to trust Apple's unknown "threshold" isn't 1. So not only does the public, the subject of these new intrusions, have no way of auditing the database but they have no way of auditing the code or process.
This isn't true, since all the math of their system is public and available here: https://www.apple.com/child-safety/pdf/Apple_PSI_System_Secu...
But the code is as closed as it has always been. You have the same level of trust here as for iMessage E2EE or even the screen lock of your phone.
Given how the system is expected to behave (it only looks for existing matches in the provided data, with certain modifications), the 1-in-1-trillion false-positive rate is certainly reachable, because they can validate it during development. They are not developing some AI to match totally new images in the wild. And there is human validation, so nothing automatically triggers the police.
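A back-of-envelope check of why a threshold can push the account-level rate that low even when individual images can false-match, with all three inputs assumed for illustration (Apple has published the 1-in-1-trillion target, but not its per-image rate, and the threshold below is a guess):

    from math import lgamma, log, exp

    # Binomial model: P(an innocent account trips >= T false matches).
    p = 1e-6      # assumed per-image false-match probability
    n = 10_000    # photos uploaded by the account
    T = 30        # assumed match threshold before human review

    def log_pmf(k):
        # log of C(n, k) * p^k * (1-p)^(n-k), computed stably
        return (lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
                + k * log(p) + (n - k) * log(1 - p))

    tail = sum(exp(log_pmf(k)) for k in range(T, n + 1))
    print(f"P(innocent account flagged) ~ {tail:.2e}")

With these toy numbers the result is astronomically small, which is the mathematical side of the claim; the objection upthread is that none of the real inputs are auditable by the public.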
Apple isn't required to stop redistribution of existing material; they're required to make a report if they have actual knowledge of users possessing or distributing apparent CSAM.
This is different from what your comment implies in two ways. First, they do not have an obligation to actively look for CSAM; they only incur an obligation if they find it. Second, the obligation applies to apparent illegal content rather than known illegal content. What qualifies as apparent could end up in court.
> Apple isn't required to stop redistribution of existing material; they're required to make a report if they have actual knowledge of users possessing or distributing apparent CSAM.
This isn't that simple. If NCMEC comes with identifying properties of CSAM (e.g., hashes) and asks a provider to remove specifically those from its cloud, it is hard to remove them without looking for them. That is different from an obligation to actively look for CSAM in general.
Can you cite a statute that requires a provider to look for hashes when NCMEC asks them to?
If NCMEC told a provider that a specific URL (or similarly unique identifier) contains CSAM, the provider would be obligated to destroy the associated file or be guilty of possession/distribution because at that point they know what they have. That's different from NCMEC providing hashes that could identify files the provider may or may not be storing.