
Stratechery covered this … they pointed out the really f*ed part: Google is refusing to reinstate the account.

Protecting children is important. AI is imperfect.

But there is no reason to keep the account suspended once it’s clear there was no wrongdoing.

This man is innocent of any wrongdoing. Google suspended him, removed access to all his online account data, and refuses to reinstate it.

We balance our liberties with responsibilities all the time. We allow the state power to protect children, and corporations have the right to assist them.

There are careful balancing acts being done.

But there is no balancing act here. There is no justification for Google's action.



IIRC there was a follow-up which was arguably worse.

Google implied that the account was suspended because of real CSAM concerns unrelated to the photos sent to the doctor/police officer (other photos in their account? they wouldn't go into details about their decision) and that the officer closing the case doesn't mean their judgement here is wrong.

If that's the case I can understand why they're obstinate about their decision (which otherwise would seem like a dumb mistake they should just reverse), but the problem is none of this happens in a place where users have any ability to get reinstated or have any sort of control over their digital life - there's no real path any individual has out of this even after going to the press. There's no due process, no way to defend yourself, no way to get them to show how they made their decision. As I understand it you have no rights beyond the ToS.

The user also losing all related account access (two-factor, email, etc.) is particularly bad. This is also categorically different from Apple's approach, which compared image hashes against the NCMEC database of known CSAM and would not have made this mistake (here Google is using computer vision to discover and flag novel images).
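Roughly, the difference looks like this (a minimal sketch, not either company's actual pipeline: production systems use perceptual hashes such as PhotoDNA or Apple's NeuralHash so near-duplicates still match, whereas this toy uses an exact cryptographic hash, and "model" stands in for a hypothetical trained classifier):

    import hashlib

    # Placeholder values; the real list of hashes of known, human-verified
    # material comes from NCMEC (and is perceptual, not cryptographic).
    KNOWN_HASHES = {"3a7bd3e2360a3d...", "9f86d081884c7d..."}

    def hash_based_match(image_bytes: bytes) -> bool:
        # Apple-style matching: can only flag copies of previously
        # identified images; a never-before-seen photo cannot match.
        return hashlib.sha256(image_bytes).hexdigest() in KNOWN_HASHES

    def classifier_based_match(image_bytes: bytes, model, threshold: float = 0.9) -> bool:
        # Google-style detection: a computer-vision model scores novel
        # images, so a legitimate medical photo can cross the threshold,
        # because the context isn't in the pixels.
        return model.predict(image_bytes) >= threshold

The first approach by construction cannot flag a photo nobody has seen before; the second is designed to do exactly that, which is how a one-off medical photo gets caught.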


> There's no due process, no way to defend yourself, no way to get them to show how they made their decision. As I understand it you have no rights beyond the ToS.

This is where legislation is required. Mass-scale social media/cloud providers/etc are effectively public utilities, and you should be able to challenge their decisions in court - the current situation is as bad as if the electricity company could disconnect you for non-specific reasons and you had no reasonable prospects of a successful legal challenge to their decision.

(Such regulation should be limited to providers above a certain size, say a cutoff like 10 million MAU or 500 million in annual revenue, so small players aren’t burdened by it.)


That's spin. The real reason is discussed near the end of the article.

They don't want to get into the business of deciding what is and isn't sexual imagery. Instead, it's easier to just ban people and forget about it.

It's the same type of behaviour and attitude that led to Damore being fired. No room for ambiguity or nuance at Google. Everything can be decided by an algorithm.


If the reason stated at the end of the article is the reason I can also understand their decision.

If my SO was surreptitiously taking nude pictures (videos?) of me and our kid while we were sleeping (and uploading them to the cloud!), I’d be pissed (and this is the most benign interpretation).

That said, there should still be due process.

Ultimately, I think it’s an issue with the local max of computing we’re trapped in. We need tools that can free us from dependence on a handful of centralized companies that have this kind of discretionary power over our lives.


I dunno, cloud or no cloud there is an expectation of privacy. What they've done amounts to a warrantless fishing expedition and it should concern everyone.

The problem is, the alternative is this stuff runs rampant on their service, not something I'd want either.

So I understand their position, but their approach here is lacking.


Zero transparency and zero due process is a massively underrated feature of big tech censorship/policing.

It's one thing to debate the benefits that society gets for removing/flagging content, but the costs are immediately 10x higher when the false positives have zero recourse.

Even 'legitimate' cases often can't speak to a human to find out why or how they were flagged (unless they have influence). The public can't find out what motives/reasoning or who was behind certain content being removed. So it feeds into conspiracies and builds resentment.


Morally speaking, once a company becomes big enough, it should owe a minimal standard of customer service to society.

I'd hope to see this covered under anti-trust laws (not a lawyer though, don't know the laws or their applicability). There's only one Google, and Google has quite the monopoly on being Google. They're big enough to be either regulated or broken up.


I wonder what images they used to train an AI to recognize a child penis.


There is no reason for Google not to lie about it.


I faced a somewhat similar predicament earlier this year.

I still have no idea how, but my 2FA enabled Facebook account (with a unique and secure password) was compromised while I was sleeping. Shortly after, the attacker started using my Business Manager account to run ads for fake products on their scam stores.

Here's the catch: my personal Facebook account was permanently banned right away, but my Business Manager account wasn't.

How? According to other folks who had the same thing done to them around that time, the attackers would upload CSAM to your timeline so that you get immediately banned/locked out.

Well, that meant I could no longer retrieve/change my Business Manager account, which gave the attacker free rein to run ads for about a month. To some degree this means that Facebook's CSAM system gives attackers a way to compromise Business Manager accounts more efficiently.

I submitted a ban appeal, but didn't hear back. I read online that if you have an Oculus, reaching out via their support is the only real option, so I did just that.

I wrote down a detailed account of the timeline of events, along with screenshots etc., and sent it to an Oculus support agent. In fact, they thanked me during the interaction for providing 'the most detailed' report they'd seen.

The evidence was pretty clear: at 5am or so, someone had logged in to my account via a foreign IP, changed my email to a Chinese address, and added a hardware 2FA key. The ads they were running for the scam stores were often in Chinese, too. Not exactly a difficult case to crack.

They assured me I'd hear back within 7 days, but a month or so later I received an automated email from Facebook stating that the time for my appeal had expired, so the account would stay permanently banned.

That was mildly infuriating, given I never heard back from anyone.

What did losing my Facebook account mean to me?

As much as I'd been considering moving off social media, it briefly ruined my life.

* I'd had my account since I was 13 years old in 2008. I had a few thousand connections on there, many fleeting and superficial, but at least a few hundred with folks around the world that I care about and have no way of reaching now.

* 90% of communication here in NZ happens via Facebook Messenger, so I was immediately cut off from my community and friends. What's worse, many have since told me that they were worried I'd blocked them.

* My income at the time came from selling trading cards in FB groups while I was closing an investment round. I lost the ability to do so, and had to move out of my apartment to live outside the city with my in-laws.

* My father passed away a few years ago, and I had countless photos of him on my account, as well as our message history. This honestly hurt more than anything else.

All in all, this experience has left a deep scar. I guess I needed to learn a lesson about not relying on one platform so heavily, and to some extent about backing things up (such as the photos), but I really wish Facebook could have just done the reasonable thing and let me back in.

Finally, I have no idea if I was reported to the police/LE in any capacity regarding whatever was posted on my account to have me banned. Am I on some kind of list now?

A boring, technocratic dystopia.

edit: on the off chance anyone from Meta reads this and thinks they can help, I would be over the moon to get even a chance of having my account restored. I was told to speak to an Australian law firm who charge $3,500 to hound Facebook to get accounts restored in situations like mine, but unfortunately that's just not within my means.


> Protecting children is important.

Google protecting children is not important.

> This man is innocent of doing anything wrong

He is not entirely without fault: he negligently enabled surveillance of his personal files, including allowing Google to make copies of images of his son's genitalia / his son's private personal medical information. He's not exactly innocent - but of course, his misdeed is not vis-a-vis the state or Google, but vis-a-vis his son, his wife and himself.

> We allow the state power to protect children

You don't allow the state anything. The state allows itself, and we pretend we control it because we get to make some choices via elections every once in a while.


> But there is no reason to keep the account suspended once it’s clear there was no wrongdoing.

For you, Google is your everything. For Google, you're a tiny ant. If a tiny ant steps out of line, they kill it. They don't feel bad about the ant; they don't feel anything. It's a free service, and it's not worth the cost to do literally anything.


This is what software as law looks like. The software actually found one or more pictures of a nude child and banned the account - the software worked exactly as intended. As humans we can see the issue here; the software cannot see or understand the issue. Google is not about to start carving exceptions out of its government-mandated or -coerced scanning feature; it can only follow the rule that any nudity involving children is bad.


Except that the software did not work as intended. It’s intended to identify child sexual abuse material, and this was not that.


This is one group that Google works with: https://www.missingkids.org/theissues/csam

Their definition and yours are not exactly the same. If you read what they have to say, you'll see that the issues are more broad than whatever you personally call abuse, and those issues include exploitation and distribution of images that victims find harmful. Are you a victim of abuse or exploitation? Who are you to determine what a victim finds abusive or exploitative?

I'm not saying I agree with this situation, just that I can see how it happened and why Google won't do anything in this case.


I didn't give a definition for what constitutes abuse.

My point is that the software is not working as intended. The intent is to stop child abuse, but the software they built does not identify child abuse; it identifies what it believes to be naked (semi-naked?) minors. Those are two different things.

A photo you take of your kid for the doctor to see in order to treat them is not CSAM. Neither is a photo of your naked baby.

But those same photos, if stolen from you or taken by others for different (less wholesome) reasons, would be. But, and most importantly, that context is not to be found in the photo itself.


This is the part that should receive a Congressional inquiry. I would love to see Congress drag Sundar Pichai (and no one else) before them and scream “What the hell is wrong with you?” in his face.

We have courts for a reason. They found no wrongdoing. Google is effectively deciding they know better than an elected court what happened.

This whole situation is absurd and evil.


Remember: Google can ban any account at any time, for any reason, including because they just do not like you.

No Google software, devices, or data on their systems belong to you. Sue them and they will remind you of this fact in detail.

If you want to own your digital life you will have to commit to being a pariah in your social circle that is ridiculed constantly for using open source tools.


> If you want to own your digital life you will have to commit to being a pariah in your social circle that is ridiculed constantly for using open source tools.

I think your friends are just a little too obsessed with social media. The only response I ever hear to "I don't have a facebook/twitter/whatever" is "yeah, that makes sense, I think about deleting mine all the time."


> Sundar Pichai (and no one else)

I'd go a little broader than that. I'd also want to see them drag in whoever made the decision to not reinstate the account, plus everyone who knew what happened and could have overridden the decision but chose not to do so.


> everyone who knew what happened and could have overridden the decision

That will never happen, unless documents & emails get discovered and all the cc'ed people get subpoenaed and deposed. And even that won't catch everybody. Congressional committees don't have time for that.

Big corporations are adept at diffusing real responsibility among a faceless mass of people.


I'm not normally one to indulge conspiracy theories.

... But given the situation and high profile nature of this incident?

If that account's still locked, it's locked under sealed FBI warrant.

Google has had situations where they work hand-in-glove with law enforcement to resolve something, and when they do, they're radio-silent on the situation. Sometimes for years, given the scope.


>Protecting children is important. AI is imperfect.

is protecting children more important than the mistakes and first-world inconveniences of a software engineer with no backups? yes.

the AI did its job flawlessly - detected a naked toddler. as did the human verifier. bravo. should it have been detected and flagged? yes.

should we be surprised at the chosen free cloud provider's attitude or dismissal? no.


CSAM stands for Child Sexual Abuse Material. It was not CSAM, quite the opposite actually.


thanks. so CSAs have no interest in pictures of naked toddlers...


Maybe naked toddlers should not be flagged? You can see naked toddlers bathing in lakes and such. They are naked because the overwhelming majority of people don't see toddlers as sexual, and the toddlers themselves are not ashamed of their bodies yet.

Naked pre-teen? Sure flag absolutely. Toddler? No.


They're not looking for "normal people", hence why a naked toddler is flagged.



