That's too funny. No, it really is me, but now that I re-read it I totally see how the kind of praise I reserve for maybe one thing on HN a year is the kind of praise that ChatGPT outputs all the time...
And hell, I even use three em-dashes! But maybe the fact that I typed them out using hyphens is the telltale sign this is actually human...
Everyone in my friend group is between 20 and 26, and for the guys it's mostly been dealing with their self-image. Their careers aren't what they thought they would be, and most struggle trying to reach some ideal body standard.
For the girls, it’s even worse. Most of them have been diagnosed with an eating disorder, probably due to the constant influence of people they see online.
So in a nutshell, it's the aspiration to be the same as “everyone else”, which eventually disappoints when they realise that what they want is unattainable without drugs, surgery, luck or other extraordinary measures.
What's the deal with Google removing features, without warning, from products that part of their userbase still uses? They own the largest analytics platform on the planet; why go against the experience most users are used to?
Apple has the decryption keys to iCloud... It is one of those things that many people assume the other way, given Apple's marketing and PR stance.
I for one am aware of this with iCloud and am still using iCloud for convenience (and by choice). If I were in need of better privacy, I could always use the iPhone without iCloud, and it would work. After this on-phone watchdog implementation, that will no longer be the case. In the worst case, I will assume that Apple is constantly watching all my unencrypted content on behalf of state actors and intelligence agencies.
We have seen Apple give in to Chinese government spying because it is the law there. The US government could easily ask for this and also apply a gag order preventing Apple from telling the user. The best situation is to lack such a tool altogether.
A vulnerability by itself is not that dangerous, but in combination with a sophisticated attack, or with another vulnerability, it can be disastrous. State actors have the resources to chain a number of unknown bugs with this collision to have Apple's systems flag persons of interest.
This, combined with human error during the manual review process, might result in someone getting reported. Seeing as Twitter (and other social media sites) jump on the bandwagon whenever someone is accused of being a pedophile, this could destroy someone's life.
The entire story might seem a bit too far-fetched, but based on past events, you never know how bad something as 'simple' as a hash collision can be.
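For a sense of how such a collision might be crafted: researchers published adversarial collisions against NeuralHash within days of it being extracted. A minimal sketch of the general technique, where `model` is assumed to be a differentiable stand-in for the hashing network and `target_bits` a chosen hash (nothing here is Apple's actual code):

    # Sketch: nudge an innocuous image until a differentiable stand-in
    # for the hashing network produces a chosen target hash.
    # `model` and `target_bits` (+1/-1 per hash bit) are assumptions.
    import torch

    def collide(image, model, target_bits, steps=500, lr=0.01, eps=0.05):
        # Optimise a small additive perturbation, not the image itself,
        # so the result stays visually close to the original.
        delta = torch.zeros_like(image, requires_grad=True)
        opt = torch.optim.Adam([delta], lr=lr)
        for _ in range(steps):
            out = model(image + delta)  # real-valued pre-binarisation output
            # Hinge loss: push each output past a small margin on the
            # same side of zero as the corresponding target bit.
            loss = torch.relu(0.1 - out * target_bits).sum()
            opt.zero_grad()
            loss.backward()
            opt.step()
            # Keep the perturbation small so the image still looks unremarkable.
            delta.data.clamp_(-eps, eps)
        return (image + delta).detach()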
No one is going to send gray blobs; they will find legal porn (pussy close-ups, tongue pics, whatever) and then perturb it to trigger a CSAM hit.
The low-res derivative will match, perhaps even closely, because pussy close-ups all look similar to an Apple employee when they're grayscale 64 by 64 pixels (remember: it's illegal for Apple to transmit CSAM, so the derivative must be degraded to the point where it's arguably not visual).
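To see why, here's a minimal sketch of that kind of degradation. Apple hasn't published the exact parameters of the visual derivative; 64x64 grayscale is this comment's assumption:

    # Reduce an image to a 64x64 grayscale thumbnail, the rough shape of
    # the "visual derivative" described above. Exact parameters of
    # Apple's pipeline are not public; these values are assumptions.
    from PIL import Image

    def visual_derivative(path, size=64):
        img = Image.open(path).convert("L")  # "L" = 8-bit grayscale
        return img.resize((size, size), Image.LANCZOS)

    visual_derivative("innocuous.jpg").save("derivative.png")

Run that on two different close-ups and the outputs are hard to tell apart, which is the reviewer's problem in a nutshell.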
The victim will get raided, be considered a paedophile by their workplace, the media, and their family, and perhaps even go to jail.
The attacker in this case could be a Pegasus operator unhappy with a journalist.
Ok, so you posit an attacker could find/generate 30+ pictures that are all of the following (sketched in code after the list):
1. accepted by the innocent user,
2. flagged as known CSAM by NeuralHash,
2b. also flagged as known CSAM by the second algorithm Apple runs server-side over flagged images,
3. apparently CSAM when a reviewer looks at the "visual derivative".
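A sketch of that gauntlet, modelling conditions 2 through 3 (condition 1 is social engineering). Every name here is hypothetical; Apple has only described these stages at a high level, though the threshold of roughly 30 matches is Apple's stated number:

    # Hypothetical sketch of the reporting gauntlet above; all names
    # are stand-ins, not Apple's real code.
    from dataclasses import dataclass

    @dataclass
    class Photo:
        neuralhash: bytes    # on-device perceptual hash (condition 2)
        server_hash: bytes   # independent server-side hash (condition 2b)
        derivative: bytes    # low-res visual derivative (condition 3)

    THRESHOLD = 30  # Apple's stated reporting threshold

    def gets_reported(photos, device_db, server_db, reviewer_flags):
        # An image counts only if it collides with *both* databases,
        # which use different (undisclosed) perceptual hash functions.
        flagged = [p for p in photos
                   if p.neuralhash in device_db
                   and p.server_hash in server_db]
        if len(flagged) < THRESHOLD:
            return False
        # Human review of the derivatives is the final gate.
        return any(reviewer_flags(p.derivative) for p in flagged)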
That strikes me as a rather remote scenario, but one worth investigating. Having said that, if it's a three-letter adversary using Pegasus who is unhappy with a journalist, couldn't they just put actual CSAM onto the journalist's phone? And couldn't they have done that for many years?
Incorrect. A Chinese state actor can't just go around imprisoning journalists they don't like in America, but they can now do it by planting child pornography via remote malware (Pegasus) and watching their enemies get arrested by the US Feds.
Disagree. It isn't just jurisdiction. It is resource access. If the Chinese gov't were coming after little old me right now, I'd be properly worried, even though I'm safely within the boundaries of the US.
It's not true at all. You're assuming state actors are in the same jurisdiction. This isn't always the case - think of an oppressive authoritarian regime wanting to get an American journalist arrested for child pornography.
It was always possible before, but client-side CSAM detection and alerting has weaponised it.
Previously, you always had to somehow alert the authorities in an unfriendly jurisdiction yourself. Now, you just use malware like Pegasus to drop CSAM, whether real or perturbed from legal porn, and watch as Apple tips off the Feds about your enemies.
State actors will just Gitmo you, without all this wasteful effort on hashes. This system offers no benefit sufficient to make it worth their time if they want to cull you from the population somehow.
No, China can't just Gitmo an American journalist on American soil.
But now China can send some legal pornography (e.g. close-up pussy pictures), perturbed to register as a CSAM match, to a journalist they don't like and get them put in jail.
Why couldn't China do this before? Because previously they'd still need to tip off the authorities, which leaves an attribution trail and faces a credibility barrier. Now they can just use Pegasus to plant the images and watch as Apple turns them in to the Feds. Zero links back to the attacker.
The scenario you describe has already been possible for the past ten years. Unreported zero-days could have been used at any time to inject a CSAM hit into someone's camera roll, backdated far enough that they wouldn't see it, in order to get them investigated. Their phone would have uploaded it to iCloud or Google Photos or Dropbox or whatever, and the CSAM detection at each place would have fired. No need for any of this fancy AI nonsense.
I know of zero instances of this attack being executed on anyone, so apparently even though it's been possible for years, it isn't a material threat to any Apple customers today. If you have information to the contrary, please present it.
What new attacks are possible upon device owners when the CSAM scanning of iCloud uploads is shifted to the device, that were not already a viable attack at any time in the past decade?
We did, about 1.5 years ago, and to be honest it's been a mixed bag. Large workspaces become pretty slow, but it's nice to have everything in a single place (docs, onboarding, issues, etc.). The only thing I really miss is the Jira feature that auto-closes issues when a PR is merged. Besides that it's been decent, though the slowdowns and slow search can be a pain.