I recently discovered a unit test file in the SymPy repository that demonstrates how to use SymPy for matrix calculus, specifically for finding derivatives of symbolic matrix expressions. This is very useful when working with optimization problems in, for example, machine learning. The point is that SymPy can do this directly (in matrix form), yet this is not at all obvious from the available documentation or forum discussions.
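For anyone curious, here is a minimal sketch of the kind of derivative that test file exercises. This is my own example, not taken from the test file itself, and it assumes a reasonably recent SymPy version; the exact printed form of the results may differ slightly:

```python
from sympy import MatrixSymbol, Trace

n = 3
A = MatrixSymbol('A', n, n)
X = MatrixSymbol('X', n, n)
x = MatrixSymbol('x', n, 1)

# Derivative of a scalar matrix expression with respect to a matrix,
# computed directly in matrix form (no elementwise expansion needed).
print(Trace(A * X).diff(X))    # should give A.T (transpose of A)

# Gradient of a quadratic form with respect to a vector.
print((x.T * A * x).diff(x))   # should give A*x + A.T*x, or an equivalent form
```

The nice part is that the results come back as matrix expressions (e.g. in terms of A and its transpose) rather than as element-by-element derivatives.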
Just came across this and wanted to share it here for more visibility – it's currently at 11 GitHub stars. I'm not the author, but as someone who frequently uses Typora, I've often felt the lack of certain features that are common in other tools, such as callouts. This community-driven plugin system seems like a promising way to bridge those gaps.
What I find most disturbing about this is the possibility of using real money to promote the games. Because the chances of success are misrepresented, this turns the platform into a gambling den for ten-year-olds.
"According to media reports, the cloud computing industry does not take full advantage of the existing CSAM screening toolsto detect images or videos in cloud computing storage. For instance, big industry players, such as Apple, do not scan their cloud storage. In 2019, Amazon provided only eight reports to the NCMEC, despite handling cloud storage services with millions of uploads and downloads every second. Others, such as Dropbox, Google and Microsoft perform scans for illegal images, but 'only when someone shares them, not when they are uploaded'." [1]
So I guess the question is what exactly these "others" are doing, given that they scan 'only when someone shares them, not when they are uploaded'. The whole discussion seems to center around what Apple intends to do on-device, ignoring what others are already doing in the cloud. Isn't this strange?
It is a shift in trust. If my things are scanned on my local device, I now must trust that Apple:
* Will not be compelled to change the list of targeted material by government coercion.
* Will not upload the "vouchers" unless the material is uploaded to iCloud.
* Will implement these things in such a way that hackers cannot maliciously cause someone to be wrongly implicated in a crime.
* Will implement these things in such a way that hackers cannot use the tools Apple has created to seek other information (such as state sponsored hacking groups looking for political dissidents).
And many of us do not believe that these things are a given. Here's a fictional scenario to help bring it home:
Let's say a device is created that can, with decent accuracy, detect drugs in the air. Storage unit rental companies start to install them in all of their storage units to reduce the risk that they are storing illegal substances, and they notify the police if the sensors go off.
One storage unit rental company feels this is an invasion of privacy, so instead they install the device in your house and promise to only check the results if you move your things into the storage unit.
> The whole discussion seems to center around what Apple intends to do on-device, ignoring what others are already doing in the cloud. Isn't this strange?
Very strange. Especially since this on-device technique means that Apple needs to access far less data than it would when scanning in the cloud.
I think this should only work if you could reasonably be assumed to be unaware of the existence of any look-alikes (including the celebrity) you don't have permission from.
You wouldn't even need to be clever. Use a bunch of stock photos mixed in with the target photo, use a Copilot-level GAN to "sort" the bodily features of this photo in a way that suits your liking and, voila, "It's an algorithmic choice. It can't be helped." That or anime-ification.
But none of this would be needed if people respected freedom of speech. No person should be obligated to abstain from composing the digital equivalent of a nude statue just because it resembles a living person. There shouldn't even be a need to ask what happens when satire and sexuality are policed. Anyone who has been reading the news over the past few decades would know the answer to that.
Even if a "malicious" intent were relevant to the production or distribution of deepnudes being considered a crime (which I am convinced is not a crime under any circumstances), judges and lawmakers have historically shied away from arguments of intent in the past under the argument that it's too difficult and time consuming. Present laws regarding "revenge" porn , for example, are assessed under strict liability and don't require any proof of any actual revenge plot being involved. There was a case in Illinois [1] in which the defendant was trying to prove her ex-boyfriend's infidelity by distributing photos he had carelessly synchronized to her iCloud account. In doing so she was tried with distribution of non-consensual pornography without any consideration for intent. Her legal team appealed on 1st amendement grounds where the Illinois law did not apply strict scrutiny standard required for rulings that curtail free speech on the supposed basis of serving a compelling government interest. The appeals were eventually filed all the way to the Supreme Court but the case was not taken up.
If a deepnude ban were to happen, I don't expect the arguments and legal standards under which such a ban is judged to be any different. That's what I find troubling.