The court relied on Google's TOS to conclude that users opted to use the service, fully aware of their data being stored:
* when a person performs a Google search, he or she is aware (at least constructively) that Google collects a significant amount of data and will provide that data to law enforcement personnel in response to an enforceable search warrant. For present purposes, what Google does with that information, including the standards it imposes upon itself before providing that information to investigators, is irrelevant. For Fourth Amendment purposes, what matters is that the user is informed that Google—a third party—will collect and store that information.
IANAL, and I can't tell whether this now means every third party storing my data is obligated to share it without a warrant.
Genuine question: given LLMs' inexorable commoditization of software, how soon before NVDA's CUDA moat is breached too? Is CUDA somehow fundamentally different from other kinds of software or firmware?
> Pre-installed App must be Visible, Functional, and Enabled for users at first setup. Manufacturers must ensure the App is easily accessible during device setup, with no disabling or restriction of its features
While I can get behind the stated goals, the lack of any technical details is frustrating. The spartan privacy policy page[2] lists the following required permissions:
> For Android: Following permission are taken in android device along with purpose:
> - Make & Manage phone calls: To detect mobile numbers in your phone.
> - Send SMS: To complete registration by sending the SMS to DoT on 14422.
> - Call/SMS Logs: To report any Call/SMS in facilities offered by Sanchar Saathi App.
> - Photos & files: To upload the image of Call/SMS while reporting Call/SMS or report lost/stolen mobile handset.
> - Camera: While scanning the barcode of IMEI to check its genuineness.
Only the last two are mentioned as required on iOS. From a newspaper article on the topic[3]:
> Apple, for instance, resisted TRAI’s draft regulations to install a spam-reporting app, after the firm balked at the TRAI app’s permissions requirements, which included access to SMS messages and call logs.
Thinking aloud: might there be cryptographic schemes (e.g., zero-knowledge proofs) that allow the OS to securely reveal limited, circumscribed attributes to the Govt without blanket, all-or-nothing permissions? To detect that an incoming call is likely from a spam number, a variant of HIBP's k-Anonymity[4] should seemingly suffice. I'm not a cryptographer, but I hope algorithms exist, or could be created, to cover other legitimate fraud-prevention use cases.
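To make the k-anonymity idea concrete, here's a minimal sketch in the spirit of HIBP's range queries; the registry endpoint, helper names, and numbers are hypothetical, not anything DoT actually offers:

```python
import hashlib

# Hypothetical client-side check, modeled on HIBP's k-anonymity range API:
# the phone sends only the first 5 hex chars of the SHA-1 of the caller's
# number; the server returns every suffix in that "bucket", and the match
# (if any) is decided locally, so the full number never leaves the device.

def hash_number(number: str) -> str:
    return hashlib.sha1(number.encode("utf-8")).hexdigest().upper()

def is_reported_spam(number: str, fetch_bucket) -> bool:
    digest = hash_number(number)
    prefix, suffix = digest[:5], digest[5:]
    # fetch_bucket(prefix) would call a hypothetical registry endpoint,
    # e.g. GET https://registry.example/range/{prefix}, returning suffixes.
    return suffix in fetch_bucket(prefix)

# Toy in-memory "registry" purely for illustration.
_spam_db = {hash_number("+911234567890")}
def fake_fetch_bucket(prefix: str) -> set[str]:
    return {d[5:] for d in _spam_db if d.startswith(prefix)}

if __name__ == "__main__":
    print(is_reported_spam("+911234567890", fake_fetch_bucket))  # True
    print(is_reported_spam("+919999999999", fake_fetch_bucket))  # False
```

The point is that the server only ever sees a short hash prefix that maps to many possible numbers, rather than the full call log.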
It is a common refrain, and a concern I share, that any centralized store of PII is inherently an attractive target; innumerable breaches should've taught everyone that. After such a data loss, (a) there's no cryptographically guaranteed way for victims to know it happened, short of taking on the risk of searching the dark web themselves; (b) they can't know whether some AI has been trained to impersonate them that much better; (c) there's no way to know which database was culpable; and (d) for that reason, there's no practical recourse.
I recently explained my qualms with face id databases[5], for which similar arguments apply.
When visiting Bath[1] in the UK (mentioned in the article), I learned that the Romans used a clever contraption, the "three-legged lewis", to lift heavy stones[2].
Referring to the diagram[3] on Wikipedia, a concave hole is first cut into the stone. Parts 1 and 2 of the lewis are inserted, one at a time. Inserting part 3 between 1 and 2 locks all three into place. A pin and ring at the top keep the three parts from separating.
Given how many pictures governments and corporations collect from public places, the GP's concern seems moot. I'll try to articulate my reasons as follows:
- In every authentication system (the airports' face scanning ones and others) there's a point at which a yes/no decision must be made: is this person authentic or not?
- This yes/no "decision module" must base its determination solely on a series of bits presented to it by the image sensor.
- Every series of bits can be spoofed because the decision module can't tell whether the bits originated from a real image sensor or from a very convincing AI or elsewhere. The only exception to this is when the bits include a cryptographic signature, generated using a private key, securely embedded within the image sensor.
- The chance of such spoofing is minuscule if the sensor and the decision module coexist within a single piece of tamper-proof hardware. The decision module for airport face scanners can't be co-located that way, given the large number of faces that must be queried. When such a decision module and its image sensor are separated by a network, the possibilities for intrusion and spoofing can no longer be ignored.
- A helpful analogy is how we decry passwords stored as plain text in backend databases; after the inevitable compromise, these passwords get used to attack other systems. If backend systems store face data as a set of images (as I believe most do), how's that different in principle from storing passwords in a DB, in plain text?
- I'll grant that a carefully designed system would allay my fears: the backend should store nothing but salted hashes, and the image sensors must send only signed images of the subject (see the sketch after this list).
- Stepping back, my ultimate concern with face authentication systems is that their technical details are opaque and they're used in situations where recourse is limited at best.
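Here's a minimal sketch of the "signed images" half of that design, using the cryptography package's Ed25519 primitives; the in-memory key and the fake frames are hypothetical stand-ins for a key provisioned into tamper-resistant sensor hardware:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Hypothetical sensor: in real hardware the private key would be provisioned
# into a tamper-resistant element at manufacture, never the application CPU.
sensor_key = Ed25519PrivateKey.generate()
sensor_public_key = sensor_key.public_key()  # enrolled with the backend

def capture_and_sign(image_bytes: bytes) -> tuple[bytes, bytes]:
    """What the sensor would emit: the raw frame plus its signature."""
    return image_bytes, sensor_key.sign(image_bytes)

def decision_module(image_bytes: bytes, signature: bytes) -> bool:
    """The backend refuses any frame that wasn't signed by a known sensor."""
    try:
        sensor_public_key.verify(signature, image_bytes)
    except InvalidSignature:
        return False
    # Only now would face matching run on image_bytes.
    return True

frame, sig = capture_and_sign(b"...raw sensor frame...")
print(decision_module(frame, sig))             # True: genuine sensor output
print(decision_module(b"spoofed frame", sig))  # False: signature check fails
```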
> Given how many pictures governments and corporations collect from public places, the GP's concern seems moot.
That data is not centralized. If, every time you entered a gas station, the surveillance footage of you were associated with your passport and added to a centralized registry, I think you'd be worried too. That's what's going on here.
Yet. Flock et al. are working on that. My brother-in-law runs a tow company; his trucks all roll with LPR (license plate recognition), and they get pings on the location of repo vehicles in seconds.
At the government level, a lot of the Palantir work is (often illegally) joining all sorts of data for total awareness.
Where you are every minute is centralized if you use a cell phone. Even if your phone isn’t sending GPS data back somewhere, it’s still constantly pinging cell phone towers.
I hope this could be a "teachable moment" for all involved: have some students complete their assignments in person, then submit their "guaranteed to be not AI written" essays to said AI detection tool. Objectively measure how many false positives it reports.
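If that experiment ever happens, the bookkeeping is trivial; a toy sketch with invented verdicts, purely to show what "objectively measure" means here:

```python
# Hypothetical tally: each entry is the detector's verdict on an essay that
# is known to be human-written (completed in person, so ground truth is "human").
verdicts = ["ai", "human", "human", "ai", "human", "human", "human", "human"]

false_positives = verdicts.count("ai")
false_positive_rate = false_positives / len(verdicts)
print(f"{false_positives}/{len(verdicts)} human essays flagged as AI "
      f"({false_positive_rate:.0%} false positive rate)")
```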
> you would expect that the reputation of the bad credentials would go down and the good credentials would go up.
We should expect this if employers can efficiently and objectively evaluate a candidate's skills without relying on credentials. When they're unable to, we should worry about this information asymmetry leading to a "market for lemons" [0]. I found an article [1] about how this could play out:
> This scenario leads to a clear case of information asymmetry since only the graduate knows whether their degree reflects real proficiency, while employers have no reliable way to verify this. This mirrors the classic “Market for Lemons” concept introduced by economist George Akerlof in 1970, where the presence of low-quality goods (or in this case, under-skilled graduates) drives down the perceived value of all goods, due to a lack of trustworthy signals.
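A toy version of Akerlof's arithmetic, with numbers invented purely for illustration:

```python
# Invented illustrative numbers: a skilled graduate is worth 100 to an
# employer, an unskilled one 40, and half the pool is skilled.
value_skilled, value_unskilled, share_skilled = 100, 40, 0.5

# Unable to tell them apart, a risk-neutral employer offers the average.
blind_offer = share_skilled * value_skilled + (1 - share_skilled) * value_unskilled
print(blind_offer)  # 70.0

# If skilled graduates have outside options worth more than 70, they leave the
# market, the remaining pool is all "lemons", and the offer falls toward 40.
```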
Loved the post. I too am an avid Emacs and Org user, but I'm just starting to play with org-babel. I wonder how well this workflow could be replicated in org-babel.
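For concreteness, this is the kind of minimal org-babel building block I have in mind (a generic example, not the post's actual workflow): a Python source block whose output is captured back into the document with C-c C-c.

```org
#+NAME: example
#+BEGIN_SRC python :results output
print("Hello from org-babel")
#+END_SRC

#+RESULTS: example
: Hello from org-babel
```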
> is it about doing, or is it about getting things done?
For me it is getting things done while also understanding the whole building, from its foundation up. Only with such a comprehensive mental model can I predict how my code will behave in unanticipated situations. I've only ever achieved this mental model by doing.
Succinctly, "it is about doing" to guarantee I'm "getting things really done".
> my time goes into thinking and precisely defining what I want
I'm reminded of the famous quote "Programs must be written for people to read, and only incidentally for machines to execute." [1]
A programming language is exactly the medium that lets me precisely define my thoughts! I think the only way to achieve equivalent precision in human language is to write it in legalese, just as a lawyer does when poring over the words and punctuation of a legal contract (and even that depends on so much case law to make the words truly precise).
> For me, AI allows me to realize my ideas, and get things done.
More power to you! Bringing our ideas to life is what we're all after.