Sounds like a combination of 'can it be geocoded?' and 'is their location precise enough?' There is some progress on resolving human-written locations in cities ( https://www.danvk.org/2026/03/08/oldnyc-updates.html ) but I imagine once you lose reference points, '100 feet into Golden Gate Park...' would be interpretable but not possible to fix to one point.
You're absolutely right. Highways are a little better since they have mile markers, but once you get into a nature preserve you're dealing with a whole bunch of "If you pass the pond with the cattails on your left, you've gone too far." Fishermen, it turned out, LOVED sending coordinates for stuff they saw so long as their fishing spot wasn't nearby.
I've also noticed that iNaturalist fuzzes exact locations for some species, reporting only a point within a geographic grid cell (example: zebra), even the ranch zebras in California.
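For anyone curious what that kind of obscuring looks like in practice, here's a minimal sketch: snap the true coordinate to a grid cell and report a pseudo-random point inside that cell. The 0.2-degree cell size is an assumption for illustration, not iNaturalist's documented implementation.

```python
import random

CELL_DEG = 0.2  # assumed cell size for this sketch


def obscure(lat, lon, seed=None):
    """Return a pseudo-random point inside the grid cell containing (lat, lon).

    Observers see only that the sighting falls somewhere in the cell,
    not the exact spot.
    """
    rng = random.Random(seed)
    cell_lat = (lat // CELL_DEG) * CELL_DEG  # south-west corner of the cell
    cell_lon = (lon // CELL_DEG) * CELL_DEG
    return (cell_lat + rng.random() * CELL_DEG,
            cell_lon + rng.random() * CELL_DEG)
```

The point is that the obscured coordinate is still useful for range maps at the cell scale, but useless for finding the individual animal (or the fisherman's spot).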
This section stood out to me because it starts out explaining PGP to a layman, but then the author gets overly excited that a cryptographer would be interested in... basic cryptography:
> I’d learned enough by then to know that P.G.P. relies on public-key cryptography. So does Bitcoin. [...]
> How interesting, I thought, that Mr. Back’s grad-school hobby involved the same cryptographic technique that Satoshi had repurposed.
Bob uses electricity from a coal power plant, therefore he must be able to design a fission plant. Yeah, these are some massive leaps, the question of why one must dox Satoshi at all, beyond morbid curiosity, notwithstanding. Satoshi, and the wallets they controlled, were never associated with anything beyond the creation of BTC, so the value of knowing who they are or were is not great in my view. If those coins suddenly started funding someone or something, there could be an argument, but this, coupled with such a layperson approach, makes me doubtful about the ethics of the whole exercise.
> And Mr. Back’s thesis project focused on C++ — the same programming language Satoshi used to code the first version of the Bitcoin software.
Amazing! I bet they both use for loops too! I heard Bitcoin relies heavily on for loops.
Infuriatingly, to people who don't know much about programming, these pieces of 'evidence' might sound quite compelling, because it will all sound equally obscure to them.
I'm only a quarter of the way through this piece, but I'm finding it very hard to take seriously.
It's strange. I'm sure he talked to experts who would immediately say: yes, many programming languages exist. The fact that two cryptographers who wrote money systems both used C++ is not informative. Today we might expect one of them to use Rust.
People have tried to suss this out on the ML subreddit, and it is confusing. Most of the worst messages from Tay were just people discovering a "repeat after me: __" function, so it's hard to even figure out which Tay messages to consider genuine responses of the model.
There seems to have been interest in a model which would pick up the language and style of its conversations (not actually learning information or looking up facts). If you haven't trained an LSTM model before: you could train one on Shakespeare's plays and get out ye olde English in screenplay format, but from line to line there was none of the consistency in plot, characters, entrances and exits, etc. that you'd expect after GPT-2. Twitter would be a good fit for keeping up a short-form conversation. So I believe Tay and the Watson that appeared on Jeopardy come more from this 'classical NLP' thinking and are not proto-LLMs, if that makes sense.
My problem with this is less that it's perpetual engagement and more that I use ChatGPT for direct programming outputs, like "go through a geojson file and if the feature is within 150 miles of X, keep it and record the distance in miles". Whether it gives a good answer or not, the suggestion at the end is a synthesis of my ChatGPT history, so it could be offering to rewrite a whole script, draw diagrams, or bring in past questions for one franken-suggestion. This is either the wrong kind of engagement for me, or maybe it's "teaching" me to move my full work process into the chat. I've asked it many times to give concise answers and to not offer suggestions like this, but the suggestions are really baked in.
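For what it's worth, the task I described is small enough to sketch directly, which is part of why the sprawling follow-up suggestions grate. A minimal version, assuming Point features and a haversine great-circle distance (the reference point and radius are just the ones from my prompt):

```python
import json
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_MI = 3958.8


def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance in miles between two lat/lon points."""
    dlat = radians(lat2 - lat1)
    dlon = radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_MI * asin(sqrt(a))


def features_within(geojson, center_lat, center_lon, radius_mi=150):
    """Keep Point features within radius_mi of the center, recording the distance."""
    kept = []
    for feat in geojson["features"]:
        if feat["geometry"]["type"] != "Point":
            continue  # only handling Point geometries in this sketch
        lon, lat = feat["geometry"]["coordinates"]  # GeoJSON order is [lon, lat]
        d = haversine_miles(center_lat, center_lon, lat, lon)
        if d <= radius_mi:
            feat["properties"]["distance_mi"] = round(d, 1)
            kept.append(feat)
    return kept
```

You'd load the file with `json.load` and feed the result in; polygons and other geometry types would need a real library like shapely, which is exactly the kind of scope the chat keeps volunteering to expand into.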
If you participate in certain online communities where posts used to share real ideas and real beginner questions, you get tired of it. I am especially tired of seeing "it's not X - it's Y" posts on /r/MachineLearning claiming to have found some "geometry" or basic PyTorch code that will solve AI hallucinations. And it's becoming clear these people are not just doing this sort of thing on a whim, but spending days in delusional conversations with the AI.
Reminds me of Google, Apple, Microsoft, and Facebook releasing similarly-worded statements denying that they would share information with the NSA's PRISM program, despite the Snowden docs.
I don't think carbon capture/sequestration is going to do enough, but if we continue on this trajectory I think there will be more support for changing reflectivity (spraying sea water, or putting particles in the stratosphere).
Stratospheric aerosols: the dangers here seem overblown. The effect is milder than a volcanic eruption. It seems like a reasonable thing for governments to be attempting.