> The Far Side is the only place in the Earth Moon system where you can hide military hardware and basically disappear. No optical tracking, no radar, no interception.
What prevents someone from sending a Lunar-orbiting imaging satellite to image everything on the Far Side? The Lunar Reconnaissance Orbiter has already been imaging the Far Side for over a decade.
I agree with your general points about it being a difficult location to get to, but if it's possible to put regular satellites in Lunar orbit, surely it's possible to park some warheads too, just in case...
If a big power gets there first, they’re not going to treat lunar orbit like some kind of shared international space. They’d treat it as their turf. At that point you can’t just assume you can drop an imaging satellite into whatever orbit you want; they’d have both the motive and the capability to deny access.
Hi, curious, did you know about OpenRouter before building this?
> OpenRouter provides a unified API that gives you access to hundreds of AI models through a single endpoint, while automatically handling fallbacks and selecting the most cost-effective options. Get started with just a few lines of code using your preferred SDK or framework.
It isn't OpenAI API compatible as far as I know, but they have been providing this service for a while...
That would be a fun project. Capture some WiFi geolocation data and rebroadcast it later with an ESP32 that switches its BSSID/SSID/frequency/transmit power to match an existing fingerprint.
And then see if you can be magically transported somewhere else.
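For the rebroadcast half, here is a very rough sketch (untested), assuming ESP-IDF: build a raw 802.11 beacon frame for a captured SSID/BSSID pair and replay it on the captured channel at a transmit power chosen to mimic the original signal strength. `SPOOF_SSID`, `SPOOF_BSSID`, and `SPOOF_CHANNEL` are made-up placeholders for whatever fingerprint you recorded.

```c
#include <string.h>
#include "freertos/FreeRTOS.h"
#include "freertos/task.h"
#include "nvs_flash.h"
#include "esp_netif.h"
#include "esp_event.h"
#include "esp_wifi.h"

#define SPOOF_SSID    "CoffeeShopWiFi"   /* captured SSID (placeholder)    */
#define SPOOF_CHANNEL 6                  /* captured channel (placeholder) */
static const uint8_t SPOOF_BSSID[6] = {0xDE, 0xAD, 0xBE, 0xEF, 0x00, 0x01};

/* 802.11 management header + fixed beacon fields; IEs are appended below. */
static uint8_t beacon[128] = {
    0x80, 0x00,                          /* frame control: beacon          */
    0x00, 0x00,                          /* duration                       */
    0xff, 0xff, 0xff, 0xff, 0xff, 0xff,  /* DA: broadcast                  */
    0, 0, 0, 0, 0, 0,                    /* SA: filled with SPOOF_BSSID    */
    0, 0, 0, 0, 0, 0,                    /* BSSID: filled with SPOOF_BSSID */
    0x00, 0x00,                          /* sequence control               */
    0, 0, 0, 0, 0, 0, 0, 0,              /* timestamp                      */
    0x64, 0x00,                          /* beacon interval: 100 TU        */
    0x01, 0x04,                          /* capabilities: ESS              */
};

void app_main(void)
{
    memcpy(&beacon[10], SPOOF_BSSID, 6);
    memcpy(&beacon[16], SPOOF_BSSID, 6);

    size_t len = 36;                               /* IEs start here         */
    beacon[len++] = 0x00;                          /* IE 0: SSID             */
    beacon[len++] = strlen(SPOOF_SSID);
    memcpy(&beacon[len], SPOOF_SSID, strlen(SPOOF_SSID));
    len += strlen(SPOOF_SSID);
    beacon[len++] = 0x03;                          /* IE 3: DS parameter set */
    beacon[len++] = 0x01;
    beacon[len++] = SPOOF_CHANNEL;

    nvs_flash_init();
    esp_netif_init();
    esp_event_loop_create_default();
    wifi_init_config_t cfg = WIFI_INIT_CONFIG_DEFAULT();
    esp_wifi_init(&cfg);
    esp_wifi_set_mode(WIFI_MODE_AP);
    esp_wifi_start();
    esp_wifi_set_channel(SPOOF_CHANNEL, WIFI_SECOND_CHAN_NONE);
    esp_wifi_set_max_tx_power(60);                 /* 0.25 dBm units, ~15 dBm */

    for (;;) {
        /* Inject the spoofed beacon roughly every 100 ms. */
        esp_wifi_80211_tx(WIFI_IF_AP, beacon, len, false);
        vTaskDelay(pdMS_TO_TICKS(100));
    }
}
```

The real work is on the capture side: geolocation services match against the whole neighbourhood of access points, so the replayed set of BSSIDs, channels, and relative signal strengths has to look like a plausible snapshot of the target location, not a single AP.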
This could easily be done by malicious JS, an ad script, or the website itself, which then, as the RP, gets the output of step 6.4: the email and email_verified claims.
I'm guessing that this proposal requires new custom browser (user-agent) code just to handle this protocol?
Like a secure <input Email> element that makes sure some user input is required to select a saved address, and that the value only goes to the actual server the user wants and cannot be replaced by malicious JS.
You'd have to make an authenticated cross-origin request to the issuer, which would be equivalent to mounting a Cross-Site Request Forgery (CSRF) attack against the target email providers.
Even if you could send an authenticated request, the Same Origin Policy means your site won't be able to read the result unless the issuer explicitly returns appropriate CORS headers including `Access-Control-Allow-Origin: <* or your domain>` and `Access-Control-Allow-Credentials: true` in its response.
Browsers can exempt themselves from these constraints when making requests for their own purposes, but that's not an option available to web content.
> I'm guessing that this proposal requires new custom browser (user-agent) code just to handle this protocol?
Correct, and that's going to be the main challenge for this to gain traction. We called it the "three-way cold start" in Persona: sites, issuers, and browsers are all stuck waiting for the other two to reach critical mass before it makes sense for them to adopt the protocol.
Google could probably sidestep that problem by abusing their market dominance in both the browser and issuer space, but I don't see the incentive, nor do I see it being feasible for anyone else.
One other problem is that there isn't a way to definitively know that a given OIDC provider is authoritative for a given email. Although, this spec could probably be simplified by just having a DNS record that specifies the domain to use for OIDC for emails on that domain.
Another is that there is a lot of variance in OIDC and OAuth implementations, so getting login to work with any arbitrary identity provider is quite difficult.
I wouldn't mix OAuth and OIDC up when thinking about this. OAuth is a chaotic ecosystem, but OIDC is fairly well standardized.
OIDC actually does have a standardized discovery mechanism to convert an email address into an authoritative issuer, and a standardized dynamic registration mechanism so that an application can register with new issuers automatically. Those standards could absolutely be improved, but they already exist.
The problem is that no one that mattered implemented them.
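For anyone curious what that discovery step looks like in practice, here is a minimal sketch assuming libcurl, with `alice@example.com` as a made-up address: OpenID Connect Discovery 1.0 specifies a WebFinger query to the email's domain asking which issuer is authoritative for that account.

```c
#include <stdio.h>
#include <curl/curl.h>

int main(void)
{
    const char *domain = "example.com";          /* the part after the '@' */
    char url[512];

    /* WebFinger query: "which issuer is authoritative for acct:alice@example.com?" */
    snprintf(url, sizeof url,
             "https://%s/.well-known/webfinger"
             "?resource=acct%%3Aalice%%40%s"
             "&rel=http://openid.net/specs/connect/1.0/issuer",
             domain, domain);

    CURL *curl = curl_easy_init();
    if (!curl)
        return 1;
    curl_easy_setopt(curl, CURLOPT_URL, url);

    /* If the provider implements discovery, the JSON response contains a link
     * whose "href" is the issuer (e.g. "https://accounts.example.com");
     * fetching <issuer>/.well-known/openid-configuration then yields the
     * endpoints needed for dynamic client registration and login.
     * The response body goes to stdout via libcurl's default write handler. */
    CURLcode rc = curl_easy_perform(curl);
    curl_easy_cleanup(curl);
    return rc == CURLE_OK ? 0 : 1;
}
```

In practice this query goes unanswered by the providers that matter, which is exactly the "no one implemented them" problem described above.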
If you want to get anywhere with something like this, you need buy-in from the big email providers (Google, Microsoft, Yahoo, and Apple) and the big enterprise single sign-on providers (Ping, OneIdentity, and Okta). All of those companies already do OIDC fairly well. If they wanted this feature to exist, it already would.
Instead, it seems like big tech has gone all-in on passkeys rather than fixing single sign-on.
It's more of an invisible feature than a protocol.
The signup protocol and user flow are the same whether the feature is supported or not; you just skip a step if the convenience feature is supported.
With SSO the user is inconvenienced with an additional option at sign-up and login, there's the risk of duplicate accounts, and the vendor lock-in is stronger.
Additionally, some corporate or personal policies might prefer to NEVER use SSO, even if it is sometimes accepted. I hate being presented with the option to log in with email or log in with Google when I don't know which I signed up with.
God forbid I accidentally make one account with SSO and another with email using the same address. I'd rather just always use email; SSO is supposed to be a convenience, and the advantage is lost the moment it goes south once.
You can have both a Gmail and a Google Workspace account with the same email, but I'm sure someone can do better than Google.
Also, I'm pretty sure that since Google is itself an SSO provider, this adds another layer of clusterfuck that I don't even want to think about. Regardless of whether there's a clean implementation or not, I don't want it taking up my mental capacity.
I'm permanently banned from Reddit for calling a "power mod" of SF Bay Area a nimby when he was complaining about increased traffic in his neighborhood and about a stop sign they had to put up at a major intersection.
They IP-banned and hardware-device-banned me, it's crazy! Any appeal gets auto-rejected, and I can't make new accounts.
Any word can be a dirty word when someone who is definitely that thing is insecure about being that thing. Bonus points - those kinds of people tend to love going around calling other people that thing too.
I deleted my Reddit account because the feed is organized for you. It is entirely possible that Reddit will organize a feed designed to make you come to a certain conclusion across a number of posts. And whether this is done purposefully by some bad actor or because some algorithm latches onto a particular theme, it doesn't matter. I prefer that things be voted on directly without interference, which I assume is how Hacker News works.
The other thing is that it is simply a complete waste of time. Reading books, working on projects, or otherwise interacting with people in the real world is better than commenting on pop culture or news or whatever. We don't have much time on Earth, and I am not sure I want to keep spending so much of it in cyberspace.
> If the directory containing the rollback journal is not fsynced after the journal file is deleted, then the journal file might rematerialize after a power failure, causing sqlite to roll back a committed transaction. And fsyncing the directory doesn't seem to happen unless you set synchronous to EXTRA, per the docs cited in the blog post.
I think this is the part that is confusing.
The fsyncing of the directory is supposed to be done by the filesystem/OS itself, not the application.
From `man fsync`:
> As well as flushing the file data, fsync() also flushes the metadata information associated with the file (see inode(7)).
So from SQLite's perspective, on DELETE it is either before the fsync call (and not committed), or after the fsync call (and committed, or partially written somehow and needing rollback).
Unfortunately it seems like this has traditionally been broken on many systems, requiring workarounds, like SYNCHRONOUS = EXTRA.
No, the metadata is information like the modification time and permissions, not the directory entry.
The next paragraph in the man page explains this:
> Calling fsync() does not necessarily ensure that the entry in the directory containing the file has also reached disk. For that an explicit fsync() on a file descriptor for the directory is also needed.
Edit to add: I don't think there's a single Unix-like OS on which fsync would also fsync the directory, since a file can appear in an arbitrary number of directories, and the kernel doesn't know all the directories in which an open file appears.
This is a moot point anyway, because in DELETE mode the operation that needs to be durably persisted is the unlinking of the journal file - what would you fsync for that, besides the directory itself?
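Concretely, the only portable way to make the unlink itself durable is to fsync a file descriptor for the containing directory. A minimal sketch (the paths are made-up examples):

```c
#include <fcntl.h>
#include <unistd.h>

int delete_journal_durably(void)
{
    int dirfd = open("/data", O_RDONLY | O_DIRECTORY);
    if (dirfd < 0)
        return -1;

    /* Remove the directory entry for the rollback journal... */
    if (unlink("/data/app.db-journal") != 0) {
        close(dirfd);
        return -1;
    }

    /* ...then flush the directory itself so the removal survives a power
     * failure. An fsync() on the journal's own fd would only flush the
     * file's data and inode metadata, not its parent directory entry. */
    int rc = fsync(dirfd);
    close(dirfd);
    return rc == 0 ? 0 : -1;
}
```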
OK, interesting, I think I see... So you are asking whether, if SQLite opens the database and finds an uncommitted rollback journal that looks valid, it rolls it back?
I went looking in pager.c and found something similar to what you are asking about in this comment before `sqlite3PagerCommitPhaseTwo`:
** When this function is called, the database file has been completely
** updated to reflect the changes made by the current transaction and
** synced to disk. The journal file still exists in the file-system
** though, and if a failure occurs at this point it will eventually
** be used as a hot-journal and the current transaction rolled back.
So, it does this:
** This function finalizes the journal file, either by deleting,
** truncating or partially zeroing it, so that it cannot be used
** for hot-journal rollback. Once this is done the transaction is
** irrevocably committed.
Assuming fsync works on both the main database and the hot journal, I don't see a way that it is not durable. It has to write and sync the full hot journal, then write to the main database, then zero out the hot journal, sync that, and only then does it atomically return from the commit? (assuming FULL and DELETE)
> OK, interesting, I think I see... So you are asking whether, if SQLite opens the database and finds an uncommitted rollback journal that looks valid, it rolls it back?
Right.
> Assuming fsync works on both the main database and the hot journal, I don't see a way that it is not durable. It has to write and sync the full hot journal, then write to the main database, then zero out the hot journal, sync that, and only then does it atomically return from the commit? (assuming FULL and DELETE)
DELETE mode doesn't truncate or zero the journal; it merely deletes it from the directory. You need to switch to TRUNCATE or PERSIST for that behavior: https://sqlite.org/pragma.html#pragma_journal_mode
I confirmed all of this by attaching gdb to SQLite and setting a breakpoint on unlink. At the time of unlink the journal is still "hot" and usable for rollback: https://news.ycombinator.com/item?id=45069533
Yeah, I see the comments down below in pager.c which explain it a bit better. I guess I thought its behavior was more like PERSIST by default.
** journalMode==DELETE
** The journal file is closed and deleted using sqlite3OsDelete().
**
** If the pager is running in exclusive mode, this method of finalizing
** the journal file is never used. Instead, if the journalMode is
** DELETE and the pager is in exclusive mode, the method described under
** journalMode==PERSIST is used instead.
So I guess this is one of the tradeoffs SQLite makes between extreme durability (PERSIST, TRUNCATE, or using EXTRA) and speed.
I know we have used SQLite quite a bit without hitting this exact scenario - or at least the transaction that would have been rolled back wasn't important enough in our case. I guess when the device powers down milliseconds after the commit, which is what it takes to trigger this case, we never noticed (or never needed that transaction), only that the DB remained consistent.
And if DELETE is the default and this doesn't get noticed enough in practice to force a change to a safer default (like PERSIST or something), I guess it is better to have the speed gains from skipping that extra write + fsync.
But now I guess you know enough to submit better documentation upstream describing a sliding scale of durability. I definitely agree that the docs could be better about how to get maximum durability, and how to tune the `journal_mode` and `synchronous` knobs.
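As a sketch of what turning those knobs looks like from the C API (the database path is a made-up example), one maximally cautious combination would be:

```c
#include <sqlite3.h>

int open_paranoid(sqlite3 **db)
{
    if (sqlite3_open("app.db", db) != SQLITE_OK)
        return -1;

    /* Keep the finalized journal around with a zeroed header instead of
     * unlinking it, so commit durability doesn't hinge on a directory fsync... */
    sqlite3_exec(*db, "PRAGMA journal_mode=PERSIST;", NULL, NULL, NULL);

    /* ...and ask for SQLite's most conservative syncing behavior. */
    sqlite3_exec(*db, "PRAGMA synchronous=EXTRA;", NULL, NULL, NULL);
    return 0;
}
```

Whether that extra safety is worth the extra syncing is exactly the tradeoff discussed above; plenty of deployments are fine with the defaults plus consistency.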
~~But your VM TPM won't be signed during manufacturing by a trusted root. No attestation.~~
OK I take it back, privacy is one of their specified goals:
> Note that the certificate chain for the TPM is never sent to the server. This would allow very precise device fingerprinting, contrary to our privacy goals. Servers will only be able to confirm that the browser still has access to the corresponding private key.
However, I still wonder why they don't have TLS always try to create a client certificate per endpoint and proactively register it on the server side? Seems like this would accomplish a similar goal.
> why they don't have TLS always try to create a client certificate per endpoint and proactively register it on the server side
That is effectively what Token Binding does. It was unfortunately difficult to deploy because the auth stack can be far removed from TLS termination, providing consistency on the client side to avoid frequent sign-outs was very difficult, and (benign) client-side TLS proxies are fairly common.
Dude, please stop spamming misinformation. This was already debunked in previous commentary that you saw and responded to, which showed that the website never sees the raw TPM data at any stage under this proposal.
Session cookies have zero correlation to fingerprinting.
Seems like this is close to the Uncanny Valley effect.
LLM intelligence is at that spot where it is simultaneously genius-level but also just misses the mark by a tiny bit, which really sticks out to those who have been around humans their whole lives.
I feel that, just as with modern CGI, this will slowly fade as techniques improve, and you just won't notice it when talking to or interacting with AI.
Just like in his post during the whole Matrix discussion.
> "When I asked for examples, it suggested the Matrix and even gave me the “Summary” and “Shortening” text, which I then used here word for word. "
He switches in AI-written text and I bet you were reading along just the same until he pointed it out.