There’s a lot going on in this blog. Interestingly, the core mechanism at play here is the http-01 challenge validation, which they state is fetched by the CA over HTTPS. This is particularly amusing when you consider that http-01 is explicitly NOT HTTPS (it’s HTTP), and that is the entire reason a different code path exists at all.
The modern web requires a secure (HTTPS) context for many things to work, so “HTTPS enforcement” is commonplace: all requests are forcibly upgraded to HTTPS. However, you can’t do that to the CA when it’s performing an http-01 challenge validation. This necessitates a “well-known” URL route for challenges, so that they can very deliberately take a different code path that doesn’t enforce HTTPS (and can be routed differently).
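A minimal sketch of that exemption, using a hypothetical redirect check (the function and its name are illustrative, not from any particular framework; only the path prefix is fixed, by RFC 8555):

```python
# Hypothetical HTTPS-enforcement check: upgrade everything to HTTPS
# *except* the ACME well-known path, which the CA must be able to
# fetch over plain HTTP for http-01 validation to work.
ACME_CHALLENGE_PREFIX = "/.well-known/acme-challenge/"

def should_upgrade_to_https(path: str) -> bool:
    """Decide whether a plain-HTTP request gets redirected to HTTPS."""
    return not path.startswith(ACME_CHALLENGE_PREFIX)
```

Because the prefix is standardized, reverse proxies and CDNs can special-case it without knowing anything else about the site.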
This is true of basically every ACME client used for http-01 challenges, not just Cloudflare. So while they’ve unfortunately missed the mark on correctly explaining the mechanism at play here, I hope that I succeeded in making it a bit more clear. Other implementations are, of course, similarly exploitable.
These BGP leaks do happen all the time. Cloudflare is right. This is a gap in the http-01 challenge handling on Cloudflare’s end. It should be changed to match the RFC, but not because it’ll change anything meaningful for security.
It doesn’t matter, because this (like similar http-01/dns-01 challenge exploits that allow the issuance or interception of CA-signed certificates) is not a rare occurrence, and is surprisingly easy to perform as an individual. Even more so for governments.
Addendum: certificate transparency logs are free and are scraped and sold. Don’t believe for a second anyone out there is doing any free analysis at scale to watch your back. The orgs doing analysis are ultimately paid by orgs using it to hide their operations better. Your small business use-case for the data is pocket change compared to those contracts.
I run mine on the public internet and it’s fine, because I put it behind auth: it’s a tool to remotely execute code with no auth, and it also has a fully featured webshell.
To be clear, this is a vulnerability. Just the same as exposing unauthenticated telnet is a vulnerability. User education is always good, but at some point in the process of continuing to build user-friendly footguns we need to start blaming the users. “It is what it is”, Duh.
This “vulnerability” has been known by devs in my circle for a while. It’s literally the very first intuitive question most devs ask themselves when using opencode, and then they put authentication on top.
Particularly in the AI space it’s going to be more and more common to see users punching above their weight with deployments. Let em learn. Let em grow. We’ll see this pain multiply in the future if these lessons aren’t learned early.
Can you share what made this behavior obvious to you? E.g. when I first saw Open Code, it looked like yet another implementation of Claude Code, Codex-CLI, Gemini-CLI, Project Goose, etc. - all these are TUI apps for agentic coding. However, from these, only Open Code automatically started an unauthenticated web server when I simply started the TUI, so this came as a surprise to me.
Doesn't the UEFI firmware map a GPU framebuffer into the main address space "for free" so you can easily poke raw pixels over the bus? Then again the UEFI FB is only single-buffered, so if you rely on that in lieu of full-fat GPU drivers then you'd probably want to layer some CPU framebuffers on top anyway.
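The “CPU framebuffers on top” idea can be sketched as a back buffer blitted wholesale (Python purely for illustration; a real GOP framebuffer would be a memory-mapped pixel region, stood in for here by a `bytearray`):

```python
# Tiny stand-in for a single-buffered UEFI GOP framebuffer: draw into an
# off-screen back buffer, then copy the whole thing over in one pass
# ("present") so a partially drawn frame is never visible on screen.
WIDTH, HEIGHT, BPP = 4, 2, 4  # 4x2 "screen", 4 bytes/pixel (BGRA)

framebuffer = bytearray(WIDTH * HEIGHT * BPP)  # stands in for the mapped region
backbuffer = bytearray(WIDTH * HEIGHT * BPP)

def put_pixel(buf: bytearray, x: int, y: int, bgra: bytes) -> None:
    off = (y * WIDTH + x) * BPP
    buf[off:off + BPP] = bgra

def present() -> None:
    framebuffer[:] = backbuffer  # one bulk copy, not per-pixel pokes over the bus

put_pixel(backbuffer, 1, 0, bytes([0xFF, 0x00, 0x00, 0x00]))  # one blue pixel
present()
```

The same shape works with real hardware: compose in ordinary RAM, then do a single large copy into the mapped framebuffer.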
Someone last winter was asking for help with large docker images and it came about that it was for AI pipelines. The vast majority of the image was Nvidia binaries. That was wild. Horrifying, really. WTF is going on over there?
As a (previous) customer of Proton of many years and a user of their drive product, you should be aware that earlier this year the drive API endpoints began to block their own VPN egress quite often for rate limiting. They also block many cloud providers’ egress. They also don’t officially support rclone, and their changing API spec often breaks compatibility.
I saw the writing on the wall and migrated rapidly earlier this year, ahead of the crypto product launches and before the email fiasco. It was hard to get data back out, even then.
Proton still stands for privacy. But the dark patterns for lock-in I can do without.
Hetzner Storage boxes with rclone and the “crypt” option are a drop-in replacement, at ~$40 for 20TB. That’s where I went instead.
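For reference, that setup is roughly an SFTP remote wrapped in a crypt remote. A hypothetical `~/.config/rclone/rclone.conf` sketch (the remote names and user ID are placeholders; the password lines are generated by `rclone config`, not typed in plaintext):

```
[storagebox]
type = sftp
host = u123456.your-storagebox.de
user = u123456
port = 23

[storagebox-crypt]
type = crypt
remote = storagebox:backups
password = <obscured value from rclone config>
```

Everything written through `storagebox-crypt:` lands on the Storage Box encrypted, so Hetzner only ever sees ciphertext.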
I have, and the technical support representative at Proton confirmed it, but not without implying that it was my fault for using rclone. I asked for the official recommendation for Linux users to do automated or scriptable backups onto a Proton drive, and the answer was that some kind of SDK was planned for the future. Proton drive stopped working completely with rclone shortly after that, which was about two months ago.
To be honest, all consumer cloud storage providers get touchy when you access them via API.
Dropbox’s API refuses to sync certain ‘sensitive’ files, like game backups (ROMs or ISOs). There is no way for Dropbox to know whether you own the game and are thus entitled to a backup; they just play file police.
I wonder if it would ever be possible to reach that value-per-dollar in the current economy.
Hetzner works because it was built a long time ago, when talent was cheap; the property Ponzi wasn’t yet at the stage where an average post-tax middle-class salary barely covers rent. Since then they’ve managed to stay afloat because it’s only maintenance and small incremental changes from that point on.
Building such a new operation (and offering competitive prices) from scratch today would be impossible based on labor costs alone. This is presumably the same reason they don't offer their very-good-value dedicated servers in the US either, only "cloud" VPSes which are orders of magnitude more expensive.
This theory ignores the entire Midwest rust belt where the property pricing squeeze often barely exists and senior level engineers barely cross $100k for salary.
By your logic AWS should also be cheap since it was also built under similar timing.
Hetzner is cheap because they don’t provide the same level of abstractions. They also have competitors in the same price range. They aren’t wildly unique.
Dutchie here, married to someone from the Midwest. Can confirm, those houses look really cheap there. It was one of the reasons we considered living there. But the Netherlands won out on other things (e.g. healthcare).
I think the situation may not come down to the cost of hands and housing. But the sunk cost of Hetzner being in Germany, compared with the break-ground cost of reconstructing their existing model in the rest of the world: that part I think is true. Selling services in German-hosted racks is, at this point, massive profit on a low price, because the sunk cost has already been covered. They are sweating an asset, selling to people like us who want cheap disk but not the 100%-reliable coverage of a contract that gives us replication, offsite, 3-2-1-class services. If they took that into the US, the sunk-cost component would not be covered, and their sell price would be significantly less profitable.
The cost of hands, and of housing for hands? Yeah, that’s marginal in this.
A non technical person would probably Google “Hetzner Storage Box”, click the first link, and read the page that answers all of those questions.
There are many free software suites that Hetzner Storage Box supports, up to and including official support for rclone (the free tool used in the post we’re replying to).
Effectively, there was a proposed Swiss law that would have forced Protonmail to cooperate in sharing customer data with the authorities if requested.
The law hasn't passed, and it was even deemed illegal by the EU.
It did raise an interesting issue though. As Protonmail was strictly in Switzerland, they realised that they were at the whim of their lawmakers (which was kinda the point in the first place, as Switzerland has great privacy laws). However, if those laws did become adversarial, it would greatly affect Protonmail users. This is why they started diversifying some services outside of Switzerland, in case something like this ever did come to pass.
They lost thousands of emails and they treated every customer individually while blocking people from complaining on their subreddit.
Then, it was posted here on HN and they finally decided to stand up and fix their reputation by saying they care and want to do better, after months of silencing the issue as much as possible.
Oh... It appears we were talking about 2 different things. After reading what you wrote, it appears that too is a storm in a teacup.
You are complaining about them "losing thousands of emails" when that is clearly not the case. The issue was with their IMAP bridge, meaning the emails in question would have been lost on a local host, not on Protonmail, and the 'lost emails' were fully recoverable just by logging into the web interface.
comment about the linked blog post: he replaced Proton Drive with Synology, which is kinda cheating (comparing apples to apple trees). Also he did not include a cloud drive in his pricing calculations, which is also cheating...
Anyway, for anyone actually looking for good cloud drive hosting, without any BS: rsync.net (you encrypt on your side before sending anything. I use Vorta with them).
Also, the same server can be used by multiple (trusted) users, like family members etc.
As someone whose devices randomly became unverified just a few months ago: I was signed out, and when I tried to use my recovery keys I ended up authenticated, but unverified.
When attempting to verify iOS, desktop Linux didn’t work. When attempting to verify desktop Linux, desktop Windows didn’t work. When verifying Android, iOS didn’t work. Every official client on every platform was itself verified, tried a different verification method than expected, and failed.
All of this to say, this isn’t the first time this has happened to myself and others. Forcing verification is otherwise known as unexpected “offboarding”. If some verification methods have problems, publish a blog about their deprecation instead.
I love Element, but this can’t be done without prior work to address it.
I've had constant problems with the verification ever since it was introduced. As far as I can tell it hasn't improved at all. Sometimes it works, sometimes it repeatedly kicks me out moments after succeeding, and it's still prompting me to verify some old devices that I removed Element from years ago and I can't find any way to make the constant pop-ups go away (when they feel like appearing again - sometimes they go away for a couple months).
I went through the same frustration recently. I only occasionally use it, but every second or third time I have to open it up to talk in some channel I lose 30 minutes chasing my tail trying to work through the latest set of problems.
I like the idea, but the effort to reward ratio for using the product has not been good. It has caused visible churn and attrition in the few channels I’ve tried to participate in and it’s become a problem for the OSS projects I’m part of that try to use it for their communication. Of course, there are some people who like it that way and think making communication spaces difficult to access is a bonus, but that’s another topic.
I have never heard of such an issue, and have not experienced it despite intensive use, so it’s a bit strange that you and people you know have experienced this repeatedly.
Vulnerabilities can be, and often are, chained together.
While the relevant configuration does require root to edit, that doesn’t mean that editing or inserting values to dnsmasq as an unprivileged user doesn’t exist as functionality in another application or system.
There are frivolous CVEs issued without any evidence of exploitability all the time. This particular example however, isn’t that. These are pretty clearly qualified as CVEs.
The implied risk is a different story, but if you’re familiar with the industry you’ll quickly learn that there are people with far more imagination and capacity to exploit conditions you believe aren’t practically exploitable, particularly in highly available tools such as dnsmasq. You don’t make assumptions about that. You publish the CVE.
>that doesn’t mean that editing or inserting values to dnsmasq as an unprivileged user doesn’t exist as functionality in another application or system.
The developer typically defines its threat model. My threat model would not include another application inserting garbage values into my application's config, which is expected to be configured by a root (trusted) user.
The Windows threat model does not include malicious hardware with DMA tampering with kernel memory _except_ maybe under very specific configurations.
The developer is too stupid to define the threat model — they’re too busy writing vulnerabilities as they cobble together applications and libraries they barely understand.
How many wireless routers generate a config from user data plus a template? One’s lucky if they even do server-side validation ensuring CRLFs aren’t present in IP addresses and hostnames.
And if Unicode is involved … a suitcase of four-leaf clovers won’t save you.
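A sketch of that failure mode in Python (hypothetical router firmware, not any real codebase): naive interpolation lets a newline inside a “hostname” smuggle an entire extra dnsmasq directive, while a strict allowlist check rejects it.

```python
import re

# Hostname labels: letters/digits/hyphens, dot-separated, no control chars.
LABEL = r"[A-Za-z0-9]([A-Za-z0-9-]{0,61}[A-Za-z0-9])?"
HOSTNAME_RE = re.compile(rf"{LABEL}(\.{LABEL})*")

def naive_host_line(hostname: str, ip: str) -> str:
    # What templated firmware effectively does: no validation at all.
    return f"address=/{hostname}/{ip}\n"

def safe_host_line(hostname: str, ip: str) -> str:
    # Allowlist check: anything outside plain hostname syntax is rejected.
    if not HOSTNAME_RE.fullmatch(hostname):
        raise ValueError(f"invalid hostname: {hostname!r}")
    return f"address=/{hostname}/{ip}\n"

# A newline-bearing "hostname" turns one directive into two:
evil = "printer.lan\nconf-dir=/tmp/attacker"
```

With `naive_host_line`, the attacker’s `conf-dir` line lands in the generated config as its own directive; `safe_host_line` raises instead.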
Honestly, after witnessing “principal” software engineers defend storing API keys in plaintext in a database in the year of our Lord 2025, and ask how someone could possibly exploit that if they can’t access that column directly through the application, my cynicism is strong enough that I can believe even a majority of “developers” don’t know what a threat model is.
> The developer typically defines its threat model.
The people running the software define the threat model.
And CNAs issue CVEs because the developer isn’t the only one running their software, and it’s socially dangerous to allow that level of control over the narrative as it relates to security.
> The developer typically defines its threat model.
Is this the case? As we're seeing here, getting a CVE assigned does not require input or agreement from the developer. This isn't a bug bounty where the developer sets a scope and evaluates reports. It's a common database across all technology for assigning unique IDs to security risks.
The developer puts their software into the world, but how the software is used in the world defines what risks exist.
Buried in the article is the one detail that, in my mind, gives the product hope of success beyond other comparable products: WebXR.
Many incredible things are developed with a product once it hits market saturation, but it has to make it that far. The VCR saw its initial success for a reason, and these companies have danced around the elephant in the room under the guise of intentional vendor lock-in to app stores for best functionality.