
The recommendation is to not use QKD. This is the correct recommendation. QKD solves key agreement if you have an authenticated line. But authentication is the harder, more crucial problem.

Here's an interesting related aside: the likely design of a practical quantum internet would make QKD totally trivial. What a quantum internet would do is deliver kinda-noisy entangled Bell pairs to endpoints that wanted to communicate. The endpoints would then purify [1] this kinda-noisy entanglement into actually-good entanglement (e.g. from 1% error to 0.0000000000001% error). The purified Bell pairs can then be consumed in order to transmit qubits [2]. However, because of the monogamy of entanglement [3], the purification process must detect and correct eavesdropping (or else fail to produce output). So, once you have a sufficiently purified Bell pair, it can be measured to get a bit that can be used as a one-time pad. (That said, this does still assume you have an authenticated channel! Purification requires classical communication, and without authentication that communication can be man-in-the-middle'd.)

[1]: https://en.wikipedia.org/wiki/Entanglement_distillation

[2]: https://en.wikipedia.org/wiki/Quantum_teleportation

[3]: https://en.wikipedia.org/wiki/Monogamy_of_entanglement
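
To make the one-time-pad step above concrete, here's a toy Python sketch (not a physical simulation; the function names and details are purely illustrative) of measuring already-purified pairs and using the resulting shared bits as a pad:

  # Toy sketch: assumes purification already succeeded, so each shared Bell
  # pair is ideal and both endpoints see the same random bit in the Z basis.
  import secrets

  def measure_purified_bell_pairs(n):
      # Both ends of an ideal |00>+|11> pair get the same uniformly random bit.
      shared = [secrets.randbits(1) for _ in range(n)]
      return list(shared), list(shared)

  def one_time_pad(bits, key_bits):
      return [b ^ k for b, k in zip(bits, key_bits)]

  message = [1, 0, 1, 1, 0, 0, 1, 0]
  alice_key, bob_key = measure_purified_bell_pairs(len(message))
  ciphertext = one_time_pad(message, alice_key)        # Alice encrypts
  assert one_time_pad(ciphertext, bob_key) == message  # Bob decrypts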


> Of course that also means you need a mesh network topology (every node needs a direct link to the node it wants to share qubits with), so a quantum internet (interconnected network of networks) is impossible.

That is not true. A spanning tree of physical links is sufficient to make a network where anyone can talk to anyone else.

The key ingredient here is entanglement swapping [1]. Entanglement between routers A and B can be merged with entanglement between routers B and C to form entanglement between A and C. This accumulates noise, but purification can be used at each merging step to push the noise back down to 1%.

So transmitting a message looks like this: a path between the two endpoints is selected, and then entanglement swapping + purification is used to turn 1-hop entanglement into 2-hop entanglement, then into 4-hop, and so on until the entire path is spanned. Then purification + teleportation are used by the endpoints to move the message.

[1]: https://en.wikipedia.org/wiki/Entanglement_swapping
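
Here's a toy cost model in Python of that doubling schedule (purely illustrative numbers, not a physical simulation): each swap roughly adds the segment infidelities, and each purification step burns extra pairs to push the error back under a target:

  import math

  LINK_ERROR = 0.01     # kinda-noisy pairs delivered by each physical link
  TARGET = 0.01         # purify back down to this after every merge

  def span_path(hops):
      segments = [LINK_ERROR] * hops
      rounds = 0
      while len(segments) > 1:
          rounds += 1
          merged = []
          for i in range(0, len(segments) - 1, 2):
              e = segments[i] + segments[i + 1]      # swap: errors add (roughly)
              merged.append(min(e, TARGET))          # purify: push back down
          if len(segments) % 2:
              merged.append(segments[-1])
          segments = merged
      return rounds

  print(span_path(16))               # 4 rounds: 1-hop -> 2 -> 4 -> 8 -> 16
  print(math.ceil(math.log2(1000)))  # ~10 rounds even for a 1000-hop path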


The dominant cost in Shor's algorithm is the elliptic curve point addition subroutine. That subroutine can be implemented using reversible classical gates. For that kind of implementation, approximate correctness can be verified by fuzz testing classical trajectories through the subroutine.

Note you could ask the same question about Shor's original paper: how did he show the algorithm works without running it? Running X just isn't the only way to analyze X.
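
As a rough sketch of what that fuzz testing looks like (with a toy modular adder standing in for the actual elliptic curve point addition circuit):

  # The arithmetic subroutine is classically reversible, so random classical
  # inputs can be pushed through it and its inverse and checked against plain
  # modular arithmetic.
  import random

  N = 101  # toy modulus

  def add_mod(a, b):        # forward circuit: (a, b) -> (a, (a + b) mod N)
      return a, (b + a) % N

  def sub_mod(a, b):        # inverse circuit
      return a, (b - a) % N

  for _ in range(10_000):
      a, b = random.randrange(N), random.randrange(N)
      fa, fb = add_mod(a, b)
      assert fb == (a + b) % N          # matches the spec
      assert sub_mod(fa, fb) == (a, b)  # forward then inverse is the identity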


This was exactly the premise of my SIGBOVIK April Fools' paper in 2025 [1]: for small numbers, Shor's algorithm succeeds quickly when fed random samples. And when your circuit is too long (given the error rate of the quantum computer), the quantum computer imitates a random number generator. So it's trivial to "do the right thing" and succeed for the wrong reason. It's one of the many things that make small factoring/ECDLP cases bad benchmarks for progress in quantum computing.

I warned the project11 people that this would happen: that they'd be awarding the bitcoin to whoever best obfuscated that the quantum computer was not contributing (likely with the submitter fooling themselves). I guess they didn't take it to heart.

[1]: https://sigbovik.org/2025/proceedings.pdf#page=146
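
For anyone who wants to see the effect, here's a rough sketch (parameters purely illustrative) of the standard Shor postprocessing being fed uniformly random "measurement" values for N=15; a fair fraction of them still produce a factor:

  from fractions import Fraction
  from math import gcd
  import random

  N, a, m = 15, 7, 8        # factor 15 with base 7 and an 8-bit phase register

  def postprocess(y):
      # Continued-fraction the sample, guess a period r, then try gcd tricks.
      r = Fraction(y, 2 ** m).limit_denominator(N).denominator
      if r % 2 or pow(a, r, N) != 1:
          return None
      g = gcd(pow(a, r // 2, N) - 1, N)
      return g if 1 < g < N else None

  samples = [random.randrange(2 ** m) for _ in range(1000)]
  hits = [y for y in samples if postprocess(y)]
  print(f"{len(hits)} of {len(samples)} random samples produced a factor")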


You wrote that? Nice piece of work! Came here to post exactly this: that's the SIGBOVIK paper in practice.

I'm still waiting for the Quantum Bogosort version of this "factorisation". For those not familiar with the algorithm, it relies on the many-worlds interpretation and is:

  Shuffle the list randomly
  If the list is sorted, stop
  If it isn’t sorted, destroy the entire universe
Adaptation of this algorithm to factorisation is left as a homework exercise for the student.


Minor optimization: it is sufficient to merely destroy the user.


Good post. Entirely correct, and well known amongst quantum researchers, but underappreciated in general.

Grover attacks are very blatantly impractical. When someone describes Grover-type attacks in the same breath as Shor-type attacks, without caveats, that's a red flag.


> That graph suggests that even with the best error correction in the graph, it is impossible to factor RSA-4 with less than 10^4 qubits. Which seems very odd.

It's because the plot is assuming the use of error correction even for the smallest cases. Error correction has minimum quantity and quality bars that you must clear in order for it to work at all, and most of the cost of breaking RSA4 is just clearing those bars. (You happen to be able to do RSA4 without error correction, as was done in 2001 [0], but it's kind of irrelevant because you need error correction to scale, so results without it are on the wrong trendline. That's even more true for the annealing stuff Scott mentioned, which has absolutely no chance of scaling.)

You say you don't see the uranium piling up. Okay. Consider the historically reported lifetimes of classical bits stored using repetition codes on the UCSB->Google machines [1]. In 2014 the stored bit lived less than a second. In 2015 it lived less than a second. 2016? Less than a second. 2017? 2018? 2019? 2020? 2021? 2022? Yeah, less than a second. And this may not surprise you but yes, in 2023, it also lived less than a second. Then, in 2024... kaboom! It's living for hours [4].

You don't see the decreasing gate error rates [2]? The increasing capabilities [3]? The ever larger error correcting code demonstrations [4]? The front-loaded costs and exponential returns inherent to fault tolerance? TFA is absolutely correct: the time to start transitioning to PQC is now.
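
If it helps, here's the standard back-of-envelope picture of why fault tolerance behaves this way (constants purely illustrative): below threshold, every increase in code distance buys exponential suppression of the logical error rate; above threshold, bigger codes only make things worse:

  P_THRESHOLD = 0.01   # rough threshold-ish physical error rate

  def logical_error_rate(p_physical, distance):
      # Values above ~0.5 just mean the code has failed completely.
      return 0.1 * (p_physical / P_THRESHOLD) ** ((distance + 1) // 2)

  for d in (3, 7, 11, 15, 19, 23, 27):
      below = logical_error_rate(0.005, d)   # shrinks exponentially with d
      above = logical_error_rate(0.020, d)   # blows up with d
      print(d, below, above)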

[0]: https://www.nature.com/articles/414883a

[1]: https://algassert.com/assets/2025-12-24-qec-foom/plot-half-l... (from https://algassert.com/post/2503 )

[2]: https://arxiv.org/abs/2510.17286

[3]: https://www.nature.com/articles/s41586-025-09596-6

[4]: https://www.nature.com/articles/s41586-024-08449-y


The newest transaction mechanism (taproot; P2TR) exposes the public key of the receiver as part of the transaction. If it becomes more commonly used, the supply of bitcoins with exposed public keys would start going up again. See figure 5 of https://arxiv.org/pdf/2603.28846#page=14 .


Caution: that 10M estimate assumes gate error rates 10x lower than the ones assumed in the papers from TFA.


You are assuming that progress on factoring will be smooth, but this is unlikely to be true. The scaling challenges of quantum computers are very front-loaded. I know this sounds crazy, but there is a sense in which the step from 15 to 21 is larger than the step from 21 to 1522605027922533360535618378132637429718068114961380688657908494580122963258952897654000350692006139 (the RSA100 challenge number).

Consider the neutral atom proposal from TFA. They say they need tens of thousands of qubits to attack 256 bit keys. Existing machines have demonstrated six thousand atom qubits [1]. Since the size is ~halfway there, why haven't the existing machines broken 128 bit keys yet? Basically: they need to improve gate fidelity, integrate various pieces that have so far only been demonstrated separately, and solve some other problems. These dense block codes have minimum sizes and minimum qubit qualities you must satisfy in order for the code to function. In that kind of situation, gradual improvement can take you surprisingly suddenly from "the dense code isn't working yet so I can't factor 21" to "the dense code is working great now, so I can factor RSA100". Probably things won't play out quite like that... but if your job is to be prepared for quantum attacks then you really need to worry about those kinds of scenarios.

[1]: https://www.nature.com/articles/s41586-025-09641-4


The best proposal I have heard for rescuing P2SH wallets after cryptographically relevant quantum computers exist is to require vulnerable wallets to precommit to transactions a day ahead of time. The precommitment doesn't reveal the public key. When the public key must be exposed as part of the actual transaction, an attacker cannot redirect the transaction for at least one day because they don't have a valid precommitment to point to yet.


That’s kind of adorable. Would you need to pay to record a commitment? If so, how? If not, what stops someone from DoSing the whole scheme?


I don't think you're understanding how cryptography works. A commitment is basically a hash that is both binding and hiding. In this example it's probably easiest to think of it as a hash. So you hash your post-quantum public key (something like falcon-512), sign that hash with your actual bitcoin private key (ecdsa, discrete-log, not quantum safe), and then publish that message to the bitcoin network. Then quantum happens at some point and bitcoin needs to migrate, but where do the funds go? Well, you reveal the post-quantum public key and then you can prove that funds from the ecdsa key should go there. From a technical perspective, this is a complete and foolproof system. DoSing isn't really a concern if you publish to the actual bitcoin network, and it's impossible for someone to use up the key space (2^108 combinations at least).
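
A minimal commit/reveal sketch, assuming a salted hash as the commitment (the post-quantum key here is just placeholder bytes, and the ECDSA signing/publishing step is only described in comments):

  import hashlib, secrets

  def commit(pq_public_key: bytes) -> tuple[bytes, bytes]:
      salt = secrets.token_bytes(32)
      commitment = hashlib.sha256(salt + pq_public_key).digest()
      # The commitment would be signed with the old ECDSA key and published;
      # the salt stays private until the reveal.
      return commitment, salt

  def verify(commitment: bytes, salt: bytes, pq_public_key: bytes) -> bool:
      return hashlib.sha256(salt + pq_public_key).digest() == commitment

  falcon_public_key = secrets.token_bytes(897)   # placeholder, not a real key
  c, salt = commit(falcon_public_key)
  assert verify(c, salt, falcon_public_key)      # the reveal checks cheaply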

The reason this is a dumb idea is coordination and timing. When does the cutover happen? Who decides which transactions no longer count because they were "broken" by quantum computing? The idea is broken, but not because of technical fundamentals.


The DoS attack in this scenario is someone just submitting reasonable-looking but ultimately bad precommitments as fast as possible. The intuition is that precommitments must be hard to validate because, if there was an easy validation mechanism, you would have just used that mechanism as the transaction mechanism. And so all these junk random precommitments look potentially legitimate and end up being stored for later verification. So all you have to do to take down the system is fill up the available storage with junk, which (given the size of bot networks and the cost of storing something for a day) seems very doable.


If the question is storage, bitcoin itself provides a perfectly good mechanism. idk the exact costs but it'd be in the range of ~$0.45 to store a commitment. That's cheap enough to enable good users with small numbers of keys but also expensive enough to prevent spam. It's kind of the whole point of blockchains.

As for verification being expensive, it sounds like you don't know the actual costs. It's basically a hash. Finding the pre-image of a hash is very expensive, to the point of being impossible. Verifying that a pre-image run through the hash function produces a given hash is extremely cheap. That's the whole point of 1-way functions. Bitcoin itself is at ~1000 EH/s (exahashes per second).

Again, this isn't a technical problem. It's a coordination problem.


Commitment validation would indeed be trivial.

This whole scheme fails if an attacker can manage to delay a transaction for a day. If the commitment also commits to a particular transaction fee, then the user trying to rescue their funds can't easily offer a larger fee when their transaction is delayed. But if the commitment does not commit to a transaction fee, then an attacker could force the fee to increase arbitrarily.

Maybe the right strategy would be to commit, separately, to multiple different choices of transaction fee.


Yes, that would be a concern. You could require a proof of work to submit a precommitment, so that DoSing was at least expensive to do. You could have some sort of deposit mechanism, where a precommitment would lock down 0.1 bitcoins (from a quantum-secure wallet) until the precommitment was used. I admit I'm glad I don't have to figure out those details.
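
A hashcash-style check is one way to sketch the proof-of-work option (difficulty purely illustrative): a precommitment is only accepted alongside a nonce whose hash clears a difficulty bar, so flooding the system with junk costs real compute while verification stays cheap.

  import hashlib
  from itertools import count

  DIFFICULTY_BITS = 20   # illustrative; real deployments would tune this

  def mine(commitment: bytes) -> int:
      # Slow-ish: expects about 2^20 hash evaluations.
      for nonce in count():
          digest = hashlib.sha256(commitment + nonce.to_bytes(8, "big")).digest()
          if int.from_bytes(digest, "big") >> (256 - DIFFICULTY_BITS) == 0:
              return nonce

  def check(commitment: bytes, nonce: int) -> bool:
      # Cheap: a single hash evaluation.
      digest = hashlib.sha256(commitment + nonce.to_bytes(8, "big")).digest()
      return int.from_bytes(digest, "big") >> (256 - DIFFICULTY_BITS) == 0

  nonce = mine(b"some precommitment bytes")
  assert check(b"some precommitment bytes", nonce)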


24-hour latency to make a payment? What is this, the 20th century?


This is for rescue, not for payment. Once you've moved the coins to a quantum-secure wallet, the delay would no longer be needed.

...probably some people would be very inconvenienced by this. But not as inconvenienced as having the coins stolen or declared forever inaccessible.


> ...probably some people would be very inconvenienced by this. But not as inconvenienced as having the coins stolen or declared forever inaccessible.

I don't know why anyone f's around with crypto anymore. So many caveats, such a scammy ecosystem. It just doesn't seem worth the trouble to support a ransomware and money laundering tool.


> [0.1% gate error rate] is still wildly out of reach

This is false. When Fowler et al. assumed 0.1% gate error rates would be reached for their estimates in 2012 [0], that was optimistic. Now it's frankly a bit overly conservative. All the big architectures are approaching or surpassing 0.1% gate error rates.

From 2022 to 2024, the Google team improved its mean two-qubit gate error rate from 0.6% [1] to 0.4% [2]. Quantinuum's Helios has a two-qubit gate error rate of 0.08% [3]. IBM has Heron processors available on their cloud service with two-qubit gate error rates ranging from 0.2% to 0.7% [4]. Neutral atom machines have demonstrated 0.5% gate error rates [5].

[0]: https://arxiv.org/abs/1208.0928

[1]: fig 1c of https://arxiv.org/pdf/2207.06431

[2]: fig 1b of https://arxiv.org/pdf/2408.13687

[3]: https://arxiv.org/abs/2511.05465

[4]: https://quantum.cloud.ibm.com/computers?processorType=Heron (numbers may vary as the website is not static)

[5]: https://arxiv.org/abs/2304.05420


I can think of a case where it turned out that there was some aspect of the noise performance that made the technology unsuitable for running Shor's algorithm. So would one of the presented low noise approaches actually work for Shor's?

