> [CRQCs] will be slow, expensive, and power hungry for at least a decade
How could you know that? What if it was 5 years? 1 year? 6 months?
I predict there will be an insane global pivot once Q-day arrives. No nation wants to invest billions in science fiction. Every nation wants to invest billions in a practical reality of being able to read everyone's secrets.
The absolute low end for the cost of a QC is roughly the cost of an MRI machine, ~$100k-400k (driven by the cost of cooling the computer to extremely low temperatures). Sure, we expect QCs to get faster and cheaper over time, but putting 100% faith in the security of the PQC algorithms seems like a bad idea with no upside.
It is the paradox of PQC: from a classical security point of view PQC cannot be trusted (except for hash-based algorithms which are not very practical). So to get something we can trust we need hybrid. However, the premise for introducing PQC in the first place is that quantum computers can break classical public key crypto, so hybrid doesn't provide any benefit over pure PQC.
Yes, the sensible thing to do is hybrid. But that does assume that either PQC cannot be broken by classical computers or that quantum computers will be rare or expensive enough that they don't break your classical public key crypto.
I don't think you said (or cited) what you think you said.
Leaving aside that you actually didn't cite a lattice attack paper, the "dual attack" on lattice cryptography is older than P-256 was when Curve25519 was adopted to replace it. It's a model attack, going all the way back to Regev. It is to MLKEM what algebraic attacks were (are?) to AES.
You know you're in trouble in these discussions when someone inevitably cites SIDH. SIDH has absolutely nothing to do with lattices; in fact, it has basically nothing to do with any other form of cryptography. It was a wildly novel approach that attracted lots of attention because it took a form that was pin-compatible with existing asymmetric encryption (unlike MLKEM, which provides only a KEM).
People who bring up SIDH in lattice discussions are counting on non-cryptography readers not to know that lattice cryptography is quite old and extremely well studied; it was a competitor to elliptic curves for the successor to RSA.
With that established: what exactly is the point you think those three links make in this discussion? What did you glean by reading those three papers?
He's obviously not saying that you can "trust blindly" any PQ algorithm out there, just that there are some that have appeared robust over many years of analysis.
He is assessing that the risk of seeing a quantum computer break dlog cryptography is higher than the risk of having post-quantum assumptions broken, in particular for lattices.
One can always debate, but we have seen more post-quantum assumptions break during the last 15 years than we have seen concrete progress in practical quantum factorisation (I'm not talking about the theory).
It's purely a matter of _potential_ issues. The research on lattice-based crypto is still young compared to EC/RSA. Side channels, hardware bugs, unexpected research breakthroughs all can happen.
And there are no downsides to adding regular classical encryption. The resulting secret will be at least as secure as the _stronger_ of the two algorithms.
The overhead of additional signatures and keys is also not that large compared to regular ML-KEM secrets.
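To make the "at least as secure as the stronger algorithm" point concrete, here is a minimal sketch of a hybrid combiner. It assumes you already hold the two shared secrets (one from classical ECDH, one from ML-KEM); the function name, secret sizes, and context string are illustrative, not taken from any specific standard.

```python
import hashlib

def combine_secrets(ecdh_secret: bytes, mlkem_secret: bytes, context: bytes) -> bytes:
    """Derive one session key from both shared secrets.

    An attacker must recover *both* inputs to learn the output, so the
    result is at least as strong as the stronger component.
    """
    h = hashlib.sha3_256()
    # Length-prefix each input so distinct (a, b) pairs can't collide
    # by shifting bytes between the two secrets.
    h.update(len(ecdh_secret).to_bytes(2, "big") + ecdh_secret)
    h.update(len(mlkem_secret).to_bytes(2, "big") + mlkem_secret)
    h.update(context)
    return h.digest()

key = combine_secrets(b"\x01" * 32, b"\x02" * 32, b"example-protocol-v1")
assert len(key) == 32
```

Real protocols (e.g. TLS hybrid key exchange drafts) do essentially this, just with a proper KDF and transcript binding.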
No it's not. This is the wrong argument. It's telling how many people trying to make a big stink out of non-hybrid PQC don't even get what the real argument is.
Perhaps you would care to enlighten us ignorant plebs rather than taunting us?
My understanding (obviously as a non-expert) matches what cyberax wrote above. Is it not common wisdom that the pursuit of new and exciting crypto is an exercise filled with landmines? By that logic, rushing to switch to the new shiny would appear to be extremely unwise.
I appreciate the points made in the article that the PQ algorithms aren't as new as they once were and that if you accept this new imminent deadline then ironing out the specification details for hybrid schemes might present the bigger downside between the two options.
I mean TBH I don't really get it. It seems like we (as a society or species or whatever) ought to be able to trivially toss a standard out the door that's just two other standards glued together. Do we really need a combinatoric explosion here? Shouldn't 1 (or maybe 2) concrete algorithm pairings be enough? But if the evidence at this point is to the contrary of our ability to do that then I get it. Sometimes our systems just aren't all that functional and we have to make the best of it.
"taunt" in the sense that you dangle some knowledge in front of people and make them beg, not "taunt" in the sense of "insult".
You said:
>"[...] don't even get what the real argument is."
and then refuse to explain what the "real" argument is. someone then asks for clarification and you say:
"It's definitely not [...]""
okay, cool! you are still refusing to explain what the "real" argument is. but at least we know one thing it isnt, i guess.
you haven't even addressed the "mistaken assertion". you just say "nah" and refuse to elaborate. which is fine, i guess. but holy moly is it ever frustrating to read some of your comment chains. it often appears that your sole goal in commenting is to try and dunk on people -- at least that is how many of your comments come across to me.
I was explicit about what the real argument isn't: the notion that lattice cryptography is under-studied compared to RSA/ECC.
I understand what your takeaway from this thread is, but my perspective is that the thread is a mix of people who actually work in this field and people who don't, both sides with equally strong opinions but not equally strong premises. The person I replied to literally followed up by saying they don't follow the space! Would you have assumed that from their preceding comment?
(Not to pick on them; acknowledging that limitation on their perspective was a stand-up move, and I appreciate it.)
You do "XYZ isn't the right argument, ABC is" on a thread like that, and the reply tends to be "well yeah that's what I meant, ABC is just a special case of XYZ". No thanks.
I'm not a professional cryptographer, but I _am_ really interested in opinions of experts in the field and I do have a lot of prior experience with crypto (the actual kind, not *coin). From my point of view, I just don't see what all the fuss is about.
There's no shared understanding, just a snarky expert claiming (in effect) "I know better than all you simpletons but I'm not going to share". At best it's incredibly poor behavior. At worst it's the behavior of someone who doesn't actually have a defensible point to make.
As far as I know, the currently standardized lattice methods are not known to be vulnerable? And the biggest controversy seemed to be the push for inclusion of non-hybrid methods?
I'm not following crypto closely anymore, I stopped following the papers around 2014, right when learning-with-errors started becoming mainstream.
We can disagree on the tradeoff, but if you see no upside, you are missing the velocity cost of the specification work, the API design, and the implementation complexity. Plus the annoying but real social cost of all the bikeshedding and bickering.
All of those costs are at least as high for non-hybrid. The spec and API are just as easy to design (because we have really good and simple ECC libraries), and the bikeshedding and bickering will be a lot less if people stop trying to force pure PQC algorithms that lots of people see as incredibly risky for incredibly little benefit.
It's not a tautology because it's not guaranteed. There are plenty of plausible sounding claims that fail to be true. That's why science is needed: to provide _empirical_ evidence for/against a claim.
Not to be an "uhm actually" guy, but this goes into a lot of interesting philosophy from the first half of the 20th century. You would probably agree that "a fish is a fish" is a tautology, but for more complicated things it gets murkier and murkier. Separating the tautologies from the non-tautologies was a big effort. Then Quine came along, and a big portion of people migrated away from the distinction.
I dabble in "um actually"s myself (especially given that my original comment was one), so no worries :)
I don't disagree with your comment exactly. But I primarily wanted to push back on a common response to scientific works. Something to the effect of "Well obviously, everyone knew that!".
Except they didn't because they (presumably) didn't actually investigate. And even after the science, they still don't _know it_ know it. But post-scientific inquiry, they have a much stronger claim to the knowledge than they did before. So the type of dismissal in the root comment is seriously missing the point.
I'd be interested in knowing what the CO2 emissions were from these. You still need to feed the yeast, so you'll have the CO2 emissions involved in growing a crop associated with this. And if you look at the chart in the OP, you'll see that grain production is about half the CO2 emissions of milk. That's likely part of the milk CO2 production accounting.
In addition, you'll need more cleaning/sterilization/mixing. I'd guess that it's lower, but I wonder how much lower.
And then there's the other products that generally get thrown into the mix to make up for things like missing fats. For example, a vegan cheese based on bacteria will often include coconut oil, probably to get the same fat profile.
Whey is an interesting product in general because it's a waste product of cheese making.
Feed efficiency is critical when doing these calculations, as cows inherently need energy to survive, not just to produce milk. As such, even if you use the same crop, two different sources of protein can have wildly different levels of CO2 emissions embedded in their creation. https://en.wikipedia.org/wiki/Feed_conversion_ratio
I think it is likely more efficient. That said, cows do have the advantage that the food they consume needs little to no processing in order to produce milk. The yeast needs pretty precise processing of the incoming mash, both to make sure a wild yeast strain doesn't make its way in, and to make sure the yeast ultimately produces the right proteins.
You can't just throw grass clippings into a vat and get whey. You can throw grass clippings into a cow to get milk (though, TBF, I dislike grassy milk).
I agree it’s likely to be more labor intensive per lb of feedstock, but only 21% of calories in milk are protein and overall milk has ~10% of the initial energy. So you’re looking at ~2% of the energy from these crops ending up as milk protein.
That’s a lot of room for improvement which then means far less labor on growing crops.
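A quick back-of-envelope check of those numbers (the percentages are the estimates above, not measured data):

```python
# Estimates from the comment above: ~21% of milk calories are protein,
# and milk retains ~10% of the energy in the feed crops.
protein_fraction_of_milk_calories = 0.21
milk_energy_fraction_of_feed = 0.10

# Fraction of the original crop energy that ends up as milk protein.
protein_energy_fraction = (
    protein_fraction_of_milk_calories * milk_energy_fraction_of_feed
)
print(f"{protein_energy_fraction:.1%} of feed energy ends up as milk protein")
# → 2.1% of feed energy ends up as milk protein
```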
Cows are pretty terrible because of methane from their burps (not farts; burps). People are working on that, but it's still real. A 50% drop would be very significant.
Ohhh thank you! I thought the same as the parent comment: I expected that button to turn off the animation immediately. I guess the author wanted the yellow background to "melt" the snowflakes?
I used this recently for my resume and I recommend it.
I have the technical background to write Latex and Typst documents but I honestly didn't want the headache. Plus I'm the type to futz with styling all day long instead of putting down actual content. RenderCV was simple to use and did exactly what I wanted.
You don't have that power: you'll be beaten by your adversaries unless you only target weak people. And then you'll be arrested. You don't have the power you claim to have. You can't punch people.
> as a code reviewer [you] are only expected to review the code visually and are not provided the resources required to compile the code on your local machine to see the compiler fail.
As a PR reviewer I frequently pull down the code and run it. Especially if I'm suggesting changes because I want to make sure my suggestion is correct.
I don't commonly do this and I don't know many people who do this frequently either. But it depends strongly on the code, the risks, the gains of doing so, the contributor, the project, the state of testing and how else an error would get caught (I guess this is another way of saying "it depends on the risks"), etc.
E.g. you can imagine that if I'm reviewing changes in authentication logic, I'm obviously going to put a lot more effort into validation than if I'm reviewing a container and wondering if it would be faster as a hashtable instead of a tree.
> because I want to make sure my suggestion is correct.
In this case I would just ask "have you already also tried X" which is much faster than pulling their code, implementing your suggestion, and waiting for a build and test to run.
I do too, but this is a conference, I doubt code was provided.
And even then, what you're describing isn't review per se, it's replication. In principle there are entire journals that one can submit replication reports to, which count as actual peer-reviewable publications in themselves. So one needs to be pragmatic about what is expected from a peer review (especially given the imbalance between the resources invested to create one and the lack of resources offered or any meaningful reward).
> I do too, but this is a conference, I doubt code was provided.
Machine learning conferences generally encourage (anonymized) submission of code. However, that still doesn't mean that replication is easy. Even if the data is also available, replication of results might require impractical levels of compute power; it's not realistic to ask a peer reviewer to pony up for a cloud account to reproduce even medium-scale results.
No, because this is usually a waste of time: CI enforces that the code and the tests run at submission time. If your CI isn't doing that, you should put some work in to configure it.
If you regularly have to do this, your codebase should probably have more tests. If you don't trust the author, you should ask them to include test cases for whatever it is that you are concerned about.
If there’s anything I would want to run to verify, I ask the author to add a unit test. Generally, the existing CI test + new tests in the PR having run successfully is enough. I might pull and run it if I am not sure whether a particular edge case is handled.
Reviewers wanting to pull and run many PRs makes me think your automated tests need improvement.