Breaking a 512-bit key for a good demonstration is very valuable security research, even if it's been done before. It's also legit to call out "Hey, here's a list of folks still using 512-bit keys, they should move off." ... but for me, actually cracking a real-world, in-use key crosses an ethical line that makes me uncomfortable. IANAL, but it might even be criminal. It just seems a bit unnecessary.
> for me, actually cracking a real-world in-use key crosses an ethical line that makes me uncomfortable
They reported the vulnerability to the company, and it was resolved before the article was published - search the original article for the substring "now no longer available".
Usually, you demonstrate that an online system is vulnerable by exploiting that vulnerability in good faith, documenting the research, and submitting it for review. It does not matter if you're cracking an encryption scheme, achieving custom code execution for a locked-down game console, proving that you can tamper with data in a voting machine, or proving that you can edit people's comments on a Google Meet Q&A session - the process is the same.
If you say something's vulnerable, people can shrug it off. If you say and prove something's vulnerable, the ability to shrug it off shrinks. If you say and prove something's vulnerable and that you'll publish the vulnerability - again, using the industry standard of disclosure deadlines and making the vulnerability public after 60-or-so days of attempting to contact the author - the ability to shrug it off effectively disappears.
I read the article, and I don't think it changes my view. If you crack someone's key, they might be well within their rights to pursue criminal prosecution. Of course that would also have a Streisand effect, and there are reasons not to, but I personally wouldn't allow or recommend that a security researcher do it. It's needlessly risky.
In general, subverting security and privacy controls tends to be illegal in most jurisdictions. The best case is when you have clear permission or consent to do some testing. Absent that, there's a general consensus that good-faith searching for vulnerabilities is ok, as long as you report findings early and squarely. But if you go on to actually abuse the vulnerability to spy on users, look at data, etc., you've crossed a line. For me, cracking a key is much more like that second case. Now you have a secret that can be used for impersonation and decryption. That's not something I'd want to be in possession of without permission.
> If you crack someone's key, they might be well within their rights to pursue a criminal prosecution.
If that were true, there would be no market for white-hat hackers collecting bug bounties. You need to be able to demonstrate the crack against the working system for it to be of any use at all. No company will listen to a theoretical exploit, but show them that you can actually break their system and they will pay you well for disclosure.
What is or isn't illegal depends on where you live. Where I live, using any kind of digital secret to do something you shouldn't be doing is technically illegal. Guessing admin/admin or guest/guest is illegal, even if they're public knowledge, as long as you could reasonably know you're not supposed to log in.
Generally, law enforcement and judges don't blame you as long as you use best practices, but you need to adhere to responsible disclosure very strictly in order for this not to be something the police might take an interest in.
Demonstrating the insecurity of a 512-bit key is easy to do without cracking a real-life key someone else owns: just generate your own to show it can be done, then use that as proof when reporting these issues to other companies. The legally safest method may be to start cracking real keys only if they ignore you or deny the vulnerability, or simply to report the fact that you can do it and that the company or companies you've reached out to deny the security risk.
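To sketch what "generate your own" looks like in practice, here's a minimal example (assuming Python with the cryptography package; the selector and domain in the comment are hypothetical) that creates a deliberately weak 512-bit RSA keypair and prints the matching DKIM TXT record, so you can publish it on a domain you control and run the whole demonstration against a key you own:

    import base64
    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric import rsa

    # Deliberately weak: 512 bits is far below the 1024-bit floor the
    # DKIM RFCs now require, but libraries will still generate it.
    key = rsa.generate_private_key(public_exponent=65537, key_size=512)

    pub_der = key.public_key().public_bytes(
        encoding=serialization.Encoding.DER,
        format=serialization.PublicFormat.SubjectPublicKeyInfo,
    )

    # The p= tag of a DKIM TXT record is just this base64 blob; publish
    # it at e.g. test._domainkey.yourdomain.example and crack away at
    # your own key.
    print("v=DKIM1; k=rsa; p=" + base64.b64encode(pub_der).decode())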
Companies that pay for disclosure won't get you into trouble either way, but companies run by incompetent people will panic and turn to law enforcement quickly. White-hat hackers get sued and arrested all the time. You may be able to prove you're right in the courtroom, but at that point you've already spent a ton of money on lawyers and court fees.
In this case, the risk is increased by not only cracking the key (which, it can be argued, is enough proof already - just send them their own private key), but also using it to impersonate them to several mail providers to check which ones accept the cracked key. That last step could easily have been done using one's own domains, with impersonation kept as a last resort to prove an issue is valid if the company you're reporting it to denies the risk.
> Demonstrating the insecurity of a 512 bit key is easy to do without cracking a real life key someone else owns; just generate your own to show it can be done
As I said in my post, no company will listen to your hypothetical exploit.
Show them you've hacked their system and they listen.
Bug bounties are a form of consent for testing and usually come with prescribed limits. Prescribed or not, actually getting user data tends to be a huge no-go. Sometimes it can happen inadvertently, and when that happens it's best to have logs or evidence that can demonstrate you haven't looked at the data or copied it beyond the inadvertent disclosure.
But to pursue data deliberately crosses a bright line, and it is not necessary for security research. Secret keys are data that can be used to impersonate or decrypt. I would be very, very careful.
I see it the other way around. If some hacker contacted me and proved they had cracked my business's encryption keys and was looking for a reward, I don't think I would be looking to prosecute them and antagonise them further.
They can pursue whatever they want; it doesn't mean it will go through.
Looking at public data and using other public knowledge to figure out something new does not make it inherently illegal. They didn't crack it on the company's systems, they didn't subvert those systems, and they did not use it against those systems. I'd love to see some specific examples of what exactly it could be prosecuted under. Because "that door doesn't actually have a lock" or "the king doesn't actually have clothes" is not, on its own, practically prosecutable anywhere normal.
Especially in the EU, making such cryptographic blunders might even fall foul of NIS2, should it apply to you.
It's more like the door has a weak lock that can be picked - just like many real-world doors do. Here's how it would go in court:
"Are you aware that this key could be used to decrypt information and impersonate X?"
"Are you aware that this key is commonly called a Private key?"
"Are you aware that this key is commonly called a Secret key?"
"Are you aware that it is common to treat these with high sensitivity? Protecting them from human eyes, using secure key management services and so on?"
"Was it even necessary to target someone else's secret private key to demonstrate that 512-bit keys can be cracked?"
"Knowing all of this, did you still willfully and intentionally use cracking to make a copy of this secret private key?"
I wouldn't want to be in the position of trying to explain to a prosecutor, judge, or jury why it's somehow ok and shouldn't count. The reason I'm posting here at all is that I don't think folks are thinking this risk through.
If you want to continue with the analogies, looking at a lock and figuring out it's fake does not constitute a crime.
That key cannot be used to decrypt anything. Maybe to impersonate, but the researchers haven't done that. It's also difficult to claim something is very sensitive, private, or secure if you're publicly broadcasting it, given that the operation to derive one key from the other is so absolutely trivial.
And they did not make a copy of the company's private key, and they did not access its systems in a forbidden way. They calculated a new key from publicly accessible information, using publicly known math. It's like visually looking at something and then thinking about it hard.
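For context on just how public the starting material is: the DKIM public key sits in an ordinary DNS TXT record anyone can query. A minimal sketch, assuming dnspython and the Python cryptography package (the selector and domain are hypothetical placeholders):

    import base64
    import dns.resolver  # dnspython
    from cryptography.hazmat.primitives.serialization import load_der_public_key

    def dkim_key_bits(selector: str, domain: str) -> int:
        """Fetch a DKIM TXT record and return its RSA modulus size."""
        name = f"{selector}._domainkey.{domain}"
        txt = b"".join(
            b"".join(rdata.strings) for rdata in dns.resolver.resolve(name, "TXT")
        ).decode()
        # DKIM records are semicolon-separated tag=value pairs; the
        # public key is the base64 blob in the p= tag.
        tags = dict(t.strip().split("=", 1) for t in txt.split(";") if "=" in t)
        return load_der_public_key(base64.b64decode(tags["p"])).key_size

    # Anything under 1024 bits is below the floor set by RFC 8301.
    print(dkim_key_bits("selector1", "example.com"))

Everything the researchers started from is retrievable that way; the contested part is only the factoring step that turns the modulus into a private key.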
I wouldn't want to explain these things either, but such a prosecution would be both bullshit and a landmark case at the same time.
Breaking a key isn't criminal. It's just math. Sending emails that would constitute fraud or violate some other law in your jurisdiction, relying on that math, is illegal. But again: it's not the math, it's the action and intent.
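To make "it's just math" concrete, here's the textbook step with toy numbers (illustrative primes only, not a real key; for an actual 512-bit modulus the factoring itself would be done with a tool like CADO-NFS): once the modulus is factored, the private exponent is one modular inverse away.

    # Toy RSA: given the factors p and q of the public modulus n and the
    # public exponent e, recovering the private exponent d is arithmetic.
    p, q, e = 61, 53, 17
    n = p * q                  # 3233, the "public" part
    phi = (p - 1) * (q - 1)    # Euler's totient of n
    d = pow(e, -1, phi)        # d = e^-1 mod phi(n), Python 3.8+

    msg = 42
    assert pow(pow(msg, e, n), d, n) == msg  # encrypt-then-decrypt round trip
    print(f"recovered private exponent d={d}")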
Pointing out that someone is doing something stupid is also not illegal, though they may try to make your life painful for doing so.
Secrets themselves are often regarded as sensitive information, and are often categorized in the highest classifications for information protection. This stuff isn't just a game, and in my experience that's not how law enforcement authorities or courts see things.
It's just as if you made a copy of the physical key to a business's real-world premises: you could well find yourself in trouble, even if you never "use" it.
You can say this about a lot of security research, e.g. "Showing a bug is exploitable is one thing, but writing a PoC is crossing the line!" The problem is that most people won't listen to your security research unless you demonstrate that something is actually possible.
They only called out 1 of the ~1.7k domains they found with keys shorter than 1024 bits.
They _did_ call out 3 of the 10 major email providers they tested that don't follow the DKIM RFC properly: Yahoo, Tuta, and Mailfence (after having notified them).
I am also not a lawyer, but I would suspect it's not criminal to crack a real-world in-use key if you do so using only publicly available information and you don't actually do anything with the result.
Let's say your local coffee shop is featured in a local news piece, and the blithe owner happened to get photographed holding the store key. That key is now easy to clone from public information. Would you be comfortable actually doing it? Reporting the issue is fine - "Hey, you should rekey your locks!"
Actually making a clone of the key and then showing "hey, it fits" will get you more traction more quickly ... but there are also plenty of police departments who might well arrest you for that.
That's exactly what I meant in terms of not actually doing anything with the result. That said, it's obviously somewhat different with a physical key than a cryptographic key.
I don't think there's a way to make it criminal, any more than publishing a vulnerability that got you code execution on their servers could be. Actually exploiting it, of course, would be.
I think many in cryptography would see cracking a key as a precondition to "actually exploiting it" ... because you've only gotten a cryptographic secret, not "actual data".
But I think many others, and many in law enforcement, will see cracking a key as "actually exploiting it". You've exploited the cracking vulnerability to target a particular key, is how they'll see it. Law enforcement also has a natural incentive to want mere possession of harm-adjacent paraphernalia to carry substantial liability.
I think they have a point: that key is private data, and there's a reason people lock keys up in KMSs and HSMs. Keys can have a large blast radius and be hard for companies to revoke and rotate. Importantly, a compromise of a key will often trigger notification requirements, so now it is a breach or an incident, in a way that a good-faith security vulnerability report is not.
To take an extreme example: if you were to crack an important key for a government agency, good luck with that is all I'll say. I sure wouldn't.
The US law most at play in criminal prosecutions over computer usage is the CFAA, and a clear CFAA predicate is intentional access to a protected computer (18 USC 1030(a)(2)). That access requirement is what makes vulnerability research on things like Chrome vulnerabilities generally safe (as long as you don't knowingly spirit exploits to people who are actually attacking people), while vulnerability research on other people's websites (looking for SQLI and SSRF) is risky.
The CFAA has a clause about "trafficking in passwords or similar information" (18 USC 1030(a)(6)), but the mental-state requirements are very high: the trafficking has to be knowing and with intent to defraud (an intent prosecutors would have to prove at trial).
There might be some state law somewhere that makes this risky, but virtually every hacking prosecution in the US anyone has heard of happens under CFAA. I'm not a lawyer, but I've spent a lot of time with CFAA, and I think cracking DKIM keys is pretty safe.
By this interpretation it would be perfectly legal to abuse a wifi encryption vulnerability to spy on your neighbors, because that doesn't involve accessing a computer of theirs.
My understanding, and IANAL, is that decrypting things that aren't yours is a bad idea and is covered mainly by the electronic communications and wiretap statutes, e.g. 18 U.S. Code § 2511 and others.
A wifi encryption vulnerability that required you to interact with a base station or remote computer would implicate CFAA. A wifi encryption vulnerability that allowed for pure passive interception --- a devastating flaw in 802.11/WPA3 --- might not actually violate any federal law directly. There are probably state laws (I believe Michigan has one) that implicate packet sniffing directly (they were problematic in the early oughts for security researchers).
Worth remembering: when CFAA was originally passed, an objection to it was "we already have laws that proscribe hacking computers"; the fraud statutes encompass most of this activity. CFAA's original motivation was literally WarGames: attacks with no financial motivation, just to mess things up. So even without statutory issues, breaking an encryption key and using it to steal stuff (or to gain information and ferry it to others who will use it for crimes) is still illegal.
Your guess is as good as mine about whether ECPA covers wifi sniffing. But: presuming you obtain an encryption key through lawful means, ECPA can't (by any obvious reading) make cracking that key unlawful; it's what you'd do with the key afterwards that would be problematic.
My understanding of the ECPA and other acts is that you can't intercept, decode, or receive by other intentional means any information or communications that aren't "generally accessible" without permission. It's pretty broad and doesn't care about the "how".
Private keys are not "generally accessible" and my concern is that the authorities will see cracking the key itself as issue enough, and unlawful. If a security researcher triggers painful breach notifications, which could well happen for a compromised private key, I don't think it's unthinkable at all that an upset target will find a DA who is happy to take this interpretation.
I don't think this specific DKIM case is particularly high-risk, but I still wouldn't do it without permission from the key holder.
As mentioned in the article, the key is no longer available, and I presume the article was released only after responsible disclosure was done and with the approval of the key owner (I hope so). At this point, it's no more unethical than any other security research performed under the same conditions and with the same outcome.