Valid Signal privacy issues shrugged off while patches quietly rolled out (403forbiddenblog.blogspot.com)
114 points by Nevor on June 5, 2021 | 50 comments


(written quickly via mobile, same as my tweet replies on this one)

~~

hi there! signal did not start silently rolling out patches because there is nothing here to patch. friday’s releases were part of our regular cadence of shipping features and improvements to the apps.

by design, SNs don't change when doing a signal device transfer or when making a linked device change, because the key material doesn't change. we explained this several times and even added it to our support article/FAQ. no behavior here has changed.



I was expecting to hear about someone who got a new phone, set up Signal using the phone number, and then communicated with contacts without any more authentication.

This doesn't appear to be anything of the kind. Signal has implemented a migration system so that people can move to a new phone without changing the safety number.

The article then makes a bunch of references to the promises Signal makes about re-verifying if the safety number changes. Since the safety number doesn't change, none of those actually apply.


I imagine there is a human-factors tradeoff here. If Signal notified on each migration, users with even medium-sized contact lists would likely be getting constant notifications, making the alert useless. See the experience with medical device or aviation alarms.


The problem is that there's no one-size-fits-all solution. Security is all about making tradeoffs based on your threat models.

I think this is probably a sensible default for many people engaging in casual conversations.

However, this notification should probably be configurable. That way you can be notified when communicating with high-risk individuals.


What threat model would it protect against?

The thing this proposed change would protect against: if an attacker gets access to a participant's unlocked phone and then transfers their Signal account, the other participants would be notified.

But an attacker can just as easily retain access to the original phone and send and receive messages with it - either physically, or by installing malware (a RAT or something) and remotely accessing the phone.

And when you move devices, because of the way the Signal ratchet works (I think), the original device loses the ability to send or receive messages. So a device move is detectable if the original phone is returned.
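As a rough illustration (this is not Signal's actual implementation; the constants and the starting key below are made up purely for the sketch), a symmetric KDF chain advances with every message, so a copy of the session that stops advancing falls behind and can no longer derive the current message keys:

    import hmac, hashlib

    def kdf_step(chain_key):
        # Derive the next chain key and a one-time message key from the current chain key.
        next_chain_key = hmac.new(chain_key, b"\x02", hashlib.sha256).digest()
        message_key = hmac.new(chain_key, b"\x01", hashlib.sha256).digest()
        return next_chain_key, message_key

    # Hypothetical shared starting point for two copies of the same session.
    chain_current = chain_stale = b"\x00" * 32

    # The active copy processes three messages; the stale copy stops after one.
    for _ in range(3):
        chain_current, _ = kdf_step(chain_current)
    chain_stale, _ = kdf_step(chain_stale)

    print(chain_current == chain_stale)  # False: the stale copy can't derive current keys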


Not in this case. If the long-term identity is transferred from device to device, there is no extra risk to trade off.

It is like copying your PGP private keys from one device to another. The associated public keys are still in your correspondents' key stores, certified or not. Nothing changes for them, and there is no reason for them to know what you have done.

There is nothing here for Signal to do. Their issue is the one that everyone who creates an E2EE messenger faces. There is no good analogy for cryptographic identities, so it is hard to produce a good conceptual model that the user can use to understand what they should expect and how they should react.


You basically lose trust after two false alarms.

The third false alarm in college: no one left the dorm.

The third false alarm while working overseas (which required us to wake up, etc.): everyone turned off their buzzers.


WhatsApp has always notified you inline in chats, and it worked out just fine even for larger group chats.


Am I misreading this, or is this the Signal version of the bug bounty classic "user impersonation vulnerability: if I steal this session token, I can impersonate the user who it belongs to"?


No, you're absolutely right. The author complains that, with physical possession of the device, it's possible to transfer Signal's private key material to a new device, leaving the old safety number intact.

The author apparently expects the safety number to change in order to alert the person on the other end that there "might be a hostage situation," evidently not realizing that the attacker could just, well, use the unlocked phone right in front of them.


Well, if I assume that I just got temporary access to someone’s unlocked device, then it would probably be a lot more convenient for me to quickly transfer the account to one of my own devices and then access it from there, instead of accessing it from my target’s device, which I might lose access to at any moment.

So from that point of view, it would be legitimate to argue that I might want to get notified when one of my contacts transfers their account. I can then double-check: “Did you just transfer your Signal account to a new device, or was that an attacker?”

That might only be interesting for high-risk users though and could impair the UX. Why not make it optional?


Configurable security posture is the sort of thing that got RSA into trouble. For the huge majority of users, opinionated security is a much better approach, even ignoring the maintenance problems of having special features.

The temporary access threat model is a common criticism that people use, but it is largely incoherent. Once you are making human judgements like "enough time to transfer a signal account but not enough time to install a rootkit" things quickly break down into meaninglessness.


I don't really like trusted computing, but it is part of the mobile security model. There's a distinction between Signal deliberately facilitating extraction of the keys, and having to break a device's security to do so.


No, the author is right.

There are many cases where an attacker can access a device for a short time and/or without the owner realizing that the phone was tampered with.


Just because that's possible doesn't mean that it's within Signal's threat model.


Sure, but exactly how would you build something that's robust against that kind of access?

If you leave cryptographic keys lying around unprotected they should be assumed to be compromised.


Signal has a PIN, too. If that's required for the transfer, then it would prevent this in the case of brief, surreptitious access. A hostage scenario is impossible to guard against, though.


Well, maybe, but 'brief' is doing a lot of the heavy lifting in that sentence.


> There are many cases where an attacker can access a device for a short time and/or without the owner realizing that the phone was tampered with.

This is what you originally responded to. I paraphrased it. The "heavy lifting" meme that you've employed is rarely more than a shallow dismissal. Be better.


no session tokens involved here; we talk about the crypto behind device-to-device transfer in this blog post (https://signal.org/blog/ios-device-transfer/)

and the concepts and UX research surrounding Safety Numbers (what they are, how they're represented, and how we found they bring the most utility to the platform) in these 2016 & 2017 blog posts:

https://signal.org/blog/safety-number-updates/
https://signal.org/blog/verified-safety-number-updates/


Yes, but it's the same general idea of obtaining private keys. I agree that this isn't much of a vulnerability.


All that text for a feature that no one, including the author, uses correctly. Messaging your mates via Signal to reassure them everything is correct after you update your device and your safety number changes is pointless.

The main function of safety numbers is, in theory, to prevent mass MITM of Signal.


Or targeted, for that matter. But yeah, either way.


I fail to understand why you would want your safety numbers to refresh if there's no known risk of your private key having been leaked or stolen. Every time your keys change and you don't manually verify the new ones, you risk a MITM attack. Signal just opts to trade off that risk in favor of convenience and more people using end-to-end encryption for everyday communication. I guess OP is saying the fact that they were transferred to a new phone means they might have been stolen? I disagree.


The flow requires both devices to be in the possession of the account holder for the migration to start, mitigating any risks of foul play. I just tried migrating on my (Android) phones. If an attacker can get you to put in your passcode, they don't need to mess with migrating between two phones.


I guess if you devote a lot of energy to researching a vulnerability and the developers explain to you that it's not actually a vulnerability, you only have two choices:

- Accept that you misunderstood and have mostly wasted your time by pursuing unnecessary research. Refocus on new vuln research.

- Become indignant and accuse the developers of being uncooperative and conspiratorial by refusing to validate your research. Double down on publicizing your original research.

Which response is correct hinges on whether the developers are justified in denying the vulnerability. As a reader, I was prepared to have the second response but after reading the article I feel the first - this was a waste of everyone's time.


Related question… How can an app developer guarantee end-to-end encryption when the main entry point is almost always a virtual keyboard? Seems like the weakest part of the chain.


How can they guarantee E2EE when the message is converted to analog light waves? What about in your brain?

The end of end-to-end is the two messaging devices. That is all, and in almost every case, it’s enough.


How can they guarantee end-to-end encryption when someone could be looking over your shoulder as you type your message?

Same answer. They don’t control the input, they just keep it safe once it’s in transit.


Yeah, fair point and largely the point of the article. Too many conditions exist for Signal to accidentally reveal information. I still wonder how many people assume the privacy offered is much more than just the data in transit. I’d bet a large percentage of users believe that.


No one can guarantee anything. Not even most gods or a time traveler from 1000 years in the future. An omnipotent being could.

It makes sense to attack and work on problems that are actually happening and are solvable within some level of aspiration.

I’m not sure how things work on either mobile OS. Perhaps there could be clearer language and notifications about whether everything is being passed to the virtual keyboard developer or not.

Unless you mean Apple, Google, and other Android manufacturers themselves. That’s beyond the scope of any E2E stuff.


TL;DR of article: Signal transfers key material upon migrating to a new device if you use the "transfer messages" workflow. As a result, safety numbers do not change.

I don't see how this is a problem at all. This was actually a feature that many Signal users wanted to use - they didn't want to re-verify safety numbers every time that they had to reinstall Signal or switch to a new phone.

> We don't want anyone to get hurt by way of trusting privacy guarantees which may be more conditional than they appear from the docs!

> If Bob notices the chat safety number with Alice has changed and then Alice sends a bunch of suspect-sounding messages or asks to meet in person and Bob has never met Alice in person before, for example, Bob should be wary. After Alice for example is forced to provide device passcode or unlock their device with their fingerprint or face, Alice's device could be cloned over to a new device by way of quick transfer functionality without Alice's consent, and the messages could be coming from the cloned device rather than Alice's actual device.

Respectfully, this doesn't make any sense. Signal provides security from device to device; it doesn't stop someone from pointing a gun to your head and looking at your messages, or pretending to be you after stealing your phone. If someone has the kind of physical possession of your phone necessary to perform a device transfer, then you're already screwed. The idea that a safety number change would alert the person on the other end that you're being held hostage is outlandish and completely divorced from any normal use of Signal.


You assume here that you are aware of the fact that your device is in the hands of someone else.

I could ask you for your device under the pretense of making a phone call and then secretly transfer your account to my device. I could then secretly read your chats from my device and no one would be aware of it, until you check the number of active sessions in settings.


All security ultimately reduces to physical security.

If you can’t secure your physical phone, all digital security is moot.


I would say "requires" rather than "reduces to". Just like all security requires vetting personnel. There are just a lot of checkboxes that need to be ticked as table stakes in the security game.


I would state that physical security is both necessary and sufficient to protect information.

Vetting personnel isn’t necessary, nor is it sufficient, to protect information.

Protecting more than just the information, is a different argument.

If we’re talking about securing personal information that has a physical footprint of a cell phone, vetting personnel is irrelevant. Never let your phone leave your person, on pain of death, so to speak.

If we’re talking about securing a building, vetting personnel is just an extension of physical security, anyways.

All of those checkboxes will ultimately reduce to being an extension of physical security.


It would be nice if there was a notification saying the person transferred to a new phone but kept the safety number. This way you can choose your level of paranoia. And there could be a setting to suppress this message (on the receiver).

But in general, if the migration is done securely then migrating the key material seems fine too. By cutting down on benign cases where something changes, this makes the safety number change warning something that warrants more attention.


I also remember getting this bug/issue a couple months ago, and being really confused and frustrated by it. Thanks for publishing.


Does the safety number count as a sort of hash in this case? Or is it absolutely nothing like an MD5 hash?


The "safety number" also encodes both participants' phone numbers (the verifier and the person to be verified). So you would be a little foolish to publish it in, say, a blog post if you weren't intending those phone numbers to be public information. I'm also very amused that the author censored part of the QR code, but not the human-readable text below it containing the exact same data.


> The "safety number" also encodes both participants' phone numbers

Um, not that I know of. A quick check, though: the QR code is a bunch of binary data, so I dove into the source code at https://github.com/signalapp/libsignal-client/blob/4446b648f...

> very amused that the author censored part of the QR code, but not the human readable text below it containing the exact same data

So what is it now, does it contain the phone numbers or not? Am I allowed to be 'very amused' at you for also not knowing what's in the QR code? :-)

Any phone numbers are censored in the screenshots; only the safety number is not. But the QR code doesn't contain the phone number, as you just saw in the source code (assuming I correctly identified the relevant part; I just looked for uses of QR codes, found getScannableFingerprint, and followed the trail through libsignal from there).

Strictly speaking, though, the QR is not the same as the text below: the safety number below doesn't appear to include a version number (same file, lines 14-16). But that's just a technicality.

Either way, I was also amused, as my understanding was that the QR code is the same as the 'safety number' shown below. What confuses me is that the author knows what a CVE is and knows all the right channels to do responsible disclosure, but is apparently confounded by the situation that no key change is shown when the key didn't change. I am rather happy that you can keep your key material: I'd go insane if I had to reverify everyone who adds a new device legitimately (try Wire if you want that experience); I'd certainly stop doing it for any but the most stable and important of contacts. I'd also somehow need to establish a second encrypted channel to exchange the new key material, because I don't meet most people in real life these days.


If I remember correctly, the safety number is a concatenation of hashes of both peers’ id (phone number) and public key. So while it doesn’t leak your phone number, you can probably brute-force the phone number if you know the pubkey. But you also probably can’t get the pubkey without knowing the phone number, so…
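As a toy sketch of that structure (not libsignal's actual algorithm; the hash choice, digit rendering, and keys below are invented purely for illustration), something along these lines would give both participants the same displayed number:

    import hashlib

    def half_fingerprint(identifier, public_key):
        # Hypothetical per-party half: hash the identifier together with the identity
        # key and render part of the digest as decimal digits for display.
        digest = hashlib.sha512(identifier.encode() + public_key).digest()
        return "".join(str(b % 10) for b in digest[:30])

    def safety_number(id_a, key_a, id_b, key_b):
        # Sort the two halves so both sides display the same combined number.
        halves = sorted([half_fingerprint(id_a, key_a), half_fingerprint(id_b, key_b)])
        return " ".join(halves)

    alice = ("+15550000001", b"alice-identity-key")
    bob = ("+15550000002", b"bob-identity-key")
    assert safety_number(*alice, *bob) == safety_number(*bob, *alice)

Sorting the halves is what makes the combined number identical on both devices, regardless of who is displaying it.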


I don’t know if it’s actually a hash, but it’s probably some kind of diffuse one-way function: it would need to be impossible to pre-image or reproduce from a different input in order to serve its purpose.

Edit: Signal’s blog refers to it as a hash[1]. It’s apparently essentially just a hash of the contact’s public key, so rotating that causes the hash to change.

[1]: https://signal.org/blog/safety-number-updates/


> essentially just a hash of the contact’s public key

and yours*. They're concatenated with deterministic sorting such that they are the same on both devices.

Which is a big pain, because it means I can't simply tell everyone my public key; I have to explain that half the number is their own key and that they can ignore the non-matching part, which is dangerous advice... It could have been so simple, but moxie is moxie.


My understanding was that it was a fingerprint of the public key, much like an SSH fingerprint.


It's specific to the two public keys involved in that conversation.

It was renamed from "fingerprint" because to those unfamiliar with cryptography, "fingerprints" are things used when a crime has been committed. There was a study out some years ago that illustrated that most of the crypto jargon in use is not helpful in conveying mental models to laypeople.


As much as I support the fight against crypto jargon, I don't think the term "safety number" really works for this. You don't risk a broken leg if you don't verify your safety numbers. The term "safety" is already one metaphor away from the actual issue.

At least "fingerprint" directly maps to the idea of an aspect of something that relates to the identity of someone.

I dunno, could you call this a "serial number"? The root problem here is that our culture does not have the idea of a cryptographic identity in the first place.


I raised a similar issue 4 years ago - https://github.com/signalapp/Signal-Android/issues/6703

Signal used to silently fail if you changed device. I guess not much has changed.


That looks to be an entirely different issue: you're reporting an error that says the client is unable to decrypt a message, not that the key wasn't rotated upon reinstallation or device transfer. A Ctrl+F for "safe"(ty number) doesn't even turn up a hit on the page.



