1. The police are either lazy or incompetent if they say they cannot trace criminals because of E2E secure chat.
2. You don't need to know the contents of a chat to glean massive amounts of metadata. Even with FB Messenger and WhatsApp truly E2E encrypted, FB (and anyone serving them with warrants) will still know in real time who is talking to whom, what their IP addresses are, and possibly their real locations (if they are using the app on their phone). This can be used to create a signature profile... many Pakistanis and Yemenis have died from Hellfire missile strikes because they matched a pattern of activity. Google "signature strike" for more info.
3. The terrorists and pedophiles that are the most dangerous are using far more sophisticated means of communication than Wire, Signal, WhatsApp, Wickr, etc. Saying that this is "for the children" or "for our safety" is complete bullshit and anyone saying otherwise needs to prove it.
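Point 2 can be made concrete with a toy sketch (the record format and names here are invented for illustration): even with message contents fully encrypted, the metadata a relay necessarily sees aggregates into a profile of who talks to whom and when.

```python
from collections import defaultdict

# Toy metadata records: (sender, recipient, hour of day). No message
# content at all; just the routing information a relay has to see.
records = [
    ("alice", "bob", 9), ("alice", "bob", 21),
    ("alice", "carol", 21), ("dave", "bob", 3),
]

def build_profiles(records):
    """Aggregate who-talks-to-whom and when into per-sender profiles."""
    profiles = defaultdict(lambda: {"contacts": set(), "active_hours": set()})
    for sender, recipient, hour in records:
        profiles[sender]["contacts"].add(recipient)
        profiles[sender]["active_hours"].add(hour)
    return profiles

profiles = build_profiles(records)
print(sorted(profiles["alice"]["contacts"]))  # ['bob', 'carol']
```

Scale that up to billions of records plus IP-derived locations and you have the raw material for pattern-of-life analysis, no decryption required.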
> The terrorists and pedophiles that are the most dangerous are using far more sophisticated means of communication
The "most dangerous" part is doing a lot of work there. Just like I think law enforcement needs to admit what they can and cannot do (e.g. they cannot protect a golden key), I think we need to admit some things too. A lot of dangerous criminals are stupid. Maybe not the most dangerous ones, sure. But if law enforcement has a tactic that lets them catch, say, the stupidest 30% of terrorists, that's an extremely valuable tactic that probably saves a lot of lives in practice. It would be wrong to claim that society loses nothing by engineering away that tactic.
I think this sort of thing leads to a lot of frustration on both sides. As a programmer, I find it very frustrating that law enforcement and the media consistently get some of the most basic details wrong about how communication and encryption work, and about the negative side effects of the new laws they're proposing. But I assume that law enforcement folks also feel frustrated about how people like me have no idea how they actually get their jobs done day-to-day, or the negative side effects of the technologies we're building.
The nice thing about stupid criminals is that they tend to be indiscriminately stupid. The ones who don't use encrypted messaging are the same ones who proceed to brag about their crimes in front of strangers, and have their phones turned on and with them during the commission of their crimes, and post incriminating pictures on Facebook, and choose equally stupid and unreliable criminal partners.
They are the low-hanging fruit, so you don't need powerful and invasive tools to catch them because they're practically self-incarcerating. When there are 100 other ways to catch them, there's no point in paying a high price just to have 101.
It's the non-stupid criminals that they have trouble catching, but those are the ones this won't catch either. So you're still paying a high price for really nothing in return.
I think you may be missing a large group of criminals in the middle. As with ordinary humans in non-criminal contexts, you have a group of indiscriminately stupid people, a group of very smart people, and a large group - I think the majority - that just parrots whatever everyone else around them is doing or recommending, with very little individual thought given.
You can compare it to COVID-19 reactions among the people you know. Almost everyone now keeps distance in public, because everyone knows they should and are expected to. But how many people don't connect this with the fact that they should absolutely not meet up with their friends now? Or that they should absolutely not visit their families this Easter? Or that it would be wise to wash groceries and deliveries?
We could say this parroting group is doing cargo-cult OPSEC. They can know they shouldn't brag about their crimes in person or on social media, and yet at the same time they could easily trip using communication tools they don't understand - unless the industry goes out of its way to make such tripping impossible. I think this is the group the law enforcement is talking about. Not the idiot criminals, not the smart criminals - just regular ones, who don't understand the world they live in well, and occasionally make mistakes.
The group in the middle is the group I'm talking about. At the far edges of stupidity are the sort of criminals who break into an electronics shop to steal GPS tracking devices or try to stick up a police station. The far extremes give you 1000 ways to catch them instead of 100.
The guy who carries his phone with him during the commission of the crime is the guy at the median.
It also doesn't hurt that the average criminal skews dumber than the average law-abiding citizen to begin with. But even for the somewhat above average criminal who gives you ten ways to catch them instead of a hundred, you still don't need eleven because you only need one.
What do you suppose the percentage of criminals is who are so diligent that having default insecure communications is the only way to catch them and they wouldn't have chosen a secure alternative regardless?
>It also doesn't hurt that the average criminal skews dumber than the average law-abiding citizen to begin with.
Is this true? I'd be interested to see the research for this. I would believe that the average convict is dumber than the average law-abiding citizen, but how many criminals are lumped in with the law-abiding citizens simply because they don't say "oh yeah, I break the law all the time"?
You're going to have the Three Felonies A Day problem there, where in practice everybody commits crimes all day long and the people "not getting caught" is really everybody, even including people currently incarcerated who are still guilty of many other crimes they haven't been convicted of.
But if you want to talk about, shall we say, "real" crimes then that's another story. The solve rate for murders is actually pretty high (because they're given significant investigative resources), to the point that the population of convicts is probably not a terribly unrepresentative sample, and the lower intelligence of the convicts is pretty well established.
It also depends how you measure intelligence. The IQ of people who commit politically-motivated bombings is often significantly above average, but they also choose to commit a crime that attracts a hugely disproportionate level of investigative resources and correspondingly has quite a high solve rate despite the perpetrators' supposed intelligence, so maybe there are different kinds of stupid too.
Even if it caught 100%, it would not be worth violating the privacy rights of all of the people who are not criminals.
Signing up to be law enforcement comes with an implicit acceptance of the frustration caused by mechanisms designed to prevent infringing upon the rights of the innocent. It’s part of the job to work hard for a long time and sometimes have to let the criminal go free.
Unfortunately, many prosecutors and cops never learned this, and are all too happy to pursue illegal and invasive methods, or to employ parallel construction to conceal illegal methods.
> Even if it caught 100%, it would not be worth violating the privacy rights of all of the people who are not criminals.
Sounds nice, but have you really thought that through? I think you might be surprised what people would be willing to give up to live in a crime-free society.
It is not possible to imprison 100% of criminals without imprisoning some innocent people by accident (or perhaps intent, as is the case in the USA today). What you are describing is a totalitarian society without the presumption of innocence.
I don't think the "well it wouldn't happen to me" delusion is strong enough for people to actively want that, no.
> I think you might be surprised what people would be willing to give up to live in a crime-free society.
1. Very often, they're quite willing to give up _other people's_ privacy.
2. Have you considered what many people would be willing to give up to live in a mass-surveillance-free society? Probably not, because we're never given these options to seriously consider and choose between. It's a false dilemma - the state makes the decision, eats away at our privacy, and uses things like pedophilia as the excuse because it's scary.
3. Let's start with making some sacrifices to prevent criminal behavior by elected officials (Trump family, Biden family, Bush family, Clinton family - I'm looking at you people), and in high finance (2008 crisis racketeers who never faced any criminal action) and once that's sorted out, then let's talk about what more needs to be done to achieve a "crime-free society".
1. Presumption of guilt: Law enforcement doesn't go after "terrorists" or "criminals"; they go after _suspects_ in acts of terror or crime. Part of the norms in non-totalitarian states is that people don't get subjected to coercive, violent, and otherwise harmful action as though they are guilty of anything - until they are formally proven guilty.
2. The assumption that what the state legally defines as "terrorism" is indeed terrorism, i.e. "the calculated use of violence to create a general climate of fear in a population and thereby to bring about a particular political objective." There is a definite tendency in many states around the world to broaden the operative definition beyond the dictionary definition.
3. The assumption that the state, and its law enforcement organizations, always have the moral high-ground legitimizing its pursuit of terrorists. This is often not the case, as many states engage in terrorism against populations or groups they are hostile towards, while at the same time facing terrorism from those groups.
4. The assumption that the state, and its law enforcement organizations and personnel, don't misuse their capabilities to spy, harass or harm people who are not suspected of committing "terrorism" or any other crime for that matter.
The worry, though, is that the state grows too powerful. A lot of things in our society are built on the foundation of curbing state power (it's actually about curbing absolute power - the state is part of the solution to that). Constitutions serve that function. Every time we let a state erode those kinds of protections we take another step towards the state gaining more control. That usually doesn't end well. Countries that were part of the Soviet Union are still recovering 30 years later.
Speaking of the Soviet Union, you have to remember that it was "law enforcement" that carried out the oppression by the government. Limiting law enforcement seems reasonable to me.
> and pedophiles that are the most dangerous are using far more sophisticated means of communication
Tiktok?
That has a ton of content that can be considered CP, if reviews on reddit/yt are to be believed. Which, given its sordid past as Musical.ly, is totally believable.
>But if law enforcement has a tactic that lets them catch, say, the stupidest 30% of terrorists, that's an extremely valuable tactic that probably saves a lot of lives in practice.
Let's mix time frames. Should police be able to catch the laziest/stupidest 30% of people who sell weed? Of people who marry across racial boundaries? Of people who traffic freed property back north? Should the police be able to catch the 30% laziest gays?
This is a ridiculous argument - now. But if the US still has these laws in 30 years, we have to understand that today's social norms will be completely different by then, and maybe people in the US won't want the police doing that job.
Anecdotally, I can assure you that upwards of 70% of criminal defendants have some form of tracking-capable phone on them when a crime happens. I don't think this makes them stupid; it's just a general reflection of society's assumption that such tracking doesn't happen, or at least isn't something anyone can see.
If you have a tactic that catches 30% of the stupidest terrorists you have multiple tactics that catch 30% of the stupidest terrorists, because they are the stupidest.
The actual problem is not being able to catch the smart ones who every now and then do something stupid or lazy or expedient (since even the smartest of humans have moments where they are not at their best).
> Odd for government to go after chat apps and online encryption when they can't stop child sexual abuse in those places where it happens the most.
No, it's totally consistent with the State's MO; the 'Helen Lovejoy' argument [1] is entirely specious reasoning, as even the most superficial analysis of the perpetrators of said crime shows... but it's not meant to appeal to reason. Rather, it's meant to create a knee-jerk reaction against anyone who tries to refute it before it's rammed down the collective population's throat.
It's so easy and simple to say 'what, do you want pedophiles to use this tech now?' and end any semblance of coherent, logical discourse on the matter: and that's the aim, to end any discussion or counter-argument before the law is enacted, further eroding privacy and civil liberties.
When I really started to delve into the 'whys and hows' of cryptocurrency, I came to the conclusion that after Wikileaks/Assange got cut off from the legacy system in 2010, we were already in the 2nd Crypto War (Julian is a key target, and it shows [2], as he's been treated like a POW), following the 1st, which ended when Zimmermann's PGP project succeeded.
I'm a Signal user and I'm not entirely sure what that 'dumping the US Market' would entail, will they pull Signal from an app store? Meaning I could just compile it while accessing it from a VPN, or compiling it myself on PC?
> I'm a Signal user and I'm not entirely sure what that 'dumping the US Market' would entail
Yeah, it's a decidedly weird turn of phrase, since Signal is (a) open source and (b) not something they try to monetise.
> will they pull Signal from an app store?
I don't really see what the app store has to do with Signal - it's just a way of distributing it. It's not like you need the app store to avoid compiling it - there are other avenues.
The risk for them is they or their servers come under some pressure from the US Law Enforcement Agencies. Given their programmers and servers are based in the US, that seems like it could be a real risk. Withdrawing from that would involve moving themselves and presumably families out of the US. It sounds like an almost impossible ask.
It's a bad faith argument. They don't care about pedophiles.
You're right. It's usually a teacher, neighbor, pastor, uncle, etc. "Stranger danger" is mostly BS unless you live in a really dangerous neighborhood, and there the risk is more likely to be simple robbery with incidental harm to the child.
Child sex abuse is also under-prosecuted and under-sentenced. Your average child rapist serves less time than people convicted of selling small amounts of drugs. It's really bad if the abuser is wealthy and can really put up a fight. Google Jeffrey Epstein's original indictment and the non-punishment he received.
If they really cared about child abusers they'd prosecute them more aggressively and sentence them more severely.
Generally when people say “90% of child sexual abusers are known to the victim” they are referring to “contact sex offenders.” All child abusers take advantage of vulnerable children, but children are more likely to be physically vulnerable around a trusted, known adult. In the last decade it has become much more common for children to be psychologically vulnerable to online predators as many more children, disproportionately those who are vulnerable for other reasons, have private access to the internet via smartphone 24 hours a day. Some predators use the internet to groom children and then commit contact offences against them. Others manipulate children into creating more child pornography.
In 2014, Aslan and Edelmann [1] undertook “a comparison of sex offenders convicted of possessing indecent images of children, committing contact sex offences or both offences” and, while expressing caution about the “contradictory findings” of previous studies, examined a data set of “230 offenders who had been convicted either of possessing indecent images (Internet offenders n = 74) or committing actual direct abuse of children (contact offenders n = 118) or committing both offences (Internet-contact offenders n = 38).” They found:
> There were significant differences between the three groups of offenders in the way the victim was found. Internet-contact offenders (45%) were more likely to target their victims online and use downloaded indecent images to help recruit their victims … Only 15% of Internet offenders initiated online contact, grooming their victims then requesting indecent images without physically coming into contact with the victim. The majority of contact sex offenders (87%) were known to their victims … Internet-contact offenders were more likely to target stranger victims than contact offenders.
This data reflects the offences that are detected and prosecuted, so you could read it as suggesting that law enforcement (in London) is focusing on internet offending at the expense of contact offending. It’s hard to say. The data also says nothing about whether anti-encryption laws are needed. However, it does indicate that there is a substantial amount of internet-enabled child sexual abuse and that law enforcement bodies should use some of their finite resources to address it.
What is proportionate is certainly debatable. There is often a fundamental difference of values between civil liberties advocates on the one hand, and victims’ advocates and law enforcement on the other, with respect to the seriousness of internet-based non-contact offences, including the possession of child pornography. When these offenders are counted among child sexual abusers, the proportion who are known to their victims is much less than 90%.
It is hard to tell there, because someone who commits a physical crime is much more likely to get caught than someone who does not, but it does seem like some resources should be devoted to the dangerous ones.
This doesn't excuse the government trying to destroy security for everyone else. One of the biggest problems highlighted by the NYTimes is insufficient funding leading to an inability to apprehend culprits, not the widespread use of end-to-end encryption.
On 3, I don't know about pedophiles but terrorists do indeed use consumer apps; they're 'good enough' and the traffic doesn't stand out. Many (most?) cases are broken open because law enforcement turns a human source or manages to place someone undercover.
Of course, those apps are not all that they use. There are definite advantages to things like encrypted digital radio vs IP communications, and concomitant downsides such as standing out like a sore thumb in the RF spectrum or being more vulnerable to zero-days against niche platforms.
The idea that if all consumer apps played ball with law enforcement (and, as always, I would like to point out that it isn't clear which nation's law enforcement agencies are supposed to get access), there would suddenly be no choice but to roll your own encryption tools, is naive. It is born from a type of politician's mindset where communication between people is done via an 'app', and an 'app' means that there is a large company that invariably wants to deal with the US or the EU and can be pressured into building a backdoor. And for 99% of today's messaging apps they are right (which is a nice mess we're in, by the way).
But anyone can use OpenPGP (or any other tool) today, and anyone can tomorrow, even if such a project stops completely. The source is out there, and so is the source for hundreds of other related tools. There will also be people with — subjectively, depending on whom and where you ask — non-nefarious reasons to have their communications end-to-end encrypted who will find ways to provide such software in a decentralised manner without the point of failure that laws like EARN-IT target.
Maybe the terrorists. Anyone who's seen "to catch a predator" knows that most pedophiles are borderline mentally handicapped and are way more likely to get caught by their own incompetence; no extra laws necessary.
But you're otherwise right that people running CP rings are probably using more sophisticated means that can't be stopped by conventional means.
>Anyone who's seen "to catch a predator" knows that most pedophiles are borderline mentally handicapped and are way more likely to get caught by their own incompetence; no extra laws necessary.
I wouldn't be surprised to learn that pedophilia correlates with lower intelligence, but a more accurate conclusion to arrive at after watching TCAP is that most people who fall for a fairly obvious sting operation (in some cases, after having watched the show themselves) are borderline mentally handicapped.
> The terrorists and pedophiles that are the most dangerous are using far more sophisticated means of communication
Terrorism is mostly opportunistic radicals communicating via YouTube and Twitter and Fox News, or national / quasinational governments that are brazen and flagrant and don't need to worry about being noticed.
BTW, one of Signal's weaknesses is that you MUST use a phone number with it. If you're savvy you realize this can be a Twilio number you control making your account immune from SIM hijacking. However, unless you override a bunch of defaults Signal is not immune to other attack vectors like attempting to unfurl a URL sent in a message -- which can expose your true IP address -- or generate a thumbnail of a video -- which can launch a malware attack -- which is the method of attack alleged to have been used by Saudi intelligence to hijack Jeff Bezos' phone (via an E2E encrypted WhatsApp message no less). A more sophisticated messenger system would turn off lots of "convenience" features by default and let me pick a random username and NOT make me enter a phone number or email address. People who care about security don't need a way to reset their randomly generated 128 character passwords.
> BTW, one of Signal's weaknesses is that you MUST use a phone number with it.
This isn't a weakness, it is a tradeoff. You use phone numbers (downside) but the server does not have to store any information about who is talking to who (upside). Other tools reverse this choice and don't use phone numbers but do need to maintain the communication metadata.
Sure, and Signal is already working on usernames. Here's the kink: When you have low latency (video) calls, you can't route via Tor. When you can't route via Tor, you leak your IP to the server. When you leak your IP you're not anonymous, and when you're not anonymous, the server having the hash of your phone number isn't adding too much data to them.
When the server knows who you are, the app can use your existing contact list to discover contacts. This means unlike e.g. Telegram, Signal server doesn't store your contact list.
For example, I constantly see people whose numbers I've already deleted appear in my Telegram contact list as "X joined Telegram". Telegram knows I had the number at some point. This would never happen with Signal.
> the server having the hash of your phone number isn't adding too much data to them.
Wait how big is the hash of the phone number?
If it's enough bits (e.g., a full sha hash) then it's not that secure to hash at all. 10^10 or even 10^11 is just 10 or 100 billion. I can easily try all phone numbers until I find the one that matches the hash.
It may protect against bulk attacks targeting lots of people at once, but it really doesn't protect an individual.
You are correct that using a hash does not protect an individual from other users discovering that they can contact them with Signal, which is to be expected because that's the purpose of this feature. If you suspect that Bob, with phone number +15555551234 has Signal installed, you can verify that by... typing Bob's phone number into your contacts list and installing Signal so you can send messages to Bob.
For the purposes of entropy, you need only consider 10 valid choices for each digit of a phone number, so a 10-digit number is closer to 33.2 bits (10 * log2(10)), and smaller still after discarding impossible area, trunk & subscriber numbers.
So given that 80 bits is much bigger than 30-40 bits, if I know someone's hash I can very easily narrow down their phone number to one or sometimes two candidates.
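To make the keyspace point concrete, here's a toy sketch of recovering a phone number from its hash by exhaustive search (the function names are mine, and the search space is shrunk to a 4-digit suffix so it runs instantly; scaling to all 10 digits is only ~10^10 hashes).

```python
import hashlib

def hash_number(number):
    """Hash a phone number the way a contact-discovery server might."""
    return hashlib.sha256(number.encode()).hexdigest()

def crack(target_hash, prefix, digits):
    """Exhaustively try every number with the given prefix.

    A full 10-digit space is only ~10^10 hashes, which is hours of
    work on commodity hardware, so hashing adds essentially no
    protection for an individual phone number.
    """
    for i in range(10 ** digits):
        candidate = prefix + str(i).zfill(digits)
        if hash_number(candidate) == target_hash:
            return candidate
    return None

secret = "5551234"              # pretend this is the private number
target = hash_number(secret)    # all the server actually stores
print(crack(target, "555", 4))  # recovers "5551234" in at most 10^4 tries
```

This is why a hashed phone number is better thought of as lightly obfuscated rather than protected: the input space, not the hash length, sets the security level.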
That system a) has a paytrail, b) involves companies that can be coerced / hacked with relative ease, c) is a paid system, and d) is quite a bit for the average user to handle.
Also, if you're going to stay anonymous, you need something that is extremely hard to misconfigure. I use wireguard on my Android and I've set the VPN to activate automatically, and I only allow connection via VPN, but I'd never imagine any of the apps I'm running are properly anonymized.
Also, since you're apparently working for or affiliated with VPN providers[1], you might want to be more transparent about possible vested interests.
I've never hidden the fact that I've worked for IVPN and Restore Privacy. But they pay me by the word, so I gain nothing by promoting them.
I haven't actually used Orchid, because there's no Linux app. But I did buy some of their Ethereum-based currency. And I recall no money trail. As I recall, I converted well-mixed ~anonymous Bitcoin to plain-vanilla Ethereum, and then to Orchid's currency.
But whatever, I'm not going to defend Orchid.
Anyway, I use nested VPN chains. It's like a multihop VPN, except that each hop is a different VPN service, and each of them is leased with a different pool of well-mixed Bitcoin. I do all the Bitcoin mixing via Tor, in Whonix instances. That way, I don't need to trust any of them, only that an adversary won't manage to compromise or coerce all of them. It's the same logic as Tor uses, based on Chaum.
If you want to read more, just search "mirimir" on IVPN's and Restore Privacy's sites. There's also https://github.com/mirimir/vpnchains which is pretty over the top. And I've also played with something like that which routes VPNs via Tor.
I'm not an expert on cryptocurrency so I can't say how well you managed to anonymize the paytrail but the problem of logs and the lifetime of the chain concerns me.
When you start to chain VPN nodes you gain latency, so you might as well use Tor. These days Tor has enough bandwidth to play 720p video with ease, and there's less hassle. Also, once you hit three nodes you won't really benefit from a longer chain, so mixing VPN with Tor isn't really beneficial unless you're evading censorship of Tor.
OK, fair enough. I'm no expert on Orchid. I rather lost interest, after it became clear that it was useless to me.
You're wrong about nested VPN chains, however. Depending on geographical distribution, each VPN adds 50-100 msec rtt. And bandwidth doesn't drop that much after the first VPN.
I use both nested VPN chains and Tor to mitigate the risk of Tor circuits being compromised. The lesson of CMU's "relay early" exploit for the FBI was sobering. Given that lesson, only fools use Tor without protection.
Bad guys might rather hack different servers in different countries and use something like a chain of SSH tunnels, after making sure they patched the security vulnerability they used to get in.
Add in some routing through Tor.
That would be harder to beat for a single law enforcement agency.
Particularly if the countries involved are not friendly towards each other.
> I e.g. constantly see people whose phone number I've already deleted appear on my Telegram contact list "X joined Telegram". Telegram knows I had the number at some point. This would never happen with Signal.
This literally happens with Signal. And it makes sense too, the message that Signal gets telling it someone is now on Signal is presumably the same one letting it know it can use encryption rather than SMS to talk to that person.
Signal is not built for anonymity. It's built for message privacy. It's a lot like PGP in that the government know who emailed whom, but they cannot read the email. That's the whole point. If you are trying to hide your phone number, Signal is not going to help you and it's not meant to.
PGP doesn't hide metadata; anonymous remailers hide metadata. Add a sufficient volume of dummy messages and all of a sudden nobody can do traffic analysis, either. Think ATM (Asynchronous Transfer Mode): there's a constant volume of "cells", but only some of them are actually carrying anything.
That, or blasting your message to a huge number of people, only one or a few of whom actually receive it because it's encrypted and then steganographically hidden in spam. Again, use dummy messages and there's no way to predict anything by divining the ebb and flow of spam volumes.
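The dummy-cell idea can be sketched as a toy padding scheme (not any real protocol's wire format): every cell on the wire has the same size, and a length header of zero marks a dummy, so an observer counting or sizing cells learns nothing about real traffic volume.

```python
CELL_SIZE = 64  # every cell on the wire is exactly this long

def make_cell(payload=b""):
    """Build a fixed-size cell; an empty payload yields a dummy cell."""
    if len(payload) > CELL_SIZE - 1:
        raise ValueError("payload too large for one cell")
    header = bytes([len(payload)])  # first byte: real payload length
    return header + payload + b"\x00" * (CELL_SIZE - 1 - len(payload))

def read_cell(cell):
    """Return the payload, or None for a dummy cell."""
    length = cell[0]
    return cell[1:1 + length] if length else None

real = make_cell(b"meet at noon")
dummy = make_cell()
assert len(real) == len(dummy) == CELL_SIZE  # indistinguishable by size
print(read_cell(real))   # b'meet at noon'
print(read_cell(dummy))  # None
```

Sent at a constant rate, such cells make real and dummy traffic indistinguishable to anyone who can only see the wire (the cells would of course also be encrypted in a real system).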
I've never understood the point of privacy without anonymity. Or of plausible deniability. Both depend on rather idealistic assumptions about adversaries.
The practical upshot of Signal's deniable authentication is that a Signal message isn't proof of anything. It has zero weight because everybody can make fake Signal messages apparently from somebody else to them about anything.
If Alice tells Bob a secret via Signal, this means Alice cannot be worse off than if she'd used any other means of telling Bob. Can Bob reveal the secret? Yes. Can he claim Alice told him? Yes. Can he prove it? No.
This is a sharp contrast to something like PGP where Bob can prove Alice sent the message.
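The core of that deniability can be sketched with a plain HMAC over a shared key (Signal's actual construction is more elaborate, but the underlying idea is the same: symmetric authentication convinces the other party without producing third-party proof; the key and messages below are made up).

```python
import hashlib
import hmac

# Alice and Bob share this key (in a real system it would be derived
# via a Diffie-Hellman exchange; the literal value here is invented).
shared_key = b"alice-and-bob-session-key"

def tag(message):
    """Authenticate a message under the shared key."""
    return hmac.new(shared_key, message, hashlib.sha256).digest()

# The MAC convinces Bob the message came from Alice, because only the
# two of them hold the key...
msg = b"the secret is X"
alice_tag = tag(msg)
assert hmac.compare_digest(alice_tag, tag(msg))

# ...but it proves nothing to a third party: Bob holds the same key,
# so he can mint an equally valid tag for any message "from Alice".
forged = b"I confess to everything. Yours, Alice"
forged_tag = tag(forged)
assert hmac.compare_digest(forged_tag, tag(forged))
```

A PGP signature, by contrast, is made with Alice's private key, which Bob never has, so it does transfer proof of authorship to third parties.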
That's nice. But choosing to believe nonsense won't make it true. The United States of America chose to believe that torturing people is an effective means of securing reliable intelligence. Because that's how it works in Hollywood movies, so how can reality be different? But of course the "intelligence" they obtained this way was not in fact reliable, because a person being tortured doesn't magically know the truth and you don't know if they're telling the truth, so they'll say whatever they think will make you stop hurting them, which is utterly useless.
The only way you can know if intelligence obtained is reliable is to actually test it. With systems like PGP you get proof. Did Alice send this message as Bob alleges? Yes, the message includes proof so he was telling us the truth.
With Signal all you have is Bob's word as I described.
Signal can't stop the Secret Police from torturing Bob, but they can ensure they don't have any way to know if he told them the truth. If the Secret Police were rational that's enough reason not to bother torturing Bob. But we can't make them rational, for some people just inflicting pain for no reason is their goal.
Nope. Signal's messages are relayed by Signal's servers over IP like anything else, your phone has no evidence this message ever came from anybody's phone, let alone that it was Alice's phone. If you use Signal Desktop it didn't come from a phone at all. Signal doesn't keep any proof that it got these messages from an "authentic" source. Either they check out as from Alice or they don't and in the latter case they clearly shouldn't be displayed at all.
The way you normally know a message is from Alice on Signal is that the message was sent using keys only you and Alice share†, and you know you didn't write the message. But a third party has no way to verify that last part. That's the entire trick (in layman's terms).
† Signal and similar systems provide a means to do out-of-band verification that the long term identity key for people you know matches. You probably don't use this with most people, but you can and it's made easy if you want to.
The vast majority of communications occur between people who are publicly known to have an association and have no need to deny the association. Some common examples:
1. Friends
2. Family members.
3. Members of a business.
If your life or freedom is on the line because of an association with someone, then most systems out there are somewhat dangerous due to the weakness of the endpoints. You would want something like an airgapped computer with on- or off-line dead drops, possibly hidden with steganography.
> You would want something like an airgapped computer with online or offline dead drops, possibly hidden with steganography.
Well, "the best is the enemy of the good". That's the whole point of risk management. As a practical matter, I do the best that I can manage, or at least can be bothered to keep up with. If I were as paranoid as you're advocating, I'd be cowering in a bunker. Also, for me there's the fact that I have little left to lose.
Beyond the (slightly behind-trend) enthusiasm for blockchains, Session makes the same punt on contact discovery as lots of other systems that went nowhere. This works great for little secret-decoder-ring cliques but doesn't actually secure real people's day-to-day messages, due to lack of discovery: your local butcher and the guy your sister went to college with never find out that you have the same secure messaging app, so their messages to you aren't secured.
In contrast to your disinterest in convenience features, Session does have a bunch of things that presumably its principals felt were non-negotiable but which clearly harm security. The "Open Groups" feature, for example, is basically "Eh, this is hard, we give up" for larger groups (500+ people). There is no end-to-end encryption, and you're given either a moderator tool that doesn't work ("ban" pseudonymous people, who can at zero cost just create a new pseudonym) or one that's onerous ("invite" everybody manually).
"BTW, one of Signal's weaknesses is that you MUST use a phone number with it. If you're savvy you realize this can be a Twilio number you control making your account immune from SIM hijacking."
Does Signal ever send messages from, or otherwise use, SMS shortcodes?
I ask because no Twilio number can receive an SMS shortcode (because no Twilio number is classified as a "mobile" number).
To be fair, "Signal the App" and "Signal the Protocol" are two different things. If you were talking about the latter then your statement is quite possibly correct.
Signal is all about making good cryptography usable for the general public. If you actually use the "safety numbers" to verify the identity of who you are communicating with, then you have genuinely guaranteed end-to-end encryption. Unfortunately not everyone does that.
People that really really need to be sure probably use something super simple like PGP after they take the time to learn how.
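For concreteness, here is a toy sketch of the idea behind safety numbers. This is not Signal's actual derivation or encoding, just the shape of it: both sides combine the pair of long-term identity keys in a fixed order, hash the result, and compare the rendered number out of band.

```python
import hashlib

def toy_safety_number(key_a: bytes, key_b: bytes) -> str:
    # Sort so both parties compute the identical string regardless of
    # whose key comes "first". (Hypothetical scheme, not Signal's real one.)
    digest = hashlib.sha256(b"||".join(sorted((key_a, key_b)))).hexdigest()
    digits = str(int(digest, 16))[:30]          # take 30 decimal digits
    return " ".join(digits[i:i + 5] for i in range(0, 30, 5))

alice_sees = toy_safety_number(b"alice-identity-key", b"bob-identity-key")
bob_sees = toy_safety_number(b"bob-identity-key", b"alice-identity-key")
assert alice_sees == bob_sees  # compare in person, over a call, or via QR
```

If a man-in-the-middle swaps in his own identity key, the two sides compute different numbers, which is exactly what the out-of-band comparison catches.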
Why are the feds watching these conversations in the first place? Has a crime been committed? If they’re investigating a crime, surely there are more avenues of investigation than Facebook chats that didn’t even exist ten years ago. Whatever happened to good old fashioned police work? Seems like they just expect everyone’s chats to be handed to them on a silver platter when they ask for it.
I'm responding to this statement and showing how it is rather ignorant:
1. The police are either lazy or incompetent if they say they cannot trace criminals because of E2E secure chat.
As for the rest of your comments: The feds are watching criminals online because lots of crime is committed online. I do not think weakening encryption will help them in this pursuit.
>Whatever happened to good old fashioned police work?
That implies effort, and people are lazy. "Hey, Mr. Criminal, could you be so kind as to use App X when you plan to commit your crime, so our automated system can email us when you are going to break the law and also set up an event in our calendar so we can come and arrest you? Please be nice; we can make each other's lives easier if we work together."
> The terrorists and pedophiles that are the most dangerous are using far more sophisticated means of communication than Wire, Signal, WhatsApp, Wickr
You'd be surprised how poor their opsec can be. Regular file transfer services, for instance, see this traffic entirely in the clear; not even the slightest attempt at encryption is made.
The term "extremist" is used for certain activists whose views skew too far outside the usual range, although one man's "extremist" is another's "unorthodox thinker". Depending on the era, a view may be perfectly reasonable or ridiculous; compare how the idea of "protecting the environment" was received in the past.
Some countries do crack down (or have cracked down) on really outlandish views for a time. One country's range of acceptable views may also differ from another's.
As a matter of principle, I don't much like terrorists as they operate under the goal of spreading terror. I have strong doubts cracking down on encryption would stop them, as they operate perfectly fine with fairly mundane tools and the "mass-surveillance" machine loses them in the noise.
It's hard to collect metadata if the traffic is inside Tor, I2P or behind some clever tunneling.
But government programs have other means of collecting data: OS level backdoors, flawed random number generators like DUAL_EC_DRBG, "unintended hardware bugs" in Intel's CPUs.
I guess they mostly rely on these alternative means. The "let's forbid strong encryption" campaigns might just be smoke and mirrors, meant to make their targets feel secure if they use apps with "strong encryption".
Or far simpler means. It's trivial, really, to write your own app for encrypted communication or signaling. I bet I could build one in a day.
Even without programming skills, you could set up a shared drive containing only a keepass file. Download the file, use your key and password to open it, then read the message. Monitor the last updated timestamp to see if there have been any changes.
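The timestamp-watching part of that scheme is only a few lines of code. A minimal sketch (the file path and poll interval here are made up; the real file would be your encrypted KeePass database on the shared drive):

```python
import os
import time

def watch_dead_drop(path: str, poll_seconds: float = 60.0):
    """Yield the file's raw bytes each time its mtime changes."""
    last_mtime = None
    while True:
        try:
            mtime = os.path.getmtime(path)
        except FileNotFoundError:
            mtime = None  # drop not (yet) present; keep polling
        if mtime is not None and mtime != last_mtime:
            last_mtime = mtime
            with open(path, "rb") as f:
                yield f.read()  # still encrypted; open with your key + password
        time.sleep(poll_seconds)

# Usage (runs forever, so not executed here):
#   for blob in watch_dead_drop("/mnt/shared/drop.kdbx"):
#       ...decrypt blob with the KeePass key file and password...
```

Note that mtime polling only tells you *something* changed; the contents stay protected by the KeePass encryption, and anyone watching the share still sees who touches the file and when.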