>Unlike any other existing messaging platform, SimpleX has no identifiers assigned to the users
Lies by omission. SimpleX doesn't mask your IP address by default; it leaks to the server. The ENTIRE public SimpleX network is hosted by two companies, Akamai and Runonflux. If two conversing users' queues sit on servers run by the same provider, their metadata can be linked with end-to-end correlation attacks, so pray that the two are not PRISM partners or whatever has replaced that program.
I'd be fine with SimpleX if they
1) bundled Tor and had a toggle switch during initial setup.
2) were transparent about what the toggle switch does (lag/bandwidth vs IP masking)
This is crucial, as they already have Tor Onion Service server infrastructure set up, but they're not making it easy for a layperson to use it. Instead they lie by omission. Their
"SimpleX has no identifiers"
only means
"SimpleX does not add additional identifiers"
They don't give a damn about your router gluing your IP address, which is increasingly a unique IPv6 address, to the header of every packet you send.
> "SimpleX has no identifiers" only means "SimpleX does not add additional identifiers"
These two statements are identical. IP addresses are Internet user identifiers, not SimpleX identifiers. All other application-level networks have identifiers of their own, in addition to IP addresses.
The goal of the design is:
- to prevent correlation of which IP address communicates with which,
- to protect the IP address from servers not chosen by the users.
It is not supposed to protect IP addresses from all servers, and Tor does not achieve that either, as Tor relays are servers too.
>These two statements are identical. IP addresses are Internet user identifiers, not SimpleX identifiers.
You are promoting SimpleX as a metadata-privacy improvement over Tor Onion Service based messengers like Cwtch, which hide the IP address by default. IP addresses can be linked to users, and users have to blindly trust that the server is not collecting them. TelCos and ISPs keep logs of those as per data retention laws, so it's not hard to determine who a SimpleX user is if SimpleX wants to disclose that information.
>to prevent correlation of which IP address communicates with which
Which Akamai can do, and Runonflux can do. With 50% probability on a per-target basis, I might add.
>It is not supposed to protect IP addresses from all servers, and Tor does not achieve that either
Tor relays actively mask the IP of previous node from the next node.
Tor relays do not have access to the internal protocol of SimpleX queues etc. SimpleX servers do, so they can collaborate far more efficiently.
Tor relays are chosen at random by the user, and random collaborating entry/exit nodes only expose 10-minute windows for ciphertext-only metadata collection without access to IPs. With SimpleX there's a 50% chance the same company runs the servers of both users.
>Tor does not achieve that either, as Tor relays are servers too.
This is ridiculous. You're effectively arguing that because Tor isn't literally magical enough to send TCP packets without IP addresses in headers, it's not a significant improvement. As I showed you last time, the NSA itself has admitted they will NEVER be able to deanonymize all Tor users all the time, nor can they do it on demand. Which is quite different from your "we run servers on two VPS companies ourselves, but pinky promise, they don't aggregate and correlate information."
>I designed SimpleX network, and the founder of SimpleX Chat.
I know. We two have had a looong conversation about this, first on Reddit, then here, then on the privacyguides forum, and now again, here. Every single time you run for the hills.
Link your open, honest, non-misleading threat model on your front page. Make sure it makes it extremely clear that "Unless you install and configure Tor, the SimpleX client does not take any action to hide your IP address from the server".
I mean, look how professional https://tryquiet.org/ looks when the threat model is up there in the title bar, and not as fine print behind menus.
Do that and we're done. I won't call you out anymore.
We plan a private payment mechanism for the servers that will utilize blockchain for valid reasons - we call it Community Vouchers. But they are not coins, they are service credits that cannot be created out of nothing (as coins can) and cannot be sold - they can only be used to pay for the servers.
I understand that servers need to be paid for but that's why I run my own matrix server. So I pay for that and for the users on it. Much nicer than having to trust another party to run them.
Didn't know. Do you have a source on that? I can't see any mention of coins in their blog. Are you maybe referring to the crypto exchange with the same name that normally appears at the top of Google searches?
Not exactly making their own "coin", but definitely involved with cryptocurrency, and if I understand correctly, the vouchers themselves involve blockchain.
Not open source, you can't verify the end-to-end encryption or any other measures the client uses actually happen. This makes it trivial to hide backdoors.
The entire secure messaging app space is open source, why anyone would bother with writing a proprietary app and thus omit verifiability of the security claims is beyond me.
EDIT: Also, no proxy settings, meaning your IP address can't be masked with Tor/SOCKS5 proxy.
Yeah, if you buy a number with Durov's TON shitcoin. The original sales are over, and number auctions start from an opening bid of 37 dollars and run all the way to 14,000 USD https://fragment.com/numbers, and they take a very long time, up to a year, to close.
Also, Telegram is not private.
1. It's not E2EE by default
2. It's not E2EE for groups on any platform
3. It's not E2EE for 1:1 chats on desktop clients, forcing you to downgrade from secret chats to insecure chats
4. It's collecting 100% of your metadata, including
* who you talk to, when, how much, what type of data you exchange,
* your IP address, which sort of defeats the purpose of having no phone number, and
* when you enable secret chats
Telegram is also not transparent about its funding, about who develops it, and who has access to the plaintexts stored on their server (meaning, anyone with a zero day or two).
Journalists who went looking for Telegram's office in Dubai found that no one in the neighboring offices had ever seen Telegram staff enter the space https://www.youtube.com/watch?v=Pg8mWJUM7x4
Telegram was built with blood money from VKontakte, and Durov has been marketed as living in exile, when in reality he has visited Russia on average once every 2.4 months since the exile began. Strangely, Durov has not had his underwear poisoned and windows have been kind to him, despite supposedly betraying Putin's interests.
Btw, an interesting connection between Durov/TON and Jan Marsalek (alleged Russian spy) was recently uncovered by the FT:
>In 2018 Marsalek invited Ben Halim and other backers of the Libya projects to invest in a new crypto token being launched by messaging platform Telegram, whose founder Pavel Durov had met Marsalek and invited him to participate.
>A special purpose vehicle was set up for them to pool their money and invest but Credit Suisse, which was organising the sale of the token, blocked the transaction. It turned out the bank was happy to take money from Marsalek, whose role in the biggest corporate fraud in recent European history had yet to be revealed, but was wary of his Libyan friends.
>As a workaround, Ben Halim and others decided to let Marsalek invest their money in his name, sidestepping Credit Suisse’s money laundering checks. However, the US Securities and Exchange Commission blocked Telegram’s issuance of the tokens and Marsalek refunded his Libyan associates.
> Yeah if you buy a number with Durov's TON shitcoin
Not even. If you actually try you will discover at the last step (after full KYC, signing some dubious agreements, and linking an existing TG account) that the Fragment "market" is actually fully centralized and has not been open for new buyers-users for a good while. No secondary markets out there (maybe not even possible on their network) afaik.
I mean as in the number is not tied to the identity; maybe you are asked for your number to verify the account, but after that you can have an account that isn't linked to a number. The account is tied to a username @blablabla.
I think Telegram is filth as much as the next guy, but I'm just making that technical point.
Yup. As the guy who put together the most secure FOSS messaging system*, it's not "impossible to hack". It's a caveat-ridden, inconvenient-to-use, tedious-to-set-up, hardware-isolated, multi-node application with long must-read documentation, and it requires experience with electronics and soldering.
Yup, it's almost like they value feelings/emotions over evidence/science. It's not that hard to understand considering how that weird lot consists of all sorts of cranks, pooled by the alt-right radicalization pipelines of wellness/conspirituality/flat earth/alt-med/anti-vaccine/UFOs...
Signal provides content-privacy by design with E2EE. Signal provides metadata-privacy by policy, i.e. they choose not to collect data or mine information from it. If you need metadata-privacy by design, you're better off with purpose-built tools like Cwtch, Ricochet Refresh, OnionShare, or perhaps Briar.
SimpleX's front page lied by omission about it having no identifiers. The fine-print threat model did not mention that the server has access to your IP address, and the mitigation of creating a "decentralized" system of users talking via separate servers ran into the problem of there being two VPS companies hosting the entire public server infrastructure. These issues were major, as SimpleX advertised itself as an improvement over Cwtch, which should have meant a superset of the metadata was protected. But that obviously wasn't the case.
The thing is, there's Akamai and Runonflux, two companies hosting the entire public SimpleX infrastructure. If you're not using Tor and SimpleX Onion Services with your buddies, these two companies can perform end-to-end correlation attacks to spy on which IPs are conversing, and TelCos know which IPs belong to which customers at any given time. Mandatory data retention laws about the assigned IPs aren't rare.
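To make the correlation risk concrete, here's a rough sketch (entirely my own illustration: the log format, addresses and the one-second window are assumptions, not anything SimpleX actually exposes) of how an operator who can see traffic at both users' servers could match conversations by timing alone:

    # Hypothetical timing-correlation sketch for a host that sees both servers.
    # Everything here (log format, IPs, latency window) is invented for illustration.
    from itertools import product

    # (timestamp, client IP) of message submissions observed at server A
    sends_at_a = [(100.02, "198.51.100.7"), (161.40, "198.51.100.7"), (245.91, "198.51.100.7")]
    # (timestamp, client IP) of message deliveries observed at server B
    recvs_at_b = [(100.31, "203.0.113.5"), (161.72, "203.0.113.5"), (512.00, "203.0.113.9")]

    def correlate(sends, recvs, window=1.0):
        """Count near-simultaneous send/receive pairs per (IP, IP) pair."""
        scores = {}
        for (ts, ip_a), (tr, ip_b) in product(sends, recvs):
            if 0 <= tr - ts <= window:
                scores[(ip_a, ip_b)] = scores.get((ip_a, ip_b), 0) + 1
        return scores

    print(correlate(sends_at_a, recvs_at_b))
    # {('198.51.100.7', '203.0.113.5'): 2} -> these two IPs are probably conversing

Real traffic is noisier than this, but with enough messages the statistics converge, which is the whole point of the attack.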
Yes, that's why I said I don't like their relays. It doesn't even have to be Akamai; you first need to trust SimpleX not to track your IP. I'd rather use a messenger where something is not possible (or at least hard) than rely on trust.
As long as IP leaks are possible, I'd rather also use Signal, where at least the rest is battle tested and state of the art.
My concern with Signal is they'll either comply or move out of the EU with the incoming Chat Control, and I'd rather have a fully decentralized messenger with as few leaks as possible.
The sad part is, that's what's keeping Signal safe from spam.
Also, the average Joe is not using a proxy to hide the IP address of their device, so they leak their identity to the server anyway. Signal is not keeping those logs, so that helps.
Messaging apps cater to different needs, sometimes you need only content-privacy. It's not a secret you're married to your partner and you talk daily, but the topics of the conversation aren't public information.
When you need to hide who you are and who you talk to (say, a Russian dissident group, or sexual minorities in fundamentalist countries), you might want to use Tor-exclusive messaging tools like Cwtch. But that comes with the near-unavoidable issue of no offline messaging, meaning you'll have to schedule when to meet online.
Signal's centralized architecture has upsides and downsides, but what ultimately matters is (a) are you doing what you can within the architectural limitations of the platform (strong privacy-by-design provides more features at the same security level), and (b) are you communicating the threat model to the users so they can make an informed decision about whether the application fits their threat model.
If you intend to use SMS (phone numbers) as a resource constraint (sign up requires 'locking up' a resource that is worth at least a few cents) then at least you can offer a ZKP system where the 'consumed' phone number is not tied to an account. You could also offer to accept cryptocurrency for this function - call it a donation.
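To sketch the idea (using Chaumian blind signatures rather than a literal ZKP, but it gives the same unlinkability property; the raw-RSA math below skips padding and is only an illustration, not any real provider's protocol): the server verifies the phone number, blind-signs a token, and the client later redeems that token for an account the server can't link back to the number.

    # Sketch: unlinkable signup tokens via Chaum-style RSA blind signatures.
    # Raw RSA without padding, for illustration only -- not a production scheme.
    import hashlib
    import secrets
    from cryptography.hazmat.primitives.asymmetric import rsa

    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    n = key.public_key().public_numbers().n
    e = key.public_key().public_numbers().e
    d = key.private_numbers().d

    # Client: pick a random token and blind its hash before showing it to the server.
    token = secrets.token_bytes(32)
    m = int.from_bytes(hashlib.sha256(token).digest(), "big")
    r = secrets.randbelow(n - 2) + 2              # blinding factor
    blinded = (m * pow(r, e, n)) % n

    # Server: has just verified the phone number via SMS; it signs the blinded
    # value without ever seeing the token itself.
    blind_sig = pow(blinded, d, n)

    # Client: unblind. The signature now verifies on the token, but the server
    # cannot link it back to the SMS verification session.
    sig = (blind_sig * pow(r, -1, n)) % n
    assert pow(sig, e, n) == m                    # check done later, at account creation

The phone number still rate-limits signups (one token per verified number), but the resulting account isn't tied to it.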
That Signal did none of those things implies that privacy was not their objective. Only secure communications was.
It's possible that the reason behind their anti-privacy stance is strategic, to discourage criminal use which could be used as a vector of attack against them. That doesn't change the fact that Signal is demonstrably anti-privacy by design.
> privacy was not their objective. Only secure communications was.
> Signal is demonstrably anti-privacy by design.
Your second claim, though, is uncharitable and misses Signal's historical context.
The value of a phone number for spam prevention has been mentioned, but that's not the original reason why phone numbers were central to Signal. People forget that Signal was initially designed around using SMS as transport, as with Twitter.
Signal began as an SMS client for Android that transparently applied encryption on top of SMS messages when communicating with other Signal users. They added servers and IP backhaul as it grew. Then it got an iOS app, where 3rd-party SMS clients aren't allowed. The two clients coexisted awkwardly for years, with Signal iOS as a pure modern messenger and Signal Android as a hybrid SMS client. Finally they ripped out SMS support. Still later, they added usernames and the ability to communicate without exposing phone numbers to the other party.
You can reasonably disdain still having to expose a phone number to Signal, but calling it "anti-privacy by design" elides the origins of that design. It took a lot of refactoring to get out from under the initial design, just like Twitter in transcending the 140-character limit.
> You can reasonably disdain still having to expose a phone number to Signal, but calling it "anti-privacy by design" elides the origins of that design.
They introduced usernames without removing the requirement for phone numbers.
The parent attempted to excuse them by pointing out that the initial design was based on phone numbers. Putting aside the fact that the initial design is irrelevant to criticism of the present design, they went out of their way to design usernames yet deliberately disallow signup without phone numbers.
> Not a very good case made since you obviously didn’t read the parent discussion.
This isn't an argument, do you have anything to back up your assertion?
I don’t understand, you know what I will ask next.
And broadcasting on FM radio is then what?
You're just redefining words; there's no need for this. We agree it would be better from a privacy point of view if Signal did not require a phone number, but you're nitpicking: it's a one-time thing, and you can use a public phone that no one can associate with you for this. And then never need it again if you have proper backups.
If privacy wasn't their objective they would just have a database of all the phone numbers.
Perfect privacy would mean not sending any messages at all, because you can never prove the message is going to the intended recipient. Any actual system is going to have tradeoffs, calling Signal anti-privacy is not serious, especially when you're suggesting cryptocurrency as a solution.
A ZKP system where you make a public record of your zero-knowledge proof sounds anti-privacy to me. Even if you're using something obfuscated like Monero, it's still public. I see where you're coming from, but I think I would prefer Signal just keep a database of all their users and promise to try and keep it safe rather than rely on something like Monero.
They have exactly that. They rely on TPMs for "privacy" which is not serious.
> Perfect privacy would mean not sending any messages at all
Not sending messages is incompatible with secure messaging which is the subject of the discussion...
> ZKP system where you make a public record of your zero-knowledge proof sounds anti-privacy to me.
A zero-knowledge proof provably contains zero information. Even if you use a type of ZKP vulnerable to a potential CRQC it's still zero information and can never be cracked to reveal information (a CRQC could forge proofs however).
> especially when you're suggesting cryptocurrency as a solution
Would you elaborate on why cryptocurrencies are not a solution? Especially if combined with ZKPs to sever the connection between the payment and the account. When combined with ZKPs, they could even accept Paypal for donations in exchange for private accounts.
It's also possible that a lot of the criticism of Signal for setting a practical/realistic level of security they will try to provide comes from people who would rather that people either
1. were unable to communicate effectively, or
2. used no security at all.
Do you really use a communication system where you have all exchanged private keys in person and where even the fact that you use it is hidden from your government and phone operator?
If you wanted to keep it safe from spam, you'd use a proof-of-work scheme using a memory-hard hash function like scrypt, or a Captcha, or an invite-code system like lobste.rs or early Gmail. Signal's architects already knew that when they started designing it.
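For what a memory-hard signup puzzle could look like, here's a minimal hashcash-style sketch on top of hashlib.scrypt (the cost parameters and difficulty are illustrative guesses, not anything Signal or anyone else uses):

    # Sketch of a memory-hard signup proof-of-work: find a nonce whose scrypt
    # output has enough leading zero bits. Parameters are illustrative only.
    import hashlib
    import os

    N, R, P = 2**15, 8, 1          # scrypt cost parameters (the memory-hard part)
    DIFFICULTY_BITS = 12           # server-chosen difficulty (~2^12 scrypt calls on average)

    def solve(challenge: bytes, bits: int = DIFFICULTY_BITS) -> int:
        """Client side: grind nonces until the scrypt output clears the target."""
        target = 1 << (256 - bits)
        nonce = 0
        while True:
            digest = hashlib.scrypt(nonce.to_bytes(8, "big"), salt=challenge,
                                    n=N, r=R, p=P, maxmem=64 * 1024 * 1024, dklen=32)
            if int.from_bytes(digest, "big") < target:
                return nonce
            nonce += 1

    def verify(challenge: bytes, nonce: int, bits: int = DIFFICULTY_BITS) -> bool:
        """Server side: a single scrypt call, no matter how long the client worked."""
        digest = hashlib.scrypt(nonce.to_bytes(8, "big"), salt=challenge,
                                n=N, r=R, p=P, maxmem=64 * 1024 * 1024, dklen=32)
        return int.from_bytes(digest, "big") < (1 << (256 - bits))

    challenge = os.urandom(16)     # issued by the server at signup
    nonce = solve(challenge)
    assert verify(challenge, nonce)

The asymmetry is the whole point: the client grinds for minutes, the server verifies in one call.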
>proof-of-work scheme using a memory-hard hash function like scrypt
So who's doing the computation? The spammer can't afford to run a 3-second key derivation per spam device? Or how long do you think a normal user will wait while you burn their battery before saying "Screw it, I'll just use WA"? Or is this something the server should be doing?
>Captcha
LLMs are getting quite good at getting around captchas.
>invite-code system
That works on lobste.rs, where everyone can talk together and recruit interesting people to join the public conversation. Try doing that with limited invites when you're recruiting peers and relatives to build a useful local network. "I'm sorry Adam, I'm out of invites, can you invite my mom's step-cousin? My mom needs to talk to them."
>Signal's architects already knew that when they started designing it.
I think they really did, and they did what the industry had already established as the best practice for a hard problem.
The only reasonable alternative would've been email with heavy temp-mail hardening, or looking at the opposite end of Zooko's triangle and having long, random, hard-to-enumerate usernames like Cwtch and other Tor-based messengers do. But even that doesn't remove the spam-list problem: any publicly listed address ends up on a list that gets spammed with contact requests or opening messages.
Those are reasonable questions, but they suggest that you don't understand the landscape very well.
The user's device has to do the computation for it to be effective. How long does it normally take to sign up for a new messaging service like WhatsApp? Five minutes? You should burn the user's cellphone battery for about half that long, 150 seconds, 50 times more than you were thinking. Plus another half-minute every time you add a new contact. Times two for every time someone blocks you, up to a limit of 150 seconds. Minus one second for each day you've been signed up. Or something like that.
The value of signing up for Signal is much higher to a real user than it is to a spammer, so you just have to put the signup cost somewhere in the wide range in between.
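Just to make the shape of that schedule concrete, in code (the constants are the rough numbers above, and the whole thing is as hand-wavy as the "or something like that"):

    # Sketch of the ad-hoc cost schedule described above, in seconds of
    # client-side proof-of-work. Constants are rough guesses, not a real policy.
    SIGNUP_COST = 150          # ~half of a typical five-minute signup
    NEW_CONTACT_COST = 30      # extra work per added contact
    MAX_COST = 150             # per-action cap

    def contact_cost(times_blocked: int, account_age_days: int) -> int:
        """Cost to add a contact: doubles per block, capped, shrinks with account age."""
        cost = min(NEW_CONTACT_COST * (2 ** times_blocked), MAX_COST)
        return max(0, cost - account_age_days)   # minus one second per day signed up

    print(contact_cost(times_blocked=0, account_age_days=0))    # 30
    print(contact_cost(times_blocked=3, account_age_days=10))   # 140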
LLMs didn't exist when Signal was designed, and Captchas still seem to be getting a lot of use today.
Invite codes worked fine for Gmail, and would work even better for any kind of closed messaging system like Signal; people who don't know any users of a particular messaging system almost never try to use it. The diameter of the world's social graph is maybe ten or twelve, so invite codes can cover the world's social graph with only small, transitory "out of invites" problems.
The "industry" had "established" that they "should" gather as much PII as possible in order to sell ads and get investments from In-Q-Tel.
This might depend on the country you're in, but I'm quite certain I've gotten locked out of the signup flow in the past when I refused to provide a phone number.
I just tried it from my Android phone (GrapheneOS) and it still asks to verify a phone number when trying to create an account via a web browser. (Strangely, even though it's a private browser session it just asks to confirm my number by sending an SMS, not asking me for my phone number like it does on desktop -- I wonder how that works...)
If you're saying that the account creation flow through the system accounts application doesn't require a phone number, how are you sure that Google doesn't just collect the phone number directly from your device (they could even silently verify it through a class-0 silent SMS)?
Does it also not ask for a phone number if you factory reset, remove the SIM card, and do not register the phone with a Google account? Maybe they track the IMEI instead?
In the case of Twitter, there is evidence that the initial implementation was meant to just be a security mechanism but later someone else noticed they had a handy database of user phone numbers and decided to treat them as free marketing contact information.
> How long does it normally take to sign up for a new messaging service like WhatsApp? Five minutes? You should burn the user's cellphone battery for about half that long, 150 seconds
If you actually do that you're going to crash a lot of cellphones and people will rightly blame your app for being badly coded.
What, their CPUs will overheat? I've run infinite loops on cellphones lots of times without that happening. In fact, I'm running four of them right now, and have been for the last five minutes as I write this comment. The battery drain is annoying but I haven't seen instability. I've run plenty of compiles on cellphones (things like BLAS and Numpy) that take longer than that, and I've never seen one crash a phone.
>but they suggest that you don't understand the landscape very well.
Yeah, what could I possibly know about secure messaging.
>Plus another half-minute every time you add a new contact.
Can you point to some instant messaging app that has you wait 30 seconds before talking to someone? How niche is that?
If you want proper uptake and accessibility for everyone, you need something like a Samsung A16 to run the work in 150 seconds. A non-amateur spammer throws ten RTX 5090s at it to unlock access to random accounts at 80x parallelism (capped by memory cost), with whatever iteration count that implies, in far less than 150 seconds. 121.5 GFLOPS vs 10x104.8 TFLOPS is an overall raw performance difference of roughly 8,600x. And that account is then free to spam at a decent pace for a long time before it gets flagged and removed.
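For reference, the back-of-the-envelope arithmetic for that figure (raw FLOPS only; memory cost is what caps the usable parallelism, as noted above):

    # Naive FLOPS-only comparison of a low-end phone vs ten RTX 5090s.
    # Memory-hard functions keep the real speedup well below this number.
    phone_gflops = 121.5                  # Samsung A16-class GPU, rough figure
    gpu_tflops = 104.8                    # single RTX 5090 FP32, rough figure
    n_gpus = 10
    ratio = (n_gpus * gpu_tflops * 1000) / phone_gflops
    print(f"{ratio:,.0f}x")               # ~8,600x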
The accounts are not generated in five minutes per random sweatshop worker: https://www.youtube.com/watch?v=CHU4kWQY3E8 has tap actions synced across sixty devices. And that's just to deal with human-like captchas that need to show human-like randomness. Proof-of-work is not a captcha, so you can automate it. Signal's client is open source for a myriad of reasons, the most pressing of which is verifiable cryptographic implementations. So you can just patch your copy of the source to dump the challenge and forward it to the brute-force rig.
Either the enumeration itself has to be computationally infeasible, or it has to be seriously cost limited (one registration per 5 dollar prepaid SIM or whatever).
>Invite codes worked fine for Gmail
Yeah, and back in ~2004, when Hotmail had 2MB of free storage, Gmail's 1,000MB of free storage may have also "helped".
If the PoW cost is a low-end cellphone CPU for 2.5 minutes, then it's nothing to the spammer with the 200-core hourly AWS server. If each spammer can create 10000 identities (not connections, identities) per hour, then you might as well not have a limit at all. If they could even create only 2 identities per day that would be enough to spam with (yet still unacceptable to actual users). 240,000 identities per day is way too many.
The speed ratio is much smaller than you say with memory-hard PoW problems, which depend on the amount of RAM you have (and its response time). But it's surely true that a spammer could create many accounts per day, perhaps 1000 per hour on a big server, which could then go on to spam a few accounts each before becoming uneconomical to keep using.
But that would still put the CPM of the spam around US$2, which very few spammers can afford. Maybe mesothelioma lawyers and spearphishers.
You don't have to make spamming physically impossible, just unprofitable.
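Rough numbers behind that ~US$2 CPM, for concreteness (the server price and messages-per-account are assumptions I'm plugging in, not figures anyone above committed to):

    # Back-of-the-envelope spam economics under assumed numbers.
    server_cost_per_hour = 2.00      # assumed price of a big PoW-grinding server, USD
    accounts_per_hour = 1000         # from the estimate above
    msgs_per_account = 1             # assumed messages sent before the account is burned
    cost_per_message = server_cost_per_hour / (accounts_per_hour * msgs_per_account)
    cpm = cost_per_message * 1000    # cost per thousand messages
    print(f"CPM ~= ${cpm:.2f}")      # ~$2.00 with these assumptions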
Invite codes worked fine for Gmail, but you weren't limited to only the people on Gmail to talk to. It was a full, regular email service. You could email anyone and receive mail from anyone. I doubt it would have been very successful if it was invite only and you could only email other Gmail users for the first few years.
Waze was also invite-only, G+ was initially invite only. Did that model help or hurt them?
I think it helped them. Gmail had more trouble with invite codes because some people wanted a Gmail account, but didn't know any existing Gmail users, because Gmail was useful for communication with non-Gmail users.
G+ didn't have that problem so much, but I don't remember it using invite codes.
There are people who believe that proof-of-work isn't very effective, but none of them have succeeded in spamming the Bitcoin network with blocks they've mined, driving the other miners out of business, nor (for the last several years) with spamming the Bitcoin network with dust transactions they've signed, so I don't think we should take their opinions very seriously.
Bots may be better than humans at Captchas now, although I'm not certain of that, but they certainly weren't when Signal was designed.
I don't see why invite codes would be a problem for mainstream use.
> There are people who believe that proof-of-work isn't very effective, but none of them have succeeded in spamming the Bitcoin network with blocks they've mined, driving the other miners out of business, nor (for the last several years) with spamming the Bitcoin network with dust transactions they've signed, so I don't think we should take their opinions very seriously.
Different system. The parent and GP are talking about proof-of-work being used directly for account creation. If a chat service required mining-levels of PoW (and hence any prospective new users to have an ASIC), it would not be very popular. Nor would it be very popular if it used a relative difficulty system and the spammers used dedicated servers while the legitimate users had to compete using only their phones.
> none of them have succeeded in spamming the Bitcoin network with blocks they've mined
I'm not saying you're wrong, but I have no idea what you're getting at, because the sentence sounds kind of absurd. As a result, I'm not sure if it addresses your point, but just to throw it out there: Bitcoin and anti-spam are different applications of proof of work. Anti-spam has to strike a compromise between being cheap for the user (who is often on relatively low-powered mobile hardware), and yet annoying enough to deter the spammer. It's not unreasonable to believe that such a compromise does not exist.
> Bots may be better than humans at Captchas now, although I'm not certain of that, but they certainly weren't when Signal was designed.
Fair point, but again, even in 2014, an instant messenger with captchas would have much more friction than every other messenger. And captchas aren't just bad because they introduce enough friction to drive away pretty much everybody: they also make users feel like they're being treated as potential criminals.
> I don't see why invite codes would be a problem for mainstream use.
Can you elaborate? Invite codes blocking access to the service itself "like lobste.rs" mean that no one can use your service unless they've been transitively blessed by you. That's obviously going to limit its reach...
Bitcoin had a spam transaction problem ("dust transactions") which was a bigger problem than email spam, because every transaction is received by every node. It was easy to solve because Bitcoins are minted by proof of work.
I don't think a Captcha for signup would have been much friction. Certainly less than providing a phone number.
Why would someone want to use a closed messaging service like Signal unless they knew an existing user? I don't think that the requirement for that existing user to invite them would be a significant barrier. So I think it's not going to limit its reach.
Yeah it depends on where the producer expects the CD to be played.
99% of music is made to be played on the radio / in the car etc., in a noisy environment where you don't want to be adjusting the volume knob all the time. So the dynamics are stripped in the mastering phase.
Music that gets pressed on vinyl isn't mastered for car play but for home stereo equipment, so it makes more sense to have a larger dynamic range.
CDs have an objectively lower noise floor (less hissing) and more dynamic range (the difference between the loudest and quietest note), but it's the mastering that usually destroys the sound. And nothing can be done about it on the consumer end, except finding a less-remastered version of the album in a thrift store that isn't scratched to oblivion.
There's really no reliable way to tell if a CD is going to have high dynamic range, except perhaps from niche audiophile studios like https://www.stockfisch-records.de/sf12_start_e.html, but https://dr.loudness-war.info/ has a fantastic list of records with their dynamic ranges, so you can check before you buy, and you can also explore and find new stuff to listen to on your speakers ;)
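If you want to eyeball this yourself on a rip, a crude stand-in for dynamic range is the peak-to-RMS ratio (crest factor) of the track; a rough sketch, assuming a 16-bit PCM WAV (this is not the exact algorithm dr.loudness-war.info uses):

    # Crude crest-factor (peak-to-RMS) estimate for a 16-bit PCM WAV file, in dB.
    # Only a rough proxy for "dynamic range", not the DR database's algorithm.
    import wave
    import numpy as np

    def crest_factor_db(path: str) -> float:
        with wave.open(path, "rb") as w:
            frames = w.readframes(w.getnframes())
        samples = np.frombuffer(frames, dtype=np.int16).astype(np.float64)
        rms = np.sqrt(np.mean(samples ** 2))
        peak = np.max(np.abs(samples))
        return 20 * np.log10(peak / rms)

    # A heavily limited "loudness war" master tends to land around 6-10 dB,
    # a dynamic one noticeably higher.
    print(crest_factor_db("track.wav"))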