Stop Breaking TLS (markround.com)
164 points by todsacerdoti 1 day ago | 158 comments




The author is complaining a lot about implementation pains without taking a step back and looking at why it exists in the first place.

Say you work at a place that deals with credit cards. You, as a security engineer, have a mandate to stop employees from shipping CC numbers outside the org.

You can educate all you want, you can have scary policies and HR buy-in, you can have all the "Anomaly detection, Zero Trust network architecture, EDR, Netflow analysis" in the world, but exactly zero of those will stop Joe Lunchbox from copy/pasting a block with a CC number in the middle into ChatGPT. You know what will? A TLS-inspecting proxy with some DLP bits and bobs.

It sucks, yes. But it works, and (short of the fool's errand of trying to whitelist every site any employee needs) it's the only thing that works.

And yes, I'm aware PCI DSS has additional requirements for CDEs and whatnot, but really this can apply to anything -- a local government office dealing with SSNs, a school with student ID numbers, a corporation with trade secrets.. these problems exist everywhere, and implementing PCI-like controls is often a bridge too far for unregulated industries.


That is not true: you can run DLP on an endpoint directly and inside a browser directly (e.g. via an extension or direct integration hooks).

You can also try to stop the situation where the CC numbers are in the clear anywhere in the first place, so that you can't copy/paste them around. What happens if someone writes the CC number down on a piece of paper?


Endpoint DLP helps but it's not even close to bulletproof. Just for fun, if you have DLP at work, open the integrated browser in VS Code and notice how you can send protected test strings without anything chirping you.

> CC numbers are in the clear anywhere in the first place

Sounds great in theory, until you realize that in a large number of industries the majority of employees need access to protected data to do their jobs. Imagine telling the IRS their employees can't see/use cleartext SSNs.

As for paper / mobile phones / whatever.. you're not wrong, but physical security is typically someone else's job.


Network DLP is also not bulletproof so I'm not sure what the argument is there. These things are all best effort.

> if you have DLP at work, open the integrated browser in VS Code and notice how you can send protected test strings without anything chirping you.

I recognize it's not instrumented, but how are protected strings getting there in the first place?


The fact that most tools have completely different ways of adding certificates is the biggest pain. Git, Python and Rust also have large issues. Git doesn't default to "http.sslBackend=schannel" on Windows. Python (or rather requests, or maybe urllib3) only looks at its own certificate store, and I have no idea how Rust does this (well, I use uv, and it has its own problems - I know about the --native-tls flag, but it should be a default at the least).
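For what it's worth, the Python side can usually be tamed without touching each library. A minimal sketch, assuming the proxy's root cert lives at a hypothetical /etc/corp/root-ca.pem:

    import os

    # Option 1: point requests/urllib3 at the corporate bundle via env
    # vars (requests honours REQUESTS_CA_BUNDLE; OpenSSL-based tools
    # generally honour SSL_CERT_FILE).
    os.environ["REQUESTS_CA_BUNDLE"] = "/etc/corp/root-ca.pem"
    os.environ["SSL_CERT_FILE"] = "/etc/corp/root-ca.pem"

    # Option 2: use the OS trust store instead of certifi's bundled
    # roots, via the third-party truststore package (pip install
    # truststore) -- the same mechanism pip itself ships with.
    import truststore
    truststore.inject_into_ssl()  # patches ssl.SSLContext globally

    import requests
    print(requests.get("https://example.com").status_code)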

It's such a nightmare at my current job as well. Everything always just breaks and needs investigating to figure out how to fix it.

Even putting aside the MITM and how horrendous that is, the amount of time lost to people dealing with the fallout has got to have cost so much (time and money). I can't fathom why anyone competent would want to implement this, let alone not see how much friction and how many safety issues it causes everywhere.


> I can't fathom why anyone competent would want to implement this

Compliance. Big financial orgs. and the like must show that they are doing something about "data loss" and this, sadly, is the easiest way to do that.

There's money in it if you can show them a better way.


> Compliance

With anti-security policies that: break TLS, thwart certificate pinning, encourage users to ignore certificate errors, expand the attack surface, increase data leak risks, etc. All while wasting resources and money.

Zscaler and its ilk have conned the IT world. Much like CrowdStrike did before it broke the airlines.

Not to mention:

> We only use data or metadata that does not contain customer or personal data for AI model training.

How reassuring.

https://www.zscaler.com/blogs/company-news/zscalers-commitme...


Big emphasis on the "show you're doing something" part: actually being effective isn't a requirement.

Yeah, and Java has its nice cacerts file so that should have been easy, but then we were using Bazel which does the "hermetic builds" thing so that had to be told about it separately, and on and on with all the other special-snowflake tools.

It added huge amounts of friction which was one reason I decided to move on from that gig.


On Android, macOS/iOS, and Windows, this is a solved problem. Only on the extremely fragmented Linux/POSIX runtimes do these problems surface.

Rust's solution is "it depends". You can use OpenSSL (system or statically compiled) or rustls (statically compiled with your own CA roots, system CA roots, or WebPKI CA roots).

I'm afraid that until the *ix operating systems come out with a new POSIX-like definition that stabilises a TLS API, regardless of whether that's the OpenSSL API, the WolfSSL API, or GnuTLS, we'll have to keep hacking around in APIs that need to be compatible with arbitrary TLS configurations. Alternatively, running applications through Waydroid/Wine will work just fine if Linux runtimes can't get their shit together.


> Windows, this is a solved problem.

Are you sure? It's been a few years, but last I tried Firefox used its own CA store on Windows. I'm pretty sure openjdk uses "<JAVA_HOME>/jre/lib/security/cacerts" instead of the system store too.


> On Android, macOS/iOS, and Windows, this is a solved problem.

Is it, though? It is absolutely trivial for an Android app (like the one you use for banking) to pin a specific CA or even a specific server certificate, and as far as I'm aware it is pretty much impossible to universally override this.

In fact, by default Android apps don't accept any user-installed certs. It uses separate stores for system-installed CA roots and user-installed CA roots, and since Android 7.0 the default is to only include the system-installed store. Apps have to explicitly opt-in to trusting the user-installed store.


Is it solved on macOS? Curl recently removed macOS keychain support, as there are something like 7 competing APIs, 6 of which are deprecated, and the remaining one is a complete HTTP replacement so curl can't use it.

The only reason it still works with curl on macOS is that Apple ships a version a few releases behind.


I absolutely do not want to be constrained to a single system cert store controlled by the OS vendor.

That last part does sound like a bad deal based on recent anti-owner-control habits like sealed immutable system volumes, but I definitely want to be constrained to a single system cert store controlled by the owner of a computer. Which works for the corporate case as well as the personal one.

I have a similar gripe when it comes to HTTP proxy configuration. It's invisible to you until you are in an execution environment where you are mandated to use the provider's proxy configuration.

Some software reads "expected" env variables for it, some has its own config or cli flags, most just doesn't even bother/care about supporting it.


Chiefly because fully "supporting it" requires a full JavaScript interpreter (system proxy settings can be a PAC file, which is JavaScript), and subscribing to changes in "system settings" during the lifetime of your program. Easier just to support http_proxy/https_proxy/no_proxy (and which standard for no_proxy? Does it support CIDR ranges?) or even less flexibility than that.
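The startup-time convention is at least cheap to support. A minimal stdlib sketch (note the no_proxy matching here is hostname/suffix based, so no CIDR ranges -- exactly the ambiguity mentioned above):

    import urllib.request

    # getproxies() reads the http_proxy/https_proxy/... environment
    # variables (and the registry / system config on Windows and macOS).
    proxies = urllib.request.getproxies()
    print(proxies)  # e.g. {'https': 'http://proxy.example:3128'}

    # proxy_bypass() implements no_proxy: True means "connect directly".
    if urllib.request.proxy_bypass("internal.example.com"):
        print("connect directly")
    else:
        print("go through", proxies.get("https"))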

If only http_proxy/https_proxy/no_proxy at startup time were more widely supported then. In my case I had to deploy software into a kubernetes cluster managed by a different company that required these configurations.

And many things break in different, exciting ways. For example, we discovered that whilst the JVM can be configured to use the system certificate store, that does not apply to websocket connections. So the product seems to be able to connect to the server, but all websocket connections bork out with TLS errors.

Fun!

And so many of those products deliver broken chains, and your client needs to download more certificates transparently ( https://systemweakness.com/the-hidden-jvm-flag-that-instantl... )

Double the fun!


One thing that has not quite been mentioned in the blog, is how much of the MITM spyware comes from very big and well known „security“ companies.

You know, the ones that really know about security. X-PAN-AUTHCHECK type of security.

The number of CVEs some of the big firewall companies collect makes it seem like it is a competition for the poorest security hygiene.

The real problem we have is compliance theatre where someone in management forces these solutions onto their IT department just so they can check a box on their sheets and shift all responsibilities away.


As a sysadmin I also hate this. Instead, I block stuff based on DNS requests, and I also block any other DNS provider as well as malicious IPs.

At this point in time, Microsoft is the bigger enemy here - some of their policies are just insane and none of this MITM will help [0][1]

[0] https://www.microsoft.com/en-us/microsoft-365/roadmap?id=490...

[1] https://techcommunity.microsoft.com/blog/microsoft365copilot...


Complains about TLS inspection, yet fronts their website on the biggest and most widely deployed TLS introspection middle box in the world ...

Why do we all disdain local TLS inspection software yet half the Internet terminates their TLS connection at Cloudflare who are most likely giving direct access to US Intelligence?

It's so much worse as it's infringing on the privacy and security of billions of innocent people whilst inspection software only hurts some annoying enterprise folks.

I wish we all hopped off the Cloudflare bandwagon.


Three of the banks I use have their websites/apps go through Cloudflare. So does the electronic records and messaging system used by my doctor. A lawyer friend uses a secure documents transfer service that is protected by guess who.

Who needs to let CF directly onto their network when they already sit between client and provider for critically-private, privileged communications and records access?


NSAaaS and people even pay for it.

I'm not sure if you're serious but in case you are (or other people):

TLS inspection is for EVERYTHING in your network, not just your publicly reachable URLs.

Putting Cloudflare anti-DDoS in front of your website is not the same as breaking all encryption on your internal networks.

Google can already see the content of this site since it's hosted... on the internet.


> Putting Cloudflare anti-DDoS in front of your website is not the same as breaking all encryption on your internal networks.

You misunderstood, they're complaining about it as a user. If your website uses Cloudflare then our conversation gets terminated by Cloudflare, so they get to see our unencrypted traffic and share it with whomever they want, compromising my privacy.

Which wouldn't be such a problem if it was just an odd website here or there, but Cloudflare is now essentially a TLS middle box for the entire internet, with most of the problems that the article complains about, while itself being hosted behind Cloudflare.


Given that 50-70% of the critical services I use in my daily life (healthcare, government, banking, insurance) all go through Cloudflare this practically means everything that is important to me as an individual is being actively intercepted by a US entity that falls under NSA's control.

So for all intents and purposes it's equivalent.

My point is: it's very hypocritical that we as industry professionals are complaining about poor corporates being MITM'd whilst we're perfectly fine enabling the infringement of the fundamental human right to privacy of billions of people by fronting all the shit that we build with Cloudflare in the name of "security".

I find the lack of ethical compass in this regard very disturbing, personally.


Having an organization install custom root certificates onto your work or personal computer and hosting a public blog on Cloudflare are two entirely different topics.

That your healthcare, government, bank, etc. are using Cloudflare, is a third. In an ideal world I guess I'd agree with you, but asking any of these institutions to deploy proper DDoS protection may just be too much of an ask.


...do you send private messages using services hosted on publicly reachable URLs?

Do you have an alternative, potentially one that's less centralised or private or in bed with three-letter agencies? I ask because my last infra was probed for vulnerabilities hundreds of times per day; putting Cloudflare in front with some blocked countries and their captchas brought the attempted attacks down to a few dozen per month.

I mean, is doing your own geo blocking actually a blocker for you?

The author is not complaining about reverse proxies, which would be a very silly position to take.

The certificate presented is not a Cloudflare one.

So it might be that they're using a custom one, which I believe is passed through end-to-end.


Cloudflare doesn't have their own CA. They use a bunch of third-party CAs (Let's Encrypt, Google and W2)

I think it’s misleading to imply hypocrisy considering the reasons listed in the article don’t apply to the scenario of a site being behind Cloudflare.

I wish so too, same for all the self-hosters using tailscale...

Tailscale connections don't get terminated by a middle box, it's just end-to-end encrypted Wireguard under the hood. Cloud-hosted control panel is a risk because they could push malicious configuration changes to your clients (ACLs and new nodes if you're not using the lock feature), but they can't do it without leaving a trace like Cloudflare can.

Tailscale cannot passively observe traffic.

They could inject malicious keys into your config, but it would be hard to mask the evidence of that.


Would it be hard? I thought the point of tailscale was not having to manage or concern yourself with key distribution.

Look up the Tailnet Lock feature.

These are not the same thing; the parent is confused.

I agree with the sentiment, but I think it's a pretty naive view of the issue. Companies will want all the info they can get, in case one of their workers does something illegal or inappropriate, so they can deflect the blame. That's a much more palpable risk than "local CA certificates being compromised" or something like that.

And some of the arguments are just very easily dismissed. You don't want your employer to see your medical records? Why were you browsing them during work hours and using your employer's device in the first place?


TLS inspection can _never_ be implemented in a good way; you will always have cases where it breaks something, and most commonly you will see very bad implementations that break most tools (e.g. it is very hard to trust a new CA because each of OS/browser/java/python/... will have their own CA store)

This means devs/users will skip TLS verification ("just make it work"), setting a dangerous precedent. Companies want to protect their data? Well, just protect it! Least privilege, data minimization, etc. are all good strategies for avoiding data leaks.


Sure it can; it just requires endpoint cooperation, which is a realistic expectation for most corporate IT shops.

You also need some decent support + auditing. There are a couple of places to configure (e.g. setting CURL_CA_BUNDLE globally covers multiple OSS libraries) but there will be cases where someone hits one of the edge clients and tries to ignore the error, which ideally would lead to a scanner-triggered DevOps intervention. I think a fair amount of the rancor on this issue is really highlighting deeper social problems in large organizations, where a CIO should be seeing that resentment/hostility toward the security group is a bigger risk than the surface problem.
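To make the "scanner" idea concrete, here's a hedged sketch of a check it could run: connect to a host and report who issued the certificate, which tells you whether that path actually goes through the inspection proxy (the hostname is a placeholder):

    import socket
    import ssl

    def issuer_of(host, port=443):
        # Uses whatever trust store this interpreter is configured with.
        ctx = ssl.create_default_context()
        with socket.create_connection((host, port), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                cert = tls.getpeercert()
        # getpeercert() returns the issuer as a tuple of RDN tuples.
        return dict(rdn[0] for rdn in cert["issuer"])

    # Your internal CA's name here means the path is inspected; a
    # public CA means this path bypasses the proxy.
    print(issuer_of("example.com").get("organizationName"))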

I’m all for privacy of individuals, but work network is not a public internet either.

A solution is required to limit the network to work related activities and also inspect server communications for unusual patterns.

In one example someone’s phone was using the work WiFi to “accidentally” stream 20 GB of Netflix a day.


What's the security risk of someone streaming Netflix?

There are better ways to ensure people are getting their work done that don't involve spying on them in the name of "security".


Security takes many forms, including Availability.

Having branch offices with 100 Mbps (or less!) Internet connections is still common. I’ve worked tickets where the root cause of network problems such as dropped calls ended up being due to bandwidth constraints. Get enough users streaming Spotify and Netflix and it can get in the way of legitimate business needs.

Sure, there’s shaping/qos rules and dns blocking. But the point is that some networks are no place for personal consumption. If an employer wants to use a MITM box to enforce that, so be it.


I think that's a very loose interpretation of Availability in the CIA triad.

This looks a lot like using the MITM hammer to crack every nut.

If this is an actual concern, why not deny personal devices access to the network? Why not restrict the applications that can run on company devices? Or provide a separate connection for personal devices/browsing/streaming?

Why not treat them like people and actually talk to them about the potential impacts. Give people personal responsibility for what they do at work.


Yes, but also it’s not an employer’s job to provide entertainment during work hours on a factory floor where there are machines that can kill you if you’re not careful.

There’s a famous fable where everyone is questioning the theft victim about what they should’ve done and the victim says “doesn’t the thief deserve some words about not stealing?”

Similarly, it’s a corporate network designed and controlled for work purposes. Connecting your personal devices or doing personal work on work devices is already not allowed per policy, but people still do it, so I don’t blame network admins for blocking such connections.


I agree with all you said, but it's not like it is well advertised by the companies--they should come right out and say "we MITM TLS" but they don't. It's all behind the scenes smoke and mirrors.

I agree, that’s a bad business practice.

Normally no personal device has the firewall root certs installed, so they just experience network issues from time to time, and DNS queries and client hello packets are used for understanding network traffic.

However, with recent privacy-focused enhancements, which I love by the way because they protect us from ISPs and others, we (as in everybody) need a way to monitor and allow only certain connections on the work network. How? I don’t know, it’s an open question.


It’s not at all a loose interpretation.

Availability: Ensures that information and systems are accessible and operational when needed by authorized users


What’s wrong with watching Netflix at work instead of working? That’s not for me to say, but I understand employers not wanting to allow it.

In Europe they prefer not to go to jail for privacy violations. It turns out most of these "communist" regulations are actually pretty great.

Does GDPR (or similar) establish privacy rights to an employee’s use of a company-owned machine against snooping by their employer? Honest question, I hadn’t heard of that angle. Can employers not install EDR on company-owned machines for EU employees?

(IANAL) I don't think there is a simple response to that, but I guess that given that the employer:

- has established a detailed policy about personal use of corporate devices

- makes a fair attempt to block work unrelated services (hotmail, gmail, netflix)

- ensures the security of the monitored data and deletes it after a reasonable period (such as 6–12 months)

- and uses it only to apply cybersecurity-related measures like virus detection, UNLESS there is a legitimate reason to target a particular employee (legal inquiry, misconduct, etc.)

I would say that it's very much doable.

Edit: More info from the Dutch regulator https://english.ncsc.nl/publications/factsheets/2019/juni/01...


Yes, at least in the Netherlands it is generally accepted that employees can use their work device personally, too.

Using a device owned by your company to access your personal GMail account does NOT void your legal right to privacy.


So does nobody in Europe use an EDR or intercepting proxy since GDPR went into force?

You can do it but you'd have to have a good case for it to trump the right to privacy.

It's not as simple as in the US, where companies consider everything on a company device their property even if employees use it privately.


I have found a definite answer from the Dutch Protection Agency (although it could be out of date).

https://english.ncsc.nl/binaries/ncsc-en/documenten/factshee...


What’s the definitive answer? From what I can tell that document is mostly about security risks and only mentions privacy compliance in a single paragraph (with no specific guidance). It definitely doesn’t say you can or can’t use one.

Your question: "So does nobody in Europe use an EDR or intercepting proxy since GDPR went into force?"

Given that a regulator publishes a document with guidelines about DPI, I think it rules out the impossibility of implementing it. If that were the case it would simply say "it's not legal". It's true that it doesn't explicitly list all the conditions you should meet, but that wasn't your question.


That's probably because there is no answer. Many laws apply to the total thing you are creating end-to-end.

Even the most basic law like "do not murder" is not "do not pull gun triggers" and a gun's technical reference manual would only be able to give you a vague statement like "Be aware of local laws before activating the device."

Legal privacy is not about whether you intercept TLS or not; it's about whether someone is spying on you, which is an end-to-end operation. Should someone be found to be spying on you, then you can go to court and they will decide who has to pay the price for that. And that decision can be based on things like whether some intermediary network has made poor security decisions.

This is why corporations do bullshit security by the way. When we on HN say "it's for liability reasons" this is what it means - it means when a court is looking at who caused a data breach, your company will have plausible deniability. "Your Honour, we use the latest security system from CrowdStrike" sounds better than "Your Honour, we run an unpatched Unix system from 1995 and don't connect it to the Internet" even though us engineers know the latter is probably more secure against today's most common attacks.


Okay, thanks for explaining the general concept of law to me, but this provides literally no information to figure out the conditions under which an employer using a TLS-intercepting proxy to snoop on the internet traffic of a work laptop violates GDPR. I never asked for a definitive answer, just, you know, an answer that is remotely relevant to the question.

I don’t really need to know, but a bunch of people seemed really confident they knew the answer and then provided no actual information except vague gesticulation about PII.


Are they using it to snoop on the traffic, or are they merely using it to block viruses? Lack of encryption is not a guarantee of snooping. I know in the USA it can be assumed that you can do whatever you want with unencrypted traffic, which guarantees that if your traffic is unencrypted, someone is snooping on it. In Europe, this might not fly outside of three-letter agencies (who you should still be scared of, but they are not your employer).

They can, but the list of "if..." and "it depends..." is much longer and more complicated, especially when getting to the part about how the obtained information may be used.

It has to have a good purpose. Obviously there are a lot of words written about what constitutes a good purpose. Antivirus is probably one. Wanting to intimidate your employees is not. The same thing applies to security cameras.

Privacy laws are about the end-to-end process, not technical implementation. It's not "You can't MITM TLS" - it's more like "You can't spy on your employees". Blocking viruses is not spying on your employees. If you take the logs from the virus blocker and use them to spy on your employees, then you are spying on your employees. (Virus blockers aiming to be sold in the EU would do well not to keep unnecessary logs that could be used to spy on employees.)


Yes. GDPR covers all handling of PII that a company does. And it’s sort of default-deny, meaning that a company is not allowed to handle (process and/or store) your data UNLESS it has a reason that makes it legal. This is where it becomes more blurry: figuring out if the company has a valid reason. Some are simple, e.g. if required by law => valid reason.

GDPR does not care how the data got “in the hands of” the company; the same rules apply. Another important thing is the principles of GDPR. They sort of underlie everything. One principle to consider here is that of data minimization. This basically means that IF you have a valid reason to handle an individual’s PII, you must limit the data points you handle to exactly what you need and not more.

So - company proxy breaking TLS and logging everything? Well, the company has a valid reason to handle some employee data, obviously. But if I use my work laptop to access private health records, then that is very much outside the scope of what my company is allowed to handle. And logging (storing) my health data without a valid reason is not GDPR compliant.

Could the company fire me for doing private stuff on a work laptop? Yes probably. Does it matter in terms of GDPR? Nope.

Edit: Also, “automatic” or “implicit” consent is not valid. So the company cannot say something like “if you access private info on your work PC then you automatically consent to $company handling your data”. All consent must be specific, explicit and retractable.


What if your employer says “don’t access your health records on our machine”? If you put private health information in your Twitter bio, Twitter is not obligated to suddenly treat it as if they were collecting private health information. Otherwise every single user-provided field would be maximally radioactive under GDPR.

If the employer says so and I do so anyway then that’s an employment issue. I still have to follow company rules. But the point is that the company needs to delete the collected data as soon as possible. They are still not allowed to store it.

I’ll give an example I’m more familiar with. In the US, HIPAA has a bunch of rules about how private health information can be handled by everyone in the supply chain, from doctor’s offices to medical record SaaS systems. But if I’m running a SaaS note taking app and some doctor’s office puts PHI in there without an express contract with me saying they could, I’m not suddenly subject to enforcement. It all falls on them.

I’m trying to understand the GDPR equivalent of this, which seems to exist since every text field in a database does not appear to require the full PII treatment in practice (and that would be kind of insane).


Many programmers tend to treat the legal system as if it was a computer program: if(form.is_public && form.contains(private_health_records)) move(form.owner, get_nearest_jail()); - but this is not how the legal system actually works. Not even in excessively-bureaucratic-and-wording-of-rules-based Germany.

Yeah, that’s my point. I don’t understand why the fact that you could access a bunch of personal data via your work laptop in express violation of the laptop owner’s wishes would mean that your company has the same responsibilities to protect it that your doctor’s office does. That’s definitely not how it works in general.

The legal default assumption seems to be that you can use your work laptop for personal things that don't interfere with your work. Because that's a normal thing people do.

I suspect they should say "this machine is not confidential" and have good reasons for that - you can't just impose extra restrictions on your employees just because you want to.

The law (as executed) will weigh the normal interest in employee privacy, versus your legitimate interest in doing whatever you want to do on their computers. Antivirus is probably okay, even if it involves TLS interception. Having a human watch all the traffic is probably not, even if you didn't have to intercept TLS. Unless you work for the BND (German Mossad) maybe? They'd have a good reason to watch traffic like a hawk. It's all about balancing and the law is never as clear-cut as programmers want, so we might as well get used to it being this way.


I remember at my first job, the internet stopped working at my workstation. I got on the phone with IT, and the guy said "looks like you don't have our new certificates." I asked why I would need my employer's certificates. He said "because we MITM every connection." I asked if that was even legal, and he said yes it's legal.

At another job I was handling a support ticket where a customer was asking, in so many words, "can I get HTTP headers of requests flowing through my Envoy TLS reverse proxy?" I said that they could terminate TLS at the proxy and redo things that way, but then it wouldn't be a TLS proxy, it'd be a MITM or a gateway. They could log the downstream/upstream and duration of connections, but that wouldn't help.


No one who understands what "MITM" means should have any expectation that I/O with a device owned and administered by a third party can be trusted (whether they do it by subverting PKI with internal certificate or not).

> Consider this - what is the likelihood of every certificate authority on the Internet having their private keys compromised simultaneously? I’d wager that’s almost at the whatever is the statistics equivalent of the Planck length level of probability.

It doesn't matter if every certificate authority is compromised or just one. One is all that is needed to sign certificates for all websites.


Author here, hi! Was just venting last night, but that's a very good point, I'll update it later with your correction :)

You should make it about CT logs. I believe you need to compromise at least three of them.

That was what I was thinking of (but worded it badly in the middle of my rant!)

If I wanted to intercept all your traffic to any external endpoint without detection I would have to compromise the exact CA that signed your certificates each time, because it would be a clear sign of concern if e.g. Comodo started issuing certificates for Google. Although of course as long as a CA is in my trust bundle then the traffic could be intercepted, it's just that the CT logs would make it very clear that something bad had happened.
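And anyone can check from the outside. A sketch using crt.sh (a public CT search frontend; its JSON output is an informal interface and may change) to list which CAs have logged certificates for a domain, so an unexpected issuer stands out:

    import json
    import urllib.parse
    import urllib.request

    def ct_issuers(domain):
        url = ("https://crt.sh/?q=" + urllib.parse.quote(domain)
               + "&output=json")
        with urllib.request.urlopen(url, timeout=30) as resp:
            entries = json.load(resp)
        return {e["issuer_name"] for e in entries}

    # Anything unexpected in this list is the "Comodo issuing for
    # Google" red flag described above.
    for issuer in sorted(ct_issuers("example.com")):
        print(issuer)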


The whole point of the logs is that they're tamper-evident. If you think the certificate you've seen wasn't logged you can show proof. If you think the logs tell you something different from everybody else you can prove that too.

It is striking that we don't see that. We reliably see people saying "obviously" the Mossad or the NSA are snooping but they haven't shown any evidence that there's tampering


> We reliably see people saying "obviously" the Mossad or the NSA are snooping but they haven't shown any evidence that there's tampering

Why would they use the one approach that leaves a verifiable trace? That'd be foolish.

- They can intercept everything in the comfort of Cloudflare's datacenters

- They can "politely" ask Cloudflare, AWS, Google cloud, etc. to send them a copy of the private keys for certificates that have already been issued

- They either have a backdoor, or have the capability to add a backdoor in the hardware that generates those keys in the first place, should more convenient forms of access fail.


> Why would they use the one approach that leaves a verifiable trace?

It is NSA practice to avoid targets knowing for sure what happened. However their colleagues at outfits like Russia's GRU have no compunctions about being seen and yet likewise there's no indication they're tampering either.

Although Cloudflare are huge, a lot of transactions you might be interested in don't go through Cloudflare.

> the hardware that generates those keys in the first place

That's literally any general purpose computer. So this ends up as the usual godhood claim, oh, they're omniscient. Woo, ineffable. No action is appropriate.


That's the most naive take I've read online this year.

So your stance is that spy agencies aren't spying on us because if they were, we'd know about it?


Your "I bet they're God" stance is even more naive. They're not God, they've got a finite budget both in financial terms and in terms of what will be tolerated politically.

Of course spooks expend resources to spy on people, but that's an expenditure from their finite budget. If it costs $1 to snoop every HTTP request a US citizen makes in a year, that's inconsequential so an NSA project to trawl every such request gets green lit because why not. If it costs $1000 now there's pressure to cut that, because it'll be hundreds of billions of dollars to snoop every US citizen.

That's why it matters that these logs are tamper-evident. One of the easiest ways to cheaply snoop would be to be able to impersonate any server at your whim, and we see that actually nope, that would be very expensive, so that's not a thing they seem to do.


That's never been my stance because there's a difference between mass surveillance and targeted surveillance. If you understood that then you wouldn't be getting lost and making silly references to "God".

I don't believe that the NSA is omniscient. I believe they have 95% of data on 95% of the population through mass surveillance, and 99.9% of data on 99.9% of people of interest through targeted surveillance.

You think abusing public CAs for mass surveillance is a genius idea, and that its lack of real-world abuse proves that mass surveillance just doesn't happen - full stop.

Unfortunately you fail to consider that if they tried to do this just once, they would be detected immediately, offending CAs would be quickly removed from every OS and browser on the planet, the trust in our digital infrastructure would be eroded, impacting the economy, and it would likely all be in exchange for nothing.

On the other hand if you're trying to target someone then what's the point of using an attack that immediately tips off your target, that requires them to be on a network path that you control, and that's trivially defeated if they simply use a VPN or any sort of application-layer encryption, like Signal? There is none.


> It is striking that we don't see that

It probably just means they are asking the providers to hand over the data, no need to perform active attacks.


This is only relevant for active MITM attacks.

"If you use my (private) network you follow my rules"

And I find it hard to argue with that.

I've been using a VPN habitually on my phone and my (personal) laptop for a decade now. Work, home, travel. Doesn't matter. It's always on.


How do you find your typical daily battery life with it always on?

I’ve tried this in the past and had to revert as I found it made a noticeable difference in my day-to-day.

Curious to hear the experience of others.


I have the impression Tailscale drains my battery on macOS and iOS, so I only turn it on when truly needed.

Yeah, it most certainly does. Very noticeable on iOS. I don’t know if this is an Apple specific thing, or if it’s a similar story on Android.

It’s WireGuard underneath, which is designed to not be very chatty when idle, so I’d put this down to regular back and forth with Tailscale’s control plane, relays, etc.

It’s a shame really, because a huge value prop of TS is that it’s a VPN you just leave on and forget about. I hate having to toggle it when I inevitably forget to and wonder why I’m getting connection errors to private resources.


I have an ipv6 wireguard vpn from my iOS phone to my home network. It routes all traffic through my home isp. I use the wireguard iOS client. Battery life has been fine for me. One caveat is that background updates are disabled for almost every app.

Was it OpenVPN that you tried in the past? WireGuard seems much better.

I really wish more places would enable explicit proxies: if you have a mandate to inspect all traffic, block 443 except to proxy.megacorp.com and configure clients to use it. You lose all of the bugs and security issues caused by the security software—fun fact, Palo Alto _still_ doesn’t correctly implement TLS 1.2!—which as the author points out is basically training users to disable validation or ignore errors. This is so common I’ve seen security people with the usual certs giving advice from ChatGPT to add -k to curl calls, which is especially great when that’s baked into a container which will run on other networks.

One really nice win is troubleshooting: inspection appliances break decades of performance and reliability work which groups like the Linux kernel developers have done, and they make all errors have the same symptom so the security team will now be the only people who can troubleshoot network-level issues. Explicit proxying can’t fix your network but it makes it very clear where the problem needs to be investigated.


Are you aware of how HTTPS proxying works? Clients use the CONNECT method, and after that everything is opaque to the proxy. So without MITM you only know the remote host:port from the CONNECT request.
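For what it's worth, this is all it takes from the client side with Python's standard library (proxy.megacorp.com:3128 is a placeholder): the CONNECT goes to the proxy in the clear, then the TLS handshake runs end-to-end inside the tunnel:

    import http.client

    # Connect to the proxy, then ask it to open a raw tunnel.
    conn = http.client.HTTPSConnection("proxy.megacorp.com", 3128, timeout=10)
    conn.set_tunnel("example.com", 443)  # sends "CONNECT example.com:443"
    conn.request("GET", "/")             # TLS handshake + request happen
    print(conn.getresponse().status)     # inside the tunnel, opaque to the proxy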

There’s another difference which is what I was referring to: in both cases, your proxy has to forge the SSL certificate for the remote server but in the transparent case it also must intercept network traffic intended for the remote IP. That means that clients can’t tell whether an error or performance problem is caused by the interception layer or the remote server (sometimes Docker Hub really is down…) and it can take more work for the proxy administrator to locate logs.

If you explicitly configure a proxy, the CONNECT method can trigger the same SSL forgery but because it’s explicit the client’s view is more obvious and explainable. If my browser gets an error connecting to proxy.megacorp.com I don’t spend time confirming that the remote service is working. If the outbound request fails, I’ll get a 5xx error clearly indicating that rather than having to guess at what node dropped a connection or why. This also provides another way to implement client authentication which could be useful if you have user-based access control policies.

It’s not a revelation, but I think this is one of the areas where trying to do things the easy way ends up being harder once you factor in frictional support costs. Transparent proxying is trading a faster rollout for years of troubleshooting.


More and more big customers (especially banks) are requiring this kind of self-inflicted-MITM attack from all their suppliers. Do you want to have customers? Get ready for zscaler!

How do you propose compliance with their exfiltration protection requirements? (And “turn down $ from those customers” is not an answer)


I'd like to know if there is any real world data about the effectiveness of TLS inspection in preventing harm. It's empirically a massive tax on any organisation that engages in any kind of software engineering or technical work. I estimate that something like 3% of all our engineering effort goes to working around the TLS inspection which breaks security on literally every piece of infrastructure we build. And it embeds all the harm that the article alludes to (and more). So it takes quite a significant balance of upside to counteract that.

So is the benefit worth it? Is there data to prove it? Or is it just authoritarian IT departments drunk on power implementing this stuff?

I'd love to know.


The title should say "Stop inspecting TLS", the current title reads like the TLS standard or technology is modified in a way to not work properly.

For what it's worth, I knew exactly what this was going to be about before I clicked.

Our cyber team have installed zscaler on most people's laptops, and somewhere in the fabric of the office internet connection.[1]

For those that don't know, it's a MITM proxy with certificates so that it can inspect and unroll TLS traffic.

Ostensibly it's there to stop data exfiltration, as we've had a number of incidents where people have stolen data and sent it to competitors. (Our c-suite don't have as much cyber shit installed, despite being both bigger targets and the ones who've broken the rules more....)

Now, I don't like zscaler, and I can sorta see the point of it. But.

Our cyber team is not a centre of technical excellence. They somehow managed to configure zscaler to send out the certs for a random property company, when people were trying to sign into our VPN.

This broke loads of shit and made my team (infra) look bad. The worrying part is they still haven't accepted that serving a random property company's website cert instead of our own/AWS's cert is a monster fuckup, and that we need to understand _why_ that happened before trying anything again.

[1] This makes automatic pen testing interesting, because everything we scan has vulnerabilities for NFS/CIFS, FTP and TCP DNS.


The security team in most corporates is just a bunch of checklist markers, so zscaler, crowdstrike or whatever gets deployed for compliance and/or certification, and you can't say no to it because it's company policy, and who knows better than the “security” team?

This is 100% true.

The problem I have with them is they are a no-talent crowd who enjoy pushing everyone around.

This is a massive pain at my work, same as I'm sure most other comments are saying.

I use Platform.IO for firmware development and can't build my firmware unless I hotspot my phone. I would say that's a PIO bug unless there is a flag I don't know about, but it's exposed by this nuisance of a firewall.

The devs where I am spend much of our time hotspotted to our phones with the corporate network never connected so IP goes out over the mobile network.

Whenever possible we use 'do not verify server certs' flags in libs and commands which is not ideal.


This is why I don't use the computers at work for anything not work related. They've been spying on us for at least 10 years.

This kind of TLS "man in the middle" tech is so frustrating to deal with, because it ends up breaking things.

For example, I've encountered zscaler setups in the wild which close TLS connections if non-HTTP traffic is encountered. Presumably the traffic inspection fails since there is no HTTP request, and this failure path closes the socket.

It's hard to say whether it's due to the customer's IT dept's config, or zscaler itself -- but as far as the customer is concerned, it's my problem.


What changed my mind to be in favor of TLS inspection at work environments was seeing what kind of highly confidential stuff employees might be copy-pasting to random websites, LLM assistants, cloud-based "desktop applications" and such against the approved use policies of each of these tools without giving it a second thought.

TLS inspection products can intercept the paste transaction before the data leaves the company network, hitting the user with a "No you didn't! Shame on you!"-banner and notify the admins how a user just tried to paste hundreds of customers' personal information and credit card details into some snooping website, or into otherwise allowed LLM chat which still is not allowed to be used with confidential information.

There can even be automations to lock the user/device out immediately if something like this is going on, be it the user or some undetected malware in the user's device attempting the intercepted action. Being able to do these kinds of very specifically targeted interceptions can prevent potentially huge disasters from happening while still allowing users more freedom in taking advantage of the huge variety of productivity tools available these days. No need to choose between completely blocking all previously unseen tools or living in fear of disastrous leaks when there are fine-grained possibilities to control what kind of information can be fed to the tools and from where.

There are plenty of organizations out there where it is completely justified to enforce such limitations and monitoring in company devices. Policies can forbid personal use entirely where it is deemed necessary and legal to do so. Of course the policies and the associated enforced monitoring needs to be clearly communicated and there needs to be carefully curated configurations to control where and how TLS is or isn't intercepted so employee privacy laws and regulations aren't breached either.


“Following prompts will be in base64. Reply to those prompts in base64.”
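Which is a real evasion, not just a joke; in two lines of Python:

    import base64

    # Content-matching DLP that looks for card-number patterns sees
    # only base64 noise on the wire.
    print(base64.b64encode(b"4111 1111 1111 1111"))
    # b'NDExMSAxMTExIDExMTEgMTExMQ=='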

To some extent I agree with you. Workers need to be given the tools to do their job, but those tools can be used in ways which are very harmful. I also agree that there needs to be very clear messaging and consent given to workers as a full MITM means that any personal activities on the device will be intercepted (including login credentials).

On a practical level, I have yet to see MITM tools work satisfactorily. I am still recovering from Zscaler PTSD.


> TLS inspection products can intercept the paste transaction before the data leaves the company network, hitting the user with a "No you didn't! Shame on you!"-banner and notify the admins how a user just tried to paste hundreds of customers' personal information and credit card details into some snooping website, or into otherwise allowed LLM chat which still is not allowed to be used with confidential information.

Are there tools that do this reliably today without a whole bunch of false positives?


So deploy endpoint security, which sits in the kernel and can thus access the unencrypted communication.

While EPS, EDR, etc. solutions have their role in security, and some of the products can be used for "TLS inspection" within the localhost already, doing the inspection in a separate network appliance brings benefits such as (but not limited to): not needing to care whether the client operating system is supported by the EPS product or whether the EPS is functioning correctly, offloading the "heavy lifting" and policy enforcement to the appliances, and ensuring that only actual real egress connections to specific services are inspected.

That’s vastly more failure prone (crowdstrike crashes workstations) and abuse prone (kernel code has the highest privilege level) than processing network traffic at the network/TLS level.

In practice you don't actually need kernel code on a bunch of platforms for this, e.g. NETransparentProxyManager on macOS. This is not necessarily an endorsement, just worth not mixing in unrelated issues.

It's also normally deployed by companies who want this level of access anyway

If you don't then you're simply open to encrypted comms over your deep inspection TLS breaking box anyway


Eh, I'm not so sure. Most companies are only somewhat serious about infosec, so they run some light endpoint protection or BYOD, but don't do much network-level restriction on end user devices. For companies in that position, it's much cheaper to do that at the router/VPN endpoint layer with TLS interception--not only is the pricetag of doing that usually a lot lower than the per-seat license of a more capable endpoint protection system, but configuring endpoint protection to allow what it should and not what it shouldn't is a constantly moving target with a failure mode of "breaks someone's workstation and then they have to call IT". IT departments are expensive to staff compared to one or two network administrators issuing edicts about the specific man who is standing in the middle of the SSL link on a particular day.

Also, a lot of nominally serious companies care a lot more about preventing nontechnical employees from watching porn or netflix on company devices/connections than they do about data exfiltration, or any risks posed by employees technical enough to know what phrases like "double encryption" or "TLS MITM evasion" mean.


IP level blocks will work fine for that

Blocking IPs hasn’t worked well since the 2000s: if you block CDNs, you’ll find out how many legitimate services use the same CDN.

Aren’t most TLS implementations still using things like OpenSSL in userspace? How would the kernel get access to the request?

A process with kernel-level permissions can patch into a userspace process and intercept calls. For example https://github.com/SebastienWae/sslsnoop

It's definitely annoying if you work in enterprise, but on the flip side: the fact that these enterprise requirements exist is the main reason that TLS certificate configurability is possible at all, without which it would be dramatically harder (or impossible) to reverse engineer or do security & privacy research on mobile apps, IoT, etc etc etc.

Enterprise control over company devices and user control over personal devices are not so different.

A few apps do use certificate pinning nowadays, which creates similar problems, but saying "you can never add your own MitM TLS cert" is not far from certificate pinning everything everywhere all the time. Good luck creating a new home assistant integration for your smart airfryer when you can't read any of the traffic from its app.

Imo: let's make it easier! Standardize TLS configuration for all tools, make easy cert configuration of devices a legal requirement (any smart device sold with hardcoded CA certificates is a device with a fixed end date, where the CA certs expire and it becomes a brick), guarantee user control over their own TLS trust, and provide good tools to check exactly who you're trusting (and expose that clearly to users). Not really practical of course (and opens all sorts of risky games with nation state interception as well) but there are upsides here as well.


> Standardize TLS configuration for all tools, make easy cert configuration of devices a legal requirement

I think this is the right idea (it’s configuring dozens of things which causes problems) but the other idea I’d consider is standardizing a key escrow mechanism where the session keys could be exported to a monitoring server. That avoids needing active interception with all of the problems that causes, and would pair well with a standardized OS-level warning that all communications are monitored by «name from the monitor cert» which the corporate types are required to display anyway.
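The closest existing mechanism to that key-escrow idea is the NSS key log format (the SSLKEYLOGFILE convention Wireshark can consume): the client exports per-session secrets, so a monitor can decrypt recorded traffic without any active interception. A minimal sketch in Python (keylog_filename exists since 3.8; the path is a placeholder):

    import ssl
    import urllib.request

    ctx = ssl.create_default_context()
    # Every connection made with this context appends its session
    # secrets (NSS key log format) to the file; shipping that file to
    # a monitoring server is the escrow.
    ctx.keylog_filename = "/var/log/tls-session-keys.log"

    urllib.request.urlopen("https://example.com", context=ctx)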


> Consider this - what is the likelihood of every certificate authority on the Internet having their private keys compromised simultaneously?

Considering that Cloudflare has managed to MitM a huge part of the internet, I'd say that probability is not just non-zero, but greater by a worrying margin.


That’s not what MITM means, and also misunderstands how CAs work. Cloudflare is a concern for how many people would be affected if there was another Cloudbleed but misstating their relationship with their customers isn’t going to accomplish anything.

How is that not a MITM? Just because it's the modern day CryptoAG?

Because it’s not an attack but rather a voluntary infrastructure choice by a company. We don’t say that Varnish is a MITM because it’s in front of my application, because it’s intentional and under my control. Misusing the term muddies the topic rather than adding clarity, and while there’s a very useful discussion about centralization or why Cloudflare’s most stringent customers might want to deploy their Keyless SSL service that discussion won’t happen if someone misuses the term.

Lame on user machines, but sometimes needed in a server environment. It's easier to detect if someone is hauling off with your database, as that will be the one connection where you can't see what's going on. Of course, solve one problem and introduce three more.

Isn't the accepted risk of losing all the digital keys equal to the risk of someone booting up the root CA and walking away with it? I can see that happening in a junta situation.

https://www.iana.org/dnssec/ceremonies


But I need to see what they are googling! /sarcasm

I work for a school. My traffic is not MITM'd, but the kids' traffic is, because we don't want them using their school-issued laptops to play games or go shopping, and you can't adequately block stuff if it's all encrypted.

Whitelists instead of blacklists?

This is really hard to do in practice: for example, if you block YouTube.com you just broke a ton of lesson plans which rely on students watching things like scientific materials from NASA, HHMI, etc. It turns your approval process into a source of political blowback unless it’s really fast, and it’s usually not a good idea to be in your users’ minds negatively all the time.

I'm pretty sure we'd still need to break TLS. Domain-level just isn't granular enough.

I still find it dumb that you even need to do that. Machines, especially for schools, should be able to have software policies set directly on them to limit such sites.

I don't know how configurable ChromeOS is, and whether you can e.g. force it to only use a specific network and network interface, or whether a student can connect it to a different network somehow, because it would be kinda pointless otherwise.


The school-issued laptops are all Macbooks. To be clear I'm not in the IT department so I don't know exactly what the setup is, but I see my students using their computers.

A VPN is involved, which is what made me assume they are doing TLS shenanigans—I guess I could theoretically be wrong, but it's definitely more granular than domain-level blocking, so I don't know how else it could work. The computers connect to this VPN automatically on startup. In the moments before the VPN connects, the internet does not work.

> Machines especially for schools should be able to have software policies set directly on them to limit such sites.

It's a good point—if you just did this client-side instead of on the network level, you wouldn't have to deal with TLS or anything. It seems clear to me that they aren't doing that (given the VPN) and it's not immediately obvious to me why.


I can't imagine the headache as a school when parents come yelling "why did you allow my child on site XXX?!"

I'm hoping this doesn't apply to things like Fiddler, because without the ability to see what's actually coming over the wire with an HTTPS connection, things can be a nightmare to debug sometimes.

Companies know that it's important to have Cybersecurity™. A vendor shows up with shiny brochures, and company is happy to purchase Cybersecurity™.

Now they don't have to worry about it anymore, they bought a product that sits in the corner and delivers Cybersecurity™


You've perfectly summarized the entire industry.

There's no actual market pressure to be secure, so nobody cares about threat modeling, cost/benefit of security solutions, etc. The only pressure in case of breach is political blame that you need to deflect. The point of a cybersecurity solution is to be there, remind you it is there, and allow you to deflect blame in case of disaster. Whether it actually increases security is merely a bonus side-effect.


I largely agree with the author. When our SOC wanted to implement TLS inspection I blocked it, mostly because we're not nearly at the security level for this, but also because it just fucks with so many things.

That said, we are not a business dealing with highly sensitive data or legal responsibilities surrounding data loss prevention.

If you are a business like that, say a bank or a hospital, you want to be able to block patient / customer data leaving your systems. You can do this by setting up a regex for a known format like patient numbers or bank account numbers.
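As a sketch of what such a rule amounts to in practice: the digit pattern plus the Luhn checksum that card numbers satisfy, since a bare digit regex alone would drown in false positives.

    import re

    def luhn_ok(digits):
        total, parity = 0, len(digits) % 2
        for i, ch in enumerate(digits):
            d = int(ch)
            if i % 2 == parity:   # double every second digit from the right
                d = d * 2 - 9 if d > 4 else d * 2
            total += d
        return total % 10 == 0

    CANDIDATE = re.compile(r"\b(?:\d[ -]?){13,19}\b")

    def find_pans(text):
        hits = []
        for m in CANDIDATE.finditer(text):
            digits = re.sub(r"[ -]", "", m.group())
            if 13 <= len(digits) <= 19 and luhn_ok(digits):
                hits.append(m.group())
        return hits

    print(find_pans("ticket 1234, card 4111 1111 1111 1111"))
    # ['4111 1111 1111 1111']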

This requires TLS inspection obviously.

Though this makes it harder to steal this data, not impossible.

It does however allow the C-suite to say they did everything they could to prevent it.


Oh and the software (Netskope) was only able to decrypt our traffic in the cloud.

Lmao not in a million fucking years will I upload our data to an American company in fucking plaintext.


Netskope and the other DLP tools at my last gig would completely lock up my network connection for around 30 seconds every hour or two while maxing out 100% of a core. Fun times. The issue was still there a year after I first encountered it so I have grave doubts about the competence of those vendors.

On the other hand I am sympathetic to the needs of big regulated orgs to show they're doing something to avoid data loss. It's a painful situation.


A company I worked for had their own endpoint which you can easily set up in Windows. Unfortunately, every TLS connection that does not use the Windows certificate store breaks because of that, so Maven, npm, et al. won't work.
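The usual workaround is to export the proxy's root CA to a PEM file and then point every tool at it individually: `npm config set cafile <pem>` for npm, `keytool -importcert` into the JVM truststore for Maven, and for Python something like this (the path is made up):

    import ssl
    import urllib.request

    # Exported corporate root CA (hypothetical path).
    CORP_CA = r"C:\certs\corp-root-ca.pem"

    # A context that trusts the corporate bundle, so requests made with it
    # accept the proxy's re-signed certificates.
    ctx = ssl.create_default_context(cafile=CORP_CA)
    with urllib.request.urlopen("https://pypi.org", context=ctx) as resp:
        print(resp.status)

One tool at a time, forever, on every machine.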

Honestly, the author is spot on about the normalisation problem. I've watched this play out at multiple organisations. You implement TLS inspection, spend ages getting certs deployed, and within six months `curl -k` is in half your runbooks because "it's just the corporate proxy again".
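The maddening part is that the non-broken fix is barely harder: `curl --cacert /path/to/corp-ca.pem` in the runbook, or `CURL_CA_BUNDLE` set machine-wide, and nobody ever needs `-k`. It just never gets documented.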

He's absolutely right about the architectural problems too: single points of failure, performance bottlenecks, and the complexity in cloud-native environments.

That said, it can be a genuinely valuable layer in your security arsenal when done properly. I've seen it catch real threats, such as malware C2 comms, credential phishing, data exfiltration attempts. These aren't theoretical; they happen daily. Combined with decent threat intelligence feeds and behavioural analytics, it does provide visibility that's hard to replicate elsewhere.

But, and this is a massive but, you can't half-arse it. If you're going to do TLS inspection, you need to actually commit:

Treat that internal CA like it's the crown jewels. HSMs, strict access controls, proper rotation schedules, full chains and sensible lifespans. The point about concentrated risk is bang on: you've turned thousands of distributed CA keys into one single target, so act like it. Run it like a proper CA, with proper key-signing ceremonies and all the safeguards.

Actually invest in proper cert distribution. Configuration management (Ansible/Salt/whatever), golden container base images with the CA bundle baked in (see the sketch after this list), MDM for endpoints, cloud-init for VMs. If you can't reliably push a cert bundle to your entire estate, you've got bigger problems than TLS inspection.

Train people properly on what errors are expected vs "drop everything and call security". Document the exceptions. Make reporting easy. Actually investigate when someone raises a TLS error they don't recognise. For devs, it needs to just work without them even thinking about it; then they never need to work around it. If they do, the system is busted.

Scope it ruthlessly. Not everything needs to go through the proxy. Developer workstations with proper EDR? Maybe exclude them. Production services with cert pinning? Route direct. Every blanket "intercept everything" policy I've seen has been a disaster. Particularly for end-users doing personal banking, medical stuff, or therapy sessions: do you really want IT/Sec seeing that?

Use it alongside modern defences, e.g. EDR, Zero Trust, behavioural analytics, CASB. It should be one layer in defence-in-depth, not your entire security strategy.

Build observability: you need metrics on what's being inspected, what's bypassing, failure rates, performance impact. If you can't measure it, you can't manage it.
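For the golden-image piece mentioned above, it can be as small as this (base image and paths are assumptions, adjust for your estate):

    FROM debian:bookworm-slim
    # Bake the corporate root CA into the system trust store so anything
    # built on this image trusts the inspecting proxy out of the box.
    COPY corp-root-ca.crt /usr/local/share/ca-certificates/corp-root-ca.crt
    RUN apt-get update \
        && apt-get install -y --no-install-recommends ca-certificates \
        && update-ca-certificates \
        && rm -rf /var/lib/apt/lists/*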

But yeah, the core criticism stands: even done well, it's a massive operational burden and it actively undermines trust in TLS. The failure modes are particularly insidious because you're training people to ignore the very warnings that are meant to protect them.

The real question isn't "TLS inspection: yes or no?" It's: "Do we have the organisational maturity, resources, and commitment to do this properly?" If you're not in a regulated industry or don't have dedicated security teams and mature infrastructure practices, just don't bother. But if you must do it, and plenty of organisations genuinely must, then do it properly or don't do it at all.


Hallelujah!

But I have to say, big regulated orgs are often not competent enough to do things this (the right) way, yet don't have the option of not doing it at all.


Personally, I'm happy that I can MITM my Docker when it wants to pull gigs of images from upstream for the 1000th time, and just serve them from a local OCI cache server instead.

You don't need to MITM Docker for this; you can just configure your containerd or equivalent backend properly.
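For plain Docker, for example, a pull-through cache in /etc/docker/daemon.json does it without touching TLS at all (mirror URL made up; note this particular knob only covers Docker Hub, other registries need per-registry config):

    {
      "registry-mirrors": ["https://oci-cache.internal:5000"]
    }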

How about that build script cloning the same git repo or wgetting the same release bundle over and over?
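
Same answer: git can rewrite remotes to a local mirror without any MITM, e.g. `git config --global url."https://git-mirror.internal/".insteadOf "https://github.com/"` (mirror host made up), and the wget can simply point at an internal artifact mirror instead of the upstream URL.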

I agree with the sentiment, but this part is complete bullshit:

> what is the likelihood of every certificate authority on the Internet having their private keys compromised simultaneously

Who cares? It's not like all CAs would have to be breached, just one. CA certs are not scoped, so the moment one CA gets breached, we're all fucked. Certificate Transparency helps, but AFAIK it's still not enforced everywhere yet.


zScaler is a load of shit, especially with some of its absolutely dumb policies like “malicious TLDs”.

Because the Framework laptop site at frame.work is malicious, of course.

God, I love CURLing crap from my workstation and getting, instead of the files I need, a bunch of mangled HTML telling me zScaler is going to scan my download.

Bonus points that it puts me in the wrong country: I'm closer to Montreal than to any American locations, so half the time I'm stuck browsing the web in Canadian French from my New York office.

Triple bonus points that I’m required to test speed at client sites and zScaler completely mangles our presentable results.

Quadruple bonus points that I put in "because I feel like it" into every elevation request I make on my corporate machine and our "cyber team" has literally never looked at elevation reports to ask what the hell I'm doing...


I'm glad someone is speaking out about this. When I first found out that the major Fortune 500 company I was working for did this, I was happy I'd never done any online banking or personal finance logins on the work computer. It's obscene, quite frankly. I have many criticisms of big-corpo security and how overbearing and paranoid it has become, to the point that developers can barely get a thing done at a lot of these big firms.

Plus, most of the internal threats are embezzlers who get away with it (for a while, at least). I did work for one place where a Chinese national attempted to make off with the entire customer database, but he did it by burning CDs (circa 2006).


Got acquired by a Fortune 500 and received a new laptop. Within the first hour I was seeing TLS errors everywhere except the browser. They'd half-baked their internal CA rollout, so the CA wasn't trusted properly.

By day two I started validating their setup. The CA literally had a typo in the company name; not a great sign.

A quick check with badssl.com showed that any self-signed(!) cert was being transparently MITM'ed and re-signed by their trusted corporate cert. Took them 40 days to fix it.
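If you want to reproduce the check, a minimal version (validation deliberately off, since the point is to see whatever cert is presented):

    import ssl

    # Fetch the certificate presented for a deliberately self-signed test
    # site. On a sane network you get badssl's own self-signed cert; behind
    # a re-signing proxy the issuer is your corporate CA instead.
    pem = ssl.get_server_certificate(("self-signed.badssl.com", 443))
    print(pem)  # pipe into `openssl x509 -noout -issuer` to read the issuer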

Another fun side-effect of this is that devs just turn off TLS verification, so the codebase fills up with `curl -k`, `verify_mode = VERIFY_NONE`, `ServerCertificateValidationCallback = () => true`, ... Exactly the thing you want to see at a big fintech company /s


I've experienced similar. It has definitely made me less enthusiastic about working for any of those fools ever again. It's all just an exercise in mediocrity. The illiterate emails people send out are even worse; I swear that a lot of US-born adults are functionally illiterate.

Hey, allowing your employees to have secure connections to websites shows up in red in some Excel spreadsheet. We can't have Excel spreadsheets showing red in fintech. /s


