I got curious about how many wheelbarrows of cash $20bn actually is.
Two ways to think about it: weight vs volume.
By weight (assuming all $100 bills):
$20,000,000,000 / $100 = 200,000,000 bills
Each bill is roughly 1g, so total mass is ~200,000 kg
A typical builder’s wheelbarrow can take about 100 kg before it becomes unmanageable
200,000 kg total
/ 100 kg per wheelbarrow
≈ 2,000 wheelbarrows (weight limit)
By volume:
A $100 bill is ~6.14" × 2.61" × 0.11 mm, which comes out to about 102 cm³ per bill
200,000,000 bills × 102 cm³ ≈ 20,400 m³ of cash
A standard wheelbarrow holds around 0.08 m³ (80 litres)
20,400 m³ total
/ 0.08 m³ per wheelbarrow
≈ 255,000 wheelbarrows (volume limit)
So,
About 2,000 wheelbarrows if you only care about weight
About 255,000 wheelbarrows if you actually have to fit the cash in
So the limiting factor isn’t how heavy the money is; it’s that the physical volume of the cash is absurd. At this scale, $20bn in $100s is effectively a warehouse, not a stack.
I think your volume per bill should be 6.14 * 0.0254 * 2.61 * 0.0254 * 0.00011 ≈ 1.137e-6 m³. That means about 227 m³ total volume, or about 2800 wheelbarrows.
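A quick sketch to check both limits with the corrected per-bill volume (same assumptions as above: 1 g per bill, 100 kg payload, 0.08 m³ per barrow):

```python
# Sanity check of both estimates, using the corrected bill volume.
IN_TO_M = 0.0254

bills = 20_000_000_000 // 100                 # 200,000,000 $100 bills
mass_kg = bills * 0.001                       # ~1 g per bill -> 200,000 kg
by_weight = mass_kg / 100                     # 100 kg payload per barrow

bill_m3 = (6.14 * IN_TO_M) * (2.61 * IN_TO_M) * 0.00011  # ~1.14e-6 m^3
total_m3 = bills * bill_m3                    # ~227 m^3
by_volume = total_m3 / 0.08                   # 80 L per barrow

print(round(by_weight), round(total_m3), round(by_volume))  # 2000 227 2843
```

So volume still dominates, but by a factor of ~1.4 rather than ~127.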
It's the classic strategy of floating an extreme change, "listening to feedback", and then coming back later with the price they intended to charge all along.
Honestly, the author is spot on about the normalisation problem. I've watched this play out at multiple organisations. You implement TLS inspection, spend ages getting certs deployed, and within six months `curl -k` is in half your runbooks because "it's just the corporate proxy again".
He's also right about the architectural problems: single points of failure, performance bottlenecks, and the complexity in cloud-native environments.
That said, it can be a genuinely valuable layer in your security arsenal when done properly. I've seen it catch real threats: malware C2 comms, credential phishing, data exfiltration attempts. These aren't theoretical; they happen daily. Combined with decent threat intelligence feeds and behavioural analytics, it provides visibility that's hard to replicate elsewhere.
But, and this is a massive but, you can't half-arse it. If you're going to do TLS inspection, you need to actually commit:
Treat that internal CA like it's the crown jewels. HSMs, strict access controls, proper rotation schedules, a full chain of trust, and sensible lifespans. The point about concentrated risk is bang on: you've turned thousands of distributed CA keys into one single target. So act like it, and run it like a proper CA, with proper key-signing ceremonies and all the other safeguards.
Actually invest in proper cert distribution. Configuration management (Ansible/Salt/whatever), golden container base images with the CA bundle baked in, MDM for endpoints, cloud-init for VMs. If you can't reliably push a cert bundle to your entire estate, you've got bigger problems than TLS inspection.
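On a host where that distribution has worked, "just works" means extending the trust store rather than disabling verification. A minimal Python-stdlib sketch (the bundle path is hypothetical, wherever your config management drops the corporate root CA):

```python
# Sketch: trust the corporate root CA explicitly instead of turning
# verification off (the `curl -k` habit). Path is hypothetical.
import os
import ssl

CORPORATE_CA_BUNDLE = "/etc/pki/corp/root-ca.pem"  # hypothetical path

ctx = ssl.create_default_context()  # system roots, hostname checks on
if os.path.exists(CORPORATE_CA_BUNDLE):
    # Adds the corporate CA alongside the system roots rather than
    # replacing them, so non-proxied destinations still validate too.
    ctx.load_verify_locations(cafile=CORPORATE_CA_BUNDLE)

# Hand `ctx` to urllib/http.client: verification stays on, and the
# proxy's re-signed certs validate like any other.
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True
```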
Train people properly on what errors are expected vs "drop everything and call security". Document the exceptions. Make reporting easy. Actually investigate when someone raises a TLS error they don't recognise. For devs, it needs to just work without them even thinking about it; then they never need to work around it. If they do, the system is busted.
Scope it ruthlessly. Not everything needs to go through the proxy. Developer workstations with proper EDR? Maybe exclude them. Production services with cert pinning? Route direct. Every blanket "intercept everything" policy I've seen has been a disaster. And particularly for end-users doing personal banking, medical stuff, or therapy sessions: do you really want IT/Sec seeing that?
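A rough sketch of what that scoping can look like as policy code; the hostname patterns and categories are invented for illustration:

```python
# Toy bypass policy: sensitive categories and pinned services route
# direct, everything else gets inspected. Patterns are made up.
from fnmatch import fnmatch

BYPASS_PATTERNS = [
    "*.mybank.example",            # personal banking
    "*.health.example",            # medical / therapy
    "pinned-api.vendor.example",   # service known to pin its certs
]

def should_inspect(hostname: str) -> bool:
    """Return True if traffic to this host goes through the proxy."""
    return not any(fnmatch(hostname, pattern) for pattern in BYPASS_PATTERNS)

print(should_inspect("portal.mybank.example"))    # False: route direct
print(should_inspect("updates.vendor2.example"))  # True: inspect
```

The important part is that the bypass list is explicit, reviewable, and version-controlled, not a pile of one-off proxy exceptions.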
Use it alongside modern defences, e.g. EDR, Zero Trust, behavioural analytics, CASB. It should be one layer in defence-in-depth, not your entire security strategy.
Build observability: you need metrics on what's being inspected, what's bypassing, failure rates, performance impact. If you can't measure it, you can't manage it.
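Even a toy version makes the point: if the proxy emits per-outcome events, a failure rate you can alert on is one division away. Event names and counts here are invented:

```python
# Toy outcome counters for an inspection proxy: inspected vs bypassed
# vs failed, plus the failure rate you'd actually alert on.
from collections import Counter

events = Counter()
for outcome in (["inspected"] * 970
                + ["bypassed"] * 25
                + ["handshake_failed"] * 5):
    events[outcome] += 1

failure_rate = events["handshake_failed"] / sum(events.values())
print(events["bypassed"], round(failure_rate, 3))  # 25 0.005
```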
But yeah, the core criticism stands: even done well, it's a massive operational burden and it actively undermines trust in TLS. The failure modes are particularly insidious because you're training people to ignore the very warnings that are meant to protect them.
The real question isn't "TLS inspection: yes or no?" It's: "Do we have the organisational maturity, resources, and commitment to do this properly?" If you're not in a regulated industry or don't have dedicated security teams and mature infrastructure practices, just don't bother. But if you must do it, and plenty of organisations genuinely must, then do it properly or don't do it at all.
It's a barrier not because it is hard, but because people are not familiar with it. Ask a non-technical user to edit their display settings through the GUI and they'll be equally flummoxed.
Interestingly, I feel the polar opposite to you. Digging through a clunky GUI, multiple levels deep, to find a tick box is annoying when I can just run a one-liner to achieve what I need. I suppose different strokes...
Not just that, but technologies which took me many months or even years to become an expert at, the latest generation of engineers seem to be able to pick up in weeks. It's scary how fast the world is moving.
One mainly, although not always, harms individual wellbeing, whilst the other causes mass death and lines on the map to change.
Hopefully you can work out which is which.