Hacker News

One implementation detail about QUIC that I was surprised by was that it requires TLS. That's great for improving security on the public Internet, but it seems like it adds complexity and CPU overhead if you're running on something like an internal WireGuard network. Overall, though, it's a minor complaint. I did like how they split the QUIC and HTTP/3 protocols apart from one another.


My experience as former technical lead of a major HTTP/3 deployment is that while TLS certainly adds CPU overhead, it is nowhere near the biggest cost; the overhead of QUIC itself compared to TCP+TLS matters far more.

QUIC in general is far less efficient than TCP+TLS. Very optimized implementations require about 2x the CPU for bulk data transfers; more typical ones (and ones running on operating systems that don't support UDP segmentation offloads) require about 5x. Of that CPU overhead, only a small part is crypto. Most CPU time is spent in the OS networking stack processing the tiny, MTU-sized packets. In an optimized implementation, where crypto has a bigger relative impact, you might see around 30% of CPU time being spent in crypto operations in a profile when using hardware-accelerated AES (ChaCha20 is worse). Which means one could gain that amount of CPU back for other things in a cryptoless QUIC. In a less network-optimized deployment it would only be about 10%.
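To make the per-packet cost concrete, here's a back-of-envelope sketch comparing how many send syscalls a 1 GB bulk transfer needs. All constants are illustrative assumptions, not measurements from any real deployment:

```python
# Back-of-envelope: why tiny packets dominate QUIC's CPU cost.
# All constants below are assumptions for illustration, not measurements.

BULK_BYTES = 1_000_000_000     # 1 GB transfer
QUIC_PAYLOAD = 1_200           # typical QUIC packet size that fits common MTUs
GSO_BATCH = 64                 # segments per sendmsg with Linux UDP GSO
TCP_WRITE = 64 * 1024          # a typical large write on a TCP socket (TSO segments it)

pkts = BULK_BYTES // QUIC_PAYLOAD
calls_no_gso = pkts                        # one sendmsg per packet
calls_gso = -(-pkts // GSO_BATCH)          # ceil: one syscall per GSO batch
calls_tcp = -(-BULK_BYTES // TCP_WRITE)    # ceil: one write per 64 KiB chunk

print(f"QUIC without GSO: {calls_no_gso:,} syscalls")   # ~833,333
print(f"QUIC with GSO:    {calls_gso:,} syscalls")      # ~13,021
print(f"TCP:              {calls_tcp:,} syscalls")      # ~15,259
```

Even with GSO the kernel still builds and routes each individual packet, which is where much of the remaining cost goes.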

What I can understand, however, is people not wanting to deal with the complexity of issuing, deploying, and rotating certificates for internal deployments that are already secured by other means like WireGuard. It can be a concern - but on the other hand, tools like the k8s cert-manager already simplify the process for those environments. And of course one needs to consider whether QUIC is the right tool for those environments anyway - plain TCP has lots of strengths too.


What kind of profiler can tell in which areas the CPU is burning cycles?

Like, just mapping a bunch of known functions and timing them? Then every "ssl_" function is classified as crypto time, and every "net_" function as networking? (Or something like that?)


perf can do the job on Linux.

Networking cost will be within callstacks for the system calls that send and receive packets (sendmsg, sendmmsg, recvmsg, recvmmsg).

Crypto cost will show up in functions named after the crypto primitives being used. They typically won't have ssl_ in the name, because QUIC implementations directly use lower-level primitives - e.g. those exposed by libcrypto/ring/etc.
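A rough sketch of the classification idea described above: bucket each profile sample by matching its call-stack frame names against known prefixes. All function names and prefixes here are made up for illustration; a real profile's symbols depend on the TLS/QUIC libraries in use:

```python
# Sketch of bucketing profile samples by call-stack frame names.
# All function names and prefixes are illustrative, not from a real profile.
from collections import Counter

CATEGORIES = {
    "crypto": ("aes_", "chacha20_", "poly1305_", "EVP_", "ring::aead"),
    "net":    ("sendmsg", "recvmsg", "sendmmsg", "recvmmsg", "udp_", "ip_"),
}

def classify(stack):
    """Attribute a sample to the category of its first recognizable frame."""
    for frame in stack:                         # deepest frame first
        for cat, prefixes in CATEGORIES.items():
            if any(p in frame for p in prefixes):
                return cat
    return "other"

# Toy samples in the shape you'd get after parsing `perf script` output.
samples = [
    ["aes_gcm_encrypt", "seal_packet", "quinn::send"],
    ["udp_sendmsg", "sendmsg", "quinn::send"],
    ["memcpy", "assemble_frame", "quinn::send"],
]
print(Counter(classify(s) for s in samples))
# -> Counter({'crypto': 1, 'net': 1, 'other': 1})
```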

Here's one example of an old profile using Quinn:

https://gist.githubusercontent.com/Matthias247/47dc290dde72e...

It clearly shows the networking and crypto parts (here using ChaCha20). However, don't read too much into the actual values in this graph, since the profile is nearly 2 years old, uses ChaCha20 instead of AES (much more expensive), and used loopback networking (cheaper than the real thing).


Intel vTune and AMD uProf, as well as opensource tools like hwpmc for FreeBSD and perf for Linux.


My concern long-term is that browsers will simply drop other options.

And keep in mind that "internal deployments" also include home servers and such, where it all makes even less sense.


It does make sense, though. Our home networks aren't safe these days, when random apps on your phone and random websites in your web browser can just make requests to random hosts inside your network, possibly exploit your router, and then snoop on traffic.


The precedent for that was set by HTTP/2; while the spec didn't require it (Snowden happened after that decision was made), all of the implementations did.


I believe GP is talking about the lower level QUIC protocol. That’s like TCP requiring TLS.

A tradeoff would have been QUIC not requiring it, but the higher-level HTTP/3 protocol requiring it (if that's possible, idk).


TLS delivers three things, Confidentiality, Integrity and Authentication. But intuitively these three things are what people expect their transport protocol to do anyway, it's actually surprising as a user that TCP doesn't really bother providing say, Integrity. "Oh, yeah, your data might be arbitrarily changed by the time it is delivered". Er, what?
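To make the integrity point concrete: TCP's only integrity check is a 16-bit ones'-complement checksum, which is blind to reorderings of the payload's 16-bit words. A toy sketch (illustrative payloads, not a real TCP segment):

```python
# Toy demonstration: the Internet checksum is a 16-bit ones'-complement sum,
# so reordering the 16-bit words of a payload leaves the checksum unchanged
# and the corruption goes undetected.

def inet_checksum(data: bytes) -> int:
    """RFC 1071-style Internet checksum over `data`."""
    if len(data) % 2:
        data += b"\x00"                       # pad to an even length
    s = sum(int.from_bytes(data[i:i + 2], "big") for i in range(0, len(data), 2))
    while s >> 16:                            # fold carries back in
        s = (s & 0xFFFF) + (s >> 16)
    return ~s & 0xFFFF

original = b"A=01B=99"
tampered = b"A=99B=01"   # the two amounts swapped: same 16-bit words, reordered

assert original != tampered
assert inet_checksum(original) == inet_checksum(tampered)
print("checksums match despite tampering")
```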

BCP 188 "Pervasive Monitoring Is an Attack" says the Internet should design new systems to resist such monitoring, and so offering these intuitive properties for a new transport protocol made sense. And that's what QUIC is. In a BCP 188 universe it doesn't make sense to ship new protocols that will be rendered useless by the surveillance apparatus.


> "it's actually surprising as a user that TCP doesn't really bother providing say, Integrity"

TCP is a transport-layer protocol (ref the OSI model). The responsibilities of a transport-layer are neatly summarized here: https://en.wikipedia.org/wiki/Transport_layer

The OSI model is composed of layers of abstraction, each building on top of the one below. Making TCP (a transport-layer protocol) depend on TLS (an application-layer protocol, ref https://en.wikipedia.org/wiki/Application_layer) is completely backwards, and makes absolutely no sense when considering the model.

The OSI model has been around for a very long while. If you have a degree in CS, or have studied networking in university - chances are you've had to learn about the OSI model. There's no reason to throw that away now, or reinvent the wheel.


> The OSI model has been around for a very long while.

Even imagining that for some reason I wasn't aware of that, why would it be relevant?

> If you have a degree in CS, or have studied networking in university - chances are you've had to learn about the OSI model.

And the waterfall software development model. A bunch of long-obsolete data structures. Open Hypermedia (remember that? No? That's OK, it doesn't matter any more).

What's happened here is that you've privileged a bad model that somebody probably taught you out of a textbook (hopefully while grimacing, since this is useless "information") over the real world experience of dozens of really smart people who work with actual networking and designed QUIC.

Maybe, if I'm giving you benefit of the doubt you've assumed "user" somehow means "undergraduate who is studying the OSI model" but it doesn't - billions of people use the Internet, far more than will study any degree, let alone Computer Science. And it's quite reasonable for them to expect their communications to have these three properties.

If your preferred model insists that we can't have security until the application layer then your model is wrong, just as surely as if you have a model of the Atom which assumes it's a solid ball of something (the plum pudding model, as with OSI perhaps some undergraduates were taught this model between when it was proposed and when experiments showed it's just wrong).


I have absolutely no idea what anything you just wrote means. Your ramblings make no sense whatsoever, and they lead towards no conclusions. Get over yourself.

> "What's happened here is that you've privileged a bad model that somebody probably taught you out of a textbook"

So you're dismissing the OSI model as a "bad model" that "somebody taught me", yet there's this other shiny thing developed by "dozens of really smart people" and that's clearly the way to go because those people "work with actual networking".

> "And it's quite reasonable for them to expect their communications to have these three properties."

Their communications are clearly secured and tamper-proof already today, despite not using QUIC. How do you square that?

The security happens at the application layer, using mechanisms such as TLS. Additional security mechanisms can clearly come on-top. No need to bake everything into a single transport-layer protocol when there's endless flexibility available by layering the protocols on-top of each other.

> "If your preferred model insists that we can't have security until the application layer then your model is wrong"

How about instead of throwing fits about some vague ideas of "security" you actually provide concrete examples of what it is you're talking about, so that we can have a meaningful and constructive conversation about technology?

All I was saying is that there's no reason to introduce more complexity into the transport layer, since "security" is already handled by the application layer (on a per-application basis). It's unclear whether a single "security" model even fits into the transport layer, which should be as agnostic and lightweight as possible (hence its original intention: to facilitate the transfer of information between endpoints, and leave the rest to whatever consumes that information).


> All I was saying is that there's no reason to introduce more complexity into the transport layer

But there is, and I explained what that reason was. The real world is under no obligation to faithfully copy your OSI Model, on the contrary, the fact the OSI Model doesn't resemble the real world is a good reason to abandon it.


> "But there is, and I explained what that reason was."

If your reason is "security" (whatever that means, you won't define that either), then I too, explained how that is guaranteed by application-layer protocols already today - so it's unclear why changing the transport layer is needed. Again, you're not saying anything concrete and keep handwaving - I don't think you really want to (or can) have a conversation, really.


I studied networking in the university, 20 years ago. We did study OSI as well as TCP/IP and the rest of the actual Internet stack, and it was blatantly obvious that the latter doesn't really conform to the former - protocols straddling boundaries etc. For example, TCP is not a strictly transport-layer protocol, since it also handles sessions.

When asked, our prof readily admitted that the OSI model was one of those design-by-committee experiments in purity that quickly broke down IRL.


TCP/IP does not use the OSI model. Stacking layers on top of each other promotes inefficiency and poor security, and is one reason why TCP/IP won over the OSI-recommended protocols.


> "Stacking layers on top of each other promotes inefficiency and poor security"

Yet that's exactly how the internet works today.


The advantage of merging TCP and TLS is improved performance and security/privacy (security/privacy improves because fewer session parameters can be read or modified by third parties).

So now what is the benefit of sticking to the OSI model, so that we can evaluate the pros and cons?
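The performance half of that claim can be made concrete with some round-trip arithmetic (assuming TLS 1.3 and no 0-RTT resumption in either case; the RTT value is an arbitrary illustrative assumption):

```python
# Rough handshake math, assuming TLS 1.3 and no 0-RTT resumption on either side.
# TCP+TLS pays for the TCP handshake (1 RTT) and then the TLS handshake (1 RTT)
# before application data flows; QUIC folds transport and crypto into one handshake.

RTT_MS = 50                    # assumed client<->server round-trip time

tcp_tls_rtts = 1 + 1           # TCP SYN/SYN-ACK, then TLS 1.3 handshake
quic_rtts = 1                  # combined QUIC transport + TLS 1.3 handshake

print(f"TCP+TLS 1.3: first app byte after ~{tcp_tls_rtts * RTT_MS} ms")
print(f"QUIC:        first app byte after ~{quic_rtts * RTT_MS} ms")
```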


It's insane to expect the opposite: that traffic in my own network will need to keep reaching out to certificate authorities outside to validate packages from one host to another.

If you don't understand why these 3 things sit on top of TCP, well, never mind - I was going to say you shouldn't be designing networks, but you might already be on the QUIC steering committee.

Most committees now are a joke, so that some Googler middle manager makes it to jr. director. Sigh.


The use of TLS for QUIC does not imply or require the use of the Web PKI which is what I assume you're thinking of by "certificate authorities outside to validate packages".


> "The use of TLS for QUIC does not imply or require the use of the Web PKI"

Handling certificate revocations (which would be needed to "ensure security") does indeed imply some way to check for revocations in a timely manner. The revocation lists themselves can be tampered with.


You've jumped from assuming the Web PKI, which isn't required, to assuming online revocation checks, which is even more not required.


So how does your imaginary version of a transport layer guarantee a message can't be tampered with if it trusts keys which have been revoked?


Web PKI is not the only way to revoke keys.


> "Web PKI is not the only way to revoke keys."

You're not answering my question (we both know why), and I never mentioned anything about WebPKI in any of my comments anyways.


HTTPS does not address state-level threats. Your browser trusts a load of entities by default, any of which could be forced to meddle with your data.

Lenovo Superfish exposed the limitations of the CA system.


I think saying it "doesn't address" these threats is a bit extreme.

It significantly reduces the attack surface (Certificate Authorities vs every ISP), it makes it a lot harder for state actors to pull off those attacks deniably with a gag order, and it makes it a lot easier for an informed but non-expert consumer to pick a secure-by-default solution.


Lenovo Superfish was a local exploit; the software was installed at the factory. It could have been any sort of root kit or other client software. Once an attacker has control of your local device, lots of things are possible. It’s true that HTTPS won’t defend against local attacks, but that doesn’t really seem like a fair criticism since that is not what it is supposed to do.

The defense against compromised certificate authorities starts with platforms and browser makers. They demand that CAs implement certificate transparency logs to be included in root stores.

They also monitor CT logs, as do most large site operators. Facebook for example does not run an OS platform or browser, but has a robust CT monitoring program.

So if one of the random little CAs in the root store of your browser issues a rogue cert for “google.com”, it will be logged and seen, and that CA will risk getting kicked out of the root store. That’s what happened to Symantec, which was not a small CA.

In general it is safer and quieter for bad guys to target client devices with attacks like Pegasus, than systemic actors like entire CAs.


> So if one of the random little CAs in the root store of your browser issues a rogue cert for “google.com”, it will be logged and seen

The victim might be the only one getting a collision as governments target them (and no security researchers get the compromised site + public key), and the Superfish fiasco shows that a collision is simply ignored by the browser.


I think it makes sense, I would just love to improve ergonomics around getting certificates for internal services.

For example, I use the TP-Link Omada Wi-Fi access points and have a local hardware controller for them. The hardware controller can have a static IPv4 (ugh for no IPv6 support) but since it doesn’t support Let’s Encrypt its only way to get a certificate is to upload one via the web GUI. I can of course create my own CA and install my own cert with a long expiration date but then that would mean installing my CA on a bunch of devices from which I might access the controller.

Maybe the solution is something like having your DHCP box also run an ACME server scoped just to your local domain, and having that CA be trusted for your local domain by all your devices that get their IP from it, via a DHCPv4/v6 option.



