
I don't really see how rootless containers change anything at all. You're still "just" one kernel privilege escalation away from breaking out. The level of isolation is much better in virtual machines, and these days the performance penalty compared to containers is small.

Virtual machine images are a bit heavier, since you need a kernel and whatnot, but the overhead is mostly negligible. With memory deduplication and similar techniques, the memory footprint of virtual machines gets very close to that of containers. You do have the cold start issue with microvms, but these days they generally start in under a couple of hundred milliseconds, not that far off your typical container.



Memory de-dup is computationally expensive, and the KSM hit rate is generally much worse than people tend to expect - not to mention that it comes with its own security issues. I agree that the security tradeoffs need to be taken seriously, but the real-world performance/efficiency considerations are definitely not negligible at scale.
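For what it's worth, you can check KSM's actual hit rate on a given host via the counters the kernel exposes under /sys/kernel/mm/ksm. A minimal sketch (assumes a Linux host with KSM compiled in, 4 KiB pages, and the counter semantics from the kernel's admin-guide docs: `pages_shared` is the number of deduplicated pages in use, `pages_sharing` is how many extra mappings point at them, i.e. the pages saved):

```python
from pathlib import Path
from typing import Optional

KSM_DIR = Path("/sys/kernel/mm/ksm")
PAGE_SIZE = 4096  # assumes 4 KiB pages

def ksm_savings(pages_shared: int, pages_sharing: int,
                page_size: int = PAGE_SIZE) -> dict:
    """Summarize KSM dedup effectiveness from its two main counters."""
    sharers = pages_sharing / pages_shared if pages_shared else 0.0
    return {
        "saved_bytes": pages_sharing * page_size,  # copies we didn't allocate
        "sharers_per_page": sharers,               # average dedup factor
    }

def read_ksm_counters(ksm_dir: Path = KSM_DIR) -> Optional[dict]:
    """Read live counters; returns None when KSM isn't available."""
    try:
        return {name: int((ksm_dir / name).read_text())
                for name in ("pages_shared", "pages_sharing")}
    except OSError:
        return None

if __name__ == "__main__":
    counters = read_ksm_counters()
    if counters is None:
        print("KSM not available on this host")
    else:
        print(ksm_savings(**counters))
```

If `sharers_per_page` hovers near 1 and `saved_bytes` is small relative to guest memory, you're paying KSM's scan cost for very little dedup - which is the "worse than expected" hit rate in practice.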

There are also significant operational concerns. With containers you can just have your CI/CD system spit out a new signed image every N days and do fairly seamless A/B rollouts. With VMs that's a lot harder. You may be able to emulate some of this by building some sort of static microvm, but there's a LOT of complexity you'll need to handle (e.g. networking config, OS updates, debugging access) that is going to be some combination of flaky and hard to manage.

I by no means disagree with the security points, but people in these replies are overstating the case for replacing containers with VMs.


And these overheads are even smaller if you use unikernels, as per the paper. E.g., cold starts of a few milliseconds, depending on the app and the size of the image.


I'm struggling a little bit to grasp all the concepts when we start talking about unikernels, wasm and so on. Hopefully that's just a sign of how immature the field still is, and not a sign of my mental decline. But on paper (as I understand it) it looks /so cool/.


Unikernels aren't too complicated conceptually. They're more or less a kernel stripped down to the bare minimum required by a single application. The complete bundle of the minimal kernel and application together is called a unikernel. The uni- prefix means one, as in the kernel only supports one userspace application, instead of something like Linux, which supports many. The benefits, as mentioned in the paper and in this thread, are that you can run it as a VM, since it contains its own operating system, unlike a container, which depends on the host operating system. Also, they boot very quickly.


Agree with epr's definition of a unikernel (and no, no mental decline on your part, this isn't always well defined).

First off, a unikernel is a virtual machine, albeit a pretty specialized one. They're often based on modular operating systems (e.g., Unikraft), in order to be able to easily pick the OS modules needed for each application, at compile time. You can think of it as a VM that has, say, an NGINX-specific distro, all the way down to the OS kernel modules.

VMs provide what's called hardware-level isolation, running on top of a hypervisor like KVM, Xen or Hyper-V. Wasm runs higher up the stack, in user space, and provides what's called language-level isolation. You could even create a wasm unikernel, that is, a specialized VM that runs wasm inside (e.g., see https://docs.kraft.cloud/guides/wazero/). Generally speaking, the higher you go up the stack, the more code you're running and the higher the chances of a vulnerability.



