
what. docker isn’t the problem here. dbs on public subnets and the lack of monitoring for accidental db exposure are the actual issues.


> docker isn’t the problem here

I mean... if they weren't using docker it would have been fine, but because they used docker it wasn't fine. That reads like docker is the problem. That further layers could have mitigated it doesn't make docker not the problem.


Where does the buck stop? This was simply a few layers of bad configuration.

They didn't secure their database with auth/access control and they misconfigured docker.

They have a few things at their disposal:

- Using the docker-user chain to set firewall rules

- Running docker such that the default bind address for the port directive is 127.0.0.1 instead of 0.0.0.0. This puts a safety on the footgun.

- Explicitly setting the bind address of the port directive when bringing up the container.
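The last two options might look roughly like this (a sketch; the port number, interface name, and image are placeholders, and `DOCKER-USER` is the iptables chain Docker reserves for user-supplied rules):

```shell
# Publish a port bound to loopback only, instead of the default 0.0.0.0
docker run -d -p 127.0.0.1:5432:5432 postgres

# Or change the default bind address for ALL published ports,
# in /etc/docker/daemon.json:
#   { "ip": "127.0.0.1" }

# Or drop outside traffic to the published port in DOCKER-USER,
# which Docker evaluates before its own forwarding rules.
# Note: DNAT in PREROUTING has already rewritten the destination by the
# time FORWARD runs, so match the ORIGINAL port via conntrack:
iptables -I DOCKER-USER -i eth0 -p tcp -m conntrack --ctorigdstport 5432 -j DROP
```

Any one of these would have kept the container off the public interface even with the host firewall bypassed.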

Docker didn't come along, install itself, and open up the port. The sysadmin did. It's unfortunate they didn't know how it interacted with the firewall or how to properly configure it, but given that, should they really have been rolling it out to production systems?

They tested on prod and got a trial by fire.


> They didn't secure their database with auth/access control

True, although they had reason to believe that that was safe.

> they misconfigured docker.

Ah, no, that's where we disagree. They didn't configure it, docker shipped an insane default that bypasses existing security measures. There is absolutely no reason to expect that running a program in docker magically makes it ignore the system firewall.


The sysadmin is responsible for what they install and how it's configured. Docker didn't specify which ports to open, which container to run, the volume mounts, the image, etc. That's part of the config the sysadmin supplies when they spin it up. No matter how you slice it, it comes down to them not understanding what they were pushing to prod.

By the same token having an unsecured database is a bad default, so we can keep passing the buck around.


> True, although they had reason to believe that that was safe.

That's like the antithesis of layered security. There would be no point in layers if you assume a single layer will protect you.


If they had understood how docker works, it would've been fine, too. But because they didn't and used a software firewall as their only line of defense and didn't bother with authentication for the DB server, it wasn't fine.


> If they had understood how docker works, it would've been fine, too.

They shouldn't have needed to, for this.

> But because they didn't and used a software firewall as their only line of defense

Okay? That should have been safe. Like, sure, there are ways to add more layers, but that layer shouldn't have failed them.

> didn't bother with authentication for the DB server, it wasn't fine.

They shouldn't have needed to. Like yes, running a DB without authentication isn't a great idea, but they had good reason to believe that it wouldn't be exposed.


Running a server is a lot like doing electrical work. If you don't really know what you're doing and rely on vague intuition about how things should probably work, you might be in for a shock.


it wouldn’t have been fine—they didn’t realize the configuration mistake until their database had been dropped. there are plenty of scenarios that could have led to the exact same outcome. i’d argue it would have been a matter of time.

the problem is that it was possible to accidentally expose their credential-less db to the internet _at all_ and that they had no monitoring or tools in place to detect the misconfiguration. that’s a design flaw, and again, not a docker-specific problem.
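one cheap way to detect this kind of misconfiguration (a sketch; the port and hostname are placeholders) is to compare what docker claims to publish against what's actually reachable:

```shell
# Which addresses is Docker publishing on?
# "0.0.0.0:5432->5432/tcp" means world-reachable; "127.0.0.1:..." does not.
docker ps --format '{{.Names}}\t{{.Ports}}'

# What is actually listening on the host?
ss -tlnp

# And from a machine OUTSIDE the network, confirm the port is filtered:
nmap -p 5432 your.public.ip.example
```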


By that logic you could say that iptables/netfilter was the problem. Or Linux. Or maybe the IP protocol.


I don't think so; at worst, each of those is passive and might fail to make you more secure. Docker goes out of its way to add holes to existing security. It's like... if iptables decided to ship a feature that detected when it was running in AWS and "helpfully" automatically reconfigured your security groups to allow any traffic that was allowed in iptables, that would be the same.


I've posted this a bunch of times already, but that's not what happens. Docker does not override the firewall; it just uses forwarding, which kicks in before filtering (iptables has separate "nat" and "filter" tables). The host's firewall just doesn't apply to containers and VMs, because they are not listening on a port on the host.

Docker could not extend the host's firewall to containers without changing how iptables works.
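You can see this ordering on a host running Docker (the container address shown is illustrative):

```shell
# Published ports show up as DNAT rules in the nat table...
iptables -t nat -L DOCKER -n
# e.g.  DNAT  tcp  dpt:5432 to:172.17.0.2:5432

# ...so traffic to a container traverses the FORWARD chain, never INPUT,
# and Docker inserts its own jump rules at the top of FORWARD, ahead of
# any chains a tool like ufw hooks in:
iptables -L FORWARD -n --line-numbers
```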


> I posted that a bunch of times already but that's not what happens. Docker does not override the firewall, it just uses forwarding which kick in before filtering (iptables has separate "nat" and "filter" tables).

That's exactly what happens. Docker sticking its rules before the normal filters is exactly what I mean when I say it bypasses the firewall rules. Like... you're literally describing the implementation details of what I said it does.

> The host's firewall just doesn't apply to containers and VMs, because they are not listening on a port on the host.

They clearly are? If a docker container was listening but not on a port on the host's internet-facing interface (indirected though it may be), none of this would be a problem. The problem is precisely that if you have a host firewall rule that says "this port is blocked", docker will "helpfully" preempt that and connect that port to a container, which is a massive and unreasonable footgun.

> Docker could not extend the host's firewall to containers without changing how iptables work.

podman seems to manage fine, so I have trouble believing that.


I am not saying that it isn't a footgun, but it is not a conscious decision by Docker to "override" or "insert an allow rule" that actively bypasses restrictions if they exist. Rather, the situation is a result of implementing this the "naive" way: doing forwarding in the "nat" table that is meant for other hosts (i.e. treating containers as distinct hosts, which in a sense they are, since they have their own (virtual) network interfaces and port namespaces).

VirtualBox works the same way if you use bridged networking. So does KVM. And so does Podman; I am not sure why you thought otherwise (maybe there is a mode where it uses a userspace proxy rather than iptables? When running rootless, for example? That does not happen on my machine; ufw rules are ignored).


And running a database without authentication.


True, but even the author covered that. No one has mentioned Docker.



