
Why is Docker to blame?


It's subjective I guess, but I feel as though containerisation has greatly supported the large cloud vendors' desire to subvert the more common model of computing... Like, before, your server was a computer, much like your desktop machine, and you programmed it much like your desktop machine.

But now, people are quite happy to put their app in a Docker container and outsource all design and architecture decisions pertaining to data storage and performance.

And given that, the likes of ECS, Dynamo, Redshift, etc. are a somewhat reasonable answer. It's much easier to offer a distinct proposition around that state of affairs than around, say, a market based solely on EC2-esque VMs.

What I did not like, but absolutely expected, was this lurch towards near enough standardising one specific vendor's model. We're in quite a strange place atm, where AWS-specific knowledge might actually have a slightly higher value than traditional DevOps skills for many organisations.

Felt like this all happened both at the speed of light, and in slow motion, at the same time.


Containers let me essentially build those same machines, but sized to the actual requirements of a particular system. So instead of 10 machines I can build 1. I then don't need to upgrade that machine if my service changes.

It's also more resilient, because I can trash a container and load up a new one with low overhead. I can't really do that with a full machine. It also gives some more security by sandboxing.
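
For concreteness, a minimal sketch of that trash-and-restart workflow using the Docker SDK for Python (the image name and port mapping are just placeholders; in practice this might equally be a couple of docker CLI commands):

    # Throwaway-container lifecycle via the Docker SDK for Python (pip install docker).
    import docker

    client = docker.from_env()

    # Run a service container in the background.
    cache = client.containers.run(
        "redis:7", name="cache", detach=True, ports={"6379/tcp": 6379}
    )

    # If it misbehaves, trash it and bring up a fresh one in seconds --
    # no re-imaging or rebuilding of a whole machine.
    cache.stop()
    cache.remove()
    cache = client.containers.run(
        "redis:7", name="cache", detach=True, ports={"6379/tcp": 6379}
    )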

This does lead to laziness by programmers, accelerated by myopic management. "It works", except when it doesn't. It's easier to say you just need to restart the container than to figure out the actual issue.

But I'm not sure what that has to do with cloud. You'd do the same thing self-hosting. Probably save money too. Though I'm frequently confused why people don't do both: self-host and host in the cloud. That's how you create resilience. Though to be truly resilient you also need to fix problems rather than just restart.

I feel like our industry wants to move fast but without direction. It's like we know velocity matters, but since it's easier to read the speedometer we pretend speed and velocity are the same thing. So "fast and slow at the same time" makes sense: fast by the magnitude of the vector, slow if you're measuring progress in the intended direction.


Containers have nothing to do with storage. They are completely orthogonal to it (you can use Dynamo or Redshift from EC2), and many people run Docker directly on VMs. Plenty of us still spend lots of time thinking about storage and state even with containers.

Containers allow me to outsource host management. I gladly spend far less time troubleshooting cloud-init, SSH, process managers, and logging/metrics agents.


> Containers have nothing to do with storage. They are completely orthogonal to storage

Exactly.

And sure, you can use S3/Dynamo/Aurora from an EC2 box, but what would be the point of that? Just get the app running in a container, and we can look into infrastructure later.

It's a very common refrain. That's why I believe Docker is strongly linked to the development of these proprietary, cloud-based models of computing that place containerisation at the heart of an ecosystem which bastardises the classic idea of a 'server'.

The existence of S3 is one good result of this. IAM, on the other hand, can die in a dumpster fire. Though it won't...


> And sure, you can use S3/Dynamo/Aurora from an EC2 box, but what would be the point of that?

An easy API? Easy replication / failover / backups? I would absolutely use S3 even with EC2.
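
To make the "easy API" point concrete, a minimal boto3 sketch that works the same from an EC2 box as from anywhere else (bucket and key names are made up):

    # Object storage without managing filesystems, replication, or backup jobs.
    import boto3

    s3 = boto3.client("s3")  # credentials picked up from the instance role or env

    s3.upload_file("report.csv", "my-example-bucket", "reports/2024/report.csv")
    s3.download_file("my-example-bucket", "reports/2024/report.csv", "/tmp/report.csv")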

> IAM, on the other hand, can die in a dumpster fire.

I’m no great fan of AWS’s approach to IAM, but much of the pain is just the nature of fine-grained / least-privilege permissioning. On EC2 it’s more common to just grant broader permissions; IAM makes you think about least privilege, but you absolutely can grant admin for everything. And as far as a permissioning API goes, IAM is much cleaner/saner than Linux permissions.
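
A rough illustration of that trade-off with boto3 (role, policy, and bucket names are invented; this shows the shape of the choice, not a recommended policy):

    import json
    import boto3

    iam = boto3.client("iam")

    # Least privilege: this role may only read objects from one bucket.
    scoped_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::my-example-bucket/*",
        }],
    }
    iam.put_role_policy(
        RoleName="my-app-role",
        PolicyName="read-one-bucket",
        PolicyDocument=json.dumps(scoped_policy),
    )

    # Or the blunt instrument: attach the AWS-managed AdministratorAccess policy.
    iam.attach_role_policy(
        RoleName="my-app-role",
        PolicyArn="arn:aws:iam::aws:policy/AdministratorAccess",
    )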


I don't see how Docker makes that worse.

Before Docker you had things like Heroku and Amazon Elastic Beanstalk, with a much greater degree of lock-in than Docker.

ECS and its analogues on the other cloud providers have very little lock-in. You should be able to deploy your container to any provider or your own VM. I don't see what Dynamo and data storage have to do with that. If we were all on EC2 instances with no other services, wouldn't you still have to figure out how to move your data somewhere else?

Like I truly don't understand your argument here.


Containerization was basically a way to get rid of the "it works on my machine" problem, mainly differences in OS version and installed libraries. There are plenty of instances where program X will work on system A but not system B, while program Y works on system B but not A. Or X is supported on Red Hat/Ubuntu/etc. but you can't or don't want to build from source.

Even if that is not a problem, you avoid having to install the kitchen sink on your host and make sure everything is configured properly. Just get it working in a container, build an image, and spin it up when you need it. That leaves the host machine fairly clean.

You can run a bunch of services as containers within a single host. No cloud or k8s needed. docker-compose is sufficient for testing or smallish projects.
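
Sketching the same single-host idea with the Docker SDK for Python instead of a compose file (image names, ports, and the password are placeholders):

    import docker

    client = docker.from_env()

    # One private network, two services, one host -- no cloud or k8s involved.
    client.networks.create("appnet")

    client.containers.run(
        "postgres:16", name="db", detach=True, network="appnet",
        environment={"POSTGRES_PASSWORD": "example"},
    )
    client.containers.run(
        "my-app:latest", name="web", detach=True, network="appnet",
        ports={"8080/tcp": 8080},
    )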

Also, there is a security benefit: if the container is compromised, the problem is limited to that container, not the entire host.



