
Honestly, the size doesn't matter.

I was once in the camp of small Docker images, but realized it's simply not worth the tradeoff: their only real upside is faster transfer of images.

However, that argument becomes pointless when using a proper CI/CD stack. As a developer, you don't normally upload images yourself; you push changes to GitHub, then Jenkins/Travis/whatever takes over, builds the image, and pushes it into production/staging/whatever. Since the CD tool of choice is usually also in the cloud, we don't have to worry about image size, nor do any of the CD vendors charge for data transfer.

I'd rather have bigger images (I base mine off Debian now; they used to be Alpine) and not have to worry about a lack of ported tools and libraries, than vice versa.
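To make the tradeoff concrete, here's a minimal sketch of the kind of Debian-based Dockerfile I mean; the image tag, package names, and binary are made up for illustration. On Alpine (musl libc), some of these packages would need ported builds or compilation from source, which is exactly the hassle being avoided:

```dockerfile
# Hypothetical service image; names are illustrative, not a real project.
FROM debian:bookworm-slim

# apt ships prebuilt packages for nearly everything; cleaning the
# apt lists afterwards keeps the layer from growing needlessly.
RUN apt-get update && apt-get install -y --no-install-recommends \
        ca-certificates curl libpq5 \
    && rm -rf /var/lib/apt/lists/*

COPY app /usr/local/bin/app
CMD ["app"]
```

The `bookworm-slim` variant already trims docs and locales, so the size gap to Alpine is smaller than the stereotype suggests.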



What if you need to release a hotfix and your image is 1 GB?


1. I push a hotfix to GitHub.

2. Jenkins (which is on Google Cloud) builds it, and it already has all the Docker layers cached from previous builds, so it's fast.

3. Jenkins pushes the image to the Google Cloud registry, which is almost instantaneous.

4. Kubernetes (also on Google Cloud) pulls the image and makes a new deployment.

No big deal. :)
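Roughly, the CI side of that flow boils down to commands like these; the project name, image tag, and deployment name are hypothetical, and this is a sketch of the idea rather than anyone's actual Jenkinsfile:

```shell
# 1. Developer side: just push the fix.
git push origin hotfix-branch

# 2-3. CI side: cached layers make the rebuild fast, and the push to
# an in-cloud registry only uploads the layers that actually changed.
docker build -t gcr.io/my-project/myapp:v1.2.1 .
docker push gcr.io/my-project/myapp:v1.2.1

# 4. Roll the new image out; the cluster pulls over the cloud's
# internal network, so image size barely matters.
kubectl set image deployment/myapp myapp=gcr.io/my-project/myapp:v1.2.1
```

Because registry pushes and pulls are layer-based, a 1 GB image where only the top layer changed transfers almost as quickly as a tiny one.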


And this is why we need a better image format so that people don't cripple their images to get around the misuse of tar archives.



