> Seriously, are we optimizing for the right stuff here? I know Nix is doing package isolation with a programmable environment defined by text files, it's immutable and all. I get it. But we have alternatives for these problems. They do a pretty good job overall and we understand them.
Setting up a development environment five years ago: here's a Word document that tells you which tools to install. Forget about updating the tools - good luck trying to get everyone to update their systems.
Setting up a development environment two years ago: here's a README that tells you which commands to run to build Docker images locally, which will build and test your code. macOS users have fun burning RAM to the unnecessary VM gods when they have little to spare to begin with, because MacBook Pros. No reproducibility between dev and CI despite using container images, because the dev image gets debugging tools baked in that are stripped from the final image. And you're limited while debugging locally, because all debugging must happen over a network connection.
Today: Install Nix. Run build.sh. Run test.sh. Run nix-shell if you want more freedom.
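A minimal sketch of the kind of `shell.nix` such a workflow could share between the build scripts and `nix-shell` (the package names here are illustrative, not from the original post):

```nix
# shell.nix -- sketch of a shared dev environment definition.
# All package names below are illustrative placeholders.
{ pkgs ? import <nixpkgs> {} }:

pkgs.mkShell {
  buildInputs = with pkgs; [
    go          # compiler that build.sh would invoke
    protobuf    # code generation during the build
    delve       # extra tooling only needed interactively
  ];
}
```

`nix-shell` drops you into a shell with exactly those tools on PATH, and a `build.sh` can reuse the same environment non-interactively, e.g. with `nix-shell --run 'go build ./...'`.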
I'm missing something here. (Disclaimer: I've just tried to understand the OP, I don't know all the details about what is going on there).
The text starts with "In my last post about Nix, I didn’t see the light yet." ... Then it continues with "A popular way to model this is with a Dockerfile." So I expected that the post would demonstrate how Nix can be used without Docker... But then later:
"so let’s make docker.nix:"
and continues in that way. So there is still some Docker involved? To this casual reader, it doesn't seem that using Nix lets one avoid Docker as a technology, just that some additional goal is achieved by using Nix on top of it.
I just missed what was actually achieved, other than the text mentioning a few megabytes saved here or there, out of 100 MB. I also miss the information on whether those megabytes were traded for build time or, if I understand correctly, for even more dependencies (more places from which something has to be downloaded). Can anybody explain?
I'm sure that these tradeoffs were obvious to the author, but I as a reader hoped to somehow get the idea of those, and I missed that (yes, it's hard to "see the light" especially indirectly).
As a person that uses Nix to build Docker images in production, let me explain.
Docker is a tool that lets you package a Linux distribution and roll it out on any random server cleanly and safely. But the stuff inside that distribution is still very much bespoke and unmaintainable.
Nix is a tool that lets you create a custom Linux distribution with absolutely minimal effort. (Basically, just list your packages in a file and hit 'go'.) But the packaging story for Nix is pretty bad.
To bridge that gap, Nix has code that puts a Nix package with all its dependencies into a Docker container. It works, but it's of course kind of icky; something more integrated and smarter would be preferable.
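That bridging code is `pkgs.dockerTools` in nixpkgs. A minimal sketch, where `myService` stands in for your own derivation (and note the attribute is called `contents` on older nixpkgs versions instead of `copyToRoot`):

```nix
# docker.nix -- sketch of building a Docker image from a Nix package.
# `myService` is a placeholder for your own derivation.
{ pkgs ? import <nixpkgs> {} }:

pkgs.dockerTools.buildImage {
  name = "my-service";
  tag  = "latest";
  # Nix copies in myService plus its closure (all runtime deps), nothing else.
  copyToRoot = [ pkgs.myService ];
  config.Cmd = [ "${pkgs.myService}/bin/my-service" ];
}
```

`nix-build docker.nix` then produces an image tarball you can `docker load` into a local daemon or push to a registry.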
> Docker is a tool that lets you package a Linux distribution and roll it out on any random server cleanly and safely. But the stuff inside that distribution is still very much bespoke and unmaintainable.
This, a thousand times. If you are deploying the same container 10,000 times, with no modifications - well, it makes sense to spend time maintaining that distribution. But if you're using Docker for dev environments (for example) and each one is different... well, you haven't really moved on in terms of maintainability from using Vagrant or any VM setup.
For development, using Nix or Guix is in my opinion extremely nice. Using Docker in development would mean mounting your dev directory as a volume into a dev-container and depending on your application, this might end up being a pain in the ass. Editors often depend on the same dependencies to compile your code on the go and check for errors - if you don't have these on the host, you will then have to start solving new problems.
With Nix, you can have a package definition and either install all the necessary dependencies globally on your machine or spawn a shell with all of the needed binaries in your PATH. Recently an application needed a different Node version than the one that was globally installed on my machine. Instead of having to build a Dockerfile or whatever, I just spawned a shell with the newer Node version, ran the command that I needed and was done.
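For example, something like this (the exact attribute name for a given Node release varies between nixpkgs versions, so treat `nodejs_20` as a placeholder):

```nix
# shell.nix -- sketch: pin a newer Node than the globally installed one.
{ pkgs ? import <nixpkgs> {} }:

pkgs.mkShell {
  buildInputs = [ pkgs.nodejs_20 ];  # attribute name depends on your nixpkgs
}
```

Or skip the file entirely and do it as a one-liner: `nix-shell -p nodejs_20 --run "npm test"`.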
For production, you might still want to use Docker, as there's a lot of great software built on top of it (Kubernetes and other platform-specific managed services). You can turn a Nix/Guix package definition into a Docker image quite easily, if you already have one. As an extra benefit, you remove the small chance of ending up with incompatible dependencies that you get when using a traditional package manager in a Dockerfile.
Nix itself doesn't use Docker, and you can deploy it like that just fine (with NixOS, and NixOps if you have multiple machines).
But sometimes your ops team has standardized on Docker, maybe because you're using Kubernetes. In that case, Nix will happily build those images for you.
The big one that I got is that the resulting Nix image has just the Go executable in it, and so the server is safer because if anyone hacks into it they'd need to bring their own copy of any tools they wanted with them. I'm a huge fan of reducing attack surfaces wherever possible, and getting a container that will only run the program required and nothing else is a win for me.
> The big one that I got is that the resulting Nix image has just the Go executable in it
Now I miss the point of that one too: if just a Go executable alone can be enough anyway, as it is statically linked, why not just copy it, instead of complicating things?
A lot of people have standardised on Docker images as the default distribution/packaging format, since Kubernetes etc. make deploying/running them more standardised across orgs.
You can build the binary, then have a Dockerfile copy it into a `scratch` base image. However, if you are already using Nix for deterministic builds, you might as well add the few lines of code to have Nix build the Docker image too, versus maintaining a Dockerfile with a single copy command.
If you are not building a static binary, you get the advantage that Nix will copy in only the dependencies needed. You also get the advantage that, when building images, you're not randomly downloading URLs from the internet that may be dead. Artifacts can come from a Nix build cache, which is cryptographically signed based on the build inputs, so you know that building the same image every time produces the same output.
With typical Dockerfiles that is not true. Docker images are not immutable, so fetching the same Docker image may result in a different image being fetched. Likewise, a lot of Dockerfiles just wget / yum install random packages from places that may not exist anymore. If you maintain your own Nix build cache, you will always be able to build, get a speedup from hitting the build cache instead of compiling, and know the build is deterministic: running the same build multiple times will result in the exact same output.
Because you get to use the same tooling to build Docker images that need more than that. Depend on a C shared library via cgo somewhere? Have a directory full of templates and other resource files that need to ship with it? Maybe the Go program needs to shell out to something else? You don’t have to rework your tooling or hack a random shell script up.
Look up Nix Home Manager; you might be able to have Nix configure everything in your home directory.
Also, Nix makes it pretty trivial to package and patch proprietary binary vendor software. It would take at least one person on the development team feeling comfortable with Nix, but they could craft a Nix file that does everything.
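A sketch of what that one Nix file could look like, using nixpkgs' `autoPatchelfHook` to fix up the vendor binary's library paths. The vendor name, URL, hash, and library list are all placeholders:

```nix
# vendor-tool.nix -- sketch of packaging a proprietary binary.
# Everything named "vendor" here, plus the URL and hash, is a placeholder.
{ stdenv, fetchurl, autoPatchelfHook, zlib }:

stdenv.mkDerivation {
  pname = "vendor-tool";
  version = "1.0";
  src = fetchurl {
    url = "https://example.com/vendor-tool-1.0.tar.gz";
    sha256 = "0000000000000000000000000000000000000000000000000000";
  };
  # Rewrites the ELF interpreter and rpath to point at Nix store paths,
  # so the prebuilt binary runs on NixOS without a traditional /lib.
  nativeBuildInputs = [ autoPatchelfHook ];
  buildInputs = [ zlib ];  # whatever shared libraries the binary links against
  installPhase = ''
    mkdir -p $out/bin
    cp vendor-tool $out/bin/
  '';
}
```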
If you find something that can do all that reliably I would love to hear about it! I've got some Powershell scripts but it's still a several step process.
Why should I be embarrassed for dumping some weekend coding garbage just to keep HR happy, since everyone is expected to have something on GitHub?
Everyone here has known my stuff for a long time, or is knowledgeable enough to find it, given that my online life goes back to the BBS days. Do you think it embarrasses me, really?
Corporate pays my bills, not songs about birds, rainbows and how everyone should stick it to the man.
You skipped the "spend 3 hours figuring out why libgcc_s.so.1 won't link when trying to compile that one protobuf tool you need right now" step.
Seriously, I dread every time I have to compile C++ in NixOS and there's no existing derivation already.
Oh, and when the Omnisharp guys start requiring a new version of Mono that won't build the same way for whatever reason, you're stuck on old versions of your IDE plugins if you want C# support until someone else figures out what broke.
Well, there is a well-defined pattern using configure and make. When very complex builds like GNU Emacs and entire operating systems have done quite well with this pattern, I wonder what problem we are trying to solve.
That's a very apples-to-oranges comparison, wouldn't you agree?
GNU autotools assists in making source code packages portable across Unix-like systems, while make is for build automation and handles fine-grained dependencies (between files).
Nix is a package manager, and though it can build your software for you transparently, it is not tied to any specific software configuration and build toolset; it can manage build-time and runtime dependencies (including downloading the sources), cache build results transparently, etc.