Interesting. For me it's generally much faster than other package managers. The evaluation takes some time, but copying derivations from a cache to the Nix store is so much faster than traditional package management.
I wonder if you somehow ended up eval'ing many versions of nixpkgs?
> your nix store weighs at 100 GB
¯\_(ツ)_/¯ outside very constrained devices, who cares? I just checked my NixOS dev VM that I have used for months now and cannot remember when I last garbage collected. It's 188GiB, but I have many different versions of CUDA, Torch, etc. (the project I'm currently working on entails building some kernels for many different build configurations), and I run nixos-unstable, where a lot of stuff changes, so generations are pretty unique.
A 2TB NVMe SSD is just over 100 Euro. Caring about 100GiB seems to be optimizing for the wrong things.
I completely agree on embedded machines though. Just deploy it by copying the system closure, garbage collecting anything but the previous closure for backup, it'll be pretty much the same size as any other Linux system.
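The embedded deployment flow above can be sketched roughly like this (the flake attribute name `mydevice` and the target hostname are placeholders, and the exact commands depend on your setup):

```shell
# Build the system closure locally (assumes a flake-based NixOS config;
# "mydevice" is a placeholder attribute name)
nix build .#nixosConfigurations.mydevice.config.system.build.toplevel

# Copy the whole closure to the target over SSH
nix-copy-closure --to root@mydevice "$(readlink -f ./result)"

# Register it as the system profile on the target and activate it
# (the store path is identical on both machines, so local expansion is fine)
ssh root@mydevice "nix-env --profile /nix/var/nix/profiles/system \
  --set $(readlink -f ./result) && \
  $(readlink -f ./result)/bin/switch-to-configuration switch"
```

After that, a garbage collection on the target that keeps only the current and previous generation leaves roughly the footprint of a conventional distribution.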
> For me it's generally much faster than other package managers.
I don't know what kind of package manager you were using, but I've never seen an update take a good part of an hour before Nix.
> outside very constrained devices, who cares?
Seriously, are we going to shame people who can't afford to buy lots of storage?? My smaller laptop has only 250GB, but that's freaking plenty if I stick with apt. But I can barely run Nix on it.
> Seriously, are we going to shame people who can't afford to buy lots of storage??
It's not just storage, though - storage may be cheap but once your machine is at capacity (the physical space in laptops is an important constraint) you have to replace perfectly good hardware to accommodate absurdly space-hungry software (looking at you, Vivado).
Also, don't forget that not everyone has always-available, fast, reliable, cost-free internet. By rural standards my connection's very good, but 100 GB would still tie it up for several hours, assuming I didn't need it for anything else in that time.
Digital wastefulness is a problem, and I do think we need to take it more seriously.
> but 100 GB would still tie it up for several hours, assuming I didn't need it for anything else in that time
Except that Nix does not download 100 GiB unless you are installing a gazillion packages. First, Nix downloads compressed output paths. Second, it's not like Nix packages are substantially larger than Debian, Ubuntu, or Fedora packages. The extra storage space comes from (1) Nix keeping multiple generations around, so that you can always roll back to a previous version of the system if you break something; and (2) people using multiple different versions of nixpkgs, which can lead to having multiple copies of system libraries.
(1) is a feature of Nix/NixOS; if you want to use less space, you can trade the ability to roll back for disk space. You could always garbage collect everything except the current generation, and the footprint would be similar to other distributions. For (2), avoid using multiple nixpkgs versions.
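Concretely, trimming old generations looks something like this (a sketch; the `+N` syntax for `--delete-generations` requires a reasonably recent Nix):

```shell
# Delete all old generations and garbage-collect everything no longer
# referenced -- note this removes the ability to roll back:
sudo nix-collect-garbage -d

# Or keep, say, the last two generations as a safety net instead:
sudo nix-env --profile /nix/var/nix/profiles/system --delete-generations +2
sudo nix-collect-garbage
```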
I generally like keeping around a lot of generations, etc. so I don't mind my history of NixOS systems keeping 100-200 GiB. But if you care about space, garbage collect and it won't take up that amount of space.
> I don't know what kind of package manager you were using, but I've never seen an update take a good part of an hour before Nix.
Pretty much all popular package managers. APT/dpkg, DNF/rpm, pacman, etc.
I have just updated one machine to the latest unstable. It updated 333 packages, a substantial part of that system. It took 1 minute and 50 seconds, most of it downloading. So, not sure how it takes a good part of an hour for you.
> Seriously, are we going to shame people who can't afford to buy lots of storage??
I'm not shaming anyone. Just saying that 1 or 2 TB is pretty normal nowadays (outside Mac, because Apple makes you pay for it). At any rate, you can make the size pretty similar to any other distribution. It's not like glibc or GNOME takes up substantially more disk space on Nix.
If you end up using 100 GiB of storage, you are either keeping a lot of system generations around or you somehow have multiple nixpkgs versions in your system's closure, ending up with duplicate copies of glibc, etc. If the former is the case, set up automatic garbage collection and the space use will be far lower. E.g. on one machine I have only three NixOS unstable generations and the system is 18 GiB (which includes a bunch of machine learning models, etc.). It would probably be substantially less on NixOS stable, since there are fewer differences between generations (e.g. on unstable I have qemu, webkitgtk, etc. three times).
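Automatic garbage collection is a few lines in `configuration.nix` (option names per the NixOS `nix.gc` module; the schedule and retention window here are just example values):

```nix
# Periodically delete old generations and collect garbage
nix.gc = {
  automatic = true;
  dates = "weekly";                     # systemd calendar expression
  options = "--delete-older-than 30d";  # passed through to nix-collect-garbage
};
```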
Total size of installation is roughly comparable between NixOS and, say, Ubuntu.
My laptop's Nix closure of 1 generation is 33 GB. My desktop Ubuntu has 27 GB (20 GB /usr + 7 GB in /var, where snaps and flatpaks are stored).
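If you want to reproduce this measurement, the closure size of the running system can be queried directly (a sketch; `nix path-info` may require enabling the `nix-command` experimental feature depending on your Nix version):

```shell
# Human-readable closure size of the current system generation
nix path-info --closure-size --human-readable /run/current-system

# Roughly equivalent with the classic CLI: sum the sizes of all
# store paths the system closure references
nix-store --query --requisites /run/current-system \
  | xargs nix-store --query --size \
  | awk '{ total += $1 } END { printf "%.1f GiB\n", total / (1024^3) }'
```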
Indeed the disk usage of Nix comes from multiple generations. Every time there is a new version of glibc, gcc, or anything that "the world" depends on, it's another 33 GB download. Storing the old generation is entirely optional. The maximum disk space needed is 2 generations.
Updating Ubuntu to a new LTS version almost always costs me multiple hours: interleaved questions about how to merge changed config files in /etc (which unfortunately one cannot seem to batch), apt installation being rather slow, and, in recent years, the update generally breaking in some way that requires a major investigation (e.g. the updater itself dies, or afterwards I have no graphics). On NixOS, these problems do not exist, and the time to update is usually < 30 minutes.