Nix is source-based. If you write your own "package definitions" you can distribute those in all the same ways you would use for source code, since they are source code. nixpkgs, for example, is a monorepo of many of those package definitions, among other things. Flakes are a (still experimental) approach to (among other things) streamlining the options you have outside of nixpkgs inclusion.
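To make "package definitions are source code" concrete, here is a minimal sketch of one (a stdenv.mkDerivation call); the project name, URL, and hash are placeholders, not a real package:

```nix
# hello.nix -- a minimal, illustrative package definition.
# You could ship this file in a git repo, a tarball, an email, etc.
{ stdenv, fetchurl }:

stdenv.mkDerivation {
  pname = "hello";
  version = "1.0";

  src = fetchurl {
    url = "https://example.org/hello-1.0.tar.gz";  # placeholder URL
    hash = "sha256-AAAA...";                       # placeholder hash
  };
}
```

Anyone with Nix can then build it with something like `nix-build -E '(import <nixpkgs> {}).callPackage ./hello.nix {}'`, no central registry required.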
If you also want to distribute pre-built binaries you would use a cache. https://cache.nixos.org/ is exactly that for nixpkgs. You can host your own over http(s), ssh, or s3. There is also Cachix, which is basically a hosting provider for Nix caches and is pretty widespread in the community, I think.
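Consumers opt into a cache in nix.conf. A sketch (the second cache and its key are made-up examples; the cache.nixos.org entries are the well-known defaults):

```nix
# /etc/nix/nix.conf fragment -- space-separated lists.
# Each cache is trusted only if its signing key is listed too.
substituters = https://cache.nixos.org https://mycache.example.org
trusted-public-keys = cache.nixos.org-1:6NCHdD59X431o0gWypbMrAURkbJ16ZPMQFGspcDShjY= mycache.example.org-1:<your-cache-public-key>
```

When a build is requested, Nix first asks these substituters for the store path and only builds from source if none of them has it.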
Channels are, AFAIU, references to some point-in-time/commit/version of nixpkgs. They are exposed in the form of branches in the nixpkgs repository and used in some other places as well. A channel has a set of conditions that need to be met for it to be advanced to a newer version of nixpkgs. For the nixpkgs-unstable channel, which follows the master branch, the packages it contains need to have been built by Hydra and be present in the global cache before it advances (I don't think this holds for literally all packages, but it is the general idea). This is to make sure that users who install packages from this channel will mostly find them in the cache (if you use master directly, you can often run into packages that you need to compile yourself, because they are not yet present in the cache).
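In practice, subscribing to a channel looks like this (illustrative transcript; the channel name after the URL is whatever local alias you pick):

```shell
# Subscribe to the nixpkgs-unstable channel under the alias "nixpkgs",
# then pull the latest version of nixpkgs that channel points at:
nix-channel --add https://nixos.org/channels/nixpkgs-unstable nixpkgs
nix-channel --update

# Packages installed from it should now mostly come from the cache:
nix-env -iA nixpkgs.hello
```

Each `--update` just moves your local reference forward to wherever the channel has advanced.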
The stable channels are basically "just" branched off from master (or nixpkgs-unstable? not sure) when the given point release is due (there is more to it; for example, there is an effort to make all the packages contained in that release actually buildable, called "Zero Hydra Failures" or ZHF). They will then mostly stay that way, apart from the odd backport for security reasons and the like.
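With flakes, tracking a stable branch is just an input URL; a sketch (the branch name `nixos-24.05` is an example, substitute the current stable release):

```nix
# flake.nix fragment pinning nixpkgs to a stable release branch.
{
  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-24.05";

  outputs = { self, nixpkgs }: {
    # ... packages/configurations built against that pinned branch,
    # updated explicitly via `nix flake update`.
  };
}
```

This gives you the same "stable branch plus backports" stream without using the channel mechanism at all.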
Basically, it is very similar to how a larger software project might be managed: a develop branch plus older release branches that still receive backports.
(This is my mental model of it anyway, as a user for a few years. There are probably details that might be a bit off or not exactly accurate.)
You can also configure the consumer to fetch from a host over ssh without any special setup on that host. But it opens a connection for every single request, so it ends up killing performance [1]
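The ssh variant needs nothing but Nix on the remote side; a sketch (hostname and store path are placeholders):

```shell
# Copy a store path out of a remote machine's Nix store over ssh;
# the remote host only needs Nix installed, no cache daemon:
nix copy --from ssh://user@build-host.example.com /nix/store/<hash>-<name>

# Or list it as a substituter (ssh-ng is the newer protocol variant):
#   substituters = ssh-ng://user@build-host.example.com
```

Convenient for ad-hoc sharing between your own machines, but as noted above, the per-request connection overhead makes it a poor fit as a general-purpose cache.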