Providing robust backup and restore for a system that lets you run any kind of workload is almost impossible. You'd have to support database backups for every version of every database, correct file backups for the volumes, etc.
It feels much more dangerous to have such a system in place and provide a false sense of security. Users know best what data they need to back up, where they want to back it up, whether it needs to be encrypted, whether it should run daily or weekly, etc.
ZFS. Snapshot the entire filesystem, ship it off somewhere. Done. At worst, Postgres is slow to start up from the snapshot because it thinks it's recovering from a crash.
Postgres is recovering from a crash if it's reading from a ZFS snapshot. It probably did have several of its database writes succeed that it wasn't certain of, and others fail that it also wasn't certain of, and those might not have landed "in order". That's why WAL files exist, and it needs to fully replay them.
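For what it's worth, a minimal sketch of that snapshot-and-ship flow in Python; the dataset name and backup host are made up:

    # Crash-consistent ZFS snapshot of a Postgres dataset (all names hypothetical).
    import subprocess
    from datetime import datetime, timezone

    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    snap = f"tank/pgdata@backup-{stamp}"

    # The snapshot is atomic: Postgres sees it exactly as it would see the
    # disk after a power loss, and replays WAL on the next startup.
    subprocess.run(["zfs", "snapshot", snap], check=True)

    # Ship it off-box: zfs send piped through ssh into zfs receive.
    send = subprocess.Popen(["zfs", "send", snap], stdout=subprocess.PIPE)
    subprocess.run(["ssh", "backuphost", "zfs", "receive", "backup/pgdata"],
                   stdin=send.stdout, check=True)
    send.stdout.close()
    send.wait()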
Classic HN reply that's very disconnected from reality. Most people don't run ZFS; most people using these tools are self-hosting their apps because it's cheaper than a managed cloud service, usually on a dedicated or VPS server where by default you run stock Ubuntu and no niche filesystem.
Even TrueNAS, which runs ZFS, needs to embody best practices for application backup and restore.
As a side note, it's pretty interesting that TrueNAS is designed so heavily around the ZFS paradigm. Most people think in terms of applications and timestamps to restore, not data pools and the snapshot paradigm that accompanies ZFS.
With the right layer of abstraction on top of snapshots, one could have their cake and eat it too, since ZFS is difficult for beginners to grasp.
You are missing the point. Nobody doubts it's possible, but defaults matter: people just commission a new Ubuntu server on OVH, Hetzner, DO, etc. and don't configure ZFS or snapshotting, or even want to know what that is.
I didn’t say most people run it, I offered it as a solution. If you’re willing to run a server, you should be willing to try ZFS. As long as you follow some best practices, you’ll be fine.
A viable strategy, but it requires an experienced Linux/Unix admin and a fair amount of planning and setup effort.
There are a lot of non-obvious gotchas with ZFS, and a lot of knobs to turn to make it do what you want. Anecdotally, a coworker of mine set it up on his development machine back when Ubuntu was heavily promoting it for default installs. It worked well until one day his machine started randomly freezing for minutes, multiple times a day... He traced the issue back to an improper snapshotting setup, then spent a couple of days trying to fix it before going back to ext4.
For the Postgres data use case in particular, I would be wary of interactions, and we would probably need a lot of testing if we were to introduce it... Though it seems at least some people are having success with it (not exactly a plug-and-play or cheap setup, though): https://lackofimagination.org/2022/04/our-experience-with-po...
Addressing your argument directly though: you know that if you spin up a Postgres database for your app, you need to dump the database to disk to back it up (or, if you wanna get fancy, you can do a delta from the last backup plus a full backup periodically). Anytime a Postgres database exists, you know the steps you need to take to back up that service.
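(A minimal, cron-able sketch of that dump step, with an illustrative path and database name -- pg_dump's custom format is what pg_restore expects:)

    # Minimal pg_dump sketch; the db name and output path are illustrative.
    import subprocess
    from datetime import datetime, timezone

    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    outfile = f"/var/backups/app-{stamp}.dump"

    # --format=custom produces a compressed archive that pg_restore understands.
    subprocess.run(["pg_dump", "--format=custom", "--file", outfile, "app"],
                   check=True)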
Same with persistent file storage on disk: if you have a directory of files, you need a snapshot of all of those files.
Each _service_ can know how to back itself up. If you tell a Dokku _app_ to back itself up, what you really mean is that each _service_ attached to that app should do whatever it needs to do to create a backup. Then, Dokku only needs to collate all of the various backup outputs, include a copy of the git repository that drives the app, tar/zstd it, and write it to disk.
As you pointed out, the user should probably be able to control the backup cadence, where those backups are shipped off to, the retention period, whether or not they are encrypted, etc., but the actual mechanics of performing a backup aren't exactly rocket science. All of the user-configurable values can have reasonable defaults too -- they can/should Just Work (tm). There's value in having that work OOTB even if the backups are just being written to disk somewhere on the actual Dokku machine.
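To make the mechanics concrete, a rough sketch of that collate-and-compress step, with hypothetical paths (this is not Dokku's actual layout or API):

    # Collate per-service backup outputs plus the app's repo, then tar/zstd.
    # Every path and name here is made up for illustration.
    import subprocess
    from datetime import datetime, timezone

    APP = "myapp"  # hypothetical app name
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")

    inputs = [
        f"/var/backups/{APP}/postgres.dump",  # the database service's dump
        f"/var/lib/{APP}/storage",            # persistent file storage
        f"/home/git/{APP}.git",               # the git repo that drives the app
    ]

    # GNU tar's --zstd flag handles the compression in one pass.
    archive = f"/var/backups/{APP}-{stamp}.tar.zst"
    subprocess.run(["tar", "--zstd", "-cf", archive, *inputs], check=True)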
I landed on your issue too, back when I was building my Dokku setup. I don't disagree that it would be nice, but I do disagree with the parent poster making it sound like an essential feature whose absence makes the project less valuable.
Sure, Dokku is still valuable without it. I've seen a ton of recommendations over the years to "just use Dokku" when people ask about simple hosting setups for their side hustle or whatever though. In that context, Dokku isn't really a viable option unless backups are in place - ideally OOTB. Without having backups OOTB, the idea of Dokku being a really easy way to get going is a little bit diluted, don't you think?
I sort of see it the same way as "just install Linux Mint, it's great, you'll love it", but when something doesn't work, it's "oh yeah, just open the terminal and ________": opening the terminal is the _last_ thing people want to do if they just wanna read their email or whatever.
My comment comes from the lack of backup and restore systems in nearly all the projects I've seen, especially open-source services and applications.
My VPS provider just lets me take image snapshots of the whole machine so I can roll back to a point in time. It's a little slower and less flexible than application- or component-level backups, but overall I don't even think about backup and restore now because I know it's handled there.
None of my hobby projects across 15 years or so have ever needed backups or restoring. I can agree it would be nice to have, but it’s a far cry from necessary.
FWIW, backups can be run from a separate Docker container that mounts the same volume as the main app and connects to the db, if there is one, so it's not like backups can't be taken care of. That's how it's often done in the Kubernetes world.
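A sketch of that pattern using the Docker SDK for Python, where the image tag, network, volume, and credentials are all made up:

    # Run a throwaway container that joins the app's network, mounts a shared
    # backup volume, and dumps the db into it. All names are hypothetical.
    import docker

    client = docker.from_env()
    client.containers.run(
        "postgres:16",  # match the server's major version so pg_dump agrees
        command=["pg_dump", "-h", "db", "-U", "app",
                 "-f", "/backups/app.dump", "-Fc", "app"],
        environment={"PGPASSWORD": "secret"},
        network="app_network",
        volumes={"app_backups": {"bind": "/backups", "mode": "rw"}},
        remove=True,  # clean up the container once the dump finishes
    )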
Been using Dokku for probably 8 years now? (Or something close to that; it used to be written entirely in bash!) Hosting private stuff on it, and an application at $oldplace probably also still runs on that solid setup. Highly recommended, and the devs are great sports!
I've kept a list of these tools that I've been meaning to check out. In scope, do they cover securing the instance? Is there any automation for creating networks of instances?
Depends on what you prefer. I went with Dokku because it was important to me that I could run docker-compose-based apps alongside my "Dokku-managed" apps. I didn't want to convert my existing apps (Sonarr, Radarr, etc.) into Dokku apps, and I only use Dokku for my web projects.
I also wanted to be able to remove Dokku if needed and everything would continue to run as before. Both of these work very well with Dokku.
https://news.ycombinator.com/item?id=41358020
I wrote up my own experiences too (https://blog.notmyhostna.me/posts/selfhosting-with-dokku-and...) and I can only recommend it. It is ~3 commands to set up an app, and one push to deploy after that.