How I installed TrueNAS on my new ASUSTOR NAS (jeffgeerling.com)
80 points by ingve on June 21, 2023 | 65 comments


This is a cool article. I wonder why all the content creators use TrueNAS. I evaluated TrueNAS Scale for a storage server and decided not to use it. I had two major complaints.

First, when you create a ZFS pool, it partitions drives and puts swap on all of them. IIRC it may have even set up some mdadm RAID mirrors. It's a bunch of complexity and can leave you with swap on spinning disks if you don't notice and override the behavior.
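
(If you want to check what it actually did to your disks, the standard tools show it; the pool name here is just illustrative:)

    # Inspect the layout TrueNAS created
    lsblk -o NAME,SIZE,TYPE,FSTYPE,MOUNTPOINT   # per-disk swap partitions show up here
    swapon --show                               # which swap devices are active
    cat /proc/mdstat                            # any mdadm mirrors backing swap
    zpool status tank                           # the pool sits on the data partitions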

For me, whenever I see a bad default like that, one that might have made sense 10+ years ago, I get freaked out about what other poor defaults exist that I'm not noticing, or what else may have been neglected for a decade. I use it as a negative indicator and often avoid the whole product at that point.

Second, every time I'd boot (or maybe stop) a VM it would trample my ZFS tunables, specifically zfs_arc_max. I couldn't figure out exactly what was triggering it, but I don't want a system that doesn't play nice with the default CLI tooling.
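
For reference, on a plain Linux ZFS box that tunable is just a module parameter; something in the middleware kept undoing the equivalent of this (the 8 GiB cap is an arbitrary example):

    # Check the current ARC cap (0 = auto), then cap it at 8 GiB
    cat /sys/module/zfs/parameters/zfs_arc_max
    echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max
    # Persist it across reboots:
    echo 'options zfs zfs_arc_max=8589934592' >> /etc/modprobe.d/zfs.conf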

I actually had a really tough time finding a decent NAS and I'm going to end up building a plain old Linux system running Ubuntu 22.04. If anyone has any suggestions (not QNAP or Synology) for something that can do a good job of ZFS and 2-6 VMs, I'd love to hear them.


> I wonder why all the content creators use TrueNAS.

Because they're content creators, not ZFS tuners. It's plug and play. It makes it easy to have some redundancy and do backups.


This, basically.

I need storage to work, and I don't have enough time to get into ZFS's guts unless I'm going to do a video or longer blog post on a specific topic.

TrueNAS gives sane defaults, exposes most of the basic features through a (relatively sane) UI, and... works.

I still use ADM on my older spinning disk NASes, though—RAID 10 just works, and is fast enough for what I need, and the fact I have three copies of everything (one 'offline' (ish) on Glacier Deep Archive) means if I do ever encounter a corrupted file, I can grab a copy from one of the other backups on different media.

If you're a storage nerd / data hoarder, then it's common to spend more time and go deeper than what something like TrueNAS or some other UI gives you. Otherwise it's like the McDonald's of storage software.


> Otherwise it's like the McDonald's of storage software.

This is why I use TrueNAS. I use Core instead of Scale though because I prefer FreeBSD. Its main job is to hold my ripped media library and run Plex. If I can glue other stuff to it (like an Ubuntu Server VM running Pi-Hole) then that's just an added bonus.


Either that, or (more likely, I'd argue) because they're getting paid for promoting TrueNAS.


I've never seen a dime from TrueNAS, but I can't speak for other content creators. I am explicit about my sponsorship, and will always mark a video and add information in both the video itself and the description disclosing exactly the relationship I have with any sponsoring vendor.

See: https://github.com/geerlingguy/youtube#sponsorships


[citation needed]


> If anyone has any suggestions (not QNAP or Synology) for something that can do a good job of ZFS and 2-6 VMs, I'd love to hear them.

I'm using FreeBSD and its native hypervisor, bhyve. That's down to overall OS preference, not any ZFS feature Linux lacks.
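
For anyone curious, the vm-bhyve wrapper makes it pretty painless; a rough sketch (the pool name zroot and the guest name are just examples):

    # One-time setup
    pkg install vm-bhyve
    zfs create zroot/vm
    sysrc vm_enable="YES" vm_dir="zfs:zroot/vm"
    vm init

    # Create, install, and start a guest
    vm create -s 50G storage-vm
    vm install storage-vm debian-12.iso   # ISO fetched beforehand with 'vm iso <url>'
    vm start storage-vm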


Don’t use QNAP. It’s super buggy and you can’t trust it. I’ve had a ton of near misses with every piece of QNAP hardware and software I’ve used, and the only reason I haven’t lost data is due to sheer paranoia.

Synology is much better put together, but also tends to be a lot more conservative software-stack-wise. I've had no 'wtf' moments with Synology equipment, unlike QNAP, but I have had a few 'sigh, that's lame' moments due to older/less efficient software approaches or the like.

Anything custom, you’ll have to spend your time figuring everything out, which has pros and cons.

Personally I've had zero issues with a multi-pool NAS that has 20 spinning-rust drives + 6 SSDs, totaling around 150TB, on Ubuntu with some pretty standard Intel server hardware.

Been running that for 2 years, zero data loss, currently about 80% full. I’ve also learned a ton about ZFS and done some odd things for my own amusement. I wouldn’t recommend it if you want low maintenance though.


Eh, if you want to use TrueNAS you have to do things in the UI. Everything you should need to do, you can do in the UI.

Changes made with the underlying tools often get clobbered because that’s just not very commonly required and anyway you want one source of truth not two.

If you just want a stateless UI on top of the system tools you have to create it yourself… but it won’t be easy.


Exactly right. TrueNAS is designed to work as an appliance, rather than an infinitely tunable "Unix variant with ZFS preinstalled."


Frankly, none of the Linux content creators are that good at Linux. Or maybe I discount what 18 years of experience with it does to a person, but LTT is constantly saying stuff that makes me cringe, even Luke.


Just want to throw out there that people should be allowed to be wrong. Especially if they've shown the capacity to recognize their mistakes and own them.

Yeah, we cringe but who ever actually learns something without making some mistakes along the way?

If we spent all of our time dealing with the anxiety of trying to never be wrong -- most of us wouldn't be able to function.


Most people who "create content" don't know much and just do the minimum to get something out. This scales from TikTok to YouTube to popular nonfiction... honestly, even a lot of academic research.

It’s not about people being wrong. There’s a problem at every level with how willing people are to publish before they’ve done enough to be sure that their information is good.

Being wrong should be celebrated. Publishing for the rewards of publishing instead of the quality of the work is rightly shamed.


Oh sure, most of them are inquisitive and thoughtful, they just aren't "into" Linux the same way some others are. I hope I didn't come off too judgemental.


I'm not in any way an expert in Linux. I don't notice errors, but a lot of the time I'm totally stumped by how easily they jump to radical conclusions: "we're not using it because of a single detail", "it's just bad", "one would never", without even hinting at any real research.


Get burned real bad and spend 72 hrs restoring service and you learn a lesson. This is the reason why sysadmins / devops / whatever exist. It's an entirely separate skill set from development.


LTT is not a Linux content creator.


And yet they have more videos about Linux than some other CCs that are focused on Linux, and talk about it a lot on WAN Show, etc.; but even they make some odd mistakes or technology choices that feel like 8-year-old holdovers from whatever they used in industry last.


I'd recommend not using Ubuntu, but instead going with either Proxmox or bog-standard Debian. Proxmox supports using the ZFS pool as storage for VMs and using ZFS as the root disk as well. Debian won't help you do the install with ZFS on root, but it can be done. If you don't go with ZFS on root, then it's as simple as apt install zfs-dkms on Debian.
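
On Debian that install looks roughly like this (zfs-dkms lives in contrib for licensing reasons, so enable that component first; pool and device names are placeholders):

    # Enable 'contrib' in /etc/apt/sources.list, then:
    apt update
    apt install linux-headers-amd64 zfs-dkms zfsutils-linux
    modprobe zfs
    # Build a mirrored pool; use stable /dev/disk/by-id names
    zpool create tank mirror /dev/disk/by-id/<disk1> /dev/disk/by-id/<disk2>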


I tried Proxmox. I mainly need a storage server that can run a few VMs. Proxmox looks pretty good for a VM heavy setup, but it adds a lot of complexity that I just don't need.

I like the release cadence on Ubuntu, and I had such a bad experience with ZFS DKMS (many) years ago that I'm probably a bit biased towards Ubuntu because I simply don't have to think about it.

I won't use ZFS on root unless it simplifies dealing with a failed boot disk. I still prefer CSM over UEFI because UEFI makes it more complicated to recover from failure :-(


I'd never used Proxmox, so after having read great comments about it I tried to install it on a couple of small mini PCs I had around, just to test some functionality, but every time it failed. If memory serves (it was 1-2 months ago), on both machines the installer complained it couldn't find the partition to install to. Both machines would install and run other Linux distros flawlessly. No idea how to solve that problem.


Recently tried Proxmox; spent a few hours trying to boot any VM placed on a ZFS pool, with no luck. The same VMs, converted from raw to vmdk (IIRC), worked on two other, identical NVMe disks with a software ext4 RAID.


Very odd, I'd love to know more since I've got that setup going on my router/edge box right now. ZFS mirror on two NVMe disks as the storage for root fs and VMs.


It was a recent Proxmox version (the newest as of ~6 weeks ago). It's possible that I used some experimental new kernel version and never tried a completely basic/default setup.

The disks were Samsung's MZVL21T0HCLR-00B00. The pair with the ext4 software RAID booted all VMs (Linux, FreeBSD) with no issues, but the second pair with the ZFS pool wouldn't.

It may have been due to the newer kernel version, but I asked myself a few questions about what's missing in FreeBSD's bhyve regarding my needs, and took that route instead of trying the default kernel.

    # zpool list
    NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
    VMs     952G   828G   124G        -         -    36%    87%  1.00x    ONLINE  -
    zroot   944G  1.21G   943G        -         -     0%     0%  1.00x    ONLINE  -


> If you don't go with ZFS on root, then it's as simple as apt install zfs-dkms on Debian

That's basically how it works in almost all Linux distros ;-)


May I ask what issues you have with QNAP or Synology?


QNAP had a minor version upgrade a while back that re-enabled auto-updates that had explicitly been disabled, and there was a major version update (v4 to v5) not long after that. I was intentionally holding off on upgrading to v5 because there was a Samba-related issue that broke my primary workload, and I almost got burned by it. Yesterday I got an email from one of my QNAPs saying an automatic firmware update was scheduled for 00:00 today, even though I have auto-updates turned off.

I don't think it actually updated today, but I don't think they have their act together when it comes to managing updates and I'm not willing to depend on any of their stuff.

Synology is more complicated and ultimately comes down to the use of BTRFS. I don't know a ton about filesystems, but, the way I understand it, BTRFS allocates extents and then puts blocks into those extents. Depending on your workload, you can end up with orphaned blocks in those extents that prevent space reclamation (because it reclaims extents, not blocks) and it can result in runaway space usage. Search for "BTRFS missing space".

I may not have gotten that 100% correct, but I think the basic idea is close.

My workload (backup storage) overwrites random blocks in existing files and that's one of the scenarios that exacerbates the issue. I've ended up with empty LUNs on a Synology that are "using" TBs of space on the containing volume.

The Synology can also have a pretty complicated "stack" by the time you get your data onto it. I think I had an image based LUN on a BTRFS volume on mdadm RAID1. That was achieved through the GUI without making any crazy choices AFAIK.
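
You can watch the discrepancy with btrfs's own tooling (the mount point here is the Synology default; the balance is a common mitigation, not a cure):

    btrfs filesystem df /volume1      # logical usage per block-group type
    btrfs filesystem usage /volume1   # allocated vs. actually used space
    # Compact mostly-empty data extents to reclaim space:
    btrfs balance start -dusage=50 /volume1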


I went all-in on Synology a few years ago. I can manage a bunch of drives myself, but that's not how I want to spend my free time. The Synology just sits in a corner of the house, doing its thing 24/7/365 without me futzing around with it.


> First, when you create a ZFS pool, it partitions drives and puts swap on all of them.

If you have a drive fail, you don't want to replace the failed drive with one that is slightly smaller. It won't work. So they pad with swap.

> IIRC it may have even set up some mdadm RAID mirrors.

Yes, so that the system survives if a drive holding active swap fails.

You don't want to use ZFS for swap: it can require extra memory to allocate blocks, which is not what you want when you're already out of memory.
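
What they set up is roughly the equivalent of this (device names are illustrative): a small swap partition per disk, mirrored so a single disk failure doesn't take the system down:

    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
    mkswap /dev/md0
    swapon /dev/md0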

> It's a bunch of complexity and can leave you with swap on spinning disks if you don't notice and override the behavior.

Perhaps, but you are assuming their reasons are not as good as yours without knowing their reasoning.


> If you have a drive fail, you don't want to replace the failed drive with one that is slightly smaller. It won't work. So they pad with swap.

What advantage does padding with swap give over leaving some unallocated space?

> Yes in case the drive fails with the swap on.

There's no scenario I can think of where I want swap on a spinning disk.

> You don't want to use ZFS for swap, it can require extra memory to allocate blocks. Not what you want when you are out of memory.

Is that even possible? Do you mean putting swap on a ZVOL?

> Perhaps, but you are assuming their reasons are not as good as yours without knowing their reasoning.

If anyone can justify a reason for putting swap on a spinning disk in 2023 I'd love to be enlightened.


> There's no scenario I can think of where I want swap on a spinning disk.

When you only have spinning disks and you run out of ram.

> What advantage does padding with swap give over leaving some unallocated space?

It allows you to not run out of ram, and it means you get to use the space.


> It allows you to not run out of ram, and it means you get to use the space.

As someone who (this week) had a spinning disk fail in the NAS, with swap spread across all disks... it's kind of a pain in the arse vs using the whole disk.

It's extra complication, which can be especially bad if the disk failure occurs in the early a.m., when you're not 100% awake but still need to deal with it. :/


What was the complication? TrueNAS handles this automatically.

Was your swap non-raided?


It's on a standard Linux server. For these particular servers, I'm using a different containerisation strategy (LXD), so I don't want to run TrueNAS on them. These ones don't need to be appliances. :)


I'm probably not as proficient with ZFS as you are, but for me, the basic Ubuntu LTS option you mentioned has been great. I was sick of all of the GUI and bloated features of Unraid. Then I tried TrueNAS and it was okay, but in the end I realized that all of the GUI abstraction was just getting in the way. I really just wanted ZFS features and some Docker apps.

I think if I needed a lot of VMs I might go with Proxmox, but yeah, the bare bones Ubuntu has been really elegant and stable since I built it a few months ago.


I like TrueNAS because of its plugin support. My installation runs Jellyfin, AdGuard, the *arr apps (Sonarr etc.), and a couple of other things. I don't really care about data redundancy for the data that's stored on the NAS, so I just use JBOD. Sure, I could build my own Linux server to do all this, or maybe run VMs in Proxmox or whatever, but I don't really gain anything from that.

TrueNAS works fine for my use cases, and I suspect it's the same for most users.


Plain old Linux system is the way to go. Mount your volumes, set `/etc/exports`, configure a keytab if you're into that sort of thing, and move on with life.

It's honestly less messing around than any of the off the shelf tools. Depending on what you're doing you may want to mess with some kernel tunables. But even then, most of the time the defaults on a modern kernel are good enough for small stuff.
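
A minimal sketch of that setup (the path and subnet are just examples):

    # /etc/exports
    /tank/media  192.168.1.0/24(rw,sync,no_subtree_check)

    # Apply and verify
    exportfs -ra
    exportfs -v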


Been looking into the same situation, and decided to stay away from TrueNAS for similar (though less specific) reasons. Currently looking at FreeBSD with their bhyve VM tooling.


I personally like that TrueNAS Scale has a small Kubernetes installation for add-ons. Makes it stupidly easy to write my own add-ons or integrate something like TrueCharts.


I have found it infuriating.

The TrueNAS portion of TrueNAS Scale seems fine, but the Apps side of it I have found half-baked.

I have multiple apps (both TrueCharts and the base catalog) that just hang on init with no logs, just the spinner, and then it stops. Debugging k8s when you're abstracted like that is frustrating.
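
Scale's app runtime is k3s under the hood, so you can at least drop below the abstraction from a shell (the namespace and pod names here are placeholders; apps live in ix-<appname> namespaces, if I remember right):

    sudo k3s kubectl get pods -A                        # find the stuck pod
    sudo k3s kubectl -n ix-myapp describe pod mypod-0   # events usually hint at why init hangs
    sudo k3s kubectl -n ix-myapp logs mypod-0 --all-containers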

I've also had a difficult time getting Traefik going with either cert-manager or the TrueNAS certs; one seems to be deprecated, the other undocumented.

If people gravitate towards the apps feature of Scale, I'd recommend holding off until it's had more time in the oven.


> No warranty is given for faults caused by alternate operating systems.

Sure, but the burden is on the manufacturer to prove the alternate OS caused the fault. They can't just void your warranty for installing another OS.

https://en.m.wikipedia.org/wiki/Magnuson%E2%80%93Moss_Warran...


The problem here is litigation. You can take them to court, but you will need a lot of $.

And the Magnuson-Moss Warranty Act does not contain a fee-shifting statute (unlike the Lemon Law, which is a more specific version of Magnuson-Moss).

What I don't understand, however, is how we deal with features that aren't used by ASUS, say some form of encryption in the CPU: TrueNAS uses it, your NAS locks up, and Intel actually releases an erratum for that CPU.

Who is responsible? Should they give you a new one?


My perspective is if they're selling you hardware + software, the warranty covers the hardware in the context of running their software. If the accelerated encryption doesn't work on OtherOS, but they don't use it on their OS, too bad, that's not part of what they sold you.

In other words, it's not unreasonable for them to request that you (re)install their software and show them the system is broken there. Only if you manage to actually break the device with OtherOS does the burden of proving who broke it matter. I'd personally argue that, in most cases, the hardware is defective if it can be broken by doing wrong things with software; but it depends on how far you go: if you go mucking about in the flash ROMs, it's debatable; extreme overclocking is on the user as well; etc.


I'm looking forward to a bright future when NVMe drives are competitive with magnetic disks and we can all afford to put 12 of 'em in a tiny silent box like this for blazing fast local storage. It'll happen!

Also, bravo to Asus for not only opening their hardware to make installing an alternate OS possible, but documenting it on their site. This makes me vastly more likely to consider them for my next purchase.


> silent

From memory, this does have a fan.


As a guy who has installed vanilla Debian on a bunch of QNAP and Synology NAS boxes, I don't understand exactly why anyone would want ZFS. The thing is completely unintuitive when something breaks, and there are slight incompatibilities between the current implementations (which will bite you hard when you are trying to recover a failed array). Whatever filesystem you like, on top of LVM2, on top of mdraid, is a setup that has reasonable layering, and one can understand each layer well enough to troubleshoot or performance-tune that particular layer. With ZFS, you just have this neat, shiny, complete black box that does something in slightly incompatible ways.


> I don't understand what exactly is the reason why anyone would want ZFS

It makes snapshots, clones, and replication very simple to deal with.
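
For example (dataset and host names made up):

    zfs snapshot tank/data@pre-upgrade             # instant, zero-copy checkpoint
    zfs clone tank/data@pre-upgrade tank/scratch   # writable copy for experiments
    zfs send tank/data@pre-upgrade | ssh backuphost zfs recv backuppool/data
    zfs rollback tank/data@pre-upgrade             # undo everything since the snapshot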


6 NVMe drives on a single x8 link. Proof that if you build it, they will buy it.


PCIe Gen 3 bandwidth x1: 1 GB/s, x8: 8 GB/s

PCIe Gen 4 bandwidth x1: 2 GB/s, x8: 16 GB/s

Yeah, good point... but 8 GB/s is still plenty to saturate a single 10Gb Ethernet link.
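
The back-of-the-envelope:

    10 GbE        = 10 Gb/s / 8 = 1.25 GB/s
    x8 PCIe Gen 3 =  8 GB/s     -> ~6.4x the NIC
    x8 PCIe Gen 4 = 16 GB/s     -> ~12.8x the NIC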


I was planning on doing this, but wasn't entirely sure how much of a bottleneck the CPU would be especially in some sort of encrypted raid setup...


There is generally plenty of CPU; these cores support AES offload and the like. The main bottleneck comes from each drive only getting one PCIe 3.0 lane, from behind a switch. As long as you span at least two drives in some way, the next limitation you hit is the 10G NIC.

There is something to be said for those $100 PCIe x8-to-4x-NVMe cards. Stick 3 of those on a 13600K (2 in the split direct-to-CPU lanes, one in the indirect lanes) and you'll have more bandwidth per drive, a heck of a lot more cores with more power, the ability to slot a 25G NIC, and the ability to load a ton more RAM (officially the N5105 only supports 16GB; 32 works fine as far as I've been able to tell, 64 causes errors). The power usage, size, and cost will all be a little higher, but if you're looking at a 12-SSD NAS it's probably worth thinking about, IMO. You also get an extra x4 for the main OS drive, instead of 8GB of eMMC, which can only really be /boot for most realistic setups.


iXsystems sells turnkey TrueNAS boxes with Atom CPUs, which are comparable in terms of TDP, clock speed, core count, etc. Though the Atom CPUs support 16x more RAM, which ZFS will definitely take advantage of, and 50% more PCIe lanes.

So they're not exactly apples-to-apples, but I don't think encrypted datasets would be the bottleneck.


One thing I did differently was use the 8GB eMMC as /boot. This allows all of the data drives to be fully used and identically partitioned.


Looks like this device is for NVMe drives. Is there any special tuning required for an all-SSD array versus rotational HDDs?

I'm also curious how long SSDs are supposed to last relative to an equivalently sized HDD.


I have this with SSDs in it, running Arch btw ;). One thing I did was go classic and use RAID, due to the limited RAM support of the CPU (16GB officially; 32 works fine as far as I've been able to tell, 64 creates memory errors according to all who have tried). When I did this I went RAID4, since it makes more sense than RAID5 when you have SSDs. I was surprised to find that even cheap SSDs have better endurance than the typical high-capacity drives you'll see used in a home NAS. Price was still ~4x per GB, though.
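
If anyone wants to replicate it, it's a one-liner with mdadm (device names are examples); --level=4 dedicates one member to parity instead of rotating it across all members like RAID5:

    mdadm --create /dev/md0 --level=4 --raid-devices=4 \
        /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1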


https://www.backblaze.com/blog/ssd-edition-2022-drive-stats-...

https://www.backblaze.com/blog/backblaze-drive-stats-for-202...

With their current sample it's 0.89% SSD vs 1.39% HDD AFR, but the SSD sample size is too low at the moment.


I note the author added an external USB disk to act as the TrueNAS boot disk. I wonder if treating one of the M.2 disks as a boot disk is feasible. There are twelve M.2 slots in that kit, after all.


You can do that too, I just chose to use an external drive to keep things simple and compare apples to apples in terms of NVMe performance (part of my reason for installing TrueNAS was to compare its performance to ADM built into the NAS, running with the same 12-drive storage layouts).


I was setting up FreeNAS, TrueNAS, or something similar (FreeBSD-based) for a client years ago, and they required utilizing all SATA ports for actual storage and booting from a RAID of USB flash drives. They made A LOT of backup copies of these USB sticks and were told to make new copies after system upgrades and reconfiguration.

Last time I had any info from that site - it worked well for them.


IMO it would make more sense to use the eMMC (even with the caveats mentioned in the article).


Related to the topic but not the article: as of very recently, Unraid also supports ZFS. Worth a look if you are evaluating TrueNAS and similar options.


Is this first-class/native support? Could you provide a link where I can read more?

I am interested in this, but came up empty the last time I looked into it.



My first thought after the page loaded was that I was looking at some sort of old Playstation.



