Potential complexity is higher with ZFS, but when it comes to RAID and the like, I honestly find the zpool tools much easier to work with than their mdadm equivalents.
I have a "ghetto NAS" of 24 drives plugged into my server via USB, and getting a raidz3 set up on there was one command:
`zpool create tank raidz3 drive1 drive2 drive3 ....`
Scrubs and disk replacements are pretty easy too, just the `zpool scrub` and `zpool replace` commands.
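For example, assuming the pool is named `tank` (device names are placeholders):

```
# kick off a full-pool integrity check
zpool scrub tank
# watch scrub/resilver progress
zpool status tank
# swap a dying disk for a fresh one; resilvering starts automatically
zpool replace tank drive7 drive7new
```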
I haven't really felt the need to leave ZFS, at least not for RAID; on root I still use ext4, though I might switch to btrfs on my next install.
The main limitation I've hit with ZFS is that expanding or modifying a pool is damn near impossible; you pretty much have to find a second equal-or-larger storage pool, migrate, nuke, and rebuild. Even mdadm lets you do things like convert from raid5 to raid6 without having to nuke the array.
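For comparison, that mdadm conversion looks roughly like this, assuming a 4-disk raid5 at `/dev/md0` and a new disk `/dev/sde` (names are illustrative):

```
# add the disk that will hold the second parity
mdadm --add /dev/md0 /dev/sde
# reshape in place from raid5 to raid6
mdadm --grow /dev/md0 --level=raid6 --raid-devices=5 --backup-file=/root/md0.backup
```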
Storage Spaces with ReFS has the most impressive feature set from this perspective: you can add and remove an arbitrary number of drives of arbitrary sizes, and it'll use the full capacity of the drives at whatever parity level you set. It has its own downsides of course, on top of being Windows-only, but it's the only FS/pool combo that has really made me think "ZFS doesn't have quite everything perfect".
There are definitely annoyances, but there are workarounds too: you can add additional vdevs to expand the whole pool, and data is striped across them; it even lets you mix and match different RAID levels if you really want to for some reason.
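For instance, striping a second raidz3 vdev into the earlier pool is one more command (disk names are placeholders):

`zpool add tank raidz3 drive25 drive26 drive27 drive28 drive29 drive30`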
There's also ongoing work to add disks to existing raidz vdevs, and I think even a semi-working PR for it now: https://github.com/openzfs/zfs/pull/15022. Hopefully that will make ZFS a little less frustrating.
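If it lands, my understanding of the proposed interface is that you'd grow a raidz vdev one disk at a time with something like the following, though the exact syntax could still change before merge (vdev and disk names are illustrative):

`zpool attach tank raidz3-0 drive31`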
As a sibling comment mentioned, using ZFS for RAID/storage pooling is cumbersome. On my NAS I use plain old ext4 with SnapRAID+MergerFS, which gives me the RAID and checksumming features while keeping the flexibility to expand the array with any combination of disks. This works rather well, and I have no need for the immediate syncing/checking I would gain with ZFS.
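Roughly what that setup looks like, if it helps anyone; paths and disk names are illustrative:

```
# /etc/snapraid.conf -- parity and checksum config
parity /mnt/parity1/snapraid.parity
content /mnt/disk1/snapraid.content
content /mnt/disk2/snapraid.content
data d1 /mnt/disk1/
data d2 /mnt/disk2/

# /etc/fstab -- mergerfs pools the data disks into one mount
/mnt/disk* /mnt/pool fuse.mergerfs defaults,allow_other,category.create=mfs 0 0
```

Then `snapraid sync` and `snapraid scrub` run from cron on whatever schedule you're comfortable with.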
That's my main issue with ZFS. It does too many things, and is too clever/magic for my taste. Many things can go wrong when relying on a monolithic system with that much complexity. I much prefer the Unix "do one thing well" approach, and mixing purpose-built tools to suit my needs, rather than using one tool for everything.
I'm just learning too, so one question: why wouldn't you consolidate and use ZFS for root as well? Even with just one disk, I understand it's still beneficial for corruption detection and other such features. Just trying to understand the case for going the btrfs route instead.
There's no real reason not to, other than that btrfs is included in the mainline kernel; and if you're not using RAID, I don't think ZFS has a fundamental advantage either. ZFS on root usually requires more than zero extra setup work, unlike btrfs.
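That said, even a single-disk pool gets you checksummed reads, and setting `copies=2` lets ZFS repair some corruption on its own at the cost of storing everything twice. A rough sketch (pool and device names are illustrative):

```
# single-disk pool; checksums detect bit rot, copies=2 gives ZFS a second copy to heal from
zpool create -O compression=lz4 -O copies=2 rpool /dev/sdb1
# periodic integrity check
zpool scrub rpool
```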
I have a "ghetto NAS" of 24 drives plugged into my server via USB, and getting a raidz3 set up on there was one command:
`zpool create tank raidz3 drive1 drive2 drive3 ....`
Doing scrubs and replacing disks are pretty easy, just using `scrub` and `replace` commands with zpool.
I haven't really felt the need to leave ZFS, at least not in regards to RAIDs; on root I still use ext4, though I might change to btrfs on my next install.