Hi everyone, I’ve been working on my homelab for a year and a half now, and I’ve tried several approaches to managing NAS storage and selfhosted applications. My current setup is an old desktop computer that boots into Proxmox, which runs two VMs:
- TrueNAS Scale: manages storage, shares and replication.
- Debian 12 w/ docker: for all of my selfhosted applications.
The applications connect to the TrueNAS VM’s storage via NFS. I have two identical HDDs in a mirror, another single HDD with no redundancy (which is fine, because the data it contains is non-critical), and an external HDD that I want to use for replication, or some other use I haven’t decided on yet.
Now, the issue is the following: I’ve noticed that TrueNAS flags the HDDs as Unhealthy and has complained about checksum errors. It also turns out that it can’t run S.M.A.R.T. checks, because instead of passing through an HBA, I’m passing the entire HDDs by ID to the VM. I’ve recently read that passing virtualized disks to TrueNAS is discouraged, as data corruption can occur. Lately I was also having trouble with a selfhosted Gitea instance, where data (apparently) got corrupted and git was throwing errors on fetch or pull. I don’t know if this is related or not.
Now the thing is, I have a very limited budget, so I’m not keen on buying a dedicated HBA on a hunch. Is it really needed?
I mean, I know I could run TrueNAS directly, instead of using Proxmox, but I’ve found TrueNAS to be a pretty crappy hypervisor (IMHO) in the past.
My main goal is to be able to manage the data used by my selfhosted applications separately. For example, I want to be able to access Nextcloud’s files even if the docker instance is broken. But maybe this is just an irrational fear, and I should instead back up the entire docker instances and hope for the best, or maybe I’m just misunderstanding how this works.
In any case, I have some data that I want to store and reliably archive, and I don’t want the docker apps to have too much control over it. That’s why I went with the current approach, and it has allowed for very granular control. But it’s also a bit more cumbersome: every time I want to selfhost a new app, I need to configure datasets, permissions and NFS share mounts.
Is there a simpler approach to all this? Or should I just buy an HBA and continue with things as they are? If so, which one should I buy (considering a very limited budget)?
I’m thankful for any advice you can give and for your time. Have a nice day!
I hope you have backups
You should never pass disks through to a VM like that. The SMART data isn’t passed through, so ZFS has no way of knowing the disks’ status. It also has more overhead than just passing through a PCIe device.
I would strongly recommend that you fix this ASAP
Yes, that’s why I posted this question, and I immediately powered the entire NAS off to avoid any further damage. It’s currently still powered off, until I find the best way to move forward. What I’m afraid of is that if I try to import the pools that were managed by the TrueNAS VM into a bare-metal TrueNAS install, or into Proxmox, it won’t work correctly, or I could lose data.
I would just pick up a simple PCIe SATA card. They are fairly inexpensive and ideal for what you are doing.
Going forward, make sure you keep backups of important data.
Yeah, it won’t happen again. I think I’ll migrate to managing ZFS directly from Proxmox, and then handle SMB in a VM or something, because I’m worried about compatibility with a PCIe SATA card, as the system’s pretty dated.
Fair enough
I would probably shy away from passing the raw disk. There are a few dozen ways to skin that cat, but in the end I would probably just mount the disk through NFS, SMB, whatever it takes. Reading that SMART data is paramount in your situation. You could have a bad cable and never know it.
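For example, from whatever box actually owns the disks, something like this (device name is just a placeholder):

```
smartctl -t short /dev/sda          # start a short self-test
smartctl -a /dev/sda                # full report once the test finishes
smartctl -A /dev/sda | grep -i crc  # a climbing UDMA_CRC_Error_Count
                                    # usually means a bad cable, not a bad disk
```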
You could run a couple of VMs with K8s and use Longhorn for the data; it’s capable of backing up to S3-compatible storage.
For my home stuff at the moment I’m running Unraid with BTRFS and a parity disk. On the first of every month I run a scrub; if there were any corruption, it would find it and alert me. It’s slow as balls, but more protection is better than less. You can also buy some recycled disks and keep an offline store. I don’t love the recycled disks, but for backup purposes, on drives that aren’t running 24x7, they’re better than nothing.
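Outside of Unraid’s scheduler, the scrub itself boils down to roughly this (mount point is an example):

```
btrfs scrub start /mnt/pool    # runs in the background
btrfs scrub status /mnt/pool   # progress, plus any corruption it found
```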
Proxmox supports ZFS natively with management in the WebUI. So you could get rid of TrueNAS entirely and not need to deal with HBA pass-through or anything.
You also wouldn’t need NFS or have to deal with shares, as the data is available directly to Proxmox Containers via bind mounts.
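As a rough sketch (container ID and paths are made up):

```
# Bind-mount the host dataset /tank/nextcloud into container 101;
# the data stays on the Proxmox host, no network share involved
pct set 101 -mp0 /tank/nextcloud,mp=/mnt/data
```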
Okay, if Proxmox can handle all that, I’ll be glad to ditch TrueNAS. However, I’m afraid I won’t know how to migrate; I’ve found a Reddit thread about someone who tried to do the same thing (I think) and accidentally corrupted their pools. Skipping NFS shares would be a big improvement for me, but I’m very unfamiliar with bind mounts. If I understand correctly, you can specify directories that live on the Proxmox host, and they appear inside the VM, right? How does this compare to using virtual storage? Also, how can I replicate the ZFS pools to an external machine? In any case, thank you for the info!
Migration should be as simple as importing the existing ZFS pools into the Proxmox OS. Having backups of important data is critical, though.
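Roughly speaking, from the Proxmox shell, with the TrueNAS VM shut down for good (pool name is a placeholder, and have backups first):

```
zpool import            # list pools this host can see
zpool import -f tank    # -f because TrueNAS was the pool's last owner
zpool status tank       # confirm it imported and is healthy
```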
> If I understand correctly, you can specify directories that live on the Proxmox Host, and they appear inside the VM, right?
Inside a Container, yep. VMs can’t do bind mounts, and would need to use NFS to share existing data from the host to inside the VM.
> How does this compare to using virtual storage?
Like a VM virtual disk? Those are exclusive to each VM and can’t be shared, so if you want multiple VMs to access the same data then NFS would be needed.
But containers with bind mounts don’t have that limitation and multiple containers can access the same data (such as media).
> Also, how can I replicate the ZFS pools to an external machine?
ZFS replication would do that.
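A minimal sketch, assuming the other machine also runs ZFS (all names are placeholders):

```
# Initial full replication
zfs snapshot -r tank/data@repl1
zfs send -R tank/data@repl1 | ssh backup-host zfs receive -F backup/data

# Later runs only send what changed since the previous snapshot
zfs snapshot -r tank/data@repl2
zfs send -R -i tank/data@repl1 tank/data@repl2 | ssh backup-host zfs receive -F backup/data
```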
You should have all your data stored separately; it shouldn’t be locked inside containers. And using a VM hosted on the same device just to serve the data is a little convoluted.
I personally don’t like TrueNAS - I’m not a hater, it just doesn’t float my boat (but I suspect someone will rage-downvote me 😉)
So, as an alternative approach, have a look at OpenMediaVault
It’s basically a Debian-based NAS OS designed for DIY systems: it serves the local drives, but it also runs Docker, so it feels like it might be a better fit for you.
Debian supports ZFS, so why the extra hassle of TrueNAS? It seems like a lot of extra work to add a VM when you could just use ZFS in Debian. Or install something like MinIO in Debian and use that to manage the data in S3-style buckets; again, no VM needed.
Okay, no VM, understood, but I do want to use a GUI for ZFS, because I’m basically a noob.
Late to the party, but if you really want a GUI for ZFS, 45Drives has a ZFS plugin for Cockpit that works quite well.
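From memory, the install is roughly this on a Debian-based host (double-check the project’s README, as the exact steps may have changed):

```
apt install cockpit
git clone https://github.com/45drives/cockpit-zfs-manager.git
cp -r cockpit-zfs-manager/zfs /usr/share/cockpit
# then browse to https://<host>:9090 and look for the ZFS page
```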
I was in somewhat of a similar situation, except some of the SATA ports were bad, so I bought a cheap PCIe SATA card and passed that through instead. That still had some issues, so I ended up moving TrueNAS to its own dedicated machine and haven’t had any problems since.
Just my two cents.
To anyone reading, do NOT get a PCIe SATA card. Everything on the market is absolute crap that will make your life miserable.
Instead, get a used PCIe SAS card, preferably LSI-based. These should run about $50, and you may (depending on the model) need a $20 cable to connect it to SATA devices.
I have a cheap PCIe card I bought and it works fine.
It cost like $15 and has been rock solid. What is the issue?
The one I had would frequently drop the drives, wreaking havoc on my (software) RAID5. I later found out that it was splitting 2 ports into 4 in a way that completely broke spec.
@thelemonalex I usually find that the SATA cable or connector is bad.
Personally, I use Proxmox as the host and share bulky NFS mounts with each VM, like Immich and Plex/Jellyfin. For Gitea and other small VMs, I use a VM virtual drive and back it up periodically.
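The backup part is one command (or a schedule in the WebUI); something like this, with a made-up VM ID and storage name:

```
# Snapshot mode keeps the VM running while it's backed up
vzdump 102 --storage backups --mode snapshot --compress zstd
```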
So if I understand correctly, you’re managing the storage directly within Proxmox, instead of using a VM for that, right? Are the tools good? Does it support ZFS replication, SMB and things like that? Edit: I’ll also check the SATA cables, thanks!
@thelemonalex I use the native Linux NFS server and manage shares using the /etc/exports file.
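A share is just a line in that file; for example (path and subnet are placeholders):

```
echo '/tank/media 192.168.1.0/24(rw,sync,no_subtree_check)' >> /etc/exports
exportfs -ra   # apply the new export without restarting anything
```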
For SMB, I use a VM with Samba and mount a few folders over NFS from Proxmox (it just gives me a static IP in case I decide to change something on the Proxmox side).
I manage ZFS on the Proxmox host. I don’t think you can export a ZFS dataset to be managed by a VM (other than via drive passthrough).
My infra has one bulky Proxmox node with 2 HDDs (ZFS) and 3 nodes with small SSD/NVMe drives in the cluster.
If you pass a whole raw disk, not a virtualized one, then TrueNAS should not complain. I don’t know if you can do that in Proxmox; I haven’t tried.
Personally, I’d get rid of TrueNAS. Even if Docker is down, the VM with the data is still up and accessible through anything running on it, like scp via SSH.
If you pass the disk that way, the metadata isn’t passed through, including SMART data, so it’s a ticking time bomb.