openmediavault – Basic Installation with ZFS

With my recent server upgrades (wow, was I lucky to mostly avoid the RAMpocalypse) it is now time to deprecate my oldest running server. Until recently this was my local backup server and amazingly it’s still doing just fine; I hope I can find it a new home. I doubt anyone wants to inherit my NixOS complexity for a basic NAS device, so I looked at install options that would give the next owner a running start. TrueNAS was a good candidate, but it has fairly high hardware requirements. I then found openmediavault, which will happily run on much more modest hardware, and while ZFS isn’t supported out of the box, it’s easy to get there.

To get started go grab the latest ISO image for openmediavault. Then burn that to a USB drive so you can boot from it.
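On Linux, a couple of commands will do the burn. The device name /dev/sdX and the ISO filename below are placeholders; double-check the device with lsblk first, since dd will happily overwrite the wrong disk:

```shell
# Write the ISO to the USB stick (run as root, destructive to the target!).
# /dev/sdX and the ISO filename are placeholders -- substitute your own.
dd if=openmediavault-installer.iso of=/dev/sdX bs=4M status=progress
sync   # flush buffered writes; do NOT pull the stick before this returns
```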

Initially I was missing the second command (sync), and this tripped me up for about an hour. I did think it was suspicious that the 1.2 GB file wrote to the USB stick so fast, but I figured it was only 1.2 GB. Pulling the USB stick before it had finished writing the data failed silently, and I got part way into the install before hitting an error something like:

There was a problem reading data from the removable media. Please make sure that the right media is present

Ugh, I tried multiple versions including a plain old Debian 13 install. Silly me. When I finally ran with the sync, it took a few minutes to finish instead of seconds.

Now armed with a valid bootable USB stick, it’s time to follow the openmediavault (OMV) new user guide. Stick with it through to the first boot on your new hardware which will allow you to visit the WebUI to manage things. When OMV boots, it will output information to the console with the IP address to help you connect to the Web UI.

During the install you are prompted to create a password; this is the password for the root user. It is different from the WebUI admin password, which by default is “openmediavault”. Take the time now to change that password by clicking on the user icon in the upper right and selecting “Change Password”.

Now is a great time to explore the WebUI, and run any updates that are pending. It’ll give you a feel for what OMV is like. If you only have a single data drive, or just want to completely follow the new user guide – that’s fine, you can stop reading here and go explore.

In this blog post we will dive into enabling ZFS on OMV. If you haven’t already, you need to install the extras repository, following this part of the guide. Access the new OMV installation using ssh and log in as root; we’re going to do a scary wget piped to bash.
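At the time of writing, the documented one-liner looks like the following; verify the URL against the current guide before piping anything from the internet into bash as root:

```shell
# Install the omv-extras repository (run as root).
# Double-check this URL against the current OMV guide before running.
wget -O - https://github.com/OpenMediaVault-Plugin-Developers/packages/raw/master/install | bash
```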

Once this is done, we will have access to more plugins, including ZFS support.

We need to change the kernel we are running to one that will safely support ZFS. To manage kernels, install the “openmediavault-kernel” plugin (version 8.0.4 as of this writing). Do this via the System->Plugins menu and search for “kernel”.

Select the plugin, then click on the install icon.

Now we have the System -> Kernel menu option allowing us to manage kernels. As of the date of this post, we want to install the recommended proxmox 6.17 kernel.

Start by clicking on the icon that I’ve indicated with the red arrow. Then select the 6.17 version and the install will run; this might take a bit of time.

Very important. Once it is done, reboot.

Post reboot I was able to check Diagnostics -> System Information to confirm that I was now running the new kernel. However, revisiting System -> Kernel, the new kernel was not marked as the default, so I set it. I also took the opportunity to remove the old kernels I was not using, to avoid future problems or confusion.

The next step is to install the ZFS plugin via the System -> Plugins menu. You can follow the documentation for OMV7, but I’ll outline the steps here. Search for ZFS and install the plugin.

At the end of the install you are likely to get a “**Connection Lost**” error. This is OK; just do a hard refresh of the browser window. In Firefox on macOS this is Command-Shift-R; you may need to look up how to do it in your browser.

Under Storage we will now have a zfs entry. This will allow us to create a storage pool, and it will automatically mount it as well. You may want to read up a bit on ZFS, I’ll suggest starting with my post on it.

Before we create a pool, we’ll want to look at the available drives under Storage -> Disks. In my setup I have four 1TB drives: /dev/sdb, /dev/sdc, /dev/sdd, and /dev/sde. You may want to do a ‘quick’ wipe of these devices in case any old ZFS headers exist on the drives, which could derail the creation step later.
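If you prefer the command line over the WebUI wipe, wipefs can clear old signatures. The device names below are from my setup, and this is destructive, so double-check yours first:

```shell
# Clear old filesystem/RAID/ZFS signatures (run as root, destructive!).
# Device names are from my machine -- confirm yours under Storage -> Disks.
for disk in /dev/sdb /dev/sdc /dev/sdd /dev/sde; do
  wipefs --all "$disk"
done
```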

Flipping back to the Storage -> zfs -> Pools we will add a pool.

This is fairly straightforward but does encompass several ZFS concepts. I did find that after hitting save I experienced a time-out (my machine is maybe slow?), and the plugin doesn’t accommodate slow responses. Despite the time-out, the ZFS filesystem was created and mounted, and it appears under Storage -> File Systems.
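For the curious, what the plugin automates is roughly equivalent to the following zpool command. The pool name “tank” and the raidz1 layout are my assumptions; pick whatever suits your drives:

```shell
# Roughly what the plugin does for four drives in a raidz1 layout.
# "tank" is a placeholder pool name.
zpool create tank raidz1 /dev/sdb /dev/sdc /dev/sdd /dev/sde
zpool status tank   # verify the pool shows as ONLINE
```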

We’re close: we have a nice resilient filesystem on a server, but we don’t yet expose any of it to make this Network Attached Storage. Let’s next create a user, which will make it easy to grant read/write access to remote files.

Then we will create a network share, which declares our intent to share a given filesystem with one of the services that provide network visibility. Visit the Storage -> Shared Folders menu and we will create a share.

This process will create a sub-folder on the ZFS filesystem we created. We can check this by logging in over ssh and looking at the mount point.
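Over ssh, a couple of commands confirm the dataset and the new folder exist; “tank” and “share” are placeholders for your pool and shared folder names:

```shell
zfs list              # list datasets and their mountpoints
ls -l /tank/share     # the sub-folder OMV created for the shared folder
```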

During some of this configuration OMV will prompt you to apply the changes. Each time it prompts, hit the check mark, assuming you want to keep the pending changes. Clicking on the triangle will show you the list of changes pending.

The last step to make this file share visible on the network is to pick a service to offer it up; we will pick SMB/CIFS as it is one of the most common and compatible options.

First enable the service in the settings.

Then create the share.

I will reference the OMV8 documentation as a good guide to creating this share, specifically the set of options you want to enable which don’t easily fit into a single screenshot.

At this point we now have a Samba share on the network. On macOS you can connect from the Finder app by hitting Command-K, entering smb://omv.lan (or whatever your machine is named), and then providing the user credentials defined above.
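From a Linux client you can verify the share with smbclient (part of the samba tools); omv.lan, share, and myuser below are placeholders for your host, share, and user names:

```shell
smbclient -L //omv.lan -U myuser      # list the shares the server offers
smbclient //omv.lan/share -U myuser   # open an interactive session on one
```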

There is lots more you can do with OMV: set up a dashboard, enable SMART monitoring, add more plugins. If you add the docker (compose) plugin, that unlocks so many more possibilities. There is reasonable documentation and a forum, so lots of places to seek out help.

NixOS 25.11 – How to Upgrade

Recently the NixOS 25.11 release was announced. As my new server is based on NixOS, it was time to do an upgrade and stay current. Moving from 25.05 to 25.11 turns out to be fairly straightforward, and you can uplift multiple versions without much pain or risk. I was also able to update directly from 24.11 to 25.11 in a single step on an old hacky desktop I was experimenting with.

Upgrading is very simple. First, figure out which version you are on.
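One command tells you the current version:

```shell
nixos-version
# prints something like 25.05.xxxxxx.<hash> (Warbler)
```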

Then it’s just two command lines to do the upgrade.
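Per the upgrade instructions, you point the nixos channel at the new release and rebuild. I’m using the small channel here; drop the -small suffix for the full one:

```shell
# Run as root: switch the channel to 25.11 (small) and upgrade to it.
nix-channel --add https://channels.nixos.org/nixos-25.11-small nixos
nixos-rebuild switch --upgrade
```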

A full reboot cycle is a good idea to make sure all is well, but shouldn’t be strictly required. The upgrade instructions are clear and easy to follow.

You’ll notice that I’ve selected the small channel, which is suggested for servers. It will not contain some of the desktop components, but has the benefit of being updated sooner than the “full” version. Another server I had was on the 25.11 full channel, and I used the same approach above to switch it to the small channel.

NixOS has a very predictable upgrade cadence: twice a year, in May and November. The downside is they very quickly deprecate the old version; 25.05 will stop getting regular updates at the end of December this year. Upgrading is so safe and easy that there really isn’t any excuse to be back level.

You can run into some trouble if you have a very small boot drive, or haven’t done any “garbage collection” of older versions. I would recommend adding automatic updates and housekeeping tasks to your /etc/nixos/configuration.nix.
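A sketch of what I mean, for /etc/nixos/configuration.nix; the schedule and the retention window are my choices, so tune them to taste:

```nix
# Keep the system current and clean up old generations so the boot
# drive doesn't fill up. "weekly" and "14d" are my preferences.
system.autoUpgrade.enable = true;
nix.gc = {
  automatic = true;
  dates = "weekly";
  options = "--delete-older-than 14d";
};
```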

Depending on what has changed, you may get some warnings when you do the update, typically about a given setting having changed names. These are magically handled for you, but you should take the time to update your config to use the latest names to prevent future upgrade pain.

While we’re on the topic of NixOS configuration, I did learn something about the /etc/nixos/hardware-configuration.nix file. Typically this is set up once during the install, and you can mostly ignore it after that. However, you can re-generate that file using nixos-generate-config and it will update the file, but not overwrite your /etc/nixos/configuration.nix file. Handy if you have hardware changes like adding a USB drive you want to be permanently mounted.

However, as I found out, the nixos-generate-config command can be a “foot gun” and result in a system that no longer boots cleanly. The recommendation is to trust but verify, because capturing your hardware setup can be nuanced and you don’t want boot time problems you only detect much later.

NixOS + Docker with MacVLAN (IPv4)

I continue to make use of the docker macvlan network support as it allows me to treat some of my containers as if they are virtual machines (VMs). Using this feature I can assign an IP address that is distinct from my host, but is still just a container running on the host. I’ve written about creating one, and expanding it.

As I’m now building out a new server and have selected NixOS as my base, I need to make some changes to how I set up the docker macvlan. This blog post captures those changes.

While NixOS supports the declaration of containers, I’m not doing that right now by choice. It’ll make my migration easier and I can always go back and refactor. Thus there are just two things I need to include in my NixOS configuration:

  1. Enable docker support
  2. Modify the host network to route to the macvlan network

The first (enable docker support) is so very easy with NixOS. You need a single line added to your /etc/nixos/configuration.nix:
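That single line:

```nix
# Enable the docker daemon.
virtualisation.docker.enable = true;
```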

You probably want to modify your user to be in the “docker” group, allowing direct access to docker commands vs. needing to sudo each time.
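In NixOS terms that’s one more line; “myuser” is a placeholder for your own username:

```nix
# Add your user to the docker group so sudo isn't needed for docker commands.
users.users.myuser.extraGroups = [ "docker" ];
```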

There is a third thing we need to do: create the docker macvlan network. I don’t have this baked into my NixOS configuration because I was too lazy to write an idempotent version and figure out where in the start-up sequence to run it. It turns out to be just a one line script:
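The one-liner, with values from my network (192.168.1.0/24, gateway .1, four container IPs at 192.168.1.64/30, physical NIC enp1s0, network name macvlan0); adjust each of these for your setup:

```shell
# Create the macvlan network once; docker persists it across reboots.
docker network create -d macvlan \
  --subnet=192.168.1.0/24 \
  --gateway=192.168.1.1 \
  --ip-range=192.168.1.64/30 \
  -o parent=enp1s0 macvlan0
```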

Docker will persist this network configuration across reboots.

If you stop here, you will be able to create containers with their own IP address. I pass along these two docker command line options to create a container with its own IP:
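For example (the network name macvlan0, the IP, and the nginx image here are stand-ins from my setup):

```shell
# Attach the container to the macvlan network with a fixed IP.
docker run -d --network macvlan0 --ip 192.168.1.64 nginx
```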

The docker macvlan network I’ve defined has 4 IPs reserved, but you can specify a larger ip-range when you create the docker macvlan network.

However, if you did stop here, you would not be able to reach the container running on 192.168.1.64 from the host. This is the second change to our Nix configuration (modify the host network to route to the macvlan network). In my original post I used a script to create the route from host to container, as this wasn’t persistent I needed to run that script after every boot.

One way to do a similar thing in NixOS is to create a systemd service. I explored this and did figure it out; while it worked, it wasn’t the best way to do it. NixOS has networking.macvlans, which is a more NixOS-y way to solve the problem. The very helpful community helped me discover this.
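A sketch of that configuration, assuming a host shim interface I’ve named macvlan-shim on physical NIC enp1s0, routing my four container IPs (192.168.1.64/30); all of those names and numbers are from my setup:

```nix
# Create a macvlan shim interface on the host and route the docker
# container range through it, so host <-> container traffic works.
networking.macvlans.macvlan-shim = {
  interface = "enp1s0";   # the NIC the docker macvlan is parented to
  mode = "bridge";
};
networking.interfaces.macvlan-shim.ipv4.routes = [
  { address = "192.168.1.64"; prefixLength = 30; }
];
```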

If you dig into the implementation (createMacvlanDevice, configureAddrs), you can get some insight into how this maps onto basically the same thing my boot time script did.

This feels like much less of a hack than using a script. Both work, but the networking.macvlans approach is nice and clean. I should probably do the work to declare the docker macvlan inside my NixOS configuration to make this complete, but that’s a task for another day.