{"id":2345,"date":"2024-04-28T14:57:04","date_gmt":"2024-04-28T18:57:04","guid":{"rendered":"https:\/\/lowtek.ca\/roo\/?p=2345"},"modified":"2025-10-14T20:52:57","modified_gmt":"2025-10-15T00:52:57","slug":"getting-started-with-zfs","status":"publish","type":"post","link":"https:\/\/lowtek.ca\/roo\/2024\/getting-started-with-zfs\/","title":{"rendered":"Getting started with ZFS"},"content":{"rendered":"<p><a href=\"https:\/\/lowtek.ca\/roo\/wp-content\/uploads\/2024\/04\/openzfslogo.png\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-medium wp-image-2346\" src=\"https:\/\/lowtek.ca\/roo\/wp-content\/uploads\/2024\/04\/openzfslogo-500x131.png\" alt=\"\" width=\"500\" height=\"131\" srcset=\"https:\/\/lowtek.ca\/roo\/wp-content\/uploads\/2024\/04\/openzfslogo-500x131.png 500w, https:\/\/lowtek.ca\/roo\/wp-content\/uploads\/2024\/04\/openzfslogo-768x201.png 768w, https:\/\/lowtek.ca\/roo\/wp-content\/uploads\/2024\/04\/openzfslogo.png 800w\" sizes=\"auto, (max-width: 500px) 85vw, 500px\" \/><\/a><\/p>\n<p>When <a href=\"https:\/\/en.wikipedia.org\/wiki\/ZFS\">ZFS<\/a> first came out, it was a proprietary filesystem but it had some very interesting characteristics &#8211; at the time it&#8217;s ability to scale massively and protect your data seemed very cool. My interest in filesystems goes back to my C64 days editing floppy disks to create infinite directory listings and the like.\u00a0 Talking about filesystems reminds me of when I was a COOP student at <a href=\"https:\/\/en.wikipedia.org\/wiki\/QNX\">QNX<\/a>, they had &#8216;QFS&#8217; and meeting the developer helped de-mystify filesystems for me.<\/p>\n<p>For some reason ZFS is also linked in my memory with the &#8216;shouting in the datacenter&#8217; video. 
As best I can tell, this is likely because DTrace and ZFS both came out of Sun around the same time &#8211; see <a href=\"https:\/\/en.wikipedia.org\/wiki\/DTrace\">DTrace<\/a>.<\/p>\n<p><iframe loading=\"lazy\" title=\"Shouting in the Datacenter\" width=\"840\" height=\"630\" src=\"https:\/\/www.youtube.com\/embed\/tDacjrSCeq4?feature=oembed\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" referrerpolicy=\"strict-origin-when-cross-origin\" allowfullscreen><\/iframe><\/p>\n<p>I finally decided to fully decommission my <a href=\"https:\/\/lowtek.ca\/roo\/2009\/the-new-server\/\">old server<\/a> and the <a href=\"https:\/\/en.wikipedia.org\/wiki\/Standard_RAID_levels#RAID_5\">RAID5<\/a> array of 1TB drives. I&#8217;ve also recently been experimenting with <a href=\"https:\/\/nixos.org\/\">NixOS<\/a>, and I&#8217;ve really enjoyed that so far. I figured: why not set up a dedicated backup server? This also presented a good chance to set up and play with ZFS, which now has reliable open-source versions available.<\/p>\n<p>First I spent some time learning what I would consider ZFS basics. This <a href=\"https:\/\/www.youtube.com\/watch?v=MsY-BafQgj4\">video<\/a> was useful for me. Also, these two <a href=\"https:\/\/vsupalov.com\/zfs-basics\/\">blog<\/a> <a href=\"https:\/\/blog.victormendonca.com\/2020\/11\/03\/zfs-for-dummies\/\">posts<\/a> were good starting points.<\/p>\n<p>Since I&#8217;m using NixOS as my base operating system, I&#8217;ll be following the doc on <a href=\"https:\/\/openzfs.github.io\/openzfs-docs\/Getting%20Started\/NixOS\/index.html#installation\">setting up ZFS on NixOS<\/a>. 
While I&#8217;m not setting up my boot volume to be ZFS, it turns out you still need to do the same basic setup if you want ZFS capabilities in NixOS.<\/p>\n<p>You need to generate a unique &#8216;hostid&#8217; &#8211; the doc suggests using<\/p>\n<pre class=\"\">head -c4 \/dev\/urandom | od -A none -t x4<\/pre>\n<p>Next we need to modify \/etc\/nixos\/configuration.nix to include<\/p>\n<pre class=\"lang:default decode:true\">boot.supportedFilesystems = [ \"zfs\" ];\r\nboot.zfs.forceImportRoot = false;\r\nnetworking.hostId = \"yourHostId\";<\/pre>\n<p>Rebuild and reboot, then you can query the available zpools<\/p>\n<pre class=\"lang:default decode:true \">$ zpool status\r\nno pools available<\/pre>\n<p>Now we create a pool. I think in this step we are actually adding a bunch of devices to a vdev, which is then wrapped in a pool. Using <code>fdisk<\/code> I&#8217;m able to identify the four 1TB drives, which are all partitioned and ready to roll: sdd, sde, sdf, and sdg.<\/p>\n<pre class=\"lang:default decode:true \">$ sudo zpool create backup raidz sdd sde sdf sdg<\/pre>\n<p>This process took a short while to complete, but after it was done, running <code>sudo fdisk -l \/dev\/sdd<\/code> gave me this:<\/p>\n<pre class=\"lang:default decode:true \">Disk \/dev\/sdd: 931.51 GiB, 1000204886016 bytes, 1953525168 sectors\r\nDisk model: WDC WD10EARX-00N\r\nUnits: sectors of 1 * 512 = 512 bytes\r\nSector size (logical\/physical): 512 bytes \/ 4096 bytes\r\nI\/O size (minimum\/optimal): 4096 bytes \/ 4096 bytes\r\nDisklabel type: gpt\r\nDisk identifier: 2F1E0C4F-A95F-E948-99AC-18E8829496CD\r\n\r\nDevice          Start        End    Sectors   Size Type\r\n\/dev\/sdd1        2048 1953507327 1953505280 931.5G Solaris \/usr &amp; Apple ZFS\r\n\/dev\/sdd9  1953507328 1953523711      16384     8M Solaris reserved 1<\/pre>\n<p>It seems new partitions were created (and, I assume, initialized for ZFS), and I now have a zpool<\/p>\n<pre class=\"lang:default decode:true \">$ zpool status\r\n  pool: backup\r\n state: ONLINE\r\nconfig:\r\n\r\n\tNAME        STATE     READ WRITE CKSUM\r\n\tbackup      ONLINE       0     0     0\r\n\t  raidz1-0  ONLINE       0     0     0\r\n\t    sdd     ONLINE       0     0     0\r\n\t    sde     ONLINE       0     0     0\r\n\t    sdf     ONLINE       0     0     0\r\n\t    sdg     ONLINE       0     0     0\r\n<\/pre>\n<p>I don&#8217;t believe you can reasonably expand or shrink a <a href=\"https:\/\/en.wikipedia.org\/wiki\/ZFS#RAID_(%22RAID-Z%22)\">RAIDZ<\/a> vdev, which means you need to plan ahead for your storage needs. It&#8217;s also important to remember the guidance to keep ZFS volumes below 80% usage; beyond that level performance starts to suffer. Storage is cheap, and I believe you can have multiple vdevs in a single pool, so while a single RAIDZ vdev has limitations, ZFS offers some interesting flexibility.<\/p>\n<p>Unexpectedly, it seems the newly created ZFS filesystem is also mounted and ready to roll<\/p>\n<pre class=\"lang:default decode:true \">$ zfs list\r\nNAME     USED  AVAIL  REFER  MOUNTPOINT\r\nbackup   523K  2.55T   140K  \/backup<\/pre>\n<p>That&#8217;s not where I want to mount the volume, so let&#8217;s figure out how to move it.<\/p>\n<pre class=\"lang:default decode:true \"># First let us view the mountpoint\r\n\r\n$ zfs get mountpoint backup\r\nNAME    PROPERTY    VALUE       SOURCE\r\nbackup  mountpoint  \/backup     default\r\n\r\n# Now we can modify that value\r\n\r\n$ sudo zfs set mountpoint=\/data\/raidz backup\r\n\r\n# And check to see it changed\r\n\r\n$ zfs get mountpoint backup\r\nNAME    PROPERTY    VALUE        SOURCE\r\nbackup  mountpoint  \/data\/raidz  local\r\n<\/pre>\n<p>Cool. I&#8217;ve got a ZFS filesystem. One snag: it isn&#8217;t mounted automatically after a reboot. 
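<\/p>\n<p>Before fixing that snag, a quick aside on the pooled-storage flexibility mentioned above: my understanding is that you grow a pool by adding another vdev alongside the first, rather than by expanding the existing RAIDZ. A sketch I haven&#8217;t run &#8211; the device names here are hypothetical:<\/p>\n<pre class=\"lang:default decode:true\"># Hypothetical: stripe a second 4-drive RAIDZ vdev into the existing pool\r\n$ sudo zpool add backup raidz sdh sdi sdj sdk<\/pre>\n<p>New writes would then be spread across both vdevs. Back to the mounting snag.<\/p>\n<p>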
I can mount the pool manually:<\/p>\n<pre class=\"lang:default decode:true\">$ sudo zpool import -a\r\n<\/pre>\n<p>And digging into the <a href=\"https:\/\/nixos.wiki\/wiki\/ZFS#Importing_on_boot\">NixOS doc<\/a>, we find the configuration we need to add<\/p>\n<pre class=\"lang:default decode:true \">boot.zfs.extraPools = [ \"backup\" ];<\/pre>\n<p>This fixed me up, and ZFS is auto-mounted on reboots.<\/p>\n<p>One last configuration tweak: let&#8217;s enable scrubbing of the ZFS pool in our NixOS configuration<\/p>\n<pre class=\"lang:default decode:true\">services.zfs.autoScrub.enable = true;<\/pre>\n<p>Setting up ZFS on NixOS is very easy. Why would you want ZFS over another filesystem or storage-management system? I&#8217;ve been using <a href=\"https:\/\/www.snapraid.it\/\">snapraid.it<\/a> for a while on my main server, and I like the data integrity it brings beyond a plain RAID5 setup. The snapraid site has an interesting <a href=\"https:\/\/www.snapraid.it\/compare\">comparison matrix<\/a>. I will say that setting up ZFS RAIDZ was a lot less scary than any of my adventures using mdadm to set up a software RAID5.<\/p>\n<p>What do I see as the <a href=\"https:\/\/itsfoss.com\/what-is-zfs\/\">key strengths<\/a> of ZFS?<\/p>\n<ul>\n<li><strong>Data integrity verification and automatic repair<\/strong> &#8211; all data is checksummed, and with RAIDZ redundancy we can recover from underlying data corruption.<\/li>\n<li><strong>Pooled storage<\/strong> &#8211; something I need to explore more, but I think this will give me the flexibility to add more storage over time if needed.<\/li>\n<li><strong>Copy-on-write<\/strong> &#8211; this is about consistency of the filesystem, especially across power-failure events.<\/li>\n<\/ul>\n<p>Remember I started out with some old hardware I was repurposing? Those 1TB drives were all surprisingly in &#8216;good&#8217; shape, but with between 10 and 13 years of power-on time (some of them have manufacture dates of 2009). 
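<\/p>\n<p>If you&#8217;re also pressing decade-old drives into service, it&#8217;s worth peeking at their SMART data before trusting them. A hedged example &#8211; smartctl comes from the smartmontools package, \/dev\/sdd is one of my drives, and the exact attribute names vary by drive vendor:<\/p>\n<pre class=\"lang:default decode:true\">$ sudo smartctl -A \/dev\/sdd | grep -E 'Power_On_Hours|Reallocated_Sector_Ct'<\/pre>\n<p>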
In <a href=\"https:\/\/lowtek.ca\/roo\/2024\/replacing-a-zfs-degraded-device\/\">my next blog post<\/a> we&#8217;ll cover how ZFS handles failures as we see these ancient drives start to fail.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>When ZFS first came out, it was a proprietary filesystem but it had some very interesting characteristics &#8211; at the time it&#8217;s ability to scale massively and protect your data seemed very cool. My interest in filesystems goes back to my C64 days editing floppy disks to create infinite directory listings and the like.\u00a0 Talking &hellip; <a href=\"https:\/\/lowtek.ca\/roo\/2024\/getting-started-with-zfs\/\" class=\"more-link\">Continue reading<span class=\"screen-reader-text\"> &#8220;Getting started with ZFS&#8221;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[6,12],"tags":[],"class_list":["post-2345","post","type-post","status-publish","format-standard","hentry","category-computing","category-how-to"],"_links":{"self":[{"href":"https:\/\/lowtek.ca\/roo\/wp-json\/wp\/v2\/posts\/2345","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/lowtek.ca\/roo\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/lowtek.ca\/roo\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/lowtek.ca\/roo\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/lowtek.ca\/roo\/wp-json\/wp\/v2\/comments?post=2345"}],"version-history":[{"count":8,"href":"https:\/\/lowtek.ca\/roo\/wp-json\/wp\/v2\/posts\/2345\/revisions"}],"predecessor-version":[{"id":2540,"href":"https:\/\/lowtek.ca\/roo\/wp-json\/wp\/v2\/posts\/2345\/revisions\/2540"}],"wp:attachment":[{"href":"https:\/\/lowtek.ca\/roo\/wp-json\/wp\/v2\/media?parent=2345"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/lowtek.ca\/roo\/wp-json\/w
p\/v2\/categories?post=2345"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/lowtek.ca\/roo\/wp-json\/wp\/v2\/tags?post=2345"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}