Over the years I’ve had the expected number of hard drive failures. Some have been more catastrophic because I didn’t have a good backup strategy in place; others felt avoidable, if only I’d paid attention to the warning signs.
My current setup for data duplication is based on SnapRAID, a non-traditional RAID solution. It allows mixed drive sizes, and replication happens by regularly running the sync operation. Mine runs daily: files are synced across the drives, and a data validation pass (a scrub) runs from time to time as well. This means that while I might lose up to 24 hours of data if the primary drive fails, I put less wear on the parity drive and I get assurance that silent file corruption hasn’t crept in.
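As an illustration, a daily sync plus a periodic scrub can be scheduled with cron. The times and the scrub percentage below are assumptions for the sketch, not my exact setup:

```shell
# /etc/cron.d/snapraid -- hypothetical schedule, adjust to taste
# Sync the array every night at 03:00
0 3 * * * root /usr/bin/snapraid sync
# Scrub 8% of the array every Sunday to catch silent corruption
0 5 * * 0 root /usr/bin/snapraid scrub -p 8
```

Scrubbing a small percentage each week means the whole array gets verified over a couple of months without hammering the drives.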
SnapRAID is a poor fit when you have many small files or frequently changing files; it is ideal for backing up media like photos or movies. To handle the more rapidly changing data I’ve got an SSD for storage. I haven’t yet had an SSD fail on me, but that is sure to happen at some point: Backblaze is already reporting some concerning failure-rate numbers. Couple that with the fact that my storage SSD started throwing errors the other day, and only a full power cycle of the machine brought it back. It’s fine now, but for how long? Time to set up a mirror.
For this storage I’m going back to traditional RAID. The SSD is a 480GB drive, and thankfully prices have dropped to easily under $70. This additional drive now fills all 6 of the SATA ports on my motherboard; the next upgrade will need to be a SATA port expansion card. I’ve written about RAID a few times here.
I’ve moved away from specifying drives as /dev/sdbX because these values can change. Even adding this new SSD caused the drive that was at /dev/sdf to move to /dev/sdg, allowing the new drive to take /dev/sdf. My /etc/fstab is now set up using /dev/disk/by-id/xxx because these identifiers are persistent. Most of the disk utilities understand this format just fine, as you can see in this example with fdisk.
```
$ sudo fdisk -l /dev/sdf
Disk /dev/sdf: 447.1 GiB, 480103981056 bytes, 937703088 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

$ sudo fdisk -l /dev/disk/by-id/ata-KINGSTON_SA400S37480G_50026841D62B77E8
Disk /dev/disk/by-id/ata-KINGSTON_SA400S37480G_50026841D62B77E8: 447.1 GiB, 480103981056 bytes, 937703088 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
```
Granted, working with /dev/disk/by-id is a lot more verbose, but that id will not change if you re-organize the SATA cables.
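The by-id entries are just symlinks to whatever /dev/sdX name the kernel assigned at boot, so you can always resolve one to check. The fstab line below is a sketch: the partition suffix, mount point, and options are assumptions, not my actual entry:

```shell
# Resolve a persistent id to the current kernel device name
readlink -f /dev/disk/by-id/ata-KINGSTON_SA400S37480G_50026841D62B77E8

# Example /etc/fstab entry using the persistent id
# (hypothetical mount point and options)
# /dev/disk/by-id/ata-KINGSTON_SA400S37480G_50026841D62B77E8-part1  /mnt/ssd  ext4  defaults,nofail  0  2
```

The nofail option is worth considering here so a missing drive doesn’t stop the machine from booting.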
Let’s get going on setting up the new drive as a mirror for the existing one. Here’s the basic set of steps:
- Partition the new drive so it is identical to the existing one
- Create a RAID1 array in degraded state
- Format and mount the array
- Copy the data from the existing drive to the new array
- Un-mount both the array and the original drive
- Mount the array where the original drive was mounted
- Make sure things are good – the next step is destructive
- Add the original drive to the degraded RAID1 array making it whole
It may seem like a lot of steps, and some of them are scary, but on the other side we’ll have a software RAID protecting the data. The remainder of this post covers the details of those steps.
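The steps above can be sketched roughly as follows. Everything here is illustrative: the device names /dev/sdX (old drive) and /dev/sdY (new drive), the array name /dev/md0, and the mount points are assumptions, so verify each against your own system before running anything:

```shell
# 1. Copy the partition table from the existing drive to the new one
sudo sfdisk -d /dev/sdX | sudo sfdisk /dev/sdY

# 2. Create a RAID1 array in degraded state, using only the new drive;
#    "missing" is a placeholder for the second member
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdY1 missing

# 3. Format and mount the array
sudo mkfs.ext4 /dev/md0
sudo mount /dev/md0 /mnt/raid

# 4. Copy the data across, preserving permissions, hard links, ACLs, xattrs
sudo rsync -aHAX /mnt/ssd/ /mnt/raid/

# 5-6. Un-mount both, then mount the array where the original drive was
sudo umount /mnt/raid /mnt/ssd
sudo mount /dev/md0 /mnt/ssd

# 7-8. Only once you've verified the data: add the original drive,
#      which wipes it and rebuilds the mirror onto it
sudo mdadm --add /dev/md0 /dev/sdX1
```

Watching /proc/mdstat after the final step shows the rebuild progress as the array becomes whole.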
Continue reading “Ubuntu adding a 2nd data drive as a mirror (RAID1)”