How To: Add 2nd drive to LUKS on Ubuntu

One of my work machines runs Ubuntu, and to protect the data stored on it the file system is encrypted. The encryption is LUKS based and is applied to a filesystem at creation time, so for the system drive it has to be set up at install time. In my case encryption was an option built into the installer I used.

After running the system for a while I wanted to add a 2nd drive for additional storage and backup. One solution would be to follow this post or this one; both use a key file stored on the first drive to open the second. Ideally I want the passphrase supplied once at boot to unlock both drives.
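For reference, the keyfile approach those posts describe boils down to something like this: generate a keyfile on the (already encrypted) first drive, add it as an additional key on the new drive, and point /etc/crypttab at it. The paths and device name here are just placeholders:

$ sudo dd if=/dev/urandom of=/root/keyfile bs=512 count=4
$ sudo chmod 0400 /root/keyfile
$ sudo cryptsetup luksAddKey /dev/sdXn /root/keyfile

Then in /etc/crypttab the keyfile path replaces "none":

data_crypt UUID=<uuid-of-new-partition> /root/keyfile luks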

It seems from searching around that this can be done, but it isn’t clear if hacking the stock scripts is needed or not. I also found the posts to be somewhat lacking step-by-step details. So I’ll try to provide a better how-to here.

Phase 1 – gather some data about our current system.

First, determine which disk we want to change. Be very careful here: modifying the wrong physical disk could be very bad.

$ sudo fdisk -l

Disk /dev/sda: 250.1 GB, 250059350016 bytes
255 heads, 63 sectors/track, 30401 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xe3e5464a

Device      Boot      Start        End        Blocks   Id  System
/dev/sda1   *             1         34       273073+   83  Linux
/dev/sda2                35      30401    243922927+   83  Linux

Disk /dev/sdb: 250.1 GB, 250059350016 bytes
255 heads, 63 sectors/track, 30401 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x51e1dd3f

Device      Boot      Start        End        Blocks   Id  System
/dev/sdb1                 1      30401     244196001    7  HPFS/NTFS

So the plan is to change /dev/sdb into a new encrypted volume.

Let’s now look at the existing encrypted file system to determine how it is configured. I happen to know mine is called lvm_crypt; you should be able to sort this out by looking in /dev/mapper.
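For example, a quick listing shows the active mapping names:

$ ls /dev/mapper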

$ sudo cryptsetup status lvm_crypt
/dev/mapper/lvm_crypt is active:
cipher: aes-xts-plain
keysize: 512 bits
device: /dev/sda2
offset: 4040 sectors
size: 487841815 sectors
mode: read/write

We’ll want to mimic the cipher and keysize to keep things at the same security level.

Phase 2 – creating an encrypted filesystem

In my experience the partition type doesn’t matter, but for completeness we’ll repartition the drive as a Linux partition. Again, when doing this be very careful that you are specifying the correct disk – it will destroy data.

$ sudo fdisk /dev/sdb

WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
switch off the mode (command 'c') and change display units to
sectors (command 'u').

Command (m for help): d
Selected partition 1

Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-30401, default 1):
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-30401, default 30401):
Using default value 30401

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

Now we have a newly partitioned disk. The above may seem somewhat cryptic, but we’re simply removing the one existing partition and creating a new one spanning the whole disk. The default fdisk partition type is Linux, which suits our needs.

Now we create the encrypted filesystem on the new partition, supplying additional parameters to match the key size and cipher of the system volume.

$ sudo cryptsetup luksFormat /dev/sdb1 --key-size=512 --cipher=aes-xts-plain

WARNING!
========
This will overwrite data on /dev/sdb1 irrevocably.

Are you sure? (Type uppercase yes): YES
Enter LUKS passphrase:
Verify passphrase:

I used the same passphrase as the system volume for convenience (and the hope that I could type it in once on boot).
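To double-check that the new header really does match the system volume, luksDump will print the cipher and key size:

$ sudo cryptsetup luksDump /dev/sdb1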

Now we open it and give it a /dev/mapper name, and then format the volume.

$ sudo cryptsetup luksOpen /dev/sdb1 data_crypt
$ sudo mkfs.ext4 /dev/mapper/data_crypt

At this point we could get paranoid and fill the new volume with random data to prevent any latent zeros on the disk from reducing the set of data an attacker would need to examine. I’m not that paranoid about this system.
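For the record, if you did want to do that, one common approach is to write zeros through the opened mapping before running mkfs (or re-run mkfs afterwards); the zeros land on the raw disk as ciphertext, which is indistinguishable from random data. Note that dd will run until the device is full and then exit with a "no space left on device" error, which is expected:

$ sudo dd if=/dev/zero of=/dev/mapper/data_crypt bs=1M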

Phase 3 – mounting the encrypted filesystem at boot time

We’ll be messing with a couple of files, /etc/crypttab and /etc/fstab, plus the initramfs. My working theory is that /etc/crypttab is used to unlock the encrypted devices and /etc/fstab is then used to mount the resulting filesystems (in that order). The initramfs is stored on the unencrypted boot volume and contains a snapshot of the configuration files we need to bootstrap. I found this post somewhat helpful in figuring this part out.

First we use blkid to determine the UUIDs.

$ sudo blkid /dev/sda1
/dev/sda1: UUID="a7357d62-71ad-47d5-89cb-fd0f42576644" TYPE="ext4"
$ sudo blkid /dev/sda2
/dev/sda2: UUID="e4ff5a5f-39f7-4f3e-a45e-737229d95e10" TYPE="crypto_LUKS"
$ sudo blkid /dev/sdb1
/dev/sdb1: UUID="06114da2-138f-401c-9c84-d4a2e6e83bd1" TYPE="crypto_LUKS"

So we add the following line to /etc/crypttab:

data_crypt UUID=06114da2-138f-401c-9c84-d4a2e6e83bd1 none luks

And we now add one line to /etc/fstab:

/dev/mapper/data_crypt        /data        ext4        defaults        0        2
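One easy thing to forget: the mount point itself has to exist, so create it if it isn’t already there:

$ sudo mkdir /data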

Before we reboot, we need to update the initramfs so these configuration changes will be seen at boot time.

$ sudo update-initramfs -u
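If you want to confirm the updated configuration actually made it into the new image, lsinitramfs can list its contents (the exact paths inside the image may vary by release):

$ lsinitramfs /boot/initrd.img-$(uname -r) | grep -i crypt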

Assuming all went as planned, you’re done. Reboot and test it out.

My journey wasn’t “as planned”; I made several silly mistakes: providing the wrong UUID, which caused a failure to open the encrypted volume, and using the wrong name in fstab. Both were easy to diagnose by reading the error messages (and logs) carefully and walking through the steps manually. “Measure twice, cut once” has an application here.
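A sanity check that would have caught both of these before rebooting is to exercise the new entries by hand; on Ubuntu something like the following should unlock the volume from /etc/crypttab and then mount it via /etc/fstab:

$ sudo cryptdisks_start data_crypt
$ sudo mount /data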

I wasn’t successful in getting down to a single password entry at boot; I’m prompted twice for the passphrase, though entering it twice is easy enough to live with. I’m sure I could crawl in and modify the boot scripts to remember and re-try the passphrase, but that would leave a non-standard configuration and cause upgrade pain in the future. Not worth it for a system I rarely reboot.

Phase 4 – bonus marks, testing recovery via LiveCD

I wanted to verify that I could recover from a failure resulting in an inability to boot the system from the hard disk.

This turned out to be really easy. Boot the Live CD, then unlock the volumes:

$ sudo cryptsetup luksOpen /dev/sda2 lvm_crypt
$ sudo cryptsetup luksOpen /dev/sdb1 data_crypt

Now mount them. My system volume is on LVM, and the new volume is just plain old ext4. I found a helpful post on mounting LVM volumes from rescue mode:

$ sudo lvm vgscan -v
$ sudo lvm vgchange -a y
$ sudo lvm lvs --all
$ sudo mount /dev/mapper/ubuntu-root /mnt

Mounting the ext4 partition is simply:

$ sudo mount /dev/mapper/data_crypt /mnt2
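If /mnt2 doesn’t exist in the live environment (it usually won’t), create it first:

$ sudo mkdir /mnt2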

That’s it, a second volume added to an existing LUKS system – and confidence we can mount both volumes from a LiveCD in the case of failure.

When Ubuntu fails

I’ve been busy IRL, so posting here has taken a backseat to other things, and I haven’t had a lot of time to tinker either. This is an old draft I had kicking around that I’ve cleaned up a bit.

Yes, I’m guilty of running for ‘weeks’ with a pending reboot required, which is probably not helping the situation. I’ve probably also had several power failures and the like with the system in a suspended state. Still, I didn’t expect my Ubuntu system to get into the state it did.

After rebooting, the system drive would no longer boot and I was dumped into the (initramfs) BusyBox shell:

mount: mounting /dev on /root/dev failed: No such file or directory
mount: mounting /sys on /root/sys failed: No such file or directory
mount: mounting /proc on /root/proc failed: No such file or directory
Target Filesystem doesn't have /sbin/init.
No init found. Try passing init=bootarg.

BusyBox v1.10.2 (Ubuntu 1:1.10.2-2ubuntu7) built-in shell (ash)
Enter 'help' for a list of built-in commands.

(initramfs)

OK, I think, so there is some filesystem issue with my boot drive – booting a live CD version of Ubuntu should give me the tools to fix it. It turns out the answer was no: the live CD wouldn’t help me either. Sigh, this is the type of thing I’d expect of Windows Vista, not Ubuntu.

Off to the forums, where I turned up a post showing others have had the same issue, along with the solution. From here it was a simple matter of booting the Ubuntu live CD to download a copy of SLAX, burning it to CD, then booting from the new SLAX CD to repair the ext4 filesystem. Good thing I had an Ubuntu live CD around.

Once you’re booted into SLAX, start a root shell and find the volume:

root@slax:~# fdisk -l

This will list all of the drives (if you have more than one) and the partitions on them. Next it’s simply a matter of running the filesystem check and repair command on the correct partition:

root@slax:~# fsck /dev/sda2
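If there are a lot of problems to confirm, fsck’s -y option will answer yes to every prompt for you:

root@slax:~# fsck -y /dev/sda2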

Either way, say yes to fixing the problems. Once the check completes, simply reboot back into a working system.

Review: OCZ Vertex 3 120G SSD

I’m not entirely certain which event triggered my gear lust for a solid state drive (SSD); it was probably a mix of Jeff Atwood’s post, TechReport’s storage section, and falling prices bringing smaller SSDs below the $100 price point. Whatever it was, I couldn’t really shake the idea of having an SSD in my work laptop – so I decided to get one.

Initially I had thought that a 60G-64G drive would fit the bill, being under the $100 price point and just big enough to hold the OS plus my Lotus Notes mail installation. After reviewing benchmarks and reviews, I decided to focus on the 120G size – in part due to a general recommendation that 60G is a bit small for most people, and because the benchmark numbers on the 120G models are a bit better. The price was higher, but still within a very reasonable budget as SSDs approach $1 a gig. The TechReport comparison of 120G-128G drives helped me narrow my choice down to the OCZ Vertex 3.

While the Vertex 3 has been on the market for a year, it still ranks as one of the fastest drives available. There were some issues with the SandForce SF-2281 controller, but firmware 2.15 is reported to be solid.

My laptop was running a 500G SATA2 Toshiba drive, configured as a single large partition running Windows 7. I had no interest in re-installing from scratch, so my approach was to clone the working system onto the smaller drive. There are likely plenty of ways to do this; I easily found a blog post describing one, roughly followed those steps, and will document exactly what I did here.

Step 1) Shrink the partition on the big hard drive to a bit less than the formatted capacity of the SSD. After some reading I was initially hesitant to use GParted for this, as it seemed some folks had had problems with Windows 7 and GParted; Windows 7 also has a built-in partition resize capability.

I ran into several issues trying to use the built-in Windows 7 functionality. First up were some unmovable files; even after turning off virtual memory and system restore, I still had problems. The Event Viewer helped identify Chrome as holding onto some unmovable files, and then I hit what I believe were unmovable NTFS metafiles blocking my ability to shrink the partition below 245G. At this point I threw my hands in the air and ran GParted from an Ubuntu Live USB key.

GParted ran to completion but oddly reported an error, even though I couldn’t spot anything actually wrong (normally GParted shouldn’t report an error at all). The damage was done, so I just rebooted and let Windows perform the necessary chkdsk activity. Things were fine, so either I misread the error or it was something recoverable. Either way I was now happily running with a 100G partition.

Step 2) Use Clonezilla‘s “saveparts” option to capture an image of the partition. Since I had a 500G drive with lots of empty space after the 100G system partition, I created a 2nd partition on it to store the captured image. You could also use a second USB-mounted drive, or any number of other options Clonezilla supports, including ssh, to store your image.

I will comment that Clonezilla is not for the timid; the user interface appears very complex and requires some careful reading to make sure you’re doing what you think you’re doing. YouTube has a number of walkthroughs. For the 100G partition it took about 1:35 (an hour and 35 minutes) to back up.

The SSD attaches to the ultra slim sled that the laptop hard disk was in: a very slim metal sleeve with a pull tab and some rubber bumpers. It fit nicely into my W520.

Step 3) Swap the drives. If you have a password set on the drive, it’s a good idea to disable it before removing the drive, as USB enclosures and passworded drives don’t mix well. Install the new SSD and place the existing drive into a USB enclosure. Boot the laptop into Ubuntu Live again and partition the new SSD, making sure to tag the new partition with the ‘boot’ flag.
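A rough sketch of that partitioning step, assuming the new SSD shows up as /dev/sda in the live environment (fdisk shown here, though GParted works just as well):

$ sudo fdisk /dev/sda

Command (m for help): n    (create a new primary partition, accepting the defaults)
Command (m for help): a    (toggle the bootable flag on partition 1)
Command (m for help): w    (write the table and exit)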

Step 4) Restore the image you saved, using Clonezilla’s “restoreparts” option. In this case I was restoring from the 2nd partition on the original hard drive, now mounted as a USB volume. Clonezilla warns you twice when restoring a partition to validate you’ve got the correct destination – a nice touch of paranoia.

The restore ran nearly 3x faster, taking about 37 minutes.

Step 5) Boot into Windows. chkdsk may run again, but with the SSD it seemed to take no time at all. You might want to visit the OCZ site and grab the toolbox utility to validate that you’ve got the latest firmware; I did this to verify I had 2.15.

Performance

After doing the clone I ran some boot-time tests on the hard drive, and I tested the SSD immediately after completing step 5. For work I need Lotus Notes up and running to access my calendar etc., so that was a logical pattern to benchmark: how long to get back to key information? I used a stopwatch, and the times include the time spent typing in the two passwords and navigating to the icon to launch Notes. It’s not terribly scientific, but I think the results still speak for themselves.

                              Disk test 1   Disk test 2   Disk test 3   SSD test 1   SSD test 2   SSD test 3
Cold boot to Windows login    1:22          1:24          0:55          0:23         0:23         0:23
Login to launch of Notes      1:42          1:13          1:44          0:10         0:10         0:10
Lotus Notes ready             0:40          0:44          0:40          0:10         0:10         0:11
Total time                    3:45          3:22          3:19          0:43         0:43         0:44

This is crazy hot – more than 3x faster, under a minute from a cold boot.

Certain operations don’t seem any faster. Resuming from hibernation feels about the same speed, which makes sense, as the gap in sequential read performance is much smaller. In normal usage, though, lots of little things do feel more immediate. Some of this is likely simply moving from a SATA2 to a SATA3 interface, but I’m convinced no spinning platter could keep up with the SSD.