How to: resize a mirrored volume

Having recently set up mirrored volumes with a pair of 1TB drives, I could now migrate data off the pair of 250GB data drives, which would let me combine those two drives into a single volume.  Way back when I purchased these drives I had intended to run a mirrored setup, but at the time decided that having more storage was more important.  I had “cleverly” purchased two 250GB drives from different manufacturers, in theory to avoid concurrent failures.  It turns out that not all 250GB drives are made the same.

Following the instructions from my previous posting, all went well up until I tried to add the 2nd volume to the mirrored set.  If you run into a similar problem, you’ll likely see one of the following two errors:

mdadm: add new device failed for /dev/hda1 as 2: No space left on device
mdadm: add new device failed for /dev/hda1 as 2: Invalid argument

I found some good hints on how to diagnose the problem; it turns out you can check the partition sizes manually:

$ cat /proc/partitions
major minor  #blocks  name

8    17  244196001 sdb1
8    65  244198552 sde1

Close, but not quite the same.  As it was, I had unluckily chosen the larger of the two, /dev/sde, as the 1st drive in the mirrored set, which left the slightly smaller /dev/sdb unable to join it.  It turns out that fdisk tells an even more interesting story.

$ sudo fdisk -l /dev/sdb

Disk /dev/sdb: 250.0 GB, 250059350016 bytes
255 heads, 63 sectors/track, 30401 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x000c0f4f

Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            1       30401   244196001   83  Linux

$ sudo fdisk -l /dev/sde

Disk /dev/sde: 250.0 GB, 250059350016 bytes
16 heads, 63 sectors/track, 484521 cylinders
Units = cylinders of 1008 * 512 = 516096 bytes
Disk identifier: 0x00000000

Device Boot      Start         End      Blocks   Id  System
/dev/sde1            1      484521   244198552+  83  Linux

Yuck, looks messy. At this point I’ve got some of my live data sitting on one half of the mirrored set, and no suitable 2nd drive to act as the mirror.  Somewhat predictably, there is a solution that minimizes downtime and avoids copying all of the data to a new location.

First, unmount the volume, run a filesystem check, and shrink the filesystem with resize2fs.  We don’t need to know the correct final size yet, just any size comfortably smaller than the 2nd volume – so I used 200GB.

$ sudo umount /media/data/
$ sudo e2fsck -f /dev/md2
e2fsck 1.40.8 (13-Mar-2008)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/md2: 1890279/15269888 files (0.2% non-contiguous), 32742192/61049616 blocks
$ sudo resize2fs /dev/md2 200G
resize2fs 1.40.8 (13-Mar-2008)
Resizing the filesystem on /dev/md2 to 52428800 (4k) blocks.
The filesystem on /dev/md2 is now 52428800 blocks long.

Now we need to calculate what the correct size of the mirrored partition should be. I looked at two bits of data: the size that mdadm -D reported for the array I wanted to resize, and the size listed in /proc/partitions for its underlying partition. These differed by 88 blocks, so I used 88 as a fudge factor – it may not be required, but it worked for me. I also made sure the value I supplied was an even multiple of 64 blocks.

So, starting with 244196001 (the size of the smaller partition, /dev/sdb1) from /proc/partitions:

(244196001 - 88) / 64 = 3815561.14

Drop the decimal places and multiply by 64 to get the number of blocks.

3815561 * 64 = 244195904
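
If you would rather let the shell do the arithmetic, the same calculation can be done like this (a quick sketch, assuming /dev/sdb1 is the smaller partition as in my case):

$ kb=$(awk '$4 == "sdb1" {print $3}' /proc/partitions)
$ echo $(( (kb - 88) / 64 * 64 ))
244195904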

Now we feed this new size into mdadm using the --grow flag (which, despite the name, can also shrink the array if you specify a size smaller than the current one, as we are doing here).  We then re-run resize2fs without a size argument, which causes it to expand the filesystem to fill the resized device.

$ sudo mdadm --grow /dev/md2 --size=244195904
$ sudo resize2fs /dev/md2
resize2fs 1.40.8 (13-Mar-2008)
Resizing the filesystem on /dev/md2 to 61048976 (4k) blocks.
The filesystem on /dev/md2 is now 61048976 blocks long.

Now all that is left is to run a filesystem check, and remount it.

$ sudo e2fsck -f /dev/md2
e2fsck 1.40.8 (13-Mar-2008)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/md2: 1890279/15269888 files (0.2% non-contiguous), 32742192/61048976 blocks
$ sudo mount -a

Now when you attempt to add the 2nd volume, it will be a matching size and the mirror will work.  In the future, I intend to be a little more careful when I set up mirrored drives and pick the smaller volume as the starting point.
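
For reference, the add step that failed at the start now succeeds; on my system the 2nd volume was /dev/sdb1 (substitute your own device name):

$ sudo mdadm /dev/md2 --add /dev/sdb1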

Google Android Dev Phone 1

I’m stoked – I’ve got a G1!  This is actually a Google Android Dev Phone 1 that I purchased second hand from a friend.  Back in January 2009 I had a chance to play around with one of these while in California (borrowed from a Googler who got it from work); my impression then was “ok, neat – but it’s basically a computer in your hand” – a different reaction than the one I had to the iPhone.  At the time both were at a premium price and I had only recently bought my Nokia 5310.  Between then and now I bought an iPod Touch that I have enjoyed a lot, though with nearly endless frustration with iTunes.

My G1 is running CyanogenMod v4.2.15.1.  The hardware is very similar to the iPhone/iPod Touch: a 528MHz ARM11 CPU, 192MB of RAM, and a 320 x 480 capacitive touch screen. One bonus feature is that this version of the phone supports AWS (1700MHz, Band IV), which is the frequency that WIND Mobile is using in Canada (T-Mobile uses it in the US).

Today my service provider is Fido.  I’m on the $15 plan ($16.95 after taxes), which is sufficient for my phone usage as I’m a light user. Unfortunately it doesn’t include call display or voicemail (a +$10 option), nor is any data included.  The phone had been wiped and reset when I got it, and I needed to get past the “Welcome to T-Mobile G1” screen.  Unfortunately the built-in menus only provide the option to configure an APN.  Not wanting to incur any data charges (last time I did this on my 5310 it was $12 for a few hundred KB!), I wanted to figure out how to hack around this.

Not surprisingly, I was able to find a solution online that allows you to activate your G1 without using any mobile data.  I’m using Ubuntu 9.10 (and the phone is running CyanogenMod), so the directions there were not exactly what I needed; I’ll briefly repeat them here with my changes.

1) Grab the Android SDK. Install it following the directions, which really just boil down to extracting the archive.
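
In my case that amounted to something like the following (the archive name below is only an example – use whichever SDK release you actually downloaded):

$ tar -xzf android-sdk_r04-linux_86.tgz    # example archive name; yours may differ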

2) Now we’re going to modify /etc/udev/rules.d to give normal users (i.e. you) permission on the USB port the phone will use.
Create the file /etc/udev/rules.d/50-android.rules with the following contents (permissions 644):

SUBSYSTEM=="usb", SYSFS{idVendor}=="0bb4", MODE="0666"

$ sudo restart udev

You can skip this step if you want to run steps 4 and 5 as root.

3) Connect the phone and your PC using a USB cable.  The phone does need a SIM card installed.  Boot the phone.

4) Now we run <install path>/android-sdk-linux_86/tools/adb devices to check that we’re properly connected to the device:

$ ./adb devices

List of devices attached
HTxxxxxxxxxx device


(where the x’s are your actual device number)

5) Now back to the phone, tap on the “Welcome to T-Mobile G1” screen to get to the setup page.  Then issue the following command on your PC from the tools directory:

$ ./adb shell

# am start -a android.intent.action.MAIN -n com.android.settings/.Settings

The “am” command is actually executing on the phone itself.  This should start up the configuration dialog that allows you to set up a wireless 802.11b/g network.  After this point it should be pretty self-explanatory to get yourself set up with the phone.

The screenshot in this post was done using the Android SDK as well.  I’m sure I’ll have more to say about Android and this phone soon.

Mirrored Drives with Ubuntu

Mirrored drives are also known as a RAID 1 configuration.  It is important to note that mirrored drives are not a substitute for doing backups.  My motivation for running a RAID 1 is simply that with the drive densities today, I expect these drives to fail.  A terabyte unit is cheap enough that multiplying the cost by two isn’t a big deal, and it gives my data a better chance of surviving a hardware failure.

I purchased two identical drives several months apart – in the hopes of getting units from different batches. I even put them into use staggered by a few months as well.  The intent here was to try to avoid simultaneous failure of the drives due to similarities in manufacture date / usage.  In the end, the environment they are in is probably a bigger factor in leading to failure but what can you do?

Linux has reasonable software RAID support.  There is a debate over the merits of software RAID vs. hardware RAID, as well as over which RAID level is most useful; I leave that as an exercise for the reader.  The remainder of this posting covers the details of setting up a RAID 1 on a live system.  I found two forum postings that talked about this process, the latter being the most applicable.

We will start with the assumption that you have the drive physically installed in your system.  The first step is to partition the disk; I prefer using cfdisk, but fdisk will work too.  This is always a little scary, but if this is a brand new drive it should not have an existing partition table.  In my scenario I wanted to split the 1TB volume into two partitions, a 300GB and a 700GB.
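
As a quick sketch, assuming the new drive shows up as /dev/sdd as it did for me (double-check the device name with fdisk -l before touching anything):

$ sudo cfdisk /dev/sdd

From there, create two primary Linux (type 83) partitions of roughly 300GB and 700GB, then write the table and quit.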

Now let’s use fdisk to dump the results of our partitioning work:

$ sudo fdisk -l /dev/sdd

Disk /dev/sdd: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00000000

Device Boot         Start         End      Blocks   Id  System
/dev/sdd1               1       36473   292969341   83  Linux
/dev/sdd2           36474      121601   683790660   83  Linux

Next we need to install the RAID tools if you don’t have them already:

$ sudo apt-get install mdadm initramfs-tools

Now recall that we are doing this on a live system: I’ve already got another 1TB volume (/dev/sda) partitioned and full of data I want to keep. So we’re going to create the RAID array in a degraded state, which is the reason for the ‘missing’ option. As I have two partitions, I need to run the create command twice, once for each of them.

$ sudo mdadm --create --verbose /dev/md0 --level=mirror --raid-devices=2 missing /dev/sdd1
$ sudo mdadm --create --verbose /dev/md1 --level=mirror --raid-devices=2 missing /dev/sdd2

Now we can take a look at /proc/mdstat to see how things look:

$ cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sdd2[1]
683790592 blocks [2/1] [_U]

md0 : active raid1 sdd1[1]
292969216 blocks [2/1] [_U]

unused devices: <none>

Now we format the new volumes. I’m using ext3 filesystems, feel free to choose your favorite.

$ sudo mkfs -t ext3 /dev/md0
$ sudo mkfs -t ext3 /dev/md1

Mount the newly formatted partitions and copy the data over from the existing drive. I used rsync to perform this, as it is an easy way to preserve permissions, and since I’m working on a live system I can re-run the rsync later to grab any updated files before I do the actual switch-over (see the example after the commands below).

$ sudo mount /dev/md0 /mntpoint
$ sudo rsync -av /source/path /mntpoint
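
That later re-run looks much the same, with --delete added so the copy exactly matches the source (a sketch – double-check the paths before using --delete, since it removes extra files from the destination):

$ sudo rsync -av --delete /source/path /mntpoint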

Once the data is moved, you need to make the new copy of the data on the degraded mirror volume the live one: update your mount points (and /etc/fstab) so the system uses the new volume, then unmount the original 1TB drive. Assuming things look ok on your system (no lost data), we now partition the drive we just unmounted to match the new one (double and triple check the device names!). There is no need to format these new partitions – the RAID resync will overwrite them when we add them to the array.
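
One easy way to make the old drive’s partition table match the new one is to copy it with sfdisk – a sketch, where /dev/sdd is the new (already partitioned) drive and /dev/sda is the old drive being reused; verify both device names on your own system first:

$ sudo sfdisk -d /dev/sdd | sudo sfdisk /dev/sda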

All that is left to do is add the new partitions to the arrays:

$ sudo mdadm /dev/md0 --add /dev/sda1
$ sudo mdadm /dev/md1 --add /dev/sda2

Again we can check /proc/mdstat to see the status of the arrays, or use the watch command on the same file to monitor the progress (see the example after the output below).

$ cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sdd2[1]
683790592 blocks [2/1] [_U]

md0 : active raid1 sda1[2] sdd1[1]
292969216 blocks [2/1] [_U]
[>....................] recovery = 0.6% (1829440/292969216) finish=74.2min speed=65337K/sec

unused devices: <none>
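
The watch variant looks like this, refreshing the view every couple of seconds:

$ watch cat /proc/mdstat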

That’s all there is to it.  Things get a bit more complex if you are working on your root volume, but in my case I was simply mirroring one of my data volumes.