{"id":1303,"date":"2012-12-30T23:16:47","date_gmt":"2012-12-31T03:16:47","guid":{"rendered":"https:\/\/lowtek.ca\/roo\/?p=1303"},"modified":"2012-12-30T23:16:47","modified_gmt":"2012-12-31T03:16:47","slug":"add-volum-to-raid5-on-ubuntu","status":"publish","type":"post","link":"https:\/\/lowtek.ca\/roo\/2012\/add-volum-to-raid5-on-ubuntu\/","title":{"rendered":"Add drive to RAID5 on Ubuntu"},"content":{"rendered":"<p>Some time ago I migrated <a href=\"https:\/\/lowtek.ca\/roo\/2011\/how-to-migrate-from-raid1-to-raid5\/\">from a RAID1 setup to RAID5<\/a>, using the minimum of 3 drives. At some point this summer I spotted a good deal on a 1TB drive matching what I already had in my array and bought it. My purchase sat in my desk drawer for a month (or two) before I finally got around to installing it into the server. At least another couple of months went by until I got to adding it to my array &#8211; it turned out to be really simple and I&#8217;m kicking myself for dragging my feet.<\/p>\n<p>With any hardware upgrade (specifically drives) it&#8217;s a good idea to capture what the system thinks things look like before you make any changes. For the most part Ubuntu talks about <a href=\"https:\/\/help.ubuntu.com\/community\/UsingUUID\">UUIDs<\/a> for drives, but a couple of places (at least in my install) use the <code>\/dev\/sd*#<\/code> names, which can trip you up when you shuffle hardware around. Capturing the drive assignments is simply a matter of:<\/p>\n<p><code>$ sudo fdisk -l | grep ^\/dev<\/code><\/p>\n<p>After the hardware installation I was surprised at how much the <code>\/dev\/sd*#<\/code> assignments had shuffled around. I was glad I had a before and after capture of the data; it also let me identify the new drive pretty easily.<\/p>\n<p>Early in my notes I have <em>&#8220;could it be this simple?&#8221;<\/em> and a link to the <a href=\"https:\/\/raid.wiki.kernel.org\/index.php\/Growing\">kernel.org wiki on RAID<\/a>. 
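<\/p>\n<p>(As an aside on the capture step above: the before and after listings are easy to diff. The file names here are just illustrative, not from my notes.)<\/p>\n<p><code>$ sudo fdisk -l | grep ^\/dev &gt; drives-before.txt<br \/>\n# ...install the new drive and boot back up...<br \/>\n$ sudo fdisk -l | grep ^\/dev &gt; drives-after.txt<br \/>\n$ diff drives-before.txt drives-after.txt<\/code><\/p>\n<p>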
It turns out that yes, it really is that simple &#8211; but you do need to follow the steps carefully. I also found an <a href=\"http:\/\/ubuntuforums.org\/showthread.php?t=517282\">Ubuntu Forum post<\/a> that was a good read for background.<\/p>\n<p>I had temporarily used the new drive on an OSX system to do some recovery work, so <a href=\"http:\/\/manpages.ubuntu.com\/manpages\/precise\/man8\/fdisk.8.html\">fdisk<\/a> wasn&#8217;t very happy about working with a drive that had a <a href=\"http:\/\/en.wikipedia.org\/wiki\/GUID_Partition_Table\">GUID partition table<\/a> (GPT). It turns out <a href=\"http:\/\/manpages.ubuntu.com\/manpages\/precise\/man8\/parted.8.html\">parted<\/a> was happy to work with the volume and even let me change it back into something fdisk could work with.<\/p>\n<p>I puzzled over the fact that this new drive wanted to start at 2048 instead of 63; I was initially under the incorrect assumption that this had something to do with the GPT setup I hadn&#8217;t been able to fix. 
Consider two basically identical volumes (old followed by new):<\/p>\n<p><code>$ sudo fdisk -l \/dev\/sdb<\/code><\/p>\n<p><code>Disk \/dev\/sdb: 1000.2 GB, 1000204886016 bytes<br \/>\n255 heads, 63 sectors\/track, 121601 cylinders, total 1953525168 sectors<br \/>\nUnits = sectors of 1 * 512 = 512 bytes<br \/>\nSector size (logical\/physical): 512 bytes \/ <strong>512 bytes<\/strong><br \/>\nI\/O size (minimum\/optimal): <strong>512 bytes \/ 512 bytes<\/strong><br \/>\nDisk identifier: 0x00000000<\/code><\/p>\n<p><code>Device Boot Start End Blocks Id System<br \/>\n\/dev\/sdb1 <strong>63<\/strong> 1953520064 976760001 83 Linux<\/code><\/p>\n<p><code>$ sudo fdisk -l \/dev\/sdc<\/code><\/p>\n<p><code>Disk \/dev\/sdc: 1000.2 GB, 1000204886016 bytes<br \/>\n255 heads, 63 sectors\/track, 121601 cylinders, total 1953525168 sectors<br \/>\nUnits = sectors of 1 * 512 = 512 bytes<br \/>\nSector size (logical\/physical): 512 bytes \/ <strong>4096 bytes<\/strong><br \/>\nI\/O size (minimum\/optimal): <strong>4096 bytes \/ 4096 bytes<\/strong><br \/>\nDisk identifier: 0x00000000<\/code><\/p>\n<p><code>Device Boot Start End Blocks Id System<br \/>\n\/dev\/sdc1 <strong>2048<\/strong> 1953525134 976761543+ 83 Linux<\/code><\/p>\n<p>I&#8217;ve highlighted the key differences in bold: the physical sector size is 4096 vs. 512, and that is the reason for the different start position. 
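<\/p>\n<p>For reference, converting the GPT label back to something fdisk understands looks roughly like this &#8211; a sketch rather than my exact session, with the device name assumed to be <code>\/dev\/sdc<\/code>. Note that <code>mklabel<\/code> destroys the existing partition table, so only do this on a drive with nothing you want to keep:<\/p>\n<p><code>$ sudo parted \/dev\/sdc<br \/>\n(parted) mklabel msdos<br \/>\n(parted) quit<\/code><\/p>\n<p>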
Ok, diversion over &#8211; let&#8217;s actually follow the <a href=\"https:\/\/raid.wiki.kernel.org\/index.php\/Growing\">wiki<\/a> and get this drive added to the RAID array.<\/p>\n<p>Start by looking at what we have:<\/p>\n<p><code>$ cat \/proc\/mdstat<br \/>\nPersonalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10]<br \/>\nmd_d3 : active raid5 sdf1[1] sdd1[0] sdb1[2]<br \/>\n1953519872 blocks level 5, 64k chunk, algorithm 2 [3\/3] [UUU]<\/code><\/p>\n<p>So, my RAID5 array is <code>\/dev\/md_d3<\/code>, and I know my new drive is <code>\/dev\/sdc1<\/code> after my parted\/fdisk adventure above.<\/p>\n<p><code>$ sudo mdadm --add \/dev\/md_d3 \/dev\/sdc1<\/code><\/p>\n<p>Now we look at mdstat again and it shows we have a spare. This is honestly what I should have done with the drive immediately after installing it &#8211; having a spare lets the RAID array fail over to the spare drive with no administrator intervention.<\/p>\n<p><code>$ cat \/proc\/mdstat<br \/>\nPersonalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10]<br \/>\nmd_d3 : active raid5 sdc1[3](S) sdf1[1] sdd1[0] sdb1[2]<br \/>\n1953519872 blocks level 5, 64k chunk, algorithm 2 [3\/3] [UUU]<\/code><\/p>\n<p>Next we grow the array across the new device:<\/p>\n<p><code>$ sudo mdadm --grow --raid-devices=4 \/dev\/md_d3<\/code><\/p>\n<p>You can peek at <code>\/proc\/mdstat<\/code> from time to time (or use the <a href=\"http:\/\/manpages.ubuntu.com\/manpages\/precise\/man1\/watch.1.html\">watch<\/a> command) to monitor progress. 
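<\/p>\n<p>For example, refreshing every 10 seconds (the interval is just a suggestion):<\/p>\n<p><code>$ watch -n 10 cat \/proc\/mdstat<\/code><\/p>\n<p>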
This may take a while.<\/p>\n<p>Once this is done, don&#8217;t forget to modify <code>\/etc\/mdadm\/mdadm.conf<\/code> as per the <a href=\"https:\/\/raid.wiki.kernel.org\/index.php\/Growing\">wiki<\/a>: <em>&#8220;To make mdadm find your array edit \/etc\/mdadm.conf and correct the num-devices information of your Array&#8221;<\/em><\/p>\n<p>At this point we have our data spread across more drives, but don&#8217;t have a larger volume. We need to resize the volume to take advantage of the new space. It&#8217;s recommended you do the resize with the RAID5 volume unmounted (offline). I set about doing this and hit problems unmounting the volume: it turned out to be <a href=\"http:\/\/en.wikipedia.org\/wiki\/Samba_(software)\">samba<\/a> holding on to the volume, and turning that service off fixed things.<\/p>\n<p>Then I hit a showstopper &#8211; the <a href=\"http:\/\/manpages.ubuntu.com\/manpages\/precise\/en\/man8\/resize2fs.8.html\">resize2fs<\/a> command failed:<\/p>\n<p><code>$ sudo resize2fs -p \/dev\/md_d3<br \/>\nresize2fs 1.42 (29-Nov-2011)<br \/>\nresize2fs: Device or resource busy while trying to open \/dev\/md_d3<br \/>\nCouldn't find valid filesystem superblock.<\/code><\/p>\n<p>Huh? This is something I&#8217;ll sort out one day, I suppose, but it really beats me what is going on here. 
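<\/p>\n<p>If you hit the same &#8220;busy&#8221; error, <code>fuser<\/code> should at least show whether some process still has the device open &#8211; a diagnostic guess on my part, not something from my notes:<\/p>\n<p><code>$ sudo fuser -vm \/dev\/md_d3<\/code><\/p>\n<p>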
You can resize RAID5 while it&#8217;s online too; it&#8217;s slower and a bit scarier, but it works.<\/p>\n<p><code>$ sudo resize2fs \/dev\/md_d3<br \/>\nresize2fs 1.42 (29-Nov-2011)<br \/>\nFilesystem at \/dev\/md_d3 is mounted on \/stuff; on-line resizing required<br \/>\nold_desc_blocks = 117, new_desc_blocks = 175<br \/>\nPerforming an on-line resize of \/dev\/md_d3 to 732569952 (4k) blocks.<\/code><\/p>\n<p>This was followed by a few moments of terror as I realized that I was doing this over an SSH connection &#8211; <a href=\"http:\/\/serverfault.com\/questions\/440313\/resize2fs-get-status-after-ssh-connection-interrupted\">what if the connection is lost?<\/a> Next time I&#8217;ll use screen, or nohup the process.<\/p>\n<p>It was neat to watch the free space on the drive creep upwards. It was running at about 1GB every 2 seconds. Once this finishes, you&#8217;re done. My RAID volume went from 1.9T to 2.8T with the new drive.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Some time ago I migrated from a RAID1 setup to RAID5, using the minimum of 3 drives. At some point this summer I spotted a good deal on a 1TB drive matching what I already had in my array and bought it. 
My purchase sat in my desk drawer for a month (or two) &hellip; <a href=\"https:\/\/lowtek.ca\/roo\/2012\/add-volum-to-raid5-on-ubuntu\/\" class=\"more-link\">Continue reading<span class=\"screen-reader-text\"> &#8220;Add drive to RAID5 on Ubuntu&#8221;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[6,12],"tags":[],"class_list":["post-1303","post","type-post","status-publish","format-standard","hentry","category-computing","category-how-to"],"_links":{"self":[{"href":"https:\/\/lowtek.ca\/roo\/wp-json\/wp\/v2\/posts\/1303","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/lowtek.ca\/roo\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/lowtek.ca\/roo\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/lowtek.ca\/roo\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/lowtek.ca\/roo\/wp-json\/wp\/v2\/comments?post=1303"}],"version-history":[{"count":2,"href":"https:\/\/lowtek.ca\/roo\/wp-json\/wp\/v2\/posts\/1303\/revisions"}],"predecessor-version":[{"id":1305,"href":"https:\/\/lowtek.ca\/roo\/wp-json\/wp\/v2\/posts\/1303\/revisions\/1305"}],"wp:attachment":[{"href":"https:\/\/lowtek.ca\/roo\/wp-json\/wp\/v2\/media?parent=1303"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/lowtek.ca\/roo\/wp-json\/wp\/v2\/categories?post=1303"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/lowtek.ca\/roo\/wp-json\/wp\/v2\/tags?post=1303"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}