Growing a WindowsXP VMWare GuestOS in Ubuntu

My desktop at home is based on Ubuntu and generally it’s a good end user experience.  The bonus to running Linux is that it gives me a Unix system to hack around with, and I avoid some of the malware nonsense that plagues the much more popular Windows operating system.  Of course, Linux is a poor second cousin, and there is a lot of software that is Windows-only.  The suite of work-alike applications under Linux is improving, but most people can’t quite get away from Windows entirely.

For my part, one of the key applications is iTunes.  Sure, you can sort of get by with pure Linux, but it’s still pretty messy.  Running the real application is much easier.  There are a few other less frequently used applications that require me to keep a Windows install around too.

Quite some time ago, I took a physical install of WindowsXP Home and virtualized it to run under VMWare Player (you can also choose to use VMWare Server).  I’ve lost track of the specific steps I took, but the following how-to seems to cover basically what I did.  One footnote to this process: it appears to Windows as if you’ve changed the hardware significantly enough to require revalidation of your license – this shouldn’t be a big deal if you’ve got a legitimate copy.

Now back then, I figured that a 14Gb disk would be plenty of space for Windows.  (Ok, stop laughing now).  So this worked fine for a couple of years, but the cruft has built up to the point where I’m getting regular low disk warnings in my WindowsXP image.  Time to fix it.

You’ll need to get a copy of VMWare Server.  This is a free download, but it requires a registration that gives you a free key to run it.  You actually don’t need the key, as we only need one utility out of the archive: vmware-vdiskmanager.  This will allow us to resize the .vmdk file – which will take a little while.


./vmware-vdiskmanager -x 36Gb WindowsXP.vmdk

The vmware server archive also contains another very useful tool: vmware-mount.  This allows you to mount your vmware disk and access the NTFS partitions under Linux.  Very nice for moving data in or out of your virtualized Windows machine.
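From memory, the invocation looks roughly like the following – the partition number and mount point here are assumptions, so run vmware-mount with no arguments to check the exact syntax on your version:

```
# Mount the first NTFS partition of the virtual disk (partition
# numbers start at 1; use an empty directory as the mount point).
sudo ./vmware-mount WindowsXP.vmdk 1 /mnt/winxp

# ... copy files in or out ...

# Unmount when done.
sudo ./vmware-mount -d /mnt/winxp
```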

I need to credit the blog post that pointed me at vmware-vdiskmanager, but note that it goes on to talk about using the Windows administration tools to change the disk type from basic to dynamic – a feature not available in XP Home.

The .vmdk file represents the raw disk, so we’ve now got more drive space available, but the Windows partition is still stuck at 14Gb.  No problem: the Ubuntu live CD contains a copy of GParted, which can resize NTFS for us.  We need to edit the .vmx file to add the .iso file and boot from it.

ide0:0.present = "TRUE"
ide0:0.fileName = "/MyData/ISOs/ubuntu-9.04-desktop-i386.iso"
ide0:0.deviceType = "cdrom-image"

I did also have to fiddle with the VMWare BIOS (F2 on boot) to enable booting from the CDRom.  You may or may not need to do this step.

Once you have the Ubuntu Live CD running, launch the partition editor under System->Administration->Partition Editor.  This is GParted, and it’s got a pretty friendly graphical UI.  It may take some time to apply the change.

[Screenshot: GParted growing the Windows XP partition]

Once you are done, you need to re-edit your .vmx file to remove the .iso and boot Windows once again.  Don’t Panic.  Windows will detect that there is something amiss on your file system and want to run a check / repair on it.  This is normal.  Let it run through this process, it is a one time fix up and you’ll boot clean afterwards.
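Assuming the ISO was attached as ide0:0 as shown earlier, the quickest edit is to switch that device off again in the .vmx:

```
ide0:0.present = "FALSE"
```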

[Screenshot: Windows XP running chkdsk after the resize]

Start to end it takes a couple of hours, but most of that is waiting for longish disk operations.  Worth it to now have plenty of drive space available for my Windows VMware image.

UPS Monitoring

One of the things that I just hadn’t got around to after migrating to the new server was restoring my UPS monitoring.  The first time I set it up, it seemed pretty involved – partly because the version of Ubuntu I was using (Dapper) needed some special USB configuration.  Now that my server is on a more recent level of Ubuntu, it just works like it is supposed to.

The Ubuntu Community Documentation is well done and covers all the details.  Basically I needed to install apcupsd.  Reading through the known Linux USB issues listed on the APCUPSD site made me scratch my head a bit.  It tells you to check the file /proc/bus/usb/devices to see if the USB device is recognized.  My Ubuntu install doesn’t have this file; I suspect that is because usbfs isn’t mounted.  The lsusb utility finds the device just fine:


$ lsusb
Bus 005 Device 001: ID 0000:0000
Bus 004 Device 001: ID 0000:0000
Bus 003 Device 001: ID 0000:0000
Bus 002 Device 001: ID 0000:0000
Bus 001 Device 002: ID 051d:0002 American Power Conversion Uninterruptible Power Supply
Bus 001 Device 001: ID 0000:0000

So I figured I’d install and see what happened.

sudo apt-get install apcupsd apcupsd-cgi

You’ll note that I installed the CGI package as well so I can check in via the web; this is optional.  You do need to do some minor configuration, which is covered in detail by the Ubuntu Community Documentation on apcupsd.  In my case that meant setting UPSCABLE usb and UPSTYPE usb and commenting out DEVICE in /etc/apcupsd/apcupsd.conf, then changing ISCONFIGURED to yes in /etc/default/apcupsd.
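Those edits are small enough to script.  The sketch below runs against scratch copies in /tmp so you can see the effect (the sample file contents are made up for illustration); point the sed commands at /etc/apcupsd/apcupsd.conf and /etc/default/apcupsd, with sudo, to do it for real:

```shell
# Make scratch copies standing in for the two real config files.
mkdir -p /tmp/apcdemo
printf 'UPSCABLE smart\nUPSTYPE apcsmart\nDEVICE /dev/ttyS0\n' > /tmp/apcdemo/apcupsd.conf
printf 'ISCONFIGURED=no\n' > /tmp/apcdemo/apcupsd.default

# USB cable, USB UPS type, and no explicit DEVICE line.
sed -i -e 's/^UPSCABLE .*/UPSCABLE usb/' \
       -e 's/^UPSTYPE .*/UPSTYPE usb/' \
       -e 's/^DEVICE/#DEVICE/' /tmp/apcdemo/apcupsd.conf

# Tell the init script the daemon has been configured.
sed -i 's/^ISCONFIGURED=no/ISCONFIGURED=yes/' /tmp/apcdemo/apcupsd.default
```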

All that was left was to start the service:

sudo /etc/init.d/apcupsd start

and test it using apcaccess.  I’ll leave the cgi-bin setup as an exercise for the reader.

So why bother doing this at all?  Well, the apcupsd service (daemon) will shut down the machine in a controlled manner if there is an extended power failure, and configured correctly it will also bring the machine back up when power has been restored.  Logs are also generated to record when power failures have happened.  Knowing when, and for how long, the power was out is comforting.

Time Machine and Linux

Previously I had written about using an Ubuntu server to host Apple Time Machine backups.   Now that I’ve retired the old server, I needed to re-do some of that work to get backups running again.  This time I decided to skip Part A – which was about enabling AFP to make it as Mac friendly as possible.  My new setup simply uses Samba to expose the previously created time machine volume as described in Part B of my old post.  This seems to be working just fine.
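For reference, the Samba side of this is just an ordinary share definition in /etc/samba/smb.conf.  The path and user name below are assumptions – substitute your own:

```
[timemachine]
    comment = Time Machine backups
    path = /mnt/data/timemachine
    valid users = tmuser
    read only = no
```

Add the user to Samba with sudo smbpasswd -a tmuser and restart the Samba service to pick up the new share.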

I learned the backup lesson the hard way, losing a 20Gig drive a few years ago.  Sitting in my office are two disassembled hard drives: the failed drive, and a working drive of the same model number that I bought on eBay.  The plan was to swap the controller board in hopes of saving some of the data (it was my old web server).  It turns out that even though the drive models were the same, the internals were different – one drive had a single platter, the other dual platters.  If this blog post gives you that “yeah, I should really get my backup story sorted out” feeling, then go do something about it right now.  Hindsight is 20/20.

I meant to take a picture of that pair of drives to accompany this post, but failed to get a quick picture before I left the office.  I figured that I’d use the Creative Commons material from Flickr for this post.  It turns out that giving the correct attribution to the photo is a bit of a pain via Flickr (they should really fix that) – but there is a solution: www.ImageCodr.org.  With a few clicks and a cut and paste from the Flickr photo you want to use, you get an HTML snippet to use.

I had been using rsync to do backups on my old server, backing up from the main drive to one of the data drives.  As well, a number of the machines around the house would do rsync backups over the network to the server.  On my new server I’m using rsnapshot, which is based on rsync but adds scripting that provides a sensible set of default behaviours.  Setting it up to do local backups from the server’s main drive to a mounted data drive was trivial.
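The local setup boils down to a handful of lines in /etc/rsnapshot.conf.  The paths below are assumptions; note that fields must be separated by tabs, and older rsnapshot versions spell retain as interval:

```
snapshot_root	/mnt/data/snapshots/
retain	daily	7
retain	weekly	4
backup	/home/	localhost/
backup	/etc/	localhost/
```

Matching cron entries then invoke rsnapshot daily and rsnapshot weekly on whatever schedule you like.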

Using rsync you can configure incremental backups – utilizing hard links to provide multiple full directory trees with only a small increase in disk footprint.  Rsnapshot uses this facility to provide hourly, daily, and weekly backups – very similar to Time Machine’s model, where older data is kept at less granularity.  I’ve set it up with daily and weekly backups – after a few weeks it looks like this:

drwxr-xr-x 3 root root 4096 2009-04-21 00:30 daily.0
drwxr-xr-x 3 root root 4096 2009-04-20 00:31 daily.1
drwxr-xr-x 3 root root 4096 2009-04-19 00:30 daily.2
drwxr-xr-x 3 root root 4096 2009-04-18 00:31 daily.3
drwxr-xr-x 3 root root 4096 2009-04-17 00:31 daily.4
drwxr-xr-x 3 root root 4096 2009-04-16 00:30 daily.5
drwxr-xr-x 3 root root 4096 2009-04-15 00:30 daily.6
drwxr-xr-x 3 root root 4096 2009-04-14 00:30 weekly.0
drwxr-xr-x 3 root root 4096 2009-04-07 00:30 weekly.1

So while I’ve got many full file trees, through the use of hardlinks only 4.7Gig of storage is being used to have 9 copies of a 3.7Gig file tree.
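The hard-link trick is easy to see in miniature (the file names here are made up for the demo):

```shell
# Two snapshot trees, one physical copy of the unchanged file --
# the same mechanism rsnapshot relies on.
cd "$(mktemp -d)"
mkdir snap.1
echo "unchanged since yesterday" > snap.1/data.txt
cp -al snap.1 snap.0        # -l hard-links files instead of copying them

ls -i snap.0/data.txt snap.1/data.txt   # both names show the same inode
stat -c %h snap.0/data.txt              # link count is 2: one file, two trees
```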