UPS Monitoring

One of the things that I just hadn’t got around to after migrating to the new server was restoring my UPS monitoring.  The first time I set it up, it seemed pretty involved – partly because the version of Ubuntu I was using (Dapper) needed some special USB configuration.  Now that my server is on a more recent level of Ubuntu, it just works like it is supposed to.

The Ubuntu Community Documentation is well done and covers all the details.  Basically I needed to install apcupsd.  Reading through the known Linux USB issues listed on the APCUPSD site made me scratch my head a bit.  It tells you to check the file /proc/bus/usb/devices to see if the USB device is recognized.  My Ubuntu install doesn’t have this file; I suspect that is because usbfs isn’t mounted.  The lsusb utility seems to find the device just fine:


$ lsusb
Bus 005 Device 001: ID 0000:0000
Bus 004 Device 001: ID 0000:0000
Bus 003 Device 001: ID 0000:0000
Bus 002 Device 001: ID 0000:0000
Bus 001 Device 002: ID 051d:0002 American Power Conversion Uninterruptible Power Supply
Bus 001 Device 001: ID 0000:0000

So I figured I’d install and see what happened.

sudo apt-get install apcupsd apcupsd-cgi

You’ll note that I installed the CGI package as well so I can check in via the web; this is optional.  You do need to do some minor configuration, which is covered in detail by the Ubuntu Community Documentation on apcupsd.  In my case that meant setting UPSCABLE usb and UPSTYPE usb, and commenting out DEVICE in the file /etc/apcupsd/apcupsd.conf.  Then change ISCONFIGURED to yes in the /etc/default/apcupsd file.
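For reference, the relevant lines of my configuration ended up looking roughly like this (a sketch of my settings – your cable and UPS type may differ):

```
# /etc/apcupsd/apcupsd.conf -- relevant lines only
UPSCABLE usb
UPSTYPE usb
# DEVICE left commented out so apcupsd auto-detects the USB UPS
#DEVICE

# /etc/default/apcupsd
ISCONFIGURED=yes
```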

All that was left was to start the service:

sudo /etc/init.d/apcupsd start

and test it using apcaccess.  I’ll leave the cgi-bin setup as an exercise for the reader.
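If you want to script a quick sanity check, the apcaccess output is easy to parse.  Here is a sketch that pulls the STATUS field out of a captured sample (the sample text below is illustrative – run `apcaccess status` yourself to see your real values):

```shell
# Sample of the colon-separated key/value lines apcaccess prints.
sample='STATUS   : ONLINE
BCHARGE  : 100.0 Percent
TIMELEFT : 42.5 Minutes'

# Extract the STATUS field; "ONLINE" means the daemon is talking to the UPS.
status=$(printf '%s\n' "$sample" | awk -F': *' '/^STATUS/ {print $2}')
echo "$status"
```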

So why bother doing this at all?  Well, the apcupsd service (daemon) will shut down the machine in a controlled manner if there is an extended power failure, and configured correctly it will also bring the machine back up when power has been restored.  Logs are generated to indicate when power failures have happened.  Knowing when, and for how long, the power was out is comforting.
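The shutdown behaviour is tunable in apcupsd.conf as well.  I believe these are the stock defaults (shown here for context – check your own config before trusting them):

```
# Shut down when battery charge drops below this percentage...
BATTERYLEVEL 5
# ...or when estimated runtime drops below this many minutes...
MINUTES 3
# ...or after this many seconds on battery (0 disables the timer).
TIMEOUT 0
```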

Time Machine and Linux

Previously I had written about using an Ubuntu server to host Apple Time Machine backups.  Now that I’ve retired the old server, I needed to re-do some of that work to get backups running again.  This time I decided to skip Part A – which was about enabling AFP to make the server as Mac-friendly as possible.  My new setup simply uses Samba to expose the previously created Time Machine volume as described in Part B of my old post.  This seems to be working just fine.
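The Samba side is just an ordinary share – a minimal smb.conf stanza along these lines (the share name, path, and user here are examples, not my actual values):

```
[timemachine]
   path = /data/timemachine
   valid users = tim
   read only = no
```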

I learned the backup lesson the hard way, losing a 20Gig drive a few years ago.  In my office there are two disassembled hard drives: one is the failed drive, and the other is a working drive of the same model number that I bought on eBay.  The plan was to swap the controller board in the hope of saving some of the data (it was my old web server).  It turns out that even though the drive models were the same, the internals were different – one drive had a single platter, the other a dual platter.  If this blog post gives you that “yeah, I should really get my backup story sorted out” feeling, then go do something about it right now.  Hindsight is 20/20.

I meant to take a picture of that pair of drives to accompany this post, but didn’t manage to before I left the office.  I figured I’d use Creative Commons material from Flickr instead.  It turns out that giving the correct attribution for a photo is a bit of a pain via Flickr (they should really fix that) – but there is a solution: www.ImageCodr.org.  With a few clicks and a cut and paste of the Flickr photo you want to use, you get an HTML snippet ready to go.

I had been using rsync to do backups on my old server, backing up from the main drive to one of the data drives.  As well, a number of the machines around the house would rsync backup over the network to the server.  On my new server I’m using rsnapshot, which is based on rsync but provides some nice scripting to give you a nice set of default behaviours.  Setting it up to do local backups from the server main drive to a mounted data drive was trivial.
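The rsnapshot setup is a handful of lines in /etc/rsnapshot.conf.  A sketch of the shape of it (the paths and counts are examples, not my exact config; note that rsnapshot requires tabs, not spaces, between fields, and that newer versions spell the interval keyword “retain”):

```
snapshot_root	/backup/
interval	daily	7
interval	weekly	4
backup	/home/	localhost/
backup	/etc/	localhost/
```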

Using rsync you can configure incremental backups – utilizing hard links to provide multiple full directory trees with only a small increase in disk footprint.  Rsnapshot uses this facility to provide rotating hourly, daily, and weekly backups – very similar to Time Machine’s approach, where you get less granularity as the data gets older.  I’ve set it up with daily and weekly backups – after a few weeks it looks like this:

drwxr-xr-x 3 root root 4096 2009-04-21 00:30 daily.0
drwxr-xr-x 3 root root 4096 2009-04-20 00:31 daily.1
drwxr-xr-x 3 root root 4096 2009-04-19 00:30 daily.2
drwxr-xr-x 3 root root 4096 2009-04-18 00:31 daily.3
drwxr-xr-x 3 root root 4096 2009-04-17 00:31 daily.4
drwxr-xr-x 3 root root 4096 2009-04-16 00:30 daily.5
drwxr-xr-x 3 root root 4096 2009-04-15 00:30 daily.6
drwxr-xr-x 3 root root 4096 2009-04-14 00:30 weekly.0
drwxr-xr-x 3 root root 4096 2009-04-07 00:30 weekly.1

So while I’ve got many full file trees, through the use of hard links only 4.7Gig of storage is being used to hold 9 copies of a 3.7Gig file tree.
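If the hard-link trick seems like magic, here is a tiny demonstration of the mechanism rsnapshot relies on (the paths are throwaway examples):

```shell
# Build a small "source" tree to snapshot.
demo=$(mktemp -d)
mkdir "$demo/src"
echo "hello" > "$demo/src/file.txt"

cp -a  "$demo/src"     "$demo/daily.1"   # first snapshot: a real copy
cp -al "$demo/daily.1" "$demo/daily.0"   # second snapshot: hard links only

# Both trees show the full file, but its data is stored once --
# the link count on the inode is now 2.
stat -c %h "$demo/daily.0/file.txt"
```

Deleting a file from one snapshot leaves the other intact, which is exactly what makes each daily.N directory a self-contained full backup.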

The New Server

I had previously posted about the server that runs lowtek.ca and the trouble it had been giving me.  Well, the new parts came pretty quickly, and it was a good thing – just last week the old server packed it in.  It turns out that the most likely reason for the instability was the CPU fan: it totally seized on the failure day and my CPU temperature climbed past 90C.  An $8 fan might have solved my immediate problem and been a lot less headache.  Having temperature graphs of the server would have helped spot this – something I plan to do with the new one.

At least I had a good excuse to buy new hardware.  So here is a picture of it hanging out in my furnace room next to the water heater.  While the new system is based on a MiniITX mainboard, I’ve opted to use a full size ATX tower case to house it (the case was a free hand-me-down from a friend).

I did modify the case quite heavily.  The stock fan grills were simply holes drilled in the case – it was more grill than not, so the airflow was pretty poor.  A couple of minutes with the dremel removed the grills entirely.  I also opened up the front bezel to provide easy inflow of air for the lower case fan.  The upper case fan is mounted to a pair of drive bay covers I’ve glued together.

This case has six 5.25" drive bays and three 3.5" bays.  My system drive lives down in the bottom and the data drives are up where the top fan is.

I also cut a fan vent in the side panel to blow down onto the mainboard itself.  This I did with a jigsaw, and I think it turned out well considering it was my first attempt at something like this.

This older ATX tower case had all of the right connections for the new motherboard.  Power, reset, and HDD connectors hooked up no problem.  The power LED, however, had a 3-pin connector where the board wanted a 2-pin one.  Karl was able to hook me up with a spare HDD connector that I spliced onto the power LED.

Since there were mounting locations for 2 more exhaust fans, I couldn’t help myself and added two more at the top/back of the case.

If we count fans, I’ve got 5 case fans, 1 more in the power supply, and a chipset fan on the mainboard.  Overkill?  Yes.  Required?  No, probably not.  There seem to be plenty of folks out there running exactly the same board with very minimal cooling.

My motivation here was the current system failing due to heat death, and I’d like the new system to run problem-free for years.  The extra cost of a few fans isn’t a big deal, and it’s very quiet relative to the furnace.  I may further duct or otherwise optimize the cooling, but the measurements so far don’t show much of a delta from other people’s numbers on the net.

One more picture of the guts, to give you a sense of how small the MiniITX board is:

The transition from the old system to the new one should have been as simple as dropping in the drives and booting.  Unfortunately the old version of Ubuntu (Dapper) didn’t have support for the new Atom board and couldn’t make use of its network drivers, etc.

Worse still, the system drive refused to boot in the new machine.  It is something that still has me scratching my head.  I even went to the effort of cloning the boot drive onto another which I had proven would boot with the new system – and still no go.  It was almost as if the MBR was in an unexpected location.  Last week I lost a bunch of sleep.

Above is a picture of the old server while it was cloning the boot drive (which, as I mentioned, turned out to be a waste of time).  After five hours of beating my head against the problem, I simply moved the data drives to the new system and did a clean install on a fresh boot drive.  The old server continued to host lowtek.ca on the old system drive, using the external fan to keep the CPU cool.

Yesterday I turned off the old server.  Here are a few URLs that I found helpful in the migration:

As I host a number of WordPress blogs here, migrating them required a database backup and restore.  I simply copied over the /var/www directory data instead of re-installing the blog software itself.

Backup:

mysqldump --add-drop-table -h localhost -u sql_username \
--password=sql_passwd sql_database_name > blog.bak.sql

Create DB on new host:

$ mysql -u root -p

mysql> CREATE DATABASE sql_database_name;

mysql> GRANT ALL PRIVILEGES ON sql_database_name.* TO 'sql_username'@'localhost'
-> IDENTIFIED BY 'sql_passwd';

mysql> FLUSH PRIVILEGES;

mysql> exit

Restore:

mysql -h localhost -u sql_username  -p sql_database_name < blog.bak.sql

Well, if you’re still hanging in there, let’s talk about power consumption.  I borrowed Trent’s Kill-A-Watt meter and did some measuring.  My home desktop machine draws around 150 watts, and up to 200 watts when loaded (and using the CD drive).  The old server used 2W in standby, 120W during boot (loaded), and 112W steady state.  The new machine uses 1W in standby, a peak of 100W at boot (when the drives are spinning up), and 60W steady state.
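That 52W steady-state drop adds up for a machine that runs around the clock.  A back-of-the-envelope calculation:

```shell
# 112W old vs 60W new, 24 hours a day, 365 days a year:
# watts * hours_per_year / 1000 = kWh saved per year
awk 'BEGIN { printf "%.0f kWh/year\n", (112 - 60) * 24 * 365 / 1000 }'
# -> 456 kWh/year
```

At typical residential rates that is a noticeable chunk off the power bill every year, just from the steady-state difference.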

In conclusion: server machines need monitoring set up to track potential problems (like temperature).  Migration of your data is less painful if you keep an install log with notes and links (thank goodness I did one last time).  New hardware usually needs a new software install – don’t fight it.