Ubuntu server 8.04 LTS upgrade to 10.04 LTS

Ubuntu 8.04 LTS desktop edition hit end of life well over a year ago (May 12, 2011), and the server edition’s end of life date is April 2013 – not too far away. With a server exposed to the internet, staying up to date with patches is good hygiene – though there is also the tried and true “if it ain’t broke, don’t fix it“. While there have only been a few critical patches recently, I want to stay on a supported version.

The end of life date is a good motivator, but what actually triggered my upgrade was wanting to make use of encfs – and discovering that the version available on 8.04 didn’t have the feature (--reverse) that I was looking for. I’m actually only part way there, as I intend to upgrade all the way to 12.04 LTS to avoid compatibility issues with data stored with encfs. This post will focus on issues and solutions that I encountered on the way to 10.04.

Initiating the upgrade process is quite easy. If you choose to do this via SSH (as I did) you’ll be warned that this could be a bad idea. In my case console access is possible if I head down to the dusty corner of my basement where it lives, so I felt certain that I could recover if needed.

$ sudo do-release-upgrade

Through the install process you’ll be warned about conflicts between the package maintainer’s version of configuration files and the ones you have. Some of these conflict warnings are plain console prompts, and others use the curses library to display the choices a bit more graphically. As I was doing a fairly major upgrade (8.04 -> 10.04) I opted in many cases to overwrite my configuration file with the package maintainer’s version and then perform a manual merge afterwards. If you pick this route, make sure to keep good notes on which files need to be revisited AND that you have a full system backup – it is nice that the installation system will copy your old version to .dpkg-old so comparison is easy.
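
A quick way to find the configuration files that need revisiting once the upgrade finishes is to look for the saved copies (the sshd_config comparison is just an example – substitute whichever files you overwrote):

$ find /etc -name '*.dpkg-old'
$ diff /etc/ssh/sshd_config.dpkg-old /etc/ssh/sshd_config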

Things seemed to go very smoothly, up until it was time to reboot. Post reboot I could ping the machine, but SSH connections weren’t being accepted. Time to head down and check out the actual console. It turned out that my RAID volumes wouldn’t mount, and this derailed the normal start up.

As with many things, someone else had run into pretty much the same problem and posted a solution. Sadly that wasn’t quite the full solution in my case, as the RAID5 volume wasn’t being recognized properly. I did eventually find the help I was looking for, so here is basically what I did.

Since my RAID volumes are not needed to boot the OS, I simply skipped mounting those drives. Once logged in I could issue:

$ sudo mdadm --auto-detect
$ sudo mdadm --examine --scan

This gave me:

ARRAY /dev/md3 level=raid5 num-devices=3 UUID=7a6c5b68:6d6d7031:7653325d:7e304e58
ARRAY /dev/md2 level=raid1 num-devices=2 UUID=593e3663:69294b3b:30443245:23496c5f

I effectively cut & pasted this into the /etc/mdadm/mdadm.conf file with one small change: I replaced /dev/md3 and /dev/md2 with /dev/md/d3 and /dev/md/d2 respectively. I’m not certain this was necessary, but it matched more closely what the first solution I found described, and it is working fine on my system.
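
For reference, after that edit the ARRAY lines in my /etc/mdadm/mdadm.conf looked roughly like this:

ARRAY /dev/md/d3 level=raid5 num-devices=3 UUID=7a6c5b68:6d6d7031:7653325d:7e304e58
ARRAY /dev/md/d2 level=raid1 num-devices=2 UUID=593e3663:69294b3b:30443245:23496c5f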

Once past that roadblock, most of the other things were trivial. I found copying the config files back to my Ubuntu desktop system and using meld to view differences to be much easier than trying to interpret the console diff utility.
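
For example, with a pair of files pulled over to the desktop (postfix’s main.cf is just an illustration – use whichever files you flagged during the upgrade):

$ scp 'server:/etc/postfix/main.cf*' .
$ meld main.cf main.cf.dpkg-old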

There was a warning about the default TCP port changing for postgrey, yet for some reason mine still appears to be running happily on port 60000. I also had some trouble with my ThinkUp installation: one of the required PHP packages (php5-mcrypt) had been auto-removed in the upgrade. While I’ve certainly missed something in the process, it’s been a couple of days and I haven’t seen any serious problems due to the upgrade.
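
If you hit the same thing, getting the PHP module back should just be a matter of reinstalling the package and restarting Apache – roughly:

$ sudo apt-get install php5-mcrypt
$ sudo /etc/init.d/apache2 restart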

Restricted shell file server with scponly

Today it’s fairly typical to have an always-on, high speed internet connection. Many geeks like myself will run a Linux box 24/7 at home that acts as a file server, media server, and possibly a few other roles like email and web. Enabling ssh access is extremely handy for when you’re away from home; not only does it give you secured shell access, but it enables tunneling over ssh. A secondary but also valuable ability of this type of setup is online file storage that is on hardware you own (or more specifically, not owned by someone random).

You might want to enable friends of yours to also enjoy the benefits of having online file storage, but you might not want them tinkering around inside your system with full shell access. Whatever the reason for your paranoia, scponly is a great solution.

scponly is an alternative ‘shell’ (of sorts) for system administrators who would like to provide access to remote users to both read and write local files without providing any remote execution privileges. Functionally, it is best described as a wrapper to the tried and true ssh suite of applications.

It is quite easy to install and configure scponly on Ubuntu:

$ sudo apt-get install scponly

During the package configuration step that is triggered automatically on the install, you’ll be asked if you want chroot or not.

While the warning appears to be quite dire, choosing yes has some advantages. In a chroot jail the apparent root directory is modified, which limits the user’s visibility into the filesystem – often to just their home directory. The security warning is due to the scponly implementation needing suid-root privileges in order to create the chroot jail. You need to trust that the scponly code doesn’t contain any exploitable flaws – a trade-off for the reduced filesystem visibility, which in turn increases system security. In the end, as scponly is wrapping the well known and validated ssh suite, we’re in a fairly good place.
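
If you miss the prompt or change your mind about the chroot later, the question can be re-asked by reconfiguring the package:

$ sudo dpkg-reconfigure scponly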

Next we need to uncompress the setup helper script and make it executable:

$ cd /usr/share/doc/scponly/setup_chroot
$ sudo gunzip setup_chroot.sh.gz
$ sudo chmod +x setup_chroot.sh

Use the helper script to create a chroot restricted user (frank).

$ sudo ./setup_chroot.sh

Next we need to set the home directory for this scponly user.
please note that the user's home directory MUST NOT be writeable
by the scponly user. this is important so that the scponly user
cannot subvert the .ssh configuration parameters.

for this reason, a writeable subdirectory will be created that
the scponly user can write into.

-en Username to install [scponly]
frank
-en home directory you wish to set for this user [/home/frank]

-en name of the writeable subdirectory [incoming]
backup
-e
creating /home/frank/backup directory for uploading files

Your platform (Linux) does not have a platform specific setup script.
This install script will attempt a best guess.
If you perform customizations, please consider sending me your changes.
Look to the templates in build_extras/arch.
- joe at sublimation dot org

please set the password for frank:
Enter new UNIX password:
Retype new UNIX password:
passwd: password updated successfully
if you experience a warning with winscp regarding groups, please install
the provided hacked out fake groups program into your chroot, like so:
cp groups /home/frank/bin/groups

In the above I provided a username (frank) and I accepted the defaults except for the writeable subdirectory (backup) and password.

On my test system, an Ubuntu 11.04 (Natty) desktop install, I wasn’t able to connect using scp or sftp.

$ scp testfile.txt frank@desktop:testfile.txt
frank@desktop's password:
unknown user 1001
lost connection

It turns out I was hit by a reported problem, and it was a simple matter of copying some missing files into the chroot jail:

$ sudo cp -av /lib/i386-linux-gnu/libnss_files* /home/frank/lib/i386-linux-gnu/

Now everything worked. I could scp, sftp and mount using sshfs (one of my favorite utilities).
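
As an example, mounting frank’s files locally with sshfs looks something like this (the ~/frank-files mount point is arbitrary):

$ mkdir -p ~/frank-files
$ sshfs frank@desktop: ~/frank-files

When you’re done, unmount with:

$ fusermount -u ~/frank-files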

Bonus round

If you want the writeable subdirectory to be the default directory, simply modify the system /etc/passwd file to have a double slash followed by the directory:

frank:x:1001:1001::/home/frank//backup:/usr/sbin/scponlyc

Changing the password is also supported by scponly:

$ ssh -t frank@desktop passwd

Building PDFs with ImageMagick

I’ve flipped back and forth between reading physical books and eBooks over the last couple of years. I’m currently in an eBook phase, and it may stick this time. A sale on Kobo let me grab a few I had been meaning to read for next to nothing, now that I’ve bought a few I’m more likely to buy more.

Sometimes you want to move some content into a format that can be easily read on an eReader. Let’s consider two scenarios: a) you have a paper copy of something you want to scan and convert, or b) there is a web resource that is formatted as pages but isn’t in PDF format. Under Ubuntu I like Simple Scan, which lets you easily scan multi-page documents. If dealing with a web resource, a full screen browser window and Alt-Print Screen will perform a screen capture, allowing you to save a series of pages quickly.

Simple Scan will save multiple scanned pages with filenames (Scanned Document-1.jpg) which sort nicely in order of scan. The screenshot utility uses filenames in the format “Screenshot at YYYY-MM-DD HH:MM:SS.png”, so again we have perfect alphabetic sorting in the directory. Having the files in the directory in the correct order will be helpful later on.

Now with both scanning and screen capture there will be elements in the image that we want to crop. As we’re likely dealing with tens of pages, we don’t want to have to open GIMP on each of them and edit. Enter ImageMagick – a command line friendly tool for image processing. My screen resolution is 1680×1050 and the screenshots were all 1680×1026 (due to the Ubuntu desktop title bar). The screenshot contained the browser “chrome” as well as portions of the page I didn’t want. Using GIMP I was able to determine the upper left (491,126) and lower right (1170,1026) corners of the region I wanted to keep; a little math (1170-491 = 679 wide, 1026-126 = 900 high) told me the cropped image size was 679×900. I made a copy of one of the images and called it x.png, which let me experiment to make sure I got it right.

$ convert x.png -crop 679x900+491+126 y.png

Excellent, the resulting y.png file is properly cropped. Now I want to convert all of the files in the directory, and in fact I want to mutate them in place. It turns out mogrify is the solution:

$ mogrify -crop 679x900+491+126 *.png

This will modify all of the images “in place” in the directory I’m using. For scanned images the process is pretty much the same, only the cropping dimensions will differ.
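
For example, a 300 DPI letter-size scan might call for something along these lines (the geometry here is purely illustrative – measure your own scans in GIMP as above):

$ mogrify -crop 2400x3200+75+50 Scanned*.jpg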

At this point I jumped the gun and converted all of the files in the directory into a PDF. Here is a screen capture of the PDF viewer showing a simple example to demonstrate the problem:

So while the cropped .png displays properly with no whitespace around it, the PDF clearly has additional whitespace. The ImageMagick identify utility helps explain what’s wrong here:

$ identify Screenshot\ at\ 2012-05-29\ 20\:26\:25.png
Screenshot at 2012-05-29 20:26:25.png PNG 679x900 1680x1026+491+126 8-bit DirectClass 1.263MB 0.050u 0:00.050

Ah, so the cropped image is 679×900, but it still carries the original page geometry (1680×1026 plus the crop offset) as virtual canvas metadata, and that is what ends up in the PDF. It turns out I want to apply an additional processing step to the images: +repage, which completely removes/resets the virtual canvas metadata.

$ mogrify +repage *.png

$ identify Screenshot\ at\ 2012-05-29\ 20\:26\:25.png
Screenshot at 2012-05-29 20:26:25.png PNG 679x900 679x900+0+0 8-bit DirectClass 1.263MB 0.050u 0:00.050

Now I’m ready to create a PDF file:

$ convert *.png book.pdf

This works like a charm because my files are in the correct order. The resulting PDF size is a little bit bigger than the sum of the individual image files. I did explore ways to reduce this, but all of them resulted in lower quality images in the PDF and that impacted readability.
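
For example, one way to trade quality for size is to have convert JPEG-compress the page images inside the PDF (the quality value here is just an example):

$ convert *.png -compress jpeg -quality 70 book-small.pdf

This does shrink the file, but as noted above the reduced image quality hurts readability.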