Ubuntu 16.04 to 18.04 rebuild with new SSD

Way back in late 2016 I started building a new server to replace the aging machine that runs this website, email, etc. I’m still slowly working towards actually swapping all of the services I run over to the new hardware, a process slowed by the fact that I’m trying to refactor everything to be container-based – and that I simply don’t seem to have a lot of free time to hack on these things. It has been further complicated by the original motherboard failing to work, and by my desire to upgrade the SSD to a larger one.

Originally I had a 60GB SSD, which I got at a great price (about $60 if I recall). Today SSD prices have fallen through the floor, making it seem sensible to start building 1TB RAID arrays out of (highly reliable) SSDs. A little before Christmas I picked up a 120GB Kingston SSD for around $30. Clearly it was time to do an upgrade.

Back in 2016 the right choice was Ubuntu 16.04; in 2019 the right version is 18.04 LTS. Now, while I haven’t moved all the services I need to the new server (in fact, I’ve moved very few), there are some services which do run on the new server, and my infrastructure does rely on them. This means that I want to minimize the downtime of the new server while still achieving a full clean install.

Taking the server offline for ‘hours’ seemed like a bad idea. However, since my desktop is a similar enough machine to the new server hardware, it seemed like a good idea to use it to build the new OS on the new SSD. In fact, my bright idea was to do the install with the new SSD inside a USB enclosure.

Things went smoothly: download 18.04.2, create a bootable USB stick, put the SSD into a drive enclosure, boot from the USB stick and do an install to the external SSD. Initially I picked a single snap, the docker stuff – I later changed my mind about this and re-did the install. The shell/curses based installer was new to me, but easy to use. The only option I picked was to pre-install the OpenSSH server, because of course I want one of those.

Attempting to boot from the newly installed SSD just didn’t work. It was hanging my hardware just after POST. This was very weird. Removing the SSD from the enclosure and doing a direct SATA connection worked fine – and it proved that the install was good to go.

For the next part, I’m using my old OS setup post as a reference.

Either 18.04.2 is recent enough, or the install process has changed – but I didn’t need to do
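
    # the usual package refresh (the apt-get equivalents work just as well)
    sudo apt update
    sudo apt upgrade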

since doing the above made no changes to the system as installed. Still, it doesn’t hurt to make sure you are current.

Of course, we still want fail2ban
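
    # the stock Ubuntu package enables the sshd jail out of the box
    sudo apt install fail2ban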

I didn’t create a new SSH key, but copied in the keys from the old system. I did need to create the .ssh directory.
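
    # create the directory and lock it down to just this user
    mkdir ~/.ssh
    chmod 700 ~/.ssh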

Once you’ve copied in the id_rsa files, don’t forget to create the authorized_keys file
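
Assuming the keys landed in ~/.ssh, it’s just:

    cd ~/.ssh
    cat id_rsa.pub >> authorized_keys
    chmod 600 id_rsa authorized_keys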

It’s a good idea now to verify that we can access the machine via ssh. Once we know we can, let’s go make ssh more secure by blocking password-based authentication. The password is still useful to have for physical (console) logins.
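
The setting lives in /etc/ssh/sshd_config:

    # in /etc/ssh/sshd_config
    PasswordAuthentication no

    # then restart sshd to pick up the change
    sudo systemctl restart ssh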

Let’s also set up a firewall (ufw)
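
Something along these lines (adjust the ports to taste):

    sudo ufw allow 22/tcp comment 'ssh'
    sudo ufw allow 80/tcp comment 'http'
    sudo ufw allow 443/tcp comment 'https'
    sudo ufw enable
    sudo ufw status verbose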

Now, as we will almost exclusively be using docker to host things, docker will be managing iptables rules for us in parallel with ufw. So strictly speaking we don’t need 80 or 443 enabled, because docker will open them when we need it to. I also discovered that ufw lets you add a comment to each rule – a nice way to record why you opened (or closed) a port.

Staying on the security topic, let’s install logwatch. This requires a mail system – and as we don’t want postfix (that would interfere with the plan to put email in a docker container), we are going to use nullmailer, set up to send email to our mail server (hosted on the old server until we migrate).
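
Both come from the Ubuntu repositories; nullmailer just needs to be pointed at the smarthost (the hostname below is a placeholder for my real mail server):

    sudo apt install logwatch nullmailer

    # /etc/nullmailer/remotes -- mail.example.com stands in for the old server
    mail.example.com smtp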

Don’t forget to configure logwatch to email us daily
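
One easy way is to tweak the daily cron job that the package installs (the address below is a placeholder):

    # /etc/cron.daily/00logwatch
    /usr/sbin/logwatch --output mail --mailto me@example.com --detail low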

For docker we’re going to hook up to the docker.com repositories and install the CE version. To do this we need some additional support added to our base install.
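
Roughly, following the docker.com install instructions of the day:

    sudo apt install apt-transport-https ca-certificates curl software-properties-common
    curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
    sudo add-apt-repository \
      "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"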

We can now install docker.
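
    sudo apt update
    sudo apt install docker-ce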

I also like to be able to run docker as ‘me’ vs needing to sudo for every command
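
    # the docker group is created by the package; add yourself to it
    sudo usermod -aG docker $USER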

Don’t forget that you need to log out and back in to have your shell environment pick up the new group membership.

There are a few more annoyances we’re going to address. Needing my password for every sudo is inconvenient. I’m willing to trade off some security to make this more convenient – however, be aware we are making a trade-off here.

Don’t be me: use visudo so that you verify you’re modifying /etc/sudoers correctly. Exempting your user from the password prompt is easy, just add a line to the end of the file.
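
    # added via 'sudo visudo' -- myuser stands in for your actual username
    myuser ALL=(ALL) NOPASSWD:ALL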

Another minor nit: while the install asked me for a machine name, the /etc/hosts file didn’t get the name added as expected. Just adding it (machineName) to the end of the first line will fix things.
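
    # first line of /etc/hosts after the fix
    127.0.0.1 localhost machineName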

Things are almost perfect, but we haven’t set up automatic updates yet.
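
    sudo apt install unattended-upgrades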

Then configure the unattended upgrades
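
    sudo dpkg-reconfigure --priority=low unattended-upgrades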

Next is ensuring that the unattended upgrades actually get run: modify the file /etc/apt/apt.conf.d/20auto-upgrades to look like the following
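
    // /etc/apt/apt.conf.d/20auto-upgrades
    APT::Periodic::Update-Package-Lists "1";
    APT::Periodic::Unattended-Upgrade "1";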

We can test this using a dry-run
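
    sudo unattended-upgrade --dry-run --debug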

I did discover an interesting problem with docker and shutdowns. The automatic reboot uses shutdown to initiate a reboot in the middle of the night. When this happens you can cancel the shutdown, but doing so seems to cause containers with a --restart policy to not be restarted.

It does appear that if you allow the shutdown to happen (ie: do not cancel it), then the docker containers do restart as expected.

I can’t really explain the shutdown and docker interaction issue. Nor can I explain why installing Ubuntu onto an external drive resulted in a drive that caused the machine to hang right after POST.

Edit

Adding a 3rd-party repository to unattended-upgrades. Since I elected to pull docker from docker.com vs. the package provided via Ubuntu, I want to also have that repository be part of the automatic upgrades.

I first found this post which provides a path to figuring it out, but I found a better description which I used to come up with my solution.

This gives us the two bits of magic we need to add to /etc/apt/apt.conf.d/50unattended-upgrades – so we modify the Allowed-Origins section to look like:
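
    Unattended-Upgrade::Allowed-Origins {
            "${distro_id}:${distro_codename}";
            "${distro_id}:${distro_codename}-security";
            // the Docker repo -- origin and suite as reported by 'apt-cache policy docker-ce'
            "Docker:${distro_codename}";
            // ...any other stock entries left as they were
    };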

Again, we can test that things are cool with
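
    sudo unattended-upgrade --dry-run --debug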

This will ensure that docker updates get applied automatically.

Switching from DSL to Cable

I’ve been on DSL forever. I started out on Bell, have been on NCF, and most recently TekSavvy. I’ve had my trials and tribulations with DSL, and have a collection of DSL modems (some are backup, some are bad, some were sensitive to line conditions).

Cable has always been a faster alternative, but it meant I needed to pay a cable install fee and switch technologies in general. Also, a static IP wasn’t possible on cable – and having self-hosted lowtek.ca for a long time, I’ve always felt a bit tied to DSL because it gave me a static IP.

DSL can be fast, but not in my area, which seems to have been left behind for faster connectivity. The highest speed I could get was 15Mbps down, 1Mbps up. Now, 15 down is great – I can stream HD Netflix without any real problems. The internet feels fast enough.

I have to say, I really appreciate that Google has built a good enough speed test that is easy to use.

One motivating factor was my desire to stop paying Bell for my land line. It’s nearly $50 a month and we barely use it. Sure, I could switch to a VoIP provider, but then I’d still have to pay the dry loop cost, and it sounded like I’d probably experience a service outage (of up to 5 days) when the line switched.

Moving to cable means losing the static IP. It also means that outgoing port 25 is going to be blocked and this means my self hosted email server will have problems.

I’ve sketched out a solution for static IP hosting. I’ll try to write that up in the future once I’ve done it. For now, because on cable your IP rarely changes – I’m just pretending that the IP I have is a static IP.

For sending email, I needed to route all my mail via TekSavvy, treating them as a smarthost. My email setup is postfix, so there is literally a one-line change to /etc/postfix/main.cf
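
Something like this (check TekSavvy’s documentation for the current server name):

    # /etc/postfix/main.cf
    relayhost = [smtp.teksavvy.com]

The square brackets tell postfix to use that host directly instead of doing an MX lookup; a quick postfix reload picks up the change.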

This works because TekSavvy allows anonymous SMTP relaying from inside the network they control. The effect is that all outgoing email is forced to go to their server, which in turn sends it out.

Now, modern email servers do additional trust checks; one of these is the Sender Policy Framework (SPF). Configuring your SPF is done via a TXT record in your DNS. While the relayhost was working, I was seeing a warning when checking email sent to Gmail (other email providers check SPF too).

It took a while for me to figure out how to get my SPF record set up correctly. I got a bit lucky, as I was reading https://support.google.com/a/answer/33786?hl=en which pointed at _spf.google.com as the Google SPF record holder. It turns out TekSavvy adopted the same naming: _spf.teksavvy.com. Since I relay through TekSavvy, my SPF record needs to reference theirs, so finding this made it an easy change to my DNS TXT record for SPF.
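
The resulting record is a single TXT entry, roughly like this (whether you end with ~all or -all is your own policy call):

    lowtek.ca.  IN  TXT  "v=spf1 include:_spf.teksavvy.com ~all"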

I will point at MXToolBox as a great web-based tool for sorting out all sorts of email issues.

Now email was not only working via the smarthost, but my SPF record was set up correctly. I’m still experiencing delays when sending to Gmail, though apparently not to other sites. From looking at the headers, it seems TekSavvy can at times (often) delay delivery to Gmail. This is frustrating, but there are other paths to a solution if it becomes a big problem.

Now that email was sorted out, switching to cable was really easy. The cable box arrived and was “installed” by finding the live cable in my basement by the power panel and plugging the box in. It turns out that since the cable comes from a box at the end of my lawn and the buried cable to my house (which is 20 or more years old) is in good shape, I get fantastic signal strength. The tech had to install an attenuator to reduce the signal to the happy range where the modem works well.

Switching from DSL with a static IP to cable with a rarely changing IP was a simple matter of swapping the WAN cable on my router from one box to the other. I had to reconfigure my router to use “Automatic” instead of “PPPoE”, and boom, I was on the internet again. Visiting https://www.whatismyip.com/ gave me the new IP address; a simple DNS change to use that as the address for lowtek.ca and I was back.

At this point all I’ve lost is reverse DNS: the check fails because the IP that lowtek.ca resolves to does not answer to lowtek.ca when you look up that IP. This matters more for sending email than receiving, and since I’m sending via TekSavvy it doesn’t matter as much. I still want a more ‘proper’ static IP to be assigned to lowtek.ca – more on that in a future post.

Boom – cable is just faster than DSL. With the added bonus that changing speeds has zero admin cost: if I want to move to 2x faster, it’s another $7 a month. On DSL I was at the fastest speed available at my location. Cable is costing me about $14 more a month, but the phone line savings will make up for that – once I get past the hump of buying a new cable modem and a VoIP ATA box.

 

Server Upgrade Part 5: rsyslog

Now that mail is running in a container, logwatch is a lot less interesting because log data is not visible on the host (the container has the logs). One option is to map the log files from the container into the host logs, but this might get messy. It seems like a better idea to build out an rsyslog setup to flow logs from the container into the host.

To start with I needed to understand rsyslog a bit better. Then I came across a post that does pretty much what I’m trying to accomplish, docker containers sending logs to the host with rsyslog.

Before we configure the container, we need to get our host machine accepting rsyslog input. We’ll need to turn on either TCP or UDP reception on the host. I’ll stick with just TCP for my initial setup.

Edit /etc/rsyslog.conf; you’ll find a comment near the top about TCP reception. Uncomment the pair of lines below it
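
    # provides TCP syslog reception
    module(load="imtcp")
    input(type="imtcp" port="514")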

and then restart the service to pick up the new config
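
    sudo systemctl restart rsyslog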

I’m using a firewall on the host, so I’ll also have to open up the new port so that things can reach it. Whenever we do firewall stuff, we should pause and think about the security implications.

In my case, the machine that has the port open will be behind my router, which also runs a firewall. So while we’re punching a hole in the firewall, we actually have another firewall protecting us from the big bad internet.
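
Punching the hole is one line (you could restrict the source to just the docker network if you want to be tighter):

    sudo ufw allow 514/tcp comment 'rsyslog from containers'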

At this point our host machine has rsyslog running and waiting for traffic. Now let’s take a look at the mail server container and change it to flow logs to the host rsyslog.

It turns out the client side is really easy, but you do have to be aware of whether you’re using UDP or TCP. Inside the container, we modify our Dockerfile to create an /etc/rsyslog.d/10-rsyslog.conf file that looks like:
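
    # /etc/rsyslog.d/10-rsyslog.conf inside the container
    # 'loghost' is a stand-in for the actual hostname of the host machine
    *.*  @@loghost:514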

If you are using TCP, use the double @@ – for UDP, just a single @. You also need to use the actual hostname of the host machine (the one we configured with rsyslog just above). We need to rebuild and restart the container to activate the rsyslog configuration.
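
For me that was roughly the following, with ‘mail’ standing in for the real image/container name and the run flags matching whatever you normally start it with:

    docker build -t mail .
    docker stop mail && docker rm mail
    docker run -d --name mail --restart unless-stopped mail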

Back on the host, we see data showing up with the container hostname in the logs. If we look on the host machine in /var/log/syslog, we can see rsyslogd starting up in the mail container.

 

There are plenty of other log messages related to the mail container start up as well.

We could manually emit some log messages from the client inside the container, by shelling into the container and running logger.
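
    # shell into the container ('mail' standing in for its name)...
    docker exec -it mail /bin/bash
    # ...and from inside, emit a test message
    logger "hello from the mail container"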

and in /var/log/syslog on the host we see the test message appear, tagged with the container’s hostname.

Logwatch should be a lot more interesting now.