Server Upgrade Part 4: systemd and docker containers

I’ve set the BIOS on the new server to resume running after a power failure. The OS happily comes back up, and docker even starts – but none of the containers do.

Apparently newer releases of docker have support to restart your containers after boot. This sounds useful, but the version of docker in Ubuntu 16.04 is 1.12.1 – close, but not quite recent enough.

No worries, we can use systemd to sort this out. The docker documentation has some examples, but I found an article which takes a slightly different approach – one they claim is more aligned to CoreOS.

Adding our own services to systemd is simply a matter of adding a file to /etc/systemd/system/. The very first one I want to add is one to host my own registry. Now you probably don’t want to host your own registry since you could simply use the public dockerhub, or any number of other solutions. However, I’m also the type of person who hosts their own email, so of course I’m going to host my own registry.
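
A sketch of what the file can look like, using the official registry:2 image and a placeholder host path of /opt/registry/data for the data directory:

    [Unit]
    Description=Docker Registry
    After=docker.service
    Requires=docker.service

    [Service]
    TimeoutStartSec=0
    Restart=always
    # %n expands to the full unit name, so the container is named after the unit
    ExecStartPre=-/usr/bin/docker stop %n
    ExecStartPre=-/usr/bin/docker rm %n
    ExecStart=/usr/bin/docker run --name %n -p 5000:5000 -v /opt/registry/data:/var/lib/registry registry:2
    ExecStop=/usr/bin/docker stop %n

    [Install]
    WantedBy=multi-user.target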

The %n in the file is replaced with the ‘full unit name’. If we save this file as /etc/systemd/system/docker-registry.service the full unit name will be docker-registry.service.

In a simple configuration, the local docker registry will persist data as a docker volume. I decided it was safer to map to a local file tree.

Once the file /etc/systemd/system/docker-registry.service has been created, we need to reload the systemd daemon and start the service.
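
That’s just:

    sudo systemctl daemon-reload
    sudo systemctl start docker-registry.service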

If all has gone well, docker ps should show us the new registry running (it does). Now let’s make it start on every boot.
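
Enabling the unit takes care of that:

    sudo systemctl enable docker-registry.service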

And yup, a reboot and we see the registry has started. So now we have a local registry to store our containers in, and a pattern to apply to make them services that start even after a reboot (expected or unexpected).

Pushing images to a local registry is covered in the documentation. I might later want to add a certificate to the registry, but it is not required.
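
The short version (assuming the registry is listening on localhost:5000 and some hypothetical image called myimage) looks like:

    # tag the local image with the registry's address, then push it
    docker tag myimage localhost:5000/myimage
    docker push localhost:5000/myimage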

Server Upgrade: Part 3 – email server in docker

Docker has become one of my go-to tools for managing software stacks. There are a couple of clever things that come together to make docker awesome, but the bits that matter for this purpose are the Dockerfile, a configuration file that makes it easy to manage a given software stack deployment, and the containerization, which lets you run it without it interacting with other components. So I get a nice, repeatable way to install the software I want, and I don’t have to worry about it messing with another part of the system when I deploy it.

I’d say that data files are a weakness of docker, and most people tend to map (large) portions of the host filesystem into the container to get persistence. This is where you have to be careful. I keep thinking that someone needs to invent “docker for data”, or maybe I just need to wrap my head around the right model to make it more natural.

In any case, I hadn’t installed docker yet. The Ubuntu repository doesn’t have the latest version, but it is fairly current so I’ll just stick with that vs. using the docker repository.
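
Installing from the Ubuntu repository is just the docker.io package:

    sudo apt-get update
    sudo apt-get install docker.io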

I found someone who’s blazed a trail on setting up a full mail server in docker, so I’ll start with their work and modify it to meet my needs. I liked their philosophy of basing it on config files rather than a SQL database of configuration, which a lot of mail setups in docker seem to go for. They also shared the files on github, making it easy to look at.

We’ll probably want to be able to run docker as a normal user (vs. needing to sudo every time).
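
That’s a matter of adding your user to the docker group:

    sudo usermod -aG docker $USER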

Don’t forget to log out then back in to get your user to have the right group permissions. Note: there are some interesting security implications here. Docker doesn’t yet have strong security controls, so if you can run a container you can basically have root on the host.

Now, this setup needs docker-compose, so we’ll install that too.
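
Again straight from the Ubuntu repository:

    sudo apt-get install docker-compose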

And here is where we get some friction: their setup requires compose 1.6 or higher, and the Ubuntu package is older than that. Well, we’ll probably do some hacking anyway. Let’s get the code from github.
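
Something along these lines – the URL here is a placeholder for whichever repository you’re starting from:

    git clone https://github.com/<user>/<mail-server-repo>.git
    cd <mail-server-repo>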

So we need to create our own docker-compose.yml file; two examples are provided – one with the ELK stack included for logging, the other without. For now we’ll start without ELK, as we don’t need those features (yet).

Here is my preliminary docker-compose.yml file
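
Roughly like this – written in the old (pre-“version:”) compose format, with example.com standing in for my real domain and the container’s mail store assumed to live at /var/mail:

    mail:
      build: .
      hostname: mail
      domainname: example.com
      ports:
        - "25:25"     # SMTP
        - "143:143"   # IMAP
        - "587:587"   # submission
        - "993:993"   # IMAPS
      volumes:
        - ./maildata:/var/mail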

I’ve made it work with older compose versions, and ditched the docker volume in favour of a simple directory mapping to ./maildata – at least for now. I’ve also chosen to build the container locally rather than pull down the dockerhub image that was provided.
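
With that in place, building the image is a single compose command:

    docker-compose build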

This runs for a while and gets the container ready to run (it builds the image).

Yup, and my concern about a local mail server was right on the money – the postfix on the host (which, if you recall, was pulled in when I installed logwatch) is going to conflict with my mail server in a container.

So let’s get brave
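
And remove postfix from the host:

    sudo apt-get remove postfix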

Since logwatch depends on it, logwatch gets removed along with postfix. But logwatch doesn’t strictly need postfix – it really just wants a mail transport agent – so we can install a simpler mail forwarder like nullmailer and take care of that for logwatch.
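
Something like:

    sudo apt-get install nullmailer logwatch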

Ta-da, we’ve got logwatch installed and no postfix and thus no ports being held open.

Cool, and docker ps shows that all the ports are mapped nicely. Now, referencing the README.md from the git repo, we need to create some user accounts and DKIM keys. We’ll also remember to stop the container we started just a moment ago.
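
The account and DKIM creation steps come straight from that README; stopping the container is just:

    docker-compose stop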

Now we can run the container and try to email mario
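
Bringing it back up is one more compose command, and then we can poke at the SMTP port directly (assuming port 25 is mapped to the host as in the compose file above):

    docker-compose up -d
    telnet localhost 25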

And now type in this script a line at a time
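
The script is just a manual SMTP conversation, something along these lines (example.com again standing in for the real domain):

    HELO test.example.com
    MAIL FROM:<someone@example.com>
    RCPT TO:<mario@example.com>
    DATA
    Subject: test message

    Hello mario, just testing the new mail server.
    .
    QUIT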

So receiving email for mario is working. Now can we get this server to send email?

Configuring my desktop install of thunderbird to talk with the new server was simple to do. This let me read the manually sent email to the mario user, and made it easy for me to test sending email using the new docker container server.

So things are going well at this point: we’ve got a mail server that appears to be able to send and receive email. However, since mail is now running in a container, the logs are hidden from logwatch. I can either run logwatch in the container, or expose the logs from the container to the host.
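
If I go the expose-the-logs route, a sketch would be another directory mapping in docker-compose.yml, assuming the container writes its mail logs somewhere like /var/log/mail:

    volumes:
      - ./maildata:/var/mail
      # map the container's mail logs out to the host so logwatch can read them
      - ./maillogs:/var/log/mail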

Stuff I still need to do, or at least think through, before switching over to this new mail setup:

  1. Let’s encrypt certificate
  2. DKIM setup
  3. mail migration
  4. alias support
  5. logwatch
  6. smarthost configuration
  7. webmail
  8. dovecot sieve / pigeonhole

Installing custom firmware on Nexus 5

Until very recently the phone in my pocket was a Nexus 4 that I bought used 2+ years ago; it’s seen a couple of battery changes and a full brain transplant (motherboard swap). It’s running Android Marshmallow, but there is a Nougat version available – so it still feels current. Phones feel like they’ve hit the same plateau that computers have: sure, there are newer and faster models, but for most needs a model a couple of years old is just fine.

The Nexus 5 hit my magic price point of ~$160 on the used market, even for the 32GB version – making it too tempting an upgrade for me to pass on. The stock firmware only offers 6.0.1 (Marshmallow), but the Nexus 5 is still a well supported device in the custom ROM space. I’m a big fan of CyanogenMod, but that’s come to a fairly spectacular end recently. I’m eagerly waiting for its successor, LineageOS, to get their infrastructure in place and regular builds happening.

Installing custom firmware on the Nexus 5 is similar to any other well supported Nexus device – Google really did a good thing making the hardware friendly to developers. Read on for the detailed steps.
