Server Upgrade Part 4: systemd and docker containers

I’ve set the BIOS on the new server to resume running after a power failure. The OS happily comes back, no problem, and docker even starts up – but none of the containers do.

Apparently more recent docker releases have support to restart your containers after boot. This sounds useful, but the version of docker in Ubuntu 16.04 is 1.12.1 – close, but not quite recent enough.

No worries, we can use systemd to sort this out. The docker documentation has some examples, but I found an article which takes a slightly different approach – one they claim is more aligned to CoreOS.

Adding our own services to systemd is simply a matter of adding a file to /etc/systemd/system/. The very first one I want to add is one to host my own registry. Now you probably don’t want to host your own registry since you could simply use the public Docker Hub, or any number of other solutions. However, I’m also the type of person who hosts their own email, so of course I’m going to host my own registry.
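A minimal unit file for this looks something like the following; the registry image, published port, and host directory are placeholders to adjust for your own setup:

```
[Unit]
Description=Docker registry container
After=docker.service
Requires=docker.service

[Service]
# don't give up if the image needs to be pulled on first start
TimeoutStartSec=0
# clean up any stale container left from a previous run (the leading '-' ignores failures)
ExecStartPre=-/usr/bin/docker stop %n
ExecStartPre=-/usr/bin/docker rm %n
# run in the foreground so systemd tracks the process; data lives on the host filesystem
ExecStart=/usr/bin/docker run --name %n \
  -p 5000:5000 \
  -v /srv/docker-registry:/var/lib/registry \
  registry:2
ExecStop=/usr/bin/docker stop %n

[Install]
WantedBy=multi-user.target
```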

The %n in the file is replaced with the ‘full unit name’, including the .service suffix. If we save this file as /etc/systemd/system/docker-registry.service, %n expands to docker-registry.service.

In a simple configuration, the local docker registry will persist data as a docker volume. I decided it was safer to map to a local file tree.

Once you’ve created the file /etc/systemd/system/docker-registry.service we need to reload the systemd daemon and start the service.
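That comes down to:

```
sudo systemctl daemon-reload
sudo systemctl start docker-registry.service
```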

If all has gone well, docker ps should show us the new running registry (it does). Now let’s make it start every boot.
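One more command takes care of that:

```
sudo systemctl enable docker-registry.service
```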

And yup, a reboot and we see the registry has started. So now we have a local registry to store our images in, and a pattern to apply to make them services that start even after a reboot (expected or unexpected).

Pushing images to a local registry is covered in the documentation. I might later want to add a certificate to the registry, but it is not required.
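For reference, pushing amounts to tagging an image with the registry’s address and pushing it; assuming the registry from the unit above is listening on localhost:5000 and the image is called myimage:

```
docker tag myimage localhost:5000/myimage
docker push localhost:5000/myimage
```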

Server Upgrade: Part 3 – email server in docker

Docker has become one of my go-to tools for managing software stacks. There are a couple of clever things that come together to make docker awesome, but the bits that matter for this purpose are the Dockerfile, which makes it easy to manage a given software stack deployment, and the containerization, which lets it run without interfering with other components. So I get a nice repeatable way to install the software I want, and I don’t have to worry about it messing with another part of the system when I deploy it.

I’d say that data files are a weakness of docker, and most people tend to map (large) portions of the host filesystem into the container to get persistence. This is where you have to be careful. I keep thinking that someone needs to invent “docker for data”, or maybe I just need to wrap my head around the right model to make it more natural.

In any case, I hadn’t installed docker yet. The Ubuntu repository doesn’t have the latest version, but it is fairly current so I’ll just stick with that vs. using the docker repository.
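Installing it from the Ubuntu repository is a one-liner:

```
sudo apt-get install docker.io
```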

I found someone who’s blazed a trail on setting up a full mail server in docker, so I’ll start with their work and modify it to meet my needs. I liked their philosophy of driving everything from config files, instead of the SQL database of configuration information that a lot of docker mail setups seem to go for. They also shared the files on GitHub, making it easy to look at.

We’ll probably want to be able to run docker as a normal user (vs. needing to sudo every time).
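That just means adding your user to the docker group:

```
sudo usermod -aG docker $USER
```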

Don’t forget to log out and back in so your user picks up the new group membership. Note: there are some interesting security implications here. Docker doesn’t yet have strong security controls, so anyone who can run containers can effectively get root on the host.

Now, this setup needs docker-compose, so we’ll install that too.
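Again, straight from the Ubuntu repository:

```
sudo apt-get install docker-compose
```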

And here is where we get some friction: their setup requires compose 1.6 or higher, and the version we just installed from the Ubuntu repository is older. Well, we’ll probably do some hacking. Let’s get the code from GitHub.
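Cloning it is the usual git dance (substitute the actual repository path):

```
git clone https://github.com/<author>/<mail-server-repo>.git
cd <mail-server-repo>
```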

So we need to create our own docker-compose.yml file; two examples are provided: one with the ELK stack included for logging, the other without. For now we’ll start without ELK as we don’t need those features (yet).

Here is my preliminary docker-compose.yml file
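The domain name, published ports, and container-side paths below are illustrative rather than exact, so adjust them to your own setup:

```
# v1 compose format (works with the older docker-compose in the Ubuntu repo)
mail:
  build: .                     # build the image locally instead of pulling from Docker Hub
  hostname: mail
  domainname: lowtek.ca
  ports:
    - "25:25"                  # SMTP
    - "143:143"                # IMAP
    - "587:587"                # submission
    - "993:993"                # IMAPS
  volumes:
    - ./maildata:/var/mail     # simple directory mapping instead of a docker volume
```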

I’ve made it work for older compose versions, and ditched the docker volume for a simple directory mapping to ./maildata – at least for now. I’ve also chosen to build my own container locally vs. pull down the dockerhub image that was provided.
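Building the image from the directory with the compose file:

```
docker-compose build
```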

This runs for a while and builds the image, getting the container ready to run.

Yup, and my concern about a local mail server was right on the money: the postfix that was pulled in when I installed logwatch is holding the mail ports open on the host, and it’s going to conflict with the mail server in the container.

So let’s get brave and remove postfix from the host.
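Out it goes:

```
sudo apt-get remove postfix
```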

Since logwatch is a dependency, it will also be removed. Now, logwatch doesn’t strictly need postfix; it really just wants a mail transport agent, so we can install a simpler mail forwarder like nullmailer and take care of that for logwatch.
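Putting logwatch back, with nullmailer satisfying the mail-transport-agent dependency:

```
sudo apt-get install nullmailer logwatch
```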

Ta-da, we’ve got logwatch installed and no postfix and thus no ports being held open.
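Time to bring the mail container up:

```
docker-compose up -d
```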

Cool, and docker ps shows that all the ports are mapped nicely. Now, referencing the README.md from the git repo, we need to create some user accounts and DKIM keys. We’ll also remember to stop the container we started just a moment ago.
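Stopping it while we do the configuration is just:

```
docker-compose stop
```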

Now we can run the container and try to email mario.

And now type in this script a line at a time:
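Something like this, assuming mario’s mailbox ends up as mario@lowtek.ca (the sending address is just for the test):

```
telnet localhost 25
HELO test.lowtek.ca
MAIL FROM:<test@lowtek.ca>
RCPT TO:<mario@lowtek.ca>
DATA
Subject: test message
From: test@lowtek.ca
To: mario@lowtek.ca

Hello mario, testing the new mail server.
.
QUIT
```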

So receiving email for mario is working. Now can we get this server to send email?

Configuring my desktop install of thunderbird to talk with the new server was simple to do. This let me read the manually sent email to the mario user, and made it easy for me to test sending email using the new docker container server.

So things are going well at this point; we’ve got a mail server that appears to be able to send and receive email. However, since mail is now running in a container, the logs are hidden from logwatch. I can either run logwatch in the container, or expose the logs from the container to the host.
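If I go with exposing the logs, it would likely just be one more volume mapping in docker-compose.yml so the container’s mail log lands on the host where logwatch can read it; the container-side log path here is a guess that depends on the image:

```
mail:
  # ... build, ports, etc. as before ...
  volumes:
    - ./maildata:/var/mail
    - ./maillogs:/var/log/mail   # surface the container's mail log on the host
```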

Stuff I still need to do, or at least think through, before switching over to this new mail setup:

  1. Let’s Encrypt certificate
  2. DKIM setup
  3. mail migration
  4. alias support
  5. logwatch
  6. smarthost configuration
  7. webmail
  8. dovecot sieve / pigeonhole

Server Upgrade: Part 2 – Basic OS setup

At this point I’ve got the hardware setup and running, but it’s a very basic install. This post is inspired by a post I came across some time ago that I felt gave some good advice. I’ll walk through the steps I took while following that article.

Starting with a clean Ubuntu 16.04.1 server install.

1st login

Fail2ban is a must-have security feature, blocking traffic when it detects repeated failed attempts to access your system.
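Installing it is one command:

```
sudo apt-get install fail2ban
```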

Now we want to make some ssh keys following the Ubuntu documentation. One key question is how big a key should we be using for reasonable security? I think the answer is 4096 bits.
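Following those docs, generating a 4096-bit RSA key on the new server is:

```
ssh-keygen -t rsa -b 4096
```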

Why create a key on the new machine? I have the opinion that unique ssh keys for unique machines are a good idea, and the ssh config file makes it really simple to manage multiple keys on your main machine (laptop).

Should you make use of a passphrase when creating your ssh key? If you really care about security, yes – you should. Otherwise anyone who manages to get their hands on your key immediately has access to everything. There is a small usability trade-off, since you need to provide that passphrase every time you want to use the key.

The ssh config file (not on the host, but on your laptop) will look like this:
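Something along these lines; the host alias, address, user, and key filename are placeholders for whatever matches your setup:

```
Host newserver
    HostName 192.168.1.10
    User youruser
    IdentityFile ~/.ssh/newserver_id_rsa
```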

Assuming you’ve copied the private key (id_rsa) from the new server you’re setting up to the laptop, make sure to chmod 600 that key file too.

Now we should be able to ssh into the new server from our laptop. Hooray, it works!

Time to lock down ssh so that key-based logins are the only way in.
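The relevant settings live in /etc/ssh/sshd_config; the key ones look like this:

```
# /etc/ssh/sshd_config
PasswordAuthentication no
ChallengeResponseAuthentication no
```

Then restart the service with sudo systemctl restart ssh so the change takes effect.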

You might want to have a shell logged in to the machine while you do this, so you can verify that things are cool AND fix stuff if there is a problem. Otherwise you’re locked out.

Time for a firewall. We’re going to use ufw because it’s simple and does the iptables setup for us.
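The whole firewall setup is a handful of commands:

```
sudo ufw allow ssh
sudo ufw allow http
sudo ufw allow https
sudo ufw enable
```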

We’re letting in ssh, http, and https traffic, and that’s it. We’re not yet running a web server, but we will at some point.

Now let’s reboot and see how things are doing; if we can’t log in, we really broke stuff. In my case, everything was still working just fine. If you did break things, grab the Ubuntu USB installer, boot a live session on the machine, and use root access there to fix the configuration files so you can get back in, or just re-install and start over.

When I did the original install, I opted to not do automatic security updates. This was a mistake so I’ll fix that now.
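The unattended-upgrades package takes care of it:

```
sudo apt-get install unattended-upgrades
sudo dpkg-reconfigure -plow unattended-upgrades
```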

Again, from the post I’m loosely following along with – they recommend logwatch, something I hadn’t used previously. Having run with it for a while now, I’ve grown to like the level of detail it pulls together into a daily email.
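Installing it:

```
sudo apt-get install logwatch
```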

Now the logwatch installation triggered a postfix install and configuration. This is probably useful for my host to have email capability, but really I want to host my main mail system inside a docker container. Hopefully this won’t cause me grief in the future as I build out the set of containers that will run email etc.

I just picked the default postfix ‘internet’ install and it appears to have done the right thing to allow the new server to send email to lowtek.ca, so that’s positive. Again, my concern is when this new machine starts hosting the lowtek.ca email inside a docker container, how will all of that work? An issue I’ll certainly cover in a future post.

 

Almost done.

At this point, things are pretty good, but…

I’m concerned by the 10 packages that aren’t being automatically upgraded. It turns out that this is easy to fix with a dist-upgrade.
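The fix:

```
sudo apt-get update
sudo apt-get dist-upgrade
```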

And of course, I continued and got all of the latest upgrades to the 16.04 tree. If you’re observant, you’ll notice that 16.04 is still back on an older kernel (4.4), which is fine since this is the LTS release.

One more reboot… then we see

Excellent. We’re current, and with the automatic updates we’ll get security patches with no additional effort. Non-security patches will not install automatically, so from time to time we’ll still need to manually pull down patches.