Server Upgrade Part 5: rsyslog

Now that mail is running in a container, logwatch is a lot less interesting because log data is not visible on the host (the container has the logs). One option is to map the log files from the container into the host logs, but this might get messy. It seems like a better idea to build out an rsyslog setup to flow logs from the container into the host.

To start with I needed to understand rsyslog a bit better. Then I came across a post that does pretty much what I’m trying to accomplish: docker containers sending logs to the host with rsyslog.

Before we configure the container, we need to get our host machine accepting rsyslog input. We’ll need to turn on remote reception over TCP or UDP on the host; I’ll stick with just TCP for my initial setup.

Edit /etc/rsyslog.conf; you’ll find a comment near the top about TCP reception. Uncomment the pair of lines below it:
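The exact form of the pair depends on the rsyslog version; on the Ubuntu 16.04 build it should read something like this after uncommenting (514 is the default port):

    # provides TCP syslog reception
    module(load="imtcp")
    input(type="imtcp" port="514")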

and then restart the service to pick up the new config
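On a systemd-based host like Ubuntu 16.04 that’s:

    sudo systemctl restart rsyslog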

I’m using a firewall on the host, so I’ll also have to open it up so that things can reach this new port. Whenever we do firewall stuff, we should pause and think about the security implications.
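A sketch of the firewall change, assuming ufw is the frontend in use and rsyslog is on the default port 514:

    sudo ufw allow 514/tcp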

In my case, the machine that has the port open will be behind my router, which also runs a firewall. So while we’re punching a hole in the firewall, we actually have another firewall protecting us from the big bad internet.

At this point our host machine has rsyslog running and waiting for traffic. Now let’s take a look at the mail server container and change it to flow logs to the host rsyslog.

It turns out the client side is really easy, but you do have to be aware of whether you’re using UDP or TCP. Inside the container, we modify our Dockerfile to create an /etc/rsyslog.d/10-rsyslog.conf file that looks like:
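A minimal sketch, where myhost stands in for the actual name of the rsyslog host:

    *.* @@myhost:514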

If you are using TCP, use the double @@; for UDP, just a single @. You also need to use the actual hostname of the host machine (the one we configured with rsyslog just above). We need to rebuild and restart the container to activate the rsyslog configuration.
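Assuming the compose-managed setup from Part 3, that’s roughly:

    docker-compose build
    docker-compose up -d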

Back on the host, data shows up in the logs tagged with the container’s hostname. Looking in /var/log/syslog we can see the rsyslogd startup from the mail container:
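Something like this line (the timestamp, version, and pid will vary):

    Nov 13 09:05:01 mail rsyslogd: [origin software="rsyslogd" swVersion="8.16.0" x-pid="34" x-info="http://www.rsyslog.com"] start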


There are plenty of other log messages related to the mail container start up as well.

We can also manually emit some log messages from inside the container, by shelling in and running logger:
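Assuming the container is named mail:

    docker exec -it mail bash    # shell into the container
    logger "hello from the mail container"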

and in /var/log/syslog on the host we see:
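A line roughly like this (the tag reflects the user that ran logger):

    Nov 13 09:10:12 mail root: hello from the mail container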

Logwatch should be a lot more interesting now.

Server Upgrade Part 4: systemd and docker containers

I’ve set the BIOS on the new server to resume running after a power failure. The OS happily comes back no problem, and docker even starts up – but none of the containers.

Apparently docker 1.2 added support for restarting your containers after boot. This sounds useful, but the version of docker in Ubuntu 16.04 is 1.12.1, close but not quite recent enough.

No worries, we can use systemd to sort this out. The docker documentation has some examples, but I found an article which takes a slightly different approach – one they claim is more aligned to CoreOS.

Adding our own services to systemd is simply a matter of adding a file to /etc/systemd/system/. The very first one I want to add is one to host my own registry. Now you probably don’t want to host your own registry since you could simply use the public dockerhub, or any number of other solutions. However, I’m also the type of person who hosts their own email, so of course I’m going to host my own registry.
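Here’s a sketch of the unit file in that CoreOS-ish style; registry:2 is the official registry image and 5000 its default port, and the stop/rm/pull lines are the usual cleanup-before-run dance:

    [Unit]
    Description=Docker registry
    After=docker.service
    Requires=docker.service

    [Service]
    TimeoutStartSec=0
    Restart=always
    ExecStartPre=-/usr/bin/docker stop %n
    ExecStartPre=-/usr/bin/docker rm %n
    ExecStartPre=/usr/bin/docker pull registry:2
    ExecStart=/usr/bin/docker run --name %n -p 5000:5000 registry:2

    [Install]
    WantedBy=multi-user.target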

The %n in the file is replaced with the ‘full unit name’. If we save this file as /etc/systemd/system/docker-registry.service, %n expands to docker-registry.service.

In a simple configuration, the local docker registry will persist data as a docker volume. I decided it was safer to map to a local file tree.
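In the unit file that just means adding a volume mapping to the run line; /srv/registry-data is my illustrative choice of host path, and /var/lib/registry is where the registry image keeps its data:

    ExecStart=/usr/bin/docker run --name %n -p 5000:5000 \
        -v /srv/registry-data:/var/lib/registry registry:2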

Once we’ve created the file /etc/systemd/system/docker-registry.service, we need to reload the systemd daemon and start the service:
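That’s just:

    sudo systemctl daemon-reload
    sudo systemctl start docker-registry.service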

If all has gone well, docker ps should show us the new running registry (it does). Now let’s make it start every boot.
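Enabling the unit does the trick:

    sudo systemctl enable docker-registry.service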

And yup, a reboot and we see the registry has started. So now we have a local registry to store our containers in, and a pattern to apply to make them services that start even after a reboot (expected or unexpected).

Pushing images to a local registry is covered in the documentation. I might later want to add a certificate to the registry, but it is not required.
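For reference, the basic flow is tag-then-push; myimage and myhost are placeholders here, and note that without a certificate any remote docker daemon will need this registry in its insecure-registries list:

    docker tag myimage myhost:5000/myimage
    docker push myhost:5000/myimage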

Server Upgrade Part 3: email server in docker

Docker has become one of my go-to tools for managing software stacks. There are a couple of clever things that come together to make docker awesome, but the bits that work well for this purpose are the Dockerfile, which makes it easy to manage a given software stack deployment, and containerization, which lets you run it without interacting with other components. So I get a nice repeatable way to install the software I want, and I don’t have to worry about it messing with another part of the system when I deploy it.

I’d say that data files are a weakness of docker, and most people tend to map (large) portions of the host filesystem into the container to get persistence. This is where you have to be careful. I keep thinking that someone needs to invent “docker for data”, or maybe I just need to wrap my head around the right model to make it more natural.

In any case, I hadn’t installed docker yet. The Ubuntu repository doesn’t have the latest version, but it is fairly current so I’ll just stick with that vs. using the docker repository.
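On Ubuntu 16.04 the package is docker.io:

    sudo apt-get install docker.io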

I found someone who’s blazed a trail on setting up a full mail server in docker, so I’ll start with that and modify it to meet my needs. I liked their philosophy of basing it on config files instead of a SQL DB of configuration information, which a lot of mail setups in docker seem to go for. They also shared the files on github, making it easy to look at.

We’ll probably want to be able to run docker as a normal user (vs. needing to sudo every time).
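The usual approach is adding your user to the docker group:

    sudo usermod -aG docker $USER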

Don’t forget to log out and back in so your user picks up the new group membership. Note: there are some interesting security implications here. Docker doesn’t yet have strong security controls, so if you can run a container you basically have root on the host.

Now, this setup needs docker-compose, so we’ll install that too.
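Also from the Ubuntu repository:

    sudo apt-get install docker-compose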

And here is where we get some friction: their setup requires compose 1.6 or higher. Well, we’ll probably do some hacking. Let’s get the code from github:
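Something like this, with a placeholder URL standing in for the actual repository:

    git clone https://github.com/someuser/docker-mailserver.git
    cd docker-mailserver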

So we need to create our own docker-compose.yml file; two examples are provided, one with the ELK stack included for logging and one without. For now we’ll start without ELK, as we don’t need those features (yet).

Here is my preliminary docker-compose.yml file:
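Approximately the following, written in the old v1 format so older compose versions accept it; the hostname, domain, port list, and container paths are illustrative:

    mail:
      build: .
      hostname: mail
      domainname: example.com
      ports:
        - "25:25"
        - "143:143"
        - "587:587"
        - "993:993"
      volumes:
        - ./maildata:/var/mail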

I’ve made it work for older compose versions, and ditched the docker volume for a simple directory mapping to ./maildata – at least for now. I’ve also chosen to build my own container locally vs. pull down the dockerhub image that was provided.
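Then kick off a local build:

    docker-compose build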

This runs for a while and builds the image, getting the container ready to run.

Yup, and my concern about a local mail server was right on the money: the postfix that was pulled in when I installed logwatch (you may recall) is going to conflict with my mail server in a container over port 25.

So let’s get brave and remove postfix:
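One apt command:

    sudo apt-get remove postfix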

Since logwatch depends on it, logwatch will also be removed. Now, logwatch doesn’t strictly need postfix; it really just wants a mail transport agent, so we can install a simpler mail forwarder like nullmailer and take care of that for logwatch:
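Installing nullmailer alongside logwatch satisfies the mail-transport-agent dependency:

    sudo apt-get install nullmailer logwatch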

Ta-da, we’ve got logwatch installed and no postfix and thus no ports being held open.
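With port 25 free, we can bring the mail container up:

    docker-compose up -d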

Cool, and docker ps shows that all the ports are mapped nicely. Now, referencing the README.md from the git repo, we need to create some user accounts and DKIM keys. We’ll also remember to stop the container we started just a moment ago.
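My rough notes for this step; the account-creation one-liner follows the pattern the project’s README used at the time, with example.com, the password, and the mail image name as placeholders, so treat it as a sketch:

    docker-compose stop
    docker run --rm \
      -e MAIL_USER=mario@example.com -e MAIL_PASS=secretpassword -ti mail \
      /bin/sh -c 'echo "$MAIL_USER|$(doveadm pw -s SHA512-CRYPT -u $MAIL_USER -p $MAIL_PASS)"' \
      >> config/postfix-accounts.cf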

Now we can run the container and try to email mario
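A sketch, assuming port 25 is mapped through to the host:

    docker-compose up -d
    telnet localhost 25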

And now type in this script a line at a time
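Something along these lines, with example.com standing in for the real domain (the lone dot ends the message):

    HELO example.com
    MAIL FROM:<someone@example.com>
    RCPT TO:<mario@example.com>
    DATA
    Subject: testing

    Hello mario, testing the new mail server.
    .
    QUIT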

So receiving email for mario is working. Now can we get this server to send email?

Configuring my desktop install of Thunderbird to talk with the new server was simple. This let me read the manually sent email to the mario user, and made it easy to test sending email through the new docker container server.

So things are going well at this point: we’ve got a mail server that appears to be able to send and receive email. However, since mail is now running in a container, the logs are hidden from logwatch. I can either run logwatch in the container, or expose the logs from the container to the host.

Stuff I still need to do, or at least think through, before switching over to this new mail setup:

  1. Let’s Encrypt certificate
  2. DKIM setup
  3. mail migration
  4. alias support
  5. logwatch
  6. smarthost configuration
  7. webmail
  8. dovecot sieve / pigeonhole