Installing docker-mailserver

Everyone should have their own domain name (or several). Having a website on your domain is easy and a sensible use of that domain name. Almost no one should run their own email server; it’s complicated, and it makes you responsible for all of the problems.

There are lots of providers out there that run email services and allow you to bring your own domain. ProtonMail would be a good choice; you can bring a custom domain and still use ProtonMail. Google and Microsoft offer alternatives if you want to go that route, and both support custom domains.

If you are still thinking of running your own mail server, then grab your tinfoil hat and let’s look at the best way to operate a mail server in the age of containers. I’ve run my own email for a long time, mostly following the Ubuntu setup. Using docker-mailserver solves all of the email problems with a single container.

I will mention there are many alternatives: Mailu, iRedMail, etc. The docker-mailserver project stands out for me because it has avoided database-based configuration and stuck with ‘files on disk’ as the model.

This is a long overdue mail upgrade. I started doing this way back in 2017, but never really finished the work. The SSD rebuild disrupted how I was doing things, and changing email is a little scary. The hard drive that stores my email is very old: a Seagate Barracuda 40GB (ST340014A). The SMART information says that the Power Cycle Count is only 502, but the Power On Hours is an astounding 130442 (that is 14.89 years). Every stat is in pre-fail or old age; it is definitely time to move my email off that drive.

Before starting, take the time to read through the documentation. Once you think you’re ready to start installing things, the README is the right path to follow. I’m taking a slightly different path than the recommended docker-compose one, and will build out a basic docker deployment.

First I grabbed two files from the project repository: docker-compose.yml and mailserver.env.

I also grabbed the setup script for v10.0.0, as I intend to use the ‘stable’ version rather than ‘edge’. It is important to get the matching setup.sh script for the version you are deploying.
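Something along these lines will fetch them; the URLs assume the layout of the v10.0.0 tag in the docker-mailserver GitHub repository:

    wget https://raw.githubusercontent.com/docker-mailserver/docker-mailserver/v10.0.0/docker-compose.yml
    wget https://raw.githubusercontent.com/docker-mailserver/docker-mailserver/v10.0.0/mailserver.env
    wget https://raw.githubusercontent.com/docker-mailserver/docker-mailserver/v10.0.0/setup.sh
    chmod a+x ./setup.sh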

I used the docker-compose.yml file as a guide for configuring my Makefile-based docker approach. Most of the create options are a direct mimic of the compose file.
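For reference, a docker run that mirrors the compose file looks roughly like this. The container-side mount points come from the docker-mailserver documentation; the hostname, host paths, published ports, and image tag are my assumptions and should be adjusted to match your own setup:

    # NET_ADMIN is needed because ENABLE_FAIL2BAN=1 manipulates iptables
    docker run -d --name mailserver \
      --hostname mail --domainname lowtek.ca \
      --env-file mailserver.env \
      -p 25:25 -p 143:143 -p 465:465 -p 587:587 -p 993:993 \
      -v "$(pwd)/maildata:/var/mail" \
      -v "$(pwd)/mailstate:/var/mail-state" \
      -v "$(pwd)/maillogs:/var/log/mail" \
      -v "$(pwd)/config:/tmp/docker-mailserver" \
      -v /etc/letsencrypt:/etc/letsencrypt:ro \
      --cap-add=NET_ADMIN \
      --restart=always \
      mailserver/docker-mailserver:10.0.0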

I walked through the mailserver.env file and made a few changes:

  • ONE_DIR=1
    I’m not totally sure about this, but the documentation reads: “consolidate all states into a single directory (/var/mail-state) to allow persistence using docker volumes”, which seems like a good idea.
  • ENABLE_CLAMAV=1
    My existing email server uses ClamAV.
  • ENABLE_FAIL2BAN=1
    I’m a fan of fail2ban for protecting my server from abuse.
  • ENABLE_SPAMASSASSIN=1
    My existing email server uses SpamAssassin.

The volume pointing to letsencrypt is sort of a placeholder for now. Once we get things basically set up, I will be changing the SSL_TYPE to enable encryption using the existing letsencrypt certificate that my webserver container has set up.

I later added the following additional configuration to enable logwatch.

  • LOGWATCH_INTERVAL=daily
    Having an email-centric logwatch report, rather than combining it with my server logwatch, seemed like a good idea.

  • LOGWATCH_RECIPIENT=postmaster@lowtek.ca
    Where to send the email.

With my Makefile-based docker approach I have build, start and update targets. I can manually roll back if needed, as the previous container keeps a -old name. The first step is to build the container.
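As a rough sketch of the shape of those targets (illustrative, not my exact file; create-mailserver.sh is a hypothetical wrapper around the docker run command shown earlier):

    # Makefile sketch - recipe lines must be indented with tabs
    IMAGE = mailserver/docker-mailserver:10.0.0
    NAME  = mailserver

    build:
    	docker pull $(IMAGE)

    start:
    	./create-mailserver.sh

    update:
    	-docker rm $(NAME)-old
    	docker stop $(NAME)
    	docker rename $(NAME) $(NAME)-old
    	$(MAKE) build start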

At this point we are at the “Get up and running” section of the documentation. We need to start the container and configure some email addresses.

Assuming all goes well, the mailserver container will be running. If we poke around the filesystem we’ll see a few files have been created:

  • config/dovecot-quotas.cf
  • maillogs/clamav.log
  • maillogs/freshclam.log

We should be able to run the setup script to add a user and configure some key aliases.
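For example, with the container running (the addresses here are illustrative):

    # create a mailbox; setup.sh prompts for a password if you don't supply one
    ./setup.sh email add user@lowtek.ca
    # point the usual role addresses at that mailbox
    ./setup.sh alias add postmaster@lowtek.ca user@lowtek.ca
    ./setup.sh alias add abuse@lowtek.ca user@lowtek.ca
    # confirm the account exists
    ./setup.sh email list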

The creation of the account will cause some additional files to be created:

  • config/postfix-accounts.cf
  • config/postfix-virtual.cf

At this point we have a running email server – but we need to start getting data to flow there. You may have to open ports in your firewall on the docker host to allow external traffic to connect to this new container.
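On an Ubuntu host running ufw, that is something along these lines (open only the protocols you actually intend to expose):

    sudo ufw allow 25/tcp     # SMTP from other mail servers
    sudo ufw allow 465/tcp    # implicit-TLS submission
    sudo ufw allow 587/tcp    # STARTTLS submission
    sudo ufw allow 143/tcp    # IMAP
    sudo ufw allow 993/tcp    # IMAPS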

This is very encouraging, but there is still a list of things I need to do:

  1. Create accounts and aliases
  2. Configure smart host sending via AWS SES
  3. Enable SSL_TYPE
  4. Set up DKIM
  5. Change the port forwarding to point to my container host
  6. Migrate email using imapsync

The rest of this post covers those steps in detail.

Continue reading “Installing docker-mailserver”

Pi-hole Ubuntu Server (take 2)

My past two posts have been the hardware setup and pi-hole deployment. You rarely get things right the first time and this is no exception.

I’ve already added an update to the first post about my nullmailer configuration. I had forgotten to modify /etc/mailname, and thus a lot of the automated email was failing. I happened to notice this because my newly deployed pi-hole was getting a lot of reverse name lookups for my main router, which were triggered by the email system reporting the error.

You may also want to look at and clean out the /var/spool/nullmailer/failed directory; it likely has a bunch of emails that were never sent.
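The fix amounts to putting the right domain into /etc/mailname and then clearing the backlog (the name below is illustrative):

    # /etc/mailname is the Debian convention for the domain of locally generated mail
    echo "myhost.lowtek.ca" | sudo tee /etc/mailname
    # look at what piled up while the name was wrong, then clean it out
    sudo ls /var/spool/nullmailer/failed
    sudo rm /var/spool/nullmailer/failed/*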

Once corrected, I started to get a bunch of emails. They were two different errors, both triggered by visiting the pi-hole web interface.

The first issue is a bit curious. Let’s start by looking at the /etc/resolv.conf file, which is generated based on the DHCP configuration.

Then we can look at the static configuration that was created by the pi-hole install in /etc/dhcpcd.conf.
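The block the installer appends looks something like this (interface name and addresses are illustrative, not my actual values):

    interface eth0
        static ip_address=192.168.1.2/24
        static routers=192.168.1.1
        static domain_name_servers=149.112.121.30 149.112.122.30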

Decoding the interaction here: the pi-hole setup script believes that we should have a static IP, and it has also assigned the pi-hole host to use the same DNS servers that I specified as my custom entries for upstream resolution.

This feels like the pi-hole install script intentionally makes the pi-hole host machine unable to do local DNS resolution. I wonder if all pi-hole installations have this challenge. I asked on the forum, and after a long discussion I came to agree that I fell into a trap.

A properly configured linux machine will have the name from /etc/hostname also reflected in /etc/hosts (as I have done). My setup was working fine because the DNS service on my OpenWRT router was covering up this gap. While it’s nice that it just worked, the machine should really be able to resolve its own name without relying on DNS.

As my network has IPv6 as well, the pi-hole also gets an IPv6 DNS entry. Since the static configuration doesn’t specify an upstream IPv6 DNS server, the ULA for the pi-hole leaks into the configuration. This is a potentially scary DNS recursion problem, but it doesn’t seem to be getting used to forward the lookup of ‘myhost’ to the pi-hole, so I’m going to ignore it for now.

An easy fix is to go modify /etc/hosts.
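Adding the hostname to the loopback entries is enough; the Debian/Ubuntu convention is the 127.0.1.1 line:

    127.0.0.1   localhost
    127.0.1.1   myhost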

Where myhost is whatever appears in /etc/hostname.

This will fix the first problem, and it seems the second problem was also caused by the same inability to resolve myhost. I think this is exactly what this post was describing when it was seeing the error messages. This is another thread, with a different solution to a similar problem.

Satisfied that I’d solved that mystery, I turned to the list of things to ‘fix’ that I’d built up along the way:

  1. Ditch rsyslog remote
  2. Run logwatch locally
  3. Run fluentbit locally
  4. Password-less sudo
  5. SSH key only access
  6. Add mosh

The rest of this post is just details on these 6 changes.

Continue reading “Pi-hole Ubuntu Server (take 2)”

Pi-Hole – a Black Hole for Advertisements

Pi-hole was first released back in 2015. I’m not certain when I became aware of it, but given my interest in the Raspberry Pi I’m pretty sure I heard about it fairly soon afterwards. I did find this tweet from 2016.

While I was aware of the project, I didn’t start running it for a while. It was only at some point during the containerization of my server that I started to run pi-hole in a container (October 2018, give or take).

Running it as a container isn’t too hard – but you’ll probably have to turn off the DNS server that is already running on the host to avoid a port conflict.

Here is a sketch of the Makefile I was using to manage my pi-hole deployment.
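This is illustrative rather than my exact file; the image name, container mount points, and environment variable follow the official pihole/pihole docker documentation, while the host paths, ports, and timezone are placeholders:

    NAME  = pihole
    IMAGE = pihole/pihole:latest

    run:
    	docker run -d --name $(NAME) \
    	  -p 53:53/tcp -p 53:53/udp -p 8080:80/tcp \
    	  -e TZ=America/Toronto \
    	  -v "$(PWD)/etc-pihole:/etc/pihole" \
    	  -v "$(PWD)/etc-dnsmasq.d:/etc/dnsmasq.d" \
    	  --restart=unless-stopped \
    	  $(IMAGE)

    update:
    	docker pull $(IMAGE)
    	docker stop $(NAME)
    	docker rename $(NAME) $(NAME)-old
    	$(MAKE) run

    rollback:
    	docker rm -f $(NAME)
    	docker rename $(NAME)-old $(NAME)
    	docker start $(NAME)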

Unfortunately, something happened to my configuration / state such that I could not update my container without it hanging. Fortunately, having the rollback target let me quickly restore the previous version. I’ve tested the Makefile on another temporary machine and it appeared to work, so it should be a reasonable base if you wanted to go the container route.

One of the problems of running in a container is the networking in general. I struggled with the mapping of the web UI, as the same machine is also running my public-facing web server. While I could map the DNS port (53) and access it over IPv6, all of the IPv6 traffic appeared as if it were coming from the docker network rather than from the source machines.

This takes away one of the great values of running pi-hole – the additional insight it gives you into what your various devices are doing on the network. With the docker networking mess, I was effectively missing all of the IPv6 traffic (because I couldn’t tell the devices apart).

After stalling on the decision, and exploring how I could use macvlan support in docker to give a container its own IP address (distinct from the host), I just bought some nice hardware to solve the problem. Setting that hardware up is covered in the previous post.

Now we can install pi-hole. I would encourage you to read the script before just piping it into bash; however, in the big picture we’re going to trust the folks that wrote this code to also provide updates – and those updates could be evil too.
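The documented install is a one-liner piped into bash; downloading it first gives you the chance to read it:

    # official installer location from the pi-hole documentation
    curl -sSL https://install.pi-hole.net -o basic-install.sh
    less basic-install.sh        # read it before running it
    sudo bash basic-install.sh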

The script is interactive; you’ll need to answer some questions to perform the install. I found it interesting that the setup script doesn’t ask for IPv6 DNS servers, but does allow you to specify custom IPv4 servers. During the setup it looks like it is changing my network setup to a static IP address. Post-install, I know I’m going to have to tweak things.

Since the default web password is generated, you probably want to set one.
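That is a single command on the pi-hole host:

    pihole -a -p     # prompts for the new admin password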

Visiting the web interface under “Settings->DNS” I added my upstream IPv6 DNS servers. I’m using the CIRA DNS and if you’re a Canadian I would encourage you to do the same.

On the same settings page I enabled conditional forwarding and specified my local LAN range and main router, which is running my DHCP server. It was pointed out to me that additional configuration is required for IPv6 conditional forwarding; I haven’t done this yet.

My OpenWRT router provides multiple IPv6 addresses, and the IPv6 address the setup script detected isn’t the right one. Poking around, it appears /etc/pihole/setupVars.conf contains the information and I just need to tweak it. Generally you should not change that file by hand, but I did for this one thing and it fixed the problem.
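The value lives in the IPV6_ADDRESS line of that file; after editing it, restart the resolver so FTL picks up the change:

    sudo nano /etc/pihole/setupVars.conf    # fix the IPV6_ADDRESS= entry
    pihole restartdns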

As I feared, the setup script changed my /etc/dhcpcd.conf to reflect a static IP address. I may change this later, but I had already effectively tweaked the DHCP server to hand out the same static address.

At this point I have a working pi-hole; I just need to configure some clients to point there.

As mentioned above, I run OpenWRT as my router. There are two places we need to configure to point all DNS queries to the pi-hole. This can be done by modifying how the router responds to DHCP requests, as it provides the DNS server as part of that transaction.

An alternative approach would be to set your upstream DNS server to be the pi-hole. I didn’t take this approach because I was concerned about DNS loops, and networking was a lot more complicated when things were in a container; the approach I’ll cover is what worked with the container version as well.

Changing the DNS entry that is provided by the DHCP exchange is easy: it lives in the /etc/config/dhcp file, where there are two lines to add to the lan section.
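After the change, the section looks something like this (an illustrative reconstruction, using the addresses from the LuCI steps below):

    config dhcp 'lan'
            option interface 'lan'
            # ... existing options unchanged ...
            list dhcp_option '6,149.112.121.30'
            list dns '2620:10A:80BB::30'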

Finding the place in the LuCI UI to add these always causes me to stumble around for a while. The two options, list dhcp_option and list dns, are in slightly different places.

The IPv4 setting can be found under Network->Interfaces, edit your Lan interface. Then pick the Advanced tab. We need to add a dhcp option 6,149.112.121.30.

Then select the IPv6 Settings tab. Here we add to the Announced DNS Servers section 2620:10A:80BB::30.

Once you’ve done this your pi-hole will start getting traffic from devices that get an address on your network. You may have to wait for the devices to update their connections.

I noticed that IPv6 addresses were not reverse mapping – but specifically asking my router for the problem addresses indicated that it also can’t reverse map them, so maybe there is an OpenWRT problem here. Also, it seemed to get better after a while, and more address-to-name mappings were discovered. I asked in the pi-hole forum about this behaviour.

It turns out that this is an ordering problem. Pi-hole won’t look up a failed address again, but it does build the network table and bind things together by MAC address. The workaround is to modify your /etc/pihole/pihole-FTL.conf to set REFRESH_HOSTNAMES=ALL. The slight downside is that every hour there will be a storm of reverse DNS lookups as all hosts are refreshed.
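That is a one-line change, followed by restarting FTL:

    # have FTL re-resolve host names for all clients on its hourly refresh
    echo "REFRESH_HOSTNAMES=ALL" | sudo tee -a /etc/pihole/pihole-FTL.conf
    sudo service pihole-FTL restart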

A few final observations.

  • The magic DNS name pi.hole now works on my network. This brings you directly to the pi-hole dashboard.
  • Tools->Network shows lots more useful information. In the docker deployment you didn’t get MAC addresses, and generally things were more chaotic.
  • Pi-hole is blocking more than 1/3 of the DNS lookups. Sure, some of this is because the ad software is probably failing and retrying, but still, that’s a lot of DNS queries.
  • I discovered the Group Management feature, which seems to be a way to allow clients to opt out of ad blocking. This is super useful, as previously I was just changing the DNS settings on the clients.