OpenWRT 19.07 to 21.02.0 upgrade

I’ve long been a fan of open firmware for my home routers. Way back I started with DD-WRT, but more recently I’ve moved to OpenWRT paired with TP-Link Archer C7 hardware. I actually have two: one as my main gateway, and a second configured as a dumb AP (access point). Running two WiFi access points means I get great coverage from the basement to the second floor.

While I have two, they are not quite identical: one is a v5, and the other is a v2. Still, at under $100 this hardware fits my sweet spot, and I picked up both of mine used for less than half the price of new. The other benefit of having two devices running the same software stack (OpenWRT 19.07) is that I can swap the hardware for the main gateway if I have a compatibility problem.

Recently the latest version (21.02) was declared stable, so it’s well past time to upgrade. For the hardware I have, OpenWRT has recently transitioned from the ar71xx target to ath79. Thankfully, in my 19.07 setup I’d already moved to ath79.

You can check the TARGET that you are running by ssh’ing into the router and looking at the /etc/openwrt_release file.
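For example (assuming the router is still at the default address; the values shown are illustrative):

    ssh root@192.168.1.1 'cat /etc/openwrt_release'
    # DISTRIB_ID='OpenWrt'
    # DISTRIB_RELEASE='19.07.7'
    # DISTRIB_TARGET='ath79/generic'

The DISTRIB_TARGET line is the one to check.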

It is a good idea to start with the release notes. The upgrade process has three parts:

  1. Prepare
  2. Upgrade
  3. Post Install Configuration, Setup or Restore

We’ve already started the Prepare step by reviewing the release notes and confirming the ath79 version is running. Since we’re already on ath79, we can use the normal sysupgrade process. OpenWRT is also moving to DSA networking, but the hardware I’m using hasn’t switched over yet; the upgrade process will detect the DSA transition and refuse to flash if there is a problem.

I’ve got a full rsync backup of my router configuration, so if something goes really wrong I at least have the files.

Next I need to figure out which packages I have installed on the router(s). This script seems to work well; I’ll sketch the idea here.
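The gist (a reconstruction, not the original verbatim) is to compare each package’s Installed-Time in /usr/lib/opkg/status against the flash time, using busybox as the reference since it always ships in the base image:

    #!/usr/bin/awk -f
    # listuserpackages.awk (sketch): list packages installed after initial flash.
    # Run on the router: awk -f listuserpackages.awk /usr/lib/opkg/status
    $1 == "Package:" { pkg = $2 }
    $1 == "Installed-Time:" {
        itime[pkg] = $2
        # busybox is part of the base image, so its time is the flash time
        if (pkg == "busybox") flash = $2
    }
    END {
        for (p in itime)
            if (itime[p] != flash)
                print p
    }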

This will dump out the list of packages that you have installed beyond the stock configuration. I’ve got rsync installed (of course) along with the prometheus packages. The upgrade will not automatically install these packages, so having the list of them helps us get back to the configuration we are used to.

We should now be ready to Upgrade. I found the easiest way to get the right sysupgrade image was the Firmware Selector web tool. Since my two routers are different hardware revisions, I needed to make two downloads.

It is always a good idea to check the hash (sha256sum) of the files you download to make sure you have a good copy. I’ve been burned by a bad download only a handful of times, but once should be enough to teach you the lesson.
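If you also grab the sha256sums file that OpenWRT publishes in the same download directory, the check can be scripted (assuming GNU coreutils on your workstation):

    # verify any downloaded images listed in the published sha256sums file
    sha256sum -c sha256sums --ignore-missing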

The upgrade option can be found in the web UI menu system: LuCI → System → Backup / Flash Firmware → Actions: Flash new firmware image.
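For reference, the flash can also be done from the command line; a sketch, with a placeholder image name and the default router address (sysupgrade keeps settings by default):

    # copy the image to the router's /tmp and flash it
    scp openwrt-21.02.0-ath79-generic-...-sysupgrade.bin root@192.168.1.1:/tmp/
    ssh root@192.168.1.1 'sysupgrade -v /tmp/openwrt-21.02.0-ath79-generic-...-sysupgrade.bin'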

Time for a deep breath and a double check that we have a backup. Then it’s time to flash, double checking that we push the right version to the right device.

After flashing the first device, I noted two things. The “flashing” screen seemed to get stuck, well past when the device had rebooted. Using a second browser window, I think I figured out why: the LuCI web UI now redirects you to https:// with a self-signed certificate, and the browser on the flash screen probably got stuck because of the change in protocol (and the bad cert).

It’s good that the web UI is now served over https, because now when you log in you’re not sending your password in the clear. Sure, it’s your own network, but I’d rather deal with clicking through the advanced screens to tell my browser to accept the self-signed certificate than not have a secure connection.

For the Post Install Configuration, Setup or Restore step, it’s a matter of going through the packages I’d identified above and re-installing them. As a second check, I copied over and re-ran the listuserpackages.awk script, and got the same list back once I’d installed all the packages.
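The reinstall itself is just opkg; the package names here are examples from my own list, yours will differ:

    opkg update
    # rsync plus the prometheus collector I use
    opkg install rsync prometheus-node-exporter-lua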

I can also verify that all of the configuration files made it by testing the functionality these additional packages provide. All was good, at least with my dumb AP.

My main gateway router was a bit scarier: it’s got a few more packages installed than the dumb AP, and when I update it, I take an internet outage. Also, I ran into trouble trying to update some of the modules:

  • kmod-usb-storage
  • kmod-usb3

Both of these are related to my use of a USB drive as storage for the vnStat package. This appears to be a bug in the web UI; either way, using the cli worked fine to install things. It also looks like a fix is coming in the next patch.

The cli was happy to find the packages:
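Something like this, from a shell on the router:

    opkg update
    # the same modules LuCI choked on install cleanly here
    opkg install kmod-usb-storage kmod-usb3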

The web UI, on the other hand, simply failed to locate them.

In the end all was well. A reboot to make sure everything came back cleanly left me in a fully working state, now on the latest version. This was much easier than I had expected, and I shouldn’t have stalled so long before doing it.

A few housekeeping details to work through post-install / basic checkout. The upgrade procedure will create *-opkg files in /etc/config when you install the new packages. Be safe and do a quick diff and review of what has or has not changed (you may need to install the diffutils package, or move the files off the device to a machine that can diff things).
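A quick way to do that review on the router itself (once diffutils is installed):

    cd /etc/config
    # compare each new package default against the live config file
    for f in *-opkg; do
        diff -u "${f%-opkg}" "$f"
    done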

In my case – the sqm package seems to have changed at least some of the options. My setup still works, but I should probably rebase my config file.

Other stuff I have to fiddle with: WPA3. Now that it’s mainline for OpenWRT, I need to figure out how to run with this new protocol (while not breaking the world). A quick look around makes it seem like WPA3 is still a bit on the bleeding edge.

There is also the “New network configuration syntax and board.json change”, which means some changes in /etc/config/network that I should make sure I migrate to (the current build has some backwards compatibility, so my old file is fine for now).
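As I understand it, the core of the change is that bridges move out of the interface section and into their own device section. A simplified sketch (not my actual config):

    # 19.07 style: the bridge is declared on the interface
    config interface 'lan'
            option type 'bridge'
            option ifname 'eth0.1'
            option proto 'static'

    # 21.02 style: the bridge is its own device, referenced by the interface
    config device
            option name 'br-lan'
            option type 'bridge'
            list ports 'eth0.1'

    config interface 'lan'
            option device 'br-lan'
            option proto 'static'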

Doing a planned migration / upgrade was way better than my usual emergency restore / rebuild. The last time I was messing with the router firmware, I’d accidentally run an rsync backup with the source and target the wrong way around, successfully sync’ing a blank directory onto my operating access point. Doh.

Anyways, hopefully this article helps me (or someone else) upgrade smoothly in the future.

Managing docker containers with makefiles

Image by Jason Scott, licensed under CC-BY-2.0.

I’m very old school when it comes to managing my personal infrastructure. When I came across Jessie Frazelle‘s blog on her personal infrastructure I was inspired to adopt a makefile based approach. There are a couple of others who’ve written about similar approaches.

Here is a generic version of my makefile for containers that are built and hosted on one of the public container registries.
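A sketch of what that looks like (image and container names are placeholders, and note that make recipes must be indented with tabs):

    IMAGE=docker.io/someuser/someimage:latest
    CONTAINER=someservice

    build:
    	docker pull $(IMAGE)

    # add the -p/-v flags your service needs to the run command
    start:
    	docker run -d --name $(CONTAINER) --restart=unless-stopped $(IMAGE)

    update: build
    	-docker rm $(CONTAINER)-old
    	-docker stop $(CONTAINER)
    	docker rename $(CONTAINER) $(CONTAINER)-old
    	$(MAKE) start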

There are three targets: build, start, and update. The update target leaves the old container behind with a -old name, which is useful if the new container breaks for some reason and you have to roll back to the previous version. You could even create a rollback target that does this automatically.

Now, this is great if there is an existing container registry with an image for you to pull, but things need to change if you are starting with a Dockerfile and building your own container.

Again, I’ve used the same build, start, and update targets, with a built-in assumption that the Makefile lives in the root of the git project. Instead of pulling from a container registry, we pull the latest from the git project and do a local build.
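The variant looks mostly the same; only the build target really changes (again placeholder names, assuming the Dockerfile sits in the project root):

    IMAGE=someimage:latest
    CONTAINER=someservice

    # pull the latest source and build the image locally
    build:
    	git pull
    	docker build -t $(IMAGE) .

    start:
    	docker run -d --name $(CONTAINER) --restart=unless-stopped $(IMAGE)

    update: build
    	-docker rm $(CONTAINER)-old
    	-docker stop $(CONTAINER)
    	docker rename $(CONTAINER) $(CONTAINER)-old
    	$(MAKE) start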

Having a very similar structure helps with consistency of managing my docker containers.

One day I would like to further enhance these makefiles to support checking for updates, be that a newer container build or git changes. Adding a post-update check to ensure that the container has started would be very good too.

Installing docker-mailserver

Everyone should have their own domain name (or several). Having a website on your domain is easy and a sensible use of that domain name. Almost no one should run their own email server; it’s complicated and makes you responsible for all of the problems.

There are lots of providers out there that run email services and allow you to bring your own domain. ProtonMail would be a good choice; you can bring your own custom domain and still use ProtonMail. Google and Microsoft offer alternatives if you want to go that route, and both support custom domains.

If you are still thinking of running your own mail server, then grab your tinfoil hat and let’s look at the best way to operate a mail server in the age of containers. I’ve run my own email for a long time, mostly following the Ubuntu setup. Using docker-mailserver solves all of the email problems with a single container.

I will mention there are many alternatives: mailu, iredmail, etc. The docker-mailserver project stands out for me because they have avoided database-based configuration and stuck with ‘files on disk’ as the model.

This is a long-overdue mail upgrade. I started doing this way back in 2017, but never really finished the work. The SSD rebuild disrupted how I was doing things, and changing email is a little scary. The hard drive that stores my email is very old: a Seagate Barracuda 40GB (ST340014A). The SMART information says the Power Cycle Count is only 502, but the Power On Hours is an astounding 130442 (that is 14.89 years). Every stat is in pre-fail or old age; it is definitely time to move my email off that drive.

Before starting, take the time to read through the documentation. Once you think you’re ready to start installing things, the ReadMe is the right path to follow. I’m going a slightly different path than the recommended one, which uses docker-compose, and will build out a basic docker deployment.

First I grabbed the following two files:
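Those are the compose file and the environment file from the project repository; the fetch would have been something like:

    wget https://raw.githubusercontent.com/docker-mailserver/docker-mailserver/v10.0.0/docker-compose.yml
    wget https://raw.githubusercontent.com/docker-mailserver/docker-mailserver/v10.0.0/mailserver.env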

And the setup script for v10.0.0, as I intend to use the ‘stable’ version vs. ‘edge’. It is important to get the matching setup.sh script for the version you are deploying.

I used the docker-compose.yml file to inform me how to configure my Makefile based docker approach. Most of the create options are a direct mimic of the compose file.
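Roughly, the resulting docker run mirrors the compose file (the hostname and host paths here are mine/placeholders, the image tag is the v10.0.0 stable release, and NET_ADMIN is there for fail2ban):

    docker run -d --name mailserver \
        --hostname mail.lowtek.ca \
        --cap-add=NET_ADMIN \
        -p 25:25 -p 143:143 -p 587:587 -p 993:993 \
        -v "$(pwd)/maildata:/var/mail" \
        -v "$(pwd)/mailstate:/var/mail-state" \
        -v "$(pwd)/maillogs:/var/log/mail" \
        -v "$(pwd)/config:/tmp/docker-mailserver" \
        -v /etc/letsencrypt:/etc/letsencrypt:ro \
        --env-file mailserver.env \
        --restart=unless-stopped \
        mailserver/docker-mailserver:10.0.0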

I walked through the mailserver.env file and made a few changes:

  • ONE_DIR=1
    I’m not totally sure about this, but the documentation reads: “consolidate all states into a single directory (/var/mail-state) to allow persistence using docker volumes.” which seems like a good idea.
  • ENABLE_CLAMAV=1
    My existing email server uses ClamAV 
  • ENABLE_FAIL2BAN=1
    I’m a fan of fail2ban for protecting my server from abuse
  • ENABLE_SPAMASSASSIN=1
    My existing email server uses SpamAssassin

The volume pointing to letsencrypt is sort of a placeholder for now. Once we get things basically set up, I will be changing the SSL_TYPE to enable encryption using the existing letsencrypt certificate that my webserver container maintains.
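When I do, it should just be a matter of setting the documented letsencrypt type in mailserver.env:

    # in mailserver.env, once the letsencrypt volume points at real certificates
    SSL_TYPE=letsencrypt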

I later added the following additional configuration to enable logwatch.

  • LOGWATCH_INTERVAL=daily
    Having a separate email-centric logwatch report vs. combining it with my server logwatch seemed like a good idea.

  • LOGWATCH_RECIPIENT=postmaster@lowtek.ca
    Where to send the email.

With my Makefile based docker approach I have build, start and update targets. I can manually roll back if needed, as the previous container has a -old name. The first step is to build the container.

At this point we are at the Get up and running section. We need to start the container and configure some email addresses.

Assuming all goes well, the mailserver container will be running. If we poke around the filesystem, we’ll see a few files have been created:

  • config/dovecot-quotas.cf
  • maillogs/clamav.log
  • maillogs/freshclam.log

We should be able to run the setup script to add a user and configure some key aliases.
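With the matching setup.sh, that is along these lines (the addresses are examples):

    # create a mailbox (setup.sh prompts for the password if omitted)
    ./setup.sh email add user@lowtek.ca
    # point the standard role addresses at it
    ./setup.sh alias add postmaster@lowtek.ca user@lowtek.ca
    ./setup.sh alias add abuse@lowtek.ca user@lowtek.ca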

The creation of the account will cause some additional files to be created:

  • config/postfix-accounts.cf
  • config/postfix-virtual.cf

At this point we have a running email server – but we need to start getting data to flow there. You may have to open ports in your firewall on the docker host to allow external traffic to connect to this new container.

This is very encouraging, but there is still a list of things I need to do:

  1. Create accounts and aliases
  2. Configure smart host sending via AWS SES
  3. Enable SSL_TYPE
  4. Set up DKIM
  5. Change the port forwarding to point to my container host
  6. Migrate email using imapsync

The rest of this post covers the details of those steps.
