OpenWRT 21.02 to 22.03 upgrade

Here are my notes on upgrading OpenWRT; they are based on my previous post on upgrading.

In this case I’m specifically upgrading a TP-Link Archer C7 v2 – the process will be similar for other OpenWRT devices, but it’s always worth reviewing the device page. I’ve also got some v5 units, which need a slightly different firmware image, but the exact same process.

For a major version upgrade it is worth starting with the release notes – nothing in them seems to be specific to my device or to require any special consideration, so I can just proceed.

An upgrade from OpenWrt 21.02 or 22.03 to OpenWrt 22.03.5 is supported in many cases with the help of the sysupgrade utility which will also attempt to preserve the configuration.

I personally prefer the cli based process, so we’ll be following that documentation.

Step 1. While I do nightly automated backups, I should also do a web UI based backup – this is mostly for peace of mind.
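
If you prefer the cli, the same backup can be taken with sysupgrade’s backup flag (the paths and router address here are examples):

```shell
# On the router: archive everything sysupgrade would preserve
sysupgrade -b /tmp/backup-$(date +%F).tar.gz

# From another machine: copy the archive somewhere safe
scp root@192.168.1.1:/tmp/backup-*.tar.gz ~/backups/
```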

Step 2. Download the correct sysupgrade binary – the easy way to do this is by using the firmware selector tool. I recommend that you take the time to verify the sha256sum of your download; this is rarely an issue, but I have experienced bad downloads and they are hard to debug after the fact.
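
The check is a one-liner – compare against the checksum shown on the firmware selector page (the file name below is an example for the Archer C7 v2, and the checksum is a placeholder):

```shell
# Compute the checksum of the downloaded image
sha256sum openwrt-22.03.5-ath79-generic-tplink_archer-c7-v2-squashfs-sysupgrade.bin

# Or let sha256sum do the comparison for you, given the published value
echo "<published-sha256>  openwrt-22.03.5-ath79-generic-tplink_archer-c7-v2-squashfs-sysupgrade.bin" \
  | sha256sum -c -
```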

It is recommended to check that you have enough free RAM – thankfully the Archer has a lot of RAM (which also backs the /tmp filesystem), so I have lots of space.
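
A quick check on the router (assuming /tmp is the usual tmpfs mount):

```shell
# How much memory is free, and how much of /tmp is available
free
df -h /tmp
```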

Step 3. Get ready to flash. If you review the post install steps, you’ll see that while sysupgrade will preserve all of our configuration files, it won’t preserve any of the installed packages.

This script will print out all of the packages you’ve installed.
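
A minimal version of such a script, based on the approach in the OpenWRT wiki: every package baked into the firmware shares busybox’s Installed-Time, so anything with a different timestamp was installed after the flash.

```shell
#!/bin/sh
# Packages that shipped in the firmware all share busybox's install time
FLASH_TIME=$(opkg info busybox | grep '^Installed-Time:')

for pkg in $(opkg list-installed | cut -d' ' -f1); do
    # A differing timestamp means the package was installed post-flash
    if [ "$(opkg info "$pkg" | grep '^Installed-Time:')" != "$FLASH_TIME" ]; then
        echo "$pkg"
    fi
done
```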

Save the list away so you can easily restore things post install. There is a flaw with this script as I’ll point out later, but in many cases it’ll work fine for you.

On my dumb access points I get this list of packages:
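
Something like the following – this is an illustrative reconstruction, as the exporter is split across several lua packages and your exact list will differ:

```
prometheus-node-exporter-lua
prometheus-node-exporter-lua-nat_traffic
prometheus-node-exporter-lua-netstat
prometheus-node-exporter-lua-openwrt
prometheus-node-exporter-lua-wifi
rsync
```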

Mostly I have the prometheus exporter (for metrics) and rsync (for backups) installed. My main gateway has a few more packages (vnstat and sqm) but it’s similar.

Step 4. Time to flash. Place the firmware you downloaded onto the OpenWRT router in /tmp and run sysupgrade.
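
Assuming the router is at 192.168.1.1 and the image name from step 2, that looks like:

```shell
# Copy the image to the router's tmpfs
scp openwrt-22.03.5-ath79-generic-tplink_archer-c7-v2-squashfs-sysupgrade.bin root@192.168.1.1:/tmp/

# Then, from a shell on the router, flash it
# (sysupgrade keeps config files by default; -v is verbose)
ssh root@192.168.1.1
sysupgrade -v /tmp/openwrt-22.03.5-ath79-generic-tplink_archer-c7-v2-squashfs-sysupgrade.bin
```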

This is a bit scary – you lose your ssh connection as part of the upgrade. It took about a minute and a half of radio silence before the device came back. However, I was then greeted with the new web UI – and over ssh I get the 22.03.5 version splash.

Step 5. Check for any package updates – usually I leave things well enough alone, but we just did a full upgrade so it’s worth making sure we are fully current. Note that this may mess with the script in step 3, since the install dates for any upgraded packages will change.
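
On the router:

```shell
# Refresh the package lists, then see what has a newer version available
opkg update
opkg list-upgradable
```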

If any packages are listed, you can easily upgrade them using opkg upgrade <pkg name>.

Step 6. Install the packages captured in step 3. Do this by creating a simple script that runs opkg install <pkg name> for each package.
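
A sketch, assuming the list from step 3 was saved to /tmp/packages.txt (that file name is my invention):

```shell
#!/bin/sh
opkg update
# Reinstall everything captured before the flash
for pkg in $(cat /tmp/packages.txt); do
    opkg install "$pkg"
done
```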

Post install, take a careful look at the output of the installs, and look for any *-opkg files in /etc/config or /etc. These are config files which conflicted with local changes.

Sometimes you will want to keep your changes – other times you’ll want to replace your local copy with the new -opkg file version. Take your time working through this, as it will avoid tricky problems to debug later.
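
Finding and reviewing the conflicts is straightforward (firewall is just an example file here):

```shell
# List any config files opkg couldn't merge with local changes
find /etc -name '*-opkg'

# Compare your version against the shipped one and decide which wins
diff /etc/config/firewall /etc/config/firewall-opkg
```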

When I upgraded my main router, vnstat got busted in some way. The data file (and its backup) was no longer readable – I suspect that some code change made the format incompatible. I had to remove it and create a new one. Oh well.

Things mostly went smoothly; it took about 30 minutes per OpenWRT device, and I was going slowly and taking notes. There was one tiny glitch in the upgrade: the /root/.ssh directory was wiped out – I use it to maintain key based ssh/scp from each of my dumb APs to the main router.

Bonus. I found a new utility: Attended Sysupgrade. This is pretty slick as it makes it very easy to roll minor versions (22.03.02 -> 22.03.05, for example), but it will not do a major upgrade (21.02 -> 22.03). I’ve installed this on all of my OpenWRT devices and will use it to stay current. It takes care of all of the upgrade steps above, but it does suffer the same ‘glitch’ in that /root/.ssh is wiped out. The other downside is that the custom firmware that is built breaks the script in step 3, since the flash install date is the same for all of the components. I’ll need to refactor that script before my next upgrade.

OpenWRT as a wireguard client

Previously I’ve written about running wireguard as a self hosted VPN. In this post I’ll cover how to connect a remote site back to your wireguard installation allowing that remote site to reach machines on your local (private) network. This is really no different than configuring a wireguard client on your phone or laptop, but by doing this on the router you build a network path that anyone on the remote network can use.

I should probably mention that there are other articles that cover a site-to-site configuration, where you have two wireguard enabled routers that extend your network across an internet link. While this is super cool, it wasn’t what I wanted for this use case. I would be remiss in not mentioning tailscale as an alternative if you want a site-to-site setup, it allows for the easy creation of a virtual network (mesh) between all of your devices.

In my case my IoT devices can all talk to my MQTT installation, and that communication not only allows the gathering of data from the devices, but offers a path to controlling them as well. What this means is that an IoT device at the remote site, if it can see the MQTT broker I host on my home server, will be controllable from my home network. Thus setting up a one way wireguard ‘client’ link is all I need.

I will assume that the publicly visible wireguard setup is based on the linuxserver.io/wireguard container. You’ll want to add a new peer configuration for the remote site. This should generate a peer_remote.conf file that should look something like:
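
With the container defaults, something like this – keys are redacted and the endpoint host is made up:

```
[Interface]
Address = 10.13.13.5
PrivateKey = <peer private key>
ListenPort = 51820
DNS = 10.13.13.1

[Peer]
PublicKey = <server public key>
PresharedKey = <preshared key>
Endpoint = vpn.example.com:51820
AllowedIPs = 0.0.0.0/0
```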

This is the same conf file you’d grab and install into a wireguard client, but in our case we want to set up an OpenWRT router at a remote location to use it as its client configuration. The 10.13.13.x address is the default wireguard network for the linuxserver.io container.

I will assume that we’re on a recent version of OpenWRT (21.02 or above); as of this writing 22.03.2 is the latest stable release. As per the documentation page on setting up the client, you’ll need to install some packages. This is easy to do via the cli.
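
On the router:

```shell
# wireguard-tools is enough for a cli setup;
# luci-proto-wireguard adds web UI support
opkg update
opkg install wireguard-tools luci-proto-wireguard
```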

Now there are some configuration parameters you need to setup (again in the cli, as we’re going to set some environment variables then use them later).
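
These follow the variable names used in the OpenWRT documentation; everything in angle brackets is a placeholder for values from peer_remote.conf, and the server host is made up:

```shell
WG_IF="wg0"
WG_SERV="vpn.example.com"        # public host/IP of the wireguard server
WG_PORT="51820"
WG_ADDR="10.13.13.5/24"          # the Address line from peer_remote.conf
WG_KEY="<peer private key>"      # PrivateKey from peer_remote.conf
WG_PSK="<preshared key>"         # PresharedKey from peer_remote.conf
WG_PUB="<server public key>"     # PublicKey from peer_remote.conf
```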

Now this is where I got stuck following the documentation. It wasn’t clear to me that the WG_ADDR value should be taken from the peer_remote.conf file as I’ve done above. I thought this was just another private network value to uniquely identify the new wg0 device I was creating on the OpenWRT router. Thankfully some kind folk on the OpenWRT forum helped point me down the right path to figure this out.

Obviously WG_SERV points at our existing wireguard installation, and the three secrets WG_KEY, WG_PSK, and WG_PUB all come from the same peer_remote.conf file. I suspect that one of these might be allowed to be unique for the remote installation; however, I know that this works – and I do not believe we are introducing any security issues.

At this point we have all the configuration we need, and can proceed to configure the firewall and network.
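
Following the OpenWRT client documentation, and using the variables set in the previous step:

```shell
# Firewall: put the wireguard interface into the wan zone
uci rename firewall.@zone[0]="lan"
uci rename firewall.@zone[1]="wan"
uci del_list firewall.wan.network="${WG_IF}"
uci add_list firewall.wan.network="${WG_IF}"
uci commit firewall
service firewall restart

# Network: create the wg0 interface with our key and address
uci -q delete network.${WG_IF}
uci set network.${WG_IF}="interface"
uci set network.${WG_IF}.proto="wireguard"
uci set network.${WG_IF}.private_key="${WG_KEY}"
uci add_list network.${WG_IF}.addresses="${WG_ADDR}"

# Add the wireguard server as a peer;
# allowed_ips of 0.0.0.0/0 routes everything (full tunnel)
uci -q delete network.wgserver
uci set network.wgserver="wireguard_${WG_IF}"
uci set network.wgserver.public_key="${WG_PUB}"
uci set network.wgserver.preshared_key="${WG_PSK}"
uci set network.wgserver.endpoint_host="${WG_SERV}"
uci set network.wgserver.endpoint_port="${WG_PORT}"
uci set network.wgserver.route_allowed_ips="1"
uci set network.wgserver.persistent_keepalive="25"
uci add_list network.wgserver.allowed_ips="0.0.0.0/0"
uci commit network
service network restart
```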

This sets up a full tunnel VPN configuration. If you want to permit a split-tunnel then we need to change one line in the above script.

The allowed_ips needs to change to specify the subnet you want to route over this wireguard connection.

One important note: you need to ensure that your home network and remote network do not have overlapping IP ranges, as that would introduce confusion about where to route what. Let’s assume the home network lives on 192.168.1.0/24 – we’d want to ensure that our remote network does not use that range, so let’s assume we’ve configured the remote OpenWRT setup to use 192.168.4.0/24. By doing this we make it easy to know which network we mean when we are routing packets around.

Thus if we wanted to only send traffic destined for the home network over the wireguard interface, we’d specify:
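
With the example networks above, the allowed_ips line becomes:

```shell
# Only traffic destined for the home 192.168.1.0/24 network goes over wg0
# (instead of 0.0.0.0/0, which sends everything)
uci add_list network.wgserver.allowed_ips="192.168.1.0/24"
```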

As another way of viewing this configuration, let’s go take a peek at the config files on the OpenWRT router.

/etc/config/network will have two new sections
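
Something like this, with the keys again redacted and the endpoint host made up:

```
config interface 'wg0'
	option proto 'wireguard'
	option private_key '<peer private key>'
	list addresses '10.13.13.5/24'

config wireguard_wg0 'wgserver'
	option public_key '<server public key>'
	option preshared_key '<preshared key>'
	option endpoint_host 'vpn.example.com'
	option endpoint_port '51820'
	option route_allowed_ips '1'
	option persistent_keepalive '25'
	list allowed_ips '0.0.0.0/0'
```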

and the /etc/config/firewall will have one modified section
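
Something like this (the option lines are the stock OpenWRT wan zone defaults; the change is the added wg0 entry):

```
config zone
	option name 'wan'
	option input 'REJECT'
	option output 'ACCEPT'
	option forward 'REJECT'
	option masq '1'
	option mtu_fix '1'
	list network 'wan'
	list network 'wan6'
	list network 'wg0'
```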

You’ll note that the wg0 device is part of the wan zone.

It really is pretty cool to have IoT devices at a remote site, magically controlled over the internet – and I don’t need any cloud services to do this.


Knowing when to update your docker containers with DIUN

DIUN – Docker Image Update Notifier. I was very glad to come across this particular tool as it helped solve a problem I had, one that I felt strongly enough about that I’d put a bunch of time into creating something similar.

My approach was to build some scripting to determine the signature of the image that I had deployed locally, and then make many queries to the registry to determine what (if any) changes were in the remote image. This immediately ran into some of the API limits on dockerhub. There were also other challenges with doing what I wanted: the digest information you get with docker pull doesn’t match the digest information available on dockerhub. I did find this useful blog post (and script) that solves a similar problem, but also hits some of the same API limitations. It seemed like maybe a combination of web scraping plus API calls could get a working solution, but it was starting to be a hard problem.

DIUN uses a very different approach. It starts by figuring out what images you want to scan – the simplest way to do this is to allow it to look at all running docker containers on your system. With this list of images, it can then query the docker registry for the manifest digest of each image’s tag. On the first run, it just saves this value away in a local data store. On every future run, it compares the value it fetched to the one in the local data store – if there is a difference, it notifies you.

In practice, this works to let you know every time a new image is available. It doesn’t know if you’ve updated your local image or not, nor does it tell you what changed in the image – only that there is a newer version. Still, this turns out to be quite useful especially when combined with slack notifications.

Setting up DIUN for my system was very easy. Here is the completed Makefile based on my managing docker container with make post.
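
A sketch of that Makefile – the image tag, schedule, timezone, and target names here are my assumptions:

```make
NAME  = diun
IMAGE = crazymax/diun:latest

.PHONY: run stop logs

run:
	docker run -d --name $(NAME) \
		-e TZ=America/Toronto \
		-e DIUN_WATCH_SCHEDULE="0 */6 * * *" \
		-e DIUN_PROVIDERS_DOCKER=true \
		-e DIUN_PROVIDERS_DOCKER_WATCHBYDEFAULT=true \
		-v $(PWD)/data:/data \
		-v /var/run/docker.sock:/var/run/docker.sock \
		--restart unless-stopped \
		$(IMAGE)

stop:
	docker stop $(NAME) && docker rm $(NAME)

logs:
	docker logs -f $(NAME)
```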

I started very simply at first, following the installation documentation provided, and used a mostly environment variable based approach to configuration. The three variables I needed to get started were:

  • DIUN_WATCH_SCHEDULE – enable cron like behaviour
  • DIUN_PROVIDERS_DOCKER – watch all running docker containers
  • DIUN_PROVIDERS_DOCKER_WATCHBYDEFAULT – watch all by default

Looking at the start-up logs for the diun container is quite informative, and generally useful error messages are emitted if you have a configuration problem.

I later added the:

  • DIUN_NOTIF_SLACK_WEBHOOKURL

in order to get slack based notifications. There is a little bit of setup you need to do with your slack workspace to enable slack webhooks to work, but it is quite handy for me to have a notification in a private channel to let me know that I should go pull down a new container.

Finally I added a configuration file ./data/config.yml to capture additional docker images which are used as base images for some locally built Dockerfiles. This will alert me when a base image I’m using gets an update and remind me to go re-build any containers that depend on it. This uses the environment variable:

  • DIUN_PROVIDERS_FILE_FILENAME

My configuration file looks like:
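
Along these lines, following DIUN’s file provider format – the image names here are illustrative, not my actual list:

```yaml
# Extra images to watch beyond the running containers - these are the
# base images used by my locally built Dockerfiles
- name: docker.io/library/alpine:3.17
- name: docker.io/library/python:3.11-slim
```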

I’ve actually been running with this for a couple of weeks now. I really like the linuxserver.io project and recommend images built by them. They have a regular build schedule, so you’ll see (generally) weekly updates for those images. I have nearly 30 different containers running, and it’s interesting to see which ones are updated regularly and which seem to be more static (dormant).

Some people make use of Watchtower to manage their container updates. I tend to subscribe to the philosophy that this is not a great idea for a ‘production’ system, at least some subset of the linuxserver.io folks agree with this as well. I like to have hands on keyboard when I do an update, so I can make sure that I’m around to deal with any problems that may happen.