OpenWRT: Dumb AP

I’m a big fan of OpenWRT – partly because it turns a ‘commodity’ router into something a lot more powerful, but also because it allows me to choose the software running on an important bit of hardware. Internet access is almost like air at this point; you want a solid solution you can trust. For me, controlling the software is part of that.

To get good WiFi coverage in my house, I needed more than one WiFi access point. OpenWRT has the idea of a ‘dumb AP’ (access point) that allows you to extend your WiFi coverage with another router. I’ve now got three TP-Link Archer C7s set up. One is the main gateway and the other two are access points. Two was plenty to get good coverage; three ensures that every corner of the house is solid.

The OpenWRT documentation for setting one up is pretty good, but it can be confusing. If you dig around you’ll actually find a few different approaches as well, but I’ll stick with the one that is working for me.

The documentation boils the basic steps down to:

TL;DR Here are the important configurations for a Wireless AP router (Dumb AP):
1. The dumb AP is connected LAN-to-LAN to the main router through an Ethernet cable.
2. The dumb AP bridges its wireless interface onto its LAN interface. Wireless traffic on the dumb AP goes to its (Ethernet) LAN interface, and then to the main router.
3. The dumb AP LAN port has a static address on the same subnet as the main router’s LAN interface
4. The dumb AP’s gateway is set to the address of the main router
5. The dumb AP does not provide DHCP service, DNS resolution, or a firewall

Clearly (1) is the physical connection between the two devices. It turns out that (2) is not required for current versions (newer than 21.02), since the wireless interface gets attached to the LAN bridge automatically. This leaves just (3), (4), and (5), which we can accomplish pretty easily.

I’m working with version 22.03, and this is a recap of what I did rather than a walk-through, as I made a few errors along the way. First, start with the device you want to turn into an access point disconnected from your main network. Connect directly to it with an Ethernet cable.

Log in and go to “Network”->”Interfaces” and edit the “LAN” interface.
We want to set a static IP address (the Protocol field). The IPv4 address you select should be one on your current network that is not part of the DHCP range (pick a low number). The IPv4 gateway should be the IP address of your main router.

On the “Advanced Settings” tab we want to configure the DNS server to point at the IP address of our main router.

Next, on the DHCP Server tab we will select “Ignore interface” so we stop this device from trying to hand out IP addresses; the main router has this job.

Last, we need to go to the DHCP Server sub-tab “IPv6 Settings” and disable all three of: RA-Service, DHCPv6-Service, and NDP-Proxy.

Once this is all done, we can “Save” and “Save & Apply” the settings we’ve made. As we’ve just changed the IP address of the device we’ll likely need to reconnect to the new IP to be able to verify that all the settings have taken hold.
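For reference, the same configuration can be done from the cli. This is a rough equivalent of the steps above, assuming 192.168.1.2 as the access point’s new static address and 192.168.1.1 as the main router:

```
# static address on the same subnet as the main router
uci set network.lan.proto='static'
uci set network.lan.ipaddr='192.168.1.2'
uci set network.lan.netmask='255.255.255.0'
# gateway and DNS both point at the main router
uci set network.lan.gateway='192.168.1.1'
uci set network.lan.dns='192.168.1.1'
# stop offering DHCP leases; the main router has this job
uci set dhcp.lan.ignore='1'
# disable the IPv6 RA, DHCPv6 and NDP services
uci set dhcp.lan.ra='disabled'
uci set dhcp.lan.dhcpv6='disabled'
uci set dhcp.lan.ndp='disabled'
uci commit
```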

Assuming all is well, we can now connect this newly configured access point to our normal LAN network.

One last thing for configuration of the access point: we haven’t yet disabled the firewall, which a dumb AP doesn’t need. Let’s do this in a way that will help us when we upgrade the OpenWRT firmware, by modifying the /etc/rc.local file (which is preserved across upgrades). Something along these lines should do it; my exact script may have differed slightly:
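```
# rc.local runs at the end of every boot, so this re-applies after a
# firmware upgrade too; for each service a dumb AP doesn't need, if it
# is still enabled, disable and stop it
for svc in firewall dnsmasq odhcpd; do
  if /etc/init.d/"$svc" enabled; then
    /etc/init.d/"$svc" disable
    /etc/init.d/"$svc" stop
  fi
done
exit 0
```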

This will disable and stop the services if they are enabled. One more reboot and we’re good to go.

Of course, now we have two devices which can offer WiFi service. The access point will forward traffic to the ‘gateway’ which is our main router.

WiFi configuration is just like normal. I would recommend using the same SSID as your main router (and same passwords, etc), but select a different channel. Your devices should seamlessly switch from one access point to the other.

For 2.4GHz WiFi, we are well advised to pick one of channels 1, 6, or 11. There is a good article that discusses why here. Since we have two access points we can pick two of those three channels, so it’s worth looking to see which two are least crowded in your area.

On OSX, if you hold down “Option” and click the WiFi icon in the menu bar you’ll be presented with additional options. Pick “Open Wireless Diagnostics…” then immediately use the menu bar to open “Window”->”Scan” – this will present you with the list of networks that your OSX machine can see. Moving around your home you can do a rudimentary network scan.
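If you prefer a terminal, older versions of OSX also ship a hidden airport utility that performs the same scan (Apple has deprecated it on recent releases):

```
# list visible networks with SSID, signal strength (RSSI) and channel
/System/Library/PrivateFrameworks/Apple80211.framework/Versions/Current/Resources/airport -s
```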

For 5GHz WiFi there are more channels, but the advice is the same: pick non-overlapping channels. Make sure to set the country code to unlock the channels which are allowed in your country. I wasn’t able to use channel 100 until I did this, so it’s good to configure the country under the WiFi device “Advanced Settings”.
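If you’d rather set the country from the cli, it’s a one-liner per radio (the country code here is an example – use your own):

```
uci set wireless.radio0.country='CA'
uci commit wireless
wifi reload
```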

Bonus activity

As you run this second WiFi access point you’ll notice that when viewing its status page you don’t always see nice hostnames. This is because the ‘dumb AP’ delegates much of the work back to the main router / gateway, so it never sees the DHCP lease information that maps addresses to hostnames. There are some solutions where arp-scan and fping are used to gather this information. While this works well, it doesn’t cover IPv6 addresses. The other downfall is that you have to install additional packages.

A simpler approach is to use scp to regularly copy the /tmp/dhcp.leases file from the main gateway to the dumb AP. The one downside to this is that you’ll additionally see the full list of “Active DHCP Leases” on the dumb AP – something that it is not managing at all as we disabled DHCP.
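A minimal sketch of that approach, assuming the main gateway is at 192.168.1.1 and key-based SSH from the dumb AP to the gateway is already set up – add this to /etc/crontabs/root on the dumb AP:

```
# refresh the copied lease file every 5 minutes so the status page
# can show hostnames
*/5 * * * * scp root@192.168.1.1:/tmp/dhcp.leases /tmp/dhcp.leases
```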

Neither solution is perfect; I’ll leave it up to the reader to decide which works best.

OpenWRT as a wireguard client

Previously I’ve written about running wireguard as a self-hosted VPN. In this post I’ll cover how to connect a remote site back to your wireguard installation, allowing that remote site to reach machines on your local (private) network. This is really no different than configuring a wireguard client on your phone or laptop, but by doing this on the router you build a network path that anyone on the remote network can use.

I should probably mention that there are other articles that cover a site-to-site configuration, where you have two wireguard enabled routers that extend your network across an internet link. While this is super cool, it wasn’t what I wanted for this use case. I would be remiss in not mentioning tailscale as an alternative if you want a site-to-site setup, it allows for the easy creation of a virtual network (mesh) between all of your devices.

In my case my IoT devices can all talk to my MQTT installation, and that communication not only allows the gathering of data from the devices, but offers a path to controlling the devices as well. What this means is that an IoT device at the remote site, if it can see the MQTT broker I host on my home server, will be controllable from my home network. Thus setting up a one-way wireguard ‘client’ link is all I need.

I will assume that the publicly visible wireguard setup is based on the linuxserver.io/wireguard container. You’ll want to add a new peer configuration for the remote site. This should generate a peer_remote.conf file that should look something like:
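```
# keys, addresses and the endpoint below are placeholders
[Interface]
Address = 10.13.13.2
PrivateKey = <peer private key>
ListenPort = 51820
DNS = 10.13.13.1

[Peer]
PublicKey = <server public key>
PresharedKey = <preshared key>
Endpoint = my.example.com:51820
AllowedIPs = 0.0.0.0/0
```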

This is the same conf file you’d grab and install into a wireguard client, but in our case we want to set up an OpenWRT router at a remote location to use this as its client configuration. The 10.13.13.x address is the default wireguard network for the linuxserver.io container.

I will assume that we’re on a recent version of OpenWRT (21.02 or above); as of this writing 22.03.2 is the latest stable release. As per the documentation page on setting up the client you’ll need to install some packages. This is easy to do via the cli:
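```
# wireguard-tools pulls in the kernel module; luci-proto-wireguard is
# only needed if you also want to manage the tunnel from the web UI
opkg update
opkg install wireguard-tools luci-proto-wireguard
```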

Now there are some configuration parameters you need to set up (again in the cli, as we’re going to set some environment variables and then use them later):
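```
# WG_ADDR, WG_KEY, WG_PSK and WG_PUB all come straight out of the
# peer_remote.conf file above; WG_SERV is your public wireguard host
WG_IF="wg0"
WG_SERV="my.example.com"
WG_PORT="51820"
WG_ADDR="10.13.13.2/24"
WG_KEY="<PrivateKey from peer_remote.conf>"
WG_PSK="<PresharedKey from peer_remote.conf>"
WG_PUB="<PublicKey from peer_remote.conf>"
```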

Now this is where I got stuck following the documentation. It wasn’t clear to me that the WG_ADDR value should be taken from the peer_remote.conf file as I’ve done above. I thought this was just another private network value to uniquely identify the new wg0 device I was creating on the OpenWRT router. Thankfully some kind folk on the OpenWRT forum helped point me down the right path to figure this out.

Obviously WG_SERV points at our existing wireguard installation, and the three secrets WG_KEY, WG_PSK, and WG_PUB all come from the same peer_remote.conf file. I do suspect that one of these might be allowed to be unique for the remote installation; however, I know that this works, and I do not believe we are introducing any security issues.

At this point we have all the configuration we need, and can proceed to configure the firewall and network. This follows the client script in the OpenWRT documentation:
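```
# put the wireguard interface in the wan zone (assumes the default
# zone order: lan first, wan second)
uci rename firewall.@zone[0]="lan"
uci rename firewall.@zone[1]="wan"
uci del_list firewall.wan.network="${WG_IF}"
uci add_list firewall.wan.network="${WG_IF}"
uci commit firewall
service firewall restart

# create the wg0 interface using the values set above
uci -q delete network.${WG_IF}
uci set network.${WG_IF}="interface"
uci set network.${WG_IF}.proto="wireguard"
uci set network.${WG_IF}.private_key="${WG_KEY}"
uci add_list network.${WG_IF}.addresses="${WG_ADDR}"

# describe the server peer; 0.0.0.0/0 sends all traffic over the tunnel
uci -q delete network.wgserver
uci set network.wgserver="wireguard_${WG_IF}"
uci set network.wgserver.public_key="${WG_PUB}"
uci set network.wgserver.preshared_key="${WG_PSK}"
uci set network.wgserver.endpoint_host="${WG_SERV}"
uci set network.wgserver.endpoint_port="${WG_PORT}"
uci set network.wgserver.route_allowed_ips="1"
uci set network.wgserver.persistent_keepalive="25"
uci add_list network.wgserver.allowed_ips="0.0.0.0/0"
uci commit network
service network restart
```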

This sets up a full tunnel VPN configuration. If you want to permit a split-tunnel then we need to change one line in the above script.

The allowed_ips needs to change to specify the subnet you want to route over this wireguard connection.

One important note: you need to ensure that your home network and remote network do not have overlapping IP ranges, which would introduce confusion about where to route what. Let’s assume that the home network lives on 192.168.1.0/24 – we’d want to ensure that our remote network did not use that range, so let’s assume we’ve configured the remote OpenWRT setup to use 192.168.4.0/24. By doing this, we make it easy to know which network we mean when we are routing packets around.

Thus if we wanted to only send traffic destined for the home network over the wireguard interface, we’d specify:
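```
# only traffic destined for the home subnet is routed over the tunnel
uci add_list network.wgserver.allowed_ips="192.168.1.0/24"
```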

As another way of viewing this configuration, let’s go take a peek at the config files on the OpenWRT router.

/etc/config/network will have two new sections (keys shown as placeholders):
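```
config interface 'wg0'
	option proto 'wireguard'
	option private_key '<WG_KEY>'
	list addresses '10.13.13.2/24'

config wireguard_wg0 'wgserver'
	option public_key '<WG_PUB>'
	option preshared_key '<WG_PSK>'
	option endpoint_host 'my.example.com'
	option endpoint_port '51820'
	option route_allowed_ips '1'
	option persistent_keepalive '25'
	list allowed_ips '192.168.1.0/24'
```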

and the /etc/config/firewall will have one modified section:
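```
config zone
	option name 'wan'
	option input 'REJECT'
	option output 'ACCEPT'
	option forward 'REJECT'
	option masq '1'
	option mtu_fix '1'
	list network 'wan'
	list network 'wan6'
	list network 'wg0'
```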

You’ll note that the wg0 device is part of the wan zone.

It really is pretty cool to have IoT devices at a remote site, magically controlled over the internet – and I don’t need any cloud services to do this.

Knowing when to update your docker containers with DIUN

DIUN – Docker Image Update Notifier. I was very glad to come across this particular tool as it helped solve a problem I had, one that I felt strongly enough about that I’d put a bunch of time into creating something similar.

My approach was to build some scripting to determine the signature of the image that I had deployed locally, and then make many queries to the registry to determine what (if any) changes were in the remote image. This immediately ran into some of the API limits on dockerhub. There were also other challenges with doing what I wanted: the digest information you get with docker pull doesn’t match the digest information available on dockerhub. I did find this useful blog post (and script) that solves a similar problem, but also hits some of the same API limitations. It seemed like maybe a combination of web scraping plus API calls could get a working solution, but it was starting to be a hard problem.

DIUN uses a very different approach. It starts by figuring out what images you want to scan – the simplest way to do this is to allow it to look at all running docker containers on your system. With this list of images, it can then query each image’s registry for the manifest digest of that tag. On the first run, it just saves this value away in a local data store. On every future run, it compares the digest it fetched to the one in the local data store – if there is a difference, it notifies you.
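To illustrate the idea (this is not DIUN’s actual code, just the same check done by hand against dockerhub for library/alpine:latest):

```
# grab an anonymous pull token for the repository
TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:library/alpine:pull" | jq -r .token)

# HEAD request for the manifest of the "latest" tag; the
# Docker-Content-Digest header is the value that changes on an update
curl -sI \
  -H "Authorization: Bearer $TOKEN" \
  -H "Accept: application/vnd.docker.distribution.manifest.list.v2+json" \
  "https://registry-1.docker.io/v2/library/alpine/manifests/latest" \
  | grep -i docker-content-digest
```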

In practice, this works to let you know every time a new image is available. It doesn’t know if you’ve updated your local image or not, nor does it tell you what changed in the image – only that there is a newer version. Still, this turns out to be quite useful especially when combined with slack notifications.

Setting up DIUN for my system was very easy. Here is the completed Makefile, based on my managing docker container with make post (the schedule, timezone, and webhook values here are placeholders – substitute your own):
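```
NAME  := diun
IMAGE := crazymax/diun:latest

.PHONY: run stop logs

run:
	docker run -d --name $(NAME) \
		--restart unless-stopped \
		-v $(PWD)/data:/data \
		-v /var/run/docker.sock:/var/run/docker.sock \
		-e TZ=America/Toronto \
		-e DIUN_WATCH_SCHEDULE="0 */6 * * *" \
		-e DIUN_PROVIDERS_DOCKER=true \
		-e DIUN_PROVIDERS_DOCKER_WATCHBYDEFAULT=true \
		-e DIUN_NOTIF_SLACK_WEBHOOKURL="<my webhook URL>" \
		-e DIUN_PROVIDERS_FILE_FILENAME=/data/config.yml \
		$(IMAGE)

stop:
	docker stop $(NAME) && docker rm $(NAME)

logs:
	docker logs -f $(NAME)
```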

I started very simply at first, following the installation documentation provided. I used a mostly environment-variable approach to configuring things as well. The three variables I needed to get started were:

  • DIUN_WATCH_SCHEDULE – enable cron like behaviour
  • DIUN_PROVIDERS_DOCKER – watch all running docker containers
  • DIUN_PROVIDERS_DOCKER_WATCHBYDEFAULT – watch all by default

Looking at the start-up logs for the diun container is quite informative, and generally useful error messages are emitted if you have a configuration problem.

I later added:

  • DIUN_NOTIF_SLACK_WEBHOOKURL

in order to get slack based notifications. There is a little bit of setup you need to do with your slack workspace to enable slack webhooks to work, but it is quite handy for me to have a notification in a private channel to let me know that I should go pull down a new container.

Finally I added a configuration file ./data/config.yml to capture additional docker images which are used as base images for some locally built Dockerfiles. This will alert me when a base image I’m using gets an update and will remind me to go re-build any containers that depend on it. This uses the environment variable:

  • DIUN_PROVIDERS_FILE_FILENAME

My configuration file looks something like this (the image entries here are examples – list whichever base images you depend on):
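```
# each entry is an image for DIUN's file provider to watch
- name: docker.io/library/alpine:3.17
- name: ghcr.io/linuxserver/baseimage-alpine:3.17
- name: docker.io/library/python:3.11-slim
```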

I’ve actually been running with this for a couple of weeks now. I really like the linuxserver.io project and recommend images built by them. They have a regular build schedule, so you’ll see (generally) weekly updates for those images. I have nearly 30 different containers running, and it’s interesting to see which ones are updated regularly and which seem to be more static (dormant).

Some people make use of Watchtower to manage their container updates. I tend to subscribe to the philosophy that this is not a great idea for a ‘production’ system; at least some subset of the linuxserver.io folks agree with this as well. I like to have hands on keyboard when I do an update, so I can make sure that I’m around to deal with any problems that may happen.