Knowing when to update your docker containers with DIUN

DIUN – Docker Image Update Notifier. I was very glad to come across this particular tool as it helped solve a problem I had, one that I felt strongly enough about that I’d put a bunch of time into creating something similar.

My approach was to build some scripting to determine the signature of the image that I had deployed locally, and then make many queries to the registry to determine what (if any) changes were in the remote image. This immediately ran into some of the API limits on dockerhub. There were also other challenges with doing what I wanted: the digest information you get with docker pull doesn’t match the digest information available on dockerhub. I did find this useful blog post (and script) that solves a similar problem, but it also hits some of the same API limitations. It seemed like a combination of web scraping plus API calls could get to a working solution, but it was starting to become a hard problem.

DIUN uses a very different approach. It starts by figuring out what images you want to scan – the simplest way to do this is to allow it to look at all running docker containers on your system. With this list of images, it can then query the image registry for the current digest of each image’s tag. On the first run, it just saves this value away in a local data store. On every future run, it compares the digest it fetched to the one in the local data store – if there is a difference, it notifies you.

In practice, this works to let you know every time a new image is available. It doesn’t know if you’ve updated your local image or not, nor does it tell you what changed in the image – only that there is a newer version. Still, this turns out to be quite useful, especially when combined with Slack notifications.

Setting up DIUN for my system was very easy. The completed Makefile follows the pattern from my managing docker containers with make post.
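In rough strokes it is just a docker run wrapped in a few make targets. This is a sketch rather than the exact file – the image tag, schedule, timezone, and mount paths are assumptions to adjust for your own setup (and remember that Makefile recipe lines are tab-indented):

  # Sketch of a DIUN deployment: watch all running containers plus a file of
  # extra images, and notify via Slack.
  NAME  := diun
  IMAGE := crazymax/diun:latest
  # Set this to your Slack incoming webhook URL
  SLACK_WEBHOOK ?=

  .PHONY: run stop logs

  run:
        docker run -d --name $(NAME) \
          --restart unless-stopped \
          -v /var/run/docker.sock:/var/run/docker.sock \
          -v $(PWD)/data:/data \
          -e TZ=America/Toronto \
          -e DIUN_WATCH_SCHEDULE="0 */6 * * *" \
          -e DIUN_PROVIDERS_DOCKER=true \
          -e DIUN_PROVIDERS_DOCKER_WATCHBYDEFAULT=true \
          -e DIUN_NOTIF_SLACK_WEBHOOKURL=$(SLACK_WEBHOOK) \
          -e DIUN_PROVIDERS_FILE_FILENAME=/data/config.yml \
          $(IMAGE)

  stop:
        docker stop $(NAME); docker rm $(NAME)

  logs:
        docker logs -f $(NAME)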

I started very simply at first, following the installation documentation provided. I used a mostly environment variable approach to configuring things as well. The three variables I needed to get started were:

  • DIUN_WATCH_SCHEDULE – enable cron-like behaviour
  • DIUN_PROVIDERS_DOCKER – watch all running docker containers
  • DIUN_PROVIDERS_DOCKER_WATCHBYDEFAULT – watch all by default

Looking at the start-up logs for the diun container is quite informative, and generally useful error messages are emitted if you have a configuration problem.

I later added the:

  • DIUN_NOTIF_SLACK_WEBHOOKURL

in order to get Slack-based notifications. There is a little bit of setup you need to do with your Slack workspace to enable webhooks, but it is quite handy for me to have a notification in a private channel letting me know that I should go pull down a new container.

Finally I added a configuration file ./data/config.yml to capture additional docker images which are used as base images for some locally built Dockerfiles. This will alert me when a base image I’m using gets an update and remind me to go re-build any containers that depend on it. This uses the environment variable:

  • DIUN_PROVIDERS_FILE_FILENAME

My configuration file is quite short.
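It’s a YAML list in DIUN’s file-provider format – here’s a sketch, with placeholder image names standing in for the base images you actually build on:

  # Extra images to watch that aren't running as containers on this host
  - name: docker.io/library/alpine:latest
  - name: docker.io/library/node:lts
  - name: docker.io/library/debian
    # watch_repo: true watches all tags for this image, not just one
    watch_repo: true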

I’ve actually been running with this for a couple of weeks now. I really like the linuxserver.io project and recommend images built by them. They have a regular build schedule, so you’ll see (generally) weekly updates for those images. I have nearly 30 different containers running, and it’s interesting to see which ones are updated regularly and which seem to be more static (dormant).

Some people make use of Watchtower to manage their container updates. I tend to subscribe to the philosophy that this is not a great idea for a ‘production’ system; at least some subset of the linuxserver.io folks agree with this as well. I like to have hands on keyboard when I do an update, so I can make sure that I’m around to deal with any problems that may happen.

 

Hacking an old HP Chromebook 11 G5

When it was time to get one of the kids a Chromebook for school years ago, I made sure to purchase a 4GB memory model with an Intel chip. I’m a fan of ARM devices, but at the time (6+ years ago) there was some real junk out there. There was also a price factor – I was looking at the lower end of the market – and durability was a concern.

I remember the Dell Chromebook 11″ was a hot item back then, but the pricing was higher than I wanted. Same for the Lenovo Chromebooks. After a bunch of searching around I found a nice HP Chromebook 11 G5 (specs) – if my memory is correct I got this well under $300 at the time.

This HP 11 G5 worked well, survived a few drops, and made it until its end of life – when Google stops providing OS updates. I’ve since replaced it with a Lenovo IdeaPad Flex 5 Chromebook – a nice step up, and there was a refurb model available for a great price (under $350).

For a long time there has been Neverware CloudReady – a neat way to get ChromeOS onto old laptops. I always worried that there were security concerns with some random company offering ‘Google’ logins, but Neverware worked well. Google has since bought CloudReady, and seems to have turned around and created ChromeOS Flex as the successor.

I figured that I could use ChromeOS Flex on the HP 11 G5 to continue to get updates. Another option would be to turn it into a GalliumOS machine. I actually have another old 14″ Chromebook that I ran GalliumOS on, but I have since moved it to Linux Mint and use it as a generic Linux laptop.

I would recommend reading through the GalliumOS wiki carefully to learn about the process of converting a Chromebook into a useful generic low end laptop – specifically the Preparing section, the Hardware Compatibility section, and the Firmware section. Inevitably you’ll also end up on MrChromeBox’s site – which is where you’ll get the firmware replacement you’ll need.

While you can in some cases get alternative firmware running on the Chromebook hardware, it’s much easier if you remove the hardware write protect first. There wasn’t a specific guide to doing this, but the iFixit site was useful for the teardown aspect.

You will want to remove the black screw pointed at by the arrow. It’s near the keyboard ribbon cable connector. This is the hardware write protect.

Once I’d done this, it was simply a matter of installing the “UEFI (Full ROM) firmware” using the MrChromeBox scripts. This is not for the faint of heart, and I do recommend making a backup of the original firmware in case you want to go back.
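For the curious, the firmware utility gets pulled down and run from a shell on the Chromebook. When I did this the command was along these lines – check MrChromeBox’s site for the current instructions, since this may well have changed:

  # Fetch and run the MrChromeBox firmware utility (verify against the current docs)
  cd; curl -LO https://mrchromebox.tech/firmware-util.sh
  sudo bash firmware-util.sh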

At this point you can install any old OS distribution you want. In my case I wanted to install ChromeOS Flex, so I’d downloaded that and created a USB drive with it ready to roll. Installing it on my newly firmware-updated Chromebook was easy.

I then ran into trouble. While ChromeOS Flex started up fine, it was quickly clear that sound didn’t work. The video camera was working fine, but I couldn’t get any output or input for sound. I found that others had this same issue. I even tried wired headphones (same problem) and Bluetooth headphones (sound out was fine, sound in didn’t work at all).

This is a bummer, but understandable. Chromebook hardware is not really the target for ChromeOS Flex. I figured it was worth trying out a generic Linux distro, so I picked Linux Mint. Booting from a USB drive with Mint on it was again easy with the new firmware. Sound output worked fine, as did the webcam video – but the mic was still a problem, again something others had discovered.

At this point ChromeOS Flex was a dead end. I can’t give someone a Chromebook that doesn’t have working audio in and out, with no reasonable work-arounds to get there. Installing Linux won’t trivially solve the problem either, because I get sound out but no mic.

Remember when I said it was a good idea to back up the original firmware? Yup, we’re returning this Chromebook to stock (but I’ll leave the write protect screw out – because why not?). The MrChromeBox FAQ walks you through restoring that firmware. Since I had Linux Mint on a bootable USB, I just used that to start up a shell and pull the script. Once I’d restored the stock firmware, I needed to build a ChromeOS recovery image and then return to a totally stock setup.

Now this old HP 11 G5 Chromebook has all of its features working – video, sound, mic – but is trapped on an expired version of ChromeOS. Eventually the browser will become annoyingly old, and at that point you’ll have to decide between living with the limitations of that browser or losing your mic (and possibly sound).

Tasmota, MQTT and Prometheus

In my recent IoT adventures I’ve been using Tasmota firmware to give me local-only control over my devices. I’ve started to build out some sensors (temperature, flood) that I want to gather data from; this requires that I have a way to alert on (and graph) the data coming out of these sensors. I already have Grafana + Prometheus running, so it is just a matter of adding a Prometheus exporter to get the data out.

Tasmota has built-in MQTT support. While I could just craft my own Prometheus exporter that scraped the data from the various devices, I decided that adding MQTT to the software stack and using an off-the-shelf MQTT Prometheus exporter meant I was only managing configuration instead of writing something custom.

MQTT is a great protocol for IoT devices. It’s lightweight and reliable, and it’s dubbed “The Standard for IoT Messaging”. It didn’t take long for me to come across the Eclipse Mosquitto project, which, based on the pull stats from dockerhub, is very popular. The terms MQTT broker and MQTT server seem to be used interchangeably – the thing to remember is that it’s a message queue system that supports publish/subscribe.

The Tasmota device is a client, Mosquitto is the broker, and the Prometheus exporter is a client. Once the data is in Prometheus I can make pretty graphs and create alerts in Grafana.

Running Mosquitto in a container was very easy, but I quickly ran into a problem I had created for myself with the restricted IoT network. Devices on my IoT network can’t see each other – or really much of anything, including the MQTT broker.

While I could poke a hole in the IoT network configuration to allow it to see the host:port that my MQTT broker is running on, there are a lot of containers running on that host/IP. Then I remembered that I could use the docker macvlan support to give the broker a unique IP address. [Security footnote: while this lets me have a unique IP address, the code is still running on the same host as the other containers, so the additional security is somewhat limited. It’s still sort of cool and makes it less likely that I’ll goof up some firewall rules and expose too many things; it also sets me up for running an actual second host if I wanted better security.]

I quickly discovered that you can only have one macvlan network set up on a host. It may be possible to work around this limitation using the 802.1q trunk bridge mode, but that quickly started to seem complicated so I bailed. I did discover that with my existing macvlan network I can specify static IP addresses, and since I have unused IPs in that network this will work fine.
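For reference, creating a macvlan network in the first place looks roughly like this – the subnet, IP range, parent interface, and network name below are assumptions standing in for whatever your existing setup uses:

  # Create a macvlan network whose containers get their own LAN IP addresses
  docker network create -d macvlan \
    --subnet=192.168.1.0/24 \
    --gateway=192.168.1.1 \
    --ip-range=192.168.1.64/28 \
    -o parent=eth0 \
    macvlan0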

The Makefile that manages the docker deployment of my Mosquitto MQTT broker follows the same pattern.
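As a sketch – the network name, static IP, and host paths here are assumptions that line up with the addresses used below:

  # Mosquitto MQTT broker on the macvlan network with a static IP
  NAME    := mosquitto
  IMAGE   := eclipse-mosquitto:latest
  NETWORK := macvlan0
  IP      := 192.168.1.65

  .PHONY: run stop logs

  run:
        docker run -d --name $(NAME) \
          --restart unless-stopped \
          --network $(NETWORK) --ip $(IP) \
          -v $(PWD)/mosquitto.conf:/mosquitto/config/mosquitto.conf \
          -v $(PWD)/data:/mosquitto/data \
          $(IMAGE)

  stop:
        docker stop $(NAME); docker rm $(NAME)

  logs:
        docker logs -f $(NAME)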

And the mosquitto.conf file is short and simple.
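Something like this sketch – anonymous access on the default port, with persistence enabled:

  listener 1883
  allow_anonymous true
  persistence true
  persistence_location /mosquitto/data/
  log_dest stdout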

Keep in mind that my wireguard container is running on 192.168.1.64, and I’ve gone back and modified the wireguard Makefile/container to specify that IP address to ensure that things work reliably after reboots.

The last thing I need to do is modify my OpenWRT configuration to allow the IoT network devices to see this new container. Adding the following to my /etc/config/firewall enables that.
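In my setup it’s a single traffic rule letting the IoT zone reach the broker’s address on the MQTT port. The zone names and the IP here are assumptions matching the examples above:

  config rule
      option name 'Allow-IoT-MQTT'
      option src 'iot'
      option dest 'lan'
      option dest_ip '192.168.1.65'
      option dest_port '1883'
      option proto 'tcp'
      option target 'ACCEPT'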

Not specified here, but a good idea: assign a hostname to the static IP of the container so we can later reference it by name.
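On OpenWRT this can be a static DNS entry in /etc/config/dhcp – for example (hostname and IP are again placeholders):

  config domain
      option name 'mqtt'
      option ip '192.168.1.65'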

Configuring my Tasmota devices to talk with the MQTT broker is straightforward now that there is network visibility; the documentation is pretty easy to follow. Viewing the console of the Tasmota device helps see if the connection is successful.
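The settings can also be applied from the Tasmota console in one shot with a Backlog command – something like this, where the host and topic are placeholders for your own values:

  Backlog MqttHost mqtt; MqttPort 1883; Topic basement-sensor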

Another good debugging technique is to shell into the mosquitto container and subscribe to all events.
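The eclipse-mosquitto image ships with the command line clients, so watching every topic is a one-liner (container name as in the Makefile above):

  docker exec -it mosquitto mosquitto_sub -t '#' -v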

A quick recap: we have statically assigned an IP to a container from the macvlan network. That container is running an MQTT broker. The IoT devices are sending events to that broker, which moves the data from the IoT network onto the network where Prometheus lives.

Now all that is left is to add an exporter to pull data from MQTT and feed Prometheus. Looking at dockerhub, https://hub.docker.com/r/kpetrem/mqtt-exporter seems very popular, but after some experiments it didn’t meet my needs. One thing I wanted to support was detecting when a device went missing, and there wasn’t an easy way to do that with this exporter.

Using the list of Prometheus ports it was easy to find many MQTT exporters. This one (https://github.com/hikhvar/mqtt2prometheus) stood out as a good option. While it doesn’t have a container on dockerhub, it does have one on the GitHub container registry (ghcr.io/hikhvar/mqtt2prometheus:latest).

My Makefile for driving the container is much the same shape as the others.
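In this sketch the exposed port and in-container config path follow the project’s README defaults – double-check them against the current documentation:

  NAME  := mqtt2prometheus
  IMAGE := ghcr.io/hikhvar/mqtt2prometheus:latest

  .PHONY: run stop logs

  run:
        docker run -d --name $(NAME) \
          --restart unless-stopped \
          -p 9641:9641 \
          -v $(PWD)/config.yaml:/config.yaml \
          $(IMAGE)

  stop:
        docker stop $(NAME); docker rm $(NAME)

  logs:
        docker logs -f $(NAME)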

And, more importantly, the config file – which does the real work of mapping the MQTT payloads onto Prometheus metrics.
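Here’s a sketch for an AM2301 temperature/humidity sensor, following the format in the project’s documentation (the topic pattern, metric names, and cache timeout are my assumptions):

  mqtt:
    server: tcp://mqtt:1883
    # Tasmota publishes telemetry to tele/<topic>/SENSOR by default
    topic_path: tele/+/SENSOR
    device_id_regex: "tele/(?P<deviceid>.*)/SENSOR"
    qos: 0
  cache:
    timeout: 24h
  metrics:
    - prom_name: temperature
      mqtt_name: "AM2301.Temperature"
      help: "Temperature reported by the sensor"
      type: gauge
    - prom_name: humidity
      mqtt_name: "AM2301.Humidity"
      help: "Relative humidity reported by the sensor"
      type: gauge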

Most interesting for others here are probably the MQTT payloads and the mapping I’ve made in the config file. It took a few iterations to get that mapping right based on the documentation.
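For reference, a Tasmota device with an AM2301 sensor attached publishes telemetry shaped roughly like this to tele/<topic>/SENSOR (the values are illustrative), which is what the metrics section above is picking apart:

  {
    "Time": "2023-01-15T10:05:00",
    "AM2301": {
      "Temperature": 21.5,
      "Humidity": 52.0
    },
    "TempUnit": "C"
  }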

Now that I’ve got data flowing, I can create pretty graphs and set alerts as needed.

I’ve also achieved a no-code solution for getting data from the Tasmota IoT sensors into graphs and alerts. The pathway is a little longer than I would like for reliability: IoT device -> MQTT broker -> exporter -> Prometheus -> Grafana. This means I need four containers to be working, plus the IoT device itself. It’s still not a bad solution, and the Prometheus + Grafana stack is used for monitoring other components of my infrastructure, so I pay regular attention to it.