pOwn your IoT – OpenBeken

If you buy something, you expect to own it – this means being able to decide what it’s doing or not doing. If you can’t open it, you don’t own it. I think this is really important when we consider IoT devices that you add to your home. You should have 100% control over your light switches, not be reliant on some company to allow you to manage them.

In the past I’ve used Tasmota to replace the firmware in some commodity devices with good success. I wanted a new light switch and found the Martin Jerry S01 switch, so I ordered one. Unfortunately, when it arrived and I opened it up, I discovered the control module was no longer an ESP8266 – but a Tuya CB3S module.

Some searching turned up the OpenBeken project. This is an open firmware that supports a number of Tuya devices. It appears to be at least partly inspired by Tasmota, which I found attractive, but the big draw was simply that there was a way to run open firmware on this device.

Let me back up a little. Opening the MJ-S01 is quite easy. I used a putty knife (thin metal blade) to pry the side clips. There are 4 clips, two per side.

Once you’ve got the clips released, you can easily remove the switch plate. There is a metal grounding plate you’ll have to un-hook from the switch plate. A cable with a 3-pin connector joins the switch plate to the base; disconnecting it is optional, but it makes it easier to work with the switch plate that holds the controller.

I went further and removed the screws holding the circuit board to the switch plate in order to see the other side, where the CB3S is attached. In the picture above you can see the blue circuit board in the middle. You don’t need to do this extra disassembly, as the row of 6 pads exposes the pins we want to work with.

In order to flash new firmware, I needed to find and connect 4 pins: 3.3V, GND, TX, and RX. To identify these I referenced the Tuya documentation on this module, which lists its pinout. Using my multimeter in continuity mode, I was able to map the pins on the module to the pads on the circuit board.

Now it’s a simple matter of heating up the soldering iron and hooking up some wires to these pads.

A bit ugly, but it works. Now I can test that I’ve got things correct by hooking up just 3.3V and GND. Success! When I power on the device this way I get the expected blinking LED, and I can long-press the button to enter setup mode. With the stock firmware in AP (access point) mode, I see the expected “Smart_XXXX” access point become available to my laptop’s WiFi.

Next we get to experience the adventure of setting up the application on Windows. I’m going to gloss over this because it’s both a bit complicated and my experience is likely to be different than yours. We are trying to get the GUI-based flash tool installed. I needed to install a .NET Framework runtime, and tell Windows it was OK to run this untrusted application. I was lucky that my USB-to-serial dongle was recognized by Windows and showed up as COM6.

Assuming you are able to run the app, get your serial connection sorted out, and provide 3.3V power to the device, we are very close to getting things going. One note: I connected the TX of my serial device to the RX of the CB3S board, and RX to TX. Crossing the connections this way is standard UART wiring, and it worked for me.

There is quite a bit to unpack in the image above. First, you can see that my serial UART was correctly detected and set up as COM6. I expect your configuration here will be different, and I hope it works easily for you, but USB serial devices and Windows can be frustrating.

The second key thing is to pick the right “chip type”. The CB3S contains a BK7231N, so I selected that from the list of supported chips. I suggest you then “Download latest from Web”, which in my case upgraded me from version 606 to version 670.

At this point everything seemed OK, but I wanted to proceed cautiously. The CB3S apparently enters programming mode upon power-on. With everything hooked up, I tried “Do firmware backup (read) only”. This just worked, and I was greeted with the screen capture I took above showing “Reading success!” – so I now knew I had all of the right connections made. Reading the firmware also gave the tool something to parse to discover the Tuya settings; this data appeared in a second dialog box as a JSON payload for me to save away.

Now we need to be brave and flash the latest version of the open firmware. This time it seemed to get stuck trying to enter programming mode and I needed to very (very) briefly disconnect/reconnect power to reset it. This worked great and I held my breath while it flashed.

I had not checked the box “Automatically configure OBK on flash write”, so once it was flashed, I did a second operation, “Write only OBK config”, to write the discovered values (that JSON payload). I didn’t need to configure anything; the tool had already initialized the values internally after the firmware backup step.

In theory, I have the original firmware downloaded to my machine in case I want to revert. If you care about this, maybe track down that file and save it. I personally don’t think I’d ever go back.

One more power cycle, and I’m very happy to see a WiFi access point appear named “OpenBK7231N_XXXXX”. Connecting my laptop to this, I’m able to visit the gateway IP address (http://192.168.4.1) and am greeted by a very Tasmota-looking web page to configure the device.

Now I can remove my patch wires from the solder pads, re-assemble the device, and test that things still work end-to-end (they do). While there are similarities to Tasmota, things are quite different. There isn’t a built-in timer facility, which I was hoping for, but it turns out that with some simple scripting I can program a timer schedule. You can even change the built-in web UI via scripting, which is pretty cool.
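As a sketch of what that scripting looks like – based on my reading of the OpenBeken docs, so double-check the command names against your firmware version – an autoexec.bat with clock events can toggle the relay on a schedule:

```
addClockEvent 06:45 0xff 1 POWER ON
addClockEvent 22:30 0xff 2 POWER OFF
```

The second argument is a weekday bitmask (0xff meaning every day) and the third is a unique ID so the event can be removed later.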

There is also very nice Home Assistant integration built in. The CB3S controller appears to be snappier than the Tasmota ESP8266-based devices I have, so while this device wasn’t what I expected when I ordered it, with a bit of work I’ve ended up in a pretty good place.

Footnote: There is a fairly active forum covering the OpenBK firmware and the various supported devices.

Tasmota, MQTT and Prometheus

In my recent IoT adventures I’ve been using Tasmota firmware to give me local-only control over my devices. I’ve started to build out some sensors (temperature, flood) that I want to gather data from, and this requires a way to alert on (and graph) the data coming out of these sensors. I already have Grafana + Prometheus running, so it is just a matter of adding a Prometheus exporter to get the data out.

Tasmota has built-in MQTT support. While I could craft my own Prometheus exporter that scraped the data from the various devices, I decided that adding MQTT to the software stack and using an off-the-shelf MQTT Prometheus exporter meant I was only managing configuration instead of writing something custom.

MQTT is a great protocol for IoT devices. It’s lightweight and reliable, and it’s dubbed “The Standard for IoT Messaging”. It didn’t take long for me to come across the Eclipse Mosquitto project, which, based on the pull stats from Docker Hub, is very popular. The terms MQTT broker and MQTT server seem to be used interchangeably – the thing to remember is that it’s a message queue system that supports publish/subscribe.

The Tasmota device is a client, Mosquitto is the broker, and the Prometheus exporter is a client. Once the data is in Prometheus I can make pretty graphs and create alerts in Grafana.

Running Mosquitto in a container was very easy, but I quickly ran into a problem I had created for myself with my restricted IoT network. Devices on my IoT network can’t see each other, or really much of anything – including the MQTT broker.

While I could poke a hole in the IoT network configuration to allow it to see the host:port that my MQTT broker is running on, there are a lot of containers running on that host/IP. Then I remembered that I could use Docker’s macvlan support to give the broker a unique IP address. [Security footnote: while this lets me have a unique IP address, the code is still running on the same host as the other containers, so the additional security is somewhat limited. It is still sort of cool, makes it less likely that I’ll goof up some firewall rules and expose too many things, and it sets me up for running an actual second host if I ever want better security.]

I quickly discovered that you can only have one macvlan network set up on a host. It may be possible to work around this limitation using 802.1q trunk bridge mode, but that quickly started to seem complicated, so I bailed. I did discover that with my existing macvlan network I can specify static IP addresses, and since I have unused IPs in my macvlan network this will work fine.
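Sketching the Docker side of this (the subnet, parent interface, and addresses are placeholders for my LAN):

```sh
# Create the single macvlan network, bridged onto the physical LAN interface.
docker network create -d macvlan \
  --subnet=192.168.1.0/24 --gateway=192.168.1.1 \
  -o parent=eth0 lan_macvlan

# Attach a container with a static IP from an unused part of that range.
docker run -d --name mosquitto --network lan_macvlan --ip 192.168.1.70 \
  eclipse-mosquitto:2
```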

Here is the makefile that manages the docker deployment of my Mosquitto MQTT broker.
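Simplified here, with the image tag, network name, and IP as placeholders for my setup:

```make
NAME    = mosquitto
IMAGE   = eclipse-mosquitto:2
NETWORK = lan_macvlan
IP      = 192.168.1.70

run:
	docker run -d --restart=unless-stopped --name $(NAME) \
		--network $(NETWORK) --ip $(IP) \
		-v $(PWD)/mosquitto.conf:/mosquitto/config/mosquitto.conf \
		$(IMAGE)

clean:
	docker rm -f $(NAME)
```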

And the mosquitto.conf file is simply
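Roughly this – anonymous access is only acceptable here because the broker is reachable solely from the restricted IoT network:

```
listener 1883
allow_anonymous true
persistence true
persistence_location /mosquitto/data/
```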

Keep in mind that my WireGuard container is running on 192.168.1.64; I’ve gone back and modified the WireGuard Makefile/container to specify that IP address to ensure that things work reliably after reboots.

The last thing I need to do is modify my OpenWRT configuration to allow the IoT network devices to see this new container. Adding the following to my /etc/config/firewall enables that.
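A rule along these lines does the job (the zone names and destination IP are placeholders for my network):

```
config rule
	option name 'Allow-IoT-to-MQTT'
	option src 'iot'
	option dest 'lan'
	option dest_ip '192.168.1.70'
	option dest_port '1883'
	option proto 'tcp'
	option target 'ACCEPT'
```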

Not specified here, but a good idea: assign a hostname to the static IP of the container so we can later reference it by name.

Configuring my Tasmota devices to talk with the MQTT broker is straightforward now that there is network visibility, the documentation is pretty easy to follow. Viewing the console of the Tasmota device helps see if the connection is successful.
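From the Tasmota console the relevant commands are just a handful (the hostname and topic here are examples):

```
MqttHost mqtt.home.lan
MqttPort 1883
Topic flood-sensor-1
```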

Another good debugging technique is to shell into the mosquitto container and subscribe to all events.
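For example, assuming the container is named mosquitto:

```sh
docker exec -it mosquitto mosquitto_sub -t '#' -v
```

The # wildcard matches every topic, and -v prints the topic alongside each message.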

A quick recap: we have statically assigned an IP from the macvlan network to a container. That container is running an MQTT broker. The IoT devices are sending events to that broker, moving data from the IoT network onto the network where Prometheus lives.

Now all that is left is to add an exporter to pull data from MQTT and feed Prometheus. Looking at Docker Hub, https://hub.docker.com/r/kpetrem/mqtt-exporter seems very popular, but after some experiments it didn’t meet my needs. One thing I wanted was to detect when a device went missing, and there wasn’t an easy way to do this with that exporter.

Using the list of Prometheus default port allocations it was easy to find many MQTT exporters. This one (https://github.com/hikhvar/mqtt2prometheus) stood out as a good option. While it doesn’t have a container on Docker Hub, it does have one in the GitHub-hosted registry (ghcr.io/hikhvar/mqtt2prometheus:latest).

My Makefile for driving the container is
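Again simplified, with names as placeholders; I believe the exporter listens on port 9641 by default:

```make
NAME  = mqtt2prometheus
IMAGE = ghcr.io/hikhvar/mqtt2prometheus:latest

run:
	docker run -d --restart=unless-stopped --name $(NAME) \
		-p 9641:9641 \
		-v $(PWD)/config.yaml:/config.yaml \
		$(IMAGE)

clean:
	docker rm -f $(NAME)
```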

and more importantly the config file
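The shape of the file is as below; my broker IP and the sensor key are placeholders (check the mqtt2prometheus README for the exact matching rules):

```yaml
mqtt:
  server: tcp://192.168.1.70:1883
  topic_path: tele/+/SENSOR
  device_id_regex: "tele/(?P<deviceid>.*)/SENSOR"
metrics:
  - prom_name: temperature
    mqtt_name: DS18B20.Temperature
    help: Temperature reading from the Tasmota sensor
    type: gauge
```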

Most interesting for others here is probably the MQTT payloads and the mapping I’ve made to the config file. It took a few iterations to figure out the correct config file based on the documentation.
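For context, a Tasmota device publishes sensor telemetry to tele/<topic>/SENSOR with a JSON payload along these lines (the sensor name varies by device):

```json
{"Time":"2022-01-15T08:00:10","DS18B20":{"Temperature":68.4},"TempUnit":"F"}
```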

Now that I’ve got data flowing, I can create pretty graphs and set alerts as needed.

I’ve also achieved a no-code solution for getting data from the Tasmota IoT sensors into graphs and alerts. The pathway is a little longer than I would like for reliability: IoT device -> MQTT broker -> exporter -> Prometheus -> Grafana. This means I need 4 containers to work, plus the IoT device itself. It’s still not a bad solution, and since the Prometheus + Grafana pair is also used for monitoring other components of my infrastructure, I pay regular attention to it.

Learning Rust by (re)writing a prometheus exporter

As part of my self-hosted infrastructure I’ve got Prometheus set up to gather metrics from various sources, and Grafana to visualize them. My TED5000 gives me power usage information for the whole house, and my thermostat provides temperature data.

When using Prometheus, you need exporters to provide metric data. There is an existing TED5000 exporter that I’m using, but I didn’t find one for the thermostat – so I created one. The initial implementation was in Python, and it worked fine. However, I’d see in my Pi-hole dashboard that lookups of the thermostat name were high in the stats (22,000+ lookups per 24-hour period). Looking at the logs, every 15 seconds four lookups would happen: two pairs (A and AAAA) separated by 2 seconds. I suspect this was a side effect of the radiotherm library I was using in my code.

The exporter is very simple: it’s just a webserver that, when scraped, makes a web request to the thermostat and responds with the data formatted for Prometheus. The response payload looks like:
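Something in Prometheus’ text exposition format, along these lines (the metric names here are illustrative, not my exporter’s exact ones):

```
# HELP thermostat_temp_f Current temperature reported by the thermostat
# TYPE thermostat_temp_f gauge
thermostat_temp_f 70.5
```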

I figured that this was a good opportunity for me to learn Rust with a practical project.

The first thing I did was get myself set up to compile Rust code. I did this using a Docker container, as inspired by my previous post.
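The official rust image makes this easy – roughly:

```sh
docker run --rm -v "$PWD":/usr/src/myapp -w /usr/src/myapp rust:latest \
	cargo build --release
```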

At this point I was able to create my first hello-world Rust code. I went from printing “hello” to following a tutorial to create a very simple webserver – which will be sufficient for a Prometheus exporter.
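A minimal sketch of that kind of single-threaded server, using only the standard library (the metric name and function names are mine, not the tutorial’s exact code):

```rust
use std::io::{Read, Write};
use std::net::{TcpListener, TcpStream};
use std::thread;

// Build a minimal HTTP/1.1 response carrying the metrics body.
fn metrics_response(body: &str) -> String {
    format!(
        "HTTP/1.1 200 OK\r\nContent-Type: text/plain\r\nContent-Length: {}\r\n\r\n{}",
        body.len(),
        body
    )
}

fn main() {
    // Bind to an ephemeral port on localhost.
    let listener = TcpListener::bind("127.0.0.1:0").unwrap();
    let addr = listener.local_addr().unwrap();

    // Self-contained demo: a client thread scrapes the server once.
    let client = thread::spawn(move || {
        let mut stream = TcpStream::connect(addr).unwrap();
        stream.write_all(b"GET /metrics HTTP/1.1\r\n\r\n").unwrap();
        let mut reply = String::new();
        stream.read_to_string(&mut reply).unwrap();
        reply
    });

    // Serve exactly one request, then exit.
    let (mut conn, _) = listener.accept().unwrap();
    let mut buf = [0u8; 512];
    let _ = conn.read(&mut buf).unwrap();
    conn.write_all(metrics_response("thermostat_temp_f 70.5\n").as_bytes())
        .unwrap();
    drop(conn); // close the socket so the client sees EOF

    let reply = client.join().unwrap();
    assert!(reply.starts_with("HTTP/1.1 200 OK"));
    println!("served one metrics request");
}
```

A real exporter would loop on accept and fetch fresh data per request, but this is the whole shape of the thing.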

I then started to learn about cargo and the crates.io site. It turns out there are a few Prometheus crates out there; I found one that looked like a good match, but after looking at it in more detail I decided it was a lot more code and capability than I was looking for. Consider again the very simple response payload above – I really need very little help from a library.

I located a tutorial on reading data from a URL in Rust. That tutorial was less complete, as it assumed more knowledge of managing crates than I had at the time. To turn the code snippet into working code, you need to add two lines to your Cargo.toml file. Furthermore, the code makes use of the blocking feature of the reqwest library, which is not on by default. Using crates.io you can find the libraries (error-chain, reqwest) and details on how to configure them. This ends up being what you need:
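The dependency entries end up looking like this (versions were current at the time; check crates.io for what’s latest):

```toml
[dependencies]
error-chain = "0.12"
reqwest = { version = "0.11", features = ["blocking", "json"] }
```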

At this point I had two samples of code: one a single-threaded web server, and a second which could read from a URL (the thermostat) and parse out the JSON response. A little bit of hacking to figure out return syntax in Rust, and I managed to smash these together into a basic working exporter.

The existing exporter runs as a container on my server, so all that remains is to wrap this Rust code up in a container and I’ve got a complete replacement.

Looking at the binary I’d been running with cargo run, I was a little surprised to see it was 60MB, though I was quick to rationalize that this was in the debug tree. Compiling a release version (cargo build --release) resulted in a much smaller binary (8.5MB). This size reduction sent me off to see how small a container I could easily create.

Two things I wanted: a) a multi-stage build Dockerfile, and b) a static binary that will run in a scratch image. Luckily this is well-travelled ground and is concisely explained in this blog post on creating small Rust containers.
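The Dockerfile from that approach is roughly this shape (the binary name is a placeholder):

```dockerfile
# Build a fully static binary against musl.
FROM rust:latest AS builder
RUN rustup target add x86_64-unknown-linux-musl
WORKDIR /app
COPY . .
RUN cargo build --release --target x86_64-unknown-linux-musl

# Ship only the binary, running as an unprivileged user.
FROM scratch
COPY --from=builder /app/target/x86_64-unknown-linux-musl/release/therm-exporter /therm-exporter
USER 1000
ENTRYPOINT ["/therm-exporter"]
```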

The result was a 12MB docker image that only contains the one binary that is my code.

The previous Python-based implementation, built on an Alpine base image, was 126MB in size – so a 10x reduction in size. Plus, this new image runs without privilege (i.e. as a user), which gives the container a very small attack surface. It appears to use 0.01% CPU vs. the previous 0.02%. You can check the complete code out on GitHub.

My Pi-hole is showing me that it is still one of the higher names resolved, with 11,520 lookups in a 24-hour period. That maps out to 24 hours * 60 minutes * 4 scrapes per minute (every 15 seconds) * 2 lookups (A and AAAA). I’ve cut the DNS lookups roughly in half, but it still feels like I can improve this further with some clever coding.