LIRC vs ir-keytable

Related to my recent IoT hacking, what started me down this path is the long term annoyance of my X10 lighting being unreliable. X10 has always been problematic due to its use of power line communication, and this has gotten worse as we add more and more noisy electronic devices that inject additional interference onto the house wiring.

With the X10 light switch I had an IR-543, which mapped IR (infrared) commands to X10. The rest of my home theater gear is all IR controlled, so a single remote could control everything, including the lights. Another nice feature of the X10 light switch was soft on / soft off, meaning that when you turned the lights off they would dim down to off, and the same in reverse when turning them on. At the start of a movie this is pretty nice.

Of course, with a wifi enabled light switch, how do I get IR control? This seemed like a good reason to DIY a solution and build an IR controller / repeater based on a Raspberry Pi. I found that it’s relatively easy to control Tasmota devices with curl, so I was able to turn the lights on or off from a simple program. I was pleased to discover that the new light switch also has the soft on / soft off behaviour.
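As a minimal sketch of that curl control (the hostname is a placeholder for your device’s address; Tasmota exposes commands over HTTP via its /cm endpoint):

    # Toggle the light via Tasmota's HTTP command interface
    curl "http://tasmota-switch.local/cm?cmnd=Power%20On"
    curl "http://tasmota-switch.local/cm?cmnd=Power%20Off"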

To build an IR device on Linux, I first thought of LIRC as I’ve used it in the past. As I dug deeper, it became clear the LIRC project is quite dormant, and I was fighting with a lot of stale tooling. I did get something working with the various remotes I wanted to use, but it felt like a lot of work. Then a friend mentioned ir-keytable to me, which led me to the more modern Linux IR control stack.

The short version of the story is that the ir-keytable support is in a similar state to the LIRC work. I believe this boils down to the fact that IR control is still very niche, and there are lots of hardware variables due to the many different remote controls out there. If you want to do something simple, receiving IR input to control a Linux machine, then ir-keytable is the way to go. More complex situations may require LIRC. Both approaches have their challenges, but ir-keytable is the more modern solution.

The rest of this article will be about getting ir-keytable going on Raspberry Pi OS with a TSOP4838 IR receiver. For my application I have a more complex set of requirements, so I’ll be continuing with an LIRC based solution, but more on that another time.
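As a preview, the basic recipe is small. A sketch, assuming the TSOP4838 data pin is wired to GPIO 18 (the gpio-ir overlay’s default):

    # /boot/config.txt - enable the kernel's gpio-ir receiver
    dtoverlay=gpio-ir,gpio_pin=18

    # after a reboot, list the rc device, then enable the NEC decoder
    # and dump decoded scancodes as buttons are pressed
    sudo ir-keytable
    sudo ir-keytable -p nec -t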


Wireguard – self hosted VPN

After my recent adventures setting up IoT devices with local only access, I now sometimes need to talk to those devices when I’m not home. There are plenty of solutions, including setting up SSH tunnels, which I’ve done in the past. Wireguard seems like a nice solution, and it was high time I had VPN access to my home network.

The folks have a nicely curated wireguard container with documentation. There are also plenty of good tutorials on installing wireguard. You can even go deeper and build your own, or explore alternatives.

Here is a makefile – based on my template for docker makefiles.
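A minimal sketch of the shape that makefile takes, assuming the popular linuxserver wireguard image (the image name, config path, timezone, and peer count are all placeholders to adjust):

    # sketch only - values below are assumptions, not the original makefile
    NAME   = wireguard
    IMAGE  = lscr.io/linuxserver/wireguard:latest
    CONFIG = $(PWD)/config

    .PHONY: run stop update

    run:
        docker run -d --name $(NAME) \
            --cap-add NET_ADMIN \
            -e PUID=1000 -e PGID=1000 -e TZ=Etc/UTC \
            -e PEERS=2 \
            -p 51820:51820/udp \
            -v $(CONFIG):/config \
            --restart unless-stopped \
            $(IMAGE)

    stop:
        -docker stop $(NAME)
        -docker rm $(NAME)

    update: stop
        docker pull $(IMAGE)
        $(MAKE) run

With this image, the PEERS variable controls how many client configs (and matching QR codes) get generated.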

Once you create this, go pull the .png files for the QR codes from the config directory. This will make it trivial to set up your phone.

On mobile data, this just works. I can now control the local only Tasmota devices when I’m away from home, and it’s super easy. What doesn’t work with this setup is accessing other docker containers on the same host as the wireguard container.

I explored a few options to solve this, but it boils down to containers not easily being able to see each other. This bugs me: while I can appreciate the security of containers being isolated from each other, if I expose a port on the host to a container, then other containers should be able to reach that same port, but they can’t. This means that containers actually have less visibility into the host than an external machine does, and that seems wrong.

You can solve the network visibility problem by giving the container a unique IP address. Here is a brief recap of creating a macvlan docker network; the details can be found in my previous post on this topic.
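A sketch of the network creation (the subnet, gateway, IP range, parent interface, and reserved host address are examples; match them to your LAN):

    # carve out a macvlan network named myNewNet on the LAN
    docker network create -d macvlan \
      --subnet=192.168.1.0/24 \
      --gateway=192.168.1.1 \
      --ip-range=192.168.1.192/27 \
      --aux-address="myhost=192.168.1.223" \
      -o parent=eth0 \
      myNewNet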

Now, from the makefile above, all we need to do is add --network myNewNet to the docker flags, update the container, and we’re good to go.

It’s interesting that the docker ps command shows less about the container when it is run in this mode: no port information is listed, though the ports are still reachable. With macvlan there is no host port mapping for docker to report, since the ports live directly on the container’s own IP.

One thing to keep in mind: if you first set up the container on the docker host without macvlan, you may need to adjust your port mapping to account for the new IP.

If I want the docker host machine itself to be able to see this container on the new IP, we need to use that --aux-address to build a network path. This is optional, but useful, so it’s worth doing.

The version of Ubuntu I’m using doesn’t ship with rc.local enabled. I started down the path of enabling rc.local, but the further I got, the more it seemed this was the wrong answer. This post talking about rc.local pointed me at cron’s ability to execute commands on reboot. The cron @reboot capability seems like the easy path here; the other choice is to create a systemd service, which is effectively what the rc.local solution is.

Let’s create a script in /usr/local/bin/macvlansetup, making sure it’s executable.
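A sketch of such a script, assuming the addresses reserved in the macvlan network above (the shim interface name macvlan0 and parent eth0 are examples):

    #!/bin/sh
    # Create a host-side macvlan shim interface; the host cannot talk to
    # macvlan containers directly over the parent interface, so we give it
    # its own leg on the macvlan network.
    ip link add macvlan0 link eth0 type macvlan mode bridge
    # Use the host address reserved earlier with --aux-address
    ip addr add 192.168.1.223/32 dev macvlan0
    ip link set macvlan0 up
    # Route the container IP range through the shim
    ip route add 192.168.1.192/27 dev macvlan0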

Then we’ll edit root’s crontab to call this on reboot:
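    sudo crontab -e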

Adding the new job:
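    # run the macvlan setup script once at every boot
    @reboot /usr/local/bin/macvlansetup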

Now we’re set. The wireguard container has a unique IP address and no visibility problems with any of my other containers on the same host. The IoT devices can also be seen just fine when I’m remote with the VPN enabled. The one trade-off is a slightly more complicated networking setup.

With the default wireguard settings, this acts like a full tunnel VPN, meaning all of the network traffic runs over the tunnel. This is useful as a security measure if I’m on an untrusted wifi network: all the traffic flows securely from my device to my home network and then back out again to the internet. In my case, with my pi-hole configured as the DNS server, I also get ad-blocking over the VPN.
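That full tunnel behaviour comes from the AllowedIPs setting in the peer config. A sketch of a phone’s config (addresses, keys, and endpoint are placeholders):

    [Interface]
    PrivateKey = <phone private key>
    Address = 10.13.13.2/32
    # point DNS at the pi-hole to get ad-blocking over the tunnel
    DNS = 192.168.1.53

    [Peer]
    PublicKey = <server public key>
    Endpoint = vpn.example.com:51820
    # 0.0.0.0/0 routes ALL traffic through the tunnel (full tunnel);
    # listing only the home subnets would make it a split tunnel instead
    AllowedIPs = 0.0.0.0/0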

Ubuntu adding a 2nd data drive as a mirror (RAID1)

Over the years I’ve had the expected number of hard drive failures. Some have been more catastrophic for me because I didn’t have a good backup strategy in place; others felt avoidable if I’d paid attention to the warning signs.

My current setup for data duplication is based on Snapraid, a non-traditional RAID solution. It allows mixed sizes of drives, and the replication is done by regularly running the sync operation. Mine runs daily: files are synced across the drives, and a data validation pass is done from time to time as well. This means that while I might lose up to 24hrs of data if the primary drive fails, I get lower usage of the main parity drive and the assurance that file corruption hasn’t happened.

Snapraid is very bad when you have either many small files or frequently changing files. It is ideal for backing up media like photos or movies. To deal with the more rapidly changing data I’ve got an SSD for storage. I haven’t yet had an SSD fail on me, but that is assured to happen at some point, and Backblaze is already seeing some concerning failure rate data. Couple this with the fact that my storage SSD started throwing errors the other day and only a full power cycle of the machine brought it back. It’s fine now, but for how long? Time to set up a mirror.

For this storage I’m going back to traditional RAID. The SSD is a 480GB drive, and thankfully the price of these has dropped to easily under $70. This additional drive now fills all 6 of the SATA ports on my motherboard; the next upgrade will need to be a SATA port expansion card. I’ve written about RAID a few times here.

I’ve moved away from specifying drives as /dev/sdXn because these values can change. Even adding this new SSD caused the drive that was at /dev/sdf to move to /dev/sdg, allowing the new drive to use /dev/sdf. My /etc/fstab is now set up using /dev/disk/by-id/xxx because these paths are persistent. Most of the disk utilities understand this format just fine, as you can see with this example with fdisk.
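For example (the drive id below is a placeholder; list yours with ls /dev/disk/by-id):

    # by-id paths work anywhere a /dev/sdX device name would
    sudo fdisk -l /dev/disk/by-id/ata-Samsung_SSD_860_EVO_500GB_S3Z8NB0K000000X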

Granted, working with /dev/disk/by-id is a lot more verbose – but that id will not change if you re-organize the SATA cables.

Let’s get going on setting up the new drive as a mirror for the existing one. Here’s the basic set of steps:

  1. Partition the new drive so it is identical to the existing one
  2. Create a RAID1 array in degraded state
  3. Format and mount the array
  4. Copy the data from the existing drive to the new array
  5. Un-mount both the array and the original drive
  6. Mount the array where the original drive was mounted
  7. Make sure things are good – the next step is destructive
  8. Add the original drive to the degraded RAID1 array making it whole

It may seem like a lot of steps, and some of them are scary, but on the other side we’ll have a software RAID protecting the data. The remainder of this post will be the details of those steps; a condensed sketch of the whole sequence follows.
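As that condensed sketch (device ids, mount points, and the ext4 filesystem are assumptions; /dev/md0 is an example array name):

    # 1. copy the existing drive's partition table to the new drive
    sudo sfdisk -d /dev/disk/by-id/ata-OLD_SSD | sudo sfdisk /dev/disk/by-id/ata-NEW_SSD

    # 2. create a RAID1 array in degraded state (second member "missing")
    sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 \
        /dev/disk/by-id/ata-NEW_SSD-part1 missing

    # 3. format and mount the array
    sudo mkfs.ext4 /dev/md0
    sudo mkdir -p /mnt/md0 && sudo mount /dev/md0 /mnt/md0

    # 4. copy the data from the existing drive (mounted at /mnt/ssd here)
    sudo rsync -aHAX /mnt/ssd/ /mnt/md0/

    # 5-6. unmount both, then mount the array where the original drive was
    sudo umount /mnt/md0 /mnt/ssd
    sudo mount /dev/md0 /mnt/ssd

    # 7. verify the data looks right before the destructive step

    # 8. add the original drive's partition, making the mirror whole
    sudo mdadm --add /dev/md0 /dev/disk/by-id/ata-OLD_SSD-part1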
