Using Docker to isolate development environments

If I had to pick a modern programming language, golang would be a great choice. To me, it feels a little bit like a modernized C and has a nice approach to building single-file binaries. It has a module system, type safety and garbage collection.

I really dislike the way you install golang onto your system and then end up managing that install with environment variables and the like. It’s not a terrible install story, but I don’t like polluting my development laptop with cruft. Managing multiple versions makes this even more annoying.

Docker on the other hand, despite its flaws, is worth having installed. It allows you to run lots of different stuff without needing to make a big commitment to the install. So instead of re-imaging my machine to start fresh, I just purge the containers and start again. Another benefit is that it’s relatively easy for me to point someone else at my configuration and for them to re-use it nearly directly.

Getting a docker image that will persist state and make it trivial to compile golang programs turns out to be very easy.
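Something like the following – the container name, image tag and shell are whatever you prefer:

    # create a named container, mounting the current directory at /data
    docker create -it --name golang -v "$(pwd)":/data golang:bullseye bash

    # start it and attach
    docker start -ai golang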

A couple of things to note. I’m using a host-mounted volume – specifically, the current directory in which I issue the docker create – and inside the container it is mapped to /data. I’ve also named the container, making it easy for me to re-run/attach to it for future compiles.

Edit – running with -u is a good idea to make docker run as the right user (you). This will mean that files created by that container on the mounted host volume are owned by you.
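That looks something like this:

    # run as your own UID/GID inside the container
    docker create -it --name golang \
      -u "$(id -u):$(id -g)" \
      -v "$(pwd)":/data \
      golang:bullseye bash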

As an example, here is how I’d go about compiling a github project that is in golang.
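Something along these lines – the repository here is just a placeholder, any golang project will do:

    # on the host, in the directory that is mounted as /data
    git clone https://github.com/example/project.git
    docker start golang
    docker exec -it golang bash -c 'cd /data/project && go build ./...'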

How slick is that? My development machine needs docker and git installed. The rest of this environment is entirely inside the docker container.

Let me now demonstrate persistence of the container from run to run.
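Roughly like so:

    # install a utility inside the container (as root)
    docker exec -u 0 -it golang bash -c 'apt-get update && apt-get install -y tree'

    # stop and start the container
    docker stop golang
    docker start golang

    # the utility is still there
    docker exec -it golang which tree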

Thus if I happen to need a utility which isn’t installed in the base golang bullseye image, it’s easy for me to install. Also, from run to run – I have persistence of the changes I’ve made to my named container.

Ripping CDs to FLAC on Linux

This weekend I was messing around with ripping CDs again. Almost my entire collection has been digital for years, but all in MP3 format. Way back when, ripping a CD to MP3 was a trade-off between encoding time, playback quality and file size, and I’d made some choices about what I was going to use as my standard for digitizing my collection.

I arrived at 192 kbps fixed-bitrate MP3, encoded with the LAME encoder for the most part. To arrive at this bitrate I’d done a lot of A/B sampling between songs – with my best attempt at a blind listening comparison to see which ones I could tell apart. Past about 160 kbps I couldn’t hear any significant differences, and certainly past 192 kbps it was all the same. If you want to learn more about bitrates and encoding formats, this seems like a good article.

Since then – computers have gotten stupidly faster, so encoding time doesn’t matter. Storage is also cheap and plentiful, so I don’t care about file sizes. My main media server is Plex, which will happily transcode FLAC to MP3 for me when I need it. There is also the mp3fs userspace filesystem that I can use to map FLAC to MP3 when needed. I’d arrived at the conclusion a couple of years ago that my archive format should be FLAC, but I’d failed to get Linux set up to rip successfully.

On Windows there has been EAC, which is basically the default way people who care rip their music. It ensures ‘perfect’ copies. I found whipper, which seemed to provide a similar solution on Linux, but a couple of years ago I failed to get it to work with either of the optical drives installed in my box. I could get every track but the last one on the disc – very frustrating.

In my recent revisit to this, I started with abcde. This resulted in a simple docker container that I could run to rip a disc to FLAC without much fuss. It worked well, but then I re-discovered my notes on using whipper and figured I’d see if the project had progressed.

It had – and whipper works great with one of my optical drives, but not the other. That’s fine, as one is enough. My working drive is an ASUS DRW-1814BL, and so far no problems except for one disc. The one problematic CD was one that I’d failed to rip in the past as well; it has a little bit of physical damage, and whipper would bail while dealing with the index.

It turns out my abcde setup worked great on this bad CD and was able to rip it to FLAC. I have less confidence that the abcde process is as robust and exact as whipper’s – but I’d rather have the music ripped than not at all.

For abcde I have a Makefile
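It’s short – something like this, with the image name, device path and output directory being my local choices:

    build:
    	docker build -t abcde-flac .

    run:
    	docker run -it --rm \
    		--device /dev/sr0 \
    		-v $(PWD)/rips:/rips \
    		abcde-flac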

And a Dockerfile
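Also just a few lines – a sketch, with an approximate package list:

    FROM debian:bullseye
    RUN apt-get update && \
        apt-get install -y abcde flac cdparanoia cd-discid && \
        rm -rf /var/lib/apt/lists/*
    WORKDIR /rips
    # rip the inserted disc to FLAC, non-interactively
    ENTRYPOINT ["abcde", "-o", "flac", "-N"]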

You’ll want to customize it to point at the right device, but there isn’t much here. Run make build, then make run, and you’re good to go.

For whipper, it’s even easier. I just used a shell script to call the pre-built container.
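The script is more or less the docker run line from the whipper README – the device path and output directory are my choices:

    #!/bin/bash
    docker run -it --rm \
      --device=/dev/sr0 \
      -v "$HOME/.config/whipper:/home/worker/.config/whipper" \
      -v "$PWD/output:/output" \
      whipper/whipper cd rip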

There is a little bit of setup for whipper; best to follow the documentation. Briefly, you need to analyze the drive and figure out the offset. I also tweaked the generated config file to change the directory and file naming scheme.
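The one-time setup amounts to a couple of commands, run by swapping the command at the end of that same docker run line:

    whipper drive analyze   # check whether the drive defeats the audio cache
    whipper offset find     # determine the drive’s read offset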

That’s it. Ripping your CDs with high quality is trivial now, and with nicely featured media servers the format doesn’t matter. Now I just have to slowly re-rip CDs that I care about having high quality archives of.


Wireguard – self-hosted VPN

After my recent adventures setting up IoT devices with local only access, I now needed to sometimes be able to talk to those devices when I’m not home. There are plenty of solutions, including setting up SSH tunnels which I’ve done in the past. Wireguard seems like a nice solution and it was high time I had VPN access to my home network.

The linuxserver.io folks have a nicely curated wireguard container with documentation. There are also plenty of good tutorials on installing wireguard. You can even go deeper and build your own, or explore alternatives.

Here is a makefile – based on my template for docker makefiles.
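The interesting part is the run target – essentially the docker run from the linuxserver.io docs, with the PUID/PGID, timezone and peer count being my choices:

    run:
    	docker run -d \
    		--name=wireguard \
    		--cap-add=NET_ADMIN \
    		--cap-add=SYS_MODULE \
    		-e PUID=1000 -e PGID=1000 \
    		-e TZ=America/Toronto \
    		-e SERVERURL=auto \
    		-e PEERS=3 \
    		-p 51820:51820/udp \
    		-v $(PWD)/config:/config \
    		-v /lib/modules:/lib/modules \
    		--sysctl="net.ipv4.conf.all.src_valid_mark=1" \
    		--restart unless-stopped \
    		lscr.io/linuxserver/wireguard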

Once you create this – go pull the .png files for the QR codes from the config directory. This will make it trivial to set up your phone.

On mobile data – this just works. I can now control the local-only Tasmota devices when I’m away from home, and it’s super easy. What doesn’t work with this setup is accessing other docker containers on the same host as the wireguard container.

I explored a few options to solve this, but it boils down to the problem of containers not easily being able to see each other. This bugs me: while I can appreciate the security of containers being isolated from each other, if I expose a port on the host for a container, then other containers should be able to see that same port – but they can’t. This means that containers actually have less visibility into the host than an external machine, which seems wrong.

You can solve the network visibility problem by giving the container a unique IP address. Here is a brief recap of creating a macvlan docker network – details can be found in my previous post on this topic.
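In short, something like this – the parent interface and addresses are examples matching my LAN:

    docker network create -d macvlan \
      -o parent=eth0 \
      --subnet=192.168.1.0/24 \
      --gateway=192.168.1.1 \
      --ip-range=192.168.1.200/29 \
      --aux-address="myhost=192.168.1.207" \
      myNewNet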

Now from the makefile above, all we need to do is add --network myNewNet to the docker flags and update the container and we’re good to go.

It’s interesting that the docker ps command shows less about the container when it is run in this mode (no port information – but yes, the ports are exposed).

One thing to keep in mind: if you first set up the container on the docker host without macvlan, you may need to adjust your port mapping to account for the new IP.

If I want the docker host machine itself to be able to see this container on the new IP, I need to use that --aux-address to build a network path. This is optional, but useful enough to be worth doing.

The version of Ubuntu I’m using doesn’t ship with rc.local enabled. I started down the path of enabling rc.local, but the further I got, the more it seemed this was the wrong answer. This post talking about rc.local pointed me at cron’s ability to execute commands on reboot. The cron @reboot capability seems like the easy path here, the other choice being to create a systemd service – which is effectively what the rc.local solution is.

Let’s create a script in /usr/local/bin/macvlansetup, making sure it’s executable.
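A minimal version, reusing the aux-address reserved above (interface names and addresses are again examples):

    #!/bin/bash
    # create a macvlan shim interface so the host can reach
    # containers on the macvlan network
    ip link add mynet-shim link eth0 type macvlan mode bridge
    # give the shim the reserved aux-address
    ip addr add 192.168.1.207/32 dev mynet-shim
    ip link set mynet-shim up
    # route the container ip-range via the shim
    ip route add 192.168.1.200/29 dev mynet-shim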

Then we’ll edit root’s crontab to call this on reboot
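    sudo crontab -e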

Adding the new job
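    @reboot /usr/local/bin/macvlansetup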

Now we’re set. The wireguard container has a unique IP address and no visibility problems reaching any of my other containers on the same host. The IoT devices can also be seen just fine when I’m remote and the VPN is enabled. The one trade-off is a slightly more complicated networking setup.

With the default wireguard settings, this acts like a full tunnel VPN – meaning all of the network traffic runs over the tunnel. This is useful as a security measure if I’m on an untrusted wifi network – all the traffic will flow securely from my device to my home network then back out again to the internet. In my case with my pi-hole configured as the DNS server, I get ad-blocking over the VPN.