Running Selenium testing in a single Docker container

Selenium is a pretty neat bit of kit: a framework that makes it easy to automate a browser for testing and other web-scraping activities. Unfortunately, it seems there is a dependency mess just to get going, and when I hit these types of problems I turn to Docker to contain the mess.

While there are a number of “Selenium + Docker” posts out there, many use more complex multi-container setups. I wanted a very simple single container with Chrome + Selenium + my code, able to go grab something off the web. This article is close, but doesn’t work out of the box due to various software updates. This blog post will cover the changes needed.

First up is the Dockerfile.
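
Here is a sketch of the shape it takes – the base image, the pinned chromedriver version, and the download URL are illustrative choices, so check the Chrome for Testing availability dashboard for current version numbers.

# Chrome + chromedriver + Selenium in one container (versions are illustrative)
FROM python:3.11-slim

# Google Chrome from Google's apt repository
RUN apt-get update && apt-get install -y wget gnupg unzip \
    && wget -q -O - https://dl.google.com/linux/linux_signing_key.pub \
       | gpg --dearmor > /usr/share/keyrings/google.gpg \
    && echo "deb [signed-by=/usr/share/keyrings/google.gpg] https://dl.google.com/linux/chrome/deb/ stable main" \
       > /etc/apt/sources.list.d/google-chrome.list \
    && apt-get update && apt-get install -y google-chrome-stable \
    && rm -rf /var/lib/apt/lists/*

# Since Chrome 115, chromedriver comes from the "Chrome for Testing" buckets,
# and the zip nests the binary inside a chromedriver-linux64/ directory.
ARG CHROMEDRIVER_VERSION=120.0.6099.109
RUN wget -q https://storage.googleapis.com/chrome-for-testing-public/${CHROMEDRIVER_VERSION}/linux64/chromedriver-linux64.zip \
    && unzip chromedriver-linux64.zip \
    && mv chromedriver-linux64/chromedriver /usr/local/bin/chromedriver \
    && rm -rf chromedriver-linux64 chromedriver-linux64.zip

# Latest Selenium from PyPI
RUN pip install --no-cache-dir selenium

COPY tests.py /tests.py
CMD ["python", "/tests.py"]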

The changes needed from the original article are minor. Since Chrome 115, chromedriver has changed download locations, and the zip file layout is slightly different. I also updated it to pull the latest version of Selenium.

ChromeDriver is a standalone server that implements the W3C WebDriver standard. This is what Selenium will use to control the Chrome browser.

The second part is the Python script, tests.py.
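
A sketch of what it can look like – the target URL is just an example, and the chromedriver path matches the Dockerfile above:

# tests.py - minimal 'hello world' Selenium run inside the container
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.chrome.service import Service

options = Options()
options.add_argument("--headless=new")           # no display inside the container
options.add_argument("--no-sandbox")             # Chrome runs as root in Docker
options.add_argument("--disable-dev-shm-usage")  # /dev/shm is small in containers

# newer Selenium versions take the chromedriver path via a Service object
driver = webdriver.Chrome(service=Service("/usr/local/bin/chromedriver"), options=options)

driver.get("https://example.com")
print(driver.title)

driver.quit()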

Again, only minor changes here to account for changes in the Selenium APIs. This script does include the key ‘tricks’ needed to get Chrome running inside Docker (a few extra arguments passed to Chrome).

This is a very basic ‘hello world’ style test case, but it’s a starting point for writing a more complicated web scraper.

Building is as simple as:
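
# build from the directory containing the Dockerfile and tests.py
# (the image tag is just my choice of name)
docker build -t selenium-test .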

And then we run it and get output on stdout:
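
# run the image built above; the page title lands on stdout
docker run --rm selenium-test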

Armed with this simple Docker container and the Python Selenium documentation, you can now scrape complex web pages with relative ease.

Update: Managing docker containers with makefiles

I’ve been a bit of an old school “just use docker” person rather than a “docker-compose” one, but the other day I learned that they’ve now baked “compose” into the base docker CLI. This means there is less reason to avoid using the features of docker compose.
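
In practice this mostly amounts to dropping the hyphen:

# the old standalone tool
docker-compose up -d

# the compose plugin that now ships with the docker CLI
docker compose up -d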

Still, as previously discussed, I like using Makefiles to manage my containers. There is a very good argument for adopting Watchtower, despite it not being recommended.

We do not endorse the use of Watchtower as a solution to automated updates of existing Docker containers. In fact we generally discourage automated updates.

Let me go on record: if you’re going to blindly run your update process anyway, you may as well just use Watchtower. Of course, this is a good way to automatically break things, so maybe don’t auto-update the things you rely on to keep working (like wireguard or your web server).

With that out of the way – let me point out one of the downsides to my previous Makefile approach to managing containers: I’d slowly run out of disk space as old images accumulated. I had gotten into the habit of running something along these lines
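
# remove stopped containers, unused networks, and (with -a) any image not
# used by at least one existing container
docker system prune -a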

every once in a while. This would get rid of all the containers, and images, that were no longer active. It did have the downside of removing images that weren’t currently in use by a container – so you need to be aware of that if you were counting on them staying around.

Still, my Makefile approach has saved me a few times, letting me easily re-name / re-run the -old version of a container and back out of a bad update.
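
The rollback itself is just a couple of docker commands (the container name here is a stand-in):

# the update went badly - put the previous container back in place
docker stop mycontainer
docker rename mycontainer mycontainer-broken
docker rename mycontainer-old mycontainer
docker start mycontainer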

What I needed to do was make sure that the old -old image was also removed – that was where most of the disk space growth was coming from. This turns out to be easier to do than to explain. Here is my update makefile.
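
In outline it has this shape – the container name, image, and run flags below are placeholders rather than my real values:

NAME  = mycontainer
IMAGE = lscr.io/linuxserver/nginx:latest

update:
	# new line 1: remember which image the existing -old container points at
	$(eval IMAGEID := $(shell docker inspect --format '{{.Image}}' $(NAME)-old 2>/dev/null))
	-docker stop $(NAME)-old
	-docker rm $(NAME)-old
	# new line 2: remove that now-orphaned image so disk usage stops growing
	-docker rmi $(IMAGEID)
	docker stop $(NAME)
	docker rename $(NAME) $(NAME)-old
	docker pull $(IMAGE)
	docker run -d --name $(NAME) --restart unless-stopped $(IMAGE)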

There are only two new lines. The first is
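
$(eval IMAGEID := $(shell docker inspect --format '{{.Image}}' $(NAME)-old 2>/dev/null))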

This assigns the variable IMAGEID the docker image id of the image behind the current -old container.

The second is the obvious removal of the image based on that id.
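
In the sketch above that is:

-docker rmi $(IMAGEID)

(The leading ‘-’ tells make to keep going if the image is already gone or still in use.)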

I believe that these two additional lines result in nearly no filesystem growth as I update images. Very handy, as some of the linuxserver.io images pull down ~1GB of data every time, and they are updated weekly.

Debugging your network

The other day there was something strange happening on my home network. While my Amazon Fire HD8 tablet was happily on the WiFi, my Pixel 4a was ‘bouncing’ between WiFi and mobile data. The animated image at the top of the post is a simulation of what I saw.

The first thing I did was reboot my phone. Maybe this was some weird snafu and a reboot would fix it. This is the classic “turn it off and on again”.

The problem persisted. At the time I thought this might just be my device, but I found out later that Jenn’s Pixel 6a was doing the same thing, as was an older Pixel 2. The 6a seemed happy enough delivering internet, but based on the mobile data usage for the day it was clear most of its traffic had gone over mobile data.

Next was a look at the OpenWRT router(s) – and I was a bit surprised that all of them had been up for 70+ days. I was headed out the door and decided to put off further debugging, since it seemed that the computers (and the 6a) were working OK (or so I thought). I confirmed that my phone itself was fine as I worked from the office, with solid WiFi all day.

After dinner, it was clear that there was still a problem – but only the Pixel phones seemed to be having it. I tried removing the WiFi connection from my phone and re-adding it – maybe that would help? Nope. What if I switch to the guest network? Whaaat? Solid connectivity?!

Now, the guest network doesn’t get the benefit of ad blocking from my pi-hole. Maybe there is something going on there? I start poking around the logs on the pi-hole, and reviewing logs on the OpenWRT router to see if there is any evidence. Nothing stands out. I try moving my phone to the IoT network – all three of the WiFi networks are hosted on the same hardware – so this helps eliminate the hardware and a good chunk of the software. The IoT network does make use of the pi-hole, and to my surprise my phone was happy on the WiFi when using the IoT network.

At this point I’m about an hour and a half into debugging this, and I’m starting to run out of ideas. There have not been any configuration changes to my network recently. I don’t believe any software updates have landed on my phone, and having 3 different Pixel devices all develop the same problem at the same time is really weird. I’ve rebooted both my networking devices and the mobile devices – still the problem persists.

I run with 3 access points (two dumb APs and a main gateway), and each of them advertises the same SSID on two WiFi channels (a total of 6 distinct WiFi channels, all with the same SSID). This is a great setup for me, as my devices just seamlessly move from connection to connection based on what is best.

My next idea is to change the SSID of one of the channels from ‘lan wifi’ to ‘hack wifi’ – allowing me to connect to a specific radio on a specific access point. Now I can connect my phone to this new ‘hack wifi’ and know that configuration changes to it will only affect that one device. Unsurprisingly the behaviour is the same: my phone just keeps connecting / reconnecting to this ‘hack wifi’.

I dive into some of the OpenWRT settings, looking for something that will make this WiFi connection more resilient. There are lots of options, but ultimately this is a dead end. I then wonder what would happen if I modify this WiFi connection to route to the ‘iot’ network instead of my ‘lan’ network. Now my connection to ‘hack wifi’ works great… hmm.

What does this tell me? It seems to be something weird about my ‘lan’ network itself, rather than something specific about the way my ‘lan wifi’ is configured. This pivots me away from looking for differences in the WiFi configuration of my IoT, guest, and lan networks, and toward looking more specifically at what’s connected to the lan network.

I grab the list of all devices on the lan network. I eliminate all of the WiFi clients in that list because it’s probably not them (but I’m guessing). Let’s take a closer look at the wired things (of which I have a good number). I start unplugging things – no joy. I turn off my main wired switch – still nothing. Finally I try unplugging my old server.

Boom. That was it. Almost immediately my phone connects to the WiFi network and stays connected. A quick check, and all of the other Pixel phones are happy now too.

This reminds me of a problem I had years ago, when my network was smaller and a little less complicated. A network cable I had built turned out to be bad, but only after months of use. Suddenly one day my whole network was misbehaving and all devices were having problems. Powering off the machine had no effect; it was only once I removed the network cable that things came back to life. This situation was similar: the bad machine was connected to the main wired switch that I had powered off, but that wasn’t enough to fix the issue. Removing the cable was the fix.

Throughout this problem, computers on the ‘lan wifi’ seemed fine. Video calls worked fine, with no strange drops or slowdowns. Still, the impact on the Pixel phones was extreme.

The old server is very old (built in 2009); all of the drives in it are starting to fail, and I should probably just power it off, wipe the drives, and dispose of the hardware. I just haven’t quite gotten to it yet – this is probably a sign that I should.

Edit – Oct 6th

Oh no, it happened again. I haven’t plugged back in the machine (or the cable) that I determined was the issue last time – so what is triggering it now?

This time I went straight to the 24-port switch. Instead of powering it off and on (which I did last time, to no effect), I removed it from the network by disconnecting the network cable between it and my OpenWRT router.

My phone immediately stopped bouncing between WiFi and LTE, and just sat on LTE. It was then that I noticed that (of course) my pi-hole, which runs on a Raspberry Pi 4, is connected to that wired switch.

I plugged the switch back into the network, and my phone happily rejoined. The problem was gone, and has stayed gone since then. No reboots of any equipment were needed, only a brief drop from the network for the 24-port switch. Very mysterious.

Edit – Oct 23rd

Ugh, again. This time I tried to be more surgical. I unplugged the pi-hole machine from the network and waited 30 seconds. Plugging it back in might have fixed things? (Not a power cycle, just a network drop?) Or maybe it fixed itself? Grr.

Edit – Nov 2nd

Yup, again. I woke up and noticed my phone was bouncing. Unplug the pi-hole from the network, wait, plug it back in… magic – all fixed.

My ISP recently started giving me IPv6 (again) — I’m wondering if there is something funny going on there. This is still odd.

Edit – Dec 13th

And again (and I possibly missed recording one occurrence since the last one as well). It seems clear that disconnecting the network cable from the Pi, waiting a few (10) seconds, and then reconnecting it is all I need to do to resolve the problem. I have changed (and tested) the network cable, and changed the port it is connected to on the 24-port switch.

Final Edit

I’ve failed to update this post a few times when I’ve had minor recurrences – but based on the history above you can see it continued. I’ve now moved the RPi4 (pi-hole) from the 24-port switch to a direct connection to my OpenWRT router (an Archer C7). Knock on wood, so far I haven’t seen the problem since.

I lied – one more edit!

The mysterious WiFi-cycling happened again, but just to a Pixel 2 device. It kept connecting, flashing that there was no internet, and then going through the cycle all over again. I run multiple WiFi networks (each with its own VLAN), and I could get the Pixel 2 to stay on the guest network no problem, but not on the main lan network. This got me thinking – OK, it is something to do with my pi-hole. However, I’m not seeing anything in the pi-hole logs.

The light bulb finally went on. What if the Pixel 2 is using the IPv6 address of the pi-hole to connect? Oh look – the IPv6 address my DHCP server is handing out is not the IPv6 address of my pi-hole!?
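
Checking this is quick – compare what the Pi actually has against what the router is configured to hand out (the interface and section names here are the defaults; yours may differ):

# on the pi-hole: its actual global IPv6 address(es)
ip -6 addr show dev eth0 scope global

# on the OpenWRT router: the lan DHCP/RA settings, including any announced DNS
uci show dhcp.lan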

My OpenWRT configuration – which hands out the IPv6 addresses and assigns the pi-hole its static addresses – had a configuration error. I had given the pi-hole an IPv6 ‘suffix’ of 8, and it needed to be corrected to 08 to work.
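
If you manage this through UCI rather than LuCI, it’s the IPv6 suffix (‘hostid’) on the static lease in /etc/config/dhcp – roughly like this, with placeholder MAC and IPv4 values:

config host
	option name 'pihole'
	option mac 'aa:bb:cc:dd:ee:ff'
	option ip '192.168.1.53'
	# LuCI labels this "IPv6-Suffix (hex)" - the value I had set to '8'
	option hostid '08'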

One more reboot, and my pi-hole now has the correct IPv6 address – and the Pixel 2 is happily connected to the WiFi. This feels like the true root cause of the problem. It also explains why rebooting things changed the timing: the devices having problems would either pick the IPv4 address, or fail over to it when they couldn’t reach the (wrong) IPv6 address. The Pixel 2 obviously has less fallback logic in this space, and that helped me find the problem.