Hacking an old HP Chromebook 11 G5

When it was time to get one of the kids a Chromebook for school years ago, I made sure to purchase a 4GB memory model with an Intel chip. I’m a fan of ARM devices, but at the time (6+ years ago) there was some real junk out there. There was also a price factor – I was looking at the lower end of the market – and durability was a concern too.

I remember the Dell Chromebook 11″ was a hot item back then, but the pricing was higher than I wanted. Same for the Lenovo Chromebooks. After a bunch of searching around I found a nice HP Chromebook 11 G5 (specs) – if my memory is correct I got this well under $300 at the time.

This HP 11 G5 worked well, survived a few drops, and made it until its end of life – the point when Google stops providing OS updates. I’ve since replaced it with a nice Lenovo IdeaPad Flex 5 Chromebook – a real step up, and there was a refurb model available for a great price (under $350).

For a long time there was Neverware CloudReady – a neat way to get ChromeOS onto old laptops. I always worried that there were security concerns with some random company offering ‘Google’ logins, but Neverware worked well. Google has since bought CloudReady, and seems to have turned around and created ChromeOS Flex as the successor.

I figured that I could use ChromeOS Flex on the HP 11 G5 to continue to get updates. Another option was to look at turning it into a GalliumOS machine. I actually have another old 14″ Chromebook that I ran GalliumOS on, but I’ve since moved it to Linux Mint and use it as a generic Linux laptop.

I would recommend reading through the GalliumOS wiki carefully to learn about the process of converting a Chromebook into a useful generic low-end laptop – specifically the Preparing section, plus a review of the Hardware Compatibility and Firmware sections. Inevitably you’ll also end up on MrChromeBox’s site, which is where you’ll get the firmware replacement you’ll need.

While you can in some cases get alternative firmware running with the hardware write protect still in place, it’s much easier if you remove it. There wasn’t a specific guide for doing this on this model, but the iFixit site was useful for the teardown aspect.

You will want to remove the black screw pointed at by the arrow. It’s near the keyboard ribbon cable connector. This is the hardware write protect.

Once I’d done this, it was simply a matter of installing the “UEFI (Full ROM)” firmware using the MrChromeBox scripts. This is not for the faint of heart, and I do recommend making a backup of the original firmware in case you want to go back.

At this point you can install any OS distribution you want. In my case I wanted to install ChromeOS Flex, so I’d downloaded it and created a bootable USB drive ready to roll. Installing it on my newly reflashed Chromebook was easy.

I then ran into trouble. While ChromeOS Flex started up fine, it was quickly clear that sound didn’t work. The video camera was working fine, but I couldn’t get any sound output or input. I found that others had hit this same issue. I even tried wired headphones (same problem) and Bluetooth headphones (sound out was fine, sound in didn’t work at all).

This is a bummer, but understandable – Chromebook hardware is not really the target for ChromeOS Flex. I figured it was worth trying a generic Linux distro, so I picked Linux Mint. Booting from a Mint USB drive was again easy with the new firmware. Sound output worked fine, as did the webcam video – but the mic was still a problem, again something others had discovered.

At this point ChromeOS Flex was a dead end. I can’t hand someone a Chromebook that has no working audio in or out and no reasonable workaround. Installing Linux won’t trivially solve the problem either, since I’d get sound out but still no mic.

Remember when I said it was a good idea to back up the original firmware? Yup, we’re returning this Chromebook to stock (though I’ll leave the write protect screw out – because why not?). The MrChromeBox FAQ walks you through restoring that firmware. Since I had Linux Mint on a bootable USB, I just used that to start up a shell and pull the script. Once I’d restored the stock firmware, I needed to create a ChromeOS recovery image and use it to return to a totally stock setup.

Now this old HP 11 G5 Chromebook has all of its features working – video, sound, mic – but is trapped on an expired version of ChromeOS. Eventually the browser will become annoyingly old, and at that point you’ll have to decide between the limitations of the browser or losing your mic (and possibly sound).

When rate limiting (and firewalling) goes wrong


Recently I experienced a few power failures that lasted hours. This means that when the power comes back, all of my infrastructure reboots and reconnects. For the most part this is 100% automatic, but the last time it happened I ran into an interesting problem.

My pi-hole was running with the default rate limit of 1000/60. This means each device can make up to 1000 DNS requests per minute, and if it exceeds that it is put on a deny list for 60 seconds.

It turns out that my main server, which runs a bunch of docker containers, makes a lot of DNS requests when everything is starting up all at once. This creates a storm of requests to the pi-hole, and the server ends up having its DNS requests blocked (answered with REFUSED) due to rate limiting.

Unfortunately, enough of the containers respond to this by retrying, which generates yet more DNS requests as the retry logic runs. These retries cause another wave of requests, which gets the server blocked again. Some of my containers entered error states due to the unexpected DNS failures and needed to be restarted later, but at least they stopped contributing to the problem.

My email container was particularly unhappy; it really wants to be able to use DNS, even when just receiving email. Since my server had been unavailable for a while, external email servers were trying to deliver mail that had been queued, which added to the load. Additionally, I couldn’t connect any email clients to the server, which left me scratching my head a little – more on that later.

The ‘fix’ was easy enough: modify the pi-hole DNS rate-limiting setting to 0/0, removing the limit entirely. This is imperfect, but at one point I saw 30,000 requests in a minute from my struggling server, and I’d rather have no limit and deal with that problem than hit the limit and run into this self-inflicted denial of service.
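On a stock install (FTL v5-era) that setting lives in /etc/pihole/pihole-FTL.conf – a sketch of the change, with the default shown commented out:

```
# /etc/pihole/pihole-FTL.conf
# Default: deny-list a client that makes more than 1000 queries in 60 seconds
#RATE_LIMIT=1000/60

# Disable rate limiting entirely
RATE_LIMIT=0/0
```

You’ll need to restart the FTL service (pihole restartdns) for the change to take effect.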

Now that the pi-hole was happy, a little poking got most of my containers happy too. Email was still sad, and it took me a coffee break to realize what was wrong. The email container was receiving email just fine, but I could not connect with a client. This felt like a networking problem – but how could that be?

I had forgotten (again) that the email server runs fail2ban, which scans logs looking for suspicious activity and bans an IP for a period of time by inserting a firewall rule. Furthermore, since I use the domain name to configure my email clients, the name resolves to the external IP. That means the client talks to my OpenWRT router, which provides NAT and redirects/maps that external IP back into my network. This has the effect that the originating IP looks like it is my router, not the client machine on the internal IP address. This process is called NAT reflection, or NAT hairpinning.

NAT reflection is a super handy feature for my OpenWRT router to have: from inside my home network I can easily visit a machine I’ve exposed to the outside world via port mapping, using the same DNS entry that points at the external IP address. But it means that services on that machine see my router IP as the client IP. So when any machine in my house had problems connecting to my email server – in this case because of the DNS REFUSED errors on the email server – fail2ban decided that was a bad client and banned it, thus banning all traffic originating from my home network.

This is easy to fix once you understand what is happening: I just needed to unban my router IP (`fail2ban-client set <jail> unbanip <router-IP>`, with the jail name depending on your setup) and my email clients could connect again.

Learning Rust by (re)writing a prometheus exporter

As part of my self-hosted infrastructure I’ve got Prometheus set up to gather metrics from various sources, and Grafana to visualize them. My TED5000 gives me power usage information for the whole house, and my thermostat provides temperature data.

When using Prometheus, you need exporters to provide metric data. There is an existing TED5000 exporter that I’m using, but I couldn’t find one for the thermostat – so I created one. The initial implementation was in Python, and it worked fine. However, my pi-hole dashboard showed lookups of the thermostat name high in the stats (22,000+ lookups per 24-hour period). Looking at the logs, every 15 seconds four lookups would happen: two pairs (an A and an AAAA) separated by 2 seconds. I suspect this was a side effect of the radiotherm library I was using in my code.

The exporter is very simple: it’s just a web server that responds to a scrape by making a web request to the thermostat, and replying with the data formatted for Prometheus. The response payload looks like:
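(The metric name below is an illustrative stand-in rather than the exporter’s actual one; the shape is the Prometheus plain-text exposition format – a # TYPE line plus a name/value line per metric.)

```
# TYPE thermostat_temp_fahrenheit gauge
thermostat_temp_fahrenheit 70.5
```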

I figured that this was a good opportunity for me to learn Rust with a practical project.

The first thing I did was get myself set up to compile Rust code. I did this using a docker container, as inspired by my previous post.

At this point I was able to write my first hello-world Rust code. I went from printing “hello” to following a tutorial to create a very simple web server – which will be sufficient for a Prometheus exporter.
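A minimal sketch of that kind of single-threaded server, using only the standard library. The hard-coded body and the self-connecting client thread are just there to keep the example self-contained and terminating; a real exporter would loop on accept and serve a fixed port:

```rust
use std::io::{Read, Write};
use std::net::{TcpListener, TcpStream};
use std::thread;

// Build a minimal HTTP response; the body here is a placeholder for
// the Prometheus-formatted metrics.
fn response_for(_request: &str) -> String {
    let body = "hello";
    format!(
        "HTTP/1.1 200 OK\r\nContent-Length: {}\r\n\r\n{}",
        body.len(),
        body
    )
}

fn main() -> std::io::Result<()> {
    // Port 0 lets the OS pick a free port, handy for a demo.
    let listener = TcpListener::bind("127.0.0.1:0")?;
    let addr = listener.local_addr()?;

    // Stand-in client so the example exercises itself and exits.
    let client = thread::spawn(move || -> std::io::Result<String> {
        let mut stream = TcpStream::connect(addr)?;
        stream.write_all(b"GET /metrics HTTP/1.1\r\n\r\n")?;
        let mut reply = String::new();
        stream.read_to_string(&mut reply)?;
        Ok(reply)
    });

    // Handle exactly one connection (a real server would loop here).
    let (mut stream, _) = listener.accept()?;
    let mut buf = [0u8; 1024];
    let n = stream.read(&mut buf)?;
    let request = String::from_utf8_lossy(&buf[..n]).to_string();
    stream.write_all(response_for(&request).as_bytes())?;
    drop(stream); // close the socket so the client's read finishes

    let reply = client.join().unwrap()?;
    println!("{}", reply.lines().next().unwrap_or(""));
    Ok(())
}
```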

I then started to learn about cargo and the crates.io site. It turns out there are a few Prometheus crates out there; I found one that looked like a good match, but after looking at it in more detail I decided it was a lot more code and capability than I was looking for. Considering again the very simple response payload above, I really need very little help from a library.

I located a tutorial on reading data from a URL in Rust. It was less complete than I’d have liked, assuming more knowledge of managing crates than I had at the time. To turn the code snippet into working code you need to add two lines to your Cargo.toml file; furthermore, the code makes use of the blocking feature of the reqwest library, which is not on by default. Using crates.io you can find the libraries (error-chain, reqwest) and details on how to configure them. This ends up being what you need:
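For reference, the Cargo.toml additions that made the snippet build for me looked like the following – the crate versions are whatever was current at the time, so check crates.io rather than copying these blindly:

```toml
[dependencies]
error-chain = "0.12"
reqwest = { version = "0.11", features = ["blocking"] }
```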

At this point I had two samples of code: a single-threaded web server, and a program that reads from a URL (the thermostat) and parses out the JSON response. A little bit of hacking to figure out return syntax in Rust, and I’d managed to smash these together into a basic working exporter.
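The glue ends up being small. A sketch of the two core pieces – note the JSON field extraction below is a hand-rolled stand-in so the example has no dependencies (the real code can lean on reqwest or a JSON crate), and the field and metric names are illustrative rather than the exporter’s actual ones:

```rust
// Pull a numeric field like "temp":70.50 out of the thermostat's JSON reply.
// Hand-rolled to keep the sketch dependency-free.
fn extract_temp(json: &str, key: &str) -> Option<f64> {
    let pat = format!("\"{}\":", key);
    let start = json.find(&pat)? + pat.len();
    let rest = &json[start..];
    let end = rest
        .find(|c: char| c != '.' && c != '-' && !c.is_ascii_digit())
        .unwrap_or(rest.len());
    rest[..end].parse().ok()
}

// Format one gauge in the Prometheus exposition format.
fn to_prometheus(name: &str, value: f64) -> String {
    format!("# TYPE {} gauge\n{} {}\n", name, name, value)
}

fn main() {
    // Stand-in for the body fetched from the thermostat (field names are
    // illustrative, modeled on a radiotherm-style /tstat response).
    let body = r#"{"temp":70.50,"tmode":1,"t_heat":68.00}"#;
    if let Some(t) = extract_temp(body, "temp") {
        print!("{}", to_prometheus("thermostat_temp_fahrenheit", t));
    }
}
```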

The existing exporter runs as a container on my server, so all that remains is to wrap this Rust code up in a container and I’ve got a complete replacement.

Looking at the binary I’d been running with cargo run, I was a little surprised to see it was 60MB – though that was in the debug tree. Compiling a release version (cargo build --release) resulted in a much smaller binary (8.5MB). This size reduction sent me off to see how small a container I could easily create.

I wanted two things: (a) a multi-stage build Dockerfile, and (b) a static binary that will run in a scratch image. Luckily this is well-travelled ground, concisely explained in this blog post on creating small Rust containers.
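A sketch of the shape of that Dockerfile, assuming the musl-based rust:alpine image (which produces statically linked binaries by default) and a binary named thermostat_exporter – both placeholders for whatever your project uses:

```dockerfile
# Stage 1: build a statically linked release binary against musl
FROM rust:alpine AS builder
RUN apk add --no-cache musl-dev
WORKDIR /build
COPY . .
RUN cargo build --release

# Stage 2: an empty image holding only the binary, running unprivileged
FROM scratch
COPY --from=builder /build/target/release/thermostat_exporter /thermostat_exporter
USER 1000
ENTRYPOINT ["/thermostat_exporter"]
```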

The result was a 12MB docker image that only contains the one binary that is my code.

The previous Python-based implementation, built on an Alpine base image, was 126MB in size – so a 10x reduction. Plus, this new image runs without privilege (i.e. as a user), which means this container has a very small attack surface. It appears to use 0.01% CPU vs. the previous 0.02%. You can check the complete code out on GitHub.

My pi-hole is showing that the thermostat is still one of the more frequently resolved names, with 11,520 lookups in a 24-hour period. That maps out to 24 hours × 60 minutes × 4 (one poll every 15 seconds) × 2 record types (A and AAAA). I’ve improved on the DNS lookups, but it still feels like I can further improve this with some clever coding.