Generating SSH key pairs

Despite some recent excitement, SSH continues to be both a utility and a protocol that I use heavily every day. I also have to shout out mosh, which is a must-have overlay – if you aren’t using it, stop reading this now and go get mosh.

Not often, but every once in a while, I find myself needing to generate a new key pair for use with SSH. GitHub has one of the best articles on doing this, but it’s not quite what I want. I find myself having to re-think the small differences I want each time – clearly it’s time to write up what I do so I can just visit this post when I need to generate a key.
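
The command itself is a one-liner. As a minimal sketch of what I run (the comment text is a made-up example, and basename is whatever output filename you want):

    ssh-keygen -t ed25519 -C "laptop-to-homelab" -f basename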

Yup, that’s it. In the directory where you run this, two files will be generated: the private key is basename, and the public key is basename.pub. I’m also a fan of the .ssh/config file, which you may want to adopt – it makes it easy to have different keys for different systems.
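
As a sketch of what I mean, a hypothetical ~/.ssh/config entry that pins a specific key to a specific host could look like this (the host and key path are examples, not my actual setup):

    Host github.com
        User git
        IdentityFile ~/.ssh/basename
        IdentitiesOnly yes

IdentitiesOnly stops ssh from offering every key your agent holds, which starts to matter once you have several keys.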

Breaking down the creation command: we are generating a key using the Ed25519 algorithm, which most modern systems support. Next we add a comment, which I find useful for identifying what the public key is for. Last is the base filename we want the output written to.

You’ll see that comments often have no whitespace in them; if you want to be risk averse, avoid spaces and use dashes or something similar.

OpenWRT on GL.iNet GL-MT6000 (aka: Flint 2)

I was reading through the OpenWRT forum several months back to see if the TP-Link AX23 was still the right upgrade choice for me. I’ve been very happy with the classic TP-Link Archer C7, having three of these as my core network (two as dumb APs). I came across this thread on devices for ‘newcomers’ and discovered the GL.iNet GL-MT6000 – it looks like a monster bit of hardware at a pretty low price point. My travel router is a GL.iNet device and it’s been great hardware for OpenWRT. Then bonus time at work hit, and I ran out of excuses to buy the GL-MT6000.

While you can buy directly from GL.iNet, just after I pushed the buy-now button I discovered that I was going to be on the hook for import duties and that shipping was via FedEx. I’ve not had good experiences with this path, and the administration fees are high. The support process from GL.iNet was amazing – a few emails and my order was cancelled without any fuss.

I ended up buying via Amazon.ca (camelcamelcamel link) because shipping costs were predictable. I see that it’s not currently in stock, but my total including shipping was $248.49 – still a deal for this much hardware.

Speaking of hardware

  • Two 2.5Gb ports
  • 1GB RAM
  • 8GB Flash
  • Quad core 2GHz CPU
  • WiFi 6

This may not be enough hardware to handle 1Gb symmetric fibre, but I’m still back on a much slower 100/30 cable plan. It also gets me thinking about upgrading my network switches to 2.5Gb… but that’s a different post.

The device itself has some heft to it – there is apparently a sizeable heat-sink inside. The power cord is short (about 3′) and there is no power switch. Neither is a problem for me, but I can see why some people felt these were limitations.

Of course, the very first thing I’m going to do is flash this with OpenWRT. This is as simple as grabbing the sysupgrade.bin file from https://openwrt.org/toh/gl.inet/gl-mt6000 and connecting to the device over a wired connection.

The factory firmware hosts an administration web UI on http://192.168.8.1/ allowing you to do basic setup. I’m prompted to pick a language and set a password.

From this screen we can select Upgrade on the left navigation bar, then Local Upgrade, and upload the sysupgrade.bin file we downloaded.

The built-in firmware handles the upgrade very nicely; it even detects a kernel change and automatically selects not to keep settings (which is what the OpenWRT wiki advises).

Even during the upgrade, the web UI is pretty slick.

Once it hits 100% it will automatically reboot. Since the OpenWRT default IP is different, we need to visit a different admin page: http://192.168.1.1

I have to say that the exterior of the device has a matte black finish, and the angular styling appeals deeply to my ’80s stealth-bomber-admiring inner teen. It reminds me of the USRobotics Courier 56k modems back in the day.

At this point we’ve got OpenWRT installed, and it’s just a matter of working through the configuration steps. I did run into a few problems that were entirely my own tripping-over-my-own-feet issues. Linux apparently ‘remembers’ the name of a wireless connection and the type of connection security; if you change the encryption but not the name, it seems you can run into problems. I also messed up one of the passwords with a typo. Eventually I got it all settled down and things worked great.
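
If you hit the same thing and your Linux client uses NetworkManager (an assumption on my part – other network stacks differ), deleting the remembered profile and reconnecting is the quickest way out:

    nmcli connection show              # list the remembered profiles
    nmcli connection delete "MySSID"   # "MySSID" is a placeholder for the stale profile name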

Replacing a ZFS degraded device

It was no surprise that a new RAIDZ array built out of decade-old drives was going to have problems; I just didn’t expect the problems to happen quite so quickly. This drive had 4534 days of power-on time, basically 12.5 years. It was also manufactured in Oct 2009, making it 14.5 years old.

I had started to back up some data to this new ZFS volume, and on one of the first scrub operations ZFS flagged this drive as having problems.
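
For reference, the scrub and the status check that surfaced the problem are just the usual commands (the pool name tank is a placeholder – mine differs):

    zpool scrub tank        # kick off a scrub
    zpool status -v tank    # READ/WRITE/CKSUM error counts and any DEGRADED/FAULTED devices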

The degraded device maps to /dev/sdg – I determined this by looking at the /dev/disk/by-id/wwn-0x50014ee2ae38ab42 link.
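
A quick way to confirm that mapping is to resolve the by-id symlink:

    readlink -f /dev/disk/by-id/wwn-0x50014ee2ae38ab42
    # -> /dev/sdg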

On one of my other systems I’m using SnapRAID (snapraid.it), which I quite like. It has a SMART check that calculates how likely each drive is to fail. I’ve often wondered how accurate this calculation is.

The nice thing is you don’t need to be using SnapRAID for your storage to get the SMART check data out; it’s a read-only activity based on the devices. In this case it has decided the failing drive has a 100% chance of failure, so that seems to check out.
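
The report itself comes from snapraid smart (it does need a snapraid.conf listing the disks), and the numbers behind it are ordinary SMART attributes you can also read directly with smartmontools if you prefer:

    snapraid smart          # SnapRAID's per-drive failure probability table
    smartctl -a /dev/sdg    # raw SMART attributes for the suspect drive
    smartctl -H /dev/sdg    # the drive's own overall health verdict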

Well, as it happens I had a spare 1TB drive on my desk so it was a matter of swapping some hardware. I found a very useful blog post covering how to do it, and will replicate some of the content here.

As I mentioned above, you first need to figure out which device it is – in this case /dev/sdg. I also want to figure out the serial number.
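
Reading the serial number off the device is straightforward – either of these works:

    smartctl -i /dev/sdg | grep -i serial
    lsblk -o NAME,MODEL,SERIAL /dev/sdg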

Good, so we know the serial number (and the brand of drive), but when you’ve got 4 identical drives, which of the 4 has that serial number? Of course, I ended up pulling all 4 drives before I found the matching one. The blog post gave some very good advice.

“Before I configure an array, I like to make sure all drive bays are labelled with the corresponding drive’s serial number, that makes this process much easier!”

Every install I make will now follow this advice, at least for ones with many drives. My system now looks like this, thanks to my label maker:

I’m certain future me will be thankful.

Because the ZFS array had marked this disk as being in a FAULTED state, we do not need to mark it ‘offline’ or anything else before pulling the drive. If we were swapping an ‘online’ disk, we might need to do more before pulling it.
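
For completeness, taking a still-healthy disk out of service first would look something like this (tank again being a placeholder pool name):

    zpool offline tank /dev/sdg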

Now that we’ve done the physical swap, we need to get the new disk added to the pool.

The first, very scary thing we need to do is copy the partition table from an existing drive in the vdev. The new disk is the TARGET, and an existing disk is the SOURCE.
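
One way to do this is with sgdisk from the gdisk package; just be careful with the argument order, since the TARGET is the option value and the SOURCE is the positional argument (device names here are placeholders):

    # Copy the partition table from an existing pool member onto the new disk
    sgdisk --replicate=/dev/sdNEW /dev/sdEXISTING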

Once the partition table is copied over, we want to randomize the GUIDs, as I believe ZFS relies on unique GUIDs for devices.
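
With sgdisk that is a single flag run against the new disk:

    sgdisk --randomize-guids /dev/sdNEW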

This is where my steps deviate from the referenced blog post, but the changes make complete sense. When I created this ZFS RAIDZ array I used the short sdg name for the device. However, as you can see, after a reboot the zpool command is showing me the /dev/disk/by-id/ name.
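
So the replace command names the old device the way zpool status reports it (here I’m reusing the by-id link from earlier) and points it at the new disk – roughly this shape, with tank as a placeholder pool name:

    zpool replace tank wwn-0x50014ee2ae38ab42 /dev/sdg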

This worked fine. I actually had a few missteps trying to do this, and zpool gave me very friendly and helpful error messages. More reason to like ZFS as a filesystem.

Cool, we can see that ZFS is repairing things with the newly added drive. Interestingly it is shown as sdg currently.

This machine is pretty loud (it has a lot of old fans), so I got a little wild and powered it down while ZFS was still resilvering. When I rebooted it after relocating it to where it normally lives – where the noise won’t bug me – the device naming had sorted itself out.

The SnapRAID SMART report now looks a lot better too.

It took about 9 hours to finish the resilvering, but then things were happy.

Some folks think that you should not use RAIDZ at all, but instead create a pool from a collection of mirror vdevs.

About 2 weeks later, I had a second disk go bad on me. Again, no surprise since these are very old devices. Here is a graph of the errors.

The ZFS scrub ran on April 21st, and you can see the spike in errors – but clearly this drive was failing slowly all along as I used it in this new build. This second failing drive was /dev/sdf, which, if you look back at the SnapRAID SMART report, was at a 97% failure probability. It is worth noting that while ZFS and the SnapRAID SMART check have both decided these drives are bad, I was able to put both drives into a USB enclosure and still access them – I certainly don’t trust these old drives to store data, but ZFS stopped using the device before it became completely unusable.

I managed to grab a used 1TB drive for $10. It is quite old (from 2012) but has only 1.5 years of power-on time. Hopefully it’ll last, but at that price it’s hard to argue. Swapping that drive in was a matter of following the same steps, and having the drive bays labelled with serial numbers was very helpful.

Since then, I’ve picked up another $10 1TB drive, this one from 2017 with only 70 days of power-on time. Given that I’ve still got two decade-old drives in this RAIDZ, I suspect I’ll be replacing one of them soon. The going used rate for 1TB drives is between $10 and $20 locally – amazing value if you have a redundant layout.