Way back in late 2016 I started building a new server to replace my aging server that runs this website, email, etc. I’m still slowly working towards actually swapping all of the services I run to the new hardware, a process slowed by the fact that I’m trying to refactor everything to be container-based, and by the fact that I simply don’t seem to have a lot of free time to hack on these things. Matters were further complicated by the original motherboard failing to work, and by my desire to upgrade the SSD to a larger one.
Originally I had a 60GB SSD, which I got at a great price (about $60 if I recall). Today SSD prices have fallen through the floor, making it seem sensible to start building 1TB RAID arrays out of (highly reliable) SSDs. A little before Christmas I picked up a 120GB Kingston SSD for around $30. Clearly it was time to do an upgrade.
Back in 2016 the right choice was Ubuntu 16.04; in 2019 the right version is 18.04 LTS. Now while I haven’t moved all the services I need to the new server (in fact, I’ve moved very few), there are some services which do run on the new server and my infrastructure does rely on them. This means that I want to minimize the downtime of the new server, while still achieving a full clean install.
Taking the server offline for ‘hours’ seemed like a bad idea. However, since my desktop is a similar enough machine to the new server hardware, it seemed like a good idea to use it to build the new OS on the new SSD. In fact, my bright idea was to do the install with the new SSD inside of a USB enclosure.
Things went smoothly: download 18.04.2, create a bootable USB stick, put the SSD into a drive enclosure, boot from the USB stick, and install to the external SSD. Initially I picked a single snap (the Docker one); I later changed my mind about this and re-did the install. The shell/curses based installer was new to me, but easy to use. The only option I picked was to pre-install the OpenSSH server, because of course I want one of those.
Attempting to boot from the newly installed SSD just didn’t work. It was hanging my hardware just after POST. This was very weird. Removing the SSD from the enclosure and doing a direct SATA connection worked fine – and it proved that the install was good to go.
For the next part, I’m using my old OS setup post as a reference.
Either 18.04.2 is recent enough, or the install process has changed, but I didn’t need to run

```shell
$ sudo apt-get update
$ sudo apt-get upgrade
```
since doing the above made no changes to the system as installed. Still, it doesn’t hurt to make sure you are current.
Of course, we still want fail2ban
```shell
$ sudo apt-get install fail2ban
```
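Out of the box fail2ban ships with a working sshd jail, but local tweaks belong in jail.local, since jail.conf can be overwritten on package upgrades. Here is a minimal sketch; the thresholds are assumptions to tune to taste, not values from my actual setup:

```
# /etc/fail2ban/jail.local -- local overrides; the values are examples
[sshd]
enabled  = true
maxretry = 5      # failed attempts before a ban
findtime = 10m    # window in which failures are counted
bantime  = 1h     # how long an offending IP stays banned
```

After editing, `sudo fail2ban-client reload` picks up the changes.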
I didn’t create a new SSH key, but copied in the keys from the old system. I did need to create the .ssh directory.
```shell
$ mkdir .ssh
$ chmod 700 .ssh
```
Once you’ve copied in the id_rsa files, don’t forget to create the authorized_keys file
```shell
$ cp id_rsa.pub authorized_keys
$ chmod 400 authorized_keys
```
It’s a good idea now to verify that we can access the machine via ssh. Once we know we can, let’s go make ssh more secure by blocking password based authentication. The password is still useful to have for physical (console) logins.
```shell
$ vi /etc/ssh/sshd_config

# Change to no to disable tunnelled clear text passwords
PasswordAuthentication no
```
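Before ending your working session, it’s worth checking that the daemon still parses its config and then applying the change; locking yourself out of a headless box over a typo is embarrassing. A cautious sequence (on Ubuntu the service is named ssh):

```shell
# syntax-check the config before applying it
$ sudo sshd -t
# apply the change; existing sessions stay connected
$ sudo systemctl reload ssh
```

Then open a second ssh connection to confirm key-based login still works before closing the first.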
Let’s also setup a firewall (ufw)
```shell
$ sudo ufw allow 22
$ sudo ufw allow 80
$ sudo ufw allow 443
$ sudo ufw enable
```
Now as we will almost exclusively be using docker to host things, docker will be managing iptables rules for us in parallel with ufw. So strictly speaking we don’t need 80 or 443 enabled, because docker will open them when we need to. I also discovered that ufw supports per-rule comments, a nice way to record why you opened (or closed) a port.
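For example, the rules above could be re-added with comments attached (the comment keyword is supported by the ufw shipped with 18.04):

```shell
$ sudo ufw allow 22 comment 'ssh'
$ sudo ufw allow 80 comment 'http - normally handled by docker'
$ sudo ufw status
```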
Staying on the security topic, let’s install logwatch. This requires a mail system – and as we don’t want postfix because that will interfere with the plan to put email in a docker container.. we are going to use nullmailer setup to send email to our mail server (hosted on the old server until we migrate).
```shell
$ sudo apt-get install nullmailer
$ sudo apt-get install logwatch
```
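Nullmailer’s configuration is pleasantly small: /etc/nullmailer/remotes lists the smarthost(s) it forwards everything to. A sketch, where mail.example.com is a stand-in for the old server’s mail host (substitute your own):

```
# /etc/nullmailer/remotes -- one smarthost per line
# forward all outbound mail to the existing mail server
mail.example.com smtp
```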
Don’t forget to configure logwatch to email us daily
```shell
$ vi /etc/cron.daily/00logwatch

# execute
/usr/sbin/logwatch --output mail --mailto myid@lowtek.ca --detail high
```
For docker we’re going to hook up to the docker.com repositories and install the CE version. To do this we need some additional support added to our base install.
```shell
# things we need to make the install go smoothly
sudo apt-get install \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg-agent \
    software-properties-common

# add the key from docker.com
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

# add the repository
sudo add-apt-repository \
    "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
    $(lsb_release -cs) \
    stable"

# update apt
sudo apt-get update
```
We can now install docker.
```shell
$ sudo apt-get install docker-ce docker-ce-cli containerd.io
```
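A quick smoke test confirms the daemon is running and can pull and run images; this is the standard check from the Docker docs, and it prints a short “Hello from Docker!” message on success:

```shell
$ sudo docker run hello-world
```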
I also like to be able to run docker as ‘me’ vs needing to sudo for every command
```shell
$ sudo gpasswd -a ${USER} docker
$ sudo service docker restart
```
Don’t forget that you need to log out and back in before your shell picks up the new group membership.
There are a few more annoyances we’re going to address. Needing my password for every sudo is inconvenient. I’m willing to trade off some security to make this more convenient, but be aware that it is a trade-off.
TIL visudo is the safe way to edit sudoers https://t.co/qop34V5zxf .. and yes, I'm currently locked out of my linux box
— Roo (@andrew_low) February 16, 2019
Don’t be me: use visudo, which verifies that you’re modifying /etc/sudoers correctly. Granting your user passwordless sudo is easy, just add a line to the end of the file.
```
# no password sudo for user myid
myid ALL=(ALL) NOPASSWD:ALL
```
Another minor nit: while the installer asked me for a machine name, the /etc/hosts file didn’t get that name added as expected. Adding it (machineName) to the end of the first line fixes things.
```
127.0.0.1 localhost.localdomain localhost machineName
```
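If you’d rather script the edit than open vi, a sed one-liner does the same thing. It’s shown here against a scratch copy so it’s safe to try; point it at /etc/hosts (with sudo) for real, and machineName is of course a stand-in for your host’s name:

```shell
# make a scratch copy mirroring the stock first line of /etc/hosts
printf '127.0.0.1 localhost.localdomain localhost\n' > /tmp/hosts.example
# append the machine name to the first 127.0.0.1 entry (GNU sed range address)
sed -i '0,/^127\.0\.0\.1/ s/$/ machineName/' /tmp/hosts.example
cat /tmp/hosts.example
# → 127.0.0.1 localhost.localdomain localhost machineName
```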
Things are almost perfect, but we haven’t setup automatic updates yet.
```shell
$ sudo apt install unattended-upgrades
```
Then configure the unattended upgrades
```shell
$ sudo vi /etc/apt/apt.conf.d/50unattended-upgrades

# Remove comments to enable updates
"${distro_id}:${distro_codename}-updates";

# Setup email for any failures
Unattended-Upgrade::Mail "root";
Unattended-Upgrade::MailOnlyOnError "true";

# A few more settings
Unattended-Upgrade::Remove-Unused-Kernel-Packages "true";
Unattended-Upgrade::Remove-Unused-Dependencies "true";
Unattended-Upgrade::Automatic-Reboot "true";
Unattended-Upgrade::Automatic-Reboot-Time "02:00";
```
Next, ensure that the unattended upgrades actually get run: modify /etc/apt/apt.conf.d/20auto-upgrades to look like the following.
```
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Download-Upgradeable-Packages "1";
APT::Periodic::AutocleanInterval "7";
APT::Periodic::Unattended-Upgrade "1";
```
We can test this using a dry-run
```shell
$ sudo unattended-upgrades --dry-run --debug
```
I did discover an interesting problem with docker and shutdowns. The automatic reboot uses shutdown to initiate a reboot in the middle of the night. You can cancel a scheduled shutdown, but doing so seems to cause docker containers with a --restart policy to not be restarted.
```shell
# issue a scheduled shutdown
$ sudo shutdown +10 --reboot
# containers are running fine
$ docker ps
# cancel the shutdown
$ sudo shutdown -c
# containers still happy
$ docker ps
# reboot
$ sudo reboot
# containers not automatically restarted
```
It does appear that if you allow the shutdown to happen (ie: do not cancel it), then the docker containers do restart as expected.
I can’t really explain the shutdown and docker interaction issue. Nor the install of Ubuntu onto an external drive, and then that drive causing the machine to fail to POST properly.
Edit
Adding 3rd party repository to unattended-upgrade. Since I elected to pull docker from docker.com vs. the package provided via Ubuntu, I want to also have that repository be part of the automatic upgrades.
I first found this post which provides a path to figuring it out, but I found a better description which I used to come up with my solution.
```shell
$ grep Origin /var/lib/apt/lists/download.docker.com_linux_ubuntu_dists_bionic_InRelease
Origin: Docker
$ grep Suite /var/lib/apt/lists/download.docker.com_linux_ubuntu_dists_bionic_InRelease
Suite: bionic
```
This gives us the two bits of magic we need to add to /etc/apt/apt.conf.d/50unattended-upgrades – so we modify the Allowed-Origins section to look like:
```
Unattended-Upgrade::Allowed-Origins {
	"${distro_id}:${distro_codename}";
	"${distro_id}:${distro_codename}-security";
	// Extended Security Maintenance; doesn't necessarily exist for
	// every release and this system may not have it installed, but if
	// available, the policy for updates is such that unattended-upgrades
	// should also install from here by default.
	"${distro_id}ESM:${distro_codename}";
	"${distro_id}:${distro_codename}-updates";
	"Docker:bionic";
//	"${distro_id}:${distro_codename}-proposed";
//	"${distro_id}:${distro_codename}-backports";
};
```
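If more third-party repositories get added later, the Origin/Suite spelunking can be scripted. This sketch fakes up a minimal InRelease file so the logic can be exercised anywhere; on a real system you would point the awk at the repo’s file under /var/lib/apt/lists/ instead:

```shell
# fake a minimal InRelease so the extraction can be demonstrated safely
cat > /tmp/example_InRelease <<'EOF'
Origin: Docker
Suite: bionic
EOF

# pull out the two fields and print a ready-made Allowed-Origins entry
origin=$(awk '/^Origin:/ {print $2; exit}' /tmp/example_InRelease)
suite=$(awk '/^Suite:/ {print $2; exit}' /tmp/example_InRelease)
printf '"%s:%s";\n' "$origin" "$suite"
# → "Docker:bionic";
```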
Again, we can test that things are cool with
```shell
$ sudo unattended-upgrades --dry-run --debug
```
This will ensure that docker updates get applied automatically.