Server Upgrade: Part 2 – Basic OS setup

November 26th, 2016

At this point I’ve got the hardware set up and running, but it’s a very basic install. This post is inspired by a post I came across some time ago that I felt gave some good advice. I’ll walk through the steps I took while following that article.

Starting with a clean Ubuntu 16.04.1 server install.

First login

Fail2ban is a must have security feature, blocking traffic when it detects repeated failed attempts to access your system.
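On Ubuntu 16.04 it’s a quick install – a sketch, assuming the default sshd jail is all we need:

```shell
# Install fail2ban; the stock configuration already watches sshd
sudo apt-get update
sudo apt-get install -y fail2ban

# Put any local tweaks in jail.local rather than editing jail.conf
sudo cp /etc/fail2ban/jail.conf /etc/fail2ban/jail.local

# Confirm the service is up and the ssh jail is active
sudo fail2ban-client status
```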

Now we want to make some ssh keys, following the Ubuntu documentation. One key question: how big a key should we be using for reasonable security? I think the answer is 4096 bits.
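Key generation looks like this (the file name, comment, and passphrase are placeholders – interactively you’d be prompted for the passphrase rather than passing -N):

```shell
# Generate a 4096-bit RSA key pair for this machine.
# -C tags the public key with a comment; -N sets the passphrase
# (shown inline here only so the command can run unattended).
mkdir -p ~/.ssh
ssh-keygen -t rsa -b 4096 -C "newserver" \
  -N "use-a-real-passphrase" -f ~/.ssh/id_rsa_newserver
```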

Why create a key on the new machine? I’m of the opinion that unique ssh keys for unique machines are a good idea, and the ssh config file makes it really simple to manage multiple keys on your main machine (laptop).

Should you use a passphrase when creating your ssh key? If you really care about security, yes – you should. Otherwise anyone who manages to get their hands on your key immediately has access to everything. There is a small usability trade-off, since you need to provide that passphrase every time you want to use the key.

The ssh config file (not on the host, but on your laptop) will look like this:
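Something along these lines – the host alias, address, user, and key path are all placeholders for your own values:

```
# ~/.ssh/config on the laptop
Host newserver
    HostName 192.0.2.10            # the server's IP or DNS name
    User myuser
    Port 22
    IdentityFile ~/.ssh/id_rsa_newserver
    IdentitiesOnly yes
```

With that in place, `ssh newserver` picks the right key automatically.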

Assuming you’ve copied the private key (id_rsa) from the new server you’re setting up to the laptop, make sure to chmod 600 that key file too.

Now we should be able to ssh into the new server from our laptop. Hooray, it works!

Time to lock down ssh so key-based logins are the only way

You might want to have a shell logged in to the machine while you do this, so you can verify that things are cool AND fix stuff if there is a problem. Otherwise you’re locked out.
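The relevant knobs live in /etc/ssh/sshd_config; after editing, restart the daemon (sudo systemctl restart ssh) while keeping that spare session open:

```
# /etc/ssh/sshd_config – key-based logins only
PasswordAuthentication no
ChallengeResponseAuthentication no
PermitRootLogin no
```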

Time for a firewall. We’re going to use ufw because it’s simple and does the iptables setup for us.

We’re allowing the ssh, http, and https protocols, and that’s it. We’re not yet running a web server, but we will at some point.
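A sketch of the ufw setup (run it from the console or an existing session, since enabling the firewall can drop connections):

```shell
# Allow only the services we need, then switch the firewall on
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow ssh
sudo ufw allow http
sudo ufw allow https
sudo ufw enable

# Double-check the resulting rules
sudo ufw status verbose
```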

Now let’s reboot and see how things are doing – if we can’t log in, we really broke stuff. In my case, everything was still working just fine. If you did break things, grab the Ubuntu USB installer, boot a live version on the machine, and use root access there to fix the configuration files and let yourself back in – or just re-install and start over.

When I did the original install, I opted not to do automatic security updates. This was a mistake, so I’ll fix that now.
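On Ubuntu this is the unattended-upgrades package; a sketch:

```shell
# Install and enable automatic security updates
sudo apt-get install -y unattended-upgrades
sudo dpkg-reconfigure -plow unattended-upgrades

# The resulting settings live in /etc/apt/apt.conf.d/20auto-upgrades
```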

Again, the post I’m loosely following recommends logwatch, something I hadn’t used previously. Having run with it for a while now, I’ve grown to like the level of detail it pulls together into a daily email.
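Installing it is one line, and you can preview a report on the console before waiting for the daily email:

```shell
sudo apt-get install -y logwatch

# Generate today's report to stdout instead of mailing it
sudo logwatch --detail High --range today --output stdout
```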

Now, the logwatch installation triggered a postfix install and configuration. It’s probably useful for the host to have email capability, but really I want to host my main mail system inside a Docker container. Hopefully this won’t cause me grief in the future as I build out the set of containers that will run email etc.

I just picked the default postfix ‘internet’ install and it appears to have done the right thing to allow the new server to send email, so that’s positive. Again, my concern is how all of this will work when the new machine starts hosting email inside a Docker container – an issue I’ll certainly cover in a future post.


Almost done.

At this point, things are pretty good, but…

I’m concerned by the 10 packages that aren’t being automatically upgraded. It turns out that this is easy to fix with a dist-upgrade.
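The held-back packages that a plain upgrade skips come down with:

```shell
sudo apt-get update
sudo apt-get dist-upgrade
```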

And of course, I continued and got all of the latest upgrades in the 16.04 tree. If you’re observant, you’ll notice that 16.04 is still back on an older kernel (4.4), which is fine – this is the LTS release.

One more reboot… then we see
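A quick sanity check after the reboot (exact versions will vary):

```shell
# Still on the 16.04 LTS 4.4 kernel line
uname -r

# Simulate an upgrade – ideally it reports nothing left to do
apt-get -s upgrade | tail -n 1
```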

Excellent. We’re current, and with the automatic updates we’ll get security patches with no additional effort. Non-security patches will not install automatically, so from time to time we’ll still need to manually pull down patches.

Server Upgrade: Part 1 – the build

November 14th, 2016


I’ve been talking on and off for nearly 2 years about getting new hardware for the server that runs this site. The current server is a modest Atom-based board with only 2GB of RAM; I’m pretty sure my phone has more compute power. My reasons for upgrading were mostly so I could move over to a Docker-based deployment – having done a bunch of Docker things for work, I’ve really gotten to like it for managing software stacks. The Atom chip doesn’t support some of the virtualization features that are needed; heck, it’s also a 32-bit-only system. Stalling on buying new hardware wasn’t all that hard – there were always other priorities. Then I discovered the current server was put together 7 years ago! Worse, the IDE drive that is the boot volume is older than that… Clearly I have a ticking time bomb for a server.

I wanted to get a Skylake processor, but was happy to stick with an entry-level Pentium – it will run all the software I care about, and while it’s not the fastest, it will be much, much faster than the Atom. I wanted 4 RAM slots on the motherboard; I figure 16GB now, with the potential to double that later. I also needed 6 SATA ports (RAID 5 + boot drive), and if I need more I’ll buy a PCIe SATA expansion card. I went ASUS based on their durability reputation. I picked up a new power supply (because who wants to trust one that is already 7 years old) and a cheap SSD, which will be much more reliable than any spinning platter.

If you follow the links, you’ll notice that the first 3 are from CanadaComputers and the last two are from another online retailer. I usually buy from CanadaComputers, but while they list the G4400 online you can’t actually buy it at the store. If I’m buying something online, well – one place is as good as another, and this one not only had the G4400 cheaper, they also had a crazy clear-out price on the SSD ($37.99).

The build went smoothly, I was able to re-use an old Dell PC case to house the new hardware temporarily. I had a 3.5″ adapter for the SSD kicking around, and the motherboard came with a couple of SATA cables.

Getting an OS installed proved to be a bit trickier. Something funky is wrong with my desktop Ubuntu: downloads with Chrome are often corrupted (md5sum fails). The 16.10 Ubuntu installer, even when it is a correct download, will not self-check its files correctly, AND while I could boot it from USB, I couldn’t get it to install on my new hardware.

I believe the problem is something to do with the newer kernel level, but given that 16.04 is the LTS version and what I’ll be running on the server anyways, it’s not worth banging my head on this problem further.

All in, I spent a bit more than $400 CAD and have a couple of rebates in the mail. Not bad as I’ll probably run this hardware for at least 5 years, and it’ll be a much needed upgrade.

Next up – I need to install the server version and start building out the software stack that will take over the current server’s functionality. Once that is mostly ready, it’ll be time to do a hardware swap into the real server case that has all of the drives in it.

JavaScript and the Single Threaded Lie

August 5th, 2016

JavaScript is single threaded

Yes, but… it is also heavily asynchronous. Ever since my experience porting Node.js to new platforms, it’s turned into one of my go-to languages for hacking web apps. Unfortunately I still fall into the async trap.

Let’s look at some code:
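The original snippet didn’t survive, so here’s a minimal reconstruction of the trap. The names myFunction and item come from the discussion below; the 2-second setTimeout matches it too, while the list of values is my own invention:

```javascript
function myFunction(callback) {
  // Simulate a slow async operation -- the callback runs ~2 seconds later
  setTimeout(callback, 2000);
}

var items = ['one', 'two', 'three'];
for (var i = 0; i < items.length; i++) {
  var item = items[i]; // `var` is function-scoped, not per-iteration
  myFunction(function () {
    console.log('processing ' + item);
  });
}
```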

And here is the output of the code if we run it:
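Because the 2-second timers all fire after the loop has finished, every callback prints the final value of item. With a three-item loop ending in ‘three’, that means:

```
processing three
processing three
processing three
```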

Now, in this example – it’s pretty clear what is causing the issue. The setTimeout() introduces a delay of 2 seconds. This is an async call, so the for loop has completed its entire cycle before any of the callbacks get a chance to run. Timing also makes things more confusing when code only sometimes works, since you can’t guarantee the order in which asynchronous functions complete.

The trap for me is when the code is more complex and it isn’t obvious that there is an async call in the way. Also, my brain keeps telling me that each iteration through the loop (I think) creates a new variable (item), and that should provide correct isolation for the state – but with var there is only one item, shared by every callback.

There are two simple solutions to this problem.

Solution 1 – use the call stack:

Move the async call myFunction into a helper and pass the value to it. This moves the value from being a shared local variable to one on the stack (as a parameter to helper).
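A sketch of that shape – helper is the name used above, and each call gets its own copy of the value:

```javascript
function myFunction(callback) {
  setTimeout(callback, 2000); // simulated slow async work
}

// The value arrives as a parameter, so every invocation of helper
// closes over its own copy rather than a shared loop variable
function helper(item) {
  myFunction(function () {
    console.log('processing ' + item);
  });
}

var items = ['one', 'two', 'three'];
for (var i = 0; i < items.length; i++) {
  helper(items[i]);
}
```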

Solution 2 – have the callback give the value back:
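Here the async function carries the value along and hands it back as a callback argument (again a sketch using my placeholder values):

```javascript
function myFunction(item, callback) {
  // Hand the value back to the callback instead of relying on
  // the caller's closure state
  setTimeout(function () {
    callback(item);
  }, 2000);
}

var items = ['one', 'two', 'three'];
for (var i = 0; i < items.length; i++) {
  myFunction(items[i], function (item) {
    console.log('processing ' + item);
  });
}
```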

To be honest, solution 2 is just another call-stack approach – but it’s a different enough pattern to count as a second solution.

The output of both good solutions looks like:
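With my placeholder values ‘one’, ‘two’, ‘three’, each callback now prints its own item exactly once, in order:

```
processing one
processing two
processing three
```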