Docker system prune – not always what you expect

Containers have improved my ‘home lab’ significantly. I’ve run a server at home (exposed to the internet) for many years. Linux has made this both easy to do and fairly secure.

However, in the old – “I’ll just install these packages on my Linux box” – model, you’d end up with package A needing some dependency and package B needing a different version of that same dependency, and now you have a conflict. It was always something you could resolve, but with enough software you’d have a real mess of dependencies to untangle.

Containers solve this by giving you lightweight ‘virtualization’ that isolates each of your packages from the others, AND they double as a very convenient distribution mechanism. This lets you easily grab a complete, functional application with all of its dependencies in a single bundle. I’ll point at linuxserver.io as a great place to get curated images. Also, consider having an update policy to help you keep current: something like DIUN to notify you of new images, or fully automate with watchtower.
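
If you go the watchtower route, the setup is tiny; roughly something like this (the daily interval is just an example, tune it to taste):

    # let watchtower watch every container on this host and pull updates once a day
    docker run -d \
      --name watchtower \
      -v /var/run/docker.sock:/var/run/docker.sock \
      containrrr/watchtower --interval 86400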

Watchtower does have the ability to do some cleanup for you, but I’m not using watchtower (yet). I have included some image cleanup in my makefiles because I was trying to fight filesystem bloat from repeated updates. While I don’t want to prematurely delete anything, I also don’t want a lot of old cruft using up my disk space.
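
The relevant bit of those makefiles boils down to one command; this is a sketch of the idea rather than my exact recipe:

    # drop dangling images left behind after pulling newer tags
    docker image prune -f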

I recently became aware of the large number of docker volumes on my system. I didn’t count, but it was well over 50 (the list filled my terminal window). This seemed odd; some of them had a creation date of 2019.
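
Seeing this for yourself is easy enough; something along these lines lists every volume and when it was created:

    # list all volumes, then print each one's name and creation date
    docker volume ls
    docker volume inspect --format '{{ .Name }} {{ .CreatedAt }}' $(docker volume ls -q)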

Let’s just remove them with docker volume prune – yup, remove all volumes not used by at least one container. Hmm, no – I still have so many.
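
For reference, the prune commands in question (docker system prune is the broader cleanup that also touches stopped containers, unused networks, and dangling images):

    # reclaim volumes that no container is using
    docker volume prune

    # the bigger hammer; the flag asks it to consider volumes as well
    docker system prune --volumes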

Let’s investigate further. Asking docker which containers are using one of these leftover volumes comes back empty. What? If I sub in a volume id that I know is attached to a container, I do get that container shown to me. So the query works, and the leftover volumes really aren’t attached to anything. This feels like both docker system prune and docker volume prune are broken.
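
The check was along these lines, where VOLUME_ID stands in for one of the ids from docker volume ls:

    # show any container, running or stopped, that mounts this volume
    docker ps -a --filter volume=VOLUME_ID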

Thankfully the internet is helpful if you know what to search for. Stack Overflow helped me out, and it in turn pointed me at a GitHub issue. Here is what I understand from those.

Docker has both anonymous and named volumes. Unfortunately, many people were naming volumes and treating them like permanent objects. Running docker system prune was removing these named volumes if there wasn’t a container associated with them. Losing data sucks, so Docker has changed to not remove named volumes as part of a prune operation.
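
To make the distinction concrete (nginx is just a stand-in image here): a named volume is one you ask for by name, while an anonymous one gets a generated id:

    # named volume 'webdata' – current docker treats it as precious and prune skips it
    docker run -d --name web1 -v webdata:/data nginx

    # anonymous volume – no name given, docker generates one, and prune will consider it
    docker run -d --name web2 -v /data nginx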

In my case, I had some container images which declared mount points that I wasn’t specifying as part of my setup. An example is a /var/log mount – so when I create the container, docker creates a volume on my behalf, and it’s a named volume. When I recreate that container (say, after pulling an updated image), I get a new volume and ‘leak’ the old named volume, which is no longer attached to any container. This explains why I had 50+ volumes hanging out.
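
You can see which mount points an image declares, and therefore which volumes docker will quietly create if you don’t map them yourself. For example (postgres happens to declare one; the image has to be pulled locally first):

    # show the VOLUME declarations baked into an image
    docker image inspect --format '{{ json .Config.Volumes }}' postgres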

You can easily fix this.
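
One way to do it, assuming a reasonably recent docker, is to tell prune to consider every unused volume instead of just the anonymous ones. Fair warning: this also removes named volumes that no container references, so check the list for anything precious first:

    # remove ALL unused volumes, named or anonymous (--all needs a newer docker)
    docker volume prune --all

    # alternative: explicitly remove volumes not referenced by any container
    docker volume ls -q -f dangling=true | xargs docker volume rm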

Yup, now I have very few docker volumes on my system – the remaining ones are associated with either a running or a stopped container.
