Comparing images to detect duplicates

I’ve been using Photoprism to manage my large and growing photo library. We had simply outgrown using a single machine to manage the library, and Apple had burned us a couple of times by changing their native photo management system. I’m also not the type to trust someone else to keep and secure my photos, so I’m going to host it myself.

I have backups of those photo libraries to work from, and unfortunately those backups seem to contain duplicate copies of many photos. No problem, right? Photoprism has the ability to detect duplicates and reject them. Sweet. However, that detection relies on the photos being exactly the same binary.
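
As an aside, the exact-match case is easy to reproduce yourself with a checksum pass; this is roughly what binary-identical detection amounts to (the photos/ directory and GNU tooling are my assumptions):

  # Hash everything, then print groups of byte-identical files
  find photos/ -type f -print0 | xargs -0 sha256sum | sort | uniq -w64 --all-repeated=separate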

My problems start when I have a bunch of smaller photos which look ok – but are clearly not the originals. In this particular case the original is 2000×2000, and the alternate version is 256×256 (see top of post for an example of the two images). Great – just delete the small one. But with thousands of photos, how do I know which ones are resized duplicates of others?

There are other clues here too: the smaller resized version is missing a proper EXIF date stamp. So sure, I can split the photos into ones with valid EXIF data and a pile of others which don’t have it. But what if one of those photos isn’t a resized version? Maybe it’s a photo of something that I only have a small version of?
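
For what it’s worth, that EXIF sorting step is simple enough to script; a sketch using exiftool (the directory name is only an example):

  # Print every photo under photos/ that lacks a DateTimeOriginal tag
  exiftool -q -r -if 'not $DateTimeOriginal' -p '$Directory/$FileName' photos/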

Again, with thousands of photos to review, I’m not going to be able to reasonably figure out which ones are keepers and which aren’t. Good thing that doing dumb stuff is what computers are good at. However, looking at two images and determining if they are the same thing is not as easy as you might think.

The folks at ImageMagick have some good ideas on comparing images for differences; they even tackle this same problem of identifying duplicates, but in the end you’re still left to build your own solution from their advice.

Since I had this problem, I cooked up some scripting and an approach which I’ll share here. It’s messy, and I still rely on a human to make the final call – but for the most part I get a computer to do the brute force work to make the problem human-sized.
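
The details are in the full post, but the heart of the brute force step is ImageMagick doing a resize-then-compare. A rough sketch of the idea (the file handling and whatever RMSE threshold you settle on are up to you):

  #!/bin/sh
  # Is "small" likely a downscaled copy of "big"? Shrink the big one to match,
  # then ask ImageMagick to score the difference.
  big="$1"; small="$2"
  size=$(identify -format '%wx%h' "$small")
  convert "$big" -resize "${size}!" /tmp/shrunk.jpg
  # compare prints the metric to stderr; a low RMSE means "probably the same picture"
  score=$(compare -metric RMSE /tmp/shrunk.jpg "$small" null: 2>&1)
  echo "$big vs $small -> RMSE $score"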

Continue reading “Comparing images to detect duplicates”

Expanding a docker macvlan network

I’ve previously written about using macvlan networks with docker. This has proved to be a great way to make containers more like lightweight VMs, as you can assign each one a unique IP on your network. Unfortunately, when I did this I only allocated 4 IPs to the network, and 1 of those is used to provide a communication path from the host to the macvlan network.

Here is how I’ve used up those 4 IPs:

  1. wireguard – allows clients on wireguard to see other docker services on the host
  2. mqtt broker – used to bridge between my IoT network and the lan network without exposing all of my lan to the IoT network
  3. nginx – a local only webserver, useful for fronting Home Assistant and other web based apps I use
  4. shim – IP allocated to support routing from the host to the macvlan network.

If I had known how useful giving a container a unique IP on the network was, I would have allocated more up front. Unfortunately you can’t easily grow a docker network; you need to delete and recreate it.

As an overview, here is what we need to do:

  • Stop any docker container that is attached to the macvlan network
  • Undo the shim routing
  • Delete the docker network
  • Recreate the docker network (expanded)
  • Redo the shim routing
  • Recreate the existing containers

This ends up not being too hard, and the only slightly non-obvious step is undoing the shim routing, which is the reverse of the setup.

The remainder of this post is a walk-through of setting up a 4-IP network, then tearing it down and setting up a larger 8-IP network.
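
To give a flavour of it, here is roughly what the tear-down and rebuild look like. The interface name, subnet, and address ranges below are examples only – substitute your own values.

  # Undo the shim routing (the reverse of the setup), then remove the old 4-IP network
  ip route del 192.168.1.64/30 dev macvlan-shim
  ip link del macvlan-shim
  docker network rm mymacvlan

  # Recreate the network with an 8-IP range this time
  docker network create -d macvlan \
    --subnet=192.168.1.0/24 \
    --ip-range=192.168.1.64/29 \
    -o parent=eth0 mymacvlan

  # Redo the shim so the host can reach containers on the macvlan network
  ip link add macvlan-shim link eth0 type macvlan mode bridge
  ip addr add 192.168.1.70/32 dev macvlan-shim
  ip link set macvlan-shim up
  ip route add 192.168.1.64/29 dev macvlan-shim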

Continue reading “Expanding a docker macvlan network”

Docker system prune – not always what you expect

Containers have improved my ‘home lab’ significantly. I’ve run a server at home (exposed to the internet) for many years. Linux has made this both easy to do and fairly secure.

However, in the old – “I’ll just install these packages on my linux box” – model, you’d end up with package A needing some dependency and package B needing a different version of the same one, and then you’d have version conflicts. It was always something you could resolve, but with enough software you’d have a mess of dependencies to untangle.

Containers solve this by giving you lightweight ‘virtualization’ that isolates each of your packages from the others, AND it’s also a very convenient distribution mechanism. This lets you easily get a complete, functional application with all of its dependencies in a single bundle. I’ll point at linuxserver.io as a great place to get curated images. Also, consider having an update policy to help you keep current – something like Diun, or fully automate with watchtower.

Watchtower does have the ability to do some cleanup for you, but I’m not using watchtower (yet). I have included some image cleanup in my makefiles because I was trying to fight filesystem bloat caused by updates. While I don’t want to prematurely delete anything, I also don’t want a lot of old cruft using up my disk space.
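
That image cleanup is nothing exotic – it boils down to removing dangling images after an update, something along these lines:

  # Preview what would go, then drop image layers that no longer have a tag pointing at them
  docker image ls -f dangling=true
  docker image prune -f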

I recently became aware of the large number of docker volumes on my system. I didn’t count, but it was well over 50 (the list filled my terminal window). This seemed odd; some of them had a creation date of 2019.

Let’s just remove them with docker volume prune – yup, that removes all volumes not used by at least one container. Hmm, no – I still have so many. Let’s investigate further.
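
My poking around looked something like this (the volume name in the second command is just an example of one I knew was in use):

  # Volumes that no container references – still a terminal full of them
  docker volume ls -f dangling=true
  # Cross-check: which container, if any, is using a given volume?
  docker ps -a --filter volume=nginx_config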

What? If I sub in a volume id that I know is attached to a container, I do get the container shown to me. This feels like both docker system prune and docker volume prune are broken.

Thankfully the internet is helpful if you know what to search for. Stack Overflow helped me out, and it in turn pointed me at a GitHub issue. Here is what I understand from those.

Docker has both anonymous and named volumes. Unfortunately, many people were naming volumes and treating them like permanent objects. Running docker system prune was removing these named volumes if there wasn’t a container associated with them. Losing data sucks, so docker has changed to not remove named volumes as part of a prune operation.

In my case, I had some container images with mount points that I wasn’t specifying as part of my setup. An example is a /var/log mount – so when I create the container, docker creates a volume on my behalf – and it’s a named volume. When I recreate that container, I get a new volume and ‘leak’ the old one, which is no longer attached to any container. This explains why I had 50+ volumes hanging around.
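
In other words, the leak plays out something like this (the image name is made up; the point is the volume declared by the image, not the exact commands):

  docker run -d --name webapp example/webapp   # image declares a /var/log volume, docker creates one for me
  docker rm -f webapp                          # container gone, the auto-created volume stays behind
  docker run -d --name webapp example/webapp   # the recreated container gets a brand new volume
  docker volume ls -f dangling=true            # ...and the old one is now orphaned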

You can easily fix this by cleaning out the volumes that are no longer attached to any container.
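
A pass along these lines does it – just eyeball the dangling list before deleting anything:

  # Remove every volume that no container (running or stopped) references
  docker volume rm $(docker volume ls -q -f dangling=true)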

Yup, now I have very few docker volumes on my system – the remaining ones are associated with either a running or a stopped container.