Google Pixel 10

I didn’t need a new phone; the Pixel 7 I’d been using for almost two years was doing everything I needed. The battery life was good, it still felt fast, and the photos were great.

I think my downfall was that I needed to upgrade the kids to newer (working) phones, and I managed to get a couple of Pixel 7 Pros for a great price. This meant my phone, while still great, was the lowest-spec phone in the family. Since Black Friday we’d seen cell phone plan promotions that included the Pixel 10, leading to a lot of them hitting the market (new in box). The asking price was often $750, with low points around $600 – still a bit high for my frugal self.

Then a friend of mine did the plan upgrade dance and had a Pixel 10 he was willing to let go for a bit less than the typical asking price. So while I didn’t need an upgrade, now I’m rocking the Pixel 10.

Above is an animation that compares the 7 vs. the 10. The specifications say the 10 is slightly heavier than the 7, but the phone itself is a tiny bit smaller. Notable are the more pronounced rounded corners. The 10 feels smaller in my hand; it’s still a giant phone, as all phones seem to be today, but something about the shape makes it feel smaller.

The internals are all newer, better, faster. The 10 has a laundry list of improvements over the 7: A 5x optical telephoto camera; Satellite SOS service; 12GB RAM; updates until 2032.

If you want more comparisons, I found this article, which runs down the 10 vs. many of the previous models.

I was able to move my old Pixel 7 along for a fair price; it turned out the person who bought it from me was someone I used to work with. Nice.

Over the 2+ years I used it, it only accumulated 293 charge cycles – and the battery should be good for 80% capacity at 800 cycles. It’s still got more than a year of Google support and after that you can always go to GrapheneOS or LineageOS.

Back to talking about the Pixel 10. Let’s start with the Google AI summary of the upgrade from the 7 to the 10.

Upgrading from the Pixel 7 to the Pixel 10 (2025) offers significant improvements in battery life, display brightness (3000+ nits), and processing power (Tensor G5), largely addressing the heat and battery degradation issues found in older models. The Pixel 10 features a smoother 120Hz display, faster charging, and 12GB of RAM compared to the 7’s 8GB.

As usual, I grabbed a generic bumper case off Amazon, which came with a screen protector. I particularly like the cases that have extra bumps on the corners, both for fall protection and because I find them handy for gripping the phone.

Unfortunately the screen protector didn’t work with the fingerprint sensor. The 7 used an optical sensor, whereas the 10 uses an ultrasonic one, and the generic screen protector was too thick for it to work through. Google has introduced a ‘made for Google‘ certification for screen protectors – boo. While face unlock worked, I’m not sure I wanted to be stuck with that or a password. I broke down and ordered a Spigen certified screen protector: $24.99, which turned into almost $30 after tax, but with next-day delivery.

The Spigen protector worked without issue, and the kit I ordered actually included two protectors, so the price seemed a bit more worth it. The installation is a bit funky, but worked just like their video describes. I did need to use the squeegee tool to remove the bubbles, but it went smoothly.

Aside from the small issue with a generic screen protector and fingerprint recognition, adopting the new phone was straightforward. The migration process from the 7 to the 10 went smoothly, and even the eSIM just magically moved over. All in all it was an easy upgrade.

A few weeks into using it, I’m still pleased with the upgrade. I keep finding reviews that say all of the things better than I could.

Update: NixOS + Docker + MacVLAN

This is an updated version of my previous article NixOS + Docker with MacVLAN (IPv4), and it addresses ARP Flux, which was causing networking problems. Skip to the summary at the bottom if you just want the conclusion.

Ever since my new server build-out, I had been having very intermittent issues with my MacBook (OSX) randomly losing its connection to the server, but only for a brief time. I noticed it mostly when using mosh, which would indicate that I’d lost the connection, but self-hosted web UIs like Grafana also gave me problems.

Ping would also fail, but then it would start to work again after a minute or so. What was weird is that while I could not ping my new server (192.168.1.79), I could have a stable mosh connection to another box (192.168.1.140), which could see the new server (192.168.1.79) just fine. This made it seem very much like something was wrong with OSX. I was guilty of mostly ignoring this because, while it did impact me, it wasn’t frequent – it was still very annoying.

This is also an exploration of using Google Gemini, which you can use from an incognito tab with no login required. It didn’t immediately get me to the right answer, but we eventually got there.

The first iteration didn’t really get me very far, only a suggestion to check the route on my OSX machine when the problem was happening. This meant I had to wait around for it to break – and catch it in the act.
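The article doesn’t reproduce the command itself; on macOS, checking the route to the server would look something like this (IP from the article):

```shell
# Inspect the route macOS has for the server's IP, including the
# interface and any cached link-layer info.
route -n get 192.168.1.79

# Dump the full IPv4 routing table for comparison.
netstat -rn -f inet
```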

Unfortunately the output from this didn’t change with broken networking vs. working. At this point I added the detail that this networking problem seemed to happen most often after waking the OSX machine from sleep. Gemini was then able to reason:

In your route output, the flag LLINFO stands for Link Layer Information. This means macOS has a cached MAC address for 192.168.1.79.

When a Mac sleeps, it often hands off its networking responsibilities to a “Sleep Proxy” (like an Apple TV or HomePod) or simply drops its ARP table. When it wakes up, it might be trying to use a stale hardware address, or the Linux machine (myserver.lan) might not be responding to the Mac’s “Who has 192.168.1.79?” request fast enough.

The next suggestion was to check the arp information when networking was broken.

  • If it says (incomplete), the Mac is sending requests but getting no answer.
  • If it shows a MAC address, compare it to the actual MAC address of the Linux box. If they match, the physical path is clear but the packets are being dropped higher up.
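On macOS, that ARP cache check is along these lines (a sketch; the server IP is from the article):

```shell
# Look up the cached ARP entry for the server while the problem is occurring.
arp -n 192.168.1.79

# Compare against the whole neighbour table.
arp -a
```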

This got me looking at my WatchYourLan deployment; it’s a network monitoring tool that I use to alert me to new devices (based on MAC addresses) on my network.

To my surprise – I found another MAC 82:d4:9a:8f:06:40 for myserver.lan, one that I didn’t expect to see. Gemini called this a smoking gun, as it is a Locally Administered Address (LAA) – also known as a generated MAC. Gemini also concludes that this may be a Docker issue, which somewhat confirms what I was thinking. The advice from Gemini is getting more specific now, but I also find that in general this is where AI generated output can go off the rails so caution is required.

The Pattern: You have a “Bridge” problem

Seeing four different randomized MACs for a single IP (192.168.1.79) on a wired Linux machine is a classic symptom of Docker Userland Proxy or Bridge-utils misconfiguration.

When you run containers or virtual machines in “Bridge Mode,” the Linux kernel creates a virtual switch (br0, docker0, etc.).

  • The physical NIC (eth0) has its own hardware MAC.
  • The Bridge (br0) often generates its own MAC (like one of these A6 or 52 ones).
  • Each container gets a virtual interface (veth) with a randomized MAC.

I then remember to add the detail that this Linux machine is a NixOS installation, and Gemini comes back with

NixOS with a complex Docker setup is the “perfect storm” for this. NixOS tends to be very explicit about interface creation, and Docker’s default bridge behavior often clashes with how NixOS manages its own networking stack.

It suggests a “ghost hunter” command to identify bad MACs

If these “ghost” MACs are responding to ARP requests on your physical LAN (a phenomenon called ARP Flux), it’s usually because the Linux kernel is being too “helpful” by answering ARP requests for an IP on any interface, even if the request came in on a different one.
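The exact “ghost hunter” command isn’t reproduced here; on the Linux side, something like the following enumerates interface MACs and shows which MAC is actually answering ARP (the interface name enp3s0 is the one from my config):

```shell
# List every interface with its state and MAC address.
ip -br link show

# Watch ARP traffic for the server's IP to see which MAC replies.
tcpdump -n -i enp3s0 arp and host 192.168.1.79
```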

It also suggests a change to my configuration.nix file, which in retrospect was pretty close to the final solution – but not quite. At this point I’m pretty impressed with Gemini being able to help with NixOS specifics. It also asks the question

Does your NixOS config happen to have any bridge interfaces (networking.bridges) or macvlan setups defined manually?

Why, yes – yes I do, I reply. Gemini then fires back:

This macvlan-shim is the “smoking gun” for your macOS networking issues.

Why this breaks your Mac’s connection
By default, Linux is very “promiscuous” with ARP. When your Mac sends an ARP request asking “Who is 192.168.1.79?”, the Linux kernel sees that request on enp3s0. Because the macvlan-shim is bridged to that same physical card, the kernel might answer the Mac using the macvlan’s generated MAC address instead of the physical NIC’s hardware address.

This is known as ARP Flux. Your Mac gets confused because it sees multiple MAC addresses (the “ghosts” we found earlier) all claiming to be the same IP or living on the same segment. When the Mac wakes from sleep, it’s a race: if the “shim” MAC wins the ARP race, the Mac sends traffic to a virtual interface that isn’t configured to handle the host’s primary traffic.

It sure likes the term “smoking gun”, but it feels like we’re close to identifying the real root problem in my setup. I had previously made some changes to the way I deployed the various docker containers on my macvlan network to set static MAC addresses, but had not yet figured out how to give the shim IP a static MAC address. The evidence I have and the advice from Gemini are starting to align. Odd that I got to this configuration from a previous setup using Ubuntu which didn’t have this issue – or maybe that was just luck due to timing.

I asked a lot of clarifying questions about the solutions it was proposing, which helped me iterate towards a more complete understanding of the solution and also generated some options. A minimalist solution was suggested, but it felt like a lot of ‘trust in the force’:
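The minimal suggestion amounted to tightening the kernel’s ARP behaviour so an interface only answers for addresses it actually owns. A sketch of that kind of change (the exact snippet isn’t shown here; arp_ignore/arp_announce are the standard knobs for this):

```nix
{
  boot.kernel.sysctl = {
    # Reply to ARP only if the target IP is configured on the receiving interface.
    "net.ipv4.conf.all.arp_ignore" = 1;
    # Always use the best local address when sending ARP requests.
    "net.ipv4.conf.all.arp_announce" = 2;
  };
}
```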

If you added this to my previous article on NixOS + Docker + MacVLAN you’d probably be fine. However, here is the more complete solution I ended up using:

Again, if you compare with the original article you can see that I’ve added a specific MAC address for the shim, and used a naming convention to make the last digits match the IP address. The big change is the boot.kernel.sysctl settings, which are very similar to the minimal setup above.

I also use the new MAC address numbering scheme for each of my macvlan containers – assigning them 06:00:00:00:00:XX where XX is the IP. Very handy to see they are the right assignment.
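As an illustration of that convention, here is a tiny hypothetical helper (name and exact digit mapping are my own); it assumes the last MAC octet simply re-uses the digits of the IP’s last octet, which only produces a valid MAC for octets 0–99 (whose digits are hex-safe):

```shell
# mac_for_ip: hypothetical helper illustrating the 06:00:00:00:00:XX scheme,
# where XX re-uses the decimal digits of the IP's last octet.
mac_for_ip() {
  last="${1##*.}"                       # keep only the last octet
  printf '06:00:00:00:00:%02d\n' "$last"
}

mac_for_ip 192.168.1.65   # prints 06:00:00:00:00:65
```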

Now this had some interesting side-effects. The shim IP continued to offer up a stale generated MAC address. I was able to fix this by forcing a recreate (but I suspect a reboot may have solved the problem).

A few other things also broke due to this more restrictive ARP control. Previously a container living on the macvlan network (like my nginx for local web services – on IP 192.168.1.65) could see my server (192.168.1.79). The myNewNet-shim provided visibility from the host to the nginx (192.168.1.65).

In this new no ARP Flux world, I have to use the shim address (192.168.1.67) in order to connect to my server (192.168.1.79) – so everywhere I reference x.x.x.79 I now needed to use x.x.x.67.

[Edit: after running like this for a bit, I realized I was having the same ARP Flux issue – this section is bad advice Gemini gave me]

I’ve realized that this makes my wireguard setup a bit more annoying: if I use wireguard to connect home from remote and want to reach my server, I need to use the shim address instead of the local DNS name that maps to the real address. This was too big a trade-off, so I went back to Gemini and had a long discussion (with a lot of dead ends) to arrive at a solution that adds back the visibility of the host (192.168.1.79) from the macvlan containers (say 192.168.1.64 for example).

Now using wireguard, I can see the full local network – host included.

[Edit: the above fix didn’t work, I still had ARP Flux problems. The simplest solution was simply to inject an ip route add 192.168.1.79 via 192.168.1.55 into each of the macvlan containers. Since many of my containers are linuxserver.io based, I can use the custom script pattern they support]
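For reference, a linuxserver.io custom init script is just a shell file mounted into the container; a sketch using the addresses above:

```shell
#!/bin/bash
# Place this in a host directory mounted at /custom-cont-init.d/ inside the
# container; linuxserver.io images run these scripts at container startup.
# Route traffic for the host's real IP via the shim address.
ip route add 192.168.1.79 via 192.168.1.55
```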

Less ARP chaos is a good thing. Gemini did suggest that instead of declaring the route for the shim with a CIDR block that includes the shim address, I could be even more specific and simply have a route per IP address:

The key difference here is using /32 instead of /30. Being specific saves the Linux kernel from having to figure out what to do with the shim address (192.168.1.67), but it seems to do the right thing anyway, so I’ve gone with the simpler declaration. I’ve also got a special route in there for wireguard addresses so my host can see the wireguard clients directly.
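A sketch of the /32 variant Gemini suggested (interface name and addresses as used elsewhere in this post; I stuck with the simpler /30 declaration myself):

```nix
{
  # One explicit host route per macvlan container IP, instead of a
  # covering /30 block that also contains the shim address.
  networking.interfaces.myNewNet-shim.ipv4.routes = [
    { address = "192.168.1.64"; prefixLength = 32; }
    { address = "192.168.1.65"; prefixLength = 32; }
  ];
}
```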

Verifying the fix can be done on the OSX machine.

We want to review all of the MAC addresses to make sure we have the expected ones that follow the pattern 06:00:00:00:00:xx. If all goes well, this is the end of the ghosting problem (ARP Flux) that will cause the annoying interruption in networking from my OSX machine to the server.
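Concretely, the check on the Mac is along these lines (a sketch):

```shell
# The server's IP should now always resolve to the same, expected MAC.
arp -n 192.168.1.79

# And the wider neighbour table should show no stray randomized MACs.
arp -a | grep '192.168.1'
```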

In summary – the full docker macvlan setup on NixOS – this is basically a brief re-telling of the original post, with all of the updates above merged in.

First enable docker support on NixOS. You need a single line added to your /etc/nixos/configuration.nix
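That line is the standard NixOS option for Docker:

```nix
virtualisation.docker.enable = true;
```

Then apply it with `sudo nixos-rebuild switch`.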

Create your docker macvlan network.
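A sketch of the command (the network name matches the shim naming used above and is my assumption; subnet, gateway, and parent NIC are the values that appear in this post – adjust for your LAN):

```shell
# Create a macvlan network bridged onto the physical NIC so containers
# get their own IPs directly on the LAN.
docker network create -d macvlan \
  --subnet=192.168.1.0/24 \
  --gateway=192.168.1.1 \
  -o parent=enp3s0 \
  myNewNet
```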

Docker will persist this network configuration across reboots.

Now we need to modify /etc/nixos/configuration.nix to fix routing to/from our macvlan network IPs from the host – and avoid causing ARP Flux.
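The exact configuration lives in the original post; a sketch of the shape it takes, assuming the standard networking.macvlans / networking.interfaces options and the names and addresses used above:

```nix
{
  # Shim macvlan interface bridged to the physical NIC, so the host can
  # reach containers on the macvlan network.
  networking.macvlans.myNewNet-shim = {
    interface = "enp3s0";
    mode = "bridge";
  };

  networking.interfaces.myNewNet-shim = {
    macAddress = "06:00:00:00:00:67";   # static MAC matching the shim IP (.67)
    ipv4.addresses = [
      { address = "192.168.1.67"; prefixLength = 32; }
    ];
    ipv4.routes = [
      # Covering block for the macvlan container IPs (includes the shim address).
      { address = "192.168.1.64"; prefixLength = 30; }
    ];
  };

  # Avoid ARP Flux: answer ARP only on the interface that owns the address.
  boot.kernel.sysctl = {
    "net.ipv4.conf.all.arp_ignore" = 1;
    "net.ipv4.conf.all.arp_announce" = 2;
  };
}
```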

This was a bit of a journey, but if you were just looking for a clean way to get macvlan networks working with NixOS and Docker, hopefully it is presented in a way that is straightforward to follow. We also touched on using AI to help explore solutions – I encourage you to use it as a tool rather than a magic eight ball. Along the way many wrong answers were presented, but by asking for more details and for ways to test the assumptions and changes, I was able to learn more and arrive at what I think is a good solution.

Re-encoding/transcoding video for space savings

There are lots of good reasons to run a media server at home to host your own content. There are also plenty of legitimate ways to get content, but also many that are less so. I think it is important to pay creators for their efforts; I also really like “owning” the media I enjoy and not paying repeatedly to re-watch things.

Sometimes you’ll end up with a version of the content (say, captured OTA) that is high quality but huge. An hour-long 1080p HDTV capture from my HD Homerun lands at around 7.7GB. A more efficient encoding can significantly drop that; of course, these are lossy encodings, meaning the quality can suffer. The trick is to figure out how to pick an encoding of the video that retains most of the quality but is significantly smaller.

Different codecs have different approaches, all with trade-offs, but sometimes it’s really hard to see the differences. Many of the content creators are also faced with the same dilemma, they have high bit-rate master copies of the content and need to squeeze it down to fit on a DVD or Blu-ray.

Let’s dive into how we do this. For this example we’ll take a 45-minute TV show that starts at 1.9G. It is encoded in AVC1/H.264 at 6000 kb/s. The image quality is very good, but it’s also sort of big, and maybe you don’t care that much about this particular TV series but still want to retain a copy.

FFmpeg can fix this for us. We’ll be moving to H.265 as an encoding, burning a bunch of CPU cycles to get there, and ideally get a video that is substantially the same but uses less storage.
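The invocation would look something like this (filenames and the exact crf value here are my own illustration):

```shell
# Re-encode the video stream to H.265 (libx265) and copy the audio unchanged.
# -crf 24 is an illustrative value: lower means higher quality and a larger file.
ffmpeg -i input.mkv -c:v libx265 -crf 24 -c:a copy output.mkv
```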

You can see we’ve told it to re-encode to the H.265 codec, but to simply copy the audio. The crf flag is important for maintaining quality: lower values mean higher quality, higher values less. If you don’t supply the flag, a default of 28 is used, resulting in more space savings but, to my eyes, a softness in the output.

That is a significant space savings: the new file is only 27% of the original – more than 3 times smaller! Sure, it’s small, but is it any good?

Let’s look at some stats. While we have maintained the 1920×1080 resolution, we’ve dropped the bitrate to 1510 kb/s. This explains a lot of the savings: we’ve reduced the number of bits used to encode each frame of the image.

We will use ffmpeg to extract a few frames of the two videos for comparison. The use of the .png format will give us “lossless” images (but recall the video itself has been encoded in a lossy format).
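A sketch of the extraction (filenames are placeholders):

```shell
# Grab a single frame at 4:10 from each file as a lossless PNG
# (the video itself is lossy; the PNG just avoids adding more loss).
ffmpeg -ss 00:04:10 -i original.mkv -frames:v 1 original.png
ffmpeg -ss 00:04:10 -i smaller.mkv -frames:v 1 smaller.png
```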

Above are the frames from 4:10; the first/top is from the original, and the second/bottom is from the smaller version. You can click through to view the original 1920×1080 image. Visually they seem identical.

Using the idiff tool we can create a difference image and get data about how different these two images are from each other.
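The idiff invocation is along these lines (idiff ships with OpenImageIO; filenames are placeholders):

```shell
# Compare the two frames, write a difference image, and print
# per-channel error statistics.
idiff original.png smaller.png -o diff.png
```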

Ok, so they are different but we knew that. What does the difference image look like?

Yup, just a big black box without a lot of obvious differences; this agrees with what we can see ourselves. Let’s tweak the idiff command to highlight the differences by a scale of 20.
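The scaled comparison would look something like (again a sketch):

```shell
# Amplify pixel differences 20x so they become visible in the output image.
idiff original.png smaller.png -scale 20 -o diff-scaled.png
```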

Now we can see the small differences magnified, and it’s in areas you’d expect, around the edges of objects and in the small details.

Let’s look at another frame using the same approach.

Again, the original is the first, and the smaller is the second. Let’s do the diff, but only grab the scaled version.

Again, in the scaled version of the diff image we see that the fine details are where the very small differences are. Try opening the full images, one in each tab, and toggling between them – can you see any changes? Yes, we know there are differences – the smaller file is 1/3 of the size – but is this enough to matter? Or even notice?

I’m going to continue to use MakeMKV to rip blu-rays of movies I consider ‘reference material’ – like Dune, because I want all the bits. However, if it’s just some TV show I captured OTA and I’m going to watch once, or at most a handful of times – I’ll take the space savings.