After my recent adventures setting up IoT devices with local only access, I now needed to sometimes be able to talk to those devices when I’m not home. There are plenty of solutions, including setting up SSH tunnels which I’ve done in the past. Wireguard seems like a nice solution and it was high time I had VPN access to my home network.
The linuxserver.io folks have a nicely curated wireguard container with documentation. There are also plenty of good tutorials on installing wireguard. You can even go deeper and build your own, or explore alternatives.
Here is a makefile – based on my template for docker makefiles.
```makefile
#
# wireguard - VPN
# https://hub.docker.com/r/linuxserver/wireguard/
#
# Needs port forward in gateway/router for 51820/UDP
#
NAME = wireguard
REPO = linuxserver/wireguard
#
ROOT_DIR:=$(shell dirname $(realpath $(lastword $(MAKEFILE_LIST))))

# Create the container
build:
	docker create \
		--name=$(NAME) \
		--cap-add=NET_ADMIN \
		--cap-add=SYS_MODULE \
		-e PUID=1000 \
		-e PGID=1000 \
		-e TZ=America/Toronto \
		-e SERVERURL=yourDomain.ca \
		-e PEERS=myPhone,myLaptop \
		-e PEERDNS=9.9.9.9 \
		-p 51820:51820/udp \
		-v $(ROOT_DIR)/config:/config \
		-v /lib/modules:/lib/modules \
		--sysctl="net.ipv4.conf.all.src_valid_mark=1" \
		--restart=unless-stopped \
		$(REPO)

# Start the container
start:
	docker start $(NAME)

# Update the container
update:
	docker pull $(REPO)
	- docker rm $(NAME)-old
	docker rename $(NAME) $(NAME)-old
	make build
	docker stop $(NAME)-old
	make start
```
Once you create the container, go pull the .png files for the QR codes from the config directory; they make it trivial to set up your phone.
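For reference, each peer from the PEERS list gets its own directory under ./config with the generated config and QR code. The layout below follows the linuxserver image’s conventions as I understand them, so verify against your own config directory:

```shell
# Each peer directory holds the peer's config file and its QR code image
$ ls config/peer_myPhone/
peer_myPhone.conf  peer_myPhone.png  presharedkey-peer_myPhone
privatekey-peer_myPhone  publickey-peer_myPhone

# The image also ships a helper that renders a peer's QR code in the terminal
$ docker exec -it wireguard /app/show-peer myPhone
```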
On mobile data, this just works: I can now control the local-only Tasmota devices when I’m away from home, and it’s super easy. What doesn’t work with this setup is accessing other docker containers on the same host as the wireguard container.
I explored a few options to solve this, but it boils down to containers not easily being able to see each other. This bugs me: while I can appreciate the security of isolating containers from one another, if I expose a port on the host to a container, then other containers should be able to see that same port, but they can’t. This means containers actually have less visibility into the host than an external machine does, which seems wrong.
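To see the general shape of the problem, here’s a minimal sketch (throwaway container and network names, not part of my setup) showing that containers on different docker networks can’t reach each other by default:

```shell
# Two isolated user-defined networks with one container on each
$ docker network create netA
$ docker network create netB
$ docker run -d --name a --network netA nginx
$ docker run -d --name b --network netB alpine sleep 300

# Cross-network name resolution and connections fail by default
$ docker exec b ping -c 1 a
ping: bad address 'a'
```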
You can solve the network visibility problem by giving the container a unique IP address on the LAN. Here is a brief recap of creating a macvlan docker network; details can be found in my previous post on this topic.
```shell
$ docker network create -d macvlan -o parent=enp3s0 \
    --subnet 192.168.1.0/24 \
    --gateway 192.168.1.1 \
    --ip-range 192.168.1.64/30 \
    --aux-address 'host=192.168.1.67' \
    myNewNet
```
Now, from the makefile above, all we need to do is add --network myNewNet to the docker flags, update the container, and we’re good to go.
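A sketch of the modified build target, in case it helps. I’ve dropped the -p mapping here, since with macvlan the container’s ports are reachable directly on its own IP rather than being published through the host:

```makefile
# Create the container, attached to the macvlan network
build:
	docker create \
		--name=$(NAME) \
		--network myNewNet \
		--cap-add=NET_ADMIN \
		--cap-add=SYS_MODULE \
		-e PUID=1000 \
		-e PGID=1000 \
		-e TZ=America/Toronto \
		-e SERVERURL=yourDomain.ca \
		-e PEERS=myPhone,myLaptop \
		-e PEERDNS=9.9.9.9 \
		-v $(ROOT_DIR)/config:/config \
		-v /lib/modules:/lib/modules \
		--sysctl="net.ipv4.conf.all.src_valid_mark=1" \
		--restart=unless-stopped \
		$(REPO)
```

Running make update then pulls, rebuilds with the new flags, and restarts the container.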
It’s interesting that the docker ps command doesn’t show as much about the container when it is run in this mode (no port information, even though the ports are still open).
```
CONTAINER ID   IMAGE                   COMMAND   CREATED          STATUS          PORTS     NAMES
434d19122529   linuxserver/wireguard   "/init"   36 seconds ago   Up 35 seconds             wireguard
```
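Since docker ps no longer lists ports, docker inspect with a Go template is a quick way to confirm which address the container picked up from the macvlan ip-range (the address below is just an example):

```shell
# Print the container's IP on each attached network
$ docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' wireguard
192.168.1.64
```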
One thing to keep in mind: if you first set up the container on the docker host without macvlan, you may need to adjust your port forwarding to account for the container’s new IP.
If we want the docker host machine to be able to see this container on the new IP, we need to use that --aux-address to build a network path. This is optional but useful, so it’s worth doing.
The version of Ubuntu I’m using doesn’t ship with rc.local enabled. I started down the path of enabling rc.local, but the further I got, the more it seemed this was the wrong answer. This post talking about rc.local pointed me at cron’s ability to execute commands on reboot. The cron @reboot capability seems like the easy path here; the other choice would be creating a systemd service, which is effectively what the rc.local solution is.
Let’s create a script in /usr/local/bin/macvlansetup, making sure it’s executable.
```shell
$ cat /usr/local/bin/macvlansetup
#!/bin/bash
# Enable macvlan visibility for docker host
#
ip link add myNewNet-shim link enp3s0 type macvlan mode bridge
ip addr add 192.168.1.67/32 dev myNewNet-shim
ip link set myNewNet-shim up
ip route add 192.168.1.64/30 dev myNewNet-shim
```
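The script can also be run once by hand to avoid waiting for a reboot. Afterwards the host should have a working path to the container; the ping below assumes docker assigned the container 192.168.1.64 from the /30 range:

```shell
# Run once by hand, then verify the shim interface and route exist
$ sudo /usr/local/bin/macvlansetup
$ ip addr show myNewNet-shim
$ ip route | grep myNewNet-shim

# The docker host can now reach the container's macvlan IP
$ ping -c 1 192.168.1.64
```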
Then we’ll edit root’s crontab to call this on reboot:
```shell
$ sudo crontab -e
```
Adding the new job:
```shell
# One shot on reboots
@reboot /usr/local/bin/macvlansetup
```
Now we’re set. The wireguard container has a unique IP address and no visibility problems reaching any of my other containers on the same host. The IoT devices can also be seen just fine when I’m remote with the VPN enabled. The one trade-off is a slightly more complicated networking setup.
With the default wireguard settings, this acts like a full-tunnel VPN, meaning all of my device’s network traffic runs over the tunnel. This is useful as a security measure when I’m on an untrusted wifi network: all the traffic flows securely from my device to my home network, then back out again to the internet. In my case, with my pi-hole configured as the DNS server, I also get ad-blocking over the VPN.
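The full-tunnel behaviour comes from the generated peer config, where AllowedIPs defaults to 0.0.0.0/0. Here’s roughly what a peer .conf looks like, with keys elided and addresses illustrative (check your own config/peer_* directory); narrowing AllowedIPs to the home subnet would turn this into a split tunnel:

```ini
[Interface]
Address = 10.13.13.2
PrivateKey = <peer private key>
# DNS follows the PEERDNS setting from the makefile
DNS = 9.9.9.9

[Peer]
PublicKey = <server public key>
Endpoint = yourDomain.ca:51820
# 0.0.0.0/0 sends all traffic through the tunnel (full tunnel);
# 192.168.1.0/24 here would tunnel only home-network traffic (split tunnel)
AllowedIPs = 0.0.0.0/0
```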