Forgejo – self-hosted git server

Source control is really great, and very accessible today. Many developers start with just a pile of files and no source control, but soon enough there's a disaster and they wish they had a better story than a bunch of copies of their files. In the early days of computing, source control was a lot more gnarly to set up – now we have things like GitHub that make it easy.

Forgejo brings a GitHub-like experience to your homelab. I can't talk about Forgejo without mentioning Gitea. I've been running Gitea for about the last 7 years; it's fairly lightweight and gives you a nice web UI that feels a lot like GitHub, but it's self-hosted. Unfortunately there was a break in the community back in 2022, when Gitea Ltd took control – this did not go well, and Forgejo was born.

The funny thing is that Gitea originally came out of Gogs. The Wikipedia page indicates Gogs was controlled by a single maintainer, and Gitea as a fork opened up more community contribution. It's unfortunate that open source projects often struggle, either due to commercial influences or friction between the various parties involved. Writing code can be hard, but working with other people is harder.

For a while Forgejo was fully Gitea compatible; that changed in early 2024 when they stopped maintaining compatibility. I only became aware of Forgejo in late 2024, but decided Gitea was still an acceptable solution for my needs. It was only recently that I was reminded about Forgejo and re-evaluated whether it was finally time to move (yes, it was).

Forgejo has a few installation options; I unsurprisingly selected the Docker path. I opted to use the rootless Docker image, which may limit some future expansion such as supporting Actions, but I have basic source control needs and I can always change things later if I need to.

My docker-compose.yml uses the built-in sqlite3 DB but, as mentioned above, uses the rootless image which is a bit more secure (see the sketch below).

As I'm based on NixOS, my gid is 100, not the typical 1000. I also had to modify my port mapping to avoid a conflict with Gitea, which is already using 3000.
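
Something along these lines works. This is a minimal sketch, not my exact file – the image tag and the host-side ports (3030 for the web UI and 2222 for ssh here) are illustrative, so check them against the current Forgejo docs:

```yaml
services:
  forgejo:
    image: codeberg.org/forgejo/forgejo:12-rootless  # pin to the current stable major
    restart: always
    # uid:gid - NixOS puts my user in gid 100 rather than the usual 1000
    user: 1000:100
    environment:
      - FORGEJO__database__DB_TYPE=sqlite3
    volumes:
      - ./data:/var/lib/gitea
      - ./config:/etc/gitea
    ports:
      - "3030:3000"   # web UI - host port moved off 3000 to avoid Gitea
      - "2222:2222"   # ssh
```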

Now, the next thing I need (ok, want) to do is configure nginx as a reverse proxy so I can give my new source control system a hostname instead of having to deal with port numbers. I actually run two nginx containers – one based on swag for the internet-visible web, and another nginx for internal systems. With a more complex configuration I could use just one, but having a hard separation gives me peace of mind that I haven't accidentally exposed an internal system to the internet.

I configured a hostname (forge.lan) in my openwrt router, which I use for local DNS. My local nginx runs with a unique IP address thanks to macvlan magic. If I map forge.lan to the IP of my nginx (192.168.1.66), then the nginx configuration is fairly simple; I treat it like a site, creating a file config/nginx/site-confs/forge.conf that looks like:
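
A sketch, adapted from the Forgejo reverse-proxy documentation and assuming the container's web UI is published on host port 3030 as in the compose file above:

```nginx
server {
    listen 80;
    server_name forge.lan;

    location / {
        client_max_body_size 512M;
        # 192.168.1.79 is the docker host running the forgejo container
        proxy_pass http://192.168.1.79:3030;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```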

Most of this is directly from the forgejo doc on reverse proxies. When my nginx gets traffic on port 80 for a server named forge.lan, it will proxy the connection to my main server (192.168.1.79) running the forgejo container.

With this setup, we can now start the docker container:
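
With the compose file from above in place, it's the standard one-liner:

```bash
docker compose up -d
```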

And visit http://forge.lan to be greeted by the bootstrap setup screen. At this point we can mostly just accept all the defaults, because it should self-detect everything correctly.

When we interact with this new self-hosted git server, at least some of the time we'll be on the command line. This means we'll want to use an ssh connection so we can do things like this:
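
(The user and repository names here are placeholders – substitute your own:)

```bash
git clone git@forge.lan:myuser/myrepo.git
```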

There is a problem here. If you recall, the webserver (192.168.1.66) is not at the same IP as the host (192.168.1.79) running my forgejo container. Since I want the hostname forge.lan to map to the webserver IP, I've introduced a challenge for myself.

When I hit this problem with Gitea, my solution was simply to switch to using my swag-based public-facing webserver (which runs on my main host IP) and use a deny list to prevent anyone from getting to Gitea unless they were on my local network. This works, but meant I had some worry that one day I'd mess that up and expose my self-hosted git server to the internet. It turns out there is a better way: nginx knows how to proxy raw TCP streams, including ssh connections.

This stackexchange post pointed me in the right direction; it's simply a matter of adding a stream configuration to your main nginx.conf file.
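
The stream block lives at the top level of nginx.conf, outside the http block. A sketch, assuming the container's ssh port is published on 2222 as in the compose file above:

```nginx
stream {
    upstream forgejo-ssh {
        # the docker host running the forgejo container
        server 192.168.1.79:2222;
    }

    server {
        # the macvlan nginx has its own IP, so port 22 is free here
        listen 22;
        proxy_pass forgejo-ssh;
    }
}
```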

After restarting nginx, I can now perform ssh connections to forgejo. This feels pretty slick.

I then proceeded to clone my git repos from my Gitea server to my new Forgejo server. This is a bit repetitive, and to avoid too many mistakes I cooked up a script based on the pattern sketched below. Oh yeah, and I did need to enable push-to-create on Forgejo to make this work.
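
Something along these lines – the usernames and the gitea.lan hostname are placeholders for your own servers:

```bash
#!/usr/bin/env bash
# Usage: ./migrate.sh https://gitea.lan/myuser/some-project.git
set -euo pipefail

SRC_URL="$1"
# The project name is the last element of the URL path
PROJECT=$(basename "$SRC_URL" .git)

# --mirror grabs everything: all branches, tags, and other refs
git clone --mirror "$SRC_URL" "$PROJECT.git"

# Push it all to the new server; push-to-create handles repo creation
git -C "$PROJECT.git" push --mirror "git@forge.lan:myuser/$PROJECT.git"

# Clean up the temporary mirror clone
rm -rf "$PROJECT.git"
```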

This script takes a single parameter: the URL of the repository on the source (Gitea) server. It then strips the URL down to the project name, which becomes the directory for the clone. We use the --mirror option to tell git we want everything, not just the main branch.

Using this I was able to quickly move all 29 repositories I had, and my full commit history came along just fine. I did lose the fun commit graph, but apparently if you aren't afraid to do some light DB hacking you can move it over from Gitea, as the formats are basically the same; the action table is the one you want to migrate. I'm ok with my commit graph being reset.

You also don't get anything migrated outside of the git repos themselves; issues, for example, will be lost. For me this isn't a big deal – I had 3 issues, all created 4 years ago. If you want a more complete migration you might investigate this set of scripts.

The last thing for me is to work through every place I've got one of those repositories checked out, and change the origin URL. For example, consider my nix-configs project:
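
(The checkout path and user name are placeholders:)

```bash
cd ~/src/nix-configs
# Point origin at the new forgejo server
git remote set-url origin git@forge.lan:myuser/nix-configs.git
# Confirm the change took
git remote -v
```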

At this point I'm fully migrated and can shut down the old Gitea server.

If you're interested in a Forgejo setup that has Actions configured and is built for a bit more scale, check out this write-up.

openmediavault – Basic Installation with ZFS

With my recent server upgrades (wow, was I lucky to mostly avoid the RAMpocalypse) it is now time to deprecate my oldest running server. Until recently this was my local backup server, and amazingly it's still doing just fine; I hope I can find it a new home. I doubt its next owner wants my NixOS complexity for a basic NAS device, so I looked at operating systems I could install to give them a running start. TrueNAS was a good candidate, but it has fairly high hardware requirements. I then found openmediavault, which will happily run on much more modest hardware, and while ZFS isn't supported out of the box, it's easy to get there.

To get started, go grab the latest ISO image for openmediavault, then burn it to a USB drive so you can boot from it.
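
On Linux that's roughly the following – the ISO filename and device name are placeholders, so double-check which device is your USB stick before running dd:

```bash
# Write the ISO to the USB stick (destructive - verify /dev/sdX first!)
sudo dd if=openmediavault.iso of=/dev/sdX bs=4M status=progress
# Flush buffered writes to the stick before pulling it
sync
```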

Initially I was missing the second command (sync), and this tripped me up for about an hour. I did think it was suspicious that the 1.2GB file wrote to the USB stick so fast, but I figured, it's only 1.2GB. As a result I pulled the USB stick before it had finished writing the data – and it failed silently. That got me partway into the install before hitting an error something like:

There was a problem reading data from the removable media. Please make sure that the right media is present

Ugh. I tried multiple versions, including a plain old Debian 13 install. Silly me. When I finally ran with the sync, it took a few minutes to finish instead of seconds.

Now armed with a valid bootable USB stick, it's time to follow the openmediavault (OMV) new user guide. Stick with it through to the first boot on your new hardware, at which point you can visit the WebUI to manage things. When OMV boots, it prints the IP address to the console to help you connect to the WebUI.

During the install you are prompted to create a password; this password is for the root user. It is different from the WebUI admin password, which by default is “openmediavault”. Take the time now to change that password by clicking on the user icon in the upper right and selecting “Change Password”.

Now is a great time to explore the WebUI, and run any updates that are pending. It’ll give you a feel for what OMV is like. If you only have a single data drive, or just want to completely follow the new user guide – that’s fine, you can stop reading here and go explore.

In this blog post we will dive into enabling ZFS on OMV. If you haven't already done so, we need to install the extras repository – following this part of the guide. Access the new OMV installation using ssh, log in as root, and do a scary wget piped to bash.
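
At the time of writing the omv-extras one-liner looks like this, but verify it against the guide before piping anything into bash:

```bash
wget -O - https://github.com/OpenMediaVault-Plugin-Developers/packages/raw/master/install | bash
```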

Once this is done, we will have access to more plugins, including the ZFS support.

We need to change the kernel we are running to one that will safely support ZFS. To manage kernels we need to install the “openmediavault-kernel 8.0.4” plugin. Do this via the System -> Plugins menu, searching for “kernel”.

Select the plugin, then click on the install icon.

Now we have the System -> Kernel menu option, allowing us to manage kernels. As of the date of this post, we want to install the recommended Proxmox 6.17 kernel.

Start by clicking on the icon I've indicated with the red arrow, then select the 6.17 version and the install will run; this might take a bit of time.

Very important. Once it is done, reboot.

Post reboot, I checked Diagnostics -> System Information to confirm I was now running the new kernel. However, revisiting System -> Kernel, the new kernel was not marked as the default, so I fixed that. I also took the opportunity to remove the old kernels I was not using, to avoid future problems or confusion.

The next step is to install the ZFS plugin via the System -> Plugins menu. You can follow the documentation for OMV7, but I’ll outline the steps here. Search for ZFS and install the plugin.

At the end of the install you are likely to get a “Connection Lost” error. This is OK; just do a hard refresh of the browser window. In Firefox on macOS this is Command-Shift-R – you may need to look up how to do it in your browser.

Under Storage we will now have a zfs entry. This will allow us to create a storage pool, and it will automatically mount it as well. You may want to read up a bit on ZFS; I'll suggest starting with my post on it.

Before we create a pool, we'll want to look at the available drives under Storage -> Disks. In my setup I have four 1TB drives: /dev/sdb, /dev/sdc, /dev/sdd, and /dev/sde. You may want to do a ‘quick’ wipe of these devices in case any old ZFS headers exist on the drives, which could derail the creation step later.

Flipping back to Storage -> zfs -> Pools, we will add a pool.

This is fairly straightforward but does encompass several ZFS concepts. I did find that after hitting save, I experienced a timeout (maybe my machine is slow?); the plugin doesn't accommodate slow responses. Despite the error, the ZFS filesystem was created and mounted, and appears under Storage -> File Systems.
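
For reference, the WebUI is doing roughly the equivalent of the following at the command line – the pool name and the raidz level are illustrative; whatever you chose in the dialog applies:

```bash
# Create a raidz1 pool named "tank" across the four drives
zpool create -o ashift=12 tank raidz1 /dev/sdb /dev/sdc /dev/sdd /dev/sde
# Verify the pool is healthy and mounted
zpool status tank
zfs list
```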

We're close: we have a nice resilient filesystem on a server, but we don't yet expose any of it to make this Network Attached Storage. Let's next create a user, which will make it easy to grant read/write access to remote files.

Then we will create a network share, which declares our intent to expose a given filesystem through one of the services that provide network visibility. Visit the Storage -> Shared Folders menu and we will create a share.

This process creates a sub-folder on the ZFS filesystem we created. We can check this by logging in over ssh and looking at the mount point.
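
Something like this, where the pool and share names are whatever you picked above:

```bash
# The pool mounts at /<poolname> by default; the shared folder sits beneath it
ls -ld /tank/share1
df -h /tank
```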

During some of this configuration, OMV will prompt you to apply the changes. Each time it prompts about pending changes, hit the check mark (assuming you want to keep them). Clicking on the triangle will show you the list of pending changes.

The last step to make this file share visible on the network is to pick a service to offer it up; we will pick SMB/CIFS, as it is one of the most common and compatible options.

First enable the service in the settings.

Then create the share.

I will reference the OMV8 documentation as a good guide to creating this share, specifically the set of options you want to enable, which don't easily fit into a single screenshot.

At this point we have a Samba share on the network. On macOS you can connect using Finder by hitting Command-K, entering smb://omv.lan (or whatever your machine is named), and then providing the user credentials defined above.

There is lots more you can do with OMV: set up a dashboard, enable SMART monitoring, add more plugins. If you add the docker (compose) plugin, this unlocks so many more possibilities. There is reasonable documentation and a forum, so lots of places to seek out help.

NixOS 25.11 – How to Upgrade

Recently the NixOS 25.11 release was announced. As my new server is based on NixOS, it was time to do an upgrade and stay current. Moving from 25.05 to 25.11 turns out to be fairly straightforward, and you can uplift multiple versions without much pain or risk – I was even able to update directly from 24.11 to 25.11 in a single step on an old hacky desktop I was experimenting with.

Upgrading is very simple. First, figure out which version you are on:
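
```bash
# Prints the running release, e.g. something like 25.05.xxxx
nixos-version
```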

Then it's just two commands to do the upgrade:
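
Assuming the small channel (more on that below), the pair looks like:

```bash
# Point the system channel at the new release
sudo nix-channel --add https://nixos.org/channels/nixos-25.11-small nixos
# Pull the new channel and rebuild into it
sudo nixos-rebuild switch --upgrade
```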

A full reboot cycle is a good idea to make sure all is well, but shouldn’t be strictly required. The upgrade instructions are clear and easy to follow.

You'll notice that I've selected the small channel, which is suggested for servers. It will not contain some of the desktop components, but has the benefit of being updated sooner than the “full” channel. Another server I had was on the full 25.11 channel, and I used the same approach above to switch it to the small channel.

NixOS has a very predictable upgrade cadence: twice a year, in May and November. The downside is they deprecate the old version very quickly – 25.05 will stop getting regular updates at the end of December this year. Upgrading is so safe and easy, there really isn't any excuse to be a release behind.

You can run into some trouble if you have a very small boot drive, or haven't done any “garbage collection” of older versions. I would recommend adding the following automatic update and housekeeping tasks to your /etc/nixos/configuration.nix.
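
A sketch of the options I mean – adjust the schedule and retention to taste:

```nix
# Automatically pull and apply updates from the configured channel
system.autoUpgrade = {
  enable = true;
  allowReboot = false;  # set true if unattended reboots are acceptable
};

# Remove old generations weekly so the boot drive doesn't fill up
nix.gc = {
  automatic = true;
  dates = "weekly";
  options = "--delete-older-than 30d";
};

# Deduplicate identical files in the nix store
nix.settings.auto-optimise-store = true;
```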

Depending on what has changed, you may get some warnings when you do the update – typically about a given setting having been renamed or the like. These are magically handled for you, but you should take the time to update your config to use the latest names to prevent future upgrade pain.

While we're on the topic of NixOS configuration, I did learn something about the /etc/nixos/hardware-configuration.nix file. Typically this is set up once during the install, and you can mostly ignore it after that. However, you can re-generate it with nixos-generate-config, which will update that file without overwriting your /etc/nixos/configuration.nix. Handy if you have hardware changes, like adding a USB drive you want permanently mounted.
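
Running it is just:

```bash
# Rewrites hardware-configuration.nix from the currently attached hardware;
# an existing configuration.nix is left untouched
sudo nixos-generate-config
```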

However, as I found out, the nixos-generate-config command can be a “foot gun” and leave you with a system that no longer boots cleanly. The recommendation is to trust but verify: capturing your hardware setup can be nuanced, and you don't want to end up with boot-time problems you only detect much later.