Generative AI code assist

I considered using “Vibe Coding” as the title, but it’s just such a buzzword that I decided to go with a more factual title. I’m old school enough to want to distinguish between generative AI and the broader AGI (Artificial General Intelligence). I’ll also state that I consider myself a bit of an AI coding skeptic, but hopefully in a healthy way.

Just like any computer program: garbage in, garbage out. The modern buzzword for this is AI slop. I’ll avoid bashing the technology and focus on how you can use it constructively today, even with some of its limitations. I will also confess that at work I’ve got access to AI for code generation, and it’s been interesting learning a new set of skills; this post will focus on what you can do for free on the web.

Perchance has a free code generator that requires no login. I was in the process of setting up karakeep to replace wallabag. Both of these tools perform a similar function: effectively a web-based bookmark manager with offline capture of a web resource. A simple list of links + archive.org would solve the same problem, but this is a self-hosted solution and is pretty neat.

The task at hand is to figure out how to export all of my links from wallabag and then import them into karakeep; the more context I can preserve, the better. Since there isn’t a common import/export format between the two tools, we’ll use the aforementioned code generator to create something to convert the file.

Luckily both support a JSON based format. I can export from wallabag into JSON. It looks something like this:
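
(Trimmed and illustrative only – your export will contain more fields, and the exact names can vary by wallabag version.)

```json
[
  {
    "title": "An interesting article",
    "url": "https://example.com/interesting-article",
    "tags": ["homelab", "linux"],
    "created_at": "2021-05-04T12:34:56+0000",
    "content": "<p>The captured article body ...</p>"
  }
]
```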

And karakeep supports both import and export in a JSON format.

First we will create a few sample entries in karakeep and do an export to figure out what its format is. It turns out to look something like this:
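
(Again an illustrative sketch rather than exact output – treat the field names like createdAt and the content object as assumptions, and compare against your own export.)

```json
{
  "bookmarks": [
    {
      "createdAt": 1620131696,
      "title": "An interesting article",
      "tags": ["homelab", "linux"],
      "content": {
        "type": "link",
        "url": "https://example.com/interesting-article"
      },
      "note": ""
    }
  ]
}
```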

If you look at the two formats, you can see some obvious mappings. This is good. I started with the perchance code generator and a very simple prompt:
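
(Something along these lines – the exact wording isn’t important, it’s just enough to get some runnable code back.)

```
Write a nodejs script that reads a JSON file of bookmarks and prints the title of each entry.
```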

This let me get my feet wet, and make sure I had my environment set up to run code, etc. I do have reasonable JavaScript experience, and that will help me use the code generator as a tool to move quickly. I tend to think of most of these AI solutions as doing pattern matching: they pick the ‘shape’ of your solution and fill in the blanks. This is also where they will make stuff up – if there is a blank and you haven’t given it enough context, it’ll just guess at a likely answer.

Once I had the code generator creating code, and I was able to test it, things moved along fairly quickly. I iterated forward, specifying the output JSON format, etc., and I was both a bit amazed and pleased to see that it had decided to use the map capability in Node.js. This made the generated code quite simple.

My final prompt ended up being:
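
(Approximately – this is a paraphrase rather than the literal prompt, with the two sample files from above pasted in as context.)

```
Write a nodejs script that converts a wallabag JSON export file into a karakeep
JSON import file. Here is a sample of the wallabag format: ... and here is a
sample of the karakeep format: ...
```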

And this is the code it generated:
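
(A reconstruction in spirit rather than a verbatim listing – the field names carried over from the format sketches above are assumptions.)

```javascript
// Hypothetical sketch: convert a wallabag JSON export into a karakeep import file.
// Field names are assumptions based on the sample exports shown earlier.
const fs = require('fs');

const wallabagEntries = JSON.parse(fs.readFileSync(process.argv[2], 'utf8'));

const bookmarks = wallabagEntries.map((entry) => ({
  // wallabag stores an ISO date string; karakeep wants a unix timestamp (seconds)
  createdAt: Math.floor(new Date(entry.created_at).getTime() / 1000),
  title: entry.title,
  tags: entry.tags,
  content: { type: 'link', url: entry.url },
}));

fs.writeFileSync('karakeep-import.json', JSON.stringify({ bookmarks }, null, 2));
console.log(`Converted ${bookmarks.length} entries`);
```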

Notice anything curious here? It has, without me saying anything, decided to map title and tags into the output object. Very nice – I’m impressed.

Was there any real smartness here? Well, I would not have arrived at the idea of using map in the JavaScript code – it’s the right and elegant solution. A stronger JavaScript developer would likely have landed here, since it is a concise solution to the problem. Maybe I would have found a similar answer on stackoverflow, but the code generator made it easy for me.

The date manipulation is also very slick.

I would have eventually got there, but it just did it for me. A very nice time saver.

This generated JavaScript let me export/import 376 entries from my wallabag, preserving the original dates, tags and titles.

Sometimes working with AI for code is like having a book-smart, and very fast, new hire. No experience, lots of enthusiasm, and cranks out code quickly. Does the code work? Not always, maybe not even often. I’ve also had to ‘reset’ the approach being used, when multiple iterations uncovered that it was basically impossible to solve the problem using the approach that I started with. Using test driven development can help provide guide rails for ‘working / not working’; the more context you can provide, the better. Learning how to guide the AI, and evaluating whether you’re getting what you intended to ask for, are the new skills I’ve been growing.

I feel I do need to throw down some caution flags around AI use. If you’re using something that is ‘free’, think again about why they are making it free to use. An open source project doesn’t automatically mean it’s safe, either. Under the covers, this is all still built out of the same parts – so if you use it to open your network and data to the internet, you’ve got the same security problems.

Interesting times.

Forgejo – self hosted git server

Source control is really great, and very accessible today. Many developers start with just a pile of files and no source control, but you soon have a disaster and wish you had a better story than a bunch of copies of your files. In the early days of computing, source control was a lot more gnarly to get set up – now we have things like GitHub that make it easy.

Forgejo brings a GitHub-like experience to your homelab. I can’t talk about Forgejo without mentioning Gitea. I’ve been running Gitea for about the last 7 years; it’s fairly lightweight and gives you a nice web UI that feels a lot like GitHub, but it’s self-hosted. Unfortunately there was a break in the community back in 2022, when Gitea Ltd took control – this did not go well and Forgejo was born.

The funny thing is that Gitea originally came out of Gogs. The wikipedia page indicates Gogs was controlled by a single maintainer, and that Gitea as a fork opened up more community contribution. It’s unfortunate that open source projects often struggle, either due to commercial influences or the various parties involved in the project. Writing code can be hard, but working with other people is harder.

For a while Forgejo was fully Gitea compatible; this changed in early 2024 when they stopped maintaining compatibility. I only became aware of Forgejo in late 2024, but decided Gitea was still an acceptable solution for my needs. It was only recently that I was reminded about Forgejo and re-evaluated whether it was finally time to move (yes, it was).

Forgejo has a few installation options; I unsurprisingly selected the docker path. I opted to use the rootless docker image, which may limit some future expansion such as supporting actions, but I have basic source control needs and I can always change things later if I need to.

My docker-compose.yml uses the built-in sqlite3 DB, but as mentioned above is using the rootless version, which is a bit more secure.
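
A minimal sketch of that compose file – the image tag, volume paths and host ports are assumptions, so check them against the Forgejo docker documentation:

```yaml
services:
  forgejo:
    # rootless image; pick the current release tag from the Forgejo docs
    image: codeberg.org/forgejo/forgejo:12-rootless
    container_name: forgejo
    user: "1000:100"            # uid:gid - on my NixOS host the group is 100, not 1000
    restart: unless-stopped
    volumes:
      - ./data:/var/lib/gitea
      - ./config:/etc/gitea
    ports:
      - "3030:3000"             # web UI, remapped to avoid a clash with Gitea on 3000
      - "2222:2222"             # ssh
```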

As I’m based on NixOS, my gid is 100, not the typical 1000. I had to modify my port mapping to avoid a conflict with Gitea, which is already using 3000.

Now, the next thing I need (ok, want) to do is configure my nginx as a reverse proxy so I can give my new source control system a hostname instead of having to deal with port numbers. I actually run two nginx containers – one based on swag for internet visible web, and another nginx for internal systems. With a more complex configuration I could use just one, but having a hard separation gives me peace of mind that I haven’t accidentally exposed an internal system to the internet.

I configured a hostname (forge.lan) in my openwrt router, which I use for local DNS. My local nginx is running with a unique IP address thanks to macvlan magic. If I map forge.lan to the IP of my nginx (192.168.1.66), then the nginx configuration is fairly simple; I treat it like a site, creating a file config/nginx/site-confs/forge.conf that looks like:
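
Something like this – the upstream port (3030) matches the hypothetical port mapping in the compose sketch above, so adjust it to whatever you chose:

```nginx
server {
    listen 80;
    server_name forge.lan;

    location / {
        proxy_pass http://192.168.1.79:3030;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        client_max_body_size 512M;   # allow larger git operations over http
    }
}
```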

Most of this is directly from the forgejo doc on reverse proxies. When my nginx gets traffic on port 80 for a server named forge.lan, it will proxy the connection to my main server (192.168.1.79) running the forgejo container.

With this setup, we can now start the docker container:
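
From the directory holding the docker-compose.yml:

```bash
docker compose up -d
```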

And visit http://forge.lan to be greeted by the bootstrap setup screen. At this point we can mostly just accept all the defaults, because it should self-detect everything correctly.

When we interact with this new self-hosted git server, at least some of the time we’ll be on the command line. This means we’ll want to use an ssh connection so we can do things like this:
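
For example (the user and repository names here are placeholders):

```bash
git clone git@forge.lan:chris/nix-configs.git
```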

There is a problem here. If you recall the webserver (192.168.1.66) is not on the same IP as the host (192.168.1.79) of my forgejo container. Since I want the hostname forge.lan to map to the webserver IP, I’ve introduced a challenge for myself.

When I hit this problem with Gitea, my solution was simply to switch to using my swag-based public-facing webserver (which runs on my main host IP) and use a deny list to prevent anyone from getting to gitea unless they were on my local network. This worked, but it meant I had some worry that one day I’d mess that up and expose my self-hosted git server to the internet. It turns out there is a better way: nginx knows how to proxy ssh connections.

This stackexchange post pointed me in the right direction, but it’s simply a matter of adding a stream configuration to your main nginx.conf file.
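
Something along these lines, placed at the top level of nginx.conf (outside the http block); the upstream port 2222 assumes the ssh mapping from the compose sketch above:

```nginx
stream {
    upstream forgejo-ssh {
        server 192.168.1.79:2222;   # host running the forgejo container
    }

    server {
        listen 22;                  # ssh arrives at the nginx IP (192.168.1.66)
        proxy_pass forgejo-ssh;
    }
}
```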

After restarting nginx, I can now perform ssh connections to forgejo. This feels pretty slick.

I then proceeded to clone my git repos from my gitea server to my new forgejo server. This is a bit repetitive, and to avoid too many mistakes I cooked up a script based on this pattern. Oh yeah, and I did need to enable push-to-create on forgejo to make this work.
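
A sketch of that script, with the usernames and hostnames as placeholders for my setup:

```bash
#!/usr/bin/env bash
# Mirror a single repository from the old Gitea server to the new Forgejo server.
set -e

SRC_URL="$1"                              # e.g. git@gitea.lan:chris/some-project.git
PROJECT=$(basename "$SRC_URL" .git)       # strip the path and the .git suffix

git clone --mirror "$SRC_URL"             # --mirror grabs every ref, not just the main branch
cd "$PROJECT.git"
git push --mirror "git@forge.lan:chris/$PROJECT.git"   # requires push-to-create on forgejo
cd ..
rm -rf "$PROJECT.git"
```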

This script takes a single parameter, which is the URL for the source (Gitea) server. It then strips off the project name, which becomes the directory of the project. We are using the --mirror option to tell git we want everything, not just the main branch.

Using this I was able to quickly move all 29 repositories I had. My full commit history came along just fine. I did lose the fun commit graph, but apparently if you aren’t afraid to do some light DB hacking you can move it over from Gitea, as the formats are basically the same. The action table is the one you want to migrate. I’m ok with my commit graph being reset.

You also don’t get anything migrated outside of the git repos; this means issues, for example, will be lost. For me this isn’t a big deal, as I had 3 issues, all created 4 years ago. If you want a more complete migration you might investigate this set of scripts.

The last thing for me is to work my way through every place I’ve got any of those repositories checked out, and change the origin URL. For example, consider my nix-configs project:
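
Along these lines (the user name is again a placeholder):

```bash
cd nix-configs
git remote -v                                    # confirm the old gitea origin
git remote set-url origin git@forge.lan:chris/nix-configs.git
git remote -v                                    # verify the new forgejo origin
```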

At this point I’m fully migrated, and can shut down the old gitea server.

If you were interested in a Forgejo setup that has actions configured and is setup for a bit more scale, check out this write up.

openmediavault – Basic Installation with ZFS

With my recent server upgrades (wow, was I lucky to mostly avoid the RAMpocalypse) it is now time to deprecate my oldest running server. Until recently this was my local backup server, and amazingly it’s still doing just fine; I hope I can find it a new home. I doubt anyone wants my NixOS complexity for a basic NAS device, so I took a look at options to install so the next owner could get a running start. TrueNAS was a good candidate, but it has fairly high hardware requirements. I then found openmediavault, which will happily run on much more modest hardware, and while ZFS isn’t supported out of the box, it’s easy to get there.

To get started go grab the latest ISO image for openmediavault. Then burn that to a USB drive so you can boot from it.
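
On Linux this boils down to a dd followed by a sync; the ISO filename and /dev/sdX below are placeholders, so double-check which device is your USB stick before running it:

```bash
sudo dd if=openmediavault-amd64.iso of=/dev/sdX bs=4M status=progress
sync
```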

Initially I was missing the second command (sync), and this tripped me up for about an hour. I did think that it was suspicious that the 1.2GB file wrote to the USB stick so fast, but I figured that it was only 1.2GB. This caused me to pull the USB stick before it had finished writing the data – and it failed silently. This resulted in me getting partway into the install and then hitting an error something like:

There was a problem reading data from the removable media. Please make sure that the right media is present

Ugh, I tried multiple versions including a plain old Debian 13 install. Silly me. When I finally ran with the sync, it took a few minutes to finish instead of seconds.

Now armed with a valid bootable USB stick, it’s time to follow the openmediavault (OMV) new user guide. Stick with it through to the first boot on your new hardware, which will allow you to visit the WebUI to manage things. When OMV boots, it will output information to the console with the IP address to help you connect to the WebUI.

During the install you are prompted to create a password; this password is for the root user. This is different from the WebUI admin password, which by default is “openmediavault”. Take the time now to go change that password by clicking on the user icon in the upper right and selecting “Change Password”.

Now is a great time to explore the WebUI, and run any updates that are pending. It’ll give you a feel for what OMV is like. If you only have a single data drive, or just want to completely follow the new user guide – that’s fine, you can stop reading here and go explore.

In this blog post we will dive into enabling ZFS on OMV. If you didn’t yet do this, we need to install the extras repository, following this part of the guide. Access the new OMV installation using ssh and log in as root; we’re going to do a scary wget and pipe to bash.
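
The omv-extras install one-liner looks roughly like this; verify the current URL against the guide linked above before piping anything into bash:

```bash
wget -O - https://github.com/OpenMediaVault-Plugin-Developers/packages/raw/master/install | bash
```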

Once this is done, we will have access to more Plugins which include the ZFS support.

We need to change the kernel we are running to one that will safely support ZFS. To manage kernels we need to install the “openmediavault-kernel 8.0.4” plugin. Do this using the menu System -> Plugins and searching for “kernel”.

Select the plugin, then click on the install icon.

Now we have the System -> Kernel menu option allowing us to manage kernels. As of the date of this post, we want to install the recommended proxmox 6.17 kernel.

Start by clicking on the icon that I’ve indicated with the red arrow. Then select the 6.17 version and the install will run; this might take a bit of time.

Very important. Once it is done, reboot.

Post reboot I was able to check Diagnostics -> System Information to confirm that I was now running the new kernel. However, revisiting System -> Kernel, the new kernel was not marked as the default, so I fixed that. I also took the opportunity to remove the old kernels I was not using, to avoid future problems or confusion.

The next step is to install the ZFS plugin via the System -> Plugins menu. You can follow the documentation for OMV7, but I’ll outline the steps here. Search for ZFS and install the plugin.

At the end of the install you are likely to get a “**Connection Lost**” error. This is OK; just do a hard refresh of the browser window. On OSX Firefox this is Command-Shift-R; you may need to look up how to do it with your browser.

Under Storage we will now have a zfs entry. This will allow us to create a storage pool, and it will automatically mount it as well. You may want to read up a bit on ZFS; I’ll suggest starting with my post on it.

Before we create a pool, we’ll want to look at the available drives under Storage -> Disks. In my setup I have four 1TB drives: /dev/sdb, /dev/sdc, /dev/sdd, and /dev/sde. You may want to do a ‘quick’ wipe of these devices in case any old ZFS headers exist on the drives, which could derail the creation step later.

Flipping back to Storage -> zfs -> Pools, we will add a pool.

This is fairly straightforward, but does encompass several ZFS concepts. I did find that after hitting save I experienced a time-out (my machine is maybe slow?), and the plugin doesn’t accommodate slow responses. Despite the error, the ZFS filesystem was created and mounted, and it appears under Storage -> File Systems.

We’re close; we have a nice resilient filesystem on a server, but we don’t yet expose any of this to make it Network Attached Storage. Let’s next create a user, which will make it easy to grant read/write access to remote files.

Then we will create a network share, which declares our intent to share a given filesystem with one of the services that provide network visibility. Visit the Storage -> Shared Folders menu and we will create a share.

This process will create a sub-folder on the ZFS filesystem we created. We can check this by logging in over ssh and looking at the mount point.
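
For example, assuming the pool was named tank and the shared folder data (both placeholders for whatever names you picked):

```bash
ssh root@omv.lan
zfs list                 # shows the pool and its mount point
ls -ld /tank/data        # the sub-folder created for the shared folder
```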

During some of this configuration OMV will prompt you to apply the changes. Each time it prompts to apply pending changes, hit the check mark, assuming you want to keep them. Clicking on the triangle will show you a list of the pending changes.

The last step to make this file share visible on the network is to pick a service to offer it up; we will pick SMB/CIFS as it is one of the most common and compatible options.

First enable the service in the settings.

Then create the share.

I will reference the OMV8 documentation as a good guide to creating this share, specifically the set of options you want to enable which don’t easily fit into a single screenshot.

At this point we now have a Samba share on the network. On OSX you can connect using the Finder app and hitting Command-K, entering smb://omv.lan (or whatever your machine is named) and then providing the user credentials defined above.

There is lots more you can do with OMV. Set up a dashboard, enable SMART monitoring, add more Plugins. If you add the docker (compose) plugin this unlocks so many more possibilities. There is reasonable documentation and a forum, so lots of places to seek out help.