How To: Jenkins with Apache controlled authentication

For a change of pace, I was working with RHEL6 instead of Ubuntu, setting up a Jenkins CI server. I’ve used Jenkins (formerly Hudson) before, but this was my first time setting it up.

A lot of this is straight from the Jenkins wiki, which is detailed and helpful but at times cryptic. The default access mode of Jenkins is pretty much wide open. This is very handy for getting things done, but probably not what you want if there is a mix of people on the network, many of whom you really don’t want doing things like launching or configuring your builds. As I already had Apache running, and setting up authentication with Apache is relatively straightforward, I figured the easy solution would be to hide Jenkins behind Apache.

Since Jenkins is a big wad of Java code offering up a web interface, we’ve effectively got two web servers running: Apache and Jenkins (on different ports). The solution we’ll use is a proxy on the Apache side plus some firewall rules to prevent direct access to Jenkins, forcing people through the proxy and thus through the authentication controlled by Apache.

Let’s start by checking whether mod_proxy is enabled. This is simply a matter of verifying that /etc/httpd/conf/httpd.conf has these two lines:

LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_http_module modules/mod_proxy_http.so

In my case it was enabled, so no work was needed. Now we modify /etc/sysconfig/jenkins; at the end of the file we add an argument to set the path we want our Jenkins instance hosted at:

JENKINS_ARGS="--prefix=/jenkins"

Restart Jenkins (service jenkins restart) to have the changes picked up. You can test whether it’s working at localhost:8080/jenkins. I tried, and failed, to make this work for a nested path (i.e. /path/to/jenkins); I suspect this is a Jenkins limitation but didn’t chase down the actual reason.

Next let’s create an Apache configuration file in /etc/httpd/conf.d/jenkins_proxy.conf with the following contents:

ProxyPass /jenkins http://localhost:8080/jenkins
ProxyPassReverse /jenkins http://localhost:8080/jenkins
ProxyRequests Off

# Local reverse proxy authorization override
# Most Unix distributions deny proxying by default (e.g. /etc/apache2/mods-enabled/proxy.conf on Ubuntu)
Order deny,allow
Allow from all

We need to restart the web server (service httpd restart) to have these changes picked up. As this is RHEL6 running with SELinux enabled, we also need to allow httpd (Apache) to make proxy connections:

# setsebool -P httpd_can_network_connect true

Now at this point you should be able to visit http://yoursite.com/jenkins and see that our proxy configuration is working. This is cool, but people can still reach Jenkins directly via http://yoursite.com:8080/jenkins, which bypasses Apache.

We’ll be using iptables to close that hole. As you can see, my system was running iptables, but everything was permitted:

# iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
ACCEPT all -- anywhere anywhere

Chain FORWARD (policy ACCEPT)
target prot opt source destination

Chain OUTPUT (policy ACCEPT)
target prot opt source destination

Use ifconfig to figure out which network interface external packets will arrive on; in my case it was eth6. We can then instruct iptables to drop packets destined for port 8080 arriving on that interface, allowing only internal traffic (i.e. the proxy) through.

# iptables -A INPUT -p tcp -i eth6 --dport 8080 -j DROP

That’s it; users are now forced to come in through the correct front door (Apache). To make the iptables change permanent:

# service iptables save
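For reference, the saved rule lands in /etc/sysconfig/iptables and should look roughly like this (iptables-save format; the interface name will be whatever yours is):

```
-A INPUT -i eth6 -p tcp -m tcp --dport 8080 -j DROP
```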

So now the Apache web server sees all traffic to Jenkins and can perform authentication. This is simple enough to add to the Apache configuration file we created for the proxy (/etc/httpd/conf.d/jenkins_proxy.conf) – I’ll leave that one up to the reader to sort out.
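As a sketch of one way to do it: HTTP Basic authentication against a htpasswd file, layered onto the proxied location. The file path and realm name below are illustrative, not from my actual setup; you would create the user database first with htpasswd -c /etc/httpd/jenkins.htpasswd someuser.

```apache
<Location /jenkins>
    AuthType Basic
    AuthName "Jenkins"
    # Illustrative path; any file readable by httpd will do
    AuthUserFile /etc/httpd/jenkins.htpasswd
    Require valid-user
</Location>
```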

Makejail – limited SSH account on Ubuntu

Photo: “Jail Cell in the Rock” by hadsie, on Flickr (Creative Commons Attribution-Noncommercial-Share Alike 2.0 Generic License)

Previously I had covered how to set up scponly as a restricted fileserver environment. While this works well, it is very limited and didn’t allow rsync to run (without heroics beyond what I was willing to attempt). Using makejail seems to be a better solution for my needs, and it turns out to be quite easy to set up on Ubuntu 12.04. On the journey here I also tried out rssh, which I likewise decided wasn’t a good fit.

You’ll of course need sshd installed, which I’ll assume you have, plus makejail, which we can install easily:

$ sudo apt-get install makejail

Now we need to modify our OpenSSH configuration by editing /etc/ssh/sshd_config; there are two changes to make. First, change the UsePrivilegeSeparation setting from yes to no:

# Disable Privilege Separation to allow chroot
UsePrivilegeSeparation no

and at the bottom of the configuration file we’ll add:

Match User frank
ChrootDirectory /home/frank
AllowTCPForwarding no
X11Forwarding no
PasswordAuthentication no

Of course, for each restricted user you need to specify the username and home directory. You may have noticed that I’ve disabled password authentication for the restricted users; this is because changing the password is broken in the ‘jailed’ environment, so we avoid the issue by insisting on keys (yes, the restricted user will need to send you their public key so you can install it in that user’s .ssh/authorized_keys file).
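Installing that key on the server amounts to a few commands. Here is a sketch demonstrated against a scratch directory; the key text is a placeholder, and on the real system you would operate on /home/frank and run the chown as root:

```shell
# A scratch directory stands in for /home/frank here
JAIL=$(mktemp -d)
mkdir -p "$JAIL/.ssh"
# Placeholder key material; paste the public key the user sent you
echo "ssh-rsa AAAAexamplekey frank@laptop" >> "$JAIL/.ssh/authorized_keys"
# sshd is picky about these permissions
chmod 700 "$JAIL/.ssh"
chmod 600 "$JAIL/.ssh/authorized_keys"
# On the real system, as root: chown -R frank:frank /home/frank/.ssh
stat -c '%a' "$JAIL/.ssh" "$JAIL/.ssh/authorized_keys"   # → 700 then 600
```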

Next we need to create a simple Python script that we can pass to makejail as a configuration file. I called mine jailconf.py, and the contents look like:

chroot = "/home/frank"
testCommandsInsideJail = ["bash", "ls", "touch", "rm", "rmdir", "less", "cat", "rsync" ]

Then execute makejail with this configuration file.

$ sudo makejail jailconf.py

For some reason I initially needed to run makejail twice before it completed without errors. It is something you can run multiple times with no serious side effects, which is handy if you want to add more commands later.

That’s it. If you take a peek at the filesystem structure that’s been created, it’s a chroot environment. You’ll probably want to create a /home/frank/stuff directory and assign ownership to the user so they can put files there.

$ sudo ls -l /home/frank
total 36
drwxr-xr-x 2 root root 4096 Sep 19 22:59 bin
drwxr-xr-x 2 root root 4096 Sep 19 22:55 dev
drwxr-xr-x 3 root root 4096 Sep 19 22:56 etc
drwxrwxrwx 4 frank frank 4096 Sep 19 23:28 stuff
drwxr-xr-x 4 root root 4096 Sep 19 22:55 lib
drwxr-xr-x 2 root root 4096 Sep 19 22:55 root
drwxr-xr-x 2 root root 4096 Sep 19 22:59 sbin
drwxr-xr-x 2 root root 4096 Dec 5 2009 selinux
drwxr-xr-x 5 root root 4096 Sep 19 22:55 usr
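Creating that drop directory is just a couple of commands, sketched here against a scratch directory (on the real system the jail root is /home/frank and the chown needs root):

```shell
# A scratch directory stands in for /home/frank here
JAIL=$(mktemp -d)
mkdir "$JAIL/stuff"
chmod 777 "$JAIL/stuff"    # matches the drwxrwxrwx in the listing above
# On the real system, as root: chown frank:frank /home/frank/stuff
stat -c '%a' "$JAIL/stuff"   # → 777
```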

Now once you sort out the public key login (and remember to make sure the permissions on the .ssh directory and authorized_keys file are correct), the user frank will be able to log in and see the directory tree /home/frank as if it were the root of the filesystem. Only the commands listed in the configuration file (jailconf.py) will be available to that user. Of course, if the filesystem is writable (and executable) they could always upload copies of other commands they want to run – but hopefully these are people you trust to some level.

References: I came to this solution initially through this article. There was a serverfault post that helped with the ssh configuration changes related to disabling password authentication.

In my case this is one component in allowing a friend to use my system as a remote (encrypted) backup site using rsync. I’ll post more details on that in the future.

Ubuntu server 10.04 LTS upgrade to 12.04 LTS

Here we go again… As I mentioned previously, I was back on 8.04 LTS, which worked well for me, but the end of support and my desire to use some newer features drove me down this multiple-upgrade path. Since I had just come fresh from an upgrade, the second round wasn’t quite as big a deal.

Kick things off with one simple command:

$ sudo do-release-upgrade

I keep track of the changes required in a text file as the upgrade progresses; this is a good practice and has saved me a number of times.

Unfortunately my RAID array broke again on this upgrade. This time it was /etc/fstab needing to be changed to use /dev/md_d2 and /dev/md_d3 instead of /dev/md2 and /dev/md3.
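The edit looks something like this in /etc/fstab (the mount points and filesystem type below are illustrative, not from my actual file):

```
# was: /dev/md2  /data  ext3  defaults  0  2
/dev/md_d2  /data    ext3  defaults  0  2
# was: /dev/md3  /backup  ext3  defaults  0  2
/dev/md_d3  /backup  ext3  defaults  0  2
```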

DKIM-filter isn’t available in 12.04 – I’ve been getting (non-fatal) error messages in /var/log/mail.log about this. I’ll probably move to OpenDKIM at some point, but it’s not a big issue other than generating a bit of extra log data.

NFS broke on me this time. I was using a domain-name-based restriction in my /etc/exports which I needed to remove:

/data/stuff/Shared *.lowtek.ca(rw,no_root_squash,async,no_subtree_check)

Changing the *.lowtek.ca to simply * and then running exportfs -r to reload the exports fixed things. The exportfs utility is new to me; at least I’ve never needed to use it to fiddle with my NFS exports before.
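After the change, the export line reads:

```
/data/stuff/Shared *(rw,no_root_squash,async,no_subtree_check)
```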

I use the mailgraph package to make pretty charts from my email logs. It broke in the upgrade; I was able to track it down to the configuration file containing BOOT_START="YES" instead of BOOT_START=true (the default). This seems to be some mix of my previous configuration and the new package-maintained one. I will note that the new version of mailgraph is aware of greylisting, which helps clean up the graphs a bit.

Dovecot’s defaults changed to require secure logins only. This didn’t seem like a big deal until I found out that Jenn’s iPhone 3G refused to authenticate. It took a bit of trying, but I did find a solution. The iPhone 3G was running iOS 3.1.3, and that version didn’t like self-signed certificates combined with secure logins for sites it hadn’t synchronized with previously. To resolve this, it was a simple matter of allowing plaintext authentication in /etc/dovecot/local.conf:

disable_plaintext_auth = no

Don’t forget to restart the dovecot service so the changes are picked up:

$ sudo service dovecot restart

Then synchronize the phone with the plaintext login, remove the dovecot configuration change we just made so it defaults back to SSL-based logins (and restart the service again), and the iPhone detects this and asks you to accept the self-signed certificate. Weird, but it works.

My Slimserver seems not to have survived this upgrade. I haven’t fixed it yet, but it should be relatively simple. Hopefully the end-of-life status of this product line won’t mean the complete loss of the community around it.

In summary, there were the typical minor upgrade aches and pains but nothing that took my site down for any appreciable amount of time.