Playing around with benchmarks

So I just rebuilt my little home server's storage from LVM+Ext4 to ZFS, changing the layout from RAID5 to RAID1+0: a pool of two mirrored disk sets.
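For reference, a striped-mirror pool like that can be created in one go. The pool and device names below are placeholders (in practice you'd want the stable /dev/disk/by-id paths rather than sdX names):

```shell
# Create a pool of two mirrored pairs -- ZFS stripes across the
# vdevs, giving a RAID1+0 equivalent. Device names are placeholders.
zpool create tank mirror /dev/sda /dev/sdb mirror /dev/sdc /dev/sdd
zpool status tank
```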

Since I’m a cheap bugger, I still run a small HP MicroServer Gen7 (N54L) with only 2 GB of RAM, which I’ve filled up with 4 x 3 TB WD Red drives for storage, and a 60 GB SSD for the system.

As everybody knows, the only difference between screwing around and science is writing it down. I was slightly too eager to get started to remember to benchmark my old file system, so I guess any future changes will have to be compared to the current set-up.
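For next time: even a crude dd run written down beats nothing. A minimal sketch, assuming the pool is mounted somewhere writable (the default target "." is just for dry runs):

```shell
#!/bin/sh
# Crude sequential write benchmark. Point TARGET at the pool's
# mount point (e.g. /tank) for a real measurement.
TARGET=${1:-.}
dd if=/dev/zero of="$TARGET/bench.tmp" bs=1M count=64 conv=fdatasync 2>&1 | tail -n 1
rm -f "$TARGET/bench.tmp"
```

The conv=fdatasync makes dd flush to disk before reporting, so the throughput figure reflects the drives rather than the page cache.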


Monitoring Keepalived with SNMP on Ubuntu 14.04


Using keepalived in combination with a couple of HAProxy instances is a convenient yet powerful way of ensuring high availability of services.

[Network map: load balancer pair in normal state]

Up until now, I’ve considered it enough to monitor the VMs where the services run, and the general availability of an HAProxy listener on the common address. The drawback is that it’s hard to tell at a glance whether the site is being served by the intended master or by the backup load balancer. The image to the right shows the intended – and, by the end of this article, achieved – result, with the color of the lines between nodes giving contextual information about the state of the running services.

Monitoring state changes could naïvely be achieved by continuously tailing the syslog and searching for “entered the MASTER state”. That would be a pretty resource-intensive way of solving the problem, though. A less amateurish way to go about it would be to use keepalived’s built-in capability of running scripts on state changes, but there are a number of situations in which you can’t be sure the scripts are actually able to run, so that’s not really what we want to do either.
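For completeness, that notify mechanism looks like this in keepalived.conf; the script paths are hypothetical:

```
vrrp_instance VI_1 {
    # interface, priority and VIP details omitted
    notify_master "/usr/local/bin/vrrp-master.sh"
    notify_backup "/usr/local/bin/vrrp-backup.sh"
    notify_fault  "/usr/local/bin/vrrp-fault.sh"
}
```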

Fortunately, keepalived supports SNMP, courtesy of the original author of the SNMP patch for keepalived, Vincent Bernat. In addition to tracking state changes, it potentially allows us to pull out all kinds of interesting statistics from keepalived, as long as we have a third machine from which to monitor things. Let’s set it up.
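As a rough sketch of where we're headed (hostname and community string are placeholders, and you should verify the OID against the KEEPALIVED-MIB shipped with your packages):

```
# snmpd must act as an AgentX master so keepalived can register
# as a subagent:
echo 'master agentx' >> /etc/snmp/snmpd.conf

# enable keepalived's SNMP support via /etc/default/keepalived:
# DAEMON_ARGS="--snmp"

service snmpd restart && service keepalived restart

# then, from the monitoring host, walk keepalived's enterprise subtree
snmpwalk -v2c -c public lb1.example.com .1.3.6.1.4.1.9586.100.5
```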

Reverse Proxy with NTLM authentication in Linux

The other day I got a fun project at work: We need to have several users authenticate to a site using the same SSL certificate, but with logs showing which users were connected at any time.

The basic premises are simple: A reverse proxy server takes calls to a specific address and sends them on to the actual service provider, along with the proper certificate, while logging the userID of the user making the request.

This looked like a good opportunity to introduce Linux into our environment, so that’s what I did.

The machine

The task isn’t very machine intensive – I predict at most a couple of users at any given time. Only in-house users means no need to put the machine in a DMZ or similar, and it also means I can talk about it in general terms on the web.

I chose to go with CentOS 5.x for this machine, since it’s RedHat compatible (easy to jump right in for external consultants if the need should arise) and since it’s the latest version of the distribution that our current hypervisor explicitly supports.

For the proxy server, I went with Squid. It’s lightweight and seems robust enough.

For logging of user activity, we need to know the IDs of users accessing the solution, and we need to validate them against a white-list. I wanted the validation process to be transparent to the users, which requires NTLM authentication against our AD. A regular LDAP authentication with a password prompt in the browser would have been my fallback solution if I hadn’t managed to get NTLM working.

NTLM Authentication

One thing I stumbled upon right away was that the Samba version in CentOS 5 doesn’t talk properly to Windows 2008 domain controllers. Since I wanted an RPM build for simplicity’s sake, I tried the SerNet Samba packages, and installed samba, samba-client, samba-utils, samba-winbind and samba-doc for my selected version, platform and distribution.
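With the packages in place, the join itself boils down to something like the following; the account name is illustrative, and smb.conf must already have security = ads plus the right realm and workgroup:

```
net ads join -U Administrator
service winbind restart
wbinfo -t   # verify the machine trust secret
wbinfo -u   # list domain users through winbind as a sanity check
```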

The next problem I stumbled into was winbindd and nmbd not starting properly. They would work just fine when started from /usr/sbin, but crash horribly when started from /etc/init.d.

# /etc/init.d/smb status
smbd (pid 15393 15385) is running...
nmbd is stopped
# /etc/init.d/winbind status
winbindd dead but subsys locked
# tail -4 /var/log/log.nmbd
[2012/01/11 15:33:07, 0] lib/util_sock.c:1366(create_pipe_sock)
  bind failed on pipe socket /var/lib/samba/nmbd/unexpected: Permission denied
[2012/01/11 15:33:07, 0] nmbd/nmbd_packets.c:48(nmbd_init_packet_server)
  ERROR: nb_packet_server_create failed: NT_STATUS_ACCESS_DENIED

After some forum browsing, I tried switching SELinux into permissive mode. This worked, and since this machine is running locally only, it’s an acceptable workaround for the moment.
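For the record, the switch looks like this (the sed line assumes the stock /etc/selinux/config layout):

```
getenforce            # shows "Enforcing" on a stock install
setenforce 0          # permissive until next reboot
sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
```

The cleaner long-term fix would be to sort out the file contexts or policy (restorecon, audit2allow) rather than relaxing enforcement across the board.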

Finishing up

The default config file for Squid is ginormous, since it also includes all the documentation. I ended up slashing it down to the bare essentials needed for the reverse proxy and the SSL definitions. I pulled my hair for a while over getting AD group membership to count in the config file, though. For some reason, I got NT_STATUS_OK: Success (0x0) as an answer when I executed ntlm_auth --require-membership-of from a command line – the same as for a correct logon – even for users who aren’t members of the group, while from within Squid, I just got an endless row of password prompts until I clicked cancel. The symptoms were identical whether or not I specified the domain name, and whether I used the group name in human-readable format or specified its SID.
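Stripped down, the relevant parts of squid.conf ended up along these lines. Hostnames, certificate paths and the group name are placeholders, and exact option names vary a bit between Squid versions, so treat this as a sketch rather than a drop-in config:

```
# terminate incoming TLS and act as an accelerator
https_port 443 accel defaultsite=backend.example.com cert=/etc/squid/server.pem

# forward to the real service, presenting the shared client certificate
cache_peer backend.example.com parent 443 0 no-query originserver ssl sslcert=/etc/squid/client.pem

# transparent NTLM authentication against AD via winbind's ntlm_auth helper
auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp --require-membership-of='EXAMPLE\Proxy Users'
auth_param ntlm children 5

acl authed proxy_auth REQUIRED
http_access allow authed
http_access deny all
```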

After both googling and trying to get some tips via various IRC channels, I finally decided to just remove the Linux server’s AD object and re-register it. Something in that process fixed the problem. The final AD-related thing I did was to set up a cron job to reset the machine account password once a day. Since then, I haven’t had any problems with this server.
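The cron job itself is a one-liner; the file name is my own choice:

```
#!/bin/sh
# dropped into /etc/cron.daily/samba-trustpw (name is arbitrary);
# rotates the machine account password against AD once a day
net ads changetrustpw
```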

I also finally got some hands-on experience with shell scripting. When I only used Linux for fun, I never had any use for anything more advanced than regular config file tweaking. This server’s requirement to keep data for several years gave me a reason to actually look at Bash and have a few hours of fun with it, learning a couple of things about both Squid and grep that I hadn’t thought of earlier.
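In that spirit, here's a toy version of the kind of thing those scripts do: summarizing requests per authenticated user from a Squid access log in native format, where the user ID sits in field 8. The sample log lines are made up for illustration:

```shell
#!/bin/bash
# Build a tiny sample access.log in Squid's native format.
cat > sample.log <<'EOF'
1326290000.123    57 10.0.0.5 TCP_MISS/200 4521 GET http://internal.example/ alice FIRSTUP_PARENT/10.0.0.1 text/html
1326290001.456    42 10.0.0.6 TCP_MISS/200 1234 GET http://internal.example/ bob FIRSTUP_PARENT/10.0.0.1 text/html
1326290002.789    61 10.0.0.5 TCP_MISS/200 9876 GET http://internal.example/x alice FIRSTUP_PARENT/10.0.0.1 text/html
EOF

# Count requests per user ID (field 8), busiest first.
awk '{ print $8 }' sample.log | sort | uniq -c | sort -rn
```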