Fixing (some) access errors in Veeam

I just spent a couple of hours troubleshooting a stupid problem: I got access errors when trying to back up a VM from a newly installed Veeam server. Searching forums for answers turned up red herrings all over the place, from opening up the Windows Firewall for RPC traffic, to removing Veeam VSS files from various folders and shares, to purging keys in the Registry.

It turned out none of that was the cause of the problem; instead I had re-discovered an issue I’ve seen before: for some reason, Veeam sometimes won’t work properly with UPN logons (username@domain) but instead requires down-level logon names (DOMAIN\username). Changing that fixed the problem.


Playing around with benchmarks

So I just rebuilt my little home server RAID from LVM+Ext4 to ZFS, changing the layout from RAID5 to RAID1+0, consisting of a pool of two mirrored disk sets.
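For the record, creating the layout described above looks roughly like this – the pool name and device names are placeholders for my actual setup:

```shell
# A striped pool of two mirrored pairs, i.e. RAID1+0
zpool create tank \
    mirror /dev/sda /dev/sdb \
    mirror /dev/sdc /dev/sdd
```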

Since I’m frugal, I still run a small HP MicroServer Gen7 (N54L) with only 2 GB of RAM, which I’ve filled up with 4 x 3 TB WD RED drives for storage, and a 60 GB SSD for the system.

As everybody knows, the only difference between screwing around and science is writing it down. I was slightly too eager to get started to remember to benchmark my old file system, so I guess any future changes will have to be compared to the current set-up.

Fedora 25 impressions

I recently switched from Ubuntu 16.10 to Fedora 25 on my gaming computer just to give it another shot. The mindset in this distribution is slightly different from that of Ubuntu, especially in that releases come more often. For a computer mainly used for playing around, this is not a bad thing, but unfortunately it also shows where Fedora’s weaknesses are, compared to more polished operating systems, and compared to what regular users would accept from a daily driver. A very concrete example is how the operating system handles kernel updates:

Since I run the non-free nVidia driver – this is my gaming computer first and foremost – every kernel update seems to break the graphical user environment, at the very least requiring me to perform an additional reboot after showing me the famous white “oh no…” screen. To be fair, the non-free drivers are not part of the core operating system in Fedora, but would it really be that hard to look for this characteristic event and let it trigger an additional reboot automatically, if that’s all it takes?

Otherwise I must say Fedora does what I need it to and does it well. I’ll keep using it for a while and see how it works for me in the long run.

Postfix and subdomain mail delivery

My last project has been about securing outgoing mail from one of our systems.

We had a few basic requirements:
1) Mails to customers should not be falsely flagged as spam.
2) Performance. Several thousand mails are sent per day.
3) Reusability: Multiple systems should be able to send mail through the same solution, preferably from multiple domains.

The solution also had to meet some technical prerequisites:
1) The MTA will be placed in our DMZ to be reachable by various systems in multiple domains.
2) The MTA must reach our mail-to-fax converter.
3) The MTA must reach our main mail server cluster without going via external services.
4) Accounts used for outgoing mail should not have mail addressed to them stored on the MTA; such mail should instead be relayed to the main mail servers.


Originally we sent our mail via our regular mail servers through a cloud-based spam filter service, but our volumes caused this traffic to get throttled by the service provider, breaking the performance requirement.

Next we tried a piece of software called SMTPBeamer, which a colleague of mine had used for a slightly different task, and which seemed promising, was easy to set up and didn’t break the bank. Unfortunately, this program has no native DKIM signing of mails, which it turns out is pretty much mandatory today if one wants to avoid having a large share of sent mail bounce or get stuck in spam filters. In other words, this broke our first and perhaps most important requirement.

This caused me to consider a serious mail transfer agent, namely Postfix.

Installation and initial configuration was made dead simple thanks to the excellent walkthrough provided by Christoph Haas, at ISPMail Tutorial. Thanks to his explanations, digging deeper into how Postfix works to complement with further functionality got a lot easier than I had anticipated.

So what pitfalls did we encounter?


Fax delivery

We still send a lot of faxes. They are generated by an appliance connected to our PBX: basically, it listens for mail addressed in a specific format on a dedicated fax subdomain. To begin with, I couldn’t get my head around how to make Postfix understand that I wanted mail to that subdomain to be sent to a specific IP address.
Hint: the Postfix documentation is all you need, provided you understand that Postfix requires the ability to look up any recipient domain via DNS. An entry in the hosts file is not enough.

The relevant clue was found in a forum post where the author wrote about the command “host”, which specifically looks up the given host name using DNS rather than the hosts file. After spending hours trying different combinations of relay and transport maps and configurations, just adding the fax subdomain to the zone file for the correct subnet solved the problem immediately. I had understood the Postfix documentation for the necessary transport rules correctly from the start, but I hadn’t understood Postfix.
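For reference, the working combination looks roughly like this. All names and addresses are placeholders – the real ones are internal – so treat this as a sketch rather than our exact configuration:

```
; Zone file: give the fax subdomain a DNS record Postfix can resolve
fax.example.com.    IN  A   192.0.2.25

# /etc/postfix/transport: route the subdomain to that host
# (run “postmap /etc/postfix/transport” after editing)
fax.example.com     smtp:[192.0.2.25]

# main.cf
transport_maps = hash:/etc/postfix/transport
relay_domains = fax.example.com
```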

User accounts

After following the ISPMail Tutorial to a T, I had a perfect little mail server which could send mail using local virtual accounts for authentication, but which also accepted mail addressed to those accounts. It would have been possible to work around this, but it was not the behavior I was looking for. By switching the domain to which these accounts belong in the database, without changing their fully qualified names, and adding their actual domain to relay_domains along with a transport rule, I can now use the proper mail addresses for authentication, reducing the risk of spamming, while any mail from one account to another is passed straight to our internal mail servers.
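The gist of the idea, with all domain names hypothetical (the real ones differ), is roughly this main.cf excerpt:

```
# Accounts used for SMTP authentication live in a virtual domain of their own
virtual_mailbox_domains = auth.example.com

# Mail to the real domain is relayed onward, never stored locally
relay_domains = example.com
transport_maps = hash:/etc/postfix/transport

# /etc/postfix/transport contains a rule such as:
# example.com    smtp:[mail.internal.example.com]
```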

I will soon take the time to describe the solutions and configurations required in more technical detail and hopefully using a lot less prose.

Using the right tool for the job…

I encountered an interesting problem after setting up load balancing for a web service one of our devs needed to make available: accessing a dummy page with an HTTP GET went flawlessly in a regular web browser, but POSTing to it using his client software or curl resulted in a 503 error. At first I suspected a misconfigured firewall, but when reading the HAProxy logs I discovered that the 503 was accompanied by a “<NOSRV>” tag, meaning that HAProxy couldn’t make out which backend it should forward the client data to.

The solution was simple: Up until now, I’d only forwarded traffic from modern web browsers, using the ssl_fc_sni function to find the appropriate backend based on the server name requested by the client. What I forgot when setting up these rules was that the POST wouldn’t be performed by a modern browser, so I had no guarantee that the client would be capable of the SNI (Server Name Indication) protocol extension.

The simple solution was to use the Host field from the HTTP header instead (the host name below is a placeholder):

use_backend backend1 if { hdr(host) -i app.example.com }

Note that this requires SSL to be terminated in HAProxy. My configuration terminates SSL, reads and modifies the relevant HTTP information, then establishes a new SSL connection to the backend servers with the appropriate certificate checks. This way the traffic stays encrypted whenever it is in transit.
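The relevant parts of such a configuration might look roughly like this – all names, addresses and file paths are placeholders:

```
frontend fe_https
    bind :443 ssl crt /etc/haproxy/certs/
    # Route on the HTTP Host header, so clients without SNI support still work
    use_backend backend1 if { hdr(host) -i app.example.com }

backend backend1
    # Re-encrypt towards the backend and verify its certificate
    server web1 10.0.0.11:443 ssl verify required ca-file /etc/ssl/certs/ca-certificates.crt
```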

Apple Smart Keyboard First Impressions

Having just received the Smart Keyboard for my 9.7″ iPad Pro, I thought I’d write a little about it.

The first thing I was slightly apprehensive about was naturally how it would feel to type on. The demo tables in Apple’s stores don’t really lend themselves to testing that aspect realistically. It turns out I worried unnecessarily: the cupped shape of the keys, along with the relatively large gap between them, makes the keyboard very comfortable for me to type on. Going from my Retina MacBook Pro or Magic Keyboard to the Smart Keyboard is almost completely seamless. It’s comfortable enough on a table, but what’s interesting is that thanks to its strong magnets it actually works in my lap while I’m half-lying on a couch too – at least as long as the iPad itself keeps its center of balance towards the rear support.

The keyboard itself supports almost all shortcuts and key combinations I’m used to from Apple’s computer keyboards except for those that require the use of the Fn key, which on the Smart Keyboard is replaced by a shortcut to switch between keyboard layouts.

As I am used to writing on a Swedish keyboard but often write technical documents in English, I soon encountered a situation that could have turned the Smart Keyboard into a dud for me:
How does it handle typing in one language while using the keyboard layout of another? The autocorrect dictionary in iOS is tied to the chosen keyboard layout. It turns out Apple thought of that issue long before I did: under General Settings there’s a button called Hardware Keyboard, which makes it possible to turn off text autocorrection while using a physical keyboard yet retain the function when typing on-screen, where special characters are chosen visually anyway. This is one of those small things that make me fond of Apple. This need of mine probably represents a pretty small percentage of Apple’s customers, but one of their developers thought of it and implemented a solution that makes switching from tablet mode to “almost laptop” mode completely seamless.

So are there any drawbacks to the Smart Keyboard?
Not a lot of them. One thing I noticed quickly is that the edit field on some forums doesn’t capture the cursor keys: Marking text using various combinations of Shift, Option, Command and the cursor keys is somewhat hit-or-miss across different sites on the web. In WordPress it works perfectly, but on the MacRumors forums touching any of the cursor keys while in the edit field scrolls to the bottom of the page. At this point I have no idea where the problem lies, but it’s a bit frustrating since selecting text is a chore using fingers on a touch screen.

All in all, and in my use case, the Smart Keyboard complements the iPad Pro perfectly, and I can definitely see myself leaving for an extended vacation without bringing my computer along largely thanks to it. Time will tell whether I’ll stay happy with this combination or if I’ll rather invest in an ultralight laptop the next time I have to replace my hardware.





Monitoring Keepalived with SNMP on Ubuntu 14.04


Using keepalived in combination with a couple of HAProxy instances is a convenient yet powerful way of ensuring high availability of services.

[Network map: load balancer pair in normal state]

Up until now, I’ve considered it enough to monitor the VMs where the services run, and the general availability of a HAProxy listener on the common address. The drawback is that it’s hard to see at a glance whether the site is served by the intended master or by the backup load balancer. The image to the right shows the intended – and at the end of this article achieved – result, with the color of the lines between nodes giving contextual information about the state of the running services.

Monitoring state changes could naïvely be achieved by continuously tailing the syslog and searching for “entered the MASTER state”. This would be a pretty resource-intensive way of solving the issue, though. A less amateurish way to go about it would be to use keepalived’s built-in capability of running scripts on state changes, but there are a number of situations in which you can’t be sure that the scripts are able to run, so that’s not really what we want to do either.
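As a throwaway illustration of that naive approach (the log message is the one quoted above; the log path is a stock Ubuntu 14.04 assumption):

```shell
# Matches a syslog line announcing a keepalived master transition.
is_master_transition() {
    case $1 in
        *"entered the MASTER state"*) return 0 ;;
        *) return 1 ;;
    esac
}

# In practice this would be fed live log lines, roughly:
#   tail -Fn0 /var/log/syslog | while read -r line; do
#       is_master_transition "$line" && echo "failover: $line"
#   done
```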

Fortunately, keepalived supports SNMP, courtesy of the original author of the SNMP patch for keepalived, Vincent Bernat. In addition to tracking state changes, it potentially allows us to pull out all kinds of interesting statistics from keepalived, as long as we have a third machine from which to monitor things. Let’s set it up.
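The core of the setup, sketched from memory – flag names and file paths may vary between keepalived builds, so treat this as an assumption to verify against your version:

```
# /etc/snmp/snmpd.conf – let snmpd act as the AgentX master agent
master agentx

# /etc/default/keepalived – start keepalived with SNMP support enabled
DAEMON_ARGS="--snmp"
```

After restarting snmpd and keepalived, the KEEPALIVED-MIB subtree should become walkable from the monitoring host.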