I asked a question on ServerFault recently and went on to answer it with more than a bit of help from Michael. I had no idea that Linux allows you to have multiple routing tables, but it all makes sense once you work it out!
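In a nutshell: you give the extra routing table its own default route, then add a rule saying which traffic should consult it – something like this (the interface, addresses and table number here are made up purely for illustration):

# a second routing table with its own default route...
ip route add default via 192.0.2.1 dev eth1 table 100
# ...and a rule sending traffic from one source address through it
ip rule add from 192.0.2.50 table 100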
Following on from my earlier post about co-locating a Raspberry Pi at EDIS, I can now report that my Pi is up and running in Austria.
On the downside, it took a full month from posting it to having it plugged in (but, then, I didn’t pay for the service, so I can’t really complain).
On the upside, it came up correctly with both IPv4 and IPv6 connectivity. The link seems to be solid and stable for both – slightly longer pings over v6 to the same physical location, but nothing to worry about.
Conspiracy theorists posited that EDIS were going to sell all the Pis on Ebay and virtualize them onto an old Pentium 4. If they’ve done that, it’s a very slick job – the processor serial number of what I can SSH into matches that of the Pi I shipped.
I plan to install Nagios on it – watch this space.
In a month where pleasingly many things got crossed off my long-term To Do list, the St Columba’s website was happily no exception. Like so many church and charity websites I’ve run over the years (the eight years since I was sixteen, in fact!), it was a pile of static content which meant I was stuck maintaining it and those producing the content couldn’t easily chip in.
So, what to do? I could spend several weekends hacking together a CMS in Django – actually, no I couldn’t, on the evidence of failing to have found time for that in three years – or I could use somebody else’s second-rate pile of PHP. What I really wanted was a system which removed all dependency on my servers, cost nothing and was reasonably easy for non-techies to be carefully steered into using.
Google Sites was suggested by Tony, and since we already had Google Apps set up for the domain in question, it seemed like a good place to start, with a reasonable chance of being a long-term solution.
I started by sitting down for a few hours and dumping what content was worth keeping from the old site. Here’s an assortment of things I noticed:
- All our outgoing hyperlinks were mangled by Google Sites to bounce via a URL of the form google.com/something?url=<our actual link>. As well as looking ugly, this is a particular pain in the neck for users on mobile connections, since the high latency makes the extra redirection from Google’s URL to the real one painfully obvious. I’ve no idea why this happened, but it went away after a couple of hours, so that’s OK.
- In order to avoid having tedious stuff about comments and attachments at the bottom of every page, you need to make a template with those options turned off, and base your pages on it. And if you’ve made 80 pages before discovering that, you need to fix each one by hand on the page settings box for it. Sigh.
- On a more positive note, the integration with Google Calendar, Google Groups signup and Google Docs is top notch, as you might expect.
- The CMS seems pretty capable – my not-especially-techie co-conspirator was able to bash out most of the design and not make it look too bad, and the page editing tools are simple enough for non-technical users.
The Google Sites-backed site is now live, and I’ll post more later on smoothing off some of the rough edges.
A few weeks ago, the story broke that hitherto unheard-of (by me, at least) Austrian hosting firm EDIS was offering free co-location of Raspberry Pis in their data centre. Since I hadn’t really found a use for mine (like almost every geek I know who bought one ‘just because’), I decided to give it a go. The plan is to use it to run Nagios to keep an eye on various machines I run back here in the UK.
This guy describes how you can make a start, and I pretty much followed his lead – install the basic Debian image from the official RPi site, then rip out everything graphical, set up an SSH server, firewall it and expand the root partition to fill the SD card. In my case, I didn’t bother shipping a USB stick in it – the 16GB SD card should be all the storage a basic monitoring installation will ever need.
The last thing to do before posting it (along with a USB cable to power it) is configure the IP addresses they gave you (you were cool enough to ask for IPv6 too, right?). I wrote /etc/network/interfaces like this:
auto eth0
allow-hotplug eth0

iface eth0 inet static
    address w
    netmask 255.255.255.0
    gateway x

iface eth0 inet6 static
    address y
    # we're assigned a /112, but the routing is /48 based
    netmask 48
    gateway z
Obviously, replace the ws, xs, ys and zs with the settings they e-mailed you.
It’s worth noting (I had to ask EDIS to clarify this) that they don’t provide IPv4 DNS servers for you to use – go for Google public DNS or similar, with /etc/resolv.conf like this:
nameserver 8.8.8.8
nameserver 8.8.4.4
There’s not much you can do to test you’ve got the networking right, but I did boot it and check eth0 came up when a cable was plugged in, with the right IPs on it. You can also check the output of ‘sudo route -n’ to make sure the default route goes via the gateway it should.
I posted my Pi (Royal Mail’s standard air mail – cost about £2) this morning. I’ll write a follow-up when it’s arrived and running.
Back in January, we had a break-in at St Columba’s. Some ill-mannered individual smashed their way in and trashed quite a lot of cupboard doors. Ultimately, the only item they took was a four-year-old laptop which was getting a bit creaky anyway. The laptop itself was probably worth a couple of hundred pounds tops, but as is the way with small organizations, it contained a bunch of irreplaceable data about our hirings and general admin.
As I recounted this story to a few geeky friends in the pub, they all sucked air through their teeth and began to commiserate with me about what a pain it must have been to try and replace all that un-backed-up data.
“But no!”, I was able to interrupt them (with just a trace of smugness). Despite this laptop being a standalone machine whose regular users are completely un-technical, there was a backup system in place. It involved a Python script which used rdiff-backup to shove the laptop’s hard drive the wrong way up the office ADSL line and onto one of my servers. It shuts down the laptop when it’s done (many hours after you start running it), so all I had to train my users to do was “click this icon before you go home on Friday”. So we lost a few days’ worth of data, but it could have been a lot worse. And the solution took about an hour for me to put together and cost nothing.
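For the curious, the script is little more than a wrapper around rdiff-backup; a minimal sketch (the paths, hostname and shutdown invocation below are placeholders, not the real ones) looks something like this:

#!/usr/bin/env python
# Sketch of the Friday backup: push the data to a remote rdiff-backup
# repository over the ADSL line, then power the laptop off.
import subprocess
import sys

SOURCE = "/home/office/data"                      # placeholder for the directory to back up
DEST = "backup@backups.example.org::st-columbas"  # placeholder rdiff-backup destination

if subprocess.call(["rdiff-backup", SOURCE, DEST]) != 0:
    sys.exit("Backup failed - leaving the machine on so somebody notices")

# Only shut down once the backup has finished cleanly.
subprocess.call(["shutdown", "-h", "now"])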
I’ve decided to give up on maintaining my own SVN server – Mercurial is my version control of choice these days, and for the scale of the open source stuff I do, there’s no good reason not to use Bitbucket.
I’ve done my best to fix all the historical links pointing to svn.dnorth.net so they go to the files now on my Bitbucket account, but do get in touch if I’ve missed one.
For a few years now, I’ve run a hosting co-operative with a few friends. Although the cost savings versus all renting VMs individually are probably marginal at best these days, one of the nice things about it is the chance to run things like our incoming MX on one machine only, instead of all having to run our own anti-spam and other measures. The incoming mail is handled by Exim, and each user of our system can add domains for which mail is processed. They get to toggle SMTP-time rejection of spam and viruses, and specify the final destination machine for incoming mail to their domain.
This has all been working well for over two years, but occasionally something has to change: a few months ago, we got rid of sender verify callouts, now widely considered abusive by SMTP server admins, and more recently we added support for tagging messages with headers to say if they passed or failed DKIM verification. And every time I make such a change, I worry that I might have inadvertently broken something. This server handles mail for 30 domains and 8 people, some of whom rely on it to run businesses! Panic!
I usually end up reassuring myself by doing some ad-hoc testing by hand after reconfiguring the server. At the most basic level, whatever your SMTP server is, you can use netcat to have a conversation with it on port 25:
d@s:~$ nc localhost 25
220 mailserver.splice.org.uk ESMTP Exim 4.71 Sat, 17 Mar 2012 09:51:20 +0000
HELO localhost
250 mailserver.splice.org.uk Hello localhost [127.0.0.1]
MAIL FROM: <>
250 OK
RCPT TO: <someaddress@dnorth.net>
550-Callout verification failed:
550 550 Unrouteable address
QUIT
221 mailserver.splice.org.uk closing connection
And there, I’ve just convinced myself that one of our features is still working: the mailserver should call forward to the final destination for mail to @dnorth.net addresses to check the local part (‘someaddress’ in this case) is valid, and reject the message up-front if it’s not.
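Incidentally, once you get bored of typing at netcat, the same conversation is easy to script from Python with smtplib (a quick sketch, using the same address as above):

import smtplib

# Repeat the manual conversation: null sender, then see whether the RCPT TO
# is rejected by the callout verification.
server = smtplib.SMTP("localhost", 25)
server.helo("localhost")
print(server.mail("<>"))                      # expect (250, ...)
print(server.rcpt("someaddress@dnorth.net"))  # expect a 550 for an unknown local part
server.quit()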
Exim also has a load of other toys you can take advantage of: say I want to check how mail to someaddress@dnorth.net is routed:
d@s:~$ exim4 -bt someaddress@dnorth.net
R: hubbed_hosts for someaddress@dnorth.net
someaddress@dnorth.net
  router = hubbed_hosts, transport = remote_smtp
  host mx-internal.dnorth.net [192.0.2.100]
(IP addresses changed for example purposes, obviously)
And finally, there’s debug mode: you can run
exim4 -bhc <ip address>
to run a complete ‘fake’ SMTP session as though you were connecting from the given IP address. You can send messages, but they won’t actually go through, and exim prints a lot of debug output to give you a clue as to its inner workings as it decides how to route the message.
This is all very well, but a quick brainstorming session gives a list of over 30 things I might want to check about my mailserver:
- Basic check that mail is accepted to our domains
- Only existent addresses on our domains should have mail accepted
- Domains with SMTP-time spam rejection on should have spam rejected
- Same for viruses
- Same for greylisting
Testing all these by hand isn’t going to fly, so what tools can we find for automating it? A bit of Googling turns up swaks, which looks quite handy, but suffers from two drawbacks for me: first, it’s a bit low-level, and a collection of scripts calling it will be a bit difficult to read and maintain for testing all 30 of my assertions. Second, it really sends the e-mails in the success case, and I don’t want my users to get test messages or have to set up aliases for receiving and discarding them. swaks will definitely become my tool of choice for ad-hoc testing in future, but meanwhile…
The other promising Google result is Test::MTA::Exim4, which is a Perl wrapper for testing an exim config file. However, it has a couple of problems: (1) it’s Perl, and I Don’t Do Perl; (2) it’s limited to testing the routing of addresses, so it’s not going to cut it for checking spam rejection etc.
Having at least pretended not to be suffering from NIH syndrome, let’s spec out a fantasy system for doing what I want: I would like to be able to write some nice high-level tests in my favourite language, Python, which look a bit like this:
class HubbedDomainTests(EximTestCase):
    """
    Tests for domains our server acts as the 'proxy MX' for, doing
    scanning etc before forwarding the mail to the destination machine
    """
    def testProxiedMailAccepted(self):
        """Proxied mail should be accepted"""
        session = self.newSession()
        session.mailFrom('firstname.lastname@example.org').rcptTo('email@example.com').randomData()

    def testLocalPartsVerifiedWithDestinationMachine(self):
        """Local parts should be verified with the destination machine"""
        session = self.newSession()
        session.mailFrom('firstname.lastname@example.org').assertRcptToRejected('email@example.com')
I could then run these in the usual manner for Python unit tests. Lastly, I want them backed by an exim4 -bhc session so that they’re as realistic as possible without actually sending messages.
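Under the hood, backing a test on an exim4 -bhc session just means driving that fake session from Python; a rough sketch (not the actual library code) might look like this:

import subprocess

# Run a fake SMTP session via 'exim4 -bhc', feeding SMTP commands on stdin
# and returning everything exim prints (responses plus debug output).
def run_fake_session(client_ip, commands):
    proc = subprocess.Popen(["exim4", "-bhc", client_ip],
                            stdin=subprocess.PIPE,
                            stdout=subprocess.PIPE,
                            universal_newlines=True)
    out, _ = proc.communicate("\r\n".join(commands + ["QUIT"]) + "\r\n")
    return out

print(run_fake_session("192.0.2.55", [
    "HELO test.example.org",
    "MAIL FROM:<test@example.org>",
    "RCPT TO:<someaddress@dnorth.net>",
]))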
This post is long enough already, so I’ll cut to the chase and say that I’ve made a start on writing it, and you can find out more at Bitbucket. In a follow-up post, I’ll talk about how it was done.
You know how it is: you’re hosting some creaky mass of PHP and SSIs on your box for historical/hysterical reasons, the site requires some kind of FTP access for its admin to edit it, and you’d rather not give them an SSH login with which to do arbitrary stuff on your machine.
For the last couple of years, I’ve used scponly (this guide) to achieve roughly the right effect, but having an essentially unmaintained chroot on my box slowly collecting security vulnerabilities felt wrong. Surely it must be possible to provide secure FTP to users without using SSH at all, and without having to maintain a chroot?
It is, with ProFTPD and its mod_sftp module:

# apt-get install proftpd-basic
The documentation is pretty good; it enabled me to arrive at the config below (suck it into the main one using an include for ease of maintenance) with just the one diversion to work out why my WinSCP wouldn’t talk to it (see the protocol switching line below). In WinSCP’s defence, I am using a pretty ancient version.
# Use SFTP with the same keys as SSH
# http://www.proftpd.org/docs/contrib/mod_sftp.html
SFTPEngine on
SFTPLog /var/log/proftpd/sftp.log
SFTPHostKey /etc/ssh/ssh_host_rsa_key
SFTPHostKey /etc/ssh/ssh_host_dsa_key

# Enable compression
SFTPCompression delayed

# Workaround for WinSCP bug: http://winscp.net/forum/viewtopic.php?t=8121
SFTPClientMatch ".*WinSCP.*" sftpProtocolVersion 4

# Allow the same number of authentication attempts as OpenSSH.
#
# It is recommended that you explicitly configure MaxLoginAttempts
# for your SSH2/SFTP instance to be higher than the normal
# MaxLoginAttempts value for FTP, as there are more ways to authenticate
# using SSH2.
MaxLoginAttempts 6

# Only allow specifically whitelisted users (members of the ftp group)
# http://www.proftpd.org/docs/howto/Limit.html
<Limit LOGIN>
    AllowGroup ftp
    DenyAll
</Limit>
Just make sure your users are in the ftp group, but not able to log in through SSHD.
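On Debian that boils down to something like this (the account name is just a placeholder; ‘ftp’ is the group used in the <Limit LOGIN> block above):

# add the site admin's existing account to the ftp group
adduser siteadmin ftp

# and in /etc/ssh/sshd_config, keep that group out of real SSH logins
DenyGroups ftp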
Of course, with either solution you still have to worry about scripts and PHP executed by your user’s website being able to see the full filesystem of the machine, but mod_chroot and mod_suexec for Apache are both well documented and also Debian packaged.
Port forwarding with xinetd – this is really useful. I spent ages trying and failing to get an iptables-based forward from an IP address/port on machine 1 to an IP address/port on a remote machine 2. I thought about writing some sick Python to forward incoming connections, but running it as root to bind the right port/IP or doing yet more iptables didn’t appeal … and then I remembered this is what xinetd is for.
So on Debian, you can apt-get install xinetd and use a config like the one below to forward arbitrary ports (there’s also a ‘bind’ directive to specify the IP to bind to on the forwarding machine).
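For the record, a redirect service definition looks something like this (the service name, listening port and target address are just placeholders):

service my-forward
{
    type        = UNLISTED
    disable     = no
    socket_type = stream
    protocol    = tcp
    user        = nobody
    wait        = no
    port        = 8080
    # optionally: bind = <IP on this machine to listen on>
    redirect    = 192.0.2.10 80
}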
As if they weren’t an inefficient enough organisation to deal with in other respects, today a division of British Gas asked me to send them a remittance advice to:

CardiffC&MFinance@centrica.com
Yes, that really is an ampersand in the local part. Sufficiently unusual that trying to send to it upset my default-configured installation of Exim:
rejected RCPT <CardiffC&MFinance@centrica.com>: restricted characters in address
You can tone down the (perfectly reasonable) check for these iffy characters by exempting centrica.com from it: edit /etc/exim4/conf.d/acl/30_exim4-config_check_rcpt and edit the domains line of the second ‘restricted characters in address’ ACL to read:
domains = !+local_domains : !centrica.com
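For context, the whole stanza (the stock Debian one apart from that domains line – double-check the exact layout against your own config file) then looks roughly like this:

deny    message       = restricted characters in address
        domains       = !+local_domains : !centrica.com
        local_parts   = ^[./|] : ^.*[@%!] : ^.*/\\.\\./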