This has got very slick. These days, if you buy a new Pi with one of their SD cards, it boots to a nice menu asking which operating system you’d like to install – including Microsoft’s IoT offering, which looks interesting. I haven’t tried it yet, though, as my first Pi 3 is destined for media centre duties, making OSMC the OS of choice.
I’m in Spain this week for ApacheCon, and Monzo is definitely delivering on the promise – no foreign usage fees, just the MasterCard exchange rate. I got some cash out when I landed, and the rate was €1.16 to the pound. The Post Office, by contrast, seem to require a minimum spend of £400 just to get a worse rate (€1.1266 to the pound) – and who has time to get their holiday money in advance in this day and age?
As ever with these things, avoid ATM and chip and PIN machines offering to charge you in GBP – they’re highly unlikely to give a better rate.
It was about time I pensioned off the tired old Core2 Duo desktop running as my home fileserver. It sucked up sufficient electricity that it was worth having a Raspberry Pi sat on top of it, to issue a wake-on-LAN before running various tasks (e.g. backups) and turn it off again afterwards. It was also starting to develop reliability issues – who knew buying used-up hardware for a nominal £1 would barely give three years’ service…
HP’s Microservers have a good reputation as a basic home NAS box, and the £60 cashback offer running in November certainly helps: I got the Gen8 with a dual-core 2.3GHz Celeron and 4GB of RAM for £120 after cashback. Here it is:
It looks quite swish and is very quiet, especially if you select the power-saving options in the BIOS – it also puts out reassuringly little heat. The BIOS is quite nicely laid out and easy to follow, though it does seem to lack the classic “discard changes and exit” option.
It has four SATA bays behind the front door, each with a tray to slide the disks in and out on.
They’re officially not hot-swap (though apparently they are, as long as you don’t use the inbuilt RAID controller!), but at least physically moving disks in and out isn’t a problem: they even supply a little tool to handle the screws with. I put the boot disk from my old server in the leftmost slot, and the two 1.5TB halves of my main RAID array in slots 2 and 3. It booted from the first disk and Just Worked, though the BIOS takes a while to wade through all the checks.
I believe it has some sort of built in hardware RAID controller, but I prefer to stick to good ol’ fashioned Linux software RAID (newer, cooler solutions are also available): you can set up an array on any old hardware, plug the disks into something new and different, and it all comes back to life trivially. Linux is really good at moving to new hardware, and after an fsck* (even that could have been avoided if I’d set the system clock right) it was all ready to go. Try doing that with hardware RAID, where moving disks to a new controller is often impossible without wiping them and starting over.
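To illustrate just how painless that portability is, here’s roughly what reviving a software RAID array on new hardware looks like – a sketch only, with illustrative device names rather than my actual disks:

```shell
# Scan all block devices for md superblocks and assemble whatever arrays
# are found; mdadm reads the array identity from the disks themselves.
mdadm --assemble --scan

# Or assemble a known array explicitly from its member partitions
# (/dev/md0, /dev/sdb1 and /dev/sdc1 are examples, not my hardware):
mdadm --assemble /dev/md0 /dev/sdb1 /dev/sdc1

# Check the array state and members:
cat /proc/mdstat
```

The point is that none of this depends on the controller the disks were originally attached to – the array metadata lives on the disks.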
It has no fewer than three network ports on the back – two form an ordinary dual NIC and the third is for HP’s “iLO” remote management stuff. Since I’ve run out of ports on my router, I haven’t had a chance to try that out. Linux recognised the NICs as eth2 and eth3, but that might just be a hangover from the installation starting life on other hardware.
I’ve only had it a few days, but it’s been solid and reliable so far … here’s hoping it outlasts its Core2 predecessor by a good few years.
*File System ChecK – it’s a Linux command. Obviously.
I was pleasantly surprised at how easy it was (with a bit of Googling) to set up the Raspberry Pi 2 I’d had gathering dust on my desk as a print server. My increasingly venerable USB-only Canon MP480 works just fine under Debian, but having to boot my desktop to use it was becoming tedious – much of what I get done these days is laptop-based or even done on my phone.
Having set it all up, I can scan and print from any Linux box on the network (theoretically from Windows too, only I have none of those left), and, with the addition of the CUPS Print app, I can print PDFs and the like directly from my Android phone.
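For the record, the printing side of the Pi setup amounts to little more than installing CUPS and sharing the printer on the network – a rough sketch (package names are from Debian; exact driver choice depends on your printer):

```shell
# Install CUPS plus Gutenprint, which covers many Canon inkjets
sudo apt-get install cups printer-driver-gutenprint

# Allow access from the LAN and share locally attached printers
sudo cupsctl --remote-any --share-printers

# Add your user to lpadmin so the web interface on port 631 lets you
# add the printer (pick it out of what lpinfo -v reports)
sudo usermod -aG lpadmin pi
```

Scanning needs saned configuring separately, but the principle is the same: the Pi does the USB talking, everything else goes over the network.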
Update – putting some IPv6-only DNS on the Pi and pointing a Windows 10 VM at it via IPP, printing Just Works. Even more impressively, Windows drivers for your printer do not need to exist, as long as they exist for CUPS on the Pi. I just chose the generic HP PostScript printer suggested in the article and it works perfectly, which is handy as Canon have no intention of providing Win10 drivers.
Here’s what I’ve managed to do with it so far …
Since this is very new hardware, it comes as no surprise that the versions of the Yubikey tools in Debian stable aren’t new enough to talk to it. I used the Windows versions off the Yubico website to have a poke around the settings, but if the device is already in OTP+U2F+CCID mode, then that should cover all bases.
The device arrived just over 1 business day after I’d ordered, sent from a UK distributor – having had to order and pay in dollars via PayPal, it wasn’t clear that this would be so snappy. Well done Yubico.
Physically, it fits nicely onto my keyring and is barely any more noticeable than the two small USB drives and other bits and pieces hanging on there. Though its arrival did inspire me to purge my keyring of quite a lot of keys I’d been carrying around for years and almost never using. Weight is a concern when you’re dangling your keys on the end of a USB-connected device (mostly in case they pull it out of the port rather than the risk of physical damage in this case), so digging out an extension cable to let the keys rest on my desk was necessary to connect to my desktop. Laptops are naturally more convenient here.
U2F (Universal Second Factor)
I didn’t buy it for this particularly – until there’s Firefox support, U2F is a bit too edge-case for me. However, with the addition of the relevant udev rules (easily Google-able), it does work with Google Chrome on Debian and is a more convenient alternative to the Google-Authenticator based two-factor auth I already had enabled on my Github and Google accounts. The nice thing is that both services will fall back to accepting codes from the app on your phone if a U2F token isn’t present, so it’s low-risk to turn this on. The Yubikey is more convenient than having to dig out my phone and transcribe a code, though (not least because a security-conscious nerd like me now has over a screenful of entries in the app) – and is also impervious to the phone running out of battery or otherwise failing.
Other than confirming it works on Yubico’s demo site, I’ve not found a use for this just yet. It’s worth noting that, if I understand correctly, it relies on Yubico having the corresponding public key to the private key embedded in the device, and thus uses require authenticating against Yubico’s servers. It is possible to insert your own key onto the device and run your own auth service, but that seemed a bit much effort for one person (and hard to convince myself it’d be more secure in the long run). Meanwhile, I see I could use the Yubikey as a second factor for key-based SSH authentication, or even a second factor for sudo. But in both cases it would require Yubico’s servers being up to authenticate the second factor, and I’m not sure what my backup plan for that would be. At the very least, I’d want to own a second Yubikey as fallback before locking down any service to the point where I couldn’t get in without one.
This was the main use-case I bought the Yubikey for. Previously I’d had a single RSA PGP key which I used for both encrypting and signing. The difficulty with this set-up is that you end up wanting the private key on half a dozen machines which you regularly use for e-mail, but the more places you have it, the more likely it is to end up on a laptop that gets stolen or similar.
For my new key, I’ve switched to a more paranoid setup which involves keeping the master key completely offline, and using sub-keys on the Yubikey to sign/encrypt/authenticate – these operations are carried out by a dedicated chip on the Yubikey and the keys can’t be retrieved from it. You’re still required to enter a PIN to do these operations, in addition to having physical possession of the Yubikey and plugging it in. However, the nice trick about the sub-keys is that in the event of the Yubikey being lost/stolen, you can simply revoke them and replace them with new ones – without affecting your master key’s standing in the web of trust.
The Yubikey 4 supports 4096-bit RSA PGP keys – unlike its predecessors, which were capped at 2048 bits. To make all this work, you need to use the 2.x series of GnuPG – 1.x has a 3072-bit limit on card-based keys, and even that turned out to be more theoretical than achievable. I couldn’t initially persuade GnuPG 2.x (invoked as gpg2 on Debian/Ubuntu) to recognise the Yubikey, but a combination of installing the right packages (gnupg-agent, libpth20, pinentry-curses, libccid, pcscd, scdaemon, libksba8) and backporting the libccid configuration from a newer version finally did the trick, with gpg2 --card-status displaying the right thing. Note that if running Gnome, some extra steps may be required to stop it interfering with gnupg-agent (but you should get a clear warning/error from gpg2 --card-status if that’s the case).
I generated my 4096 bit subkeys on an offline machine (a laptop booted from a Debian live CD) and backed them up before using “keytocard” in gpg to write them to the Yubikey.
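The keytocard dance itself, roughly – this is from memory of a GnuPG 2.0-era session, so treat it as a sketch rather than a verbatim transcript:

```shell
gpg2 --edit-key YOUR_KEY_ID
# gpg> toggle        <- switch to the secret-key view (not needed on 2.1+)
# gpg> key 1         <- select the first subkey (a * appears next to it)
# gpg> keytocard     <- move it to the Yubikey, choosing the matching slot
# gpg> key 1         <- deselect it again
# gpg> key 2         <- select the next subkey
# gpg> keytocard
# gpg> save
```

Note that keytocard *moves* the key – hence taking the backup first, while the subkeys still exist on the offline machine.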
I got into a bit of a tangle by specifying a 5-digit PIN – gpg (1.x) let me set it but then refused to accept it, saying it was too short! (The OpenPGP card spec seems to want at least six digits for the user PIN.) Fortunately I’d set a longer admin PIN, and the admin PIN can be used to reset the main PIN. I’m not sure if that was just a gpg 1.x bug, but I’ve gone for longer PINs to be safe.
Having waded through all that, the pay-off is quite impressive: Enigmail, my preferred solution for PGP and e-mail, works flawlessly with this set-up. It correctly picks and uses the sub-keys marked for encryption, and it prompts for the Yubikey PIN exactly as you’d expect when signing, encrypting and decrypting. It’s worth having a read of the documentation, and giving some consideration to the “touch to sign” facility. This goes some way towards making up for the Yubikey’s weakness compared to some more traditional PGP smart cards: it doesn’t have a trusted keypad for you to enter your PIN on, so there’s always the risk of the PIN being intercepted by a keylogger. But touch to sign means malware can’t ask the key to sign/encrypt/decrypt without you touching the button on the key for each operation. In any case, this set-up is a quantum leap over my old one and means I can make more use of PGP on more machines, with less risk of my key being compromised.
It’s worth noting that the Yubikey is fully supported on Windows and is recognised by GnuPG there too. This set-up might finally persuade me that a Windows machine can be trusted with doing encrypted e-mail.
I’ve just bought myself a Yubikey 4 to experiment with.
The U2F and OTP features are of some interest, but the main thing I bought it for was PGP via GnuPG. I was disappointed to discover that while it works (at least as far as gpg --card-status showing the device) on current Ubuntu (15.10) and even on Windows (!), it doesn’t on Debian stable (jessie). Still, this is quite a new device…
In every case on Debian/Ubuntu, you need to apt-get install pcscd
For the non-internet-connected machine on which I generate my master PGP key and do key signing, I can just use the Ubuntu live CD, but since my day-to-day laptop and desktop are Debian, this does need to work on Debian stable for it to be of much use to me. A bit of digging revealed this handy matrix of devices, and the knowledge that support for the Yubikey 4 was added to libccid in 1.4.20. Meanwhile, jessie contains 1.4.18-1. Happily, a bit more digging revealed that retrieving and using the testing package’s version of /etc/libccid_Info.plist was enough to make it all start working:
[david@jade:~]$ gpg --card-status
gpg: detected reader `Yubico Yubikey 4 OTP+U2F+CCID 00 00'
Application ID ...: XXX
Version ..........: 2.1
Manufacturer .....: unknown
Serial number ....: XXX
Name of cardholder: David North
Language prefs ...: en
Sex ..............: male
URL of public key : [not set]
Login data .......: david
Private DO 1 .....: [not set]
Private DO 2 .....: [not set]
Signature PIN ....: not forced
Key attributes ...: 2048R 2048R 2048R
Max. PIN lengths .: 127 127 127
PIN retry counter : 3 0 3
Signature counter : 0
Signature key ....: [none]
Encryption key....: [none]
Authentication key: [none]
General key info..: [none]
I’ve raised a bug asking if the new config can be backported wholesale to stable, but meanwhile, you can copy the file around to make it work.
For U2F to work in Google Chrome, I needed to fiddle with udev as per the suggestions you can find if you Google about a bit. The OTP support works straight away, of course, as it’s done by making the device appear as a USB keyboard.
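For reference, the udev rule I ended up with looked something like the following – the product ID list is from memory, so check Yubico’s published rules for your exact device:

```
# /etc/udev/rules.d/70-u2f.rules
# Grant the logged-in user access to Yubico U2F devices (vendor 1050).
ACTION!="add|change", GOTO="u2f_end"
KERNEL=="hidraw*", SUBSYSTEM=="hidraw", ATTRS{idVendor}=="1050", \
  ATTRS{idProduct}=="0113|0114|0115|0116|0120|0402|0403|0406|0407|0410", \
  TAG+="uaccess"
LABEL="u2f_end"
```

After dropping that in, `udevadm control --reload` (or a replug of the key) should be enough for Chrome to see it.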
I’ve been erasing and reinstalling a few machines lately – some for sale on eBay, some to set them up for friends. And I’ve finally reached the point where I’m fed up of having to dig out the writable DVDs and download DBAN or Knoppix for the nth time. It’s especially irritating when the computer in question doesn’t have a DVD drive.
One answer to this is network booting – any PC with an ethernet port these days can usually do it, so all I needed was a network boot server on my LAN at home. I’ve always been a little nervous of this as it means taking the job of doing DHCP away from the router and doing it myself on a Linux box*, but since I have an always-on Raspberry Pi anyway, I thought I might as well have a go.
There’s plenty of information on the internet about this sort of thing, but most of it is out of date or needlessly complicated. I used dnsmasq as my DHCP and TFTP server, and the very latest and greatest pxelinux to boot to. It’s worth noting that if you get a new enough version (I compiled myself a 6.03), it has native support for fetching the things to be booted over HTTP (no need to mess around chain-loading something more complicated to do the HTTP bit). This is important, as TFTP is glacially slow to the point of being unusable for multi-megabyte boot images.
Armed with this, you can do the following in dnsmasq.conf:
# Network boot
enable-tftp
tftp-root=/tftpboot
dhcp-boot=lpxelinux.0
# DHCP Option 210 = PXE path prefix (RFC 5071)
dhcp-option-force=210,http://your-network-boot-server/boot/
You need to use Apache or similar to expose the right files under /boot/, of course. Armed with this, you can use paths relative to http://your-network-boot-server/boot/ in the menu configuration for pxelinux and you should see things being booted over HTTP nice and quickly.
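For completeness, here’s the shape of a minimal menu configuration to go with it – the kernel path and label are illustrative, and menu.c32 (plus its library modules, in 6.03) needs to sit alongside lpxelinux.0 in the TFTP root:

```
# /tftpboot/pxelinux.cfg/default
# KERNEL paths resolve relative to the HTTP prefix from DHCP option 210.
UI menu.c32
MENU TITLE Home network boot

LABEL dban
    MENU LABEL Darik's Boot and Nuke
    KERNEL dban/dban.bzi

LABEL local
    MENU LABEL Boot from local disk
    LOCALBOOT 0
```

With that in place, adding a new bootable image is just a matter of dropping files under the web root and adding a LABEL stanza.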
I’ll finish with a screenshot of a three-year-old laptop displaying my network boot menu:
* Yes, if you have a clever enough router you can simply have it announce the network boot server in its DHCP replies and leave it doing DHCP, but when was the last time you saw that option in a consumer-grade router?
Update: The change described below does not seem to have reliably stopped Google from bouncing my e-mails. Time to ask them what they’re doing…
I obviously spoke too soon. Having complimented Google for finally enabling IPv6 on Google Apps, I was lying in bed this morning firing off a few e-mails from my phone when this bounce came back:
This message was created automatically by mail delivery software.

A message that you sent could not be delivered to one or more of its recipients. This is a permanent error. The following address(es) failed:

  firstname.lastname@example.org
    SMTP error from remote mail server after end of data:
    host ASPMX.L.GOOGLE.COM [2a00:1450:400c:c05::1a]:
    550-5.7.1 [2001:41c8:10a:400::1 16] Our system has detected that this
    550-5.7.1 message does not meet IPv6 sending guidelines regarding PTR records
    550-5.7.1 and authentication. Please review
    550-5.7.1 https://support.google.com/mail/?p=ipv6_authentication_error for more
    550 5.7.1 information. ek7si798308wic.60 - gsmtp
Hmm. The recipient address has been changed, but the rest of the above is verbatim. The page Google link to says:
“The sending IP must have a PTR record (i.e., a reverse DNS of the sending IP) and it should match the IP obtained via the forward DNS resolution of the hostname specified in the PTR record. Otherwise, mail will be marked as spam or possibly rejected.”
All of which is reasonable-ish, but the sending IP does have a PTR record which matches the IP obtained by forward resolution:
david@jade:~$ host 2001:41c8:10a:400::1
1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.4.0.a.0.1.0.8.c.1.4.1.0.0.2.ip6.arpa domain name pointer diamond.dnorth.net.
david@jade:~$ host diamond.dnorth.net.
diamond.dnorth.net has address 126.96.36.199
diamond.dnorth.net has IPv6 address 2001:41c8:10a:400::1
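Incidentally, the nibble-reversed name is mechanical enough to derive by hand if you ever need to sanity-check one – a quick sketch using standard tools (rev here is the util-linux utility):

```shell
# 2001:41c8:10a:400::1 written out in full, with the colons stripped:
addr="200141c8010a04000000000000000001"
# Reverse the hex digits, put a dot after each one, append the zone name.
echo "$addr" | rev | sed 's/./&./g; s/$/ip6.arpa/'
# -> 1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.4.0.a.0.1.0.8.c.1.4.1.0.0.2.ip6.arpa
```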
So what are they objecting to? Some Googling and some speculation suggests that they might be looking at all hosts in the chain handling the message (!). Further down the bounce in the original message text we find:
Received: from [2a01:348:1af:0:1571:f2fc:1a42:9b38]
        by diamond.dnorth.net with esmtpsa (TLS1.0:RSA_ARCFOUR_MD5:128)
        (Exim 4.80)
        (envelope-from email@example.com)
        id 1Vrm3Q-0002Ay-NH; Sat, 14 Dec 2013 10:02:36 +0000
Now, the IPv6 address given there is the one my phone had at the time. It doesn’t have reverse DNS because you can’t disable IPv6 privacy extensions in Android (also Google’s fault!), and assigning reverse DNS to my entire /64 would require a zone file many gigabytes big.
At this point, it’s probably best to stop speculating on Google’s opaque system and start working around it from my end. Others have resorted to disabling IPv6 for their e-mail server altogether – no thanks – or just for sending to gmail.com. This latter approach doesn’t work for me as the example above involves my-friend-with-google-apps.example.com – and potentially lots of different domains will be using Google Apps for mail, so a simple domain-based white/blacklist isn’t going to cut it.
After spending some time with the excellent Exim manual, I’ve come up with a solution. It involves replacing the dnslookup router with two routers, one for mail to GMail/Google Apps hosted domains, and one for other traffic. Other settings on the routers are omitted for brevity, but you should probably keep the settings you found originally.
dnslookup_non_google:
  debug_print = "R: dnslookup (non-google) for $local_part@$domain"
  # note this matches the host name of the target MX
  ignore_target_hosts = *.google.com : *.googlemail.com
  # not no_more, because the google one might take it

dnslookup_google:
  debug_print = "R: dnslookup (google) for $local_part@$domain"
  # strip received headers to avoid Google's silly IPv6 rules
  headers_remove = Received
  headers_add = X-Received: Authenticated device belonging to me or one of my users
  no_more
Following on from my earlier post about co-locating a Raspberry Pi at EDIS, I can now report that my Pi is up and running in Austria.
On the downside, it took a full month from posting it to having it plugged in (but, then, I didn’t pay for the service, so I can’t really complain).
On the upside, it came up correctly with both IPv4 and IPv6 connectivity. The link seems to be solid and stable for both – slightly longer pings over v6 to the same physical location, but nothing to worry about.
Conspiracy theorists posited that EDIS were going to sell all the Pis on eBay and virtualise them onto an old Pentium 4. If they’ve done that, it’s a very slick job – the processor serial number of what I can SSH into matches that of the Pi I shipped.
I plan to install Nagios on it – watch this space.
A few weeks ago, the story broke that hitherto unheard of (by me, at least) Austrian hosting firm EDIS was offering free co-location of Raspberry Pis in their data centre. Since I hadn’t really found a use for mine (like almost every geek I know who bought one ‘just because’), I decided to give it a go. The plan is to use it to run Nagios to keep an eye on various machines I run back here in the UK.
This guy describes how you can make a start, and I pretty much followed his lead – install the basic Debian image from the official RPi site, then rip out everything graphical, set up an SSH server, firewall it and expand the root partition to fill the SD card. In my case, I didn’t bother shipping a USB stick in it – the 16GB SD card should be all the storage a basic monitoring installation will ever need.
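“Firewall it” is doing a fair bit of work in that sentence – as a hedged example, the inbound policy could be as simple as this iptables-restore rules file (the path assumes the iptables-persistent package; you’d want an ip6tables equivalent too):

```
# /etc/iptables/rules.v4 -- drop everything inbound except SSH and ping
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -i lo -j ACCEPT
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -p tcp --dport 22 -j ACCEPT
-A INPUT -p icmp -j ACCEPT
COMMIT
```

For a box that will sit unattended in someone else’s data centre, a default-deny policy like this keeps the attack surface down to the SSH daemon alone.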
The last thing to do before posting it (along with a USB cable to power it) is configure the IP addresses they gave you (you were cool enough to ask for IPv6 too, right?). I wrote /etc/network/interfaces like this:
auto eth0
allow-hotplug eth0

iface eth0 inet static
    address w
    netmask 255.255.255.0
    gateway x

iface eth0 inet6 static
    address y
    # we're assigned a /112, but the routing is /48 based
    netmask 48
    gateway z
Obviously, replace the ws, xs, ys and zs with the settings they e-mailed you.
It’s worth noting (I had to ask EDIS to clarify this) that they don’t provide IPv4 DNS servers for you to use – go for Google public DNS or similar, with /etc/resolv.conf like this:
nameserver 8.8.8.8
nameserver 8.8.4.4
There’s not much you can do to test you’ve got the networking right, but I did boot it and check eth0 came up when a cable was plugged in, with the right IPs on it. You can also check the output of ‘route -n’ (and ‘route -A inet6 -n’ for the IPv6 table) to make sure the default routes go via the gateways they should.
I posted my Pi (Royal Mail’s standard air mail – cost about £2) this morning. I’ll write a follow-up when it’s arrived and running.