Category Archives: SysAdmin

Sip2Sim and the OnePlus Two

Andrews and Arnold do an interesting service where they supply a SIM card which connects to VOIP at their end. Annoyingly, they don’t have a sensibly usable set of 07… UK mobile numbers they can route onto VOIP to go with the service, but since my OnePlus Two has two SIM slots, that seemed like a way to give it a punt…

Double SIM carding it … like a pro (this is the drawer from the OnePlus Two)

The particular variant of SIM I ordered (O2/EU Voice) doesn’t push out to Nano SIM, instead requiring a pair of scissors and a steady hand (or a proper tool, but who has one of those?). As you can see from the picture, I got away with the scissors and it even worked afterwards:

Custom network name and two signal strength indicators

Android has some pretty impressive native support for more than one SIM, and shows two signal strength indicators, as you’d expect. As you can see at the top left, the two SIM networks (operator names) are shown with a pipe separating them. For some reason you can’t fiddle with the displayed name via the control pages, but you can set it when ordering and contact support to alter it.

I ordered a London number on AAISP’s VOIP service to go with the SIM, and that all works as expected. Texts are a bit clunky (presenting as from “SIP2SIM”), but it looks like that may be configurable.

Mobile data appears to go via NAT and emerge from an IP address registered to Manx Telecom.

The two things I really wanted to play with are setting up my own Asterisk again, and using the roaming to get decent data on the train up north. I’ll report back when I’ve had a play.

Bytemark charging for IPv4 addresses

We’ve had a server at Bytemark for many years (since the summer of 2009), and for the most part they’ve been good. Towards the end of last year, they started charging for extra IPv4 addresses. This came as no great surprise, given the near-exhaustion of this finite resource, but our modest /27 suddenly represented a 1/3 increase in the cost of our server.

Fortunately, Bytemark agreed to waive the increase until our next annual renewal, after I got a bit shirty about mid-term price increases. Which left us with the joy of consolidating our usage. They’re quite right to point out that the advent of SNI support in all modern browsers, and things like sslh, mean we don’t need so many addresses any more; but having parcelled them out to my six users in blocks of four (making firewalling easier, as everyone had a /30), I had a long and tedious consolidation exercise to carry out.
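For the curious, sslh sits on the HTTPS port and sniffs the first bytes of each incoming connection to decide where to forward it, so one address serves both SSH and the web server. A sketch of an invocation (addresses and ports illustrative):

# route SSH clients to sshd, anything that looks like TLS to the web server
sslh --user sslh --listen 192.0.2.10:443 \
     --ssh 127.0.0.1:22 --ssl 127.0.0.1:4443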

Happily, many reboots and much faffing later, we’re nearly there and should be able to hand back 16 of our 32 addresses next month, thus cutting our cost by £192+VAT/year.

This has prompted me to take a closer look at going all-IPv6. I’ve ordered a small IPv6 only VM from Mythic Beasts to play with. Teething troubles aside, it works quite well, with inbound proxying for the websites and NAT64 for outbound access to IPv4 services. Running just the one IP stack feels much cleaner and easier to administer, and opens up the possibility of using an IP address per website/service with no danger of running out.
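The NAT64 trick, for anyone unfamiliar: a DNS64 resolver synthesises AAAA records that embed the target’s IPv4 address in the low 32 bits of a /96 prefix, and the NAT64 gateway unpacks it again on the way out. Using the well-known prefix 64:ff9b::/96 from RFC 6052 as a worked example:

# 192.0.2.1 is c0.00.02.01 in hex, so it embeds like this:
$ printf '64:ff9b::%02x%02x:%02x%02x\n' 192 0 2 1
64:ff9b::c000:0201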

Edit: Bytemark’s original announcement failed to mention that it’s £1 plus VAT for each IPv4 address. Sigh.

Raspberry Pi print & scan server

I was pleasantly surprised at how easy it was (with a bit of Googling) to set up the Raspberry Pi 2 I’d had gathering dust on my desk as a print server. My increasingly venerable USB-only Canon MP480 works just fine under Debian, but having to boot my desktop to use it was becoming tedious – much of what I get done these days is laptop-based or even done on my phone.

Having set it all up, I can scan and print from any Linux box on the network (theoretically from Windows too, only I have none of those left), and, with the addition of the CUPS Print app, I can print PDFs and the like directly from my Android phone.
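In case it saves anyone some Googling, the guts of the setup looked roughly like this – a sketch, with package names as in Debian/Raspbian and the exact sane configuration depending on your scanner:

# printing: install CUPS, allow remote admin, share the printer
sudo apt-get install cups sane-utils
sudo usermod -a -G lpadmin pi
sudo cupsctl --remote-admin --remote-any --share-printers
# scanning: allow the LAN in /etc/sane.d/saned.conf and enable saned;
# clients then name the Pi in their /etc/sane.d/net.conf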

Update – after putting some IPv6-only DNS on the Pi and pointing a Windows 10 VM at it via IPP, printing Just Works. Even more impressively, Windows drivers for your printer do not need to exist, as long as they exist for CUPS on the Pi. I just chose the generic HP PostScript printer suggested in the article and it works perfectly, which is handy, as Canon have no intention of providing Win10 drivers.
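The URL you give Windows is just the standard CUPS IPP one (hostname and queue name here are illustrative):

http://printpi.example.com:631/printers/Canon_MP480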

E-mail for the last three days – my part in its downfall

You know Tuesday afternoon is going well when your mother calls and asks why she hasn’t received any e-mail for the last three days. Come to think of it, I hadn’t had much e-mail lately either.

Five minutes later on the mailserver (IP addresses and domain names changed to protect the guilty)…

$ exim4 -bhc 128.66.0.1

RCPT TO:<bar@example.com>
>>> check hosts = ${sg {${lookup sqlite{ /path/to/my/sqlite.database SELECT host FROM blacklist WHERE domain = '$domain'; }}}{\n}{: }}
...
>>> no IP address found for host foo.example.com (during SMTP connection from (somewhere) [128.66.0.1])
>>> foo.example.com in dns_again_means_nonexist? no (option unset)
>>> host in "foo.example.com"? list match deferred for foo.example.com
>>> deny: condition test deferred in ACL "acl_check_rcpt"
451 Temporary local problem - please try later
LOG: H=(somewhere) [128.66.0.1] F=<foo@example.com> temporarily rejected RCPT <bar@example.com>

$ host foo.example.com
Host foo.example.com not found: 2(SERVFAIL)

Yes, if you have a host-based blacklist which contains names as well as IP addresses, Exim will defer messages if it gets a “temporary” DNS failure when looking up names on the list. So not only did the owner of foo.example.com screw me over by sending me spam, but their broken DNS deferred all of my incoming mail. Excellent. I’ll let you know if the internet comes up with a solution to this other than the per-domain one.
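For reference, the per-domain workaround is Exim’s dns_again_means_nonexist option in the main configuration – a sketch, using the (anonymised) domain from the transcript above:

# treat "temporary" DNS errors for these hosts as "no such host" rather
# than deferring the message; setting it to * would cover everyone, at
# the cost of hard-failing mail during genuine DNS wobbles
dns_again_means_nonexist = foo.example.com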

Yubico Yubikey 4: PGP, U2F and other things

I recently bought myself a Yubikey 4, as previously mentioned.

Here’s what I’ve managed to do with it so far …

General observations

Since this is very new hardware, it comes as no surprise that the versions of the Yubikey tools in Debian stable aren’t new enough to talk to it. I used the Windows versions off the Yubico website to have a poke around the settings, but if the device is already in OTP+U2F+CCID mode, then that should cover all bases.

The device arrived just over 1 business day after I’d ordered, sent from a UK distributor – having had to order and pay in dollars via PayPal, it wasn’t clear that this would be so snappy. Well done Yubico.

Physically, it fits nicely onto my keyring and is barely any more noticeable than the two small USB drives and other bits and pieces hanging on there. Its arrival did inspire me to purge my keyring of quite a lot of keys I’d been carrying around for years and almost never using, though. Weight is a concern when you’re dangling your keys on the end of a USB-connected device (mostly the risk of them pulling it out of the port, rather than physical damage), so digging out an extension cable to let the keys rest on my desk was necessary to connect to my desktop. Laptops are naturally more convenient here.

U2F (Universal Second Factor)

I didn’t buy it for this particularly – until there’s Firefox support, U2F is a bit too edge-case for me. However, with the addition of the relevant udev rules (easily Google-able), it does work with Google Chrome on Debian and is a more convenient alternative to the Google-Authenticator based two-factor auth I already had enabled on my Github and Google accounts. The nice thing is that both services will fall back to accepting codes from the app on your phone if a U2F token isn’t present, so it’s low-risk to turn this on. The Yubikey is more convenient than having to dig out my phone and transcribe a code, though (not least because a security-conscious nerd like me now has over a screenful of entries in the app) – and is also impervious to the phone running out of battery or otherwise failing.
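For anyone who’d rather skip the Googling, the rule I ended up with looked roughly like this – a sketch, where 1050 is Yubico’s USB vendor ID; newer distributions tend to use TAG+="uaccess" rather than the group-based approach:

# /etc/udev/rules.d/70-u2f.rules
KERNEL=="hidraw*", SUBSYSTEM=="hidraw", ATTRS{idVendor}=="1050", MODE="0664", GROUP="plugdev"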

OTP

Other than confirming it works on Yubico’s demo site, I’ve not found a use for this just yet. It’s worth noting that, if I understand correctly, it relies on Yubico holding the public key corresponding to the private key embedded in the device, and thus using it requires authenticating against Yubico’s servers. It is possible to insert your own key onto the device and run your own auth service, but that seemed a bit too much effort for one person (and it would be hard to convince myself it’d be more secure in the long run). Meanwhile, I see I could use the Yubikey as a second factor for key-based SSH authentication, or even as a second factor for sudo. But in both cases it would require Yubico’s servers being up to authenticate the second factor, and I’m not sure what my backup plan for that would be. At the very least, I’d want to own a second Yubikey as a fallback before locking down any service to the point where I couldn’t get in without one.

PGP

This was the main use-case I bought the Yubikey for. Previously I’d had a single RSA PGP key which I used for both encrypting and signing. The difficulty with this set-up is that you end up wanting the private key on half a dozen machines which you regularly use for e-mail, but the more places you have it, the more likely it is to end up on a laptop that gets stolen, or similar.

For my new key, I’ve switched to a more paranoid setup which involves keeping the master key completely offline, and using sub-keys on the Yubikey to sign/encrypt/authenticate – these operations are carried out by a dedicated chip on the Yubikey and the keys can’t be retrieved from it. You’re still required to enter a PIN to do these operations, in addition to having physical possession of the Yubikey and plugging it in. However, the nice trick about the sub-keys is that in the event of the Yubikey being lost/stolen, you can simply revoke them and replace them with new ones – without affecting your master key’s standing in the web of trust.

The Yubikey 4 supports 4096 bit RSA PGP keys – unlike its predecessors, which were capped to 2048 bits. To make all this work, you need to use the 2.x series of GnuPG – 1.x has a 3072 bit limit on card-based keys and even that turned out to be more theoretical than achievable. I couldn’t initially persuade GnuPG 2.x (invoked as gpg2 on Debian/Ubuntu) to recognise the Yubikey, but a combination of installing the right packages (gnupg-agent, libpth20, pinentry-curses, libccid, pcscd, scdaemon, libksba8) and backporting the libccid configuration from a newer version finally did the trick, with gpg2 --card-status displaying the right thing. Note that if running Gnome, some extra steps may be required to stop it interfering with gnupg-agent (but you should get a clear warning/error from gpg2 --card-status if that’s the case).

I generated my 4096 bit subkeys on an offline machine (a laptop booted from a Debian live CD) and backed them up before using “keytocard” in gpg to write them to the Yubikey.
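The session went roughly like this – a sketch, with an illustrative key ID; note that keytocard replaces the local copy of each subkey with a stub pointing at the card, hence taking the backup first:

$ gpg2 --expert --edit-key 0x12345678
gpg> addkey            # once each for signing, encryption, authentication
gpg> save
$ cp -a ~/.gnupg /media/usb/gnupg-master-backup
$ gpg2 --edit-key 0x12345678
gpg> key 1             # select the first subkey
gpg> keytocard
gpg> save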

I got into a bit of a tangle by specifying a 5 digit PIN – gpg (1.x) let me set it but then refused to accept it, saying it was too short! (The OpenPGP card spec wants at least six characters for the user PIN, which would explain the refusal.) Fortunately I’d set a longer admin PIN, and the admin PIN can be used to reset the main PIN. I’m not sure why gpg 1.x let me set the short one in the first place, but I’ve gone for longer PINs to be safe.

Having waded through all that, the pay-off is quite impressive: Enigmail, my preferred solution for PGP and e-mail, works flawlessly with this set-up. It correctly picks and uses the sub-keys marked for encryption, and it prompts for the Yubikey PIN exactly as you’d expect when signing, encrypting and decrypting. It’s worth having a read of the documentation, and giving some consideration to the “touch to sign” facility. This goes some way towards making up for the Yubikey’s weakness compared to some more traditional PGP smart cards: it doesn’t have a trusted keypad for you to enter your PIN, so there’s always the risk of the PIN being intercepted by a keylogger. But touch to sign means malware can’t ask the key to sign/encrypt/decrypt without you touching the button on the key for each operation. In any case, this set-up is a quantum leap over my old one and means I can make more use of PGP on more machines, with less risk of my key being compromised.

It’s worth noting that the Yubikey is fully supported on Windows and is recognised by GnuPG there too. This set-up might finally persuade me that a Windows machine can be trusted with doing encrypted e-mail.

Making the Yubikey 4 talk to GPG on Debian Jessie

I’ve just bought myself a Yubikey 4 to experiment with.

The U2F and OTP features are of some interest, but the main thing I bought it for was PGP via GnuPG. I was disappointed to discover that while it works (at least as far as gpg --card-status showing the device) on current Ubuntu (15.10) and even on Windows (!), it doesn’t on Debian stable (Jessie). Still, this is quite a new device…

In every case on Debian/Ubuntu, you need to apt-get install pcscd

For the non-internet-connected machine on which I generate my master PGP key and do key signing, I can just use the Ubuntu live CD, but since my day-to-day laptop and desktop are Debian, this does need to work on Debian stable for it to be of much use to me. A bit of digging revealed this handy matrix of devices, and the knowledge that support for the Yubikey 4 was added to libccid in 1.4.20. Meanwhile, Jessie contains 1.4.18-1. Happily, a bit more digging revealed that retrieving and using the testing package’s version of /etc/libccid_Info.plist was enough to make it all start working:

[david@jade:~]$ gpg --card-status
gpg: detected reader `Yubico Yubikey 4 OTP+U2F+CCID 00 00'
Application ID ...: XXX
Version ..........: 2.1
Manufacturer .....: unknown
Serial number ....: XXX
Name of cardholder: David North
Language prefs ...: en
Sex ..............: male
URL of public key : [not set]
Login data .......: david
Private DO 1 .....: [not set]
Private DO 2 .....: [not set]
Signature PIN ....: not forced
Key attributes ...: 2048R 2048R 2048R
Max. PIN lengths .: 127 127 127
PIN retry counter : 3 0 3
Signature counter : 0
Signature key ....: [none]
Encryption key....: [none]
Authentication key: [none]
General key info..: [none]

I’ve raised a bug asking if the new config can be backported wholesale to stable, but meanwhile, you can copy the file around to make it work.
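If you want to do the same, something like this should work – a sketch, where the exact .deb filename will vary with whatever version is in testing at the time:

$ dpkg-deb -x libccid_1.4.20-1_amd64.deb tmp/
$ sudo cp tmp/etc/libccid_Info.plist /etc/libccid_Info.plist
$ sudo service pcscd restart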

For U2F to work in Google Chrome, I needed to fiddle with udev as per the suggestions you can find if you Google about a bit. The OTP support works straight away, of course, as it’s done by making the device appear as a USB keyboard.

When my beeper goes off

As regular readers will know, I run a co-located server with a few friends. I’m also responsible for things like the WiFi network at church. Inevitably, these things run just fine for weeks, months or even years at a time – and then fail. The first I hear of it is usually from a would-be user, sometimes days later.

The world of open-source has a solution to this, and it’s called Nagios (actually, I think the cool kids have jumped ship to the more recent Icinga, but I haven’t played with that). It’s a monitoring system – the idea is that you install it somewhere, tell it what computers / devices / websites to monitor, then it can e-mail you or do something else to alert you when things fail or go offline.

The really nice thing about it is that it’s all pluggable – you can write your own checks as scripts, and also provide your own notification commands, for example to send you a text.
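A check script only has to honour Nagios’s exit-code convention (0 OK, 1 warning, 2 critical, 3 unknown) and print a line of status text, so a minimal sketch of a custom check looks like this (the target address is illustrative):

#!/bin/sh
# check_pingable - trivial Nagios plugin: is the host answering pings?
if ping -c 1 -W 2 192.0.2.1 >/dev/null 2>&1; then
    echo "OK - host is answering"
    exit 0
else
    echo "CRITICAL - no response"
    exit 2
fi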

The only problem is, where to put it? Hosting it on the co-lo box I’m trying to monitor is a bit self-defeating. Fortunately this is a rather good use for that Raspberry Pi I had lying around – I found a convenient internet link to plug it into, removed all the GUI packages, and it’s running Nagios just fine. So far, without really even trying, I’ve got it monitoring 76 services across 29 hosts. Some of the checks are a bit random – e.g. a once-per-day check on whether any of my domain names have expired – but now that it’s possible to buy ten-year renewals, it’s nice to have an eye on this sort of thing: who knows if one’s registrar will still have the right contact details to send reminders in a decade?*

So far it’s working a treat, texting me about problems so I can fix them before users notice, although if I add many more checks I suspect I’ll have found an excuse to invest in a Raspberry Pi 2 to keep up with the load.

If you’re doing it right, you’ll configure Nagios to know which routers and computers stand between it and the rest of the internet (and hence what it’s trying to monitor). This means it won’t generate a slew of alerts if its internet connection dies, just one. It also means you get a useful map of what you’re monitoring.
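In Nagios terms, that’s the parents directive on the host definition – a sketch with made-up names and addresses:

define host {
    use        generic-host
    host_name  router
    address    192.0.2.1
}

# alerts for this host are suppressed if its parent (the router) is down
define host {
    use        generic-host
    host_name  colo-box
    address    203.0.113.10
    parents    router
}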

* I’m pretty sure Mythic will do, because they’re great, but it never hurts to have a backup in place.

Windows and SSDs

So, having had such good results with the SSD I put in my ageing desktop, I decided to get one for my ageing laptop too. I did the copying a bit differently – since the SSD was 50GB smaller than the spinning rust it was replacing, a straight dd wasn’t going to work. In any case, it’s a bit boring waiting hours for blank space to be copied across. There was only 30GB of actual data.

I connected both drives to another machine – I couldn’t get hold of a SATA caddy to have them both on the laptop – and again, booted to SystemRescueCD. This time, though, I used GParted to copy-paste the boot partition, made an empty NTFS “main” partition and used rsync to copy the contents of the old C: drive to the new. Much faster to just copy the data. Some quirk of ntfs-3g or rsync seems to have lost the “hidden” attribute from some files, but otherwise it all worked.
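The copy itself amounted to something like this – a sketch, with device names as they happened to appear on the machine I used (yours will differ):

# mount both NTFS "main" partitions and copy the contents across
$ mkdir -p /mnt/old /mnt/new
$ ntfs-3g /dev/sdb2 /mnt/old
$ ntfs-3g /dev/sdc2 /mnt/new
$ rsync -a /mnt/old/ /mnt/new/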

You’ll have realised from my mention of NTFS that this laptop runs Windows. The purists may call me a traitor to Linux, but having at least one Windows box around is useful for all sorts of things, and this is mine.

The final annoyance was that Windows refused to boot from the new drive:

A required device is inaccessible

It did however prompt me to boot from the Windows CD and hit repair, which worked. Given how rapidly it worked, I can only assume Microsoft write the UUID of the drive to the boot sector or some such, as a tedious anti-piracy measure.

The SSD magic definitely seems to have made this laptop usable again – previously it was slow to the point of being unusable and you could hear the disk crunching inside it. Hopefully the SSD should give battery life a slight boost and be more robust against being dropped or jolted too.

Since I’m now all SSD’d up, I’ve got my desktop running DBAN to erase the old mechanical hard disks I’ve replaced. Did I mention how handy having a network boot server is?

Down the rabbit hole: OAuth, service accounts and Google Apps

I have quite a few small programs which use Google’s APIs for one thing or another:

  • Updating pages on Google Sites automatically
  • Reading a Google Docs spreadsheet and sending SMS reminders for a rota inside it
  • Reading a Google Calendar

Until recently, reading a public Google Calendar didn’t require authentication – one could simply consume the XML from the calendar’s URL and work with it using the GData API. Google knocked that on the head at some stage. Shortly afterwards, I woke up to a slew of error e-mails marking the apocalypse.

The apocalypse is how I refer to the day when Google finally removed the ability for scripts like mine to authenticate by presenting a username and password. For sure, hard-coding these into a script is not good practice, but it worked well for many years, and was a lot simpler and better documented than the alternative…

OAuth

Much has been written elsewhere about OAuth, but the main problem I had was that Google’s examples all seemed to centre around the idea of bouncing a user in their web browser to a prompt to authorize your use of their data. This is all very well for interactive web applications (and, indeed, much better than asking users to trust you with their password), but where does it leave my non-interactive scripts?

I eventually dug up the documentation on service accounts. The magic few lines of Python are these:

import json
from oauth2client.client import SignedJwtAssertionCredentials

json_key = json.load(open('pkey.json'))
scope = ['https://spreadsheets.google.com/feeds']
credentials = SignedJwtAssertionCredentials(json_key['client_email'], json_key['private_key'], scope)

The JSON for the key can be obtained via the Google APIs developer console. Most Google APIs, and things built on top of them, can then take the credentials object, and authenticate as a service account. Apparently you can also do really clever things like impersonating arbitrary users in your Google Apps domain – for example, to send calendar invites as them – but that’s for another day.
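As a concrete example of handing the credentials object to something built on top of the APIs, here’s a sketch using the third-party gspread library, continuing from the snippet above (the spreadsheet name is illustrative):

import gspread

# authorise as the service account and read the first row of the rota
gc = gspread.authorize(credentials)
rota = gc.open("Rota").sheet1
print(rota.row_values(1))

One gotcha: the service account is a user in its own right, so the spreadsheet has to be shared with the service account’s e-mail address before it can see it.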

Solid State Disks

For my less geeky readers, a Solid State Disk (SSD) is similar to the mechanical “hard disk” which has traditionally been the storage in most PCs and laptops. However, an SSD has no moving parts and works entirely off memory chips – a bit like a USB memory stick. The big advantage of them is they’re a lot faster to read data from than a mechanical hard disk.

SSDs have been standard issue at work for a while now, but I hadn’t yet had occasion to buy one myself. I was pleasantly surprised by how much of a price collapse had taken place – I picked up a 250GB SSD from Ebuyer for £64. The use-case was my home desktop PC, which was glacially slow at resuming from hibernate, and struggled to run Windows under VirtualBox. A Core2 Duo should have no problem with this from the CPU side, but it felt like I/O performance was the problem.

I was surprised by just how small and light the SSD was – of course, with no moving parts or motors to spin the platters, it’s about half the size and a tenth the weight of the equivalent mechanical disk.

Out with the old, in with the new

Since the SSD was the same size (250 marketing gigabytes, which is to say 232.9 actual gigabytes) as the “spinning rust” it was replacing, I simply connected it, booted into SystemRescueCD from my network boot server, and used dd to copy the block device of the old disk over onto the new one. It looked like it was going to take about four hours, so I issued a shutdown for five hours hence and went out for the day.
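For completeness, the whole exercise was only a couple of commands – a sketch, with device names assumed (get them the wrong way round and you copy the blank disk over your data):

$ dd if=/dev/sda of=/dev/sdb bs=4M
$ shutdown -h +300    # power off five hours from now, comfortably after dd finishes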

Having now had a chance to try it out, I’m properly impressed – the machine boots and resumes much faster and Windows under VirtualBox is now snappy enough to be usable. At that sort of price, I shall have to see about getting one for my laptop too.

Update: some good discussion on social media. The downsides of SSDs are pointed out, e.g. limited write capacity (if you were to write to this one continuously for 2.5 days, you’d wear it out), and that they aren’t suitable for archiving as power-off data retention can be limited to months. None of this matters for my use-case, but enterprise-grade SSDs with enterprise-grade price tags also exist to try and solve at least the write lifetime issue.