Everything you never knew you wanted to know about phone numbers

A couple of months ago, a number of my friends were surprised to encounter a “landline” phone number (01865 …) which, when called, got through to someone on a mobile phone. So I was inspired to write up a bit about the technology behind that and how it’s easier and cheaper than you think it is.

Here are some fun facts about phone numbers (I live in the UK, and most of these are UK-specific).

  1. 07 doesn’t necessarily mean mobile. Yes, there’s a fair chance that a UK phone number starting with 07 will have somebody’s mobile (or voicemail) on the other end of it, but it is perfectly possible to buy 07 numbers which have no device/SIM card attached to them and simply direct calls to other numbers.
  2. 01/02 doesn’t necessarily mean landline. Analogous to the above, while it’s a fair bet that an 01x or 02x UK phone number corresponds to a physical piece of copper with one or more telephone handsets on the other end of it, that doesn’t have to be the case. And even if it is, your call could be diverted somewhere else entirely by the recipient’s equipment (at their cost). It’s especially important to know that, because of this, a business publishing a number with a certain area code is no longer a guarantee that it is actually based in that area.
  3. You don’t have to have voicemail on your mobile. If you’re as annoyed as I used to be with people leaving ten second voicemails identifying themselves when you already knew from the “missed calls” list and caller ID who they were, then you can usually ask your mobile phone operator to disable voicemail on your mobile. I don’t miss it.
  4. 03 means “non-geographic”. Because pricing of 08xx numbers in the UK is complicated, numbers beginning with 03 were introduced for use in company/government settings where a nationwide number is needed, and should cost about the same as a normal national call to 01/02. Most mobile and landline packages treat 03 the same as national calls to 01/02.
  5. “Virtual” phone numbers are surprisingly affordable. As I’ve blogged here before, I run an 07xx number which we program each week to divert calls to whoever is leading that week’s walk for the Ramblers. It has a fixed cost of around £14 per year, plus a per-minute cost for the forwarded calls. Last year it cost us less than £20 in total.
  6. Caller ID should no longer cost money in the UK. The telecoms regulator Ofcom ruled last year that Caller ID on landlines should no longer be a chargeable extra (it’s been a standard facility of the network for decades now, so charging extra simply to pass it on to you had seemed dubious to me for quite a while). Given this, all you need is a landline phone handset modern enough to have a display, and you can see who’s calling you. If you’ve ever wondered why it takes one ring before the ID shows up, it’s because the number is transmitted as a modem-like series of sounds after the first ring (as long as you don’t pick up very quickly).
  7. 17070 can be useful. If you want a UK landline to read back its own number to you, most networks allow you to get this by dialling 17070, an engineers’ test facility which should be free and work even on stopped/suspended lines, e.g. when you’re moving into a property – as long as the line has a dial tone, 17070 should work.
  8. You can “call” people using Facebook Messenger, WhatsApp or Signal. Look carefully at the interfaces in these apps, and you’ll see the option to call people using them. The advantage of this is that the call goes over the internet, so provided you are on a non-metered internet connection, it won’t cost you on a per-minute basis. This is especially useful when calling home from abroad (although, at least for another two months, EU roaming means UK citizens abroad in Europe can usually call home on their mobiles at the same cost as calling from within the UK).

tar pipe with nc, updated for 2019

If you’re a Linux user, you’re probably familiar with the so-called tar pipe, a quick and dirty method for transferring files across the local network. It works, and it uses the most basic of tools. Indeed, if I want to shove a load of data between two Windows machines as a one-off, e.g. for a backup, I often find it quickest to boot them both from a Linux live CD / USB stick and use a tar pipe (ntfs-3g to mount the disks, naturally). Much easier than trying to persuade file sharing to work properly.

My personal variant, on systemrescuecd:

sending end:~# tar cvf - * | nc receiving-end 1234

receiving end:~# nc -l -p 1234 | tar xvf -

The addition of v for verbose means you get a print out of files being sent and arriving, giving you a crude approximation of progress and a rough idea of when it’s finished.

It’s also worth noting that the connection provided by netcat is bidirectional (it’s just a TCP socket), so you can in fact establish it the other way round (which is handy if the receiving end is e.g. the Windows Subsystem for Linux, where the Windows firewall gets in the way of listening for an inbound connection):

sending end:~# tar cvf - * | nc -l -p 1234

receiving end:~# nc sending-end 1234 | tar xvf -
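Incidentally, since nc does nothing but ferry the byte stream, you can sanity-check the tar half of the pipe locally by swapping netcat for a plain shell pipe (the directory names below are just examples):

```shell
# Sanity-check the tar pipe without netcat: a plain shell pipe between the
# two tar processes behaves the same as a TCP socket carrying the stream.
mkdir -p /tmp/tarpipe-src /tmp/tarpipe-dst
echo "hello" > /tmp/tarpipe-src/example.txt
( cd /tmp/tarpipe-src && tar cf - . ) | ( cd /tmp/tarpipe-dst && tar xf - )
```

If the file turns up intact in the destination directory, any remaining trouble is down to netcat or the network, not tar.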

SSD transplant: Windows 8

A disk I’d been keeping a concerned eye on for some time … now retired

As is traditional while staying with relatives at Christmas, I did some PC upgrades. I was much happier with the slightly venerable Lenovo desktop running Windows 8 once I’d swapped out its rather noisy/crunchy 1TB hard disk for a 250GB SSD I happened to have spare.

Either I’m getting better as I get older, or the tools are improving. Five minutes on Google suggested using ddrescue from SystemRescueCD, and I simply deleted the references to recovery partitions which fell beyond the end of the new, smaller disk. By far the longest part of the job was taking a backup first.
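For the record, the basic clone is a one-liner. The device names below are illustrative, not what my machine used – check with lsblk first, because getting source and destination the wrong way round destroys the original:

```
# Clone the old disk onto the new SSD, block by block (device names are
# assumptions; verify with lsblk first). -f is required because the
# destination is a block device. The map file records progress, so an
# interrupted run can be resumed with the same command.
ddrescue -f /dev/sda /dev/sdb /root/clone.map
```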

I’ll do a separate post on how that backup was done, as it was also the fiddliest part of the operation.

Credit where credit’s due to Microsoft: although Windows refused to boot after the transplant (“A required device is inaccessible”), it did offer me safe mode, and rebooting from there restored normal service. No need for CDs or suchlike faff.

TP-Link Archer VR600

TP-Link Archer VR600

You might be wondering why I’m blogging about another router just after buying myself something much nicer. The answer, as ever, is one of the handful of friends and family for whom I still do tech support. In their case, it really had to be a single-box solution which does everything, and while MikroTik is all good fun to spend hours configuring for oneself, something much more plug in and go was needed in this case.

The person in question had a Billion 8800NL (ISP supplied), which is well regarded but felt a bit flimsy. More to the point, it didn’t seem capable of reliably reconnecting after line drops without being turned off and on again. Things had massively stabilised (maybe one reboot per week required) and I was almost tempted to leave well alone, but even one failure to recover automatically is really too many for this user. It needs to Just Work, especially since we are indulging in the rather adventurous practice of VoIP over ADSL as this person’s primary “landline”.

A trawl around Amazon for ADSL routers is a rather boring thing. Anything costing less than £100 – and plenty costing that or more – seems to have at least some reviewers ranting about lock-ups, overheating and dead spots. In the end, we spent £100 on the TP-Link Archer VR600, partly because it looked OK and partly because I could go and get one from Argos rather than waiting for delivery.

It’s quite nicely built, and the web interface makes reasonable sense. It allows various things including setting it to respond to pings from the internet (essential for my tech support “clients”, whose lines I have configured on my monitoring system so I get notified of any outages), and the usual array of port forwarding, WiFi, etc. It was rather sad to find an option in there to have the thing reboot itself on a daily or monthly schedule – surely an admission that they haven’t engineered it very well in the first place…

That said, it was really good to find an option for automatic firmware updates – the days of downloading arcane .bin files and uploading them by hand are (or should be) well over, and I’d much rather have an installation like this one take care of itself automatically.

The router also allows remote admin from a specified IP address, which is handy as it allowed me to set it up for remote control from a location of mine with a fixed IP. This is good in theory, but the web interface is horribly broken unless you visit it at http://ip-address-of-router (i.e. anything different in the address bar, caused by assigning some DNS or reaching it indirectly via a port forward, causes it to get upset and fail to load its CSS).

I’ll update this post in a couple of weeks with how well it manages to hang on to the ADSL connection (and recover it in the event of blips).

Update, 5 January 2019: The end user now thinks the connection is rock solid reliable. Nagios shows it does still flake out occasionally, maybe once every few days, but these blips all seem to recover automatically without human intervention. The Billion is going on eBay.

Innotech iComm and SSH port forwarding

You don’t get uptime like that on anything modern

A bit of a blast from the past, this one.

Back in 2011, we replaced all the heating at the church. Sadly this was just before the era of off the shelf heating controllers which did multiple zones and could be controlled from a web page or an app. So instead, we have a more old-fashioned HVAC controller made by Innotech. It cost (from memory) a couple of grand to source and install, and it’s less capable (in my opinion) than a Raspberry Pi with a few relays wired to it.

It was designed to be controlled over serial, so the installer attached an Ethernet to serial module to it, we ran Ethernet to the basement, and bam – we can use their clunky but serviceable suite of Windows apps to program the temperatures and seven day calendars controlling the heating and hot water.

Sensors report … lots of things

The installer claimed this could be done remotely by forwarding the port the software uses (20000) from our ADSL router to the controller, then connecting to our IP address remotely. This never worked at the time (we suspected the latency on our ADSL upset it), but now we have a Virgin Media link it was time to try again.

This stuff may be arcane, but exposing it unprotected to the internet felt like asking for trouble. However! A quick port forward in PuTTY (tunnelling through the Raspberry Pi sitting in our comms cabinet), and the Windows software happily talks to localhost and it all works.
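For anyone doing the same with command-line OpenSSH rather than PuTTY, the equivalent forward looks like this – the user, host name and controller address are made-up examples, not our actual setup:

```
# Forward local port 20000 through the Pi to the Ethernet-serial module
# on the LAN (names and addresses are illustrative):
ssh -L 20000:192.168.0.50:20000 pi@church-pi.example.org
# ...then point the Innotech software at localhost:20000
```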

Now that it’s finally possible to work with this stuff from the comfort of my own home, I am tempted to see if I can reverse engineer enough of its communications to write a web front end and ditch the elderly Windows apps.

Triple monitors + T470s: yes you can

If you happen to have a Lenovo T470s and a USB-C dock with a single HDMI output, it seems Windows 10 can cope with driving twin external displays: one over the dock, and one on the laptop’s own HDMI port. This is in addition to the laptop screen, although my particular monitors aren’t HD.

Update: don’t unplug the connections after suspending the laptop, or the internal display won’t work when you wake it up again!

Mikrotik hAP ac: really rather nice

Mikrotik hAP ac

I got myself an early Christmas present. Various things have always bothered me about ISP-supplied routers. In particular, the BT Home Hub 6:

  • Slow web interface
  • Can’t be made to respond to ping from the internet (or at least, the machine running my monitoring system)
  • IPv6 support feels sort-of iffy – hard to pin this down, but sometimes devices seem not to get a v6 address for no good reason
  • No way to get it to tell you stats, e.g. how much have I downloaded this month? (Useful to know if you’re pondering the cost of switching to an ISP with usage-based billing)
  • No guest WiFi network option
  • Broadcasts a BT Free Wifi type network with no way to turn it off
  • Occasionally gets a different IPv6 prefix when rebooted

And, although you can keep the WiFi network name the same when swapping in a new router, you still end up having to reconfigure static IP addresses, port forwarding, etc. Time to separate the job of routing from the job of speaking to my ISP…

Various colleagues recommended Mikrotik. I had a dig around their Home/SME offerings and decided on the hAP ac – for a two-bedroom flat, trading Ethernet ports for faster WiFi makes sense. It’s handy that it has five ports, because all four on the HomeHub were occupied, and of course you need an extra one to link to whatever takes over the job of establishing your DSL connection. Fortunately I happened to have one of these lying around:

The classic Openreach VDSL modem (ECI). They don’t make them any more.

These aren’t the most awesome VDSL modems in the world – you can’t get it to tell you the sync speed, etc. – but the HomeHub claimed I was syncing at 80Mbps down and 20Mbps up, and speed tests via the above and the Mikrotik suggest I’m still in that ballpark. Maybe I’ll replace it with something fancier in due course.

First impressions of the Mikrotik are good – with their quick setup and some Googling, it took me less than 20 minutes to re-establish WiFi and an internet connection with IPv4 NAT and a sensible default firewall. Someone out on the internet had written up the instructions for getting BT’s IPv6 working, and it looks like their prefixes are supposed to last for 10 years – so hopefully telling the Mikrotik to supply a “prefix hint” to re-request the same one on reboots should put a stop to the occasional changes.

The web interface is nice and snappy and allows you into all the hidden corners. You do need to know a decent amount of networking, and a bit of Linux IPTables, to make sense of it all. You can also configure over SSH via the command line.

To make the transition easier, I set it to broadcast the same WiFi network name (with the same password) as the old HomeHub. Almost everything transitioned over seamlessly. The one exception was the Amazon Echo (interestingly, the newer Echo Dot was OK). A bit of Googling suggests that it does not like the default DHCP lease time on the Mikrotik. Ten minutes does seem a bit tight, so I’ve bumped it to 24 hours and Alexa now seems happy.
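For reference, the lease time is a one-line change at the RouterOS terminal. This is a sketch – I haven’t verified it against every RouterOS version, and `[find]` here simply selects all configured DHCP servers:

```
# RouterOS: raise the DHCP lease time from the 10-minute default to 24 hours
/ip dhcp-server set [find] lease-time=1d
```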

Finally, guest WiFi was easy to turn on. I have a more complicated future set-up in mind, but for now, everything is in place and it’s nice to know that next time I change ISPs, I’ll only need to plug in a new bridge (or even just new credentials for the PPPoE link), and everything else will stay the same. And for the first time in four years, Nagios can run active ping checks on my home connection and see that it’s up.

Update: the “Torch” and packet dumping features are excellent – this sort of instrumentation capability comes in really handy for the discerning nerd, e.g. seeing what your IoT devices are up to.

My paperless life

Stepping down as church treasurer led to the eviction of several large box files full of paper from my spare bedroom / home office. Inspired by a recent article in PC Pro, I looked at the freed-up space and thought … what if all the stacks of paper in my flat could be made to disappear?

Scan it and shred it

Keeping paper originals of most things is increasingly unnecessary in the UK, and indeed most utility companies etc. no longer send paper bills or charge extra for doing so.

I didn’t want to spend hundreds on a fancy double-sided, sheet feeding desktop scanner (or have it occupying all the space I’d just reclaimed!), but fortunately I was able to borrow one from work for the weekend to scan in all the old paper worth keeping.

That done, the question is how best to digitise and destroy new paper as it comes in. Following the PC Pro article, I managed to find a Fujitsu ScanSnap iX100 on eBay – it was “reconditioned” but came in the original box with all the manuals and shrink wrap, so a bit of a bargain for £130. It’s tiny, battery powered, and communicates over WiFi. The killer feature on top of that is that it can scan directly into various cloud services (e.g. Dropbox) without needing to be paired with a PC or phone. So I can push incoming post etc. straight through it without having to boot up a laptop first or faff with an app.

The PC Pro article didn’t go into details on how all this works, and I was initially disappointed and thought I’d misunderstood. However, the thing to do is ignore the Windows software, ignore the “ScanSnap” Android app, and go directly to “ScanSnap Cloud”. This is the one which you can use to configure the scanner to hook up to your WiFi, scan directly to ScanSnap’s cloud service (free once you’ve bought the hardware), and sync from there to Dropbox/wherever, without needing an intermediate device. You know you’ve got it all set up right when powering on the scanner makes the scan button and WiFi light go purple, like this:

Incidentally, I thought for a few minutes that I’d bought a dud (even after charging it up for a few hours) because I couldn’t work out how to get it to power on. The answer is to open both the paper trays out (duh!).

I’m properly impressed with this now it’s up and running – the OCR is very good and simply embeds the text in the PDF while leaving the original image of the page visible – so you can hit Ctrl-F and find text. I was even more impressed that, rather than simply naming files after today’s date and time, it has a reasonably good go at extracting a date from the document itself and also a file name (e.g. it manages to pick out the name of the bank when fed a bank statement).

Security?

Of course, for all this to work, one has to be happy with one’s potentially quite sensitive documents being fed to a cloud service.

ScanSnap Cloud keeps your scan history for two weeks. This isn’t configurable (although you can purge it manually from the app). That’s good enough for me – anything especially sensitive can be zapped as soon as it’s scanned; most things can be cleaned up automatically. Obviously the history purging doesn’t affect the copies saved to Dropbox or similar.

Update: a spot of network sniffing reveals that (apart from DNS lookups) the only communication it does is over HTTPS to a service hosted in Microsoft Azure. All pretty sensible.

Finishing touches

So at this point I have a “ScanSnap” directory in the root of my Dropbox which is full of PDFs with reasonably helpful file names. Leaving them all in one big flat folder and using the Dropbox search function (which does search the text inside the PDFs) might be good enough, but a bit of sorting wouldn’t go amiss.

Paul Ockenden mentioned in the original article that what he really wanted was for documents to be automatically sorted into folders depending on what they were. ScanSnap isn’t quite that clever (but then again, a universally “right” answer to that problem would be pretty tricky). However, this is where a spot of scripting rounded it off for me.

My Dropbox folder is already synchronised onto a Linux machine, so what I wanted was a script to fire as new files came in which would spot certain patterns in the file name and move the PDF into the right place accordingly. inotify is the Linux feature of choice for this job; a bit of experimentation confirmed that Dropbox buffers incoming files somewhere temporary and then moves them into their final place, so listening for IN_MOVED_TO events in the ScanSnap directory allows one to apply some simple rules based on the file name. I’ll post more on that (and the code) another time.
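As a teaser, the rule-matching core can be sketched in a few lines of shell. The patterns and folder names here are invented examples for illustration, not my actual rules:

```shell
# Decide a destination folder from a scanned file's name (sketch only;
# patterns and folders are examples).
sort_scan() {
    case "$1" in
        *[Bb]ank*)    echo "Finance" ;;
        *[Cc]ouncil*) echo "Council" ;;
        *)            echo "Unsorted" ;;
    esac
}

# Wired up to inotify via inotifywait (from the inotify-tools package),
# watching for IN_MOVED_TO events as Dropbox moves files into place:
#   inotifywait -m -e moved_to --format '%f' "$HOME/Dropbox/ScanSnap" |
#   while read -r f; do
#       mv "$HOME/Dropbox/ScanSnap/$f" "$HOME/Dropbox/$(sort_scan "$f")/$f"
#   done
```

The monitor loop is shown commented out since it runs forever; the function on its own is enough to test the matching rules.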

Google Apps: Error 1000 when changing SMTP settings

Do you have Google Apps on one of your domains, left over from the days when they gave it away for free to organisations of fewer than 10 users? I do.

Ever tried to tick this box and got a “We are unable to process your request at this time. Please try again later. (Error #1000)”?

A search suggests that many people on “legacy” (free) Google Apps have run into this, but naturally Google aren’t going to help unless you become a paying customer.

Fortunately, I stumbled across the solution: after ticking the box and before clicking Save, change one of the other settings on the page as well. I used the catch-all e-mail setting (and then changed it back later). Google are correct that the setting change above takes a few hours to propagate to all your GMail users, but it has now started working for me.

Reflections on nine years as a church treasurer

And so, having been (joint) treasurer at St Columba’s since a few weeks after graduating in the summer of 2009, I’m finally stepping down at the end of the year.

Why?

Fair question. Having a time-consuming and technical role is the perfect go-to excuse to avoid doing anything else for the church, and I had polished up the IT involved to significantly cut the hours required.

Did things get better in nine years?

CAF Bank’s online banking web interface sure didn’t, but at least (contrary to what some people still think) it is possible to have a system where two people authorise all outgoing payments.

We already had a pleasing number of people donating by monthly standing order when I took over, but these days there are only two (soon to be one) people left who regularly send in cheques.

Suppliers, visiting preachers and hirers have all got much more adept at receiving and sending payments electronically. I haven’t actually gone through with my threat to destroy all our cheque books, but the number of cheques written each year has finally dropped below the number of fingers on one hand.

We do still have one supplier who accepts BACS payments but “doesn’t have online banking”, so they continue to send us reminders by post to pay their invoices – until the end of the month, when it becomes apparent that we already have.

Cash in the plate on Sunday has dwindled in proportion to the rise of standing orders, but still adds up to a fair bit.

The level of paperwork required every 24 months to prove we’re not international money launderers continues to be a pain in the posterior for an organisation staffed by part time volunteers.

Charities need to shut up and take our money

One continued irritation is transferring the results of our “special collections”. The idea is simple – six times a year, we have a retiring collection for a worthy cause. We pick two charities who do work locally in Oxford, two UK-wide and two international. We claim all the Gift Aid we can on the money, and then send the balance to the charity in question. This is usually very tedious as charities large and small often don’t publish details for donations to be made via bank transfer. More often than not they do have a means to donate by card, so I can do that and claim the amount back personally – but this is messy as they insist on using details harvested this way to send you postal begging letters for years afterwards (MS Society, I’m looking at you). A particular favourite was Citizens Advice Oxford (earlier this year) who we ended up blindly posting a cheque to as they have no details on their website about how to donate.

Will I miss it?

Sort of, but I’m busy enough these days that having the hours of my life back will be very, very nice indeed. And nine years is long enough that one is in danger of becoming a single point of failure.