A developer could go mad in this business.

One minute, you think you grok all things Kubernetes.

The next minute, you’re tracking down a simple issue reported by a routine penetration test, and you find you’ve spent half a day down a rabbit hole of over-engineering.

But I’m getting ahead of myself - let’s start at the beginning.

The ticket complained that one of our apps was exposing metrics and other such data at a particular URL.

Said URL was served up by a Kubernetes ingress controller, so how hard could it be to work out where the data was actually coming from?

A bit of rummaging confirmed that said ingress controller was simply reconfiguring a well-known web server to do some proxying. No sign of any metrics sneaking in.

So I read the generated configuration file. Nothing too surprising: a bunch of path-based matching for each configured backend, then a catch-all to respond with 404 for anything which didn’t match.
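To give a flavour of the shape (assuming, as seems likely, that the Well Known Web Server is nginx, and with every backend name and address invented for illustration), it looked something like this:

```nginx
# Illustrative sketch only; the real generated file is far longer and
# every name and address here is made up.
upstream shop-backend {
    server 10.0.0.11:8080;   # endpoints of the "shop" Service
}

upstream blog-backend {
    server 10.0.0.21:8080;   # endpoints of the "blog" Service
}

server {
    listen 80 default_server;

    # One location block per configured Ingress path...
    location /shop {
        proxy_pass http://shop-backend;
    }

    location /blog {
        proxy_pass http://blog-backend;
    }

    # ...then a catch-all at the end for anything which didn't match.
}
```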

Hmm. Well. If our suspect URL doesn’t match any of the paths configured, it can only be hitting the default backend!

Now, if it were me, I’d configure the Well Known Web Server to produce a 404 from that last matcher itself. No need for anything fancy. But what actually happens is that it passes that traffic off to a whole separate container, configured as a Kubernetes service, whose raison d'être is to produce HTTP 404s.
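To make the contrast concrete (still assuming nginx, with the address invented for illustration), here’s the boring catch-all I’d have written:

```nginx
# Answer unmatched requests from the catch-all itself. Job done.
location / {
    return 404;
}
```

And here, roughly, is what the generated config does instead:

```nginx
# Hand unmatched requests to a separate "default backend" Service whose
# whole purpose in life is to serve 404s. (Address invented; in reality
# it points at the default backend's pod endpoints.)
location / {
    proxy_pass http://10.0.0.99:8080;
}
```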

Hmm. This is beginning to feel like over-engineering, but the default backend is a real thing, and the Docker image for it is up to v1.5!

What’s really frustrating, though, is that this default backend turns out to be the culprit.

Google, if you’re going to over-engineer things, at least engineer them correctly.