Following on from the tale of VMware and the massive packets, another leak recently sprung by an abstraction...
Those of us who have been doing Docker for a while will probably have had some sort of security audit done, either for our own peace of mind or because a customer insisted.
One thing which crops up is that it's bad practice to run things inside containers as root. The vast majority of use-cases do not need the privileges thus bestowed, and in the event of a bug in Docker allowing a process to "break out" onto the underlying host, much better for it not to be running as the superuser.
So we've all been good (or at least, made a start on being good) and switched our containers over to running processes as a non-root user. You just add one as part of the Dockerfile and then switch to it. Apart from changing your web process to run on port 8080 rather than port 80 (non-root users can't bind to ports below 1024), it probably made no difference and you felt happier for it.
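A minimal sketch of that Dockerfile change might look like the following (the base image, user name and UID are all illustrative, not anything from a specific project):

```dockerfile
FROM debian:bookworm-slim

# Create an unprivileged user with a fixed UID. Pinning the number
# matters more than the name, because the number is what ends up
# recorded on any files the process creates.
RUN useradd --create-home --uid 1001 appuser

# Everything from here on runs as that user, not root.
USER appuser
```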
Sadly, the satisfaction was a bit tainted last week when I encountered an unfortunate issue.
One or two containers will be stateful and thus need filesystems mounting in from the underlying host.
In the bad old days, everything in the container was running as root, so permissions on said files were not an issue.
In the brave new world, everything in the container was (probably) running as a user with UID 1001, and those files ended up owned by 1001. And all was well.
Right up until the point that somebody decided to create more than one local user on the underlying Docker/Kubernetes host. Let's say the group membership of user 1001 inside the container was necessary for it to read certain files. And now let's say that the "external" user 1001 lacked said group memberships ... guess which one wins?
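The underlying gotcha is that the filesystem only ever stores bare numbers; the names and group memberships attached to a given UID are looked up separately on each side, against whichever /etc/passwd and /etc/group happen to be local. A quick illustration you can run on any Linux box (the path is just an example):

```shell
# A file records a numeric UID/GID, nothing more. `ls -l` resolves
# those numbers against the *local* passwd and group databases at
# display time: inside a container that means the container's files,
# on the host it means the host's. The number is all they share.
touch /tmp/uid_demo
ls -ln /tmp/uid_demo   # numeric owner, e.g. "1001 1001"
ls -l  /tmp/uid_demo   # whatever name (if any) this system maps to it
rm /tmp/uid_demo
```

So when two different "users" on either side of the container boundary happen to share UID 1001, the kernel sees only one identity, and whichever side's expectations don't match the numbers on disk loses.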
I can't think of a good way to work around this as a container author - forcing us all to give our users UIDs high enough to be unlikely to exist on any host seems bonkers - so the best I can suggest is that people building container hosts do not create multiple local user accounts on them.