Does Kubernetes support SELinux?

In my previous post about setting up a Kubernetes cluster with Fedora CoreOS nodes, I mentioned that SELinux should not be disabled when creating Kubernetes clusters.

That is in complete contradiction to what many online tutorials[1][2][3][4][5] will tell you. But why is that?

Generic “SELinux is just a PITA” myth

One reason is the idea, common among those who came to Red Hat-based system administration from other distributions, that SELinux breaks things and it’s just easier to disable it. That isn’t really true, at least not for servers. On a workstation it might be an acceptable compromise: you do unpredictable things on it, so you could end up spending a lot of time troubleshooting a tool that feels like it’s getting in the way of your work most of the time, without much of a tangible security benefit.

But servers are different: they perform a job that is largely pre-defined. Either you’re running software directly on the host, in which case you should know what that software is supposed to do with the network and the file system, or you’re running containers, in which case what you’re doing is still fairly predictable, and you can lock things down to the extent that the workload is predictable.

The “SELinux doesn’t work with K8S because kubelet doesn’t support it” myth

Another source of the idea that one should disable SELinux for Kubernetes clusters is the claim that Kubernetes specifically doesn’t play nice with SELinux. The official kubeadm setup tutorial seems to back this up: in the tab for Red Hat-based distributions it says, about disabling SELinux, that

This is required to allow containers to access the host filesystem, which is needed by pod networks for example.

Digging deeper, however, one finds that SELinux is used in production Kubernetes clusters, and there really aren’t many issues. At most, what gets reported is the need to set SELinux labels on a few files and directories in some setups, while other setups may not require any additional work at all. You can test this yourself with the example Kubernetes setup tutorial I wrote and mentioned at the start of this post.
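
When a label does need to be set, it usually amounts to relabelling a host directory (for example one backing a hostPath volume) with a type the container policy can access. Here is a minimal sketch of what that could look like; the directory path is made up for illustration, and container_file_t is the type provided by the container-selinux policy:

```python
# Minimal sketch: relabel a host directory so containers confined by the
# container-selinux policy can read and write it.
# The directory path below is illustrative, not a recommendation.
import subprocess

def relabel_for_containers(path: str) -> None:
    # chcon applies the new label immediately; container_file_t is the type
    # that confined containers are allowed to access.
    subprocess.run(
        ["chcon", "-R", "-t", "container_file_t", path],
        check=True,
    )

if __name__ == "__main__":
    relabel_for_containers("/var/lib/my-app-data")
```

Note that chcon only changes the label until the next filesystem relabel; for a persistent rule you would typically record it with semanage fcontext and apply it with restorecon.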

As Daniel Walsh himself wrote in a blog post, CRI-O integrates very well with SELinux and prevents dangerous actions, like a container loading an old, unmaintained and therefore potentially vulnerable kernel module and breaking out of its isolation. Additionally, the Kubernetes API itself contains resources to specifically configure SELinux labels for containers. That doesn’t sound like something they would do for a tool that, according to some, “doesn’t work with Kubernetes”. Also, the CNCF security whitepaper mentions SELinux as a tool that can be used to provide isolation and limit privileges, which is as much as we could expect from a high-level, architecturally-minded document.
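
To make that second point concrete: the pod securityContext exposes seLinuxOptions, which lets you choose the SELinux label your containers run with. Below is a minimal sketch using the official Python client; the pod name, image and MCS level are only illustrative:

```python
# Minimal sketch: create a pod whose containers run with an explicit SELinux
# MCS level, set through securityContext.seLinuxOptions in the pod spec.
# The pod name, image and level below are illustrative, not a recommendation.
from kubernetes import client, config

def create_selinux_labeled_pod():
    # Load credentials from ~/.kube/config (use load_incluster_config() when
    # running inside a cluster).
    config.load_kube_config()

    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name="selinux-demo"),
        spec=client.V1PodSpec(
            security_context=client.V1PodSecurityContext(
                # Equivalent to securityContext.seLinuxOptions.level in YAML:
                # every container in the pod gets this SELinux level.
                se_linux_options=client.V1SELinuxOptions(level="s0:c123,c456")
            ),
            containers=[
                client.V1Container(name="web", image="nginx:1.25"),
            ],
        ),
    )
    return client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)

if __name__ == "__main__":
    create_selinux_labeled_pod()
```

The same thing can of course be expressed directly in a pod manifest under spec.securityContext.seLinuxOptions.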

Kubernetes DOES work with SELinux enabled, so you shouldn’t disable it, certainly not before even trying

In conclusion, try things for yourself before giving up on a tool that could end up providing a critical security benefit. If you want to share a story about your own experience with SELinux and Kubernetes, I’d really appreciate it in the comments to this post on dev.to or directly at my email, carmine@carminezacc.com.


  1. https://www.matthiaspreu.com/posts/fedora-coreos-kubernetes-basic-setup/

  2. https://ostechnix.com/install-kubernetes-cluster-using-kubeadm/

  3. https://upcloud.com/resources/tutorials/install-kubernetes-cluster-centos-8

  4. https://docs.oracle.com/en/operating-systems/oracle-linux/kubernetes/kube_ha.html#requirements-selinux-ha

  5. https://linoxide.com/setup-kubernetes-kubeadm-centos/