Kubernetes authentication via LDAP / Active Directory in VMware PKS / TKGI
Running software in production usually requires following certain governance rules, one of which might be the use of personalized accounts (e.g. for auditing capabilities).
Kubernetes by itself does not provide user management as you might know it from other platforms, so you have to come up with a solution for this.
One option would be to create local accounts for everyone who needs access. But curating all these accounts is a tedious task, especially if you already have a user directory like Active Directory (which is usually the case).
But how do you connect K8s to your existing AD? Probably the most common open-source DIY answer is Dex.
Dex works well, but it requires some effort and knowledge (which you have to provide for the entire lifetime of your clusters).
Fortunately, VMware Enterprise PKS provides an out of the box solution for this =D
Note: VMware Enterprise PKS (formerly Pivotal Enterprise PKS) was recently renamed to VMware Tanzu Kubernetes Grid Integrated Edition.
But since the cli tool has not changed in the version I’m using (PKS 1.7), I’ll keep calling it PKS throughout this post.
In the most recent version (1.8), the pks CLI was also renamed, to the tkgi CLI.
RBAC in vanilla Kubernetes
The general concept is similar to other solutions: a RoleBinding binds a specific User to a specific Role, and the Role in turn defines a set of permissions.
User
There are two types of users – service accounts and normal users. Service accounts are stored within Kubernetes; normal users are supposed to be managed outside of Kubernetes by a third-party tool.
By default, anyone who presents a certificate signed by the cluster's certificate authority is authenticated. The certificate's subject is used as the "username", which can then be referenced in a RoleBinding.
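To illustrate, such a "normal user" could be created by having the cluster CA sign a client certificate. A minimal sketch (the username "John" and the CSR content are hypothetical; certificates.k8s.io/v1beta1 matches the K8s 1.16 version used later in this post):

```yaml
# Hypothetical CertificateSigningRequest for a "normal user" named John.
# The request field carries a base64-encoded X.509 CSR; its subject
# (here CN=John) becomes the username Kubernetes sees for RBAC.
apiVersion: certificates.k8s.io/v1beta1
kind: CertificateSigningRequest
metadata:
  name: john-csr
spec:
  request: <base64-encoded CSR with subject CN=John>
  usages:
  - client auth
```

After submitting the object, an administrator would approve it with `kubectl certificate approve john-csr` and hand the issued certificate back to the user. Managing this per user is exactly the tedium described above.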
Roles
kind: Role – a Role is defined within a Namespace and is only available there. Thus, a Role can only define permissions for objects which live inside a Namespace (e.g. Pods, Deployments, Services, …).
The following example shows a Role which grants permission to "read" Pods within the namespace development.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: auditor
  namespace: development
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
kind: ClusterRole – a ClusterRole is defined at the cluster level. Thus it can also grant permissions on objects which are not namespaced (e.g. Nodes, Namespaces themselves, …).
You can also use a ClusterRole to grant permissions on namespaced objects. Bound via a ClusterRoleBinding, these permissions apply globally across all namespaces; bound via a RoleBinding, they are scoped to that RoleBinding's namespace.
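As a sketch of the second case, a RoleBinding can reference a ClusterRole, which limits the cluster-wide rules to the RoleBinding's own namespace (the binding name and subject here are hypothetical; "secret-ro" is the ClusterRole from the next example):

```yaml
# Sketch: referencing a ClusterRole from a RoleBinding scopes its
# permissions to the RoleBinding's namespace only (here: development).
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: rb-secret-ro-dev   # hypothetical name
  namespace: development
subjects:
- kind: User
  name: John               # hypothetical user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: secret-ro
  apiGroup: rbac.authorization.k8s.io
```

This pattern lets you define a permission set once and reuse it in many namespaces.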
The following is a typical ClusterRole which grants read permission on Secrets in any namespace:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: secret-ro
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get", "watch", "list"]
For ClusterRoles it’s also possible to create aggregated rules.
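Aggregation means the control plane assembles a ClusterRole's rules from other ClusterRoles that carry a matching label. A sketch (all names and the label key are hypothetical):

```yaml
# Sketch of an aggregated ClusterRole: the controller fills in the
# rules of every ClusterRole carrying the matching label.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: monitoring                # hypothetical name
aggregationRule:
  clusterRoleSelectors:
  - matchLabels:
      rbac.example.com/aggregate-to-monitoring: "true"
rules: []                         # filled in automatically by the control plane
---
# Any ClusterRole with the label above contributes its rules:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: monitoring-endpoints      # hypothetical name
  labels:
    rbac.example.com/aggregate-to-monitoring: "true"
rules:
- apiGroups: [""]
  resources: ["services", "endpoints", "pods"]
  verbs: ["get", "list", "watch"]
```

This is handy when several components should each contribute permissions to one umbrella role.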
Bindings
Bindings provide the mapping between a user and a (Cluster)Role.
As there are two different kinds of roles, there have to be two kinds of bindings too.
kind: RoleBinding – binds a Role to a User.
The following example binds the Role “auditor” to the user “John” within the namespace “development”.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: rb-auditor
  namespace: development
subjects:
- kind: User
  name: John
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: auditor
  apiGroup: rbac.authorization.k8s.io
kind: ClusterRoleBinding – binds a ClusterRole to a User.
The following example binds the ClusterRole “secret-ro” to the user “Jane”.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: crb-secret-ro
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: secret-ro
subjects:
- kind: User
  name: Jane
  apiGroup: rbac.authorization.k8s.io
If you want to learn more about Kubernetes RBAC, check the official docs.
Authentication via Active Directory
I’m going to demonstrate how to map Active Directory groups to Kubernetes Roles.
Initial Situation
I have one K8s Cluster called tkg-common.
I have two (relevant) namespaces in that cluster: "ns-dev-green" and "ns-dev-blue".
I have two teams: "dev-green" and "dev-blue".
Both teams have a dedicated AD group: "g_dev-green-admins" and "g_dev-blue-admins".
I want to assign each team via their respective AD Group to their respective namespace.
They should be able to do whatever they want in their namespace, but they should not have any permissions outside of it.
Prepare the PKS Installation
If you are using the PKS Management Console, log in with root and navigate to (1) Administration –> (2) PKS Configuration –> (3) Wizard –> (4) 3. Identity
(If you are not using the Management Console, you can find the same options in Ops Manager –> PKS Tile –> UAA)
Check the (1) AD/LDAP radio button. Afterward, enter your (2) AD information.
Note that all paths are required as distinguished names. Also note that I’m using (3) userPrincipalName instead of the default "cn".
This step enables you to bind AD groups to your PKS installation. If you only want to use AD Users to manage your K8s Cluster from an infra-perspective, this is enough.
But we also want to use AD users within our K8s clusters. Therefore we have to check (4) Configure Created Clusters to Use UAA as the OIDC Provider. Leave the UAA information as it is (but note the UAA OIDC Groups Prefix, as we will need it later).
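Under the hood, this checkbox amounts to PKS configuring the kube-apiserver's OIDC flags against UAA. Roughly, it translates to something like the following (a sketch; the issuer URL and claim names are assumptions and depend on your installation, only the flag names themselves are standard kube-apiserver options):

```
# Illustrative kube-apiserver OIDC flags (values are assumptions):
--oidc-issuer-url=https://api.pks.vraccoon.lab:8443/oauth/token   # UAA token endpoint (assumed)
--oidc-client-id=pks_cluster_client                               # assumed client id
--oidc-username-claim=user_name                                   # claim UAA puts the username in (assumed)
--oidc-username-prefix=oidc:                                      # the configurable prefix
--oidc-groups-claim=roles                                         # claim carrying group membership (assumed)
--oidc-groups-prefix=oidc:                                        # the UAA OIDC Groups Prefix noted above
```

The groups prefix is the reason the RoleBinding subjects later in this post start with "oidc:".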
Apply the changes and wait until your environment is ready again.
Afterward, you either have to create a new cluster or upgrade the existing one.
Creating Custom Roles
I’m assuming you already have a working login for your PKS environment. If you don’t, check Managing Enterprise PKS Users with UAA.
Log in to PKS via the CLI and get the cluster credentials. As you can see, I have only one cluster, called tkg-common.
vraccoon@ubu:~$ pks login -a pks.vraccoon.lab -u pks-admin@vraccoon.lab --ca-cert docs/certs/opsman-root-ca.cer
API Endpoint: pks.vraccoon.lab
User: pks-admin@vraccoon.lab
Login successful.

vraccoon@ubu:~$ pks clusters
PKS Version     Name        k8s Version  Plan Name  UUID                                  Status     Action
1.7.0-build.26  tkg-common  1.16.7       small-lb   ba3535a8-513d-4dde-9a6a-1ce728158903  succeeded  CREATE

vraccoon@ubu:~$ pks get-credentials tkg-common
Fetching credentials for cluster tkg-common.
Password: ************
Context set for cluster tkg-common.

You can now switch between clusters by using:
$kubectl config use-context <cluster-name>
I’ve created a small YAML file which creates the Namespace, the Role, and the RoleBinding:
apiVersion: v1
kind: Namespace
metadata:
  name: ns-dev-green

---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: role-dev-green
  namespace: ns-dev-green
rules:
- apiGroups:
  - '*'
  resources:
  - '*'
  verbs:
  - '*'

---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rolebinding-dev-green
  namespace: ns-dev-green
subjects:
- kind: Group
  name: oidc:g_dev-green-admins
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: role-dev-green
  apiGroup: rbac.authorization.k8s.io
Lines 1-4: This simply creates the Namespace called “ns-dev-green”
Lines 7-18: Creates a Role called “role-dev-green” within the namespace “ns-dev-green” and allows all actions (=verbs) against all resources in all API Groups.
Lines 21-33: Creates a RoleBinding called “rolebinding-dev-green” within the namespace “ns-dev-green”. It links the role “role-dev-green” (lines 30-33) to group “oidc:g_dev-green-admins” (lines 26-29).
Note that I’ve added the prefix "oidc:" to my AD group’s name. This is the UAA OIDC Groups Prefix I specified earlier in the PKS Management Console.
Let’s create these objects:
vraccoon@ubu:~$ kubectl create -f dev-green-admins.yaml
namespace/ns-dev-green created
role.rbac.authorization.k8s.io/role-dev-green created
rolebinding.rbac.authorization.k8s.io/rolebinding-dev-green created
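The same needs to happen for the other team. A mirrored manifest for dev-blue would look like this (a sketch with the same structure, only the names swapped to the ones from the initial situation):

```yaml
# Counterpart manifest for team dev-blue (names mirror dev-green).
apiVersion: v1
kind: Namespace
metadata:
  name: ns-dev-blue
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: role-dev-blue
  namespace: ns-dev-blue
rules:
- apiGroups: ['*']
  resources: ['*']
  verbs: ['*']
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rolebinding-dev-blue
  namespace: ns-dev-blue
subjects:
- kind: Group
  name: oidc:g_dev-blue-admins   # note the same "oidc:" prefix
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: role-dev-blue
  apiGroup: rbac.authorization.k8s.io
```

With both manifests applied, each team is confined to its own namespace.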
Log in as dev-green-admin
In order to test it (and prevent confusion), I’ll first delete my kubeconfig and my pks token:
vraccoon@ubu:~$ rm -R ~/.kube ~/.pks
Now we can get the kubeconfig for the user "dev-green-admin@vraccoon.lab", which is a member of the group "g_dev-green-admins":
vraccoon@ubu:~$ pks get-kubeconfig tkg-common --api pks.vraccoon.lab --ca-cert opsman-root-ca.cer --username dev-green-admin@vraccoon.lab
Password: ************
Fetching kubeconfig for cluster tkg-common and user dev-green-admin@vraccoon.lab.

You can now use the kubeconfig for user dev-green-admin@vraccoon.lab:
$kubectl config use-context tkg-common
The syntax is as follows:
pks get-kubeconfig <pks-cluster-name> --api <pks-api-endpoint> --ca-cert <ca-which-signed-this-cluster> --username <userPrincipalName>
Note: Though we are using the pks CLI, this user is not allowed to create or modify any clusters. In fact, this user is not even allowed to log in via the "pks login …" command.
Test User Permissions
Let’s check for Pods in this Namespace:
vraccoon@ubu:~$ kubectl get pods
Error from server (Forbidden): pods is forbidden: User "oidc:dev-green-admin@vraccoon.lab" cannot list resource "pods" in API group "" in the namespace "default"
We are getting a “Forbidden”. But it also says namespace “default”.
So let’s check our current context:
vraccoon@ubu:~$ kubectl config get-contexts
CURRENT   NAME         CLUSTER      AUTHINFO                       NAMESPACE
*         tkg-common   tkg-common   dev-green-admin@vraccoon.lab
Apparently, no namespace is defined in the context, which puts us into the default namespace. But as you know, the Role we created earlier only grants permissions within the namespace "ns-dev-green".
So let’s switch into this namespace:
vraccoon@ubu:~$ kubectl config set-context $(kubectl config current-context) --namespace=ns-dev-green
Context "tkg-common" modified.

vraccoon@ubu:~$ kubectl config get-contexts
CURRENT   NAME         CLUSTER      AUTHINFO                       NAMESPACE
*         tkg-common   tkg-common   dev-green-admin@vraccoon.lab   ns-dev-green

Now our context defaults to the namespace "ns-dev-green".
So let’s check for Pods again:
vraccoon@ubu:~$ kubectl get pods
No resources found.
That looks good. There are no Pods, but we are not getting a Forbidden!
Let’s try to start a Pod (Quick’n’Dirty though):
vraccoon@ubu:~$ kubectl run nginx-green --image=nginx --generator=run-pod/v1
pod/nginx-green created

vraccoon@ubu:~$ kubectl get pods
NAME          READY   STATUS    RESTARTS   AGE
nginx-green   1/1     Running   0          5s
That works too.
One final test – starting a Pod in dev-blue’s namespace "ns-dev-blue":
vraccoon@ubu:~$ kubectl run nginx-green-invaders --image=nginx --generator=run-pod/v1 --namespace=ns-dev-blue
Error from server (Forbidden): pods is forbidden: User "oidc:dev-green-admin@vraccoon.lab" cannot create resource "pods" in API group "" in the namespace "ns-dev-blue"
Again, we are getting a forbidden – as expected.
Conclusion
We have seen how to realize RBAC with Active Directory groups in Kubernetes, using VMware Enterprise PKS / VMware Tanzu Kubernetes Grid Integrated Edition.
For the sake of simplicity, I’ve shown a Role that is very easy to understand. But you can literally go crazy with defining these permissions.
The important part is that PKS / TKGI provides an out-of-the-box solution to integrate your user directory (Active Directory, LDAP, SAML) with your Kubernetes clusters.