Setting Resource Quotas with Tanzu Mission Control
Nobody wants a noisy neighbor – neither in real life nor in IT environments. While in real life it can be difficult to keep the neighbors at bay, in IT environments we can simply restrict the amount of allowed resources by setting quotas at certain levels. Nothing new so far.
Kubernetes also has a concept of Resource Quotas. You can set these quotas on Namespace level to restrict things like memory, CPU or storage being used, or to limit the total number of objects of a certain kind (e.g. LoadBalancers). Pretty straightforward so far.
But setting this on Namespace level can be quite a tedious task, especially with many Namespaces across many different clusters. And this is where Tanzu Mission Control (TMC) can help you. With TMC, you can set up groups of clusters and apply Quota Policies to the entire group, including or excluding certain Namespaces based on Labels, just as you would expect from Kubernetes.
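For reference, outside of TMC such a quota is just a regular ResourceQuota object in a Namespace. A minimal sketch (the name, Namespace and values here are only for illustration, not used later in this post):

apiVersion: v1
kind: ResourceQuota
metadata:
  name: demo-quota            # example name
  namespace: my-namespace     # ResourceQuotas always live in a Namespace
spec:
  hard:
    requests.cpu: "1"              # sum of all CPU requests in the Namespace
    requests.memory: 512Mi         # sum of all memory requests
    limits.memory: 1Gi             # sum of all memory limits
    services.loadbalancers: "2"    # max number of Services of type LoadBalancer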
My Setup
I’ve already prepared two clusters:
tkgi-c1 → A TKGI K8s Cluster, provisioned from the TKGI MC and afterwards attached to TMC
tmc-c2 → A Tanzu Kubernetes Cluster, provisioned from TMC straight into my Supervisor Cluster
Both sitting in a TMC Cluster Group called “cg-vraccoon”.
On both of them, I will create a Namespace called “ns-quota” which will get the Quota Policy assigned.
Create Namespaces
I know, creating a Namespace in K8s is super simple, but since I have TMC, I wanted to do it through the Web Console:
Navigate to Clusters → tkgi-c1 → Namespaces → Click CREATE NAMESPACE
In the following screen, enter the Namespace specs:
- Name: ns-quota
- Cluster: tkgi-c1
- Workspace: default
- Labels
- type: resource-restricted
The Label is really just a Label, used to identify the Namespaces later. You can name it whatever you want. But don’t forget to click ADD LABEL (otherwise, the Label will not be saved), then click CREATE.
Repeat the process for the second cluster. Make sure it has the same Label applied too.
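Just for comparison: the object that ultimately ends up on the cluster is essentially a labelled Namespace like the sketch below (creating it directly with kubectl would work too, but then it wouldn’t be tied to a TMC Workspace like the one above):

apiVersion: v1
kind: Namespace
metadata:
  name: ns-quota
  labels:
    type: resource-restricted   # the label the Quota Policy will select on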
Create the Quota Policy
Quota Policies are assigned to either Clusters or Cluster Groups (even though they actually apply to Namespaces).
Navigate to Policies → Assignments → Quotas → Clusters → cg-vraccoon (the Cluster group)
Then click CREATE QUOTA POLICY.
In the wizard, enter the following:
- Policy name: pol-quota
- CPU requests/CPU limits: leave empty
- Memory requests: 64 Mi
- Memory limits: 128 Mi
Next, we can either include or exclude specific Namespaces identified by selectors. If you leave both empty, the quotas will be assigned to all Namespaces (excluding the kube-* Namespaces). In our case, we only want to include the two Namespaces created earlier, which can be identified by the type:resource-restricted label.
Label selectors:
type : resource-restricted
Don’t forget to click ADD LABEL SELECTOR, then click SAVE.
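Under the hood, TMC doesn’t introduce a new mechanism for this – it enforces the policy by creating a regular ResourceQuota in every Namespace matched by the selector. Based on the values above (and the object we’ll see on the clusters in a moment), it should look roughly like this sketch:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: tmc.cgp.pol-quota   # the name it shows up with on the clusters below
  namespace: ns-quota
spec:
  hard:
    requests.memory: 64Mi
    limits.memory: 128Mi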
Test the Policy
Now that the policy is applied, let’s jump over to the K8s cluster(s).
Check if the Policy is applied
I’m logged in to both K8s clusters:
vraccoon@ubu:~$ kubectl config get-contexts
CURRENT   NAME           CLUSTER        AUTHINFO                                       NAMESPACE
          172.31.201.1   172.31.201.1   wcp:172.31.201.1:administrator@vsphere.local
          tkgi-c1        tkgi-c1        tkgi-admin@vraccoon.lab
          tmc-c2         172.31.201.3   wcp:172.31.201.3:administrator@vsphere.local
          tmc-clusters   172.31.201.1   wcp:172.31.201.1:administrator@vsphere.local   tmc-clusters
Display Resource Quotas on tmc-c2:
vraccoon@ubu:~$ kubectl config use-context tmc-c2
Switched to context "tmc-c2".
vraccoon@ubu:~$ kubectl get resourcequotas -A
NAMESPACE   NAME                AGE    REQUEST                   LIMIT
ns-quota    tmc.cgp.pol-quota   7m2s   requests.memory: 0/64Mi   limits.memory: 0/128Mi
Display Resource Quotas on tkgi-c1:
vraccoon@ubu:~$ kubectl config use-context tkgi-c1
Switched to context "tkgi-c1".
vraccoon@ubu:~$ kubectl get resourcequotas -A
NAMESPACE   NAME                AGE     REQUEST                   LIMIT
ns-quota    tmc.cgp.pol-quota   7m29s   requests.memory: 0/64Mi   limits.memory: 0/128Mi
We can see the quota policy on both clusters, as expected. From here on, I’ll stay on the tkgi-c1 cluster, since the K8s behavior is the same on both clusters.
Test the Quota
This is pretty much basic Kubernetes stuff, but it just didn’t feel right to only apply a quota without testing it.
So I prepared a small deployment file with resource requests/limits for memory and saved it as dep-test-quota.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: dep-test-quota
  name: dep-test-quota
  namespace: ns-quota
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dep-test-quota
  template:
    metadata:
      labels:
        app: dep-test-quota
    spec:
      containers:
      - image: tutum/hello-world
        name: hello-world
        resources:
          limits:
            memory: 64Mi
          requests:
            memory: 32Mi
Create the deployment:
vraccoon@ubu:~$ kubectl create -f dep-test-quota.yaml
deployment.apps/dep-test-quota created
Check if the Pod is running:
vraccoon@ubu:~$ kubectl -n ns-quota get pods
NAME                            READY   STATUS    RESTARTS   AGE
dep-test-quota-8f8c4966-wsz4f   1/1     Running   0          53s
Looking good – Check the Quota next:
vraccoon@ubu:~$ kubectl -n ns-quota describe resourcequotas tmc.cgp.pol-quota
Name:            tmc.cgp.pol-quota
Namespace:       ns-quota
Resource         Used  Hard
--------         ----  ----
limits.memory    64Mi  128Mi
requests.memory  32Mi  64Mi
Seems good so far – the Deployment is utilizing 50% of the allowed resources. So let’s scale the Deployment to 3 replicas. That would mean the Deployment requests 150% of the allowed resources (3 x 32Mi = 96Mi requests against the 64Mi quota, and 3 x 64Mi = 192Mi limits against 128Mi), so one pod/replica should fail:
vraccoon@ubu:~$ kubectl -n ns-quota scale deployment --replicas=3 dep-test-quota
deployment.apps/dep-test-quota scaled
vraccoon@ubu:~$ kubectl -n ns-quota get deployment,replicaset,pod
NAME                             READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/dep-test-quota   2/3     2            2           2m42s

NAME                                      DESIRED   CURRENT   READY   AGE
replicaset.apps/dep-test-quota-8f8c4966   3         2         2       2m42s

NAME                                READY   STATUS    RESTARTS   AGE
pod/dep-test-quota-8f8c4966-lpjvg   1/1     Running   0          22s
pod/dep-test-quota-8f8c4966-wsz4f   1/1     Running   0          2m42s
As expected, the Deployment has only 2/3 replicas ready. It’s pretty obvious why, but let’s have a look anyway by checking the events:
vraccoon@ubu:~$ kubectl -n ns-quota get events
LAST SEEN   TYPE      REASON              OBJECT                               MESSAGE
106s        Normal    Scheduled           pod/dep-test-quota-8f8c4966-lpjvg    Successfully assigned ns-quota/dep-test-quota-8f8c4966-lpjvg to cea46ae2-2305-4193-843a-bdb59c0f80af
105s        Normal    Pulling             pod/dep-test-quota-8f8c4966-lpjvg    Pulling image "tutum/hello-world"
104s        Normal    Pulled              pod/dep-test-quota-8f8c4966-lpjvg    Successfully pulled image "tutum/hello-world" in 1.422727654s
104s        Normal    Created             pod/dep-test-quota-8f8c4966-lpjvg    Created container hello-world
104s        Normal    Started             pod/dep-test-quota-8f8c4966-lpjvg    Started container hello-world
4m6s        Normal    Scheduled           pod/dep-test-quota-8f8c4966-wsz4f    Successfully assigned ns-quota/dep-test-quota-8f8c4966-wsz4f to cea46ae2-2305-4193-843a-bdb59c0f80af
4m6s        Normal    Pulling             pod/dep-test-quota-8f8c4966-wsz4f    Pulling image "tutum/hello-world"
4m4s        Normal    Pulled              pod/dep-test-quota-8f8c4966-wsz4f    Successfully pulled image "tutum/hello-world" in 1.480966178s
4m4s        Normal    Created             pod/dep-test-quota-8f8c4966-wsz4f    Created container hello-world
4m4s        Normal    Started             pod/dep-test-quota-8f8c4966-wsz4f    Started container hello-world
4m7s        Normal    SuccessfulCreate    replicaset/dep-test-quota-8f8c4966   Created pod: dep-test-quota-8f8c4966-wsz4f
107s        Normal    SuccessfulCreate    replicaset/dep-test-quota-8f8c4966   Created pod: dep-test-quota-8f8c4966-lpjvg
107s        Warning   FailedCreate        replicaset/dep-test-quota-8f8c4966   Error creating: pods "dep-test-quota-8f8c4966-vkpjv" is forbidden: exceeded quota: tmc.cgp.pol-quota, requested: limits.memory=64Mi,requests.memory=32Mi, used: limits.memory=128Mi,requests.memory=64Mi, limited: limits.memory=128Mi,requests.memory=64Mi
107s        Warning   FailedCreate        replicaset/dep-test-quota-8f8c4966   Error creating: pods "dep-test-quota-8f8c4966-5pwxk" is forbidden: exceeded quota: tmc.cgp.pol-quota, requested: limits.memory=64Mi,requests.memory=32Mi, used: limits.memory=128Mi,requests.memory=64Mi, limited: limits.memory=128Mi,requests.memory=64Mi
107s        Warning   FailedCreate        replicaset/dep-test-quota-8f8c4966   Error creating: pods "dep-test-quota-8f8c4966-lzrb7" is forbidden: exceeded quota: tmc.cgp.pol-quota, requested: limits.memory=64Mi,requests.memory=32Mi, used: limits.memory=128Mi,requests.memory=64Mi, limited: limits.memory=128Mi,requests.memory=64Mi
107s        Warning   FailedCreate        replicaset/dep-test-quota-8f8c4966   Error creating: pods "dep-test-quota-8f8c4966-9mf4d" is forbidden: exceeded quota: tmc.cgp.pol-quota, requested: limits.memory=64Mi,requests.memory=32Mi, used: limits.memory=128Mi,requests.memory=64Mi, limited: limits.memory=128Mi,requests.memory=64Mi
107s        Warning   FailedCreate        replicaset/dep-test-quota-8f8c4966   Error creating: pods "dep-test-quota-8f8c4966-ntt4r" is forbidden: exceeded quota: tmc.cgp.pol-quota, requested: limits.memory=64Mi,requests.memory=32Mi, used: limits.memory=128Mi,requests.memory=64Mi, limited: limits.memory=128Mi,requests.memory=64Mi
107s        Warning   FailedCreate        replicaset/dep-test-quota-8f8c4966   Error creating: pods "dep-test-quota-8f8c4966-hwmns" is forbidden: exceeded quota: tmc.cgp.pol-quota, requested: limits.memory=64Mi,requests.memory=32Mi, used: limits.memory=128Mi,requests.memory=64Mi, limited: limits.memory=128Mi,requests.memory=64Mi
107s        Warning   FailedCreate        replicaset/dep-test-quota-8f8c4966   Error creating: pods "dep-test-quota-8f8c4966-fpmrj" is forbidden: exceeded quota: tmc.cgp.pol-quota, requested: limits.memory=64Mi,requests.memory=32Mi, used: limits.memory=128Mi,requests.memory=64Mi, limited: limits.memory=128Mi,requests.memory=64Mi
106s        Warning   FailedCreate        replicaset/dep-test-quota-8f8c4966   Error creating: pods "dep-test-quota-8f8c4966-pnr5q" is forbidden: exceeded quota: tmc.cgp.pol-quota, requested: limits.memory=64Mi,requests.memory=32Mi, used: limits.memory=128Mi,requests.memory=64Mi, limited: limits.memory=128Mi,requests.memory=64Mi
106s        Warning   FailedCreate        replicaset/dep-test-quota-8f8c4966   Error creating: pods "dep-test-quota-8f8c4966-vktkm" is forbidden: exceeded quota: tmc.cgp.pol-quota, requested: limits.memory=64Mi,requests.memory=32Mi, used: limits.memory=128Mi,requests.memory=64Mi, limited: limits.memory=128Mi,requests.memory=64Mi
40s         Warning   FailedCreate        replicaset/dep-test-quota-8f8c4966   (combined from similar events): Error creating: pods "dep-test-quota-8f8c4966-kn8mg" is forbidden: exceeded quota: tmc.cgp.pol-quota, requested: limits.memory=64Mi,requests.memory=32Mi, used: limits.memory=128Mi,requests.memory=64Mi, limited: limits.memory=128Mi,requests.memory=64Mi
4m7s        Normal    ScalingReplicaSet   deployment/dep-test-quota            Scaled up replica set dep-test-quota-8f8c4966 to 1
107s        Normal    ScalingReplicaSet   deployment/dep-test-quota            Scaled up replica set dep-test-quota-8f8c4966 to 3
The FailedCreate warnings from the ReplicaSet tell us pretty clearly that we are exceeding our quota (not that we are surprised, though).
Conclusion
Resource Quotas are nothing super special. And Tanzu Mission Control is not doing any black magic to enforce them in the K8s clusters. But it definitely helps you manage them at a bigger scale.