Tanzu Kubernetes Grid on vSphere 6.7 – Deploy Workload Cluster

In my last post, Tanzu Kubernetes Grid on vSphere 6.7, I showed how to deploy the TKG Management Cluster using tkg-cli. Now I actually want to use it and deploy a workload cluster.

“Login” with tkg-cli

It’s not a traditional login though. In my last post I used Windows to deploy everything; now I’m changing gears and switching over to my Ubuntu VM. I’ve already installed tkg-cli there, but it doesn’t know anything about the existence of the Management Cluster:

vraccoon@ubu:~$ tkg get management-cluster
 MANAGEMENT-CLUSTER-NAME  CONTEXT-NAME

In order to add it, we need to get the kubeconfig file of the management cluster and put it at ~/.kube/config:

vraccoon@ubu:~$ cat .kube/config
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5ekNDQWJPZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJd01EWXdOakV4TVRFd04xb1hEVE13TURZd05ERXhNVFl3TjFvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBT1g5ClVXVzF0SERSREtWaGpOZVF5UGN4V3czR0RwWFk4WW9PV0w1MEpVV2NmU0cva3BHekh2bHZ5a3JMWnl2NkhMU0wKWUo3YTM2YVFUcWlZZ1BIR0F6ZHRCeFZXbERaRUJ1L2FJN2lkZHJGc2x4emVqUVdGeXU4dXlGRnBpTHE3N0RWNwpvNmZVK0o5OEZNV0cydkI4S09UVTJRK2ZjZDBzNEIzZzY0b1NyelF4cE5kQXJTYU9tRHM5Z0NhdC8xSVRFaERrCmN3WndscGpqcGIyb2ZydkJuZHdqZVhGU0JtM2hBZHozNC85SDRrYzBFSDZZU1krdFNCa1B2dEl2cnl5VG1EMC8KM1c4Z05yK3ROeU1MMUhFVVB6NWRFbGNuTHRnREgvVDM0Q0V6V04zNkdObGRZbEJlME1FRi8vN2tuSFdiV1NzVgppeGZIVGFzbGhwRzFIdCt3TzJzQ0F3RUFBYU1tTUNRd0RnWURWUjBQQVFIL0JBUURBZ0trTUJJR0ExVWRFd0VCCi93UUlNQVlCQWY4Q0FRQXdEUVlKS29aSWh2Y05BUUVMQlFBRGdnRUJBRjhqMDJ1N1pnWHQ1dHpTNWw4TjVyNjYKNWc0U3B6VVZmOGV3ZXRkcFUreUo3RXVERU5hVG51OTRpeHI2dkRwa3c1TFRJbWtsWVpQYXJmckh0K1FMbWxidQplSnNPUGRmMWl6M3ZtcFFKcUczanczNUE4WHNRYkZoTnNnVG01eUJHYWxmQzBTdFRDVmJEOGNSSGZuNncvRU9QClA2TjBGb0dPQWtJdVBJR25MUDgxaXl4R2VvMlkrRzhmdXNJQTFXVzJ2Sk1LLzdvSGxrNFpCbGs2RWZwMjJCWk8Kd1J0ZkpGWS9rRkZBTFgxUFlKN0FCVVpIdkYzanZmM05TZmpyVUx1THJ1TUl2dEovdkJjVStFUEM5NGhPZXV0egpjaHZpTTE4UkhZZThHTW5aWVZlcDRuYU1kbGlLQVJ4T2hRY2FxL3c5dWYxTUdkZG9va3dyME9DQXQ3NXVuYzg9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
    server: https://172.31.1.184:6443
  name: tkg-mgmt-vsphere-20200606130718
contexts:
- context:
    cluster: tkg-mgmt-vsphere-20200606130718
    user: tkg-mgmt-vsphere-20200606130718-admin
  name: tkg-mgmt-vsphere-20200606130718-admin@tkg-mgmt-vsphere-20200606130718
current-context: tkg-mgmt-vsphere-20200606130718-admin@tkg-mgmt-vsphere-20200606130718
kind: Config
preferences: {}
users:
- name: tkg-mgmt-vsphere-20200606130718-admin
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM4akNDQWRxZ0F3SUJBZ0lJY21FYkdoSXBOZWt3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TURBMk1EWXhNVEV4TURkYUZ3MHlNVEEyTURZeE1URTJNRGhhTURReApGekFWQmdOVkJBb1REbk41YzNSbGJUcHRZWE4wWlhKek1Sa3dGd1lEVlFRREV4QnJkV0psY201bGRHVnpMV0ZrCmJXbHVNSUlCSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQXVZV1pYSVhqdzdDNUxCWDMKMWowRUFxUy9TUkh2OUVnTEp5ZmxLNUlpOW1sUDhmQlR3aUlNRDJmWXB2OS8vUWdJTVhiVmV0K1VZTnQ3WmtJVwpIM0dTMmVEbkdxenVmdzcwTnFIQXhtYlJONWgrSjYyQjM4Rml4WkpDUnUzSG9ZVGZvUklzN044V3pMRDlVQjRwCmhJTUI3RlV6YTBEQVVKYkV1czR5QU5DUVVlYmZFUHJVUjRGQ2NwUUxNTG5XREx6TVRoWWsvS1NYZE9aRzJxRXkKd3JvNlI4ajkxNTNtVElzU0NkZlFnSW94bWpsUWhyRDZDWEErL3NEMEFNWHN1QktrcVM0WEtKamtuYnV4aFBwQwpXM0Z1eDRBRnNrOC9QRERaOXFUM3hYL2N4RVl4WkxoM2NXNVlHdHBBblNuV3F0WDhFVHlsOENDU0IrSU84UWxiCm9CSzJ5UUlEQVFBQm95Y3dKVEFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUgKQXdJd0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFFT3lmS0dkTGhzS3hzeWFrbXpuVDhaQU1nRExMMFFnTnNsSQpwT1NLUXhmdk1Ib0Jkd3FycjdpMm1PTzR4YkdoYzcyLzUzMmlPRGJMYVlUVXZNNXZGVDYxalFIVnZWOUdRUVMrCkRoVG8zcTRWdnhGdDYxNmJldUhzZ29qZmM0Z2h3Mkl3M2hpUnJPYWxxYkV5S2hkVS9RMFEwUDBMYWsrZE5QTC8KbmpCblVhekpoampjK3pmWW81THZRSnA2U1R3anNCTGQ2N0s0TmQzV1piSCtHMGRWclczMUthb1AzVHFVbkNaYgpnejlrdklQTFNrdVZCMmRRYXEwZWV2MkgvRnRaRURKYmxDN01XUjAxSjRIdXVLY09MTzM1RjVKOTQ0UjVMTXZNCkwyOVdGejdKdDV1MVhySWZ4M1RFaXRnTVpzQkIvS3hFMm5FOWY5WFVZK0NValVSZE9uZz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcFFJQkFBS0NBUUVBdVlXWlhJWGp3N0M1TEJYMzFqMEVBcVMvU1JIdjlFZ0xKeWZsSzVJaTltbFA4ZkJUCndpSU1EMmZZcHY5Ly9RZ0lNWGJWZXQrVVlOdDdaa0lXSDNHUzJlRG5HcXp1Znc3ME5xSEF4bWJSTjVoK0o2MkIKMzhGaXhaSkNSdTNIb1lUZm9SSXM3TjhXekxEOVVCNHBoSU1CN0ZVemEwREFVSmJFdXM0eUFOQ1FVZWJmRVByVQpSNEZDY3BRTE1MbldETHpNVGhZay9LU1hkT1pHMnFFeXdybzZSOGo5MTUzbVRJc1NDZGZRZ0lveG1qbFFockQ2CkNYQSsvc0QwQU1Yc3VCS2txUzRYS0pqa25idXhoUHBDVzNGdXg0QUZzazgvUEREWjlxVDN4WC9jeEVZeFpMaDMKY1c1WUd0cEFuU25XcXRYOEVUeWw4Q0NTQitJTzhRbGJvQksyeVFJREFRQUJBb0lCQUEwNC8zQS92cmNRM25ITQo4d2dhK3pFeENzMHJjUjNKRUxwdXRuKy9mNnh1WHh0UVZMZnVjMHVaekRCQzM1MXFPQ05HWS9ySStxdFltVmYxCmQ5d3YzUmFZV0FCbnVPdm5aZktLM3RHRlBING82VHpzdWVmM3daRnhWalgxOXBlRmYrYmNBOFd5Tk03TUFwSDIKUkdGRWNSdW1DdThuQTAzN0lQUnJnOWJaQnFBL3BBdU5MZFFjdjhzMjZlTTVoY3JoV2FLM05WWnNIMXIvR1VBNApOcWxDV0Q4TjhYTW5KSHZ5TG9laTQ3Uk1MQzNFVUM1K3lGS3lHbEl1MEpTSjltZldXSDRCcWRKOElJNzJJTmxwCjFaOUFQOUcrZjcyWTY0emxnNUppVjUyVGh0ek93QVFZNVpRS3ZEZVdEWDkrTGp3TlBwWmhKYm1qMnpqREpmdXAKdzlSSmZ1MENnWUVBeGg2VmE1dGVQeE13aGFBbFdzem5PWW9IU29BQnlKc1dSV0FQbmFwamYraGpHL1o0VWZ2MwpNVjkyRUZMd1JidjZnNWkzOTNrUkxNZk45aXZHa00wcFV5VFVaM3ZxbmM3ZWF6c3BCWDRrelduRkhJVk5mOVg1CmZYTExOOHpkdDQxb1Y2ckhaU2oxdXc3d2crWUlyUXB1bjBIT1BSbkE2ZjdvSW1LOGxhQkhoRzhDZ1lFQTc3alcKVi83T3gvYnhydjJmT1BkdGFjUmxPYWFYbzhRL3IvYm1DNTEyZk9tMzlGZ0xQTWpKTzVsZWNaTmlHanpRWG1WWAp1YVU2cmNzL1dZRGxIQ0pWaXU5RTBIN1BpcWRCc0tNNVFoRDlHa1RpckhINHdGZE9NSjd0K1RHRHRDOTlMcE1rCnhZQzliNWd6eHByNkM3NlRaRWpYZDZXUWNRZk9GODd1SnA2b3hFY0NnWUVBb2ZlYTNIQVdhcVo3Z3FMY0p4RmcKNzI5aWFvdWY2YXF3V0dNaUlSbU5ZcUpQZENyWlR0MFl4NnB0VVFjZEcwV0VsbFVpQVJWZTd5Y2h3R0VsWW5mMwprdHVITWxyaUFjVi9uRmF2UUtoUjJnVGdlbUtZYXl3NVhVK2R4NjZhakZiMHVNY0xZQzVPUm5EK1BEYXhYUllzClBkS0VrdnNjOWEvSmcyTUpIYUg5RmMwQ2dZRUFoSjhNcmpnQTdDM1pQWWVBckFKdTNLSFRvcFpndERCaFQ1ZFgKWTd1a2pxeTZvWXFJQlFQTUdKWGI0eGUzb1c1ZGxLdWFZZEZnYWovVWQwN1E4Y1NvOEtrNHQrUkFLNlFtdW5OQwp2U01xODNxQ3NRYUlxTmhrSUkvOGRlMkh3SXE1UmFnUUl0ZjdkWjZObm1Xa3loU1A5RjQ4SGl4UmdjYmdwTmxOCmRiNDIvZ2NDZ1lFQXd0TC9Cdk9IUlo3MEwzblk2UkJsM1pNYmRUU0NWdmh1SlFOZEQ4TXJaTUI1bTNEQTVmTisKa3pBOHZtRHlEZjJwaDM4SVZNckR4cWFRNHpseUViSVF1d0d0bExuSzlBNzNlWVp2UTUxbGRoL1VWYTdkbmJvMgpWM0NZZXdIaTY4ZU5YaTVkOXVrbWlsRVpmYVVDQjVLRnJyQ3ZyUnFFUzFpQmVjSlBVVDR4ejFBPQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=
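
How you get the file over doesn’t really matter. A minimal sketch, assuming the machine that ran the deployment is reachable via SSH (user, hostname and source path are placeholders for my environment):

# copy the management cluster kubeconfig from the deployment machine (placeholder host/path)
mkdir -p ~/.kube
scp vraccoon@deploy-host:.kube/config ~/.kube/config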

Let’s check if it’s working:

vraccoon@ubu:~$ kubectl get nodes
NAME                                                    STATUS   ROLES    AGE     VERSION
tkg-mgmt-vsphere-20200606130718-control-plane-262zv     Ready    master   4h37m   v1.18.2+vmware.1
tkg-mgmt-vsphere-20200606130718-md-0-7d96774647-8hvq5   Ready    <none>   4h31m   v1.18.2+vmware.1

We can see two nodes, one of which is the Master while the other is the Worker.
Now that we are logged in to our Kubernetes Management Cluster (meaning we have its kubeconfig at the default location ~/.kube/config), we can add this cluster to tkg-cli:

vraccoon@ubu:~$ tkg add management-cluster -v 1
Using configuration file: /home/vraccoon/.tkg/config.yaml
New management cluster context has been added to config file

That looks good. Let’s confirm that it worked:

vraccoon@ubu:~$ tkg get management-cluster -o yaml
- name: tkg-mgmt-vsphere-20200606130718
  context: tkg-mgmt-vsphere-20200606130718-admin@tkg-mgmt-vsphere-20200606130718
  file: /home/vraccoon/.kube-tkg/config
  isCurrentContext: false

Ok, that worked.
We can manage multiple Management Clusters from our tkg-cli. Thus we need to set the context to the Management Cluster we want to run our commands against:

vraccoon@ubu:~$ tkg set management-cluster tkg-mgmt-vsphere-20200606130718
The current management cluster context is switched to tkg-mgmt-vsphere-20200606130718

Deploy the Workload Cluster

Deploying a cluster is actually pretty simple. Similar to the Management Cluster, you have two different templates aka “plans” – dev (single control plane node) and prod (highly available control plane).
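
If you wanted the prod plan with explicit node counts instead, a sketch could look like this (cluster name is just an example; the machine-count flags are available in the tkg CLI version I’m using – check tkg create cluster --help on yours):

# hypothetical prod-plan deployment with explicit node counts
tkg create cluster tkg-prod-example --plan=prod \
  --controlplane-machine-count=3 \
  --worker-machine-count=3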

For this post I’ll stick with the dev plan. Simply run tkg create cluster tkg-helloworld --plan=dev:

vraccoon@ubu:~$ tkg create cluster tkg-helloworld --plan=dev
Logs of the command execution can also be found at: /tmp/tkg-20200607T140140031038029.log
Creating workload cluster 'tkg-helloworld'...

Validating configuration...
Waiting for cluster nodes to be available...

Workload cluster 'tkg-helloworld' created

vraccoon@ubu:~$

Once the deployment has finished, we can see the cluster as part of tkg:

vraccoon@ubu:~/.kube$ tkg get clusters
 NAME            NAMESPACE  STATUS   CONTROLPLANE  WORKERS  KUBERNETES
 tkg-helloworld  default    running  1/1           2/2      v1.18.2+vmware.1

Next, we need to get the credentials:

vraccoon@ubu:~/.kube$ tkg get credentials tkg-helloworld
Credentials of workload cluster 'tkg-helloworld' have been saved
You can now access the cluster by switching the context to 'tkg-helloworld-admin@tkg-helloworld'

The credentials are automatically added to the kubeconfig file (~/.kube/config), as kubectl config get-contexts shows:

CURRENT   NAME                                                                    CLUSTER                           AUTHINFO                                NAMESPACE
          tkg-helloworld-admin@tkg-helloworld                                     tkg-helloworld                    tkg-helloworld-admin
*         tkg-mgmt-vsphere-20200607144656-admin@tkg-mgmt-vsphere-20200607144656   tkg-mgmt-vsphere-20200607144656   tkg-mgmt-vsphere-20200607144656-admin

We only have to switch the context:

vraccoon@ubu:~/.kube$ kubectl config use-context tkg-helloworld-admin@tkg-helloworld
Switched to context "tkg-helloworld-admin@tkg-helloworld".

Lastly, let’s check the nodes in the freshly deployed cluster (k is just an alias for kubectl):

vraccoon@ubu:~/.kube$ k get nodes
NAME                                   STATUS   ROLES    AGE     VERSION
tkg-helloworld-control-plane-tlvx7     Ready    master   12m     v1.18.2+vmware.1
tkg-helloworld-md-0-6f6c887655-57zpk   Ready    <none>   3m57s   v1.18.2+vmware.1
tkg-helloworld-md-0-6f6c887655-sl2wz   Ready    <none>   3m58s   v1.18.2+vmware.1
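
Just to be sure the cluster actually schedules pods, a quick smoke test (nginx is only a stand-in image here):

# create a throwaway deployment and watch the pod come up
kubectl create deployment hello --image=nginx
kubectl get pods -o wide
# clean up afterwards
kubectl delete deployment hello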

Last words

As you have seen, it’s actually pretty easy to deploy a Kubernetes cluster with tkg. The deployed cluster is kind of a child cluster of our Management Cluster, which takes care of its lifecycle management and also manages the shared services.
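
Since the Management Cluster owns the lifecycle of the workload cluster, day-2 operations also go through tkg-cli. For example, scaling the workers could look roughly like this (tkg scale cluster exists in the TKG version I’m using – verify with tkg --help on yours):

# scale the workload cluster to three worker nodes
tkg scale cluster tkg-helloworld --worker-machine-count 3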

From the comments: I was not using NSX-T for this setup. The K8s nodes were connected to the same L2 network, which was a simple vDS port group. Inside the K8s cluster, I was using Calico as the CNI.
