Tanzu automatically adding DNS entries with External DNS
In my experience, a lot of companies struggle to introduce Kubernetes and other Cloud Native solutions simply because they try to operate them the “traditional way”. If you just implement new technologies while completely ignoring operational concepts and processes, you will probably fail, or at least have a hard time building acceptance and operating the environment.
One example of this is DNS entries. Imagine you are deploying a new app into your K8s cluster. To be able to reach it, DNS records usually have to be created, not least because many applications depend on forward and reverse lookup of their own names.
With K8s it takes minutes, if not seconds, to deploy the app … then you wait until your change request ticket is handled … then you notice you made a typo … redeploy in seconds … new ticket, another day or two …
Wouldn’t it be better to have the DNS entries automatically created as part of the app deployment?
This is possible with a tool called External DNS. External DNS is not a DNS server itself. Instead, it propagates DNS entries from your Ingress or HTTPProxy objects to your real DNS servers. It is compatible with many different DNS backends, such as Google Cloud DNS, AWS Route 53, Azure DNS, Infoblox, CoreDNS, RFC 2136 and many more.
I’m going to show how it could work with a Windows based DNS Server.
Preparing Microsoft DNS Server
In my lab, I’m using a Microsoft Windows Server 2019 based DNS server. Since MS DNS Server is not explicitly listed among the External DNS providers, we can use the RFC 2136 standard (DNS dynamic updates) to communicate with it.
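To make “RFC 2136” a bit more concrete: External DNS sends DNS UPDATE messages (opcode 5) to the server, just like any dynamic-update client would. The following is a minimal, hand-rolled sketch of such a message for a single A record — purely illustrative (External DNS is written in Go and does not use this code), with the zone, name and IP taken from the examples later in this post:

```python
import struct

def encode_name(name: str) -> bytes:
    # DNS name encoding: length-prefixed labels, terminated by a zero byte
    out = b""
    for label in name.rstrip(".").split("."):
        out += bytes([len(label)]) + label.encode("ascii")
    return out + b"\x00"

def build_update(zone: str, fqdn: str, ip: str, ttl: int = 0, msg_id: int = 0x1234) -> bytes:
    """Build a minimal RFC 2136 UPDATE message that adds one A record."""
    OPCODE_UPDATE = 5
    flags = OPCODE_UPDATE << 11                  # QR=0 (request), opcode=UPDATE
    # Header: ID, flags, ZOCOUNT, PRCOUNT, UPCOUNT, ADCOUNT
    header = struct.pack("!6H", msg_id, flags, 1, 0, 1, 0)
    # Zone section: zone name, type SOA (6), class IN (1)
    zone_sec = encode_name(zone) + struct.pack("!2H", 6, 1)
    # Update section: one A record (type 1, class IN) to add
    rdata = bytes(int(octet) for octet in ip.split("."))
    update = (encode_name(fqdn)
              + struct.pack("!2HIH", 1, 1, ttl, len(rdata))
              + rdata)
    return header + zone_sec + update

msg = build_update("vraccoon.lab", "hello-world.vraccoon.lab", "172.31.250.5")
opcode = (struct.unpack("!H", msg[2:4])[0] >> 11) & 0xF
print(opcode)  # 5 == UPDATE
```

Without TSIG/GSS-TSIG, nothing in this message authenticates the sender — which is exactly why the “nonsecure updates” setting below is dangerous in production.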
NOTE: This post is only supposed to show the concept. I’m pretty much disabling every security mechanism on my DNS server in order to keep it simple. Don’t do this in production! Instead, use GSS-TSIG for authentication. I might cover that in a later post.
Prepare the DNS Zone
Open the Windows DNS Manager, navigate to your zone (or create a new one), right-click the zone and click “Properties”.
In the “General” tab, select “Nonsecure and secure” for dynamic updates.
Go to the “Zone Transfers” tab, check “Allow zone transfers” and either select “To any server” or enter a specific server.
That’s all I need to prepare on the DNS server. I’ve basically allowed everyone to request a full zone transfer and to update DNS entries. Pretty much a paradise for attackers =P
Again, in production you should use either TSIG or, in the case of MS DNS servers, GSS-TSIG.
Install External DNS extension
Next, we can install the External DNS extension into our K8s cluster.
My environment
I’m using an NSX-T based vSphere with Tanzu environment with an Ubuntu based Tanzu Kubernetes cluster in version 1.20.8.
However, it should work the same way regardless of whether you use NSX-T or NSX-ALB, vSphere with Tanzu or TKGm, and any official Tanzu Kubernetes Release.
I’ve already installed the following:
- Tanzu Kubernetes Extensions Repository 1.5
- kapp-controller v0.30.0_vmware.1
- cert-manager 1.5.3+vmware.2-tkg.1
- contour 1.18.2+vmware.1-tkg.1
I haven’t configured anything special. For this showcase, I’ve pretty much just followed the official docs without any modifications: https://docs.vmware.com/en/VMware-Tanzu-Kubernetes-Grid/1.5/vmware-tanzu-kubernetes-grid-15/GUID-packages-prep-tkgs-kapp.html
Install External DNS
Installing the External DNS extension is pretty straightforward too. As with all TKG extensions, the data values file is where the magic happens. The following is my external-dns-data-values.yaml:
```yaml
namespace: tanzu-system-service-discovery
deployment:
  args:
    - --source=service
    - --source=contour-httpproxy
    - --domain-filter=vraccoon.lab
    - --policy=sync
    - --registry=txt
    - --txt-owner-id=k8s
    - --txt-prefix=external-dns-
    - --provider=rfc2136
    - --rfc2136-host=172.31.1.10
    - --rfc2136-port=53
    - --rfc2136-zone=vraccoon.lab
    - --rfc2136-insecure
    - --rfc2136-tsig-axfr
  env: []
  securityContext: {}
  volumeMounts: []
  volumes: []
```
Lines 4-5: Specify which objects can trigger the creation/modification/deletion of a DNS record.
Line 6: Specify the domain to create DNS records for.
Line 7: If the policy is set to “sync“, DNS records will also be deleted when their source objects are removed. The default is “upsert-only“, which can add and update records, but never deletes them.
Lines 12-14: Specify the DNS server to be queried.
Line 15: Again, something you wouldn’t do in production.
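The difference between the two --policy values can be sketched as a tiny reconciliation loop. This is a toy Python model, not External DNS’s actual implementation; the record names are just illustrative:

```python
def reconcile(current: dict, desired: dict, policy: str = "upsert-only") -> dict:
    """Toy model of External DNS reconciliation.

    current/desired map FQDN -> IP. 'sync' also deletes records whose
    source objects are gone; 'upsert-only' (the default) never deletes.
    """
    result = dict(current)
    result.update(desired)                    # create/update desired records
    if policy == "sync":
        for fqdn in current:
            if fqdn not in desired:           # no longer backed by a K8s object
                del result[fqdn]
    return result

current = {"old-app.vraccoon.lab": "172.31.250.9"}
desired = {"hello-world.vraccoon.lab": "172.31.250.5"}
print(reconcile(current, desired, "upsert-only"))  # old-app record survives
print(reconcile(current, desired, "sync"))         # old-app record is removed
```

With “sync”, deleting the HTTPProxy below would also remove its DNS record — convenient, but worth being aware of before pointing it at a shared zone.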
After creating the external-dns-data-values.yaml file, we can install the extension:
```
╭─vraccoon@home ~
╰─➤ tanzu -n tanzu-package-repo-global package install external-dns -p external-dns.tanzu.vmware.com -v 0.10.0+vmware.1-tkg.1 -f external-dns-data-values.yaml
- Installing package 'external-dns.tanzu.vmware.com'
I0227 10:49:16.381540 9204 request.go:665] Waited for 1.017734495s due to client-side throttling, not priority and fairness, request: GET:https://172.31.250.3:6443/apis/acme.cert-manager.io/v1alpha3?timeout=32s
| Installing package 'external-dns.tanzu.vmware.com'
| Getting package metadata for 'external-dns.tanzu.vmware.com'
| Creating service account 'external-dns-tanzu-package-repo-global-sa'
| Creating cluster admin role 'external-dns-tanzu-package-repo-global-cluster-role'
| Creating cluster role binding 'external-dns-tanzu-package-repo-global-cluster-rolebinding'
| Creating secret 'external-dns-tanzu-package-repo-global-values'
| Creating package resource
/ Waiting for 'PackageInstall' reconciliation for 'external-dns'
| 'PackageInstall' resource install status: Reconciling
Added installed package 'external-dns'
```
Run a test deployment
To test it, I’ve prepared a very simple deployment. It spins up a web server and creates an HTTPProxy object in front of it.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: hello-world
  name: hello-world
spec:
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - image: tutum/hello-world
        name: hello-world
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: hello-world
  name: hello-world
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: hello-world
---
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: hello-world
spec:
  virtualhost:
    fqdn: hello-world.vraccoon.lab
  routes:
  - conditions:
    - prefix: /
    services:
    - name: hello-world
      port: 80
```
Line 40: This is the FQDN we want to use. It has not been defined anywhere else yet. Thus, the expectation is that a new DNS entry with this name will be created.
After creating the above objects, we can check the external-dns pod logs:
```
╭─vraccoon@home ~/
╰─➤ kubectl -n tanzu-system-service-discovery logs external-dns-7f959c6557-d4ndv -f
<OUTPUT ABBREVIATED>
time="2022-02-27T10:56:52Z" level=info msg="Adding RR: hello-world.vraccoon.lab 0 A 172.31.250.5"
time="2022-02-27T10:56:52Z" level=info msg="Adding RR: external-dns-hello-world.vraccoon.lab 0 TXT \"heritage=external-dns,external-dns/owner=k8s,external-dns/resource=HTTPProxy/default/hello-world\""
<OUTPUT ABBREVIATED>
```
Apparently, a Resource Record (RR) has been created. Let’s double-check on the DNS Server.
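Note the second log line: besides the A record, External DNS created a companion TXT record (because of --registry=txt and --txt-prefix=external-dns-). That record is how it marks a DNS entry as “owned” by this instance, so it never touches records it didn’t create. A small Python snippet to pull the ownership metadata apart — the parsing helper is my own, the TXT payload is taken verbatim from the log above:

```python
def parse_ownership(txt: str) -> dict:
    """Split an External DNS registry TXT payload into its key=value fields."""
    return dict(item.split("=", 1) for item in txt.split(","))

# TXT payload copied from the external-dns pod log above
txt = ("heritage=external-dns,external-dns/owner=k8s,"
       "external-dns/resource=HTTPProxy/default/hello-world")
info = parse_ownership(txt)
print(info["external-dns/owner"])     # k8s -> matches --txt-owner-id=k8s
print(info["external-dns/resource"])  # HTTPProxy/default/hello-world
```

The owner field matching --txt-owner-id is what allows several External DNS instances (or clusters) to safely share one zone.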
Seems to work =D