Tanzu Kubernetes Grid (TKG) lets you deploy upstream, open-source Kubernetes in an automated fashion across heterogeneous platforms such as on-premises vSphere, Amazon Web Services, and VMware Cloud on AWS. TKG provides the automation capabilities for a production Kubernetes environment, along with the supporting services such an environment requires: networking, authentication, ingress control, and logging.
TKG provides a far easier way to spin up Kubernetes clusters than the DIY route, which involves the many manual tasks highlighted in Kelsey Hightower’s Kubernetes The Hard Way tutorial.
While the bring-up and configuration of a cluster is automated, a solution like TKG should also provide ways to scale clusters on demand as workloads grow. Like any good automation tool, TKG not only automates the creation of a Kubernetes cluster but also provides options to scale it up as and when required.
The TKG cluster deployment wizard offers a standard set of sizes depending on the type of deployment you choose. When you need to scale an existing cluster beyond those presets, there’s the CLI. In this blog post, we will look at how to scale a cluster using the TKG command-line interface.
Let us first spin up a cluster with the dev plan. This creates one control plane node, a load balancer, and one worker node.
tkg create cluster abhilashb --plan dev
Let’s switch context to the newly created cluster to make sure we are making changes to it and not to any other cluster. We do this with the following set of commands:
tkg get credentials <cluster-name>                # abhilashb in our case
kubectl config use-context <context-from-above>   # "abhilashb-admin@abhilashb" in our case
kubectl get nodes                                 # cross-verify you are on the right context
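The three steps above can be combined into a small script. A minimal sketch, assuming the cluster name used in this post and the `<cluster>-admin@<cluster>` admin-context naming that TKG produces:

```shell
#!/bin/sh
# Sketch of the context-switch steps; assumes the cluster name from this post.
CLUSTER="abhilashb"

# TKG admin contexts follow the pattern <cluster>-admin@<cluster>
CONTEXT="${CLUSTER}-admin@${CLUSTER}"
echo "$CONTEXT"   # prints: abhilashb-admin@abhilashb

# Against a live management cluster, you would then run:
#   tkg get credentials "$CLUSTER"
#   kubectl config use-context "$CONTEXT"
#   kubectl get nodes
```

Deriving the context name from the cluster name avoids typos when you manage several clusters from the same workstation.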
Now let’s look at the commands to scale the cluster. On the cluster we just created, we will scale the control plane to 3 nodes (the control plane must have an odd number of nodes so that etcd can maintain quorum), and we will also increase the worker node count to 3.
The command for this is:
tkg scale cluster abhilashb --controlplane-machine-count 3 --worker-machine-count 3
Now let’s verify the nodes by running the kubectl get nodes command.
We can now see that the cluster has scaled up to 3 control plane nodes and 3 worker nodes, and applications can be deployed across all of them.
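If you would rather not eyeball the node list, you can tally the roles from the kubectl output. A minimal sketch; it assumes control plane nodes report `master` or `control-plane` in the ROLES column of `kubectl get nodes`, and the counting logic is factored into a function so it can be exercised without a live cluster:

```shell
#!/bin/sh
# Count control-plane vs worker nodes from `kubectl get nodes --no-headers` output.
count_nodes() {
  # Field 3 is the ROLES column; anything else is counted as a worker.
  awk '$3 ~ /master|control-plane/ { cp++; next } { w++ }
       END { printf "%d control-plane, %d worker\n", cp, w }'
}

# Against the live cluster you would pipe real output in:
#   kubectl get nodes --no-headers | count_nodes
```

After the scale operation completes, this should report 3 control-plane and 3 worker nodes for our cluster.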
This is the power of automation that Tanzu Kubernetes Grid brings to upstream Kubernetes. Try it in your environment and let me know what you think. I will also try to cover TKG’s integration with other tools like Velero and Prometheus in upcoming blog posts. Stay tuned!
Please drop a comment if you find this helpful or if you have any feedback for me.