2️⃣ Deploy Kubernetes

In this section, I will show how to deploy Kubernetes 1.21 with the Calico CNI plugin.
Please ensure that you have a sufficient number of nodes prepared, each already running Ubuntu Server 20.04.
The same deployment method applies at least through version 1.23, and subsequent articles verify that the cluster can be upgraded to the current latest version, 1.27.

Add a Model

A model holds a specific deployment, so it is a good idea to create a new one for each deployment.
Remember that you can have multiple models on each controller, so you can deploy multiple Kubernetes clusters, or other applications.
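With Juju this looks like the following (the model name is just an example):

```shell
# Create a dedicated model for this cluster; "k8s-prod" is an example name
juju add-model k8s-prod

# List the models on the current controller at any time
juju models
```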

Machine Preparation

Add some machines

For a production-ready Charmed Kubernetes, you should have at least three etcd nodes, two master nodes, and several worker nodes. All of these nodes should have password-less SSH login set up in advance.
:::tip{title="Note"}
The master and etcd components are not deployed on the same node here. Although they can share a node, I recommend deploying etcd separately.
:::
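Since the nodes already allow password-less SSH login, they can be registered with Juju through manual provisioning. A sketch, with example users and addresses:

```shell
# Register existing Ubuntu machines via SSH (manual provisioning).
# Replace the user and IP addresses with your own.
juju add-machine ssh:ubuntu@10.0.1.11   # intended for etcd
juju add-machine ssh:ubuntu@10.0.1.21   # intended for a master
juju add-machine ssh:ubuntu@10.0.1.31   # intended for a worker
```

Repeat for every prepared node; Juju assigns each machine a numeric ID in order.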

View existing machines

Run the following command to verify that the machines have been registered. Also note each machine's ID, because we'll need it for the deployment later.
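For example:

```shell
# List registered machines with their IDs, state, and addresses
juju machines
```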

Deploy Kubernetes

Now, we begin deploying Kubernetes according to the roles assigned to the machines in the Machine Preparation step.
Please proceed with the following deployment command:
:::warning{title="Note"}
At the beginning, we will not deploy in a high-availability configuration; we will temporarily deploy only a single replica of each component.
Please make sure to replace "service-cidr" and "calico-cidr" with values appropriate for your environment.
We use the Calico CNI and enable IgnoreLooseRPF, so you need to install ethtool on each machine in advance.
:::
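A sketch of the per-component deployment, assuming machine IDs taken from `juju machines` and example CIDR values; the charm names and relations follow the Charmed Kubernetes 1.21 bundle:

```shell
# Single replica of each component to start; machine IDs and CIDRs are examples
juju deploy easyrsa --to 0
juju deploy etcd --to 1
juju deploy kubernetes-master --to 4 --config service-cidr=10.152.183.0/24
juju deploy kubernetes-worker --to 5
juju deploy containerd
juju deploy calico --config cidr=192.168.0.0/16 --config ignore-loose-rpf=true

# Wire the components together
juju add-relation kubernetes-master:kube-control kubernetes-worker:kube-control
juju add-relation kubernetes-master:etcd etcd:db
juju add-relation easyrsa:client etcd:certificates
juju add-relation easyrsa:client kubernetes-master:certificates
juju add-relation easyrsa:client kubernetes-worker:certificates
juju add-relation calico:etcd etcd:db
juju add-relation calico:cni kubernetes-master:cni
juju add-relation calico:cni kubernetes-worker:cni
juju add-relation containerd:containerd kubernetes-master:container-runtime
juju add-relation containerd:containerd kubernetes-worker:container-runtime
```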
Juju is now busy creating instances, installing software and connecting the different parts of the cluster together, which can take several minutes. You can monitor what’s going on by running:
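```shell
# Colorized, auto-refreshing view of the deployment status
watch -c juju status --color
```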
Once all the workloads are displayed as active, we can start increasing the number of replicas for the components by running:
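For instance, to reach three etcd units, a second master, and an additional worker (target machine IDs are examples):

```shell
juju add-unit etcd -n 2 --to 2,3
juju add-unit kubernetes-master --to 6
juju add-unit kubernetes-worker --to 7
```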

KubeAPI Load Balancer

Once the scaling is complete, we have two master nodes, and to spread the load across both, we need to bring in a load balancer (LB). This LB can be set up internally, on a node managed by the controller, or externally using F5 or other load-balancing devices.

Software LB

If you are using an external load balancer, you can skip this part.
Here's how to set up the software LB using a controller.
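A sketch using the kubeapi-load-balancer charm (the machine ID is an example; the relation endpoints follow the Charmed Kubernetes bundle):

```shell
# Deploy the software LB to a dedicated machine and wire it in
juju deploy kubeapi-load-balancer --to 8
juju add-relation easyrsa:client kubeapi-load-balancer:certificates
juju add-relation kubernetes-master:loadbalancer kubeapi-load-balancer:loadbalancer
juju add-relation kubernetes-master:kube-api-endpoint kubeapi-load-balancer:apiserver
juju add-relation kubernetes-worker:kube-api-endpoint kubeapi-load-balancer:website
```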
Now, we have a load balancer distributing requests across two master nodes.
You may have noticed that the load balancer itself isn't highly available, since it's deployed on a single node. If that node experiences issues, worker nodes will be unable to reach the master nodes. We can solve this by deploying Keepalived:
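A sketch, assuming an unused address on your network is chosen as the virtual IP:

```shell
# Keepalived exposes a virtual IP that floats between load-balancer units
juju deploy keepalived --config virtual_ip=10.0.1.100 --config port=443
juju add-relation keepalived:juju-info kubeapi-load-balancer:juju-info

# Include the VIP in the server certificate so clients can connect through it
juju config kubeapi-load-balancer extra_sans="10.0.1.100"
```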

External Load Balancer

If you are not using an external load balancer, you can skip this part.
When using an external load balancer, you need to distribute the traffic to both master nodes, and in a production environment it is recommended to make the external load balancer itself highly available.
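In that case, the masters need to know the load balancer's address so that it is included in the API server certificate, and both masters and workers need to be pointed at it. A sketch with an example address:

```shell
# Add the external LB address to the API server certificate
juju config kubernetes-master extra_sans="192.0.2.10"

# Have masters and workers use the external LB address
juju config kubernetes-master loadbalancer-ips="192.0.2.10"
juju config kubernetes-worker loadbalancer-ips="192.0.2.10"
```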

Retrieve Config File

Finally, you can use the following command to retrieve the cluster config file:
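```shell
# Copy the generated kubeconfig from the first master to your local machine
juju scp kubernetes-master/0:config ~/.kube/config
```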

Use CoreDNS charm

CoreDNS has been the default DNS provider for Charmed Kubernetes clusters since 1.14. It will be installed and configured as part of the install process of Charmed Kubernetes.
For additional control over CoreDNS, you can also deploy it into the cluster using the CoreDNS Kubernetes operator charm.
Once everything settles out, new or restarted pods will use the CoreDNS charm as their DNS provider. The CoreDNS charm config allows you to change the cluster domain, the IP address or config file to forward unhandled queries to, add additional DNS servers, or even override the Corefile entirely.
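The rough flow, with example controller and model names, is to register the new cluster as a Kubernetes cloud, deploy coredns into it, and relate it to the machine model cross-model:

```shell
# Stop the built-in DNS so the charm can take over
juju config kubernetes-master dns-provider=none

# Register the cluster (uses ~/.kube/config) and deploy coredns into it
juju add-k8s my-k8s --controller my-controller
juju add-model coredns-model my-k8s
juju deploy coredns
juju offer coredns:dns-provider

# Back in the machine model, consume and relate the offer
juju switch k8s-prod   # example machine-model name
juju consume coredns-model.coredns
juju add-relation coredns kubernetes-master
```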