To create a highly available control plane, we install Kubernetes with kubeadm on the first control plane node in almost the same way as for a single control plane cluster, then we join the remaining control plane nodes in a similar manner to joining worker nodes.
SSH to `controlplane01` and become root:

```bash
ssh controlplane01
sudo -i
```
Set a shell variable for the pod network CIDR:

```bash
POD_CIDR=192.168.0.0/16
```
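If you change the CIDR from the default, a quick format check can catch typos before they reach `kubeadm init`. This is a hypothetical helper, not part of the course steps; it only validates the `a.b.c.d/len` shape, not whether the range is routable:

```shell
# Optional sanity check (hypothetical): make sure the CIDR string is
# well-formed before passing it to kubeadm init.
POD_CIDR=192.168.0.0/16

if echo "$POD_CIDR" | grep -Eq '^([0-9]{1,3}\.){3}[0-9]{1,3}/[0-9]{1,2}$'; then
    echo "POD_CIDR looks valid: $POD_CIDR"
else
    echo "POD_CIDR is malformed: $POD_CIDR" >&2
fi
```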
Boot the first control plane using the IP address of the load balancer as the control plane endpoint. Set a shell variable to the IP of the load balancer:

```bash
LOADBALANCER=$(dig +short loadbalancer)
```
…and install Kubernetes using `--control-plane-endpoint` and `--upload-certs`, which tells kubeadm that we are building a cluster with multiple control plane nodes:

```bash
kubeadm init --pod-network-cidr $POD_CIDR \
    --control-plane-endpoint ${LOADBALANCER}:6443 \
    --upload-certs
```
Copy both join commands that are printed to a notepad: one is for the other control plane nodes, the other is for the worker nodes.
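The tokens, hashes, and certificate key are generated per cluster, and the exact output format can vary slightly between kubeadm versions, but the two commands will have roughly this shape (all bracketed values below are placeholders, not real values):

```
# Control plane join -- note the extra --control-plane and --certificate-key flags.
kubeadm join <LOADBALANCER_IP>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash> \
    --control-plane --certificate-key <key>

# Worker join -- the same command without the control plane flags.
kubeadm join <LOADBALANCER_IP>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>
```

Copy the real commands from your own `kubeadm init` output; do not type these placeholders.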
Install the Calico network plugin:

```bash
kubectl --kubeconfig /etc/kubernetes/admin.conf create -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.3/manifests/tigera-operator.yaml
kubectl --kubeconfig /etc/kubernetes/admin.conf create -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.3/manifests/custom-resources.yaml
```
Check that we are up and running:

```bash
kubectl --kubeconfig /etc/kubernetes/admin.conf get pods -n kube-system
```
Exit the root shell:

```bash
exit
```
Prepare the kubeconfig file for copying to `student-node`, which is where we will run future `kubectl` commands from:

```bash
{
    sudo cp /etc/kubernetes/admin.conf .
    sudo chmod 666 admin.conf
}
```
Exit back to `student-node`:

```bash
exit
```
On `student-node`, copy down the kubeconfig so we can run `kubectl` commands from there:

```bash
{
    mkdir -p ~/.kube
    scp controlplane01:~/admin.conf ~/.kube/config
}
```
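Before running cluster commands, it can help to confirm the kubeconfig actually landed. The helper below is hypothetical (not part of the course material) and simply succeeds when the given kubeconfig file exists and is non-empty; the demo uses a scratch file so the check can be seen passing:

```shell
# Hypothetical helper: succeed only if the given kubeconfig file exists
# and is non-empty. Defaults to ~/.kube/config.
check_kubeconfig() {
    [ -s "${1:-$HOME/.kube/config}" ]
}

# Demonstrate against a scratch file.
tmp=$(mktemp)
echo "apiVersion: v1" > "$tmp"
if check_kubeconfig "$tmp"; then
    echo "kubeconfig OK"
fi
rm -f "$tmp"
```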
Be on `student-node`. For each of `controlplane02` and `controlplane03`, do the following (shown here for `controlplane02`):

SSH to the node and become root:

```bash
ssh controlplane02
sudo -i
```

Paste the control plane join command that was output by `kubeadm init` on `controlplane01`.

Exit twice to return to `student-node`:

```bash
exit
exit
```

Repeat for `controlplane03`.
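If you prefer not to log in to each node by hand, the per-node steps can be sketched as a loop from `student-node`. This is an illustrative sketch only: `JOIN_CMD` is a placeholder you must replace with the control plane join command saved from `controlplane01`, and it assumes passwordless SSH and sudo on the target nodes:

```
# Sketch (placeholder values): run the saved control plane join command
# on each remaining control plane node.
JOIN_CMD='<paste control plane join command from controlplane01 here>'

for node in controlplane02 controlplane03; do
    ssh "$node" "sudo $JOIN_CMD"
done
```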
Next: Worker setup<br/>Prev: Node Setup