Prepare and Build a Production Environment for Kubernetes with kubeadm
Basically, I am condensing what's been said on the official Kubernetes webpage here, so you can refer to it for detailed information.
To get Kubernetes up and running on a server, you need to:
- Install a container runtime
- Install K8S command tools
There are three ways listed on the official website to set up Kubernetes and three container runtime choices. In this post, I will use Docker as the container runtime and kubeadm to set up the control plane.
Please note that in this post, I will only set up a PRIVATE K8S cluster with ONE control plane node and NO worker nodes.
All commands are executed on CentOS 8.
Install container runtime
I have put all the commands together in one script, as below.
curl https://raw.githubusercontent.com/ystatit/bash/master/k8s-docker.sh | bash
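I have not reproduced the script here, but it roughly corresponds to the standard Docker CE installation on CentOS 8; a sketch of those steps (not necessarily identical to the script) looks like this:
# Sketch only: standard Docker CE installation on CentOS 8 (the script above may differ)
sudo dnf install -y dnf-plugins-core
sudo dnf config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo dnf install -y docker-ce docker-ce-cli containerd.io
sudo systemctl enable --now docker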
Install K8S command tools
Next, let's install the K8S command tools. Basically, we are installing kubelet, kubectl, and kubeadm.
curl https://raw.githubusercontent.com/ystatit/bash/master/k8s-tools.sh | bash
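Again, the script is not reproduced here; at the time of writing, the official installation steps it roughly corresponds to look like this on CentOS (a sketch; the actual script may differ):
# Sketch only: official kubeadm/kubelet/kubectl install on CentOS (the script above may differ)
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
sudo setenforce 0   # set SELinux to permissive, as in the official guide
sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
sudo systemctl enable --now kubelet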
Initiate K8S control plane
At this point, we have already set up the environment for K8S. Now we start to initiate the cluster using the kubeadm command. If you do not specify the --control-plane-endpoint flag, kubeadm will parse your current network configuration and use it for the control plane endpoint IP address, which is most likely a private IP address if you use VMs in the cloud.
To initiate a Kubernetes cluster with a public IP address as the control plane endpoint, please refer to Deploy Kubernetes with Specific Public IP Address for Control Plane Endpoint.
kubeadm init
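If you do want to pin the endpoint explicitly instead of letting kubeadm pick it, the same command accepts the flag mentioned above; for example (the value here is just my private IP and is illustrative):
kubeadm init --control-plane-endpoint=172.31.43.204:6443   # illustrative: pin the API server endpoint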
After a couple of minutes, it will finish as shown below.
Please note that since the control plane endpoint is a private IP address, 172.x.x.x as shown, the client you want to use to communicate with the cluster has to have network connectivity to 172.x.x.x.
- admin.conf contains the credentials to communicate with the K8S cluster; put it at $HOME/.kube/config on your client. In my case, my client can access 172.31.43.204:6443
- OR use this command on your client to communicate with the K8S cluster in the current SSH session
- You need to install a K8S network add-on so pods can run on K8S
- Use this command to add a worker node to the K8S cluster (a sketch of these commands follows this list)
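The commands referenced in notes 1, 2, and 4 are printed by kubeadm init itself, so copy them from your own output; they typically look like this (the token and hash are placeholders):
# Note 1: copy admin.conf to your client's kubeconfig
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# Note 2: or point KUBECONFIG at admin.conf for the current session only
export KUBECONFIG=/etc/kubernetes/admin.conf
# Note 4: join a worker node (use the token and hash printed by your own kubeadm init)
kubeadm join 172.31.43.204:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>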
Run command №2 and list all running pods on K8S; you shall see that two pods are Pending, as shown below.
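For example, to list the pods in all namespaces:
kubectl get pods --all-namespaces   # the two coredns pods stay Pending until a network add-on is installed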
That is because the network add-on for K8S is not installed yet. Therefore, as an example, install the Weave Net add-on as below.
Once done, the control plane node's status will change from NotReady to Ready, and the coredns pods will be deployed.
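At the time of writing, the Weave Net manifest could be applied like this (a sketch; check the Weave Net documentation for the current manifest URL):
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"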
But at this moment, normal workload pods are still not able to run on the control plane node because the node is tainted. By default, the control plane node is not used to run workloads; we can see that a taint is present with key node-role.kubernetes.io/master and effect NoSchedule.
kubectl get nodes -o jsonpath='{.items[*].spec.taints[*]}' | jq
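On a freshly initialized control plane node, the output will look something like this:
{
  "effect": "NoSchedule",
  "key": "node-role.kubernetes.io/master"
}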
So we must untaint it. Once done, run the check command again and the taint output will be empty.
kubectl taint no cp-2 node-role.kubernetes.io/master:NoSchedule-
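Here cp-2 is the name of my control plane node as shown by kubectl get nodes (no is short for nodes), and the trailing - tells kubectl to remove the taint rather than add it.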
Now verify again that pods do run on the control plane node and are assigned IP addresses from the K8S network.
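A quick way to check this, for example, is to run a throwaway pod and look at where it lands (the nginx-test name is just for illustration):
kubectl run nginx-test --image=nginx
kubectl get pods -o wide   # the pod should be Running on the control plane node with a pod network IP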
And that's it! We have successfully bootstrapped a PRIVATE K8S cluster with ONE control plane node using the kubeadm command!
In my next post, I will show you how to bootstrap a PUBLIC K8S cluster and add an extra control plane node to the cluster. Stay tuned!
For joining a worker node, please refer to:
Regenerate Kubernetes Join Command to Join Work Node