Prepare and Build Production Environment for Kubernetes with kubeadm

Image: https://www.ovh.com/blog/why-ovh-managed-kubernetes/

Basically, I am condensing what is said on the official Kubernetes documentation here, so refer to it for detailed information.

To have Kubernetes up and running on a server, you need to:

  1. Install a container runtime
  2. Install the K8S command tools

There are three ways listed on the official website to set up Kubernetes, and three container runtime choices. In this post, I will use Docker as the container runtime and kubeadm to set up the control plane.

Please note that in this post, I will only set up a PRIVATE K8S cluster with ONE control plane node, NOT any worker node.

All commands are executed on CentOS 8.

I have put all the container runtime setup commands together as below.

curl https://raw.githubusercontent.com/ystatit/bash/master/k8s-docker.sh | bash
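If you prefer to run the steps manually, the script roughly amounts to a standard Docker CE installation on CentOS 8. A minimal sketch only; the actual k8s-docker.sh may do more, such as kernel and firewall tweaks:

# Sketch of a typical Docker CE install on CentOS 8 (the real script may differ)
# add the Docker CE repository
dnf config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
# install the runtime
dnf install -y docker-ce docker-ce-cli containerd.io
# start Docker now and on every boot
systemctl enable --now docker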

Next, let’s install the K8S command tools. Basically, we are installing kubelet, kubectl and kubeadm.

curl https://raw.githubusercontent.com/ystatit/bash/master/k8s-tools.sh | bash
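The script presumably follows the official installation guide; a sketch of the usual steps (the actual k8s-tools.sh may differ):

# Sketch of the official kubelet/kubeadm/kubectl install steps (the real script may differ)
# add the upstream Kubernetes yum repository
cat <<'EOF' > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

# set SELinux to permissive mode, as the official guide recommends
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

# install the three tools and start kubelet
dnf install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
systemctl enable --now kubelet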

At this point, we have already set up the environment for K8S. Now we start to initiate the cluster using the kubeadm command. If you do not specify the --control-plane-endpoint flag, kubeadm will parse your current network configuration and use your node’s IP address as the control plane endpoint, which is most likely a private IP address if you use VMs on clouds.

To initiate a Kubernetes cluster with a public IP address for the control plane endpoint, please refer to Deploy Kubernetes with Specific Public IP Address for Control Plane Endpoint.
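For reference, that flag takes an address and port; a hypothetical example, where the IP is just a documentation placeholder:

# hypothetical: pin the control plane endpoint to a specific address
kubeadm init --control-plane-endpoint "203.0.113.10:6443"

For this post, we stick with the defaults: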

kubeadm init

After a couple of minutes, it will finish and print follow-up instructions, summarized below.

Please note that since the control plane endpoint is a private IP address (172.31.43.204 in my case), the client you want to use to communicate with the cluster must have network connectivity to that address.

  1. admin.conf contains the credentials to communicate with the K8S cluster; put it at $HOME/.kube/config on your client. In my case, my client can access 172.31.43.204:6443.
  2. OR use this command on your client to communicate with the K8S cluster in the current ssh session.
  3. You need to install a K8S network add-on so pods can run on K8S.
  4. Use this command to add a worker node to the K8S cluster (the commands for steps 1, 2 and 4 are sketched after this list).
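Hedged from the standard kubeadm init output (your endpoint, token and hash will differ), the commands look roughly like this:

# step 1: copy admin.conf as the kubectl config for a regular user
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# step 2: or point KUBECONFIG at admin.conf for the current ssh session
export KUBECONFIG=/etc/kubernetes/admin.conf

# step 4: join a worker node (token and hash are placeholders)
kubeadm join 172.31.43.204:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>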

Run command №2 and list all pods on K8S; you shall see that two pods are Pending, as checked below.

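The check, assuming the kubeconfig from step 1 or 2 is in place:

# list pods in all namespaces; the two coredns pods show Pending
kubectl get pods --all-namespaces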

That is because a pod network for K8S is not installed yet. Therefore, as an example, install the Weave Net network as below.

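The install one-liner from the Weave Net documentation of the time (check the current Weave Net docs for the up-to-date manifest URL):

kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"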

Once done, the control plane node’s status will change from NotReady to Ready, and the coredns pods will be deployed.

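To verify:

# the node should now report Ready
kubectl get nodes
# the coredns pods should now be Running
kubectl get pods -n kube-system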

But at this moment, regular workload pods are still not able to run on the control plane node, because the node is tainted. By default, the control plane is not used to run workloads; we can see that a taint is present with key node-role.kubernetes.io/master and effect NoSchedule.

kubectl get nodes -o jsonpath='{.items[*].spec.taints[*]}' | jq
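On a default kubeadm control plane of this version, the output is the master taint, which carries a key and an effect but no value:

{
  "effect": "NoSchedule",
  "key": "node-role.kubernetes.io/master"
}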

So we must remove the taint. Once done, run the check command again and the taint list is empty.

kubectl taint no cp-2 node-role.kubernetes.io/master:NoSchedule-

Now verify again that pods do run on the control plane node, with a K8S network IP address assigned.

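A quick smoke test; the nginx pod is just a hypothetical example:

# run a test pod; with the taint removed it can be scheduled on the control plane node
kubectl run nginx --image=nginx
# -o wide shows the pod IP assigned by the K8S network and the node it runs on
kubectl get pods -o wide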

And that’s it! We have successfully bootstrapped a PRIVATE K8S cluster with ONE control plane node using the kubeadm command!

In my next post, I will show you how to bootstrap a PUBLIC K8S cluster and add an extra control plane node into the cluster. Stay tuned!

For joining a worker node, please refer to:
Regenerate Kubernetes Join Command to Join Work Node
