Deploy Kubernetes on CentOS 7

Yst@IT
5 min read · May 27, 2019


Containers have become very popular recently. There are many different tools that can manage and orchestrate containers, and recently I have been looking into Kubernetes. A brief introduction:

Kubernetes (K8s) is an open-source system for automating deployment, scaling, and management of containerized applications.

Cloud service providers such as AWS, Azure, and GCP have their own managed Kubernetes services: EKS, AKS, and GKE respectively. With these fully managed services, users can get a Kubernetes cluster with only a few clicks and do not have to worry about high availability and elasticity.

However, some customers still choose to host and maintain their own Kubernetes clusters locally due to policy or regulatory requirements. This article therefore shows you how to build a Kubernetes cluster step by step on CentOS 7.

I have two VMs ready: one for the master node and one for the worker node.

VMs for kubernetes

First, perform the following setup on both the master and worker nodes.

# Setup repository for kubernetes
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
# Set SELinux in permissive mode (effectively disabling it)
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
# Install kubernetes commands
yum install kubelet kubeadm kubectl -y
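
If you want to confirm which versions were installed before continuing, the client tools can report their versions:

# (Optional) Confirm the installed versions
kubeadm version
kubectl version --client
kubelet --version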
# Start kubelet service when rebooting
systemctl enable --now kubelet
# Load the br_netfilter module
modprobe br_netfilter
lsmod | grep br_netfilter
# To make sure traffic is routed correctly
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
# Enable the settings
sysctl --system
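
You can quickly verify that both settings took effect:

# Both values should report 1
sysctl net.bridge.bridge-nf-call-iptables
sysctl net.bridge.bridge-nf-call-ip6tables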
# Add Docker repository
yum-config-manager \
--add-repo \
https://download.docker.com/linux/centos/docker-ce.repo
# Install Docker CE
yum install docker-ce -y
# Enable, start docker and kubelet services
systemctl enable docker.service
systemctl restart docker
systemctl enable kubelet
systemctl start kubelet
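
At this point Docker should be active, while kubelet will keep restarting in a loop until kubeadm init (on the master) or kubeadm join (on a worker) gives it a configuration, so an error state for kubelet here is expected. To check:

# Docker should be active; kubelet will restart until kubeadm configures it
systemctl is-active docker
systemctl status kubelet --no-pager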

So far, we have completed the basic setup on both the master and worker nodes. Next, let's start working on the master node.

# Initialize master node
kubeadm init
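
If the master VM has more than one network interface, you can also tell kubeadm which address the API server should advertise. This is only an optional sketch; the IP below is the master address that appears later in this article's join command:

# Optional: pin the API server advertise address (example IP from this article)
kubeadm init --apiserver-advertise-address=172.31.39.53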

After initialization, copy down the kubeadm join command printed at the end of the output. It is needed to join worker nodes to the Kubernetes cluster.
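
If you lose that command, you can regenerate it on the master node at any time:

# Print a fresh join command (run on the master node)
kubeadm token create --print-join-command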

# If you are running as root, enter this command in order for kubectl commands to work
export KUBECONFIG=/etc/kubernetes/admin.conf
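
If you are working as a regular (non-root) user instead, kubeadm init also suggests copying the admin kubeconfig into your home directory:

# Non-root alternative, as printed in the kubeadm init output
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config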
# Deploy a pod network to the cluster
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"

For more details and options about pod networks, please refer to pod-network.
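
After applying the network add-on, you can watch the system pods come up and the master node become Ready:

# Verify the pod network and node status
kubectl get pods -n kube-system
kubectl get nodes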


Now let's return to the worker node. If you see the following warning message when issuing kubeadm join, you can follow the steps below to resolve it.

[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/

If you see the above warning when running the kubeadm join command, apply the following configuration.

# Install necessary packages
yum install yum-utils device-mapper-persistent-data lvm2
# Create /etc/docker directory & setup daemon
mkdir /etc/docker
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF
mkdir -p /etc/systemd/system/docker.service.d
# Restart docker service
systemctl daemon-reload
systemctl restart docker
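
After Docker restarts, you can confirm that it now uses the systemd cgroup driver:

# Should report: Cgroup Driver: systemd
docker info | grep -i "cgroup driver"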
# Finally, add worker node to kubernetes cluster by using the command generated from master node after kubeadm init command
kubeadm join 172.31.39.53:6443 --token 2nkiam.xxxxx \
    --discovery-token-ca-cert-hash sha256:266c7d0a89f26976fa8b5952f6xxxxx

It is very important to make sure that the worker node is allowed to connect to the master node on port 6443!
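
If firewalld is enabled on the master node, one way to open that port is the following sketch; adjust it to whatever firewall you actually use:

# On the master node: allow the Kubernetes API server port
firewall-cmd --permanent --add-port=6443/tcp
firewall-cmd --reload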


The worker node has been successfully added to the Kubernetes cluster. Now go back to the master node to confirm that the worker node is up and running.

# Get the list and status of nodes
kubectl get nodes

From the output, you can see that the worker node is Ready, meaning it is up and running. At this point, we have successfully set up the Kubernetes cluster with a worker node added to it.

The last part is to deploy a sample container to verify that the cluster works correctly. From the master node:

# Create a container running the sample image on port 8080
kubectl run node-hello --image=gcr.io/google-samples/node-hello:1.0 --port=8080
# Expose the pod to the outside world; the external ip is the local ip of the worker node
kubectl expose deployment.apps/node-hello --type="NodePort" --port 8080 --external-ip=172.31.40.107

Now that a sample container is deployed, use a browser to check the result!
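
You can also test from the command line. With the external IP set above, the service answers on port 8080 at that address, and kubectl shows the NodePort that was assigned (the IP below follows this article's setup):

# Inspect the service and test the endpoint
kubectl get svc node-hello
curl http://172.31.40.107:8080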



Written by Yst@IT

Cloud Solution Architect, focusing on Oracle Cloud Infrastructure currently.
