# 🚀 Kubernetes Cluster Setup with kubeadm

This guide provides a step-by-step process for setting up a Kubernetes cluster with `kubeadm` across multiple nodes.
## 🏗️ Cluster Architecture

- 3 cloud VMs: one master (control-plane) node and two worker nodes.
- **Master node:** Runs the control plane, which serves the Kubernetes API and manages cluster objects such as pods, replication controllers, services, and nodes.
- **Worker nodes:** Provide the runtime environment for containers. A group of pods can span multiple worker nodes, allowing efficient resource allocation.
## 🔄 Cluster Components

- **API Server:** Front end for Kubernetes; processes all API requests.
- **etcd:** Distributed key-value store holding all cluster data.
- **Controller Manager:** Reconciles the cluster toward its desired state.
- **Scheduler:** Assigns workloads (pods) to worker nodes.
- **Kubelet:** Agent on each node that keeps containers running and healthy.
- **kube-proxy:** Maintains network rules on each node so traffic reaches Services.
- **Container Runtime:** Runs the containers themselves (e.g., `containerd`).
## Network Setup: Open Required Ports 🔓

### Control Plane

| Protocol | Direction | Port Range | Purpose | Used By |
|---|---|---|---|---|
| TCP | Inbound | 6443 | Kubernetes API server | All |
| TCP | Inbound | 2379-2380 | etcd server client API | kube-apiserver, etcd |
| TCP | Inbound | 10250 | Kubelet API | Self, Control plane |
| TCP | Inbound | 10259 | kube-scheduler | Self |
| TCP | Inbound | 10257 | kube-controller-manager | Self |

> Note: Although the etcd ports are listed under the control plane, you can also host your own etcd cluster externally or on custom ports.
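How you open these ports depends on your environment. As a sketch, assuming an Ubuntu host with `ufw` (on cloud VMs you may open them in the provider's security groups instead), the control-plane rules above could be generated, reviewed, and then applied as root:

```shell
# Print (not run) one ufw rule per control-plane port from the table above;
# review the output, then pipe it to "sudo sh" to apply.
for port in 6443 2379:2380 10250 10259 10257; do
  echo "ufw allow proto tcp from any to any port $port"
done
```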
### Worker Nodes

| Protocol | Direction | Port Range | Purpose | Used By |
|---|---|---|---|---|
| TCP | Inbound | 10250 | Kubelet API | Self, Control plane |
| TCP | Inbound | 10256 | kube-proxy | Self, Load balancers |
| TCP | Inbound | 30000-32767 | NodePort Services† | All |

† Default port range for NodePort Services.
## 🛠️ Master Node Setup

🔹 Run these commands on the master node:

### 1️⃣ Disable Swap (required for the kubelet to work correctly)

```shell
sudo swapoff -a
# Comment out any swap entry in /etc/fstab so swap stays off after reboot
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
```
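To confirm swap is actually off:

```shell
# "swapon --show" prints nothing when no swap is active, and SwapTotal
# in /proc/meminfo should read 0 kB.
swapon --show
grep SwapTotal /proc/meminfo
```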
### 2️⃣ Update Kernel Parameters (for network bridging and packet forwarding)

```shell
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF

sudo sysctl --system
```
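You can spot-check that the settings took effect; on the configured node each value should be 1 (the bridge keys only appear once `br_netfilter` is loaded):

```shell
# Check that the modules are loaded; grep prints matching module names, if any.
lsmod | grep -E 'overlay|br_netfilter' || echo "modules not loaded"
# IP forwarding is readable without root via procfs (expect 1):
cat /proc/sys/net/ipv4/ip_forward
```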
### 3️⃣ Install the Container Runtime (`containerd`)

```shell
curl -LO https://github.com/containerd/containerd/releases/download/v1.7.14/containerd-1.7.14-linux-amd64.tar.gz
sudo tar Cxzvf /usr/local containerd-1.7.14-linux-amd64.tar.gz

# Install the systemd service unit
curl -LO https://raw.githubusercontent.com/containerd/containerd/main/containerd.service
sudo mkdir -p /usr/local/lib/systemd/system/
sudo mv containerd.service /usr/local/lib/systemd/system/

# Generate a default config and switch to the systemd cgroup driver
sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml
sudo sed -i 's/SystemdCgroup \= false/SystemdCgroup \= true/g' /etc/containerd/config.toml

sudo systemctl daemon-reload
sudo systemctl enable --now containerd
```

✅ Check that `containerd` is running:

```shell
systemctl status containerd
```
### 4️⃣ Install `runc` (OCI runtime for containers)

```shell
curl -LO https://github.com/opencontainers/runc/releases/download/v1.1.12/runc.amd64
sudo install -m 755 runc.amd64 /usr/local/sbin/runc
```
### 5️⃣ Install CNI Plugins (networking for Kubernetes)

```shell
curl -LO https://github.com/containernetworking/plugins/releases/download/v1.5.0/cni-plugins-linux-amd64-v1.5.0.tgz
sudo mkdir -p /opt/cni/bin
sudo tar Cxzvf /opt/cni/bin cni-plugins-linux-amd64-v1.5.0.tgz
```
### 6️⃣ Install `kubeadm`, `kubelet`, and `kubectl`

```shell
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl gpg

# Add the Kubernetes apt repository (v1.30)
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list

sudo apt-get update
sudo apt-get install -y kubelet=1.30.0-1.1 kubeadm=1.30.0-1.1 kubectl=1.30.0-1.1 --allow-downgrades --allow-change-held-packages
sudo apt-mark hold kubelet kubeadm kubectl
```

✅ Check the installed versions:

```shell
kubeadm version
kubelet --version
kubectl version --client
```
### 7️⃣ Initialize the Control Plane

```shell
sudo kubeadm init \
  --pod-network-cidr=192.168.0.0/16 \
  --apiserver-advertise-address=$(hostname -I | awk '{print $1}') \
  --node-name "$(hostname)"
```

📌 Copy the `kubeadm join` command printed at the end of the output; you will need it to join the worker nodes.
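The `--apiserver-advertise-address` flag above is filled in from the node's first reported IP. You can preview which address the command substitution will use before running `kubeadm init` (a sketch; on a multi-homed host, pick the interface you actually want instead):

```shell
# First IP reported by the host; this is what $(hostname -I | awk '{print $1}')
# expands to in the kubeadm init command.
hostname -I | awk '{print $1}'
```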
### 8️⃣ Configure `kubectl` for the Master Node

```shell
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
### 9️⃣ Install a CNI Plugin (Calico for networking)

```shell
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.28.0/manifests/tigera-operator.yaml
curl -O https://raw.githubusercontent.com/projectcalico/calico/v3.28.0/manifests/custom-resources.yaml
kubectl apply -f custom-resources.yaml
```

The default IP pool in `custom-resources.yaml` (192.168.0.0/16) matches the `--pod-network-cidr` used during `kubeadm init` above; if you chose a different CIDR, edit the file before applying it.
## 🛠️ Worker Node Setup

🔹 Run these commands on each worker node:

### 1️⃣ Disable Swap

```shell
sudo swapoff -a
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
```

### 2️⃣ Update Kernel Parameters (same as on the master node)

### 3️⃣ Install the Container Runtime (same as on the master node)

### 4️⃣ Install `runc` (same as on the master node)

### 5️⃣ Install CNI Plugins (same as on the master node)

### 6️⃣ Install `kubeadm`, `kubelet`, and `kubectl` (same as on the master node)
### 7️⃣ Join Worker Nodes to the Cluster

Run the `kubeadm join` command you copied from the master node. If you lost it, regenerate it on the master node:

```shell
kubeadm token create --print-join-command
```

Then execute the command on each worker node:

```shell
sudo kubeadm join <MASTER_IP>:6443 --token <TOKEN> --discovery-token-ca-cert-hash sha256:<HASH>
```
## 🎯 Verify the Cluster

On the master node, check that all nodes have joined successfully:

```shell
kubectl get nodes
```

You should see output similar to:

```
NAME       STATUS   ROLES           AGE   VERSION
master     Ready    control-plane   10m   v1.30.0
worker-1   Ready    <none>          5m    v1.30.0
worker-2   Ready    <none>          5m    v1.30.0
```
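Worker nodes show `<none>` under ROLES by default. If you want them labeled, here is a sketch that prints the labeling commands (the node names `worker-1`/`worker-2` come from the example output above; substitute your own):

```shell
# Print (not run) one label command per worker; review the output,
# then run the commands on the master node.
for node in worker-1 worker-2; do
  echo "kubectl label node $node node-role.kubernetes.io/worker="
done
```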
✅ Congratulations! Your Kubernetes cluster is up and running! 🚀🎉