How to run a Kubernetes cluster on my laptop
In this tutorial, we are going to deploy a single-node vanilla Kubernetes cluster on a virtual server on our laptop.
Prerequisites
We need a Linux, Mac (Intel or Apple Silicon), or Windows laptop with at least 6GB of memory for this tutorial.
Outline of steps
- Step-1: Install Multipass
- Step-2: Create a virtual server
- Step-3: Enable IP routing and forwarding on the node
- Step-4: Install container runtime
- Step-5: Configure cgroups
- Step-6: Install Kubernetes
- Step-7: Interacting with the cluster via kubectl
- Step-8: Install CNI plugin
- Step-9: Run an Nginx Pod on Kubernetes
Step-1: Install Multipass
Multipass is a handy tool for running Ubuntu virtual servers on Linux, Mac, or Windows. Install Multipass following the installation instructions for your platform.
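For example, on Ubuntu Linux Multipass is distributed as a snap, and on macOS it is available through Homebrew (assuming snap or Homebrew is already set up on your laptop; on Windows, use the installer from the Multipass website):
$ sudo snap install multipass          # Ubuntu or other snap-enabled Linux
$ brew install --cask multipass        # macOS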
Step-2: Create a virtual server
Launch a virtual server running Ubuntu 22.04 LTS.
$ multipass launch -d 8G -m 2G -c 2 -n k8s30 22.04
Parameters to set the specs for the virtual machine:
- -d - Disk size in GB.
- -m - Memory size in GB.
- -c - Number of virtual CPU cores assigned to the virtual machine.
- -n - Name of the virtual machine.
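Once the launch completes, we can confirm the virtual server is running before logging in (the IP address shown will be different on your machine):
$ multipass info k8s30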
Log in to the virtual server.
$ multipass shell k8s30
This will open a bash shell to the newly created virtual server.
This virtual server is the only node in our Kubernetes cluster. We will install the Kubernetes control plane on this node, and all Pods in our cluster will also run on it, making this a single-node Kubernetes cluster.
Step-3: Enable IP routing and forwarding on the node
A Pod deployed on the node is assigned a unique IP address. To route traffic to the Pod IP address, the Kubernetes CNI uses iptables with network bridging.
So, we must load the overlay and br_netfilter kernel modules and set the kernel parameters that enable IPv4 forwarding and apply iptables rules to bridged traffic on the node.
Add overlay and br_netfilter to the list of kernel modules to be loaded at boot:
$ sudo tee -a /etc/modules-load.d/k8s.conf > /dev/null <<EOT
overlay
br_netfilter
EOT
Load the modules:
$ sudo modprobe overlay
$ sudo modprobe br_netfilter
Verify the modules are loaded:
$ lsmod | grep overlay
overlay 155648 0
$ lsmod | grep br_netfilter
br_netfilter 32768 0
bridge 352256 1 br_netfilter
Configure the kernel parameters:
$ sudo tee /etc/sysctl.d/k8s.conf > /dev/null <<EOT
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOT
Apply the new configuration:
$ sudo sysctl --system
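To confirm the settings took effect, read the parameters back; each of them should print 1:
$ sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables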
Step-4: Install container runtime
containerd is the container runtime we are going to use for this cluster.
The containerd project releases binaries for both x86 and ARM architectures.
Download the latest containerd version (1.7.17 at the time of this writing). Replace <arch> with amd64 or arm64 depending on the CPU architecture of your laptop.
$ wget https://github.com/containerd/containerd/releases/download/v1.7.17/containerd-1.7.17-linux-<arch>.tar.gz
If you are using a newer version of containerd, replace v1.7.17 with the corresponding version.
Extract the downloaded archive into /usr/local:
$ sudo tar Cxzvf /usr/local containerd-1.7.17-linux-<arch>.tar.gz
Run containerd as a service:
$ wget https://raw.githubusercontent.com/containerd/containerd/main/containerd.service
$ sudo mkdir -p /usr/local/lib/systemd/system
$ sudo cp containerd.service /usr/local/lib/systemd/system
$ sudo systemctl daemon-reload
$ sudo systemctl enable --now containerd
Check the status of the containerd service:
$ sudo systemctl status containerd
The containerd service must be in the active state.
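The containerd release tarball also includes the ctr client, so as an optional sanity check we can query the daemon (the exact version output will vary):
$ sudo ctr version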
containerd depends on runc for spawning containers. Let’s install runc now.
The runc binaries are available on the project's GitHub releases page.
Install the corresponding binary by replacing <arch> with amd64 or arm64 according to your CPU architecture.
$ wget https://github.com/opencontainers/runc/releases/download/v1.1.12/runc.<arch>
$ sudo install -m 755 runc.<arch> /usr/local/sbin/runc
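Optionally, confirm runc is installed by printing its version; it should report 1.1.12 or whichever version you downloaded:
$ runc --version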
Step-5: Configure cgroups
When multiple containers are running on the same host, the container runtime must limit the amount of CPU and memory consumed by individual containers. Without such control, one container can drain all the computing resources on the host.
containerd uses Linux control groups, or cgroups, to impose this control. containerd interfaces with cgroups via a driver. Since we are using systemd as the init system on our host, we must configure containerd to use the systemd cgroup driver.
The configuration file of containerd, /etc/containerd/config.toml, is not created by default. So, let's create it.
$ sudo mkdir /etc/containerd
$ containerd config default | sudo tee /etc/containerd/config.toml > /dev/null
In config.toml, change SystemdCgroup = false to SystemdCgroup = true.
$ sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml
Restart containerd for the configuration to take effect.
$ sudo systemctl restart containerd.service
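To double-check the change, grep the configuration file; the line should now read SystemdCgroup = true:
$ grep SystemdCgroup /etc/containerd/config.toml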
Step-6: Install Kubernetes
Install kubeadm, kubelet, and kubectl on the host.
kubeadm is a tool for installing and managing a Kubernetes cluster.
kubelet is an agent that runs on each node in the Kubernetes cluster.
kubectl is a CLI tool for interacting with a Kubernetes cluster. We do not have to install kubectl on the same host where we install Kubernetes, but for this cluster let's install kubectl on the same host as well.
The versions of these three packages must match the Kubernetes version. We choose 1.30, which is the latest version of Kubernetes at the time of writing.
$ sudo apt-get update
$ sudo apt-get install -y apt-transport-https ca-certificates curl gpg
$ curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
$ echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
$ sudo apt-get update
$ sudo apt-get install -y kubelet kubeadm kubectl
$ sudo apt-mark hold kubelet kubeadm kubectl
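As a quick check that the 1.30 packages are installed and pinned, print their versions (the exact patch version may differ):
$ kubeadm version
$ kubelet --version
$ kubectl version --client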
Create kubeadm-init.yml.
$ tee kubeadm-init.yml > /dev/null <<EOT
kind: ClusterConfiguration
apiVersion: kubeadm.k8s.io/v1beta3
kubernetesVersion: v1.30.0
networking:
  serviceSubnet: "10.96.0.0/16"
  podSubnet: "192.168.0.0/16"
  dnsDomain: "cluster.local"
---
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
cgroupDriver: systemd
EOT
The kubeadm-init.yml sets the following key parameters for the cluster:
- serviceSubnet - The subnet used for assigning IP addresses to Kubernetes Services.
- podSubnet - The subnet for assigning IP addresses to the Pods. These two subnets must not conflict with the host IP address.
- cgroupDriver - As we configured the systemd cgroup driver for containerd, we must also configure kubelet to use the systemd cgroup driver.
Initialize the cluster with kubeadm using kubeadm-init.yml.
$ sudo kubeadm init --config kubeadm-init.yml
It will take a few minutes for kubeadm to download and run the container images from the Internet.
Step-7: Interacting with the cluster via kubectl
kubectl is the CLI tool for interacting with the cluster.
For the parameters to connect and authenticate with the cluster, kubectl refers to the config file at $HOME/.kube/config. This config file is created by kubeadm at /etc/kubernetes/admin.conf during the cluster deployment.
So, let's copy it to $HOME/.kube/config.
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
Update the file permissions so we can read this file without sudo.
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
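With the kubeconfig in place, kubectl can reach the API server. For example, listing the nodes should show our single node; it will report NotReady until we install a CNI plugin in Step-8:
$ kubectl get nodes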
Use kubectl to list all Pods running in the cluster.
$ kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-76f75df574-n9zr5 0/1 Pending 0 5m25s
kube-system coredns-76f75df574-ssqv4 0/1 Pending 0 5m25s
kube-system etcd-k8s30 1/1 Running 1 5m40s
kube-system kube-apiserver-k8s30 1/1 Running 0 5m40s
kube-system kube-controller-manager-k8s30 1/1 Running 0 5m40s
kube-system kube-proxy-2ttxz 1/1 Running 0 5m25s
kube-system kube-scheduler-k8s30 1/1 Running 1 5m40s
Since we have not yet deployed any Workloads, the Pods you see here belong to the Kubernetes control plane.
We have two coredns Pods in Pending status. Let's print the Pod Events to see why they are pending.
$ kubectl describe pods <coredns-pod-name> -n kube-system | grep Event -A 10
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 51s default-scheduler 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
The Pods have failed to schedule due to untolerated taints on the node.
Check the taints on the node. If you are using a different hostname, replace k8s30 with your hostname.
$ kubectl get nodes k8s30 -o yaml | grep taints -A 8
taints:
- effect: NoSchedule
key: node-role.kubernetes.io/control-plane
- effect: NoSchedule
key: node.kubernetes.io/not-ready
status:
addresses:
- address: 192.168.64.27
type: InternalIP
Remove taints.
$ kubectl taint node k8s30 node-role.kubernetes.io/control-plane:NoSchedule-
$ kubectl taint node k8s30 node.kubernetes.io/not-ready:NoSchedule-
Check the Pod status.
$ kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-76f75df574-gpl5f 0/1 ContainerCreating 0 17m
kube-system coredns-76f75df574-nd4gp 0/1 ContainerCreating 0 17m
kube-system etcd-k8s30 1/1 Running 0 17m
kube-system kube-apiserver-k8s30 1/1 Running 0 17m
kube-system kube-controller-manager-k8s30 1/1 Running 0 17m
kube-system kube-proxy-v4ffg 1/1 Running 0 17m
kube-system kube-scheduler-k8s30 1/1 Running 0 17m
The coredns Pods are still not running, so let's check the details again.
$ kubectl describe pod <coredns-pod-name> -n kube-system | grep Event -A 10
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 4m52s (x4 over 20m) default-scheduler 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
Normal Scheduled 3m7s default-scheduler Successfully assigned kube-system/coredns-76f75df574-gpl5f to k8s30
Warning FailedMount 2m35s (x7 over 3m7s) kubelet MountVolume.SetUp failed for volume "config-volume" : object "kube-system"/"coredns" not registered
Warning NetworkNotReady 2m33s (x18 over 3m7s) kubelet network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
coredns is waiting for a CNI plugin to be available. So let's install a CNI plugin.
Step-8: Install CNI plugin
Kubernetes depends on CNI plugins to implement network connectivity between Pods. There are several open-source network plugins, but we'll use Calico for this cluster.
The Calico project provides manifests for installing the Calico CNI on Kubernetes.
Install the Tigera Calico operator.
$ kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.28.0/manifests/tigera-operator.yaml
Install Calico.
$ kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.28.0/manifests/custom-resources.yaml
Wait a few minutes for the containers to start, then check the Pod status.
$ kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
calico-apiserver calico-apiserver-7bff98dfc7-lwzhv 1/1 Running 0 15m
calico-apiserver calico-apiserver-7bff98dfc7-s7sss 1/1 Running 0 15m
calico-system calico-kube-controllers-6f8988b47c-7sp45 1/1 Running 0 18m
calico-system calico-node-6q8t2 1/1 Running 0 18m
calico-system calico-typha-6fcfcb6b56-q4m9p 1/1 Running 0 18m
calico-system csi-node-driver-ttgjt 2/2 Running 0 18m
kube-system coredns-76f75df574-gpl5f 1/1 Running 0 68m
kube-system coredns-76f75df574-nd4gp 1/1 Running 0 68m
kube-system etcd-k8s30 1/1 Running 0 68m
kube-system kube-apiserver-k8s30 1/1 Running 0 68m
kube-system kube-controller-manager-k8s30 1/1 Running 0 68m
kube-system kube-proxy-v4ffg 1/1 Running 0 68m
kube-system kube-scheduler-k8s30 1/1 Running 0 68m
tigera-operator tigera-operator-7f8cd97876-gkdkr 1/1 Running 0 19m
All Pods are running now, which means our cluster is ready to accept Workloads.
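The node should also report Ready now that the CNI plugin is installed:
$ kubectl get nodes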
Step-9: Run an Nginx Pod on Kubernetes
Let’s run an Nginx web server on our Kubernetes cluster.
$ kubectl run --image nginx nginx
Wait a few minutes for Kubernetes to download the image and start the container.
Check the Pod status:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx 1/1 Running 0 83s
Set up port forwarding from the host to the Nginx Pod.
$ kubectl port-forward pod/nginx --address 0.0.0.0 9000:80 &
We use & to run the port-forward command in the background and get the shell prompt back.
Test the Nginx web server with curl.
$ curl http://127.0.0.1:9000
You’ll get the Nginx home page as the response.
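If you prefer a compact check, grep the response for the page title; it should print the Welcome to nginx! title line:
$ curl -s http://127.0.0.1:9000 | grep '<title>'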
To stop port forwarding, use the fg command to bring the most recent background job to the foreground.
$ fg
Press Ctrl+C to stop the running command.
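When you are done testing, you can remove the Nginx Pod:
$ kubectl delete pod nginx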
Wrap up
In this tutorial, we deployed a vanilla Kubernetes cluster on our laptop.
Vanilla Kubernetes is what you get as a binary release from the open-source Kubernetes project.
There are several resource-optimized Kubernetes distributions for laptops, but by installing vanilla Kubernetes we get to understand more of the Kubernetes internals.
Do not install vanilla Kubernetes directly on the host OS, as a breaking change could leave the system in a bad state. By working on a virtual server, we can delete the virtual server and start over at any time.
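For example, if you want to discard this cluster and start from scratch, the whole virtual server can be removed with Multipass (run on the laptop, not inside the VM):
$ multipass delete --purge k8s30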
If you feel that installing vanilla Kubernetes is too much work, you can quickly install a Kubernetes cluster on your laptop with Minikube.
Minikube installs a minified version of Kubernetes in a Docker container. But Minikube is a capable tool that can even run multiple Kubernetes clusters on your laptop.
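For comparison, a minimal Minikube setup is typically a single command (assuming Docker is already installed on the laptop):
$ minikube start --driver=docker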