Container technologies have gained huge popularity in the past few years. First there was Docker, and later came Kubernetes. Kubernetes, simply put, lets you build a cluster of servers that run containers. It decides where to schedule containers, keeps them highly available, manages related services like network routing, DNS resolution, and storage, and handles many more things. I recently bought four Raspberry Pi 4 boards (the 4GB model) to build a toy Kubernetes cluster at home. I'd actually had this idea for a long time, but memory was a bit limited on the older Pis, so the 4GB Pi 4 was a perfect choice for this setup. In this blog, I'll show you how to set up a Kubernetes cluster on a bunch of Raspberry Pis.
Initial preparation
For this setup, I chose to run Ubuntu Server 20.04 LTS instead of Raspbian because Ubuntu runs in full 64-bit mode. The Kubernetes version installed is 1.18, which was the latest at the time.
You might want to first boot the Pi with the default Raspbian OS to update its firmware, which helps with the overheating issue seen when the Pi 4 was first released last year. You can update the firmware with the following commands on Raspbian:
$ sudo apt update
$ sudo apt install rpi-eeprom
# checks whether update is available
$ sudo rpi-eeprom-update
$ sudo rpi-eeprom-update -a
$ sudo reboot
# verify the EEPROM is updated after reboot
$ sudo rpi-eeprom-update
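While you're still on Raspbian, you can also check the bootloader version and keep an eye on the SoC temperature. These are purely optional sanity checks:
# show the current bootloader/EEPROM version
$ vcgencmd bootloader_version
# show the SoC temperature
$ vcgencmd measure_temp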
Before starting the Kubernetes installation, let's make a few essential tweaks to the system. First, make sure the hostname and name resolution are properly configured. Here I used /etc/hosts; if you have an actual DNS server running, that works too.
# set a hostname
$ hostnamectl set-hostname pic<node-id>.sg-home.shawnliu.me
# edit /etc/hosts
$ vim /etc/hosts
192.168.10.11 pic01.sg-home.shawnliu.me pic01
192.168.10.12 pic02.sg-home.shawnliu.me pic02
192.168.10.13 pic03.sg-home.shawnliu.me pic03
192.168.10.14 pic04.sg-home.shawnliu.me pic04
Another thing is to enable some cgroup features which are disabled by default.
$ sudo sed -i 's/$/ cgroup_enable=memory cgroup_memory=1/' /boot/firmware/cmdline.txt
$ sudo reboot
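After the reboot, it's worth confirming that the extra flags actually made it onto the kernel command line. A quick check:
# the output should end with the cgroup flags we appended
$ cat /proc/cmdline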
Setting up the container runtime: CRI-O
Although Kubernetes manages containers, it doesn't actually run them by itself. Instead, it relies on a container runtime (e.g. Docker, CRI-O) to control the containers. Docker is probably the most popular choice here, but in this case I'll configure CRI-O instead, a lightweight runtime designed for Kubernetes.
First let’s prepare the network.
$ sudo modprobe br_netfilter
$ echo 'br_netfilter' | sudo tee /etc/modules-load.d/netfilter.conf
$ cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
$ sudo sysctl --system
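To double-check that the module is loaded and the sysctls are in effect, you can run something like the following; all three values should come back as 1:
# verify the br_netfilter module and the sysctl settings
$ lsmod | grep br_netfilter
$ sudo sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward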
Next we can install the CRI-O packages. In my case, when I was configuring the system (early August 2020), the latest CRI-O package in the main repo (the first one below) was 1.17, which had some issues running on ARM (see this GitHub issue). Therefore, I had to add another repo that contains the latest 1.18 build. You may not have to do so by the time you read this post.
$ . /etc/os-release
$ sudo sh -c "echo 'deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/x${NAME}_${VERSION_ID}/ /' > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list"
$ wget -nv https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable/x${NAME}_${VERSION_ID}/Release.key -O- | sudo apt-key add -
# add cri-o 1.18 repo for Ubuntu 20.04
$ sudo sh -c "echo 'deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/1.18/xUbuntu_20.04/ /' > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable:cri-o:1.18.list"
$ wget -nv https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable:cri-o:1.18/xUbuntu_20.04/Release.key -O- | sudo apt-key add -
$ sudo apt-get update
$ sudo apt-get install -y cri-o
$ sudo apt-mark hold cri-o
Now we can start CRI-O. Note that if you installed following the above lines, the system runc package will be installed instead of the one in the CRI-O repo. Therefore, we have to update the runtime_path in the config so that CRI-O can find runc.
$ sudo sed -i 's/runtime_path = ".*"/runtime_path = ""/' /etc/crio/crio.conf
$ sudo systemctl start crio
$ sudo systemctl enable crio
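A quick check that the runtime came up cleanly (again, just a sanity check, not strictly required):
# crio should show as active (running)
$ sudo systemctl status crio
$ sudo crio --version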
Setting up Kubernetes
Finally it’s the turn to install Kubernetes itself! We need to install the Kubernetes packages first on all nodes.
$ sudo apt-get update && sudo apt-get install -y apt-transport-https curl
$ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
$ cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
$ sudo apt-get update
$ sudo apt-get install -y kubelet kubeadm kubectl
$ sudo apt-mark hold kubelet kubeadm kubectl
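Before moving on, a quick sanity check that the tools are installed and on the expected version (1.18.x in my case):
$ kubeadm version
$ kubectl version --client
$ kubelet --version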
Then on the master node, initialize a Kubernetes cluster. I had to create a YAML file to pass in the configuration because the cgroupDriver setting cannot be set on the kubeadm command line, and yet I had to configure it for kubeadm to bootstrap the cluster successfully. After much struggle, I came up with this YAML file that does what I want. The podSubnet setting defines the network range for cluster-internal networking. Make sure it's different from your host network!
$ cat init.yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
networking:
  podSubnet: "192.168.20.0/24"
$ sudo kubeadm init --config init.yaml
In case something goes wrong, you need to reset the cluster before doing kubeadm init again.
$ sudo kubeadm reset --kubeconfig /etc/kubernetes/admin.conf
Take note of the join command shown in the output of kubeadm init. You'll need it later! If you lose it, or if you want to join a node after the initial token has expired, you can run the following command to obtain another one.
$ kubeadm token create --print-join-command
Now on the slave nodes, join the cluster.
$ sudo kubeadm join 192.168.10.11:6443 --token coyqq2.zyywem2trz5szn7w --discovery-token-ca-cert-hash sha256:7501929a45f1e25bbf3af5e33f3a96c79006967c5c6991eb8e6e705de1cd801a
Setting up kubectl
The Kubernetes cluster is up and running, but we need a way to control it. That's what kubectl does. We need to set it up so it knows how to talk to the cluster. The instructions are also shown in the output of kubeadm init.
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
In case you want to control the cluster from another node, just copy the config file to $HOME/.kube/config on that node.
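For example, assuming the default ubuntu user and that you can SSH to the master (pic01), copying the config from another machine could look like this:
# run on the other machine
$ mkdir -p $HOME/.kube
$ scp ubuntu@pic01.sg-home.shawnliu.me:.kube/config $HOME/.kube/config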
After configuring kubectl, you should be able to see all the nodes that have joined the cluster.
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
pic01.sg-home.shawnliu.me Ready master 15h v1.18.6
pic02.sg-home.shawnliu.me Ready <none> 5h41m v1.18.6
pic03.sg-home.shawnliu.me Ready <none> 5h36m v1.18.6
pic04.sg-home.shawnliu.me Ready <none> 5h17m v1.18.6
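You may notice the worker nodes show <none> under ROLES, since kubeadm only labels the master. If that bothers you, you can optionally add the label yourself, for example:
# purely cosmetic: give a node the "worker" role label
$ kubectl label node pic02.sg-home.shawnliu.me node-role.kubernetes.io/worker=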
Setting up Calico
The Kubernetes cluster is running, so are we done? Well, not quite. We still need to configure the networking between nodes so that your containers (more accurately, pods) can talk to each other. Calico is one of the projects that provide networking capability for Kubernetes. It does so with layer 3 routing using the BGP protocol. There are also other projects such as Flannel and Weave.
If you go and look at Calico's quickstart guide, you'll notice it tells you to deploy Calico with the Tigera operator. Unfortunately, the operator doesn't support ARM. Therefore, we take the traditional approach to set up Calico.
$ curl https://docs.projectcalico.org/manifests/calico.yaml -O
$ kubectl apply -f calico.yaml
# verify calico is running
$ kubectl -n kube-system get deploy
NAME READY UP-TO-DATE AVAILABLE AGE
calico-kube-controllers 1/1 1 1 8h
coredns 2/2 2 2 9h
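Calico also runs a calico-node pod on every node via a DaemonSet, so another way to check is to make sure one is running per node:
$ kubectl -n kube-system get ds calico-node
$ kubectl -n kube-system get pods -l k8s-app=calico-node -o wide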
To test Calico, we can spin up some test containers and do a ping test.
# start two containers
$ kubectl run pingtest1 --image=busybox -- sleep infinity
$ kubectl run pingtest2 --image=busybox -- sleep infinity
# check their IP within the cluster
$ kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pingtest1 1/1 Running 0 15m 192.168.20.193 pic04.sg-home.shawnliu.me <none> <none>
pingtest2 1/1 Running 0 66s 192.168.20.65 pic03.sg-home.shawnliu.me <none> <none>
# ping test
$ kubectl exec -it pingtest1 -- ping 192.168.20.65
PING 192.168.20.65 (192.168.20.65): 56 data bytes
64 bytes from 192.168.20.65: seq=0 ttl=62 time=0.777 ms
64 bytes from 192.168.20.65: seq=1 ttl=62 time=0.793 ms
64 bytes from 192.168.20.65: seq=2 ttl=62 time=0.756 ms
# route check
$ ip route get 192.168.20.193
192.168.20.193 via 192.168.10.14 dev tunl0 src 192.168.20.0 uid 1000
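Once you're satisfied, the test pods can be cleaned up:
$ kubectl delete pod pingtest1 pingtest2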
Now we finally have the entire cluster set up :)
Conclusion
In this post, I shared my experience building a Kubernetes cluster on Raspberry Pis. Overall it's not too difficult, but I ran into a few weird problems that were never mentioned in similar posts, so I decided to write this post to provide some documentation. I learned to appreciate the power and usefulness of Kubernetes at work, and it drove me to set up my own cluster at home. What drives you to do so? Let me know in the comments below!