Kubernetes cluster

How to install a Kubernetes cluster on the Raspberry Pi 4 with 8 GB RAM and the new Raspbian Bullseye 64-bit, with an SSD drive for the cluster data.

The fun part is building the cluster. Here are some pictures of my cluster build.

4x Raspberry Pi 4 (8 GB), 4x 64 GB USB 3 flash drives and 4x power adapters. I later replaced one of the 64 GB flash drives with a 256 GB SSD drive on the master node to manage the data storage for the cluster.
2x Cluster case.
And here the fun part: building the Raspberry Pi cluster. Hands-on, very nice to build this together.
Here is my Raspberry Pi Kubernetes cluster with 1 master and 3 nodes. You also need a small switch; I have a TP-Link switch with 10 ports, but a small 5-port switch will also be fine.

Image the drives

To make it simple, I gave my Raspberry Pis fixed IP addresses in the DHCP server on my pfSense. This can also be done in your Wi-Fi router.

So image Raspberry Pi OS Lite 64-bit to the USB flash drives and to the SSD drive for the master.
In the imager configuration, define the hostname, the password and the locale settings.

Do this for the master with the SSD drive and for the nodes with the 64 GB USB flash drives. USB drives are more reliable and faster than SD cards.

The SSD drive for the master is not only used for the OS but also for the NFS server we will install on the master. So the SSD drive will host the data for the Kubernetes cluster.

Ready to power on

Plug in the Ethernet connections, the drives and the power to start the cluster for the first time.

Now we are ready to connect to our cluster with SSH from our computer.

Remember to give the Raspberry Pis fixed IP addresses on your router or on pfSense (DHCP server), and reboot them so your cluster master and nodes have the right IPs.

First login with SSH

Log in with SSH to the master and to each node and update the OS.

sudo apt update
sudo apt upgrade -y

It is also a good idea to check and update the firmware to the latest version.

sudo rpi-update
sudo rpi-eeprom-update -a

We must also add "cgroup_memory=1 cgroup_enable=memory" to the /boot/cmdline.txt file.

Kubernetes needs these settings to manage memory with cgroups.

They must be added at the end of the first line of cmdline.txt, with only one space between rootwait and cgroup_memory.

I use "vi" to make the change, but you can also use "nano" if you don't know the vi text editor.

sudo vi /boot/cmdline.txt
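
After the edit, the single line in /boot/cmdline.txt will look something like this (the PARTUUID is only a placeholder; yours will differ):

console=serial0,115200 console=tty1 root=PARTUUID=xxxxxxxx-02 rootfstype=ext4 fsck.repair=yes rootwait cgroup_memory=1 cgroup_enable=memory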

Then reboot so all the updates are in place for the installation.
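
sudo reboot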

NFS server installation

Next we install our NFS file server on the master, so we can keep all data from the Kubernetes cluster in one place and make it persistent.

sudo apt-get install nfs-kernel-server -y

sudo mkdir /mnt/nfsshare
sudo chown -R pi:pi /mnt/nfsshare
sudo find /mnt/nfsshare/ -type d -exec chmod 755 {} \;
sudo find /mnt/nfsshare/ -type f -exec chmod 644 {} \;

This only needs to be installed on the master; with the new Raspbian Bullseye 64-bit, the nodes already have the drivers needed for the NFS client.

We also have to export our nfsshare so that our Kubernetes cluster can access it.

sudo vi /etc/exports

Add this line at the end of the file and save.

/mnt/nfsshare *(rw,all_squash,insecure,async,no_subtree_check,anonuid=1000,anongid=1000)

Now we have to activate the export.

sudo exportfs -ra
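
You can verify the export, for example with:

sudo exportfs -v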

Install Kubernetes Cluster

We now install the Kubernetes cluster (k3s) on our master, without the local-storage provisioner, because we want our NFS server as the default persistent storage.

curl -sfL https://get.k3s.io | sh -s - --write-kubeconfig-mode 644 --disable local-storage

After the installation, check that our Kubernetes cluster is up and running with:

kubectl get nodes

Here you will only see your rpimaster. Later you will also see your nodes.
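
The output will look something like this (hostname, age and version are examples from my setup; yours will differ):

NAME        STATUS   ROLES                  AGE   VERSION
rpimaster   Ready    control-plane,master   2m    v1.21.7+k3s1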

Install git on the master

sudo apt install git

Install helm on the master

Helm is the command-line tool we use to install applications on our cluster.

curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3

chmod 700 get_helm.sh
sudo ./get_helm.sh
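
You can check the Helm installation with:

helm version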

Now that Helm is installed, we can use it to add the first chart repositories for our installation.

helm repo add stable https://charts.helm.sh/stable
helm repo add bitnami https://charts.bitnami.com/bitnami


helm repo update
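
You can check that both repositories were added with:

helm repo list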

Add nodes to our Kubernetes Cluster

sudo cat /var/lib/rancher/k3s/server/node-token

Here we get the token we need to add extra nodes to our Kubernetes cluster. You don't have to add nodes now; you can add as many as you like later, or run Kubernetes on the master alone.

K10fghteytsaec4df633e77d82tyedsyj3fdf81e::server:2007dabacef775c65e2bd0958aad1ebf

Remember to use YOUR token; you can't use my token ;O)

curl -sfL https://get.k3s.io | K3S_URL="https://192.168.0.220:6443" K3S_TOKEN="K10fghteytsaec4df633e77d82tyedsyj3fdf81e::server:2007dabacef775c65e2bd0958aad1ebf" K3S_NODE_NAME="rpinode1" sh -

Run this command on node rpinode1. Make a new one for rpinode2 and for every other node you add, changing K3S_NODE_NAME each time, as shown below.
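
For example, the command for rpinode2 (same token and master IP, only the node name changes):

curl -sfL https://get.k3s.io | K3S_URL="https://192.168.0.220:6443" K3S_TOKEN="K10fghteytsaec4df633e77d82tyedsyj3fdf81e::server:2007dabacef775c65e2bd0958aad1ebf" K3S_NODE_NAME="rpinode2" sh -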

You can check at any time on the master with

kubectl get nodes

And now you will see the nodes getting connected to the master.
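
With all three nodes joined, the output will look something like this (ages and version are just examples):

NAME        STATUS   ROLES                  AGE   VERSION
rpimaster   Ready    control-plane,master   25m   v1.21.7+k3s1
rpinode1    Ready    <none>                 6m    v1.21.7+k3s1
rpinode2    Ready    <none>                 4m    v1.21.7+k3s1
rpinode3    Ready    <none>                 2m    v1.21.7+k3s1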

Parameter to change

Kubectl can't always find the config file. I think the best way is this:

mkdir ~/.kube
chmod 777 ~/.kube


sudo scp [email protected]:/etc/rancher/k3s/k3s.yaml ~/.kube/config
sudo chmod 777 ~/.kube/config

So we copy the config file into /home/pi/.kube/config on the master and on all nodes. Maybe not the best way, but it works.

Edit the config file and change 127.0.0.1 to the IP of your master; in my case this is 192.168.0.220.

sudo vi ~/.kube/config

Change 127.0.0.1 -> 192.168.0.220
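
If you prefer a one-liner instead of editing by hand (with the master IP from my setup; use your own):

sed -i 's/127.0.0.1/192.168.0.220/' ~/.kube/config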

Back to our storage

We have to add our NFS server to the Kubernetes cluster.

We deploy the NFS subdir external provisioner to our Kubernetes cluster. Create this nfs.yaml file:

vi nfs.yaml

apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: nfs
  namespace: default
spec:
  chart: nfs-subdir-external-provisioner
  repo: https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner
  targetNamespace: default
  set:
    nfs.server: 192.168.0.220
    nfs.path: /mnt/nfsshare
    storageClass.name: nfs
    
Activate the NFS provisioner in k3s

Copy the nfs.yaml file to the k3s server.

sudo cp ./nfs.yaml /var/lib/rancher/k3s/server/manifests/nfs.yaml

sudo reboot

k3s automatically deploys every manifest in this folder, so after the reboot the NFS provisioner chart is installed.

After the reboot you can see the storage class with this command:

kubectl get storageclasses

Set the NFS storage as default storage for the Kubernetes Cluster.

kubectl patch storageclass nfs -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'

Now check the NFS storage again:

kubectl get storageclasses

Now you can see that nfs (default) is the primary storage class for the cluster.
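
To test the default storage class you can create a small PersistentVolumeClaim. This is only a minimal sketch; the name test-pvc is just an example:

vi test-pvc.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi

kubectl apply -f test-pvc.yaml
kubectl get pvc

The claim should change to Bound, and a new subdirectory for it appears under /mnt/nfsshare on the master.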

Installing the MetalLB load balancer

Here you have to edit the IP range to match your network: 192.168.0.220 is my master and 192.168.0.221 -> 192.168.0.223 are my nodes.

With the load balancer it is possible to reach the installed applications on an IP address from this range, on the port I have defined in the installation.

vi metallb.yaml

configInline:
  address-pools:
   - name: default
     protocol: layer2
     addresses:
     - 192.168.0.220-192.168.0.223

First add the MetalLB chart repository, then install with the values file:

helm repo add metallb https://metallb.github.io/metallb
helm repo update

helm install metallb metallb/metallb -f metallb.yaml

Helm will install the load balancer on the Kubernetes cluster. Note that the configInline values format is for the older MetalLB charts; from chart version 0.13 on, address pools are configured with custom resources instead.
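
To see MetalLB working you can expose a small test deployment as a LoadBalancer service; nginx-test is just an example name:

kubectl create deployment nginx-test --image=nginx
kubectl expose deployment nginx-test --type=LoadBalancer --port=80
kubectl get services

The service should get an EXTERNAL-IP from the address pool defined above, and nginx is then reachable on that IP on port 80.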

Install Portainer GUI

It is nice to see something in a web GUI, so we will install Portainer to take a first look at our Kubernetes cluster.

helm repo add portainer https://portainer.github.io/k8s/

helm repo update

helm install --create-namespace -n portainer portainer portainer/portainer \
--set tls.force=true

Open Portainer with your browser

https://192.168.0.220:30779

So our Kubernetes cluster is up and running, and we are ready to install our applications.

Knud ;O)