Raspberry Pi K3S Kubernetes Multi Master High Availability Cluster with MySQL

In my previous blog, I showed how to install a Kubernetes cluster on Raspberry Pis. It worked very well until recently, when one of the SD cards failed. This was due to the heavy read/write load on the cards from etcd and logging.

In this blog, you'll see how to set up a Rancher K3s Kubernetes cluster with an external MySQL/PostgreSQL database.

My Setup

  • Raspberry Pi 4 8 GB x 6
  • Raspberry Pi PoE HAT x 6
  • Ubuntu Server 21.04
  • PoE switch (Gigabit preferred) x 1
  • External MySQL server running on a QNAP TS431K system
  • SD card, Class 10, 128 GB (32 GB is enough) x 6
  • Total cores = 24, total memory = 48 GB, total disk = 768 GB
  • Persistent storage: 2 TB NFS share on the QNAP TS431K system
  • Management host with Ansible and Python 3

To set up the cluster, use the PyratLabs Ansible role to install and configure an HA cluster with MySQL. You can run the MySQL server on any external host. I use my QNAP for many things, and it comes with a MySQL server pre-installed. Running MySQL on a highly available, RAID-1 configured QNAP NAS is the best option I've got!
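K3s needs a database and a user it can reach on the NAS. As a rough sketch (the database name `k3s`, user `k3s`, and `CHANGE_ME` password are placeholders, and running the file assumes a MySQL client on the management host), the grants can be prepared like this:

```shell
# Write the SQL that creates a dedicated database and user for k3s.
# The 'k3s' names and CHANGE_ME password are placeholders -- adjust to your setup.
cat > k3s-db.sql <<'SQL'
CREATE DATABASE IF NOT EXISTS k3s;
CREATE USER IF NOT EXISTS 'k3s'@'%' IDENTIFIED BY 'CHANGE_ME';
GRANT ALL PRIVILEGES ON k3s.* TO 'k3s'@'%';
FLUSH PRIVILEGES;
SQL

# Then run it against the MySQL server on the NAS, e.g.:
# mysql -h <NAS_IP> -u root -p < k3s-db.sql
```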


The two files that need to be edited are shown below.



- name: Build a cluster with HA control plane
  hosts: k3s_cluster
  vars:
    k3s_become_for_all: true
    k3s_etcd_datastore: false  # using the external MySQL datastore, not embedded etcd
    k3s_use_experimental: true  # Note this is required for k3s < v1.19.5+k3s1
    k3s_server:
      advertise-address: "{{ ansible_eth0.ipv4.address }}"
      datastore-endpoint: "mysql://USERNAME:PASSWORD@tcp(MYSQL_HOST:3306)/DATABASE"
      write-kubeconfig-mode: 644
      node-external-ip: "{{ ansible_eth0.ipv4.address }}"
      # cluster-cidr:
      # flannel-backend: 'none'  # This needs to be in quotes
      disable:
        - traefik
        - servicelb
    k3s_agent:
      node-ip: "{{ ansible_eth0.ipv4.address }}"
      node-external-ip: "{{ ansible_eth0.ipv4.address }}"
  roles:
    - role: xanmanning.k3s

In the above file, datastore-endpoint points to the MySQL server running on the QNAP; replace the username, password, and host according to your setup. I've disabled traefik and servicelb, replacing them with nginx and MetalLB respectively.


k3s_cluster:
  hosts:
    NODE1:
      ansible_user: ubuntu
      ansible_python_interpreter: /usr/bin/python3
      k3s_control_node: true
    NODE2:
      ansible_user: ubuntu
      ansible_python_interpreter: /usr/bin/python3
      k3s_control_node: true
    NODE3:
      ansible_user: ubuntu
      ansible_python_interpreter: /usr/bin/python3
      k3s_control_node: true
    NODE4:
      ansible_user: ubuntu
      ansible_python_interpreter: /usr/bin/python3
    NODE5:
      ansible_user: ubuntu
      ansible_python_interpreter: /usr/bin/python3
    NODE6:
      ansible_user: ubuntu
      ansible_python_interpreter: /usr/bin/python3

(NODE1 through NODE6 are placeholders; use your hosts' names or IP addresses.)

The first three nodes are master nodes, while the remaining three are worker nodes.

To install the K3S cluster and configure as per above configuration run the following command.

/home/naanu/ansible-role-k3s# ansible-playbook -i inventory.yml ha.yaml

The k3s database in MySQL, containing the kine table

Install kubectl on the management host. I'm using Oh My Zsh https://github.com/ohmyzsh/ohmyzsh with the kubectl alias plugin enabled. Below are the things to do after the initial cluster configuration.

  1. MetalLB – Load balancer


kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.10.2/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.10.2/manifests/metallb.yaml
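MetalLB still needs to know which LAN addresses it may hand out. In v0.10 this is done with a ConfigMap; the address range below is a placeholder for a free range on your network:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250   # placeholder: a free range on your LAN
```

Apply it with kubectl apply -f. On v0.10 you may also need to create the memberlist secret: kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"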

  2. Nginx – Ingress controller


helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm show values ingress-nginx/ingress-nginx > ingress-nginx.yaml

Edit ingress-nginx.yaml to set:
hostNetwork: true
hostPort enabled: true
kind: Deployment

kubectl create ns ingress-nginx

helm install ingress ingress-nginx/ingress-nginx -n ingress-nginx --values ingress-nginx.yaml
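The edits to ingress-nginx.yaml boil down to this fragment of the chart's controller values (a sketch; everything else stays at its defaults):

```yaml
controller:
  kind: Deployment       # instead of the default DaemonSet
  hostNetwork: true      # bind directly on the node's network
  hostPort:
    enabled: true        # expose ports 80/443 on each node running the controller
```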

kgp -A    # Oh My Zsh alias for 'kubectl get pods -A'

  3. Cert-Manager – Certificate manager using Let's Encrypt


kubectl apply -f https://github.com/jetstack/cert-manager/releases/latest/download/cert-manager.yaml

Add a Let's Encrypt ClusterIssuer:
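The letsencrypt.yml file isn't shown in this post; a minimal ClusterIssuer sketch (the issuer name, e-mail, and HTTP-01/nginx solver are assumptions) could look like this:

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: you@example.com          # placeholder: e-mail for expiry notices
    privateKeySecretRef:
      name: letsencrypt-prod        # secret that stores the ACME account key
    solvers:
    - http01:
        ingress:
          class: nginx              # matches the nginx ingress controller above
```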

k apply -f letsencrypt.yml

If it throws an error like "Error from server (InternalError): error when creating "letsencrypt.yml": Internal error occurred: failed calling webhook "webhook.cert-manager.io": Post "https://cert-manager-webhook.cert-manager.svc:443/mutate?timeout=10s": context deadline exceeded"

Restart the k3s service on all nodes:
ansible -i inventory.yml k3s_cluster -b -u ubuntu -a "systemctl restart k3s"

k apply -f letsencrypt.yml

  4. Install the NFS client provisioner and make it the default storage class

Clone https://github.com/justmeandopensource/kubernetes and go to kubernetes/yamls/nfs-provisioner:

git clone https://github.com/justmeandopensource/kubernetes
cd kubernetes/yamls/nfs-provisioner

k apply -f rbac.yaml
k apply -f default-sc.yaml
k apply -f deployment.yaml
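The default-sc.yaml from that repo is what marks the NFS class as the default storage class; roughly (the class name is from the cloned repo, and the provisioner string must match the PROVISIONER_NAME set in deployment.yaml):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"   # make this the default class
provisioner: example.com/nfs    # must match PROVISIONER_NAME in deployment.yaml
parameters:
  archiveOnDelete: "false"
```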

If the deployment throws an error like ErrImagePull, change the nfs-provisioner image in deployment.yaml to one that has an arm64 build.


  5. Install Nextcloud

helm repo add nextcloud https://nextcloud.github.io/helm/
helm repo update
k create ns nextcloud
helm install nextcloud nextcloud/nextcloud --namespace nextcloud --values nextcloud.values.yml
k apply -f ingress.yml
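The ingress.yml applied above isn't included in the post; a sketch that ties Nextcloud to the nginx controller and the Let's Encrypt issuer (the hostname, issuer name, and service port are assumptions) could look like this:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nextcloud
  namespace: nextcloud
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod   # assumed issuer name
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - nextcloud.example.com          # placeholder hostname
    secretName: nextcloud-tls
  rules:
  - host: nextcloud.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nextcloud
            port:
              number: 8080           # the chart's default service port
```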

If it throws an error, disable the liveness and other probes in nextcloud.values.yml.

** Use the commands below for migration **

  1. Copy all the files to the new PVC.
  2. Log in to the container and install sudo if it is not available.
  3. Run the following command as user www-data:

sudo -u www-data php -d memory_limit=-1 occ files:scan --all --verbose

  6. Docker private registry

helm repo add twuni https://helm.twun.io
helm repo update
k apply -f 4-pvc-nfs.yaml
helm install docker-registry twuni/docker-registry -f config-values.yaml
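The config-values.yaml isn't shown either; a minimal sketch that pins the registry's storage to the NFS-backed PVC created above (the claim name is an assumption that has to match 4-pvc-nfs.yaml):

```yaml
persistence:
  enabled: true
  existingClaim: docker-registry-pvc   # must match the PVC in 4-pvc-nfs.yaml
service:
  type: LoadBalancer                   # let MetalLB hand the registry an IP
```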

6a. Registry credentials

kubectl create secret docker-registry regcred --docker-server=http://registry.vinaybabu.in/v2/ --docker-username=babvin --docker-password=Rundeck21
k patch serviceaccount default -p '{"imagePullSecrets": [{"name": "regcred"}]}'

  7. MariaDB

k apply -f mariadb_deploy.yml -f secret.yml -f mariadb_svc_internal.yml
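mariadb_svc_internal.yml is also not shown in the post; a plausible internal Service sketch (the name, labels, and port are assumptions) could be:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mariadb-internal
spec:
  type: ClusterIP        # reachable only from inside the cluster
  selector:
    app: mariadb         # must match the pod labels in mariadb_deploy.yml
  ports:
  - port: 3306
    targetPort: 3306
```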

After completing all the above steps, you have a self-hosted cluster with a load balancer, an ingress controller, an NFS provisioner as the default storage class for persistent data, Nextcloud for sharing files, and a private Docker registry.

If you enjoyed this post, I’d be very grateful if you’d help it spread by emailing it to a friend, or sharing it on Twitter or Facebook. Thank you!

What am I missing here? Let me know in the comments and I’ll add it in! OR tweet it to me @babvin
