Running Unifi Controller On Home K8s Cluster With MetalLB

In the last two posts (part 1 and part 2), I covered how I turned my desktop machine into a hypervisor using ESXi to serve 6 VMs for a Kubernetes cluster. In this post we are going to cover how to set up the Unifi Controller software on the cluster.

I run Unifi (Ubiquiti) networking kit throughout my house. It is industrial grade networking equipment. To run the equipment you have to run the Unifi Controller software. The controller software gives you a UI that you use to configure your switches, access points etc. Ubiquiti sell a Cloud Key that runs the software, but ~£130 feels like a lot of money to me for something that just runs a configuration UI. I used to run the controller software in Docker on my [Synology DiskStation 713+](https://global.download.synology.com/download/Document/Hardware/DataSheet/DiskStation/13-year/DS713+/enu/Synology_DS713_Plus_Data_Sheet_enu.pdf), but since I started using the Synology to run my home CCTV setup, the 8 year old hardware was really struggling to run both the video surveillance and Unifi Controller software. So I thought it would be great to move the Unifi Controller over to my newly created Kubernetes cluster.

To make this work we are going to have to solve two problems:

  1. We need to set up persistence in our cluster, as the Unifi Controller software saves the network config and that config needs to survive pod restarts.
  2. We need a way to expose the controller software to our network on a static IP. Just using one of the node IPs is not great: if that node goes down then all of our network kit can no longer talk to the controller. What we want is a virtual IP that always points at a healthy node in our cluster.

With the preamble out of the way, let's get into solving problem one: how are we going to set up persistence in our Kubernetes cluster? To do this we need to set up a persistent volume, which we can then claim and attach to a pod. For the persistent volume I thought the easiest thing to do was to set up an NFS server. That way the pod could launch on any node and simply attach to the NFS server to mount the volume.

To create the NFS server, I simply made another clone of a VM (as covered in part 1). Once cloned, I changed the IP to the next available sequential IP 192.168.1.206. To setup the NFS share I ran the following commands:

sudo apt update
sudo apt install nfs-kernel-server

sudo mkdir /var/nfs -p
sudo chown nobody:nogroup /var/nfs
sudo chmod 777 -R /var/nfs

This installs the NFS server onto Ubuntu and sets up a folder for the share (/var/nfs). I set the permissions on the share wide open so that anyone can write to it, which is good enough for a home cluster.

The last part is to expose this share out by editing the file /etc/exports and adding the following line to allow any machine on my network to read or write to the share:

/var/nfs	192.168.1.0/24(rw,sync,no_root_squash,no_subtree_check)

To make those changes take effect we need to restart the NFS server with sudo systemctl restart nfs-kernel-server.
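
As a quick sanity check (optional, and assuming the paths above), you can confirm the export is visible before involving Kubernetes at all:

# re-read /etc/exports without restarting the service (an alternative to the restart above)
sudo exportfs -ra

# list what this server is exporting; it should show /var/nfs for 192.168.1.0/24
showmount -e localhost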

With that in place we need to set up the persistent volume. A persistent volume is a way for you, as the administrator of the cluster, to make storage available for people using the cluster to mount on their pods. To set up the NFS share we just created as a persistent volume we can use the following yaml, applying it with kubectl apply -f volume.yaml:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nas
spec:
  capacity:
    storage: 20Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: slow
  mountOptions:
    - hard
    - nfsvers=4.1
  nfs:
    path: /var/nfs
    server: 192.168.1.206

This makes 20Gi available to use from the share (/var/nfs) that we just set up on our NFS server (192.168.1.206). To use the persistent volume with the Unifi software we need to claim it. To claim it we create a persistent volume claim with the following yaml, applying it with kubectl apply -f claim.yaml:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: unifi-claim
spec:
  storageClassName: slow
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi

We can check that the persistent volume has a bound claim by running:

kubectl get persistentvolumes
NAME   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                     STORAGECLASS   REASON   AGE
nas    20Gi       RWO            Recycle          Bound    kube-system/unifi-claim   slow                    23h

We can see that the persistent volume’s status is Bound and the claim is unifi-claim which was the claim we just created.
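
We can also check from the claim's side. Assuming the claim ended up in the namespace shown in the CLAIM column above (kube-system here), something like this should show it as Bound:

# the STATUS column should read Bound and the VOLUME column should be nas
kubectl get persistentvolumeclaims unifi-claim -n kube-system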

To set up the Unifi Controller software, I followed the instructions on this helm chart. To begin we need to add the helm repo:

helm repo add k8s-at-home https://k8s-at-home.com/charts/
helm repo update
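
As an optional sanity check, you can confirm the chart is visible in the freshly added repo before installing:

# search the k8s-at-home repo for the unifi chart
helm search repo k8s-at-home/unifi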

Before we install the chart we need to create a custom values.yaml in order to pass our persistent volume claim in for the unifi controller pod to use:

persistence:
  config:
    enabled: true
    type: pvc
    existingClaim: unifi-claim

With that in place we can install unifi using helm install unifi k8s-at-home/unifi -f values.yaml. Once installed I checked the status of the unifi pod and noticed that it hadn't started. The error from the pod was "bad option; for several filesystems (e.g. nfs, cifs) you might need a /sbin/mount.<type> helper program", which I realised is because we haven't set up the NFS client on our worker nodes. This is quite straightforward to fix by logging onto each of the worker nodes and running:

sudo apt update
sudo apt install nfs-common
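
If you want to confirm a worker can actually see the NFS server before letting the pod retry, a quick check (run on the worker node, using the server IP from earlier) is:

# showmount comes with nfs-common; this should list /var/nfs exported to 192.168.1.0/24
showmount -e 192.168.1.206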

With this run on every node our pod now starts. But we have another issue: I checked the NFS share and no files were being written. From running the Unifi software in Docker on my Synology I knew that you need to mount a volume at /var/lib/unifi in the container. From checking the pod definition I could see that volume mount was missing, and I could not see a way to provide it via the chart. As an aside, this is one of the things I dislike about Kubernetes. It can feel like death by configuration some of the time! Anyway, I dumped the unifi deployment into a yaml file using kubectl get deployment unifi -o yaml > unifi.yaml and then added another volume mount to it:


...[snip]...

      volumeMounts:
        - mountPath: /var/lib/unifi
          name: config
        - mountPath: /config
          name: config
...[snip]...

      volumes:
      - name: config
        persistentVolumeClaim:
          claimName: unifi-claim

With the extra volume mount in place at the /var/lib/unifi path, I applied the deployment to my cluster and voilà, files started appearing in the NFS share. Internally the Unifi Controller uses a Mongo database for state and it puts those files in the share.
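
A quick way to double check the mount from inside the running pod (assuming the deployment is still called unifi) is something like:

# confirm the PVC is mounted where the controller expects its data;
# the controller's config and database files should show up here
kubectl exec deployment/unifi -- ls /var/lib/unifi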

The second problem to solve is how to make the controller available on a static virtual IP. We want to do this for two reasons. Firstly, if we pick a random node's IP and use that for our controller, then if that node goes down for any reason our controller will be offline. Secondly, if we use a service of type NodePort then we only get high port numbers in the range 30000-32767; there is no (easy) way to use the real port numbers of the controller. This matters because the Unifi network equipment talks to the controller on a set of predefined ports and there is no way we can change that.

To solve our problem, enter MetalLB. MetalLB is an awesome piece of software that allows you to set up a Kubernetes load balancer and point it at a virtual IP. MetalLB takes care of advertising this virtual IP and routing it to an active node in your cluster. If a node goes down then no problem, as MetalLB will repoint the virtual IP at a different node. This nicely solves both of our problems above.

To install MetalLB:

kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.10.2/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.10.2/manifests/metallb.yaml
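
Once the manifests are applied, the MetalLB pods should come up in their own namespace. You can check with:

# expect a controller pod and one speaker pod per node, all Running
kubectl get pods -n metallb-system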

Once installed, to set up a virtual IP for MetalLB to use we can simply create the following config map with kubectl apply -f map.yaml:

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.61-192.168.1.61

Above I'm giving MetalLB an address pool containing the single IP 192.168.1.61. I'm doing that because that was the IP my old Unifi controller was on, so it lets me simply swap the controller out.

To make unifi use this IP we just have to change its service to a Kubernetes load balancer, by updating our values.yaml file to:

service:
  main:
    type: LoadBalancer
persistence:
  config:
    enabled: true
    type: pvc
    existingClaim: unifi-claim

Then we can apply it using helm upgrade --install unifi k8s-at-home/unifi -f values.yaml. Once we have done that we can see that the unifi service is now exposed on the external IP 192.168.1.61 by using kubectl get service unifi:

NAME    TYPE           CLUSTER-IP       EXTERNAL-IP
unifi   LoadBalancer   10.106.167.144   192.168.1.61

Using a load balancer exposes all of the container's ports on 192.168.1.61, which we can verify by hitting the UI at https://192.168.1.61:8443. It works! To complete the setup, I backed up my old Unifi controller and restored that backup onto the new one.
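
If you prefer to verify from the command line first, a request like this (the -k is needed because the controller uses a self-signed certificate) should come back with an HTTP response, confirming the virtual IP is routing traffic to the pod:

# any HTTP response here means MetalLB is advertising the IP and the service is reachable
curl -kI https://192.168.1.61:8443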

All of the open source software that makes the above possible is so awesome. I'm seriously impressed with MetalLB: how easy it was to set up and how cool it is. I now have a highly available Unifi controller running in Kubernetes, and it performs like a dream!
