How To Access Your Home Kubernetes Cluster From Anywhere With Tailscale

I have written quite a few blog posts on how to set up a home Kubernetes cluster. To operate the cluster you need network access to the Kubernetes API endpoint (hosted by the kube-apiserver). This is the endpoint that the kubectl command line tool talks to when making API requests to the cluster. Because my home cluster sits on my home network, I wanted a way to access it from anywhere in the world.

I wanted to avoid the obvious solution of opening up a port on my router and forwarding it to the cluster. That is not a nice approach from a security standpoint, as you are exposing a port to the internet.

Then I remembered a Hanselminutes podcast I had listened to about a technology called Tailscale (I would highly recommend checking out the podcast, which has the founder of Tailscale, Avery Pennarun, on it). Tailscale is a really cool VPN technology that solves a lot of the common pain points with a VPN and fits the distributed world we now live in.

At a high level, Tailscale gives every device that joins your network a private IP address and then uses very clever NAT traversal and firewall punch-through techniques (see this excellent blog post for the details) to enable secure point-to-point connections between any two devices.

Tailscale is an awesome solution to our problem as it will enable us to connect to our cluster over a private, secure connection from anywhere in the world, without the need for any port forwarding. To make our cluster work with Tailscale we simply have to follow these steps:

  1. Sign up for a free Tailscale account (NB it’s free for 20 machines; I’ve got no affiliation with Tailscale)
  2. Install Tailscale on each control plane node and connect (tailscale up)
  3. Install Tailscale on the client machine you want to use to connect to Kubernetes and connect to Tailscale
  4. Get the Tailscale IP of your control plane node. To do this you can either log into the Tailscale UI and find the IP or you can SSH onto the control plane node and run tailscale ip
  5. Replace the IP address of the control plane node in your kube config file with the Tailscale IP address (see the sketch after this list)
  6. We are now almost there and can actually connect to our control plane. Unfortunately we are not quite done: if you run kubectl get nodes now you will get a TLS error. This is because the Tailscale IP is not one of the IPs in the kube-apiserver certificate. We could pass the --insecure-skip-tls-verify flag to kubectl and it would work, but this isn’t great.
  7. To fix the TLS error we simply need to add the Tailscale IP to our Kubernetes API server certificate. This excellent blog post explains how to do it so I’m not going to rehash the information here. Once the new certificate is in place you should be good to go: verify by running kubectl get nodes and check you get a response and no TLS error
  8. For an extra check, disconnect from your home network and connect to a different network (I tethered to my phone) and run kubectl get nodes again. It should work! Which is actually amazing when you think about it, as Tailscale has just punched through the networks all the way from your client machine tethered to your phone to your Kubernetes API.
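As a rough sketch of steps 4 and 5, something like the following works; the Tailscale IP shown is a placeholder, and the cluster name (kubernetes is the kubeadm default) and port 6443 are assumptions you should check against your own kubeconfig with kubectl config view.

# On the control plane node: find its Tailscale IP
tailscale ip -4

# On the client machine: point the kubeconfig cluster entry at the Tailscale IP
kubectl config set-cluster kubernetes --server=https://100.101.102.103:6443

# Verify the connection now goes over the tailnet
kubectl get nodes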

With Tailscale in place, you are now able to operate your Kubernetes cluster from anywhere in the world without exposing any ports on your home network!

How To Run a Vault HA Cluster on a Home Kubernetes Cluster

I have blogged about how I set up a bare-metal Kubernetes cluster on VMware. In this post I’m going to explain how to run an HA Vault cluster on top of it.

Vault is a cloud native secrets engine that can protect your secrets and allow applications to retrieve them, giving you a central place to store secrets. This video by Armon Dadgar, HashiCorp’s CTO, gives a really good overview of Vault.

Running an HA Vault cluster in Kubernetes requires quite a few steps, so I wanted to document how to approach it. Disclaimer: this is far from a guide on how to deploy a production-grade Vault cluster. It is aimed at giving you a way to deploy an HA cluster as a starting point, or if you are like me and want to deploy a Vault cluster on your home Kubernetes cluster for fun!

Step 1 – Create Vault Namespace

The official Vault helm chart, which we are going to use to install Vault, sets everything up in a vault namespace. To get our cluster ready we need to create that namespace using the following command:

kubectl create namespace vault

Step 2 – Generate Certificates

To run a Vault cluster you need a TLS certificate for Vault to use to expose its HTTPS endpoints. There are a few approaches we can take, but the easiest is to generate our own root CA and then use that root CA to sign a server certificate. We can then add the root CA to the trust store of the machine we want to use to access the cluster so that it will trust the Vault endpoints.

To generate the root CA first we need to create a new private key:

openssl genrsa -out rootCAKey.pem 4096

Then create the self-signed root CA certificate with this key (the -x509 flag makes openssl output a certificate rather than a signing request):

openssl req -x509 -sha256 -new -nodes -key rootCAKey.pem -days 10000 -out rootCACert.pem

Note I’m using an expiry of 10000 days (a little over 27 years) for the root CA.

If you were doing this for real you would probably want to look at setting up an intermediate CA, but as this is just for fun we are going to sign the server certificate directly with the root CA. Let’s generate a private key for the server:

openssl genrsa -out vault.key.pem 4096

For the Vault server certificate, we want to configure a few subject alternative names in order for everything to work correctly. Firstly, we come up with a domain that we will later alias to point to our Vault cluster; I’m going to use vault.local for that. Next, we need to add n subject alternative names of the form vault-X.vault-internal, where n is the number of Vault nodes we want to run and X is the node number. For example, if we run 3 nodes then we need a certificate with the 3 addresses vault-0.vault-internal, vault-1.vault-internal and vault-2.vault-internal. These internal endpoints are used by the Vault nodes to talk to one another.

In order to create the subject alternative names we need to create a config file like the one below:

[req]
req_extensions = v3_req
distinguished_name = dn
prompt = no

[dn]
CN = vault.local

[v3_req]
keyUsage = keyEncipherment, dataEncipherment
extendedKeyUsage = serverAuth
subjectAltName = @alt_names

[alt_names]
DNS.1 = vault.svc.cluster.local
DNS.2 = vault-0.vault-internal
DNS.3 = vault-1.vault-internal
DNS.4 = vault-2.vault-internal
DNS.5 = vault.local

With this config file in place we can now create the CSR for our vault server certificate:

openssl req -new -key vault.key.pem -sha256 -out vault.csr -config cert.config

To generate the server certificate we just need to sign the CSR we just generated with the root CA:

openssl x509 -req -sha256 -in vault.csr -CA rootCACert.pem -CAkey rootCAKey.pem -CAcreateserial -out vault.cert.pem -days 365 -extfile cert.config -extensions 'v3_req'

We have now created our two certificates: one for the root CA and one for the Vault server itself.
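Before uploading anything it is worth sanity-checking the server certificate, for example by printing its SANs and verifying that it chains back to our root CA:

# Print the subject alternative names baked into the server certificate
openssl x509 -in vault.cert.pem -noout -text | grep -A1 'Subject Alternative Name'

# Check the server certificate verifies against the root CA
openssl verify -CAfile rootCACert.pem vault.cert.pem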

Step 3 – Upload Certificates

In order for Vault to be able to use the certificates we have generated, we need to load them into the cluster as Kubernetes secrets. For the root CA we only need to upload the certificate, so that we can configure Vault to trust anything signed by that root. This is needed so that the Vault instances trust each other and can talk to each other over TLS.

To upload the root CA cert we can use a generic (Opaque) secret. Note that secret names must be lowercase DNS names, so I’m calling it root-ca:

kubectl --namespace='vault' create secret generic root-ca --from-file=./rootCACert.pem

For the server certificate we need to upload both the certificate and the private key in order for our Vault container to use that certificate to host its TLS endpoint. To do this we can use the command:

kubectl --namespace='vault' create secret tls tls-server --cert ./vault.cert.pem --key ./vault.key.pem

This command will upload the server certificate using Kubernetes’ built-in tls secret type, which creates a secret with the certificate and key as data under tls.crt and tls.key respectively.
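A quick check that both secrets landed in the vault namespace (names matching the commands above):

kubectl --namespace='vault' get secrets
kubectl --namespace='vault' describe secret tls-server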

Step 4 – Set up Local Storage Volumes

To run a Vault cluster we need to choose a storage backend. The solution with the least moving parts is to configure Vault to use its built-in raft storage backend. To make this work we have to create a persistent volume for each Vault container to use. We are going to create a persistent volume on disk on each node and use node affinity to pin each volume so that it always lands on the same node.

To set this up we first have to configure local storage by creating the local storage class:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local-storage
reclaimPolicy: Delete
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer

With the file above created, we can create the storage class using the kubectl apply -f storageclass.yaml command.

Next, we have to create the volumes themselves. I have 3 worker nodes in my cluster and am going to run a 3 node Vault cluster, so I am going to create 3 persistent volumes using a manifest like the following:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: volworker01
spec:
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /srv/cluster/storage/vault
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - worker01

In the above manifest you will need to change worker01 to the name of your worker node; the node affinity clause is what makes the volume available only on that node. You will have to copy this file once for each node you are going to run, altering the name and the node affinity values each time. Then use kubectl apply -f <filename> to create the volumes on the cluster.

Before the volumes will work you have to make sure the path you set physically exists on disk, so be sure to create it on each of your worker nodes.
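One way to create that path on all three workers in one go, assuming SSH access under the hostnames used in this series:

for node in worker01 worker02 worker03; do
  ssh "$node" 'sudo mkdir -p /srv/cluster/storage/vault'
done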

Step 5 – Install Vault

With all of the above in place we are finally ready to install Vault! We are going to use helm to install Vault, so the first thing we need to do is add the hashicorp repo to helm:

helm repo add hashicorp https://helm.releases.hashicorp.com

Next we need to create the following values.yaml:

# Vault Helm Chart Value Overrides
global:
  enabled: true
  tlsDisable: false

injector:
  enabled: true
  # Use the Vault K8s Image https://github.com/hashicorp/vault-k8s/
  image:
    repository: "hashicorp/vault-k8s"
    tag: "latest"
  resources:
    requests:
      memory: 256Mi
      cpu: 250m
    limits:
      memory: 256Mi
      cpu: 250m

server:
  # Resource requests and limits tuned down for small home-lab VMs
  # (well below the Vault Reference Architecture recommendations)
  resources:
    requests:
      memory: 256Mi
      cpu: 500m
    limits:
      memory: 256Mi
      cpu: 500m

  # For HA configuration and because we need to manually init the vault,
  # we need to define custom readiness/liveness Probe settings
  readinessProbe:
    enabled: true
    path: "/v1/sys/health?standbyok=true&sealedcode=204&uninitcode=204"
  livenessProbe:
    enabled: true
    path: "/v1/sys/health?standbyok=true"
    initialDelaySeconds: 60

  # extraEnvironmentVars is a list of extra environment variables to set with the stateful set.
  # These could be used to include variables required for auto-unseal.
  extraEnvironmentVars:
    VAULT_CACERT: /vault/userconfig/root-ca/rootCACert.pem

  # extraVolumes is a list of extra volumes to mount. These will be exposed
  # to Vault in the path `/vault/userconfig/<name>/`.
  extraVolumes:
    - type: secret
      name: tls-server
    - type: secret
      name: root-ca

  # This configures the Vault Statefulset to create a PVC for audit logs.
  # See https://www.vaultproject.io/docs/audit/index.html to know more
  auditStorage:
    enabled: false

  dataStorage:
    enabled: true
    storageClass: local-storage

  standalone:
    enabled: false

  # Run Vault in "HA" mode.
  ha:
    enabled: true
    replicas: 3
    raft:
      enabled: true
      setNodeId: true
      config: |
        ui = true

        listener "tcp" {
          address = "[::]:8200"
          cluster_address = "[::]:8201"
          tls_cert_file = "/vault/userconfig/tls-server/tls.crt"
          tls_key_file = "/vault/userconfig/tls-server/tls.key"
          tls_ca_cert_file = "/vault/userconfig/root-ca/rootCACert.pem"
        }

        storage "raft" {
          path = "/vault/data"

          retry_join {
            leader_api_addr = "https://vault-0.vault-internal:8200"
            leader_ca_cert_file = "/vault/userconfig/root-ca/rootCACert.pem"
            leader_client_cert_file = "/vault/userconfig/tls-server/tls.crt"
            leader_client_key_file = "/vault/userconfig/tls-server/tls.key"
          }
          retry_join {
            leader_api_addr = "https://vault-1.vault-internal:8200"
            leader_ca_cert_file = "/vault/userconfig/root-ca/rootCACert.pem"
            leader_client_cert_file = "/vault/userconfig/tls-server/tls.crt"
            leader_client_key_file = "/vault/userconfig/tls-server/tls.key"
          }
          retry_join {
            leader_api_addr = "https://vault-2.vault-internal:8200"
            leader_ca_cert_file = "/vault/userconfig/root-ca/rootCACert.pem"
            leader_client_cert_file = "/vault/userconfig/tls-server/tls.crt"
            leader_client_key_file = "/vault/userconfig/tls-server/tls.key"
          }

          autopilot {
            cleanup_dead_servers = "true"
            last_contact_threshold = "200ms"
            last_contact_failure_threshold = "10m"
            max_trailing_logs = 250000
            min_quorum = 3 # matches the 3 node cluster size
            server_stabilization_time = "10s"
          }
        }

        service_registration "kubernetes" {}

# Vault UI
ui:
  enabled: true
  serviceType: "LoadBalancer"
  serviceNodePort: null
  externalPort: 8200

This values.yaml configures 3 vault nodes and mounts the certificates using the secrets we created earlier. I have tuned down the memory requirements of each node as the VMs I’m running are pretty small.

With all of this in place we can get Vault installed:

helm install vault hashicorp/vault --namespace vault -f values.yaml

Once Vault is installed we can check it’s running by listing the pods:

kubectl get pods -n vault

This should print out the 3 Vault pods: vault-0, vault-1 and vault-2. To get the cluster up and running we need to initialise one of the Vault nodes, save the unseal keys and root token, and then use the same unseal keys to unseal the other nodes.

To initialise one of the vaults first open a terminal on the container using:

kubectl exec -n vault -it vault-0 -- /bin/ash

Now we need to turn off TLS verification, as we are accessing Vault on localhost; do this by setting the VAULT_SKIP_VERIFY environment variable:

export VAULT_SKIP_VERIFY=1

Now we can initialise vault using:

vault operator init

Make sure you save the unseal keys and root token somewhere safe, and then unseal this Vault by running vault operator unseal 3 times, providing a different unseal key each time. Then exec onto the other two Vault containers using the kubectl exec command given above (changing the pod name to vault-1 and vault-2), export the VAULT_SKIP_VERIFY variable and run vault operator unseal 3 times on each to unseal the other two nodes.
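Purely as a sketch of that flow, run from your own machine rather than inside the pods (UNSEAL_KEY_1..3 are placeholders for three of the keys printed by vault operator init, and VAULT_SKIP_VERIFY is set because the CLI inside the pod talks to 127.0.0.1, which is not in the certificate):

# Initialise the first node and keep the unseal keys and root token somewhere safe
kubectl -n vault exec vault-0 -- env VAULT_SKIP_VERIFY=1 vault operator init | tee vault-init.txt

# Unseal every node with three of the unseal keys from vault-init.txt
for pod in vault-0 vault-1 vault-2; do
  for key in "$UNSEAL_KEY_1" "$UNSEAL_KEY_2" "$UNSEAL_KEY_3"; do
    kubectl -n vault exec "$pod" -- env VAULT_SKIP_VERIFY=1 vault operator unseal "$key"
  done
done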

Step 6 – Configuring Your Machine To Use Vault

To finish our setup we need to configure our machine to trust our new Vault cluster. To do that, first load your root CA certificate into your machine’s trust store. How to do this varies based on which operating system you are on; for Mac you can follow these instructions.

I have MetalLB set up as a load balancer on my cluster, as explained in this post. MetalLB assigns a virtual IP address that load balances across the cluster and routes to Vault. I have added an entry to my /etc/hosts file pointing vault.local at that MetalLB virtual IP.
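The /etc/hosts entry is just a single line; 192.168.1.220 here is a made-up example, so use whichever external IP MetalLB has assigned to the Vault service (kubectl -n vault get svc will show it):

192.168.1.220 vault.local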

Next, install Vault on your machine; instructions can be found on the Vault install page. With all of that configured we can set the Vault address for the CLI to use by running:

export VAULT_ADDR=https://vault.local:8200

Now we can use our new Vault HA cluster. Test it out by running vault status; you should see the status of the cluster.

Creating a virtual 6 node HA Kubernetes cluster with Cilium using VMware – part 2

In part 1 we set up 6 Ubuntu Server VMs (using VMware ESXi), all available on the host network. In this part we are going to take those 6 VMs and install Kubernetes with the Cilium CNI. Our Kubernetes setup will have 3 master nodes (control plane) and 3 worker nodes, making it a highly available cluster. The astute reader will point out that all of the VMs are running on a single host, so making this cluster HA is somewhat pointless. To a point I would agree, but making the cluster HA is still worth it for two reasons: firstly, it enables zero-downtime upgrades to the control plane, and secondly, it gives us experience building an HA cluster, which is what we would want for production use cases.

To set up Kubernetes I have decided to go with kubeadm. I know this is cheating a tiny bit versus installing all of the components ourselves, but even though kubeadm does a lot of the heavy lifting for us, we still need to understand a fair bit about what is going on to get the cluster working.

The instructions I’m going to be following can be found on the kubeadm site. I’m going to talk through the salient points here as I had to add a few workarounds to get the cluster running. The first step is to install kubeadm on every node (it would’ve been better to have included this in the first VM that we cloned as it would’ve saved some time, but you live and learn). To install kubeadm:

sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl
sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl

Once kubeadm is installed we can run the following command on master01 to initialise the master node:

kubeadm init --control-plane-endpoint=skycluster --pod-network-cidr=10.10.0.0/16 --skip-phases=addon/kube-proxy

The above command sets the control-plane endpoint, as is recommended by kubeadm if you want to set up an HA cluster. For now skycluster is just a hosts entry that points to master01‘s IP; I have added this entry to every machine in our setup. We are setting the --skip-phases=addon/kube-proxy flag because I am planning on using Cilium as the CNI. Cilium uses eBPF to super-power your cluster and provides the kube-proxy functionality using eBPF instead of iptables. I recently interviewed Dan Wendlandt, the CEO of Isovalent (the company behind Cilium), on the Form3 .tech podcast, an episode well worth listening to!
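For reference, the skycluster entry is just a line in /etc/hosts on every node (and on my laptop), pointing at master01’s IP from part 1:

192.168.1.200 skycluster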

I got a warning saying that swap is enabled and kubeadm should be run with swap off. To turn off swap run sudo swapoff -a. To make sure that swap stays off after a reboot we also need to edit /etc/fstab and comment out the line /swap.img none swap sw 0 0 by adding a # to the start of it. After turning off swap, I ran the kubeadm init command again.
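As commands, the swap change looks like this (the sed pattern assumes the stock Ubuntu 20.04 /swap.img line quoted above):

# Turn swap off for the current boot
sudo swapoff -a

# Keep it off after reboots by commenting out the swap entry in /etc/fstab
sudo sed -i 's|^/swap.img|#/swap.img|' /etc/fstab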

This time the command timed out. The error message stated that the kubelet could not be contacted. The kubelet is a service that runs on each node in the cluster. Running systemctl status kubelet revealed that the kubelet service was not running. For more information as to why, I ran journalctl -u kubelet, pressed Shift+g to get to the bottom of the logs, then the right arrow to see the full error message, which was: Failed to run kubelet" err="failed to run Kubelet: misconfiguration: kubelet cgroup driver: \"systemd\" is different from docker cgroup driver: \"cgroupfs\"". To fix this we have to switch either the kubelet or docker to use the same cgroup driver. From doing some reading, the recommendation is to use systemd as the cgroup driver, so I updated docker to use systemd by running:

sudo -i
echo '{"exec-opts": ["native.cgroupdriver=systemd"]}' >> /etc/docker/daemon.json
systemctl restart docker

With that change in place let’s run kubeadm init ... again. This time we get: Your Kubernetes control-plane has initialized successfully!

Wahoo!

To be able to contact the cluster from our own machine we need to add the cluster config to our local kube config file. To do this we can copy the file /etc/kubernetes/admin.conf from the master01 node onto our machine, grab the entry for the new cluster and put it into our kube config file located in $HOME/.kube/config.
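A minimal sketch of pulling the config down (it assumes /etc/kubernetes/admin.conf has been made readable for your SSH user, as it is owned by root by default; mine is kevin):

# Copy the admin kubeconfig from master01 to the local machine
scp kevin@master01:/etc/kubernetes/admin.conf ./skycluster.conf

# Either merge its cluster/context/user entries into ~/.kube/config by hand,
# or let kubectl merge the two files for the current shell session
export KUBECONFIG=$HOME/.kube/config:$PWD/skycluster.conf
kubectl config get-contexts

With the config in place, running kubectl get nodes shows: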

NAME STATUS ROLES AGE VERSION
master01 NotReady control-plane,master 13m v1.22.0

The node is not ready! To find out why, we can describe the node using kubectl describe node master01, and if we look at the events we see the error container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized. A freshly installed Kubernetes cluster does not come with a CNI. The CNI plugin is responsible for handing out IP addresses to the pods that get created (amongst other things). As stated earlier, we are going to use Cilium as our CNI of choice so we can super-power our cluster.

To install Cilium we first add the Cilium helm repo and then install the chart without kube-proxy:

helm repo add cilium https://helm.cilium.io/

helm install cilium cilium/cilium --version 1.9.9 \
    --namespace kube-system \
    --set kubeProxyReplacement=strict \
    --set k8sServiceHost=192.168.1.200 \
    --set k8sServicePort=6443

These values come from the Cilium documentation on how to install Cilium without kube-proxy. I got the address and port of the kube-apiserver by describing the API server pod with kubectl -n kube-system describe pod kube-apiserver-master01.

Now that Cilium is installed, when we check the status of the nodes again we see:

NAME STATUS ROLES AGE VERSION
master01 Ready control-plane,master 15m v1.22.0

Awesome! The first control plane node is now fully working!

Let’s not rest on our laurels; the next job is to set up the other two control plane nodes. This would’ve been easier if I’d had the foresight to pass the --upload-certs flag to the initial kubeadm init command. This flag stores the control plane certificates inside a Kubernetes secret, meaning the joining control plane nodes can simply download them. Unfortunately I did not do that, so I had to copy the certificates manually. Luckily this helper script is available in the kubeadm documentation:

USER=kevin # customizable
CONTROL_PLANE_IPS="master02 master03"
for host in ${CONTROL_PLANE_IPS}; do
  scp /etc/kubernetes/pki/ca.crt "${USER}"@$host:
  scp /etc/kubernetes/pki/ca.key "${USER}"@$host:
  scp /etc/kubernetes/pki/sa.key "${USER}"@$host:
  scp /etc/kubernetes/pki/sa.pub "${USER}"@$host:
  scp /etc/kubernetes/pki/front-proxy-ca.crt "${USER}"@$host:
  scp /etc/kubernetes/pki/front-proxy-ca.key "${USER}"@$host:
  scp /etc/kubernetes/pki/etcd/ca.crt "${USER}"@$host:etcd-ca.crt
  # Quote this line if you are using external etcd
  scp /etc/kubernetes/pki/etcd/ca.key "${USER}"@$host:etcd-ca.key
done

This script copies all of the certificates we need onto each of the control plane nodes we want to join (master02 and master03). Once it has run, we have to go on to each node and copy the certs into /etc/kubernetes/pki (this folder won’t exist yet so we need to create it), and then copy etcd-ca.key and etcd-ca.crt to /etc/kubernetes/pki/etcd/ca.key and /etc/kubernetes/pki/etcd/ca.crt respectively (a sketch of this is shown after the commands below). With the certificates in place, I also had to switch docker to use systemd and turn off swap on each node (the steps we saw earlier):

sudo -i
swapoff -a
echo '{"exec-opts": ["native.cgroupdriver=systemd"]}' >> /etc/docker/daemon.json
systemctl restart docker
systemctl daemon-reload
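And, as mentioned above, here is a rough sketch of moving the copied certificates into place on each joining control plane node (run as root; it assumes the files landed in /home/kevin, which is where the scp loop above puts them):

# Create the pki folders and move the certificates where kubeadm expects them
mkdir -p /etc/kubernetes/pki/etcd
mv /home/kevin/ca.crt /home/kevin/ca.key /home/kevin/sa.key /home/kevin/sa.pub \
   /home/kevin/front-proxy-ca.crt /home/kevin/front-proxy-ca.key /etc/kubernetes/pki/
mv /home/kevin/etcd-ca.crt /etc/kubernetes/pki/etcd/ca.crt
mv /home/kevin/etcd-ca.key /etc/kubernetes/pki/etcd/ca.key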

With those changes in place we can now run the kubeadm join command that we were given when kubeadm init completed successfully:

sudo kubeadm join skycluster:6443 --token xxx --discovery-token-ca-cert-hash xxx --control-plane

After running this on both nodes we see a message saying that the node joined successfully. Now when we check the status of the nodes we see:

master01 Ready control-plane,master 24h v1.22.0
master02 Ready control-plane,master 7m48s v1.22.0
master03 Ready control-plane,master 23m v1.22.0

All of our control plane nodes are now ready to go. To join the worker nodes we just have to run the following set of commands:

sudo -i
swapoff -a
echo '{"exec-opts": ["native.cgroupdriver=systemd"]}' >> /etc/docker/daemon.json
systemctl restart docker
systemctl daemon-reload
kubeadm join skycluster:6443 --token xxx --discovery-token-ca-cert-hash xxx

These commands turn off swap and switch docker on each worker node to the systemd cgroup driver, then kubeadm joins the node to the cluster. With this run on each worker node we now have a fully working cluster, which we can verify with kubectl get nodes:

NAME STATUS ROLES AGE VERSION
master01 Ready control-plane,master 24h v1.22.0
master02 Ready control-plane,master 16m v1.22.0
master03 Ready control-plane,master 32m v1.22.0
worker01 Ready <none> 4m12s v1.22.0
worker02 Ready <none> 2m8s v1.22.0
worker03 Ready <none> 101s v1.22.0

Let’s check the health of Cilium using the awesome cilium CLI. To install it, simply download the release for your OS and copy the binary onto your path. With that in place we can run cilium status.

The output shows that Cilium is healthy and that we have 6/6 cilium pods running (one on each node). We now have a fully working Kubernetes cluster ready for action.

Creating a virtual 6 node HA Kubernetes cluster with Cilium using VMware – part 1

I decided to recommission a decent-spec desktop (16 core Threadripper 1950X, 32GB RAM) into a server for VMs, with the first project being to build out an HA Kubernetes cluster. Why, when you can get a Kubernetes cluster in a click from a cloud vendor? For fun and to learn! This blog series will take you through the journey of building out the cluster.

To start the project I decided to use VMware’s ESXi to host the VMs. For those who don’t know, ESXi is an OS that exists purely to serve VMs. The full-blown product is very expensive, but luckily you can get ESXi for FREE from VMware, simply by registering for a free license and adding it once installed. There are a couple of limitations on the free license, but none that will get in the way of most hobbyists: you are limited to two physical CPUs, which won’t matter as most people will run this on a single machine, and backups of the whole fleet of VMs are restricted, which again is not a concern for me. So for free this is a pretty good deal.

To install, I downloaded the x86 64-bit ISO and put it on a USB drive using BalenaEtcher. Then simply boot from the USB drive and hit next a few times through the ESXi install. Once installed, a web URL is displayed showing the IP address of the machine; that is how you interact with ESXi. The UI is a bit 1990s, but remember we are talking about a free license here!

For the cluster VMs I decided to use Ubuntu 20.04 LTS Server edition; for my use case I don’t need the added bloat that comes with desktop Ubuntu. I created a new VM, attached the Ubuntu ISO and ran through the installer. What is very cool is that by default ESXi makes the VMs available on the main network. So, for example, my home network is 192.168.1.1/24 and the VMs get an IP address in that CIDR. That means that when you SSH into a VM from another machine you don’t even know it’s a VM; you can treat it like any other machine on the network. ESXi sets up a virtual switch and routes traffic through the host machine to the VM automatically.

I named my first machine master01, as for my HA Kubernetes cluster I’m going to build out 3 master nodes to run the control plane and 3 worker nodes. Once the first machine is up and running there is a really easy way to create the other 5: simply shut down the VM and clone it. This YouTube video explains how to clone a VM so I’m not going to repeat the instructions here.

Once all 6 machines have been created, there are a few things we need to do to tidy them up. Firstly, I wanted them all to have sequential IPs on my network so I would remember where they were. To do this we simply need to edit the file /etc/netplan/00-installer-config.yaml to something like the following:

network:
  ethernets:
    ens160:
      dhcp4: false
      addresses: [192.168.1.202/24]
      gateway4: 192.168.1.1
      nameservers:
        addresses: [8.8.8.8,8.8.4.4]
  version: 2

In the above configuration we are setting the static IP of the machine to 192.168.1.202 with a default gateway of 192.168.1.1; to apply the config run sudo netplan apply. The next thing we need to do is rename the machines, as currently they all have the same hostname as the VM they were cloned from. To do this, update the hostname in /etc/hostname and then update any reference to the old machine name in /etc/hosts. With those changes in place we need to reboot to make them apply.
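One way to do the rename, using hostnamectl rather than editing /etc/hostname by hand (shown here for the second master; old-clone-name is a placeholder for whatever the cloned VM was called):

# Set the new hostname (this writes /etc/hostname for you)
sudo hostnamectl set-hostname master02

# Replace any references to the old (cloned) hostname
sudo sed -i 's/old-clone-name/master02/g' /etc/hosts

sudo reboot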

Lastly, to make the machines easier to work with, I set up the /etc/hosts file on my laptop with the following:

192.168.1.200 master01
192.168.1.201 master02
192.168.1.202 master03
192.168.1.203 worker01
192.168.1.204 worker02
192.168.1.205 worker03

This means that we can SSH into each machine using a friendly name rather than the IP address.

And with that we now have our 6 machines ready to go, in part 2 we will get on to installing Kubernetes…