How to set up a k3s cluster on a NAS

The ubiquity of Kubernetes in the DevOps community can’t be overlooked; it’s become the go-to orchestration solution for a lot of companies. I typically view my homelab as a way to tinker and keep up to date with tech, so I wanted to give Kubernetes a chance at home. Since I wasn’t fully sold on how I could utilize it at home, I decided to go the VM route instead of buying new bare-metal hosts for the purpose.

I looked into Kubernetes The Hard Way, and it was a great deep dive into the components - but I was looking for something a bit more lightweight. K3s seemed perfect for my use case: it was lightweight, easy to install (via Ansible or k3sup) and had low RAM requirements. Time to set up the VMs and get off to the races!

Setting up VMs on my NAS

My home NAS has a 4-core CPU (Intel i3-10100) with 16GB of RAM (Crucial 2666MHz) and 16TB of space on a B460M motherboard, running Unraid as my software of choice. My considerations at the time of purchase were that it should be low power and support Intel Quick Sync for Plex - so the setup isn’t optimized for heavy workloads. That being said, it’s already running 5-6 Docker containers and 1 VM without any issues - all at a relatively low power draw.

I uploaded the latest Debian 12 ISO to the NAS and went to the VM page to create a new VM. Unfortunately, Unraid only lets you create one VM at a time (some posts online indicate you can clone a VM by copying over its disk image) - so I just went through the creation process 3 times manually. In the future, I should probably keep a pre-provisioned base image with a user account and basic tools already installed.

The 3 VMs worked great: I was able to SSH into all of them and ensured each had a static IP. Now I had to decide how to install k3s on them.

Setting up k3s

I tested out two approaches (k3sup and Ansible), eventually settling on the Ansible-based approach since I’m most comfortable with it. I’ll describe the k3sup approach first and the Ansible approach after.

k3sup

k3sup is an extremely easy to use, one-command approach to setting up k3s. You can target a remote host by just specifying its IP, and k3sup will execute the commands via SSH. Install k3sup via the instructions in its GitHub repo before proceeding.

k3s has a server-agent model, so it’s prudent to set up the server first and then join the agents to it. To set up the server, you can just run k3sup install --ip $IP --user $USER. This writes a kubeconfig file - which you need for accessing the cluster - into the directory you ran the k3sup command from.

To set up the agents, simply run k3sup join --ip $AGENT_IP --server-ip $SERVER_IP --user $USER and k3sup will automagically install the required software and join each node to the cluster.
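Put together, the full sequence for one server and two agents looks roughly like the sketch below. The IPs and SSH user are hypothetical placeholders - substitute your own VMs' values. It prints the commands as a dry run rather than executing them:

```shell
# Hypothetical LAN addresses and SSH user - replace with your VMs' values
SERVER_IP=192.168.1.10
AGENT_IPS="192.168.1.11 192.168.1.12"
SSH_USER=debian

# Build the full set of k3sup invocations: one install for the server,
# one join per agent. Printed as a dry run; remove the echo layer
# (e.g. pipe to sh) once the values are correct.
CMDS="k3sup install --ip $SERVER_IP --user $SSH_USER"
for ip in $AGENT_IPS; do
  CMDS="$CMDS
k3sup join --ip $ip --server-ip $SERVER_IP --user $SSH_USER"
done
printf '%s\n' "$CMDS"
```

Since each command only needs SSH access to the target host, the whole loop runs from your workstation.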

You can run kubectl --kubeconfig=$(pwd)/kubeconfig get nodes to check if the nodes are all present as expected.

k3sup was extremely easy to use, but the setup process felt a bit ephemeral to me. I’m sure you can track which version of k3s the nodes run, but I still felt uneasy that it wasn’t defined in a static manner. Most of these problems could be fixed by a simple script, or by having Ansible call k3sup instead - meaning easier integration with the rest of my homelab setup.

Ansible

Most of my homelab is already controlled by Ansible, so it felt natural to rely on Ansible at least for the provisioning aspects of the homelab. The team I work with at the Ethereum Foundation maintains a massive list of Ansible roles for use in most of our day-to-day activities. We have some bare-metal k3s clusters that were provisioned with the bootstrap and k3s roles. I know that setup has worked well for almost a year, and the config and versions are easy to track and read - so I wanted to reuse the roles for my homelab as well.

I modified my inventory.ini to include the new VM hosts that I provisioned:

# VMs on home NAS, used for k3s  
k8s-node-1 ansible_host=...
k8s-node-2 ansible_host=...
k8s-node-3 ansible_host=...

[k3s_cluster]
k8s-node-1
k8s-node-2
k8s-node-3

[k3s_server]
k8s-node-1

[k3s_agent]
k8s-node-2
k8s-node-3

The above inventory.ini defines the 3 VMs I created. The ansible_host variable holds the IP address each one is reachable at, the k3s_cluster group covers the entire cluster, and the remaining groups split out the server and the agents.

I then created a file in group_vars/k3s_cluster.yaml that defines the k3s version to use as well as the server IP:

k3s_version: v1.27.4+k3s1
k3s_server_ip: ...

The server and agent groups each get a very similar file, group_vars/k3s_server.yaml and group_vars/k3s_agent.yaml respectively:

k3s_node_type: server # or agent
k3s_server_extra_args: >- # or `k3s_agent_extra_args`
--node-external-ip={{ ansible_host }}

The Ansible role can also set up a WireGuard mesh between the hosts (a bit overkill for a homelab, but useful if you have many hosts).

Next, you’ll need to install the ethpandaops Ansible collection.
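Assuming the collection is published under the ethpandaops.general namespace (inferred from the role path used in the playbook below), a requirements.yml for it would look roughly like:

```yaml
# requirements.yml - collection name inferred from the ethpandaops.general.k3s role path
collections:
  - name: ethpandaops.general
```

Install it with ansible-galaxy collection install -r requirements.yml.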

The playbook to run looks like this:

- hosts: k3s_cluster
  become: true
  serial: "{{ batch_count | default('100%') }}"
  roles:
    - role: ethpandaops.general.k3s
      tags: k3s

Now it all comes together with ansible-playbook -i inventories/servers/inventory.ini playbooks/setup_k3s.yml. This sets up the cluster in the configuration defined and lets you update it with ease whenever you need to. You can SSH into one of the nodes and grab the kubeconfig, place it in your local .kube/ folder, and from then on kubectl get nodes should work without any hitches (make sure you have the right context set)!
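One gotcha when grabbing the kubeconfig: k3s writes it to /etc/rancher/k3s/k3s.yaml on the server node with the API address set to 127.0.0.1, so after copying it down you need to rewrite that address to the server’s LAN IP. A minimal sketch of the rewrite, using a hypothetical IP and a sample of the relevant line so the substitution is easy to verify:

```shell
SERVER_IP=192.168.1.10   # hypothetical - use your server node's LAN IP

# Stand-in for the kubeconfig you'd copy down, e.g. via:
#   scp user@$SERVER_IP:/etc/rancher/k3s/k3s.yaml /tmp/k3s-kubeconfig
# (k3s writes the API server address as loopback)
cat > /tmp/k3s-kubeconfig <<'EOF'
    server: https://127.0.0.1:6443
EOF

# Point the kubeconfig at the server's LAN IP instead of loopback
sed -i "s/127.0.0.1/$SERVER_IP/" /tmp/k3s-kubeconfig
cat /tmp/k3s-kubeconfig
```

With the rewritten file placed in ~/.kube/ (or referenced via KUBECONFIG), kubectl get nodes should list all three VMs.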

Cleanups can be managed by setting a single variable: -e k3s_cleanup=true!

Conclusion

We looked into the why as well as a few approaches to setting up k3s, and we should now have a running cluster that’s ready for use! We could definitely have done certain things in a more automated manner, but I was able to get the cluster up in under an hour - so it’s probably enough for now. Stay tuned for more about how to actually deploy things to the cluster, and to learn if I decide to give up on it altogether :D