After assembling the cluster case and installing the Raspberry Pis, the next step was to install an operating system and prepare the nodes for running Kubernetes. I found no reason not to use the Debian-based Raspberry Pi OS Lite, so that's what I did. Since I primarily use Debian, I thought I'd try out the Raspberry Pi Imager for Ubuntu available on the Raspberry Pi downloads page. Although installing the imager.deb package required manually installing some missing dependencies, the Imager application itself was extremely simple to use. I just selected the OS and the SD card and proceeded to write each of the four SD cards I'd be installing in the Pis. I should note that at the time of writing this, a 64-bit version of Raspberry Pi OS was in beta. I chose to stick with the 32-bit stable release because I prefer stability over the bleeding edge, and because I was only using Raspberry Pis with up to 4GB of RAM.

Image: Raspberry Pi Imager screenshot

Since Raspberry Pis are often used headless, Raspberry Pi OS provides an easy way to enable the SSH server without needing to connect a display. As shown in the steps found here, you simply create a file named “ssh” in the boot partition on the SD card. I did this after writing each of the SD cards, before ejecting them.

Next, I installed the SD cards in the Pis and powered on only one. If I had powered on all the Pis at this point, they would all have had the same hostname, and with similar MAC addresses it would have been difficult to tell them apart when connecting via SSH. Instead, I powered them on one at a time: I connected via SSH, changed the hostname using raspi-config, and rebooted each one before powering on the next.

On my network this was a fairly simple process because I have a pfSense appliance as my router, with the DNS server configured to register hostnames from DHCP leases and static mappings. This allowed me to just SSH to each new Pi as "pi@raspberrypi" without knowing the IP address it was assigned from DHCP. If I didn't have this configured on my network, I would have had to run an nmap scan or look at the DHCP leases to find the IP address of each new Raspberry Pi.
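
For the nmap route, a ping scan of the subnet surfaces new Pis by their MAC vendor string. The real command would be something like `sudo nmap -sn 192.168.2.0/24` (sudo is needed to see MAC addresses); the snippet below greps a made-up sample of that output rather than scanning anything, just to show the pattern:

```shell
# Sample of what an `nmap -sn` ping scan reports for a new Pi
# (the IP and MAC here are illustrative, not from my network).
scan='Nmap scan report for 192.168.2.57
Host is up (0.0042s latency).
MAC Address: DC:A6:32:AA:BB:CC (Raspberry Pi Trading)'

# Two lines of leading context around the vendor string surface the IP.
echo "$scan" | grep -B 2 'Raspberry Pi'
```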

This is also why I only changed the hostnames and didn't configure any static IP addresses on the Raspberry Pis. This is my general practice for all the servers on my home network: instead of statically configuring an IP address on the server itself, I rely on static DHCP mappings and hostname registration in DNS. Using DHCP and DNS instead of configuring static IPs everywhere makes it much simpler to change a server's IP address if I ever need to, and means I rarely have to remember IP addresses. I configured static DHCP mappings to reserve a specific IP address for each of the Pis. This wasn't strictly necessary, but it gives me finer control over the lease times and other options given to each Pi, and it makes the IP addresses predictable.

Here are the simple host names I chose:

  • k3s-master-rpi001
  • k3s-worker-rpi002
  • k3s-worker-rpi003
  • k3s-worker-rpi004

The "k3s" prefix is the Kubernetes distribution I plan on using; I'll talk more about this in the next post. The "master" or "worker" designation refers to the role of the node. Since K3s only supports multiple master nodes when using an external database, I'm using a single master node in my cluster (which is quite common). Lastly, the "rpi" suffix identifies the node as a Raspberry Pi, followed by a simple numeric identifier that has room for up to 999 Raspberry Pis. Hey, one can dream, right?
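
The zero-padded numbering is easy to produce programmatically, which comes in handy when the same names feed scripts or inventory files later. A quick sketch:

```shell
# One master plus three workers; "%03d" zero-pads the ID,
# leaving room for up to 999 Pis.
names="k3s-master-rpi001"
for i in 2 3 4; do
  names="$names k3s-worker-rpi$(printf '%03d' "$i")"
done
echo "$names"
# -> k3s-master-rpi001 k3s-worker-rpi002 k3s-worker-rpi003 k3s-worker-rpi004
```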

Once I had all the Pis up and running, I logged in to each one, changed the default pi user password, and expanded the file system, once again using the raspi-config utility. At this stage, many of the "follow-the-bouncing-ball" guides out there will have you reduce the amount of RAM allocated to the GPU, disable swap memory, and add some cgroup options to support containers. However, I put off these tasks for now because I wanted to build out the cluster incrementally and see what impact each change would have individually.

Image: raspi-config screenshot

Speaking of building incrementally, at this point I thought it would be a good idea to set up Ansible so I could incrementally apply some of the changes I would need to make across all of the nodes. Using a VM which I dedicated to Ansible, I ran ssh-copy-id pi@<hostname> for each Pi to copy my user's SSH public key onto it. This allows Ansible to connect to each node as the pi user without needing a password. I then created a simple inventory file and made sure Ansible could log in to all the nodes by using an ad-hoc command to gather some device details. I'll be doing more with Ansible later in this series, and I've made all of the files available on my GitHub here.

brian@ansible-vm:~/k3s_ansible$ cat hosts
[k3s_rpis]
k3s-master-rpi001
k3s-worker-rpi[002:004]

brian@ansible-vm:~/k3s_ansible$ ansible -u pi -m shell -a \
> 'uname -a && \
> lsb_release -d && \
> cat /proc/cpuinfo | grep "Revision\|Model" &&\
> cat /proc/meminfo | grep MemTotal && \
> lscpu | grep "Architecture\|Model name\|CPU max MHz"' \
> k3s_rpis
k3s-master-rpi001 | CHANGED | rc=0 >>
Linux k3s-master-rpi001 4.19.118-v7l+ #1311 SMP Mon Apr 27 14:26:42 BST 2020 armv7l GNU/Linux
Description:    Raspbian GNU/Linux 10 (buster)
Revision    : c03112
Model       : Raspberry Pi 4 Model B Rev 1.2
MemTotal:        3999744 kB
Architecture:        armv7l
Model name:          Cortex-A72
CPU max MHz:         1500.0000

k3s-worker-rpi002 | CHANGED | rc=0 >>
Linux k3s-worker-rpi002 4.19.118-v7l+ #1311 SMP Mon Apr 27 14:26:42 BST 2020 armv7l GNU/Linux
Description:    Raspbian GNU/Linux 10 (buster)
Revision    : c03112
Model       : Raspberry Pi 4 Model B Rev 1.2
MemTotal:        3999744 kB
Architecture:        armv7l
Model name:          Cortex-A72
CPU max MHz:         1500.0000

k3s-worker-rpi003 | CHANGED | rc=0 >>
Linux k3s-worker-rpi003 4.19.118-v7+ #1311 SMP Mon Apr 27 14:21:24 BST 2020 armv7l GNU/Linux
Description:    Raspbian GNU/Linux 10 (buster)
Revision    : a02082
Model       : Raspberry Pi 3 Model B Rev 1.2
MemTotal:         948280 kB
Architecture:        armv7l
Model name:          Cortex-A53
CPU max MHz:         1200.0000

k3s-worker-rpi004 | CHANGED | rc=0 >>
Linux k3s-worker-rpi004 4.19.118-v7+ #1311 SMP Mon Apr 27 14:21:24 BST 2020 armv7l GNU/Linux
Description:    Raspbian GNU/Linux 10 (buster)
Revision    : a22082
Model       : Raspberry Pi 3 Model B Rev 1.2
MemTotal:         948280 kB
Architecture:        armv7l
Model name:          Cortex-A53
CPU max MHz:         1200.0000

Since Raspberry Pi OS Lite is installed by directly imaging the SD cards, you never go through an installer that sets localization options like the timezone or NTP servers. I noticed that the timezone was set to Europe/London on all of the Pis. While I didn't expect it to really be an issue, I wanted to ensure that logs would have accurate timestamps. To be certain that the system time and timezone were configured identically across all the nodes, I created and ran the Ansible playbook shown below. I had to set the NTP server manually in the timesyncd configuration file because the NTP server received via DHCP on my network was, frustratingly, being ignored by timesyncd. I found references to this being a bug in past versions, but it should have been resolved in systemd version 241, the version running on the Pis. I may revisit this later, but for the time being, setting it manually in the configuration file works.

# timesyncd.yaml
---
- hosts: k3s_rpis
  remote_user: pi
  become: yes
  tasks:
    - name: Set NTP configuration in /etc/systemd/timesyncd.conf
      lineinfile:
        path: /etc/systemd/timesyncd.conf
        line: 'NTP=192.168.2.1'
        create: yes
        state: present
      register: timesyncd_conf

    - name: Start systemd-timesyncd, if not started
      service:
        name: systemd-timesyncd
        state: started
      register: timesyncd_started

    - name: Restart systemd-timesyncd, if running and timesyncd.conf changed
      service:
        name: systemd-timesyncd
        state: restarted
      when: not timesyncd_started.changed and timesyncd_conf.changed

    - name: Enable systemd-timesyncd, if not already enabled
      service:
        name: systemd-timesyncd
        enabled: yes

    - name: Set timezone to America/Los_Angeles
      timezone:
        name: America/Los_Angeles
      register: timezone

    - name: Restart cron if timezone was changed
      service:
        name: cron
        enabled: yes
        state: restarted
      when: timezone.changed
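
To verify the playbook took effect, I could run another ad-hoc command such as ansible -u pi -m shell -a 'timedatectl' k3s_rpis (sketched here, not a captured transcript). The lines worth checking in timedatectl's output look like this:

```shell
# Representative `timedatectl` output after the playbook has run
# (sample text, not captured from the cluster).
status='               Local time: Sun 2020-05-24 10:15:42 PDT
                Time zone: America/Los_Angeles (PDT, -0700)
System clock synchronized: yes
              NTP service: active'

# The two lines that matter: the zone was set and the clock is syncing.
echo "$status" | grep -E 'Time zone|synchronized'
```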

Next, I created and ran another simple playbook to update the apt package list and upgrade the installed packages.

# apt-upgrade.yaml
---
- hosts: k3s_rpis
  remote_user: pi
  become: yes
  tasks:
    - name: Update apt package lists and upgrade
      apt:
        update_cache: yes
        upgrade: safe
Finally, I rebooted all the nodes, again using an ad-hoc Ansible command, this time with the reboot module.

brian@ansible-vm:~/k3s_ansible$ ansible -u pi -m reboot -b k3s_rpis
k3s-worker-rpi003 | CHANGED => {
    "changed": true,
    "elapsed": 27,
    "rebooted": true
}
k3s-worker-rpi004 | CHANGED => {
    "changed": true,
    "elapsed": 30,
    "rebooted": true
}
k3s-worker-rpi002 | CHANGED => {
    "changed": true,
    "elapsed": 33,
    "rebooted": true
}
k3s-master-rpi001 | CHANGED => {
    "changed": true,
    "elapsed": 36,
    "rebooted": true
}

While there were still some additional tasks that one might argue should have been done to "prepare the nodes" before the Kubernetes installation, as a general rule, I try not to apply changes until I know I need them. If you've worked with Linux for any amount of time, you've probably found yourself troubleshooting an issue and applying a bunch of "fixes" that you found in different forums. Once you actually fixed the issue, you were probably left with changes that you either shouldn't have made, or you just couldn't tell which change actually fixed the issue. To avoid this kind of situation, I try to practice the Agile/DevOps approach of applying small incremental changes, and checking for feedback along the way.

NEXT: Kubernetes at Home Part 4: Installing the Cluster

PREVIOUS: Kubernetes at Home Part 2: Choosing the Hardware

- Brian Brookman

