- 2 Raspberry Pi 4 Model Bs (4GB)
- 2 Official Raspberry Pi 4 Power Supplies
- 2 Raspberry Pi 3 Model Bs
- 2 Official Raspberry Pi 1/2/3 Power Supplies
- 4 32GB Samsung EVO Select microSD Cards
- Yahboom Raspberry Pi Cluster Case
- MikroTik Routerboard hEX
- CAT5e Ethernet Patch Cables
- Synology DS216j NAS
I chose to build my cluster using four Raspberry Pis because it's way cooler than a two-node cluster. Besides that, I already had two existing Pi 3s and felt that 1GB of RAM might be a limiting factor for what I had planned, so I decided to add two additional Pi 4s with 4GB of RAM each. It's been sufficient for my needs so far, but I'm hoping to find an excuse to upgrade to the recently announced 8GB Pi 4 because, well, I JUST NEED THEM, OKAY?!
Since the Raspberry Pi's performance is so heavily impacted by the SD card, I also decided to purchase some respectable cards. Priced at around $8 for a 32GB card, there's really no reason not to. The EVO Select cards I've linked above are the same as the EVO+ cards; they're just Amazon-exclusive versions. Check out this benchmark article, which rates them highly. I also planned on using my existing Synology DS216j NAS as persistent storage for the cluster containers.
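If you want to sanity-check a card's sequential write speed yourself, a quick `dd` run works in a pinch. This is a rough sketch only: the file name and sizes are arbitrary, and it should be run from a directory that lives on the SD card.

```shell
# Rough sequential write benchmark -- run from a directory on the SD card.
# conv=fsync makes dd flush to the card before reporting, so the final
# stats line (the throughput figure) isn't inflated by the page cache.
dd if=/dev/zero of=sdtest.bin bs=4M count=16 conv=fsync 2>&1 | tail -n 1
rm -f sdtest.bin
```

Keep in mind that sequential numbers only tell part of the story; small random I/O is what actually makes a Pi feel fast or slow, and a tool like `fio` is the better choice if you want to reproduce the kind of results in that benchmark article.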
For power, I ultimately decided on dedicated power supplies for each of the Pis and went with the official Raspberry Pi branded ones. There are a few reasons I came to this decision. At first, I was hoping to build a cleaner setup that didn't require so many plugs. I had two options: either purchase a USB charging hub and short cables, or use the optional PoE HATs with a PoE switch.

While the newer Raspberry Pi 4 Model Bs have higher stated power requirements than previous models, they can still be run with a minimum of 5V at 2.5A as long as you aren't also powering other peripherals. That being said, I still had a very difficult time finding a reputable USB charging hub that could simultaneously provide enough power to each Pi with all of them plugged in. Many of these hubs only provide the required amperage when only some of the ports are in use. I also anticipated that it might be challenging to find cables that would work because of the known USB-C cable compatibility problem in the v1.0 revision of the Pi 4.

The other option was to power the Pis with the optional PoE HAT (which also provides the Pi 4 minimum of 5V at 2.5A). However, since the PoE HAT only works on the Pi 3 Model B+, I would have had to upgrade those Pis as well. I would have also had to purchase a small 802.3af PoE switch with enough capacity to power all of the Pis. This may be something I reconsider in the future, but at the time, it didn't seem like money well spent since this isn't something I planned on moving once installed.

Using the official power supplies, I could rest easy knowing that all of the Pis, including the Pi 4s, would be sufficiently powered.
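For what it's worth, the electrical math on the PoE option does work out; it was the cost, not the capacity, that didn't. Back of the envelope:

```shell
# Worst-case draw per Pi at the stated minimum supply: 5 V * 2.5 A = 12.5 W,
# which just squeezes under the ~12.95 W that 802.3af guarantees at the
# powered device. Total budget for the cluster (integer math, scaled x10):
echo "$(( 4 * 5 * 25 / 10 )) W for four Pis"
```

So any 802.3af switch rated for roughly 50 W of total PoE output would have covered the four nodes.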
As for the case, I wanted a simple stacked cluster case that displayed the Pis nicely and didn't cause any thermal issues. Along with the increased power and performance of the Raspberry Pi 4 comes more heat, and it's known to throttle the CPU under load. While I considered a case with fans, I was hoping for a silent, air-cooled cluster without cheap fans that might eventually start buzzing. Since a recent firmware upgrade was made available to help reduce the heat and power consumption of the Pi 4, I felt confident that an open-air case without active cooling would be sufficient. It hasn't been an issue yet, but I may do some testing in another post soon.
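If you want to verify that an open-air case is keeping up, the firmware exposes a throttle bitmask through `vcgencmd get_throttled`. Here's a small helper that decodes the bits I care about (bit positions are from the Raspberry Pi firmware docs; the function name is my own):

```shell
# Decode the bitmask from `vcgencmd get_throttled` (e.g. throttled=0x50000).
# Low bits report the current state; bits 16+ report anything since boot.
decode_throttled() {
  local mask=$(( $1 ))
  if (( mask == 0 )); then
    echo "no under-voltage or throttling reported"
    return
  fi
  (( mask & 0x1 ))     && echo "under-voltage right now"
  (( mask & 0x4 ))     && echo "throttled right now"
  (( mask & 0x10000 )) && echo "under-voltage has occurred since boot"
  (( mask & 0x40000 )) && echo "throttling has occurred since boot"
  return 0
}

# On a Pi: decode_throttled "$(vcgencmd get_throttled | cut -d= -f2)"
decode_throttled 0x50000
```

Pair it with `vcgencmd measure_temp` while the cluster is under load to see how warm the SoC actually gets.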
For networking, the MikroTik Routerboard was something I already had on my network. Almost any 5-port switch would have worked just fine, but I liked the idea of having a managed switch for monitoring purposes and the flexibility to configure VLANs and routing for the cluster if I wanted to. With some Google-colored Ethernet patch cables added, it all fit nicely under the Pis for a fairly nice-looking setup.
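As an example of the kind of flexibility I mean, carving the cluster ports out into their own VLAN on the hEX looks roughly like this. Treat it as a sketch only: the port names, VLAN ID, and addressing are made up, and the syntax assumes RouterOS 6.41+ bridge VLAN filtering.

```
# Hypothetical: Pis on ether2-ether5, cluster VLAN 20, switch as gateway
/interface bridge port set [find interface~"ether[2-5]"] pvid=20
/interface bridge vlan add bridge=bridge tagged=bridge vlan-ids=20
/interface vlan add interface=bridge name=vlan-cluster vlan-id=20
/ip address add address=10.0.20.1/24 interface=vlan-cluster
/interface bridge set bridge vlan-filtering=yes
```

Enabling `vlan-filtering` last matters; turning it on with a misconfigured bridge can lock you out of a remote switch.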
PREVIOUS: Kubernetes at Home Part 1: Introduction