Preface

My first step into Kubernetes was attending a Kubernetes 101 workshop at the Southern California Linux Expo (SCaLE 18x). As a network engineer, I admit that my goal at the time was only to gain a high-level understanding of the concepts so I wouldn't be completely ignorant if I ever had to help support or troubleshoot a Kubernetes network in the future. Fast-forward a few months, and I'm now planning to build a Kubernetes cluster on Raspberry Pis as a platform for hosting most of my home projects. In this ongoing build-log series, I'll be documenting the iterative steps I take to build my home cluster while diving deep into Kubernetes along the way.

There are a few reasons this is a build-log and not a build guide. If you search for "Raspberry Pi Kubernetes Cluster" online, there are already hundreds of step-by-step tutorials for building a Kubernetes cluster on Raspberry Pis. Unfortunately, I found that most of these tend to be more prescriptive than descriptive: they focus on how to get the cluster working rather than on how the cluster works. The other problem is that Kubernetes is open source and modular, and there's a fair amount of variation in how it can be deployed. That's why I'm instead making this a build-log, where I'll document the build over time, discuss my hardware and design choices and the challenges I encounter, cover monitoring and managing the cluster, and dig deep into the networking that's happening under the hood.

Why Kubernetes?

Kubernetes takes many of the infrastructure components commonly used to support highly available modern applications and packages them together behind a common interface. What do I mean by that? Well, for redundancy and performance, production web applications have for years generally been hosted on a pool of servers. Since there are multiple servers in a pool, a dedicated load-balancer appliance is needed to forward client requests to the different servers in the pool. There's also often a need for a reverse-proxy appliance to handle SSL encryption and URL rewrites. Each server also needs to be able to communicate with other resources, like a database, which must also be highly available. If changes need to be made, they often have to be repeated on each server and can't always be easily reversed. If additional servers need to be added to scale performance or capacity, they also have to be added to the pool on the load balancer. If a server goes down or becomes problematic, it's not always easy to replace quickly. Some of these management tasks could be automated, but it's still a lot to manage. Kubernetes is a container orchestration platform that bundles many of these components together and makes it simpler to manage applications as deployments (not servers) and to create load balancers and reverse proxies, while still being modular enough to support bare-metal and cloud-provider integrations.

By using it to host applications at home, I get all the benefits of the built-in redundancy, load balancing, scalability, and self-healing features without needing to build and manage separate servers and appliances or attempt to build that automation myself. I can deploy all of this on a few Raspberry Pis and manage everything by defining objects instead of individually managing containers or VMs. All the while, I'm learning an open-source platform that's now very widely used not only in the cloud, but also in data centers and even in embedded systems to support microservices architectures. If you're interested, you can check out all the features at Kubernetes.io.

Core Concepts

Before we get started with the actual build, there are some basic Kubernetes terms and concepts that I should explain. I'll progressively introduce more concepts throughout this series, but you can also read more in the official documentation.

Cluster Architecture

The first thing we need to understand is that Kubernetes is deployed as a cluster of one or more computers we call nodes. In my case, I'm using Raspberry Pis as nodes, but in cloud environments they're typically VMs. There are two types of nodes: worker nodes and master nodes.

Worker nodes are the nodes that actually host our containers. They run a process called kube-proxy to control networking, and a process called kubelet, which controls the node and its containers through various APIs.

Master nodes run the control plane components and are responsible for managing the worker nodes and tracking their state. I should mention that while master nodes can also act as worker nodes by hosting containers alongside the control plane components, this isn't the default.

This is all a bit easier to understand visually.

insert diagram

Above we see that the nodes are the computers within the cluster, and the instances of our containerized applications run on each worker node. We also see that the containers running on each of the nodes exist within something called Pods. That leads us into the next topic.

API Objects

Kubernetes is a container orchestration platform which operates on the desired state principle. When we interact with a Kubernetes cluster, we do so through the API server (kube-apiserver) component of the Kubernetes control plane. We define API objects representing our desired state, and the Kubernetes control plane continuously works to make the current state match the desired state. I'll be introducing more API objects as we go, but there are a few basic ones we should define before we move on:

Pods are objects which are intended to represent a single instance of an application. They are the smallest unit of execution that can be created or deployed in Kubernetes. You can think of a Pod as defining an environment for running a container, which also includes the container itself. Although Pods can contain more than one container, it's not common. Pods are also considered ephemeral instances of an application; they can be created, replicated, and deleted as necessary. For example, if a Pod crashes, Kubernetes can automatically stop the crashed Pod and start a replica Pod to replace it. You can even instruct Kubernetes to deploy a new version of your app this way, by incrementally replacing old Pods until they have all been replaced with new Pods.
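To make that a bit more concrete, here's a minimal Pod manifest. This is just an illustrative sketch; the names and the nginx image are placeholders I made up for the example, not part of my actual build.

apiVersion: v1
kind: Pod
metadata:
  name: hello-pod          # name of the Pod object
  labels:
    app: hello             # label other objects can use to select this Pod
spec:
  containers:
  - name: web              # a single container in the Pod
    image: nginx:1.19      # container image (placeholder)
    ports:
    - containerPort: 80    # port the container listens on

Applying a manifest like this with kubectl apply -f tells the control plane about the desired state, and the kubelet on one of the worker nodes pulls the image and starts the container.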

Services are objects that define how Pods should communicate with each other or the outside world. In other words, they allow us to define networking to and from Pods. For example, if our web front-end Pods needed to talk to our back-end Pods, we'd define a Service object that dynamically "selects" all of our back-end Pods and creates a stable virtual IP address our front-end Pods can use to reach any one of them. I'll be delving into Services in more depth later in this series.
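A Service for those back-end Pods might look something like the sketch below; the name, label, and port numbers are assumptions made up for the example.

apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  selector:
    app: backend       # selects any Pod labeled app: backend
  ports:
  - port: 5000         # port exposed on the Service's virtual IP
    targetPort: 5000   # port the containers in the selected Pods listen on

Because no type is specified, this creates the default ClusterIP Service: a virtual IP inside the cluster (reachable by the DNS name "backend" from the same namespace, assuming cluster DNS is running), with kube-proxy forwarding traffic sent to it to one of the selected back-end Pods.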

Volumes are objects associated with Pods that define storage mounted within the containers in a Pod. Normally, the files created within a container are only available to that container and only exist while the container is running. If the container is restarted, the files and any changes to them are lost. Volumes defined within a Pod provide a way to mount files into a container in a more persistent way. There are several different types of volumes that Pods can use, with varying levels of persistence. For example, an emptyDir volume exists for the lifetime of the Pod and will survive a container restart, but not a Pod restart. On the other hand, while a PersistentVolumeClaim type volume also lives for the lifetime of the Pod, the actual data exists within a PersistentVolume resource outside of the Pod. I'll be talking about this in more depth later as well.
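As a quick sketch, here's a Pod that defines an emptyDir volume and mounts it into its container. Again, the names, image, and mount path are placeholders for illustration.

apiVersion: v1
kind: Pod
metadata:
  name: cache-pod
spec:
  containers:
  - name: app
    image: nginx:1.19
    volumeMounts:
    - name: cache                # refers to the volume defined below
      mountPath: /var/cache/app  # where it appears inside the container
  volumes:
  - name: cache
    emptyDir: {}                 # empty directory that lives as long as the Pod

If the container crashes and restarts, whatever was written to /var/cache/app is still there; if the Pod itself is deleted, the data is gone. Referencing a persistentVolumeClaim instead of emptyDir is what lets the data outlive the Pod.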

Deployments are objects that define a deployment of Pods. They can be used for new Pod roll-outs, incrementally replacing existing Pods, rolling back to an earlier deployment, scaling deployments up, and pausing deployments. With a Deployment you specify how many replicas of a Pod to run (tracked by a ReplicaSet that the Deployment manages), and Kubernetes will create or remove Pods to scale the deployment accordingly.
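Tying the earlier pieces together, here's a rough sketch of a Deployment that asks for three replicas of the placeholder Pod from the earlier example; as before, the names and image are just assumptions for illustration.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-deployment
spec:
  replicas: 3              # desired number of Pod replicas
  selector:
    matchLabels:
      app: hello           # manage Pods carrying this label
  template:                # Pod template used to stamp out the replicas
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: web
        image: nginx:1.19

Changing replicas and re-applying the manifest is all it takes to scale up or down, and changing the container image triggers a rolling replacement of the old Pods with new ones.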


NEXT: Kubernetes at Home Part 2: Choosing the Hardware

- Brian Brookman

