Saturday, 19 October 2024

Underpinning Kubernetes

Kubernetes is the de facto choice for deploying containerized applications at scale. Because of that, we are all now familiar with the building blocks it gives us for assembling our applications, such as deployments, ingresses, services and pods.
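As a concrete example of one of those building blocks, the sketch below shows a minimal Deployment manifest; the name, image and port are hypothetical placeholders, not taken from any real application.

```yaml
# Illustrative Deployment: runs three replicas of a hypothetical web app.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app                        # hypothetical name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
        - name: web-app
          image: example.com/web-app:1.0   # placeholder image
          ports:
            - containerPort: 8080
```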

But what is it that underpins these entities, and how does Kubernetes manage this infrastructure? The answer lies in the Kubernetes control plane and the nodes it deploys our applications to.

The control plane manages the cluster and makes decisions about it; in this sense it acts as the cluster's brain. It also provides an interface that allows us to interact with the cluster for monitoring and management.

The nodes are the workhorses of the cluster, where infrastructure and applications are deployed and run.

Both the control plane and the nodes comprise a number of elements, each with its own role in providing us with an environment in which to run our applications.

Control Plane Components

The control plane is made up of a number of components responsible for the management of the cluster. In general these components run on dedicated infrastructure, away from the pods running applications.

The kube-apiserver provides the control plane with a front end via a suite of REST APIs. These APIs are resource based allowing for interactions with the various elements of Kubernetes such as deployments, services and pods.

In order to manage the cluster, the control plane needs somewhere to store data about its state. This is provided by etcd in the form of a highly available key-value store.

The kube-scheduler is responsible for detecting when new pods are required and allocating a node for them to run on. Many factors are taken into account when allocating a node including resource and affinity requirements, software or hardware restrictions and data locality.
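To make those scheduling inputs concrete, here is a sketch of a Pod spec that states some of them; the labels, image and resource figures are illustrative assumptions.

```yaml
# Illustrative Pod spec: fields the kube-scheduler weighs when picking a node.
apiVersion: v1
kind: Pod
metadata:
  name: scheduled-pod              # hypothetical name
  labels:
    app: scheduled-app
spec:
  nodeSelector:
    disktype: ssd                  # hardware restriction: only nodes labelled disktype=ssd
  containers:
    - name: app
      image: example.com/app:1.0   # placeholder image
      resources:
        requests:
          cpu: "500m"              # resource requirements the chosen node must satisfy
          memory: "256Mi"
  affinity:
    podAntiAffinity:               # affinity requirement: spread replicas across nodes
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              app: scheduled-app
          topologyKey: kubernetes.io/hostname
```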

The control plane contains a number of controllers, each responsible for a different aspect of managing the cluster, and all of them managed by the kube-controller-manager. In general each controller monitors and manages one or more resources within the cluster; as an example, the Node Controller monitors for and responds to nodes failing.

By far the most common way of standing up a Kubernetes cluster is via a cloud provider. The cloud-controller-manager provides a bridge between the internal concepts of Kubernetes and the cloud-provider-specific APIs that help to implement them. An example of this is the Route Controller, responsible for configuring routes in the underlying cloud infrastructure.
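A Service of type LoadBalancer is a good illustration of this bridge: when one is created, the cloud-controller-manager asks the cloud provider's API to provision an external load balancer for it. A minimal sketch, with hypothetical names:

```yaml
# Illustrative LoadBalancer Service: the cloud-controller-manager turns this
# Kubernetes concept into a real load balancer via the provider's API.
apiVersion: v1
kind: Service
metadata:
  name: web-lb          # hypothetical name
spec:
  type: LoadBalancer
  selector:
    app: web-app        # assumed pod label
  ports:
    - port: 80
      targetPort: 8080
```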

Node Components

The node components run on every node in the cluster that runs pods, providing the runtime environment for applications.

The kubelet is responsible for receiving PodSpecs and ensuring that the pods and containers they describe are running and healthy on the node.
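A minimal Pod manifest shows the kind of spec the kubelet works from, including a probe it uses to judge health; the image and the /healthz endpoint are assumptions for the sake of the example.

```yaml
# Illustrative Pod: the kubelet keeps this container running and, via the
# liveness probe, restarts it if it stops responding as healthy.
apiVersion: v1
kind: Pod
metadata:
  name: probed-pod               # hypothetical name
spec:
  containers:
    - name: app
      image: example.com/app:1.0 # placeholder image
      livenessProbe:
        httpGet:
          path: /healthz         # assumed health endpoint
          port: 8080
        initialDelaySeconds: 5
        periodSeconds: 10
```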

An important element in being able to run containers is the Container Runtime. This runtime provides the mechanism for the node to act as a host for containers, which includes pulling images from a container registry as well as managing their lifecycle. Kubernetes supports a number of different runtimes; which one to use is a choice you make when constructing your cluster.

An optional component is kube-proxy, which maintains network rules on the node and plays an important role in implementing the services concept within the cluster.
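As a sketch, a plain ClusterIP Service like the one below is what kube-proxy helps realise: it programs network rules on each node so that traffic to the Service's virtual IP is routed to one of the matching pods. The names and ports are illustrative.

```yaml
# Illustrative ClusterIP Service: kube-proxy's network rules route traffic
# for this Service's virtual IP to the pods matching the selector.
apiVersion: v1
kind: Service
metadata:
  name: web-internal    # hypothetical name
spec:
  type: ClusterIP
  selector:
    app: web-app        # assumed pod label
  ports:
    - port: 80
      targetPort: 8080
```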

Add-Ons

In order to allow the functionality of a cluster to be extended, Kubernetes provides the ability to define add-ons.

Add-ons cover many different pieces of functionality.

Some relate to networking, providing internal DNS for the cluster to allow for service discovery, or providing load balancers to distribute traffic among the cluster's nodes. Others relate to provisioning storage for use by the applications running within the cluster. Another important aspect is security, with some add-ons allowing security policies to be applied to the cluster's resources and applications.
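As one example from the storage category, a storage add-on typically registers a StorageClass that applications can then request volumes from; in the sketch below the provisioner name is a hypothetical placeholder for whatever the add-on supplies.

```yaml
# Illustrative StorageClass: registered by a hypothetical storage add-on so
# that PersistentVolumeClaims can request SSD-backed volumes from it.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ssd                   # hypothetical name
provisioner: example.com/ssd       # placeholder provisioner from the add-on
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
```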

Any add-ons you choose to use are installed within the cluster; the examples above are by no means an exhaustive list.

As an application developer deploying code into a cluster, you don't necessarily need a working knowledge of how this infrastructure is put together. But I'm a believer that having an understanding of the environment where your code will run will help you write better code.

That isn't to say that you need to become an expert, but a working knowledge of the building blocks and the roles they play will help you become a more well-rounded engineer and enable you to make better decisions.
