As containerization became an increasingly popular deployment choice, it was natural that tools would be developed to manage systems comprising large numbers of containers, each focusing on a different aspect of functionality.
Kubernetes is one such tool, providing an orchestration layer for containers that handles everything from lifecycles and scheduling to networking.
It took me quite some time to get to grips with the concepts behind Kubernetes, largely because the definitions and explanations online vary greatly. Presented below are the definitions that finally enabled me to understand what Kubernetes is trying to do and how it goes about achieving it.
I am not a Kubernetes expert, so by no means am I presenting these explanations as definitive. All I hope is that they help someone else start their journey towards understanding the purpose and operation of Kubernetes.
Nodes
Nodes are the machines, typically virtual machines, that make up a Kubernetes cluster and provide the capacity to run and manage applications.
One node is designated the master and implements the control plane functionality to maintain and manage the cluster. Other nodes orchestrated by the master run the applications that the cluster is responsible for. Which nodes run which applications will vary depending on the nature of the applications alongside constraints such as CPU and memory usage.
Pods
A pod is the smallest deployable unit in Kubernetes. It can run one or more containers, and because Kubernetes treats the pod as a single unit, when the pod is started or stopped, so are all the containers within it.
Whilst in theory a pod could comprise multiple container types, it is a common pattern for there to be a one-to-one relationship between a pod and a container, for example a container providing an API or access to an underlying datastore.
Sometimes additional containers may be added to a pod to provide cross-cutting concerns for the main container. This will quite often follow the sidecar pattern, with the extra container providing functionality such as acting as a sink for logging or as a proxy to a network interface.
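As a sketch of the sidecar pattern, the manifest below defines a pod with a main API container and a hypothetical log-forwarding sidecar (the names and images are illustrative, not real images):

```yaml
# A pod running two containers that start and stop as a single unit.
apiVersion: v1
kind: Pod
metadata:
  name: api-pod
spec:
  containers:
    - name: api                 # the main application container
      image: example/api:1.0    # hypothetical image
      ports:
        - containerPort: 8080
    - name: log-forwarder       # sidecar acting as a sink for the app's logs
      image: example/log-forwarder:1.0   # hypothetical image
```

Both containers share the pod's network namespace, which is what makes sidecars such as logging agents and network proxies convenient to bolt on.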
Deployments
We said earlier that one of the functions of Kubernetes is to manage lifecycle and scheduling concerns; a deployment is how we tell Kubernetes how these things should be dealt with.
A deployment might define:
- A pod and an associated container image.
- That a certain number of instances of the pod should be running at all times.
- CPU and memory requirements for each pod. This may also involve setting limits on the amount of resource pods are allowed to consume.
- A strategy for how an update to pods should be managed.
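The four points above map directly onto fields of a deployment manifest. This is a minimal sketch assuming a hypothetical `example/api` image; the replica count, resource figures, and update strategy are illustrative:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-deployment
spec:
  replicas: 3                    # keep three instances of the pod running
  selector:
    matchLabels:
      app: api
  strategy:
    type: RollingUpdate          # how an update to the pods is managed
    rollingUpdate:
      maxUnavailable: 1          # replace pods one at a time
  template:                      # the pod and its associated container image
    metadata:
      labels:
        app: api
    spec:
      containers:
        - name: api
          image: example/api:1.0   # hypothetical image
          resources:
            requests:            # resources the scheduler reserves per pod
              cpu: 250m
              memory: 256Mi
            limits:              # cap on what the pod may consume
              cpu: 500m
              memory: 512Mi
```

The `template` section is just a pod specification; the deployment's job is to keep the described number of such pods running.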
Kubernetes will attempt to ensure that the deployment always matches the state described. If your application crashes, the unresponsive pod will be swapped out for a fresh one; if the amount of resource a pod consumes increases, an existing pod may be moved to a node with more available resource.
When you update your deployment to reference a new version of your container image, Kubernetes will also manage the transition from the existing pods to new pods running the updated container.
Services
Now with our application running in containers within pods we need a way for other applications in the cluster to be able to take advantage of it.
We wouldn't want pods to have to communicate with other pods directly: not only would this cause problems from a networking point of view, since pods can come and go, but we also need a mechanism to ensure load is distributed across all the pods running the application.
Services within Kubernetes act a bit like a load balancer: they sit above a group of pods, providing a consistent route to the underlying functionality. When a pod requires functionality implemented by another pod, it sends a network request to a DNS entry, maintained by Kubernetes, that represents the service endpoint.
Pods can now be freely added to and removed from the service, and pods no longer need to be aware of each other in order to make use of each other's functionality.
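A service selects its group of pods by label, which is what allows pods to come and go freely underneath it. A minimal sketch, assuming pods labelled `app: api` as in the earlier examples:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: api-service
spec:
  selector:
    app: api              # route to any pod carrying this label
  ports:
    - port: 80            # the port other pods address the service on
      targetPort: 8080    # the container port traffic is forwarded to
```

Other pods in the same namespace can now reach the application at the DNS name `api-service`, regardless of which individual pods are currently running.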
Ingresses
Services provide an internal route to functionality provided by pods but it's likely that we will want to make some of this functionality available outside the cluster.
An ingress exposes an HTTP endpoint outside of the cluster that points at an internal service. In this sense an ingress acts like a reverse proxy onto the internal load balancer provided by the service, allowing applications outside the cluster to invoke the underlying functionality.
An ingress can also provide other functionality, such as path-based routing or SSL termination, presenting a consistent and secure interface to the world outside the cluster.
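Tying this together, the sketch below exposes the earlier `api-service` outside the cluster. The hostname and TLS secret name are hypothetical, and an ingress controller must be installed in the cluster for the resource to take effect:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-ingress
spec:
  tls:
    - hosts:
        - api.example.com
      secretName: api-tls       # certificate used for SSL termination
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /             # path-based routing: / goes to api-service
            pathType: Prefix
            backend:
              service:
                name: api-service
                port:
                  number: 80
```

Additional `paths` entries could route, say, `/admin` to a different internal service behind the same external hostname.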
This has been a whirlwind tour of the basic concepts within Kubernetes; it is by no means exhaustive. I hope it enables you to understand the purpose of Kubernetes and aids your learning of the intricacies of an actual cluster. The devil is always in the detail, but a grasp of the fundamental concepts provides a solid foundation on which to build the rest of your understanding.