Sunday 25 April 2021

Docker Basics

 


In my previous post I covered the explanation of Kubernetes terminology that finally helped me gain an understanding of its purpose and operation. In this post I decided to delve one layer deeper to cover the basics of Docker.

As with my previous post I must add a disclaimer that I am not a Docker expert, so the descriptions below aren't meant to be authoritative; they are simply the explanations that made sense to me and aided my initial understanding of the topic.

Containers vs Virtual Machines

Prior to the advent of containers, of which Docker is one implementation, the most common deployment mechanism in cloud computing was the virtual machine.

Virtual machines provide full hardware virtualisation by means of a hypervisor: they include a full operating system install along with abstractions of the hardware interfaces. Software is installed on the machine as if it were a normal physical server; however, since the hypervisor can support multiple virtual machines on a single physical server, they enable the available resources to be maximised.

Containers do not try to provide an abstraction of a whole machine; instead they provide access to the host operating system kernel whilst keeping individual containers isolated from one another. Not only is this a more efficient use of resources, it also allows software to be packaged, with all of its required dependencies, in an immutable format via container images.

Daemon and Clients

Docker follows a client-server architecture. The Docker daemon provides an API for interacting with Docker functionality and also manages the containers running on the host machine.

The Docker client provides users with a mechanism for interacting with the Docker daemon. It allows users to build, run, start and stop containers, and offers many other commands for building and managing Docker images and containers.
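As a minimal sketch, assuming an application with a Dockerfile in the current directory (the image and container names here are purely illustrative), the day-to-day client commands look something like this:

    # Build an image from the Dockerfile in the current directory and tag it
    docker build -t myapp:1.0 .

    # Run a container from that image in the background, mapping host port 8080 to port 80 in the container
    docker run -d --name myapp -p 8080:80 myapp:1.0

    # Stop the running container, then start it again later
    docker stop myapp
    docker start myapp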

A second client called Docker Compose allows users, via a YAML file, to define the make-up of a whole system comprising multiple containers. It defines which containers should run along with configuration information related to concerns such as networking or attachment to storage.
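As a rough sketch, assuming a simple two-container system of a web application and a database (the service names, image names, ports and volume are illustrative), a docker-compose.yml might look something like this:

    version: "3.8"
    services:
      web:
        image: registry.example.com/myapp:1.0   # application image pulled from a registry
        ports:
          - "8080:80"                            # expose the application on host port 8080
        depends_on:
          - db
      db:
        image: postgres:13                       # official database image
        environment:
          POSTGRES_PASSWORD: example
        volumes:
          - db-data:/var/lib/postgresql/data     # attach persistent storage
    volumes:
      db-data: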

Images, Containers and Registries

A Docker image defines an immutable template for how to build a container. A powerful aspect of Docker is that it allows images to be based on other images, creating a layered approach to their construction. For example, you may start with an image for the operating system you want to work with, then add a layer for the web server you want to use, followed by your application. These steps are defined in a Dockerfile that provides the instructions for how each layer should be built up to form the container image.
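As a minimal sketch, assuming a static site served by a web server (the base image, directory and port are illustrative), such a Dockerfile might look like this:

    # Start from an existing base image that already provides the operating system and web server layers
    FROM nginx:1.21

    # Add a new layer containing our application's files
    COPY ./site /usr/share/nginx/html

    # Document the port the web server listens on
    EXPOSE 80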

A container is a running instance of an image. When running a container you define the image you want it to be based on plus any configuration information it might need. The important aspect is that the container includes everything necessary for the application to run. As opposed to deployment to a virtual machine, which might rely on certain dependencies already being present, a container is self-contained and therefore highly portable.

A Docker registry is a means of storing and sharing images; it acts like a library of container images that can be updated as new versions are published. When using Docker Compose to define the make-up of a system you will often specify the version of a container to run by pointing at a particular image tag within a registry.
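As a hedged example (the registry hostname and tags below are illustrative), pulling an image from a registry and publishing your own looks something like this:

    # Pull a specific image version from a public registry
    docker pull nginx:1.21

    # Tag a locally built image so that it points at a private registry
    docker tag myapp:1.0 registry.example.com/myapp:1.0

    # Push the tagged image so other machines can pull it
    docker push registry.example.com/myapp:1.0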

Clearly a technology as complex as Docker has many intricacies and complexities that I haven't covered in this post. However, more advanced topics are always easier to approach once you have a sound understanding of the basics. Never try to tackle the task of understanding everything about an area of technology; instead see it as a journey and accept that it may take some time for the knowledge to take hold. The explanations I've provided in this post helped me on that journey; hopefully they can help you too.

Sunday 18 April 2021

Kubernetes Basics

 


As containerisation became a more and more popular deployment choice, it was natural that tools would need to be developed to manage systems comprising large numbers of containers, each focusing on a different aspect of functionality.

Kubernetes is one such tool, providing an orchestration layer for containers that handles everything from lifecycles and scheduling to networking.

It took me quite some time to get to grips with the concepts behind Kubernetes; I think this was largely because the definitions and explanations online vary greatly. Presented below are the definitions that finally enabled me to understand what Kubernetes is trying to do and how it goes about achieving it.

I am not a Kubernetes expert, so by no means am I presenting these explanations as definitive; all I hope is that they help someone else start their journey towards understanding the purpose and operation of Kubernetes.

Nodes

Nodes are the virtual machines that make up a Kubernetes cluster, providing the capacity on which applications are run and managed.

One node is designated the master and implements the control plane functionality that maintains and manages the cluster. The other nodes, orchestrated by the master, run the applications that the cluster is responsible for. Which nodes run which applications will vary depending on the nature of the applications alongside constraints such as CPU and memory usage.

Pods

A pod is the smallest deployable unit in Kubernetes. It can run one or more containers, and since Kubernetes treats the pod as a single unit, when it is started or stopped so are all the containers within it.

Whilst in theory a pod could comprise multiple container types, it is a common pattern for there to be a one-to-one relationship between a pod and a container, for example to provide an API or access to an underlying datastore.

Sometimes additional container types may be added to a pod to provide cross-cutting concerns alongside the main container. These will quite often follow the sidecar pattern and relate to functionality such as acting as a sink for logging or providing a proxy to a network interface.
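As an illustrative sketch (the pod name, image names and container names here are assumptions rather than anything prescriptive), a pod manifest with a main container and a logging sidecar might look something like this:

    apiVersion: v1
    kind: Pod
    metadata:
      name: myapp
    spec:
      containers:
        - name: myapp                  # the main application container
          image: registry.example.com/myapp:1.0
          ports:
            - containerPort: 80
        - name: log-forwarder          # sidecar container handling a cross-cutting concern
          image: fluent/fluentd:v1.14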

Deployments

We said earlier that one of the functions of Kubernetes is to manage lifecycle and scheduling concerns; a deployment is how we indicate to Kubernetes how these things should be dealt with.

A deployment might define, as sketched in the example after this list:

  • A pod and an associated container image.
  • That a certain number of instances of the pod should be running at all times.
  • CPU and memory requirements for each pod; this may also involve setting limits for the amount of resource pods should be allowed to consume.
  • A strategy for how an update to pods should be managed.
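
Putting those points together, a minimal Deployment manifest might look something like the sketch below (the names, image and resource figures are purely illustrative):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: myapp
    spec:
      replicas: 3                      # keep three instances of the pod running at all times
      selector:
        matchLabels:
          app: myapp
      strategy:
        type: RollingUpdate            # swap pods over gradually when the image is updated
      template:
        metadata:
          labels:
            app: myapp
        spec:
          containers:
            - name: myapp
              image: registry.example.com/myapp:1.0
              resources:
                requests:              # resources a pod needs in order to be scheduled
                  cpu: 250m
                  memory: 128Mi
                limits:                # the most a pod is allowed to consume
                  cpu: 500m
                  memory: 256Mi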

Kubernetes will attempt to ensure that the deployment always matches the state described. If your application crashes then the unresponsive pod will be swapped out for a fresh one; if the amount of resource a pod is consuming increases then an existing pod may be moved to a node with more available resource.

When you update your deployment to reference a new version of your container then Kubernetes will also manage the transition from the existing pods to new pods that are running your updated container.

Services

Now, with our application running in containers within pods, we need a way for other applications in the cluster to be able to take advantage of it.

We wouldn't want pods to have to communicate directly with other pods: not only would this cause problems from a networking point of view, since pods can come and go, but we also need a mechanism to ensure load is distributed across all the pods running the application.

Services within Kubernetes act a bit like a load balancer: they sit above a group of pods, providing a consistent route to the underlying functionality. When a pod requires functionality implemented by another pod it sends a network request to a DNS entry, defined by Kubernetes, that represents the service endpoint.

Pods can now be freely added to and removed from the service, and pods don't need to be aware of one another in order to make use of each other's functionality.
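A minimal Service manifest sitting above the pods from the earlier deployment sketch might look something like this (the name, label and ports are illustrative):

    apiVersion: v1
    kind: Service
    metadata:
      name: myapp
    spec:
      selector:
        app: myapp           # route traffic to any pod carrying this label
      ports:
        - port: 80           # port the service exposes within the cluster
          targetPort: 80     # port the containers are listening on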

Ingresses

Services provide an internal route to functionality provided by pods, but it's likely that we will want to make some of this functionality available outside the cluster.

An ingress exposes an HTTP endpoint outside of the cluster that points at an internal service. In this sense an ingress acts like a reverse proxy onto the internal load balancer provided by the service, allowing applications outside the cluster to invoke the underlying functionality.

An ingress can also provide other functionality, such as path-based routing or SSL termination, to present a consistent and secure interface to the world outside the cluster.
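As a final sketch, an Ingress manifest routing an external hostname and path onto the service above might look something like this (the hostname, path and secret name are assumptions for illustration):

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: myapp
    spec:
      rules:
        - host: myapp.example.com
          http:
            paths:
              - path: /api             # path-based routing onto the internal service
                pathType: Prefix
                backend:
                  service:
                    name: myapp
                    port:
                      number: 80
      tls:
        - hosts:
            - myapp.example.com        # terminate SSL at the ingress
          secretName: myapp-tls        # certificate held in a Kubernetes secret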

This has been a whirlwind tour of the basic concepts within Kubernetes; it is by no means exhaustive. I hope it enables you to understand the purpose of Kubernetes and aids your learning of the intricacies of an actual Kubernetes cluster. The devil is always in the detail, but an understanding of the fundamental concepts provides a solid base on which to build the rest of your knowledge.

Thursday 1 April 2021

Creating Chaos

 


In software development chaos engineering is the process of running experiments against a system in order to build confidence in its ability to withstand unexpected conditions or changes in environment.

First developed in 2011 by Netflix as part of their adoption of cloud infrastructure, its underlying principles have been applied to many situations, but typically experiments include things such as:

  • Deliberately causing infrastructure failures, such as bringing down application servers or databases.
  • Introducing less favourable network conditions, for example increased latency, packet loss or errors in essential services such as DNS.

In an attempt to automate these experiments Netflix developed a tool called Chaos Monkey to deliberately tear down servers within its production environment. The guarantee that they would see these kinds of failures helped foster an engineering culture of resilience and redundancy.

We may not all be brave enough to run these experiments within our production environment, but if we choose to experiment in the safety of a test environment then what principles should we be following?

Steady State Hypothesis

A secondary advantage to chaos engineering is the promotion of metrics within the system. If you are to run automated experiments against your system then you must be able to measure their impact to determine how the system coped. If the system behaviour was not observed to be ideal and changes are made then metrics also act as validation that the situation has improved.

Before running an experiment you should define a hypothesis around what you consider the steady state of your system to be. This might involve error rates, throughput of requests or overall latency. As your experiment runs these metrics will indicate whether your system is able to maintain this steady state despite the deterioration in the environment.

Vary Real World Events

It's important that the mechanisms you use to degrade the environment are representative of the real world events your system might have to cope with. We are not looking to simulate an event such as a server failing; we are actually going to destroy it.

How you choose to approach the make-up of the failures being introduced is likely to depend on the impact such an event could potentially have and/or the frequency at which you think such an event might occur.

The important consideration is that there should be some random element to the events. The reason for employing chaos engineering is to acknowledge the fact that for any reasonably complicated system it is virtually impossible to accurately predict how it will react. Things that you may have thought cannot happen may turn out to be possible.

Automate Continual Experiments

As you learn to implement the principles of chaos engineering you may rely on manual experimentation as part of a test-and-learn approach. However, this can be an intensive process; the ultimate goal should be to develop the ability to run continual experiments by introducing a level of automation.

Many automated tools, including Chaos Monkey, now exist to aid this type of automation. Once you have an appreciation of the types of experiments you want to run, and are confident your system produces the metrics necessary to judge the outcome, these tools should be used to run experiments regularly and frequently.

The principles of chaos engineering are finding new applications in many different aspects of software development, including topics such as system security: for example, deliberately introducing infrastructure that doesn't conform to security best practices in order to measure the system's response and its ability to enforce policy.

Not every system will lend itself to a chaos engineering approach; for example, an on-premises system, where servers are not as easily destroyed as they are in the cloud, may limit the options for running experiments. There also needs to be consideration of the size of the potential blast radius of any experiment, and a plan for returning to previous environmental conditions should the system fail to recover.

Your system's reaction to a large number of the experiments you run will likely surprise you, in both good and bad ways. As previously stated, for a system of any reasonable complexity it is unrealistic to expect to have an accurate view of how it works under all possible conditions; the experiments you run are a learning exercise to fill in these gaps in your knowledge and to ensure you are doing all you can to make sure your system performs the role your users want it to.