We take for granted that we can bring up a virtual machine within a few clicks, use it for a wide variety of workloads and then spin it back down again. Servers have gone from machines that needed to be carefully maintained and looked after to ephemeral resources we can create and throw away.
The technology underpinning this ability is the hypervisor: a piece of software that abstracts physical resources such as CPU and memory to allow the creation of virtual machines. This allows a powerful host server to provide a large number of isolated guest machines, increasing efficiency and productivity.
History of Virtualisation
Virtualisation had previously existed only to the extent that multiple software applications could run concurrently on the same hardware. In the late 1960s IBM developed a research tool called SIMMON that took this a step further, allowing hardware resources on mainframe computers to be virtualised as well. This was soon extended to cover operating system resources such as kernel tasks, and the idea of virtual machines built on top of real hardware was born.
For many years virtualisation remained the preserve of mainframe systems, until the late 1990s, when the first hypervisors for x86 systems appeared; the approach gained real momentum in the mid-2000s once hardware support arrived in the form of Intel VT-x and AMD-V.
The early hypervisors were complex and relatively slow, but as the technology advanced, the level of virtualisation needed to support the cloud services we now take for granted started to emerge.
Types of Hypervisor
Hypervisors can be broadly categorised into two types: type one and type two.
Type one hypervisors run directly on the host machine's hardware, eliminating the need for an underlying operating system. For this reason they are often referred to as native or bare-metal hypervisors.
Type one hypervisors are very efficient and often more secure. They are typically used within data centres, on powerful servers hosting a large number of virtual machines.
Type two hypervisors run on the host machine's operating system in the same way as any other application; they provide an abstraction over the host operating system to create an isolated process the host can interact with. For this reason they are often referred to as hosted hypervisors.
Type two hypervisors are less efficient than type one because the host operating system prevents them from having direct control of the underlying hardware. However, they are a more practical option when virtualisation is needed on a machine that isn't a dedicated server.
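Both types lean on hardware virtualisation extensions: Intel VT-x (reported by the CPU as the `vmx` flag) and AMD-V (reported as `svm`). As a small illustration, the check below is a Python sketch of how you might detect these flags on Linux by reading `/proc/cpuinfo`; the function names are my own, and the `/proc/cpuinfo` path is Linux-specific.

```python
def has_hw_virt(cpu_flags: str) -> bool:
    """Return True if a CPU flags line advertises Intel VT-x ('vmx')
    or AMD-V ('svm'), the extensions hypervisors rely on."""
    flags = cpu_flags.split()
    return "vmx" in flags or "svm" in flags

def host_supports_virtualisation(path: str = "/proc/cpuinfo") -> bool:
    """Linux-specific: scan /proc/cpuinfo for virtualisation flags."""
    try:
        with open(path) as f:
            return any(line.startswith("flags") and has_hw_virt(line)
                       for line in f)
    except OSError:
        return False  # not Linux, or the file is unavailable

if __name__ == "__main__":
    print("hardware virtualisation available:",
          host_supports_virtualisation())
```

Note that even with these flags present, the extensions can still be disabled in firmware, so a negative result from a real hypervisor's own checks is authoritative where this sketch is not.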
Benefits of Virtualisation
Aside from the ability to turn one server into multiple virtual machines, what other benefits does virtualisation bring?
As we've already touched on, virtualisation can greatly increase efficiency. Rather than hosting each application on its own physical server, potentially leaving much of the available resource unused, every application can run in its own virtual machine on the same server.
The ability for virtual machines to be created quickly and automatically also provides a degree of scalability that doesn't exist when new physical servers have to be added to a farm. The fact that the underlying system is virtualised also provides portability: the application can be hosted on any physical server capable of running a virtual machine to the same specification.
Coupled with this idea of portability is the concept of snapshots and failure recovery. Since the environment an application runs in is software controlled, the current state of the virtualised hardware can be recorded as a snapshot, which can then be used to recover from failure by returning the system to a known good state.
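The snapshot-and-rollback pattern itself is simple, and can be sketched in miniature: record a deep copy of the current state, and restore it if something goes wrong. The toy Python below illustrates only the pattern, not real hypervisor internals; the class and the state dictionary are entirely illustrative.

```python
import copy

class Snapshotter:
    """Toy illustration of snapshot/rollback: store deep copies of a
    state dict and restore a known-good one after a failure."""

    def __init__(self, state: dict):
        self.state = state
        self._snapshots: list[dict] = []

    def snapshot(self) -> int:
        """Record the current state; return the snapshot's index."""
        self._snapshots.append(copy.deepcopy(self.state))
        return len(self._snapshots) - 1

    def rollback(self, index: int) -> None:
        """Restore the state captured by snapshot `index`."""
        self.state = copy.deepcopy(self._snapshots[index])

vm = Snapshotter({"disk": ["os", "app"], "memory": {}})
good = vm.snapshot()                       # known good state
vm.state["disk"].append("bad-update")      # a change that goes wrong
vm.rollback(good)                          # return to the known good state
print(vm.state["disk"])                    # → ['os', 'app']
```

Real hypervisors do this far more cheaply, typically with copy-on-write disk images rather than full copies, but the contract is the same: capture, then restore on demand.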
Virtualisation also provides the ability to create crafted environments for legacy systems. Where an older application has requirements that are no longer compatible with modern hardware, virtualisation offers a means to recreate such an environment whilst still utilising a cloud-based architecture.
We sometimes take cloud computing for granted without thinking about the technology that underpins it. For the majority of applications it isn't strictly necessary to know how hypervisors enable deployment into a cloud environment. But I still think that, as well-rounded engineers, it helps to at least know the layers in our stack, what they provide, and their strengths and limitations.