Sunday 19 August 2018

Divide and Deploy


Certain patterns and practices come and go within software engineering, becoming defunct, disproven or simply falling out of fashion. However, divide and conquer, as epitomised by principles such as Single Responsibility, has consistently been a strategy for delivering scalable and robust solutions.

It is the application of this mindset that led to the advent of microservices. This technique divides an application into a group of loosely coupled services, each offering a specific, cohesive and relatively narrow set of functionality.

Any architecture or pattern can sound overly simplified when described succinctly; the description in the previous paragraph doesn't enlighten the reader to the devil in the detail that comes with implementing and applying a pattern such as microservices.

This post won't cover these topics in enough detail to fully arm the reader either, but it will attempt to point in the right direction.

What is Micro?

The effectiveness of any microservices approach relies on developing the correct segregation of duties for each service; this provides the modularity that is the basis for the success of the pattern. This aspect of the approach is non-trivial because it is possible for microservices to be too small.

Services that are too small increase the cognitive load on those tasked with working on the system, and create a ripple effect when changes are required: each service becomes less independently deployable because of its increased dependence on other services to achieve an outcome.

A good starting point can be to align your initial design to the agents of change within your organisation; these might be areas such as ordering, billing, customers or product discovery. This helps contain the scope of change whilst also giving engineers and the rest of the business a common view of the world.

Another, more technical, analysis is to assess which is more time consuming: re-writing or re-factoring. A sweet spot for a microservice, and an indication that its scope is right, is when it would be quicker to re-write it than to re-factor it when a wide-reaching change is required.

This isn't to say that it is optimal to be continually re-writing services, but the freedom offered by an architecture where this is possible can be a great aid to innovation and the effective implementation of change.

Micro-Communication

Once we have a suite of microservices, how can they be drawn together to form a coherent and competent system whilst remaining loosely coupled? Two possible solutions to this conundrum are REST APIs and an Event Bus.

These two approaches fulfil different needs: explicit communication on the one hand, and a looser transmission of information on the other.

On occasion two microservices will need to communicate explicitly; for example, an ordering service may need to call a payment service and require an immediate response.

In these situations services can present a RESTful API surface to allow this kind of interaction. Intrinsically linked to this approach is the concept of service discovery. Various approaches can be taken, but essentially this involves microservices registering their capabilities and the APIs they offer, allowing a system to form dynamically as new services are introduced.
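The registration and lookup at the heart of service discovery can be sketched as follows. This is a minimal in-memory illustration, not a production registry; the service names and addresses are hypothetical.

```python
# Minimal sketch of service discovery: services register the capabilities
# they offer on start-up, and clients look services up by capability
# rather than by a hard-coded address.

class ServiceRegistry:
    def __init__(self):
        self._services = {}  # capability -> base URL

    def register(self, capability, base_url):
        """Called by a service on start-up to advertise what it offers."""
        self._services[capability] = base_url

    def lookup(self, capability):
        """Called by a client to find the service offering a capability."""
        return self._services.get(capability)


registry = ServiceRegistry()

# A payment service registers itself when it starts (address is illustrative).
registry.register("payments", "http://payments.internal:8080")

# The ordering service discovers it dynamically, with no hard-coded address,
# so new or relocated services can join the system without client changes.
payment_api = registry.lookup("payments")
print(payment_api)  # http://payments.internal:8080
```

In a real system the registry would itself be a networked service (or a tool such as a dedicated discovery server), but the shape of the interaction is the same: register on start-up, look up on demand.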

This kind of communication exists where interactions between services are necessarily pre-defined but sometimes it is desirable and advantageous to allow these interactions to be looser.

An event bus offers this capability by allowing services to broadcast the occurrence of certain events and allowing others to react to those events without having to be aware of the source.

Following a similar example, the same ordering service might broadcast the successful completion of an order as an event so that the services handling communication with customers can take the appropriate action. The loosely coupled nature of this communication allows behaviour to be changed easily without refactoring the service that handled the initial action.
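The broadcast-and-react shape of this interaction can be sketched with a simple in-process publish/subscribe mechanism. The event names and handlers below are hypothetical; a real event bus would be a networked piece of infrastructure.

```python
# Minimal sketch of an event bus: the ordering service broadcasts an
# "order_completed" event without knowing who, if anyone, is listening.

from collections import defaultdict


class EventBus:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, event_name, handler):
        """A service registers interest in a kind of event."""
        self._subscribers[event_name].append(handler)

    def publish(self, event_name, payload):
        # Deliver to every subscriber; the publisher has no knowledge of them.
        for handler in self._subscribers[event_name]:
            handler(payload)


bus = EventBus()
notifications = []

# A customer-communication service reacts to completed orders.
bus.subscribe(
    "order_completed",
    lambda order: notifications.append(f"Email sent for order {order['id']}"),
)

# The ordering service simply announces the event.
bus.publish("order_completed", {"id": 42})
print(notifications)  # ['Email sent for order 42']
```

Adding a second subscriber (say, a loyalty-points service) requires no change at all to the ordering service, which is precisely the loose coupling described above.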

Micro-Deployment

So now that we have our collection of properly sized and loosely coupled services, how do we manage their deployment into our environment?

A key element of microservice deployment is independence. A sure sign that a microservices architecture is flawed is when a number of services must be deployed concurrently; this speaks to strong coupling caused either by a tightly coupled domain or an incorrect segmentation of responsibilities.

A lack of independence also negates the intrinsic promotion of a CI/CD approach that microservices, when applied correctly, can offer.

Independence of deployment, when combined with a load-balanced blue/green approach, also creates an environment where changes can be introduced gradually and rolled back quickly if problems present themselves. This is in contrast to a monolithic architecture, where a small problem in one area may force working changes in other areas to be rolled back.
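The gradual-shift-then-rollback mechanic of a blue/green rollout can be sketched as a weighted router. This is an illustration of the idea only; the weights and steps are hypothetical, and a real deployment would use a load balancer or service mesh to do this.

```python
# Minimal sketch of a gradual blue/green rollout: traffic is shifted to
# the new (green) deployment in steps, and can be sent back to the
# known-good blue deployment instantly if problems appear.

import random


class BlueGreenBalancer:
    def __init__(self):
        self.green_weight = 0.0  # fraction of traffic routed to green

    def route(self):
        """Pick a deployment for one incoming request."""
        return "green" if random.random() < self.green_weight else "blue"

    def shift_to_green(self, weight):
        self.green_weight = weight

    def rollback(self):
        # Instant rollback: all traffic returns to the known-good blue build.
        self.green_weight = 0.0


lb = BlueGreenBalancer()
lb.shift_to_green(0.1)   # canary step: 10% of requests hit the new version
lb.shift_to_green(1.0)   # confidence gained: cut over fully
lb.rollback()            # a problem appears: revert immediately
print(lb.route())  # blue
```

Because only the routing weight changes, a rollback never requires redeploying anything, which is what makes this approach fast and low-risk compared with rolling back a monolith.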

Microservices is a pattern that comes with a serving suggestion rather than a recipe; it cannot be applied via a cookie-cutter methodology. Certain aspects are universal, but a good architecture will only emerge from understanding how they apply to your particular situation whilst keeping an eye on the successful realisation of their benefits. Many of these benefits will enable you to make mistakes by giving you the freedom to reconfigure, refactor and realign your architecture without causing widespread disruption.


Sunday 5 August 2018

The Miracle of HTTP


The web is now all-pervasive in the majority of people's lives; sometimes its presence is explicit, as when we are surfing the web, and sometimes it is implicit, for example in the rise of the Internet of Things.

The complexity of the web's implementation is on a scale that can cause you to wonder how the thing keeps working at all. The perceived speed of advances in its capability can lead you to think that the technology is constantly evolving and being refined.

Whilst this may be true for the technologies that enable us to build things on the web, many of the technologies that underpin the web and keep it working have remained surprisingly static since its inception.

One of these technologies is the Hypertext Transfer Protocol (HTTP).

What Did HTTP Ever Do For Me?

HTTP is a request/response protocol that allows clients, such as your desktop browser, to request data from a server, such as the server allowing you to read this blog.

It sits within the application layer of the Internet Protocol suite and handles the transfer of resources, or hypermedia, on top of a transport-layer connection provided by protocols such as the Transmission Control Protocol (TCP).

It is a descriptive protocol: each request uses an HTTP verb such as GET, POST, PUT or DELETE to describe what the user wants to do with the indicated resource, and each response uses a status code such as 200 OK, 404 NOT FOUND or 403 FORBIDDEN, together forming an exchange that describes the interactions during a session.

Each request and response also contains Header Fields that further describe both the contents of the request/response and the parties sending them.

If you were to see an exchange of HTTP requests between your browser and a web site, you might be surprised at how human readable, understandable and intuitive it is. This speaks volumes for the initial design of the protocol and its elegant simplicity.
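To make that readability concrete, here is an exchange written out as the text that actually crosses the wire, then pulled apart with a couple of lines of string handling. The host, resource and body are illustrative.

```python
# A raw HTTP/1.1 exchange, written out exactly as it travels over TCP.
# Lines are separated by CRLF and a blank line ends the headers.

request = (
    "GET /posts/divide-and-deploy HTTP/1.1\r\n"
    "Host: example-blog.com\r\n"       # which site we want on this server
    "Accept: text/html\r\n"            # what content the client can handle
    "\r\n"
)

response = (
    "HTTP/1.1 200 OK\r\n"              # status line: version, code, reason
    "Content-Type: text/html\r\n"
    "Content-Length: 14\r\n"
    "\r\n"
    "<h1>Hello</h1>"                   # the resource itself
)

# The status line is trivially machine- and human-parseable.
status_line = response.split("\r\n", 1)[0]
version, status_code, reason = status_line.split(" ", 2)
print(status_code, reason)  # 200 OK
```

Everything of importance, the verb, the resource, the outcome, is plain text you could read off the wire, which is exactly the quality the paragraph above describes.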

Without HTTP we wouldn't have a way for the enormous amount of data on the internet to be queried, retrieved, uploaded or managed.

A Brief History of HTTP

Given the importance of HTTP you may think that it has been under constant revision and refinement as the use and the size of the web grows.

Actually this is not the case. The first published version of the protocol (v0.9) appeared in 1991, followed by v1.0 in 1996 and v1.1 in 1997, with v1.1 still being the prevalent driving force behind the majority of web usage.

Over the intervening years there have been clarifications and refinements to v1.1, and the extensibility designed into the protocol has allowed improvements to be made. Still, the protocol delivering the majority of the benefits we derive from the web has remained largely unchanged since what most would consider the birth of the public internet.

This really does show the benefit of designing for simplicity: robust and descriptive technology can be the mother of invention when creative people pick up the tool you've provided and combine it with their imagination.

What The Future Holds

So does all this mean that HTTP is done and dusted, with all problems solved? Well, no. Our use of the web and the nature of the data we channel through it have changed since the 90s, and while HTTP is able to service our demands there is scope for improvement.

HTTP/2, derived from Google's experiments with the SPDY protocol, is a major update to HTTP. It focuses on performance and efficiency improvements to adapt the protocol to the ways in which we now use the web.

Officially standardised in 2015, HTTP/2 is backwards compatible with v1.1, which allows adoption of the new protocol to spread without fundamentally breaking the web.

As of 2018 around 30% of the top 10 million web sites support HTTP/2 with most major browsers also providing support for the new version of the protocol.

The substantial effort involved in successfully advancing a fundamental protocol such as HTTP without breaking the web goes some way to explaining the relatively slow progression of new versions of the protocol.

It's easy to take the web for granted without admiring the engineering that built the basis for all the future innovation that has subsequently changed our lives. Anyone who has worked in a technological industry will speak to the difficulty of getting things right, and to the normality of deciding that something isn't going to work and being forced to start over.

Whilst it would be misleading to assume this never happened during the development of HTTP, the small number of published versions of the protocol, combined with the explosive and transformative impact of the web, truly is something to behold and admire.

Next time you're using the web, or an app, or talking to your smart speaker, tip your hat to the engineering that not only made it all possible but also provided the stability for the evolution that got us to this point.