Monday, 2 July 2018

RESTful Experience


Many different technologies have been devised to provide structure to the transfer of data from here to there and back again. These have ranged from the heavyweight to the lightweight.

REST, or Representational State Transfer to give it its full name, is now the prevalent approach when dealing with an API to retrieve or send data, with these APIs being described as RESTful.

What does it mean for an API to be RESTful? Is it simply the sending and receiving of JSON via HTTP requests?

Although REST does have some requirements around the plumbing of how requests are made and received, there are also philosophies that go deeper than this traffic management aspect.

Stateless

All the information necessary to process a RESTful request should be contained in the request itself. This is to say that the server receiving the request should not need to use state about the current session in order to process the request.

This puts the power with the client to decide which APIs to call and in what order. It ensures the API surface is unambiguous, with no required knowledge of the order APIs should be called in or of any possible side effects.

This statelessness should extend to authentication and authorisation, with each request containing the necessary information for both of those important factors to be fulfilled.

It's important to realise that this only applies to the state of the session and the processing of the requests; the state of the resources and data being accessed is of course subject to change between requests and will have the concept of state.
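
As a minimal sketch of what this looks like from the client side, assuming a hypothetical /api/v1/order endpoint and Python's requests library, every call carries its own credentials so the server needs nothing from a previous request to authorise or process it:

    import requests

    def fetch_orders(token):
        # The bearer token travels with every request; no server-side session is assumed.
        response = requests.get(
            "https://api.example.com/api/v1/order/",
            headers={
                "Authorization": f"Bearer {token}",
                "Accept": "application/json",
            },
            timeout=10,
        )
        response.raise_for_status()
        return response.json()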

Uniform Interface

REST APIs deal in the currency of resources. A resource can be almost any data item: it could represent a customer, a shopping basket, a book or a social media post.

Resources should be uniquely identifiable and have a representation in the system that is descriptive and processable.

Operations on these resources should be via standard HTTP verbs that describe the operation that is taking place.

  • GET: Read.
  • POST: Create.
  • PUT: Update/Replace.
  • DELETE: Remove.

The HTTP response codes returned from using any of these requests should also be related to the state of the resource, for example:

  • 404: Resource with that unique identifier cannot be found.
  • 405: Not allowed, such as when a resource cannot be deleted or modified.
  • 409: Conflict, when a resource with that unique identifier already exists.
  • And so on...

The paths used when accessing these resources should also be self-explanatory and naturally readable.

  • GET /api/v1/customer/ - Return all customers.
  • GET /api/v1/customer/866823e5 - Return a specific customer.
  • GET /api/v1/customer/?lastname=smith - Return all customers with a last name of smith.
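
Bringing the verbs, response codes and paths together, the sketch below shows what a minimal customer resource might look like, assuming Flask and an illustrative in-memory store (the CUSTOMERS dictionary and its field names are hypothetical):

    from flask import Flask, abort, jsonify, request

    app = Flask(__name__)
    CUSTOMERS = {"866823e5": {"id": "866823e5", "lastname": "smith"}}

    @app.route("/api/v1/customer/", methods=["GET"])
    def list_customers():
        # Optional filtering, e.g. /api/v1/customer/?lastname=smith
        lastname = request.args.get("lastname")
        matches = [c for c in CUSTOMERS.values()
                   if lastname is None or c["lastname"] == lastname]
        return jsonify(matches)

    @app.route("/api/v1/customer/<customer_id>", methods=["GET"])
    def get_customer(customer_id):
        customer = CUSTOMERS.get(customer_id)
        if customer is None:
            abort(404)  # resource with that unique identifier cannot be found
        return jsonify(customer)

    @app.route("/api/v1/customer/", methods=["POST"])
    def create_customer():
        payload = request.get_json()
        if payload["id"] in CUSTOMERS:
            abort(409)  # conflict: a resource with that identifier already exists
        CUSTOMERS[payload["id"]] = payload
        return jsonify(payload), 201

    @app.route("/api/v1/customer/<customer_id>", methods=["DELETE"])
    def delete_customer(customer_id):
        if customer_id not in CUSTOMERS:
            abort(404)
        del CUSTOMERS[customer_id]
        return "", 204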

All of this structure allows an API to be self-discoverable by a user familiar with the resources being represented.

The path can also be used to impose a versioning scheme, ensuring that when breaking changes must be made to how the API behaves or to the representation of the data being returned, this is non-impactful for existing consumers of the API.

Cacheable and Layered

Much of what we've discussed allows REST APIs to adopt characteristics that increase performance, such as caching.

GET requests within a REST API should be cacheable by default, with standard HTTP mechanisms being used to control the lifetime of the cached data. This helps reduce latency whilst also reducing the strain placed on the backend sources of the data.
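
As a brief sketch, again assuming Flask and an illustrative book resource, the standard Cache-Control header is all that is needed to state how long a response may be reused:

    from flask import Flask, jsonify

    app = Flask(__name__)

    @app.route("/api/v1/book/<book_id>", methods=["GET"])
    def get_book(book_id):
        response = jsonify({"id": book_id, "title": "An Example Title"})
        # Allow clients and intermediaries to reuse this response for five minutes.
        response.headers["Cache-Control"] = "public, max-age=300"
        return response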

Segregating an API based on resources allows for a certain hierarchy and layered architecture to be put in place.

This lends itself to the micro-services model, allowing systems to be composed of narrowly focused components, each dealing with a particular element of the domain being modelled.

REST has come to dominate the API landscape because the application of a few simple rules greatly simplifies the process of implementing an API as well as reducing the barriers to entry for a consumer to get up and running with it.

On occasion it may be difficult to adhere to all the rules we've laid out, and it may be the case that an API being RESTful is a matter of degree rather than an absolute. But these situations should be rare, and once you have acquired a RESTful eye you will soon become adept at modelling your world according to its guidelines.


Tuesday, 26 June 2018

Black Swans and Antifragility


All of us who work in software development will have the scars caused by a disastrous update, outage or product launch. These experiences shape the way we approach our roles in the future: we become more attuned to possible catastrophe, planning strategies to deal with things going wrong and positively expecting them to.

The Black Swan Theory, postulated by Nassim Nicholas Taleb, deals with the nature of unexpected events. Although this is in a wider context than technology, there are nonetheless parallels to the kinds of events we as IT professionals have to react to.

Within this theory a black swan is an event with the following properties:
  • It is an outlier, not expected, with past experience not pointing to its possibility.
  • It carries an extreme impact.
  • Human nature makes us concoct explanations for its occurrence after the fact, making it explainable and predictable.

In software engineering events that could be considered black swans might be sudden increases in the load placed on a system, catastrophic hardware failure or a breach in security.

So are we doomed to the consequences of these events, or can we architect our systems to try and cope with the aftermath?

Modularity and Weak Links

Black swan events have an increased, or at least more sustained, impact on complex systems.

Complexity breeds mistakes and causes solutions to become harder to envisage, so a strategy for combating complexity can help combat both the cause and the effect of disastrous events.

Composition, by breaking down a system's problem space into multiple simplified chunks, can work at many levels, from individual blocks of code to whole sub-systems.

Allowing these blocks to be swappable, and having the ability to re-configure and re-organise them, not only allows functionality to be easily changed, it also allows non-functioning areas of a system to be quickly fixed or replaced.

The links between modules can transmit stress and failure as well as functionality; the weaker they are, the more easily they can be broken when necessary.

Redundancy and Diversity

Black swan events related to hardware or service failure become more catastrophic when no alternative is available.

Redundancy and diversity are strategies for ensuring that alternatives do exist to ensure continuity of service. Discussions around redundancy and diversity can get caught up in slightly pedantic arguments, so for the purposes of this discussion let's try to simplify.

Redundancy can be viewed as having more than one of a particular resource while diversity can be thought of as having more than one channel for the functionality.

Using the example of databases, redundancy would be achieved by having back-ups, mirroring or replication whereas diversity might be achieved by using multiple cloud providers for hosting data.
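
A small sketch, with entirely hypothetical store clients, shows the shape of the idea: redundancy is the replicas sitting behind each client, while diversity is having a second, independently hosted channel to fall back to:

    def read_customer(customer_id, primary_store, secondary_store):
        # primary_store and secondary_store are illustrative clients for data
        # hosted with different providers.
        try:
            return primary_store.get(customer_id)
        except ConnectionError:
            # The primary channel is unavailable; use the alternative provider.
            return secondary_store.get(customer_id)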

Testing and Probability

No strategy for dealing with failure can be said to be fully implemented unless it has been proven to be effective via testing.

This verification or testing can be as simple as ensuring a database backup is valid and can be restored from; it can also be as sophisticated and automated as the chaos monkey techniques employed by companies such as Netflix.

There have been many instances of companies who believe they have plans to cover every eventuality being left floundering once disaster strikes.

If we were to list all the possible disasters that could befall us they would be numerous, each having a probability and a level of impact on our system.

These two factors need to be balanced along with the cost and effort of mitigation. Whenever protecting against the scenario is relatively straightforward this should be implemented regardless of likelihood.

For everything else the impact of the event should be balanced against its probability. In these scenarios don't be too quick to write off an event as improbable; if it would bring your system to its knees then it's worth considering having a strategy.

Trying to assess the potential failures in your system is a good way of assessing your architecture and infrastructure, highlighting technical debt or areas for improvement. If at the same time you can develop strategies to try and reduce fragility then you will be able to sleep more easily in your bed.


Tuesday, 29 May 2018

Layered Testing


Many software systems will reach a scale where achieving adequate coverage of all provided features via traditional manual testing will prove virtually impossible, especially in a reasonable time scale.

A strong suite of automated tests, coupled with the adoption of a shift left mindset, can provide a solution to this problem. This approach can mean a system is under almost constant testing, at least for critical journeys and functionality, without which users would be adversely affected.

As with most aspects of software engineering, it's imperative for this testing infrastructure to have a well-defined structure to avoid the evils of spaghetti code. Code quality shouldn't be something only in developers' minds when writing production code; it's as important for maintaining good quality tests.

This structure starts by identifying the different categories of tests that you will write, what they are designed to achieve, and how they fit into the bigger picture.

Unit Tests

Unit tests are defined by having a single source of failure, namely the class that is being tested. This obviously discounts the tests themselves having bugs; whilst developers often gravitate to wanting the tests themselves to be wrong when they fail, this isn't the case as often as we may want it to be.

This single source of failure is maintained by mocking all functional dependencies of the class being tested. The distinction being drawn here with functional dependencies is to avoid model classes and the like also having to be mocked: if the dependency offers functionality that could cause the test to fail then it should be mocked.
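
As a minimal sketch, using Python's unittest and unittest.mock with an entirely hypothetical OrderCalculator and price service, the only functional dependency is mocked so the class under test remains the single source of failure:

    import unittest
    from unittest.mock import Mock

    class OrderCalculator:
        """Hypothetical class under test with one functional dependency."""
        def __init__(self, price_service):
            self.price_service = price_service

        def total(self, items):
            return sum(self.price_service.price_for(item) for item in items)

    class OrderCalculatorTest(unittest.TestCase):
        def test_total_sums_prices_from_the_price_service(self):
            price_service = Mock()
            price_service.price_for.return_value = 5  # the mock, not real pricing logic
            calculator = OrderCalculator(price_service)
            self.assertEqual(calculator.total(["book", "pen"]), 10)

    if __name__ == "__main__":
        unittest.main()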

Unit tests should be run as frequently as possible, and at least as often as change is merged into a shared branch. For this to have value the tests must follow the FIRST principles of being fast, independent and repeatable.

Unit tests are therefore your first line of defence against bugs being introduced into a code base and represent the furthest left it is possible to introduce automated testing. Indeed, when following a Test Driven Development (TDD) methodology the tests exist even before the code they will be testing.

Integration Tests

Within a code base classes do not in fact operate independently, they come together in a grouping to perform a purpose within your overall system. If unit testing should fulfil the role of defending against bugs being introduced inside a class's implementation, then integration testing should act as a line of defence against them being introduced within the boundaries and interactions between classes.

By their very nature integration tests don't have a single reason to fail; any class within the group you are testing has the potential to cause a test to fail. Where in our unit testing we made extensive use of mocking to simulate functionality, in our integration testing we are looking to include as much real functionality as possible.
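
By way of a contrasting sketch, again with hypothetical classes, an integration test wires real collaborators together and mocks nothing inside the group being tested:

    import unittest

    class RateTable:
        def rate_for(self, weight_kg):
            return 5.0 if weight_kg <= 2 else 9.0

    class ShippingQuoter:
        def __init__(self, rate_table):
            self.rate_table = rate_table

        def quote(self, weight_kg):
            # Combine the real rate lookup with a tax calculation.
            return round(self.rate_table.rate_for(weight_kg) * 1.2, 2)

    class ShippingQuoterIntegrationTest(unittest.TestCase):
        def test_quote_combines_rate_lookup_and_tax(self):
            quoter = ShippingQuoter(RateTable())  # real dependency, no mock
            self.assertEqual(quoter.quote(1.5), 6.0)

    if __name__ == "__main__":
        unittest.main()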

This makes integration tests harder to debug, but this is a necessary price to pay to validate that all the individual sub-systems of your code interact properly.

Whilst even the most ardent advocate of TDD wouldn't write integration tests prior to implementation, like unit tests they should be fast and run frequently.

Automated UI Tests

In the majority of cases software is provoked into performing some operation based on input from a user. Implementing automated UI testing is an attempt to test functionality as that user, or at least in as close an approximation of the user as possible.

As with integration tests, these automated UI tests will have multiple reasons to fail, in fact they are likely to have as many reasons to fail as there are sources of bugs in the system being tested.

Although it is necessary to engineer an environment for these tests to run, and not all functionality will lend itself to being tested in this way, these tests should contain virtually no mocking.
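
A sketch of such a test, assuming Selenium WebDriver and a hypothetical login page with the element ids shown, drives the application exactly as a user would:

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    # Assumes a browser driver is available in the test environment.
    driver = webdriver.Firefox()
    try:
        driver.get("https://example.com/login")
        driver.find_element(By.ID, "username").send_keys("test-user")
        driver.find_element(By.ID, "password").send_keys("not-a-real-password")
        driver.find_element(By.ID, "submit").click()
        # The assertion checks the journey outcome, not any internal implementation.
        assert "Dashboard" in driver.title
    finally:
        driver.quit()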

Automated UI testing is never going to lend itself to being as fast as integration testing and unit testing. For this reason they should be structured such that they can be easily sub-divided based on what they are testing. This will allow their execution to be targeted on the areas of change within a code base and the critical paths that cannot break.

They are likely to be run less frequently, but they serve as a health check on the current suitability of the code for deployment. They are also workhorses for the mundane, freeing up precious manual testing resource to concentrate on testing more abstract qualities of the code.

These three areas by no means cover all the possible types of test you may wish to write, one notable exception being testing performance via Non-Functional Tests (NFTs). However they do demonstrate how a well thought through testing architecture consists of many layers, each with a different purpose and objective.

Sunday, 13 May 2018

Technological Milestones



The writer Arthur C. Clarke postulated three laws, the third of which is "Any sufficiently advanced technology is indistinguishable from magic".

The users of technology and the people that engineer it will be likely to have a different view on the accuracy of that law, users will very often be enchanted by a technological advancement whilst engineers understand its foibles and intricacies.

Technology often doesn't move at quite the blistering pace that many believe it to; it tends to advance via small increments, while the ideas of how to utilise it are what experience rapid advancement.

But sometimes giant leaps forward are made that mark milestones in what is possible and the magic that engineers can demonstrate.

Solid State

The fundamental building block of the entire modern world is the solid state transistor. The ability to fabricate these building blocks in ever smaller dimensions has driven the development of ever more powerful computers and all the other advancements in electronics we have witnessed.

The ability to use semiconductors to engineer structures like transistors evolved during the 1950s and 1960s and led to the development of the first transistorised computers.

The technology was further refined with the invention of the integrated circuit, or chip, and via the application of Moore's Law has driven the development of the modern world.

Moore's Law, named after engineer Gordon Moore, states that the number of transistors that can be fabricated in a given area doubles approximately every two years. 

The simple view of this is that the processing power of a computer doubles every two years; this proved to be true for the best part of forty years, with the rate only slowing in recent times.

Solid state electronics truly is the genie that can't be put back in the bottle. 

The Internet and the World Wide Web

Solid state electronics gave us powerful computers that could accomplish many tasks, the next advancement came when we developed technology to allow these machines to work together and broke down the barriers around data and its uses.

Although the terms are often used interchangeably, the Internet and the World Wide Web have very different histories separated by decades.

The history of the Internet, the ability to connect computers over large distances, dates back to American military development during the 1960s and 1970s.

The World Wide Web, the ability to organise and make available data over the network the Internet provides, has its origins in the 1980s and 1990s as part of the World Wide Web project at the CERN research institute in Switzerland.

The pre-Web world is now almost unimaginable even for those of us that have memories before web-sites, apps and social media.

The ability to connect, share and interact on a global scale has changed the world forever; never has so much data, covering so many topics, been available to so many people.

An interesting side note to this is that the protocols that control the movement of all this data, such as TCP/IP or HTTP, have changed remarkably little since their inception given the importance they now have to how we live our lives. Proof that not everything moves at breakneck speed.

Artificial Intelligence

It is often difficult to predict the next great step before it happens; the difficulty of doing this could well be what defines the genius of those that take these steps forward for us.

If I were asked to predict what, in decades' time, people will look back on as a milestone, it would be the emergence of Artificial Intelligence into the everyday world.

Whilst the dream of building Artificial Intelligence stretches back many decades, it is only in recent years that applications of the technology have started to become relatively commonplace.

We are still only scratching the surface of what the technology will be capable of delivering, and as this unfolds fear around these capabilities may grow.

An untold number of sci-fi films have predicted disastrous consequences arising from the invention of thinking machines. While these stories are built on a misunderstanding of the technology involved, perhaps we are seeing the birth of a technology that proves Arthur C. Clarke's third law and will cause many to classify it as magic.

Tuesday, 24 April 2018

Developing Relationships



When you hold a senior technical position within an organisation, whilst no skill set should trump your technical knowledge of your chosen discipline, it becomes increasingly important that you're able to manage relationships with those around you.

The approach to this will vary based on the people in question, whether technical or non-technical or based on the direction of seniority.

Dealing with these issues may not be why you entered your chosen profession, but if you end up in a position where you work in complete isolation it suggests that your work has no consequences, and this is also not a desirable situation to be in.

As with many skills, these communication skills will be learned by trial and error with bumps along the way, but mastery is nonetheless important.

Taking People With You

There are many aspects of software engineering that align themselves more to art than science; it is also possible for a question to have multiple correct answers.

All of this means when you engage with others in your technical community people may not see things your way immediately.

If the first presentation of your argument doesn't persuade your peers, it's important to take a step back. Views can often become entrenched if arguments are presented more forcibly.

Instead it can often be more beneficial to allow others to pursue alternative solutions. This will either lead to your viewpoint being proven correct or a genuinely better idea being demonstrated. The more often this plays out, the more likely people will be persuaded to your way of thinking and understand your rationale.

There can be a difficult balancing act in this situation. On occasion, when arguments relate to critical issues such as security, it may be necessary to be firmer, but it should always be preferred to take people with you rather than force them to agree with you.

Not Everyone Cares About The Tech

You will often be required to engage in technical conversations with non-technical colleagues. Whilst in these situations it should be a goal to educate your colleagues in technical matters, it has to be acknowledged that this has limitations.

Issues around technical debt and architecture will always be problematic for people without a technical outlook. This means that, no matter how much they may want to, an understanding of the technical merit of addressing these issues may always be slightly out of reach.

Instead an alternative strategy is to focus on the consequences and make the impact of these issues more tangible.

This may be errors reported by users, performance impacts limiting sales or completed user journeys, or an increase in the cost of development of new features.

If you can't make people appreciate an issue on its technical merits, present the impacts in terms of things that your colleagues will appreciate. This equates to not trying to make people understand a solution but making them appreciate that there is a business-related problem that needs solving.

Constantly Pragmatic

If I had to sing the virtues of any non-technical skill that a technical role should require it would be pragmatism.

If you work in a sufficiently large organisation you will interact with many different people across many different subjects. If you resolve to win every argument or discussion this presents, your progress in delivering the things you really care about will be severely curtailed.

Identify the discussions that are core to what you are trying to achieve and as a consequence those that are less important and where you can bend a little.

These two lists are likely to vary over time; some discussions will need immediate resolution and some may represent longer-term thinking where you have more time to persuade people of the merits of your approach.

You may find that issues that are lower down the priority order for you are higher for others, and being prepared to bend to others' views may be reciprocated when attention turns to something you feel is key.

This may not always be easy, but learn to accept solutions to problems that have multiple workable answers, even if your preference would have been to go in another direction, and save your energy and credit for the situations where you feel only one solution is the right one.

As you interact with others as part of your role it isn't enough to have only technical skills; you also have to have an understanding of human nature and how others will interact with you and with your wider team.

This will be frustrating and possibly time consuming but is an unavoidable consequence of people being people. Remember that others may be thinking along very similar lines when it comes to their interactions with you; recognise your own biases and tendencies to be illogical.

On nearly every occasion everybody involved in a discussion will be aiming for the same outcome; only the means to achieve it is a cause of disagreement. Understand that this is often a journey and help people to arrive at the same destination as you, even if they take a different route.

Tuesday, 17 April 2018

Technological Endeavour


It has long been the case that any serious business enterprise is expected to have a digital presence in the form of websites and apps.

This has led many different companies to become involved in software development activities, but at what point do you become a technology company?

Is it enough for digital channels to represent your primary connection with your users and customers or does the development of technology need to extend further towards the heart of your company's operation?

Deployment Opportunities

A great deal of insight can be garnered from your attitude to your production environment. While no self-respecting company should play fast and loose with production up-time or service levels, technology companies are single-minded in their determination to reduce barriers to deployment and pride themselves on the rate at which they can deliver change.

Non-technology companies see deployment to production as something to be feared and controlled.

This fear is generally driven by a lack of faith in their ability to be certain code is ready for production, combined with an anxiety about the effectiveness of system monitoring to spot issues post-deployment before users do.

True technology companies embrace automation as an answer to these questions and find themselves unable to work within the constraints of human processes or manual roadblocks.

Few companies can claim to be at the scale of a company like Amazon, but as an example of what can be achieved with this mindset, in 2014 Amazon used Apollo, their internal deployment tool, to make a total of 50 million deployments to various development, testing and production environments (an average in excess of one every second).

Whilst this is an extreme example, it demonstrates that it is impossible to achieve large-scale technology deployment while still keeping humans and human processes part of the deployment chain.

Data, Data, Data

Many industries used to have a simple model for making profit, a good or service was offered and if users liked it an opportunity to make money would present itself.

This simple model involves a degree of risk that you may misjudge what users want or not be able to fully realise potential sales or interactions.

Technology companies realise that a digital marketplace offers a unique opportunity to monitor and react to users' activity; they realise that this data can reduce risk and uncertainty and allow users' likes and dislikes to be predicted and measured.

This combined with the potential to rapidly deploy change into production provides a unique opportunity for test and learn, to be continually taking advantage of marginal gains.

This mindset assumes things could always be better, not necessarily by introducing new features or functionality but by monitoring and improving what is already being made available to users.

Leader Alignment

Technology companies are attempting to leverage engineering skill and expertise to deliver profits and growth; they see this as their number one skill set, one that shouldn't be degraded by any other concern.

To facilitate this they ensure the leaders and decision makers within the business are aligned to their engineering operation and that they have a technological view point combined with an experience of delivery.

This means that they act as guardians for the integrity of engineering practices within the organisation and maintain standards regardless of the pressures to implement change.

This isn't to say that engineering is conducted for engineering's sake, ultimately the needs of the business still need to be fulfilled, but once a commitment is made to construct something then the quality of the engineering employed is not a variable in the process.

Non-technology companies view the engineering function within their business as purely a production line of change that can be scaled and manipulated easily to deliver any functionality in any timescale.

Engineering quality takes a back seat to the need to deliver and quality is a lever to be adjusted rather than an absolute.

Whether you are a technology company or a non-technology company as presented here is not a matter of being right or wrong. It's possible for a company to offer a digital marketplace whilst still considering it just one avenue to attract users rather than the only one.

It's possible to develop software but not feel the need to become a technology company, but if you decide to embark on attaining that label ensure that the mindsets and attitudes within your organisation reflect an understanding of what it takes to achieve it.

Certain practices need to be let go and others need to be embraced, trust needs to be placed in engineers as well as an appreciation for what they offer your business.

Nobody should be expecting to mature into the next Amazon or Google, but that is simply a matter of scale; the practices and principles are universal.

Tuesday, 3 April 2018

Micro-World


Modern development philosophies represent a campaign to divide our code bases into ever decreasingly sized chunks. Referring to micro-apps, micro-services, even nano-services has now become commonplace when describing a target architecture.

So why do we think less is more? Is this simply a practice of trying to devise ways to have smaller and smaller blocks of distinct functionality, or are the benefits of this way of thinking only realised with slightly more subtle thinking?

As with most schools of thought employed within software engineering there are shades of grey, while micro-services and micro-apps aren't a silver bullet they do represent an approach that can have a positive impact on your code.

Solidly Decoupled

Regardless of whether or not you are following a micro approach presiding over a loosely coupled code base is a desirable situation to be in.

The ability and likelihood of code conforming to SOLID principles will be inversely proportional to its size. That isn't to say that it's impossible to have a large, well-structured class, but it is certainly harder to achieve.

Source code isn't an asset whose value increases with its volume; the more code you write, the more technical debt you are probably incurring, so anything that results in dealing with smaller areas of code at any one time is likely to be a driver for improvement.

Adopting a micro approach will naturally encourage you to think in this decoupled way. Thinking about how to make a piece of functionality self-contained and independent will promote an adoption of single responsibility, open-closed and interface segregation, while dependency inversion will likely be a tool to help you achieve the necessary decoupling.
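
A small sketch of that decoupling, with a hypothetical order-fulfilment example, shows dependency inversion doing the work: the component depends on a narrow abstraction, so the concrete implementation can be swapped or deployed separately:

    from abc import ABC, abstractmethod

    class PaymentGateway(ABC):
        @abstractmethod
        def charge(self, amount: float) -> bool:
            ...

    class CardPaymentGateway(PaymentGateway):
        def charge(self, amount: float) -> bool:
            # A real implementation would call out to a payment provider here.
            return True

    class OrderFulfilment:
        def __init__(self, gateway: PaymentGateway):
            self.gateway = gateway  # depends on the abstraction, not the implementation

        def fulfil(self, amount: float) -> str:
            return "dispatched" if self.gateway.charge(amount) else "payment-failed"

    print(OrderFulfilment(CardPaymentGateway()).fulfil(25.0))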

Drivers For Change

A major reason for employing a micro philosophy is to take a more agile approach to implementing change. Whilst this may seem an obvious advantage its effectiveness is reliant on understanding the drivers for change within your business.

The nature of some changes to a system may be purely technical, but the majority will be as a result of a required change in functionality needed by the business.

This requires a balancing act between drawing up dividing lines based solely on technical concerns, to create the smallest most self-contained micro apps or services, and having an architecture that mirrors the business your software operates within.

The nirvana that is the goal of this approach is for these views to converge causing the most efficient configuration of your software to happily match the requirements of the most frequently altered areas of your system.

This will likely lead to different levels of granularity based on whether the concern of the code you are looking at is purely technical, for example a cross-cutting concern such as logging, or more business related, for example order fulfilment.

Again, regardless of whether or not you choose to adopt a micro architecture, understanding the business your software serves is no bad thing.

Micro Deployability

When you first embark on dividing your code into micro-apps or micro-services, the initial natural viewpoint is on the code itself.

However at least equally important, and potentially more important, is the ability of these new micro elements to be deployed independently.

Whilst it is possible to realise many of the benefits described here while still deploying your software as a monolith, the agility of your team to effect change for both your users and your business will be severely curtailed if your micro-apps or micro-services cannot be deployed in isolation.

Achieving this acknowledges that different areas of your code base exhibit differing speeds of change, it also acknowledges that certain areas have a criticality to your business that means you must have the ability to fix them the instant they are being less than effective.

Implied in both the above points is that your system must also be sufficiently composed as to allow the performance of different areas to be monitored independently.

A micro way of thinking should influence every area of your development, from the code itself, to its deployment, to the monitoring and management of the functionality it implements.

Architectural buzz words or new paradigms may come and go but well organised code exhibiting SOLID principles will always be in fashion. In this respect micro-apps and micro-services are not a fad, they are a natural extension to these time honoured approaches to software development.

It can be more subtle than simply trying to compose your system of an ever increasing number of distinct elements, but a goal of decomposition and de-coupling will not steer you far wrong.

Monday, 26 March 2018

What Does Your Backlog Say About You?


If you review a backlog it will, intentionally or not, reveal a lot about a team: its priorities, its values and the future it's heading towards.

An even more fascinating aspect to this window into a team's inner workings is that if you were to tell the team what you think it says about them they would more than likely disagree.

This ability to analyse teams isn't some sort of mystical, unexplainable art; it's the drawing of logical conclusions based on the work the team chooses to prioritise as well as the apparent goals and motivations behind having this work on the backlog in the first place.

What You Don't Value

If you asked a team whether they place value on stability, security and sustainability, you would very much expect to see a room full of nodding faces.

However the make-up of stories pulled into sprints can very often tell a different story.

Teams will often have a strategy around how to deal with tech debt or security issues and the need to balance this with continued feature development. When a team has these issues under control they are able to maintain this forward momentum in a sustainable way while users continue to make the most of what's on offer.

Teams whose backlogs demonstrate significant technical debt, who then continue to prioritise new features, are sending a message which, while inconvenient, is nonetheless true.

It sends a message that the team intends to move forward regardless of whether or not they are taking their users with them and regardless of whether or not the direction they are heading in is built on sustainable foundations.

It places little importance on issues currently being faced by users; while users are facing issues with functionality already deployed, this should be the number one priority of any team. Promising users jam tomorrow is very rarely an effective strategy.

Something Like

A backlog should not simply be a place to register ideas or represent a to-do list.

A backlog should be a list of well-defined work items that have definite value either to your users or to you as a business. All of these work items should be ready to go should the team be in a position to start work; it shouldn't be necessary to filter the backlog to find the work-ready items or to spend time demystifying what is required.

Obviously coming up with this list of work items is an iterative process; it isn't possible to instantly define and document requirements or the technical implementation that will fulfil them. But this process should take place outside of the backlog and must be disciplined enough to only migrate items to the backlog once they have progressed to the ready state.

A backlog full of half thoughts or reminders about functionality that may or may not be useful will struggle to keep your development team productive and will also fail to demonstrate a clear strategy for the product being developed.

Requirements can change and items on a backlog aren't fixed in stone, but this doesn't mean they can't always be descriptive of what's required at a moment in time and be ready for implementation.

Spaghetti Stories

User stories should ideally be independent of each other; this should enable a backlog to be fluid, allowing teams to quickly re-order work items depending upon prevailing priorities.

It may be that stories are inter-related in terms of delivering an end-to-end feature to a user, but the ability of the team to work on them shouldn't necessarily follow the same inter-connection.

This is not always easy to achieve and can have as much to do with the architecture of the code base being worked on as the team's ability to craft stories.

But disorganised teams, or teams that lack a clear direction, are likely to construct a backlog that becomes rigid. This can be seen in sprint planning, when the team struggles to put together an effective sprint because blockages on certain stories cause a ripple effect through the backlog, limiting the amount of work that is ready to pull in.

This can also make it difficult to identify the next shippable version of the software. This isn't necessarily when the next feature is ready, but the point at which a group of changes can be made to the code that move it forward whilst allowing it to be stabilised.

A key skill for any agile team is the ability to map a path from story to story delivering software with value to users and the business along the way.

The backlog and the sprints it drives are the heartbeat of an effective team. It is more than just a collection of work items; it should be a manifestation of the strategy the team is following and the direction it is heading in.

As much as it can reflect the success of a team it can also be indicative of its failings. Because of this, teams should take the time to assess the health of their backlog and attempt to draw conclusions on what could be improved or to emphasise what is working well.

Treat your backlog as an indication of the agile health of your team, treat it with the respect that you afford your codebase and keep a keen eye out for signs that the quality of its stories is on the decline.

Monday, 5 March 2018

Agile Deployment



Deployment is the end goal of any software development activity. Whether it be to the desktop, a server or app stores, why write software if the ultimate aim isn't to deploy it so it can be consumed and used?

Methods and strategies for deployment are widely debated and although the technologies used are important the overriding factor governing success is the mindset that's adopted towards shipping the code.

If asked what deployment should be like in an agile environment, many teams would use variations on the concept of continuous integration. But this concept can sometimes be quite intangible: how do you actually know if you're achieving continuous deployment?

Continuous is Different from Regular

An unfortunate consequence of us using the word continuous to describe an ideal deployment strategy is that it implies a regular cadence is the only thing we should be judged by.

A more descriptive phrase of what we should be aiming for would be unrestricted deployment.

While a regular cadence of deployment brings with it many benefits, no matter what the frequency may be, if any of your processes enforce that deployments can only be made at that cadence then this will still cause you and your team problems.

An effective deployment strategy enables you to release whenever you want or need to. It means the only factor in deciding whether or not to deploy is whether or not the code is ready.

An ultimately effective strategy stops this even being a decision that needs to be consciously made; your systems and processes deploy code whenever it reaches the state of being done.

Deploy Code Not Features

Not every change to a code base results in something that is visible to the user. The introduction of new features can take time to come to fruition and require many changes to the code base.

If we view value as only coming from completed features this can easily lead to big bang deployments that drop a large amount of change into production.

If we can derive value from the stepping stones in getting to a complete feature then we can break this single potentially disruptive deployment into many smaller safer changes.

These small deployments will, via a cumulative effect, still get us to the desired outcome, but because the surface area of change was always kept to a small increment our risk of breaking something will be greatly reduced.

The user may not see any visible difference until the final deployment that fully enables the feature but they also haven't seen errors or been frustrated by a large deployment that changed many things.

This approach of deploying code when it's ready will also encourage us to architect our code to be formed of distinct, separate and well-defined blocks. This has many benefits for our overall code quality as well as making our deployments less stressful.
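
A minimal sketch of this, with a hypothetical in-memory flag and checkout functions, shows how incomplete feature code can be deployed dark and only made visible when a final deployment flips the flag:

    FLAGS = {"new_checkout": False}  # illustrative flag store; off until the feature is complete

    def legacy_checkout(basket):
        return f"ordered {len(basket)} items"

    def new_checkout(basket):
        return f"ordered {len(basket)} items with one-click payment"

    def checkout(basket):
        # The new code path is deployed but invisible to users until the flag is enabled.
        if FLAGS["new_checkout"]:
            return new_checkout(basket)
        return legacy_checkout(basket)

    print(checkout(["book", "pen"]))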

Not Wrong for Long

Many aspects of software development can be an inexact science. This is not just down to the complexity of building software but also to the difficulty of predicting how users will interact with it.

The concept of failing fast in an agile environment acknowledges this and attempts to make it ok to try things even if they will sometimes fail to have the impact you'd hoped for because we learn by the experience.

It is also, no matter the level of testing employed, virtually impossible to deliver bug-free software. While we may hope to reduce these inevitable defects to the smallest and least impacting, sometimes one will slip through the net.

Both these aspects mean rolling back or fixing production will always be something that a team has to face. If the team doesn't have faith or confidence in its deployment process, or has to wait to deploy in the next available slot, then stress levels will rise.

This can easily lead to a reluctance to try things or conduct worthwhile experiments around what users want or need.

All of the points that have been made here link back to the fact that deployment should be easy, repeatable and in no way onerous or stressful.

The long slow build up of stress towards release day should be put to bed in favour of an automated and flexible approach that puts code into production whenever it serves the purpose it was originally intended for and meets our definition of done.

Code it, ship it and move on.  



Monday, 19 February 2018

Bring Out Your Defects



Debugging is a universal pain known to all developers and software engineers; despite our best efforts to improve and get it right this time, it's an inevitable outcome given the complexity of writing code at scale.

It can have many stages from denial and anger to acceptance and ultimate resolution.

Given that engaging in debugging is unavoidable, it's clear that we need a strategy for effective debugging. For many this is built up from the scars and wounds of previous battles with a code base; it will also be influenced by the particular area of development that you operate in.

What is presented here is far from a foolproof strategy that will always enable you to root out defects quickly and painlessly, but any tips and tricks can be useful to have in your armoury when you go into battle.

Don't Panic

First and foremost, don't beat yourself up when a bug is uncovered; writing bug-free code is virtually impossible once your code base grows beyond a certain size. Developers and development teams should be judged on how they deal with defects, not simply on whether or not they exist.

The majority of defects will be simple mistakes, yet when a defect first appears we often assume it has a complicated root cause. Instead, accept your human fallibility and expect to find out that you've just had a momentary failing of intelligence.

The first action should be to make sure that you understand the manifestation of the defect and that you have a path to reproduce it. Without that, not only will you struggle to find the cause, you will also have little confidence that a potential fix is effective.

Once you start attempting to find a fix, try to reduce the number of variables in play, change one thing at a time and have a heightened sense of when you've reached a point where you're not sure of the logic of what you're trying anymore.

When it works you need to be able to explain why.

For a complex defect you will more than likely have several false starts; reset your approach whenever you're in a situation where, even if something works, you won't be sure how you got there.

Use Your Tests

Unit tests are invaluable in proving your code works after you make a change; they also have equal value in demonstrating how your code isn't working.

Well written tests act as documentation for how things are supposed to work as well as providing an effective test harness to enable you to exercise the problematic piece of code.

When a defect has been identified, assuming you don't already have a failing test, construct a test that will expose the problem and use this as your primary means to attack the issue.

Not only will this help you analyse and ultimately fix the problem it will ensure that any potential future regression can be identified quickly and stopped.

The tests you add will also act as documentation, for any developer who may touch that area of code in the future, of the potential problems that can be introduced.
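
As a small sketch, with a hypothetical parse_quantity helper that was found to fail on padded input, the test names the defect it exposes and remains in place to catch any regression:

    import unittest

    def parse_quantity(text):
        # The fix: strip surrounding whitespace before converting.
        return int(text.strip())

    class ParseQuantityDefectTest(unittest.TestCase):
        def test_quantity_with_surrounding_whitespace_is_parsed(self):
            # Written to fail before the fix above was applied.
            self.assertEqual(parse_quantity(" 3 "), 3)

    if __name__ == "__main__":
        unittest.main()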

Write Debuggable Code

The majority of us tend to worry about optimisation far too early in the life of a code base, and this early over-optimisation can often come at the expense of readability and simplicity.

While there are always caveats to any sweeping statement, in the majority of cases a lack of maintainability is likely to hurt your team much sooner than a lack of performance. Indeed, one is likely to have a linkage to the other as successive developers make sub-optimal changes to a code base they don't understand.

Debuggable code exhibits a clear statement of intent along with a clear approach.

Overreaching for conciseness or performance can act to hide intricacies that will hamper efforts to debug and patch.

Comments are no saviour in this situation; while they have a place, they have no direct link to the code they are associated with and can easily cause more confusion than insight.

If you are unfortunate enough to have to apply a fix to code like this then consider refactoring to increase maintainability for the next developer to come along. Code that has previously had defects, especially ones that have been difficult to solve, is probably more likely to have them again and each subsequent patch is only likely to make the situation worse.

Being an effective debugger is one of the skills that comes with experience; the scars that can come from a debugging battle are a rite of passage for a developer. Whilst it can portray elements of an art over a science, an acceptance of it being part of the development cycle, approached in an analytical manner, can provide an effective framework for gaining that experience.