Monday 29 May 2017

Later Never Comes


Within software development there are many different variations on the theme of putting off till tomorrow something that could be addressed today.
We'll fix this later, we'll sort this out in a future release, we'll revisit this at some point.
The main problem with most of these phrases is that later often never comes; to put it another way, the problem with throwaway code is that we very rarely throw it away.
There are good and bad ways of being pragmatic in balancing the need for engineering excellence and the need to ship software. Whenever the latter is placed above the former we call that technical debt, and we must all be aware of its implications.
False Velocity
The first impact of technical debt is the unrealistic expectations it gives birth to because of the misleading uplift it provides in a team's velocity.
Increases in velocity brought about by deliberate tech debt have essentially been bought on credit, with the debt collected by a future downward swing in velocity or by bugs and defects being shipped to production.
Unfortunately we very often limit our assessment of the health of a project to the short term and see only the momentary uplift the tech debt is providing; when the debt comes back to make its presence felt we attribute it to a variety of other explanations and don't connect the dots back to the decisions taken in the past.
We need to realise that the only sustainable changes in velocity are the long-term trends we gain by addressing how a team is operating, not by deliberately weakening our product.
Refactor Ratchet
Refactoring is a core skill of software engineering: take any sufficiently complex technological system and you will see a continual cycle of build, learn and refine.
Too often people believe refactoring is a synonym for rewriting; this tends to lead to a belief that it is required either because of mistakes in development or simply because the developers want to do something more interesting.
Building software is a complex business, and it is near impossible to nail a design or architecture on the first attempt. As more information is gained about what the system is growing into and what it needs to achieve, it will become necessary to re-evaluate the approach and refine.
One of the first casualties when we decide to fix things later is unit testing, and this severely impacts a team's ability to effectively refactor code. Each attempt to refactor becomes a leap into the unknown, without the ability to quickly verify continued correct operation.
It is usually the case that the required refactoring grows in line with the amount of untestable code in a code base; without a design for testability, strong coupling can mean any attempt to address issues in the code will involve a large area of affected code.
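As a minimal sketch of that safety net (the function and test below are hypothetical), a unit test pins down observable behaviour so the internals can be refactored freely and correctness re-verified in seconds:

    import unittest

    def total_price(prices):
        # the internals here are free to change during a refactor...
        return round(sum(prices), 2)

    class TotalPriceTest(unittest.TestCase):
        def test_sums_and_rounds_to_two_decimal_places(self):
            # ...because this test pins down the observable behaviour
            self.assertEqual(total_price([1.10, 2.20]), 3.3)

    if __name__ == "__main__":
        unittest.main()

Without tests like this, every internal change to total_price becomes one of those leaps into the unknown.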
Software Isn't Magic
With software we can sometimes do magical things but the engineering that gets us there is far from magic.
Although we work with a different raw material, we are subject to many of the same rules and truths as any other form of engineering: build your house on weak foundations and it will eventually collapse.
It is very easy for the efforts and hard work of a development team to create a false sense of security for stakeholders: no matter what decisions are made the product continues to work, even as the team warns of possible future calamities.
In many other forms of engineering such an approach would be considered negligent; if warnings about the structural integrity of a bridge or building were ignored we would rightly be concerned.
Although cracks in a software product are not always as visible, they are there, and if long-standing tech debt isn't addressed similar consequences can be expected.
It's unrealistic to say we will never incur any tech debt; sometimes, in the interest of pragmatism, it is necessary to do just enough engineering.
Most developers will accept this; the cynicism and scepticism when told we will fix this later is born from the fact that they have become used to later never arriving because new features are on the horizon.
A software system is like any other system: ignore its deterioration and it will eventually fail. A good development team will often find a way to push this date back, but all this usually achieves is to increase the size of the eventual breakdown. Sooner or later, quality isn't optional.

Monday 22 May 2017

Homework Excuses


The majority of developers will recognise the benefits of unit testing and most will make an effort to write some.
But the determination to ensure high test coverage can be paper thin, with excuses being made for why in this instance we won't write tests or fix the ones we've broken.
On a large number of occasions the root of these excuses will highlight something more fundamentally wrong with a code base.
False Assurance
Sometimes a justification for not writing unit tests right now is that "I know this works".
Firstly, I'm sure all developers can recall a time when their certainty in something being true has turned out to be false; this comes under the same category as "I haven't changed anything" or "That can't happen".
Unit tests not only prove a class does its job, they also ensure that it achieves this outcome in the way that's expected.
Secondly, and probably more importantly, the value of unit tests isn't really in proving that code works at the time of writing. As professional developers this should be a given; pull requests shouldn't be raised for non-functional code.
The real value of unit tests is in proving code works at any and all points in the future. The longer you leave code uncovered by unit tests, the wider the window in which a bug or regression can be introduced unseen by the team, possibly making it into production.
A healthy if slightly distrustful attitude in this situation would be "you say this works, show me the tests".
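As a minimal sketch of what that evidence looks like (the slugify helper here is hypothetical), the test is the proof that "this works", and it keeps proving it on every future build:

    import unittest

    def slugify(title):
        # hypothetical helper: lower-case a title and hyphenate its words
        return "-".join(title.lower().split())

    class SlugifyTest(unittest.TestCase):
        def test_lowercases_and_hyphenates_words(self):
            self.assertEqual(slugify("Later Never Comes"), "later-never-comes")

    if __name__ == "__main__":
        unittest.main()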
This Is Hard To Test
Another common excuse is that this particular class is hard to test.
Sometimes this can be true for a variety of reasons, but a healthy attitude to this situation is to assume it means some refactoring is required, with code only being declared untestable when a clear rationale has been defined.
This should largely only be the case when the code that requires testing has a dependency on something we don't have the ability to change or that cannot easily be mocked.
Un-testability is something that can spread through a code base like a virus. An untestable class spreads the disease to every class it becomes a dependency of. The result is that the scale of the required refactoring increases rather than our code coverage.
So whenever someone declares a class untestable, assume that this is our fault until proven otherwise.
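As a minimal sketch of the usual refactoring (the ReportBuilder below is hypothetical), a class is often "hard to test" only because it reaches directly for a dependency such as the system clock; injecting that dependency removes the excuse:

    from datetime import datetime

    class ReportBuilder:
        def __init__(self, now=datetime.now):
            self._now = now  # injected, so a test can substitute a fixed time

        def header(self):
            return f"Report generated {self._now():%Y-%m-%d}"

    # in a test, no mocking framework is even needed, just a fixed clock:
    #   builder = ReportBuilder(now=lambda: datetime(2017, 5, 22))
    #   assert builder.header() == "Report generated 2017-05-22"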
Much More Than Testing
While the major benefit of unit testing will always be testing our code, it brings other benefits that shouldn't be dismissed.
I am a big believer that unit tests make the best code documentation. On many occasions when wanting to understand more about how a class is supposed to be used I have looked through the unit tests; when these tests are well written I find a plethora of information about how the class is interacted with, what error conditions can occur and what output I should expect to get from it.
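As a minimal sketch (the Account class is hypothetical), well-named tests read like a specification of usage, error conditions and expected output:

    import unittest

    class Account:
        def __init__(self, balance=0):
            self.balance = balance

        def withdraw(self, amount):
            if amount > self.balance:
                raise ValueError("insufficient funds")
            self.balance -= amount
            return self.balance

    class AccountTest(unittest.TestCase):
        # each test name documents a behaviour or an error condition
        def test_withdrawal_reduces_the_balance(self):
            self.assertEqual(Account(100).withdraw(30), 70)

        def test_withdrawing_more_than_the_balance_is_an_error(self):
            with self.assertRaises(ValueError):
                Account(10).withdraw(50)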
Unit testing is also a key part of building a continuous integration and deployment workflow.
Whenever we talk about CI we are not referring to a pipeline that continually delivers code of questionable quality with a varying probability of working.
If we intend to automate the delivery of our software then we must create an environment where we can automate some degree of certainty that things are working.
This will never be foolproof, even the most highly tested code has been known to contain bugs, but I would back it over a fingers-crossed approach.
Whenever code is deemed to be untestable we need to view this as a design decision because it comes with consequences.
This attitude needs to be pragmatic, as nearly all code bases will have areas that genuinely aren't practical to unit test, but this should be a last resort with a clear rationale, not an immediate decision made every day.

Wednesday 17 May 2017

Agile Appearances


Sometimes the best of intentions can ultimately lead to incorrect or undesired behaviour.
This unfortunate chain of events can also be seen in the adoption of agile.
Largely this is caused by concentrating too much on the mechanics of agile, such as Scrum or Kanban, coupled with an unwillingness to let go of previously learnt behaviour.
What kind of behaviours or approaches to agile should we recognise as having an admirable goal but flawed execution?
Restricting Communication
A key aspect of agile should be encouraging communication between the people with the skill set and experience to solve problems.
Sometimes this communication will be structured and there is often a necessity for people to manage their time and workload.
But at no time should restrictions be put in place that prevent people who need and want to communicate from getting together and talking a problem through.
This may be seen in only allowing certain things to be discussed at certain times or by placing bureaucratic controls on who can talk to each other and when.
These kinds of restrictions often ultimately waste more time than they are designed to save; the reason agile promotes communication is that it is at the heart of every solution.
Only the answers to the simplest of questions are derived by an individual in isolation; every other problem is solved via teamwork, the currency of which is communication.
Ignoring MVP
The concept of a Minimum Viable Product (MVP) is often very easy to pay lip service to without realising its purpose and benefit.
Too often an MVP is seen as something we have to settle for: we want more but are being told it's not possible.
This can lead to the definition and delivery of an MVP largely being ignored, either meaning no value is seen to be represented by it or meaning more scope is included than is strictly necessary to form a delivery.
Shipping software is the whole purpose of the process; shipping an MVP should excite everyone involved, not because it's the finished article of what we're hoping to achieve but because it's our first opportunity to learn whether we can deliver value.
Value doesn't scale with the amount of code written or scope delivered; increasing both can also increase the scale of any failure.
MVP should encourage early and frequent delivery; it isn't something to define purely to satisfy the development team.
Long Term Integrated Planning
Agile could be seen as the religion of uncertainty; nearly all of its values are related to an acceptance that uncertainty cannot be ignored.
Once an agile approach has been put in place and effective scrum teams are seen to be delivering, there is often a strong temptation to plan further and further in advance, in more and more detail, on the assumption that the status quo will continue.
This planning is very often undone by users failing to see the value in a release we were confident about, or by the ball of string that unravels when too many interdependencies between scrum teams are required to deliver a release.
This is not to say that nobody should be thinking more than a few sprints ahead, but there should be an inverse relationship between the time span of the planning and the concreteness of the plans being made.
Although it seems like an obvious statement, we need to be agile: our planning needs to be fleet of foot, with the ability to change when new information becomes available or an unexpected roadblock presents itself.
Agile is not a complicated science with many laws and rules of operation; it is a set of principles trying to promote the behaviours that have been proven to deliver.
Sometimes we can lose sight of this and build up a large amount of process that ends up stopping us realising those benefits. Presented here are three examples of that scenario, but always be on the lookout for others.

Monday 1 May 2017

N-Tier Architecture


A key requirement of software architecture is to provide structure and order to a code base; an applicable adage could be "a place for everything and everything in its place".
The major tool that can be deployed to achieve this is to define clear and distinct layers through the code.
These layers must each have a clearly defined purpose, and their roles should be unambiguous.
This is often not as straightforward as you might think; certain pieces of functionality slip through the cracks or are not easily compartmentalised.
But aside from these cross-cutting concerns, the structure of software can quite often be divided into four main layers.
Presentation
The majority of software products have a UI, a mechanism by which data can be shown to a user and their interaction with it can be captured.
A presentation layer should be solely concerned with the logic and implementation of how data will be rendered; it should make no decisions about what will be shown.
The data to be shown may very well require processing to transform it into a more appropriate form for rendering, but this processing should not contain any business logic that changes the data within the context of the domain.
It should be a conduit for capturing user interaction, but should have no logic that processes or acts on this interaction aside from passing it down the line to a layer that does contain this knowledge.
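As a minimal sketch (the Order type and render_order function are hypothetical), the presentation layer transforms data for display but makes no decisions about it within the domain:

    from dataclasses import dataclass

    @dataclass
    class Order:
        order_id: str
        total: float

    def render_order(order):
        # purely presentational: formatting for display, no business decisions
        return f"Order {order.order_id}: {order.total:,.2f}"

If a discount needed applying or an order validating, that would belong in the business logic layer, not here.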
Application
The purpose of the application layer is to provide a shell or container for the other layers to exist in.
It provides the structure necessary for the code to execute in the target platform.
In the context of GRASP, classes in this layer are often referred to as Controllers. They should provide the necessary glue to combine all the other layers to produce a working application.
No logic exists in this layer aside from the knowledge necessary to operate in the chosen platform.
As much as possible this layer should be boilerplate code.
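As a minimal sketch (the names are hypothetical, and the service and renderer are assumed to come from the other layer sketches), a controller contains only the glue:

    class OrderController:
        def __init__(self, service, renderer):
            self._service = service    # business logic layer
            self._renderer = renderer  # presentation layer

        def show_order(self, order_id):
            # wire the layers together; no domain decisions are made here
            order = self._service.fetch_order(order_id)
            return self._renderer(order)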
Business Logic
This layer defines the operations and functionality relevant to the domain that the software represents.
This functionality is not tied to the implementation of the presentation layer; it represents the end-to-end processes that your software is designed to provide, and it can be visualised in any number of ways.
The opportunity for re-use of this layer across your software product suite is high, and it should be extremely testable; if layering is being effectively implemented, this layer will be key to ensuring that your software operates properly.
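As a minimal sketch (OrderService and its injected repository are hypothetical), the domain rules live here, independent of UI and storage, which is exactly what makes this layer so easy to unit test with a stub repository:

    class OrderService:
        def __init__(self, repository):
            self._repository = repository  # data access layer, injected

        def apply_discount(self, order_id, percent):
            if not 0 <= percent <= 100:
                raise ValueError("discount must be between 0 and 100")
            order = self._repository.get_order(order_id)
            order.total *= 1 - percent / 100  # a domain rule, nothing visual
            return order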
Data Access
Just about all software involves the processing and presentation of data.
There is a plethora of different methods and technologies that can be used to store this data; the purpose of the data access layer is to abstract this potentially complicated area of code and provide a consistent view of the domain.
This layer will very often also provide access to data retrieved via APIs or services, and will likely be asynchronous in nature.
The only logic in this layer should be how to retrieve data and represent it within the context of the domain, not what should be done with it or how it should be presented to the user.
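As a minimal sketch (the repository names are hypothetical), an abstract repository presents a consistent, domain-shaped view of the data while the concrete class hides the storage technology:

    from abc import ABC, abstractmethod

    class OrderRepository(ABC):
        @abstractmethod
        def get_order(self, order_id):
            """Return the order with the given id, however it is stored."""

    class SqlOrderRepository(OrderRepository):
        def get_order(self, order_id):
            # SQL queries, connections and caching live here and nowhere else
            raise NotImplementedError("storage details omitted from this sketch")

Swapping SqlOrderRepository for an in-memory fake is then a one-line change in the application layer, which is part of what makes the business logic layer testable.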
As well as defining what these layers do and don't do, it's important to adequately define the interfaces between them. Doing this effectively leads to testability, extensibility and the possibility of re-use.
The antithesis of the layered approach is often referred to as spaghetti code, characterised by the inability of those who work on it to fully understand the structure of the code, or even begin to think about extending its purpose or making improvements.
There is a reason that good software engineers often hate untidiness or disorder: a well-organised and structured code base delivers clear and unambiguous value.
When assessing your code base, be sure to evaluate whether you can clearly draw rings around these four main layers without seeing any bleeding of functionality across the boundaries: "a place for everything and everything in its place".