Sunday 25 September 2016

Agile Quality



Being an agile team is not achieved by flipping a switch; becoming agile is a journey, and how validly you can call yourselves agile sits on a sliding scale.

Alongside adherence to the structure and ceremonies of your chosen agile approach, there is a mindset that needs to be adopted in order for the benefits of agile to be realised.

So if being agile is not a binary metric, how do we know in our day to day working lives whether we're moving in the right direction towards our agile nirvana?

Story Writing

An agile story is the device by which stakeholders communicate to the team the outcomes that need to be achieved to implement some desired functionality.

As with any form of communication, it's possible to do this badly: to generate misunderstanding and confusion that ultimately lead to a failure to deliver, or to a delivery that provides the wrong outcomes.

Story writing should not be an arduous process; it should promote discussion, but it shouldn't cause confusion.

A key to this is realising that story writing needs to start only when a broad understanding of outcomes is already in our grasp.

The process of story writing should be the team creating an understanding of what needs to be done and the approach that will be taken; it shouldn't be a painful exercise in trying to tease out of stakeholders what the point of the work is.

You can learn a lot about agile quality by watching a team in a story writing session. Are you watching a well oiled team who understand the product they work on and where it's going? Or are you watching a group of individuals churning out software without an understanding of why?

Backlog

Once a certain number of stories have been written, you have created a backlog.

Again, a backlog can be seen as a form of communication: it describes the journey a product is going on and identifies the signposts and milestones that need to be hit along the way.

A good agile backlog is fluid: the next priority can be changed, and stories can be added, removed and re-ordered.

What shouldn't be fluid are the overarching themes of what is being achieved: the product being built and the path of its evolution.

A low quality backlog will not demonstrate this cohesion; it will present itself as just a collection of work items thrown together to give a team something to do, with no obvious goals or points of delivery.

The quality of a backlog can often be seen by asking a member of the team to talk you through the stories: what they achieve, why they are in the order they are, and where a deliverable exists.

Delivery

The point of all this is to deliver software, and almost regardless of which development philosophy you apply, the effectiveness of a team should be measured by its ability to deliver working software.

However, we need to understand what is realistic to expect.

No matter the quality of a team, the ability to deliver any amount of scope in any amount of time is a fallacy.

What should be valued is the predictability of a team's delivery, and this predictability comes from an accurate and stable velocity.

When this is combined with a quality backlog made up of quality stories, it enables accurate estimation.
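
To make the arithmetic concrete, here is a minimal sketch of velocity based forecasting; the class name and the numbers are hypothetical, not taken from any real tool. Average the points completed in recent sprints, then divide the remaining backlog by that figure.

    // A minimal, hypothetical sketch of velocity based forecasting.
    public final class Forecast {

        // Average recent sprints to smooth out sprint to sprint noise.
        static double averageVelocity(int[] recentSprintPoints) {
            int total = 0;
            for (int points : recentSprintPoints) {
                total += points;
            }
            return (double) total / recentSprintPoints.length;
        }

        // Remaining backlog divided by velocity, rounded up to whole sprints.
        static int sprintsRemaining(int backlogPoints, double velocity) {
            return (int) Math.ceil(backlogPoints / velocity);
        }

        public static void main(String[] args) {
            double velocity = averageVelocity(new int[] { 19, 21, 20 }); // 20.0
            System.out.println(sprintsRemaining(120, velocity));         // 6
        }
    }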

A stable velocity can only be achieved when a team, whose members don't change, is working its way through a well crafted backlog towards a well understood goal.

Velocity should be the third measure of a team's agile quality. As the team improves its practices it should show an upward trend, but without large variation from sprint to sprint; in the short term, past performance should predict future delivery.

Ultimately all these aspects of agile combine and interact; the important thing is to embrace the agile mindset and realise that agile is not a project management process.

It is a living, organic approach that embraces change and aims to deliver what is required on a regular and improving cadence. But as with any system, garbage in will lead to garbage out.

Sunday 18 September 2016

Extend Don't Modify


One of the five pillars of SOLID software engineering is the open/closed principle:

    A software entity should be open for extension but closed for modification. 
 
With the advent of distributed source control systems, such as Git, many claim that the open/closed principle (OCP) has lost its relevance. We don't need to fear modification since we can accurately track change, roll it back if required and visualise the history of a piece of code.

I argue that this views OCP in a limited sense: software is inherently prone to fragility, and any architecture or implementation that promotes modification is on a path to increase this fragility and ultimately lead to defects and instability.

Concentration of Knowledge

Regardless of whether or not we can track modification, we should still ask ourselves why we are required to modify rather than extend a class; often it is because we have created a concentration of knowledge, and therefore a concentration of change.

If we have a class responsible for drawing shapes, we will be forced to change that class every time a new type of shape enters our universe; if we instead delegate responsibility for drawing to each individual shape, we can naturally extend this universe just by adding new shape classes.
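
As an illustration, a minimal sketch of the two designs, with hypothetical names, might look like this:

    // A hypothetical "closed" design: every new kind of shape forces a
    // change to this one class.
    class ShapeDrawer {
        void draw(String shapeType) {
            if (shapeType.equals("circle")) {
                // circle specific drawing
            } else if (shapeType.equals("square")) {
                // square specific drawing
            }
            // ...and another branch for every shape we ever add.
        }
    }

    // The "open" alternative: each shape carries its own drawing logic,
    // so the universe grows by adding classes, not by editing one.
    interface Shape {
        void draw();
    }

    class Circle implements Shape {
        public void draw() { /* circle specific drawing */ }
    }

    class Square implements Shape {
        public void draw() { /* square specific drawing */ }
    }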

The problem in the first design arises from the fact that knowledge of how to draw shapes was concentrated in one place, when the number of shapes we have to work with was always going to be on an upward trajectory.

Sometimes it's a fine line between obeying OCP and adhering to other SOLID principles, such as the Single Responsibility Principle (SRP). However, both principles are trying to limit the need for change: if we concentrate knowledge in a particular class, we need to keep in mind how much that knowledge may need to expand.

Too Many Cooks

Another important aspect of OCP is a much more practical one.

Source control systems may be good at tracking change but they cannot smooth the problems we encounter when many engineers have rolled up their sleeves to go under the hood of a piece of code.

Creating a code base where many developers must work in the same area will have an impact on velocity: developers will struggle to get their code merged, struggle to understand an area of code that is under modification, and have to deal with the defects that arise as a result.

Sometimes we concentrate solely on properties of code that are slightly abstract in nature, but the ease with which a code base can be worked on by a team should not be forgotten or ignored.

OCP helps create a code base where many developers can work together in harmony without tripping over each other and without creating confusion.

This Used to Work

Software has the potential to be fragile: it is written by developers who are only human and will make mistakes.

We put automation in place to try to catch these errors, but on occasion they still make it through.

Promoting change in already working software should be left to refactoring activities that improve the overall quality of the code; it shouldn't be a daily occurrence because the software has been built in such a way that it must be deconstructed and re-built with new functionality on every iteration.

The ideal situation is that whenever new functionality is added, the risk of defects exists only within that new functionality. It shouldn't be the case that all bets are off on the working state of previous functionality because its implementation has been modified to accommodate the additions.

The SOLID principles are not universal truths; there are occasions where strict application would not improve a code base. But that needs to be viewed in the round: sometimes the benefits they bring are both broad and subtle.

Although the invention of many of those principles pre-dates the world in which we now develop software, the truth within them still holds: how to engineer good software hasn't changed that much, and it's unlikely any of those five principles will ever be declared defunct.

Monday 12 September 2016

Feature Quest



When working on a software project it can sometimes feel like you're part of a never ending conveyor belt of feature delivery.

It's easy to fall into the trap of becoming obsessed with delivering more and more features for users, and valuing this above all other aspects of the product.

In reality, there is more to developing a product than bombarding users with a constant stream of shiny new features.

When you come to think about it, how often do the major platforms such as Google, Facebook, Apple or Amazon release new features that are visible to users? It's not as often as you think.

Working is a Feature

Sometimes we assume that users are judging us by the cadence at which we give them new toys to play with.

Generally, if you ask a user "would you like it to do this..." you will get some form of positive reaction. However, if you ask them what bugs or annoys them about your product, they will talk about the problems they have with the current feature set.

Performance and stability need to be viewed as features that are just as valuable as the next bright idea; users have very little tolerance for a decline in either aspect of your product.

You are much more likely to lose users because your product is not functional, or doesn't do what it says on the tin, than through boredom at a perceived slowdown in new features.

Quality in your code base is not a nice-to-have; ultimately it's what users value, and no amount of bells and whistles will distract them from it.

Internally Facing Features

We don't necessarily have to view the features we deliver as always being focused on the user.

In their own way, the organisations we work in are consumers of our product: we have to work with it, deploy it and maintain it.

Features can address these internal areas just as much as they can have an impact on the things a user is aware of.

Although working on these areas might be seen as merely making our own lives easier, it also has an indirect knock-on effect on our users.

Anything that makes our lives easier makes delivering software easier, and easier delivery means more frequent updates, which in turn means delivering to the user at a higher rate.

When evaluating potential new features we need to look at the whole delivery chain, not just the tip of the iceberg represented by what is visible to the user.

Feature Overload

Pick any popular piece of software and ask people what they use it for; in the vast majority of cases the list will be relatively small and will not include everything that software is capable of.

It is equally likely that if you compare how the features on that list work now with how they worked in the original version, you will see a large progression.

If we take an agile, iterative approach to development we should adopt a mindset of "better and better", not "more and more".

We should concentrate on the core set of features that we feel are the prime value drivers in our product, and work on ways to deliver that functionality in an ever better way.

These improvements may be marginal, but delivered continually over a relatively short period they add up.

A performance tweak here and a bug fix there, some polish to the UI here and some smoother UX there can all make users feel like your product is more useful in their lives.

Sometimes we develop a features-at-all-costs attitude that does more harm than good and is not really what users want.

Users want the functionality that originally brought them to your product to work and work well.

In time they will come to expect you to offer them more, but not at anywhere near the pace we sometimes think they do.

When the time for new features does come, they should be an organic extension of what you already offer, without the need to re-invent the wheel or hunt for a silver bullet.

Sunday 4 September 2016

Flipping Dependencies



Sometimes we can be guilty of following a principle or a pattern without fully understanding or appreciating why this is a good thing.

Some could argue: what difference does it make if you're writing good code? But without understanding what makes the code good, you invite the risk of unintentionally degrading it.

One of these patterns is dependency injection; developers will sometimes implement it without appreciating that the reason for doing so is to apply the principle of Inversion of Control (IoC).

Static vs Dynamic

A class that does not apply IoC is statically bound to its dependencies: they are defined when the class is compiled and will never change without the code being changed.

When a class is written to apply the IoC principle it is dynamically bound to its dependencies: they are not necessarily defined at compile time, but are instead loosely coupled through a shared contract, in the form of an interface, between the dependency and the dependent.

The major point here is that we have inverted who is in control of defining which implementation will fulfil the dependency for the class we have written. Rather than the class itself controlling this, the framework in which the class operates is in control.
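
A minimal sketch of the two forms of binding, with hypothetical names, might look like this:

    // Statically bound: the dependency is fixed when this class is compiled.
    class StaticReportService {
        private final SqlReportStore store = new SqlReportStore();

        void publish(String report) {
            store.save(report);
        }
    }

    // The shared contract between dependency and dependent.
    interface ReportStore {
        void save(String report);
    }

    class SqlReportStore implements ReportStore {
        public void save(String report) { /* write to a database */ }
    }

    // Dynamically bound via constructor injection: whoever constructs this
    // class, typically the framework, controls which implementation is used.
    class InjectedReportService {
        private final ReportStore store;

        InjectedReportService(ReportStore store) {
            this.store = store;
        }

        void publish(String report) {
            store.save(report);
        }
    }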

So why is this inversion a good thing?

Hang Loose

The primary benefit of IoC is to create loose coupling between classes.

The benefits of this to the dependent class should be clear: it makes the class eminently testable and increases the opportunities for re-use.

It is almost by definition impossible to unit test a class that is statically bound to its dependencies.

A unit test should have only one reason to fail, namely that the class being tested is not supplying its required functionality. The failure should not be caused by one of its dependencies; that would make it an integration test, which doesn't mean the test has no value, but it does make the results harder to interpret.

If it's impossible to break a class away from its dependencies then it's impossible to write a unit test.
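
Reusing the hypothetical ReportStore sketch above, a hand rolled test double shows how injection isolates a test to that single reason for failure:

    // A hand rolled test double, fulfilling the ReportStore contract from
    // the sketch above; no database is involved.
    class FakeReportStore implements ReportStore {
        String lastSaved;

        public void save(String report) {
            lastSaved = report;
        }
    }

    class InjectedReportServiceTest {
        public static void main(String[] args) {
            FakeReportStore fake = new FakeReportStore();
            InjectedReportService service = new InjectedReportService(fake);

            service.publish("monthly figures");

            // The only reason this can fail is the class under test.
            if (!"monthly figures".equals(fake.lastSaved)) {
                throw new AssertionError("report was not saved");
            }
        }
    }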

Quite often, despite the fact that a class has the control to define the implementation of its dependency, it has no reason to hold this control: it has a dependency on the functionality, not on the implementation.

By applying IoC we model this situation properly and allow the class to function with any implementation willing to provide the desired functionality. We have made the class more flexible and re-usable, and ensured we do not needlessly write more code or copy code we've already written.

Loose coupling also benefits the class supplying the dependency: when dependents are statically bound to an implementation, we don't allow for any change in that implementation without transmitting that change to the dependent classes.

By dynamically binding to the functionality, we allow a change in implementation to have no impact on the dependent class, which can carry on working in the way it did before.

Giving Up Control

Once a class is no longer responsible for creating its dependencies it can also be absolved of any responsibility for implementing logic relating to that creation.

The IoC principle allows constructs such as singletons, object pools or prototypes to be hidden from the class that needs the functionality, and centralised within the container we implement to realise the principle.
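
To sketch the idea, and this is a toy illustration rather than any real container's API, the lifetime logic can be centralised like this:

    import java.util.HashMap;
    import java.util.Map;
    import java.util.function.Supplier;

    // A toy container: lifetime decisions live here, not in consumers.
    class ToyContainer {
        private final Map<Class<?>, Supplier<?>> bindings = new HashMap<>();

        // Prototype scope: a fresh instance on every resolve.
        <T> void bindPrototype(Class<T> type, Supplier<T> factory) {
            bindings.put(type, factory);
        }

        // Singleton scope: the same instance is handed out every time,
        // without the consuming class ever knowing.
        <T> void bindSingleton(Class<T> type, T instance) {
            bindings.put(type, () -> instance);
        }

        <T> T resolve(Class<T> type) {
            return type.cast(bindings.get(type).get());
        }
    }

A class that asks this container for a ReportStore neither knows nor cares whether it receives a fresh instance or a shared one; that decision is made once, where the container is configured.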

This is another example of how applying IoC leaves the class that requires dependencies to be written with concern only for the functionality it needs to offer, and not also responsible for playing a role in implementing the framework our code runs in.

In this way IoC can be seen as playing a role in ensuring we follow multiple elements of the SOLID principles: not only dependency inversion, but also single responsibility and the open/closed principle.

Too often we simply show developers a piece of code and tell them this is good code.

But this encouragement to repeat what you've been shown, without an appreciation of why the code is good, will inevitably lead to the subtleties of the code being missed.

It is never the case in software engineering that any particular pattern can be endlessly applied; there are always other factors to consider and different situations to recognise.

The way to enable developers to appreciate this is to teach them the principles used to define good code; knowing which patterns to use is a natural consequence of understanding those principles.