Monday 27 May 2019

Effective Test Automation



As a DevOps mentality has taken hold in the industry, teams have increasingly focused on test automation. This is mainly because an effective test automation strategy can drive an increased release cadence by reducing the manual effort involved in declaring code ready for production.

On the face of it this may seem like a straightforward endeavour: write the tests, run the tests and release the software. However, like many aspects of software engineering, there is a subtlety to its proper application. Badly implemented, test automation can add no value, and at worst can actively degrade quality.

Effective test automation cannot be defined or explained in a single blog post, but presented below are a few things to look out for when judging the effectiveness of test automation in your code base.

Red Means Stop

The number one mistake teams make when implementing automated testing is for there to be no consequences when tests fail. Various reasons are given for this: there is a bug in that test, it's a timing issue, that wouldn't happen when the application runs for real.

In all those situations, if the underlying issue can't be addressed then those tests shouldn't be run, because they offer little value. The whole purpose of an automated test suite is to act as a traffic light for the code base: if we're all green we are good to go; if anything is red we need to stop and fix it. Releasing software despite failing tests creates a culture of ignoring tests, which will either increase the amount of manual testing you need to get comfortable with the code base or mean you knowingly ship with defects.
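
As a minimal sketch of the traffic-light idea, a release can be gated on an all-green run. The script below assumes a pytest-based suite; the deploy.sh step is a hypothetical stand-in for whatever your real release mechanism is.

#!/usr/bin/env python3
"""Release gate: the test suite is the traffic light, and red means stop.

A minimal sketch assuming a pytest suite; deploy.sh is a hypothetical
placeholder for your actual release step.
"""
import subprocess
import sys

def main() -> int:
    # Run the full suite; pytest exits non-zero on any failure or error.
    result = subprocess.run(["pytest", "tests/"])
    if result.returncode != 0:
        print("Tests are red: stopping. Fix or remove the failing tests;")
        print("do not release around them.")
        return result.returncode

    # Only reached on an all-green run.
    print("Tests are green: proceeding with release.")
    subprocess.run(["./deploy.sh"], check=True)  # hypothetical deploy step
    return 0

if __name__ == "__main__":
    sys.exit(main())

The important property is that there is no third outcome: the gate either passes everything or refuses to release, so a red result always has a consequence.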

Much more value can be derived from a smaller set of automated tests that are robust and meaningful than from a larger set that is fragile and whose results require significant interpretation.

Fix Early or Release Broken

Once you have a set of reliable tests you are prepared to put faith in, the next most important factor is when you run them. The further right in the development timeline the tests are run, the more pressure there will be to ship code regardless of the results.

Depending on the timespan of your project, it is also likely that the cost of fixing defects found further right in the process will be higher, and that despite the higher cost any fix will be more like a patch than a well-engineered solution.

The further left the tests run, by which we mean the nearer to the time the developer first writes the code, the more time is available to find a fix and the cheaper the fix is likely to be. The closer the tests are run to the proposed release date, the more they become a box-ticking exercise under significant pressure to continue regardless of the results.
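
One way to pull the tests left is to run the fast ones before code even leaves the developer's machine. The sketch below assumes a pytest suite where quick unit tests carry a hypothetical "fast" marker; installed as .git/hooks/pre-push and made executable, it blocks the push while the problem is still cheap to fix.

#!/usr/bin/env python3
"""Git pre-push hook: run the fast tests before code leaves the machine.

A sketch assuming a pytest suite whose quick unit tests are marked with
a hypothetical @pytest.mark.fast. Install as .git/hooks/pre-push.
"""
import subprocess
import sys

# -m fast selects only the quick tests, keeping the hook fast enough
# that developers won't be tempted to bypass it.
result = subprocess.run(["pytest", "-m", "fast", "--quiet"])
if result.returncode != 0:
    print("Fast tests failed; push aborted. Fixing now is cheaper than later.")
    sys.exit(1)
sys.exit(0)

Keeping the hook to a fast subset is a deliberate trade-off: the full suite still runs later in the pipeline, but the most common breakages are caught at the moment they are cheapest to fix.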

Variable Scope and Focus

The release of any piece of software is often focused on particular areas of functionality, which naturally means the potential for bugs or issues is higher in those areas. Whilst there are likely to be core areas of functionality that always need to be verified, we can maximise the effectiveness of an automated test suite by allowing it to adapt to the current focus of developers' attention.

This shift in focus may be automatic, based on analysis of developer commits, or it may come via configuration or any other mechanism that allows manual changes in emphasis. Knowing that the available automation resources have been focused on the areas of code most likely to have regressed will go a long way towards increasing confidence that new or adapted features are ready to ship.
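
As an illustration of the commit-driven approach, a small script can map the files touched since a base revision onto the test suites that cover them, always including the core suite. The directory-to-suite mapping below is entirely hypothetical and would need to reflect your own repository layout.

#!/usr/bin/env python3
"""Focus the suite on recently changed areas of the code base.

A sketch: the mapping from source areas to test suites is hypothetical
and must be adapted to your repository layout.
"""
import subprocess
import sys

# Hypothetical mapping from source areas to the tests that cover them.
AREA_TO_TESTS = {
    "billing/": "tests/billing",
    "auth/": "tests/auth",
    "api/": "tests/api",
}
CORE_TESTS = ["tests/core"]  # core functionality, always verified

def changed_files(base="origin/main"):
    """List files changed since the given base revision."""
    out = subprocess.run(
        ["git", "diff", "--name-only", base],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.splitlines()

def select_tests():
    """Core tests plus the suites covering recently changed areas."""
    targets = set(CORE_TESTS)
    for path in changed_files():
        for area, tests in AREA_TO_TESTS.items():
            if path.startswith(area):
                targets.add(tests)
    return sorted(targets)

if __name__ == "__main__":
    # Run the core suite plus whatever the recent commits put in focus.
    sys.exit(subprocess.run(["pytest", *select_tests()]).returncode)
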

The building of an automated test suite is always done with the best of intentions, but implementations often end up sidelined and given no particular relevance in the release process. This usually comes from a viewpoint that simply writing the tests is enough; it isn't. Tests that don't relay unequivocal information about the state of the code base, or that are run too late for that information to be effectively acted upon, represent wasted effort.

To avoid this, decide on your areas of nervousness when releasing and try to develop strategies for addressing those concerns via automation. Also treat this automation like any other code: expect it to need refactoring and development as the code base moves on. Treat it as a living, breathing area of code that is your ally in making the important decision of when something is ready to ship.

