Sunday 7 October 2018

Testing Outside the Box


Automated testing of some description is now commonplace within the majority of development teams. It can take many different forms: unit testing, integration testing or BDD-based testing.

These different approaches are designed to test individual units of code, complete sub-systems or an entire feature. Many teams will also be automating non-functional aspects of their code via non-functional testing (NFT) and penetration testing.

But does this represent the entire toolbox of automated testing techniques that can be employed in a shift-left strategy? Whilst they certainly represent the core of any effective automation strategy, if we think outside the box then we can come up with more inventive ways to verify the correct operation of our code.

Randomised UI Testing

Using BDD-inspired tests to verify the implementation of a feature usually involves automating interaction with the application's UI, simulating clicking, scrolling, tapping and data entry. This interaction will be based on how we intend users to navigate and interact with our UI; however, users are strange beasts and will often do things we didn't expect or cater for.

Randomised UI testing attempts to highlight potential issues if users do stray off-script by interacting with the UI in a random, unstructured way. The tests do not start out with a scenario or outcome they are trying to provoke or test for; instead they keep bashing on your UI for a prescribed period of time, hoping to break your application.

Sometimes these tests will uncover failures in how your application deals with non-golden-path journeys. On occasion the issues they find may well be very unlikely ever to be triggered by real users, but they nonetheless highlight areas where your code could be more defensive or less wasteful with resources.
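As a rough illustration of the idea, the sketch below (in Python, against a toy stand-in for a real UI — the FormModel class and its methods are invented for this example, not a real framework) fires randomly chosen interactions at the "UI" for a fixed number of steps, with a fixed seed so that any failure it provokes is reproducible:

```python
import random

class FormModel:
    """A toy stand-in for a real UI under test (hypothetical example)."""
    def __init__(self):
        self.text = ""
        self.submitted = False

    def type_text(self, chars):
        self.text += chars

    def clear(self):
        self.text = ""

    def tap_submit(self):
        # Defensive check: submitting an empty form must not crash.
        self.submitted = bool(self.text)

def monkey_test(app, iterations=1000, seed=42):
    """Fire random interactions at the UI for a fixed number of steps."""
    rng = random.Random(seed)  # fixed seed => reproducible failures
    actions = [
        lambda: app.tap_submit(),
        lambda: app.type_text(rng.choice("abc123!@#")),
        lambda: app.clear(),
    ]
    for _ in range(iterations):
        rng.choice(actions)()  # no script, no expected journey

monkey_test(FormModel())
```

In a real setting the actions would drive an actual UI automation layer (tools such as Android's monkey exerciser work on this principle), and the "pass" condition is simply that nothing crashes, hangs or leaks along the way.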

Mutation Testing

Unit tests are the first line of defence to prove that code is still doing the things it was originally intended to do, but how often do we verify that the tests can be relied upon to fail if the code they are testing does develop bugs?

This is the goal of mutation testing: supporting tooling will deliberately alter the code being tested and then run your unit tests, in the hope that at least one test will fail and successfully highlight the fact that the code under test is no longer valid. If all your tests still pass then an equivalent defect could be introduced by a developer and potentially not be caught.

The majority of tooling in this area makes subtle changes to the intermediate output, such as bytecode, produced from your source. This may involve swapping logical operators, mathematical operators, prefix and postfix operators or assignment operations.

Issues highlighted by mutation testing enable you to improve your unit tests to ensure that they cover all aspects of the code they are testing. It's also possible that they will highlight redundant code that has no material impact and can therefore be removed.

Constrained Environment

Up until now we've concentrated on testing the functional aspects of code, but failures in non-functional aspects can impact users just as much. One approach to this is Non-Functional Testing (NFT), which pours more and more load on your system in order to test it to breaking point.

Another approach can be to run your system under normal load conditions but within an environment that is constrained in some way.

This might mean running with less memory than you have in your standard environment, or less CPU. It might mean running your software alongside certain other applications that will compete for resources, or deliberately limiting bandwidth and adding latency to network calls.

In a mobile or IoT context this could take the form of running in patchy network conditions or with low battery power.
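One lightweight way to sketch this, assuming your network access goes through a function you can wrap, is to inject latency and random failures in front of the real call. Everything below (the degraded decorator, fetch_profile and the retry logic) is an invented Python illustration, not a real library:

```python
import functools
import random
import time

def degraded(latency_s=0.05, failure_rate=0.2, seed=7):
    """Wrap a network call so it behaves like a patchy connection.

    latency_s and failure_rate are illustrative knobs, not real defaults.
    """
    rng = random.Random(seed)  # seeded so runs are reproducible
    def decorator(call):
        @functools.wraps(call)
        def wrapper(*args, **kwargs):
            time.sleep(latency_s)            # added round-trip latency
            if rng.random() < failure_rate:  # simulated dropped request
                raise TimeoutError("simulated network timeout")
            return call(*args, **kwargs)
        return wrapper
    return decorator

@degraded(latency_s=0.01, failure_rate=0.3)
def fetch_profile(user_id):
    return {"id": user_id}  # stands in for a real HTTP call

def fetch_with_retry(user_id, attempts=20):
    """The code under test: does it cope when the network misbehaves?"""
    for _ in range(attempts):
        try:
            return fetch_profile(user_id)
        except TimeoutError:
            continue  # a defensive client backs off and retries
    raise TimeoutError("gave up after retries")

print(fetch_with_retry(42))
```

Running your normal test suite with the wrapper switched on tells you whether your retry, timeout and fallback logic actually earns its keep, rather than only ever being exercised against a well-behaved network.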

Although this style of testing can be automated, it doesn't necessarily produce pass/fail output; instead it allows you to learn about how your system reacts under adversity. This learning may highlight aspects of your system that aren't apparent when resources are plentiful or conditions are perfect.

It's also possible that this style of testing will show that you can reduce the resources of the environment you deploy your code into and benefit from cost savings.

The first goal of any automated testing approach should be to verify the correct operation of code, but with a little imagination it's possible to test other aspects of your code base and create feedback loops that drive continual improvement.

Effective development also involves treating all the code that you write equally; tests are as much part of your production code as the binaries that are deployed into production.

Defects in these tests will allow defects into your production environment that will potentially impact users. Not only should care be taken in writing them, but they should be under the same focus of improvement as every other line of code that you write.

