Monday 9 January 2017

Metrics to Live By


The use of static analysis tools is now commonplace in most development teams. These tools offer a multitude of numbers and measures representing a plethora of different aspects of a code base.
Paying attention to all of these statistics and deciding on possible actions would be a full-time occupation, not just to remedy any potential issues being highlighted but simply to understand what each metric is trying to represent.
So is static analysis simply a waste of time? No, but there are certain measures that are worth paying more attention to.
Code Coverage
I'm not about to preach the virtues of unit testing, I have done that countless times. But when assessing the fruits of your testing labour, it's important to realise that analysing code coverage is about more than just looking at the headline number.
Many tools allow your code base to be broken down in terms of complexity or reference count; it's key to marry this with code coverage.
A coverage figure of 80% may sound impressive, and more than adequate, but if the 20% not being covered represents the most complex and widely used areas of code then this will severely limit the confidence that can be taken from the tests passing.
The headline code coverage figure is still important, but try to ensure that it doesn't represent the low-hanging fruit of easy-to-test code, leaving the core of your application untested.
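As a rough sketch of marrying coverage with complexity, the snippet below uses made-up per-function metrics (the function names, complexity values and coverage figures are purely illustrative, not from any real tool's output) to show how a respectable headline number can hide exactly the wrong gaps:

```python
# Hypothetical per-function metrics, in the shape a static analysis
# tool might export them: (name, cyclomatic complexity, line coverage 0-1).
functions = [
    ("parse_order", 14, 0.20),
    ("format_label", 2, 1.00),
    ("apply_discount", 9, 0.35),
    ("log_event", 1, 0.90),
]

# The headline figure treats every function equally...
headline = sum(cov for _, _, cov in functions) / len(functions)

# ...but pairing coverage with complexity reveals where the real risk sits:
# complex, widely exercised code that the tests barely touch.
risky = [name for name, complexity, cov in functions
         if complexity >= 8 and cov < 0.5]

print(f"headline coverage: {headline:.0%}")   # looks reasonable on its own
print("complex but poorly tested:", risky)    # the functions to worry about
```

The thresholds (complexity 8, coverage 50%) are arbitrary; the point is only that the ranking, not the headline average, tells you where to direct testing effort.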
Duplication
Many tools can scan your source code for blocks of identical code and highlight where you have duplication.
Duplication is not only to be avoided for the maintenance overhead it brings; when it spreads through a code base it can be indicative of more fundamental problems in the make-up of a system.
It may be that a developer was left with little choice but to duplicate code because bad abstractions or poor decoupling made an area of code too challenging to make reusable.
It may also be that the possibility of reuse simply wasn't obvious because the code was overly complicated or obscure.
No matter the cause, duplication in a code base degrades its quality and makes it harder to grow and extend its functionality.
Pay attention to its growth and look for the structural or architectural problems it may be indicating.
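The core idea behind these duplication scanners can be sketched in a few lines: slide a fixed-size window over normalised source lines and report any window that appears more than once. This is a deliberate simplification (real tools work on token streams and tolerate renamed identifiers), and the window size of 4 is an arbitrary choice:

```python
from collections import defaultdict

def find_duplicate_blocks(lines, window=4):
    """Report runs of `window` identical (whitespace-normalised) lines
    that appear more than once, keyed by block, valued by the 1-based
    line numbers where each copy starts."""
    normalized = [ln.strip() for ln in lines]
    seen = defaultdict(list)
    for i in range(len(normalized) - window + 1):
        block = tuple(normalized[i:i + window])
        if any(block):  # ignore windows that are entirely blank lines
            seen[block].append(i + 1)
    return {block: starts for block, starts in seen.items() if len(starts) > 1}
```

For example, a file containing the same four-line summing loop twice would yield one duplicate block with two start positions; stripping whitespace means the copies still match even if one is indented differently.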
Technical Debt
Most tools have many different measures of technical debt, some straightforward and some complicated to get to grips with.
Mostly they relate to a code base being analysed against a set of rules designed to highlight deviations from agreed principles relating to structure, syntax and coding practices.
These rule sets can be large and are often broken up into categories of severity based on the impact of the technical debt they represent.
Whatever the rule or severity, the team must agree on which rules represent how they want to write code, and agree that areas of code breaking those rules must be fixed.
Too often when analysing the output of these tools, teams will skip over violations, deeming them not important or not urgent.
Any rule that carries this assessment should be removed. Even if the violations represent small pieces of technical debt, they will build up until the team can no longer see the bigger, more critical problems, and becomes disillusioned with the benefit of running these kinds of scans.
Software engineering can be a very creative and intangible discipline; assessing the quality of a code base is not as simple as running the code through a machine and getting a red or green light.
The output of all static analysis tools should be viewed with the benefit of the knowledge you have about the history and make-up of the source; code isn't written by robots.
But they do provide indicators as to where things are going well and where some of your attention should be directed. Use them as a tool to improve things, not a weapon to punish each other with, and make sure you understand what drives the numbers to ensure you maximise their usefulness.
