Sunday 19 June 2016

The Scientific Method



Agile is made up of many aspects and means different things to different people.
I believe one of the things it teaches us is that we can't control the environment our software will be used in, that our predictions are educated guesses, and that we must accept we may be wrong in part or entirely.
In this respect Agile development shares much with the scientific method: as developers we are conducting experiments with the goal of discovering what our users really want.
Constructing an Hypothesis
Whether we're planning the release of an entirely new piece of software or adding a new feature to something that already exists, the need we are trying to fulfil might seem obvious. It may be based on feedback from our users, or it may be an idea of our own invention. Either way, whether or not users end up thanking us for its addition is not just about the idea but also the execution.
We should prepare to make changes after we ship. These changes may be big or small; we may be close to the mark of what's required or some distance away.
The important aspect is that we anticipate the need for refinement and give ourselves somewhere to go in the code.
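One way to give ourselves that room is to put the new behaviour behind a feature toggle, so it can be refined, rolled back or replaced without waiting for another release. Below is a minimal sketch in Python; the flag store and the checkout functions are hypothetical stand-ins for illustration, not any particular library.

# A minimal feature-toggle sketch. The flag store is hypothetical;
# in practice the values might come from config, an environment
# variable or a remote flag service.
FLAGS = {"new_checkout_flow": False}

def is_enabled(flag_name: str) -> bool:
    """Return True if the named feature is switched on."""
    return FLAGS.get(flag_name, False)

def legacy_checkout(basket: list) -> str:
    return f"legacy checkout of {len(basket)} items"

def new_checkout(basket: list) -> str:
    return f"new checkout of {len(basket)} items"

def checkout(basket: list) -> str:
    # Route to the experimental path only when the flag is on, so
    # refinement or rollback is a config change, not a new release.
    if is_enabled("new_checkout_flow"):
        return new_checkout(basket)
    return legacy_checkout(basket)

print(checkout(["book", "pen"]))  # -> legacy checkout of 2 items

Flipping the flag is then a configuration change rather than a code change, which is exactly the somewhere to go we were after.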
Conducting the Experiment
The only people who can conduct our experiment are our users; they are the ones we are trying to provide value to, and it is they who will judge whether that value is being delivered.
However, feedback directly from users is often difficult to extract and interpret.
We therefore need a way to monitor their behaviour, a way to measure how people interact with, and react to, our software.
For this reason analytics and instrumentation need to be a core part of what we ship: analytics to observe what users are doing, or not doing, with what we have given them, and instrumentation to gauge the performance of our implementation.
Thought is required in choosing these key measures if we are to avoid ambiguity, but the data we collect represents the results of our experiment.
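As a concrete illustration, even a thin layer covering both kinds of measure is enough to start with. The sketch below, again in Python, assumes a hypothetical record_event sink; a real system would ship these events to an analytics backend rather than write them to a log.

import json
import logging
import time
from contextlib import contextmanager

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("telemetry")

def record_event(name: str, **properties) -> None:
    # Analytics: what the user did, or didn't do. The log is a
    # stand-in for a real analytics backend.
    log.info(json.dumps({"event": name, "props": properties}))

@contextmanager
def timed(operation: str):
    # Instrumentation: how well our implementation performed.
    start = time.perf_counter()
    try:
        yield
    finally:
        elapsed_ms = (time.perf_counter() - start) * 1000
        record_event("timing", operation=operation, ms=round(elapsed_ms, 1))

# Usage: capture behaviour and performance together.
record_event("checkout_started", items=2)
with timed("checkout"):
    time.sleep(0.05)  # stand-in for the real work

The point is that what users do (the events) and how the implementation performs (the timings) are captured together, as part of what we ship, not bolted on afterwards.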
Interpreting the Result
We started with an hypothesis of how we thought we could please our users and add some value.
We put software in their hands and ensured we were collecting data on how it was being used.
Now we need to analyse the results of our endeavours, and here we must not be afraid of a negative frame of mind.
It is all too easy in these situations to have a bias towards proving we were right. We knowingly or unknowingly construct our analytics platform to provide measures that will cast us in a positive light.
We should instead always be fearful that we may have missed something or not quite nailed it.
If we approach our analysis in this frame of mind and the results still show that what we delivered was a hit with users, we will truly know we got it right.
The flip side is that we must be accepting when the data shows we were wrong, in part or wholly. This doesn't have to be seen as a negative: we've gained an understanding of what doesn't work, and that can feed a better approach.
Agile is all about continual iteration and the regular deployment of something new to users; if we get it wrong this time, the next release is just around the corner and we will do better next time.
