Sunday 24 March 2019

User Trials and Errors



The life of a software engineering team would be a lot easier if it were always fed exact requirements from the people who will use its software, requirements that, if delivered, would guarantee success.

There are many areas of software development that make it an inexact science, and one of these is how to identify and meet users' wants and needs. Certain aspects are obvious: no user wants software to crash or error, and everyone wants software to be secure, reliable and to fulfil its main purpose. Beyond this, which features users want to see and how they want those features to work is difficult to predict.

There are certain techniques we can use to try to smooth this journey towards having happy and content users, but the premise behind adopting these strategies has to be that we don't have the power to simply predict the right path.

There is no crystal ball, only controlled trial and error.

Analysing Failure

The majority of modern software applications employ some form of analytics, collecting data about how the software is being used for future analysis. Unfortunately, in almost equal proportion, this task is approached with a certain bias.

Frequently we decide on the data points we will collect with a view, either conscious or subconscious, to proving ourselves right in our assumptions. We look through a lens focused on how we expect and/or want people to use our software.

This very often leads to a situation where, when the data doesn't prove conclusive, we are hampered in further analysis because we aren't collecting the data that would unlock our understanding.

When deciding what data to collect, a general rule should be to record everything. Obviously this needs to respect user privacy, anonymity and consent: in this context we are talking about recording how users interacted with the software, not their personal details or data.
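As a sketch of what "record everything" while respecting anonymity might look like, the snippet below logs interaction events keyed by a salted hash of the user ID rather than the ID itself. The schema, function names and salting approach here are illustrative assumptions, not a prescribed design:

```python
import hashlib
import json
import time

def anonymous_id(user_id: str, salt: str = "app-salt") -> str:
    """Derive a stable anonymous identifier so events from the same
    user can be correlated without storing who that user is."""
    return hashlib.sha256((salt + user_id).encode()).hexdigest()[:16]

def record_event(user_id: str, action: str, context: dict) -> str:
    """Serialise an interaction event: what the user did, not who they are."""
    event = {
        "ts": time.time(),            # when it happened
        "user": anonymous_id(user_id),  # pseudonymous, not personal data
        "action": action,             # e.g. "button_click", "screen_view"
        "context": context,           # e.g. {"screen": "settings"}
    }
    return json.dumps(event)
```

In a real system the salt would be kept secret and the JSON line shipped to whatever analytics store is in use; the point is that the event captures behaviour broadly enough to answer questions you haven't thought of yet.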

We also need to set expectations about what our analysis will reveal: analytics will generally raise questions rather than provide answers. It will point out where we are failing, but not the solution. However, the value of these observations is that they aren't guesses; they are facts based on real data relating to real users.

Open for Criticism

Research with users is also common practice for many teams; however, this can be equally tainted by the desire to be proven right. We can lean towards wanting to show users our new ideas and gain acceptance for them, but this can be undermined by the natural human reaction to being asked if you want more.

Users are often likely to be positive about the possibility of new features, but this doesn't mean the current lack of a feature is their number one gripe with the software.

This kind of research can be much more useful if we are prepared to open ourselves up to a bit of criticism. The insights gained from effective analytics will have already highlighted problem areas; engaging with users to ask them exactly why these problems present themselves can yield further insight into what is wrong.

Openly asking people to criticise you is not an easy step to take, and sometimes this criticism may not be presented in a useful or constructive manner. But if you engage and try to structure the conversation, you will learn a lot about the problems that, if you can solve them, will make a big difference to users' views of your software.

Trial and Error

A theme running through this post is that you can't necessarily trust what users say, or what they do in isolation under controlled circumstances. The only real source of truth is what they do in large numbers when they are using your software in the real world.

This affects not only your analysis of issues but also the effectiveness of your solutions: nothing is certain until you've tried it with real users in real situations. This implies that you have to take a big leap into the unknown whenever you think you have something that will improve your users' experience, but thankfully tools exist to allow a more experimental approach.

By applying A/B testing you can trial your ideas on either a random or a targeted proportion of your user base and let analytics determine whether the change has altered user behaviour or influenced outcomes. This enables you to perform trials on real users in real situations whilst also reducing the impact when, inevitably, you are on occasion proved wrong.
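A common way to implement this kind of split is to hash the user ID together with an experiment name, so each user is deterministically and consistently assigned to the same variant. This is a minimal sketch under assumed names (`ab_bucket`, a 10% treatment group by default), not the API of any particular A/B testing framework:

```python
import hashlib

def ab_bucket(user_id: str, experiment: str, treatment_pct: float = 0.1) -> str:
    """Deterministically assign a user to 'treatment' or 'control'.

    Hashing user_id together with the experiment name gives a stable,
    roughly uniform assignment: the same user always sees the same
    variant, and different experiments split users independently.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    # Map the first 8 hex digits to a fraction in [0, 1].
    fraction = int(digest[:8], 16) / 0xFFFFFFFF
    return "treatment" if fraction < treatment_pct else "control"
```

Because the assignment is a pure function of the inputs, no per-user state needs to be stored, and raising `treatment_pct` later only moves users from control into treatment, never the reverse.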

If we accept that perfecting features first time, every time, is not a realistic ambition, adopting a strategy of trialling data-driven experiments in the wild is the next best option.

Much of adopting an agile mindset lies in accepting a lack of control over the process of delivering effective software. The techniques it promotes are geared around making the best of the situation given that limitation. Many who want to be agile struggle with letting go of the illusion of control; it isn't always a comfortable situation to be in, but that doesn't make it any less true.

Learning to use analytics and techniques like A/B testing will help with accepting this situation, and you can even learn to love the excitement of seeing whether this idea will be the one that makes a big difference. Just don't lose heart when it turns out it isn't quite there yet.

