Monday 27 July 2015

The Art Of The Finger In The Air



Estimating is part of a developer's world; hardly any conversation with a non-developer will proceed for long without one of the following being asked:
"How hard do you reckon that is?"
"How long will that take?"
These questions are usually accompanied by a sigh from the developer before he or she tries to give his or her best non-committal answer.
Unfortunately, estimating has become a game we play rather than an effective use of our time.
Estimates Aren't Weapons
One of the major reasons for this is that estimates are seen as synonymous with commitments; once a team has given an estimate, it's often filed away as ammunition to shoot them down with in the future.
Estimates are given in the moment, and their quality should be measured against the amount and quality of the information available at the time. When this information changes, either in quality or quantity, we should expect the estimate to change.
Conversely, we shouldn't expect the estimate to change when the information doesn't; asking the same question until you get the answer you want is a good definition of a waste of time.
When we say estimates are getting better, we mean they are getting closer to reality, not that they are getting bigger or smaller. The only time you will be able to truly accurately estimate a project is the day after you finish it.
Estimates Should Be Relative
Estimates are best used when they are stacked up against other estimates, provided those estimates are based on the same quality and amount of information.
If feature A is estimated at four weeks and feature B at eight weeks, the important thing isn't the actual numbers; it's the fact that, based on the information we have, feature A looks half as complicated as feature B.
This could be because feature A is actually easier to implement than feature B, or because feature A is better defined than feature B.
Many factors may go into our next step: we may choose to go with feature A, or we may choose to simplify feature B or fill in the gaps in its definition to try and increase the quality of the estimate. Again, by quality we mean closer to reality, not smaller.
Either way, we gain a lot more insight than if we only knew that feature B was estimated at eight weeks.
Estimates Don't Identify Risk
Estimates are often seen as managing risk, when in fact it's the information gained in trying to estimate that truly identifies the risk.
What do we understand about the risk of feature B by knowing it is estimated at eight weeks? What if we knew the estimate was higher because developers were worried about the amount of server work required?
Is the best course of action to hope the developers are right and feature B will take eight weeks, or to see what can be done to reduce the amount of server work involved?
It may be that nothing can be done, but by asking not just for the estimate but also for an explanation of the factors that went into it, we understand a lot more about the problem and where the risk lies.
Agile should be all about communication; quite often the discussion is more important than the answer it brings.
Estimate Value As Well As Implementation
To take the idea of estimates being relative even further, we should also estimate the value that we expect a piece of work to bring.
In planning poker, what if Product Owners, and why not users too, were asked to estimate the value they thought a feature would bring? What if we could identify features that score 3 for implementation but 13 for value?
Agile and MVP should be all about identifying situations like this, where a lot of value can be gained from a small amount of effort, and, even more importantly, identifying when a large amount of effort would be wasted on delivering little value.
In reality it won't always be possible to identify situations like this; quite often value and effort will be proportional and broadly in line. But the occasions where you can identify the mismatch will give you a big advantage: they will help you be first to market delivering clear value.
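To make this concrete, here is a minimal sketch (in Java, with invented feature names and scores) of how planning poker scores for effort and value could be combined to surface these mismatches:

    import java.util.Comparator;
    import java.util.List;

    public class BacklogRanker {
        // A feature with planning poker scores for effort and for value.
        record Feature(String name, int effortPoints, int valuePoints) {
            double valuePerEffort() { return (double) valuePoints / effortPoints; }
        }

        public static void main(String[] args) {
            List<Feature> backlog = List.of(
                    new Feature("Export to CSV", 3, 13),  // small effort, big value
                    new Feature("Custom themes", 13, 3),  // big effort, little value
                    new Feature("Saved searches", 8, 8)); // effort and value in line

            // Highest value-per-effort first: the 3-for-13 features float to the top.
            backlog.stream()
                   .sorted(Comparator.comparingDouble(Feature::valuePerEffort).reversed())
                   .forEach(f -> System.out.printf("%-15s %.2f%n", f.name(), f.valuePerEffort()));
        }
    }

The numbers are only relative scores, but sorting on the ratio is enough to make the 3-for-13 opportunities, and the 13-for-3 traps, jump out.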
Estimating Isn't Just About Numbers
The most frustrating thing about the estimating game we play is that we all know it doesn't really work.
Developers know when the estimates they are giving aren't based on reality, and Product Owners and Project Managers know the sense of security they are taking from the estimates is false.
We need to stop thinking of estimates as numbers we can plug into a plan we won't follow, and instead see them as a process by which we define the outcome we want and iteratively home in on when we think it may be delivered.
This does require us to acknowledge that some things just can't be estimated accurately if there is not enough information about the intended outcome.
For example, if I asked you,
"How long will it take you to bake a cake?"
You might reasonably ask, "What sort of cake?", and we can continue the conversation until we know exactly what kind of cake we are after.
The same applies to software: don't ask your developers to estimate the length of a piece of string; work with them to draw a sketch of the string, that way you'll both know how long it is.
  
  

Wednesday 22 July 2015

Contract Negotiations



It's a common situation for developers to end up with a dependency problem when they're looking at their sprint task board.
"We need to do that before we can do this, and we can't start on those until that bit is done"
This is often blamed on poor planning, or on some other team who haven't delivered. Sometimes that is the case, but sometimes the sprint board is actually telling us something important about our code.
Coupling Up
The inability to develop parts of the code side-by-side is a pretty good definition of tight coupling.
If you find that you're having to organise your sprint tasks in a specific, and more importantly sequential, order, you should consider whether this is because the classes involved are too tightly glued together.
Fight the Good Fight
Another important aspect to consider, if you're having trouble organising your tasks, is whether it will always be like this every time we change this area of the code.
Good code design anticipates change and has a plan for making it as smooth as possible. If changing one particular area strikes fear into your heart and always leads to a messy scrum board, this could point to an inherent problem with that portion of the code base.
Design By Contract
So, what if you have established that your sprint board is a mess because it's an accurate reflection of your code? What can you do about it?
The answer is interfaces.
If your classes interact with other concrete classes you are doomed to operate in a sequential manner. But what if, before we started coding anything, we gave a little thought to how these classes might need to interact?
If we define these interfaces before we start coding, all of a sudden we can code classes in parallel; provided we all agree to abide by the contract the interface describes, once all the classes have been written the system will work.
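As a minimal sketch of what that agreement might look like (in Java, with invented names), two developers can build either side of this contract in parallel:

    // The agreed contract: the "what", with no mention of the "how".
    interface PaymentGateway {
        boolean charge(String accountId, long amountInCents);
    }

    // Developer A codes the consumer against the contract alone...
    class CheckoutService {
        private final PaymentGateway gateway;

        CheckoutService(PaymentGateway gateway) {
            this.gateway = gateway; // injected, never constructed in here
        }

        boolean completeOrder(String accountId, long totalInCents) {
            return gateway.charge(accountId, totalInCents);
        }
    }

    // ...while developer B independently codes an implementation of it.
    class CardPaymentGateway implements PaymentGateway {
        @Override
        public boolean charge(String accountId, long amountInCents) {
            // the real payment-provider integration would live here
            return true;
        }
    }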
Not only have we managed to speed up development, we've also gained all the other advantages interfaces bring.
I know my class is built for change because I wrote it before you'd even finished yours; that shows me I definitely do not depend on the implementation details of your class, because they didn't even exist.
Not only that, I even have good unit tests that show my class works with anything that has agreed to honour the same contract.
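Building on the sketch above, such a test might look like this (JUnit 5 with a hand-rolled stub; a mocking library would do the same job):

    import static org.junit.jupiter.api.Assertions.assertTrue;
    import org.junit.jupiter.api.Test;

    class CheckoutServiceTest {

        // A stub that honours the contract with no real implementation behind it.
        static class AlwaysApprovesGateway implements PaymentGateway {
            @Override
            public boolean charge(String accountId, long amountInCents) {
                return true;
            }
        }

        @Test
        void completesOrderWhenChargeIsApproved() {
            CheckoutService service = new CheckoutService(new AlwaysApprovesGateway());
            assertTrue(service.completeOrder("account-42", 1999));
        }
    }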
When we next come to make changes to this part of our system we will once again be able to edit these classes side-by-side with no fear. 
Can't I Just Get On With It
Some will tell you it's a waste of time to put all this effort into defining interfaces.
"That's Just Overkill"
"We'll look at that if we ever need to change it"
Well, I find it a slightly strange argument to say it's a waste of time to put some thought into something.
The time you take to think about the interfaces up front will be repaid with interest as you continue to develop the system, through the parallel development and built-in plan for change that we've already discussed.
If defining the interfaces does truly take a long time, all this proves is that your system is complicated. The time saved in not thinking about it is simply a false economy that normally goes by the name of technical debt; at some point you will be forced to think about it, and the problem may have become even more complicated now you have some code that wasn't thought through at the beginning.
Re-factoring always takes longer than coming up with a good design in the first place, and you can bet the point where you need this re-design will come when you're under pressure to ship.
Don't assume that your code base and problems in your sprint planning are separate; very often they are more connected than you might think. Always be on the lookout for signs that your code may need changing, no matter where they might come from.

Sunday 19 July 2015

Code For Sale



More and more companies now have a requirement to develop their own software, whether it be mobile apps, in-house administration or a flashy website.
This has grown the market for software developers and the code their skill can produce.
Unfortunately, the growing need for software has fostered the opinion that software, and developers, can be treated like any other commodity that businesses need to purchase.
In this context, by commodity I mean something where price is the only differentiator: the basic product is the same, you just need to barter to get it for the best price from the cheapest supplier.
To paint software in this light does a great disservice to good developers and the great talent they have for producing a high quality product.
Engineered Not Manufactured
Software development is an engineering discipline and as such good software is crafted not manufactured.
The commodity view wouldn't be applied to other engineering disciplines.
It would be difficult to argue that the only thing that separates a Ferrari from a Toyota, or a Rolex from a Casio is price.
There would be recognition that more skill has gone into creating a higher quality product with a different set of attributes.
The toolbox available to developers comes from their experience and learning and is stored away inside them; they are not blindly following a blueprint, cogs in a production line churning out homogenised blocks of code.
Although the collective wisdom of software engineers has, over time, solved many common problems, what is not a solved problem is the particular set of requirements your business has.
This is the essence of software engineering: the skill to use this engineering knowledge to solve the problem at hand, understanding that it's subtly different from previous problems and requires a different solution, and having the creative vision to know what that solution is.
Not A Numbers Game
In many other areas of manufacturing, production can be boiled down to numbers: if we use ten people it will take x; if we use twenty it will take y, where y < x.
It may cost more to throw bodies at it, but it will speed up the production line, because even if an element of skill is involved it's just about getting on with it and making the commodity; twenty blacksmiths will produce more horseshoes in a given period of time than ten.
The fundamental difference between this and engineering is that time is required to engineer a solution to a specific set of requirements; good developers are not just producing the same horseshoe over and over again.
The skill comes not in writing the code, but in having the knowledge and skill to work out what code needs to be written. Inspiration, creativity and honest-to-goodness brain-power are not scalable commodities, just as it isn't reasonable to expect all developers to have these qualities in equal measure; it's an unfortunate fact that not all developers are created equal.
Code Isn't Cucumbers
If software truly is a commodity, its worth to a company should be easy to calculate: we have 80,000 lines of code, the current market price for code is $1 per line, so we have $80,000 worth of code.
This obviously isn't the case, in the same way that estimating a software project can't be done just by knowing how much code is required and applying a market rate.
Indeed, software in this sense is not an asset: the number of defects in software is going to be proportional to the number of lines of code, so more code is not necessarily a good thing. This wouldn't be true for anything else that could be considered a commodity.
It's difficult to put a value on some things; they aren't tangible, you can't reach out and touch them. Software is one of these things. It isn't homogeneous, it can be done well and it can be done badly, and the difference between these two extremes is not easy to define.
The only way to "buy" good software is to invest in good developers, trust them, and recognise that you are using them to provide a solution to your problem, not just to churn out code.

  

Wednesday 15 July 2015

Taking Out The Trash


We've all been in the position where we have no choice but to have something smelly in our code base.
We may have to integrate with a third-party library, we may have inherited some legacy code covering some complicated piece of custom logic, or we may be butting up against a badly implemented section of the OS we're running on.
No matter what the source of the smell is we need to try and isolate its impact on the nice clean code we're so proud of.
Let us approach this from the point of view of the impact this sub-optimal code has on our architecture or design.
It's more than likely that this code also contains defects, but in most of these situations we aren't able to fix them because we don't have, or maybe don't fully understand, the source code.
Get It Behind An Interface
One of the biggest impacts this trashy code can have on our code base is through a badly designed interface.
This bad interface is the conduit through which the smell can spread into the rest of our code; bad practice in the interface leads to bad practice everywhere the interface is used.
It may even be that the developers of the smelly code didn't use an interface-driven design, meaning not only is the code bad but it reduces the testability of the code that depends on it.
Get the bad code behind an interface that we control. This will enable us to craft the interface properly, resulting in every class that depends on it being clean and smell free.
There may well need to be a wrapper class to implement the interface, but here we can hide the hackiness required to use the bad code in one place, so it isn't infesting the rest of the code.
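As a sketch of the shape this takes (in Java; the legacy library and its awkward API are invented for illustration):

    // A stand-in for the smelly third-party class we can't change.
    class LegacyGeoLib {
        void init(String mode) { /* awkward mandatory setup */ }
        String lookup(String query) { return "51.5;-0.1"; }
    }

    // The clean interface the rest of our code base depends on.
    interface GeoLocator {
        double[] coordinatesFor(String address);
    }

    // The wrapper: the only class that ever touches the smelly library.
    class LegacyGeoLocatorAdapter implements GeoLocator {
        private final LegacyGeoLib legacyLib = new LegacyGeoLib();

        @Override
        public double[] coordinatesFor(String address) {
            // All the hackiness the bad API demands is hidden in here.
            legacyLib.init("MODE_7");
            String raw = legacyLib.lookup(address + "|EOL");
            String[] parts = raw.split(";");
            return new double[] { Double.parseDouble(parts[0]),
                                  Double.parseDouble(parts[1]) };
        }
    }

Everything that needs coordinates depends only on GeoLocator; if the library is ever replaced, only the adapter has to change.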
Isolate The Smell
It's important that the bad code isn't spread throughout our code base; it needs to be in as small a number of locations as possible, and these locations should be closely related.
Ideally the smell would be isolated to one layer of your code and arranged so that as few classes as possible directly depend on it. We should be looking to put distance between the important logic in our code base and the bad code.
This distance allows us to filter the smell as it moves up the layers, so that the air is fresh by the time we reach the critical parts of our code.
Plan To Rip It Out
In many cases it may not be possible to remove the bad code: we have to use this third party, we have to deploy to this OS, or we don't have time to re-factor that legacy code.
Even so, taking the approach of planning to remove it will help in isolating its impact.
Once we're at a point where we could switch out that smelly class, we'll know we've done all we can.
The chances are that if we get to this point we've ensured the rest of our code base remains testable, despite the bad code, and we've ensured that the clean code is as loosely coupled to the bad code as possible.
It's an important life lesson for all developers that not everyone takes as much care over things as you; some people do it wrong. An effective coping mechanism for dealing with this situation is an important skill.
Whenever you know you will have to integrate with someone else's code, assume the worst and plan to protect your code from something you didn't design.

Sunday 12 July 2015

Evolving Decomposition


Software is a living document: it evolves, it isn't written in stone, and as time goes on it will require changes to be made to it.
Sometimes these changes are not for the better; we may add a new feature, but we do so at the expense of the integrity of the code base. Often these destructive changes are gradual: one minor sub-optimal change builds on another, and so on, until at some point the structure of the code has been compromised and is no longer a good solution to the original problem.
This situation is often called code rot and the solution to it is re-factoring.
Sometimes this re-factoring can be pre-emptive; our experience and our nose tell us that this code is heading down the wrong road.
No matter what triggers the re-factoring, it should achieve one of two things: increase maintainability and/or increase extensibility.
Exactly how this re-factoring can be achieved, and the effects it should have, is a vast topic, and entire books have been written on the subject, but let's look at some of the common goals of re-factoring and the techniques we can use.
Applying Some Abstract Thinking
Abstraction deals with complexity by hiding detail; a good abstraction talks only in terms of the functionality on offer, not the implementation: the "what", not the "how".
Rot can occur if, over time, the abstraction starts leaking detail about what's going on under the hood; very often this is caused by a short-cut being taken because leaking the information is easier than maintaining the abstraction.
This might be a problem with the abstraction itself or with the underlying implementation; either way it's not what we want.
Re-factoring techniques to improve abstraction usually revolve around increasing encapsulation or generalisation. This might be putting fields behind getters and setters, changing a method signature to require less type checking, or making better use of polymorphism.
The goal should be to ensure dependencies relate to the abstraction, not the implementation, increasing resilience to changes in detail.
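As a small sketch of the polymorphism point (the shapes are an invented example), compare a method that type-checks its arguments with a design where callers depend only on the abstraction:

    public class ShapeRefactor {
        // Before: the caller knows about every concrete type and must be
        // edited whenever a new one is added.
        static double areaOf(Object shape) {
            if (shape instanceof Circle c) return Math.PI * c.radius() * c.radius();
            if (shape instanceof Square s) return s.side() * s.side();
            throw new IllegalArgumentException("unknown shape");
        }

        // After: the "what" lives in the interface, the "how" in each class.
        interface Shape {
            double area();
        }

        record Circle(double radius) implements Shape {
            public double area() { return Math.PI * radius * radius; }
        }

        record Square(double side) implements Shape {
            public double area() { return side * side; }
        }

        public static void main(String[] args) {
            Shape shape = new Circle(2.0);
            System.out.println(shape.area()); // no type checking required
        }
    }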
Separating Oranges From Apples
Another key aspect of a code base is its level of cohesion; this indicates the closeness of the relationship between data and functionality within the same class.
If classes are doing one thing and doing it well, there should be a high level of cohesion, each cog of the class playing a role in that single outcome.
Rot can set in when functionality and data are placed in a location convenient for the developer making the change, not in the most logical location for the design; this also often involves widespread duplication.
Once again, if you're taking a sub-optimal decision because it's easier that way, something is wrong, either with the design or with the change you're making.
Re-factoring techniques to increase cohesion generally involve recognising when more than one class or more than one method are co-existing in a single unit.
In the case of a class this might be multiple methods all operating on different pieces of data within the class; in the case of a method it might be sections of the method that do not operate on the results of previous lines.
The goal here is to divide and conquer: split out the new class or the new method, reaching for ever smaller building blocks to make up the overall design.
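A sketch of such an extraction (again an invented example): a method doing two unrelated jobs is split so that each piece does one thing:

    import java.util.List;

    class OrderProcessor {
        record Item(String name) {}
        record Order(List<Item> items) {}

        // Before: validation and receipt formatting share a method but not a purpose.
        void processOrderBefore(Order order) {
            if (order.items().isEmpty()) throw new IllegalArgumentException("empty order");
            StringBuilder receipt = new StringBuilder();
            for (Item item : order.items()) receipt.append(item.name()).append('\n');
            send(receipt.toString());
        }

        // After: divide and conquer.
        void processOrder(Order order) {
            validate(order);
            send(receiptFor(order));
        }

        void validate(Order order) {
            if (order.items().isEmpty()) throw new IllegalArgumentException("empty order");
        }

        String receiptFor(Order order) {
            StringBuilder receipt = new StringBuilder();
            for (Item item : order.items()) receipt.append(item.name()).append('\n');
            return receipt.toString();
        }

        void send(String receipt) { /* delivery elided */ }
    }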
Changing What It Says On The Tin
Deciding on names within code, whether for a class, method or variable, may sometimes seem like pedantry, but problems relating to names often tell us a lot about the health of the code.
Difficulty in naming an element of code usually means that what that unit does or represents is not well defined, or is a list of things rather than a single purpose.
Rot can very easily be introduced around naming: a change to what a method does or what a variable represents can mean the name is no longer accurate.
Names should always reflect the functionality a method offers or the data a variable represents; modern IDEs make re-naming a relatively trivial re-factor, so there is no excuse.
Names are the first, and in many cases only, documentation for your code, so make sure they are accurate.
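A tiny invented example of the kind of drift meant here:

    import java.util.List;

    class UserDirectory {
        interface UserCache { boolean isFresh(); List<String> users(); }
        interface UserDatabase { List<String> queryUsers(); }

        private final UserCache cache;
        private final UserDatabase database;

        UserDirectory(UserCache cache, UserDatabase database) {
            this.cache = cache;
            this.database = database;
        }

        // Before: accurate when the database was the only source; the name
        // became a lie the day the cache was added.
        List<String> loadUsersFromDatabase() {
            return cache.isFresh() ? cache.users() : database.queryUsers();
        }

        // After: the name describes the functionality offered, not a stale detail.
        List<String> loadUsers() {
            return cache.isFresh() ? cache.users() : database.queryUsers();
        }
    }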
Re-factoring should be a continual process of trying to evolve the code base into an ever better solution to the problem. This is not always easy, because sometimes the problem changes.
This is why maintainability and extensibility are such important qualities. Code rot is an inevitable consequence of having multiple people operate on a code base that requires ever more features to be added to it; the key is to not let the mould spread. Recognise when it's starting to take hold, cut it out and replace it with clean, healthy code.

Wednesday 8 July 2015

Automatically Doing It Right


Thankfully, in the vast majority of development teams the need for automated testing of the software that's produced is an accepted part of the development cycle.
But there are different types of automated testing, and the difference is more than cosmetic: it should inform your approach.
The main ways in which these types differ are in what can cause them to fail and how many of them you should be writing.
Testing the Bricks
First we have unit testing: the testing of a single building block of code.
Unit tests should have a single reason to fail: that there is a problem with the class under test.
The way to achieve this is through the proper application of Inversion of Control; once all a class's dependencies are injected behind interfaces, it's a trivial matter to inject mock implementations during unit testing.
If this isn't possible it's a smell in your design; it shouldn't lead you to the conclusion that making a class testable is difficult.
When a unit test fails you should be in no doubt as to which class is at fault; if you're testing a piece of business logic and a unit test fails, you shouldn't be asking yourself "I wonder if that's a problem with the database".
You should be writing as many unit tests as it takes to cover all a class's public functionality. A well designed class is a testable class; although at times it may be laborious, it shouldn't be difficult.
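As a minimal sketch of the idea (JUnit 5 and Mockito, with invented names): the class under test receives its dependency through the constructor, so a mock can be injected and the test has a single reason to fail:

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.when;
    import org.junit.jupiter.api.Test;

    class DiscountCalculatorTest {

        interface PriceRepository { double basePrice(String sku); }

        static class DiscountCalculator {
            private final PriceRepository prices;
            DiscountCalculator(PriceRepository prices) { this.prices = prices; }
            double priceWithDiscount(String sku, double discount) {
                return prices.basePrice(sku) * (1 - discount);
            }
        }

        @Test
        void appliesDiscountToBasePrice() {
            // The repository is a mock, so no database is involved: if this
            // test fails, the fault is in DiscountCalculator itself.
            PriceRepository prices = mock(PriceRepository.class);
            when(prices.basePrice("sku-1")).thenReturn(100.0);

            DiscountCalculator calculator = new DiscountCalculator(prices);

            assertEquals(90.0, calculator.priceWithDiscount("sku-1", 0.1), 1e-9);
        }
    }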
Testing the Cement
Next comes integration testing, testing that our building blocks can fit together to form a well defined part of our structure.
There are various ways to approach this type of testing, either a Big Bang approach where we just test the whole structure, or a top-down or bottom-up approach where we test certain sub-systems before building the whole.
To return to our previous example, now we do test that the business logic works when it interacts with the database.
Our integration tests have as many reasons to fail as we have concrete classes involved in the tests. Mocks can still be used for classes whose functionality we aren't interested in testing as part of the sub-system; this might include, for example, the UI.
We need to write as many integration tests as we have clearly defined sub-systems. Much as with unit testing, if we don't have clearly defined sub-systems this is our mistake; the solution isn't to say "well, this just isn't testable".
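As a sketch of what this might look like at the database boundary (JDBC against an in-memory H2 database; the schema, the names and the presence of H2 on the classpath are all assumptions for illustration):

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import org.junit.jupiter.api.Test;

    class StockLevelIntegrationTest {

        // Real data-access code, not a mock.
        static int stockFor(Connection conn, String sku) throws Exception {
            PreparedStatement ps = conn.prepareStatement("SELECT qty FROM stock WHERE sku = ?");
            ps.setString(1, sku);
            ResultSet rs = ps.executeQuery();
            rs.next();
            return rs.getInt("qty");
        }

        @Test
        void businessLogicReadsTheRealDatabase() throws Exception {
            try (Connection conn = DriverManager.getConnection("jdbc:h2:mem:stocktest")) {
                conn.createStatement().execute("CREATE TABLE stock (sku VARCHAR(32), qty INT)");
                conn.createStatement().execute("INSERT INTO stock VALUES ('sku-1', 7)");

                // More than one reason to fail: the SQL, the schema or the logic.
                assertEquals(7, stockFor(conn, "sku-1"));
            }
        }
    }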
Testing the House
Finally we come to acceptance testing, testing that our grand plan has been implemented properly and does what we said it would do.
Now we are testing our whole system: we are using automation tools to take the place of the user and running our creation through its paces.
We need to write as many of these tests as we have acceptance criteria for our system to meet.
These tests have many reasons to fail, but they show us exactly what the user will experience when they take these actions.
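A sketch of the idea using a browser-automation tool such as Selenium WebDriver (the URL and element ids are invented, and a Chrome driver is assumed to be available):

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;

    public class CheckoutAcceptanceTest {
        public static void main(String[] args) {
            // Drives a real browser through the same steps a user would take.
            WebDriver driver = new ChromeDriver();
            try {
                driver.get("https://example.com/shop");
                driver.findElement(By.id("add-to-basket")).click();
                driver.findElement(By.id("checkout")).click();
                String message = driver.findElement(By.id("confirmation")).getText();
                if (!message.contains("Thank you")) {
                    throw new AssertionError("Checkout did not complete: " + message);
                }
            } finally {
                driver.quit();
            }
        }
    }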
We've Built A Pyramid
What all this adds up to is that we have built a pyramid of automated testing.  
We write a large number of tests to ensure our foundations are solid, a smaller number to ensure we have a solid core, and a smaller number still to ensure our system is acceptable.
You might be asking why we don't just have a large square: why not write a lot of all these types of tests? The answer lies in how long they take to run and how fragile they are.
As we move up the pyramid, the tests take more time to write and longer to run. We never want to end up in a position where testing becomes a chore; we don't want people to dread writing tests, and we don't want to be put off running them.
By concentrating on unit tests we ensure that we can easily test a large proportion of the system quickly. The confidence that good unit testing produces means we can reduce the scope of the extra layers of testing to verifying interaction: we know the individual units of code work, we just need to test that they fit together. This reduces the need to write more of the slower, more complicated integration and acceptance tests.
The tests also become more fragile as we move up the pyramid, more likely to break when we change the system, which is another reason to concentrate on unit testing: it offers the quickest feedback on the health of our code base.
Automated testing is about making our system self-checking, taking on the weight of testing the basis of the design. This frees up the testing experts in your QA department to spend their valuable time exploring the extremes of the system and concentrating on the "what if" scenarios.
The mantra is "test, test and test again"; just be smart about how you implement it, so that you never have to give much thought to testing your system because it's all just part of the process.


Sunday 5 July 2015

Smarter Is Faster



A common mantra you will hear in many companies is to equate Agile with speed.
While in broad strokes it is true to say that Agile is about delivering quickly and frequently, I think it's important to understand where this speed is supposed to come from.
No Silver Bullet
Agile is not a silver bullet that will allow you to squeeze more effort, work or speed out of your developers.
If you try to use Agile to produce a large monolithic piece of software based on a large set of sometimes vaguely defined requirements you will face the same issues and problems that these kinds of projects have always faced.
What Agile is trying to tell us is that trying to deliver software in this way will always be slow.
Instead, in order to get something of value out of the door quickly, we should apply two relatively simple concepts: don't waste time, and start off small.
Wading Through The Detail
At the start of a project, especially a new project, we know very little about what it is we should be building.
One approach to gaining this knowledge is to spend time, usually a large amount of time, thinking hard about what users of this system might want, producing a large list of requirements and a detailed plan.
Maybe we understand our customers inside out and completely nail what it is they want. Or, as is often the case, we realise we started off in slightly the wrong direction and what users want is subtly different from what we've delivered: they don't hate it, but they don't love it.
Not only is the time we spent defining the requirements now wasted, so is the time it took the developers to implement those requirements.
The situation for developers may be even worse: if the new requirements based on customer feedback contradict some of the early up-front requirements, they might have to spend more time unpicking what they've already done.
How could we have avoided this? We could have admitted we weren't sure exactly what the users wanted and instead produced something minimal that works, using it as a vehicle to get their feedback.
Sprints Not A Marathon
The true speed of Agile is delivered simply by developing small chunks of functionality that deliver some value to the user and provide an opportunity to iterate and deliver more in the future.
This doesn't involve developers working harder or faster; it's simply that it takes less time to deliver something small.
It also takes less time to define the requirements for something small, and less time to test something small.
Agile is not a magic wand that can distort time to enable something large and complicated to be delivered more quickly; it's a dose of realism that the only way to conquer the mountain is one step at a time.
Glorious Failure
Another aspect of Agile that is sometimes distorted in the name of speed is the concept of failing fast.
It's crucial to understand what should be meant by failure here.
Having developers work beyond their natural pace to the point where they make mistakes is not failing fast, that's just being sloppy. Within Agile development, quality is not a commodity to be exchanged for anything else.
We've already talked in this post about user feedback. What sort of feedback would we rather get:
"I quite like that idea but it would be more useful if....."
"I don't really know what it does it keeps crashing"
The type of failure we should embrace is where we slightly miss the target: we are aiming in the right direction, we've just not hit the bullseye. If we can experience that kind of failure quickly, we can improve, iterate and eventually hit that bullseye.
Failure to deliver something that works is just failing. Users don't forgive that, nor should they; users aren't an extension of your QA department.
Agile does deliver speed, but only if you embrace what it's trying to say. It doesn't do this with some clever technique for delivering 6 months' worth of work in 4 months; it does so by only requiring 4 weeks' worth of work in the first place.