Sunday 25 October 2015

Playing Nicely with Each Other



Nearly every app or piece of software of any reasonable size will end up having to integrate with another system, most commonly some kind of server.
This kind of integration can very often be a painful experience: two systems designed to come together seamlessly end up singing from different song sheets.
This pain also often results in aspersions being cast and blame being apportioned, "this is a server issue" or "it all looks fine in our logs".
So are we always doomed to this integration hell or are there things we can do to smooth the road?
Here not Everywhere
First of all, integration shouldn't be all-pervasive. Code you don't control is dangerous, so when you integrate with another system the surface area of your code that is exposed to this danger needs to be kept to a minimum.
When explaining the integration you need to be able to draw a circle around the class, package or namespace that's responsible for getting the job done. If you end up having to draw multiple shapes on the page to explain it all, you're probably just explaining how any changes in the system you're integrating with are going to cause large scale destruction in your code.
It's also important, as always, that the integration is represented by a well engineered abstraction within your code. You never want your code to be exposed to details, and this is especially true when you don't control those details.
For example you may be making a call to get user data from a server; the only relevance to the rest of the code is that this data will be retrieved asynchronously from some source, the fact that it's an HTTP request or a SQL query is detail. Need to throw another value into the header or adjust the query? Go right ahead, it's all in this one class here.
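As a rough sketch of the kind of abstraction I mean (the Java and the UserStore name here are purely illustrative, not a prescription):

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.concurrent.CompletableFuture;

    // All the rest of the code knows: user data arrives asynchronously
    // from some source.
    interface UserStore {
        CompletableFuture<String> fetchUserJson(String userId);
    }

    // The HTTP detail lives in exactly one place; add a header or swap
    // the endpoint and nothing outside this class moves.
    class HttpUserStore implements UserStore {
        private final HttpClient client = HttpClient.newHttpClient();
        private final String baseUrl;

        HttpUserStore(String baseUrl) {
            this.baseUrl = baseUrl;
        }

        @Override
        public CompletableFuture<String> fetchUserJson(String userId) {
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create(baseUrl + "/users/" + userId))
                    .header("Accept", "application/json")
                    .build();
            return client.sendAsync(request, HttpResponse.BodyHandlers.ofString())
                    .thenApply(HttpResponse::body);
        }
    }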
It's Not You, It's Me
When it comes to testing your integration it's important that you're in a position where you can vary one thing at a time.
To go back to our example of making a server call, it should be possible to test your code against an idealised stubbed version of the server, whether this be loading data from a file or a more formal approach that provides some kind of virtualised backend.
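Sticking with the illustrative UserStore from above, a file-backed stand-in only takes a few lines:

    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.concurrent.CompletableFuture;

    // Serves canned JSON from disk instead of calling the server, so the
    // app can be exercised without the network varying underneath it.
    class FileUserStore implements UserStore {
        private final String directory;

        FileUserStore(String directory) {
            this.directory = directory;
        }

        @Override
        public CompletableFuture<String> fetchUserJson(String userId) {
            return CompletableFuture.supplyAsync(() -> {
                try {
                    return new String(Files.readAllBytes(
                            Paths.get(directory, userId + ".json")));
                } catch (Exception e) {
                    throw new RuntimeException(e);
                }
            });
        }
    }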
Equally it's important that on the server end you have some way to exercise your API that doesn't involve the application.
Obviously at some point everything needs to come together, but before you leap headlong into it you need to have established some kind of confidence that your code works. If you don't have that confidence, when something does go wrong you'll be left shrugging and probably laying the blame with the other guys.
Having this ability to stub or fake the interaction with the other system will also naturally force us to think about the interface, "exactly what is the JSON schema going to be?", "what format do you want us to send that in?". 
Independently Verified
Once we have defined the nature of that interface we're in a really good position to practice some TDD and give ourselves another way to prove the integration is working.
As is the case with most things unit tests provide an effective first line of defence against things getting broken and can also be useful in playing out some "what if" scenarios.
We are now able to prove our individual classes work with our unit tests, and prove our system as a whole works by using our replacement or fake server, before finally attempting to actually integrate.
Despite all of this careful preparation things will still go wrong, quite often because one of the systems fails to comply with the agreed contract, "if I give you this you're supposed to give me that".
In order for us to know what's going wrong we need to have identified exactly what that contract is, and we need some way of verifying that individual elements, when given the correct input, will supply the correct output.
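As a sketch of what verifying that contract might look like, assuming JUnit and reusing the illustrative UserStore (the staging URL and field names are invented):

    import static org.junit.Assert.assertTrue;
    import org.junit.Test;

    public class UserContractTest {
        @Test
        public void userResponseHonoursTheAgreedContract() throws Exception {
            // Point at whichever element is being verified in isolation,
            // here an imaginary staging server.
            UserStore store = new HttpUserStore("https://staging.example.com");
            String json = store.fetchUserJson("42").get();

            // The agreed contract: a user response always carries these fields.
            assertTrue(json.contains("\"id\""));
            assertTrue(json.contains("\"name\""));
            assertTrue(json.contains("\"email\""));
        }
    }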
This isn't about apportioning blame, it's about having a way to quickly identify what needs fixing and fixing it, we're all in this together after all.

Sunday 18 October 2015

Telling A Tale of Code


When we become experienced developers there is a subtle change in how we read code: we no longer necessarily read keyword by keyword, variable by variable.
Instead we're able to recognise prose we've seen before and appreciate the story being told, whether good or bad.
Our ability to do this is dependent upon the care that is taken in trying to tell the story.
An often quoted mantra is "Good code doesn't need comments". While it would be a slightly extreme viewpoint to never write comments, it's certainly true that good code is readable code, and there are certain techniques we can use to maximise this readability without the need for explanation.
What's In a Name?
All good stories need well defined characters, and a skilled author is a master of nouns and verbs: clearly the name of a variable should convey what is being stored, and the name of a function should describe what action it will perform.
What can be more subtle is that in neither case should we be describing the how: a variable name shouldn't talk about the underlying data structure and a function name shouldn't expose detail about implementation. For example, should a method be called saveToDatabase() or just save()?
Details change and unfortunately there is no inherent mechanism to keep our variable and method names in sync with the code they describe. One form of unreadable code is where you simply cannot tell what is going on; another, just as destructive, is where you think you know but the reality is very different.
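To make the saveToDatabase() versus save() question concrete (a hypothetical sketch with made-up types):

    class User {
        String id;
        String name;
    }

    // A name that describes the "what" survives changes to the "how":
    // callers don't care where the user ends up.
    interface UserRepository {
        void save(User user);
    }

    // A name that leaks the "how" rots as details change: if this later
    // writes to a remote service instead, the name becomes a lie.
    interface LeakyUserRepository {
        void saveToDatabase(User user);
    }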
Paragraphs and Chapters
No matter how skilled you become at reading code there is a limit to how much code you can interpret in one go.
Another way of looking at things like the Single Responsibility Principle and Interface Segregation Principle is that they limit the required scope of understanding for the reader.
By dividing and conquering we make our code more readable and understandable, because there is less action on the page and less to get our heads round.
Logical structure also means we can take much more as read: if you're injecting a dependency called IPersistentStorage that has a well defined CRUD interface, I have no need to look into the detail and I don't need to understand anything more than what is on the page.
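Something along these lines is all the reader needs in order to take the behaviour as read (a sketch in Java, despite the C#-style name):

    // A well defined CRUD interface: the name and the four methods tell
    // the reader everything they need without opening the implementation.
    interface IPersistentStorage {
        void create(String key, String value);
        String read(String key);
        void update(String key, String value);
        void delete(String key);
    }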
Clear and Consistent Voice
In a similar manner to having a logical structure, having a clear and consistent voice when writing code will also go a long way to helping others understand your story.
This means a common approach to naming, using similar solutions for similar problems and ensuring that a common plot is running through the whole story.
A common problem when looking at large open source projects is that many people have been involved in their development, each with a different story to tell and a different style and approach.
Individually all these different voices may be clear and logical, but when they're all layered over each other it suddenly becomes confusing. Navigating around the code base suddenly feels like reading different chapters from different books.
In many of these situations it doesn't really matter what you do: just always do it that way.
A large number of comments in a code base is a slight smell; whether it's because the writer feels you may not understand this bit or because they want to draw attention to their genius, it implies the code itself is not well structured or logical.
This becomes even more of a problem as the code base evolves, as the comments get more and more out of date with the code they're supposed to explain.
There will be occasions when you really are forced to do something that requires explanation, but ultimately nothing is more expressive than the code itself, after all it's this that defines what the story is and how it will end.

Sunday 11 October 2015

Square Pegs and Round Holes


When writing unit tests one of the most important things that will determine whether we end up writing good tests or bad tests is how we deal with dependencies.
Bad tests won't properly separate a class from those it uses, creating either unstable tests or tests whose results don't actually tell us anything about the health of the class supposedly being exercised.
It is also very often the case that problems we face with dependencies when trying to test a class are indicative of bad design. Testing shouldn't be difficult, but of course you're all practising strict TDD and writing the tests first, right?
So how can we supply dependencies during tests?
Sucking on a Dummy
A dummy object contains no useful functionality; it is not intended to be used by the class under test, but simply by the wonders of polymorphism it can stand in for a dependency.
The fact that a dummy can be used during a test is a big smell: if you aren't going to use this object, why am I having to supply it to you? Yes, I could potentially pass null, but that degrades readability and you have to know that null isn't actually going to break anything.
If you're supplying a dummy to a constructor then clearly the class under test doesn't need the dependency to fulfil its role; maybe it does on occasion require that dependency, but if it's not essential it shouldn't be in the constructor.
If you're supplying a dummy to an API then this would suggest you could make the developer's life easier by supplying an overload that doesn't require the dependency (internally you can just call the original API passing null). This makes it much clearer how your API should be used, requiring the developer to have less implied knowledge about how this all works.
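A minimal sketch of that overload idea, with hypothetical Report, Data and Logger types:

    class Data {}
    class Report {}

    interface Logger {
        void log(String message);
    }

    class ReportGenerator {
        // The overload for the common case: no logger, no dummy required.
        public Report generate(Data data) {
            return generate(data, null); // the null is tolerated internally
        }

        public Report generate(Data data, Logger logger) {
            if (logger != null) {
                logger.log("generating report");
            }
            // ... build the report from data ...
            return new Report();
        }
    }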
Faking It
Fakes appear to offer the same functionality as real dependencies but they take a short-cut or cheat in some way, for example supplying JSON from a file rather than calling a server.
Generally this is done to de-couple the code from some outside dependency and give more control over the data the code is working with; it also generally stops performance factors from influencing how the code behaves.
There are many good situations in which to use fakes, especially when prototyping or trying to debug specific conditions or data; unit testing is generally not one of those situations.
The implementation of fakes is not necessarily simple, and this potential complication can add ambiguity to unit testing, something to be avoided.
The output of unit testing must be clear and unequivocal: if it's red we need to fix the class under test, if it's green we're all good. Anything that detracts from this simplicity should be removed.
Stubbing It Out
Stubs are essentially containers for pre-programmed responses to interactions: if X is called return Y, regardless of what is passed in the call and how many times that call happens.
They provide no response to APIs where nothing has been put in the container, and they carry no expectations on what should and shouldn't happen.
This all leads to the important aspect of stubs: they play no role in deciding whether a test passes or fails.
Stubs should be used to fulfil dependencies where we don't particularly care how the class under test interacts with them; this will generally be because the call does not carry a high overhead and doesn't have any side effects for the system as a whole.
It may be that we have an API that checks for some environmental condition or retrieves some configuration data; do we really care whether this fairly inconsequential method call is made 2 times or 3 times? Or are we more concerned with the fact that the business logic ends up doing the right thing?
Once again it's important that if a test fails then we know for sure we have to fix something; if tests fail because of some trivial function call this detracts from that clear-cut view.
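As a sketch of a stub in practice, assuming JUnit and Mockito and an invented Config dependency:

    import static org.junit.Assert.assertTrue;
    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.when;
    import org.junit.Test;

    interface Config {
        boolean isFeatureEnabled(String name);
    }

    class PriceCalculator {
        private final Config config;

        PriceCalculator(Config config) {
            this.config = config;
        }

        double priceFor(double basePrice) {
            return config.isFeatureEnabled("discounts") ? basePrice * 0.9
                                                        : basePrice;
        }
    }

    public class StubExampleTest {
        @Test
        public void businessLogicDoesTheRightThing() {
            // Pre-programme a response; no expectations on how many
            // times, or with what arguments, it gets called.
            Config config = mock(Config.class);
            when(config.isFeatureEnabled("discounts")).thenReturn(true);

            PriceCalculator calculator = new PriceCalculator(config);

            // Only the outcome decides pass or fail; the stub never does.
            assertTrue(calculator.priceFor(100.0) < 100.0);
        }
    }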
Send in the Mocks
And finally we come to mocks.
Mocks are much like stubs but with one important difference, they do play a role in deciding whether or not tests pass or fail.
Like stubs, mocks contain pre-programmed responses to method calls, but unlike stubs they do carry expectations on how these methods will be called. With mocks it does matter what methods are called, how often and with what arguments.
Mocks should be used for dependencies where it does matter how the class under test uses them, for example when it's important whether we make a server call once or twice, or how many times we hit the database.
With the other types of objects we've discussed we're more concerned with verifying the output of the class under test; with mocks we are also interested in verifying how it provides that output.
It's a skill to determine the best way that mocks should be used. Once again, to hammer this point home, failing tests have to carry a clear message; being overzealous in using mocks to enforce very strict limitations on the class under test will lead to nagging doubt that maybe the problem is with the test rather than the code being tested.
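And a sketch of a mock in the same style, with an invented OrderGateway and Checkout:

    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.times;
    import static org.mockito.Mockito.verify;
    import static org.mockito.Mockito.verifyNoMoreInteractions;
    import org.junit.Test;

    interface OrderGateway {
        void submit(String orderId);
    }

    class Checkout {
        private final OrderGateway gateway;

        Checkout(OrderGateway gateway) {
            this.gateway = gateway;
        }

        void placeOrder(String orderId) {
            gateway.submit(orderId);
        }
    }

    public class MockExampleTest {
        @Test
        public void submitsTheOrderExactlyOnce() {
            OrderGateway gateway = mock(OrderGateway.class);

            new Checkout(gateway).placeOrder("order-1");

            // Unlike a stub, the interaction itself can fail the test:
            // no call, or a second call, and we go red.
            verify(gateway, times(1)).submit("order-1");
            verifyNoMoreInteractions(gateway);
        }
    }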
It's important that unit tests are treated with the same care and attention as our functional code base, not just because we're relying on these tests to give our code a clean bill of health, but also because they shine a light on issues with our implementation.
When you encounter some difficulty in supplying a dependency to a class under test don't just write this off as one of the frustrations of writing tests; instead ask yourself what this tells you about the class you're testing.
If your class is a well written, loosely coupled class with high cohesion and a well thought out API, maybe you shouldn't be having this problem?

Sunday 4 October 2015

Always Be Solving Problems



Aside from the purely technical explanation what exactly is software?
I believe the most productive way to answer this question is that software solves problems. This may seem like a fairly broad explanation, but even a game could be described as solving the problem of boredom.
I believe this way of thinking enables us to better place ourselves in the shoes of the user. Unfortunately I think too often we lose sight of why it is we are producing all this software; we assume we need to produce more and more, and we assume too much about how users react to it all.
Users Experience Software
Sometimes we can fall into a trap of viewing User Experience (UX) as a distinct element of software when actually it is more pervasive than that.
UX is not the cherry on the cake, it's the whole recipe.
UI, features and defects all go to make up the experience users have when they interact with software; the moment we concentrate too much on any one element we degrade this experience.
Nowhere is this more evident than when we let ourselves become entangled in a conveyor belt of new features. It's incredibly easy to spoil a slick UX by muddying the waters, layering more and more features together until there is no longer any clear purpose to what we have created.
We shouldn't see software as an arms race; if we concentrate on one problem and provide a great solution, users will come and word will spread.
It's no surprise that some of the most popular and successful apps are very simple propositions; they don't need much explaining and they don't readily add new features if it might harm this central use case (as an example, think how long Facebook spent deliberating on the best way to introduce a dislike button).
Be Platform Agnostic
It's a very common occurrence to hear things like
"We need an app that...."
"We should add something to the website that..."
These things are very often uttered early on in the development of a solution, but they expose a decision about user experience being made based not on the needs of the user but on the desire to be on trend.
The choice of platform, such as mobile app, website or maybe even not a technical solution at all, should be made based on how the problem is best solved for the user.
We should instead frame these statements as,
"We want the user to be able to ..."
Sometimes this will naturally point towards a platform, sometimes it will be less obvious, but there is nothing more frustrating as a user than being forced to use an inappropriate platform to fulfil a need.
Cutting Edge Isn't For The Faint of Heart
Development of a solution should always start with the problem; if you pick the right problem this guarantees that you will develop something people want.
Many companies fall into the trap of picking a technology first because they want to be seen as cutting edge and then engaging in a desperate hunt for a problem for this technology to solve.
This inevitably leads to unsatisfying solutions that feel forced; users may initially show enthusiasm for hi-tech cool stuff, but if you're not solving a problem they care about this will be short lived.
A tell-tale sign of this is when too much explanation is required simply for the user to get to grips with the technology; the best solutions to problems are the ones we just get, where we say "why hasn't someone done this before?"
As geeks we see software and technology as ends in and of themselves; we code just for the sheer enjoyment of it.
As productive engineers we should be looking to answer questions and solve problems, and we should understand that people don't interact with code, they experience our solutions.
Users don't strive for anything more; if you can't keep them on the hook without cracking out the bells and whistles then you're not solving the right problem.