Test Driven Development (TDD) has long been viewed by many engineers as one of the universal tenets of software engineering. The received wisdom is that applying it to a software development lifecycle ensures quality, both through inherent testability and through an increased focus on the required functionality and on the interface to the code under test.
In recent years some have started to challenge the ideas behind TDD and to question whether it actually leads to higher quality code. In May 2014 Kent Beck, David Heinemeier Hansson and Martin Fowler debated TDD and challenged its dominance within software engineering best practices.
The full transcript of their conversation can be found here: Is TDD Dead?.
Presented below are some of my thoughts on the topics they discussed. To be clear, I am not arguing that TDD should be abandoned. My aim is to provoke debate and to explore whether total adherence to TDD should be relaxed or its approaches modified.
Flavours of TDD
Before debating the merits or otherwise of TDD it's important to acknowledge that different approaches to it exist. Strict adherence to TDD implies writing tests first and using the so-called red-green-refactor technique to move from failing tests to working code.
I suspect many teams who purport to follow TDD regularly do not write tests first. Engineers often find themselves needing to investigate how to implement certain functionality, and this investigation inevitably leads to writing some or all of an implementation before considering tests.
TDD as discussed here applies to both scenarios; a looser definition would simply define TDD as an emphasis on code being testable. Many of the pros and possible cons discussed below apply equally whether tests are written before the implementation or afterwards.
Test Induced Design Damage
Perhaps the most significant of the cons presented about TDD is that of test induced design damage.
Because discussions around TDD tend to centre on unit testing, adopting a TDD approach and focusing on testability usually means enabling a class under test to be isolated from its dependencies. The tool used to achieve this is indirection: placing dependencies behind interfaces that can be mocked within tests.
One of the principal causes of test induced design damage is the confusion and complication that come from excessive indirection. I would say this design damage is not inherent in the use of indirection, but it is very easy to introduce accidentally if the interface employed to decouple a class from a dependency is badly formed.
A badly formed interface, where the abstraction being presented isn't clear or is inconsistent, can have a large detrimental effect on the readability of code. This damage is very often amplified by the setup and verification of interactions on mock dependencies within the tests themselves.
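As an illustration of how mock setup and verification can come to dominate a test, consider the following sketch (the `OrderProcessor` class and its collaborators are hypothetical, invented for this example): most of the test's lines are mock wiring rather than the behaviour being checked.

```python
from unittest.mock import Mock

class OrderProcessor:
    """Hypothetical class with three injected dependencies."""
    def __init__(self, pricing, inventory, notifier):
        self.pricing = pricing
        self.inventory = inventory
        self.notifier = notifier

    def place(self, item, qty):
        if not self.inventory.reserve(item, qty):
            return None
        total = self.pricing.quote(item) * qty
        self.notifier.send(f"ordered {qty} x {item}")
        return total

# The "test": every dependency needs its own setup and verification,
# so the actual behaviour under test is buried in mock plumbing.
pricing = Mock()
pricing.quote.return_value = 5.0
inventory = Mock()
inventory.reserve.return_value = True
notifier = Mock()

processor = OrderProcessor(pricing, inventory, notifier)
total = processor.place("widget", 3)

assert total == 15.0
pricing.quote.assert_called_once_with("widget")
inventory.reserve.assert_called_once_with("widget", 3)
notifier.send.assert_called_once()
```

With three dependencies this is still readable; as the interfaces grow or become inconsistent, the ratio of wiring to assertion only gets worse.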
Aside from testability, another perceived advantage of indirection is the ability at some later point to change the implementation of a dependency without the need for widespread changes in dependent code. Whilst these situations certainly exist, perhaps they don't occur as often as we might think.
Test Confidence
The main reason for having tests is as a source of confidence that the code being tested is in working condition. As soon as that confidence is eroded, the value of the tests is significantly reduced.
One source of this erosion of confidence can be a lack of understanding of what the tests are validating. When tests employ a large number of mocks, each with their own setup and verification steps, it is easy for tests to become unwieldy and difficult to follow.
As the class under test is refactored and its interaction with the mocks is modified, the complexity can easily be compounded, as engineers who don't fully understand how the tests work modify them just to get them back to a passing state.
This can easily lead to a "just get them to pass" attitude. If that means there is no longer confidence that the tests are valid and verifying the correct functionality, then any confidence that passing tests mean we are in a working state is lost.
None of this should be viewed as saying that unit tests or the use of indirection are inherently bad. Instead I think it hints at the fact that the testability of code may need to be viewed in terms of the application of multiple types of tests.
Certain classes will lend themselves well to unit testing: the tests will be clear, and confidence will be derived from them passing. Other, more complex areas of code may be better suited to integration testing, where multiple classes are tested as a complete functional block. Providing these integration tests are able to test and prove functionality, they should still provide the needed confidence of a working state following refactoring.
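A sketch of what such an integration-style test might look like (the classes here are hypothetical, again invented for illustration): real collaborators are wired together, and the assertions are about observable outcomes rather than interactions with mocks, so the test survives refactoring of the internal structure.

```python
class Pricing:
    """Hypothetical real implementation, not a mock."""
    def quote(self, item):
        return {"widget": 5.0}.get(item, 0.0)

class Inventory:
    """Hypothetical in-memory stock store."""
    def __init__(self, stock):
        self.stock = stock

    def reserve(self, item, qty):
        if self.stock.get(item, 0) >= qty:
            self.stock[item] -= qty
            return True
        return False

class OrderProcessor:
    def __init__(self, pricing, inventory):
        self.pricing = pricing
        self.inventory = inventory

    def place(self, item, qty):
        if not self.inventory.reserve(item, qty):
            return None
        return self.pricing.quote(item) * qty

# Integration-style test: the classes are exercised as one functional
# block, and we assert only on the externally visible results.
inventory = Inventory({"widget": 10})
processor = OrderProcessor(Pricing(), inventory)

assert processor.place("widget", 3) == 15.0    # order priced correctly
assert inventory.stock["widget"] == 7          # stock actually reserved
assert processor.place("widget", 100) is None  # insufficient stock refused
```

Because nothing here depends on how `OrderProcessor` calls its collaborators, the internals can be restructured freely while the test continues to prove the same functionality.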
So many aspects of software engineering are imperfect, with no answer being correct 100% of the time. Maybe this is also true of TDD: in general it provides many benefits, but if it can on occasion have a negative impact then perhaps we need to employ more of a test mix, so that our overall test suite gives us the confidence we need to release working software.