The case for writing (isolated) test cases – #2
What is legacy code?
"Code without tests is bad code. It doesn't matter how well written it is; it doesn't matter how pretty or object-oriented or well-encapsulated it is. With tests, we can change the behavior of our code quickly and verifiably. Without them, we really don't know if our code is getting better or worse." "To me, legacy code is simply code without tests." — Michael Feathers, Working Effectively with Legacy Code
"The code has no tests." Such a report often strikes me as the symptom of a task that was not entirely done, only half done. It sounds a bit like I would sound if I told you: Thank you for letting me join this trek. By the way, I happen to have only one functional shoe for trekking; the other one is kind of ruined. That means that with every other step I make, I have to be extra careful where my foot lands. It takes time. Bear with me.
Code without tests is fragile. What has taken weeks to build can be broken in one edit. The consequence of this asymmetry is that a team that has put a lot of effort into releasing its first version to production is reluctant to make significant changes to the code once "it works". Changing the code is too risky.
When your code cannot change easily, you had better get your product's design right from the start. If your developers can't get the design right from the start, they will have to work with a poorly designed system, or suffer the consequences of unsafe attempts to improve it. In fact, both predicaments will occur anyway, because any reasonably successful software system, once released to production, immediately starts garnering requests for improvements of all kinds.
Let's suppose our initial design is far from perfect. How could it be otherwise? The beginning of a project is when we know the least about the problem to solve, about the constraints, and about our capacity to deliver a solution on time and within budget. Our design is bound to reflect that situation. After working on the project for a while, we find ourselves dealing with this double trouble: an accumulation of poor design decisions, and no tests. Almost every change made to the code results in anomaly report tickets, lengthy user acceptance phases, delayed deliveries, and an initial enthusiasm that gets a little more dented each time.
When facing such consequences our natural propensity to improve things shrivels and dies. We know what happens to teams in charge of a poorly designed system: their productivity falters despite the fact that everyone is working overtime, they abandon most hopes of working on an improved design, and their key members start polishing their résumé.
What about refactoring the code?
The remedy to the plight of accumulated poor design decisions and no tests is not refactoring. Refactoring -- the act of improving the design of a piece of code without changing the code's behavior -- crucially depends on having some tests for the parts we want to improve. Without tests, there is no efficient way to detect regressions we might have introduced while refactoring. This is why design improvement actions are fewer in a project whose codebase has no tests, and why they tend to get bigger in terms of quantity of code and impact: when working on such codebases, developers wait until the last moment, if ever, to undertake a change in the design.
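To make this dependency concrete, here is a minimal Python sketch (all names are hypothetical): the test pins down the observable behavior, so we can rewrite the implementation and re-run the test to detect any regression.

```python
# A hypothetical function and its test. The test pins the observable
# behavior, so a refactoring can be verified by re-running it.

def total_price(items):
    # Original, clumsy version: manual loop and accumulator.
    total = 0
    for item in items:
        total = total + item["price"] * item["qty"]
    return total

def test_total_price():
    items = [{"price": 10, "qty": 2}, {"price": 5, "qty": 1}]
    assert total_price(items) == 25

test_total_price()  # passes before the refactoring

def total_price(items):
    # Refactored version: same behavior, clearer expression.
    return sum(item["price"] * item["qty"] for item in items)

test_total_price()  # still passes: no regression introduced
```

Without the test, the second version of `total_price` would be a leap of faith; with it, the rewrite is a checked step.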
Refactoring is a heuristic that is meant to work in the context of a project where one systematically writes isolated tests for the code being written. It is one of the three steps of the TDD approach, meaning that TDD practitioners spend one third of their coding time improving the code, a pattern called Merciless Refactoring in the XP methodology. Refactoring per se cannot help the team fight the convoluted mess of intricate dependencies that is a big legacy codebase.
Wait, what about the Refactor right-click menu in our modern IDEs?
Despite the progress made in integrated development environments with regard to the automatic rearranging of code, no sequence of right-click refactor menu steps can lead you from a legacy codebase to a healthy one. That is because refactoring is not so much about exploiting the logical relationships between bits of code (which a computer, given a strongly typed programming language, can do) as about understanding and isolating the functional role of each part of the system (with which a computer, beyond gathering useful data about the code, is of no substantial help).
The best moment to connect our understanding of a part of the system's behavior with the corresponding part of the codebase is when writing isolated tests for that part. In the case of a legacy codebase, that moment of opportunity passed months or years ago. Accidental complexity won, and the system is too messy to be checked in a modular and comprehensive way through isolated tests. It's not that it's impossible: it's just deemed too expensive by the concerned actors in the project, be they the developers, the product owner, or the stakeholders.
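What such an isolated test looks like can be sketched as follows (a hypothetical example using Python's standard `unittest.mock`): the collaborator is replaced by a stub, so the test captures our understanding of one behavior, and of that behavior alone.

```python
# A hypothetical isolated test: the collaborator (a currency rate
# service) is replaced by a stub, so the test exercises the
# conversion logic alone, with no network or database involved.
from unittest.mock import Mock

def convert(amount, currency, rate_service):
    # Logic under test: depends on an injected collaborator.
    return round(amount * rate_service.rate_for(currency), 2)

def test_convert_uses_current_rate():
    rate_service = Mock()
    rate_service.rate_for.return_value = 1.1
    assert convert(100, "EUR", rate_service) == 110.0
    rate_service.rate_for.assert_called_once_with("EUR")

test_convert_uses_current_rate()
```

Writing this test forces us to name the collaborator and the behavior; that is precisely the understanding a legacy codebase has let slip away.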
Is a legacy codebase completely out of control then? Not entirely. It is mostly under control, but any action to recover both safe code changes and sound functional knowledge of each part of the system is judged -- by product owners, and sometimes by developers as well -- too expensive to be taken now, and as such is postponed sine die. This continuous postponing of adding isolated tests to the codebase, along with budget control and the disregard of anything that does not bring immediate business value, is what leads teams (and their customers) into the legacy code swamp. The developers have made themselves unable to improve the modularity and reliability of their code while still being able to add features to it for the benefit of the customer. Such changes to the system can be dubbed DECEIT changes. We have a DECEIT change when a part of the system's behavior is
- Described Easily by the product owner
- Changed Easily by the developers
- Impossible to Test by anyone
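A minimal Python sketch of a DECEIT change (the rule and function names are hypothetical): "add a 10% weekend surcharge" is easy to describe and easy to code, yet the function reaches directly for the real clock, so no test can reliably assert its result.

```python
import datetime

# Described Easily, Changed Easily, Impossible to Test: the function
# reads the real clock, so the outcome depends on the day you run it.
def weekend_surcharge(amount):
    if datetime.date.today().weekday() >= 5:   # hidden clock dependency
        return amount * 1.10                   # the "easy" new rule
    return amount

# The testable alternative makes the dependency explicit:
def weekend_surcharge_on(amount, date):
    return amount * 1.10 if date.weekday() >= 5 else amount
```

With the date injected, a test can pass a Saturday or a Monday and assert both branches; with the hidden dependency, it cannot.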
A long series of such incremental changes made to the system results in precisely this: legacy code, i.e. code without tests. Developers and consultants like to attribute this phenomenon to "code rot" (resorting to biological fact-of-life explanations) or "technical debt" (resorting to over-indebtedness tragedies) or "accidental complexity". I attribute it to the state of the art that prevailed in the team at the outset of the project, when the team in charge of the endeavor either ignored, omitted, or removed from its toolset and process the practice of isolated tests.
It could be that the initial project was so small and simple that no need was felt to check and settle the understanding of every part of the code with isolated tests.
It could be that the developers naïvely promised themselves that this solution was a one-shot program, never to be evolved or reused in a larger system.
It could be that they hadn't mastered the techniques of unit testing, mocking, and refactoring well enough, and thought that the usual "have some integrated and acceptance tests" strategy would suffice.
Once the pain of not having isolated tests is felt by the team, it is often too late to change course. Indeed, a project riddled with regression incidents, lateness, and emergency fixes is not the best environment for acquiring new skills, which inevitably implies going slower and making mistakes.
The question of what to do to remedy the situation of a project having no tests is akin to any attempt to salvage an undertaking from strategic failure: embrace the full extent of the failure, and immediately start fixing the part of the process that caused it, while managing expectations with lucidity. Instead of denying the situation by deferring again and again the necessary work of securing the codebase, it is better to admit failure, take the loss, and prepare the stakeholders for slower work and lowered productivity. Whatever techniques the developers need to acquire and start to apply, be it unit testing, refactoring, characterization tests, approval tests, or mocking, there is no way they can do that while keeping their productivity at its usual level. As much as "we could go faster if it wasn't for all the bugs and the entangled mess of dependencies" is true, achieving a new level of practice and process will certainly not happen overnight.
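Of those techniques, the characterization test is often the first step on a legacy codebase. A minimal sketch, assuming a hypothetical inherited function: we don't know what the code *should* do, so we record what it *does* do and pin that behavior down before touching anything.

```python
# A hypothetical characterization test. The legacy function's rules
# are unclear, so the expected values below were obtained by running
# the code and copying its answers, not by reading a specification.

def legacy_discount(price, customer_code):
    # Imagine this is inherited, untested code with unclear rules.
    if customer_code.startswith("G"):
        return price * 0.9
    if price > 100:
        return price - 5
    return price

def test_characterize_legacy_discount():
    assert legacy_discount(200, "X1") == 195
    assert legacy_discount(50, "G7") == 45.0
    assert legacy_discount(50, "X1") == 50

test_characterize_legacy_discount()
```

Such tests document the current behavior, warts and all; once they are in place, refactoring the function becomes a checked activity rather than a gamble.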
After all, there is no way I can walk with you on this trek while at the same time acquiring or crafting a new pair of shoes.