Working Effectively with Legacy Code by Michael C. Feathers
I’ve been thinking a lot about how software systems age. Like ecologies, relationships, and cities, software systems change over time, and what once worked seamlessly may now be clunky or no longer relevant. How do we respond as development teams when requirements change? This book offers some frameworks for addressing legacy code, which I’ll highlight in my notes here. In the preface the author likens working on legacy code to surgery: “We have to make incisions, and we have to move through the guts and suspend aesthetic judgement”. We’ll meet the system where it’s at, fix what’s wrong, and move the system to a healthier state.
Chapter 1: Changing Software
Unless we’re part of a founding eng team, the bulk of our careers is spent contributing to existing codebases. The author highlights four primary reasons we change software: 1) adding a feature, 2) fixing a bug, 3) improving the design, and 4) optimizing the use of resources. The first is straightforward: we need new functionality, a new capability. This differs from refactoring, where we maintain the functionality but change the software’s structure to make it more maintainable. Related to refactoring is optimization, where we make changes that preserve the existing functionality but result in a more efficient use of resources (e.g. memory or runtime).
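A tiny, hypothetical Python sketch of that last distinction: an optimization should change resource usage, never observable behavior. The shipping-cost function and its numbers here are my own invention, not from the book.

```python
from functools import lru_cache

# Before: correct but slow (hypothetical example) — the naive recursion
# walks every tier on every call.
def shipping_cost_slow(n_items: int) -> int:
    if n_items <= 1:
        return 5
    return shipping_cost_slow(n_items - 1) + 2

# After: an optimization. Caching changes how resources are used,
# but the observable behavior must stay identical.
@lru_cache(maxsize=None)
def shipping_cost_fast(n_items: int) -> int:
    if n_items <= 1:
        return 5
    return shipping_cost_fast(n_items - 1) + 2

# Same inputs, same outputs — only the cost profile differs.
assert shipping_cost_slow(10) == shipping_cost_fast(10) == 23
```

This is exactly why optimization sits next to refactoring in the book’s framing: both are behavior-preserving changes, which is what makes a test safety net so valuable for them.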
Chapter 2: Working with Feedback
The author makes a clear case for what he calls “Cover and Modify”: employ a safety net of test coverage to ensure we properly quarantine the problem as we triage it. This way we get feedback on our changes, instead of the alternative of “Edit and Pray”. It’s true we’ll want system-level regression tests to pass on any code we release, but the test coverage the author is referring to here is unit tests we can keep passing as we start to make changes.
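Here’s a minimal sketch of Cover and Modify in Python. The `format_invoice` function is a made-up stand-in for some legacy code; the point is the order of operations: pin down current behavior with fast unit tests first, then modify.

```python
# A hypothetical legacy function we need to change.
def format_invoice(amount):
    return "Total: $" + str(amount)

# Cover: pin down the current behavior with fast unit tests *before* editing.
def test_integer_amount():
    assert format_invoice(10) == "Total: $10"

def test_zero():
    assert format_invoice(0) == "Total: $0"

# Modify: with these tests green, we can change format_invoice in small
# steps and rerun the tests after each one for immediate feedback.
test_integer_amount()
test_zero()
```

With that small net in place, any change that alters existing behavior fails a test immediately, which is the feedback loop Edit and Pray never gives you.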
Chapter 6: I Don’t Have Much Time and I Have to Change It
In this chapter we’re first advised to make a pact as a team: any new code we write gets test coverage. Perhaps this hasn’t been our practice up to now, but by setting a new standard, even during a period of urgency, we make the lives of future us much more pleasant and effective. I love the sentiment of, “remember, code is your house, and you have to live in it”.
In cases where the new feature you are adding can be written as entirely new code, the author advises creating a new method and invoking it everywhere it is needed. This way you can write a test for your new method. The practice is called a “sprout method”, the steps of which the author enumerates as: 1) identify what change you need to make to your code, 2) write a call for the new method you are going to write and comment it out everywhere it will need to be invoked (even before writing the method), 3) identify what local variables you will need from the method and make sure you pass them in as arguments to your call of the method, 4) determine if your new “sprouted” method will need to return a value, 5) develop the sprout method itself using TDD, and 6) uncomment your method calls and run your tests.

To help further isolate what constitutes a unit test, the author frames two questions: does the test run fast, and can it help us localize errors quickly? It’s interesting to highlight speed here, because ultimately we want feedback as we leverage the “test harness” while making changes to our code. If tests take a long time to run, we won’t get the real-time iterative feedback we need.
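The steps above can be sketched in Python. This is loosely in the spirit of the book’s transaction-posting example, but the class and method names here are my own illustrations, not the author’s code: the new, testable logic lives in a small sprouted method, and the legacy method changes by only a single call.

```python
from datetime import date

# A hypothetical legacy class. The new requirement: skip entries
# dated before a cutoff when posting.
class TransactionGate:
    def __init__(self, cutoff: date):
        self.cutoff = cutoff
        self.posted = []

    # Step 5: the sprout method itself — small, new code we can
    # develop test-first, in isolation from the legacy loop.
    def entries_on_or_after_cutoff(self, entries):
        """Return only the entries dated on or after the cutoff."""
        return [e for e in entries if e["date"] >= self.cutoff]

    # The existing legacy method changes by only one new call
    # (steps 2–4 and 6): the locals it depends on are passed in as
    # arguments, and the result comes back as a return value.
    def post_entries(self, entries):
        entries_to_add = self.entries_on_or_after_cutoff(entries)  # sprout call
        for entry in entries_to_add:
            self.posted.append(entry)
```

The payoff is that `entries_on_or_after_cutoff` is fully unit-testable even though `post_entries` itself may still have no coverage.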