
Measuring TDD is a lot like measuring a cyclone!
Teams and organizations adopt test-driven development (TDD) for many reasons: to improve software design, functional quality, or time to market, or simply because everyone else is doing it (well, maybe not that last reason…yet). To justify the investment in time, effort, and even cash for consultants and coaches, most organizations want some form of proof that there is a return on investment (ROI) from leveraging TDD.

The measurement issue is less whether something needs to be measured (I am ignoring the “you can’t measure software development” crowd) than what constitutes an impact, and therefore what really should be measured. Erik van Veenendaal, an internationally recognized testing expert, stated in an interview that will be published on SPaMCAST 406, “unless you spend the time to link your measurement or change program to business needs, they will be short-lived.” Simply adopting someone else’s measurement best practices tends to be counterproductive because every organization has different goals and needs. Organizations therefore adopt TDD for different reasons and will need different evidence to assure themselves that they are getting a benefit.

There is NO single measure or metric that proves you are getting the benefit you need from TDD. That is not to say that TDD can’t or should not be measured. A palette of commonly used measures, grouped by the generic goal they address, follows: