Measuring TDD is a lot like measuring a cyclone!

Teams and organizations adopt test-driven development for many reasons, including improving software design, functional quality, or time to market, or because everyone is doing it (well, maybe not that last reason…yet).  In order to justify the investment in time, effort, and even the cash for consultants and coaches, most organizations want some form of proof that there is a return on investment (ROI) from leveraging TDD. The measurement issue is less that something needs to be measured (I am ignoring the “you can’t measure software development” crowd), but rather what constitutes an impact and therefore what really should be measured. Erik van Veenendaal, an internationally recognized testing expert, stated in an interview that will be published on SPaMCAST 406, “unless you spend the time to link your measurement or change program to business needs, they will be short-lived.”  Just adopting someone else’s best practices in measurement tends to be counterproductive because every organization has different goals and needs.  This means organizations will adopt TDD for different reasons and will need different evidence to assure themselves that they are getting a benefit.  There is NO single measure or metric that proves you are getting the benefit you need from TDD.  That is not to say that TDD can’t or should not be measured.  A palette of measures commonly used, organized by the generic goal they address, includes: (more…)

It is all about the bugs!

Many common measures of software quality include defects. A collection of defects and information about defects can be a rich source of information for assessing or improving the functional, structural, and process aspects of software delivery. Because of the apparent ease of defect collection and management (apparent because it really is never that easy) and the amount of information that can be gleaned from defect data, the defect-related measures and metrics found in organizations are wide and varied.  Unfortunately, many defect measures are often used incorrectly or are expected to be predictive when they are not.
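For illustration only, here is a minimal sketch of two widely used defect metrics, defect density and defect removal efficiency (DRE). Neither the choice of metrics nor the numbers comes from the original post; the figures are hypothetical and the formulas are the common industry definitions.

```python
# A minimal sketch of two common defect metrics. All numbers are hypothetical.

def defect_density(defects_found: int, size: float) -> float:
    """Defects per unit of delivered size (e.g., per function point or KLOC)."""
    return defects_found / size

def defect_removal_efficiency(found_before_release: int, found_after_release: int) -> float:
    """Share of total defects caught before the product reaches users."""
    total = found_before_release + found_after_release
    return found_before_release / total

if __name__ == "__main__":
    # Hypothetical sprint data: 18 defects found against 120 function points,
    # plus 2 defects that escaped to production.
    print(f"Defect density: {defect_density(18, 120):.3f} defects per function point")
    print(f"DRE: {defect_removal_efficiency(18, 2):.1%}")
```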

Defect measures that are useful while work is in process (or pretty close) include: (more…)

Software quality is a nuanced concept; understanding it requires a framework that addresses the functional and structural aspects of the product and the process of software delivery.  Measurement of each aspect is a key tool for understanding whether we are delivering a quality product and whether our efforts to improve quality are having the intended impact. However, measurement can be costly. To balance the effort required to measure quality against the benefit, you first need to understand the reasons for measuring quality.  Five reasons quality is important to measure include: (more…)

Listen Now

Subscribe on iTunes | Check out the podcast on Google Play Music

The Software Process and Measurement Cast 395 features our essay on productivity.  While productivity might not be the coolest subject, understanding the concept is critical to every company’s and every worker’s financial well-being.

Gene Hughson brings another entry from his Form Follows Function blog to the Software Process and Measurement Cast. Gene discusses the idea of accidental innovation.  He suggests that innovation is not a happy accident, but rather the result of process, structure, and technology, which can enhance innovation or just as easily get in the way.

In our third column this week, Kim Pries, the Software Sensei, brings us a discussion of how software developers leverage assimilation and accommodation in the acquisition of knowledge.

(more…)

The more complex the door, the lower the ‘door’ productivity – but not always.

While productivity is a simple calculation, there are a few mistakes organizations tend to make.  These mistakes reduce the usefulness of measuring productivity or, worse, cause organizations to make poor decisions based on bad numbers.  The five most common usage and calculation mistakes are: (more…)

In simplest terms, productivity is the ratio of output per unit of input.

Almost every conversation about change includes a promise of greater productivity.  In simplest terms, productivity is the ratio of output per unit of input.  While the equation for calculating productivity is straightforward, as we have discussed, deciding which outputs from an organization or process to count is never straightforward. The decisions on the input side of the equation are often equally contentious.  Three critical decisions shape what measures will be needed to supply the inputs used to calculate productivity.   (more…)
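As a simple illustration of the ratio itself, here is a minimal sketch using function points delivered as the output and person-months of effort as the input. The choice of units and the numbers are assumptions for the example, not figures from the post.

```python
# A minimal sketch of productivity as output divided by input.
# Units and figures below are hypothetical.

def productivity(output_units: float, input_units: float) -> float:
    """Return output per unit of input (e.g., function points per person-month)."""
    if input_units <= 0:
        raise ValueError("Input must be a positive quantity")
    return output_units / input_units

if __name__ == "__main__":
    delivered_function_points = 250   # hypothetical output for a release
    effort_person_months = 20         # hypothetical effort for the same release
    rate = productivity(delivered_function_points, effort_person_months)
    print(f"Productivity: {rate:.1f} function points per person-month")
```

The hard part, as the paragraph above notes, is not the arithmetic but deciding which outputs and inputs deserve to be counted.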

How to Measure Anything: Finding the Value of Intangibles in Business, Third Edition

Next week we will begin the read of Commitment – Novel About Managing Project Risk by Olav Maassen and Chris Matts.  I am currently trying to determine the approach to the blog entries for this book.  I believe this will be a relatively quick read or re-read based on the pace and style of the book.  It is mostly a graphic novel (one of my favorite styles), which might put some people off the book.  Buy your copy today and start reading. If you use the link above, it will support the podcast.  I am already running the poll for the book after Commitment to save time when we are ready for the next, next book in a few weeks!  As in past polls, please vote twice or suggest a write-in candidate in the comments.  We will run the poll for one more week.

Final Notes on HTMA (more…)