
The Software Process and Measurement Cast 421 features our essay on vanity metrics.  Vanity metrics make people feel good, but are less useful for making decisions about the business.  The essay discusses how to recognize vanity metrics and the risks of falling prey to their allure.

We will also have columns from Steve Tendon with another chapter in his Tame The Flow: Hyper-Productive Knowledge-Work Performance, The TameFlow Approach and Its Application to Scrum and Kanban, published by J Ross (buy a copy here). Steve and I talked about Chapter 13.  Finally, Gene Hughson will anchor the cast with an entry from his Form Follows Function Blog.  Gene and I started talking about leadership patterns and anti-patterns.

Danger!

Vanity metrics are not merely an inconvenience; they can be harmful to teams and organizations. Vanity metrics can elicit three major categories of poor behavior.

  1. Distraction. You get what you measure. Vanity metrics can lead teams or organizations into putting time and effort into practices or work products that don’t improve the delivery of value.
  2. Trick teams or organizations into believing they have answers when they don’t. A close cousin to distraction is the belief that the numbers are providing insight into how to improve value delivery when what is being measured isn’t connected to the flow of value.  For example, an organization that measures the raw number of stories delivered across the department should not draw many inferences about the velocity of stories delivered on a month-to-month basis.
  3. Make teams or organizations feel good without providing guidance. Another kissing cousin to distraction is the metric that doesn’t provide guidance.  Metrics that don’t provide guidance steal time from work that could provide real value because they require time to collect, analyze and discuss. On Twitter, Greger Wikstrand recently pointed out:

@TCagley actually, injuries and sick days are very good inverse indicators of general management ability

While I agree with Greger’s statement, his assessment is premised on someone using the metric to affect how work is done.  All too often, metrics such as injuries and sick days are used to communicate with the outside world rather than to provide guidance on how work is delivered.

Vanity metrics can distract teams and organizations by sapping time and energy from delivering value. Teams and organizations should invest their energy in collecting metrics that help them make decisions. A simple test for every measure or metric is to ask: Based on the number or trend, do you know what you need to do? If the answer is ‘no’, you have the wrong metric.

A beard without gray might be a reflection of vanity at this point in my life!

Unlike vanity license plates, calling a measure or metric a ‘vanity metric’ is not meant as a compliment. The real answer is never as cut and dried as when someone jumps up in the middle of a presentation and yells, “That is a vanity metric; you are suggesting we go back to the Middle Ages.”  Before you brand a metric with the pejorative of “vanity metric,” consider:

  1. Not all vanity metrics are useless.
  2. Your perception might not be the same as someone else’s.
  3. Just because you call something a vanity metric does not make it true.

I recently toured several organizations that had posted metrics. Several charts caught my eye. Three examples included:

  1. Number of workdays injury-free;
  2. Number of function points billed in the current quarter, and
  3. A daily total of user calls.

Using our four criteria (gameability, linkage to business outcomes, process knowledge, and actionability), I could classify each of the metrics above as a vanity metric, but that might just be my perception based on the part of the process I understand.
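The four-criteria test can be sketched in code. This is a hypothetical illustration, not a tool from the essay; the field names and the rule that a metric failing every criterion is a vanity-metric candidate are my assumptions.

```python
from dataclasses import dataclass


@dataclass
class Metric:
    """One measure scored against the four criteria named in the essay."""
    name: str
    hard_to_game: bool                 # gameability (inverted: True = hard to game)
    linked_to_business_outcome: bool   # linked to business outcomes
    provides_process_knowledge: bool   # provides process knowledge
    actionable: bool                   # actionable


def is_vanity_candidate(metric: Metric) -> bool:
    """A metric that satisfies none of the four criteria is a candidate
    vanity metric -- a candidate only, since perception differs by observer."""
    return not any([
        metric.hard_to_game,
        metric.linked_to_business_outcome,
        metric.provides_process_knowledge,
        metric.actionable,
    ])


# Example scoring of one of the posted metrics (scores are illustrative).
days_injury_free = Metric("workdays injury-free", False, False, False, False)
print(is_vanity_candidate(days_injury_free))  # True
```

As the essay notes, the classification depends on the part of the process you understand; a safety manager might score the same metric very differently.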


Measurement and metrics are lightning rods for discussion and argument in software development.  One of the epithets used to disparage measures and metrics is the term ‘vanity metric’. Eric Ries, author of The Lean Startup, is often credited with coining the term ‘vanity metric’ to describe metrics that make people feel good, but are less useful for making decisions about the business.  For example, I could measure Twitter followers or I could measure the number of blog reads or podcast listens that come from Twitter. The count of raw Twitter followers is a classic vanity metric.

In order to shortcut the discussion (and reduce the potential vitriol) of whether a measure or metric can be classified as actionable or vanity, I ask four questions:


Efficiency is a measure of how much wasted effort there is in a process or system. A high-efficiency process has less waste. In mechanical terms, the simplest definition of efficiency is the ratio of the work done to create an output to the amount of energy used. When applied to IT projects, efficiency measures how staffing levels affect how much work can be done. The problem is that, while efficiency is a simple concept, it requires a systems-thinking view of software development processes.  As a result, it is difficult to measure directly.
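The ratio described above can be sketched as a one-line calculation. The units (story points delivered, person-hours expended) are my assumptions for illustration; as the text warns, a single ratio like this hides the systems-level view and should not be read as a direct measure.

```python
def efficiency(output_delivered: float, effort_expended: float) -> float:
    """Naive efficiency ratio: useful output per unit of effort.

    A higher ratio suggests less waste, but the inputs chosen
    (here: story points and person-hours) are illustrative assumptions.
    """
    if effort_expended <= 0:
        raise ValueError("effort must be positive")
    return output_delivered / effort_expended


# e.g. 40 story points delivered with 400 person-hours of effort
print(efficiency(40, 400))  # 0.1
```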

Measuring TDD is a lot like measuring a cyclone!

Teams and organizations adopt test-driven development (TDD) for many reasons, including improving software design, functional quality, or time to market, or because everyone is doing it (well, maybe not that last reason…yet).  In order to justify the investment in time, effort and even the cash for consultants and coaches, most organizations want some form of proof that there is a return on investment (ROI) from leveraging TDD. The measurement issue is less that something needs to be measured (I am ignoring the “you can’t measure software development” crowd), but rather what constitutes an impact and therefore what really should be measured. Erik van Veenendaal, an internationally recognized testing expert, stated in an interview that will be published on SPaMCAST 406, “unless you spend the time to link your measurement or change program to business needs, they will be short-lived.”  Just adopting someone else’s best practices in measurement tends to be counterproductive because every organization has different goals and needs.  This means they will adopt TDD for different reasons and will need different evidence to assure themselves that they are getting a benefit.  There is NO single measure or metric that proves you are getting the benefit you need from TDD.  That is not to say that TDD can’t or should not be measured.  A palette of measures commonly used, based on the generic goal they address, includes:
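One indicator often cited as TDD evidence can be sketched simply: the defect escape rate, the share of defects found after release. This example and its figures are invented for illustration; the essay does not prescribe this metric, and by its own argument any single number like this is insufficient proof of benefit.

```python
def escape_rate(post_release_defects: int, total_defects: int) -> float:
    """Fraction of all defects that escaped to production.

    A falling escape rate after adopting TDD is one (partial) signal
    of improved functional quality -- never proof on its own.
    """
    if total_defects == 0:
        return 0.0
    return post_release_defects / total_defects


# Invented before/after figures for illustration only.
before_tdd = escape_rate(30, 100)  # 0.30
after_tdd = escape_rate(12, 80)    # 0.15
print(before_tdd > after_tdd)  # True
```

Which measure matters depends on why the organization adopted TDD in the first place, which is exactly the point about linking measurement to business needs.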

It is all about the bugs!

Many common measures of software quality include defects. A collection of defects and information about defects can be a rich source of information to assess or improve the functional, structural and process aspects of software delivery. Because of the apparent ease of defect collection and management (apparent because it is never really that easy) and the amount of information that can be gleaned from defect data, the number of defect-related measures and metrics found in organizations is wide and varied.  Unfortunately, many defect measures are often used incorrectly or are expected to be predictive.

Defect measures that are useful while work is in process (or pretty close) include:
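One widely used in-process defect measure is the count of open defects at a point in time. The sketch below is illustrative only; the record layout and the sample data are my assumptions, not the essay's list of measures.

```python
# Each record notes the week a defect was found and, once fixed,
# the week it was closed (None means still open). Data is invented.
defects = [
    {"id": 1, "found_week": 1, "closed_week": 2},
    {"id": 2, "found_week": 1, "closed_week": None},  # still open
    {"id": 3, "found_week": 2, "closed_week": 3},
]


def open_defects(defects: list, as_of_week: int) -> int:
    """Count defects found on or before as_of_week and not yet closed then."""
    return sum(
        1 for d in defects
        if d["found_week"] <= as_of_week
        and (d["closed_week"] is None or d["closed_week"] > as_of_week)
    )


print(open_defects(defects, 1))  # 2
print(open_defects(defects, 3))  # 1
```

Trended week over week, a count like this gives a team feedback while the work is still in flight, which is exactly where the essay argues defect measures earn their keep.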