Danger!

Vanity metrics are not merely an inconvenience; they can be harmful to teams and organizations. Vanity metrics elicit three major categories of poor behavior.

  1. Distraction. You get what you measure. Vanity metrics can lead teams or organizations into putting time and effort into practices or work products that don’t improve the delivery of value.
  2. Trick teams or organizations into believing they have answers when they don’t. A close cousin to distraction is the belief that the numbers provide insight into how to improve value delivery when what is being measured isn’t connected to the flow of value. For example, an organization that measures the raw number of stories delivered across the department should not draw many inferences from month-to-month changes in the velocity of stories delivered.
  3. Make teams or organizations feel good without providing guidance. Another kissing cousin to distraction is the metric that doesn’t provide guidance. Metrics that don’t provide guidance steal time from work that could provide real value because they require time to collect, analyze and discuss. On Twitter, Greger Wikstrand recently pointed out:

@TCagley actually, injuries and sick days are very good inverse indicators of general management ability

While I agree with Greger’s statement, his assessment is premised on someone using the metric to affect how work is done. All too often, metrics such as injuries and sick days are used to communicate with the outside world rather than to guide how work is delivered.

Vanity metrics can distract teams and organizations by sapping time and energy from delivering value. Teams and organizations should invest their energy in collecting metrics that help them make decisions. A simple test for every measure or metric is to ask: Based on the number or trend, do you know what you need to do? If the answer is ‘no’, you have the wrong metric.

A beard without gray might be a reflection of vanity at this point in my life!

Unlike vanity license plates, calling a measure or metric a ‘vanity metric’ is not meant as a compliment. The real answer is never as cut and dried as when someone jumps up in the middle of a presentation and yells, “That is a vanity metric; you are suggesting we go back to the Middle Ages.” Before you brand a metric with the pejorative “vanity metric,” consider:

  1. Not all vanity metrics are useless.
  2. Your perception might not be the same as someone else’s.
  3. Just because you call something a vanity metric does not make it true.

I recently toured several organizations that had posted metrics. Several charts caught my eye. Three examples included:

  1. Number of workdays injury-free;
  2. Number of function points billed in the current quarter, and
  3. A daily total of user calls.

Using our four criteria (gameability, linkage to business outcomes, provision of process knowledge, and actionability), I could classify each of the metrics above as a vanity metric, but that might just be my perception based on the part of the process I understand.
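
The four criteria lend themselves to a simple checklist. Here is a minimal, hypothetical sketch of that idea; the class name, field names and classification rule are my illustration, not a definition from this post:

```python
from dataclasses import dataclass

@dataclass
class MetricAssessment:
    """A hypothetical checklist based on the four criteria named above."""
    easily_gamed: bool
    linked_to_business_outcome: bool
    provides_process_knowledge: bool
    actionable: bool

    def looks_like_vanity_metric(self) -> bool:
        # A metric that is easy to game and fails every other test is a
        # strong vanity-metric candidate; anything else deserves discussion.
        return self.easily_gamed and not (
            self.linked_to_business_outcome
            or self.provides_process_knowledge
            or self.actionable
        )

# Raw Twitter followers: easy to game, tied to nothing, guides no decision.
raw_twitter_followers = MetricAssessment(True, False, False, False)
print(raw_twitter_followers.looks_like_vanity_metric())
```

The point of the sketch is the conversation it forces, not the boolean: if two people disagree about any of the four flags, that disagreement is exactly the "perception" problem described above.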

Measurement and metrics are lightning rods for discussion and argument in software development. One of the epithets used to disparage measures and metrics is the term ‘vanity metric’. Eric Ries, author of The Lean Startup, is often credited with coining the term to describe metrics that make people feel good but are less useful for making decisions about the business. For example, I could measure raw Twitter followers, or I could measure the number of blog reads or podcast listens that come from Twitter. The count of raw Twitter followers is a classic vanity metric.

In order to shortcut the discussion (and reduce the potential vitriol) of whether a measure or metric should be classified as actionable or vanity, I ask four questions:

On a scale of fist to five, I’m at a ten.

Quality is partly about the number of defects delivered in a piece of software and partly about how the stakeholders and customers experience the software. Experience is typically measured as customer satisfaction: a measure of how the products and services supplied by a company meet or surpass customer expectations. Customer satisfaction is affected by all three aspects of software quality: functional (what the software does), structural (whether the software meets standards) and process (how the code was built).

Surveys can be used to collect customer- and team-level data. Satisfaction measures whether products, services, behaviors or the work environment meet expectations.

Find the defects before delivery.

One of the strongest indications of the quality of a piece of software is the number of defects found when it is used. In software, defects are generated by a flaw that causes the code to fail to perform as required. Even organizations that don’t spend the time and effort to collect information on defects before the software is delivered collect information on defects that crop up after delivery. Four classic defect measures are used post-delivery, and each is used to improve the functional, structural and process aspects of software delivery.

The simple cumulative flow diagram (CFD) used in Metrics: Cumulative Flow Diagrams – Basics, along with more complex versions, provides a basis for interpreting the flow of work through a process. A CFD can help everyone from team members to program managers gain insight into issues, cycle time and likely completion dates. Learning to read a CFD provides a powerful tool for spotting issues that a team, teams or a program may be facing. But to get the most value, a practitioner needs to decide on the granularity, unit of measure and time frame needed to make decisions.
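
To make the mechanics concrete, here is a minimal sketch of how the bands of a CFD can be computed from daily snapshots of work-item states. The dates, item ids and state names are illustrative assumptions, not data from the referenced post:

```python
from collections import Counter
from datetime import date

# Workflow states ordered from entry to done; a CFD stacks, per day, the
# cumulative count of items that have reached each state or a later one.
STATES = ["To Do", "In Progress", "Done"]

# Hypothetical daily snapshots: each maps a work-item id to its state that day.
snapshots = {
    date(2024, 1, 1): {"A": "To Do", "B": "To Do", "C": "To Do"},
    date(2024, 1, 2): {"A": "In Progress", "B": "To Do", "C": "To Do"},
    date(2024, 1, 3): {"A": "Done", "B": "In Progress", "C": "To Do"},
}

def cfd_bands(snapshot, states=STATES):
    """Count items at or beyond each state (accumulating from 'Done' backwards)."""
    counts = Counter(snapshot.values())
    bands = {}
    running = 0
    for state in reversed(states):      # Done, then In Progress, then To Do
        running += counts.get(state, 0)
        bands[state] = running          # items that have reached this state or later
    return bands

for day in sorted(snapshots):
    print(day, cfd_bands(snapshots[day]))
```

Plotting each band as a stacked area over time yields the diagram itself; the vertical gap between two bands on a given day is work in process, and the horizontal gap approximates cycle time.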

Foggy Day

Value at risk can only quantify what you can see.

Definition:

Value at risk represents the potential impact of risk on the value of a project or portfolio of projects. Risk is monitored at specific points in the project life cycle. Monitoring includes an evaluation of the potential cost of remediating the risks that have not been fully remediated, weighted by the probability of occurrence. Where the cost impact of risk is above the program’s risk tolerance, specific remediation plans will be established to reduce the estimated risk impact. The value at risk metric provides the team with a tool for prioritizing risks and risk management activities.

Formula:
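
Ahead of the formula itself, the weighted-impact idea described in the definition can be sketched as code. The risk records and tolerance value below are hypothetical illustrations, not figures from this post:

```python
# Each unremediated risk contributes its remediation cost weighted by its
# probability of occurrence; the sum is the portfolio's value at risk.
risks = [
    {"name": "vendor delay",   "remediation_cost": 50_000, "probability": 0.30},
    {"name": "key staff loss", "remediation_cost": 80_000, "probability": 0.10},
]

def value_at_risk(risks):
    return sum(r["remediation_cost"] * r["probability"] for r in risks)

tolerance = 20_000          # assumed program risk tolerance, in the same currency
var = value_at_risk(risks)
print(f"value at risk: {var:,.0f}")
if var > tolerance:
    print("above tolerance: build specific remediation plans")
```

Because the number is a probability-weighted sum, it supports exactly the prioritization use described above: re-running it after each monitoring point shows whether remediation is actually pulling the estimate back under tolerance.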