April 6, 2017
Strengths and Weaknesses are up in the air!
Jeremy Berriault provided an example from his presentation at QAI Quest 2017 for us to count test case points. Jeremy, of QA Corner, indicated that baseline data was required to effectively run the three test cases in his example.
The logon, transaction, reports, and expected output blocks represent verification points. The arrows from one test case to another represent interfaces, and steps are . . . steps. The results of the count are as follows:
test case 1 | test case 2 | test case 3
Deriving the complexity leverages the following chart: (more…)
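The counting approach above can be sketched in code. Note that the element counts, weights, and complexity thresholds below are illustrative assumptions for the sketch; they are not Jeremy's actual baseline data or the chart referenced in the post.

```python
# Hypothetical sketch of counting test case points: each test case is
# characterized by its steps, verification points, and interfaces, and a
# complexity class is derived from a weighted score. Weights and thresholds
# are assumptions, not the published chart.

def complexity(steps, verification_points, interfaces):
    """Classify a test case as simple/average/complex (assumed thresholds)."""
    score = steps + 2 * verification_points + 3 * interfaces
    if score <= 8:
        return "simple"
    if score <= 14:
        return "average"
    return "complex"

# Illustrative counts for the three test cases in the example.
test_cases = {
    "test case 1": {"steps": 4, "verification_points": 2, "interfaces": 1},
    "test case 2": {"steps": 6, "verification_points": 3, "interfaces": 2},
    "test case 3": {"steps": 3, "verification_points": 1, "interfaces": 1},
}

for name, counts in test_cases.items():
    print(name, "->", complexity(**counts))
```

The real method derives complexity from a lookup chart rather than a single weighted score, so treat this only as a shape for the calculation.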
March 9, 2017
How to Measure Anything: Finding the Value of “Intangibles” in Business, Third Edition
I am traveling this week in India for the 13th CSI/IFPUG International Software Measurement & Analysis Conference: “Creating Value from Measurement”. Read more about it here. In the meantime, enjoy some classic content, and I’ll be back with new blog entries next week. (more…)
December 11, 2016
Subscribe on iTunes
Check out the podcast on Google Play Music
The Software Process and Measurement Cast 421 features our essay on vanity metrics. Vanity metrics make people feel good, but are less useful for making decisions about the business. The essay discusses how to recognize vanity metrics and the risks of falling prey to their allure.
We will also have columns from Steve Tendon with another chapter in his Tame The Flow: Hyper-Productive Knowledge-Work Performance, The TameFlow Approach and Its Application to Scrum and Kanban, published by J Ross (buy a copy here). Steve and I talked about Chapter 13. Finally, Gene Hughson will anchor the cast with an entry from his Form Follows Function Blog. Gene and I started talking about leadership patterns and anti-patterns. (more…)
November 24, 2016
Vanity metrics are not merely an inconvenience; they can be harmful to teams and organizations. Vanity metrics can elicit three major categories of poor behavior.
- Distraction. You get what you measure. Vanity metrics can lead teams or organizations into putting time and effort into practices or work products that don’t improve the delivery of value.
- Trick teams or organizations into believing they have answers when they don’t. A close cousin to distraction is the belief that the numbers are providing an insight into how to improve value delivery when what is being measured isn’t connected to the flow of value. For example, an organization that measures the raw number of stories delivered across the department should not draw many inferences about the velocity of stories delivered on a month-to-month basis.
- Make teams or organizations feel good without providing guidance. Another kissing cousin to distraction is metrics that don’t provide guidance. Metrics that don’t provide guidance steal time from work that can provide real value because they require time to collect, analyze and discuss. On Twitter, Greger Wikstrand recently pointed out:
@TCagley actually, injuries and sick days are very good inverse indicators of general management ability
While I agree with Greger’s statement, his assessment is premised on someone using the metric to affect how work is done. All too often, metrics such as injuries and sick days are used to communicate with the outside world rather than to provide guidance on how work is delivered.
Vanity metrics can distract teams and organizations by sapping time and energy from delivering value. Teams and organizations should invest their energy in collecting metrics that help them make decisions. A simple test for every measure or metric is to ask: Based on the number or trend, do you know what you need to do? If the answer is ‘no’, you have the wrong metric.
November 22, 2016
A beard without gray might be a reflection of vanity at this point in my life!
Unlike vanity license plates, calling a measure or metric a ‘vanity metric’ is not meant as a compliment. The real answer is never as cut and dried as when someone jumps up in the middle of a presentation and yells, “that is a vanity metric, you are suggesting we go back to the middle ages.” Before you brand a metric with the pejorative of “vanity metric,” consider:
- Not all vanity metrics are useless.
- Your perception might not be the same as someone else’s.
- Just because you call something a vanity metric does not make it true.
I recently toured several organizations that had posted metrics. Several charts caught my eye. Three examples included:
- Number of workdays injury-free;
- Number of function points billed in the current quarter; and
- A daily total of user calls.
Using our four criteria (gameability, linkage to business outcomes, process knowledge, and actionability), I could classify each of the metrics above as a vanity metric, but that might just be my perception based on the part of the process I understand. (more…)
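The four-criteria screen can be sketched as a simple checklist. The scoring rule below (a metric counts as actionable only if it passes all four checks) is my assumption for the sketch, not a rule stated in the post.

```python
# Illustrative sketch: screening a measure or metric against the four
# criteria named in the post. The all-or-nothing rule is an assumption.

def classify(hard_to_game, linked_to_outcomes,
             gives_process_knowledge, actionable):
    """Return 'actionable metric' only if all four criteria pass."""
    checks = [hard_to_game, linked_to_outcomes,
              gives_process_knowledge, actionable]
    return "actionable metric" if all(checks) else "vanity metric"

# "Number of workdays injury-free" as a wall chart: easy to feel good
# about, but (as typically used) not driving any decision.
print(classify(hard_to_game=False, linked_to_outcomes=False,
               gives_process_knowledge=False, actionable=False))
```

As the post notes, the same chart could pass or fail the screen depending on which part of the process the observer understands, so the inputs here are judgment calls, not facts.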
November 17, 2016
Measurement and metrics are lightning rods for discussion and argument in software development. One of the epithets used to disparage measures and metrics is the term ‘vanity metric’. Eric Ries, author of The Lean Startup, is often credited with coining the term ‘vanity metric’ to describe metrics that make people feel good, but are less useful for making decisions about the business. For example, I could measure Twitter followers or I could measure the number of blog reads or podcast listens that come from Twitter. The count of raw Twitter followers is a classic vanity metric.
To shortcut the discussion (and reduce the potential vitriol) over whether a measure or metric should be classified as actionable or vanity, I ask four questions: (more…)
October 11, 2016
Efficiency is a measure of how much wasted effort there is in a process or system; a high-efficiency process has less waste. In mechanical terms, the simplest definition of efficiency is the ratio of useful work output to the energy put in. When applied to IT projects, efficiency measures how staffing levels affect how much work can be done. The problem is that while efficiency is a simple concept, measuring it is difficult because it requires a systems-thinking view of software development processes. As a result, it is difficult to measure directly. (more…)
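The ratio definition above can be written down directly. The IT example numbers below are invented for illustration; how to split staffed time into value-adding versus wasted effort is exactly the hard systems-thinking question the post raises.

```python
# Minimal sketch of efficiency as a ratio: useful output over total input.
# The 60-of-100 person-day split is an illustrative assumption.

def efficiency(useful_output, total_input):
    """Return efficiency as a ratio in [0, 1] when output <= input."""
    if total_input <= 0:
        raise ValueError("total input must be positive")
    return useful_output / total_input

# e.g. 60 person-days of value-adding work out of 100 person-days staffed
print(round(efficiency(60, 100), 2))  # 0.6
```

The arithmetic is trivial; the measurement problem is that the numerator (useful output) is rarely observable directly in software development.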