How to Measure Anything: Finding the Value of “Intangibles” in Business, Third Edition

Chapter 9 of How to Measure Anything: Finding the Value of “Intangibles” in Business, Third Edition is titled “Sampling Reality: How Observing Some Things Tells Us About All Things.”  Here is a summary of the chapter in four bullet points:

  1. You do not have to measure the whole population to reduce uncertainty.
  2. Statistical significance is not always the most important question to ask when collecting data.
  3. Experimentation is useful to reduce uncertainty.
  4. Regression is a powerful, but often misunderstood, mechanism for understanding what the data are telling you!

Sampling
Developing an understanding of a population by measuring the whole is often expensive.  For example, if we wanted to determine how many four-year-old children live in Avon Lake, Ohio in order to plan for schools, we could go door to door and complete an exhaustive census of the population. A less expensive way to reduce uncertainty is to measure a sample. When sampling, each new observation increases what is known.

Sampling, even with small samples, is a technique to reduce uncertainty. In many instances, using small samples increases the economic value that measurement delivers.  Hubbard uses the jelly bean experiment to show how a small number of samples can be used to refine a 90% confidence interval.  The jelly bean experiment can also help estimators (and users of estimates) develop an intuitive understanding of confidence intervals.

The Student t-statistic was developed by a brewer at the Guinness Brewery who was faced with the problem of sampling for quality control. Classic statistics would have required at least 30 samples to confirm quality levels, which would have been expensive. The brewer created a method to generate the needed information from a small number of samples; the t-statistic is designed for these smaller sample sizes. The distribution of the t-statistic is like the normal curve except that it is flatter, wider, and thicker at the tails, which means the 90% confidence interval for a t-statistic is more uncertain (wider) than it is for the z-score.  The brewer published his technique under the name “Student” because Guinness prohibited publication. As a home brewer, I enjoy stories where beer changes the world in a positive fashion.  In the end, the t-statistic provides evidence that, when you have a lot of uncertainty, a few samples can greatly reduce it, especially with relatively homogeneous populations.
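As a minimal sketch of the idea (the five quality measurements below are invented, and the code assumes Python with scipy installed), a t-based 90% confidence interval from a small sample might look like this:

```python
# Sketch: a 90% confidence interval from a small sample using the t-statistic.
# The five "quality" measurements are invented purely for illustration.
from statistics import mean, stdev
from scipy.stats import t

sample = [4.1, 3.8, 4.4, 4.0, 3.7]      # five hypothetical observations
n = len(sample)
m = mean(sample)                         # sample mean
se = stdev(sample) / n ** 0.5            # standard error of the mean
t_crit = t.ppf(0.95, df=n - 1)           # two-sided 90% interval uses the 95th percentile
lower, upper = m - t_crit * se, m + t_crit * se
print(f"90% confidence interval for the mean: {lower:.2f} to {upper:.2f}")
```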

Measures based on large or small samples make different assumptions about the distribution of observations.  Large samples use z-statistics, while measures based on small samples use the Student t-statistic.  Both are forms of parametric statistics. Parametric statistics make assumptions about the underlying distribution. A common assumption of parametric statistics is that confidence intervals converge as you sample additional data.  The most commonly used distribution in business circles is the normal distribution (commonly drawn as a bell curve; however, there are many normal distributions, one for every combination of mean and standard deviation).  In scenarios where sampling returns extreme outliers, the confidence interval may never converge.  Other forms of statistics (non-parametric), which make few assumptions about how the data is distributed, may be needed for the measurement to be useful.
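A rough way to see the convergence point is to simulate it. The sketch below is my own illustration, not from the book, using numpy and scipy with arbitrary parameters: the width of a 90% confidence interval on the mean shrinks steadily for well-behaved normal data but never settles down for heavy-tailed data full of extreme outliers.

```python
# Sketch: CI width vs. sample size for normal data and heavy-tailed (Cauchy) data.
# Normal data converges; Cauchy data has no defined mean, so the interval never settles.
import numpy as np
from scipy.stats import t

rng = np.random.default_rng(42)

def ci_width(data):
    """Width of a t-based 90% confidence interval on the mean."""
    n = len(data)
    se = data.std(ddof=1) / np.sqrt(n)
    return 2 * t.ppf(0.95, df=n - 1) * se

for n in (10, 100, 1_000, 10_000):
    normal = rng.normal(loc=100, scale=15, size=n)
    heavy = 100 + 15 * rng.standard_cauchy(size=n)
    print(f"n={n:>6}  normal CI width={ci_width(normal):6.2f}  "
          f"heavy-tailed CI width={ci_width(heavy):10.2f}")
```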

As an example of a non-parametric measure, Hubbard introduces what he describes as the easiest sample statistic ever.  For a sample of eight items, the range between the second smallest and second largest values describes approximately a 90% confidence interval (for the population median).  This technique avoids problems seen with parametric statistics, like the possibility of a negative lower boundary. Just try to explain a potential range of development productivity to a client that includes values less than zero! Having valid statistical techniques that avoid that problem is important to me.
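The rule is simple enough to fit in a few lines of Python; the productivity figures below are invented for illustration.

```python
# Sketch of the "second smallest / second largest" rule described above:
# with eight observations, the interval between the second smallest and second
# largest values is approximately a 90% confidence interval.
sample = [12, 7, 22, 15, 9, 18, 11, 14]   # eight hypothetical productivity observations

ordered = sorted(sample)
lower, upper = ordered[1], ordered[-2]     # second smallest, second largest
print(f"Approximate 90% confidence interval: {lower} to {upper}")
```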

As with statistical techniques and distributions, there are many techniques for sampling.  Understanding a range of different techniques, and when to combine them, is useful for improving your ability to measure many business problems.  Hubbard discusses four:

  1. Population proportion sampling
  2. Spot sampling
  3. Serial sampling
  4. Measure to the threshold

I’m going to discuss #4 here, the measure-to-the-threshold technique, which I often use in decision making.  In this technique, the person measuring makes just enough observations to make a decision.  I often use this technique in combination with triggers.  For example, an organization recently established a quality level for defects found by a user acceptance team.  The process required drawing a series of samples every month as work came to the team.  If the measure (at a 90% confidence interval) crossed the threshold, it triggered an additional level of acceptance testing.
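A minimal sketch of that kind of trigger is shown below. The 5% defect threshold, the counts, and the normal-approximation interval are all my own illustrative assumptions, not the organization’s actual process.

```python
# Sketch of a "measure to the threshold" trigger on a monthly defect sample.
# The threshold and the counts are invented for illustration.
import math

THRESHOLD = 0.05   # hypothetical maximum acceptable defect rate
Z_90 = 1.645       # z-score for a two-sided 90% interval

def defect_interval(defects, inspected):
    """90% confidence interval on the defect rate (normal approximation)."""
    p = defects / inspected
    se = math.sqrt(p * (1 - p) / inspected)
    return max(0.0, p - Z_90 * se), min(1.0, p + Z_90 * se)

lower, upper = defect_interval(defects=9, inspected=120)
if lower > THRESHOLD:
    print(f"90% CI ({lower:.1%} to {upper:.1%}) is entirely above the threshold: trigger extra acceptance testing")
elif upper < THRESHOLD:
    print(f"90% CI ({lower:.1%} to {upper:.1%}) is entirely below the threshold: quality level met")
else:
    print(f"90% CI ({lower:.1%} to {upper:.1%}) straddles the threshold: keep sampling before deciding")
```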

Statistical Significance

The term statistically significant is often completely misremembered as some fixed minimum sample size.  I recently heard a consultant state that he needed a minimum of ten observations of any process to draw statistically significant conclusions. He made this statement without reference to any math. So as not to hold that out as a lone observation, I will admit that I am often asked how many observations are required for a measure to be statistically significant. The answer is almost always more complicated than most people want to hear. If the number of observations for a measure doesn’t meet the supposed requirement, or gathering that many is too costly, statistical significance is often used as an objection to the measurement.  More important questions include whether the measurement was informative and whether it was economically justified.  All statistically significant means is that the value you are seeing is not just a random fluke (we will ignore terms like the null hypothesis).

Experimentation

One technique that uses sampling to gather data for analysis is an experiment.  Hubbard defines an experiment as any phenomenon deliberately created for the purpose of observation. Even though explicit experimentation as a measurement and data-gathering technique is often eschewed in IT, A/B testing is a form of experimentation that is increasingly popular in application design and marketing.
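As a sketch of how the read-out of such an experiment might look (the visitor and conversion counts for the two hypothetical page variants are invented):

```python
# Sketch: comparing two variants in a hypothetical A/B test.
# Reports the difference in conversion rates with a 90% CI (normal approximation).
import math

def rate_and_se(conversions, visitors):
    """Conversion rate and its standard error."""
    p = conversions / visitors
    return p, math.sqrt(p * (1 - p) / visitors)

p_a, se_a = rate_and_se(conversions=48, visitors=1000)   # variant A (invented)
p_b, se_b = rate_and_se(conversions=67, visitors=1000)   # variant B (invented)

diff = p_b - p_a
se_diff = math.sqrt(se_a ** 2 + se_b ** 2)
lower, upper = diff - 1.645 * se_diff, diff + 1.645 * se_diff
print(f"B - A = {diff:.1%}, 90% CI: {lower:.1%} to {upper:.1%}")
```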

Regression

The data provided by sampling requires interpretation to be useful. Regression analysis is a popular tool for that interpretation. Tools like Excel (and Excel add-ins) have made it easy to explore data through regression analysis to support decision making.  The power and sophistication of regression analysis make these techniques nearly ubiquitous in analysis reports.  There are two common regression misconceptions:

  • Believing that correlation proves causation
  • Believing that correlation isn’t even evidence of causation
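Since both misconceptions usually surface when someone runs a quick regression in a spreadsheet, here is the same kind of analysis sketched in Python. The story-point and effort figures are invented, and a strong fit here demonstrates correlation, nothing more.

```python
# Sketch: a simple linear regression on hypothetical project data.
# A good fit shows correlation; it does not, by itself, prove causation.
from scipy.stats import linregress

size = [13, 21, 34, 55, 8, 40, 27, 60]            # invented story-point totals
effort = [110, 180, 260, 420, 70, 330, 220, 480]  # invented effort hours

fit = linregress(size, effort)
print(f"effort ~ {fit.slope:.1f} * size + {fit.intercept:.1f}")
print(f"r = {fit.rvalue:.2f}, p-value = {fit.pvalue:.4f}")
```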

 

In the simplest terms, the message of Chapter 9 is that using sampling techniques means you do not have to make a huge number of observations to reduce uncertainty.  Getting the most value out of the fewest observations requires understanding both parametric and non-parametric sampling techniques.

 

Previous Installments in Re-read Saturday, How to Measure Anything: Finding the Value of “Intangibles” in Business, Third Edition

  1. How To Measure Anything, Third Edition, Introduction
  2. Chapter 1: The Challenge of Intangibles
  3. Chapter 2: An Intuitive Measurement Habit: Eratosthenes, Enrico, and Emily
  4. Chapter 3: The Illusions of Intangibles: Why Immeasurables Aren’t
  5. Chapter 4: Clarifying the Measurement Problem
  6. Chapter 5: Calibrated Estimates: How Much Do You Know Now?
  7. Chapter 6: Quantifying Risk Through Modeling
  8. Chapter 7: Quantifying the Value of Information
  9. Chapter 8: The Transition: From What to Measure to How to Measure