Chapter 5 of How to Measure Anything: Finding the Value of "Intangibles" in Business, Third Edition, is titled "Calibrated Estimates: How Much Do You Know Now?" Chapter 4 described how to define the decision that needs to be made and the data that will be needed to make that decision. Chapter 5 builds on the first step in Hubbard's measurement process by providing techniques to determine what you know as you begin the measurement process. Hubbard addresses two major topics in this chapter. The first is using estimation to quantify what you know; the second is his process for calibrating estimators.
Measurement is a tool to help make decisions. Once you have determined what decision you need to make and what data you need to do so, the next step is to determine how much you know. In many cases, it is easy to conclude that since you have not started measuring, you don't know anything, or at least nothing precise enough to be useful. However, even if you don't know exactly, you still know something. Estimation is a tool to establish a baseline of what is known. Take the recent gyrations of the Shanghai stock market as an illustration: if you know what would be an absurd value for a daily drop or increase, you can start to define a range. Until recently, when the market fell by 7%, trading was suspended. Knowing that gives us a lower boundary for the measurement of change in that market. We could establish an estimate of the upper limit by asking whether we would be confident that a 1,000% increase would be absurd. It would be easy to go through a process of ratcheting up or down until the answer is "yes, that is probably absurd," thereby establishing an upper boundary. Perfect? No, but it is a starting estimate, and we can then collect data and use the feedback to improve what we know.
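The ratcheting process described above can be sketched in a few lines. This is a minimal illustration, not anything from the book: the `is_absurd` function is a hypothetical stand-in for the human estimator's absurdity judgment, and the starting value and doubling factor are arbitrary choices.

```python
def upper_bound_by_absurdity(start, is_absurd, factor=2.0, max_steps=50):
    """Ratchet a candidate upper bound upward until the estimator
    judges it absurd; the first absurd value serves as a rough
    upper boundary for the quantity being estimated."""
    candidate = start
    for _ in range(max_steps):
        if is_absurd(candidate):
            return candidate
        candidate *= factor  # not absurd yet, keep ratcheting up
    return candidate

# Example: daily percent change in a stock index, where we
# (hypothetically) judge anything above 100% in a day absurd.
bound = upper_bound_by_absurdity(7.0, lambda pct: pct > 100.0)
# 7 -> 14 -> 28 -> 56 -> 112, so bound is 112.0
```

The same loop run downward (dividing instead of multiplying) would produce a lower boundary, giving a crude starting range to refine with real data.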
Estimates are generally stated as a range. For example, I am 90% confident that I will wake up unaided between 3 and 5 AM local time (unless I ingest sleeping aids, like pizza, right before bed). Expressing what we know about a number or variable as a range of probable values provides a confidence interval to which we can assign a degree of confidence. We think the real value of the number is somewhere in that range. We can determine how good we are at subjective probability assessment by comparing our expected outcomes to actual outcomes. As Hubbard points out, the rub is that very few people are accurate estimators unless they have been trained, or as Hubbard calls it, calibrated (the book has several footnotes supporting this assertion, and the Freakonomics podcast of Jan 14, 2016, also addresses this issue).
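Comparing expected to actual outcomes can be made concrete with a short script. The sketch below, with entirely made-up interval data, computes the fraction of true values that land inside a set of stated 90% confidence intervals:

```python
def hit_rate(intervals, actuals):
    """Fraction of true values that fall inside the stated 90%
    confidence intervals. A calibrated estimator should capture
    roughly 90% of the actuals; much less suggests overconfidence,
    much more suggests underconfidence."""
    hits = sum(lo <= actual <= hi
               for (lo, hi), actual in zip(intervals, actuals))
    return hits / len(intervals)

# Ten hypothetical 90% intervals and the true values (invented data).
intervals = [(3, 5), (10, 20), (0, 7), (1, 4), (50, 80),
             (2, 9), (100, 300), (5, 6), (0, 1), (30, 40)]
actuals   = [4, 25, 3, 2, 60, 11, 150, 5.5, 0.5, 33]
rate = hit_rate(intervals, actuals)  # 8 of 10 inside -> 0.8
```

A hit rate of 0.8 on 90% intervals, sustained over many questions, would be the signature of the overconfidence Hubbard describes.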
Almost everyone tends to be biased toward either overconfidence or underconfidence. Hubbard observes (and I agree) that the vast majority of estimators are overconfident. Overconfidence is defined as routinely overstating knowledge and being correct less often than expected. Conversely, underconfidence is when an individual understates knowledge and is correct much more often than expected. Estimating, or assessing uncertainty, is a skill that can be learned and improved. As an example, Hubbard provides a set of questions and asks the reader to estimate answers (as ranges) at 90% confidence. The set of questions shown in the book is similar to the types of questions Hubbard uses in his calibration workshops. Even in the exercise in the book, some questions seem too difficult to answer. However, using the boundary estimation technique described earlier, I was able to harness what I did know. For example, I know it would be absurd to set the lower boundary for the year Shakespeare was born before 1 AD.
Once a baseline is established, there are a number of calibration techniques and tricks that you can learn (see Exhibit 5.4 in the book for the list). The first discussed in the book is establishing consequences for misestimating (this is very similar to the discussion on the Freakonomics podcast noted above). Hubbard uses the mechanism of betting money to establish a consequence for how well an estimator performs. Betting (real or pretend money) provides a feedback loop that yields significant calibration improvements. Another method involves asking people to identify potential problems with each of their estimates. For example, ask the estimator to assume his or her answer/estimate is wrong, and then explain why it was wrong. This technique helps expose unvoiced assumptions and biases so they can be taken into account. I have recently started using this technique with colleagues on a board of directors I chair. In most cases it generates a discussion that helps tune estimates and decisions. These two are just a sample of the techniques noted in the book; Hubbard recommends that estimators learn and understand all of them.
The process of determining what we know is one of establishing a range (uncertainty) for the variables we intend to measure. Defining a range is an estimation problem, and there is a wide range of techniques for establishing what we know and our level of uncertainty. Initially, almost no estimators are good at the process; however, through calibration we can improve our ability to subjectively assess what we know.
Previous Installments in Re-read Saturday, How to Measure Anything: Finding the Value of "Intangibles" in Business, Third Edition
Chapter 1: The Challenge of Intangibles
Chapter 2: An Intuitive Measurement Habit: Eratosthenes, Enrico, and Emily
Chapter 3: The Illusions of Intangibles: Why Immeasurables Aren’t
Chapter 4: Clarifying the Measurement Problem