How to Measure Anything: Finding the Value of "Intangibles" in Business, Third Edition


Chapter 7 of How to Measure Anything: Finding the Value of "Intangibles" in Business, Third Edition, is titled "Quantifying the Value of Information." It continues to build on the concepts of quantification and estimation presented in previous chapters. The idea that we can quantify the value of information is the centerpiece of Hubbard's premise that we measure because measurement has value, and Chapter 7 defines how to calculate that value.

Hubbard's approach to quantifying the value of information begins with the concept of expected opportunity loss (EOL). Opportunity loss is the cost incurred if the wrong decision is made. The expected opportunity loss is the probability-weighted average of the losses a decision could incur, given our uncertainty about the outcomes. The calculation is typically done with a simple table or, in complex situations, a Monte Carlo analysis over a large sample of possible losses.

Here is a simple EOL example:

Scenario: A neighbor is considering investing $50,000 in home improvements so they can list their house for $200,000 more than it would currently sell for. Realtors have given a calibrated 60% chance that the investment would pay off.


Variable                              House Sells at Higher Price    House Sells at Lower Price
Chance of Outcome                     60%                            40%
Impact if improvements are made       +$200,000                      -$50,000
Impact if improvements are not made   $0                             $0

EOL if improvement not made: 60% * $200,000 = $120,000

EOL if improvement made: 40% * $50,000 = $20,000
In the example, if the improvements are not made and the house could have been sold at the higher price, we have an expected opportunity loss of $120,000 (60% * $200,000). Alternately, the EOL if we make the investment and the house sells at the lower price is $20,000 (40% * the absolute value of -$50,000). If we could improve our knowledge of the possible outcomes, that information would have value because we would make a better decision. To see this, suppose we could learn that the real chance of selling the house at the higher price, if the improvements were made, was 90%. The EOL of making the improvement and not selling at the higher amount is now 10% * $50,000, or $5,000. Measurements that reduce the uncertainty in the outcome of a decision with economic consequences have value: a reduction in uncertainty reduces the EOL by increasing the chance of a better decision. The change in EOL is called the expected value of information (EVI) and is calculated by subtracting the EOL after the measurement from the EOL before it (when the measurement removes all uncertainty, this is also known as the expected value of perfect information, or EVPI). In the example, the EVI is $20,000 - $5,000 = $15,000.
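The arithmetic above can be sketched in a few lines of Python. The probabilities and dollar amounts are the ones from the example; the 90% figure is the hypothetical post-measurement estimate.

```python
# EOL/EVI arithmetic for the two-outcome home-improvement example.
p_high = 0.60           # calibrated chance the house sells at the higher price
gain_if_high = 200_000  # extra sale price if the improvements pay off
loss_if_low = 50_000    # sunk improvement cost if they do not

# EOL of each decision: the probability-weighted loss of being wrong.
eol_no_improve = p_high * gain_if_high    # forgo the gain: about $120,000
eol_improve = (1 - p_high) * loss_if_low  # waste the investment: about $20,000

# A measurement moves the calibrated chance up to 90%.
eol_improve_after = (1 - 0.90) * loss_if_low  # about $5,000

# EVI = EOL before the measurement minus EOL after it.
evi = eol_improve - eol_improve_after  # about $15,000
```

Improving is the better prior decision ($20,000 EOL versus $120,000), so the EVI is computed against that decision's EOL.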

Unfortunately, most decisions aren't this simple. Instead of a few discrete outcomes, we are often confronted by ranges of possible outcomes. In these circumstances, we need to establish a probability distribution of outcomes (for example, via Monte Carlo simulation) from which we can calculate the EOL for each slice of the distribution. The EVPI is then the sum of all of those EOLs. Conceptually this is fairly straightforward; however, the statistics can be quite daunting, so use a spreadsheet with formulas or a tool to do the calculations.
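A minimal Monte Carlo sketch of that idea, using only the standard library (the sampler, sample size, and seed are illustrative assumptions, not from the book): simulate the uncertain payoff, take the best decision under the prior, and average the opportunity loss across the samples.

```python
import random

def evpi_monte_carlo(payoff_sampler, n=100_000, seed=1):
    """Estimate EVPI: the average opportunity loss of the best prior decision."""
    rng = random.Random(seed)
    samples = [payoff_sampler(rng) for _ in range(n)]
    # Best decision under the prior: invest if the expected payoff is positive.
    invest = sum(samples) / n > 0
    # With perfect information we would invest only when the payoff is positive,
    # so each sample's opportunity loss is the shortfall against max(payoff, 0).
    losses = [max(s, 0) - (s if invest else 0) for s in samples]
    return sum(losses) / n

# Sanity check against the two-outcome example:
# 60% chance of +$200,000, 40% chance of -$50,000.
def payoff(rng):
    return 200_000 if rng.random() < 0.60 else -50_000

estimate = evpi_monte_carlo(payoff)  # converges toward the exact EOL of $20,000
```

The same function works unchanged for continuous payoff distributions (e.g. a sampler drawing from `rng.normalvariate`), which is where the table approach breaks down.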

There are numerous other scenarios that can make finding EOLs and EVPIs more complicated. However, most if not all of them leverage the same fundamentals, even if they require more computing power than a ten-key calculator. Tools and spreadsheets remove much of the mathematical burden of generating EOLs and EVPIs.

In most situations, decisions aren't made with perfect information. Therefore, determining the value of a partial reduction in uncertainty is important. The EVPI is useful in its own right because it describes a maximum we should never exceed when spending on a measurement. Beyond that ceiling, instead of the expected value of perfect information, we need the expected value of information (EVI), which defines how much we should expect to pay for additional, imperfect information. The value of information tends to rise quickly with small reductions of uncertainty but levels off as we approach perfect certainty. If we graphed this statement, the curve would climb rapidly and then flatten out. The slope of the curve depends on many things, including how much uncertainty we have and the details of the loss function. Understanding this curve means that the first few observations have a much higher relative value; stated another way, continuing to reduce uncertainty takes more and more effort.

I have seen this phenomenon in many measurement programs. The first few measurement reports generate significant observations and ideas for process changes; however, as time goes by those large jumps in knowledge come at a slower pace. The cost curve is nearly the opposite of the EVI curve: costs rise slowly when uncertainty is highest and then rise sharply as we get closer to certainty. Based on the EVI and cost curves, it is immediately apparent that the assumption that a lot of uncertainty requires a lot of information (and, therefore, cost) to reduce is WRONG. One of the best lines in this chapter can be found on page 162: "If you know almost nothing, almost anything will tell you something."
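A toy illustration of those diminishing returns (my example, not the book's): the standard error of a sample mean shrinks with the square root of the sample size, so early observations cut uncertainty far faster than later ones.

```python
import math

# Relative uncertainty (standard error of a mean, normalized to one observation)
# after n observations: the first few cut it sharply; later ones barely move it.
sample_sizes = [1, 4, 16, 64, 256]
relative_error = [1 / math.sqrt(n) for n in sample_sizes]
# Going from 1 to 4 observations halves the error; halving it again from
# n=64 requires 192 additional observations.
```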

Another interesting topic tackled in the chapter is the concept of perishable information values. In its most basic form, consider the value of solving the right problem too late. For example, if you were buying a house in a tight market, you could either spend money to reduce uncertainty about prices now or wait to watch prices evolve (which is much closer to perfect information); EVI is a tool for deciding whether to wait or to buy more information. In a tight market, waiting tends to have far less utility and value than getting information quickly.

As the complexity of a decision increases, so does the number of variables needed to determine the value of information. Hubbard observes that the vast majority of variables in most models have an information value of zero: people measure what they think is important, whether it is or not, yet when the EVI is determined for the variables in a model, the known level of uncertainty for most of them turns out to be acceptable, justifying no further measurement. He further found that the variables with the highest information values were routinely those that the client never measured. He named this the measurement inversion. Measurement inversion typically occurs because people measure what they already know how to measure, measure things that will provide good news, or never establish the business value of the measurement. Chapter 7 puts on firm footing the case for quantifying the value of information and for gathering more data rather than guessing or simply measuring everything.

Previous Installments in Re-read Saturday, How to Measure Anything: Finding the Value of "Intangibles" in Business, Third Edition

How To Measure Anything, Third Edition, Introduction

Chapter 1: The Challenge of Intangibles

Chapter 2: An Intuitive Measurement Habit: Eratosthenes, Enrico, and Emily

Chapter 3: The Illusions of Intangibles: Why Immeasurables Aren’t

Chapter 4: Clarifying the Measurement Problem

Chapter 5: Calibrated Estimates: How Much Do You Know Now?

Chapter 6: Quantifying Risk Through Modeling