I had planned to cover two chapters of Thinking, Fast and Slow this week. I completed the reading and notes at 3:30 AM on Friday while waiting to go to the airport. As in the past, when I began sorting out my thinking about the content, nearly 600 words were on the page. That is the power of this book: it is incredibly rich in useful ideas. While it might be a priming effect (Chapter 11), I used the law of small numbers to explain why it was not a good idea to jump to conclusions based on the outcome of a survey of a few teams. Even though I am re-reading the book (actually my wife's copy), I am finding new ideas, and I hope you are too! Onwards!

Part Two: Heuristics and Biases

Chapter 10 – Law of Small Numbers

As I read this chapter, I was struck by the relationship between two very different approaches to thinking. The relationship exists because System 2 relies on System 1's associative machine (System 1 connects the dots between ideas). System 1 creates associations that are useful to System 2 (remember that System 2 is lazy). The two systems can and do work together.

Kahneman begins the chapter with an example of data interpretation using cases of kidney cancer. The counties with the lowest rates of kidney cancer are rural and vote Republican. All sorts of causal theories jump to mind based on that data. However, a few paragraphs later Kahneman notes that the counties with the highest rates of kidney cancer are also rural and vote Republican. The problem is that rural counties have small populations, and small samples are prone to extreme outcomes.
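To see how small samples can produce both the highest and the lowest rates purely by chance, here is a minimal simulation sketch; the incidence rate and county populations are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)
TRUE_RATE = 1e-4  # assumed true incidence rate, identical in every county

# 100 hypothetical rural counties vs. 100 hypothetical urban counties
rural_pop = np.full(100, 2_000)
urban_pop = np.full(100, 200_000)

rural_rates = rng.binomial(rural_pop, TRUE_RATE) / rural_pop
urban_rates = rng.binomial(urban_pop, TRUE_RATE) / urban_pop

print(f"Rural: min={rural_rates.min():.5f}, max={rural_rates.max():.5f}")
print(f"Urban: min={urban_rates.min():.5f}, max={urban_rates.max():.5f}")
# The small rural counties produce both the lowest observed rates (often
# zero) and the highest, even though the true rate never varies.
```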

System 1 automatically creates causal relationships; it fills in the gaps between pieces of information and creates a story that is not always correct. System 1's associative machine does not deal well with purely statistical facts, which change the probability of outcomes but do not cause them to happen. The kidney cancer data is an example of this problem. User satisfaction surveys run by internal IT groups frequently have the same problem.

When statistics are generated from small samples, they are prone to sampling error: the possibility of an extreme outcome that is not representative of the population is much higher. Kahneman reminds the reader that scientists call these results artifacts, observations that are produced entirely by some aspect of the method of research. Artifacts can be an outcome of how sampling is done in many surveys. This is a perennial problem for surveys done as part of transformation programs, often because of a lack of statistical expertise. I recently saw a survey of a handful of stakeholders indicate that a new technique was not working, while every shred of observational evidence suggested otherwise. Bad samples lead to erroneous conclusions.

These sampling artifacts lead the reader to the law of small numbers. System 1 creates a bias of confidence based on perceived patterns that are supported by the associative machine; Kahneman says it nicely: "we trust intuition versus statistics." The human System 1 brain works very hard at constructing stories ahead of the facts, making leaps of faith. I have heard stories of many studies that were terminated early when a perceived pattern emerged, even though the perception was created from a small sample. A friend once confided that he stopped a product A/B test before reaching a proper sample size (he had actually used statistics to calculate the proper sample size up front). Even though the truncated test had "predicted" that one option would be a wild success, real life was different. The mistake cost him an annual bonus. I see people doing process assessments make the same mistake so often that it makes my head swim. The law of small numbers is alive and well and living in the process improvement world.
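For readers curious about the math my friend did up front, here is a minimal sketch of the standard two-proportion sample size calculation; the baseline and target conversion rates are hypothetical:

```python
from math import ceil
from statistics import NormalDist

def ab_sample_size(p1, p2, alpha=0.05, power=0.80):
    """Participants needed per arm to detect a change from p1 to p2,
    using the standard two-proportion z-test approximation."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance
    z_beta = NormalDist().inv_cdf(power)           # desired statistical power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# Hypothetical example: a 5% baseline conversion rate and a hoped-for 6%.
print(ab_sample_size(0.05, 0.06))  # 8155 users per arm
```

Stopping a test like this after a few hundred users almost guarantees that an extreme, unrepresentative result will look like a pattern.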

Kahneman points out that the law of small numbers is built on the idea that patterns exist and are there to be recognized if you are observant or clever enough. We pay more attention to the content of messages than to information about the messages' reliability. Statistics generate many observations, and System 1 tries to make sense of them. Bigger sample sizes are often the answer to increasing explanatory power (I suggest that you do the math).
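Doing the math is quick: the standard error of a sample proportion shrinks with the square root of the sample size, so quadrupling the sample only halves the noise. A minimal sketch, assuming a true proportion of 50% (the worst case for variance):

```python
from math import sqrt

p = 0.5  # assumed true proportion
for n in (10, 100, 1_000, 10_000):
    se = sqrt(p * (1 - p) / n)  # standard error of the sample proportion
    print(f"n={n:>6}: estimate = {p:.2f} +/- {1.96 * se:.3f} (95% interval)")
# n=    10: estimate = 0.50 +/- 0.310
# n=   100: estimate = 0.50 +/- 0.098
# n=  1000: estimate = 0.50 +/- 0.031
# n= 10000: estimate = 0.50 +/- 0.010
```

A ten-person survey simply cannot distinguish a coin flip from a strong preference; a thousand-person survey can.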

Remember, if you do not have a favorite, dog-eared copy of Thinking, Fast and Slow, please buy a copy. Using the links in this blog entry helps support the blog and its alter ego, The Software Process and Measurement Cast. Buy a copy on Amazon; it's time to get reading!

The installments:

Week 1: Logistics and Introduction - http://bit.ly/2UL4D6h

Week 2: The Characters Of The Story - http://bit.ly/2PwItyX

Week 3: Attention and Effort - http://bit.ly/2H45x5A

Week 4: The Lazy Controller - http://bit.ly/2LE3MQQ

Week 5: The Associative Machine - http://bit.ly/2JQgp8I

Week 6: Cognitive Ease - http://bit.ly/2VTuqVu

Week 7: Norms, Surprises, and Causes - http://bit.ly/2Molok2

Week 8: A Machine for Jumping to Conclusions - http://bit.ly/2XOjOcx

Week 9: How Judgement Happens and Answering An Easier Question - http://bit.ly/2XBPaX3