This week in our re-read of Thinking, Fast and Slow, Kahneman opens with a discussion of studies showing that professional predictions are far less accurate than simple algorithmic predictions. The work that sent Kahneman down this path was originally done by Paul Meehl and published in the book Clinical versus Statistical Prediction: A Theoretical Analysis and a Review of the Evidence (1954). By the time Thinking, Fast and Slow was published decades later, studies across a wide range of subjects showed that formulas beat intuition at least 60% of the time. Bluntly stated, formulas beat intuition most of the time – the idea that algorithms are powerful should surprise no one in 2019.

Two of the reasons Kahneman points to in explaining why expert intuition gets torched by simple algorithms are that experts try to get clever, and that they are inconsistent in making judgments. Starting with inconsistency: context adds noise, which evokes System 1 thinking. System 1 takes disparate information and constructs stories by filling in the gaps. Kahneman illustrates the point with the story of an expert doctor diagnosing the same x-ray differently based on context. The getting-clever part stems from the expert trying to make their own mark. Process assessment work is replete with stories of different assessors coming to different conclusions from the same data.

The ideas in this chapter bear very directly on process improvement and talent acquisition. Without a simple set of criteria for a job interview or team assessment, it is easy to fall prey to making decisions based on unstructured interviews. Consider the impact that divorcing orchestra auditions from gender had on hiring; removing the subjectivity of the gender context led to different hiring outcomes.

Kahneman ends the chapter by noting that, even after he has exposed its pitfalls, intuition can be useful – but only after the disciplined collection of objective information. All coaches and consultants perform assessments before they take action. It is important to have a structured set of questions and patterns that can be used to objectively assess the situation and environment before being distracted by the noise inherent in all human endeavors.

Remember, if you do not have a favorite, dog-eared copy of Thinking, Fast and Slow, please buy a copy. Using the links in this blog entry helps support the blog and its alter-ego, The Software Process and Measurement Cast. Buy a copy on Amazon – it’s time to get reading!

The previous installments:

Week 1: Logistics and Introduction

Week 2: The Characters Of The Story

Week 3: Attention and Effort

Week 4: The Lazy Controller

Week 5: The Associative Machine

Week 6: Cognitive Ease

Week 7: Norms, Surprises, and Causes

Week 8: A Machine for Jumping to Conclusions 

Week 9: How Judgment Happens and Answering An Easier Question

Week 10: Law of Small Numbers

Week 11: Anchors 

Week 12: The Science of Availability 

Week 13: Availability, Emotion, and Risk 

Week 14: Tom W’s Specialty

Week 15: Linda: Less Is More 

Week 16: Causes Trump Statistics 

Week 17: Regression To The Mean 

Week 18: Taming Intuitive Predictions  

Week 19: The Illusion of Understanding  

Week 20: The Illusion of Validity