
This week in our re-read of Thinking, Fast and Slow, Kahneman opens with a discussion of a number of studies showing that professional predictions are far less accurate than simple algorithmic predictions. The work that sends Kahneman down this path was originally done by Paul Meehl and published in the book Clinical versus Statistical Prediction: A Theoretical Analysis and a Review of the Evidence (1954). By the time Thinking, Fast and Slow was published decades later, studies across a wide range of subjects showed that formulas beat intuition at least 60% of the time. Bluntly stated, formulas beat intuition most of the time – the idea that algorithms are powerful should surprise no one in 2019.

Two of the reasons Kahneman points to in explaining why expert intuition gets torched by simple algorithms are that experts try to get clever and that they are inconsistent in making judgments. Starting with inconsistency: context adds noise, which evokes System 1 thinking. System 1 thinking takes disparate information and constructs stories by filling in the gaps. Kahneman illustrates the point with the story of an expert doctor diagnosing the same x-ray differently based on context. The getting-clever part stems from the expert trying to make their own mark. Process assessment work is replete with stories of different assessors coming to different conclusions from the same data.

The ideas in this chapter bear very directly on process improvement and talent acquisition. Rather than using a simple set of criteria for a job interview or team assessment, it is easy to fall prey to making decisions based on unstructured interviews. Consider the impact that divorcing orchestra auditions from gender had on hiring; removing the subjectivity of the gender context led to different hiring outcomes.

Kahneman ends the chapter by noting that, even after its pitfalls are exposed, intuition can be useful, but only after the disciplined collection of objective information. All coaches and consultants perform assessments before they take action. It is important to have a structured set of questions and patterns that can be used to objectively assess the situation and environment before being distracted by the noise inherent in all human endeavors.

Remember, if you do not have a favorite, dog-eared copy of Thinking, Fast and Slow, please buy a copy. Using the links in this blog entry helps support the blog and its alter ego, The Software Process and Measurement Cast. Buy a copy on Amazon – it’s time to get reading!

The previous installments:

Week 1: Logistics and Introduction – http://bit.ly/2UL4D6h

Week 2: The Characters Of The Story – http://bit.ly/2PwItyX

Week 3: Attention and Effort – http://bit.ly/2H45x5A

Week 4: The Lazy Controller – http://bit.ly/2LE3MQQ

Week 5: The Associative Machine – http://bit.ly/2JQgp8I

Week 6: Cognitive Ease – http://bit.ly/2VTuqVu

Week 7: Norms, Surprises, and Causes – http://bit.ly/2Molok2

Week 8: A Machine for Jumping to Conclusions – http://bit.ly/2XOjOcx

Week 9: How Judgement Happens and Answering An Easier Question – http://bit.ly/2XBPaX3

Week 10: Law of Small Numbers – http://bit.ly/2JcjxtI

Week 11: Anchors – http://bit.ly/30iMgUu

Week 12: The Science of Availability – http://bit.ly/30tW6TN

Week 13: Availability, Emotion, and Risk – http://bit.ly/2GmOkTT

Week 14: Tom W’s Speciality – http://bit.ly/2YxKSA8

Week 15: Linda: Less Is More – http://bit.ly/2T3EgnV

Week 16: Causes Trump Statistics – http://bit.ly/2OTpAta

Week 17: Regression To The Mean – http://bit.ly/2ZdwCgu

Week 18: Taming Intuitive Predictions – http://bit.ly/2kAHClJ

Week 19: The Illusion of Understanding – http://bit.ly/2lK954p

Week 20: The Illusion of Validity – http://bit.ly/2mfyrYh