Some characteristics influence the assessment of a situation more than others. We weight characteristics and attributes whether we are aware of it or not. When we are not aware, we default to System 1 thinking, which, as we have read, is heavily biased. This is one reason formal decision-making processes codify the weighting of attributes: to reduce personal bias.  (more…)
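
Formal decision processes make those weights explicit. The sketch below is a minimal, hypothetical illustration of the kind of weighted decision matrix such a process might use (the attribute names, weights, and scores are invented, not taken from the book), so the weighting happens out in the open rather than inside System 1.

```python
# Minimal sketch of a weighted decision matrix: options are scored on a few
# attributes, and explicit, pre-agreed weights replace the unconscious
# weighting System 1 would otherwise apply.
# Attribute names, weights, and scores are illustrative only.

weights = {"cost": 0.4, "risk": 0.3, "time_to_value": 0.3}

options = {
    "Option A": {"cost": 7, "risk": 5, "time_to_value": 8},
    "Option B": {"cost": 5, "risk": 8, "time_to_value": 6},
}

def weighted_score(scores, weights):
    """Sum of each attribute score times its pre-agreed weight."""
    return sum(scores[attr] * w for attr, w in weights.items())

for name, scores in options.items():
    print(f"{name}: {weighted_score(scores, weights):.2f}")
# Option A: 6.70
# Option B: 6.20
```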

The chapters in Part 4 of Thinking, Fast and Slow are relatively short and punchy, but the ideas aren't small. I think these chapters are the most useful on a day-to-day basis. Chapter 28 goes into depth on the concept of loss aversion. Loss aversion works because people evaluate outcomes as losses or gains relative to a reference point, and losses loom larger than gains. If we weigh the motive to avoid a loss against the motive to achieve a gain, humans are driven more strongly to avoid losses than to achieve gains. Many of the cognitive biases we have explored earlier support the idea that our brains are wired to see threats above all else. Threats include words (consider the reaction you get to words like transformation, transition, or change); they cause listeners to think of the possibility of loss, which immediately invokes System 1 thinking. (more…)
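
To make the asymmetry concrete, here is a minimal sketch of a prospect-theory-style value function. The curvature and loss-aversion parameters (alpha ≈ 0.88, lambda ≈ 2.25) are values commonly cited in Kahneman and Tversky's later work, not figures from this chapter; treat the sketch as an illustration of "losses loom larger than gains," not as the book's own model.

```python
# Sketch of a prospect-theory-style value function showing why losses loom
# larger than gains. The parameters alpha (diminishing sensitivity) and lam
# (loss aversion) are values commonly cited in the literature, not figures
# taken from this chapter.

def subjective_value(outcome, alpha=0.88, lam=2.25):
    if outcome >= 0:
        return outcome ** alpha              # gains are discounted slightly
    return -lam * ((-outcome) ** alpha)      # losses are amplified by lam

print(subjective_value(100))    # roughly  57.5
print(subjective_value(-100))   # roughly -129.4: the same-sized loss "feels" more than twice as big
```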

Chapter 27 begins with a discussion of the classic indifference curve from Econ 101. The indifference curve shows the trade-off between two goods. In this case, Kahneman uses the trade-off between income and leisure to show how overly simple theories generate models that do not describe behavior outside the textbook. The problem Kahneman points out is that the basic indifference map doesn't reflect context. This is the same point discussed in the chapter titled Bernoulli's Errors. Consider the indifference curve of a salaried employee with two weeks of vacation. While the person still has vacation time to use, the trade-off tends to follow the path of a typical indifference curve; however, once he or she has used up the two weeks of paid time off, the slope changes to reflect that one more unit of leisure might require trading off all income. As we have seen before, context and starting point really matter. (more…)

This chapter formally introduces Prospect Theory and discusses how it differs from Expected Utility Theory. A little background research shows that prospect theory (part of Kahneman's work with Amos Tversky on decision making under uncertainty) was cited as contributing to Kahneman winning the Nobel Prize in economics.

In Expected Utility Theory, a choice is assessed by comparing the calculated values of the possible states. The value delivered by a decision is calculated by multiplying each possible outcome by the likelihood that it will occur and then summing those products. If you have a 50% chance to make $500 and a 50% chance of breaking even, the expected value is $250. When the value is positive, the theory predicts that humans will always accept the gamble. Kahneman and Tversky observed that real-life behavior often differs from the behavior predicted by Expected Utility Theory because the context in which the choice is made makes a difference. Changing our example to a 50/50 chance of either making $500 or losing $400, Expected Utility Theory would still predict that a rational economic human would accept the gamble. However, if the person being asked to accept the gamble has a net worth of $1,000, they would naturally be more risk-averse because the potential loss would be perceived as psychologically larger. (more…)
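
A short worked example makes the contrast concrete. The expected-value arithmetic below follows the gambles described above; the loss-aversion factor of 2 in the second function is purely illustrative, a stand-in for "losses are perceived as psychologically larger."

```python
# Worked example contrasting the Expected Utility view of the 50/50 gambles
# from the text with a simple loss-averse evaluation. The loss-aversion
# factor of 2 is illustrative, not a figure from the chapter.

def expected_value(gamble):
    """gamble: list of (probability, outcome) pairs."""
    return sum(p * outcome for p, outcome in gamble)

def loss_averse_value(gamble, lam=2.0):
    """Weight losses lam times as heavily as gains of the same size."""
    return sum(p * (outcome if outcome >= 0 else lam * outcome)
               for p, outcome in gamble)

win_or_break_even = [(0.5, 500), (0.5, 0)]
win_or_lose       = [(0.5, 500), (0.5, -400)]

print(expected_value(win_or_break_even))  #  250.0
print(expected_value(win_or_lose))        #   50.0  -> the "rational" agent accepts
print(loss_averse_value(win_or_lose))     # -150.0  -> the loss-averse agent declines
```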

Kahneman opens the chapter by establishing the economic definition of a human as someone who is rational, selfish, and whose tastes don't change. This flies in the face of how a human is envisioned in psychological theory: not always rational, sometimes generous, and with tastes that change. I have always had trouble with the rational-human definition because I grew up in a family tied to the retail and wholesale clothing business; at the very least I have direct evidence that people's tastes change, which means part of the definition does not track. The idea that people act as purely economic beings is a tantalizing simplification when planning changes in an organization. Many change agents try to sell a process change on a purely economic basis, only to be shocked when there is resistance. (more…)

Optimism is both a great driver of progress and a source of problems. In this chapter, Kahneman explores the concept and impact of optimism bias. This bias causes a person to believe that they are less likely than others to experience a negative event. For example, most software engineers believe that they have never met a problem they can't solve, an unrealistic assessment in any complicated environment. Another typical example: most drivers think they are better than average, which is statistically implausible. A third example, one we have commented on before: estimates chronically fall prey to optimism bias. The list of examples could go on nearly forever. This effect is driven by the propensity of individuals to exaggerate their own abilities. (more…)

This week in our re-read of Thinking, Fast and Slow, we have a chapter that needs to be read by anyone who has ever been asked for an estimate… ever. There are three questions that have been asked since the dawn of time:

  1. What will “it” cost?
  2. When will “it” be done?
  3. What is “it” that I am going to get? 

Almost every person, team, and/or organization is called on to answer these questions on a regular basis, regardless of method. Answering the three questions has spawned a sea of consultants, not because estimators are bad actors but because an inside view is often optimistic. In software development, estimates are chronically optimistic for a multitude of reasons. The Software Process and Measurement Cast has interviewed several academics on the topic over the years; one of the most memorable interviews was with Ricardo Valerdi. Kahneman's discussion of the planning fallacy in this chapter illustrates why optimism is such a problem. (more…)
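
Kahneman's counterweight to the planning fallacy is the outside view: anchor an estimate on the distribution of outcomes from a reference class of similar past projects rather than on the team's inside view alone. The sketch below is a minimal illustration of that idea; the inside-view estimate and the historical overrun ratios are hypothetical numbers, not data from the chapter.

```python
# Minimal illustration of the "outside view": instead of trusting the
# team's own (inside-view) estimate, scale it by the overrun actually
# observed on similar past projects. All numbers below are hypothetical.

from statistics import median

inside_view_estimate_days = 60                    # what the team believes
past_overrun_ratios = [1.4, 1.8, 1.1, 2.2, 1.5]   # actual/estimated on prior projects

outside_view_estimate = inside_view_estimate_days * median(past_overrun_ratios)
print(outside_view_estimate)  # 90.0 days once the reference class is applied
```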

This week in our re-read of Thinking, Fast and Slow, Kahneman opens with a discussion of a number of studies showing that professional predictions are far less accurate than simple algorithmic predictions. The work that sent Kahneman down this path was originally done by Paul Meehl and published in the book Clinical versus Statistical Prediction: A Theoretical Analysis and a Review of the Evidence (1954). By the time Thinking, Fast and Slow was published decades later, studies across a wide range of subjects showed that formulas beat intuition at least 60% of the time. Bluntly stated, formulas beat intuition most of the time; the idea that algorithms are powerful should surprise no one in 2019. (more…)
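
The formulas in Meehl-style studies are typically nothing exotic: a handful of relevant predictors combined with simple, often equal, weights, an approach Robyn Dawes later called "improper linear models." The sketch below is a hypothetical illustration of that idea (invented predictor names and data, not a model from the book): standardize a few predictors and add them up.

```python
# Sketch of the kind of simple formula Meehl-style studies pit against
# expert intuition: standardize a few relevant predictors and sum them
# with equal weights. Predictor names and data here are hypothetical.

from statistics import mean, pstdev

candidates = {
    "A": {"test_score": 82, "structured_interview": 7, "work_sample": 6},
    "B": {"test_score": 74, "structured_interview": 9, "work_sample": 8},
}
predictors = ["test_score", "structured_interview", "work_sample"]

def standardized(values):
    """Convert raw values to z-scores (0.0 if there is no variation)."""
    mu, sigma = mean(values), pstdev(values)
    return [(v - mu) / sigma if sigma else 0.0 for v in values]

# Unit-weighted sum of standardized predictors for each candidate.
z_by_predictor = {p: standardized([candidates[c][p] for c in candidates])
                  for p in predictors}
for i, name in enumerate(candidates):
    score = sum(z_by_predictor[p][i] for p in predictors)
    print(f"{name}: {score:+.2f}")
# A: -1.00
# B: +1.00
```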

Last night severe thunderstorms rolled through northern Ohio. There were lots of power outages and blown-over trees. This morning when I went to the grocery store, the store's systems could not accept debit cards. I immediately made up a story connecting the storms to the system failure. As we have seen before, System 1 thinking takes disparate facts and creates a coherent, believable story. No conclusion is too big a jump for System 1 thinking. My story, and my belief that I had predicted the most probable cause, is an example of the illusion of validity. (more…)

Part 3 of Thinking, Fast and Slow is titled Overconfidence. Chapter 19 begins by exploring several biases that contribute to overconfidence. Earlier in the book, we explored how System 1 thinking connects events to generate a coherent story. This chapter builds on those attributes of fast thinking by stating that humans interpret behavior as a manifestation of general propensities and personal traits. One of the classic biases that cause this type of thinking is the halo effect. I overheard an example of a negative halo effect this week as I walked behind a group of people in Chicago. The group, tourists, pointed at a person sleeping rough along the river and exclaimed that the person was lazy. One attribute of the person's behavior was generalized into a larger narrative. (more…)