Re-read Saturday



This chapter formally introduces Prospect Theory and contrasts it with Expected Utility Theory. A little background research shows that prospect theory (part of Kahneman's research on decision making under uncertainty) was cited as a contribution to his winning the Nobel prize in economics.

In Expected Utility Theory, a gamble is assessed by calculating its expected value: multiply each possible outcome by the likelihood that it will occur, then sum those products. If you have a 50% chance to make $500 and a 50% chance of breaking even, the expected value is $250. Whenever the value is positive, the theory predicts that humans will always accept the gamble. Kahneman and Tversky observed that real-life behavior often differs from what Expected Utility Theory predicts because the context in which the choice is made makes a difference. Changing our example to a 50/50 chance of either making $500 or losing $400, Expected Utility Theory predicts that a rational economic human would accept the gamble (the expected value is +$50). However, if the person being asked to accept the gamble has a net worth of $1,000, they would naturally be more risk-averse because the potential loss would be perceived as psychologically larger.
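The contrast can be sketched in a few lines of code. This is a minimal illustration, not from the book's text: the value function below uses the standard Kahneman-Tversky form (gains raised to a power, losses multiplied by a loss-aversion factor), with the parameter values 0.88 and 2.25 that Tversky and Kahneman estimated in their 1992 paper. It deliberately ignores probability weighting, which full Prospect Theory also models.

```python
def expected_value(outcomes):
    """Expected Utility style: sum of probability * outcome."""
    return sum(p * x for p, x in outcomes)

def prospect_value(x, alpha=0.88, loss_aversion=2.25):
    """Kahneman-Tversky value function: losses loom larger than gains."""
    if x >= 0:
        return x ** alpha
    return -loss_aversion * ((-x) ** alpha)

def prospect_evaluation(outcomes):
    """Probability-weighted sum of subjective values
    (ignoring Prospect Theory's probability weighting)."""
    return sum(p * prospect_value(x) for p, x in outcomes)

gamble = [(0.5, 500), (0.5, -400)]
print(expected_value(gamble))       # +50: a "rational" agent accepts
print(prospect_evaluation(gamble))  # negative: the loss is felt more strongly
```

The same gamble that Expected Utility Theory says to accept comes out negative under the value function, which matches the risk-averse behavior Kahneman and Tversky observed.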


Kahneman opens the chapter by establishing the economic definition of a human as someone who is rational, selfish, and whose tastes don't change. This flies in the face of how a human is envisioned in psychological theory: not always rational, sometimes generous, and with tastes that do change. I have always had trouble with the rational-human definition because I grew up in a family tied to the retail and wholesale clothing business. At the very least, I have direct evidence that people's tastes change, which means part of the definition does not hold. The idea that people act as purely economic beings is a tantalizing simplification when planning changes in an organization. Many change agents try to sell a process change on a purely economic basis, only to be shocked when there is resistance.


Optimism is both a great driver of progress and a source of problems. In this chapter, Kahneman explores the concept and impact of optimism bias. This bias causes a person to believe that they are less likely than others to experience a negative event. For example, most software engineers believe that they have never met a problem they can't solve, an unrealistic assessment in any complicated environment. In another typical example, most drivers think they are better than average, a statistical impossibility. A third example, one we have commented on before: estimates chronically fall prey to optimism bias. The list could go on nearly forever. The effect is driven by the propensity of individuals to exaggerate their own abilities.


This week in our re-read of Thinking, Fast and Slow, we have a chapter that needs to be read by anyone who has ever been asked for an estimate… ever. There are three questions that have been asked since the dawn of time:

  1. What will “it” cost?
  2. When will “it” be done?
  3. What is “it” that I am going to get? 

Almost every person, team, and organization is called on to answer these questions on a regular basis, regardless of method. Answering the three questions has spawned a sea of consultants, not because estimators are bad actors but because the inside view is often optimistic. In software development, estimates are chronically optimistic for a multitude of reasons. The Software Process and Measurement Cast has interviewed several academics on the topic over the years; one of the most memorable interviews was with Ricardo Valerdi. Kahneman's discussion of the planning fallacy in this chapter illustrates why optimism is such a problem.
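Kahneman's remedy for the planning fallacy is the "outside view": anchor on how similar past efforts actually turned out rather than on the optimistic inside estimate. The sketch below is a hedged illustration of that idea; the reference-class data is invented for the example.

```python
from statistics import median

def outside_view_estimate(inside_estimate_days, reference_overruns):
    """Scale an inside-view estimate by the median actual/estimated
    ratio observed in a reference class of comparable past projects."""
    typical_overrun = median(reference_overruns)
    return inside_estimate_days * typical_overrun

# Invented reference class: actual effort divided by estimated effort
# for eight comparable past projects (all finished over estimate).
past_overruns = [1.1, 1.4, 1.3, 2.0, 1.2, 1.6, 1.4, 1.8]

# A 30-day inside estimate becomes roughly 42 days once the reference
# class's typical 1.4x overrun is applied.
print(outside_view_estimate(30, past_overruns))
```

The point is not the specific numbers but the discipline: the correction comes from data about similar projects, not from the estimator's own optimistic narrative about this one.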


As an Agile Coach, I am an expert in many aspects of software development. To develop that level of expertise, I have had to learn and practice a lot. I also interface with many other experts and draw on their skills. The people I respect as experts have earned that title through experience. The question in this chapter is when expert intuition can be trusted. The chapter focuses on the work that Kahneman did with Gary Klein. Part of the premise of this chapter is that there is a difference between subjective intuition and expert intuition. The chapter explores that difference, how expert intuition is formed, how it is evaluated, and whether we can trust it.


This week in our re-read of Thinking, Fast and Slow, Kahneman opens with a discussion of a number of studies showing that professional predictions are far less accurate than simple algorithmic predictions. The work that sent Kahneman down this path was originally done by Paul Meehl and published in the book Clinical versus Statistical Prediction: A Theoretical Analysis and a Review of the Evidence (1954). By the time Thinking, Fast and Slow was published decades later, studies across a wide range of subjects showed that formulas beat intuition at least 60% of the time. Bluntly stated, formulas beat intuition most of the time; the idea that algorithms are powerful should surprise no one in 2019.
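The formulas in these studies are often strikingly simple. The sketch below illustrates the kind of "improper linear model" Robyn Dawes described (and which the chapter also discusses): standardize each predictor and add them with equal weights, no expert tuning. The candidate data is invented for illustration.

```python
from statistics import mean, stdev

def equal_weight_score(candidate, population):
    """Sum of z-scores across predictors, all weighted equally
    (Dawes-style improper linear model)."""
    score = 0.0
    for key in candidate:
        values = [p[key] for p in population]
        score += (candidate[key] - mean(values)) / stdev(values)
    return score

# Invented hiring data: three predictors per candidate.
candidates = [
    {"test_score": 82, "interview": 6, "experience_years": 4},
    {"test_score": 75, "interview": 9, "experience_years": 2},
    {"test_score": 90, "interview": 5, "experience_years": 7},
]

# Rank candidates by the equal-weight formula, best first.
ranked = sorted(candidates,
                key=lambda c: equal_weight_score(c, candidates),
                reverse=True)
print(ranked[0])
```

Meehl's finding, as Kahneman recounts it, is that even crude formulas like this one tend to match or beat trained professionals, largely because the formula applies the same weights consistently every time.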


Last night severe thunderstorms rolled through northern Ohio. There were lots of power outages and trees blown over. This morning when I went to the grocery store, the store's systems could not accept debit cards. I immediately made up a story connecting the storms to the system failure. As we have seen before, System 1 thinking takes disparate facts and creates a coherent, believable story. No conclusion is too big a jump for System 1. My story, and my belief that I had identified the most probable cause, is an illusion of validity.
