The Science of Successful Organizational Change

This week Steven dives into Part 2 of Paul Gibbons' book The Science of Successful Organizational Change.  In today's entry, we cover the introduction to Part 2, in which Gibbons takes us down the path of strategy and uncertainty.  Remember to use the link in the essay to buy a copy of the book to support the author, the podcast, and the blog!   – Tom

Week 5 —

This week we enter Part II of Paul Gibbons' book "The Science of Successful Organizational Change" (get your copy).  Unlike Part I, which consisted of Chapter 2 alone, Part II consists of Chapters 3 through 5.

Part II – Change Strategy

Gibbons explains the difference between strategy and tactics: "strategy properly focuses on goals and not how to deliver those goals" (p. 71).

Given our uncertain, VUCA world, it is more important that change programs be aligned and coherent with each other than that they be executed with precision.

Change programs that lack coherence should not be difficult to imagine; we can all think of initiatives that seem to pull us in opposite directions.  Gibbons relates the story of one change initiative that aimed to reduce costs through staff reductions but fell short of its goals because another change initiative to improve customer service resulted in hiring additional contractors.

Gibbons' main point in the Part II introduction is to make sure we understand that strategic issues are not the same as the tactics needed to achieve strategic goals.  Another way to say this: understand the "what" and the "why" before defining the "how".

Chapter 3 – Governance and the Psychology of Risk

An environment of VUCA (Volatility, Uncertainty, Complexity, and Ambiguity) means greater risk, and Gibbons tells us risk is more than math and numbers; risk is about human psychology too.

We make decisions and judgments based on intuition.  The problem is that most of us are not very good at estimating probabilities. Probability is at the heart of estimation, including:

  1. The probability that an uncertain event will occur.
  2. The likely outcome of the event should it occur.

We see this in Product Management all the time.

  1. Trying to determine how many of our customers will like and use a planned feature.
  2. Predicting the effect of the feature on a key measure (e.g., sales).

Gibbons writes about six reasons people do not understand risks very well and are thus subject to making poor decisions, concluding:

“In business, understanding the psychology of risk is more important than understanding the mathematics of risk” (p. 82)

The six flaws we have when assessing risks are:

  1. We spot trends and patterns in events that happen randomly. (Pattern Matching Cognitive Bias – Tom)
  2. We place faith in small samples.
  3. In troubled situations, we are willing to take on more risks.
  4. We do not judge relative probabilities well because our emotions can distort our analysis. For example, people pay more for anti-terrorism insurance than for regular travel insurance, despite the statistics.
  5. We discount highly improbable events that have catastrophic consequences (i.e., we ignore them: "this won't happen here").
  6. We underestimate the "risk of ruin". For example, for a risky change project, we should not invest more than 10% of our capital; otherwise, a negative outcome may ruin the business.
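Flaw 6, the "risk of ruin", is easy to see in a quick Monte Carlo sketch. The numbers below (a 60% success rate, 100 projects, a 1% ruin threshold) are my own illustrative assumptions, not Gibbons': even a favorable bet can ruin a business that stakes too much of its capital on each attempt.

```python
import random

def ruin_probability(stake_fraction, win_prob=0.6, rounds=100,
                     trials=2000, ruin_level=0.01, seed=42):
    """Estimate the chance a business is 'ruined' (capital falls below
    ruin_level of its starting value) when it repeatedly stakes
    stake_fraction of its current capital on risky change projects."""
    rng = random.Random(seed)
    ruined = 0
    for _ in range(trials):
        capital = 1.0
        for _ in range(rounds):
            stake = stake_fraction * capital
            # a win returns the stake; a loss forfeits it
            capital += stake if rng.random() < win_prob else -stake
            if capital < ruin_level:
                ruined += 1
                break
    return ruined / trials

# The same favorable odds are safe at a 10% stake,
# but frequently ruinous at a 50% stake.
print(ruin_probability(0.10))
print(ruin_probability(0.50))
```

The asymmetry is the point: identical odds that grow capital at a 10% stake destroy it at a 50% stake, which is exactly why the "risk of ruin" deserves its own line in the list.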

The Planning Fallacy

Stakeholders, Project Managers, Product Managers, and Scrum Masters all know about the planning fallacy. Gibbons writes:

“The planning fallacy is the systematic tendency for project plans and budgets to undershoot” (p. 83)

We do not need the word “project” in the above quote because this happens with change initiatives, iteration (sprint) planning, and product roadmaps.

Gibbons provides an excellent resource for understanding more about failed projects/change programs (blog – Why Projects Fail).

The planning fallacy means we consistently underestimate the effort needed to complete plans.  Gibbons believes this happens because of:

  1. Optimism bias – our tendency toward hopeful, best-case estimates.
  2. The work culture we find ourselves in.
  3. Our limited ability to think probabilistically, which circles back to those 6 flaws we have when assessing risks.
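The interplay of optimism bias and our limited probabilistic thinking can be sketched in a short simulation. The task-duration numbers below are invented for illustration: each task is planned at its "most likely" duration, but actual durations are right-skewed (things occasionally go very wrong and rarely go much better), so the total almost always overruns the plan.

```python
import random

def simulate_project(n_tasks=10, trials=5000, seed=7):
    """Plan each task at its most likely duration (6 days), but draw the
    actual duration from a right-skewed triangular distribution:
    best case 5 days, most likely 6, worst case 20."""
    rng = random.Random(seed)
    plan = n_tasks * 6  # sum of "most likely" per-task estimates
    totals = []
    for _ in range(trials):
        totals.append(sum(rng.triangular(5, 20, 6) for _ in range(n_tasks)))
    avg_actual = sum(totals) / trials
    overrun_rate = sum(t > plan for t in totals) / trials
    return plan, avg_actual, overrun_rate

plan, avg_actual, overrun_rate = simulate_project()
print(plan, round(avg_actual, 1), round(overrun_rate, 3))
```

Even though every individual task estimate is defensible, the skew compounds across tasks: the simulated project takes far longer than the plan nearly every time, which is the planning fallacy in miniature.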

The positive side of the planning fallacy is that it helps us move forward with improbable products or projects that might never have started had we been fully aware of the chances of success. The downside argument would use very similar words!  Think about how many startups exist (at least briefly) and how many actually make money.

Gibbons quotes Daniel Kahneman (author of "Thinking, Fast and Slow"):

“[Executives] make decisions based on delusional optimism rather than on a rational weighting of gains, losses, and probabilities.  They overestimate benefits and underestimate costs.  They spin scenarios of success while overlooking the potential for mistakes and miscalculations.  As a result, they pursue initiatives that are unlikely to come in on-budget or on-time or deliver the expected returns – or even to be completed.” (p. 84)

We all are capable of this, not just “executives”.  And likely, executives are influenced by those overly optimistic proposals, which they have encouraged.

Estimating Better

What can we do to assess risk better and avoid the planning fallacy?

Chapter 3 provides ideas to improve our planning and to counter our tendency to believe that “everything will go as planned”.

  1. Plan for multiple possible outcomes, use the Most Likely Development (MLD) graph (p. 86).
  2. Avoid seeking a simplistic, clear answer about the future, which rarely exists. Reality is probabilistic, but typical management culture is not.  Gibbons quotes President Truman: "Can someone please find me a one-handed economist." (p. 88). Truman, like most of us, was seeking a clear answer about the future that almost always does not exist.
  3. Use the SOCKS framework to drive conversations around risk. In our week-3 re-read (Chapter 1), the SOCKS framework (Shortfalls, Overruns, Consequences, Killed, and Sustainability) was introduced.
  4. Understand business risks cannot be measured precisely. Business risks are not the same as games of chance like dice (i.e., a priori probability).
  5. Understand that normal distributions will not help us think about abnormal events. Nassim Taleb is referenced again, this time for his ideas about risk found in the book "The Black Swan: The Impact of the Highly Improbable": "we are reasonably good at predicting events in the middle of the distribution that makes little difference, and atrocious at predicting events that really matter" (p. 93).
  6. Use premortems to help expose risks and manage the uncertainty of important risks.
  7. Think about risk governance as a people problem, not a math problem—the psychology of risk (again).
  8. Manage risks in aggregate, as a portfolio of projects/change programs.
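Point 5 above is worth a quick numeric sketch. The model below is a hedged illustration of my own, not from the book: a Student-t distribution with 3 degrees of freedom stands in for a fat-tailed reality, normalized to the same variance as the normal model, and we count how often each produces a "4-sigma" surprise.

```python
import math
import random

def tail_exceedances(n=200_000, threshold=4.0, seed=1):
    """Count draws beyond `threshold` standard deviations under a normal
    model vs. a heavy-tailed (Student-t, df=3) model of equal variance."""
    rng = random.Random(seed)
    normal_hits = t_hits = 0
    for _ in range(n):
        z = rng.gauss(0, 1)
        if abs(z) > threshold:
            normal_hits += 1
        # Student-t with 3 degrees of freedom: Z / sqrt(chi2_3 / 3),
        # divided by its own std (sqrt(3)) so both draws have variance 1.
        chi2 = sum(rng.gauss(0, 1) ** 2 for _ in range(3))
        t = rng.gauss(0, 1) / math.sqrt(chi2 / 3)
        if abs(t) / math.sqrt(3) > threshold:
            t_hits += 1
    return normal_hits, t_hits

normal_hits, t_hits = tail_exceedances()
print(normal_hits, t_hits)  # the heavy-tailed model produces far more extremes
```

If the world is even modestly fat-tailed, the extreme events that "really matter" arrive orders of magnitude more often than a normal-distribution risk model predicts, which is Taleb's (and Gibbons') warning.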



Recapping this week’s reread of “The Science of Successful Organizational Change” and the main topics found in the Part II introduction and in Chapter 3:

  1. Strategy and Tactics
  2. The Psychology of Risk
  3. The Planning Fallacy
  4. Strategies and tactics to compensate for the Planning Fallacy by keeping the Psychology of Risk in mind

“We have minds that are equipped for certainty, linearity, and short-term decisions that must instead make long-term decisions in a nonlinear, probabilistic world” (p. 10)

Question:  Do you agree with Gibbons' assertion that the psychology of risk is more important than the mathematics of risk?


Previous entries in the re-read of the book The Science of Successful Organizational Change (buy a copy!)

Week 1: Game Plan

Week 2: Introduction

Week 3: Failed Change

Week 4: Change Fragility to Change-Agility