There is still time to register!

This is part 2 of an essay based on a presentation I am giving on Friday, June 5th at 9 EDT (sign-up: https://bit.ly/3gH0Uy5). I am presenting as part of IFPUG’s Knowledge Cafe Webinar Series. The presentation is titled Software Development: Preparing For Life After COVID-19.

Management guru Peter Drucker said, “There is nothing so useless as doing efficiently that which should not be done at all.” Benchmarking is a tool for identifying work that should not be done at all or that could be done better, while continuous improvement provides a structure for acting on the opportunities the benchmark exposes. There are many approaches to benchmarking, and I suggest combining qualitative and quantitative assessments. The combination is critical for identifying how to improve effectiveness and efficiency. In a post-COVID-19 environment, all of us will need to answer whether the way we are working delivers tangible value in a financially sound manner. If you don’t know the answers to the effectiveness and efficiency questions, leaders will be reluctant to spend money on you, let alone on large-scale improvement exercises. Once you know where you stand, begin to make changes, using a feedback loop to know whether or not your experiments are working.

The idea of continuous improvement has been part of the business landscape in one form or another since the beginning of time. For example, the leaders of the Total Quality Movement of the late 1980s, such as Juran, Deming, and Crosby, hammered home the need for continuous change as US business refocused on product quality. Unfortunately, the continuous process improvement message suffers from two interpretation problems. The first is that process improvement was implemented as a focus on controlling and reducing costs rather than on increasing process throughput. The second is that many process improvement programs focused on one big change rather than on finding a generalized process that could continuously generate improvement. Finding and implementing a repeatable process requires culture change and long-term thinking, both of which are hard. Paraphrasing W. Edwards Deming, we will need constancy of purpose to make continuous process improvement pay off, but with that constancy of purpose, we won’t need a single overwhelming change.

When I use the term continuous, I am urging you to look for ways to improve every day, or at least every few weeks, and then to address what you find. In agile, for example, Scrum teams use retrospectives to propose and make changes to how they work. The same approach applies to Kanban and Scrumban. Knowing what to change implies that you need a way to collect data and compare it to your benchmark continuously, rather than waiting for a large batch data collection and analysis event.

Any effective assessment of an organization or team requires a broad approach that combines observation with an understanding of why things work (theory). Most agilists are used to an empirical process; Scrum is an empirical process. Empiricism uses our senses to generate knowledge. We use the phrase “inspect and adapt” to encapsulate the approach. Deming showed us one of the flaws in a purely empirical approach with his funnel experiment. In the experiment, tampering, that is, reacting to every change we see without understanding which outcomes are special causes that should be reacted to and which are common causes that should not, produces erratic performance. Said a little differently, simply reacting to everything you see will not deliver efficient results.
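
To make the funnel lesson concrete, here is a minimal sketch, assuming hypothetical weekly cycle-time data and an individuals (XmR-style) control chart, of how a team might decide which results are special causes worth reacting to and which are common-cause noise:

```python
# Minimal sketch: separate special-cause signals from common-cause noise
# with an individuals (XmR-style) control chart. All numbers are
# hypothetical weekly average cycle times in days.

def natural_process_limits(baseline):
    """Return (mean, lower, upper) limits computed from a stable baseline."""
    mean = sum(baseline) / len(baseline)
    # Average moving range between consecutive observations.
    moving_ranges = [abs(b - a) for a, b in zip(baseline, baseline[1:])]
    avg_mr = sum(moving_ranges) / len(moving_ranges)
    spread = 2.66 * avg_mr  # 2.66 is the standard XmR chart constant
    return mean, mean - spread, mean + spread

baseline_weeks = [6.1, 5.8, 6.4, 5.9, 6.2, 6.0, 5.7, 6.3]
mean, lower, upper = natural_process_limits(baseline_weeks)

# React only to observations outside the natural process limits.
for week, cycle_time in {"week 9": 6.6, "week 10": 9.8}.items():
    if lower <= cycle_time <= upper:
        print(f"{week}: {cycle_time} days -- common cause, do not tamper")
    else:
        print(f"{week}: {cycle_time} days -- special cause, investigate")
```

The arithmetic matters less than the discipline: only the week-10 result justifies a reaction; adjusting the process because of week 9 would be tampering.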

An approach that combines organizational change with empowering teams to self-manage how they work might look something like the diagram below:

A quick review of the model begins with the measurement/assessment step, which gathers data on how an organization is performing at a specific point in time. A baseline provides an organization with self-knowledge and a proverbial line in the sand. In order to understand performance, you need to measure. For targeting process improvement in today’s environment, I suggest focusing on four areas (a sketch of the basic calculations follows the list):

  • Throughput
  • Cycle Time
  • Productivity
  • Delivered Defects
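
As a sketch of how lightweight the quantitative side can be, assuming hypothetical work-item records that carry a start date, a finish date, and a count of defects found after delivery, three of the four measures fall out of a few lines of Python:

```python
from datetime import date

# Hypothetical completed work items: (start, finish, defects found after delivery)
completed_items = [
    (date(2020, 5, 4),  date(2020, 5, 11), 0),
    (date(2020, 5, 5),  date(2020, 5, 15), 1),
    (date(2020, 5, 11), date(2020, 5, 18), 0),
    (date(2020, 5, 12), date(2020, 5, 26), 2),
]

weeks_observed = 4
throughput = len(completed_items) / weeks_observed            # items per week
cycle_times = [(done - start).days for start, done, _ in completed_items]
avg_cycle_time = sum(cycle_times) / len(cycle_times)          # days per item
delivered_defects = sum(defects for _, _, defects in completed_items)

print(f"Throughput:        {throughput:.1f} items/week")
print(f"Avg cycle time:    {avg_cycle_time:.1f} days")
print(f"Delivered defects: {delivered_defects}")
```

Productivity is the outlier: it requires a size measure divided by effort (for an IFPUG audience, think function points per unit of effort), so it usually takes more than a query against the team’s work-item tracker.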

These areas shine a light on the three things executives find important: whether they are getting more functionality faster, whether what they are getting is delivered in a financially sound manner, and whether their expectations for quality are being met. Data is great, but it is only useful if we know how the work happened and the capabilities of the teams that made it happen. Areas to consider for the “how,” or qualitative, side of the assessment typically include (a sketch combining the two sides follows the list):

  • Process
  • Methods
  • Skills
  • Tools
  • Environment/Context
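
To emphasize that the baseline is one artifact with two sides, here is a sketch, with hypothetical field names and values, of a combined snapshot that keeps the quantitative measures and the qualitative context together:

```python
from dataclasses import dataclass, field

# Hypothetical structure for a baseline snapshot: quantitative measures
# plus qualitative context captured at the same point in time.
@dataclass
class BaselineSnapshot:
    team: str
    throughput: float          # items per week
    cycle_time_days: float     # average days from start to finish
    productivity: float        # e.g., function points per person-month
    delivered_defects: int     # defects found after delivery
    context: dict = field(default_factory=dict)  # the "how" side

snapshot = BaselineSnapshot(
    team="Team Kanga",
    throughput=1.0,
    cycle_time_days=9.5,
    productivity=8.2,
    delivered_defects=3,
    context={
        "process": "Scrumban with two-week cadences",
        "methods": "TDD on services, none on the UI",
        "skills": "strong backend, thin test automation",
        "tools": "shared CI, manual deployment",
        "environment": "fully remote since March 2020",
    },
)
```

When the next measurement cycle produces a new snapshot, deltas in the numbers can be read against deltas in the context, which is where candidate experiments come from.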

When we know both what we did and how we did the work, organizations and teams can identify the experiments they would like to run. Knowledge facilitates safe-to-fail experiments.

Cycling back to the discussion of empiricism and rationalism: regardless of what you choose to measure, before you collect a single piece of data, make sure you can trace the measure to the contribution a change in that measure will make to the top line and/or the bottom line.
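
A minimal sketch of that discipline, with an illustrative (not prescriptive) mapping from the four measures above to the business results they are expected to move, might look like this:

```python
# Sketch: a simple traceability check -- every measure we collect must
# name the business result a change in that measure is expected to move.
# The mapping below is illustrative, not a recommended measure set.

measure_traceability = {
    "throughput":        "top line: more functionality reaches customers sooner",
    "cycle time":        "top line: shorter time to market",
    "productivity":      "bottom line: lower cost per unit of functionality",
    "delivered defects": "bottom line: less rework and support cost",
}

def vet(proposed_measures):
    """Reject any proposed measure without a stated business contribution."""
    for measure in proposed_measures:
        contribution = measure_traceability.get(measure)
        if contribution is None:
            raise ValueError(f"No business contribution recorded for '{measure}'")
        print(f"{measure}: {contribution}")

vet(["throughput", "delivered defects"])
```

If a measure cannot be vetted this way, collecting it is exactly the efficient-but-useless work Drucker warned about.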

For example, if you decide to measure productivity and throughput, two process measures, you need to ensure that you are observing both the processes that affect those measures and how the measures are consumed. Consumption of these measures often happens in planning and reporting. Each piece of data has a lifecycle; often we only notice data during one step or another, which obscures a lot of opportunities for improvement.

The presentation this essay supports is part of IFPUG’s Knowledge Cafe Webinar Series and will be held on Friday, June 5th at 9 EDT (sign-up: https://bit.ly/3gH0Uy5). I hope you sign up and listen. I will publish the third and final part of the essay next week so that I do not spoil the punchline for the attendees!