Christmas Lights

Not everything is linear

Value Stream Mapping originated in manufacturing. The diagrams we all know and love can be traced back to Charles Knoeppel’s book Installing Efficiency Methods (1918). The problem with lifting this technique directly from manufacturing is that knowledge work involves more shared people and resources, variability in processing time and path, and changing requirements. Mapping this morass gets messy and often frustrating. Strategies for addressing two common issues, shared people and resources and variable processing time, are described below:

Shared People and Resources

People and resources are shared between projects as work flows through the value stream. For example, an architect might be involved to provide a bit of runway, evaluate an option, or approve a design choice. Their specialty is not required all of the time, so they are matrixed across different pieces of work. Shared resources are an outcome of specialization and of the mistaken idea that 100% utilization maximizes efficiency.


The solution lies less in changing how you create a value stream map than in organizational design. Solutions can include:

  1. Adopt cross-functional teams that have as many as possible of the capabilities needed to take a piece of work from idea to delivery.
  2. Couple people to products rather than to siloed specialties. This approach decouples products from each other so that work on one product does not have to wait for someone currently working elsewhere.
  3. Where shared people or resources are seemingly unavoidable (this circumstance is ALWAYS avoidable if an organization wants to spend the time, imagination, and money), consider building a process map of the steps where they are involved and measuring the impact of wait time. Knowing the impact will give you data for experimentation and for requesting a budget to fix the problem.
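Measuring the impact of wait time around a shared specialist can be as simple as recording when each item starts waiting, when work actually begins, and when it finishes. A minimal sketch, assuming hypothetical timestamped records (the `StepRecord` shape and field names are illustrative, not a standard):

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical record of one work item passing through a step that
# depends on a shared specialist (e.g., an architect review).
@dataclass
class StepRecord:
    step: str
    queued_at: datetime    # item began waiting for the specialist
    started_at: datetime   # specialist actually began work
    finished_at: datetime  # specialist finished

def wait_time_impact(records: list[StepRecord]) -> dict:
    """Summarize how much of total elapsed time is spent waiting."""
    wait = sum((r.started_at - r.queued_at).total_seconds() for r in records)
    touch = sum((r.finished_at - r.started_at).total_seconds() for r in records)
    total = wait + touch
    return {
        "wait_hours": wait / 3600,
        "touch_hours": touch / 3600,
        "percent_waiting": 100 * wait / total if total else 0.0,
    }

# Illustrative numbers: two days queued for a two-hour review
# means ~96% of the elapsed time was waiting, not working.
records = [
    StepRecord(
        "architecture review",
        datetime(2024, 1, 2, 9, 0),
        datetime(2024, 1, 4, 9, 0),
        datetime(2024, 1, 4, 11, 0),
    )
]
print(wait_time_impact(records))
```

A wait percentage like this, gathered across many items, is exactly the kind of data that justifies an experiment or a budget request.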

Variable Processing Time

User stories, or any other pieces of work technical teams pull, do not have a consistent size. The size of a piece of work affects how long it will take (though the relationship is not perfect), which makes it difficult to have a predictable flow through individual steps. This is different from auto assembly or other classic manufacturing models.


Start by measuring high-level throughput and cycle time (how many work items were delivered and how long each item took). Examine the variability to determine whether work passing through the value stream is predictable enough to engender dependability and trust. Remember, there will always be some variability. If the amount of variability is acceptable, use standard process improvement mechanisms such as retrospectives to drive incremental change. If not, tools such as Fishbone Diagramming or Affinity Diagramming can help identify attributes that potentially contribute to variability. Once you have a few suspected contributors, gather measurement data (this might require process mapping). Design experiments to determine whether you can affect the variability, then make changes to the flow of work.
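The throughput, cycle time, and variability measurements above can be sketched in a few lines. This is a minimal illustration assuming hypothetical (start, finish) dates per delivered item; the coefficient-of-variation threshold mentioned in the comment is a rule of thumb, not a standard:

```python
from datetime import date
from statistics import mean, stdev

# Hypothetical (start, finish) dates for work items delivered
# in one measurement window (illustrative numbers only).
items = [
    (date(2024, 3, 1), date(2024, 3, 4)),
    (date(2024, 3, 2), date(2024, 3, 12)),
    (date(2024, 3, 5), date(2024, 3, 7)),
    (date(2024, 3, 6), date(2024, 3, 27)),
]

cycle_times = [(finish - start).days for start, finish in items]
throughput = len(items)            # items delivered in the window
avg_ct = mean(cycle_times)         # average cycle time in days
cv = stdev(cycle_times) / avg_ct   # coefficient of variation

# Rule of thumb (an assumption, not a standard): a CV approaching
# or exceeding 1 suggests cycle times are too erratic to support
# dependable forecasts, so dig into contributors to variability.
print(f"throughput={throughput}, avg cycle time={avg_ct:.1f} days, CV={cv:.2f}")
```

Tracking these two or three numbers per window gives you a baseline against which each experiment's effect on variability can be judged.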

Some of the issues with using value streams in software product development and maintenance can be handled in the analysis of the data if you are aware that they will occur and that they are common. The real answers are often more complicated and have nothing to do with value stream mapping at all, but rather with the use of organizational models that make sense in manufacturing but are problematic in knowledge work.