Knowing what should not be done is rarely this straightforward.

A quick reminder: I am running a poll to choose the next book for Re-read Saturday. The poll remains open for another week. Currently Goldratt’s The Goal: A Process of Ongoing Improvement is topping the list, but just a few votes could change the book at the top of the list very quickly. The poll is republished at the bottom of this post.

Management guru Peter Drucker said, “There is nothing so useless as doing efficiently that which should not be done at all.” Two powerful techniques for identifying work that should not be done are process mapping, and baselining and benchmarking.

Process Mapping – A process map focuses on capturing and documenting the sequence of tasks and activities that comprise a process. A process map is generally constrained to a specific set of activities within a broader organization. Process mapping is useful at a tactical level, while other mapping techniques, like value chain mapping, are often more useful when taking an organizational view. Developing a process map (of any type) allows an analyst to review each step in the process to determine whether it adds value. Steps that do not add value should be evaluated for removal.
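To make that review concrete, here is a minimal sketch of a process map as an ordered list of steps, each flagged as value-adding or not. The step names and flags are hypothetical, and the judgment about value comes from the analyst’s review, not from the code.

```python
# A hypothetical process map: ordered steps, each flagged (by the analyst,
# not by the code) as value-adding or not. Non-value-adding steps become
# candidates for removal.
process_map = [
    {"step": "Receive change request", "adds_value": True},
    {"step": "Re-key request into a second tracking tool", "adds_value": False},
    {"step": "Develop and test the change", "adds_value": True},
    {"step": "Print and file a paper sign-off form", "adds_value": False},
    {"step": "Deploy the change", "adds_value": True},
]

removal_candidates = [s["step"] for s in process_map if not s["adds_value"]]
print(removal_candidates)
# ['Re-key request into a second tracking tool', 'Print and file a paper sign-off form']
```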

Baselining and Benchmarking – There are two typical approaches to benchmarking. The first is through measurement of the process to generate a baseline. Once a baseline is established, it can then be compared to another baseline to generate a benchmark. This type of benchmark is often called a quantitative benchmark. The second type of benchmark compares the steps and activities required in a process to those of a process that yields a similar product. Comparisons to frameworks such as the TMMi, CMMI or Scrum are a form of process benchmarking.
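As a rough illustration of the quantitative flavor, the sketch below summarizes a set of measurements into a baseline and then compares that baseline to a reference baseline to form a benchmark. The throughput numbers and the use of a simple mean are assumptions for illustration only.

```python
from statistics import mean

def baseline(measurements):
    # Summarize measurements into a single baseline value. A simple mean is
    # used here; a real baseline would also record variation and the period
    # over which the data was collected.
    return mean(measurements)

def quantitative_benchmark(our_baseline, reference_baseline):
    # A quantitative benchmark: our baseline expressed as a ratio of the
    # reference baseline (1.0 means parity with the reference).
    return our_baseline / reference_baseline

# Hypothetical throughput data (e.g., function points delivered per month).
our_team = [42, 38, 45, 40]
reference = 50  # a purchased or published reference baseline

print(round(quantitative_benchmark(baseline(our_team), reference), 2))  # 0.82
```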

The use of analytical techniques such as process mapping or benchmarking is important to ensure that opinions and organizational politics don’t outweigh processes or work steps that generate real value. Without analysis, it is easy to sit down with an individual or a team, ask them what should not be done, and get the wrong answer. Everyone has an opinion informed by his or her own experiences and biases. Unfortunately, just asking may identify a process or task that one person or team feels is not useful but that has value to the larger organization. For example, a number of years ago an organization I was working with had instituted a productivity and customer satisfaction measurement program. The software teams involved in the program saw the effort needed to measure their work as overhead. The unstated goal of the program was to gather the information needed to resist outsourcing the development jobs in the organization. The goal was not shared for fear of increasing turnover and of angering the CFO who was pushing for outsourcing.

It would be difficult to argue that doing work that should not be done makes sense. However, determining “that which should not be done” is generally harder than walking up to a team and pointing at specific tasks. There is nothing wrong with asking individuals and teams involved in a process for their input, but the core of all process changes needs to be gathering data to validate or negate opinions.

Re-read Saturday poll – vote for up to three books!

Baseline, not base line…

Measuring a process generates a baseline. By contrast, a benchmark is a comparison of a baseline to another baseline. Benchmarks can compare baselines to other internal baselines or to external baselines. I am often asked whether it is possible to externally benchmark measures and metrics that have no industry definition or that are team specific. Without developing a common definition of the measure or metric so that the data is comparable, the answer is no. A valid baseline and benchmark require that the measure or metric being collected is defined and consistently collected by all parties using the benchmark.

Measures or metrics used in external benchmarks need to be based on published standards or on standards agreed upon between the parties involved in the benchmark. Most examples of standards are obvious. For example, in the software field there are myriad standards that can be leveraged to define software metrics. Examples of standards groups include IEEE, ISO, IFPUG, COSMIC and OMG. Metrics that are defined by these standards can be externally benchmarked, and there are numerous sources of data. Measures without international standards require all parties to specifically define what is being measured. I recently ran across a simple example in which the definition of a month caused a lot of discussion. An organization compared function points per month (a simple throughput metric) to benchmark data they had purchased. The organization’s data was markedly below the benchmark. The problem was that the benchmark used the common definition of a month (12 in a year), while their data used an internal definition based on a 13-period year. Either the benchmark data or their own data should have been adjusted to make the two comparable.
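A hedged sketch of that adjustment is shown below. The throughput numbers are invented; the conversion simply re-expresses output measured per internal period (13 per year) on the calendar-month basis (12 per year) that the benchmark assumed.

```python
PERIODS_PER_YEAR = 13   # the organization's internal "months"
MONTHS_PER_YEAR = 12    # the calendar months assumed by the purchased benchmark

def per_period_to_per_month(fp_per_period):
    # A year of output spread over 13 periods is the same year of output
    # spread over 12 calendar months, so scale by 13/12.
    return fp_per_period * PERIODS_PER_YEAR / MONTHS_PER_YEAR

internal_throughput = 92.3    # function points per internal period (invented)
benchmark_throughput = 100.0  # function points per calendar month (invented)

print(round(per_period_to_per_month(internal_throughput), 1))  # 100.0 -- the apparent gap disappears
```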

Applying the defined metric consistently is also critical and not always a given. For example, when discussing the cost of an IT project, understanding what is included is important for consistency. Project costs could include hardware, software development and changes, purchased software, management costs, project management costs, business participation costs, and the list could go on ad infinitum. Another example is the use of story points (a relative measure based on team perception): while a team may well be able to apply the measure consistently because it is based on comparisons of its own perceptions, using story points to compare outside of the team would be at best valueless and at worst dangerous.
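One way to guard against that kind of inconsistency is to make the scope of the cost definition explicit before any comparison is made. The categories and amounts in the sketch below are hypothetical.

```python
# Hypothetical cost components for one project; only the categories both
# parties have agreed to include are rolled into the comparable cost figure.
project_costs = {
    "software_development": 400_000,
    "hardware": 120_000,
    "purchased_software": 60_000,
    "project_management": 80_000,
    "business_participation": 50_000,
}

AGREED_SCOPE = {"software_development", "purchased_software", "project_management"}

comparable_cost = sum(cost for item, cost in project_costs.items() if item in AGREED_SCOPE)
print(comparable_cost)  # 540000
```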

The data needed to create a baseline and for a benchmark comparison must be based on a common definition that is understood by all parties, or the results will generate misunderstandings. A common definition is only a step along the route to a valuable baseline or benchmark; the data collection must also be done on a consistent basis. It is one thing to agree upon a definition and another to have that definition consistently applied during data collection. Even metrics like IFPUG Function Points, which have a standard definition and rigorous counter training, can show up to a five percent variance between counters. Less rigorously defined and trained metrics are unknowns that require due diligence by anyone who uses them.