There is still time to register!

This is part 2 of an essay based on a presentation I am doing Friday, June 5th at 9 EDT (sign-up: https://bit.ly/3gH0Uy5). I am presenting as part of IFPUG’s Knowledge Cafe Webinar Series. The presentation is titled Software Development: Preparing For Life After COVID-19.

Management guru Peter Drucker said, “There is nothing so useless as doing efficiently that which should not be done at all.” Benchmarking is a tool to identify work that should not be done, or that should be done better, while continuous improvement provides a structure for acting on the opportunities the benchmark uncovers. There are many approaches to benchmarking, and I suggest combining qualitative and quantitative assessments. The combination is critical for identifying how to improve both effectiveness and efficiency. In a post-COVID-19 environment, all of us will need to answer whether how we are working delivers tangible value in a financially sound manner. If you don’t know the answers to the effectiveness and efficiency questions, leaders will be reluctant to spend money on you, let alone on large-scale improvement exercises. Once you know where you stand, begin to make changes, using a feedback loop to know whether or not your experiments are working.

 

Knowing what should not be done is rarely this straightforward.

A quick reminder: I am running a poll to choose the next book for Re-read Saturday. The poll remains open for another week. Currently Goldratt’s The Goal: A Process of Ongoing Improvement is topping the list, but just a few votes could change the book at the top of the list very quickly. The poll is republished at the bottom of this post.

Management guru Peter Drucker said, “There is nothing so useless as doing efficiently that which should not be done at all.” Two powerful types of techniques for identifying work that should not be done are process mapping, and baselining and benchmarking.

Process Mapping – A process map focuses on capturing and documenting the sequence of tasks and activities that comprise a process. A process map is generally constrained to a specific set of activities within a broader organization. Process mapping is useful at a tactical level, while other mapping techniques, like value chain mapping, are often more useful when taking an organizational view. Developing a process map (of any type) allows an analyst to review each step in the process to determine whether it adds value. Steps that do not add value should be evaluated for removal.

Baselining and Benchmarking – There are two typical approaches to benchmarking. The first is through measurement of the process to generate a baseline. Once a baseline is established, it can then be compared to another baseline to generate a benchmark. This type of benchmark is often called a quantitative benchmark. The second type of benchmark compares the steps and activities in a process to those of a process that yields a similar product. Comparisons to frameworks such as the TMMi, CMMI or Scrum are a form of process benchmarking.
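At its core, the quantitative flavor reduces to a very small calculation: measure a baseline, then compare it to a reference baseline. The sketch below illustrates the idea; the throughput figures and the `benchmark` helper are hypothetical, not real industry data.

```python
# Sketch of a quantitative benchmark: a measured baseline compared to a
# reference baseline. All figures are illustrative, not real industry data.

def benchmark(our_baseline: float, reference_baseline: float) -> float:
    """Return our performance relative to the reference (1.0 = parity)."""
    return our_baseline / reference_baseline

# Baseline: measured throughput of our process (function points per month).
our_throughput = 9.5        # hypothetical internal measurement
industry_throughput = 12.0  # hypothetical reference baseline

ratio = benchmark(our_throughput, industry_throughput)
print(f"Relative throughput: {ratio:.2f}")  # ~0.79, roughly 21% below the reference
```

The point of the ratio is only to locate yourself relative to the comparison point; interpreting *why* the gap exists is where the qualitative assessment comes in.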

The use of analytical techniques such as process mapping or benchmarking is important to ensure that opinions and organizational politics don’t outweigh processes or work steps that generate real value. Without analysis, it is easy to sit down with an individual or a team, ask them what should not be done, and get the wrong answer. Everyone has an opinion informed by his or her own experiences and biases. Unfortunately, just asking may identify a process or task that one person or team feels is not useful but that has value to the larger organization. For example, a number of years ago an organization I was working with had instituted a productivity and customer satisfaction measurement program. The software teams involved in the program saw the effort needed to measure their work as overhead. The unstated goal of the program was to gather the information needed to resist outsourcing the development jobs in the organization. The goal was not shared for fear of increasing turnover and of angering the CFO, who was pushing for outsourcing.

It would be difficult to argue that doing work that should not be done makes sense. However, determining “that which should not be done” is generally harder than walking up to a team and pointing to specific tasks. There is nothing wrong with asking individuals and teams involved in a process for their input, but the core of all process change needs to be gathering data to validate or negate opinions.

Re-read Saturday poll – vote for up to three books!

How do you calculate value?

IT value is an outcome that can be expressed as a transaction: a summation of debits and credits resulting in a number. Unfortunately, even if we can create a specific formula, the interpretation of the number is problematic. Measures of the economy of inputs, the efficiency of the transformations and the effectiveness of specific outputs are components of a value equation, but they only go so far. I would like to suggest that customer satisfaction makes interpretation of value possible.

Those who receive the service determine the value; therefore, value is specific to the project or service. For value to be predictable, you must assume that there is a relationship between how the product or service is created and the value perceived. When we are assessing the value delivered by an IT department that is part of a larger organization, we are rarely allowed the luxury of declaring the group we are appraising a black box. Because we can’t pretend not to care about what happens inside the box or process, we have to find a way to create transparency so that we can understand what is happening. For example, one method is to define the output of the organization or process. The output or product can be viewed as a synthesis of inputs, raw materials and process. Measuring the efficiency of the processes used in the transformation is a typical measure of value added. The product or output is only valuable if it meets the user’s needs and is fit for use. Knowing the details of the transformation process provides us with the knowledge needed to make changes. While this sounds complex, every small business has had to surmount this complexity to stay in business. A simple value example from the point of view of a restaurant owner follows.

  • A customer enters the restaurant and orders a medium-rare New York Strip steak (price $32.00).
  • The kitchen retrieves the steak from the cooler and cooks it so that it is done at the proper time and temperature. (The inputs include the requirement for the steak and the effort of the waiter and kitchen staff.)
  • The customer receives a medium-rare New York Strip steak.

From the restaurant owner’s point of view, the value equation begins as the price of the steak minus the cost of the steak, preparation, and overhead. If the cost of the steak and servicing the customer were more than the price charged, an accounting loss would result, and if the costs were less . . . an accounting profit. The simple calculation of profit and loss provides an important marker in understanding value, but it is not sufficient. For example, let’s say the customer was the restaurant reviewer for a large local newspaper, the owner comped the meal, AND the reviewer was happy with the meal. The owner would derive value from the transaction regardless of the accounting loss on that single transaction. As I noted earlier, customer satisfaction is a filter that allows us to interpret the transaction. Using our example, if the customer was unhappy with his or her steak, the value derived by the restaurant will be less than a totaling of the accounting debits and credits would predict. While a large IT department has many inputs and outputs, I believe the example presents a path for addressing value without getting lost in the complexity of technology.
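The steak transaction can be worked through numerically. The cost figure and the satisfaction “filter” below are simplifying assumptions for illustration; treating satisfaction as a multiplier on revenue is one possible way to model the filter, not the only one.

```python
# Worked version of the restaurant example. The $24.00 cost figure and the
# satisfaction multiplier are hypothetical assumptions for illustration.

def accounting_result(price: float, cost: float) -> float:
    """Profit (positive) or loss (negative) on a single transaction."""
    return price - cost

def perceived_value(price: float, cost: float, satisfaction: float) -> float:
    """Scale revenue by customer satisfaction (0.0 to 1.0) before costs."""
    return price * satisfaction - cost

steak_price = 32.00
steak_cost = 24.00  # steak + preparation + overhead (hypothetical)

print(accounting_result(steak_price, steak_cost))     # 8.0: an accounting profit
print(perceived_value(steak_price, steak_cost, 0.5))  # -8.0: an unhappy customer erodes value
```

The same accounting profit can represent very different outcomes once the satisfaction filter is applied, which is exactly why profit and loss alone is not a sufficient measure of value.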

In a perfect world, IT value would be established in a perfect marketplace. Customers would weigh the economy, efficiency, effectiveness and customer satisfaction they perceive they would garner from a specific development team to decide who should do their work. If team A down the street could do the work for less money and higher quality, or deliver it sooner, they would get the work. Unfortunately, perfect marketplaces seldom exist, and participants could easily leverage pricing strategies that internal organizations would not be able to match. Still, the idea of a project-level marketplace has merit, and benchmarking projects is a means of injecting external pressure that helps focus teams on customer satisfaction.

Measuring IT value, whether at a macro or project level, needs to be approached as more than a simple assessment of the processes that convert inputs into products or services that the business requires. Measure the inputs and raw materials, measure and appraise the processes used in transformation, and then determine the user’s perception of the output (where the customer and user are different, you need to understand both points of view). Knowing the value of all of these components while having your thumb on the filter of customer satisfaction will put you in a position not only to determine the value you are delivering (at least over the short term), but to predict how your customers are perceiving the value you are delivering. Remember, forewarned is forearmed.

Baseline, not base line…

Measuring a process generates a baseline. By contrast, a benchmark is a comparison of a baseline to another baseline. Benchmarks can compare baselines to other internal baselines or to external baselines. I am often asked whether it is possible to externally benchmark measures and metrics that have no industry definition or that are team specific. Without developing a common definition of the measure or metric so that the data is comparable, the answer is no. A valid baseline and benchmark require that the measure or metric being collected is defined and consistently collected by all parties using the benchmark.

Measures or metrics used in external benchmarks need to be based on published standards, or on standards agreed upon between the parties involved in the benchmark. Most examples of standards are obvious. For example, in the software field there is a myriad of standards that can be leveraged to define software metrics. Examples of standards groups include IEEE, ISO, IFPUG, COSMIC and OMG. Metrics defined by these standards can be externally benchmarked, and there are numerous sources of data. Measures without international standards require all parties to specifically define what is being measured. I recently ran across a simple example in which the definition of a month caused a lot of discussion. An organization compared function points per month (a simple throughput metric) to benchmark data they had purchased. The organization’s data was remarkably below the benchmark. The problem was that the benchmark used the common definition of a month (12 in a year) while their data used an internal definition based on a 13-period year. Either the benchmark data or their own data should have been adjusted to make them comparable.

Applying the defined metric consistently is also critical, and not always a given. For example, when discussing the cost of an IT project, understanding what is included is important for consistency. Project costs could include hardware, software development and changes, purchased software, management costs, project management costs, business participation costs, and the list could go on ad infinitum. Another example is the use of story points (a relative measure based on team perception): a team may well be able to apply the measure consistently because it is based on comparisons of perceptions, but using it outside the team would be at best valueless and at worst dangerous.

The data needed to create a baseline and for a benchmark comparison must be based on a common definition that is understood by all parties, or the results will generate misunderstandings. A common definition is only a step along the route to a valuable baseline or benchmark; the data collection must also be done on a consistent basis. It is one thing to agree upon a definition and another to have that definition consistently applied during data collection. Even metrics like IFPUG Function Points, which have a standard definition and rigorous training, can show up to a five percent variance between counters. Less rigorously defined and trained metrics are unknowns that require due diligence by anyone who uses them.

A Room With A View

It is easy to become enamored with the practices that have gotten you where you are today. Where you are today is fine for today, but without innovation you will probably be less happy with where you will be in the future. Take a look out of the metaphorical window at the world around you.

Benchmark your process, techniques or organization against someone better than yourself and see what you learn. Whether you benchmark formally or informally is less important than looking out the window. If you take the time to look out the window, you just might discover something extraordinary.

To paraphrase Edwin Starr, “Best practices, huh, what are they good for? Absolutely nothing. Say it again . . .”

Every organization wants to use best practices. How many organizations do you know that would stand up and say they want to use average practices? Therefore, a process with the moniker “best practice” has an allure that is hard to resist. The problem is that one organization’s best practice is another’s average process, even if they produce the same quality and quantity of output. Or, even worse, one organization’s best practice might be beyond another organization’s reach. A process reflects the overall organizational context. It is possible that adopting a new process wholesale could produce output faster or better, but without tailoring, the chances are more random than many consultants would suggest. For example, just buying a configuration management tool without changing how you do configuration management will be less effective than melding the tool with your processes. Tailoring will allow you to use the process based on the attributes of the current organizational context, such as the organization’s overall size or the capabilities of the people involved.

An example of one organization’s best practice that might not translate to all of its competitors is the sophisticated inventory-control system used at Walmart. Would Walmart’s system help a local grocery store (let’s call it Hometown Grocery)? Not likely; the overhead of the same system would be beyond Hometown’s IT capabilities and budget. However, if hundreds of Hometown Groceries banded together, the answer might be different (tailoring the process to the environmental context). Without tailoring to the context, the best practice for Walmart would not be a best practice for our small-town grocery.

The term best practice gets thrown around as if there were a dusty old tome full of magical incantations that will solve any crisis regardless of context (assuming you are a seventh-level mage). There are those who hold up the CMMI, ISO or Scrum and shout (usually on email lists) that theirs is the only way. Let’s begin by putting to rest the idea that there is a one-size-fits-all solution to every job. There isn’t, and there never was any such animal. Any individual process, practice or step that worked wonderfully in the company down the street will not work the same way for you, especially if you try to do it the same way they did. Software development and maintenance isn’t a chemical reaction, a Lego construct or even magic. Best practices, what are they good for? Fortunately a lot, if used correctly.

Best practices find their highest value as a tool for comparison, one that exposes the assumptions that have been used to build or evolve your own processes. That knowledge allows you to challenge how and why you are doing any specific step and provides an opportunity for change. How many companies have embraced the tenets of the Toyota Production System after benchmarking Toyota?

Adopting best practices without regard to your context may not yield the benefits advertised on the box. If you read the small print, you’d see a warning: use best practices only after reading all of the instructions and understanding your goals and your environment. This is not to say that exemplary practices should not be aggressively studied and translated into your organization. Ignoring new ideas because they did not grow out of your context is just as crazy as embracing best practices without understanding the context they were created in. Best practices make sense as an ideal, as a comparison that helps you understand your own organization, not as plug-compatible modules.