Efficiency and Beatty

I have been asking senior executives how they expect organizational goals to change after the shock of COVID-19. Mark Summers, Senior Director of Quality at Northwestern Mutual and Vice President of TMMi America, started his response with a single word: “efficiency.” Mr. Summers went on to say that efficiency meant doing the right things and doing them well. He concluded by suggesting that leaders would need to improve delivery by making it faster, more efficient, and more valuable. Efficiency is a measure of how much wasted effort there is in a process or system. The concept is highly charged in a profession that still views itself as more of a mixture of art and craftsmanship than of engineering practices. All value chains, and the processes they are made up of, must: (more…)


Efficiency is a measure of how much wasted effort there is in a process or system. A high-efficiency process has less waste. In mechanical terms, the simplest definition of efficiency is the ratio of the work done to create an output to the energy used to do it. When applied to IT projects, efficiency measures how staffing levels affect how much work can be done. The problem is that while efficiency is a simple concept, it requires a systems-thinking view of software development processes. As a result, it is difficult to measure directly. (more…)


Originally posted on the Software Process and Measurement Cast 57

Efficiency has a simple technical definition: the ratio of work done to the energy required to do that work. In the software development world, efficiency is rarely managed; rather, the discussion tends to focus on cost. Cost and efficiency are different; they are related, but they are not the same.
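The ratio above can be made concrete with a short sketch. The function and the figures below are hypothetical illustrations, not measurements from any real project:

```python
def efficiency(value_delivered: float, effort_expended: float) -> float:
    """Efficiency as the ratio of useful output to total effort consumed.

    Both arguments use the same notional unit, e.g. person-hours of
    value-adding work versus total person-hours spent."""
    if effort_expended <= 0:
        raise ValueError("effort must be positive")
    return value_delivered / effort_expended

# Hypothetical example: 60 hours of value-adding work out of 100 hours spent.
print(efficiency(60, 100))  # 0.6 -- the other 40% of the effort is waste
```

The same two numbers a team already tracks (value-adding effort and total effort) are enough to start the conversation; the hard part, as the rest of this essay argues, is deciding which effort counts as value-adding.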

Increasing the level of efficiency in a person, process or organization will require some kind of change. You do not get a different result by doing the same thing over and over. We know that in order to change efficiency, we need to change the process that transforms an input into a product, the environment the transformation occurs inside, the input (such as people or raw materials) or some combination of these areas.

While it might be too obvious for most people, the first place to look when you want to improve efficiency is the process. Examples include removing process dead wood, simplifying the flow, and adding tools and automation, whether through frameworks like CMMI or Agile, or through techniques like Six Sigma and lean. Unfortunately, process changes are not instantaneous, which causes many organizations to jump over the process improvement step and go right to cutting people. The thought process around the cutting-people option goes something like this:

We will cut some percentage of people because we have gotten ‘fat’. Those that are left will pick up the slack by working a little harder and by multitasking. They should just be happy to have jobs. Tasks that we just can’t cover anymore probably didn’t need to be done anyway.

Multitasking is the silver bullet of the 21st century. However, relying on multitasking steals efficiency. According to René Marois, a neuroscientist and director of the Human Information Processing Laboratory at Vanderbilt University, “trying to do two things at once can be disadvantageous.” Now I will readily admit that I have not been able to put aside multitasking. Humans do most of their multitasking in an unconscious mode (breathing, pumping blood, and other mostly autonomic tasks). I have tested conscious multitasking personally by trying to talk on the cell phone while driving; no one has died (although my wife has threatened divorce). Unfortunately, humans aren’t good at multitasking because we do not really multitask. What we typically do is fast switch: shuffle between tasks quickly, focusing on each slice for a brief period of time. It is during the switch and reorientation that we lose efficiency. A 2001 study by Joshua Rubinstein, David Meyer, and Jeffrey Evans on the brain’s executive control processes, published in the Journal of Experimental Psychology: Human Perception and Performance, found that, “at best, a person needs to be aware that multitasking causes inefficiency in brain function.”

Focus is required for efficiency. Doing one thing at a time, correctly, has an appeal both in terms of logic and science. We are faced with the issue that most workplace cultures do not seem to support focus with action. As evidence, I suggest you count the number of interrupters in your environment (cell phones, email, Twitter, instant messengers, etc.). Our work culture sends a strong message that you are not expected to be cut off from the information flow at any time. The ability to deal with continuous partial attention is a career success factor in many instances. Quiet time for concentration tends to happen outside of core hours or at home, when we are tired. The question we must ask is: when does the cost of interruptions and multitasking outweigh the benefit of staying connected? How can we construct processes or environments that allow connection and collaboration to happen while providing an atmosphere where focus is not the odd man out?

 

Knowing what should not be done is rarely this straightforward.


 

A quick reminder: I am running a poll to choose the next book for Re-read Saturday. The poll remains open for another week. Currently, Goldratt’s The Goal: A Process of Ongoing Improvement is topping the list, but just a few votes could change the book at the top very quickly. The poll is republished at the bottom of this post.

Management guru Peter Drucker said “There is nothing so useless as doing efficiently that which should not be done at all.” Two powerful types of techniques to identify work that should not be done are process mapping, and baselining and benchmarking.

Process Mapping – A process map focuses on capturing and documenting the sequence of tasks and activities that comprise a process. A process map is generally constrained to a specific set of activities within a broader organization. Process mapping is useful at a tactical level, while other mapping techniques, like value chain mapping, are often more useful when taking an organizational view. Developing a process map (of any type) allows an analyst to review each step in the process to determine whether it adds value. Steps that do not add value should be evaluated for removal.
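The review step can be sketched in a few lines. The process and the value-add judgments below are hypothetical examples, not a recommendation about any real process:

```python
from dataclasses import dataclass

@dataclass
class Step:
    name: str
    adds_value: bool  # judged during the process-map review

# Hypothetical mapped process with the reviewer's value-add judgments.
process = [
    Step("gather requirements", True),
    Step("triple-signature approval", False),
    Step("write code", True),
    Step("re-key status into second tracker", False),
    Step("test", True),
]

# Steps flagged as non-value-adding become candidates for removal.
candidates = [s.name for s in process if not s.adds_value]
print(candidates)  # ['triple-signature approval', 're-key status into second tracker']
```

The code is trivial by design; the analytical work is in the `adds_value` judgment for each step, which, as discussed below, should be validated with data rather than opinion.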

Baselining and Benchmarking – There are two typical approaches to benchmarking. The first is through measurement of the process to generate a baseline. Once a baseline is established, it can then be compared to another baseline to generate a benchmark. This type of benchmark is often called a quantitative benchmark. The second type of benchmark compares the steps and activities required in a process to those of a process that yields a similar product. Comparisons to frameworks such as TMMi, CMMI, or Scrum are a form of process benchmarking.
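A quantitative benchmark of the first type is just a comparison of two baselines. The throughput samples below are made up for illustration:

```python
from statistics import mean

# Hypothetical throughput samples (units of work per sprint), not real data.
our_samples = [12, 14, 11, 13]
peer_samples = [16, 15, 17, 16]

# Step 1: measure each process to establish its baseline.
our_baseline = mean(our_samples)    # 12.5
peer_baseline = mean(peer_samples)  # 16.0

# Step 2: compare the baselines to generate a quantitative benchmark.
benchmark_ratio = our_baseline / peer_baseline
print(f"{benchmark_ratio:.2f}")  # 0.78 -- below 1.0 suggests room to improve
```

The ratio only means something if both baselines measure the same unit of work the same way, which is why the metric definitions discussed later matter so much.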

The use of analytical techniques such as process mapping or benchmarking is important to ensure that opinions and organizational politics don’t override processes or work steps that generate real value. Without analysis, it is easy to sit down with an individual or a team, ask them what should not be done, and get the wrong answer. Everyone has an opinion informed by his or her own experiences and biases. Unfortunately, just asking may identify a process or task that one person or team feels is not useful but that has value to the larger organization. For example, a number of years ago, an organization I was working with had instituted a productivity and customer satisfaction measurement program. The software teams involved in the program saw the effort needed to measure their work as overhead. The unstated goal of the program was to gather the information needed to resist outsourcing the development jobs in the organization. The goal was not shared for fear of increasing turnover and of angering the CFO, who was pushing for outsourcing.

It would be difficult to argue that doing work that should not be done makes sense. However, determining “that which should not be done” is generally harder than walking up to a team and pointing to specific tasks. There is nothing wrong with asking individuals and teams involved in a process for their input, but the core of all process change needs to be gathering data to validate or negate opinions.

Re-read Saturday poll – vote for up to three books!

Measurement is no longer optional


Measurement is a topic that most IT practitioners would rather avoid discussing. Many practitioners feel that it is not possible to measure software, or that all that matters is whether the customer is satisfied. IT managers tend to have a point of view that incorporates customer satisfaction and at least a notional view of how efficiently they are spending their budget. When managers or other leaders do discuss the topic of measurement, their arguments for measurement tend to begin intellectually. Arguments begin with statements like, “we need to measure to ensure we meet a model like the CMMI,” or “we need to measure to build knowledge so that we can estimate.” While these are good reasons to measure, many measurement programs are driven by a more basic and powerful need: the need to demonstrate efficiency. The demand for IT continues to explode in every organization I speak with. The problem is that IT, whether developing, enhancing, or maintaining software, is expensive. Couple that expense with the cost to acquire and maintain hardware, and the budgets of some IT organizations become larger than those of moderately industrialized countries, and are still growing. The pressure that growing IT budgets put on the bottom line means that efficiency can no longer be a dirty word in IT organizations. If efficiency is, or is about to become, important, then organizational measurement can’t be far away.

Much of the effort in the development field over the past ten to twelve years has focused on effectiveness and customer satisfaction, as evidenced by the Agile movement. Efficiency has begun to creep back into the conversation under the auspices of lean. Measurement programs focused on customer satisfaction must now be refitted to discuss time-to-market (how much time is needed to get a unit of work to market), productivity (how much effort is needed to get a unit of work to market), cost efficiency (how much a unit of work costs to bring to market), and quality per unit of work. All of these metrics, and the measures they are based on, need to be comparable and combinable across the whole of the IT organization.
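Because all four metrics are ratios over the same unit of work, they are straightforward to compute once the underlying measures exist. The per-release figures below are invented for illustration:

```python
# Hypothetical per-release measures; the names and numbers are illustrative.
units_of_work = 25       # e.g. function points or story points delivered
calendar_days = 50
effort_hours = 1000
cost_dollars = 120_000
defects = 5

time_to_market = calendar_days / units_of_work  # days per unit of work
productivity = effort_hours / units_of_work     # hours per unit of work
cost_efficiency = cost_dollars / units_of_work  # dollars per unit of work
quality = defects / units_of_work               # defects per unit of work

print(time_to_market, productivity, cost_efficiency, quality)
# 2.0 40.0 4800.0 0.2
```

Normalizing everything per unit of work is what makes the figures comparable and combinable across teams, provided every team counts a "unit of work" the same way.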

A roadmap to develop a measurement program can be as straightforward as:

  • Defining Goals and Values

  • Developing Common Measures
  • Mapping the Linkage Between Goals and Common Measures
  • Identifying Measures and Metrics (Including Gap Analysis)
  • Validating Metrics to Needs, Goals and Values
  • Developing Metrics Definitions
  • Mapping Metrics Data Needs
  • Defining an Overall Dashboard
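The linkage and gap-analysis steps in the roadmap above can be sketched as a simple mapping. The goals and measures below are hypothetical placeholders:

```python
# Hypothetical goal-to-measure linkage map (the mapping step of the roadmap).
goal_measure_map = {
    "improve time-to-market": ["cycle time", "deployment frequency"],
    "control cost": ["cost per unit of work"],
    "protect quality": ["delivered defect density"],
    "demonstrate efficiency": [],  # no supporting measure yet
}

# Gap analysis (the identification step): goals with no supporting
# measure are the gaps the program must close.
gaps = [goal for goal, measures in goal_measure_map.items() if not measures]
print(gaps)  # ['demonstrate efficiency']
```

Keeping the linkage explicit makes the later validation step easier: any measure that maps to no goal, or any goal with no measure, stands out immediately.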

In 2014, many IT organization budgets are beginning to recover and grow; however, because of the huge backlog in demand for IT services and products, the need for IT departments to use their budgets efficiently is not going to change. What is going to change is the need for individual teams to solve their own measurement quandary, because all IT work needs to be combined, compared, and evaluated. Solid measurement programs have to balance efficiency, effectiveness, quality, and customer satisfaction. The process starts with understanding the goals of the organization and then ensuring that what gets measured demonstrates progress against those goals.



Audio Version:  SPaMCAST 167

To multitask or not to multitask, that is the question. Whether ’tis nobler in the mind to suffer the slings and arrows of mono-tasking or to take arms against a sea of troubles and by opposing do more…at least appear to do more. Is the discussion of mono-tasking versus multitasking a tempest in a teapot, a true productivity killer, or perhaps really a discussion of how we segment work? Depending on how you define the word, I believe it is the latter. The problem is that, like so many other words, multitasking conflates a number of concepts into one broader idea.

In my opinion, there are three common scenarios that get conflated into the term multitasking:

  1. Actively doing two or more things at once (breathing and talking).
  2. Actively doing one thing while passively doing another (writing this essay while listening to the radio).
  3. Switching between tasks, whether related, unrelated, or loosely coupled (rotating between reading a book and updating Facebook).

The Merriam-Webster dictionary formally defines multitasking[1] as “the performance of multiple tasks at one time.” This fits scenario one, and scenario two to a lesser extent, but definitely not scenario three.

In the workplace, true multitasking is rare. It is not that we humans can’t multitask; we can, even using the strictest application of the definition of the word. We are good at multitasking when the combination includes an autonomic task (like breathing, heartbeat, or sweating) and something more active, such as chewing gum, or when it includes accidents, such as the combination of talking on a cell phone while driving and running into the back of my car. The data shows that humans are generally not very good at true multitasking in the workplace. Linda Stone noted in the Huffington Post[2] that people tend to stop breathing while they answer email; she even named the malady email apnea. If you need more examples, just reflect on the data concerning cell phone usage and driving, or, if data doesn’t work for you, try rubbing your stomach and patting your head at the same time. Computers, on the other hand, are really good at multitasking, and no matter how many processors we have on our desktops, we have not crossed that chasm to become full cyborgs yet.

The second scenario termed multitasking is a bit more nuanced: actively performing a task while passively performing another. A classic example of this form of multitasking is reading a book while the radio plays in the background. In many cases this form of multitasking is an attempt to manipulate the work environment to aid focus. The question of whether background noise affects concentration has been studied often. In its January 2010 edition, Scientific American noted that background or low-level noise often disrupts people’s concentration[3]. Whether background noise helps or hurts concentration is probably a matter of brain wiring. During the Christmas holiday I observed my son-in-law, who can concentrate for hours but zones everything else out, and my youngest daughter, who requires multiple simultaneous inputs to get into a flow state. One for background noise and one against; maybe everyone is different. At any rate, the data suggests that even at as basic a level as having the radio on while reading, multitasking generally does not improve focus and efficiency. By the way, if you are one of those who require background noise to concentrate, I recommend good headphones.

 

The final scenario generally conflated with multitasking is switching between multiple tasks. This scenario is also known as fast switching or serial mono-tasking. Switching is in reality the juggling of resources to accomplish a set of tasks; at a macro level you might be multitasking, but at a micro level you are mono-tasking. The issue with this type of behavior is that juggling is not always easy, even if you are good at it. Inefficiencies are caused both by the queuing and scheduling of tasks and by the retooling that occurs when switching between them. During my research for this essay I found that, in the brain, juggling multiple tasks is performed by mental executive processes that manage the individual tasks and determine how, when, and with what priorities they get performed[4]. The executive process coordinates activities so that the right outcomes occur; an analogy for what is going on inside our minds is the air traffic control system, which makes sure planes get where they are going with a minimum of delay and without two planes trying to use the same spot in the sky at the same time (bad). The coordination of tasks requires a level of overhead; just think about coordinating schedules for shared project resources if you need proof. However, more significant inefficiencies occur when a person switches between tasks. Task-switching experiments have shown that the person switching tasks needs time and mental resources to reorient. The reorientation tax (the amount of effort you need to expend to switch tasks) goes up with task complexity, lack of familiarity with the next task, and the relative differences between the tasks. Research has shown that task queuing (lining tasks up in order of precedence), so that the person doing the work knows what is coming and/or can influence the order in which tasks are done, can reduce the impact of switching[5]. That reduction can be mediated by separable executive control processes that prepare systematically for transitions between successive tasks[6]. The issue with the fast-switching brand of multitasking is that in many cases the queuing of tasks is not as seamless as it should be, which creates wait states or multiple retooling situations, because work does not flow as cleanly as it is diagrammed on the Microsoft Project schedule (this is one of the reasons Reinertsen indicates that full allocation reduces efficiency). Please note I am not comparing this type of multitasking to taking breaks between tasks to clear the “buffers,” which has been shown to be valuable.
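A toy model makes the reorientation tax visible. All of the numbers below (task sizes, slice count, switch cost) are made up for illustration; real switch costs vary with the factors just described:

```python
def total_time(tasks: list[float], switch_tax: float, interleave: bool) -> float:
    """Toy model of the reorientation tax.

    tasks: hours of focused work each task needs.
    switch_tax: hours lost re-orienting on every task switch.
    interleave: if True, rotate through the tasks in small slices
    (fast switching); if False, finish each task before starting the next.
    """
    work = sum(tasks)
    if not interleave:
        switches = max(len(tasks) - 1, 0)        # one hand-off per task boundary
    else:
        slices_per_task = 4                      # arbitrary slice count
        switches = len(tasks) * slices_per_task - 1
    return work + switches * switch_tax

tasks = [8.0, 8.0, 8.0]  # three hypothetical one-day tasks
print(total_time(tasks, switch_tax=0.25, interleave=False))  # 24.5
print(total_time(tasks, switch_tax=0.25, interleave=True))   # 26.75
```

The work content is identical in both runs; the interleaved schedule loses over two extra hours purely to reorientation, and the gap widens as the switch tax or the number of slices grows.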

The data suggests that a mono-tasking environment that reduces interruptions is the most efficient work scenario[7]; however, the work environment is rich in interruptions. Further, according to Capers Jones, the information technology field has more named specialties than any other profession, which means that individuals are spread across more project teams so they can practice their specialty. Switching between projects incurs the switching tax we mentioned earlier. Switching between tasks and projects is firmly etched into the classic project management body of knowledge, which is based on 19th-century manufacturing thinking (it’s now the 21st century). We have even gone as far as building the scheduling of shared resources into our project management tools, which suggests that the problem will not go away in the near future. Today’s working environment leaves us with few options as methodologists. Our goal must be to avoid switching when possible, minimize the impact when we can’t, and then decide to live with what we can’t change.

 

Next . . . A plan to address at least part of the problem

 

 


[4] Choices, Choices: Limits of the Brain, Anthrostrategist Blog, August 28, 2011 (referenced December 17, 2011)

[5] Rubinstein, J. S., Meyer, D. E., & Evans, J. E. (2001). Executive Control of Cognitive Processes in Task Switching. Journal of Experimental Psychology: Human Perception and Performance, 27(4), 763-797.

 

[6] Rubinstein, J. S., Meyer, D. E., & Evans, J. E. (2001). Executive Control of Cognitive Processes in Task Switching. Journal of Experimental Psychology: Human Perception and Performance, 27(4), 763-797.

 

[7] Multitasking and Monotasking: The Effects of Mental Workload on Deferred Task Interruptions, Dario D. Salvucci and Peter Bogunovich, PDF, https://www.cs.drexel.edu/~salvucci/publications/Salvucci-CHI10.pdf, December 12, 2011, p1