We measure delivery time but do we measure the condition of what is delivered?

Mark Twain once said “There are lies, damn lies, and statistics.” The same numbers can be used to support many causes. Even though numbers are just numbers, they can be used to tell a story.

What you do with the messages developed from the metrics you collect is important in its own right. Messages become tools (or weapons) to motivate. Motivation can range from the positive (look how good you are doing) to the negative (look how bad you are doing) to the ultimatum (do better or else). Here we will discuss what happens when there is no message or when the message and data aren’t synchronized. (This is the third installment; see the whole series here.)

So You Have Numbers, Now Do Something:

“The key is that there is no point to taking measurements and deriving metrics if they aren’t part of some (planned) decision making process.”

— Jack Hoffman, Wolters Kluwer

Once data has been collected, analyzed, and translated into information, you have to do something with it. Given that this is a decision point, it is a place where people and processes can make Good Numbers Go Bad.

One possible problem occurs when no action is taken. When measures are collected and analyzed, actionable recommendations are created, and then nothing happens, a message is sent. Why go through the effort if the results are not going to be used? Some of the reasons that action isn’t taken include:

  • Poor measures;
  • Measuring the wrong things;
  • Failure to measure things you actually have control over;
  • The measurements don’t link to organizational goals, or
  • Simple inertia.

Make sure you understand why you are collecting the data. Are the metrics tied to organizational goals and targeted at changing behaviors, or are they being collected for less important reasons? The damage from inaction can impact the whole measurement program. Measurement is counterproductive when it costs more than the value it creates, and without action, the value equation cannot be positive. Without usage, good numbers are worse than bad numbers; they are meaningless.

There are times when the best action is no action (even though just moments ago I pointed out that inaction was poor practice). When the action planned to address a measure or metric is irrational, action is worse than inaction. Irrational responses make Good Numbers Go Bad. What is worse is that one irrational action precipitates another. It can cheapen the metrics and result in program abandonment. I recently heard of a cascade of poor decisions that began when an organization decided to compare the story point velocity of all of its teams (story points are a measure of size that is relative to the team that uses them): irrational behavior one. The comparisons were made public, causing at least two of the teams to inflate their story point estimates to increase their reported velocity: irrational behavior two. Measurement had become a game to be manipulated. Unfortunately, recognizing the difference between a rational and an irrational action is easier said than done. One solution is to implement a metrics oversight board (akin to an Engineering Process Group or SEPG). The board would act as advisers or mentors who oversee plans and activities. Oversight does not ensure that the metrics program won’t jump the shark[1], but it certainly makes it less likely.

Follow Up, Or The Lack Thereof:

Lack of action is a critical shortcoming into which many metrics programs fall, and lack of action and wrong actions are equally injurious. Doing nothing lets the imagination run wild, allowing anyone not in the know to believe they are in a situation in which Good Numbers Go Bad. Imagination in this case is your worst enemy. The prescription is simple: make sure everyone knows what is being done with the information. Incorporate usage scenarios into the metrics training. Training is a two-way street; using hands-on training scenarios will allow you to gauge reactions to specific metrics and to elicit usability information. In a perfect world, the training experience should be used to create a dialog, which will add further effectiveness to the process.

Lack of follow-through occurs fairly often in young metrics programs: measures are collected and analyzed, recommendations are generated, and then nothing happens. Put another way, someone forgets to follow through on the actions generated from all that data collection and analysis. Follow-through is about presenting the proper face to the organization by doing what you said you would do based on your read of the data. As with any of the other categories, action or inaction is a message about the perceived importance of the behavior being measured.

[1] Jumping the shark is a metaphor that was originally used to denote the tipping point at which something is deemed to have passed its peak. – http://en.wikipedia.org/wiki/Jump_the_shark

Measure Twice or Good Numbers Can Go Bad

Using Team Measures for Individuals:

Measurement is an intimate subject because it exposes the person being measured to praise or ridicule. Management many times will begin with a group- or team-level focus only to shift inexorably to a focus on the individual. The individual view is fraught with difficulties, such as gaming and conflict, which cause anti-team behavior and, in the long run, reduce quality, productivity and time-to-market (short-term gains, but long-term pain). Measurement must stay at the team level for measures that focus on the results of team behavior, and only evolve to individual measures when the measure relates to the output of an individual.

Don’t We Want To Be Average?:

Another classic mistake made with numbers is regression to the mean. Performance will tend to approximate the average performance demonstrated by the measures chosen. A method to address this mistake is to:

1. Select the metric based on the behavior you want to induce,

2. Set goals to incent movement in the proper direction and away from average.

The proper direction is never to be average in the long run.

It’s All About People:

It is difficult to ascribe a motive to a number; it is merely the tool of the person wielding it. Put a number in a corner and it will stay there minding its own business, not attracting attention or detracting from anyone else. Add a person and the scenario begins to change. The wielder, whether Lord Voldemort, Dumbledore or some other high lord of metrics, becomes the determining factor in how the number will be represented. Measures and metrics can be used for good or evil. Even when the measures are properly selected and match the organization’s culture, “badness” can still occur through poor usage (people). There is one theory of management that requires public punishment of perceived laggards (keelhaul the lubber) as a motivation technique. The theory is that punishment, or the fear of punishment, will lead to higher output. In real life, fire drills (the team running around like crazy to explain the numbers) are a more natural output, and they absorb time that could be used to create value for IT’s consumers. The fire drills and attempts to game the numbers reduce the value of the measurement specifically and of management more generally.

Are Report Cards A Silver Bullet?:

Report cards are a common tool used to present a point-in-time view of an organization, project or person. At their best, report cards are benign tools able to consolidate large quantities of data into a coherent story. However, creating coherence is not an easy feat. It requires a deft hand to build a balanced, comprehensive view that is integrated with what is really important to the firm and the entity being measured. Unfortunately, since this is difficult, often a stilted view is given, based on the data that is easily gathered or important only to a few parties. This stilted view is a surefire prescription for Good Numbers Going Bad. The solution is to build report cards based on input from all stakeholders. The report card needs to include flexibility in its reporting components so they can be tailored to include the relevant facts that are unique to a project or unit without disrupting the framework. Creating a common framework helps rein in out-of-control behavior by making it easy to compare performance (peer pressure and comparison being the major strengths of report cards).

Most of us were introduced to report cards during school. In most cases, they were thrust upon us on a periodic basis, the report card presenting a summary of the basic school accounting. While we did not get to choose the metrics, at least we understood the report card, and the performance it represented seemed to be linked to our efforts. Good Numbers Go Bad when corporate report cards are implemented using team-level metrics as a proxy for individual performance. As I noted above, balance is critical to elicit expected behavior, as is the application of metrics at the proper level of aggregation (teams to teams, people to people). Team metrics present information on how the whole team performed; unless the metrics are applied to the unit that controlled performance, they miss the mark.

The Beatings Will Continue Until . . .:

“One characteristic of a bad metrics program is to beat people up for reporting true performance.” — Miranda Mason, Accenture

Terms like “world-class” and “stretch” get used when setting goals. These types of goals are set to elicit over-performance from teams or individuals for a period of time. This thought process can cause inappropriate behaviors, in which the goal seekers act more like little children playing soccer (kick the ball and everyone chases it) rather than a coordinated unit. Everyone chases the ball rather than playing their position like a team. Goals that make you forget teamwork are a perfect example of when Good Numbers Go Bad. Good measurement programs challenge this method of goal setting. Do not be impressed when you hear quotes like “we like to put it out there and see who will make it.”

Goals are an important tool for organizations and can be used to shape behavior. Used correctly, both individual and team behaviors can be synchronized with the organization’s needs. However, when used incorrectly, high-pressure goals can create opportunities for unethical behavior. An example of unethical behavior I heard recently came from an organization that promoted people for staying at work late. The thinking was that working more hours would increase productivity. A manager would check in at approximately 8 PM every evening, ostensibly to do something in the office, but in reality to see who was there. Many people did nothing more than go out to dinner and then come back to work, or just read a newspaper until the appointed hour. When the manager checked, there were many people working away at their desks. I suspect that little additional time was applied to delivering value to the organization or its customers. The manager should have spent time determining the behavior (good and bad) that could be incentivized as he set the metrics and the goals for those metrics. Spending time on the psychology of measures will increase the likelihood that you will get what you want.


As noted earlier, the way numbers are used has a dramatic impact on whether long-term good will is extracted from the collection and use of metrics. The way numbers are used is set by the intersection of organizational policy and politics. The mere mention of the word politics connotes lascivious activities typically performed inside a demonic pentagram. However, all human interactions are political in nature: interactions are political, and organizations are collections of individuals. Many times the word “political” is code for a wide range of negative motives (usually attributed to the guy in the next cube) or a cover for the inability to act. When you are confronted with the line “we can’t challenge that, it is too political,” step back and ask what you are really being shown. Is it:

  • lack of power or will;
  • lack of understanding, or
  • lack of support?

Once you have identified the basis for the comment, you can build a strategic response.

Metrics, A Tool For Adding More Pressure?:

There are many types of pressure that can be exerted using metrics. Pressure is not necessarily a bad thing; rather, it is the intent of the pressure that defines whether the metric/pressure combination is good or bad. Good Numbers Go Bad when measurement pressure is used to incent behavior that is outside of ethical norms. Pressure to achieve specific metrics, rather than a more constructive goal, can create an environment where the focus is misplaced. School systems that have shifted from the goal of creating an educated community to the goal of passing specific tests are a good example. An IT example that was once described to me was an organization that measured productivity (with the single-metric problem described before) in which a project gamed its productivity performance by under-reporting effort (actually, they hid it in another project). As in the discussion of using single metrics, creating a balanced view that targets organizational goals and needs is the prescription. When a balanced approach is applied, pressure can be used to move the team or individual toward the organizational goals in a predictable (and ethical) manner.

“There are lies, damn lies and statistics.” – Mark Twain

If You Act (or Re-act) Irrationally, Bad Things Happen:

“Good numbers go bad when middle management dictates what the metrics program will report in order to improve or make a less than stellar project look better than it really is.” — RaeAnn Hamilton, TDS Telecom

Good numbers go bad when the reaction they cause is irrational. One example of how measures can be used to incent or create bad behavior is the ‘single measure syndrome’ (SMS). SMS occurs when it is decided from on high that a whole organization can be maximized based on a single metric, such as time-to-market. Measuring an organization, department or project on just a single metric might sound like a good idea, but life is more complicated. The use of a single metric, however impressive, might have unintended consequences. For example, one means of maximizing time-to-market might be to reduce quality (forget testing, fast is what counts). In this example, is the problem the behavior, the use of just one metric, or the metric itself? Arguably, the most rational behavior is to maximize the measure being focused on, so the problem would appear to be the behavior a single metric creates. This is basic human nature; it is just that it might not be the best idea for the organization. Think about what it is you want to incentivize. Is time-to-market the real goal in this case, or is something more balanced?

Patterns, Patterns Everywhere:

It is human to ascribe a meaning to data and then to act on that meaning (this is a cognitive bias). Measurement organizations use this basic premise to drive activity. It is the organizational psychology that has created the adage, “you get what you measure.” For example, in Agile projects using a burn-down chart, reporting remaining effort above the ideal line for two or three days is generally interpreted as a sign that the team needs to change behavior. The pattern acts as the trigger rather than a single observation. Knowing that numbers and actions are intertwined requires that behavioral implications be examined before numbers are deployed pell-mell.

Measure What You Think You Are Measuring:

When measures and metrics are linked to unrelated items, combined with the logical backing of studious people, the results will create ramifications that are at best interesting. For example, measuring productivity when you are interested in quality, or time-to-market when you are interested in customer satisfaction. Do they represent Good Numbers Gone Bad or merely chaos? In the long run, the ramifications of mismatches lead to poor decisions and abandoned metrics programs. Measuring toilet paper usage and relating it to productivity is a particularly absurd example, where the logic is that higher usage of toilet paper reflects longer working hours, which would result in more output (of some sort or another). While that example was created as a class exercise, it is possible to find similar examples in the wild. Less absurd mismatches include deciding that effort, or the cost of effort, is a direct proxy for productivity. Effort is an input to productivity; without an output such as software or widgets, it is only half of the equation. Using half of an equation as a tool might not create the results expected.

One Metric To Rule Them All?:

Not all metrics can be used for all projects. If you can’t easily answer the question “does this relate?” for each metric, the information generated through measurement and analysis will provide little or no value. Stratification is a requirement for analysis. The goal is to understand the differences between groups of work so that when you make the comparison, you can discern what is driving the difference (or even if there is a difference). Comparing package implementations, hardware-intensive projects or custom development is rational only if you understand that there will be differences and what those differences mean. Examples abound of organizations that have failed to stratify like projects into groups for comparison. Failing to take this simple precaution lets Good Numbers Go Bad.
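A minimal sketch of stratification, using hypothetical project data (the project names, types, and productivity figures are invented for illustration):

```python
from collections import defaultdict

# Hypothetical portfolio: comparing raw productivity across project
# types hides the fact that the types differ systematically.
projects = [
    {"name": "P1", "type": "package",  "productivity": 12.0},
    {"name": "P2", "type": "package",  "productivity": 11.5},
    {"name": "P3", "type": "custom",   "productivity": 6.0},
    {"name": "P4", "type": "custom",   "productivity": 6.8},
    {"name": "P5", "type": "hardware", "productivity": 3.9},
]

def stratified_means(rows, key="type", value="productivity"):
    """Group rows by `key` and average `value` within each group."""
    groups = defaultdict(list)
    for row in rows:
        groups[row[key]].append(row[value])
    return {k: sum(v) / len(v) for k, v in groups.items()}

overall = sum(p["productivity"] for p in projects) / len(projects)
by_type = stratified_means(projects)
# The overall mean (~8.0) describes no real group; the stratified
# means make the between-group differences visible.
```

Comparing a custom-development project against the overall mean would unfairly flag it as a laggard; comparing it against its own stratum asks the right question.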

They Are Everywhere, They Are Everywhere!:

There are many items that are very important to measure. Measurement can tell you the state of your IT practice while providing focus, so measurement is sometimes thought of as a silver bullet. Because it seems important to measure many activities within an IT organization, many measurement teams think measuring everything is important. Unfortunately, measuring what is really important is rarely easy or straightforward. When presented with obstacles, many metrics programs let Good Numbers Go Bad by measuring something, anything. “Quick, do something” is the attitude! When organizations slip into the “measure something” mode, oftentimes what gets measured is not related to the organization’s target behavior (the real needs). When measures are not related to the target behavior, it is easy to breed unexpected behaviors (not indeterminate or unpredictable, just not what was expected). For example, one organization determined that personal capability was a key metric: more capability would translate into higher productivity and quality. During research into the topic, it was determined that capability was too difficult or “touchy-feely” to measure directly. The organization decided that counting requirements was a rough proxy for systems capability, and if systems capability went up, it must be a reflection of personal capability. So, of course, they measured requirements. One unanticipated behavior was that the requirements became more granular (actually, more consistent), which created an appearance of increased capability that could not be sustained (or easily approved) after the initial baseline of the measure.

Garbage In, Garbage Out.

The famous saying goes “garbage in, garbage out.” The saying was a rallying cry during the quality movement of the ‘80s and ‘90s. However, if you don’t recognize the garbage going into the decision process (by an active failure to recognize it or by tacit agreement), the saying becomes “garbage in, gospel out.” In other words, Good Numbers Go Bad, very bad.

So What Do Those Numbers Mean?:

The simplest issue that causes good numbers to go bad is a poor understanding of the relationship between the numbers and the concepts they represent. This is akin to a failure in punctuation when writing: the intended meaning can be garbled. Take the concept of productivity as an example. In its simplest form, it is the relationship between outputs and inputs. In more complex versions of the concept, time and other variables can be added; however, the equation always represents the relationship of the amount of input required to create a unit of output. In some cases the equation gets shortened to just reflect the amount of input: productivity is equated to the amount of time worked. This is a breakdown in the understanding of the concept.
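The full concept versus the shortened version can be shown with a tiny worked example (the team names and numbers are hypothetical):

```python
# Productivity in its simplest form: the relationship of output
# produced to input consumed, never input alone.
def productivity(output_units, input_effort_hours):
    """Units of output per hour of effort."""
    return output_units / input_effort_hours

# Two hypothetical teams delivering the same output.
team_a = productivity(output_units=120, input_effort_hours=300)  # 0.4
team_b = productivity(output_units=120, input_effort_hours=400)  # 0.3

# The shortened "productivity = hours worked" version would rank
# team B higher simply because it consumed more input.
```

The broken shortcut rewards consuming input; the full ratio rewards converting input into output, which is what the concept actually means.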

Never Assume People Know What You Are Talking About:

“What many people fail to realize is that metrics need to be tracked over time and ANALYZED.”

— Iris Trout, Bloomberg

Many metrics programs fall into the trap of assuming, before deployment, that users and providers have a deep understanding of the measures being collected and how to use them. The assumption, perhaps, is that they are born with this knowledge. When this assumption is made, the training performed is at best akin to a “drive-by” knowledge transfer. When the parties involved with the metrics don’t have the proper knowledge, a wide range of negative behaviors results. These behaviors can range from making poor decisions (including deciding not to decide), to wasting effort on disabusing people of wrong-headed notions (the world is not flat, regardless of how users are interpreting the metrics), to the cacophony of whining created when people decide they are wasting time and effort looking at your metrics. The solution is measurement knowledge transfer and training. Knowledge transfer cannot occur by osmosis. Training needs to cover:

  • What the metrics are,
  • How they are used,
  • Why they were created (or needed), and
  • What to do with the numbers once you have them.

Simple, Simpler, Simplest:

“Keep it simple. Ensure that the measurement is meaningful to both process actors and managers.”

— S J Sanders, BOT International

It is rare for metrics programs to be afflicted with a lack of complexity. The opposite is nearly always true: overwhelming complexity causes a scenario where Good Numbers Go Bad, and it can affect metrics programs at any time. Sometimes overwhelming complexity occurs because practitioners believe in the bafflement factor (if the equations are hard, the results must be correct) or because of the Einstein factor (I know more than you, and my equations prove it). In either case, it is the user of the data that suffers.

Complexity is a double-edged sword. Simplistic metrics can have little statistical explanatory power; in other words, they do not provide significant information about why something has happened or will happen. The Dow Jones index is a relatively simple metric. The index tells you what happened at a specific moment in time, but not why (which would allow you to predict what might come next). It is not that the knowledge it conveys isn’t interesting, but it holds little information; it speaks to correlation, not causality. Complex metrics have a different problem. While they typically provide significant amounts of information, complex metrics are difficult to understand and therefore difficult to explain. A mismatch between need and complexity is a situation where Good Numbers Go Bad.

The goal of good metrics is to strike a balance between simplicity and complexity. This balance maximizes the value and power of the metrics program by keeping it accessible without being simplistic. One solution is to enlist a graphic designer to help design how the data will be presented, in order to ensure it is understandable and consumable. When complexity prevents a bridge to understanding from being built between the recipients and the information, Good Numbers Go Bad. Misunderstandings and misinterpretations distort the value and credibility of the measurement. The prescription is to simplify, simplify and then simplify some more. Don’t use multivariate equations if you can’t explain what the variables in the equation are and how they interact.

A side issue of complexity is that it can mask a lack of understanding by the metrics practitioners (the “I don’t know what is happening here, so I will baffle them with b—-it” approach). In situations where complexity seems to be the only path forward, the prescription may be education as the first step for all metrics practitioners and users.

Using Measurement to Change the World:

Measurement, like science, is the process by which we organize knowledge and information. Opinions and beliefs are replaced with theories that can be tested, and the ability to test provides a platform for building knowledge. Knowledge can only exist and be expanded if the processes that generate it are repeatable. Once a judgment is made, further testing can provide information on whether change is occurring and why.

You can’t just hope that mistakes will go away…

Mistakes come in many flavors: errors of commission and omission; calculation mistakes or errors in mathematics (wrong formulas, bad logic or just ignoring things like covariance); and just plain stupid mistakes. As a group, mistakes are the single biggest reason Good Numbers Go Bad. Mistakes by definition occur by accident and are not driven by direct animus. The grace and speed with which you recognize and recover from a mistake will determine the long-term prognosis of the practitioner and his or her program (assuming you don’t make the same mistake more than once or twice). Ignoring a mistake is bad practice; if you need to make a habit of brazening out the impact of mistakes, you should consider a new career, as you have lost the long-term battle over the message.

Collection Mistakes:

Collection mistakes are a category that covers a lot of ground, ranging from gathering the wrong data to erratic data collection. While collecting the wrong information can lead to many other kinds of mistakes, recognition of and recovery from collection errors, which lead to credibility issues, will be explored in depth in this section.

“In order to capture metrics, the procedures, guidelines, templates, and databases need to be in sync with the standard practices.”

— Donna Hook, Medco

Data collection errors typically represent errors of omission (data not collected); however, occasionally the wrong information is collected. Collecting the wrong data (or data you do not understand) will create situations where your analysis is wrong (garbage in), with the possibility that you won’t know it (gospel out). Someone will usually discover this error at the worst possible time, leading to profuse sweating and embarrassment. Gathering the wrong or incomplete data is a nontrivial mistake that makes good numbers go bad. However, what you do about it will say a lot about your program.

Begin by making sure you have specified the data to a level that allows you to ascertain that what you collect is correct. Auditing the collection process against the collection criteria periodically helps to ensure that you collect the correct data and collect it correctly. Create rules (or at least rules of thumb) that support validation; rules of thumb will help you to quickly interpret the data. Did you get the quantity of data you expected? Has the process capability apparently changed more than you would reasonably expect?

Erratic Collection:

Measures and metrics can be perceived to be so important that panicked phone calls are known to precede collection. Equally interesting are the long periods of silence that occur before the panic. Erratic data collection sends a message that the data (and therefore the results) is only as important as whoever goosed the caller (or slightly less important than whatever the caller was doing right before he or she called). Inconsistent collection leads to numerous problems, including rushed collection (after the call), mistakes, and an overall loss of face for the program (fire drills and metrics ought to be kept separate). Consistency spreads a better message of quiet importance.

Mathematical Mistakes:

“We accidentally used one number instead of a correct value. Now our stakeholders ask for a second source.”

— Rob Hoerr, Formerly Fidelity Information Services

“Mathematical mistakes happen! We are all human!” The excuses are an anthem, which means all measurement programs must take the time and effort to validate the equations they use. Equations must be mathematically and intellectually sound; inaction in the face of mistakes in the equations or results makes good numbers go bad. Neither your results nor your equations should be ingrained to the point of freezing your program into inaction when a mistake is found. The need to avoid math mistakes driven by not understanding the data places a lot of stress on the need to create measurement and metrics specifications. Once the specification, including data like a description, formulas and definitions, is created, it is easier to make sure you are measuring what you want and that you get the behavior you anticipate. The spec provides a tool to gauge the validity of the math, the validity of the presentation, and, by inference, the validity of the analysis.
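A measurement specification can be captured as data, so the description, formula, and validation rules travel with the metric. The sketch below is a hypothetical illustration: the class, the metric name, and the bounds are inventions, not part of the article.

```python
from dataclasses import dataclass

@dataclass
class MetricSpec:
    """A hypothetical measurement specification: description, formula,
    and sanity bounds kept together with the metric's name."""
    name: str
    description: str
    formula: str        # human-readable definition of the math
    unit: str
    valid_range: tuple  # (low, high) sanity bounds for results

    def validate(self, value: float) -> bool:
        """Return True if a computed result is within the sanity bounds."""
        low, high = self.valid_range
        return low <= value <= high

defect_density = MetricSpec(
    name="defect density",
    description="Delivered defects normalized by functional size",
    formula="defects / function_points",
    unit="defects per function point",
    valid_range=(0.0, 5.0),
)

defect_density.validate(0.8)   # a plausible result passes
defect_density.validate(42.0)  # an implausible result flags a likely math or data error
```

Checking every reported value against the spec's bounds is one cheap way to catch equation and data-entry mistakes before they freeze the program.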

Liars, Damn Liars and Statisticians:

Statistics has long been a staple of business schools, which instill the belief that numbers can prove anything. Numbers, however, require an understanding of the equations that flies in the face of this mentality. When simple relationships are ignored to make a point, good numbers go bad. Examples of questionable math include graphs with the same variable (in different forms) on both axes, presented with linear regression lines driven through them. The created covariance goes unrecognized, leaving the analysts speculating on what the line means without recognizing that the relationship is self-inflicted.

Developing a simple understanding of the concepts of covariance, r-squared values and standard error are easy steps toward sorting out basic conceptual errors. A corollary is that knowledge of statistics will not necessarily stop your other mistakes, like adding the wrong Excel cells together, but it can’t hurt. Always check your equations, check your statistics, and never fail to check the math!
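The self-inflicted relationship described above can be demonstrated with synthetic data (the variable names and numbers here are invented for illustration): regress a variable against a quantity derived from itself, and the fit looks impressive for free.

```python
import random

random.seed(1)
effort = [random.uniform(100, 1000) for _ in range(50)]  # the raw variable
noise = [random.uniform(-50, 50) for _ in range(50)]
derived = [e + n for e, n in zip(effort, noise)]         # "different form" of the same variable

def r_squared(x, y):
    """Coefficient of determination for a simple linear fit of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov * cov / (vx * vy)

# Plotting effort against a quantity built from effort guarantees a
# strong fit; the regression "discovers" a relationship we created.
r_squared(effort, derived)
```

The r-squared comes out near 1.0 even though no new information entered the analysis; the trend line only measures the covariance the analyst baked in by reusing the variable.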

If you build it…

One of the most tragic errors young metrics programs make is the Field of Dreams syndrome: measure it and they will find it useful. Questions surface such as: ‘Why isn’t anyone using our measures?’ or ‘Why isn’t anyone interested?’ Dashboards and reports are created, but no one cares. There are at least two underlying problems: insular vision and lack of validation.

The Field of Dreams syndrome begins with a metrics vision in a single person’s head (an executive or measurement guru). When this vision is translated into tables and charts without socialization and presented as a fully formed measurement program, that is problem number one. In some cases this is not an issue; the culture in some organizations is used to strong individual leaders driving their points of view into the organization. It becomes a problem when the lack of socialization turns into a communication problem. Potential users do not know how the metrics were created, where the data (and requests for that data) came from, what the metrics measure and, most importantly, what to do with them. Why is Joe measuring my performance based on his view of what is right? Regardless of how the syndrome expresses itself, at this point good numbers have gone bad.

Development, enhancement and support of software are complex activities. Rarely can one person grasp all of the nuances each presents. Good measures and metrics provide teams and managers with the information they need to make decisions about the direction of projects and teams. When is a project in trouble? When should a team enforce refactoring because of quality problems? Measures can provide information to help make those decisions. However, if we don’t collaboratively decide which measures are necessary, it is very easy to measure the wrong thing or to focus on a narrow view of the measurement.

If you can’t understand the message, communication failed.

The impact of measures and metrics depends on how closely they are linked to business goals and organizational strategy. The closer the link, the higher the probability that the users of the measures will generate information from which knowledge can be derived. Knowledge is a precursor to rational action, which can be translated into value. Metrics chosen and tailored by an organization must not only deliver information; they must trace directly to the organization’s goals to have value. Information does not equal value if the measure or metric does not link to the organization’s goals. For example, if the business’s goal is cost reduction, the measures and metrics must measure the reduction of the cost of production. Measuring other areas, such as quality or productivity, might be more appealing, but if the goal is cost reduction, measuring something else will not produce value. Identifying and ensuring that you are measuring the right things helps secure the value of measurement by supporting the ultimate business goals.

Realistically, business goals are rarely as cut and dried as reducing cost. Goals of market leaders (or aggressive newcomers) are typically targeted toward expanding opportunities (creating disruptive innovation) and/or sales. Linking metrics to these types of goals can be addressed at two levels. The first is a macro view, measuring output (innovations or sales). The second is at the micro level, measuring the critical steps that lead to innovation or sales. If you use the micro-level strategy, focus only on measuring the most critical steps in the process (most critical = a small number). Measuring too many items can lead to death by measurement, an affliction that can be avoided. The goal of measurement is actionable information, not drowning in numbers.

Measurement programs represent the scaffolding for the analysis and presentation of data. The basic goal of a measurement program is organizing data so it can be interpreted more easily. Frameworks bring structure and organization to an overloaded information worker, allowing them first to notice the data and then to extract its information content. The quality of the framework acts as a governor on the level and speed of information extraction. The choice then becomes not whether a framework is needed, but how efficient the structure and filters applied will be. A viable framework of structure and discipline ensures the links between measures, metrics and business goals. Combining a penchant for measurement with the lack of a framework, or an anti-structure belief system, will cause a large number of measurement problems. Problems such as:

• Message Messes
• Mistakes, Errors and the Like
• Lack of Understanding
• Lack of Use or Poor Usage

Any one of these issues, let alone some combination of the four, is a prescription for making “Good Numbers Go Bad”.

Three people all afflicted with continuous partial attention syndrome

All numbers begin their life as good and useful tools, until a combination of mistakes, misunderstandings, organizational politics and poor usage intersect, causing good numbers to go bad. One of the important roles in project management is to act as a steward of the numbers and a high priest of information. This week we will discuss the stark realities of how measures can go wrong, with suggestions on how to fix them.

We live in a world that is rich with information but supported by very little structure and few filters to help sort fact from fiction and the interesting from the truly important. The 21st century has created an environment in which we must pay attention to everything, an affliction known as Continuous Partial Attention (CPA), or risk irrelevance. Measures and metrics are tools to combat the symptoms of CPA, like being in a state of constant crisis. Like any tool, used incorrectly they make the situation worse. Tools like dashboards and balanced scorecards gather the disparate threads of organizational information into a concise package and act as a gravitational nexus for focus. That focus ensures progress toward organizational goals and needs. Creating and holding this type of focus requires measures that are relevant, predictive (providing foreknowledge of events) and broad enough to be useful in more than a single situation. Without any one of these attributes we can expect unanticipated results.

Defining what is important to the organization and what to measure is critically important.  When I first wrote this post (back in 2006), I wrote that this activity cannot be a democratic event, rather that management must take the lead or the metrics will be overly tactical and potentially counterproductive. However, my thinking on the matter has evolved over time. The wisdom of a diverse and knowledgeable team can supersede that of the individual. We will talk more about my mental shift later in the week.



Efficiency is a measure of how much wasted effort there is in a process or system; a high-efficiency process has less waste. In mechanical terms the simplest definition of efficiency is the ratio of useful work output to the energy put in (efficiency is never 100%). Applied to IT projects, efficiency measures how staffing levels affect how much work can be done. The problem is that, while efficiency is a simple concept, measuring it requires a systems-thinking view of software development processes. As a result, it is difficult to measure directly, which is why proxies are used for efficiency in many cases.

Most processes are subcomponents of larger processes. For example, unit testing is part of the broader coding process, and coding is part of the broader software development process. Improving the efficiency of unit testing by 10% will not equate to a 10% improvement in coding or a 10% improvement in project efficiency. For example, say I become more efficient at unit testing and can run more tests in the same time. Does running more unit tests mean that the organization gets more software? We would have to understand all of the constraints on the flow of work to know whether the efficiency improvement will translate into more delivered functionality. Without understanding how the processes fit together and how the value and work flow through the project, you can’t be sure any improvement will reach the bottom line. Systems thinking (a way of looking at the whole system) and tools like value-stream mapping (a technique to analyze the flow of materials and information) are useful for identifying the bottlenecks where efficiency improvements will actually affect the overall project.
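The constraint logic above can be sketched in a few lines of Python. The stage names and capacities are invented for illustration; the point is simply that a local improvement only reaches the bottom line when it happens at the bottleneck.

```python
# Hypothetical stage capacities (work items per week) for a simple
# three-stage delivery pipeline -- illustrative numbers, not real data.
def throughput(stages):
    """End-to-end flow is capped by the slowest stage (the bottleneck)."""
    return min(stages.values())

pipeline = {"coding": 12, "unit_testing": 20, "system_testing": 8}

before = throughput(pipeline)                 # limited by system_testing
pipeline["unit_testing"] *= 1.10              # a 10% unit-testing gain...
after_non_bottleneck = throughput(pipeline)   # ...delivers nothing extra

pipeline["system_testing"] *= 1.10            # the same 10% at the bottleneck...
after_bottleneck = throughput(pipeline)       # ...does reach delivery
print(before, after_non_bottleneck, after_bottleneck)
```

This is the value-stream mapping intuition in miniature: find where the flow is constrained before deciding where to spend improvement effort.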

Efficiency is difficult to measure in software development. The idea of efficiency in software development evolved from the concept developed in manufacturing, which focused on measuring mechanical efficiency. Technically, the efficiency of a project would be measured as the ratio of the effort the project should take in a perfect world to the effort actually required. Perfect effort is, at best, difficult to determine. Many organizations therefore measure proxies for efficiency, including productivity, time-to-market and cost per unit of work.
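As a sketch, the ratio definition and two of the common proxies look like this in Python (all of the numbers and units are hypothetical, chosen only to make the arithmetic concrete):

```python
def efficiency(ideal_effort_hours, actual_effort_hours):
    """Efficiency as ideal (perfect-world) effort over actual effort."""
    return ideal_effort_hours / actual_effort_hours

# Hypothetical project: 800 "perfect-world" hours vs. 1,000 hours spent.
print(f"efficiency: {efficiency(800, 1000):.0%}")

# Common proxies, since perfect-world effort is rarely knowable:
def productivity(output_units, effort_hours):
    return output_units / effort_hours      # e.g. function points per hour

def cost_per_unit(total_cost, output_units):
    return total_cost / output_units        # e.g. dollars per function point

print(productivity(120, 1000))              # output per hour of effort
print(cost_per_unit(90_000, 120))           # cost per delivered unit
```

The proxies are easy to compute, which is exactly why they get substituted for true efficiency; the numerator (ideal effort) of the real ratio is the part no one can observe.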

Efficiency, in terms of software development processes, generally means that we are avoiding waste. Waste in software development includes defects, waiting, overproduction, unused creativity and extra reviews, to name a few items. Because efficiency is hard to measure directly, the impact of process changes must be inferred from other metrics, which may themselves be affected by other changes, such as cost cutting. Avoiding wasted time in projects, and trying to measure the impact, is a laudable goal; every minute of wasted time could be applied to value-added work.


Effectiveness is an important concept in many business discussions; it is a term that features centrally when discussing the performance of processes, teams or organizations. The problem is that many people don’t know precisely what the word means. Wikipedia defines effectiveness as “the capability of producing a desired result.”[1] The Business Dictionary expands the definition a bit: “The degree to which objectives are achieved and the extent to which targeted problems are solved.”[2] I use a simpler definition when talking about IT or process improvement projects: effectiveness is the capability of a process or project to do the right thing when compared to the goals of the organization. There are two core questions that need to be reviewed: 1) what is the right thing? and 2) how do we measure effectiveness?


When we are interested in effectiveness, the focus shifts to understanding what the “right thing” is and then ensuring we track it. Software development personnel intuitively know that doing the “right thing” makes sense but is very hard. Classic waterfall projects (projects with phases such as analysis, design, construction and testing that are completed before moving to the next phase) create requirements documents to establish what the “right thing” is, then enforce reviews and sign-offs for feedback to stay on track. Agile projects build backlogs, involve product owners and perform sprint reviews to cyclically establish the “right thing” and to generate a feedback loop to stay on track. All projects want to deliver the “right thing” so that the organization can reach its goals.


Effectiveness in business is generally assessed through comparison to a business goal. When measuring effectiveness, the question you are answering is: did the work accomplish the goals of the project? It is difficult to measure the effectiveness of an IT project because the project is generally a component of a larger business product or program. Instead, we tend to focus on efficiency measures (cost, time-to-market, productivity and velocity). Note there is a tendency to turn efficiency measures into tactical project goals and then to declare the project effective when they are met. I suggest that the real goal of very few projects is to be on time or on budget. If the goal of the project is to help the organization deliver a new widget to market, the metric is whether the new widget was delivered to market when it was supposed to be. Focusing on the business goal of the project provides the basis for determining whether the project was effective. Another proxy for effectiveness is customer satisfaction: you can ask respondents whether they think the project delivered the “right thing.” Even then, it makes sense to go back to the business goals of the project and compare what was delivered to those goals; unresolved mismatches mean you were not as effective as possible.


A simple, workable definition of effectiveness for IT projects is the capability of a process or project to do the “right thing” when compared to the goals of the organization. “Effectiveness” excites me because it forces me to think about the bigger picture: the organizational goals the project or process is striving to support. Knowing that I am supporting the goals of the organization is motivational.





