Plans are your guide to where you want to go.

(Part of the Simple Checklist Series)

The simple Measurement Readiness Checklist will be useful for any major measurement initiative, but is tailored toward beginning a measurement program.  The checklist will provide a platform for evaluating and discussing whether you have the resources, plans and organizational attitudes needed to implement a new measurement program or support the program you currently have in place.

I have divided the checklist into three categories: resources (part 1 and 2), plans, and attitudes.  Each can be leveraged separately. However, using the three components will help you to focus on the big picture. We will address each component separately over the next several days.

Here we continue the checklist with the section on plans and planning. If you have not read the first two sections of the checklist, please take a moment to read them (Measurement Readiness Checklist: Resources Part 1 and Measurement Readiness Checklist: Resources Part 2).

Plans

Planning for the implementation or support of a measurement program can take many forms: classic planning documents, schedules, Kanban boards or even product backlogs. The exact structure of the plan is less germane here; what matters most is having an understanding of what needs to be done. Several plans are needed when changing an organization. The term “several” does not mandate many volumes of paper and schedules, but rather that the needs and required activities have been thought through and written down somewhere so everyone can understand what needs to be done. Transparency demands that the program goal is known and that the constraints on the program have been identified (in other words, capture the who, what, when, why and how to the level required).

Scale and Scoring

The plans category of the checklist contributes up to eighteen total points. Each component contributes up to 6 points (6, 3, 1, 0).

Organizational Change Plan

The Organizational Change Plan includes information on how the changes required to implement and/or support the measurement program will be communicated, marketed, reported, discussed, supported, trained and, if necessary, escalated. This level of planning needs to include tasks such as:

  • Develop activity/timeline calendar
  • Identify topics for newsletter articles
  • Create articles
  • Publish articles
  • Identify topics for education/awareness sessions
  • Schedule sessions
  • Conduct sessions

6 – A full change management plan has been developed, implemented and is being constantly monitored.

3 – An Organizational Change Plan is planned, but is yet to be developed.

1 – When created, the Organizational Change Plan will be referenced occasionally.

0 – No Organizational Change Plan has or will be created.

Support Note: Even when a program reaches the level of ongoing support, an overall organizational change and marketing plan is needed. Adding energy to keep the program moving and evolving is necessary, or entropy will set in. Any process improvement will tend to lose energy and regress unless it continually has energy added.

Backlog

The backlog records what needs to be changed, listed in prioritized order. The backlog should include all changes, issues and risks. The items in the backlog will be broken down into tasks. The format needs to match the corporate culture and can range from an Agile backlog or a Kanban board to a Microsoft Project schedule.

6 – A prioritized backlog exists and is constantly maintained.

3 – A prioritized backlog exists and is periodically maintained.

1 – A rough list of tasks and activities is kept on a whiteboard (but marked with a handwritten “do not erase” sign).

0 – No backlog or list of tasks exists.

Support Note:  Unless you have reached the level of heat death that entropy suggests will someday exist, there will always be a backlog of new measurement concepts to implement, update and maintain. The backlog needs to be continually reviewed, groomed and prioritized.
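To make the idea concrete, here is a minimal sketch of such a backlog as a data structure (Python; the field names and sample items are invented for illustration, not prescribed by the checklist):

```python
from dataclasses import dataclass, field

@dataclass
class BacklogItem:
    description: str
    kind: str                      # "change", "issue" or "risk"
    priority: int                  # lower number = higher priority
    tasks: list[str] = field(default_factory=list)  # the breakdown into tasks

backlog = [
    BacklogItem("Roll out defect-density reporting", "change", 2,
                ["define counting rules", "build report"]),
    BacklogItem("Teams distrust the effort data", "issue", 1,
                ["interview team leads", "revise collection form"]),
]

# The backlog is kept in prioritized order and continually re-groomed.
backlog.sort(key=lambda item: item.priority)
for item in backlog:
    print(item.priority, item.kind, item.description)
```

The same structure maps directly onto a Kanban board or a project schedule; only the presentation changes.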

Governance

Any measurement program requires resources, perseverance and political capital. In most corporations these types of requirements scream the need for oversight (governance is a friendly code word for the less friendly word oversight). Governance defines who decides which changes will be made, when changes will be made and who will pay for the changes. I strongly recommend that you decide how governance will be handled and write it down. Make sure all of your stakeholders are comfortable with how you will get their advice, counsel, budget and, in some cases, permission.

6 – A full-governance plan has been developed, implemented and is being constantly monitored.

3 – A governance plan is planned, but is yet to be developed.

1 – When created, the governance plan will be used to keep the process auditors off our back.

0 – Governance . . . who needs it!

Next  . . . Attitude. You have to have one and you have to manage that attitude to successfully lead and participate in organizational change.

Resources include the tools of the trade and every trade has tools.

The simple Measurement Readiness Checklist will be useful for any major measurement initiative, but is tailored toward beginning a measurement program.  The checklist will provide a platform for evaluating and discussing whether you have the resources, plans and organizational attitudes needed to implement a new measurement program or support the program you currently have in place.

I have divided the checklist into three categories: resources, plans, and attitudes.  Each can be leveraged separately. However, using the three components will help you to focus on the big picture. We will address each component separately over the next several days.

Here we continue the resources portion of the checklist:

Cash

Change costs money. Costs can include consultants, training, travel and the odd late-night pizza or two.

7 – A reasonable budget has been established and the implementation team can draw from the budget for planned expenditures.  Emergency funding can be attained to handle issues as they arise.

3 – A reasonable budget has been established and approved; however, access must be requested and justified for all expenditures.

1 – Any time that money is required funding must be requested and approved.

0 – Donations are sought in the organization’s lunchroom on a periodic basis (consider a PayPal donation button on your measurement team’s homepage).

Support Note:  Having the cash to support programs is just as important as having the cash to implement a measurement program.  Where the cash gets spent might be different for a support program.

Effort

Even if you have bales of cash, developing and implementing measurement processes will require effort. Effort will be required from many constituencies, including the process-improvement team, management and the teams being measured, just to name a few.

7 – A reasonable staffing plan has been established and the measurement program is the only project the assigned resources have been committed to.  Dedicated teams make sense for process improvement in software development.

4 – A reasonable staffing plan has been established and the measurement initiative is the highest priority for the assigned resources.

1 – All resources are matrixed to the measurement initiative and they are also assigned to other high priority projects.

0 – The program has access to all the effort needed after 5 PM and before 8 AM and during company holidays.

Support Note: Dedicated resources might be more important for a program in support mode than even for an implementation project. The issue is that we humans tend to be distracted by new projects, which means paying less attention to the projects in support mode.

Projects

Measurement requires having something to measure and then something to influence.  The organization needs to have a consistent flow of projects so that measurement becomes about the trends rather than about specific projects.

7 – Projects are constantly beginning and ending, which provides a platform for generating a continuous flow of information.

3 – There are numerous projects in the organization; however, they typically begin early in the year and end late in the year, or on some other periodic basis that makes data collection and reporting erratic.

1 – The organization does only a small number of projects every year, making trending difficult.

0 – The organization only does one large project every year.

Support Note:  Like dedicated teams, access to projects to measure is really important to ongoing measurement programs.  Being able to continually generate reports and presentations on the data will help keep the interest stoked.

Calendar Time

Calendar time is a resource that is as important as any other resource. Severe calendar constraints can lead to irrational or bet-the-farm behaviors, which increase risk. This is especially true in big bang implementations (which is a strong reason to avoid such implementations).

7 – The schedule for implementing the measurement program is in line with industry norms and includes time for tweaking the required processes before appraising.

3 – The schedule is realistic, but bare bones. Any problems could cause delay.

1 – Expectations have been set that will require a compressed schedule; however, delay will only be career limiting rather than having a critical impact on the business.

0 – The measurement program implementation is critical for the organization’s survival and is required on an extremely compressed schedule.

Support Note:  If you are using the checklist to find areas to improve the support of your measurement program, I would drop this question.

Further Note: Also, if you have rated this item a ‘0’, I would suggest that you have a very serious issue and should seek consulting support.

Next planning questions . . .

A measurement program is like building a wall. Make sure you have all your resources in place.

Part of the Simple Checklist Series (Resources Part 1)

Beginning or continuing a measurement program is never easy. Many times measurement programs begin because an organization or individual thinks measurement necessary for survival or to avoid pain. Measurement can be thought of as a balance between the effort to collect and report measurement data and the value gained from applying what is learned from that data. Measurement programs targeted only at the gathering and reporting side of the equation will languish in the long run. On the other side of the equation, measures need to be used in order to generate the value needed to eclipse the effort of collection and reporting. Everyone must be educated on how to use measurement data and then continually asked to use the data. Both sides of the equation are necessary.

The simple Measurement Readiness Checklist will be useful for any major measurement initiative, but is tailored toward beginning a measurement program. The checklist will provide a platform for evaluating and discussing whether you have the resources, plans and organizational attitudes needed to implement a new measurement program or support the program you currently have in place.

I have divided the checklist into three categories: resources, plans, and attitudes.  Each can be leveraged separately. However, using the three components will help you to focus on the big picture. We will address each component separately over the next several days.

Scoring

This checklist can be used as a tool to evaluate how well you have prepared for your measurement journey. The following questions are the evaluation criteria.  To use the checklist, answer each question with high, medium, low and not present (with one exception). Each question will contribute points toward the total.

Section and Question Weights:

Resources: Forty-two total points. Each component contributes up to 7 points (7, 3, 1, 0).

Plans: Eighteen total points. Each component contributes up to 6 points (6, 3, 1, 0).

Attitude: Forty total points. Each component contributes up to 8 points (8, 4, 2, 0).

Note that where support and implementation projects need to take different angles, we will point out the nuances.
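Since the arithmetic is simple, the roll-up can be sketched in a few lines of code. The sketch below (Python) assumes a “high” answer earns the top value on each scale, which matches the weights listed above; the sample answers are illustrative only.

```python
# Point scales per category, from the section and question weights above.
SCALES = {
    "resources": (7, 3, 1, 0),   # up to 42 points in total
    "plans": (6, 3, 1, 0),       # up to 18 points in total
    "attitude": (8, 4, 2, 0),    # up to 40 points in total
}
ANSWERS = ("high", "medium", "low", "not present")

def category_score(category: str, answers: list[str]) -> int:
    """Translate high/medium/low/not-present answers into points."""
    scale = SCALES[category]
    return sum(scale[ANSWERS.index(answer)] for answer in answers)

# Example: three plans components answered high, medium and not present.
print(category_score("plans", ["high", "medium", "not present"]))  # 6 + 3 + 0 = 9
```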

Resources

Resources are the raw materials that you will consume on your measurement journey. As with any journey, having both the correct resources and the correct amount of resources will make the journey easier. Just think of trying to canoe from New York to London for a meeting; the wrong resources can make the trip difficult.

Management Support: When initially implementing a measurement program, support from management is the most critical resource.  This is the time when measurement seems to be all effort, cost and bother.  Later, as value is derived, support can be less visible.  Note that the more management support you have across the whole IT structure, the easier it is to get a measurement program on its feet and keep it there.

Scoring

7 – Senior management is actively involved in guiding which measures and metrics are collected and how they are used.  Senior managers stop people in the hall to discuss progress in collecting and using measurement data. Discussion of progress is an agenda item at all management-staff meetings.

3 – Senior and middle managers attend formal measurement informational meetings and talk about the need to support the measurement initiative.

1 – A senior manager or two attended the kick-off meeting, then relocated en masse to Aruba, leaving the middle managers in charge.

0 – The measurement initiative is a grass-roots effort.

Support Note:  Whether you are answering from a support or implementation perspective does not matter.  Management support is important.

Change Specialist: Measurement is a form of organizational change that typically requires skills that are not generally found in an IT department. The skills needed to start and perpetuate a measurement program include sales, marketing and communication.

7 – An organizational change specialist has been assigned as a full time resource for the project.

3 – An organizational change specialist is available within the organization and works on many projects simultaneously. The specialist may or may not have experience with IT change programs.

1 – Someone on the team has helped craft an organizational change plan in the past.

0 – Organizational change websites are blocked and your best bet is buying a book on Amazon using your own cash.

Support Note: A change specialist is needed for ALL change programs regardless of whether we are discussing implementation or generating ongoing support.

Expertise: A deep understanding of measurement will be needed in a dynamic IT environment.  Experience is generally hard won. “Doing” it once generally does not provide enough expertise to allow the level of tailoring needed to deploy a measurement program in more than one environment. Do not be afraid to get a coach or mentor if this is a weakness.

7 – The leaders and team members working to implement and/or support the measurement program have been intimately involved in successfully implementing measurement in different environments.

3 – At least two team members have had substantial involvement in implementing a measurement program in the past, in a similar environment.

1 – Only one SME has been involved in a measurement program and that was in another environment.

0 – All of the team members have taken basic measurement classes and can spell measurement, assuming they can buy a vowel.

Support Note: You can never have a measurement program without someone who has measurement knowledge (or access to it).

We will finish the resource part of the checklist tomorrow.

How fast are you getting to where you’re going?

What is the difference between productivity and velocity? Productivity is the rate of production using a set of inputs for a defined period of time. In a typical IT organization, productivity gets simplified to the amount of output generated per unit of input. Function points per person-month is a typical expression of productivity. For an Agile team, productivity could very easily be expressed as the amount of output delivered per time box. Average productivity would be equivalent to the team’s capacity to deliver output. Velocity, on the other hand, is an Agile measure of how much work a team can do during a given iteration. Velocity is typically calculated as the average story points a team can complete. The two concepts are very similar; the most significant differences relate to how effort is accounted for and how size is defined.

The conventional calculation for IT productivity is:


Productivity = Units of work delivered ÷ Effort expended

Function points, use case points, story points or lines of code are typical size measures. Work in progress (incomplete units of work) and defective units generally do not count as “delivered.” Effort expended is the total effort for the time box being measured.

The typical calculation for velocity for a specific sprint is:

Velocity = Story points completed during the sprint ÷ One team-sprint of effort

Note, as a general rule, both metrics are an average.  One observation of performance may or may not be representative.

In both cases the denominator represents the team’s effort for a specific sprint; however, when using velocity, the unit of measure is the team rather than hours or months. Average velocity of a team assumes that the team’s size and composition are stable. This tends to be a stumbling block in many organizations that have not recognized the value of stable teams.
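A minimal sketch of both calculations (Python; the function names and sample numbers are illustrative, not from any measurement standard):

```python
def productivity(units_delivered: float, effort_person_months: float) -> float:
    """Output per unit of input, e.g. function points per person-month.
    Only completed, non-defective units count as delivered."""
    return units_delivered / effort_person_months

def average_velocity(story_points_per_sprint: list[int]) -> float:
    """Average story points a (stable) team completes per sprint; the
    denominator is team-sprints rather than hours or months."""
    return sum(story_points_per_sprint) / len(story_points_per_sprint)

print(productivity(120, 4))                # 30.0 function points per person-month
print(average_velocity([21, 25, 18, 24]))  # 22.0 story points per sprint
```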

The similarities between the two metrics can be summarized as:

  • Velocity and productivity measure the output a team delivers in a specific timeframe.
  • Both metrics can be used to reflect team capacity for stable teams.
  • Both measures only make sense when they reflect completed units of work.

The differences between the two metrics are more a reflection of the units of measure being used. Productivity generally uses measures that allow the data to be consolidated for organizational reporting, while velocity uses size measures, such as story points, that are team specific. A second difference is convention. Productivity is generally stated as the number of units of work per unit of effort (i.e. function points per person-month), while velocity is stated as an average rate (average story points per sprint). While there are differences, they are more a representation of the units of measure being used than of the ideas the metrics represent.

This guy is an idol called San Simon that, in exchange for cigarettes and booze, guarantees good fortune.

Beliefs:

Beliefs act as a powerful filter that can cause communication problems. Deep-seated beliefs force the believer into a difficult position when it comes to challenging the status quo. When change occurs without the ability to challenge the rationale for the change, it leads to confusion and possibly to conflict. This is a scenario where Good Numbers Go Bad. Beliefs aren’t necessarily based on mathematical or scientific fact. Once upon a time most people believed the world was flat, and this belief constrained behavior. For example, a senior executive I knew firmly believed that education and training were not related to improved capability in his organization. If the organization stopped supporting training based on that belief, the workforce would not stay current or gain new skills, potentially lowering productivity, innovation and capability and increasing the need to outsource work. Facts and the relationships between facts are abridged by beliefs. Metrics professionals must continually create awareness so that everyone in the metrics equation keeps an open, questioning mind to extract the full value from numbers.

Just Plain Wrong:

One of the final classes of communication errors occurs when the metrics team publishes the information gleaned from a chart, graph or single number and it is wrong. In my mind, the most frightening words are “my interpretation of this graph is that the earth is flat.” Misinterpretation can be caused by a number of problems, ranging from the education and knowledge of the interpreter to active misinterpretation or the deliberate spreading of misinformation (or that belief thing from the last section). Regardless of why the interpretation is wrong, damage is done. As soon as the misinterpretation is out there, the metrics program will be viewed as non-neutral and, potentially, biased. When measurement drives activity based on misinterpretation, the results can be erroneous business decisions with lasting implications. It can leave a bad taste in people’s mouths for a long time.

Zombie Hypothesis:

One of the worst errors made by humans is not publicly recognizing a mistake and trying to tough it out. The affliction can be encapsulated by the phrase “throwing good money after bad.” When applied to a metrics program, this affliction can lead to a scarcity of funds for metrics and SPI investment opportunities. Not facing up to your mistakes causes a scenario where Good Numbers Go Bad. The cost and effort needed to gather, analyze, report and react to the measures being collected will eclipse the value derived if you are living a lie. The Zombie Hypothesis is a variant of the Law of Crappy Process, which implies that the worst, most incorrect data will become the de facto standard (real or perceived) for your measurement program. When you find a problem, recognize it, fix the process(es) and definitions, then re-implement the measurement. The effectiveness and efficiency of the measurement program will be improved. More importantly, you will inhabit the moral high ground of knowing you are measuring the right thing in the right way.

Communication is not usually uni-directional.

One of the most tragic errors young metrics programs can make is the field of dreams syndrome: measure it and they will find it useful. Questions surface such as: ‘Why isn’t anyone using our measures?’ Or ‘Why isn’t anyone interested?’ Dashboards and reports are created and no one cares. There are at least two underlying problems: insular vision and lack of validation.

Field of Dreams

“A metric program is ineffective unless it is linked directly to a set of goals, mission or vision.”

— Michael Sanders, former CIO of Transamerica Life

The field of dreams syndrome begins with a metrics vision in a single person’s head (an executive or measurement guru). The vision is then translated into tables and charts without socialization and presented as a fully formed measurement program: problem number one. In some cases this is not a problem; the culture in some organizations is used to strong individual leaders driving their points of view into the organization. It becomes a problem when the lack of socialization translates into a communication problem. Potential users do not know how the metrics were created, where the data (and requests for that data) came from, what the metrics measure and, most importantly, what to do with them. Why is Joe measuring my performance based on his view of what is right? Regardless of how the syndrome expresses itself, at this point good numbers have gone bad.

Misinformation:

“It is of paramount importance for an organization to ensure that the proper decisions are made based upon the best (most accurate) data available.”

—David Herron, David Consulting Group

Misinformation can be caused by numerous situations, ranging from errors and misunderstandings to lack of knowledge. Any of these situations can cause a breakdown in communication. How you address misinformation once it’s found is an important topic. If misinformation is swept under the covers, Good Numbers will Go Bad. Directly addressing the underlying cause of misinformation is usually the correct answer. The corporate culture of some organizations can make it impossible to address the problem directly, making an indirect approach the only means of addressing and correcting the misinformation. In either case, you need to find a means of addressing the problem. Remember that an un-lanced boil festers. Like a boil, uncorrected misinformation will tend to fester and destroy the credibility of the metrics program, and perhaps of the staff that maintains it.

Silence:

Good Numbers Go Bad not only when data or information is wrong or miscommunicated; it also happens when the silence around the data collected is deafening. When data and information enter a black hole never to be seen again, it always causes a negative reaction. The first problem is that the effort to create the report data will quickly be questioned. This is an easy way to generate the wrong kind of attention during budget season. The second and more important point is that conspiracy theorists will assume that something is being hidden. When people believe something is being hidden, they will create their own story, and that story will rarely be kind to the metrics program. Transparency must be the central principle for any metrics program. Show the data; explain what is done with it, how it will be used and how it is being reported. Remember that show and tell really shouldn’t stop in kindergarten. Communication is the prescription for Good Numbers Gone Bad.

Monologues:

Late night television is the home of the monologue. Jay Leno and Jimmy Fallon use monologues to make us laugh. Their only feedback is the laugh track. The unidirectional flow of the information is an important feature of a monologue. Late night comedy and metrics presentations shouldn’t have this in common (albeit a bit of levity is probably a good thing). Most metrics reports and presentations are approached as if they were monologues rather than dialogs.

The monologue approach occurs for a number of reasons. The first is the confusion of volume with value. Metrics programs need to show value, and the two attributes are sometimes confused (the more the merrier isn’t the case here). When these concepts are confused, the goal of a metrics presentation seems to become showing every bit of data ever collected, crammed into charts (or slides), and then telling anyone who will listen what it all means (also known as death by slides). Focusing on volume chokes the ability to hold a dialog. Volume and value/quality are unrelated attributes. An old adage states, “a designer has achieved perfection not when there is nothing left to add, but when there is nothing left to take away.” (Read any or all of Edward Tufte’s books.) Design your presentation to evoke action by the recipient. Simplicity and minimalism are concepts that need to be used when designing your presentation tool. Show pictures, but have the data.

Once you have a tool to aid your communication, the next step is to use the tool to facilitate a dialog as the basis for creating understanding. A dialog provides a platform for the metrics team to affect the behavior of the organization and to absorb information about how work is being done. Wikis and blogs are means of creating this type of dialog.

Another way to combat monologue is to recognize that presentations and handouts are not the same thing. Presentations are structured to create dialogs; handouts are one-way vehicles, monologues.

Over communicating makes understanding difficult.

Mark Twain once said “There are lies, damn lies, and statistics.” The same numbers can be used to support many causes. Even though numbers are just numbers, they can be used to tell a story.

What you do with the messages developed from the metrics you collect is important in its own right. Messages become tools (or weapons) to motivate. Motivation can range from the positive (look how good you are doing) to the negative (look how bad you are doing) or the ultimatum (do better or else). Here we will discuss what happens when there is no message or where the message and data aren’t synchronized. (This is the final installment; see the whole series here.)

Over-Reporting:

“I believe regular customer review and involvement will significantly increase the chance that we will provide what our customer(s) want.”

— Mark Smith, Diebold

If a little information is good, then more is better, right? It is easy to fall into the trap of over-collecting data and creating too many reports. The goal of metrics should be to consolidate information rather than to contribute to the constant barrage of information. Overload is probably a far easier condition to recognize (addressing it is another story) than providing too little data. Linking information needs to business goals provides an anchor for determining what is required.

Strategies:

  • Overload – re-factor
  • Under-load – listen for requirements

It requires discipline to make business goals an anchor. Tactical goals are far easier to recognize and measure than loftier business goals. Periodic reviews of metrics and business goals should be integrated into the normal business planning cycle. This linkage will reinforce the strategic nature of the measures and metrics.

Know Your Audience:

Getting the level of detail right is not a simple task. Too much detail is as big an issue as too little, and every audience will have differing perceptions of what level of information they need. Without a doubt, the hardest part is to sort through how information will be presented in such a way as to have the flexibility to provide for most needs. Trying to address the whole audience with a single view of the data and analysis will make Good Numbers Go Bad. The use of support information (e.g. training, job aids) and tools (such as a wiki) is invaluable for providing basic knowledge and interactivity that does not need to be replicated in each graph, report or table. These types of tools allow users to view the data and information they want to consume. Presentation flexibility and interactivity are code words for metrics customer satisfaction.

Train Your Users:

Another thing that makes Good Numbers Go Bad when presenting data is misunderstanding the level of knowledge required to consume the material. In an ongoing program, there needs to be an assumption that the user community has absorbed a basic body of knowledge. This assumption, however, needs to be tempered with reality. Remember, what you might believe is basic may not be very basic at all. Even if it is basic, your audience may not have been exposed to it before. Test your assumptions continually. When users change, provide refresher training or mentoring so that you do not have to include basic explanations with every analysis (respect the time of the majority of your users). Capture your own body of metrics knowledge or reference an outside body of knowledge (such as IFPUG) to provide a safety net that supports the validation and re-validation of the data and needs.

Over-Collection:

There is a tendency to collect more data than is needed. It is called data hoarding (as if data will be scarcer in the future). This data is then used to flood the market with measures, metrics and graphs. The proliferation of data creates an atmosphere that stifles usage under the weight of too much information, making Good Numbers Go Bad.

When viewed from the perspective of the measurement team, having lots of data creates an atmosphere where answers are pursuing questions. People will look for and find patterns/answers in the data (real or imagined). This causes a situation where the perceived answers may or may not reflect the questions that really need to be asked — a world that is “answer rich, question poor.” Viewed from the perspective of the metrics consumer, too much data and analysis feeds corporate ADD by diffusing rather than sharpening the focus.

The prescription begins by collecting only the data you will put to use in creating actionable information. Frameworks like Goal, Question and Metric (GQM) or Practical Software Measurement (PSM) provide a means to link data collection to the questions generated by business goals. Using business goals to generate the questions (the questions then generate the metrics to be collected) will enhance the early success of your metrics program and substantially reduce the effort needed to start and sustain the program. The second part of the prescription is to create a filter allowing users to recognize which pieces of the masses of data actually have value for them (training, job aids and collaboration tools fit this bill).
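As a minimal sketch of that GQM-style linkage, the structure below (Python; the goal, questions and metric names are invented for illustration) only admits metrics that trace back through a question to a business goal:

```python
# Hypothetical GQM tree: every metric must trace back through a
# question to a business goal, or it does not get collected.
gqm = {
    "goal": "Reduce time-to-market without sacrificing quality",
    "questions": {
        "How long does a feature take from commit to release?": [
            "cycle time (days)",
        ],
        "Are we trading speed for quality?": [
            "escaped defects per release",
            "test coverage (%)",
        ],
    },
}

def approved_metrics(tree: dict) -> set[str]:
    """Only metrics that answer a goal-derived question get collected."""
    return {metric for metrics in tree["questions"].values() for metric in metrics}

print(sorted(approved_metrics(gqm)))
```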

The right amount of data collection balances the cost of collection and analysis with the information that can be extracted. The major issue with over-collection is not the burden on metrics personnel, but rather the burden placed on project or application-support personnel. Developers are focused on developing code as efficiently as possible. They will fight anything that impacts that efficiency. This is true whether there is an automated data collection process or not. This burden takes time away from adding value. Burdening projects and developers is a situation in which Good Numbers Go Bad. The way to solve this conundrum is to make sure that any changes to the development process not only provide value to the organization but also benefit the developers. This requires careful design and an understanding of the development process. Include developer and project personnel when designing metrics and the collection process.

Communicate Results Or Die!

When data enters the metrics machine never to reappear, a powerful message is sent. Just as when no action is taken with the data, not communicating the results of the data collection signals that the data has no value. Good Numbers Go Bad when no one knows what they are being used for. At a minimum, you need to explain why those involved with creating and collecting the data do not see any results.

The best solution is to ensure that everyone in the metrics food chain gets feedback. The data collected at a project level should be used to provide performance feedback to project team leaders. Note that some information is collected for legal or organizational reasons and is not useful as a feedback mechanism for projects. Best practices for this process include metrics that provide information on how to improve performance. Analysis will indicate whether the performance is positive (a special case in which every effort should be made to solicit information on how the specific performance was accomplished).

Summary

Good numbers do not go bad all by themselves. It takes the active participation of people who fail in planning, have knowledge deficits and make mistakes. In the short term it might be easier to let your metrics program run wild; however, in the long run planning and discipline are the only ways to maximize the value of your metrics investment. If you do not manage your metrics program, you may wake up late one night and find your numbers featured in a Good Numbers Go Bad infomercial.

We measure delivery time but do we measure the condition of what is delivered?

Mark Twain once said “There are lies, damn lies, and statistics.” The same numbers can be used to support many causes. Even though numbers are just numbers, they can be used to tell a story.

What you do with the messages developed from the metrics you collect is important in its own right. Messages become tools (or weapons) to motivate. Motivation can range from the positive (look how good you are doing) to the negative (look how bad you are doing) or the ultimatum (do better or else). Here we will discuss what happens when there is no message or where the message and data aren’t synchronized. (This is the third installment, see the whole series here.)

So You Have Numbers, Now Do Something:

“The key is that there is no point to taking measurements and deriving metrics if they aren’t part of some (planned) decision making process.”

— Jack Hoffman, Wolters Kluwer

Once data has been collected, analyzed and translated into information, you have to do something. Given that this is a decision point, it is a place where people and processes can make Good Numbers Go Bad.

One possible problem occurs when no action is taken. When measures are collected and analyzed, actionable recommendations are created, and then nothing happens, a message is sent. Why go through the effort if it is not going to be used? Some of the reasons that action isn’t taken include:

  • Poor measures;
  • Measuring the wrong things;
  • Failure to measure things you actually have control over;
  • The measurements don’t link to organizational goals, or
  • Simple inertia.

Make sure you understand why you are collecting the data. Are the metrics tied to the organizational goals, targeted at changing behaviors, or are they being done for less important reasons? The damage can impact the whole measurement program. Measurement is counterproductive when it costs more than the value it creates. Without action, the value equation cannot be positive. Without usage, good numbers are less than bad – they are meaningless.

There are times when the best action is no action (even though just moments ago I pointed out that inaction is poor practice). When the action planned to address a measure or metric is irrational, action is worse than inaction. Irrational responses make Good Numbers Go Bad. What is worse is that one irrational action precipitates another. It can cheapen the metrics and result in program abandonment. I recently heard of an example of a cascade of poor decisions that began with an organization deciding to compare the story point velocity of all teams (story points are a measure of size that is relative to the team that uses them) – irrational behavior one. The comparisons were made public, causing at least two of the teams to inflate their estimates of story points to increase the reported velocity – irrational behavior two. Measurement had become a game to be manipulated. Unfortunately, recognizing the difference between a rational and an irrational action is easier said than done. One solution is to implement a metrics oversight board (akin to an Engineering Process Group or SEPG). The board would act as advisers or mentors who can oversee plans and activities. Oversight does not ensure that the metrics program won’t jump the shark[1], but it certainly makes it less likely.

Follow Up, Or The Lack Thereof:

Lack of action is a critical shortcoming into which many metrics programs fall. Lack of action and wrong actions are equally injurious. Doing nothing allows the imagination to run wild, allowing anyone not in the know to believe they are in a situation in which Good Numbers Go Bad. Imagination in this case is your worst enemy. The prescription is simple: make sure everyone knows what is being done with the information. Incorporate usage scenarios into the metrics training. Training is a two-way street; using hands-on training scenarios will allow you to gauge reactions to specific metrics and to elicit usability information. In a perfect world the training experience should be used to create a dialog, which will add further effectiveness to the process.

A lack of follow-through occurs fairly often in young metrics programs: measures are collected and analyzed, recommendations are generated, and then nothing happens. Put another way, someone forgets to follow through on the actions generated from all that data collection and analysis. Follow-through is about presenting the proper face to the organization: doing what you said you would do based on the read of the data. As with any of the other categories, action or inaction is a message about the perceived importance of the behavior being measured.


[1] Jumping the shark is a metaphor that was originally used to denote the tipping point at which something is deemed to have passed its peak. – http://en.wikipedia.org/wiki/Jump_the_shark

Measure Twice or Good Numbers Can Go Bad

Mark Twain once said “There are lies, damn lies, and statistics.” The same numbers can be used to support many causes. Even though numbers are just numbers, they can be used to tell a story.

What you do with the messages developed from the metrics you collect is important in its own right. Messages become tools (or weapons) to motivate. Motivation can range from the positive (look how good you are doing) to the negative (look how bad you are doing) or the ultimatum (do better or else). Here we will discuss what happens when there is no message or where the message and data aren’t synchronized (part two!)

Using Team Measures for Individuals:

Measurement is an intimate subject because it exposes the person being measured to praise or ridicule. Management many times will begin with a group or team-level focus only to shift inexorably to a focus on the individual. The individual view is fraught with difficulties, such as gaming and conflict, which typically cause anti-team behavior that will, in the long run, reduce quality, productivity and time-to-market (short-term gains, but long-term pain). Measurement must stay at the team level for measures that reflect the results of team behavior, and only evolve to individual measures when the measure relates to the output of an individual.

Don’t We Want To Be Average?:

Another classic mistake made with numbers is regression to the mean. Performance will tend to approximate the average performance demonstrated by the measures chosen. A method to address this mistake is to:

1. Select the metric based on the behavior you want to induce,

2. Set goals to incent movement in the proper direction and away from average.

The proper direction is never to be average in the long run.

It’s All About People:

It is difficult to ascribe a motive to a number. It is merely the tool of the person wielding it. Put a number in a corner and it will stay there minding its own business, not attracting attention or detracting from anyone else. Add a person and the scenario begins to change. The wielder, whether Lord Voldemort, Dumbledore or some other high lord of metrics, becomes the determining factor in how the number will be represented. Measures and metrics can be used for good or evil. Even when the measures are properly selected and match the organization’s culture, “badness” can still occur through poor usage (people). There is one theory of management that requires public punishment of perceived laggards (keelhaul the lubber) as a motivation technique. The theory is that punishment or the fear of punishment will lead to higher output. In real life, fire drills (the team running around like crazy to explain the numbers) are a more natural output, absorbing time that could be used to create value for IT’s consumers. The fire drills and attempts to game the numbers reduce the value of the measurement specifically and of management more generally.

Are Report Cards A Silver Bullet?:

Report cards are a common tool used to present a point-in-time view of an organization, project or person. At their best, report cards are benign tools able to consolidate large quantities of data into a coherent story. However, creating coherence is not an easy feat. It requires a deft hand that allows a balanced, comprehensive view integrated with what is really important to the firm and the entity being measured. Unfortunately, since this is so difficult, a stilted view is often given based on the data that is easily gathered or important only to a few parties. This stilted view is a surefire prescription for Good Numbers Going Bad. The solution is to build report cards based on input from all stakeholders. The report card needs to include flexibility in the reporting components so they can be tailored to include the relevant facts that are unique to a project or unit without disrupting its framework. Creating a common framework helps rein in out-of-control behavior by making it easy to compare performance (peer pressure and comparison being the major strengths of report cards).

Most of us were introduced to report cards during school. In most cases, they were thrust upon us on a periodic basis, the report card presenting a summary of the basic school accounting. While we did not get to choose the metrics, at least we understood the report card, and the performance it represented seemed to be linked to our efforts. Good Numbers Go Bad when corporate report cards are implemented using team-level metrics as a proxy for individual performance. As I noted above, balance is critical to elicit expected behavior, as is the application of metrics at the proper level of aggregation (teams to teams, people to people). Team metrics present information on how the whole team performed; unless the metrics are applied to the unit that controlled performance, they miss the mark.

The Beatings Will Continue Until . . .:

“One characteristic of a bad metrics program is to beat people up for reporting true performance.” — Miranda Mason, Accenture

Terms like “world-class” and “stretch” get used when setting goals. These types of goals are set to get teams or individuals to over-perform for a period of time. This thought process can cause inappropriate behaviors, in which the goal seekers act more like little children playing soccer (kick the ball and everyone chases the ball) rather than a coordinated unit. Everyone chases the ball rather than playing their position like a team. Goals that make you forget teamwork are a perfect example of when Good Numbers Go Bad. Good measurement programs challenge this method of goal setting. Do not be impressed when you hear quotes like “we like to put it out there and see who will make it.”

Goals are an important tool for organizations and can be used to shape behavior. Used correctly, both individual and team behaviors can be synchronized with the organization’s needs. However, when used incorrectly, high-pressure goals can create opportunities for unethical behavior. An example of unethical behavior I heard about recently was in an organization that promoted people for staying at work late. The thinking was that working more hours would increase productivity. A manager would check in at approximately 8 PM every evening, ostensibly to do something in the office, but in reality to see who was there. Many people did nothing more than go out to dinner and then come back to work, or just read a newspaper until the appointed hour. When the manager checked, there were many people working away at their desks. I suspect that little additional time was applied to delivering value to the organization or its customers. The manager should have spent time determining the behavior (good and bad) that could be incentivized as he set metrics and the goals for those metrics. Spending time on the psychology of measures will increase the likelihood that you will get what you want.

Politics:

As noted earlier, the way numbers are used has a dramatic impact on whether long-term good will is extracted from the use and collection of metrics. The way numbers are used is set by the intersection of organizational policy and politics. The mere mention of the word politics connotes lascivious activities typically performed inside a demonic pentagram. However, all human interactions are political in nature; organizations are collections of individuals. Many times the use of the word “political” is code for a wide range of negative motives (usually attributed to the guy in the next cube) or a way to hide the inability to act. When you are confronted with the line “we can’t challenge that, it is too political,” step back and ask what you are really being shown. Is it:

  • lack of power or will;
  • lack of understanding, or
  • lack of support?

Once you have identified the basis for the comment, you can build a strategic response.

Metrics, A Tool For Adding More Pressure?:

There are many types of pressure that can be exerted using metrics. Pressure is not necessarily a bad thing. Rather, it is the intent of the pressure that is the defining attribute of whether the metric/pressure combination is good or bad. Good Numbers Go Bad when measurement pressure is used to incent behavior that is outside of ethical norms. Pressure to achieve specific metrics, rather than a more constructive goal, can create an environment where the focus is misplaced. School systems that have shifted from the goal of creating an educated community to the goal of passing specific tests are a good example. An IT example once described to me was an organization that measured productivity (with the single-metric problem described before) in which a project gamed its productivity performance by under-reporting effort (actually, they hid it in another project). As in the discussion of using single metrics, creating a balanced view that targets organizational goals and needs is the prescription. When a balanced approach is applied, pressure can be applied to move the team or individual toward the organizational goals in a predictable (and ethical) manner.

Measure Twice or Good Numbers Can Go Bad
"Lies, Damn Lies and Statistics"

“There are lies, damn lies and statistics” – Mark Twain

Mark Twain once said “There are lies, damn lies, and statistics.” We know that the same numbers can be used to support many, often opposing, causes. Even though numbers are just numbers, they can be used to tell a story.

What you do with the messages developed from the metrics you collect is important in its own right. Messages become tools (or weapons) to motivate. Motivation can range from the positive (look how good you are doing) to the negative (look how bad you are doing) or the ultimatum (do better or else). Here we will discuss what happens when there is no message or where the message and data aren’t synchronized.

If You Act (or Re-act) Irrationally, Bad Things Happen:

“Good numbers go bad when middle management dictates what the metrics program will report in order to improve or make a less than stellar project look better than it really is.” — RaeAnn Hamilton, TDS Telecom

Good numbers go bad when the reaction they cause is irrational. One example of how measures can be used to incent or create bad behavior is the ‘single measure syndrome’ (SMS). SMS occurs when it is decided from on high that a whole organization can be maximized based on a single metric, such as time-to-market. Measuring an organization, department or project on just a single metric might sound like a good idea, but life is more complicated. The use of a single metric, however impressive, might have unintended consequences. For example, one means of maximizing time-to-market might be to reduce quality (forget testing, fast is what counts). In this example, is the problem the behavior, the use of just one metric, or the metric itself? Arguably the most rational behavior would be to maximize the measure being focused on; therefore, the problem would appear to be the behavior a single metric creates. This is basic human nature. It is just that it might not be the best idea for the organization. Think about what it is you want to incentivize. Is time-to-market the real goal in this case, or is something more balanced?

Patterns, Patterns Everywhere:

It is human to ascribe a meaning to data and then to act on that meaning (this is a cognitive bias). Measurement organizations use this basic premise to drive activity. It is this organizational psychology that has created the adage, “you get what you measure.” For example, in Agile projects using a burn-down chart, reporting remaining effort above the ideal line for two or three days is generally interpreted as a sign that the team needs to change behavior. The pattern acts as the trigger rather than a single observation. Knowing that numbers and actions are intertwined requires that behavioral implications be examined before numbers are deployed pell-mell or ASAP.
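As a minimal sketch of triggering on a pattern rather than a single observation, the check below (Python; the day-by-day numbers and the three-day threshold are illustrative, not a standard) flags a team only when remaining effort has been above the ideal burn-down line for several consecutive days:

```python
def needs_attention(remaining: list[float], ideal: list[float],
                    consecutive_days: int = 3) -> bool:
    """Trigger on a pattern, not a single observation: True only when
    remaining effort has been above the ideal burn-down line for
    `consecutive_days` days in a row."""
    run = 0
    for actual, target in zip(remaining, ideal):
        run = run + 1 if actual > target else 0
        if run >= consecutive_days:
            return True
    return False

# Day-by-day remaining effort vs. the ideal line over a 5-day window.
print(needs_attention([40, 38, 36, 35, 33], [40, 35, 30, 25, 20]))  # True
```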

Measure What You Think You Are Measuring:

When measures and metrics are linked to unrelated items, combined with the logical backing of studious people, the results will create ramifications that are, at best, interesting. For example, measuring productivity when you are interested in quality, or time-to-market when you are interested in customer satisfaction. Do they represent Good Numbers Gone Bad or merely chaos? In the long run, the ramifications of mismatches lead to poor decisions and abandoned metrics programs. Measuring toilet paper usage and relating it to productivity is a particularly absurd example, where the logic is that higher usage of toilet paper reflects longer working hours, which would result in more output (of some sort or another). While that example was created as a class exercise, it is possible to find similar examples in the wild. Less absurd mismatches include deciding that effort or the cost of effort is a direct proxy for productivity. Effort is an input to productivity; without an output such as software or widgets, you are using half of an equation as a tool, which might not create the results expected.

One Metric To Rule Them All?:

Not all metrics can be used for all projects. If you can’t easily answer the question “does this relate?” for each metric, the information generated through measurement and analysis will provide little or no value. Stratification is a requirement for analysis. The goal is to understand the differences between groups of work so that when you make the comparison, you can discern what is driving the difference (or even if there is a difference). Comparing package implementations, hardware-intensive projects or custom development is rational only if you understand that there will be differences and what those differences mean. Examples abound of organizations that have failed to stratify like projects into groups for comparison. Failing to take this simple precaution lets Good Numbers Go Bad.
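A minimal sketch of stratifying before comparing (Python; the project categories and productivity numbers are invented for illustration):

```python
from collections import defaultdict
from statistics import mean

# (project type, productivity) observations; comparing across types
# directly would mix package work with custom development.
observations = [
    ("package", 28.0), ("package", 31.0),
    ("custom", 14.0), ("custom", 16.5),
    ("hardware-intensive", 9.0),
]

def stratified_means(obs: list[tuple[str, float]]) -> dict[str, float]:
    """Average productivity within each stratum, never across strata."""
    groups: dict[str, list[float]] = defaultdict(list)
    for kind, value in obs:
        groups[kind].append(value)
    return {kind: mean(values) for kind, values in groups.items()}

print(stratified_means(observations))
```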

They Are Everywhere – They Are Everywhere!:

There are many items that are very important to measure. Measurement can tell you the state of your IT practice while providing focus. Measurement is sometimes thought of as a silver bullet. Because it seems important to measure many activities within an IT organization, many measurement teams think measuring everything is important. Unfortunately, measuring what is really important is rarely easy or straightforward. When presented with obstacles, many metrics programs let Good Numbers Go Bad by measuring something, anything. “Quick, do something” is the attitude! When organizations slip into the “measure something” mode, often what gets measured is not related to the organization’s target behavior (the real needs). When measures are not related to the target behavior, it is easy to breed unexpected behaviors (not indeterminate or unpredictable, just not what was expected). For example, one organization determined that personal capability was a key metric. More capability would translate into higher productivity and quality. During research into the topic, it was determined that capability was too difficult or “touchy-feely” to measure directly. The organization decided that counting requirements was a rough proxy for systems capability, and if systems capability went up, it must be a reflection of personal capability. So, of course, they measured requirements. One unanticipated behavior was that the requirements became more granular (actually more consistent), which meant there was an appearance of increased capability that could not be sustained (or easily proved) after the initial baseline of the measure.
