IT projects have been around in one form or another since the 1940s. Looking back at the literature describing the history of IT, the topic of requirements in general, and the identification of requirements specifically, has been top of mind since day one.  There are numerous definitions of ‘requirements’; however, at a high level, requirements can be thought of as what the functionality delivered by the project should ‘do’. Identifying requirements is difficult because it requires nearly a perfect storm: the correct process, involvement of the correct people for the business problem to be solved (before it is even defined) and an environment that is conducive to making all of the parts work together.  This confluence forms a set of three constraints overlaid on flowing time, which ensures subtle changes are continually being introduced to all of the variables.  A breakdown in process, people or environment will reduce the effectiveness of the result or render it unusable.  The factors driving the people category are typically the most volatile and seemingly least controllable of all of the variables within the requirements process.  This essay will focus on the ‘people’ category, with subsequent essays focusing on process, environment and suggestions for solutions.

People have a major impact on the vagaries of requirements.  All of the strengths and weaknesses that individuals and groups bring to the table will influence the final requirements product. A few of the more intractable contributors to requirements variance are:

  1. Lack of experience
  2. Human nature
  3. Communication
  4. Organizational politics

We’ll discuss the first two today and deal with points 3 and 4 on Monday.

Two types of experience are the most germane to this discussion.  The first is whether participants have knowledge of the problem space from a business perspective. Without that, the requirements may not practically address the project needs. Knowledge of and experience in the problem space is critical for effectiveness. One technique that has been developed to mitigate this risk is to ensure access to business knowledge through co-location with a business partner.  This kind of access is a central tenet of most of the Agile methods. The second category is experience with the requirements process itself. Without experience gathering, recording and managing requirements, it will be difficult to ensure the information gathered is correct, and the results developed will be apt to be more costly than necessary and less valuable than needed.  Agile methods use coaching to reinforce this knowledge and experience, while other methods use training and processes.  The goal is the same in both cases: efficiency and effectiveness.

Human nature can act as a tool to focus or redirect the requirements process.  Watching several requirements gathering sessions has led me to the conclusion that there is a natural tendency for groups to jump to defining the solution before defining the need.  This can lead to a number of communication issues.  Needs provide a locus for grounding the work, which focuses the solution.  It is important to remember that, in the grand scheme of things, needs change before the solution changes, rather than the solution changing the need (even though in exceptional cases this can be true).

There are a number of tactical solutions for all of these issues; however, the first step to solving requirements issues that are people-centric begins with recognizing that a problem exists.  One best practice that I would recommend, taking a page out of the Agile handbook, is to use coaches to support the people working on gathering and managing requirements.  The role of the coach is to be the voice of the people, focused inward on the team and the work.  A coach observes how work is done, and provides support and instruction in a consistent and focused manner when and where it is needed.  This role is different from that of the project manager, which is externally focused, interacting with outside parties and clearing external roadblocks.  While people are not the only factor driving the quality of requirements, they are a critical factor.  Pay attention to how people are being deployed, provide support and instruction and make darn sure the right people are in the right place at the right time.

It Takes A Team

Hand Drawn Chart Saturday

An Agile team comprises a product owner, team members (all disciplines needed to deliver the project) and the scrum master. Delivering on the team’s commitment is the ultimate measure of value. The scrum master helps to create an environment for the team to work together. Over the life of a project, everyone on the team has to lead and facilitate for the team to effectively deliver value.

Leadership in Agile projects has multiple layers. Product owners provide visionary leadership, scrum masters provide process leadership, and day-to-day leadership is more situational and generally diffused across the entire team. Paul Hersey and Ken Blanchard, leadership gurus and authors, have both written that effective leadership is task-relevant. Task-relevant means that the task determines the type of leadership needed and, on an Agile team, who provides that leadership.  The focus of any Agile team changes over the life of the project based both on the stories being worked on and the barriers being addressed. As the focus changes, the mantle of tactical leadership typically changes. In a typical sprint, the product owner’s leadership will be most apparent during story grooming and planning; as the focus changes to analysis and construction, development personnel provide leadership.  When the focus turns to proving that a story is done, the testing role typically provides leadership until the team drives the story to completion, when the cycle begins again. The scrum master facilitates that ebb and flow, reflecting their own form of leadership.

The primary role of a scrum master is as a facilitator. That responsibility does not have to be shouldered alone.  All team members are responsible for keeping work flowing, for unsticking work when it gets stuck and for helping to create an environment that maximizes the delivery of value. Every member of the team has eyes and ears within the boundary and intimacy of the team, and the responsibility to help each other meet their common goals.

While a product owner prioritizes and a scrum master facilitates, it takes a whole team to deliver.  The whole team is responsible for getting the job done, which means that at different times, in different situations, different members will need to provide leadership. Every team member brings their senses to the project-party, which makes all of them responsible for looking for trouble and then helping to resolve it, even if there isn’t a scrum master around.

The product owner owns the backlog. It almost sounds like a mantra. He or she is responsible for organizing and prioritizing the product backlog. The product owner does not act in a vacuum; they are informed and educated by others on the team, peers and other stakeholders. The relationship between the product owner and the team can generate a fishbowl effect, in which confirmation bias can cause the team to pursue threads into blind alleys.

When a team or product owner falls prey to confirmation bias, they exhibit a tendency to search for or interpret information in a way that confirms their preconceptions.  Confirmation bias leads product owners to seek out and assign more weight to evidence that confirms their view of the project direction.  Product owners and teams living inside a fishbowl of their own ideas will ignore or underweight evidence that could disconfirm those ideas.

Coaches need to help product owners avoid falling prey to confirmation bias.

One technique suggested by Dr. Ahmed Sidky at AgileDC 2013 was the development of value teams, with the product owner as facilitator.  Value teams are groups of stakeholders that provide the product owner with outside ideas and additional market knowledge. The value team serves two purposes. First, it provides an avenue for stakeholders to review decisions and direction outside of the demonstration.  Second, value teams can provide new ideas and information to the product owner, who then acts as a conduit to bring the information through the team boundary.

A second set of techniques focuses on providing the team with a question set to safely challenge decisions and direction. A question framework can be used to remove much of the sting of challenging project decisions or directions. For example, to delve into the assumptions of a decision, the question “what would make this true?” could be used.  Where there are several competing ideas to decide between, you could ask the question “what would it take to make this idea the best decision?” about each idea. Answering this question for each idea creates a set of criteria that can be used to compare options.

The CMMI recognized the need for a framework that would lessen the potential for cognitive biases to have an effect. The Decision Analysis and Resolution (DAR) process area helps teams lay out criteria for evaluating alternatives, identify alternatives and then select the best alternative. The activities in the DAR process area lay out how a team can develop a formal decision-making process that strips away much of the confirmation bias.
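
Both the question framework and DAR reduce to scoring alternatives against explicit criteria. A minimal sketch of such a decision matrix (the criteria names, weights and scores here are hypothetical, invented purely for illustration):

```python
# Hypothetical sketch of a DAR-style decision matrix; the criteria,
# weights and scores are invented, not from any published model.
def rank_alternatives(criteria, alternatives):
    """criteria: {name: weight}; alternatives: {name: {criterion: score}}.
    Returns alternatives sorted by weighted score, best first."""
    totals = {
        alt: sum(weight * scores.get(crit, 0) for crit, weight in criteria.items())
        for alt, scores in alternatives.items()
    }
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

criteria = {"fits the need": 3, "cost": 2, "risk": 1}
alternatives = {
    "idea A": {"fits the need": 4, "cost": 2, "risk": 3},
    "idea B": {"fits the need": 3, "cost": 5, "risk": 4},
}
ranked = rank_alternatives(criteria, alternatives)
# ranked[0] is the best-scoring alternative under the stated criteria.
```

Making the weights and scores explicit forces the conversation onto the criteria themselves, which is exactly how a formal process strips away much of the confirmation bias.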

Product backlogs represent the outcome of myriad decisions, some small and some large. Regardless of size, each decision the product owner and team makes changes the direction of a project. When all of those decisions are made within a fishbowl, the results may end up off target. All product owners need a mechanism to ensure they are receiving feedback that challenges their preconceptions. Whether the technique is a value team, formal question sets or frameworks like DAR, all teams must recognize that life in the fishbowl will get stale without occasional fresh water.

Distributed Meetings Need Structure

When you really need to have a meeting (and here are the types of meetings you might need to have), face-to-face meetings are best. Unfortunately, when your team is distributed, face-to-face meetings are really no longer possible. It is important to try to maintain the most intimate meeting possible. After face-to-face, the order of intimacy runs from immersive video to standard video, web conference, phone, IM/chat and, finally, email. Every step down the chain peels away a layer of intimacy, increasing the perceived distance between meeting participants. Techniques that I have used to improve communication between members of distributed teams include:

  1. Facilitation: Facilitators help communication happen by encouraging full participation, promoting understanding and keeping the meeting moving. Consider facilitation at each location for significant meetings. Facilitation can be an expensive option.
  2. Rotating meeting leadership: Rotating the meeting leadership for standing meetings (e.g. standup meetings, sprint reviews, demonstrations and retrospectives) is helpful to generate engagement. This technique works best if the meeting is not chaired by a single strong, positional leader.
  3. Formal structure: Structure provides a means to keep the meeting synchronized and helps avoid open-ended discussion, which is difficult to follow in distributed meetings.  Different accents and cultures complicate not only how information is interpreted, but also how behaviors like interruptions are interpreted.
  4. Pairing: Pair participants in different locations. Each person in the pair will look out for the other’s interest. Meeting pairs are highly effective when one or both people in the pair are actively leading or presenting as part of the agenda. Pre-work between participants in the pair might be required.  Pairing also has the side benefit of helping to keep participants engaged.

Technology is key to making distributed meetings work. Whether video or phone, use the best technology possible. If you can’t hear the people on the other end of the phone, or if the video is so poor that you are unsure whether you are looking at the moon or a person, the meeting will fail.  Assuming the technology works, there are several additional steps that can increase effectiveness.  All of the above techniques can be combined.  Facilitation is the most powerful technique; however, it tends to be the most expensive, so use it for high-impact meetings. Structure and pairing are two very powerful techniques that are cheap and easy to deploy. Rotation works best in standing meetings that leverage self-organizing teams comfortable with sharing leadership.

Distributed meetings are more complicated than face-to-face meetings.  Effective distributed meetings require both an investment in technology and learning new techniques to ensure understanding and communication.

This is your mind on overload!

We live in a noisy world.  Between email, Twitter, Facebook, and the rest of the internet, we are simply awash in information. The economic theory of rational expectations assumes that people fully and quickly process all freely available information. Unfortunately, humans have only a finite processing capacity. Enter the theory of rational inattention, which recognizes our limited ability to process information.  Rational inattention theory, like rational expectations, recognizes that information is freely available, but since it can’t be quickly absorbed, people need to make choices about what they’ll pay attention to.  Attention becomes a resource, and as a scarce resource, it needs to be budgeted wisely [1]. Budgeting translates to filtering.  Filtering is one reason that some process improvement messages get heard and some seem to go in one ear and out the other.

As change agents, we have a better chance of being heard if we recognize the potential impact of rational inattention. Our audience is not going to be moved to action if 1) they are not aware, and 2) they don’t pay attention, both prerequisites to taking action. In his book The Attention Economy, Tom Davenport outlines a model that begins with awareness, which is then filtered by attention to generate specifics from which action can be taken. This simple model helps us understand that getting someone to take action has prerequisites.


How do we break through the wall of noise? Yell louder? One popular approach is to wrap the message for change around a burning platform.  The burning platform is a metaphor for a problem that, if not solved, will cause significant pain or anguish. There is data showing that we respond to negative shocks faster than to positive shocks [2]. This means that our audience’s natural risk aversion may induce them to process negative news faster than positive news. In other words, a solution that solves a current, real pain will be heard faster than a promise of future benefits. Instability is change’s ally, while stability is change’s natural enemy (if it isn’t broken, don’t fix it).

Rational inattention helps change agents understand why some messages are heard and some aren’t. Our audiences will make the most of available information by analyzing those bits that are relevant to their decisions and disregarding the rest. Our goal when packaging change is to increase the incentive to be aware of the need to change and then to pay attention to the message. On many occasions we will need to convince our audiences that the world is not only noisy, but unstable, so they can hear our message.

[1] Economic Letter, Federal Reserve Bank of Dallas, Vol. 6, No. 3, March 2011, p. 2

[2] Robert E. Lucas, Jr., “Some International Evidence on Output-Inflation Tradeoffs,” American Economic Review, Vol. 63, No. 3, 1973, pp. 326–334

Frameworks and mirrors are related.

The simplest explanation of integration testing is that it ensures that functions and components fit together and work. Integration testing is critical to reducing the rework (and the professional embarrassment) that you’ll encounter if the components don’t fit together or if the application does not interact with its environment. A healthy testing ecosystem is required for effective testing regardless of whether you are using Agile or waterfall techniques. As we noted in the essay TMMi: What do I use the model for?, the Testing Maturity Model Integration (TMMi) delivers a framework and a vocabulary that defines the components needed for a healthy test ecosystem. We can use this framework to test whether our approach to integration testing is rigorous. While a formal appraisal using the relevant portion of the model would be needed to understand whether an organization is performing at a specific maturity level, we can look at a few areas that will give a clear understanding of integration test formality. A simple set of questions from the TMMi that I use in an Agile environment to ensure that integration testing is rigorous includes:

  1. Does the organization or project team have a policy for integration testing? All frameworks work best if expectations are spelled out explicitly. Test policies are generally operationalized through test standards that explicitly define expectations.
  2. Is there a defined test strategy? In Agile teams all the relevant testing standards should be incorporated into the project’s definition of done. The definition of done helps the team to plan and to know when any piece of work is complete.
  3. Is there a plan for performing integration testing? Incorporating integration testing into the definition of done enforces that integration testing is planned.  The use of TDD (or any variant) that includes integration testing provides explicit evidence of a plan to perform integration testing.
  4. Is integration testing progress monitored? Leveraging daily or continuous builds provides prima facie evidence that integration testing is occurring (and the build proves that the application at least fits together).  Incorporating smoke and other forms of tests into the build provides information to explicitly monitor the progress of integration testing.  A third basis for monitoring integration progress is the demo.  Work that has met the definition of done (which includes integration testing) is presented to the project’s stakeholders.
  5. Are those performing integration tests trained? Integration testing occurs at many levels of development, ranging from component to system, and each level requires specific knowledge and skills. Agile teams share work and activities to maximize the amount of value delivered. Cross-functional Agile teams that include professional testers can leverage the testers as testing consultants to train and coach the entire team to be better integration testers. Teams without access to professional testers should seek coaching to ensure they are trained in how to perform integration testing.
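
To make the idea behind these questions concrete, an integration test exercises components together rather than in isolation. A minimal sketch, using two hypothetical components that do not come from any particular project:

```python
# Hypothetical components used only to illustrate the difference between
# unit and integration testing.
def parse_record(line):
    """Component 1: parse a 'name,amount' line into a (name, float) pair."""
    name, amount = line.split(",")
    return name.strip(), float(amount)

def format_record(record):
    """Component 2: render a (name, amount) pair for display."""
    name, amount = record
    return f"{name}: {amount:.2f}"

def test_components_integrate():
    # The integration test feeds one component's output directly into the
    # other, proving the pieces fit together, not just that each works alone.
    assert format_record(parse_record("widgets, 3.5")) == "widgets: 3.50"
```

A test like this, run in every daily or continuous build, is the kind of prima facie evidence of integration testing that question 4 asks about.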

The goal of integration testing is to make sure that components, functions, applications and systems fit together. We perform integration testing to ensure we deliver the maximum possible business value to our stakeholders. When the parts of the application we are building or changing don’t fit together well, the value we are delivering is reduced. The TMMi can provide a framework for evaluating just how rigorously and effectively you are performing integration testing.


In Software Project Estimation: Fantasies, I said that a budget, estimate or even a plan is not a price.  After the publication of that essay I had a follow-up conversation with a close friend. He said that in his organization the word estimate is considered a commitment, or at the very least a target that all of his project managers had to pursue. YEOW! He is playing fast and loose with the language and therefore is sending a mixed message.

A commitment is a promise to deliver.  An example of a commitment I heard recently, while walking through the airport listening to the cell phone conversation of a gentleman walking next to me, was “I promise not to leave the sales review until the end of the month.”  A commitment indicates a dedication to an activity or cause.  The person on the cell phone promised to meet the goal he had agreed upon.

What is a target? In an IT department, a target is a statement of a business objective. An example of a target might be “credit card file maintenance must be updated by January 1st to meet the new federal regulation.” A target defines the objective and defines success.  A target is generally a bar set at a performance level and then pursued.  Another example is “I have a target to review six books for the Software Process and Measurement podcast in 2014.”  Note that six is two more than we did in 2013 and represents a stretch goal that will hopefully motivate me to read and review more books.

Simply put, a commitment represents a promise that will be honored and a target is a goal that will be pursued.  An estimate is a prediction based on imperfect information in an uncertain environment.  An estimate, as we have noted before, is best when given as a range. Stating an estimate as a single number and adding the words “we will deliver the project for X” (where X is a budget or estimate) converts the estimate into a commitment that must be honored.  Consider for a second . . . if a project is estimated at $10M – $11M USD and a team finds a way to deliver it for $7M USD, would you expect them to find a way to spend the extra money rather than giving it back so the organization can do something else with it? Bringing the project in for $3M or $4M less than the estimate would mean they had not met their target or commitment. Turning an estimate into a commitment or target can lead teams toward poor behaviors.  Targets are goals, commitments are a promise to perform and an estimate is a prediction.  Targets, commitments and estimates are three different words with three different definitions that generate three different behaviors.
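
The distinction can be made concrete. A small sketch, reusing the hypothetical $10M – $11M figures from the example above, that treats the estimate as a range rather than a committed point:

```python
# Illustrative only: an estimate is a range, a commitment is a point.
def classify_actual(low, high, actual):
    """Compare an actual cost to an estimate range."""
    if actual < low:
        return "under the estimate"   # a success, not a missed commitment
    if actual <= high:
        return "within the estimate"
    return "over the estimate"

# The $10M-$11M project delivered for $7M from the example above:
result = classify_actual(10_000_000, 11_000_000, 7_000_000)
```

Under a range, coming in below the low end is simply good news to report back; under a single-point commitment, the same outcome reads as a miss, which is where the poor behaviors start.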

Development Debt

In Technical Debt, we noted that technical debt is a metaphor coined by Ward Cunningham to represent the work not done or the shortcuts taken when delivering a product. Cunningham uses the term to describe issues in software code. As with any good metaphor, it can be used to stand in for shortcuts in other deliverables that affect the delivery of customer value.  In fact, a quick Google search shows that there have been numerous other uses. These “extensions” of technical debt include:

  • Quality Debt,
  • Design Debt,
  • Configuration Management Debt,
  • User Experience Debt, and
  • Architectural Debt.

Before I suggest that this extension of the technical debt metaphor represents a jump-the-shark moment, I’d like to propose one more extension: process debt.  Process debt reflects the shortcuts taken in the process of doing work that don’t end up in the code.  Shortcuts in the process can affect the outcome of a sprint, a release, a whole project or an organization’s ability to deliver value. For example, teams that abandon retrospectives are incurring process debt that could have long-term impact.

Short-term process debt can be incurred when a process is abridged due to a specific one-time incident, for example, failing to update an application’s support documentation while implementing an emergency change. In the long run, support personnel might deliver poor advice based on the outdated documentation, thereby reducing customer satisfaction.  The one-time nature of the scenario suggests that the team would not continually incur the debt purposefully. However, if process debt becomes chronic, an anti-process bubble can form around the team.  Consider how a leader with a poor attitude can infect a team.  High quantities of process debt can reflect a type of bad-leader syndrome that will be very difficult to remediate.

An example of process debt I recently discussed with a fellow coach occurred when a team that was supposed to check code in every evening for their nightly build took a week-long hiatus.  The overall process was that code was to be checked in, followed by a consolidated build of the software, which was then subjected to a set of automated smoke and regression tests. A significant amount of effort had been put into making sure the process was executed; however, the person who oversaw the process got the flu and no one else had been trained. The lapse in the nightly build process allowed the code to drift enough that it could not be integrated. It took twelve days and a significant amount of rework to get to a point where a nightly build could be done again.  The process debt was incurred when a backup was not trained and the process stopped being used.
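
The nightly process in that story is essentially a sequential pipeline that stops at the first failure. A minimal sketch, with stubbed steps standing in for the real version control, build and test commands (the step names are mine, not the team's):

```python
# Sketch of a check-in -> build -> test pipeline; each step is a stub
# standing in for a real command, so this models the flow only.
def run_nightly(steps):
    """Run (name, step) pairs in order, stopping at the first failure."""
    results = []
    for name, step in steps:
        ok = step()
        results.append((name, ok))
        if not ok:
            break  # a failed check-in or build means nothing downstream can run
    return results

steps = [
    ("check in code", lambda: True),
    ("consolidated build", lambda: True),
    ("smoke and regression tests", lambda: True),
]
results = run_nightly(steps)
```

When no one runs the pipeline at all, as in the story above, the drift between branches goes undetected until integration is attempted again, which is exactly how the twelve days of rework accrued.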

Peer pressure, coaching and process and product quality assurance (PPQA) audits are tools for finding, avoiding and, when needed, remediating process debt.  In all three cases someone, whether a fellow team member or an outsider, will need to look at how the work has been done in order to understand how the process was followed.

Process debt equates to shortcuts in the use of the process that can impact a team, a project, project deliverables or the long-term ability of the organization to deliver value to its customers.  What process debt is not is experimentation done consciously by teams in order to improve, nor is it a change, adopted during a retrospective, to a standard process that does not fit the team’s context. Process debt is incurred when teams abandon a process without a better process. Process debt stands as a worthy extension of the technical debt metaphor that can help us understand that shortcuts in how we do our work can have ramifications, even if the code is not affected.

It’s all about finding the balance!

During the heart of winter, when the polar vortex swoops down with temperatures that even my dog finds uncomfortable, my wife and I assemble puzzles.  The act of assembling a puzzle is a simple example of the need to balance a system view with a more detailed perspective.  Getting work done efficiently and effectively requires both perspectives in some sort of balance.

The process we use for assembling a puzzle begins with opening the box, spilling the contents on a handy table and then propping up the lid so we can reference the big picture.  I once had a conversation in a hotel bar with a person who told me that he thought using the picture as a reference was cheating.  A little probing suggested that starting puzzles might have been more his actual goal than completing them, due to the time it took to discover the picture.  In our process, by contrast, I use completion of the puzzle as the goal and the picture on the cover of the box acts as the high-level requirements. However, as anyone that has ever assembled a puzzle will tell you, to achieve your goal you still need to fit all of those little pieces together.  Whether the puzzle you are working on has 100, 500 or 1000 pieces you will have to shift from focusing on the big picture to focusing on the details to find the right fit. The information from both perspectives is essential to fully understand what is required to get from the start of the system to the end of the system in an effective and efficient manner.

Reflecting back on my conversation about puzzles in the hotel bar: by eschewing the big picture, the systems thinking view, my neighbor might have been able to complete the puzzle, but not in the most efficient manner.  Meanwhile, a single-minded focus on the big picture, as pointed out in Gene Hughson’s comments to Systems Thinking: Difficulties, can cause the equally serious problem of analysis paralysis. Spinning down into greater and greater levels of detailed analysis means a team is delaying its ability to deliver business value.  Gene was interviewed for Software Process and Measurement Cast 268, providing great insights into Agile architecture, software development and management.

In the end, both perspectives are needed to get the job done. Finding the balance between the macro- (systems thinking) and micro-focus is a process in its own right.  The balance changes over the life of any project or even iteration. As a team shifts from conceptualizing to developing what is to be done, they naturally shift from the big picture to the detailed view.  Product owners face a similar journey between the big picture and the detail; however, they need to own the goal and the picture on the puzzle’s lid.  As a coach, the scrum master needs to make sure the whole team remembers the overall goal and big picture. Systems thinking helps us to understand that nothing happens in a vacuum.  Developing an understanding of how we transform inputs into value is critical. However, in order to deliver that value, the big-picture understanding alone is not sufficient. In order to actually execute, we need to have a handle on the details as well.



Why should anyone spend the time and effort needed to count function points?  While some value can be gained from the process of counting function points (it can be leveraged as a formal analysis technique), the value from IFPUG function points comes primarily from how they are used once counted.  Function points have four primary uses.

Estimation: Size is a partial predictor of effort or duration, which makes estimating projects an important use of software size. All parametric estimation tools, homegrown or commercial, require project size as one of their primary inputs. An example of the relationship between size and effort is the Jones equation for estimation, which says that effort is a function of size, behavior and technical complexity.
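
As a sketch of the parametric idea, effort can be modeled as growing non-linearly with size and adjusted by a complexity factor. The coefficients below are invented for illustration; they are not Jones's published values or a calibrated model:

```python
# Toy parametric estimate in the spirit of the text: effort grows
# non-linearly with size and is adjusted by a factor standing in for
# behavior/technical complexity. Coefficients are hypothetical.
def estimate_effort(size_fp, productivity=0.1, exponent=1.1, complexity=1.0):
    """Estimated effort (person-months) for a size in function points."""
    return productivity * (size_fp ** exponent) * complexity

# Expressing the estimate as a range by varying the complexity assumption:
low = estimate_effort(400, complexity=0.9)   # optimistic assumptions
high = estimate_effort(400, complexity=1.2)  # pessimistic assumptions
```

Note that the exponent above 1.0 encodes the common observation that effort grows faster than linearly with size; a calibrated tool would fit these parameters to an organization's own history.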

Denominator: Size is a descriptor that is generally used to add interpretive information to other attributes or as a tool to normalize other attributes. When used to normalize other measures or attributes, size is usually used as a denominator. Effort per function point is an example of using function points as a denominator. Using size as a denominator helps organizations make performance comparisons between projects of differing sizes.  For example, if two projects each discovered ten defects after implementation, which had better quality?  The size of the delivered functionality would have to be factored into the discussion of quality.
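
The ten-defect comparison can be made concrete; the function point counts below are hypothetical, chosen only to illustrate the normalization:

```python
# Normalizing defects by delivered size lets projects of different sizes
# be compared fairly. The sizes are hypothetical.
def defect_density(defects, size_fp):
    """Defects per function point delivered."""
    return defects / size_fp

project_a = defect_density(10, 200)    # 10 defects, 200 FP delivered
project_b = defect_density(10, 1000)   # 10 defects, 1,000 FP delivered
# Same raw defect count, but project_b has the lower density and
# therefore the better quality for the size it delivered.
```

The raw counts are identical; only the denominator reveals which project actually delivered better quality.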

Reporting: Many measures and metrics are collected and used in most organizations to paint a picture of project performance, progress or success. Organizational report cards may also be leveraged, again with many individual metrics, any one of which may be difficult to compare on its own.  Using function points as a denominator synchronizes many disparate measures so that they can be compared and reported.

Control: Understanding performance allows project managers, team leaders and project team members to understand where they are in an overall project or piece of work and therefore take action to change the trajectory of the work. Knowledge allows the organization to control the flow of work in order to influence the delivery of functionality and value in a predictable and controlled manner.

Organizations that have found the greatest value use the counting process itself as an analysis technique. Agile organizations can use function points to size stories and review sprint efficiency, while estimation and reporting are uses that can generate value for all organizations.  IFPUG Function Points (or any functional metric variation) only have value if they are used.