Buckle-Up Sign

When everything is done, it is time to buckle up and go home!

When I am asked what a team should do with stories that are incomplete at the end of a sprint or iteration, I am reminded of the Jackson Browne song "The Load-Out/Stay":

“Now the seats are all empty
Let the roadies take the stage
Pack it up and tear it down
They’re the first to come and last to leave”

Now the demonstration and retrospective are done and, despite the team’s best efforts, one or more stories do not meet the definition of done. The planning process that springs into action after one sprint completes often feels anticlimactic compared to the celebration that marks the end of an iteration. The team feels like the roadies as they swing back into action to pack up the last iteration and get the next one ready to go. Teams and product owners sometimes take shortcuts, “rolling over” the escaped stories without thinking. Doing anything without thought is never a good idea. The three basic steps for dealing with incomplete stories are:

  1. Return the incomplete stories (or other work items) to the backlog. Stories are either done or not done, there is no partial credit.
  2. Re-prioritize the backlog based on all of the stories on the backlog. There are a number of approaches for prioritization, ranging from most risky first to most valuable first. As part of the re-prioritization, reassess the approach to selecting work. At times it might be more important not to stop work on a story even when it does not fit the team’s prioritization strategy; sometimes team members have just gotten to the point where they can solve a problem, and putting it down might be less efficient. However, sometimes the needs of the business and product owner will conflict with that (the product owner and business always win).
  3. Plan (or re-plan) the work that will enter the team-level planning process. Prioritize stories that are more important or return higher value ahead of the stories that you returned to the backlog, unless there is a solid reason to do otherwise.
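The three steps above can be sketched in a few lines of code. This is a minimal illustration, not a tool recommendation; the `Story` structure and the value-first prioritization strategy are assumptions for the example.

```python
from dataclasses import dataclass

@dataclass
class Story:
    name: str
    value: int      # business value, assigned by the product owner (hypothetical field)
    done: bool = False

def roll_over(sprint_stories, backlog):
    """Return incomplete stories to the backlog and re-prioritize it."""
    # Step 1: incomplete stories go back on the backlog; no partial credit.
    backlog.extend(s for s in sprint_stories if not s.done)
    # Step 2: re-prioritize the whole backlog (here: most valuable first).
    backlog.sort(key=lambda s: s.value, reverse=True)
    # Step 3: the re-ordered backlog feeds the next planning session.
    return backlog

backlog = [Story("Report export", value=5)]
sprint = [Story("Login", value=8, done=True), Story("Search", value=3)]
print([s.name for s in roll_over(sprint, backlog)])  # prints ['Report export', 'Search']
```

Note that the returned story competes on value like everything else; nothing in the sort gives it a privileged position.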

The process for dealing with incomplete stories is straightforward. Incomplete stories have to compete for attention just like any other story. Being partially done does not generate a privileged position.

In addition to re-prioritization, here are items you should keep in mind when addressing incomplete stories:

  1. Do not recognize any contribution to velocity or throughput for incomplete items. Remember no partial credit. Velocity is an average over a large number of sprints or iterations; it will balance out. Throughput is similar.
  2. If you are tracking cycle time (and you should be) don’t reset the start date of the story. The start date for the piece of work begins when the team first starts to work on the item and ends when the story meets the definition of done.
  3. If using story points and velocity as a planning tool, don’t resize the story or your velocity will be incorrect, and no amount of averaging will fix the problem.
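Pulling the three rules together, the bookkeeping might look like the following sketch. The record fields and dates are hypothetical; the point is what counts toward velocity and where cycle time starts.

```python
from datetime import date

def sprint_velocity(stories):
    # Rule 1: only stories meeting the definition of done count; no partial credit.
    return sum(s["points"] for s in stories if s["done"])

def cycle_time(story, finished_on):
    # Rule 2: cycle time runs from the ORIGINAL start date, which is never reset.
    return (finished_on - story["started"]).days

# Hypothetical records: one story that rolled over, one that finished.
carried = {"points": 5, "started": date(2024, 3, 4), "done": False}
done_story = {"points": 3, "started": date(2024, 3, 6), "done": True}

# Rule 3: carried["points"] keeps its original size when the story rolls over.
print(sprint_velocity([carried, done_story]))  # prints 3
```

When `carried` finally meets the definition of done, its cycle time is measured from March 4, not from the start of the sprint in which it finished.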

Resist the urge to just roll stories over into the next sprint or iteration. Reassess whether they should be worked on in the next sprint (or whether they should be worked on at all) compared to other work. Reassessing will make people uncomfortable; there was a commitment to complete an item when it was pulled into the sprint. However, transparency and thought are core behaviors that every team should embrace.

Sign saying propane tanks are not allowed in store.

Sometimes no is the right answer and sometimes it is not!

I have been asked many times whether it is “ok” to include work that is not complete in a demonstration/sprint review.  The simple answer is that it is a bad idea 95% of the time. If the answer is no most of the time, why would the default answer be “yes”?  The only good reason to demonstrate an incomplete user story is when feedback is needed or desired to allow the team to progress, and the people participating in the demo are the right people.  Allowing the team to progress is not the same as demonstrating progress … and we have discussed why that is a bad idea. Occasionally I have seen the need to show progress for reasons of organizational politics.  Not a great reason, but sometimes you have to do what is necessary to stay employed. Both of these reasons should be RARE. I have a rule: I do not spend money that is older than I am; demo’ing incomplete stories should be at least that rare.  An unasked question that is even more important when the “can I demo incomplete work” question is asked is how you can demo incomplete work items and stay safe. (Note: Generally, when people ask if they can demo incomplete items, they already have done it or are going to do it anyway and are looking for absolution.)

Demo’ing Incomplete Story Safety Checklist


Sign for a weather shelter

Is A Demo A SAFe Area?

I have been asked many times whether it is “ok” to include work that is not complete in a demonstration/sprint review (I’ll use the term demo from now on).  The simple answer is that it is a bad idea 95% of the time. Demos are agile’s mechanism to share what the team completed during the current sprint or iteration. The use of ‘complete’ is purposeful. It means the work meets the organization’s technical standards and is potentially deployable in production.  Complete means stories that meet the definition of done – before the demo, not the next day. Teams define done before they begin work. Done typically includes steps such as coding, testing, documenting and reviewing. Unless the piece of work meets the definition of done (or an agreed-upon deviation), the work is not done. Demonstrating in-progress material is a bad idea in most situations for many reasons.  Mixing done and not-done items in a demo:

A blur!

I was recently asked to explain the difference between a number of metrics.  One difference that seems to generate some confusion is that between velocity and cycle time.


Velocity is one of the common metrics used by most Agile teams. Velocity is the average amount of “stuff” completed in a sprint. I use the term stuff to encompass whatever measure a team is using to identify or size work. For example, some teams measure stories in story points, function points or simply as units. If, in three sprints, a team completes 20, 30 and 10 story points, the velocity for the team would be the average of these values; that is, 20 story points. The calculation would be the same regardless of the unit of measure.
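The arithmetic from the example, sketched in a couple of lines (the unit of measure is irrelevant to the calculation, as noted above):

```python
def velocity(completed_per_sprint):
    """Average amount of 'stuff' completed per sprint, in whatever unit the team uses."""
    return sum(completed_per_sprint) / len(completed_per_sprint)

# Three sprints completing 20, 30 and 10 story points average out to 20.
print(velocity([20, 30, 10]))  # prints 20.0
```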

Typical Assumptions

Listen to the Software Process and Measurement Cast 285. SPaMCAST 285 features a compilation of frequently asked questions of a consulting kind.  Working as a traveling consultant, podcaster and blogger provides me with a fabulous mix of experiences. Meeting new people and getting to participate in a wide range of real life experiences is mind expanding and invigorating. Many of the questions that I have been asked during a client engagement, on the blog or in response to a podcast have similar themes. Since most of the answers were provided in one-on-one interactions I have compiled a few of the questions to share. If these questions spark more questions I promise to circle back and add to the FAQ list!

The SPaMCAST 285 also features Kim Pries’s column, The Software Sensei. In this edition, Kim tackles the concept of failure mode and effects analysis.

Get in touch with us anytime or leave a comment here on the blog. Help support the SPaMCAST by reviewing and rating it on iTunes. It helps people find the cast. Like us on Facebook while you’re at it.

Next week we will feature an interview with Brian Wernham, author of Agile Project Management for Government. Agile and government used in the same phrase does not have to be an oxymoron.

Upcoming Events


I will be speaking at the StarEast Conference May 4th – 9th in Orlando, Florida.  I will be presenting a talk titled, The Impact of Cognitive Biases on Test and Project Teams. Follow the link for more information on StarEast. ALSO I HAVE A DISCOUNT CODE…. Email me at spamcastinfo@gmail.com or call 440.668.5717 for the code.

ITMPI Webinar!

On June 3 I will be presenting the webinar titled “Rescuing a Troubled Project With Agile.” The webinar will demonstrate how Agile can be used to rescue troubled projects.  You will learn how to recognize that a project is in trouble and how the discipline, focus, and transparency of Agile can promote recovery. Register now!

I look forward to seeing all SPaMCAST readers and listeners at all of these great events!

The Software Process and Measurement Cast has a sponsor.

As many of you know, I do at least one webinar for the IT Metrics and Productivity Institute (ITMPI) every year. The ITMPI provides a great service to the IT profession. ITMPI’s mission is to pull together the expertise and educational efforts of the world’s leading IT thought leaders and to create a single online destination where IT practitioners and executives can meet all of their educational and professional development needs. The ITMPI offers a premium membership that gives members unlimited free access to 400 PDU accredited webinar recordings, and waives the PDU processing fees on all live and recorded webinars. The Software Process and Measurement Cast receives some support if you sign up here. All the revenue our sponsorship generates goes for bandwidth, hosting and new cool equipment to create more and better content for you. Support the SPaMCAST and learn from the ITMPI.


Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques co-authored by Murali Chematuri and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: “This book will prove that software projects should not be a tedious process, neither for you or your team.” Support SPaMCAST by buying the book here.

Available in English and Chinese.




He’s looking at you…

Who is responsible for results on a sprint team? I was once asked “whose throat I should step on when the project is in trouble?” In a classic project, the answer would be the project manager (or a similar position). In an Agile project that is living up to the principles espoused in the Agile Manifesto, the answer is a bit messier and that messiness makes a command and control leader very nervous.

Agile projects using Scrum as their organization and management framework have three basic roles: product owner, scrum master and team. If we were looking for a throat, which one would we select? The product owner owns the backlog and the budget, is in charge of prioritizing the work and provides leadership. The scrum master coaches, teaches and generally facilitates the team while removing barriers to performance, and provides leadership. The whole team plans the work, tackles issues and swarms to problems, and individuals provide leadership when necessary. Agile teams are self-organizing and, to an extent, self-managing (the organization generally decides which projects get done based on strategic plans). The whole team is involved in planning the work and, at least at a situational level, everyone on the team can provide leadership. If you were to ask the members of an Agile team to point at who is responsible, you might not have many people pointing in the same direction. Is the answer, therefore, that no one is responsible?

No, rather everyone on the team is in charge.  Everyone on the team is accountable for meeting the goals that the organization sets out, as interpreted by the product owner (through the backlog) and accepted by the team.  The planning activities of public acceptance of, and commitment to, the goals and stories of the sprint create pressure to do what has been committed. Demonstrations act as the bookend to the public commitment, with the team publicly showing how they performed against the goals they committed to attaining. If we assume that the team is empowered to attain the goals they have committed to, then the team truly is responsible as a whole.

Answering that the team is responsible sounds way too squishy for some organizations. For them, whoever controls the budget is the person who should be accountable. This suggests that the product owner should be responsible, or IT management if the budget is allocated as overhead. Neither of these scenarios is conducive to empowering a self-organizing team.

Who is in charge of a typical sprint team? Every person on the team is responsible for holding each other accountable for meeting their goals. The product owner and scrum master have a direct hand in setting and facilitating the goal, therefore everyone on the team is both accountable and responsible. The layers of interlocking responsibility produce significant peer pressure. That means that every team member can truthfully say that they ARE responsible for the project.


Experimenting with solid foods often fails, at first.

When talking to process improvement personnel, I am often asked what “safe to fail experimentation” means. Safe-to-fail experiments are typically small-scale experiments that approach a problem from different angles to expose the best approach to a solution. In an experiment, any result, positive or negative, creates knowledge.  But that definition doesn’t get at the heart of the question.  Success in process improvement is never a foregone conclusion.  The status quo is sticky because doing work the way we are doing it now is comfortable and safer than change. I think a good near-death experience makes change a lot easier for a person or an organization.  For example, if a team doing manual testing delivers most of its work on time, at an acceptable cost and with good quality, what impetus would the team have for changing from manual testing to automated testing?  The learning curve and potential dip in effectiveness during the transition could negatively impact cost, time-to-market, quality and careers.  Just telling them that they will be more efficient in the future is not going to provide enough motivation to overcome the risk.  Instead they need hard data that tells the team how the change will help, because change is not safe.

Experiments are a tool to gather the data needed to determine whether any specific change will work.  Most process improvement frameworks suggest using experiments.  For example, the CMMI calls for piloting changes before rolling them out.  A pilot reflects a learning moment where a change can be implemented, observed and updated (or abandoned) before a broader implementation.  Kaizen takes a systematic approach to change that uses experimentation to gather and evaluate data, update the suggested change and then roll the change out to a larger audience.  Another example of experimentation can be found in the Shewhart Cycle popularized by W. Edwards Deming.  The Shewhart Cycle, also known as the Plan, Do, Check, Act Cycle, represents a cycle of learning and adapting which is at the heart of experimentation.

The core of experimentation in process improvement is the cycle of trying, learning and adapting.  Translating those words into IT terms, pilots and experiments provide the data needed to determine whether a process improvement will work.  The pilot then generates the knowledge of whether the process needs to be changed.  In the end, the pilot provides the information on whether the anticipated ROI will meet expectations.  To get all of this data, the experiment must occur under real-life conditions.  Said more bluntly, the pilot must be done correctly or we will not gather the right information.

In ‘safe to fail experimentation,’ the word fail is a misnomer.  Changes need to be tried and evaluated before they are implemented en masse.  An experiment is not a failure if the change provides us with the information needed to avoid an implementation that would hurt the organization, or the information needed to update the process so that it can be rolled out effectively.  I would rather change the expression to ‘experiments have to be safe,’ because it will never be safe for a full-scale change to fail.  Experiments must be safe so that we can learn how to avoid larger changes that fail.