Buckle-Up Sign

When everything is done, it is time to buckle up and go home!

When I am asked what a team should do with stories that are incomplete at the end of a sprint or iteration, I am reminded of the Jackson Browne song "The Load-Out/Stay":

“Now the seats are all empty
Let the roadies take the stage
Pack it up and tear it down
They’re the first to come and last to leave”

Now the demonstration and retrospective are done and, despite the team's best efforts, one or more stories do not meet the definition of done. The planning process that springs into action after the completion of one sprint often feels anticlimactic compared to the celebration that marks the end of an iteration. The team feels like the roadies as they swing back into action to pack up the last iteration and get the next one ready to go. Teams and product owners sometimes take shortcuts, "rolling over" the escaped stories without thinking. Doing anything without thought is never a good idea. The three basic steps for dealing with incomplete stories are:

  1. Return the incomplete stories (or other work items) to the backlog. Stories are either done or not done; there is no partial credit.
  2. Re-prioritize the backlog based on all of the stories on the backlog. There are a number of approaches to prioritization, ranging from most risky first to most valuable first. As part of the re-prioritization, reassess the approach to selecting work. At times it might be more important not to stop work on a story even when it does not fit the team's prioritization strategy; sometimes team members have just gotten to the point where they can solve a problem, and putting it down might be less efficient. However, sometimes this will conflict with the needs of the business and the product owner (the product owner and business always win).
  3. Plan (or re-plan) the work that will enter the team-level planning process. Prioritize stories that are more important or return higher value ahead of the stories that you returned to the backlog, unless there is a solid reason to do otherwise. (A sketch of these steps follows the list.)
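A minimal sketch of the three steps in Python; the Story class, the value_score field, and the sample stories are illustrative assumptions rather than a prescribed implementation:

```python
from dataclasses import dataclass

@dataclass
class Story:
    name: str
    value_score: int  # hypothetical value measure; could be risk or WSJF instead
    done: bool = False

def end_of_sprint(sprint_stories, backlog):
    # Step 1: return incomplete stories to the backlog -- no partial credit.
    backlog.extend(s for s in sprint_stories if not s.done)
    # Step 2: re-prioritize the whole backlog, returned stories included.
    # "Most valuable first" here; a team might use most-risky-first instead.
    backlog.sort(key=lambda s: s.value_score, reverse=True)
    return backlog

# Step 3: plan the next sprint from the top of the re-prioritized backlog.
backlog = [Story("report export", 5)]
sprint = [Story("login page", 8, done=True), Story("audit trail", 3)]
print([s.name for s in end_of_sprint(sprint, backlog)])
# ['report export', 'audit trail']
```

Note that the returned story competes on the same footing as everything else in the backlog; nothing in the sort gives it special treatment.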

The process for dealing with incomplete stories is straightforward. Incomplete stories have to compete for attention just like any other story. Being partially done does not generate a privileged position.

In addition to re-prioritization, there are a few items to keep in mind when addressing incomplete stories:

  1. Do not recognize any contribution to velocity or throughput for incomplete items. Remember: no partial credit. Velocity is an average over a large number of sprints or iterations; it will balance out. Throughput is similar.
  2. If you are tracking cycle time (and you should be), don't reset the start date of the story. The clock for a piece of work starts when the team first begins to work on the item and stops when the story meets the definition of done.
  3. If you are using story points and velocity as a planning tool, don't resize the story, or your velocity will be incorrect and no amount of averaging will fix the problem. (The sketch after this list illustrates the bookkeeping.)
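A hedged sketch of that bookkeeping, assuming simple in-memory records; the field names are invented for the example:

```python
from datetime import date

def sprint_velocity(stories):
    # Count only stories meeting the definition of done: no partial credit.
    return sum(s["points"] for s in stories if s["done"])

def cycle_time_days(story, finished_on):
    # Cycle time runs from the first start to done; never reset on rollover.
    return (finished_on - story["started_on"]).days

story = {"points": 5, "done": False, "started_on": date(2014, 3, 3)}
print(sprint_velocity([story]))                   # 0: incomplete, no credit
story["done"] = True                              # finished in a later sprint
print(cycle_time_days(story, date(2014, 3, 17)))  # 14: original start kept
```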

Resist the urge to simply roll stories over into the next sprint or iteration. Reassess whether they should be worked on in the next sprint (or whether they should be worked on at all) when compared to other work. Reassessing will make people uncomfortable; there was a commitment to complete an item when it was pulled into the sprint. However, transparency and thought are core behaviors that every team should embrace.

Sign saying propane tanks are not allowed in store.

Sometimes no is the right answer and sometimes it is not!

I have been asked many times whether it is "ok" to include work that is not complete in a demonstration/sprint review.  The simple answer is that it is a bad idea 95% of the time. If the answer is no most of the time, what would ever make the answer "yes"?  The only good reason to demonstrate an incomplete user story is when feedback is needed or desired to allow the team to progress, and the people participating in the demo are the right people.  Allowing the team to progress is not the same as demonstrating progress; we have discussed why the latter is a bad idea. Occasionally I have seen the need to show progress for reasons of organizational politics.  Not a great reason, but sometimes you have to do what is necessary to stay employed. Both of these reasons should be RARE. I have a rule that I do not spend money that is older than I am; demo'ing incomplete stories should be at least that rare.  An unasked question that is even more important than "can I demo incomplete work" is how you can demo incomplete work items and stay safe. (Note: Generally, when people ask if they can demo incomplete items, they already have or are going to do it anyway and are looking for absolution.)

Demo’ing Incomplete Story Safety Checklist

(more…)

Sign for a weather shelter

Is A Demo A SAFe Area?

I have been asked many times whether it is "ok" to include work that is not complete in a demonstration/sprint review (I'll use the term demo from now on).  The simple answer is that it is a bad idea 95% of the time. Demos are agile's mechanism to share what the team completed during the current sprint or iteration. The use of 'complete' is purposeful. It means the work meets the organization's technical standards and is potentially deployable to production.  Complete means stories that meet the definition of done before the demo, not the next day. Teams define done before they begin work. Done typically includes steps such as coding, testing, documenting and reviewing. Unless the piece of work meets the definition of done (or an agreed-upon deviation), the work is not done. Demonstrating in-progress material is a bad idea in most situations for many reasons.  Mixing done and not done items in a demo: (more…)

A blur!

I was recently asked to explain the difference between a number of metrics.  One difference that seems to generate some confusion is that between velocity and cycle time.

Velocity:

Velocity is one of the most common metrics used by Agile teams.  Velocity is the average amount of "stuff" completed in a sprint.  I use the term stuff to encompass whatever measure a team is using to identify or size work.  For example, some teams measure stories in story points, function points or simply as units. If, in three sprints, a team completes 20, 30 and 10 story points, the velocity for the team would be the average of these values; that is, 20 story points. The calculation would be the same regardless of the unit of measure.
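That arithmetic, sketched in Python using only the standard library:

```python
from statistics import mean

completed_per_sprint = [20, 30, 10]  # story points completed in each sprint
print(mean(completed_per_sprint))    # 20 story points per sprint
```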

Typical Assumptions (more…)

Listen to the Software Process and Measurement Cast 285. SPaMCAST 285 features a compilation of frequently asked questions of a consulting kind.  Working as a traveling consultant, podcaster and blogger provides me with a fabulous mix of experiences. Meeting new people and getting to participate in a wide range of real-life experiences is mind-expanding and invigorating. Many of the questions that I have been asked during a client engagement, on the blog or in response to a podcast have similar themes. Since most of the answers were provided in one-on-one interactions, I have compiled a few of the questions to share. If these questions spark more questions, I promise to circle back and add to the FAQ list!

The SPaMCAST 285 also features Kim Pries’s column, The Software Sensei. In this edition, Kim tackles the concept of failure mode and effects.

Get in touch with us anytime or leave a comment here on the blog. Help support the SPaMCAST by reviewing and rating it on iTunes. It helps people find the cast. Like us on Facebook while you’re at it.

Next week we will feature an interview with Brian Wernham, author of Agile Project Management for Government. Using Agile and government in the same phrase does not have to be an oxymoron.

Upcoming Events

StarEast

I will be speaking at the StarEast Conference May 4th – 9th in Orlando, Florida.  I will be presenting a talk titled, The Impact of Cognitive Biases on Test and Project Teams. Follow the link for more information on StarEast. ALSO I HAVE A DISCOUNT CODE…. Email me at spamcastinfo@gmail.com or call 440.668.5717 for the code.

ITMPI Webinar!

On June 3 I will be presenting the webinar titled "Rescuing a Troubled Project With Agile." The webinar will demonstrate how Agile can be used to rescue troubled projects.  You will learn how to recognize that a project is in trouble and how the discipline, focus, and transparency of Agile can promote recovery. Register now!

I look forward to seeing all SPaMCAST readers and listeners at all of these great events!

The Software Process and Measurement Cast has a sponsor.

As many of you know, I do at least one webinar for the IT Metrics and Productivity Institute (ITMPI) every year. The ITMPI provides a great service to the IT profession. ITMPI's mission is to pull together the expertise and educational efforts of the world's leading IT thought leaders and to create a single online destination where IT practitioners and executives can meet all of their educational and professional development needs. The ITMPI offers a premium membership that gives members unlimited free access to 400 PDU-accredited webinar recordings, and waives the PDU processing fees on all live and recorded webinars. The Software Process and Measurement Cast gets some support if you sign up here. All the revenue our sponsorship generates goes to bandwidth, hosting and new cool equipment to create more and better content for you. Support the SPaMCAST and learn from the ITMPI.


Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques, co-authored by Murali Chemuturi and myself, was published by J. Ross Publishing. We have received unsolicited reviews like the following: "This book will prove that software projects should not be a tedious process, neither for you or your team." Support SPaMCAST by buying the book here.

Available in English and Chinese.


He’s looking at you…

Who is responsible for results on a sprint team? I was once asked, "Whose throat should I step on when the project is in trouble?" In a classic project, the answer would be the project manager (or a similar position). In an Agile project that is living up to the principles espoused in the Agile Manifesto, the answer is a bit messier, and that messiness makes a command-and-control leader very nervous.

Agile projects using Scrum as their organization and management framework have three basic roles: product owner, scrum master and team. If we were looking for a throat, which one would we select? The product owner owns the backlog and the budget, is in charge of prioritizing the work, and provides leadership. The scrum master coaches, teaches and generally facilitates the team while removing barriers to performance, and provides leadership. The whole team plans the work, tackles issues and swarms to problems, and individuals provide leadership when necessary. Agile teams are self-organizing and, to an extent, self-managing (the organization generally decides which projects get done based on strategic plans). The whole team is involved in planning the work and, at least at a situational level, everyone on the team can provide leadership. If you were to ask the members of an Agile team to point at who is responsible, you might not have many people pointing in the same direction. Is the answer, therefore, that no one is responsible?

No; rather, everyone on the team is in charge.  Everyone on the team is accountable for meeting the goals that the organization sets out, as interpreted by the product owner (through the backlog) and accepted by the team.  The planning activities of public acceptance of and commitment to the goals and stories of the sprint create pressure to do what has been committed. Demonstrations act as the bookend to the public commitment, with the team publicly showing how they performed against the goals they committed to attaining. If we assume that the team is empowered to attain the goals they have committed to, then the team truly is responsible as a whole.

Answering that the team is responsible sounds way too squishy for some organizations. For them, in the end, whoever controls the budget should be accountable. This suggests that the product owner should be responsible, or IT management if the budget is allocated as overhead. Neither of these scenarios is conducive to empowering a self-organizing team.

Who is in charge of a typical sprint team? Every person on the team is responsible for holding each other accountable for meeting their goals. The product owner and scrum master have a direct hand in setting and facilitating the goals; therefore everyone on the team is both accountable and responsible. The layers of interlocking responsibility produce significant peer pressure. That means that every team member can truthfully say that they ARE responsible for the project.

Experimenting with solid foods often fails, at first.

When talking to process improvement personnel, I am often asked what "safe to fail experimentation" means. Safe-to-fail experiments are typically small-scale experiments that approach a problem from different angles in order to expose the best approach to a solution. In an experiment, all results, positive or negative, create knowledge.  But that definition doesn't get at the heart of the question.  Success in process improvement is never a foregone conclusion.  The status quo is sticky, because doing work the way we are doing it now is comfortable and safer than change. I think a good near-death experience makes change a lot easier for a person or an organization.  For example, if a team doing manual testing delivers most of its work on time, at an acceptable cost and with good quality, what impetus would the team have for changing from manual testing to automated testing?  The learning curve and potential dip in effectiveness during the transition could negatively impact cost, time-to-market, quality and careers.  Just telling them that they will be more efficient in the future is not going to provide enough motivation to overcome the risk.  Instead they need hard data that tells the team how the change will help, because change is not safe.

Experiments are a tool to gather the data needed to determine whether any specific change will work.  Most process improvement frameworks suggest using experiments.  For example, the CMMI calls for piloting changes before rolling them out.  A pilot reflects a learning moment where a change can be implemented, observed and updated (or abandoned) before a broader implementation.  Kaizen takes a systematic approach to change that uses experimentation to gather and evaluate data, update the suggested change and then roll the change out to a larger audience.  Another example of experimentation can be found in the Shewhart Cycle popularized by W. Edwards Deming.  The Shewhart Cycle, also known as the Plan, Do, Check, Act cycle, represents a cycle of learning and adapting, which is at the heart of experimentation.

The core of experimentation in process improvement is the cycle of trying, learning and adapting.   Translated into IT terms, pilots and experiments provide the data needed to determine whether a process improvement will work, generate the knowledge of whether the process needs to be changed, and, in the end, provide the information on whether the anticipated ROI will meet expectations.  To get all of this data, the experiment must occur under real-life conditions.  Said more bluntly, the pilot must be done correctly or we will not gather the right information.

In ‘safe to fail experimentation,’ the word fail is a misnomer.  Changes need to be tried and evaluated before they are implemented en masse.  An experiment is not a failure if the change provides us with the information needed to avoid an implementation that would hurt the organization, or the information needed to update the process so that it can be rolled out effectively.  I would rather change the expression to ‘experiments have to be safe,’ because it will never be safe for a full-scale change to fail.  Experiments must be safe so that we can learn how to avoid failures in larger changes.

You know the night is done when they lock the door.

What is the difference between the definition of done and acceptance criteria? If a team has let this question fester for any length of time, they generally will decide that the two concepts are synonymous. Unfortunately they are wrong.

The definition of done is the set of requirements that the software must meet to be considered complete. An example of the definition of done is:

All stories must be unit tested, a code review performed, integrated into the main build, integration tested, and release documentation completed.

Almost every team has a different definition of done, as technology, business or government requirements, or organizational culture can have an impact on how a specific team implements the definition. For example, a team building software or hardware for use in a medical device will have different regulatory requirements to adhere to. The definition of done is generally agreed upon by the entire core team at the beginning of a project or program and stays roughly the same over the life of the project. It provides all team members with an outline of the macro requirements that all stories must meet; therefore the definition helps in estimating, by suggesting many of the tasks that will be required. I have heard the definition of done described as the requirements generated by an organization's policies, processes and methods. For example, the organization may have a policy that requires code to be scanned for security holes. This requirement would need to be in the definition of done.

Acceptance criteria, on the other hand, provide confirmation that the story does what was intended and can be used to create an acceptance test. An example of the acceptance criteria for a simple data entry screen (a more robust version of this example was shown previously) for a logo glass collection application would include:

  • Brewery name is a required field.
  • Glass logo copy is a required field.
  • Glass type is a required field.

The software must meet these criteria in order to meet the Product Owner's and stakeholders' needs. During a hands-on demonstration, the Product Owner and stakeholders would be able to execute these functions. Acceptance criteria are part of the description of the stakeholders' requirements for the software.
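As an illustration of turning the criteria into an executable acceptance test, here is a minimal Python sketch; the validate_glass_entry function and its field names are hypothetical, invented for this example:

```python
def validate_glass_entry(entry):
    """Return the list of required fields missing from a data-entry record."""
    required = ["brewery_name", "glass_logo_copy", "glass_type"]
    return [field for field in required if not entry.get(field)]

# Each acceptance criterion maps to a required-field check.
assert validate_glass_entry({}) == ["brewery_name", "glass_logo_copy", "glass_type"]
assert validate_glass_entry({"brewery_name": "Great Lakes",
                             "glass_logo_copy": "Edmund Fitzgerald",
                             "glass_type": "pint"}) == []
print("acceptance criteria satisfied")
```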

In order to be part of a demonstration where the story can be accepted, all stories must satisfy both the definition of done and the acceptance criteria. The definition of done provides the team with a clear understanding of their obligations to meet overall organizational and process requirements. Acceptance criteria define how the Product Owner and stakeholders will know that the story meets their requirements for a specific function. Both are required for the team to understand when a story is done.

Is it a good idea for teams to work on more than one project at once?  The logic leading up to the question usually begins with a statement like, "Our team is 15% on project A, 40% on project B and 45% on project C.  Being fully loaded makes us more productive, right?" Before I answer, I generally have to take a deep breath; otherwise I tend to build up quite a head of steam. The simple and easily provable answer is no (the Multitasking Name Game drives the point home nicely). I am sure there are special circumstances where the answer is yes; however, I have never seen that circumstance in the workplace.  Multitasking, switching costs and potential bottlenecks will all conspire to make this behavior inefficient and probably ineffective.  The problem is that both individuals and teams conflate the idea of being really busy with being highly productive.

Focused, dedicated teams generally reflect the following attributes:

  • They have a common goal that provides direction.
  • They tend to have fewer cross-purpose conflicts over deciding which project is more important when bottlenecks occur.
  • They can plan their work more easily, which reduces project multitasking. This, in turn, will increase the flow of work through the team.
  • They tend to be more efficient due to less switching between tasks to support multiple projects.

Much of the benefit of single-threading projects comes from the efficiency gains generated by planning and organizing the work so that team members are effectively utilized and work flows through the process without stopping. Multitasking at either an individual or team level reduces efficiency.  Focusing on one goal at a time is significantly more efficient; however, it does take effort to plan. Here again, focusing on one project at a time reduces the overhead of planning.

How fast are you getting to where you’re going?

What is the difference between productivity and velocity?  Productivity is the rate of production using a set of inputs for a defined period of time.  In a typical IT organization, productivity gets simplified to the amount of output generated per unit of input; function points per person-month is a typical expression of productivity.  For an Agile team, productivity could very easily be expressed as the amount of output delivered per time box.  Average productivity would be equivalent to the team's capacity to deliver output.  Velocity, on the other hand, is an Agile measure of how much work a team can do during a given iteration.  Velocity is typically calculated as the average number of story points a team completes per sprint. Conceptually the two metrics are very similar; the most significant differences relate to how effort is accounted for and how size is defined.

The conventional calculation for IT productivity is:

Productivity = Units of Work Delivered ÷ Effort Expended

Function points, use case points, story points or lines of code are typical size measures. Work in progress (incomplete units of work) and defective units generally do not count as “delivered.” Effort expended is the total effort for the time box being measured.

The typical calculation for velocity for a specific sprint is:

Velocity = Story Points Completed ÷ Sprint

Note that, as a general rule, both metrics are averages; one observation of performance may or may not be representative.

In both cases the denominator represents the team's effort for a specific sprint; however, when using velocity, the unit of measure is the team-sprint rather than hours or months. Average velocity makes the assumption that the team's size and composition are stable.  This tends to be a stumbling block in many organizations that have not recognized the value of stable teams.
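To make the contrast concrete, a short sketch with invented sample numbers (not drawn from any real team):

```python
from statistics import mean

# Productivity: output per unit of effort, e.g., function points per person-month.
function_points_delivered = 120
person_months_expended = 10
print(function_points_delivered / person_months_expended)  # 12.0 FP per person-month

# Velocity: average completed story points per sprint for one stable team.
print(mean([20, 30, 10]))  # 20 story points per sprint
```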

The similarities between the two metrics can be summarized as:

  • Velocity and productivity measure the output a team delivers in a specific timeframe.
  • Both metrics can be used to reflect team capacity for stable teams.
  • Both measures only make sense when they reflect completed units of work.

The differences in the two metrics are more a reflection of the units of measure being used.  Productivity generally uses measures that allow the data to be consolidated for organizational reporting, while velocity uses size measures, such as story points, that are team specific. A second difference is convention: productivity is generally stated as the number of units of work per unit of effort (e.g., function points per person-month), while velocity is stated as an average rate (average story points per sprint).  While there are differences, they are more a representation of the units of measure being used than of the ideas the metrics represent.