
SPaMCAST 520 features our interview with Doc Norton. We talked about his new book Escape Velocity, measurement, and why velocity isn’t generally a good measure for teams. By the time teams get to a point where story point velocity is consistent and predictable, they will have better tools that have fewer negative side effects.

Doc’s Bio

Doc Norton is passionate about working with teams to improve delivery and building great organizations. Once a dedicated code slinger, Doc has turned his energy toward helping teams, departments, and companies work better together in the pursuit of better software. Working with a wide range of companies such as Groupon, Nationwide Insurance, Belly, and JaTango, Doc has applied tenets of agile, lean, systems thinking, and servant leadership to develop highly effective cultures and drastically improve their ability to deliver valuable software and products.

A Pluralsight author, Clean Coders contributor, frequent blogger, international keynote speaker, and coach, Doc has been working in his spare time on his latest book, Escape Velocity: Better Metrics for Agile Teams. You can find the book on LeanPub at www.leanpub.com/EscapeVelocity

Twitter: @DocOnDev

Web: http://docondev.com/

Can you help keep the podcast growing? Here are some ideas:

  1. Tell a friend about the cast.
  2. Tweet or post about the cast.  Every mention helps.
  3. Review the podcast wherever you get the cast.
  4. Pitch a column to me. You are cool enough to be listening; you deserve to be heard.
  5. Sponsor an episode (text or call me to talk about the idea).
  6. Listen.

Whether you do one or all six, being here is a big deal to me. Thank you!


Re-Read Saturday News
This week we continue our journey through Bad Blood: Secrets and Lies in a Silicon Valley Startup by John Carreyrou (published by Alfred A. Knopf, 2018 – buy a copy and read along!). Today we tackle a single chapter. Chapter 6, titled Sunny, introduces Ramesh “Sunny” Balwani to the story. Sunny, Holmes’ live-in boyfriend (the stress on the live-in part is to shine a light on just how close Holmes was to Sunny), adds another layer of toxicity to the Theranos story. The toxicity feels extraordinary but is not that uncommon when teams break down.

Current Entry:

Week 5 — Sunny: https://bit.ly/2AZ5tRq

Cycle time?

In Part 1 we examined Work in Process and Story Escape Rate. These two metrics are powerful, but they are not sufficient to provide a full picture of the flow of value through a process. We continue with four more metrics to complete the palette.

A blur!

I was recently asked to explain the difference between a number of metrics.  One difference that seems to generate some confusion is that between velocity and cycle time.

Velocity:

Velocity is one of the common metrics used by most Agile teams.  Velocity is the average amount of “stuff” completed in a sprint.  I use the term stuff to encompass whatever measure a team is using to identify or size work.  For example, some teams measure stories in story points, function points or simply as units. If in three sprints, a team completes 20, 30 and 10 story points, the velocity for the team would be the average of these values; that is, 20 story points. The calculation would be the same regardless of the unit of measure.  
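As a minimal sketch of that arithmetic (Python, using the hypothetical sprint totals from the example above):

# Minimal sketch of the average velocity calculation (hypothetical sprint totals).
completed_points = [20, 30, 10]  # story points completed in each of three sprints

velocity = sum(completed_points) / len(completed_points)
print(velocity)  # 20.0 story points per sprint, regardless of the unit of measure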

Typical Assumptions

Zoom!

Audio Version:  Software Process and Measurement Cast 119.

Definition:

The simple definition of velocity is the amount of work that is completed in a period of time (typically a sprint). The definition is related to productivity, which is the amount of effort required to complete a unit of work, and to delivery rate (the speed at which the work is completed). The inclusion of a time box (the sprint) creates a fixed duration, which transforms velocity into more of a productivity metric than a speed metric (how much work can be done in a specific timescale by a specific team). Therefore, to truly measure velocity you need to estimate the units of work completed, have a definition of complete, and have a time box.

The definition of done in Agile is typically functional code; however, I think the definition can be stretched to reflect the terminal deliverable the sprint team has committed to create, based on the definition of done the team has established (for example, requirements for a sprint team working on requirements, or completed test cases in a test sprint).

Many Agile projects use story points as a metaphor for the size of the functional code. Note that other functional size measures can be used just as easily; examples in this article will use story points as the unit of measure. Effort and duration, however, are not size. Effort is an input that is consumed while transforming ideas into functional code, and the amount of effort required for the transformation is a reflection of size, complexity, and other factors. Duration, like effort, is consumed by a sprint rather than created, and therefore does not measure what is delivered.

Formula

To calculate velocity, simply add up the size estimates of the features (user stories, requirements, backlog items, etc.) successfully delivered in an iteration.  The use of the size estimates allows the team to distinguish between items of differing levels of granularity.  Successfully delivered should equate to the definition of done.

Velocity = Story Points Completed Per Sprint

And:

Average velocity = Average Number of Story Points Per Sprint

The formula becomes more complex if staffing varies between sprints (and potentially less valuable as a predictive measure).  In order to account for variable staffing the velocity formula would have to be modified as follows:

velocity per person = sum (size of completed features in a sprint / number of people) / number of sprints or observations

To be really precise (though not necessarily more accurate) we would have to understand the variability of the data, as variability helps define the level of confidence. Variability generated by differences in team member capabilities is one of the reasons that predictability is enhanced by team stability. As you can see, the more complex the environmental scenario becomes, the more complex the math must be to describe it.
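As a rough sketch of the adjusted formula and of the variability point (Python; the sprint data below is purely hypothetical):

from statistics import mean, stdev

# Hypothetical observations: (story points completed, number of people on the team).
sprints = [(20, 5), (30, 6), (10, 4)]

# Velocity per person, per the formula above:
# sum(size completed / number of people) / number of observations.
per_person_velocity = mean(points / people for points, people in sprints)

# The spread of the raw observations is one simple way to express confidence.
raw_velocity = [points for points, _ in sprints]
print(round(per_person_velocity, 2))                      # ~3.83 points per person per sprint
print(mean(raw_velocity), round(stdev(raw_velocity), 1))  # 20, 10.0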

Uses:

Velocity is used as a tool in project planning and reporting. Velocity is used in planning to predict how much work will be completed in a sprint and in reporting to communicate what has been done.

When used for planning and estimation the team’s velocity is used along with a prioritized set of granular features (e.g., user stories, backlog items, requirements, etc.) that have been sized or estimated.  The team uses these factors to select what can be done in the upcoming sprint. When the sprint is complete the results are used to update velocity for the next sprint. This is a top down estimation process using historical data.

Over a number of sprints velocity can be used both as a macro planning tool (when will the project be done) and a reporting tool (we planned at this velocity and are delivering at this velocity).

Velocity can be used in all methodologies and because it is team specific, it is agnostic in terms of units of size.

Issues

As with all metrics, velocity has its share of issues.

The first is that there is an expectation of team stability inherent in the metric. Velocity is impacted by team size and composition and without collecting additional attributes and correlating these attributes to performance, change is not predictable (except by gut feel or Ouija Board). There should always be notes kept on team size and capability so that you can understand your data over time.

Similarly, team dynamics change over time, sometimes radically. Radical changes in team dynamics will affect velocity. Note that shocks to any system of work are apt to create the same issue. Measurement personnel, Scrum Masters, and team leaders need to be aware of people’s personalities and how they change over time.

The first-time application of velocity requires either historical data from other similar teams and projects or an estimate. In a perfect world a few sprints would be executed and data gathered before expectations are set; however, clients generally want an idea of whether a project will be completed, when it will be completed, and the functions that will be delivered along the way. Estimates of velocity based on the team’s knowledge of the past or other crowdsourcing techniques are relatively safe starting points, assuming continuous recalibration.

The final issue is the requirement for a good definition of done. Done is a concept that has been driven home in the agile community. To quote Mayank Gupta (http://www.scrumalliance.org/articles/106-definition-of-done-a-reference), “An explicit and concrete definition of done may seem small but it can be the most critical checkpoint of an agile project.”  A concrete definition of done provides the basis for estimating velocity by reducing variability based on features that are in different states of completion.  Done also focuses the team by providing a goal to pursue. Make sure you have a crisp definition of done and recognize how that definition can change from sprint to sprint.

Related Metrics:

Productivity (size / effort)

Delivery Rate (duration / size)
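A minimal sketch of these two related metrics, using the definitions above and purely hypothetical project totals:

# Hypothetical project totals used only to illustrate the related metrics above.
size = 120       # units of work delivered (e.g., function points or story points)
effort = 30      # person-months expended
duration = 6     # calendar months elapsed

productivity = size / effort       # size / effort -> 4.0 units per person-month
delivery_rate = duration / size    # duration / size -> 0.05 months per unit
print(productivity, delivery_rate)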

Criticisms:

The first criticism of velocity is that the metric is not comparable between teams and, by inference, is not useful as a benchmark. Velocity was conceived as a tool for Scrum Masters and Team Leads to manage and plan individual sprints. There is no overarching set of rules to enforce standardization, so one team’s velocity is apt to reflect something different than the next team’s. The criticism is correct but perhaps off the mark. As a team-level tool, velocity works because it is very easy to use and can be consistent; adding the complexity of standards and rules to make it an organizational metric will, by definition, reduce the simplicity and therefore the usefulness at the team level.

A second criticism is that estimates and budgets are typically set early in a project’s life, while team-level velocity may well be unknown until later. The dichotomy between estimating and planning (or budgeting and estimating, for that matter) is often overlooked. Estimates developed early in a project, or in projects with multiple teams, require different techniques to generate. In large projects, applying team-level velocities requires techniques more akin to portfolio management, which adds significant overhead. I would suggest that velocity is more valuable as a team planning tool than as a budgeting or estimation tool at a macro level.

A final criticism is that backlog items may not be defined at a consistent level of granularity; therefore, velocity may deliver inconsistent results when applied. I tend to dismiss this criticism because it is true of any mechanism that relies on relative sizing. Team consistency will help reduce the variability in sizing; even so, all teams should strive to break backlog items into stories that are as atomic as possible.

 

Velocity and productivity are different.

Mention productivity to adherents of Agile methods and you will get a range of responses. Some of the typical responses include blank stares, tirades against organization-level control mentality or discussions on why velocity is more relevant. Similar reactions (albeit 180 degrees out of phase) will be experienced when you substitute the word velocity and have discussions with adherents of other methodologies.

Fantasy movies and novels have taught us that in the realm of magic, knowing the name of a person or thing confers power. In fantasy novels the power conferred is that of control. In real life, the power of having a name for a concept is the power of spin. Spin and control are a pair of highly related terms. Spin is providing an interpretation of something (a statement or an event, for example), especially in a way meant to sway public opinion.

Naming a concept, even if many similar concepts have already been given names, creates an icon that can rally followers and be used to heap derision on non-followers. Perhaps because the proportion of fantasy and science fiction fans in the IT professions is higher than in the general population, the pattern of naming a concept to focus attention has been raised to a fine art. Examples abound in the IT world, such as the use of the term logical files in IFPUG Function Points (where are the illogical files?) and Agile methods (the others must be the inflexible methods). Productivity and velocity are named concepts that reflect this rule. Each can move followers to alter their behavior, or can generate violent rage in what began as a civil conversation. The irony is that these terms represent highly related concepts. Both seek to describe the amount of output that will be delivered in a specific period of time. The difference is a matter of perspective.

If they are so similar, why are there two terms describing similar concepts? Let’s peel back another layer.

Dion Hinchcliffe has defined project velocity as the measurement of the event rate of a project. A simpler definition is simply work divided by time. In both cases velocity is used to describe the speed at which a specific team delivers results. Typical velocity metrics include story points per person-month, requirements per sprint, and stories or story points per iteration. The units of measure are targeted at the level of requirements or user stories. The granularity of the unit of measure and the collection time frame (iterations or sprints) ensure that the metric is generated and collected multiple times throughout the project. Repetition makes it easy for the process to become routine. Because of the short time horizon and the use of measures that can be derived at a team level, the data is useful to the team as they plan and monitor their work. Useful equals metrics that get collected, in my book. Unfortunately, because relative measures (measures based on perception) are used to size requirements, these metrics tend to be less useful for organizational comparison than more classic productivity measures. Productivity is also a relatively simple metric: it is simply the output of a project (the numerator) divided by the input(s) required to produce the output (the denominator).

The denominator of the productivity equation uses units that are more esoteric than calendar time, such as hours of effort or FTE (full-time equivalent) months, and that relate to the entire project. The units of measure for the numerator range from the venerable line of code to functional units such as function points. Because productivity is generally collected and used at an overall project level, it is very useful for parametric estimation or comparing projects, but far less effective than velocity for planning day-to-day activities. It should be noted that some organizations collect many separate units to create a lower-level view of productivity. I would suggest this can be done, although it will require a substantial amount of effort to implement and maintain.

So if velocity and productivity are both useful and related, which one should we use? The first place to start is to decide what question you are trying to answer. Once the problem you are trying to solve is identified, the unit of measure and the collection time horizon both become manageable decisions. The question of whether we have to choose one over the other is, I would suggest, a false choice. I propose that if we focus on selecting the proper numerator we can have measures that are useful at both the project and organization level. One solution is to substitute Quick and Early Function Points (QEFP, a rules-based functional metric) for the typical story points (a relative measure). Because QEFP is rules based, it can be applied at a granular level and then aggregated for reporting at different levels. By understanding the relationship between the two measures, we can devise a solution that lets us have our cake and eat it too.
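As a sketch of that have-our-cake-and-eat-it-too idea, assuming a rules-based functional size (such as QEFP) has been recorded for each backlog item; the items, teams, and effort figure below are entirely hypothetical:

from collections import defaultdict

# Hypothetical backlog items sized with a rules-based functional measure,
# tagged with the team and sprint that delivered them.
items = [
    {"team": "A", "sprint": 1, "size": 8},
    {"team": "A", "sprint": 1, "size": 5},
    {"team": "A", "sprint": 2, "size": 13},
    {"team": "B", "sprint": 1, "size": 6},
]

# Team-level view: velocity per team per sprint.
velocity = defaultdict(int)
for item in items:
    velocity[(item["team"], item["sprint"])] += item["size"]

# Organization-level view: the same numerator divided by total effort (person-months).
total_effort_person_months = 4  # hypothetical
productivity = sum(item["size"] for item in items) / total_effort_person_months

print(dict(velocity))  # {('A', 1): 13, ('A', 2): 13, ('B', 1): 6}
print(productivity)    # 8.0 units per person-month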

How fast are you getting to where you’re going?

What is the difference between productivity and velocity? Productivity is the rate of production using a set of inputs for a defined period of time. In a typical IT organization, productivity gets simplified to the amount of output generated per unit of input; function points per person-month is a typical expression of productivity. For an Agile team, productivity could very easily be expressed as the amount of output delivered per time box, and average productivity would be equivalent to the team’s capacity to deliver output. Velocity, on the other hand, is an Agile measure of how much work a team can do during a given iteration, typically calculated as the average number of story points the team completes per sprint. Conceptually the two are very similar; the most significant differences relate to how effort is accounted for and how size is defined.

The conventional calculation for IT productivity is:


Productivity = Size of Completed Work Delivered / Effort Expended

Function points, use case points, story points or lines of code are typical size measures. Work in progress (incomplete units of work) and defective units generally do not count as “delivered.” Effort expended is the total effort for the time box being measured.

The typical calculation for velocity for a specific sprint is:

Velocity = Story Points Completed in the Sprint

Note, as a general rule, both metrics are an average.  One observation of performance may or may not be representative.

In both cases the denominator represents the team’s effort for a specific sprint; however, for velocity the unit of measure is the team itself rather than hours or months. Average velocity of a team assumes that the team’s size and composition are stable. This tends to be a stumbling block in many organizations that have not recognized the value of stable teams.
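The difference in denominators can be seen in a small sketch that computes both metrics from the same (hypothetical) sprint data:

# Hypothetical sprints: (units of work completed, person-months of effort expended).
sprints = [(24, 8), (30, 8), (21, 7)]

points = [p for p, _ in sprints]
effort = [e for _, e in sprints]

# Velocity: output per sprint, with the (stable) team as the implicit denominator.
average_velocity = sum(points) / len(sprints)    # 25.0 units per sprint

# Productivity: the same output divided by effort in person-months.
productivity = sum(points) / sum(effort)         # ~3.26 units per person-month

print(average_velocity, round(productivity, 2))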

The similarities between the two metrics can be summarized as:

  • Velocity and productivity measure the output a team delivers in a specific timeframe.
  • Both metrics can be used to reflect team capacity for stable teams.
  • Both measures only make sense when they reflect completed units of work.

The differences in the two metrics are more a reflection of the units of measure being used. Productivity generally uses measures that allow the data to be consolidated for organizational reporting, while velocity uses size measures, such as story points, that are team specific. A second difference is convention. Productivity is generally stated as the number of units of work per unit of effort (e.g., function points per person-month), while velocity is stated as an average rate (average story points per sprint). While there are differences, they are more a representation of the units of measure being used than of the ideas the metrics represent.
