Zoom!

Audio Version:  Software Process and Measurement Cast 119.

Definition:

The simple definition of velocity is the amount of work completed in a period of time (typically a sprint). The definition is related to productivity (the relationship between the amount of work delivered and the effort required to deliver it) and to delivery rate (the speed at which work is completed). The inclusion of a time box (the sprint) creates a fixed duration, which makes velocity more of a productivity metric than a speed metric (how much work a specific team can complete in a specific timescale). Therefore, to truly measure velocity you need to estimate the units of work completed, have a definition of complete and have a time box.

The definition of done in Agile is typically functional code; however, I think the definition can be stretched to reflect whatever terminal deliverable the sprint team has committed to create, based on the definition of done the team has established (for example, requirements for a sprint team working on requirements, or completed test cases in a test sprint).

Many Agile projects use story points as a metaphor for the size of the functional code; other functional size measures can be used just as easily. Examples in this paper will use story points as the unit of measure. Effort and duration, however, are not size. Effort is an input that is consumed while transforming ideas into functional code, and the amount of effort required for the transformation is a reflection of size, complexity and other factors. Duration, like effort, is consumed by a sprint rather than created, and therefore does not measure what is delivered.

Formula

To calculate velocity, simply add up the size estimates of the features (user stories, requirements, backlog items, etc.) successfully delivered in an iteration.  The use of the size estimates allows the team to distinguish between items of differing levels of granularity.  Successfully delivered should equate to the definition of done.

Velocity = Story Points Completed Per Sprint

And:

Average velocity = Average Number of Story Points Per Sprint

The formula becomes more complex if staffing varies between sprints (and potentially less valuable as a predictive measure).  In order to account for variable staffing the velocity formula would have to be modified as follows:

Velocity per Person = Sum (Size of Completed Features in a Sprint / Number of People) / Number of Sprints or Observations

To be really precise (though not necessarily more accurate) we would also have to understand the variability of the data, because variability defines the level of confidence we can place in the result. Variability generated by differences in team member capabilities is one of the reasons that predictability is enhanced by team stability. As you can see, the more complex the environmental scenario becomes, the less simple the math needed to describe it.
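To make the arithmetic concrete, here is a minimal sketch in Python. The sprint data, field names and team sizes are purely hypothetical, chosen only to illustrate the formulas above; the standard deviation is included to show one way the variability just discussed could be surfaced alongside the averages.

```python
# Minimal sketch of the velocity calculations described above.
# All numbers are hypothetical and for illustration only.

from statistics import mean, stdev

# Completed story points and head count for each sprint (hypothetical data).
sprints = [
    {"points_completed": 21, "team_size": 6},
    {"points_completed": 18, "team_size": 6},
    {"points_completed": 25, "team_size": 7},
    {"points_completed": 20, "team_size": 5},
]

# Velocity = story points completed per sprint.
velocities = [s["points_completed"] for s in sprints]

# Average velocity = average number of story points per sprint.
average_velocity = mean(velocities)

# Velocity per person = sum(points completed / number of people) / number of sprints,
# useful when staffing varies between sprints.
velocity_per_person = mean(s["points_completed"] / s["team_size"] for s in sprints)

# Standard deviation hints at the variability that limits confidence in forecasts.
velocity_spread = stdev(velocities)

print(f"Average velocity: {average_velocity:.1f} points per sprint")
print(f"Velocity per person: {velocity_per_person:.2f} points per person per sprint")
print(f"Sprint-to-sprint variability (std dev): {velocity_spread:.1f} points")
```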

Uses:

Velocity is a tool for project planning and reporting. In planning it is used to predict how much work will be completed in a sprint; in reporting it communicates what has been done.

When used for planning and estimation the team’s velocity is used along with a prioritized set of granular features (e.g., user stories, backlog items, requirements, etc.) that have been sized or estimated.  The team uses these factors to select what can be done in the upcoming sprint. When the sprint is complete the results are used to update velocity for the next sprint. This is a top down estimation process using historical data.

Over a number of sprints velocity can be used both as a macro planning tool (when will the project be done) and a reporting tool (we planned at this velocity and are delivering at this velocity).

Velocity can be used in all methodologies and, because it is team-specific, it is agnostic in terms of units of size.

Issues

As with all metrics, velocity has its share of issues.

The first is that an expectation of team stability is inherent in the metric. Velocity is affected by team size and composition, and without collecting additional attributes and correlating those attributes to performance, the effect of change is not predictable (except by gut feel or Ouija board). Always keep notes on team size and capability so that you can understand your data over time.

Similarly, team dynamics change over time, sometimes radically, and radical changes in team dynamics will affect velocity. Shocks to any system of work are apt to create the same issue. Measurement personnel, Scrum masters and team leaders need to be aware of people's personalities and how they change over time.

The first-time application of velocity requires either historical data from similar teams and projects or an estimate. In a perfect world a few sprints would be executed and data gathered before expectations are set; however, clients generally want an idea of whether a project will be completed, when it will be completed and which functions will be delivered along the way. Estimates of velocity based on the team's knowledge of the past or other crowd-sourcing techniques are relatively safe starting points, assuming continuous recalibration.

The final issue is the requirement for a good definition of done. Done is a concept that has been driven home in the agile community. To quote Mayank Gupta (http://www.scrumalliance.org/articles/106-definition-of-done-a-reference), “An explicit and concrete definition of done may seem small but it can be the most critical checkpoint of an agile project.”  A concrete definition of done provides the basis for estimating velocity by reducing variability based on features that are in different states of completion.  Done also focuses the team by providing a goal to pursue. Make sure you have a crisp definition of done and recognize how that definition can change from sprint to sprint.

Related Metrics:

Productivity (size / effort)

Delivery Rate (duration / size)
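For completeness, a short sketch of these two related metrics in the same hypothetical style as the velocity example above; the effort hours and sprint duration used here are illustrative assumptions, not benchmarks.

```python
# Related metrics, using the definitions above (hypothetical values).
size_delivered = 21.0   # story points completed in the sprint
effort_hours = 480.0    # person-hours consumed in the sprint
duration_days = 10.0    # working days in the sprint time box

productivity = size_delivered / effort_hours     # size / effort
delivery_rate = duration_days / size_delivered   # duration / size

print(f"Productivity: {productivity:.3f} story points per person-hour")
print(f"Delivery rate: {delivery_rate:.2f} days per story point")
```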

Criticisms:

The first criticism of velocity is that the metric is not comparable between teams and by inference is not useful as a benchmark. Velocity was conceived as a tool for Scrum masters and team leads to manage and plan individual sprints. There is no overarching set of rules to enforce standardization, so one team's velocity is apt to reflect something different than the next's. The criticism is correct but perhaps off the mark. As a team-level tool velocity works because it is very easy to use and can be applied consistently; adding the complexity of standards and rules to make it an organizational metric would by definition reduce that simplicity and therefore its usefulness at the team level.

A second criticism is that estimates and budgets are typically set early in a project's life, while team-level velocity may well be unknown until later. The dichotomy between estimating and planning (or budgeting and estimating, for that matter) is often overlooked. Estimates developed early in a project, or for projects with multiple teams, require different techniques. In large projects, applying team-level velocities requires techniques more akin to portfolio management, which adds significant overhead. I would suggest that velocity is more valuable as a team planning tool than as a budgeting or estimation tool at a macro level.

A final criticism is that backlog items may not be defined at a consistent level of granularity, so velocity may deliver inconsistent results. I tend to dismiss this criticism because it is true for any mechanism that relies on relative sizing. Team consistency will help reduce the variability in sizing; even so, all teams should strive to break backlog items into stories that are as atomic as possible.

 

Space between has to be big enough and no bigger.

Cadence represents a predictable rhythm. The predictability of cadence is crucial for Agile teams to build trust. Trust is built between the team and the organization, and amongst the team members themselves, by meeting commitments over and over and over. The power of cadence is important to the team's well-being, therefore the choice of cadence is often debated in new teams. Many young teams make the mistake of choosing a slower (longer) cadence for many reasons. The most often cited reasons I have found are:

  1. The team has not adopted or adapted their development process to the concept of delivering working functionality in a single sprint. When a team leverages a waterfall or remnants of a waterfall method, work passes from phase to phase at sprint boundaries. For example, it passes from coding to testing. Longer time boxes feel appropriate to the team so they can get analysis, design, coding, testing and implementation done before the next sprint. The problem is that they are trying to “agilefy” a non-Agile process, which rarely works well.
  2. New Agile teams tend to lack confidence in their capabilities. Capabilities that teams need to sort out include both learning the techniques of Agile and understanding the abilities of the team members. Teams convince themselves that a longer sprint will provide a bit of padding that will accommodate the learning process. The problem with adding padding is twofold: first, work tends to fill the available time (Parkinson’s Law), and second, lengthening the sprint delays retrospectives. Retrospectives provide the platform a team needs to identify issues and make the changes that lead to improved capabilities.
  3. Large stories that can’t be completed in a single sprint are often cited as a reason to adopt longer sprints and a slower cadence. This is generally a reflection of improper backlog grooming. More mature Agile teams typically adopt a rule of thumb to help guide the breakdown of stories. Examples include maximum size limits (e.g., 8 story points, 7 quick function points) or duration limits (a story must be able to be completed in 3 days).
  4. Planning sessions take too long, eating into the sprint’s development time. As with large stories, overly long planning sessions are typically a reflection of either poor backlog grooming or trying to plan and commit to more than can be done within the sprint time box. Teams often change the length of a sprint rather than doing better grooming or taking less work. Often, even when the sprint duration is expanded, the problem of overly long planning sessions returns as more stories are taken, and it worsens as the team gets bored with planning.

Teams often think that they can solve process problems by lengthening the duration of their sprints, which slows their cadence. Typically a better solution is to make sure they are practicing Agile techniques rather than trying to “agilefy” a waterfall method, or to do a better job grooming stories. A faster cadence is generally better, if for no other reason than the team will get to review its approach sooner by doing retrospectives!

In Agile, cadence is the number of days or weeks in a sprint or release. Stated another way, it is the length of the team’s development cycle. In today’s business environment a plurality of organizations use a two-week sprint cadence. The cadence that a project or organization selects is based on a number of factors that include criticality, risk and the type of project.

A ‘critical’ IT project is one that is crucial, decisive or vital. Projects, or any kind of work, that can be defined as critical need to be given every chance to succeed. Feedback is important for keeping critical projects pointed in the right direction, and projects that are highly important will benefit from gathering feedback early and often. The Agile cycle of planning, demonstrating progress and holding retrospectives is tailor-made to gather feedback and then act on it. A shorter cycle leads to a faster cadence and quicker feedback.

Similarly, projects with higher levels of risk will benefit from faster feedback so that the team and the organization can evaluate whether the risk is being mitigated or whether it is being converted into reality. Feedback reduces the potential for surprises; therefore a faster cadence is a good tool for reducing some forms of risk.

The type of project can have an impact on cadence. Projects that include hardware engineering or interfaces with heavyweight approval mechanisms will tend to have slower cadences. For example, a project I was recently asked about required two separate approval board reviews (one regulatory and the second security related). Both took approximately five working days. The length of time required for the reviews was not significantly impacted by the amount of work each group needed to approve. The team adopted a four-week cadence to minimize the potential for downtime while waiting for feedback and to reduce the rework risk of going forward without approval. Maintenance projects, on the other hand, can often leverage Kanban or Scrumban in more of a continuous flow approach (no time box).

Development cadence is not synonymous with release cadence. In many Agile techniques, the sprint cadence and the release cadence do not have to be the same. The Scaled Agile Framework Enterprise (SAFe) makes the point that teams should develop on a cadence but release on demand. Many teams use a fast development cadence only to release in large chunks (often called releases). How often completed work is released is either a reflection of business need, an artifact of waterfall thinking or, in some rare cases, a constraint imposed by the organization’s operational environment.

Most projects will benefit from faster feedback. Shorter cycles, i.e. a faster cadence, are an important tool for generating feedback and reducing risk. A faster cadence is almost always the right answer, unless you really don’t want to know what is happening while you can still react.

Listen to the SPaMCAST 312 now!

SPaMCAST 312 features our interview with Alex Neginsky. Alex is a real leader and practitioner in a real company that has really applied Agile. Alex shares pragmatic advice about how to practice Agile in the real world!

Alex’s bio:

Alex Neginsky began his career in the software industry at the age of 16 as a Software Engineer for Ultimate Software. He earned his Bachelor’s degree in Computer Science at Florida Atlantic University in 2006. By age 27, Alex obtained his first software patent.

Alex has been at MTech, a division of Newmarket International, since 2011. As the Director of Development he brings 15 years of experience, technical skills, and management capabilities. Alex manages highly skilled software professionals across several teams stationed all over Eastern Europe and the United States. He serves as the liaison between MTech Development and the industry. During his tenure with the MTech division of Newmarket, Alex has been pivotal in the adoption of the complete software development lifecycle and has spearheaded the adoption of leading Agile Development Methodologies such as Scrum and Kanban. This has yielded higher velocity and better efficiencies throughout the organization.

Contact Alex at aneginsky@newmarketinc.com

LinkedIn

If you have the right stuff and are interested in joining Newmarket then check out:

http://www.newmarketinc.com/careers/join-newmarket

Call to action!

What are the two books that have most influenced your career (business, technical or philosophical)? Send the titles to spamcastinfo@gmail.com. What will we do with this list? We have two ideas. First, we will compile a list and publish it on the blog. Second, we will use the list to drive “Re-read” Saturday. Re-read Saturday is an exciting new feature we will begin in November. More on this new feature next week. So feel free to choose your platform and send an email, leave a message on the blog, Facebook or just tweet the list (use hashtag #SPaMCAST)!

Next

SPaMCAST 313 features our essay on developing an initial backlog.  Developing an initial backlog is an important step to get projects going and moving in the right direction. If a project does not start well, it is hard for it to end well.  We will provide techniques to help you begin well!

Upcoming Events

DCG Webinars:

Agile Risk Management – It Is Still Important! October 24, 2014 11:30 EDT

Has the adoption of Agile techniques magically erased risk from software projects? Or, have we just changed how we recognize and manage risk?  Or, more frighteningly, by changing the project environment through adopting Agile techniques, have we tricked ourselves into thinking that risk has been abolished?

Upcoming Conferences:

I will be presenting at the North East Quality Council 60th Conference October 21st and 22nd in Springfield, MA.

More on all of these great events in the near future! I look forward to seeing all SPaMCAST readers and listeners that attend these great events!

The Software Process and Measurement Cast has a sponsor.

As many of you know, I do at least one webinar for the IT Metrics and Productivity Institute (ITMPI) every year. The ITMPI provides a great service to the IT profession. ITMPI’s mission is to pull together the expertise and educational efforts of the world’s leading IT thought leaders and to create a single online destination where IT practitioners and executives can meet all of their educational and professional development needs. The ITMPI offers a premium membership that gives members unlimited free access to 400 PDU-accredited webinar recordings and waives the PDU processing fees on all live and recorded webinars. The Software Process and Measurement Cast gets some support if you sign up here. All the revenue our sponsorship generates goes toward bandwidth, hosting and new cool equipment to create more and better content for you. Support the SPaMCAST and learn from the ITMPI.

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques, co-authored by Murali Chemuturi and myself, was published by J. Ross Publishing. We have received unsolicited reviews like the following: “This book will prove that software projects should not be a tedious process, neither for you or your team.” Support SPaMCAST by buying the book here.

Available in English and Chinese.

ROAM

 

Hand Drawn Chart Saturday

I once asked the question, “Has the adoption of Agile techniques magically erased risk from software projects?” The obvious answer is no; however, Agile has provided a framework and tools to reduce the overall level of performance variance. Even if we can reduce risk by damping the variance driven by complexity, the size of the work, process discipline and people, we still need to “ROAM” the remaining risks. ROAM is a model that helps teams classify and address the risks they have identified. Applying the model, a team reviews each risk and classifies it as:

  • Resolved, the risk has been answered and avoided or eliminated.
  • Owned, someone has accepted the responsibility for doing something about the risk.
  • Accepted, the risk has been understood and the team has agreed that nothing will be done about it.
  • Mitigated, something has been done so that the probability or potential impact is reduced.

When we consider any risk we need to recognize the two attributes: impact and probability.  Impact is what will happen if the risk becomes something tangible. Impacts are typically monetized or stated as the amount of effort needed to correct the problem if it occurs. The size of the impact can vary depending on when the risk occurs. For example, if we suddenly decide that the system architecture will not scale to the level required during sprint 19, the cost in rework would be higher than if that fact were discovered in sprint 2. Probability is the likelihood a risk will become an issue. In a similar manner to impact, probability varies over time.
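The following Python sketch combines the ROAM classifications with the impact and probability attributes discussed above into a simple risk log. The risks, probabilities and impact values are hypothetical, and the use of probability times impact as an exposure score is an illustrative convention layered on top of ROAM, not part of the model itself.

```python
# Minimal sketch of a ROAM-style risk log (hypothetical risks and values).
# Exposure = probability * impact is a common heuristic used here only for
# illustration; it is not part of the ROAM model itself.

RESOLVED, OWNED, ACCEPTED, MITIGATED = "Resolved", "Owned", "Accepted", "Mitigated"

risks = [
    # (description, ROAM status, probability 0-1, impact in rework hours)
    ("Architecture may not scale to required load", OWNED,     0.3, 400),
    ("Key tester unavailable during sprint 5",      MITIGATED, 0.2,  80),
    ("Third-party API contract still unsigned",     ACCEPTED,  0.5,  40),
    ("Build server capacity shortfall",             RESOLVED,  0.0,   0),
]

for description, status, probability, impact_hours in risks:
    exposure = probability * impact_hours  # expected rework effort, in hours
    print(f"{status:>9}: {description} (exposure ~{exposure:.0f} hours)")
```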

We defined risk as “any uncertain event that can have an impact on the success of a project.” Does using Agile change our need to recognize and mitigate risk?  No, but instead of a classic risk management plan and a risk log, a more Agile approach to risk management might be generating additional user stories. While Agile techniques reduce some forms of risk we still need to be vigilant. Adding risks to the project or program backlog will help ensure there is less chance of variability and surprises.

People are chaotic.

Have you ever heard the saying, “software development would be easy if it weren’t for the people”? People are one of the factors that cause variability in the performance of projects and releases (other factors include complexity, the size of the work and process discipline). Most Agile frameworks build in three mechanisms that accept the idea that people can be chaotic by nature and that dampen the resulting variability.

  1. Team size, constitution and consistency are attributes most Agile frameworks use to enhance productivity and effectiveness; they also reduce the natural variability generated when people work together.
    1. The common Agile team size of 7 ± 2 is small enough that team members can establish and nurture personal relationships to ensure effective communication.
    2. Agile teams are typically cross-functional and include a Scrum master/coach and the product owner. The composition of the team fosters self-reliance and the ability to self-organize, again reducing variability.
    3. Long-lived teams tend to establish strong bonds that foster good communication and behaviors such as swarming. Swarming is a behavior in which team members rally to a task that is in trouble so that the team as a whole can meet its goal, which reduces overall variability in performance.
  2. Peer reviews of all types have been a standard tool for improving the quality and consistency of work products for decades. Peer reviews are a mechanism to remove defects from code or other work products before they are integrated into larger work products. The problem is that having someone else look at something you created and criticize it is grating. Extreme Programming took classic peer reviews a step further and put two people together at one keyboard, one typing and the other providing running commentary (a colloquial description of pair programming). Demonstrations are a variant of peer reviews. Removing defects earlier in the development process through observation and discussion reduces variability and therefore the risk of not delivering value.
  3. Daily stand-ups and other rituals are the outward markers of Agile techniques. Iteration/sprint planning keeps teams focused on what they need to do in the short-term future and then re-plans when that time frame is over. Daily stand-ups provide a platform for the team to sync up on a daily basis, reducing the variance that can creep in when plans diverge. Demonstrations show project stakeholders how the team is solving their business problems and solicit feedback to keep the team on track. All of these rituals reduce the potential variability that can be introduced by people acting alone rather than as a team with a common goal.

In information technology projects of all types, people transform ideas and concepts into business value. In software development and maintenance the tools and techniques might vary but, at their core, software-centric projects are social enterprises. Getting any group of people together to achieve a goal is a somewhat chaotic process. Agile techniques and frameworks have been structured to help individuals increase alignment and act together as a team to deliver business value.

There are many factors that cause variability in the performance of projects and releases, including complexity, the size of the work, people and process discipline. Consistency and predictability are difficult when the process is being made up on the spot. Agile has come to reflect (at least in practice) a wide range of approaches, from simply delivering faster to more structured frameworks such as Scrum, Extreme Programming and the Scaled Agile Framework Enterprise (SAFe). Lack of at least some structure nearly always increases the variability in delivery and therefore the risk to the organization.

I recently received the following note from a reader (and listener to the podcast) who will remain nameless (all names redacted at the request of the reader).

“All of the development is outsourced to a company with many off-shore and a few on-site resources.

The development agency has, somehow, sold the business on the idea that because they are “Agile”, their ability to dynamically/quickly react and implement requires a lack of formal “accounting.”  The CFO here wants to move away from vague generic invoices because he feels (rightly so) that the agency interprets the relationship as having carte blanche to work on anything and everything ever scratched out on a cocktail napkin without proper project charters, buy-in, and SOW.”

This observation reflects the risk an ill-defined process poses to the organization in terms of the value delivered to the business, the financial exposure and the risk to customer satisfaction. Repeatability and consistency of process are not dirty words.

Scrum and other Agile frameworks are lightweight empirical models. At their most basic level they can be summarized as:

  1. Agree upon what you are going to do (build a backlog),
  2. Plan work directly ahead (sprint/iteration planning),
  3. Build a little bit while interacting with the customer (short time box development),
  4. Review what has been done with the stakeholders (demonstration),
  5. Make corrections to the process (retrospective),
  6. Repeat as needed until the goals of the work are met.

Deming would have recognized the embedded plan-do-check-act cycle. There is nothing ad-hoc about the framework even though it is not overly prescriptive.

I recently toured a research facility for a major snack manufacturer. The people in the labs were busy dreaming up the next big snack food. Personnel were involved in both “pure” and applied research, both highly creative endeavors. When I asked about the process they were using, what was described was something similar to Scrum: creativity being pursued within a framework to reduce risk.

Ad-hoc software development and maintenance was never in style. In today’s business environment, where software is integral to the delivery of value, just winging the development process adds risk to an already somewhat risky proposition.
