Listen Now!
Subscribe: Apple Podcast
Check out the podcast on Google Play Music
Listen on Spotify!

SPaMCAST 558 features our essay Story Points – Leave Them, Don’t Love Them.  Story points are not evil, and they may be useful in some circumstances. But like most tools, at some point they lose their focus. They have outlived their usefulness; therefore, I will leave them whenever possible.

This week, Jeremy Berriault brings his QA Corner to the podcast.  We talked about focus: how much focus is enough and how much is too much? Mr. Berriault has an opinion and stories to back it up.  (more…)

Story Points: No Parking

Story points are a planning tool that has proponents and opponents. Like most tools, story points are not prima facie evil; it is only through misuse that they become problematic.  Problems tend to occur because leaders, managers, and team members have a need (or at least a very strong desire) to know when a piece of work will be done, what they are going to do next, whether they are meeting expectations, and in many cases what something will cost. Story points are a relative measure, a proxy of a proxy, which makes answering any of these questions with precision very difficult. The best anyone can hope for is that the answers story points provide are accurate enough and provide a coherent feedback loop for teams. This could be considered damning with faint praise; however, in the right circumstances story points are a useful tool for a TEAM. I am a proponent of using story points in three scenarios. (more…)

Story Points Are A Reflection!

Last week Anthony Mersino published a blog entry titled Story Points – Love Them or Leave Them? He ended his blog entry with the question, “What do you think?” I know it will shock some of you, but I have an opinion, one born of observation, research, and hands-on experience. Story points are specific to every team that uses them. I have used, and still do use, story points in some very specific scenarios. To answer Anthony’s question: over the years I have migrated into the “leave them” camp, with a few caveats (we will tackle those later in the week). Story points have a litany of problems: a myriad of definitions, they are a poor tool for comparing performance, they are conflated with hours, and they institutionalize bias (see The Slow End Of Story Points for in-depth discussions of these points). In the last round of articles I wrote on story points, I did not address the basic conceit at the core of story points. Story points assume that teams need to size pieces of work in order to know whether the work is too big or too risky to accomplish in a specific period of time. That assumption is wrong for any team that has worked together for more than a sprint/iteration or two and that breaks its work down, plans and commits to that work, and then uses the results from that sprint or iteration to improve how they work. These steps are basic activities for an Agile team. The inspect-and-adapt feedback loop provides an experiential platform that negates the need for a debate over Cohn’s Numbers or an entry on the Fibonacci scale. That time is better spent discussing what work the product owner is requesting and how that work is going to be delivered. (more…)


SPaMCAST 525 continues our conversation about story points.  Many teams find that story points are only a partially useful tool to facilitate the flow of work within a team. Today we will highlight a behavioral fix and talk RATS.

We will also have a visit from Jeremy Berriault with a discussion from the QA Corner.  Jeremy and I talked about how the concept of a minimum viable product (MVP) impacts testing.  Check out Mr. Berriault’s blog at

Re-Read Saturday News
We are re-reading Bad Blood: Secrets and Lies in a Silicon Valley Startup by John Carreyrou (published by Alfred A. Knopf, 2018 – buy a copy and read along).  Chapter 10 focuses on what, when Hollywood makes a movie of Bad Blood, will be a titanic battle between the indomitable Elizabeth and a career military officer, but what is really a struggle between right and wrong.

Week 8 – Who is LTC Shoemaker

Previous Entries:
Week 1 – Approach and Introduction    (more…)


Three possible alternatives:

IFPUG function points. If you must have a standards-based approach to sizing and comparison, IFPUG function points are the gold standard. IFPUG function points are an ISO standard and can be applied to all software types (they are technology agnostic). The drawbacks of using function points include the perceptions that there is a high level of overhead, that counting requires too much information too early in the process, and that only highly skilled wizards can count (or approximate) function points correctly. None of these perceptions is really true; however, in some circles, the tar and feathering has stuck. (more…)

Change Behavior To Change Value

Many teams find story points only a partially useful tool to facilitate the flow of work within a team. As noted, story points are not all unicorns and kittens; they can have issues. Can story points be fixed, or better yet, can they still be useful? On the whole, story points are inherently fine if they are used with discretion and structure to meet a team’s needs.  The words discretion and structure are code for “change.” Reforming the use of story points to make them safe again doesn’t require changing how teams assess or determine story points, but rather how people in the value chain behave once they have a number (or two).  An upfront agreement for using story points makes story points “safe.” Four attributes are useful to guide any upfront agreement on the usage of story points. The RATS criteria are:

Range – Express story points and story point metrics as ranges.  Story points are often used to express the perception of the size or value of work. Using a range to communicate both inside and outside the team mitigates the risk of falling into precision bias.

Approximate – Agree that story points represent a team’s best guess using the knowledge available at a specific time.  Knowledge will evolve as the team develops specific experience, does research, and/or the environment changes. Story points are not precise.

Team – Gather a team.  Story points are a reflection of a collaboration between multiple points of view. Because they are the product of a group collaboration, they cannot be used to assess or measure an individual.

Separate – Separate the use of story points for facilitating the flow of work within the team from answering client and management questions about when functionality will be delivered and how much it will cost.

Regardless of what a team uses story points to assess or to approximate, the output of the process is a synthesis of thinking from a group of people.  Story points represent the thought process of the team that created them, influenced by the environment they operate within. Every individual on the team needs to understand the central thread of logic followed to generate story points; however, even on a mature team, individuals will have differences, which further emphasizes the need to establish a RATS-based agreement on how story points will be used to ensure healthy behavior.
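The “Range” attribute of RATS can be made concrete with a small sketch. This is a hypothetical illustration, not a prescribed tool: the sprint velocities below are invented, and the idea is simply that a team reports a band of recent performance rather than a single number, which helps avoid precision bias.

```python
# Hypothetical sketch: report velocity as a range (the "R" in RATS),
# never as a single precise number. All velocity figures are invented.

def velocity_range(recent_velocities):
    """Return a (low, high) band from a team's recent sprint velocities."""
    if not recent_velocities:
        raise ValueError("need at least one completed sprint")
    return min(recent_velocities), max(recent_velocities)

last_five_sprints = [21, 26, 19, 24, 23]  # story points completed, per sprint
low, high = velocity_range(last_five_sprints)
print(f"Plan the next sprint for roughly {low}-{high} points.")
```

Communicating the band (19–26 in this made-up case) inside and outside the team keeps the conversation honest about the approximate nature of the measure.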

How did we get to this point?

Story points were originally developed as a metaphor to give a rough answer to the question of how much functionality could be delivered in a specific period of time.  The problem is that all good metaphors are eventually abused or, worse, people forget that the metaphor is a simplification and approximation of real life. Metaphors become reality.  Three basic behaviors of leaders and stakeholders in software development (broadly defined) have led the metaphor of story points to evolve into story points as measures, something they FAIL miserably at. (more…)

Story Points Are A Fence!

A recent discussion with a Scrum Master colleague reminded me that conversations are filled with metaphors.  Metaphors are used to simplify and represent abstract concepts so they can highlight and/or offer a comparison.  According to James Geary in his TED talk from July 15, 2010, we use, on average, six metaphors a minute in conversation.  We use metaphors because they are useful. Story points are a metaphor. Story points represent a piece of work. In software, a story point is an abstraction used to talk about a piece of functional code that is not perfectly understood.  Some pieces of code are harder, bigger, messier, take longer to complete, or might not be as well understood… the list can go on. That is why story points come in different sizes. Historically, two scales have been used, both based on the Fibonacci sequence. Every person and every team has a different perspective on what a story point means because it is a metaphor. However, the understanding generated by the abstraction is enough to allow team members to talk about the functionality or to get a rough approximation of what the team can do in a sprint or iteration.  Inside the team, the metaphor allows a conversation. Unfortunately, all useful metaphors are used and extended until their marginal utility to facilitate a conversation is reduced to zero (otherwise known as the rule that all good metaphors will be used until they are kicked to death). Story points are no different. (more…)


Clients, stakeholders, and pointy-haired bosses really do care about how long a project will take, how much the project will cost, and, by extension, the effort required to deliver it. What clients, stakeholders, and bosses don’t care about is how much the team needs to think or the complexity of the stories or features, except as those attributes affect duration, cost, and effort.  The language they understand is months and dollars (or any other type of currency). Teams, however, need to speak in terms of complexity and code (programming languages). Story points are an attempt to create a common understanding.

When a team uses story points, t-shirt sizes, or other relative sizing techniques, they hash a myriad of factors together.  When a team decomposes a problem, it has to assess complexity, capability, and capacity in order to determine how long a story, feature, or task will take (and therefore cost).  The number of moving parts in this mental algebra makes the outcome variable.  That variability generates debates about how rational it is to estimate at this level, debates we will not tackle in this essay.  When the team translates their individual perceptions (which include complexity, capacity, and capability) into story points or another relative sizing technique, they are attempting to share with stakeholders an understanding of how long and at what price (with a pinch of variability).  For example, if a team using t-shirt sizing and two-week sprints indicates, based on past performance, that it can deliver 1 large and 2 medium stories, or 1 medium and 5 small stories, per sprint, it would be fairly easy to determine when the items on the backlog will be delivered and to make a fair approximation of the number of sprints (aka effort, which equates to cost).
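The t-shirt example above can be sketched in a few lines. The weights are an assumption made for illustration: assigning S=1, M=2, L=3 makes both observed sprint mixes (1 large + 2 medium, or 1 medium + 5 small) equal the same capacity of 7 units, so a backlog can be converted into a rough sprint count. None of these numbers comes from a real team.

```python
# Hypothetical sketch of the t-shirt example. Weights and capacity are
# invented: S=1, M=2, L=3 makes both observed sprint mixes equal 7 units
# (1L + 2M = 3 + 4 = 7, and 1M + 5S = 2 + 5 = 7).
import math

WEIGHTS = {"S": 1, "M": 2, "L": 3}   # assumed relative sizes
CAPACITY = 7                          # units delivered per two-week sprint

def sprints_needed(backlog):
    """backlog: a list of t-shirt sizes, e.g. ['L', 'M', 'S', ...]."""
    total = sum(WEIGHTS[size] for size in backlog)
    return math.ceil(total / CAPACITY)

backlog = ["L", "L", "M", "M", "M", "S", "S", "S", "S"]  # 16 units total
print(sprints_needed(backlog))  # → 3 sprints (16 / 7, rounded up)
```

The point is not the arithmetic but that past performance in relative sizes is enough to answer the “when” and, by extension, the “how much” questions approximately.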

Clients, stakeholders and bosses are not interested in the t-shirt sizes or the number of story points, but they do care about whether a feature will take a long time to build or cost a lot. The process of sizing helps technical teams translate how hard a story or a project is into words that clients, stakeholders and bosses can understand intimately.

Trail length is an estimate of size, while the time needed to hike it is another story!

More than occasionally I am asked, “Why should we size as part of estimation?”  In many cases the actual question is, “Why can’t we just estimate hours?”  It is a good idea to size for many reasons, such as generating an estimate in a quantitative, repeatable process, but in the long run, sizing is all about the conversation it generates.

It is well established that size provides a major contribution to the cost of an engineering project.  In houses, bridges, planes, trains and automobiles the use of size as part of estimating cost and effort is a mature behavior. The common belief is that size can and does play a similar role in software. Estimation based on size (also known as parametric estimation) can be expressed as a function of size, complexity and capabilities.

E = f(size, complexity, capabilities)

In a parametric estimate these three factors are used to develop a set of equations that include a productivity rate, which is used to translate size into effort.

Size is a measure of the functionality that will be delivered by the project.  The bar for any project-level size measure is whether it can be known early in the project, whether it is predictive and whether the team can apply the metric consistently.  A popular physical measure is lines of code, function points are the most popular functional measure and story points are the most common relative measure of size.

Complexity refers to the technical complexity of the work being done and includes numerous properties of a project (examples of complexity could include code structure, math and logic structure).  Business problems with increased complexity generally require increased levels of effort to satisfy them.

Capabilities include the dimensions of skills, experience, processes, team structure and tools (estimation tools include a much broader list).  Variation in each capability influences the level of effort the project will require.
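The equation and the three factors above can be sketched as a tiny function. This is a minimal illustration, not a calibrated model: the productivity rate and adjustment factors below are invented, and real parametric models are built from historical data.

```python
# Minimal sketch of a parametric estimate, E = f(size, complexity, capabilities).
# The nominal productivity rate and the adjustment factors are assumptions
# made up for illustration, not benchmarks.

def parametric_estimate(size, complexity_factor, capability_factor, base_productivity):
    """
    size: functional size (e.g., function points)
    complexity_factor: > 1.0 inflates effort for harder problems
    capability_factor: > 1.0 inflates effort for weaker capabilities
    base_productivity: effort hours per unit of size for a nominal team
    """
    return size * base_productivity * complexity_factor * capability_factor

# 300 function points, a moderately complex problem, a slightly
# below-nominal team, and an assumed rate of 8 hours per function point.
effort_hours = parametric_estimate(300, 1.2, 1.1, 8.0)
print(effort_hours)  # roughly 3168 hours
```

The productivity rate is what a calibrated parametric model supplies; here it is simply a placeholder to show how size is translated into effort.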

Parametric estimation is a top-down approach to generating a project estimate.  Planning exercises are then used to convert the effort estimate into a schedule and duration.  Planning is generally a bottom-up process driven by the identification of tasks, order of execution, and specific staffing assignments.  Bottom-up planning can be fairly accurate and precise over short time horizons. Top-down estimation is generally easier than bottom-up estimation early in a project, while task-based planning makes sense in tactical, short-term scenarios. Examples of estimation and planning in an Agile project include iteration/sprint planning, which includes planning poker (sizing) and task planning (a bottom-up plan).  A detailed schedule built from tasks in a waterfall project would be an example of a bottom-up plan.  As most of us know, plans become less accurate as we push them further into the future, even if they are done to the same level of precision. Size-based estimation provides a mechanism to predict the rough course of the project before release planning can be performed and, later, a tool to support and triangulate release planning.

The act of building a logical case for a function point count or participating in a planning poker session helps those that are doing an estimate to collect, organize and investigate the information that is known about a need or requirement.  As the data is collected, questions can be asked and conversations had which enrich understanding and knowledge.  The process of developing the understanding needed to estimate size provides a wide range of benefits ranging from simply a better understanding of requirements to a crisper understanding of risks.

A second reason for estimating size as a separate step in the process is that separating it out allows a discussion of velocity or productivity as a separate entity.  By fixing one part of the size, the complexity and capability equation, we gain greater focus on the other parts like team capabilities, processes, risks or changes that will affect velocity.  Greater focus leads to greater understanding, which leads to a better estimate.

A third reason for estimating the size of the software project as part of the overall estimation process is that by isolating the size of the work, the estimate can more easily be re-scaled when capabilities change or knowledge about the project increases. In most projects that exist for more than a few months, understanding of the business problem, how to solve that problem, and the capabilities of the team increase, while at the same time the perceived complexity[1] of the solution decreases. If a team has jumped from requirements or stories directly to an effort estimate, it will require more effort to re-estimate the remaining work because they will not be able to reuse the previous estimate; the original rationale will have changed. When you have captured size, re-estimation becomes a re-scaling exercise. Re-scaling is much closer to a math exercise (productivity x size), which saves time and energy.  At best, re-estimation is more time-consuming and yields the same value.  The ability to re-scale will aid in sprint planning and in release planning. Why waste time when we should be focusing on delivering value?
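The re-scaling point can be shown in a few lines. The sizes and productivity rates below are invented: the only claim is that, once size is captured separately, updating the estimate when observed productivity changes is simple arithmetic rather than a full re-estimation.

```python
# Hedged sketch of re-scaling (productivity x size). All numbers are
# illustrative; a real rate would come from the team's own history.

def rescale(remaining_size, hours_per_unit):
    """Re-scale remaining effort from captured size and an observed rate."""
    return remaining_size * hours_per_unit

original = rescale(200, 10.0)  # 200 size units at 10 h/unit -> 2000 hours
# Mid-project, the team has learned the domain and observed productivity
# has improved; the size is unchanged, only the rate is updated.
revised = rescale(200, 8.0)    # same size, new rate -> 1600 hours
print(original, revised)       # prints "2000.0 1600.0"
```

Because the size was not thrown away when the first estimate was produced, the revision took one multiplication instead of a second round of estimation.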

Finally, why size?  In the words of David Herron, author and Vice President of Solution Services at the David Consulting Group, “Sizing is all about the conversation that it generates.”  Conversations create a crisper, deeper understanding of the requirements and the steps needed to satisfy the business need.  Determining the size of the project is a tool with which to focus a discussion on whether requirements are understood.  If a requirement can’t be sized, you don’t know enough to actually fulfill it.  Planning poker is an example of a sizing conversation. I am always amazed at the richness of the information that is exposed during a group planning poker session (please remember to take notes).  The conversation provides many of the nuances a story or requirement just can’t provide.

Estimates, by definition, are wrong.  The question is just how wrong.   The search for knowledge generated by the conversations needed to size a project provides the best platform for starting a project well.  That same knowledge provides the additional inputs needed to complete the size, complexity, capability equation in order to yield a project estimate.  If you are asked, “Why size?” it might be tempting to fire off the answer “Why not?” but in the end, I think you will change more minds by suggesting that it is all about the conversation after you have made the more quantitative arguments.

Check out an audio version of this essay as part of SPaMCAST 201

[1] Perceived complexity is more important than actual complexity because perception drives behavior more directly than actual complexity does.