Listen Now
Subscribe on iTunes
Check out the podcast on Google Play Music

The Software Process and Measurement Cast 438 features our essay on leveraging sizing in testing. Size can be a useful tool for budgeting and planning both at the portfolio level and the team level.

Gene Hughson brings his Form Follows Function Blog to the cast this week to discuss his recent blog entry titled, Organizations as Systems and Innovation. One of the highlights of the conversation is whether emergence is a primary factor driving change in a complex system.

Our third column is from the Software Sensei, Kim Pries.  Kim discusses why blindly accepting canned solutions does not negate the need for active troubleshooting of problems in software development.

Re-Read Saturday News

This week, we tackle chapter 1 of Holacracy: The New Management System for a Rapidly Changing World by Brian J. Robertson published by Henry Holt and Company in 2015. Chapter 1 is titled, Evolving Organization.  Holacracy is an approach to address shortcomings that have appeared as organizations evolve. Holacracy is not a silver bullet, but rather provides a stable platform for identifying and addressing problems efficiently.

Visit the Software Process and Measurement Cast blog to participate in this and previous re-reads.

Next SPaMCAST

The next Software Process and Measurement Cast will feature our interview with Alex Yakyma.  Our discussion focused on the industry’s broken mindset that prevents it from being Lean and Agile.  A powerful and possibly controversial interview.

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques, co-authored by Murali Chemuturi and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: “This book will prove that software projects should not be a tedious process, for you or your team.” Support SPaMCAST by buying the book here. Available in English and Chinese.


Which size metric makes sense for a testing organization is influenced by how testing is organized, where testing is incorporated into the value delivery chain, and whether the work is being done for a fee.

The Mythical Man-Month

In the ninth essay of The Mythical Man-Month, Fred P. Brooks discusses the impact of building a system under a size constraint. In most corporate organizations, size is not generally a constraint programmers need to consider. While you don’t hear as much discussion of the physical size of code as in the past, I recently heard an IT professional lamenting the size of iPad applications. In this case, the size of an app affected how many apps could be crammed onto a device. Perhaps as devices become smaller, the old constraints of code size will reassert themselves. Similarly, size is still a constraint when dealing with embedded systems. The discussion of how and why we need to manage a size constraint is a reminder that all physical constraints must be managed.

Managing size is not as straightforward as building to a single executable size for the entire application. For example, fitting an app within a 400 KB footprint might be the overall goal, but each component or function considered could have its own size constraint. Interfaces often require more space to code than algorithms; therefore, the number of interfaces may need to be more constrained than the number of algorithms. When managing size, one has to account for all of the component parts and the whole at the same time. In this essay, Brooks used his experience with OS/360 to provide examples and discuss different components, such as core and peripherals. While most of us will never write one of the premier operating systems, the lessons are still useful.

Managing the size of a program, application, or any other constraint must include setting goals for each component and the ability to simulate performance against those goals. Planning alone, represented by setting the goal, is not enough. A feedback loop is required. Performance testing provides feedback to ensure that the work is meeting the goal and that the goal was set correctly in the first place. I have been involved with projects that required rethinking when programmers found the constraints impossible to stay within. Ask me about the DOS COBOL and macro assembler monsters that once upon a time stalked my world due to a compiler constraint. Brooks drew three basic lessons from managing size (or any constraint) based on his experience:

  1. Set size goals/budgets for both the total application and the individual components (see the sketch after this list).
  2. Define exactly what the application and each module must do. Functionality and size are typically related. The more functionality, the larger the component or application. Understand the functionality required holistically, not just from one single point of view.
  3. Identify a central point to foster continuous communication so that teams all work toward the overall goal. Brooks reflects that on large projects, teams and sub-teams often pursue goals that sub-optimize the overall application due to competition and communication problems. This point highlights the need for a structure for scaling large Agile efforts to address common constraints.
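
To make the first lesson concrete, here is a minimal sketch of a size-budget feedback loop in Python. The component names and budget numbers (and the 400 KB total, borrowed from the example above) are illustrative assumptions, not a real build process.

    # Minimal sketch of a size-budget feedback loop (illustrative values only).
    # Component budgets should sum to the overall 400 KB goal.
    budgets_kb = {"ui": 150, "business_logic": 120, "interfaces": 80, "data_access": 50}
    measured_kb = {"ui": 140, "business_logic": 135, "interfaces": 95, "data_access": 45}

    # Feedback loop: compare the measured size of each component to its budget.
    for component, budget in budgets_kb.items():
        actual = measured_kb[component]
        status = "OK" if actual <= budget else f"OVER by {actual - budget} KB"
        print(f"{component:15s} budget={budget:4d} KB  actual={actual:4d} KB  {status}")

    print(f"{'TOTAL':15s} budget={sum(budgets_kb.values()):4d} KB  "
          f"actual={sum(measured_kb.values()):4d} KB")

Run against each build, a report like this is the feedback loop the essay calls for: the goal is set once, but conformance is checked continuously.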

Budgeting and control, while important, are not sufficient to alleviate any technical constraint. In software development (in its broadest sense), once you have a goal, you need to address that goal as a constraint. Addressing a constraint requires both invention and craftsmanship.

Craftsmanship in software development is built on deep knowledge of tools, languages and techniques. Standards, frameworks and architectures provide guidance, and team members support and mentor each other. An example of craftsmanship can be found in managing size. As decisions are made, craftsmanship often provides guidance so that progress can be sustained.  Brooks uses the example of adding new functions as individual items rather than bundles.  Individually controllable functions generally require a larger footprint than a bundle of the same functions. Developers’ understanding of the structure and packaging of features (craftsmanship) allows better management of a constraint.

Innovation requires radically rethinking how a constraint will be addressed and leaping past the typical approach. Brooks wrote that “innovation is often a result of strategic growth and bold cleverness” that rethinks the constraint from a different point of view. Changing your perspective is an innovation tool. Brooks identifies representation as a tool for reflection and introspection that can lead to innovation. Representation is a method of thinking about and modeling both the logic and the data rather than just the logic. For example, when considering a constraint, consider the representation of the data required by a function to identify a different way to accomplish the function.

In this essay, Brooks focused on the constraint of size. Most programmers will not need to address size as a constraint; however, every effort has some type of constraint. Constraints might include functional performance, transaction speeds, a delivery deadline or disk space. Constraints are a fact of life that need to be recognized and managed. Beyond simply controlling and budgeting, constraints can also provide the impetus to link the whole effort together. Goals and budgets provide a platform for an architect (or a similar role) to champion the big picture so that teams don’t sub-optimize the whole application to meet their own needs. Constraints also generate the energy needed to search for innovative solutions. Constraints, whether size or any other, don’t have to be the enemy in any effort.

Previous installments of the Re-read of The Mythical Man-Month

Introductions and The Tar Pit

The Mythical Man-Month (The Essay)

The Surgical Team

Aristocracy, Democracy and System Design

The Second-System Effect

Passing the Word

Why did the Tower of Babel fall?

Calling the Shot

Size matters


All jokes aside, size matters. Size matters because, at least intellectually, we all recognize that there is a relationship between the size of a product and the effort required to build it. We might argue over the degree of the relationship or whether other attributes are required to define the relationship, but the point is that size and effort are related. Size is important for estimating project effort, cost and duration. Size also provides us with a platform for topics as varied as scope management (defining scope creep and churn) and benchmarking. In a nutshell, size matters both as an input into planning and controlling development processes and as a denominator that enables comparison between projects.

Finding the specific measure of software size for your organization is part art and part science. The size measure you select must deliver the data needed to meet the measurement goal and fit within the corporate culture (culture includes both people and the methodologies the organization uses). A framework for evaluation would include the following categories:

  • Supports measurement goal
  • Industry recognized
  • Published methodology
  • Useable when needed
  • Accurate
  • Easy enough

Trail length is an estimate of size, while the time needed to hike it is another story!

More than occasionally I am asked, “Why should we size as part of estimation?”  In many cases the actual question is, “Why can’t we just estimate hours?”  It is a good idea to size for many reasons, such as generating an estimate in a quantitative, repeatable process, but in the long run, sizing is all about the conversation it generates.

It is well established that size provides a major contribution to the cost of an engineering project.  In houses, bridges, planes, trains and automobiles the use of size as part of estimating cost and effort is a mature behavior. The common belief is that size can and does play a similar role in software. Estimation based on size (also known as parametric estimation) can be expressed as a function of size, complexity and capabilities.

E = f(size, complexity, capabilities)

In a parametric estimate these three factors are used to develop a set of equations that include a productivity rate, which is used to translate size into effort.
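
As a sketch of how those three factors might combine, assume a simple linear model: a base productivity rate (hours per unit of size), adjusted up for complexity and down for team capability. The rate and adjustment factors below are invented for illustration; real parametric tools use equations calibrated against historical data.

    # Hypothetical parametric model: effort grows with size and complexity
    # and shrinks as team capability improves. All numbers are invented.
    def parametric_estimate(size_fp, hours_per_fp, complexity_factor, capability_factor):
        return size_fp * hours_per_fp * complexity_factor / capability_factor

    # 400 function points at a base rate of 8 hours per FP, above-average
    # complexity (1.2) and a slightly stronger-than-baseline team (1.1).
    effort_hours = parametric_estimate(400, 8.0, 1.2, 1.1)
    print(f"Estimated effort: {effort_hours:.0f} hours")  # about 3,491 hours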

Size is a measure of the functionality that will be delivered by the project.  The bar for any project-level size measure is whether it can be known early in the project, whether it is predictive, and whether the team can apply the metric consistently.  Lines of code is a popular physical measure, function points are the most popular functional measure, and story points are the most common relative measure of size.

Complexity refers to the technical complexity of the work being done and includes numerous properties of a project (examples of complexity could include code structure, math and logic structure).  Business problems with increased complexity generally require increased levels of effort to satisfy them.

Capabilities include the dimensions of skills, experience, processes, team structure and tools (estimation tools include a much broader list).  Variation in each capability influences the level of effort the project will require.

Parametric estimation is a top-down approach to generating a project estimate.  Planning exercises are then used to convert the effort estimate into a schedule and duration.  Planning is generally a bottom-up process driven by the identification of tasks, their order of execution and specific staffing assignments.  Bottom-up planning can be fairly accurate and precise over short time horizons. Top-down estimation is generally easier than bottom-up estimation early in a project, while task-based planning makes sense in tactical, short-term scenarios. Examples of estimation and planning in an Agile project include iteration/sprint planning, which combines planning poker (sizing) and task planning (a bottom-up plan).  A detailed schedule built from tasks in a waterfall project would be an example of a bottom-up plan.  As most of us know, plans become less accurate as we push them further into the future, even if they are done to the same level of precision. Size-based estimation provides a mechanism to predict the rough course of the project before release planning can be performed, and then again as a tool to support and triangulate release planning.

The act of building a logical case for a function point count or participating in a planning poker session helps those that are doing an estimate to collect, organize and investigate the information that is known about a need or requirement.  As the data is collected, questions can be asked and conversations had which enrich understanding and knowledge.  The process of developing the understanding needed to estimate size provides a wide range of benefits ranging from simply a better understanding of requirements to a crisper understanding of risks.

A second reason for estimating size as a separate step in the process is that separating it out allows a discussion of velocity or productivity as a separate entity.  By fixing one part of the size, complexity and capability equation, we gain greater focus on the other parts, like team capabilities, processes, risks or changes that will affect velocity.  Greater focus leads to greater understanding, which leads to a better estimate.

A third reason for estimating size of the software project as part of the overall estimation process is that by isolating the size of the work, the estimate can more easily be re-scaled when capabilities change or knowledge about the project increases. In most projects that exist for more than a few months, understanding of the business problem, how to solve it and the capabilities of the team increase, while at the same time the perceived complexity[1] of the solution decreases. If a team has jumped from requirements or stories directly to an effort estimate, re-estimating the remaining work will require more effort because the previous estimate cannot be reused; the original rationale will have changed. When you have captured size, re-estimation becomes a re-scaling exercise. Re-scaling is much closer to a math exercise (productivity x size), which saves time and energy.  At best, full re-estimation is more time consuming and yields the same value.  The ability to re-scale will aid in sprint planning and in release planning. Why waste time when we should be focusing on delivering value?
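
A quick sketch of the difference, with invented numbers: once size is captured, updating the estimate is a multiplication rather than a fresh estimation exercise.

    # Re-scaling: the remaining size is known; only the productivity rate changed.
    remaining_size_fp = 250   # function points still to be delivered
    original_rate = 8.0       # hours per FP assumed at project start
    updated_rate = 6.5        # hours per FP now that the team has improved

    print(f"Original remaining effort:  {remaining_size_fp * original_rate:.0f} hours")
    print(f"Re-scaled remaining effort: {remaining_size_fp * updated_rate:.0f} hours")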

Finally, why size?  In the words of David Herron, author and Vice President of Solution Services at the David Consulting Group, “Sizing is all about the conversation that it generates.”  Conversations create a crisper, deeper understanding of the requirements and the steps needed to satisfy the business need.  Determining the size of the project is a tool with which to focus a discussion on whether requirements are understood.  If a requirement can’t be sized, you don’t yet know enough to actually fulfill it.  Planning poker is an example of a sizing conversation. I am always amazed at the richness of the information that is exposed during a group planning poker session (please remember to take notes).  The conversation provides many of the nuances a story or requirement just can’t provide.

Estimates, by definition, are wrong.  The question is just how wrong.   The search for knowledge generated by the conversations needed to size a project provides the best platform for starting a project well.  That same knowledge provides the additional inputs needed to complete the size, complexity, capability equation in order to yield a project estimate.  If you are asked, “Why size?” it might be tempting to fire off the answer “Why not?” but in the end, I think you will change more minds by suggesting that it is all about the conversation after you have made the more quantitative arguments.

Check out an audio version of this essay as part of SPaMCAST 201.


[1] Perceived complexity is more important than actual complexity because perception drives behavior more directly than actual complexity does.

Story points?


Recently I did a webinar on user stories for my day job. During my preparation for the webinar, I asked everyone who was registered to provide the questions they wanted addressed. I received a number of fantastic questions and felt it was important to share the answers with a broader audience.

One of the questions I was asked, from Grigory Kolesnikov, was indicative of a second group of questions:

“Which is better to use as a metric for project planning:

  1. User stories,
  2. Local self-made proxies,
  3. Functional points, or
  4. Any other options?”

Given the topic of the webinar, the answer focused on whether story points were the best metric for project planning.

Size is one of the predictors of how much work will be required to deliver a project. Assuming all project attributes except size stay the same, a larger project will require more effort to complete than a smaller project. Therefore, knowing size is an important factor in answering questions like “How long will this take?” or “How much will this project cost?”  While these questions are fraught with danger, they are always asked, and if you have to compete for work they are generally difficult not to answer. While not a perfect analogy, I do not know a person who builds or is involved in building a home who can’t answer those questions (on either side of the transaction). Which metric you should use to plan the project depends on the type of project or program and whether you are an internal or external provider (i.e., whether you have to compete for work).  Said a different way, as all good consultants know, the answer is: it depends.

User stories are very useful for both release planning and iteration planning in projects being done with one or a small number of stable teams. Stability is important because it lets the team develop a common frame of reference for applying story points. When teams are unable to develop a common frame of reference (or need to redevelop it due to changes in the team), their application of story points will vary widely.  A feature that in sprint 1 might have been 5 story points might be 11 in sprint 3.  While this might not seem to be a big shift, the variability in how the team perceives size will also be exhibited in the team’s velocity.  Velocity is used in release planning and iteration planning.  The higher the degree of variability in the team’s performance from sprint to sprint, the less predictive it is. If performance measured in story points (velocity) is highly variable, it will be less useful for project planning.  Simply put, if you struggle to remember who is on your team on a day-to-day basis, story points are not going to be very valuable.
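
A small sketch of why that variability matters, using hypothetical sprint velocities: the same average velocity produces a much wider forecast range when the numbers bounce around.

    from statistics import mean, stdev

    backlog_points = 120
    teams = {
        "stable":   [24, 26, 25, 23, 25],  # common frame of reference
        "unstable": [14, 31, 19, 38, 22],  # rotating membership
    }

    for label, velocities in teams.items():
        avg, sd = mean(velocities), stdev(velocities)
        # Optimistic and pessimistic sprint counts one standard deviation apart.
        print(f"{label:8s} velocity {avg:.1f} +/- {sd:.1f} -> "
              f"{backlog_points / (avg + sd):.1f} to {backlog_points / (avg - sd):.1f} sprints")

Both teams average roughly 25 points per sprint, but the stable team’s forecast spans about half a sprint while the unstable team’s spans more than four.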

External providers generally have strong contractual incentives to deliver based on a set of requirements in a statement of work, RFP or some other binding document.  While contracts can (and should) be tailored to address how Agile manages the flow of work through a dynamic backlog, most are not, and until accounting, purchasing and legal are brought into the world of Agile, contracts will be difficult.  For example, outsourcing contracts many times include performance expectations.  These expectations need to be observable, understandable and independently measurable in order to be binding and to build trust.  Relative measures like story points fail on this point.  Story points, as noted in other posts, are also not useful for benchmarking.

Story points are not the equivalent of duct tape. You can do almost anything with duct tape. Story points are a team-based mechanism for planning sprints and releases. Teams with a rotating door for membership, or projects with specific contractual performance stipulations, need to use more formal sizing tools for planning.

There are lots of options, but you only need a few!


Deciding what to measure in an IT organization is like letting a child loose in a candy store. There are so many things to measure and so little time! The natural tendency is to measure everything. I was once shown a list of 248 (I counted them) measures for a 50-person development and maintenance group (I did not count them).  It was crazy and required a staff of three people to collect and crunch all that data. In the frenzy to measure and collect everything, the logic is often that it does not matter whether the information will be used now; we might need it later. This is usually a sign that the organization needs to step back and reevaluate. I suggest a simple measurement palette that supports a wide range of organizational goals. (We will discuss syncing measurement to organizational goals in a future article.) A basic IT measurement palette can start with these six measures (this is a Swiss Army knife approach):

  • Size: Begin by measuring the size of the functionality being delivered. I recommend IFPUG function points for functional size; however, other functional measures such as COSMIC, NESMA or Mark II function points will also work. While none of these measures are perfect, they are currently the best means to determine how “much” functionality is being delivered.  More advanced organizations might add measures of non-functional requirements to further broaden their understanding of what is being delivered.
  • Cost: Your IT organization should understand how much each project costs, which means the organization is going to have to invest in, or at least understand, good cost accounting techniques. Decisions about whether or how to allocate corporate and IT overhead to projects, or whether to include hardware cost even if it has to be apportioned, are important to understanding the total cost of a project.
  • Effort: Measure the actual amount of effort it takes to deliver the project. Actual means just that: how much time was needed, not the amount you will bill, and don’t dump hours into a project that is ahead of schedule. Effort (time accounting) is a close relative of cost accounting and needs to be approached with equal discipline. One final note: collect effort in hours. Anything more granular is too much overhead, and anything less granular generates a lot of measurement error.
  • Duration: Collect the start and end dates of the project.  I also suggest collecting interim dates, such as release dates. One common issue with determining duration is having a definition of what marks the beginning and end of a project. For example, is the research to investigate concept feasibility part of the project or not?  If your project has a warranty period (the development team is supporting the implementation and fixing production defects), is that part of the project or part of maintenance?  Each organization needs a common set of policies to ensure everyone understands what start and end mean.
  • Defects: I recommend counting all post-implementation defects.  If needed, organizations can get fancy and account for defects found during the development process. Post-implementation defects are a reflection of the quality the customer actually experiences. One simplification I often recommend: instead of spending huge amounts of time figuring out which project caused a defect, credit the defect to the project that last touched the application (they should have found the defect while regression testing).
  • Satisfaction: Measure customer satisfaction using simple survey or interview approaches (five-ish questions). Focus on identifying whether the overall satisfaction of any customer group is changing at a high level.  When you see a move in the data, you can spend time on a more in-depth investigation. More advanced approaches can leverage techniques such as Net Promoter Score.

A simple measurement palette can be used to generate a wide range of metrics. For example, effort and size can be used to generate productivity, and size and duration can be used to generate velocity or time-to-market metrics.  More importantly, by tackling this simple palette, organizations can develop the discipline needed to generate value from measurement.
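
As an illustration, here is a minimal sketch of deriving metrics from the palette; the project numbers are invented.

    # Derived metrics from the basic measurement palette (values invented).
    project = {
        "size_fp": 350,        # function points delivered
        "effort_hours": 2800,  # actual effort
        "cost": 420_000,       # fully loaded project cost
        "duration_months": 7,  # start date to end date
        "defects": 14,         # post-implementation defects
    }

    print(f"Productivity:   {project['size_fp'] / project['effort_hours']:.3f} FP/hour")
    print(f"Delivery rate:  {project['size_fp'] / project['duration_months']:.1f} FP/month")
    print(f"Cost per FP:    ${project['cost'] / project['size_fp']:,.0f}")
    print(f"Defect density: {project['defects'] / project['size_fp'] * 100:.1f} defects per 100 FP")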