Listen Now

Subscribe on iTunes

I am still traveling for the next two weeks. The trip is a mixture of vacation and a board meeting, but that does not mean you will have to forgo your weekly SPaMCAST.  In place of our normal format, I am posting a mix tape of the answers to the “If you could change two things” question I have been asking interviewees for nearly ten years.  This week on SPaMCAST 392 we feature our top downloaded podcasts from the year 2009:

SPaMCAST 51 – Tim Lister on Adrenaline Junkies and Template Zombies

http://bit.ly/1WERtk5

Tim discussed ending the estimating charade. He suggested it would be better if we recognized estimating as goal setting. Secondly, he noted that a lot of outsourcing has overshot its mark and reduced our organizational capabilities.

SPaMCAST 67 – Murali Chemuturi on Software Estimation Best Practices, Tools & Techniques

http://bit.ly/1MHDzeJ

Murali used his wishes to state that estimators need a better grasp of the concepts of productivity and scheduling.

SPaMCAST 69 – Kevin Brennan on Business Analysis

http://bit.ly/1WERB2V

Kevin answered a different question and discussed the message he would share with a C-Level executive to describe why business analysis is important to them.

If you enjoyed the snippets please use the links to listen to the whole interviews.  Next week 2010!

HTMA

How to Measure Anything: Finding the Value of “Intangibles” in Business, Third Edition

Chapter 5 of How to Measure Anything: Finding the Value of “Intangibles” in Business, Third Edition, is titled “Calibrated Estimates: How Much Do You Know Now?” Chapter 4 described how to define the decision that needs to be made and the data that will be needed to make that decision. Chapter 5 builds on that first step in Hubbard’s measurement process by providing techniques to determine what you know as you begin the measurement process.  Hubbard addresses two major topics in this chapter. The first is using estimation to quantify what you know, and the second is his process for calibrating estimators.

The Mythical Man-Month


In the seventh essay of The Mythical Man-Month, Fred P. Brooks begins to tackle the concept of estimating. While there are many estimating techniques, Brooks’ approach is a history/data-based approach, which we would understand today as parametric estimation. Parametric estimation is generally a technique that generates a prediction of the effort needed to deliver a project based on historical data about productivity, staffing and quality. Estimating is not a straightforward extrapolation of what has happened in the past to what will happen in the future, and mistaking it as such is fraught with potential issues. Brooks identified two potentially significant estimating errors that can occur when you use the past to predict the future without interpretation.

Often the only data available is information about one part of the project’s life cycle. The first issue Brooks identified was that you cannot estimate the entire job or project by just estimating the coding and inferring the rest. There are many variables that might affect the relationship between development and testing. For example, some changes can impact more of the code than others, requiring more or less regression testing. The link between the effort required to deliver different types of work is not linear. The ability to estimate based on history requires knowledge of project-specific practices and attributes, including competency, complexity and technical constraints.

Not all projects are the same. The second issue Brooks identified was that one type of project is not applicable for predicting another. Brooks used the differences between small projects and programming systems products to illustrate his point. Each type of work requires different activities, not just scaled-up versions of the same tasks. Similarly, consider the differences in the tasks and activities required for building a smartphone app compared to building a large data warehouse application. Simply put, they are radically different. Brooks drove the point home using the analogy of extrapolating the record time for the 100-yard dash (9.07 seconds, according to Wikipedia) to the time needed to run a mile. A linear extrapolation would suggest that a mile (1,760 yards) could be run in roughly 2 minutes 40 seconds; the current record is 3:43.13.
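The arithmetic behind Brooks’ analogy is easy to check. A quick sketch, using only the two record times cited above:

```python
# Linear extrapolation of the 100-yard dash record to the mile:
# the same arithmetic Brooks used to show why naive scaling fails.
DASH_YARDS = 100
DASH_RECORD_S = 9.07            # 100-yard dash record cited above
MILE_YARDS = 1760
MILE_RECORD_S = 3 * 60 + 43.13  # the 3:43.13 mile record cited above

# Naive linear scaling: assume the sprint pace holds over 17.6x the distance.
linear_estimate_s = DASH_RECORD_S * (MILE_YARDS / DASH_YARDS)

print(f"Linear extrapolation: {linear_estimate_s / 60:.2f} min")  # about 2.66 min (~2:40)
print(f"Actual record:        {MILE_RECORD_S / 60:.2f} min")      # about 3.72 min
```

The linear estimate undershoots the real record by more than a minute, which is exactly Brooks’ point: the relationship between a small effort and a large one is not linear.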

A significant portion of this essay is a review of a number of studies that illustrated the relationship between work done and the estimate. Brooks used these studies to highlight different factors that could impact the ability to extrapolate what has happened in the past into an estimate of the future. (Note: I infer from the descriptions that these studies dealt with the issues of completeness and relevance.) The surveys, listed by the person who generated the data, and the conclusions we can draw from an understanding of the data, included:

  1. Charles Portman’s Data – Slippages occurred primarily because only half the time available was productive. Unrelated jobs, meetings, paperwork, downtime, vacations and other non-productive tasks consumed the remainder.
  2. Joel Aron’s Data – Productivity was negatively related to the number of interactions among programmers. As the number of interactions goes up, productivity goes down.
  3. John Harr’s Data – The variation between estimates and actuals tends to be affected by the size of workgroups, the length of time and the number of modules. The complexity of the program being worked on could also be a contributor.
  4. OS/360 Data – Confirmed the striking differences in productivity driven by the complexity and difficulty of the task.
  5. Corbató’s Data – Programming languages affect productivity. Higher-level languages are more productive. Said a little differently, writing a function in Ruby on Rails requires less time than writing the same function in macro assembler language.

I believe that the surveys and data discussed are less important than the recognition that there are many factors that must be addressed when trying to predict the future. In the end, estimation requires relevant historical data regardless of method. Relevance is shorthand for accounting for the factors that affect the type of work you are doing. In homogeneous environments, complexity and language may not be as big a determinant of productivity as the number of interactions driven by team size or the amount of non-productive time teams have to absorb. The problem with historical data is that gathering it requires effort, time and/or money.  The need to expend resources to generate, collect or purchase historical data is often used as a bugaboo to resist collecting the data and as a tool to avoid using parametric or historical estimating techniques.

Recognize that the term historical data should not scare you away.  Historical data can be as simple as a Scrum team collecting its velocity or productivity every sprint and using it to calculate an average for planning and estimating. Historical data can be as complex as a palette of information including project effort, size, duration, team capabilities and project context.
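As a sketch of the simple end of that spectrum, here is what a velocity-based plan might look like. The sprint numbers are invented for illustration:

```python
import math

# Hypothetical historical data: story points a Scrum team completed
# in each of its last five sprints.
velocities = [21, 18, 24, 20, 22]

# The simplest use of historical data: an average velocity for planning.
average_velocity = sum(velocities) / len(velocities)

# Planning question: how many sprints to burn down a 105-point backlog?
backlog_points = 105
sprints_needed = math.ceil(backlog_points / average_velocity)

print(f"Average velocity: {average_velocity} points/sprint")
print(f"Forecast: {sprints_needed} sprints for {backlog_points} points")
```

Five numbers and a division are already “historical data” in the sense used above; richer data sets simply let you account for more of the factors that drive productivity.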

Previous installments of the Re-read of The Mythical Man-Month

Introductions and The Tar Pit

The Mythical Man-Month (The Essay)

The Surgical Team

Aristocracy, Democracy and System Design

The Second-System Effect

Passing the Word

Why did the Tower of Babel fall?

Sometimes estimation leaves you in a fog!


I recently asked a group of people the question, “What are the two largest issues in project estimation?” I received a wide range of answers; probably a reflection of the range of individuals answering.  Five macro categories emerged from the answers. They are:

  1. Requirements. The impact of unclear and changing requirements on budgeting and estimation was discussed in detail in the entry, Requirements: The Chronic Problem with Project Estimation.  Bottom line: embracing dynamic development methods requires change, and that change will alter how the organization evaluates projects.
  2. Estimate Reliability. The perceived lack of reliability of an estimate can be generated by many factors, including differences between development and estimation processes. One of the respondents noted, “most of the time the project does not believe the estimate and thus comes up with their own, which is primarily based on what they feel the customer wants to hear.”
  3. Project History. Both analogous and parametric estimation processes use the past as an input in determining the future.  Collection of consistent historical data is critical to learning and not repeating the same mistakes over and over.  According to Joe Schofield, “few groups retain enough relevant data from their experiences to avoid relearning the same lesson.”
  4. Labor Hours Are Not The Same As Size.  Many estimators jump straight to estimating the effort needed to perform the project or its individual tasks.  By jumping immediately to effort, estimators miss all of the nuances that affect the level of effort required to deliver value.  According to Ian Brown, “then the discussion basically boils down to opinions of the number of hours, rather than assessing other attributes that drive the number of hours that something will take.”
  5. No One Dedicated to Estimation.  Estimating is a skill built on a wide range of techniques that need to be learned and practiced.  When no one is dedicated to developing and maintaining estimates it is rare that anyone can learn to estimate consistently, which affects reliability.  To quote one of the respondents, “consistency of estimation from team to team, and within a team over time, is non-existent.”

Each of the top five issues is solvable without throwing out the concepts of estimation that are critical for planning at the organization, portfolio and product levels.  Every organization will have to wrestle with its own solution to the estimation conundrum. However, the first step is to recognize the issues you face and your goals for the estimation process.


There are many levels of estimation, including budgeting, high-level estimation and task planning (detailed estimation).  We can link a more classic view of estimation to the Agile planning onion popularized by Mike Cohn.  In the Agile planning onion, strategic planning is on the outside of the onion and the planning that occurs in the daily sprint meetings is at the core of the onion. Each layer closer to the core relates more to the day-to-day activity of a team. The #NoEstimates movement eschews developing story- or task-level estimates, and sometimes estimates at the higher levels as well. As you get closer to the core of the planning onion, the case for #NoEstimates becomes more compelling.


Planning Onion

 

Budgeting is a strategic form of estimation that most corporate and governmental entities perform.  Budgeting relates to the strategy and portfolio layers of the planning onion.  #NoEstimates techniques don’t answer the central questions most organizations need to answer at this level, which include:

  1. How much money should I allocate for software development, enhancements and maintenance?
  2. Which projects or products should we fund?
  3. Which projects will return the greatest amount of value?

Budgets are often educated guesses that provide some approximation of the size and cost of the work on the overall backlog. Budgeting provides the basis to allocate resources in environments where demand outstrips capacity. Other than in the most extreme form of #NoEstimates, which eschews all estimates, budgeting is almost always performed.

High-level estimation, performed in the product and release layers of the planning onion, is generally used to forecast when functionality will be available. Release plans and product road maps are types of forecasts that are used to convey when products and functions will be available. These types of estimates can easily be built if teams have a track record of delivering value on a regular basis. #NoEstimates can be applied at this level of planning and estimation by substituting the predictable completion of work items for effort estimates.  #NoEstimates at this level of planning can be used only if the conditions that facilitate predictable delivery flow are met. Conditions include:

  1. Stable teams
  2. Adoption of an Agile mindset (at both the team and organizational levels)
  3. A backlog of well-groomed stories

Organizations that meet these criteria can answer the classic project/release questions of when, what and how much based on the predictable delivery rates of #NoEstimates teams (assuming some level of maturity – newly formed teams are never predictable). High-level estimation is closer to the day-to-day operations of the team and connects budgeting to the lowest levels of planning in the planning onion.
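A minimal sketch of what forecasting from flow, rather than from effort estimates, might look like. It assumes a stable team, and the throughput numbers are invented for illustration:

```python
import math

# Observed throughput: completed work items per iteration for a stable
# team with a well-groomed backlog (numbers are hypothetical).
throughput_history = [6, 5, 7, 6, 6]
remaining_items = 42  # groomed stories left in the backlog

# Forecast by flow: average completed items per iteration stands in
# for any story- or task-level effort estimate.
average_throughput = sum(throughput_history) / len(throughput_history)
iterations_remaining = math.ceil(remaining_items / average_throughput)

print(f"Average throughput: {average_throughput} items/iteration")
print(f"Forecast: backlog done in {iterations_remaining} iterations")
```

The answers to “when” and “how much” fall out of the delivery rate itself, which is why the conditions above (stable team, groomed backlog) matter: without them, the history has no predictive value.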

In the standard corporate environment, task-level estimation (typically performed at the iteration and daily planning layers of the onion) is an artifact of project management controls or partial adoptions of Agile concepts. Estimating tasks is often mandated in organizations that allocate individual teams to multiple projects at the same time. The effort estimates are used to enable the organization to allocate slices of time to projects. Stable Agile teams that are allowed to focus on one project at a time and use #NoEstimates techniques have no reason to estimate effort at a task level due to their ability to consistently say what they will do and then deliver on their commitments. Ceasing task-level estimation and planning is the core change all proponents of #NoEstimates are suggesting.

A special estimation case that needs to be considered is that of commercial or contractual work. These arrangements often represent lower-trust relationships or projects that are perceived to be high risk. The legal contracts agreed upon by both parties often stipulate the answers to the what, when and how much questions before the project starts. Due to the risk the contract creates, both parties must do their best to predict/estimate the future before signing the agreement. Raja Bavani, Senior Director at Cognizant Technology Solutions, suggested in a recent conversation that he thought “#NoEstimates was a non-starter in a contractual environment due to the financial risk both parties accept when signing a contract.”

Estimation is a form of planning, and planning is considered an important competency in most business environments. Planning activities abound, from planning the corporate picnic to planning the acquisition and implementation of a new customer relationship management system. Most planning activities center on answering a few very basic questions. When will “it” be done? How much will “it” cost? What is “it” that I will actually get? As an organization or team progresses through the planning onion, the need for effort and cost estimation lessens in most cases. #NoEstimates does not remove the need for all types of estimates. Most organizations will always need to estimate in order to budget. Organizations that have stable teams, have adopted the Agile mindset and have a well-groomed backlog will be able to use predictable flow to forecast rather than effort and cost estimation. At a sprint or day-to-day level, Agile teams that predictably deliver value can embrace the idea of #NoEstimates while answering the basic questions of what, when and how much based on performance.


#NoEstimates is a lightning rod.

 

Estimation is one of the lightning-rod issues in software development and maintenance. Over the past few years the concept of #NoEstimates has emerged and become a movement within the Agile community. Due to its newness, #NoEstimates has several camps revolving around a central concept. This essay begins the process of identifying and defining a core set of concepts in order to have a measured discussion.  A shared language across the gamut of estimating ideas, Agile or not, including #NoEstimates, is critical for comparing the sets of concepts.  We begin our exploration of the ideas around #NoEstimates by establishing a context that includes both classic estimation and #NoEstimates.

Classic Estimation Context: Estimation as a topic is often a synthesis of three related, but different concepts: budgeting, estimation and planning. Because these three concepts are often conflated, it is important to understand the relationship between them.  These are typical in a normal commercial organization; however, the concepts might be called different things depending on your business model. An estimate is a finite approximation of cost, effort and/or duration based on some basis of knowledge (known as a basis of estimation). The flow of activity conflated as estimation often runs from budget, to project estimate, to plan. In most organizations, the act of generating a finite approximation typically begins as a form of portfolio management in order to generate a budget for a department or group. The budgeting process helps make decisions about which pieces of work are to be done. Most organizations have a portfolio of work that is larger than they can accomplish, therefore they need a mechanism to prioritize. Most portfolio managers, whether proponents of an Agile or a classic approach, would defend using value as a key determinant of prioritization. Value requires having some type of forecast of the cost and benefit of the project over some timeframe. Once a project enters the pipeline in a classic organization, an estimate is typically generated.  The estimate is generally believed to be more accurate than the original budget due to the information gathered as the project is groomed to begin. Plans break stories down into tasks, often with personnel assigned, generate an estimate of effort at the task level, and sum those estimates into higher-level estimates. Any of these steps can (but should not) be called estimation. The three-level process described above, if misused, can cause several team and organizational issues. Proponents of the #NoEstimates movement often classify these issues as estimation pathologies; we will explore these “pathologies” in later essays.

#NoEstimates Context:  There are two macro camps in the #NoEstimates movement (the two camps probably reflect more of a continuum of ideas rather than absolutes).  The first camp argues that a team should break work down into small chunks and then immediately begin completing those small chunks (doing the highest value first). The chunks build up quickly to a minimum viable product (MVP) that can generate feedback, so the team can hone its ability to deliver value. I call this camp the “Feedback’ers”, and luminaries like Woody Zuill often champion this camp. A second camp begins in a similar manner – by breaking the work into small pieces, prioritizing on value (and perhaps risk), and delivering an MVP to generate feedback – but they also measure throughput. Throughput is a measure of how many units of work (e.g. stories or widgets) a team can deliver in a specific period of time. Continuously measuring the throughput of the team provides a tool to understand when work needs to start in order for it to be delivered within a period of time. Average throughput is used to provide the team and other stakeholders with a forecast of the future. This is very similar to the throughput measure used in Kanban. People like Vasco Duarte (listen to my interview with Vasco) champion the second camp, which I tend to call the “Kanban’ers”.  I recently heard David Anderson, the Kanban visionary, discuss a similar #NoEstimates position using throughput as a forecasting tool. Both camps in the #NoEstimates movement eschew developing story- or task-level estimates. The major difference is the use of throughput to provide forecasting, which is central to bottom-up estimating and planning at the lowest level of the classic estimation continuum.

When done correctly, both #NoEstimates and classic estimation are tools to generate feedback and create guidance for the organization. In its purest form, #NoEstimates uses functionality to generate feedback and to provide guidance about what is possible. The less absolutist “Kanban’er” form of #NoEstimates uses both functional software and throughput measures as feedback and guidance tools. Classic estimation tools use plans and performance against the plan to generate feedback and guidance. The goal is usually the same; it is just that the mechanisms are very different. With an established context and vocabulary, we can now explore the concepts more deeply.

 

Click this link to listen to SPaMCAST 305

Software Process and Measurement Cast number 305 features our essay on Estimation.  Estimation is a hotbed of controversy. We begin by synchronizing on what we think the word means.  Then, once we have a common vocabulary, we can commence with the fisticuffs. In SPaMCAST 305 we will not shy away from a hard discussion.

The essay begins:

Software project estimation is a conflation of three related but different concepts. The three concepts are budgeting, estimation and planning.  These are typical in a normal commercial organization; however, these concepts might be called different things depending on your business model.  For example, organizations that sell software services typically develop sales bids instead of budgets.  Once the budget is developed, the evolution from budget to estimate and then plan follows a unique path as the project team learns about the project.

Next

Software Process and Measurement Cast number 306 features our interview with Luis Gonçalves.  We discussed getting rid of performance appraisals.  Luis makes the case that performance appraisals hurt people and companies.

Upcoming Events

DCG Webinars:

Raise Your Game: Agile Retrospectives September 18, 2014 11:30 EDT
Retrospectives are a tool that the team uses to identify what they can do better. The basic process – making people feel safe and then generating ideas and solutions so that the team can decide on what they think will make the most significant improvement – puts the team in charge of how they work. When teams are responsible for their own work, they will be more committed to delivering what they promise.
Agile Risk Management – It Is Still Important! October 24, 2014 11:30 EDT
Has the adoption of Agile techniques magically erased risk from software projects? Or, have we just changed how we recognize and manage risk?  Or, more frighteningly, by changing the project environment through adopting Agile techniques, have we tricked ourselves into thinking that risk has been abolished?

 

Upcoming: ITMPI Webinar!

We Are All Biased!  September 16, 2014 11:00 AM – 12:30 PM EST

Register HERE

How we think and form opinions affects our work whether we are project managers, sponsors or stakeholders. In this webinar, we will examine some of the most prevalent workplace biases such as anchor bias, agreement bias and outcome bias. Strategies and tools for avoiding these pitfalls will be provided.

Upcoming Conferences:

I will be presenting at the International Conference on Software Quality and Test Management in San Diego, CA on October 1.  I have a great discount code!!!! Contact me if you are interested.

I will be presenting at the North East Quality Council 60th Conference October 21st and 22nd in Springfield, MA.

More on all of these great events in the near future! I look forward to seeing all SPaMCAST readers and listeners that attend these great events!

The Software Process and Measurement Cast has a sponsor.

As many of you know, I do at least one webinar for the IT Metrics and Productivity Institute (ITMPI) every year. The ITMPI provides a great service to the IT profession. ITMPI’s mission is to pull together the expertise and educational efforts of the world’s leading IT thought leaders and to create a single online destination where IT practitioners and executives can meet all of their educational and professional development needs. The ITMPI offers a premium membership that gives members unlimited free access to 400 PDU accredited webinar recordings, and waives the PDU processing fees on all live and recorded webinars. The Software Process and Measurement Cast gets some support if you sign up here. All the revenue our sponsorship generates goes for bandwidth, hosting and new cool equipment to create more and better content for you. Support the SPaMCAST and learn from the ITMPI.

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques, co-authored by Murali Chemuturi and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: “This book will prove that software projects should not be a tedious process, neither for you or your team.” Support SPaMCAST by buying the book here.

Available in English and Chinese.