Estimation


Are function points relevant in 2014? In this case, the question is whether function points are still relevant for measuring the size of an application, a development project or an enhancement project. IFPUG Function Points were proposed in 1979 by Allan J. Albrecht, published in 1983 by Albrecht and Gaffney while at IBM, and then updated and extended over the years. Just as a tape measure determines the size of a room, function points are a tool to determine the size of an application or project. In order to determine relevance we need to answer two questions:

  1. Do we still need to know “size”?
  2. Is knowing size sufficient to tell us what we need to know?

Size as a measure has many uses, but the two most often cited are as a component in parametric estimation and as a denominator in metrics such as time-to-market and productivity. While there still might be an intellectual debate about the effectiveness of estimation, there has been no reduction in the number of sponsors, executives, purchasing agents and the like requesting a price or an end date that you will be held accountable for meeting. Until those questions cease, estimation will be required. Parametric estimation processes (the second most popular form of estimation after making up a number) require an estimate of size as one of the inputs. Parametric estimation helps to avoid two of the most common cognitive biases exhibited by IT estimators: optimism and the assumption of knowledge.

Size is also used as a normalizing factor (a denominator) to compare effort (productivity), duration (time-to-market) and defects (quality). This type of quantitative analysis is used to answer questions like:

  • Is our performance improving?
  • Are the techniques being used delivering value faster?
  • Are we staffed appropriately?

Function points deliver a consistent measure of functional size based on a consistent set of rules.
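
To make "a consistent set of rules" concrete, here is a minimal sketch of an unadjusted function point count, with the resulting size used as the denominator in a productivity ratio. The complexity weights are the commonly published IFPUG values, but the component counts and effort figure are hypothetical, and this sketch is no substitute for the IFPUG counting practices manual.

```python
# Illustrative only: an unadjusted IFPUG-style function point count using the
# commonly published complexity weights (assumed here), then using the size as
# the denominator in a simple productivity ratio.

WEIGHTS = {
    "EI":  {"low": 3, "avg": 4,  "high": 6},   # external inputs
    "EO":  {"low": 4, "avg": 5,  "high": 7},   # external outputs
    "EQ":  {"low": 3, "avg": 4,  "high": 6},   # external inquiries
    "ILF": {"low": 7, "avg": 10, "high": 15},  # internal logical files
    "EIF": {"low": 5, "avg": 7,  "high": 10},  # external interface files
}

def unadjusted_fp(counts):
    """counts: {component_type: {complexity: number_of_components}}"""
    return sum(WEIGHTS[ctype][cplx] * n
               for ctype, by_cplx in counts.items()
               for cplx, n in by_cplx.items())

# Hypothetical application profile
size = unadjusted_fp({
    "EI":  {"low": 10, "avg": 5},
    "EO":  {"avg": 8},
    "EQ":  {"low": 4},
    "ILF": {"avg": 3},
    "EIF": {"low": 2},
})
effort_hours = 1200  # hypothetical actual effort

print(f"Unadjusted size: {size} FP")                      # 142 FP
print(f"Productivity: {effort_hours / size:.1f} hours/FP")
```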

The second and perhaps more critical question is whether the balance between functional requirements (things users do) and non-functional requirements (things like usability and maintainability) has changed when applications are implemented in current environments. If the balance has changed, then perhaps measuring functional size is not relevant, or not sufficient, for estimation or productivity analysis. A literature search turns up no quantitative studies on whether the relationship between functional and non-functional requirements (NFRs) has changed. Anecdotally, newer architectures, such as heavily distributed systems and software as a service, have caused an increase in the number and complexity of NFRs. However, there is no credible academic evidence that a change has occurred.

It should be noted that some measurement organizations, like IFPUG, have developed and begun evolving measures of non-functional size.  IFPUG has released the SNAP version 2.1, which measures the size of NFRs. These measures are still in the process of being incorporated into software estimation tools and are considered an augmentation to functional size measures like IFPUG Function Points or COSMIC (another form of function points).

Function points are still relevant because organizations, sponsors and purchasing agents still want to know how much a project will cost and what they will get for their money. Organizations still want to benchmark their performance internally and externally. Answering these kinds of questions requires a standard measure of size. Until those questions stop being important, function points will be relevant.

FYI: Many times the question of relevance is really code for: "Do I have to spend my time counting function points?" We will tackle that issue at a later date; until then, if effort is the real issue, call me and let's discuss Quick and Early Function Points.

Sometimes the way forward can be foggy.

We've all been asked for an off-the-cuff estimate and gotten burnt. What is more problematic is that we continue to do it. Worse yet, even though we know this behavior is destructive, we feel we have no choice; not answering is not allowed. The behavior is rationalized by viewing the off-the-cuff estimate as a time-honored IT tradition. What is wrong with tradition? A first estimate is just a number, right?

We forget that an estimate, even in the most benign case, sets expectations. The number, range of numbers or frame of reference creates a set of boundaries in the requestor's mind and, as importantly, in your mind. The changes that will be required as knowledge overtakes unknowns will demand more effort than if you had gotten the phrase "I'll get back to you…" out of your mouth.

Estimates are driven by many factors. I would like to focus on the granddaddy of them all: requirements. The love/hate relationship between the hard, cold number of an estimate and the fuzzy concept that is requirements begins even before a project starts. It begins with the initial hallway conversation. How accurate can an estimate for a project be when all you have is a one-line description (or, at best, a paragraph in an email)? The goal of requirements is to describe the problem in terms of a solution state: what the project is supposed to do when it is complete. Knowing what you want to end up with is the knowledge that makes creating an estimate possible (let alone what makes delivering the project possible). However, the growth of the project team's knowledge and of the conceptual ideas describing the solution before requirements exist follows a non-linear path. The one-line description is a jump into the future without all of the facts, which reduces the probability of an estimate being correct.

There are many strategies to address the requirements-estimation conundrum. Methods such as the estimation funnel (a process strategy), analogous estimates, estimating by phase (a project methodology strategy) and the planning components of Agile methods (a different project methodology strategy) address significant portions of the conundrum. These techniques work by relating knowledge growth to accuracy. Regardless of the logic of relating estimates to knowledge, breaking the underlying belief in an estimate as an absolute is not for the timid. Improving the relationship between requirements and estimation within a given company will be a function of the organizational culture in effect.

When is an estimate just a number? Maybe the question should be, when isn't an estimate just a number? When is an estimate not a promise? When is it not a contract with your boss, your customer or both? The crass answer might be when you can definitively answer the following questions: when will it be done, how long will it take, and how much will it cost. Or maybe when changes to requirements or processes can occur without an impact on the delivery date, the cost or the hours you'll have to work. Providing a bad estimate early in a project creates a catch-22 for the project manager. When change occurs, it reduces the project manager's and estimator's credibility, which can lead to an aggressive defensive posture of not hearing anything that endangers the original position.

A version of this essay appears on SPaMCAST number 6.

If you know how long it takes you to make an apple pie, how long would it take you to make a lemon meringue pie?

An analogous estimate is an estimate generated by comparing the project you are trying to estimate to a completed project. A very simple example: if you had recently baked an apple pie and the exercise had taken 2 hours, you could infer that making another apple pie the next day would also require 2 hours. The process of using an analogy to generate an estimate gets more complicated when, instead of an apple pie, we are going to make cherry turnovers (or some other imperfect comparison).

In project estimation, using the analogy process to generate an estimate begins with using a known project to gauge an unknown project. The person (or persons) making the comparison has to juggle the similarities between the two projects. The estimator begins by selecting a project (or projects) that is as similar as possible to the project being estimated. The attributes typically used to make the comparison include project size, project complexity, project team composition and the project's story. Once the estimator judges the degree of similarity, he or she uses that difference to infer an estimate for the new project based on the comparison projects' history. For example, if we knew that George could build a 4,000 square foot house in 1,000 hours of effort, and he was asked to build a very similar house, we could confidently say that it would take him another 1,000 hours. Similarly, if the house he was asked to build was 3,000 square feet, the estimate would be 750 hours of effort. Complications, such as differences in complexity, changes in the crew working on the house or a more difficult client, would require a correction to the estimate. The analogous estimation process combines the estimator's expert judgment with historical data.
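
The house example above can be written down as a tiny calculation. This is only a sketch: the proportional scaling and the multiplicative adjustment factors are one way an estimator might quantify their judgment, not a standard formula.

```python
# A minimal sketch of an analogous estimate: scale a completed project's actual
# effort by the relative size of the new work, then apply the estimator's
# judgment as adjustment factors (the factor values below are hypothetical).

def analogous_estimate(reference_effort, reference_size, new_size, adjustments=()):
    """Scale reference effort by relative size, then apply judgment multipliers."""
    estimate = reference_effort * (new_size / reference_size)
    for factor in adjustments:        # e.g. 1.15 for a more difficult client
        estimate *= factor
    return estimate

# George's house: 4,000 sq ft took 1,000 hours.
print(analogous_estimate(1000, 4000, 3000))          # -> 750.0 hours
print(analogous_estimate(1000, 4000, 3000, [1.15]))  # more difficult client -> 862.5
```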

The reliability of an analogous estimate is based on the perception and biases of the estimator, what is known about the project being estimated and the validity of the historical data available. While all of these factors affect reliability, Barry Boehm has noted that an estimate driven by expert judgment is only as good as the expert's luck and opinion[1]. You can increase reliability by using an estimation team, so that an individual's opinion and bias are not over-weighted. Reliability can also be increased by ensuring that the estimator has enough time to find and reference the right historical data.

Estimation using analogies is a valid estimation technique that is relatively easy to apply, but it requires accurate historical data that includes quantitative information (e.g. effort, duration, staffing and defects) and behavioral data (e.g. complexity, technology, team capabilities and the story of the project). Regardless, the estimator's knowledge of the project to be estimated and his or her personal biases need to be accounted for in order to ensure reliability.


[1] Software Development Cost Estimation Approaches – A Survey, March 2000, csse.usc.edu/csse/…/usccse2000-505.pdf

Prices

In classic economics, a price represents an equilibrium between supply and demand, or value and scarcity. This suggests that there should be a close relationship between the estimate and the price. However, the difference between a price and an estimate is the pricing strategy. Over the long term, in commercial organizations, the price must be equivalent to the estimate plus a planned margin. In the short run, for any specific account, the relationship can be significantly more variable and nuanced. Pricing strategies work because IT sourcing markets are not perfect markets.

Classic equilibrium theory assumes absolutely free markets without barriers. Software development, operations and other IT staffing scenarios typically have several market imperfections (most visibly in the short run), which make pricing strategies effective. For example, a price-to-win strategy might price the work at a lower (or even negative) margin in order to get into an account, with price escalation over the longer term to get back to the planned margin. You can see this strategy in your local grocery store, where prices are marked down to entice switching or stockpiling (stockpiling is useful for resisting competitive pressures).

An estimate describes what the sourcer thinks will be required to deliver the work, and therefore is an absolute based on what is known (and, many times, what is not known). The estimate is a step toward a price, but only a step. The most basic formula would be:

 Price = Estimate * Planned Margin

The margin is a function of the pricing strategy. The equation could be enhanced (or complicated) by adding timeframes. In this model the estimate, unless new information is encountered (e.g. scope change, different resource costs, inaccuracies or process improvements), is a constant. That means that if the organization wants to change the price, it will need to change the expected margin or find other efficiencies. What should not happen is for a price change to result in a command to change the estimate without some substantive rationale.
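
A small sketch of the relationship described by the formula follows. Treating the planned margin as a simple multiplier (e.g. 1.20 for a 20% margin) is an assumption for illustration; how an organization actually books margin will vary.

```python
# A minimal sketch of Price = Estimate * Planned Margin, with the margin
# expressed as a multiplier. Values are hypothetical.

def price(estimated_cost, planned_margin):
    return estimated_cost * planned_margin

base_estimate = 500_000                 # estimated cost of delivery
print(price(base_estimate, 1.20))       # planned 20% margin           -> 600,000
print(price(base_estimate, 0.95))       # price-to-win, negative margin -> 475,000
```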

In internal organizations, the relationship between an estimate and what is charged (charged back or applied to budget) is typically the same with the possible exception of an allocation of overhead. There is little need for an internal pricing strategy, as internal IT organizations are typically run as cost centers rather than as a business. In later articles we can discuss both the positive and mostly negative outcomes of this behavior.

In commercial IT (application development, support and operations), the price that ends up being charged or written into the contract should be related to the estimate, as modified by the pricing strategy being used to capture the business. Where there are market imperfections, such as high barriers to switching, the difference between the estimate and the price is tilted toward the sourcer. Estimates are an input to the price, but only that: an input.

In Software Project Estimation: Fantasies I said that a budget, estimate or even a plan was not a price.  After the publication of that essay I had a follow up conversation with a close friend. He said that in his organization the word estimate is considered a commitment, or at the very least a target that all his project managers had to pursue. YEOW! He is playing fast and loose with the language and therefore is sending a mixed message.

A commitment is a promise to deliver. An example of a commitment I heard recently, while walking through the airport listening to the cell phone conversation of a gentleman walking next to me, was "I promise not to leave the sales review until the end of the month." A commitment indicates a dedication to an activity or cause. The person on the cell phone promised to meet the goal he had agreed upon.

What is a target? In an IT department a target is a statement of business objective. An example of a target might be “credit card file maintenance must be updated by January 1st to meet the new federal regulation.” A target defines the objective and defines success.  A target is generally a bar set at a performance level and then pursued.  Another example is “I have a target to review six books for the Software Process and Measurement podcast in 2014.”  Note six is two more than we did in 2013 and represents a stretch goal that hopefully will motivate me to read and review more books.

Simply put, a commitment represents a promise that will be honored, and a target is a goal that will be pursued. An estimate is a prediction based on imperfect information in an uncertain environment. An estimate, as we have noted before, is best given as a range. Stating an estimate as a single number and adding the words "we will deliver the project for X" (where X is a budget or estimate) converts the estimate into a commitment that must be honored. Consider for a second: if a project is estimated at $10M – $11M USD and a team finds a way to deliver it for $7M USD, would you expect them to find a way to spend the extra money rather than giving it back so the organization can do something else with it? Bringing the project in for $3M or $4M less than the estimate would mean they had not met their target or commitment. Turning an estimate into a commitment or target can lead teams toward poor behaviors. Targets are goals, commitments are promises to perform and an estimate is a prediction. Targets, commitments and estimates are three different words with three different definitions that generate three different behaviors.

Historical data doesn’t come from historical ruins.

Historical data is needed for any form of consistent estimation.  The problem with historical data is that gathering the data requires effort, time or money.  The need to expend resources to generate, collect or purchase historical data is often used as a bugaboo to resist collecting the data and as a tool to avoid using parametric or historical estimating techniques.

Historical data can be as simple as a Scrum team collecting its velocity or productivity every sprint and using it to calculate an average for planning and estimating, or as complex as the set of data that teams using parametric estimation collect, which includes a more robust palette of data: project effort, size, duration, team capabilities and project context. In both cases the data collected needs to fit the method you are using and the level of granularity at which you are going to estimate or plan. For instance, if you are estimating at the project level you need data at the project level. If you are estimating at the task level you need to collect historical data at the task level.
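
On the simple end of that spectrum, the arithmetic is nothing more than averaging recorded velocity and dividing it into the remaining backlog. The numbers below are hypothetical.

```python
# A minimal sketch of the "simple" end of historical data: a Scrum team's
# recorded velocity per sprint, averaged for planning.

velocities = [21, 18, 24, 20, 23]      # story points completed in past sprints

average_velocity = sum(velocities) / len(velocities)
remaining_backlog = 180                # story points remaining (hypothetical)

sprints_needed = remaining_backlog / average_velocity
print(f"Average velocity: {average_velocity:.1f} points/sprint")
print(f"Estimated sprints remaining: {sprints_needed:.1f}")
```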

Here is my recommended palette of historical data for estimating at the project level:

  • Original Estimate (effort, duration, staffing)
  • Actual Outcome (effort, duration, staffing)
  • Cost (estimated and actual) – Cost data can be broken down based on the source. Examples of further levels of granularity include hardware costs, software purchase or license costs, and contractor versus internal personnel costs.
  • Capabilities (predicted and actual) – Capabilities describe the level of competency of the team. Examples include team skill set, experience level, roles and control structures.
  • Size (predicted and actual) – Size is a measure of the end product delivered by the project. In a software project, size is a measure of the functionality that will be delivered (IFPUG Function Points are one example of a measure of software functionality).
  • Context – Context is the story of the project, including whether anything out of the norm happened. For example, knowing that half the project team was temporarily reassigned during the project may be important when analyzing the data.
  • Project Demographics – Who was the customer, which product(s) were affected, what methods were used, what was the primary technology, were any of the technologies new to the team, what were the primary languages, and were any of the languages new to the team.

If we needed to estimate (not plan) at a phase, release or sprint level, then the data would need to be collected at that level.
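
As a sketch of how a project-level record covering this palette might be structured, here is one possible shape. The field names and example values are illustrative assumptions, not a prescribed schema.

```python
# A sketch of one way to structure project-level historical data covering the
# palette described above. Field names and values are illustrative only.

from dataclasses import dataclass, field

@dataclass
class ProjectHistoryRecord:
    project_id: str
    # original estimate vs. actual outcome
    estimated_effort_hours: float
    actual_effort_hours: float
    estimated_duration_days: int
    actual_duration_days: int
    estimated_cost: float
    actual_cost: float
    # size (predicted and actual), e.g. in IFPUG Function Points
    predicted_size_fp: float
    actual_size_fp: float
    # capabilities, context and demographics
    team_capabilities: dict = field(default_factory=dict)  # skills, experience, roles
    context_notes: str = ""                                 # the "story" of the project
    demographics: dict = field(default_factory=dict)        # customer, methods, technology

record = ProjectHistoryRecord(
    project_id="2013-INV-041",
    estimated_effort_hours=4200, actual_effort_hours=4750,
    estimated_duration_days=120, actual_duration_days=140,
    estimated_cost=510_000, actual_cost=565_000,
    predicted_size_fp=310, actual_size_fp=345,
    context_notes="Half the team was temporarily reassigned mid-project.",
)
```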

Historical data is a requirement for effective budgeting and estimation. The best data is data from your own organization's projects. This means that you have to define the information you want, collect the data and analyze the data. Collecting data also implies that someone needs to record the data as it happens (time accounting and project-level accounting). Only collect the information you need, and only at the level you are going to use. Remember, each additional measure or level of detail will require more effort from those analyzing the data, those collecting the data and, perhaps most importantly, those who have to record the data. Balance the level of measurement overhead with the benefit you can extract in the near term. Collecting data that you might need someday or that will only pay off in a few years will usually end up costing more than it returns, and may well disenchant the people you are asking to collect and record the data. When they become disenchanted, your data quality will suffer (or the data may stop being reported at all). When beginning an estimation program, immediately start collecting your own data, BUT also consider reaching out to external sources of data to jump-start the program so that you can begin estimating while you collect your own data.

Hurdles come in many shapes and sizes.

There are a number of hurdles to clear in order to be in a position to provide accurate estimates (accurate, not precise!). These hurdles represent a number of biases that have grown up around estimates and that cause us to ignore uncertainty, the capability of the teams that will do the work and whether we are using consistent processes.

The first hurdle is uncertainty. Uncertainty is a natural occurrence in all projects; the earlier in the project we estimate, the larger the amount of uncertainty. All budgets, estimates and even plans must recognize that uncertainty exists and make provision for its degree. One technique used to incorporate uncertainty is padding, which is the inclusion of an amount of extra time at the task, phase or overall project level. However, tasks should only be generated in planning, where uncertainty is at its lowest level, so padding should not be needed; task-level padding is typically done when a bottom-up estimate is being generated where a budget or top-down estimate would be more appropriate. Padding is generally a quick and dirty way to address uncertainty. A second mechanism for dealing with uncertainty is to generate a budget or estimate as a range (the project will cost between x and y), based on the level of variability in past budgets or estimates compared to actuals. The bigger the difference between prediction and reality, the greater the range. Techniques like Monte Carlo analysis can be used to generate confidence levels for the range boundaries. Measuring how uncertain we are is more problematic when we do not have measurement data (past budgets and estimates compared to project actuals). One method is to gather a group of subject matter experts, have each individually develop a budget or estimate, and use the range of their answers as an indication of uncertainty.
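
Here is a minimal sketch of the Monte Carlo idea mentioned above: sample a plausible outcome many times and read confidence levels off the resulting distribution. The triangular distributions and task values are hypothetical; a real model would be calibrated with the organization's own history.

```python
# A minimal Monte Carlo sketch for expressing an estimate as a range with
# confidence levels. Task values (optimistic, most likely, pessimistic) are
# hypothetical effort figures in hours.

import random

tasks = [
    (40, 60, 120),
    (80, 100, 200),
    (20, 30, 70),
]

def simulate_total():
    # random.triangular(low, high, mode)
    return sum(random.triangular(low, high, mode) for low, mode, high in tasks)

totals = sorted(simulate_total() for _ in range(10_000))
p10, p50, p90 = (totals[int(len(totals) * p)] for p in (0.10, 0.50, 0.90))
print(f"10%: {p10:.0f}h  50%: {p50:.0f}h  90%: {p90:.0f}h")
```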

A second hurdle all budgets and estimates face is that the result needs to incorporate a number of lower-level predictions. These include predicting the capability of the team, the types of methods they will use, the level of technical complexity and, to an extent, how the problem will be solved. All estimators and planners make these kinds of predictions for every estimate, budget or plan they have ever created (whether they knew it or not). The number of different attributes that can affect any budget or estimate can be daunting; for example, T. Capers Jones rattled off 130+ in the appendix of Estimating Software Costs. I recommend having a formal list of attributes and a rating scale, so that whoever is involved in developing the budget or estimate remembers to account for the whole range of attributes (or consciously decides they do not need to be addressed). Knowledge of the capability attributes (all of the attributes, including complexity) can have a very significant impact on the cost and speed of delivery. Some estimators assume that all attributes will be average; however, the process of thinking through or assessing the attributes can uncover assumptions that may not be true. There are several published lists of project attributes that can be mined to jump-start this form of assessment. I generally recommend getting advice before adopting a list from published materials to ensure a fit with your organization's culture.

The third hurdle is consistency of process or method. The majority of the effort in any software project is generated by the tasks required to build the product. I call this the engineering process, which includes generating requirements, any analysis and design, coding and testing. At a macro level all projects include work that could be assigned to those categories; the differences are in the details. The tasks needed to code a web module would be different from those needed to code a data warehouse (if you are not a developer you are going to have to trust me on this). Each of these types of work will have a different productivity signature if the team is following any of the basic development frameworks (e.g. Agile, spiral, Crystal, RUP). If the team is just winging it, there will be no means to predict how much the project will cost or how long it will last (you might be able to predict failure). As a side note, a consistent estimation process is also necessary in order to generate comparable results.

Clearing the three hurdles to effective estimation is not a herculean task, but it does require discipline and a degree of introspection, which may not come naturally to IT teams. This is where leadership, defined processes and coaching are helpful in breaking down the barriers that inhibit good budgeting and estimation. Let's stop thinking of an estimate as a single number that states what will happen and start treating it as a prediction of what may happen.

Fantasies are as ethereal as a cloud.

There are a number of fantasies about estimation held by non-IT people and even some experienced software development professionals: 1) estimates are like retail prices, a predictable fixed price; 2) estimates can always be negotiated down to a smaller number with impunity; and 3) in order to be accurate, estimates must be precise. Belief in any of these fallacies will have negative consequences.

The first fantasy is that custom projects can be priced like a cup of coffee. We fall prey to this fantasy because we are human and we want software projects to be as predictable as buying that cup of coffee. When you go to most coffee shops, whether in North America, South America, Europe or India, the price is posted above the register. In my local Starbucks I can get a cup of coffee for a few dollars; I just read the menu and pay the amount. The same is true for buying an app on my iPhone or a packaged software product. Software project estimates, by contrast, are built on imperfect information that ranges from partial requirements to evolving technologies and, worse yet, include the interaction of people and the chaos that portends. From a purely mathematical perspective these imperfections mean that the actual effort, cost and duration of the project will be variable. How variable is influenced by the process used, the amount of learning required to deliver the project and the number of people involved, to name just a few of the critical factors that drive project performance. This variability in knowledge is why mature estimation organizations almost always express an estimate either as a range or as a probability, and why some organizations suggest that estimation is impossible.

Agile projects re-estimate every sprint based on the feedback from the previous sprint, using the concept of velocity. Many waterfall projects re-estimate at the beginning of every new phase so that the current estimate reflects what the team has learned through experience. Even when a fixed price is offered, the organization agreeing to that price will have done an analysis to determine whether it can deliver for that price and what the project will probably really cost (with a profit). This is the same process any project would follow to say it is x% confident of an estimate. When projects run short on time, resources or money and they can't beg for more, they will begin to make compromises ranging from cutting corners (we don't need to test that) to jettisoning scope (let's push that feature to phase two). Many of these decisions will be made quickly and without enough thought, which will hurt IT's reputation and increase project risk.

A second classic fantasy is that you can always browbeat the team into making the estimate smaller. This fantasy can be true. A good negotiator will leverage at least two psychological traits to whittle away at an estimate. The first trait is the natural optimism of IT personnel, which we discussed in Software Project Estimation: Types of Estimates. The problem is that negotiating the estimate downward (rather than negotiating over scope or resources) can lead to padded estimates or to technical debt, driven by pressure on profit margin or on career prospects. Estimators who know they are going to be pushed to reduce any estimate, regardless of how well it is built, will sometimes cheat and pad the estimate so that, when they are pushed to cut, they can do so without hurting the project. This behavior is only a short-term fix. Sooner or later (and usually sooner), sponsors and managers figure out the tactic (perhaps because they used it themselves) and begin demanding even deeper cuts. The classic estimation joke is that every first estimate should be cut in half and then sent back to be re-estimated. A second side effect of this fantasy is that when the estimate is compressed and the requirements are not reduced, the probability of the team needing to cut corners increases. Cutting corners can result in technical debt or just plain mistakes. In extreme circumstances, teams will take big gambles on solutions in an attempt to stay on budget.

A third fantasy is that precision equals accuracy. Precision is defined as exactness. A precise estimate for a project might be that the project will cost $28,944 USD, will require 432 hours and will take 43 days, beginning January 1st and completing February 12th. Whether the estimate is accurate, defined as close to actual performance, is unknown. This is precision bias, a form of cognitive bias in which precision and accuracy are conflated. Where precision bias occurs, high precision is taken to imply high accuracy; the level of precision gives the impression that the estimate is highly accurate. The probability of a highly precise estimate being accurate is nearly zero; however, add a few decimal places and see how much more easily it is believed. As we have noted before, wrong budgets and/or estimates will increase the risk of project failure.

When I teach estimation I usually begin with the statement that all estimates are wrong. This is done for theatrical effect; however, it is perfectly true. Any estimate that is a single, precise number and that has gone through several negotiations (read: revised downward) is nearly always wrong. However, when we jettison the false veneer of precision, integrate uncertainty and stop randomly padding estimates, we can construct a much more accurate prediction of how a project will perform. Always remember that an estimate is a prediction, not a price.

Like IT professionals, perhaps too optimistic?

In Software Project Estimation: The Budget, Estimate, Plan Continuum we defined a numerical continuum that makes up estimation. There are numerous specific techniques for generating budgets, estimates and plans. The techniques can be sorted into three basic categories, and hybrids exist that leverage components of each.

Expert techniques use the judgment, generally based on experience, of an individual to determine the cost, duration or effort of a project. The primary strengths of an expert approach are that it can be developed relatively quickly and that it is championed by a person who has developed a high level of organizational trust. The obvious weakness of these techniques is the reliance on an individual, with all of his or her biases. Dr. Ricardo Valerdi, in SPaMCAST 84, noted that his research has found that IT personnel are notoriously poor estimators. One of the reasons cited in the interview was that IT personnel are generally overly optimistic about their problem-solving ability. Techniques such as Delphi and Planning Poker use multiple experts to fight individual bias, using collaboration in an attempt to triangulate on a better answer. Developing an estimate or budget this way leverages past performance on a specific project to anchor the estimators' memories, and then uses judgment to determine how much one project will be like another. Expert techniques make the most sense when there is little or no data on which to base the prediction, for instance when a budget is being developed.
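
A minimal sketch of how one round of a multi-expert technique might be summarized follows. Exposing the spread and discussing before converging is the general idea behind Delphi and Planning Poker; the names and numbers here are hypothetical, and this is not a formal description of either technique.

```python
# A minimal sketch of summarizing one round of independent expert estimates.
# A wide spread signals hidden assumptions and triggers discussion and another
# round rather than simply averaging. Values are hypothetical.

from statistics import median

def summarize_round(estimates):
    """estimates: {expert_name: effort_in_days}"""
    values = sorted(estimates.values())
    return {
        "median": median(values),
        "low": values[0],
        "high": values[-1],
        "spread": values[-1] - values[0],
    }

round_1 = {"Ana": 30, "Raj": 55, "Chen": 35, "Pat": 40}
print(summarize_round(round_1))   # {'median': 37.5, 'low': 30, 'high': 55, 'spread': 25}
```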

The second category is parametric estimation. Parametric estimation is generally an estimation technique (as opposed to a budgeting or planning technique, although many commercial products also include planning features) that generates an estimate from historical data on productivity, staffing and quality, which is used to create a set of equations. These equations are then fed information about the size of the project (IFPUG Function Points, for example), project complexity and the predicted capabilities of the team. Tools like SEER-SEM and COCOMO II are parametric estimation tools. The strengths of parametric estimates are derived from the historical performance data used to generate them and from the rigorous estimation process they enforce. The weakness of any parametric estimation model is that it requires the estimator to generate, or have access to, a numerical size, which can add overhead to the project or take time that would arguably be better spent building the software; we have discussed the fallacy of these objections in the discussion of IFPUG Function Points. A bigger issue exists when there is no historical data that can be used to generate the productivity equations. When no internal data exists I would recommend seeking external data (many firms, including the David Consulting Group – my day job – and ISBSG can sell data or help you with this issue). When no trustworthy data exists, parametric estimation does not make sense.
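
To show the general shape of such a model, here is a sketch in which effort grows non-linearly with size and is scaled by capability and complexity multipliers. The coefficients and factors below are placeholders, not the published COCOMO II or SEER-SEM calibrations.

```python
# A sketch of the general form of a parametric model: effort = a * size^b,
# scaled by multiplicative adjustment factors. Coefficients are placeholders,
# not calibrated values from any commercial or published model.

def parametric_effort(size_fp, a=2.9, b=1.05, adjustment_factors=()):
    """Return effort in person-hours for a project of size_fp function points."""
    effort = a * (size_fp ** b)
    for factor in adjustment_factors:
        effort *= factor
    return effort

# 350 function points, a strong team (0.9) working on a complex architecture (1.2)
print(round(parametric_effort(350, adjustment_factors=[0.9, 1.2])))
```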

Work breakdown structures are the third category. This category is generally used for planning and, in some cases, as a means of building a bottom-up estimate. In this category a planner or team generates a list of the tasks needed to complete the job. The level of granularity of the tasks can vary greatly – I had a colleague who planned tasks in hourly increments. Constraints, staffing and sequence can be added to the plan to generate a schedule. The sprint backlog used in Scrum is a form of this technique. The power of these techniques is derived from a focus on what is to be done, by whom and when, at an actionable level of detail. The problem is that you need an incredible amount of information about the project and the project team to be able to generate an accurate task list, let alone an accurate project schedule. It is well known that the amount of data needed for this technique is generally only known accurately over short time horizons; however, I have seen processes that require detailed schedules for long projects up to a year before they are scheduled to start. These techniques are best used for deriving short-term plans.
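
The bottom-up mechanics are simple enough to show in a few lines: list the tasks, attach effort and roll up the totals. The phases, tasks and hours below are hypothetical, and the hard part, as noted above, is knowing enough to make the task list accurate in the first place.

```python
# A minimal sketch of a work-breakdown-structure plan rolled up bottom-up.
# Phase names, tasks and hours are hypothetical.

wbs = {
    "Requirements": [("interview users", 24), ("write stories", 16)],
    "Design":       [("architecture spike", 20), ("data model", 12)],
    "Build":        [("web module", 80), ("reporting", 40)],
    "Test":         [("test cases", 24), ("test execution", 32)],
}

for phase, tasks in wbs.items():
    print(phase, sum(hours for _, hours in tasks), "hours")

total_hours = sum(hours for tasks in wbs.values() for _, hours in tasks)
print("Bottom-up total:", total_hours, "hours")   # 248 hours
```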

Most IT organizations tend to fixate on one of these categories of techniques; however, organizations that understand the differences between a budget, an estimate and a plan will use techniques from all three categories, using the data and knowledge gained from each tool or technique as a feedback loop to improve the performance of all the techniques they use. For example, an organization I recently spoke with uses both parametric and expert techniques to generate estimates for critical projects. Each technique causes the estimation team to surface different assumptions that need to be understood when deciding which work can be done and how much money to ask for from the business.

Do I budget, estimate or plan the number of crawfish I am going to eat?

Do I budget, estimate or plan the number of crawfish I am going to eat?

Software project estimation is a conflation of three related but different concepts: budgeting, estimation and planning. These terms are typical in a normal commercial organization; however, they might be called different things depending on your business model. For example, organizations that sell software services typically develop sales bids instead of budgets. The budget-to-estimate-to-plan evolution follows the path a project team takes as they learn about the project.

Budgeting is the first step. You usually build a budget at the point when you know the least about the project. Looking back on my corporate career, I can't count how many late nights were spent in November conceptualizing projects with my clients so that we could build a budget for the following year. The figures we came up with were (at best) based on an analogy to a similar project. Even more intriguing was that accounting expected us to submit a single number for each project (if you threw in decimal points they were more apt to believe the number). Budget figures are generated when we know the least, which means we are at the widest point of the cone of uncertainty. A term that is sometimes used instead of a budget is rough order of magnitude.

The second stop on the estimation journey generally occurs as the project is considered or staged in the overall project portfolio.  An estimate generally provides a more finite approximation of cost, effort and duration based on deeper knowledge. There is a wide range of techniques for generating an estimate, ranging from analogies to parametric estimation (a process based on quantitative inputs, expected behaviors and past performance).  The method you use depends on the organization’s culture and the amount of available information.  Mature estimation organizations almost always express an estimate either as a range or as a probability.  Estimates can be generated iteratively as you gather new information and experience.

The final stop in the decomposition from budget to estimate to plan is the plan. A plan is the work breakdown structure for the project (generally with task estimates and resources assigned) or a task list. In order to create a plan you must have a fairly precise understanding of what you are building and how you are going to build it. Good planning can only occur when a team is in the thinnest part of the cone of uncertainty, or, in other words, when you have significant knowledge and information about what you are planning. Immature organizations often build a plan for a project, sum the effort and the cost, and then call the total an estimate (this is called bottom-up estimating), which means they must pretend to know more than they really can. More mature organizations plan iteratively up to a short-term planning horizon (in Agile that would be the duration of a sprint) and then estimate (top-down) for periods outside the short-term planning window.

Short Descriptions:

  • Budgeting: Defines how much we have to spend and influences scope.   A budget is generally a single number that ignores the cone of uncertainty.
  • Estimating: Defines an approximation of one or more of the basic attributes that define the size of the project; the attributes include cost, effort and duration. An estimate is generally given as a range based on where the project is in the cone of uncertainty.
  • Planning: Builds the task list or the work breakdown so that resources can be planned and organized. Planning occurs at the narrowest part of the cone of uncertainty.

Estimating means many things to many people. In order to understand the process, and why some form of estimation will always be required in any organization, we need to unpack the term and consider each of its component parts. Each step along the continuum from budgeting to planning provides different information and requires different levels of information, ranging from the classic back-of-the-napkin concept (budget) to a task list generated in a sprint planning session (plan). Having one does not replace the need for the others.
