Estimation


Listen Now!

In SPaMCAST 69, Kevin McKeel talks about using natural language processing to aid software estimation tools. Is this the end of the world for the software estimating profession or the beginning of a golden age? Probably both; listen and draw your own conclusions.

Kevin’s Bio

Mr. Kevin McKeel has over 25 years of experience in software cost estimation. He is a CCEA and SAFe Architect and received the prestigious 2021 Technical Achievement of the Year award from ICEAA for his research on automated software sizing using AI and NLP. Mr. McKeel holds a Bachelor’s in Business Administration (Finance, ‘89) from James Madison University and a Master’s in Business Administration (Decision Systems, ’92) from The George Washington University.

LinkedIn linkedin.com/in/kevin-mckeel-2457235

Website: logapps.com


Listen Now
Subscribe on iTunes
Check out the podcast on Google Play Music

The next Software Process and Measurement Cast features our interview with Brad Clark.  Brad and I talked about cost estimation, estimation in government, COCOMO II, and what is on the way in COCOMO III. Even if you are firmly in the #NoEstimates camp, this interview will give you ideas to think about!

Brad’s Bio

Dr. Brad Clark is Vice-President of Software Metrics Inc., a Virginia-based consulting company. His area of expertise is software cost and schedule data collection, analysis and modeling. He works with clients to set up their own estimation capability for use in planning and managing, and has also helped clients with software cost and schedule feasibility analysis and cost estimation training.

Dr. Clark received his Master’s in Software Engineering in 1995 and Ph.D. in Computer Science in 1997 from the University of Southern California. He is a co-author of the most widely used Software Cost Estimation model in the world, COCOMO II. This model estimates the effort and duration required to complete a software development project.

Email: brad@software-metrics.com

Re-Read Saturday News

This week we tackle Chapter 5 of Holacracy: The New Management System for a Rapidly Changing World by Brian J. Robertson published by Henry Holt and Company in 2015.  Chapter 5, Operations, puts the roles and policies defined in governance to work.  Next week we will have some VERY exciting news about the next book in the Re-read Saturday feature!

 

How do you measure out-of-the-ordinary packages?

Independent testing groups are often asked how long and how much effort is required to test a piece of work. Several size estimation techniques are actively in use in many organizations. Each of these techniques begins by deriving size, either based on a set of rules or through relative sizing. Size, once derived, is used to estimate effort. Effort is then used to generate cost, staffing and duration estimates. The first sizing technique is "Test Case Points."

Test Case Points are a unit of measurement generated from the testable requirements based on a set of rules. The process is straightforward:

  1. Identify the testable requirements in a piece of work. Use cases or technical requirements documents are used for identifying testable requirements.
  2. Identify the complexity of each testable requirement. Test case points evaluate four factors to determine complexity:
    1. The number of test steps. The number of execution steps needed to arrive at an expected (or unexpected) outcome after all preconditions have been satisfied.
    2. The number of interfaces to other requirements. A simple count of the number of interfaces in the test case.
    3. The number of verification points. A simple count of the points in the test case where the results are evaluated for correctness.
    4. Need for baseline test data. An evaluation of whether data needs to be created to execute the test case.


Once all of the simple, medium and complex test cases are identified, they are summed by category.

  3. Weight each category.

  4. Sum the weighted categories together to yield the total test case points.

The goal of test case points is to use size to generate an estimate.  Every version of test case points I have worked with uses a set of factors to adjust the size as part of the sizing process.

  5. Develop an estimation adjustment weighting based on a set of factors (for those familiar with IFPUG Function Points, this adjustment is similar to the process for determining the value adjustment factor). The factors fall into two groups:
    1. Count or single-factor adjustment factors
      Factor 14 – Operating System Combinations (simple count)
      Factor 15 – Browser Combinations (simple count)
      Factor 16 – Productivity Improvement from Second Iteration Onwards (percentage)
    2. Factors that leverage a combination of a fixed factor and complexity weighting
      Factor 1 – Domain Knowledge & Complexity
      Factor 2 – Technical Know-How
      Factor 3 – Integration with Other Hardware Devices such as Handheld Devices, Scanners, Printers
      Factor 4 – Multi-lingual Support
      Factor 5 – Software/Hardware Setup
      Factor 6 – Environment Setup
      Factor 7 – Build Management
      Factor 8 – Configuration Management
      Factor 9 – Preparation of Test Bed
      Factor 10 – Stable Requirements
      Factor 11 – Offshore/Onsite Coordination
      Factor 12 – Test Data Preparation
      Factor 13 – Network Latency

 

  6. Generate an estimate using the following formula:

Weighted Test Case Points × Adjustment Factor × Historical Productivity Rate
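To make the arithmetic concrete, here is a minimal Python sketch of steps 3 through 6. It is only an illustration; the category weights, adjustment factor and productivity rate are assumed values, not values prescribed by the technique, and an organization would substitute its own calibrated numbers.

```python
# Hedged illustration of the Test Case Point calculation (steps 3-6).
# All numbers below are assumed for the example, not prescribed values.

test_case_counts = {"simple": 40, "medium": 25, "complex": 10}  # output of steps 1-2
category_weights = {"simple": 1, "medium": 2, "complex": 3}     # assumed weights (step 3)

# Step 4: sum the weighted categories to get the total test case points.
weighted_test_case_points = sum(
    count * category_weights[category]
    for category, count in test_case_counts.items()
)

adjustment_factor = 1.15          # step 5: assumed net effect of the 16 adjustment factors
hours_per_test_case_point = 0.75  # assumed historical productivity rate

# Step 6: Weighted Test Case Points x Adjustment Factor x Historical Productivity Rate
estimated_effort_hours = (
    weighted_test_case_points * adjustment_factor * hours_per_test_case_point
)

print(f"Total test case points: {weighted_test_case_points}")
print(f"Estimated test effort: {estimated_effort_hours:.1f} hours")
```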

In many cases, organizations generate estimates for types of work separately, using only the adjustment factors that would affect that type of work. An example of a type of work is test case generation. Factor 5, software/hardware setup, would not be predictive of the effort for setting up test cases.

The process for deriving test case points is fairly straightforward (steps 1 – 4). The process of turning the test case points into an estimate is more complicated. Next, we will develop a short example and examine the strengths and weaknesses of the process – some of which are very apparent and others are not.

Listen Now

Subscribe on iTunes

I am still traveling for the next two weeks. The trip is a mixture of vacation and a board meeting, but that does not mean you will have to forgo your weekly SPaMCAST. In place of our normal format, I am posting a mix tape of the answers to the “If you could change two things” question I have been asking interviewees for nearly ten years. This week on SPaMCAST 392 we feature our most downloaded podcasts from 2009:

SPaMCAST 51 – Tim Lister on Adrenaline Junkies and Template Zombies

http://bit.ly/1WERtk5

Tim discussed ending the estimating charade.  Tim stated it would be better if we recognized estimating as goal setting. Secondly, he noted that a lot of outsourcing has overshot its mark and reduced our organizational capabilities.

SPaMCAST 67 – Murali Chemuturi on Software Estimation Best Practices, Tools & Techniques

http://bit.ly/1MHDzeJ

Murali used his wishes to state that estimators need a better grasp and understanding of the concepts of productivity and scheduling.

SPaMCAST 69 – Kevin Brennan on Business Analysis

http://bit.ly/1WERB2V

Kevin answered a different question and discussed the message he would share with a C-Level executive to describe why business analysis is important to them.

If you enjoyed the snippets, please use the links to listen to the whole interviews. Next week, 2010!

The Mythical Man-Month

In the seventh essay of The Mythical Man-Month, Fred P. Brooks begins to tackle the concept of estimating. While there are many estimating techniques, Brooks’ approach is a history/data-based approach, which we would understand today as parametric estimation. Parametric estimation is generally a technique that generates a prediction of the effort needed to deliver a project based on historical data of productivity, staffing and quality. Estimating is not a straightforward extrapolation of what has happened in the past to what will happen in the future, and mistaking it as such is fraught with potential issues. Brooks identified two potentially significant estimating errors that can occur when you use the past to predict the future without interpretation.

Often the only data available is the information about one part of the project’s life cycle. The first issue Brooks identified was that you cannot estimate the entire job or project by just estimating the coding and inferring the rest. There are many variables that might affect the relationship between development and testing. For example, some changes can impact more of the code than others, requiring more or less regression testing. The link between the effort required to deliver different types of work is not linear. The ability to estimate based on history requires a knowledge of project specific practices and attributes including competency, complexity and technical constraints.

Not all projects are the same. The second issue Brooks identified was that one type of project is not applicable for predicting another. Brooks used the differences between small projects and programming systems products to illustrate his point. Each type of work requires different activities, not just scaled-up versions of the same tasks. Similarly, consider the differences in the tasks and activities required for building a smart phone app compared to building a large data warehouse application. Simply put, they are radically different. Brooks drove the point home using the analogy of extrapolating the record time for the 100-yard dash (9.07 seconds, according to Wikipedia) to the time to run a mile. A linear extrapolation would mean that a mile (1,760 yards) could be run in roughly 2 minutes 40 seconds, while the current record is 3:43.13.
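A quick back-of-the-envelope check, using only the numbers cited above, shows how far the linear extrapolation misses:

```python
# Linear extrapolation of the 100-yard dash record to the mile, as in Brooks' analogy.
dash_record_seconds = 9.07        # 100-yard dash record cited above
yards_in_a_mile = 1760

extrapolated_mile_seconds = dash_record_seconds * (yards_in_a_mile / 100)  # ~159.6 s, about 2:40
actual_mile_record_seconds = 3 * 60 + 43.13                                # 3:43.13

print(f"Linear extrapolation: {extrapolated_mile_seconds:.1f} seconds")
print(f"Actual mile record:   {actual_mile_record_seconds:.2f} seconds")
# The large gap illustrates why scaling an estimate linearly across very
# different kinds of work is misleading.
```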

A significant portion of this essay is a review of a number of studies that illustrate the relationship between work done and the estimate. Brooks used these studies to highlight different factors that could impact the ability to extrapolate what has happened in the past into an estimate of the future (note: I infer from the descriptions that these studies dealt with the issues of completeness and relevance). The surveys, listed by the person that generated the data, and the conclusions we can draw from an understanding of the data include:

  1. Charles Portman’s Data – Slippages occurred primarily because only half the time available was productive. Unrelated jobs, meetings, paperwork, downtime, vacations and other non-productive tasks used the remainder.
  2. Joel Aron’s Data – Productivity was negatively related to the number of interactions among programmers. As the number of interactions goes up, productivity goes down.
  3. John Harr’s Data – The variation between estimates and actuals tends to be affected by the size of workgroups, length of time and number of modules. Complexity of the program being worked on could also be a contributor.
  4. OS/360 Data – Confirmed the striking differences in productivity driven by the complexity and difficulty of the task.
  5. Corbató’s Data – Programming languages affect productivity. Higher-level languages are more productive. Said a little differently, writing a function in Ruby on Rails requires less time than writing the same function in macro assembler language.

I believe that the surveys and data discussed are less important than the recognition that there are many factors that must be addressed when trying to predict the future. In the end, estimation requires historical data regardless of method, but the data must be relevant. Relevance is shorthand for accounting for the factors that affect the type of work you are doing. In homogeneous environments, complexity and language may not be as big a determinant of productivity as the number of interactions driven by team size or the amount of non-productive time teams have to absorb. The problem with historical data is that gathering the data requires effort, time and/or money. The need to expend resources to generate, collect or purchase historical data is often used as a bugaboo to resist collecting the data and as a tool to avoid using parametric or historical estimating techniques.

Recognize that the term historical data should not scare you away. Historical data can be as simple as a Scrum team collecting their velocity or productivity every sprint and using it to calculate an average for planning and estimating. Historical data can be as complex as a palette of information including project effort, size, duration, team capabilities and project context.
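As a minimal sketch of the simple end of that spectrum (the sprint velocities and backlog size below are invented for illustration):

```python
# Using a team's own sprint history as simple historical data for planning.
# The velocities and backlog size are invented for illustration.
sprint_velocities = [23, 27, 25, 30, 26]   # story points completed in recent sprints

average_velocity = sum(sprint_velocities) / len(sprint_velocities)

remaining_backlog_points = 180             # assumed size of the remaining backlog
sprints_needed = remaining_backlog_points / average_velocity

print(f"Average velocity: {average_velocity:.1f} points per sprint")
print(f"Forecast: about {sprints_needed:.1f} sprints to complete the backlog")
```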

Previous installments of the Re-read of The Mythical Man-Month

Introductions and The Tar Pit

The Mythical Man-Month (The Essay)

The Surgical Team

Aristocracy, Democracy and System Design

The Second-System Effect

Passing the Word

Why did the Tower of Babel fall?

Sometimes estimation leaves you in a fog!

I recently asked a group of people the question, “What are the two largest issues in project estimation?” I received a wide range of answers, probably a reflection of the range of individuals answering. Five macro categories emerged from the answers. They are:

  1. Requirements. The impact of unclear and changing requirements on budgeting and estimation was discussed in detail in the entry, Requirements: The Chronic Problem with Project Estimation. Bottom line: change is required to embrace dynamic development methods, and that change will require changes in how the organization evaluates projects.
  2. Estimate Reliability. The perceived lack of reliability of an estimate can be generated by many factors, including differences between development and estimation processes. One of the respondents noted, “most of the time the project does not believe the estimate and thus comes up with their own, which is primarily based on what they feel the customer wants to hear.”
  3. Project History. Both analogous and parametric estimation processes use the past as an input in determining the future. Collection of consistent historical data is critical to learning and not repeating the same mistakes over and over. According to Joe Schofield, “few groups retain enough relevant data from their experiences to avoid relearning the same lesson.”
  4. Labor Hours Are Not The Same As Size. Many estimators estimate either the effort needed to perform the whole project or the effort for individual tasks. By jumping immediately to effort, estimators miss all of the nuances that affect the level of effort required to deliver value. According to Ian Brown, “then the discussion basically boils down to opinions of the number of hours, rather than assessing other attributes that drive the number of hours that something will take.”
  5. No One Dedicated to Estimation. Estimating is a skill built on a wide range of techniques that need to be learned and practiced. When no one is dedicated to developing and maintaining estimates, it is rare that anyone can learn to estimate consistently, which affects reliability. To quote one of the respondents, “consistency of estimation from team to team, and within a team over time, is non-existent.”

Each of the top five issues is solvable without throwing out the concept of estimation, which is critical for planning at the organization, portfolio and product levels. Every organization will have to wrestle with its own solution to the estimation conundrum. However, the first step is to recognize the issues you face and your goals for the estimation process.


There are many levels of estimation, including budgeting, high-level estimation and task planning (detailed estimation). We can link a more classic view of estimation to the Agile planning onion popularized by Mike Cohn. In the Agile planning onion, strategic planning is on the outside of the onion and the planning that occurs in the daily sprint meetings is at the core of the onion. Each layer closer to the core relates more to the day-to-day activity of a team. The #NoEstimates movement eschews developing story- or task-level estimates, and sometimes estimates at higher levels as well. As you get closer to the core of the planning onion, the case for #NoEstimates becomes more compelling.


Planning Onion

 

Budgeting is a strategic form of estimation that most corporate and governmental entities perform. Budgeting relates to the strategy and portfolio layers of the planning onion. #NoEstimates techniques don’t answer the central questions most organizations need to answer at this level, which include:

  1. How much money should I allocate for software development, enhancements and maintenance?
  2. Which projects or products should we fund?
  3. Which projects will return the greatest amount of value?

Budgets are often educated guesses that provide some approximation of the size and cost of the work on the overall backlog. Budgeting provides the basis to allocate resources in environments where demand outstrips capacity. Other than in the most extreme form of #NoEstimate, which eschews all estimates, budgeting is almost always performed.

High-level estimation, performed in the product and release layers of the planning onion, is generally used to forecast when functionality will be available. Release plans and product road maps are types of forecasts that are used to convey when products and functions will be available. These types of estimates can easily be built if teams have a track record of delivering value on a regular basis. #NoEstimates can be applied at this level of planning and estimation by substituting the predictable completion of work items for effort estimates. #NoEstimates at this level of planning can be used only IF conditions that facilitate predictable delivery flow are met. Conditions include:

  1. Stable teams
  2. Adoption of an Agile mindset (at both the team and organizational levels)
  3. A backlog of well-groomed stories

Organizations that meet these criteria can answer the classic project/release questions of when, what and how much based on the predictable delivery rates of #NoEstimates teams (assuming some level of maturity; newly formed teams are never predictable). High-level estimation is closer to the day-to-day operations of the team and connects budgeting to the lowest level of planning in the planning onion.
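As a rough sketch of what forecasting from delivery rate, rather than from effort estimates, can look like (the throughput history and backlog count are invented for illustration):

```python
# Flow-based forecast for a stable team: no effort estimates, just recent throughput.
# The throughput history and backlog count below are invented for illustration.
recent_throughput = [8, 10, 9, 11, 7]   # work items finished in each recent iteration
backlog_remaining = 45                   # items in a well-groomed backlog

best_case_iterations = backlog_remaining / max(recent_throughput)
worst_case_iterations = backlog_remaining / min(recent_throughput)

print(
    f"Forecast: between {best_case_iterations:.1f} and "
    f"{worst_case_iterations:.1f} iterations to clear the backlog"
)
```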

In the standard corporate environment, task-level estimation (typically performed at the iteration and daily planning layers of the onion) is an artifact of project management controls or partial adoptions of Agile concepts. Estimating tasks is often mandated in organizations that allocate individual teams to multiple projects at the same time. The effort estimates are used to enable the organization to allocate slices of time to projects. Stable Agile teams that are allowed to focus on one project at a time and use #NoEstimates techniques have no reason to estimate effort at a task level, due to their ability to consistently say what they will do and then deliver on their commitments. Ceasing task-level estimation and planning is the core change all proponents of #NoEstimates are suggesting.

A special estimation case that needs to be considered is that of commercial or contractual work. These arrangements often represent lower-trust relationships or projects that are perceived to be high risk. The legal contracts agreed upon by both parties often stipulate the answers to the what, when and how much questions before the project starts. Due to the risk the contract creates, both parties must do their best to predict/estimate the future before signing the agreement. Raja Bavani, Senior Director at Cognizant Technology Solutions, suggested in a recent conversation that “#NoEstimates was a non-starter in a contractual environment due to the financial risk both parties accept when signing a contract.”

Estimation is a form of planning, and planning is considered an important competency in most business environments. Planning activities abound, whether planning the corporate picnic or planning the acquisition and implementation of a new customer relationship management system. Most planning activities center on answering a few very basic questions. When will “it” be done? How much will “it” cost? What is “it” that I will actually get? As an organization or team progresses through the planning onion, the need for effort and cost estimation lessens in most cases. #NoEstimates does not remove the need for all types of estimates. Most organizations will always need to estimate in order to budget. Organizations that have stable teams, have adopted the Agile mindset and have a well-groomed backlog will be able to use predictable flow to forecast rather than effort and cost estimation. At the sprint or day-to-day level, Agile teams that predictably deliver value can embrace the idea of #NoEstimates while answering the basic questions of what, when and how much based on performance.

Listen to the Software Process and Measurement Podcast

SPaMCAST 317 tackles a wide range of frequently asked questions, ranging from the possibility of an acceleration trap and the relevance of function points to whether teams have peak loads and safe-to-fail experiments. Questions, answers and controversy!

We will also have the next installment of Kim Pries’s column, The Software Sensei! This week Kim discusses robust software.

The essay starts with “Agile Can Contribute to an Acceleration Trap”

I am often asked whether Agile techniques contribute to an acceleration trap in IT. In an article in The Harvard Business Review, Bruch and Menges (April 2010) define an acceleration trap as the malaise that sets in as an organization falls prey to chronic overloading. It can be interpreted as laziness or recalcitrance, which then elicits even more pressure to perform, generating an even deeper malaise. The results of the pressure/malaise cycle are generally a poor working atmosphere and employee loss. Agile can contribute to an acceleration trap, but only as a reflection of poor practices. Agile is often perceived to induce an acceleration trap in two ways: organizational change and delivery cadence.

Listen to the rest now

Call to action!

We are in the middle of a re-read of John Kotter’s classic Leading Change on the Software Process and Measurement Blog. Are you participating in the re-read? Please feel free to jump in and add your thoughts and comments!

After we finish the current re-read, we will need to decide which book will be next. We are building a list of the books that have had the most influence on readers of the blog and listeners to the podcast. Can you answer the question?

What are the two books that have most influenced your career (business, technical or philosophical)? Send the titles to spamcastinfo@gmail.com.

First, we will compile a list and publish it on the blog. Second, we will use the list to drive future “Re-read” Saturdays. Re-read Saturday is an exciting new feature that began on the Software Process and Measurement blog on November 8th. Feel free to choose your platform; send an email, leave a message on the blog or Facebook, or just tweet the list (use hashtag #SPaMCAST)!

Next

SPaMCAST 318 features our interview with Rob Cross. Rob and I discussed his InfoQ article “How to Incorporate Data Analytics into Your Software Process.” Rob provides ideas on how the theory of big data can be incorporated into big action.

 

Upcoming Events

DCG Webinars:

Agile Risk Management – It Is Still Important
Date: December 18th, 2014
Time: 11:30am EST

Register Now

The Software Process and Measurement Cast has a sponsor.

As many of you know, I do at least one webinar for the IT Metrics and Productivity Institute (ITMPI) every year. The ITMPI provides a great service to the IT profession. ITMPI’s mission is to pull together the expertise and educational efforts of the world’s leading IT thought leaders and to create a single online destination where IT practitioners and executives can meet all of their educational and professional development needs. The ITMPI offers a premium membership that gives members unlimited free access to 400 PDU-accredited webinar recordings and waives the PDU processing fees on all live and recorded webinars. The Software Process and Measurement Cast gets some support if you sign up here. All the revenue our sponsorship generates goes for bandwidth, hosting and new cool equipment to create more and better content for you. Support the SPaMCAST and learn from the ITMPI.

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques, co-authored by Murali Chemuturi and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: “This book will prove that software projects should not be a tedious process, neither for you or your team.” Support SPaMCAST by buying the book here.

Available in English and Chinese.

 

Click this link to listen to SPaMCAST 305

Software Process and Measurement Cast number 305 features our essay on estimation. Estimation is a hotbed of controversy. We begin by synchronizing on what we think the word means. Then, once we have a common vocabulary, we can commence with the fisticuffs. In SPaMCAST 305 we will not shy away from a hard discussion.

The essay begins:

Software project estimation is a conflation of three related but different concepts. The three concepts are budgeting, estimation and planning. These are typical in a normal commercial organization; however, these concepts might be called different things depending on your business model. For example, organizations that sell software services typically develop sales bids instead of budgets. Once the budget is developed, the evolution from budget to estimate and then plan follows a unique path as the project team learns about the project.

Next

Software Process and Measurement Cast number 306 features our interview with Luis Gonçalves.  We discussed getting rid of performance appraisals.  Luis makes the case that performance appraisals hurt people and companies.

Upcoming Events

DCG Webinars:

Raise Your Game: Agile Retrospectives September 18, 2014 11:30 EDT
Retrospectives are a tool that the team uses to identify what they can do better. The basic process – making people feel safe and then generating ideas and solutions so that the team can decide on what they think will make the most significant improvement – puts the team in charge of how they work. When teams are responsible for their own work, they will be more committed to delivering what they promise.
Agile Risk Management – It Is Still Important! October 24, 2014 11:30 EDT
Has the adoption of Agile techniques magically erased risk from software projects? Or, have we just changed how we recognize and manage risk?  Or, more frighteningly, by changing the project environment through adopting Agile techniques, have we tricked ourselves into thinking that risk has been abolished?

 

Upcoming: ITMPI Webinar!

We Are All Biased!  September 16, 2014 11:00 AM – 12:30 PM EST

Register HERE

How we think and form opinions affects our work whether we are project managers, sponsors or stakeholders. In this webinar, we will examine some of the most prevalent workplace biases such as anchor bias, agreement bias and outcome bias. Strategies and tools for avoiding these pitfalls will be provided.

Upcoming Conferences:

I will be presenting at the International Conference on Software Quality and Test Management in San Diego, CA on October 1.  I have a great discount code!!!! Contact me if you are interested.

I will be presenting at the North East Quality Council 60th Conference October 21st and 22nd in Springfield, MA.

More on all of these great events in the near future! I look forward to seeing all SPaMCAST readers and listeners that attend these great events!


Listen to the Software Process and Measurement Cast 303

Software Process and Measurement Cast number 303 features our essay titled “Topics in Estimation.” This essay is a collection of smaller essays that cover a wide range of issues affecting estimation. Topics include estimation and customer satisfaction, risk and project estimates, estimation frameworks, and size and estimation. Something to help and irritate everyone; we are talking about estimation – what would you expect?

We also have a new installment of Kim Pries’s Software Sensei column.  In this installment Kim discusses education as defect prevention.  Do we really believe that education improves productivity, quality and time to market?

Listen to the Software Process and Measurement Cast 303

Next

Software Process and Measurement Cast number 304 will feature our long-awaited interview with Jamie Lynn Cooke, author of The Power of the Agile Business Analyst. We discussed the definition of an Agile business analyst and what they actually do in Agile projects. Jamie provides a clear and succinct explanation of the role and value of Agile business analysts.

Upcoming Events

I will be presenting at the International Conference on Software Quality and Test Management in San Diego, CA on October 1.  I have a great discount code!!!! Contact me if you are interested!

I will be presenting at the North East Quality Council 60th Conference October 21st and 22nd in Springfield, MA.

More on all of these great events in the near future! I look forward to seeing all SPaMCAST readers and listeners that attend these great events!

