Listen Now
Subscribe on iTunes
Check out the podcast on Google Play Music

The next Software Process and Measurement Cast features our interview with Brad Clark.  Brad and I talked about cost estimation, estimation in government, COCOMO II, and what is on the way in COCOMO III. Even if you are firmly in the #NoEstimates camp, this interview will give you ideas to think about!

Brad’s Bio

Dr. Brad Clark is Vice-President of Software Metrics Inc. – a Virginia-based consulting company. His area of expertise is in software cost and schedule data collection, analysis and modeling. He also works with clients to set up their own estimation capability for use in planning and managing. He has also helped clients with software cost and schedule feasibility analysis and cost estimation training.

Dr. Clark received his Master’s in Software Engineering in 1995 and Ph.D. in Computer Science in 1997 from the University of Southern California. He is a co-author of the most widely used Software Cost Estimation model in the world, COCOMO II. This model estimates the effort and duration required to complete a software development project.

Email: brad@software-metrics.com

Re-Read Saturday News

This week we tackle Chapter 5 of Holacracy: The New Management System for a Rapidly Changing World by Brian J. Robertson, published by Henry Holt and Company in 2015.  Chapter 5, Operations, puts the roles and policies defined in governance to work.  Next week we will have some VERY exciting news about the next book in the Re-read Saturday feature!


Strengths and Weaknesses are up in the air!

Jeremy Berriault provided an example from his presentation at QAI Quest 2017 for us to count test case points.  Jeremy, of QA Corner, indicated that baseline data was required to effectively run the three test cases in his example.

The logon, transaction, reports, and expected output blocks represent verification points.  The arrows from one test case to another represent interfaces, and steps are . . . steps.  The results of the count are as follows:

| Test Case   | Number of Steps | Interfaces | Verification Points | Baseline Test Data | Complexity |
|-------------|-----------------|------------|---------------------|--------------------|------------|
| Test case 1 | 4               | 0          | 4                   | Required           | Medium     |
| Test case 2 | 4               | 0          | 4                   | Required           | Medium     |
| Test case 3 | 3               | 2          | 3                   | Required           | Medium     |

Deriving the complexity leverages the following chart:

 

How do you measure out-of-the-ordinary packages?

Independent testing groups are often asked how long and how much effort is required to test a piece of work.  Several size estimation techniques are actively in use in many organizations.  Each of these techniques begins by deriving size, either based on a set of rules or through relative sizing.  Size, once derived, is used to estimate effort.  Effort is then used to generate cost, staffing and duration estimates.  The first sizing technique is “Test Case Points.”

Test Case Points are a unit of measurement generated from the testable requirements based on a set of rules.  The process is straightforward:

  1. Identify the testable requirements in a piece of work.  Use Cases or technical requirements documents are used for identifying testable requirements.  
  2. Identify the complexity of each testable requirement.  Test case points evaluate four factors to determine complexity:
    1. The number of test steps. The number of execution steps needed to arrive at an expected (or unexpected) outcome after all preconditions have been satisfied.
    2. The number of interfaces to other requirements. A simple count of the number of interfaces in the test case.
    3. The number of verification points. A simple count of the points in the test case where the results are evaluated for correctness.
    4. The need for baseline test data. An evaluation of whether data needs to be created to execute the test case.  


Once all of the simple, medium and complex test cases are identified, they are summed by category.

  3. Weight each category.  

  4. Sum the weighted categories together to yield the total test case points (see the sketch below).
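
To make steps 1 through 4 concrete, here is a minimal sketch in Python. The complexity classifications reuse the three medium test cases from the worked example earlier on this page; the category weights are illustrative assumptions, not values prescribed by the technique.

```python
# Minimal sketch of steps 1-4: sum classified test cases by category and
# apply category weights to get total (unadjusted) test case points.
# The weights are assumed for illustration; calibrate them with your own data.
from collections import Counter

# Steps 1 and 2: test cases identified and classified by complexity
# (three test cases classified as medium, as in the earlier example).
classified_test_cases = ["medium", "medium", "medium"]

# Assumed weights per complexity category (hypothetical values).
category_weights = {"simple": 1, "medium": 2, "complex": 3}

# Sum the test cases by category.
category_counts = Counter(classified_test_cases)

# Steps 3 and 4: weight each category and sum the weighted categories.
total_test_case_points = sum(
    category_weights[category] * count
    for category, count in category_counts.items()
)

print(category_counts)         # Counter({'medium': 3})
print(total_test_case_points)  # 6 with the assumed weights
```

With real data, the counts would span all three complexity categories, and the weights would come from an organization’s own calibration.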

The goal of test case points is to use size to generate an estimate.  Every version of test case points I have worked with uses a set of factors to adjust the size as part of the sizing process.

  5. Develop an estimation adjustment weighting based on a set of factors (for those familiar with IFPUG Function Points, this adjustment is a similar process to the one for determining the value adjustment factor). The factors are:
    1. Count or Single Factor Adjustment Factors
      Factor 14 – Operating System Combinations (simple count)
      Factor 15 – Browser Combinations (simple count)
      Factor 16 – Productivity Improvement from Second Iteration Onwards (percentage)
    2. Factors that leverage a combination of a fixed factor and complexity weighting
      Factor 1 – Domain Knowledge & Complexity
      Factor 2 – Technical Know How
      Factor 3 – Integration with other Hardware Devices such as Handheld Devices, Scanners, Printers
      Factor 4 – Multi-lingual Support
      Factor 5 – Software/Hardware Setup
      Factor 6 – Environment Setup
      Factor 7 – Build Management
      Factor 8 – Configuration Management
      Factor 9 – Preparation of Test Bed
      Factor 10 – Stable Requirements
      Factor 11 – Offshore/Onsite Coordination
      Factor 12 – Test Data Preparation
      Factor 13 – Network Latency

 

  6. Generate an estimate using the following formula:

Weighted Test Case Points × Adjustment Factor × Historical Productivity Rate

In many cases, organizations generate estimates for each type of work separately, using only the adjustment factors that would affect that type of work.  An example of a type of work is test case generation.  Factor 5, software/hardware setup, would not be predictive of the effort needed to generate test cases.
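
Continuing the sketch above, steps 5 and 6 might look like the following. The adjustment factor and the historical productivity rate are hypothetical placeholders; in practice, the adjustment factor is built from the sixteen factors listed above and the productivity rate comes from an organization’s own history.

```python
# Sketch of turning weighted test case points into an effort estimate:
# Weighted Test Case Points x Adjustment Factor x Historical Productivity Rate.
# All of the numbers below are hypothetical placeholders, not calibrated values.

weighted_test_case_points = 6       # from the sizing sketch above
adjustment_factor = 1.15            # assumed net effect of the adjustment factors
hours_per_test_case_point = 2.5     # assumed historical productivity rate

estimated_effort_hours = (
    weighted_test_case_points * adjustment_factor * hours_per_test_case_point
)

print(f"Estimated effort: about {estimated_effort_hours:.0f} hours")  # about 17 hours
```

A single adjustment factor and rate is a simplification; as described above, organizations often estimate each type of work separately using only the factors relevant to it.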

The process for deriving test case points is fairly straightforward (steps 1 – 4).  The process of turning the test case points into an estimate (steps 5 and 6) is more complicated. Next, we will develop a short example and examine the strengths and weaknesses of the process – some of which are very apparent and others are not.  

Listen Now

Subscribe on iTunes

I am still traveling for the next two weeks. The trip is a mixture of vacation and a board meeting, but that does not mean you will have to forgo your weekly SPaMCAST.  In place of our normal format, I am posting a mix tape of the answers to the “If you could change two things” question I have been asking interviewees for nearly ten years.  This week on SPaMCAST 392 we feature our top downloaded podcasts from 2009:

SPaMCAST 51 – Tim Lister on Adrenaline Junkies and Template Zombies

http://bit.ly/1WERtk5

Tim discussed ending the estimating charade.  First, Tim stated it would be better if we recognized estimating as goal setting; second, he noted that a lot of outsourcing has overshot its mark and reduced our organizational capabilities.

SPaMCAST 67 – Murali Chemuturi on Software Estimation Best Practices, Tools & Techniques

http://bit.ly/1MHDzeJ

Murali used his wishes to state that estimators need a better grasp and understanding of the concepts of productivity and scheduling.

SPaMCAST 69 – Kevin Brennan on Business Analysis

http://bit.ly/1WERB2V

Kevin answered a different question and discussed the message he would share with a C-Level executive to describe why business analysis is important to them.

If you enjoyed the snippets, please use the links to listen to the whole interviews.  Next week, 2010!

HTMA

How to Measure Anything: Finding the Value of “Intangibles” in Business, Third Edition

Chapter 5 of How to Measure Anything: Finding the Value of “Intangibles” in Business, Third Edition, is titled “Calibrated Estimates: How Much Do You Know Now?” Chapter 4 described how to define the decision that needs to be made and the data that will be needed to make that decision. Chapter 5 builds on that first step in Hubbard’s measurement process by providing techniques to determine what you know as you begin the measurement process.  Hubbard addresses two major topics in this chapter. The first is using estimation to quantify what you know, and the second is his process for calibrating estimators.

The Mythical Man-Month

In the eighth essay of The Mythical Man-Month, Fred P. Brooks begins to tackle the concept of estimating. While there are many estimating techniques, Brooks’ approach is a history/data-based approach, which we would understand today as parametric estimation. Parametric estimation is generally a technique that generates a prediction of the effort needed to deliver a project based on historical data on productivity, staffing and quality. Estimating is not a straightforward extrapolation of what has happened in the past to what will happen in the future, and mistaking it as such is fraught with potential issues. Brooks identified two potentially significant estimating errors that can occur when you use the past to predict the future without interpretation.
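
As a rough illustration of what “parametric” means here (a sketch of the general idea, not Brooks’ own model, with made-up numbers), an effort prediction can be generated from a size measure and a historical productivity rate:

```python
# Toy parametric estimate: effort predicted from size and historical productivity.
# The size, unit of measure, and productivity rate are illustrative assumptions.

size_in_units = 120.0            # e.g., function points or test case points
units_per_person_day = 0.75      # average productivity observed on past, similar projects

estimated_effort_person_days = size_in_units / units_per_person_day
print(f"Estimated effort: {estimated_effort_person_days:.0f} person-days")  # 160 person-days
```

Brooks’ two errors, discussed below, are about what goes wrong when numbers like these are plugged in without interpreting the history behind them.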

Often the only data available is the information about one part of the project’s life cycle. The first issue Brooks identified was that you cannot estimate the entire job or project by just estimating the coding and inferring the rest. There are many variables that might affect the relationship between development and testing. For example, some changes can impact more of the code than others, requiring more or less regression testing. The link between the effort required to deliver different types of work is not linear. The ability to estimate based on history requires knowledge of project-specific practices and attributes, including competency, complexity and technical constraints.

Not all projects are the same. The second issue Brooks identified was that one type of project is not applicable for predicting another. Brooks used the differences between small projects and programming systems products to illustrate his point. Each type of work requires different activities, not just scaled-up versions of the same tasks. Similarly, consider the differences in the tasks and activities required for building a smart phone app compared to building a large data warehouse application. Simply put, they are radically different. Brooks drove the point home using the analogy of extrapolating the record time for the 100-yard dash (9.07 seconds according to Wikipedia) to the time to run a mile. The linear extrapolation would mean that a mile (1,760 yards) could be run in roughly 2:40, while the current record is 3:43.13.
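
The arithmetic behind the analogy is a straight linear scaling, which a couple of lines make explicit:

```python
# Linear extrapolation of the 100-yard dash record to the mile, per Brooks' analogy.
dash_record_seconds = 9.07                 # 100-yard dash record cited in the essay
scale = 1760 / 100                         # yards in a mile / yards in the dash

extrapolated_mile_seconds = dash_record_seconds * scale
minutes, seconds = divmod(extrapolated_mile_seconds, 60)
print(f"Extrapolated mile: {int(minutes)}:{seconds:02.0f}")            # about 2:40

actual_mile_record_seconds = 3 * 60 + 43.13                            # the 3:43.13 record
print(f"Actual record: {actual_mile_record_seconds:.2f} seconds")      # 223.13
```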

A significant portion of this essay is a review of a number of studies that illustrate the relationship between work done and the estimate. Brooks used these studies to highlight different factors that could impact the ability to extrapolate what has happened in the past into an estimate of the future (note: I infer from the descriptions that these studies dealt with the issues of completeness and relevance). The surveys, listed by the person that generated the data, and the conclusions we can draw from an understanding of the data are:

  1. Charles Portman’s Data – Slippages occurred primarily because only half the time available was productive. Unrelated jobs, meetings, paperwork, downtime, vacations and other non-productive tasks consumed the remainder.
  2. Joel Aron’s Data – Productivity was negatively related to the number of interactions among programmers. As the number of interactions goes up, productivity goes down.
  3. John Harr’s Data – The variation between estimates and actuals tended to be affected by the size of workgroups, length of time and number of modules. The complexity of the program being worked on could also be a contributor.
  4. OS/360 Data – Confirmed the striking differences in productivity driven by the complexity and difficulty of the task.
  5. Corbató’s Data – Programming languages affect productivity. Higher-level languages are more productive. Said a little differently, writing a function in Ruby on Rails requires less time than writing the same function in macro assembler language.

I believe that the surveys and data discussed are less important than the recognition that there are many factors that must be addressed when trying to predict the future. In the end, estimation requires historical data regardless of method, but the data must be relevant. Relevance is shorthand for accounting for the factors that affect the type of work you are doing. In homogeneous environments, complexity and language may not be as big a determinant of productivity as the number of interactions driven by team size or the amount of non-productive time teams have to absorb. The problem with historical data is that gathering the data requires effort, time and/or money.  The need to expend resources to generate, collect or purchase historical data is often used as a bugaboo to resist collecting the data and as a tool to avoid using parametric or historical estimating techniques.

Recognize that the term historical data should not scare you away.  Historical data can be as simple as a Scrum team collecting its velocity or productivity every sprint and using it to calculate an average for planning and estimating. Historical data can be as complex as a palette of information including project effort, size, duration, team capabilities and project context.
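
At the simple end of that spectrum, a minimal sketch (with made-up sprint numbers) of averaging velocity and using it for planning might look like this:

```python
# Simple historical data: average recent velocity, then use it to forecast how many
# sprints the remaining backlog will take. All of the numbers are illustrative.
import math

recent_velocities = [21, 18, 24, 20, 22]   # story points completed in the last five sprints
remaining_backlog_points = 130

average_velocity = sum(recent_velocities) / len(recent_velocities)
sprints_remaining = math.ceil(remaining_backlog_points / average_velocity)

print(f"Average velocity: {average_velocity:.1f} points per sprint")   # 21.0
print(f"Backlog forecast: about {sprints_remaining} sprints")          # about 7
```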

Previous installments of the Re-read of The Mythical Man-Month

Introductions and The Tar Pit

The Mythical Man-Month (The Essay)

The Surgical Team

Aristocracy, Democracy and System Design

The Second-System Effect

Passing the Word

Why did the Tower of Babel fall?

Sometimes estimation leaves you in a fog!

I recently asked a group of people the question, “What are the two largest issues in project estimation?” I received a wide range of answers, probably a reflection of the range of individuals answering.  Five macro categories emerged from the answers. They are:

  1. Requirements. The impact of unclear and changing requirements on budgeting and estimation was discussed in detail in the entry, Requirements: The Chronic Problem with Project Estimation.  Bottom line: change is required to embrace dynamic development methods, and that change will require changes in how the organization evaluates projects.
  2. Estimate Reliability. The perceived lack of reliability of an estimate can be generated by many factors, including differences between development and estimation processes. One of the respondents noted, “most of the time the project does not believe the estimate and thus comes up with their own, which is primarily based on what they feel the customer wants to hear.”
  3. Project History. Both analogous and parametric estimation processes use the past as an input in determining the future.  Collection of consistent historical data is critical to learning and not repeating the same mistakes over and over.  According to Joe Schofield, “few groups retain enough relevant data from their experiences to avoid relearning the same lesson.”
  4. Labor Hours Are Not The Same As Size.  Many estimators estimate the effort needed to perform either the whole project or individual tasks.  By jumping immediately to effort, estimators miss all of the nuances that affect the level of effort required to deliver value.  According to Ian Brown, “then the discussion basically boils down to opinions of the number of hours, rather that assessing other attributes that drive the number of hours that something will take.”
  5. No One Dedicated to Estimation.  Estimating is a skill built on a wide range of techniques that need to be learned and practiced.  When no one is dedicated to developing and maintaining estimates, it is rare that anyone can learn to estimate consistently, which affects reliability.  To quote one of the respondents, “consistency of estimation from team to team, and within a team over time, is non-existent.”

Each of the top five issues is solvable without throwing out the concept of estimation, which is critical for planning at the organization, portfolio and product levels.  Every organization will have to wrestle with its own solution to the estimation conundrum. However, the first step is to recognize the issues you face and your goals for the estimation process.