Listen Now!

In SPaMCAST 69, Kevin McKeel talks about using natural language processing (NLP) to aid software estimation tools. Is this the end of the world for the software estimating profession or the beginning of a golden age? Probably both; listen and draw your own conclusions.

Kevin’s Bio

Mr. Kevin McKeel has over 25 years of experience in software cost estimation. He is a CCEA and SAFe Architect and received the prestigious 2021 Technical Achievement of the Year award from ICEAA for his research on automated software sizing using AI and NLP. Mr. McKeel holds a Bachelor’s in Business Administration (Finance, ’89) from James Madison University and a Master’s in Business Administration (Decision Systems, ’92) from The George Washington University.

LinkedIn linkedin.com/in/kevin-mckeel-2457235

Website: logapps.com


We are enjoying a bit of a holiday. Yesterday I toured La Sagrada Familia in Barcelona. The basilica was started well over 100 years ago and is now planned to be completed in 2026. I am struck by how persistent and motivating an idea can be. While function points are not as old, they are equally persistent and useful. Please enjoy this throwback essay on function points:

Listen Now
Subscribe on iTunes
Check out the podcast on Google Play Music

The next Software Process and Measurement Cast features our interview with Brad Clark. Brad and I talked about cost estimation, estimation in government, COCOMO II, and what is on the way in COCOMO III. Even if you are firmly in the #NoEstimates camp, this interview will give you ideas to think about!

Brad’s Bio

Dr. Brad Clark is Vice-President of Software Metrics Inc., a Virginia-based consulting company. His area of expertise is software cost and schedule data collection, analysis, and modeling. He works with clients to set up their own estimation capability for use in planning and managing, and has helped clients with software cost and schedule feasibility analysis and cost estimation training.

Dr. Clark received his Master’s in Software Engineering in 1995 and Ph.D. in Computer Science in 1997 from the University of Southern California. He is a co-author of the most widely used Software Cost Estimation model in the world, COCOMO II. This model estimates the effort and duration required to complete a software development project.

Email: brad@software-metrics.com

Re-Read Saturday News

This week we tackle Chapter 5 of Holacracy: The New Management System for a Rapidly Changing World by Brian J. Robertson, published by Henry Holt and Company in 2015.  Chapter 5, Operations, puts the roles and policies defined in governance to work.  Next week we will have some VERY exciting news about the next book in the Re-read Saturday feature!

Strengths and Weaknesses are up in the air!

Jeremy Berriault provided an example from his presentation at QAI Quest 2017 for us to count test case points. Jeremy, of QA Corner, indicated that baseline data was required to effectively run the three test cases in his example.

The logon, transaction, reports, and expected output blocks represent verification points. The arrows from one test case to another represent interfaces, and steps are . . . steps. The results of the count are as follows:

| Test Case | Number of Steps | Interfaces | Verification Points | Baseline Test Data | Complexity |
|---|---|---|---|---|---|
| Test case 1 | 4 | 0 | 4 | Required | Medium |
| Test case 2 | 4 | 0 | 4 | Required | Medium |
| Test case 3 | 3 | 2 | 3 | Required | Medium |

Deriving the complexity leverages the following chart:

 

How do you measure out-of-the-ordinary packages?

Independent testing groups are often asked how long and how much effort will be required to test a piece of work. Several size estimation techniques are actively in use in many organizations. Each of these techniques begins by deriving size, either based on a set of rules or through relative sizing. Size, once derived, is used to estimate effort. Effort is then used to generate cost, staffing, and duration estimates. The first sizing technique is “Test Case Points.”

Test Case Points are a unit of measurement generated from the testable requirements based on a set of rules. The process is straightforward:

  1. Identify the testable requirements in a piece of work. Use cases or technical requirements documents are used to identify testable requirements.
  2. Identify the complexity of each testable requirement. Test case points evaluate four factors to determine complexity:
    1. The number of test steps. The number of execution steps needed to arrive at an expected (or unexpected) outcome after all preconditions have been satisfied.
    2. The number of interfaces to other requirements. A simple count of the number of interfaces in the test case.
    3. The number of verification points. A simple count of the points in the test case where the results are evaluated for correctness.
    4. The need for baseline test data. An evaluation of whether data needs to be created to execute the test case.
  3. Once all of the simple, medium, and complex test cases are identified, sum them by category and weight each category.
  4. Sum the weighted categories together to yield the total test case points (a minimal sketch of steps 2 through 4 follows this list).
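A minimal sketch of steps 2 through 4 in Python, assuming purely illustrative complexity thresholds and category weights (an organization's own test case point rules and complexity chart would supply both):

```python
from collections import Counter

# Hypothetical category weights; real values come from an organization's own
# test case point rules, not from this sketch.
WEIGHTS = {"simple": 1, "medium": 2, "complex": 3}

def classify(steps, interfaces, verification_points, needs_baseline_data):
    """Step 2: classify a test case as simple, medium, or complex.

    The scoring and thresholds below are illustrative stand-ins for a real
    complexity chart.
    """
    score = steps + interfaces + verification_points + (2 if needs_baseline_data else 0)
    if score <= 5:
        return "simple"
    if score <= 10:
        return "medium"
    return "complex"

def total_test_case_points(test_cases):
    """Steps 3 and 4: sum test cases by category, weight each category, and total them."""
    counts = Counter(classify(**tc) for tc in test_cases)
    return sum(WEIGHTS[category] * count for category, count in counts.items())

# Hypothetical testable requirements expressed as test case attributes.
example = [
    {"steps": 2, "interfaces": 0, "verification_points": 1, "needs_baseline_data": False},  # simple
    {"steps": 4, "interfaces": 1, "verification_points": 3, "needs_baseline_data": True},   # medium
    {"steps": 9, "interfaces": 3, "verification_points": 5, "needs_baseline_data": True},   # complex
]
print(total_test_case_points(example))  # (1 x 1) + (1 x 2) + (1 x 3) = 6 test case points
```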

The goal of test case points is to use size to generate an estimate.  Every version of test case points I have worked with uses a set of factors to adjust the size as part of the sizing process.

  5. Develop an estimation adjustment weighting based on a set of factors (for those familiar with IFPUG Function Points, this adjustment is a similar process to the one used to determine the value adjustment factor). The factors fall into two groups:
    1. Count or single-factor adjustment factors:
      Factor 14 – Operating System Combinations (simple count)
      Factor 15 – Browser Combinations (simple count)
      Factor 16 – Productivity Improvement from Second Iteration Onwards (percentage)
    2. Factors that leverage a combination of a fixed factor and a complexity weighting:
      Factor 1 – Domain Knowledge & Complexity
      Factor 2 – Technical Know How
      Factor 3 – Integration with Other Hardware Devices such as Handheld Devices, Scanners, Printers
      Factor 4 – Multi-lingual Support
      Factor 5 – Software/Hardware Setup
      Factor 6 – Environment Setup
      Factor 7 – Build Management
      Factor 8 – Configuration Management
      Factor 9 – Preparation of Test Bed
      Factor 10 – Stable Requirements
      Factor 11 – Offshore/Onsite Coordination
      Factor 12 – Test Data Preparation
      Factor 13 – Network Latency
  6. Generate an estimate using the following formula:

Weighted Test Case Points × Adjustment Factor × Historical Productivity Rate

In many cases, organizations generate estimates for each type of work separately, using only the adjustment factors that would affect that type of work. An example of a type of work is test case generation; Factor 5, software/hardware setup, would not be predictive of the effort for setting up test cases.
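A minimal sketch of step 6, assuming hypothetical factor ratings, a hypothetical historical productivity rate expressed in hours per test case point, and an illustrative additive roll-up of the factors (organizations define their own roll-up rule):

```python
def adjustment_factor(factor_ratings):
    """Roll the individual factor ratings up into a single multiplier.

    The additive roll-up below is illustrative only; each organization defines
    how its count/percentage factors and weighted factors combine.
    """
    return 1.0 + sum(factor_ratings.values())

def effort_estimate(weighted_test_case_points, factor_ratings, hours_per_point):
    """Weighted Test Case Points x Adjustment Factor x Historical Productivity Rate."""
    return weighted_test_case_points * adjustment_factor(factor_ratings) * hours_per_point

# Hypothetical ratings: positive values inflate the estimate, negative values reduce it.
ratings = {
    "domain_knowledge_and_complexity": 0.10,                 # Factor 1
    "technical_know_how": 0.05,                              # Factor 2
    "test_data_preparation": 0.05,                           # Factor 12
    "productivity_improvement_after_iteration_one": -0.05,   # Factor 16
}
print(effort_estimate(weighted_test_case_points=120, factor_ratings=ratings, hours_per_point=1.5))
# 120 x 1.15 x 1.5, roughly 207 hours
```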

The process for deriving test case points is fairly straightforward (steps 1 – 4). The process of turning the test case points into an estimate (steps 5 and 6) is more complicated. Next, we will develop a short example and examine the strengths and weaknesses of the process, some of which are very apparent and others that are not.

Listen Now

Subscribe on iTunes

I am still traveling for the next two weeks. The trip is a mixture of vacation and a board meeting, but that does not mean you will have to forgo your weekly SPaMCAST. In place of our normal format, I am posting a mix tape of the answers to the “If you could change two things” question I have been asking interviewees for nearly ten years. This week on SPaMCAST 392 we feature our top downloaded podcasts from the year 2009:

SPaMCAST 51 – Tim Lister on Adrenaline Junkies and Template Zombies

http://bit.ly/1WERtk5

Tim discussed ending the estimating charade.  Tim stated it would be better if we recognized estimating as goal setting. Secondly, he noted that a lot of outsourcing has overshot its mark and reduced our organizational capabilities.

SPaMCAST 67 – Murali Chemuturi on Software Estimation Best Practices, Tools & Techniques

http://bit.ly/1MHDzeJ

Murali used his wishes to state that estimators need a better grasp and understanding of the concepts of productivity and scheduling.

SPaMCAST 69 – Kevin Brennan on Business Analysis

http://bit.ly/1WERB2V

Kevin answered a different question and discussed the message he would share with a C-Level executive to describe why business analysis is important to them.

If you enjoyed the snippets, please use the links to listen to the whole interviews. Next week, 2010!

HTMA

How to Measure Anything: Finding the Value of “Intangibles” in Business, Third Edition

Chapter 5 of How to Measure Anything: Finding the Value of “Intangibles” in Business, Third Edition, is titled “Calibrated Estimates: How Much Do You Know Now?” Chapter 4 described how to define the decision that needs to be made and the data that will be needed to make that decision. Chapter 5 builds on the first step in Hubbard’s measurement process by providing techniques to determine what you know as you begin the measurement process. Hubbard addresses two major topics in this chapter. The first is using estimation to quantify what you know, and the second is his process for calibrating estimators.

The Mythical Man-Month


In the seventh essay of The Mythical Man-Month, Fred P. Brooks begins to tackle the concept of estimating. While there are many estimating techniques, Brooks’ approach is a history/data-based approach, which we would understand today as parametric estimation. Parametric estimation is generally a technique that generates a prediction of the effort needed to deliver a project based on historical data on productivity, staffing, and quality. Estimating is not a straightforward extrapolation of what has happened in the past to what will happen in the future, and mistaking it as such is fraught with potential issues. Brooks identified two potentially significant estimating errors that can occur when you use the past to predict the future without interpretation.

Often the only data available is information about one part of the project’s life cycle. The first issue Brooks identified was that you cannot estimate the entire job or project by estimating just the coding and inferring the rest. There are many variables that might affect the relationship between development and testing. For example, some changes can impact more of the code than others, requiring more or less regression testing. The link between the effort required to deliver different types of work is not linear. The ability to estimate based on history requires knowledge of project-specific practices and attributes, including competency, complexity, and technical constraints.

Not all projects are the same. The second issue Brooks identified was that one type of project is not applicable for predicting another. Brooks used the differences between small projects and programming systems products to illustrate his point. Each type of work requires different activities, not just scaled-up versions of the same tasks. Similarly, consider the differences in the tasks and activities required for building a smartphone app compared to building a large data warehouse application. Simply put, they are radically different. Brooks drove the point home using the analogy of extrapolating the record time for the 100-yard dash (9.07 seconds, according to Wikipedia) to the time needed to run a mile. Since a mile is 1,760 yards, the linear extrapolation would suggest that a mile could be run in roughly 2 minutes 40 seconds; the current record is 3:43.13.
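The extrapolation is easy to reproduce with a couple of lines of Python, using only the numbers quoted above:

```python
# Linear extrapolation of the 100-yard dash record to the mile (1,760 yards).
dash_record_seconds = 9.07                      # 100-yard dash record quoted above
mile_seconds = dash_record_seconds * (1760 / 100)
print(divmod(round(mile_seconds), 60))          # (2, 40): about 2 minutes 40 seconds
# The actual mile record quoted above is 3:43.13; linear scaling badly underestimates it.
```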

A significant portion of this essay is a review of a number of studies that illustrate the relationship between the work done and the estimate. Brooks used these studies to highlight different factors that could impact the ability to extrapolate what has happened in the past into an estimate of the future (note: I infer from the descriptions that these studies dealt with the issues of completeness and relevance). The studies, listed by the person who generated the data, and the conclusions we can draw from an understanding of the data, include:

  1. Charles Portman’s Data – Slippages occurred primarily because only half the time available was productive. Unrelated jobs, meetings, paperwork, downtime, vacations, and other non-productive tasks used the remainder.
  2. Joel Aron’s Data – Productivity was negatively related to the number of interactions among programmers. As the number of interactions goes up, productivity goes down.
  3. John Harr’s Data – The variation between estimates and actuals tends to be affected by the size of workgroups, the length of time, and the number of modules. The complexity of the program being worked on could also be a contributor.
  4. OS/360 Data – Confirmed the striking differences in productivity driven by the complexity and difficulty of the task.
  5. Corbató’s Data – Programming languages affect productivity. Higher-level languages are more productive. Said a little differently, writing a function in Ruby on Rails requires less time than writing the same function in macro assembler language.

I believe that the surveys and data discussed are less important than the recognition that there are many factors that must be addressed when trying to predict the future. In the end, estimation requires historical data regardless of method, but the data must be relevant. Relevance is shorthand for accounting for the factors that affect the type of work you are doing. In homogeneous environments, complexity and language may not be as big a determinant of productivity as the number of interactions driven by team size or the amount of non-productive time teams have to absorb. The problem with historical data is that gathering the data requires effort, time, and/or money. The need to expend resources to generate, collect, or purchase historical data is often used as a bugaboo to resist collecting the data and as a tool to avoid using parametric or historical estimating techniques.

Recognize that the term historical data should not scare you away. Historical data can be as simple as a Scrum team collecting its velocity or productivity every sprint and using it to calculate an average for planning and estimating. Historical data can be as complex as a palette of information including project effort, size, duration, team capabilities, and project context.
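At the simple end of that spectrum, a sketch of what using velocity as historical data might look like, assuming nothing more than a list of recent sprint velocities and a hypothetical remaining backlog size:

```python
from math import ceil

def average_velocity(recent_velocities):
    """Average story points completed per sprint: the simplest form of historical data."""
    return sum(recent_velocities) / len(recent_velocities)

def sprints_remaining(backlog_points, recent_velocities):
    """Rough planning forecast: how many sprints the remaining backlog implies."""
    return ceil(backlog_points / average_velocity(recent_velocities))

# Hypothetical data: the last six sprint velocities and a 120-point remaining backlog.
print(sprints_remaining(120, [18, 22, 20, 19, 21, 20]))  # average of 20 points -> 6 sprints
```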

Previous installments of the Re-read of The Mythical Man-Month

Introductions and The Tar Pit

The Mythical Man-Month (The Essay)

The Surgical Team

Aristocracy, Democracy and System Design

The Second-System Effect

Passing the Word

Why did the Tower of Babel fall?

Sometimes estimation leaves you in a fog!


I recently asked a group of people the question, “What are the two largest issues in project estimation?” I received a wide range of answers, probably a reflection of the range of individuals answering. Five macro categories emerged from the answers. They are:

  1. Requirements. The impact of unclear and changing requirements on budgeting and estimation was discussed in detail in the entry Requirements: The Chronic Problem with Project Estimation. Bottom line: change is required to embrace dynamic development methods, and that change will require changes in how the organization evaluates projects.
  2. Estimate Reliability. The perceived lack of reliability of an estimate can be generated by many factors, including differences between development and estimation processes. One of the respondents noted, “most of the time the project does not believe the estimate and thus comes up with their own, which is primarily based on what they feel the customer wants to hear.”
  3. Project History. Both analogous and parametric estimation processes use the past as an input in determining the future. Collection of consistent historical data is critical to learning and not repeating the same mistakes over and over. According to Joe Schofield, “few groups retain enough relevant data from their experiences to avoid relearning the same lesson.”
  4. Labor Hours Are Not The Same As Size. Many estimators estimate the effort needed to perform either the project as a whole or individual tasks. By jumping immediately to effort, estimators miss all of the nuances that affect the level of effort required to deliver value. According to Ian Brown, “then the discussion basically boils down to opinions of the number of hours, rather than assessing other attributes that drive the number of hours that something will take.”
  5. No One Dedicated to Estimation. Estimating is a skill built on a wide range of techniques that need to be learned and practiced. When no one is dedicated to developing and maintaining estimates, it is rare that anyone can learn to estimate consistently, which affects reliability. To quote one of the respondents, “consistency of estimation from team to team, and within a team over time, is non-existent.”

Each of the top five issues is solvable without throwing out the concept of estimation, which is critical for planning at the organization, portfolio, and product levels. Every organization will have to wrestle with its own solution to the estimation conundrum. However, the first step is to recognize the issues you face and your goals for the estimation process.


There are many levels of estimation, including budgeting, high-level estimation, and task planning (detailed estimation). We can link a more classic view of estimation to the Agile planning onion popularized by Mike Cohn. In the Agile planning onion, strategic planning is on the outside of the onion and the planning that occurs in the daily sprint meetings is at the core of the onion. Each layer closer to the core relates more to the day-to-day activity of a team. The #NoEstimates movement eschews developing story- or task-level estimates, and sometimes estimates at higher levels as well. As you get closer to the core of the planning onion, the case for #NoEstimates becomes more compelling.


Planning Onion

 

Budgeting is a strategic form of estimation that most corporate and governmental entities perform. Budgeting relates to the strategy and portfolio layers of the planning onion. #NoEstimates techniques do not answer the central questions most organizations need to answer at this level, which include:

  1. How much money should I allocate for software development, enhancements, and maintenance?
  2. Which projects or products should we fund?
  3. Which projects will return the greatest amount of value?

Budgets are often educated guesses that provide some approximation of the size and cost of the work on the overall backlog. Budgeting provides the basis to allocate resources in environments where demand outstrips capacity. Other than in the most extreme form of #NoEstimates, which eschews all estimates, budgeting is almost always performed.

High-level estimation, performed in the product and release layers of the planning onion, is generally used to forecast when functionality will be available. Release plans and product road maps are types of forecasts that are used to convey when products and functions will be available. These types of estimates can easily be built if teams have a track record of delivering value on a regular basis. #NoEstimates can be applied at this level of planning and estimation by substituting the predictable completion of work items for effort estimates. #NoEstimates at this level of planning can be used only if conditions that facilitate predictable delivery flow are met. Conditions include:

  1. Stable teams
  2. Adoption of an Agile mindset (at both the team and organizational levels)
  3. A backlog of well-groomed stories

Organizations that meet these criteria can answer the classic project/release questions of when, what, and how much based on the predictable delivery rates of #NoEstimates teams (assuming some level of maturity; newly formed teams are never predictable). High-level estimates are closer to the day-to-day operations of the team and connect budgeting to the lowest levels of planning in the planning onion. A simple throughput-based forecast of this kind is sketched below.
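A minimal sketch of forecasting from the predictable completion of work items, assuming a stable team and a hypothetical throughput history (practitioners often run a Monte Carlo simulation over the same history rather than the simple average used here):

```python
from datetime import date, timedelta

def forecast_completion(remaining_items, weekly_throughput_history, start=None):
    """Forecast a completion date from throughput (completed work items per week)."""
    start = start or date.today()
    average_throughput = sum(weekly_throughput_history) / len(weekly_throughput_history)
    weeks_needed = remaining_items / average_throughput
    return start + timedelta(weeks=weeks_needed)

# Hypothetical stable team: 40 well-groomed stories remaining, recent weekly throughput below.
print(forecast_completion(40, [4, 5, 3, 4, 4]))  # averages 4 items per week -> about 10 weeks out
```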

In the standard corporate environment, task-level estimation (typically performed at the iteration and daily planning layers of the onion) is an artifact of project management controls or of partial adoptions of Agile concepts. Estimating tasks is often mandated in organizations that allocate individual teams to multiple projects at the same time. The effort estimates are used to enable the organization to allocate slices of time to projects. Stable Agile teams that are allowed to focus on one project at a time and use #NoEstimates techniques have no reason to estimate effort at a task level, due to their ability to consistently say what they will do and then deliver on their commitments. Ceasing task-level estimation and planning is the core change all proponents of #NoEstimates are suggesting.

A special estimation case that needs to be considered is that of commercial or contractual work. These arrangements often represent lower-trust relationships or projects that are perceived to be high risk. The legal contracts agreed upon by both parties often stipulate the answers to the what, when, and how much questions before the project starts. Due to the risk the contract creates, both parties must do their best to predict/estimate the future before signing the agreement. Raja Bavani, Senior Director at Cognizant Technology Solutions, suggested in a recent conversation that “#NoEstimates was a non-starter in a contractual environment due to the financial risk both parties accept when signing a contract.”

Estimation is a form of planning, and planning is considered an important competency in most business environments. Planning activities abound, from planning the corporate picnic to planning the acquisition and implementation of a new customer relationship management system. Most planning activities center on answering a few very basic questions. When will “it” be done? How much will “it” cost? What is “it” that I will actually get? As an organization or team progresses through the planning onion, the need for effort and cost estimation lessens in most cases. #NoEstimates does not remove the need for all types of estimates. Most organizations will always need to estimate in order to budget. Organizations that have stable teams, have adopted the Agile mindset, and have a well-groomed backlog will be able to use predictable flow to forecast rather than relying on effort and cost estimation. At a sprint or day-to-day level, Agile teams that predictably deliver value can embrace the idea of #NoEstimates while answering the basic questions of what, when, and how much based on performance.