In a recent discussion of the concepts of Cost of Delay (CoD) and Weighted Shortest Job First (WSJF), the use of tables and mathematical formulae led to a discussion of accuracy and precision. The use of math and formulae suggests high levels of accuracy and precision even though the CoD and project sizes used were relative estimates. One of the questions posed in the class was whether the application of WSJF, or for that matter ANY prioritization or estimation technique, is accurate enough to use in making decisions when the input is imprecisely estimated data. Techniques like CoD and WSJF, when used to prioritize work, are generally based on estimates rather than ultra-precise measures. Remember, the goal of techniques like WSJF is to generate accurate priorities so that the most important work gets into the pipeline. Prioritization based on frameworks like WSJF requires an understanding of the concepts of accuracy and precision, and of what is required in terms of each when prioritizing work. The two concepts are often conflated; however, accuracy and precision are different but interrelated.
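For readers who like to see the arithmetic, WSJF is simply Cost of Delay divided by job size (duration), with the highest score worked first. A minimal Python sketch follows; the item names and the relative estimates are made up for illustration, not taken from the class discussion:

```python
# Hypothetical backlog items with relative estimates (e.g., Fibonacci-style
# points). Names and numbers are illustrative only.
items = [
    {"name": "Feature A", "cost_of_delay": 8, "size": 5},
    {"name": "Feature B", "cost_of_delay": 13, "size": 3},
    {"name": "Bug fix C", "cost_of_delay": 5, "size": 2},
]

# WSJF = Cost of Delay / job size (duration); higher scores go first.
for item in items:
    item["wsjf"] = item["cost_of_delay"] / item["size"]

prioritized = sorted(items, key=lambda i: i["wsjf"], reverse=True)
for item in prioritized:
    print(f'{item["name"]}: WSJF = {item["wsjf"]:.2f}')
```

Note that the WSJF scores are only as good as the relative estimates feeding them; the ranking they produce can be accurate even though the inputs are not precise.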

Accuracy is defined as how close a measured value is to the true value (www.mathisfun.com) or as freedom from mistake or error (www.merriam-webster.com/dictionary/accuracy). In some instances, accuracy is easy to judge. For example, Pi to 10 decimal places is 3.1415926535 (if you are a geek, Pi to a million), whereas Pi calculated to two decimal places is 3.14. Both are accurate. Judging accuracy gets messier when the outcome is unknown, for instance when generating an estimate for project prioritization. In order to judge accuracy, we generally apply a decision-making rule or standard, typically expressed as a confidence interval or a degree of risk. Another popular alternative is a comparison to a relative measure, such as an analogy or story points. In order for anything to be perceived as accurate, it has to fall within the standards set, so that the organization can have confidence as it prioritizes work.

Precision is a reflection of exactness. In the example of Pi, 3.14 is accurate but not precise. Calculating Pi to a million decimal places increases the level of precision but doesn't make the calculation precise; Pi is an irrational number, so we will never be able to calculate it precisely. In software development, most activities required to deliver business value have some level of variability in performance. Variability makes precise prediction difficult or impossible. Estimates of software development are imprecise by nature. Imprecision, driven by the variability in the processes and by the inability to know all the factors that will be encountered, makes precision difficult at best. When precision does appear, it is typically manufactured by padding the estimates, cutting or adding scope so that the outcome matches the estimate, or, in some cases, fibbing about the results.
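The Pi example can be checked in a couple of lines of Python (note that round() rounds to the nearest digit rather than truncating, so the last digit of a rounded value can differ from a truncated one):

```python
import math

# Two decimal places: accurate within its stated precision, but not precise.
low_precision = round(math.pi, 2)    # 3.14
# Ten decimal places: far more precise, yet still not exact --
# Pi is irrational, so no finite decimal representation can be precise.
high_precision = round(math.pi, 10)

print(low_precision)
print(high_precision)
```

Both values are "accurate" relative to their own precision; neither equals Pi exactly, and no number of added digits changes that.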

Given the effort and consternation often expended to generate false precision, a better avenue is to define an acceptable level of precision so that accuracy is an artifact of doing the work rather than an artifact of controlling how the work is measured. For example, estimating that a piece of work will be done at 2:33.06 PM EST takes far more effort than estimating it will be completed this afternoon. Both estimates could turn out to be accurate, although the latter has a far better chance. Like many things in Agile and other iterative frameworks, just because a piece of work is not at the top of the list today does not mean its priority can't increase when the list is reassessed. Organizations have to decide what level of precision is really needed and whether they are willing to trade precision for accuracy.

This weekend I am going to run the 35th Annual St. Malachi Church Run in a time of roughly fifty minutes. The race is five miles long and the forecast is for a very cold rain. In this case, my estimate is probably accurate (time will tell), but not precise. Last year, I actually ran the race in 49.06.36 minutes. The measured results are very precise and are accurate within the tolerance of the measurement system used (chips in the race bib). I feel faster this year, but I am a bit heavier and a bit older, which increases the possible variability in my speed. In terms of setting expectations, I would rather be accurate than precise so that my support team can meet me with the umbrella without having to stand in the rain too long themselves.

PS March 14th is Pi Day – have an irrationally good day!