Metrics Minute Entry:  Automated Test Cases Passed

Audio Version:  SPaMCAST 217

Definition:

Automated Test Cases Passed (ATCP) is primarily a progress measure expressed as a ratio that compares the number of automated test cases that have passed to the total number of automated test cases that will be executed (TATC). Progress metrics are collected iteratively over time; for example, by day, sprint, release or phase (waterfall). Progress measures are generally presented graphically to show the trend in a process’s output. This measure is easy to collect (counting discrete items) and to interpret (simple graphs or a percentage). Because the metric can be used to support organizational goals for test automation (it can be combined across many projects for trending), it tends to be adopted fairly early in a metrics program. The simplicity of the metric limits its predictive power because it does not reflect the overall complexity of the development environment; however, the metric does provide a simple snapshot of activity.

Formula:

Automated Test Cases Passed is expressed as the number of automated test cases that have passed divided by the total number of automated test cases, stated as a percentage. This is represented by the following equation:

PATCP (%) = (ATCP / TATC) × 100 = (Number of automated test cases that have passed / Total number of automated test cases) × 100

Where:

PATCP = Percent of automated test cases passed

ATCP  = Number of automated test cases passed

TATC  = Total number of automated test cases
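As a quick illustration of the arithmetic, here is a minimal sketch in Python; the function name and the counts used in the example are hypothetical, not part of any particular tool.

def percent_automated_test_cases_passed(passed, total):
    """Compute PATCP (%) = (ATCP / TATC) x 100.

    passed -- number of automated test cases that have passed (ATCP)
    total  -- total number of automated test cases to be executed (TATC)
    """
    if total == 0:
        raise ValueError("TATC must be greater than zero")
    return (passed / total) * 100

# Example (hypothetical counts): 180 of 240 automated test cases have passed.
print(f"PATCP = {percent_automated_test_cases_passed(180, 240):.1f}%")  # PATCP = 75.0%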

S-Curve Tracking Graph Example:

[Figure: S-curve graph tracking automated test cases passed over time]

Uses:

During a Sprint:  Tracking the ratio of passed test cases to total automated test cases should follow the classic S-curve shape, which makes it valuable as part of a tracking regimen. This tracking technique is especially useful if automated testing begins as coding begins (as in Test-Driven Development). Some organizations use the number of automated test cases passed, as a percentage of the total automated test cases planned, as a tool to define when testing is complete.  This is a BAD idea.
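To make the tracking concrete, here is a minimal sketch that computes the daily PATCP values you would plot to see whether the trend follows an S-curve; the daily cumulative counts and the sprint length are hypothetical.

# Hypothetical cumulative counts of automated test cases passed, recorded daily
# over a two-week sprint, with a planned total of 240 automated test cases.
daily_passed = [0, 5, 12, 25, 45, 80, 120, 160, 195, 220, 232, 238, 240, 240]
total_automated = 240

daily_patcp = [passed / total_automated * 100 for passed in daily_passed]

for day, pct in enumerate(daily_patcp, start=1):
    print(f"Day {day:2d}: {pct:5.1f}% of automated test cases passed")
# Plotting these percentages against the day number should trace the classic S-curve.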

Conclusion of Sprint:  Comparing the number of automated test cases passed to the total number of automated test cases can highlight potential issues if the percentage does not meet expectations.
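A simple end-of-sprint check along these lines might look like the sketch below; the expected percentage is a hypothetical team-set threshold, not a standard value.

def flag_sprint_shortfall(passed, total, expected_pct=95.0):
    """Return a message indicating whether PATCP met the expected percentage."""
    actual_pct = passed / total * 100
    if actual_pct < expected_pct:
        return (f"PATCP is {actual_pct:.1f}%, below the expected {expected_pct:.1f}%; "
                "investigate un-passed automated test cases.")
    return f"PATCP is {actual_pct:.1f}%, meeting the expected {expected_pct:.1f}%."

print(flag_sprint_shortfall(225, 240))  # hypothetical end-of-sprint counts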

Organizational:  Counting the number of automated test cases passed puts a focus on automation of testing. The old adage “you get what you measure” comes to mind.

Issues:

  • In a test-driven environment, test cases are developed at multiple points, which makes construction of the classic S-curve graph problematic.
  • At the conclusion of the iteration, rationalization is required to explain any automated test cases that have not passed. Un-passed tests, unless explained, will be viewed as a black mark on the project’s performance. Note: un-passed tests can occur for many reasons, such as bad tests, defects or tests that have been created for functionality that has not been developed yet.
  • The use and granularity of terms like test case and test plan can vary. This makes individual project count data, when used across more than one team, possibly less useful at the organizational level than the trend in the number of automated test cases passed. Using a standard tool set can minimize definition and usage variability.

Related or Variant Measures and Metrics:

  • Total Automated Test Cases
  • Percentage of Test Cases Automated
  • Test Coverage (measure of the percentage of code covered by tests)
  • Test Cases Passed

Criticisms:

  • Counting test cases obscures the complexities of testing. Not every test case is of equal value in terms of explanatory or predictive power; therefore, the passing of any individual case or specific group of cases may not be a precise indicator of quality or of when development will be done. This criticism is true but incomplete. Simply put, if a group of tests is expected to pass at a specific point, then knowing how many have not passed gives the developer a rough idea of what is in front of her.
  • Not all tests will ever be automated, and not all types of testing are measurable by counting the number of test cases. True. Most automated testing is scripted against captured requirements; however, not all types of testing are. Exploratory testing, described as simultaneous learning, test design and test execution, is an important tool for ensuring that testing is not biased entirely toward the captured requirements.
  • More automated test cases do not equate to better testing. In black and white terms, this criticism is true. The premise is that counting and reporting the number of automated test cases passed (or the total number of anything) will cause the number to inflate over time, based on the belief that more is better. Rather than abandoning the metric, I would suggest reminding the team that they should only add test cases that add value and that, like code, they should refactor test cases often (automated or not). An interesting counter-measure to address the inflationary potential might be to report the number of refactored or removed test cases alongside the passed count (see the sketch after this list).
  • Just because all of the automated test cases have passed does not mean you have achieved defect-free code. This criticism is true IF you fall prey to the bias that once all of the automated test cases have passed (see the first two criticisms), you are finished.
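As a minimal sketch of the counter-measure suggested above, the snippet below reports the number of refactored and removed test cases alongside the passed count so that growth in the raw total can be put in context; the field names and numbers are hypothetical, and in practice the tallies would come from your test management or version-control tooling.

# Hypothetical per-sprint tallies.
sprint_summary = {
    "automated_cases_total": 240,
    "automated_cases_passed": 225,
    "cases_added": 30,
    "cases_refactored": 18,
    "cases_removed": 12,
}

patcp = sprint_summary["automated_cases_passed"] / sprint_summary["automated_cases_total"] * 100
print(f"PATCP: {patcp:.1f}%")
print(f"Added: {sprint_summary['cases_added']}, "
      f"refactored: {sprint_summary['cases_refactored']}, "
      f"removed: {sprint_summary['cases_removed']}")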

Thoughts and comments?  Contact the Software Process and Measurement Cast by email at spamcastinfo@gmail.com or by voicemail at +1-206-888-6111.