In Technical Debt we defined technical debt as the work not done or the shortcuts taken when delivering a product. In Technical Debt: Where Does It Come From? we discussed its sources. The next question is: can we measure technical debt? With a consistent measure, we can determine whether the technical debt in our code is growing or shrinking, or whether the level of debt has passed a threshold of acceptability that requires remediation. There are three common approaches to identifying and measuring technical debt. Each approach has pluses and minuses. They are:

  1. Tool-Based Approaches: Several software tools scan code to determine whether it meets coding or structural standards (extensibility is a structural standard). These tools include CAST AIP, SonarQube, SonarJ, Structure101, and others; some are proprietary and some are open source. Tools are very useful for identifying technical debt that has crept into an application unintentionally and that impacts code quality, but they are less useful for identifying partial requirements, dropped testing, or intentional architecture deviations.
  2. Custom Measures: Many organizations create their own measures of technical debt. Almost all of these measures are proxies, including automated test code coverage, audit results, and size-to-line-of-code ratios ((lines of code / function points delivered) / an industry, language-specific backfire calibration number). Because there are any number of custom techniques, it is difficult to generalize about where they are best applied. However, audit techniques are very valuable for identifying technical debt generated by process problems that are not reported. For example, when pressed, a developer may not complete all of his or her unit testing (the same behavior can occur in independent testing). Unless it is self-reported or observed by a peer, this type of behavior can leave unrecognized technical debt in the code that cannot be seen through a scan. Audits are a mechanism to formally inspect code for standards and processes that can't be automated.
  3. Self-Reporting: Team-level self-reporting is a fantastic mechanism for tracking intentionally accrued technical debt. The list of identified technical debt can be counted, sized, or valued. The act of sizing or valuing converts the list into a true measure that can be prioritized. The sizing mechanisms I have used include function points, story points, and effort to correct (usually hours or days). When using value instead of size or effort, I always express it in a currency (e.g., dollars, rupees, euros); everyone understands money. I usually ask the teams I work with to generate a value for each piece of technical debt they intentionally accrue as they make the decision. More than once I have seen a team change its decision to incur technical debt after considering the value of the item. Self-reporting is very useful for capturing intentionally accrued technical debt, such as partially implemented requirements, intentional architectural deviations, and knowingly constrained testing.
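The size-to-line-of-code ratio mentioned in the second approach can be sketched in a few lines. The backfire calibration number below is a hypothetical figure chosen for illustration, not an industry-published value, and the function name is my own.

```python
# Sketch of the size-to-LOC proxy described above (illustrative values only).
# A backfire calibration number estimates lines of code per function point
# for a given language; the figure used here is a hypothetical example.

def loc_ratio(lines_of_code, function_points_delivered, backfire_number):
    """(LOC / function points delivered) / language-specific backfire number.

    A result well above 1.0 suggests more code than the delivered
    functionality would predict -- one possible proxy for technical debt.
    """
    return (lines_of_code / function_points_delivered) / backfire_number

# Hypothetical example: 65,000 LOC delivering 500 function points in a
# language assumed to backfire at roughly 53 LOC per function point.
ratio = loc_ratio(65_000, 500, 53)
print(round(ratio, 2))  # 2.45
```

A ratio like this is only a proxy: it flags code that is bulkier than its delivered functionality suggests, which may or may not be debt on inspection.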
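The self-reported list in the third approach becomes a usable measure once each item is valued and the list can be totaled and prioritized. A minimal sketch, assuming a simple register structure (the field names and sample items are illustrative, not from any real project):

```python
# Minimal sketch of a self-reported technical-debt register. The structure
# and sample entries are assumptions for illustration; valuing each item in
# a single currency turns the list into a measure that can be prioritized.

from dataclasses import dataclass

@dataclass
class DebtItem:
    description: str
    intentional: bool       # was the debt knowingly accrued?
    cost_to_fix: float      # value in one currency, e.g. dollars

register = [
    DebtItem("Skipped unit tests on billing module", True, 4_000.0),
    DebtItem("Hard-coded configuration values", True, 1_500.0),
    DebtItem("Deferred refactor of reporting queries", True, 9_000.0),
]

# Total the accrued debt and rank items by the cost to remediate them.
total = sum(item.cost_to_fix for item in register)
prioritized = sorted(register, key=lambda i: i.cost_to_fix, reverse=True)

print(f"Total accrued debt: {total:,.0f}")
for item in prioritized:
    print(f"{item.cost_to_fix:>8,.0f}  {item.description}")
```

Because the values are in currency, the same register supports the decision point described above: a team seeing the cost of an item before incurring it may choose not to take on the debt at all.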

Technical debt is a powerful concept to help teams and organizations think about the quality of their code base. Measures are important tools for understanding how much technical debt exists and whether it makes sense to remediate it. Each approach to measuring technical debt is useful, but each has its own strengths and weaknesses. A combination is usually needed to get the view of technical debt that best meets the needs of a specific team or organization.