Agile Metrics Framework

What gets measured depends on the team’s and the organization’s reporting needs and measurement goals. For instance, an organization that needs to quantitatively prove the results of its transformation will need to consider measures (and metrics) that can be generated consistently across project teams. Organizations whose teams are standalone and do not anticipate the need for baselining or benchmarking can easily leverage team-based relative measures, such as story points. The simple metrics framework suggested here, with potential metrics for each quadrant, is shown below:

  1. Labor Productivity Quadrant
    1. Labor productivity is generally expressed as a functional measure of size per person month, for example: function points per person month or use case points per person month. Labor productivity is typically the measure of choice when an overall transformation program needs to prove efficiency results. These measures are easily comparable between teams and have industry benchmarks available for comparison. The drawbacks are the perceived level of effort to generate the measure and the invasiveness of the process used to generate the size component of the metric.
    2. Story completion (variants include the percentage of stories completed) is a relatively easy metric for teams to collect and leverage. The simplest form of this measure is a count of the number of stories completed in a sprint (or period of time for Kanban). Adding a time component creates a rate of completion (a metric), which can be used as a variant of velocity.
  2. Quality Quadrant
    1. Customer satisfaction is a measure of how satisfied the customers (or stakeholders) of a project are with the team’s performance or the functionality delivered. Customer satisfaction can be measured using techniques as simple as asking specific stakeholders how they feel about the project, or using very formal techniques, such as surveys and calculations like the Net Promoter Score. The higher the formality, the more effort will be required to collect and analyze the metric, and the more comparable the metric will be between teams and across the life of long-running projects.
    2. Delivered defects are a count of the defects found after the code (or other deliverable) is marked as done. This measure is generally considered one of the more important reflections of quality, because all code and other deliverables are potentially implementable after they have been marked as done, which means any defects found, regardless of by whom, could have been found in production. Defects found in production can negatively impact customer satisfaction and the overall business.
    3. Usability is typically a measure of compliance against a set of industry and/or organizational standards. Most teams build usability compliance into the definition of done, therefore what is delivered as done is compliant. The metric is used as a mechanism to reflect progress while functionality is being built.
  3. Predictability Quadrant
    1. Velocity is the average number of units of work delivered in a sprint (or any other repeating unit of time). Typically velocity is expressed as the average number of story points a team delivers per sprint. While similar to productivity, velocity is typically used to represent team predictability. Predictability metrics can be used to generate release plans (when will some group of functionality be ready for production) and in sprint planning.
    2. Time-to-market is very similar to velocity, reflecting how fast functionality is developed and delivered. Time-to-market is generally used for plan-based (non-Agile) projects or in organizations using functional metrics (e.g. function points). An example of a time-to-market metric would be the number of function points delivered per calendar month. Note: like velocity, time-to-market metrics are generally averages, used in planning exercises or in benchmarking.
    3. Effort burn-down is a measure of a team’s estimate of the number of hours of effort remaining in a sprint to deliver the functionality that was committed to during planning. It is almost always used as a mechanism for teams to anticipate whether they will complete the committed work by the end of the sprint. There are numerous variations on the effort burn-down chart, including story-point, task, and feature burn-down charts. In every case some measure of work is counted (e.g. hours, tasks, story points), and then, as items are completed, used up, or new instances discovered, the remaining count is decremented or incremented accordingly.
  4. Value Quadrant
    1. Business value is an estimate of the net value delivered in a unit of work (e.g. story, epic or feature). While business value is the Holy Grail in this category, it is generally very difficult to estimate at the story or feature level; therefore value tends to be tracked at a higher level, such as a release.
    2. Features delivered is a proxy for business value. This measure is typically a count of the number of features delivered in a sprint. Variants of this measure include counting stories or epics (larger user stories). Features and stories reflect requirements; therefore, as the number of features delivered increases, so does the value that users and the product owner perceive.
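
Labor productivity, described in the first quadrant, is a straightforward ratio of functional size to effort. A minimal sketch (the figures are invented for illustration; real function point counts come from a formal sizing exercise):

```python
def labor_productivity(function_points, person_months):
    """Labor productivity as function points delivered per person month."""
    if person_months <= 0:
        raise ValueError("person_months must be positive")
    return function_points / person_months

# Hypothetical example: a team delivers 120 function points in 10 person months.
rate = labor_productivity(120, 10)  # 12.0 function points per person month
```

Because the numerator is a standardized functional size, ratios computed this way can be compared across teams or against industry benchmarks.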
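
At the formal end of customer satisfaction measurement sits the Net Promoter Score. A sketch of the standard calculation, assuming the usual 0–10 survey scale (promoters rate 9–10, detractors 0–6):

```python
def net_promoter_score(ratings):
    """Net Promoter Score from 0-10 survey ratings.

    NPS = percentage of promoters (9-10) minus percentage of detractors (0-6).
    """
    if not ratings:
        raise ValueError("at least one rating is required")
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / len(ratings)

# Hypothetical stakeholder responses: two promoters, two passives, two detractors.
score = net_promoter_score([10, 9, 8, 7, 6, 3])  # 0.0
```

The added rigor is what makes the metric comparable between teams, at the cost of running and analyzing a survey each period.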
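
Velocity, from the predictability quadrant, is simple arithmetic: the average story points completed per sprint, which can then feed a release plan. A sketch, with an invented sprint history:

```python
import math

def velocity(points_per_sprint):
    """Average story points completed per sprint (team velocity)."""
    if not points_per_sprint:
        raise ValueError("need at least one completed sprint")
    return sum(points_per_sprint) / len(points_per_sprint)

def sprints_to_finish(backlog_points, points_per_sprint):
    """Rough release-planning estimate: whole sprints needed to burn the backlog."""
    return math.ceil(backlog_points / velocity(points_per_sprint))

# Hypothetical history: three sprints delivering 20, 24 and 22 points.
avg = velocity([20, 24, 22])                 # 22.0 points per sprint
plan = sprints_to_finish(110, [20, 24, 22])  # 5 sprints for a 110-point backlog
```

This is why velocity is framed as a predictability measure rather than a productivity one: its value lies in forecasting, not in comparing teams.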
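
The effort burn-down mechanics described above (count a measure of work, then decrement it as work completes and increment it as new work is discovered) can be sketched as follows; the daily figures are invented:

```python
def burn_down(initial_hours, daily_changes):
    """Remaining effort after each day of a sprint.

    Each daily change is the net adjustment for that day: negative for hours
    of work completed, positive for newly discovered work.
    """
    remaining = initial_hours
    series = [remaining]
    for change in daily_changes:
        remaining = max(remaining + change, 0)
        series.append(remaining)
    return series

# Hypothetical sprint: 40 hours committed; day 3 uncovers 4 hours of new work.
trend = burn_down(40, [-8, -6, 4, -10])  # [40, 32, 26, 30, 20]
```

The same function works for the story-point, task, or feature variants; only the unit being counted changes.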

The measures and metrics noted above barely scratch the surface of what can be measured. What should be measured is dependent on the needs and goals of the team and the organization. In ALL cases the measures and metrics must be vetted to ensure they meet the Agile measurement philosophies.
