You have to measure to improve!

The nine most commonly cited reasons for an agile transformation range from coldly tangible to ethereal. As we have noted, each of these reasons can be restated as one or more questions that can be answered quantitatively. How the questions are stated provides clarity about the organization's goal, in the same way that acceptance criteria and test cases define the nuances of and provide clarity to user stories. Today, I'm presenting an example of how data can be used as a feedback loop to highlight misinterpretation of intent and provide an impetus for behavior change.

Scenario: The business side of an organization was tired of waiting for functionality to be delivered in quarterly releases. The product's stakeholders were fielding problems and high-priority change requests from their constituents on a fairly regular basis. Items that could not wait for the quarterly release were either expedited or raised as high-priority defects (even when they were not defects). The process caused significant chaos for everyone involved, including erratic overtime demands on the software team. A single team of six people handled development, enhancement, and maintenance of the product.

Solution 1: The organization, both the business and the software team, decided to implement Scrum on a two-week cadence. The stakeholder, operating as the product owner, would re-prioritize the backlog before iteration planning and would participate in backlog grooming. The team's definition of done was that each story would be tested and deployable to production by the end of the two weeks. However, the Director of Operations still believed in a quarterly release schedule.

The results of several retrospectives found that the solution increased the responsiveness of the software team to the business's needs. On the surface, everyone professed to be happier (at least at the beginning). Only later did it become apparent that "done" did not mean available to use. After the fact, data was collected and a scatter plot of stories deployed to production was developed.

[Figure: Release Cycle Time scatter plot]

The spike at the end of the scatter plot reflects the quarterly release. The process change had not changed the behavior that had so incensed the business, and the data presents a stark picture. Notably, while some team members and other stakeholders recognized that the real issue had not been addressed, it was not until they could show the data that productive discussion was possible.
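Cycle time here is simply the number of days between starting a story and its deployment to production. A minimal sketch of that calculation, using invented story IDs and dates (a story finished mid-quarter but held for the quarterly release accrues a long cycle time, which is what produces the end-of-quarter spike):

```python
# Hypothetical sketch: computing per-story cycle time (work started to
# production deployment) from a log of dates. All IDs and dates are invented.
from datetime import date

# (story_id, work_started, deployed_to_production)
stories = [
    ("S-1", date(2024, 1, 8),  date(2024, 3, 29)),  # held for quarterly release
    ("S-2", date(2024, 2, 5),  date(2024, 3, 29)),  # held for quarterly release
    ("S-3", date(2024, 2, 19), date(2024, 2, 23)),  # expedited as a "defect"
    ("S-4", date(2024, 3, 4),  date(2024, 3, 29)),  # held for quarterly release
]

# Cycle time = days from starting work until the story is usable in production.
cycle_times = {sid: (done - start).days for sid, start, done in stories}

for sid, days in sorted(cycle_times.items()):
    print(sid, days)
```

Plotting these cycle times against deployment dates yields the kind of scatter plot described above; the stories held for the quarterly release all stack up on the same date.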

Solution 2: The quarterly release was replaced by monthly releases, and a second (less integrated) team is experimenting with continuous delivery. The solution is recognized as an interim, confidence-building approach. Had the approach been in place from the start, the scatter plot would have looked substantially different.

[Figure: Scenario 2 scatter plot]

The team estimated how the release change would have affected when stories reached production. The big spike at the end of the quarter was replaced by three smaller spikes. Notably, 85% of stories would have been delivered to the business much faster, with the 85th-percentile line falling from 41 days to 15 days.
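The 85th-percentile line can be computed directly from a list of cycle times. A minimal sketch using only the standard library; the two samples below are invented, but constructed so that their 85th percentiles land on the article's 41-day and 15-day figures (the repeated values reflect batched deployments landing on the same release day):

```python
# Hypothetical sketch: comparing the 85th-percentile cycle time under a
# quarterly-release policy versus monthly releases. Both samples are invented.
import statistics

quarterly = [4, 9, 15, 22, 28, 33, 38, 41, 41, 41]  # days, invented
monthly   = [3, 5, 7, 9, 10, 12, 14, 15, 15, 15]    # days, invented

def p85(sample):
    # statistics.quantiles with n=20 returns the 5%, 10%, ..., 95% cut
    # points; index 16 is the 85th percentile.
    return statistics.quantiles(sample, n=20)[16]

print(p85(quarterly))  # 41.0
print(p85(monthly))    # 15.0
```

Tracking a percentile rather than the average keeps the measure honest: one expedited story cannot mask a quarter's worth of stories waiting on the release train.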

Understanding the goal of a change helps teams and organizations know what data needs to be collected to track and prove whether the change is having the expected impact. The example, loosely patterned after a real-life scenario (with a few things changed to protect the innocent), shows that having the data and the ability to show its impact drives home the point that even a relatively small policy change can have a huge impact on how teams deliver value.