Listen Now

Subscribe on iTunes

Software Process and Measurement Cast 383 features our essay on peer reviews.  Peer reviews are a tool to remove defects before we need to either test them out or ask our customers to find them for us. While the data about the benefits of peer reviews is UNAMBIGUOUS, they are rarely practiced well and often turn into a blame apportionment tool.  The essay discusses how to do peer reviews, whether you are using Agile or not, so that you get the benefits you expect!

Our second segment is a visit to the QA Corner.  Jeremy Berriault discusses a piece of advice he got from a mentor that continues to pay dividends.  This installment of the QA Corner discusses how a QA leader can generate and leverage responsibility without formal authority.

Steve Tendon anchors this week’s SPaMCAST discussing Chapter 8 of Tame The Flow: Hyper-Productive Knowledge-Work Performance, The TameFlow Approach and Its Application to Scrum and Kanban, published by J. Ross. Chapter 8 is titled “Creating A Shared Vision At The Team Level”.  We discuss why it is important for the team to have a shared vision, the downside of not having a shared vision and, most importantly, how to get a shared vision.

Remember Steve has a great offer for SPaMCAST listeners. Check out for a way to get Tame The Flow: Hyper-Productive Knowledge-Work Performance, The TameFlow Approach, and Its Application to Scrum and Kanban at 40% off the list price.

Re-Read Saturday News

This week we are back with Chapter 10 of How to Measure Anything: Finding the Value of “Intangibles” in Business, Third Edition by Douglas W. Hubbard on the Software Process and Measurement Blog. In Chapter 10 we explored how to use Bayesian statistics to account for having prior knowledge before we begin measuring.  Most common statistics assume that we don’t have prior knowledge of the potential range of what we are measuring or the shape of the distribution.  This is often a gross simplification with ramifications!


Peer Reviews are not gates!

In software development, peer reviews are a tool to build in quality. But they are rarely used, and when they are used, rarely used properly. Peer reviews are a review of a work product by peers or colleagues to remove defects. Often the concept of peer reviews is transformed by dropping the word peer and using the review meeting to judge whether a project or deliverable is fit to move forward (typically to “save time”). When peer reviews become gate or sign-off reviews, both the goal and who participates in the event change.

The goal of peer reviews, as stated by Luc Bourgault, Director Shared Development Services at Wolters Kluwer, is “to be sure of the quality of a work product.” Therefore, making changes to the peer review process that make attaining that goal harder is problematic. The goal of a gate or sign-off review is control oriented. Luigi Buglione, IFPUG Board Liaison and Measurement & Process Improvement Specialist at Engineering Ingegneria Informatica SpA, stated that the goal of gate and sign-off reviews is to act as “controls before passing to the next stage.” Peer reviews and phase gate or sign-off reviews have very different goals.

The definitions of the words peer and colleague are imprecise, which means that the exact composition of participants in a peer review can often be debated. For example, Jeff Dalton, President of Broadsword, most recently interviewed on SPaMCAST 366, stated: “any time you get relevant stakeholders together to review a design/architecture/plan/test plan/code/et al, it’s a peer review.” The term stakeholders, in this case, means participants that facilitate error finding rather than error hiding. For example, if we were reviewing a piece of code, a programmer, tester or business analyst on the same team would probably be relevant stakeholders to participate in the review, rather than someone that had never seen code before. The relevant stakeholders are those that create an environment where it is safe to find and remove errors from the work product BEFORE they go any further. Talmon Ben-Cnaan, Quality Manager at AMDOCS, stated that peer reviews are done by “a person or a group of people in the same occupation or profession.”

Non-peer participants, such as customers, managers, high-visibility stakeholders and executives, who are required for sign-off or gate reviews, typically do not create an environment for finding and removing errors. As one anonymous Quality Manager put it, “there are often political overtones which may prevent earnest feedback from being presented.” Simply put, when people believe they are being judged or that errors will be held against them, they will tend to try to hide those errors.

Peer reviews and sign-off or gate reviews are not the same thing. Combining the two types of reviews will not yield the defect removal benefits of a peer review, and often leads to teams having to test out defects later or customers being asked to find defects in production that could have been avoided.


Clone review?

Quality is important.  If we embrace that ideal, it will influence many aspects of how software-centric products are developed, enhanced and maintained. Quality is an attribute of the product delivered to a customer and an output of the process used to deliver the product. Quality affects both customers and the users of a product. Quality can yield high customer satisfaction when it meets or exceeds expectations, or negatively shade a customer’s perception of the teams or organization that created the product when quality is below expectations. Quality can also impact the ability of any development organization to deliver quickly and efficiently. Capers Jones, in Applied Software Measurement, Third Edition, states, “An enterprise that succeeds in quality control will succeed in optimizing productivity, schedules, and customer satisfaction.” The Scaled Agile Framework (SAFe) has included built-in quality as one of their four core values, because, without built-in quality, teams will not be able to deliver software with the fastest sustainable lead-time.

Peer reviews are a critical tool to build quality into software before we have to try to test quality in or ask our customers to debug the software for us. Unfortunately, the concept of a peer review is often misunderstood or actively conflated with other forms of reviews and inspections in order to save time. We need a definition.

The TMMi defines peer review as a methodical examination of work products by peers to identify defects and areas where changes are needed. (TMMi Framework Release 1.0)

The CMMI defines peer review as the review of work products performed by peers during the development of the work product to identify defects for removal. (CMMI for Development, Third Edition)

Arguably it would be possible to find any number of other similar definitions; however, the core concepts of a composite definition would be:

work products
peers/colleagues
to remove defects

Talmon Ben-Cnaan, Quality Manager at AMDOCS, suggested an example that meets all criteria. “Code written by developer A is reviewed by developer B from the same team. Or: a test book prepared by tester A is presented to the entire team of testers.”

Peer reviews are an integral part of many different development frameworks and methods. They can be powerful tools to remove defects before they can impact production and to share knowledge with the team. As with all types of reviews and inspections, peer reviews are part of a class of verification and validation techniques called static techniques. These techniques are considered static because the system or application being built is not executed during the review. In peer reviews, people review the work product to find defects, and the “people” involved will have the same or similar organizational status so that the goal does not shift from finding defects to hiding defects.