Quality Tools

Pareto chart of Pokemon

Gotta catch them all!

A Pareto analysis is based on the principle, suggested by Joseph Juran (and named after Vilfredo Pareto), that 80% of the problems are produced by 20% of the causes. This is the famous 80/20 rule, a principle sometimes summarized as the vital few versus the trivial many. Process improvement professionals use the Pareto principle to focus limited resources (time and money) on the small number of items that produce the biggest benefit. (more…)

A count of the Pokemon in my local park yesterday!
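The arithmetic behind a Pareto chart is simple enough to sketch in code: sort the category counts in descending order and accumulate each category's share of the total until the vital few stand out. The Pokemon counts below are invented for illustration.

```python
# Pareto analysis sketch: sort categories by count, then accumulate
# percentages to find the "vital few" driving most of the total.
# These counts are invented for illustration.
counts = {"Pidgey": 38, "Rattata": 27, "Zubat": 14, "Weedle": 6,
          "Caterpie": 4, "Eevee": 3, "Pikachu": 2, "Snorlax": 1}

total = sum(counts.values())
cumulative = 0
for name, n in sorted(counts.items(), key=lambda kv: kv[1], reverse=True):
    cumulative += n
    print(f"{name:10s} {n:3d}  {100 * cumulative / total:5.1f}%")
```

Here the top three of eight categories account for more than 80% of the sightings, which is exactly the signal a Pareto chart is meant to surface.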

In today’s complex software development environment, it is easy to see every data collection issue as a complex automation problem. For example, I recently had a discussion with a team that was trying to determine how they would know whether a coding standard change they were contemplating would be effective. The team proposed capturing the defects the coder/developer pairs found in TFS. The plan was to enter each defect found into the tool. After a discussion, the team decided to adopt a much simpler data collection solution with very low overhead. One of the classic quality tools is a tally sheet or check sheet. A check sheet is a form used to collect information as it happens, as a set of checkmarks or tally marks. Check sheets are a very simple data collection tool. They are often a great way to pilot data collection before spending time and effort on automation or adding overhead to work. A check sheet is a form of histogram in which data collection and categorization occur at the same time as visualization. (more…)
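A check sheet translates almost directly into code, which makes it easy to pilot the same categories digitally later. The defect categories below are invented for illustration; Python's `collections.Counter` does the tallying.

```python
from collections import Counter

# A check sheet in code: tally each observation into its category as it
# happens. The defect categories here are invented for illustration.
observations = ["naming", "logic", "naming", "style",
                "logic", "naming", "missing test"]

tally = Counter(observations)
for category, marks in tally.most_common():
    print(f"{category:13s} {'|' * marks}  ({marks})")
```

The printed tally marks double as the visualization, which is the point of a check sheet: collection, categorization, and display happen in a single step.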


The Software Process and Measurement Cast 423 builds on our interview from last week with Philip Lew. This week we lead with a discussion of measuring quality. Quality is related to risk, productivity, and customer satisfaction.

Next, Jeremy Berriault brings his QA Corner to the Cast to discuss the impact of certifications in software testing. Want a bit of foreshadowing? The answer is not cut and dried. Visit Jeremy’s new blog at https://jberria.wordpress.com/

The Software Sensei, Kim Pries, answers a question he was recently asked by one of his students: “why do we have so many computer languages?” This is a question I have often asked, usually when I have to learn the basics of a new language. Reach out to Kim on LinkedIn.

Jon M Quigley brings his column, the Alpha and Omega of Product Development, to the cast. In this installment, the second in a three-part series on configuration management, Jon continues the cycle of configuration management, which begins with requirements and travels across the whole lifecycle. One of the places you can find Jon is at Value Transformation LLC.

Re-Read Saturday News

In this week’s re-read of The Five Dysfunctions of a Team by Patrick Lencioni (Jossey-Bass, Copyright 2002, 33rd printing), we talk about two sections: An Overview of the Model and Team Assessment. There are two more weeks left before moving to the next book. If you are new to the re-read series, buy a copy and go back to week one and read along!

I am running a poll to decide between Carol Dweck’s Mindset, Thinking Fast and Slow (Daniel Kahneman) and Flow (Mihaly Csikszentmihalyi).  I have also had suggestions (in the other category) for Originals: How Non-Conformists Move the World (Adam Grant) and Management Lessons from Taiichi Ohno: What Every Leader Can Learn from the Man by Takehiko Harada.  I would like your opinion! 

Takeaways from this week include: (more…)

Find the defects before delivery.


One of the strongest indications of the quality of a piece of software is the number of defects found when it is used. In software, a defect is generated by a flaw that causes the code to fail to perform as required. Even organizations that don’t spend the time and effort to collect information on defects before the software is delivered collect information on defects that crop up after delivery. Four classic defect measures are used post-delivery. Each of the four measures is used to improve the functional, structural, and process aspects of software delivery. They are:


Peer Reviews are not gates!


In software development, peer reviews are a tool to build in quality, but they are rarely used or, when used, rarely used properly. A peer review is a review of a work product by peers or colleagues to remove defects. Often the concept of peer reviews is transformed by dropping the word peer and using the review meeting to judge whether a project or deliverable is fit to move forward (typically to “save time”). When peer reviews become gate or sign-off reviews, both the goal of the event and who participates in it change.

The goal of peer reviews, as stated by Luc Bourgault, Director of Shared Development Services at Wolters Kluwer, is “to be sure of the quality of a work product.” Therefore, changes to the peer review process that make attaining that goal harder are problematic. The goal of a gate or sign-off review is control oriented. Luigi Buglione, IFPUG Board Liaison and Measurement & Process Improvement Specialist at Engineering Ingegneria Informatica SpA, stated that the goal of gate and sign-off reviews is to act as “controls before passing to the next stage.” Peer reviews and phase gate or sign-off reviews have very different goals.

The definitions of the words peer and colleague are imprecise, which means that the exact composition of the participants in a peer review can often be debated. For example, Jeff Dalton, President of Broadsword, most recently interviewed on SPaMCAST 366, stated: “any time you get relevant stakeholders together to review a design/architecture/plan/test plan/code/et al, it’s a peer review.” The term stakeholders, in this case, means participants who facilitate error finding rather than error hiding. For example, if we were reviewing a piece of code, a programmer, tester, or business analyst on the same team would probably be relevant stakeholders to participate in the review, rather than someone who had never seen code before. The relevant stakeholders are those who create an environment where it is safe to find and remove errors from the work product BEFORE it goes any further. Talmon Ben-Cnaan, Quality Manager at AMDOCS, stated that peer reviews are done by “a person or a group of people in the same occupation or profession.” Non-peer participants, such as customers, managers, high-visibility stakeholders, and executives, who are required for sign-off or gate reviews, typically do not create an environment for finding and removing errors. As one anonymous quality manager put it, “there are often political overtones which may prevent earnest feedback from being presented.” Simply put, when people believe they are being judged or that errors will be held against them, they will tend to try to hide those errors.

Peer reviews and sign-off or gate reviews are not the same thing. Combining the two types of reviews will not yield the defect removal benefits of a peer review, and it often leads to teams having to test defects out later or to customers finding defects in production that could have been avoided.


Clone review?

Quality is important. If we embrace that ideal, it will influence many aspects of how software-centric products are developed, enhanced, and maintained. Quality is an attribute of the product delivered to a customer and an output of the process used to deliver the product. Quality affects both the customers and the users of a product. Quality can yield high customer satisfaction when it meets or exceeds expectations, or negatively shade a customer’s perception of the teams or organization that created the product when it falls below expectations. Quality can also affect the ability of any development organization to deliver quickly and efficiently. Capers Jones, in Applied Software Measurement, Third Edition, states, “An enterprise that succeeds in quality control will succeed in optimizing productivity, schedules, and customer satisfaction.” The Scaled Agile Framework (SAFe) has included “built-in quality” as one of its four core values because, without built-in quality, teams will not be able to deliver software with the fastest sustainable lead time.

Peer reviews are a critical tool to build quality into software before we have to try to test quality in or ask our customers to debug the software for us. Unfortunately, the concept of a peer review is often misunderstood or actively conflated with other forms of reviews and inspections in order to save time. We need a definition. 

The TMMi defines peer review as a methodical examination of work products by peers to identify defects and areas where changes are needed. (TMMi Framework Release 1.0)

The CMMI defines peer review as the review of work products performed by peers during the development of the work products to identify defects for removal. (CMMI for Development, Third Edition)

Arguably it would be possible to find any number of other similar definitions; however, the core concepts of a composite definition would be:

work products
peers/colleagues
defects to identify and remove

Talmon Ben-Cnaan, Quality Manager at AMDOCS, suggested an example that meets all the criteria: “Code written by developer A is reviewed by developer B from the same team. Or: a test book prepared by tester A is presented to the entire team of testers.”

Peer reviews are an integral part of many different development frameworks and methods. They can be powerful tools to remove defects before they can impact production and to share knowledge within the team. As with all types of reviews and inspections, peer reviews are part of a class of verification and validation techniques called static techniques. These techniques are considered static because the system or application being built is not executed during them. In peer reviews, people review the work product to find defects, and the people involved have the same or similar organizational status so that the goal does not shift from finding defects to hiding defects.

Simple and Cheap aren't necessarily the same!


Attribute and usage models are fairly typical frameworks for translating a strategic definition of quality into tactical measurements of quality. The quality of the software or product is measured by observing the impact of what is delivered. Software or product quality can be influenced by the development process (process quality); however, processes like development, project management, and testing are not being directly measured. The attribute and usage components you select are a reflection of the organization’s goals and mission. For example, one would expect an airline’s definition and model of quality to prominently feature safety, while a financial institution would tend to highlight data and physical security. Leveraging the ISO “quality in use” model as an example to develop a measurement palette (a set of metrics that can be drawn on based on specific needs) provides the following example: (more…)
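As a sketch of what such a measurement palette might look like, the mapping below pairs quality-in-use attributes with candidate metrics. The attribute names follow the ISO quality-in-use model loosely, and the specific metrics listed are illustrative assumptions rather than a prescribed set.

```python
# A measurement palette sketch: each quality-in-use attribute maps to
# candidate metrics a team might select from based on its needs.
# The specific metrics listed are illustrative assumptions.
palette = {
    "effectiveness": ["task completion rate", "error rate"],
    "efficiency": ["time on task", "cost per transaction"],
    "satisfaction": ["net promoter score", "post-release survey rating"],
    "freedom from risk": ["escaped defects", "security incidents"],
}

def metrics_for(attributes):
    """Pull the candidate metrics for the attributes an organization values."""
    return [metric for attr in attributes for metric in palette.get(attr, [])]

print(metrics_for(["effectiveness", "freedom from risk"]))
```

An airline and a bank would call `metrics_for` with different attribute lists, which is how the same palette can reflect different organizational missions.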

Attributes are like buttons, lots of control and lots of complexity.


At its heart, the defect management approach to defining software quality is focused on identifying, categorizing, and counting defects. Quality attribute models expand the definition of quality from the occurrence of defects to a framework of attributes. For example, ISO/IEC 25010:2011 describes an attribute model comprising eight quality characteristics. The characteristics are:

  1. Functional suitability
  2. Reliability
  3. Operability
  4. Performance efficiency
  5. Security
  6. Compatibility
  7. Maintainability
  8. Transferability

In the ISO model, each of the characteristics can be further broken down into sub-characteristics to provide additional depth to the definition of quality. All quality attribute models are based on a broader framework than the defect management approach. Using a broader framework provides teams and organizations with more information than simpler models due to the sheer number of attributes in the model. However, more information comes at a cost. Quality attribute models are harder to implement, require more data collection, and often require more effort from the development team. (more…)
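The cost of the broader framework is easy to see when the model is written down: every characteristic needs its own measurements and weighting. The weights and scores below are illustrative assumptions, not values from the standard.

```python
# The eight ISO/IEC 25010:2011 quality characteristics as data.
# The weights and 0-10 scores below are illustrative assumptions.
characteristics = ["functional suitability", "reliability", "operability",
                   "performance efficiency", "security", "compatibility",
                   "maintainability", "transferability"]

# A team might weight characteristics by mission; here security is
# weighted double, as a financial institution might choose to do.
weights = dict.fromkeys(characteristics, 1.0)
weights["security"] = 2.0

scores = dict.fromkeys(characteristics, 7.0)  # each measured on a 0-10 scale

overall = sum(weights[c] * scores[c] for c in characteristics) / sum(weights.values())
print(f"weighted quality score: {overall:.1f}")
```

Eight characteristics (plus their sub-characteristics) mean eight or more data collection efforts, which is exactly the extra cost the attribute model carries over a simple defect count.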



One of the most common measures of software quality is a count of defects. Conceptually it goes as follows: the more defects generated (and found), the lower the quality of the software. The process of finding, categorizing, counting, and resolving defects not only improves the quality of the delivered software but is also useful for improving the processes used to create the defects (and the software). The defect management approach is common because it can be simple or complex, require little or a lot of effort to execute, and be predictive or reactive, depending on an organization’s needs.
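At its simplest, the defect management approach is a count normalized by size. Defect density, defects per thousand lines of code (KLOC), is one common formulation; the figures below are invented for illustration.

```python
# Defect density: a simple, common defect management measure.
# The defect counts and code sizes below are invented.
def defect_density(defects_found, lines_of_code):
    """Defects per thousand lines of code (KLOC)."""
    return defects_found / (lines_of_code / 1000)

release_a = defect_density(defects_found=18, lines_of_code=45_000)
release_b = defect_density(defects_found=9, lines_of_code=15_000)
print(f"release A: {release_a:.2f} defects/KLOC")
print(f"release B: {release_b:.2f} defects/KLOC")
```

Release B has fewer defects in absolute terms but the higher density, which is why normalizing by size matters when comparing deliverables.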


Airplane propeller

Software quality is a simple phrase that is difficult to define. Three of the most important quality management thought leaders of the past century define quality in very different ways.

Philip B. Crosby, author of Quality is Free, defines quality as conformance to requirements. Crosby views quality as nearly binary: quality either exists or it does not. There are no different levels of quality.

Joseph Juran, who popularized the concept of the cost of poor quality, defines quality as fitness for use.  Juran’s writings describe quality as meeting customers’ expectations with a product that is free from deficiencies.

W. Edwards Deming, the author of Out of the Crisis, stated that the customer’s definition of quality is the only one that matters, which means that there is no single definition. Each customer (or group of customers) will have their own definition of quality based on their needs.

While none of these eminent thought leaders agreed on a precise definition of quality, at the core of all three definitions are the needs and requirements of users. This is critical, but software quality is more nuanced. Meeting user requirements is only one part of the overall quality of a software deliverable. Technical debt, the shortcuts taken while developing and maintaining software, can accumulate and make the software buggy and costly to maintain. Quality will suffer if the process used to develop the software causes it to be late or if the software is not verified before it is delivered. Defining software quality requires a broader framework.
