On a scale of fist to five, I’m at a ten.

Quality is partly about the number of defects delivered in a piece of software and partly about how the stakeholders and customers experience the software. Experience is typically measured as customer satisfaction: a measure of how the products and services supplied by a company meet or surpass customer expectations. Customer satisfaction is impacted by all three aspects of software quality: functional (what the software does), structural (whether the software meets standards) and process (how the code was built).

Surveys can be used to collect customer- and team-level data. Satisfaction measures whether products, services, behaviors or the work environment meet expectations.
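As a rough sketch (the 0-to-5 scale, the sample responses and the "meets expectations" cutoff below are assumptions made for illustration, not data from the post), a fist-to-five style satisfaction survey can be summarized with a simple average and the share of responses that meet expectations:

```python
# Hypothetical example: summarizing fist-to-five (0-5) satisfaction responses.
# The scale, sample data and the >= 4 "meets expectations" cutoff are assumptions.

responses = [5, 4, 3, 5, 2, 4, 4, 5, 1, 4]  # one rating per respondent

average = sum(responses) / len(responses)
meets_expectations = sum(1 for r in responses if r >= 4) / len(responses)

print(f"Average satisfaction: {average:.1f} / 5")                  # 3.7 / 5
print(f"Meets or exceeds expectations: {meets_expectations:.0%}")  # 70%
```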

Find the defects before delivery.

One of the strongest indications of the quality of a piece of software is the number of defects found when it is used. In software, a defect is a flaw that causes the code to fail to perform as required. Even organizations that don’t spend the time and effort to collect information on defects before the software is delivered collect information on defects that crop up after delivery. Four classic defect measures are used post-delivery, and each is used to improve the functional, structural and process aspects of software delivery.
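Because the post does not list the four measures here, the sketch below shows just one widely used post-delivery measure, delivered defect density, with invented field names and data:

```python
# Hypothetical sketch of one common post-delivery measure: delivered defect
# density (defects found after delivery per thousand lines of code).
# The Defect fields, sample data and size are invented for illustration.
from dataclasses import dataclass

@dataclass
class Defect:
    identifier: str
    severity: str               # e.g. "critical", "major", "minor"
    found_after_delivery: bool

def delivered_defect_density(defects: list[Defect], size_ksloc: float) -> float:
    """Return defects reported after delivery per KSLOC."""
    post_delivery = [d for d in defects if d.found_after_delivery]
    return len(post_delivery) / size_ksloc

defects = [
    Defect("D-101", "major", True),
    Defect("D-102", "minor", False),
    Defect("D-103", "critical", True),
]
print(delivered_defect_density(defects, size_ksloc=12.5))  # 0.16 defects per KSLOC
```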



It is all about the bugs!

Many common measures of software quality include defects. A collection of defects and information about them can be a rich source of data for assessing or improving the functional, structural and process aspects of software delivery. Because of the apparent ease of defect collection and management (apparent because it is really never that easy) and the amount of information that can be gleaned from defect data, the range of defect-related measures and metrics found in organizations is wide and varied. Unfortunately, many defect measures are used incorrectly or are expected to be predictive when they are not.

A number of defect measures are useful while work is in process (or pretty close to it).
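One illustrative in-process view (the measure, sprint names and counts below are assumptions for the sketch, not the list from the original post) is tracking defect arrivals against closures, so a team can see whether its open-defect backlog is growing or shrinking:

```python
# Hypothetical in-process defect trend: arrivals vs. closures per iteration.
# Sprint names and counts are invented for illustration.

arrivals = {"Sprint 1": 12, "Sprint 2": 9, "Sprint 3": 14}
closures = {"Sprint 1": 7, "Sprint 2": 11, "Sprint 3": 10}

open_defects = 0
for sprint in arrivals:
    open_defects += arrivals[sprint] - closures[sprint]
    trend = "growing" if arrivals[sprint] > closures[sprint] else "shrinking"
    print(f"{sprint}: open defects = {open_defects} (backlog {trend})")
```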

Bugs!

One of the most common measures of software quality is a count of defects. Conceptually it goes as follows: the more defects generated (and found), the lower the quality of the software. The process of finding, categorizing, counting and resolving defects not only improves the quality of the delivered software but is also useful for improving the processes used to create the defects (and the software). The defect management approach is common because it can be simple or complex, require little or a lot of effort to execute, and be predictive or reactive, depending on an organization’s needs.
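As a minimal sketch of the find-categorize-count idea (the severity labels and data are assumptions for illustration):

```python
# Minimal sketch: counting found defects by severity. Labels and data are
# invented for illustration.
from collections import Counter

found_defects = ["critical", "minor", "major", "minor", "minor", "major"]
print(Counter(found_defects))  # Counter({'minor': 3, 'major': 2, 'critical': 1})
```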



http://www.spamcast.net

Listen Now

Subscribe on iTunes

This week’s Software Process and Measurement Cast is a magazine feature with three columns. This week we have columns from Kim Pries, The Software Sensei, and Jo Ann Sweeney’s Explaining Change.  In this installment Kim discusses the ins and outs of selling defect control.  In Explaining Change, Jo Ann tackles the concept of planning for communication (protip: it is better than winging it). The SPaMCAST essay this week tackles the topic of what is and isn’t Agile.  Does just saying you are Agile make you Agile?  We think not!

Call to action!

Can you tell a friend about the podcast? If your friends don’t know how to subscribe or listen to a podcast, show them how you listen and subscribe them! Remember to send us the name of the person you subscribed (and a picture) and I will give both you and the horde you have converted to listeners a call out on the show.

Re-Read Saturday News

The Re-Read Saturday focus on Eliyahu M. Goldratt and Jeff Cox’s The Goal: A Process of Ongoing Improvement began on February 21st. The Goal has been hugely influential because it introduced the Theory of Constraints, which is central to lean thinking. The book is written as a business novel. Visit the Software Process and Measurement Blog and catch up on the re-read.

Note: If you don’t have a copy of the book, buy one.  If you use the link below it will support the Software Process and Measurement blog and podcast.

Dead Tree Version or Kindle Version 

I am beginning to think of which book will be next. Do you have any ideas?

Upcoming Events

CMMI Institute Conference EMEA 2015
March 26-27, London, UK
I will be presenting “Agile Risk Management.”
http://cmmi.unicom.co.uk/

QAI Quest 2015
April 20-21, Atlanta, GA, USA
Scale Agile Testing Using the TMMi
http://www.qaiquest.org/2015/

DCG will also have a booth!

CANCELED - International Conference on Software Quality and Test Management
Washington D.C. May 31 – June 5, 2015

Next SPaMCast

The next Software Process and Measurement Cast will feature our interview with Agile coach Mario Lucero.  Mario and I discussed the nuts and bolts of coaching Agile teams, what is and isn’t Agile and the impact of coaching on success.

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques was co-authored by Murali Chemuturi and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: “This book will prove that software projects should not be a tedious process, neither for you or your team.” Support SPaMCAST by buying the book here.

Available in English and Chinese.

Call them bunnies or opportunities or whatever…

Every once in a while the person sitting next to me on a flight home works in IT. We generally trade war stories, the human version of dogs meeting on the street.  Recently a seatmate described an environment in which defects and production problems had been renamed as opportunities. Discovering opportunities was considered to be a positive and the organization seemed to have a lot of them.

Every profession has a common language, and that language tends to be built into common processes and frameworks. Common industry language generates a specific behavioral bias. Testing and software development are no different. The product of software development, enhancement and maintenance activities is software. In development and testing, the things that go wrong are typically called errors, defects and failures. Each has an industry-standard meaning.

Errors can be generated by human and external factors. In most IT departments human factors are the most significant source of errors because humans write software. People make mistakes for reasons ranging from the complexity of the business process to distractions caused when someone in the next cube spills coffee on their lap. The bottom line is that we make errors that produce an incorrect result. Calling mistakes opportunities, or something else with a similar positive spin, changes the behavioral bias from something to avoid to something to embrace.

Errors can translate into defects (also known as bugs). In software, a defect is a flaw that can cause the code to fail to perform as required. As noted in the Principles of Testing, not all defects are discovered, and some never surface at all. I have spent more than a few nights poring over code that had not been changed in years but had finally been exposed to a strange set of conditions never seen before. Many years ago I was working for a credit card processing firm that discovered that if the same person bought the same item costing 99 cents, 100 times in a row, using a credit card, our file maintenance system would fail spectacularly. Finding and fixing that bug funded at least one coffee plantation.

Defects that occur when the code is executed, and that represent a difference between what the application is supposed to do and what actually happens, are termed failures. The flaw in the credit card file maintenance system was a defect that existed for several years before the fateful night someone ordered 100 99-cent items on The Home Shopping Network at 1 AM (what was even stranger was that the same person did the same thing the next day at approximately the same time). As soon as the defective code was executed, the defect became a failure.
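A toy sketch of the error-to-defect-to-failure chain may help (the transaction logic, field limit and numbers below are invented for the example; they are not the actual credit card system described above):

```python
# Toy illustration of error -> defect -> failure. The overflow mechanism and
# threshold are invented; this is not the real file maintenance system.

def post_transactions(amounts_in_cents: list[int]) -> int:
    """Sum a customer's transactions for a nightly file maintenance run.

    The error was a programmer's assumption that no customer's batch total
    would ever exceed a legacy 16-bit-style field; the resulting flaw in the
    code is a defect that stays dormant until the assumption is violated.
    """
    total = 0
    for cents in amounts_in_cents:
        total += cents
        if total > 32_767:  # legacy fixed-width field overflows here
            raise OverflowError("file maintenance record overflow")
    return total

# For years the defect sits unnoticed: normal batches stay under the limit.
print(post_transactions([99] * 99))    # 9801, no failure

# Only when an unusual batch finally executes the flawed path does the defect
# become a visible failure:
# post_transactions([99] * 400)        # would raise OverflowError
```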

Mistakes, defects and failures, whether generated by human factors or external factors (e.g., pollution, radiation or a superstorm like Sandy), are in some sense opportunities to learn and refine how we practice our profession. During the Software Testing Foundations class I recently took, the theme of avoiding industry-standard terms came up, because words like mistake, defect and failure can provoke poor behavior (e.g., defect hiding, finger pointing or team strife). Dawn Haynes, the class instructor and industry consultant, recounted a story of an organization that once called defects and failures “bunnies” in an attempt to avoid negativity. Like my seatmate’s company, they found that they had lots of bunnies, and finding and removing bunnies was not taken very seriously. Renaming mistakes, defects and failures as opportunities or bunnies trivializes the efforts of everyone who spends time reviewing, testing and monitoring software execution. I would rather focus my creativity on learning and improving how value is delivered than on finding neutral or happy terms to describe errors, defects and failures.


I recently studied for and passed the International Software Testing Qualifications Board’s (ISTQB) Certified Tester, Foundation Level (CTFL) exam. During my career I have been a tester, managed a test group and consulted on testing processes. During my time as a tester and a test manager, I was not explicitly aware of the seven principles of testing; however, I think I understood them in my gut. Unfortunately most of my managers and clients did not understand them, which meant they behaved in a way that never felt rational and always devolved into a discussion of why bugs made it into production. Whether you are involved in testing, developing, enhancing, supporting or managing IT projects, an understanding of the principles of testing can and should influence your professional behavior. I have broken the seven principles into two groups. Group one relates to why we can’t catch them all; the second focuses on where we find defects. The first group includes:

  1. Testing shows the presence of defects. Stated differently, testing proves that the defects you find exist, but it does not prove that there aren’t other defects you did not find. Understanding that testing does not prove that software or any product is defect free means that we always need to plan for and mitigate the risk that a defect will be found as the development process progresses through to a production environment.
  2. Exhaustive testing is impossible. Testing all combinations of inputs, outputs and processing conditions is generally not possible (I was involved in a spirited argument at a testing conference over whether, in very simple cases, exhaustive testing might be possible). Even if we set aside esoteric test cases, such as the possibility of a neutrino changing active memory while your software, application or product is using it, the number of possible permutations for even simple changes is eye-popping (consider calculating the number of possible combinations of a simple change with 15 independent inputs, each having 10 possible values; see the back-of-the-envelope calculation after this list). If exhaustive testing is not possible, then testers and test managers must use other techniques to focus the time and effort they have on what is important and risky. Developing an understanding of the potential impact and probability of problems (risk) is needed to target testing resources.
  3. Pesticide Paradox. The value of running the same type of test over and over on an application wanes over time. The pesticide metaphor draws attention to the fact that once a test finds the bugs it is designed to find (or can find, a factor of how the test is implemented), the remaining bugs will not be found by that test. Tests must be refactored over time to continue to be effective. This is why simply automating a set of tests and then running them over and over is not an adequate risk reduction strategy.
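As a back-of-the-envelope check on principle 2, here is the combination count for 15 independent inputs with 10 possible values each (the execution rate assumed below is illustrative):

```python
# 15 independent inputs, 10 possible values each.
combinations = 10 ** 15
print(f"{combinations:,} possible test cases")  # 1,000,000,000,000,000

# Even at an optimistic 1,000 automated test executions per second,
# exhaustive coverage of just these inputs would take tens of thousands of years.
seconds = combinations / 1_000
print(f"{seconds / (60 * 60 * 24 * 365):,.0f} years")  # ~31,710 years
```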

The first three principles of testing forcibly remind everyone involved in developing, maintaining or supporting IT applications (hardware or software) that zero defects is aspirational, but not realistic. That understanding makes the shocked disbelief and manic finger pointing that follow defects discovered late in the development cycle or in production hard to justify. Defects exist and will be found. Our strategy should start with avoiding creating defects in the first place, then focus testing (the whole range of testing, from reviews to dynamic testing) on the areas of the application or change that pose the greatest risk to the business if a defect is not found, and finally have a plan in place for the bugs that run the gauntlet. In the world of IT, everyone (developers, testers, operators and network engineers alike) needs to work together to improve quality within real-world constraints because, unlike Pokémon, you are never going to catch them all.

Gary Gack (whom I know and have interviewed on the Software Process and Measurement Cast – http://www.spamcast.net) has posted a survey. Please take the time to fill it out!

Hope this finds you well and prospering. I’m conducting a survey on defect containment practices and metrics. I’m in hopes you or someone in your team (or maybe among your clients) will respond to the survey with either current or earlier data – it’s quite short. I’ll share a summary of the results with you when I get a reasonable # of responses (4 so far, since Friday). Please feel free to refer the link to anyone you think might participate – I plan to keep it live for an extended period.

http://www.surveymonkey.com/s.aspx?sm=v9v4RBH9fe4Mu8M9Pw97VQ_3d_3d