Listen Now

Subscribe: Apple Podcast
Check out the podcast on Google Play Music

The SPaMCAST 590 features my interview with Nancy Kastl. Nancy and I discussed testing and the future of the testing profession. The future of testing is not cut and dried; in the short run, expect more automation, and in the long term, codeless testing and AI might replace entry-level testers. An eye-opening interview! (more…)

 

Listen Now
Subscribe: Apple Podcast

Check out the podcast on Google Play Music

SPaMCAST 516 features our interview with Nishi Grover Garg.  Nishi and I started by discussing the major differences between agile and non-agile testing and ended with a discussion of agile pods. This is a wonderfully idea-rich interview.

Note:  I am recording part of this episode remotely from a hotel in Brazil!

Nishi’s Bio:

Nishi is a consulting Agile and software testing trainer. With a decade of experience working in Agile environments at different product-based companies, she has worked in all stages of the software testing life cycle, from white-box, black-box, and automation testing to usability testing. Having now made this her full-time job, Nishi is a coach, trainer, and mentor in the areas of Agile and software testing, specializing in conducting QA induction boot camps, ISTQB workshops, DevOps Foundation courses, and Selenium automation courses. She is certified by the Agile Testing Alliance (ATA) as a CP-DOF, CP-SAT, CP-AAT, and CP-MAT, and by ISTQB as a Foundation and Advanced Test Analyst, and she likes to update her skills periodically. She is also a passionate freelance writer and contributes to many online forums on new topics of interest in the industry, such as the TechWell community’s AgileConnection.com and StickyMinds.com. Check out her blog at http://www.testwithnishi.com to find her articles and catch up on her latest professional activities!

Contact information:

Blog: www.testwithnishi.com

Email: grover.nishi@gmail.com


Re-Read Saturday News

This week we begin our read of Bad Blood (buy your copy today https://amzn.to/2zTEgPq  and support the blog and the author).  Bad Blood is a new book for me, therefore a “read” rather than a re-read. We begin with the introductory material and a proposed plan for the read.

Week 1 – Approach and Introduction – https://bit.ly/2J1pY2t 
(more…)

 

Listen Now
Subscribe: Apple Podcast
Check out the podcast on Google Play Music

SPaMCAST 504 features our interview with Gerie Owen.  Gerie and I discussed continuous testing, DevOps, and testing in an agile culture! We worked through big ideas that can immediately change how you work and deliver value.

Gerie Owen is the VP of Knowledge and Innovation at QualiTest. Gerie represents the QualiTest team at the client site and manages the offshore team’s test activities, including the development of an automated regression test suite. Gerie manages large, complex projects involving multiple applications, coordinates test teams across multiple time zones, and delivers high-quality projects on time and within budget.

Gerie is also a Certified Scrum Master and a conference presenter and author on testing and test management topics. She enjoys mentoring new QA leads and brings a cohesive team approach to testing.  Gerie is the author of many articles on testing and is currently writing a series on the Brave New Worlds of testing. She chooses her presentation topics based on her testing and test management experiences, what she has learned from them, and what she would like to learn to improve them.

Web: https://www.qualitestgroup.com

Blog: https://testinggirl.wordpress.com/about-gerie-owen/

 

Re-Read Saturday News
Today we complete our re-read of Turn the Ship Around! with a few final thoughts.  Next week we will begin The Checklist Manifesto by Atul Gawande (use the link and buy a copy so you can read along).

Current Installment:

Week 19: Final Thoughts! – https://bit.ly/2O4Pc21 (more…)

Listen Now
Subscribe on iTunes
Check out the podcast on Google Play Music

SPaMCAST 467 features our essay on value. Value is the most talked about and least understood concept in Agile. In terms of software development, enhancements, and maintenance, the value of a piece of work is the worth of the outcome that results from doing the work.

In the second position is Jeremy Berriault and the QA Corner!  Jeremy discusses testing in difficult situations. Are there differences? Jeremy has the answers!

Gene Hughson completes the cast with a discussion of a recent missive, Management, Simple and Wrong – Semantics, Systems, and Self-Correction.  This entry at Form Follows Function even includes a reference to Snidely Whiplash!

Upcoming Appearances

Metricas 2017

I will be keynoting on Agile leadership and then delivering one of my favorites, Function Points and Pokémon Go.
29 November 2017
Sao Paulo, Brazil

Register

Re-Read Saturday News (more…)

Listen Now
Subscribe on iTunes
Check out the podcast on Google Play Music

SPaMCAST 463 features our essay on using big-picture stories to generate resonance.  Early in the history of Agile, most descriptions of Agile included the need to define a central metaphor to help guide the work.  Over time, the idea of a central metaphor has disappeared as Agile thought leaders have focused on more tactical facets of Agile methods and frameworks. It’s time to reconsider the big-picture story!

We will also have columns from Gene Hughson of Form Follows Function fame.  Gene and I discuss his recent essay, Management, Simple and Wrong – Semantics, Systems, and Self-Correction.  This essay is about meaning and includes an appearance from Snidely Whiplash.

Anchoring the cast, Jeremy Berriault brings the QA Corner to the podcast.  Jeremy and I discussed motivating testers. Testers, like practitioners of any other discipline, require the correct care and feeding to deliver value effectively.

Here is a promo for my upcoming ITMPI Webinar!

Wed, Oct 18, 2017, 11:00 AM (EST)

Product Owners In Agile – The Really Hard Role

In this webinar, you will learn why an Agile team’s product owner has a special obligation for leadership and value delivery.  It’s a hard role, but we will discuss how to make it work!

Register (more…)

Listen Now
Subscribe on iTunes
Check out the podcast on Google Play Music

The Software Process and Measurement Cast 406 features our interview with Erik van Veenendaal.  We discussed Agile testing, risk and testing, the Test Maturity Model Integrated (TMMi), and why in an Agile world quality and testing still matter.

Erik van Veenendaal (www.erikvanveenendaal.nl) is a leading international consultant and trainer and a recognized expert in the areas of software testing and requirements engineering. He is the author of a number of books and papers within the profession, one of the core developers of the TMap testing methodology, and a participant in working parties of the International Requirements Engineering Board (IREB). He is one of the founding members of the TMMi Foundation, the lead developer of the TMMi model, and currently a member of the TMMi executive committee. Erik is a frequent keynote and tutorial speaker at international testing and quality conferences. For his major contribution to the field of testing, Erik received the European Testing Excellence Award (2007) and the ISTQB International Testing Excellence Award (2015). You can follow Erik on Twitter via @ErikvVeenendaal.

Re-Read Saturday News

This week we continue our re-read of Kent Beck’s XP Explained, Second Edition with a discussion of Chapters 14 and 15, in which we dive into design and scaling. These chapters address two critical and controversial topics that XP profoundly rethought.

I am still collecting thoughts on what to read next. Should the next book be a re-read or a new read? Thoughts?

Use the link to XP Explained in the show notes when you buy your copy to read along; doing so supports both the blog and the podcast. Visit the Software Process and Measurement Blog (www.tcagley.wordpress.com) to catch up on past installments of Re-Read Saturday. (more…)

Listen Now

Subscribe on iTunes

The Software Process and Measurement Cast 389 features our essay on the different layers and anti-patterns of Agile acceptance testing. Many practitioners see Agile acceptance testing as focused solely on validating the business-facing functionality. This is a misunderstanding; acceptance testing is more varied.

We also have a column from Kim Pries, the Software Sensei.  Kim discusses the significance of soft skills. Kim starts his essay with the statement, “The terms we use to talk about soft skills may reek of subjective hand-waving, but they can often be critical to a career.”

Gene Hughson anchors the cast with a discussion from his blog Form Follows Function, titled OODA vs PDCA – What’s the Difference? Gene concludes that OODA loops help address the fact that “We can’t operate with a “one and done” philosophy” when it comes to software architecture.

We are also changing and curtailing some of the comments at the end of the cast based on feedback from listeners. We will begin spreading out some of the segments, such as future events, over the month so that if you binge listen, the last few minutes won’t be as boring. (more…)

Listen Now

Subscribe on iTunes

Software Process and Measurement Cast 379 features our short essay on the relationship between done and value. The essay is in response to a question from Anteneh Berhane.  Anteneh called me to ask one of the hardest questions I had ever been asked: Why doesn’t the definition of done include value?

We will also have an entry from Jeremy Berriault’s QA Corner.  Jeremy and I discussed test data and why having a suite of test data that many projects can use is important for efficiency.  One open question is who should bite the bullet and build the first iteration of a test data library.

Steve Tendon completes this cast with a discussion of the next chapter in his book, Tame The Flow: Hyper-Productive Knowledge-Work Performance, The TameFlow Approach and Its Application to Scrum and Kanban.  Chapter 7 is titled “Budgeting is Harmful.”  Steve hits classic budgeting head-on and provides options that improve flexibility and innovation.

Remember to help grow the podcast by reviewing the SPaMCAST on iTunes, Stitcher or your favorite podcatcher/player. Then share the review! Help your friends find the Software Process and Measurement Cast. After all, friends help friends find great podcasts!

Re-Read Saturday News (more…)


Acceptance Testing is rarely just one type of testing.

Many practitioners see Agile acceptance testing as focused solely on the business-facing functionality. This is a misunderstanding; acceptance testing is more varied. The body of knowledge that supports the International Software Testing Qualifications Board’s testing certifications deconstructs acceptance testing into four categories:  (more…)

 

The Mythical Man-Month

The Whole and the Parts is the thirteenth essay of The Mythical Man-Month by Fred P. Brooks. In this essay, Brooks posits the question “How do you build a program or system to work?” The components of a system that “works” must operate together while delivering the functionality needed in a dependable manner.

The process of ensuring what is built “works” begins by designing bugs out. Brooks breaks this section down into four steps that build on each other.

  1. Bug-proofing the definition: The word “definition” combines the wants and needs of the users with the assumptions of developers or authors. Mismatched assumptions cause the most harmful and subtle bugs. Brooks circles back to the idea of conceptual integrity discussed in the essay Aristocracy, Democracy and System Design. Conceptual integrity (the whole system proceeds from one overall design) makes any piece of work easier to use, easier to build, and less subject to bugs. Conceptual integrity also supports simplicity, one of the core principles of Agile.
  2. Testing the specification: Brooks suggests handing the specifications over to the testers before software code is written. The testers will review the specs for completeness and clarity. Peer review processes or test-first development practices deliver feedback that improves the chances that the system will work.
  3. Top-down design: The concepts of top-down design are based on the work of Niklaus Wirth. Wirth’s method treats the design as a sequence of refinement steps. Brooks describes the procedure as sketching out a rough definition and a rough solution that achieves the principal result. The next step is to examine the definition more closely to see how the results differ from what is wanted (feedback). Based on the refinements, the large components are then broken into smaller steps (grooming). The iterative process of breaking work into smaller and smaller chunks while generating feedback sounds suspiciously like lean and Agile. A good top-down design avoids bugs in several ways. First, the clarity of structure makes the precise statement of requirements and functionality easier. Second, the partitioning and independence of modules avoids system bugs. Third, the suppression of detail makes flaws in the structure more apparent. Fourth, the design can be tested at each step during its refinement.
  4. Structured programming: This step focuses on using loops, subroutines, and other control structures to avoid unmaintainable spaghetti code (a small sketch of steps 3 and 4 follows this list).
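To make steps 3 and 4 concrete, here is a minimal Python sketch of stepwise refinement and structured programming. Everything in it, the order-summarizing task, the data format, and the function names, is an illustrative assumption rather than anything from Brooks or Wirth: a rough top-level solution is stated first and then refined into small subroutines built from plain loops and functions.

```python
# A hypothetical illustration of stepwise refinement (top-down design)
# and structured programming. The task, the data format, and every
# function name are assumptions for this sketch, not from Brooks or Wirth.

def summarize_orders(lines):
    """The rough top-level solution: parse, filter, and total the orders."""
    orders = parse_orders(lines)     # refinement step 1
    valid = discard_invalid(orders)  # refinement step 2
    return total_value(valid)       # refinement step 3

def parse_orders(lines):
    """Refinement: turn raw 'id,amount' lines into (id, amount) pairs."""
    orders = []
    for line in lines:  # a plain loop and subroutines, no tangled jumps
        order_id, amount = line.split(",")
        orders.append((order_id, float(amount)))
    return orders

def discard_invalid(orders):
    """Refinement: keep only orders with a positive amount."""
    return [(order_id, amount) for order_id, amount in orders if amount > 0]

def total_value(orders):
    """Refinement: sum the remaining amounts."""
    return sum(amount for _, amount in orders)

if __name__ == "__main__":
    sample = ["A1,10.00", "A2,-3.00", "A3,5.50"]
    print(summarize_orders(sample))  # 15.5
```

Because each refinement is a small, named unit, the design can be exercised and tested at every step, which is precisely the bug-avoidance benefit Brooks attributes to top-down design.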

In the second major section of the essay, on component debugging, Brooks describes four types of debugging: machine debugging, memory dumps, snapshots, and interactive debugging. While each of these types of component-level debugging is still in use today, how they are done is fundamentally different. Very few developers under forty have ever read a dump or had to translate hex to decimal. While much of the section is a walk down memory lane, it is a reminder that testing and defect removal is not just an event after all the code is written.

In the third section, Brooks builds on his earlier comments about the unexpected difficulty of system testing. Brooks argues that the difficulty and complexity of system testing justify a systematic approach. First, begin system testing with debugged components; beginning with buggy components will yield unpredictable results. In other words, do system testing after component debugging. Second, build plenty of scaffolding. Scaffolding gives teams the ability to begin system testing before all the components are done, generating earlier feedback. Third, control changes to the system. Testing a system that is subject to random changes will generate results that are not understandable, which increases the chance of delivering poor quality. Fourth, Brooks suggests adding one component at a time to the system test (incremental integration and testing). Building on a known system generates understandable results, and when a problem appears, the source can be isolated more quickly. Fifth, quantize updates (make changes of fixed size); changes to the system should be either large (releases) or very small (continuous integration), although Brooks states the latter induces instability. Today’s methods and tools have reduced the potential for problems caused by smaller quanta of change.
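As a rough illustration of two of these points, scaffolding and one-component-at-a-time integration, here is a minimal Python sketch. The checkout flow, the pricing engine, and every name in it are hypothetical: a stub stands in for an unfinished component so system testing can begin early, and the finished component is later added as a single, controlled change.

```python
# A hypothetical sketch of scaffolding and incremental integration.
# PricingStub, Checkout, and RealPricingEngine are invented names for
# illustration; nothing here comes from Brooks's text.

class PricingStub:
    """Scaffolding: a predictable stand-in for an unfinished pricing engine."""
    def price(self, item):
        return 10.0  # a fixed, known answer for every item

class Checkout:
    """The component under system test; it depends on some pricing engine."""
    def __init__(self, pricing):
        self.pricing = pricing

    def total(self, items):
        return sum(self.pricing.price(item) for item in items)

def test_checkout_totals_with_stub():
    # System testing begins before the real pricing engine exists.
    checkout = Checkout(PricingStub())
    assert checkout.total(["apple", "pear"]) == 20.0

if __name__ == "__main__":
    test_checkout_totals_with_stub()
    print("checkout system test passed against scaffolding")
    # Later, the real (already debugged) component is swapped in as a
    # single controlled change: checkout = Checkout(RealPricingEngine())
```

Swapping the stub for the real component is exactly the kind of small, known change that keeps system test results understandable.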

The ideas in this essay are a path to verifying and validating software. While it might seem like a truism, Brooks reminds us that building software that works starts well before the first line of code is ever written.

Previous installments of the Re-read of The Mythical Man-Month

Introductions and The Tar Pit

The Mythical Man-Month (The Essay)

The Surgical Team

Aristocracy, Democracy and System Design

The Second-System Effect

Passing the Word

Why Did the Tower of Babel Fail?

Calling the Shot

Ten Pounds in a Five-Pound Sack

The Documentary Hypothesis

Plan to Throw One Away

Sharp Tools