Listen Now
Subscribe on iTunes
Check out the podcast on Google Play Music

The Software Process and Measurement Cast 406 features our interview with Erik van Veenendaal.  We discussed Agile testing, risk and testing, the Test Maturity Model Integrated (TMMi), and why in an Agile world quality and testing still matter.

Erik van Veenendaal (www.erikvanveenendaal.nl) is a leading international consultant and trainer, and a recognized expert in the areas of software testing and requirements engineering. He is the author of a number of books and papers within the profession, one of the core developers of the TMap testing methodology, and a participant in working parties of the International Requirements Engineering Board (IREB). He is one of the founding members of the TMMi Foundation, the lead developer of the TMMi model, and currently a member of the TMMi executive committee. Erik is a frequent keynote and tutorial speaker at international testing and quality conferences. For his major contributions to the field of testing, Erik received the European Testing Excellence Award (2007) and the ISTQB International Testing Excellence Award (2015). You can follow Erik on Twitter via @ErikvVeenendaal.

Re-Read Saturday News

This week we continue our re-read of Kent Beck’s XP Explained, Second Edition with a discussion of Chapters 14 and 15, diving into design and scaling, two critical and controversial topics that XP profoundly rethought.

I am still collecting thoughts on what comes next: a re-read or a new read? Thoughts?

Use the link to XP Explained in the show notes when you buy your copy to read along to support both the blog and podcast. Visit the Software Process and Measurement Blog (www.tcagley.wordpress.com) to catch up on past installments of Re-Read Saturday.

Listen Now

Subscribe on iTunes

The Software Process and Measurement Cast 389 features our essay on the different layers and anti-patterns of Agile acceptance testing. Many practitioners see Agile acceptance testing as focused solely on validating business-facing functionality. This is a misunderstanding; acceptance testing is more varied.

We also have a column from Kim Pries, the Software Sensei.  Kim discusses the significance of soft skills. Kim starts his essay with the statement, “The terms we use to talk about soft skills may reek of subjective hand-waving, but they can often be critical to a career.”

Gene Hughson anchors the cast with a discussion from his blog, Form Follows Function, titled “OODA vs PDCA – What’s the Difference?” Gene concludes that OODA loops help address the fact that “we can’t operate with a ‘one and done’ philosophy” when it comes to software architecture.

We are also changing and curtailing some of the comments at the end of the cast based on feedback from listeners. We will begin spreading out some of the segments, such as future events, over the month so that if you binge listen, the last few minutes won’t be as repetitive.

Listen Now

Subscribe on iTunes

Software Process and Measurement Cast 379 features our short essay on the relationship between done and value. The essay is in response to a question from Anteneh Berhane.  Anteneh called me to ask one of the hardest questions I had ever been asked: Why doesn’t the definition of done include value?

We will also have an entry from Jeremy Berriault’s QA Corner. Jeremy and I discussed test data and why having a suite of test data that many projects can use is important for efficiency. One question: who should bite the bullet and build the first iteration of a test data library?

Steve Tendon completes this cast with a discussion of the next chapter in his book, Tame The Flow: Hyper-Productive Knowledge-Work Performance, The TameFlow Approach and Its Application to Scrum and Kanban. Chapter 7 is titled “Budgeting is Harmful.” Steve hits classic budgeting head-on and provides options that improve flexibility and innovation.

Remember to help grow the podcast by reviewing the SPaMCAST on iTunes, Stitcher or your favorite podcatcher/player. Then share the review! Help your friends find the Software Process and Measurement Cast. After all, friends help friends find great podcasts!

Re-Read Saturday News

Acceptance Testing is rarely just one type of testing.

Many practitioners see Agile acceptance testing as focused solely on business-facing functionality. This is a misunderstanding; acceptance testing is more varied. The body of knowledge that supports the International Software Testing Qualifications Board’s testing certifications deconstructs acceptance testing into four categories.

 

The Mythical Man-Month

The Whole and the Parts is the thirteenth essay of The Mythical Man-Month by Fred P. Brooks. In this essay, Brooks poses the question “How do you build a program or system that works?” The components of a system that “works” must operate together while delivering the needed functionality in a dependable manner.

The process of ensuring what is built “works” begins by designing bugs out. Brooks breaks this section down into four steps that build on each other.

  1. Bug-proofing the definition: The word “definition” combines the wants and needs of the users with the assumptions of developers or authors. Mismatched assumptions cause the most harmful and subtle bugs. Brooks circles back to the idea of conceptual integrity discussed in the essay Aristocracy, Democracy and System Design. Conceptual integrity (the whole system proceeds from one overall design) makes any piece of work easier to use, easier to build, and less subject to bugs. Conceptual integrity also supports simplicity, one of the core principles of Agile.
  2. Testing the specification: Brooks suggests handing the specifications over to the testers before software code is written. The testers will review the specs for completeness and clarity. Peer review processes or test-first development practices deliver feedback that improves the chances that the system will work.
  3. Top-down design: The concepts of top-down design are based on the work of Niklaus Wirth. Wirth’s method is to develop the design as a sequence of refinement steps. Brooks describes the procedure as sketching out a rough definition and a rough solution that achieves the principal result. The next step is to examine the definition more closely to see how the result differs from what is wanted (feedback). Based on that examination, the large components are broken into smaller steps (grooming). The iterative process of breaking work into smaller and smaller chunks while generating feedback sounds suspiciously like lean and Agile. A good top-down design avoids bugs in several ways. First, the clarity of structure makes a precise statement of the requirements and functionality easier. Second, the partitioning and independence of modules avoids system bugs. Third, the suppression of detail makes flaws in the structure more apparent. Fourth, the design can be tested at each step during its refinement.
  4. Structured programming: This focuses on using loops, subroutines, and other structures to avoid unmaintainable spaghetti code (see the sketch after this list).
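
To make steps 3 and 4 concrete, here is a minimal sketch in Python (my illustration, not Brooks’s; the report-generator example and its function names are hypothetical). The top-level function states the rough solution, and each step is then refined into a small, structured subroutine using loops and conditionals rather than tangled jumps:

    # Top-down refinement: state the rough solution first, then refine.
    def generate_report(raw_records):
        """Rough solution: validate, summarize, format."""
        valid = validate(raw_records)
        summary = summarize(valid)
        return format_report(summary)

    def validate(records):
        # Structured loop and conditional rather than scattered jumps.
        return [r for r in records if r.get("amount", -1) >= 0]

    def summarize(records):
        return {"count": len(records), "total": sum(r["amount"] for r in records)}

    def format_report(summary):
        return f"{summary['count']} records, total {summary['total']}"

    print(generate_report([{"amount": 10}, {"amount": 5}, {}]))
    # -> 2 records, total 15

Each refinement can be tested on its own, which is Brooks’s fourth benefit of top-down design in action.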

In the second major section of the essay, on component debugging, Brooks describes four types of debugging: machine debugging, memory dumps, snapshots, and interactive debugging. While each of these types of component-level debugging is still in use today, how they are done is fundamentally different. Very few developers under forty have ever read a dump or had to translate hex to decimal. While much of the section is a walk down memory lane, it is a reminder that testing and defect removal is not just an event after all the code is written.
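
As a modern counterpart to reading dumps, component debugging today leans heavily on automated unit tests written alongside the code. A minimal sketch (the parse_amount component and its tests are hypothetical, my illustration rather than anything from the essay):

    import unittest

    def parse_amount(text):
        """Hypothetical component under test: parse a monetary amount."""
        return round(float(text.strip()), 2)

    class ParseAmountTest(unittest.TestCase):
        def test_strips_whitespace(self):
            self.assertEqual(parse_amount(" 3.14159 "), 3.14)

        def test_rejects_garbage(self):
            # The defect surfaces at the component level, not in system test.
            with self.assertRaises(ValueError):
                parse_amount("not a number")

    if __name__ == "__main__":
        unittest.main()

Run continuously during development, tests like these make defect removal an ongoing activity rather than an event at the end.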

In the third section, Brooks builds on his earlier comments about the unexpected difficulty of system testing. Brooks argues that the difficulty and complexity of system testing justify a systematic approach. First, begin system testing with debugged components; beginning with buggy components will yield unpredictable results. In other words, do system testing after component debugging. Second, build plenty of scaffolding. Scaffolding provides teams with the ability to begin system testing before all components are done, generating earlier feedback. Third, control changes to the system. Testing a system that is subject to random changes will generate results that are not understandable, which increases the chance of delivering poor quality. Fourth, Brooks suggests adding one component at a time to the system test (incremental integration and testing). Building on a known system generates understandable results, and when a problem appears the source can be isolated more quickly. Fifth, quantize updates (make changes of a fixed size), which suggests that changes to the system should be either large (releases) or very small (continuous integration), although Brooks states the latter induces instability. Today’s methods and tools have reduced the potential for problems caused by smaller quanta of change.
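
Brooks’s scaffolding maps naturally onto today’s test stubs and fakes. A minimal sketch, assuming a hypothetical checkout system (my example, not Brooks’s): a canned-answer stub stands in for an unfinished pricing component so system testing can begin early, with the real component swapped in later, one piece at a time:

    class StubPricingService:
        """Scaffolding: a fixed-answer stand-in for the real,
        not-yet-debugged pricing component."""
        def price_in_cents(self, item_id):
            return 999  # canned response, good enough for early system tests

    class Checkout:
        def __init__(self, pricing_service):
            # Incremental integration: swap in the real service later.
            self.pricing = pricing_service

        def total_in_cents(self, item_ids):
            return sum(self.pricing.price_in_cents(i) for i in item_ids)

    checkout = Checkout(StubPricingService())
    assert checkout.total_in_cents(["a", "b"]) == 1998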

The ideas in this essay are a path to verifying and validating software. While it might seem like a truism, Brooks reminds us that building software that works starts well before the first line of code is even written.

Previous installments of the Re-read of The Mythical Man-Month

Introductions and The Tar Pit

The Mythical Man-Month (The Essay)

The Surgical Team

Aristocracy, Democracy and System Design

The Second-System Effect

Passing the Word

Why Did the Tower of Babel Fail?

Calling the Shot

Ten Pounds in a Five-Pound Sack

The Documentary Hypothesis

Plan to Throw One Away

Sharp Tools

The spiral method is just one example of an Agile hybrid.

Many organizations have declared themselves Agile. Who wouldn’t want to be Agile? If you are not Agile, aren’t you by definition clumsy, slow or dull? Very few organizations would sign up for those descriptions; however, Agile in the world of software development, enhancement and maintenance means more than being able to move quickly and easily. Agile means that a team or organization has embraced a set of principles that shape behaviors and lead to the adoption of a set of techniques. When there is a disconnect between the Agile walk and the Agile talk, management is often the barrier when it comes to principles, and practitioners are when it comes to techniques. Techniques are often deeply entrenched and require substantial change efforts. Many organizations state they are using a hybrid approach to Agile to transition from a more classic approach to some combination of Scrum, Kanban and Extreme Programming. This is considered a safe, conservative approach that allows an organization to change organically. The problem is that this tactic rarely works, and organizations often get stuck. Failure to spend the time and effort on change management often leads to hybrid frameworks that are neither fish nor fowl. Those neither-fish-nor-fowl frameworks are rarely Agile. Attributes of stuck (or potentially stuck) organizations are:

The iterative waterfall. The classic iterative waterfall traces its roots to the Boehm Spiral Model. In the faux Agile version of iterative development, short, time-boxed iterations are used for each of the classic waterfall phases. A requirements sprint is followed by a design sprint, then a development sprint, and you know the rest. Both the classic spiral model and the faux Agile version are generally significantly better than the classic waterfall model for generating feedback and delivering value faster; therefore, organizations stop moving toward Agile and reap only the partial rewards.

Upfront requirements. In this hybrid approach to Agile, a team or organization will gather all of the requirements (sometimes called features) at the beginning of the project and then have them locked down before beginning “work.” Agile is based on a number of assumptions about requirements. Two key assumptions are that requirements are emergent, and that once known, requirements decay over time. Locking product backlogs flies in the face of both of these assumptions, which puts teams and organizations back into the age of building solutions that, when delivered, don’t meet current business needs. This approach typically occurs when the Agile rollout is done using a staggered approach, beginning with the developers and only later reaching out to the business analysts and the business. The interface between groups who have embraced Agile and those that have not often generates additional friction, which is often blamed on Agile, making further change difficult.

Testing after development is “done.” One of the most pernicious Agile hybrids is testing the sprint after development is complete. I have heard this hybrid called “development+1 sprint.” In this scenario a team will generate a solution (functional code if this is a software problem), demo it to customers, declare it done, and THEN throw it over the wall to testers. Testers will ALWAYS find defects, which requires them to throw the software back over the wall, either to be worked on immediately, disrupting the current development sprint, or to be put on the backlog to be addressed later. Agile principles espouse the delivery of shippable software (or at least potentially shippable) at the end of every sprint. Shippable means TESTED. Two slightly less pernicious variants of this problem are the use of hardening sprints or doing all of the testing at the end of the project. At least in those cases you are not pretending to be Agile.

How people work is the only cut-and-dried indicator of whether an organization is Agile or not. Sometimes how people work is a reflection of a transition; however, without a great deal of evidence that the transition is moving along with alacrity, I assume they are or will soon be stuck. When a team or organization adopts Agile, pick a project and have everyone involved with that project adopt Agile at the same time, across the whole flow of work. If that means you have to coach one whole project or team at a time, so be it. Think of it as an approach that slices the onion, addressing each layer at the same time, rather than peeling it layer by layer.

One final note: Getting stuck in most of these hybrids is probably better than the method(s) that was being used before. This essay should not be read as an indictment of people wrestling with adopting Agile, but rather as a prod to continue to move forward.

Listen Now

Subscribe on iTunes

To paraphrase Ed Sullivan, “We have a big, big show this week,” so we will keep the up-front chitchat to a minimum. First up is our essay on Agile testing. Even if you are not a tester, understanding how testing flows in Agile projects is important to maximize value.

Second, we have a new installment from Jeremy Berriault’s QA Corner.  In this installment Jeremy talks about test cases.  More is not always the right answer.

Anchoring the cast is Steve Tendon’s column discussing the TameFlow methodology and his great new book, Tame The Flow: Hyper-Productive Knowledge-Work Performance, The TameFlow Approach and Its Application to Scrum and Kanban.

Call to Action!

I have a challenge for the Software Process and Measurement Cast listeners for the next few weeks. I would like you to find one person that you think would like the podcast and introduce them to the cast. This might mean sending them the URL or teaching them how to download podcasts. If you like the podcast and think it is valuable they will be thankful to you for introducing them to the Software Process and Measurement Cast. Thank you in advance!

Re-Read Saturday News

We have just begun the Re-Read Saturday of The Mythical Man-Month. We are off to a rousing start, beginning with the Tar Pit. Get a copy now and start reading!

The Re-Read Saturday and other great articles can be found on the Software Process and Measurement Blog.

Remember: We just completed the Re-Read Saturday of Eliyahu M. Goldratt and Jeff Cox’s The Goal: A Process of Ongoing Improvement, which began on February 21st. What did you think? Did the re-read cause you to read The Goal for a refresher? Visit the Software Process and Measurement Blog and review the whole re-read.

Note: If you don’t have a copy of the book, buy one. If you use the link below it will support the Software Process and Measurement blog and podcast.

Dead Tree Version or Kindle Version 

Upcoming Events

Software Quality and Test Management 

September 13 – 18, 2015

San Diego, California

http://qualitymanagementconference.com/

I will be speaking on the impact of cognitive biases on teams!  Let me know if you are attending!

 

More on other great conferences soon!

Next SPaMCast

The next Software Process and Measurement Cast will feature our interview with Arlene Minkiewicz. Arlene and I talked about technical debt. Not sure what technical debt is? Well, to some people it is a metaphor for cut corners, and to others it is a measure of work that will need to be done later. In either case, a little goes a long way!

 

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques, co-authored by Murali Chemuturi and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: “This book will prove that software projects should not be a tedious process, neither for you or your team.” Support SPaMCAST by buying the book here.

Available in English and Chinese.