
Acceptance Testing is rarely just one type of testing.

Many practitioners see Agile acceptance testing as focused solely on business-facing functionality. This is a misunderstanding; acceptance testing is more varied. The body of knowledge that supports the International Software Testing Qualifications Board’s testing certifications deconstructs acceptance testing into four categories.

 

The Mythical Man-Month

The Whole and the Parts is the thirteenth essay of The Mythical Man-Month by Fred P. Brooks. In this essay, Brooks posits the question “How do you build a program or system to work?” The components of a system that “works” must operate together while delivering the functionality needed in a dependable manner.

The process of ensuring what is built “works” begins by designing bugs out. Brooks breaks this section down into four steps that build on each other.

  1. Bug-proofing the definition: The word “definition” combines the wants and needs of the users with the assumptions of developers or authors. Mismatched assumptions cause the most harmful and subtle bugs. Brooks circles back to the idea of conceptual integrity discussed in the essay Aristocracy, Democracy and System Design. Conceptual integrity (the whole system proceeds from one overall design) makes any piece of work easier to use, easier to build and less subject to bugs. Conceptual integrity also supports simplicity, one of the core principles of Agile.
  2. Testing the specification: Brooks suggests handing the specifications over to the testers before software code is written. The testers review the specs for completeness and clarity. Peer review processes or test-first development practices deliver feedback that improves the chances that the system will work.
  3. Top-down design: The concepts of top-down design are based on the work of Niklaus Wirth. Wirth’s method is to treat design as a sequence of refinement steps. Brooks describes the procedure as sketching out a rough definition and a rough solution that achieves the principal result. The next step is to examine the definition more closely to see how the results differ from what is wanted (feedback). Based on the refinements, the next step is to break the large components into smaller steps (grooming). The iterative process of breaking work into smaller and smaller chunks while generating feedback sounds suspiciously like lean and Agile (a minimal sketch of stepwise refinement follows this list). A good top-down design avoids bugs in several ways. First, the clarity of structure makes the precise statement of requirements and functionality easier. Second, the partitioning and independence of modules avoids system bugs. Third, the suppression of detail makes flaws in the structure more apparent. Fourth, the design can be tested at each step during its refinement.
  4. Structured programming: Focuses on using loops, subroutines, and other structures to avoid unmaintainable spaghetti code.
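
Wirth’s refinement steps are easier to see in code than in prose. Below is a minimal sketch in Python, assuming a hypothetical report-generation task (the example and all of the names are mine, not Brooks’s): the top level is written first as a rough solution, each step is then refined independently, and the design can be tested after every refinement.

```python
# A sketch of Wirth-style stepwise refinement. The report-generation
# example and all names are hypothetical, not taken from Brooks's text.

def produce_report(raw_records):
    """Top level sketched first: a rough solution as a sequence of named steps."""
    records = validate(raw_records)
    totals = summarize(records)
    return render(totals)

# Each step is then refined independently; the suppressed detail lives here.
def validate(raw_records):
    # Drop malformed rows so later steps can assume clean input.
    return [r for r in raw_records if "dept" in r and "amount" in r]

def summarize(records):
    # Accumulate a total per department.
    totals = {}
    for record in records:
        totals[record["dept"]] = totals.get(record["dept"], 0) + record["amount"]
    return totals

def render(totals):
    # Present the totals, one department per line.
    return "\n".join(f"{dept}: {amount}" for dept, amount in sorted(totals.items()))

if __name__ == "__main__":
    sample = [{"dept": "QA", "amount": 3}, {"bad": "row"}, {"dept": "QA", "amount": 4}]
    print(produce_report(sample))  # QA: 7
```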

In the second major section of the essay, on component debugging, Brooks describes four types of debugging: machine debugging, memory dumps, snapshots and interactive debugging. While each of these types of component-level debugging is still in use today, how they are done is fundamentally different. Very few developers under forty have ever read a dump or had to translate hex to decimal. While much of the section is a walk down memory lane, it is a reminder that testing and defect removal is not just an event after all the code is written.
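
For contrast, here is a minimal sketch of how the interactive variant looks today (a hypothetical example using only Python’s standard library): the post-mortem debugger replaces decoding a dump by hand.

```python
# A minimal sketch (hypothetical example, Python standard library only) of
# modern interactive debugging: instead of decoding a hexadecimal memory dump
# by hand, the developer inspects live program state at the point of failure.

import pdb

def average(values):
    return sum(values) / len(values)  # fails on an empty list

if __name__ == "__main__":
    try:
        average([])
    except ZeroDivisionError:
        # Post-mortem debugging: a live, interactive "dump" of the failure.
        pdb.post_mortem()
```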

In the third section, Brooks builds on his earlier comments about the unexpected difficulty of system testing. Brooks argues that the difficulty and complexity of system testing justify a systematic approach. First, begin system testing with debugged components; beginning with buggy components will yield unpredictable results. In other words, do system testing after component debugging. Second, build plenty of scaffolding. Scaffolding provides teams with the ability to begin system testing before all components are done, generating earlier feedback. Third, control changes to the system. Testing a system that is subject to random changes will generate results that are not understandable, which increases the chance of delivering poor quality. Fourth, Brooks suggests adding one component at a time to the system test (incremental integration and testing). Building on a known system generates understandable results, and when a problem appears the source can be isolated more quickly. Fifth, quantize updates (make changes of fixed size), which suggests that changes to the system should either be large (releases) or very small (continuous integration), although Brooks states the latter induces instability. Today’s methods and tools have reduced the potential for problems caused by smaller quanta of change.
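
The scaffolding idea translates directly to today’s test stubs. Here is a minimal sketch, assuming a hypothetical checkout system and payment gateway (the names are mine, not Brooks’s): a stub with canned responses lets system testing begin before the real component is debugged, and real components are swapped in one at a time.

```python
# A minimal sketch of Brooks's scaffolding advice: a stub stands in for a
# component that is not yet debugged so system testing can start early. The
# checkout/payment-gateway example and all names are hypothetical.

class PaymentGatewayStub:
    """Scaffolding: mimics the real gateway's interface with canned answers."""

    def charge(self, amount):
        # A canned, predictable response; the real component comes later.
        return {"status": "approved", "amount": amount}

def checkout(cart_total, gateway):
    """System code under test; it depends only on the gateway's interface."""
    result = gateway.charge(cart_total)
    return result["status"] == "approved"

# The system test runs against the stub; once the real gateway is debugged it
# is swapped in, one component at a time (incremental integration and testing).
assert checkout(42, PaymentGatewayStub())
print("system test against scaffolding passed")
```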

The ideas in this essay are a path to verifying and validating software. While it might seem like a truism, Brooks reminds us that building software that works starts well before the first line of code is even written.

Previous installments of the Re-read of The Mythical Man-Month

Introductions and The Tar Pit

The Mythical Man-Month (The Essay)

The Surgical Team

Aristocracy, Democracy and System Design

The Second-System Effect

Passing the Word

Why Did the Tower of Babel Fail?

Calling the Shot

Ten Pounds in a Five-Pound Sack

The Documentary Hypothesis

Plan to Throw One Away

Sharp Tools

The spiral method is just one example of an Agile hybrid.


Many organizations have declared themselves Agile. Who wouldn’t want to be Agile? If you are not Agile, aren’t you by definition clumsy, slow or dull? Very few organizations would sign up for those descriptions; however, Agile in the world of software development, enhancement and maintenance means more than being able to move quickly and easily. Agile means that a team or organization has embraced a set of principles that shape behaviors and lead to the adoption of a set of techniques. When there is a disconnect between the Agile walk and the Agile talk, management is often the barrier where principles are concerned, and practitioners where techniques are concerned. Techniques are often deeply entrenched and require substantial change efforts. Many organizations state they are using a hybrid approach to Agile to transition from a more classic approach to some combination of Scrum, Kanban and Extreme Programming. This is considered a safe, conservative approach that allows an organization to change organically. The problem is that this tactic rarely works, and organizations often get stuck. Failure to spend the time and effort on change management often leads to hybrid frameworks that are neither fish nor fowl. Those neither-fish-nor-fowl frameworks are rarely Agile. Attributes of stuck (or potentially stuck) organizations are:

The iterative waterfall. The classic iterative waterfall traces its roots to the Boehm Spiral Model. In the faux Agile version of iterative development, short, time-boxed iterations are used for each of the classic waterfall phases: a requirements sprint is followed by a design sprint, then a development sprint, and you know the rest. Both the classic spiral model and the faux Agile version are generally significantly better than the classic waterfall model at generating feedback and delivering value faster; therefore, organizations stop moving toward Agile and reap only the partial rewards.

Upfront requirements. In this hybrid approach to Agile, a team or organization will gather all of the requirements (sometimes called features) at the beginning of the project and then have them locked down before beginning “work.” Agile is based on a number of assumptions about requirements. Two key assumptions are that requirements are emergent, and that once known, requirements decay over time. Locking product backlogs flies in the face of both of these assumptions, which puts teams and organizations back into the age of building solutions that, when delivered, don’t meet current business needs. This approach typically occurs when the Agile rollout is staggered, beginning with the developers and only later reaching out to the business analysts and the business. The interface between groups that have embraced Agile and those that have not often generates additional friction, which is often blamed on Agile, making further change difficult.

Testing after development is “done.” One of the most pernicious Agile hybrids is testing in the sprint after development is complete. I have heard this hybrid called “development+1 sprint.” In this scenario a team will generate a solution (functional code if this is a software problem), demo it to customers, declare it done, and THEN throw it over the wall to the testers. Testers will ALWAYS find defects, which requires them to throw the software back over the wall, either to be worked on immediately, disrupting the current development sprint, or to be put on the backlog to be addressed later. Agile principles espouse the delivery of shippable software (or at least potentially shippable) at the end of every sprint. Shippable means TESTED. Two slightly less pernicious variants of this problem are the use of hardening sprints or doing all of the testing at the end of the project. At least in those cases you are not pretending to be Agile.

How people work is the only cut-and-dried indicator of whether an organization is Agile or not. Sometimes how people work is a reflection of a transition; however, without a great deal of evidence that the transition is moving along with alacrity, I assume they are, or soon will be, stuck. When a team or organization adopts Agile, pick a project and have everyone involved with that project adopt Agile at the same time, across the whole flow of work. If that means you have to coach one whole project or team at a time, so be it. Think of it as an approach that slices the onion, addressing each layer at the same time, rather than peeling it layer by layer.

One final note: Getting stuck in most of these hybrids is probably better than the methods that were being used before. This essay should not be read as an indictment of people wrestling with adopting Agile, but rather as a prod to continue to move forward.

Listen Now

Subscribe on iTunes

To paraphrase Ed Sullivan, “We have a big, big show this week,” so we will keep the up-front chit-chat to a minimum. First up is our essay on Agile testing. Even if you are not a tester, understanding how testing flows in Agile projects is important to maximizing value.

Second, we have a new installment from Jeremy Berriault’s QA Corner.  In this installment Jeremy talks about test cases.  More is not always the right answer.

Anchoring the Cast is Steve Tendon’s column discussing the TameFlow methodology and his great new book, Hyper-Productive Knowledge Work Performance.

Call to Action!

I have a challenge for the Software Process and Measurement Cast listeners for the next few weeks. I would like you to find one person that you think would like the podcast and introduce them to the cast. This might mean sending them the URL or teaching them how to download podcasts. If you like the podcast and think it is valuable they will be thankful to you for introducing them to the Software Process and Measurement Cast. Thank you in advance!

Re-Read Saturday News

We have just begun the Re-Read Saturday of The Mythical Man-Month. We are off to a rousing start, beginning with the Tar Pit. Get a copy now and start reading!

The Re-Read Saturday and other great articles can be found on the Software Process and Measurement Blog.

Remember: We just completed the Re-Read Saturday of Eliyahu M. Goldratt and Jeff Cox’s The Goal: A Process of Ongoing Improvement, which began on February 21st. What did you think? Did the re-read cause you to read The Goal for a refresher? Visit the Software Process and Measurement Blog and review the whole re-read.

Note: If you don’t have a copy of the book, buy one. If you use the link below it will support the Software Process and Measurement blog and podcast.

Dead Tree Version or Kindle Version 

Upcoming Events

Software Quality and Test Management 

September 13 – 18, 2015

San Diego, California

http://qualitymanagementconference.com/

I will be speaking on the impact of cognitive biases on teams!  Let me know if you are attending!

 

More on other great conferences soon!

Next SPaMCast

The next Software Process and Measurement Cast will feature our interview with Arlene Minkiewicz. Arlene and I talked about technical debt. Not sure what technical debt is? Well, to some people it is a metaphor for cut corners, and to others it is a measure of work that will need to be done later. In either case, a little goes a long way!

 

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques, co-authored by Murali Chemuturi and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: “This book will prove that software projects should not be a tedious process, neither for you or your team.” Support SPaMCAST by buying the book here.

Available in English and Chinese.

Listen to the Software Process and Measurement Cast 322

SPaMCAST 322 features our interview with Clareice and Clyneice Chaney. Clareice and Clyneice provide insights and practical advice on how Agile and contracting work together. The focus of the interview is on contracting and acquisition of Agile testing; however, the concepts we discussed can be applied to contracting for any type of service using Agile techniques.

Clyneice Chaney brings over 30 years of testing, quality assurance, and process improvement experience. Clyneice holds certifications from the American Society for Quality as a Certified Quality Manager/Organizational Excellence and from the Project Management Institute as a Project Management Professional. She has participated as an examiner for the Baldrige state quality awards for Georgia and Virginia. She is currently an instructor for an international testing certification organization and has presented technical papers at the Software Engineering Institute’s SEPG Conference, the American Society for Quality’s Quality Manager’s conference, the Quality Assurance Institute International Testing Conference, the International Conference on Software Process Improvement, and the Software Test and Performance Testing Conferences.

Clareice Chaney has over 30 years’ experience in Commercial and Government Contracting with an emphasis in contracting within the information technology arena.  She holds a PMP certification with the Project Management Institute and is a certified Professional Contracts Manager (CPCM) through the National Contract Management Association (NCMA). She has presented at the National Contract Management Association World Congress and provided recent collaborations on agile testing and contracting at the Quality Assurance Institute International Conferences.

Call to action!

We are in the middle of a re-read of John Kotter’s classic Leading Change on the Software Process and Measurement Blog.  Are you participating in the re-read? Please feel free to jump in and add your thoughts and comments!

After we finish the current re-read, we will need to decide which book will be next. We are building a list of the books that have had the most influence on readers of the blog and listeners of the podcast. Can you answer the question?

What are the two books that have most influenced your career (business, technical or philosophical)? Send the titles to spamcastinfo@gmail.com.

First, we will compile a list and publish it on the blog. Second, we will use the list to drive future “Re-read” Saturdays. Re-read Saturday is an exciting new feature that began on the Software Process and Measurement blog on November 8th. Feel free to choose your platform: send an email, leave a message on the blog or Facebook, or just tweet the list (use hashtag #SPaMCAST)!

Next

The next Software Process and Measurement Cast will feature our essay on the Attributes Leading to Failure with Agile. Agile projects don’t work when there isn’t open and honest communication within a team. Problems can also occur when all team members are not involved, or if the organization has not bought into the principles of Agile. Knowing what can go wrong with Agile implementations and projects is a step toward making sure those things do not happen!

We will also have the next Form Follows Function column from Gene Hughson and Explaining Change with Jo Ann Sweeney.

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques, co-authored by Murali Chemuturi and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: “This book will prove that software projects should not be a tedious process, neither for you or your team.” Support SPaMCAST by buying the book here.

Available in English and Chinese.

Apparently the cleaning crew follows a process


Developing or maintaining a piece of software requires a fairly complicated set of processes, including processes for collecting requirements, designing, coding, and verifying and validating a solution. All of the processes need to work together well or they risk impacting the quality of the delivered product. Process problems tend to be most severe when testing and engineering processes are mismatched, when organizations embrace a one-size-fits-all testing solution, or when they rely on ad-hoc testing processes (gag).

  • Mismatched processes – Testing is a collaborative process requiring communication between everyone involved in developing software. When development and testing processes are not synchronized, the chance of miscommunication increases. For example, consider the communication problems that would ensue if the developers were using Agile techniques while the testers were using waterfall techniques. Agile development techniques would be focused on delivering functional code rather than the omnibus requirements or design documents that are often used to drive waterfall testing. Whether the framework is Agile, waterfall, RUP or something else, if testing and development have not found a mechanism to synchronize how they work together, defects will make it to production.
  • One-size-fits-all testing solutions – Every project has its own set of nuances and risks. The testing solution for each project needs to be tailored to meet the specific needs of the project. A one-size-fits-all solution will tend to overemphasize specific types of testing (e.g. functional testing, system testing or integration testing) when another type may need emphasis.  For example, recently I observed a large program that initially failed on delivery because integration testing was not part of the standard process the firm used.
  • Ad-hoc testing – Ad-hoc testing (just winging it) went out of style as soon as someone thought about the quality of the code being delivered; it never worked and never will. Just don’t do this.

Development is a dance of multiple interrelated processes. Regardless of whether the team uses a mixture of extreme programming, test-driven development, black-box testing or exploratory testing, the processes need to work together. Synchronized and compatible development and testing processes are critical for developing software effectively and efficiently. Agile techniques leveraging cross-functional teams that include developers and testers put teams in the best position to ensure a synchronized process.

In order to participate you have to be capable.


Testing effectiveness and efficiency will suffer if the organization or team does not have the capability to test well. Testing without the proper level of capability is akin to trying to drive from Cleveland, Ohio to Washington, DC in a car with four flat tires. It could be done, but at what cost? Capabilities include the number of testers, clarity of responsibilities, expertise, tools and environments. Problems in any of these categories will affect the effectiveness and efficiency of testing.

  • The number of testers – There is no fixed ratio of testers to developers; however, too few testers will cause corners to be cut. The development methods used, the amount of test automation available, the application’s criticality and the ability of others in the organization to augment the ranks of the testers will all impact the required staffing level. The business needs and the test goals and strategy will also influence staffing levels for testers.
  • Clarity of responsibilities – The responsibilities for testing can be easily delineated if the team is cross-functional, with a mix of developers and testers supporting a common delivery goal. Techniques such as stand-up meetings are useful for ensuring everyone knows the work they are responsible for completing. As the number of teams increases, ensuring testing responsibilities are understood becomes more problematic. Techniques such as SAFe’s release planning and the role of the release train engineer can be leveraged as tools to coordinate responsibilities.
  • Expertise – Just drafting anyone to do testing is a recipe for using your clients to find your defects. The core of your testing capability needs to be composed of experienced (both in testing and with the application being tested) and certified testers. The core testers should lead testing efforts, but also act as consultants to support others who are also acting as testers (think cross-functional).
  • Tools – Development frameworks like Agile work best when testing is performed early and often. Making testing a ubiquitous part of the development process requires test automation. Automation is needed not only for executing tests, but for generating test data, generating code builds, and capturing defects (see the sketch after this list). Good automation will lessen the testing burden and increase the effectiveness of testing.
  • Environments – A test environment is the combination of hardware, software and data required to run software tests. Test environments should closely emulate the environment the software will run in when it is finally installed in production. Problems in the test environment will generally mask problems that will not be recognized until production. The expense of implementing and maintaining test environments often causes organizations to cut corners on the number or makeup of their test environments.
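
To make the automation point concrete, here is a minimal sketch, assuming a hypothetical discount function and using only Python’s standard library: the test generates its own reproducible test data and reports failures with enough context to capture a defect.

```python
# A minimal sketch of automation beyond mere test execution (hypothetical
# example, Python standard library only): test data is generated from a fixed
# seed, and assertion messages carry enough context to capture a defect.

import random
import unittest

def apply_discount(price, percent):
    """Function under test; the discount logic is a hypothetical example."""
    return round(price * (1 - percent / 100.0), 2)

class DiscountTests(unittest.TestCase):
    def test_generated_prices(self):
        rng = random.Random(42)  # fixed seed makes the generated data reproducible
        for _ in range(100):
            price = round(rng.uniform(0.01, 999.99), 2)
            discounted = apply_discount(price, 10)
            # Failure messages include the generated input, aiding defect capture.
            self.assertLessEqual(discounted, price, f"input price={price}")
            self.assertGreaterEqual(discounted, 0.0, f"input price={price}")

if __name__ == "__main__":
    unittest.main()
```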

A team’s or organization’s testing capabilities are critical factors in whether testing will be effective and efficient. Capabilities encompass a broad range of factors, from people to computer environments. Being good at one might compensate a bit for weakness in another, but in the long run an organization needs strength in all of these categories to test software well.