Listen Now
Subscribe on iTunes
Check out the podcast on Google Play Music

The Software Process and Measurement Cast 436 features our essay titled Change Fatigue, Tunnel Vision, and Watts Humphrey, in which we ask whether the state and culture of the organization or team have a large impact on whether a Big Bang approach or an incremental approach to change makes sense.

Our second column is from Jeremy Berriault. Jeremy discusses user acceptance testing and Agile. There are lots of different ways to accomplish user acceptance testing in an Agile environment; the only wrong way is not to do UAT at all. Jeremy blogs at https://jberria.wordpress.com/

Jon M Quigley brings his column, The Alpha and Omega of Product Development, to the Cast. This week Jon puts all the pieces together and discusses systems thinking.  One of the places you can find Jon is at Value Transformation LLC.

Re-Read Saturday News

This week we wrap up our re-read of Carol Dweck’s Mindset: The New Psychology of Success (buy your copy and read along). In the wrap-up, we discuss overall impressions of the book and suggest a set of exercises to reinforce your growth mindset.

The next book in the series will be Holacracy (Buy a copy today) by Brian J. Robertson. After my recent interview with Jeff Dalton on Software Process and Measurement Cast 433, I realized that I had only read extracts from Holacracy, therefore we will read the whole book together.

Scaling up, up, up!

Agile User Acceptance Testing (AUAT) at the team level focuses on proving that the functionality developed to solve a specific user story meets the user’s needs. Typically, stories are part of a larger “whole,” and to truly prove that a business problem has been solved, acceptance testing needs to be performed as stories are assembled into features and features into applications and systems.

Individual teams accept user stories into sprints (if they are using time boxes, as in Scrum). Stories should follow the guidelines found in the INVEST mnemonic coined by Bill Wake to generate a kernel of functionality that can be delivered. Because user stories are very granular, they often do not satisfy the overall business needs of the stakeholders; product owners and other stakeholders generally want features. During backlog grooming, features are broken down from epics into stories, which are then developed and assembled to satisfy the business need. A typical feature requires multiple stories (a one-to-many relationship). Two basic scenarios highlight the need to scale from story-level AUAT to feature- and system-level acceptance testing.

Scenario One: Each Story Can Stand Alone

The simplest scenario is the situation in which a feature is just the sum of the individual stories. This means that the independent stories can simply be assembled and no further acceptance testing is required. In this scenario, meeting the story-level acceptance criteria would satisfy both the feature-level and the system-level acceptance criteria. At best, this scenario is rare.

Scenario Two: Features Represent More Than The Sum of Parts

Features often represent more than the sum of their individual stories; even relatively simple scenarios can be more than the sum of their parts. For example, consider a feature for maintaining a customer in an application. Stories would include adding a customer, modifying a customer, deleting a customer, and inquiring on a customer. The acceptance criteria for the feature would more than likely require that the functionality in each story work smoothly together or meet a performance standard, all of which requires running an acceptance test at the feature level. Non-functional requirements are often reflected in overarching acceptance criteria captured at the feature or system level. These overarching criteria require performing AUAT at the feature and system levels.
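
The customer-maintenance example can illustrate why a feature-level acceptance test is needed even when every story passes on its own. Below is a minimal Python sketch; the `CustomerStore` class and its methods are invented stand-ins for the four stories, not taken from any real system:

```python
# Hypothetical application under test; the class name and methods are
# illustrative stand-ins for the add/modify/delete/inquire stories.
class CustomerStore:
    def __init__(self):
        self._customers = {}

    def add(self, cust_id, name):
        if cust_id in self._customers:
            raise ValueError("duplicate customer")
        self._customers[cust_id] = name

    def modify(self, cust_id, name):
        self._customers[cust_id] = name

    def delete(self, cust_id):
        self._customers.pop(cust_id, None)

    def inquire(self, cust_id):
        return self._customers.get(cust_id)


def test_customer_maintenance_feature():
    """Feature-level acceptance test: the four stories must work together,
    not just pass their individual story-level criteria."""
    store = CustomerStore()
    store.add(1, "Acme")                     # story: add a customer
    store.modify(1, "Acme Corp")             # story: modify a customer
    assert store.inquire(1) == "Acme Corp"   # story: inquire on a customer
    store.delete(1)                          # story: delete a customer
    assert store.inquire(1) is None
```

Each story’s acceptance criteria could pass in isolation while this end-to-end sequence still fails; that gap is exactly what feature-level AUAT closes.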

The discussion of executing a feature- or system-level acceptance test often generates hot debate. The debate is less about the need to get acceptance and generate feedback at the feature or system level than about when this type of test should be done. Deciding on “when” often reflects whether the organization and teams have adopted a few critical Agile techniques:

  1. Integrated code base – All teams should be building and committing to a single code base.
  2. Continuous builds (or at least daily) – The single code base should be re-built as code is committed (or at least daily) and validated.
  3. Team synchronization – All teams working together toward a common goal (SAFe calls this an Agile release train) should begin and end their sprints at the same time.

A solution I have used for teams that meet these criteria is to coordinate the feature acceptance test through the Scrum of Scrums as the second-to-last official activity of each synchronized sprint (prior to the retrospectives). The feature AUAT requires team and stakeholder participation so that everyone can agree whether the criteria are met. All of these activities assume that acceptance criteria were developed for each feature as it was added to the backlog and that overall system acceptance criteria were crafted in the team charter at the beginning of the overall effort. This ensures that delivery of functionality can move forward to release (if planned) without delays.

Where organizations have not addressed the three criteria, the response is often to implement a “hardening” sprint (also known as development plus one, test after, or simply a testing sprint) so that the system can be assembled and tested as a whole. Problems found after stories are accepted generally require reopening stories and re-planning. Also, if work has gone forward and is being built on potentially bad code, significant rework can be required. My strong advice is to spend the time and money needed to implement the three criteria, thereby removing the need for hardening sprints.

Scaling AUAT to features that require more than a single story, team, or sprint to complete is not as simple as looking at each story’s acceptance criteria. Features and the overall system will have their own acceptance criteria. Scaling is facilitated by addressing the technical aspects of Agile and synchronizing activities; however, these are only prerequisites to building layers of AUAT into the product development cycle.

Note – We have left a number of hanging issues, such as who should be involved in AUAT and whether a truly independent story requires higher levels of AUAT. We will address these in the future. Are there other aspects of AUAT that you believe we should address on this blog?

In goes the money and out comes the soda? It is a test!

Acceptance testing is a necessity when developing any product. My brother, the homebuilder, includes acceptance testing throughout the building process. His process includes planned and unplanned walkthroughs and backlog reviews with his clients as the house is built. He has even developed checklists for clients who have never had a custom home built. The process culminates with a final walkthrough to ensure the homeowner is happy. User acceptance testing in Agile development has many similarities, including participation by users, building UAT into how teams and teams-of-teams work, and testing user acceptance throughout the product development life cycle.

Acceptance testing is a type of black-box testing. The tester knows the inputs and has an expected result in mind, but the window into how the input is transformed is opaque. An example of a black-box test for a soda machine would be putting money into the machine, pressing the selection button, and getting the correct frosty beverage. The tester does not need to be aware of all the steps between hitting the selector and receiving the drink. Story-level AUAT can be incorporated into the day-to-day activity of an Agile team. Incorporating AUAT activities includes:

  1. Adding the requirement for the development of acceptance tests into the definition of ready to develop. (This will be bookended by the definition of done)
  2. Ensuring that the product owner or a well-regarded subject matter expert for the business participates in defining the acceptance criteria for stories and features.
  3. Reviewing acceptance criteria as part of the story grooming process.
  4. Using Acceptance Test Driven Development (ATDD) or other Test First Development methods. ATDD builds collaboration between the developers, testers and the business into the process by writing acceptance tests before developers begin coding.
  5. Incorporating the satisfaction of the acceptance criteria into the definition of done.
  6. Leveraging the classic Agile demo, led by the product owner for stakeholders at the end of each sprint. Completed (done) stories are demonstrated, and stakeholders interact with them to make sure their needs are being addressed and to solicit feedback.
  7. Performing a final AUAT step using a soft roll-out or first-use technique to collect final user feedback in a true production environment. One of the most common problems with all tests is that they are executed in an environment that only closely mirrors production. The word “closely” is generally the issue; until the code is run in a true production environment, exactly what will happen is unknown. First-use feedback borders on one of the more problematic approaches, that of throwing code over the wall and testing in production. It should never be the first time acceptance, integration, or performance is tested, but rather a mechanism to broaden the pool of feedback available to the team.
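
The soda-machine analogy from earlier can be sketched as a black-box acceptance check in Python: the test knows only the inputs and the expected observable output, not the machine’s internals. The `vend` function, its price, and its return shape are assumptions made for this example:

```python
def vend(coins_inserted, selection, price=1.50):
    """Dispense the selected drink if enough money was inserted;
    otherwise refund everything. The internals are irrelevant to the
    acceptance test -- only the input/output contract matters."""
    if coins_inserted < price:
        return {"drink": None, "change": coins_inserted}   # full refund
    return {"drink": selection, "change": round(coins_inserted - price, 2)}


# Black-box acceptance checks: known inputs, expected observable outputs.
assert vend(2.00, "cola") == {"drink": "cola", "change": 0.50}
assert vend(1.00, "cola") == {"drink": None, "change": 1.00}
```

Nothing in the checks depends on how `vend` works internally, so the implementation can change freely as long as the contract holds.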

In a scaled Agile project, acceptance testing at the story level is a step in a larger process of planning and action. This process typically starts by developing acceptance criteria for features and epics, which are then groomed and decomposed into stories. Once the stories are developed and combined, a final acceptance test at the system or application level is needed to ensure that what has been developed works as a whole package and meets the users’ needs.

Are there other techniques that you use to implement AUAT at the team level?

In the next blog entry we will address ideas for scaling AUAT.

Balloon glows require more expertise than a single team!

Over the years I have heard many reasons for performing some form of user acceptance testing. Some of those reasons are somewhat humorous, such as “UAT is on the checklist, therefore we have to do it,” while some are profound, such as reducing the risk of production failures and lack of acceptance. Regardless of the reason, acceptance testing does not happen by magic; someone has to plan and execute it. Even in the most automated environment, acceptance testing requires a personal touch, and in Agile, acceptance testing is a group affair.

The Agile literature and pundits talk a great deal about the need for Agile teams to be cross-functional. A cross-functional Agile team should include all of the relevant functional and technical expertise needed to deliver the stories it has committed to delivering. Occasionally this idea is taken too far, and teams believe they can’t or don’t need to reach beyond their boundaries for knowledge or expertise. This perception is rarely true. Agile teams often need to draw on knowledge, experience, and expertise that exist outside the boundary of the team. While the scope of the effort and the techniques used in Agile user acceptance testing (AUAT) can affect the number of people and teams involved, there is typically a fairly stable set of four capabilities that actively participate in acceptance testing.

  1. The Agile Team – The team (or teams) is always actively engaged in AUAT. AUAT is not a single event, but rather is integrated directly into every step of the product life cycle. Acceptance test cases are a significant part of requirements. Techniques such as Acceptance Test Driven Development require whole team involvement.
  2. Product Owner/Product Management – The product owner is the focal point for AUAT activities. The product owner acts as a conduit for business knowledge and needs into the team. As efforts scale up to require more than a single team or for external software products, product management teams are often needed to convey the interrelationships between features, stories and teams.
  3. Subject Matter Experts/Real Users – Subject matter experts (SMEs) know the ins and outs of the product, market, or other area of knowledge. Involving SMEs to frame acceptance tests or to review solutions as they evolve provides the team with a ready pool of knowledge that, by definition, it does not have. Product owners or product management identify, organize, and bring subject matter expertise to the team.
  4. Test Professionals/Test Coaches – AUAT is real testing; therefore, everyone involved in writing and automating acceptance test cases, creating test environments, and executing acceptance testing needs to understand how to test. Test coaches (and possibly test architects) are very useful for helping everyone involved in AUAT, regardless of technique, to test effectively.

Over the years, who participated in user acceptance testing was as varied as the reasons people gave for doing it. Sometimes development teams would “perform” acceptance testing as a proxy for the users. Other times software would be thrown over the wall, and SMEs and other business users would do something that approximated testing. AUAT takes a different approach and builds testing directly into the product development flow. Integrating UAT into the whole flow of development requires that even the most cross-functional team access a whole cavalcade of roles inside and outside the team to ensure that AUAT reduces the chance of doing the wrong thing and, at the same time, reduces the chance of doing the right thing wrong.

Agile re-defines acceptance testing as a “formal description of the behavior of a software product[1].”

User acceptance testing (UAT) is a process that confirms that the output of a project meets the business needs and requirements. Classically, UAT would happen at the end of a project or release. Agile spreads UAT across the entire product development life cycle, re-defining acceptance testing as a “formal description of the behavior of a software product[1].” By redefining acceptance testing as a description of what the software does (or is supposed to do) that can be proved (the testing part), Agile makes acceptance more important than ever by making it integral to the entire Agile life cycle.

Agile begins the acceptance testing process as requirements are being discovered. Acceptance tests are developed as part of the requirements life cycle in an Agile project because acceptance test cases are a form of requirements in their own right. The acceptance tests are part of the overall requirements, adding depth and granularity to the brevity of the classic user story format (persona, goal, benefit). Just like user stories, there is often a hierarchy of granularity from an epic to a user story. The acceptance tests that describe a feature or epic need to be decomposed in lock step with the decomposition of features and epics into user stories. Institutionalizing the process of generating acceptance tests at the feature and epic level, then breaking the stories and acceptance test cases down as part of grooming, is a mechanism to synchronize scaled projects (we will dive into greater detail on this topic in a later entry).

As stories are accepted into sprints and development begins, acceptance test cases become a form of executable specification. Because the acceptance test describes what the user wants the system to do, the functionality of the code can be compared to the expected outcome of the acceptance test case to guide the developer.
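
An executable specification can be as simple as writing the acceptance cases down first and coding until they pass. In this sketch, the volume-discount story, the `order_total` function, and the discount rule are all invented for illustration:

```python
# Hypothetical story: "As a shopper, I want a volume discount on large orders."
# The acceptance table is agreed with the product owner before coding begins.
ACCEPTANCE_CASES = [
    # (quantity, unit_price, expected_total)
    (1, 10.0, 10.0),    # no discount below 10 units
    (10, 10.0, 90.0),   # 10% off at 10 or more units
]

def order_total(quantity, unit_price):
    """Implementation written to satisfy the executable specification."""
    total = quantity * unit_price
    if quantity >= 10:
        total *= 0.9            # assumed discount rule
    return round(total, 2)

# The specification executes: development is "done" when every case passes.
for qty, price, expected in ACCEPTANCE_CASES:
    assert order_total(qty, price) == expected
```

The table doubles as requirements documentation and as the test that proves the requirement is met, which is the point of treating acceptance tests as specifications.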

When development of a user story is done, the acceptance test cases provide a final feedback step to prove completion. The output of acceptance testing is a reflection of functional testing that can be replicated as part of the demo process. Typically, acceptance test cases are written by users (often product owners or subject matter experts) and reflect what the system is supposed to do for the business. Ultimately, acceptance testing provides proof to the user community that the team (or teams) is delivering what is expected.

As one sprint follows another, the acceptance test cases from earlier sprints are often recast as functional regression test cases in later sprints.
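
One lightweight way to recast accepted tests as regression tests is to keep each sprint’s accepted checks in a suite that runs against every later build. This is a sketch; the suite structure and the sample checks are invented for illustration:

```python
# Accepted acceptance checks accumulate sprint by sprint; each one joins
# the regression suite run against every subsequent build.
REGRESSION_SUITE = []

def accept(name, check):
    """Record a passed acceptance check so it is re-run as regression."""
    assert check(), f"acceptance check failed: {name}"
    REGRESSION_SUITE.append((name, check))

# Sprint 1: a username-length rule is accepted, then retained.
accept("username is at least 5 characters", lambda: len("user1") >= 5)
# Sprint 2: new work must not break what sprint 1 delivered.
accept("username is alphanumeric", lambda: "user1".isalnum())

def run_regression():
    """Re-run every previously accepted check; True if nothing regressed."""
    return all(check() for _, check in REGRESSION_SUITE)

assert run_regression()
```

In practice the checks would call the application under test rather than inline lambdas, but the pattern is the same: acceptance tests are kept, not discarded, once a story is done.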

Agile user acceptance testing is a direct reflection of functional specifications that guide coding, provide a basis for demos, and, finally, ensure that later changes don’t break functions that were developed and accepted in earlier sprints. UAT in an Agile project is more rigorous and timely than the classic end-of-project UAT found in waterfall projects.

[1] http://guide.agilealliance.org/guide/acceptance.html, September 2015