[Program-level release plan diagram]

Large Agile programs, with numerous teams that each have their own cadence and need coordinated releases, add a level of complexity to planning and communicating releases. Instead of focusing on communicating how stories roll up to a release, program-level release plans typically take a higher-level view, communicating the relationships between themes, epics, teams and releases. Shifting to a big-picture view keeps the release plan concise enough to communicate both direction and progress.

A Moderately Simple Example

The example above is built on a release plan I helped construct several years ago; only part of a single year of the program is shown. The program comprised three sprint teams and a program manager, and each of the teams had a different sprint cadence. The program planned to release code to its customer base (internal and external customers) twice during the first year and then every five months afterwards, a cadence driven by a major outside client's needs. Each team supported a separate area within the organization and was allowed to release code outside of the program-level releases if needed. The product owners (yes, owners . . . not my favorite idea) prioritized the goals and themes based on business need and perceived value. Epics that supported those themes were then prioritized based on value and dependencies. Epics that were perceived to be larger (this group had sized the underlying stories and summed the sizes) were drawn as thicker bars or stretched across more sprints (meaning the epic was planned to be completed over a number of sprints).

Note that the plan shown is not complete. I have used this format as a template several times. The plan helps everyone understand how the program is progressing; in at least one version of the program release plan we put up a “You are here” marker to show the breadth of progress that had occurred and was yet to happen. Keeping the plan as simple as possible makes communicating progress easier and also helps teams understand when functions and features are planned. The program-level release plan does not negate the need for a more tactical, team-level release plan.

For a moderately complex program release plan I have found it necessary to add two other tables. The first defines the epics related to each theme. Themes are used to describe larger sets of related functions within an application and provide a focus for the team. Epics are large user stories that are broken down into more granular user stories which can be estimated and completed in a sprint (remember INVEST). Here’s an example using the beer logo glass collection application we have described before: a development theme might be focused on maintaining the glass collection (maintain equates to adding, changing and deleting glasses), and a short description of an epic in that theme might be “add logo glasses”. Examples of the type of tables are shown below:

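As an illustration (the original tables are not reproduced here), the first table for the beer logo glass application might look something like this:

Theme: Maintain the glass collection (adding, changing and deleting glasses)

Epic: Add logo glasses – capture a new glass (brewery name, glass logo copy, glass type, cost)

Epic: Change logo glasses – update the details of a glass already in the collection

Epic: Delete logo glasses – remove a glass from the collection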

When using the program release plan to brief stakeholders, it helps to know in advance where each stakeholder’s pet function will be addressed.

The goal of program-level release planning is to help the teams, product owner(s) and the organization conceptualize the flow of how functionality will be delivered, and then to communicate the plan. As with any other Agile tool, a program release plan will evolve as everyone involved gathers project-level experience and knowledge.

Logo beer glasses application: Integration testing

Integration testing can and should be incorporated easily into a test-driven development (TDD) framework or any TDD variant. Incorporating integration tests into TDD, BDD or ATDD takes the bigger picture into account.

Incorporating integration testing into a TDD framework requires knowing how functions and components fit together and then adding the required test cases as acceptance criteria. In An example of TDD with ATDD and BDD attributes we began defining an application for maintaining my beer glass collection and developed a set of Cucumber-based, behavior-driven development test cases. These tests are critical for defining done for the development of the logo glass entry screen. The entry screen is just a single story in a larger application.

A broader view of the application:

[Diagram: the logo glass collection application]

An example of the TDD test that we wrote for the logo glass entry screen was:

Scenario: Brewery name is a required field.

Given I am on the logo glass entry site

When I add a glass leaving the brewery name blank

Then I should see the error message “The brewery name can’t be blank”

Based on our discussion in Integration Testing and Unit Testing, Different?, this test is a unit test. The test case doesn’t cross a boundary, it answers a single question, and it is designed to provide information to a developer.
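To make that concrete, here is a minimal sketch of how the steps behind this scenario might be automated. It assumes behave, a Cucumber-style BDD framework for Python (the original posts don’t name an implementation language), straight quotes in the feature file, and a hypothetical validate_glass() helper; every name is illustrative. Notice that nothing here crosses a boundary – the test exercises a single validation function in memory.

# steps/logo_glass_steps.py – a sketch, not the application's actual automation
from behave import given, when, then

def validate_glass(glass):
    """Hypothetical validation helper; returns a list of error messages."""
    errors = []
    if not glass.get("brewery_name"):
        errors.append("The brewery name can't be blank")
    return errors

@given("I am on the logo glass entry site")
def step_open_entry_form(context):
    context.glass = {}  # an empty entry form stands in for the web page

@when("I add a glass leaving the brewery name blank")
def step_add_glass_blank_brewery(context):
    context.glass["brewery_name"] = ""
    context.errors = validate_glass(context.glass)

@then('I should see the error message "The brewery name can\'t be blank"')
def step_check_error_message(context):
    # A single question answered in a single place: did the validation fire?
    assert "The brewery name can't be blank" in context.errors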

Using the same Cucumber format, two examples of integration tests for the logo glass add function would be:

The first of these examples describes how the integration between the logo glass add function and the brewery database should act.

Scenario: The brewery entered must exist in the brewery database.

Given I am on the logo glass entry site

When I add a glass whose brewery name is not in the brewery database

Then the new brewery screen should be opened and the message “Please add the brewery before proceeding.” should be displayed

The second case describes a test for the integration between the logo glass entry screen and the logo glass database.

Scenario: When the glass is added and all error conditions are resolved, the glass should be inserted into the database (the glass logical file).

Given I am on the logo glass entry site

When I have completed the entry of glass information

Then the record for the glass should be inserted into the database and I should see the message “The glass has been added.”

In both of these cases a boundary is crossed, more than one unit is being evaluated and, because the test is broader, more than one role will find the results useful. These test cases fit the classic definition of integration tests. Leveraging the Cucumber framework, integration tests can be written as part of the acceptance criteria (acceptance test-driven development). As more functions are developed or changed as part of the project, broader integration tests can be added as acceptance criteria, which provides a basis for continually ensuring that overall integration occurs.
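For contrast with the unit-level sketch earlier, the step definitions behind the second integration scenario might look like the sketch below. It again assumes behave, with an in-memory SQLite database standing in for the glass database; the table layout and all names are illustrative, and the Given step is shown independently of the earlier sketch.

# steps/glass_database_steps.py – illustrative only
import sqlite3

from behave import given, when, then

@given("I am on the logo glass entry site")
def step_open_entry_form(context):
    # An in-memory database stands in for the real glass database.
    context.db = sqlite3.connect(":memory:")
    context.db.execute(
        "CREATE TABLE glasses (brewery TEXT, logo_copy TEXT, glass_type TEXT, cost REAL)")
    context.glass = ("Example Brewing", "Winter Warmer", "pint", 4.50)

@when("I have completed the entry of glass information")
def step_complete_entry(context):
    # The boundary crossing: the entry function hands the record to the database.
    context.db.execute("INSERT INTO glasses VALUES (?, ?, ?, ?)", context.glass)
    context.message = "The glass has been added."

@then('the record for the glass should be inserted into the database and I should see the message "The glass has been added."')
def step_verify_record_and_message(context):
    row = context.db.execute(
        "SELECT * FROM glasses WHERE brewery = ?", (context.glass[0],)).fetchone()
    assert row is not None  # the data really landed on the other side of the boundary
    assert context.message == "The glass has been added."

Because the assertion reaches across the module boundary into the database, a failure here is meaningful to more roles than just the developer who wrote the entry screen.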

Incorporating integration test cases into a TDD, BDD or ATDD framework ensures that the development team isn’t just focused on delivering individual stories, but rather on delivering a greater whole that works well and plays well with itself and any other relevant hardware and applications.

USA border crossing

Are unit and integration testing the same thing masquerading under different names? No. Unit testing is a process in which a developer tests a very specific, self-contained function all by itself, whereas we have defined integration testing as testing in which components (software and hardware) are combined to confirm that they interact according to expectations and requirements. Unit testing and integration testing are fundamentally different forms of testing. There are three major differences.

Boundaries: All levels of integration testing cross boundaries, whether between classes, functions or components. The goal is to determine whether the parts of the application fit together and perform as expected. Unit tests, focused on the behavior of a single function, by definition can’t cross boundaries and do not test dependencies outside of the function being tested.

Scope: A unit test is focused on answering a single question: based on the test input, does the specific function perform as expected? An example of a simple unit test might be, “if I put a number in a field that only accepts letters, do I get an error?” Because integration tests reflect the interaction between functions or components, they must answer several questions. For example, in Test Driven Development: An example of TDD with ATDD and BDD attributes, we described a simple beer glass collection tracking application in which the user enters the glass being collected into a screen and, after validation, the database is updated. An integration test would need to be written and performed that tests sending the information from the data entry function to the database. The test would cover several specific points: was the information sent from one module to the other, was it received, and was it inserted into the database correctly? (A sketch of such a test appears at the end of this section.)

Role Involvement: Unit testing is part of the coding process. Occasionally I see testers doing a developer’s unit testing – this is a VERY POOR practice. At its very simplest, the coding process can be described as: think, write code, see if it works; if it doesn’t work, go back to thinking. Then repeat, and if it works, commit the code and go on to the next requirement. The “see if it works” step is unit testing. Integration testing in its most granular form reflects a transition between the coding and validation processes. The transition means that the results need to be more broadly seen and interpreted to ensure that all of the parts being developed or changed in a project fit together. Testers, business analysts, developers and sometimes business stakeholders and product owners can be part of executing, interpreting and consuming integration tests.

Unit testing and integration testing are at times easily confused; this is most true when considering integration tests focused on the connections between functions within a single component. However, if we consider whether boundaries are involved and the number of conditions or questions the test resolves (which suggests the number of roles that need to understand the results), the distinction becomes fairly stark.
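To make the scope difference concrete, a multi-question integration test for the beer glass application, written in the same Cucumber format, might look like the sketch below (the wording is illustrative, not taken from the application’s actual test suite). Note how the single scenario answers all three questions – sent, received and inserted:

Scenario: A completed glass entry is stored in the glass database

Given I am on the logo glass entry site

When I have completed the entry of glass information

Then the glass record should be sent to the glass database

And the glass database should acknowledge receipt of the record

And the glass record should be retrievable from the glass database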

Logo beer glasses come in all shapes and sizes!

An example of TDD with ATDD and BDD attributes
(or TDD/ATDD/BDD run through a blender just a bit)

I.         The team establishes a backlog of prioritized user stories based on the functional and architectural requirements. Remember that the backlog is dynamic and will evolve over the life of the project.  Story Maps are one method of visualizing the backlog.

II.         The Three Amigos (Product Owner, Lead Developer and Tester or Business Analyst) review the stories and their specifications to be taken into the next sprint. They make sure that each story has acceptance test cases, adding any missing scenarios or requirements. Here’s an example:

Feature:  Add logo glass to my collection

Story: As a beer logo glass collector, I want to be able to log a logo glass into my collection so I do not re-collect the same glass!

Notes: It is important to capture the information needed to identify the glass. A screen is needed that captures brewery name, glass logo copy, glass type and cost.

Logo Glass Entry Screen 

Scenario: Brewery name is a required field.

Given I am on the logo glass entry site

When I add a glass leaving the brewery name blank

Then I should see the error message “The brewery name can’t be blank”

Scenario: Glass logo copy is a required field

Given I am on the logo glass entry site

When I add a glass leaving the glass logo copy blank

Then I should see the error message “The glass logo copy can’t be blank”

Scenario: Glass type is a required field

Given I am on the logo glass entry site

When I add a glass leaving the glass type blank

Then I should see the error message “The glass type can’t be blank”

Scenario: Glass cost is not a required field

Given I am on the logo glass entry site

When I add a glass leaving the glass cost blank

Then I should not see an error message

Scenario: Glass added message should be displayed when glass data entry is complete

Given I am on the logo glass entry site

When I have completed the entry of glass information

Then I should see the message “The glass has been added”

III.         For the stories accepted into the sprint, the tester would ensure the acceptance tests fail (the scenarios above are acceptance tests) and then automate the acceptance test cases (assuming testing is automated, which is a really important concept). Concurrently the developer would implement the feature in the code. The developer and the tester collaborate to test the implemented feature. This is an iterative process that continues until the feature passes the tests.

IV.         Once the feature has passed the tests, any refactoring of the design is done (remember the difference between test-first and test-driven development). The tests are retained and incorporated into the regression test suite.
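Assuming a Cucumber-style tool such as behave is being used, retaining the tests is largely automatic: the feature files simply stay in the repository, and the whole suite can be re-run (for example with the command behave features/ – the directory name is illustrative) before each commit or release, so every previously delivered story is re-verified as new stories are added.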

V.         Start over on the next story!