
As we have seen, product owners act as a conduit between the business and/or customers and the team. The product owner’s role encompasses not only the definition of what gets done and when, but also the level of quality of what gets delivered. One facet of the role that affects quality comes when the product owner participates in writing the acceptance criteria for user stories. Acceptance criteria are part of well-formed user stories; they are crafted early in the life cycle when stories are generated and are refined as the stories are groomed. The role provides another check on quality when the product owner uses his or her authority to accept completed stories. I don’t want to suggest that the product owner is only active at the start and end of a sprint or iteration; the product owner interacts with the team on a continuous basis to guide the work and the culture. Adopting acceptance test-driven development (ATDD) is an excellent method of instantiating the product owner’s role in both shaping the vision for the team and influencing the quality of the work a team delivers.

There really is a difference . . .

Classic software development techniques follow a pattern of analysis, design, coding and testing. Regardless of whether the process is driven by a big upfront design or by incremental design decisions, code is written and only then are tests executed for the first time. Test plans, scenarios and cases may well be developed in parallel with the code (generally with a minimum of interaction), but execution only occurs once the code is complete. In test-last models, the roles of developer and tester are typically filled by professionals with very different types of training, often reporting into separate management structures. The differences in training and management structure create structural communication problems. At best the developers and testers are viewed as separate but equal, although rarely do you see testers and developers sharing tables at lunch.

Testing last supports a sort of two-tiered development apartheid in which testers and developers are kept apart. I have heard it argued that anyone who learned to write code in school has been taught how to test the work they have created, at least in a rudimentary manner, and should therefore be able to write the test cases needed to implement test-driven development (TDD). When the code is thrown over the wall to the testers, the test cases may or may not be reused to build regression suites. Testers have to interpret both the requirements and the implementation approach based on point-in-time reviews and documentation.

Test-first development techniques take a different approach, and therefore require a different culture. The common flow of a test-first process is (a minimal code sketch follows the list):

  • The developer accepts a unit of work and immediately writes a set of tests that will prove that the unit of work actually functions correctly.
  • Run the tests.  The tests should fail.
  • Write the code needed to solve the problem. Write just enough code to really solve the problem.
  • Run the test suite again. The tests should pass.
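
To make the flow concrete, here is a minimal single-file sketch in Python using pytest. The apply_discount function and its business rule are invented for illustration; the point is the order of the steps, not the specifics.

# Step 1: write the test first. Before apply_discount existed, running
# pytest failed with a NameError; that is the expected "red" state.
def test_orders_over_100_get_ten_percent_off():
    assert apply_discount(200.0) == 180.0

def test_orders_at_or_under_100_pay_full_price():
    assert apply_discount(100.0) == 100.0

# Step 2: write just enough code to make the tests pass, then re-run pytest.
def apply_discount(amount):
    # Invented business rule: orders over $100 receive a 10% discount.
    return amount * 0.9 if amount > 100.0 else amount

In practice the tests and the production code live in separate files; they are shown together here only to keep the sketch compact.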

Most adherents of Agile development methods have heard of TDD, which is the most common instantiation of the broader school of test-first development (TFD). TDD extends TFD by adding a final refactoring step to the flow in which developers write the test cases needed to prove that the unit of work solves the business problem and is technically correct. Other variants, such as behavior-driven and acceptance test-driven development, are common. In all cases the process follows the same cycle of writing tests, running the tests, writing the code, re-running the tests and then refactoring; the only difference is the focus of the tests.

TFD techniques intermingle the coding and testing disciplines, creating a codependent chimera in which neither technique can exist without the other. For TFD to work most effectively, coders and testers must understand each other and be able to work together. Pairing testers and developers to write the test cases, whether for component/unit testing (TDD), behavior-driven tests (BDD) or acceptance testing (ATDD), creates an environment of collaboration and communication. The walls between the two groups weaken and, over time, are torn down.

Test-first and test-last development techniques differ both in how work is performed and in how teams are organized. TFD takes a collaborative approach to delivering value, while test-last approaches are far more adversarial in nature.

AUAT isn’t all sunshine and flowers.

Agile user acceptance testing (AUAT) confirms that the output of a project meets the business’ needs and requirements. The concept of acceptance testing early and often is almost inarguable, whether you are using Agile or any other method. AUAT generates early customer feedback, which increases customer satisfaction and reduces the potential for delivering defects. The problem is that implementing an effective and efficient AUAT isn’t always easy. Several complicating factors include:

  1. Having the right technical environment. Pithythoughts strongly argued in his comments that acceptance testing without exposure to the true production environment is insufficient to really determine whether the work is “done” and business value has been delivered. No seasoned developer or IT pro would argue the point; however, a true production mirror rarely exists except for new products or relatively self-contained applications. For example, consider the expense of maintaining an environment that mirrors production for SAP on top of multiple test and development environments. Environmental compromises are made, and each level of abstraction away from the production mirror increases the risk that what seems ‘acceptable’ might not be in the real world.
  2. Acceptance test cases can be fragile. One of the central tenets of Agile projects is to embrace change. In many cases that change will need to be reflected in the acceptance test cases and criteria. Given that acceptance tests are known for suffering from false failures (reporting a problem when the code is actually correct), they need to be carefully evaluated as the project evolves, or the feedback the team gets from Agile acceptance testing can be counterproductive. Acceptance test cases need to be refactored just like code (see the sketch after this list).
  3. Acceptance Test Driven Development (ATDD) can encourage big upfront design rather than the emergent design favored in Agile projects. Developing acceptance criteria can lead the product owner (or other stakeholders involved in developing the test cases) to focus on “how” business value will be delivered technically rather than on the functionality, the “what” that will be delivered. Fixing the design too early in any project increases the likelihood of rework, and is often the reason change is rejected later in the project. An emergent design allows teams to design only what is needed and then to learn from implementing and demonstrating that design. Acceptance testing is part of the feedback loop needed to create an evolving design.
  4. Some product owners are unable or unwilling to write acceptance criteria and tests. Product owners often have day jobs in addition to their role as the voice of the business, which makes it harder to participate in an Agile team. In rare cases, product owners view writing Agile user acceptance test cases as beneath them. I once heard a “product owner” state that writing test cases was an IT job, or maybe a clerk’s job, certainly not something he should do. Regardless of whether you are using Agile or waterfall, user acceptance testing is a critical step towards implementation. I find that in these scenarios, generating the proper level of product owner involvement takes a significant amount of time and effort. In the short run, partner a business analyst with the product owner to shepherd the creation of acceptance criteria, and then pursue a longer-term coaching opportunity to change the product owner’s attitude.
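
As a hypothetical illustration of point 2, here is a small Python sketch of refactoring a fragile acceptance assertion. The Page class and its fields are invented stand-ins for whatever UI driver a team actually uses.

import pytest

class Page:
    # Invented stand-in for a UI driver; a real suite would drive a browser or API.
    def __init__(self, flash_message, flash_message_id):
        self.flash_message = flash_message
        self.flash_message_id = flash_message_id

@pytest.fixture
def page():
    return Page("The glass has been added.", "GLASS_ADDED")

def test_glass_added_fragile(page):
    # Fragile: fails whenever the on-screen copy is reworded, even though
    # the behavior is still correct.
    assert page.flash_message == "The glass has been added."

def test_glass_added_refactored(page):
    # Refactored: asserts on a stable message identifier, so wording changes
    # no longer produce false failures.
    assert page.flash_message_id == "GLASS_ADDED"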

Agile user acceptance testing is a method of pursuing feedback from the business early and often by focusing the product owner (and by extension the rest of the business) on considering, and then documenting, the proof they will need to accept what the project delivers. AUAT is neither easy nor free. There are numerous issues that need to be dealt with to make sure Agile acceptance testing is done correctly. Without an environment that is as close to production as possible, it will be difficult to interpret the results. Capturing acceptance criteria once does not mean you are done; acceptance criteria need to be continually reviewed and refactored, just like code. If the process you are using to generate acceptance criteria means that you fix the design too early, step back and take the process more incrementally. Finally, remember the word user in Agile user acceptance testing; otherwise you are just doing developer-based functionality testing. Agile user acceptance testing is not all sunshine and flowers, but the outcome is generally worth the hassle.

Logo beer glasses application: Integration testing

Integration testing can and should be easily incorporated into a test-driven development (TDD) framework or any TDD variant. Incorporating integration tests into a TDD, BDD or ATDD framework takes the bigger picture into account.

Incorporating integration testing into a TDD framework requires knowing how functions and components fit together and then adding the required test cases as acceptance criteria. In An example of TDD with ATDD and BDD attributes we began defining an application for maintaining my beer glass collection and developed a set of Cucumber-based behavior-driven development test cases. These tests are critical for defining done for the development of the logo glass entry screen. The entry screen is just a single story in a larger application.


An example of the TDD test that we wrote for the logo glass entry screen was:

Scenario: Brewery name is a required field.

Given I am on the logo glass entry site

When I add a glass leaving the brewery name blank

Then I should see the error message “The brewery name can’t be blank”

Based on our discussion in Integration Testing and Unit Testing, Different?, this test is a unit test. The test case doesn’t cross a boundary, it answers a single question, and it is designed to provide information to a developer.
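
For context, a scenario like this is typically glued to running code with step definitions. Here is a minimal sketch using behave, a Python analogue of Cucumber; the entry-page driver and its methods are invented for illustration.

from behave import given, when, then

@given("I am on the logo glass entry site")
def step_open_entry_site(context):
    # context.entry_page is an invented page driver supplied by test setup.
    context.page = context.entry_page.open()

@when("I add a glass leaving the brewery name blank")
def step_add_glass_without_brewery(context):
    context.page.fill(brewery_name="", logo_copy="Oktoberfest", glass_type="stein")
    context.page.submit()

@then('I should see the error message "{message}"')
def step_see_error_message(context, message):
    # The sketch assumes the feature file uses straight quotes in this step.
    assert message in context.page.error_messages()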

Using the same Cucumber format, two examples of integration tests for the logo glass add function would be:

The first example describes how the integration between the logo glass add function and the brewery database should act.

Scenario: The brewery name must exist in the brewery database.

Given I am on the logo glass entry site

When I add a glass and the brewery name entered is not in the brewery database

Then the new brewery screen should be opened and the message “Please add the brewery before proceeding.” should be displayed

The second case describes a test for the integration between the logo glass entry screen and the logo glass database.

Scenario: When the glass is added and all error conditions are resolved, the glass should be inserted into the database (the glass logical file).

Given I am on the logo glass entry site

When I have completed the entry of glass information

Then the record for the glass should be inserted into the database and I should see the message “The glass has been added.”

In both of these cases a boundary is crossed, more than one unit is being evaluated and, because the test is broader, more than one role will find the results useful. These test cases fit the classic definition of integration tests. Leveraging the Cucumber framework, integration tests can be written as part of the acceptance criteria (acceptance test-driven development). As more functions are developed or changed as part of the project, broader integration tests can be added as acceptance criteria, which provides a basis for continually ensuring overall integration occurs.
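
As a hypothetical sketch of what sits behind the second scenario, here is a Python integration test that crosses the database boundary; the add_glass function is invented application code, shown inline only to keep the example self-contained.

import sqlite3

def add_glass(db, brewery, logo_copy, glass_type):
    # Invented application code under test; in the real application this would
    # live in its own module and wrap the team's persistence layer.
    db.execute(
        "INSERT INTO glasses (brewery, logo_copy, glass_type) VALUES (?, ?, ?)",
        (brewery, logo_copy, glass_type),
    )
    db.commit()
    return "The glass has been added."

def test_completed_entry_inserts_glass_record(tmp_path):
    # A real (test) database rather than a mock: the test crosses a boundary.
    db = sqlite3.connect(str(tmp_path / "glasses.db"))
    db.execute("CREATE TABLE glasses (brewery TEXT, logo_copy TEXT, glass_type TEXT)")

    message = add_glass(db, brewery="Bell's", logo_copy="Two Hearted", glass_type="pint")

    # The record was really persisted, and the user was told so.
    assert db.execute("SELECT brewery FROM glasses").fetchone() == ("Bell's",)
    assert message == "The glass has been added."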

Incorporating integration test cases into a TDD, BDD or ATDD framework ensures that the development team isn’t just focused on delivering individual stories, but rather on delivering a greater whole that works well and plays well with itself and any other relevant hardware and applications.

How do you weigh costs and benefits?

Introducing any process improvement requires weighing the costs and benefits. Most major improvements come with carefully crafted cost/benefit analyses. Where the benefits outweigh the costs, changes are implemented. The goal is to decide where to expend precious process improvement capital so that it has the most benefit. Many of the benefits and criticisms of TDD and other test-first techniques can be quantified, but others are less tangible. In many cases the final decision rests on personal biases, beliefs and past experiences. In the end, your organization and environment may yield different results; there is no perfect ROI model.

In a perfect world, we would pilot TDD with a typical team to determine the impact. For example, will the cost of change be higher than anticipated (or lower), or will the improvement in quality be worth the effort to learn and implement? In many cases, pilot teams for important process improvements are staffed with the best and the brightest. Unfortunately, these are the people who will deliver results regardless of barriers. The results are rarely extensible because the capabilities of the best and brightest aren’t generally the same as those of a normal team. Another typical failure is to generalize from the results of a team that embraced TDD (or any other change) on its own. Expecting the results of teams that self-select to extend to the larger organization is problematic; those that self-select change are more apt to work harder to make TDD work.

In a perfect world we would be able to test our assumptions about the impact of TDD in our organization based on its impact on typical groups. Sometimes, however, we have to rely on the results of the best team or the results of true believers. Either case is better than having to rely solely on studies from outside the organization. Just remember to temper your interpretation of the data when you are weighing costs and benefits.

A 10-mile run requires effort, but that does not mean it should be avoided.

Implementing and using Test Driven Development (TDD) and other related test-first techniques can be difficult, and there are a significant number of criticisms of these methods. The criticisms fall into two camps: effort-related and “not full testing.”

Effort-related criticisms reflect the fact that all test-first techniques require an investment in time and effort. The effort to embrace TDD, ATDD or BDD begins when teams learn the required philosophies and techniques. Learning anything new requires an investment; the question to ask is whether the investment in learning these techniques will pay off in a reasonable period of time. While I do not claim to be the perfect yardstick (I have both a development and testing background), I learned enough of the BDD technique to be dangerous in less than a day. The criticism is true, but I believe the impact is overstated.

The second effort-related criticism is that early in a development project developers might be required to create stubs (components that emulate parts of a system that have not been developed yet) or testing harnesses (code that holds created components together before the whole system exists). However, stubs and harnesses can generally be reused and enhanced as the system is built. I have found that creating and keeping a decent library of stubs and harnesses generates good discussion of interfaces and reduces the number of “I can’t test that until integration” excuses. Again, true but overstated.
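
To illustrate, here is a minimal Python sketch of a stub standing in for an undeveloped component; all of the names are invented for the purpose of the example.

class BreweryDirectoryStub:
    # Emulates the not-yet-built brewery lookup service well enough to test against.
    def __init__(self, known_breweries):
        self.known_breweries = set(known_breweries)

    def exists(self, name):
        return name in self.known_breweries

def validate_brewery(directory, name):
    # Production code under test. It depends only on the directory interface,
    # so the stub can be swapped for the real service later without changes.
    if not directory.exists(name):
        return "Please add the brewery before proceeding."
    return None

def test_unknown_brewery_prompts_for_add():
    directory = BreweryDirectoryStub(["Bell's", "Founders"])
    assert validate_brewery(directory, "Mystery Brew") == "Please add the brewery before proceeding."

Because the stub honors the agreed interface, writing it forces exactly the kind of interface discussion described above.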

The third effort-related criticism is that in order to do TDD effectively you need automated testing (implied in this criticism are the effort, time and cost of test automation). I have seen both TDD and ATDD done without automation . . . my brother also had a tooth pulled without anesthetic; I recommend neither. Test automation is important to making TDD efficient. Writing test tools, learning the tools and writing test scripts does take effort. Again, the criticism is true; however, test automation makes sense (and requires someone to learn it) even if you are not doing TDD.

The final effort-related criticism is that TDD, ATDD and BDD are hard to learn. I respectfully disagree. TDD, ATDD and BDD are different concepts than many team members have been exposed to before in their careers, but just because they are different does not mean they are difficult to learn. This criticism is most likely a reflection of a fear of change. Change is hard, especially if a team is already successful. I would suggest implementing TDD once your team or organization has become comfortable with Agile and has begun to implement test automation, both of which make “learning” TDD easier.

A second class of criticism of TDD, ATDD and BDD is that these techniques are not full testing. Organizations that decide test-first techniques should replace all other types of testing are generally in for a rude awakening. Security testing is just one example of the overall testing that will still be required. TDD, ATDD and BDD are development methods that can replace some testing, but not all types of testing.

Also in the full-testing category of criticisms is the claim that TDD does not help teams learn good testing. I agree that just writing and executing tests doesn’t teach testing. What leads people to learn to test is the discussion of how to test, plus gaining experience. Agile teams are built on interaction and collaboration, which provides a platform for that growth. Neither test-first nor test-last (waiting until coding is done) by itself leads a team to good testing, so this criticism is true but not specific to TDD.

Embracing TDD, ATDD or BDD will require effort to learn and implement. If you can’t afford the time and effort, wait until you can. Embracing any of the test-first techniques will require that teams change their behavior. If you can’t spend the effort on organizational change management, wait until you can. Test automation is important for efficient TDD. If you can’t buy or write testing tools, I would still experiment with test-first, but recognize that sooner or later you will need to bite the bullet. Finally, TDD is generally not sufficient to replace all testing. The criticisms are mostly true, but they are not sufficient to overwhelm the benefits of using these techniques.

KISS: Keep it Simple Stupid!

Test Driven Development (TDD) is an approach to development in which you write a test that proves the piece of work you are working on, write the code required to pass the test and then update the design based on what you have learned. The short cycle of TDD, Acceptance Test Driven Development (ATDD) or even Behavior Driven Development (BDD) delivers a variety of benefits to the organizations that practice these techniques. Some of the benefits are obvious, while others are less apparent.

Fewer Delivered Defects: This is probably the most obvious benefit of using any of the test-first techniques. Testing finds defects, timelier testing is more apt to find defects before they get delivered, and better testing tends to find more defects. The benefit of fewer defects is customers who swear less often.

Communication: This is the second most obvious benefit. The process of developing tests creates a platform to ensure that the product owner, business analyst or stakeholders communicate the meaning of a user story before the first line of code is written. Developing the tests requires conversation, which is at the heart of Agile techniques. Better communication yields software better targeted to the organization’s needs.

Better Tests: This benefit is related to delivering fewer defects, but is far less obvious. Tests aimed at proving only the functionality being delivered or changed will be significantly more targeted and lean. Whether leveraging the XP technique of pair programming or the Three Amigos technique, TDD (or any of the test-first techniques) ensures that the tests are reviewed early and often, which helps to ensure that only what is needed is tested. Tighter testing will not only result in fewer delivered defects, but also in less costly testing due to that targeting.

Code Quality: TDD, ATDD and BDD all encourage developers to write only the code needed to pass the tests. The test-first techniques embrace the old adage “keep it simple, stupid” through the refactoring step. By writing only as much code as is needed to pass the test, the code remains as simple as possible, which yields higher quality because there are fewer chances to embed defects.
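
As a small, invented illustration of that refactoring step, the two functions below behave identically, so the tests that drove out the first version continue to pass after it is simplified.

# Before refactoring: validation accreted check by check as tests were added.
def entry_is_valid_v1(entry):
    if entry.get("brewery") is None or entry.get("brewery") == "":
        return False
    if entry.get("logo_copy") is None or entry.get("logo_copy") == "":
        return False
    if entry.get("glass_type") is None or entry.get("glass_type") == "":
        return False
    return True

# After refactoring: the same behavior with less code to read and maintain.
def entry_is_valid_v2(entry):
    return all(entry.get(field) for field in ("brewery", "logo_copy", "glass_type"))

def test_refactoring_preserved_behavior():
    complete = {"brewery": "Bell's", "logo_copy": "Two Hearted", "glass_type": "pint"}
    for entry in (complete, {"brewery": "Bell's"}, {}):
        assert entry_is_valid_v1(entry) == entry_is_valid_v2(entry)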

Less Dead Code: The final and least obvious benefit of TDD is a general reduction in dead code. When I worked in a development organization, one of the things that drove me crazy was the amount of old, dead code in the programs I interacted with. Some code was commented out or left in “just in case,” and some code had simply become unreachable because of logic changes. Refactoring helps remove dead code, making programs more readable and therefore easier to maintain. Writing simple, focused code, just enough to pass the test, makes it less likely that you will create dead code.

TDD, ATDD and BDD all deliver significant value to the organizations that use them. ATDD and BDD, with their additional focus on acceptance tests, may encourage deeper communication between developers, testers and stakeholders. Regardless of which technique you use, these test-first techniques improve communication and testing, providing teams with the ability to deliver more value and generate higher customer satisfaction.

Logo beer glasses come in all shapes and sizes!

An example of TDD with ATDD and BDD attributes
(or TDD/ATDD/BDD run through a blender just a bit)

I.         The team establishes a backlog of prioritized user stories based on the functional and architectural requirements. Remember that the backlog is dynamic and will evolve over the life of the project.  Story Maps are one method of visualizing the backlog.

II.         The Three Amigos (Product Owner, Lead Developer and Tester or Business Analyst) review the stories and their specifications being taken into the next sprint. They make sure that each story has acceptance test cases, adding any missing scenarios or requirements. Here’s an example:

Feature:  Add logo glass to my collection

Story: As a beer logo glass collector, I want to be able to log a logo glass into my collection so I do not recollect the same glass!

Notes: It is important to capture the information needed to identify the glass. A screen is needed that captures brewery name, glass logo copy, glass type and price.

Logo Glass Entry Screen 

Scenario: Brewery name is a required field.

Given I am on the logo glass entry site

When I add a glass leaving the brewery name blank

Then I should see the error message “The brewery name can’t be blank”

Scenario: Glass logo copy is a required field

Given I am on the logo glass entry site

When I add a glass leaving the glass logo copy blank

Then I should see the error message “The glass logo copy can’t be blank”

Scenario: Glass type is a required field

Given I am on the logo glass entry site

When I add a glass leaving the glass type blank

Then I should see the error message “The glass type can’t be blank”

Scenario: Glass cost is not a required field

Given I am on the logo glass entry site

When I add a glass leaving the glass cost blank

Then I should not see an error message

Scenario: Glass added message should be displayed when glass data entry is complete

Given I am on the logo glass entry site

When I have completed the entry of glass information

Then I should see the message “The glass has been added”

III.         For the stories accepted into the sprint, the tester ensures the acceptance tests fail (the scenarios above are acceptance tests) and then automates the acceptance test cases (assuming testing is automated, which is a really important concept). Concurrently, the developer implements the feature in the code. The developer and the tester collaborate to test the implemented feature, iterating until the feature passes the tests (a sketch of this step follows the list).

IV.         Once the feature has passed the tests, any refactoring of the design is done (remember the difference between test-first and test driven development). The tests are retained and incorporated into the regression test suite.

V.         Start over on the next story!
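
To make step III concrete, here is a hypothetical sketch of the “just enough” code the required-field scenarios above might drive out, together with two of the automated checks; the function and field names are invented.

# Required fields taken from the story notes; cost/price is deliberately optional.
REQUIRED_FIELDS = ("brewery name", "glass logo copy", "glass type")

def validate_glass_entry(entry):
    # Return the error messages for a logo glass entry form submission.
    return [
        f"The {field} can't be blank"
        for field in REQUIRED_FIELDS
        if not entry.get(field, "").strip()
    ]

def test_brewery_name_is_required():
    entry = {"brewery name": "", "glass logo copy": "Oktoberfest", "glass type": "stein"}
    assert "The brewery name can't be blank" in validate_glass_entry(entry)

def test_glass_cost_is_not_required():
    entry = {"brewery name": "Bell's", "glass logo copy": "Two Hearted", "glass type": "pint"}
    assert validate_glass_entry(entry) == []  # no cost supplied, no error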

Testing is rarely a recipe.

Test Driven Development (TDD) is an approach to development in which you write a test that proves the piece of work you are working on, and then write the code required to pass the test. There are a number of different versions of the basic TDD concepts driven by different development philosophies.  The most common are:

  • Test Driven Development (TDD) – The classic extension of test-first development that incorporates code and design refactoring. A developer using TDD first develops a test (or tests) for the unit of work he or she is working on, then runs the test(s) to validate that they fail. Once the tests fail, he or she writes only the code needed to satisfy the unit of work. The code-and-test cycle is repeated until the tests pass. Finally, design refactoring is performed based on the implementation of the code. The tests written in a TDD environment tend to be focused on unit and functional testing. TDD was originally linked to Extreme Programming, where the practice of pairing (two developers working together on one unit of work) helped to ensure that the tests were not tailored to pass based on an individual developer’s point of view. TDD proves that the work was done correctly.
  • Acceptance Test Driven Development (ATDD) – ATDD begins by generating a set of acceptance test cases based on the team’s shared understanding of the units of work the team is being asked to deliver. The goal of ATDD is to ensure everyone on the team has the same perception of what is being built; ATDD ensures that the right things are getting built. Acceptance test cases nearly always provide additional depth for a user story or the requirements, which further supports generating a shared vision among the team.
  • Behavior Driven Development (BDD) – BDD leverages facets of both TDD and ATDD. From TDD we take the unit/functional test focus, and from ATDD we bring in the acceptance test focus. Test formats tend to be in an object-oriented class format that specifies the desired behavior of the unit of work. A sample BDD test format is: Given <initial context>, when <event>, then <outcome>. I have seen and used variants that include formats such as [given, when, then, and] or [given, but, then] and other variations. In BDD, tests are written in a precise format that supports automating the test (Cucumber is one example of writing tests in such a specific manner), while at the same time representing real behaviors and remaining human readable (a small sketch follows this list).
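
As a hypothetical sketch of the class format described above, written in Python’s unittest style, the class and method names describe behavior rather than code units; the collection logic is invented for illustration.

import unittest

class DescribeLogoGlassCollection(unittest.TestCase):

    def setUp(self):
        # Given I have an empty collection (initial context)
        self.collection = []

    def test_it_records_a_newly_added_glass(self):
        # When I add a glass (event)
        self.collection.append({"brewery": "Bell's", "glass_type": "pint"})
        # Then the collection contains the glass (outcome)
        self.assertEqual(len(self.collection), 1)

if __name__ == "__main__":
    unittest.main()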

Test Driven Development and its cousins, ATDD and BDD, are not only development approaches but also communication tools. TDD tends to focus communication within the core team. ATDD, while still focused on the core team, can also incorporate stakeholder participation by including stakeholders in generating tests. BDD can bridge the communication gap between stakeholders, developers and testers by making tests represent explicit behaviors that all parties can read and understand. Writing tests first helps everyone understand what will be developed and proves that what was developed meets that understanding.