
As we have seen, product owners act as a conduit between the business and/or customers and the team. The product owner’s role encompasses not only the definition of what gets done and when, but also the level of quality of what gets delivered. One facet of the role that affects quality comes when the product owner participates in writing the acceptance criteria for user stories. Acceptance criteria are part of well-formed user stories; they are crafted early in the life cycle when stories are generated and refined as the stories are groomed. The product owner provides another check on quality when he or she uses his or her authority to accept completed stories. I don’t want to suggest that the product owner is only active at the start and end of a sprint or iteration. The product owner interacts with the team on a continuous basis in order to guide the work and the culture. Adopting acceptance test-driven development (ATDD) is an excellent method of instantiating the product owner’s role in both shaping the vision for the team and influencing the quality of the work the team delivers.

Listen to the Software Process and Measurement Cast 295!

SPaMCAST 295 features our essay on Test Driven Development (TDD). TDD is an approach to development in which you write a test that proves the piece of work you are working on, and then write the code required to pass the test. You then refactor that code to eliminate duplication and any overlap, and repeat until all of the work is completed. Philosophically, Agile practitioners see TDD as a tool either to improve requirements and design (specification) or to improve the quality of the code. This is similar to the distinction between validation (are you building the right thing) and verification (are you building the thing right).

We also have a new entry from the Software Sensei, Kim Pries. Kim addresses cognitive load theory, which helps explain how learning and change occur at the personal, team and organizational levels.

Next week we will feature our interview with Jeff Dalton. Jeff and I talked about making Agile resilient.  Jeff posits that the CMMI can be used to strengthen and reinforce Agile. This is an important interview for organizations that are considering scaled Agile frameworks.

Upcoming Events

Upcoming DCG Webinars:

July 24 11:30 EDT – The Impact of Cognitive Bias On Teams

Check these out at www.davidconsultinggroup.com

I will be attending Agile 2014 in Orlando, July 28 through August 1, 2014. It would be great to get together with SPaMCAST listeners; let me know if you are attending.

I will be presenting at the International Conference on Software Quality and Test Management in San Diego, CA on October 1.

I will be presenting at the North East Quality Council 60th Conference October 21st and 22nd in Springfield, MA.

More on all of these great events in the near future! I look forward to seeing all SPaMCAST readers and listeners that attend these great events!

The Software Process and Measurement Cast has a sponsor.

As many of you know, I do at least one webinar for the IT Metrics and Productivity Institute (ITMPI) every year. The ITMPI provides a great service to the IT profession. ITMPI’s mission is to pull together the expertise and educational efforts of the world’s leading IT thought leaders and to create a single online destination where IT practitioners and executives can meet all of their educational and professional development needs. The ITMPI offers a premium membership that gives members unlimited free access to 400 PDU-accredited webinar recordings and waives the PDU processing fees on all live and recorded webinars. The Software Process and Measurement Cast receives some support if you sign up here. All the revenue our sponsorship generates goes toward bandwidth, hosting and new cool equipment to create more and better content for you. Support the SPaMCAST and learn from the ITMPI.

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques, co-authored by Murali Chemuturi and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: “This book will prove that software projects should not be a tedious process, neither for you nor your team.” Support SPaMCAST by buying the book here.

Available in English and Chinese.

Logo beer glasses application: Integration testing

Integration testing can and should be incorporated easily into a test driven development (TDD) framework or any TDD variant. Incorporating integration tests into a TDD, BDD or ATDD framework takes the bigger picture into account.

Incorporating integration testing into a TDD framework requires knowing how functions and components fit together and then adding the required test cases as acceptance criteria. In the earlier post, An example of TDD with ATDD and BDD attributes, we began defining an application for maintaining my beer glass collection. We developed a set of Cucumber-based behavior driven development test cases. These tests are critical for defining done for the development of the logo glass entry screen. The entry screen is just a single story in a larger application.

A broader view of the application:


An example of the TDD test that we wrote for the logo glass entry screen was:

Scenario: Brewery name is a required field.

Given I am on the logo glass entry site

When I add a glass leaving the brewery name blank

Then I should see the error message “The brewery name can’t be blank”

Based on our discussion in Integration Testing and Unit Testing, Different?, this test is a unit test. The test case doesn’t cross a boundary, answers a single question, and is designed to provide information to a developer.
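
For contrast, a unit-level version of that check might look like the following RSpec sketch. The LogoGlass class and its validation logic are hypothetical stand-ins invented for illustration; the point is that a single object is exercised in isolation, with no boundary crossed.

# unit-level sketch (LogoGlass is hypothetical); run with: ruby logo_glass_spec.rb
require "rspec/autorun"

class LogoGlass
  attr_reader :errors

  def initialize(brewery_name)
    @brewery_name = brewery_name
    @errors = []
  end

  def valid?
    @errors << "The brewery name can't be blank" if @brewery_name.to_s.strip.empty?
    @errors.empty?
  end
end

RSpec.describe LogoGlass do
  it "rejects a blank brewery name" do
    glass = LogoGlass.new("")
    expect(glass.valid?).to be(false)
    expect(glass.errors).to include("The brewery name can't be blank")
  end
end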

Using the same Cucumber format, here are two examples of integration tests for the logo glass add function.

The first example describes how the integration between the logo glass add function and the brewery database should act.

Scenario: Brewery name must exist in the brewery database.

Given I am on the logo glass entry site

When I add a glass and the brewery name entered is not in the brewery database

Then the new brewery screen should be opened and the message “Please add the brewery before proceeding.” should be displayed

The second case describes a test for the integration between the logo glass entry screen and the logo glass database.

Scenario: When the glass is added and all error conditions are resolved, the glass should be inserted into the database (the glass logical file).

Given I am on the logo glass entry site

When I have completed the entry of glass information

Then the record for the glass should be inserted into the database and I should see the message “The glass has been added.”

In both of these cases a boundary is crossed, more than one unit is being evaluated and, because the test is broader, more than one role will find the results useful. These test cases fit the classic definition of integration tests. Leveraging the Cucumber framework, integration tests can be written as part of the acceptance criteria (acceptance test driven development). As more functions are developed or changed over the course of the project, broader integration tests can be added as acceptance criteria, which provides a basis for continually ensuring overall integration occurs.
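
To make the boundary crossing concrete, here is a sketch of how the glue code behind the first integration scenario might look as Cucumber step definitions in Ruby. The GlassEntryScreen and BreweryDatabase objects are hypothetical stand-ins for the real components on either side of the boundary, not the actual application code.

# features/step_definitions/integration_steps.rb (names are hypothetical)
Given(/^I am on the logo glass entry site$/) do
  # The screen under test is wired to the brewery database - the boundary
  # this integration test deliberately crosses.
  @screen = GlassEntryScreen.new(database: BreweryDatabase.connect)
end

When(/^I add a glass and the brewery name entered is not in the brewery database$/) do
  @result = @screen.add_glass(brewery_name: "Unknown Brewery")
end

Then(/^the new brewery screen should be opened and the message "(.*)" should be displayed$/) do |message|
  expect(@result.next_screen).to eq(:new_brewery)
  expect(@result.message).to eq(message)
end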

Incorporating integration test cases into a TDD, BDD or ATDD framework ensures that the development team isn’t just focused on delivering individual stories, but rather on delivering a greater whole that works and plays well with itself and any other relevant hardware and applications.

How do you weigh costs and benefits?

Introducing any process improvement requires weighing the costs and benefits. Most major improvements come with carefully crafted cost/benefit analyses. Where the benefits outweigh the costs, changes are implemented. The goal is to decide where to expend precious process improvement capital so that it has the most benefit. Many of the benefits and criticisms of TDD and other test-first techniques can be quantified, but others are less tangible. In many cases the final decision rests on personal biases, beliefs and past experiences. In the end, your organization and environment may yield different results; there is no perfect ROI model.

In a perfect world, we would pilot TDD with a typical team to determine the impact. For example, will the cost of change be higher than anticipated (or lower), or will the improvement in quality be worth the effort to learn and implement? In many cases, pilot teams for important process improvements are staffed with the best and the brightest. Unfortunately, these are the people that will deliver results regardless of barriers. The results are rarely extensible because the capabilities of the best and brightest aren’t generally the same as those of a normal team. Another typical mistake organizations make is to generalize from the results of a team that embraced TDD (or any other change) on its own. Expecting the results of teams that self-select to extend to the larger organization is problematic: those that self-select a change are more apt to work harder to make TDD work.

In a perfect world we would be able to test our assumptions about the impact of TDD in our organization based on the impact on typical groups. However, sometimes we have to rely on the results of the best team or the results of true believers. Either case is better than having to rely solely on studies from outside the organization. Just remember to temper your interpretation of the data when you are weighing costs and benefits.

A 10 mile run requires effort but that does not mean it should be avoided.

Implementing and using Test Driven Development (TDD) and other related test-first techniques can be difficult, and there are a significant number of criticisms of these methods. These criticisms fall into two camps: those related to the effort required and those arguing that test-first techniques are not full testing.

Effort-related criticisms reflect that all test-first techniques require an investment in time and effort. The effort to embrace TDD, ATDD or BDD begins when teams learn the required philosophies and techniques. Learning anything new requires an investment. The question to ask is whether the investment in learning these techniques will pay off in a reasonable period of time. While I do not think I am the perfect yardstick (I have both a development and testing background), I learned enough of the BDD technique to be dangerous in less than a day. The criticism is true, but I believe the impact is overstated.

The second effort-related criticism is that early in a development project developers might be required to create stubs (components that emulate parts of a system that have not been developed) or test harnesses (code that holds created components together before the whole system exists). However, stubs and harnesses can generally be reused as the system is built and enhanced. I have found that creating and keeping a decent library of stubs and harnesses generates good discussion of interfaces and reduces the number of “I can’t test that until integration” excuses. Again, the criticism is true but overstated.
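
As a concrete illustration, a stub can be as simple as a small class that answers with canned data. In this Ruby sketch (the names are invented for this example), the stub emulates a brewery lookup service that has not been built yet:

# stub_brewery_lookup.rb - a hand-rolled stub (names are hypothetical)
class StubBreweryLookup
  def initialize(known_breweries)
    @known_breweries = known_breweries
  end

  # Exposes the same interface the real lookup service will offer, but
  # answers from a canned list instead of making a database or network call.
  def exists?(brewery_name)
    @known_breweries.include?(brewery_name)
  end
end

lookup = StubBreweryLookup.new(["Bell's Brewery"])
puts lookup.exists?("Bell's Brewery")   # => true
puts lookup.exists?("Mystery Brewing")  # => false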

The third effort-related criticism is that in order to do TDD effectively you need automated testing (inferred in this criticism is the effort, time and cost of test automation). I have seen both TDD and ATDD done without automation . . . my brother also had a tooth pulled without anesthetic; I recommend neither. Test automation is important to making TDD efficient. Writing test tools, learning the tools and writing test scripts do take effort. Again the criticism is true; however, test automation makes sense (and requires someone to learn it) even if you are not doing TDD.

The final effort-related criticism is that TDD, ATDD and BDD are hard to learn. I respectfully disagree. TDD, ATDD and BDD are different from the concepts many team members have been exposed to earlier in their careers. Just because they are different does not mean they are difficult to learn. This criticism is most likely a reflection of a fear of change. Change is hard, especially if a team is already successful. I would suggest implementing TDD once your team or organization has become comfortable with Agile and has begun to implement test automation, both of which make learning TDD easier.

A second class of criticism of TDD, ATDD and BDD is that these techniques are not full testing. Organizations that decide that test-first techniques should replace all other types of testing are generally in for a rude awakening. Security testing is just one example of the overall testing that will still be required. TDD, ATDD and BDD are development methods that can replace some testing, but not all types of testing.

Also in the full-testing category of criticisms: TDD does not help teams learn good testing. I agree that just writing and executing tests doesn’t teach testing. What does lead people to learn to test is the discussion of how to test, and gaining experience. Agile teams are built on interaction and collaboration, which provides a platform for that growth. Neither test-first nor test-last (waiting until coding is done) leads a team to good testing on its own, so this criticism is true but not constrained to the use of TDD.

Embracing TDD, ATDD or BDD will require effort to learn and implement. If you can’t afford the time and effort, wait until you can. Embracing any of the test-first techniques will require that teams change their behavior. If you can’t spend the effort on organizational change management, wait until you can. Test automation is important for efficient TDD. If you can’t buy or write testing tools, I would still experiment with test-first, but recognize that sooner or later you will need to bite the bullet. Finally, TDD is generally not sufficient to replace all testing. The criticisms are mostly true, but they are not sufficient to overwhelm the benefits of using these techniques.

KISS: Keep it Simple Stupid!

Test Driven Development (TDD) is an approach to development in which you write a test that proves the piece of work you are working on, write the code required to pass the test and then update the design based on what you have learned. The short cycle of TDD, Acceptance Test Driven Development (ATDD) or even Behavior Driven Development (BDD) delivers a variety of benefits to the organizations that practice these techniques. Some of the benefits are obvious, while others are less apparent.
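
As a minimal sketch of that cycle in Ruby (the GlassCollection class is invented for illustration): the failing test is written first, then just enough code to pass it, and then the design is cleaned up with the test as a safety net.

# glass_collection_spec.rb - red/green/refactor in miniature (hypothetical names)
require "rspec/autorun"

class GlassCollection
  def initialize
    @glasses = []
  end

  # Green: just enough code to make the spec below pass - and no more.
  def add(glass)
    @glasses << glass unless @glasses.include?(glass)
    @glasses.size
  end
end

# Red: this spec is written first and fails until add is implemented.
RSpec.describe GlassCollection do
  it "does not add the same glass twice" do
    collection = GlassCollection.new
    collection.add("Bell's pint glass")
    expect(collection.add("Bell's pint glass")).to eq(1)
  end
end
# Refactor: with the spec green, simplify names and structure, then rerun.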

Fewer Delivered Defects: This is probably the most obvious benefit of using any of the test-first techniques. Testing finds defects, timelier testing is more apt to find defects before they get delivered, and better testing tends to find more defects. The benefit of fewer defects is customers who swear less often.

Communication: This is the second most obvious benefit. The process of developing tests creates a platform to ensure that the product owner, business analyst or stakeholders communicate the meaning of a user story before the first line of code is written. Developing the tests requires conversation, which is at the heart of Agile techniques. Better communication yields software better targeted to the organization’s needs.

Better Tests: This benefit is related to delivering fewer defects, but is far less obvious. Tests targeted at proving only the functionality being delivered or changed will be significantly more focused and lean. Whether leveraging the XP technique of pair programming or the Three Amigos technique, TDD (or any of the test-first techniques) ensures that the tests are reviewed early and often, which helps to ensure only what is needed is tested. Tighter testing will not only result in fewer delivered defects, but also in less costly testing.

Code Quality: TDD, ATDD and BDD all encourage developers to write only the code needed to pass the tests. The test-first techniques embrace the old adage “keep it simple stupid” through the refactoring step. By writing only as much code as is needed to pass the test, the code remains as simple as possible. This yields higher code quality because there are fewer chances to embed defects.

Less Dead Code: The final and least obvious benefit of TDD is a general reduction in dead code. When I worked in a development organization, one of the things that drove me crazy was the amount of old dead code in the programs I interacted with. Some code was commented out or left in “just in case,” and some code had simply become unreachable because of logic changes. Refactoring helps remove dead code, making programs more readable and therefore easier to maintain. Writing simple, focused code, just enough to pass the test, makes it less likely that you will create dead code in the first place.

TDD, ATDD and BDD all deliver significant value to the organizations that use them. ATDD and BDD, with their additional focus on acceptance tests, may encourage deeper communication between developers, testers and stakeholders. Regardless of which technique you use, these test-first techniques improve communication and testing, providing teams with the ability to deliver more value and generate higher customer satisfaction.

Logo beer glasses come in all shapes and sizes!

An example of TDD with ATDD and BDD attributes
(or TDD/ATDD/BDD run through a blender just a bit)

I. The team establishes a backlog of prioritized user stories based on the functional and architectural requirements. Remember that the backlog is dynamic and will evolve over the life of the project. Story Maps are one method of visualizing the backlog.

II. The Three Amigos (Product Owner, Lead Developer and Tester or Business Analyst) review the stories and their specifications to be taken into the next sprint. They make sure that each story has acceptance test cases, adding any missing scenarios or requirements. Here’s an example:

Feature:  Add logo glass to my collection

Story: As a beer logo glass collector, I want to be able to log a logo glass into my collection so I do not recollect the same glass!

Notes: It is important to capture the information needed to identify the glass. A screen is needed that captures brewery name, glass logo copy, glass type and price.

Logo Glass Entry Screen 

Scenario: Brewery name is a required field.

Given I am on the logo glass entry site

When I add a glass leaving the brewery name blank

Then I should see the error message “The brewery name can’t be blank”

Scenario: Glass logo copy is a required field

Given I am on the logo glass entry site

When I add a glass leaving the glass logo copy blank

Then I should see the error message “The glass logo copy can’t be blank”

Scenario: Glass type is a required field

Given I am on the logo glass entry site

When I add a glass leaving the glass type blank

Then I should see the error message “The glass type can’t be blank”

Scenario: Glass cost is not a required field

Given I am on the logo glass entry site

When I add a glass leaving the glass cost blank

Then I should not see an error message

Scenario: Glass added message should be displayed when glass data entry is complete

Given I am on the logo glass entry site

When I have completed the entry of glass information

Then I should see the message “The glass has been added”

III. For the stories accepted into the sprint, the tester would ensure the acceptance tests fail (the scenarios above are acceptance tests) and then automate the acceptance test cases (assuming testing is automated, which is a really important concept); a sketch of what that automation might look like follows this list. Concurrently, the developer would implement the feature in the code. The developer and the tester then collaborate to test the implemented feature. This is an iterative process that continues until the feature passes the tests.

IV. Once the feature has passed the tests, any refactoring of the design is done (remember the difference between test-first and test driven development). The tests are retained and incorporated into the regression test suite.

V. Start over on the next story!
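
For step III, the automation of the first acceptance test might look something like the following Cucumber step definitions, here assuming Capybara is used to drive the screen; the path, field labels and button name are assumptions made for illustration.

# features/step_definitions/entry_screen_steps.rb (names are hypothetical)
Given(/^I am on the logo glass entry site$/) do
  visit "/glasses/new"
end

When(/^I add a glass leaving the brewery name blank$/) do
  fill_in "Glass logo copy", with: "Oberon Summer Ale"
  fill_in "Glass type", with: "Pint"
  click_button "Add glass"
end

Then(/^I should see the error message "(.*)"$/) do |message|
  expect(page).to have_content(message)
end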

Testing is rarely a recipe.

Test Driven Development (TDD) is an approach to development in which you write a test that proves the piece of work you are working on, and then write the code required to pass the test. There are a number of different versions of the basic TDD concept, driven by different development philosophies. The most common are:

  • Test Driven Development (TDD) – The classic extension of test-first development that incorporates code and design refactoring. A developer using TDD first develops a test (or tests) for the unit of work he or she is working on, then runs the test to validate that it fails. Once the test fails, he or she writes only the code needed to satisfy the unit of work. The code-and-test cycle is repeated until the tests pass. Finally, design refactoring is performed based on the implementation of the code. The tests written in a TDD environment tend to be focused on unit and functional testing. TDD was originally linked to Extreme Programming, where the practice of pairing (two developers working together on one unit of work) helped to ensure that the tests were not tailored to pass based on an individual developer’s point of view. TDD proves that the work was done correctly.
  • Acceptance Test Driven Development (ATDD) – ATDD begins by generating a set of acceptance test cases based on the team’s shared understanding of the units of work the team is being asked to deliver. The goal of ATDD is to ensure everyone on the team has the same perception of what is being built. ATDD ensures that the right things are getting built. Acceptance test cases nearly always provide additional depth for a user story or the requirements, which further supports generating a shared vision among the team.
  • Behavior Driven Development (BDD) – BDD leverages facets of both TDD and ATDD. From TDD we take the unit/functional test focus, and from ATDD we bring in the acceptance test focus. Tests tend to be written in a structured format that specifies the desired behavior of the unit of work. A sample BDD test format is: Given <initial context>, when <event>, then <outcome>. I have seen and used variants such as [given, when, then, and] or [given, but, then] and other variations. In BDD, tests are written in a precise format that supports automation of the test (Cucumber is one example of writing tests in such a specific manner), while at the same time representing real behaviors and remaining human readable.

Test Driven Development and its cousins, ATDD and BDD, are not only development approaches but also communication tools. TDD tends to focus communication within the core team. ATDD, while still focused on the core team, can also be used to incorporate stakeholder participation by including stakeholders in generating tests. BDD can be used to bridge the communication gap between stakeholders, developers and testers by making tests represent explicit behaviors that all parties can read and understand. Writing tests first helps everyone understand what will be developed and proves that what was developed meets that understanding.