Measuring TDD is a lot like measuring a cyclone!

Teams and organizations adopt test-driven development for many reasons, including improving software design, functional quality, time to market, or because everyone else is doing it (well, maybe not that last reason…yet). In order to justify the investment in time, effort, and even the cash for consultants and coaches, most organizations want some form of proof that there is a return on investment (ROI) from leveraging TDD. The measurement issue is less that something needs to be measured (I am ignoring the “you can’t measure software development” crowd), but rather what constitutes an impact and therefore what really should be measured. Erik van Veenendaal, an internationally recognized testing expert, stated in an interview that will be published on SPaMCAST 406, “unless you spend the time to link your measurement or change program to business needs, they will be short-lived.” Just adopting someone else’s best practices in measurement tends to be counterproductive because every organization has different goals and needs. This means organizations will adopt TDD for different reasons and will need different evidence to assure themselves that they are getting a benefit. There is NO single measure or metric that proves you are getting the benefit you need from TDD. That is not to say that TDD can’t or should not be measured. A palette of measures that are commonly used, grouped by the generic goal they address: (more…)

Just Say No!

Over and over I find that teams that use Test-Driven Development get serious results, including improved quality and faster delivery. However, not everything is light, kittens, and puppies, or everyone would be doing test-first development or one of its variants (TDD, ATDD or BDD). The costs and organizational impacts can lead organizations into bad behaviors. Costs and behavioral impacts (cons) that we explored in earlier articles on TFD and TDD include: (more…)

TDD is a life jacket for quality.

Test Driven Development promises serious results for teams, organizations, and customers, including improved quality and faster delivery. The promises of TDD, or any of the other variants of test-first development techniques, are at least partially offset by costs and potentially tragic side effects when it is done poorly.

The positive impacts explored in earlier articles on TFD and TDD include: (more…)

Testing is about predicting the future!

Test-first development is an old concept that was rediscovered and documented by Kent Beck in Extreme Programming Explained (Chapter 13 in the Second Edition). Test-first development (TFD) is an approach in which developers do not write a single line of code until they have created the test cases needed to prove that the unit of work solves the business problem and is technically correct at a unit-test level. In a response to a question on Quora, Beck described reading about developers using a test-first approach well before XP and Agile. Test-driven development is test-first development combined with design and code refactoring. Both test-first and test-driven development are useful for improving quality, morale, and trust, and even though the two are related, they are not the same. (more…)

There really is a difference . . .

Classic software development techniques follow a pattern of analysis, design, coding, and testing. Regardless of whether the process is driven by a big upfront design or by incremental design decisions, code is written and then tests are executed for the first time. Test plans, scenarios, and cases may well be developed in parallel with the code (generally with only a minimum of interaction); however, execution only occurs once the code is completed. In test-last models, the roles of developer and tester are typically filled by professionals with very different types of training and, many times, from separate management structures. The differences in training and management structure create structural communication problems. At best, developers and testers are viewed as separate but equal, although rarely do you see testers and developers sharing tables at lunch.

Testing last supports a sort of two-tiered development environment apartheid in which testers and developers are kept apart. I have heard it argued that anyone who learned to write code in school has been taught how to test the work they have created, at least in a rudimentary manner, and therefore should be able to write the test cases needed to implement test-driven development (TDD). When the code is thrown over the wall to the testers, the test cases may or may not be reused to build regression suites. Testers have to interpret both the requirements and the implementation approaches based on point-in-time reviews and documentation.

Test-first development techniques take a different approach, and therefore require a different culture. The common flow of a test-first process is:

  • The developer accepts a unit of work and immediately writes a set of tests that will prove that the unit of work actually functions correctly.
  • Run the tests. The tests should fail.
  • Write the code needed to solve the problem. Write just enough code to really solve the problem.
  • Run the test suite again. The tests should pass.

Most adherents of Agile development methods have heard of TDD, which is the most common instantiation of the broader school of test-first development (TFD). TDD extends TFD by adding a final refactoring step to the flow, in which developers clean up the design of the code while the tests protect its behavior. Other variants, such as behavior-driven and acceptance test-driven development, are common. In all cases the process follows the same cycle of writing tests, running the tests, writing the code, re-running the tests, and then refactoring; the only difference is the focus of the tests.
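
To make the red-green-refactor rhythm concrete, here is a minimal sketch in Python; the `add` function and test names are illustrative rather than drawn from any of the examples discussed here. The test is written first and fails because the code does not yet exist, just enough code is then written to make it pass, and the code can then be refactored with the test acting as a safety net.

```python
import unittest

# Step 1: write the test first. Run now, it fails because add() does not exist yet.
class TestAdd(unittest.TestCase):
    def test_adds_two_numbers(self):
        self.assertEqual(add(2, 3), 5)

# Step 3: write just enough code to make the failing test pass.
def add(a, b):
    return a + b

# Step 4: re-run the suite; the test now passes, and add() can be
# refactored freely because the test guards its behavior.
if __name__ == "__main__":
    unittest.main()
```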

TFD techniques intermingle coding and testing disciplines, creating a codependent chimera in which neither technique can exist without the other. For TFD to work most effectively, coders and testers must understand each other and be able to work together. Pairing testers and developers to write the test cases, whether for component/unit testing (TDD), behavior-driven tests (BDD), or acceptance testing (ATDD), creates an environment of collaboration and communication. Walls between the two groups will weaken and, over time, be torn down.

Test-first development and test-last development techniques differ both in how work is performed and in how teams are organized. TFD takes a collaborative approach to delivering value, while test-last approaches are far more adversarial in nature.

Logo beer glasses application: Integration testing

Integration testing can and should be incorporated easily into a test-driven development (TDD) framework or any TDD variant. Incorporating integration tests into a TDD, BDD, or ATDD framework takes the bigger picture into account.

Incorporating integration testing into a TDD framework requires knowing how functions and components fit together and then adding the required test cases as acceptance criteria. In an example of TDD with ATDD and BDD attributes, we began defining an application for maintaining my beer glass collection. We developed a set of Cucumber-based behavior-driven development test cases. These tests are critical for defining done for the development of the logo glass entry screen. The entry screen is just a single story in a larger application.

A broader view of the application: [application overview diagram]

An example of the TDD test that we wrote for the logo glass entry screen was:

Scenario: Brewery name is a required field.

Given I am on the logo glass entry site

When I add a glass leaving the brewery name blank

Then I should see the error message “The brewery name can’t be blank”

Based on our discussion in Integration Testing and Unit Testing, Different?, this test is a unit test. The test case doesn’t cross a boundary; it answers a single question and is designed to provide information to a developer.
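
For contrast, the same rule can also be expressed as a plain, developer-facing unit test. The following is only a sketch, assuming a hypothetical `validate_glass` helper that returns error messages for missing fields; no boundary is crossed and no other component is involved.

```python
import unittest

def validate_glass(brewery_name):
    """Hypothetical validation helper: returns a list of error messages."""
    errors = []
    if not brewery_name or not brewery_name.strip():
        errors.append("The brewery name can't be blank")
    return errors

class TestGlassEntryValidation(unittest.TestCase):
    def test_blank_brewery_name_is_rejected(self):
        # No boundary is crossed: the test exercises a single function in memory.
        self.assertIn("The brewery name can't be blank", validate_glass(""))

if __name__ == "__main__":
    unittest.main()
```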

Using the same Cucumber format, two examples of integration tests for the logo glass add function would be:

The first of these examples describes a test for how the integration between the logo glass add function and the brewery database should act.

Scenario: Brewery name must exist in the brewery database.

Given I am on the logo glass entry site

When I add a glass and the brewery name entered is not in the brewery database

Then the new brewery screen should be opened and the message “Please add the brewery before proceeding.” should be displayed

The second case describes a test for the integration between the logo glass entry screen and the logo glass database.

Scenario: When the glass is added and all error conditions are resolved, the glass should be inserted into the database (the glass logical file).

Given I am on the logo glass entry site

When I have completed the entry of glass information

Then the record for the glass should be inserted into the database and I should see the message “The glass has been added.”

In both of these cases a boundary is crossed, more than one unit is being evaluated, and, because the test is broader, more than one role will find the results useful. These test cases fit the classic definition of integration tests. Leveraging the Cucumber framework, integration tests can be written as part of the acceptance criteria (acceptance test-driven development). As more functions are developed or changed as part of the project, broader integration tests can be added as acceptance criteria, which provides a basis for continually ensuring that overall integration occurs.
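
To show what crossing a boundary looks like in code, here is a minimal sketch of the second scenario. The `add_glass` function is a hypothetical stand-in for the real entry logic, and an in-memory SQLite database stands in for the glass logical file.

```python
import sqlite3
import unittest

def add_glass(conn, brewery_name, glass_name):
    """Hypothetical add function: inserts a glass record and returns a confirmation."""
    conn.execute(
        "INSERT INTO glasses (brewery_name, glass_name) VALUES (?, ?)",
        (brewery_name, glass_name),
    )
    conn.commit()
    return "The glass has been added."

class TestGlassEntryIntegration(unittest.TestCase):
    def setUp(self):
        # The database boundary is what makes this an integration test; an
        # in-memory SQLite database plays the role of the glass logical file.
        self.conn = sqlite3.connect(":memory:")
        self.conn.execute(
            "CREATE TABLE glasses (brewery_name TEXT, glass_name TEXT)"
        )

    def test_completed_entry_is_inserted_and_confirmed(self):
        message = add_glass(self.conn, "Example Brewing Co.", "Pint glass")
        rows = self.conn.execute("SELECT brewery_name FROM glasses").fetchall()
        self.assertEqual(rows, [("Example Brewing Co.",)])
        self.assertEqual(message, "The glass has been added.")

if __name__ == "__main__":
    unittest.main()
```

Because the assertion inspects what was actually written to the database, the test exercises the interaction between the entry function and its persistence layer rather than a single unit in isolation.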

Incorporating integration test cases into a TDD, BDD, or ATDD framework ensures that the development team isn’t just focused on delivering individual stories, but rather on delivering a greater whole that works well and plays well with itself and with any other relevant hardware and applications.

How do you weigh costs and benefits?

Introducing any process improvement requires weighing the costs and benefits. Most major improvements come with carefully crafted cost/benefit analyses. Where the benefits outweigh the costs, changes are implemented. The goal is to decide where to expend precious process improvement capital so that it has the most benefit. Many of the benefits and criticisms of TDD and other test-first techniques can be quantified, but others are less tangible. In many cases the final decision rests on personal biases, beliefs, and past experiences. In the end, your organization and environment may yield different results; there is no perfect ROI model.

In a perfect world, we would pilot TDD with a typical team to determine the impact. For example, will the cost of change be higher than anticipated (or lower), or will the improvement in quality be worth the effort to learn and implement? In many cases pilot teams for important process improvements are staffed with the best and the brightest. Unfortunately, these are the people who will deliver results regardless of barriers. The results are rarely perfectly extensible because the capabilities of the best and brightest aren’t generally the same as those of a normal team. Another typical mistake organizations make is to generalize from the results of a team that embraced TDD (or any other change) on its own. Expecting the results of teams that self-select to extend to the larger organization is problematic. Those that self-select change are more apt to work harder to make TDD work.

In a perfect world we would be able to test our assumptions about the impact of TDD in our organization based on the impact observed with typical groups. However, sometimes we have to rely on the results of the best team or the results of true believers. Either case is better than having to rely solely on studies from outside the organization. Just remember to temper your interpretation of the data when you are weighing costs and benefits.