Measuring TDD is a lot like measuring a cyclone!

Teams and organizations adopt test-driven development for many reasons, including improving software design, functional quality or time to market, or because everyone is doing it (well, maybe not that last reason…yet).  In order to justify the investment in time, effort and even the cash for consultants and coaches, most organizations want some form of proof that there is a return on investment (ROI) from leveraging TDD. The measurement issue is less that something needs to be measured (I am ignoring the “you can’t measure software development” crowd), but rather what constitutes an impact and therefore what really should be measured. Erik van Veenendaal, an internationally recognized testing expert, stated in an interview that will be published on SPaMCAST 406, “unless you spend the time to link your measurement or change program to business needs, they will be short-lived.”  Just adopting someone else’s measurement best practices tends to be counterproductive because every organization has different goals and needs.  This means organizations will adopt TDD for different reasons and will need different evidence to assure themselves that they are getting a benefit.  There is NO single measure or metric that proves you are getting the benefit you need from TDD.  That is not to say that TDD can’t or should not be measured.  A palette of commonly used measures, organized by the generic goal they address, follows: (more…)

Just Say No!

Over and over I find that teams that use Test-Driven Development get serious results, including improved quality and faster delivery.  However, not everything is light, kittens and puppies, or everyone would be doing test-first development or one of its variants (TDD, ATDD or BDD).   The costs and organizational impacts can lead organizations into bad behaviors. Costs and behavioral impacts (cons) that we explored in earlier articles on TFD and TDD include: (more…)

TDD is a life jacket for quality.

Test Driven Development promises serious results for teams, organizations, and customers, including improved quality and faster delivery. The promises of TDD, or of any of the other variants of test-first development, are at least partially offset by costs and potentially tragic side effects when the techniques are done poorly.

The positive impacts explored in earlier articles on TFD and TDD include: (more…)

Testing is about predicting the future!

Test-first development is an old concept that was rediscovered and documented by Kent Beck in Extreme Programming Explained (Chapter 13 in the Second Edition).  Test-first development (TFD) is an approach to development in which developers do not write a single line of code until they have created the test cases needed to prove that the unit of work solves the business problem and is technically correct at a unit-test level. In a response to a question on Quora, Beck described reading about developers using a test-first approach well before XP and Agile. Test-driven development (TDD) is test-first development combined with design and code refactoring.  Both are useful for improving quality, morale and trust, and even though the two are related, they are not the same. (more…)

There really is a difference . . .

Classic software development techniques follow a pattern of analysis, design, coding and testing. Regardless of whether the process is driven by a big upfront design or by incremental design decisions, code is written and only then are tests executed for the first time. Test plans, scenarios and cases may well be developed in parallel with the code (generally with a minimum of interaction); however, execution only occurs as code is completed. In test-last models, the roles of developer and tester are typically filled by professionals with very different types of training, often reporting to separate management structures. The differences in training and management structure create structural communication problems. At best the developers and testers are viewed as separate but equal, although rarely do you see testers and developers sharing tables at lunch.

Testing last supports a sort of two-tiered development-environment apartheid in which testers and developers are kept apart. I have heard it argued that anyone who learned to write code in school has been taught to test the work they have created, at least in a rudimentary manner, and therefore should be able to write the test cases needed to implement test-driven development (TDD). When the code is thrown over the wall to the testers, those test cases may or may not be reused to build regression suites. Testers have to interpret both the requirements and the implementation approach based on point-in-time reviews and documentation.

Test-first development techniques take a different approach, and therefore require a different culture. The common flow of a test-first process is:

  • The developer accepts a unit of work and immediately writes a set of tests that will prove that the unit of work actually functions correctly.
  • The developer runs the tests.  The tests should fail.
  • The developer writes just enough code to really solve the problem.
  • The developer runs the test suite again.  The tests should pass (a sketch of this cycle in code follows the list).
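
The sketch below compresses one pass through this cycle into a single Python file. The validate_brewery_name function and its rule are invented for illustration (they echo the logo glass example later in these posts); treat it as a sketch of the rhythm, not a prescription.

# test_first_sketch.py -- a compressed view of one test-first cycle
import unittest

# Step 3: the production code. In a real cycle this function would not exist
# until the tests below had been written and run once to prove they fail.
def validate_brewery_name(name):
    """Return an error message when the brewery name is blank, otherwise None."""
    if not name.strip():
        return "The brewery name can't be blank"
    return None

# Step 1: the tests, written first to state what "correct" means.
class BreweryNameTests(unittest.TestCase):
    def test_blank_brewery_name_is_rejected(self):
        self.assertEqual(validate_brewery_name(""),
                         "The brewery name can't be blank")

    def test_valid_brewery_name_is_accepted(self):
        self.assertIsNone(validate_brewery_name("Great Lakes Brewing"))

# Steps 2 and 4: run the suite; it fails before the code exists, passes after.
if __name__ == "__main__":
    unittest.main()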

Most adherents of Agile development methods have heard of TDD, which is the most common instantiation of the broader school of test-first development (TFD).  TDD extends TFD by adding a final refactoring step to the flow.  Other variants, such as behavior-driven development and acceptance test-driven development, are also common.  In all cases the process follows the same cycle of writing tests, running the tests, writing the code, re-running the tests and then refactoring; the only difference is the focus of the tests.

TFD techniques intermingle the coding and testing disciplines, creating a codependent chimera in which neither technique can exist without the other. For TFD to work most effectively, coders and testers must understand each other and be able to work together. Pairing testers and developers to write the test cases, whether for component/unit testing (TDD), behavior-driven tests (BDD) or acceptance testing (ATDD), creates an environment of collaboration and communication.  Walls between the two groups will be weak and, over time, torn down.

Test-first development and test-last development techniques differ both in how work is performed and in how teams are organized. TFD takes a collaborative approach to delivering value, while test-last approaches are far more adversarial in nature.

Logo beer glasses application: Integration testing

Integration testing can and should be incorporated easily into a test-driven development (TDD) framework or any TDD variant. Incorporating integration tests into a TDD, BDD or ATDD framework takes the bigger picture into account.

Incorporating integration testing into a TDD framework requires knowing how functions and components fit together and then adding the required test cases as acceptance criteria. In An example of TDD with ATDD and BDD attributes we began defining an application for maintaining my beer glass collection. We developed a set of Cucumber-based behavior-driven development test cases.  These tests are critical for defining done for the development of the logo glass entry screen. The entry screen is just a single story in a larger application.

A broader view of the application: [figure omitted]

An example of the TDD test that we wrote for the logo glass entry screen was:

Scenario: Brewery name is a required field.

Given I am on the logo glass entry site

When I add a glass leaving the brewery name blank

Then I should see the error message “The brewery name can’t be blank”

Based on our discussion in Integration Testing and Unit Testing, Different?, this test is a unit test. The test case doesn’t cross a boundary, it answers a single question, and it is designed to provide information to a developer.
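
As an illustration of how a scenario like this might be automated, here is a sketch of Python step definitions using the behave library. The validate_glass helper stands in for the real entry-screen logic, and the sketch assumes the feature file uses straight quotes; both are assumptions, not part of the original example.

# features/steps/logo_glass_steps.py -- a sketch, not production code
from behave import given, when, then

def validate_glass(form):
    # Stands in for the real entry-screen validation logic.
    errors = []
    if not form.get("brewery_name"):
        errors.append("The brewery name can't be blank")
    return errors

@given("I am on the logo glass entry site")
def step_open_site(context):
    context.form = {}  # an empty entry form

@when("I add a glass leaving the brewery name blank")
def step_add_blank_brewery(context):
    context.errors = validate_glass(context.form)

@then('I should see the error message "The brewery name can\'t be blank"')
def step_see_error(context):
    assert "The brewery name can't be blank" in context.errors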

Using the same Cucumber format, two examples of integration tests for the logo glass add function would be:

The first example describes how the integration between the logo glass add function and the brewery database should act.

Scenario: Brewery name is a required field.

Given I am on the logo glass entry site

When I add a glass and the brewery name entered is not in the brewery database

Then the new brewery screen should be opened and I should see the message “Please add the brewery before proceeding.”

The second case describes a test for the integration between the logo glass entry screen and the logo glass database.

Scenario: When the glass is added and all error conditions are resolved, the glass should be inserted into the database (the glass logical file).

Given I am on the logo glass entry site

When I have completed the entry of glass information

Then the record for the glass should be inserted into the database and I should see the message “The glass has been added.”

In both of these cases a boundary is crossed, more than one unit is being evaluated and, because the test is broader, more than one role will find the results useful.  These test cases fit the classic definition of integration tests. Leveraging the Cucumber framework, integration tests can be written as part of the acceptance criteria (acceptance test-driven development).  As more functions are developed or changed as part of the project, broader integration tests can be added as acceptance criteria, which provides a basis for continually ensuring that overall integration occurs.
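
To make the boundary crossing concrete, here is a sketch of the second scenario as a Python integration test that drives an in-memory SQLite database. The glasses table schema and the add_glass function are assumptions made for the illustration.

import sqlite3
import unittest

def add_glass(conn, brewery, logo_copy, glass_type):
    # Crosses the unit boundary: the function writes through to the database.
    conn.execute(
        "INSERT INTO glasses (brewery, logo_copy, glass_type) VALUES (?, ?, ?)",
        (brewery, logo_copy, glass_type),
    )
    conn.commit()
    return "The glass has been added."

class AddGlassIntegrationTest(unittest.TestCase):
    def setUp(self):
        # An in-memory database keeps the sketch self-contained.
        self.conn = sqlite3.connect(":memory:")
        self.conn.execute(
            "CREATE TABLE glasses (brewery TEXT, logo_copy TEXT, glass_type TEXT)"
        )

    def test_completed_entry_is_inserted_and_confirmed(self):
        message = add_glass(self.conn, "Great Lakes Brewing", "Eliot Ness", "pint")
        rows = self.conn.execute("SELECT brewery FROM glasses").fetchall()
        self.assertEqual(rows, [("Great Lakes Brewing",)])
        self.assertEqual(message, "The glass has been added.")

if __name__ == "__main__":
    unittest.main()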

Incorporating integration test cases into a TDD, BDD or ATDD framework ensures that the development team isn’t just focused on delivering individual stories, but rather on delivering a greater whole that works well and plays well with itself and any other relevant hardware and applications.

How do you weight costs and benefits?

Introducing any process improvement requires weighing the costs and benefits.  Most major improvements come with carefully crafted cost/benefit analyses. Where the benefits outweigh the costs, changes are implemented.  The goal is to decide where to expend precious process improvement capital so that it has the most benefit.  Many of the benefits and criticisms of TDD and other test-first techniques can be quantified, but others are less tangible.  In many cases the final decision rests on personal biases, beliefs and past experiences. In the end, your organization and environment may yield different results – there is no perfect ROI model.

In a perfect world, we would pilot TDD with a typical team to determine the impact. For example, will the cost of change be higher than anticipated (or lower), or will the improvement in quality be worth the effort to learn and implement?  In many cases pilot teams for important process improvements are staffed with the best and the brightest.  Unfortunately, these are the people who will deliver results regardless of barriers, so the results are rarely extensible; the capabilities of the best and brightest aren’t generally the same as those of a normal team. Another typical mistake organizations make is to extrapolate from the results of a team that embraced TDD (or any other change) on its own.  Expecting the results of teams that self-select to extend to the larger organization is problematic: those who self-select a change are more apt to work harder to make TDD work.

In a perfect world we would be able to test our assumptions about the impact of TDD in our organization based on its impact on typical groups. However, sometimes we have to rely on the results of the best team or the results of true believers.  Either case is better than having to rely solely on studies from outside the organization.  Just remember to temper your interpretation of the data when you are weighing costs and benefits.

A 10 mile run requires effort but that does not mean it should be avoided.

Implementing and using Test Driven Development (TDD) and other related test-first techniques can be difficult. There are a significant number of criticisms of these methods.  These criticisms fall into two camps: effort-related criticisms and “not full testing” criticisms.

Effort-related criticisms reflect that all test-first techniques require an investment in time and effort.  The effort to embrace TDD, ATDD or BDD begins when teams learn the required philosophies and techniques.  Learning anything new requires an investment.  The question to ask is whether the investment in learning these techniques will pay off in a reasonable period of time.  I do not think I am the perfect yardstick (I have both a development and a testing background); however, I learned enough of the BDD technique to be dangerous in less than a day.  The criticism is true, but I believe the impact is overstated.

The second effort-related criticism is that early in a development project developers might be required to create stubs (components that emulate parts of a system that have not been developed) or testing harnesses (code that holds created components together before the whole system exists). However, stubs and harnesses can generally be reused as the system is built and enhanced.  I have found that creating and keeping a decent library of stubs and harnesses generates good discussion of interfaces and reduces the number of “I can’t test that until integration” excuses.  Again true, but overstated.
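
As a sketch of the idea in Python (the BreweryDirectoryStub class and the needs_new_brewery_prompt function are invented for illustration), a stub lets logic that depends on a not-yet-built brewery lookup service be tested today:

# A stub that emulates a brewery lookup service that has not been built yet.
class BreweryDirectoryStub:
    def __init__(self, known_breweries):
        self._known = set(known_breweries)

    def exists(self, brewery_name):
        # The real service might query a database; the stub answers from a canned set.
        return brewery_name in self._known

# The logic under test: should the user be prompted to add a new brewery?
def needs_new_brewery_prompt(brewery_name, directory):
    return not directory.exists(brewery_name)

# The stub makes the logic testable before the real service exists.
stub = BreweryDirectoryStub({"Great Lakes Brewing"})
assert needs_new_brewery_prompt("Unknown Brewery", stub)
assert not needs_new_brewery_prompt("Great Lakes Brewing", stub)

When the real directory service arrives, it replaces the stub without changing the test or the calling code, which is what makes a library of stubs worth keeping.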

The third effort-related criticism is that in order to do TDD effectively you need automated testing (implied in this criticism is the effort, time and cost of test automation).  I have seen both TDD and ATDD done without automation . . . my brother also had a tooth pulled without anesthetic; I recommend neither.  Test automation is important to making TDD efficient.  Writing test tools, learning the tools and writing test scripts does take effort.  Again the criticism is true; however, test automation makes sense (and requires someone to learn it) even if you are not doing TDD.

The final effort-related criticism is that TDD, ATDD and BDD are hard to learn.  I respectfully disagree.  TDD, ATDD and BDD are different concepts than many team members have been exposed to before in their careers. Just because they are different does not mean they are difficult to learn; this criticism is most likely a reflection of a fear of change. Change is hard, especially if a team is already successful.  I would suggest implementing TDD once your team or organization has become comfortable with Agile and has begun to implement test automation, which makes “learning” TDD easier.

A second class of criticism of TDD, ATDD or BDD is that these techniques are not full testing. Organizations that decide that test-first techniques should replace all types of testing are generally in for a rude awakening. Security testing is just one example of overall testing that will still be required. TDD, ATDD or BDD are development methods that can be used to replace some testing, but not all types of testing.

Also in the full-testing category of criticisms is the claim that TDD does not help teams learn good testing. I agree that just writing and executing tests doesn’t teach testing.  What does lead people to learn to test is the discussion of how to test, and gaining experience. Agile teams are built on interaction and collaboration, which provides a platform for growth.  Neither test-first nor test-last (waiting until coding is done) on its own leads a team to good testing. Therefore this criticism is true, but it is not constrained to the use of TDD.

Embracing TDD, ATDD or BDD will require effort to learn and implement. If you can’t afford the time and effort, wait until you can.  Embracing any of the test-first techniques will require that teams change their behavior. If you can’t spend the effort on organizational change management, wait until you can. Test automation is important for efficient TDD.  If you can’t buy or write testing tools, I would still experiment with test-first, but recognize that sooner or later you will need to bite the bullet. Finally, TDD is generally not sufficient to replace all testing.  The criticisms are mostly true, but they are not sufficient to overwhelm the benefits of using these techniques.

KISS: Keep it Simple Stupid!

Test Driven Development (TDD) is an approach to development in which you write a test that proves the piece of work you are working on, write the code required to pass the test and then update the design based on what you have learned.  The short cycle of TDD, Acceptance Test Driven Development (ATDD) or even Behavior Driven Development (BDD) delivers a variety of benefits to the organizations that practice these techniques.  Some of the benefits are obvious, while others are less apparent.
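
A small Python sketch of the “write just enough, then refactor” rhythm (the required-field rules echo the logo glass example from this series; the function names are invented): both versions pass the same tests, and the refactored version expresses the repeated rule exactly once.

# Before refactoring: just enough code to pass the tests, with visible duplication.
def glass_entry_errors(form):
    errors = []
    if not form.get("brewery_name"):
        errors.append("The brewery name can't be blank")
    if not form.get("glass_logo_copy"):
        errors.append("The glass logo copy can't be blank")
    if not form.get("glass_type"):
        errors.append("The glass type can't be blank")
    return errors

# After refactoring: the same behavior, with the repeated rule expressed once.
REQUIRED_FIELDS = {
    "brewery_name": "The brewery name can't be blank",
    "glass_logo_copy": "The glass logo copy can't be blank",
    "glass_type": "The glass type can't be blank",
}

def glass_entry_errors_refactored(form):
    return [msg for field, msg in REQUIRED_FIELDS.items() if not form.get(field)]

# The retained tests keep the refactoring honest.
assert glass_entry_errors({}) == glass_entry_errors_refactored({})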

Fewer Delivered Defects:  This is probably the most obvious benefit of using any of the test-first techniques.  Testing finds defects, timelier testing is more apt to find defects before they are delivered, and better testing tends to find more defects.  The benefit of fewer defects is customers who swear less often.

Communication:  This is the second most obvious benefit. The process of developing tests creates a platform to ensure that the product owner, business analyst or stakeholders communicate the meaning of a user story before the first line of code is written. The process of developing the tests requires conversation, which is at the heart of Agile techniques. Better communication yields software better targeted to the organization’s needs.

Better Tests:  This benefit is related to delivering fewer defects, but is far less obvious.  Tests that are targeted at proving only the functionality being delivered or changed will be significantly more focused and lean.  Whether leveraging the XP technique of pair programming or the Three Amigos technique, TDD (or any of the test-first techniques) ensures that the tests are reviewed early and often, which helps to ensure that only what is needed is tested.  Tighter testing will not only result in fewer delivered defects, but also in less costly testing.

Code Quality:  TDD, ATDD and BDD all encourage developers to write only the code needed to pass the tests.  The test-first techniques embrace the old adage “keep it simple stupid” through the refactoring step. By writing only as much code as is needed to pass the test, the code remains as simple as possible. Simpler code yields higher quality because there are fewer chances to embed defects.

Less Dead Code:  The final and least obvious benefit of TDD is a general reduction in dead code.  When I worked in a development organization, one of the things that drove me crazy was the amount of old, dead code in the programs I interacted with.  Some code was commented out, some was left in “just in case,” and some had simply become unreachable because of logic changes. Refactoring helps remove dead code, making programs more readable and therefore easier to maintain. Writing simple, focused code, just enough to pass the test, makes it less likely that you will create dead code.

TDD, ATDD and BDD all deliver significant value to the organizations that use them.  ATDD and BDD, with their additional focus on acceptance tests, may encourage deeper communication between developers, testers and stakeholders. Regardless of which technique you use, these test-first techniques improve communication and testing, providing teams with the ability to deliver more value and generate higher customer satisfaction.

Logo beer glasses come in all shapes and sizes!

An example of TDD with ATDD and BDD attributes
(or TDD/ATDD/BDD run through a blender just a bit)

I.         The team establishes a backlog of prioritized user stories based on the functional and architectural requirements. Remember that the backlog is dynamic and will evolve over the life of the project.  Story Maps are one method of visualizing the backlog.

II.         The Three Amigos (Product Owner, Lead Developer and Tester or Business Analyst) review the stories and their specifications to be taken into the next sprint.  They make sure that each story has acceptance test cases, adding any missing scenarios or requirements. Here’s an example:

Feature:  Add logo glass to my collection

Story: As a beer logo glass collector, I want to be able to log a logo glass into my collection so I do not recollect the same glass!

Notes: It is important to capture the information needed to identify the glass.  A screen is needed that captures brewery name, glass logo copy, glass type and price.

Logo Glass Entry Screen 

Scenario: Brewery name is a required field.

Given I am on the logo glass entry site

When I add a glass leaving the brewery name blank

Then I should see the error message “The brewery name can’t be blank”

Scenario: Glass logo copy is a required field

Given I am on the logo glass entry site

When I add a glass leaving the glass logo copy blank

Then I should see the error message “The glass logo copy can’t be blank”

Scenario: Glass type is a required field

Given I am on the logo glass entry site

When I add a glass leaving the glass type blank

Then I should see the error message “The glass type can’t be blank”

Scenario: Glass cost is not a required field

Given I am on the logo glass entry site

When I add a glass leaving the glass cost blank

Then I should not see an error message

Scenario: Glass added message should be displayed when glass data entry is complete

Given I am on the logo glass entry site

When I have completed the entry of glass information

Then I should see the message “The glass has been added”

III.         For the stories accepted into the sprint, the tester would ensure the acceptance tests fail (the scenarios above are acceptance tests) and then automate the acceptance test cases (assuming testing is automated, which is a really important concept).  Concurrently the developer would implement the feature in code. The developer and the tester collaborate to test the implemented feature. This is an iterative process that continues until the feature passes the tests.

IV.         Once the feature has passed the tests, any refactoring of the design is done (remember, this refactoring step is the difference between test-first and test-driven development). The tests are retained and incorporated into the regression test suite.

V.         Start over on the next story!