From Story Points to Test Driven Development – A Dev Team Story

By Rebecca Schira

I’ve been supporting agile teams for almost six years. What is the one common issue I’ve seen across many teams and two companies during that time? Everyone loves story points, or at least proclaims to love them. But why? We’ve all heard the common answers: they’re easier to understand, they’re a good way to do a gut check, and so on. Without fail, each and every one of these teams sucked at estimating how much work they could accomplish. (more…)

Listen Now
Subscribe on iTunes
Check out the podcast on Google Play Music

The Software Process and Measurement Cast 407 includes four separate columns.  We begin with a short essay refreshing the pros and cons of Test Driven Development. Test Driven Development promises a lot of benefits but all is not light, kittens and puppies. Still, TDD is well worth doing if you go into it with your eyes open.

Our second column features Kim Pries, the Software Sensei.  Kim discusses what makes software “good.” The Software Sensei puts the “good” in quotes because it is actually a difficult word to define but Kim is willing to give the discussion a go!

In our third column, we return to Tame The Flow: Hyper-Productive Knowledge-Work Performance, The TameFlow Approach and Its Application to Scrum and Kanban, published by J. Ross Publishing (buy a copy here). We tackle Chapter 10, which is titled The Thinking Processes. Thinking processes are key to effectively using Agile, Lean and Kanban processes.

Gene Hughson anchors the cast with an entry from his Form Follows Function Blog. In this installment, we discuss the blog entry titled “Learning to Deal with the Inevitable.” Gene and I discuss change, which is inevitable, and innovation, which is not quite as inevitable.

Re-Read Saturday News

This week we continue our re-read of Kent Beck’s XP Explained, Second Edition with a discussion of Chapters 16 and 17.   Chapter 16 ends Section One with an interview with Brad Jensen.  Section Two addresses the philosophies of XP.  Chapter 17 tells the creation story of XP from Beck’s point of view.

We are going to read The Five Dysfunctions of a Team by Patrick Lencioni (published by Jossey-Bass). This will be a new book for me, and therefore an initial read (I have not read this book yet), not a re-read! Steven Adams suggested the book, and it has been on my list for a few years! Click the link (The Five Dysfunctions of a Team), buy a copy, and in a few weeks we will begin to read the book together.

Use the link to XP Explained in the show notes when you buy your copy to read along to support both the blog and podcast. Visit the Software Process and Measurement Blog to catch up on past installments of Re-Read Saturday.


In the next Software Process and Measurement Cast, we will feature our interview with Kupe Kupersmith. Kupe brings his refreshing take on the role of the business analyst in today’s dynamic environment.  This interview was informative, provocative and entertaining.     

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques, co-authored by Murali Chemuturi and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: “This book will prove that software projects should not be a tedious process, for you or your team.” Support SPaMCAST by buying the book here. Available in English and Chinese.

Measuring TDD is a lot like measuring a cyclone!


Teams and organizations adopt test-driven development for many reasons, including improving software design, functional quality, or time to market, or because everyone is doing it (well, maybe not that last reason…yet). In order to justify the investment in time, effort and even the cash for consultants and coaches, most organizations want some form of proof that there is a return on investment (ROI) from leveraging TDD. The measurement issue is less that something needs to be measured (I am ignoring the “you can’t measure software development” crowd), but rather what constitutes an impact and therefore what really should be measured. Erik van Veenendaal, an internationally recognized testing expert, stated in an interview that will be published on SPaMCAST 406, “unless you spend the time to link your measurement or change program to business needs, they will be short-lived.” Just adopting someone else’s best practices in measurement tends to be counterproductive because every organization has different goals and needs. This means organizations will adopt TDD for different reasons and will need different evidence to assure themselves that they are getting a benefit. There is NO single measure or metric that proves you are getting the benefit you need from TDD. That is not to say that TDD can’t or should not be measured. A palette of measures that are commonly used, organized by the generic goal they address: (more…)

TDD is a life jacket for Quality.


Test Driven Development promises serious results for teams, organizations, and customers, including improved quality and faster delivery. The promises of TDD, or any of the other variants of test-first development techniques, are at least partially offset by costs and potentially tragic side effects when it is done poorly.

The positive impacts explored in earlier articles on TFD and TDD include: (more…)

Testing is about predicting the future!


Test-first development is an old concept that was rediscovered and documented by Kent Beck in Extreme Programming Explained (Chapter 13 in the Second Edition). Test-first development (TFD) is an approach to development in which developers do not write a single line of code until they have created the test cases needed to prove that a unit of work solves the business problem and is technically correct at a unit-test level. In a response to a question on Quora, Beck described reading about developers using a test-first approach well before XP and Agile. Test-driven development is test-first development combined with design and code refactoring. Both test-first and test-driven development are useful for improving quality, morale and trust, and even though the two are related, they are not the same. (more…)

Listen to the Software Process and Measurement Cast 295!

SPaMCAST 295 features our essay on Test Driven Development (TDD). TDD is an approach to development in which you write a test that proves the piece of work you are working on, and then write the code required to pass the test. You then refactor that code to eliminate duplication and any overlap, and repeat until all of the work is completed. Philosophically, Agile practitioners see TDD as a tool either to improve requirements and design (specification) or to improve the quality of the code. This is similar to the distinction between validation (are you doing the right thing) and verification (are you doing the thing right).
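The test-write-refactor cycle described above can be sketched in a few lines. This is a minimal illustration in Python, not from any of the projects discussed; the `apply_discount` function and its test are hypothetical, invented purely to show the red-green-refactor rhythm:

```python
# Step 1 (red): write the test first. Running it at this point fails
# with a NameError because apply_discount does not exist yet.
def test_ten_percent_discount():
    assert abs(apply_discount(100.0, 0.10) - 90.0) < 1e-9

# Step 2 (green): write just enough code to make the test pass.
def apply_discount(price, rate):
    return price * (1 - rate)

# Step 3 (refactor): remove duplication and overlap, re-run the test,
# then repeat the cycle for the next small piece of work.
test_ten_percent_discount()
```

The point of the cycle is the discipline, not the code: each pass adds one failing test, the minimum code to pass it, and a cleanup step before moving on.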

We also have a new entry from the Software Sensei, Kim Pries. Kim addresses cognitive load theory. Cognitive load theory helps explain how learning and change occur at the individual, team and organizational levels.

Next week we will feature our interview with Jeff Dalton. Jeff and I talked about making Agile resilient.  Jeff posits that the CMMI can be used to strengthen and reinforce Agile. This is an important interview for organizations that are considering scaled Agile frameworks.

Upcoming Events

Upcoming DCG Webinars:

July 24 11:30 EDT – The Impact of Cognitive Bias On Teams

Check these out at

I will be attending Agile 2014 in Orlando, July 28 through August 1, 2014.  It would be great to get together with SPaMCAST listeners, let me know if you are attending.

I will be presenting at the International Conference on Software Quality and Test Management in San Diego, CA on October 1

I will be presenting at the North East Quality Council 60th Conference October 21st and 22nd in Springfield, MA.

More on all of these great events in the near future! I look forward to seeing all SPaMCAST readers and listeners that attend these great events!

The Software Process and Measurement Cast has a sponsor.

As many of you know, I do at least one webinar for the IT Metrics and Productivity Institute (ITMPI) every year. The ITMPI provides a great service to the IT profession. ITMPI’s mission is to pull together the expertise and educational efforts of the world’s leading IT thought leaders and to create a single online destination where IT practitioners and executives can meet all of their educational and professional development needs. The ITMPI offers a premium membership that gives members unlimited free access to 400 PDU accredited webinar recordings, and waives the PDU processing fees on all live and recorded webinars. The Software Process and Measurement Cast receives some support if you sign up here. All the revenue our sponsorship generates goes for bandwidth, hosting and new cool equipment to create more and better content for you. Support the SPaMCAST and learn from the ITMPI.

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques, co-authored by Murali Chemuturi and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: “This book will prove that software projects should not be a tedious process, neither for you nor your team.” Support SPaMCAST by buying the book here.

Available in English and Chinese.

Logo beer glasses application: Integration testing


Integration testing can and should be incorporated easily into a test-driven development (TDD) framework or any TDD variant. Incorporating integration tests into a TDD, BDD or ATDD framework takes the bigger picture into account.

Incorporating integration testing into a TDD framework requires knowing how functions and components fit together and then adding the required test cases as acceptance criteria. In an example of TDD with ATDD and BDD attributes, we began defining an application for maintaining my beer glass collection. We developed a set of Cucumber-based behavior-driven development test cases. These tests are critical for defining done for the development of the logo glass entry screen. The entry screen is just a single story in a larger application.

A broader view of the application:


An example of the TDD test that we wrote for the logo glass entry screen was:

Scenario: Brewery name is a required field.

Given I am on the logo glass entry site

When I add a glass leaving the brewery name blank

Then I should see the error message “The brewery name can’t be blank”

Based on our discussion in Integration Testing and Unit Testing, Different?, this test is a unit test. The test case doesn’t cross a boundary, it answers a single question, and it is designed to provide information to a developer.
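To make the "unit test" classification concrete, here is one way that scenario might map onto plain code. This is a hypothetical sketch; the `add_glass` function and its signature are invented for illustration and are not part of the actual application:

```python
# Hypothetical validation logic behind the logo glass entry screen.
def add_glass(brewery_name, glass_name):
    """Return an error message if a required field is blank, else None."""
    if not brewery_name.strip():
        return "The brewery name can't be blank"
    return None  # glass accepted

# Unit test: no boundary is crossed, a single question is answered,
# and the result is aimed at a developer.
def test_brewery_name_required():
    assert add_glass("", "Pint") == "The brewery name can't be blank"
    assert add_glass("Founders", "Pint") is None

test_brewery_name_required()
```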

Using the same Cucumber format, two examples of integration tests for the logo glass add function would be:

The first of these examples describes how the integration between the logo glass add function and the brewery database should act.

Scenario: Brewery name is a required field.

Given I am on the logo glass entry site

When I add a glass and the brewery name entered is not in the brewery database

Then the new brewery screen should be opened and the message “Please add the brewery before proceeding.” should be displayed

The second case describes a test for the integration between the logo glass entry screen and the logo glass database.

Scenario: When the glass is added and all error conditions are resolved, the glass should be inserted into the database (the glass logical file).

Given I am on the logo glass entry site

When I have completed the entry of glass information

Then the record for the glass should be inserted into the database and I should see the message “The glass has been added.”

In both of these cases a boundary is crossed, more than one unit is being evaluated and, because the test is broader, more than one role will find the results useful. These test cases fit the classic definition of integration tests. Leveraging the Cucumber framework, integration tests can be written as part of the acceptance criteria (acceptance test-driven development). As more functions are developed or changed as part of the project, broader integration tests can be added as acceptance criteria, which provides a basis for continually ensuring overall integration occurs.
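As a sketch of what “crossing a boundary” looks like in executable form, the second scenario might be realized against a real database (here an in-memory SQLite database stands in for the glass logical file). All names are illustrative assumptions, not the actual application:

```python
import sqlite3

def insert_glass(conn, brewery, glass):
    """Insert a validated glass record and return the confirmation message."""
    conn.execute(
        "INSERT INTO glasses (brewery, glass) VALUES (?, ?)", (brewery, glass)
    )
    conn.commit()
    return "The glass has been added."

# Integration test: the code under test and the test itself cross the
# application/database boundary, so more than one unit is exercised.
def test_glass_is_inserted():
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE glasses (brewery TEXT, glass TEXT)")
    message = insert_glass(conn, "Founders", "Tulip")
    rows = conn.execute("SELECT brewery, glass FROM glasses").fetchall()
    assert rows == [("Founders", "Tulip")]
    assert message == "The glass has been added."

test_glass_is_inserted()
```

Because the assertion checks what actually landed in the database rather than what one function returned, a failure here is meaningful to testers and analysts as well as to the developer.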

Incorporating integration test cases into a TDD, BDD or ATDD framework ensures that the development team isn’t just focused on delivering individual stories, but rather on delivering a greater whole that works and plays well with itself and with any other relevant hardware and applications.

How do you weight costs and benefits?


Introducing any process improvement requires weighing the costs and benefits. Most major improvements come with carefully crafted cost/benefit analyses. Where the benefits outweigh the costs, changes are implemented. The goal is to decide where to expend precious process improvement capital so that it has the most benefit. Many of the benefits and criticisms of TDD and other test-first techniques can be quantified, but others are less tangible. In many cases the final decision rests on personal biases, beliefs and past experiences. In the end, your organization and environment may yield different results; there is no perfect ROI model.

In a perfect world, we would pilot TDD with a typical team to determine the impact. For example, will the cost of change be higher than anticipated (or lower), and will the improvement in quality be worth the effort to learn and implement? In many cases pilot teams for important process improvements are staffed with the best and the brightest. Unfortunately, these are the people that will deliver results regardless of barriers. The results are rarely perfectly extensible because the capabilities of the best and brightest aren’t generally the same as those of a normal team. Another typical mistake many organizations make is to reflect on the results of a team that embraces TDD (or any other change) on its own. Expecting the results of a team that self-selects to extend to the larger organization is problematic. Those that self-select change are more apt to work harder to make TDD work.

In a perfect world we would be able to test our assumptions about the impact of TDD in our organization based on its impact on typical groups. However, sometimes we have to rely on the results of the best team or the results of true believers. Either case is better than having to rely solely on studies from outside the organization. Just remember to temper your interpretation of the data when you are weighing costs and benefits.

A 10 mile run requires effort but that does not mean it should be avoided.


Implementing and using Test Driven Development (TDD) and other related test-first techniques can be difficult. There are a significant number of criticisms of these methods. These criticisms fall into two camps: effort-related criticisms and the criticism that test-first is not full testing.

Effort-related criticisms reflect that all test-first techniques require an investment in time and effort. The effort to embrace TDD, ATDD or BDD begins when teams learn the required philosophies and techniques. Learning anything new requires an investment. The question to ask is whether the investment in learning these techniques will pay off in a reasonable period of time. While I do not think I am the perfect yardstick (I have both a development and testing background), I learned enough of the BDD technique to be dangerous in less than a day. The criticism is true, but I believe the impact is overstated.

The second effort-related criticism is that early in a development project developers might be required to create stubs (components that emulate parts of a system that have not been developed) or testing harnesses (code that holds created components together before the whole system exists). However, stubs and harnesses can generally be reused for testing as the system is built and enhanced. I have found that creating and keeping a decent library of stubs and harnesses generates good discussion of interfaces and reduces the number of “I can’t test that until integration” excuses. Again, true but overstated.
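To make the stub idea concrete, here is a minimal sketch: a stub standing in for a brewery-lookup service that has not been built yet. Every name here is hypothetical, invented to show the pattern rather than taken from a real system:

```python
# The real brewery service doesn't exist yet, so we stub the one part of
# its interface the glass-entry code depends on.
class BreweryServiceStub:
    def __init__(self, known_breweries):
        self.known = set(known_breweries)

    def exists(self, name):
        return name in self.known

def can_add_glass(brewery_name, brewery_service):
    """Entry-screen logic under test; it depends only on the interface."""
    return brewery_service.exists(brewery_name)

# The stub lets the entry logic be tested long before integration,
# and the stub's interface doubles as a discussion point for the team.
stub = BreweryServiceStub(["Founders", "Bell's"])
assert can_add_glass("Founders", stub) is True
assert can_add_glass("Anchor", stub) is False
```

Note that agreeing on `exists` as the interface is exactly the kind of early interface discussion the paragraph above describes; the stub is reusable every time that interface is exercised.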

The third effort-related criticism is that in order to do TDD effectively you need automated testing (implied in this criticism is the effort, time and cost of test automation). I have seen both TDD and ATDD done without automation… my brother also had a tooth pulled without anesthetic; I recommend neither. Test automation is important to making TDD efficient. Writing test tools, learning the tools and writing test scripts does take effort. Again, the criticism is true; however, test automation makes sense (and requires someone to learn it) even if you are not doing TDD.

The final effort-related criticism is that TDD, ATDD and BDD are hard to learn. I respectfully disagree. TDD, ATDD and BDD are different concepts than many team members have been exposed to before in their careers. Just because they are different does not mean they are difficult to learn. This criticism is most likely a reflection of a fear of change. Change is hard, especially if a team is successful. I would suggest implementing TDD once your team or organization has become comfortable with Agile and has begun to implement test automation, which makes “learning” TDD easier.

A second class of criticism of TDD, ATDD and BDD is that these techniques are not full testing. Organizations that decide that test-first techniques should replace all types of testing are generally in for a rude awakening. Security testing is just one example of overall testing that will still be required. TDD, ATDD and BDD are development methods that can be used to replace some testing, but not all types of testing.

Also in the full-testing category of criticisms: TDD does not help teams learn good testing. I agree that just writing and executing tests doesn’t teach testing. What does lead people to learn to test is the discussion of how to test, plus gaining experience. Agile teams are built on interaction and collaboration, which provides a platform for growth. Neither test-first nor test-last (waiting until coding is done) by itself leads a team to good testing. Therefore this criticism is true but not constrained to the use of TDD.

Embracing TDD, ATDD or BDD will require effort to learn and implement. If you can’t afford the time and effort, wait until you can. Embracing any of the test-first techniques will require that teams change their behavior. If you can’t spend the effort on organizational change management, wait until you can. Test automation is important for efficient TDD. If you can’t buy or write testing tools, I would still experiment with test-first, but recognize that sooner or later you will need to bite the bullet. Finally, TDD is generally not sufficient to replace all testing. The criticisms are mostly true, but they are not sufficient to overwhelm the benefits of using these techniques.

KISS: Keep it Simple Stupid!


Test Driven Development (TDD) is an approach to development in which you write a test that proves the piece of work you are working on, write the code required to pass the test, and then update the design based on what you have learned. The short cycle of TDD, Acceptance Test Driven Development (ATDD) or even Behavior Driven Development (BDD) delivers a variety of benefits to the organizations that practice these techniques. Some of the benefits are obvious, while others are less apparent.

Fewer Delivered Defects: This is probably the most obvious benefit of using any of the test-first techniques. Testing finds defects, timelier testing is more apt to find defects before they get delivered, and better testing tends to find more defects. The benefit of fewer defects is customers that swear less often.

Communication: This is the second most obvious benefit. The process of developing tests creates a platform to ensure that the product owner, business analyst or stakeholders communicate the meaning of a user story before the first line of code is written. The process of developing the tests requires conversation, which is at the heart of Agile techniques. Better communication yields software better targeted to the organization’s needs.

Better Tests: This benefit is related to delivering fewer defects, but is far less obvious. Tests targeted at proving the functionality being delivered or changed will be significantly more focused and lean. Whether leveraging the XP technique of pair programming or the Three Amigos technique, TDD (or any of the test-first techniques) ensures that the tests are reviewed early and often, which helps to ensure only what is needed is tested. Tighter testing will not only result in fewer delivered defects, but also less costly testing due to targeting.

Code Quality: TDD, ATDD and BDD all encourage developers to write only the code needed to pass the tests. The test-first techniques embrace the old adage “keep it simple stupid” through the refactoring step. By only writing as much code as is needed to pass the test, the code will remain as simple as possible. This yields higher code quality because there are fewer chances to embed defects.

Less Dead Code: The final and least obvious benefit of TDD is a general reduction in dead code. When I worked in a development organization, one of the things that drove me crazy was the amount of old dead code in the programs I interacted with. Some code was commented out or left in “just in case,” and some code had simply become unexecutable because of logic changes. Refactoring helps remove dead code, making programs more readable and therefore easier to maintain. Writing simple, focused code, just enough to pass the test, makes it less likely that you will create dead code.

TDD, ATDD and BDD all deliver significant value to organizations that use these techniques. ATDD and BDD, with their additional focus on acceptance tests, may encourage deeper communication between developers, testers and stakeholders. Regardless of which technique you use, these test-first techniques improve communication and testing, providing teams with the ability to deliver more value and generate higher customer satisfaction.