How can this team really work together?

I recently got a question from a long-time reader and listener.  I have removed the name to ensure confidentiality.

Context:

  • The person who asked the question is an experienced Agile leader.
  • The team is not made up of technically equal full-stack developers; some developers work on UI stories and others work on backend stories.
  • The team has 8-10 people.

The Problem:

  • During story grooming/sizing, the entire team does not participate equally in offering up their story points. UI developers participate on UI stories and are reluctant to chime in on backend work, and vice versa.

The Question:

  • Scrum seeks to involve the entire team.  How can I get everyone involved (or should I)? 

Measuring TDD is a lot like measuring a cyclone!

Teams and organizations adopt test-driven development for many reasons, including improving software design, functional quality, or time to market, or because everyone else is doing it (well, maybe not that last reason…yet).  In order to justify the investment in time, effort, and even the cash for consultants and coaches, most organizations want some form of proof that there is a return on investment (ROI) from leveraging TDD.

The measurement issue is less that something needs to be measured (I am ignoring the “you can’t measure software development” crowd), but rather what constitutes an impact and therefore what really should be measured. Erik van Veenendaal, an internationally recognized testing expert, stated in an interview that will be published on SPaMCAST 406, “unless you spend the time to link your measurement or change program to business needs, they will be short-lived.”  Just adopting someone else’s best practices in measurement tends to be counterproductive because every organization has different goals and needs.  This means organizations will adopt TDD for different reasons and will need different evidence to assure themselves that they are getting a benefit.

There is NO single measure or metric that proves you are getting the benefit you need from TDD.  That is not to say that TDD can’t or should not be measured; rather, a palette of measures is commonly used, each chosen based on the generic goal it addresses.

Just Say No!

Over and over I find that teams using Test-Driven Development get serious results, including improved quality and faster delivery.  However, not everything is light, kittens, and puppies, or everyone would be doing test-first development or one of its variants (TDD, ATDD or BDD).   The costs and organizational impacts can lead organizations into bad behaviors. We explored those costs and behavioral impacts (cons) in earlier articles on TFD and TDD.

Testing is about predicting the future!

Test-first development is an old concept that was rediscovered and documented by Kent Beck in Extreme Programming Explained (Chapter 13 in the Second Edition).  Test-first development (TFD) is an approach to development in which developers do not write a single line of code until they have created the test cases needed to prove that the unit of work solves the business problem and is technically correct at a unit-test level. In a response to a question on Quora, Beck described reading about developers using a test-first approach well before XP and Agile. Test-driven development (TDD) is test-first development combined with design and code refactoring.  Both are useful for improving quality, morale, and trust, and even though the two are related, they are not the same.
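The test-first cycle can be sketched in a few lines of code. This is a minimal illustration, not taken from Beck's book: the `fizzbuzz` function and its tests are invented for the example. Conceptually, the tests exist first; run against an empty implementation they fail ("red"), then production code is added until they pass ("green"), and only then is the code refactored.

```python
# A minimal test-first sketch. The tests below are written before the
# production code; fizzbuzz() is an illustrative example, not from the article.

def fizzbuzz(n):
    # Production code written only after the tests were defined.
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

# The tests: run first against a stub they fail ("red"); code is then
# added until they all pass ("green"); refactoring follows with the
# tests acting as a safety net.
def test_multiple_of_three():
    assert fizzbuzz(9) == "Fizz"

def test_multiple_of_five():
    assert fizzbuzz(10) == "Buzz"

def test_multiple_of_both():
    assert fizzbuzz(15) == "FizzBuzz"

def test_other_numbers():
    assert fizzbuzz(7) == "7"
```

The refactoring step is what separates TDD from plain TFD: with passing tests in place, the design can be improved without fear of silently breaking behavior.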

In goes the money and out comes the soda? It is a test!

Acceptance testing is a necessity when developing any product. My brother, the homebuilder, includes acceptance testing throughout the building process. His process includes planned and unplanned walkthroughs and backlog reviews with his clients as the house is built. He has even developed checklists for clients who have never had a custom home built. The process culminates with a final walkthrough to ensure the homeowner is happy. The process of user acceptance testing in Agile development has many similarities, including participation by users, building UAT into how teams and teams-of-teams work, and testing user acceptance throughout the product development life cycle.

Acceptance testing is a type of black-box testing. The tester knows the inputs and has an expected result in mind, but the window into how the input is transformed is opaque. An example of a black-box test for a soda machine would be putting money into the machine, pressing the selection button and getting the correct frosty beverage. The tester does not need to be aware of all the steps between hitting the selector and receiving the drink. The story-level Agile user acceptance test (AUAT) can be incorporated into the day-to-day activity of an Agile team. Incorporating AUAT activities includes:

  1. Adding the requirement for the development of acceptance tests into the definition of ready to develop. (This will be bookended by the definition of done)
  2. Ensuring that the product owner or a well-regarded subject matter expert for the business participate in defining the acceptance criteria for stories and features.
  3. Reviewing acceptance criteria as part of the story grooming process.
  4. Using Acceptance Test Driven Development (ATDD) or other Test First Development methods. ATDD builds collaboration between the developers, testers and the business into the process by writing acceptance tests before developers begin coding.
  5. Incorporating the satisfaction of the acceptance criteria into the definition of done.
  6. Leveraging the classic Agile demo, led by the product owner for stakeholders at the end of each sprint. Completed (done) stories are demonstrated, and stakeholders interact with them to make sure their needs are being addressed and to solicit feedback.
  7. Performing a final AUAT step using a soft roll-out or first-use technique to collect final user feedback in a truly production environment. One of the most common problems all tests share is that they are executed in an environment that only closely mirrors production. The word “closely” is generally the issue; until the code is run in a true production environment, what exactly will happen is unknown. The concept of first-use feedback borders on one of the more problematic approaches, that of throwing code over the wall and testing in production. This should never be the first time acceptance, integration or performance is tested, but rather should be treated as a mechanism to broaden the pool of feedback available to the team.
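The black-box soda machine example above can be sketched as an acceptance test. The `SodaMachine` class, its price, and its methods are illustrative assumptions invented for this sketch; the point is that the tests only exercise inputs (coins, a selection) and outputs (the beverage), never the internal mechanism.

```python
# A black-box acceptance test sketch for the soda machine example.
# SodaMachine is a hypothetical system under test; the tests check
# inputs and outputs only, never the internal steps.
class SodaMachine:
    PRICE = 150  # price in cents; an illustrative assumption

    def __init__(self):
        self.inserted = 0

    def insert_coins(self, cents):
        self.inserted += cents

    def press(self, selection):
        # Dispense only when enough money has been inserted.
        if self.inserted >= self.PRICE:
            self.inserted -= self.PRICE
            return selection  # the frosty beverage
        return None  # not enough money; nothing dispensed

def test_correct_beverage_dispensed():
    machine = SodaMachine()
    machine.insert_coins(150)
    assert machine.press("cola") == "cola"

def test_no_beverage_without_enough_money():
    machine = SodaMachine()
    machine.insert_coins(50)
    assert machine.press("cola") is None
```

In an ATDD workflow, tests like these would be agreed upon with the product owner before coding begins, so they double as the acceptance criteria for the story.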

In a scaled Agile project, acceptance testing at the story level is a step in a larger process of planning and action. This process typically starts by developing acceptance criteria for features and epics, which are then groomed and decomposed into stories.  Once the stories are developed and combined, a final acceptance test at the system or application level is needed to ensure that what has been developed works as a whole package and meets the users’ needs.

Are there other techniques that you use to implement AUAT at the team level?

In the next blog entry we will address ideas for scaling AUAT.

How do you weigh costs and benefits?

Introducing any process improvement requires weighing the costs and benefits.  Most major improvements come with carefully crafted cost/benefit analyses. Where the benefits outweigh the costs, changes are implemented.  The goal is to decide where to expend precious process improvement capital so that it has the most benefit.  Many of the benefits and criticisms of TDD and other test-first techniques can be quantified, but others are less tangible.  In many cases the final decision rests on personal biases, beliefs, and past experiences. In the end, your organization and environment may yield different results; there is no perfect ROI model.

In a perfect world, we would pilot TDD with a typical team to determine the impact. For example, will the cost of change be higher than anticipated (or lower), or will the improvement in quality be worth the effort to learn and implement?  In many cases pilot teams for important process improvements are staffed with the best and the brightest.  Unfortunately, these are the people who will deliver results regardless of barriers.  The results are rarely extensible because the capabilities of the best and brightest aren’t generally the same as those of a normal team. Another typical mistake many organizations make is to generalize from the results of a team that embraced TDD (or any other change) on its own.  Expecting the results of teams that self-select to extend to the larger organization is problematic; those that self-select change are more apt to work harder to make TDD work.

In a perfect world we would be able to test our assumptions about the impact of TDD in our organization based on its impact on typical groups. However, sometimes we have to rely on the results of the best team or the results of true believers.  Either case is better than having to rely solely on studies from outside the organization.  Just remember to temper your interpretation of the data when you are weighing the costs and benefits.

A 10 mile run requires effort but that does not mean it should be avoided.

Implementing and using Test-Driven Development (TDD) and other related test-first techniques can be difficult, and there are a significant number of criticisms of these methods.  These criticisms fall into two camps: effort-related criticisms and claims that test-first techniques are not full testing.

Effort-related criticisms reflect the fact that all test-first techniques require an investment in time and effort.  The effort to embrace TDD, ATDD or BDD begins when teams learn the required philosophies and techniques.  Learning anything new requires an investment.  The question to ask is whether the investment in learning these techniques will pay off in a reasonable period of time.  While I do not think I am the perfect yardstick (I have both a development and a testing background), I learned enough of the BDD technique to be dangerous in less than a day.  The criticism is true, but I believe the impact is overstated.

The second effort-related criticism is that early in a development project developers might be required to create stubs (components that emulate parts of a system that have not been developed) or test harnesses (code that holds created components together before the whole system exists). However, stubs and harnesses can generally be reused and enhanced as the system is built.  I have found that creating and keeping a decent library of stubs and harnesses generates good discussion of interfaces and reduces the number of “I can’t test that until integration” excuses.  Again, the criticism is true but overstated.
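A stub of the kind described above can be quite small. In this sketch, all names (`PaymentGatewayStub`, `OrderService`) are invented for illustration: the stub stands in for a payment service that has not been built yet, letting the order logic be tested in isolation and forcing an early conversation about the interface between the two components.

```python
# A sketch of a stub: PaymentGatewayStub emulates a payment service that
# does not exist yet, so OrderService can be tested before integration.
# All names here are illustrative, not from the article.
class PaymentGatewayStub:
    """Stands in for the real payment gateway during early testing."""

    def __init__(self, succeed=True):
        self.succeed = succeed
        self.charges = []  # record calls so tests can inspect them

    def charge(self, amount):
        self.charges.append(amount)
        return self.succeed

class OrderService:
    def __init__(self, gateway):
        self.gateway = gateway  # real gateway later; stub for now

    def place_order(self, amount):
        if self.gateway.charge(amount):
            return "confirmed"
        return "payment failed"

def test_order_confirmed_when_payment_succeeds():
    service = OrderService(PaymentGatewayStub(succeed=True))
    assert service.place_order(25.00) == "confirmed"

def test_order_rejected_when_payment_fails():
    service = OrderService(PaymentGatewayStub(succeed=False))
    assert service.place_order(25.00) == "payment failed"
```

Because the stub records the calls it receives, it also doubles as a cheap way to verify the interface contract, which is exactly the discussion a stub library tends to provoke.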

The third effort-related criticism is that in order to do TDD effectively you need automated testing (implied in this criticism are the effort, time and cost of test automation).  I have seen both TDD and ATDD done without automation . . . my brother also had a tooth pulled without anesthetic; I recommend neither.  Test automation is important to making TDD efficient.  Writing test tools, learning the tools and writing test scripts does take effort.  Again the criticism is true, but test automation makes sense (and requires someone to learn it) even if you are not doing TDD.

The final effort-related criticism is that TDD, ATDD and BDD are hard to learn.  I respectfully disagree.  TDD, ATDD and BDD are different from the concepts many team members have been exposed to earlier in their careers, but just because they are different does not mean they are difficult to learn.  This criticism is most likely a reflection of a fear of change. Change is hard, especially if a team is already successful.  I would suggest implementing TDD once your team or organization has become comfortable with Agile and has begun to implement test automation, which makes “learning” TDD easier.

A second class of criticism of TDD, ATDD or BDD is that these techniques are not full testing. Organizations that decide that test-first techniques should replace all types of testing are generally in for a rude awakening. Security testing is just one example of overall testing that will still be required. TDD, ATDD or BDD are development methods that can be used to replace some testing, but not all types of testing.

Also in the full-testing category of criticisms: TDD does not help teams learn good testing. I agree that just writing and executing tests doesn’t teach testing.  What does lead people to learn to test is the discussion of how to test, combined with gaining experience. Agile teams are built on interaction and collaboration, which provides a platform for that growth.  Neither test-first nor test-last (waiting until coding is done) by itself leads a team to good testing, so this criticism is true but not limited to the use of TDD.

Embracing TDD, ATDD or BDD will require effort to learn and implement. If you can’t afford the time and effort, wait until you can.  Embracing any of the test-first techniques will require that teams change their behavior. If you can’t spend the effort on organizational change management, wait until you can. Test automation is important for efficient TDD.  If you can’t buy or write testing tools, I would still experiment with test-first, but recognize that sooner or later you will need to bite the bullet. Finally, TDD is generally not sufficient to replace all testing.  The criticisms are mostly true, but they are not sufficient to overwhelm the benefits of using these techniques.