Listen to the Software Process and Measurement Cast 307 (podcast)

Software Process and Measurement Cast number 307 features our essay on integration testing and Agile. Integration testing is defined as testing in which components (software and hardware) are combined to confirm that they interact according to expectations and requirements.  Good integration testing is critical to effective development whether you are using Agile techniques or not.

Links and pictures noted in the essay:

Beer glass logo screen

Application Diagram

We also have a new installment from the Software Sensei. Kim Pries, the Software Sensei, discusses layered process audits and software inspections. These techniques are a powerful approach to delivering high-quality software.

Next

SPaMCAST 308 features our interview with Michael West, author of Return On Process (ROP): Getting Real Performance Results from Process Improvement, and more! We had a great discussion about why some process improvements impact the organization’s bottom line and some don’t. Impacting the bottom line is not an accident.


Upcoming Events

DCG Webinars:

Raise Your Game: Agile Retrospectives September 18, 2014 11:30 EDT
Retrospectives are a tool that the team uses to identify what they can do better. The basic process – making people feel safe and then generating ideas and solutions so that the team can decide on what they think will make the most significant improvement – puts the team in charge of how they work. When teams are responsible for their own work, they will be more committed to delivering what they promise.
Agile Risk Management – It Is Still Important! October 24, 2014 11:30 EDT
Has the adoption of Agile techniques magically erased risk from software projects? Or, have we just changed how we recognize and manage risk?  Or, more frighteningly, by changing the project environment through adopting Agile techniques, have we tricked ourselves into thinking that risk has been abolished?


Upcoming: ITMPI Webinar!

We Are All Biased!  September 16, 2014 11:00 AM – 12:30 PM EST

Register HERE

How we think and form opinions affects our work whether we are project managers, sponsors or stakeholders. In this webinar, we will examine some of the most prevalent workplace biases such as anchor bias, agreement bias and outcome bias. Strategies and tools for avoiding these pitfalls will be provided.

Upcoming Conferences:

I will be presenting at the International Conference on Software Quality and Test Management in San Diego, CA on October 1. I have a great discount code! Contact me if you are interested.

I will be presenting at the North East Quality Council 60th Conference October 21st and 22nd in Springfield, MA.

More on all of these great events in the near future! I look forward to seeing all SPaMCAST readers and listeners who attend these great events!

The Software Process and Measurement Cast has a sponsor.

As many of you know, I do at least one webinar for the IT Metrics and Productivity Institute (ITMPI) every year. The ITMPI provides a great service to the IT profession. ITMPI’s mission is to pull together the expertise and educational efforts of the world’s leading IT thought leaders and to create a single online destination where IT practitioners and executives can meet all of their educational and professional development needs. The ITMPI offers a premium membership that gives members unlimited free access to 400 PDU-accredited webinar recordings and waives the PDU processing fees on all live and recorded webinars. The Software Process and Measurement Cast receives some support if you sign up here. All the revenue our sponsorship generates goes toward bandwidth, hosting and new cool equipment to create more and better content for you. Support the SPaMCAST and learn from the ITMPI.

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques, co-authored by Murali Chemuturi and me and published by J. Ross Publishing. We have received unsolicited reviews like the following: “This book will prove that software projects should not be a tedious process, neither for you or your team.” Support SPaMCAST by buying the book here.

Available in English and Chinese.

Frameworks and mirrors are related.

Hand Drawn Chart Saturday

The simplest explanation of integration testing is that it ensures that functions and components fit together and work. Integration testing is critical to reducing the rework (and the professional embarrassment) that you’ll encounter if the components don’t fit together or if the application does not interact with its environment. A healthy testing ecosystem is required for effective testing regardless of whether you are using Agile or waterfall techniques. As we noted in the essay TMMi: What do I use the model for?, the Testing Maturity Model Integration (TMMi) delivers a framework and a vocabulary that defines the components needed for a healthy test ecosystem. We can use this framework to test whether our approach to integration testing is rigorous. While a formal appraisal using the relevant portion of the model would be needed to understand whether an organization is performing at a specific maturity level, we can look at a few areas that will give a clear understanding of integration test formality. A simple set of questions from the TMMi that I use in an Agile environment to ensure that integration testing is rigorous includes:

  1. Does the organization or project team have a policy for integration testing? All frameworks work best when expectations are spelled out explicitly. Test policies are generally operationalized through test standards that define those expectations.
  2. Is there a defined test strategy? In Agile teams, all relevant testing standards should be incorporated into the project’s definition of done. The definition of done helps the team to plan and to know when any piece of work is complete.
  3. Is there a plan for performing integration testing? Incorporating integration testing into the definition of done enforces that integration testing is planned. The use of TDD (or any variant) that includes integration testing provides explicit evidence of a plan to perform integration testing.
  4. Is integration testing progress monitored? Leveraging daily or continuous builds provides prima facie evidence that integration testing is occurring (and the build proves that the application at least fits together). Incorporating smoke and other forms of tests into the build provides information to explicitly monitor the progress of integration testing (a minimal sketch of this appears after this list). A third basis for monitoring integration progress is the demo. Work that has met the definition of done (which includes integration testing) is presented to the project’s stakeholders.
  5. Are those performing integration tests trained? Integration testing occurs at many levels of development, ranging from component to system, and each level requires specific knowledge and skills. Agile teams share work and activities to maximize the amount of value delivered. Cross-functional Agile teams that include professional testers can leverage the testers as testing consultants to train and coach the entire team to be better integration testers. Teams without access to professional testers should seek coaching to ensure they are trained in how to perform integration testing.
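To make point 4 concrete, here is a minimal sketch of smoke tests that could be wired into a daily or continuous build. The toolchain (Python and pytest) and every module and function name (glass_app, create_app, get_db_connection) are assumptions chosen purely for illustration:

    # smoke_tests.py -- hypothetical post-build smoke checks, run with: pytest -m smoke
    import pytest

    pytestmark = pytest.mark.smoke  # tags every test in this module as a smoke test

    def test_application_starts():
        # Assumed helper that boots the freshly built application in-process.
        from glass_app import create_app  # hypothetical module
        app = create_app()
        assert app is not None

    def test_database_reachable():
        # Touching the application/database boundary makes this an integration-level check.
        from glass_app import get_db_connection  # hypothetical helper
        with get_db_connection() as conn:
            assert conn.ping()

A build that runs these tests, and fails loudly when they fail, gives exactly the explicit monitoring signal the question asks about.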

The goal of integration testing is to make sure that components, functions, applications and systems fit together. We perform integration testing to ensure we deliver the maximum possible business value to our stakeholders. When the parts of the application we are building or changing don’t fit together well, the value we are delivering is reduced. The TMMi can provide a framework for evaluating just how rigorously and effectively you are performing integration testing.

Logo beer glasses application: Integration testing

Integration testing can and should be incorporated easily into a test-driven development (TDD) framework or any TDD variant. Incorporating integration tests into a TDD, BDD or ATDD framework forces the team to take the bigger picture into account.

Incorporating integration testing into a TDD framework requires knowing how functions and components fit together and then adding the required test cases as acceptance criteria. In an example of TDD with ATDD and BDD attributes, we began defining an application for maintaining my beer glass collection. We developed a set of Cucumber-based behavior-driven development test cases. These tests are critical for defining done for the development of the logo glass entry screen. The entry screen is just a single story in a larger application.

A broader view of the application:

(Application diagram)

An example of the TDD test that we wrote for the logo glass entry screen was:

Scenario: Brewery name is a required field.

Given I am on the logo glass entry site

When I add a glass leaving the brewery name blank

Then I should see the error message “The brewery name can’t be blank”

Based on our discussion in Integration Testing and Unit Testing, Different?, this test is a unit test. The test case doesn’t cross a boundary, it answers a single question, and it is designed to provide information to a developer.
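For readers who have not seen how a Cucumber-style scenario becomes executable, here is a minimal sketch of step definitions for the scenario above. Cucumber itself is Ruby-based; this sketch uses Python’s behave framework instead, and the page object (LogoGlassEntryPage) and its methods are hypothetical names invented for illustration:

    # steps/logo_glass_steps.py -- hypothetical behave step definitions
    from behave import given, when, then
    from pages import LogoGlassEntryPage  # hypothetical page object

    @given('I am on the logo glass entry site')
    def step_open_entry_site(context):
        context.page = LogoGlassEntryPage.open()

    @when('I add a glass leaving the brewery name blank')
    def step_add_glass_blank_brewery(context):
        context.page.fill(brewery_name='', glass_name='Pint')
        context.page.submit()

    @then('I should see the error message "{message}"')
    def step_see_error(context, message):
        assert message in context.page.error_messages()

Note that all three steps stay inside the entry screen; nothing crosses a boundary, which is exactly why the test qualifies as a unit test.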

Using the same Cucumber format, two examples of integration tests for the logo glass add function would be:

The first of these examples describes how the integration between the logo glass add function and the brewery database should behave.

Scenario: Brewery name must exist in the brewery database.

Given I am on the logo glass entry site

When I add a glass whose brewery name is not in the brewery database

Then the new brewery screen should be opened and I should see the message “Please add the brewery before proceeding.”

The second case describes a test for the integration between the logo glass entry screen and the logo glass database.

Scenario: When the glass is added and all error conditions are resolved, the glass should be inserted into the database (the glass logical file).

Given I am on the logo glass entry site

When I have completed the entry of glass information

Then the record for the glass should be inserted into the database and I should see the message “The glass has been added.”

In both of these cases a boundary is crossed, more than one unit is being evaluated and, because the test is broader, more than one role will find the results useful. These test cases fit the classic definition of integration tests. Leveraging the Cucumber framework, integration tests can be written as part of the acceptance criteria (acceptance test-driven development). As more functions are developed or changed as part of the project, broader integration tests can be added as acceptance criteria, which provides a basis for continually ensuring that overall integration occurs.
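To make the boundary crossing visible, here is a sketch of how the “Then” step of the second integration scenario might check both sides of the screen/database boundary. As before, this is Python/behave rather than Ruby Cucumber, and glass_database and its find_by_name function are hypothetical:

    # steps/integration_steps.py -- hypothetical integration-level step definition
    from behave import then
    from glass_app import glass_database  # hypothetical database accessor

    @then('the record for the glass should be inserted into the database '
          'and I should see the message "{message}"')
    def step_glass_persisted(context, message):
        # Checks the UI *and* the data store -- more than one unit, one test.
        assert message in context.page.confirmation_messages()
        record = glass_database.find_by_name(context.entered_glass_name)
        assert record is not None

Because the assertion reaches past the screen into the database, a failure here is interesting to testers and developers alike, not just to the developer who wrote the entry screen.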

Incorporating integration test cases into a TDD, BDD or ATDD framework ensures that the development team isn’t just focused on delivering individual stories, but rather on delivering a greater whole that works well and plays well with itself and any other relevant hardware and applications.

USA border crossing

Are unit and integration testing the same thing masquerading under different names? No. Unit testing is a process in which a developer tests a very specific, self-contained function all by itself, whereas we have defined integration testing as testing in which components (software and hardware) are combined to confirm that they interact according to expectations and requirements. Unit testing and integration testing are fundamentally different forms of testing. There are three major differences.

Boundaries: All levels of integration testing cross boundaries between classes, functions or components. The goal is to determine whether the parts of the application fit together and perform as expected. Unit tests, focused on the behavior of a single function, by definition can’t cross boundaries and do not test dependencies outside of the function being tested.

Scope: A unit test is focused on answering a single question: based on the test input, does the specific function perform as expected? An example of a simple unit test might be, “if I put a number in a field that only accepts letters, do I get an error?” Because integration tests reflect the interaction between functions or components, they must answer several questions. For example, in Test Driven Development: An example of TDD with ATDD and BDD attributes, we described a simple beer glass collection tracking application. In the application, the user enters the glass being collected into a screen, and after validation, the database is updated. An integration test would need to be written and performed that tests sending the information from the data entry function to the database. The test would cover several specific points: was the information sent from one module to the other, was it received, and was it inserted into the database correctly?
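The difference in scope is easy to see in code. A minimal sketch in Python, assuming a hypothetical validate_brewery_name function and hypothetical entry_screen and glass_database modules:

    # Unit test: one function, one question, no boundary crossed.
    def test_brewery_name_rejects_numbers():
        from glass_app.validation import validate_brewery_name  # hypothetical
        assert validate_brewery_name("Br3wery") == "Brewery name must contain only letters"

    # Integration test: crosses the entry-screen/database boundary and
    # answers several questions (sent? received? stored correctly?).
    def test_added_glass_is_persisted(tmp_path):
        from glass_app import entry_screen, glass_database  # hypothetical
        db = glass_database.connect(tmp_path / "glasses.db")
        entry_screen.add_glass(db, brewery="4 Hands", glass="Pint")
        record = db.find(brewery="4 Hands", glass="Pint")
        assert record is not None           # was it inserted?
        assert record.brewery == "4 Hands"  # was it stored correctly?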

Role Involvement: Unit testing is part of the coding process. Occasionally I see testers doing a developer’s unit testing – this is a VERY POOR practice. At its very simplest, the coding process can be described as: think, write code, see if it works; if it doesn’t work, go back to thinking. Then repeat, and if it works, commit the code and go to the next requirement. The “see if it works” step is unit testing. Integration testing, in its most granular form, reflects a transition between the coding and validation processes. The transition means that the results need to be more broadly seen and interpreted to ensure that all of the parts being developed or changed in a project fit together. Testers, business analysts, developers and sometimes business stakeholders and product owners can be part of executing, interpreting and consuming integration tests.

Unit testing and integration testing are at times easily confused; this is most true when considering integration tests focused on the connections between functions within a single component. However, if we consider whether boundaries are involved and the number of conditions/questions the test is resolving (which suggests the number of roles that need to understand the results), the distinction becomes fairly stark.

Testing continuously ensures that the pieces fit together like stitches in a blanket.

In Integration Testing Is Core, we defined integration testing as testing in which components (software and hardware) are combined to confirm that they interact according to expectations and requirements; this ensures that the technical architecture decisions are correct. This is one definition of integration testing, but there are others. A second definition covers the tests needed to determine that components communicate or connect at a component-by-component level. Proponents of this second definition apply these types of tests as part of the build process, either before or as part of unit testing. A third definition treats integration testing as end-to-end testing, a form of system testing usually done late in development. Regardless of the variation, integration testing can be accomplished by building it into the cycle of daily builds.

Daily or continuous builds (preferably automated) are an efficient delivery vehicle for integration testing. The basic flow of configuration management that feeds the build begins when a code module is checked out, new code or code changes are written, and the checked-out module is updated. Once the developer has satisfied herself that the code meets its acceptance criteria (read Test Driven Development), it is checked back into the configuration tool, adding to the project’s code base, which is then “built.” The build assembles all the bits and bytes into an integrated code base – an application. This new code base will be used for integration testing. Even if the process stopped at the build, a successful build would at least show that the new code did not break the code base at a basic level. This very basic level of testing is valuable; however, it is not true integration testing. Matching any of the definitions of integration testing requires combining daily or continuous builds with specifically defined test cases to ensure that the code not only fits together like Legos, but works together like an integrated circuit. The integration test cases need to prove that the code fits together, communicates and performs as expected. In Agile projects, these test cases do not spring into existence as a single deliverable; rather, they build over the life of the project, sprint by sprint. Integration testing ensures integration, and provides a form of system testing for the changes that were added in earlier builds.
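As a sketch of how integration tests can ride along with every build, here is a minimal daily-build driver. The commands (make build, the pytest markers) and the script itself are assumptions for illustration; any CI server or build script would follow the same three-step shape:

    # nightly_build.py -- hypothetical daily-build driver
    import subprocess
    import sys

    def run(cmd):
        print(">>", " ".join(cmd))
        return subprocess.run(cmd).returncode

    # 1. Assemble the integrated code base from the checked-in modules.
    if run(["make", "build"]) != 0:
        sys.exit("Build broken: the new code does not even fit together.")

    # 2. Smoke tests: does the assembled application start at all?
    if run(["pytest", "-m", "smoke"]) != 0:
        sys.exit("Smoke tests failed.")

    # 3. Integration tests accumulated sprint by sprint: do the parts
    #    communicate and perform as expected?
    sys.exit(run(["pytest", "-m", "integration"]))

The third step is where the sprint-by-sprint test cases described above accumulate, so each night’s build re-verifies the integrations delivered in earlier sprints.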

As code is written and added to the build, the integration tests need to be ready. Therefore, as stories are formed, integration test cases need to be built. Successful execution of integration testing must be a condition of done. When integration testing is built into a team’s natural flow and executed over the whole project life cycle, it satisfies all three definitions.