TMMi



The Software Process and Measurement Cast 406 features our interview with Erik van Veenendaal.  We discussed Agile testing, risk and testing, the Test Maturity Model Integration (TMMi), and why in an Agile world quality and testing still matter.

Erik van Veenendaal (www.erikvanveenendaal.nl) is a leading international consultant and trainer, and a recognized expert in the area of software testing and requirements engineering. He is the author of a number of books and papers within the profession, one of the core developers of the TMap testing methodology, and a participant in working parties of the International Requirements Engineering Board (IREB). He is one of the founding members of the TMMi Foundation, the lead developer of the TMMi model, and currently a member of the TMMi executive committee. Erik is a frequent keynote and tutorial speaker at international testing and quality conferences. For his major contribution to the field of testing, Erik received the European Testing Excellence Award (2007) and the ISTQB International Testing Excellence Award (2015). You can follow Erik on Twitter via @ErikvVeenendaal.

Re-Read Saturday News

This week we continue our re-read of Kent Beck’s XP Explained, Second Edition with a discussion of Chapters 14 and 15, in which we dive into design and scaling. These chapters address two critical and controversial topics that XP profoundly rethought.

I am still collecting thoughts on what to tackle next: another re-read or a new read? Thoughts?

Use the link to XP Explained in the show notes when you buy your copy to read along; your purchase supports both the blog and podcast. Visit the Software Process and Measurement Blog (www.tcagley.wordpress.com) to catch up on past installments of Re-Read Saturday.

SPaMCAST 293 features our essay on the Test Maturity Model Integration (TMMi). The TMMi is a maturity model focused on improving both the process and practice of testing! The TMMi covers the entire testing environment, not just typical dynamic testing. The essay begins:

“All models are wrong, but some are useful.”  – George E. P. Box

Information Technology (IT) has many useful models for addressing the complexity of developing, delivering and running software.  Well known models include the Capability Maturity Model Integration (CMMI®), the Information Technology Infrastructure Library (ITIL®) and the Test Maturity Model Integration (TMMi®) to name a few. The TMMi delivers a framework to help practitioners and IT executives understand and improve the quality of the products they deliver through better testing.
To listen to the rest of the essay, check out the Software Process and Measurement Cast 293.

Thanks for the feedback on shortening the introduction of the cast this week. Please keep your feedback coming. Get in touch with us anytime or leave a comment here on the blog. Help support the SPaMCAST by reviewing and rating it on iTunes. It helps people find the cast. Like us on Facebook while you’re at it.
Next week we will feature our interview with Sean Robson. We discussed his book, Agile SAP: Introducing flexibility, transparency and speed to SAP implementations. SAP and Agile: some say it can’t be done, and they would just be wrong.

Upcoming Events
Upcoming DCG Webinars:
June 19 11:30 EDT – How To Split User Stories
July 24 11:30 EDT – The Impact of Cognitive Bias On Teams

Check these out at www.davidconsultinggroup.com

I look forward to seeing or hearing from SPaMCAST readers and listeners at these great events!

The Software Process and Measurement Cast has a sponsor.
As many of you know, I do at least one webinar for the IT Metrics and Productivity Institute (ITMPI) every year. The ITMPI provides a great service to the IT profession. ITMPI’s mission is to pull together the expertise and educational efforts of the world’s leading IT thought leaders and to create a single online destination where IT practitioners and executives can meet all of their educational and professional development needs. The ITMPI offers a premium membership that gives members unlimited free access to 400 PDU-accredited webinar recordings and waives the PDU processing fees on all live and recorded webinars. The Software Process and Measurement Cast receives some support if you sign up here. All the revenue our sponsorship generates goes for bandwidth, hosting, and new cool equipment to create more and better content for you. Support the SPaMCAST and learn from the ITMPI.

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques was co-authored by Murali Chematuri and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: “This book will prove that software projects should not be a tedious process, neither for you or your team.” Support SPaMCAST by buying the book here.
Available in English and Chinese.

Frameworks and mirrors are related.

Hand Drawn Chart Saturday

The simplest explanation of integration testing is that it ensures functions and components fit together and work. Integration testing is critical to reducing the rework (and the professional embarrassment) that you’ll encounter if the components don’t fit together or if the application does not interact with its environment. A healthy testing ecosystem is required for effective testing regardless of whether you are using Agile or waterfall techniques. As we noted in the essay TMMi: What do I use the model for?, the Test Maturity Model Integration (TMMi) delivers a framework and a vocabulary that defines the components needed for a healthy test ecosystem. We can use this framework to test whether our approach to integration testing is rigorous. While a formal appraisal using the relevant portion of the model would be needed to determine whether an organization is performing at a specific maturity level, we can look at a few areas that will give a clear understanding of integration test formality. A simple set of questions from the TMMi that I use in an Agile environment to ensure that integration testing is rigorous includes:

  1. Does the organization or project team have a policy for integration testing? All frameworks work best when expectations are spelled out explicitly. Test policies are generally operationalized through test standards that define those expectations.
  2. Is there a defined test strategy? In Agile teams all the relevant testing standards should be incorporated into the project’s definition of done. The definition of done helps the team to plan and to know when any piece of work is complete.
  3. Is there a plan for performing integration testing? Incorporating integration testing into the definition of done ensures that integration testing is planned.  The use of TDD (or any variant) that includes integration testing provides explicit evidence of a plan to perform integration testing.
  4. Is integration testing progress monitored? Leveraging daily or continuous builds provides prima facie evidence that integration testing is occurring (and the build proves that the application at least fits together).  Incorporating smoke tests and other forms of tests into the build provides the information needed to explicitly monitor the progress of integration testing (see the sketch after this list).  A third basis for monitoring integration progress is the demo.  Work that has met the definition of done (which includes integration testing) is presented to the project’s stakeholders.
  5. Are those performing integration tests trained? Integration testing occurs at many levels of development, ranging from component to system, and each level requires specific knowledge and skills. Agile teams share work and activities to maximize the amount of value delivered. Cross-functional Agile teams that include professional testers can leverage the testers as testing consultants to train and coach the entire team to be better integration testers. Teams without access to professional testers should seek coaching to ensure they are trained in how to perform integration testing.
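
To make question four concrete, here is a minimal sketch of the kind of smoke test that can be wired into a daily or continuous build. Everything in it is an illustrative assumption rather than part of the TMMi: the hypothetical service at BASE_URL, its /health and /orders endpoints, and the choice of Python with pytest and requests.

```python
# smoke_test.py - a minimal integration smoke test sketch.
# Assumptions (not from the TMMi): a hypothetical service at BASE_URL
# exposing /health and /orders endpoints; run in the build with `pytest`.
import requests

BASE_URL = "http://localhost:8080"  # hypothetical test-environment address


def test_build_fits_together():
    # A passing health check is prima facie evidence that the deployed
    # build's components are wired together.
    response = requests.get(f"{BASE_URL}/health", timeout=5)
    assert response.status_code == 200


def test_order_flow_crosses_component_boundaries():
    # Exercise a path that touches more than one component
    # (API -> business logic -> data store) rather than a single unit.
    created = requests.post(f"{BASE_URL}/orders", json={"sku": "A-1"}, timeout=5)
    assert created.status_code == 201
    order_id = created.json()["id"]
    fetched = requests.get(f"{BASE_URL}/orders/{order_id}", timeout=5)
    assert fetched.status_code == 200
```

Because the build fails when these tests fail, the build log itself becomes the record that integration testing is occurring, which is exactly the kind of monitoring evidence question four asks for.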

The goal of integration testing is to make sure that components, functions, applications, and systems fit together. We perform integration testing to ensure we deliver the maximum possible business value to our stakeholders. When the parts of the application we are building or changing don’t fit together well, the value we deliver is reduced. The TMMi can provide a framework for evaluating just how rigorously and effectively you are performing integration testing.

A Five Stage Maturity Model

One of the central features of the TMMi is its five levels of maturity.  They are a shorthand way to describe organizational capability.  The five levels are:

  1. Initial – Level One, Initial, represents the starting point in the TMMi.  Organizations in the Initial state have not institutionalized the processes that are called for in the TMMi model.  To be ultra-precise, the organization would not be able to satisfy an appraisal of the process areas identified at Level Two (Managed). This level has no process areas.
  2. Managed – Maturity Level Two defines five specific process areas that delineate testing as a discipline distinct from debugging.  Level Two creates a basic testing capability that includes a definition of testing and processes for planning, designing, executing, and controlling testing in a defined testing environment. These capabilities provide a backstop for organizations to hold onto during periods of project stress when it would be easy to revert to ad-hoc testing. The five process areas are:
    1. Test Policy and Strategy
    2. Test Planning
    3. Test Monitoring and Control
    4. Test Design and Execution
    5. Test Environment
  3. Defined – The third maturity level, Defined, includes the practices required to extend testing (or verification and validation) across the life cycle of development based on a common core of standards and processes.  This maturity level includes five process areas.  The five are:
    1. Test Organization
    2. Test Training Program
    3. Test Life Cycle and Integration
    4. Non-functional Testing
    5. Peer Reviews
  4. Measured – Level Four of the TMMi focuses on gathering data and the measurement of the testing processes.  Measurement provides the information needed to begin improving the testing process and product quality.  I strongly suggest not waiting until Level Four to begin measuring; measurement can provide many benefits even before it becomes a focus at Level Four, just less effectively. The Measured maturity level includes three process areas. They are:
    1. Test Measurement
    2. Product Quality Evaluation
    3. Advanced Reviews
  5. Optimization – The ultimate level of the TMMi framework, Optimization, reflects an organizational state where quantitative process improvement is honed to maximize effectiveness and efficiency. At this level of process maturity, statistical modeling and statistical process control are leveraged to raise the bar of quality and effectiveness. Maturity Level Five includes three process areas. They are:
    1. Defect Prevention
    2. Quality Control
    3. Test Process Optimization

The five maturity levels of the TMMi represent progressive waves of capability.  Each of the levels, except Initial, has a set of process areas that an organization needs to implement. As an organization moves up the ladder of testing maturity, the practices contained in each maturity level form the foundation for the next level.
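
Because the levels are cumulative, the staged structure is easy to express in code. The sketch below (Python, purely illustrative) restates the level-to-process-area mapping from the list above; the helper function is a hypothetical convenience, not something the model defines.

```python
# A toy model of the TMMi staged structure. The mapping restates the
# maturity levels above; the cumulative lookup is an illustrative assumption.
PROCESS_AREAS = {
    1: [],  # Initial: no process areas
    2: ["Test Policy and Strategy", "Test Planning",
        "Test Monitoring and Control", "Test Design and Execution",
        "Test Environment"],
    3: ["Test Organization", "Test Training Program",
        "Test Life Cycle and Integration", "Non-functional Testing",
        "Peer Reviews"],
    4: ["Test Measurement", "Product Quality Evaluation",
        "Advanced Reviews"],
    5: ["Defect Prevention", "Quality Control",
        "Test Process Optimization"],
}


def areas_required_for(level):
    """All process areas at or below `level`: lower levels are the foundation."""
    return [area for lvl in range(2, level + 1) for area in PROCESS_AREAS[lvl]]


print(len(areas_required_for(3)))  # 10 -> Level Three builds on Level Two's five
print(len(areas_required_for(5)))  # 16 -> the full set of process areas
```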

Note:  The five maturity levels identified in the TMMi are very similar to those identified in the CMMI.  The five maturity levels identified in the CMMI are: Initial, Managed, Defined, Quantitatively Managed and Optimizing.

Bad Dog!

There are criticisms of the TMMi. Reference models attract criticism for many reasons, but most criticisms emerge either from the philosophy of the model or from poor implementation.  In all cases, even though the criticisms are real, the potential issues that generate them are avoidable.  The typical criticisms of the TMMi are:

  1. The model is heavy.  The TMMi presents a framework that covers the full range of processes, activities, and tasks required for pretty much any testing organization.  The model seeks to cover the entire spectrum of verification and validation.  Taken at face value, the model is huge, which gives the model user (or casual reader) the perception that the model is heavy.  The omnibus coverage philosophy aims to capture best practices for as many situations as possible.
  2. The model becomes the goal.  This criticism is related to the perception that the model is heavy. In some cases model advocates become so enamored of the breadth of the model that they decide every step, activity, and task is needed to satisfy it.  Outsiders see this as the model specifying how work is done rather than guiding the process.  This criticism is typically caused by poor training in how the model should be implemented.
  3. The TMMi slows process experimentation. This criticism is the outcome of a “here are the rules, do not deviate” implementation approach.  The implication is that there is only one way to approach any problem and that the model implementation represents that approach.  This is purely an implementation problem.  One process cannot deal with all issues in a complex organization.  The process areas and practices noted in the model are a framework, not a specific blueprint for how to perform testing.  Exploration of new ideas is critical for growth.  Nothing in the model or its philosophy precludes experimentation and continuous process improvement.
  4. Application of the TMMi puts up barriers between testers and developers.  Waterfall methods typically leverage a production-line model in which specialties complete their tasks and then pass the partially completed deliverables to the next specialty.  For example, the flow might be from business analyst to designer to developer to tester.  The TMMi can be implemented in that manner, but it can just as easily be implemented in a cross-functional Agile team. The organization’s implementation philosophy is the driver rather than the model’s philosophy.
  5. Model capability and performance are not linked.  Reference models reflect a philosophy that standard processes lead to performance.  The TMMi is no different than any of the other standard reference models.  I consider this philosophy a belief, like a belief in something for which there is no tangible, measured proof.  Quantitative data is critical to understanding whether any process is efficient or even effective.  The linkage between performance and capability must be proved for every iteration.

All of the typical criticisms of the TMMi can be valid; however, most discussions of the criticisms fail to reflect that most of the issues driving them are self-inflicted through the implementation.  Practitioners, methodologists, and managers must remember that the goal of any implementation of the TMMi (or any reference model) is to deliver value to the organization rather than just to “do” the model. In all cases these criticisms can be mitigated by viewing the TMMi as a framework for change rather than an explicit recipe that requires each step be performed in one precise manner.

The TMMi is comprised of eight primary components, similar to a pile of Legos.

The Test Maturity Model Integration (TMMi®) provides a framework to describe the requirements and environment for testing in the complex environment of most IT organizations.  The TMMi, like the CMMI® and ITIL®, describes a wide swath of the IT landscape.  Each model might cover part of the landscape, but not the entirety of the products and services delivered by a typical IT department. The TMMi has addressed the problem of describing part of the environment by being complementary with the CMMI (mainly the CMMI for Development). Part of being complementary is content (testing) and part is structure.

The TMMi is comprised of eight primary components, similar to a pile of Legos that are assembled into process areas that define levels of maturity.  When putting the parts together, some of the parts are required, some are recommended but can be substituted, and some are there to provide explanation or elaboration on how the model works.  The model uses three terms to capture this concept:

  • Required Component:  This component must be visibly implemented.
  • Expected Component:  This component describes how the concept is typically implemented but alternatives are acceptable.
  • Informative Component:  These components provide explanation or elaboration on practices in the model.

The eight components of the TMMi are (a toy sketch of how they fit together follows the list):

  1. Maturity Levels – Maturity levels are used to capture and convey a general sense of the capability of the organization. The TMMi model has five levels (Initial, Managed, Defined, Measured, Optimization).  An organization that is classified as Initial is considered to be less capable than one that is classified as Managed.
  2. Process Areas – A process area defines a set of practices required to support a part of the testing ecosystem.  Process areas include Test Environment, Non-functional Testing, and Product Quality Evaluation, to name just three.  Each maturity level, with the exception of “Initial,” is defined by a number of process areas.
  3. Specific Goals – Each process area has one or more specific goals that define the unique behavioral characteristics that the process area is attempting to generate. A process area is considered satisfied when its goals are satisfied. For example, the first goal in the Test Planning process area is “Perform a Product Risk Assessment.”  (This is a Required Component.)
  4. Specific Practices – The specific practices express a path that, if taken (and done well), will satisfy a specific goal.  These activities are important to reach the specific goal, but there may be other means of attaining the goal.  (This is an Expected Component.)
  5. Example Work Products – This component elaborates on the types of deliverables that are typically seen when the specific practices are implemented.  For example, work products for the Test Planning process area might include a risk analysis and a test plan.  These work products indicate how others have implemented the specific practices, but not how you must implement them. (This is an Informative Component of the model.)
  6. Sub-practices – Sub-practices describe the tasks that are generally needed to satisfy a specific practice (think of this as a form of work breakdown structure). These components provide support for interpreting and implementing the model. (Sub-practices are an Informative Component of the model.)
  7. Generic Goals – These goals represent organizational goals that are common to every process area.  For example, have a policy, provide resources, and identify responsibilities are three generic goals.  There are 12 generic goals (10 at Level 2 and 2 at Level 3).  Satisfying the generic goals is required to institutionalize model usage.  (The generic goals are Required Components of the model.)
  8. Generic Practices – The generic practices express a path that, if taken (and done well), will satisfy a generic goal.  These activities are important to reach the generic goal but there may be other means of attaining the goal.  (This is an Expected Component).
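
To make the containment relationships among these components easier to see, here is a toy data model in Python. The terminology and the example goal come from the list above; the classes themselves are an illustrative sketch, not the TMMi’s formal metamodel.

```python
# A toy sketch of how the TMMi components nest. The terms come from
# the model; the class layout is an illustrative assumption.
from dataclasses import dataclass, field
from enum import Enum


class Classification(Enum):
    REQUIRED = "required"        # must be visibly implemented
    EXPECTED = "expected"        # typical path; alternatives are acceptable
    INFORMATIVE = "informative"  # explanation or elaboration only


@dataclass
class SpecificPractice:          # Expected: one path to satisfying a goal
    name: str
    sub_practices: list = field(default_factory=list)  # Informative


@dataclass
class SpecificGoal:              # Required: satisfying it satisfies the area
    name: str
    practices: list = field(default_factory=list)


@dataclass
class ProcessArea:
    name: str
    goals: list = field(default_factory=list)
    example_work_products: list = field(default_factory=list)  # Informative


# The Test Planning example from the list above, expressed in the toy model.
test_planning = ProcessArea(
    name="Test Planning",
    goals=[SpecificGoal(name="Perform a Product Risk Assessment")],
    example_work_products=["risk analysis", "test plan"],
)
```

Generic goals and generic practices would attach to every process area in the same way, which is what makes them “generic.”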

The structure of the TMMi is very similar to that of the CMMI. I once irritated the instructor of a class I took on the TMMi by pointing that out.  That similarity makes understanding the structure of the TMMi significantly easier for those who have previous exposure to the CMMI.  The complementary nature of the two models ensures that implementations of the TMMi and the CMMI can not only co-exist, but also support and extend each other.  We apply the CMMI for Development to cover development functions and the TMMi to augment the verification and validation functions within IT.

Development and testing are intertwined!

“All models are wrong, but some are useful.”  – George E. P. Box

IT has many useful models for addressing the complexity of developing, delivering and running software.  Well known models include the Capability Maturity Model Integration (CMMI®), the Information Technology Infrastructure Library (ITIL®) and the Test Maturity Model Integration (TMMi®) to name a few.

Testing is a mechanism for affecting product quality.  The definition of quality varies, ranging from the precise (Crosby – “Conformance to requirements”) to the metaphysical (Juran – “Quality is an attitude or state of mind”).  Without a standard model of testing that codifies a definition, it is difficult to determine whether testing is affecting quality in a positive manner.  The TMMi is an independent test maturity model. It is a reference model representing an abstract framework of interlinked concepts based on expert opinions. The Wikipedia definition suggests that a reference model can be used as a communication vehicle for ideas and concepts among the members of the model’s community. Using a model to define the boundaries of a community also amplifies its usefulness as a communication tool, because it defines the language the community uses to describe itself.  The TMMi is a reference model for the testing community, defining the boundaries of testing, the language of testing, and a path for process improvement and assessment.

Many developers (and development managers) think of testing as a group of activities that occur at the end of coding. This flies in the face of software engineering practice since the 1980s and the Agile tenet of integrating testing into the entire development process. The TMMi model explicitly details a framework in which testing is not an event or gate that has to be hurdled, but rather a set of activities that stretch across the development lifecycle (waterfall, iterative, or Agile). The model provides a framework of the activities and processes that need to be addressed rather than merely laying out a set of milestones or events that need to be followed explicitly.  The TMMi model extends the boundary of testing to the entire development process.

The TMMi model lays out a set of five maturity levels and sixteen process areas ranging from test environment to defect prevention.  The model has a similar feel to the classic CMMI model. Because the model provides a set of definitions and a language to talk about testing and how it integrates into development, it gives members of the testing community a mechanism to communicate more effectively.

The TMMi, through its framework of maturity levels, process areas, practices, and sub-practices, lays out best practices that should be considered when developing testing practices.  Like other reference models, the TMMi provides a framework but does not prescribe how any project or organization should perform any of the practices or sub-practices.  By not prescribing how practices are to be implemented, the TMMi can be used in any organization that includes testing.  A framework that is neutral to lean, Agile, or waterfall practices, that communicates best practices, and that provides a tool to identify and pursue process improvement can be molded by managers and practitioners to make testing more efficient and effective.