
Nesting Easter eggs show each layer of the process improvement architecture

One of my children owned a matryoshka (Russian nesting doll) that is now somewhere in our attic. I was always struck by how one piece fit within the other and how getting the assembly out of order generated a mess. I think I learned more from the toy than my children did. The matryoshka doll is a wonderful metaphor for models, frameworks, and methodologies. A model represents the outside shell into which a framework fits, followed by the next doll, a methodology, placed inside the framework. Like models, frameworks, and methodologies, each individual doll is unique, yet each is related to the whole group of dolls.

Listen Now

Subscribe on iTunes

The Software Process and Measurement Cast 373 features our essay #NotImplementedNoValue. The twelve principles that underpin the Agile Manifesto include several that link the concept of value to the delivery of working software. The focus on working software stems from one of the four values, “Working software over comprehensive documentation,” which is a reaction to projects and programs that seem to value reports and PowerPoint presentations more than putting software in the hands of users. For a typical IT organization that develops, enhances and maintains the software the broader organization uses to do its business, value is only delivered when software can be used in production.

We visit Gene Hughson’s Form Follows Function blog! Gene suggests that while most models have value, some models can lead to poor decisions. The punchline for the discussion is “Simple is good, but not when it’s too good to be true.” Gene builds the case that we need to be cognizant of our biases when using and building models.



CMMI, ITIL, Six Sigma, Agile, waterfall, the software development life cycle and eXtreme Programming . . . what do all of these terms have in common?  They are models.  In a perfect world, models are abstractions that we find useful to explain the world around us.  Models work by rendering complex ideas more simply.  For example, both a road map and a picture rendered in Google Earth are models, but two very different types: the road map is an abstraction of a set of roads, buildings, parks and plants, while the Google Earth rendering provides far more detail.  Real life is complex, Google Earth is less complex and the road map is the least complex.  Simplifying a concept to a point allows understanding, while too much simplification renders the concept a pale reflection of itself.  Oversimplification can lead to misunderstandings or misconceptions, for example the notion that Agile methods are undisciplined or that waterfall methods are bureaucratic.  Both of these are misconceptions (individual implementations are another story).  According to Kenji Hiranabe, software development is a form of communication game.  Communication requires that groups understand a concept so that it can be implemented.  Communication and understanding require finding a level where common understanding based on common words can occur.  Words provide the simplification of real life into a form of model.

Unfortunately, it is difficult to determine how much simplification is enough.  Oversimplification can allow trouble to creep in, watering a concept down to the point that it cannot provide useful operational information.  An example of a very simple model is the five maturity levels commonly connected to the CMMI.  The maturity levels build awareness but provide little operational information.  I do not know how many times I have heard people talk about an individual maturity level as if its name was all you needed to know about it.  The less simplified version, with process areas, goals and practices, provides substantial operational information.  ‘Operationalizing’ an overly simplified model will yield unanticipated results, and that is not what we want from a model.  I once built a model of the battleship Missouri that had horrible directions (directions are a model of a model); I used three firecrackers to remodel the thing I ended up with (which was not a very good model).

Models abound in the world of information technology.  If we don’t have a model for something, we at least have a TLA (three letter acronym) for it and are working on a model that will incorporate it.  The models that have lasting power provide structure, discipline and control.  They’re also used as a tool to guide work (how tightly or loosely depends on the organization) and as a scaffold to define improvements in a structured manner.  Models are powerful, molding, bending and guiding legions of IT practitioners.  The dark side of this power is that the choice of models can be a definitional statement for a group or organization.  Selecting a model can elicit all of the passions of politics or religion; just consider the emotions you feel when describing Six Sigma, CMMI, eXtreme Programming, waterfall or Agile.  One of those words is probably a hot button.  The power of models can become so seductive and entrenched as to reduce your ability to change and adapt as circumstances demand.  A model is never a goal!  Define your expectations for the model or models you are using in business terms.  Examples of the goals I would expect are increased customer satisfaction, improved quality or faster time-to-market, rather than attaining CMMI Maturity Level 2 or implementing daily builds.  Know what you want to accomplish, then integrate the models and tactics to achieve those goals.  Do not let the tool be the goal.

Models are powerful, useful tools to ensure understanding; they provide structure and discipline.  Perform a health check.  Questions to ask about the models in your organization begin with:

  1. Is there a deep enough understanding of the model being used? – With knowledge comes the background to operationalize the model.
  2. What are your expectations of value from the model? – Knowing what you want from a model helps ensure that the model does not become the goal and that you retain the ability to be flexible.

There are many other questions you can ask in your health check; however, if you can’t answer these two well, stop and reassess, re-evaluate, re-train and re-plan your effort.

CMMI, ITIL, Six Sigma, Agile, waterfall, software development life cycle and eXtreme Programming . . . powerful tools or a straitjacket? Which is it for you?

Tallying Up the Answers:
After assessing the three components (customer involvement, criticality and complexity), count the number of “yes” and “no” answers for each model axis. Plotting the results is merely a matter of indicating the number of yes and no answers on each axis. For example, if an appraisal yields:

Customer Involvement: 8 Yes, 1 No

Criticality: 7 Yes, 2 No

Complexity: 5 Yes, 4 No

The responses could be shown graphically as:

[Radar chart: the yes/no tallies plotted on the three axes]
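
If you prefer to let a script do the bookkeeping, here is a minimal sketch in Python that tallies the responses per axis; the answer data is hard-coded to mirror the example appraisal above:

```python
# Minimal sketch: tally "yes"/"no" questionnaire responses per model axis.
# The responses below simply mirror the example appraisal above.
responses = {
    "Customer Involvement": ["yes"] * 8 + ["no"] * 1,
    "Criticality": ["yes"] * 7 + ["no"] * 2,
    "Complexity": ["yes"] * 5 + ["no"] * 4,
}

for axis, answers in responses.items():
    yes = answers.count("yes")
    no = answers.count("no")
    print(f"{axis}: {yes} Yes, {no} No")
```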

The Traceability model is premised on the idea that as criticality and complexity increase, the need for communication intensifies. Communication becomes more difficult as customer involvement shifts from intimate to arm’s length. Each component of the model influences the others to some extent. In circumstances where customer involvement is high, different planning and control tools must be utilized than when involvement is lower. The relationships between the axes will suggest a different implementation of traceability. In a perfect world, the model would be implemented as a continuum with an infinite number of nuanced implementations of traceability. In real life, continuums are difficult to implement. Therefore, for ease of use, I suggest an implementation of the model with three basic levels of traceability (the Three Bears Approach): Papa Bear, or formal/detailed tracking; Mama Bear, or formal with function-level tracking; and Baby Bear, or informal (but disciplined)/anecdote-based tracking. The three bears analogy is not meant to be pejorative; heavy, medium and light would work as well.

Interpreting the axes:
Assemble the axes you have plotted with the zero intercept at the center (see example below).

[Example: the three axes assembled into a radar chart with the zero intercept at the center]

As noted earlier, I suggest three levels of traceability, ranging from agile to formal. In general, if the accumulated “No” answers exceed three on any axis, an agile approach is not appropriate. An accumulated total of 7, 8 or 9 strongly suggests that as formal an approach as possible should be used. Note that certain “No” answers are more equal than others. For example, in the Customer Involvement category, if ‘Agile Methods Used’ is no, it probably makes sense to raise the level of formality immediately. A future refinement of the model will create a hierarchy of questions and vary the impact of the responses based on that hierarchy. All components of the model are notional rather than carved in stone; implementing the model in specific environments will require tailoring. Apply the model through the filter of your experience. Organizational culture and experience will be most important on the cusps (the 3-4 and 6-7 yes answer ranges).
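
A minimal sketch of these interpretation rules in code, using the thresholds exactly as stated above (the function and dictionary names are mine):

```python
# Sketch of the interpretation rules: more than three "No" answers on any
# axis rules out the informal/agile level; more than six on any axis calls
# for formal, detailed traceability.
def traceability_level(no_counts):
    """Map per-axis 'No' counts to one of the three traceability levels."""
    worst = max(no_counts.values())
    if worst <= 3:
        return "Informal - anecdote-based tracing"
    if worst <= 6:
        return "Moderately formal - function-based tracking"
    return "Formal - detailed traceability"

# The earlier example (1, 2 and 4 "No" answers) lands in the middle level.
print(traceability_level({"Involvement": 1, "Criticality": 2, "Complexity": 4}))
```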

Informal – Anecdote Based Tracing

Component Scores: No axis with more than three “No” answers.

Traceability will be accomplished through a combination of stories, test cases and, later, test results, coupled with the tight interplay between customer and developers found in agile methods. This ensures that what was planned (and nothing unplanned) is implemented, and that what was implemented was what was planned.

Moderately Formal – Function Based Tracking

Component Scores: No axis with more than six “No” answers.

The moderately formal implementation of traceability links requirements to functions (each organization needs to define the precise unit; tracing use cases can be very effective when detailed-level control is not indicated) and to test cases (development and user acceptance). This type of linkage is typically accomplished using matrices and numbering, requirements tools or some combination of the two.
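
As a sketch of what function-level tracking might look like, consider a simple matrix keyed by requirement; all identifiers below are hypothetical:

```python
# Sketch: a function-level traceability matrix. Each requirement is linked
# to the functions (or use cases) that implement it and to the development
# and user acceptance test cases that verify it. Identifiers are invented.
trace_matrix = {
    "REQ-001": {"functions": ["UC-Checkout"], "tests": ["DT-101", "UAT-201"]},
    "REQ-002": {"functions": ["UC-Search"], "tests": ["DT-102", "UAT-202"]},
    "REQ-003": {"functions": ["UC-Reviews"], "tests": []},
}

# A basic gap check: any requirement without a linked test is untraced.
untraced = [req for req, links in trace_matrix.items() if not links["tests"]]
print("Requirements lacking test coverage:", untraced or "none")
```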

Formal – Detailed Traceability

Component Scores: One or more axes with more than six “No” answers.

The most formal version of traceability links individual requirements (detailed, granular requirements) through design components, code, test cases and results. This level of traceability provides the highest level of control and oversight. This type of traceability can be accomplished using paper and pencil for small projects; however, for projects of any size, tools are required.
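
A sketch of what one detailed trace record might hold, assuming a simple chain from requirement through design, code, tests and results (identifiers are hypothetical; in practice a requirements management tool would own this data):

```python
# Sketch: one detailed traceability record linking a granular requirement
# through design, code, test cases and test results. All values invented.
from dataclasses import dataclass, field

@dataclass
class TraceRecord:
    requirement: str
    design_components: list = field(default_factory=list)
    code_modules: list = field(default_factory=list)
    test_cases: list = field(default_factory=list)
    test_results: dict = field(default_factory=dict)  # test case -> outcome

record = TraceRecord(
    requirement="REQ-001.3",
    design_components=["DES-checkout-flow"],
    code_modules=["cart/payment.py"],
    test_cases=["TC-977"],
    test_results={"TC-977": "pass"},
)
print(record)
```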

Caveats – As with all models, the proposed traceability model is a simplification of the real world; therefore, customization is expected. Three distinct levels of traceability may be too many for some organizations or too few for others. One implemented version of the model swings between an agile approach (primarily for web-based projects where Scrum is being practiced) and the moderately formal model for other types of projects. For the example organization, adding additional layers has been difficult without support to ensure high degrees of consistency. We found that leveraging project-level tailoring for specific nuances has been the most practical means for dealing with “one off” issues.

In practice, teams have reported major benefits from using the model.

The first benefit is that using the model ensures an honest discussion of criticality, complexity and customer involvement early in the life of the project. The model works best when all project team members (within reason) participate in the discussion and assessment. Facilitation is sometimes required to ensure that discussion paralysis does not occur. One organization I work with has used this mechanism as a team-building exercise.

The second benefit is that the model allows project managers, coaches and team members to define the expectations for the processes to be used for traceability in a transparent, collaborative manner. The framework allows all parties to understand what determines where your implementation of traceability will fall on the formality continuum. It should be noted that once the scalability topic is broached for traceability, it is difficult to contain the discussion to just this topic. I applaud those who embrace the discussion and would suggest that all project processes need to be scalable based on a disciplined and participative process that can be applied early in a project.

Examples:

Extreme examples are easy to assess without leveraging a model, a questionnaire or a graph. One extreme example would be a critical system where defects could be life threatening, such as a project to build an air traffic control system. The attributes of this type of project would include extremely high levels of complexity, a large system, many groups of customers each with differing needs, and probably a hard deadline with large penalties for missing the date or any anticipated functionality. The model recommends detailed requirements traceability as a component of the path to success. A similar example could be constructed for the model agile project, in which intimate customer involvement can substitute for detailed traceability.

A more illustrative example would be for projects that inhabit gray areas. The following example employs the model to suggest a traceability approach.

An organization (The Org) engaged a firm (WEB.CO), after evaluating a series of competitive bids, to build a new ecommerce web site. The RFP required the use of several Web 2.0 community and ecommerce functions. The customer that engaged WEB.CO felt they had defined the high-level requirements in the RFP. WEB.CO uses some agile techniques on all projects in which they are engaged. The techniques include defining user stories, two-week sprints, a coach to support the team, co-located teams and daily builds. The RFP and negotiations indicated that the customer would not be on-site and at times would have constraints on their ability to participate in the project. These early pronouncements on involvement were deemed to be non-negotiable. The contract included performance penalties that WEB.CO wished to avoid. The site was considered critical to the customer’s business. Delivery of the site was timed to coincide with the initial introduction of the business. Let’s consider how we would apply the questionnaire in this case.

Question   Involvement      Complexity   Criticality
1          Yes              Yes          No
2          No               Yes          No
3          No               Yes          Unknown (need to know)
4          Yes              Yes          Yes
5          Yes (inferred)   Yes          Yes
6          Yes              Yes          No
7          Yes              Yes          No
8          Yes              Yes          No
9          Yes              Yes          Yes

Graphically the results look like:

[Radar chart: the example questionnaire results plotted on the three axes]

Running the numbers on the individual radar plot axes highlights the high degree of perceived criticality for this project. The model recommends the moderate level of traceability documentation. As a final note, if this were a project I was involved in, I would keep an eye on the weakness in the involvement category. Knowing that there are weaknesses in customer involvement will help ensure you do not rationalize away the criticality score.
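
To make the recommendation easy to verify, the sketch below tallies the example answers and applies the three-level rule. Counting “Unknown” as a “No” is my conservative assumption, not something the model dictates:

```python
# Sketch: tally the example questionnaire and apply the three-level rule.
# "U" (unknown) is conservatively counted as a "No" -- an assumption.
axes = {
    "Involvement": ["Y", "N", "N", "Y", "Y", "Y", "Y", "Y", "Y"],
    "Complexity":  ["Y"] * 9,
    "Criticality": ["N", "N", "U", "Y", "Y", "N", "N", "N", "Y"],
}
no_counts = {axis: sum(a in ("N", "U") for a in answers)
             for axis, answers in axes.items()}
print(no_counts)  # {'Involvement': 2, 'Complexity': 0, 'Criticality': 6}

worst = max(no_counts.values())
if worst <= 3:
    print("Informal - anecdote-based tracing")
elif worst <= 6:
    print("Moderately formal - function-based tracking")  # this example
else:
    print("Formal - detailed traceability")
```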

SPaMCAST 293 features our essay on the Test Maturity Model Integration (TMMi). The TMMi is a maturity model focused on improving both the process and practice of testing! The TMMi covers the entire testing environment, not just typical dynamic testing. The essay begins:

“All models are wrong, but some are useful.”  – George E. P. Box

Information Technology (IT) has many useful models for addressing the complexity of developing, delivering and running software.  Well known models include the Capability Maturity Model Integration (CMMI®), the Information Technology Infrastructure Library (ITIL®) and the Test Maturity Model Integration (TMMi®) to name a few. The TMMi delivers a framework to help practitioners and IT executives understand and improve the quality of the products they deliver through better testing.
To listen to the rest of the essay, check out the Software Process and Measurement Cast 293.

Thanks for the feedback on shortening the introduction of the cast this week. Please keep your feedback coming. Get in touch with us anytime or leave a comment here on the blog. Help support the SPaMCAST by reviewing and rating it on iTunes. It helps people find the cast. Like us on Facebook while you’re at it.
Next week we will feature our interview with Sean Robson. We discussed his book, Agile SAP: Introducing flexibility, transparency and speed to SAP implementations. SAP and Agile: some say it can’t be done, and they would just be wrong.

Upcoming Events
Upcoming DCG Webinars:
June 19 11:30 EDT – How To Split User Stories
July 24 11:30 EDT – The Impact of Cognitive Bias On Teams

Check these out at www.davidconsultinggroup.com

I look forward to seeing or hearing all SPaMCAST readers and listeners at all of these great events!

The Software Process and Measurement Cast has a sponsor.
As many of you know, I do at least one webinar for the IT Metrics and Productivity Institute (ITMPI) every year. The ITMPI provides a great service to the IT profession. ITMPI’s mission is to pull together the expertise and educational efforts of the world’s leading IT thought leaders and to create a single online destination where IT practitioners and executives can meet all of their educational and professional development needs. The ITMPI offers a premium membership that gives members unlimited free access to 400 PDU-accredited webinar recordings and waives the PDU processing fees on all live and recorded webinars. The Software Process and Measurement Cast receives some support if you sign up here. All the revenue our sponsorship generates goes for bandwidth, hosting and new cool equipment to create more and better content for you. Support the SPaMCAST and learn from the ITMPI.

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques, co-authored by Murali Chemuturi and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: “This book will prove that software projects should not be a tedious process, neither for you or your team.” Support SPaMCAST by buying the book here.
Available in English and Chinese.

A slightly different sort of model…

Governance is a balancing act between ensuring what has been contracted is done in the manner stipulated and asking so many questions that you get in the way of your sourcer. Organizations use many techniques for creating a balanced approach to governance. Techniques range from developing your own governance structure to using process models. Whether or not you engage consultants to help develop a contract, using a model (or models) as the discussion framework is an effective method to jump-start governance.

Why are models so effective? Initially a model provides a common set of constraints that all parties can agree to without much consternation. How can you argue with a model that is regularly billed as an industry standard? Models such as CMMI, COBIT, or ITIL are certainly well accepted and are usually effective if the parties express clear goals for their use. Models are practically ubiquitous in sourcing arrangements, which usually makes the meat of the discussion not whether, but which, model will be adopted. (While outside of the scope of this discussion, the choice is important as each model focuses on different constraints.)

Each model brings with it a defined vocabulary and a set of processes that direct communication and activity. These common definitions provide the basis for governance while minimizing the work needed to arrive at an agreement. Whether recognized or not, the creation of a common vocabulary and process set is the single most important value that a model provides.

But there are problems with using common models such as CMMI or ITIL to create balance. The first is a temptation for one party to view the model as a goal unto itself. The second is that no individual model covers all of the elements of more complex sourcing arrangements. The size and complexity of governance provisions required are generally correlated to the size and complexity of the sourcing deal. As a result, complex sourcing arrangements often require using more than one model.

The balanced scorecard popularized by Kaplan and Norton provides a conceptual framework for establishing balance. Leveraging one or more of the popular process models such as the CMMI and ITIL, along with the balanced scorecard, creates a dominant tool set for providing balance in complex governance relationships.

Whether two or more organizations are involved in sourcing discussions, culture is a major determinant of governance structure. The act (or art) of balancing a sourcing contract is an intricate translation of multiple cultures into a set of covenants and agreements that must reflect the human factor. Individual needs and goals are important in an organization but are dwarfed by the goals of groups or constituencies. Constituencies can include senior management, users, unions, IT, and vendors, and organizational goals act as links between constituencies. Balance must include recognition of the overarching goals; the balanced scorecard is a tool for making sure this recognition occurs and is built into the governance process.

Balance provides focus that aligns monitoring and enforcement. Monitoring and enforcement are interrelated in their simplest forms. Monitoring is merely the process of watching for what you want to happen, and enforcement is the mechanism to make it happen. In some cases, monitoring and enforcement are dealt with in the same process. An analogy for monitoring and enforcement processes is the speed trap, which monitors speed and at the same time is an enforcement tool (unless totally hidden). The tactic is enabled by laws and tools. Sourcing uses similar monitoring/enforcement dichotomies: cost and budget are monitored, and enforcement is accomplished by using accounting tools enabled by the contract.

Individual contracts refine the framework of a model, anchor the sourcing arrangement and provide the basis for any discussion of balance. The contract defines what is important (to all parties) and how the goals will be monitored and enforced. Cost of the governance structure is a part of the balance discussion. Implementing models, monitoring, and measurement processes are not free. But using models is the most efficient means of pursuing balance within sourcing contracts.

Models provide a focus for establishing a balanced approach to governance and as an equally important benefit, models provide a standard set of role definitions. Each of these areas is critical to creating an atmosphere in which communication can occur. The combination of models and enabled communication provide a platform from which organizations can construct a balanced governance approach.

The TMMi is comprised of eight primary components, similar to a pile of Legos.

The Test Maturity Model Integration (TMMi®) provides a framework that describes the requirements and environment for testing in most complex IT organizations.  The TMMi, like the CMMI® and ITIL®, describes a wide swath of the IT landscape.  Each model might cover part of the landscape, but not the entirety of the products and services delivered by a typical IT department. The TMMi has addressed the problem of describing only part of the environment by being complementary with the CMMI (mainly the CMMI for Development). Part of being complementary is content (testing) and part is structure.

The TMMi is comprised of eight primary components, similar to a pile of Legos that are assembled into process areas that define levels of maturity.  When putting the parts together, some parts are required, some are recommended but can be substituted, and some are there to provide explanation or elaboration on how the model works.  The model uses three terms to capture this concept:

  • Required Component:  This component must be visibly implemented.
  • Expected Component:  This component describes how the concept is typically implemented, but alternatives are acceptable.
  • Informative Component:  These components provide explanation or elaboration on practices in the model.

The eight components of the TMMi are:

  1. Maturity Levels – Maturity levels are used to capture and convey a general sense of the capability of the organization. The TMMi model has five levels (Initial, Managed, Defined, Measured, Optimization).  An organization that is classified as Initial is considered to be less capable than one that is classified as Managed.
  2. Process Areas – A process area defines a set of practices required to support a part of the testing ecosystem.  Process areas include Test Environment, Non-Functional Testing and Product Quality Evaluation, to name just three.  Each maturity level, with the exception of “Initial,” is defined by a number of process areas.
  3. Specific Goals – Each process area has one or more specific goals that define the unique behavioral characteristics the process area is attempting to generate. A process area is considered satisfied when its goals are satisfied. For example, the first goal in the Test Planning process area is “Perform a Product Risk Assessment.”  (This is a Required Component.)
  4. Specific Practices – The specific practices express a path that, if taken (and done well), will satisfy a specific goal.  These activities are important for reaching the specific goal, but there may be other means of attaining it.  (This is an Expected Component.)
  5. Example Work Products – This component elaborates on the types of deliverables that are typically seen when the specific practices are implemented.  For example, work products for the Test Planning process area might include a risk analysis and a test plan.  These work products indicate how others have implemented the specific practices, but not how you must implement them.  (This is an Informative Component of the model.)
  6. Sub-practices – Sub-practices describe the tasks that are generally needed to satisfy a specific practice (think of this as a form of work breakdown structure). These components of the model provide support for interpreting and implementing the model.  (Sub-practices are an Informative Component of the model.)
  7. Generic Goals – These goals represent organizational goals that are common to each process area.  For example, have a policy, provide resources and identify responsibilities are three generic goals.  There are 12 generic goals (10 at Level 2 and 2 at Level 3).  Satisfying the generic goals is required to institutionalize model usage.  (The generic goals are Required Components of the model.)
  8. Generic Practices – The generic practices express a path that, if taken (and done well), will satisfy a generic goal.  These activities are important for reaching the generic goal, but there may be other means of attaining it.  (This is an Expected Component.)
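
For readers who think in code, here is a rough sketch of the component hierarchy as a Python data structure. The abbreviated names and nesting are my own interpretation of the list above, not an official rendering of the model:

```python
# Illustrative sketch of the TMMi component hierarchy described above.
# Names are abbreviated and the structure is interpretive, not official.
tmmi = {
    "maturity_levels": ["Initial", "Managed", "Defined", "Measured", "Optimization"],
    "process_areas": {  # each level above Initial is defined by process areas
        "Test Planning": {
            "specific_goals": [{                   # Required components
                "goal": "Perform a Product Risk Assessment",
                "specific_practices": [...],       # Expected components
                "example_work_products": [...],    # Informative components
                "sub_practices": [...],            # Informative components
            }],
            "generic_goals": [...],      # Required; 12 in total (10 + 2)
            "generic_practices": [...],  # Expected components
        },
    },
}
```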

The structure of the TMMi is very similar to that of the CMMI. (I once irritated the instructor of a TMMi class by pointing that out.)  That similarity makes understanding the structure of the TMMi significantly easier for those who have previous exposure to the CMMI.  The complementary nature of the two models means that implementations of the TMMi and the CMMI can not only co-exist but also support and extend each other.  We apply the CMMI for Development to cover development functions and the TMMi to augment the verification and validation functions within IT.