Tallying Up the Answers:
After assessing the three components (customer involvement, criticality and complexity), count the number of “yes” and “no” answers for each model axis. Plotting the results is merely a matter of indicating the number of yes and no answers on each axis. For example, if an appraisal yields:

Customer Involvement:  8 Yes, 1 No

Criticality:           7 Yes, 2 No

Complexity:            5 Yes, 4 No

The responses could be shown graphically as:

[Figure 1: the example scores plotted on the three model axes]
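The tally itself is easy to automate. A minimal sketch (the function and dictionary names are mine, purely illustrative) that reproduces the example scores above:

```python
# Tally "yes"/"no" appraisal answers for each model axis.
def tally(answers):
    """Return (yes_count, no_count) for a list of 'y'/'n' answers."""
    yes = sum(1 for a in answers if a == "y")
    return yes, len(answers) - yes

# The example appraisal from the text: 8/1, 7/2 and 5/4 splits.
appraisal = {
    "customer_involvement": ["y"] * 8 + ["n"] * 1,
    "criticality":          ["y"] * 7 + ["n"] * 2,
    "complexity":           ["y"] * 5 + ["n"] * 4,
}

for axis, answers in appraisal.items():
    yes, no = tally(answers)
    print(f"{axis}: {yes} Yes, {no} No")
```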

The traceability model is premised on the idea that as criticality and complexity increase, the need for communication intensifies. Communication becomes more difficult as customer involvement shifts from intimate to arm's length. Each component of the model influences the others to some extent. In circumstances where customer involvement is high, different planning and control tools must be utilized than when involvement is lower. The relationships between the axes will suggest different implementations of traceability. In a perfect world, the model would be implemented as a continuum with an infinite number of nuanced implementations of traceability. In real life, continuums are difficult to implement. Therefore, for ease of use, I suggest an implementation of the model with three basic levels of traceability (the Three Bears approach): Papa Bear, or formal/detailed tracking; Mama Bear, or formal with function-level tracking; and Baby Bear, or informal (but disciplined)/anecdote-based tracking. The three bears analogy is not meant to be pejorative; heavy, medium and light would work as well.

Interpreting the axes:
Assemble the axes you have plotted with the zero intercept at the center (see example below).

[Figure: the three plotted axes assembled with the zero intercept at the center]

As noted earlier, I suggest three levels of traceability, ranging from agile to formal. In general, if the accumulated "No" answers exceed three on any axis, an agile approach is not appropriate. An accumulated total of 7, 8 or 9 "No" answers strongly suggests that as formal an approach as possible should be used. Note that certain "No" answers are more equal than others. For example, in the customer involvement category, if 'Agile Methods Used' is a no, it probably makes sense to raise the level of formality immediately. A future refinement of the model will create a hierarchy of questions and vary the impact of the responses based on that hierarchy. All components of the model are notional rather than carved in stone; implementing the model in specific environments will require tailoring. Apply the model through the filter of your experience. Organizational culture and experience will matter most on the cusps (the 3-4 and 6-7 "yes" answer ranges).
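The cut-points in the paragraph above (more than three "No" answers on any axis rules out the informal level; more than six pushes toward full formality) can be sketched as a small function. The function and level names are mine; the thresholds come directly from the text:

```python
def traceability_level(no_counts):
    """Map per-axis 'No' counts (0-9 each) to a recommended level.

    Any axis with more than three 'No' answers rules out the
    agile/informal level; any axis with more than six strongly
    suggests the most formal approach possible.
    """
    worst = max(no_counts)
    if worst > 6:
        return "formal"    # Papa Bear: detailed tracking
    if worst > 3:
        return "moderate"  # Mama Bear: function-level tracking
    return "informal"      # Baby Bear: anecdote-based tracking

# Example appraisal: 1, 2 and 4 "No" answers on the three axes.
print(traceability_level([1, 2, 4]))  # -> moderate
```

Remember the caveat in the text: some "No" answers carry more weight than others, so treat the function's output as a starting point for discussion, not a verdict.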

Informal – Anecdote Based Tracing

Component Scores: No axis with more than three “No” answers.

Traceability will be accomplished through a combination of stories, test cases and, later, test results, coupled with the tight interplay between customers and developers found in agile methods. This will ensure that what was planned (and nothing unplanned) is implemented, and that what was implemented is what was planned.

Moderately Formal – Function Based Tracking

Component Scores: No axis with more than six “No” answers.

The moderately formal implementation of traceability links requirements to functions (each organization needs to define the precise unit; tracing use cases can be very effective when detailed-level control is not indicated) and to test cases (development and user acceptance). This type of linkage is typically accomplished using matrices and numbering, requirements tools, or some combination of the two.

Formal – Detailed Traceability

Component Scores: One or more axis with more than six “No” answers.

The most formal version of traceability links individual requirements (detailed, granular requirements) through design components, code, test cases and results. This level of traceability provides the highest level of control and oversight. This type of traceability can be accomplished using paper and pencil for small projects; however, for projects of any size, tools are required.

Caveats – As with all models, the proposed traceability model is a simplification of the real world; therefore, customization is expected. Three distinct levels of traceability may be too many for some organizations or too few for others. One implemented version of the model swings between an agile approach (primarily for web-based projects where Scrum is being practiced) and the moderately formal model for other types of projects. For the example organization, adding additional layers has been difficult without support to ensure high degrees of consistency. We found that leveraging project-level tailoring for specific nuances has been the most practical means of dealing with one-off issues.

In practice, teams have reported major benefits to using the model.

The first benefit is that using the model ensures an honest discussion of risks, complexity and customer involvement early in the life of the project. The model works best when all project team members (within reason) participate in the discussion and assessment of the model. Facilitation is sometimes required to ensure that discussion paralysis does not occur. One organization I work with has used this mechanism as a team building exercise.

The second benefit is that the model allows project managers, coaches and team members to define the expectations for the traceability processes to be used in a transparent, collaborative manner. The framework lets all parties understand what determines where on the formality continuum their implementation of traceability will fall. It should be noted that once the scalability topic is broached for traceability, it is difficult to contain the discussion to just this topic. I applaud those who embrace the discussion, and would suggest that all project processes need to be scalable based on a disciplined and participative process that can be applied early in a project.

Examples:

Extreme examples are easy to assess without leveraging a model, a questionnaire or a graph. One extreme would be a critical system where defects could be life-threatening, such as a project to build an air traffic control system. The attributes of this type of project would include extremely high levels of complexity, a large system, many groups of customers each with differing needs, and probably a hard deadline with large penalties for missing the date or falling short on anticipated functionality. For such a project, the model recommends detailed requirements traceability as a component of the path to success. A similar example could be constructed for the model agile project, in which intimate customer involvement can substitute for detailed traceability.

A more illustrative example would be for projects that inhabit gray areas. The following example employs the model to suggest a traceability approach.

An organization (The Org) engaged a firm (WEB.CO), after evaluating a series of competitive bids, to build a new ecommerce web site. The RFP required the use of several Web 2.0 community and ecommerce functions. The customer that engaged WEB.CO felt they had defined the high-level requirements in the RFP. WEB.CO uses some agile techniques on all projects in which they are engaged, including defining user stories, two-week sprints, a coach to support the team, co-located teams and daily builds. The RFP and negotiations indicated that the customer would not be on-site and at times would have constraints on their ability to participate in the project. These early pronouncements on involvement were deemed to be non-negotiable. The contract included performance penalties that WEB.CO wished to avoid. The site was considered critical to the customer's business, and delivery of the site was timed to coincide with the initial introduction of the business. Let's consider how we would apply the questionnaire in this case.

Question  Involvement     Complexity  Criticality
1         Yes             Yes         No
2         No              Yes         No
3         No              Yes         Unknown (need to know)
4         Yes             Yes         Yes
5         Yes (inferred)  Yes         Yes
6         Yes             Yes         No
7         Yes             Yes         No
8         Yes             Yes         No
9         Yes             Yes         Yes

 

Graphically the results look like:

[Figure 2: the WEB.CO appraisal results plotted on the three model axes]

Running the numbers on the individual radar plot axes highlights the high degree of perceived criticality for this project. The model recommends the moderate level of traceability documentation. As a final note, if this were a project I was involved in, I would keep an eye on the weaknesses in the involvement category. Knowing that there are weaknesses in customer involvement will help ensure you do not rationalize away the criticality score.
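Scoring the worked example mechanically (treating the one 'Unknown' answer as a 'No', since elsewhere in the model a maybe is equivalent to a 'no') looks like this; the axis names are shorthand and the answers are transcribed from the table above:

```python
# Answers from the WEB.CO questionnaire; "unknown" counts as "n".
answers = {
    "involvement": ["y", "n", "n", "y", "y", "y", "y", "y", "y"],
    "complexity":  ["y"] * 9,
    "criticality": ["n", "n", "n", "y", "y", "n", "n", "n", "y"],
}

no_counts = {axis: a.count("n") for axis, a in answers.items()}
print(no_counts)  # criticality carries the most "No" answers

# Worst axis has six "No" answers: more than three rules out the
# informal level, but no axis exceeds six, so the model lands on the
# moderate (function-based) level of traceability.
worst = max(no_counts.values())
level = "formal" if worst > 6 else "moderate" if worst > 3 else "informal"
print(level)  # -> moderate
```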

 

Click this link to listen to SPaMCAST 305

Software Process and Measurement Cast number 305 features our essay on estimation. Estimation is a hotbed of controversy. We begin by synchronizing on what we think the word means. Then, once we have a common vocabulary, we can commence with the fisticuffs. In SPaMCAST 305 we will not shy away from a hard discussion.

The essay begins:

Software project estimation is a conflation of three related but different concepts: budgeting, estimation and planning. These are typical in a normal commercial organization; however, the concepts might be called different things depending on your business model. For example, organizations that sell software services typically develop sales bids instead of budgets. Once the budget is developed, the evolution from budget to estimate and then to plan follows a unique path as the project team learns about the project.

Next

Software Process and Measurement Cast number 306 features our interview with Luis Gonçalves.  We discussed getting rid of performance appraisals.  Luis makes the case that performance appraisals hurt people and companies.

Upcoming Events

DCG Webinars:

Raise Your Game: Agile Retrospectives September 18, 2014 11:30 EDT
Retrospectives are a tool that the team uses to identify what they can do better. The basic process – making people feel safe and then generating ideas and solutions so that the team can decide on what they think will make the most significant improvement – puts the team in charge of how they work. When teams are responsible for their own work, they will be more committed to delivering what they promise.
Agile Risk Management – It Is Still Important! October 24, 2014 11:30 EDT
Has the adoption of Agile techniques magically erased risk from software projects? Or, have we just changed how we recognize and manage risk?  Or, more frighteningly, by changing the project environment through adopting Agile techniques, have we tricked ourselves into thinking that risk has been abolished?

 

Upcoming: ITMPI Webinar!

We Are All Biased!  September 16, 2014 11:00 AM – 12:30 PM EST

Register HERE

How we think and form opinions affects our work whether we are project managers, sponsors or stakeholders. In this webinar, we will examine some of the most prevalent workplace biases such as anchor bias, agreement bias and outcome bias. Strategies and tools for avoiding these pitfalls will be provided.

Upcoming Conferences:

I will be presenting at the International Conference on Software Quality and Test Management in San Diego, CA on October 1.  I have a great discount code!!!! Contact me if you are interested.

I will be presenting at the North East Quality Council 60th Conference October 21st and 22nd in Springfield, MA.

More on all of these great events in the near future! I look forward to seeing all SPaMCAST readers and listeners that attend these great events!

The Software Process and Measurement Cast has a sponsor.

As many of you know, I do at least one webinar for the IT Metrics and Productivity Institute (ITMPI) every year. The ITMPI provides a great service to the IT profession. ITMPI's mission is to pull together the expertise and educational efforts of the world's leading IT thought leaders and to create a single online destination where IT practitioners and executives can meet all of their educational and professional development needs. The ITMPI offers a premium membership that gives members unlimited free access to 400 PDU-accredited webinar recordings and waives the PDU processing fees on all live and recorded webinars. The Software Process and Measurement Cast receives some support if you sign up here. All the revenue our sponsorship generates goes toward bandwidth, hosting and new cool equipment to create more and better content for you. Support the SPaMCAST and learn from the ITMPI.

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques was co-authored by Murali Chemuturi and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: "This book will prove that software projects should not be a tedious process, neither for you or your team." Support SPaMCAST by buying the book here.

Available in English and Chinese.

The final axis in the model is criticality. Criticality is defined as the quality, state or degree of being of the highest importance. The problem with criticality is that the concept is far easier to recognize than to define precisely. This attribute of projects fits the old adage, 'I will know it when I see it.' Each person on a project will be able to easily identify what they think is critical. The difficulty is that each person has their own perception of what is most important, and that perception will change over time. This makes it imperative to define a set of questions or status indicators to appraise criticality consistently. The appraisal process uses "group think" to find the central tendency within teams and consolidate the responses. Using a consensus model to develop the appraisal will also help ensure that a broad perspective is leveraged. It is also important to remember that any appraisal is specific to a point in time, and that the responses to the assessment can and will change over time. I have found that the following factors can be leveraged to assess importance and criticality:

Perceived moderate level of business impact (positive or negative)  y/n
Project does not show significant time sensitivity                  y/n
Fall back position exists if the project fails                      y/n
Low possibility of impacting important customers                    y/n
Project is not linked to other projects                             y/n
Project not required to pay the bills                               y/n
Project is not labeled "Mission Critical"                           y/n
Normal perceived value to the stakeholders                          y/n
Neutral impact on the organizational architecture                   y/n

Since each project has its own set of hot-button issues, other major contributors can be substituted. However, be careful to understand the impact of the questions and the inter-relationships between the categories. The model recognizes that there will always be some overlap between responses.

Perceived Moderate Business Impact: Projects that are perceived to have a significant business impact are treated as more important than those that are not. There are two aspects to the perception of importance. The first is whether the project team believes that their actions will have an impact on the outcome. The second is whether the organization's management acts as if they believe the project will have a significant business impact (acting as if there will be an impact is more important than whether it is "true," at least in the short term). The perception of whether the impact will be positive or negative is less important than the perception of the degree of the impact (a perception of a large impact will cause a large reaction). Assessment Tip: If both the project team and the organization's management perceive that the project will have only a moderate business impact, appraise this attribute as a 'Y'. If management does not perceive significance, does not act as if there is significance, or acts as if nothing out of the ordinary is occurring, I would also strongly suggest rating this attribute as a 'Y'.

Lack of Significant Time Sensitivity for Delivery: Time sensitivity is the relationship between value and when the project is delivered. An example is the implied time sensitivity when trying to be first to market with a new product; the perception of time sensitivity creates a sense of urgency, which is central to criticality. While time is one of the legs of the project management iron triangle (identified in the management constraints above), this attribute measures the relationship between business value and delivery date. Assessment Tip: If the team perceives a higher than normal time sensitivity to delivery, appraise this attribute as an 'N'.

Fall Back Available: All-or-nothing projects, or projects without fall-backs, impart a sense of criticality that can easily be recognized (usually by the large bottles of TUMS on project managers' desks). These types of projects occur, but are rare. Assessment Tip: A simple test for whether a project is 'all or nothing' is to determine whether the team understands that if the project is implemented and works, everybody is good, and if it does not work, everyone gets to look for a job; if so, appraise this as an 'N'. Note: This assumes that the project is planned as an all-or-nothing (must be done) scenario and is not just an artifact of poor planning, albeit the impact might be the same.

Low Possibility of Impacting Important Customers: Any software has the possibility of impacting an important customer or block of customers. However, determining the level of that possibility, and the significance of the impact if one occurs, can be a bit of an art form (or at least risk analysis). Impact is defined, for this attribute, as an effect that, if noticed, would be outside of the customers' expectations. Assessment Tip: If the project is targeted at delivering functionality for an important customer, assess this as an 'N'; if it is not directly targeted but there is a high probability of an impact regardless of whom the change is targeted toward, also assess this attribute as an 'N'.

Projects Not Interlinked: Projects whose outcomes are linked to other projects require closer communication. The situation is analogous to building a bridge from both sides of a river and hoping the two halves meet in the middle. Tools, such as traceability, that formally identify, communicate and link the requirements of one project to another substantially increase the chances of the bridge meeting in the middle. Note that this is not to say that formally documented traceability is the only method that will deliver results; the model's strength is that it is driven by the confluence of multiple attributes to derive recommendations. Assessment Tip: If the outcome of a project is required for another project (or vice versa), assess this attribute as an 'N'. Here "required" means that one project cannot proceed without the functionality delivered by the other. It is easy to mistake the interlinking of people for interlinked functionality; I would suggest the former is a different management problem than the one we are trying to solve.

Not Directly Linked to Paying the Bills: Some projects are a paler reflection of a "bet the farm" scenario. While there are very few true "bet the farm" projects, there are many in the tier just below. These second-tier projects would cause substantial damage to the business and/or to your CIO's career if they fail, as they are tied to delivering business value (i.e., paying the bills). Assessment Tip: Projects that meet the "bet the farm" test, or at least the "bet the pasture" test (major impact on revenue or the CIO's career), can be smelled a mile away; these should be assessed as an 'N'. If a project has been raised to this level of urgency artificially, however, it should be appraised as a 'Y'. Another tip: projects with the words SAP or PeopleSoft in them should automatically be assessed as an 'N'.

Indirectly Important to Mission: The title "important to mission" represents a long view of the impact of the functionality being delivered by a project. An easy gauge of importance is whether the project can be directly linked to the current or planned core products of the business. Understanding these linkages is critical to determining whether a project is important to the mission of the organization. Remember, projects can be important for paying the bills but not core to the mission of the business. For example, a major component of a clothing manufacturer I worked for after I left university was its transportation group. Projects for this division were important for paying the bills, but they were not directly related to the mission of the business, which was the design and manufacture of women's clothing. As an outsider, one quick test for importance to mission is to simply ask, "What is the mission of the organization, and how does the project support it?" Not knowing the answer is either a sign to ask a lot more questions or a sign that the project is not important to the mission. Assessment Tip: If the project is directly linked to the delivery of a core (current or planned) product, assess this attribute as an 'N'. Appraisal of this attribute can engender passionate debate; most project teams want to believe that the project they are involved in is important to the mission. Perception is incredibly important: if there is a deeply held belief that the project is directly important to the mission of the organization, assess it as an 'N'.

Moderate Perceived Value to the Stakeholders: Any perception of value is difficult to assess at more than an individual level. Where stakeholders are concerned, involvement clouds rational assessment. Simply put, stakeholders perceive most of the projects they are involved in as having more than a moderate level of value; somewhere in their minds, stakeholders must be asking, why would I be involved with anything of merely moderate value? The issue is that most projects will deliver, at best, average value. Assessment Tip: Assuming that you have access to the projected ROI (quantitative and non-quantitative) for the project, you have the basis for a decision. As a rule of thumb, if the project is projected to deliver an ROI that is 10% or more of the organization's or department's value, appraise this as an 'N'. Using the derived ROI assumes that the evaluations are worth more than the paper they are printed on; if you are not tracking the delivery of benefits after the project, any published ROI is suspect.

Neutral to Organizational Architecture: This attribute assesses the degree of impact the functionality or infrastructure to be delivered will have on the organization's architecture. This attribute has a degree of covariance with the 'architectural impact' attribute in the previous model component; while related, they are not exactly the same. As an example, the delivered output of a project can be critical (important and urgent) but cause little change (low impact). A concrete example is the installation of a service pack for Microsoft Office: the service pack is typically critical (usually for security reasons) but does not change the architecture of the desktop. Assessment Tip: If delaying the delivery of the project would cause raised voices and gnashing of teeth, appraise this as an 'N', and argue impact versus criticality over a beer.

An overall note on the concept of criticality: you will need to account for false urgency. More than a few organizations oversell the criticality of a project. The process of overselling is sometimes coupled with yelling, threats and table-pounding in order to generate a major visual effect. False urgency can have short-term benefits, generating concerted action; however, as soon as the team figures out the game, a whipsaw effect (reduced productivity and attention) typically occurs. Gauge whether the words being used to describe how critical a project is match the appraisal you just created. Mismatches will sooner or later require action to synchronize the two points of view.

The concept of criticality requires a deft touch to assess. It is rarely as cut-and-dried as making checkmarks on a form. A major component of the assessment has to be an evaluation of what the project team believes. Teams that believe a project is critical will act as if the stress of criticality is real, regardless of other perceptions of reality. Alternately, if a team believes a project is not critical, they will act on that belief, regardless of the truth. Make sure you know how all project stakeholders perceive criticality, or be ready for surprises.


The second component, complexity, is a measure of the number of properties of a project that are judged to be outside of the norm. The applicable norm is relative to the person or group making the judgment. Assessing the team's understanding of complexity is important because when a person or group perceives something to be complex, they act differently. The concept of complexity can be decomposed into many individual components; for this model, the technical components of complexity are appraised in this category, while the people- or team-driven attributes of complexity are dealt with in the customer involvement section (above). Higher levels of complexity are an important reason for pursuing traceability, because complexity decreases the ability of a person to hold a consistent understanding of the problem and solution in their mind. There are just too many moving parts. The inability to develop and hold an understanding in the forefront of your mind increases the need to document understandings and issues to improve consistency.

The model assesses technical complexity by evaluating the following factors:

1. The project is the size you are used to doing
2. There is a single manager or right-sized management
3. The technology is well known to the team
4. The business problem(s) is well understood
5. The degree of technical difficulty is normal or less
6. The requirements are stable (ish)
7. The project management constraints are minor
8. The IT staff perceives the impact to be minimal
9. The architectural impact is minimal

As with customer involvement, the assessment process for complexity uses a simple yes-or-no scale for rating each of the factors. Each factor will require some degree of discussion and introspection to arrive at an answer. An overall assessment tip: a maybe is equivalent to a 'no'. Remember that there is no prize for under- or over-estimating the impact of these variables; value is only gained through an honest self-evaluation.

Project is Normal Size: The size of the project is a direct contributor to complexity; all things being equal, a larger-than-usual project will require more coordination, communication and interaction than a smaller project. A common error when considering the size of a project is to use cost as a proxy. Size is not the same thing as cost; I suggest estimating the size of the project using standard functional size metrics. Assessment Tip: Organizations with a baseline will be able to statistically determine the point where size causes a shift in productivity. That shift is a signpost for where complexity begins to weigh on the processes being used. In organizations without a baseline, develop and use a rule of thumb, such as 'bigger than anything you have done before' or the corollary 'the same size as your biggest project'. Projects matching those rules of thumb equate to an 'N' rating.

Single Manager/Right-Sized Management: There is an old saying that too many cooks spoil the broth. A cadre of managers supporting a single project can fit the 'too many cooks' bill. While it is equally true that a large project will require more than one manager or leader, it is important to understand the implications that the number of managers and leaders has for a project. The right number of managers and leaders can smooth out issues as they are discovered, assemble and provide status without disrupting the team dynamic, and provide feedback to team members. The wrong number of managers will gum up the works of a project (measure the ratio of meeting time to a standard eight-hour day; anything over 25% is a sign to closely check the level of management communication overhead). The additional layers of communication and coordination are the downside of a project with multiple managers (it is easy for a single manager to communicate with himself or herself). One of the most important lessons to be gleaned from the agile movement is that communication is critical (which leads to the conclusion that communication difficulties may trump benefits) and that any process that gets in the way of communication should be carefully evaluated before it is implemented. A larger communication web must be traversed with every manager added to the structure, which will require more formal techniques to ensure consistent and effective communication. Assessment Tip: Projects with more than five managers and leaders, or a worker-to-manager ratio lower than eight workers to one manager/leader (with more than one manager), should assess this attribute as an 'N'.
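The two numeric triggers in the assessment tip above can be checked mechanically. A sketch of that rule (the function name is mine, not part of the published model):

```python
def management_attribute(workers, managers):
    """Return 'y' if the management structure looks right-sized,
    'n' if it trips either trigger from the assessment tip:
    more than five managers/leaders, or a worker-to-manager ratio
    below 8:1 when there is more than one manager."""
    if managers > 5:
        return "n"
    if managers > 1 and workers / managers < 8:
        return "n"
    return "y"

print(management_attribute(workers=24, managers=2))  # 12:1 ratio -> 'y'
print(management_attribute(workers=20, managers=3))  # ~6.7:1    -> 'n'
```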

Well-Known Technology: The introduction of a technology that is unfamiliar to the project team will require more coordination and interaction. While introducing one or two hired guns into a group without experience is a good step to ameliorate the impact, it may not be sufficient (and may complicate communication in its own right). I would suggest that until all relevant team members surmount the learning curve, new technologies will require more formal communication patterns. Assessment Tip: If less than 50% of the project team has worked with the technology on previous projects, assess this attribute as an 'N'.

Well-Understood Business Problem: A project team that has access to an understanding of the business problem being solved by the project will have a higher chance of solving it. The amount of organizational knowledge the team has will dictate the level of analysis and communication required to find a solution. Assessment Tip: If the business problem is not well understood, or has not been dealt with in the past, this attribute should be assessed as an 'N'.

Low Technical Difficulty: The term 'technical difficulty' has many definitions. The plethora of definitions means that measuring technical difficulty requires reflecting on many project attributes. The attributes that define technical difficulty first become visible when there are difficulties in describing the solutions and alternatives for solving the problem. Technical difficulty can involve algorithms, hardware, software, data, logic or any combination of components. Assessment Tip: When assessing the level of technical difficulty, if it is difficult to frame the business problem in technical terms, assess this attribute as an 'N'.

Stable Requirements: Requirements typically evolve as a project progresses (and that is a good thing). Capers Jones indicates that requirements grow approximately 2% per calendar month across the life of a project. Projects that are difficult to define, or where project personnel or processes allow requirements to be amended or changed in an ad hoc manner, should anticipate above-average scope creep or churn. Assessment Tip: If historical data indicates that the project team, customer and application combination tends to have scope creep or churn above the norm, assess this attribute as an 'N', unless there are procedural or methodological means to control change. (Note: Control does not mean stopping change, but rather ensuring that it happens in an understandable manner.)
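Jones' 2%-per-month figure gives a quick way to budget for growth. A sketch, using an illustrative baseline of 1,000 function points; whether the growth is linear or compounds monthly is my assumption about how to apply the figure, so both are shown:

```python
# Requirements growth at ~2% per calendar month (Capers Jones' figure).
baseline = 1000            # illustrative scope, e.g. function points
months = 12

simple = baseline * (1 + 0.02 * months)   # linear growth
compound = baseline * (1.02 ** months)    # compounding monthly

print(round(simple))    # 1240
print(round(compound))  # 1268
```

Either way, a year-long project should plan on roughly a quarter more scope than the baseline, which is exactly the churn that traceability is meant to keep visible.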

Minor Project Management Constraints: Project managers have three macro levers (cost, scope and time) available to steer a project.   When those levers are constrained or locked (by management, users or contract) any individual problem becomes more difficult to address.  Formal communication becomes more important as options are constrained.  Assessment Tip:  If more than one of the legs of the project management iron triangle is fixed, assess this attribute as an ‘N’.

Minimal Architectural Impact: Changes to the standard architecture of the application(s) or organization will increase complexity on an exponential scale.  This increase in complexity will increase the amount of communication required to ensure a trouble-free change. Assessment Tip:  If you anticipate modifications (small or wholesale) to the standard architectural footprint of the application or organization, assess this attribute as an 'N'.

Minimal IT Staff Impact: There are many ways a project can impact an IT staff, ranging from process-related changes (how work is done) to outcome-related changes (employment or job duties).  Negative impacts are most apt to require increased formal communication, and therefore the use of more documented and granular traceability methods.  Negative process impacts are those driven by the processes used or by organizational constraints (e.g. death marches, poorly implemented processes, galloping requirements and resource constraints).  Outcome-related impacts are those driven by the solution delivered (e.g. outsourcing, downsizing, and new applications/solutions).  Assessment Tip:  Any perceived negative impact on the team, or on the part of the organization closely associated with the team, should be viewed as not neutral (assess as an 'N'), unless you are absolutely certain you can remediate the impact on the team doing the work.  Reassess often to avoid surprises.

Ruminating on Customer Involvement

Customer involvement can be defined as the amount of time and effort applied to a project by the customers (or users) of the project.  Involvement can be both good (e.g. knowledge transfer and decision making) and bad (e.g. interference and indecision).  The goal in using the traceability model is to force the project team to predict both the quality and quantity of customer involvement as accurately as possible across the life of a project.  While the quality and quantity of customer involvement is important for all projects, it becomes even more important as Agile techniques are leveraged.  Customer involvement is required for the effective use of Agile techniques and to reduce the need for classic traceability.  Involvement replaces documentation with a combination of lighter documentation and interaction with the customer.

Quality can be unpacked to include attributes such as competence: knowledge of the problem space, knowledge of the process and the ability to make decisions that stick.  Assessing the quality attributes of involvement requires understanding how having multiple customer and/or user constituencies involved in the project outcome can change the complexity of the project.  For example, the impact of multiple customer and user constituencies on decision making, specifically the ability to make decisions correctly or on a timely basis, will influence how a project needs to be run.  Multiple constituencies complicate decision making, which drives the need for structure.  As the number of groups increases, the number of communication nodes increases, making it more difficult to get enough people involved in a timely manner.   Although checklists are used to facilitate the model, users should remember that knowledge of the project and of project management is needed to apply the model effectively; the lists of attributes do not make this a check-the-box method.

The methodical assessment of the quantity and quality of customer involvement requires determining the leading indicators of success.  Professional experience suggests a standard set of predictors for customer involvement which are incorporated into the appraisal questions below.
These predictors are as follows:

  1. Agile methods will be used                                   y/n
  2. The customer will be available more than 80% of the time     y/n
  3. User/customer will be co-located with the project team       y/n
  4. Project has a single primary customer                        y/n
  5. The customer has adequate business knowledge                 y/n
  6. The customer has knowledge of how development projects work  y/n
  7. Correct business decision makers are available               y/n
  8. Team members have a high level of interpersonal skills       y/n
  9. Process coaches are available                                y/n

The assessment uses a simple yes-no evaluation.  Gray areas like 'maybe' are evaluated as equivalent to a 'no'.  While the rating scale is simple, the discussion required to reach a yes-no decision is typically far less simple.
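
The nine predictors above lend themselves to a simple tally. A minimal sketch (the sample answers are hypothetical; 'maybe' counts as 'no' per the rule above):

```python
# Tally the customer-involvement checklist; anything that is not a clear
# 'y' (including 'maybe') counts as a 'no'.  The answers are hypothetical.
answers = {
    "Agile methods will be used": "y",
    "Customer available > 80% of the time": "y",
    "Customer co-located with team": "n",
    "Single primary customer": "y",
    "Adequate business knowledge": "y",
    "Knows how development projects work": "maybe",  # gray area -> 'no'
    "Correct decision makers available": "y",
    "High interpersonal skills": "y",
    "Process coaches available": "n",
}

yes = sum(1 for a in answers.values() if a.lower() == "y")
no = len(answers) - yes
print(f"Customer Involvement: {yes} Yes {no} No")  # → 6 Yes 3 No
```

The same tally is repeated for the criticality and complexity axes, and the three yes/no counts are what get plotted on the model.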

Agile methods will be used:  The first component in the evaluation is to determine whether the project intends to use disciplined Agile methods.  The term 'disciplined' is used on purpose.  Agile methods like XP are a set of practices that interact to create development supported by intimate communication.  Without the discipline, or without critical practices, the communication alone will not suffice.  Assessment Tip:  Using a defined Agile process equates to a 'Y'; making it up as you go equates to an 'N'.

Customer availability (>80%):  Intense customer interaction is required to ensure effective development and to reduce reliance on classically documented traceability.  Availability is defined as the total amount of time the primary customer is available.  If customers are not available, a lack of interaction is a foregone conclusion.  I have found that Agile methods (which require intense communication) tend to lose traction when customer availability drops below 80%.   Assessment Tip: Assess this attribute as a 'Y' if primary customer availability is above 80%; assess it as an 'N' if availability is below 80% (barring very special circumstances).

Co-located customer/user:  Co-location is an intimate implementation of customer/user availability.  The intimacy that co-location provides can be leveraged as a replacement for documentation-based communication by using less formal techniques like white boards and sticky notes.  Assessment Tip:  Stand up and look around; if you don't have a high probability of seeing your primary customer (unless it is lunch time), rate this attribute as an 'N'.  Metaverse tools (e.g. Second Life or similar) can mitigate some of the problems of disparate physical locations.

Project Has A Single Customer:  As the number of primary customers increases, the number of communication paths required for creating and deploying the project increases dramatically.  The impact that the number of customers has on communication is not linear; it is more easily conceived of as a web.  Each node in the web will require attention (attention = communication) to coordinate activities.  Assessment Tip: Count the number of primary customers; if you need more than one finger, assess this question as an 'N'.
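
One common way to quantify the "web" (not part of the original model, but a standard rule of thumb) is the pairwise-channel formula n(n-1)/2:

```python
# Number of pairwise communication channels among n parties: n*(n-1)/2.
def channels(n: int) -> int:
    return n * (n - 1) // 2

# Coordination cost grows much faster than headcount:
for n in (2, 4, 8):
    print(n, channels(n))  # → 2 1, 4 6, 8 28
```

Doubling the number of primary customers roughly quadruples the channels to tend, which is why a single primary customer earns the 'Y'.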

Business Knowledge:  The quality and quantity of business knowledge the team can draw upon is inversely related to the amount of documentation-based communication needed.  The availability of solid business knowledge affects how much background must be documented in order to establish the team's bona fides.  It can be argued that sourcing long-term business knowledge in human repositories is a risk.  Assessment Tip:  Assessing the quality and quantity of business knowledge requires introspection and fairly brutal honesty, but do not sell the team or yourself short.

Knowledge of How Development Projects Work:  All team members, whether filling a hardcore IT role or the most ancillary user role, need to understand both their project responsibilities and how they will contribute to the project.  The more intrinsically participants understand their roles and responsibilities, the less effort a project will typically waste on non-value-added activities (like re-explaining how work is done).  Assessment Tip:  This is an area that can be addressed after assessment through training.  If team members cannot be trained or educated as to their roles, appraise this attribute as an 'N'.

Decision Makers:  The attribute 'decision makers' concerns the process that leads to the selection of a course of action.  Most IT projects have a core set of business customers who are the decision makers for requirements and business direction.  Knowing who can make a decision (and have it stick), and then having access to them, is critical.  Having a set of customers available or co-located is not effective if they are not decision makers ('the right people').  The perfect customer for a development project is available, co-located and can make decisions that stick (and is very apt not to be the person provided).  Assessment Tip:  This is another area that can only be answered after soul-searching introspection (i.e. thinking about it over a beer).  If your customer has to check with a nebulous puppet master before making critical decisions, the assessment response should be an 'N'.

High Level of Interpersonal Skills:  All team members must be able to interact and perform as a team.  Insular or otherwise non-team-conducive behavior will cause communication to pool and stagnate, as team members either avoid the non-team player or the offending party holds on to information at inopportune times.  Non-team behavior within a team is bad regardless of the development methodology being used.  Assessment Tip:  Teams that have worked together and crafted a good working relationship can typically answer this as a 'Y'.

Facilitation: Projects perform more consistently with coaching (and seem to deliver better solutions); however, coaching as a practice has not been universally adopted.  The role that has been universally embraced is project manager (PM).  Coaches and project managers typically play two very different roles.  The PM role has an external focus and acts as the voice of the process, while the coach has an internal focus and acts as the voice of the team (outside vs. inside, process vs. people).  Agile methods implement coach and PM as two very different roles, even though they can co-exist.  Coaches nurture the personnel on the project, helping them to do their best (remember your last coach).  Shouldn't the same facility be leveraged on all projects?  Assessment Tip:  If a coach is assigned, answer affirmatively.  If the role is not formally recognized within the group or organization, care should be taken even if a coach is appointed.

Three core concepts.

My model for scaling traceability is based on the assumption that there is a relationship between customer involvement, criticality and complexity, and that this relationship yields the level of documentation required to achieve the benefits of traceability.  The model leverages an assessment of project attributes that define three common concepts.  The concepts are:

  • Customer involvement in the project
  • Complexity of the functionality being delivered
  • Criticality of the project

A thumbnail definition of each of the three concepts begins with customer involvement, defined as the amount of time and effort applied to a project in a positive manner by the primary users of the project.  The second concept, complexity, is a measure of the number of project properties that are outside the normal expectations as perceived by the project team (the norm is relative to the organization or project group rather than to any external standard).  The final concept, criticality, is defined by the attributes of quality, state or degree of being of the highest importance (again relative to the organization or group doing the work).  We will unpack these concepts and examine them in greater detail as we peel away the layers of the model.

The Model

[Figure: the traceability model, with the three axes assembled around a common zero intercept]

The process for using the model is a simple set of steps.
1.    Get a project (and team members)
2.    Assess the project’s attributes
3.    Plot the results on the model
4.    Interpret the findings
5.    Reassess as needed

The model is built for project environments. Don’t have a project you say!  Get one, I tell you! Can’t get one? This model will be less useful, but not useless.
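
Under the thresholds described earlier (more than three accumulated 'No' answers on an axis rules out an agile approach; seven to nine strongly suggests formality), step 4 of the process could be sketched as follows. The function and the exact cutoffs are one interpretation of that guidance, not a prescription:

```python
# Map an axis's count of 'No' answers (out of 9 questions) to a
# traceability level, using the article's Three Bears terminology.
# Cutoffs are an interpretation: <=3 No -> light, 7+ No -> formal.
def traceability_level(no_count: int) -> str:
    if no_count <= 3:
        return "Baby Bear (informal but disciplined, anecdote-based)"
    if no_count >= 7:
        return "Papa Bear (formal, detailed tracking)"
    return "Mama Bear (formal, function-level tracking)"

# Hypothetical per-axis tallies from an assessment:
for axis, noes in {"Customer Involvement": 1,
                   "Criticality": 5,
                   "Complexity": 8}.items():
    print(axis, "->", traceability_level(noes))
```

Remember the caveat from the interpretation guidance: some 'No' answers (such as 'Agile methods used') should raise the formality level immediately, regardless of the tally, and results on the cusps call for judgment rather than arithmetic.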

Who Is Involved And When Will They Be Involved:

Implementing the traceability model assessment works best when the team (or a relevant subset) charged with doing the work conducts the assessment of project attributes.  The use of team members turns Putt's theory of "Competence Inversion" on its head by focusing project-level competencies on defining the impact of specific attributes.  Using a number of team members also provides a basis for consistency if assessments are performed again later in the project.

While the assessment process is best done by a cross-functional team, it can also be performed by those in the project governance structure alone.  The smaller the group involved in the assessment, the more open and honest the communication between the assessment group and the project team must be, or the exercise will be just another process inflicted on the team.  Regardless of size, the assessment team needs to include technical competence, which is especially useful when appraising complexity and is also a good tool for selling the results of the process to the rest of the project team.  Regardless of the deployment model, the diversity of thought generated in cross-functional groups provides the breadth of knowledge needed to apply the model (this suggestion is based on feedback from process users).  The use of cross-functional groups becomes even more critical for large projects and/or projects with embedded sub-projects.  Where the discussion will be contentious or the participating group will be large, I suggest using a facilitator to ensure an effective outcome.

One approach for integrating the assessment into your current methodology is to incorporate it into your formal risk assessment.  An alternative for smaller projects is to perform the assessment during initial project planning activities or in a sprint zero (if used).  This will minimize the impact of yet another assessment.

In larger projects where the appraisal outcome may vary across teams or sub-projects, thoughtful discussion will be required to determine whether the lowest common denominator will drive the results or whether a mixed approach is needed.  Use of this method in the real world suggests that in large projects/programs a single highest or lowest common denominator is seldom universally useful.  The need for scalability should be addressed at the level that makes sense for the project, which may mean that sub-projects differ.

Traceability becomes a tool that can bridge the gaps caused by less than perfect involvement.

Traceability is an important tool in software engineering and a core tenet of the CMMI.  It is used as a tool for the management and control of requirements. Controlling and understanding the flow of requirements puts a project manager's hand on the throttle of the project by controlling the flow of work through it. However, traceability is both hard to accomplish and requires focused application to derive value. When does the control generated represent the proper hand on the throttle, and when a lead foot on the brake?

The implementation of traceability sets the stage for the struggle over processes mandated by management or the infamous "model".  Developers actively resist process when they perceive that the effort isn't directly leading to functionality that can be delivered, and is therefore not delivering value to their customers.  In the end, traceability, like insurance, is best when you don't need the information it provides to sort out uncontrolled project changes or functionality delivered that is unrelated to requirements.

Identifying both the projects and the audience that can benefit from traceability is paramount for implementing and sustaining the process.  Questions that need to be asked and addressed include:

  • Is the need for control for all types of projects the same?
  • Is the value-to-effort ratio from tracing requirements the same for all projects?
  • What should be evaluated when determining whether to scale the traceability process?

Scalability is a needed step to extract the maximum value from any methodology component, traceability included, regardless of whether the project is plan-driven or Agile. A process is needed to ensure that traceability occurs based on a balance between process, effort and complexity.

The concept of traceability acts as a lightning rod for the perceived excesses of the CMMI (and, by extension, all other model-based improvement methods).  I will explore a possible approach for scaling traceability.  My approach bridges the typical approach (leveraging matrices and requirements tools) with one that trades documentation for intimate user involvement. It uses a simple set of three criteria (complexity, user involvement and criticality) to determine where a project should focus its traceability effort on a continuum between documentation and involvement.

Traceability becomes a tool that can bridge the gaps caused by less than perfect involvement, a complex project, and increased criticality.  The model proposed here provides a means to apply traceability in a scaled manner so that it fits a project's need and is not perceived as a one-size-fits-all approach.
