The final axis in the model is ‘criticality’. Criticality is defined as the quality, state or degree of being of the highest importance. The problem with criticality is that the concept is far easier to recognize than to define precisely. This attribute of projects fits the old adage, ‘I will know it when I see it’. Each person on a project can easily identify what they think is critical. The difficulty is that each person has their own perception of what is most important, and that perception will change over time. This makes it imperative to define a set of questions or status indicators to appraise criticality consistently. The appraisal process uses “group think” to find the central tendency in teams and consolidate the responses. Using a consensus model to develop the appraisal also helps ensure that a broad perspective is leveraged. It is also important to remember that any appraisal is specific to a point in time and that the responses to the assessment can and will change over time. I have found that the following factors can be leveraged to assess importance and criticality (a sketch of consolidating the responses follows the checklist):

  1. Perceived moderate level of business impact (positive or negative)    y/n
  2. Project does not show significant time sensitivity    y/n
  3. Fall back position exists if the project fails    y/n
  4. Low possibility of impacting important customers    y/n
  5. Project is not linked to other projects    y/n
  6. Project not required to pay the bills    y/n
  7. Project is not labeled “Mission Critical”    y/n
  8. Normal perceived value to the stakeholders    y/n
  9. Neutral impact on the organizational architecture    y/n
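To make the consolidation step concrete, here is a minimal sketch in Python (the factor names and responses are hypothetical illustrations, not part of the model) that tallies each team member’s yes/no answers and takes the most common response per factor as the consensus:

```python
from collections import Counter

# Hypothetical snake_case names for the criticality factors above.
FACTORS = [
    "moderate_business_impact",
    "no_significant_time_sensitivity",
    "fall_back_position_exists",
    "low_customer_impact",
    "not_linked_to_other_projects",
    "not_required_to_pay_bills",
    "not_mission_critical",
    "normal_stakeholder_value",
    "neutral_architecture_impact",
]

def consolidate(responses):
    """Find the central tendency: the most common answer per factor.

    responses: one dict per team member, mapping factor -> 'y' or 'n'.
    """
    consensus = {}
    for factor in FACTORS:
        votes = Counter(r.get(factor, "n") for r in responses)
        consensus[factor] = votes.most_common(1)[0][0]
    return consensus

# Example: three team members appraise the same project.
team = [
    {f: "y" for f in FACTORS},
    {**{f: "y" for f in FACTORS}, "not_mission_critical": "n"},
    {f: "n" for f in FACTORS},
]
print(consolidate(team))
```

Because any appraisal is specific to a point in time, rerunning the same tally later in the project gives a simple way to see which factors have shifted.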

Since each project has its own set of hot-button issues, other major contributors can be substituted. However, be careful to understand the impact of the questions and the interrelationships between the categories. The model recognizes that there will always be some overlap between responses.

Perceived Moderate Business Impact: Projects that are perceived to have a significant business impact are treated as more important than those that are not. There are two aspects to the perception of importance. The first is whether the project team believes that their actions will have an impact on the outcome. The second is whether the organization’s management acts as if they believe that the project will have a significant business impact (acting as if there will be an impact matters more than whether it is “true” – at least in the short term). The perception of whether the impact will be positive or negative is less important than the perception of the degree of the impact (a perception of a large impact will cause a large reaction). Assessment Tip: If both the project team and the organization’s management perceive that the project will have only a moderate business impact, appraise this attribute as a “Y”. If management does not perceive significance, does not act as if there is significance, or acts as if nothing out of the ordinary is occurring, I would strongly suggest rating this attribute as a “Y”.

Lack of Significant Time Sensitivity for Delivery: Time sensitivity is the relationship between value and when the project is delivered. An example is the implied time sensitivity when trying to be first to market with a new product; the perception of time sensitivity creates a sense of urgency, which is central to criticality. While time is one of the legs of the project management iron triangle (identified in the management constraints above), this attribute measures the relationship between business value and delivery date. Assessment Tip: If the team perceives a higher than normal time sensitivity to delivery, appraise this attribute as an ‘N’.

Fall Back Available: All or nothing projects, or projects without fall backs, impart a sense of criticality that is easy to recognize (usually by the large bottles of TUMS on project managers’ desks). These types of projects occur, but they are rare. Assessment Tip: A simple test for whether a project is ‘all or nothing’ is to ask whether the team understands the stakes as: if the project is implemented and works, everybody is good; if it does not work, everyone gets to look for a job. If so, appraise this as an ‘N’. Note: This assumes that the project is planned as an all or nothing scenario (it must be done) and is not just an artifact of poor planning, albeit the impact might be the same.

Low Possibility of Impacting Important Customers: Any software has a possibility of impacting an important customer or block of customers. However, determining the level of that possibility and the significance of the impact, should one occur, can be a bit of an art form (or at least risk analysis). Impact is defined, for this attribute, as an effect that, if noticed, would be outside of the customers’ expectations. Assessment Tip: If the project is targeted at delivering functionality for an important customer, assess this as an ‘N’; if it is not directly targeted but there is a high probability of an impact regardless of whom the change is targeted toward, also assess this attribute as an ‘N’.

Projects Not Interlinked: Projects whose outcomes are linked to other projects require closer communication. The situation is analogous to building a bridge from both sides of a river and hoping the two halves meet in the middle. Tools – such as traceability – that formally identify, communicate and link the requirements of one project to another substantially increase the chances of the bridge meeting in the middle. Note: that is not to say that formally documented traceability is the only method that will deliver results; the model’s strength is that it is driven by the confluence of multiple attributes to derive recommendations. Assessment Tip: If the outcome of a project is required for another project (or vice versa), assess this attribute as an ‘N’. Note: “required” means that one project cannot proceed without the functionality delivered by the other project. It is easy to mistake the interlinking of people for interlinking functionality; I would suggest that the former is a different management problem than the one we are trying to solve.

Not Directly Linked to Paying the Bills: There are projects that are a paler reflection of a “bet the farm” scenario. While there are very few true “bet the farm” projects, there are many projects in the tier just below. These ‘second tier’ projects would cause substantial damage to the business and/or to your CIO’s career if they failed, as they are tied to delivering business value (i.e., paying the bills). Assessment Tip: Projects that meet the “bet the farm” test, or at least the “bet the pasture” test (major impact on revenue or the CIO’s career), can be smelled a mile away; these should be assessed as an “N”. It should be noted that if a project has been raised to this level of urgency artificially, it should be appraised as a “Y”. Another tip: projects whose descriptions include the words SAP or PeopleSoft should automatically be assessed as an “N”.

Indirectly Important to Mission: The title “important to mission” represents a long view of the impact of the functionality being delivered by a project. An easy gauge of importance is whether the project can be directly linked to the current or planned core products of the business. Understanding linkages is critical to determining whether a project is important to the mission of the organization. Remember, projects can be important for paying the bills but not core to the mission of the business. For example, a major component of a clothing manufacturer I worked for after I left university was its transportation group. Projects for this division were important for paying the bills, but they were not directly related to the mission of the business, which was the design and manufacture of women’s clothing. As an outsider, one quick test for importance to mission is to simply ask, “What is the mission of the organization and how does the project support it?” Not knowing the answer is either a sign to ask a lot more questions or a sign that the project is not important to mission. Assessment Tip: If the project is directly linked to the delivery of a core (current or planned) product, assess this attribute as an ‘N’. Appraisal of this attribute can engender passionate debate; most project teams want to believe that the project they are involved in is important to mission. Perception is incredibly important: if there is deeply held passion that the project is directly important to the mission of the organization, assess it as an ‘N’.

Moderate Perceived Value to the Stakeholders: Any perception of value is difficult to gauge at more than an individual level. Where stakeholders are concerned, involvement clouds rational assessment. Simply put, stakeholders perceive most of the projects they are involved in as having more than a moderate level of value. Somewhere in their mind, stakeholders must be asking, “Why would I be involved with anything of merely moderate value?” The issue is that most projects will deliver, at best, average value. Assessment Tip: Assuming you have access to the projected ROI (quantitative and non-quantitative) for the project, you have the basis for a decision. A rule of thumb: if the project is projected to deliver an ROI that is 10% or more of the organization’s or department’s value, appraise this as an ‘N’. Using the derived ROI assumes that the evaluations are worth more than the paper they are printed on; if you are not tracking the delivery of benefits after the project, any published ROI is suspect.

Neutral to Organizational Architecture: This attribute assesses the degree of impact the functionality/infrastructure to be delivered will have on the organization’s architecture. This attribute has a degree of covariance with the ‘architectural impact’ attribute in the previous model component. While related, they are not exactly the same. As an example, the delivered output of a project can be critical (important and urgent) but cause little change (low impact). An explicit example is the installation of a service pack for Microsoft Office. The service pack is typically critical (usually for security reasons) but does not change the architecture of the desktop. Assessment Tip: If delaying the delivery of the project would cause raised voices and gnashing of teeth, appraise this as an ‘N’ and argue impact versus criticality over a beer.

An overall note on the concept of criticality: you will need to account for ‘false urgency’. More than a few organizations oversell the criticality of a project. The overselling is sometimes coupled with yelling, threats and table-pounding to generate a major visual effect. False urgency can have short term benefits, generating concerted action; however, as soon as the team figures out the game, a whipsaw effect (reduced productivity and attention) typically occurs. Gauge whether the words being used to describe how critical a project is match the appraisal vehicle you just created. Mismatches will sooner or later require action to synchronize the two points of view.

The concept of criticality requires a deft touch to assess. It is rarely as cut and dried as making checkmarks on a form. A major component of the assessment has to be the evaluation of what the project team believes. Teams that believe a project is critical will act as if the stress of criticality is real, regardless of other perceptions of reality. Alternately, if a team believes a project is not critical, they will act on that belief, regardless of the truth. Make sure you know how all project stakeholders perceive criticality, or be ready for surprises.

The second component, complexity, is a measure of the number of properties of a project that are judged to be outside of the norm. The applicable norm is relative to the person or group making the judgment. Assessing the team’s understanding of complexity is important because when a person or group perceives something to be complex, they act differently. The concept of complexity can be decomposed into many individual components; in this model, the technical components of complexity are appraised in this category, while the people- or team-driven attributes of complexity are dealt with in the customer involvement section (above). Higher levels of complexity are an important reason for pursuing traceability because complexity decreases a person’s ability to hold a consistent understanding of the problem and solution in their mind. There are just too many moving parts. The inability to develop and hold an understanding in the forefront of your mind increases the need to document understandings and issues to improve consistency.

The model assesses technical complexity by evaluating the following factors:

  1. The project is the size you are used to doing
  2. There is a single manager or right sized management
  3. The technology is well known to the team
  4. The business problem(s) is well understood
  5. The degree of technical difficulty is normal or less
  6. The requirements are stable (ish)
  7. The project management constraints are minor
  8. The architectural impact is minimal
  9. The IT staff perceives the impact to be minimal

As with customer involvement, the assessment process for complexity uses a simple yes or no scale for rating each of the factors. Each factor will require some degree of discussion and introspection to arrive at an answer. An overall assessment tip: a ‘maybe’ is equivalent to a ‘no’. Remember that there is no prize for under- or over-estimating the impact of these variables; value is only gained through an honest self-evaluation.
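A minimal sketch of that rating rule, assuming hypothetical factor names: anything other than an explicit ‘yes’ (including a ‘maybe’) is normalized to ‘no’, and the count of ‘no’ answers shows how many factors sit outside the norm:

```python
def appraise_complexity(answers):
    """Apply the yes/no scale; a 'maybe' is equivalent to a 'no'.

    answers: dict mapping factor name -> 'yes', 'no' or 'maybe'.
    Returns (count of factors outside the norm, normalized answers).
    """
    normalized = {
        factor: ("y" if answer.strip().lower() == "yes" else "n")
        for factor, answer in answers.items()
    }
    out_of_norm = sum(1 for v in normalized.values() if v == "n")
    return out_of_norm, normalized

# Hypothetical appraisal for a mid-sized project.
answers = {
    "normal_size": "yes",
    "right_sized_management": "maybe",   # counts as 'no'
    "well_known_technology": "yes",
    "well_understood_problem": "yes",
    "normal_technical_difficulty": "no",
    "stable_requirements": "maybe",      # counts as 'no'
    "minor_pm_constraints": "yes",
    "minimal_architectural_impact": "yes",
    "minimal_it_staff_impact": "yes",
}
count, normalized = appraise_complexity(answers)
print(count, normalized)   # 3 factors outside the norm
```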

Project is normal size: The size of the project is a direct contributor to complexity; all things being equal, a larger than usual project will require more coordination, communication and interaction than a smaller project. A common error when considering project size is to use cost as a proxy. Size is not the same thing as cost; I suggest estimating the size of the project using standard functional size metrics. Assessment Tip: Organizations with a baseline will be able to statistically determine the point where size causes a shift in productivity. That shift is a signpost for where complexity begins to weigh on the processes being used. In organizations without a baseline, develop and use a rule of thumb; for example, ‘bigger than anything you have done before’ or, as a corollary, ‘the same size as your biggest project’ equates to an ‘N’ rating.

Single Manager/Right Sized Management: There is an old saying, ‘too many cooks in the kitchen spoil the broth’. A cadre of managers supporting a single project can fit the ‘too many cooks’ bill. While it is equally true that a large project will require more than one manager or leader, it is important to understand the implications the number of managers and leaders has on a project. Having the right number of managers and leaders can smooth out issues as they are discovered and assemble and provide status without disturbing the team dynamic, while providing feedback to team members. Having the wrong number of managers will gum up the works of a project (measure the ratio of meeting time to a standard eight-hour day; anything over 25% is a sign to closely check the level of management communication overhead). The additional layers of communication and coordination are the downside of a project with multiple managers (it is easy for a single manager to communicate with himself or herself). One of the most important lessons to be gleaned from the agile movement is that communication is critical (which leads to the conclusion that communication difficulties may trump benefits) and that any process that gets in the way of communication should be carefully evaluated before it is implemented. A larger communication web will need to be traversed with every manager added to the structure, which will require more formal techniques to ensure consistent and effective communication. Assessment Tip: Projects with more than five managers and leaders, or a worker-to-manager ratio lower than eight workers to one manager/leader (with more than one manager), should assess this attribute as an ‘N’.
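The numeric rules of thumb in this tip are easy to encode. A sketch, assuming the thresholds named above (the function name and sample inputs are illustrative):

```python
def management_overhead_flags(workers, managers, meeting_hours_per_day):
    """Return the warning signs described above as booleans."""
    too_many_managers = managers > 5
    # Ratio rule only applies when there is more than one manager.
    low_ratio = managers > 1 and (workers / managers) < 8
    # More than 25% of an eight-hour day (i.e., > 2 hours) in meetings.
    heavy_meetings = (meeting_hours_per_day / 8.0) > 0.25
    return {
        "too_many_managers": too_many_managers,
        "low_worker_manager_ratio": low_ratio,
        "meeting_overhead_high": heavy_meetings,   # a sign to investigate
        "appraise_as_N": too_many_managers or low_ratio,
    }

# 14 workers, 2 managers, 2.5 hours of meetings a day.
print(management_overhead_flags(workers=14, managers=2,
                                meeting_hours_per_day=2.5))
```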

Well Known Technology: The introduction of a technology that is unfamiliar to the project team will require more coordination and interaction. While introducing one or two experienced ‘hired guns’ into the group is a good step toward ameliorating the impact, it may not be sufficient (and may complicate communication in its own right). I would suggest that until all relevant team members surmount the learning curve, new technologies will require more formal communication patterns. Assessment Tip: If less than 50% of the project team has worked with the technology on previous projects, assess the attribute as an ‘N’.

Well Understood Business Problem: A project team that has access to an understanding of the business problem being solved by the project will have a higher chance of solving it. The amount of organizational knowledge the team has will dictate the level of analysis and communication required to find a solution. Assessment Tip: If the business problem is not well understood or has not been dealt with in the past, this attribute should be assessed as an ‘N’.

Low Technical Difficulty: The term ‘technical difficulty’ has many definitions. The plethora of definitions means that measuring technical difficulty requires reflecting on many project attributes. The attributes that define technical difficulty initially show themselves as difficulties in describing the solutions and alternatives for solving the problem. Technical difficulty can involve algorithms, hardware, software, data, logic or any combination of components. Assessment Tip: If it is difficult to frame the business problem in technical terms, assess this attribute as an ‘N’.

Stable Requirements: Requirements typically evolve as a project progresses (and that is a good thing). Capers Jones indicates that requirements grow approximately 2% per calendar month across the life of a project. Projects that are difficult to define, or where project personnel or processes allow requirements to be amended or changed in an ad hoc manner, should anticipate above average scope creep or churn. Assessment Tip: If historical data indicates that the project team, customer and application combination tends to have scope creep or churn above the norm, assess this attribute as an ‘N’ unless there are procedural or methodological means to control change. (Note: control does not mean stopping change, but rather ensuring that it happens in an understandable manner.)
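As a worked example of the growth figure cited above, a short sketch compounding 2% per calendar month (the starting count and duration are hypothetical):

```python
def projected_scope(initial_requirements, months, monthly_growth=0.02):
    """Compound the ~2% per calendar month growth rate cited above."""
    return initial_requirements * (1 + monthly_growth) ** months

# A project starting with 500 requirements and running 12 months:
print(round(projected_scope(500, 12)))   # ~634, roughly 27% growth
```

Even at a modest monthly rate, the compounding means a year-long project should plan for a materially larger scope than it started with.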

Minor Project Management Constraints: Project managers have three macro levers (cost, scope and time) available to steer a project. When those levers are constrained or locked (by management, users or contract), any individual problem becomes more difficult to address. Formal communication becomes more important as options are constrained. Assessment Tip: If more than one of the legs of the project management iron triangle is fixed, assess this attribute as an ‘N’.

Minimal Architectural Impact: Changes to the standard architecture of the application(s) or organization will increase complexity on an exponential scale. This increased complexity will increase the amount of communication required to ensure a trouble-free change. Assessment Tip: If you anticipate modifications (small or wholesale) to the standard architectural footprint of the application or organization, assess this attribute as an ‘N’.

Minimal IT Staff Impact: There are many ways a project can impact an IT staff, ranging from process related changes (how work is done) to outcome related changes (employment or job duties). Negative impacts are most apt to require increased formal communication, and therefore the use of more highly documented and granular traceability methods. Negative process impacts are those driven by the processes used or by organizational constraints (e.g. death marches, poorly implemented processes, galloping requirements and resource constraints). Outcome related impacts are those driven by the solution delivered (e.g. outsourcing, downsizing, and new applications/solutions). Assessment Tip: Any perceived negative impact on the team, or on the organization closely associated with the team, should be viewed as not neutral (assess as an ‘N’), unless you are absolutely certain you can remediate the impact on the team doing the work. Reassess often to avoid surprises.

Ruminating on Customer Involvement

Customer involvement can be defined as the amount of time and effort applied to a project by the customers (or users) of the project. Involvement can be both good (e.g. knowledge transfer and decision making) and bad (e.g. interference and indecision). The goal in using the traceability model is to force the project team to predict both the quality and quantity of customer involvement as accurately as possible across the life of a project. While the question of the quality and quantity of customer involvement is important for all projects, it becomes even more important as Agile techniques are leveraged. Customer involvement is required for the effective use of Agile techniques and reduces the need for classic traceability: involvement replaces documentation with a combination of lighter documentation and interaction with the customer.

Quality can be unpacked to include attributes such as competence: knowledge of the problem space, knowledge of the process, and the ability to make decisions that stick. Assessing the quality attributes of involvement requires understanding how having multiple customer and/or user constituencies involved in the project outcome can change the complexity of the project. For example, the impact of multiple customer and user constituencies on decision making, specifically the ability to make decisions correctly and on a timely basis, will influence how a project needs to be run. Multiple constituencies complicate decision making, which drives the need for structure. As the number of groups increases, the number of communication nodes increases, making it more difficult to get enough people involved in a timely manner. Although checklists are used to facilitate the model, users should remember that knowledge of the project and of project management is needed to use the model effectively; the lists of attributes do not make this a merely check-the-box method.

The methodical assessment of the quantity and quality of customer involvement requires determining the leading indicators of success. Professional experience suggests a standard set of predictors for customer involvement, which are incorporated into the appraisal questions below. These predictors are as follows:

  1. Agile methods will be used    y/n
  2. The customer will be available more than 80% of the time    y/n
  3. User/customer will be co-located with the project team    y/n
  4. Project has a single primary customer    y/n
  5. The customer has adequate business knowledge    y/n
  6. The customer has knowledge of how development projects work    y/n
  7. Correct business decision makers are available    y/n
  8. Team members have a high level of interpersonal skills    y/n
  9. Process coaches are available    y/n

The assessment process simplifies evaluation by using a simple yes-no scale. Gray areas like ‘maybe’ are evaluated as equivalent to a ‘no’. While the rating scale is simple, the discussion required to get to a yes-no decision is typically far less simple.

Agile methods will be used: The first component in the evaluation is to determine whether the project intends to use disciplined Agile methods. The term ‘disciplined’ is used on purpose. Agile methods like XP are sets of practices that interact to create development supported by intimate communication. Without the discipline, or without critical practices, the communication alone will not suffice. Assessment tip: Using a defined Agile process equates to a ‘Y’; making it up as you go equates to an ‘N’.

Customer availability (>80%): Intense customer interaction is required to ensure effective development and to reduce reliance on classically documented traceability. Availability is defined as the total amount of time the primary customer is available. If customers are not available, a lack of interaction is a foregone conclusion. I have found that Agile methods (which require intense communication) tend to lose traction when customer availability drops below 80%. Assessment Tip: Assess this attribute as a ‘Y’ if primary customer availability is above 80%; otherwise, barring very special circumstances, assess it as an ‘N’.

Co-located customer/user: Co-location is an intimate implementation scenario of customer/user availability. The intimacy that co-location provides can be leveraged as a replacement for documentation-based communication by using less formal techniques like whiteboards and sticky notes. Assessment Tip: Stand up and look around; if you don’t have a high probability of seeing your primary customer (unless it is lunch time), rate this attribute as an ‘N’. Leveraging metaverse tools (e.g. Second Life or similar) can mitigate some of the problems of disparate physical locations.

Project Has A Single Customer: As the number of primary customers increases, the number of communication paths required for creating and deploying the project grows far faster than the headcount. The impact that the number of customers has on communication is not linear; it is more easily conceived of as a web. Each node in the web will require attention (attention = communication) to coordinate activities. Assessment Tip: Count the number of primary customers; if you need more than one finger, assess this question as an ‘N’.
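One common way to make the web concrete is the standard communication-channel formula, n(n-1)/2 potential paths among n people; a short sketch (the team size is hypothetical):

```python
def communication_paths(people):
    """Fully connected web: n * (n - 1) / 2 potential channels."""
    return people * (people - 1) // 2

# A team of 6, with 1, 2 and then 4 primary customers added:
for customers in (1, 2, 4):
    print(customers, communication_paths(6 + customers))
# 7 people -> 21 paths, 8 -> 28, 10 -> 45
```

Going from one primary customer to four more than doubles the paths the team must tend, which is exactly why the one-finger rule above is so blunt.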

Business Knowledge: The quality and quantity of business knowledge the team can draw upon is inversely related to the amount of documentation-based communication needed. The availability of solid business knowledge affects the amount of background that needs to be documented to establish the team’s bona fides. It should be noted that sourcing long-term business knowledge solely in human repositories can be argued to be a risk. Assessment Tip: Assessing the quality and quantity of business knowledge will require introspection and fairly brutal honesty, but do not sell the team or yourself short.

Knowledge of How Development Projects Work: All team members, whether they fill a hardcore IT role or the most ancillary user role, need to understand both their project responsibilities and how they will contribute to the project. The more intrinsically participants understand their roles and responsibilities, the less effort a project will typically waste on non-value-added activities (like re-explaining how work is done). Assessment Tip: This is an area that can be addressed after assessment through training. If team members cannot be trained or educated as to their role, appraise this attribute as an ‘N’.

Decision Makers: This attribute addresses the process that leads to the selection of a course of action. Most IT projects have a core set of business customers who are the decision makers for requirements and business direction. Knowing who can make a decision (and have it stick), and then having access to them, is critical. Having a set of customers available or co-located is not effective if they are not the decision makers (‘the right people’). The perfect customer for a development project is available, co-located and able to make decisions that stick (and is very apt not to be the person provided). Assessment Tip: This is another area that can only be answered after soul-searching introspection (i.e. thinking about it over a beer). If your customer has to check with a nebulous puppet master before making critical decisions, the assessment response should be an “N”.

High Level of Interpersonal Skills: All team members must be able to interact and perform as a team. Insular or otherwise non-team-conducive behavior will cause communications to pool and stagnate as team members either avoid the non-team player or the offending party holds on to information at inopportune times. Non-team behavior within a team is bad regardless of the development methodology being used. Assessment Tip: Teams that have worked together and crafted a good working relationship can typically answer this as a “Y”.

Facilitation: Projects perform more consistently with coaching (and seem to deliver better solutions); however, coaching as a practice has not been universally adopted. The role that has been universally embraced is project manager (PM). Coaches and project managers typically play two very different roles. The PM role has an external focus and acts as the voice of the process, while the coach role has an internal focus and acts as the voice of the team (outside vs. inside, process vs. people). Agile methods implement coach and PM as two very different roles, even though they can co-exist. Coaches nurture the personnel on the project, helping them do their best (remember your last coach). Shouldn’t the same facility be leveraged on all projects? Assessment Tip: Evaluate whether a coach is assigned; if yes, answer affirmatively. If the role is not formally recognized within the group or organization, take care even if a coach is appointed.

Three core concepts.

My model for scaling traceability is based on the assumption that there is a relationship among customer involvement, criticality and complexity, and that this relationship yields the level of documentation required to achieve the benefits of traceability. The model leverages an assessment of the project attributes that define three common concepts. The concepts are:

  • Customer involvement in the project
  • Complexity of the functionality being delivered
  • Criticality of the project

A thumbnail definition of each of the three concepts begins with customer involvement, defined as the amount of time and effort applied to a project in a positive manner by the primary users of the project. The second concept, complexity, is a measure of the number of project properties that are outside normal expectations as perceived by the project team (the norm is relative to the organization or project group rather than to any external standard). The final concept, criticality, is defined as the quality, state or degree of being of the highest importance (again, relative to the organization or group doing the work). We will unpack these concepts and examine them in greater detail as we peel away the layers of the model.

The Model

The process for using the model is a simple set of steps (steps 2 through 4 are sketched in code after this list):
1. Get a project (and team members)
2. Assess the project’s attributes
3. Plot the results on the model
4. Interpret the findings
5. Reassess as needed
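A minimal sketch of steps 2 through 4, under my own illustrative assumptions (not prescribed by the model): each axis is summarized as its count of ‘y’ answers out of nine, and lower totals push the project toward more formal, documented traceability. The thresholds below are placeholders, not calibrated values.

```python
def assess(answers):
    """Step 2: count the 'y' responses for one axis (0..9)."""
    return sum(1 for a in answers if a.lower() == "y")

def interpret(involvement, complexity, criticality):
    """Steps 3-4: place the project on the documentation/involvement
    continuum. The thresholds here are illustrative assumptions."""
    # Many 'n' answers on any axis argue for more documented traceability.
    score = involvement + complexity + criticality   # 0..27
    if score >= 21:
        return "lean: rely on interaction, light documentation"
    if score >= 12:
        return "mixed: trace key requirements formally"
    return "formal: documented, granular traceability"

# Hypothetical appraisal results for the three axes.
involvement = assess(list("yyyyynnyy"))   # 7 of 9
complexity  = assess(list("yynyyyyny"))   # 7 of 9
criticality = assess(list("nnyynnyyn"))   # 4 of 9
print(interpret(involvement, complexity, criticality))
```

Step 5 is simply rerunning the same assessment later in the project and comparing the placement, since the answers can and will change over time.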

The model is built for project environments. Don’t have a project you say!  Get one, I tell you! Can’t get one? This model will be less useful, but not useless.

Who Is Involved And When Will They Be Involved:

Implementing the traceability model assessment works best when the team (or a relevant subset) charged with doing the work conducts the assessment of project attributes. The use of team members turns Putt’s theory of “Competence Inversion” on its head by focusing project-level competencies on defining the impact of specific attributes. Using a number of team members will also provide a basis for consistency if assessments are performed again later in the project.

While the assessment process is best done by a cross-functional team, it can also be performed by those in the project governance structure alone. The smaller the group involved in the assessment, the more open and honest the communication between the assessment group and the project team must be, or the exercise will be just another process inflicted on the team. Regardless of size, the assessment team needs to include technical competence. Technical competence is especially useful when appraising complexity, and it is also a good tool for selling the results of the process to the rest of the project team. Regardless of the deployment model, the diversity of thought generated in cross-functional groups provides the breadth of knowledge needed to apply the model (this suggestion is based on feedback from process users). The use of cross-functional groups becomes even more critical for large projects and/or projects with embedded sub-projects. In situations where the discussion will be contentious or the participating group will be large, I suggest using a facilitator to ensure an effective outcome.

An approach I suggest for integrating the assessment into your current methodology is to incorporate it into your formal risk assessment. An alternative for smaller projects is to perform the assessment during initial project planning activities or in a sprint zero (if used). This will minimize the impact of yet another assessment.

In larger projects, where the appraisal outcome may vary across teams or sub-projects, thoughtful discussion will be required to determine whether the lowest common denominator will drive the results or whether a mixed approach is needed. Use of this method in the real world suggests that in large projects/programs the highest or lowest common denominator is seldom universally useful. The need for scalability should be addressed at the level that makes sense for the project, which may mean that sub-projects differ.

Traceability becomes a tool that can bridge the gaps caused by less than perfect involvement.

Traceability is an important tool in software engineering and a core tenet of the CMMI. It is used as a tool for the management and control of requirements. Controlling and understanding the flow of requirements puts a project manager’s hand on the throttle of the project by controlling the flow of work through the project. However, it is both hard to accomplish and requires focused application to derive value. When does the control generated represent the proper hand on the throttle, and when is it a lead foot on the brake?
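To ground the ‘hand on the throttle’ idea, here is a minimal sketch of the classic documented form, a requirements traceability matrix, linking each requirement forward to design elements and tests so that unlinked work becomes visible (all identifiers are hypothetical):

```python
# Requirement id -> downstream artifacts that realize and verify it.
matrix = {
    "REQ-001": {"design": ["DES-010"], "tests": ["TC-101", "TC-102"]},
    "REQ-002": {"design": ["DES-011"], "tests": []},   # gap: unverified
    "REQ-003": {"design": [], "tests": []},            # gap: unrealized
}

def gaps(matrix):
    """Surface requirements missing design or test coverage."""
    return {
        req: [kind for kind, links in artifacts.items() if not links]
        for req, artifacts in matrix.items()
        if not all(artifacts.values())
    }

print(gaps(matrix))   # {'REQ-002': ['tests'], 'REQ-003': ['design', 'tests']}
```

The same structure, read in reverse, also flags delivered functionality that traces back to no requirement at all, which is the other failure mode the next paragraph describes.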

The implementation of traceability sets the stage for a struggle over processes mandated by management or the infamous “model”. Developers actively resist process when they perceive that the effort isn’t directly leading to functionality that can be delivered, and is therefore not delivering value to their customers. In the end, traceability, like insurance, is best when you don’t need the information it provides to sort out uncontrolled project changes or functionality delivered that is not related to requirements.

Identifying both the projects and the audience that can benefit from traceability is paramount for implementing and sustaining the process.  Questions that need to be asked and addressed include:

  • Is the need for control for all types of projects the same?
  • Is the value-to-effort ratio from tracing requirements the same for all projects?
  • What should be evaluated when determining whether to scale the traceability process?

Scalability is a needed step to extract the maximum value from any methodology component, traceability included, regardless of whether the project is plan-driven or Agile. A process is needed to ensure that traceability occurs based on a balance among process, effort and complexity.

The concept of traceability acts as a lightning rod for the perceived excesses of CMMI (and by extension all other model-based improvement methods). I will explore a possible approach for scaling traceability. My approach bridges the typical approach (leveraging matrices and requirements tools) with one that trades documentation for intimate user involvement. It uses a simple set of three criteria (complexity, user involvement and criticality) to determine where a project should focus its traceability effort on a continuum between documentation and involvement.

Traceability becomes a tool that can bridge the gaps caused by less than perfect involvement, a complex project, and increased criticality. The model proposed here provides a means to apply traceability in a scaled manner so that it fits a project’s needs and is not perceived as a one-size-fits-all approach.

Best practices aren't magic and neither are goblins.

To paraphrase Edwin Starr: “Best practices, huh, what are they good for? Absolutely nothing! Say it again . . .”

Every organization wants to use best practices. How many organizations do you know that would stand up and say, “We want to use average practices”? Therefore, a process with the moniker “best practice” has an allure that is hard to resist. The problem is that one organization’s best practice is another’s average process, even if they produce the same quality and quantity of output. Or, even worse, one organization’s best practice might be beyond another organization’s reach. A process reflects its overall organizational context. It is possible that adopting a new process wholesale could produce output faster or better, but without tailoring, the outcome is more random than many consultants would suggest. For example, just buying a configuration management tool without changing how you do configuration management will be less effective than melding the tool with your processes. Tailoring allows you to adapt the process to the attributes of your current organizational context, such as the organization’s overall size or the capabilities of the people involved.

An example of an organization’s best practice that might not translate to all of its competitors is the sophisticated inventory control systems used at Walmart. Would Walmart’s computer system help a local grocery store (let’s call it Hometown Grocery)? Not likely; the overhead of the same system would be beyond Hometown’s IT capabilities and budget. However, if hundreds of Hometown Groceries banded together, the answer might be different (tailoring the process to the environmental context). Without tailoring to the context, the best practice for Walmart would not be a best practice for our small-town grocery.

The term “best practice” gets thrown around as if there were a dusty old tome full of magical incantations that will solve any crisis regardless of context (assuming you are a seventh-level mage). There are those who hold up CMMI, ISO or Scrum and shout (usually on email lists) that theirs is the only way. Let’s begin by putting to rest the idea that there is a one-size-fits-all solution to every job. There isn’t, and there never was any such animal. Any individual process, practice or step that worked wonderfully at the company down the street will not work the same way for you, especially if you try to do it the same way they did. Software development and maintenance isn’t a chemical reaction, a Lego construct or even magic. Best practices, what are they good for? Fortunately a lot, if used correctly.

Best practices find their highest value as a comparison tool that exposes the assumptions that have been used to build or evolve your own processes. That knowledge allows you to challenge how and why you are doing any specific step and provides an opportunity for change. How many companies have embraced the tenets of the Toyota Production System after benchmarking Toyota?

Adopting best practices without regard to your context may not yield the benefits found on the box. If you read the small print, you’d see a warning: use best practices only after reading all of the instructions and understanding your goals and your environment. This is not to say that exemplary practices should not be aggressively studied and translated into your organization. Ignoring new ideas because they did not grow out of your context is just as crazy as embracing best practices without understanding the context they were created in. Best practices make sense as an ideal, as a comparison that helps you understand your organization, not as plug-compatible modules.

Listen to the Software Process and Measurement Cast 304

Software Process and Measurement Cast number 304 features our interview with Jamie Lynn Cooke, author of The Power of the Agile Business Analyst. We discussed the definition of an Agile business analyst and what they actually do on Agile projects. Jamie provides a clear and succinct explanation of the role and the huge value Agile business analysts bring to projects!

Jamie Lynn’s Bio:
Jamie Lynn Cooke has 24 years of experience as a senior business analyst and solutions consultant, working with more than 130 public and private sector organizations throughout Australia, Canada, and the United States.

She is the author of The Power of the Agile Business Analyst: 30 surprising ways a business analyst can add value to your Agile development team, which details how Agile business analysts can increase the relevance, quality and overall business value of Agile projects; Agile Principles Unleashed, a book written specifically to explain Agile in non-technical business terms to managers and executives outside of the IT industry; Agile: An Executive Guide: Real results from IT budgets, which gives IT executives the tools and strategies needed for bottom-line business decisions on using Agile methodologies; and Everything You Want to Know About Agile: How to get Agile results in a less-than-Agile organization, which gives readers strategies for aligning Agile work within the reporting, budgeting, staffing, and governance constraints of their organization. Also check out Agile Productivity Unleashed: Proven Approaches for Achieving Real Productivity Gains in Any Organization (Second Edition)!

Jamie has a Bachelor of Science in Engineering Psychology (Human Factors Engineering) from Tufts University in Medford, Massachusetts; and a Graduate Certificate in e-Business/Business Informatics from the University of Canberra in Australia.

You can find her website here.

 

Next

Software Process and Measurement Cast number 305 will feature our essay on estimation (here is our essay on specific topics within estimation). Estimation is a hotbed of controversy. But perhaps first we should synchronize on just what we think the word means. Once we have a common vocabulary, we can commence with the fisticuffs. In SPaMCAST 305 we will not shy away from a hard discussion.

Upcoming Events

I will be presenting at the International Conference on Software Quality and Test Management in San Diego, CA on October 1.  I have a great discount code!!!! Contact me if you are interested.

I will be presenting at the North East Quality Council 60th Conference October 21st and 22nd in Springfield, MA.

More on all of these great events in the near future! I look forward to seeing all SPaMCAST readers and listeners that attend these great events!

The Software Process and Measurement Cast has a sponsor.

As many of you know, I do at least one webinar for the IT Metrics and Productivity Institute (ITMPI) every year. The ITMPI provides a great service to the IT profession. ITMPI’s mission is to pull together the expertise and educational efforts of the world’s leading IT thought leaders and to create a single online destination where IT practitioners and executives can meet all of their educational and professional development needs. The ITMPI offers a premium membership that gives members unlimited free access to 400 PDU-accredited webinar recordings and waives the PDU processing fees on all live and recorded webinars. The Software Process and Measurement Cast receives some support if you sign up here. All the revenue our sponsorship generates goes toward bandwidth, hosting and new cool equipment to create more and better content for you. Support the SPaMCAST and learn from the ITMPI.

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques, co-authored by Murali Chemuturi and myself, is published by J. Ross Publishing. We have received unsolicited reviews like the following: “This book will prove that software projects should not be a tedious process, neither for you or your team.” Support SPaMCAST by buying the book here.

Available in English and Chinese.
