Overall project size influences variability.

Risk is a reflection of possibility, the stuff that could happen if the stars align, and so projects are highly influenced by variability. Many factors influence variability, including complexity, process discipline, people and the size of the work. The impact of size is felt in two separate but equally important ways. The first is the size of the overall project and the second is the size of any particular unit of work.

Overall project size influences variability by increasing the sheer number of moving parts that have to relate to each other. For example, the assembly of an automobile is a large endeavor, the culmination of a number of relatively large subprojects. Any significant variance in how the subprojects come together along the path of building the automobile will cause problems in the final deliverable. Large software projects require extra coordination, testing and integration to ensure that all of the pieces fit together, deliver the functionality customers and stakeholders expect and behave properly. Each of these extra steps increases the possibility of variance.

Similarly, large pieces of work (user stories in Agile) cause the same problems noted for large projects, but at the team level. For example, when any piece of work enters a sprint, the first step in transforming that story into value is planning. Large pieces of work are more difficult to plan, if for no other reason than that they take longer to break down into tasks, increasing the likelihood that something will be missed and generate a “gotcha” later in the sprint.

Whether at a project or sprint level, smaller is generally simpler, and simpler generates less variability. There are a number of techniques for managing size.

  1. Short, complete sprints or iterations. The impact of time boxing on reducing project size has been discussed and understood in the mainstream since the 1990s (see Hammer and Champy, Reengineering the Corporation). Breaking a project into smaller parts reduces the overhead of coordination and provides faster feedback and more synchronization points, which reduces variability. The duration of a sprint acts as a constraint on the amount of work that can be taken into a sprint and reasonably be completed.
  2. Team sizes of five to nine are large enough to tackle real work while maintaining the stable team relationships needed to coordinate the development and integration of functionality that can potentially be shipped at the end of each sprint. Team size constrains the amount of work that can enter a sprint and be completed (a sketch of this constraint follows the list).
  3. Splitting user stories is a mechanism to reduce the size of a specific piece of work so that it can be completed faster with fewer of the potential complications that cause variability. The process of splitting user stories breaks stories down into smaller independent pieces (INVEST) that can be developed and tested faster. Smaller stories are less likely to block the entire team if anything goes wrong. This reduces variability and generates feedback more quickly, thereby reducing the potential for large batches of rework if the solution does not meet stakeholder needs. Small stories increase flow through the development process, reducing variability.
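
A minimal sketch of the size constraint described in points one and two, assuming a simple effort-based backlog; the focus factor, story names and numbers are invented for illustration and are not part of the original text:

```python
# Sketch: a fixed time box and team size bound how much work enters a sprint.
# The focus factor and the effort figures are illustrative assumptions.

def sprint_capacity(team_size, sprint_days, focus_factor=0.7):
    """Rough capacity in person-days, discounted for meetings and interruptions."""
    return team_size * sprint_days * focus_factor

def fill_sprint(backlog, capacity):
    """Take stories in priority order until the time box is full."""
    taken, used = [], 0.0
    for story, effort in backlog:  # backlog is already priority-ordered
        if used + effort <= capacity:
            taken.append(story)
            used += effort
    return taken

backlog = [("login", 8), ("search", 13), ("export", 5), ("audit log", 20)]
print(fill_sprint(backlog, sprint_capacity(team_size=6, sprint_days=10)))
# -> ['login', 'search', 'export']; the 20-day story exceeds what remains
```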

I learned many years ago that supersizing fries at the local fast food establishment was a bad idea in that it increased the variability in my caloric intake (and waistline). Similarly, large projects are subject to increased variability: there are just too many moving parts, which leads to variability and risk. Large user stories have exactly the same issues as large projects, just on a smaller scale. The Agile techniques of short sprints and small team size provide constraints so that teams can control the size of the work they are considering at any point in time. Teams need to take the additional step of breaking down stories into smaller pieces to continue to minimize the potential impact of variability.

Does the raft have a peak load?

Software development is inherently complex and therefore risky. Historically, many techniques have been leveraged to identify and manage risk. As noted in Agile and Risk Management, much of the risk in projects can be laid at the feet of variability. Complexity is one of the factors that drives variability. Spikes, prioritization and fast feedback are important techniques for recognizing and reducing the impact of complexity.

  1. Spikes provide a tool to develop an understanding of an unknown within a time box. Spikes are a simple tool used to answer a question, gather information, perform a specific piece of basic research, address project risks or break down a large story. Spikes generate knowledge, which both reduces the complexity intrinsic in unknowns and provides information that can be used to simplify the problem being studied in the spike.
  2. Prioritization is a tool used to order the project or release backlog. Prioritization is also a powerful tool for reducing the impact of variability. It is generally easier to adapt to a negative surprise earlier in the lifecycle. Teams can allocate part of their capacity to the most difficult stories early in the project, using complexity as a criterion for prioritization (a sketch of this approach follows the list).
  3. Fast feedback is the single most effective means of reducing complexity (short of avoidance). Core to the DNA of Agile and lean frameworks is the “inspect and adapt” cycle. Deming’s Plan-Do-Check-Act cycle is one representation of “inspect and adapt,” as are retrospectives, pair programming and peer reviews. Short iterative cycles provide a platform to effectively apply the “inspect and adapt” model to reduce complexity based on feedback. When teams experience complexity they have a wide range of tools to share and seek feedback, ranging from daily stand-up meetings to demonstrations and retrospectives.
    Two notes on feedback:

    1. While powerful, feedback only works if it is heard.
    2. The more complex (or important) a piece of work is, the shorter the feedback cycle should be.
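
A minimal sketch of the prioritization point above, ordering a backlog so the most complex stories surface early; the story names and ratings are hypothetical:

```python
# Sketch: use complexity as a prioritization criterion so negative
# surprises arrive early in the lifecycle, when adapting is cheapest.
# Story names and complexity scores are invented for illustration.

backlog = [
    {"story": "payment gateway", "complexity": 9},
    {"story": "profile page",    "complexity": 2},
    {"story": "data migration",  "complexity": 8},
]

for item in sorted(backlog, key=lambda s: s["complexity"], reverse=True):
    print(item["story"], "-> complexity", item["complexity"])
# payment gateway -> complexity 9
# data migration -> complexity 8
# profile page -> complexity 2
```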

Spikes, prioritization and feedback are common Agile and lean techniques. The fact that they are common has led some Agile practitioners to feel that frameworks like Scrum have negated the need to deal specifically with risk at the team level. These are powerful tools for identifying and reducing complexity and the variability complexity generates; however, they need to be combined with other tools and techniques to manage the risk that is part of all projects and releases.

We brush our teeth to avoid cavities; a process and an outcome.

There is no consensus on how risk is managed in Agile projects or releases, or whether there is a need for any special steps. Risk is defined as “any uncertain event that can have an impact on the success of a project.” Risk management, on the other hand, is about predicting the future. Risk management is a reflection of the need to manage the flow of work in the face of variability. Variability in software development can be defined as the extent to which any outcome differs from expectations. Understanding and exploiting that variability is the essence of risk management. Variability is exacerbated by a number of factors, such as the following.

  1. Complexity in any problem is a reflection of the number of parts to the problem, how those parts interact and the level of intellectual difficulty. The more complex a problem, the less likely you will be able to accurately predict the delivery of a solution to the problem. Risk is reduced by relentlessly focusing on good design and simplifying the problem. Reflecting the Agile principle of simplicity, reducing the work to only what is absolutely needed is one mechanism for reducing variability.
  2. Size of a project or release influences overall variability by increasing the number of people involved in delivering an outcome and therefore the level of coordination needed. Size is a form of complexity. Size increases the possibility that there will be interactions and dependencies between components of the solution. In a perfect world, the work required to deliver each requirement and system component would be independent; since dependencies can’t always be avoided, projects need mechanisms to identify and deal with them. An example of a mechanism to address the complexity created by size is the release planning event in SAFe.
  3. Ad hoc or uncontrolled processes can’t deliver a consistent output. We adopt processes in order to control the outcome. We brush our teeth to avoid cavities; a process and an outcome. When a process is not under control, the outcome is not predictable. In order for a process to be under control, the process needs to be defined and then performed based on that definition. Processes need to be defined in as simple and lean a manner as possible to be predictable.
  4. People are chaotic by nature. Any process that includes people will be more variable than the same process without people. Fortunately we have not reached the stage in software development where machines have replaced people (watch any of the Terminator or Matrix movies). The chaotic behavior of teams, or teams of teams, can be reduced through alignment. Alignment can be achieved at many levels. At the release or program level, developing a common goal and understanding of the business need helps to develop alignment. At the team level, stable teams help team members build the interpersonal bonds that lead to alignment. Training is another tool to develop or enhance alignment.

Software development, whether the output is maintenance, enhancements or new functionality, is a reflection of processes that have a degree of variability. Variability, while generally conceived of as bad and therefore to be avoided, can be good or bad. Finding a new way to deliver work that increases productivity and quality is good. The goal for any project or release is to reduce the possibility of negative variance and then learn to use what remains to best advantage.

The second component, complexity, is a measure of the number of properties of a project that are judged to be outside of the norm. The applicable norm is relative to the person or group making the judgment. Assessing the team’s understanding of complexity is important because when a person or group perceives something to be complex, they act differently. The concept of complexity can be decomposed into many individual components; for this model, the technical components of complexity are appraised in this category. The people- or team-driven attributes of complexity are dealt with in the user involvement section (above). Higher levels of complexity are an important reason for pursuing traceability, because complexity decreases the ability of a person to hold a consistent understanding of the problem and solution in their mind. There are just too many moving parts. The inability to develop and hold an understanding in the forefront of your mind increases the need to document understandings and issues to improve consistency.

The model assesses technical complexity by evaluating the following factors:

  1. The project is the size you are used to doing
  2. There is a single manager or right-sized management
  3. The technology is well known to the team
  4. The business problem(s) is well understood
  5. The degree of technical difficulty is normal or less
  6. The requirements are stable (ish)
  7. The project management constraints are minor
  8. The architectural impact is minimal
  9. The IT staff perceives the impact to be minimal

As with customer involvement, the assessment process for complexity uses a simple yes or no scale for rating each of the factors. Each factor will require some degree of discussion and introspection to arrive at an answer. An overall assessment tip: a maybe is equivalent to a ‘no’. Remember that there is no prize for under- or over-estimating the impact of these variables; value is only gained through an honest self-evaluation.
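
A minimal sketch of the yes/no tally described above; the factor wording paraphrases the list, the “maybe counts as ‘N’” rule comes from the assessment tip, and everything else (function names, example answers) is invented for illustration:

```python
# Sketch: score the nine complexity factors on a yes/no scale.
# Anything that is not a clear 'Y' (including "maybe") counts as 'N'.

FACTORS = [
    "project is the size you are used to doing",
    "single manager or right-sized management",
    "technology is well known to the team",
    "business problem is well understood",
    "technical difficulty is normal or less",
    "requirements are stable(ish)",
    "project management constraints are minor",
    "architectural impact is minimal",
    "IT staff perceives the impact to be minimal",
]

def count_n(answers):
    """answers maps factor -> 'Y', 'N', or 'maybe'; returns the count of 'N's."""
    return sum(1 for f in FACTORS if answers.get(f, "maybe").upper() != "Y")

answers = {f: "Y" for f in FACTORS}
answers["requirements are stable(ish)"] = "maybe"  # treated as 'N'
print("factors rated N:", count_n(answers))  # -> 1
```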

Project is normal size: The size of the project is a direct contributor to complexity; all things being equal, a larger-than-usual project will require more coordination, communication and interaction than a smaller project. A common error when considering the size of a project is to use cost as a proxy. Size is not the same thing as cost. I suggest estimating the size of the project using standard functional size metrics. Assessment Tip: Organizations with a baseline will be able to statistically determine the point where size causes a shift in productivity (see the sketch below). The shift is a signpost for where complexity begins to weigh on the processes being used. In organizations without a baseline, develop and use a rule of thumb. Consider using the rule ‘bigger than anything you have done before’ or the corollary ‘the same size as your biggest project’. These equate to an ‘N’ rating.
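
A minimal sketch of the baseline check in the assessment tip; the data points, size cutoff and units are all invented for illustration, since a real baseline would come from the organization’s own completed projects:

```python
# Sketch: compare average productivity above and below a size cutoff
# to find where size starts to weigh on the process.
from statistics import mean

# (functional size in FP, productivity in FP per person-month) -- invented data
baseline = [(100, 12.0), (150, 11.5), (300, 10.8),
            (600, 7.9), (750, 7.1), (1200, 5.6)]

def productivity_split(data, cutoff):
    small = mean(p for size, p in data if size <= cutoff)
    large = mean(p for size, p in data if size > cutoff)
    return small, large

small_avg, large_avg = productivity_split(baseline, cutoff=500)
print(f"<=500 FP: {small_avg:.1f} FP/pm, >500 FP: {large_avg:.1f} FP/pm")
# -> <=500 FP: 11.4 FP/pm, >500 FP: 6.9 FP/pm
```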

Single Manager/Right-Sized Management: There is an old saying that ‘too many cooks spoil the broth’. A cadre of managers supporting a single project can fit the ‘too many cooks’ bill. While it is equally true that a large project will require more than one manager or leader, it is important to understand the implications the number of managers and leaders will have on a project. Having the right number of managers and leaders can smooth out issues as they are discovered, and can assemble and provide status and feedback to team members without impacting the team dynamic. Having the wrong number of managers will gum up the works of a project; measure the ratio of meeting time to a standard eight-hour day, and treat anything over 25% as a sign to closely check the level of management communication overhead (a sketch of this check follows below). The additional layers of communication and coordination are the downside of a project with multiple managers (it is easy for a single manager to communicate with himself or herself). One of the most important lessons to be gleaned from the Agile movement is that communication is critical (which leads to the conclusion that communication difficulties may trump benefits) and that any process that gets in the way of communication should be carefully evaluated before it is implemented. A larger communication web will need to be traversed with every manager added to the structure, which will require more formal techniques to ensure consistent and effective communication. Assessment Tip: Projects with more than five managers and leaders, or a worker-to-manager ratio lower than eight workers to one manager/leader (with more than one manager), should assess this attribute as an ‘N’.
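
A minimal sketch of the meeting-time rule of thumb above; only the 25% threshold and the eight-hour day come from the text, the rest is illustrative:

```python
# Sketch: flag management communication overhead when meeting time
# exceeds 25% of a standard eight-hour day.

def meeting_overhead(meeting_hours_per_day, workday_hours=8.0, limit=0.25):
    ratio = meeting_hours_per_day / workday_hours
    return ratio, ratio > limit

ratio, flagged = meeting_overhead(2.5)
print(f"meeting ratio {ratio:.0%}, check overhead: {flagged}")
# -> meeting ratio 31%, check overhead: True
```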

Well Known Technology: The introduction of a technology that is unfamiliar to the project team will require more coordination and interaction. While the introduction of one or two hired guns into a group without experience is a good step to ameliorate the impact, it may not be sufficient (and may complicate communication in its own right). I would suggest that until all relevant team members surmount the learning curve, new technologies will require more formal communication patterns. Assessment Tip: If less than 50% of the project team has worked with a technology on previous projects, assess the attribute as an ‘N’.

Well Understood Business Problem: A project team that has access to an understanding of the business problem being solved by the project will have a higher chance of solving the problem. The amount of organizational knowledge the team has will dictate the level of analysis and communication required to find a solution. Assessment Tip: If the business problem is not well understood or has not been dealt with in the past, this attribute should be assessed as an ‘N’.

Low Technical Difficulty: The term ‘technical difficulty’ has many definitions. The plethora of definitions means that measuring technical difficulty requires reflecting on many project attributes. The attributes that define technical difficulty can initially be seen when there are difficulties in describing the solutions and alternatives for solving the problem. Technical difficulty can include algorithms, hardware, software, data, logic or any combination of components. Assessment Tip: When assessing the level of technical difficulty, if it is difficult to frame the business problem in technical terms, assess the level of complexity as ‘N’.

Stable Requirements: Requirements typically evolve as a project progresses (and that is a good thing). Capers Jones indicates that requirements grow approximately 2% per calendar month across the life of a project (a worked example follows below). Projects that are difficult to define, or where project personnel or processes allow requirements to be amended or changed in an ad hoc manner, should anticipate above-average scope creep or churn. Assessment Tip: If historical data indicates that the project team, customer and application combination tends to have scope creep or churn above the norm, assess this attribute as an ‘N’ unless there are procedural or methodological means to control change. (Note: Control does not mean stopping change, but rather ensuring that it happens in an understandable manner.)
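
A worked example of the growth figure above. Whether the 2% compounds or accumulates linearly is not specified in the original, so the compound form shown here is an assumption:

```python
# Sketch: Capers Jones' ~2% per calendar month requirements growth,
# applied as compound growth to an initial scope of 1,000 units.
size = 1000            # initial scope, e.g. function points
monthly_growth = 0.02  # ~2% per calendar month

for month in (6, 12, 18):
    print(f"after {month:>2} months: {size * (1 + monthly_growth) ** month:,.0f}")
# after  6 months: 1,126
# after 12 months: 1,268
# after 18 months: 1,428
```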

Minor Project Management Constraints: Project managers have three macro levers (cost, scope and time) available to steer a project.   When those levers are constrained or locked (by management, users or contract) any individual problem becomes more difficult to address.  Formal communication becomes more important as options are constrained.  Assessment Tip:  If more than one of the legs of the project management iron triangle is fixed, assess this attribute as an ‘N’.

Minimal Architectural Impact: Changes to the standard architecture of the application(s) or organization will increase complexity on an exponential scale. This increase in complexity will increase the amount of communication required to ensure a trouble-free change. Assessment Tip: If you anticipate modifications (small or wholesale) to the standard architectural footprint of the application or organization, assess this attribute as an ‘N’.

Minimal IT Staff Impact: There are many ways a project can impact an IT staff, ranging from process-related changes (how work is done) to outcome-related changes (employment or job duties). Negative impacts are most apt to require increased formal communication, and therefore the use of traceability methods that are more highly documented and granular. Negative process impacts are those driven by the processes used or organizational constraints (e.g. death marches, poorly implemented processes, galloping requirements and resource constraints). Outcome-related impacts are those driven by the solution delivered (e.g. outsourcing, downsizing, and new applications/solutions). Assessment Tip: Any perceived negative impact on the team, or on the part of the organization closely associated with the team, should be viewed as not neutral (assess as an ‘N’), unless you are absolutely certain you can remediate the impact on the team doing the work. Reassess often to avoid surprises.

Three core concepts.

My model for scaling traceability is based on the assumption that there is a relationship between customer involvement, criticality and complexity. Together these yield the level of documentation required to achieve the benefits of traceability. The model leverages an assessment of the project attributes that define three common concepts:

  • Customer involvement in the project
  • Complexity of the functionality being delivered
  • Criticality of the project

A thumbnail definition of each of the three concepts begins with customer involvement, which is defined as the amount of time and effort applied to a project in a positive manner by the primary users of the project. The second concept, complexity, is a measure of the number of project properties that are outside normal expectations as perceived by the project team (the norm is relative to the organization or project group rather than to any external standard). The final concept, criticality, is defined as the attributes defining the quality, state or degree of being of the highest importance (again relative to the organization or group doing the work). We will unpack these concepts and examine them in greater detail as we peel away the layers of the model.

The Model

The process for using the model is a simple set of steps.
  1. Get a project (and team members)
  2. Assess the project’s attributes
  3. Plot the results on the model
  4. Interpret the findings
  5. Reassess as needed

The model is built for project environments. Don’t have a project, you say? Get one, I tell you! Can’t get one? This model will be less useful, but not useless.

Who Is Involved And When Will They Be Involved:

Implementing the traceability model assessment works best when the team (or a relevant subset) charged with doing the work conducts the assessment of project attributes. The use of team members turns Putt’s theory of “Competence Inversion” on its head by focusing project-level competencies on defining the impact of specific attributes. Using a number of team members also provides a basis for consistency if assessments are performed again later in the project.

While the assessment process is best done by a cross-functional team, it can also be performed by those in the project governance structure alone. The smaller the group involved in the assessment, the more open and honest the communication between the assessment group and the project team must be, or the exercise will be just another process inflicted on the team. Regardless of size, the assessment team needs to include technical competence. Technical competence is especially useful when appraising complexity. It is also a good tool for selling the results of the process to the rest of the project team. Regardless of the deployment model, the diversity of thought generated in cross-functional groups provides the breadth of knowledge needed to apply the model (this suggestion is based on feedback from process users). The use of cross-functional groups becomes even more critical for large projects and/or projects with embedded sub-projects. In a situation where the discussion will be contentious or the participating group will be large, I suggest using a facilitator to ensure an effective outcome.

An approach I suggest for integrating the assessment process into your current methodology is to incorporate the assessment as part of your formal risk assessment.  An alternative for smaller projects is to perform the assessment process during the initial project planning activities or in a sprint zero (if used).  This will minimize the impact of yet another assessment.

In larger projects where the appraisal outcome may vary across teams or sub-projects, thoughtful discussion will be required to determine whether the lowest common denominator will drive the results or whether a mixed approach is needed. Use of this method in the real world suggests that in large projects/programs the highest or lowest common denominator is seldom universally useful. The need for scalability should be addressed at the level that makes sense for the project, which may mean that sub-projects differ.

Boundaries, like fences, are one potential difficulty.

Systems thinking is a powerful concept that can generate significant value for organizations by generating more options. Dan and Chip Heath indicate in their book Decisive that options are a precursor to better decisions. Given the power of the concept and the value it can deliver, one would expect it to be used more. The problem is that systems thinking is not always straightforward. The difficulties with using systems thinking fall into three categories.

  • Boundaries
  • Complexity
  • Day-to-Day Pressures

Organizational boundaries and their impact on the flow of both work and information have been a source of discussion and academic study for years. Boundaries are a key tool for defining teams and providing a sense of belonging; however, some boundaries are not very porous. As noted in our articles on cognitive biases, groups tend to develop numerous psychological tools to identify and protect their members. Systems, in most cases, cut across those organizational boundaries. In order to effectively develop an understanding of a system, and then to effect a change to that system, members of each organizational unit that touches the system need to be involved (involvement can range from simple awareness to active process changes). When changes are limited by span of control or a failure to see the big picture, they can be focused on parts of a process that, even if perfectly optimized, will not translate to the delivery of increased business value. In a recent interview for SPaMcast, author Michael West provided the example of a large telecommunications company that implemented a drive to six sigma quality in its handsets, only to find that pursuing the goal made the handsets too expensive to succeed in the market. In this case the silos between IT, manufacturing and marketing allowed a change initiative to succeed (sort of) while harming the overall organization.

The second category is complexity. Even simple systems are complex. Complexity can be caused by the breadth of a system. For example, consider the relatively simple product of a lawn service. How many processes and steps are required to attract and sign up customers, secure the equipment and employees to perform the job, schedule the service, actually deliver the service and then collect payment? The simple system becomes more complex as we broaden the scope of our understanding. Add in the impact of a variable like weather or an employee calling in sick, and things get interesting. Really complicated systems, such as the manufacturing and sale of an automobile, can be quite mind-numbing in their complexity. The natural strategy when faced with this level of complexity is to focus on parts of the overall system, which can lead to optimizations that do not translate to the bottom line. A second natural strategy for dealing with complexity is to develop models. All models are abstractions, and while abstractions are needed, you need to strike a balance so that the ideas generated from studying the model, or the results of experiments or pilots run against the model, are representative of results in the real world. For example, let’s say you build a 1/8-scale replica of a server farm. The replica is a model that could be used to study how the servers would be placed, or to plan how they would be accessed for service. This would be a valid use of a model. But if the model were placed in the actual server room and the observation used to jump to the conclusion that, since the model fit, the server farm would fit, a mistake could quite possibly be made. Because of complexity, we tend to focus on a part of the system or to make abstractions; however, both can make it hard to think about the big picture needed to apply systems thinking.

The final difficulty with the use of systems thinking is the pressure of the day-to-day. As I noted when I re-read The 7 Habits of Highly Effective People, the urgent and important issues of the day can easily overwhelm the important but not urgent. Systems thinking, and the systemic changes identified through its use, will generally fall into the important-but-not-urgent category. A person who is having a heart attack due to cholesterol-clogged arteries needs to deal with the urgency of the attack before identifying and addressing their diet and other root causes. Leveraging systems thinking as a tool to address large-scale systemic issues is not a magic wand, but rather a concept that requires time and effort to use. If the organization is being overcome by day-to-day events, systems thinking will be difficult to use. One technique I have seen for dealing with this scenario is to create a cross-functional, highly focused “tiger team” with a specific time box to start the process. This requires removing the team’s day-to-day responsibilities until the team meets its goal. It is a very difficult tactic to use, as the people you would want on the team are generally the best at their jobs, and pulling them can negatively impact organizational performance for a short period of time. You must consider the ROI.

The use of systems thinking is not for the faint of heart. There are difficulties in applying the concept: boundaries, complexity and day-to-day pressures are the most significant, though there are others, such as ensuring the team has systems thinking training. Systems thinking can deliver a broad overall perspective and great value, but as a leader you must balance the difficulties against the benefits.

Sometimes you have to seek a little harder to understand the big picture.

We should be guided by theory, not by numbers. – W.E. Deming

Many process improvement programs falter when, despite our best efforts, they don’t improve the overall performance of IT. The impact of fixing individual processes can easily get lost in the weeds, overtaken by the inertia of the overall system. Systems thinking is a way to view the world, including organizations, from a broad perspective that includes structures, patterns, and events. Systems thinking is all about the big picture. Grasping the big picture is important when approaching any change program. It becomes even more critical when the environment you are changing is complex and previous attempts at change have been less than successful. The world that professional developers operate within is complex, even though the goal of satisfying the project’s stakeholders seems, on the surface, so simple. Every element of our work is part of a larger system that visibly and invisibly shapes our individual and organizational opportunities and risks. The combination of complexity and the nagging issues that have dogged software-centric product development and maintenance suggests that real innovation will only come through systems thinking.

The Waters Foundation, a group dedicated to applying systems thinking to education, suggests a set of “Habits of a Systems Thinker.” The habits are:

  • Seeking to understand the big picture
  • Observing how elements within systems change over time, generating patterns and trends
  • Recognizing that a system’s structure generates its behavior
  • Identifying the circular nature of complex cause and effect relationships
  • Changing perspectives to increase understanding
  • Surfacing and testing assumptions
  • Considering an issue fully and resisting the urge to come to a quick conclusion
  • Considering how mental models affect current reality and the future
  • Using understanding of system structure to identify possible leverage actions
  • Considering both short and long-term consequences of actions
  • Finding where unintended consequences emerge
  • Recognizing the impact of time delays when exploring cause and effect relationships
  • Checking results and changing actions if needed: “successive approximation”

These habits illustrate that to really create change, you need to take the overall process into account and test all of your assumptions before you can know that your change is effective.

An example presented at MIT’s System Design and Management (SDM) program on Oct. 22 and 23 exposed the need to address complexity through holistic solutions. A hospital scenario was described in which alarm fatigue had occurred, leading to negative patient outcomes. Alarm fatigue occurs when health professionals are overwhelmed by monitoring medical devices that provide data and alerts. The devices don’t interoperate, so all of the data and alerts just create noise that can hide real problems. Any IT manager who has reviewed multiple monthly project status reports and updates can appreciate how a specific problem signal could be missed and what the consequences might be. Systems thinking, applied through the filter of the “Habits of a Systems Thinker,” is tailor-made to help us conceptualize, understand and then address complex problems; to find solutions for problems that seem elusive or that recur in an organization.

Retrospectives look back to impact the future.

In the Daily Process Thought of August 22, 2013, we discussed list-generation retrospective techniques. They are easy to execute and have a great track record. However, there are other, more specialized techniques, like the Timeline Retrospective, which is useful for long-running releases or for projects where a retrospective has not occurred in recent memory. These techniques deal with more complex issues than can be tackled using simple lists. They can also be used to spice up the basic fare of listing techniques to keep teams involved and interested in the retrospective process.

These techniques are more complex to execute.  Let’s explore a few examples of this class of retrospective.

The Timeline Retrospective uses the following process:

Goal: The Timeline Retrospective technique develops a visual overview of the events that occurred during the period under investigation. The technique identifies and isolates the events that impacted the team’s capacity to deliver over a set period of time. It uses distinct colors to identify events (avoid the typical red/green pairing, as color-blind participants may have difficulty).

When To Use: The Timeline Retrospective is useful for refreshing and re-grounding the memories of the team. Other circumstances in which this may be a useful technique:

  • If there have not been any intermediate retrospectives;
  • To provide context to program-level (i.e. multiple-project) retrospectives;
  • If the team has not been working on the project over its whole life cycle; and
  • At an end-of-project retrospective.

Set Up: Draw a timeline representing the period since the last retrospective on a whiteboard (or several pieces of flipchart paper). Make sure there is room above and below the line. Secure dry-erase markers in a few colors and sticky notes in three colors. The three sticky-note colors will represent:

  • Blue represents good events;
  • Yellow represents significant events that are neither good nor bad, and
  • Red represents problem events.

Use the colors that you feel the most comfortable with and that you have in sufficient supply.

The process is as follows:

  1. Have each team member silently write down on sticky notes the major events, from their perspective, using the color code from above.
  2. Have each team member put their events on the timeline chronologically, placing positive events above the timeline, neutral events on or near the timeline and negative events below the timeline.
  3. Throw out duplicates.
  4. Have the team select someone to walk through the final timeline.
  5. Using the dot-voting technique (provide each team member with three dots), rank the events that slowed the project down the most to date.
  6. Identify tasks and actions that could be taken to solve the problems. Pick the top two or three.
  7. Have the team tell the story of the next sprint or release as if they had taken the identified actions. This will help validate the choice of the proposed changes.

Another example of a non-list retrospective is the Six Thinking Hats Retrospective (based on De Bono’s Six Thinking Hats). I use this type of approach when the team has experienced significant challenges, has not established norms for how to interact or tends to be dominated by one or two personalities. In this technique, the team uses a structured approach to discuss the period since the last retrospective. The team “wears” one of De Bono’s “hats” at a time, which means all participants talk about a specific topic area at the same time. Each hat represents a particular way of thinking. Using the hats forces the team to have a focused discussion (this is called collective thinking in the literature). Until you are comfortable with this type of technique, use a facilitator. The facilitator should ensure that comments are appropriate to the “hat” that is currently being worn. Here is the order of the “hats”:

  • Blue Hat (5 minutes) – focus on discussing session objectives.
  • White Hat (10 minutes) – discuss or identify FACTS or information since the last sprint (e.g. we had a hurricane during this sprint).
  • Yellow Hat (10 minutes) – talk only about the good things that happened since the last retrospective.
  • Black Hat (10 minutes) – talk only about the bad things that happened since the last retrospective.
  • Green Hat (10 minutes) – talk only about ideas that would solve the identified problems or add more significant value in the Product Owner’s perception.
  • Red Hat (5 minutes) – Have each team member come to the whiteboard or flip chart and write two emotive statements about the project during this period. Do this fast and with very little preparation; you want gut reactions.

As a team, review the emotive statements to identify clusters of comments or trends that can be combined with the issues from the Green Hat discussion. From the identified issues, pick one or two actions that will improve the team’s ability to deliver, and add them to the backlog for the next sprint.

Other techniques in this class include:

Emotional Trend Line – This is often combined with the Timeline technique. It provides an estimate of the team’s emotional state since the last retrospective.

Complexity Retrospective – Draw a complexity radar plot with at least five axes. Engage the team to determine what each axis should be labeled (e.g. data, workflow, code, business problem) and then to rate each axis. If an axis is rated as complex, ask the team to identify actions to reduce that complexity.
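
A minimal sketch of the radar plot described above, using matplotlib; the axis labels and ratings are examples a team would replace with its own:

```python
# Sketch: a five-axis complexity radar plot rated by the team (1-5).
import math
import matplotlib.pyplot as plt

axes = ["data", "workflow", "code", "business problem", "integration"]
ratings = [4, 2, 5, 3, 4]  # the team's complexity rating per axis

angles = [2 * math.pi * i / len(axes) for i in range(len(axes))]
angles += angles[:1]          # repeat the first point to close the polygon
values = ratings + ratings[:1]

ax = plt.subplot(polar=True)
ax.plot(angles, values)
ax.fill(angles, values, alpha=0.25)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(axes)
ax.set_ylim(0, 5)
plt.show()
```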

Non-list-based retrospectives are generally more complicated to apply due to the formal structure they use to guide teams toward discovery. For example, De Bono’s Six Thinking Hats will require active facilitation (at least until everyone is comfortable with the process). These techniques are generally used to address special or specific circumstances. The structure of each technique has been designed (or, in some cases, adopted from other disciplines) to focus the retrospective participants on a particular type of problem. The goal of any retrospective, list or non-list, is to help the team discover how they can be better during the next iteration.

Daily Process Thoughts:  Retrospective Theme Entries:

Retrospectives: A Basic Process Overview

Retrospectives: Retrospectives versus a Classic Post-Mortem, Mindset Differences

Retrospectives: Obstacles

Retrospectives: Listing Techniques

Retrospectives: A Social Event

Complex Tattoos

Some processes require complexity, such as splitting the atom. Other processes don’t require a complex solution. For example, maintaining a product backlog usually represents a very lean requirements management process.

To turn the old adage “everything should be as simple as it can be, but not simpler” around just a bit, we could just as easily say that “everything should only be as complex as needed but no more complex.”  Processes that are overly complex can easily mask other more acute problems. Don’t let complexity mask issues that can destroy you or your organization regardless of how artistic they are!