I am spending a week with a large subset of my family, lots of running around, food and conversation — not very conducive to generating new content. I am reprinting (and re-editing) an essay published on 14 March 2017 after a trip to India.  The title is —

Four Attributes That Support Incremental Change Initiatives

Step 3 is to take smaller bites!

Changing how any organization works is not easy. Many different moving parts have to come together for a change to take root and build up enough inertia to pass the tipping point. Unfortunately, because of misalignment, misunderstanding, or poor execution, change programs don't always win the day. This is not news to most of us in the business. What should happen after a process improvement program fails? What happens when the wrong kind of inertia wins?

Step One:  All failures must be understood.

First, perform a critical review of the failed program that focuses on why and how it failed. The word critical is important. Nothing should be sugarcoated or "spun" to protect people's feelings. A critical review must also have a good dose of independence from those directly involved in the implementation. Independence is required so that the biases and decisions that led to the original program can be scrutinized. The goal is not to pillory those involved, but rather to make sure the same mistakes are not repeated. These reviews are known by many names: postmortems, retrospectives, or troubled project reviews, to name a few.

Step two:  Determine which way the organization is moving.

Inertia describes why an object in motion tends to stay in motion and an object at rest tends to stay at rest. Energy is required to change the state of any object or organization; understanding the direction the organization is moving is critical to planning any change. In process improvement programs, we call the application of that energy change management. A change management program might include awareness building, training, mentoring, or a myriad of other events, all designed to inject energy into the system. The goal of that energy is either to amplify or to change the performance of some group within the organization. When too little or too much energy is applied, the process change will fail.

Just because a change has failed does not mean all is lost.  There are two possible outcomes to a failure. The first is that the original position is reinforced, making change even more difficult.  The second is that the target group has been pushed into moving, maybe not all the way to where they should be or even in the right direction, but the original inertia has been broken.

Frankly, both outcomes happen. If the failure is such that no good comes of it, then your organization will be mired in the muck of living off past performance. This is similar to what happens when a car gets stuck in snow or sand and digs itself in. The second scenario is more positive: while the goal was not attained, the organization has begun to move, making further change easier. To return to the stuck car, a technique taught to many of us who live in snowy climates is "rocking": shifting the car back and forth until it breaks free. Movement increases the odds that you will be able to break free and get going in the right direction. Interestingly, the recognition of movement is a powerful sales technique taught in the Sandler Sales System.

Step Three:  Take smaller bites!

The lean startup movement provides a number of useful concepts that can be applied when changing any organization. In Software Process and Measurement Cast 196, Jeff Anderson talked in detail about leveraging lean startup concepts within change programs (Link to SPaMCAST 196). A lean startup delivers the minimum amount of functionality needed to generate feedback; applied to change, that means using minimum viable changes to build a backlog of manageable changes. The backlog should be groomed and prioritized by a product owner (or owners) from the area being impacted by the change, which will increase ownership and involvement and generate buy-in. Once you have a prioritized backlog, make the changes in short time boxes while involving those being impacted in measuring the value delivered. Stop doing things that are not delivering value and move to the next change.
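
The loop described above (prioritize, time-box, measure, keep or stop) can be sketched in a few lines. This is a hypothetical illustration rather than anything from the podcast; the change names, the measure function, and the value threshold are all invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Change:
    """A minimum viable change: small enough to try in one time box."""
    name: str
    priority: int                 # set by the product owner(s) from the impacted area
    observed_value: float = 0.0   # measured after the time box, not promised before

def run_change_backlog(backlog, measure, value_threshold=1.0):
    """Work the backlog in priority order; drop changes that don't deliver."""
    adopted, abandoned = [], []
    for change in sorted(backlog, key=lambda c: c.priority):
        change.observed_value = measure(change)   # involve those impacted in measuring
        if change.observed_value >= value_threshold:
            adopted.append(change)                # keep pushing in this direction
        else:
            abandoned.append(change)              # stop and go to the next change
    return adopted, abandoned
```

The point of the sketch is the shape of the loop: the measurement happens after each time box, and "stop" is an explicit, expected outcome rather than a failure.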

Being a change agent is not easy, and no one succeeds all the time unless they are not taking any risks. Learn from your mistakes and successes. Understand the direction the organization is moving and use that movement as an asset to magnify the energy you apply. Involve those you are asking to change in building a backlog of prioritized minimum viable changes (mixing the concept of a backlog with concepts from the lean startup movement). Make changes based on how those who are impacted prioritize the backlog, then stand back to observe and measure. Finally, pivot (change direction) if necessary. Always remember that the goal is not really the change itself, but rather demonstrable business value. Keep pushing until the organization is going in the right direction. What do you do when inertia wins? My mother would have said to just get back up, dust yourself off, and get back in the game; it isn't that easy, but it is not that much more complicated.

What do you do when inertia wins?
Thomas M. Cagley Jr.

Audio Version on SPaMCAST 197


Just How Badly Do You Want A Number?

Thomas M. Cagley Jr.

An audio version of this essay can be found in the Software Process and Measurement Cast 38 (www.spamcast.net)

Every project begins with a prediction of how much it will cost and when it will be delivered. Project managers, as a rule, admit this behavior is a mistake that delivers questionable results, can lead to perception problems, and might actually warp space and time. Most projects I see begin with this sort of incantation anyway. Why? The rationale for this seemingly irrational behavior is generated by many competing forces. It might be a reaction to a market deadline, a requirement to secure funding, or the mistaken impression that the project team actually knows what they need to know to complete the project. Every IT manager I know understands the fallacy of the initial estimate; however, I know very few, if any, who will actually stand up and just say no. This behavior causes cognitive dissonance (stress caused by holding two contradictory ideas simultaneously), but it is assuaged by continuing to look for a solution to stop the insanity (or by giving up and embracing the dark side).

Why, if we all know this type of behavior is wrong and that its outcome is rarely effective, do we continue to do it? Why do we feel compelled to act in a manner that is non-nominal? Do Information Technology (IT) managers and project managers have a touch of a victim complex? The driver for this behavior begins at the interface between IT and finance known as the budgeting or funding process. While not the root of all evil, financial procedures and thinking are at times at odds with many standard ideas for planning and estimating a project. Concepts like ROI and tax accruals have precise, predictable financial definitions and calculations, and financial control and analysis require a similar level of precision in reported project data. The problem is that this level of precision is generally at odds with initial project estimates. There will always be a mismatch between finance's needs and IT's data, unless projects are being actively measured and are using mechanisms to continually assess what they need to know to complete the project.

Agile methods such as XP and Scrum attempt to make peace with the initial estimation conundrum by breaking projects into small bites and only making promises for each small bite just before beginning the work on it. In Scrum terms, the sprint planning exercise is an operational illustration of how agile methods use a short-term planning horizon to address the 'how much' and 'when' questions. This does not, however, address the need to know when the overall project will be done and how much it will cost. If you extend the short-term planning window to the entire known project backlog by counting the number of sprints or iterations required, you will encounter the same problem we began this essay with.
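
Extending the sprint-planning window across the whole backlog is simple arithmetic, which is exactly why it smuggles the original problem back in: the answer is only as good as the assumed velocity. A hypothetical sketch (the numbers are invented for illustration):

```python
import math

def sprints_remaining(backlog_points, velocity_per_sprint):
    """Naive extrapolation: total known backlog divided by observed velocity."""
    return math.ceil(backlog_points / velocity_per_sprint)

# The extrapolation is only as stable as the velocity it assumes.
# A modest swing in velocity moves the "final" date substantially:
print(sprints_remaining(240, 30))  # 8 sprints if velocity holds at 30
print(sprints_remaining(240, 24))  # 10 sprints if velocity drops 20%
```

The arithmetic is trivial; the promise built on it is not, because velocity and the backlog itself both change as the team learns.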

Non-agile shops have adopted other tools to deal with the vagaries of early estimation and the perceived need for precision. The estimation funnel is one such strategy. An estimation funnel reinforces the understanding that the variance of any prediction is largest early in the project and shrinks as the project team learns what they need to deliver and how to deliver it.
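
As a rough sketch of an estimation funnel, the multipliers below shrink the band around a point estimate as the team learns. The specific values are illustrative assumptions in the spirit of Boehm's cone of uncertainty, not figures from this essay.

```python
# Illustrative (low, high) multipliers by phase; the values are assumptions,
# not data from this essay.
FUNNEL = {
    "initial concept":   (0.25, 4.0),
    "approved funding":  (0.5,  2.0),
    "requirements done": (0.67, 1.5),
    "design done":       (0.8,  1.25),
}

def estimate_range(point_estimate, phase):
    """Return the (low, high) band the funnel implies for a point estimate."""
    low_mult, high_mult = FUNNEL[phase]
    return point_estimate * low_mult, point_estimate * high_mult

print(estimate_range(1000, "initial concept"))  # (250.0, 4000.0)
print(estimate_range(1000, "design done"))      # (800.0, 1250.0)
```

Presenting the band instead of the point is the whole trick: the single number the funnel refuses to give early is precisely the number people keep asking for.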

In the end, however, nature, our society, finance, and our fellow project managers betray the quest to abandon the initial fixed estimate. Nature's betrayal comes in the form of the belief that Sir Isaac Newton had something to do with project management. We believe that each action has an equal and opposite reaction, or that if we do step A then outcome B will magically appear. Remember that the combination of humans and physics does not a straight line make; projects are human ventures, and therefore more akin to herding cats than to the laws of physics. Society's betrayal is driven by ingrained consumerism. As consumers, you and I expect that if we are going to buy something, regardless of complexity, someone will tell us for how much to write the check. An example of this behavior can be seen in the general hullabaloo that occurs when NASA overruns a cost estimate for the space station, despite its incredible complexity: how can anyone know exactly how much a project of this type will cost when it begins? The finance department's betrayal is through its need to plan, secure funding, and understand cash flow; a significant mistake in the financial arena will have disastrous consequences for any company's value and potentially its ability to make payroll. Most importantly, project managers let themselves down by saying "yes" and giving the number to whoever asks. Why? They feel they must, if for no other reason than that there is always someone who wants the job badly enough to say anything, and managers want a number badly enough to believe the lie and drink the Kool-Aid. Failure to play the game is seen as career limiting.

I believe we need to rethink our concept of initial estimates. To drive this change we will have to change both our own vision of the world and that of other constituencies within our companies. We need to de-link estimation from other non-estimation behaviors (we are not shopping for an HDTV); we need to change how the estimation process works; and we need to measure projects, which means embracing concepts such as function points for size and as a proxy for functional knowledge. But most of all, we will need to set a standard of behavior for ourselves and our fellow project managers, and follow it.

Recognize these issues? Leave a comment or drop me an email at spamcastinfo@gmail.com

Hey, this is a work in progress and I would be happy for comments, corrections or any other reactions!

Humans seem to have a need to classify people as either inside their group or outside it. We then use that classification to determine how we will treat both insiders and outsiders. From a critical point of view it would be easy to claim to be shocked at this behavior (but only when others exhibit it); however, the tendency to divide people into groups can be used as a tool when implementing change. Marketing types call this segmenting your markets. Each segment will have differing needs and will react to change differently. Developing an implementation plan that embraces the positive components of classifying people into groups (let's call it process improvement segmentation) will maximize the chances of an implementation approach that is focused rather than omnidirectional.

One application of grouping was recently suggested by the Process Improvement Philosopher, who proposed a continuum of groups as a tool for discussing how different groups would react to change. In terms of the behaviors exhibited toward change, the continuum ranges from a group we will call the adopters to a group best classified as the resistors. While we could predict that this type of model would have these polar opposites, I would suggest that the motivations of the groups that inhabit the middle are more interesting to understand and define.

In the next installment we wrestle with the middle ground! 

Why Should You Care What Is Driving Change?
Thomas M. Cagley Jr.

What are the pressures driving process change in your organization? I ask the question because I believe that people, teams, or even whole organizations don't wake up in the morning with the idea that they need to change. There has to be a trigger, a reason that helps provide the motivation, and that reason will create pressure that the change will relieve.

Process improvement champions must understand why change is being pursued or risk failing. Knowledge of the rationale behind change, and of the urgency of that rationale, is an important component of actually making change happen. There are numerous reasons why knowing the "why" of change is important. One is that it will help you select the proper solution for the proper problem. A good thing, right? A second reason, and the focus of this paper, is that knowing "why" allows you to communicate the rationale to those affected. Communication is one of the core tools for reducing resistance and promoting buy-in.

Communication is important because, left to their own devices, every person impacted by a change will develop their own opinion as to why the change is occurring. Those opinions will range from the rational (change will help us become more efficient, or increase market share) to the scary (change is a precursor to outsourcing or downsizing). The hopes, fears, and paranoias of each individual affected will be represented when the rationale for change is left to assumption. Opinions influence the level of effort and commitment applied toward the change (the negative side is resistance). Communication and actions are required as part of a coordinated organizational change management plan.

Organizational change management, communication, and their kissing cousins marketing and sales tend to be tough subjects for most IT project managers. Perhaps this is because they are discussions of emotions rather than deterministic tasks. A good dose of sales training, or training in how to counsel teams, ought to be required on every project manager's CV.

In change programs of any size, communication will range from mentoring and cheerleading through persuasion (perhaps in the same conversation). Knowing why any specific change is needed, and how important it is, will allow you to engage with your customers in the most reasonable and honest manner possible.

As a leader of change, you are responsible for knowing why the change you are pursuing is needed. If you do not know, find out. If you can't find out, think about saying no until you do. Bottom line: effective change requires effective communication, and if you do not know the rationale for the change you are pursuing, you will not be effective.

Section 2: Oops!  Mistakes Can Make Good Numbers Go Bad.

Mistakes come in many flavors: errors of commission and omission; calculation mistakes or errors in mathematics (wrong formulas, bad logic, or just ignoring things like covariance); and just plain stupid mistakes. As a group, they are the single biggest reason Good Numbers Go Bad. Mistakes, by definition, occur by accident and are not driven by direct animus. The grace and speed with which you recognize and recover from a mistake will determine the long-term prognosis for the practitioner and his or her program (assuming you don't make the same mistake more than once or twice). Ignoring a mistake is bad practice; if you need to make a habit of brazening out the impact of mistakes, you should consider a new career, as you have lost the long-term battle over the message.

Collection Mistakes:

Collection mistakes are a category that covers a lot of ground, ranging from gathering the wrong data to erratic data collection. While collecting the wrong information can lead to many other kinds of mistakes, this section focuses on recognizing and recovering from the collection errors that lead to credibility issues.

“In order to capture metrics, the procedures, guidelines, templates, and databases need to be in sync with the standard practices.”

— Donna Hook, Medco

Data collection errors typically represent errors of omission (data not collected); however, occasionally the wrong information is collected. Collecting the wrong data (or data you do not understand) will create situations where your analysis is wrong (garbage in), possibly without your knowing it (gospel out). Someone will usually discover this error at the worst possible time, leading to profuse sweating and embarrassment. Gathering wrong or incomplete data is a non-trivial mistake that makes Good Numbers Go Bad; what you do about it will say a lot about your program. Begin by specifying the data to a level that allows you to ascertain that what you collect is correct. Periodically auditing the collection process against the collection criteria helps ensure you collect the correct data and collect it correctly. Create rules (or at least rules of thumb) that support validation and help you quickly interpret the data. Did you get the quantity of data you expected? Has the process capability apparently changed more than you would reasonably expect?
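
The rules of thumb just described (expected quantity, plausible shift in apparent capability) are easy to automate. A minimal sketch, with invented names and thresholds; the 25% tolerance is an assumption for illustration, not a recommended value:

```python
def validate_collection(records, expected_count, baseline_mean, tolerance=0.25):
    """Rules of thumb: did we get the volume we expected, and has the
    apparent process capability shifted more than we'd reasonably expect?"""
    problems = []
    if len(records) < expected_count * (1 - tolerance):
        problems.append(f"volume low: got {len(records)}, expected ~{expected_count}")
    if records:
        mean = sum(records) / len(records)
        if abs(mean - baseline_mean) > baseline_mean * tolerance:
            problems.append(f"mean shifted: {mean:.1f} vs baseline {baseline_mean}")
    return problems
```

Running a check like this at collection time, rather than at presentation time, is what keeps the error from being discovered at the worst possible moment.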

Erratic Collection:

Measures and metrics can be perceived to be so important that panicked phone calls are known to precede collection. Equally interesting are the long periods of silence that occur before the panic. Erratic data collection sends a message that the data (and therefore the results) are only as important as whoever goosed the caller (or slightly less important than whatever the caller was doing right before he or she called). Inconsistent collection leads to numerous problems, including rushed collection (after the call), mistakes, and an overall loss of face for the program (fire drills and metrics ought to be kept separate). Consistency spreads a better message of quiet importance that can supplant the urgency of yelling.

Mathematical Mistakes:

“We accidentally used one number instead of a correct value.  Now our stakeholders ask for a second source.”

— Rob Hoerr, Fidelity Information Services

"Mathematical mistakes happen! We are all human!" The excuses are an anthem, which is why all measurement programs must take the time and effort to validate the equations they use. Equations must be mathematically and intellectually sound, and inaction in the face of mistakes in the equations or results makes Good Numbers Go Bad. If a mistake is found, neither the results nor the equations should be so ingrained that your program freezes into inaction. This places a lot of stress on the need to create measurement and metric specifications. Once a specification (including a description, formulas, and definitions) is created, it is easier to make sure you are measuring what you want and getting the behavior you anticipate. The spec provides a tool to gauge the validity of the math, the validity of the presentation, and, by inference, the validity of the analysis.
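
A specification can even travel with the code that computes the metric, so the math can be checked against the definitions. This is a hypothetical sketch: the metric, the 90-day defect window, and the IFPUG sizing are illustrative assumptions, not prescriptions from the text.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MetricSpec:
    """A measurement specification: description, formula, and definitions
    travel together so the math can be checked against the intent."""
    name: str
    description: str
    formula: str       # human-readable form, kept reviewable
    definitions: dict  # what each term means

# Hypothetical example metric; the window and sizing method are assumptions.
defect_density = MetricSpec(
    name="defect density",
    description="Delivered defects normalized by functional size.",
    formula="defects / function_points",
    definitions={
        "defects": "unique defects found in the first 90 days after release",
        "function_points": "IFPUG unadjusted function points at release",
    },
)

def compute(spec, **terms):
    """Refuse to compute unless every supplied term is defined in the spec."""
    undefined = set(terms) - set(spec.definitions)
    if undefined:
        raise ValueError(f"terms not in spec: {sorted(undefined)}")
    return eval(spec.formula, {"__builtins__": {}}, terms)

print(compute(defect_density, defects=12, function_points=400))  # 0.03
```

Using eval on a reviewed formula string is a convenience for the sketch; a production implementation would parse or hard-code the formula. The useful idea is the refusal: undefined terms fail loudly instead of silently producing a number.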

Liars, Damn Liars and Statisticians:

Statistics has long been a staple of graduate business schools, which instill the belief that numbers can prove anything. Numbers, however, require a sensitivity to the equations that flies in the face of this mentality. When simple relationships are ignored to make a point, Good Numbers Go Bad. Examples of questionable math include graphs with the same variable (in different forms) on both axes, presented with linear regression lines driven through them. The created covariance goes unrecognized, leaving the analysts speculating on what the line means without recognizing that the relationship is self-inflicted. Developing a simple understanding of covariance, r-squared values, and standard error is an easy step toward sorting out basic conceptual errors. A corollary is that knowledge of statistics will not necessarily stop the mistake of adding the wrong Excel cells together, but it can't hurt. Always check your equations, check your statistics, and never fail to check the math!
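
The self-inflicted relationship is easy to demonstrate: generate two independent variables, then put one of them on both axes in different forms. The defects and effort below are invented random data, independent by construction.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 500
effort = rng.uniform(100, 1000, n)   # invented data, independent by construction
defects = rng.uniform(10, 100, n)    # independent of effort

# Raw variables: essentially uncorrelated.
r_raw = np.corrcoef(defects, effort)[0, 1]

# Same variable (effort) hidden in both axes: a relationship appears from nowhere.
r_shared = np.corrcoef(defects / effort, 1 / effort)[0, 1]

print(f"r^2 raw:    {r_raw**2:.3f}")     # near zero
print(f"r^2 shared: {r_shared**2:.3f}")  # substantial, yet meaningless
```

The second r-squared is large only because effort appears in both axes; a regression line through that scatter describes the transformation, not the process.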

Excerpt from When Good Numbers Go Bad
Thomas M. Cagley Jr.

(Recorded for the Software Process and Measurement Cast 32 – www.spamcast.net)

When Communication Makes Good Numbers Go Bad

One of the most tragic errors young metrics programs make can be classified as the Field of Dreams Syndrome: "measure it and they will find it useful." The issues this type of thinking causes are typically first recognized as communication or training problems. Questions surface such as "Why isn't anyone using our measures?" or "Why isn't anyone interested?" Dashboards and reports are created, but no one cares. At least two problems are generally present: insular vision and lack of validation.

Monologues:

Late night television is the home of the monologue. Jay Leno and David Letterman use monologues to make us laugh; their only feedback is the laugh track. The unidirectional flow of information is the defining feature of a monologue. Late night comedy and metrics presentations should have little in common (albeit a bit of levity is probably a good thing), yet most metrics reports and presentations are approached as if they were monologues rather than dialogs.

The monologue approach occurs for a number of reasons. The first is confusion of the volume and value attributes. Metrics programs need to show value, and the two attributes are sometimes confused (recall old bromides like "the more the merrier"). When these concepts are confused, the goal of a metrics presentation seems to become showing every bit of data ever collected, crammed into charts (or slides), and then telling anyone who will listen what they supposedly mean (also known as death by slides). Focusing on volume chokes the ability to hold a dialog; volume and quality are unrelated attributes. An old adage states that "a designer has achieved perfection not when there is nothing left to add, but when there is nothing left to be taken away." (Read any or all of Edward Tufte's books.) Design your presentation with the aim of evoking action by the recipient. Simplicity and minimalism are the concepts to use when designing your presentation tool (show pictures, but have the data). Once you have a tool to aid your communication, the next step is to use it to facilitate a dialog as the basis for creating understanding (bi-directional). A dialog (defined as an exchange of information and understanding) provides a platform for the metrics team both to affect the behavior of the organization and to absorb information about how work is being done. Wikis and blogs are means of creating this type of dialog.

Another way to combat monologue is to recognize that presentations and handouts are not the same thing. Presentations are structured to create dialogs; handouts are one-way vehicles, monologues.

Beliefs:

Beliefs act as a powerful filter that can promote communication problems. Deep-seated beliefs force the believer into a difficult position when it comes to challenging the status quo, and when change occurs without the ability to challenge, it leads to confusion and possibly to conflict. This is a scenario where Good Numbers Go Bad. Beliefs do not have to be based on mathematical or scientific fact; they can be driven by common understandings of how things work, which may or may not be correct. Once upon a time most people believed the world was flat, and that belief constrained behavior.

As an example, I was recently exposed to a senior executive who firmly believed that education and training were not related to the improved capability of his organization. If acted upon, the outcome of this belief will potentially be lower productivity, lower innovation, lower capability, and potentially the need to outsource; the workforce would not stay current or gain new skills, creating a downward spiral. I certainly wish I could have asked whether the executive thought it was important for his children to be educated and whether that education would impact their future capability. Facts, and the relationships between facts, can be and are abridged by beliefs. Metrics professionals must continually create awareness so that everyone in the metrics equation keeps an open, questioning mind and extracts the full value from the numbers.

Just Plain Wrong:

One of the final classes of communication errors occurs when the information gleaned from a chart, graph, or single number is published by the metrics team and is wrong. In my mind the most frightening words are "my interpretation of this graph is that the earth is flat." Misinterpretation can be caused by a number of problems, ranging from the education and knowledge of the interpreter to active misinterpretation or the deliberate spreading of misinformation (or the belief problem from the last section). Regardless of why the interpretation is wrong, the damage is done. As soon as the misinterpretation is perceived, the metrics program will be viewed as non-neutral and potentially biased. When measurement drives activity based on misinterpretation, the results can be erroneous business decisions with lasting implications. Motivation is discussed later; however, when the reason for the error is perceived to be misinterpretation, a bad taste will be left in people's mouths for a long time.

Zombie Hypothesis:

 

One of the worst errors made by humans is not publicly recognizing a mistake and trying to tough it out.  The affliction can be encapsulated by the phrase, “throwing good money after bad.”  When applied to a metrics program, this affliction can lead to a scarcity of funds for metrics and SPI investment opportunities.  Not facing up to your mistakes causes a scenario where Good Numbers Go Bad.  The cost and effort needed to gather, analyze, report and react to the measures being collected will eclipse the value derived if you are living a lie.  The Zombie Hypothesis is a variant of the Law of Crappy Process which implies that the worst, most incorrect data will become the de facto (italics) standard (real or perceived) for your measurement program.  When a problems is found, recognize it, fix the process(es) then the definitions and re-implement the measurement.  The effectiveness and efficiency of the measurement program will be improved.  More importantly, you will inhabit the moral high ground of knowing you are measuring the right thing in the right way.


Are Words a Predictor of Change Adoption?

see www.spamcast.net!


I have been listening to how people talk about change for a few months.  It has been a sort of informal poll to see whether people’s language reflects their point of view toward change and how that point of view affects how they behave.  The goal is to understand the gray areas where yes and no are not always what they seem to be.  Overall, my observations have led me to believe there are two camps (Phil Armour, in a recent interview for SPaMCAST, indicated that seeing things in only two categories is a syndrome . . .).  The first category contains those who seem to be looking for a reason for the change to work; the second, those who seem to be looking for a reason for the change to fail.  I have found very little middle ground.  Interestingly, when discussing change and process improvement, almost everyone starts by saying “yes” and then adds a qualification.  The qualification (when it exists) becomes the most important segment of the discussion.  When I listened to the people I ended up slotting into the second category, I heard phrases like “yes, but . . .”, “yes, but let me play the devil’s advocate” or “we could do that, but let’s be realistic.”  These phrases masquerade as constructive, but to a greater or lesser extent they are markers of someone who is searching for a reason the change will fail (or for a tool to help it fail).  Many of the people who fall into that category would not, if asked, slot themselves into it.  Most see themselves as being helpful by being “realistic.”


Why are these nuances important?  I suggest that everyone in the process improvement community would agree that change is important; otherwise, why be involved in process improvement?  It sure isn’t because it is fun, glamorous or gets you a seat at the cool people’s lunch table.  It is because we feel it is important, but we need to understand that each change is fragile, and that fragility means all nuances are critically important.  A colleague of mine, Steve Lett, has said many times that implementing change is most effective when at least a portion of the community being impacted has an optimistic point of view toward the change, and at least a portion of that portion is willing to actively look for a way to make the change successful.  When looking for people in the optimistic category, I listen for phrases such as “yes, and . . .” or “what can I do to help?”  The primary characteristics I am looking for are phrases that are positive, do not include terms like “but” and, when they include a qualification, are action oriented.  Optimists are a resource to mine for process change agents (and they are certainly more fun to be around).


Passion and Success

Thomas M. Cagley Jr.

There are many reasons process improvement projects succeed or fail.  These reasons range from the purely technical to the managerial, and because they can vary so much, we rarely pause to consider any one on an individual basis.  A conversation this week reminded me of one of the most important and potentially powerful success indicators: passion.  In software process improvement projects, passion can come in multiple flavors; however, the person I was speaking with suggested that passion built on successful experience has power above and beyond simple passion.  Experiencing successful process improvement provides a level of knowledge that builds motivation, which can be used to fight through the hard parts that all process improvement projects encounter.  Successful process improvement provides the knowledge that organizations can change and what the benefits of that change are.  A corollary to the knowledge that success can create motivation and passion is that success can be manufactured.  One of the tidbits I took away from the conversation was that when you are attempting to build passion based on success, taking small steps that help ensure wins for the process improvement team is an excellent strategy.

One of the benefits of my job as a process improvement consultant is talking with people with a wide range of experience and backgrounds (diversity of practice and thought).  Discussions such as the one about the role of passion in motivating process improvement teams provide a context that is easy to lose when you are operating on a day-to-day basis.  Another reflection is that there are philosophers among us, whether they recognize their role as thinkers of deep thoughts or not.  Process improvement philosophers such as my new acquaintance and the Canadian Oracle (I have quoted the Oracle before and will again) help keep all of us focused on more than just the day-to-day project tasks by providing an overall context to organizational change.  The discussion I had about building passion based on success was one of those events that provides context to process improvement and team motivation.  Remember, find success and use it as your anchor and motivator.