
Almost every conversation about change includes a promise of greater productivity. In simplest terms, productivity is the ratio of output per unit of input. While the equation for calculating productivity is straightforward, as we have discussed, deciding which outputs from an organization or process to count is never straightforward. The decisions on the input side of the equation are often equally contentious. Three critical decisions shape the measures needed to supply the inputs used to calculate productivity. (more…)

How Many Pencils?

One of the most ubiquitous conversations in organizations is about productivity, although the word itself is rarely used. The conversation might be about the need to get more value from the software development budget or the ability to deliver more new functions with the same staff. The concepts of labor, capital, material and total factor productivity are an undercurrent in these conversations even if the word productivity is never directly mentioned. There are several possible reasons why the word productivity is not used in polite company within software development organizations. Those reasons range from a mistaken belief that productivity concerns are reserved for blue-collar occupations to the complexity of defining productivity in software development and maintenance environments. Complexity is often the most pernicious reason, keeping productivity in the background during strategic conversations that affect products and jobs. However, with the exception of total factor productivity, the mathematical computation of labor, material or capital productivity appears to be very straightforward.

The equation for productivity is simple: output divided by input. The simplicity of the equation obscures the complexity hidden in the term productivity.

Unwinding the complex concepts of measuring productivity begins with understanding and making three critical decisions that define the output side of the equation. (more…)


Productivity is a classic economic metric that measures the process of creating goods and services. Productivity is the ratio of the amount of output from a team or organization per unit of input. Conceptually, productivity is a simple metric. To calculate it, you simply sum the number of units produced and divide by the amount of “stuff” needed to make those units. For example, if a drain cleaning organization of three people cleans 50 drains per month, its labor productivity per month would be 50 / 3 ≈ 16.7 drains per person. The metric is an indicator of how efficiently a team or organization has organized and managed the piece of work being measured. There are four types of productivity. Each type of productivity focuses on a different part of the supply chain needed to deliver a product or a service. The four types are: (more…)

Listen Now

Subscribe on iTunes

The Software Process and Measurement Cast 384 features our interview with Gwen Walsh.  Gwen is the President of TechEdge LLC. We discuss leadership and why leadership is important.  We also discuss the topic of performance appraisals and how classic methods can hurt your organization. Gwen’s advice both redefines industry standards and provides you with an idea of what is truly possible.

Gwen Walsh has built a career creating and implementing business and technology solutions that redefine the industry standards for both Fortune 100 corporations and entrepreneurial organizations. With over 25 years of experience in leadership development and organizational transformation, Ms. Walsh, founder of TechEdge LLC, helps her clients stay ahead of their competition, stay in touch with their customers and stay in high demand. Ms. Walsh’s client portfolio includes Kaiser Permanente, Hospital Corporation of America, Hewlett-Packard, KeyBank, Medical Mutual of Ohio, General Motors, Omaha Public Power District and Anheuser-Busch. (more…)

Grave Hands Prague

Originally posted on the Software Process and Measurement Cast 57

Efficiency has a simple technical definition: the ratio of work done to the energy required to do that work. In the software development world, efficiency is rarely managed. Rather, the discussion tends to be about cost. Cost and efficiency are different; they are related, but they are not the same.

Increasing the level of efficiency in a person, process or organization will require some kind of change. You do not get a different result by doing the same thing over and over. We know that in order to change efficiency, we need to change the process that transforms an input into a product, the environment the transformation occurs inside, the input (such as people or raw materials) or some combination of these areas.

While it might be too obvious for most people, the first place to look when you want to improve efficiency is the process. Some examples include removing process dead wood, simplifying the flow, and adding tools and automation, whether through frameworks like CMMI or Agile or through techniques like Six Sigma and lean. Unfortunately, process changes are not instantaneous, which causes many organizations to jump over the process improvement step and go right to cutting people. The thought process around the cutting-people option goes something like this:

We will cut some percentage of people because we have gotten ‘fat’. Those that are left will pick up the slack by working a little harder and by multitasking. They should just be happy to have jobs. Tasks that we just can’t cover anymore probably didn’t need to be done anyway.

Multitasking is the silver bullet of the 21st century. However, relying on multitasking steals efficiency. According to René Marois, a neuroscientist and director of the Human Information Processing Laboratory at Vanderbilt University, “trying to do two things at once can be disadvantageous.” I will readily admit that I have not been able to put aside multitasking. Humans do most of their multitasking in an unconscious mode (breathing, pumping blood and other mostly autonomic tasks). I have tested conscious multitasking personally, trying to talk on the cell phone while driving; no one has died (although my wife has threatened divorce). Unfortunately, humans aren’t good at multitasking because we do not really multitask. What we typically do is fast switching: shuffling between tasks quickly, focusing on each slice for a brief period of time. It is during the switch and reorientation that we lose efficiency. A 2001 study of the brain’s executive control process by Joshua Rubinstein, David Meyer, and Jeffrey Evans, published in The Journal of Experimental Psychology: Human Perception and Performance, found that “at best, a person needs to be aware that multitasking causes inefficiency in brain function.”

Focus is required for efficiency. Doing one thing at a time correctly has an appeal both in terms of logic and science. Yet most workplace cultures do not seem to support focus with action. As evidence, I suggest you count the number of interrupters in your environment (cell phones, email, Twitter, instant messengers, etc.). Our work culture sends a strong message that you are not expected to be cut off from the information flow at any time. The ability to deal with continual partial attention is a career success factor in many instances. Quiet time for concentration tends to happen outside of core hours or at home when we are tired. The question we must ask is: when does the cost of interruptions and multitasking outweigh the benefit of staying connected? How can we construct processes or environments that allow connection and collaboration to happen while providing an atmosphere where focus is not the odd man out of the equation?

www.spamcast.net

Listen Now

Subscribe on iTunes

The next Software Process and Measurement Cast features our interview with Steve Turner. Steve and I talked about time management and email inbox tyranny! If you let your inbox and interruptions manage your day, you will deliver less value than you should and feel far less in control than you could!

Steve’s Bio:

With a background in technology and nearly 30 years of business expertise, Steve has spent the last eight years sharing technology and time management tools, techniques and tips with thousands of professionals across the country. His speaking, training and coaching have helped many organizations increase the productivity of their employees. Steve has significant experience working with independent sales and marketing agencies. His proven ability to leverage technology (including desktops, laptops and mobile devices) is of great value to anyone in need of greater sales and/or productivity results. TurnerTime℠ is a set of time management tools, techniques and tips to effectively manage e-mails, tasks and projects.

Contact Information:
Email: steve@getturnertime.com
Web: www.GetTurnerTime.com

 

Call to Action!

For the remainder of August and September, let’s try something a little different. Forget about iTunes reviews and tell a friend or a coworker about the Software Process and Measurement Cast. Word of mouth will help grow the audience for the podcast. After all, the SPaMCAST provides you with value, so why keep it to yourself?!

Re-Read Saturday News

Remember that the Re-Read Saturday of The Mythical Man-Month is in full swing. This week we tackle the essay titled “Why Did the Tower of Babel Fail?” Check out the new installment at the Software Process and Measurement Blog.

 

Upcoming Events

Software Quality and Test Management
September 13 – 18, 2015
San Diego, California
http://qualitymanagementconference.com/

I will be speaking on the impact of cognitive biases on teams. Let me know if you are attending! If you are still deciding whether to attend, let me know because I have a discount code.

Agile Development Conference East
November 8-13, 2015
Orlando, Florida
http://adceast.techwell.com/

I will be speaking on November 12th on the topic of Agile Risk. Let me know if you are going and we will have a SPaMCAST Meetup.

Next SPaMCAST

The next Software Process and Measurement Cast will feature our essay on mind mapping. To quote Tony Buzan, Mind Mapping is a technique for radiant thinking. I view it as a technique to stimulate creative thinking and organize ideas; I think that is what Tony meant by radiant thinking. Mind Mapping is one of my favorite tools.

We will also feature Steve Tendon’s column discussing the TameFlow methodology and his great new book, Hyper-Productive Knowledge Work Performance.

Anchoring the cast will be Gene Hughson returning with an entry from his Form Follows Function blog.

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques, co-authored by Murali Chemuturi and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: “This book will prove that software projects should not be a tedious process, neither for you or your team.” Support SPaMCAST by buying the book here. Available in English and Chinese.

Listen Now

Subscribe on iTunes

The Software Process and Measurement Cast 345 features our essay on cognitive biases and two new columns. The essay on cognitive bias provides important tools for anyone who works on a team or interfaces with other people! A sample from the podcast:

“The discussion of cognitive biases is not a theoretical exercise. Even a relatively simple process such as sprint planning in Scrum is influenced by the cognitive biases of the participants. Even the planning process itself is built to use cognitive biases like the anchor bias to help the team come to consensus efficiently. How all the members of the team perceive their environment and the work they commit to delivering will influence the probability of success; therefore, cognitive biases need to be understood and managed.”

The first of the new columns is Jeremy Berriault’s QA Corner. Jeremy’s first QA Corner discusses root cause analysis and some errors that people make when doing root cause analysis. Jeremy is a leader in the world of quality assurance and testing and was originally interviewed on the Software Process and Measurement Cast 274.

The second new column features Steve Tendon discussing his great new book, Hyper-Productive Knowledge Work Performance. Our intent is to discuss the book chapter by chapter. This is very much like the re-read we do weekly on the blog, but with the author. Steve has offered SPaMCAST listeners a great discount if you use the link shown above.

As part of the chapter by chapter discussion of Steve’s book we are embedding homework questions.  The first question we pose is “Is the concept of hyper-productivity transferable from one group or company to another?” Send your comments to spamcastinfo@gmail.com.

Call to action!

Reviews of the podcast help to attract new listeners. Can you write a review of the Software Process and Measurement Cast and post it on the podcatcher of your choice? Whether you listen on iTunes or any other podcatcher, a review will help to grow the podcast! Thank you in advance!

Re-Read Saturday News

The Re-Read Saturday focus on Eliyahu M. Goldratt and Jeff Cox’s The Goal: A Process of Ongoing Improvement began on February 21st. The Goal has been hugely influential because it introduced the Theory of Constraints, which is central to lean thinking. The book is written as a business novel. Visit the Software Process and Measurement Blog and catch up on the re-read.

Note: If you don’t have a copy of the book, buy one.  If you use the link below it will support the Software Process and Measurement blog and podcast.

Dead Tree Version or Kindle Version 

Next . . . The Mythical Man-Month. Get a copy now and start reading! We will start in four weeks!

Upcoming Events

2015 ICEAA PROFESSIONAL DEVELOPMENT & TRAINING WORKSHOP
June 9 – 12
San Diego, California
http://www.iceaaonline.com/2519-2/
I will be speaking on June 10.  My presentation is titled “Agile Estimation Using Functional Metrics.”

Let me know if you are attending!

Other upcoming conferences I will be involved in include SQTM in September. More on these great conferences next week.

Next SPaMCast

The next Software Process and Measurement Cast will feature our interview with Jon Quigley. We discussed configuration management and his new book Configuration Management: Theory, Practice, and Application. Jon co-authored the book with Kim Robertson. Configuration management is one of the most critical practices anyone building a product, writing a piece of code or working on a project with others must learn, or face the consequences!

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques, co-authored by Murali Chemuturi and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: “This book will prove that software projects should not be a tedious process, neither for you or your team.” Support SPaMCAST by buying the book here.

Available in English and Chinese.

Zoom!

Audio Version:  Software Process and Measurement Cast 119.

Definition:

The simple definition of velocity is the amount of work that is completed in a period of time (typically a sprint). The definition is related to productivity (the amount of effort required to complete a unit of work) and delivery rate (the speed at which work is completed). The inclusion of a time box (the sprint) creates a fixed duration, which transforms velocity into more of a productivity metric than a speed metric (how much work can be done in a specific timescale by a specific team). Therefore, to truly measure velocity you need to estimate the units of work completed, have a definition of complete, and have a time box.

The definition of done in Agile is typically functional code; however, I think the definition can be stretched to reflect the terminal deliverable the sprint team has committed to create, based on the definition of done the team has established (for example, requirements for a sprint team working on requirements, or completed test cases in a test sprint).

Many Agile projects use story points as a metaphor for the size of functional code. Note that other functional size measures can be used just as easily; examples in this paper will use story points as the unit of measure. What is not size, however, is effort or duration. Effort is an input that is consumed while transforming ideas into functional code. The amount of effort required for the transformation is a reflection of size, complexity and other factors. Duration, like effort, is consumed by a sprint rather than created, and therefore does not measure what is delivered.

Formula

To calculate velocity, simply add up the size estimates of the features (user stories, requirements, backlog items, etc.) successfully delivered in an iteration.  The use of the size estimates allows the team to distinguish between items of differing levels of granularity.  Successfully delivered should equate to the definition of done.

Velocity = Story Points Completed Per Sprint

And:

Average velocity = Average Number of Story Points Per Sprint

The formula becomes more complex if staffing varies between sprints (and potentially less valuable as a predictive measure).  In order to account for variable staffing the velocity formula would have to be modified as follows:

Velocity per Person = Sum (Size of Completed Features in a Sprint / Number of People) / Number of Sprints (observations)

To be really precise (though not necessarily more accurate), we would have to understand the variability of the data, as variability helps define the level of confidence. Variability generated by differences in team member capabilities is one of the reasons that predictability is enhanced by team stability. As you can see, the more complex the environmental scenario becomes, the less simple the math must be to describe the scenario.
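As a minimal sketch of these formulas in Python (the sprint history below is hypothetical, and story points are assumed as the size measure), the calculations look like this:

```python
# Minimal sketch: computing velocity from per-sprint history.
# The sprint data below is hypothetical, purely for illustration.

sprints = [
    {"points_completed": 21, "team_size": 5},
    {"points_completed": 18, "team_size": 5},
    {"points_completed": 24, "team_size": 6},
]

# Velocity = story points completed per sprint.
velocities = [s["points_completed"] for s in sprints]

# Average velocity = average number of story points per sprint.
average_velocity = sum(velocities) / len(sprints)

# Velocity per person: normalize each sprint by team size, then
# average across sprints (observations) to account for variable staffing.
velocity_per_person = sum(
    s["points_completed"] / s["team_size"] for s in sprints
) / len(sprints)

print(f"Average velocity: {average_velocity:.1f} points per sprint")
print(f"Velocity per person: {velocity_per_person:.1f} points per person-sprint")
```

Note how the staffing-normalized version divides inside the sum, sprint by sprint, rather than dividing the average at the end; that matches the formula above and keeps a sprint with a larger team from skewing the per-person figure.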

Uses:

Velocity is used as a tool in project planning and reporting. Velocity is used in planning to predict how much work will be completed in a sprint and in reporting to communicate what has been done.

When used for planning and estimation the team’s velocity is used along with a prioritized set of granular features (e.g., user stories, backlog items, requirements, etc.) that have been sized or estimated.  The team uses these factors to select what can be done in the upcoming sprint. When the sprint is complete the results are used to update velocity for the next sprint. This is a top down estimation process using historical data.

Over a number of sprints, velocity can be used both as a macro planning tool (when will the project be done) and a reporting tool (we planned at this velocity and are delivering at this velocity).

Velocity can be used in all methodologies and because it is team specific, it is agnostic in terms of units of size.

Issues

As with all metrics, velocity has its share of issues.

The first is the expectation of team stability inherent in the metric. Velocity is affected by team size and composition, and without collecting additional attributes and correlating those attributes to performance, change is not predictable (except by gut feel or Ouija board). Notes should always be kept on team size and capability so that you can understand your data over time.

Similarly, team dynamics change over time, sometimes radically. Radical changes in team dynamics will affect velocity. Note that shocks to any system of work are apt to create the same issue. Measurement personnel, Scrum masters and team leaders need to be aware of people’s personalities and how they change over time.

The first-time application of velocity requires either historical data from other similar teams and projects or an estimate. In a perfect world, a few sprints would be executed and data gathered before expectations are set; however, clients generally want an idea of whether a project will be completed, when it will be completed and what functions will be delivered along the way. Estimates of velocity based on the team’s knowledge of the past or other crowdsourcing techniques are relatively safe starting points, assuming continuous recalibration.

The final issue is the requirement for a good definition of done. Done is a concept that has been driven home in the agile community. To quote Mayank Gupta (http://www.scrumalliance.org/articles/106-definition-of-done-a-reference), “An explicit and concrete definition of done may seem small but it can be the most critical checkpoint of an agile project.”  A concrete definition of done provides the basis for estimating velocity by reducing variability based on features that are in different states of completion.  Done also focuses the team by providing a goal to pursue. Make sure you have a crisp definition of done and recognize how that definition can change from sprint to sprint.

Related Metrics:

Productivity (size / effort)

Delivery Rate (duration / size)

Criticisms:

The first criticism of velocity is that the metric is not comparable between teams and, by inference, is not useful as a benchmark. Velocity was conceived as a tool for Scrum masters and team leads to manage and plan individual sprints. There is no overarching set of rules for the metric to enforce standardization; therefore, one team’s velocity is apt to reflect something different from the next’s. The criticism is correct but perhaps off the mark. As a team-level tool, velocity works because it is very easy to use and can be consistent; adding the complexity of standards and rules to make it more organizational would by definition reduce the simplicity, and therefore the usefulness, at the team level.

A second criticism is that estimates and budgets are typically set early in a project’s life, while team-level velocity may well be unknown until later. The dichotomy between estimating and planning (or budgeting and estimating, for that matter) is often overlooked. Estimates developed early in a project, or in projects with multiple teams, require different techniques to generate. In large projects, applying team-level velocities requires techniques more akin to portfolio management, which add significant levels of overhead. I would suggest that velocity is more valuable as a team planning tool than as a budgeting or estimation tool at a macro level.

A final criticism is that backlog items may not be defined at a consistent level of granularity; therefore, when applied, velocity may deliver inconsistent results. I tend to dismiss this criticism, as it is true for any mechanism that relies on relative sizing. Team consistency will help reduce the variability in sizing; however, all teams should strive to break backlog items into stories that are as atomic as possible.

Productivity is the number of garments sewn per hour.

Definition:

Labor productivity measures the efficiency of the transformation of labor into something of higher value. It is the amount of output per unit of input; in manufacturing terms, an example can be expressed as X widgets for every hour worked. Labor productivity (typically just called productivity) is a fairly simple manufacturing concept that is useful in IT. It is a powerful metric, made even more powerful by its simplicity. At its heart, productivity is a measure of the efficiency of a process (or group of processes). That knowledge can be a tool to target and facilitate change. The problem with using productivity in a software environment is the lack of a universally agreed upon output unit of measure.

The lack of a universally agreed upon, tangible unit of output (for example cars in an automobile factory or steel from a steel mill) means that software processes often struggle to define and measure productivity because they’re forced to use esoteric size measures. IT has gone down three paths to solve this problem. The three basic camps to size software include relative measures (e.g. story points), physical measures (e.g. lines of code) and functional measures (e.g. function points). In all cases these measures of software size seek to measure the output of the processes and are defined independently of the input (effort or labor cost).

Formula:
The standard formula for labor productivity is:

Productivity = output / input

If you were using lines of code for productivity, the equation would be as follows:

Productivity = Lines of Code / Hours to Code the Lines of Code
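A minimal sketch of this calculation, with invented numbers, might look like the following:

```python
# Minimal sketch of labor productivity using lines of code as the
# output measure. All figures are hypothetical.

lines_of_code = 12_000   # output produced
effort_hours = 800       # input consumed coding those lines

productivity = lines_of_code / effort_hours  # output / input
print(f"Productivity: {productivity:.1f} lines of code per hour")  # 15.0
```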

Uses:
There are numerous factors that can influence productivity, such as skills, architecture, tools, time compression, programming language and level of quality. Organizations want to determine the impact of these factors on the development environment.

The measurement of productivity has two macro purposes. The first purpose is to determine efficiency. When productivity is known, a baseline (a line in the sand) can be established and then compared to external benchmarks. Comparisons between projects can indicate whether one process is better than another. The ability to make a comparison allows you to use efficiency as a tool in a decision-making process. The number and types of decisions that can be made using this tool are bounded only by your imagination and the granularity of the measurement.

The second macro rationale for measuring productivity is as a basis for estimation. In its simplest form, a parametric estimate can be calculated by multiplying size by a productivity rate.
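As a hedged illustration (the size and rate below are hypothetical, and the rate is assumed to be expressed in hours per function point), the arithmetic is a single multiplication:

```python
# Minimal sketch of a parametric estimate, assuming the productivity
# rate is expressed as effort hours per unit of size (hypothetical values).

size_function_points = 250       # estimated size of the work
hours_per_function_point = 6.5   # historical productivity rate

estimated_effort_hours = size_function_points * hours_per_function_point
print(f"Estimated effort: {estimated_effort_hours:.0f} hours")  # 1625 hours
```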

Issues:
1. The lack of a consistent size measure is the biggest barrier to measuring productivity.
2. Poor time accounting runs a close second. Time accounting issues range from misallocation of time to equating billed time with effort time.
3. Productivity is not a single number and is most accurately described as a curve, which makes it appear complicated.

Variants or Related Measures:
1. Cost per unit of work
2. Delivery rate
3. Velocity (agile)
4. Efficiency

Criticisms:
There are several criticisms of using productivity in the software development and maintenance environment. The most prevalent is an argument that all software projects are different and therefore are better measured by metrics focusing on terminal value rather than by metrics focused on process efficiency (the artisan versus manufacturing discussion). I would suggest that while the result of a software project tends to be different, most of the steps taken are the same, which makes the measure valid; but productivity should never be confused with value.

A second criticism of the use of productivity is a result of improper deployment. Numerous organizations and consultants promote the use of a single number for productivity. A single number describing the productivity of the typical IT organization does not match reality at the shop floor level when the metric is used to make comparisons or for estimation. For example, would you expect a web project to have the same productivity rate as a macro assembler project? Would you expect a small project and a very large project to have the same productivity? In either case, the projects would take different steps along their life cycles; therefore we would expect their productivity to be different. I suggest that an organization analyze its data to look for clusters of performance. Typical clusters include client-server projects, technology-specific projects, package implementations and many others. Each will have a statistically different signature. An example of a productivity signature expressed as an equation is shown below:

Labor Productivity=46.719177-(0.0935884*Size)+(0.0001578*((Size-269.857)^2))

(Note: this is an example of a very specialized productivity equation for a set of client-server projects, tailored to design, code and unit testing. The results would not be representative of a typical organization.)
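To make the idea concrete, here is a small sketch that evaluates the example signature at a few project sizes; the coefficients come from the illustrative equation above and, as noted, are not representative of a typical organization:

```python
# Sketch: applying the illustrative client-server productivity
# signature above. Coefficients are from the example equation only.

def labor_productivity(size: float) -> float:
    """Example productivity signature as a function of project size."""
    return 46.719177 - (0.0935884 * size) + (0.0001578 * ((size - 269.857) ** 2))

# Evaluating at a few sizes shows the curve (rather than
# single-number) behavior of productivity.
for size in (100, 300, 600):
    print(f"Size {size}: {labor_productivity(size):.1f}")
```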

A third criticism is that labor productivity is an overly simple metric that does not reflect quality, value or speed. I would suggest that two out of three of these criticisms are correct. Labor productivity does not measure speed (although speed and effort are related) and does not address value (although value and effort may be related). Quality may be a red herring if rework due to defects is incorporated into the productivity equation. In any case, productivity should not be evaluated in a vacuum. Measurement programs should incorporate a palette of metrics to develop a holistic picture of a project or organization.

 

Velocity and productivity are different.

Mention productivity to adherents of Agile methods and you will get a range of responses. Some of the typical responses include blank stares, tirades against organization-level control mentality or discussions on why velocity is more relevant. Similar reactions (albeit 180 degrees out of phase) will be experienced when you substitute the word velocity and have discussions with adherents of other methodologies.

Fantasy movies and novels have taught us that in the realm of magic, knowing the name of a person or thing confers power. In fantasy novels, the power conferred is that of control. In real life, the power of having a name for a concept is the power of spin. Spin and control are a pair of highly related terms. To spin is to provide an interpretation (of a statement or event, for example), especially in a way meant to sway public opinion.

Naming a concept, even if many similar concepts have already been given names, creates an icon that can rally followers and be used to heap derision on non-followers. Maybe because the proportion of fantasy and science fiction fans in the IT professions is higher than in the general population, the pattern of naming a concept to focus attention has risen to a fine art. Examples abound in the IT world, such as the use of the term logical files in IFPUG Function Points (where are the illogical files?) and Agile methods (the others must be the inflexible methods). Productivity and velocity are named concepts that reflect this rule. Each can move followers to alter their behavior or generate violent rage in what began as a civil conversation. The irony is that these terms represent highly related concepts. Both seek to describe the amount of output that will be delivered in a specific period of time. The difference is a matter of perspective.

If they are so similar, why are there two terms describing similar concepts? Let’s peel back another layer.

Dion Hinchcliffe has defined project velocity as the measurement of the event rate of a project. A simpler definition is simply work divided by time. In both cases, velocity is used to describe the speed at which a specific team delivers results. Typical velocity metrics include story points per person-month, requirements per sprint, and stories or story points per iteration. The units of measure are targeted at the level of requirements or user stories. The granularity of the unit of measure and the collection time frame (iterations or sprints) ensure that the metric is generated and collected multiple times throughout the project. Repetition makes it easy for the process to become rote. Because of the short time horizon and the use of measures that can be derived at a team level, the data is useful to the team as they plan and monitor their work. Useful equals metrics that get collected, in my book. Unfortunately, because relative measures (measures based on perception) are used to size requirements, these metrics tend to be less useful for organizational comparison than more classic productivity measures. Productivity is also a relatively simple metric. It is simply the output (numerator) of a project divided by the input(s) required to produce the output (denominator).

The denominator of the productivity equation uses more esoteric units than calendar time, such as hours of effort or FTE (full-time equivalent) months, that relate to the entire project. The units of measure for the numerator range from the venerable line of code to functional units such as function points. Because productivity is generally collected and used at an overall project level, it is very useful for parametric estimation or comparing projects, but far less effective than velocity for planning day-to-day activities. It should be noted that some organizations collect many separate units to create a lower-level view of productivity. I would suggest this can be done, albeit it will require a substantial amount of effort to implement and maintain.

So if velocity and productivity are both useful and related, which one should we use? The first place to start is to decide what question you are trying to answer. Once the problem you are trying to solve is identified, the unit of measure and the collection time horizon both become manageable decisions. I would suggest that the question of whether we have to choose one over the other is a false choice. I propose that if we focus on selecting the proper numerator, we can have measures that are useful at both the project and organization level. One solution is to substitute Quick and Early Function Points (QEFP, a rules-based functional metric) for the typical story points (a relative measure). Because QEFP is rules based, it can be applied at a granular level and then aggregated for reporting at different levels. By understanding the relationship between the two measures, we can devise a solution that lets us have our cake and eat it too.