On a scale of fist to five, I’m at a ten.

(This is a lightly re-edited version of a post from 2016 — I have been on planes for two days going hither and yon, so we are revisiting quality.)

Quality is partly about the number of defects delivered in a piece of software and partly about how the stakeholders and customers experience the software.  Experience is typically measured as customer satisfaction. Customer satisfaction is a measure of how products and services supplied by a company meet or surpass customer expectations. Customer satisfaction is impacted by all three aspects of software quality: functional (what the software does), structural (whether the software meets standards) and process (how the code was built).

Surveys can be used to collect customer- and team-level data. Satisfaction measures whether products, services, behaviors, or the work environment meet expectations.

  1. Asking the question, “Are you happy (or some variant of the word happy) with the results of XYZ project?” is an assessment of satisfaction. The answer to that simple question will indicate whether the people you are asking are “happy”, or whether you need to ask more questions. Asking is a powerful tool and can be as simple as posing a single question to a team or group of customers or as complicated as using multifactor surveys. Even though just asking whether someone is satisfied and then listening to the answer can provide powerful information, the size of projects or the complexity of the software being delivered often dictates a more formal approach, which means that surveys are often used to collect satisfaction data. Product or customer satisfaction is typically measured after a release or on a periodic basis.
    Fist-to-Five is a simple asking technique that agile teams use to measure team-level satisfaction. Team members are asked to vote on how satisfied they are by flashing a number of fingers, all at the same time. Showing five fingers means you are very satisfied; a fist (no fingers) means unsatisfied. This form of measurement can be used to assess team satisfaction on a daily basis. Here is a simple video explanation. I generally post an average score on the wall in the team room in order to track the team’s satisfaction trend.
  2. The Net Promoter metric is a more advanced form of customer satisfaction measure than simply asking, but less complicated than the multifactor indexes that are sometimes generated. Promoters are people who are so satisfied that they will actively spread the word to others. Generating the metric begins by asking, “How likely are you to recommend the product or organization being measured to a friend or colleague?” I have seen many variants of the net promoter question, but at its heart the question is whether the respondent will recommend the service, product, team or organization. The response is scored on a scale from 0 – 10. Answers of 10 or 9 represent promoters, 7 or 8 are neutral, and all other answers represent detractors. The score is calculated using the following formula: (# of Promoters − # of Detractors) / (Total Promoters + Neutral + Detractors) x 100. If 10 people responded to a net promoter question and 5 were promoters, 3 neutral, and 2 detractors, the net promoter score is 30 ((5 − 2) / 10 × 100). Over time the goal is to improve the net promoter score, which will increase the chance your work will be recommended.
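The scoring rules above reduce to a short function. Here is a minimal Python sketch; the response list is invented to match the worked example (5 promoters, 3 neutrals, 2 detractors):

```python
def net_promoter_score(responses):
    """Net Promoter Score from 0-10 survey responses.

    9-10 -> promoter, 7-8 -> neutral, 0-6 -> detractor.
    Score = (promoters - detractors) / total respondents * 100.
    """
    promoters = sum(1 for r in responses if r >= 9)
    detractors = sum(1 for r in responses if r <= 6)
    return (promoters - detractors) / len(responses) * 100

# Hypothetical responses matching the example in the text.
responses = [10, 9, 10, 9, 10, 8, 7, 8, 5, 3]
print(net_promoter_score(responses))  # 30.0
```

Note that neutrals still count in the denominator, which is why adding lukewarm respondents lowers the score even though they are not detractors.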

Software quality is a nuanced concept that reflects many factors, some of which are functional, structural or process related. Satisfaction is a reflection of quality from a different perspective than measuring defects or code structure. The essence of customer satisfaction is a very simple question: Are you happy with what we delivered? Knowing if the team, stakeholders, and customers are happy with what was delivered or the path that was taken to get to that delivery is often just as important as knowing the number of defects that were delivered.

The kingfisher was about this far away!

Each mapping layer (value chains, value streams, and process maps) serves related but different purposes. As an organization drills down from a value chain to a process map, different measures and metrics are exposed. One could summarize value chain metrics as high-level cost, revenue, and speed, while process map metrics are variations on effort, delay, and work-in-process. The metric sets are highly related but targeted at different levels of the organization.

Value Chain Metrics Palette (more…)


**Reprint**

Productivity is a classic economic metric that measures the process of creating goods and services. Productivity is the ratio of the amount of output from a team or organization per unit of input. Conceptually, productivity is a simple metric: sum up the number of units produced and divide by the amount of “stuff” needed to make those units. For example, if a drain-cleaning organization of three people cleans 50 drains per month, their labor productivity is 50 / 3 ≈ 16.7 drains per person per month. The metric is a sign of how efficiently a team or organization has organized and managed the piece of work being measured. There are four types of productivity. Each type focuses on a different part of the supply chain needed to deliver a product or a service. The four types are: (more…)
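The ratio above is simple enough to capture in a one-line Python function; the drain-cleaning numbers are the worked example from the text:

```python
def labor_productivity(units_output, labor_input):
    """Productivity as the ratio of output to labor input."""
    return units_output / labor_input

# Worked example from the text: 50 drains per month, 3 people.
print(round(labor_productivity(50, 3), 1))  # 16.7 drains per person per month
```

The same function works for any of the productivity types by swapping in a different input measure (hours, capital, materials) as the denominator.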

Alternatives!

Three possible alternatives:

IFPUG function points. If you must have a standards-based approach to sizing and comparison, IFPUG function points are the gold standard. IFPUG function points are an ISO standard and can be applied to all software types (technology agnostic). The drawbacks of using function points include the perceptions that there is a high level of overhead, that counting requires too much information too early in the process, and that only highly skilled wizards can count (or approximate) function points correctly. None of these perceptions is really true; however, in some circles, the tar and feathering has stuck. (more…)

How did we get to this point!

Story points were originally developed as a metaphor to give a rough answer to the question of how much functionality could be delivered in a specific period of time. The problem is that all good metaphors are eventually abused or, worse, people forget that the metaphor is a simplification and approximation of real life. Metaphors become reality. Three basic behaviors of leaders and stakeholders in software development (broad definition) have led the metaphor of story points to evolve into story points as measures — something they FAIL miserably at. (more…)

Listen Now
Subscribe: Apple Podcast
Check out the podcast on Google Play Music

SPaMCAST 520 features our interview with Doc Norton. We talked about his new book Escape Velocity, measurement, and why velocity isn’t generally a good measure for teams. By the time teams get to a point where story point velocity is consistent and predictable, they will have better tools that have fewer negative side effects.

Doc’s Bio

Doc Norton is passionate about working with teams to improve delivery and building great organizations. Once a dedicated code slinger, Doc has turned his energy toward helping teams, departments, and companies work better together in the pursuit of better software. Working with a wide range of companies such as Groupon, Nationwide Insurance, Belly, and JaTango, Doc has applied tenets of agile, lean, systems thinking, and servant leadership to develop highly effective cultures and drastically improve their ability to deliver valuable software and products.

A Pluralsight author, Clean Coders contributor, frequent blogger, international keynote speaker, and coach, Doc has been working in his spare time on his latest book, Escape Velocity: Better Metrics for Agile Teams. You can find the book on LeanPub at www.leanpub.com/EscapeVelocity

Twitter: @DocOnDev

Web: http://docondev.com/

Can you help keep the podcast growing? Here are some ideas:

  1. Tell a friend about the cast.
  2. Tweet or post about the cast.  Every mention helps.
  3. Review the podcast wherever you get the cast.
  4. Pitch a column to me. You are cool enough to be listening; you deserve to be heard.
  5. Sponsor an episode (text or call me to talk about the idea).
  6. Listen.

Whether you do one or all six, being here is a big deal to me. Thank you!


Re-Read Saturday News
This week we continue on our journey through Bad Blood: Secrets and Lies in a Silicon Valley Startup by John Carreyrou (published by Alfred A. Knopf, 2018 – buy a copy and read along!). Today we tackle a single chapter. Chapter 6, titled Sunny, introduces Ramesh “Sunny” Balwani to the story. Sunny, Holmes’ live-in boyfriend (the stress on the live-in part is to shine a light on just how close Holmes was to Sunny), adds another layer of toxicity to the Theranos story. The toxicity feels extraordinary but is not that uncommon when teams break down.

Current Entry:

Week 5 — Sunny: https://bit.ly/2AZ5tRq (more…)

Pareto chart of Pokemon

Got to catch them all!

A Pareto analysis is based on the principle suggested by Joseph Juran (and named after Vilfredo Pareto) that 80% of the problems/issues are produced by 20% of the causes. This is the famous 80/20 rule, and the principle is sometimes summarized as the vital few versus the trivial many. Process improvement professionals use the Pareto principle to focus limited resources (time and money) on the limited number of items that produce the biggest benefit. (more…)
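The mechanics of a Pareto analysis are simply counting, ranking, and accumulating percentages. A minimal Python sketch, using a hypothetical defect log (categories and counts are invented for illustration):

```python
from collections import Counter

def pareto_table(issues):
    """Rank categories by frequency and attach cumulative percentages,
    the data behind a Pareto chart's bars and cumulative line."""
    counts = Counter(issues).most_common()  # sorted, most frequent first
    total = sum(n for _, n in counts)
    cumulative = 0
    rows = []
    for category, n in counts:
        cumulative += n
        rows.append((category, n, round(100 * cumulative / total, 1)))
    return rows

# Hypothetical defect log where a few categories dominate.
defects = (["UI"] * 40 + ["data"] * 30 + ["perf"] * 15
           + ["docs"] * 10 + ["build"] * 5)
for row in pareto_table(defects):
    print(row)  # e.g. ('UI', 40, 40.0) ... ('build', 5, 100.0)
```

Reading down the cumulative column shows where the "vital few" end: here the top two categories account for 70% of the defects, which is where limited improvement resources would go first.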