Measurement


Soap, Shampoo, Towel and Rubber Duckie

Everything you need for a proper bath!

The fourth category of considerations for an organization primarily focused on internal applications to address before it starts measuring customer satisfaction is self-sufficiency. Before we start, however, a question: after the first article in this theme, I was asked whether the overhead of the four considerations would put teams and individuals off from talking to their clients, customers, and stakeholders. The simple answer is no. Conversations with individuals about their satisfaction with your efforts are important feedback tools. Sprint Reviews and Demos are events that are structured to create those conversations. Conversations and formally measuring customer satisfaction are not the same thing. Neither should preclude or interfere with the other; doing both will provide different types of information. To put it more succinctly: if you are not talking with your stakeholders, you probably have a career issue that measurement will not fix. (more…)


Rock piled by the shore

Most internal IT organizations do not have a lot of experience as professional customer research personnel, but they have to get a handle on how their work is perceived. Before tackling the collection and analysis of how customers and clients perceive their work, there are four considerations to address; today we take a deeper dive into three of them. (more…)

One Way Stop Sign

Measuring or assessing customer satisfaction is a fact of life for organizations that deliver products and services to their customers. I receive several such requests every day. Each text, email, and phone call asking my opinion tells me that my opinion matters. The process of determining whether customers are happy is a form of attention. Internal customers are not always paid the same compliment; this is a rectifiable mistake. There are multiple ways to collect customer satisfaction data (a sample of techniques is in Customer Satisfaction Metrics and Quality). The next four segments of the blog are not going to focus on data collection techniques, but rather on the rationale and infrastructure for measuring internal customer satisfaction. Spending time upfront to understand whether what you are doing solves a problem or is a sustainable process is important. None of this is easy, doubly so because most of the people collecting and analyzing the data aren’t marketing or market research personnel. There are four areas that you need to consider before you send your first survey or schedule your first stakeholder interview. (more…)

Story Points: No Parking

Story points are a planning tool that has proponents and opponents. Like most tools, story points are not prima facie evil; it is only through misuse that they become problematic. Problems tend to occur because leaders, managers, and team members have a need (or at least a very strong desire) to know when a piece of work will be done, what they are going to do next, whether they are meeting expectations, and in many cases what something will cost. Story points are a relative measure, a proxy of a proxy, which makes answering any of these questions with precision very difficult. The best anyone can hope for is that the answers story points provide are accurate enough and provide a coherent feedback loop for teams. This could be considered damning with faint praise; however, in the right circumstances story points are a useful tool for a TEAM. I am a proponent of using story points in three scenarios. (more…)
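As a concrete illustration of why story points only answer “when will it be done” approximately, here is a minimal sketch that turns recent velocity into a forecast range rather than a single date. The velocities, backlog size, and variable names are made-up assumptions for illustration, not data or a method from the post.

```python
# Minimal sketch: a velocity-based forecast range from story points.
# All numbers are hypothetical; they are not taken from the post.
from statistics import mean, stdev

recent_velocities = [21, 18, 25, 20, 23]  # points completed in the last five sprints
remaining_points = 160                     # estimated points left in the backlog

avg = mean(recent_velocities)
spread = stdev(recent_velocities)

# Because points are a relative proxy, a range is more honest than a single date.
optimistic_sprints = remaining_points / (avg + spread)
pessimistic_sprints = remaining_points / (avg - spread)

print(f"Likely finish: between {optimistic_sprints:.1f} and {pessimistic_sprints:.1f} sprints")
```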

Nucleon by Jeppe Hedaa is a short, concise book that is rich in thought-provoking ideas. To give you a sense of scope, the subtitle, “The Missing Formula That Measures Your IT Development Team’s Performance,” speaks volumes. The book weighs in at 119 pages with front matter (always read the front matter), six chapters, and eight pages of endnotes. I will admit that I am a sucker for grand unifying theories. I am still rooting for Stephen Hawking to posthumously pull a rabbit out of the hat (I sure hope someone is looking through Hawking’s personal papers). Mr. Hedaa, founder and CEO of 7N, developed the theory that team effectivity is a function of the sum of each person’s effectivity (the ability to be effective). Effectivity, in turn, is a function of people, organizational, and complexity factors. Arguably, the idea that people, organizational, and complexity factors influence effectivity is not controversial. But the idea that these factors can be consistently measured and then used in a deterministic manner to predict performance is controversial. Mr. Hedaa spends the six chapters of the book developing a logical argument, based on experience and data, for the premise that there are ways to measure the factors that matter and that knowing the answer matters to leaders who want to get the maximum value from the money they spend on software development (the broad definition that includes development, enhancement, and maintenance). The Nucleon formula is: (more…)
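To make the shape of that argument concrete, here is an illustrative sketch of a “sum of individual effectivities” model. This is emphatically not Hedaa’s actual Nucleon formula (which the excerpt does not reproduce); the factor names, scales, and the way they are combined are placeholders of my own.

```python
# Illustrative sketch only: team effectivity as the sum of individual
# effectivities, each driven by people, organizational, and complexity factors.
# NOT the book's actual formula or weighting.
def individual_effectivity(people_factor: float,
                           organizational_factor: float,
                           complexity_factor: float) -> float:
    # Placeholder combination; the real formula's form and scales differ.
    return people_factor * organizational_factor / complexity_factor

team = [
    individual_effectivity(0.9, 0.8, 1.2),
    individual_effectivity(0.7, 0.8, 1.0),
    individual_effectivity(0.8, 0.6, 1.5),
]

print(f"Team effectivity (illustrative units): {sum(team):.2f}")
```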

On a scale of fist to five, I’m at a ten.

(This is a lightly re-edited version of a post from 2016 — I have been on planes for two days going hither and yon, so we are revisiting quality.)

Quality is partly about the number of defects delivered in a piece of software and partly about how the stakeholders and customers experience the software.  Experience is typically measured as customer satisfaction. Customer satisfaction is a measure of how products and services supplied by a company meet or surpass customer expectations. Customer satisfaction is impacted by all three aspects of software quality: functional (what the software does), structural (whether the software meets standards) and process (how the code was built). (more…)
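As a concrete illustration (not a calculation prescribed by the post), one common way to summarize customer satisfaction survey responses is a top-two-box percentage on a five-point scale; the responses below are made up.

```python
# Illustrative sketch only: top-two-box CSAT from a five-point satisfaction survey.
# The responses are hypothetical, not data from the post.
responses = [5, 4, 3, 5, 2, 4, 4, 5, 3, 4]  # 1 = very dissatisfied, 5 = very satisfied

satisfied = sum(1 for r in responses if r >= 4)
csat = 100.0 * satisfied / len(responses)

print(f"CSAT (top-two-box): {csat:.0f}%")  # 70% for this sample
```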

The kingfisher was about this far away!

Each mapping layer (value chains, value streams, and process maps) serves related but different purposes. As an organization drills down from a value chain to a process map, different measures and metrics are exposed. One could summarize value chain metrics as high-level cost, revenue, and speed, while process map metrics are variations on effort, delay, and work-in-process. The metric sets are highly related but targeted at different levels of the organization.

Value Chain Metrics Palette (more…)
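To make the process-map end of that spectrum concrete, here is a minimal sketch that rolls effort, delay, and a simple flow-efficiency ratio up from a list of process steps. The step names and hour figures are illustrative assumptions, not figures from the post.

```python
# Illustrative sketch only: process-map-level metrics (effort, delay, flow efficiency).
# Step names and hours are hypothetical.
process_steps = [
    {"step": "triage", "effort_hours": 2,  "delay_hours": 16},
    {"step": "build",  "effort_hours": 24, "delay_hours": 40},
    {"step": "test",   "effort_hours": 8,  "delay_hours": 24},
    {"step": "deploy", "effort_hours": 1,  "delay_hours": 8},
]

total_effort = sum(s["effort_hours"] for s in process_steps)
total_delay = sum(s["delay_hours"] for s in process_steps)
flow_efficiency = total_effort / (total_effort + total_delay)

print(f"Effort: {total_effort} h, Delay: {total_delay} h, "
      f"Flow efficiency: {flow_efficiency:.0%}")
```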

Kafka Statue

Are you measuring a team effort?

**Reprint**

Productivity is used to evaluate how efficiently an organization converts inputs into outputs. However, productivity measures can be, and often are, misapplied for a variety of reasons ranging from simple misunderstanding to gaming the system. Many misapplications of productivity measurement cause organizational behavior problems from both leaders and employees. Five of the most common productivity-related behavioral problems are: (more…)


**Reprint**

Productivity is a classic economic metric that measures the process of creating goods and services. Productivity is the ratio of the amount of output from a team or organization per unit of input. Conceptually, productivity is a simple metric. To calculate the metric, you simply sum up the number of units of an item produced and divide it by the amount of “stuff” needed to make those units. For example, if a drain cleaning organization of three people cleans 50 drains per month, their labor productivity per month would be 50/3 ≈ 16.7 drains per person. The metric is a sign of how efficiently a team or organization has organized and managed the piece of work being measured. There are four types of productivity. Each type of productivity focuses on a different part of the supply chain needed to deliver a product or a service. The four types are: (more…)
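To make the arithmetic concrete, here is a minimal sketch of the labor productivity ratio using the drain-cleaning numbers above; the helper function name is mine, introduced only for illustration.

```python
# Minimal sketch of the labor productivity ratio: output units per unit of labor input.
def labor_productivity(units_of_output: float, units_of_input: float) -> float:
    return units_of_output / units_of_input

drains_cleaned_per_month = 50
crew_size = 3

print(f"{labor_productivity(drains_cleaned_per_month, crew_size):.1f} drains per person per month")
# -> 16.7 drains per person per month
```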

Develop a plan of attack

There are times when just letting go and going with the flow is a great idea. I plan to be spontaneous at least twice before I die. Agile assessments are not one of those events that work best without planning. An even broader rule is that any form of assessment requires a framework and a plan if the results are to be reliable and repeatable. The type of assessment and the reason for it will go a long way toward determining what needs to be looked at in a general way; however, the assessment plan needs to get down to the nitty-gritty. The assessment plan will need to explicitly determine what will be looked at, including behaviors, decision-making capability, ceremonies, and deliverables, and then communicate those decisions to the assessment’s stakeholders. The areas covered in a solid assessment plan include: (more…)
