Most internal IT organizations do not have much experience with professional customer research, but they still have to get a handle on how their work is perceived. Before tackling the collection and analysis of how customers and clients perceive their work, there are four considerations; we take a deeper dive into three of them today.

The first of the considerations is relevance. Relevance is defined as the state of being closely connected or appropriate. In terms of customer satisfaction, relevance is the determination of whether the data collected (or about to be collected) provide insights about the performance of the product being delivered and the groups involved in delivering that product. Questions to ask to judge relevance include (a simple sketch of a relevance check follows the list):

  1. Can we ask for information that will provide insight into our business goals?
  2. Will collecting the information reflect stakeholder satisfaction with our product?
  3. Can we use the data to provide an understanding of the satisfaction with how the product was developed and delivered?
  4. Is the information useful for leaders’ decision making?
  5. How easy will it be to consistently collect data?
  6. How confident are we that questions asked will be answered truthfully?
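
To make the relevance test concrete, here is a minimal sketch in Python of one way to operationalize question 1: map each proposed survey question to the business goals it informs and flag anything with no linkage. The goal names, questions, and mapping below are illustrative assumptions, not a prescribed taxonomy.

```python
# A minimal relevance check: map each proposed survey question to the
# business goals it informs, then flag questions that inform nothing.
# Goal names and question text are illustrative assumptions.

BUSINESS_GOALS = {"on_time_delivery", "product_quality", "stakeholder_trust"}

# Hypothetical mapping: question -> goals it provides insight into.
question_goal_map = {
    "How satisfied are you with release timeliness?": {"on_time_delivery"},
    "Does the product meet your quality expectations?": {"product_quality"},
    "What is your favorite team-building event?": set(),  # no goal linkage
}

for question, goals in question_goal_map.items():
    unknown = goals - BUSINESS_GOALS
    if unknown:
        print(f"REVIEW (unrecognized goals {unknown}): {question}")
    elif not goals:
        print(f"NOT RELEVANT (no goal linkage): {question}")
    else:
        print(f"RELEVANT ({', '.join(sorted(goals))}): {question}")
```

The point of the sketch is the discipline, not the code: a question that maps to no business goal fails the relevance test before it is ever asked.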

The second consideration is usability. Usability is defined as the degree to which something is able or fit to be used. Assuming data is collected and that data is relevant to the questions being asked, the second step in the process is to determine whether you can do anything with what you collect. Questions to ask to judge usability include (a sketch of a comparability check follows the list):

  1. How easy will it be to derive understanding from the data collected?
  2. Can actions be informed by information from these measures? (An alternative test: if an indicator shows a problem, would anything actually be done in response?)
  3. Are answers from different groups comparable?
  4. What are the inherent biases in the questions?
  5. Is the information timely enough to improve the outcome for the team or product?
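
As one illustration of the comparability question above, the sketch below normalizes satisfaction scores gathered on different scales before comparing group means. The groups, scales, and scores are assumed for the example; a real survey would also need to account for wording and sampling biases.

```python
# A minimal comparability check: put responses from different groups
# on the same scale before comparing them. The groups, scales, and
# scores below are illustrative assumptions.
from statistics import mean

group_a = [4, 5, 3, 4, 4]      # answered on a 1-5 scale
group_b = [7, 9, 6, 8, 8, 7]   # answered on a 1-10 scale

def normalize(scores, low, high):
    """Rescale scores to a 0-1 range so groups can be compared."""
    return [(s - low) / (high - low) for s in scores]

a_norm = mean(normalize(group_a, 1, 5))
b_norm = mean(normalize(group_b, 1, 10))
print(f"Group A mean satisfaction: {a_norm:.2f}")
print(f"Group B mean satisfaction: {b_norm:.2f}")
```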

The third consideration is value. The definition used in this scenario is whether the benefit of the knowledge gathered by measuring satisfaction outweighs the cost. Costs to include are: data collection costs, analysis costs, meetings to argue over results, tools, and opportunity costs. Benefits need to be just as tangible as the costs: how has the information delivered changed the behavior of those delivering products so that stakeholders get what they want, when they want it, and at the level of quality needed? Questions to ask to judge value include (a worked example follows the list):

  1. How much will the full measurement lifecycle cost?
  2. Are the benefits going to be monitored and quantified?
  3. Could those involved in measuring satisfaction be doing something of higher value?
  4. Can the same information be gathered in a less costly manner?
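
A back-of-the-envelope value calculation might look like the following sketch, which totals the lifecycle costs named above and compares them to a quantified benefit. Every number is an illustrative assumption; the exercise only works if your organization supplies real figures.

```python
# A minimal value test: total the full measurement lifecycle costs and
# compare them to quantified benefits. Every figure is an illustrative
# assumption, not real data.

costs = {
    "data_collection": 4_000,
    "analysis": 2_500,
    "results_meetings": 1_500,
    "tools": 1_000,
    "opportunity_cost": 3_000,
}

# Assumed benefit: rework avoided because feedback changed behavior.
estimated_benefit = 15_000

total_cost = sum(costs.values())
print(f"Total lifecycle cost: ${total_cost:,}")
print(f"Estimated benefit:    ${estimated_benefit:,}")
print(f"Net value:            ${estimated_benefit - total_cost:,}")
print(f"Benefit/cost ratio:   {estimated_benefit / total_cost:.2f}")
```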

As noted earlier in this theme, relevance, usability, and value are related. Starting your consideration with relevance is important because as soon as a measure or metric fails the test of relevance, by definition it cannot be useful. Once you have decided the information is relevant and potentially usable, evaluating the value that can be derived makes sense. I once talked with a colleague who had been asked to help the CIO of a small firm assess internal satisfaction. The original plan was to run several days of focus groups touching nearly everyone in the firm. The opportunity cost alone would have been huge. A targeting and sampling plan was devised to reduce the cost of data gathering, which improved the value equation (a rough version of that trade-off is sketched below).
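
Here is a rough sketch of the trade-off in that anecdote. The headcount, session length, and labor rate are invented for illustration; the firm's actual numbers were not part of the story.

```python
# A rough sketch of the sampling trade-off: compare the opportunity
# cost of focus groups covering nearly everyone with a targeted
# sample. Headcount, hours, and rate are assumptions.

headcount = 200            # people in the firm (assumed)
session_hours = 2          # hours per focus-group participant (assumed)
hourly_rate = 75           # loaded labor cost per hour (assumed)
sample_fraction = 0.15     # targeted sample instead of a near-census

census_cost = headcount * session_hours * hourly_rate
sample_cost = int(headcount * sample_fraction) * session_hours * hourly_rate

print(f"Near-census focus groups: ${census_cost:,}")
print(f"Targeted sample:          ${sample_cost:,}")
print(f"Opportunity cost avoided: ${census_cost - sample_cost:,}")
```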

Once a plan forward is established, self-sufficiency needs to be tackled (which will be the next entry in this theme).