The Activity Relevance Matrix in Customer Satisfaction Surveys
Recently, a project brought me face to face with customer satisfaction surveys once again. The "CSS" and I ... yes, you could say we are old friends. My first real job, ten years ago, was as a project manager at a market research institute, where I did practically nothing but coax the many variations of "how satisfied are you with ..., measured on a scale from ..." out of customers.
CSS surveys are still highly popular, although I now consider their usefulness very limited, at least in the form in which they are usually carried out. A boss at a later job rightly argued that one should run dissatisfaction surveys instead. Honestly, who really knows what measures to take if, say, the average "satisfaction with personal support" declines from 1.5 to 1.7? I certainly don't ... but I know that many pitiable employees have target agreements tied to exactly such numbers. That, however, is a different topic.
What annoyed me when re-encountering a CSS was a summary chart that I know as the activity relevance matrix. It reminds me a little of the four quadrants of the Boston Consulting matrix. The x-axis shows the importance of the various satisfaction features; perpendicular to it, the y-axis shows how well each feature is rated.
Criticism of the Activity Relevance Matrix
Predictability of the results
Great, in principle. The chart suggests that I can see at a glance which features are the most important drivers of overall satisfaction or recommendation and, in particular, where the greatest deficits lie. I admit I thought the same when I first saw such a chart. After creating dozens, if not hundreds, of them and placing them in stylish presentations, the enthusiasm crumbled very quickly. Most of these charts look very much alike. Knowing the industry and the selection of satisfaction features is basically enough for me to guess large parts of the chart in advance.
Correlation Coefficients are Inappropriate
One basic mistake is to derive the importance of the features from a correlation. Usually, a product-moment correlation is used by default. There is nothing fancy about it: it is the ordinary Pearson correlation you get when you simply click "correlation" in a tool.
Pearson would indeed be a perfect choice if the scales of all features were equidistant, i.e. if the distance between adjacent rating levels were always the same. This is, of course, simply assumed; otherwise one could not calculate mean values and would surely be a spoilsport. To be fair, the distortion is usually negligible, as some academic papers have shown by simulation.
A further prerequisite is a linear relationship between the features. Admittedly, the Pearson correlation reacts fairly robustly when this does not hold exactly as planned.
However, as soon as satisfaction features can be classified into so-called enthusiasm and hygiene factors (see the Kano model or Herzberg's two-factor theory), the exception becomes the rule: the strength of the relationship then varies across the value range. Hygiene factors act mainly through dissatisfaction, enthusiasm factors through top ratings.
All this does not inevitably lead to a wrong result. Even though a rank correlation would be the better choice, the result would not look very different.
The more serious problem lies in the concept of correlation itself, because it depends in particular on the range of the satisfaction values and their limits. If I measure satisfaction and recommendation on 5-point scales, then 5 is simply the end of the scale, and unless a business has done everything wrong, more than half of its own customers' ratings can be expected in the two top categories. Add a little noise, coverage error and individual response styles, and one cannot expect an overwhelming correlation.
A correlation on continuous data, however, needs a certain variance in the values. A feature such as satisfaction with price/service usually correlates quite highly with overall satisfaction: partly because it really is an important feature, but also because it is often rated comparatively poorly, which gives me a wider distribution of values.
Dependence on the Frequency of Events
The influence of a feature on overall satisfaction can be measured all the more reliably the more customers are actually dissatisfied with it. Areas with severe consequences but a low correlation (because the events occur too rarely) are therefore possible. Suppose we surveyed electricity customers: in our first world, a feature such as "reliable power supply" would be top-rated in the activity relevance matrix and its relevance would appear very low. But as soon as we delighted entire customer groups with targeted power failures shortly before the survey, for a clean A/B test, our matrix would look considerably different. Customers whose complaints were not taken seriously, or whose insurance did not cover the damage, are extremely dissatisfied but so rare in the data that an activity relevance matrix does not take them sufficiently into account.
In sum: the activity relevance matrix explains the composition of overall satisfaction in the concrete survey data, but it cannot be generalized unconditionally to the actual importance of the features beyond the survey. It reduces the interplay between a "sub"-satisfaction and overall satisfaction to a single point, although the effect mechanism is usually more complex. And it is usually not based on explicit statements about the importance of features but derives importance indirectly via correlations, and therefore contains errors.
Of course, I should not criticize everything without offering alternatives. Well, I have not found the answer to everything yet either. I like to resolve the dilemma with the penalty-reward contrast analysis. This approach examines the effect of the most extreme opinions, positive and negative ones. It lets me compare the features along two dimensions, namely how far each contributes to annoyance and how far to enthusiasm.
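A common way to run a penalty-reward contrast analysis is to dummy-code the extreme answers and contrast each extreme group's mean overall rating with the middle group. The sketch below does exactly that for one hypothetical hygiene-type feature, which by construction hurts a lot when rated badly and helps only a little when rated top; all parameters are assumptions.

```python
import random
import statistics

random.seed(3)

def clip5(x):
    """Round to the nearest level and clip to the 1..5 scale."""
    return min(5, max(1, round(x)))

# Hypothetical hygiene-type feature: bottom-box answers pull overall
# satisfaction down by 1.5 points, a top rating adds only 0.2.
n = 3000
feature = [random.choice([1, 2, 3, 4, 5]) for _ in range(n)]
overall = [clip5(4.0 - (1.5 if f <= 2 else 0) + (0.2 if f == 5 else 0)
                 + random.gauss(0, 0.7)) for f in feature]

# Penalty-reward contrast: compare the extreme groups with the middle group.
mid = statistics.mean(o for o, f in zip(overall, feature) if 2 < f < 5)
penalty = statistics.mean(o for o, f in zip(overall, feature) if f <= 2) - mid
reward = statistics.mean(o for o, f in zip(overall, feature) if f == 5) - mid
print(f"penalty: {penalty:+.2f}, reward: {reward:+.2f}")
```

For this feature the penalty clearly outweighs the reward, so it would be managed as a hygiene factor: fixing bad experiences pays off, polishing good ones barely does. A single correlation coefficient hides exactly this asymmetry.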