Turns out, measuring customer satisfaction on a scale of 0-10 isn’t a silver bullet
Every time we get comfortable with a “truth”, something happens that throws us for a loop. Fat was bad, now it’s good. Profitability was a key driver of corporate value, now it’s irrelevant (to VC at least). Daenerys was our Rightful Queen, now she’s batsh*t crazy. It’s hard to keep up.
Similarly, we read this week that a core metric used to evaluate corporate performance – the Net Promoter Score, or “NPS” – may not be exactly what it seems. The NPS is attractive because it is relatively simple to administer, asking customers a single question, “On a scale of 0 to 10, how likely are you to recommend the company’s product or service to a friend?”. Results are divided into three groups: “Promoters” (people who answer 9 or 10), “Passives” (7 or 8), and “Detractors” (0 to 6) with the score calculated by subtracting Detractors from Promoters (Passives are ignored).
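The arithmetic above is simple enough to sketch in a few lines. Here is a minimal illustration (our own, not an official implementation) of the Promoter/Passive/Detractor bucketing and the resulting score, expressed as a percentage from -100 to +100:

```python
def nps(scores):
    """Compute a Net Promoter Score from a list of 0-10 survey responses.

    Promoters answer 9-10, Passives 7-8, Detractors 0-6.
    NPS = %Promoters - %Detractors (Passives are ignored).
    """
    if not scores:
        raise ValueError("no responses")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# Hypothetical survey: 5 promoters, 3 passives, 2 detractors
responses = [10, 9, 9, 10, 9, 8, 7, 7, 3, 5]
print(nps(responses))  # 50% promoters - 20% detractors = 30.0
```

Note that because Passives drop out entirely, very different response distributions can produce the same score, which is one source of the noisiness discussed below.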
First introduced in 2003, NPS has become increasingly common within the corporate lexicon as a reflection of customer satisfaction and, by extension, an indicator of business health. We’re not talking about off-hand comments here, either. The score is being reported in earnings calls, referenced in securities filings, and used as the basis for major decisions at large companies, like employee bonuses (Best Buy, Citigroup, and American Express), investment decisions (Target and Intuit), and characterizations of overall business health (Delta and UnitedHealth).
Unfortunately, recent WSJ research suggests there’s not much basis for treating the score as a reliable metric: it can produce rather noisy results and does not account for cultural differences. From the WSJ:
“Some academics have questioned the whole idea, suggesting that NPS has been oversold. Two 2007 studies analyzing thousands of customer interviews said NPS doesn’t correlate with revenue or predict customer behavior any better than other survey-based metrics. A 2015 study examining data on 80,000 customers from hundreds of brands said the score doesn’t explain the way people allocate their money. ‘The science behind NPS is bad,’ said Timothy Keiningham, a marketing professor at St. John’s University in New York, and one of the co-authors of the three studies. He said the creators of NPS haven’t provided peer-reviewed research to support their original claims of a strong correlation to growth. ‘When people change their net promoter score, that has almost no relationship to how they divide their spending.’”
Peer reviews aside, even the inventor of the metric, Fred Reichheld, believes the results have been taken out of context. As he noted to the WSJ: “…he is astonished companies are using NPS to determine bonuses and as a performance indicator. ‘That’s completely bogus… I had no idea how people would mess with the score to bend it, to make it serve their selfish objectives.'”
One of Chenmark’s core values is Keeping Score (using data to evaluate performance), so the temptation is there to lean heavily on NPS, but we are increasingly aware that there are no silver bullets when it comes to satisfaction metrics. Despite its benefits – it is simple to administer and communicate – the limitations of NPS require a more mosaic approach to evaluating how our customers perceive the value our companies provide. This makes life more difficult, but also more authentic and fun. If there is anything we have learned in our small business experience thus far, it is that few things can truly be distilled down to a single number.