Customer Experience will be a Fad without a Better Business Case

CX University Point of View
By Scott McCallister and Mohamed Latib

Widely used Customer Experience measures offer soft associations with benefits rather than hard numbers on the return from financial investment.

Part of the Series: Financial Measurement is Critical for the Future of CX

Over the last fifteen years, many corporations have embraced the goal of creating a superior Customer Experience (CX). The principle is powerfully intuitive – customers who have an emotional attachment to your brand or products will buy more and build a reputation that attracts more customers.

The promise of growth and competitive advantage is so enticing that large investments have been made despite a weak business case for an attractive return. All CX gurus state that a connection to the company's bottom line is important, but few discuss the hard details of how to make that connection. The current business case for CX rests on weak correlation analyses.

To convince the C-Suite to sustain investment, CX initiatives must show a cause-and-effect relationship with revenue growth and profitability. Without quantitative evidence, CX principles risk becoming the latest business strategy fad.

Net Promoter Score

The current business case for CX initiatives relies on a correlation between customer survey ratings and company-level growth or revenue per customer.

Net Promoter Score (NPS), the most hyped CX measure, is logically connected to improved financial results but, frankly, has weak quantitative support. NPS is a loyalty metric derived from customer surveys asking, on a scale of 0 to 10, “How likely is it that you would recommend our organization to a friend or colleague?” The NPS is the percent of total responses scoring 9 or 10 (Promoters) minus the percent scoring 0 to 6 (Detractors).
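
For readers less familiar with the arithmetic, here is a minimal sketch of the calculation in Python, using an invented list of survey responses:

```python
def net_promoter_score(ratings):
    """Compute NPS from 0-10 'likelihood to recommend' ratings.

    Promoters score 9-10, Detractors score 0-6; Passives (7-8) count
    only in the denominator. NPS = %Promoters - %Detractors.
    """
    total = len(ratings)
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / total

# Hypothetical survey responses: 5 Promoters, 2 Passives, 3 Detractors
responses = [10, 9, 9, 8, 7, 6, 10, 4, 9, 2]
print(net_promoter_score(responses))  # 100 * (5 - 3) / 10 = 20.0
```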

NPS has a strong pedigree. It was developed by Fred Reichheld, Bain & Company, and Satmetrix Systems. Forrester, a major research firm, has identified a positive correlation between NPS and company- or division-level revenue growth.

Sounds good, except no published analysis isolates NPS from other potential growth drivers, such as:

  • A superior product
  • Large and memorable marketing expenditures
  • Geographic expansion or blunders by competitors

It would be very expensive to collect survey samples large enough to run a statistically sound correlation analysis that can separate multiple explanatory variables. It would also be important to cover a timeframe that matches the revenue measures.
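
To give a sense of what isolating NPS would look like, here is an illustrative sketch of a multivariate regression on a hypothetical panel of business units; the file name and column names are placeholders, not a real dataset:

```python
# Illustrative only: a multivariate model that tries to separate the NPS
# effect from other plausible growth drivers. File and column names are
# hypothetical placeholders for a panel of business units over quarters.
import pandas as pd
import statsmodels.api as sm

panel = pd.read_csv("business_unit_quarters.csv")

y = panel["revenue_growth_pct"]
X = panel[["nps", "product_quality_index", "marketing_spend", "new_markets_entered"]]
X = sm.add_constant(X)

model = sm.OLS(y, X).fit()
print(model.summary())  # coefficient and p-value on "nps" after controlling for the rest
```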

Over the years, several CX industry authors have questioned the value of the NPS metric on a variety of grounds. Published evidence of NPS scores driving financial results has made a weak case for major investment in CX. By the way, how many practitioners have measured the percent of customer Promoters who do in fact recommend the product or brand to one or more friends or colleagues? The connection between intention stated in a survey and actual behavior is questionable.

Customer Satisfaction Score

Customer satisfaction (CSAT) scores are captured by surveys timed closer to the CX interaction, but responses are usually anonymous, so they cannot be directly connected to changes in that respondent’s revenue.

CSAT is a broad term that describes many different types of customer service survey questions, the classic being, “How would you describe your overall satisfaction with this product?” Many companies are smart about asking multiple questions to diagnose the factors underlying such an overall rating. Examples are:

  • “How knowledgeable was the representative?”
  • “How many contacts were needed to resolve the problem?”
  • Other questions relevant to the company’s process

This information helps guide improvements. However, when the data source is an anonymous survey of a sample of customers, it is impossible to confirm that the responses fairly represent the broader customer base. Respondents tend to be those motivated to respond because they are either highly satisfied or highly dissatisfied. Such biased samples are a weak foundation for a business case.

Changes in overall CSAT scores are commonly compared to changes in aggregate financial results, but these correlations are soft associations rather than hard connections. Most companies have no methodology to confirm that the survey respondents are representative of the customers behind the change in average revenue per customer.

Happy customers will logically spend more, but how many customers with only average satisfaction spend just as much? Could the customers who increase their spending have only moderate CSAT scores? Deeper analyses of such questions are needed to build confidence that CSAT scores are significant drivers of revenue.
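
Here is a sketch of the kind of deeper analysis we mean, assuming (unlike the anonymous surveys above) that CSAT responses can be tied to customer accounts; the files and columns are hypothetical:

```python
# Illustrative sketch: compare spend changes across CSAT bands. Assumes
# survey responses are linked to customer accounts, which anonymous
# surveys do not allow. Table and column names are hypothetical.
import pandas as pd

surveys = pd.read_csv("csat_responses.csv")    # customer_id, csat (1-5 rating)
revenue = pd.read_csv("customer_revenue.csv")  # customer_id, spend_before, spend_after

df = surveys.merge(revenue, on="customer_id")
df["spend_change_pct"] = 100 * (df["spend_after"] - df["spend_before"]) / df["spend_before"]
df["csat_band"] = pd.cut(df["csat"], bins=[0, 2, 3, 5], labels=["low", "moderate", "high"])

# Do high-CSAT customers actually increase spend more than moderate ones?
print(df.groupby("csat_band", observed=True)["spend_change_pct"].describe())
```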

How to Build the Case for CX

To prevent CX from becoming a business strategy fad, CX practitioners must develop a better business case than has been presented to date by the big-name research firms. Survey-based measures capture only a small sample of customers and are generally not directly connected to changes in the financial outcomes of those specific customers.

In a future post, we will offer three types of tests that can be used to measure the financial outcome of CX initiatives. The most powerful approach would be to build a customer-level business intelligence database that includes internal financial measures. This would allow comparison of actual results over time for customer groups who have had different customer experiences. The impact of CX engagement on individual customers could also be tracked over time. This approach would take customer centricity to a new level.
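
As a rough illustration of what that comparison could look like, the sketch below joins hypothetical CX engagement records to customer-level revenue and tracks average revenue per customer over time for each experience group:

```python
# A minimal sketch of the customer-level comparison described above: join CX
# engagement records to internal revenue data, then track average revenue per
# customer over time for groups with different experiences. All table and
# column names are hypothetical.
import pandas as pd

engagement = pd.read_csv("cx_engagement.csv")  # customer_id, cx_group ("improved_journey" / "control")
revenue = pd.read_csv("monthly_revenue.csv")   # customer_id, month, revenue

df = revenue.merge(engagement, on="customer_id")

# Average revenue per customer, by month and by experience group
trend = (df.groupby(["cx_group", "month"])["revenue"]
           .mean()
           .unstack("cx_group"))
print(trend)  # compare the trajectories of the two groups over time
```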