Analytics · 7 min read

CSAT Is a Vanity Metric (And What to Track Instead)

Your CSAT is 4.5 out of 5. Congratulations. It tells you almost nothing useful. Here's why, and what metrics actually predict customer behavior.


Your customer satisfaction score is 4.5 out of 5. The board is happy. The VP of CX puts it in every presentation. You celebrate.

Meanwhile, churn is up 3% this quarter.

CSAT says your customers are happy. Churn says they're leaving. They can't both be right. So what's going on?

The Selection Bias Problem

CSAT surveys have a 10 to 25% response rate. That means 75 to 90% of your customers didn't answer. The people who do answer are a biased sample.

Happy customers answer because they want to be nice or because they had a genuinely great experience. Angry customers answer because they want to vent. Neutral customers (the majority) don't bother.

The result: your CSAT data over-represents the extremes and under-represents the middle. Your 4.5 average might mean most respondents gave 5s (the happy ones) with a few 1s and 2s (the angry ones). The actual average satisfaction of your entire customer base could be 3.5.

You're making decisions based on a sample that doesn't represent your customers.
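To make the arithmetic concrete, here's a small, entirely hypothetical simulation: a customer base whose true mean satisfaction is around 3.5, surveyed with response rates that favor the extremes. Every number in it is made up for illustration.

```python
import random

random.seed(42)

# Hypothetical customer base of 1,000: mostly neutral, true mean ~3.45.
population = [5] * 250 + [4] * 250 + [3] * 300 + [2] * 100 + [1] * 100

# Assumed response rates by score: the happy and the angry respond,
# the neutral middle mostly doesn't. All rates are invented.
response_rate = {5: 0.40, 4: 0.20, 3: 0.02, 2: 0.10, 1: 0.25}

responses = [s for s in population if random.random() < response_rate[s]]

true_mean = sum(population) / len(population)
survey_mean = sum(responses) / len(responses)
overall_rate = len(responses) / len(population)

print(f"true mean satisfaction: {true_mean:.2f}")
print(f"surveyed CSAT:          {survey_mean:.2f}")
print(f"overall response rate:  {overall_rate:.0%}")
```

Under these assumptions the surveyed mean comes out roughly half a point above the true mean, on a response rate in the 15 to 20% range: the dashboard number looks healthier than the customer base actually is.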

CSAT Measures the Interaction, Not the Relationship

A customer rates their support interaction 5/5. The agent was great. The response was fast. The issue was resolved.

That same customer is still frustrated because this is the third time they've had to contact support about the same category of issue. The product keeps breaking in the same way. Each individual interaction was fine. The cumulative experience is terrible.

CSAT captures the micro (this specific interaction) and misses the macro (how the customer feels about your company). A high CSAT score can coexist with high churn because customers are satisfied with your support but unsatisfied with your product.

The Recency Effect

CSAT is typically measured right after resolution. The customer just got their problem fixed. They feel relief. They rate highly.

If you surveyed them a week later, the score would be lower. The relief has faded, and what remains is the memory of having a problem in the first place. The interaction was good, but the fact that they needed the interaction at all is a negative.

Some companies have experimented with delayed CSAT surveys (sent 24 to 48 hours after resolution). The scores are consistently 0.3 to 0.5 points lower than immediate surveys. The immediate score captures the relief. The delayed score captures the actual sentiment.

What Predicts Behavior Better

Customer Effort Score (CES). As covered in our CES guide, effort predicts loyalty better than satisfaction. A customer who's satisfied but had to jump through hoops will churn faster than a customer who's merely okay but found it easy. Track CES alongside CSAT for a more complete picture.

Repeat contact rate. Customers who contact support 3+ times in 30 days churn at 2 to 3x the normal rate, regardless of their CSAT scores. Each contact is a friction point. Frequent contact means the product isn't meeting expectations.

Resolution completeness. Did the issue actually get solved? A ticket marked "resolved" that gets reopened within a week wasn't really resolved. Track reopen rates as a quality signal. High CSAT with high reopen rates means agents are good at making customers feel heard but bad at actually fixing problems.

Time to value after resolution. For product-related issues, track whether the customer successfully uses the feature after the support interaction. If they contacted support about exports and still haven't exported anything a week later, the "resolved" ticket didn't actually help.
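Two of these signals, repeat contact rate and reopen rate, are straightforward to compute from a ticket log. Here's a minimal sketch; the ticket fields, the 30-day window, and the 3-contact threshold are illustrative assumptions, not a specific product's API.

```python
from collections import Counter
from datetime import datetime, timedelta

# Hypothetical ticket log: (customer_id, opened_at, was_reopened).
tickets = [
    ("c1", datetime(2024, 5, 1), False),
    ("c1", datetime(2024, 5, 10), True),
    ("c1", datetime(2024, 5, 20), False),
    ("c2", datetime(2024, 5, 3), False),
    ("c3", datetime(2024, 5, 7), True),
]

# Repeat contact rate: share of customers with 3+ tickets
# in the trailing 30-day window.
as_of = datetime(2024, 5, 31)
recent = [t for t in tickets if as_of - t[1] <= timedelta(days=30)]
per_customer = Counter(cid for cid, _, _ in recent)
repeat_contact_rate = (
    sum(1 for n in per_customer.values() if n >= 3) / len(per_customer)
)

# Reopen rate: share of "resolved" tickets that came back.
reopen_rate = sum(1 for _, _, reopened in tickets if reopened) / len(tickets)

print(f"repeat contact rate: {repeat_contact_rate:.0%}")
print(f"reopen rate:         {reopen_rate:.0%}")
```

In this toy data, one of three customers is a repeat contacter and 40% of tickets were reopened: both red flags that a 4.5 CSAT would never surface.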

When CSAT Is Useful

CSAT isn't worthless. It's useful for specific things:

Comparing agents. If agent A consistently scores 4.8 and agent B scores 3.5, there's a quality gap worth investigating. Differences between agents are a more reliable signal than differences between time periods.

Detecting acute problems. If your weekly CSAT drops from 4.4 to 3.6, something changed. A product bug, a policy change, a staffing shortage. CSAT is a decent smoke detector for sudden quality drops.

Benchmarking against yourself over long periods. Your CSAT trending from 4.0 to 4.4 over 12 months means something improved. Just don't over-attribute it or compare it to other companies (whose measurement methodology is different).

The Better Dashboard

Replace CSAT as your primary metric with a composite view:

CES (how easy was it?) as the primary satisfaction metric. Predicts loyalty and churn better than CSAT.

TTR (time to resolution) as the primary speed metric. Better than FRT because it measures the whole experience.

Repeat contact rate as the primary quality metric. High repeat contacts = problems aren't getting solved.

Automation rate as the primary efficiency metric. What percentage of volume is handled without humans? This tells you if your system is scaling.

Keep CSAT as a secondary metric. Look at it monthly, not daily. Use it for agent coaching and trend detection. Stop putting it in board presentations as proof that customers are happy, because it's not.
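As a sketch of what that composite view might look like in code, here's a weekly rollup over hypothetical ticket records. Every field name, threshold, and number is an illustrative assumption; a CES of 1 to 7 with higher meaning easier is assumed.

```python
from collections import Counter


def build_dashboard(tickets):
    """Roll up one week of hypothetical ticket dicts into the four
    primary metrics. Field names and scales are illustrative."""
    n = len(tickets)
    per_customer = Counter(t["customer_id"] for t in tickets)
    return {
        # Primary satisfaction: average CES ("how easy was it?",
        # 1-7, higher = easier).
        "ces_avg": sum(t["ces"] for t in tickets) / n,
        # Primary speed: average time to resolution, in hours.
        "ttr_hours_avg": sum(t["hours_to_resolution"] for t in tickets) / n,
        # Primary quality: share of customers with 3+ contacts this period.
        "repeat_contact_rate": (
            sum(1 for c in per_customer.values() if c >= 3) / len(per_customer)
        ),
        # Primary efficiency: share of tickets handled without a human.
        "automation_rate": sum(t["automated"] for t in tickets) / n,
    }


week = [
    {"customer_id": "c1", "ces": 6, "hours_to_resolution": 4.0, "automated": True},
    {"customer_id": "c2", "ces": 3, "hours_to_resolution": 26.0, "automated": False},
    {"customer_id": "c1", "ces": 5, "hours_to_resolution": 1.5, "automated": True},
]
print(build_dashboard(week))
```

The point of the rollup isn't the exact numbers; it's that one glance answers four questions (easy? fast? solved? scaling?) instead of the single question CSAT answers.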

How to Transition

If your team has been reporting CSAT to leadership for years, you can't just stop. You need to introduce the new metrics alongside CSAT and let the data speak.

Start reporting CES and TTR alongside CSAT in your next quarterly review. Show the cases where CSAT was high but CES was low (easy to find). Explain why CES is a better predictor of retention. Let leadership see the discrepancy.

Over 2 to 3 quarters, the new metrics become the standard. CSAT moves to the appendix. Nobody misses it because the new metrics tell a more complete, more actionable story.

See Supp Analytics

$5 in free credits. No credit card required. Set up in under 15 minutes.
