What Is NPS? And Does It Actually Predict Churn?
Net Promoter Score has fans and critics. Here is what it measures, what it misses, and whether you should track it.
NPS in 30 Seconds
Net Promoter Score asks one question: "On a scale of 0-10, how likely are you to recommend [product] to a friend?"
- 9-10 = Promoters. They'll recommend you. They're loyal.
- 7-8 = Passives. They're satisfied but not enthusiastic. Vulnerable to competitors.
- 0-6 = Detractors. They're unhappy and might actively discourage others from using you.
NPS = % Promoters - % Detractors. Range: -100 to +100.
A score of 0 to +30 is generally good. +30 to +50 is great. +50 to +70 is excellent. +70 or above is world-class. Negative means more unhappy customers than happy ones.
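The arithmetic above can be sketched in a few lines. A minimal Python version (the function name and sample responses are illustrative, not from any particular survey tool):

```python
def nps(scores):
    """Compute Net Promoter Score from a list of 0-10 survey responses."""
    if not scores:
        raise ValueError("need at least one response")
    promoters = sum(1 for s in scores if s >= 9)   # 9-10
    detractors = sum(1 for s in scores if s <= 6)  # 0-6
    # Passives (7-8) count toward the total but cancel out of the numerator.
    return round(100 * (promoters - detractors) / len(scores))

# 4 promoters, 3 passives, 3 detractors: 40% - 30% = +10
print(nps([10, 9, 9, 10, 8, 7, 7, 5, 3, 6]))  # 10
```

Note that passives still matter: they dilute both percentages, which is why a batch of lukewarm 7s and 8s drags a score toward zero without ever registering as "unhappy."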
What NPS Actually Measures
NPS measures overall brand sentiment. Not support quality. Not product satisfaction. Not likelihood of renewal. Just: would you recommend us?
This distinction matters. A customer might love your product but hate your support. Their NPS might be 8 (passive) because the product is great, even though every support interaction was terrible. Or they might give you a 6 (detractor) because of a single bad support experience, even though they use your product daily and aren't actually going to leave.
NPS captures a snapshot of sentiment. It doesn't explain why. And it doesn't reliably predict individual behavior.
Does NPS Predict Churn?
Sort of. At the aggregate level, companies with higher NPS tend to have lower churn. But at the individual level, NPS is a weak predictor. A detractor (0-6) might stay for years because switching costs are high. A promoter (9-10) might leave next month because their budget got cut.
Research from Bain & Company (who created NPS) shows a correlation between NPS and revenue growth. But correlation isn't causation. Companies that are growing tend to have happier customers AND higher NPS; high scores and high growth may both be effects of building a good product, rather than one causing the other.
The most reliable churn predictors aren't NPS scores — they're behavioral signals: login frequency dropping, feature usage declining, support tickets increasing, payment failures. These tell you what the customer is doing, not what they say they'd do hypothetically.
NPS vs CSAT
NPS: "Would you recommend us?" Measures overall sentiment. Sent quarterly or annually. Captures brand perception.
CSAT: "Were you satisfied with this interaction?" Measures specific experience quality. Sent after each support interaction. Captures support quality.
For support teams, CSAT is more actionable. It tells you whether your support is good or bad. NPS tells you whether your brand is liked — which is influenced by product, pricing, marketing, competition, and a dozen other things beyond support.
Track CSAT for support quality. Use NPS if leadership wants it for investor decks and board meetings.
Should You Track NPS?
Track NPS if:

- Your investors or board expect it
- You want a high-level health metric for the business
- You have 500+ customers (smaller samples produce noisy NPS scores)
- You'll actually read the open-ended comments (which are 10x more valuable than the score)
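The sample-size point is easy to demonstrate. A quick simulation sketch, assuming a hypothetical customer base with a "true" NPS of +20 (the 40/40/20 mix below is made up for illustration):

```python
import random
import statistics

def nps(scores):
    """Net Promoter Score for a list of 0-10 responses."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

random.seed(42)
# Hypothetical base: 40% promoters, 40% passives, 20% detractors -> true NPS +20.
population = [9] * 400 + [7] * 400 + [3] * 200

# Survey random subsets of different sizes and watch the score wobble.
for n in (30, 100, 500):
    trials = [nps(random.sample(population, n)) for _ in range(1000)]
    print(f"n={n}: NPS spread (std dev) ≈ {statistics.stdev(trials):.1f} points")
```

With 30 responses the observed score routinely swings by tens of points around the true +20, which is why a quarter-over-quarter "drop" in a small-sample NPS usually means nothing.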
Don't bother if:

- You have fewer than 100 customers
- You're looking for actionable support metrics (use CSAT and FCR — first-contact resolution — instead)
- You don't have a process for acting on the feedback
The honest truth: most small companies track NPS because "you're supposed to." They send the survey, get a score, put it in a slide deck, and do nothing with it. That's a waste of everyone's time.
If you're going to track NPS, commit to reading every detractor response, following up with every detractor personally, and closing the loop ("You said X was frustrating. We've fixed it."). Otherwise, skip it and focus on metrics that directly improve your customer experience.