Customer Support KPIs Every Founder Should Track
You cannot improve what you do not measure. These are the 6 support metrics that actually matter for small teams.
Skip the Vanity Metrics
Enterprise support teams track 30+ metrics. You do not need 30 metrics. You need 6. Here they are, in order of importance.
1. First Response Time
What it is: The time between a customer sending a message and receiving a response (human or automated).
Why it matters: This is the single strongest predictor of customer satisfaction with support. Research consistently shows that speed beats quality in customer perception. A fast, adequate response is rated higher than a slow, perfect response.
Target: Under 5 minutes for automated responses, under 1 hour for human responses during business hours.
How to improve: Automate responses for common intents. Even a "we received your message" auto-reply counts as a first response and sets expectations.
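As arithmetic, first response time is just a timestamp difference. A minimal Python sketch (the function name is illustrative, not from any particular helpdesk API):

```python
from datetime import datetime

def first_response_minutes(customer_sent, first_reply):
    """Minutes between the customer's message and the first reply,
    whether that reply came from a human or an auto-responder."""
    return (first_reply - customer_sent).total_seconds() / 60

# An auto-reply 3 minutes after the customer writes in beats
# the 5-minute automated-response target.
sent = datetime(2024, 5, 1, 9, 0)
reply = datetime(2024, 5, 1, 9, 3)
print(first_response_minutes(sent, reply))  # 3.0
```

In practice you would average this over all conversations in the week, or track the median to keep one outlier ticket from skewing the number.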
2. Automation Rate
What it is: The percentage of support messages resolved automatically without human intervention.
Why it matters: This directly measures the ROI of your automation setup. A higher automation rate means less human time spent on support and faster responses for customers.
Target: 50 to 70% for a well-configured system. Below 50%, your rules need work. Above 80%, double-check that auto-responses are actually solving problems and not just sending generic replies.
How to improve: Review messages that fall below your confidence threshold. Are there patterns you can capture with new rules? Are there intents you have not set up auto-responses for?
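Automation rate is a straightforward percentage. A quick sketch, with a guard for weeks with no volume (function name is hypothetical):

```python
def automation_rate(auto_resolved, total_messages):
    """Share of support messages resolved without human
    intervention, as a percentage of total volume."""
    if total_messages == 0:
        return 0.0  # a quiet week: avoid dividing by zero
    return 100 * auto_resolved / total_messages

# 130 of 200 messages handled automatically lands inside
# the 50-70% target band.
print(automation_rate(130, 200))  # 65.0
```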
3. Resolution Rate
What it is: The percentage of conversations that end in a resolution (the customer's question answered or their issue fixed) rather than being left unresolved.
Why it matters: High automation rate means nothing if the automated responses do not actually solve the problem. Resolution rate tells you if your responses are effective.
Target: Above 85%. If customers keep following up after your auto-reply, the auto-reply is not working.
How to improve: Read the conversations where customers follow up after an auto-response. Was the response wrong? Incomplete? Confusing? Fix the template.
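The review step above can be automated too: flag any conversation where the customer wrote back after an auto-response, since that follow-up usually means the auto-reply did not resolve the issue. A minimal sketch, assuming conversations are stored as ordered (sender, kind) pairs (that shape is an assumption, not a real API):

```python
def needs_review(conversation):
    """True if the customer wrote again after our auto-response.
    conversation is a time-ordered list of (sender, kind) tuples,
    e.g. [("customer", "message"), ("bot", "auto"), ...]."""
    saw_auto = False
    for sender, kind in conversation:
        if sender == "bot" and kind == "auto":
            saw_auto = True
        elif sender == "customer" and saw_auto:
            return True  # follow-up after the auto-reply
    return False

resolved = [("customer", "message"), ("bot", "auto")]
unresolved = [("customer", "message"), ("bot", "auto"), ("customer", "message")]
print(needs_review(resolved), needs_review(unresolved))  # False True
```

Filtering your week's conversations through a check like this gives you the exact reading list for the "fix the template" step.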
4. Intent Distribution
What it is: A breakdown of what customers are asking about, by intent category.
Why it matters: This is the most actionable metric for your product. If 30% of your support messages are about the same bug, fix the bug. If 20% are pricing questions, your pricing page is not clear enough. Intent distribution tells you what to fix in your product, not just how to respond faster.
Target: No single intent should account for more than 25% of volume. If one does, it signals a systemic issue worth fixing at the source.
How to improve: For the top 3 intents, ask: "Can we fix the underlying issue so customers stop asking?" Every support message is a signal that something in your product or documentation could be better.
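Computing the distribution is a one-liner with a counter, assuming each message has already been tagged with an intent (the tags below are made up for illustration):

```python
from collections import Counter

def intent_distribution(intents):
    """Percentage of support volume per intent, largest first."""
    counts = Counter(intents)
    total = sum(counts.values())
    return {intent: round(100 * n / total, 1)
            for intent, n in counts.most_common()}

tags = ["bug"] * 40 + ["billing"] * 30 + ["pricing"] * 20 + ["other"] * 10
print(intent_distribution(tags))
# {'bug': 40.0, 'billing': 30.0, 'pricing': 20.0, 'other': 10.0}
# 'bug' is well over the 25% threshold: fix it at the source.
```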
5. Cost Per Resolution
What it is: Your total support spend divided by the number of resolutions.
Why it matters: This tells you whether your support is efficient. A dropping cost per resolution means your automation is getting better. A rising one means something needs attention.
Target: Under $1 per resolution for automated messages. Under $5 for human-handled messages. These numbers vary by industry but give you a ballpark.
How to improve: Increase automation rate (fewer human touches per resolution) and improve auto-response quality (fewer follow-ups required).
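The division itself is trivial, but it is worth tracking automated and human-handled resolutions separately, since they have different targets. A minimal sketch (function name is illustrative):

```python
def cost_per_resolution(total_spend, resolutions):
    """Support spend divided by resolutions for one channel
    (automated or human-handled)."""
    if resolutions == 0:
        return None  # no resolutions this period: rate undefined
    return total_spend / resolutions

# $400 of tooling spend across 500 automated resolutions
# comes in under the $1 automated target.
print(cost_per_resolution(400.0, 500))  # 0.8
```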
6. Customer Satisfaction (CSAT)
What it is: How customers rate their support experience (usually 1 to 5 stars or thumbs up/down).
Why it matters: All the other metrics are leading indicators. CSAT is the outcome. Fast response time and high automation rate should produce high CSAT. If they do not, something is wrong with response quality.
Target: Above 4.0 out of 5.0, or above 80% thumbs up.
How to improve: Read the low-rated conversations. What went wrong? Slow response? Wrong answer? Tone issue? Fix the specific problem.
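Both scoring styles reduce to simple averages. A sketch covering the two targets above (function names are made up for this post):

```python
def csat_stars(ratings):
    """Average star rating on a 1-5 scale."""
    return sum(ratings) / len(ratings)

def csat_thumbs(thumbs_up, thumbs_down):
    """Percentage of rated conversations with a thumbs up."""
    total = thumbs_up + thumbs_down
    if total == 0:
        return 0.0  # nobody rated anything this week
    return 100 * thumbs_up / total

print(csat_stars([5, 4, 4, 3]))   # 4.0 -- right at the target
print(csat_thumbs(8, 2))          # 80.0
```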
How to Track These
Set up a weekly 15-minute review:
1. Pull your numbers (first response time, automation rate, resolution rate, intent distribution, cost per resolution, CSAT)
2. Compare to last week
3. Identify the one metric that moved most (good or bad)
4. Take one action to improve or maintain it
Do not try to improve all 6 at once. Pick the weakest one and focus on it for a week. Then move to the next.
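Step 3 of the weekly review, finding the metric that moved most, can be sketched as a week-over-week comparison. The metric names and values below are invented for illustration:

```python
def biggest_mover(this_week, last_week):
    """Return the name of the metric with the largest relative
    change week over week, good or bad."""
    def rel_change(name):
        prev = last_week[name]
        if prev == 0:
            return 0.0  # no baseline to compare against
        return abs(this_week[name] - prev) / prev
    return max(this_week, key=rel_change)

this_week = {"automation_rate": 62.0, "resolution_rate": 84.0, "csat": 4.1}
last_week = {"automation_rate": 55.0, "resolution_rate": 86.0, "csat": 4.2}
print(biggest_mover(this_week, last_week))  # automation_rate
```

Automation rate moved about 13% relative to last week, far more than the other two, so that is the metric to act on this week.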