
Ticket Deflection Rate: How to Measure It

Most companies measure deflection wrong. They count every help page visit as a deflected ticket. Here's how to measure it honestly and what good actually looks like.


Your self-service portal got 5,000 visits last month. Your support team handled 800 tickets. Your VP declares an 84% deflection rate. Everyone celebrates.

The problem: most of those 5,000 visitors weren't going to submit a ticket anyway. They were browsing docs, looking for setup guides, reading API reference. Counting all help page traffic as "deflected tickets" is like counting everyone who walks past a store as a "deflected shoplifter."

Real deflection is much harder to measure. And much more useful.

What Deflection Actually Means

Deflection rate measures: of the people who would have contacted support, how many found their answer without creating a ticket?

The denominator matters. It's not "total help page visits." It's "people who intended to contact support." Those are very different numbers.

A more honest formula: Deflection Rate = (Users who started the contact flow but resolved via self-service) / (Users who started the contact flow)

"Started the contact flow" means they clicked "Contact Us," opened the chat widget, or took some action that signals intent to reach out. If they found an answer (clicked a suggested article, viewed a help page, used an automated tool) and didn't submit a ticket, that's a deflection.

If they never intended to contact support, they can't be deflected. They were never headed your way.
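The honest formula can be sketched in a few lines. This is a minimal illustration, not Supp's implementation: the event names (`opened_contact_flow`, `submitted_ticket`) and the per-user event structure are hypothetical placeholders for whatever your analytics pipeline records.

```python
def deflection_rate(events):
    """Honest deflection rate: only users who signaled contact intent count.

    events: dict mapping user_id -> set of event names (hypothetical schema).
    """
    # Denominator: users who started the contact flow, not all help-page visitors.
    started = {u for u, evs in events.items() if "opened_contact_flow" in evs}
    # Numerator: those who started the flow but never submitted a ticket.
    deflected = {u for u in started if "submitted_ticket" not in events[u]}
    return len(deflected) / len(started) if started else 0.0

sample = {
    "u1": {"opened_contact_flow", "viewed_article"},    # deflected
    "u2": {"opened_contact_flow", "submitted_ticket"},  # not deflected
    "u3": {"viewed_article"},                           # never intended to contact
}
print(deflection_rate(sample))  # 0.5 -- u3 is excluded entirely
```

Note that u3 never enters the calculation at all: browsing docs without contact intent is neither a deflection nor a ticket.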

How to Track It Honestly

Set up a measurement funnel. Track these steps:

Step 1: User opens the contact/support flow (clicks "Help," opens chat widget, visits contact page). Count these users.

Step 2: User is shown self-service options (suggested articles, FAQ, automated troubleshooting). Count users who engage with these.

Step 3: User either submits a ticket (not deflected) or leaves without submitting (potentially deflected).

Deflection rate = Step 3 exits without ticket / Step 1 entries.

This isn't perfect. Some users who leave without a ticket still didn't find their answer (they gave up, not deflected). But it's far more accurate than counting raw page views.

To refine further: add a "Did this answer your question?" prompt after self-service content. Users who click "Yes" are genuinely deflected. Users who click "No" need to be directed to human support. Track the "Yes" clicks as confirmed deflections.
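The three-step funnel plus the "Did this answer your question?" refinement can be sketched as a simple classifier. Again, the event names (`answered_yes` for a "Yes" click) are assumptions about your tracking schema, not a prescribed API:

```python
from collections import Counter

def classify_funnel(events):
    """Bucket each contact-flow user by outcome (hypothetical event names)."""
    outcomes = Counter()
    for user, evs in events.items():
        if "opened_contact_flow" not in evs:
            continue  # Step 1 not reached: never signaled intent, excluded
        if "submitted_ticket" in evs:
            outcomes["ticketed"] += 1              # not deflected
        elif "answered_yes" in evs:
            outcomes["confirmed_deflected"] += 1   # clicked "Yes" on the prompt
        else:
            outcomes["exited_unconfirmed"] += 1    # ambiguous: deflected or gave up
    return outcomes
```

Reporting confirmed deflections separately from unconfirmed exits keeps the ambiguity visible instead of hiding give-ups inside your headline number.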

Benchmarks

What's a good deflection rate? It depends on how you measure, but using the honest method above:

20 to 30%: Average. Your self-service exists but isn't great. Probably some outdated articles, poor search, or missing topics.

30 to 50%: Good. You have well-written, findable content covering the top ticket drivers. Search works. Articles are current.

50 to 65%: Excellent. You have strong self-service plus contextual help (in-app suggestions, proactive answers). Most simple queries never reach a human.

Above 65%: Either you have exceptional self-service, or your product is so simple that most issues are trivial. Or you're measuring wrong.

Claims above 70% should be scrutinized. If a vendor tells you their product achieves 80% deflection, ask how they're measuring. If they count all help page visits, the number is meaningless.

How to Improve Deflection

Start with your ticket data. What are the top 10 reasons people contact you? Do you have self-service content for each one? Is that content findable, accurate, and clear?

Most deflection improvements come from fixing gaps in coverage (writing the article that doesn't exist), improving findability (better search, contextual placement), and updating stale content (the article that was right 6 months ago but isn't anymore).

AI classification helps identify gaps automatically. When Supp classifies incoming tickets, you see the intent distribution. If 15% of tickets are "password reset" and you have a self-service password reset flow, why are people still contacting support? Either they can't find the flow, or the flow is broken. The classification data tells you where to look.
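The gap analysis above can be sketched as a short ranking step. The function and its inputs are illustrative: assume you have a list of intent labels (e.g. from classification) and a set of intents you already cover with self-service content.

```python
from collections import Counter

def top_ticket_drivers(tickets, covered_intents, n=10):
    """Rank ticket intents and flag whether each has self-service coverage.

    tickets: iterable of intent labels, one per ticket (hypothetical input).
    covered_intents: set of intents with existing self-service content.
    Returns (intent, count, covered) tuples for the top n drivers.
    """
    counts = Counter(tickets).most_common(n)
    return [(intent, count, intent in covered_intents) for intent, count in counts]

drivers = top_ticket_drivers(
    ["password reset"] * 3 + ["billing question"] * 2 + ["bug report"],
    covered_intents={"password reset"},
    n=2,
)
print(drivers)  # [('password reset', 3, True), ('billing question', 2, False)]
```

Uncovered intents (`covered=False`) are articles to write; covered intents that still rank high are flows that are unfindable or broken.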

Proactive deflection is the next level. Instead of waiting for customers to seek help, push answers to them before they ask. Onboarding emails, in-app tooltips, status page notifications, renewal reminders. Each proactive touchpoint preempts a future ticket.

The Deflection Quality Problem

There's a dark side to deflection optimization. If you optimize too aggressively, you start "deflecting" people who actually need help.

Signs of bad deflection: customer contacts support 2 to 3 times for the same issue (the self-service didn't actually help but the system counted it as deflected). CSAT scores drop on self-service interactions. Customers complain about not being able to reach a person.

Deflection is good when the customer gets their answer. Deflection is bad when the customer gives up. The metric doesn't distinguish between the two. Your CSAT and repeat-contact data do.

Track both. If deflection goes up and CSAT stays stable (or improves), your self-service is working. If deflection goes up and CSAT drops, you're blocking people from help.
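One concrete way to surface bad deflection is to flag repeat contacts: the same customer filing tickets on the same issue within a short window. This is a rough sketch with assumed inputs (a list of `(customer_id, intent, created_at)` tuples), not a specific product feature:

```python
from datetime import datetime, timedelta

def repeat_contacts(tickets, window_days=7):
    """Flag (customer, intent) pairs with repeat tickets inside the window.

    tickets: list of (customer_id, intent, created_at datetime) tuples.
    A repeat suggests an earlier 'deflection' didn't actually resolve the issue.
    """
    last_seen = {}
    flagged = set()
    for customer, intent, created in sorted(tickets, key=lambda t: t[2]):
        key = (customer, intent)
        prev = last_seen.get(key)
        if prev is not None and created - prev <= timedelta(days=window_days):
            flagged.add(key)
        last_seen[key] = created
    return flagged

flagged = repeat_contacts([
    ("c1", "login issue", datetime(2024, 1, 1)),
    ("c1", "login issue", datetime(2024, 1, 3)),   # repeat within 7 days
    ("c2", "billing", datetime(2024, 1, 1)),
])
print(flagged)  # {('c1', 'login issue')}
```

A rising deflection rate alongside a rising repeat-contact count is the clearest sign you're counting give-ups as wins.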

See Supp Analytics

$5 in free credits. No credit card required. Set up in under 15 minutes.
