
Stop Measuring First Response Time

FRT is the vanity metric of customer support. A 30-second auto-reply followed by 48 hours of silence gives you great FRT and terrible support. Measure something that matters.


Your team's first response time (FRT) is 4 minutes. It looks great on the dashboard. Your VP cites it in board meetings. "We respond in under 5 minutes!"

Except that 4-minute response is an auto-acknowledgment: "Thanks for reaching out! A member of our team will get back to you shortly." The actual human response comes 6 hours later. The issue gets resolved in 2 days.

Your FRT is 4 minutes. Your customer's experience is 2 days. Those are very different numbers.

Why FRT Became the Default Metric

FRT is popular because it's easy to measure, easy to game, and easy to report. Every help desk tracks it automatically. Leadership likes it because it sounds good. "We respond in minutes, not hours" is a great marketing line.

But FRT measures the wrong thing. It measures when you said something, not when you said something useful. An auto-reply that says "we got your message" is technically a first response. It's not helpful. The customer doesn't feel responded to. They feel acknowledged, which is different.

Acknowledgment has value (as we discussed in the waiting psychology post). But measuring acknowledgment speed as your primary KPI is like a restaurant measuring how fast they hand out menus and ignoring how long people wait for food.

How FRT Gets Gamed

The most common FRT game: set up an auto-reply that fires immediately on every ticket. FRT drops to 30 seconds. Nobody's actually responding faster. The auto-reply just masks the real delay.

A subtler game: agents "respond" to easy tickets first (typing a quick "looking into this") and leave hard tickets for later. FRT stays low because the quick acknowledgments bring the average down, even though the hardest tickets take the longest.

Priority gaming: tickets that are going to drag down FRT get rapid responses ("I'm investigating this, bear with me") while actual resolution takes the same time as before. The metric improves. The customer experience doesn't.

What to Measure Instead

Time to resolution (TTR). How long does it take from the customer's first message until the problem is actually solved? This is the metric customers care about. They don't care when you first responded. They care when their problem was fixed.

Customer effort (messages per resolution). How many back-and-forth messages did it take? A resolution in 1 message is great. A resolution in 7 messages is terrible even if each individual response was fast. Every additional message is friction.

Reopen rate. What percentage of "resolved" tickets get reopened within 72 hours? A high reopen rate means you're not actually solving problems on the first try. You're closing tickets prematurely to make resolution time look good.

Full resolution rate at different time intervals. What percentage of tickets are fully resolved within 1 hour? 4 hours? 24 hours? This gives you a distribution, not just an average. An average of 4 hours might mean 90% are resolved in 1 hour and 10% take 30 hours. The average looks fine. The 10% are having a terrible experience.
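Here is a minimal sketch of what the four metrics above look like as code, assuming a list of ticket records with created_at, resolved_at, and reopened_at timestamps and a message_count. The field names are illustrative, not any particular help desk's export schema.

```python
from datetime import timedelta

def resolution_metrics(tickets, bucket_hours=(1, 4, 24)):
    """Compute TTR, effort, reopen rate, and resolution-rate buckets.

    Assumes each ticket is a dict with hypothetical fields:
    created_at / resolved_at (datetimes), reopened_at (datetime or None),
    and message_count (int).
    """
    resolved = [t for t in tickets if t["resolved_at"] is not None]
    if not resolved:
        return None

    # Time to resolution, in hours, for every resolved ticket.
    ttrs = [(t["resolved_at"] - t["created_at"]).total_seconds() / 3600
            for t in resolved]

    # Share of tickets fully resolved within each bucket: a distribution,
    # not just an average, so the slow tail stays visible.
    buckets = {
        f"resolved_within_{h}h": sum(ttr <= h for ttr in ttrs) / len(ttrs)
        for h in bucket_hours
    }

    # Reopen rate: "resolved" tickets reopened within 72 hours.
    reopened = sum(
        1 for t in resolved
        if t.get("reopened_at") is not None
        and t["reopened_at"] - t["resolved_at"] <= timedelta(hours=72)
    )

    return {
        "mean_ttr_hours": sum(ttrs) / len(ttrs),
        "messages_per_resolution": sum(t["message_count"] for t in resolved) / len(resolved),
        "reopen_rate_72h": reopened / len(resolved),
        **buckets,
    }
```

Reporting the within-1-hour / 4-hour / 24-hour shares next to the mean is what exposes the 10% of tickets taking 30 hours behind a 4-hour average.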

First Meaningful Response Time

If you want to keep a response-time metric, measure "first meaningful response time" instead of first response time.

A meaningful response is one that either resolves the issue or makes substantive progress (asks a relevant question, provides a partial answer, gives a timeline). An auto-acknowledgment is not meaningful. "Looking into this" is not meaningful. "I've checked your account and the charge was a duplicate. I've refunded it." is meaningful.

This is harder to measure automatically because "meaningful" requires judgment. Some teams tag responses as "substantive" vs "acknowledgment" and only track FRT on substantive responses. Others train agents to count their first real response separately.
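If you go the tagging route, the metric itself is easy to compute once the judgment call has been made. A sketch, assuming each ticket carries its creation time and a chronological list of responses, each with a sent_at timestamp and a manually applied tag; all of these field names are hypothetical:

```python
def first_meaningful_response_hours(ticket):
    """Hours from ticket creation to the first response tagged "substantive".

    Assumes a hypothetical shape: ticket["created_at"] is a datetime and
    ticket["responses"] is a chronological list of dicts, each with a
    "sent_at" datetime and a "tag" of "substantive" or "acknowledgment".
    Returns None if no substantive response has been sent yet.
    """
    for response in ticket["responses"]:
        if response["tag"] == "substantive":
            delta = response["sent_at"] - ticket["created_at"]
            return delta.total_seconds() / 3600
    return None
```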

It's more work. But the metric actually correlates with customer satisfaction, which FRT doesn't.

The Dashboard Problem

The deeper issue is what appears on your support dashboard. If FRT is the first number leadership sees, it becomes the number everyone optimizes for. Goodhart's Law: when a measure becomes a target, it ceases to be a good measure.

Put these metrics front and center instead:

TTR (time to resolution): the actual customer experience duration.

CSAT: direct measurement of customer happiness.

Ticket volume trend: is volume going up or down? If it's going up, you have a product or documentation problem, not just a support throughput problem.

Automation rate: what percentage of tickets are resolved without human intervention? This measures your system efficiency, not just your team's speed.
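As a small illustration of that last one, here's a hedged sketch of an automation-rate calculation, assuming each ticket records whether a human agent ever responded; the human_touched flag is an assumption for the example, not a field any specific help desk provides.

```python
def automation_rate(tickets):
    """Share of resolved tickets closed without any human response.

    Assumes each ticket dict has a "resolved_at" timestamp and a boolean
    "human_touched" flag; both names are illustrative.
    """
    resolved = [t for t in tickets if t["resolved_at"] is not None]
    if not resolved:
        return 0.0
    automated = sum(1 for t in resolved if not t["human_touched"])
    return automated / len(resolved)
```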

Move FRT to a secondary dashboard. It's useful as a diagnostic (if FRT is 24 hours, something is wrong with staffing). It's useless as a primary KPI.

What Supp Measures

Supp's analytics dashboard tracks intent distribution, response times, resolution rates, and automation rates by default. The classification data shows you which intents have the longest resolution times (where you need better documentation or automation) and which have the highest reopen rates (where your responses aren't actually solving the problem).

The export functionality (CSV, JSON) lets you build custom metrics in your own BI tools. If you want to track "first meaningful response time" or "messages per resolution by intent category," the data is there.
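As one hedged example, here's a sketch that computes messages per resolution by intent category from an exported CSV. The file name and column names (intent, message_count, resolved_at) are assumptions for illustration, not Supp's actual export schema.

```python
import csv
from collections import defaultdict

# Messages per resolution, grouped by intent category, from an exported CSV.
# The file name and columns below are illustrative, not a real export schema.
totals = defaultdict(lambda: {"messages": 0, "tickets": 0})

with open("tickets_export.csv", newline="") as f:
    for row in csv.DictReader(f):
        if not row["resolved_at"]:  # skip tickets that aren't resolved yet
            continue
        totals[row["intent"]]["messages"] += int(row["message_count"])
        totals[row["intent"]]["tickets"] += 1

for intent, agg in sorted(totals.items()):
    avg = agg["messages"] / agg["tickets"]
    print(f"{intent}: {avg:.1f} messages per resolution")
```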

The point isn't that FRT is worthless. It's that FRT alone, without TTR, effort metrics, and quality metrics alongside it, tells you almost nothing about your customer's experience. A fast response followed by a slow resolution is a common pattern, and FRT completely hides it.

See Supp Analytics

$5 in free credits. No credit card required. Set up in under 15 minutes.
