
The Panopticon Effect: How QA Monitoring Changes Agent Behavior

If you think you're being watched, you self-regulate. QA monitoring does this to agents. But Foucault warned that surveillance creates compliance, not excellence.


In 1791, Jeremy Bentham designed a prison called the panopticon. The architecture was circular: cells around the perimeter, a guard tower in the center. The guard could see any cell at any time, but the prisoners couldn't see the guard. They didn't know if they were being watched at any given moment, so they behaved as if they were always being watched.

Michel Foucault later analyzed the panopticon as a model for institutional power. His insight: the effect of constant possible surveillance is self-regulation. People internalize the rules and police themselves. They don't need an actual guard watching. The possibility of being watched is enough.

Your QA monitoring system is a panopticon.

How QA Monitoring Works (and What It Does to Agents)

Most support teams use some form of quality monitoring: a percentage of tickets are randomly reviewed by a team lead or QA specialist. The agent's response is scored on accuracy, tone, completeness, and adherence to guidelines.

The agent knows they're being monitored. They don't know which tickets are being reviewed. This creates the panopticon dynamic: they treat every ticket as if it's being graded.

The intended effect: agents write better responses because they know someone might check.

The actual effect is more complex:

Compliance increases. Agents follow the template, use the approved greeting, include the required closing. The monitored metrics improve. Tickets that get reviewed look good.

Risk-taking decreases. Agents stop using creative solutions, personal touches, or judgment calls because deviating from the script is risky. If they improvise and the review scores it poorly, they get coached. If they follow the template and the review scores it well, they're safe.

Stress increases. The knowledge that any ticket might be reviewed adds cognitive load to every interaction. The agent is simultaneously solving the customer's problem and performing for an invisible audience.

The Compliance Trap

Foucault's critique of the panopticon was that it produces compliance, not improvement. The surveilled subject learns to meet the minimum standard, not to exceed it. They optimize for the rubric, not for the customer.

In support QA, this manifests as:

Agents who score perfectly on QA reviews but have mediocre CSAT. They're following the rules but not connecting with customers. The template response is "correct" but lifeless.

Agents who avoid difficult tickets. Complex tickets are harder to handle correctly and more likely to result in a low QA score. If the incentive is to score well on reviews, the rational strategy is to cherry-pick easy tickets.

Agents who pad responses. Some QA rubrics score for completeness. Agents learn to add unnecessary information ("As a reminder, your account includes...") to tick the completeness box, even when it makes the response longer than it needs to be.

Better Monitoring: The Coach Model

The alternative to surveillance-style QA is coaching-style QA. Instead of scoring tickets against a rubric, review them as a starting point for coaching conversations.

The difference is framing. Surveillance QA asks: "Did the agent follow the rules?" Coaching QA asks: "How can this agent get better?"

In coaching QA, the reviewer reads 5 tickets per agent per week, not to grade them, but to identify patterns. "I notice you always ask the same clarifying question. What if you asked this different question instead, which gets to the answer faster?" That's coaching. "You scored 3/5 on 'clarifying questions' this week" is surveillance.

Coaching QA produces improvement because the agent understands the why behind the feedback. Surveillance QA produces compliance because the agent understands the what (the rubric) but not the why.

The Transparency Approach

Some companies have moved to fully transparent QA: agents know which tickets were reviewed and can see their scores in real time. No hidden reviews. No surprise feedback.

The transparency removes the panopticon anxiety. The agent isn't wondering "was that ticket reviewed?" They know. And when they see a score, they can immediately review their response and understand the feedback.

Transparent QA produces a different dynamic than hidden QA:

Agents self-correct. When they see a low score on a specific ticket, they review their response, understand what went wrong, and adjust. The feedback loop is immediate and clear.

Agents trust the system. Hidden QA feels adversarial ("they're watching me"). Transparent QA feels collaborative ("they're helping me improve"). That difference in trust shows up directly in agent retention.

Agents focus on improvement, not avoidance. When monitoring is transparent, agents focus on doing better work, not on avoiding detection. The motivation shifts from extrinsic (fear of a bad score) to intrinsic (desire to improve).

What to Actually Measure

The rubric matters as much as the process. A rubric that measures "did the agent use the approved greeting?" produces compliance. A rubric that measures "did the customer get their problem solved completely?" produces results.

Good QA measures:

Resolution accuracy: was the answer correct?

Resolution completeness: did the agent address everything the customer asked?

Customer effort: did the resolution require one message or five?

Bad QA measures:

Template adherence: did the agent use the exact approved phrasing?

Response length: was the response "long enough" (an arbitrary threshold)?

Keyword usage: did the agent say "thank you" and "is there anything else"?

The first set measures outcomes. The second measures performance. Outcomes matter to customers. Performance matters to auditors. Optimize for the one that generates revenue.
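The contrast between the two rubrics can be made concrete. Below is a minimal sketch of an outcome-focused score built from the three "good QA" measures above: accuracy, completeness, and customer effort. The field names and the equal weighting are illustrative assumptions, not Supp's rubric; note that it scores nothing about greetings, templates, or response length.

```python
from dataclasses import dataclass

# Hypothetical review record: field names are assumptions for
# illustration, not any real QA tool's schema.
@dataclass
class TicketReview:
    accurate: bool            # resolution accuracy: was the answer correct?
    issues_raised: int        # distinct questions the customer asked
    issues_addressed: int     # how many of them the agent answered
    messages_to_resolve: int  # customer effort: total exchanges (>= 1)

def outcome_score(review: TicketReview) -> float:
    """Score 0-1 from outcomes only. Equal weights are an arbitrary
    starting point; tune them to your team's priorities."""
    accuracy = 1.0 if review.accurate else 0.0
    completeness = (review.issues_addressed / review.issues_raised
                    if review.issues_raised else 1.0)
    # One message is ideal; every extra back-and-forth costs the customer.
    effort = 1.0 / review.messages_to_resolve
    return round((accuracy + completeness + effort) / 3, 2)

# A correct, complete, one-message resolution scores 1.0:
print(outcome_score(TicketReview(True, 2, 2, 1)))   # -> 1.0
# Correct but half-complete and needing two exchanges scores lower:
print(outcome_score(TicketReview(True, 2, 1, 2)))   # -> 0.67
```

A template-adherence rubric would give both of these tickets the same score if both used the approved greeting; the outcome rubric separates them by what the customer actually experienced.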

The AI Connection

AI can assist QA without the panopticon effect. Supp's classification provides objective data: was the ticket classified correctly? Was the response intent-appropriate? Did the resolution match the expected response for that intent category?

This data supplements human QA review. The human reviewer sees not just the agent's response but the AI's assessment of whether the response matched the intent. "The AI classified this as a billing dispute but the agent responded with a feature explanation" is an objective flag that something went wrong.

AI-assisted QA catches errors that human reviewers might miss (misclassifications, wrong answers to factual questions) while leaving judgment calls (tone, empathy, creativity) to human coaches.
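The intent-mismatch flag described above can be sketched in a few lines. Assume (hypothetically; this is not Supp's actual API) that the classifier yields an intent label for the ticket and another for the agent's response; a simple comparison produces the objective flag for a human reviewer.

```python
from typing import Optional

def flag_intent_mismatch(ticket_intent: str,
                         response_intent: str) -> Optional[str]:
    """Return an objective flag when the agent's response doesn't match
    the intent the classifier assigned to the ticket, else None.
    Intent labels here are illustrative assumptions."""
    if ticket_intent != response_intent:
        return (f"Flag for human review: ticket classified as "
                f"'{ticket_intent}' but response matches "
                f"'{response_intent}'")
    return None

# The example from the text: a billing dispute answered with a
# feature explanation gets flagged; a matched pair does not.
print(flag_intent_mismatch("billing_dispute", "feature_explanation"))
print(flag_intent_mismatch("billing_dispute", "billing_dispute"))  # None
```

The point of the design is that the flag is a prompt, not a verdict: the mismatch routes the ticket to a human coach, who makes the judgment call.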

The goal is a QA system that makes agents better, not one that makes them nervous. Foucault was right: surveillance produces compliance. Coaching produces excellence. Choose accordingly.

See Supp Analytics

$5 in free credits. No credit card required. Set up in under 15 minutes.

Tags: support quality monitoring, QA support agents, agent monitoring ethics, support quality assurance, agent performance tracking