
Are AI Chatbots Bad for Consumers?

75% of consumers prefer humans. Satisfaction scores are at an all-time low. Regulators are circling. We sell AI support tools and we still think the honest answer matters more than the comfortable one.


We sell AI-powered support tools. You should know that before reading this, because it means we have a financial incentive to tell you AI chatbots are great. We're going to try to be honest instead, and you can decide whether we succeeded.

The uncomfortable data

Customer experience quality in the United States has declined for four consecutive years. Forrester's CX Index, the most comprehensive measure of that quality, fell to 68.3 in 2025, its lowest score since the index began in 2016. In 2024, an unprecedented 39% of brands saw statistically significant declines. The trend continued in 2025, with 25% of brands declining and only 7% improving.

The American Customer Satisfaction Index tells the same story. After dropping to 77.3 in Q4 2024 (down from 77.8 in Q4 2023), the ACSI fell further to 76.9 in Q4 2025 and has stayed there. On an annual basis, the index has fallen 0.5% and, as the ACSI noted, "has not materially increased since 2017." More than a decade of supposed innovation and the needle hasn't moved.

These declines coincide almost exactly with the mass deployment of AI chatbots in customer service.

Correlation isn't causation. But it's not nothing, either.

What consumers actually say

A Five9 study from October 2024 found that 75% of consumers prefer talking to a real human for customer support. Nearly half (48%) don't trust information from AI-powered customer service bots. And 56% report being frustrated by chatbot interactions.

These numbers have been remarkably consistent across multiple studies from different research firms since 2023. The consumer verdict is clear. The industry is deploying technology that most consumers don't want.

The counterargument: 44% of consumers now find chatbots at least somewhat helpful, up from 34% in 2022. And 68% of end-users reported higher satisfaction when AI provided an instant first response, even when a human followed up later. The technology is getting better. But it's getting better from a low base, and "somewhat helpful" is a far cry from "preferred."

Real harm, real cases

In February 2024, Air Canada's chatbot told passenger Jake Moffatt he could retroactively apply for bereavement fares after his grandmother died. This was wrong. Air Canada's policy explicitly didn't allow retroactive applications. Moffatt relied on the chatbot's advice, booked his flight, and then discovered the information was false.

When he complained, Air Canada argued that the chatbot was essentially a "separate legal entity" and that the airline couldn't be held responsible for what its own chatbot said on its own website.

The British Columbia Civil Resolution Tribunal disagreed. Air Canada was ordered to pay approximately $650 CAD in damages. The ruling was straightforward: you deployed it, you're responsible for what it says.

In January 2024, DPD's AI chatbot called the company "the worst delivery firm in the world," used profanity, and wrote a poem about its own uselessness when a frustrated customer pushed it. Millions of people watched the screenshots circulate on social media.

The CFPB documented broader patterns before the Trump administration effectively shut the agency down in early 2025. Their research found that 98 million Americans used financial institution chatbots in 2022. The complaints were consistent: "rudimentary, circular answers," "inaccurate and unreliable information," and difficulty reaching a human. When chatbots provide wrong information about financial products, consumers can select wrong products or be assessed fees and penalties. In August 2024, the CFPB proposed rules requiring financial services providers to let consumers reach a real person with one click. That rulemaking has been frozen since February 2025, when the agency's acting director ordered all rulemaking, enforcement, and supervision to cease. Federal courts have blocked the full dismantling of the CFPB, but the one-click-to-human proposal is effectively dead for now.

The FTC has been equally direct. In August 2025, they filed suit against Air AI Technologies, which had marketed "conversational AI" claiming its product could replace human customer service representatives. The complaint alleged the company bilked small businesses out of roughly $19 million through deceptive claims about earnings potential and AI capabilities, and rarely honored its refund guarantees.

These aren't edge cases. They're the predictable result of deploying systems that generate confident-sounding text without understanding what they're saying, on behalf of companies that are legally and ethically responsible for the information they give to customers.

Why companies deploy chatbots anyway

The answer is money, and being honest about that matters.

A human-handled support interaction costs $5 to $15. An AI-handled interaction costs $0.20 to $2.00. For a company handling 10,000 support interactions per month, the difference between full human support and mostly-AI support is $40,000 to $130,000 per month. Per month.
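If you want to check that arithmetic, here's a rough sketch under one set of assumptions. The 80% automation rate is ours for illustration, not a benchmark; different splits land in a similar range.

```python
# Back-of-the-envelope version of the savings math above. The 80% automation
# rate is an illustrative assumption, not a benchmark.
MONTHLY_TICKETS = 10_000
HUMAN_COST = (5.00, 15.00)   # per-interaction cost range for a human agent, USD
AI_COST = (0.20, 2.00)       # per-interaction cost range for an AI response, USD
AUTOMATION_RATE = 0.80       # assumed share of tickets the AI handles

def monthly_cost(human_rate: float, ai_rate: float, automated_share: float) -> float:
    """Blend human and AI per-ticket costs by the share of tickets automated."""
    human_tickets = MONTHLY_TICKETS * (1 - automated_share)
    ai_tickets = MONTHLY_TICKETS * automated_share
    return human_tickets * human_rate + ai_tickets * ai_rate

# All-human support: $50,000 to $150,000 per month.
all_human = (monthly_cost(HUMAN_COST[0], AI_COST[0], 0.0),
             monthly_cost(HUMAN_COST[1], AI_COST[1], 0.0))
# Mostly-AI support at these assumptions: $11,600 to $46,000 per month.
mostly_ai = (monthly_cost(HUMAN_COST[0], AI_COST[0], AUTOMATION_RATE),
             monthly_cost(HUMAN_COST[1], AI_COST[1], AUTOMATION_RATE))

print(f"Savings: ${all_human[0] - mostly_ai[0]:,.0f} to ${all_human[1] - mostly_ai[1]:,.0f} per month")
# -> Savings: $38,400 to $104,000 per month
```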

No executive looks at that number and says "but consumers prefer humans, so let's keep paying more." They say "deploy the chatbot." And then they present the resulting cost reduction as innovation.

This is the fundamental tension, and most coverage of AI chatbots ignores it. The question isn't whether chatbots are good or bad in the abstract. The question is: who benefits? Companies benefit from the cost savings. Consumers bear the cost of worse service. The savings flow to shareholders. The frustration flows to customers.

That framing sounds anti-business. It's not. It's just honest about the incentive structure. And recognizing the incentive structure is the first step toward deploying AI in a way that actually serves both sides.

When chatbots work

Chatbots work well, genuinely well, for simple, factual queries where the customer wants an answer and doesn't need a relationship.

"What are your business hours?" A chatbot answers this in 2 seconds. A human takes 4 hours to get to the email and types the same answer. The chatbot is better.

"Where's my order?" A chatbot pulls the tracking number from the order management system and responds with a delivery estimate. Instant. Accurate. The customer gets what they want without waiting. The chatbot is better.

"How do I reset my password?" A chatbot sends the reset link. Done. No reason for a human to be involved. The chatbot is better.

For these interactions (which make up 30 to 50% of most support volume), chatbots are a genuine improvement. Faster, cheaper, and the customer prefers the speed to the human alternative.
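To make concrete what "automating a simple query" means, here's a minimal sketch. The function, intents, and data are illustrative stand-ins, not our product's API; the point is that each answer is a direct lookup with one verifiable result.

```python
# Minimal sketch of the three simple-query cases above, each answered by a
# direct lookup. The data and names are illustrative stand-ins for a real
# order-management system and password-reset flow.
from dataclasses import dataclass

BUSINESS_HOURS = "Mon-Fri, 9am-6pm ET"

@dataclass
class Shipment:
    status: str
    eta: str

ORDERS = {"A1001": Shipment(status="in transit", eta="Thursday")}  # stand-in for the OMS

def handle_simple_query(intent: str, email: str = "", order_id: str = "") -> str:
    """Answer queries that have one standard, verifiable answer."""
    if intent == "business_hours":
        return f"Our business hours are {BUSINESS_HOURS}."
    if intent == "order_status" and order_id in ORDERS:
        s = ORDERS[order_id]
        return f"Your order is {s.status}; estimated delivery {s.eta}."
    if intent == "password_reset":
        # A real system would trigger the existing reset-email flow here.
        return f"A password reset link has been sent to {email}."
    raise ValueError("Not a simple query; route to a human agent.")

print(handle_simple_query("order_status", order_id="A1001"))
```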

The problem starts when companies take the success of chatbots on simple queries and extrapolate it to all queries. "The chatbot handles password resets well, so let's have it handle billing disputes too." That's where things break.

When chatbots cause harm

Billing disputes, where the customer's money is at stake and the situation has nuance.

Complaints, where the customer is upset and needs to feel heard by another human, not processed by a machine.

Complex technical issues, where the troubleshooting requires back-and-forth diagnosis that chatbots handle poorly.

Medical, legal, or financial decisions, where wrong information can cause real damage and the chatbot can't verify what it's saying.

Emotional situations, where the customer is grieving, scared, or desperate. "My husband died and I need to cancel his account." A chatbot that responds to this with "I can help you with account cancellations! Please provide the account email address" is doing measurable emotional harm.

In each of these cases, the chatbot doesn't just fail to help. It actively makes things worse. The customer came in with a problem. They leave with the same problem plus the frustration of having been processed by a machine that couldn't understand them.

The honest answer

Are AI chatbots bad for consumers? The honest answer is: it depends on what they're used for, and most companies use them for too much.

Chatbots for simple queries: good. Faster and cheaper for everyone.

Chatbots for complex queries: bad. The customer gets worse service, the company gets worse outcomes (more escalations, more chargebacks, more negative reviews), and the short-term cost savings are offset by long-term churn.

Chatbots that block access to humans: indefensible. If a customer can't reach a person when they need one, you haven't automated support. You've eliminated it.

The dividing line should be based on what the customer needs, not what's cheapest for the company. Simple, factual questions where the answer is always the same? Automate. Anything involving money, emotion, complexity, or judgment? Human.

What we think the right approach looks like

We build a classification system, not a chatbot. The difference matters. A chatbot tries to handle the conversation. A classifier figures out what the customer needs and routes them to the right resolution, whether that's an automated answer, a knowledge base article, or a human agent.

The classifier reads the message and determines: is this something with a standard, verifiable answer (automate it), or is this something that needs judgment, empathy, or investigation (send it to a person)?
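Here's a deliberately simplified sketch of that routing decision. The categories, the confidence threshold, and the keyword-based classify() stub are illustrative assumptions; in production the classifier is a trained model, but the routing logic is the part that matters.

```python
# Simplified classify-then-route sketch. Categories, threshold, and the
# keyword-based classifier stub are illustrative assumptions.
AUTOMATABLE = {"business_hours", "order_status", "password_reset", "shipping_policy"}
NEEDS_HUMAN = {"billing_dispute", "complaint", "bereavement", "complex_technical"}

def classify(message: str) -> tuple[str, float]:
    """Stub classifier: returns (intent, confidence). A real one is a trained model."""
    text = message.lower()
    if "where is my order" in text:
        return "order_status", 0.95
    if "charged" in text or "refund" in text:
        return "billing_dispute", 0.90
    return "complaint", 0.40  # unrecognized messages get a low-confidence guess

def route(message: str) -> str:
    intent, confidence = classify(message)
    # Anything sensitive or uncertain goes straight to a person.
    if intent in NEEDS_HUMAN or confidence < 0.80:
        return "human_agent"
    if intent in AUTOMATABLE:
        return "automated_answer"
    return "human_agent"  # the default path is always a human, never a dead end

print(route("Where is my order A1001?"))                   # automated_answer
print(route("You charged me twice and I want a refund."))  # human_agent
```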

We think this is the right architecture because it doesn't force the customer through a chatbot interaction they don't want. If the answer is simple, they get it instantly. If it's not, they get a human. The AI decides which path, not the customer.

Is this the only right approach? No. Is it the approach that makes us the most money? Also no, honestly. We'd make more money if we handled the full conversation instead of just the classification. But we think the classification-first approach is better for consumers, and building something that's genuinely better for consumers is a reasonable long-term business strategy, even if it means smaller margins in the short term.

What should change

Companies should disclose when customers are talking to AI. The EU AI Act requires this. The US should too. Transparency lets customers set appropriate expectations.

Regulators should require a visible "talk to a human" option on every AI-powered support interface. The CFPB proposed a one-click-to-human rule in 2024, but the agency has been effectively frozen since February 2025. The EU AI Act still requires it. The principle is right even if US enforcement is currently absent. If companies won't provide human access voluntarily, regulation should require it.

Companies should measure customer experience alongside cost savings. A chatbot that saves $50,000/month while increasing churn by 2% is a net loss for most subscription businesses. But companies measure the savings and ignore the churn because they're tracked by different departments.
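A quick hypothetical shows why. The subscriber count, the price, and the reading of "2%" as two added percentage points of monthly churn are our assumptions; swap in your own numbers.

```python
# Hypothetical illustration of the savings-vs-churn trade-off. Subscriber count,
# price, and the churn interpretation are assumptions for this sketch.
MONTHLY_SAVINGS = 50_000     # what the chatbot saves on support costs, USD
SUBSCRIBERS = 50_000         # assumed active subscribers
PRICE_PER_MONTH = 80.0       # assumed average subscription price, USD
EXTRA_CHURN = 0.02           # two added percentage points of monthly churn

lost_customers = SUBSCRIBERS * EXTRA_CHURN        # 1,000 customers
lost_revenue = lost_customers * PRICE_PER_MONTH   # $80,000 in the first month alone

print(f"Support savings: ${MONTHLY_SAVINGS:,.0f}")
print(f"Revenue lost to extra churn: ${lost_revenue:,.0f}")
print(f"Net: ${MONTHLY_SAVINGS - lost_revenue:,.0f} per month, before lifetime value")
# -> Net: $-30,000 per month
```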

And AI vendors (including us) should be honest about what their technology can and can't do. "Our AI resolves 80% of tickets" doesn't mean "our AI provides good service on 80% of tickets." Resolution and satisfaction are different things.

The support industry is at an inflection point. The technology exists to make support genuinely better for consumers: faster answers for simple questions, instant routing for complex ones, 24/7 availability without hold times. Or the technology can be used to make support cheaper for companies while making it worse for everyone else.

Which path the industry takes depends on whether companies optimize for cost reduction or customer experience. So far, cost reduction is winning. The declining satisfaction scores are the evidence.

We think there's a better way. We're trying to build it. But we'd be lying if we said the current trajectory of AI in customer support is good for consumers. For most implementations, it isn't. The data is clear, even if the industry doesn't want to hear it.
