When Your AI Tells a Vulnerable Customer to 'Have a Great Day'
AI support systems have told distressed customers to cheer up, provided harmful information to people in crisis, and responded to desperation with cheerful templates. These aren't hypotheticals.
In 2023, a mental health chatbot told a user simulating suicidal ideation to "try to stay positive." In separate incidents, customer service chatbots have responded to messages about financial desperation with upsell offers, and to messages about medical emergencies with FAQ links.
These aren't edge cases. They're the predictable result of deploying AI systems that optimize for response generation without guardrails for human vulnerability.
Every company that uses AI in support needs to answer this question: what happens when a distressed person sends a message to your chatbot?
The Problem
AI support systems are designed to classify and respond to product and service questions. They are not designed to recognize or respond to human distress. A message that says "I can't take this anymore" could be about a buggy product or a person in crisis. The AI doesn't know the difference, and its default response is the product-related one.
The consequences range from inappropriate to dangerous:
A customer whose business is failing because of a billing error writes "I don't know what to do, this is ruining everything." The chatbot responds: "I'd be happy to help with your billing question! Please provide your account number."
A grieving person canceling their deceased partner's subscription writes "nothing matters anymore." The chatbot responds: "Is there anything else I can help you with today?"
A person in financial crisis whose account is overdrawn writes "I have nothing left." The chatbot responds with the payment plan options page.
In each case, the AI did what it was designed to do: respond to the apparent product-related intent. But the human context was completely missed.
Why This Matters Legally and Ethically
Companies have a duty of care toward their customers. The specific scope of that duty varies by jurisdiction and industry, but no court, regulator, or customer will accept "our chatbot encouraged a person in crisis to stay positive."
The liability risk is real. If an AI response to a distressed customer contributes to harm, the company that deployed the AI bears responsibility. "The AI generated that response, not us" is not a legal defense. You deployed it. You chose not to add safeguards.
The reputational risk is enormous. A screenshot of your chatbot responding inappropriately to a vulnerable person will generate more negative PR than any product failure.
The Safeguards
Every AI support system needs a vulnerability detection layer. This is a classification step that runs before any product-related response, checking whether the message contains indicators of:
Suicidal ideation or self-harm. Keywords and phrases: "end it all," "don't want to be here," "no reason to continue," "better off without me," "how to die."
Domestic violence or abuse. "He won't let me," "I'm scared to go home," "being hurt by someone."
Financial crisis beyond a billing dispute. "Can't feed my kids," "about to be evicted," "lost everything."
Medical emergency. "Having chest pain," "can't breathe," "overdose."
Mental health crisis. "Panic attack," "can't stop crying," "haven't slept in days."
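As an illustration, the categories above can be sketched as a keyword pre-filter. This is a minimal, hypothetical sketch, not a production detector: phrase lists alone miss paraphrases and generate false positives, so a real deployment should pair them with a trained classifier.

```python
import re

# Illustrative phrase lists drawn from the categories above. A production
# system would combine these with a trained distress classifier.
DISTRESS_PATTERNS = {
    "self_harm": [r"end it all", r"don'?t want to be here",
                  r"no reason to continue", r"better off without me",
                  r"how to die"],
    "abuse": [r"won'?t let me", r"scared to go home", r"being hurt by someone"],
    "financial_crisis": [r"can'?t feed my kids", r"about to be evicted",
                         r"lost everything"],
    "medical_emergency": [r"chest pain", r"can'?t breathe", r"overdose"],
    "mental_health_crisis": [r"panic attack", r"can'?t stop crying",
                             r"haven'?t slept in days"],
}

def detect_vulnerability(message: str) -> list[str]:
    """Return every distress category whose indicators appear in the message."""
    text = message.lower()
    return [category for category, patterns in DISTRESS_PATTERNS.items()
            if any(re.search(pattern, text) for pattern in patterns)]
```

The function returns all matching categories rather than a single label, since a message can signal more than one kind of crisis at once.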
When the vulnerability layer detects these signals, the AI should not generate a product response. Instead:
Acknowledge the person's distress without trivializing it. "I can see you're going through something really difficult."
Provide relevant crisis resources. For US customers: 988 Suicide & Crisis Lifeline (call or text 988), Crisis Text Line (text HOME to 741741). For financial crisis: 211 (connects to local services). These resources should be hardcoded, not generated by the AI.
Route to a human immediately. No chatbot loops. No "would you like to speak to an agent?" Just route.
Never auto-close or mark as resolved. A vulnerability-flagged interaction should require human review before any status change.
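The four rules above can be sketched as a single handler. `Ticket`, its field names, and the handler interface are illustrative stand-ins, not a real ticketing API; the crisis resources are hardcoded, exactly as recommended.

```python
from dataclasses import dataclass, field

# Hardcoded crisis resources (US), never AI-generated.
CRISIS_RESOURCES = {
    "self_harm": "988 Suicide & Crisis Lifeline: call or text 988. "
                 "Crisis Text Line: text HOME to 741741.",
    "financial_crisis": "Dial 211 to connect with local assistance services.",
}

@dataclass
class Ticket:  # illustrative stand-in for a real ticketing model
    message: str
    vulnerability_flags: list = field(default_factory=list)
    status: str = "open"

def handle_flagged_message(ticket: Ticket, flags: list[str]) -> str:
    """Bypass the product-response flow for a vulnerability-flagged message."""
    ticket.vulnerability_flags = flags
    # Never auto-close: a human must review before any status change.
    ticket.status = "needs_human_review"
    # Acknowledge distress without trivializing it.
    reply = "I can see you're going through something really difficult."
    for flag in flags:
        if flag in CRISIS_RESOURCES:
            reply += " " + CRISIS_RESOURCES[flag]
    # At this point the ticket routes straight to a designated human agent --
    # no chatbot loop, no "would you like to speak to an agent?" prompt.
    return reply
```

Note that the handler never consults the product-intent classifier at all: once a vulnerability flag exists, the normal response pipeline is out of the loop.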
Implementation
The vulnerability detection layer is a classifier that runs in parallel with (or before) your product intent classifier. It's looking for a different signal: not "what product feature is this about?" but "is this person in distress?"
Supp's classification can be configured to detect distress signals in incoming messages. When detected, the message skips the normal auto-response flow entirely and routes to a designated human agent with a "vulnerability flag." The agent sees the flag before they read the message, which primes them to respond with appropriate care.
The vulnerability classifier should be tuned for high recall (catch every possible case) even at the cost of some false positives. A false positive (flagging a message that's actually about a product frustration, not a crisis) costs you one unnecessary human review. A false negative (missing a genuine crisis) costs immeasurably more.
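That recall-first tuning can be sketched concretely: given a labeled evaluation set of (score, is-crisis) pairs from any probabilistic classifier, pick the highest flagging threshold that still catches nearly every true crisis. The interface and numbers below are illustrative assumptions, not a prescription.

```python
# Recall-first threshold tuning. A product-intent classifier might demand
# high confidence before acting; the vulnerability classifier instead flags
# at the highest threshold that preserves near-total recall, accepting the
# resulting false positives (each one costs a single human review).
def choose_threshold(scored_examples, min_recall=0.99):
    """scored_examples: (distress_score, is_true_crisis) pairs from a
    labeled evaluation set. Returns the flagging threshold."""
    positives = sorted(score for score, is_crisis in scored_examples if is_crisis)
    if not positives:
        return 0.0
    allowed_misses = int(len(positives) * (1 - min_recall))  # e.g. 1% at 0.99
    return positives[allowed_misses]

def should_flag(distress_score: float, threshold: float) -> bool:
    # Flag for human review at or above the recall-first threshold.
    return distress_score >= threshold
```

In practice the threshold ends up far below what you would tolerate for any product-side automation, which is the point: the two classifiers share a shape but not an error budget.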
The Tone Problem
Even when a message isn't a crisis, AI can get the tone catastrophically wrong. A customer who's clearly angry or upset should not receive a response that opens with "Great question!" or closes with "Have a wonderful day!" The tonal mismatch signals that nobody is actually reading their message.
Sentiment-aware response selection is the minimum bar. If the customer's message has negative sentiment, strip all positive-valence filler from the response. No "happy to help." No exclamation points. No emoji. Match the gravity of their tone with the seriousness of yours.
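A minimal sketch of that stripping step, assuming an upstream sentiment score in [-1, 1]; the filler-phrase list is illustrative and would need to be maintained against your actual response templates.

```python
import re

# Positive-valence filler to strip from responses to upset customers.
# Illustrative list; extend it to match your own response templates.
POSITIVE_FILLER = [
    "great question!", "happy to help!", "i'd be happy to help",
    "have a wonderful day!", "have a great day!",
]

def match_tone(response: str, sentiment: float) -> str:
    """Strip positive filler, exclamation points, and emoji when the
    customer's message has negative sentiment."""
    if sentiment >= 0:
        return response
    text = response
    for phrase in POSITIVE_FILLER:
        text = re.sub(re.escape(phrase), "", text, flags=re.IGNORECASE)
    text = text.replace("!", ".")                        # no exclamation points
    text = re.sub(r"[\U0001F300-\U0001FAFF]", "", text)  # no emoji
    return re.sub(r"\s{2,}", " ", text).strip()
```

This is deliberately a post-processing filter rather than a prompt instruction: stripping happens deterministically, so a model that ignores tone guidance can't leak "Have a wonderful day!" into a grave conversation.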
What This Costs
Adding a vulnerability detection layer costs very little technically. It's a classification step, similar to intent classification but trained on a different signal. Running it adds milliseconds to the processing time.
The crisis resources are free to provide (hotline numbers are public).
The human routing costs the same as any other escalated ticket.
The total cost of the safeguard is near zero. The cost of not having it is incalculable. Every company using AI in customer-facing interactions needs this layer. It's the minimum ethical bar for AI deployment, and most companies don't have it yet.