
EU AI Act and Your Support Chatbot

The EU AI Act has transparency requirements for AI systems that interact with customers. Most support chatbots are 'limited risk,' which means disclosure obligations. Here's what to do.


The EU AI Act entered into force in August 2024, with its provisions phasing in between 2025 and 2027. For most customer support chatbots, the relevant transparency obligations under Article 50 take effect in August 2026. (Prohibited AI practices and AI literacy obligations took effect in February 2025, but chatbot transparency is in the later wave.)

If you serve EU customers and use AI in your support, you have obligations. They're not onerous for most support use cases, but ignoring them can result in fines. And unlike GDPR, where enforcement started slowly, the EU has signaled that AI Act enforcement will be active from the start.

How the AI Act Classifies Support Chatbots

The AI Act uses a risk-based approach with four tiers: unacceptable risk (banned), high risk (heavy regulation), limited risk (transparency obligations), and minimal risk (no obligations).

Most customer support chatbots fall into the "limited risk" category. This applies to AI systems that interact directly with people and could be mistaken for humans. The key requirement: you must disclose that the customer is interacting with AI.

Support chatbots are NOT classified as "high risk" unless they make decisions that significantly affect people's access to essential services (like denying insurance claims or credit applications). A chatbot that answers "what are your business hours?" is limited risk. A chatbot that decides whether to approve a loan application is high risk.

The practical implication for most support teams: your obligations concern transparency, not the AI's decision-making process.

What You Need to Do

Disclosure requirement. Before or at the start of an AI-powered interaction, tell the customer they're talking to AI. This can be as simple as:

"Hi! I'm an AI assistant. I can help with common questions. If you'd like to talk to a person, just ask."

Or in the chat widget UI: a label that says "AI-powered" or "Automated assistant" near the chat bubble. The exact wording isn't prescribed. The requirement is that the customer knows they're interacting with AI.

What counts: a visible label, an introductory message, a tooltip on the chat icon. What doesn't count: disclosing it in your terms of service that nobody reads.
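
Here's a minimal sketch of how that can look in a chat widget configuration. The WidgetDisclosureConfig shape and field names are illustrative assumptions, not a real Supp API; the point is that the disclosure lives in a visible badge and the opening message, not in the terms of service.

```typescript
// Hypothetical widget configuration: WidgetDisclosureConfig and its fields
// are illustrative, not a real Supp API.
interface WidgetDisclosureConfig {
  badgeLabel: string;      // persistent label rendered next to the chat bubble
  greeting: string;        // first message the customer sees
  escalationHint: string;  // how the customer can reach a person
}

const disclosure: WidgetDisclosureConfig = {
  badgeLabel: "AI-powered",
  greeting:
    "Hi! I'm an AI assistant. I can help with common questions. " +
    "If you'd like to talk to a person, just ask.",
  escalationHint: "Type 'human' at any time.",
};

// The disclosure has to be visible before or at the start of the interaction,
// so it belongs in the badge and the greeting, not buried in the ToS.
function openingMessage(cfg: WidgetDisclosureConfig): string {
  return `[${cfg.badgeLabel}] ${cfg.greeting} ${cfg.escalationHint}`;
}

console.log(openingMessage(disclosure));
```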

Record keeping. Keep logs of AI-powered interactions. You probably already do this for quality and debugging purposes. The AI Act may require you to retain these for a period (details vary by member state implementation).
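
As a sketch of what retention can look like in practice: the 90-day window and the InteractionLog shape below are assumptions for illustration, not figures taken from the Act or from any member state's rules.

```typescript
// Illustrative retention sketch: the 90-day window and the InteractionLog
// shape are assumptions, not requirements from the AI Act.
interface InteractionLog {
  conversationId: string;
  timestamp: Date;
  aiDisclosed: boolean;      // was the AI disclosure shown?
  escalatedToHuman: boolean; // did the conversation reach a person?
}

const RETENTION_DAYS = 90;

// Drop logs older than the retention window; run this on a schedule.
function pruneExpiredLogs(logs: InteractionLog[], now: Date = new Date()): InteractionLog[] {
  const cutoffMs = now.getTime() - RETENTION_DAYS * 24 * 60 * 60 * 1000;
  return logs.filter((log) => log.timestamp.getTime() >= cutoffMs);
}
```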

Human escalation. While not explicitly required by the limited-risk tier, offering human escalation is strongly recommended and aligns with the Act's principles. If a customer explicitly asks to talk to a person, make it easy.
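
A simple keyword check is enough to honor "just ask." The trigger phrases below are illustrative; most widgets would also expose an explicit "talk to a person" button.

```typescript
// Keyword-based escalation check; the trigger phrases are illustrative.
const ESCALATION_PHRASES = ["human", "agent", "real person", "talk to someone"];

function wantsHuman(message: string): boolean {
  const text = message.toLowerCase();
  return ESCALATION_PHRASES.some((phrase) => text.includes(phrase));
}

// Route to a person as soon as the customer asks, at any point in the conversation.
function route(message: string): "human" | "ai" {
  return wantsHuman(message) ? "human" : "ai";
}

console.log(route("Can I talk to a real person?"));  // "human"
console.log(route("What are your business hours?")); // "ai"
```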

Accuracy. The Act includes general obligations around AI systems being "sufficiently accurate" for their intended purpose. For support chatbots, this means: don't deploy AI that gives wrong answers at a high rate. Use confidence thresholds. Test your system. Monitor accuracy over time.
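
To make the confidence-threshold idea concrete, here is a sketch for a classifier-style bot. The 0.85 cutoff and the Classification shape are assumptions you would tune against your own data, not values from the Act.

```typescript
// Confidence gating for a classifier-style bot; the threshold and the
// Classification shape are illustrative assumptions.
interface Classification {
  intent: string;     // e.g. "billing", "shipping", "refund"
  confidence: number; // model score between 0 and 1
}

const CONFIDENCE_THRESHOLD = 0.85;

// Answer automatically only when the model is confident; otherwise fall back
// to a human so low-confidence guesses never reach the customer unreviewed.
function shouldAutoRespond(result: Classification): boolean {
  return result.confidence >= CONFIDENCE_THRESHOLD;
}

console.log(shouldAutoRespond({ intent: "billing", confidence: 0.92 })); // true
console.log(shouldAutoRespond({ intent: "refund", confidence: 0.41 }));  // false
```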

What You Don't Need to Do

You don't need to register your chatbot with any EU authority (that's for high-risk systems only).

You don't need to conduct a conformity assessment (high-risk only).

You don't need to appoint an EU representative specifically for your chatbot (though you may need one for GDPR purposes if you serve EU customers without an EU presence).

You don't need to open-source your model or publish your training data (the documentation and training-data summary obligations apply to providers of general-purpose AI models, not to application-specific deployments).

You don't need to stop using AI in support. The Act explicitly encourages AI innovation. It just wants transparency.

GDPR Overlap

If you're already GDPR-compliant (which you should be if you serve EU customers), you're partway there. GDPR requires:

A legal basis for processing personal data in AI interactions (legitimate interest or consent, depending on how you use the data).

Data minimization (don't store more customer data than needed for the support interaction).

Right to explanation (if AI makes a decision that affects the customer, they can ask for an explanation; for support chatbots that route tickets, this is usually trivial: "The AI classified your message as a billing question and routed it to our billing team." A sketch of producing that kind of explanation follows this list).

Data subject rights (customers can request access to or deletion of their data, including AI interaction logs).
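
Here's a minimal sketch of generating the routing explanation quoted above. The RoutingDecision shape and team names are hypothetical.

```typescript
// Producing a plain-language routing explanation; RoutingDecision and the
// team names are hypothetical.
interface RoutingDecision {
  intent: string; // what the classifier decided, e.g. "billing"
  team: string;   // where the ticket was sent, e.g. "billing team"
}

function explainRouting(decision: RoutingDecision): string {
  return (
    `The AI classified your message as a ${decision.intent} question ` +
    `and routed it to our ${decision.team}.`
  );
}

console.log(explainRouting({ intent: "billing", team: "billing team" }));
```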

The AI Act adds the transparency requirement on top of GDPR's data protection requirements. They're complementary, not conflicting.

Penalties

AI Act violations follow a tiered penalty structure. The highest tier (banned AI practices) can result in fines of up to 35 million euros or 7% of global annual turnover. Transparency violations (like failing to disclose AI use in a chatbot) fall under a lower tier: up to 15 million euros or 3% of global annual turnover, whichever is higher. Still significant by any measure.

The more realistic risk for small companies isn't a massive fine. It's a complaint that triggers an investigation, which costs time and legal fees even if the outcome is favorable. The easiest way to avoid this: add a disclosure label to your chatbot and keep logs. It takes an afternoon to implement.

Practical Steps

If you're using AI in customer support and serve EU customers, here's a checklist:

  1. Add a visible AI disclosure to your chat widget or support interface. "You're chatting with our AI assistant" in the first message or as a persistent label.
  2. Include a human escalation option. "Type 'human' or click here to talk to a person." Make it accessible at any point in the conversation.
  3. Review your data retention. Make sure you're keeping interaction logs for a reasonable period (90 days is typical for support) and that you're not storing unnecessary personal data.
  4. Document your AI system. A brief internal document describing what the AI does, how it works, what data it processes, and what accuracy benchmarks you target. This isn't required for limited-risk systems, but it's good practice and will help if you ever face an inquiry.
  5. Monitor accuracy. Track your AI's classification accuracy or resolution rate (see the sketch after this checklist). If it drops below acceptable levels, pull it back and investigate. "We deploy AI responsibly" is a defensible position. "We deploy AI and don't monitor it" is not.
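
For step 5, a rolling check over recent interactions is usually enough to catch a drop early. The window size and the 80% floor below are illustrative numbers, not thresholds from the Act.

```typescript
// Rolling resolution-rate check for step 5; the window size and the 80%
// floor are illustrative, not regulatory thresholds.
interface Outcome {
  resolvedByAI: boolean; // ticket closed without escalation or reopening
}

const WINDOW = 500;              // most recent interactions to look at
const MIN_RESOLUTION_RATE = 0.8; // below this, pause and investigate

function resolutionRate(outcomes: Outcome[]): number {
  const recent = outcomes.slice(-WINDOW);
  if (recent.length === 0) return 1;
  const resolved = recent.filter((o) => o.resolvedByAI).length;
  return resolved / recent.length;
}

// "Pull it back and investigate" becomes a concrete condition you can alert on.
function shouldPauseAutomation(outcomes: Outcome[]): boolean {
  return resolutionRate(outcomes) < MIN_RESOLUTION_RATE;
}
```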

Supp's architecture aligns well with the AI Act's requirements. It's a classifier, not a generator, so it doesn't produce the kind of hallucinated responses that create accuracy problems. Its widget can be configured with an AI disclosure message. And it always supports human escalation. We don't claim to be AI Act "certified" (that's not a thing for limited-risk systems), but the technical design makes compliance straightforward.

Learn About Supp

$5 in free credits. No credit card required. Set up in under 15 minutes.
