
California SB 243: The AI Chatbot Law You Need to Know About

California SB 243 took effect January 1, 2026. It targets "companion chatbots" that form social relationships with users, but its ripple effects reach every company deploying AI chat. Here's what it means.


On January 1, 2026, California Changed the Rules for AI Chatbots

Senate Bill 243 went into effect at the start of this year. Governor Newsom signed it in October 2025, making California the first state to regulate "companion chatbots" with real enforcement teeth: a private right of action. That means affected users don't have to wait for the attorney general to investigate. They can sue directly.

The law specifically targets AI systems designed to form social or emotional relationships with users. But the principles behind it, and the regulatory momentum it represents, matter for every company deploying AI in customer-facing roles.

What SB 243 Actually Covers

SB 243 defines a "companion chatbot" as an AI system that provides adaptive, human-like responses, is capable of meeting a user's social needs, exhibits anthropomorphic features, and can sustain a relationship across multiple interactions. Think Character.ai, Replika, or any chatbot designed to be a user's friend, companion, or emotional support.

The law explicitly excludes bots used solely for customer service, business operations, productivity, internal research, or technical assistance. A standard support chatbot answering questions about your product is not a companion chatbot under SB 243.

However, the boundary is blurrier than it sounds. If a customer service chatbot remembers user preferences across sessions, adapts its personality to individual users, or engages in extended human-like dialogue beyond the support interaction, it could cross into companion chatbot territory. The law's drafters left room for interpretation, and courts will eventually draw the line.

What the Law Requires for Covered Systems

For chatbots that do fall under SB 243's definition, the requirements are specific.

Operators must clearly disclose that the user is interacting with an AI system. The disclosure has to be prominent and happen before the user engages meaningfully with the bot. Burying it in terms of service doesn't count.

Operators must provide a way for users to report concerns and access safety resources. For chatbots interacting with minors, additional safeguards apply, including parental notification features.

Operators must not use the AI system to deceive users about material facts. If the bot provides information, that information has to be accurate.

The Private Right of Action Is the Real Story

Here's why SB 243 matters beyond its specific scope. When only government agencies can enforce a law, the probability of enforcement for any individual company is low. Agencies have limited resources. They go after the biggest targets.

A private right of action means plaintiffs' attorneys can bring cases. And they will. The statutory damages under SB 243 are $1,000 per violation, plus attorney's fees and costs. Class actions covering thousands of interactions could generate significant damages: ten thousand violations at $1,000 each is $10 million in statutory exposure before fees.

This enforcement model is what other states are watching. Even if your chatbot is a standard customer service tool excluded from SB 243, the regulatory direction is clear: AI disclosure requirements are expanding, and the next law might not carve out customer service bots.

Why Customer Service Teams Should Care Anyway

SB 243 doesn't directly regulate your support chatbot. But it signals where regulation is heading, and several related legal pressures already apply.

The FTC has signaled that non-disclosure of AI in customer interactions can constitute a deceptive practice. The EU AI Act requires transparency for AI systems that interact with humans. The Air Canada chatbot tribunal ruling established that companies are liable for what their bots tell customers.

The Cursor "Sam" incident happened outside SB 243's scope (it was a support bot, not a companion chatbot), but it illustrates exactly the kind of behavior that regulators are targeting. An AI that pretends to be human and gives false information creates the same trust damage regardless of which legal category it falls into.

Practical Steps for Any AI-Powered Support

Even though SB 243 exempts standard customer service bots, adopting its principles is smart risk management.

Add a disclosure message at the start of every AI-powered conversation. Something like "You're chatting with an AI assistant. You can ask to speak with a person at any time." Put it in the chat window, not in a linked terms page.
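Here's what that can look like in practice. This is a minimal sketch assuming a hypothetical chat-widget message shape; adapt it to whatever chat SDK you actually use. The point is that the disclosure is a real, visible message prepended before the user can type anything, not a link to a policy page.

```typescript
// Sketch: AI disclosure injected as the first message in a chat widget.
// The ChatMessage shape is hypothetical; match it to your own SDK.

interface ChatMessage {
  role: "system" | "assistant" | "user";
  text: string;
  visible: boolean; // rendered in the chat window, not buried in a linked page
}

const AI_DISCLOSURE: ChatMessage = {
  role: "assistant",
  text:
    "You're chatting with an AI assistant. " +
    "You can ask to speak with a person at any time.",
  visible: true,
};

// Prepend the disclosure before any user interaction is possible.
function openConversation(history: ChatMessage[] = []): ChatMessage[] {
  return [AI_DISCLOSURE, ...history];
}
```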

Implement a human escalation path. When a customer asks to speak with a person, the system should connect them to a real agent or create a callback request. Test this path monthly.
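A minimal version of that check might look like the sketch below. The keyword matching is deliberately naive (a production system would use intent classification), and the routing helpers such as findAvailableAgent and createCallbackRequest are illustrative stubs, not a real API.

```typescript
// Sketch: detect a handoff request and route to a human or a callback queue.

const HANDOFF_PATTERNS = [
  /\bhuman\b/i,
  /\breal person\b/i,
  /speak (to|with) (an? )?(agent|person|someone)/i,
];

function wantsHuman(message: string): boolean {
  return HANDOFF_PATTERNS.some((p) => p.test(message));
}

async function findAvailableAgent(): Promise<string | null> {
  // Stub: query your agent-presence system here.
  return null;
}

async function createCallbackRequest(message: string): Promise<void> {
  // Stub: write a callback ticket to your queue (Jira, Zendesk, etc.).
  console.log(`Callback requested: ${message}`);
}

async function routeMessage(message: string): Promise<"agent" | "callback" | "bot"> {
  if (!wantsHuman(message)) return "bot";
  const agent = await findAvailableAgent();
  if (agent) return "agent";
  await createCallbackRequest(message);
  return "callback";
}
```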

Audit your AI's outputs on sensitive topics. Run billing, refund, and product-capability questions through your bot weekly. Compare answers against your actual policies. Document the audits.
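One way to make those audits repeatable is a small harness that replays a fixed question set and flags answers that drift from policy. In this sketch, askBot is a placeholder for however you call your chatbot, and the example cases stand in for your own policies.

```typescript
// Sketch: replay sensitive questions through the bot and check each answer
// against a key fact from the written policy, logging pass/fail for the record.

interface AuditCase {
  question: string;
  policyAnswerMustInclude: string; // key fact the answer must contain
}

const AUDIT_CASES: AuditCase[] = [
  { question: "Can I get a refund after 30 days?", policyAnswerMustInclude: "30-day" },
  { question: "Does the Pro plan include SSO?", policyAnswerMustInclude: "SSO" },
];

async function askBot(question: string): Promise<string> {
  // Stub: call your chatbot's API here.
  return "";
}

async function runAudit(): Promise<void> {
  for (const c of AUDIT_CASES) {
    const answer = await askBot(c.question);
    const pass = answer.toLowerCase().includes(c.policyAnswerMustInclude.toLowerCase());
    // Persist results so the audits are documented, not just run.
    console.log(`${pass ? "PASS" : "FAIL"} | ${c.question} | ${answer}`);
  }
}

runAudit();
```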

Keep your chatbot within its lane. The more your bot acts like a companion (remembering personal details, adapting personality, engaging beyond the support task), the closer it gets to SB 243 territory. A support bot should support. It shouldn't try to be the customer's friend.

How Classification Systems Sidestep the Risk

There's an architectural distinction worth noting. A generative chatbot that produces natural language responses designed to mimic human conversation is the type of system regulators are focused on. A classification system that identifies intent and triggers predefined actions is structurally different.

Supp's classifier reads incoming messages and categorizes them into 315 intents. It doesn't generate conversational responses that mimic human agents. When it takes action, it's routing to your team on Slack, creating a Jira ticket, or triggering a workflow you've defined. The customer sees your responses, not AI-generated text pretending to be a person.
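To make the distinction concrete, here's an illustrative sketch of the classify-then-route pattern. This is not Supp's actual code; the intent labels, actions, and keyword rules are invented for the example. The structural point is that classification maps a message to a predefined action rather than generating free-form conversational text.

```typescript
// Illustration of classify-then-route: message -> intent -> predefined action.

type Intent = "billing_dispute" | "bug_report" | "feature_request" | "unknown";

type Action =
  | { kind: "notify_slack"; channel: string }
  | { kind: "create_ticket"; project: string }
  | { kind: "route_to_human" };

// Each intent triggers an action your team configured, not generated text.
const ROUTES: Record<Intent, Action> = {
  billing_dispute: { kind: "notify_slack", channel: "#billing" },
  bug_report: { kind: "create_ticket", project: "SUPPORT" },
  feature_request: { kind: "create_ticket", project: "PRODUCT" },
  unknown: { kind: "route_to_human" },
};

function classify(message: string): Intent {
  // Stub: in practice this is a trained classifier, not keyword matching.
  if (/refund|charge|invoice/i.test(message)) return "billing_dispute";
  if (/crash|error|broken/i.test(message)) return "bug_report";
  if (/wish|could you add|feature/i.test(message)) return "feature_request";
  return "unknown";
}

function route(message: string): Action {
  return ROUTES[classify(message)];
}
```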

That said, transparency is always the right call regardless of architecture. We include clear labeling in the widget UI. Compliance shouldn't depend on legal technicalities about what counts as "communication." It should be obvious to the customer what's automated and what isn't.

What Comes Next

SB 243 is the first state law with a private right of action for AI chatbot behavior. Other states are watching. New York and Illinois have similar bills in various stages. Federal AI disclosure legislation is still stalled, but SB 243's enforcement model is gaining support among consumer advocates.

The companies that get ahead of this won't just avoid lawsuits. They'll build trust with customers who are increasingly skeptical of AI interactions. Disclosure isn't a burden. It's a signal that you respect your customers enough to be honest about how you're serving them.

See How Supp Stays Compliant

$5 in free credits. No credit card required. Set up in under 15 minutes.
