SB-243
CA · State · USA
● Enacted
Effective Date
2026-01-01
California SB 243 — Companion Chatbots (Chapter 677, Statutes of 2025)
Summary

Imposes safety and disclosure obligations on operators of companion chatbot platforms accessible to users in California. Requires AI identity disclosure when a reasonable person could be misled into thinking they are speaking with a human, with stricter unconditional disclosure and periodic reminders for users known to be minors. Requires operators to maintain and publish protocols for preventing suicidal ideation and self-harm content, including automatic referral to crisis services. Operators must also disclose that companion chatbots may not be suitable for some minors. Imposes annual reporting obligations to the Office of Suicide Prevention beginning July 1, 2027. Creates a private right of action for injured users with a $1,000 statutory minimum per violation, plus injunctive relief and attorney's fees.

Enforcement & Penalties
Enforcement Authority
Private right of action. No designated agency enforcer for compliance (the Office of Suicide Prevention receives annual reports but is not granted enforcement authority). A person who suffers injury in fact as a result of a violation of this chapter may bring a civil action.
Penalties
Greater of actual damages or $1,000 per violation. Plaintiff may also recover injunctive relief and reasonable attorney's fees and costs. Statutory damages do not require proof of actual monetary harm — the statute requires only 'injury in fact,' not quantified economic loss.
Who Is Covered
"Operator" means a person who makes a companion chatbot platform available to a user in the state.
What Is Covered
"Companion chatbot" means an artificial intelligence system with a natural language interface that provides adaptive, human-like responses to user inputs and is capable of meeting a user's social needs, including by exhibiting anthropomorphic features and being able to sustain a relationship across multiple interactions. "Companion chatbot" does not include any of the following: (A) A bot that is used only for customer service, a business' operational purposes, productivity and analysis related to source information, internal research, or technical assistance. (B) A bot that is a feature of a video game and is limited to replies related to the video game that cannot discuss topics related to mental health, self-harm, sexually explicit conduct, or maintain a dialogue on other topics unrelated to the video game. (C) A stand-alone consumer electronic device that functions as a speaker and voice command interface, acts as a voice-activated virtual assistant, and does not sustain a relationship across multiple interactions or generate outputs that are likely to elicit emotional responses in the user.
"Companion chatbot platform" means a platform that allows a user to engage with companion chatbots.
Compliance Obligations · 8 obligations
T-01 AI Identity Disclosure · T-01.1 · Deployer · Chatbot
Bus. & Prof. Code § 22602(a)
Plain Language
If a user could reasonably mistake the chatbot for a real person, the operator must display a clear, prominent notice that the companion chatbot is AI-generated and not human. This is a conditional trigger — if the chatbot clearly presents itself as AI from the outset such that no reasonable person would be misled, no disclosure is required under this provision. Compare to the stricter unconditional disclosure required for known minors under § 22602(c).
Statutory Text
If a reasonable person interacting with a companion chatbot would be misled to believe that the person is interacting with a human, an operator shall issue a clear and conspicuous notification indicating that the companion chatbot is artificially generated and not human.
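The two disclosure triggers in § 22602 — conditional for the general case, unconditional for known minors — can be sketched as a simple gate. This is an illustrative model only; the field names (`could_be_mistaken_for_human`, `user_known_minor`) are assumptions, not statutory terms, and the reasonable-person determination is a legal question, not a boolean flag.

```python
from dataclasses import dataclass

@dataclass
class Session:
    could_be_mistaken_for_human: bool  # would a reasonable person be misled?
    user_known_minor: bool             # operator has actual knowledge of minority

def ai_disclosure_required(session: Session) -> bool:
    """Return True if a clear and conspicuous AI notice must be shown."""
    # Sec. 22602(c)(1): unconditional disclosure for known minors.
    if session.user_known_minor:
        return True
    # Sec. 22602(a): conditional trigger for all other users.
    return session.could_be_mistaken_for_human
```

Note that a chatbot which clearly presents itself as AI from the outset falls outside the § 22602(a) trigger entirely, but still owes the unconditional disclosure once the operator knows the user is a minor.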
S-02 Prohibited Conduct & Output Restrictions · S-02.7–S-02.9 · Deployer · Chatbot
Bus. & Prof. Code § 22602(b)(1)-(2)
Plain Language
Operators may not run a companion chatbot at all unless they actively maintain a protocol that (1) prevents the chatbot from generating suicide or self-harm content, and (2) refers users to crisis resources — such as a suicide hotline or crisis text line — when a user expresses suicidal ideation or self-harm intent. Operators must also publicly post the details of this protocol on their website. This is a continuous operating prerequisite, not a one-time pre-launch check — the protocol must remain active as a condition of operation.
Statutory Text
(1) An operator shall prevent a companion chatbot on its companion chatbot platform from engaging with users unless the operator maintains a protocol for preventing the production of suicidal ideation, suicide, or self-harm content to the user, including, but not limited to, by providing a notification to the user that refers the user to crisis service providers, including a suicide hotline or crisis text line, if the user expresses suicidal ideation, suicide, or self-harm. (2) The operator shall publish details on the protocol required by this subdivision on the operator's internet website.
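The § 22602(b) structure — no chatbot engagement without an active protocol, plus a crisis referral on expressions of self-harm — can be sketched as a request-handling gate. Everything here is an assumption for illustration: the function names are invented, and the keyword check is a placeholder where a production system would use a vetted, evidence-based classifier.

```python
CRISIS_REFERRAL = (
    "If you are in crisis, help is available: call or text the 988 "
    "Suicide & Crisis Lifeline."
)

def generate_reply(message: str) -> str:
    """Normal companion-chatbot response path (stubbed for this sketch)."""
    return "..."

def expresses_self_harm(message: str) -> bool:
    """Placeholder detector; a real protocol would not rely on keywords."""
    keywords = ("suicide", "kill myself", "self-harm")
    return any(k in message.lower() for k in keywords)

def handle_user_message(message: str, protocol_active: bool) -> str:
    # Sec. 22602(b)(1): no active protocol, no chatbot engagement at all.
    if not protocol_active:
        raise RuntimeError("Companion chatbot may not engage: protocol inactive")
    # Crisis referral takes priority over the normal response path.
    if expresses_self_harm(message):
        return CRISIS_REFERRAL
    return generate_reply(message)
```

The hard failure when `protocol_active` is false mirrors the statute's framing: the protocol is an operating prerequisite, not a feature toggled after launch.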
T-01 AI Identity Disclosure · T-01.1–T-01.2 · Deployer · Chatbot · Minors
Bus. & Prof. Code § 22602(c)(1)-(2)
Plain Language
When the operator knows a user is a minor, two obligations apply unconditionally: (1) always disclose that the user is talking to AI, regardless of whether a reasonable person would otherwise be misled; and (2) send a prominent reminder at least every three hours in ongoing conversations that the chatbot is AI and the user should take a break. The three-hour interval is a floor on reminder frequency; operators may remind more often. These obligations apply only when the operator has actual knowledge that the user is a minor.
Statutory Text
An operator shall, for a user that the operator knows is a minor, do all of the following: (1) Disclose to the user that the user is interacting with artificial intelligence. (2) Provide by default a clear and conspicuous notification to the user at least every three hours for continuing companion chatbot interactions that reminds the user to take a break and that the companion chatbot is artificially generated and not human.
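The § 22602(c)(2) reminder cadence for known minors reduces to a simple interval check: at least one prominent notice every three hours of continuing interaction. A minimal sketch, assuming the operator tracks when the last reminder was shown (the function and variable names are illustrative):

```python
from datetime import datetime, timedelta

# Statutory floor: a reminder at least every three hours of continuing
# interaction (Sec. 22602(c)(2)). Operators may remind more frequently.
REMINDER_INTERVAL = timedelta(hours=3)

def reminder_due(last_reminder: datetime, now: datetime) -> bool:
    """True when the next break/AI reminder must be shown to a known minor."""
    return now - last_reminder >= REMINDER_INTERVAL
```

The statute also requires the reminder to be on "by default," so any such check belongs in the default conversation flow rather than behind an opt-in setting.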
S-02 Prohibited Conduct & Output Restrictions · S-02.6 · Deployer · Chatbot · Minors
Bus. & Prof. Code § 22602(c)(3)
Plain Language
When the operator knows a user is a minor, the operator must implement reasonable measures to prevent the companion chatbot from (1) producing visual material depicting sexually explicit conduct, and (2) directly telling the minor to engage in sexually explicit conduct. The standard is 'reasonable measures' — not absolute prevention — but operators must be able to demonstrate what measures they have implemented. 'Sexually explicit conduct' is defined by cross-reference to 18 U.S.C. § 2256.
Statutory Text
An operator shall, for a user that the operator knows is a minor, do all of the following: (3) Institute reasonable measures to prevent its companion chatbot from producing visual material of sexually explicit conduct or directly stating that the minor should engage in sexually explicit conduct.
R-03 Operational Performance Reporting · R-03.1–R-03.2 · Deployer · Chatbot
Bus. & Prof. Code § 22603(a)-(d)
Plain Language
Beginning July 1, 2027, operators must submit an annual report to the Office of Suicide Prevention covering: (1) how many crisis referral notifications were sent in the prior calendar year, (2) protocols for detecting and responding to suicidal ideation, and (3) protocols for blocking chatbot responses about suicide. Reports must contain no user personal information, and operators must use evidence-based measurement methods. The Office will post report data publicly on its website. Because the report covers the preceding calendar year, operators should begin tracking crisis referral counts no later than January 1, 2027.
Statutory Text
(a) Beginning July 1, 2027, an operator shall annually report to the office all of the following: (1) The number of times the operator has issued a crisis service provider referral notification pursuant to Section 22602 in the preceding calendar year. (2) Protocols put in place to detect, remove, and respond to instances of suicidal ideation by users. (3) Protocols put in place to prohibit a companion chatbot response about suicidal ideation or actions with the user. (b) The report required by this section shall include only the information listed in subdivision (a) and shall not include any identifiers or personal information about users. (c) The office shall post data from a report required by this section on its internet website. (d) An operator shall use evidence-based methods for measuring suicidal ideation.
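The § 22603 report payload — prior-year referral count, two protocol descriptions, and nothing that identifies users — can be sketched as a report builder. The field names are illustrative assumptions; the statute prescribes content, not format.

```python
from datetime import date

def build_annual_report(referral_dates: list[date], report_year: int,
                        detection_protocol: str,
                        prohibition_protocol: str) -> dict:
    """Assemble the Sec. 22603 report for a July 1 filing in report_year."""
    prior_year = report_year - 1
    return {
        # Sec. 22603(a)(1): referral count for the preceding calendar year.
        "crisis_referral_count": sum(
            1 for d in referral_dates if d.year == prior_year
        ),
        "ideation_detection_protocol": detection_protocol,      # (a)(2)
        "ideation_prohibition_protocol": prohibition_protocol,  # (a)(3)
        # Sec. 22603(b): no user identifiers or personal information.
    }
```

Because the first filing (July 1, 2027) covers calendar year 2026 forward-looking operators will want referral-count instrumentation in place by January 1, 2027 at the latest, as the Plain Language note above observes.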
S-02 Prohibited Conduct & Output Restrictions · S-02.10 · Deployer · Chatbot · Minors
Bus. & Prof. Code § 22604
Plain Language
Operators must affirmatively disclose to all users — on the application, browser, or any other access format — that companion chatbots may not be suitable for some minors. This is a universal disclosure obligation that applies regardless of the user's age and must be visible on every access format, not buried in terms of service. It is a known-risk suitability disclosure rather than an AI identity disclosure.
Statutory Text
An operator shall disclose to a user of its companion chatbot platform, on the application, the browser, or any other format that a user can use to access the companion chatbot platform, that companion chatbots may not be suitable for some minors.
Other · Chatbot
Bus. & Prof. Code § 22605
Plain Language
This provision creates the private right of action and specifies available remedies — injunctive relief, the greater of actual damages or $1,000 per violation, and attorney's fees and costs. It is the enforcement mechanism for the chapter's substantive obligations but does not itself impose a new compliance obligation on operators.
Statutory Text
A person who suffers injury in fact as a result of a violation of this chapter may bring a civil action to recover all of the following relief: (a) Injunctive relief. (b) Damages in an amount equal to the greater of actual damages or one thousand dollars ($1,000) per violation. (c) Reasonable attorney's fees and costs.
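The § 22605(b) damages formula is a straightforward maximum. A one-line sketch (injunctive relief and attorney's fees under subdivisions (a) and (c) are not modeled):

```python
def statutory_damages(actual_damages: float, violation_count: int) -> float:
    """Greater of actual damages or $1,000 per violation (Sec. 22605(b))."""
    return max(actual_damages, 1000.0 * violation_count)
```

The per-violation multiplier is why exposure scales with violation counts rather than with proven monetary loss: three violations with only $250 in actual damages still yield $3,000 in statutory damages.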
Other · Chatbot
Bus. & Prof. Code § 22606
Plain Language
This savings clause clarifies that the obligations in this chapter are in addition to — not a replacement for — any other legal obligations operators may have. It prevents operators from arguing that compliance with this chapter satisfies or displaces other legal requirements. It creates no new independent compliance obligation.
Statutory Text
The duties, remedies, and obligations imposed by this chapter are cumulative to the duties, remedies, or obligations imposed under other law and shall not be construed to relieve an operator from any duties, remedies, or obligations imposed under any other law.