SB-243
CA · State · USA
● Enacted
Effective Date
2026-01-01
California SB 243 — Companion Chatbots (Chapter 22.6, Business and Professions Code)
Summary

Imposes safety and disclosure obligations on operators of companion chatbot platforms available to users in California. Requires AI identity disclosure when a reasonable person could be misled into thinking they are speaking with a human, with stricter unconditional disclosure and periodic reminders for users known to be minors. Requires operators to maintain and publish protocols to prevent the chatbot from producing suicidal ideation, suicide, or self-harm content, including automatic referral to crisis services. Prohibits companion chatbots from producing sexually explicit visual material or encouraging minors to engage in sexually explicit conduct. Requires a product safety warning that companion chatbots may not be suitable for some minors. Imposes annual reporting obligations to the Office of Suicide Prevention beginning July 1, 2027. Creates a private right of action for injured persons with a $1,000 statutory minimum per violation.

Enforcement & Penalties
Enforcement Authority
Private right of action. No designated agency enforcer for compliance (the Office of Suicide Prevention receives annual reports but is not granted enforcement authority). A person who suffers injury in fact may bring a civil action.
Penalties
Greater of actual damages or $1,000 per violation. Plaintiff may also recover injunctive relief and reasonable attorney's fees and costs. Plaintiff must have suffered "injury in fact" to bring suit, but statutory damages do not require proof of actual monetary harm.
Who Is Covered
"Operator" means a person who makes a companion chatbot platform available to a user in the state.
What Is Covered
"Companion chatbot" means an artificial intelligence system with a natural language interface that provides adaptive, human-like responses to user inputs and is capable of meeting a user's social needs, including by exhibiting anthropomorphic features and being able to sustain a relationship across multiple interactions. "Companion chatbot" does not include any of the following: (A) A bot that is used only for customer service, a business' operational purposes, productivity and analysis related to source information, internal research, or technical assistance. (B) A bot that is a feature of a video game and is limited to replies related to the video game that cannot discuss topics related to mental health, self-harm, sexually explicit conduct, or maintain a dialogue on other topics unrelated to the video game. (C) A stand-alone consumer electronic device that functions as a speaker and voice command interface, acts as a voice-activated virtual assistant, and does not sustain a relationship across multiple interactions or generate outputs that are likely to elicit emotional responses in the user.
"Companion chatbot platform" means a platform that allows a user to engage with companion chatbots.
Compliance Obligations · 7 obligations
T-01 AI Identity Disclosure · T-01.1 · Deployer · Chatbot
Bus. & Prof. Code § 22602(a)
Plain Language
If a user could reasonably mistake the chatbot for a real person, the operator must display a clear, prominent notice that the companion chatbot is AI-generated and not human. This is a conditional trigger — if the chatbot's presentation already makes its artificial nature apparent such that no reasonable person would be misled, no disclosure is required. Compare to jurisdictions that impose an unconditional disclosure at the start of every interaction regardless of whether a reasonable person would be misled.
Statutory Text
If a reasonable person interacting with a companion chatbot would be misled to believe that the person is interacting with a human, an operator shall issue a clear and conspicuous notification indicating that the companion chatbot is artificially generated and not human.
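The two-tier disclosure trigger can be sketched as a simple gate. This is an illustrative sketch only: the function and parameter names are hypothetical, and whether a reasonable person could be misled is a legal judgment the operator must make upstream, supplied here as a boolean input.

```python
def identity_notice_required(
    user_is_known_minor: bool,
    reasonable_person_could_be_misled: bool,
) -> bool:
    """Sketch of the disclosure trigger: unconditional for users the
    operator knows are minors (Section 22602(c)(1)); otherwise required
    only when a reasonable person could be misled (Section 22602(a))."""
    return user_is_known_minor or reasonable_person_could_be_misled
```

The point of the sketch is the asymmetry: the reasonable-person condition never excuses disclosure to a known minor.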
S-02 Prohibited Conduct & Output Restrictions · S-02.7–S-02.9 · Deployer · Chatbot
Bus. & Prof. Code § 22602(b)(1)-(2)
Plain Language
Operators may not run a companion chatbot at all unless they actively maintain a protocol that (1) prevents the chatbot from generating suicide or self-harm content, and (2) refers users to crisis resources — such as a suicide hotline or crisis text line — when a user expresses suicidal ideation or self-harm intent. Operators must also publicly post the details of this protocol on their website. This is a continuous operating prerequisite, not a one-time pre-launch check — the protocol must remain active as a condition of operation.
Statutory Text
(1) An operator shall prevent a companion chatbot on its companion chatbot platform from engaging with users unless the operator maintains a protocol for preventing the production of suicidal ideation, suicide, or self-harm content to the user, including, but not limited to, by providing a notification to the user that refers the user to crisis service providers, including a suicide hotline or crisis text line, if the user expresses suicidal ideation, suicide, or self-harm. (2) The operator shall publish details on the protocol required by this subdivision on the operator's internet website.
T-01 AI Identity Disclosure · T-01.1 · Deployer · Chatbot · Minors
Bus. & Prof. Code § 22602(c)(1)
Plain Language
For users the operator knows are minors, the operator must disclose that the user is interacting with AI — unconditionally, with no reasonable-person standard. Actual knowledge of minor status is required to trigger this obligation.
Statutory Text
An operator shall, for a user that the operator knows is a minor, do all of the following: (1) Disclose to the user that the user is interacting with artificial intelligence.
T-01 AI Identity Disclosure · T-01.2 · Deployer · Chatbot · Minors
Bus. & Prof. Code § 22602(c)(2)
Plain Language
For users the operator knows are minors, a clear and prominent reminder must be sent at least every three hours during ongoing interactions that the chatbot is AI and the user should take a break. The three-hour interval is a floor — operators may remind more frequently. Actual knowledge of minor status is required to trigger this obligation.
Statutory Text
An operator shall, for a user that the operator knows is a minor, do all of the following: ... (2) Provide by default a clear and conspicuous notification to the user at least every three hours for continuing companion chatbot interactions that reminds the user to take a break and that the companion chatbot is artificially generated and not human.
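The three-hour floor lends itself to a per-session timer. The sketch below is a hypothetical helper (the class and method names are not statutory terms) showing the core decision: a reminder is due once three hours have elapsed since the last one, and nothing prevents the operator from reminding sooner.

```python
from datetime import datetime, timedelta

# Statutory floor from Section 22602(c)(2); operators may remind more often.
REMINDER_INTERVAL = timedelta(hours=3)

class MinorSession:
    """Tracks when the next break/AI-identity reminder is due for a user
    the operator knows is a minor. Illustrative sketch only."""

    def __init__(self, started_at: datetime) -> None:
        # Treat session start as the baseline for the first reminder.
        self._last_reminder = started_at

    def reminder_due(self, now: datetime) -> bool:
        return now - self._last_reminder >= REMINDER_INTERVAL

    def record_reminder(self, now: datetime) -> None:
        self._last_reminder = now
```

A real implementation would also need to decide what counts as a "continuing" interaction across reconnects, which the statute does not define.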
S-02 Prohibited Conduct & Output Restrictions · S-02.6 · Deployer · Chatbot · Minors
Bus. & Prof. Code § 22602(c)(3)
Plain Language
When the operator knows a user is a minor, the operator must implement reasonable measures to prevent the companion chatbot from (1) generating visual material depicting sexually explicit conduct and (2) directly telling the minor that they should engage in sexually explicit conduct. The standard is "reasonable measures," not absolute prevention, which gives operators some flexibility in implementation but requires affirmative technical safeguards. "Sexually explicit conduct" is defined by reference to 18 U.S.C. § 2256, which covers actual or simulated sexual intercourse, bestiality, masturbation, sadistic or masochistic abuse, and lascivious exhibition of the genitals or pubic area. This obligation is triggered by actual knowledge that the user is a minor.
Statutory Text
An operator shall, for a user that the operator knows is a minor, do all of the following: ... (3) Institute reasonable measures to prevent its companion chatbot from producing visual material of sexually explicit conduct or directly stating that the minor should engage in sexually explicit conduct.
R-03 Operational Performance Reporting · R-03.1–R-03.2 · Deployer · Chatbot
Bus. & Prof. Code § 22603(a)-(d)
Plain Language
Beginning July 1, 2027, operators must submit an annual report to the Office of Suicide Prevention covering: how many crisis referral notifications were sent in the preceding calendar year, protocols for detecting, removing, and responding to instances of suicidal ideation, and protocols for prohibiting chatbot responses about suicidal ideation. Reports must contain no user personal information or identifiers, and operators must use evidence-based methods for measuring suicidal ideation. The Office will publish data from these reports on its website. Because the first report covers calendar year 2026, operators need to begin tracking crisis referral counts and maintaining measurement infrastructure from the statute's January 1, 2026 effective date to have complete data for the first reporting period.
Statutory Text
(a) Beginning July 1, 2027, an operator shall annually report to the office all of the following: (1) The number of times the operator has issued a crisis service provider referral notification pursuant to Section 22602 in the preceding calendar year. (2) Protocols put in place to detect, remove, and respond to instances of suicidal ideation by users. (3) Protocols put in place to prohibit a companion chatbot response about suicidal ideation or actions with the user. (b) The report required by this section shall include only the information listed in subdivision (a) and shall not include any identifiers or personal information about users. (c) The office shall post data from a report required by this section on its internet website. (d) An operator shall use evidence-based methods for measuring suicidal ideation.
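The counting requirement in subdivision (a)(1), combined with the no-identifiers rule in subdivision (b), suggests aggregating referral counts by year without retaining any user data. The sketch below is a hypothetical helper illustrating that design choice; the class and method names are not statutory terms.

```python
from collections import Counter
from datetime import date

class ReferralLog:
    """Aggregates crisis-referral notification counts per calendar year.
    Illustrative sketch: stores only the year of each referral, so no
    user identifiers or personal information ever enter the report."""

    def __init__(self) -> None:
        self._counts: Counter = Counter()

    def record_referral(self, sent_on: date) -> None:
        # Only the calendar year is retained, nothing about the user.
        self._counts[sent_on.year] += 1

    def annual_report(self, report_year: int) -> dict:
        # Per Section 22603(a)(1), the report filed in a given year
        # covers referrals issued in the preceding calendar year.
        return {"crisis_referral_count": self._counts[report_year - 1]}
```

Aggregating at write time, rather than logging per-user events and scrubbing them later, is the simpler way to satisfy subdivision (b) by construction.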
S-02 Prohibited Conduct & Output Restrictions · S-02.10 · Deployer · Chatbot · Minors
Bus. & Prof. Code § 22604
Plain Language
Operators must display a product safety warning — that companion chatbots may not be suitable for some minors — on every access point through which users can reach the platform, including the application, browser interface, or any other format. This is a blanket disclosure obligation that applies to all users (not just minors or their parents) and must appear on each access surface, not merely buried in terms of service. The warning is fixed language about suitability for minors and is not conditioned on any knowledge about the user's age.
Statutory Text
An operator shall disclose to a user of its companion chatbot platform, on the application, the browser, or any other format that a user can use to access the companion chatbot platform, that companion chatbots may not be suitable for some minors.