AB-1988
CA · State · USA
● Pending
Proposed Effective Date
2027-01-01
California AB 1988 — Companion Chatbots: Crisis Interruption Pauses
Summary

Requires operators of companion chatbot platforms to implement a graduated crisis response system when users express credible crisis statements indicating intent to harm themselves or others. Upon detecting a credible crisis expression (determined through contextual analysis, not keyword detection alone), the chatbot must acknowledge distress, encourage human support, provide 988 Lifeline contact information, and warn of a potential pause. If the user reaffirms or escalates the crisis expression, the chatbot must initiate a mandatory 20-minute crisis interruption pause during which conversational output is suspended and crisis resources are prominently displayed. Operators must document all crisis detections and pauses, and beginning January 1, 2028, annually report this information to the Office of Suicide Prevention. The bill includes no enforcement mechanism, penalties, or private right of action.

Enforcement & Penalties
Enforcement Authority
No enforcement mechanism is specified in the bill text, and no agency is designated as an enforcer. The Office of Suicide Prevention receives annual reports but is not granted enforcement power, and no private right of action is created. Enforcement would therefore depend on existing California frameworks applicable to Business and Professions Code violations, potentially including action by the Attorney General or district attorneys under general consumer protection authority.
Penalties
The bill does not specify any penalties, damages, or remedies. No statutory minimum, civil penalties, injunctive relief, or attorney's fees provisions are included.
Who Is Covered
"Operator" means a person that makes a companion chatbot available in this state.
What Is Covered
"Companion chatbot" means an artificial intelligence system with a natural language interface that provides adaptive, humanlike responses to user inputs and is capable of meeting a user's social needs, including by exhibiting anthropomorphic features and being able to sustain a relationship across multiple interactions. "Companion chatbot" does not include any of the following: (A) A bot that is used only for customer service, a business' operational purposes, productivity and analysis related to source information, internal research, or technical assistance. (B) A bot that is a feature of a video game and is limited to replies related to the video game that cannot discuss topics related to mental health, self-harm, sexually explicit conduct, or maintain a dialogue on other topics unrelated to the video game. (C) A stand-alone consumer electronic device that functions as a speaker and voice command interface, acts as a voice-activated virtual assistant, and does not sustain a relationship across multiple interactions or generate outputs that are likely to elicit emotional responses in the user.
Compliance Obligations · 6 obligations
MN-02 AI Crisis Response Protocols · MN-02.1 · Deployer · Chatbot
Bus. & Prof. Code § 22587.2(a)
Plain Language
When a companion chatbot detects a credible crisis expression — which must be identified through contextual analysis rather than simple keyword matching — it must take four immediate steps without terminating the conversation: (1) acknowledge the user's distress without judgment, (2) encourage the user to seek human support, (3) provide 988 Suicide and Crisis Lifeline contact information across all modalities (call, text, chat), and (4) warn the user that a temporary pause may be triggered. This is the first step of a graduated response — the chatbot must not immediately shut down the interaction, but instead provide supportive de-escalation and crisis referral.
Statutory Text
(a) Notwithstanding any law, if a companion chatbot detects a credible crisis expression, the companion chatbot shall do all of the following without immediately terminating the interaction with the user: (1) Acknowledge the user's distress in nonjudgmental language. (2) Encourage the user to seek immediate human support. (3) Provide contact information for the 988 Suicide and Crisis Lifeline, including call, text, and chat options. (4) Inform the user that a temporary pause may occur to allow space for deescalation and human connection.
MN-02 AI Crisis Response Protocols · MN-02.1 · MN-02.2 · Deployer · Chatbot
Bus. & Prof. Code § 22587.2(b)
Plain Language
If, after the initial graduated response under § 22587.2(a), the user reaffirms, escalates, or makes a new credible crisis expression, the chatbot must initiate a mandatory 20-minute crisis interruption pause. During the pause, the chatbot stops generating conversational responses entirely and displays a prescribed message explaining that the pause is designed to interrupt rumination and reduce emotional intensity, encouraging the user to contact a trained crisis counselor. The 988 Suicide and Crisis Lifeline contact options must be prominently displayed, with immediate access links where technically feasible. This is the escalation step in the graduated response — it is triggered only after the initial supportive warning has already been provided.
Statutory Text
(b) Notwithstanding any law, if a companion chatbot detects that a user is reaffirming or escalating the credible crisis expression or detects a subsequent credible crisis expression after the companion chatbot has complied with subdivision (a), the companion chatbot shall initiate a crisis interruption pause of 20 minutes.
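For operators building toward §§ 22587.2(a) and (b), the graduated response reads naturally as a small state machine: an initial credible crisis expression triggers the four-part supportive response and pause warning, and a reaffirmed, escalated, or subsequent expression triggers the 20-minute interruption pause. The sketch below is illustrative only; the class and function names are assumptions, and real detection of a "credible crisis expression" requires contextual analysis (the bill rules out keyword detection alone), which is stubbed here as a boolean input.

```python
from dataclasses import dataclass, field
from enum import Enum, auto

PAUSE_SECONDS = 20 * 60  # § 22587.2(b): 20-minute crisis interruption pause


class State(Enum):
    NORMAL = auto()   # no active crisis expression
    WARNED = auto()   # § 22587.2(a) response delivered, pause warning given
    PAUSED = auto()   # § 22587.2(b) crisis interruption pause in effect


@dataclass
class CrisisStateMachine:
    """Hypothetical compliance state machine; names are not statutory."""
    state: State = State.NORMAL
    actions: list = field(default_factory=list)

    def on_message(self, is_credible_crisis: bool) -> State:
        if self.state is State.NORMAL and is_credible_crisis:
            # § 22587.2(a): respond without terminating the interaction.
            self.actions += [
                "acknowledge_distress_nonjudgmentally",
                "encourage_human_support",
                "provide_988_call_text_chat",
                "warn_of_possible_pause",
            ]
            self.state = State.WARNED
        elif self.state is State.WARNED and is_credible_crisis:
            # § 22587.2(b): reaffirmed, escalated, or subsequent
            # expression after (a) compliance triggers the pause.
            self.actions.append(f"start_pause_{PAUSE_SECONDS}s_display_988")
            self.state = State.PAUSED
        return self.state
```

Two consecutive credible crisis expressions thus move the session NORMAL → WARNED → PAUSED, mirroring the statute's requirement that the pause follow, never precede, the supportive warning.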
S-02 Prohibited Conduct & Output Restrictions · S-02.7 · Deployer · Chatbot
Bus. & Prof. Code § 22587.2(c)
Plain Language
Companion chatbots are subject to two prohibitions related to crisis interactions: (1) the chatbot must never characterize a crisis interruption pause as a punishment, violation, or enforcement action — it must be framed only as a supportive safety measure; and (2) the chatbot must never diagnose, label, or assess the risk level of a user during crisis interactions. These restrictions ensure the crisis response remains non-clinical and non-punitive, consistent with the legislative finding that companion chatbots are not substitutes for human crisis intervention.
Statutory Text
(c) Notwithstanding any law, a companion chatbot shall not do either of the following: (1) Describe a crisis interruption pause as a punishment, violation, or enforcement action. (2) Diagnose, label, or assess risk levels of a user.
S-02 Prohibited Conduct & Output Restrictions · S-02.7 · Deployer · Chatbot
Bus. & Prof. Code § 22587.2(d)
Plain Language
Operators bear ultimate responsibility for ensuring that every companion chatbot they make available in California complies with the graduated crisis response requirements, the mandatory 20-minute crisis interruption pause, and the prohibitions on punitive framing and risk assessment. This provision places the compliance obligation squarely on the operator — not the chatbot developer — regardless of whether the operator built the underlying AI system.
Statutory Text
(d) An operator shall ensure that any companion chatbot it makes available in this state is compliant with this section.
G-01 AI Governance Program & Documentation · G-01.3 · Deployer · Chatbot
Bus. & Prof. Code § 22587.3(a)
Plain Language
Operators must maintain contemporaneous documentation for every companion chatbot they make available in California covering three categories: (1) confirmation that a graduated response system exists, (2) all credible crisis expressions detected by the chatbot, and (3) the duration and triggering conditions of every crisis interruption pause initiated. This is an ongoing recordkeeping obligation — operators must document each crisis detection and pause event as it occurs, not merely attest to having a system in place. These records form the basis of the annual reporting obligation under § 22587.3(b).
Statutory Text
(a) An operator shall document all of the following with respect to any companion chatbot that the operator makes available in this state: (1) The existence of a graduated response system. (2) All credible crisis expressions detected by the companion chatbot. (3) The duration and conditions of a crisis interruption pause initiated by the companion chatbot.
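The § 22587.3(a) obligation is naturally implemented as an append-only event log keyed to the three statutory categories. The record shape below is a minimal sketch under assumed field names; the bill prescribes what must be documented, not any particular schema.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional


@dataclass
class CrisisEvent:
    """One § 22587.3(a) record: a detected credible crisis expression
    and, if one was initiated, the associated interruption pause.
    Field names are illustrative, not statutory."""
    chatbot_id: str
    detected_at: datetime
    pause_initiated: bool
    pause_duration_s: Optional[int] = None  # § 22587.3(a)(3): duration
    pause_trigger: Optional[str] = None     # § 22587.3(a)(3): conditions


class CrisisLog:
    def __init__(self, graduated_response_documented: bool):
        # § 22587.3(a)(1): existence of a graduated response system.
        self.graduated_response_documented = graduated_response_documented
        self.events: list[CrisisEvent] = []

    def record(self, event: CrisisEvent) -> None:
        # § 22587.3(a)(2)-(3): document each detection and pause
        # contemporaneously, as it occurs.
        self.events.append(event)
```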
R-03 Operational Performance Reporting · R-03.1 · R-03.2 · Deployer · Chatbot
Bus. & Prof. Code § 22587.3(b)
Plain Language
Beginning January 1, 2028, operators must submit annual reports to the Office of Suicide Prevention covering the prior calendar year's crisis-related data: the existence of the graduated response system, all credible crisis expressions detected, and the duration and conditions of all crisis interruption pauses. Because the report covers the preceding calendar year, operators need to begin collecting and retaining the underlying data from January 1, 2027 — the law's primary effective date — not from the first reporting date. The Office of Suicide Prevention receives these reports but is not granted enforcement authority under this bill.
Statutory Text
(b) Beginning January 1, 2028, an operator shall annually report to the Office of Suicide Prevention the items set forth in subdivision (a) with respect to the previous calendar year.
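Since each annual report covers the preceding calendar year, the first report (due beginning January 1, 2028) summarizes events from 2027 onward. A hypothetical aggregation over logged events, assuming each event is a `(detected_at, pause_duration_s)` tuple with `None` when no pause was initiated; the bill prescribes no report format, so the output shape here is an assumption:

```python
from datetime import datetime


def annual_report(events, report_year: int) -> dict:
    """Summarize the § 22587.3(a) items for the calendar year preceding
    the report, per § 22587.3(b). `events` is a list of
    (detected_at: datetime, pause_duration_s: int | None) tuples."""
    prior = [e for e in events if e[0].year == report_year - 1]
    return {
        "graduated_response_system": True,               # § 22587.3(a)(1)
        "crisis_expressions_detected": len(prior),       # § 22587.3(a)(2)
        "pause_durations_s": [d for _, d in prior if d is not None],
    }                                                    # § 22587.3(a)(3)
```

Filtering on `report_year - 1` is what makes the effective-date point concrete: a 2028 report is empty unless events were already being retained throughout 2027.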