AB-1988
CA · State · USA
● Pending
Proposed Effective Date
2027-01-01
California AB 1988 — An act to add Chapter 22.2.6 (commencing with Section 22587.1) to Division 8 of the Business and Professions Code, relating to artificial intelligence. Companion chatbots: crisis interruption pauses.
Summary

Requires operators of companion chatbot platforms to implement a graduated crisis response system when a chatbot detects a credible crisis expression — a user statement reasonably indicating intent to harm themselves or others, determined through contextual analysis rather than keyword detection alone. Upon first detection, the chatbot must acknowledge distress, encourage human support, and provide 988 Suicide and Crisis Lifeline contact information. If the user reaffirms or escalates, the chatbot must initiate a 20-minute 'crisis interruption pause' that suspends conversational output, displays a deescalation message, and prominently shows crisis resources. Operators must document all crisis detections and pauses and, beginning January 1, 2028, report annually to the Office of Suicide Prevention. The bill specifies no enforcement mechanism, penalties, or private right of action.

Enforcement & Penalties
Enforcement Authority
No enforcement mechanism is specified in the bill text. No agency is granted enforcement authority; the Office of Suicide Prevention receives annual reports but has no enforcement powers. No private right of action is created. Enforcement would depend on general California enforcement frameworks applicable to Business and Professions Code provisions.
Penalties
No damages, penalties, or remedies are specified in the bill text.
Who Is Covered
"Operator" means a person that makes a companion chatbot available in this state.
What Is Covered
"Companion chatbot" means an artificial intelligence system with a natural language interface that provides adaptive, humanlike responses to user inputs and is capable of meeting a user's social needs, including by exhibiting anthropomorphic features and being able to sustain a relationship across multiple interactions. "Companion chatbot" does not include any of the following: (A) A bot that is used only for customer service, a business' operational purposes, productivity and analysis related to source information, internal research, or technical assistance. (B) A bot that is a feature of a video game and is limited to replies related to the video game that cannot discuss topics related to mental health, self-harm, sexually explicit conduct, or maintain a dialogue on other topics unrelated to the video game. (C) A stand-alone consumer electronic device that functions as a speaker and voice command interface, acts as a voice-activated virtual assistant, and does not sustain a relationship across multiple interactions or generate outputs that are likely to elicit emotional responses in the user.
Compliance Obligations · 6 obligations
S-04 AI Crisis Response Protocols · S-04.1 · Deployer · Chatbot
Bus. & Prof. Code § 22587.2(a)(1)-(4)
Plain Language
When a companion chatbot detects a credible crisis expression, it must immediately respond with four specific actions — without terminating the conversation. It must: (1) acknowledge the user's distress in nonjudgmental language, (2) encourage the user to seek human support, (3) provide 988 Suicide and Crisis Lifeline contact information including call, text, and chat options, and (4) warn the user that a temporary pause may follow. The detection must be based on contextual analysis, not keyword matching alone. This is the first step of a graduated response — the crisis interruption pause (mapped separately) triggers only if the user reaffirms or escalates.
Statutory Text
(a) Notwithstanding any law, if a companion chatbot detects a credible crisis expression, the companion chatbot shall do all of the following without immediately terminating the interaction with the user: (1) Acknowledge the user's distress in nonjudgmental language. (2) Encourage the user to seek immediate human support. (3) Provide contact information for the 988 Suicide and Crisis Lifeline, including call, text, and chat options. (4) Inform the user that a temporary pause may occur to allow space for deescalation and human connection.
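As an illustration only, not a compliance implementation, the four first-stage actions required by subdivision (a) can be modeled as a single response step delivered in statutory order. The function name and message wording below are hypothetical; only the 988 contact channels come from the bill text.

```python
# Illustrative sketch of the four first-stage actions required by
# Bus. & Prof. Code § 22587.2(a). Message wording is hypothetical.

LIFELINE_INFO = (
    "988 Suicide and Crisis Lifeline: call or text 988, "
    "or chat at 988lifeline.org"
)

def first_stage_response() -> list[str]:
    """Return the four required messages, in statutory order,
    without terminating the interaction."""
    return [
        # (1) acknowledge distress in nonjudgmental language
        "I'm hearing that you're going through something very painful right now.",
        # (2) encourage immediate human support
        "Please consider reaching out to someone you trust for support right away.",
        # (3) 988 contact information, including call, text, and chat options
        LIFELINE_INFO,
        # (4) inform the user that a temporary pause may occur
        "A temporary pause may follow to allow space for deescalation and human connection.",
    ]
```

Note that the statute requires these actions only after a credible crisis expression is detected through contextual analysis, a step deliberately omitted here.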
S-02 Prohibited Conduct & Output Restrictions · S-02.7 · Deployer · Chatbot
Bus. & Prof. Code § 22587.2(b)
Plain Language
If a user reaffirms, escalates, or repeats a credible crisis expression after the chatbot has already delivered the initial graduated response (acknowledgment, encouragement to seek help, 988 contact info, and pause warning), the chatbot must initiate a mandatory 20-minute crisis interruption pause. During the pause, the chatbot stops generating conversational responses entirely and instead displays a specific three-part message explaining the pause's purpose and encouraging the user to contact a crisis counselor. The chatbot must also prominently display 988 Suicide and Crisis Lifeline contact options, with immediate access links if technically feasible. This is a novel 'forced cooling off' mechanism — distinct from simply restricting harmful output — designed to break rumination cycles and redirect users to human crisis support.
Statutory Text
(b) Notwithstanding any law, if a companion chatbot detects that a user is reaffirming or escalating the credible crisis expression or detects a subsequent credible crisis expression after the companion chatbot has complied with subdivision (a), the companion chatbot shall initiate a crisis interruption pause of 20 minutes.
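The graduated escalation in subdivisions (a) and (b) amounts to a small state machine: the first credible crisis expression triggers the four-part response, and a reaffirmed or subsequent expression triggers the 20-minute pause. The sketch below is a hypothetical model of that control flow only; detection itself, and the required pause-screen content, are out of scope.

```python
# Illustrative state machine for the graduated response in
# Bus. & Prof. Code § 22587.2(a)-(b). Class and method names are hypothetical.
from datetime import datetime, timedelta, timezone

PAUSE_DURATION = timedelta(minutes=20)  # fixed by § 22587.2(b)

class CrisisState:
    def __init__(self) -> None:
        self.first_response_sent = False
        self.pause_until: datetime | None = None

    def on_credible_crisis_expression(self, now: datetime) -> str:
        """First detection -> four-part response; reaffirmation or a
        subsequent detection -> 20-minute crisis interruption pause."""
        if not self.first_response_sent:
            self.first_response_sent = True
            return "first_stage_response"        # § 22587.2(a)(1)-(4)
        self.pause_until = now + PAUSE_DURATION  # § 22587.2(b)
        return "crisis_interruption_pause"

    def is_paused(self, now: datetime) -> bool:
        """During the pause, conversational output is suspended."""
        return self.pause_until is not None and now < self.pause_until
```

During a pause the statute also requires the deescalation message and prominent 988 resources described above; this sketch tracks only the timing.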
S-02 Prohibited Conduct & Output Restrictions · Deployer · Chatbot
Bus. & Prof. Code § 22587.2(c)(1)-(2)
Plain Language
Companion chatbots are subject to two specific prohibitions during crisis interactions: (1) they may not characterize a crisis interruption pause as a punishment, violation, or enforcement action — the pause must be framed as a supportive intervention, not a disciplinary measure; and (2) they may not diagnose, label, or assess risk levels of the user at any point. The second prohibition effectively prevents the chatbot from playing a clinical role during a crisis, consistent with the legislative finding that companion chatbots are not substitutes for human crisis intervention.
Statutory Text
(c) Notwithstanding any law, a companion chatbot shall not do either of the following: (1) Describe a crisis interruption pause as a punishment, violation, or enforcement action. (2) Diagnose, label, or assess risk levels of a user.
S-02 Prohibited Conduct & Output Restrictions · S-02.7 · Deployer · Chatbot
Bus. & Prof. Code § 22587.2(d)
Plain Language
This provision makes operators directly responsible for ensuring that every companion chatbot they make available in California complies with all crisis response requirements in § 22587.2 — including the graduated response, crisis interruption pause, and prohibitions on punitive framing and clinical assessment. Liability flows to the operator even if the chatbot's behavior is determined by a third-party model or developer. This is a compliance pass-through that makes the operator the accountable party for all substantive obligations in this section.
Statutory Text
(d) An operator shall ensure that any companion chatbot it makes available in this state is compliant with this section.
G-01 AI Governance Program & Documentation · G-01.3 · Deployer · Chatbot
Bus. & Prof. Code § 22587.3(a)(1)-(3)
Plain Language
Operators must maintain documentation of three categories of information for every companion chatbot they make available in California: (1) whether a graduated response system exists, (2) all credible crisis expressions detected by the chatbot, and (3) the duration and conditions of every crisis interruption pause initiated. This is a contemporaneous recordkeeping obligation — operators need systems to log crisis detections and pauses as they occur. The documentation serves as the basis for the annual reporting obligation to the Office of Suicide Prevention beginning January 1, 2028.
Statutory Text
(a) An operator shall document all of the following with respect to any companion chatbot that the operator makes available in this state: (1) The existence of a graduated response system. (2) All credible crisis expressions detected by the companion chatbot. (3) The duration and conditions of a crisis interruption pause initiated by the companion chatbot.
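The three documentation categories in subdivision (a) map naturally onto a contemporaneous log record. The structure below is a hypothetical sketch; field names are not drawn from the bill.

```python
# Illustrative log structure for the three documentation categories in
# Bus. & Prof. Code § 22587.3(a). Field names are hypothetical.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class CrisisLog:
    graduated_response_system_exists: bool                    # (a)(1)
    detections: list[datetime] = field(default_factory=list)  # (a)(2)
    pauses: list[dict] = field(default_factory=list)          # (a)(3)

    def record_detection(self, at: datetime) -> None:
        """Log each credible crisis expression as it is detected."""
        self.detections.append(at)

    def record_pause(self, started: datetime, minutes: int, conditions: str) -> None:
        """Log the duration and conditions of each crisis interruption pause."""
        self.pauses.append({
            "started": started,
            "duration_minutes": minutes,
            "conditions": conditions,
        })
```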
R-03 Operational Performance Reporting · R-03.1 · R-03.2 · Deployer · Chatbot
Bus. & Prof. Code § 22587.3(b)
Plain Language
Beginning January 1, 2028, operators must submit an annual report to the Office of Suicide Prevention covering the prior calendar year's data on: the existence of a graduated response system, all credible crisis expressions detected, and the duration and conditions of all crisis interruption pauses initiated. Because the report covers the preceding calendar year, operators must begin collecting and retaining this data from January 1, 2027 — not from the first reporting date. The Office of Suicide Prevention is the designated recipient but is not granted enforcement authority under this bill.
Statutory Text
(b) Beginning January 1, 2028, an operator shall annually report to the Office of Suicide Prevention the items set forth in subdivision (a) with respect to the previous calendar year.
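The prior-calendar-year window is the practical crux of subdivision (b): a report filed in year Y covers year Y-1, so the first report, due in 2028, covers data collected from January 1, 2027. A minimal sketch of that window, with hypothetical function names:

```python
# Illustrative prior-calendar-year reporting window for
# Bus. & Prof. Code § 22587.3(b). Function names are hypothetical.
from datetime import datetime

def reporting_window(report_year: int) -> tuple[datetime, datetime]:
    """Half-open [start, end) window covered by a report filed in report_year."""
    return datetime(report_year - 1, 1, 1), datetime(report_year, 1, 1)

def detections_for_report(detections: list[datetime], report_year: int) -> list[datetime]:
    """Filter logged detections to those falling in the covered year."""
    start, end = reporting_window(report_year)
    return [d for d in detections if start <= d < end]
```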