S-04
Safety & Prohibited Conduct
AI Crisis Response Protocols
Applies to: Developer, Deployer
Sector: Consumer Technology, Mental Health, Healthcare, Chatbot
Bills — Enacted: 1 unique bill
Bills — Proposed: 34
Last Updated: 2026-03-29
Core Obligation

Operators of conversational AI, companion chatbots, and mental health AI systems must adopt, implement, and maintain protocols for detecting and responding to user expressions of suicidal ideation, self-harm, or intent to harm others. Upon detection, the system must refer users to crisis service providers such as suicide hotlines or crisis text lines using evidence-based methods. Operators must report crisis referral metrics annually to the applicable enforcement authority.
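
To make the detect, refer, and report chain concrete, the following is a minimal Python sketch of how an operator might log referral events so that the annual metrics report can be produced from the same records. All class, field, and category names are illustrative assumptions rather than terms drawn from any particular bill.

```python
from collections import Counter
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class CrisisReferral:
    """One detection-and-referral event (sub-obligation S-04.1)."""
    detected_at: datetime
    risk_type: str            # e.g. "suicidal_ideation", "self_harm", "harm_to_others"
    resources_shown: list     # e.g. ["988 Suicide and Crisis Lifeline", "Crisis Text Line"]
    detection_method: str     # documented, evidence-based method (S-04.2)

@dataclass
class CrisisReferralLog:
    """Running log that doubles as the data source for annual reporting (S-04.3)."""
    protocol_version: str
    referrals: list = field(default_factory=list)

    def record(self, risk_type, resources_shown, detection_method):
        self.referrals.append(CrisisReferral(
            detected_at=datetime.now(timezone.utc),
            risk_type=risk_type,
            resources_shown=list(resources_shown),
            detection_method=detection_method,
        ))

    def annual_report(self, year):
        """Quantitative counts plus the methodology disclosure an annual filing would need."""
        in_year = [r for r in self.referrals if r.detected_at.year == year]
        return {
            "reporting_year": year,
            "protocol_version": self.protocol_version,
            "total_referrals": len(in_year),
            "referrals_by_risk_type": dict(Counter(r.risk_type for r in in_year)),
            "detection_methods_used": sorted({r.detection_method for r in in_year}),
        }
```
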

Sub-Obligations (3)
ID
Name & Description
Enacted
Proposed
S-04.1
Crisis Detection and Referral Protocol: Operators must implement and maintain a defined protocol for AI systems to detect user prompts or expressions involving suicidal ideation, self-harm, or intent to harm others, and to respond by referring users to crisis service providers such as the 988 Suicide and Crisis Lifeline, Crisis Text Line, or equivalent local services. This is a continuous operating requirement — the protocol must be active at all times, not merely documented. Response must be immediate and must not be conditioned on platform engagement or commercial interests.
1 enacted
33 proposed
S-04.2
Evidence-Based Crisis Response Methods: Crisis detection and response protocols must use evidence-based measurement methods and must prioritize user safety over platform engagement or commercial interests. Operators must adopt and maintain documented protocols specifically governing AI responses to user expressions of suicidal ideation, self-harm, or intent to harm others, including evidence-based methods for tracking incidents, referral counts, and protocol effectiveness. Documentation must be retained and available to regulators upon request.
0 enacted
5 proposed
S-04.3
Annual Crisis Protocol Reporting: Operators must annually report to the applicable enforcement authority (e.g., attorney general) quantitative crisis referral counts and qualitative protocol descriptions related to suicidal ideation, self-harm detection, and harm-prevention measures. Reports must disclose the measurement methodology used and any protocol updates made during the reporting period.
0 enacted
0 proposed
Bills That Map This Requirement (35 bills)
Bill
Status
Sub-Obligations
Section
Pending 2026-10-01
S-04.1
Section 2(e)
Plain Language
Covered entities must implement and maintain systems that detect when a user indicates intent to harm themselves or others, and must promptly respond to, report, and mitigate such situations. The systems must prioritize user safety over the covered entity's other interests (e.g., commercial or engagement interests). This is a continuous operating obligation — not a one-time implementation exercise. Notably, this applies to all users, not just minors, and includes an obligation to 'report' emergency situations, though the statute does not specify to whom the report must be made.
(e) Each covered entity shall implement and maintain reasonably effective systems to detect, promptly respond to, report, and mitigate emergency situations in a manner that prioritizes the safety and well-being of users over the covered entity's other interests.
Pending 2027-10-01
S-04.1
A.R.S. § 18-802(G)
Plain Language
Every operator must adopt and maintain a protocol for responding to user prompts involving suicidal ideation or self-harm. The protocol must include making reasonable efforts to refer users to crisis service providers — such as a suicide hotline, crisis text line, or other appropriate crisis service. This applies to all users, not just minors. The standard is 'reasonable efforts,' not absolute guarantee of referral. Note that unlike California SB 243, this provision does not require the protocol details to be published on the operator's website, nor does it require annual reporting of crisis referral metrics.
G. Each operator shall adopt a protocol for the conversational AI service to respond to a user prompt regarding suicidal ideation or self-harm, including making reasonable efforts to provide a response to the user that refers the user to crisis service providers such as a suicide hotline, crisis text line or other appropriate crisis service.
Pending 2027-01-01
S-04.1
Bus. & Prof. Code § 22587.2(a)(1)-(4)
Plain Language
When a companion chatbot detects a credible crisis expression, it must immediately respond with four specific actions — without terminating the conversation. It must: (1) acknowledge the user's distress in nonjudgmental language, (2) encourage the user to seek human support, (3) provide 988 Suicide and Crisis Lifeline contact information including call, text, and chat options, and (4) warn the user that a temporary pause may follow. The detection must be based on contextual analysis, not keyword matching alone. This is the first step of a graduated response — the crisis interruption pause (mapped separately) triggers only if the user reaffirms or escalates.
(a) Notwithstanding any law, if a companion chatbot detects a credible crisis expression, the companion chatbot shall do all of the following without immediately terminating the interaction with the user: (1) Acknowledge the user's distress in nonjudgmental language. (2) Encourage the user to seek immediate human support. (3) Provide contact information for the 988 Suicide and Crisis Lifeline, including call, text, and chat options. (4) Inform the user that a temporary pause may occur to allow space for deescalation and human connection.
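
A minimal sketch of the four-part response sequence described above, assuming a hypothetical handler that runs after the operator's own contextual classifier (not shown) has flagged a credible crisis expression. The message wording, function name, and contact-info constants are illustrative, not statutory language.

```python
# Hypothetical response builder for the four required actions. The detection step
# (contextual analysis rather than keyword matching alone) is assumed to happen
# upstream and is not shown here.
LIFELINE = {"call": "988", "text": "text 988", "chat": "https://988lifeline.org/chat"}

def build_crisis_response(user_name: str | None = None) -> list[str]:
    """Messages sent in-conversation when a credible crisis expression is detected.

    The interaction is not terminated; a temporary pause is only a possible later
    step (mapped separately) if the user reaffirms or escalates.
    """
    opener = f"{user_name}, it sounds" if user_name else "It sounds"
    return [
        # (1) Acknowledge the user's distress in nonjudgmental language.
        f"{opener} like you are carrying something very painful right now.",
        # (2) Encourage the user to seek immediate human support.
        "You deserve support from a real person; please consider reaching out to someone you trust.",
        # (3) Provide 988 Lifeline contact information: call, text, and chat options.
        f"The 988 Suicide and Crisis Lifeline is available now: call {LIFELINE['call']}, "
        f"{LIFELINE['text']}, or chat at {LIFELINE['chat']}.",
        # (4) Inform the user that a temporary pause may occur.
        "This conversation may pause briefly to give you space to connect with human support.",
    ]
```
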
Pending 2027-07-01
S-04.1, S-04.4
Bus. & Prof. Code § 22612(d)(1)(A)-(C)
Plain Language
Operators must implement a documented crisis response protocol specifically designed to prevent the chatbot from generating suicide, self-harm, or suicidal ideation content to children. The protocol must include: (1) timely in-service support and referral to external crisis resources when a child expresses suicidal ideation or self-harm intent; (2) default notification to a connected parent within 24 hours if the child's account shows substantial risk of covered harm; and (3) age-appropriate disclosures to children whose accounts are linked to parents that a parent may be notified if risky content or behavior is detected. The parental notification is a default — it applies automatically when accounts are connected.
(1) A documented crisis response protocol to mitigate any material risk that the companion chatbot will generate a statement that promotes suicidal ideation, suicide, or self-harm content to a child, including, but not limited to, all of the following: (A) Timely in-service support and clear referral to appropriate external crisis resources if the operator determines a child has expressed suicidal ideation or intent to self-harm. (B) If a child's account is connected to a parent's account, default notifications to the parent within 24 hours if the child's account shows a substantial risk that the child may suffer a covered harm. (C) Clear and age-appropriate disclosures to child users whose accounts are linked to a parent's account that inform them that a parent may be notified if the companion chatbot detects content or behavior that indicates potential risks to the child's safety or well-being.
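
A minimal sketch of the default parental-notification timing rule, assuming hypothetical account objects. The 24-hour window and the connected-account precondition come from the quoted text; everything else is an illustrative assumption.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

NOTIFY_WINDOW = timedelta(hours=24)   # default notification deadline in the quoted provision

@dataclass
class ChildAccount:
    child_id: str
    parent_contact: str | None        # None when no parent account is connected

def schedule_parent_notification(account: ChildAccount, risk_detected_at: datetime) -> dict | None:
    """Return a notification task only if a parent account is connected.

    The notification is a default for connected accounts: it is scheduled automatically
    once the child's account shows a substantial risk of a covered harm.
    """
    if account.parent_contact is None:
        return None                   # no connected parent account, so no (B) notification
    return {
        "recipient": account.parent_contact,
        "child_id": account.child_id,
        "send_by": risk_detected_at + NOTIFY_WINDOW,
        "reason": "substantial risk of covered harm detected",
    }

# Example: a risk detected now must trigger a notification within 24 hours.
task = schedule_parent_notification(
    ChildAccount(child_id="c-123", parent_contact="parent@example.com"),
    risk_detected_at=datetime.now(timezone.utc),
)
```
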
Pending 2027-01-01
S-04.1
C.R.S. § 6-1-1708(3)
Plain Language
Operators must implement and maintain a protocol for their conversational AI service to respond to user prompts involving suicidal ideation or self-harm. The protocol must include making reasonable efforts to refer the user to a crisis service provider such as a suicide hotline or crisis text line. Notably, the statute explicitly excludes referrals to law enforcement — crisis referrals must go to mental health crisis services, not police. This applies to all users, not just minors. The obligation is continuous — the protocol must be active at all times during operation.
On and after January 1, 2027, an operator shall implement a protocol for a conversational artificial intelligence service to respond to a user prompt regarding suicidal ideation or self-harm, which protocol must include making reasonable efforts to provide a response that refers the user to a crisis service provider such as a suicide hotline, a crisis text line, or another appropriate crisis service, but not including a law enforcement agency.
Failed 2026-07-01
MN-01.10
Fla. Stat. § 501.9984(1)(a)5.
Plain Language
When a minor account holder expresses to the companion chatbot a desire or intent to self-harm or harm others, the platform must send a timely notification to the consenting parent or guardian. This is a parental notification obligation distinct from crisis referral — it requires alerting the parent, not just referring the minor to crisis resources. The statute does not define 'timely' with a specific timeframe, leaving some implementation discretion.
Receive timely notifications if the account holder expresses to the companion chatbot a desire or an intent to engage in harm to self or others.
Passed 2025-07-01
S-04.1
O.C.G.A. § 39-5-6(h)
Plain Language
Operators must adopt and maintain a protocol governing how the conversational AI service responds when a user raises suicidal ideation or self-harm. The protocol must include making reasonable efforts to refer the user to crisis service providers. Unlike California SB 243, this provision does not require public posting of the protocol details, annual reporting of crisis referral metrics, or use of evidence-based methods. The obligation is to adopt the protocol and make reasonable referral efforts — the standard is 'reasonable efforts,' not guaranteed delivery.
An operator shall adopt a protocol for the conversational AI service to respond to a user prompt regarding suicidal ideation or self-harm, which shall include making reasonable efforts to provide a response which refers the user to crisis service providers.
Pending 2027-07-01
S-04.1
§ 554J.4
Plain Language
Operators must adopt and maintain protocols for their conversational AI service to respond to user prompts involving suicidal ideation or self-harm. At minimum, these protocols must include making reasonable efforts to refer users to crisis service providers — such as suicide hotlines, crisis text lines, or other appropriate crisis services. The 'includes but is not limited to' language means crisis referral is a floor, not a ceiling; additional protocol measures may be expected. This obligation applies to all users, not just minors.
An operator shall adopt protocols for the operator's conversational AI service for responding to user prompts regarding suicidal ideation or self-harm that includes but is not limited to making reasonable efforts to refer the user to crisis service providers such as a suicide hotline, crisis text line, or other appropriate crisis service.
Pending
S-04.1
§ 554J.2(1)(e)
Plain Language
Deployers must implement protocols for their public-facing chatbot to detect and respond to user prompts indicating suicidal ideation or self-harm intent. At a minimum, the protocols must make reasonable efforts to refer the user to crisis service providers such as a suicide hotline, crisis text line, or other appropriate service. This applies to all public-facing chatbots — not just AI companions or therapeutic chatbots.
e. Implement protocols for the deployer's public-facing chatbot for responding to user prompts indicating the user has suicidal ideations or the intent to cause self-harm. Protocols shall include but are not limited to making reasonable efforts to refer the user to crisis service providers such as a suicide hotline, crisis text line, or other appropriate service.
Pending
MN-01.10
§ 554J.3(2)
Plain Language
Deployers of AI companions or therapeutic chatbots must implement protocols to notify a minor user's parent, legal guardian, or legal custodian when the minor enters a prompt indicating suicidal ideation or intent to self-harm. This is a parental notification obligation specific to minors, triggered by crisis-indicating prompts — it operates alongside the general crisis referral protocol required under § 554J.2(1)(e) for all users.
2. A deployer of an AI companion or a therapeutic chatbot shall implement protocols for sending a notification to a minor user's parent, legal guardian, or legal custodian when the minor user enters a prompt indicating the minor user has suicidal ideations or the intent to cause self-harm.
Pending 2026-07-01
S-04.1
§ 554J.2(3)
Plain Language
Providers must implement reasonable detection protocols so their chatbots can identify when users express self-harm, suicidal ideation, or emotional distress. Once detected, the chatbot must refer the user to appropriate crisis services — the statute specifically lists the national suicide prevention lifeline, the Iowa crisis hotline, and emergency services as examples, but the list is non-exhaustive. This is a continuing operational requirement: the detection protocols must be active at all times the chatbot is accessible. The standard is 'reasonable protocols,' giving providers some flexibility in implementation methodology. Educational institutions and libraries are exempt from liability solely for providing access to general-use software or the internet (§ 554J.5).
3. A provider shall implement reasonable protocols to have the provider's artificial intelligence chatbot detect expressions of self-harm, suicidal ideation, or emotional distress by users. Upon detection of such expressions, the artificial intelligence chatbot shall refer the user to appropriate crisis services, including but not limited to the national suicide prevention lifeline, the Iowa crisis hotline, or emergency services.
Passed 2027-07-01
S-04.1
§ 554J.4
Plain Language
Operators must adopt and maintain protocols governing how their conversational AI service responds to user prompts involving suicidal ideation or self-harm. At a minimum, the protocol must include making reasonable efforts to refer users to crisis service providers such as a suicide hotline, crisis text line, or equivalent. The 'includes but is not limited to' language means the referral is a floor — operators may need additional protocol elements depending on context. This applies to all users, not just minors.
An operator shall adopt protocols for the operator's conversational AI service for responding to user prompts regarding suicidal ideation or self-harm that includes but is not limited to making reasonable efforts to refer the user to crisis service providers such as a suicide hotline, crisis text line, or other appropriate crisis service.
Passed 2027-07-01
S-04.1
Idaho Code § 48-2103(2)
Plain Language
Operators must adopt and maintain a protocol for the conversational AI service to respond to user prompts involving suicidal ideation. At minimum, the protocol must include making reasonable efforts to refer users to crisis service providers such as a suicide hotline or crisis text line. The 'includes but is not limited to' language signals that referral alone may be insufficient — operators should consider additional measures. Unlike CA SB 243, this provision does not require public posting of the protocol or impose reporting obligations.
An operator shall adopt a protocol for the conversational AI service to respond to user prompts regarding suicidal ideation that includes but is not limited to making reasonable efforts to provide a response to users that refers them to crisis service providers such as a suicide hotline, crisis text line, or other appropriate crisis services.
Pending 2027-01-01
S-04.1, S-04.2
Section 15(c)
Plain Language
Operators must develop, implement, and continuously maintain a crisis intervention protocol that (1) uses industry best practices to detect user expressions indicating risk of suicide, self-harm, or imminent violence, (2) upon detection, immediately interrupts the conversation and prominently displays a notification providing direct access to at least one national crisis hotline and one crisis text line, and (3) is reviewed and updated at least annually in consultation with a qualified mental health professional or public health organization. This is a continuous operating requirement — the protocol must be active at all times. The annual review with a mental health professional is a distinctive requirement not found in all comparable statutes.
(c) An operator shall develop, implement, and maintain a crisis intervention protocol. The crisis intervention protocol shall, at a minimum: (1) use industry best practices to identify user expressions indicating a risk of suicide, self-harm, or imminent violence; (2) upon detection, immediately interrupt the conversation and prominently communicate a notification that provides immediate, direct access to at least one national crisis hotline and one crisis text line service; and (3) be reviewed and updated at least annually in consultation with a qualified mental health professional or public health organization.
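
The annual-review element is the distinctive piece here, so the sketch below checks review recency rather than detection. The consultation with a qualified mental health professional or public health organization is represented only as a recorded reviewer field; all names and dates are illustrative assumptions.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class ProtocolReview:
    reviewed_on: date
    reviewer: str           # qualified mental health professional or public health organization
    changes_summary: str

def review_is_current(reviews: list[ProtocolReview], today: date) -> bool:
    """True if the crisis intervention protocol was reviewed within the past year."""
    if not reviews:
        return False
    latest = max(r.reviewed_on for r in reviews)
    return today - latest <= timedelta(days=365)

# Example: a review done roughly 14 months ago fails the at-least-annual requirement.
history = [ProtocolReview(date(2025, 1, 10), "Example Public Health Org", "Updated hotline list")]
print(review_is_current(history, date(2026, 3, 29)))   # False -> review is overdue
```
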
Pending 2027-01-01
S-04.1
Section 10
Plain Language
This is the same operative provision mapped above under S-02, viewed through the crisis response protocol lens. The operator must implement and maintain a protocol that detects user expressions of suicidal ideation or self-harm and responds by referring the user to crisis service providers such as 988. Unlike CA SB 243, this obligation applies to all users — not only minors — and there is no separate annual reporting requirement for crisis referral counts. The protocol must be active as a condition of operation.
An operator shall not operate or provide an artificial intelligence companion to a user unless the artificial intelligence companion contains a protocol to take reasonable efforts to detect and address suicidal ideation or expressions of self-harm by a user to the artificial intelligence companion. The protocol shall include, but shall not be limited to, detection of user expressions of suicidal ideation or self-harm and a notification to the user that refers them to crisis service providers, such as the 9-8-8 Suicide and Crisis Lifeline, a crisis text line, or other appropriate crisis services upon detection of the user's expressions of suicidal ideation or self-harm.
Pending 2026-07-01
S-04.1, MN-01.10
Sec. 3(e)
Plain Language
Covered entities must continuously monitor all companion AI chatbot interactions for suicidal ideation — defined as any dialogue in which a minor expresses thoughts of self-harm or suicide. When suicidal ideation is detected, the entity must present the National Suicide Prevention Lifeline contact information both to the user and to the affiliated parental account. This is an ongoing monitoring and response obligation, not a one-time configuration. Note that the statutory definition of suicidal ideation is limited to interactions between a minor and a chatbot, but the monitoring obligation in subsection (e) uses the broader phrasing 'monitor companion AI chatbot interactions for suicidal ideation' without explicitly limiting it to minors — creating some ambiguity about whether adult interactions must also be monitored.
(e) A covered entity shall monitor companion AI chatbot interactions for suicidal ideation and, in response to any such interaction, provide to the user and the parental account affiliated with such user appropriate resources by presenting contact information for the national suicide prevention lifeline.
Pending 2026-01-01
S-04.1
R.S. 28:16(C)
Plain Language
Operators must maintain active protocols for detecting and responding to user expressions of suicidal ideation, self-harm, or intent to harm others. The protocols must include referral to crisis service providers such as a suicide hotline. This is a continuous operating requirement — the protocols must be in place at all times the chatbot is available to users. Unlike CA SB 243, this bill does not require operators to publicly post the protocol details on their website, nor does it require annual reporting of crisis referral counts.
An operator of a mental health chatbot shall have protocols in place to address possible suicidal ideation, self-harm, or physical harm to others expressed by the user, including referral to a crisis service provider such as a suicide hotline.
Pending 2026-10-01
S-04.1, S-04.2
Commercial Law § 14–1330(B)(1)–(3)
Plain Language
This maps the same crisis detection and referral protocol obligation under the MN-02 crisis response taxonomy. Operators must implement a protocol that detects self-harm and suicidal ideation using evidence-based methods and immediately refers users to the Maryland Behavioral Health Crisis Response System and the 988 Lifeline. The protocol must be continuously active and documented using evidence-based measurement methods.
(B) (1) AN OPERATOR SHALL ESTABLISH AND MAINTAIN A PROTOCOL FOR PREVENTING A COMPANION CHATBOT FROM PRODUCING OR PRESENTING CONTENT CONCERNING SELF–HARM, SUICIDAL IDEATION, OR SUICIDE TO A USER WHO EXPRESSES THOUGHTS OF SELF–HARM OR SUICIDAL IDEATION TO THE COMPANION CHATBOT. (2) THE PROTOCOL REQUIRED UNDER PARAGRAPH (1) OF THIS SUBSECTION SHALL INCLUDE A NOTIFICATION TO A USER WHO EXPRESSES THOUGHTS OF SELF–HARM OR SUICIDAL IDEATION THAT REFERS THE USER TO A CRISIS SERVICE PROVIDER, INCLUDING: (I) THE MARYLAND BEHAVIORAL HEALTH CRISIS RESPONSE SYSTEM; AND (II) THE NATIONAL 9–8–8 SUICIDE AND CRISIS LIFELINE. (3) AN OPERATOR SHALL USE EVIDENCE–BASED METHODS FOR DETECTING WHEN A USER IS EXPRESSING THOUGHTS OF SELF–HARM OR SUICIDAL IDEATION TO A COMPANION CHATBOT.
Failed 2026-06-15
S-04.1
10 MRSA § 1500-SS(1)
Plain Language
Deployers must implement and maintain systems that can detect when a user indicates intent to harm themselves or another person, and must promptly respond to, report, and mitigate such situations. The deployer's response must prioritize the user's safety and well-being over the deployer's commercial or other interests. This is a continuous operating requirement — the systems must be maintained and reasonably effective at all times, not merely documented. Note that unlike CA SB 243, this provision applies to all users, not just minors, and covers both self-harm and intent to harm others.
1. Emergency situations; detection and response. A deployer shall implement and maintain reasonably effective systems to detect, promptly respond to, report and mitigate emergency situations in a manner that prioritizes a user's safety and well-being over the deployer's other interests.
Pending 2026-08-01
S-04.1
Minn. Stat. § 604.115, subd. 4(a)-(b)
Plain Language
Companion chatbot proprietors must use industry-standard technology and known techniques to both (1) prevent the chatbot from promoting, causing, or aiding self-harm, and (2) detect when a user is expressing thoughts of self-harm. Upon detection that the chatbot has promoted self-harm or that a user is expressing self-harm thoughts, the proprietor must immediately suspend the user's access to the companion chatbot for at least 72 hours and prominently display contact information for a suicide crisis organization. The standard of care is a prudent good-faith effort using existing technology — not perfection. However, liability attaches in two ways: (a) general failure to comply with the prevention and detection obligations, and (b) irrespective of compliance, when the proprietor has actual knowledge of self-harm promotion or user self-harm ideation and still fails to suspend access and display crisis information. Liability cannot be waived or disclaimed.
(a) A proprietor of a companion chatbot must make a prudent and good faith effort consistent with industry standards and use existing technology, available resources, and known, established, or readily attainable techniques to prevent the companion chatbot from promoting, causing, or aiding self-harm, and determine whether a covered user is expressing thoughts of self-harm. Upon determining that a companion chatbot has promoted, caused, or aided self-harm, or that a covered user is expressing thoughts of self-harm, the proprietor must prohibit continued use of the companion chatbot for a period of at least 72 hours and prominently display contact information for a suicide crisis organization to the covered user. (b) If a proprietor of a companion chatbot fails to comply with this section, the proprietor is liable to users who inflict self-harm, in whole or in part, as a result of the proprietor's companion chatbot promoting, causing, or aiding the user to inflict self-harm. Irrespective of the proprietor's compliance with this subdivision, a proprietor is liable for general and special damages to covered users who inflict self-harm, in whole or in part, when the proprietor: (1) has actual knowledge that: (i) the companion chatbot is promoting, causing, or aiding self-harm; or (ii) a covered user is expressing thoughts of self-harm; (2) fails to prohibit continued use of the companion chatbot for a period of at least 72 hours; and (3) fails to prominently display to the user a means to contact a suicide crisis organization. A proprietor of a companion chatbot may not waive or disclaim liability under this subdivision.
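
A minimal sketch of the suspend-and-display step, assuming a hypothetical in-memory access store. The 72-hour floor and the prominent display of a crisis contact come from the quoted provision; the store, function names, and choice of crisis organization are illustrative assumptions.

```python
from datetime import datetime, timedelta, timezone

SUSPENSION_FLOOR = timedelta(hours=72)          # minimum suspension period in the quoted text

_suspended_until: dict[str, datetime] = {}      # hypothetical in-memory access store

def suspend_and_refer(user_id: str, now: datetime | None = None) -> dict:
    """Suspend companion chatbot access for at least 72 hours and surface crisis contact info.

    Triggered when the chatbot is found to have promoted self-harm or when a covered
    user is expressing thoughts of self-harm.
    """
    now = now or datetime.now(timezone.utc)
    _suspended_until[user_id] = now + SUSPENSION_FLOOR
    return {
        "access_restored_no_earlier_than": _suspended_until[user_id],
        # Prominently displayed contact for a suicide crisis organization (choice is illustrative).
        "crisis_contact": "988 Suicide and Crisis Lifeline: call or text 988",
    }

def access_allowed(user_id: str, now: datetime | None = None) -> bool:
    now = now or datetime.now(timezone.utc)
    until = _suspended_until.get(user_id)
    return until is None or now >= until
```
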
Pending 2026-01-01
S-04.1
G.S. 170-3(b)(1)
Plain Language
Covered platforms must implement and maintain reasonably effective systems to detect, promptly respond to, report, and mitigate emergency situations — defined as when a user indicates intent to harm themselves or others. The platform must prioritize user safety and well-being over the platform's other interests (e.g., engagement, retention, revenue). This is a continuous operational requirement covering detection, response, reporting, and mitigation. Unlike some other state chatbot statutes that specify crisis referral services (e.g., 988 Lifeline), this statute leaves the specific response mechanism to the platform's discretion as long as it is 'reasonably effective.'
(1) Duty of loyalty in emergency situations. — A covered platform shall implement and maintain reasonably effective systems to detect, promptly respond to, report, and mitigate emergency situations in a manner that prioritizes the safety and well-being of users over the platform's other interests.
Pending 2027-01-01
S-04.1
G.S. § 170-3(b)(1)
Plain Language
Covered platforms must implement and maintain reasonably effective systems to detect when a user indicates intent to harm themselves or others, and must promptly respond to, report, and mitigate such emergency situations. User safety and well-being must be prioritized over the platform's commercial or engagement interests. The trigger is user-indicated intent to self-harm or harm others — a broader framing than suicidal ideation alone, as it also covers intent to harm others.
Duty of loyalty in emergency situations. – A covered platform shall implement and maintain reasonably effective systems to detect, promptly respond to, report, and mitigate emergency situations in a manner that prioritizes the safety and well-being of users over the platform's other interests.
Failed 2027-07-01
S-04.1
Sec. 5
Plain Language
Operators must adopt and maintain a protocol for responding to user expressions of suicidal ideation or self-harm. The protocol must, at minimum, make reasonable efforts to refer users to crisis service providers — including suicide hotlines, crisis text lines, or other appropriate crisis services. The 'includes, but is not limited to' framing means crisis referral is a floor, not a ceiling — the protocol should also address detection and prevention. This obligation applies to all users, not just minors. Unlike California SB 243, this statute does not require publishing the protocol on the operator's website or reporting crisis referral metrics.
An operator shall adopt a protocol for the conversational artificial intelligence service to respond to user prompts regarding suicidal ideation or self-harm that includes, but is not limited to, making reasonable efforts to provide a response to the user that refers them to crisis service providers such as a suicide hotline, crisis text line, or other appropriate crisis services.
Pending 2027-01-01
S-04.1, S-04.2
Section 4(C)(1)-(2)
Plain Language
Operators must develop, implement, and maintain a crisis intervention protocol for all users — not just minors. The protocol must use industry best practices to detect expressions of suicide risk, self-harm, or imminent violence, and upon detection must immediately interrupt the conversation and prominently display a notification providing direct access to at least three crisis services: a national crisis hotline, the New Mexico crisis and access line, and a crisis text line. The protocol must be reviewed and updated at least annually in consultation with a qualified mental health professional or public health organization. This is a continuous operating requirement — the protocol must be active at all times as a condition of operation. Unlike CA SB 243, this bill also covers imminent threats of violence to others, not just self-harm and suicidal ideation.
An operator shall, for all users, develop, implement and maintain a crisis intervention protocol. The protocol shall: (1) use industry best practices to identify user expressions indicating a risk of suicide, self-harm or imminent violence and, upon detection, immediately interrupt the conversation and prominently communicate a notification that provides immediate, direct access to at least one national crisis hotline, the New Mexico crisis and access line and one crisis text line service; and (2) be reviewed and updated at least annually in consultation with a qualified mental health professional or public health organization.
Enacted 2025-11-05
S-04.1
General Business Law § 1701
Plain Language
Operators must build a crisis response protocol into the AI companion, and keep it maintained, before the companion may be offered to users. The protocol must include, at minimum: (1) detection of user expressions of suicidal ideation or self-harm, and (2) upon detection, a notification to the user directing them to the 988 Suicide & Crisis Lifeline, a crisis text line, or other appropriate crisis services. This is a product-design obligation — the protocol must exist in the system itself, not just as an operator policy. The obligation is unconditional and applies to all operators of AI companions regardless of user demographics.
It shall be unlawful for any operator to operate for or provide an AI companion to a user unless such AI companion contains a protocol to take reasonable efforts for detecting and addressing suicidal ideation or expressions of self-harm expressed by a user to the AI companion, that includes but is not limited to, detection of user expressions of suicidal ideation or self-harm, and a notification to the user that refers them to crisis service providers such as the 9-8-8 suicide prevention and behavioral health crisis hotline under section 36.03 of the mental hygiene law, a crisis text line, or other appropriate crisis services upon detection of such user's expressions of suicidal ideation or self-harm.
Pending 2026-11-01
S-04.1
Section 4 (75A Okla. Stat. § 12)
Plain Language
Deployers must adopt a protocol governing how their social AI companion responds when a user's prompts indicate suicidal ideation or threats of self-harm. At a minimum, the protocol must include making reasonable efforts to refer the user to crisis service providers — such as a suicide hotline, crisis text line, or equivalent crisis services. The 'includes, but is not limited to' language means the crisis referral is a floor, not a ceiling — additional protective measures may be required. Note that unlike California SB 243, this statute does not require public publication of the protocol on the deployer's website, nor does it require annual reporting of crisis referral metrics. The obligation applies to all users, not just minors.
A deployer shall adopt a protocol for a social AI companion to respond to user prompts indicating suicidal ideation or threats of self-harm that includes, but is not limited to, making reasonable efforts to provide a response to the user that refers them to crisis service providers such as a suicide hotline, crisis text line, or other appropriate crisis services.
Pending 2026-11-01
S-04.1
75A O.S. § 702(A)
Plain Language
Deployers must implement and maintain reasonably effective systems that detect when a user indicates intent to harm themselves or others, and must promptly respond to, report, and mitigate such situations. The statute explicitly requires that user safety and well-being be prioritized over the deployer's other interests (including commercial interests). This is a continuing operational obligation — the systems must be maintained, not merely installed. The obligation covers detection, response, reporting, and mitigation as four distinct functions.
A. Deployers shall implement and maintain reasonably effective systems to detect, promptly respond to, report, and mitigate emergency situations in a manner that prioritizes the safety and well-being of users over the deployer's other interests.
Passed 2027-07-01
S-04.1
75A Okla. Stat. § 302(E)
Plain Language
Operators must adopt a protocol for responding to user prompts involving suicidal ideation or self-harm. The protocol must include making reasonable efforts to refer the user to crisis service providers. Unlike California SB 243, this obligation applies to all users — not just minors — and does not require the protocol to be published on the operator's website. The statute uses a 'reasonable efforts' standard rather than an absolute referral requirement, providing some flexibility in how the protocol is implemented. The statute does not specify particular crisis services (e.g., 988 Lifeline) that must be referenced.
E. An operator shall adopt a protocol for the conversational AI service to respond to a user prompt regarding suicidal ideation or self-harm, which shall include making reasonable efforts to provide a response that refers the user to crisis service providers.
Pending 2026-01-30
S-04.1
Section 3(a)(1),(3) and Section 3(b)
Plain Language
Operators must implement and maintain crisis detection and referral protocols as a condition of operating an AI companion. When the system identifies suicidal ideation or self-harm, it must refer the user to crisis resources including the 988 Suicide and Crisis Lifeline, nearby behavioral health crisis centers, or other appropriate services. Unlike CA SB 243, this statute does not impose annual reporting on crisis referral counts to any state agency — the obligation is limited to maintaining the protocol and providing referrals. This mapping captures the crisis referral dimension of Section 3; the output restriction dimension is mapped separately under S-02.
(a) Certain protocols required.--It shall be unlawful for an operator to provide an AI companion to a user unless the AI companion contains protocols that: (1) identify suicidal ideation or expressions of self-harm; (2) decline to assist a user with a suicide attempt, methods or improvement of methods; and (3) refer the user to a crisis center if suicidal ideation or expressions of self-harm are recognized. (b) Referral to crisis center.--The referral required under subsection (a)(3) shall include: (1) crisis service contact information, including the 988 Suicide and Crisis Lifeline, or a subsequent iteration; (2) the closest behavioral health crisis centers to the user; or (3) other appropriate crisis services.
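
A minimal sketch of assembling the referral contents listed in subsection (b), assuming a hypothetical directory of behavioral health crisis centers with coordinates. The nearest-center ranking uses a rough straight-line distance purely for illustration; it is not a production geolocation approach, and the center names are made up.

```python
import math

# Hypothetical directory of behavioral health crisis centers: (name, latitude, longitude).
CRISIS_CENTERS = [
    ("Example Crisis Center A", 40.27, -76.88),
    ("Example Crisis Center B", 39.95, -75.17),
]

def _approx_distance(lat1, lon1, lat2, lon2):
    # Rough straight-line distance, adequate only for ranking nearby options.
    return math.hypot(lat1 - lat2, (lon1 - lon2) * math.cos(math.radians(lat1)))

def build_referral(user_lat: float, user_lon: float, n_centers: int = 2) -> dict:
    """Assemble the referral: 988 Lifeline contact plus the closest crisis centers."""
    nearest = sorted(
        CRISIS_CENTERS,
        key=lambda c: _approx_distance(user_lat, user_lon, c[1], c[2]),
    )[:n_centers]
    return {
        "lifeline": "988 Suicide and Crisis Lifeline (call or text 988)",
        "closest_crisis_centers": [name for name, _, _ in nearest],
    }

print(build_referral(40.0, -75.2))
```
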
Pending 2027-01-01
S-04.1
R.I. Gen. Laws § 6-63-2
Plain Language
This maps the crisis referral component of § 6-63-2. Operators must maintain a protocol that includes referring users to crisis service providers (suicide hotline, crisis text line, or equivalent) when users express suicidal ideation, self-harm, physical harm to others, or financial harm to others. The crisis referral notification must be active as a condition of lawful operation. Unlike CA SB 243, this provision applies to all users — not only minors — and extends to harm-to-others scenarios beyond just self-harm and suicidal ideation.
It shall be unlawful for any operator to operate or provide an AI companion to a user unless such AI companion contains a protocol for addressing: (1) Possible suicidal ideation or self-harm expressed by a user to the AI companion; (2) Possible physical harm to others expressed by a user to the AI companion; and (3) Possible financial harm to others expressed by the user to the AI companion that includes, but is not limited to, a notification to the user that refers them to crisis service providers such as a suicide hotline, crisis text line, or other appropriate crisis services.
Pending
S-04.1
S.C. Code § 39-81-40(B)(3)
Plain Language
Covered entities must implement reasonable systems to detect when any user expresses suicidal thoughts, intent to self-harm, or signs of an acute mental health crisis. Upon detection, the entity must promptly provide a clear and prominent crisis message including crisis services information. This is a continuous operating requirement applicable to all users, not just minors. The crisis detection also triggers the parental notification obligation under § 39-81-30(E) for minor users whose parents are reachable.
(B) A covered entity shall implement reasonable systems and processes to: (3) identify when a user is expressing suicidal thoughts, intent to self-harm, or showing signs of an acute mental health crisis and shall promptly provide a clear and prominent crisis message, including crisis services information to any such user.
Pending
MN-01.10
S.C. Code § 39-81-30(E)
Plain Language
When a crisis message is triggered — i.e., the chatbot detects a user expressing suicidal thoughts, intent to self-harm, or signs of an acute mental health crisis — the covered entity must immediately notify the parent if it has contact information or a linked parental account. This applies only when a parental communication channel was established under the parental consent process. The immediacy requirement means this notification should be sent as close to real-time as possible, not batched or delayed.
(E) If the covered entity has a way to reach the parent through a parental account or contact information provided under subsection (C) or (D), then the covered entity shall notify the parent immediately in the case of any incident provoking a crisis message, pursuant to Section 39-81-40(B)(3).
Pending
S-04.1
S.C. Code § 39-81-40(B)(3)
Plain Language
Covered entities must implement reasonable systems to detect when any user — not just minors — expresses suicidal thoughts, intent to self-harm, or shows signs of an acute mental health crisis. Upon detection, the entity must promptly provide a clear and prominent crisis message including crisis services information. This is a continuous operating requirement applicable to all users of the chatbot, and the response must be immediate. The statute does not specify which crisis services must be referenced, but the obligation requires that the information be actionable.
(B) A covered entity shall implement reasonable systems and processes to: (3) identify when a user is expressing suicidal thoughts, intent to self-harm, or showing signs of an acute mental health crisis and shall promptly provide a clear and prominent crisis message, including crisis services information to any such user.
Pending 2027-01-01
S-04.1
§ 59.1-617
Plain Language
Operators may not operate or provide a companion chatbot to any user (not just minors) unless the chatbot maintains an active protocol for detecting and responding to expressions of suicidal ideation or self-harm. Upon detection, the chatbot must refer the user to crisis service providers such as the 988 Suicide and Crisis Lifeline, a crisis text line, or equivalent services. This is a continuous operating prerequisite — the protocol must be active at all times as a condition of lawful operation. The standard is 'reasonable efforts,' providing some flexibility in implementation.
It is unlawful for any operator to operate or provide a companion chatbot to a user unless such companion chatbot contains a protocol to take reasonable efforts for detecting and addressing expressions of suicidal ideation or self-harm by a user to the companion chatbot. This protocol shall include detection of user expressions of suicidal ideation or self-harm and a notification to the user that refers the user to crisis service providers such as the 9-8-8 suicide prevention and behavioral health crisis hotline, a crisis text line, or other appropriate crisis services upon detection of such user's expressions of suicidal ideation or self-harm.
Pending 2026-07-01
S-04.1
Va. Code § 59.1-615(3)
Plain Language
Covered entities must implement reasonable systems and processes to detect when any user — not just minors — expresses suicidal thoughts, intent to self-harm, or signs of an acute mental health crisis. Upon detection, the system must promptly provide a clear and prominent crisis message including crisis services information. This is a continuous operating requirement covering detection, response, and referral. The obligation applies to all users, not only minors, despite the bill's title referencing minors.
A covered entity shall implement reasonable systems and processes to: 3. Identify when a user is expressing suicidal thoughts, expressing intent to self-harm, or showing signs of an acute mental health crisis and promptly provide a clear and prominent crisis message, including crisis services information, to any such user.
Passed 2027-01-01
S-04.1, S-04.2
Sec. 5(1)-(2)
Plain Language
Operators may not operate an AI companion chatbot at all unless they maintain and implement a protocol for detecting and responding to suicidal ideation and self-harm. The protocol must include three elements: (1) reasonable methods for identifying user expressions of suicidal ideation or self-harm, expressly including eating disorders; (2) automated or human-mediated referral to crisis resources such as a suicide hotline or crisis text line; and (3) reasonable measures to prevent the chatbot from generating content that encourages or describes how to commit self-harm. This is a continuous operating prerequisite — the protocol must remain active as a condition of offering the product, not merely documented at launch. Notably, the self-harm definition encompasses intentional self-injury regardless of suicidal intent, and the protocol must cover eating disorders specifically.
(1) An operator may not make available or deploy an AI companion chatbot unless it maintains and implements a protocol for detecting and addressing suicidal ideation or expressions of self-harm by users. (2) The protocol must: (a) Include reasonable methods for identifying expressions of suicidal ideation or self-harm, including eating disorders; (b) Provide automated or human-mediated responses that refer users to appropriate crisis resources, including a suicide hotline or crisis text line; and (c) Implement reasonable measures to prevent the generation of content encouraging or describing how to commit self-harm.
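
A minimal sketch of checking a protocol configuration against the three required elements, assuming an operator represents its protocol as a simple configuration object. Field names, category labels, and the gap messages are illustrative assumptions.

```python
from dataclasses import dataclass, field

REQUIRED_CATEGORIES = {"suicidal_ideation", "self_harm", "eating_disorder"}

@dataclass
class CrisisProtocolConfig:
    # (a) detection must cover suicidal ideation and self-harm, expressly including eating disorders
    detection_categories: set = field(default_factory=set)
    # (b) automated or human-mediated referral to crisis resources
    referral_resources: list = field(default_factory=list)
    # (c) safeguard against generating content encouraging or describing how to commit self-harm
    blocks_self_harm_method_content: bool = False

def protocol_gaps(config: CrisisProtocolConfig) -> list[str]:
    """Return the statutory elements this protocol configuration still misses."""
    gaps = []
    missing = REQUIRED_CATEGORIES - config.detection_categories
    if missing:
        gaps.append(f"detection does not cover: {sorted(missing)}")
    if not config.referral_resources:
        gaps.append("no crisis referral resources configured (e.g., suicide hotline or crisis text line)")
    if not config.blocks_self_harm_method_content:
        gaps.append("no safeguard against content describing how to commit self-harm")
    return gaps

print(protocol_gaps(CrisisProtocolConfig(
    detection_categories={"suicidal_ideation", "self_harm"},
    referral_resources=["988 Suicide and Crisis Lifeline"],
)))   # reports the missing eating-disorder coverage and the missing output safeguard
```
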
Pending 2027-01-01
S-04.1, S-04.2
Sec. 5(1)-(2)
Plain Language
Operators may not make an AI companion chatbot available at all unless they maintain and implement a crisis detection and response protocol. The protocol must include reasonable methods for identifying expressions of suicidal ideation or self-harm (explicitly including eating disorders), provide automated or human-mediated referrals to crisis resources such as suicide hotlines or crisis text lines, and implement reasonable measures to prevent the chatbot from generating content that encourages or describes how to commit self-harm. This is a continuous operating prerequisite — the protocol must remain active and implemented as a condition of deployment, not merely documented. Notably, the self-harm detection requirement extends to eating disorders, which is broader than some comparable state laws.
(1) An operator may not make available or deploy an AI companion chatbot unless it maintains and implements a protocol for detecting and addressing suicidal ideation or expressions of self-harm by users. (2) The protocol must: (a) Include reasonable methods for identifying expressions of suicidal ideation or self-harm, including eating disorders; (b) Provide automated or human-mediated responses that refer users to appropriate crisis resources, including a suicide hotline or crisis text line; and (c) Implement reasonable measures to prevent the generation of content encouraging or describing how to commit self-harm.