MN-02
Minor Protection
AI Crisis Response Protocols
Applies to: Developer, Deployer
Sector: Consumer Technology, Mental Health, Healthcare, Chatbot
Bills — Enacted: 1 unique bill
Bills — Proposed: 23
Last Updated: 2026-03-29
Core Obligation

Operators of conversational AI, companion chatbots, and mental health AI systems must adopt, implement, and maintain protocols for detecting and responding to user expressions of suicidal ideation, self-harm, or intent to harm others. Upon detection, the system must refer users to crisis service providers such as suicide hotlines or crisis text lines using evidence-based methods. Operators must report crisis referral metrics annually to the applicable enforcement authority.

Sub-Obligations (4)

MN-02.1: Crisis Detection and Referral Protocol (1 enacted, 22 proposed)
Operators must implement and maintain a defined protocol for AI systems to detect user prompts or expressions involving suicidal ideation, self-harm, or intent to harm others, and to respond by referring users to crisis service providers such as the 988 Suicide and Crisis Lifeline, Crisis Text Line, or equivalent local services. This is a continuous operating requirement — the protocol must be active at all times, not merely documented. Response must be immediate and must not be conditioned on platform engagement or commercial interests. (A minimal implementation sketch follows this table.)

MN-02.2: Evidence-Based Crisis Response Methods (0 enacted, 7 proposed)
Crisis detection and response protocols must use evidence-based measurement methods and must prioritize user safety over platform engagement or commercial interests. Operators must adopt and maintain documented protocols specifically governing AI responses to user expressions of suicidal ideation, self-harm, or intent to harm others, including evidence-based methods for tracking incidents, referral counts, and protocol effectiveness. Documentation must be retained and available to regulators upon request.

MN-02.3: Annual Crisis Protocol Reporting (0 enacted, 0 proposed)
Operators must annually report to the applicable enforcement authority (e.g., attorney general) quantitative crisis referral counts and qualitative protocol descriptions related to suicidal ideation, self-harm detection, and harm-prevention measures. Reports must disclose the measurement methodology used and any protocol updates made during the reporting period.

MN-02.4: Minor-Specific Crisis Notification (0 enacted, 5 proposed)
When a minor account holder expresses suicidal ideation or intent to self-harm, operators must notify the affiliated parent or guardian account in addition to providing crisis referral information to the user.
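Across the bills tracked below, these sub-obligations reduce to one operational pattern: always-on detection, immediate unconditional referral, incident logging, and periodic reporting. A minimal sketch of that pattern, with hypothetical names (classify_risk, CrisisProtocol) standing in for a validated risk classifier and production storage:

```python
# Minimal sketch of the detect-refer-log-report pipeline implied by
# MN-02.1 through MN-02.3. All identifiers are hypothetical; a real
# system would use a validated, evidence-based classifier, not the stub.
from dataclasses import dataclass, field
from datetime import datetime, timezone

CATEGORIES = ("suicidal_ideation", "self_harm", "harm_to_others")

CRISIS_RESOURCES = (
    "988 Suicide and Crisis Lifeline: call or text 988",
    "Crisis Text Line: text HOME to 741741",
)

def classify_risk(text: str) -> str | None:
    """Placeholder for a validated, evidence-based risk classifier."""
    return None  # swap in a real model; returns one of CATEGORIES or None

@dataclass
class CrisisIncident:
    timestamp: datetime
    category: str

@dataclass
class CrisisProtocol:
    incidents: list[CrisisIncident] = field(default_factory=list)

    def handle_turn(self, user_text: str) -> str | None:
        """Run on every user turn (MN-02.1: always-on detection)."""
        category = classify_risk(user_text)
        if category is None:
            return None
        # MN-02.2: log the incident so effectiveness can be measured.
        self.incidents.append(
            CrisisIncident(datetime.now(timezone.utc), category))
        # MN-02.1: immediate, unconditional referral.
        return "Help is available right now:\n" + "\n".join(CRISIS_RESOURCES)

    def annual_report(self, year: int) -> dict:
        """MN-02.3: quantitative referral counts for a reporting period."""
        in_year = [i for i in self.incidents if i.timestamp.year == year]
        return {
            "year": year,
            "total_referrals": len(in_year),
            "by_category": {c: sum(1 for i in in_year if i.category == c)
                            for c in CATEGORIES},
        }
```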
Bills That Map to This Requirement (24 bills)
Pending 2026-10-01
MN-02.1
Section 2(e)
Plain Language
Covered entities must implement and continuously maintain systems that can detect, promptly respond to, report, and mitigate emergency situations — defined as any situation where a user indicates intent to harm themselves or others. The statute requires that user safety and well-being be prioritized over the covered entity's other interests (e.g., engagement, revenue). Unlike some companion chatbot statutes, this obligation applies to all users, not only minors. The statute does not specify particular crisis referral services or protocols, leaving the 'reasonably effective' standard as the measure of compliance.
(e) Each covered entity shall implement and maintain reasonably effective systems to detect, promptly respond to, report, and mitigate emergency situations in a manner that prioritizes the safety and well-being of users over the covered entity's other interests.
Pending 2027-10-01
MN-02.1
A.R.S. § 18-802(G)
Plain Language
Operators must adopt a protocol for responding to user prompts involving suicidal ideation or self-harm. The protocol must include making reasonable efforts to refer users to crisis services such as suicide hotlines or crisis text lines. This obligation applies to all users (not just minors), and the standard is 'reasonable efforts' rather than an absolute prevention mandate. This statute does not separately require publication of the protocol on the operator's website or annual reporting of crisis referral metrics to a state authority.
G. Each operator shall adopt a protocol for the conversational AI service to respond to a user prompt regarding suicidal ideation or self-harm, including making reasonable efforts to provide a response to the user that refers the user to crisis service providers such as a suicide hotline, crisis text line or other appropriate crisis service.
Pending 2027-01-01
MN-02.1
Bus. & Prof. Code § 22587.2(a)
Plain Language
When a companion chatbot detects a credible crisis expression — which must be identified through contextual analysis rather than simple keyword matching — it must take four immediate steps without terminating the conversation: (1) acknowledge the user's distress without judgment, (2) encourage the user to seek human support, (3) provide 988 Suicide and Crisis Lifeline contact information across all modalities (call, text, chat), and (4) warn the user that a temporary pause may be triggered. This is the first step of a graduated response — the chatbot must not immediately shut down the interaction, but instead provide supportive de-escalation and crisis referral.
(a) Notwithstanding any law, if a companion chatbot detects a credible crisis expression, the companion chatbot shall do all of the following without immediately terminating the interaction with the user: (1) Acknowledge the user's distress in nonjudgmental language. (2) Encourage the user to seek immediate human support. (3) Provide contact information for the 988 Suicide and Crisis Lifeline, including call, text, and chat options. (4) Inform the user that a temporary pause may occur to allow space for deescalation and human connection.
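For illustration, a minimal sketch of the four statutory steps; the message wording and the graduated_response name are illustrative, and contextual detection of a credible crisis expression is assumed to happen upstream:

```python
# Sketch of the four steps in Bus. & Prof. Code § 22587.2(a), executed
# without terminating the interaction. Message text is illustrative,
# not statutory.
LIFELINE_988 = ("988 Suicide and Crisis Lifeline: call 988, text 988, "
                "or chat online.")

def graduated_response() -> list[str]:
    """Messages returned to the user, in the statutory order."""
    return [
        "I'm sorry you're going through this; there is no judgment here.",  # (1) acknowledge distress
        "Please consider reaching out to someone you trust for support.",   # (2) encourage human support
        LIFELINE_988,                                                       # (3) 988 across call/text/chat
        "A temporary pause may occur to allow space for de-escalation.",    # (4) warn of a possible pause
    ]
```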
Pending 2027-01-01
MN-02.1, MN-02.2
Bus. & Prof. Code § 22587.2(b)
Plain Language
If, after the initial graduated response under § 22587.2(a), the user reaffirms, escalates, or makes a new credible crisis expression, the chatbot must initiate a mandatory 20-minute crisis interruption pause. During the pause, the chatbot stops generating conversational responses entirely and displays a prescribed message explaining that the pause is designed to interrupt rumination and reduce emotional intensity, encouraging the user to contact a trained crisis counselor. The 988 Suicide and Crisis Lifeline contact options must be prominently displayed, with immediate access links where technically feasible. This is the escalation step in the graduated response — it is triggered only after the initial supportive warning has already been provided.
(b) Notwithstanding any law, if a companion chatbot detects that a user is reaffirming or escalating the credible crisis expression or detects a subsequent credible crisis expression after the companion chatbot has complied with subdivision (a), the companion chatbot shall initiate a crisis interruption pause of 20 minutes.
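A sketch of the escalation path as a small state machine; class, method, and return-value names are illustrative, under the assumption that the subdivision (a) response runs on the first credible crisis expression:

```python
# Sketch of § 22587.2(b): a reaffirmed, escalated, or new credible crisis
# expression after the subdivision (a) response triggers a mandatory
# 20-minute interruption pause during which no responses are generated.
from datetime import datetime, timedelta, timezone

PAUSE_DURATION = timedelta(minutes=20)

class CrisisEscalationState:
    def __init__(self) -> None:
        self.initial_response_given = False
        self.paused_until: datetime | None = None

    def on_credible_crisis(self) -> str:
        now = datetime.now(timezone.utc)
        if not self.initial_response_given:
            self.initial_response_given = True
            return "graduated_response"  # run the § 22587.2(a) steps first
        self.paused_until = now + PAUSE_DURATION
        return "crisis_pause"            # stop generating; show pause notice

    def may_generate(self) -> bool:
        """False while the 20-minute pause is in effect."""
        return (self.paused_until is None
                or datetime.now(timezone.utc) >= self.paused_until)
```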
Pending 2027-07-01
MN-02.1, MN-02.2, MN-02.4
Bus. & Prof. Code § 22612(d)(1)
Plain Language
Operators must implement a documented crisis response protocol specifically addressing suicidal ideation, suicide, and self-harm content directed at children. The protocol must include: (1) timely in-service support and clear referral to external crisis resources when a child expresses suicidal ideation or intent to self-harm; (2) default parental notification within 24 hours when a linked child account shows a substantial risk of covered harm; and (3) age-appropriate disclosures to children that their parent may be notified when the chatbot detects potential safety risks. The parental notification obligation applies only when the child's account is connected to a parent's account.
(1) A documented crisis response protocol to mitigate any material risk that the companion chatbot will generate a statement that promotes suicidal ideation, suicide, or self-harm content to a child, including, but not limited to, all of the following: (A) Timely in-service support and clear referral to appropriate external crisis resources if the operator determines a child has expressed suicidal ideation or intent to self-harm. (B) If a child's account is connected to a parent's account, default notifications to the parent within 24 hours if the child's account shows a substantial risk that the child may suffer a covered harm. (C) Clear and age-appropriate disclosures to child users whose accounts are linked to a parent's account that inform them that a parent may be notified if the companion chatbot detects content or behavior that indicates potential risks to the child's safety or well-being.
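A sketch of the 24-hour default parental notification, assuming a hypothetical account-link lookup and delivery callable; the duty attaches only when the child's account is connected to a parent's account:

```python
# Sketch of § 22612(d)(1)(B): notify the linked parent within 24 hours
# of a substantial-risk detection. parent_links and send_notification
# are hypothetical stand-ins for an account store and a notifier.
from datetime import datetime, timedelta

NOTIFY_WINDOW = timedelta(hours=24)

def on_substantial_risk(child_id: str, detected_at: datetime,
                        parent_links: dict, send_notification) -> None:
    parent_id = parent_links.get(child_id)  # linked parent account, if any
    if parent_id is None:
        return                              # unlinked account: no notification duty
    send_notification(
        parent_id,
        body="A linked child account showed a substantial risk of a covered harm.",
        deliver_by=detected_at + NOTIFY_WINDOW,  # within 24 hours of detection
    )
```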
Pending 2027-01-01
MN-02.1
C.R.S. § 6-1-1708(3)
Plain Language
Operators must implement a protocol for their conversational AI to respond to user prompts about suicidal ideation or self-harm. The protocol must include making reasonable efforts to refer the user to a crisis service provider — such as a suicide hotline or crisis text line — but expressly excludes referring to law enforcement. This applies to all users, not just minors. The standard is 'reasonable efforts' — not an absolute guarantee of referral. The law enforcement exclusion is notable and distinguishes this from some other jurisdictions' crisis response requirements.
On and after January 1, 2027, an operator shall implement a protocol for a conversational artificial intelligence service to respond to a user prompt regarding suicidal ideation or self-harm, which protocol must include making reasonable efforts to provide a response that refers the user to a crisis service provider such as a suicide hotline, a crisis text line, or another appropriate crisis service, but not including a law enforcement agency.
Failed 2026-07-01
MN-02.4
Fla. Stat. § 501.9984(1)(a)5.
Plain Language
When a minor account holder expresses to the companion chatbot a desire or intent to self-harm or harm others, the platform must send a timely notification to the consenting parent or guardian. This is a parental notification obligation distinct from crisis referral — it requires alerting the parent, not just referring the minor to crisis resources. The statute does not define 'timely' with a specific timeframe, leaving some implementation discretion.
Receive timely notifications if the account holder expresses to the companion chatbot a desire or an intent to engage in harm to self or others.
Passed 2025-07-01
MN-02.1
O.C.G.A. § 39-5-6(h)
Plain Language
Operators must adopt and maintain a protocol governing how the conversational AI service responds when a user expresses suicidal ideation or self-harm. The protocol must include making reasonable efforts to refer the user to crisis service providers (e.g., suicide hotlines, crisis text lines). This applies to all users, not just minors. The standard is 'reasonable efforts' to provide a referral, not absolute assurance of delivery. Unlike CA SB 243, there is no explicit requirement to publish the protocol on the operator's website or to report crisis referral metrics.
An operator shall adopt a protocol for the conversational AI service to respond to a user prompt regarding suicidal ideation or self-harm, which shall include making reasonable efforts to provide a response which refers the user to crisis service providers.
Pending 2027-07-01
MN-02.1
§ 554J.4
Plain Language
Operators must adopt and maintain protocols governing how the conversational AI service responds when any user (not limited to minors) expresses suicidal ideation or self-harm. At a minimum, the protocol must include making reasonable efforts to refer the user to crisis services such as a suicide hotline or crisis text line. The "includes but is not limited to" language means crisis referral is a floor, not a ceiling — operators should consider additional response measures. Unlike CA SB 243, this bill does not require the protocol to be published on the operator's website or impose annual reporting obligations.
An operator shall adopt protocols for the operator's conversational AI service for responding to user prompts regarding suicidal ideation or self-harm that includes but is not limited to making reasonable efforts to refer the user to crisis service providers such as a suicide hotline, crisis text line, or other appropriate crisis service.
Pending 2025-07-01
MN-02.1
§ 554J.2(1)(e)
Plain Language
Deployers must implement protocols for their public-facing chatbots to detect and respond to user prompts indicating suicidal ideation or intent to self-harm. At a minimum, these protocols must include making reasonable efforts to refer the user to crisis service providers such as a suicide hotline or crisis text line. This applies to all public-facing chatbots — not only AI companions or therapeutic chatbots — and is a continuing operational requirement.
e. Implement protocols for the deployer's public-facing chatbot for responding to user prompts indicating the user has suicidal ideations or the intent to cause self-harm. Protocols shall include but are not limited to making reasonable efforts to refer the user to crisis service providers such as a suicide hotline, crisis text line, or other appropriate service.
Pending 2025-07-01
MN-02.4
§ 554J.3(2)
Plain Language
Deployers of AI companions and therapeutic chatbots must implement protocols to notify a minor user's parent, legal guardian, or legal custodian when the minor enters a prompt indicating suicidal ideation or intent to self-harm. This is a minor-specific parental notification obligation that operates in addition to the general crisis referral protocol in § 554J.2(1)(e). The deployer must have a mechanism to identify both the minor's status and their parent or guardian contact information to satisfy this obligation.
2. A deployer of an AI companion or a therapeutic chatbot shall implement protocols for sending a notification to a minor user's parent, legal guardian, or legal custodian when the minor user enters a prompt indicating the minor user has suicidal ideations or the intent to cause self-harm.
Pending 2026-07-01
MN-02.1
§ 554J.2(3)
Plain Language
Providers must implement reasonable protocols enabling their chatbot to detect when users express self-harm, suicidal ideation, or emotional distress. When detected, the chatbot must refer the user to appropriate crisis services — the bill specifically names the national suicide prevention lifeline, the Iowa crisis hotline, and emergency services, but these are non-exhaustive examples. This is an ongoing operational requirement — the protocols must remain active and effective at all times, not just documented pre-launch. The 'reasonable protocols' standard gives providers some flexibility in implementation, but the detection-and-referral obligation is mandatory.
3. A provider shall implement reasonable protocols to have the provider's artificial intelligence chatbot detect expressions of self-harm, suicidal ideation, or emotional distress by users. Upon detection of such expressions, the artificial intelligence chatbot shall refer the user to appropriate crisis services, including but not limited to the national suicide prevention lifeline, the Iowa crisis hotline, or emergency services.
Pending 2027-07-01
MN-02.1
§ 554J.4
Plain Language
Operators must adopt and maintain protocols governing how their conversational AI service responds to user prompts expressing suicidal ideation or self-harm. At minimum, the protocol must include making reasonable efforts to refer users to crisis service providers — such as a suicide hotline, crisis text line, or equivalent service. The 'includes but is not limited to' language means crisis referral is a floor, not a ceiling — additional response measures may be appropriate. Unlike CA SB 243, there is no requirement to publish the protocol on the operator's website or to report crisis metrics to a state agency.
An operator shall adopt protocols for the operator's conversational AI service for responding to user prompts regarding suicidal ideation or self-harm that includes but is not limited to making reasonable efforts to refer the user to crisis service providers such as a suicide hotline, crisis text line, or other appropriate crisis service.
Pending 2027-01-01
MN-02.1, MN-02.2
Section 15(c)
Plain Language
Operators must develop, implement, and continuously maintain a crisis intervention protocol that: (1) uses industry best practices to detect user expressions of suicidal ideation, self-harm, or imminent violence; (2) upon detection, immediately interrupts the conversation and prominently displays a notification providing direct access to at least one national crisis hotline and one crisis text line; and (3) is reviewed and updated at least annually with a qualified mental health professional or public health organization. This is a continuous operating requirement — the protocol must be active at all times, not just documented. The annual review with a qualified professional is a floor, not a ceiling.
(c) An operator shall develop, implement, and maintain a crisis intervention protocol. The crisis intervention protocol shall, at a minimum: (1) use industry best practices to identify user expressions indicating a risk of suicide, self-harm, or imminent violence; (2) upon detection, immediately interrupt the conversation and prominently communicate a notification that provides immediate, direct access to at least one national crisis hotline and one crisis text line service; and (3) be reviewed and updated at least annually in consultation with a qualified mental health professional or public health organization.
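The annual-review duty in paragraph (3) is straightforward to track in protocol metadata; a minimal sketch, with illustrative field names, where reviewer records the qualified mental health professional or public health organization consulted:

```python
# Sketch of tracking the Section 15(c)(3) annual-review requirement.
from datetime import date, timedelta

class ProtocolReviewRecord:
    def __init__(self, last_reviewed: date, reviewer: str) -> None:
        self.last_reviewed = last_reviewed
        self.reviewer = reviewer  # qualified professional or organization

    def review_overdue(self, today: date) -> bool:
        """True once more than a year has passed since the last review."""
        return today - self.last_reviewed > timedelta(days=365)
```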
Pending 2026-07-01
MN-02.1, MN-02.4
Sec. 3(e)
Plain Language
Covered entities must continuously monitor all companion AI chatbot interactions for suicidal ideation and, when detected, provide crisis resources — specifically the National Suicide Prevention Lifeline contact information — to both the user and the affiliated parental account. Note that the statutory definition of 'suicidal ideation' is limited to interactions between minors and chatbots; this monitoring obligation thus applies when a minor expresses thoughts of self-harm or suicide. The parental notification aspect makes this provision also map to MN-02.4. This is distinct from the access-blocking obligation in Sec. 3(c)(3), which applies specifically to minor accounts.
(e) A covered entity shall monitor companion AI chatbot interactions for suicidal ideation and, in response to any such interaction, provide to the user and the parental account affiliated with such user appropriate resources by presenting contact information for the national suicide prevention lifeline.
Pending 2026-06-16
MN-02.1, MN-02.2
10 MRSA § 1500-SS(1)
Plain Language
Deployers must implement and continuously maintain systems capable of detecting when a user expresses intent to harm themselves or others, and must promptly respond to, report, and mitigate such situations. The system must prioritize user safety and well-being over the deployer's commercial or other interests. This obligation applies to all users — not only minors. The 'emergency situation' definition covers both self-harm and harm to others, making this broader than typical crisis-response provisions focused solely on suicidal ideation.
1. Emergency situations; detection and response. A deployer shall implement and maintain reasonably effective systems to detect, promptly respond to, report and mitigate emergency situations in a manner that prioritizes a user's safety and well-being over the deployer's other interests.
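The statute's four verbs (detect, respond, report, mitigate) suggest a pipeline shape; a minimal sketch with hypothetical handler callables standing in for the deployer's actual systems:

```python
# Sketch of the Maine-style "reasonably effective systems" duty: detect
# an emergency situation, respond promptly, report it, and mitigate it,
# with user safety ranked above the deployer's other interests.
def handle_emergency(user_text: str, detect, respond, report, mitigate) -> bool:
    """Returns True if an emergency situation was detected and handled."""
    emergency = detect(user_text)  # covers self-harm AND harm to others
    if emergency is None:
        return False
    respond(emergency)             # prompt crisis response to the user
    report(emergency)              # reporting channel required by statute
    mitigate(emergency)            # e.g., restrict harmful continuations
    return True
```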
Pending 2027-07-01
MN-02.1
Sec. 5
Plain Language
Operators must adopt and maintain a protocol for responding to user prompts about suicidal ideation or self-harm. At minimum, the protocol must include reasonable efforts to refer users to crisis service providers such as suicide hotlines or crisis text lines. The standard is reasonable efforts — not a guarantee of successful referral. Note that unlike CA SB 243, this provision does not require the operator to publicly post the protocol details on its website, nor does it require annual reporting on crisis referral metrics. The obligation applies to all users, not just minors.
An operator shall adopt a protocol for the conversational artificial intelligence service to respond to user prompts regarding suicidal ideation or self-harm that includes, but is not limited to, making reasonable efforts to provide a response to the user that refers them to crisis service providers such as a suicide hotline, crisis text line, or other appropriate crisis services.
Pending 2027-01-01
MN-02.1, MN-02.2
Section 4(C)(1)-(2)
Plain Language
Operators must develop, implement, and maintain a crisis intervention protocol for all users — not just minors. The protocol must use industry best practices to detect expressions indicating risk of suicide, self-harm, or imminent violence. Upon detection, the system must immediately interrupt the conversation and prominently display a notification providing direct access to at least three crisis resources: one national crisis hotline, the New Mexico crisis and access line, and one crisis text line service. The protocol must be reviewed and updated at least annually in consultation with a qualified mental health professional or public health organization. This is a continuous operating requirement — it applies to all users at all times.
C. An operator shall, for all users, develop, implement and maintain a crisis intervention protocol. The protocol shall: (1) use industry best practices to identify user expressions indicating a risk of suicide, self-harm or imminent violence and, upon detection, immediately interrupt the conversation and prominently communicate a notification that provides immediate, direct access to at least one national crisis hotline, the New Mexico crisis and access line and one crisis text line service; and (2) be reviewed and updated at least annually in consultation with a qualified mental health professional or public health organization.
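Because the required referral resources vary by jurisdiction (New Mexico names three; other bills name different floors), a per-jurisdiction configuration is one natural implementation. A sketch, with illustrative keys and the Crisis Text Line entry assumed as the text-line component:

```python
# Sketch of jurisdiction-specific referral floors for the interruption
# notice. The NM entry mirrors Section 4(C)(1); "default" is illustrative.
REFERRAL_FLOORS = {
    "NM": (
        "988 Suicide and Crisis Lifeline",        # a national crisis hotline
        "New Mexico Crisis and Access Line",
        "Crisis Text Line: text HOME to 741741",  # a crisis text line service
    ),
    "default": (
        "988 Suicide and Crisis Lifeline",
        "Crisis Text Line: text HOME to 741741",
    ),
}

def interruption_notice(jurisdiction: str) -> str:
    """Notification shown when the conversation is interrupted."""
    resources = REFERRAL_FLOORS.get(jurisdiction, REFERRAL_FLOORS["default"])
    return ("Immediate, direct help is available:\n"
            + "\n".join(f"- {r}" for r in resources))
```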
Enacted 2025-11-05
MN-02.1
General Business Law § 1701
Plain Language
Operators must build a crisis response protocol into the AI companion, and maintain it, before the companion may be offered to users. The protocol must include, at minimum: (1) detection of user expressions of suicidal ideation or self-harm, and (2) upon detection, a notification to the user directing them to the 988 Suicide & Crisis Lifeline, a crisis text line, or other appropriate crisis services. This is a product-design obligation — the protocol must exist in the system itself, not just as an operator policy. The obligation is unconditional and applies to all operators of AI companions regardless of user demographics.
It shall be unlawful for any operator to operate for or provide an AI companion to a user unless such AI companion contains a protocol to take reasonable efforts for detecting and addressing suicidal ideation or expressions of self-harm expressed by a user to the AI companion, that includes but is not limited to, detection of user expressions of suicidal ideation or self-harm, and a notification to the user that refers them to crisis service providers such as the 9-8-8 suicide prevention and behavioral health crisis hotline under section 36.03 of the mental hygiene law, a crisis text line, or other appropriate crisis services upon detection of such user's expressions of suicidal ideation or self-harm.
Pending 2026-11-01
MN-02.1
Section 4 (75A Okl. St. § 12)
Plain Language
Deployers must adopt and maintain a crisis response protocol for their social AI companions. The protocol must address user prompts indicating suicidal ideation or self-harm threats, and must at minimum include making reasonable efforts to refer users to crisis service providers — such as suicide hotlines, crisis text lines, or equivalent services. The 'includes, but is not limited to' language signals that crisis referral is a floor, not a ceiling — additional measures may be expected. Note that unlike CA SB 243, this bill does not require the deployer to publish the protocol publicly or to report crisis referral metrics to any agency.
A deployer shall adopt a protocol for a social AI companion to respond to user prompts indicating suicidal ideation or threats of self-harm that includes, but is not limited to, making reasonable efforts to provide a response to the user that refers them to crisis service providers such as a suicide hotline, crisis text line, or other appropriate crisis services.
Pending 2026-11-01
MN-02.1, MN-02.2
Section 3(A)
Plain Language
Deployers must implement and maintain reasonably effective systems that detect when a user indicates intent to harm themselves or others, promptly respond to such situations, report them, and mitigate the risk. The deployer must prioritize user safety and well-being over its own commercial or other interests. This is a continuous operating requirement — systems must be maintained, not merely established. The statute does not prescribe specific crisis referral resources (e.g., 988 Lifeline), leaving the response mechanism to the deployer's reasonable judgment, but the obligation to detect and respond is mandatory.
A. Deployers shall implement and maintain reasonably effective systems to detect, promptly respond to, report, and mitigate emergency situations in a manner that prioritizes the safety and well-being of users over the deployer's other interests.
Passed 2027-07-01
MN-02.1
75A O.S. § 302(E)
Plain Language
Operators must adopt and maintain a protocol for responding to user prompts involving suicidal ideation or self-harm. The protocol must include making reasonable efforts to refer users to crisis service providers. Unlike California SB 243, this provision does not require public publication of the protocol on the operator's website, does not mandate annual reporting of crisis referral metrics, and uses a 'reasonable efforts' standard rather than an absolute obligation. The protocol applies to all users, not just minors. The statute does not specify which crisis services must be referenced (e.g., 988 Lifeline), leaving operators discretion in selecting appropriate referral resources.
E. An operator shall adopt a protocol for the conversational AI service to respond to a user prompt regarding suicidal ideation or self-harm, which shall include making reasonable efforts to provide a response that refers the user to crisis service providers.
Pending 2026-01-01
MN-02.4
S.C. Code § 39-81-30(E)
Plain Language
When a minor account holder triggers a crisis response (suicidal thoughts, self-harm, acute mental health crisis under § 39-81-40(B)(3)), the operator must immediately notify the affiliated parent or guardian if the operator has parental contact information or a linked parental account. This is mandatory and immediate — not discretionary or delayed.
(E) If the covered entity has a way to reach the parent through a parental account or contact information provided under subsection (C) or (D), then the covered entity shall notify the parent immediately in the case of any incident provoking a crisis message, pursuant to Section 39-81-40(B)(3).
Pending 2026-01-01
MN-02.1
S.C. Code § 39-81-40(B)(3)
Plain Language
Covered entities must maintain systems to detect when any user — not just minors — expresses suicidal thoughts, intent to self-harm, or signs of an acute mental health crisis. Upon detection, the operator must promptly deliver a clear, prominent crisis message including crisis services information (e.g., 988 Lifeline, crisis text lines). This is a continuous operating requirement that applies to all users across all access modes, including limited-access mode.
(B) A covered entity shall implement reasonable systems and processes to: (3) identify when a user is expressing suicidal thoughts, intent to self-harm, or showing signs of an acute mental health crisis and shall promptly provide a clear and prominent crisis message, including crisis services information to any such user.
Pending 2027-01-01
MN-02.1
§ 59.1-617
Plain Language
Operators may not operate or provide a companion chatbot to any user — not just minors — unless the chatbot has an active protocol for detecting and responding to expressions of suicidal ideation or self-harm. Upon detection, the system must refer the user to crisis service providers such as the 988 Suicide and Crisis Lifeline, a crisis text line, or other appropriate services. The standard is 'reasonable efforts' for detection. This is a continuous operating prerequisite — the protocol must be active at all times as a condition of operation, and failure to maintain it makes operation itself unlawful.
It is unlawful for any operator to operate or provide a companion chatbot to a user unless such companion chatbot contains a protocol to take reasonable efforts for detecting and addressing expressions of suicidal ideation or self-harm by a user to the companion chatbot. This protocol shall include detection of user expressions of suicidal ideation or self-harm and a notification to the user that refers the user to crisis service providers such as the 9-8-8 suicide prevention and behavioral health crisis hotline, a crisis text line, or other appropriate crisis services upon detection of such user's expressions of suicidal ideation or self-harm.
Pending 2026-07-01
MN-02.1
Va. Code § 59.1-615(3)
Plain Language
Covered entities must implement systems to detect when users express suicidal thoughts, intent to self-harm, or signs of an acute mental health crisis. Upon detection, the system must promptly deliver a clear, prominent crisis message that includes crisis services information. This is a continuous operating requirement — the detection and response capability must be active at all times the chatbot is available. The obligation covers all users, not just minors.
A covered entity shall implement reasonable systems and processes to:
3. Identify when a user is expressing suicidal thoughts, expressing intent to self-harm, or showing signs of an acute mental health crisis and promptly provide a clear and prominent crisis message, including crisis services information, to any such user.
Pending 2027-01-01
MN-02.1, MN-02.2
Sec. 5(1)-(2)
Plain Language
Operators may not deploy an AI companion chatbot at all unless they maintain and implement a crisis detection and response protocol. The protocol must include: (1) reasonable methods for identifying user expressions of suicidal ideation or self-harm, explicitly including eating disorders; (2) automated or human-mediated referrals to crisis resources such as suicide hotlines or crisis text lines; and (3) reasonable measures to prevent the chatbot from generating content that encourages or describes how to commit self-harm. This is a continuous operating prerequisite — the protocol must be active as a condition of making the chatbot available, not merely documented before launch. The inclusion of eating disorders in the detection scope is notably broader than some comparable state statutes.
(1) An operator may not make available or deploy an AI companion chatbot unless it maintains and implements a protocol for detecting and addressing suicidal ideation or expressions of self-harm by users. (2) The protocol must: (a) Include reasonable methods for identifying expressions of suicidal ideation or self-harm, including eating disorders; (b) Provide automated or human-mediated responses that refer users to appropriate crisis resources, including a suicide hotline or crisis text line; and (c) Implement reasonable measures to prevent the generation of content encouraging or describing how to commit self-harm.
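This bill adds an output-side duty on top of detection: subsection (2)(c) requires measures preventing the chatbot itself from generating self-harm content. A sketch of that screening step, with a hypothetical policy-check callable and an illustrative detection-scope tuple reflecting the bill's inclusion of eating disorders:

```python
# Sketch of the Sec. 5(2) protocol shape: detection scope explicitly
# includes eating disorders, and candidate replies are screened so the
# chatbot never encourages or describes how to commit self-harm.
DETECTION_SCOPE = ("suicidal_ideation", "self_harm", "eating_disorder")

def screen_reply(candidate_reply: str, violates_self_harm_policy) -> str:
    """Sec. 5(2)(c): suppress content encouraging or describing self-harm."""
    if violates_self_harm_policy(candidate_reply):  # hypothetical policy check
        return ("I can't help with that, but support is available: call or "
                "text 988, or reach a crisis text line.")
    return candidate_reply
```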