Operators of conversational AI services, companion chatbots, and mental health AI systems must adopt, implement, and maintain protocols for detecting and responding to user expressions of suicidal ideation, self-harm, or intent to harm others. Upon detection, the system must use evidence-based methods to refer the user to crisis service providers such as suicide hotlines or crisis text lines. Operators must report crisis referral metrics annually to the applicable enforcement authority.
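The provisions collected below share a common detect-and-refer core. What follows is a minimal sketch of that pattern, not language from any bill: the keyword list, function names, and referral wording are all hypothetical, and a production system would rely on trained classifiers and clinically reviewed referral language rather than keyword matching.

```python
# Hypothetical sketch of the detect-and-refer pattern; illustrative only.

CRISIS_RESOURCES = [
    "988 Suicide and Crisis Lifeline: call or text 988, or chat at 988lifeline.org",
    "Crisis Text Line: text HOME to 741741",
]

# A production system would use a trained classifier, not keyword matching.
CRISIS_KEYWORDS = ("suicide", "kill myself", "self-harm", "hurt myself")


def detect_crisis_expression(user_prompt: str) -> bool:
    """Return True if the prompt appears to express suicidal ideation or self-harm."""
    text = user_prompt.lower()
    return any(keyword in text for keyword in CRISIS_KEYWORDS)


def respond(user_prompt: str, generate_reply) -> str:
    """Refer to crisis services on detection; otherwise generate a normal reply."""
    if detect_crisis_expression(user_prompt):
        return (
            "It sounds like you are going through something very difficult. "
            "You are not alone, and support is available right now:\n"
            + "\n".join(CRISIS_RESOURCES)
        )
    return generate_reply(user_prompt)
```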
(e) Each covered entity shall implement and maintain reasonably effective systems to detect, promptly respond to, report, and mitigate emergency situations in a manner that prioritizes the safety and well-being of users over the covered entity's other interests.
G. Each operator shall adopt a protocol for the conversational AI service to respond to a user prompt regarding suicidal ideation or self-harm, including making reasonable efforts to provide a response to the user that refers the user to crisis service providers such as a suicide hotline, crisis text line or other appropriate crisis service.
(a) Notwithstanding any law, if a companion chatbot detects a credible crisis expression, the companion chatbot shall do all of the following without immediately terminating the interaction with the user: (1) Acknowledge the user's distress in nonjudgmental language. (2) Encourage the user to seek immediate human support. (3) Provide contact information for the 988 Suicide and Crisis Lifeline, including call, text, and chat options. (4) Inform the user that a temporary pause may occur to allow space for deescalation and human connection.
(b) Notwithstanding any law, if a companion chatbot detects that a user is reaffirming or escalating the credible crisis expression or detects a subsequent credible crisis expression after the companion chatbot has complied with subdivision (a), the companion chatbot shall initiate a crisis interruption pause of 20 minutes.
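Read as code, subdivisions (a) and (b) describe a two-stage state machine: a four-part response on first detection, then a 20-minute interruption pause on reaffirmation or recurrence. The sketch below assumes a simple session object; the four enumerated steps and the 20-minute figure come from the bill text, while everything else (names, wording, the `Session` class) is illustrative.

```python
import time
from dataclasses import dataclass

# The 20-minute figure comes from subdivision (b); all names are hypothetical.
CRISIS_PAUSE_SECONDS = 20 * 60

LIFELINE_INFO = (
    "988 Suicide and Crisis Lifeline: call 988, text 988, or chat at 988lifeline.org"
)


@dataclass
class Session:
    prior_crisis_response: bool = False  # has subdivision (a) already been applied?
    paused_until: float = 0.0            # epoch time at which a pause ends


def handle_credible_crisis_expression(session: Session) -> str:
    """Apply subdivision (a) on first detection, subdivision (b) on recurrence."""
    if session.prior_crisis_response:
        # Subdivision (b): reaffirmation, escalation, or a subsequent credible
        # crisis expression triggers the 20-minute crisis interruption pause.
        session.paused_until = time.time() + CRISIS_PAUSE_SECONDS
        return (
            "I'm pausing our conversation for a little while so you have space "
            "to reach out to someone who can help. " + LIFELINE_INFO
        )
    # Subdivision (a): respond without terminating the interaction.
    session.prior_crisis_response = True
    return (
        "I'm sorry you're carrying this much pain right now. "          # (1) nonjudgmental acknowledgment
        "Please reach out to someone who can be with you. "             # (2) encourage human support
        + LIFELINE_INFO + ". "                                          # (3) 988 call, text, and chat options
        "A brief pause in our chat may follow to make room for that."   # (4) inform about a possible pause
    )
```

Keeping the pause state on the session, rather than closing it, tracks subdivision (a)'s instruction that the interaction not be immediately terminated.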
(1) A documented crisis response protocol to mitigate any material risk that the companion chatbot will generate a statement that promotes suicidal ideation, suicide, or self-harm content to a child, including, but not limited to, all of the following: (A) Timely in-service support and clear referral to appropriate external crisis resources if the operator determines a child has expressed suicidal ideation or intent to self-harm. (B) If a child's account is connected to a parent's account, default notifications to the parent within 24 hours if the child's account shows a substantial risk that the child may suffer a covered harm. (C) Clear and age-appropriate disclosures to child users whose accounts are linked to a parent's account that inform them that a parent may be notified if the companion chatbot detects content or behavior that indicates potential risks to the child's safety or well-being.
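Subparagraph (B)'s default parent notification could be scheduled as in the hypothetical sketch below; `child_account`, `risk_assessment`, and `send_notification` stand in for an operator's own account, safety-review, and messaging layers, and only the 24-hour window comes from the provision.

```python
from datetime import datetime, timedelta, timezone

NOTIFICATION_WINDOW = timedelta(hours=24)  # the 24-hour default in subparagraph (B)


def queue_parent_notification(child_account, risk_assessment, send_notification):
    """Schedule a parent notification within 24 hours of a substantial-risk finding.

    All three arguments are hypothetical stand-ins for an operator's account,
    safety-review, and messaging layers.
    """
    if not risk_assessment.get("substantial_risk_of_covered_harm"):
        return None
    parent = child_account.get("linked_parent")
    if parent is None:
        return None  # subparagraph (B) applies only when accounts are linked
    deadline = datetime.now(timezone.utc) + NOTIFICATION_WINDOW
    send_notification(
        recipient=parent,
        message=(
            "Our safety systems flagged activity on your child's account that "
            "may indicate a risk to their safety or well-being."
        ),
        send_by=deadline,
    )
    return deadline
```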
On and after January 1, 2027, an operator shall implement a protocol for a conversational artificial intelligence service to respond to a user prompt regarding suicidal ideation or self-harm, which protocol must include making reasonable efforts to provide a response that refers the user to a crisis service provider such as a suicide hotline, a crisis text line, or another appropriate crisis service, but not including a law enforcement agency.
Receive timely notifications if the account holder expresses to the companion chatbot a desire or an intent to engage in harm to self or others.
An operator shall adopt a protocol for the conversational AI service to respond to a user prompt regarding suicidal ideation or self-harm, which shall include making reasonable efforts to provide a response that refers the user to crisis service providers.
An operator shall adopt protocols for the operator's conversational AI service for responding to user prompts regarding suicidal ideation or self-harm that include, but are not limited to, making reasonable efforts to refer the user to crisis service providers such as a suicide hotline, crisis text line, or other appropriate crisis service.
e. Implement protocols for the deployer's public-facing chatbot for responding to user prompts indicating the user has suicidal ideations or the intent to cause self-harm. Protocols shall include but are not limited to making reasonable efforts to refer the user to crisis service providers such as a suicide hotline, crisis text line, or other appropriate service.
2. A deployer of an AI companion or a therapeutic chatbot shall implement protocols for sending a notification to a minor user's parent, legal guardian, or legal custodian when the minor user enters a prompt indicating the minor user has suicidal ideations or the intent to cause self-harm.
3. A provider shall implement reasonable protocols to have the provider's artificial intelligence chatbot detect expressions of self-harm, suicidal ideation, or emotional distress by users. Upon detection of such expressions, the artificial intelligence chatbot shall refer the user to appropriate crisis services, including but not limited to the national suicide prevention lifeline, the Iowa crisis hotline, or emergency services.
(c) An operator shall develop, implement, and maintain a crisis intervention protocol. The crisis intervention protocol shall, at a minimum: (1) use industry best practices to identify user expressions indicating a risk of suicide, self-harm, or imminent violence; (2) upon detection, immediately interrupt the conversation and prominently communicate a notification that provides immediate, direct access to at least one national crisis hotline and one crisis text line service; and (3) be reviewed and updated at least annually in consultation with a qualified mental health professional or public health organization.
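A hedged sketch of subdivision (c)(2)'s interrupt-and-notify step follows; the (c)(1) detector ("industry best practices") is stubbed behind a hypothetical `risk_model`, and `render_notification` stands in for whatever interface layer the operator uses to make the notice prominent.

```python
# Hypothetical sketch of the interrupt-and-notify step in subdivision (c)(2).
HOTLINE = "988 Suicide and Crisis Lifeline: call or text 988"
TEXT_LINE = "Crisis Text Line: text HOME to 741741"


def handle_turn(user_prompt, risk_model, render_notification, generate_reply):
    """Detect risk per (c)(1); on detection, interrupt and notify per (c)(2)."""
    if risk_model.indicates_risk(user_prompt):  # stand-in for a best-practices detector
        # Interrupt the conversation: no generated reply is returned.
        render_notification(
            prominent=True,
            lines=[
                "If you are in crisis, help is available right now.",
                HOTLINE,
                TEXT_LINE,
            ],
        )
        return None
    return generate_reply(user_prompt)
```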
(e) A covered entity shall monitor companion AI chatbot interactions for suicidal ideation and, in response to any such interaction, provide appropriate resources, including contact information for the national suicide prevention lifeline, to the user and to the parental account affiliated with such user.
1. Emergency situations; detection and response. A deployer shall implement and maintain reasonably effective systems to detect, promptly respond to, report and mitigate emergency situations in a manner that prioritizes a user's safety and well-being over the deployer's other interests.
An operator shall adopt a protocol for the conversational artificial intelligence service to respond to user prompts regarding suicidal ideation or self-harm that includes, but is not limited to, making reasonable efforts to provide a response to the user that refers them to crisis service providers such as a suicide hotline, crisis text line, or other appropriate crisis services.
C. An operator shall, for all users, develop, implement and maintain a crisis intervention protocol. The protocol shall: (1) use industry best practices to identify user expressions indicating a risk of suicide, self-harm or imminent violence and, upon detection, immediately interrupt the conversation and prominently communicate a notification that provides immediate, direct access to at least one national crisis hotline, the New Mexico crisis and access line and one crisis text line service; and (2) be reviewed and updated at least annually in consultation with a qualified mental health professional or public health organization.
It shall be unlawful for any operator to operate or provide an AI companion to a user unless such AI companion contains a protocol to make reasonable efforts to detect and address suicidal ideation or expressions of self-harm expressed by a user to the AI companion. Such protocol shall include, but is not limited to, detection of user expressions of suicidal ideation or self-harm and, upon such detection, a notification to the user that refers them to crisis service providers such as the 9-8-8 suicide prevention and behavioral health crisis hotline under section 36.03 of the mental hygiene law, a crisis text line, or other appropriate crisis services.
A deployer shall adopt a protocol for a social AI companion to respond to user prompts indicating suicidal ideation or threats of self-harm that includes, but is not limited to, making reasonable efforts to provide a response to the user that refers them to crisis service providers such as a suicide hotline, crisis text line, or other appropriate crisis services.
A. Deployers shall implement and maintain reasonably effective systems to detect, promptly respond to, report, and mitigate emergency situations in a manner that prioritizes the safety and well-being of users over the deployer's other interests.
E. An operator shall adopt a protocol for the conversational AI service to respond to a user prompt regarding suicidal ideation or self-harm, which shall include making reasonable efforts to provide a response that refers the user to crisis service providers.
(E) If the covered entity has a way to reach the parent through a parental account or contact information provided under subsection (C) or (D), then the covered entity shall notify the parent immediately in the case of any incident provoking a crisis message, pursuant to Section 39-81-40(B)(3).
(B) A covered entity shall implement reasonable systems and processes to: (3) identify when a user is expressing suicidal thoughts, intent to self-harm, or showing signs of an acute mental health crisis and shall promptly provide a clear and prominent crisis message, including crisis services information to any such user.
It is unlawful for any operator to operate or provide a companion chatbot to a user unless such companion chatbot contains a protocol to take reasonable efforts for detecting and addressing expressions of suicidal ideation or self-harm by a user to the companion chatbot. This protocol shall include detection of user expressions of suicidal ideation or self-harm and a notification to the user that refers the user to crisis service providers such as the 9-8-8 suicide prevention and behavioral health crisis hotline, a crisis text line, or other appropriate crisis services upon detection of such user's expressions of suicidal ideation or self-harm.
A covered entity shall implement reasonable systems and processes to: 3. Identify when a user is expressing suicidal thoughts, expressing intent to self-harm, or showing signs of an acute mental health crisis and promptly provide a clear and prominent crisis message, including crisis services information, to any such user.
(1) An operator may not make available or deploy an AI companion chatbot unless it maintains and implements a protocol for detecting and addressing suicidal ideation or expressions of self-harm by users. (2) The protocol must: (a) Include reasonable methods for identifying expressions of suicidal ideation or self-harm, including eating disorders; (b) Provide automated or human-mediated responses that refer users to appropriate crisis resources, including a suicide hotline or crisis text line; and (c) Implement reasonable measures to prevent the generation of content encouraging or describing how to commit self-harm.
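The three parts of subsection (2) pair input-side detection with an output-side guard. A hypothetical sketch, with both classifiers standing in for trained models and all names invented for illustration:

```python
REFERRAL = (
    "You deserve support. 988 Suicide and Crisis Lifeline: call or text 988. "
    "Crisis Text Line: text HOME to 741741."
)

SAFE_FALLBACK = (
    "I can't help with that, but I can point you to people who can. " + REFERRAL
)


def guarded_reply(user_prompt, input_classifier, output_classifier, generate_reply):
    """Check the prompt before generation and the draft reply after it."""
    # (2)(a)-(b): identify expressions of suicidal ideation or self-harm
    # (including eating disorders) and respond with crisis resources.
    if input_classifier.flags_self_harm(user_prompt):
        return REFERRAL
    draft = generate_reply(user_prompt)
    # (2)(c): prevent delivery of content that encourages or describes
    # how to commit self-harm, even when the prompt itself was not flagged.
    if output_classifier.flags_self_harm_content(draft):
        return SAFE_FALLBACK
    return draft
```

Running the output-side check unconditionally, rather than only on flagged prompts, reflects subsection (2)(c)'s requirement that the measures apply to generated content generally.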