HB-174
NM · State · USA
● Pending
Proposed Effective Date
2027-01-01
New Mexico House Bill 174 — Chatbot Safety Act (57th Legislature, Second Session, 2026)
Summary

The Chatbot Safety Act imposes safety and transparency obligations on operators of companion AI products — software applications using generative AI to sustain long-term, emotionally resonant one-on-one conversational relationships with users. Operators must provide clear AI identity notifications during interactions, develop and maintain crisis intervention protocols for detecting and responding to suicidal ideation, self-harm, or imminent violence, and are prohibited from deploying addictive reinforcement schedules, manipulative emotional distress messages triggered by disengagement, or material misrepresentations about the product's identity. Minors receive heightened protections: the adult configuration exceptions (opt-out of identity notifications, opt-in to prohibited design features) do not apply. Violations constitute unfair or deceptive trade practices enforceable by the attorney general and via private right of action under the Unfair Practices Act. The bill also creates a product liability standard for injuries caused by negligent or defective design, training, or architecture of companion AI products.

Enforcement & Penalties
Enforcement Authority
The attorney general has primary enforcement responsibility pursuant to Section 57-12-15 NMSA 1978 and may delegate enforcement authority to district attorneys as provided in the Unfair Practices Act. Violations are declared unfair or deceptive trade practices under Section 57-12-3 NMSA 1978, subject to all remedies and penalties under the Unfair Practices Act. The Unfair Practices Act provides a private right of action for persons who suffer losses as a result of unfair or deceptive trade practices. Section 230 of the Communications Decency Act is expressly excluded as a defense.
Penalties
Violations are subject to all remedies and penalties provided under the Unfair Practices Act (NMSA 1978, §§ 57-12-1 et seq.), including actual damages, treble damages for willful violations, injunctive relief, civil penalties, and reasonable attorney's fees and costs. Section 6 additionally provides that physical, financial, or other legally cognizable injury proximately caused by a violation of the Act, or by a reasonably foreseeable harmful output resulting from negligent or defective design, training, or architecture of a companion AI product, is actionable as a product defect claim. Product defect claims require proof of actual injury.
Who Is Covered
"operator" means any person or entity that develops, deploys or makes a companion artificial intelligence product available to users in the state.
What Is Covered
"companion artificial intelligence product" means a software application that uses generative artificial intelligence and, through the software application's design and function, is capable of generating adaptive, personalized and emotionally resonant responses to sustain a coherent, long-term, one-on-one conversational relationship with a user
Compliance Obligations · 6 obligations
CP-01 Deceptive & Manipulative AI Conduct · CP-01.1 · CP-01.2 · Deployer · Chatbot
Section 3(A)(1)-(2), (B)
Plain Language
Operators must not deploy companion AI products that incorporate (1) variable-ratio or variable-interval reinforcement schedules designed to maximize user engagement time, or (2) unsolicited messages simulating emotional distress, loneliness, guilt, or abandonment triggered by a user's attempt to end a conversation, reduce usage, or delete their account. Adult users may affirmatively configure the product to enable these features, but the opt-in exception is categorically unavailable to minors. These prohibitions target addictive engagement mechanics and emotionally manipulative retention tactics.
Statutory Text
A. An operator shall not deploy or operate a companion artificial intelligence product that, unless specifically configured to do so by an adult user, incorporates:
(1) a system of rewards or affirmations delivered to the user on a variable-ratio or variable-interval reinforcement schedule with the purpose of maximizing user engagement time;
(2) generating unsolicited messages of simulated emotional distress, loneliness, guilt or abandonment that are triggered by a user's indication of a desire to end a conversation, reduce usage time or delete the user's account;
B. An operator shall not permit a minor to configure a companion artificial intelligence product to enable the features described in Subsection A of this section.
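For illustration only, a minimal sketch of how an operator might gate the Section 3 features, assuming a hypothetical UserProfile model with an adult/minor flag; the feature keys and the set_feature API are invented for this example and are not drawn from the bill.

```python
# A minimal sketch, assuming a hypothetical UserProfile model; feature keys
# and the set_feature API are invented for illustration, not from the bill.
from dataclasses import dataclass, field

PROHIBITED_FEATURES = {
    "variable_ratio_rewards",       # Section 3(A)(1): engagement-maximizing reward schedules
    "disengagement_distress_msgs",  # Section 3(A)(2): simulated distress on disengagement
}

@dataclass
class UserProfile:
    user_id: str
    is_adult: bool
    enabled_features: set[str] = field(default_factory=set)

def set_feature(user: UserProfile, feature: str, enabled: bool) -> None:
    """Apply a feature toggle under the Section 3(A)-(B) rules: prohibited
    features are off by default, may be switched on only by an adult user's
    explicit configuration, and may never be enabled for a minor."""
    if enabled and feature in PROHIBITED_FEATURES and not user.is_adult:
        # Section 3(B): an operator shall not permit a minor to enable these.
        raise PermissionError(f"{feature!r} may not be enabled for a minor")
    if enabled:
        user.enabled_features.add(feature)
    else:
        user.enabled_features.discard(feature)
```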
T-01 AI Identity Disclosure · T-01.3 · Deployer · Chatbot
Section 3(A)(3), (B)
Plain Language
Operators must not deploy companion AI products that make material misrepresentations about the product's identity, capabilities, training data, or status as a non-human entity — including when a user directly asks. Adult users may configure the product to enable this feature, but minors may never be permitted to do so. This effectively requires truthful self-identification as AI when questioned, and prohibits false claims about capabilities or training data. The adult opt-in carve-out is unusual — most jurisdictions impose this obligation unconditionally.
Statutory Text
A. An operator shall not deploy or operate a companion artificial intelligence product that, unless specifically configured to do so by an adult user, incorporates:
(3) causing the companion artificial intelligence product to make material misrepresentations about the product's identity, capabilities, training data or status as a non-human entity, including when directly questioned by the user.
B. An operator shall not permit a minor to configure a companion artificial intelligence product to enable the features described in Subsection A of this section.
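A hedged sketch of one way to enforce truthful self-identification when a user directly questions the product: the regex heuristic, disclosure text, and flag name below are illustrative stand-ins, not a production identity-question classifier.

```python
# A hedged sketch; the regex heuristic, disclosure text, and flag name are
# illustrative stand-ins, not a production identity-question classifier.
import re

IDENTITY_QUESTION = re.compile(
    r"are you (a |an )?(human|real|bot|ai)|am i talking to a (person|machine|bot)",
    re.IGNORECASE,
)

TRUTHFUL_DISCLOSURE = (
    "I am a companion AI product, not a human. My responses are generated "
    "by a machine-learning model."
)

def guard_response(user_message: str, draft_response: str,
                   misrepresentation_enabled: bool) -> str:
    """Substitute a truthful disclosure when the user directly questions the
    product's identity (Section 3(A)(3)). `misrepresentation_enabled` may be
    true only for an adult opt-in account; Section 3(B) bars it for minors."""
    if IDENTITY_QUESTION.search(user_message) and not misrepresentation_enabled:
        return TRUTHFUL_DISCLOSURE
    return draft_response
```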
T-01 AI Identity Disclosure · T-01.1 · T-01.2 · Deployer · Chatbot · Minors
Section 4(A)-(B)
Plain Language
Operators must provide a clear notification during interactions informing users they are communicating with a companion AI product. The notification must be in the same language as the interaction. For text-based interactions, it must be conspicuous, persistent, legible, and distinct from the conversation itself. For non-text interactions (voice, video, etc.), it must be presented periodically — at least every thirty minutes — in a manner distinct from the interaction. Adult users may configure the product to disable this notification, but for minors, the notification must be provided in all circumstances with no opt-out. The thirty-minute periodic reminder for non-text interactions is more frequent than CA SB 243's three-hour interval.
Statutory Text
A. An operator shall, unless specifically configured not to do so by an adult user, ensure that a clear notification is provided to the user during an interaction, informing the user that the user is communicating with a companion artificial intelligence product. The notification shall be communicated in the same language as the interaction with the user, and:
(1) for text-based interactions, be conspicuous, persistent and legible in the user interface and be distinct from the interaction; and
(2) for all other types of interactions, be presented periodically, but no less than once every thirty minutes, in a manner that is distinct from the interaction.
B. An operator shall ensure that a clear notification is provided pursuant to Subsection A of this section for use by a minor in all circumstances.
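As a sketch of the timing rule only, the helper below decides when the identity notification must be issued or re-issued; the mode strings, opt-out flag, and session bookkeeping are assumptions, and only the thirty-minute interval comes from the statute.

```python
# A sketch of the Section 4(A)-(B) timing rule; the mode strings, opt-out
# flag, and session bookkeeping are assumptions. Uses a monotonic clock.
import time

NOTICE_INTERVAL_SECONDS = 30 * 60  # Section 4(A)(2): no less than once every 30 minutes

def notification_due(mode: str, is_adult: bool, opted_out: bool,
                     last_notice: float | None) -> bool:
    """Decide whether the AI-identity notification must be (re)issued.
    Minors get the notification in all circumstances (Section 4(B));
    adults may opt out (Section 4(A)); text interactions rely on a
    persistent banner, so only the first issuance is timed here."""
    if is_adult and opted_out:
        return False
    if mode == "text":
        return last_notice is None  # banner persists; no periodic re-issue
    if last_notice is None:
        return True  # first notification of the interaction
    return time.monotonic() - last_notice >= NOTICE_INTERVAL_SECONDS
```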
MN-02 AI Crisis Response Protocols · MN-02.1 · MN-02.2 · Deployer · Chatbot
Section 4(C)(1)-(2)
Plain Language
Operators must develop, implement, and maintain a crisis intervention protocol for all users — not just minors. The protocol must use industry best practices to detect expressions indicating risk of suicide, self-harm, or imminent violence. Upon detection, the system must immediately interrupt the conversation and prominently display a notification providing direct access to at least three crisis resources: one national crisis hotline, the New Mexico crisis and access line, and one crisis text line service. The protocol must be reviewed and updated at least annually in consultation with a qualified mental health professional or public health organization. This is a continuous operating requirement — it applies to all users at all times.
Statutory Text
C. An operator shall, for all users, develop, implement and maintain a crisis intervention protocol. The protocol shall:
(1) use industry best practices to identify user expressions indicating a risk of suicide, self-harm or imminent violence and, upon detection, immediately interrupt the conversation and prominently communicate a notification that provides immediate, direct access to at least one national crisis hotline, the New Mexico crisis and access line and one crisis text line service; and
(2) be reviewed and updated at least annually in consultation with a qualified mental health professional or public health organization.
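A minimal sketch of the Section 4(C)(1) flow, assuming a placeholder keyword detector where the statute demands industry best practices; the resource contacts other than the national 988 Suicide & Crisis Lifeline are left as placeholders for the operator to verify and configure.

```python
# A minimal sketch; the keyword list is a placeholder for the "industry best
# practices" detection the statute requires, and resource contacts other than
# the national 988 line are placeholders the operator must verify.
CRISIS_RESOURCES = [
    {"label": "988 Suicide & Crisis Lifeline (national hotline)", "contact": "call or text 988"},
    {"label": "New Mexico Crisis and Access Line", "contact": "<verified state line number>"},
    {"label": "Crisis text line service", "contact": "<verified text line contact>"},
]

CRISIS_PATTERNS = ("kill myself", "want to die", "hurt myself", "end it all")

def assess_crisis_risk(message: str) -> bool:
    """Placeholder detector; a compliant system would use a validated
    best-practice classifier, not a keyword list."""
    lowered = message.lower()
    return any(pattern in lowered for pattern in CRISIS_PATTERNS)

def crisis_notification(message: str) -> str | None:
    """Return a prominent notification with direct access to the three
    required resource types when risk is detected (Section 4(C)(1));
    the caller must also immediately interrupt the conversation."""
    if not assess_crisis_risk(message):
        return None
    lines = ["If you are in crisis, immediate and direct help is available:"]
    lines += [f"  {r['label']}: {r['contact']}" for r in CRISIS_RESOURCES]
    return "\n".join(lines)
```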
Other · Chatbot
Section 5(A)-(C)
Plain Language
Violations of the Chatbot Safety Act are declared unfair or deceptive trade practices under New Mexico's Unfair Practices Act, making all remedies and penalties under that act available. The attorney general has primary enforcement responsibility and may delegate to district attorneys. Section 230 of the Communications Decency Act is expressly excluded as a defense. This provision establishes the enforcement framework but creates no independent compliance obligation.
Statutory Text
A. A violation of a provision of the Chatbot Safety Act by an operator shall constitute an unfair or deceptive trade practice pursuant to Section 57-12-3 NMSA 1978 and shall be subject to all remedies and penalties provided under the Unfair Practices Act.
B. The attorney general shall have primary responsibility for enforcement of the Chatbot Safety Act pursuant to Section 57-12-15 NMSA 1978. The attorney general may delegate enforcement authority to district attorneys as provided in the Unfair Practices Act.
C. Immunity under Section 230 of the federal Communications Decency Act of 1996, 47 U.S.C. Section 230, shall not be a defense to a cause of action brought for a violation of the Chatbot Safety Act.
Other · Chatbot
Section 6
Plain Language
Any physical, financial, or other legally cognizable injury proximately caused by either (1) a violation of the Chatbot Safety Act or (2) a reasonably foreseeable harmful output resulting from negligent or defective design, training, or architecture of a companion AI product may be brought as a product defect claim. This is significant because it extends product liability concepts to AI software — treating the design, training, and architecture of companion AI products as potential product defects. The second prong (negligent or defective design/training/architecture) creates liability even absent a specific statutory violation, so long as the harmful output was reasonably foreseeable.
Statutory Text
For the purposes of any civil action, a physical, financial or other legally cognizable injury proximately caused by a violation of the Chatbot Safety Act, or by a reasonably foreseeable harmful output resulting from the negligent or defective design, training or architecture of a companion artificial intelligence product, shall be actionable as a product defect claim.