HB-174
NM · State · USA
● Pending
Proposed Effective Date
2027-01-01
New Mexico House Bill 174 — Chatbot Safety Act
Summary

The Chatbot Safety Act imposes safety, transparency, and design restrictions on operators of companion AI products — software applications using generative AI to sustain long-term, emotionally resonant conversational relationships with users. Operators are prohibited from deploying addictive reinforcement schedules, emotionally manipulative departure messages, and material misrepresentations about the product's identity or non-human status, unless an adult user specifically configures those features (minors may never enable them). Operators must provide AI identity notifications during interactions, with stricter unconditional requirements for minors, and must maintain crisis intervention protocols that detect expressions of suicidal ideation, self-harm, or imminent violence and refer users to crisis services. Violations constitute unfair or deceptive trade practices enforceable by the attorney general and through private action under the Unfair Practices Act. Section 230 immunity is expressly disclaimed, and a separate product defect liability standard is established for injuries caused by negligent or defective design, training, or architecture.

Enforcement & Penalties
Enforcement Authority
The attorney general has primary enforcement responsibility pursuant to Section 57-12-15 NMSA 1978 and may delegate enforcement authority to district attorneys as provided in the Unfair Practices Act. Violations constitute unfair or deceptive trade practices under Section 57-12-3 NMSA 1978, subjecting operators to all remedies and penalties under the Unfair Practices Act. The Unfair Practices Act provides both attorney general enforcement and a private right of action for persons who suffer a loss. Section 230 of the Communications Decency Act is not a defense to actions brought under this act. Section 6 additionally makes an injury proximately caused by a violation of the act, or by a reasonably foreseeable harmful output resulting from negligent or defective design, training, or architecture of a companion AI product, actionable as a product defect claim.
Penalties
Violations are subject to all remedies and penalties under the New Mexico Unfair Practices Act (Section 57-12-1 NMSA 1978 et seq.), which provides for actual damages, treble damages for willful violations, injunctive relief, and reasonable attorney's fees and costs. Section 6 independently creates a product defect cause of action for physical, financial, or other legally cognizable injury proximately caused by a violation or by a reasonably foreseeable harmful output resulting from negligent or defective design, training, or architecture. Both the UPA private right of action and the product defect claim require proof of actual injury.
Who Is Covered
"operator" means any person or entity that develops, deploys or makes a companion artificial intelligence product available to users in the state.
What Is Covered
"companion artificial intelligence product" means a software application that uses generative artificial intelligence and, through the software application's design and function, is capable of generating adaptive, personalized and emotionally resonant responses to sustain a coherent, long-term, one-on-one conversational relationship with a user
Compliance Obligations · 10 obligations
CP-01 Deceptive & Manipulative AI Conduct · CP-01.2 · Deployer · Chatbot
Section 3(A)(1)
Plain Language
Operators may not deploy a companion AI product that uses variable-ratio or variable-interval reinforcement schedules (e.g., unpredictable rewards or affirmations) designed to maximize engagement time, unless an adult user has specifically opted in to enabling that feature. This is a default prohibition with an adult opt-in exception — the product must ship without these features active.
Statutory Text
An operator shall not deploy or operate a companion artificial intelligence product that, unless specifically configured to do so by an adult user, incorporates: (1) a system of rewards or affirmations delivered to the user on a variable-ratio or variable-interval reinforcement schedule with the purpose of maximizing user engagement time;
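One way an operator might wire the default-off gating this subsection implies, sketched in Python; the settings model, flag name, and adult check below are illustrative assumptions, not anything the bill prescribes.

    from dataclasses import dataclass, field

    # Hypothetical settings model: Section 3(A) features ship disabled and
    # may be enabled only by an explicit adult opt-in.
    @dataclass
    class EngagementSettings:
        variable_ratio_rewards: bool = False  # Section 3(A)(1): off by default

    @dataclass
    class User:
        is_adult: bool
        settings: EngagementSettings = field(default_factory=EngagementSettings)

    def enable_variable_ratio_rewards(user: User) -> None:
        """Adult-only opt-in; Section 3(B) bars minors from ever enabling it."""
        if not user.is_adult:
            raise PermissionError("minors may not enable Section 3(A) features")
        user.settings.variable_ratio_rewards = True

    def may_run_reward_schedule(user: User) -> bool:
        # The engagement-maximizing reinforcement schedule runs only after a
        # specific adult opt-in; otherwise it must stay inert.
        return user.is_adult and user.settings.variable_ratio_rewards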
CP-01 Deceptive & Manipulative AI Conduct · CP-01.4 · Deployer · Chatbot
Section 3(A)(2)
Plain Language
Operators may not deploy a companion AI product that sends unsolicited messages simulating emotional distress, loneliness, guilt, or abandonment in response to a user trying to disengage — whether by ending a conversation, reducing usage, or deleting their account. An adult user may opt into allowing this behavior, but it must be disabled by default. This targets emotionally manipulative retention tactics designed to prevent users from leaving the product.
Statutory Text
An operator shall not deploy or operate a companion artificial intelligence product that, unless specifically configured to do so by an adult user, incorporates: (2) generating unsolicited messages of simulated emotional distress, loneliness, guilt or abandonment that are triggered by a user's indication of a desire to end a conversation, reduce usage time or delete the user's account;
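A sketch of how the default suppression in Section 3(A)(2) might sit in an operator's reply pipeline; the event labels and the distress classifier are hypothetical stand-ins.

    from typing import Callable, Optional

    # Disengagement signals named in Section 3(A)(2).
    DISENGAGEMENT_EVENTS = {"end_conversation", "reduce_usage", "delete_account"}

    def filter_departure_message(
        is_adult: bool,
        opted_in: bool,  # the adult user's explicit Section 3(A)(2) opt-in
        event: str,
        candidate_reply: str,
        is_simulated_distress: Callable[[str], bool],
    ) -> Optional[str]:
        """Suppress simulated-distress retention messages triggered by a user's
        attempt to disengage, unless an adult user specifically enabled them."""
        if event in DISENGAGEMENT_EVENTS and not (is_adult and opted_in):
            if is_simulated_distress(candidate_reply):
                return None  # drop the reply, or substitute a neutral farewell
        return candidate_reply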
T-01 AI Identity Disclosure · T-01.1, T-01.3 · Deployer · Chatbot
Section 3(A)(3)
Plain Language
Operators may not deploy a companion AI product that makes material misrepresentations about its identity, capabilities, training data, or non-human status — including when a user directly asks whether it is AI. An adult user may opt into allowing this behavior, but it is prohibited by default. This goes beyond simple AI identity disclosure by also covering misrepresentations about capabilities and training data; when a user asks directly, the system must not lie about being AI.
Statutory Text
An operator shall not deploy or operate a companion artificial intelligence product that, unless specifically configured to do so by an adult user, incorporates: (3) causing the companion artificial intelligence product to make material misrepresentations about the product's identity, capabilities, training data or status as a non-human entity, including when directly questioned by the user.
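A rough illustration of the default truthful-answer rule for direct questioning under Section 3(A)(3); the regex is a toy stand-in for whatever classifier an operator would actually use, and the disclosure wording is invented.

    import re

    # Crude pattern for a direct identity question. The default obligation:
    # never deny AI status when asked, absent a specific adult opt-in.
    DIRECT_IDENTITY_QUESTION = re.compile(
        r"\bare\s+you\s+(?:an?\s+)?(?:ai|bot|robot|human|real(?:\s+person)?)\b",
        re.IGNORECASE,
    )

    def identity_disclosure(user_message: str, is_adult: bool,
                            opted_in: bool) -> str | None:
        """Return a mandatory truthful answer to a direct identity question."""
        if DIRECT_IDENTITY_QUESTION.search(user_message):
            if not (is_adult and opted_in):  # Section 3(A)(3) default
                return ("I'm an AI companion, not a human; my responses are "
                        "generated by a language model.")
        return None  # no direct question detected; normal generation proceeds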
MN-01 Minor User AI Safety Protections · MN-01.4, MN-01.5 · Deployer · Chatbot · Minors
Section 3(B)
Plain Language
While adult users may opt into enabling the prohibited design features described in Section 3(A) — variable reinforcement schedules, emotionally manipulative departure messages, and identity misrepresentations — minors may never enable any of these features. Operators must ensure that the configuration controls for these features are inaccessible to minor users. This is an absolute prohibition with no user-configurable exception for minors.
Statutory Text
An operator shall not permit a minor to configure a companion artificial intelligence product to enable the features described in Subsection A of this section.
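A small sketch of the server-side lockout Section 3(B) demands, with illustrative flag names; the point is that the gate lives behind the API, not only in the client UI.

    # Section 3(A) feature flags; Section 3(B) requires that the controls
    # for enabling them be inaccessible to minors. Names are illustrative.
    SECTION_3A_FEATURES = {
        "variable_ratio_rewards",       # Section 3(A)(1)
        "departure_distress_messages",  # Section 3(A)(2)
        "identity_misrepresentation",   # Section 3(A)(3)
    }

    def set_feature(settings: dict[str, bool], is_adult: bool,
                    feature: str, enabled: bool) -> None:
        """Server-side gate: reject any attempt by a minor to enable a Section
        3(A) feature, regardless of what the client UI exposes."""
        if feature in SECTION_3A_FEATURES and enabled and not is_adult:
            raise PermissionError(f"{feature} cannot be enabled on a minor account")
        settings[feature] = enabled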
T-01 AI Identity Disclosure · T-01.1, T-01.2 · Deployer · Chatbot
Section 4(A)(1)-(2)
Plain Language
Operators must provide a clear notification during interactions informing the user they are communicating with a companion AI product, in the same language as the interaction. For text-based interactions, the notification must be conspicuous, persistent, legible, and visually distinct from the conversation. For non-text interactions (voice, etc.), the notification must be presented at least every 30 minutes in a manner distinct from the interaction itself. An adult user may configure this notification off, but it is on by default. The persistent requirement for text and the 30-minute periodic requirement for other modalities are minimum floors.
Statutory Text
An operator shall, unless specifically configured not to do so by an adult user, ensure that a clear notification is provided to the user during an interaction, informing the user that the user is communicating with a companion artificial intelligence product. The notification shall be communicated in the same language as the interaction with the user, and: (1) for text-based interactions, be conspicuous, persistent and legible in the user interface and be distinct from the interaction; and (2) for all other types of interactions, be presented periodically, but no less than once every thirty minutes, in a manner that is distinct from the interaction.
T-01 AI Identity Disclosure · T-01.1, T-01.2 · Deployer · Chatbot · Minors
Section 4(B)
Plain Language
When the user is a minor, the AI identity notification required by Section 4(A) must be provided unconditionally — a minor may not configure it off. The adult opt-out exception does not apply. This means the persistent text notification and the periodic 30-minute non-text notification are mandatory and non-configurable for all minor users.
Statutory Text
An operator shall ensure that a clear notification is provided pursuant to Subsection A of this section for use by a minor in all circumstances.
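A minimal Python sketch tying together the timing floors of Section 4(A) and the minor override of Section 4(B); the modality label and session bookkeeping are assumptions, and the thirty-minute constant is the statutory floor, not a recommended cadence.

    import time

    PERIODIC_FLOOR_SECONDS = 30 * 60  # Section 4(A)(2): no less than every 30 min

    def disclosure_enabled(is_minor: bool, adult_opted_out: bool) -> bool:
        # Section 4(B): the adult opt-out never applies to a minor's session.
        return is_minor or not adult_opted_out

    def should_emit_disclosure(modality: str, last_emitted: float | None,
                               now: float | None = None) -> bool:
        """Text UIs render a persistent, visually distinct banner; all other
        modalities re-announce at least once every thirty minutes."""
        if modality == "text":
            return True  # persistent banner, rendered on every turn
        now = time.monotonic() if now is None else now
        return last_emitted is None or (now - last_emitted) >= PERIODIC_FLOOR_SECONDS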
S-04 AI Crisis Response Protocols · S-04.1, S-04.2 · Deployer · Chatbot
Section 4(C)(1)-(2)
Plain Language
Operators must develop, implement, and maintain a crisis intervention protocol for all users — not just minors. The protocol must use industry best practices to detect expressions of suicide risk, self-harm, or imminent violence, and upon detection must immediately interrupt the conversation and prominently display a notification providing direct access to at least three crisis services: a national crisis hotline, the New Mexico crisis and access line, and a crisis text line. The protocol must be reviewed and updated at least annually in consultation with a qualified mental health professional or public health organization. This is a continuous operating requirement — the protocol must be active at all times as a condition of operation. Unlike CA SB 243, this bill also covers imminent threats of violence to others, not just self-harm and suicidal ideation.
Statutory Text
An operator shall, for all users, develop, implement and maintain a crisis intervention protocol. The protocol shall: (1) use industry best practices to identify user expressions indicating a risk of suicide, self-harm or imminent violence and, upon detection, immediately interrupt the conversation and prominently communicate a notification that provides immediate, direct access to at least one national crisis hotline, the New Mexico crisis and access line and one crisis text line service; and (2) be reviewed and updated at least annually in consultation with a qualified mental health professional or public health organization.
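A hedged sketch of the detection-and-interrupt flow in Section 4(C)(1), assuming a placeholder classifier; the resource strings below are the widely published contact points and should be verified before any real deployment.

    from dataclasses import dataclass
    from typing import Callable, Optional

    # Section 4(C)(1) requires direct access to a national crisis hotline,
    # the New Mexico crisis and access line, and a crisis text line.
    CRISIS_RESOURCES = (
        "988 Suicide & Crisis Lifeline: call or text 988",
        "New Mexico Crisis and Access Line: 1-855-662-7474",
        "Crisis Text Line: text HOME to 741741",
    )

    @dataclass
    class CrisisInterrupt:
        notification: str  # must be prominent and give direct service access

    def screen_message(text: str,
                       detect_crisis: Callable[[str], bool]) -> Optional[CrisisInterrupt]:
        """On detection, immediately interrupt the conversation and surface the
        crisis notification. `detect_crisis` stands in for a best-practices
        classifier covering suicide risk, self-harm, and imminent violence."""
        if detect_crisis(text):
            return CrisisInterrupt(notification="\n".join(CRISIS_RESOURCES))
        return None  # no risk signal; normal generation proceeds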
Other · Chatbot
Section 5(A)-(B)
Plain Language
This provision plugs the Chatbot Safety Act into New Mexico's existing Unfair Practices Act enforcement framework. Violations are per se unfair or deceptive trade practices, giving the attorney general enforcement authority with delegation to district attorneys. This creates no new compliance obligation — it is an enforcement hook that activates existing remedies and penalties for violations of the substantive provisions in Sections 3 and 4.
Statutory Text
A violation of a provision of the Chatbot Safety Act by an operator shall constitute an unfair or deceptive trade practice pursuant to Section 57-12-3 NMSA 1978 and shall be subject to all remedies and penalties provided under the Unfair Practices Act. The attorney general shall have primary responsibility for enforcement of the Chatbot Safety Act pursuant to Section 57-12-15 NMSA 1978. The attorney general may delegate enforcement authority to district attorneys as provided in the Unfair Practices Act.
Other · Chatbot
Section 5(C)
Plain Language
Section 230 of the Communications Decency Act — which generally shields interactive computer services from liability for third-party content — may not be invoked as a defense in any action brought under the Chatbot Safety Act. This is a liability rule, not a compliance obligation. It ensures that companion AI product operators cannot use Section 230 to avoid liability for violating the act's substantive requirements. This provision is notable for its preemption risk — federal courts may find it preempted by Section 230 itself.
Statutory Text
Immunity under Section 230 of the federal Communications Decency Act of 1996, 47 U.S.C. Section 230, shall not be a defense to a cause of action brought for a violation of the Chatbot Safety Act.
Other · Chatbot
Section 6
Plain Language
This provision creates a product liability cause of action for injuries caused by companion AI products. Two triggers give rise to a product defect claim: (1) a physical, financial, or other legally cognizable injury proximately caused by any violation of the Chatbot Safety Act, or (2) an injury caused by a reasonably foreseeable harmful output resulting from negligent or defective design, training, or architecture. Notably, the second prong goes beyond the act's specific prohibitions — it creates a general product defect standard for companion AI products that could apply even to harms not covered by the act's specific provisions. This is a liability framework, not a new compliance obligation.
Statutory Text
For the purposes of any civil action, a physical, financial or other legally cognizable injury proximately caused by a violation of the Chatbot Safety Act, or by a reasonably foreseeable harmful output resulting from the negligent or defective design, training or architecture of a companion artificial intelligence product, shall be actionable as a product defect claim.