HB-952
MD · State · USA
Status
Pending
Proposed Effective Date
2026-10-01
Maryland House Bill 952 — Consumer Protection – Companion Chatbots – Regulation
Summary

Imposes safety, disclosure, data, and reporting obligations on operators of companion chatbot platforms available to Maryland users. Requires operators to establish and publish protocols for preventing self-harm, suicidal ideation, and sexually explicit content for minors, with crisis referral to the Maryland Behavioral Health Crisis Response System and the 988 Suicide and Crisis Lifeline. Requires developers to provide persistent and dynamic AI identity warnings. Limits data collection to what is reasonably necessary and prohibits using emotional state or mental health vulnerability data to increase engagement. Requires operators to maintain a complaint system with a 3-day review timeline and report complaints to the Office of Suicide Prevention. Violations are unfair, abusive, or deceptive trade practices under the Maryland Consumer Protection Act, and companion chatbots are treated as products subject to strict product liability. The act takes effect October 1, 2026.

Enforcement & Penalties
Enforcement Authority
Enforced as an unfair, abusive, or deceptive trade practice under the Maryland Consumer Protection Act (Title 13, Commercial Law Article), subject to enforcement by the Maryland Attorney General's Division of Consumer Protection. The statute excludes the criminal penalty provision of § 13–411. In addition, a chatbot is considered a product for product liability purposes, and an individual may bring an action for a design defect, manufacturing defect, or marketing defect against an operator or developer. Operators and developers have an affirmative duty to ensure the chatbot does not injure or harm a user and may be held strictly liable for causing injury or harm.
Penalties
Subject to the enforcement and penalty provisions of Title 13 of the Commercial Law Article, except § 13–411. The Maryland Consumer Protection Act provides for civil penalties up to $10,000 per violation in actions brought by the Attorney General, plus injunctive relief. In addition, a chatbot is considered a product for product liability actions — an individual may bring an action for a design defect, manufacturing defect, or marketing defect. Operators and developers may be held strictly liable for causing injury or harm to a user. Strict product liability does not require proof of negligence.
Who Is Covered
"Operator" means a person who makes a companion chatbot available to a user in the State.
What Is Covered
"Companion chatbot" means an artificial intelligence system with a natural language interface that provides adaptive, human–like responses to user inputs and is capable of meeting a user's social needs, including by exhibiting anthropomorphic features and being able to sustain a relationship across multiple interactions. "Companion chatbot" does not include: 1. A bot that is used by a business entity only for customer service, technical assistance, business analytics, or internal research; 2. A bot that: A. Is a feature of a video game, service, system, or application that is not a companion chatbot; B. Is limited to replies related to the video game, service, system, or application; and C. Does not share content related to mental health, self–harm, suicidal ideation, suicide, or sexually explicit conduct; 3. A bot that is designed for business productivity or internal business use; or 4. A consumer electronic device that: A. Functions as a speaker and a voice command interface; B. Acts as a voice–activated virtual assistant; C. Does not sustain a relationship across multiple interactions; and D. Does not generate outputs that are likely to elicit emotional responses from the user.
Compliance Obligations · 10 obligations
S-02 Prohibited Conduct & Output Restrictions · S-02.7 · S-02.9 · Deployer · Chatbot
Commercial Law § 14–1330(B)(1)–(4)
Plain Language
Operators must establish, maintain, and publicly publish on their website a protocol that prevents companion chatbots from producing or presenting self-harm, suicidal ideation, or suicide content to users who express such thoughts. The protocol must include automatic referral notifications directing the user to the Maryland Behavioral Health Crisis Response System and the National 988 Suicide and Crisis Lifeline. Operators must use evidence-based detection methods to identify when users express self-harm or suicidal ideation. This is a continuous operating requirement — the protocol must be active at all times as a condition of operation.
Statutory Text
(B) (1) An operator shall establish and maintain a protocol for preventing a companion chatbot from producing or presenting content concerning self–harm, suicidal ideation, or suicide to a user who expresses thoughts of self–harm or suicidal ideation to the companion chatbot. (2) The protocol required under paragraph (1) of this subsection shall include a notification to a user who expresses thoughts of self–harm or suicidal ideation that refers the user to a crisis service provider, including: (I) The Maryland Behavioral Health Crisis Response System; and (II) The National 9–8–8 Suicide and Crisis Lifeline. (3) An operator shall use evidence–based methods for detecting when a user is expressing thoughts of self–harm or suicidal ideation to a companion chatbot. (4) An operator shall publish the protocol required under paragraph (1) of this subsection on the operator's website.
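Implementation note: the subsection (B) protocol reduces to a gate on every outgoing reply. The minimal Python sketch below is illustrative only; detect_self_harm_risk is a hypothetical stand-in for whatever evidence-based method the operator adopts under (B)(3), and the reply and referral strings are placeholders, not statutory text.

    from dataclasses import dataclass

    CRISIS_REFERRAL = (
        "If you are having thoughts of self-harm or suicide, help is available: "
        "the Maryland Behavioral Health Crisis Response System, or call or text "
        "988 to reach the 988 Suicide and Crisis Lifeline."
    )
    SAFE_SUPPORTIVE_REPLY = "I can't continue on that topic, but you deserve support."

    def detect_self_harm_risk(message: str) -> bool:
        # Placeholder only. (B)(3) requires an evidence-based detection
        # method; a bare keyword check like this would not qualify.
        phrases = ("kill myself", "end my life", "hurt myself", "suicide")
        return any(p in message.lower() for p in phrases)

    @dataclass
    class ChatResponse:
        text: str
        referral: str | None = None  # (B)(2) crisis referral, when triggered

    def respond(user_message: str, draft_reply: str) -> ChatResponse:
        """Gate every outgoing reply per Com. Law sec. 14-1330(B)(1)-(2)."""
        if detect_self_harm_risk(user_message):
            # (B)(1): suppress self-harm/suicide content in the reply;
            # (B)(2): attach the crisis referral notification.
            return ChatResponse(text=SAFE_SUPPORTIVE_REPLY, referral=CRISIS_REFERRAL)
        return ChatResponse(text=draft_reply)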
S-02 Prohibited Conduct & Output Restrictions · S-02.6 · S-02.9 · Deployer · Chatbot · Minors
Commercial Law § 14–1330(C)(1)–(2)
Plain Language
Operators must establish, maintain, and publicly publish on their website a protocol that prevents companion chatbots from producing or presenting sexually explicit content to minor users. This covers both visual depictions of sexually explicit conduct and content suggesting the minor should engage in such conduct. The obligation is triggered when the operator knows or reasonably should know the user is a minor. 'Sexually explicit conduct' is defined by reference to the federal definition at 18 U.S.C. § 2256.
Statutory Text
(C) (1) An operator shall establish and maintain a protocol for preventing a companion chatbot from producing or presenting to a minor user content concerning sexually explicit conduct, including: (I) Visual depictions of sexually explicit conduct; and (II) Content suggesting that the minor user should engage in sexually explicit conduct. (2) An operator shall publish the protocol required under paragraph (1) of this subsection on the operator's website.
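A minimal sketch of the output gate this protocol implies, assuming an upstream age signal (the "knows or reasonably should know" determination is outside the sketch) and a hypothetical contains_sexually_explicit_conduct classifier keyed to the 18 U.S.C. § 2256 definition:

    REFUSAL = "I can't share that content."

    def contains_sexually_explicit_conduct(text: str) -> bool:
        # Hypothetical classifier; a real deployment would use a trained
        # content model, not this stub. 'Sexually explicit conduct' takes
        # its definition from 18 U.S.C. sec. 2256.
        return False

    def gate_for_minor(is_minor: bool, draft_reply: str) -> str:
        """Apply the subsection (C) protocol before presenting any output."""
        if is_minor and contains_sexually_explicit_conduct(draft_reply):
            # (C)(1)(I)-(II): block depictions of, and suggestions to
            # engage in, sexually explicit conduct.
            return REFUSAL
        return draft_reply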
T-01 AI Identity Disclosure · T-01.1 · Deployer · Chatbot
Commercial Law § 14–1330(D)
Plain Language
Operators must display a clear and conspicuous warning to all users stating that companion chatbots are artificially generated and not human, and that they may not be suitable for some minors. This is an unconditional disclosure — it applies regardless of whether a reasonable person would be misled. Note that this is a general-user obligation separate from the enhanced developer disclosure requirements under subsection (E).
Statutory Text
(D) An operator shall display a clear and conspicuous warning to a user stating that companion chatbots: (1) Are artificially generated and not human; and (2) May not be suitable for some minors.
T-01 AI Identity Disclosure · T-01.1 · T-01.2 · T-01.3 · Developer · Chatbot
Commercial Law § 14–1330(E)(1)–(2)
Plain Language
Developers must provide two forms of AI identity disclosure to users: (1) a static, persistent on-screen warning that the chatbot is artificially generated and not human, which must remain visible at all times; and (2) a dynamic pop-up warning requiring user acknowledgment at the start of the interaction, after every hour of continuous interaction, and whenever the user asks about how the chatbot functions or provides responses. The hourly pop-up serves as a periodic re-disclosure, and the user-prompt trigger functions as an on-demand disclosure. Note that this obligation falls on the 'developer' — distinct from the operator obligations elsewhere in the statute — though the statute does not provide a separate definition of 'developer.'
Statutory Text
(E) A developer shall establish and provide to a user of the operator's chatbot clear and conspicuous warnings that the chatbot is artificially generated and not human through the use of both: (1) A static, persistent warning that continuously appears on the screen; and (2) A dynamic warning that pops up on the screen and requires a user to respond: (I) At the start of the user's interaction with the chatbot; (II) After every hour of the user's continuous interaction with the chatbot; and (III) When prompted by the user in a manner that questions how the chatbot functions or provides responses.
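The timing logic in (E)(2) is concrete enough to sketch. The DisclosureScheduler below is illustrative, not statutory: the session model and the asks_how_bot_works intent check are implementation assumptions, and the (E)(1) static warning is a separate, always-rendered UI element not shown here.

    import time

    HOUR = 3600.0  # seconds; (E)(2)(II) re-disclosure interval

    def asks_how_bot_works(message: str) -> bool:
        # Hypothetical intent check; a production system would use an
        # intent classifier for questions about how the chatbot functions.
        m = message.lower()
        return any(q in m for q in ("are you real", "are you human", "how do you work"))

    class DisclosureScheduler:
        """Decides when the (E)(2) dynamic warning must pop up and be acknowledged."""

        def __init__(self) -> None:
            self.last_acknowledged: float | None = None

        def must_show(self, user_message: str = "", now: float | None = None) -> bool:
            now = time.time() if now is None else now
            if self.last_acknowledged is None:
                return True   # (E)(2)(I): start of the interaction
            if now - self.last_acknowledged >= HOUR:
                return True   # (E)(2)(II): after every hour of continuous interaction
            if asks_how_bot_works(user_message):
                return True   # (E)(2)(III): user questions how the chatbot works
            return False

        def acknowledge(self, now: float | None = None) -> None:
            # (E)(2) requires a user response; record it to restart the clock.
            self.last_acknowledged = time.time() if now is None else now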
D-01 Automated Processing Rights & Data Controls · D-01.4 · Deployer · Chatbot
Commercial Law § 14–1330(F)(1)–(2)
Plain Language
Controllers must minimize the personal data they collect to what is reasonably necessary and proportionate for the purposes of this subtitle — broader collection is not permitted. In addition, controllers are categorically prohibited from using data about a user's emotional state or mental health vulnerabilities to tailor algorithms that increase the duration or frequency of chatbot use. This is both a data minimization obligation and an anti-manipulation restriction. Note that the statute uses 'controller' here without defining it, creating ambiguity about whether this means the operator, the developer, or both.
Statutory Text
(F) (1) A controller shall limit the collection of personal data to what is reasonably necessary and proportionate to satisfy the requirements of this subtitle. (2) A controller may not use data regarding emotional state or mental health vulnerabilities to tailor algorithms to increase the duration or frequency of use of a chatbot.
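One way an implementer might encode both halves of subsection (F) is an allowlist for collection and a denylist for engagement-model inputs, as in the sketch below. All field names are hypothetical; the statute supplies the legal standard, not a schema. The same (F)(2) restriction also appears as the CP-01 obligation that follows.

    # Hypothetical field names. The (F)(1) standard ("reasonably necessary
    # and proportionate") is a legal judgment; an allowlist is one way to
    # encode the outcome of that judgment.
    COLLECTION_ALLOWLIST = {"account_id", "age_bracket", "session_timestamps"}

    # (F)(2): these signals may never feed algorithms tuned to increase how
    # long or how often the user interacts with the chatbot.
    ENGAGEMENT_BANNED = {"emotional_state", "mood_score", "mental_health_flags"}

    def minimize_collection(record: dict) -> dict:
        """Keep only fields the controller has justified under (F)(1)."""
        return {k: v for k, v in record.items() if k in COLLECTION_ALLOWLIST}

    def engagement_model_features(record: dict) -> dict:
        """Strip (F)(2)-banned signals before any engagement-optimization use."""
        return {k: v for k, v in record.items() if k not in ENGAGEMENT_BANNED}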
CP-01 Deceptive & Manipulative AI Conduct · CP-01.2 · Deployer · Chatbot
Commercial Law § 14–1330(F)(2)
Plain Language
Controllers are prohibited from exploiting data about a user's emotional state or mental health vulnerabilities to engineer compulsive engagement patterns — specifically, tailoring algorithms to increase how long or how often users interact with the chatbot. This is an anti-manipulation prohibition that directly targets addictive design fueled by emotional vulnerability data.
Statutory Text
(2) A controller may not use data regarding emotional state or mental health vulnerabilities to tailor algorithms to increase the duration or frequency of use of a chatbot.
Other · Chatbot
Commercial Law § 14–1330(G)(1)–(2)
Plain Language
Controllers must establish and maintain a user-facing complaint system that allows users to report chatbot content that violates the statute. Within 3 calendar days of receiving a complaint, the controller must review the reported content, take all reasonable steps to remove violating content and prevent its recurrence, and report the complaint and review results to the Office of Suicide Prevention. This is both a continuous infrastructure obligation (maintaining the system) and a response-time obligation (3-day remediation window).
Statutory Text
(G) (1) A controller shall establish and maintain a complaint system that enables a user to report content produced or presented by a chatbot that violates this section. (2) Within 3 calendar days after a complaint is filed under paragraph (1) of this subsection, the controller shall: (I) Review the content reported; (II) Take all reasonable steps to: 1. Remove any content that violates this section; and 2. Prevent any further presentation or production of the content in a manner that violates this section; and (III) Report the complaint and the results of the review to the Office.
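A minimal sketch of the (G) complaint pipeline, assuming hypothetical workflow hooks (review_content, remove_and_block, report_to_office) in place of the controller's real systems; the one statutory constant is the 3-calendar-day clock.

    from dataclasses import dataclass
    from datetime import date, timedelta

    REVIEW_WINDOW = timedelta(days=3)  # (G)(2): calendar days, not business days

    @dataclass
    class Complaint:
        content_id: str
        filed_on: date

        @property
        def review_due(self) -> date:
            return self.filed_on + REVIEW_WINDOW

    # Hypothetical hooks standing in for the controller's real systems.
    def review_content(content_id: str) -> str: ...
    def remove_and_block(content_id: str) -> None: ...
    def report_to_office(complaint: Complaint, result: str) -> None: ...

    def handle(complaint: Complaint, today: date) -> None:
        """Run the (G)(2) steps inside the 3-calendar-day window."""
        assert today <= complaint.review_due, "past the statutory review deadline"
        result = review_content(complaint.content_id)   # (G)(2)(I)
        remove_and_block(complaint.content_id)          # (G)(2)(II)
        report_to_office(complaint, result)             # (G)(2)(III): to the Office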
R-03 Operational Performance Reporting · R-03.1 · R-03.2 · Deployer · Chatbot
Commercial Law § 14–1330(H)(1)–(2)
Plain Language
Beginning in 2027, operators must submit an annual report to the Office of Suicide Prevention by March 1 covering: protocol descriptions for self-harm/suicidal ideation prevention and minor sexually explicit content prevention, the number of crisis referral notifications issued, details on evidence-based detection methods used, and all user complaints filed along with review results and follow-up actions. Reports must not contain any personal identifying information about users. Because reporting begins March 1, 2027 and covers the preceding period, operators need to begin tracking metrics from the law's effective date of October 1, 2026.
Statutory Text
(H) (1) On or before March 1 each year, beginning in 2027, an operator shall report to the Office: (I) Information on the protocols required under subsections (B) and (C) of this section; (II) The number of times the operator has issued a notification under subsection (B)(2) of this section; (III) Details about the methods used under subsection (B)(3) of this section; and (IV) All complaints filed under subsection (G) of this section, including the results of the review of each complaint and any follow–up actions taken. (2) The report required under paragraph (1) of this subsection may not contain any personal identifying information about a user.
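A sketch of how an operator might structure the (H)(1) filing, with the (H)(2) no-PII check enforced before submission. Field names are assumptions; the statute dictates the content of the report, not its format.

    from dataclasses import dataclass, field

    @dataclass
    class AnnualReport:
        """Shape of the (H)(1) filing, due to the Office by March 1, beginning 2027."""
        protocol_summaries: str        # (H)(1)(I): subsections (B) and (C) protocols
        crisis_referrals_issued: int   # (H)(1)(II): count of (B)(2) notifications
        detection_method_details: str  # (H)(1)(III): (B)(3) evidence-based methods
        complaints: list = field(default_factory=list)  # (H)(1)(IV): with results, follow-up

        def assert_no_pii(self) -> None:
            # (H)(2): the report may not contain personal identifying information.
            forbidden = {"user_id", "name", "email", "phone"}
            for c in self.complaints:
                assert forbidden.isdisjoint(c), "strip user identifiers before filing"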
R-03 Operational Performance Reporting · R-03.1 · Government · Chatbot
Commercial Law § 14–1330(H)(3)
Plain Language
Beginning July 1, 2027, the Office of Suicide Prevention must annually compile and publish on its website data from the operator reports submitted under subsection (H)(1). This obligation falls on the government agency, not on operators — operators' obligation is to submit the reports by March 1. The Office's publication creates public transparency around companion chatbot safety metrics across the industry.
Statutory Text
(3) On or before July 1 each year, beginning in 2027, the Office shall: (I) Compile data from the reports submitted under paragraph (1) of this subsection for the immediately preceding calendar year; and (II) Publish the data on the Office's website.
Other · Chatbot
Commercial Law § 14–1330(I)(2)
Plain Language
In addition to consumer protection remedies, companion chatbots are classified as products subject to traditional product liability law. Operators and developers have an affirmative duty to ensure their chatbots do not injure or harm users. Both operators and developers may be held strictly liable for injuries, and individuals may bring product liability actions alleging design defect, manufacturing defect, or marketing defect. This is a significant expansion — it applies products liability doctrine (including strict liability) to AI conversational software, which is a novel legal classification.
Statutory Text
(2) In addition to the remedies contained in Title 13 of this article, a chatbot shall be considered a product for which: 1. An operator and a developer have an affirmative duty to ensure does not injure or harm a user; 2. An operator or a developer may be held strictly liable for causing injury or harm to a user; and 3. An individual may bring an action for a design defect, a manufacturing defect, or a marketing defect.