HB-952
MD · State · USA
● Pending
Proposed Effective Date
2026-10-01
Maryland House Bill 952 — Consumer Protection – Companion Chatbots – Regulation
Summary

Imposes safety, disclosure, data, and reporting obligations on operators of companion chatbot platforms available to users in Maryland. Requires operators to establish and publish protocols preventing self-harm, suicidal ideation, and sexually explicit content (especially for minors), and to refer users expressing self-harm or suicidal ideation to crisis service providers. Mandates clear and conspicuous AI identity warnings through both static persistent on-screen labels and dynamic pop-up warnings at interaction start, hourly, and on user inquiry. Limits personal data collection to what is reasonably necessary and prohibits using emotional state or mental health vulnerability data to increase engagement. Requires operators to maintain a complaint system with 3-day review timelines. Annual reporting to the Office of Suicide Prevention begins March 1, 2027. Violations are unfair, abusive, or deceptive trade practices under the Maryland Consumer Protection Act, and chatbots are treated as products subject to strict product liability.

Enforcement & Penalties
Enforcement Authority
Violations are enforced under the Maryland Consumer Protection Act (Title 13 of the Commercial Law Article), except § 13–411. The Division of Consumer Protection within the Office of the Attorney General enforces unfair, abusive, or deceptive trade practice violations. The Office of Suicide Prevention receives annual reports and publishes compiled data but is not granted independent enforcement authority. In addition, a chatbot is considered a product: an individual may bring a product liability action for a design defect, manufacturing defect, or marketing defect, and operators and developers, who have an affirmative duty to ensure the chatbot does not injure or harm a user, may be held strictly liable.
Penalties
Subject to the enforcement and penalty provisions of the Maryland Consumer Protection Act (Title 13), except § 13–411. Under the MCPA, remedies include injunctive relief, restitution, and civil penalties of up to $10,000 per violation (up to $25,000 for repeat violations). In addition, the bill establishes that a chatbot is a product for product liability purposes: operators and developers may be held strictly liable for causing injury or harm to a user, and individuals may bring actions for a design defect, manufacturing defect, or marketing defect. Strict liability attaches without proof of negligence or fault; the claimant need only show that the defect caused injury or harm.
Who Is Covered
"OPERATOR" MEANS A PERSON WHO MAKES A COMPANION CHATBOT AVAILABLE TO A USER IN THE STATE.
What Is Covered
"COMPANION CHATBOT" MEANS AN ARTIFICIAL INTELLIGENCE SYSTEM WITH A NATURAL LANGUAGE INTERFACE THAT PROVIDES ADAPTIVE, HUMAN–LIKE RESPONSES TO USER INPUTS AND IS CAPABLE OF MEETING A USER'S SOCIAL NEEDS, INCLUDING BY EXHIBITING ANTHROPOMORPHIC FEATURES AND BEING ABLE TO SUSTAIN A RELATIONSHIP ACROSS MULTIPLE INTERACTIONS. "COMPANION CHATBOT" DOES NOT INCLUDE: 1. A BOT THAT IS USED BY A BUSINESS ENTITY ONLY FOR CUSTOMER SERVICE, TECHNICAL ASSISTANCE, BUSINESS ANALYTICS, OR INTERNAL RESEARCH; 2. A BOT THAT: A. IS A FEATURE OF A VIDEO GAME, SERVICE, SYSTEM, OR APPLICATION THAT IS NOT A COMPANION CHATBOT; B. IS LIMITED TO REPLIES RELATED TO THE VIDEO GAME, SERVICE, SYSTEM, OR APPLICATION; AND C. DOES NOT SHARE CONTENT RELATED TO MENTAL HEALTH, SELF–HARM, SUICIDAL IDEATION, SUICIDE, OR SEXUALLY EXPLICIT CONDUCT; OR 3. A BOT THAT IS DESIGNED FOR BUSINESS PRODUCTIVITY OR INTERNAL BUSINESS USE; OR 4. A CONSUMER ELECTRONIC DEVICE THAT: A. FUNCTIONS AS A SPEAKER AND A VOICE COMMAND INTERFACE; B. ACTS AS A VOICE–ACTIVATED VIRTUAL ASSISTANT; C. DOES NOT SUSTAIN A RELATIONSHIP ACROSS MULTIPLE INTERACTIONS; AND D. DOES NOT GENERATE OUTPUTS THAT ARE LIKELY TO ELICIT EMOTIONAL RESPONSES FROM THE USER.
Compliance Obligations · 13 obligations
S-02 Prohibited Conduct & Output Restrictions · S-02.7 · S-02.9 · Deployer · Chatbot
Commercial Law § 14–1330(B)(1)–(4)
Plain Language
Operators must establish and continuously maintain a protocol that prevents companion chatbots from producing or presenting self-harm, suicidal ideation, or suicide content when a user expresses such thoughts. The protocol must include automatic referral to crisis service providers, specifically the Maryland Behavioral Health Crisis Response System and the 988 Suicide and Crisis Lifeline. Operators must use evidence-based methods for detecting user expressions of self-harm or suicidal ideation. The protocol must also be published on the operator's website. This is an ongoing operating requirement: the protocol must remain in place for as long as the chatbot is available to users.
Statutory Text
(B) (1) AN OPERATOR SHALL ESTABLISH AND MAINTAIN A PROTOCOL FOR PREVENTING A COMPANION CHATBOT FROM PRODUCING OR PRESENTING CONTENT CONCERNING SELF–HARM, SUICIDAL IDEATION, OR SUICIDE TO A USER WHO EXPRESSES THOUGHTS OF SELF–HARM OR SUICIDAL IDEATION TO THE COMPANION CHATBOT.
(2) THE PROTOCOL REQUIRED UNDER PARAGRAPH (1) OF THIS SUBSECTION SHALL INCLUDE A NOTIFICATION TO A USER WHO EXPRESSES THOUGHTS OF SELF–HARM OR SUICIDAL IDEATION THAT REFERS THE USER TO A CRISIS SERVICE PROVIDER, INCLUDING: (I) THE MARYLAND BEHAVIORAL HEALTH CRISIS RESPONSE SYSTEM; AND (II) THE NATIONAL 9–8–8 SUICIDE AND CRISIS LIFELINE.
(3) AN OPERATOR SHALL USE EVIDENCE–BASED METHODS FOR DETECTING WHEN A USER IS EXPRESSING THOUGHTS OF SELF–HARM OR SUICIDAL IDEATION TO A COMPANION CHATBOT.
(4) AN OPERATOR SHALL PUBLISH THE PROTOCOL REQUIRED UNDER PARAGRAPH (1) OF THIS SUBSECTION ON THE OPERATOR'S WEBSITE.
S-04 AI Crisis Response Protocols · S-04.1 · S-04.2 · Deployer · Chatbot
Commercial Law § 14–1330(B)(1)–(3)
Plain Language
This card maps the same crisis detection and referral protocol obligation to the S-04 crisis response taxonomy. Operators must implement a protocol that detects self-harm and suicidal ideation using evidence-based methods and immediately refers users to the Maryland Behavioral Health Crisis Response System and the 988 Lifeline. The protocol must remain continuously active, and the detection methods must be evidence-based.
Statutory Text
(B) (1) AN OPERATOR SHALL ESTABLISH AND MAINTAIN A PROTOCOL FOR PREVENTING A COMPANION CHATBOT FROM PRODUCING OR PRESENTING CONTENT CONCERNING SELF–HARM, SUICIDAL IDEATION, OR SUICIDE TO A USER WHO EXPRESSES THOUGHTS OF SELF–HARM OR SUICIDAL IDEATION TO THE COMPANION CHATBOT.
(2) THE PROTOCOL REQUIRED UNDER PARAGRAPH (1) OF THIS SUBSECTION SHALL INCLUDE A NOTIFICATION TO A USER WHO EXPRESSES THOUGHTS OF SELF–HARM OR SUICIDAL IDEATION THAT REFERS THE USER TO A CRISIS SERVICE PROVIDER, INCLUDING: (I) THE MARYLAND BEHAVIORAL HEALTH CRISIS RESPONSE SYSTEM; AND (II) THE NATIONAL 9–8–8 SUICIDE AND CRISIS LIFELINE.
(3) AN OPERATOR SHALL USE EVIDENCE–BASED METHODS FOR DETECTING WHEN A USER IS EXPRESSING THOUGHTS OF SELF–HARM OR SUICIDAL IDEATION TO A COMPANION CHATBOT.
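The statute specifies the required behavior but not an implementation. As a rough illustration only, the subsection (B) duties can be read as a guard that runs before any reply is shown. In the sketch below, all names are invented, and the detector is a keyword stub standing in for whatever evidence-based method an operator actually adopts (a bare keyword match would not by itself satisfy (B)(3)):

```python
# Hypothetical sketch of a crisis guard shaped by § 14-1330(B). The statute
# does not prescribe an implementation; names and wording are illustrative.

CRISIS_REFERRAL = (
    "It sounds like you may be going through a difficult time. "
    "You can reach the Maryland Behavioral Health Crisis Response System, "
    "or call or text 988 to reach the Suicide and Crisis Lifeline."
)

# Placeholder detector. (B)(3) requires *evidence-based* methods, so a real
# system would use a validated screening approach, not this keyword stub.
def expresses_self_harm(message: str) -> bool:
    return any(kw in message.lower() for kw in ("kill myself", "end my life"))

def guard_reply(user_message: str, draft_reply: str) -> str:
    """Run before presenting any chatbot output to the user."""
    if expresses_self_harm(user_message):
        # (B)(1): do not present self-harm/suicide content;
        # (B)(2): refer the user to crisis service providers instead.
        return CRISIS_REFERRAL
    return draft_reply
```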
S-02 Prohibited Conduct & Output Restrictions · S-02.6 · Deployer · Chatbot · Minors
Commercial Law § 14–1330(C)(1)–(2)
Plain Language
Operators must establish and maintain a protocol preventing companion chatbots from producing or presenting sexually explicit content to minor users — including visual depictions of sexually explicit conduct and content suggesting minors should engage in such conduct. The protocol must be published on the operator's website. The 'minor user' trigger applies when the operator knows or reasonably should know the user is a minor, which is a broader standard than actual knowledge alone. 'Sexually explicit conduct' is defined by reference to the federal definition at 18 U.S.C. § 2256.
Statutory Text
(C) (1) AN OPERATOR SHALL ESTABLISH AND MAINTAIN A PROTOCOL FOR PREVENTING A COMPANION CHATBOT FROM PRODUCING OR PRESENTING TO A MINOR USER CONTENT CONCERNING SEXUALLY EXPLICIT CONDUCT, INCLUDING: (I) VISUAL DEPICTIONS OF SEXUALLY EXPLICIT CONDUCT; AND (II) CONTENT SUGGESTING THAT THE MINOR USER SHOULD ENGAGE IN SEXUALLY EXPLICIT CONDUCT.
(2) AN OPERATOR SHALL PUBLISH THE PROTOCOL REQUIRED UNDER PARAGRAPH (1) OF THIS SUBSECTION ON THE OPERATOR'S WEBSITE.
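The bill again leaves implementation open. A minimal sketch of the constructive-knowledge gate follows; the profile fields (declared_age, signals_suggest_minor) are hypothetical and stand in for whatever age signals an operator actually has:

```python
# Hypothetical sketch of the subsection (C) minor gate. The trigger is
# constructive knowledge ("knows or reasonably should know"), so signals
# beyond a self-declared birthdate matter. All names are illustrative.

from dataclasses import dataclass

@dataclass
class UserProfile:
    declared_age: int | None     # self-reported; may be absent or false
    signals_suggest_minor: bool  # e.g., school references, parental controls

def treat_as_minor(profile: UserProfile) -> bool:
    """Constructive knowledge: the operator knows OR reasonably should
    know the user is a minor, so indirect signals count too."""
    if profile.declared_age is not None and profile.declared_age < 18:
        return True
    return profile.signals_suggest_minor

def may_present(content_categories: set[str], profile: UserProfile) -> bool:
    """Return True if the content may be shown to this user under (C)(1)."""
    if treat_as_minor(profile):
        # (C)(1)(I)-(II): no sexually explicit depictions or suggestions.
        return "sexually_explicit" not in content_categories
    return True
```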
T-01 AI Identity Disclosure · T-01.1 · Deployer · Chatbot
Commercial Law § 14–1330(D)
Plain Language
Operators must display a clear and conspicuous warning to all users stating that companion chatbots are artificially generated and not human, and that they may not be suitable for some minors. This is an unconditional disclosure: it applies to every user regardless of whether a reasonable person would be misled. Note that this provision was amended to replace the original subsection (D) and is distinct from the more detailed developer warning obligations in subsection (E).
Statutory Text
(D) AN OPERATOR SHALL DISPLAY A CLEAR AND CONSPICUOUS WARNING TO A USER STATING THAT COMPANION CHATBOTS: (1) ARE ARTIFICIALLY GENERATED AND NOT HUMAN; AND (2) MAY NOT BE SUITABLE FOR SOME MINORS.
T-01 AI Identity Disclosure · T-01.1 · T-01.2 · T-01.3 · Developer · Chatbot
Commercial Law § 14–1330(E)(1)–(2)
Plain Language
Developers must implement two forms of AI identity disclosure for users of the operator's chatbot. First, a static, persistent warning must continuously appear on screen indicating the chatbot is AI-generated and not human. Second, a dynamic pop-up warning requiring user acknowledgment must appear: (1) at the start of each interaction, (2) after every hour of continuous use, and (3) whenever a user asks how the chatbot works or generates responses. The on-demand disclosure (responding when a user questions chatbot functionality) maps to T-01.3. The hourly pop-up maps to T-01.2 (periodic re-disclosure). This obligation is placed on the 'developer' — a term not defined in this section — rather than the 'operator.'
Statutory Text
(E) A DEVELOPER SHALL ESTABLISH AND PROVIDE TO A USER OF THE OPERATOR'S CHATBOT CLEAR AND CONSPICUOUS WARNINGS THAT THE CHATBOT IS ARTIFICIALLY GENERATED AND NOT HUMAN THROUGH THE USE OF BOTH:
(1) A STATIC, PERSISTENT WARNING THAT CONTINUOUSLY APPEARS ON THE SCREEN; AND
(2) A DYNAMIC WARNING THAT POPS UP ON THE SCREEN AND REQUIRES A USER TO RESPOND: (I) AT THE START OF THE USER'S INTERACTION WITH THE CHATBOT; (II) AFTER EVERY HOUR OF THE USER'S CONTINUOUS INTERACTION WITH THE CHATBOT; AND (III) WHEN PROMPTED BY THE USER IN A MANNER THAT QUESTIONS HOW THE CHATBOT FUNCTIONS OR PROVIDES RESPONSES.
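The cadence in (E)(2) is concrete enough to express as scheduling logic. A minimal sketch, assuming the UI layer separately renders the static persistent banner required by (E)(1); the class and method names are invented for illustration:

```python
# Hypothetical scheduler for the subsection (E)(2) pop-up warnings.
# The (E)(1) static banner is assumed to be handled by the UI layer.

import time

HOUR = 3600  # (E)(2)(II): re-warn after every hour of continuous interaction

class DisclosureSchedule:
    def __init__(self) -> None:
        self.last_popup: float | None = None  # None => session not started

    def popup_due(self, user_asked_how_it_works: bool,
                  now: float | None = None) -> bool:
        now = time.monotonic() if now is None else now
        if self.last_popup is None:        # (E)(2)(I): start of interaction
            return True
        if now - self.last_popup >= HOUR:  # (E)(2)(II): hourly
            return True
        return user_asked_how_it_works     # (E)(2)(III): on user inquiry

    def acknowledge(self, now: float | None = None) -> None:
        """Call once the user has actually responded to the pop-up."""
        self.last_popup = time.monotonic() if now is None else now
```

Because the statute requires the pop-up to make the user respond, acknowledge should fire only after a real user interaction with the warning, not on display.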
D-01 Automated Processing Rights & Data Controls · D-01.4 · Deployer · Chatbot
Commercial Law § 14–1330(F)(1)
Plain Language
Controllers must limit the collection of personal data to what is reasonably necessary and proportionate to satisfy the requirements of this subtitle. The bill uses 'controller' without defining it in this section; it is most plausibly the entity that controls data collection, which would typically be the operator. This is a data minimization obligation: operators cannot collect more personal data than needed to comply with the companion chatbot obligations. De-identified data and publicly available information are excluded from the definition of personal data.
Statutory Text
(F) (1) A CONTROLLER SHALL LIMIT THE COLLECTION OF PERSONAL DATA TO WHAT IS REASONABLY NECESSARY AND PROPORTIONATE TO SATISFY THE REQUIREMENTS OF THIS SUBTITLE.
CP-01 Deceptive & Manipulative AI Conduct · CP-01.1 · Deployer · Chatbot
Commercial Law § 14–1330(F)(2)
Plain Language
Controllers may not use data about a user's emotional state or mental health vulnerabilities to tailor algorithms that increase the duration or frequency of chatbot use. This is a prohibition on exploiting psychological vulnerability data for engagement optimization. It targets a specific form of manipulative design — using emotional and mental health signals to drive compulsive engagement — and applies regardless of whether the user is a minor or adult.
Statutory Text
(2) A CONTROLLER MAY NOT USE DATA REGARDING EMOTIONAL STATE OR MENTAL HEALTH VULNERABILITIES TO TAILOR ALGORITHMS TO INCREASE THE DURATION OR FREQUENCY OF USE OF A CHATBOT.
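The two subsection (F) obligations above pair naturally: (F)(1) limits what may be collected at all, and (F)(2) walls certain signals off from engagement tuning. A minimal sketch under assumed field names (the bill prescribes principles, not a schema):

```python
# Hypothetical sketch of the subsection (F) data controls. Field names
# are illustrative; the statute does not specify a data model.

# (F)(1) data minimization: collect only what compliance itself requires.
COLLECTION_ALLOWLIST = {
    "account_id",             # needed to operate the complaint system
    "crisis_referral_count",  # needed for subsection (H) reporting
    "complaint_records",      # needed for subsection (G) review
}

# (F)(2): these signals may never feed engagement optimization.
ENGAGEMENT_FORBIDDEN = {"emotional_state", "mental_health_vulnerability"}

def minimize(record: dict) -> dict:
    """Drop any field not on the allowlist before storage."""
    return {k: v for k, v in record.items() if k in COLLECTION_ALLOWLIST}

def check_engagement_features(feature_names: set[str]) -> None:
    """Guard for any model that tunes session duration or frequency."""
    banned = feature_names & ENGAGEMENT_FORBIDDEN
    if banned:
        raise ValueError(f"§ 14-1330(F)(2) forbids engagement use of: {banned}")
```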
Other · Deployer · Chatbot
Commercial Law § 14–1330(G)(1)–(2)
Plain Language
Controllers must establish and maintain a complaint system allowing users to report chatbot content that violates this section. Within 3 calendar days of a complaint, the controller must: review the reported content, take all reasonable steps to remove violating content and prevent its further production, and report the complaint and review results to the Office of Suicide Prevention. This creates both a user-facing intake mechanism and a rapid-response content remediation obligation with a hard 3-day deadline.
Statutory Text
(G) (1) A CONTROLLER SHALL ESTABLISH AND MAINTAIN A COMPLAINT SYSTEM THAT ENABLES A USER TO REPORT CONTENT PRODUCED OR PRESENTED BY A CHATBOT THAT VIOLATES THIS SECTION.
(2) WITHIN 3 CALENDAR DAYS AFTER A COMPLAINT IS FILED UNDER PARAGRAPH (1) OF THIS SUBSECTION, THE CONTROLLER SHALL: (I) REVIEW THE CONTENT REPORTED; (II) TAKE ALL REASONABLE STEPS TO: 1. REMOVE ANY CONTENT THAT VIOLATES THIS SECTION; AND 2. PREVENT ANY FURTHER PRESENTATION OR PRODUCTION OF THE CONTENT IN A MANNER THAT VIOLATES THIS SECTION; AND (III) REPORT THE COMPLAINT AND THE RESULTS OF THE REVIEW TO THE OFFICE.
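Because the 3-day clock runs in calendar days, the deadline is simple date arithmetic rather than a business-day calculation. A hypothetical intake record illustrating the required steps; all field names are invented:

```python
# Hypothetical complaint record for subsection (G). The deadline is
# date-based (3 calendar days from filing). Names are illustrative.

from dataclasses import dataclass, field
from datetime import date, timedelta

REVIEW_WINDOW = timedelta(days=3)  # (G)(2): 3 calendar days

@dataclass
class Complaint:
    complaint_id: str
    reported_content: str
    filed_on: date
    reviewed: bool = False                   # (G)(2)(I)
    removal_steps: list[str] = field(default_factory=list)  # (G)(2)(II)
    reported_to_office: bool = False         # (G)(2)(III): Office of
                                             # Suicide Prevention

    @property
    def review_deadline(self) -> date:
        return self.filed_on + REVIEW_WINDOW

    def is_overdue(self, today: date) -> bool:
        return today > self.review_deadline and not (
            self.reviewed and self.reported_to_office
        )
```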
R-03 Operational Performance Reporting · R-03.1 · R-03.2 · Deployer · Chatbot
Commercial Law § 14–1330(H)(1)–(2)
Plain Language
Beginning March 1, 2027, operators must annually report to the Office of Suicide Prevention: descriptions of the self-harm/suicide and sexually explicit content protocols (subsections (B) and (C)), the number of crisis referral notifications issued, details about the evidence-based detection methods used, and all complaints filed under the complaint system, including review results and follow-up actions. Reports must not contain any personal identifying information about users. Because reports cover the preceding calendar year and the law takes effect October 1, 2026, operators need to begin tracking these metrics from the effective date.
Statutory Text
(H) (1) ON OR BEFORE MARCH 1 EACH YEAR, BEGINNING IN 2027, AN OPERATOR SHALL REPORT TO THE OFFICE: (I) INFORMATION ON THE PROTOCOLS REQUIRED UNDER SUBSECTIONS (B) AND (C) OF THIS SECTION; (II) THE NUMBER OF TIMES THE OPERATOR HAS ISSUED A NOTIFICATION UNDER SUBSECTION (B)(2) OF THIS SECTION; AND (III) DETAILS ABOUT THE METHODS USED UNDER SUBSECTION (B)(3) OF THIS SECTION; AND (IV) ALL COMPLAINTS FILED UNDER SUBSECTION (G) OF THIS SECTION, INCLUDING THE RESULTS OF THE REVIEW OF EACH COMPLAINT AND ANY FOLLOW–UP ACTIONS TAKEN.
(2) THE REPORT REQUIRED UNDER PARAGRAPH (1) OF THIS SUBSECTION MAY NOT CONTAIN ANY PERSONAL IDENTIFYING INFORMATION ABOUT A USER.
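The (H)(1) contents map onto a simple record, and the (H)(2) prohibition suggests a pre-submission check. A sketch with illustrative field names; the PII guard below is deliberately coarse and not a substitute for real scrubbing:

```python
# Hypothetical shape of the subsection (H) annual report. The statute lists
# required contents but no format; every field name here is an assumption.

from dataclasses import dataclass

@dataclass
class AnnualReport:
    reporting_year: int               # preceding calendar year
    protocol_descriptions: str        # (H)(1)(I): subsections (B) and (C)
    crisis_notifications_issued: int  # (H)(1)(II)
    detection_method_details: str     # (H)(1)(III)
    complaints: list[dict]            # (H)(1)(IV): results + follow-ups

PII_FIELDS = {"name", "email", "phone", "address", "account_id", "user_id"}

def assert_no_pii(report: AnnualReport) -> None:
    """(H)(2): the report may not contain personal identifying information."""
    for complaint in report.complaints:
        leaked = set(complaint) & PII_FIELDS
        if leaked:
            raise ValueError(f"PII fields present in complaint record: {leaked}")
```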
R-03 Operational Performance Reporting · R-03.1 · Government · Chatbot
Commercial Law § 14–1330(H)(3)
Plain Language
The Office of Suicide Prevention must compile data from all operator reports for the preceding calendar year and publish the compiled data on its website by July 1 each year, beginning in 2027. This is a government agency obligation to aggregate and publish operator-submitted data — it does not create a compliance obligation for operators beyond what subsection (H)(1) already requires. Operators should be aware that their reported data will be made public in aggregated form.
Statutory Text
(3) ON OR BEFORE JULY 1 EACH YEAR, BEGINNING IN 2027, THE OFFICE SHALL: (I) COMPILE DATA FROM THE REPORTS SUBMITTED UNDER PARAGRAPH (1) OF THIS SUBSECTION FOR THE IMMEDIATELY PRECEDING CALENDAR YEAR; AND (II) PUBLISH THE DATA ON THE OFFICE'S WEBSITE.
Other · Chatbot
Commercial Law § 14–1330(I)(1)
Plain Language
Any violation of § 14–1330 constitutes an unfair, abusive, or deceptive trade practice under Maryland's Consumer Protection Act (Title 13) and is subject to the enforcement and penalty provisions of that title, except § 13–411 (the criminal penalty provision). This is an enforcement hook: it does not create a new compliance obligation but channels enforcement of the companion chatbot obligations through the existing MCPA framework.
Statutory Text
(I) (1) A VIOLATION OF THIS SECTION IS: (I) AN UNFAIR, ABUSIVE, OR DECEPTIVE TRADE PRACTICE WITHIN THE MEANING OF TITLE 13 OF THIS ARTICLE; AND (II) SUBJECT TO THE ENFORCEMENT AND PENALTY PROVISIONS CONTAINED IN TITLE 13 OF THIS ARTICLE, EXCEPT § 13–411 OF THIS ARTICLE.
Other · Chatbot
Commercial Law § 14–1330(I)(2)
Plain Language
In addition to MCPA remedies, a chatbot is deemed a 'product' for product liability purposes. Operators and developers have an affirmative duty to ensure the chatbot does not injure or harm users, and may be held strictly liable for such injury or harm. Individuals may bring product liability actions alleging design defect, manufacturing defect, or marketing defect. This is a significant expansion — it applies traditional product liability doctrine (including strict liability without proof of negligence) to an AI chatbot. For product counsel, this means companion chatbot operators and developers face the same liability exposure as manufacturers of physical consumer products.
Statutory Text
(2) IN ADDITION TO THE REMEDIES CONTAINED IN TITLE 13 OF THIS ARTICLE, A CHATBOT SHALL BE CONSIDERED A PRODUCT FOR WHICH:
1. AN OPERATOR AND A DEVELOPER HAVE AN AFFIRMATIVE DUTY TO ENSURE DOES NOT INJURE OR HARM A USER;
2. AN OPERATOR OR A DEVELOPER MAY BE HELD STRICTLY LIABLE FOR CAUSING INJURY OR HARM TO A USER; AND
3. AN INDIVIDUAL MAY BRING AN ACTION FOR A DESIGN DEFECT, A MANUFACTURING DEFECT, OR A MARKETING DEFECT.
Other · Chatbot
Commercial Law § 13–301(14)(xlix)
Plain Language
This amendment adds § 14–1330 (the companion chatbot regulation section) to the list of statutes in § 13–301(14) whose violation constitutes an unfair, abusive, or deceptive trade practice under the Maryland Consumer Protection Act. This is a mechanical cross-reference amendment that enables MCPA enforcement of the companion chatbot obligations — it creates no new compliance obligation.
Statutory Text
(XLIX) SECTION 14–1330 OF THIS ARTICLE; OR