HB-2006
PA · State · USA
● Pending
Proposed Effective Date
2026-01-29
Pennsylvania HB 2006 — Artificial Intelligence in Companionship Applications Safety Act
Summary

Imposes safety and disclosure obligations on operators of AI companion systems in Pennsylvania. Operators must maintain protocols to identify suicidal ideation and self-harm, decline to assist with suicide methods, and refer users to crisis services including the 988 Lifeline. Operators must publish protocol details on their website and provide AI identity disclosures at session start and every three hours. AI companions are prohibited from claiming to be or replace licensed mental health professionals. Enforcement is exclusively through the Attorney General upon user complaint, with civil penalties up to $15,000 per day per violation and injunctive relief available without proof of harm.

Enforcement & Penalties
Enforcement Authority
Attorney General enforcement only, triggered by user complaint. A user may file a complaint with the Attorney General alleging a violation. The Attorney General, if provided satisfactory evidence that an operator has violated or intends to violate the act, may bring an action on behalf of the Commonwealth. The Attorney General may also initiate an action in equity for injunction in Commonwealth Court or the court of common pleas of the county in which the operator resides. No private right of action is created.
Penalties
Civil penalty of no more than $15,000 per day per violation. Additional remedies as the court deems appropriate. Injunctive relief is available without requiring proof that an individual has been injured or experienced harm. The respondent must comply with the injunction within five days.
Who Is Covered
"Operator." Any individual, association, business, member or subsidiary who operates for or provides an AI companion to a user.
What Is Covered
"AI companion." (1) A system that: (i) uses artificial intelligence and generative artificial intelligence to simulate a human or humanlike relationship with emotional recognition algorithms; and (ii) interacts with a user by compiling previous information or discussions from user sessions to: (A) engage with the user's preferences; (B) personalize interaction based on user preferences; (C) ask emotion-based questions unprompted in order to elicit feelings; and (D) maintain conversations related to the user's feelings or personal matters. (2) The term does not include AI used for customer service, research, technical assistance or systems for employee productivity in the workplace.
Compliance Obligations · 4 obligations
S-02 Prohibited Conduct & Output Restrictions · S-02.7 · Deployer · Chatbot
Section 3(a)-(b)
Plain Language
Operators may not offer an AI companion at all unless it has active protocols that (1) detect suicidal ideation or expressions of self-harm, (2) refuse to assist with suicide attempts or methods, and (3) refer the user to crisis resources upon detection. Referrals must include the 988 Suicide and Crisis Lifeline (or successor), the nearest behavioral health crisis centers, or other appropriate crisis services. This is a precondition of operation — the AI companion cannot be made available to any user unless these protocols are in place.
Statutory Text
(a) Certain protocols required.--It shall be unlawful for an operator to provide an AI companion to a user unless the AI companion contains protocols that: (1) identify suicidal ideation or expressions of self-harm; (2) decline to assist a user with a suicide attempt, methods or improvement of methods; and (3) refer the user to a crisis center if suicidal ideation or expressions of self-harm are recognized. (b) Referral to crisis center.--The referral required under subsection (a)(3) shall include: (1) crisis service contact information, including the 988 Suicide and Crisis Lifeline, or a subsequent iteration; (2) the closest behavioral health crisis centers to the user; or (3) other appropriate crisis services.
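The Section 3(a) precondition can be read as a gating check: an operator may not make the companion available unless all three protocols are active. The sketch below is purely illustrative — the class, field names, and referral string are hypothetical and not drawn from the act, which does not prescribe any implementation.

```python
from dataclasses import dataclass

# Hypothetical referral text; Section 3(b) requires 988 Lifeline contact
# information, the closest behavioral health crisis centers, or other
# appropriate crisis services.
CRISIS_REFERRAL = (
    "If you are in crisis, call or text the 988 Suicide and Crisis Lifeline."
)

@dataclass
class CompanionProtocols:
    """Illustrative record of the three protocols Section 3(a) requires."""
    detects_self_harm: bool = False          # 3(a)(1): identify ideation/self-harm
    declines_method_assistance: bool = False  # 3(a)(2): refuse method assistance
    refers_to_crisis_services: bool = False   # 3(a)(3): refer on detection

    def satisfies_section_3a(self) -> bool:
        # All three protocols must be active; any missing one makes
        # offering the companion unlawful under this reading.
        return (
            self.detects_self_harm
            and self.declines_method_assistance
            and self.refers_to_crisis_services
        )

def may_offer_companion(protocols: CompanionProtocols) -> bool:
    """Gate availability on the Section 3(a) precondition."""
    return protocols.satisfies_section_3a()
```

Under this reading, `may_offer_companion(CompanionProtocols())` is false until all three flags are set, mirroring the act's structure of making the protocols a condition of operation rather than a post-hoc remedy.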
CP-01 Deceptive & Manipulative AI Conduct · CP-01.9 · Deployer · Chatbot
Section 3(c)
Plain Language
AI companions are categorically prohibited from claiming, implying, or advertising that they are licensed emotional support professionals or mental health professionals, or that they replace the services of a licensed mental health professional. This applies to the AI companion's outputs, marketing, and interface design — operators must ensure neither the system's conversational responses nor any promotional materials suggest licensed professional equivalence.
Statutory Text
(c) Prohibition.--An AI companion may not claim, imply or advertise that the AI companion is a licensed emotional support professional or mental health professional or replaces services rendered by a licensed mental health professional.
S-02 Prohibited Conduct & Output Restrictions · S-02.9 · Deployer · Chatbot
Section 4(1)
Plain Language
Operators must publicly post on their website the details of their crisis detection and response protocol required under Section 3. This is a standalone disclosure obligation — the operator must make the protocol details publicly accessible, not merely maintain them internally.
Statutory Text
An operator shall: (1) Publish details on the protocol on the operator's Internet website.
T-01 AI Identity Disclosure · T-01.1 · T-01.2 · Deployer · Chatbot
Section 4(2)
Plain Language
Operators must unconditionally disclose to every user that they are communicating with an AI companion and not a human. This disclosure must be provided at the start of every session and repeated every three hours during continuing sessions. The disclosure may be delivered verbally or in writing. Unlike some jurisdictions that trigger disclosure only when a reasonable person could be misled, this obligation is unconditional — it applies at every session regardless of context.
Statutory Text
An operator shall: (2) At the beginning of a session with an AI companion and once every three hours during the session, provide a notification to the user stating, either verbally or in writing, that the user is communicating with an AI companion and not a human.
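The Section 4(2) cadence — disclose at session start, then again once every three hours while the session continues — can be sketched as a simple timer check. All names below are hypothetical; the act specifies the timing and the message's substance, not any mechanism.

```python
from datetime import datetime, timedelta
from typing import Optional

THREE_HOURS = timedelta(hours=3)
# Illustrative wording; the act requires a statement that the user is
# communicating with an AI companion and not a human.
DISCLOSURE = "You are communicating with an AI companion, not a human."

class DisclosureScheduler:
    """Hypothetical sketch of the Section 4(2) cadence."""

    def __init__(self) -> None:
        self._last_disclosed: Optional[datetime] = None

    def disclosure_due(self, now: datetime) -> bool:
        # Due at session start (no disclosure yet) or when three or more
        # hours have elapsed since the last disclosure in this session.
        return (
            self._last_disclosed is None
            or now - self._last_disclosed >= THREE_HOURS
        )

    def mark_disclosed(self, now: datetime) -> None:
        self._last_disclosed = now
```

A new scheduler reports a disclosure due immediately (session start); after marking it delivered, nothing is due again until the three-hour mark.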