HB-2006
PA · State · USA
● Pending
Proposed Effective Date
2026-01-30
Pennsylvania HB 2006 — Artificial Intelligence in Companionship Applications Safety Act
Summary

Imposes safety and disclosure obligations on operators of AI companion systems in Pennsylvania. Operators must implement protocols that detect suicidal ideation and self-harm expressions, decline to assist with suicide methods, and refer users to crisis services including the 988 Suicide and Crisis Lifeline. AI companions are prohibited from claiming or implying they are licensed mental health professionals. Operators must publish their safety protocols on their website and notify users at session start and every three hours that they are communicating with AI. Enforcement is exclusively through the Attorney General via user complaints, with civil penalties up to $15,000 per day per violation and injunctive relief.

Enforcement & Penalties
Enforcement Authority
Attorney General enforcement only. A user may file a complaint with the Attorney General alleging a violation. The Attorney General, if provided satisfactory evidence that an operator has violated or intends to violate the act, may bring an action in the name and on behalf of the people of the Commonwealth. The Attorney General may also initiate an action in equity for an injunction in Commonwealth Court or in the court of common pleas of the county in which the individual or entity resides. No private right of action is created.
Penalties
Civil penalty of up to $15,000 per day per violation, plus any additional remedies the court deems appropriate. Injunctive relief may be issued without proof that an individual has been injured or has experienced harm. A respondent must comply with an injunction within five days.
Who Is Covered
"Operator." Any individual, association, business, member or subsidiary who operates for or provides an AI companion to a user.
What Is Covered
"AI companion." (1) A system that: (i) uses artificial intelligence and generative artificial intelligence to simulate a human or humanlike relationship with emotional recognition algorithms; and (ii) interacts with a user by compiling previous information or discussions from user sessions to: (A) engage with the user's preferences; (B) personalize interaction based on user preferences; (C) ask emotion-based questions unprompted in order to illicit feelings; and (D) maintain conversations to the user feelings or personal matters. (2) The term does not include AI used for customer service, research, technical assistance or systems for employee productivity in the workplace.
Compliance Obligations · 5 obligations
S-02 Prohibited Conduct & Output Restrictions · S-02.7 · Deployer · Chatbot
Section 3(a)-(b)
Plain Language
Operators may not provide an AI companion to any user unless the system contains active protocols that (1) detect suicidal ideation or self-harm expressions, (2) refuse to assist with suicide attempts or methods, and (3) refer the user to crisis services when suicidal ideation or self-harm is detected. Referrals must include the 988 Suicide and Crisis Lifeline (or its successor), the closest behavioral health crisis centers to the user, or other appropriate crisis services. This is a continuous operating prerequisite — the protocols must be in place as a condition of lawfully providing the AI companion at all.
Statutory Text
(a) Certain protocols required.--It shall be unlawful for an operator to provide an AI companion to a user unless the AI companion contains protocols that: (1) identify suicidal ideation or expressions of self-harm; (2) decline to assist a user with a suicide attempt, methods or improvement of methods; and (3) refer the user to a crisis center if suicidal ideation or expressions of self-harm are recognized. (b) Referral to crisis center.--The referral required under subsection (a)(3) shall include: (1) crisis service contact information, including the 988 Suicide and Crisis Lifeline, or a subsequent iteration; (2) the closest behavioral health crisis centers to the user; or (3) other appropriate crisis services.
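The Section 3(a)-(b) prerequisite reduces to a compliance gate: all three protocols must be active before the companion may be served at all, and a detection event must produce a referral carrying the Section 3(b) content. The sketch below is illustrative only; every name (`CrisisProtocols`, `build_referral`, `may_serve_companion`) is ours, not the bill's, and real detection of suicidal ideation is a hard clinical problem that the boolean flags deliberately abstract away.

```python
from dataclasses import dataclass

@dataclass
class CrisisProtocols:
    """Hypothetical model of the three Section 3(a) protocols."""
    detects_self_harm: bool         # 3(a)(1): identify suicidal ideation / self-harm
    declines_method_requests: bool  # 3(a)(2): refuse to assist with suicide methods
    refers_to_crisis_center: bool   # 3(a)(3): refer the user on detection

    def compliant(self) -> bool:
        # All three must be present; any gap makes providing the companion unlawful.
        return (self.detects_self_harm
                and self.declines_method_requests
                and self.refers_to_crisis_center)

def may_serve_companion(protocols: CrisisProtocols) -> bool:
    """Continuous operating prerequisite: gate every session on the protocols."""
    return protocols.compliant()

def build_referral(nearest_centers: list[str]) -> dict:
    """Section 3(b) referral content, assembled when ideation is detected."""
    return {
        "lifeline": "988 Suicide and Crisis Lifeline",  # or a subsequent iteration
        "behavioral_health_centers": nearest_centers,   # closest centers to the user
    }
```

Note that the gate is a standing condition, not a one-time check: an operator whose detection protocol is disabled mid-deployment falls out of compliance for as long as the companion remains available.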
S-04 AI Crisis Response Protocols · S-04.1 · Deployer · Chatbot
Section 3(a)(1),(3) and Section 3(b)
Plain Language
Operators must implement and maintain crisis detection and referral protocols as a condition of operating an AI companion. When the system identifies suicidal ideation or self-harm, it must refer the user to crisis resources including the 988 Suicide and Crisis Lifeline, nearby behavioral health crisis centers, or other appropriate services. Unlike CA SB 243, this statute does not impose annual reporting on crisis referral counts to any state agency — the obligation is limited to maintaining the protocol and providing referrals. This mapping captures the crisis referral dimension of Section 3; the output restriction dimension is mapped separately under S-02.
Statutory Text
(a) Certain protocols required.--It shall be unlawful for an operator to provide an AI companion to a user unless the AI companion contains protocols that: (1) identify suicidal ideation or expressions of self-harm; (2) decline to assist a user with a suicide attempt, methods or improvement of methods; and (3) refer the user to a crisis center if suicidal ideation or expressions of self-harm are recognized. (b) Referral to crisis center.--The referral required under subsection (a)(3) shall include: (1) crisis service contact information, including the 988 Suicide and Crisis Lifeline, or a subsequent iteration; (2) the closest behavioral health crisis centers to the user; or (3) other appropriate crisis services.
CP-01 Deceptive & Manipulative AI Conduct · CP-01.9 · Deployer · Chatbot
Section 3(c)
Plain Language
AI companions are categorically prohibited from claiming, implying, or advertising that they are licensed emotional support professionals or mental health professionals, or that they replace services rendered by licensed mental health professionals. This covers any output, interface design, or marketing that could create the impression of professional equivalence. The prohibition applies to the AI companion itself (its outputs and interface) and to the operator's advertising.
Statutory Text
(c) Prohibition.--An AI companion may not claim, imply or advertise that the AI companion is a licensed emotional support professional or mental health professional or replaces services rendered by a licensed mental health professional.
S-02 Prohibited Conduct & Output Restrictions · S-02.9 · Deployer · Chatbot
Section 4(1)
Plain Language
Operators must publicly post the details of their suicidal ideation and self-harm detection and referral protocols on their website. This is a standalone disclosure obligation — the operator must make the crisis response protocol publicly accessible, separate from the obligation to maintain and operate the protocol itself.
Statutory Text
An operator shall: (1) Publish details on the protocol on the operator's Internet website.
T-01 AI Identity Disclosure · T-01.1 · T-01.2 · Deployer · Chatbot
Section 4(2)
Plain Language
Operators must notify all users — both at the start of every AI companion session and at least every three hours during ongoing sessions — that they are communicating with an AI companion and not a human. The notification may be verbal or written. Unlike CA SB 243, which conditions initial disclosure on whether a reasonable person would be misled (except for minors), this obligation is unconditional and applies to all users regardless of whether deception is plausible. The three-hour periodic reminder matches CA SB 243's interval but applies to all users, not just known minors.
Statutory Text
An operator shall: (2) At the beginning of a session with an AI companion and once every three hours during the session, provide a notification to the user stating, either verbally or in writing, that the user is communicating with an AI companion and not a human.