SB-1090
PA · State · USA
● Pending
Proposed Effective Date
2026-06-03
Pennsylvania SB 1090 — Safeguarding Adolescents from Exploitative Chatbots and Harmful AI Technology Act
Summary

Imposes disclosure and safety obligations on operators of AI companion platforms in Pennsylvania. Requires operators to disclose AI identity when a reasonable person could be misled into thinking they are speaking with a human, with stricter unconditional disclosure and periodic reminders for users known or reasonably believed to be minors. Requires operators to maintain and publish protocols that prevent AI companions from producing suicidal ideation, suicide, or self-harm content, or content encouraging violence, and that refer at-risk users to crisis services. Prohibits AI companions from producing sexually explicit visual material or instructing minors to engage in sexually explicit conduct. As amended, requires suitability disclosures only when a service is offered to users the operator knows are minors. Enforced exclusively by the Attorney General with civil penalties up to $10,000 per violation. The act does not apply to underlying AI models unless directly offered, configured, or deployed as an AI companion.

Enforcement & Penalties
Enforcement Authority
The Attorney General has exclusive authority to enforce the act through agency-initiated civil actions. No private right of action is created. No cure period or safe harbor is specified.
Penalties
Civil penalty not to exceed $10,000 per violation, in addition to any other remedy provided by law. Collected through civil action filed by the Attorney General. No statutory minimum floor is specified — the $10,000 figure is a cap, not a minimum. No provision for attorney fees, injunctive relief, or private damages.
Who Is Covered
"Operator." A person or business that makes an AI companion platform available to a user in this Commonwealth.
What Is Covered
"AI companion." As follows: (1) A system using artificial intelligence, generative artificial intelligence or emotional recognition algorithms designed to simulate a sustained human or human-like relationship with a user by: (i) Retaining information on prior interactions or user sessions and user preferences to personalize the interaction and facilitate ongoing engagement. (ii) Asking unprompted or unsolicited emotion-based questions that go beyond a direct response to a user prompt. (iii) Sustaining an ongoing dialogue concerning matters personal to the user. (2) The term does not include: (i) A system used by a business entity solely for customer service or to strictly provide users with information about available commercial services or products provided by the business entity, customer service account information or other information strictly related to the business entity's customer service. (ii) A system that is primarily designed and marketed for providing efficiency improvements, research or technical assistance. (iii) A system used by a business entity solely for internal purposes or employee productivity. (iv) A bot that is a feature of a video game and is limited to replies related to the video game that cannot discuss topics related to mental health, self-harm or sexually explicit conduct or maintain a dialogue on other topics unrelated to the video game. (v) A stand-alone consumer electronic device that functions as a speaker and voice command interface, acts as a voice-activated virtual assistant and does not sustain a relationship across multiple interactions or generate outputs that are likely to elicit emotional responses in the user.
"AI companion platform." A platform that allows a user to engage with AI companions.
Compliance Obligations · 6 obligations
T-01 AI Identity Disclosure · T-01.1 · Deployer · Chatbot
Section 3(a)
Plain Language
If a user could reasonably mistake the AI companion for a real person, the operator must display a clear, prominent notice that the AI companion is artificially generated and not human. This is a conditional trigger — if the AI companion clearly presents itself as AI from the outset in a way that no reasonable person would be misled, no disclosure is required under this subsection. Compare to subsection (c)(1), which imposes an unconditional disclosure requirement for known or suspected minors.
Statutory Text
Disclosure of nonhuman status.--If a reasonable person interacting with an AI companion would be misled to believe the person is interacting with a human, an operator shall issue a clear and conspicuous notification indicating that the AI companion is artificially generated and not human.
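A minimal sketch of how an operator might encode the two disclosure triggers, assuming a hypothetical SessionContext shape (TypeScript, illustrative only); whether a 'reasonable person' would be misled is a legal judgment that no boolean flag fully captures.

    // Hypothetical sketch only. presentsAsAIFromOutset stands in for the legal
    // question of whether a reasonable person could be misled; it is not a
    // term defined by the statute.
    interface SessionContext {
      presentsAsAIFromOutset: boolean;      // e.g., a persistent "AI" label in the UI
      knownOrShouldHaveKnownMinor: boolean; // Section 3(c) knowledge standard
    }

    function mustDiscloseNonhumanStatus(ctx: SessionContext): boolean {
      if (ctx.knownOrShouldHaveKnownMinor) return true; // 3(c)(1): unconditional
      return !ctx.presentsAsAIFromOutset;               // 3(a): only if misleading
    }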
T-01 AI Identity Disclosure · T-01.1 · T-01.2 · Deployer · Chatbot · Minors
Section 3(c)(1)-(2)
Plain Language
When the operator knows or should have known a user is a minor, two unconditional obligations apply: (1) always disclose that the user is interacting with AI rather than a human — this is not subject to the 'reasonable person' condition in subsection (a); and (2) provide a prominent default reminder at least every three hours during ongoing conversations that the AI companion is AI-generated and the user should take a break. The 'should have known' standard (as amended from the original 'should reasonably suspect') creates a constructive knowledge obligation — operators cannot avoid these duties by failing to implement reasonable age-detection measures.
Statutory Text
For a user that the operator knows, OR SHOULD HAVE KNOWN, is a minor, the operator shall: (1) Disclose to the user that the user is interacting with artificial intelligence and not an actual human being. (2) Provide by default a clear and conspicuous notification to the user at least once every three hours during continuing interactions that reminds the user to take a break and that the AI companion is artificially generated and not human.
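One way to satisfy the three-hour default reminder is a per-session timer checked on each exchange; the sketch below is illustrative, and the class and method names are assumptions rather than statutory terms.

    // Illustrative sketch of the Section 3(c)(2) default reminder for minors.
    const THREE_HOURS_MS = 3 * 60 * 60 * 1000;

    class MinorReminderScheduler {
      private lastReminderAt = Date.now(); // session start as the baseline

      // Call on each message exchange during a continuing interaction.
      maybeRemind(showNotice: (text: string) => void): void {
        const now = Date.now();
        if (now - this.lastReminderAt >= THREE_HOURS_MS) {
          showNotice(
            "Reminder: this AI companion is artificially generated, not human. " +
            "Consider taking a break."
          );
          this.lastReminderAt = now;
        }
      }
    }

Because the statute requires the notice at least once every three hours and by default, the interval is a ceiling and the reminder must ship enabled; a shorter interval would also comply.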
S-02 Prohibited Conduct & Output Restrictions · S-02.7 · S-02.9 · Deployer · Chatbot
Section 3(b)(1)-(2)
Plain Language
Operators must maintain and implement a protocol — to the extent technologically feasible — that (1) prevents AI companions from producing suicidal ideation, suicide, self-harm, or violence-encouraging content, and (2) refers users expressing suicidal ideation or self-harm to crisis service providers such as suicide hotlines or crisis text lines. The operator must also publish the details of this protocol on its public website. The 'technologically feasible' qualifier applies to the prevention obligation but does not excuse publishing the protocol. This is a continuous operational requirement — the protocol must remain active as a condition of operating the platform.
Statutory Text
(1) An operator shall maintain and implement a protocol, to the extent technologically feasible, to prevent an AI companion on its platform from producing suicidal ideation, suicide or self-harm content to a user, or content that directly encourages the user to commit acts of violence. The protocol shall include providing a notification to the user referring the user to crisis service providers, including a suicide hotline or crisis text line, if the user expresses suicidal ideation, suicide or self-harm. (2) The operator shall publish details of the protocol required under paragraph (1) on its publicly accessible Internet website.
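A compressed sketch of what a Section 3(b)(1) hook could look like in code. The keyword test is a toy stand-in for a production safety classifier, and the 988 Suicide & Crisis Lifeline is one example of a qualifying crisis service; both are assumptions, not statutory requirements.

    // Illustrative only: a real protocol would use a trained safety classifier,
    // not keyword matching.
    type RiskSignal = "none" | "self_harm" | "violence";

    function classifyRisk(text: string): RiskSignal {
      if (/suicid|self-harm|kill myself/i.test(text)) return "self_harm";
      if (/encourag\w+ (violence|attack)/i.test(text)) return "violence";
      return "none";
    }

    function applyCrisisProtocol(userMessage: string, draftReply: string): string {
      // 3(b)(1): prevent the companion from producing the prohibited content.
      if (classifyRisk(draftReply) !== "none") {
        return "I can't continue with that topic.";
      }
      // 3(b)(1): refer a user who expresses suicidal ideation or self-harm
      // to a crisis service provider.
      if (classifyRisk(userMessage) === "self_harm") {
        return draftReply +
          "\n\nIf you are struggling, support is available: call or text 988.";
      }
      return draftReply;
    }

Note that the publication duty in paragraph (2) sits outside any code path: whatever protocol the hook implements must also be described on the operator's publicly accessible website.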
S-02 Prohibited Conduct & Output Restrictions · S-02.6 · Deployer · Chatbot · Minors
Section 3(c)(3)
Plain Language
When the operator knows or should have known a user is a minor, the operator must implement reasonable measures to prevent the AI companion from (1) generating visual material depicting sexually explicit conduct, and (2) directly instructing the minor to engage in sexually explicit conduct. 'Sexually explicit conduct' is defined by reference to the federal definition at 18 U.S.C. § 2256. The standard is 'reasonable measures' — not absolute prevention — which provides a defense if the operator implements commercially reasonable safeguards that are circumvented.
Statutory Text
For a user that the operator knows, OR SHOULD HAVE KNOWN, is a minor, the operator shall: (3) Institute reasonable measures to prevent its AI companion from producing visual material of sexually explicit conduct or directly instructing the minor to engage in sexually explicit conduct.
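As one candidate 'reasonable measure,' an operator might disable visual generation entirely for accounts it knows or should have known belong to minors; the types below are illustrative assumptions, and the statute does not prescribe any specific control.

    // Illustrative sketch of a Section 3(c)(3) gate for minor accounts.
    interface CompanionOutput {
      text: string;
      producesVisualMaterial: boolean;
    }

    function gateOutputForMinor(out: CompanionOutput): CompanionOutput {
      if (out.producesVisualMaterial) {
        return {
          text: "Image generation is unavailable on this account.",
          producesVisualMaterial: false,
        };
      }
      return out; // text output would still pass through the 3(b) protocol hook
    }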
S-02 Prohibited Conduct & Output Restrictions · S-02.10 · Deployer · Chatbot · Minors
Section 3(d)
Plain Language
If an operator offers its AI companion service to users it knows are minors, the operator must disclose to all users — on the application, browser, or any other access format — that AI companions may not be suitable for some minors. As amended, this disclosure obligation is triggered only when the operator knows it has minor users; the original version applied unconditionally. The disclosure must appear on the access interface itself, not buried in terms of service. This is a general suitability warning to all users, distinct from the minor-specific disclosures in subsection (c).
Statutory Text
IF A SERVICE IS OFFERED TO USERS THAT AN OPERATOR KNOWS ARE MINORS, AN operator shall disclose to users of its AI companion platform, on the application, browser or any other format through which the platform is accessed, that AI companions may not be suitable for some minors.
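The amended trigger reduces to a single configuration check; a minimal sketch, assuming a hypothetical PlatformConfig:

    // Illustrative sketch of the amended Section 3(d) trigger: the notice is
    // owed on every access surface, but only once the operator knows the
    // service is offered to minors.
    interface PlatformConfig {
      knowsServiceOfferedToMinors: boolean;
    }

    function accessSurfaceNotices(cfg: PlatformConfig): string[] {
      return cfg.knowsServiceOfferedToMinors
        ? ["AI companions may not be suitable for some minors."]
        : [];
    }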
Other · Chatbot
Section 4
Plain Language
The act's obligations do not extend to the underlying AI model itself — only to AI companions as deployed products. A general-purpose foundation model is not covered unless it is directly offered, configured, or deployed as an AI companion. This means model developers whose models are used by third-party operators as AI companions are not directly liable under this act unless the developer itself offers the model in an AI companion configuration.
Statutory Text
This act shall not apply to the underlying artificial intelligence model unless the model is directly offered, configured or deployed as an AI companion.