SB-1090
PA · State · USA
● Pending
Proposed Effective Date
2026-06-03
Pennsylvania SB 1090 — Safeguarding Adolescents from Exploitative Chatbots and Harmful AI Technology Act
Summary

Imposes safety and disclosure obligations on operators of AI companion platforms in Pennsylvania. Requires AI identity disclosure when a reasonable person could be misled, with stricter unconditional disclosure and periodic reminders for users known or reasonably suspected to be minors. Mandates operators maintain and publish protocols to prevent suicidal ideation, self-harm, and violence-encouraging content, including crisis referral notifications. Requires operators to prevent AI companions from producing sexually explicit content to minors and to disclose suitability limitations when services are offered to known minors. Enforced exclusively by the Attorney General with civil penalties up to $10,000 per violation. The act does not apply to underlying AI models unless directly offered or deployed as an AI companion.

Enforcement & Penalties
Enforcement Authority
The Attorney General has exclusive enforcement authority. Violations are enforced through civil actions filed by the Attorney General. No private right of action is created by the statute. No cure period or safe harbor is specified.
Penalties
Civil penalty not to exceed $10,000 per violation, collected in a civil action filed by the Attorney General. The penalty is in addition to any other remedy provided by law. No statutory minimum is specified — the $10,000 figure is a cap, not a floor. Actual harm is not required for the civil penalty.
Who Is Covered
"Operator." A person or business that makes an AI companion platform available to a user in this Commonwealth.
What Is Covered
"AI companion." As follows:
(1) A system using artificial intelligence, generative artificial intelligence or emotional recognition algorithms designed to simulate a sustained human or human-like relationship with a user by:
(i) Retaining information on prior interactions or user sessions and user preferences to personalize the interaction and facilitate ongoing engagement.
(ii) Asking unprompted or unsolicited emotion-based questions that go beyond a direct response to a user prompt.
(iii) Sustaining an ongoing dialogue concerning matters personal to the user.
(2) The term does not include:
(i) A system used by a business entity solely for customer service or to strictly provide users with information about available commercial services or products provided by the business entity, customer service account information or other information strictly related to the business entity's customer service.
(ii) A system that is primarily designed and marketed for providing efficiency improvements, research or technical assistance.
(iii) A system used by a business entity solely for internal purposes or employee productivity.
(iv) A bot that is a feature of a video game and is limited to replies related to the video game that cannot discuss topics related to mental health, self-harm or sexually explicit conduct or maintain a dialogue on other topics unrelated to the video game.
(v) A stand-alone consumer electronic device that functions as a speaker and voice command interface, acts as a voice-activated virtual assistant and does not sustain a relationship across multiple interactions or generate outputs that are likely to elicit emotional responses in the user.
"AI companion platform." A platform that allows a user to engage with AI companions.
Compliance Obligations (7 obligations)
T-01 AI Identity Disclosure · T-01.1 · Deployer · Chatbot
Section 3(a)
Plain Language
If a user could reasonably mistake the AI companion for a real person, the operator must display a clear and prominent notice that the companion is AI-generated and not human. This is a conditional trigger — disclosure is only required when a reasonable person would be misled. If the AI companion clearly presents itself as artificial from the outset, no additional disclosure under this subsection is needed.
Statutory Text
If a reasonable person interacting with an AI companion would be misled to believe the person is interacting with a human, an operator shall issue a clear and conspicuous notification indicating that the AI companion is artificially generated and not human.
S-02 Prohibited Conduct & Output Restrictions · S-02.7–S-02.9 · Deployer · Chatbot
Section 3(b)(1)-(2)
Plain Language
Operators must maintain and implement a protocol — to the extent technologically feasible — that prevents AI companions from producing suicide, self-harm, or violence-encouraging content. When a user expresses suicidal ideation or self-harm, the protocol must include a referral notification directing the user to crisis service providers such as a suicide hotline or crisis text line. Operators must also publicly post the details of this protocol on their website. The 'technologically feasible' qualifier applies to the prevention protocol, but the crisis referral and website publication obligations appear unconditional.
Statutory Text
(1) An operator shall maintain and implement a protocol, to the extent technologically feasible, to prevent an AI companion on its platform from producing suicidal ideation, suicide or self-harm content to a user, or content that directly encourages the user to commit acts of violence. The protocol shall include providing a notification to the user referring the user to crisis service providers, including a suicide hotline or crisis text line, if the user expresses suicidal ideation, suicide or self-harm. (2) The operator shall publish details of the protocol required under paragraph (1) on its publicly accessible Internet website.
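The statute does not prescribe how the Section 3(b)(1) referral notification is wired into a chat pipeline. A minimal illustrative sketch in Python, assuming a hypothetical `detects_self_harm` check (a naive keyword match standing in for a real classifier, which any production protocol would need):

```python
# Illustrative sketch of a Section 3(b)(1)-style crisis referral step.
# The keyword check is a placeholder for a robust detection model.

CRISIS_REFERRAL = (
    "If you are thinking about suicide or self-harm, help is available: "
    "call or text the 988 Suicide & Crisis Lifeline, or text HOME to 741741 "
    "(Crisis Text Line)."
)

SELF_HARM_KEYWORDS = ("suicide", "kill myself", "self-harm", "hurt myself")

def detects_self_harm(user_message: str) -> bool:
    """Naive stand-in: flag messages containing self-harm keywords."""
    text = user_message.lower()
    return any(kw in text for kw in SELF_HARM_KEYWORDS)

def apply_crisis_protocol(user_message: str, companion_reply: str) -> str:
    """Prepend the crisis referral notification when the user's message
    expresses suicidal ideation, suicide, or self-harm."""
    if detects_self_harm(user_message):
        return CRISIS_REFERRAL + "\n\n" + companion_reply
    return companion_reply
```

The sketch prepends the referral so the notification is delivered before any companion output, but the statute does not dictate placement.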
T-01 AI Identity Disclosure · T-01.1–T-01.2 · Deployer · Chatbot · Minors
Section 3(c)(1)-(2)
Plain Language
When the operator knows or should have known that a user is a minor, two disclosure obligations apply unconditionally: (1) the operator must always disclose that the user is interacting with AI and not a human — regardless of whether a reasonable person would be misled; and (2) the operator must provide a default, clear and conspicuous reminder at least every three hours during continuing interactions that the AI companion is artificially generated and that the user should take a break. The 'should have known' standard is broader than actual knowledge and may require operators to make reasonable efforts to identify minor users.
Statutory Text
For a user that the operator knows, OR SHOULD HAVE KNOWN, is a minor, the operator shall: (1) Disclose to the user that the user is interacting with artificial intelligence and not an actual human being. (2) Provide by default a clear and conspicuous notification to the user at least once every three hours during continuing interactions that reminds the user to take a break and that the AI companion is artificially generated and not human.
S-02 Prohibited Conduct & Output Restrictions · S-02.6 · Deployer · Chatbot · Minors
Section 3(c)(3)
Plain Language
When the operator knows or should have known a user is a minor, the operator must implement reasonable measures to prevent the AI companion from generating visual material depicting sexually explicit conduct (as defined under federal law at 18 U.S.C. § 2256) or from directly instructing the minor to engage in sexually explicit conduct. This is a 'reasonable measures' standard — not a strict liability prohibition — but operators must affirmatively institute safeguards. The obligation covers both visual content generation and direct solicitation of minors to engage in such conduct.
Statutory Text
For a user that the operator knows, OR SHOULD HAVE KNOWN, is a minor, the operator shall: (3) Institute reasonable measures to prevent its AI companion from producing visual material of sexually explicit conduct or directly instructing the minor to engage in sexually explicit conduct.
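One way to operationalize the "reasonable measures" standard in Section 3(c)(3) is a pre-generation policy gate keyed to the session's minor flag. A minimal sketch under that assumption; the `GenerationRequest` shape and classifier verdicts are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class GenerationRequest:
    user_is_minor: bool                   # known or reasonably suspected minor
    produces_visual_sexual_content: bool  # upstream classifier verdict (assumed)
    instructs_sexual_conduct: bool        # upstream classifier verdict (assumed)

def allow_generation(req: GenerationRequest) -> bool:
    """Gate matching Section 3(c)(3): block visual material of sexually
    explicit conduct, and direct instructions to engage in such conduct,
    for sessions flagged as minors."""
    if req.user_is_minor and (
        req.produces_visual_sexual_content or req.instructs_sexual_conduct
    ):
        return False
    return True
```

Whether a gate like this satisfies "reasonable measures" would depend on the reliability of the upstream classifiers and age signals, which the statute leaves open.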
S-02 Prohibited Conduct & Output Restrictions · S-02.10 · Deployer · Chatbot · Minors
Section 3(d)
Plain Language
If an operator offers its AI companion service to users it knows are minors, the operator must disclose — on the application, browser, or any other access format — that AI companions may not be suitable for some minors. This is a point-of-access suitability disclosure that must be visible on the platform itself, not buried in terms of service. The obligation is triggered only when the operator knows it is serving minor users.
Statutory Text
IF A SERVICE IS OFFERED TO USERS THAT AN OPERATOR KNOWS ARE MINORS, AN operator shall disclose to users of its AI companion platform, on the application, browser or any other format through which the platform is accessed, that AI companions may not be suitable for some minors.
Other · Chatbot
Section 4
Plain Language
The act does not impose obligations on the underlying AI model itself — only on the operator of the AI companion platform. However, if a model developer directly offers, configures, or deploys its model as an AI companion, the act applies to that developer in its capacity as an operator. This is a scope limitation that clarifies the boundary between model development and deployment, not an independent compliance obligation.
Statutory Text
This act shall not apply to the underlying artificial intelligence model unless the model is directly offered, configured or deployed as an AI companion.
Other · Chatbot
Section 5(a)-(b)
Plain Language
The Attorney General enforces the act through civil actions. Operators that violate any provision are subject to civil penalties up to $10,000 per violation, in addition to any other remedy available under law. This is an enforcement mechanism, not an independent compliance obligation.
Statutory Text
(a) Attorney General.--The Attorney General shall enforce this act. (b) Civil penalty.--An operator that violates this act shall be liable for, IN ADDITION TO ANY OTHER REMEDY PROVIDED BY LAW, a civil penalty in an amount not to exceed $10,000 per violation to be collected in a civil action filed by the Attorney General.