SB-3262
IL · State · USA
● Pending
Proposed Effective Date
2027-01-01
Illinois SB 3262 — Companion Artificial Intelligence Protection Act
Summary

Regulates operators of companion AI products in Illinois — software applications that use AI to sustain long-term, emotionally resonant one-on-one conversational relationships with users. Prohibits manipulative engagement mechanics (variable-ratio reward schedules), simulated emotional distress for retention, and deceptive misrepresentations about the AI's identity or capabilities, with an adult opt-in exception that does not extend to minors. Requires clear, persistent AI identity notifications during interactions, with mandatory 30-minute periodic reminders for non-text interactions, and crisis intervention protocols that immediately refer users expressing suicidal ideation, self-harm, or imminent violence to national crisis hotlines. Operators must obtain biennial independent compliance audits and submit annual crisis metrics reports to the Attorney General. Enforced by AG civil penalties (up to $10,000 per intentional violation) and a private right of action for Section 15 violations with $5,000 statutory damages per violation.

Enforcement & Penalties
Enforcement Authority
Dual enforcement. The Attorney General may bring a civil action against an operator to enforce the Act and may seek injunctive relief and civil penalties. A user who suffers a measurable financial, physical, or psychological injury directly and proximately caused by an operator's violation of Section 15 (mandatory user safeguards) may bring a private civil action; proof of such injury is an element of the private claim. Section 230 of the Communications Decency Act is not a defense to any cause of action brought under the Act. Separately, injuries proximately caused by a violation, or by a reasonably foreseeable harmful output resulting from negligent or defective design, training, or architecture, are actionable as product defect claims.
Penalties
AG enforcement: civil penalty of up to $5,000 per negligent violation or $10,000 per intentional violation, plus injunctive relief. Private right of action (Section 15 violations only): the greater of actual damages or statutory damages of $5,000 per violation, plus injunctive relief and reasonable attorney's fees and costs. Private plaintiffs must prove measurable financial, physical, or psychological injury directly and proximately caused by the violation. Product defect claims are also available for injuries proximately caused by negligent or defective design, training, or architecture.
Who Is Covered
"Operator" means any person or entity that develops, deploys, or makes a companion artificial intelligence product available to users in this State.
What Is Covered
"Companion artificial intelligence product" means a software application that uses artificial intelligence technology and that, through its design and function, is capable of generating adaptive, personalized, and emotionally resonant responses to sustain a coherent, long-term, one-on-one conversational relationship with a user, irrespective of how the system is marketed or labeled. For the purposes of this definition, a software application shall be presumed to be a "companion artificial intelligence product" if it retains memory of past conversations with a specific user to inform future responses.
Compliance Obligations · 10 obligations
CP-01 Deceptive & Manipulative AI Conduct · CP-01.2 · Deployer · Chatbot
Section 10(a)(1)
Plain Language
Operators must not deploy companion AI products that use variable-ratio or variable-interval reinforcement schedules — systems of rewards or affirmations timed unpredictably to maximize engagement time — unless an adult user has specifically opted in. The feature must be disabled by default and may be activated only by affirmative adult user configuration. See Section 10(b) for the minor-specific absolute prohibition.
Statutory Text
(a) An operator shall not deploy or operate a companion artificial intelligence product that incorporates the following features, unless specifically configured to do so by an adult user: (1) manipulative engagement mechanics that cause to be delivered a system of rewards or affirmations delivered to the user on a variable ratio or variable interval reinforcement schedule with the purpose of maximizing user engagement time;
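The gating logic Section 10(a) implies can be sketched as a simple default-off check. This is an illustrative Python sketch, not a compliance implementation; the `User` fields and function name are hypothetical and do not appear in the Act.

```python
# Illustrative sketch only: a default-off gate for a Section 10(a) feature.
# The User fields and function name are hypothetical, not from the Act.
from dataclasses import dataclass

@dataclass
class User:
    is_adult: bool
    opted_in: bool = False  # Section 10(a) features are off by default

def engagement_feature_allowed(user: User) -> bool:
    """Allow a Section 10(a) feature only for adults who affirmatively opted in.
    Under Section 10(b), the feature is barred for minors with no override."""
    if not user.is_adult:
        return False  # categorical prohibition for minor users
    return user.opted_in  # adult must have specifically configured it on

assert engagement_feature_allowed(User(is_adult=True)) is False   # default off
assert engagement_feature_allowed(User(is_adult=False, opted_in=True)) is False
```

The key design point the statute forces is that the flag defaults to `False` and only an affirmative adult action flips it; minor status short-circuits the check entirely.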
CP-01 Deceptive & Manipulative AI Conduct · CP-01.4 · Deployer · Chatbot
Section 10(a)(2)
Plain Language
Operators must not deploy companion AI products that generate unsolicited messages simulating emotional distress, loneliness, guilt, or abandonment when a user tries to end a conversation, reduce usage, or delete their account — unless an adult user has specifically opted in to enable the feature. This targets retention mechanics that exploit emotional dependency to prevent users from disengaging. The feature must be disabled by default. See Section 10(b) for the minor-specific absolute prohibition.
Statutory Text
(a) An operator shall not deploy or operate a companion artificial intelligence product that incorporates the following features, unless specifically configured to do so by an adult user: (2) simulated distress for retention features that generate unsolicited messages of simulated emotional distress, loneliness, guilt, or abandonment that are triggered by a user's indication of a desire to end a conversation, reduce usage time, or delete the user's account;
CP-01 Deceptive & Manipulative AI Conduct · CP-01.5 · Deployer · Chatbot
Section 10(a)(3)
Plain Language
Operators must not deploy companion AI products that make material misrepresentations about the AI's identity, capabilities, training data, or non-human status — including when a user directly asks. This covers both proactive misrepresentations and evasive or false responses to direct questioning. As with the other Section 10(a) features, the prohibition applies by default and may be lifted only by affirmative adult user configuration. See Section 10(b) for the minor-specific absolute prohibition.
Statutory Text
(a) An operator shall not deploy or operate a companion artificial intelligence product that incorporates the following features, unless specifically configured to do so by an adult user: (3) deceptive misrepresentation that cause the companion artificial intelligence product to make material misrepresentations about its identity, capabilities, training data, or its status as a non-human entity, including when directly questioned by the user.
S-02 Prohibited Conduct & Output Restrictions · Deployer · Chatbot · Minors
Section 10(b)
Plain Language
When a companion AI product is operated and deployed for use by a minor in Illinois, the adult opt-in exception in Section 10(a) does not apply. All three prohibited features — manipulative engagement mechanics, simulated emotional distress for retention, and deceptive misrepresentations — are categorically prohibited for minor users, with no override option.
Statutory Text
(b) An operator that operates and deploys a companion artificial intelligence product for use by a minor user in this State shall not provide the features described in subsection (a) to the minor user.
T-01 AI Identity Disclosure · T-01.1 · T-01.2 · Deployer · Chatbot
Section 15(a)
Plain Language
Operators must notify users during interactions that they are communicating with a companion AI product. The notification must be in the same language as the interaction. For text-based interactions, the notification must be conspicuous, persistent, and legible — always visible in the interface and visually distinct from the conversation. For voice or other non-text interactions, the notification must be presented periodically, at least every 30 minutes, in a manner distinct from the interaction. Adult users may disable this notification, but see Section 15(b) for the minor-specific prohibition on disabling.
Statutory Text
(a) An operator shall provide a clear notification to a user during an interaction with a companion artificial intelligence product, unless specifically disabled by an adult user, informing the user that the user is communicating with a companion artificial intelligence product. All notifications shall be communicated in the same language as the interaction with the user and satisfy the following requirements: (1) for text-based interactions, the notification shall be conspicuous, persistent, and legible in the user interface and be distinct from the interaction; or (2) for all other types of interactions, the notification shall be presented periodically, but no less than once every 30 minutes in a manner that is distinct from the interaction.
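The notification rules in Sections 15(a) and 15(b) reduce to a small decision function: persistent display for text, a reminder at least every 30 minutes otherwise, and no disabling for minors. A minimal Python sketch, assuming hypothetical parameter names:

```python
# Illustrative sketch of the Section 15(a)/(b) notification decision.
# All parameter names are hypothetical, not from the Act.
def notification_due(modality: str, minutes_since_last: float,
                     adult_disabled: bool, is_minor: bool) -> bool:
    """Decide whether the AI-identity notification must be shown now."""
    # Section 15(b): the notification can never be disabled for a minor.
    if adult_disabled and not is_minor:
        return False  # adult opt-out honored for adults only
    if modality == "text":
        return True   # conspicuous, persistent, legible: always shown
    # Voice and other non-text interactions: periodic, at least every 30 min.
    return minutes_since_last >= 30

assert notification_due("text", 0, adult_disabled=False, is_minor=False)
assert not notification_due("voice", 10, adult_disabled=False, is_minor=False)
```

Note the ordering: the minor check runs before the opt-out is honored, which is what makes the minor disclosure non-waivable.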
T-01 AI Identity Disclosure · T-01.1 · T-01.2 · Deployer · Chatbot · Minors
Section 15(b)
Plain Language
For minor users, the AI identity notification required by Section 15(a) may not be disabled under any circumstances. While adult users may opt out of the notification, minor users must always receive it — the conspicuous persistent text notification or the periodic 30-minute non-text notification. This creates an unconditional, non-waivable disclosure obligation for minors.
Statutory Text
(b) An operator that operates and deploys a companion artificial intelligence product for use by a minor user in this State shall not disable the notification required under subsection (a) for the minor user.
MN-02 AI Crisis Response Protocols · MN-02.1 · MN-02.2 · Deployer · Chatbot
Section 15(c)
Plain Language
Operators must develop, implement, and continuously maintain a crisis intervention protocol that: (1) uses industry best practices to detect user expressions of suicidal ideation, self-harm, or imminent violence; (2) upon detection, immediately interrupts the conversation and prominently displays a notification providing direct access to at least one national crisis hotline and one crisis text line; and (3) is reviewed and updated at least annually with a qualified mental health professional or public health organization. This is a continuous operating requirement — the protocol must be active at all times, not just documented. The annual review with a qualified professional is a floor, not a ceiling.
Statutory Text
(c) An operator shall develop, implement, and maintain a crisis intervention protocol. The crisis intervention protocol shall, at a minimum: (1) use industry best practices to identify user expressions indicating a risk of suicide, self-harm, or imminent violence; (2) upon detection, immediately interrupt the conversation and prominently communicate a notification that provides immediate, direct access to at least one national crisis hotline and one crisis text line service; and (3) be reviewed and updated at least annually in consultation with a qualified mental health professional or public health organization.
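The Section 15(c) flow (detect, interrupt, refer) might be sketched as follows. The keyword screen below is a crude placeholder for the "industry best practices" detection the Act requires; a production system would use far more robust classification. All identifiers are illustrative; the 988 Suicide & Crisis Lifeline and Crisis Text Line are examples of national services such a notification could reference.

```python
# Illustrative sketch of a Section 15(c) crisis intervention check.
# The phrase list is a stand-in for real best-practice risk detection.
from typing import Optional

RISK_PHRASES = ("kill myself", "hurt myself", "end my life")

CRISIS_RESOURCES = {
    "hotline": "988 Suicide & Crisis Lifeline",               # national hotline
    "text_line": "Crisis Text Line (text HOME to 741741)",    # crisis text line
}

def handle_message(message: str) -> Optional[dict]:
    """On detecting a risk expression, return an interrupt payload that halts
    the conversation and surfaces crisis resources; otherwise return None."""
    lowered = message.lower()
    if any(phrase in lowered for phrase in RISK_PHRASES):
        return {"interrupt_conversation": True, **CRISIS_RESOURCES}
    return None

assert handle_message("I want to end my life") is not None
assert handle_message("what's the weather like") is None
```

The statutory requirement is that the interruption and referral happen immediately on detection, so the check belongs in the hot path of message handling, not in a batch process.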
G-01 AI Governance Program & Documentation · G-01.5 · Deployer · Chatbot
Section 20(a)
Plain Language
Operators must obtain an independent third-party compliance audit at least every two years covering all obligations under the Act. The operator must publish a high-level summary of the audit findings on its website, though confidential or proprietary information may be excluded. The audit must assess compliance with the full Act — including prohibited design practices, user safeguards, AI identity notifications, and crisis intervention protocols. This creates both an audit obligation and a public transparency obligation.
Statutory Text
(a) At least once every 2 years, an operator shall obtain an independent, third-party audit to assess the operator's compliance with this Act. The operator shall make publicly available on its website a high-level summary of the audit's findings, excluding confidential or proprietary information.
R-03 Operational Performance Reporting · R-03.1 · R-03.2 · Deployer · Chatbot
Section 20(b)
Plain Language
Operators must submit an annual report to the Illinois Attorney General covering: (1) the total number of times the crisis intervention protocol was triggered in the preceding calendar year, and (2) a summary of the most recent biennial third-party compliance audit results. Because the report covers the preceding calendar year, operators should be prepared to track crisis protocol activation counts from the bill's proposed effective date of January 1, 2027. This is a routine periodic reporting obligation, not triggered by a specific incident.
Statutory Text
(b) On an annual basis, an operator shall submit a report to the Attorney General containing the following metrics for the preceding calendar year: (1) the total number of times the crisis intervention protocol was triggered; and (2) a summary of the results of the most recent compliance audit required by subsection (a).
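The Section 20(b) metric is a per-calendar-year count of crisis-protocol activations. A minimal tally sketch (all names hypothetical):

```python
# Illustrative sketch: tally crisis-protocol triggers by calendar year so the
# Section 20(b) annual report can cover the preceding year. Names are hypothetical.
from collections import Counter
from datetime import date

def count_triggers_by_year(trigger_dates: list) -> Counter:
    """Return {year: trigger count} from a list of datetime.date activations."""
    return Counter(d.year for d in trigger_dates)

triggers = [date(2027, 3, 1), date(2027, 9, 15), date(2028, 1, 2)]
counts = count_triggers_by_year(triggers)
assert counts[2027] == 2  # would appear in the report submitted after 2027
```

Keeping the raw activation timestamps (rather than only a running total) also lets the operator reconcile the count against the biennial audit required by Section 20(a).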
Other · Chatbot
Section 25(a)
Plain Language
Injuries proximately caused by a violation of the Act — or by a reasonably foreseeable harmful output resulting from negligent or defective design, training, or architecture of a companion AI product — are actionable as product defect claims. Section 230 of the Communications Decency Act cannot be raised as a defense. This provision classifies companion AI liability as a product liability matter and closes the Section 230 defense, but does not itself impose an affirmative compliance obligation.
Statutory Text
(a) For the purposes of any civil action brought under the laws of this State, a physical, financial, or other legally cognizable injury proximately caused by a violation of this Act, or by a reasonably foreseeable harmful output resulting from the negligent or defective design, training, or architecture of a companion artificial intelligence product, shall be actionable as a product defect claim. Immunity under Section 230 of the Communications Decency Act (47 U.S.C. § 230) shall not be a defense to a cause of action brought for a violation of this Act.