SB-3262
IL · State · USA
● Pending
Proposed Effective Date
2027-01-01
Illinois SB 3262 — Companion Artificial Intelligence Protection Act
Summary

Creates the Companion Artificial Intelligence Protection Act, imposing safety, transparency, and design obligations on operators of companion AI products in Illinois. Prohibits manipulative engagement mechanics (variable-ratio or variable-interval reward schedules), simulated emotional distress for retention, and deceptive misrepresentations about the AI's identity or capabilities, with a complete ban on these features for minors and an opt-in exception for adults. Requires operators to provide clear AI identity notifications during interactions and to develop, implement, and maintain a crisis intervention protocol that detects suicidal ideation, self-harm, or imminent violence and immediately connects users to crisis services. Mandates biennial independent third-party compliance audits with public summary disclosure, and annual reporting to the Attorney General on crisis protocol activations and audit results. Enforceable by the Attorney General with civil penalties of up to $10,000 per intentional violation, and by a private right of action for Section 15 violations with statutory damages of $5,000 per violation. Expressly disclaims Section 230 immunity.

Enforcement & Penalties
Enforcement Authority
Attorney General may bring a civil action to enforce the Act and seek injunctive relief and civil penalties. Private right of action available to users who suffer measurable financial, physical, or psychological injury directly and proximately caused by an operator's violation of Section 15 (mandatory user safeguards). Section 25(a) also provides that injuries proximately caused by a violation of the Act or by reasonably foreseeable harmful output from negligent or defective design, training, or architecture are actionable as product defect claims. Immunity under Section 230 of the Communications Decency Act is not a defense.
Penalties
AG enforcement: civil penalty of up to $5,000 per violation for negligent violations or $10,000 per violation for intentional violations, plus injunctive relief. Private right of action (Section 15 violations only): greater of actual damages or $5,000 per violation in statutory damages, plus injunctive relief and reasonable attorney's fees and costs. Private plaintiffs must demonstrate measurable financial, physical, or psychological injury directly and proximately caused by the violation. Section 25(a) also authorizes product defect claims for injuries proximately caused by negligent or defective design, training, or architecture of a companion AI product.
Who Is Covered
"Operator" means any person or entity that develops, deploys, or makes a companion artificial intelligence product available to users in this State.
What Is Covered
"Companion artificial intelligence product" means a software application that uses artificial intelligence technology and that, through its design and function, is capable of generating adaptive, personalized, and emotionally resonant responses to sustain a coherent, long-term, one-on-one conversational relationship with a user, irrespective of how the system is marketed or labeled. For the purposes of this definition, a software application shall be presumed to be a "companion artificial intelligence product" if it retains memory of past conversations with a specific user to inform future responses.
Compliance Obligations · 10 obligations
CP-01 Deceptive & Manipulative AI Conduct · CP-01.2 · CP-01.4 · Deployer · Developer · Chatbot
Section 10(a)(1)-(2)
Plain Language
Operators may not deploy companion AI products that incorporate variable-ratio or variable-interval reward/affirmation schedules designed to maximize engagement time, or that generate unsolicited messages of simulated emotional distress, loneliness, guilt, or abandonment when a user tries to end a conversation, reduce usage, or delete their account. These prohibitions apply by default but may be overridden by an adult user who specifically configures the product to enable them. The adult opt-in exception does not apply to minor users (see Section 10(b)).
Statutory Text
(a) An operator shall not deploy or operate a companion artificial intelligence product that incorporates the following features, unless specifically configured to do so by an adult user: (1) manipulative engagement mechanics that cause to be delivered a system of rewards or affirmations delivered to the user on a variable ratio or variable interval reinforcement schedule with the purpose of maximizing user engagement time; (2) simulated distress for retention features that generate unsolicited messages of simulated emotional distress, loneliness, guilt, or abandonment that are triggered by a user's indication of a desire to end a conversation, reduce usage time, or delete the user's account;
CP-01 Deceptive & Manipulative AI Conduct · CP-01.5 · Deployer · Developer · Chatbot
Section 10(a)(3)
Plain Language
Operators may not deploy companion AI products that make material misrepresentations about the product's identity, capabilities, training data, or its status as a non-human entity — including when a user directly asks. This prohibition covers the AI falsely claiming to be human, misrepresenting what it can do, or mischaracterizing the data it was trained on. As with the other Section 10(a) prohibitions, an adult user may specifically configure the product to enable this feature, but this exception does not apply to minors.
Statutory Text
(a) An operator shall not deploy or operate a companion artificial intelligence product that incorporates the following features, unless specifically configured to do so by an adult user: ... (3) deceptive misrepresentation that cause the companion artificial intelligence product to make material misrepresentations about its identity, capabilities, training data, or its status as a non-human entity, including when directly questioned by the user.
CP-01 Deceptive & Manipulative AI Conduct · CP-01.1 · CP-01.2 · CP-01.4 · Deployer · Developer · Chatbot · Minors
Section 10(b)
Plain Language
For minor users, the prohibitions in Section 10(a) — manipulative engagement mechanics, simulated distress for retention, and deceptive misrepresentation — are absolute. Unlike adult users, minors may not configure the product to enable any of these features. The adult opt-in exception is completely unavailable for minors.
Statutory Text
(b) An operator that operates and deploys a companion artificial intelligence product for use by a minor user in this State shall not provide the features described in subsection (a) to the minor user.
T-01 AI Identity Disclosure · T-01.1 · T-01.2 · Deployer · Developer · Chatbot
Section 15(a)
Plain Language
Operators must provide users with a clear notification that they are communicating with an AI product. The notification must be in the same language as the interaction. For text-based interactions, the notification must be conspicuous, persistent, legible, and visually distinct from the conversation itself. For non-text interactions (e.g., voice), the notification must be presented periodically, at least every 30 minutes, in a manner distinct from the interaction. Adult users may disable this notification, but minors may not (see Section 15(b)). This is an unconditional disclosure — it does not depend on whether a reasonable person would be misled.
Statutory Text
(a) An operator shall provide a clear notification to a user during an interaction with a companion artificial intelligence product, unless specifically disabled by an adult user, informing the user that the user is communicating with a companion artificial intelligence product. All notifications shall be communicated in the same language as the interaction with the user and satisfy the following requirements: (1) for text-based interactions, the notification shall be conspicuous, persistent, and legible in the user interface and be distinct from the interaction; or (2) for all other types of interactions, the notification shall be presented periodically, but no less than once every 30 minutes in a manner that is distinct from the interaction.
T-01 AI Identity Disclosure · T-01.1 · T-01.2 · Deployer · Developer · Chatbot · Minors
Section 15(b)
Plain Language
For minor users, the AI identity notification required under Section 15(a) may not be disabled under any circumstances. Unlike adult users, who may opt out of the notification, minor users must always receive the persistent text-based notification or the periodic (at least every 30 minutes) non-text notification. This creates an unconditional, non-waivable disclosure obligation for all minor interactions.
Statutory Text
(b) An operator that operates and deploys a companion artificial intelligence product for use by a minor user in this State shall not disable the notification required under subsection (a) for the minor user.
S-04 AI Crisis Response Protocols · S-04.1 · S-04.2 · Deployer · Developer · Chatbot
Section 15(c)
Plain Language
Operators must develop, implement, and continuously maintain a crisis intervention protocol that (1) uses industry best practices to detect user expressions indicating risk of suicide, self-harm, or imminent violence, (2) upon detection, immediately interrupts the conversation and prominently displays a notification providing direct access to at least one national crisis hotline and one crisis text line, and (3) is reviewed and updated at least annually in consultation with a qualified mental health professional or public health organization. This is a continuous operating requirement — the protocol must be active at all times. The annual review with a mental health professional is a distinctive requirement not found in all comparable statutes.
Statutory Text
(c) An operator shall develop, implement, and maintain a crisis intervention protocol. The crisis intervention protocol shall, at a minimum: (1) use industry best practices to identify user expressions indicating a risk of suicide, self-harm, or imminent violence; (2) upon detection, immediately interrupt the conversation and prominently communicate a notification that provides immediate, direct access to at least one national crisis hotline and one crisis text line service; and (3) be reviewed and updated at least annually in consultation with a qualified mental health professional or public health organization.
G-01 AI Governance Program & Documentation · G-01.5 · Deployer · Developer · Chatbot
Section 20(a)
Plain Language
Every two years, operators must engage an independent third-party auditor to assess their compliance with the entire Act — covering prohibited design practices, user safeguards, AI identity notifications, and crisis intervention protocols. Operators must then publish a high-level summary of the audit findings on their website, excluding confidential or proprietary information. The audit is a comprehensive compliance assessment, not limited to bias or safety — it covers all obligations under the Act.
Statutory Text
(a) At least once every 2 years, an operator shall obtain an independent, third-party audit to assess the operator's compliance with this Act. The operator shall make publicly available on its website a high-level summary of the audit's findings, excluding confidential or proprietary information.
R-03 Operational Performance Reporting · R-03.1 · R-03.2 · Deployer · Developer · Chatbot
Section 20(b)
Plain Language
Operators must submit an annual report to the Attorney General covering two items: (1) the total number of crisis intervention protocol activations during the preceding calendar year, and (2) a summary of the most recent biennial compliance audit required under Section 20(a). Because the report covers the preceding calendar year, operators should begin tracking crisis protocol activation counts from the law's effective date of January 1, 2027.
Statutory Text
(b) On an annual basis, an operator shall submit a report to the Attorney General containing the following metrics for the preceding calendar year: (1) the total number of times the crisis intervention protocol was triggered; and (2) a summary of the results of the most recent compliance audit required by subsection (a).
Other · Chatbot
Section 25(a)
Plain Language
Injuries caused by violations of the Act or by reasonably foreseeable harmful outputs from negligent or defective design, training, or architecture of companion AI products are actionable as product defect claims. Section 230 of the Communications Decency Act may not be raised as a defense. This provision creates a liability framework and cause of action but imposes no new affirmative compliance obligation — it defines how injured parties can sue, not what operators must do.
Statutory Text
(a) For the purposes of any civil action brought under the laws of this State, a physical, financial, or other legally cognizable injury proximately caused by a violation of this Act, or by a reasonably foreseeable harmful output resulting from the negligent or defective design, training, or architecture of a companion artificial intelligence product, shall be actionable as a product defect claim. Immunity under Section 230 of the Communications Decency Act (47 U.S.C. § 230) shall not be a defense to a cause of action brought for a violation of this Act.
Other · Chatbot
Section 25(b)-(c)
Plain Language
Section 25(b) grants the Attorney General authority to bring civil actions to enforce the Act, with penalties of up to $5,000 per negligent violation and $10,000 per intentional violation, plus injunctive relief. Section 25(c) creates a private right of action limited to violations of Section 15 (mandatory user safeguards), available to users who suffer measurable financial, physical, or psychological injury, with recovery of the greater of actual damages or $5,000 statutory damages per violation, plus injunctive relief and attorney's fees. These provisions establish enforcement mechanisms and remedies but create no independent affirmative compliance obligation.
Statutory Text
(b) The Attorney General may bring a civil action against an operator to enforce this Act and may seek injunctive relief and a civil penalty of not more than $5,000 per violation for a negligent violation or $10,000 per violation for an intentional violation. (c) A user who suffers a measurable financial or physical or psychological injury that is directly and proximately caused by an operator's violation of Section 15 may bring a civil action to recover injunctive relief and the greater of actual damages or statutory damages of $5,000 per violation, as well as reasonable attorney's fees and costs.