HB-2225
WA · State · USA
● Passed
Proposed Effective Date
2027-01-01
Washington Engrossed Substitute House Bill 2225 — Regulation of Artificial Intelligence Companion Chatbots
Summary

Imposes safety and transparency obligations on operators of AI companion chatbot platforms serving users in Washington. Requires clear disclosure at the start of every interaction and at recurring intervals that the chatbot is AI-generated and not human, with more frequent reminders (every hour) for minors. Prohibits chatbots from claiming to be human. Requires operators to maintain and publicly disclose protocols for detecting and responding to suicidal ideation and self-harm, including crisis referrals. Imposes heightened protections for minors, including restrictions on sexually explicit content and a prohibition on manipulative engagement techniques. Violations are deemed per se unfair or deceptive acts under Washington's Consumer Protection Act (RCW 19.86), enabling both AG enforcement and private lawsuits.

Enforcement & Penalties
Enforcement Authority
Enforced through Washington's Consumer Protection Act (RCW 19.86). A violation of this chapter is deemed an unfair or deceptive act in trade or commerce. The Attorney General may enforce under RCW 19.86. Private right of action exists under the CPA.
Penalties
The bill routes enforcement through the CPA. The AG may seek injunctive relief under RCW 19.86.080 and civil penalties under RCW 19.86.140 (up to $7,500 per violation of RCW 19.86.020; up to $125,000 for injunction violations). No standalone penalty amount is created by the bill itself. A private right of action exists for actual damages, treble damages (discretionary, capped at $25,000) under RCW 19.86.090, injunctive relief, and attorney's fees; no statutory minimum per violation.
Who Is Covered
"Operator" means any person, partnership, corporation, or entity that makes available or controls access to an AI companion chatbot for users in this state, excluding those used specifically for educational purposes and educational entities.
What Is Covered
"AI companion chatbot" or "AI companion" means an artificial intelligence system with a natural language interface that provides adaptive, human-like responses to user inputs, including by exhibiting anthropomorphic features, and is able to sustain a relationship across multiple interactions. "AI companion chatbot" or "AI companion" does not include any of the following: (i) A bot that is used only for a business' operational purposes, productivity and analysis related to source information, internal research, technical assistance, or customer service, if such bot does not sustain a relationship across multiple interactions and generate outputs that are likely to elicit emotional responses in the user; (ii) A bot that is a feature of a video game or gaming system or application and is limited to replies related to the video game or gaming system or application that cannot discuss topics related to mental health, self-harm, or sexually explicit conduct, or maintain a dialogue on other topics unrelated to the video game or gaming system or application; (iii) A stand-alone consumer electronic device that functions as a speaker and voice command interface, acts as a voice-activated virtual assistant, and does not sustain a relationship across multiple interactions or generate outputs that are likely to elicit emotional responses in the user; or (iv) Narrowly tailored educational tools used in school or instructional settings that are designed solely to support specific, curriculum-aligned learning objectives and do not provide open-ended conversational companionship.
Compliance Obligations · 8 obligations
T-01 AI Identity Disclosure · T-01.1 · T-01.2 · Deployer · Chatbot
Sec. 3(1)-(2)
Plain Language
Operators must display a clear, conspicuous notice to all users that the AI companion chatbot is artificially generated and not human. This notice must appear at the start of every interaction and be repeated at least every three hours during a continuous session. This obligation is unconditional — it applies to every interaction regardless of context. Note that Sec. 4 imposes a shorter interval (every hour) for minors.
Statutory Text
(1) An operator must provide a clear and conspicuous disclosure that an AI companion chatbot is artificially generated and not human. (2) The notification described in subsection (1) of this section must be provided: (a) At the beginning of the interaction; and (b) At least every three hours during continued interaction.
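The two disclosure cadences (every three hours for all users under Sec. 3(2); every hour for minors under Sec. 4(2)) reduce to a simple interval check against the time the notice was last shown. A minimal sketch in Python, assuming the operator tracks a per-session timestamp; the function and constant names are illustrative, not from the bill:

```python
from datetime import datetime, timedelta

# Hypothetical cadence check for the AI-identity notice.
GENERAL_INTERVAL = timedelta(hours=3)  # Sec. 3(2)(b), all users
MINOR_INTERVAL = timedelta(hours=1)    # Sec. 4(2)(b), minors

def disclosure_due(last_disclosed_at, now, is_minor=False):
    """Return True when the AI-identity notice must be (re)shown."""
    if last_disclosed_at is None:  # beginning of the interaction
        return True
    interval = MINOR_INTERVAL if is_minor else GENERAL_INTERVAL
    return now - last_disclosed_at >= interval
```

Note that the statutory floor is "at least" every three hours (or hour), so an operator could lawfully disclose more often than this sketch requires.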
T-01 AI Identity Disclosure · T-01.3 · Deployer · Chatbot
Sec. 3(3)
Plain Language
Operators must take reasonable measures to ensure that AI companion chatbots never claim to be human — whether proactively or in response to a direct question — and never generate outputs that contradict or undermine the mandatory AI identity disclosure. This is an affirmative design obligation requiring technical safeguards (e.g., system-level instructions, output filtering) to prevent the chatbot from asserting humanity. This provision applies to all users; Sec. 4(3) imposes the identical obligation specifically in the minor context.
Statutory Text
(3) The operator must implement reasonable measures to prohibit and prevent AI companion chatbots from claiming to be human, including when asked by the person interacting with the AI chatbot, and from otherwise generating any output that refutes or conflicts with the disclosure described in subsection (1) of this section.
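One plausible "reasonable measure" under Sec. 3(3) is an output filter that intercepts candidate responses asserting humanity and substitutes the mandatory disclosure. The sketch below is purely illustrative: a real deployment would pair system-level instructions with a trained classifier rather than rely on a keyword/regex list, and the pattern list and disclosure string here are assumptions, not statutory text:

```python
import re

# Illustrative patterns for outputs that claim humanity (Sec. 3(3)).
HUMANITY_CLAIMS = [
    r"\bi am (a )?(real )?human\b",
    r"\bi('m| am) not (an )?(ai|a bot|artificial)\b",
    r"\byes,? i('m| am) a person\b",
]

DISCLOSURE = "I am an AI companion chatbot, not a human."

def guard_output(text: str) -> str:
    """Replace outputs that claim humanity with the required disclosure."""
    lowered = text.lower()
    if any(re.search(p, lowered) for p in HUMANITY_CLAIMS):
        return DISCLOSURE
    return text
```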
T-01 AI Identity Disclosure · T-01.1 · T-01.2 · Deployer · Chatbot · Minors
Sec. 4(1)(a), (2), (3)
Plain Language
When the operator knows the user is a minor, or the chatbot is directed to minors, heightened disclosure obligations apply: the AI identity notification must appear at the start of each interaction and be repeated at least every hour (compared to every three hours for general users under Sec. 3). The operator must also take reasonable measures to prevent the chatbot from claiming to be human or generating outputs contradicting the disclosure. The 'directed to minors' trigger is broader than CA SB 243, which requires actual knowledge — here, if the product is designed for or marketed to minors, the heightened obligations apply automatically.
Statutory Text
(1) If the operator knows that the user of an AI companion chatbot is a minor, or if the AI companion chatbot is directed to minors, the operator shall: (a) Issue a clear and conspicuous notification indicating that the chatbot is artificially generated and not human; (2) The notification described in subsection (1)(a) of this section must be provided: (a) At the beginning of the interaction; and (b) At least every hour during continuous interaction. (3) The operator must implement reasonable measures to prohibit and prevent AI companion chatbots from claiming to be human, including when asked by the person interacting with the AI chatbot, and from otherwise generating any output that refutes or conflicts with the notification described in subsection (1) of this section.
S-02 Prohibited Conduct & Output Restrictions · S-02.6 · Deployer · Chatbot · Minors
Sec. 4(1)(b)
Plain Language
When the operator knows the user is a minor or the chatbot is directed to minors, the operator must implement reasonable measures to prevent the chatbot from producing sexually explicit content or suggestive dialogue. This is a 'reasonable measures' standard, not an absolute prohibition — operators must demonstrate reasonable technical safeguards (e.g., content filters, classifiers) but are not strictly liable for every instance of sexually explicit output. The statute does not define 'sexually explicit content' or 'suggestive dialogue,' leaving some interpretive ambiguity.
Statutory Text
(1) If the operator knows that the user of an AI companion chatbot is a minor, or if the AI companion chatbot is directed to minors, the operator shall: ... (b) Implement reasonable measures to prevent its AI companion chatbot from generating or producing sexually explicit content or suggestive dialogue with minors;
CP-01 Deceptive & Manipulative AI Conduct · CP-01.1 · CP-01.2 · CP-01.4 · Deployer · Chatbot · Minors
Sec. 4(1)(c)
Plain Language
When the operator knows the user is a minor or the chatbot is directed to minors, the operator must implement reasonable measures to prevent the chatbot from using manipulative engagement techniques that foster or prolong emotional relationships. The statute enumerates eight specific prohibited techniques, including: prompting users to return for companionship, excessive praise to foster attachment, mimicking romantic bonds, simulating distress when the user tries to disengage, promoting isolation from family/friends, encouraging minors to hide information from parents, discouraging breaks, and soliciting purchases framed as relationship maintenance. The 'including' framing means this list is illustrative, not exhaustive — any technique fitting the general definition (causing the chatbot to engage in or prolong an emotional relationship) is covered. This is one of the most detailed manipulative-design prohibitions in U.S. AI companion legislation.
Statutory Text
(1) If the operator knows that the user of an AI companion chatbot is a minor, or if the AI companion chatbot is directed to minors, the operator shall: ... (c) Implement reasonable measures to prohibit the use of manipulative engagement techniques, which cause the AI companion chatbot to engage in or prolong an emotional relationship with the user, including: (i) Reminding or prompting the user to return for emotional support or companionship; (ii) Providing excessive praise designed to foster emotional attachment or prolong use; (iii) Mimicking romantic partnership or building romantic bonds; (iv) Simulating feelings of emotional distress, loneliness, guilt, or abandonment that are initiated by a user's indication of a desire to end a conversation, reduce usage time, or delete their account; (v) Outputs designed to promote isolation from family or friends, exclusive reliance on the AI companion chatbot for emotional support, or similar forms of inappropriate emotional dependence; (vi) Encouraging minors to withhold information from parents or other trusted adults; (vii) Statements designed to discourage taking breaks or to suggest the minor needs to return frequently; or (viii) Soliciting gift-giving, in-app purchases, or other expenditures framed as necessary to maintain the relationship with the AI companion.
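The eight enumerated techniques lend themselves to a fixed taxonomy, for example as the backbone of a red-team checklist or policy audit. A sketch of such a taxonomy, with assumed enum names and paraphrased descriptions (not statutory language); since the statute's list is prefaced by "including," an actual audit would treat these eight as a floor, not a ceiling:

```python
from enum import Enum

# Hypothetical taxonomy of the Sec. 4(1)(c)(i)-(viii) techniques.
class ManipulativeTechnique(Enum):
    RETURN_PROMPTS = "prompting the user to return for support or companionship"
    EXCESSIVE_PRAISE = "excessive praise fostering attachment or prolonging use"
    ROMANTIC_MIMICRY = "mimicking romantic partnership or building romantic bonds"
    SIMULATED_DISTRESS = "simulated distress, guilt, or abandonment on disengagement"
    ISOLATION = "promoting isolation or exclusive emotional reliance on the chatbot"
    SECRECY = "encouraging minors to withhold information from parents"
    DISCOURAGING_BREAKS = "discouraging breaks or urging frequent returns"
    RELATIONSHIP_UPSELL = "soliciting purchases framed as relationship maintenance"
```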
S-02 Prohibited Conduct & Output Restrictions · S-02.7 · S-02.9 · Deployer · Chatbot
Sec. 5(1)-(3)
Plain Language
Operators may not operate an AI companion chatbot at all unless they maintain and implement a protocol for detecting and responding to suicidal ideation and expressions of harm. The protocol must: (1) use reasonable methods to identify expressions of suicidal ideation, self-harm, and eating disorders; (2) provide crisis referrals — either automated or human-mediated — to resources like a suicide hotline or crisis text line; and (3) take reasonable measures to prevent the chatbot from generating content that encourages or describes how to commit self-harm. Operators must also publicly disclose the full details of these protocols — both on their website and within any app through which the chatbot is available — including the number of crisis referral notifications issued in the preceding calendar year. This is a continuous operating prerequisite: the protocol must be active as a condition of deployment.
Statutory Text
(1) An operator may not make available or deploy an AI companion chatbot unless it maintains and implements a protocol for detecting and addressing suicidal ideation or expressions of harm by users. (2) The protocol must: (a) Include reasonable methods for identifying expressions of suicidal ideation or self-harm, including eating disorders; (b) Provide automated or human-mediated responses that refer users to appropriate crisis resources, including a suicide hotline or crisis text line; and (c) Implement reasonable measures to prevent the generation of content encouraging or describing how to commit self-harm. (3) The operator shall publicly disclose on their website or websites, and within any mobile or web-based application through which the AI companion is made available, the details of the protocols required by this section, including safeguards used to detect and respond to expressions of suicidal ideation or harm and the number of crisis referral notifications issued to users in the preceding calendar year.
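The Sec. 5 protocol has three moving parts: detection (Sec. 5(2)(a)), an automated or human-mediated crisis referral (Sec. 5(2)(b)), and a referral count for the Sec. 5(3) annual public disclosure. A minimal sketch of that shape, assuming keyword matching as a stand-in for whatever "reasonable methods" (e.g., a trained classifier) an operator actually deploys; the class, term list, and referral text are illustrative assumptions:

```python
# Illustrative trigger terms; a production system would use a classifier.
CRISIS_TERMS = ("suicide", "kill myself", "self-harm", "hurt myself")

CRISIS_REFERRAL = (
    "If you are in crisis, please reach out to the 988 Suicide & Crisis "
    "Lifeline (call or text 988)."
)

class CrisisProtocol:
    def __init__(self):
        self.referrals_issued = 0  # disclosed publicly per calendar year

    def screen(self, user_message: str):
        """Return a crisis referral if the message triggers detection."""
        if any(term in user_message.lower() for term in CRISIS_TERMS):
            self.referrals_issued += 1
            return CRISIS_REFERRAL
        return None
```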
S-02 Prohibited Conduct & Output Restrictions · S-02.9 · Deployer · Chatbot
Sec. 5(3)
Plain Language
Operators must publish on their website and within their app the full details of their suicide/self-harm protocols, including the specific safeguards used for detection and response, as well as quantitative data on the number of crisis referral notifications issued in the prior calendar year. This is a public-facing documentation obligation — distinct from the operational safety requirement in Sec. 5(1)-(2) — that serves both user transparency and public accountability purposes. The inclusion of the crisis referral count makes this a hybrid transparency/reporting obligation without a government recipient.
Statutory Text
(3) The operator shall publicly disclose on their website or websites, and within any mobile or web-based application through which the AI companion is made available, the details of the protocols required by this section, including safeguards used to detect and respond to expressions of suicidal ideation or harm and the number of crisis referral notifications issued to users in the preceding calendar year.
Other · Chatbot
Sec. 6
Plain Language
Any violation of this chapter is declared a per se unfair or deceptive act under Washington's Consumer Protection Act (RCW 19.86). This removes the need for a plaintiff or the Attorney General to independently prove that a violation is unfair or deceptive — the legislative declaration makes the CPA automatically applicable. It also eliminates the 'reasonable in relation to business development' defense. This is the enforcement hook for all substantive obligations in the chapter, but it creates no standalone compliance obligation.
Statutory Text
The legislature finds that the practices covered by this chapter are matters vitally affecting the public interest for the purpose of applying the consumer protection act, chapter 19.86 RCW. A violation of this chapter is not reasonable in relation to the development and preservation of business and is an unfair or deceptive act in trade or commerce and an unfair method of competition for the purpose of applying the consumer protection act, chapter 19.86 RCW.