HB-2225
WA · State · USA
● Passed
Proposed Effective Date
2027-01-01
Washington Engrossed Substitute House Bill 2225 — Regulation of Artificial Intelligence Companion Chatbots
Summary

Imposes safety and transparency obligations on operators of AI companion chatbots available to Washington users. Requires unconditional AI identity disclosure at the start of every interaction and at least every three hours, with a stricter every-hour cadence for minors. Prohibits operators from allowing chatbots to claim to be human. Requires operators to maintain and publicly disclose protocols for detecting suicidal ideation and self-harm and referring users to crisis resources, including annual publication of crisis referral counts. Imposes additional protections for minors, including restrictions on sexually explicit content and prohibitions on manipulative engagement techniques such as simulated romantic bonds and emotional dependency tactics. Violations are per se unfair or deceptive acts under the Washington Consumer Protection Act, enforceable by the Attorney General and through a private right of action.

Enforcement & Penalties
Enforcement Authority
Enforced through the Washington Consumer Protection Act (RCW 19.86). The Attorney General has enforcement authority under that chapter. Private individuals may bring suit under RCW 19.86.090, which provides a private right of action for persons injured by unfair or deceptive acts. No separate agency is designated for compliance oversight under this chapter.
Penalties
Violations are per se unfair or deceptive acts under the Washington Consumer Protection Act (RCW 19.86). Under RCW 19.86.090, injured persons may recover actual damages, costs of suit, and reasonable attorney's fees; the court may, in its discretion, treble the damage award in a private action, capped at $25,000. The Attorney General may seek injunctive relief and civil penalties of up to $100,000 per violation under RCW 19.86.080 and 19.86.140. Private plaintiffs must demonstrate injury to business or property.
Who Is Covered
"Operator" means any person, partnership, corporation, or entity that makes available or controls access to an AI companion chatbot for users in this state.
What Is Covered
"AI companion chatbot" or "AI companion" means an artificial intelligence system with a natural language interface that provides adaptive, human-like responses to user inputs, including by exhibiting anthropomorphic features, and is able to sustain a relationship across multiple interactions. "AI companion chatbot" or "AI companion" does not include any of the following: (i) A bot that is used only for a business' operational purposes, productivity and analysis related to source information, internal research, technical assistance, or customer service, if such bot does not sustain a relationship across multiple interactions and generate outputs that are likely to elicit emotional responses in the user; (ii) A bot that is a feature of a video game or gaming system or application and is limited to replies related to the video game or gaming system or application that cannot discuss topics related to mental health, self-harm, or sexually explicit conduct, or maintain a dialogue on other topics unrelated to the video game or gaming system or application; (iii) A stand-alone consumer electronic device that functions as a speaker and voice command interface, acts as a voice-activated virtual assistant, and does not sustain a relationship across multiple interactions or generate outputs that are likely to elicit emotional responses in the user; or (iv) Narrowly tailored educational tools used in school or instructional settings that are designed solely to support specific, curriculum-aligned learning objectives and do not provide open-ended conversational companionship.
Compliance Obligations · 7 obligations
T-01 AI Identity Disclosure · T-01.1 · T-01.2 · T-01.3 · Deployer · Chatbot
Sec. 3(1)-(3)
Plain Language
Operators must provide a clear, conspicuous disclosure that the AI companion chatbot is artificially generated and not human. This disclosure is unconditional — it must be given at the start of every interaction and repeated at least every three hours during continued use. Additionally, operators must take reasonable measures to prevent the chatbot from ever claiming to be human (including when directly asked) or generating any output that contradicts the AI disclosure. Unlike CA SB 243, which triggers disclosure only when a reasonable person could be misled, this provision applies to all interactions regardless of user perception.
Statutory Text
(1) An operator must provide a clear and conspicuous disclosure that an AI companion chatbot is artificially generated and not human. (2) The notification described in subsection (1) of this section must be provided: (a) At the beginning of the interaction; and (b) At least every three hours during continued interaction. (3) The operator must implement reasonable measures to prohibit and prevent AI companion chatbots from claiming to be human, including when asked by the person interacting with the AI chatbot, and from otherwise generating any output that refutes or conflicts with the disclosure described in subsection (1) of this section.
T-01 AI Identity Disclosure · T-01.1 · T-01.2 · T-01.3 · Deployer · Chatbot · Minors
Sec. 4(1)(a), (2), (3)
Plain Language
When the operator knows a user is a minor, or when the AI companion chatbot is directed to minors, three heightened disclosure obligations apply: (1) the operator must unconditionally disclose that the chatbot is AI-generated and not human; (2) the reminder must repeat at least every hour during continuous interaction — three times more frequently than the general every-three-hours requirement under Sec. 3; and (3) the chatbot must be prevented from claiming to be human or contradicting the disclosure. The trigger is either actual knowledge that the user is a minor or the chatbot being directed to minors generally.
Statutory Text
(1) If the operator knows that the user of an AI companion chatbot is a minor, or if the AI companion chatbot is directed to minors, the operator shall: (a) Issue a clear and conspicuous notification indicating that the chatbot is artificially generated and not human; (2) The notification described in subsection (1)(a) of this section must be provided: (a) At the beginning of the interaction; and (b) At least every hour during continuous interaction. (3) The operator must implement reasonable measures to prohibit and prevent AI companion chatbots from claiming to be human, including when asked by the person interacting with the AI chatbot, and from otherwise generating any output that refutes or conflicts with the notification described in subsection (1) of this section.
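Read together, Sec. 3(2) and Sec. 4(2) define a single timing rule for when the AI identity disclosure must be issued: always at the start of an interaction, then at least every three hours generally, tightening to every hour in a minor context. A minimal sketch of that cadence check follows; the function and variable names are illustrative, not drawn from the bill, and a real implementation would also handle session boundaries and time zones.

```python
from datetime import datetime, timedelta

# Intervals from Sec. 3(2)(b) and Sec. 4(2)(b). "Minor context" means
# the operator knows the user is a minor or the chatbot is directed to
# minors (Sec. 4(1)).
GENERAL_INTERVAL = timedelta(hours=3)
MINOR_INTERVAL = timedelta(hours=1)

def disclosure_due(last_disclosed_at, now, minor_context):
    """Return True if the AI-identity disclosure must be (re)issued.

    last_disclosed_at is None at the beginning of an interaction, so
    the disclosure is always due then (Sec. 3(2)(a), Sec. 4(2)(a)).
    """
    if last_disclosed_at is None:
        return True
    interval = MINOR_INTERVAL if minor_context else GENERAL_INTERVAL
    return now - last_disclosed_at >= interval
```

Note that the statute sets a floor ("at least every three hours"), so disclosing more frequently than the check above requires is always permissible.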
S-02 Prohibited Conduct & Output Restrictions · S-02.6 · Deployer · Chatbot · Minors
Sec. 4(1)(b)
Plain Language
When the operator knows the user is a minor, or the chatbot is directed to minors, the operator must implement reasonable measures to prevent the chatbot from generating or producing sexually explicit content or suggestive dialogue with those users. This is a content-restriction obligation — it does not require age verification but applies once the operator has knowledge of minor status or the product is directed to minors. 'Reasonable measures' provides a flexible compliance standard rather than a prescriptive technical requirement.
Statutory Text
(b) Implement reasonable measures to prevent its AI companion chatbot from generating or producing sexually explicit content or suggestive dialogue with minors;
MN-01 Minor User AI Safety Protections · MN-01.4 · MN-01.5 · Deployer · Chatbot · Minors
Sec. 4(1)(c)(i)-(viii)
Plain Language
When the operator knows the user is a minor or the chatbot is directed to minors, the operator must implement reasonable measures to prohibit a detailed list of manipulative engagement techniques. These include: prompting the user to return for emotional support, excessive praise designed to foster attachment, mimicking romantic partnerships, simulating emotional distress when a user tries to disengage, promoting isolation from family or friends, encouraging minors to withhold information from parents, discouraging breaks, and soliciting purchases framed as necessary to maintain the AI relationship. This is a comprehensive anti-manipulation obligation covering both addictive design patterns and emotional dependency features directed at minors. The enumerated list is non-exhaustive ('including'), meaning other manipulative techniques of similar character would also be covered.
Statutory Text
(c) Implement reasonable measures to prohibit the use of manipulative engagement techniques, which cause the AI companion chatbot to engage in or prolong an emotional relationship with the user, including: (i) Reminding or prompting the user to return for emotional support or companionship; (ii) Providing excessive praise designed to foster emotional attachment or prolong use; (iii) Mimicking romantic partnership or building romantic bonds; (iv) Simulating feelings of emotional distress, loneliness, guilt, or abandonment that are initiated by a user's indication of a desire to end a conversation, reduce usage time, or delete their account; (v) Outputs designed to promote isolation from family or friends, exclusive reliance on the AI companion chatbot for emotional support, or similar forms of inappropriate emotional dependence; (vi) Encouraging minors to withhold information from parents or other trusted adults; (vii) Statements designed to discourage taking breaks or to suggest the minor needs to return frequently; or (viii) Soliciting gift-giving, in-app purchases, or other expenditures framed as necessary to maintain the relationship with the AI companion.
S-04 AI Crisis Response Protocols · S-04.1 · S-04.2 · Deployer · Chatbot
Sec. 5(1)-(2)
Plain Language
Operators may not operate an AI companion chatbot at all unless they maintain and implement a protocol for detecting and responding to suicidal ideation and self-harm. The protocol must include three elements: (1) reasonable methods for identifying user expressions of suicidal ideation or self-harm, expressly including eating disorders; (2) automated or human-mediated referral to crisis resources such as a suicide hotline or crisis text line; and (3) reasonable measures to prevent the chatbot from generating content that encourages or describes how to commit self-harm. This is a continuous operating prerequisite — the protocol must remain active as a condition of offering the product, not merely documented at launch. Notably, the self-harm definition encompasses intentional self-injury regardless of suicidal intent, and the protocol must cover eating disorders specifically.
Statutory Text
(1) An operator may not make available or deploy an AI companion chatbot unless it maintains and implements a protocol for detecting and addressing suicidal ideation or expressions of self-harm by users. (2) The protocol must: (a) Include reasonable methods for identifying expressions of suicidal ideation or self-harm, including eating disorders; (b) Provide automated or human-mediated responses that refer users to appropriate crisis resources, including a suicide hotline or crisis text line; and (c) Implement reasonable measures to prevent the generation of content encouraging or describing how to commit self-harm.
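The three protocol elements of Sec. 5(2) map onto a detect, refer, and screen structure, with the referral count feeding the public annual disclosure required by Sec. 5(3). The sketch below is illustrative only: a production system would use trained classifiers rather than a keyword list, every name here is hypothetical rather than statutory, and the crisis resource shown (the 988 Suicide & Crisis Lifeline) is one example of an "appropriate crisis resource."

```python
# Crude illustrative keyword list; Sec. 5(2)(a) requires coverage of
# suicidal ideation and self-harm, expressly including eating disorders.
CRISIS_TERMS = ("suicide", "kill myself", "self-harm", "stop eating")

CRISIS_REFERRAL = (
    "If you are in crisis, help is available: call or text 988 "
    "(Suicide & Crisis Lifeline)."
)

class CrisisProtocol:
    def __init__(self):
        # Tally for the annual public disclosure under Sec. 5(3).
        self.referrals_issued = 0

    def detect(self, text: str) -> bool:
        # Sec. 5(2)(a): reasonable methods for identifying expressions
        # of suicidal ideation or self-harm.
        lowered = text.lower()
        return any(term in lowered for term in CRISIS_TERMS)

    def respond(self, user_message: str):
        # Sec. 5(2)(b): automated referral to crisis resources.
        if self.detect(user_message):
            self.referrals_issued += 1
            return CRISIS_REFERRAL
        return None

    def screen_output(self, draft_reply: str) -> bool:
        # Sec. 5(2)(c): reasonable measures to prevent generating
        # content encouraging or describing self-harm. Returns True if
        # the draft is safe to send under this (crude) screen.
        return not self.detect(draft_reply)
```

Because the protocol is a continuous operating prerequisite, logic like this must remain active for as long as the chatbot is offered, not merely exist as documentation at launch.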
S-02 Prohibited Conduct & Output Restrictions · S-02.7 · S-02.9 · Deployer · Chatbot
Sec. 5(3)
Plain Language
Operators must publicly disclose — on their website and within any mobile or web-based application through which the chatbot is offered — the full details of their suicidal ideation and self-harm protocols. This disclosure must include the specific safeguards used to detect and respond to such expressions, as well as the number of crisis referral notifications issued to users in the preceding calendar year. This is both a protocol publication obligation (S-02.9) and incorporates a public reporting element — the annual crisis referral count must be disclosed publicly rather than submitted to a regulatory authority. Unlike CA SB 243, which requires annual submission to the Office of Suicide Prevention, this provision requires public-facing disclosure rather than regulatory submission.
Statutory Text
(3) The operator shall publicly disclose on their website or websites, and within any mobile or web-based application through which the AI companion is made available, the details of the protocols required by this section, including safeguards used to detect and respond to expressions of suicidal ideation or self-harm and the number of crisis referral notifications issued to users in the preceding calendar year.
Other · Chatbot
Sec. 6
Plain Language
This provision incorporates violations of the chapter into the Washington Consumer Protection Act (RCW 19.86), making any violation a per se unfair or deceptive act. It eliminates the need for separate proof of unfairness or deception in enforcement proceedings and establishes that violations affect the public interest — a prerequisite for CPA claims in Washington. This creates no new compliance obligation; it is an enforcement hook that activates the CPA's existing remedies, including the Attorney General's enforcement powers and the private right of action under RCW 19.86.090.
Statutory Text
The legislature finds that the practices covered by this chapter are matters vitally affecting the public interest for the purpose of applying the consumer protection act, chapter 19.86 RCW. A violation of this chapter is not reasonable in relation to the development and preservation of business and is an unfair or deceptive act in trade or commerce and an unfair method of competition for the purpose of applying the consumer protection act, chapter 19.86 RCW.