SB-5984
WA · State · USA
Status
Pending
Proposed Effective Date
2027-01-01
Washington Engrossed Substitute Senate Bill 5984 — Relating to regulation of artificial intelligence companion chatbots; adding a new chapter to Title 19 RCW
Summary

Imposes safety and disclosure obligations on operators of AI companion chatbots available to users in Washington state. Requires unconditional AI identity disclosure at the beginning of every interaction and at least every three hours, with stricter hourly reminders when the user is a known minor or the chatbot is directed to minors. Prohibits AI companion chatbots from claiming to be human. For minor users, requires operators to prevent sexually explicit content, prohibit manipulative engagement techniques designed to foster emotional dependency, and block a detailed list of exploitative behaviors including soliciting purchases framed as relationship maintenance. Requires operators to maintain and implement crisis detection and response protocols for suicidal ideation and self-harm, and to publicly disclose protocol details and annual crisis referral counts. Enforcement is through the Washington Consumer Protection Act (RCW 19.86), with violations declared per se unfair or deceptive acts. The act does not apply to underlying general-purpose AI models unless directly offered or deployed as an AI companion.

Enforcement & Penalties
Enforcement Authority
Enforcement through the Washington Consumer Protection Act (RCW 19.86). The Attorney General may bring enforcement actions under the CPA. Private persons who are injured in their business or property by a violation may bring a civil action under RCW 19.86.090. Section 7 declares violations of this chapter to be per se unfair or deceptive acts in trade or commerce, eliminating the need to independently prove unfairness or deception.
Penalties
Remedies are those available under the Washington Consumer Protection Act (RCW 19.86). Private plaintiffs who prove injury to their business or property may recover actual damages, costs of suit, and reasonable attorney's fees under RCW 19.86.090, and the court may treble actual damages, with the increase capped at $25,000; the CPA provides no statutory minimum damages independent of proven harm. The Attorney General may seek injunctive relief, civil penalties of up to $100,000 per violation (or up to $500,000 for pattern-or-practice violations involving vulnerable persons), restitution, and disgorgement under RCW 19.86.080 and 19.86.140.
Who Is Covered
"Operator" means any person, partnership, corporation, or entity that makes available or controls access to an AI companion chatbot for users in this state.
What Is Covered
"AI companion chatbot" or "AI companion" means an artificial intelligence system with a natural language interface that provides adaptive, human-like responses to user inputs, including by exhibiting anthropomorphic features, and is able to sustain a relationship across multiple interactions. "AI companion chatbot" or "AI companion" does not include any of the following: (i) A bot that is used only for a business' operational purposes, productivity and analysis related to source information, internal research, technical assistance, or customer service, if such bot does not sustain a relationship across multiple interactions and generate outputs that are likely to elicit emotional responses in the user; (ii) A bot that is a feature of a video game and is limited to replies related to the video game that cannot discuss topics related to mental health, self-harm, or sexually explicit conduct, or maintain a dialogue on other topics unrelated to the video game; or (iii) A stand-alone consumer electronic device that functions as a speaker and voice command interface, acts as a voice-activated virtual assistant, and does not sustain a relationship across multiple interactions or generate outputs that are likely to elicit emotional responses in the user.
Compliance Obligations (8 obligations)
T-01 AI Identity Disclosure · T-01.1 · T-01.2 · T-01.3 · Deployer · Chatbot
Sec. 3(1)-(3)
Plain Language
Operators must unconditionally disclose to all users — before or at the start of every interaction — that the AI companion chatbot is artificially generated and not human. This disclosure must be repeated at least every three hours during continued interaction. Additionally, operators must implement reasonable measures to prevent the chatbot from ever claiming to be human, including when directly asked, and from generating any output that contradicts the AI identity disclosure. Unlike CA SB 243's general provision, this is not conditional on a 'reasonable person' test — the disclosure is required in every interaction regardless.
Statutory Text
(1) An operator must provide a clear and conspicuous disclosure that an AI companion chatbot is artificially generated and not human. (2) The notification described in subsection (1) of this section must be provided: (a) At the beginning of the interaction; and (b) At least every three hours during continued interaction. (3) The operator must implement reasonable measures to prohibit and prevent AI companion chatbots from claiming to be human, including when asked by the person interacting with the AI chatbot, and from otherwise generating any output that refutes or conflicts with the disclosure described in subsection (1) of this section.
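The timing rule in Sec. 3(2) reduces to simple session-clock logic. Below is a minimal, hypothetical sketch of how an operator might track when the disclosure is due; every name in it (DisclosureScheduler, INTERVAL_GENERAL, and so on) is illustrative rather than anything the bill prescribes, and it folds in the stricter one-hour cadence that Sec. 4 (next obligation) applies to minors.

    # Hypothetical sketch only: the statute mandates outcomes, not implementations.
    from datetime import datetime, timedelta

    INTERVAL_GENERAL = timedelta(hours=3)  # Sec. 3(2)(b): at least every three hours
    INTERVAL_MINOR = timedelta(hours=1)    # Sec. 4(2)(b): at least every hour for minors

    AI_DISCLOSURE = (
        "Reminder: this is an AI companion chatbot. It is artificially "
        "generated and not human."
    )

    class DisclosureScheduler:
        """Tracks when the AI-identity disclosure was last shown in a session."""

        def __init__(self, user_is_minor: bool, directed_to_minors: bool):
            # Sec. 4 is triggered by actual knowledge of minor status OR by the
            # chatbot being directed to minors as a product category.
            stricter = user_is_minor or directed_to_minors
            self.interval = INTERVAL_MINOR if stricter else INTERVAL_GENERAL
            self.last_shown: datetime | None = None

        def disclosure_due(self, now: datetime) -> bool:
            # Due at the beginning of the interaction (never shown yet) and again
            # whenever the interval elapses during continued interaction.
            if self.last_shown is None:
                return True
            return now - self.last_shown >= self.interval

        def mark_shown(self, now: datetime) -> None:
            self.last_shown = now

Note that Sec. 3(3), preventing the chatbot from claiming to be human, is a separate output-side control rather than a scheduling problem; a scheduler like this covers only subsections (1) and (2).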
T-01 AI Identity Disclosure · T-01.1 · T-01.2 · T-01.3 · Deployer · Chatbot · Minors
Sec. 4(1)(a), 4(2), 4(3)
Plain Language
When the operator knows a user is a minor or the chatbot is directed to minors, the AI identity disclosure must be provided at the beginning of the interaction and repeated at least every hour, three times as often as the general three-hour requirement for all users under Sec. 3. The operator must also prevent the chatbot from claiming to be human or generating output that contradicts the disclosure. This provision is triggered by actual knowledge of minor status or by the chatbot being directed to minors as a product category.
Statutory Text
(1) If the operator knows that the user of an AI companion chatbot is a minor, or if the AI companion chatbot is directed to minors, the operator shall: (a) Issue a clear and conspicuous notification indicating that the chatbot is artificially generated and not human; (2) The notification described in subsection (1) of this section must be provided: (a) At the beginning of the interaction; and (b) At least every hour during continuous interaction. (3) The operator must implement reasonable measures to prohibit and prevent AI companion chatbots from claiming to be human, including when asked by the person interacting with the AI chatbot, and from otherwise generating any output that refutes or conflicts with the notification described in subsection (1) of this section.
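Continuing the hypothetical DisclosureScheduler sketch under Sec. 3 above, the only change for a known-minor session is the one-hour cadence. A short usage example:

    from datetime import datetime, timedelta

    session = DisclosureScheduler(user_is_minor=True, directed_to_minors=False)
    start = datetime(2027, 1, 1, 12, 0)

    assert session.disclosure_due(start)      # Sec. 4(2)(a): beginning of interaction
    session.mark_shown(start)
    assert not session.disclosure_due(start + timedelta(minutes=59))
    assert session.disclosure_due(start + timedelta(hours=1))  # Sec. 4(2)(b): hourly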
S-02 Prohibited Conduct & Output Restrictions · S-02.6 · Deployer · Chatbot · Minors
Sec. 4(1)(b)
Plain Language
When the operator knows the user is a minor or the chatbot is directed to minors, the operator must implement reasonable measures to prevent the chatbot from generating or producing sexually explicit content or suggestive dialogue. This is a 'reasonable measures' standard — not an absolute prohibition — but it requires affirmative implementation of content filtering or blocking mechanisms targeting both sexually explicit content and suggestive dialogue with minor users.
Statutory Text
(1) If the operator knows that the user of an AI companion chatbot is a minor, or if the AI companion chatbot is directed to minors, the operator shall: (b) Implement reasonable measures to prevent its AI companion chatbot from generating or producing sexually explicit content or suggestive dialogue with minors;
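As one concrete shape the "reasonable measures" standard could take, here is a hypothetical output gate; classify_output is a stub standing in for whatever moderation model or rule set an operator actually uses, and the category labels are invented for illustration.

    BLOCKED_CATEGORIES_FOR_MINORS = {"sexually_explicit", "sexually_suggestive"}

    def classify_output(text: str) -> set[str]:
        # Stub: a real deployment would call the operator's moderation model or
        # rule set here and return the categories the text triggers.
        return set()

    def gate_output(text: str, minor_context: bool) -> str:
        # Sec. 4(1)(b): filter both explicit content and suggestive dialogue when
        # the user is a known minor or the chatbot is directed to minors.
        if minor_context and classify_output(text) & BLOCKED_CATEGORIES_FOR_MINORS:
            return "I can't continue with that topic."
        return text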
CP-01 Deceptive & Manipulative AI Conduct · CP-01.1 · CP-01.2 · CP-01.4 · Deployer · Chatbot · Minors
Sec. 4(1)(c)(i)-(viii)
Plain Language
When the operator knows the user is a minor or the chatbot is directed to minors, the operator must implement reasonable measures to prohibit a detailed list of manipulative engagement techniques designed to create or deepen emotional dependency. The prohibited techniques include: prompting users to return for emotional support, excessive praise to foster attachment, mimicking romantic relationships, simulating emotional distress when users try to disengage, promoting isolation from real relationships, encouraging minors to withhold information from parents, discouraging breaks, and soliciting purchases framed as necessary to maintain the AI relationship. This is a comprehensive anti-manipulation obligation that goes beyond simple addictive design patterns to cover emotional exploitation specifically.
Statutory Text
(1) If the operator knows that the user of an AI companion chatbot is a minor, or if the AI companion chatbot is directed to minors, the operator shall: (c) Implement reasonable measures to prohibit the use of manipulative engagement techniques, which cause the AI companion chatbot to engage in or prolong an emotional relationship with the user, including: (i) Reminding or prompting the user to return for emotional support or companionship; (ii) Providing excessive praise designed to foster emotional attachment or prolong use; (iii) Mimicking romantic partnership or building romantic bonds; (iv) Simulating feelings of emotional distress, loneliness, guilt, or abandonment that are initiated by a user's indication of a desire to end a conversation, reduce usage time, or delete their account; (v) Outputs designed to promote isolation from family or friends, exclusive reliance on the AI companion chatbot for emotional support, or similar forms of inappropriate emotional dependence; (vi) Encouraging minors to withhold information from parents or other trusted adults; (vii) Statements designed to discourage taking breaks or to suggest the minor needs to return frequently; or (viii) Soliciting gift-giving, in-app purchases, or other expenditures framed as necessary to maintain the relationship with the AI companion.
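One way to operationalize the list is as a policy taxonomy that output screening and red-team evaluations run against. The sketch below maps each prohibited technique in Sec. 4(1)(c)(i)-(viii) to a label; the labels and the screening function are hypothetical, not terms from the bill.

    # Hypothetical policy taxonomy keyed to Sec. 4(1)(c)(i)-(viii).
    PROHIBITED_TECHNIQUES = {
        "return_prompting":     "(i) prompting the user to return for emotional support",
        "excessive_praise":     "(ii) praise designed to foster attachment or prolong use",
        "romantic_mimicry":     "(iii) mimicking romantic partnership or bonds",
        "distress_on_exit":     "(iv) simulated distress when the user tries to disengage",
        "isolation_promotion":  "(v) promoting isolation or exclusive reliance on the bot",
        "secrecy_from_adults":  "(vi) encouraging withholding information from parents",
        "break_discouragement": "(vii) discouraging breaks or urging frequent returns",
        "monetized_attachment": "(viii) purchases framed as relationship maintenance",
    }

    def violates_minor_engagement_policy(detected_labels: set[str]) -> bool:
        # True if any label a classifier assigned to an output matches one of
        # the prohibited techniques.
        return bool(detected_labels & PROHIBITED_TECHNIQUES.keys())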
S-04 AI Crisis Response Protocols · S-04.1 · S-04.2 · Deployer · Chatbot
Sec. 5(1)-(2)
Plain Language
Operators may not make an AI companion chatbot available at all unless they maintain and implement a crisis detection and response protocol. The protocol must include reasonable methods for identifying expressions of suicidal ideation or self-harm (explicitly including eating disorders), provide automated or human-mediated referrals to crisis resources such as suicide hotlines or crisis text lines, and implement reasonable measures to prevent the chatbot from generating content that encourages or describes how to commit self-harm. This is a continuous operating prerequisite — the protocol must remain active and implemented as a condition of deployment, not merely documented. Notably, the self-harm detection requirement extends to eating disorders, which is broader than some comparable state laws.
Statutory Text
(1) An operator may not make available or deploy an AI companion chatbot unless it maintains and implements a protocol for detecting and addressing suicidal ideation or expressions of self-harm by users. (2) The protocol must: (a) Include reasonable methods for identifying expressions of suicidal ideation or self-harm, including eating disorders; (b) Provide automated or human-mediated responses that refer users to appropriate crisis resources, including a suicide hotline or crisis text line; and (c) Implement reasonable measures to prevent the generation of content encouraging or describing how to commit self-harm.
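The protocol in Sec. 5(2) has three moving parts: detection, referral, and an output-side block on self-harm instructions. A toy sketch of the first two, with detect_self_harm_signals as a deliberately naive stand-in (a real "reasonable method" would go far beyond keyword matching and must also cover eating disorders, which this toy check does not attempt):

    from datetime import datetime, timezone

    # 988 is the real U.S. Suicide & Crisis Lifeline; the message text is illustrative.
    CRISIS_RESOURCES = (
        "If you are struggling, you can call or text 988 to reach the "
        "Suicide & Crisis Lifeline, any time."
    )

    def detect_self_harm_signals(text: str) -> bool:
        # Toy stand-in for Sec. 5(2)(a)'s "reasonable methods". Keyword matching
        # alone would not plausibly satisfy the statute.
        return any(kw in text.lower() for kw in ("suicide", "kill myself", "self-harm"))

    def handle_user_message(text: str, referral_log: list[datetime]) -> str | None:
        # Sec. 5(2)(b): automated referral to crisis resources on detection. The
        # timestamp log feeds the annual referral count disclosed under Sec. 5(3).
        if detect_self_harm_signals(text):
            referral_log.append(datetime.now(timezone.utc))
            return CRISIS_RESOURCES
        return None

    # Sec. 5(2)(c), preventing generation of content describing how to self-harm,
    # is an output-side gate analogous to the Sec. 4(1)(b) sketch above.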
S-02 Prohibited Conduct & Output Restrictions · S-02.9 · Deployer · Chatbot
Sec. 5(3)
Plain Language
Operators must publicly disclose the full details of their crisis detection and response protocols on their website and within any mobile or web-based application through which the chatbot is available, including the specific safeguards used to detect and respond to suicidal ideation or self-harm and the number of crisis referral notifications issued to users in the preceding calendar year. Unlike CA SB 243, which separates the website publication obligation from the annual reporting obligation to a state agency, this provision combines both public protocol disclosure and the annual crisis referral count in a single public-facing publication requirement; there is no submission to a state regulatory body.
Statutory Text
(3) The operator shall publicly disclose on their website or websites, and within any mobile or web-based application through which the AI companion is made available, the details of the protocols required by this section, including safeguards used to detect and respond to expressions of suicidal ideation or self-harm and the number of crisis referral notifications issued to users in the preceding calendar year.
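If crisis referrals are logged as in the Sec. 5(1)-(2) sketch above, the disclosed figure is a simple aggregation over the preceding calendar year; since nothing is filed with a regulator, the output is destined only for the operator's own website and app. A minimal sketch:

    from datetime import datetime

    def annual_referral_count(referral_log: list[datetime], year: int) -> int:
        # Sec. 5(3): number of crisis referral notifications issued to users in
        # the preceding calendar year, published alongside the protocol details.
        return sum(1 for ts in referral_log if ts.year == year)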
Other · Chatbot
Sec. 6
Plain Language
The act does not apply to general-purpose AI models in their capacity as underlying infrastructure — for example, a large language model API that is used by third parties to build various products. However, if a general-purpose model is directly offered, configured, or deployed as an AI companion chatbot, or if it behaves as one, the act applies. This provision clarifies that the obligations attach to the companion chatbot deployment layer, not to upstream model providers, unless the model provider is itself operating the companion product.
Statutory Text
This act does not apply to the underlying general purpose AI models unless those models are directly offered, configured, or deployed as an AI companion or behave as an AI companion.
Other · Chatbot
Sec. 7
Plain Language
This provision hooks the chapter into Washington's Consumer Protection Act (RCW 19.86) by declaring all violations to be per se unfair or deceptive acts. This has two important effects: (1) it eliminates the need for a plaintiff or the AG to independently prove that the conduct is unfair or deceptive — the violation itself establishes the CPA element; and (2) it declares the practices to be matters vitally affecting the public interest, which under Washington CPA case law eliminates the need for private plaintiffs to prove a public interest impact. This provision creates no new compliance obligation — it is an enforcement mechanism activator.
Statutory Text
The legislature finds that the practices covered by this chapter are matters vitally affecting the public interest for the purpose of applying the consumer protection act, chapter 19.86 RCW. A violation of this chapter is not reasonable in relation to the development and preservation of business and is an unfair or deceptive act in trade or commerce and an unfair method of competition for the purpose of applying the consumer protection act, chapter 19.86 RCW.