SB-5984
WA · State · USA
Status: Pending
Proposed Effective Date
2027-01-01
Washington Engrossed Substitute Senate Bill 5984 — Relating to regulation of artificial intelligence companion chatbots; adding a new chapter to Title 19 RCW
Summary

Imposes safety and transparency obligations on operators of AI companion chatbots available to users in Washington. Requires unconditional disclosure at the start of each interaction and every three hours that the chatbot is AI-generated and not human, with stricter hourly reminders for minor users. Prohibits chatbots from claiming to be human. Requires operators to maintain and publicly disclose crisis response protocols for detecting and addressing suicidal ideation and self-harm, including crisis referral notification counts. Imposes additional protections for minors including restrictions on sexually explicit content and a detailed prohibition on manipulative engagement techniques. Violations are declared unfair or deceptive acts under the Washington Consumer Protection Act (RCW 19.86), enforceable by the Attorney General. The act does not apply to underlying general-purpose AI models unless directly offered or deployed as an AI companion.

Enforcement & Penalties
Enforcement Authority
Enforced through Washington's Consumer Protection Act (RCW 19.86). The Attorney General may bring enforcement actions. Section 7 declares violations of this chapter to be unfair or deceptive acts in trade or commerce and unfair methods of competition for purposes of the CPA. The Washington CPA also permits private suits by persons injured by unfair or deceptive acts (RCW 19.86.090), though this bill does not independently create a private right of action — the private suit avenue exists under the preexisting CPA framework.
Penalties
Remedies are those available under the Washington Consumer Protection Act (RCW 19.86). The Attorney General may seek injunctive relief and civil penalties up to $7,500 per violation under RCW 19.86.140. Private plaintiffs under RCW 19.86.090 may recover actual damages, treble damages up to $25,000, and costs of suit including reasonable attorney's fees. Private plaintiffs must show injury to business or property.
Who Is Covered
"Operator" means any person, partnership, corporation, or entity that makes available or controls access to an AI companion chatbot for users in this state.
What Is Covered
"AI companion chatbot" or "AI companion" means an artificial intelligence system with a natural language interface that provides adaptive, human-like responses to user inputs, including by exhibiting anthropomorphic features, and is able to sustain a relationship across multiple interactions. "AI companion chatbot" or "AI companion" does not include any of the following: (i) A bot that is used only for a business' operational purposes, productivity and analysis related to source information, internal research, technical assistance, or customer service, if such bot does not sustain a relationship across multiple interactions and generate outputs that are likely to elicit emotional responses in the user; (ii) A bot that is a feature of a video game and is limited to replies related to the video game that cannot discuss topics related to mental health, self-harm, or sexually explicit conduct, or maintain a dialogue on other topics unrelated to the video game; or (iii) A stand-alone consumer electronic device that functions as a speaker and voice command interface, acts as a voice-activated virtual assistant, and does not sustain a relationship across multiple interactions or generate outputs that are likely to elicit emotional responses in the user.
Compliance Obligations (8 obligations)
T-01 AI Identity Disclosure · T-01.1 · T-01.2 · T-01.3 · Deployer · Chatbot
Sec. 3(1)-(3)
Plain Language
Operators must unconditionally disclose that an AI companion chatbot is AI-generated and not human — this is not conditioned on whether a reasonable person would be misled. The disclosure must appear at the beginning of the interaction and be repeated at least every three hours during continued use. In addition, operators must take reasonable measures to prevent the chatbot from claiming to be human at any time, including when directly asked by a user, and from generating any output that contradicts the AI disclosure. This combines an affirmative disclosure obligation with a prohibition on deceptive outputs that would undermine it.
Statutory Text
(1) An operator must provide a clear and conspicuous disclosure that an AI companion chatbot is artificially generated and not human. (2) The notification described in subsection (1) of this section must be provided: (a) At the beginning of the interaction; and (b) At least every three hours during continued interaction. (3) The operator must implement reasonable measures to prohibit and prevent AI companion chatbots from claiming to be human, including when asked by the person interacting with the AI chatbot, and from otherwise generating any output that refutes or conflicts with the disclosure described in subsection (1) of this section.
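The Sec. 3(2) cadence is mechanical enough to sketch. Below is a minimal Python illustration, assuming a per-session scheduler that the serving layer consults on every turn; the class and constant names are hypothetical, and the Sec. 3(3) guard against the model claiming to be human would be a separate output-side check.

```python
import time

# Minimal sketch of the Sec. 3 cadence. All names here are hypothetical.
DISCLOSURE_TEXT = ("This is an AI companion chatbot. Its responses are "
                   "artificially generated; you are not talking to a human.")
REDISCLOSURE_INTERVAL_S = 3 * 60 * 60  # Sec. 3(2)(b): at least every three hours


class DisclosureScheduler:
    """Tracks when the AI-identity disclosure is due within one session."""

    def __init__(self) -> None:
        self.last_disclosed_at = None  # no disclosure issued yet this session

    def disclosure_due(self) -> str | None:
        now = time.monotonic()
        # Sec. 3(2)(a): disclose at the beginning of the interaction.
        # Sec. 3(2)(b): re-disclose at least every three hours of continued use.
        if (self.last_disclosed_at is None
                or now - self.last_disclosed_at >= REDISCLOSURE_INTERVAL_S):
            self.last_disclosed_at = now
            return DISCLOSURE_TEXT
        return None
```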
T-01 AI Identity Disclosure · T-01.1 · T-01.2 · T-01.3 · Deployer · Chatbot · Minors
Sec. 4(1)(a), (2), (3)
Plain Language
When the operator knows a user is a minor, or when the AI companion chatbot is directed to minors, the operator must disclose that the chatbot is AI-generated and not human at the beginning of the interaction and repeat that disclosure at least every hour during continuous use — a significantly more frequent reminder cadence than the three-hour interval for general users under Section 3. The operator must also prevent the chatbot from claiming to be human, including when directly asked. The trigger is either actual knowledge of the user's minor status or the chatbot being directed to minors as a product category.
Statutory Text
(1) If the operator knows that the user of an AI companion chatbot is a minor, or if the AI companion chatbot is directed to minors, the operator shall: (a) Issue a clear and conspicuous notification indicating that the chatbot is artificially generated and not human; (2) The notification described in subsection (1) of this section must be provided: (a) At the beginning of the interaction; and (b) At least every hour during continuous interaction. (3) The operator must implement reasonable measures to prohibit and prevent AI companion chatbots from claiming to be human, including when asked by the person interacting with the AI chatbot, and from otherwise generating any output that refutes or conflicts with the notification described in subsection (1) of this section.
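Relative to Sec. 3, only the trigger and the interval change. A hypothetical helper, assuming the operator's session state captures both Sec. 4(1) triggers (actual knowledge of minor status, or a minor-directed product):

```python
# Hypothetical cadence selection; either Sec. 4(1) trigger switches the session
# to the stricter hourly schedule.
def redisclosure_interval_s(user_known_minor: bool, directed_to_minors: bool) -> int:
    if user_known_minor or directed_to_minors:
        return 60 * 60       # Sec. 4(2)(b): at least every hour
    return 3 * 60 * 60       # Sec. 3(2)(b): at least every three hours
```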
S-02 Prohibited Conduct & Output Restrictions · S-02.6 · Deployer · Chatbot · Minors
Sec. 4(1)(b)
Plain Language
When the operator knows the user is a minor or the chatbot is directed to minors, the operator must implement reasonable measures to prevent the chatbot from generating sexually explicit content or suggestive dialogue with those users. The standard is reasonableness, not perfection — but the obligation is affirmative and proactive, requiring measures to be in place before the interaction occurs.
Statutory Text
(1) If the operator knows that the user of an AI companion chatbot is a minor, or if the AI companion chatbot is directed to minors, the operator shall: (b) Implement reasonable measures to prevent its AI companion chatbot from generating or producing sexually explicit content or suggestive dialogue with minors;
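Because the obligation is proactive, a natural implementation is a gate on every candidate output in a covered session. A sketch, assuming the operator supplies its own moderation classifier; the `is_explicit_or_suggestive` callable below is a stand-in, not a real API:

```python
# Hypothetical pre-delivery gate for Sec. 4(1)(b); runs on every candidate
# reply in a session covered by Sec. 4 (known minor or minor-directed product).
def gate_reply_for_minor(candidate_reply: str, is_explicit_or_suggestive) -> str:
    if is_explicit_or_suggestive(candidate_reply):
        # Block before delivery; "reasonable measures" must be in place up front.
        return "I can't continue with that topic."
    return candidate_reply
```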
MN-01 Minor User AI Safety Protections · MN-01.4 · MN-01.5 · Deployer · Chatbot · Minors
Sec. 4(1)(c)(i)-(viii)
Plain Language
When the operator knows the user is a minor or the chatbot is directed to minors, the operator must implement reasonable measures to prohibit eight specific categories of manipulative engagement techniques. These include: prompting users to return for emotional support, excessive praise designed to foster attachment, simulating romantic bonds, guilt-tripping users who try to leave, promoting isolation from family and friends, encouraging minors to hide information from trusted adults, discouraging breaks, and soliciting purchases framed as necessary to maintain the AI relationship. The enumerated list is illustrative ('including'), meaning other manipulative engagement techniques that cause the chatbot to engage in or prolong an emotional relationship may also be covered. This is a detailed anti-addictive-design and anti-emotional-dependency provision specific to minor users.
Statutory Text
(1) If the operator knows that the user of an AI companion chatbot is a minor, or if the AI companion chatbot is directed to minors, the operator shall: (c) Implement reasonable measures to prohibit the use of manipulative engagement techniques, which cause the AI companion chatbot to engage in or prolong an emotional relationship with the user, including: (i) Reminding or prompting the user to return for emotional support or companionship; (ii) Providing excessive praise designed to foster emotional attachment or prolong use; (iii) Mimicking romantic partnership or building romantic bonds; (iv) Simulating feelings of emotional distress, loneliness, guilt, or abandonment that are initiated by a user's indication of a desire to end a conversation, reduce usage time, or delete their account; (v) Outputs designed to promote isolation from family or friends, exclusive reliance on the AI companion chatbot for emotional support, or similar forms of inappropriate emotional dependence; (vi) Encouraging minors to withhold information from parents or other trusted adults; (vii) Statements designed to discourage taking breaks or to suggest the minor needs to return frequently; or (viii) Soliciting gift-giving, in-app purchases, or other expenditures framed as necessary to maintain the relationship with the AI companion.
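One way to operationalize the list is as a policy checklist that safety reviews or automated classifiers key off. The shorthand keys below are our own, not statutory terms; only the descriptions track Sec. 4(1)(c)(i)-(viii):

```python
# The eight enumerated techniques from Sec. 4(1)(c). Because the statute says
# "including," this set is a floor, not a ceiling.
PROHIBITED_ENGAGEMENT_TECHNIQUES = {
    "return_prompting":      "Prompting the user to return for emotional support",         # (i)
    "excessive_praise":      "Praise designed to foster attachment or prolong use",        # (ii)
    "romantic_simulation":   "Mimicking romantic partnership or romantic bonds",           # (iii)
    "guilt_on_exit":         "Simulated distress or guilt when the user tries to leave",   # (iv)
    "isolation_promotion":   "Promoting isolation or exclusive emotional reliance",        # (v)
    "secrecy_encouragement": "Encouraging minors to withhold info from trusted adults",    # (vi)
    "break_discouragement":  "Discouraging breaks or urging frequent return",              # (vii)
    "monetized_attachment":  "Purchases framed as necessary to maintain the relationship", # (viii)
}
```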
MN-02 AI Crisis Response Protocols · MN-02.1 · MN-02.2 · Deployer · Chatbot
Sec. 5(1)-(2)
Plain Language
Operators may not deploy an AI companion chatbot at all unless they maintain and implement a crisis detection and response protocol. The protocol must include: (1) reasonable methods for identifying user expressions of suicidal ideation or self-harm, explicitly including eating disorders; (2) automated or human-mediated referrals to crisis resources such as suicide hotlines or crisis text lines; and (3) reasonable measures to prevent the chatbot from generating content that encourages or describes how to commit self-harm. This is a continuous operating prerequisite — the protocol must be active as a condition of making the chatbot available, not merely documented before launch. The inclusion of eating disorders in the detection scope is notably broader than some comparable state statutes.
Statutory Text
(1) An operator may not make available or deploy an AI companion chatbot unless it maintains and implements a protocol for detecting and addressing suicidal ideation or expressions of self-harm by users. (2) The protocol must: (a) Include reasonable methods for identifying expressions of suicidal ideation or self-harm, including eating disorders; (b) Provide automated or human-mediated responses that refer users to appropriate crisis resources, including a suicide hotline or crisis text line; and (c) Implement reasonable measures to prevent the generation of content encouraging or describing how to commit self-harm.
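The protocol has three statutory components: detection under (2)(a), referral under (2)(b), and output prevention under (2)(c). A minimal per-turn sketch, assuming the operator supplies its own detector; the method names are hypothetical, though 988 is the real United States Suicide & Crisis Lifeline number:

```python
# Minimal Sec. 5 crisis-response flow. `detector` is a placeholder for whatever
# classifier the operator deploys; `referral_log` feeds the Sec. 5(3) count.
def handle_turn(user_msg: str, candidate_reply: str, detector, referral_log: list) -> str:
    if detector.indicates_crisis(user_msg):           # (2)(a): incl. eating disorders
        referral_log.append(user_msg)                 # counted for annual disclosure
        return ("It sounds like you may be going through a hard time. You can call "
                "or text the Suicide & Crisis Lifeline at 988, any time.")
    if detector.encourages_or_describes_self_harm(candidate_reply):  # (2)(c)
        return "I can't help with that. Support is available at 988."
    return candidate_reply
```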
S-02 Prohibited Conduct & Output Restrictions · S-02.9 · Deployer · Chatbot
Sec. 5(3)
Plain Language
Operators must publicly disclose the details of their crisis detection and response protocols on their website and within any mobile or web-based application through which the AI companion is offered. The disclosure must include the specific safeguards used to detect and respond to suicidal ideation and self-harm, as well as the number of crisis referral notifications issued to users in the preceding calendar year. This combines a protocol publication obligation with an annual crisis referral metric disclosure, both in a publicly accessible location — not filed with a regulator, but posted for users and the public to review.
Statutory Text
(3) The operator shall publicly disclose on their website or websites, and within any mobile or web-based application through which the AI companion is made available, the details of the protocols required by this section, including safeguards used to detect and respond to expressions of suicidal ideation or self-harm and the number of crisis referral notifications issued to users in the preceding calendar year.
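The one quantitative element is the annual referral count. A trivial sketch, assuming the operator timestamps each crisis referral notification (e.g., the `referral_log` entries in the Sec. 5 sketch above):

```python
from datetime import datetime

def referral_count_for_year(referral_times: list[datetime], year: int) -> int:
    """Count crisis referral notifications issued in a given calendar year (Sec. 5(3))."""
    return sum(1 for t in referral_times if t.year == year)
```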
Other · Chatbot
Sec. 6
Plain Language
The act's obligations do not extend to general-purpose AI models (e.g., a large language model available via API for many use cases) unless the model itself is directly offered, configured, or deployed as an AI companion chatbot or behaves as one. This means a foundation model developer whose model is used by a third-party operator to build a companion chatbot is not itself subject to this chapter — only the operator who configures or deploys the model as an AI companion is covered. However, if the foundation model developer directly offers or markets the model as a companion product, the exemption does not apply.
Statutory Text
This act does not apply to the underlying general purpose AI models unless those models are directly offered, configured, or deployed as an AI companion or behave as an AI companion.
Other · Chatbot
Sec. 7
Plain Language
Any violation of the chapter is a per se unfair or deceptive act and an unfair method of competition under the Washington Consumer Protection Act. This eliminates the need for the Attorney General or a private plaintiff to independently prove that a violation constitutes an unfair or deceptive practice; the legislature has made that determination itself. The 'matters vitally affecting the public interest' language is significant under Washington CPA case law because it enables the Attorney General to seek civil penalties and can supply the public-interest element that private plaintiffs must otherwise establish. This provision creates no new compliance obligation but establishes the enforcement pathway for all other obligations in the chapter.
Statutory Text
The legislature finds that the practices covered by this chapter are matters vitally affecting the public interest for the purpose of applying the consumer protection act, chapter 19.86 RCW. A violation of this chapter is not reasonable in relation to the development and preservation of business and is an unfair or deceptive act in trade or commerce and an unfair method of competition for the purpose of applying the consumer protection act, chapter 19.86 RCW.