SB-6284
WA · State · USA
● Pending
Proposed Effective Date
2026-07-01
Washington Substitute Senate Bill 6284 — An Act Relating to Consumer Protections for Artificial Intelligence Systems
Summary

Washington SSB 6284 establishes a risk-based framework for regulating high-risk AI systems used to make or substantially influence consequential decisions affecting consumers in areas such as employment, housing, credit, healthcare, insurance, education, and legal services. The bill imposes obligations on deployers (effective July 1, 2026 for consumer notification; July 1, 2027 for risk management, bias review, and impact assessments) and developers (risk management program, effective July 1, 2027 with a small-business exemption for developers with fewer than 50 employees). Deployers must notify consumers before an AI-driven consequential decision is made, conduct annual algorithmic discrimination reviews, complete and retain impact assessments for at least three years, and report discovered discrimination to the AG within 90 days. A separate provision requires government agencies to disclose AI use to consumers unconditionally. Enforcement is exclusively by the Attorney General under the Consumer Protection Act with a 45-day notice and 60-day cure period for first violations; no private right of action exists. The bill also extends and expands the existing AI task force and creates a new AI workplace advisory group.

Enforcement & Penalties
Enforcement Authority
Exclusive enforcement by the Washington Attorney General under the Consumer Protection Act, chapter 19.86 RCW. The AG may bring an action in the name of the state or as parens patriae on behalf of state residents. Before commencing an action, the AG must provide 45 days' written notice of the alleged violation to the deployer or developer. For the first violation, the developer or deployer may cure the noticed violation within 60 days of receiving written notice. No private right of action is available. The statute expressly prohibits enforcement under RCW 19.86.090 and does not incorporate RCW 19.86.093.
Penalties
A violation of this chapter is deemed an unfair or deceptive act in trade or commerce for purposes of applying the Washington Consumer Protection Act (chapter 19.86 RCW). Remedies available to the AG under chapter 19.86 RCW include civil penalties of up to $7,500 per violation (RCW 19.86.140), injunctive relief, restitution, and costs of investigation. The statute does not incorporate RCW 19.86.093 and expressly bars private actions under RCW 19.86.090, the provision that otherwise authorizes private suits and treble damages. A rebuttable presumption of reasonable care applies if the deployer complied with the chapter.
Who Is Covered
"Deployer" means any person doing business in this state that deploys a high-risk artificial intelligence system to make a consequential decision in the state.
"Developer" means any person doing business in this state that develops, or intentionally and substantially modifies, a high-risk artificial intelligence system intended for use within the state.
What Is Covered
"High-risk artificial intelligence system" means any artificial intelligence system that is specifically intended to autonomously make, or be a substantial factor in making, a consequential decision without meaningful human consideration. "High-risk artificial intelligence system" does not include: (i) Any artificial intelligence system that is intended to: (A) Perform any narrow procedural task; (B) Improve the result of a previously completed human activity; (C) Perform a preparatory task to an assessment relevant to a consequential decision; or (D) Detect any decision-making pattern, or any deviation from any preexisting decision-making pattern; (ii) Any antifraud technology, antimalware, antivirus, calculator, cybersecurity, database, data storage, firewall, internet domain registration, internet website loading, networking, robocall-filtering, spam-filtering, spellchecking, spreadsheet, webcaching, webhosting, search engine, or similar technology; or (iii) Any technology that communicates in natural language for the purpose of providing users with information, making referrals or recommendations, answering questions, or generating other content, and is subject to an acceptable use policy that prohibits generating content that is unlawful.
Compliance Obligations · 9 obligations
H-02 Non-Discrimination & Bias Assessment · H-02.1, H-02.8 · Deployer · Automated Decisionmaking
Sec. 3(1)(a)-(b), (2)(a)-(b)
Plain Language
Beginning July 1, 2027, deployers must use industry-standard measures to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination from their high-risk AI systems, and must conduct at least annual reviews (internally or via a contracted third party) to verify that each deployed high-risk AI system is not causing algorithmic discrimination. If discrimination is discovered, the deployer must notify the AG within 90 days of discovery. Compliance with the full chapter creates a rebuttable presumption of reasonable care. Trade secret protections apply — deployers need not disclose proprietary information. Algorithmic discrimination is defined by reference to Washington's Law Against Discrimination (chapter 49.60 RCW) and federal law, with carve-outs for bias testing, diversity expansion, and private clubs.
Statutory Text
(1)(a) Beginning July 1, 2027, each deployer of a high-risk artificial intelligence system must use industry-standard means to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination. (b) In any enforcement action brought on or after July 1, 2027, by the attorney general pursuant to section 10 of this act, there is a rebuttable presumption that a deployer of a high-risk artificial intelligence system used reasonable care as required under this section if the deployer complied with this chapter. (2)(a) By July 1, 2027, and at least annually thereafter, a deployer or third party contracted by the deployer shall review the deployment of each high-risk artificial intelligence system deployed by the deployer to ensure that the high-risk artificial intelligence system is not causing algorithmic discrimination. (b) If a deployer subsequently discovers that the high-risk artificial intelligence system has caused algorithmic discrimination, the deployer, without unreasonable delay, but no later than 90 days after the date of the discovery, shall send to the attorney general, in a form and manner prescribed by the attorney general, a notice disclosing the discovery. (3) Nothing in this section may be construed to require a deployer to disclose any trade secret, or other confidential or proprietary information.
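For teams tracking the review cadence, a minimal sketch of the Sec. 3(2)(a) timing, assuming "at least annually" means no more than one calendar year between reviews (the bill does not define the interval more precisely; all names and constants here are illustrative, not the bill's terms):

```python
from datetime import date

FIRST_REVIEW_DUE = date(2027, 7, 1)  # Sec. 3(2)(a): first review due by this date

def next_review_due(last_review: date | None) -> date:
    """Due date of the next algorithmic-discrimination review."""
    if last_review is None:
        return FIRST_REVIEW_DUE
    try:
        # "At least annually": assume one calendar year after the last review.
        return last_review.replace(year=last_review.year + 1)
    except ValueError:  # Feb 29 anniversary falling in a non-leap year
        return last_review.replace(year=last_review.year + 1, month=3, day=1)
```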
R-01 Incident Reporting · R-01.3 · Deployer · Automated Decisionmaking
Sec. 3(2)(b)
Plain Language
When a deployer discovers that a deployed high-risk AI system has caused algorithmic discrimination, it must notify the Attorney General within 90 days of discovery, using the form and manner the AG prescribes. This is an incident-triggered reporting obligation, distinct from the annual review requirement. The 90-day clock runs from the date of discovery, not the date the discrimination occurred.
Statutory Text
If a deployer subsequently discovers that the high-risk artificial intelligence system has caused algorithmic discrimination, the deployer, without unreasonable delay, but no later than 90 days after the date of the discovery, shall send to the attorney general, in a form and manner prescribed by the attorney general, a notice disclosing the discovery.
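The 90-day window lends itself to simple date arithmetic. A minimal sketch, assuming the deadline is measured in calendar days from discovery (the statute does not say business versus calendar days; names and constants are illustrative):

```python
from datetime import date, timedelta

AG_NOTICE_WINDOW_DAYS = 90  # Sec. 3(2)(b): outer bound, measured from discovery

def ag_notice_deadline(discovery_date: date) -> date:
    """Latest date to send the AG the algorithmic-discrimination notice.

    The clock runs from discovery, not from when the discrimination
    occurred, and the statute separately requires notice "without
    unreasonable delay," so day 90 is a ceiling rather than a target.
    """
    return discovery_date + timedelta(days=AG_NOTICE_WINDOW_DAYS)

# Example: discrimination discovered March 3, 2028.
print(ag_notice_deadline(date(2028, 3, 3)))  # 2028-06-01
```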
G-01 AI Governance Program & Documentation · G-01.1, G-01.2 · Deployer · Automated Decisionmaking
Sec. 4(1)-(3)
Plain Language
Deployers must implement and maintain a formal risk management policy and program governing each high-risk AI system deployment by July 1, 2027. The program must identify the principles, processes, and personnel used to identify, document, and mitigate algorithmic discrimination risks, and must include an iterative lifecycle review process. Reasonableness is assessed against deployer size and complexity, system scope, data sensitivity and volume, and the risk framework used: adherence to the NIST AI RMF, ISO/IEC 42001, or an equivalent or more stringent recognized standard serves as a safe harbor, and the AG may designate additional acceptable frameworks. A single program may cover multiple high-risk AI systems. Small deployer exemptions apply under Sec. 7. Trade secret protections apply.
Statutory Text
(1) Beginning July 1, 2027, and except as provided in section 6(6) of this act, each deployer of a high-risk artificial intelligence system shall implement and maintain a risk management policy and program to govern the deployer's deployment of a high-risk artificial intelligence system. (2)(a) The risk management policy and program must specify and incorporate the principles, processes, and personnel that the deployer uses to identify, document, and mitigate known or reasonably foreseeable risks of algorithmic discrimination. The risk management policy and program must include an iterative process that is planned, implemented, and regularly and systematically reviewed and updated over the lifecycle of the high-risk artificial intelligence system. (b) A risk management policy and program implemented and maintained pursuant to this subsection must be reasonable, considering: (i) The size and complexity of the deployer; (ii) The nature and scope of the high-risk artificial intelligence systems deployed by the deployer including, but not limited to, the intended uses of such high-risk artificial intelligence systems; (iii) The sensitivity and volume of data processed in connection with the high-risk artificial intelligence systems deployed by the deployer; and (iv) A risk management framework that either: (A) Adheres to the guidance and standards set forth in the latest version of the artificial intelligence risk management framework published by the national institute of standards and technology, ISO/IEC 42001, or another nationally or internationally recognized risk management framework for artificial intelligence systems, if the standards are substantially equivalent to or more stringent than the requirements; or (B) Complies with any risk management framework for artificial intelligence systems that the attorney general, in the attorney general's discretion, may designate. (c) A risk management policy and program implemented and maintained pursuant to this subsection (2) may cover multiple high-risk artificial intelligence systems deployed by the deployer. (3) Nothing in this section may be construed to require a deployer to disclose any trade secret, or other confidential or proprietary information.
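As a sketch of how a compliance team might record the Sec. 4 program elements (field names are hypothetical; whether an alternative framework is "substantially equivalent to or more stringent than" the named ones is a qualitative judgment not modeled here):

```python
from dataclasses import dataclass

# The statute names two safe-harbor frameworks outright; AG-designated
# frameworks (Sec. 4(2)(b)(iv)(B)) also qualify.
RECOGNIZED_FRAMEWORKS = {"NIST AI RMF", "ISO/IEC 42001"}

@dataclass
class RiskManagementProgram:
    """Illustrative record of a Sec. 4 program; not the bill's terms."""
    framework: str                    # e.g., "NIST AI RMF"
    covered_system_ids: list[str]     # one program may cover several systems (Sec. 4(2)(c))
    lifecycle_review_in_place: bool   # iterative, regularly updated process (Sec. 4(2)(a))
    ag_designated_framework: bool = False

    def framework_qualifies(self) -> bool:
        return self.framework in RECOGNIZED_FRAMEWORKS or self.ag_designated_framework
```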
G-01 AI Governance Program & Documentation · G-01.1, G-01.2 · Developer · Automated Decisionmaking
Sec. 5(1)-(5)
Plain Language
Beginning July 1, 2027, developers of high-risk AI systems must implement and maintain a risk management policy and program parallel to the deployer obligation, with the same reasonableness factors and safe harbor frameworks (NIST AI RMF, ISO/IEC 42001, AG-designated frameworks). A developer that also serves as a deployer is not required to produce the documentation required by this section unless the system is provided to an unaffiliated entity acting as a deployer. This section does not apply to developers with fewer than 50 full-time equivalent employees. Trade secret protections apply.
Statutory Text
(1) Beginning July 1, 2027, and except as provided in section 6(6) of this act, each developer of a high-risk artificial intelligence system shall implement and maintain a risk management policy and program to govern the developer's deployment of a high-risk artificial intelligence system. (2)(a) The risk management policy and program must specify and incorporate the principles, processes, and personnel that the deployer uses to identify, document, and mitigate known or reasonably foreseeable risks of algorithmic discrimination. The risk management policy and program must include an iterative process that is planned, implemented, and regularly and systematically reviewed and updated over the life cycle of the high-risk artificial intelligence system. (b) A risk management policy and program implemented and maintained pursuant to this subsection must be reasonable, considering: (i) The size and complexity of the developer; (ii) The nature and scope of the high-risk artificial intelligence systems deployed by the developer including, but not limited to, the intended uses of such high-risk artificial intelligence systems; (iii) The sensitivity and volume of data processed in connection with the high-risk artificial intelligence systems deployed by the developer; and (iv) A risk management framework that either: (A) Adheres to the guidance and standards set forth in the latest version of the artificial intelligence risk management framework published by the national institute of standards and technology, ISO/IEC 42001, or another nationally or internationally recognized risk management framework for artificial intelligence systems, if the standards are substantially equivalent to or more stringent than the requirements; or (B) Complies with any risk management framework for artificial intelligence systems that the attorney general, in the attorney general's discretion, may designate. (c) A risk management policy and program implemented and maintained pursuant to this subsection (2) may cover multiple high-risk artificial intelligence systems deployed by the deployer. (3) A developer that also serves as a deployer for any high-risk artificial intelligence system may not be required to generate the documentation required by this section unless such high-risk artificial intelligence system is provided to an unaffiliated entity acting as a deployer or as otherwise required by law. (4) Nothing in this section may be construed to require a developer to disclose any trade secret, or other confidential or proprietary information. (5) This section does not apply to a developer with fewer than 50 full-time equivalent employees.
H-02 Non-Discrimination & Bias Assessment · H-02.3, H-02.10 · Deployer · Automated Decisionmaking
Sec. 6(1)-(7)
Plain Language
Deployers must complete an impact assessment before deploying any high-risk AI system on or after July 1, 2027, and within 90 days after any intentional and substantial modification. The assessment must cover: system purpose and intended uses, algorithmic discrimination risk analysis with mitigation steps, data input categories, outputs, performance metrics and limitations, transparency measures, and post-deployment monitoring and safeguards. Post-modification assessments must also disclose how actual use compared to the developer's intended uses. A single assessment may cover comparable systems. Assessments completed for other legal compliance purposes satisfy this requirement if reasonably similar in scope and effect. Records must be maintained for at least three years following final deployment, including the most recent assessment, supporting records, and prior assessments. Small deployer exemptions under Sec. 7 apply if the deployer has fewer than 50 FTEs, does not use its own data to train the system, uses the system only for its disclosed intended uses, relies on non-deployer data for the system's continued learning, and makes the developer's impact assessment available to consumers. Trade secret protections apply.
Statutory Text
(1) Except as provided in subsection (6) of this section, a deployer that deploys a high-risk artificial intelligence system on or after July 1, 2027, or a third party contracted by the deployer for such purposes, shall complete an impact assessment for: (a) The high-risk artificial intelligence system; and (b) A deployed high-risk artificial intelligence system no later than 90 days after any intentional and substantial modification to such high-risk artificial intelligence system is made available. (2) Each impact assessment completed pursuant to this section must include, at a minimum, and to the extent reasonably known by, or available to, the deployer: (a) A statement by the deployer disclosing the purpose, intended use cases and deployment context of, and benefits afforded by, the high-risk artificial intelligence system; (b) An analysis of whether the deployment of the high-risk artificial intelligence system poses any known or reasonably foreseeable risks of algorithmic discrimination and, if so, the nature of such algorithmic discrimination and the steps that have been taken to mitigate such risks; (c) A description of the following: (i) The categories of data the high-risk artificial intelligence system processes as inputs; (ii) The outputs the high-risk artificial intelligence system produces; (iii) Any metrics used to evaluate the performance and known limitations of the high-risk artificial intelligence system; (iv) A description of any transparency measures taken concerning the high-risk artificial intelligence system, such as any measures taken to disclose to a consumer that such high-risk artificial intelligence system is in use when such high-risk artificial intelligence system is in use; and (v) A description of the postdeployment monitoring and user safeguards provided concerning such high-risk artificial intelligence system, such as the oversight process established by the deployer to address issues arising from deployment of such high-risk artificial intelligence system. (3) In addition to the information required under subsection (2)(c) of this section, each impact assessment completed following an intentional and substantial modification made to a high-risk artificial intelligence system on or after July 1, 2027, must include a statement disclosing the extent to which the high-risk artificial intelligence system was used in a manner that was consistent with, or varied from, the developer's intended uses of such high-risk artificial intelligence system. (4) A single impact assessment may address a comparable set of high-risk artificial intelligence systems deployed by a deployer. (5) If a deployer, or a third party contracted by the deployer, completes an impact assessment for the purpose of complying with another applicable law or regulation, such impact assessment satisfies the requirements established in this section if such impact assessment is reasonably similar in scope and effect to the impact assessment that would otherwise be completed pursuant to this subsection. (6) A deployer shall maintain the most recently completed impact assessment for a high-risk artificial intelligence system as required under this section, relevant records supporting the impact assessment, and prior impact assessments, if any, for a period of at least three years following the final deployment of the high-risk artificial intelligence system. 
(7) Nothing in this section may be construed to require a deployer to disclose any trade secret, or other confidential or proprietary information.
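The Sec. 6 timing rules reduce to two triggers and a retention clock. A minimal sketch under the reading summarized above (calendar-day arithmetic assumed; names and constants are illustrative):

```python
from datetime import date, timedelta

IA_EFFECTIVE = date(2027, 7, 1)   # Sec. 6(1): applies to deployments on/after this date
MOD_WINDOW_DAYS = 90              # Sec. 6(1)(b): 90 days after a substantial modification
RETENTION_YEARS = 3               # Sec. 6(6): minimum retention period

def assessment_due(deployment: date, modification: date | None = None) -> date | None:
    """Latest completion date for an impact assessment under this reading."""
    if modification is not None:
        return modification + timedelta(days=MOD_WINDOW_DAYS)
    if deployment >= IA_EFFECTIVE:
        return deployment  # complete by the time of deployment, per the summary above
    return None  # outside Sec. 6(1)(a) and no later modification

def retention_ends(final_deployment: date) -> date:
    """Approximate end of the minimum retention period (Sec. 6(6))."""
    try:
        return final_deployment.replace(year=final_deployment.year + RETENTION_YEARS)
    except ValueError:  # Feb 29 anniversary falling in a non-leap year
        return final_deployment.replace(year=final_deployment.year + RETENTION_YEARS,
                                        month=3, day=1)
```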
H-01 Human Oversight of Automated Decisions · H-01.1, H-01.3 · Deployer · Automated Decisionmaking
Sec. 8(1)-(2)
Plain Language
Each time a deployer uses a high-risk AI system to make, or be a substantial factor in making, a consequential decision about a consumer, the deployer must, before the decision is made: (1) notify the consumer that an AI system is being used, and (2) provide a statement disclosing the system's purpose, the nature of the consequential decisions it makes, the deployer's contact information, and a plain-language description of the AI system. This obligation takes effect July 1, 2026 — one year earlier than the risk management and impact assessment provisions. Consequential decisions cover a broad set of high-stakes domains including employment, housing, credit, healthcare, insurance, education, legal services, essential government services, and criminal justice releases.
Statutory Text
Beginning July 1, 2026, each time a deployer deploys a high-risk artificial intelligence system to make, or be a substantial factor in making, a consequential decision concerning a consumer, the deployer shall: (1) Notify the consumer that the deployer has deployed an artificial intelligence system to make, or be a substantial factor in making, a consequential decision before the decision is made; and (2) Provide to the consumer a statement disclosing: (a) The purpose of the high-risk artificial intelligence system and the nature of the consequential decisions; (b) The contact information for the deployer; and (c) A description, in plain language, of the high-risk artificial intelligence system.
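A deployer's workflow can gate consequential decisions on the two Sec. 8 prerequisites. An illustrative sketch (field names are hypothetical, not the bill's terms):

```python
from dataclasses import dataclass

@dataclass
class DeployerStatement:
    """Illustrative Sec. 8(2) statement contents."""
    system_purpose: str
    nature_of_decisions: str
    deployer_contact: str
    plain_language_description: str

def may_make_decision(consumer_notified_in_advance: bool,
                      statement: DeployerStatement | None) -> bool:
    """Both Sec. 8 prerequisites must be satisfied before the decision is
    made: advance notice (Sec. 8(1)) and the disclosure statement (Sec. 8(2))."""
    return consumer_notified_in_advance and statement is not None

# Example: notice sent and statement provided before an AI-screened hiring decision.
s = DeployerStatement("resume screening", "employment decisions",
                      "ai-questions@example.com",
                      "Ranks applications against the posted criteria.")
print(may_make_decision(True, s))  # True
```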
T-01 AI Identity Disclosure · T-01.1 · Government · Government System
Sec. 11(1)-(3)
Plain Language
Government agencies that deploy AI systems intended to interact with consumers must disclose — before or at the time of interaction — that the consumer is interacting with an AI system. The disclosure must be clear and conspicuous, written in plain language, and must not use a dark pattern. A hyperlink to a separate web page is an acceptable disclosure method. Critically, the disclosure is unconditional — it must be provided even when it would be obvious to a reasonable consumer that they are interacting with AI. This provision is codified separately in Title 42 RCW (government operations), distinct from the private-sector obligations in Title 19 RCW.
Statutory Text
(1) A government agency that makes available an artificial intelligence system intended to interact with consumers must disclose to each consumer, before or at the time of interaction, that the consumer is interacting with an artificial intelligence system. The disclosure must be: (a) Clear and conspicuously posted; (b) Written in plain language; and (c) May not use a dark pattern. (2) The disclosure may be provided by using a hyperlink to direct a consumer to a separate web page. (3) A person is required to make the disclosure under subsection (1) of this section regardless of whether it would be obvious to a reasonable consumer that the consumer is interacting with an artificial intelligence system.
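Because Sec. 11(3) removes any obviousness exception, a compliance check for the disclosure reduces to its form and timing requirements. An illustrative sketch (field names are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class AgencyDisclosure:
    """Illustrative Sec. 11 disclosure attributes."""
    clear_and_conspicuous: bool
    plain_language: bool
    uses_dark_pattern: bool
    delivered_before_or_at_interaction: bool

def compliant(d: AgencyDisclosure) -> bool:
    # Sec. 11(3): required even when AI use is obvious, so there is no
    # obviousness escape hatch to model.
    return (d.clear_and_conspicuous and d.plain_language
            and not d.uses_dark_pattern and d.delivered_before_or_at_interaction)
```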
Other · Automated Decisionmaking
Sec. 10(1)(a)-(c), (2)
Plain Language
This provision establishes the exclusive enforcement mechanism for the chapter: only the Attorney General may bring enforcement actions, with violations treated as unfair or deceptive acts under the Washington Consumer Protection Act (chapter 19.86 RCW). Before bringing an action, the AG must give 45 days' written notice of the alleged violation; for first violations, the developer or deployer has 60 days to cure. Private rights of action are expressly prohibited — the statute bars suits under RCW 19.86.090 (the private right of action, which otherwise carries treble damages) and does not incorporate RCW 19.86.093. Existing data privacy and security obligations are preserved. This provision creates no new compliance obligation.
Statutory Text
(1)(a) The attorney general may bring an action in the name of the state, or as parens patriae on behalf of persons residing in the state, to enforce this chapter. For actions brought by the attorney general to enforce this chapter, a violation of this chapter is an unfair or deceptive act in trade or commerce for the purpose of applying the consumer protection act, chapter 19.86 RCW. An action to enforce this chapter may not be brought under RCW 19.86.090. (b) The office of the attorney general, before commencing an action under the consumer protection act, chapter 19.86 RCW, must provide 45 days' written notice to a deployer or developer of the alleged violation of this chapter. For the first violation, the developer or deployer may cure the noticed violation within 60 days of receiving the written notice. (c) This chapter may be enforced solely by the attorney general under the consumer protection act, chapter 19.86 RCW, and may not be construed as providing the basis for, or be subject to, a private right of action for violations of this chapter. This chapter does not incorporate RCW 19.86.093. (2) Nothing in this chapter may be construed to limit or otherwise affect the obligations of developers and deployers under applicable laws, rules, or regulations relating to data privacy or security.
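The notice-and-cure mechanics are simple date arithmetic. A minimal sketch, assuming calendar days (names and constants are illustrative):

```python
from datetime import date, timedelta

NOTICE_PERIOD_DAYS = 45   # Sec. 10(1)(b): AG written notice before commencing an action
CURE_PERIOD_DAYS = 60     # Sec. 10(1)(b): cure window for a first violation

def earliest_action_date(notice_received: date) -> date:
    """Earliest date the AG may commence an action after written notice."""
    return notice_received + timedelta(days=NOTICE_PERIOD_DAYS)

def cure_deadline(notice_received: date, first_violation: bool) -> date | None:
    """Last day to cure; only a first violation carries a cure right."""
    if not first_violation:
        return None
    return notice_received + timedelta(days=CURE_PERIOD_DAYS)

# Example: notice received January 10, 2028, first violation.
n = date(2028, 1, 10)
print(earliest_action_date(n))  # 2028-02-24
print(cure_deadline(n, True))   # 2028-03-10
```

Note that the 60-day cure window outruns the 45-day minimum notice period; the quoted text does not spell out how the two interact for a first violation.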
Other · Automated Decisionmaking
Sec. 7(1)-(2)
Plain Language
Deployers with fewer than 50 FTEs are exempt from the risk management program (Sec. 4) and impact assessment (Sec. 6) requirements, but only if all of the following conditions are continuously met: the deployer does not use its own data to train the system, the system is used only for its disclosed intended uses, the system's continued learning relies on non-deployer data, and the deployer makes the developer's impact assessment available to consumers. This is a conditional exemption — if any condition ceases to be true, the full requirements apply immediately. This provision creates no new obligation; it narrows the scope of existing ones.
Statutory Text
(1) The requirements in sections 4 and 6 of this act do not apply to a deployer if, at the time the deployer deploys a high-risk artificial intelligence system and at all times while the high-risk artificial intelligence system is deployed: (a) The deployer: (i) Employs fewer than 50 full-time equivalent employees; and (ii) Does not use the deployer's own data to train the high-risk artificial intelligence system; (b) The high-risk artificial intelligence system: (i) Is used for the intended uses that are disclosed by the deployer; and (ii) Continues learning based on data derived from sources other than the deployer's own data; and (c) The deployer makes available to consumers any impact assessment that: (i) The developer of the high-risk artificial intelligence system has completed and provided to the deployers; and (ii) Includes information that is substantially similar to the information in the impact assessment required under section 6 of this act. (2) Nothing in this section may be construed to require a deployer to disclose any trade secret, or other confidential or proprietary information.
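The exemption is a conjunction: every Sec. 7(1) condition must hold at deployment and at all times thereafter. An illustrative point-in-time check (field names are hypothetical; the continuous-compliance requirement means this test must keep passing for the life of the deployment):

```python
from dataclasses import dataclass

@dataclass
class ExemptionFacts:
    """Illustrative Sec. 7(1) conditions."""
    fte_count: int
    trains_on_own_data: bool
    used_only_for_disclosed_intended_uses: bool
    learns_only_from_non_deployer_data: bool
    developer_assessment_available_to_consumers: bool

def small_deployer_exempt(f: ExemptionFacts) -> bool:
    """All conditions must hold; a single lapse ends the exemption."""
    return (
        f.fte_count < 50
        and not f.trains_on_own_data
        and f.used_only_for_disclosed_intended_uses
        and f.learns_only_from_non_deployer_data
        and f.developer_assessment_available_to_consumers
    )
```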