SB-6284
WA · State · USA
● Pending
Proposed Effective Date: 2026-07-01
Washington Substitute Senate Bill 6284 — Relating to consumer protections for artificial intelligence systems
Summary

Washington SSB 6284 establishes a risk-based regulatory framework for high-risk AI systems that autonomously make, or are a substantial factor in making, consequential decisions (employment, housing, credit, healthcare, education, insurance, legal services, government services, and criminal justice) without meaningful human consideration. Deployers must use industry-standard protections against algorithmic discrimination, conduct at least annual reviews, maintain risk management programs aligned with the NIST AI RMF or an equivalent framework, complete impact assessments, retain records for three years, and notify consumers before an AI-driven consequential decision is made. Developers with 50 or more full-time equivalent employees must also maintain risk management programs. The bill separately requires government agencies to disclose AI use to consumers before or at the time of interaction. Enforcement is exclusively by the Attorney General under the state Consumer Protection Act, with a 45-day pre-suit notice and a 60-day cure period for first violations. No private right of action is created. Significant carve-outs apply to financial institutions, insurers regulated by the OIC, HIPAA-covered entities in certain healthcare contexts, federally approved AI systems, and federal government contracts.

Enforcement & Penalties
Enforcement Authority
Exclusive enforcement by the Washington Attorney General under the Consumer Protection Act, chapter 19.86 RCW. The AG may bring an action in the name of the state or as parens patriae on behalf of state residents. Before commencing an action, the AG must provide 45 days' written notice of the alleged violation. For the first violation, the developer or deployer may cure within 60 days of receiving written notice. No private right of action is created; the statute expressly prohibits enforcement under RCW 19.86.090 and does not incorporate RCW 19.86.093. A rebuttable presumption of reasonable care applies to deployers who comply with the chapter. The government agency AI disclosure obligation in Section 11 is codified in Title 42 RCW and has no specified enforcement mechanism in the bill text.
Penalties
Violations are treated as unfair or deceptive acts in trade or commerce under chapter 19.86 RCW (Washington Consumer Protection Act). Remedies available to the AG include injunctive relief, civil penalties of up to $7,500 per violation under RCW 19.86.140, restitution, and costs of investigation, including reasonable attorney fees. Because the statute bars private enforcement under RCW 19.86.090 and does not incorporate RCW 19.86.093, no private damages action is available and treble damages cannot be recovered.
Who Is Covered
"Deployer" means any person doing business in this state that deploys a high-risk artificial intelligence system to make a consequential decision in the state.
"Developer" means any person doing business in this state that develops, or intentionally and substantially modifies, a high-risk artificial intelligence system intended for use within the state.
What Is Covered
"High-risk artificial intelligence system" means any artificial intelligence system that is specifically intended to autonomously make, or be a substantial factor in making, a consequential decision without meaningful human consideration. "High-risk artificial intelligence system" does not include: (i) Any artificial intelligence system that is intended to: (A) Perform any narrow procedural task; (B) Improve the result of a previously completed human activity; (C) Perform a preparatory task to an assessment relevant to a consequential decision; or (D) Detect any decision-making pattern, or any deviation from any preexisting decision-making pattern; (ii) Any antifraud technology, antimalware, antivirus, calculator, cybersecurity, database, data storage, firewall, internet domain registration, internet website loading, networking, robocall-filtering, spam-filtering, spellchecking, spreadsheet, webcaching, webhosting, search engine, or similar technology; or (iii) Any technology that communicates in natural language for the purpose of providing users with information, making referrals or recommendations, answering questions, or generating other content, and is subject to an acceptable use policy that prohibits generating content that is unlawful.
Compliance Obligations · 8 obligations
H-02 Non-Discrimination & Bias Assessment · H-02.1–H-02.8 · Deployer · Automated Decisionmaking
Sec. 3(1)(a)-(b), (2)(a)-(b)
Plain Language
Deployers must use industry-standard measures to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination. Beginning July 1, 2027, and at least annually thereafter, each deployer (or a contracted third party) must review every deployed high-risk AI system to verify that it is not causing algorithmic discrimination. If discrimination is discovered, the deployer must notify the Attorney General within 90 days. Deployers who comply with the full chapter benefit from a rebuttable presumption of reasonable care in any AG enforcement action. Testing conducted to mitigate discrimination and efforts to expand diversity are expressly excluded from the definition of algorithmic discrimination.
Statutory Text
(1)(a) Beginning July 1, 2027, each deployer of a high-risk artificial intelligence system must use industry-standard means to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination. (b) In any enforcement action brought on or after July 1, 2027, by the attorney general pursuant to section 10 of this act, there is a rebuttable presumption that a deployer of a high-risk artificial intelligence system used reasonable care as required under this section if the deployer complied with this chapter. (2)(a) By July 1, 2027, and at least annually thereafter, a deployer or third party contracted by the deployer shall review the deployment of each high-risk artificial intelligence system deployed by the deployer to ensure that the high-risk artificial intelligence system is not causing algorithmic discrimination. (b) If a deployer subsequently discovers that the high-risk artificial intelligence system has caused algorithmic discrimination, the deployer, without unreasonable delay, but no later than 90 days after the date of the discovery, shall send to the attorney general, in a form and manner prescribed by the attorney general, a notice disclosing the discovery.
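The two deadlines in Section 3 reduce to simple date arithmetic. Below is a minimal sketch; the constant and function names are illustrative assumptions, not anything defined in the bill.

```python
from datetime import date, timedelta

# Illustrative sketch of the Sec. 3 deadlines; names are assumptions,
# not terms defined by SSB 6284.
AG_NOTICE_WINDOW_DAYS = 90           # Sec. 3(2)(b): "no later than 90 days"
FIRST_REVIEW_DUE = date(2027, 7, 1)  # Sec. 3(2)(a): "By July 1, 2027"

def ag_notice_deadline(discovery: date) -> date:
    """Last day to notify the AG after discovering algorithmic discrimination."""
    return discovery + timedelta(days=AG_NOTICE_WINDOW_DAYS)

def next_review_due(last_review: date | None) -> date:
    """First review is due July 1, 2027; thereafter reviews must occur
    at least annually."""
    if last_review is None or last_review < FIRST_REVIEW_DUE:
        return FIRST_REVIEW_DUE
    return last_review + timedelta(days=365)

print(ag_notice_deadline(date(2027, 9, 15)))  # 2027-12-14
print(next_review_due(date(2027, 7, 1)))      # 2028-06-30 (365 days later)
```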
G-01 AI Governance Program & Documentation · G-01.1–G-01.2 · Deployer · Automated Decisionmaking
Sec. 4(1)-(2)
Plain Language
Each deployer of a high-risk AI system must implement and maintain a risk management policy and program beginning July 1, 2027. The program must identify, document, and mitigate known or reasonably foreseeable risks of algorithmic discrimination, using an iterative process that is regularly reviewed and updated over the system's lifecycle. The program's reasonableness is evaluated based on the deployer's size and complexity, the nature and scope of the deployed systems, data sensitivity and volume, and adherence to a recognized risk management framework: the NIST AI RMF, ISO/IEC 42001, an equivalent nationally or internationally recognized standard, or a framework designated by the AG. A single program may cover multiple high-risk AI systems. Small deployers that meet the conditions of Section 7 are exempt: fewer than 50 full-time equivalent employees, no use of the deployer's own data to train the system, use of the system only for its disclosed purposes, and the developer's impact assessment made available to consumers.
Statutory Text
(1) Beginning July 1, 2027, and except as provided in section 6(6) of this act, each deployer of a high-risk artificial intelligence system shall implement and maintain a risk management policy and program to govern the deployer's deployment of a high-risk artificial intelligence system. (2)(a) The risk management policy and program must specify and incorporate the principles, processes, and personnel that the deployer uses to identify, document, and mitigate known or reasonably foreseeable risks of algorithmic discrimination. The risk management policy and program must include an iterative process that is planned, implemented, and regularly and systematically reviewed and updated over the lifecycle of the high-risk artificial intelligence system. (b) A risk management policy and program implemented and maintained pursuant to this subsection must be reasonable, considering: (i) The size and complexity of the deployer; (ii) The nature and scope of the high-risk artificial intelligence systems deployed by the deployer including, but not limited to, the intended uses of such high-risk artificial intelligence systems; (iii) The sensitivity and volume of data processed in connection with the high-risk artificial intelligence systems deployed by the deployer; and (iv) A risk management framework that either: (A) Adheres to the guidance and standards set forth in the latest version of the artificial intelligence risk management framework published by the national institute of standards and technology, ISO/IEC 42001, or another nationally or internationally recognized risk management framework for artificial intelligence systems, if the standards are substantially equivalent to or more stringent than the requirements; or (B) Complies with any risk management framework for artificial intelligence systems that the attorney general, in the attorney general's discretion, may designate. (c) A risk management policy and program implemented and maintained pursuant to this subsection (2) may cover multiple high-risk artificial intelligence systems deployed by the deployer.
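The small-deployer exemption summarized above is a four-part conjunctive test. A hypothetical sketch follows; the field names are invented for illustration and do not come from the bill.

```python
from dataclasses import dataclass

# Hypothetical encoding of the Sec. 7 small-deployer exemption conditions
# as summarized above; all field names are illustrative assumptions.
@dataclass
class DeployerProfile:
    fte_count: int                         # full-time equivalent employees
    trains_with_own_data: bool             # deployer uses its own data to train
    used_only_for_disclosed_purposes: bool
    developer_assessment_available: bool   # developer's impact assessment shared

def small_deployer_exempt(p: DeployerProfile) -> bool:
    """All four conditions must hold; failing any one leaves the deployer
    subject to the full risk management program obligation."""
    return (
        p.fte_count < 50
        and not p.trains_with_own_data
        and p.used_only_for_disclosed_purposes
        and p.developer_assessment_available
    )

print(small_deployer_exempt(DeployerProfile(30, False, True, True)))  # True
print(small_deployer_exempt(DeployerProfile(30, True, True, True)))   # False
```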
G-01 AI Governance Program & Documentation · G-01.1–G-01.2 · Developer · Automated Decisionmaking
Sec. 5(1)-(5)
Plain Language
Each developer with 50 or more full-time equivalent employees that develops a high-risk AI system must implement and maintain a risk management policy and program beginning July 1, 2027, with the same substantive requirements as the deployer program (Sec. 4): the program must identify, document, and mitigate algorithmic discrimination risks using an iterative process and must align with the NIST AI RMF, ISO/IEC 42001, an equivalent framework, or one designated by the AG. A developer that also deploys its own system is not required to produce the developer-side documentation unless the system is provided to an unaffiliated deployer. Developers with fewer than 50 FTEs are entirely exempt from this section.
Statutory Text
(1) Beginning July 1, 2027, and except as provided in section 6(6) of this act, each developer of a high-risk artificial intelligence system shall implement and maintain a risk management policy and program to govern the developer's deployment of a high-risk artificial intelligence system. (2)(a) The risk management policy and program must specify and incorporate the principles, processes, and personnel that the deployer uses to identify, document, and mitigate known or reasonably foreseeable risks of algorithmic discrimination. The risk management policy and program must include an iterative process that is planned, implemented, and regularly and systematically reviewed and updated over the life cycle of the high-risk artificial intelligence system. (b) A risk management policy and program implemented and maintained pursuant to this subsection must be reasonable, considering: (i) The size and complexity of the developer; (ii) The nature and scope of the high-risk artificial intelligence systems deployed by the developer including, but not limited to, the intended uses of such high-risk artificial intelligence systems; (iii) The sensitivity and volume of data processed in connection with the high-risk artificial intelligence systems deployed by the developer; and (iv) A risk management framework that either: (A) Adheres to the guidance and standards set forth in the latest version of the artificial intelligence risk management framework published by the national institute of standards and technology, ISO/IEC 42001, or another nationally or internationally recognized risk management framework for artificial intelligence systems, if the standards are substantially equivalent to or more stringent than the requirements; or (B) Complies with any risk management framework for artificial intelligence systems that the attorney general, in the attorney general's discretion, may designate. (c) A risk management policy and program implemented and maintained pursuant to this subsection (2) may cover multiple high-risk artificial intelligence systems deployed by the deployer. (3) A developer that also serves as a deployer for any high-risk artificial intelligence system may not be required to generate the documentation required by this section unless such high-risk artificial intelligence system is provided to an unaffiliated entity acting as a deployer or as otherwise required by law. (5) This section does not apply to a developer with fewer than 50 full-time equivalent employees.
H-02 Non-Discrimination & Bias Assessment · H-02.3–H-02.10 · Deployer · Automated Decisionmaking
Sec. 6(1)-(7)
Plain Language
Deployers must complete an impact assessment for each high-risk AI system deployed on or after July 1, 2027, and within 90 days of any intentional and substantial modification. The assessment must cover the system's purpose and intended uses, an analysis of algorithmic discrimination risks and mitigation steps, input data categories, outputs, performance metrics and known limitations, transparency measures, and post-deployment monitoring and oversight. After a substantial modification, the assessment must also disclose how the system's actual use compared with the developer's intended uses. A single assessment may cover a comparable set of systems, and an assessment completed to comply with another law or regulation satisfies this requirement if it is reasonably similar in scope and effect. The most recent impact assessment, supporting records, and prior assessments must be retained for at least three years after final deployment. Small deployers meeting the conditions of Section 7 (fewer than 50 FTEs, no own-data training, system used only for disclosed purposes, developer's impact assessment made available) are exempt.
Statutory Text
(1) Except as provided in subsection (6) of this section, a deployer that deploys a high-risk artificial intelligence system on or after July 1, 2027, or a third party contracted by the deployer for such purposes, shall complete an impact assessment for: (a) The high-risk artificial intelligence system; and (b) A deployed high-risk artificial intelligence system no later than 90 days after any intentional and substantial modification to such high-risk artificial intelligence system is made available. (2) Each impact assessment completed pursuant to this section must include, at a minimum, and to the extent reasonably known by, or available to, the deployer: (a) A statement by the deployer disclosing the purpose, intended use cases and deployment context of, and benefits afforded by, the high-risk artificial intelligence system; (b) An analysis of whether the deployment of the high-risk artificial intelligence system poses any known or reasonably foreseeable risks of algorithmic discrimination and, if so, the nature of such algorithmic discrimination and the steps that have been taken to mitigate such risks; (c) A description of the following: (i) The categories of data the high-risk artificial intelligence system processes as inputs; (ii) The outputs the high-risk artificial intelligence system produces; (iii) Any metrics used to evaluate the performance and known limitations of the high-risk artificial intelligence system; (iv) A description of any transparency measures taken concerning the high-risk artificial intelligence system, such as any measures taken to disclose to a consumer that such high-risk artificial intelligence system is in use when such high-risk artificial intelligence system is in use; and (v) A description of the postdeployment monitoring and user safeguards provided concerning such high-risk artificial intelligence system, such as the oversight process established by the deployer to address issues arising from deployment of such high-risk artificial intelligence system. (3) In addition to the information required under subsection (2)(c) of this section, each impact assessment completed following an intentional and substantial modification made to a high-risk artificial intelligence system on or after July 1, 2027, must include a statement disclosing the extent to which the high-risk artificial intelligence system was used in a manner that was consistent with, or varied from, the developer's intended uses of such high-risk artificial intelligence system. (4) A single impact assessment may address a comparable set of high-risk artificial intelligence systems deployed by a deployer. (5) If a deployer, or a third party contracted by the deployer, completes an impact assessment for the purpose of complying with another applicable law or regulation, such impact assessment satisfies the requirements established in this section if such impact assessment is reasonably similar in scope and effect to the impact assessment that would otherwise be completed pursuant to this subsection. (6) A deployer shall maintain the most recently completed impact assessment for a high-risk artificial intelligence system as required under this section, relevant records supporting the impact assessment, and prior impact assessments, if any, for a period of at least three years following the final deployment of the high-risk artificial intelligence system.
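One way to keep the Sec. 6(2)-(3) contents straight is to model the assessment as a record with one field per required element. This structure is an illustration only; the bill prescribes content, not format, and the names below are assumptions.

```python
from dataclasses import dataclass

# Hypothetical checklist of the Sec. 6(2)-(3) impact assessment contents;
# the structure and field names are illustrative, not prescribed by the bill.
@dataclass
class ImpactAssessment:
    purpose_and_intended_uses: str            # Sec. 6(2)(a)
    discrimination_risk_analysis: str         # Sec. 6(2)(b): risks and mitigation
    input_data_categories: list[str]          # Sec. 6(2)(c)(i)
    outputs_description: str                  # Sec. 6(2)(c)(ii)
    performance_metrics_and_limitations: str  # Sec. 6(2)(c)(iii)
    transparency_measures: str                # Sec. 6(2)(c)(iv)
    postdeployment_monitoring: str            # Sec. 6(2)(c)(v)
    # Required only after an intentional and substantial modification:
    use_vs_developer_intent: str | None = None  # Sec. 6(3)

def post_modification_complete(a: ImpactAssessment) -> bool:
    """After a substantial modification, Sec. 6(3) also requires a statement
    comparing actual use against the developer's intended uses."""
    return a.use_vs_developer_intent is not None
```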
H-01 Human Oversight of Automated Decisions · H-01.1–H-01.3 · Deployer · Automated Decisionmaking
Sec. 8(1)-(2)
Plain Language
Each time a deployer uses a high-risk AI system to make or substantially factor into a consequential decision about a Washington consumer, the deployer must — before the decision is made — notify the consumer that AI is being used and provide a statement disclosing: the AI system's purpose and the nature of the consequential decisions it makes, the deployer's contact information, and a plain-language description of the system. This obligation takes effect July 1, 2026, one year earlier than the risk management and impact assessment obligations. The notification must occur on a per-decision basis, not just at onboarding.
Statutory Text
Beginning July 1, 2026, each time a deployer deploys a high-risk artificial intelligence system to make, or be a substantial factor in making, a consequential decision concerning a consumer, the deployer shall: (1) Notify the consumer that the deployer has deployed an artificial intelligence system to make, or be a substantial factor in making, a consequential decision before the decision is made; and (2) Provide to the consumer a statement disclosing: (a) The purpose of the high-risk artificial intelligence system and the nature of the consequential decisions; (b) The contact information for the deployer; and (c) A description, in plain language, of the high-risk artificial intelligence system.
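The Sec. 8 notice carries three required elements and must be delivered per decision, before the decision is made. A minimal sketch under those constraints; the structure and the deliver callback are assumptions for illustration, not anything the bill specifies.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical shape of the Sec. 8 pre-decision consumer notice; the bill
# prescribes the required content, not this structure or delivery channel.
@dataclass
class ConsumerNotice:
    system_purpose: str              # Sec. 8(2)(a): purpose of the system
    decision_nature: str             # Sec. 8(2)(a): nature of the decisions
    deployer_contact: str            # Sec. 8(2)(b)
    plain_language_description: str  # Sec. 8(2)(c)

def notify_before_decision(notice: ConsumerNotice,
                           deliver: Callable[[ConsumerNotice], None]) -> None:
    """Sec. 8(1): the notice must reach the consumer before the consequential
    decision is made, and is sent per decision, not once at onboarding."""
    deliver(notice)

# Example: print the notice instead of sending it through a real channel.
notify_before_decision(
    ConsumerNotice("resume screening", "hiring decisions",
                   "compliance@example.com", "Ranks applications by fit."),
    print,
)
```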
R-01 Incident Reporting · R-01.3 · Deployer · Automated Decisionmaking
Sec. 3(2)(b)
Plain Language
When a deployer discovers that a deployed high-risk AI system has caused algorithmic discrimination, it must notify the Attorney General within 90 days of discovery, using a form and manner the AG prescribes. This is a reactive obligation triggered by actual discovery of discrimination, separate from the annual review obligation that requires proactively checking for discrimination. Trade secrets and confidential information are protected from disclosure under Section 3(3).
Statutory Text
If a deployer subsequently discovers that the high-risk artificial intelligence system has caused algorithmic discrimination, the deployer, without unreasonable delay, but no later than 90 days after the date of the discovery, shall send to the attorney general, in a form and manner prescribed by the attorney general, a notice disclosing the discovery.
R-02 Regulatory Disclosure & Submissions · R-02.2 · Deployer · Automated Decisionmaking
Sec. 6(6)
Plain Language
Deployers must retain the most recent impact assessment, all supporting records, and all prior impact assessments for at least three years after final deployment of the high-risk AI system. While the statute does not explicitly require production to regulators, the AG's enforcement authority under Section 10 and the CPA's investigative powers make it prudent to maintain records in a form suitable for production. The three-year retention floor runs from final deployment, not from the date the assessment was completed.
Statutory Text
A deployer shall maintain the most recently completed impact assessment for a high-risk artificial intelligence system as required under this section, relevant records supporting the impact assessment, and prior impact assessments, if any, for a period of at least three years following the final deployment of the high-risk artificial intelligence system.
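The retention floor is a fixed offset from a single anchor date. A minimal sketch, assuming a hypothetical retention_floor helper:

```python
from datetime import date

# Illustrative computation of the Sec. 6(6) retention floor: the three-year
# clock runs from final deployment, not from assessment completion.
def retention_floor(final_deployment: date) -> date:
    """Earliest date the assessment and supporting records may be discarded."""
    try:
        return final_deployment.replace(year=final_deployment.year + 3)
    except ValueError:
        # A Feb 29 deployment date rolls forward to Mar 1 of the target year.
        return date(final_deployment.year + 3, 3, 1)

print(retention_floor(date(2030, 5, 10)))  # 2033-05-10
print(retention_floor(date(2028, 2, 29)))  # 2031-03-01
```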
T-01 AI Identity Disclosure · T-01.1 · Government · Government System
Sec. 11(1)-(3)
Plain Language
Government agencies that deploy AI systems intended to interact with consumers must disclose — before or at the time of interaction — that the consumer is interacting with AI. The disclosure must be clear, conspicuously posted, written in plain language, and may not use dark patterns. A hyperlink to a separate web page is acceptable. Critically, the disclosure is unconditional — it must be made even if it would be obvious to a reasonable consumer that they are interacting with AI. This applies to any AI system (not just high-risk systems) and covers government agencies (which are excluded from the 'person' definition and therefore from the Title 19 chapter's deployer/developer obligations). This section is codified in Title 42 RCW and does not include a specified enforcement mechanism.
Statutory Text
(1) A government agency that makes available an artificial intelligence system intended to interact with consumers must disclose to each consumer, before or at the time of interaction, that the consumer is interacting with an artificial intelligence system. The disclosure must be: (a) Clear and conspicuously posted; (b) Written in plain language; and (c) May not use a dark pattern. (2) The disclosure may be provided by using a hyperlink to direct a consumer to a separate web page. (3) A person is required to make the disclosure under subsection (1) of this section regardless of whether it would be obvious to a reasonable consumer that the consumer is interacting with an artificial intelligence system.