HB-2667
WA · State · USA
● Pending
Proposed Effective Date
2026-07-01
Washington HB 2667 — An Act Relating to Consumer Protections for Artificial Intelligence Systems
Summary

Establishes a comprehensive risk-based framework for deployers of high-risk AI systems making consequential decisions in Washington. Deployers must use industry-standard means to protect consumers from algorithmic discrimination, implement and maintain a risk management policy and program (with NIST AI RMF as a safe harbor), complete impact assessments for each high-risk AI system, conduct at least annual reviews for algorithmic discrimination, notify the attorney general within 90 days if discrimination is discovered, and provide pre-decision notice to consumers. Government agencies must separately disclose when consumers interact with any AI system. Enforcement is exclusively through the attorney general under Washington's Consumer Protection Act, with a 45-day notice requirement and a 60-day cure period for first violations. Also extends and expands an existing AI task force and creates a new AI workplace advisory group.

Enforcement & Penalties
Enforcement Authority
Attorney general enforcement only. The attorney general may bring an action in the name of the state or as parens patriae on behalf of persons residing in the state. A violation is treated as an unfair or deceptive act in trade or commerce under the Consumer Protection Act, chapter 19.86 RCW. Before commencing an action, the attorney general must provide 45 days' written notice to the deployer or developer. For a first violation, the developer or deployer may cure the noticed violation within 60 days of receiving written notice. No private right of action is created — the statute expressly prohibits enforcement under RCW 19.86.090.
Penalties
Violations are enforceable as unfair or deceptive acts under Washington's Consumer Protection Act (chapter 19.86 RCW), which provides for civil penalties up to $7,500 per violation, injunctive relief, and costs including reasonable attorney's fees. The statute expressly bars private actions under RCW 19.86.090. A rebuttable presumption of reasonable care applies to deployers who comply with the chapter.
Who Is Covered
"Deployer" means any person doing business in this state that deploys a high-risk artificial intelligence system in the state.
"Developer" means any person doing business in this state that develops, or intentionally and substantially modifies, a high-risk artificial intelligence system intended for use within the state.
What Is Covered
"High-risk artificial intelligence system": (a) Means any artificial intelligence system designed by its developer to, when deployed, make, or is a substantial factor in making, a consequential decision; and (b) Does not include: (i) Any artificial intelligence system that is intended to: (A) Perform any narrow procedural task; (B) Improve the result of a previously completed human activity; (C) Perform a preparatory task to an assessment relevant to a consequential decision; or (D) Detect any decision-making pattern, or any deviation from any preexisting decision-making pattern; (ii) Any antifraud technology, antimalware, antivirus, calculator, cybersecurity, database, data storage, firewall, internet domain registration, internet website loading, networking, robocall-filtering, spam-filtering, spellchecking, spreadsheet, webcaching, webhosting, search engine, or similar technology; or (iii) Any technology that communicates in natural language for the purpose of providing users with information, making referrals or recommendations, answering questions, or generating other content, and is subject to an acceptable use policy that prohibits generating content that is unlawful.
Compliance Obligations · 9 obligations
H-02 Non-Discrimination & Bias Assessment · H-02.1 · Deployer · Automated Decisionmaking
Sec. 3(1)(a)-(b)
Plain Language
Deployers of high-risk AI systems must use industry-standard measures to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination — defined as unlawful differential impact based on protected characteristics under Washington's anti-discrimination law (RCW 49.60) or federal law. Compliance with the entire chapter creates a rebuttable presumption of reasonable care in any attorney general enforcement action. Testing undertaken to identify and mitigate bias, and uses that expand diversity, are expressly excluded from the definition of algorithmic discrimination.
Statutory Text
(1)(a) Beginning July 1, 2027, each deployer of a high-risk artificial intelligence system must use industry-standard means to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination. (b) In any enforcement action brought on or after July 1, 2027, by the attorney general pursuant to section 9 of this act, there is a rebuttable presumption that a deployer of a high-risk artificial intelligence system used reasonable care as required under this section if the deployer complied with this chapter.
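The bill does not prescribe any particular test for "industry-standard means," so by way of illustration only, the sketch below is a selection-rate disparity screen in the spirit of the EEOC four-fifths rule, one common starting point for disparate-impact review. The function names, group labels, and the 0.8 threshold are assumptions, not statutory requirements.

```python
# Illustrative only: a selection-rate disparity screen in the spirit of the
# EEOC four-fifths rule. HB 2667 mandates no specific test; group labels,
# the 0.8 threshold, and all names here are assumptions.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, favorable_outcome: bool) pairs."""
    totals, favorable = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        if ok:
            favorable[group] += 1
    return {g: favorable[g] / totals[g] for g in totals}

def disparity_flags(decisions, threshold=0.8):
    """Flag groups whose favorable-outcome rate falls below `threshold`
    times the highest group's rate (the conventional four-fifths cutoff)."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: r / best for g, r in rates.items() if r / best < threshold}

sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
print(disparity_flags(sample))  # group B at half the top rate -> flagged
```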
H-02 Non-Discrimination & Bias Assessment · H-02.8 · Deployer · Automated Decisionmaking
Sec. 3(2)(a)
Plain Language
Deployers must conduct at least annual reviews of each deployed high-risk AI system to verify it is not causing algorithmic discrimination. Reviews may be conducted by the deployer itself or by a contracted third party. The first review must be completed by July 1, 2027, with subsequent reviews at least annually thereafter. This is a post-deployment monitoring obligation separate from the pre-deployment impact assessment required under Section 5.
Statutory Text
(2)(a) By July 1, 2027, and at least annually thereafter, a deployer or third party contracted by the deployer shall review the deployment of each high-risk artificial intelligence system deployed by the deployer to ensure that the high-risk artificial intelligence system is not causing algorithmic discrimination.
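As a minimal sketch of the review clock only: the snippet below computes the latest permissible date for the next review. Only the July 1, 2027 first deadline and the "at least annually" cadence come from the bill; 365-day arithmetic and the function name are assumptions.

```python
# Sketch of the Sec. 3(2)(a) review clock. Only the 2027-07-01 first deadline
# and the annual cadence are statutory; 365-day arithmetic is an assumption.
from datetime import date, timedelta

FIRST_REVIEW_DUE = date(2027, 7, 1)

def next_review_due(last_review: date | None) -> date:
    """Latest permissible date for the next algorithmic-discrimination review."""
    if last_review is None:
        return FIRST_REVIEW_DUE
    # "At least annually": no more than one year after the prior review.
    return max(FIRST_REVIEW_DUE, last_review + timedelta(days=365))

print(next_review_due(None))              # 2027-07-01
print(next_review_due(date(2027, 7, 1)))  # 2028-06-30
```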
R-01 Incident Reporting · R-01.3 · Deployer · Automated Decisionmaking
Sec. 3(2)(b)
Plain Language
If a deployer discovers that a deployed high-risk AI system has caused algorithmic discrimination, the deployer must notify the attorney general within 90 days of the discovery. The notice must be submitted in a form and manner prescribed by the attorney general. This is triggered by actual discovery of discrimination, not by a routine review cycle. The trade secret protection in Sec. 3(3) applies — nothing in this section requires disclosure of trade secrets or confidential or proprietary information.
Statutory Text
(b) If a deployer subsequently discovers that the high-risk artificial intelligence system has caused algorithmic discrimination, the deployer, without unreasonable delay, but no later than 90 days after the date of the discovery, shall send to the attorney general, in a form and manner prescribed by the attorney general, a notice disclosing the discovery.
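A companion sketch of the notification clock, under the same assumptions (calendar-day counting; the 90-day outer bound is the only statutory element — notice is still due "without unreasonable delay" even before that date):

```python
# Sketch of the Sec. 3(2)(b) clock: notice to the attorney general is due
# "without unreasonable delay, but no later than 90 days" after discovery.
# Only the 90-day outer bound is statutory; calendar-day counting is assumed.
from datetime import date, timedelta

def ag_notice_deadline(discovery: date) -> date:
    return discovery + timedelta(days=90)

print(ag_notice_deadline(date(2027, 9, 1)))  # 2027-11-30
```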
G-01 AI Governance Program & Documentation · G-01.1 · G-01.2 · Deployer · Automated Decisionmaking
Sec. 4(1)-(2)
Plain Language
Deployers must establish and maintain a risk management policy and program governing deployment of high-risk AI systems. The program must specify the principles, processes, and personnel used to identify, document, and mitigate risks of algorithmic discrimination, and must include an iterative process that is regularly and systematically reviewed and updated over the system's lifecycle. The program must be reasonable considering the deployer's size, system scope, data sensitivity, and adherence to a recognized risk framework — the NIST AI RMF and ISO/IEC 42001 are expressly cited as safe harbors, as is any framework the attorney general may designate. A single program may cover multiple high-risk AI systems. Note that the small-deployer exemption in Sec. 6 (fewer than 50 FTEs, no training on the deployer's own data) reaches only the impact-assessment and annual-review duties; Sec. 4(1) instead cross-references an exception in section 5(6) of the act.
Statutory Text
(1) Beginning July 1, 2027, and except as provided in section 5(6) of this act, each deployer of a high-risk artificial intelligence system shall implement and maintain a risk management policy and program to govern the deployer's deployment of a high-risk artificial intelligence system. (2)(a) The risk management policy and program must specify and incorporate the principles, processes, and personnel that the deployer uses to identify, document, and mitigate known or reasonably foreseeable risks of algorithmic discrimination. The risk management policy and program must include an iterative process that is planned, implemented, and regularly and systematically reviewed and updated over the lifecycle of the high-risk artificial intelligence system. (b) A risk management policy and program implemented and maintained pursuant to this subsection must be reasonable, considering: (i) The size and complexity of the deployer; (ii) The nature and scope of the high-risk artificial intelligence systems deployed by the deployer including, but not limited to, the intended uses of such high-risk artificial intelligence systems; (iii) The sensitivity and volume of data processed in connection with the high-risk artificial intelligence systems deployed by the deployer; and (iv) A risk management framework that either: (A) Adheres to the guidance and standards set forth in the latest version of the artificial intelligence risk management framework published by the national institute of standards and technology, ISO/IEC 42001, or another nationally or internationally recognized risk management framework for artificial intelligence systems, if the standards are substantially equivalent to or more stringent than the requirements; or (B) Complies with any risk management framework for artificial intelligence systems that the attorney general, in the attorney general's discretion, may designate. (c) A risk management policy and program implemented and maintained pursuant to this subsection (2) may cover multiple high-risk artificial intelligence systems deployed by the deployer.
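A minimal sketch of how a deployer might document the Sec. 4(2) program elements for audit. The dataclass shape, field names, and all example values are assumptions; the statute lists what the program must specify, not how it is recorded.

```python
# Hypothetical record of the Sec. 4(2) program elements; shape and names
# are assumptions, the cited subsections supply the required content.
from dataclasses import dataclass

@dataclass
class RiskManagementProgram:
    framework: str              # Sec. 4(2)(b)(iv): e.g. "NIST AI RMF" or "ISO/IEC 42001"
    principles: list[str]       # Sec. 4(2)(a)
    processes: list[str]        # Sec. 4(2)(a)
    personnel: list[str]        # Sec. 4(2)(a)
    covered_systems: list[str]  # Sec. 4(2)(c): one program may span systems
    last_lifecycle_review: str  # Sec. 4(2)(a): iterative, regularly updated

program = RiskManagementProgram(
    framework="NIST AI RMF",
    principles=["identify", "document", "mitigate algorithmic-discrimination risk"],
    processes=["pre-deployment testing", "periodic bias audit"],
    personnel=["AI governance lead"],
    covered_systems=["underwriting-model-v3"],  # hypothetical system name
    last_lifecycle_review="2027-09-30",
)
```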
H-02 Non-Discrimination & Bias Assessment · H-02.3 · H-02.10 · Deployer · Automated Decisionmaking
Sec. 5(1)-(7)
Plain Language
Deployers must complete an impact assessment for each high-risk AI system before or at deployment, and again within 90 days after any intentional and substantial modification. The assessment must cover the system's purpose and use cases, algorithmic discrimination risk analysis and mitigation steps, data inputs and outputs, performance metrics and limitations, transparency measures, and post-deployment monitoring safeguards. After a substantial modification, the assessment must also disclose whether the system was used consistently with the developer's intended uses. A single assessment may cover comparable systems, and an assessment completed for another law satisfies this requirement if reasonably similar in scope. Deployers must retain the most recent impact assessment, supporting records, and all prior assessments for at least three years after final deployment. The small-deployer exemption in Sec. 6 exempts deployers with fewer than 50 FTEs that do not use their own data to train the system, provided they make the developer's impact assessment available to consumers. Trade secrets and confidential information need not be disclosed.
Statutory Text
(1) Except as provided in subsection (6) of this section, a deployer that deploys a high-risk artificial intelligence system on or after July 1, 2027, or a third party contracted by the deployer for such purposes, shall complete an impact assessment for: (a) The high-risk artificial intelligence system; and (b) A deployed high-risk artificial intelligence system no later than 90 days after any intentional and substantial modification to such high-risk artificial intelligence system is made available. (2) Each impact assessment completed pursuant to this section must include, at a minimum, and to the extent reasonably known by, or available to, the deployer: (a) A statement by the deployer disclosing the purpose, intended use cases and deployment context of, and benefits afforded by, the high-risk artificial intelligence system; (b) An analysis of whether the deployment of the high-risk artificial intelligence system poses any known or reasonably foreseeable risks of algorithmic discrimination and, if so, the nature of such algorithmic discrimination and the steps that have been taken to mitigate such risks; (c) A description of the following: (i) The categories of data the high-risk artificial intelligence system processes as inputs; (ii) The outputs the high-risk artificial intelligence system produces; (iii) Any metrics used to evaluate the performance and known limitations of the high-risk artificial intelligence system; (iv) A description of any transparency measures taken concerning the high-risk artificial intelligence system, such as any measures taken to disclose to a consumer that such high-risk artificial intelligence system is in use when such high-risk artificial intelligence system is in use; and (v) A description of the postdeployment monitoring and user safeguards provided concerning such high-risk artificial intelligence system, such as the oversight process established by the deployer to address issues arising from deployment of such high-risk artificial intelligence system. (3) In addition to the information required under subsection (2)(c) of this section, each impact assessment completed following an intentional and substantial modification made to a high-risk artificial intelligence system on or after July 1, 2027, must include a statement disclosing the extent to which the high-risk artificial intelligence system was used in a manner that was consistent with, or varied from, the developer's intended uses of such high-risk artificial intelligence system. (4) A single impact assessment may address a comparable set of high-risk artificial intelligence systems deployed by a deployer. (5) If a deployer, or a third party contracted by the deployer, completes an impact assessment for the purpose of complying with another applicable law or regulation, such impact assessment satisfies the requirements established in this section if such impact assessment is reasonably similar in scope and effect to the impact assessment that would otherwise be completed pursuant to this subsection. (6) A deployer shall maintain the most recently completed impact assessment for a high-risk artificial intelligence system as required under this section, relevant records supporting the impact assessment, and prior impact assessments, if any, for a period of at least three years following the final deployment of the high-risk artificial intelligence system. 
(7) Nothing in this section may be construed to require a deployer to disclose any trade secret, or other confidential or proprietary information.
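A sketch of an impact-assessment record keyed to the Sec. 5(2)-(3) minimum contents, plus the Sec. 5(6) retention window. Field names and the dataclass shape are assumptions; only the contents list and the three-year period come from the bill.

```python
# Hypothetical impact-assessment record; field names are assumptions,
# the cited subsections supply the required minimum contents.
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class ImpactAssessment:
    purpose_use_cases_benefits: str    # Sec. 5(2)(a)
    discrimination_risk_analysis: str  # Sec. 5(2)(b)
    input_data_categories: list[str]   # Sec. 5(2)(c)(i)
    outputs: str                       # Sec. 5(2)(c)(ii)
    performance_metrics: str           # Sec. 5(2)(c)(iii)
    transparency_measures: str         # Sec. 5(2)(c)(iv)
    postdeployment_monitoring: str     # Sec. 5(2)(c)(v)
    intended_use_variance: str | None  # Sec. 5(3): post-modification only
    completed_on: date

def retention_floor(final_deployment: date) -> date:
    """Earliest date records may be discarded under Sec. 5(6)
    (365-day years assumed; the statute says at least three years)."""
    return final_deployment + timedelta(days=3 * 365)
```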
H-02 Non-Discrimination & Bias Assessment · H-02.3 · Deployer · Automated Decisionmaking
Sec. 6(1)
Plain Language
Small deployers — those with fewer than 50 full-time equivalent employees that do not use their own data to train the high-risk AI system — are exempt from the impact assessment and annual review requirements, provided three conditions are all continuously met: (1) the system is used only for its disclosed intended uses, (2) it continues learning only from non-deployer data, and (3) the deployer makes available to consumers a substantially similar impact assessment completed by the developer. This exemption is conditional and must be maintained throughout deployment — if any condition ceases to be met, the full obligations apply. This is a safe harbor modifying the impact assessment and annual review obligations, not an independent compliance obligation.
Statutory Text
(1) The requirements in section 5 (1) through (3) of this act and section 3(2) of this act do not apply to a deployer if, at the time the deployer deploys a high-risk artificial intelligence system and at all times while the high-risk artificial intelligence system is deployed: (a) The deployer: (i) Employs fewer than 50 full-time equivalent employees; and (ii) Does not use the deployer's own data to train the high-risk artificial intelligence system; (b) The high-risk artificial intelligence system: (i) Is used for the intended uses that are disclosed by the deployer; and (ii) Continues learning based on data derived from sources other than the deployer's own data; and (c) The deployer makes available to consumers any impact assessment that: (i) The developer of the high-risk artificial intelligence system has completed and provided to the deployers; and (ii) Includes information that is substantially similar to the information in the impact assessment required under section 5 of this act.
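A sketch of the Sec. 6(1) eligibility test. The conditions are the statutory ones; the function and parameter names are assumptions, and because the statute requires the conditions to hold at all times while the system is deployed, this check would need to be re-run continuously, not just at deployment.

```python
# Sketch of the Sec. 6(1) small-deployer exemption test; names are
# assumptions, the five conditions are statutory.
def small_deployer_exempt(fte_count: int,
                          trains_on_own_data: bool,
                          within_disclosed_intended_uses: bool,
                          learns_only_from_third_party_data: bool,
                          developer_assessment_available: bool) -> bool:
    return (fte_count < 50                         # Sec. 6(1)(a)(i)
            and not trains_on_own_data             # Sec. 6(1)(a)(ii)
            and within_disclosed_intended_uses     # Sec. 6(1)(b)(i)
            and learns_only_from_third_party_data  # Sec. 6(1)(b)(ii)
            and developer_assessment_available)    # Sec. 6(1)(c)
```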
H-01 Human Oversight of Automated Decisions · H-01.3 · Deployer · Automated Decisionmaking
Sec. 7(1)-(2)
Plain Language
Each time a deployer uses a high-risk AI system to make or substantially factor into a consequential decision about a consumer, the deployer must notify the consumer before the decision is made. The pre-decision notice must disclose the system's purpose, the nature of the consequential decisions it makes, the deployer's contact information, and a plain-language description of the AI system. This obligation has the earliest effective date in the bill — July 1, 2026 — one year before the risk management and impact assessment obligations take effect. Note that 'substantial factor' has a narrow statutory definition requiring the AI factor to be weighed more heavily than any other factor contributing to the decision.
Statutory Text
Beginning July 1, 2026, each time a deployer deploys a high-risk artificial intelligence system to make, or be a substantial factor in making, a consequential decision concerning a consumer, the deployer shall: (1) Notify the consumer that the deployer has deployed a high-risk artificial intelligence system to make, or be a substantial factor in making, a consequential decision before the decision is made; and (2) Provide to the consumer a statement disclosing: (a) The purpose of the high-risk artificial intelligence system and the nature of the consequential decisions; (b) The contact information for the deployer; and (c) A description, in plain language, of the high-risk artificial intelligence system.
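A sketch of what a conforming notice might contain. The four required disclosures are statutory; the plain-text template and all example values are assumptions.

```python
# Hypothetical Sec. 7 pre-decision notice; the disclosures are statutory,
# the template and example values are assumptions.
def pre_decision_notice(purpose: str, decision_nature: str,
                        contact: str, plain_description: str) -> str:
    return (
        "Notice: a high-risk artificial intelligence system will be used to "
        "make, or be a substantial factor in making, a decision about you.\n"
        f"Purpose of the system: {purpose}\n"           # Sec. 7(2)(a)
        f"Nature of the decision: {decision_nature}\n"  # Sec. 7(2)(a)
        f"Deployer contact: {contact}\n"                # Sec. 7(2)(b)
        f"About the system: {plain_description}"        # Sec. 7(2)(c)
    )

print(pre_decision_notice(
    purpose="evaluating rental applications",
    decision_nature="approval or denial of housing",
    contact="compliance@example.com",  # hypothetical contact
    plain_description="a statistical model that scores application data",
))
```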
Other · Automated Decisionmaking
Sec. 9(1)(a)-(b)
Plain Language
The attorney general is the sole enforcement authority for this chapter and may bring actions under the Consumer Protection Act. Private enforcement under RCW 19.86.090 is expressly prohibited. Before filing suit, the AG must give 45 days' written notice, and for first violations, the developer or deployer has 60 days to cure. This provision establishes the enforcement mechanism but creates no independent compliance obligation.
Statutory Text
(1)(a) The attorney general may bring an action in the name of the state, or as parens patriae on behalf of persons residing in the state, to enforce this chapter. For actions brought by the attorney general to enforce this chapter, a violation of this chapter is an unfair or deceptive act in trade or commerce for the purpose of applying the consumer protection act, chapter 19.86 RCW. An action to enforce this chapter may not be brought under RCW 19.86.090. (b) The office of the attorney general, before commencing an action under the consumer protection act, chapter 19.86 RCW, must provide 45 days' written notice to a deployer or developer of the alleged violation of this chapter. For the first violation, the developer or deployer may cure the noticed violation within 60 days of receiving the written notice.
T-01 AI Identity Disclosure · T-01.1 · Government · Government System
Sec. 10(1)-(3)
Plain Language
Government agencies that deploy any AI system intended to interact with consumers must disclose — before or at the time of interaction — that the consumer is interacting with AI. The disclosure must be clear, conspicuous, in plain language, and may not use dark patterns. A hyperlink to a separate page is acceptable. Critically, this disclosure is unconditional — it must be made regardless of whether a reasonable consumer would already know they are interacting with AI. This provision applies to all AI systems, not just high-risk systems, and is codified separately in Title 42 RCW (government agencies) rather than the Title 19 RCW chapter covering private-sector deployers.
Statutory Text
(1) A government agency that makes available an artificial intelligence system intended to interact with consumers must disclose to each consumer, before or at the time of interaction, that the consumer is interacting with an artificial intelligence system. The disclosure must be: (a) Clear and conspicuously posted; (b) Written in plain language; and (c) May not use a dark pattern. (2) The disclosure may be provided by using a hyperlink to direct a consumer to a separate web page. (3) A person is required to make the disclosure under subsection (1) of this section regardless of whether it would be obvious to a reasonable consumer that the consumer is interacting with an artificial intelligence system.
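Finally, a minimal sketch of the Sec. 10 disclosure itself. The statutory requirements are that it be clear, conspicuous, in plain language, free of dark patterns, and made even when AI use would be obvious; the wording and the URL below are assumptions.

```python
# Hypothetical Sec. 10 government-agency disclosure; wording and URL
# are assumptions, the hyperlink option is statutory.
def ai_interaction_disclosure(details_url: str | None = None) -> str:
    text = "You are interacting with an artificial intelligence system."
    if details_url:  # Sec. 10(2): a hyperlink to a separate page is permitted
        text += f" Learn more: {details_url}"
    return text

print(ai_interaction_disclosure("https://example.wa.gov/ai-disclosure"))  # hypothetical URL
```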