HB-2667
WA · State · USA
● Pending
Proposed Effective Date
2026-07-01
Washington House Bill 2667 — An Act Relating to Consumer Protections for Artificial Intelligence Systems
Summary

Establishes a comprehensive framework for deployers and developers of high-risk AI systems used in consequential decisions (employment, housing, credit, healthcare, education, insurance, legal services, criminal justice, and essential government services). Requires deployers to use industry-standard means to protect against algorithmic discrimination, conduct annual reviews, maintain a risk management program aligned with NIST AI RMF or equivalent, complete impact assessments, and notify consumers before AI-driven consequential decisions are made. Enforcement is exclusively through the attorney general under the Washington Consumer Protection Act, with a 45-day pre-suit notice and a 60-day cure period for first violations. Separately requires government agencies to disclose AI use to consumers. Also extends the existing AI task force through 2028 and creates a new AI workplace advisory group.

Enforcement & Penalties
Enforcement Authority
Attorney general enforcement only. The attorney general may bring an action in the name of the state or as parens patriae on behalf of persons residing in the state. Violations are treated as unfair or deceptive acts in trade or commerce under the Consumer Protection Act (RCW 19.86). Before commencing an action, the attorney general must provide 45 days' written notice of the alleged violation. For the first violation, the developer or deployer may cure the noticed violation within 60 days of receiving written notice. Private actions under RCW 19.86.090 are expressly excluded.
Penalties
Remedies available under the Washington Consumer Protection Act (RCW 19.86) as enforced by the attorney general, which may include injunctive relief, civil penalties up to $7,500 per violation, restitution, and costs. Private actions under RCW 19.86.090 are expressly excluded. The statute provides a rebuttable presumption of reasonable care for deployers who comply with the chapter.
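To make the pre-suit timing concrete, here is a minimal Python sketch of how the 45-day notice and 60-day cure windows could interact. The assumption that the cure period runs from the same written notice, and all function and constant names, are illustrative rather than taken from the bill.

```python
from datetime import date, timedelta

NOTICE_PERIOD_DAYS = 45  # 45 days' written notice before the AG may commence an action
CURE_PERIOD_DAYS = 60    # first violations may be cured within 60 days of written notice

def earliest_filing_date(notice_sent: date, first_violation: bool) -> date:
    """Earliest date an action could be commenced after notice is sent.

    Assumes the cure window runs from the same written notice, so for a
    first violation the longer 60-day window controls. Illustrative only.
    """
    wait_days = max(NOTICE_PERIOD_DAYS, CURE_PERIOD_DAYS) if first_violation else NOTICE_PERIOD_DAYS
    return notice_sent + timedelta(days=wait_days)

# Example: notice sent 2027-08-01 for a first violation -> 2027-09-30.
print(earliest_filing_date(date(2027, 8, 1), first_violation=True))
```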
Who Is Covered
"Deployer" means any person doing business in this state that deploys a high-risk artificial intelligence system in the state.
"Developer" means any person doing business in this state that develops, or intentionally and substantially modifies, a high-risk artificial intelligence system intended for use within the state.
What Is Covered
"High-risk artificial intelligence system": (a) Means any artificial intelligence system designed by its developer to, when deployed, make, or is a substantial factor in making, a consequential decision; and (b) Does not include: (i) Any artificial intelligence system that is intended to: (A) Perform any narrow procedural task; (B) Improve the result of a previously completed human activity; (C) Perform a preparatory task to an assessment relevant to a consequential decision; or (D) Detect any decision-making pattern, or any deviation from any preexisting decision-making pattern; (ii) Any antifraud technology, antimalware, antivirus, calculator, cybersecurity, database, data storage, firewall, internet domain registration, internet website loading, networking, robocall-filtering, spam-filtering, spellchecking, spreadsheet, webcaching, webhosting, search engine, or similar technology; or (iii) Any technology that communicates in natural language for the purpose of providing users with information, making referrals or recommendations, answering questions, or generating other content, and is subject to an acceptable use policy that prohibits generating content that is unlawful.
Compliance Obligations · 7 obligations
H-02 Non-Discrimination & Bias Assessment · H-02.1–H-02.8 · Deployer · Automated Decisionmaking
Sec. 3(1)(a)-(b), (2)(a)-(b)
Plain Language
Beginning July 1, 2027, deployers must use industry-standard means to protect consumers from known or reasonably foreseeable algorithmic discrimination. In addition, deployers (or a contracted third party) must conduct at least annual reviews of each deployed high-risk AI system to verify it is not causing algorithmic discrimination. If a deployer discovers that a system has caused algorithmic discrimination, it must notify the attorney general within 90 days. A deployer that complies with the entire chapter benefits from a rebuttable presumption of reasonable care in enforcement actions. Algorithmic discrimination is defined by reference to Washington's existing anti-discrimination law (chapter 49.60 RCW) and federal law, and excludes testing done to identify or mitigate discrimination. No trade secret disclosure is required.
Statutory Text
(1)(a) Beginning July 1, 2027, each deployer of a high-risk artificial intelligence system must use industry-standard means to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination. (b) In any enforcement action brought on or after July 1, 2027, by the attorney general pursuant to section 9 of this act, there is a rebuttable presumption that a deployer of a high-risk artificial intelligence system used reasonable care as required under this section if the deployer complied with this chapter. (2)(a) By July 1, 2027, and at least annually thereafter, a deployer or third party contracted by the deployer shall review the deployment of each high-risk artificial intelligence system deployed by the deployer to ensure that the high-risk artificial intelligence system is not causing algorithmic discrimination. (b) If a deployer subsequently discovers that the high-risk artificial intelligence system has caused algorithmic discrimination, the deployer, without unreasonable delay, but no later than 90 days after the date of the discovery, shall send to the attorney general, in a form and manner prescribed by the attorney general, a notice disclosing the discovery. (3) Nothing in this section may be construed to require a deployer to disclose any trade secret, or other confidential or proprietary information.
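A short sketch of the two deadlines in Sec. 3(2), assuming a simple calendar-based tickler; the 365-day review approximation and the function names are illustrative, not drawn from the bill.

```python
from datetime import date, timedelta

AG_NOTICE_WINDOW_DAYS = 90  # Sec. 3(2)(b): notice due no later than 90 days after discovery

def ag_notice_deadline(discovered_on: date) -> date:
    """Latest date to send the attorney general a discovery notice."""
    return discovered_on + timedelta(days=AG_NOTICE_WINDOW_DAYS)

def next_review_due(last_review: date) -> date:
    """At-least-annual review cadence under Sec. 3(2)(a) (365-day approximation)."""
    return last_review + timedelta(days=365)

# Example: discrimination discovered on 2028-01-15 -> notice due by 2028-04-14.
print(ag_notice_deadline(date(2028, 1, 15)))
```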
G-01 AI Governance Program & Documentation · G-01.1–G-01.2 · Deployer · Automated Decisionmaking
Sec. 4(1)-(3)
Plain Language
Beginning July 1, 2027, deployers must implement and maintain a risk management policy and program governing deployment of each high-risk AI system. The program must specify the principles, processes, and personnel used to identify, document, and mitigate algorithmic discrimination risks, and must be iteratively reviewed and updated throughout the system lifecycle. Reasonableness is judged by deployer size and complexity, system scope and intended uses, data sensitivity and volume, and adherence to a recognized framework such as the NIST AI RMF, ISO/IEC 42001, or another framework designated by the attorney general. A single program may cover multiple high-risk systems. Unlike the impact assessment and annual review duties, this requirement is not among those lifted by the Section 6 small-deployer exemption, which by its terms covers only Sections 5(1)-(3) and 3(2).
Statutory Text
(1) Beginning July 1, 2027, and except as provided in section 5(6) of this act, each deployer of a high-risk artificial intelligence system shall implement and maintain a risk management policy and program to govern the deployer's deployment of a high-risk artificial intelligence system. (2)(a) The risk management policy and program must specify and incorporate the principles, processes, and personnel that the deployer uses to identify, document, and mitigate known or reasonably foreseeable risks of algorithmic discrimination. The risk management policy and program must include an iterative process that is planned, implemented, and regularly and systematically reviewed and updated over the lifecycle of the high-risk artificial intelligence system. (b) A risk management policy and program implemented and maintained pursuant to this subsection must be reasonable, considering: (i) The size and complexity of the deployer; (ii) The nature and scope of the high-risk artificial intelligence systems deployed by the deployer including, but not limited to, the intended uses of such high-risk artificial intelligence systems; (iii) The sensitivity and volume of data processed in connection with the high-risk artificial intelligence systems deployed by the deployer; and (iv) A risk management framework that either: (A) Adheres to the guidance and standards set forth in the latest version of the artificial intelligence risk management framework published by the national institute of standards and technology, ISO/IEC 42001, or another nationally or internationally recognized risk management framework for artificial intelligence systems, if the standards are substantially equivalent to or more stringent than the requirements; or (B) Complies with any risk management framework for artificial intelligence systems that the attorney general, in the attorney general's discretion, may designate. (c) A risk management policy and program implemented and maintained pursuant to this subsection (2) may cover multiple high-risk artificial intelligence systems deployed by the deployer. (3) Nothing in this section may be construed to require a deployer to disclose any trade secret, or other confidential or proprietary information.
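As one way to picture the Sec. 4 program, here is a hypothetical Python data model. The statute mandates a policy and program, not any particular record structure, so every field name below is an assumption.

```python
from dataclasses import dataclass

# Sec. 4(2)(b)(iv)(A) names the NIST AI RMF and ISO/IEC 42001; the AG may designate others.
RECOGNIZED_FRAMEWORKS = {"NIST AI RMF", "ISO/IEC 42001"}

@dataclass
class RiskManagementProgram:
    """Hypothetical record of a Sec. 4 risk management policy and program."""
    covered_systems: list[str]      # a single program may cover multiple systems (Sec. 4(2)(c))
    framework: str                  # e.g. "NIST AI RMF" or an AG-designated framework
    identification_process: str     # how discrimination risks are identified and documented
    mitigation_process: str         # how identified risks are mitigated
    review_cadence_days: int = 365  # iterative review over the system lifecycle (Sec. 4(2)(a))

    def uses_recognized_framework(self) -> bool:
        return self.framework in RECOGNIZED_FRAMEWORKS
```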
H-02 Non-Discrimination & Bias Assessment · H-02.3–H-02.10 · Deployer · Automated Decisionmaking
Sec. 5(1)-(7)
Plain Language
Deployers must complete an impact assessment for each high-risk AI system deployed on or after July 1, 2027, and again within 90 days of any intentional and substantial modification. The assessment must cover: system purpose and intended uses, algorithmic discrimination risk analysis and mitigation steps, data inputs, outputs, performance metrics and limitations, transparency measures, and post-deployment monitoring safeguards. Post-modification assessments must additionally disclose whether the system was used consistently with the developer's intended uses. A single assessment may cover a comparable set of systems. An impact assessment completed under another law satisfies this requirement if reasonably similar in scope and effect. All impact assessments and supporting records must be retained for at least three years after final deployment. Small deployers (fewer than 50 FTEs that do not use their own data to train the system) may be exempt if the Section 6 conditions are met.
Statutory Text
(1) Except as provided in subsection (6) of this section, a deployer that deploys a high-risk artificial intelligence system on or after July 1, 2027, or a third party contracted by the deployer for such purposes, shall complete an impact assessment for: (a) The high-risk artificial intelligence system; and (b) A deployed high-risk artificial intelligence system no later than 90 days after any intentional and substantial modification to such high-risk artificial intelligence system is made available. (2) Each impact assessment completed pursuant to this section must include, at a minimum, and to the extent reasonably known by, or available to, the deployer: (a) A statement by the deployer disclosing the purpose, intended use cases and deployment context of, and benefits afforded by, the high-risk artificial intelligence system; (b) An analysis of whether the deployment of the high-risk artificial intelligence system poses any known or reasonably foreseeable risks of algorithmic discrimination and, if so, the nature of such algorithmic discrimination and the steps that have been taken to mitigate such risks; (c) A description of the following: (i) The categories of data the high-risk artificial intelligence system processes as inputs; (ii) The outputs the high-risk artificial intelligence system produces; (iii) Any metrics used to evaluate the performance and known limitations of the high-risk artificial intelligence system; (iv) A description of any transparency measures taken concerning the high-risk artificial intelligence system, such as any measures taken to disclose to a consumer that such high-risk artificial intelligence system is in use when such high-risk artificial intelligence system is in use; and (v) A description of the postdeployment monitoring and user safeguards provided concerning such high-risk artificial intelligence system, such as the oversight process established by the deployer to address issues arising from deployment of such high-risk artificial intelligence system. (3) In addition to the information required under subsection (2)(c) of this section, each impact assessment completed following an intentional and substantial modification made to a high-risk artificial intelligence system on or after July 1, 2027, must include a statement disclosing the extent to which the high-risk artificial intelligence system was used in a manner that was consistent with, or varied from, the developer's intended uses of such high-risk artificial intelligence system. (4) A single impact assessment may address a comparable set of high-risk artificial intelligence systems deployed by a deployer. (5) If a deployer, or a third party contracted by the deployer, completes an impact assessment for the purpose of complying with another applicable law or regulation, such impact assessment satisfies the requirements established in this section if such impact assessment is reasonably similar in scope and effect to the impact assessment that would otherwise be completed pursuant to this subsection. (6) A deployer shall maintain the most recently completed impact assessment for a high-risk artificial intelligence system as required under this section, relevant records supporting the impact assessment, and prior impact assessments, if any, for a period of at least three years following the final deployment of the high-risk artificial intelligence system. 
(7) Nothing in this section may be construed to require a deployer to disclose any trade secret, or other confidential or proprietary information.
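The Sec. 5(2) minimum contents map naturally onto a record type. Below is an illustrative Python sketch; the field names paraphrase the statute and the retention helper uses a rough 365-day year, neither of which comes from the bill.

```python
from dataclasses import dataclass
from datetime import date, timedelta
from typing import Optional

RETENTION_YEARS = 3  # Sec. 5(6): retain assessments and supporting records for at least 3 years

@dataclass
class ImpactAssessment:
    """Minimum Sec. 5(2) contents; all field names are illustrative paraphrases."""
    purpose_and_benefits: str          # Sec. 5(2)(a)
    discrimination_risk_analysis: str  # Sec. 5(2)(b)
    input_data_categories: list[str]   # Sec. 5(2)(c)(i)
    outputs_description: str           # Sec. 5(2)(c)(ii)
    performance_metrics: str           # Sec. 5(2)(c)(iii)
    transparency_measures: str         # Sec. 5(2)(c)(iv)
    monitoring_and_safeguards: str     # Sec. 5(2)(c)(v)
    post_modification_use_statement: Optional[str] = None  # Sec. 5(3), after substantial modification

def retention_ends(final_deployment: date) -> date:
    """Rough end of the Sec. 5(6) retention period (365-day years)."""
    return final_deployment + timedelta(days=365 * RETENTION_YEARS)
```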
H-02 Non-Discrimination & Bias Assessment · H-02.3–H-02.10 · Deployer · Automated Decisionmaking
Sec. 6(1)-(2)
Plain Language
Small deployers (fewer than 50 FTEs) that do not use their own data to train the high-risk AI system are exempt from the impact assessment (Section 5(1)-(3)) and annual algorithmic discrimination review (Section 3(2)) requirements, provided three conditions are continuously met: (1) the system is used only for disclosed intended uses; (2) the system's continued learning relies on non-deployer data; and (3) the deployer makes available to consumers a developer-provided impact assessment that is substantially similar to what Section 5 requires. If any condition lapses, the exemption is lost. The exemption does not relieve the deployer of the general duty to use industry-standard means to protect against algorithmic discrimination (Section 3(1)(a)) or the risk management program requirement (Section 4).
Statutory Text
(1) The requirements in section 5 (1) through (3) of this act and section 3(2) of this act do not apply to a deployer if, at the time the deployer deploys a high-risk artificial intelligence system and at all times while the high-risk artificial intelligence system is deployed: (a) The deployer: (i) Employs fewer than 50 full-time equivalent employees; and (ii) Does not use the deployer's own data to train the high-risk artificial intelligence system; (b) The high-risk artificial intelligence system: (i) Is used for the intended uses that are disclosed by the deployer; and (ii) Continues learning based on data derived from sources other than the deployer's own data; and (c) The deployer makes available to consumers any impact assessment that: (i) The developer of the high-risk artificial intelligence system has completed and provided to the deployers; and (ii) Includes information that is substantially similar to the information in the impact assessment required under section 5 of this act. (2) Nothing in this section may be construed to require a deployer to disclose any trade secret, or other confidential or proprietary information.
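Because the exemption turns on a conjunction of conditions, it reads almost like a predicate. A minimal sketch, with parameter names paraphrasing Sec. 6(1)(a)-(c) (they are not statutory terms):

```python
def small_deployer_exempt(
    fte_count: int,
    trains_on_own_data: bool,
    used_only_for_disclosed_uses: bool,
    learns_only_from_third_party_data: bool,
    developer_assessment_available_to_consumers: bool,
) -> bool:
    """Whether the Sec. 6 exemption from Secs. 5(1)-(3) and 3(2) applies.

    All conditions must hold at deployment and at all times while the
    system is deployed; parameter names are illustrative paraphrases.
    """
    return (
        fte_count < 50                                   # Sec. 6(1)(a)(i)
        and not trains_on_own_data                       # Sec. 6(1)(a)(ii)
        and used_only_for_disclosed_uses                 # Sec. 6(1)(b)(i)
        and learns_only_from_third_party_data            # Sec. 6(1)(b)(ii)
        and developer_assessment_available_to_consumers  # Sec. 6(1)(c)
    )
```

Note that the check must hold continuously while the system is deployed, so a one-time evaluation at deployment is not enough.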
H-01 Human Oversight of Automated Decisions · H-01.3 · Deployer · Automated Decisionmaking
Sec. 7(1)-(2)
Plain Language
Beginning July 1, 2026, every time a deployer uses a high-risk AI system to make or substantially factor into a consequential decision about a consumer, the deployer must notify the consumer before the decision is made. The notification must include the system's purpose, the nature of the consequential decisions it makes, the deployer's contact information, and a plain-language description of the system. This is the earliest operative obligation in the bill — it takes effect a full year before the risk management and impact assessment obligations.
Statutory Text
Beginning July 1, 2026, each time a deployer deploys a high-risk artificial intelligence system to make, or be a substantial factor in making, a consequential decision concerning a consumer, the deployer shall: (1) Notify the consumer that the deployer has deployed a high-risk artificial intelligence system to make, or be a substantial factor in making, a consequential decision before the decision is made; and (2) Provide to the consumer a statement disclosing: (a) The purpose of the high-risk artificial intelligence system and the nature of the consequential decisions; (b) The contact information for the deployer; and (c) A description, in plain language, of the high-risk artificial intelligence system.
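To illustrate the ordering constraint in Sec. 7(1) and the statement contents in Sec. 7(2), here is a hedged Python sketch; the class, its fields, and the guard function are all hypothetical scaffolding, not anything the bill prescribes.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PreDecisionNotice:
    """Sec. 7(2) statement contents; field names are illustrative."""
    system_purpose: str              # Sec. 7(2)(a): purpose of the system
    decision_nature: str             # Sec. 7(2)(a): nature of the consequential decisions
    deployer_contact: str            # Sec. 7(2)(b)
    plain_language_description: str  # Sec. 7(2)(c)

def decide(notice: Optional[PreDecisionNotice]) -> str:
    """Guard illustrating the Sec. 7(1) ordering: notice precedes the decision."""
    if notice is None:
        raise RuntimeError("Sec. 7 notice must be provided before the decision is made")
    return "decision rendered"  # placeholder for the actual decision workflow
```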
R-01 Incident Reporting · R-01.3 · Deployer · Automated Decisionmaking
Sec. 3(2)(b)
Plain Language
When a deployer discovers that a deployed high-risk AI system has caused algorithmic discrimination, the deployer must notify the attorney general within 90 days of discovery, using the form and manner the attorney general prescribes. This is a reactive disclosure triggered by discovery, not a scheduled report. The obligation runs in parallel with the annual review requirement — the review is how discovery is expected to occur, but the notification obligation applies regardless of how discrimination is discovered.
Statutory Text
If a deployer subsequently discovers that the high-risk artificial intelligence system has caused algorithmic discrimination, the deployer, without unreasonable delay, but no later than 90 days after the date of the discovery, shall send to the attorney general, in a form and manner prescribed by the attorney general, a notice disclosing the discovery.
T-01 AI Identity Disclosure · T-01.1 · Government · Government System
Sec. 10(1)-(3)
Plain Language
Government agencies that deploy an AI system intended to interact with consumers must disclose — before or at the time of interaction — that the consumer is interacting with an AI system. The disclosure must be clear, conspicuously posted, written in plain language, and free of dark patterns. A hyperlink to a separate web page is an acceptable format. The disclosure is unconditional — it must be provided even if a reasonable consumer would already realize they are interacting with AI. Note this provision applies to any AI system (not just high-risk), and the obligated party is a government agency rather than a private deployer. This section is codified in Title 42 RCW, separate from the private-sector obligations in Title 19 RCW.
Statutory Text
(1) A government agency that makes available an artificial intelligence system intended to interact with consumers must disclose to each consumer, before or at the time of interaction, that the consumer is interacting with an artificial intelligence system. The disclosure must be: (a) Clear and conspicuously posted; (b) Written in plain language; and (c) May not use a dark pattern. (2) The disclosure may be provided by using a hyperlink to direct a consumer to a separate web page. (3) A person is required to make the disclosure under subsection (1) of this section regardless of whether it would be obvious to a reasonable consumer that the consumer is interacting with an artificial intelligence system.
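A small sketch of the unconditional nature of the Sec. 10 disclosure; the function, the message wording, and the URL are hypothetical.

```python
def government_ai_disclosure(obvious_to_reasonable_consumer: bool) -> str:
    """Sec. 10 disclosure for a government-deployed AI system.

    The disclosure is unconditional (Sec. 10(3)), so the parameter is
    deliberately ignored; it appears only to make that point explicit.
    A hyperlink to a separate web page is permitted (Sec. 10(2)).
    """
    del obvious_to_reasonable_consumer  # Sec. 10(3): obviousness is irrelevant
    return (
        "You are interacting with an artificial intelligence system. "
        "Learn more: https://example.gov/ai-disclosure"  # hypothetical URL
    )
```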