HB-2157
WA · State · USA
Status: Pending
Proposed Effective Date: 2027-01-01
Washington Substitute House Bill 2157 — An Act Relating to regulating high-risk artificial intelligence system development, deployment, and use; adding a new chapter to Title 19 RCW
Summary

Regulates developers and deployers of high-risk AI systems used to make or substantially factor into consequential decisions affecting Washington consumers across employment, housing, credit, healthcare, education, insurance, legal services, and criminal justice. Developers must provide deployers with documentation covering intended uses, limitations, discrimination risks, and evaluation summaries before making high-risk AI systems available. Deployers must implement a risk management policy and program, complete pre-deployment impact assessments (retained for three years), disclose AI use to consumers before consequential decisions, and explain adverse decisions. Developers of high-risk generative AI must ensure synthetic content outputs are identifiable and detectable. Creates a private right of action with injunctive relief and attorneys' fees, subject to a 45-day cure affirmative defense. Extensive carve-outs exist for financial institutions subject to ECOA/FCRA, insurers, HIPAA-covered entities, federally approved systems, and sandbox environments.

Enforcement & Penalties
Enforcement Authority
Private right of action. Any person may file a civil action against a developer or deployer for a violation of the chapter. No designated agency enforcer. A 45-day cure period is available as an affirmative defense if the developer or deployer discovered the violation, cured it within 45 days, provided notice and evidence to the plaintiff, and is otherwise in compliance with the chapter.
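For teams modeling litigation exposure, the cure defense reduces to four conjunctive conditions; failing any one defeats the defense. A minimal Python sketch, assuming hypothetical field names (the statute prescribes no data model):

    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class CureRecord:
        # Facts relevant to the 45-day cure affirmative defense (hypothetical model).
        discovered_on: date           # when the developer or deployer discovered the violation
        cured_on: date | None         # when the violation was cured, if ever
        notified_plaintiff: bool      # notice and evidence of the cure provided to the plaintiff
        otherwise_compliant: bool     # otherwise in compliance with the chapter

    def cure_defense_available(r: CureRecord) -> bool:
        # All four statutory conditions must hold simultaneously.
        return (
            r.cured_on is not None
            and (r.cured_on - r.discovered_on).days <= 45
            and r.notified_plaintiff
            and r.otherwise_compliant
        )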
Penalties
Injunctive relief and reasonable attorneys' fees and costs. No statutory minimum damages or per-violation penalty amounts are specified. The statute does not require proof of actual monetary harm for injunctive relief or fee-shifting.
Who Is Covered
"Deployer" means any person doing business in Washington that deploys or uses a high-risk artificial intelligence system to make a consequential decision in Washington.
"Developer" means any person doing business in Washington that develops or intentionally and substantially modifies a high-risk artificial intelligence system that is offered, sold, leased, given, or otherwise made available to deployers or consumers in Washington and who earns more than $100,000 in gross annual revenue.
What Is Covered
"High-risk artificial intelligence system" means any artificial intelligence system that is specifically intended to autonomously make, or be a substantial factor in making, a consequential decision. A system or service is not a "high-risk artificial intelligence system" if it is intended to: (i) Perform a narrow procedural task; (ii) improve the result of a previously completed human activity; (iii) detect any decision-making patterns or any deviations from preexisting decision-making patterns; or (iv) perform a preparatory task to an assessment relevant to a consequential decision. (b) "High-risk artificial intelligence system" does not include any of the following technologies: (i) Antifraud technology that does not use facial recognition technology; (ii) Antimalware technology; (iii) Antivirus technology; (iv) Artificial intelligence-enabled video games; (v) Autonomous vehicle technology; (vi) Calculators; (vii) Cybersecurity technology; (viii) Databases; (ix) Data storage; (x) Firewall technology; (xi) Internet domain registration; (xii) Internet website loading; (xiii) Networking; (xiv) Spam and robocall filtering; (xv) Spell-checking technology; (xvi) Spreadsheets; (xvii) Web caching; (xviii) Web hosting or any similar technology; or (xix) Technology that communicates with consumers in natural language for the purpose of providing users with information, making referrals or recommendations, and answering questions and is subject to an acceptable use policy that prohibits generating content that is discriminatory or unlawful.
Compliance Obligations · 15 obligations
H-02 Non-Discrimination & Bias Assessment · H-02.3 · Developer · Automated Decisionmaking
Sec. 2(1)
Plain Language
Developers of high-risk AI systems must exercise reasonable care to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination arising from intended and contracted uses. Compliance with all other developer obligations in Section 2 creates a rebuttable presumption of reasonable care. Self-testing to identify or prevent discrimination, expanding an applicant or participant pool to increase diversity, and acts by private clubs are expressly excluded from the definition of algorithmic discrimination.
Statutory Text
(1) A developer of a high-risk artificial intelligence system shall use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination arising from the intended and contracted uses of the high-risk artificial intelligence system. In a civil action brought against a developer pursuant to this chapter, there is a rebuttable presumption that a developer of a high-risk artificial intelligence system used reasonable care as required by this subsection (1) if the developer complied with the requirements of this section.
G-02 Public Transparency & Documentation · G-02.1 · Developer · Automated Decisionmaking
Sec. 2(2)(a)-(c)
Plain Language
Developers may not distribute a high-risk AI system to deployers or other developers unless they provide comprehensive documentation covering: intended uses, known limitations and foreseeable discrimination risks, purpose and intended outputs, a performance and bias evaluation summary, discrimination mitigation measures, monitoring and use/misuse guidance, and any additional documentation reasonably needed for the deployer to understand and monitor the system. This is a pre-distribution gating requirement — the system may not be provided until these disclosures are made available to the recipient.
Statutory Text
(2) A developer of a high-risk artificial intelligence system may not offer, sell, lease, give, or otherwise provide to a deployer or other developer a high-risk artificial intelligence system unless the developer makes available to the deployer or other developer: (a) A statement disclosing the intended uses of such high-risk artificial intelligence system; (b) Documentation disclosing the following: (i) The known or reasonably known limitations of such high-risk artificial intelligence system, including any and all known or reasonably foreseeable risks of algorithmic discrimination arising from the intended uses of such high-risk artificial intelligence system; (ii) The purpose of such high-risk artificial intelligence system and its intended outputs, benefits, and uses; (iii) A summary describing how such high-risk artificial intelligence system was evaluated for performance and for mitigation of algorithmic discrimination before it was licensed, sold, leased, given, or otherwise made available to a deployer or other developer; (iv) A description of the measures the developer has taken to mitigate known or reasonably foreseeable risks of algorithmic discrimination that may arise from the reasonably foreseeable deployment or use of such high-risk artificial intelligence system; and (v) A description of how the high-risk artificial intelligence system should be used, not be used, and be monitored by an individual when such system is used to make, or is a substantial factor in making, a consequential decision; and (c) Any additional documentation that is reasonably necessary to assist the deployer or other developer in understanding the outputs and monitoring performance of the high-risk artificial intelligence system for risks of algorithmic discrimination.
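Treated as a release gate, Sec. 2(2) is a completeness check over six required disclosures plus the subsection (c) catch-all. A hedged Python sketch with field names of our own invention:

    from dataclasses import dataclass

    @dataclass
    class DeveloperDisclosurePacket:
        # Hypothetical container for the Sec. 2(2) artifacts; names are ours.
        intended_uses_statement: str | None               # (a)
        limitations_and_discrimination_risks: str | None  # (b)(i)
        purpose_outputs_benefits_uses: str | None         # (b)(ii)
        evaluation_summary: str | None                    # (b)(iii)
        mitigation_measures: str | None                   # (b)(iv)
        use_misuse_monitoring_guidance: str | None        # (b)(v)
        supplemental_documentation: str | None            # (c), where reasonably necessary

    def may_distribute(p: DeveloperDisclosurePacket,
                       supplemental_needed: bool) -> bool:
        # Pre-distribution gate: the system may not be provided until the
        # disclosures are made available to the recipient.
        core = (p.intended_uses_statement, p.limitations_and_discrimination_risks,
                p.purpose_outputs_benefits_uses, p.evaluation_summary,
                p.mitigation_measures, p.use_misuse_monitoring_guidance)
        if not all(doc and doc.strip() for doc in core):
            return False
        return (not supplemental_needed) or bool(p.supplemental_documentation)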
G-02 Public Transparency & Documentation · G-02.1 · Developer · Automated Decisionmaking
Sec. 2(3)
Plain Language
Developers must provide deployers with information and documentation — including system cards, predeployment impact assessments, and risk management policies — sufficient to enable the deployer or its contracted third party to complete the deployer-side impact assessment required by Section 3(3). This is a feasibility-qualified obligation; developers must provide what is feasible and necessary. This complements but is distinct from the Section 2(2) documentation — it specifically targets impact assessment enablement.
Statutory Text
(3) A developer that offers, sells, leases, gives, or otherwise makes available to a deployer or other developer a high-risk artificial intelligence system shall make available to the deployer or other developer, to the extent feasible and necessary, information and documentation to enable the deployer, other developer, or a third party contracted by the deployer to complete an impact assessment required by section 3(3) of this act. Such information and documentation must include artifacts, such as system cards or predeployment impact assessments, including relevant risk management policies and impact assessments.
G-01 AI Governance Program & Documentation · G-01.1 · Developer · Automated Decisionmaking
Sec. 2(5)
Plain Language
Developers of high-risk AI systems that conform to the NIST AI RMF, ISO/IEC 42001, or an equivalent nationally or internationally recognized AI risk management framework receive a presumption of compliance with the developer obligations in Section 2. This is a safe harbor — not a standalone obligation — but it incentivizes adoption of recognized risk management frameworks. Developers should document their conformity to invoke this presumption.
Statutory Text
(5) High-risk artificial intelligence systems that are in conformity with the latest version of the artificial intelligence risk management framework published by the national institute of standards and technology, standard ISO/IEC 42001 of the international organization for standardization, or another nationally or internationally recognized risk management framework for artificial intelligence systems, or parts thereof, are presumed to be in conformity with related requirements set out in this section.
G-01 AI Governance Program & Documentation · G-01.2 · Developer · Automated Decisionmaking
Sec. 2(6)
Plain Language
Developers must update all Section 2 disclosures within 90 days of performing an intentional and substantial modification to a high-risk AI system. An intentional and substantial modification is a deliberate change that creates a new material risk of algorithmic discrimination or, for general-purpose AI (GPAI) models, one that affects compliance or materially changes the system's purpose. Routine deployer customizations and predetermined continuous-learning changes covered in the initial impact assessment are excluded from the modification definition.
Statutory Text
(6) For a disclosure required pursuant to this section, a developer shall, no later than 90 days after the developer performs an intentional and substantial modification to a high-risk artificial intelligence system, update such disclosure as necessary to ensure that such disclosure remains accurate.
T-02 AI Content Labeling & Provenance · T-02.1 · T-02.2 · Developer · Automated Decisionmaking · Content Generation
Sec. 2(7)(a)-(c)
Plain Language
Developers of high-risk generative AI systems that produce or substantially modify synthetic content must ensure outputs are identifiable and detectable using industry-standard tools or developer-provided tools, with identification applied at the time of generation. For audio, image, or video content in artistic or creative works, the identification must not hinder display or enjoyment. Three categories are exempt: text-only content, content published in the public interest or unlikely to mislead a reasonable person, and outputs from assistive editing tools that do not substantially alter input data or are used for law enforcement purposes.
Statutory Text
(7)(a) A developer of a high-risk generative artificial intelligence system that generates or substantially modifies synthetic content shall ensure that the outputs of such high-risk artificial intelligence system: (i) Are identifiable and detectable in a manner that is accessible by consumers using industry-standard tools or tools provided by the developer; (ii) comply with any applicable accessibility requirements, as synthetic content, to the extent reasonably feasible; and (iii) apply such identification at the time the output is generated. (b) If such synthetic content is an audio, image, or video format that forms part of an evidently artistic, creative, satirical, fictional, or analogous work or program, the requirement for identifying outputs of high-risk artificial intelligence systems pursuant to (a) of this subsection (7) is limited to a manner that does not hinder the display or enjoyment of such work or program. (c) The identification of outputs required by (a) of this subsection (7) do not apply to: (i) Synthetic content that consists exclusively of text, is published to inform the public on any matter of public interest, or is unlikely to mislead a reasonable person consuming such synthetic content; or (ii) the outputs of a high-risk artificial intelligence system that performs an assistive function for standard editing, does not substantially alter the input data provided by the developer, or is used to detect, prevent, investigate, or prosecute any crime as authorized by law.
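One conservative reading of the exemption logic in Sec. 2(7)(c), sketched in Python. The statutory clauses chain their conditions with "or", so how the disjuncts combine is a judgment call; flag names are ours. The artistic-work rule in (7)(b) limits the manner of identification rather than exempting it, so it is left out of this predicate:

    def identification_required(*, is_text_only: bool,
                                public_interest_publication: bool,
                                unlikely_to_mislead: bool,
                                assistive_editing_only: bool,
                                substantially_alters_input: bool,
                                law_enforcement_use: bool) -> bool:
        # Sec. 2(7)(c)(i): text-only, public-interest, or non-misleading
        # synthetic content is out of scope.
        if is_text_only or public_interest_publication or unlikely_to_mislead:
            return False
        # Sec. 2(7)(c)(ii), read conservatively: assistive editing that does
        # not substantially alter the input, or law enforcement use.
        if (assistive_editing_only and not substantially_alters_input) or law_enforcement_use:
            return False
        return True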
H-02 Non-Discrimination & Bias Assessment · H-02.3 · Deployer · Automated Decisionmaking
Sec. 3(1)
Plain Language
Deployers must exercise reasonable care to protect consumers from known or reasonably foreseeable algorithmic discrimination risks. Full compliance with all other deployer obligations in Section 3 creates a rebuttable presumption of reasonable care. This is the deployer-side analog to the developer duty in Section 2(1) and establishes the overarching standard of care against which deployers will be measured in private litigation.
Statutory Text
(1) A deployer of a high-risk artificial intelligence system shall use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination. In a civil action brought against a deployer pursuant to this chapter, there is a rebuttable presumption that a deployer of a high-risk artificial intelligence system used reasonable care as required by this subsection (1) if the deployer complied with the provisions of this section.
G-01 AI Governance Program & Documentation · G-01.1 · Deployer · Automated Decisionmaking
Sec. 3(2)(a)-(c)
Plain Language
Deployers may not use a high-risk AI system for consequential decisions without first designing and implementing a formal risk management policy and program specifying the principles, processes, and personnel for identifying, mitigating, and documenting algorithmic discrimination risks. Alignment with the NIST AI RMF, ISO/IEC 42001, or a substantially equivalent framework creates a rebuttable presumption of compliance. This is a pre-deployment gating requirement — the system cannot be used until the risk management program is in place.
Statutory Text
(2)(a) A deployer may not deploy or use a high-risk artificial intelligence system to make a consequential decision unless the deployer has designed and implemented a risk management policy and program for such high-risk artificial intelligence system. The risk management policy must specify the principles, processes, and personnel that the deployer must use in maintaining the risk management program to identify, mitigate, and document any risk of algorithmic discrimination that is a reasonably foreseeable consequence of deploying or using such high-risk artificial intelligence system to make a consequential decision. (b) A risk management policy and program designed, implemented, and maintained pursuant to this section is presumed to be in conformity with related requirements set out in this section if the policy and program align with the guidance and standards set forth in the latest version of: (i) The artificial intelligence risk management framework published by the national institute of standards and technology; (ii) Standard ISO/IEC 42001 of the international organization for standardization; or (iii) A nationally or internationally recognized risk management framework for artificial intelligence systems with requirements that are substantially equivalent to, and at least as stringent as, the guidance and standards described in (b)(i) and (ii) of this subsection (2). (c) High-risk artificial intelligence systems that are in conformity with the latest version of the artificial intelligence risk management framework published by the national institute of standards and technology, standard ISO/IEC 42001 of the international organization for standardization, or another nationally or internationally recognized risk management framework for artificial intelligence systems, or parts thereof, are presumed to be in conformity with related requirements set out in this section.
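The Sec. 3(2) gate and its framework presumption can be expressed as two simple checks. A sketch under assumed field names; a full implementation would also need to admit "substantially equivalent" frameworks under (b)(iii):

    from dataclasses import dataclass

    RECOGNIZED_FRAMEWORKS = {"NIST AI RMF", "ISO/IEC 42001"}  # plus substantial equivalents

    @dataclass
    class RiskProgram:
        policy_covers_principles_processes_personnel: bool
        program_implemented: bool
        framework_alignment: str | None   # e.g. "NIST AI RMF", if any

    def may_deploy_for_consequential_decisions(p: RiskProgram) -> bool:
        # Sec. 3(2)(a): no deployment or use without a designed and implemented program.
        return p.policy_covers_principles_processes_personnel and p.program_implemented

    def conformity_presumed(p: RiskProgram) -> bool:
        # Sec. 3(2)(b): rebuttable presumption for framework-aligned programs.
        return p.framework_alignment in RECOGNIZED_FRAMEWORKS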
H-02 Non-Discrimination & Bias Assessment · H-02.3 · H-02.10 · Deployer · Automated Decisionmaking
Sec. 3(3)(a)-(c)
Plain Language
Deployers may not deploy or use a high-risk AI system for consequential decisions without first completing a detailed impact assessment. The assessment must cover nine minimum elements: purpose and use cases, discrimination risks and mitigation steps, consistency with developer-intended uses, data categories processed, customization data used, performance metrics and limitations, transparency measures, post-deployment monitoring and user safeguards, and validity/reliability analysis. A single assessment may cover comparable systems, and assessments completed under other laws may satisfy this requirement if reasonably similar in scope. All impact assessments and supporting records — including raw performance data — must be retained for at least three years after final deployment. Impact assessments must be updated before significant updates are used for consequential decisions.
Statutory Text
(3)(a) Except as provided in (c) of this subsection (3), a deployer may not deploy or use a high-risk artificial intelligence system to make a consequential decision unless the deployer has completed an impact assessment for such high-risk artificial intelligence system. The deployer shall complete an impact assessment for a high-risk artificial intelligence system before the deployer initially deploys such high-risk artificial intelligence system and before a significant update to such high-risk artificial intelligence system is used to make a consequential decision. (b) An impact assessment completed pursuant to (a) of this subsection (3) must include, at a minimum: (i) A statement by the deployer disclosing the purpose, intended use cases, and deployment context of, and benefits afforded by, the high-risk artificial intelligence system; (ii) A statement by the deployer disclosing whether the deployment or use of the high-risk artificial intelligence system poses any known or reasonably foreseeable risk of algorithmic discrimination and, if so, the nature of such algorithmic discrimination and the steps that have been taken, to the extent feasible, to mitigate such risk; (iii) For each postdeployment impact assessment completed pursuant to this section, whether the intended use cases of the high-risk artificial intelligence system as updated were consistent with, or varied from, the developer's intended uses of such high-risk artificial intelligence system; (iv) A description of the categories of data the high-risk artificial intelligence system processes as inputs and the outputs such high-risk artificial intelligence system produces; (v) If the deployer used data to customize the high-risk artificial intelligence system, an overview of the categories of data the deployer used to customize such high-risk artificial intelligence system; (vi) A list of any metrics used to evaluate the performance and known limitations of the high-risk artificial intelligence system; (vii) A description of any transparency measures taken concerning the high-risk artificial intelligence system, including any measures taken to disclose to a consumer that such high-risk artificial intelligence system is in use when such high-risk artificial intelligence system is in use; (viii) A description of any postdeployment monitoring performed and user safeguards provided concerning such high-risk artificial intelligence system, including any oversight process established by the deployer to address issues arising from deployment or use of such high-risk artificial intelligence system as such issues arise; and (ix) An analysis of such high-risk artificial intelligence system's validity and reliability in accordance with standard industry practices. (c)(i) A single impact assessment may address a comparable set of high-risk artificial intelligence systems deployed or used by a deployer. (ii) If a deployer completes an impact assessment for the purpose of complying with another applicable law or regulation, such impact assessment satisfies the relevant requirements established in this section if such impact assessment is reasonably similar in scope and effect to the impact assessment that would otherwise be completed pursuant to this section. (iii) A deployer that completes an impact assessment pursuant to this section shall maintain such impact assessment and all records concerning the impact assessment for three years. 
Throughout the period of time that a high-risk artificial intelligence system is deployed and for a period of at least three years following the final deployment of the high-risk artificial intelligence system, the deployer shall retain all records concerning each impact assessment conducted on the high-risk artificial intelligence system, including all raw data used to evaluate the performance and known limitations of such system.
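The nine minimum elements and the retention rule lend themselves to a record type. A sketch with invented field names; the three-year window is approximated in days:

    from dataclasses import dataclass
    from datetime import date, timedelta

    @dataclass
    class ImpactAssessment:
        # The nine Sec. 3(3)(b) minimum elements; field names are ours.
        purpose_use_cases_context_benefits: str        # (i)
        discrimination_risks_and_mitigation: str       # (ii)
        consistency_with_developer_intended_uses: str  # (iii) postdeployment assessments
        input_and_output_data_categories: str          # (iv)
        customization_data_overview: str | None        # (v) only if the deployer customized
        performance_metrics_and_limitations: str       # (vi)
        transparency_measures: str                     # (vii)
        monitoring_and_user_safeguards: str            # (viii)
        validity_and_reliability_analysis: str         # (ix)

    def record_retention_deadline(final_deployment: date) -> date:
        # Records, including raw performance data, must be kept at least three
        # years after final deployment (three years approximated as 1,095 days).
        return final_deployment + timedelta(days=3 * 365)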
T-01 AI Identity Disclosure · T-01.1 · Deployer · Automated Decisionmaking
Sec. 3(4)(a)-(e)
Plain Language
Before or at the time a deployer uses a high-risk AI system to interact with a consumer, the deployer must disclose that the consumer is interacting with an AI system. This is an unconditional disclosure — not triggered by whether the consumer could be misled. In addition, the deployer must simultaneously provide substantial contextual information: the system's purpose, nature, the consequential decision type, deployer contact information, and a plain-language description covering what personal characteristics the system measures, how it measures them, their relevance to the decision, the human components, and how automated components inform the decision. This is a comprehensive pre-interaction transparency obligation that goes beyond simple AI identity disclosure.
Statutory Text
(4) Not later than the time that a deployer uses a high-risk artificial intelligence system to interact with a consumer, the deployer shall disclose to the consumer that the consumer is interacting with an artificial intelligence system. At such time, the deployer shall also disclose to the consumer: (a) The purpose of such high-risk artificial intelligence system; (b) The nature of such system; (c) The nature of the consequential decision; (d) The contact information for the deployer; and (e) A description of the artificial intelligence system in plain language, which must include: (i) A description of the personal characteristics or attributes that such system will measure or assess; (ii) The method by which the system measures or assesses such attributes or characteristics; (iii) How such attributes or characteristics are relevant to the consequential decisions for which the system should be used; (iv) Any human components of such system; and (v) How any automated components of such system are used to inform such consequential decisions.
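The required payload can be modeled as a typed record so nothing is omitted at interaction time. A sketch; field names are illustrative:

    from dataclasses import dataclass

    @dataclass
    class PlainLanguageDescription:
        # Sec. 3(4)(e) components.
        characteristics_measured: str       # (e)(i)
        measurement_method: str             # (e)(ii)
        relevance_to_decision: str          # (e)(iii)
        human_components: str               # (e)(iv)
        automated_components_role: str      # (e)(v)

    @dataclass
    class ConsumerDisclosure:
        # Must reach the consumer no later than the time of AI interaction.
        interacting_with_ai_notice: bool    # the unconditional identity disclosure
        system_purpose: str                 # (a)
        system_nature: str                  # (b)
        consequential_decision_nature: str  # (c)
        deployer_contact: str               # (d)
        description: PlainLanguageDescription  # (e)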
H-01 Human Oversight of Automated Decisions · H-01.1 · Deployer · Automated Decisionmaking
Sec. 3(5)(a)-(c)
Plain Language
After making a consequential decision using a high-risk AI system, the deployer must transmit the decision to the consumer without undue delay. If the decision is adverse and relied on personal information beyond what the consumer directly provided, the deployer must also explain: the principal reasons for the decision, the degree and manner of AI contribution, the types of data processed, and the sources of that data. The adverse-decision explanation requirement is conditioned on two triggers: (1) the decision must be adverse, and (2) it must be based on data the consumer did not directly provide. If either condition is absent, only the timely transmittal of the decision itself is required.
Statutory Text
(5) A deployer that has deployed a high-risk artificial intelligence system to make a consequential decision concerning a consumer shall transmit to the consumer the consequential decision without undue delay. If such consequential decision is adverse to the consumer and based on personal information beyond information that the consumer provided directly to the deployer, the deployer shall provide to the consumer a statement disclosing the principal reason or reasons for the consequential decision, including: (a) The degree to which and manner in which the high-risk artificial intelligence system contributed to the consequential decision; (b) The type of data that was processed by such system in making the consequential decision; and (c) The sources of such data.
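The two-trigger logic is easy to get wrong in intake tooling, so here it is isolated as a predicate; argument names are ours:

    def adverse_explanation_required(*, decision_adverse: bool,
                                     used_data_beyond_consumer_provided: bool) -> bool:
        # Sec. 3(5): the explanation duty fires only when both triggers hold.
        # With either trigger absent, only timely transmittal of the decision is owed.
        return decision_adverse and used_data_beyond_consumer_provided

For example, an adverse decision resting solely on data the consumer supplied in their own application would require timely transmittal but no reasons statement.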
G-02 Public Transparency & Documentation · G-02.4 · Deployer · Automated Decisionmaking
Sec. 3(6)
Plain Language
Deployers must publish or make readily available a clear public summary describing how they manage foreseeable algorithmic discrimination risks from their high-risk AI systems. This is a standalone public transparency obligation — separate from the impact assessment and from the consumer-facing disclosures at the point of interaction. The statement must be affirmatively made available, not merely produced on request.
Statutory Text
(6) A deployer shall make readily available a clear statement summarizing how the deployer manages any reasonably foreseeable risk of algorithmic discrimination that may arise from the use or deployment of the high-risk artificial intelligence system.
G-01 AI Governance Program & Documentation · G-01.2 · Deployer · Automated Decisionmaking
Sec. 3(7)
Plain Language
Deployers must update all Section 3 disclosures within 30 days of being notified by the developer that an intentional and substantial modification has been made to the high-risk AI system. This is a shorter window than the developer's 90-day update obligation, reflecting the deployer's downstream position. Deployers should establish a process for receiving and acting on developer modification notices promptly.
Statutory Text
(7) For a disclosure required pursuant to this section, each deployer shall, no later than 30 days after the deployer is notified by the developer that the developer has performed an intentional and substantial modification to a high-risk artificial intelligence system, update such disclosure as necessary to ensure that such disclosure remains accurate.
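The two update clocks differ only in trigger and length; a brief date-arithmetic sketch:

    from datetime import date, timedelta

    def developer_update_deadline(modification_performed: date) -> date:
        # Sec. 2(6): developers update disclosures within 90 days of the modification.
        return modification_performed + timedelta(days=90)

    def deployer_update_deadline(developer_notice_received: date) -> date:
        # Sec. 3(7): deployers update within 30 days of the developer's notice.
        return developer_notice_received + timedelta(days=30)

    # Example: a modification performed 2027-03-01 gives the developer until
    # 2027-05-30, while a deployer notified that same day has until 2027-03-31.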
G-02 Public Transparency & Documentation · G-02.1 · Deployer · Automated Decisionmaking
Sec. 3(8)
Plain Language
When a deployer makes an intentional and substantial modification to a high-risk AI system, the deployer is treated as a developer for documentation purposes and must comply with all Section 2 developer disclosure requirements — including making available intended use statements, limitation documentation, evaluation summaries, mitigation descriptions, use/misuse guidance, and impact assessment enablement documentation. This effectively means a modifying deployer assumes dual obligations.
Statutory Text
(8) A deployer who performs an intentional and substantial modification to a high-risk artificial intelligence system shall comply with the documentation and disclosure requirements for developers pursuant to section 2 of this act.
Other · Developer · Deployer · Automated Decisionmaking
Sec. 4(13)
Plain Language
When a developer or deployer invokes any exemption (including the trade secret exemption) to withhold or redact information that would otherwise be required to be disclosed under this chapter, they must notify the person entitled to the disclosure and explain the basis for withholding or redacting. This ensures that consumers and deployers are not silently denied required information — they will at minimum know what was withheld and why, even if they cannot see the withheld content itself.
Statutory Text
(13) If a developer or deployer withholds information pursuant to an exemption set forth in this chapter for which disclosure would otherwise be required by this chapter, including the exemption from disclosure of trade secrets, the developer or deployer shall notify the subject of disclosure and provide a basis for withholding the information. If a developer or deployer redacts any information pursuant to an exemption from disclosure, the developer or deployer shall notify the subject of disclosure that the developer or deployer is redacting such information and provide the basis for such decision to redact.