HB-2157
WA · State · USA
● Pending
Proposed Effective Date
2027-01-01
Washington Substitute House Bill 2157 — An Act Relating to regulating high-risk artificial intelligence system development, deployment, and use; adding a new chapter to Title 19 RCW
Summary

Washington SHB 2157 regulates high-risk AI systems that autonomously make or substantially factor into consequential decisions affecting consumers in areas such as employment, housing, credit, healthcare, education, and insurance. Developers must provide deployers with documentation covering intended uses, known discrimination risks, performance evaluations, and mitigation measures, and must ensure synthetic content outputs are identifiable. Deployers must implement a risk management program, complete pre-deployment impact assessments, disclose AI use and system details to consumers before interaction, and provide explanations for adverse decisions. Extensive exemptions exist for financial institutions subject to ECOA/FCRA, insurers regulated by the state insurance commissioner, HIPAA-covered entities, federally approved AI systems, and chatbots with acceptable use policies prohibiting discriminatory content. Enforcement is exclusively via private right of action with injunctive relief and attorneys' fees, subject to a 45-day cure affirmative defense. The NIST AI RMF and ISO/IEC 42001 serve as safe harbor frameworks.

Enforcement & Penalties
Enforcement Authority
Private right of action. Any person may file a civil action against a developer or deployer for a violation of the chapter. No designated agency enforcer. The developer or deployer has an affirmative defense if it discovered the violation, cured it within 45 days, provided notice and evidence of the cure to the person bringing the action, and is otherwise in compliance with the chapter.
Penalties
Remedies are limited to injunctive relief and reasonable attorneys' fees and costs; no statutory damages or civil penalty amounts are specified. The plaintiff need not prove actual monetary harm to obtain injunctive relief and fees.
Who Is Covered
"Deployer" means any person doing business in Washington that deploys or uses a high-risk artificial intelligence system to make a consequential decision in Washington.
"Developer" means any person doing business in Washington that develops or intentionally and substantially modifies a high-risk artificial intelligence system that is offered, sold, leased, given, or otherwise made available to deployers or consumers in Washington and who earns more than $100,000 in gross annual revenue.
What Is Covered
"High-risk artificial intelligence system" means any artificial intelligence system that is specifically intended to autonomously make, or be a substantial factor in making, a consequential decision. A system or service is not a "high-risk artificial intelligence system" if it is intended to: (i) Perform a narrow procedural task; (ii) improve the result of a previously completed human activity; (iii) detect any decision-making patterns or any deviations from preexisting decision-making patterns; or (iv) perform a preparatory task to an assessment relevant to a consequential decision. (b) "High-risk artificial intelligence system" does not include any of the following technologies: (i) Antifraud technology that does not use facial recognition technology; (ii) Antimalware technology; (iii) Antivirus technology; (iv) Artificial intelligence-enabled video games; (v) Autonomous vehicle technology; (vi) Calculators; (vii) Cybersecurity technology; (viii) Databases; (ix) Data storage; (x) Firewall technology; (xi) Internet domain registration; (xii) Internet website loading; (xiii) Networking; (xiv) Spam and robocall filtering; (xv) Spell-checking technology; (xvi) Spreadsheets; (xvii) Web caching; (xviii) Web hosting or any similar technology; or (xix) Technology that communicates with consumers in natural language for the purpose of providing users with information, making referrals or recommendations, and answering questions and is subject to an acceptable use policy that prohibits generating content that is discriminatory or unlawful.
Compliance Obligations (15 obligations)
H-02 Non-Discrimination & Bias Assessment · H-02.1, H-02.3 · Developer · Automated Decisionmaking
Sec. 2(1)
Plain Language
Developers of high-risk AI systems must exercise reasonable care to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination arising from the system's intended and contracted uses. Compliance with the full set of developer obligations in Section 2 creates a rebuttable presumption that the developer used reasonable care. Self-testing to identify or prevent discrimination is excluded from the definition of algorithmic discrimination and does not trigger liability.
Statutory Text
(1) A developer of a high-risk artificial intelligence system shall use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination arising from the intended and contracted uses of the high-risk artificial intelligence system. In a civil action brought against a developer pursuant to this chapter, there is a rebuttable presumption that a developer of a high-risk artificial intelligence system used reasonable care as required by this subsection (1) if the developer complied with the requirements of this section.
G-02 Public Transparency & Documentation · G-02.1 · Developer · Automated Decisionmaking
Sec. 2(2)(a)-(c)
Plain Language
Before making a high-risk AI system available to any deployer or downstream developer, the developer must provide comprehensive documentation covering: intended uses, known limitations and discrimination risks, purpose and intended outputs, a summary of pre-deployment performance and bias evaluations, mitigation measures taken, usage guidelines (including what the system should and should not be used for and how humans should monitor it), and any additional documentation reasonably necessary for the deployer to understand outputs and monitor for discrimination. This is a condition precedent to distribution — the developer may not provide the system without first making this documentation available.
Statutory Text
(2) A developer of a high-risk artificial intelligence system may not offer, sell, lease, give, or otherwise provide to a deployer or other developer a high-risk artificial intelligence system unless the developer makes available to the deployer or other developer: (a) A statement disclosing the intended uses of such high-risk artificial intelligence system; (b) Documentation disclosing the following: (i) The known or reasonably known limitations of such high-risk artificial intelligence system, including any and all known or reasonably foreseeable risks of algorithmic discrimination arising from the intended uses of such high-risk artificial intelligence system; (ii) The purpose of such high-risk artificial intelligence system and its intended outputs, benefits, and uses; (iii) A summary describing how such high-risk artificial intelligence system was evaluated for performance and for mitigation of algorithmic discrimination before it was licensed, sold, leased, given, or otherwise made available to a deployer or other developer; (iv) A description of the measures the developer has taken to mitigate known or reasonably foreseeable risks of algorithmic discrimination that may arise from the reasonably foreseeable deployment or use of such high-risk artificial intelligence system; and (v) A description of how the high-risk artificial intelligence system should be used, not be used, and be monitored by an individual when such system is used to make, or is a substantial factor in making, a consequential decision; and (c) Any additional documentation that is reasonably necessary to assist the deployer or other developer in understanding the outputs and monitoring performance of the high-risk artificial intelligence system for risks of algorithmic discrimination.
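For teams operationalizing this requirement, the Sec. 2(2) package maps naturally onto a structured record whose fields mirror the statute's enumerated items. The sketch below is a minimal illustration in Python; the class and field names are our own, not terms defined in the bill.

```python
from dataclasses import dataclass, field

@dataclass
class DeveloperDocumentationPackage:
    """Illustrative model of the Sec. 2(2)(a)-(c) documentation a developer
    must make available before providing a high-risk AI system to a deployer."""
    intended_uses: str                  # Sec. 2(2)(a) statement of intended uses
    known_limitations: str              # Sec. 2(2)(b)(i) limitations and discrimination risks
    purpose_and_outputs: str            # Sec. 2(2)(b)(ii) purpose, intended outputs, benefits, uses
    evaluation_summary: str             # Sec. 2(2)(b)(iii) performance/bias evaluation summary
    mitigation_measures: str            # Sec. 2(2)(b)(iv) discrimination-risk mitigations
    usage_and_monitoring_guidance: str  # Sec. 2(2)(b)(v) how to use, not use, and monitor
    supplementary_docs: list[str] = field(default_factory=list)  # Sec. 2(2)(c) additional docs

    def is_complete(self) -> bool:
        """Distribution gate: Sec. 2(2) bars providing the system to a deployer
        or other developer unless every required item is available."""
        required = [self.intended_uses, self.known_limitations,
                    self.purpose_and_outputs, self.evaluation_summary,
                    self.mitigation_measures, self.usage_and_monitoring_guidance]
        return all(item.strip() for item in required)
```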
G-02 Public Transparency & Documentation · G-02.1 · Developer · Automated Decisionmaking
Sec. 2(3)
Plain Language
Developers must provide deployers (or their contracted third parties) with sufficient documentation to complete the deployer's required impact assessment. This includes artifacts such as system cards, predeployment impact assessments, and relevant risk management policies. The duty is qualified by feasibility and necessity, but it rests with the developer to make the information available; the deployer should not have to request it.
Statutory Text
(3) A developer that offers, sells, leases, gives, or otherwise makes available to a deployer or other developer a high-risk artificial intelligence system shall make available to the deployer or other developer to the extent feasible and necessary, information and documentation to enable the deployer, other developer, or a third party contracted by the deployer to complete an impact assessment required by section 3(3) of this act. Such information and documentation must include artifacts, such as system cards or predeployment impact assessments, including relevant risk management policies and impact assessments.
G-01 AI Governance Program & Documentation · G-01.1 · Developer · Automated Decisionmaking
Sec. 2(5)
Plain Language
Developers that conform their high-risk AI systems to the NIST AI RMF, ISO/IEC 42001, or another nationally or internationally recognized risk management framework receive a presumption of compliance with the developer obligations in Section 2. This is a safe harbor — it does not eliminate the underlying obligation but shifts the burden. Developers choosing not to follow one of these frameworks must independently demonstrate compliance.
Statutory Text
(5) High-risk artificial intelligence systems that are in conformity with the latest version of the artificial intelligence risk management framework published by the national institute of standards and technology, standard ISO/IEC 42001 of the international organization for standardization, or another nationally or internationally recognized risk management framework for artificial intelligence systems, or parts thereof, are presumed to be in conformity with related requirements set out in this section.
G-01 AI Governance Program & Documentation · G-01.2 · Developer · Automated Decisionmaking
Sec. 2(6)
Plain Language
When a developer performs an intentional and substantial modification to a high-risk AI system, the developer must update all previously provided disclosures within 90 days to keep them accurate. Routine deployer customizations and predetermined continuous-learning changes covered in the initial impact assessment are excluded from the definition of intentional and substantial modification and do not trigger this update obligation.
Statutory Text
(6) For a disclosure required pursuant to this section, a developer shall, no later than 90 days after the developer performs an intentional and substantial modification to a high-risk artificial intelligence system, update such disclosure as necessary to ensure that such disclosure remains accurate.
T-02 AI Content Labeling & Provenance · T-02.1, T-02.2 · Developer · Automated Decisionmaking, Content Generation
Sec. 2(7)(a)-(c)
Plain Language
Developers of high-risk generative AI systems that produce synthetic content must ensure their outputs are identifiable and detectable using industry-standard tools or developer-provided tools, must comply with applicable accessibility requirements to the extent reasonably feasible, and must apply such identification at the point of generation. For audio, image, or video content in artistic, creative, satirical, or fictional works, the identification must not hinder the display or enjoyment of the work. Exemptions apply to text-only content, content published to inform the public on a matter of public interest, content unlikely to mislead a reasonable person, outputs from assistive editing tools that do not substantially alter input data, and law enforcement-authorized outputs.
Statutory Text
(7)(a) A developer of a high-risk generative artificial intelligence system that generates or substantially modifies synthetic content shall ensure that the outputs of such high-risk artificial intelligence system: (i) Are identifiable and detectable in a manner that is accessible by consumers using industry-standard tools or tools provided by the developer; (ii) comply with any applicable accessibility requirements, as synthetic content, to the extent reasonably feasible; and (iii) apply such identification at the time the output is generated. (b) If such synthetic content is an audio, image, or video format that forms part of an evidently artistic, creative, satirical, fictional, or analogous work or program, the requirement for identifying outputs of high-risk artificial intelligence systems pursuant to (a) of this subsection (7) is limited to a manner that does not hinder the display or enjoyment of such work or program. (c) The identification of outputs required by (a) of this subsection (7) do not apply to: (i) Synthetic content that consists exclusively of text, is published to inform the public on any matter of public interest, or is unlikely to mislead a reasonable person consuming such synthetic content; or (ii) the outputs of a high-risk artificial intelligence system that performs an assistive function for standard editing, does not substantially alter the input data provided by the developer, or is used to detect, prevent, investigate, or prosecute any crime as authorized by law.
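A developer pipeline might enforce the point-of-generation rule in Sec. 2(7)(a)(iii) by attaching provenance metadata as each output is produced, after first checking the Sec. 2(7)(c) carve-outs. This is a hedged sketch: the metadata shape and helper names are hypothetical, and a production system would use an industry-standard, detectable scheme (such as a C2PA manifest) rather than a plain dictionary.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class SyntheticOutput:
    content: bytes
    media_type: str                 # e.g. "image", "audio", "video"
    provenance: dict | None = None  # identification record, set at generation

def exempt_from_identification(*, text_only: bool, public_interest: bool,
                               unlikely_to_mislead: bool,
                               assistive_editing: bool,
                               law_enforcement_use: bool) -> bool:
    """Sec. 2(7)(c) carve-outs; the flags are assumed to come from upstream review."""
    return (text_only or public_interest or unlikely_to_mislead
            or assistive_editing or law_enforcement_use)

def identify_at_generation(output: SyntheticOutput, system_id: str) -> SyntheticOutput:
    """Sec. 2(7)(a)(iii): apply identification at the time the output is generated."""
    output.provenance = {
        "synthetic": True,
        "generator": system_id,
        "generated_at": datetime.now(timezone.utc).isoformat(),
    }
    return output
```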
H-02 Non-Discrimination & Bias Assessment · H-02.1, H-02.3 · Deployer · Automated Decisionmaking
Sec. 3(1)
Plain Language
Deployers of high-risk AI systems must exercise reasonable care to protect consumers from known or reasonably foreseeable algorithmic discrimination risks. Full compliance with the deployer obligations in Section 3 creates a rebuttable presumption that the deployer met this standard. This is the deployer-side counterpart to the developer's reasonable care obligation in Section 2(1).
Statutory Text
(1) A deployer of a high-risk artificial intelligence system shall use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination. In a civil action brought against a deployer pursuant to this chapter, there is a rebuttable presumption that a deployer of a high-risk artificial intelligence system used reasonable care as required by this subsection (1) if the deployer complied with the provisions of this section.
G-01 AI Governance Program & Documentation · G-01.1 · Deployer · Automated Decisionmaking
Sec. 3(2)(a)-(c)
Plain Language
Deployers may not use a high-risk AI system for consequential decisions unless they have designed and implemented a risk management policy and program specifying the principles, processes, and personnel for identifying, mitigating, and documenting algorithmic discrimination risks. Aligning the program with the NIST AI RMF, ISO/IEC 42001, or a substantially equivalent framework creates a rebuttable presumption of compliance. This is a prerequisite to deployment — the program must exist before the system is used for consequential decisions.
Statutory Text
(2)(a) A deployer may not deploy or use a high-risk artificial intelligence system to make a consequential decision unless the deployer has designed and implemented a risk management policy and program for such high-risk artificial intelligence system. The risk management policy must specify the principles, processes, and personnel that the deployer must use in maintaining the risk management program to identify, mitigate, and document any risk of algorithmic discrimination that is a reasonably foreseeable consequence of deploying or using such high-risk artificial intelligence system to make a consequential decision. (b) A risk management policy and program designed, implemented, and maintained pursuant to this section is presumed to be in conformity with related requirements set out in this section if the policy and program align with the guidance and standards set forth in the latest version of: (i) The artificial intelligence risk management framework published by the national institute of standards and technology; (ii) Standard ISO/IEC 42001 of the international organization for standardization; or (iii) A nationally or internationally recognized risk management framework for artificial intelligence systems with requirements that are substantially equivalent to, and at least as stringent as, the guidance and standards described in (b)(i) and (ii) of this subsection (2). (c) High-risk artificial intelligence systems that are in conformity with the latest version of the artificial intelligence risk management framework published by the national institute of standards and technology, standard ISO/IEC 42001 of the international organization for standardization, or another nationally or internationally recognized risk management framework for artificial intelligence systems, or parts thereof, are presumed to be in conformity with related requirements set out in this section.
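Deployers frequently organize a Sec. 3(2) program around the four core functions of the NIST AI RMF (Govern, Map, Measure, Manage), since framework alignment earns the rebuttable presumption of compliance. The mapping below is purely hypothetical; the control names are illustrative and are drawn from neither the bill nor the framework.

```python
# Hypothetical mapping of a deployer's Sec. 3(2) risk management program
# onto the four NIST AI RMF core functions. Control names are illustrative.
RISK_MANAGEMENT_PROGRAM = {
    "GOVERN":  ["risk management policy", "accountable personnel roster"],
    "MAP":     ["high-risk system inventory", "consequential-decision use-case register"],
    "MEASURE": ["bias and performance testing schedule", "evaluation metric tracking"],
    "MANAGE":  ["mitigation playbooks", "issue escalation and remediation process"],
}
```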
H-02 Non-Discrimination & Bias Assessment · H-02.3, H-02.10 · Deployer · Automated Decisionmaking
Sec. 3(3)(a)-(c)
Plain Language
Deployers may not use a high-risk AI system for consequential decisions without first completing a written impact assessment. The assessment must cover nine enumerated elements: purpose and use cases, known discrimination risks and mitigation steps, comparison of actual use to developer-intended use (for post-deployment assessments), input/output data categories, customization data, performance metrics and limitations, transparency measures, post-deployment monitoring and user safeguards, and validity/reliability analysis. A single assessment may cover comparable systems, and an assessment completed for another law can satisfy this requirement if reasonably similar in scope. All impact assessments and supporting records, including raw performance evaluation data, must be retained for at least three years after final deployment. The assessment must be updated before any significant system update is used for consequential decisions.
Statutory Text
(3)(a) Except as provided in (c) of this subsection (3), a deployer may not deploy or use a high-risk artificial intelligence system to make a consequential decision unless the deployer has completed an impact assessment for such high-risk artificial intelligence system. The deployer shall complete an impact assessment for a high-risk artificial intelligence system before the deployer initially deploys such high-risk artificial intelligence system and before a significant update to such high-risk artificial intelligence system is used to make a consequential decision. (b) An impact assessment completed pursuant to (a) of this subsection (3) must include, at a minimum: (i) A statement by the deployer disclosing the purpose, intended use cases, and deployment context of, and benefits afforded by, the high-risk artificial intelligence system; (ii) A statement by the deployer disclosing whether the deployment or use of the high-risk artificial intelligence system poses any known or reasonably foreseeable risk of algorithmic discrimination and, if so, the nature of such algorithmic discrimination and the steps that have been taken, to the extent feasible, to mitigate such risk; (iii) For each postdeployment impact assessment completed pursuant to this section, whether the intended use cases of the high-risk artificial intelligence system as updated were consistent with, or varied from, the developer's intended uses of such high-risk artificial intelligence system; (iv) A description of the categories of data the high-risk artificial intelligence system processes as inputs and the outputs such high-risk artificial intelligence system produces; (v) If the deployer used data to customize the high-risk artificial intelligence system, an overview of the categories of data the deployer used to customize such high-risk artificial intelligence system; (vi) A list of any metrics used to evaluate the performance and known limitations of the high-risk artificial intelligence system; (vii) A description of any transparency measures taken concerning the high-risk artificial intelligence system, including any measures taken to disclose to a consumer that such high-risk artificial intelligence system is in use when such high-risk artificial intelligence system is in use; (viii) A description of any postdeployment monitoring performed and user safeguards provided concerning such high-risk artificial intelligence system, including any oversight process established by the deployer to address issues arising from deployment or use of such high-risk artificial intelligence system as such issues arise; and (ix) An analysis of such high-risk artificial intelligence system's validity and reliability in accordance with standard industry practices. (c)(i) A single impact assessment may address a comparable set of high-risk artificial intelligence systems deployed or used by a deployer. (ii) If a deployer completes an impact assessment for the purpose of complying with another applicable law or regulation, such impact assessment satisfies the relevant requirements established in this section if such impact assessment is reasonably similar in scope and effect to the impact assessment that would otherwise be completed pursuant to this section. (iii) A deployer that completes an impact assessment pursuant to this section shall maintain such impact assessment and all records concerning the impact assessment for three years. 
Throughout the period of time that a high-risk artificial intelligence system is deployed and for a period of at least three years following the final deployment of the high-risk artificial intelligence system, the deployer shall retain all records concerning each impact assessment conducted on the high-risk artificial intelligence system, including all raw data used to evaluate the performance and known limitations of such system.
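The nine enumerated elements and the retention rule translate directly into a record schema. A minimal sketch, assuming an internal record-keeping system; the field names are illustrative, and the retention helper approximates the statutory three years as 365-day years.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class ImpactAssessment:
    """Illustrative record of the nine Sec. 3(3)(b) elements."""
    purpose_and_use_cases: str       # (b)(i) purpose, use cases, context, benefits
    discrimination_risks: str        # (b)(ii) known risks and mitigation steps
    use_vs_intended_use: str | None  # (b)(iii) postdeployment assessments only
    input_output_data: str           # (b)(iv) categories of inputs and outputs
    customization_data: str | None   # (b)(v) only if deployer customized the system
    performance_metrics: str         # (b)(vi) metrics and known limitations
    transparency_measures: str       # (b)(vii) including in-use disclosure measures
    monitoring_and_safeguards: str   # (b)(viii) postdeployment oversight process
    validity_reliability: str        # (b)(ix) analysis per standard industry practice

def retention_deadline(final_deployment: date) -> date:
    """Sec. 3(3): retain the assessment and all supporting records, including
    raw performance data, for at least three years after final deployment."""
    return final_deployment + timedelta(days=3 * 365)
```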
T-01 AI Identity Disclosure · T-01.1 · Deployer · Automated Decisionmaking
Sec. 3(4)(a)-(e)
Plain Language
Before or at the time a deployer uses a high-risk AI system to interact with a consumer, the deployer must disclose that the consumer is interacting with an AI system. Simultaneously, the deployer must provide detailed information including: the system's purpose, its nature, the type of consequential decision being made, deployer contact information, and a plain-language description covering what personal attributes the system measures, how it measures them, their relevance to the decision, what human components exist, and how automated components inform decisions. This is an unconditional disclosure obligation — it is triggered whenever the system interacts with a consumer, regardless of whether the consumer could be misled.
Statutory Text
(4) Not later than the time that a deployer uses a high-risk artificial intelligence system to interact with a consumer, the deployer shall disclose to the consumer that the consumer is interacting with an artificial intelligence system. At such time, the deployer shall also disclose to the consumer: (a) The purpose of such high-risk artificial intelligence system; (b) The nature of such system; (c) The nature of the consequential decision; (d) The contact information for the deployer; and (e) A description of the artificial intelligence system in plain language, which must include: (i) A description of the personal characteristics or attributes that such system will measure or assess; (ii) The method by which the system measures or assesses such attributes or characteristics; (iii) How such attributes or characteristics are relevant to the consequential decisions for which the system should be used; (iv) Any human components of such system; and (v) How any automated components of such system are used to inform such consequential decisions.
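Because Sec. 3(4) enumerates exactly what must reach the consumer, the disclosure can be modeled as a fixed payload. Another illustrative sketch; the field names are ours, not the statute's.

```python
from dataclasses import dataclass

@dataclass
class ConsumerDisclosure:
    """Illustrative Sec. 3(4) disclosure, delivered no later than the time
    a high-risk AI system interacts with the consumer."""
    is_ai_interaction: bool        # the threshold disclosure itself
    system_purpose: str            # (a) purpose of the system
    system_nature: str             # (b) nature of the system
    decision_nature: str           # (c) the consequential decision at stake
    deployer_contact: str          # (d) deployer contact information
    # (e) plain-language description of the system:
    attributes_measured: str       # (e)(i) characteristics measured or assessed
    measurement_method: str        # (e)(ii) how they are measured or assessed
    relevance_to_decision: str     # (e)(iii) relevance to the consequential decision
    human_components: str          # (e)(iv) human components of the system
    automated_components_role: str # (e)(v) how automated components inform decisions
```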
H-01 Human Oversight of Automated Decisions · H-01.1 · Deployer · Automated Decisionmaking
Sec. 3(5)(a)-(c)
Plain Language
Deployers must communicate consequential AI decisions to affected consumers without undue delay. When the decision is adverse and relied on personal information beyond what the consumer directly provided, the deployer must additionally provide a statement explaining the principal reasons for the decision, including the degree and manner in which the AI contributed, the types of data the system processed, and the sources of that data. Note that the explanation requirement applies only when an adverse decision relies on personal information from sources other than the consumer; if the decision rests solely on consumer-provided data, the explanation obligation does not apply (though the prompt-notification obligation still does).
Statutory Text
(5) A deployer that has deployed a high-risk artificial intelligence system to make a consequential decision concerning a consumer shall transmit to the consumer the consequential decision without undue delay. If such consequential decision is adverse to the consumer and based on personal information beyond information that the consumer provided directly to the deployer, the deployer shall provide to the consumer a statement disclosing the principal reason or reasons for the consequential decision, including: (a) The degree to which and manner in which the high-risk artificial intelligence system contributed to the consequential decision; (b) The type of data that was processed by such system in making the consequential decision; and (c) The sources of such data.
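The conditional structure of Sec. 3(5) (decision transmitted promptly in every case; a reasons statement only for adverse decisions resting on third-party personal information) reduces to a small predicate plus a statement record. Sketch only; the names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class AdverseDecisionStatement:
    """Illustrative Sec. 3(5) reasons statement."""
    principal_reasons: str      # lead requirement of the statement
    ai_contribution: str        # (a) degree and manner of AI contribution
    data_types_processed: str   # (b) types of data the system processed
    data_sources: str           # (c) sources of that data

def statement_required(decision_is_adverse: bool,
                       used_nonconsumer_personal_info: bool) -> bool:
    """The statement is owed only when the decision is adverse AND relied on
    personal information beyond what the consumer provided directly; prompt
    transmission of the decision itself is owed regardless."""
    return decision_is_adverse and used_nonconsumer_personal_info
```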
G-02 Public Transparency & Documentation · G-02.4 · Deployer · Automated Decisionmaking
Sec. 3(6)
Plain Language
Deployers must make publicly available a clear summary of how they manage foreseeable algorithmic discrimination risks for each high-risk AI system they deploy. 'Readily available' implies public accessibility — not merely available upon request. This is a standalone public transparency obligation separate from the individual consumer disclosures required by Section 3(4) and the impact assessment documentation required by Section 3(3).
Statutory Text
(6) A deployer shall make readily available a clear statement summarizing how the deployer manages any reasonably foreseeable risk of algorithmic discrimination that may arise from the use or deployment of the high-risk artificial intelligence system.
G-01 AI Governance Program & Documentation · G-01.2 · Deployer · Automated Decisionmaking
Sec. 3(7)
Plain Language
When a developer notifies a deployer of an intentional and substantial modification to a high-risk AI system, the deployer must update all of its consumer-facing disclosures within 30 days to ensure accuracy. This is a shorter window than the 90 days developers have under Section 2(6), reflecting the expectation that deployers can update their disclosures more quickly once they receive the developer's updated documentation.
Statutory Text
(7) For a disclosure required pursuant to this section, each deployer shall, no later than 30 days after the deployer is notified by the developer that the developer has performed an intentional and substantial modification to a high-risk artificial intelligence system, update such disclosure as necessary to ensure that such disclosure remains accurate.
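The two update clocks (90 days for developers under Sec. 2(6), 30 days for deployers under Sec. 3(7)) are simple date arithmetic. A minimal sketch; the function names are ours.

```python
from datetime import date, timedelta

def developer_update_deadline(modification_date: date) -> date:
    """Sec. 2(6): disclosures must be updated within 90 days of an
    intentional and substantial modification performed by the developer."""
    return modification_date + timedelta(days=90)

def deployer_update_deadline(notification_date: date) -> date:
    """Sec. 3(7): deployer disclosures must be updated within 30 days of
    the developer's notification of such a modification."""
    return notification_date + timedelta(days=30)
```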
G-02 Public Transparency & Documentation · G-02.1 · Deployer · Automated Decisionmaking
Sec. 3(8)
Plain Language
When a deployer itself performs an intentional and substantial modification to a high-risk AI system — as opposed to receiving a modified system from a developer — the deployer steps into the developer's shoes and must comply with all of the documentation and disclosure obligations that Section 2 imposes on developers. This ensures that whoever modifies the system in a material way bears the documentation burden, regardless of whether they are formally classified as a developer or deployer.
Statutory Text
(8) A deployer who performs an intentional and substantial modification to a high-risk artificial intelligence system shall comply with the documentation and disclosure requirements for developers pursuant to section 2 of this act.
Other · Developer, Deployer · Automated Decisionmaking
Sec. 4(13)
Plain Language
When a developer or deployer withholds or redacts information that would otherwise be required to be disclosed under this chapter — including under the trade secret exemption — they must notify the person who would have received the disclosure and explain the basis for withholding or redacting. This ensures that consumers and deployers know when information is being withheld and why, even when full disclosure is lawfully avoided.
Statutory Text
(13) If a developer or deployer withholds information pursuant to an exemption set forth in this chapter for which disclosure would otherwise be required by this chapter, including the exemption from disclosure of trade secrets, the developer or deployer shall notify the subject of disclosure and provide a basis for withholding the information. If a developer or deployer redacts any information pursuant to an exemption from disclosure, the developer or deployer shall notify the subject of disclosure that the developer or deployer is redacting such information and provide the basis for such decision to redact.