SB-6120
WA · State · USA
● Pending
Proposed Effective Date
2027-01-01
Washington Senate Bill 6120 — An Act Relating to regulating high-risk artificial intelligence system development, deployment, and use; adding a new chapter to Title 19 RCW
Summary

Regulates developers and deployers of high-risk AI systems used to make or substantially factor in consequential decisions affecting Washington consumers across domains including employment, housing, credit, healthcare, education, insurance, and legal services. Developers must provide deployers with documentation on intended uses, bias risks, performance evaluations, and mitigation measures, and must label synthetic content generated by high-risk generative AI systems. Deployers must implement risk management programs, complete impact assessments before deployment, disclose AI use to consumers before consequential decisions, explain adverse decisions, and publicly summarize their algorithmic discrimination risk management approach. Compliance with NIST AI RMF or ISO/IEC 42001 creates a rebuttable presumption of conformity. Enforcement is through private right of action with injunctive relief and attorneys' fees; an affirmative defense exists for entities that discover, cure within 45 days, and provide notice. Broad carve-outs exist for insurers, financial institutions subject to existing AI regulation, HIPAA-covered telemedicine providers, and federally approved AI systems.

Enforcement & Penalties
Enforcement Authority
Private right of action. No designated agency enforcer. Any person may file a civil action against a developer or deployer for a violation of the chapter. An affirmative defense is available if the developer or deployer discovered the violation, cured it within 45 days, provided notice and evidence of cure to the plaintiff, and is otherwise in compliance with the chapter.
Penalties
Injunctive relief and reasonable attorneys' fees and costs. The statute does not specify statutory damages, actual damages, or civil penalties. The court may enjoin the violation and award reasonable attorneys' fees and costs. No requirement that the plaintiff prove actual monetary harm to obtain injunctive relief or fees.
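The cure-based affirmative defense described above turns on four conjunctive conditions. As an illustrative sketch only (the class and function names are ours, not statutory terms), the eligibility test can be expressed as:

```python
from dataclasses import dataclass

CURE_WINDOW_DAYS = 45  # cure period summarized above

@dataclass
class Violation:
    discovered: bool            # entity itself discovered the violation
    days_to_cure: int           # days between discovery and cure
    notice_with_evidence: bool  # notice and evidence of cure sent to plaintiff
    otherwise_compliant: bool   # otherwise in compliance with the chapter

def affirmative_defense_available(v: Violation) -> bool:
    """All four conditions must hold for the defense to apply."""
    return (
        v.discovered
        and v.days_to_cure <= CURE_WINDOW_DAYS
        and v.notice_with_evidence
        and v.otherwise_compliant
    )
```

A cure completed on day 50, or a cure never noticed to the plaintiff, defeats the defense even if every other condition is satisfied.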
Who Is Covered
"Deployer" means any person doing business in Washington that deploys or uses a high-risk artificial intelligence system to make a consequential decision in Washington.
"Developer" means any person doing business in Washington that develops or intentionally and substantially modifies a high-risk artificial intelligence system that is offered, sold, leased, given, or otherwise made available to deployers or consumers in Washington and who earns more than $100,000 in gross annual revenue.
What Is Covered
"High-risk artificial intelligence system" means any artificial intelligence system that is specifically intended to autonomously make, or be a substantial factor in making, a consequential decision. A system or service is not a "high-risk artificial intelligence system" if it is intended to: (i) Perform a narrow procedural task; (ii) improve the result of a previously completed human activity; (iii) detect any decision-making patterns or any deviations from preexisting decision-making patterns; or (iv) perform a preparatory task to an assessment relevant to a consequential decision. (b) "High-risk artificial intelligence system" does not include any of the following technologies: (i) Antifraud technology that does not use facial recognition technology; (ii) Antimalware technology; (iii) Antivirus technology; (iv) Artificial intelligence-enabled video games; (v) Autonomous vehicle technology; (vi) Calculators; (vii) Cybersecurity technology; (viii) Databases; (ix) Data storage; (x) Firewall technology; (xi) Internet domain registration; (xii) Internet website loading; (xiii) Networking; (xiv) Spam and robocall filtering; (xv) Spell-checking technology; (xvi) Spreadsheets; (xvii) Web caching; (xviii) Web hosting or any similar technology; or (xix) Technology that communicates with consumers in natural language for the purpose of providing users with information, making referrals or recommendations, and answering questions and is subject to an acceptable use policy that prohibits generating content that is discriminatory or unlawful.
Compliance Obligations · 12 obligations
H-02 Non-Discrimination & Bias Assessment · H-02.1, H-02.2, H-02.3 · Developer · Automated Decisionmaking
Sec. 2(1)-(2)
Plain Language
Developers must exercise reasonable care to protect consumers from known or reasonably foreseeable algorithmic discrimination risks. Before providing a high-risk AI system to any deployer or other developer, the developer must deliver documentation covering: intended uses, known limitations and discrimination risks, performance and bias evaluation summaries, mitigation measures taken, and guidance on proper use and human monitoring. Compliance with all requirements of Section 2 creates a rebuttable presumption of reasonable care in any civil action. Developers that also serve as deployers are exempt from generating this documentation unless the system is provided to an unaffiliated deployer (Sec. 2(4)). Conformity with NIST AI RMF, ISO/IEC 42001, or an equivalent recognized framework creates an additional presumption of conformity (Sec. 2(5)).
Statutory Text
(1) A developer of a high-risk artificial intelligence system shall use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination arising from the intended and contracted uses of the high-risk artificial intelligence system. In a civil action brought against a developer pursuant to this chapter, there is a rebuttable presumption that a developer of a high-risk artificial intelligence system used reasonable care as required by this subsection (1) if the developer complied with the requirements of this section. (2) A developer of a high-risk artificial intelligence system may not offer, sell, lease, give, or otherwise provide to a deployer or other developer a high-risk artificial intelligence system unless the developer makes available to the deployer or other developer: (a) A statement disclosing the intended uses of such high-risk artificial intelligence system; (b) Documentation disclosing the following: (i) The known or reasonably known limitations of such high-risk artificial intelligence system, including any and all known or reasonably foreseeable risks of algorithmic discrimination arising from the intended uses of such high-risk artificial intelligence system; (ii) The purpose of such high-risk artificial intelligence system and its intended outputs, benefits, and uses; (iii) A summary describing how such high-risk artificial intelligence system was evaluated for performance and for mitigation of algorithmic discrimination before it was licensed, sold, leased, given, or otherwise made available to a deployer or other developer; (iv) A description of the measures the developer has taken to mitigate known or reasonably foreseeable risks of algorithmic discrimination that may arise from the reasonably foreseeable deployment or use of such high-risk artificial intelligence system; and (v) A description of how the high-risk artificial intelligence system should be used, not be used, and be monitored by an 
individual when such system is used to make, or is a substantial factor in making, a consequential decision; and (c) Any additional documentation that is reasonably necessary to assist the deployer or other developer in understanding the outputs and monitoring performance of the high-risk artificial intelligence system for risks of algorithmic discrimination.
G-02 Public Transparency & Documentation · G-02.1 · Developer · Automated Decisionmaking
Sec. 2(3)
Plain Language
Developers must provide deployers with the information and artifacts — such as system cards, pre-deployment impact assessments, and risk management policies — that the deployer needs to complete its own impact assessment under Section 3(3). This obligation is scoped by feasibility and necessity. The intent is to prevent deployers from being unable to comply with their impact assessment obligations because the developer withheld upstream documentation.
Statutory Text
(3) A developer that offers, sells, leases, gives, or otherwise makes available to a deployer or other developer a high-risk artificial intelligence system shall make available to the deployer or other developer to the extent feasible and necessary, information and documentation to enable the deployer, other developer, or a third party contracted by the deployer to complete an impact assessment required by section 3(3) of this act. Such information and documentation must include artifacts, such as system cards or predeployment impact assessments, including relevant risk management policies and impact assessments.
G-01 AI Governance Program & Documentation · G-01.2 · Developer · Automated Decisionmaking
Sec. 2(6)
Plain Language
When a developer makes an intentional and substantial modification to a high-risk AI system, all disclosures required under Section 2 must be updated within 90 days to remain accurate. The definition of intentional and substantial modification narrows the trigger to changes that create new material discrimination risks — routine deployer customizations within scope and pre-approved continuous learning changes do not trigger the update obligation.
Statutory Text
(6) For a disclosure required pursuant to this section, a developer shall, no later than 90 days after the developer performs an intentional and substantial modification to a high-risk artificial intelligence system, update such disclosure as necessary to ensure that such disclosure remains accurate.
T-02 AI Content Labeling & Provenance · T-02.1, T-02.2 · Developer · Automated Decisionmaking · Content Generation
Sec. 2(7)
Plain Language
Developers of high-risk generative AI systems that produce or substantially modify synthetic content must ensure outputs are identifiable and detectable using industry-standard tools or developer-provided tools at the time of generation. For artistic, creative, satirical, or fictional works, the identification must not hinder display or enjoyment. Significant carve-outs apply: text-only synthetic content, content published in the public interest, content unlikely to mislead a reasonable person, outputs from assistive editing tools that do not substantially alter inputs, and law enforcement-authorized crime detection uses are all exempt from the identification requirement.
Statutory Text
(7)(a) A developer of a high-risk generative artificial intelligence system that generates or substantially modifies synthetic content shall ensure that the outputs of such high-risk artificial intelligence system: (i) Are identifiable and detectable in a manner that is accessible by consumers using industry-standard tools or tools provided by the developer; (ii) comply with any applicable accessibility requirements, as synthetic content, to the extent reasonably feasible; and (iii) apply such identification at the time the output is generated. (b) If such synthetic content is an audio, image, or video format that forms part of an evidently artistic, creative, satirical, fictional, or analogous work or program, the requirement for identifying outputs of high-risk artificial intelligence systems pursuant to (a) of this subsection (7) is limited to a manner that does not hinder the display or enjoyment of such work or program. (c) The identification of outputs required by (a) of this subsection (7) do not apply to: (i) Synthetic content that consists exclusively of text, is published to inform the public on any matter of public interest, or is unlikely to mislead a reasonable person consuming such synthetic content; or (ii) the outputs of a high-risk artificial intelligence system that performs an assistive function for standard editing, does not substantially alter the input data provided by the developer, or is used to detect, prevent, investigate, or prosecute any crime as authorized by law.
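The labeling obligation and its carve-outs amount to a default-on rule with enumerated exemptions. A hedged decision sketch (parameter names are ours, collapsing the statutory categories):

```python
def labeling_required(
    is_high_risk_generative: bool,
    text_only: bool,
    public_interest_informational: bool,
    unlikely_to_mislead: bool,
    assistive_editing_only: bool,
    law_enforcement_use: bool,
) -> bool:
    """Identification applies to high-risk generative outputs unless any
    enumerated carve-out fits."""
    if not is_high_risk_generative:
        return False
    exempt = (
        text_only
        or public_interest_informational
        or unlikely_to_mislead
        or assistive_editing_only
        or law_enforcement_use
    )
    return not exempt
```

Note the breadth of the carve-outs: any single flag (e.g., text-only output) removes the identification requirement entirely.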
H-02 Non-Discrimination & Bias Assessment · H-02.3 · Deployer · Automated Decisionmaking
Sec. 3(1)
Plain Language
Deployers must exercise reasonable care to protect consumers from known or reasonably foreseeable algorithmic discrimination risks when using high-risk AI systems to make consequential decisions. Full compliance with all deployer obligations in Section 3 creates a rebuttable presumption of reasonable care in any civil action. This is the deployer-side counterpart to the developer reasonable care obligation in Section 2(1).
Statutory Text
(1) A deployer of a high-risk artificial intelligence system shall use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination. In a civil action brought against a deployer pursuant to this chapter, there is a rebuttable presumption that a deployer of a high-risk artificial intelligence system used reasonable care as required by this subsection (1) if the deployer complied with the provisions of this section.
G-01 AI Governance Program & Documentation · G-01.1 · Deployer · Automated Decisionmaking
Sec. 3(2)
Plain Language
Deployers may not use a high-risk AI system for consequential decisions unless they have designed and implemented a risk management policy and program covering the principles, processes, and personnel for identifying, mitigating, and documenting algorithmic discrimination risks. Alignment with the NIST AI RMF, ISO/IEC 42001, or a substantially equivalent recognized framework creates a rebuttable presumption of conformity. This is a deployment prerequisite — the deployer must have the program in place before using the system.
Statutory Text
(2)(a) A deployer may not deploy or use a high-risk artificial intelligence system to make a consequential decision unless the deployer has designed and implemented a risk management policy and program for such high-risk artificial intelligence system. The risk management policy must specify the principles, processes, and personnel that the deployer must use in maintaining the risk management program to identify, mitigate, and document any risk of algorithmic discrimination that is a reasonably foreseeable consequence of deploying or using such high-risk artificial intelligence system to make a consequential decision. (b) A risk management policy and program designed, implemented, and maintained pursuant to this section is presumed to be in conformity with related requirements set out in this section if the policy and program align with the guidance and standards set forth in the latest version of: (i) The artificial intelligence risk management framework published by the national institute of standards and technology; (ii) Standard ISO/IEC 42001 of the international organization for standardization; or (iii) A nationally or internationally recognized risk management framework for artificial intelligence systems with requirements that are substantially equivalent to, and at least as stringent as, the guidance and standards described in (b)(i) and (ii) of this subsection (2). (c) High-risk artificial intelligence systems that are in conformity with the latest version of the artificial intelligence risk management framework published by the national institute of standards and technology, standard ISO/IEC 42001 of the international organization for standardization, or another nationally or internationally recognized risk management framework for artificial intelligence systems, or parts thereof, are presumed to be in conformity with related requirements set out in this section.
H-02 Non-Discrimination & Bias Assessment · H-02.3, H-02.10 · Deployer · Automated Decisionmaking
Sec. 3(3)
Plain Language
Deployers must complete a formal impact assessment before initially deploying a high-risk AI system and before any significant update is used for consequential decisions. The assessment must cover at minimum: purpose and use cases, known discrimination risks and mitigation steps, data input/output categories, customization data, performance metrics and limitations, transparency measures, post-deployment monitoring, and validity/reliability analysis. A single impact assessment may cover comparable systems, and an assessment completed for another law satisfies this requirement if reasonably similar in scope. All impact assessments and supporting records — including raw performance data — must be retained for at least three years following final deployment. This is a deployment prerequisite: the system may not be used for consequential decisions without a completed assessment.
Statutory Text
(3)(a) Except as provided in (c) of this subsection (3), a deployer may not deploy or use a high-risk artificial intelligence system to make a consequential decision unless the deployer has completed an impact assessment for such high-risk artificial intelligence system. The deployer shall complete an impact assessment for a high-risk artificial intelligence system before the deployer initially deploys such high-risk artificial intelligence system and before a significant update to such high-risk artificial intelligence system is used to make a consequential decision. (b) An impact assessment completed pursuant to (a) of this subsection (3) must include, at a minimum: (i) A statement by the deployer disclosing the purpose, intended use cases, and deployment context of, and benefits afforded by, the high-risk artificial intelligence system; (ii) A statement by the deployer disclosing whether the deployment or use of the high-risk artificial intelligence system poses any known or reasonably foreseeable risk of algorithmic discrimination and, if so, the nature of such algorithmic discrimination and the steps that have been taken, to the extent feasible, to mitigate such risk; (iii) For each postdeployment impact assessment completed pursuant to this section, whether the intended use cases of the high-risk artificial intelligence system as updated were consistent with, or varied from, the developer's intended uses of such high-risk artificial intelligence system; (iv) A description of the categories of data the high-risk artificial intelligence system processes as inputs and the outputs such high-risk artificial intelligence system produces; (v) If the deployer used data to customize the high-risk artificial intelligence system, an overview of the categories of data the deployer used to customize such high-risk artificial intelligence system; (vi) A list of any metrics used to evaluate the performance and known limitations of the high-risk artificial intelligence 
system; (vii) A description of any transparency measures taken concerning the high-risk artificial intelligence system, including any measures taken to disclose to a consumer that such high-risk artificial intelligence system is in use when such high-risk artificial intelligence system is in use; (viii) A description of any postdeployment monitoring performed and user safeguards provided concerning such high-risk artificial intelligence system, including any oversight process established by the deployer to address issues arising from deployment or use of such high-risk artificial intelligence system as such issues arise; and (ix) An analysis of such high-risk artificial intelligence system's validity and reliability in accordance with standard industry practices. (c)(i) A single impact assessment may address a comparable set of high-risk artificial intelligence systems deployed or used by a deployer. (ii) If a deployer completes an impact assessment for the purpose of complying with another applicable law or regulation, such impact assessment satisfies the relevant requirements established in this section if such impact assessment is reasonably similar in scope and effect to the impact assessment that would otherwise be completed pursuant to this section. (iii) A deployer that completes an impact assessment pursuant to this section shall maintain such impact assessment and all records concerning the impact assessment for three years. Throughout the period of time that a high-risk artificial intelligence system is deployed and for a period of at least three years following the final deployment of the high-risk artificial intelligence system, the deployer shall retain all records concerning each impact assessment conducted on the high-risk artificial intelligence system, including all raw data used to evaluate the performance and known limitations of such system.
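The retention clause requires records to be kept throughout deployment and for at least three years after final deployment. A small sketch of the deadline arithmetic (function name is ours), including the Feb 29 edge case:

```python
from datetime import date

RETENTION_YEARS = 3  # Sec. 3(3)(c)(iii) minimum

def retention_end(final_deployment: date) -> date:
    """Earliest date on which impact-assessment records may be discarded:
    three years after the final deployment date."""
    try:
        return final_deployment.replace(year=final_deployment.year + RETENTION_YEARS)
    except ValueError:
        # Feb 29 has no counterpart in a non-leap target year; fall back to Feb 28.
        return final_deployment.replace(
            year=final_deployment.year + RETENTION_YEARS, day=28
        )
```

Because the statute says "at least three years," this computes a floor, not a safe-harbor discard date.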
H-01 Human Oversight of Automated Decisions · H-01.1H-01.3 · Deployer · Automated Decisionmaking
Sec. 3(4)
Plain Language
Before or at the time a high-risk AI system interacts with a consumer, the deployer must disclose: that the consumer is interacting with AI, the system's purpose and nature, the nature of the consequential decision, deployer contact information, and a plain-language description covering what personal attributes the system measures, how it measures them, their relevance to the decision, human oversight components, and how automated components inform decisions. This is a comprehensive pre-decision disclosure requirement — considerably more detailed than a simple AI identity notice.
Statutory Text
(4) Not later than the time that a deployer uses a high-risk artificial intelligence system to interact with a consumer, the deployer shall disclose to the consumer that the consumer is interacting with an artificial intelligence system. At such time, the deployer shall also disclose to the consumer: (a) The purpose of such high-risk artificial intelligence system; (b) The nature of such system; (c) The nature of the consequential decision; (d) The contact information for the deployer; and (e) A description of the artificial intelligence system in plain language, which must include: (i) A description of the personal characteristics or attributes that such system will measure or assess; (ii) The method by which the system measures or assesses such attributes or characteristics; (iii) How such attributes or characteristics are relevant to the consequential decisions for which the system should be used; (iv) Any human components of such system; and (v) How any automated components of such system are used to inform such consequential decisions.
H-01 Human Oversight of Automated Decisions · H-01.1 · Deployer · Automated Decisionmaking
Sec. 3(5)
Plain Language
Deployers must transmit consequential decisions to affected consumers without undue delay. When the decision is adverse and relied on personal data beyond what the consumer directly provided, the deployer must also explain: the principal reasons for the decision, how much and in what way the AI system contributed, what types of data were used, and where that data came from. This adverse-decision explanation obligation is triggered only when the decision relied on data the consumer did not directly supply — if the decision is based solely on consumer-provided information, only the decision itself must be communicated.
Statutory Text
(5) A deployer that has deployed a high-risk artificial intelligence system to make a consequential decision concerning a consumer shall transmit to the consumer the consequential decision without undue delay. If such consequential decision is adverse to the consumer and based on personal data beyond information that the consumer provided directly to the deployer, the deployer shall provide to the consumer a statement disclosing the principal reason or reasons for the consequential decision, including: (a) The degree to which and manner in which the high-risk artificial intelligence system contributed to the consequential decision; (b) The type of data that was processed by such system in making the consequential decision; and (c) The sources of such data.
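The two-part trigger above (adverse decision plus reliance on data beyond what the consumer supplied) determines whether the reasons statement is owed at all. An illustrative sketch, with non-statutory names:

```python
# The three statutory contents of the reasons statement, paraphrased.
REQUIRED_STATEMENT_ITEMS = (
    "degree and manner of the system's contribution to the decision",
    "types of data processed in making the decision",
    "sources of that data",
)

def required_disclosures(adverse: bool,
                         used_data_beyond_consumer_provided: bool) -> tuple:
    """The reasons statement is owed only for adverse decisions that relied
    on personal data the consumer did not directly supply; otherwise only
    the decision itself must be transmitted."""
    if adverse and used_data_beyond_consumer_provided:
        return REQUIRED_STATEMENT_ITEMS
    return ()
```

A favorable decision, or an adverse one based solely on the consumer's own application materials, yields no explanation obligation under this sketch.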
G-02 Public Transparency & Documentation · G-02.4 · Deployer · Automated Decisionmaking
Sec. 3(6)
Plain Language
Deployers must make a publicly accessible, clear summary statement describing how they manage algorithmic discrimination risks from their high-risk AI systems. This is a standalone public transparency obligation — separate from the impact assessment (which is internal/retained documentation) and the consumer-facing pre-decision disclosures. The statement must be 'readily available,' suggesting publication on a website or similar public channel.
Statutory Text
(6) A deployer shall make readily available a clear statement summarizing how the deployer manages any reasonably foreseeable risk of algorithmic discrimination that may arise from the use or deployment of the high-risk artificial intelligence system.
G-01 AI Governance Program & Documentation · G-01.2 · Deployer · Automated Decisionmaking
Sec. 3(7)-(8)
Plain Language
Deployers must update all required disclosures within 30 days after being notified by the developer of an intentional and substantial modification to the AI system. Separately, if a deployer itself performs an intentional and substantial modification, it must also comply with all developer-level documentation and disclosure requirements under Section 2. This means a deployer that significantly modifies a system effectively steps into the developer's shoes for documentation purposes.
Statutory Text
(7) For a disclosure required pursuant to this section, each deployer shall, no later than 30 days after the deployer is notified by the developer that the developer has performed an intentional and substantial modification to a high-risk artificial intelligence system, update such disclosure as necessary to ensure that such disclosure remains accurate. (8) A deployer who performs an intentional and substantial modification to a high-risk artificial intelligence system shall comply with the documentation and disclosure requirements for developers pursuant to section 2 of this act.
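The chapter sets two different update clocks: 90 days for a developer after its own modification (Sec. 2(6)) and 30 days for a deployer after notice of the developer's modification (Sec. 3(7)). A deadline sketch under those two constants (function name is illustrative):

```python
from datetime import date, timedelta

DEVELOPER_UPDATE_DAYS = 90  # Sec. 2(6): after the developer's own modification
DEPLOYER_UPDATE_DAYS = 30   # Sec. 3(7): after notice from the developer

def disclosure_update_deadline(trigger_date: date, role: str) -> date:
    """Latest date by which the required disclosure must be brought current."""
    if role == "developer":
        return trigger_date + timedelta(days=DEVELOPER_UPDATE_DAYS)
    if role == "deployer":
        return trigger_date + timedelta(days=DEPLOYER_UPDATE_DAYS)
    raise ValueError("role must be 'developer' or 'deployer'")
```

A deployer that itself substantially modifies a system would, per Sec. 3(8), additionally inherit the developer-side clock for the Section 2 disclosures.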
Other · Automated Decisionmaking
Sec. 4(13)
Plain Language
When a developer or deployer relies on an exemption (including the trade secret exemption) to withhold or redact information that would otherwise be required to be disclosed under the chapter, it must notify the person entitled to the disclosure and explain the basis for withholding or redacting. This creates a transparency guardrail on exemption use — entities cannot silently omit required disclosures by claiming an exemption; they must affirmatively flag the omission and its justification.
Statutory Text
(13) If a developer or deployer withholds information pursuant to an exemption set forth in this chapter for which disclosure would otherwise be required by this chapter, including the exemption from disclosure of trade secrets, the developer or deployer shall notify the subject of disclosure and provide a basis for withholding the information. If a developer or deployer redacts any information pursuant to an exemption from disclosure, the developer or deployer shall notify the subject of disclosure that the developer or deployer is redacting such information and provide the basis for such decision to redact.