H-02
Human Oversight & Fairness
Non-Discrimination & Bias Assessment
Applies to: Developer · Deployer · Government Sector · Employment · Financial Services · Healthcare · Government System
Bills — Enacted: 2 unique bills
Bills — Proposed: 51
Last Updated: 2026-03-29
Core Obligation

AI systems used in high-stakes contexts must be tested and formally assessed for discriminatory impact across protected characteristics before deployment. Results must be documented and retained. Some jurisdictions require submission to regulators; others require independent third-party audits with public disclosure of results.

Sub-Obligations (9)

H-02.1 — Internal bias testing (1 enacted, 26 proposed)
The developer or deployer must conduct testing across protected characteristics using appropriate statistical methods before deployment. (See the sketch following this table.)

H-02.2 — Documented methodology (0 enacted, 9 proposed)
The testing methodology must be documented in sufficient detail for third-party review, including: protected characteristics tested, statistical measures used, datasets tested, and results.

H-02.3 — Algorithmic impact assessment (2 enacted, 27 proposed)
A formal written assessment of the AI system's potential discriminatory impact must be completed before deployment, identifying risks and mitigation measures. Must be retained and available to regulators on request.

H-02.4 — Regulator submission of assessment (0 enacted, 5 proposed)
Proactive submission of the impact assessment to a regulatory authority on a defined schedule or upon request.

H-02.5 — Public disclosure of assessment (0 enacted, 5 proposed)
Public disclosure of a summary or the full impact assessment.

H-02.6 — Independent third-party audit (0 enacted, 13 proposed)
A qualified independent auditor with no material relationship to the developer or deployer must evaluate the system for bias and disparate impact. Currently required primarily for automated employment decision tools.

H-02.7 — Public disclosure of audit results (0 enacted, 9 proposed)
Audit results, including selection rates and impact ratios across protected categories, must be published prior to or contemporaneous with deployment.

H-02.8 — Periodic post-deployment discrimination review (2 enacted, 14 proposed)
Deployers must conduct periodic (at least annual) reviews of each deployed high-risk AI system to affirmatively verify the system is not causing algorithmic discrimination, separate from pre-deployment bias assessments. Reviews may be conducted internally or by a contracted third party.

H-02.10 — Impact assessment records retention (1 enacted, 12 proposed)
Deployers must retain all impact assessments, associated records, and prior impact assessments for a period of time following the final deployment of each high-risk AI system, and make them available to regulators upon request.
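As context for H-02.1 and H-02.7, the following is a minimal sketch of the selection-rate and impact-ratio arithmetic that statutory bias testing typically involves. The group labels, sample data, and the 0.8 threshold (the EEOC "four-fifths" benchmark) are illustrative assumptions; none of the bills mapped below prescribes this particular method.

```python
# Hypothetical illustration of selection rates and impact ratios (H-02.1,
# H-02.7). The 0.8 cutoff is the EEOC four-fifths benchmark, shown as one
# common screening heuristic, not a statutory requirement.
from collections import defaultdict

def impact_ratios(records):
    """records: iterable of (group, selected) pairs, where selected is a bool."""
    counts = defaultdict(lambda: [0, 0])  # group -> [selected, total]
    for group, selected in records:
        counts[group][0] += int(selected)
        counts[group][1] += 1
    rates = {g: sel / tot for g, (sel, tot) in counts.items()}
    best = max(rates.values())  # reference: the highest-selection-rate group
    return {g: rate / best for g, rate in rates.items()}

applicants = [("A", True), ("A", True), ("A", False),
              ("B", True), ("B", False), ("B", False)]
for group, ratio in impact_ratios(applicants).items():
    flag = "  <- below 0.8, review" if ratio < 0.8 else ""
    print(f"group {group}: impact ratio {ratio:.2f}{flag}")
```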
Bills That Map This Requirement (53 bills)
Each entry lists: Bill · Status · Sub-Obligations · Section
Pending 2026-01-01
H-02.3
Bus. & Prof. Code § 22756.1(a)(1)-(2), (c)(2)
Plain Language
Developers must complete a formal impact assessment before making any high-risk automated decision system publicly available on or after January 1, 2026. For systems already on the market before that date, the impact assessment obligation triggers only upon a substantial modification. The assessment must cover the system's purpose and intended uses, its outputs, data inputs, foreseeable discriminatory impacts on protected classifications, safeguards against algorithmic discrimination, and monitoring guidance for deployers. This assessment must be retained and is subject to regulatory request under § 22756.6.
(a) (1) For a high-risk automated decision system made publicly available for use on or after January 1, 2026, a developer shall perform an impact assessment on the high-risk automated decision system before making the high-risk automated decision system publicly available for use. (2) For a high-risk automated decision system first made publicly available for use before January 1, 2026, a developer shall perform an impact assessment if the developer makes a substantial modification to the high-risk automated decision system. (c) (2) An impact assessment prepared pursuant to this section shall include all of the following: (A) A statement of the purpose of the high-risk automated decision system and its intended benefits, intended uses, and intended deployment contexts. (B) A description of the high-risk automated decision system's intended outputs. (C) A summary of the types of data intended to be used as inputs to the high-risk automated decision system and any processing of those data inputs recommended to ensure the intended functioning of the high-risk automated decision system. (D) A summary of reasonably foreseeable potential disproportionate or unjustified impacts on a protected classification from the intended use by deployers of the high-risk automated decision system. (E) A developer's impact assessment shall also include both of the following: (i) A description of safeguards implemented or other measures taken by the developer to mitigate and guard against risks known to the developer of algorithmic discrimination arising from the use of the high-risk automated decision system. (ii) A description of how the high-risk automated decision system can be monitored by a deployer for risks of algorithmic discrimination known to the developer.
Pending 2026-01-01
H-02.3
Bus. & Prof. Code § 22756.1(b), (c)(2)(F)-(H)
Plain Language
Deployers of high-risk automated decision systems first deployed after January 1, 2026, must perform an impact assessment within two years of deployment. The deployer's assessment must address how its use aligns with or deviates from the developer's intended uses, describe safeguards against discrimination, and explain monitoring and evaluation plans. A state agency may opt out of performing its own impact assessment if it uses the system only for the developer's intended use and four further conditions are met: (1) the agency makes no substantial modification to the system, (2) the developer complies with the procurement and confidential impact assessment requirements, (3) the agency has no reasonable basis to believe the system is likely to cause algorithmic discrimination, and (4) the agency maintains its own governance program under § 22756.3.
(b) (1) Except as provided in paragraph (2), for a high-risk automated decision system first deployed after January 1, 2026, a deployer shall perform an impact assessment within two years of deploying the high-risk automated decision system. (2) A state agency that is a deployer may opt out of performing an impact assessment if the state agency uses the automated decision system only for its intended use as determined by the developer and all of the following requirements are met: (A) The state agency does not make a substantial modification to the high-risk automated decision system. (B) The developer of the high-risk automated decision system is in compliance with Section 10285.8 of the Public Contract Code and subdivision (d). (C) The state agency does not have a reasonable basis to believe that deployment of the high-risk automated decision system as intended by the developer is likely to result in algorithmic discrimination. (D) The state agency is in compliance with Section 22756.3. (c) (2) An impact assessment prepared pursuant to this section shall include all of the following: (F) A statement of the extent to which the deployer's use of the high-risk automated decision system is consistent with, or varies from, the developer's statement of the high-risk automated decision system's purpose and intended benefits, intended uses, and intended deployment contexts. (G) A description of safeguards implemented or other measures taken to mitigate and guard against any known risks to the deployer of discrimination arising from the high-risk automated decision system. (H) A description of how the high-risk automated decision system has been, and will be, monitored and evaluated.
Pending 2026-01-01
H-02.3
Bus. & Prof. Code § 22756.5(a)-(b)
Plain Language
Developers and deployers are prohibited from deploying or making available a high-risk automated decision system if the impact assessment finds the system is likely to cause algorithmic discrimination. However, an exception exists: deployment is permitted if the entity implements safeguards to mitigate the discrimination risks and then performs an updated impact assessment verifying that the algorithmic discrimination has been mitigated and is not reasonably likely to occur. This effectively creates a deploy-with-safeguards pathway conditioned on a second, confirmatory impact assessment.
(a) Except as provided in subdivision (b), a deployer or developer shall not deploy or make available for deployment a high-risk automated decision system if the impact assessment performed pursuant to this chapter determines that the high-risk automated decision system is likely to result in algorithmic discrimination. (b) (1) A deployer or developer may deploy or make available for deployment a high-risk automated decision system if the impact assessment performed pursuant to this chapter determines that the high-risk automated decision system will result in algorithmic discrimination if the deployer or developer implements safeguards to mitigate the known risks of algorithmic discrimination. (2) A deployer or developer acting under the exception provided by paragraph (1) shall perform an updated impact assessment to verify that the algorithmic discrimination has been mitigated and is not reasonably likely to occur.
Pending 2026-01-01
H-02.1
Health & Safety Code § 1339.76(c)(1)-(2)
Plain Language
Developers of AI models or systems used in healthcare settings must test their systems for biased impacts — meaning unintended impacts on individuals based on protected characteristics — in the outputs produced by the AI, using the patient population of the specific health facility, clinic, physician's office, or group practice where the AI is deployed. Testing must be conducted in conjunction with the health facility. Until the advisory board develops its standardized testing system, developers must use an existing testing system designated by the board; once the board's system is available, developers may choose to use it instead.
(c) (1) Developers of AI models or AI systems, in conjunction with health facilities, clinics, physician's offices, or offices of a group practice, shall test for biased impacts in the outputs produced by the specified AI model or AI system based on the health facility's patient population. (2) Developers shall use an existing testing system designated by the advisory board until the advisory board has developed its standardized testing system described in paragraph (2) of subdivision (b). After the advisory board has developed its testing system, developers may alternatively use the board's testing system.
Pending 2027-01-01
C.R.S. § 10-16-112.7(3)(c)-(d)
Plain Language
Entities using AI for utilization review must ensure the AI system does not discriminate against individuals in violation of state or federal law and is fairly and equitably applied, including in accordance with HHS regulations and guidance. While this provision does not mandate a specific bias testing methodology or independent audit, it creates an affirmative obligation to ensure non-discrimination — which in practice requires some form of testing or monitoring to verify compliance. The reference to HHS regulations and guidance incorporates federal non-discrimination standards such as those under Section 1557 of the ACA.
(c) THE ARTIFICIAL INTELLIGENCE SYSTEM IS NOT USED IN ANY WAY THAT DISCRIMINATES AGAINST INDIVIDUALS IN VIOLATION OF OTHER STATE OR FEDERAL LAWS; (d) THE ARTIFICIAL INTELLIGENCE SYSTEM IS FAIRLY AND EQUITABLY APPLIED, INCLUDING IN ACCORDANCE WITH APPLICABLE REGULATIONS AND GUIDANCE ISSUED BY THE FEDERAL DEPARTMENT OF HEALTH AND HUMAN SERVICES;
Enacted 2026-06-30
H-02.1, H-02.3
C.R.S. § 6-1-1702(1)
Plain Language
Developers of high-risk AI systems must exercise reasonable care to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination arising from the system's intended and contracted uses. This is a general duty of care standard — not a checklist. However, developers receive a rebuttable presumption of compliance if they satisfy the specific obligations in this section plus any AG rules adopted under § 6-1-1707. The safe harbor is significant: it shifts the burden to the AG to prove non-compliance after a developer demonstrates statutory compliance.
(1) On and after June 30, 2026, a developer of a high-risk artificial intelligence system shall use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination arising from the intended and contracted uses of the high-risk artificial intelligence system. In any enforcement action brought on or after June 30, 2026, by the attorney general pursuant to section 6-1-1706, there is a rebuttable presumption that a developer used reasonable care as required under this section if the developer complied with this section and any additional requirements or obligations as set forth in rules adopted by the attorney general pursuant to section 6-1-1707.
Enacted 2026-06-30
H-02.1, H-02.3
C.R.S. § 6-1-1703(1)
Plain Language
Deployers of high-risk AI systems must exercise reasonable care to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination. Like the parallel developer duty in § 6-1-1702(1), deployers receive a rebuttable presumption of compliance if they meet the section's specific obligations and any AG rules. This is the overarching deployer duty — the specific sub-obligations are mapped separately below.
(1) On and after June 30, 2026, a deployer of a high-risk artificial intelligence system shall use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination. In any enforcement action brought on or after June 30, 2026, by the attorney general pursuant to section 6-1-1706, there is a rebuttable presumption that a deployer of a high-risk artificial intelligence system used reasonable care as required under this section if the deployer complied with this section and any additional requirements or obligations as set forth in rules adopted by the attorney general pursuant to section 6-1-1707.
Enacted 2026-06-30
H-02.3, H-02.10
C.R.S. § 6-1-1703(3)(a)
Plain Language
Deployers (or their contracted third parties) must complete an impact assessment for each high-risk AI system at deployment and at least annually thereafter, plus within 90 days of any intentional and substantial modification. This is a continuing obligation — the annual cadence ensures the assessment stays current even absent modifications. Exceptions exist in subsections (3)(d), (3)(e), and (6) of the original statute.
(3) (a) Except as provided in subsections (3)(d), (3)(e), and (6) of this section: (I) A deployer, or a third party contracted by the deployer, that deploys a high-risk artificial intelligence system on or after June 30, 2026, shall complete an impact assessment for the high-risk artificial intelligence system; and (II) On and after June 30, 2026, a deployer, or a third party contracted by the deployer, shall complete an impact assessment for a deployed high-risk artificial intelligence system at least annually and within ninety days after any intentional and substantial modification to the high-risk artificial intelligence system is made available.
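The cadence above (at deployment, at least annually, and within 90 days of an intentional and substantial modification) reduces to simple date arithmetic. A minimal sketch with hypothetical dates; the statute itself controls which events restart the clock.

```python
# Sketch of the assessment cadence in C.R.S. § 6-1-1703(3)(a): at least
# annually, plus within 90 days of an intentional and substantial
# modification. Dates below are hypothetical.
from datetime import date, timedelta

def next_assessment_due(last_assessment, modified_on=None):
    """Earlier of the annual deadline and the 90-day post-modification deadline."""
    annual_due = last_assessment.replace(year=last_assessment.year + 1)
    if modified_on is None:
        return annual_due
    return min(annual_due, modified_on + timedelta(days=90))

print(next_assessment_due(date(2026, 6, 30)))                    # 2027-06-30
print(next_assessment_due(date(2026, 6, 30), date(2027, 2, 1)))  # 2027-05-02
```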
Enacted 2026-06-30
H-02.3
C.R.S. § 6-1-1703(3)(c)
Plain Language
When an impact assessment is triggered by an intentional and substantial modification (as opposed to the annual routine assessment), the deployer must include an additional statement disclosing whether the system was used consistently with or differently from the developer's intended uses. This requirement surfaces deployment drift — if the deployer has been using the system outside the developer's stated intended uses, this must be documented and disclosed in the post-modification impact assessment.
(c) In addition to the information required under subsection (3)(b) of this section, an impact assessment completed pursuant to this subsection (3) following an intentional and substantial modification to a high-risk artificial intelligence system on or after June 30, 2026, must include a statement disclosing the extent to which the high-risk artificial intelligence system was used in a manner that was consistent with, or varied from, the developer's intended uses of the high-risk artificial intelligence system.
Enacted 2026-06-30
H-02.8
C.R.S. § 6-1-1703(3)(g)
Plain Language
Deployers must conduct at least annual reviews of each deployed high-risk AI system to affirmatively verify that it is not causing algorithmic discrimination. This is a periodic deployment review obligation — distinct from the pre-deployment impact assessment. The first review must be completed by June 30, 2026, with annual reviews thereafter. This review can be conducted by the deployer itself or a contracted third party.
(g) On or before June 30, 2026, and at least annually thereafter, a deployer, or a third party contracted by the deployer, must review the deployment of each high-risk artificial intelligence system deployed by the deployer to ensure that the high-risk artificial intelligence system is not causing algorithmic discrimination.
Pending 2026-10-01
H-02.1, H-02.2, H-02.4, H-02.5, H-02.6, H-02.7, H-02.10
Sec. 8(a)-(f)
Plain Language
Deployers must contract with a Labor Commissioner-approved independent auditor for a bias audit before deployment and annually thereafter; the initial audit must be completed no later than one year before intended deployment. Each audit must evaluate performance and error rates across subgroups, assess disparate impact against protected classes, examine data sources and output quality, evaluate scoring thresholds, and test for less discriminatory alternatives. The auditor must have no financial or operational interest in the deployer or developer. Within 30 days of audit completion, the deployer must file the full report and a plain-language summary with the Labor Commissioner and publish the summary on its website. If the audit identifies disparate impact, the system may not be deployed or continue operating unless the deployer demonstrates business necessity, has implemented Commissioner-approved corrective actions, and either no less discriminatory alternative exists or one has been implemented. All audit records must be retained for at least five years and made available to the Commissioner on request.
(a) (1) Prior to deploying an automated employment-related decision process, and annually thereafter, a deployer shall contract with an independent auditor to complete a bias audit. Such bias audit shall be done not later than one year prior to the date the deployer intends to deploy such automated employment-related decision process. (2) Each bias audit conducted pursuant to this subsection shall: (A) Evaluate the automated employment-related decision process performance and error rates across relevant subgroups; (B) Assess disparate impact caused by the automated employment-related decision process against protected classes; (C) Examine the sources of data processed by the automated employment-related decision process and quality of content, decisions, predictions or recommendations generated by the automated employment-related decision process; (D) Evaluate the effects of any thresholds, scoring or ranking criteria utilized by the automated employment-related decision process; and (E) Test for less discriminatory alternatives or adjustments to such automated employment-related decision process. (3) No deployer shall contract with an independent auditor who (A) has a financial or operational interest in the deployer or developer of the automated employment-related decision process, or (B) has not been approved by the Labor Commissioner pursuant to subsection (b) of this section. (b) The Labor Commissioner shall establish and implement an approval process of independent auditors to conduct bias audits pursuant to subsection (a) of this section and shall maintain a registry of independent auditors approved by such process. (c) Not later than thirty days after completing a bias audit pursuant to subsection (a) of this section, the deployer shall (1) in a form and manner prescribed by the Labor Commissioner, file a bias audit report and a plain-language summary of such report with the commissioner, and (2) publish a plain-language summary of such audit report on the deployer's Internet web site in a conspicuous place accessible to applicants for employment and employees. Such summary shall include (A) the methodology used in such bias audit, (B) the key findings and identified risks found by such bias audit, and (C) any corrective actions taken by the deployer. (d) No automated employment-related decision process shall be deployed or continue to be deployed by a deployer if the most recent bias audit conducted pursuant to subsection (a) of this section identified any disparate impact caused by such automated employment-related decision process, except where the deployer can demonstrate (1) a business necessity, (2) such deployer has implemented corrective actions approved by the Labor Commissioner, and (3) that either (A) no less discriminatory alternative is available, or (B) a less discriminatory alternative has been implemented by the deployer. (e) Each deployer shall maintain records relating to bias audits required pursuant to subsection (a) of this section for a period of not less than five years and shall make such records available to the Labor Commissioner upon request. (f) The Labor Commissioner may adopt regulations, in accordance with the provisions of chapter 54 of the general statutes, necessary to carry out the purposes of this section, including, but not limited to, establishing minimum qualifications for independent auditors and methodologic requirements for bias audits required pursuant to subsection (a) of this section.
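Audit items (A) and (B) above call for performance and error rates across relevant subgroups and a disparate-impact assessment. Below is a minimal sketch of the per-subgroup error-rate tabulation; the subgroup labels and data are hypothetical, and an actual audit would follow any methodologic requirements the Labor Commissioner adopts under subsection (f).

```python
# Hypothetical tabulation for audit item (a)(2)(A): performance and error
# rates across relevant subgroups. Subgroup labels and outcomes are made up.
from collections import defaultdict

def subgroup_error_rates(outcomes):
    """outcomes: iterable of (subgroup, predicted, actual) triples."""
    tallies = defaultdict(lambda: [0, 0])  # subgroup -> [errors, total]
    for subgroup, predicted, actual in outcomes:
        tallies[subgroup][0] += int(predicted != actual)
        tallies[subgroup][1] += 1
    return {g: errs / total for g, (errs, total) in tallies.items()}

decisions = [("A", 1, 1), ("A", 0, 1), ("B", 1, 0), ("B", 0, 0), ("B", 1, 1)]
for subgroup, rate in subgroup_error_rates(decisions).items():
    print(f"subgroup {subgroup}: error rate {rate:.0%}")
```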
Pending 2026-10-01
H-02.1
Sec. 18 (amending § 46a-60(b)(1)(A))
Plain Language
This amends Connecticut's anti-discrimination statute to expressly prohibit using an automated employment-related decision process in any manner that has the discriminatory effect of refusing to hire, discharging, or discriminating against individuals based on protected characteristics — including race, sex, age, disability, veteran status, and others. This is a disparate-impact standard: the employer need not intend discrimination; the effect is sufficient. Notably, the amendment requires courts and the CHRO to consider evidence of anti-bias testing or proactive compliance efforts — including quality, recency, scope, results, and response — as a mitigating factor. This creates a practical safe-harbor-like incentive for deployers who conduct robust bias audits under Section 8.
(A) For an employer, by the employer or the employer's agent, except in the case of a bona fide occupational qualification or need, to refuse to hire or employ or to bar or to discharge from employment any individual or to discriminate against any individual in compensation or in terms, conditions or privileges of employment because of, or to use an automated employment-related decision process in any manner that has the effect of causing the employer to refuse to hire or employ or to bar or to discharge from employment any individual or to discriminate against any individual in compensation or in terms, conditions or privileges of employment on the basis of, the individual's race, color, religious creed, age, sex, gender identity or expression, marital status, national origin, ancestry, present or past history of mental disability, intellectual disability, learning disability, physical disability, including, but not limited to, blindness, status as a veteran, status as a victim of domestic violence, status as a victim of sexual assault or status as a victim of trafficking in persons. In any action for a discriminatory practice in violation of this subparagraph involving an automated employment-related decision process, the commission or the court shall consider any evidence, or lack of evidence, of anti-bias testing or similar proactive efforts to avoid such discriminatory practice, including, but not limited to, the quality, efficacy, recency and scope of such testing or efforts, the results of such testing or efforts and the response thereto.
Enacted 2023-07-01
H-02.3, H-02.8
Section 1(c)
Plain Language
Beginning February 1, 2024, the Department of Administrative Services must perform ongoing assessments of all AI systems used by state agencies to ensure they do not cause unlawful discrimination or disparate impact across an extensive list of protected characteristics (defined in Section 2(b)(1)(B)). The assessments must follow the policies and procedures established by the Office of Policy and Management. This is a continuing obligation — not a one-time pre-deployment check — requiring periodic review of deployed systems for bias. The protected characteristics include age, genetic information, color, ethnicity, race, creed, religion, national origin, ancestry, sex, gender identity or expression, sexual orientation, marital status, familial status, pregnancy, veteran status, disability, and lawful source of income.
(c) Beginning on February 1, 2024, the Department of Administrative Services shall perform ongoing assessments of systems that employ artificial intelligence and are in use by state agencies to ensure that no such system shall result in any unlawful discrimination or disparate impact described in subparagraph (B) of subdivision (1) of subsection (b) of section 2 of this act. The department shall perform such assessment in accordance with the policies and procedures established by the Office of Policy and Management pursuant to subsection (b) of section 2 of this act.
Pending 2025-07-01
H-02.1
O.C.G.A. § 10-16-2(a)
Plain Language
Developers are categorically prohibited from selling, distributing, or otherwise making available to deployers any automated decision system that results in algorithmic discrimination. The prohibition covers discrimination and disparate impact across a broad set of protected characteristics in the context of consequential decisions. Self-testing to identify or mitigate discrimination and diversity-expanding uses are carved out from the definition of algorithmic discrimination.
No developer shall sell, distribute, or otherwise make available to deployers an automated decision system that results in algorithmic discrimination.
Pending 2025-07-01
H-02.1, H-02.2
O.C.G.A. § 10-16-2(e)
Plain Language
Developers have a continuous obligation to test for and mitigate algorithmic discrimination, invalidity, and errors — including ensuring representative data sources, implementing data governance, testing for disparate impact, and searching for less discriminatory alternatives. This obligation persists for as long as any deployer uses the system. Additionally, when a developer discovers (through its own testing or a deployer's credible report) that a deployed system has caused or is reasonably likely to have caused algorithmic discrimination, it must notify the Attorney General and all known deployers or other developers within 90 days.
(1) A developer of an automated decision system shall take steps to address risks of algorithmic discrimination, invalidity, and errors, including, but not limited to, ensuring suitability and representativeness of data sources, implementing data governance measures, testing the automated decision system for disparate impact, and searching for less discriminatory alternative decision methods. Developers shall continue assessing and mitigating the risk of algorithmic discrimination in their automated decision systems so long as such automated decision systems are in use by any deployer. (2) A developer of an automated decision system shall disclose to the Attorney General, in a form and manner prescribed by the Attorney General, and to all known deployers or other developers of the automated decision system, any known or reasonably foreseeable risks of algorithmic discrimination arising from the intended uses of the automated decision system without unreasonable delay but no later than 90 days after the date on which: (A) The developer discovers through the developer's ongoing testing and analysis that the developer's automated decision system has been deployed and has caused or is reasonably likely to have caused algorithmic discrimination; or (B) The developer receives from a deployer a credible report that the automated decision system has been deployed and has caused algorithmic discrimination.
Pending 2025-07-01
H-02.1
O.C.G.A. § 10-16-3(a)
Plain Language
Deployers are categorically prohibited from using automated decision systems in a manner that results in algorithmic discrimination. This is a strict liability prohibition — deployers are liable for discriminatory outcomes regardless of intent. It complements the parallel prohibition on developers in § 10-16-2(a).
No deployer of an automated decision system shall use an automated decision system in a manner that results in algorithmic discrimination.
Pending 2025-07-01
H-02.3, H-02.8, H-02.10
O.C.G.A. § 10-16-3(e)-(j)
Plain Language
Deployers (or their contracted third parties) must complete a comprehensive impact assessment for each automated decision system before deployment, at least annually thereafter, and within 90 days of any intentional and substantial modification. The assessment must cover system purpose and use cases, algorithmic discrimination risks and mitigation, accessibility impacts, labor law compliance risks, privacy intrusion risks, data categories, validity and reliability analysis, transparency measures, and post-deployment monitoring. If the assessment reveals a discrimination risk, the deployer may not deploy until less discriminatory alternatives are searched for and implemented. A single assessment may cover a comparable set of systems. Impact assessments completed for other regulatory requirements satisfy this obligation if reasonably similar in scope. All impact assessments and records must be retained throughout deployment and for at least three years after final deployment. Small deployers meeting all § 10-16-6 conditions are exempt from this requirement.
(e) Except as otherwise provided for in this chapter: (1) A deployer, or a third party contracted by the deployer, that deploys an automated decision system shall complete an impact assessment for the automated decision system; and (2) A deployer, or a third party contracted by the deployer, shall complete an impact assessment for a deployed automated decision system at least annually and within 90 days after any intentional and substantial modification to the automated decision system is made available. (f) An impact assessment completed pursuant to subsection (e) of this Code section shall include, at a minimum, and to the extent reasonably known by or available to the deployer: (1) A statement by the deployer disclosing the purpose, intended use cases, and deployment context of, and benefits afforded by, the automated decision system; (2) An analysis of whether the deployment of the automated decision system poses any known or reasonably foreseeable risks of: (A) Algorithmic discrimination and, if so, the nature of the algorithmic discrimination and the steps that have been taken to mitigate the risks; (B) Limits on accessibility for individuals who are pregnant, breastfeeding, or disabled, and, if so, what reasonable accommodations the deployer may provide that would mitigate any such limitations on accessibility; (C) Any violation of state or federal labor laws, including laws pertaining to wages, occupational health and safety, and the right to organize; or (D) Any physical or other intrusion upon the solitude or seclusion, or the private affairs or concerns, of consumers if such intrusion: (i) Would be offensive to a reasonable person; and (ii) May be redressed under the laws of this state; (3) A description of the categories of data the automated decision system processes as inputs and the outputs the automated decision system produces; (4) If the deployer used data to customize the automated decision system, an overview of the categories of data the deployer used to customize the automated decision system; (5) An analysis of the automated decision system's validity and reliability in accordance with contemporary social science standards, and a description of any metrics used to evaluate the performance and known limitations of the automated decision system; (6) A description of any transparency measures taken concerning the automated decision system, including any measures taken to disclose to a consumer that the automated decision system is in use when the automated decision system is in use; (7) A description of the post-deployment monitoring and user safeguards provided concerning the automated decision system, including the oversight, use, and learning process established by the deployer to address issues arising from the deployment of the automated decision system; and (8) When such impact assessment is completed following an intentional and substantial modification to an automated decision system, a statement disclosing the extent to which the automated decision system was used in a manner that was consistent with, or varied from, the developer's intended uses of the automated decision system. (g) If the analysis required by paragraph (2) of subsection (f) of this Code section reveals a risk of algorithmic discrimination, the deployer shall not deploy the automated decision system until the developer or deployer takes reasonable steps to search for and implement less discriminatory alternative decision methods. 
(h) A single impact assessment may address a comparable set of automated decision systems deployed by a deployer. (i) If a deployer, or a third party contracted by the deployer, completes an impact assessment for the purpose of complying with another applicable law or regulation, the impact assessment shall satisfy the requirements established in this Code section if the impact assessment is reasonably similar in scope and effect to the impact assessment that would otherwise be completed pursuant to this Code section. (j) A deployer shall maintain the most recently completed impact assessment for an automated decision system, all records concerning each impact assessment, and all prior impact assessments, if any, throughout the period of time that the automated decision system is deployed and for at least three years following the final deployment of the automated decision system.
Pending 2025-07-01
H-02.8
O.C.G.A. § 10-16-3(k)
Plain Language
Deployers must conduct at least annual reviews of each deployed automated decision system to affirmatively verify it is not causing algorithmic discrimination. This is a separate, ongoing operational review obligation distinct from the annual impact assessment update requirement in § 10-16-3(e)(2). Reviews may be conducted by the deployer itself or by a contracted third party.
At least annually a deployer, or a third party contracted by the deployer, shall review the deployment of each automated decision system deployed by the deployer to ensure that the automated decision system is not causing algorithmic discrimination.
Pending 2025-07-01
H-02.5
O.C.G.A. § 10-16-3(l)
Plain Language
Deployers must publish all impact assessments completed within the preceding three years on their public websites, in a format prescribed by the Attorney General. This is a public disclosure requirement — not a confidential regulatory filing. It ensures consumers, researchers, and the public can review how deployers have assessed algorithmic discrimination risks.
Deployers shall publish on their public websites all impact assessments completed within the preceding three years in a form and manner prescribed by the Attorney General.
Pending 2026-01-01
H-02.1, H-02.2, H-02.3, H-02.5, H-02.6
Section 15(a)-(b)
Plain Language
Before deploying any automated decision-making system, the employer must complete an initial impact assessment at least 30 days prior to implementation. The assessment must be signed by both the human reviewer(s) responsible for meaningful human review and a qualified independent auditor who has had no involvement with, employment relationship with, or financial interest in the system's developer or deployer within the preceding five years. Subsequent assessments are required at least every two years and before any material change. Each assessment must include, in plain language: (1) system objectives and effectiveness evaluation; (2) a description of algorithms, AI tools, design, and training; (3) testing across seven risk categories — disparate impact on protected characteristics, accessibility, privacy and job quality, cybersecurity, public health/safety, foreseeable misuse, and sensitive data handling; and (4) an employee notification mechanism. The independent auditor conflict-of-interest bar is strict — a five-year lookback covering development involvement, employment, and direct or material indirect financial interest.
(a) An employer seeking to use or apply an automated decision-making system permitted under Section 10 shall conduct an initial impact assessment, 30 days prior to implementation of the automated decision-making system, bearing the signature of: (1) one or more individuals responsible for meaningful human review of the system; and (2) an independent auditor. A person shall not be an independent auditor under this subsection if, at any point in the 5 years preceding the impact assessment, that person: (i) was involved in using, developing, offering, licensing, or deploying the automated decision-making system under review; (ii) had an employment relationship with a developer or deployer that uses, offers, or licenses the automated decision-making system under review; or (iii) had a direct or material indirect financial interest in a developer or deployer that uses, offers, or licenses the automated decision-making system under review. (b) Following the initial impact assessment, additional impact assessments shall be conducted at least once every 2 years and prior to any material changes to the automated decision-making system. Each impact assessment shall include, in plain language: (1) a description of the objectives of the automated decision-making system; (2) an evaluation of the system's ability to achieve those objectives; (3) a description and evaluation of the algorithms, computational models, and artificial intelligence tools used, including: (A) a summary of underlying algorithms and artificial intelligence tools; and (B) a description of the design and training to be used; (4) testing for: (A) disparate impact or discrimination based on protected characteristics, including, but not limited to discriminating against, persons based on their race, color, religious creed, national origin, sex, disability or perceived disability, gender identity, sexual orientation, genetic information, pregnancy or a condition related to pregnancy, ancestry, or status as a veteran and any actions to mitigate any impacts; (B) accessibility limitations for persons with disabilities; (C) privacy and job quality impacts, including wages, hours, and conditions and safeguards; (D) cybersecurity vulnerabilities and safeguards; (E) public health or safety risks; (F) foreseeable misuse and safeguards; and (G) use, storage, and control of sensitive or personal data; and (5) a notification mechanism for employees impacted by the use of the automated decision-making system.
Pending 2026-01-01
Section 15(c)
Plain Language
If an impact assessment reveals the automated system produces discriminatory, biased, or inaccurate outcomes — or fails any of the employee notice, appeals, and alternative review requirements of Section 10(b) — the employer must immediately halt all use of the system and all reliance on its outputs. The employer must also take all steps necessary to remedy the identified harms. This is a mandatory shutdown obligation — there is no cure period or mitigation alternative. The system cannot resume until the deficiencies are resolved.
(c) If an impact assessment finds that an automated decision-making system produces discriminatory, biased, or inaccurate outcomes or fails to meet or negatively impacts any of the measures described in subsection (b) of Section 10, the employer shall immediately cease any use or function of that system and of any information produced by it, and shall take all steps necessary to remedy the discriminatory, biased or inaccurate outcomes produced by the automated decision-making system.
Pending 2027-01-01
H-02.3, H-02.4
Section 10(a)-(c); Section 35(a)-(c)
Plain Language
By January 1, 2027, and annually thereafter, deployers must complete a formal impact assessment for each automated decision tool they use. The assessment must cover the tool's purpose, outputs, data types collected, an analysis of potential adverse impacts across protected characteristics, safeguards against algorithmic discrimination, human oversight mechanisms, and validity evaluation. A new impact assessment must also be performed as soon as feasible following any significant update. Within 60 days of completing each assessment, the deployer must submit it to the Attorney General. Knowing failure to submit triggers administrative fines of up to $10,000 per violation, with each day the tool is used without a submitted assessment counting as a separate violation. Deployers with fewer than 25 employees are exempt unless their tool impacted more than 999 people in the prior calendar year.
(a) On or before January 1, 2027, and annually thereafter, a deployer of an automated decision tool shall perform an impact assessment for any automated decision tool the deployer uses that includes all of the following: (1) a statement of the purpose of the automated decision tool and its intended benefits, uses, and deployment contexts; (2) a description of the automated decision tool's outputs and how they are used to make, or be a controlling factor in making, a consequential decision; (3) a summary of the type of data collected from natural persons and processed by the automated decision tool when it is used to make, or be a controlling factor in making, a consequential decision; (4) an analysis of potential adverse impacts on the basis of sex, race, color, ethnicity, religion, age, national origin, limited English proficiency, disability, veteran status, or genetic information from the deployer's use of the automated decision tool; (5) a description of the safeguards implemented, or that will be implemented, by the deployer to address any reasonably foreseeable risks of algorithmic discrimination arising from the use of the automated decision tool known to the deployer at the time of the impact assessment; (6) a description of how the automated decision tool will be used by a natural person, or monitored when it is used, to make, or be a controlling factor in making, a consequential decision; and (7) a description of how the automated decision tool has been or will be evaluated for validity or relevance. (b) A deployer shall, in addition to the impact assessment required by subsection (a), perform, as soon as feasible, an impact assessment with respect to any significant update. (c) This Section does not apply to a deployer with fewer than 25 employees unless, as of the end of the prior calendar year, the deployer deployed an automated decision tool that impacted more than 999 people per year.

Section 35. (a) Within 60 days after completing an impact assessment required by this Act, a deployer shall provide the impact assessment to the Attorney General. (b) A deployer who knowingly violates this Section shall be liable for an administrative fine of not more than $10,000 per violation in an administrative enforcement action brought by the Attorney General. Each day on which an automated decision tool is used for which an impact assessment has not been submitted as required under this Section shall give rise to a distinct violation of this Section. (c) The Attorney General may share impact assessments with other State entities as appropriate.
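Because each day of use without a submitted assessment is a distinct violation under Section 35(b), exposure accrues linearly. A worked example with a hypothetical 30-day lapse:

```python
# Worked example of Section 35(b) accrual: each day an automated decision
# tool is used without a submitted impact assessment is a distinct
# violation, each fined up to $10,000. The 30-day lapse is hypothetical.
days_without_submission = 30
max_fine_per_violation = 10_000
print(f"maximum exposure: ${days_without_submission * max_fine_per_violation:,}")
# maximum exposure: $300,000
```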
Pending 2027-01-01
Section 30(a)-(c)
Plain Language
Deployers are prohibited from using an automated decision tool that results in algorithmic discrimination — unjustified differential treatment or disparate impacts based on protected characteristics. Beginning January 1, 2028, individuals may bring a private civil action for violations. The plaintiff bears the burden of proving that the tool resulted in algorithmic discrimination and caused actual harm. Available remedies include compensatory damages, declaratory relief, and reasonable attorney's fees and costs. Two carve-outs apply: (1) use of the tool solely for self-testing to identify or prevent discrimination, and (2) acts by private clubs not open to the public under the Civil Rights Act of 1964.
(a) A deployer shall not use an automated decision tool that results in algorithmic discrimination. (b) On and after January 1, 2028, a person may bring a civil action against a deployer for violation of this Section. In an action brought under this subsection, the plaintiff shall have the burden of proof to demonstrate that the deployer's use of the automated decision tool resulted in algorithmic discrimination that caused actual harm to the person bringing the civil action. (c) In addition to any other remedy at law, a deployer that violates this Section shall be liable to a prevailing plaintiff for any of the following: (1) compensatory damages; (2) declaratory relief; and (3) reasonable attorney's fees and costs.
Pending 2026-07-01
H-02.1, H-02.2, H-02.7, H-02.8
IC 22-5-10.4-10(2)(A)-(B)
Plain Language
Before an employer may use any automated decision system output in an employment decision, the system must have completed predeployment testing and validation covering four areas: (i) system efficacy, (ii) compliance with a comprehensive list of federal employment discrimination statutes (Title VII, ADEA, ADA, GINA, EPA, Rehabilitation Act, Pregnant Workers Fairness Act), (iii) absence of discriminatory impact across race, color, religion, sex (including pregnancy, sexual orientation, and gender identity), national origin, age, disability, and genetic information, and (iv) compliance with the NIST AI Risk Management Framework (January 2023) or its successor. Additionally, the system must be independently tested for discriminatory impact or bias at least annually, and the results of that independent testing must be made publicly available. The annual testing requirement is ongoing — not a one-time predeployment check.
An employer may not: (2) use an automated decision system output in making an employment related decision with respect to a covered individual unless: (A) the automated decision system used to generate the automated decision system output has had predeployment testing and validation with respect to: (i) the efficacy of the system; (ii) the compliance of the system with applicable employment discrimination laws, including Title VII of the Civil Rights Act of 1964 (42 U.S.C. 2000e et seq.), the Age Discrimination in Employment Act of 1967 (29 U.S.C. 621 et seq.), Title I of the Americans with Disabilities Act of 1990 (42 U.S.C. 12111 et seq.), Title II of the Genetic Information Nondiscrimination Act of 2008 (42 U.S.C. 2000ff et seq.), Section 6(d) of the Fair Labor Standards Act of 1938 (29 U.S.C. 206(d)), Sections 501 and 505 of the Rehabilitation Act of 1973 (29 U.S.C. 791 and 29 U.S.C. 793), and the Pregnant Workers Fairness Act (42 U.S.C. 2000gg); (iii) the lack of any potential discriminatory impact of the system, including discriminatory impact based on race, color, religion, sex (including pregnancy, sexual orientation, or gender identity), national origin, age, or disability, and genetic information (including family medical history); and (iv) the compliance of the system with the Artificial Intelligence Risk Management Framework released by the National Institute of Standards and Technology on January 26, 2023, or a successor framework; (B) the automated decision system is, not less than annually, independently tested for discriminatory impact described in clause (A)(iii) or potential biases and the results of the test are made publicly available;
Pending 2026-01-01
Section 1(c)(1)(C)-(E),(H)
Plain Language
Health insurers and utilization review organizations must ensure their AI tools do not supplant healthcare provider decision-making, do not discriminate directly or indirectly against enrollees in violation of state or federal law, are applied fairly and equitably consistent with HHS regulations and guidance, and do not directly or indirectly cause harm to enrollees. The non-discrimination requirement encompasses both disparate treatment and disparate impact. The fair application standard incorporates any applicable federal regulations or HHS guidance as a compliance floor. The prohibition on supplanting provider decision-making reinforces that AI is a support tool, not a substitute for clinical judgment.
Each health insurer and utilization review organization shall ensure that the artificial intelligence, algorithm or other software tool used to review and approve, modify and delay or deny requests by providers: (C) does not supplant healthcare provider decision-making; (D) does not discriminate, directly or indirectly, against enrollees in violation of state or federal law; (E) is fairly and equitably applied, in accordance with any applicable regulations or guidance issued by the United States department of health and human services; (H) does not directly or indirectly cause harm to the enrollee.
Pre-filed 2025-07-07
H-02.1
Chapter 93M, Section 2(a)
Plain Language
Developers of AI systems available in Massachusetts must exercise reasonable care to identify, mitigate, and disclose risks of algorithmic discrimination. This is a general duty of care — not limited to high-risk systems — requiring developers to proactively assess whether their systems produce differential treatment or impact across a broad list of protected characteristics. The duty encompasses identification, mitigation, and disclosure, making it a continuing obligation throughout the system lifecycle.
(a) Duty of Care: Developers must use reasonable care to identify, mitigate, and disclose risks of algorithmic discrimination.
Pre-filed 2025-07-07
H-02.3, H-02.8
Chapter 93M, Section 3(b)
Plain Language
Deployers must conduct an annual impact assessment for every high-risk AI system they operate, covering the system's purpose and intended use, the data categories it processes and outputs it generates, and potential discrimination risks along with mitigation measures. Impact assessments must also be updated whenever a substantial modification is made to the system — this is in addition to the annual cadence, not a substitute for it. The state will provide templates to standardize and reduce the compliance burden. Note that the AG has rulemaking authority under Section 7 to further define impact assessment requirements.
(b) Impact Assessments: (1) Deployers must complete an annual impact assessment for each high-risk AI system, including: (i) The purpose and intended use of the system; (ii) Data categories used and outputs generated; (iii) Potential risks of discrimination and mitigation measures. (2) Impact assessments must be updated after any substantial modification to the system. State-provided templates for these assessments will be made available to reduce compliance burdens.
Pre-filed 2025-07-17
H-02.1, H-02.2, H-02.3
Ch. 93M § 2(a)-(b)
Plain Language
Developers of high-risk AI systems must exercise reasonable care to protect consumers from known or reasonably foreseeable algorithmic discrimination risks arising from intended and contracted uses. They must provide deployers with comprehensive documentation including: foreseeable uses and misuses, training data summaries, known limitations and discrimination risks, pre-deployment evaluation methods for bias, data governance measures, intended outputs, mitigation steps taken, and guidance on human monitoring. A rebuttable presumption of reasonable care applies in AG enforcement actions if the developer complied with these requirements. Trade secrets and information protected by law need not be disclosed.
(a) Not later than 6 months after the effective date of this act, a developer of a high-risk artificial intelligence system shall use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination arising from the intended and contracted uses of the high-risk artificial intelligence system. In any enforcement action brought not later than 6 months after the effective date of this act, by the attorney general pursuant to section 6, there is a rebuttable presumption that a developer used reasonable care as required under this section if the developer complied with this section and any additional requirements or obligations as set forth in rules promulgated by the attorney general pursuant to section 7. (b) Not later than 6 months after the effective date of this act, and except as provided in subsection (f) of this section, a developer of a high-risk artificial intelligence system shall make available to the deployer or other developer of the high-risk artificial intelligence system: (1) a general statement describing the reasonably foreseeable uses and known harmful or inappropriate uses of the high-risk artificial intelligence system; (2) documentation disclosing: (i) high-level summaries of the type of data used to train the high-risk artificial intelligence system; (ii) known or reasonably foreseeable limitations of the high-risk artificial intelligence system, including known or reasonably foreseeable risks of algorithmic discrimination arising from the intended uses of the high-risk artificial intelligence system; (iii) the purpose of the high-risk artificial intelligence system; (iv) the intended benefits and uses of the high-risk artificial intelligence system; and (v) all other information necessary to allow the deployer to comply with the requirements of section 3; (3) documentation describing: (i) how the high-risk artificial intelligence system was evaluated for performance and mitigation of algorithmic discrimination before the high-risk artificial intelligence system was offered, sold, leased, licensed, given, or otherwise made available to the deployer; (ii) the data governance measures used to cover the training datasets and the measures used to examine the suitability of data sources, possible biases, and appropriate mitigation; (iii) the intended outputs of the high-risk artificial intelligence system; (iv) the measures the developer has taken to mitigate known or reasonably foreseeable risks of algorithmic discrimination that may arise from the reasonably foreseeable deployment of the high-risk artificial intelligence system; and (v) how the high-risk artificial intelligence system should be used, not be used, and be monitored by an individual when the high-risk artificial intelligence system is used to make, or is a substantial factor in making, a consequential decision; and (4) any additional documentation that is reasonably necessary to assist the deployer in understanding the outputs and monitor the performance of the high-risk artificial intelligence system for risks of algorithmic discrimination. (f) nothing in subsections (b) to (e) of this section requires a developer to disclose a trade secret, information protected from disclosure by state or federal law, or information that would create a security risk to the developer.
Pre-filed 2025-07-17
H-02.3
Ch. 93M § 3(a)
Plain Language
Deployers of high-risk AI systems must exercise reasonable care to protect consumers from known or reasonably foreseeable algorithmic discrimination risks. A rebuttable presumption of compliance applies in attorney general enforcement actions if the deployer complied with all requirements of this section plus any AG-promulgated rules. This establishes the overarching deployer duty — the specific compliance mechanisms are detailed in Sections 3(b)-(e).
(a) Not later than 6 months after the effective date of this act, a deployer of a high-risk artificial intelligence system shall use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination. In any enforcement action brought not later than 6 months after the effective date of this act, by the attorney general pursuant to section 6, there is a rebuttable presumption that a deployer of a high-risk artificial intelligence system used reasonable care as required under this section if the deployer complied with this section and any additional requirements or obligations as set forth in rules promulgated by the attorney general pursuant to section 7.
Pre-filed 2025-07-17
H-02.3, H-02.8, H-02.10
Ch. 93M § 3(c)
Plain Language
Deployers must complete a comprehensive impact assessment for each high-risk AI system before deployment and at least annually thereafter, plus within 90 days of any intentional and substantial modification. The assessment must cover: system purpose and use cases, algorithmic discrimination risk analysis with mitigation steps, data input/output categories, customization data, performance metrics, transparency measures, and post-deployment monitoring. A single assessment may cover a comparable set of systems. Impact assessments completed under other applicable laws count if reasonably similar in scope. All assessments and records must be retained for at least three years after final deployment. Additionally, deployers must conduct at least annual reviews to affirmatively verify each system is not causing algorithmic discrimination. The small-deployer exemption under Section 3(f) applies. The cadence and retention rules reduce to straightforward date arithmetic; a sketch follows the statutory text below.
(c) (1) except as provided in subsections (c)(4), (c)(5), and (f) of this section: (i) a deployer, or a third party contracted by the deployer, that deploys a high-risk artificial intelligence system not later than 6 months after the effective date of this act, shall complete an impact assessment for the high-risk artificial intelligence system; and (ii) Not later than 6 months after the effective date of this act, a deployer, or a third party contracted by the deployer, shall complete an impact assessment for a deployed high-risk artificial intelligence system at least annually and within ninety days after any intentional and substantial modification to the high-risk artificial intelligence system is made available. (2) an impact assessment completed pursuant to this subsection (c) must include, at a minimum, and to the extent reasonably known by or available to the deployer: (i) a statement by the deployer disclosing the purpose, intended use cases, and deployment context of, and benefits afforded by, the high-risk artificial intelligence system; (ii) an analysis of whether the deployment of the high-risk artificial intelligence system poses any known or reasonably foreseeable risks of algorithmic discrimination and, if so, the nature of the algorithmic discrimination and the steps that have been taken to mitigate the risks; (iii) a description of the categories of data the high-risk artificial intelligence system processes as inputs and the outputs the high-risk artificial intelligence system produces; (iv) if the deployer used data to customize the high-risk artificial intelligence system, an overview of the categories of data the deployer used to customize the high-risk artificial intelligence system; (v) any metrics used to evaluate the performance and known limitations of the high-risk artificial intelligence system; (vi) a description of any transparency measures taken concerning the high-risk artificial intelligence system, including any measures taken to disclose to a consumer that the high-risk artificial intelligence system is in use when the high-risk artificial intelligence system is in use; and (vii) a description of the post-deployment monitoring and user safeguards provided concerning the high-risk artificial intelligence system, including the oversight, use, and learning process established by the deployer to address issues arising from the deployment of the high-risk artificial intelligence system. (3) in addition to the information required under subsection (3)(b) of this section, an impact assessment completed pursuant to this subsection (c) following an intentional and substantial modification to a high-risk artificial intelligence system not later than 6 months after the effective date of this act, must include a statement disclosing the extent to which the high-risk artificial intelligence system was used in a manner that was consistent with, or varied from, the developer's intended uses of the high-risk artificial intelligence system. (4) a single impact assessment may address a comparable set of high-risk artificial intelligence systems deployed by a deployer. (5) if a deployer, or a third party contracted by the deployer, completes an impact assessment for the purpose of complying with another applicable law or regulation, the impact assessment satisfies the requirements established in this subsection (c) if the impact assessment is reasonably similar in scope and effect to the impact assessment that would otherwise be completed pursuant to this subsection (c). 
(6) a deployer shall maintain the most recently completed impact assessment for a high-risk artificial intelligence system as required under this subsection (c), all records concerning each impact assessment, and all prior impact assessments, if any, for at least three years following the final deployment of the high-risk artificial intelligence system. (7) Not later than 6 months after the effective date of this act, and at least annually thereafter, a deployer, or a third party contracted by the deployer, must review the deployment of each high-risk artificial intelligence system deployed by the deployer to ensure that the high-risk artificial intelligence system is not causing algorithmic discrimination.
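Operationally, the cadence and retention rules in § 3(c) are date arithmetic: an annual reassessment clock, a 90-day post-modification clock, and a three-year retention window after final deployment. The following is a minimal sketch under stated assumptions — the function names are illustrative, 365-day years stand in for "annually," and a real tracker would handle multiple modifications and multiple systems:

    from datetime import date, timedelta

    ANNUAL = timedelta(days=365)         # "at least annually" (Sec. 3(c)(1)(ii))
    POST_MOD = timedelta(days=90)        # 90 days after a substantial modification
    RETENTION = timedelta(days=3 * 365)  # "at least three years" (Sec. 3(c)(6))

    def next_assessment_due(last_assessment: date,
                            last_modification: date | None = None) -> date:
        """Earlier of the annual deadline and the post-modification deadline."""
        due = last_assessment + ANNUAL
        if last_modification is not None:
            due = min(due, last_modification + POST_MOD)
        return due

    def retention_ends(final_deployment: date) -> date:
        """Assessments and records must be kept at least until this date."""
        return final_deployment + RETENTION

    # Assessed 2026-03-01, substantially modified 2026-08-15:
    print(next_assessment_due(date(2026, 3, 1), date(2026, 8, 15)))  # 2026-11-13
    print(retention_ends(date(2028, 6, 30)))                         # 2031-06-30

The sketch takes the earlier of the two deadlines because the annual clock and the 90-day modification clock run independently under § 3(c)(1).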
Pending 2025-10-08
H-02.1
G.L. c. 176O, § 12(g)(1)(E)-(F)
Plain Language
Carriers and utilization review organizations must ensure that AI tools used in utilization review do not discriminate — directly or indirectly — against any insured in violation of state or federal antidiscrimination law, including Massachusetts Chapter 151B. The tools must also be applied fairly and equitably, in accordance with applicable state and federal agency regulations and guidance. This is both a non-discrimination obligation and a fairness standard that applies on an ongoing basis.
(E) The use of the artificial intelligence, algorithm, or other software tool does not discriminate, directly or indirectly, against any insured in violation of state or federal law, including but not limited to chapter 151B. (F) The artificial intelligence, algorithm, or other software tool is fairly and equitably applied, including in accordance with any applicable regulations and guidance issued by state and federal agencies.
Pre-filed 2025-01-14
H-02.3, H-02.6
Chapter 149B, § 2(j)
Plain Language
Before using electronic monitoring (alone or with an ADS), an employer must have an independent impact assessment conducted. The assessment must be completed within one year before use (or within six months of the statute's effective date for existing monitoring). The auditor must be independent with no financial or legal conflicts. The assessment must evaluate data protection practices, identify allowable purposes, describe potential legal violations and mitigation steps, and assess negative impacts on employee privacy and job quality. The five-year look-back independence requirement for auditors is unusually strict.
(j) It shall be unlawful for an employer to use electronic monitoring, alone or in conjunction with an automated employment decision system, unless the employer's proposed use of electronic monitoring has been the subject of an impact assessment. Such impact assessments must: (i) be conducted no more than one year prior to the use of such electronic monitoring, or where the electronic monitoring began before the effective date of this article, within six months of the effective date of this article; (ii) be conducted by an independent and impartial party with no financial or legal conflicts of interest; (iii) evaluate whether the data protection and security practices surrounding the electronic monitoring are consistent with applicable law and cybersecurity industry best practices; (iv) identify which allowable purpose(s) described in this chapter; (vi) consider and describe any other ways in which the electronic monitoring could result in a violation of applicable law and, for any finding that a violation of law may occur, any necessary or appropriate steps to prevent such violation of law; and (vii) consider and describe whether the electronic monitoring may negatively impact employees' privacy and job quality, including wages, hours, and working conditions.
Pre-filed 2025-01-14
H-02.3, H-02.6, H-02.7, H-02.8
Chapter 149B, § 3(a)-(b)
Plain Language
Before using any ADS for employment decisions, an employer must have a comprehensive independent impact assessment conducted. The assessment must cover thirteen enumerated elements including: modeling techniques and attributes, scientific validity, proxy analysis for protected classes, training data disparities, output disparate impact, disability accessibility, post-deployment adverse impact risks, least-discriminatory-method analysis, legal compliance, privacy/job quality impacts, and a catch-all discrimination risk assessment. The completed assessment must be submitted to the Department of Labor Standards within 60 days for inclusion in a public registry, and distributed to affected employees. Annual follow-up assessments are required for as long as the tool remains in use, evaluating any changes in validity or disparate impact.
a) It shall be unlawful for an employer to use an automated employment decision tool for an employment decision, alone or in conjunction with electronic monitoring, unless such tool has been the subject of an impact assessment. Impact assessments must: (i) be conducted no more than one year prior to the use of such tool, or where the tool was in use by the employer before the effective date of this article, within six months of the effective date of this article; (ii) be conducted by an independent and impartial party with no financial or legal conflicts of interest; (iii) identify and describe the attributes and modeling techniques that the tool uses to produce outputs; (iv) evaluate whether those attributes and techniques are a scientifically valid means of evaluating an employee or candidate's performance or ability to perform the essential functions of a role, and whether those attributes may function as a proxy for belonging to a protected class under chapter 151B or any other applicable law; (v) consider, identify, and describe any disparities in the data used to train or develop the tool and describe how those disparities may result in a disparate impact on persons based on their race, color, religious creed, national origin, sex, gender identity, sexual orientation, genetic information, pregnancy or a condition related to said pregnancy including, but not limited to, lactation or the need to express breast milk for a nursing child, ancestry or status as a veteran, and what actions may be taken by the employer or vendor of the tool to reduce or remedy any disparate impact; (vi) consider, identify, and describe any outputs produced by the tool that may result in a disparate impact on persons based on their race, color, religious creed, national origin, sex, gender identity, sexual orientation, genetic information, pregnancy or a condition related to said pregnancy including, but not limited to, lactation or the need to express breast milk for a nursing child, ancestry or status as a veteran, and what actions may be taken by the employer or vendor of the tool to reduce or remedy that disparate impact; (vii) evaluate whether the use of the tool may limit accessibility for persons with disabilities, or for persons with any specific disability, and what actions may be taken by the employer or vendor of the tool to reduce or remedy the concern; (viii) consider and describe potential sources of adverse impact against individuals or groups based on race, color, religious creed, national origin, sex, gender identity, sexual orientation, genetic information, pregnancy or a condition related to said pregnancy including, but not limited to, lactation or the need to express breast milk for a nursing child, ancestry or status as a veteran that may arise after the tool is deployed; (ix) identify and describe any other assessment of risks of discrimination or a disparate impact of the tool on individuals or groups based on race, color, religious creed, national origin, sex, gender identity, sexual orientation, genetic information, pregnancy or a condition related to said pregnancy including, but not limited to, lactation or the need to express breast milk for a nursing child, ancestry or status as a veteran that arise over the course of the impact assessment, and what actions may be taken to reduce or remedy that risk; (x) for any finding of a disparate impact or limit on accessibility, evaluate whether the data set, attribute, or feature of the tool at issue is the least discriminatory method of 
assessing a candidate's performance or ability to perform job functions; (xi) consider and describe any other ways in which the tool could result in a violation of applicable law and, for any finding that a violation of law may occur, any necessary or appropriate steps to prevent such violation of law; (xii) consider and describe whether use of the tool may negatively impact employees' privacy and job quality, including wages, hours, and working conditions; and (xiii) be submitted in its entirety or an accessible summary form to the department for inclusion in a public registry of such impact assessments within sixty days of completion and distributed to employees who may be subject to the tool. (b) An employer shall conduct or commission subsequent impact assessments each year that the tool is in use to assist or replace employment decisions. Subsequent impact assessments shall comply with the requirements of paragraph (a) of this section, and shall assess and describe any change in the validity or disparate impact of the tool.
Pre-filed 2025-01-14
H-02.3
Chapter 149B, § 3(e)-(f)
Plain Language
If an impact assessment finds disparate impact or accessibility limitations, the employer must immediately cease using the tool until remediation is complete. The employer must take reasonable steps to remedy the issue and document those steps in writing to employees, the auditor, and the Department. If the employer disputes the finding or believes its remediation is sufficient, it must provide a written explanation showing the tool is the least discriminatory method available. Separately, it is unlawful for any auditor, vendor, or employer to manipulate, conceal, or misrepresent impact assessment results — this is an independent prohibition that applies to all three parties.
(e) If an initial or subsequent impact assessment concludes that a data set, feature, or application of the automated employment decision tool results in a disparate impact on individuals or groups based on race, color, religious creed, national origin, sex, gender identity, sexual orientation, genetic information, pregnancy or a condition related to said pregnancy including, but not limited to, lactation or the need to express breast milk for a nursing child, ancestry or status as a veteran, or unlawfully limits accessibility for persons with disabilities, an employer shall refrain from using the tool until it: (i) takes reasonable and appropriate steps to remedy that disparate impact or limit on accessibility and describe in writing to employees, the auditor, and the department what steps were taken; and (ii) if the employer believes the impact assessment finding of a disparate impact or limit on accessibility is erroneous, or that the steps taken in accordance with subparagraph (i) of this paragraph sufficiently address those findings such that the tool may be lawfully used in accordance with this article, describes in writing to employees, the auditor, and the department how the data set, feature, or application of the tool is the least discriminatory method of assessing an employee's performance or ability to complete essential functions of a position. (f) It shall be unlawful for an independent auditor, vendor, or employer to manipulate, conceal, or misrepresent the results of an impact assessment.
Pre-filed 2025-01-10
Ch. 176O § 12(g)(1)(D)-(F)
Plain Language
Carriers must ensure that AI utilization review tools do not supplant health care provider decision-making, do not discriminate directly or indirectly against any insured in violation of state or federal anti-discrimination law (including Massachusetts Chapter 151B), and are applied fairly and equitably in accordance with applicable regulations and agency guidance. The non-discrimination obligation is broad — covering both direct and indirect (disparate impact) discrimination — and is referenced to existing anti-discrimination frameworks rather than creating a standalone bias testing regime. The fair-and-equitable-application standard incorporates any future state or federal agency guidance as a continuing compliance benchmark.
(D) The artificial intelligence, algorithm, or other software tool does not supplant health care provider decision-making. (E) The use of the artificial intelligence, algorithm, or other software tool does not discriminate, directly or indirectly, against any insured in violation of state or federal law, including but not limited to chapter 151B. (F) The artificial intelligence, algorithm, or other software tool is fairly and equitably applied, including in accordance with any applicable regulations and guidance issued by state and federal agencies.
Pending 2026-10-01
H-02.1
Ins. § 15-10B-05.1(c)(5)-(6)
Plain Language
Covered entities must ensure their AI utilization review tools do not result in unfair discrimination and are applied fairly and equitably. Compliance must align with applicable HHS regulations and guidance. While the bill does not prescribe a specific testing methodology, the obligation to ensure non-discrimination implicitly requires some form of monitoring or testing to verify the tool's outputs are not discriminatory across patient populations.
(5) the use of an artificial intelligence, algorithm, or other software tool does not result in unfair discrimination; (6) an artificial intelligence, algorithm, or other software tool is fairly and equitably applied, including in accordance with any applicable regulations and guidance issued by the federal Department of Health and Human Services;
Pending 2026-01-01
24-A MRSA §4304(8)(A)(2)-(3)
Plain Language
AI-derived utilization review and medical review determinations must not directly or indirectly discriminate against enrollees on an extensive list of protected characteristics, including race, color, religion, national origin, ancestry, age, sex, gender, gender identity, gender expression, sexual orientation, present or predicted disability, expected length of life, degree of medical dependency, quality of life, or other health conditions. Determinations must also be fairly and equitably applied across all enrollees. The prohibition on 'indirectly' discriminating suggests that proxy discrimination and disparate impact are covered, not only intentional discrimination. The protected class list is notably broader than typical employment or civil rights statutes — it includes predicted disability, expected length of life, degree of medical dependency, and quality of life.
Determinations derived from the use of artificial intelligence, including algorithms and other software tools, must: (2) Not directly or indirectly discriminate against an enrollee on the basis of race, color, religion, national origin, ancestry, age, sex, gender, gender identity, gender expression, sexual orientation, present or predicted disability, expected length of life, degree of medical dependency, quality of life or other health conditions; (3) Be fairly and equitably applied;
Pending 2026-02-24
H-02.3, H-02.4, H-02.5, H-02.6, H-02.8
Sec. 9(1)-(3)
Plain Language
Before using any automated decision tool or electronic monitoring tool, employers must commission a comprehensive impact assessment conducted by an independent, conflict-free third party. The assessment must be completed within one year before implementation (or within 6 months of the act's effective date for tools already in use). It must evaluate the tool's objectives, algorithms, data, cybersecurity, potential biases across qualified characteristics, proxy discrimination risks under the Elliot-Larsen Civil Rights Act, disability accessibility, training data disparities, output disparities, privacy impacts, and job-quality effects. For any finding of disparate impact, the assessment must evaluate whether the tool uses the least discriminatory method available. Within 60 days of completion, the employer must submit the assessment to the Department of Labor and Economic Opportunity for inclusion in a public registry and distribute it to affected covered individuals. Annual reassessments are required for each year the tool remains in use.
Sec. 9. (1) Before an employer uses an automated decisions tool under section 4 or an electronic monitoring tool under section 5, the employer shall conduct an impact assessment of the tool that meets all of the following requirements: (a) Evaluates the tool's objectives, algorithms, data, cybersecurity vulnerabilities, and potential biases, including, but not limited to, discriminatory outcomes based on race, gender, or disability. (b) Is conducted 1 year before the tool is implemented, or, for a tool already in use on the effective date of this act, not more than 6 months after the effective date of this act. (c) Is conducted by an independent and impartial third party with no financial or legal conflicts of interests related to the use of the tool. (d) Identifies and describes the attributes and modeling techniques that the tool uses to produce outputs. (e) Evaluates whether the attributes and modeling techniques described in subdivision (d) are a scientifically valid means of evaluating a covered individual's performance or ability to perform the essential functions of a role, and whether those attributes may function as a proxy for belonging to a protected class under the Elliot-Larsen civil rights act, 1976 PA 453, MCL 37.2101 to 37.2804. (f) Considers, identifies, and describes both of the following that may result in a disparate impact on a covered individual based on the covered individual's qualified characteristic, and what actions may be taken by the employer to reduce or remedy any disparate impact. (i) Any disparities in the data used to train or develop the tool. (ii) Any outputs produced by the tool. (g) Evaluates whether the use of the tool may limit accessibility for covered individuals with disabilities, or for covered individuals with any specific disability, and what actions may be taken by the employer to reduce or remedy the limit on accessibility. (h) Considers and describes potential sources of adverse impact against covered individuals or groups based on a qualified characteristic that may arise after the tool is implemented. (i) Identifies and describes any other assessment of risks of discrimination or a disparate impact of the tool on covered individuals or groups based on a qualified characteristic, and what actions may be taken to reduce or remedy that risk. (j) For any finding of a disparate impact or limit on accessibility, evaluates whether the data set, attribute, or feature of the tool at issue is the least discriminatory method of assessing a covered individual's performance or ability to perform job functions. (k) Considers and describes any other ways in which the tool could result in a violation of applicable law and, for any finding that a violation of law may occur, any necessary or appropriate steps to prevent a violation. (l) Considers and describes whether use of the tool may negatively affect a covered individual's privacy or job quality, including wages, hours, and working conditions. (2) Not more than 60 days after an employer completes an assessment, the employer shall do both of the following: (a) Submit the assessment in its entirety or in an accessible summary form to the department for the department to include in a public registry of impact assessments. (b) Distribute the assessment to covered individuals who may be subject to the tool. (3) An employer shall conduct or commission subsequent impact assessments each year in which the electronic monitoring tool or automated decisions tool is in use. 
Subsequent impact assessments must comply with the requirements of subsection (1), as applicable, and must assess and describe any change in the validity or disparate impact of the tool.
Pending 2025-08-01
H-02.1
Minn. Stat. § 363A.08, subd. 9(b)(1)
Plain Language
Employers may not use artificial intelligence in any employment decision — including recruitment, hiring, promotion, discharge, discipline, training selection, and terms or conditions of employment — if that AI has the effect of subjecting employees or applicants to discrimination based on any protected characteristic under the MHRA. This is a disparate impact standard: the employer need not intend to discriminate; discriminatory effect is sufficient. The protected characteristics include race, color, creed, religion, national origin, sex, gender identity, marital status, public assistance status, familial status, local commission membership, disability, sexual orientation, and age. Employers should conduct bias testing across these characteristics before deploying AI in employment contexts; a minimal selection-rate screen of the kind such testing might start from is sketched after the statutory text below.
(b) It is an unfair employment practice, with respect to recruitment, hiring, promotion, renewal of employment, selection for training or apprenticeship, discharge, discipline, tenure, or the terms, privileges, or conditions of employment, for an employer to: (1) use artificial intelligence that has the effect of subjecting an employee or applicant for employment to discrimination because of race, color, creed, religion, national origin, sex, gender identity, marital status, status with regard to public assistance, familial status, membership or activity in a local commission, disability, sexual orientation, or age;
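The Minnesota provision states an effects test but prescribes no testing methodology. One common first-pass screen, borrowed from federal employment practice (the EEOC four-fifths rule, which this bill does not itself mandate), compares selection rates across groups and flags any group whose rate falls below 80% of the most-favored group's rate. A minimal sketch with hypothetical group labels:

    from collections import defaultdict

    def selection_rates(outcomes):
        """outcomes: iterable of (group, selected) pairs; selected is a bool."""
        n, k = defaultdict(int), defaultdict(int)
        for group, selected in outcomes:
            n[group] += 1
            k[group] += int(selected)
        return {g: k[g] / n[g] for g in n}

    def impact_ratios(rates):
        """Each group's selection rate over the highest group's rate."""
        top = max(rates.values())
        return {g: r / top for g, r in rates.items()}

    rates = selection_rates([("group_a", True), ("group_a", True), ("group_a", False),
                             ("group_b", True), ("group_b", False), ("group_b", False)])
    print(impact_ratios(rates))  # {'group_a': 1.0, 'group_b': 0.5} -> review group_b

A ratio below 0.8 is a screening signal, not a legal conclusion; significance testing and job-relatedness analysis would follow before any finding of discriminatory effect.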
Pending 2025-10-01
Section 1(1)(e)-(f)
Plain Language
AI tools used in utilization review must not discriminate directly or indirectly against enrollees in violation of state or federal law, including Montana's anti-discrimination statute (§ 49-2-309, which prohibits discrimination in insurance). The tools must also be fairly and equitably applied, in compliance with applicable HHS regulations and guidance. This imposes both a non-discrimination obligation tied to existing protected-class frameworks and a broader fairness requirement that incorporates evolving federal guidance. Issuers should monitor HHS rulemaking for AI-specific fairness standards.
(e) the use of the artificial intelligence, algorithm, or other software tool does not discriminate, directly or indirectly, against enrollees in violation of state or federal law, including 49-2-309; (f) the artificial intelligence, algorithm, or other software tool is fairly and equitably applied, including in accordance with any applicable regulations and guidance issued by the federal department of health and human services;
Pending 2026-02-01
H-02.1
Sec. 3(1)(a)-(b)
Plain Language
Developers of high-risk AI systems must exercise reasonable care to protect consumers from known risks of algorithmic discrimination arising from the system's intended and contracted uses. Compliance with all developer obligations under the Act creates a rebuttable presumption that reasonable care was used, but this presumption applies only to AG enforcement actions. Self-testing for bias and diversity-expanding uses are carved out of the definition of algorithmic discrimination.
(1)(a) On and after February 1, 2026, a developer of a high-risk artificial intelligence system shall use reasonable care to protect consumers from any known risks of algorithmic discrimination arising from the intended and contracted uses of the high-risk artificial intelligence system. (b) In any enforcement action brought on or after February 1, 2026, by the Attorney General pursuant to section 7 of this act, there is a rebuttable presumption that a developer used reasonable care as required under this section if the developer complied with this section.
Pending 2026-02-01
H-02.1
Sec. 4(1)(a)-(b)
Plain Language
Deployers of high-risk AI systems must exercise reasonable care to protect consumers from each known risk of algorithmic discrimination. Full compliance with all deployer obligations under Section 4 creates a rebuttable presumption of reasonable care, but only in AG enforcement actions. This is the overarching deployer duty — the specific compliance obligations that follow (risk management, impact assessments, consumer notifications) flesh out what reasonable care requires in practice.
(1)(a) On and after February 1, 2026, a deployer of any high-risk artificial intelligence system shall use reasonable care to protect consumers from each known risk of algorithmic discrimination. (b) In any enforcement action brought on or after February 1, 2026, by the Attorney General pursuant to section 7 of this act, there is a rebuttable presumption that a deployer of a high-risk artificial intelligence system used reasonable care as required under this section if the deployer complied with this section.
Pending 2026-02-01
H-02.3, H-02.10
Sec. 4(3)(a)-(f)
Plain Language
Deployers must complete an impact assessment for each high-risk AI system before deployment and within 90 days of any intentional and substantial modification. The assessment must cover: system purpose and intended uses, deployment context, benefits, analysis of algorithmic discrimination risks and mitigations, data input/output summaries, customization data, performance metrics and known limitations, transparency measures, and post-deployment monitoring and user safeguards. Post-modification assessments must also disclose how actual use compared to developer-intended use. A single assessment may cover comparable systems. Assessments completed for other regulatory compliance satisfy this requirement if reasonably similar in scope. Deployers must retain current assessments, all records, and prior assessments for at least three years after final deployment. Small deployer exemption applies under Sec. 4(6) conditions.
(3)(a) Except as otherwise provided in this subsection or subsection (6) of this section: (i) An impact assessment shall be completed for each high-risk artificial intelligence system deployed on or after February 1, 2026. Such impact assessment shall be completed by the deployer or by a third party contracted by the deployer; and (ii) On and after February 1, 2026, for each deployed high-risk artificial intelligence system, a deployer or a third party contracted by the deployer shall complete an impact assessment within ninety days after any intentional and substantial modification to such high-risk artificial intelligence system is made available. (b) An impact assessment completed pursuant to this subsection shall include to the extent reasonably known by or available to the deployer: (i) A statement by the deployer disclosing: (A) The purpose of the high-risk artificial intelligence system; (B) Any intended-use case for the high-risk artificial intelligence system; (C) The deployment context of the high-risk artificial intelligence system; and (D) Any benefit afforded by the high-risk artificial intelligence system; (ii) An analysis of whether the deployment of the high-risk artificial intelligence system poses any known risk of algorithmic discrimination and, if so, the nature of the algorithmic discrimination and the steps that have been taken to mitigate any such risk; (iii) A high-level summary of the categories of data the high-risk artificial intelligence system processes as inputs and the outputs the high-risk artificial intelligence system produces; (iv) If the deployer used data to customize the high-risk artificial intelligence system, an overview of the categories of data the deployer used to customize the high-risk artificial intelligence system; (v) Any metric used to evaluate the performance and any known limitation of the high-risk artificial intelligence system; (vi) A description of any transparency measure taken concerning the high-risk artificial intelligence system, including any measure taken to disclose to a consumer when the high-risk artificial intelligence system is in use; and (vii) A description of each postdeployment monitoring and user safeguard provided concerning the high-risk artificial intelligence system, including the oversight, use, and learning process established by the deployer to address any issue that arises from the deployment of the high-risk artificial intelligence system. (c) Any impact assessment completed pursuant to this subsection following an intentional and substantial modification to a high-risk artificial intelligence system on or after February 1, 2026, shall include a statement that discloses the extent to which the high-risk artificial intelligence system was used in a manner that was consistent with or varied from any use of the high-risk artificial intelligence system intended by the developer. (d) A single impact assessment may address a comparable set of high-risk artificial intelligence systems deployed by a deployer. (e) Any impact assessment completed to comply with another applicable law or regulation by a deployer or by a third party contracted by the deployer shall satisfy this subsection if such impact assessment is reasonably similar in scope and effect to the impact assessment that would otherwise be completed pursuant to this subsection. 
(f) A deployer shall maintain: (i) The most recently completed impact assessment required under this subsection for each high-risk artificial intelligence system of the deployer; (ii) Each record concerning each such impact assessment; and (iii) For at least three years following the final deployment of each high-risk artificial intelligence system, each prior impact assessment, if any, and each record concerning such impact assessment.
Pre-filed 2026-09-28
H-02.3, H-02.4
Section 7(a)
Plain Language
All high-risk AI systems used in employment, housing, healthcare, education, criminal justice, or public services must undergo algorithmic impact assessments before deployment. Uniquely, the impact assessments are performed by the state's Office of Information Technology (OIT) — not by the deployer or developer — in a manner OIT will determine. This means entities deploying high-risk AI systems must submit their systems to OIT for assessment prior to going live. The statute does not specify the assessment methodology, timeline, or what documentation the deployer must provide to OIT, leaving those details to OIT rulemaking.
High-risk AI systems implemented in New Jersey shall: a. Undergo algorithmic impact assessments prior to deployment. The Office of Information Technology in, but not of, the Department of the Treasury, shall perform the impact assessments, in a manner to be determined by the Office of Information Technology.
Pre-filed 2026-02-02
H-02.4
Section 1.f.
Plain Language
The Department of Labor and Workforce Development must annually analyze the demographic data submitted by employers and report to the Governor and Legislature whether the data reveals racial bias in AI-enabled hiring. While this provision directly obligates the Department rather than employers, employers should be aware that their submitted data will be analyzed for racial bias indicators and the results will be made public through legislative reporting. This creates an indirect accountability mechanism — if racial bias patterns emerge, employers may face regulatory scrutiny or legislative action. One statistical screen such an analysis might use is sketched after the statutory text below.
f. The Department of Labor and Workforce Development shall analyze the data reported in accordance with subsection e. of this act and report to the Governor and the Legislature, as provided pursuant to section 2 of P.L.1991, c.164 (C.52:14-19.1), each year whether the data discloses a racial bias in the use of artificial intelligence.
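The bill leaves the Department's analytical method unspecified. One conventional screen for whether hiring data "discloses a racial bias" is a two-proportion test on selection outcomes; the sketch below is a pure-Python normal approximation with illustrative numbers, and a real analysis would also control for job category, applicant pool, and similar factors:

    from math import sqrt, erfc

    def two_proportion_ztest(sel_a, n_a, sel_b, n_b):
        """Two-sided test that selection rates differ between two groups.
        Returns (z, p); a small p suggests the gap is unlikely to be chance."""
        p_a, p_b = sel_a / n_a, sel_b / n_b
        pooled = (sel_a + sel_b) / (n_a + n_b)
        se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
        z = (p_a - p_b) / se
        return z, erfc(abs(z) / sqrt(2))  # two-sided normal tail

    # 120 of 400 group-A applicants selected vs. 60 of 400 group-B applicants
    z, p = two_proportion_ztest(120, 400, 60, 400)
    print(f"z={z:.2f}, p={p:.2e}")  # z=5.08, p far below 0.05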
Pending 2026-05-13
H-02.3
Section 7(a)
Plain Language
All high-risk AI systems used in employment, housing, healthcare, education, criminal justice, or public services in New Jersey must undergo algorithmic impact assessments before deployment. Unlike most jurisdictions where the developer or deployer conducts the assessment, New Jersey assigns this responsibility to the Office of Information Technology within the Department of the Treasury. The specific assessment methodology will be determined by OIT. The practical compliance obligation for deployers is to submit their systems for assessment and not deploy until the assessment is complete. Violation carries civil penalties under section 8.
High-risk AI systems implemented in New Jersey shall: a. Undergo algorithmic impact assessments prior to deployment. The Office of Information Technology in, but not of, the Department of the Treasury, shall perform the impact assessments, in a manner to be determined by the Office of Information Technology.
Pending 2026-05-13
H-02.1
Section 9(a)-(b)
Plain Language
The Attorney General has authority to investigate complaints about AI-driven discrimination (AI systems producing biased outputs against protected classes) and unreasonable AI workplace surveillance (AI monitoring of employee computer usage and physical movements). Enforcement is through the Law Against Discrimination and the New Jersey Civil Rights Act, both of which carry their own penalty frameworks. While this provision primarily establishes an enforcement mechanism, it implicitly creates an obligation for AI deployers to ensure their systems do not produce discriminatory outputs and that AI workplace surveillance is not unreasonable. The 'unreasonable' standard for workplace surveillance is undefined, leaving significant interpretive discretion to the Attorney General and courts.
a. The Office of the Attorney General shall investigate complaints related to AI-driven discrimination, unreasonable AI workplace surveillance, and claims of violations of civil rights protections related to AI. The Attorney General shall enforce penalties pursuant to the "Law Against Discrimination," P.L.1945, c.169 (C.10:5-1 et seq.), and the "New Jersey Civil Rights Act," P.L.2004, c.143 (C.10:6-1 et seq.) for violations of this section. b. As used in this section: "AI-driven discrimination" means output resulting from AI systems that exhibit biases against individuals based on age, race, religion, or other protected classes. "AI workplace surveillance" means the use of AI to monitor and analyze employee behavior and performance through the use of technology tools that track employee activities including computer usage and physical movements.
Pending 2027-01-01
H-02.1, H-02.2, H-02.6, H-02.3
GBL § 1551(1)(a)-(b)
Plain Language
Developers of high-risk AI decision systems must exercise reasonable care to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination arising from intended and contracted uses. A rebuttable presumption of reasonable care applies if the developer (1) complies with the documentation requirements in § 1551 and (2) retains an AG-approved independent third party to complete bias and governance audits. The AG must publish and annually update a list of qualified independent auditors. Self-testing to identify discrimination and pool-expansion activities are carved out from the definition of algorithmic discrimination.
(a) Beginning on January first, two thousand twenty-seven, each developer of a high-risk artificial intelligence decision system shall use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination arising from the intended and contracted uses of a high-risk artificial intelligence decision system. In any enforcement action brought on or after such date by the attorney general pursuant to this article, there shall be a rebuttable presumption that a developer used reasonable care as required pursuant to this subdivision if: (i) the developer complied with the provisions of this section; and (ii) an independent third party identified by the attorney general pursuant to paragraph (b) of this subdivision and retained by the developer completed bias and governance audits for the high-risk artificial intelligence decision system. (b) No later than January first, two thousand twenty-six, and at least annually thereafter, the attorney general shall: (i) identify independent third parties who, in the attorney general's opinion, are qualified to complete bias and governance audits for the purposes of subparagraph (ii) of paragraph (a) of this subdivision; and (ii) publish a list of such independent third parties available on the attorney general's website.
Pending 2027-01-01
H-02.1, H-02.2, H-02.6, H-02.3
GBL § 1552(1)(a)-(b)
Plain Language
Deployers of high-risk AI decision systems must exercise reasonable care to protect consumers from algorithmic discrimination. A rebuttable presumption of reasonable care applies if the deployer (1) complies with all § 1552 requirements and (2) retains an AG-approved independent third-party auditor to complete bias and governance audits. The AG must publish and annually update a list of qualified auditors. This mirrors the developer reasonable care obligation in § 1551(1) but applies to deployers.
(a) Beginning on January first, two thousand twenty-seven, each deployer of a high-risk artificial intelligence decision system shall use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination. In any enforcement action brought on or after said date by the attorney general pursuant to this article, there shall be a rebuttable presumption that a deployer of a high-risk artificial intelligence decision system used reasonable care as required pursuant to this subdivision if: (i) the deployer complied with the provisions of this section; and (ii) an independent third party identified by the attorney general pursuant to paragraph (b) of this subdivision and retained by the deployer completed bias and governance audits for the high-risk artificial intelligence decision system. (b) No later than January first, two thousand twenty-seven, and at least annually thereafter, the attorney general shall: (i) identify the independent third parties who, in the attorney general's opinion, are qualified to complete bias and governance audits for the purposes of subparagraph (ii) of paragraph (a) of this subdivision; and (ii) make a list of such independent third parties available on the attorney general's web site.
Pending 2027-01-01
H-02.3, H-02.10
GBL § 1552(3)(a)-(e)
Plain Language
Deployers must complete impact assessments for each high-risk AI decision system before deployment, at least annually thereafter, and within 90 days of any intentional and substantial modification. Each assessment must cover: system purpose and deployment context, algorithmic discrimination risk analysis and mitigation steps, data input and output descriptions, customization data overview, performance metrics and limitations, transparency measures, and post-deployment monitoring and safeguards. Post-modification assessments must also disclose how the system was used relative to the developer's intended uses. A single assessment may cover a comparable set of systems. Assessments completed for other regulatory purposes count if reasonably similar in scope. All impact assessments and associated records must be retained for at least three years after final deployment. The obligation may be shifted to the developer by contract under § 1552(7).
(a) Except as provided in paragraphs (c) and (d) of this subdivision and subdivision seven of this section: (i) a deployer that deploys a high-risk artificial intelligence decision system on or after January first, two thousand twenty-seven, or a third party contracted by the deployer, shall complete an impact assessment of the high-risk artificial intelligence decision system; and (ii) beginning on January first, two thousand twenty-seven, a deployer, or a third party contracted by the deployer, shall complete an impact assessment of a deployed high-risk artificial intelligence decision system: (A) at least annually; and (B) no later than ninety days after an intentional and substantial modification to such high-risk artificial intelligence decision system is made available. (b) (i) Each impact assessment completed pursuant to this subdivision shall include, at a minimum and to the extent reasonably known by, or available to, the deployer: (A) a statement by the deployer disclosing the purpose, intended use cases and deployment context of, and benefits afforded by, the high-risk artificial intelligence decision system; (B) an analysis of whether the deployment of the high-risk artificial intelligence decision system poses any known or reasonably foreseeable risks of algorithmic discrimination and, if so, the nature of such algorithmic discrimination and the steps that have been taken to mitigate such risks; (C) A description of: (I) the categories of data the high-risk artificial intelligence decision system processes as inputs; and (II) the outputs such high-risk artificial intelligence decision system produces; (D) if the deployer used data to customize the high-risk artificial intelligence decision system, an overview of the categories of data the deployer used to customize such high-risk artificial intelligence decision system; (E) any metrics used to evaluate the performance and known limitations of the high-risk artificial intelligence decision system; (F) a description of any transparency measures taken concerning the high-risk artificial intelligence decision system, including, but not limited to, any measures taken to disclose to a consumer that such high-risk artificial intelligence decision system is in use when such high-risk artificial intelligence decision system is in use; and (G) a description of the post-deployment monitoring and user safeguards provided concerning such high-risk artificial intelligence decision system, including, but not limited to, the oversight, use, and learning process established by the deployer to address issues arising from deployment of such high-risk artificial intelligence decision system. (ii) In addition to the statement, analysis, descriptions, overview, and metrics required pursuant to subparagraph (i) of this paragraph, an impact assessment completed pursuant to this subdivision following an intentional and substantial modification made to a high-risk artificial intelligence decision system on or after January first, two thousand twenty-seven, shall include a statement disclosing the extent to which the high-risk artificial intelligence decision system was used in a manner that was consistent with, or varied from, the developer's intended uses of such high-risk artificial intelligence decision system. (c) A single impact assessment may address a comparable set of high-risk artificial intelligence decision systems deployed by a deployer. 
(d) If a deployer, or a third party contracted by the deployer, completes an impact assessment for the purpose of complying with another applicable law or regulation, such impact assessment shall be deemed to satisfy the requirements established in this subdivision if such impact assessment is reasonably similar in scope and effect to the impact assessment that would otherwise be completed pursuant to this subdivision. (e) A deployer shall maintain the most recently completed impact assessment of a high-risk artificial intelligence decision system as required pursuant to this subdivision, all records concerning each such impact assessment and all prior impact assessments, if any, for a period of at least three years following the final deployment of the high-risk artificial intelligence decision system.
Pending 2027-01-01
H-02.8
GBL § 1552(4)
Plain Language
Deployers must conduct at least annual reviews of each deployed high-risk AI decision system to verify it is not causing algorithmic discrimination. This is a separate, ongoing operational obligation distinct from the pre-deployment impact assessment — it requires affirmative verification that the live system is not producing discriminatory outcomes. Reviews may be conducted by the deployer or a contracted third party. The obligation may be shifted to the developer by contract under § 1552(7).
Except as provided in subdivision seven of this section, a deployer, or a third party contracted by the deployer, shall review, no later than January first, two thousand twenty-seven, and at least annually thereafter, the deployment of each high-risk artificial intelligence decision system deployed by the deployer to ensure that such high-risk artificial intelligence decision system is not causing algorithmic discrimination.
Pending 2025-01-23
H-02.6, H-02.7
Real Prop. Law § 227-g(2)(a)-(b)
Plain Language
Landlords may not use an automated housing decision making tool unless they have an independent auditor conduct a disparate impact analysis at least annually. The analysis must assess the tool's actual impact on groups defined by sex, race, ethnicity, and other protected classes, and must differentiate between selected and non-selected applicants. Before implementing or using the tool, a summary of the most recent analysis and the tool's distribution date must be posted on the landlord's website and in any digital housing listing where the tool will be used. This effectively creates a pre-condition to lawful use — the tool cannot be deployed until the audit summary is public. The audit requirement closely mirrors the NYC Local Law 144 (AEDT) model — independent auditor, annual cadence, public summary — but applies specifically to housing rather than employment.
It shall be unlawful for a landlord to implement or use an automated housing decision making tool, including the use of an automated housing decision making tool that issues a score, classification, or recommendation, that fails to comply with the following provisions: (a) No less than annually, a disparate impact analysis shall be conducted to assess the actual impact of any automated housing decision making tool used by any landlord to select applicants for housing within the state. Such disparate impact analysis shall be provided to the landlord. (b) A summary of the most recent disparate impact analysis of such tool as well as the distribution date of the tool to which the analysis applies shall be made publicly available on the website of the landlord prior to the implementation or use of such tool. Such summary shall also be made accessible through any listing for housing on a digital platform for which the landlord intends to use an automated housing decision making tool to screen applicants for housing.
Pending 2025-04-27
H-02.1, H-02.3, H-02.5
State Tech. Law § 505(1)-(4)
Plain Language
Designers, developers, and deployers must take proactive and continuous measures to prevent algorithmic discrimination across an expansive list of protected characteristics. Required measures include proactive equity assessments during system design, use of representative training data, protections against proxy discrimination (e.g., using non-protected features that correlate with protected characteristics), and ensuring accessibility for users with disabilities. Systems must undergo both pre-deployment and ongoing disparity testing with clear organizational oversight. The protected characteristics list is broad, including New York-specific categories such as domestic violence victim status, predisposing genetic characteristics, and prior arrest or conviction record. A minimal proxy-screening sketch follows the statutory text below.
1. No New York resident shall face discrimination by algorithms, and all automated systems shall be used and designed in an equitable manner.
2. The designers, developers, and deployers of automated systems shall take proactive and continuous measures to protect New York residents and communities from algorithmic discrimination, ensuring the use and design of these systems in an equitable manner.
3. The protective measures required by this section shall include proactive equity assessments as part of the system design, use of representative data, protection against proxies for demographic features, and assurance of accessibility for New York residents with disabilities in design and development.
4. Automated systems shall undergo pre-deployment and ongoing disparity testing and mitigation, under clear organizational oversight.
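Subdivision 3's required "protection against proxies for demographic features" implies screening model inputs for features that track protected attributes. Below is a minimal correlation-based screen; the feature names, the 0/1 encoding of the protected attribute, and the 0.5 threshold are illustrative assumptions, and a real proxy analysis would also probe nonlinear and joint effects:

    def pearson(xs, ys):
        """Pearson correlation of two equal-length numeric sequences."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        vx = sum((x - mx) ** 2 for x in xs)
        vy = sum((y - my) ** 2 for y in ys)
        return cov / (vx * vy) ** 0.5

    def proxy_screen(features, protected, threshold=0.5):
        """Flag features whose correlation with the protected attribute
        (0/1 encoded) meets or exceeds the threshold."""
        flagged = {}
        for name, values in features.items():
            r = pearson(values, protected)
            if abs(r) >= threshold:
                flagged[name] = round(r, 3)
        return flagged

    # A neighborhood-derived feature that tracks the protected attribute is
    # flagged; an unrelated feature is not.
    print(proxy_screen(
        {"years_experience": [2, 7, 4, 3, 6, 5],
         "zip_income_index": [1, 1, 1, 0, 0, 0]},
        protected=[1, 1, 1, 0, 0, 0]))  # {'zip_income_index': 1.0}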
Pending 2025-04-27
H-02.5, H-02.6, H-02.7
State Tech. Law § 505(5)-(6)
Plain Language
All automated systems must undergo independent evaluations resulting in a plain-language algorithmic impact assessment that includes disparity testing results and mitigation measures. New York residents have the right to view these evaluations and reports. The scope is notable — this applies to all automated systems within the statute's coverage, not just high-risk systems. The statute does not define who qualifies as an 'independent' evaluator or specify publication timing or format requirements.
5. Independent evaluations and plain language reporting in the form of an algorithmic impact assessment, including disparity testing results and mitigation information, shall be conducted for all automated systems.
6. New York residents shall have the right to view such evaluations and reports.
Pending 2025-01-30
H-02.1
Insurance Law § 3224-e(a)(3)-(4)
Plain Language
Health care service plans must ensure that AI tools used in utilization review do not discriminate — directly or indirectly — against individuals based on a broad set of protected characteristics including race, color, religion, national origin, ancestry, age, sex, gender identity, gender expression, sexual orientation, disability (present or predicted), expected length of life, degree of medical dependency, quality of life, or other health conditions. The tool must also be fairly and equitably applied across all enrollees. The protected class list is notably broader than typical anti-discrimination provisions, including predicted disability, expected length of life, and quality of life.
(3) The use of the artificial intelligence, algorithm, or other software tool does not adversely discriminate, directly or indirectly, against an individual on the basis of race, color, religion, national origin, ancestry, age, sex, gender, gender identity, gender expression, sexual orientation, present or predicted disability, expected length of life, degree of medical dependency, quality of life, or other health conditions. (4) The artificial intelligence, algorithm, or other software tool is fairly and equitably applied.
Pending 2025-01-01
H-02.3
Labor Law § 201-j(1)(a)-(f)
Plain Language
Before deploying any AI system, employers with more than 100 employees (that are not small businesses) must complete a written impact assessment covering the AI system's objectives, its ability to meet those objectives, its underlying algorithms and training data, its use of sensitive and personal data, and estimates of both past and future employee displacement. This assessment must be completed before any use of AI, updated at least every two years, and re-conducted before any material change to the AI system that could alter its outcomes or effects. The assessment is a precondition to use — employers may not use AI at all without it.
No employer shall utilize or apply any artificial intelligence unless the employer, or an entity acting on behalf of such employer, shall have conducted an impact assessment for the application and use of such artificial intelligence. Following the first impact assessment, an impact assessment shall be conducted at least once every two years. An impact assessment shall be conducted prior to any material change to the artificial intelligence that may change the outcome or effect of such system. Such impact assessments shall include: (a) a description of the objectives of the artificial intelligence; (b) an evaluation of the ability of the artificial intelligence to achieve its stated objectives; (c) a description and evaluation of the objectives and development of the artificial intelligence including: (i) a summary of the underlying algorithms, computational modes, and tools that are used within the artificial intelligence; and (ii) the design and training data used to develop the artificial intelligence process; (d) the extent to which the deployment and use of the artificial intelligence requires input of sensitive and personal data, how that data is used and stored, and any control users may have over their data; (e) an estimate of the number of employees already displaced due to artificial intelligence; and (f) an estimate of the number of employees expected to be displaced or otherwise affected due to the increased use of artificial intelligence in the workplace.
Pending 2025-08-18
Pub. Health Law § 4905-a(1)(e)-(f)
Plain Language
Utilization review agents must ensure their AI tools do not discriminate — directly or indirectly — against enrollees in violation of state or federal law. The tools must also be fairly and equitably applied, consistent with applicable HHS regulations and guidance. This creates both a non-discrimination compliance obligation and an affirmative fairness and equity standard. The indirect discrimination prohibition reaches proxy-based or disparate-impact discrimination, not just intentional discrimination.
(e) The use of the artificial intelligence, algorithm, or other software tool does not discriminate, directly or indirectly, against enrollees in violation of state or federal law. (f) The artificial intelligence, algorithm, or other software tool is fairly and equitably applied, including in accordance with any applicable regulations and guidance issued by the federal department of health and human services.
Pending 2025-08-18
Ins. Law § 4905-a(1)(e)-(f)
Plain Language
Disability insurers must ensure their AI tools do not discriminate — directly or indirectly — against insureds in violation of state or federal law, and that the tools are fairly and equitably applied consistent with applicable HHS regulations and guidance. This mirrors the parallel Public Health Law provision and reaches both intentional and proxy-based or disparate-impact discrimination.
(e) The use of the artificial intelligence, algorithm, or other software tool does not discriminate, directly or indirectly, against insureds in violation of state or federal law. (f) The artificial intelligence, algorithm, or other software tool is fairly and equitably applied, including in accordance with any applicable regulations and guidance issued by the federal department of health and human services.
Pending
H-02.1H-02.3
Civil Rights Law § 86(1)-(2)
Plain Language
Developers and deployers of high-risk AI systems must exercise reasonable care to prevent foreseeable algorithmic discrimination resulting from the use, sale, or sharing of their systems. Before using, selling, or sharing a high-risk AI system, the developer or deployer must have completed an independent audit under § 87 confirming reasonable care was taken. Algorithmic discrimination covers unjustified differential treatment based on an extensive list of protected characteristics. Self-testing to identify and mitigate bias, pool-expansion efforts for diversity, and private club exemptions are carved out from the definition of algorithmic discrimination. A violation of this provision is also an unlawful discriminatory practice under Executive Law § 296(23), bringing it within the jurisdiction of New York's human rights enforcement framework.
1. A developer or deployer shall take reasonable care to prevent foreseeable risk of algorithmic discrimination that is a consequence of the use, sale, or sharing of a high-risk AI system or a product featuring a high-risk AI system. 2. Any developer or deployer that uses, sells, or shares a high-risk AI system shall have completed an independent audit, pursuant to section eighty-seven of this article, confirming that the developer or deployer has taken reasonable care to prevent foreseeable risk of algorithmic discrimination with respect to such high-risk AI system.
Pending 2025-09-05
H-02.1H-02.6H-02.8
Real Prop. Law § 442-m(1)
Plain Language
Real estate brokers and online housing platforms that use virtual agents, and online housing platforms that use AI tools, must have an independent auditor conduct a disparate impact analysis at least annually. The analysis must test whether the system's outputs differ across protected classes (sex, race, ethnicity, or other protected classes under New York law), whether any differentiation serves a substantial legitimate nondiscriminatory interest, and whether a less discriminatory alternative exists. A summary of the most recent analysis must be submitted to the attorney general's office. This is both a periodic independent audit obligation and a regulatory submission obligation.
No less than annually, any real estate broker or online housing platform that uses virtual agents to assist with searches for available properties for sale or rental properties, and any online housing platform that uses AI tools, shall have a disparate impact analysis conducted and shall submit a summary of the most recent disparate impact analysis to the attorney general's office.
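For orientation only: the bill text above does not prescribe a statistical method for the disparate impact analysis. One conventional screen compares each protected group's selection rate to the most-favored group's rate, flagging ratios below the four-fifths (0.8) rule of thumb used in employment law. A minimal sketch, assuming a per-inquiry outcome log; the column names, threshold, and data are illustrative, not statutory:

```python
# Illustrative disparate-impact screen; nothing here is mandated by the bill.
import pandas as pd

def impact_ratios(df: pd.DataFrame, group_col: str, selected_col: str) -> pd.DataFrame:
    """Selection rate per group, plus each rate's ratio to the highest rate."""
    rates = df.groupby(group_col)[selected_col].mean().rename("selection_rate")
    out = rates.to_frame()
    out["impact_ratio"] = out["selection_rate"] / out["selection_rate"].max()
    out["flag"] = out["impact_ratio"] < 0.8  # four-fifths rule of thumb, assumed
    return out

# Hypothetical log: one row per inquiry handled by the virtual agent.
log = pd.DataFrame({
    "protected_class": ["A", "A", "A", "B", "B", "B"],
    "shown_listing":   [1, 1, 1, 1, 0, 0],
})
print(impact_ratios(log, "protected_class", "shown_listing"))
```

A flagged ratio is only a screen; under the bill, the analysis would still ask whether the differentiation serves a substantial legitimate nondiscriminatory interest and whether a less discriminatory alternative exists.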
Pending 2025-09-05
H-02.1H-02.2
Real Prop. Law § 442-m(2)(a)-(c)
Plain Language
Real estate brokers and online housing platforms using virtual agents or AI tools must undertake three ongoing anti-discrimination obligations: (1) proactively identify discriminatory algorithmic results and modify their systems to adopt less discriminatory alternatives, including assessing training data for discriminatory predictive patterns; (2) ensure that the underlying AI systems are similarly predictive across groups based on sex, race, ethnicity, and other protected classes, and correct any disparities; and (3) conduct regular end-to-end testing of advertising, captioning, and chatbot systems to detect discriminatory outcomes, including by comparing ad delivery across demographic audiences. These are continuous obligations — they require ongoing monitoring and remediation, not one-time assessments.
Any real estate broker or online housing platform that offers or uses virtual agents or AI tools shall: (a) proactively identify discriminatory algorithmic results and modify such virtual agents or AI tools to adopt less discriminatory alternatives, including but not limited to, assessing data used to train such virtual agents or AI tools and verifying that use of such data does not predict discriminatory outcomes; (b) ensure that the artificial intelligence or other computational or algorithmic systems upon which such virtual agents or AI tools are structured are similarly predictive across groups on the basis of sex, race, ethnicity or other protected classes, and make adjustments to correct any identified disparities in predictiveness for any such groups; and (c) conduct regular end-to-end testing of advertising, captioning, and chatbot systems to ensure that any discriminatory outcomes are detected, including but not limited to, comparing the delivery of advertisements across different demographic audiences.
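The duty in paragraph (b) above to keep systems "similarly predictive across groups" names no metric. One plausible reading is to compute a discrimination measure, such as AUC, separately within each protected group and compare the results; the sketch below assumes that reading, and the function name and example data are hypothetical:

```python
# One assumed operationalization of "similarly predictive across groups":
# compare per-group AUC. Each group's slice must contain both outcome classes.
import numpy as np
from sklearn.metrics import roc_auc_score

def auc_by_group(y_true, y_score, groups) -> dict:
    """AUC of the model's scores, computed separately within each group."""
    y_true, y_score, groups = map(np.asarray, (y_true, y_score, groups))
    return {g: roc_auc_score(y_true[groups == g], y_score[groups == g])
            for g in np.unique(groups)}

print(auc_by_group(
    y_true=[1, 0, 1, 0, 1, 0],
    y_score=[0.9, 0.2, 0.8, 0.8, 0.5, 0.2],
    groups=["A", "A", "A", "B", "B", "B"],
))  # {'A': 1.0, 'B': 0.5}; a large gap would call for correction under (b)
```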
Pending 2027-01-01
H-02.1H-02.2H-02.3
Civil Rights Law § 102(1)-(2)
Plain Language
Developers and deployers are prohibited from offering, licensing, or using a covered algorithm in any way that causes disparate impact or discrimination based on protected characteristics in connection with consequential actions. The prohibition covers both intentional discrimination and unjustified differential effects. The disparate impact standard requires the developer or deployer to prove a substantial, legitimate, nondiscriminatory interest, and even if proven, a less discriminatory alternative defeats the defense. The algorithm is presumed to be analyzed holistically (not component by component) unless the developer or deployer proves separability by preponderance of the evidence. Exemptions exist for self-testing to identify or mitigate discrimination, diversity pool expansion, good-faith non-commercial research, and private clubs.
1. A developer or deployer shall not offer, license, promote, sell, or use a covered algorithm in a manner that: (a) causes or contributes to a disparate impact in a manner that prevents; (b) otherwise discriminates in a manner that prevents; or (c) otherwise makes unavailable, the equal enjoyment of goods, services, or other activities or opportunities, related to a consequential action, on the basis of a protected characteristic. 2. This section shall not apply to: (a) the offer, licensing, or use of a covered algorithm for the sole purpose of: (i) a developer's or deployer's self-testing (or auditing by an independent auditor at a developer's or deployer's request) to identify, prevent, or mitigate discrimination, or otherwise to ensure compliance with obligations, under federal or state law; (ii) expanding an applicant, participant, or customer pool to raise the likelihood of increasing diversity or redressing historic discrimination; or (iii) conducting good faith security research, or other research, if conducting the research is not part or all of a commercial act; or (b) any private club or other establishment not in fact open to the public, as described in section 201(e) of the Civil Rights Act of 1964 (42 U.S.C. 2000a(e)).
Pending 2027-01-01
H-02.3H-02.6
Civil Rights Law § 103(1)-(3)
Plain Language
Before deploying, licensing, or offering a covered algorithm for any consequential action — including material changes to previously-deployed algorithms — both developers and deployers must conduct a preliminary evaluation of whether harm is plausible. If no harm is plausible, they must record and submit that finding to the Division. If harm is plausible, they must engage a qualified independent auditor (who cannot have any employment, financial, or development relationship with the developer or deployer) to conduct a comprehensive pre-deployment evaluation. The developer's evaluation covers design methodology, training and testing data, performance metrics, demographic representation, stakeholder consultation, and harm potential. The deployer's evaluation (§ 103(4), mapped separately) covers deployment context, necessity, proportionality, and deployment-specific harm potential. For material changes to existing algorithms, the evaluation scope may be limited to the changes.
1. Prior to deploying, licensing, or offering a covered algorithm (including deploying a material change to a previously-deployed covered algorithm or a material change made prior to deployment) for a consequential action, a developer or deployer shall conduct a pre-deployment evaluation in accordance with this section. 2. (a) The developer shall conduct a preliminary evaluation of the plausibility that any expected use of the covered algorithm may result in a harm. (b) The deployer shall conduct a preliminary evaluation of the plausibility that any intended use of the covered algorithm may result in a harm. (c) Based on the results of the preliminary evaluation, the developer or deployer shall: (i) in the event that a harm is not plausible, record a finding of no plausible harm, including a description of the developer's expected use or the deployer's intended use of the covered algorithm, how the preliminary evaluation was conducted, and an explanation for the finding, and submit such record to the division; and (ii) in the event that a harm is plausible, conduct a full pre-deployment evaluation as described in subdivision three or subdivision four of this section, as applicable. (d) When conducting a preliminary evaluation of a material change to, or new use of, a previously-deployed covered algorithm, the developer or deployer may limit the scope of the evaluation to whether use of the covered algorithm may result in a harm as a result of the material change or new use. 3. (a) If a developer determines a harm is plausible during the preliminary evaluation described in subdivision two of this section, the developer shall engage an independent auditor to conduct a pre-deployment evaluation. The evaluation required by this subdivision shall include a detailed review and description, sufficient for an individual having ordinary skill in the art to understand the functioning, risks, uses, benefits, limitations, and other pertinent attributes of the covered algorithm, including: (i) the covered algorithm's design and methodology, including the inputs the covered algorithm is designed to use to produce an output and the outputs the covered algorithm is designed to produce; (ii) how the covered algorithm was created, trained, and tested, including: (A) any metric used to test the performance of the covered algorithm; (B) defined benchmarks and goals that correspond to such metrics, including whether there was sufficient representation of demographic groups that are reasonably likely to use or be affected by the covered algorithm in the data used to create or train the algorithm, and whether there was reasonable testing, if any, across such demographic groups; (C) the outputs the covered algorithm actually produces in testing; (D) a description of any consultation with relevant stakeholders, including any communities that will be impacted by the covered algorithm, regarding the development of the covered algorithm, or a disclosure that no such consultation occurred; (E) a description of which protected characteristics, if any, were used for testing and evaluation, and how and why such characteristics were used, including: (1) whether the testing occurred in comparable contextual conditions to the conditions in which the covered algorithm is expected to be used; and (2) if protected characteristics were not available to conduct such testing, a description of alternative methods the developer used to conduct the required assessment; (F) any other computational algorithm incorporated into the development of the covered algorithm, regardless of whether such precursor computational algorithm involves a consequential action; (G) a description of the data and information used to develop, test, maintain, or update the covered algorithm, including: (1) each type of personal data used, each source from which the personal data was collected, and how each type of personal data was inferred and processed; (2) the legal authorization for collecting and processing the personal data; and (3) an explanation of how the data (including personal data) used is representative, proportional, and appropriate to the development and intended uses of the covered algorithm; and (H) a description of the training process for the covered algorithm which includes the training, validation, and test data utilized to confirm the intended outputs; (iii) the potential for the covered algorithm to produce a harm or to have a disparate impact in the equal enjoyment of goods, services, or other activities or opportunities, and a description of such potential harm or disparate impact; (iv) alternative practices and recommendations to prevent or mitigate harm and recommendations for how the developer could monitor for harm after offering, licensing, or deploying the covered algorithm; and (v) any other information the division deems pertinent to prevent the covered algorithm from causing harm or having a disparate impact in the equal enjoyment of goods, services, or other activities or opportunities, as prescribed by rules promulgated by the division. (b) The independent auditor shall submit to the developer a report on the evaluation conducted under this subdivision, including the findings and recommendations of such independent auditor.
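Item (ii)(B) of the developer evaluation asks whether demographic groups likely to be affected were sufficiently represented in the training data. The statute does not define "sufficient"; a minimal sketch of one possible check, comparing training-data shares against an assumed reference population (all names and figures hypothetical):

```python
# Hedged sketch of a representation check; "sufficient representation" is
# not defined in the bill, so the reference shares used here are assumed.
from collections import Counter

def representation_gaps(train_groups: list, reference_shares: dict) -> dict:
    """Training-data share minus reference-population share, per group."""
    counts = Counter(train_groups)
    total = sum(counts.values())
    return {g: counts.get(g, 0) / total - share
            for g, share in reference_shares.items()}

gaps = representation_gaps(
    train_groups=["A"] * 900 + ["B"] * 100,   # hypothetical training labels
    reference_shares={"A": 0.6, "B": 0.4},    # assumed affected population
)
print(gaps)  # {'A': 0.3, 'B': -0.3}: group B underrepresented by 30 points
```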
Pending 2027-01-01
H-02.3H-02.6
Civil Rights Law § 103(4)
Plain Language
When a deployer's preliminary evaluation identifies plausible harm, the deployer must engage an independent auditor to conduct a full pre-deployment evaluation covering deployment-specific factors: the algorithm's role in the consequential action, necessity and proportionality relative to the baseline process being replaced, data inputs and their representativeness, testing results in the deployment context, stakeholder consultation, potential for harm and disparate impact, and mitigation recommendations. The independent auditor must submit a report with findings and recommendations. This is the deployer's parallel obligation to the developer's pre-deployment evaluation — each party must independently satisfy its own evaluation requirements.
4. (a) If a deployer determines a harm is plausible during the preliminary evaluation described in subdivision two of this section, the deployer shall engage an independent auditor to conduct a pre-deployment evaluation. The evaluation required by this subdivision shall include a detailed review and description, sufficient for an individual having ordinary skill in the art to understand the functioning, risks, uses, benefits, limitations, and other pertinent attributes of the covered algorithm, including: (i) the manner in which the covered algorithm makes or contributes to a consequential action and the purpose for which the covered algorithm will be deployed; (ii) the necessity and proportionality of the covered algorithm in relation to its planned use, including the intended benefits and limitations of the covered algorithm and a description of the baseline process being enhanced or replaced by the covered algorithm, if applicable; (iii) the inputs that the deployer plans to use to produce an output, including: (A) the type of personal data and information used and how the personal data and information will be collected, inferred, and processed; (B) the legal authorization for collecting and processing the personal data; and (C) an explanation of how the data used is representative, proportional, and appropriate to the deployment of the covered algorithm; (iv) the outputs the covered algorithm is expected to produce and the outputs the covered algorithm actually produces in testing; (v) a description of any additional testing or training completed by the deployer for the context in which the covered algorithm will be deployed; (vi) a description of any consultation with relevant stakeholders, including any communities that will be impacted by the covered algorithm, regarding the deployment of the covered algorithm; (vii) the potential for the covered algorithm to produce a harm or to have a disparate impact in the equal enjoyment of goods, services, or other activities or opportunities in the context in which the covered algorithm will be deployed and a description of such potential harm or disparate impact; (viii) alternative practices and recommendations to prevent or mitigate harm in the context in which the covered algorithm will be deployed and recommendations for how the deployer could monitor for harm after offering, licensing, or deploying the covered algorithm; and (ix) any other information the division deems pertinent to prevent the covered algorithm from causing harm or having a disparate impact in the equal enjoyment of goods, services, or other activities or opportunities as prescribed by rules promulgated by the division. (b) The independent auditor shall submit to the deployer a report on the evaluation conducted under this subdivision, including the findings and recommendations of such independent auditor.
Pending 2027-01-01
H-02.6H-02.7H-02.8H-02.10
Civil Rights Law § 104(1)-(3)
Plain Language
Deployers must conduct annual post-deployment impact assessments of each covered algorithm. A preliminary assessment identifies whether harm occurred during the reporting period. If no harm is identified, the deployer records and submits that finding to the Division. If harm is identified, the deployer must engage an independent auditor for a full impact assessment covering: actual harms and disparate impact with methodology, data inputs, expected vs. actual outputs, how the algorithm was used in consequential actions, and mitigation steps including staff training. The auditor's report goes to the deployer, who must then share a summary with the developer within 30 days (subject to trade secret and privacy protections). This creates a continuous annual cycle of post-deployment monitoring with independent oversight when harm is found.
1. After the deployment of a covered algorithm, a deployer shall, on an annual basis, conduct an impact assessment in accordance with this section. The deployer shall conduct a preliminary impact assessment of the covered algorithm to identify any harm that resulted from the covered algorithm during the reporting period and: (a) if no resulting harm is identified by such assessment, shall record a finding of no harm, including a description of the developer's expected use or the deployer's intended use of the covered algorithm, how the preliminary evaluation was conducted, and an explanation for such finding, and submit such finding to the division; and (b) if a resulting harm is identified by such assessment, shall conduct a full impact assessment as described in subdivision two of this section. 2. In the event that the covered algorithm resulted in a harm during the reporting period, the deployer shall engage an independent auditor to conduct a full impact assessment with respect to the reporting period, including: (a) an assessment of the harm that resulted or was reasonably likely to have been produced during the reporting period; (b) a description of the extent to which the covered algorithm produced a disparate impact in the equal enjoyment of goods, services, or other activities or opportunities, including the methodology for such evaluation, of how the covered algorithm produced or likely produced such disparity; (c) a description of the types of data input into the covered algorithm during the reporting period to produce an output, including: (i) documentation of how data input into the covered algorithm to produce an output is represented and complete descriptions of each field of data; and (ii) whether and to what extent the data input into the covered algorithm to produce an output was used to train or otherwise modify the covered algorithm; (d) whether and to what extent the covered algorithm produced the outputs it was expected to produce; (e) a detailed description of how the covered algorithm was used to make a consequential action; (f) any action taken to prevent or mitigate harms, including how relevant staff are informed of, trained about, and implement harm mitigation policies and practices, and recommendations for how the deployer could monitor for and prevent harm after offering, licensing, or deploying the covered algorithm; and (g) any other information the division deems pertinent to prevent the covered algorithm from causing harm or having a disparate impact in the equal enjoyment of goods, services, or other activities or opportunities as prescribed by rules promulgated by the division. 3. (a) After the engagement of the independent auditor, the independent auditor shall submit to the deployer a report on the impact assessment conducted under subdivision two of this section, including the findings and recommendations of such independent auditor. (b) Not later than thirty days after the submission of a report on an impact assessment under this section, a deployer shall submit to the developer of the covered algorithm a summary of such report, subject to the trade secret and privacy protections described in subdivision six of this section.
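Among other elements, § 104(2)(d) asks whether the algorithm produced the outputs it was expected to produce during the reporting period. The bill prescribes no method; one common way to quantify this is a two-sample test comparing pre-deployment test outputs against the period's production outputs, as in this assumed sketch:

```python
# Illustrative output-drift check for an annual impact assessment; the
# KS test and the 0.05 threshold are assumptions, not statutory requirements.
from scipy.stats import ks_2samp

def output_drift(expected_outputs, observed_outputs, alpha: float = 0.05) -> dict:
    """Two-sample Kolmogorov-Smirnov test: test-time vs. production outputs."""
    stat, p_value = ks_2samp(expected_outputs, observed_outputs)
    return {"ks_stat": stat, "p_value": p_value, "drifted": p_value < alpha}

# e.g. output_drift(pre_deployment_scores, reporting_period_scores)
```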
Pending 2027-01-01
H-02.8
Civil Rights Law § 104(4)
Plain Language
Developers must annually review all impact assessment summaries submitted by deployers of their covered algorithms. The review must cover how deployers are using the algorithm, the data being inputted, whether deployers are complying with contractual terms, real-world performance versus pre-deployment testing, whether the algorithm is causing or is likely causing harm or disparate impact, and whether the algorithm needs modification. This creates a feedback loop requiring developers to remain actively engaged in monitoring the downstream use of their algorithms and to take corrective action when warranted.
4. A developer shall, on an annual basis, review each impact assessment summary submitted by a deployer of its covered algorithm under subdivision three of this section for the following purposes: (a) to assess how the deployer is using the covered algorithm, including the methodology for assessing such use; (b) to assess the type of data the deployer is inputting into the covered algorithm to produce an output and the types of outputs the covered algorithm is producing; (c) to assess whether the deployer is complying with any relevant contractual agreement with the developer and whether any remedial action is necessary; (d) to compare the covered algorithm's performance in real-world conditions versus pre-deployment testing, including the methodology used to evaluate such performance; (e) to assess whether the covered algorithm is causing harm or is reasonably likely to be causing harm; (f) to assess whether the covered algorithm is causing, or is reasonably likely to be causing, a disparate impact in the equal enjoyment of goods, services, or other activities or opportunities, and, if so, how and with respect to which protected characteristic; (g) to determine whether the covered algorithm needs modification; (h) to determine whether any other action is appropriate to ensure that the covered algorithm remains safe and effective; and (i) to undertake any other assessment or responsive action the division deems pertinent to prevent the covered algorithm from causing harm or having a disparate impact in the equal enjoyment of goods, services, or other activities or opportunities, as prescribed by rules promulgated by the division.
Pending 2027-01-01
H-02.1H-02.3
Civ. Rights Law § 86(1)-(2)
Plain Language
Developers and deployers of high-risk AI systems must exercise reasonable care to prevent foreseeable algorithmic discrimination arising from the use, sale, or sharing of those systems. Before using, selling, or sharing a high-risk AI system, an independent audit confirming such reasonable care must be completed. The definition of algorithmic discrimination covers an extensive list of protected characteristics. Importantly, testing your own system to identify bias, expanding applicant pools to increase diversity, and acts by private clubs exempt under the federal Civil Rights Act are excluded from the definition of algorithmic discrimination. This is a foundational duty — failure to comply is an unlawful discriminatory practice actionable under the Human Rights Law.
1. A developer or deployer shall take reasonable care to prevent foreseeable risk of algorithmic discrimination that is a consequence of the use, sale, or sharing of a high-risk AI system or a product featuring a high-risk AI system.
2. Any developer or deployer that uses, sells, or shares a high-risk AI system shall have completed an independent audit, pursuant to section eighty-seven of this article, confirming that the developer or deployer has taken reasonable care to prevent foreseeable risk of algorithmic discrimination with respect to such high-risk AI system.
Pending 2025-01-01
H-02.3
Labor Law § 201-j(1)
Plain Language
Before using any AI system, covered employers must complete an impact assessment covering: the AI's objectives, its ability to meet those objectives, a summary of its algorithms and training data, its use of sensitive personal data, data storage and user controls, an estimate of employees already displaced by AI, and an estimate of future displacement. Assessments must be updated at least every two years and before any material change to the AI system that could alter its outcomes. The employer may use a third party to conduct the assessment. Coverage is limited to businesses resident in New York with more than 100 employees that do not qualify as small businesses.
No employer shall utilize or apply any artificial intelligence unless the employer, or an entity acting on behalf of such employer, shall have conducted an impact assessment for the application and use of such artificial intelligence. Following the first impact assessment, an impact assessment shall be conducted at least once every two years. An impact assessment shall be conducted prior to any material change to the artificial intelligence that may change the outcome or effect of such system. Such impact assessments shall include: (a) a description of the objectives of the artificial intelligence; (b) an evaluation of the ability of the artificial intelligence to achieve its stated objectives; (c) a description and evaluation of the objectives and development of the artificial intelligence including: (i) a summary of the underlying algorithms, computational modes, and tools that are used within the artificial intelligence; and (ii) the design and training data used to develop the artificial intelligence process; (d) the extent to which the deployment and use of the artificial intelligence requires input of sensitive and personal data, how that data is used and stored, and any control users may have over their data; (e) an estimate of the number of employees already displaced due to artificial intelligence; and (f) an estimate of the number of employees expected to be displaced or otherwise affected due to the increased use of artificial intelligence in the workplace.
Pending 2025-10-11
H-02.3H-02.8H-02.10
GBL § 1552(3)(a)-(e)
Plain Language
Deployers must complete an impact assessment for each high-risk AI decision system before deployment and at least annually thereafter, plus within 90 days of any intentional and substantial modification. The assessment must cover the system's purpose and deployment context, discrimination risk analysis and mitigation steps, data input categories and outputs, customization data, performance metrics and limitations, transparency measures, and post-deployment monitoring safeguards. Assessments following a substantial modification must also address whether the system was used consistently with the developer's intended uses. A single assessment may cover comparable systems. Cross-compliance credit is available if another law requires a reasonably similar assessment. All impact assessments and related records must be retained for at least three years following final deployment. Deployers meeting the subdivision 7 conditions are exempt.
3. (a) Except as provided in paragraphs (c) and (d) of this subdivision and subdivision seven of this section: (i) a deployer that deploys a high-risk artificial intelligence decision system on or after January first, two thousand twenty-seven, or a third party contracted by the deployer, shall complete an impact assessment of the high-risk artificial intelligence decision system; and (ii) beginning on January first, two thousand twenty-seven, a deployer, or a third party contracted by the deployer, shall complete an impact assessment of a deployed high-risk artificial intelligence decision system: (A) at least annually; and (B) no later than ninety days after an intentional and substantial modification to such high-risk artificial intelligence decision system is made available. (b) (i) Each impact assessment completed pursuant to this subdivision shall include, at a minimum and to the extent reasonably known by, or available to, the deployer: (A) a statement by the deployer disclosing the purpose, intended use cases and deployment context of, and benefits afforded by, the high-risk artificial intelligence decision system; (B) an analysis of whether the deployment of the high-risk artificial intelligence decision system poses any known or reasonably foreseeable risks of algorithmic discrimination and, if so, the nature of such algorithmic discrimination and the steps that have been taken to mitigate such risks; (C) A description of: (I) the categories of data the high-risk artificial intelligence decision system processes as inputs; and (II) the outputs such high-risk artificial intelligence decision system produces; (D) if the deployer used data to customize the high-risk artificial intelligence decision system, an overview of the categories of data the deployer used to customize such high-risk artificial intelligence decision system; (E) any metrics used to evaluate the performance and known limitations of the high-risk artificial intelligence decision system; (F) a description of any transparency measures taken concerning the high-risk artificial intelligence decision system, including, but not limited to, any measures taken to disclose to a consumer that such high-risk artificial intelligence decision system is in use when such high-risk artificial intelligence decision system is in use; and (G) a description of the post-deployment monitoring and user safeguards provided concerning such high-risk artificial intelligence decision system, including, but not limited to, the oversight, use, and learning process established by the deployer to address issues arising from deployment of such high-risk artificial intelligence decision system. (ii) In addition to the statement, analysis, descriptions, overview, and metrics required pursuant to subparagraph (i) of this paragraph, an impact assessment completed pursuant to this subdivision following an intentional and substantial modification made to a high-risk artificial intelligence decision system on or after January first, two thousand twenty-seven, shall include a statement disclosing the extent to which the high-risk artificial intelligence decision system was used in a manner that was consistent with, or varied from, the developer's intended uses of such high-risk artificial intelligence decision system. (c) A single impact assessment may address a comparable set of high-risk artificial intelligence decision systems deployed by a deployer. 
(d) If a deployer, or a third party contracted by the deployer, completes an impact assessment for the purpose of complying with another applicable law or regulation, such impact assessment shall be deemed to satisfy the requirements established in this subdivision if such impact assessment is reasonably similar in scope and effect to the impact assessment that would otherwise be completed pursuant to this subdivision. (e) A deployer shall maintain the most recently completed impact assessment of a high-risk artificial intelligence decision system as required pursuant to this subdivision, all records concerning each such impact assessment and all prior impact assessments, if any, for a period of at least three years following the final deployment of the high-risk artificial intelligence decision system.
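The elements in paragraph (b)(i) above lend themselves to a structured record. A minimal sketch of such a record, together with the paragraph (e) retention floor; the field names are illustrative shorthand, not statutory terms, and the 365-day year is an approximation:

```python
# Assumed record structure mirroring GBL § 1552(3)(b)(i); nothing in the
# bill requires this (or any) particular representation.
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class ImpactAssessment:
    purpose_and_context: str              # (A) purpose, use cases, benefits
    discrimination_risk_analysis: str     # (B) risks and mitigation steps
    input_data_categories: list           # (C)(I) input categories
    outputs_description: str              # (C)(II) outputs produced
    customization_data: list = field(default_factory=list)   # (D)
    performance_metrics: dict = field(default_factory=dict)  # (E)
    transparency_measures: str = ""       # (F)
    post_deployment_monitoring: str = ""  # (G)
    completed_on: date = field(default_factory=date.today)

def retention_required(final_deployment: date, on_date: date) -> bool:
    """Paragraph (e): keep assessments at least three years after final deployment."""
    return on_date <= final_deployment + timedelta(days=3 * 365)
```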
Pending 2025-10-11
H-02.8
GBL § 1552(4)
Plain Language
Deployers must conduct an annual review — separate from the impact assessment — of each deployed high-risk AI decision system to affirmatively verify it is not causing algorithmic discrimination. This ongoing review obligation applies in addition to the impact assessment requirement and may be performed by a contracted third party. Deployers meeting the subdivision 7 conditions are exempt.
4. Except as provided in subdivision seven of this section, a deployer, or a third party contracted by the deployer, shall review, no later than January first, two thousand twenty-seven, and at least annually thereafter, the deployment of each high-risk artificial intelligence decision system deployed by the deployer to ensure that such high-risk artificial intelligence decision system is not causing algorithmic discrimination.
Pending 2025-10-11
H-02.6
GBL § 1551(1)(a)(ii), § 1552(1)(a)(ii)
Plain Language
Both developers and deployers may obtain a rebuttable presumption of reasonable care by retaining an AG-identified independent third party to complete bias and governance audits. While the audit is not mandatory, it is the statutory path to the safe harbor. The audit must include at minimum an assessment of the system's disparate impact across enumerated protected characteristics. The AG maintains and publishes an annual list of qualified auditors.
(ii) an independent third party identified by the attorney general pursuant to paragraph (b) of this subdivision and retained by the developer completed bias and governance audits for the high-risk artificial intelligence decision system.
Pending 2025-08-11
Pub. Health Law § 4905-a(1)(e)-(f)
Plain Language
Utilization review agents must ensure their AI tools do not discriminate directly or indirectly against enrollees in violation of state or federal law. The tools must also be fairly and equitably applied, including in compliance with any applicable HHS regulations and guidance. While the statute does not prescribe a specific bias testing methodology, the non-discrimination requirement implicitly necessitates monitoring for disparate impact.
(e) The use of the artificial intelligence, algorithm, or other software tool does not discriminate, directly or indirectly, against enrollees in violation of state or federal law. (f) The artificial intelligence, algorithm, or other software tool is fairly and equitably applied, including in accordance with any applicable regulations and guidance issued by the federal department of health and human services.
Pending 2025-08-11
Ins. Law § 4905-a(1)(e)-(f)
Plain Language
Disability insurers must ensure their AI tools do not discriminate directly or indirectly against insureds in violation of state or federal law and are fairly and equitably applied in compliance with applicable HHS regulations and guidance. This mirrors the parallel non-discrimination obligation on utilization review agents under the Public Health Law.
(e) The use of the artificial intelligence, algorithm, or other software tool does not discriminate, directly or indirectly, against insureds in violation of state or federal law. (f) The artificial intelligence, algorithm, or other software tool is fairly and equitably applied, including in accordance with any applicable regulations and guidance issued by the federal department of health and human services.
Pending 2026-07-22
Exec. Law § 296(23)(a)
Plain Language
Employers may not use artificial intelligence in any employment decision — recruitment, hiring, promotion, renewal, training selection, discharge, discipline, tenure, or terms and conditions of employment — where the AI has the effect of discriminating against employees based on any protected class under the New York Human Rights Law. The statute applies a disparate impact standard: the AI need not be intentionally discriminatory; it is sufficient that its use has the effect of subjecting employees to discrimination. The prohibition also expressly bars using zip codes as a proxy for protected characteristics, closing a common indirect discrimination vector in algorithmic systems.
(a) It shall be an unlawful discriminatory practice for an employer to use artificial intelligence for recruitment, hiring, promotion, renewal of employment, selection for training or apprenticeship, discharge, discipline, tenure, or the terms, privileges, or conditions of employment that has the effect of subjecting employees to discrimination on the basis of age, race, creed, color, national origin, citizenship or immigration status, sexual orientation, gender identity or expression, military status, sex, disability, predisposing genetic characteristics, familial status, marital status, or status as a victim of domestic violence or to use zip codes as a proxy for such protected classes.
Pending 2026-10-06
35 Pa.C.S. § 3503(b)(2)-(3)
Plain Language
Facilities must ensure that AI algorithms and their training data do not directly or indirectly discriminate against patients in violation of federal or state law. Algorithms must be applied fairly and equitably, in accordance with applicable HHS regulations and guidance. This encompasses both training data bias and operational application bias.
(2) The artificial intelligence-based algorithms and training data sets must not directly or indirectly discriminate against patients in violation of Federal or State law. (3) The artificial intelligence-based algorithms must be fairly and equitably applied, including in accordance with any applicable regulations and/or guidance issued by the United States Department of Health and Human Services.
Pending 2026-10-06
40 Pa.C.S. § 5203(b)(4)-(5)
Plain Language
Insurers must ensure their AI algorithms and training data do not directly or indirectly discriminate against covered persons in violation of federal or state law, and that algorithms are applied fairly and equitably consistent with applicable HHS regulations and guidance.
(4) The artificial intelligence-based algorithms and training data sets must not directly or indirectly discriminate against covered persons in violation of Federal or State law. (5) The artificial intelligence-based algorithms must be fairly and equitably applied, including in accordance with any applicable regulations or guidance issued by the United States Department of Health and Human Services.
Pending 2026-10-06
40 Pa.C.S. § 5303(b)(4)-(5)
Plain Language
MA or CHIP managed care plans must ensure their AI algorithms and training data do not directly or indirectly discriminate against enrollees in violation of federal or state law, and that algorithms are applied fairly and equitably consistent with HHS regulations and guidance.
(4) The artificial intelligence-based algorithms and training data sets must not directly or indirectly discriminate against the enrollees in violation of Federal or State law. (5) The artificial intelligence-based algorithms must be fairly and equitably applied, including in accordance with any applicable regulations and guidance issued by the United States Department of Health and Human Services.
Pending 2027-01-09
H-02.1
35 Pa.C.S. § 3503(b)(2)-(3)
Plain Language
Facilities must ensure that both their AI algorithms and training data sets do not directly or indirectly discriminate against patients in violation of federal or state law. Algorithms must also be applied fairly and equitably, consistent with applicable HHS regulations and guidance. This creates both a non-discrimination obligation and an affirmative fair-application requirement, with HHS guidance serving as a reference standard.
(2) The artificial-intelligence-based algorithms and training data sets must not directly or indirectly discriminate against patients in violation of Federal or State law. (3) The artificial-intelligence-based algorithms must be fairly and equitably applied, including in accordance with any applicable regulations and/or guidance issued by the United States Department of Health and Human Services.
Pending 2027-01-09
H-02.1
40 Pa.C.S. § 5203(b)(4)-(5)
Plain Language
Insurers must ensure that both their AI algorithms and training data sets do not directly or indirectly discriminate against covered persons in violation of federal or state law. Algorithms must also be fairly and equitably applied consistent with HHS regulations and guidance.
(4) The artificial-intelligence-based algorithms and training data sets must not directly or indirectly discriminate against covered persons in violation of Federal or State law. (5) The artificial-intelligence-based algorithms must be fairly and equitably applied, including in accordance with any applicable regulations or guidance issued by the United States Department of Health and Human Services.
Pending 2027-01-09
H-02.1
40 Pa.C.S. § 5303(b)(4)-(5)
Plain Language
MA or CHIP managed care plans must ensure that both their AI algorithms and training data sets do not directly or indirectly discriminate against enrollees in violation of federal or state law. Algorithms must be fairly and equitably applied consistent with HHS regulations and guidance.
(4) The artificial-intelligence-based algorithms and training data sets must not directly or indirectly discriminate against the enrollees in violation of Federal or State law. (5) The artificial-intelligence-based algorithms must be fairly and equitably applied, including in accordance with any applicable regulations and guidance issued by the United States Department of Health and Human Services.
Pending 2026-02-12
H-02.3H-02.6H-02.7
§ 28-5.2-2(k)
Plain Language
Employers may not use electronic monitoring — alone or with an ADS — unless a pre-use impact assessment has been completed. The assessment must be conducted by an independent auditor with no financial or legal conflicts and no involvement with the ADS in the prior five years. It must be completed no more than one year before monitoring begins (or within six months of the chapter's effective date for existing monitoring). The assessment must evaluate data protection practices, identify allowable purposes, analyze potential legal violations and employee privacy and job quality impacts, and describe mitigation steps. The full assessment must be disclosed in plain language to all affected workers and their authorized representatives within 30 days of receipt. Workers and their representatives then have the right to comment on, challenge, and bargain over the proposed monitoring.
(k) It shall be unlawful for an employer to use electronic monitoring, alone or in conjunction with an automated decision system, unless the employer's proposed use of electronic monitoring has been the subject of an impact assessment. Such impact assessments shall: (1) Be conducted no more than one year prior to the use of such electronic monitoring, or where the electronic monitoring began before the effective date of this section, within six (6) months of the effective date of this chapter; (2) Be conducted by an independent and impartial party with no financial or legal conflicts of interest; (3) Evaluate whether the data protection and security practices surrounding the electronic monitoring are consistent with applicable law and cybersecurity industry's best practices; (4) Identify the allowable purpose(s) as defined in this chapter; (5) Consider and describe any other ways in which the electronic monitoring could result in a violation of applicable law and, for any finding that a violation of law may occur, any necessary or appropriate steps to prevent such violation of law; (6) Consider and describe whether the electronic monitoring may negatively impact employees' privacy and job quality, including wages, hours, and working conditions; and (7) Be disclosed in full, in plain language, to all affected workers and their authorized representatives within thirty (30) days of the employer's receipt of the impact assessment. (i) Workers and their authorized representatives shall have the right to comment on, challenge and bargain over the proposed monitoring based on the assessment's findings.
Pending 2026-02-06
H-02.3H-02.6H-02.7
§ 28-5.2-2(k)
Plain Language
Employers may not use electronic monitoring — whether standalone or in conjunction with an ADS — without first obtaining an independent impact assessment. The assessment must be conducted within one year prior to deployment (or within six months of the law's effective date for existing monitoring). The auditor must be independent with no financial or legal conflicts, and within the prior five years cannot have been involved with the ADS's development or deployment, have been employed by the developer or deployer, or have held financial interests in either. The assessment must evaluate data protection practices, identify allowable purposes, describe risks of legal violations and employee privacy/job quality impacts, and propose mitigation steps. The full assessment must be disclosed in plain language to all affected workers and their authorized representatives within 30 days of receipt, and workers have the right to comment on, challenge, and bargain over the proposed monitoring based on the findings.
(k) It shall be unlawful for an employer to use electronic monitoring, alone or in conjunction with an automated decision system, unless the employer's proposed use of electronic monitoring has been the subject of an impact assessment. Such impact assessments shall: (1) Be conducted no more than one year prior to the use of such electronic monitoring, or where the electronic monitoring began before the effective date of this section, within six (6) months of the effective date of this chapter; (2) Be conducted by an independent and impartial party with no financial or legal conflicts of interest; (3) Evaluate whether the data protection and security practices surrounding the electronic monitoring are consistent with applicable law and cybersecurity industry's best practices; (4) Identify the allowable purpose(s) as defined in this chapter; (5) Consider and describe any other ways in which the electronic monitoring could result in a violation of applicable law and, for any finding that a violation of law may occur, any necessary or appropriate steps to prevent such violation of law; (6) Consider and describe whether the electronic monitoring may negatively impact employees' privacy and job quality, including wages, hours, and working conditions; and (7) Be disclosed in full, in plain language, to all affected workers and their authorized representatives within thirty (30) days of the employer's receipt of the impact assessment. (i) Workers and their authorized representatives shall have the right to comment on, challenge and bargain over the proposed monitoring based on the assessment's findings.
Pending
H-02.3
S.C. Code § 37-31-20(A)
Plain Language
Developers of high-risk AI systems must exercise reasonable care to protect consumers from known or reasonably foreseeable algorithmic discrimination — unlawful differential treatment or impact disfavoring individuals based on protected characteristics. This is a general duty standard, not a checklist. Developers receive a rebuttable presumption that reasonable care was used in AG enforcement actions if they have complied with this section and any rules the AG adopts. Self-testing for bias and diversity-expansion uses are carved out from the definition of algorithmic discrimination.
(A) A developer of a high-risk artificial intelligence system shall use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination arising from the intended and contracted uses of the high-risk artificial intelligence system. In any enforcement action brought by the Attorney General pursuant to Section 37-31-60, there is a rebuttable presumption that a developer used reasonable care as required under this section if the developer complied with this section and any additional requirements or obligations as set forth in rules adopted by the Attorney General pursuant to Section 37-31-70.
Pending
H-02.3
S.C. Code § 37-31-30(A)
Plain Language
Deployers of high-risk AI systems must exercise reasonable care to protect consumers from known or reasonably foreseeable algorithmic discrimination. A rebuttable presumption that reasonable care was used applies in AG enforcement actions if the deployer has complied with this section and any AG-adopted rules. This mirrors the developer duty in § 37-31-20(A) but applies at the deployment stage.
(A) A deployer of a high-risk artificial intelligence system shall use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination. In any enforcement action brought by the Attorney General pursuant to Section 37-31-70, there is a rebuttable presumption that a deployer of a high-risk artificial intelligence system used reasonable care as required under this section if the deployer complied with this section and any additional requirements or obligations as set forth in rules adopted by the Attorney General pursuant to Section 37-31-70.
Pending
H-02.3H-02.8H-02.10
S.C. Code § 37-31-30(C)
Plain Language
Deployers must complete an impact assessment for each high-risk AI system before deployment and update it at least annually and within 90 days of any intentional and substantial modification. The assessment must cover system purpose, algorithmic discrimination risk analysis with mitigation steps, input/output data categories, performance metrics and limitations, transparency measures, and post-deployment monitoring safeguards. A single assessment may cover comparable systems, and assessments completed for other laws that are reasonably similar in scope satisfy this requirement. All impact assessments and records must be retained for at least three years after final deployment. Separately, deployers must conduct an annual review of each deployed system to confirm it is not causing algorithmic discrimination. Deployers that meet the small-deployer conditions in subsection (F) are exempt.
(C)(1) Except as provided in items (4), (5), and subsection (F) of this section: (a) a deployer, or a third party contracted by the deployer, that deploys a high-risk artificial intelligence system shall complete an impact assessment for the high-risk artificial intelligence system; and (b) a deployer, or a third party contracted by the deployer, shall complete an impact assessment for a deployed high-risk artificial intelligence system at least annually and within ninety days after any intentional and substantial modification to the high-risk artificial intelligence system is made available. (2) An impact assessment completed pursuant to this subsection must include, at a minimum, and to the extent reasonably known by or available to the deployer: (a) a statement by the deployer disclosing the purpose, intended-use cases, and deployment context of, and benefits afforded by, the high-risk artificial intelligence system; (b) an analysis of whether the deployment of the high-risk artificial intelligence system poses any known or reasonably foreseeable risks of algorithmic discrimination and, if so, the nature of the algorithmic discrimination and the steps that have been taken to mitigate the risks; (c) a description of the categories of data the high-risk artificial intelligence system processes as inputs and the outputs the high-risk artificial intelligence system produces; (d) if the deployer used data to customize the high-risk artificial intelligence system, an overview of the categories of data the deployer used to customize the high-risk artificial intelligence system; (e) any metrics used to evaluate the performance and known limitations of the high-risk artificial intelligence system; (f) a description of any transparency measures taken concerning the high-risk artificial intelligence system, including any measures taken to disclose to a consumer that the high-risk artificial intelligence system is in use when the high-risk artificial intelligence system is in use; and (g) a description of the postdeployment monitoring and user safeguards provided concerning the high-risk artificial intelligence system, including the oversight, use, and learning process established by the deployer to address issues arising from the deployment of the high-risk artificial intelligence system. (3) In addition to the information required under item (2), an impact assessment completed pursuant to this item following an intentional and substantial modification to a high-risk artificial intelligence system must include a statement disclosing the extent to which the high-risk artificial intelligence system was used in a manner that was consistent with, or varied from, the developer's intended uses of the high-risk artificial intelligence system. (4) A single impact assessment may address a comparable set of high-risk artificial intelligence systems deployed by a deployer. (5) If a deployer, or a third party contracted by the deployer, completes an impact assessment for the purpose of complying with another applicable law or regulation, the impact assessment satisfies the requirements established in this subsection if the impact assessment is reasonably similar in scope and effect to the impact assessment that would otherwise be completed pursuant to this subsection. 
(6) A deployer shall maintain the most recently completed impact assessment for a high-risk artificial intelligence system as required under this subsection, all records concerning each impact assessment, and all prior impact assessments, if any, for at least three years following the final deployment of the high-risk artificial intelligence system. (7) At least annually, a deployer, or a third party contracted by the deployer, must review the deployment of each high-risk artificial intelligence system deployed by the deployer to ensure that the high-risk artificial intelligence system is not causing algorithmic discrimination.
Pending 2026-07-01
H-02.1H-02.8
§ 2.2-1202.2(B)(4)
Plain Language
State agencies must annually test their automated decision systems for algorithmic discrimination — defined broadly to cover unlawful differential treatment or impact across a wide range of protected characteristics including age, disability, ethnicity, genetic information, English proficiency, national origin, race, religion, reproductive health, sex, sexual orientation, and veteran status. Testing may be performed by the agency itself or by a contractor. The agency must also certify the system's compliance with federal and state law. This is a recurring annual obligation, not a one-time pre-deployment assessment.
Annually test, or ensure that an appropriate contractor employed by such agency annually tests, the automated decision system for algorithmic discrimination and certify its compliance with federal and state law;
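The statute leaves the annual testing method to the agency or its contractor. One standard screen for differential impact is a chi-squared test of independence between protected group and decision outcome over the year's logged decisions; the sketch below assumes such a log exists, and the column names and significance level are illustrative:

```python
# Assumed annual screen over a decision log; the bill mandates testing but
# not this (or any) particular statistical method.
import pandas as pd
from scipy.stats import chi2_contingency

def annual_discrimination_screen(log: pd.DataFrame, group_col: str,
                                 outcome_col: str, alpha: float = 0.05) -> dict:
    """Chi-squared test of independence between group membership and outcome."""
    table = pd.crosstab(log[group_col], log[outcome_col])
    stat, p_value, dof, _ = chi2_contingency(table)
    return {"chi2": stat, "p_value": p_value, "flagged": p_value < alpha}
```

A flagged result is a prompt for closer review, not by itself a finding of algorithmic discrimination.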
Pending 2026-07-01
H-02.1, H-02.8
§ 15.2-1500.2(B)(4)
Plain Language
Local government entities must annually test their automated decision systems for algorithmic discrimination and certify compliance with federal and state law. Testing may be done by the entity itself or an appropriate contractor. This mirrors the state agency annual testing obligation and covers the same broad range of protected characteristics.
Annually test, or ensure that an appropriate contractor employed by such department, office, board, commission, agency, or instrumentality of local government annually tests, the automated decision system for algorithmic discrimination and certify its compliance with federal and state law;
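Neither the state-agency nor the local-government bill prescribes a statistical method for the annual test. One common screen, shown here as a hedged sketch, is the four-fifths (80%) impact-ratio rule applied to the system's decision log; a flagged group signals the need for deeper statistical review, not a legal conclusion of discrimination.

```python
from collections import Counter

def selection_rates(outcomes: list[tuple[str, bool]]) -> dict[str, float]:
    """outcomes: (demographic group, favorable decision?) pairs drawn from the
    system's decision log for the review period."""
    totals, favorable = Counter(), Counter()
    for group, selected in outcomes:
        totals[group] += 1
        favorable[group] += int(selected)
    return {g: favorable[g] / totals[g] for g in totals}

def flag_disparate_impact(outcomes, threshold: float = 0.8) -> dict[str, float]:
    """Four-fifths-rule screen: flag any group whose selection rate falls below
    `threshold` times the most-favored group's rate."""
    rates = selection_rates(outcomes)
    if not rates:
        return {}
    best = max(rates.values())
    if best == 0:
        return {}
    return {g: r / best for g, r in rates.items() if r / best < threshold}

# Example: group B is selected at half of group A's rate, so B is flagged.
log = [("A", True), ("A", True), ("A", False), ("B", True), ("B", False), ("B", False)]
print(flag_disparate_impact(log))  # {'B': 0.5}
```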
Pre-filed 2025-07-01
H-02.3
21 V.S.A. § 495q(g)
Plain Language
Before using any automated decision system, employers must create a written impact assessment covering eight mandatory elements: (1) system description and purpose; (2) data used; (3) outputs and the decision types they feed; (4) an assessment of the system's necessity; (5) a validity and reliability evaluation per contemporary social science standards; (6) a detailed risk assessment covering discrimination across protected characteristics, chilling of legal rights, health/safety/dignity harms, privacy risks, and economic impacts; (7) mitigation measures; and (8) the methodology used. The assessment must be provided to employees upon request and updated whenever a significant system change occurs, and a single assessment may cover comparable systems. This is a pre-deployment requirement: the employer cannot begin using the ADS until the impact assessment is complete.
(g) Impact assessment of automated decision systems. (1) Prior to utilizing an automated decision system, an employer shall create a written impact assessment of the system that includes, at a minimum: (A) a detailed description of the automated decision system and its purpose; (B) a description of the data utilized by the system; (C) a description of the outputs produced by the system and the types of employment-related decisions in which those outputs may be utilized; (D) an assessment of the necessity for the system, including reasons for utilizing the system to supplement nonautomated means of decision making; (E) a detailed assessment of the system's validity and reliability in accordance with contemporary social science standards and a description of any metrics used to evaluate the performance and known limitations of the automated decision system; (F) a detailed assessment of the potential risks of utilizing the system, including the risk of: (i) discrimination against employees on the basis of race, color, religion, national origin, sex, sexual orientation, gender identity, ancestry, place of birth, age, crime victim status, or physical or mental condition; (ii) violating employees' legal rights or chilling employees' exercise of legal rights; (iii) directly or indirectly harming employees' physical health, mental health, safety, sense of well-being, dignity, or autonomy; (iv) harm to employee privacy, including through potential security breaches or inadvertent disclosure of information; and (v) negative economic and material impacts to employees, including potential effects on compensation, benefits, work conditions, evaluations, advancement, and work opportunities; (G) a detailed summary of measures taken by the employer to address or mitigate the risks identified pursuant to subdivision (E) of this subdivision (1); and (H) a description of any methodology used in preparing the assessment. (2) An employer shall provide a copy of the assessment prepared pursuant to subdivision (1) of this subsection to an employee upon request. (3) An employer shall update the assessment required pursuant to this subsection any time a significant change or update is made to the automated decision system. (4) A single impact assessment may address a comparable set of automated decision systems deployed by an employer.
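A minimal sketch of a pre-use gate enforcing subdivision (g)(1): the employer's tooling refuses to enable the ADS until a written assessment containing all eight elements exists. The element keys are our own shorthand for (A) through (H), not statutory terms.

```python
# Keys are illustrative shorthand for elements (A)-(H) of subdivision (g)(1).
REQUIRED_ELEMENTS = {
    "system_description",     # (A) detailed description and purpose
    "data_description",       # (B) data utilized by the system
    "outputs_and_decisions",  # (C) outputs and the decision types they feed
    "necessity_assessment",   # (D) why automated means are needed
    "validity_reliability",   # (E) validity/reliability per social-science standards
    "risk_assessment",        # (F) discrimination, rights, health/safety, privacy, economic
    "mitigation_summary",     # (G) measures addressing the identified risks
    "methodology",            # (H) methodology used in preparing the assessment
}

def may_begin_use(assessment: dict[str, str]) -> bool:
    """Gate for subdivision (g)(1): no use of the ADS until every element is
    present and non-empty in the written assessment."""
    missing = REQUIRED_ELEMENTS - {k for k, v in assessment.items() if v.strip()}
    if missing:
        raise ValueError(f"impact assessment incomplete; missing {sorted(missing)}")
    return True
```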
Pending 2025-07-01
9 V.S.A. § 4193b
Plain Language
Developers and deployers are categorically prohibited from using, selling, or sharing an automated decision system (or a product featuring one) for consequential decisions if it produces algorithmic discrimination. The prohibition covers differential treatment or disparate impact across a broad list of protected characteristics. Safe harbors exist for internal testing to identify and mitigate discrimination, expanding applicant pools for diversity purposes, and private clubs not open to the public. The prohibition operates on a strict-liability model: liability attaches whenever the system 'produces' discrimination, regardless of anyone's intent.
It shall be unlawful discrimination for a developer or deployer to use, sell, or share an automated decision system for use in a consequential decision or a product featuring an automated decision system for use in a consequential decision that produces algorithmic discrimination.
Pending 2025-07-01
H-02.6
9 V.S.A. § 4193c(f)
Plain Language
Developers are prohibited from using, selling, or sharing an automated decision system for consequential decisions unless it has passed an independent audit under § 4193e. If the audit finds algorithmic discrimination, the developer must halt all distribution until a post-adjustment audit confirms the discrimination has been rectified. This creates a deployment gate — no system may enter the market for consequential decisions without first passing an independent audit, and a finding of discrimination triggers an automatic halt-and-fix obligation.
(f) A developer shall not use, sell, or share an automated decision system for use in a consequential decision or a product featuring an automated decision system for use in a consequential decision that has not passed an independent audit, in accordance with section 4193e of this title. If an independent audit finds that an automated decision system for use in a consequential decision does produce algorithmic discrimination, the developer shall not use, sell, or share the system until the algorithmic discrimination has been proven to be rectified by a post-adjustment audit.
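The halt-and-fix logic in subsection (f) is effectively a two-state gate on distribution. A sketch, with state and function names of our own choosing:

```python
from enum import Enum, auto

class AuditState(Enum):
    UNAUDITED = auto()             # no passed independent audit yet
    PASSED = auto()                # audit passed; use/sale/sharing allowed
    DISCRIMINATION_FOUND = auto()  # halt until a post-adjustment audit passes

def may_distribute(state: AuditState) -> bool:
    """Subsection (f): no use, sale, or sharing without a passed independent
    audit, and an automatic halt while a discrimination finding stands."""
    return state is AuditState.PASSED

def record_audit_result(found_discrimination: bool) -> AuditState:
    """Any audit (initial or post-adjustment) either opens or closes the gate."""
    return AuditState.DISCRIMINATION_FOUND if found_discrimination else AuditState.PASSED
```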
Pending 2025-07-01
H-02.6, H-02.7
9 V.S.A. § 4193e(a)-(g)
Plain Language
Developers and deployers are jointly responsible for ensuring that an independent audit is conducted before deployment, six months after deployment, and at least every 18 months thereafter. The audit must cover data management policies, system validity and reliability by use case, comparative demographic performance analysis for algorithmic discrimination, compliance with existing laws, and evaluation of the risk management program. All audits must be delivered to the Attorney General regardless of findings. The auditor must be truly independent — no prior service relationship within 12 months, no involvement in the system, no employment or financial interest in the developer or deployer. The audit must be performed entirely without AI assistance. Auditor fees cannot be contingent on results, and no incentives may be offered for positive findings. Developer and deployer must contractually allocate audit responsibilities; absent agreement, they are jointly and severally liable.
(a) Prior to deployment of an automated decision system for use in a consequential decision, six months after deployment, and at least every 18 months thereafter for each calendar year an automated decision system is in use in consequential decisions after the first post-deployment audit, the developer and deployer shall be jointly responsible for ensuring that an independent audit is conducted in compliance with the provisions of this section to ensure that the product does not produce algorithmic discrimination and complies with the provisions of this subchapter. The developer and deployer shall enter into a contract specifying which party is responsible for the costs, oversight, and results of the audit. Absent an agreement of responsibility through contract, the developer and deployer shall be jointly and severally liable for any violations of this section. Regardless of final findings, the deployer or developer shall deliver all audits conducted under this section to the Attorney General. (b) A deployer or developer may contract with more than one auditor to fulfill the requirements of this section. (c) The audit shall include the following: (1) an analysis of data management policies, including whether personal or sensitive data relating to a consumer is subject to data security protection standards that comply with the requirements of applicable State law; (2) an analysis of the system validity and reliability according to each specified use case listed in the entity's reporting document filed by the developer or deployer pursuant to section 4193f of this title; (3) a comparative analysis of the system's performance when used on consumers of different demographic groups and a determination of whether the system produces algorithmic discrimination in violation of this subchapter by each intended and foreseeable identified use as identified by the deployer and developer pursuant to section 4193f of this title; (4) an analysis of how the technology complies with existing relevant federal, State, and local labor, civil rights, consumer protection, privacy, and data privacy laws; and (5) an evaluation of the developer's or deployer's documented risk management policy and program as set forth in section 4193g of this title for conformity with subsection 4193g(a) of this title. (e) The independent auditor shall have complete and unredacted copies of all reports previously filed by the deployer or developer pursuant to section 4193f of this title. (f) An audit conducted under this section shall be completed in its entirety without the assistance of an automated decision system. (g)(1) An auditor shall be an independent entity, including an individual, nonprofit, firm, corporation, partnership, cooperative, or association. 
(2) For the purposes of this subchapter, no auditor may be commissioned by a developer or deployer of an automated decision system used in consequential decisions if the auditor: (A) has already been commissioned to provide any auditing or nonauditing service, including financial auditing, cybersecurity auditing, or consulting services of any type, to the commissioning company in the past 12 months; (B) is or was involved in using, developing, integrating, offering, licensing, or deploying the automated decision system; (C) has or had an employment relationship with a developer or deployer that uses, offers, or licenses the automated decision system; or (D) has or had a direct financial interest or a material indirect financial interest in a developer or deployer that uses, offers, or licenses the automated decision system. (3) Fees paid to auditors may not be contingent on the result of the audit and the commissioning company shall not provide any incentives or bonuses for a positive audit result.
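A hedged sketch of the (g)(2) and (g)(3) disqualifiers as a pre-engagement screen. The profile fields are illustrative stand-ins for facts a commissioning party would collect during auditor selection; the 12-month lookback is read directly from item (A).

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class AuditorProfile:
    """Facts needed for the (g)(2)-(g)(3) disqualifiers; names are illustrative."""
    last_service_for_commissioner: date | None  # (A) any audit or non-audit service
    involved_in_system: bool                    # (B) developed, integrated, deployed, etc.
    employment_relationship: bool               # (C) current or former employment
    financial_interest: bool                    # (D) direct or material indirect interest
    fee_contingent_on_result: bool              # (3) contingent fees are barred

def is_eligible(auditor: AuditorProfile, commissioned_on: date) -> bool:
    """True only if none of the statutory disqualifiers applies."""
    recent_service = (
        auditor.last_service_for_commissioner is not None
        and commissioned_on - auditor.last_service_for_commissioner <= timedelta(days=365)
    )
    return not any((recent_service,
                    auditor.involved_in_system,
                    auditor.employment_relationship,
                    auditor.financial_interest,
                    auditor.fee_contingent_on_result))
```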
Pre-filed 2026-07-01
18 V.S.A. § 9423(a)(3)-(4)
Plain Language
Health plans must ensure that their AI utilization review tools are fairly applied in compliance with HHS regulations and guidance, and are configured and applied consistently across all health plans and insureds. The practical effect is a non-discrimination and consistency obligation: patients with similar clinical presentations must receive the same determination regardless of which plan they are on or other non-clinical factors. Health plans should be prepared to demonstrate through testing or configuration documentation that the tool does not produce disparate results for similarly situated patients.
(3) The artificial intelligence, algorithm, or other software tool is fairly applied, including in accordance with any applicable regulations and guidance issued by the U.S. Department of Health and Human Services. (4) The artificial intelligence, algorithm, or other software tool is configured and applied in a standard, consistent manner for all health plans and insureds so that the resulting decisions are the same for all patients with similar clinical presentation and considerations.
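Item (4) is testable as an invariance property: holding the clinical presentation fixed, the determination must not vary with the plan. A sketch, where `tool(case, plan_id)` is a hypothetical stand-in for however the utilization-review tool is actually invoked:

```python
def check_plan_invariance(tool, clinical_cases, plan_ids):
    """Consistency probe for item (4): for a fixed clinical presentation, the
    determination must be identical across all health plans. Returns any
    (case, decisions) pairs where the decisions diverge."""
    violations = []
    for case in clinical_cases:
        decisions = {plan: tool(case, plan) for plan in plan_ids}
        if len(set(decisions.values())) > 1:
            violations.append((case, decisions))
    return violations
```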
Passed 2026-07-01
H-02.1
18 V.S.A. § 9771(a)(5)-(6)
Plain Language
Health plans must ensure their AI utilization review tools do not discriminate directly or indirectly against covered individuals in violation of state or federal law, and that the tools are applied fairly and equitably in accordance with HHS regulations and guidance. This creates both a non-discrimination compliance obligation and a fairness standard — the tool must not produce disparate outcomes, and it must be applied consistently across the covered population.
(5) The use of the artificial intelligence, algorithm, or other software tool does not discriminate, directly or indirectly, against covered individuals in violation of State or federal law. (6) The artificial intelligence, algorithm, or other software tool is fairly and equitably applied, including in accordance with any applicable regulations and guidance issued by the U.S. Department of Health and Human Services.
Pending 2027-01-01
H-02.1, H-02.3
Sec. 2(1)
Plain Language
Developers of high-risk AI systems must exercise reasonable care to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination arising from the system's intended and contracted uses. Compliance with the full set of developer obligations in Section 2 creates a rebuttable presumption that the developer used reasonable care. Self-testing to identify or prevent discrimination is excluded from the definition of algorithmic discrimination and does not trigger liability.
(1) A developer of a high-risk artificial intelligence system shall use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination arising from the intended and contracted uses of the high-risk artificial intelligence system. In a civil action brought against a developer pursuant to this chapter, there is a rebuttable presumption that a developer of a high-risk artificial intelligence system used reasonable care as required by this subsection (1) if the developer complied with the requirements of this section.
Pending 2027-01-01
H-02.1, H-02.3
Sec. 3(1)
Plain Language
Deployers of high-risk AI systems must exercise reasonable care to protect consumers from known or reasonably foreseeable algorithmic discrimination risks. Full compliance with the deployer obligations in Section 3 creates a rebuttable presumption that the deployer met this standard. This is the deployer-side counterpart to the developer's reasonable care obligation in Section 2(1).
(1) A deployer of a high-risk artificial intelligence system shall use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination. In a civil action brought against a deployer pursuant to this chapter, there is a rebuttable presumption that a deployer of a high-risk artificial intelligence system used reasonable care as required by this subsection (1) if the deployer complied with the provisions of this section.
Pending 2027-01-01
H-02.3, H-02.10
Sec. 3(3)(a)-(c)
Plain Language
Deployers may not use a high-risk AI system for consequential decisions without first completing a written impact assessment. The assessment must cover nine enumerated elements: purpose and use cases, known discrimination risks and mitigation steps, comparison of actual use to developer-intended use (for post-deployment assessments), input/output data categories, customization data, performance metrics and limitations, transparency measures, post-deployment monitoring and user safeguards, and validity/reliability analysis. A single assessment may cover comparable systems, and an assessment completed for another law can satisfy this requirement if reasonably similar in scope. All impact assessments and supporting records, including raw performance evaluation data, must be retained for at least three years after final deployment. The assessment must be updated before any significant system update is used for consequential decisions.
(3)(a) Except as provided in (c) of this subsection (3), a deployer may not deploy or use a high-risk artificial intelligence system to make a consequential decision unless the deployer has completed an impact assessment for such high-risk artificial intelligence system. The deployer shall complete an impact assessment for a high-risk artificial intelligence system before the deployer initially deploys such high-risk artificial intelligence system and before a significant update to such high-risk artificial intelligence system is used to make a consequential decision. (b) An impact assessment completed pursuant to (a) of this subsection (3) must include, at a minimum: (i) A statement by the deployer disclosing the purpose, intended use cases, and deployment context of, and benefits afforded by, the high-risk artificial intelligence system; (ii) A statement by the deployer disclosing whether the deployment or use of the high-risk artificial intelligence system poses any known or reasonably foreseeable risk of algorithmic discrimination and, if so, the nature of such algorithmic discrimination and the steps that have been taken, to the extent feasible, to mitigate such risk; (iii) For each postdeployment impact assessment completed pursuant to this section, whether the intended use cases of the high-risk artificial intelligence system as updated were consistent with, or varied from, the developer's intended uses of such high-risk artificial intelligence system; (iv) A description of the categories of data the high-risk artificial intelligence system processes as inputs and the outputs such high-risk artificial intelligence system produces; (v) If the deployer used data to customize the high-risk artificial intelligence system, an overview of the categories of data the deployer used to customize such high-risk artificial intelligence system; (vi) A list of any metrics used to evaluate the performance and known limitations of the high-risk artificial intelligence system; (vii) A description of any transparency measures taken concerning the high-risk artificial intelligence system, including any measures taken to disclose to a consumer that such high-risk artificial intelligence system is in use when such high-risk artificial intelligence system is in use; (viii) A description of any postdeployment monitoring performed and user safeguards provided concerning such high-risk artificial intelligence system, including any oversight process established by the deployer to address issues arising from deployment or use of such high-risk artificial intelligence system as such issues arise; and (ix) An analysis of such high-risk artificial intelligence system's validity and reliability in accordance with standard industry practices. (c)(i) A single impact assessment may address a comparable set of high-risk artificial intelligence systems deployed or used by a deployer. (ii) If a deployer completes an impact assessment for the purpose of complying with another applicable law or regulation, such impact assessment satisfies the relevant requirements established in this section if such impact assessment is reasonably similar in scope and effect to the impact assessment that would otherwise be completed pursuant to this section. (iii) A deployer that completes an impact assessment pursuant to this section shall maintain such impact assessment and all records concerning the impact assessment for three years. 
Throughout the period of time that a high-risk artificial intelligence system is deployed and for a period of at least three years following the final deployment of the high-risk artificial intelligence system, the deployer shall retain all records concerning each impact assessment conducted on the high-risk artificial intelligence system, including all raw data used to evaluate the performance and known limitations of such system.
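A small sketch of the retention floor: records may not be purged earlier than three years after final deployment. The function name and the leap-day handling are ours; the three-year period is from the text above.

```python
from datetime import date

def earliest_purge_date(final_deployment: date, years: int = 3) -> date:
    """Assessments, supporting records (including raw performance data), and
    prior assessments must be retained for at least `years` after final
    deployment; purging anything before this date violates the floor."""
    try:
        return final_deployment.replace(year=final_deployment.year + years)
    except ValueError:  # final deployment on Feb 29, target year not a leap year
        return final_deployment.replace(year=final_deployment.year + years, day=28)
```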
Pending 2026-07-01
H-02.1, H-02.8
Sec. 3(1)(a)-(b), (2)(a)-(b)
Plain Language
Beginning July 1, 2027, deployers must use industry-standard means to protect consumers from known or reasonably foreseeable algorithmic discrimination. In addition, deployers (or a contracted third party) must conduct at least annual reviews of each deployed high-risk AI system to verify it is not causing algorithmic discrimination. If a deployer discovers that a system has caused algorithmic discrimination, it must notify the attorney general within 90 days. A deployer that complies with the entire chapter benefits from a rebuttable presumption of reasonable care in enforcement actions. Algorithmic discrimination is defined by reference to Washington's existing anti-discrimination law (chapter 49.60 RCW) and federal law, and excludes testing done to identify or mitigate discrimination. No trade secret disclosure is required.
(1)(a) Beginning July 1, 2027, each deployer of a high-risk artificial intelligence system must use industry-standard means to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination. (b) In any enforcement action brought on or after July 1, 2027, by the attorney general pursuant to section 9 of this act, there is a rebuttable presumption that a deployer of a high-risk artificial intelligence system used reasonable care as required under this section if the deployer complied with this chapter. (2)(a) By July 1, 2027, and at least annually thereafter, a deployer or third party contracted by the deployer shall review the deployment of each high-risk artificial intelligence system deployed by the deployer to ensure that the high-risk artificial intelligence system is not causing algorithmic discrimination. (b) If a deployer subsequently discovers that the high-risk artificial intelligence system has caused algorithmic discrimination, the deployer, without unreasonable delay, but no later than 90 days after the date of the discovery, shall send to the attorney general, in a form and manner prescribed by the attorney general, a notice disclosing the discovery. (3) Nothing in this section may be construed to require a deployer to disclose any trade secret, or other confidential or proprietary information.
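The two hard dates in subsection (2) reduce to simple date arithmetic. A sketch, assuming a 365-day reading of "at least annually" (the bill does not define the interval more precisely):

```python
from datetime import date, timedelta

FIRST_REVIEW_DUE = date(2027, 7, 1)  # "By July 1, 2027, and at least annually thereafter"

def next_review_due(last_review: date | None) -> date:
    """Annual algorithmic-discrimination review cadence under subsection (2)(a)."""
    if last_review is None:
        return FIRST_REVIEW_DUE
    return max(FIRST_REVIEW_DUE, last_review + timedelta(days=365))

def ag_notice_deadline(discovery: date) -> date:
    """Subsection (2)(b): notify the attorney general without unreasonable
    delay, and no later than 90 days after discovering the discrimination."""
    return discovery + timedelta(days=90)
```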
Pending 2026-07-01
H-02.3, H-02.10
Sec. 5(1)-(7)
Plain Language
Deployers must complete an impact assessment for each high-risk AI system deployed on or after July 1, 2027, and again within 90 days of any intentional and substantial modification. The assessment must cover: system purpose and intended uses, algorithmic discrimination risk analysis and mitigation steps, data inputs, outputs, performance metrics and limitations, transparency measures, and post-deployment monitoring safeguards. Post-modification assessments must additionally disclose whether the system was used consistently with the developer's intended uses. A single assessment may cover comparable systems. An impact assessment completed under another law satisfies this requirement if reasonably similar in scope. All impact assessments and supporting records must be retained for at least three years after final deployment. Small deployers (fewer than 50 FTEs, not using own training data) may be exempt under Section 6 conditions.
(1) Except as provided in subsection (6) of this section, a deployer that deploys a high-risk artificial intelligence system on or after July 1, 2027, or a third party contracted by the deployer for such purposes, shall complete an impact assessment for: (a) The high-risk artificial intelligence system; and (b) A deployed high-risk artificial intelligence system no later than 90 days after any intentional and substantial modification to such high-risk artificial intelligence system is made available. (2) Each impact assessment completed pursuant to this section must include, at a minimum, and to the extent reasonably known by, or available to, the deployer: (a) A statement by the deployer disclosing the purpose, intended use cases and deployment context of, and benefits afforded by, the high-risk artificial intelligence system; (b) An analysis of whether the deployment of the high-risk artificial intelligence system poses any known or reasonably foreseeable risks of algorithmic discrimination and, if so, the nature of such algorithmic discrimination and the steps that have been taken to mitigate such risks; (c) A description of the following: (i) The categories of data the high-risk artificial intelligence system processes as inputs; (ii) The outputs the high-risk artificial intelligence system produces; (iii) Any metrics used to evaluate the performance and known limitations of the high-risk artificial intelligence system; (iv) A description of any transparency measures taken concerning the high-risk artificial intelligence system, such as any measures taken to disclose to a consumer that such high-risk artificial intelligence system is in use when such high-risk artificial intelligence system is in use; and (v) A description of the postdeployment monitoring and user safeguards provided concerning such high-risk artificial intelligence system, such as the oversight process established by the deployer to address issues arising from deployment of such high-risk artificial intelligence system. (3) In addition to the information required under subsection (2)(c) of this section, each impact assessment completed following an intentional and substantial modification made to a high-risk artificial intelligence system on or after July 1, 2027, must include a statement disclosing the extent to which the high-risk artificial intelligence system was used in a manner that was consistent with, or varied from, the developer's intended uses of such high-risk artificial intelligence system. (4) A single impact assessment may address a comparable set of high-risk artificial intelligence systems deployed by a deployer. (5) If a deployer, or a third party contracted by the deployer, completes an impact assessment for the purpose of complying with another applicable law or regulation, such impact assessment satisfies the requirements established in this section if such impact assessment is reasonably similar in scope and effect to the impact assessment that would otherwise be completed pursuant to this subsection. (6) A deployer shall maintain the most recently completed impact assessment for a high-risk artificial intelligence system as required under this section, relevant records supporting the impact assessment, and prior impact assessments, if any, for a period of at least three years following the final deployment of the high-risk artificial intelligence system. 
(7) Nothing in this section may be construed to require a deployer to disclose any trade secret, or other confidential or proprietary information.
Pending 2026-07-01
H-02.3, H-02.10
Sec. 6(1)-(2)
Plain Language
Small deployers (fewer than 50 FTEs) that do not use their own data to train the high-risk AI system are exempt from the impact assessment (Section 5(1)-(3)) and annual algorithmic discrimination review (Section 3(2)) requirements, provided three conditions are continuously met: (1) the system is used only for disclosed intended uses; (2) the system's continued learning relies on non-deployer data; and (3) the deployer makes available to consumers a developer-provided impact assessment that is substantially similar to what Section 5 requires. If any condition lapses, the exemption is lost. The exemption does not relieve the deployer of the general duty to use industry-standard means to protect against algorithmic discrimination (Section 3(1)(a)) or the risk management program requirement (Section 4).
(1) The requirements in section 5 (1) through (3) of this act and section 3(2) of this act do not apply to a deployer if, at the time the deployer deploys a high-risk artificial intelligence system and at all times while the high-risk artificial intelligence system is deployed: (a) The deployer: (i) Employs fewer than 50 full-time equivalent employees; and (ii) Does not use the deployer's own data to train the high-risk artificial intelligence system; (b) The high-risk artificial intelligence system: (i) Is used for the intended uses that are disclosed by the deployer; and (ii) Continues learning based on data derived from sources other than the deployer's own data; and (c) The deployer makes available to consumers any impact assessment that: (i) The developer of the high-risk artificial intelligence system has completed and provided to the deployers; and (ii) Includes information that is substantially similar to the information in the impact assessment required under section 5 of this act. (2) Nothing in this section may be construed to require a deployer to disclose any trade secret, or other confidential or proprietary information.
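The Section 6 eligibility test is a single conjunction over five facts. A sketch with illustrative field names; note that a False on any condition at any time while the system is deployed ends the exemption, and the general duty of care and risk management program still apply.

```python
from dataclasses import dataclass

@dataclass
class DeployerFacts:
    """Inputs to the Section 6 exemption test; names are illustrative."""
    full_time_equivalents: int
    trains_on_own_data: bool
    used_only_for_disclosed_uses: bool
    learns_from_non_deployer_data: bool
    developer_assessment_available_to_consumers: bool  # substantially similar to Sec. 5

def exempt_from_assessment_and_review(d: DeployerFacts) -> bool:
    """True only while every condition holds; losing any one condition
    revives the Section 5(1)-(3) and Section 3(2) obligations."""
    return (d.full_time_equivalents < 50
            and not d.trains_on_own_data
            and d.used_only_for_disclosed_uses
            and d.learns_from_non_deployer_data
            and d.developer_assessment_available_to_consumers)
```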
Pending 2027-01-01
H-02.1, H-02.2, H-02.3
Sec. 2(1)-(2)
Plain Language
Developers must exercise reasonable care to protect consumers from known or reasonably foreseeable algorithmic discrimination risks. Before providing a high-risk AI system to any deployer or other developer, the developer must deliver documentation covering: intended uses, known limitations and discrimination risks, performance and bias evaluation summaries, mitigation measures taken, and guidance on proper use and human monitoring. Compliance with all requirements of Section 2 creates a rebuttable presumption of reasonable care in any civil action. Developers that also serve as deployers are exempt from generating this documentation unless the system is provided to an unaffiliated deployer (Sec. 2(4)). Conformity with NIST AI RMF, ISO/IEC 42001, or an equivalent recognized framework creates an additional presumption of conformity (Sec. 2(5)).
(1) A developer of a high-risk artificial intelligence system shall use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination arising from the intended and contracted uses of the high-risk artificial intelligence system. In a civil action brought against a developer pursuant to this chapter, there is a rebuttable presumption that a developer of a high-risk artificial intelligence system used reasonable care as required by this subsection (1) if the developer complied with the requirements of this section. (2) A developer of a high-risk artificial intelligence system may not offer, sell, lease, give, or otherwise provide to a deployer or other developer a high-risk artificial intelligence system unless the developer makes available to the deployer or other developer: (a) A statement disclosing the intended uses of such high-risk artificial intelligence system; (b) Documentation disclosing the following: (i) The known or reasonably known limitations of such high-risk artificial intelligence system, including any and all known or reasonably foreseeable risks of algorithmic discrimination arising from the intended uses of such high-risk artificial intelligence system; (ii) The purpose of such high-risk artificial intelligence system and its intended outputs, benefits, and uses; (iii) A summary describing how such high-risk artificial intelligence system was evaluated for performance and for mitigation of algorithmic discrimination before it was licensed, sold, leased, given, or otherwise made available to a deployer or other developer; (iv) A description of the measures the developer has taken to mitigate known or reasonably foreseeable risks of algorithmic discrimination that may arise from the reasonably foreseeable deployment or use of such high-risk artificial intelligence system; and (v) A description of how the high-risk artificial intelligence system should be used, not be used, and be monitored by an individual when such system is used to make, or is a substantial factor in making, a consequential decision; and (c) Any additional documentation that is reasonably necessary to assist the deployer or other developer in understanding the outputs and monitoring performance of the high-risk artificial intelligence system for risks of algorithmic discrimination.
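A minimal completeness check for the Section 2(2) documentation package, sketched with our own keys mapped to items (a) through (c); a developer-side release process might run something like this before providing the system to a deployer or other developer.

```python
# Keys are illustrative shorthand for the Sec. 2(2)(a)-(c) package items.
PACKAGE_ITEMS = {
    "intended_uses_statement",       # (a)
    "known_limitations_and_risks",   # (b)(i)
    "purpose_outputs_benefits",      # (b)(ii)
    "evaluation_summary",            # (b)(iii) performance and discrimination mitigation
    "mitigation_measures",           # (b)(iv)
    "use_and_monitoring_guidance",   # (b)(v)
    "supplemental_monitoring_docs",  # (c) as reasonably necessary
}

def package_ready_for_transfer(docs: dict[str, str]) -> bool:
    """A developer may not offer, sell, lease, give, or otherwise provide the
    system until every item is available to the recipient."""
    provided = {k for k, v in docs.items() if v and v.strip()}
    return PACKAGE_ITEMS <= provided
```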
Pending 2027-01-01
H-02.3
Sec. 3(1)
Plain Language
Deployers must exercise reasonable care to protect consumers from known or reasonably foreseeable algorithmic discrimination risks when using high-risk AI systems to make consequential decisions. Full compliance with all deployer obligations in Section 3 creates a rebuttable presumption of reasonable care in any civil action. This is the deployer-side counterpart to the developer reasonable care obligation in Section 2(1).
(1) A deployer of a high-risk artificial intelligence system shall use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination. In a civil action brought against a deployer pursuant to this chapter, there is a rebuttable presumption that a deployer of a high-risk artificial intelligence system used reasonable care as required by this subsection (1) if the deployer complied with the provisions of this section.
Pending 2027-01-01
H-02.3, H-02.10
Sec. 3(3)
Plain Language
Deployers must complete a formal impact assessment before initially deploying a high-risk AI system and before any significant update is used for consequential decisions. The assessment must cover at minimum: purpose and use cases, known discrimination risks and mitigation steps, data input/output categories, customization data, performance metrics and limitations, transparency measures, post-deployment monitoring, and validity/reliability analysis. A single impact assessment may cover comparable systems, and an assessment completed for another law satisfies this requirement if reasonably similar in scope. All impact assessments and supporting records — including raw performance data — must be retained for at least three years following final deployment. This is a deployment prerequisite: the system may not be used for consequential decisions without a completed assessment.
(3)(a) Except as provided in (c) of this subsection (3), a deployer may not deploy or use a high-risk artificial intelligence system to make a consequential decision unless the deployer has completed an impact assessment for such high-risk artificial intelligence system. The deployer shall complete an impact assessment for a high-risk artificial intelligence system before the deployer initially deploys such high-risk artificial intelligence system and before a significant update to such high-risk artificial intelligence system is used to make a consequential decision. (b) An impact assessment completed pursuant to (a) of this subsection (3) must include, at a minimum: (i) A statement by the deployer disclosing the purpose, intended use cases, and deployment context of, and benefits afforded by, the high-risk artificial intelligence system; (ii) A statement by the deployer disclosing whether the deployment or use of the high-risk artificial intelligence system poses any known or reasonably foreseeable risk of algorithmic discrimination and, if so, the nature of such algorithmic discrimination and the steps that have been taken, to the extent feasible, to mitigate such risk; (iii) For each postdeployment impact assessment completed pursuant to this section, whether the intended use cases of the high-risk artificial intelligence system as updated were consistent with, or varied from, the developer's intended uses of such high-risk artificial intelligence system; (iv) A description of the categories of data the high-risk artificial intelligence system processes as inputs and the outputs such high-risk artificial intelligence system produces; (v) If the deployer used data to customize the high-risk artificial intelligence system, an overview of the categories of data the deployer used to customize such high-risk artificial intelligence system; (vi) A list of any metrics used to evaluate the performance and known limitations of the high-risk artificial intelligence system; (vii) A description of any transparency measures taken concerning the high-risk artificial intelligence system, including any measures taken to disclose to a consumer that such high-risk artificial intelligence system is in use when such high-risk artificial intelligence system is in use; (viii) A description of any postdeployment monitoring performed and user safeguards provided concerning such high-risk artificial intelligence system, including any oversight process established by the deployer to address issues arising from deployment or use of such high-risk artificial intelligence system as such issues arise; and (ix) An analysis of such high-risk artificial intelligence system's validity and reliability in accordance with standard industry practices. (c)(i) A single impact assessment may address a comparable set of high-risk artificial intelligence systems deployed or used by a deployer. (ii) If a deployer completes an impact assessment for the purpose of complying with another applicable law or regulation, such impact assessment satisfies the relevant requirements established in this section if such impact assessment is reasonably similar in scope and effect to the impact assessment that would otherwise be completed pursuant to this section. (iii) A deployer that completes an impact assessment pursuant to this section shall maintain such impact assessment and all records concerning the impact assessment for three years. 
Throughout the period of time that a high-risk artificial intelligence system is deployed and for a period of at least three years following the final deployment of the high-risk artificial intelligence system, the deployer shall retain all records concerning each impact assessment conducted on the high-risk artificial intelligence system, including all raw data used to evaluate the performance and known limitations of such system.
Pending 2026-07-01
H-02.1, H-02.8
Sec. 3(1)(a)-(b), (2)(a)-(b)
Plain Language
Deployers must use industry-standard measures to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination. At least annually beginning July 1, 2027, each deployer (or a contracted third party) must review every deployed high-risk AI system to verify it is not causing algorithmic discrimination. If discrimination is discovered, the deployer must notify the Attorney General within 90 days. Deployers who comply with the full chapter benefit from a rebuttable presumption of reasonable care in any AG enforcement action. Testing for discrimination mitigation and diversity expansion are explicitly excluded from the definition of algorithmic discrimination.
(1)(a) Beginning July 1, 2027, each deployer of a high-risk artificial intelligence system must use industry-standard means to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination. (b) In any enforcement action brought on or after July 1, 2027, by the attorney general pursuant to section 10 of this act, there is a rebuttable presumption that a deployer of a high-risk artificial intelligence system used reasonable care as required under this section if the deployer complied with this chapter. (2)(a) By July 1, 2027, and at least annually thereafter, a deployer or third party contracted by the deployer shall review the deployment of each high-risk artificial intelligence system deployed by the deployer to ensure that the high-risk artificial intelligence system is not causing algorithmic discrimination. (b) If a deployer subsequently discovers that the high-risk artificial intelligence system has caused algorithmic discrimination, the deployer, without unreasonable delay, but no later than 90 days after the date of the discovery, shall send to the attorney general, in a form and manner prescribed by the attorney general, a notice disclosing the discovery.
Pending 2026-07-01
H-02.3, H-02.10
Sec. 6(1)-(7)
Plain Language
Deployers must complete an impact assessment for each high-risk AI system deployed on or after July 1, 2027, and within 90 days of any intentional and substantial modification. The assessment must cover: the system's purpose and intended uses, an analysis of algorithmic discrimination risks and mitigation steps, input data categories, outputs, performance metrics and limitations, transparency measures, and post-deployment monitoring and oversight. After a substantial modification, the assessment must also disclose how the system's actual use compared to the developer's intended uses. A single assessment may cover comparable systems. Cross-jurisdictional impact assessments that are reasonably similar in scope and effect satisfy this requirement. All impact assessments, supporting records, and prior assessments must be retained for at least three years after final deployment. Small deployers meeting the conditions of Section 7 (fewer than 50 FTEs, no own-data training, system used for disclosed purposes, developer's impact assessment made available) are exempt.
(1) Except as provided in subsection (6) of this section, a deployer that deploys a high-risk artificial intelligence system on or after July 1, 2027, or a third party contracted by the deployer for such purposes, shall complete an impact assessment for: (a) The high-risk artificial intelligence system; and (b) A deployed high-risk artificial intelligence system no later than 90 days after any intentional and substantial modification to such high-risk artificial intelligence system is made available. (2) Each impact assessment completed pursuant to this section must include, at a minimum, and to the extent reasonably known by, or available to, the deployer: (a) A statement by the deployer disclosing the purpose, intended use cases and deployment context of, and benefits afforded by, the high-risk artificial intelligence system; (b) An analysis of whether the deployment of the high-risk artificial intelligence system poses any known or reasonably foreseeable risks of algorithmic discrimination and, if so, the nature of such algorithmic discrimination and the steps that have been taken to mitigate such risks; (c) A description of the following: (i) The categories of data the high-risk artificial intelligence system processes as inputs; (ii) The outputs the high-risk artificial intelligence system produces; (iii) Any metrics used to evaluate the performance and known limitations of the high-risk artificial intelligence system; (iv) A description of any transparency measures taken concerning the high-risk artificial intelligence system, such as any measures taken to disclose to a consumer that such high-risk artificial intelligence system is in use when such high-risk artificial intelligence system is in use; and (v) A description of the postdeployment monitoring and user safeguards provided concerning such high-risk artificial intelligence system, such as the oversight process established by the deployer to address issues arising from deployment of such high-risk artificial intelligence system. (3) In addition to the information required under subsection (2)(c) of this section, each impact assessment completed following an intentional and substantial modification made to a high-risk artificial intelligence system on or after July 1, 2027, must include a statement disclosing the extent to which the high-risk artificial intelligence system was used in a manner that was consistent with, or varied from, the developer's intended uses of such high-risk artificial intelligence system. (4) A single impact assessment may address a comparable set of high-risk artificial intelligence systems deployed by a deployer. (5) If a deployer, or a third party contracted by the deployer, completes an impact assessment for the purpose of complying with another applicable law or regulation, such impact assessment satisfies the requirements established in this section if such impact assessment is reasonably similar in scope and effect to the impact assessment that would otherwise be completed pursuant to this subsection. (6) A deployer shall maintain the most recently completed impact assessment for a high-risk artificial intelligence system as required under this section, relevant records supporting the impact assessment, and prior impact assessments, if any, for a period of at least three years following the final deployment of the high-risk artificial intelligence system.