H-02
Human Oversight & Fairness
Non-Discrimination & Bias Assessment
Applies to: Developer, Deployer, Government. Sectors: Employment, Financial Services, Healthcare, Government System.
Bills — Enacted: 5 unique bills
Bills — Proposed: 55
Last Updated: 2026-03-29
Core Obligation

AI systems used in high-stakes contexts must be tested and formally assessed for discriminatory impact across protected characteristics before deployment. Results must be documented and retained. Some jurisdictions require submission to regulators; others require independent third-party audits with public disclosure of results.

Sub-Obligations (9)
H-02.1 Internal bias testing: The developer or deployer must conduct testing across protected characteristics using appropriate statistical methods before deployment (a minimal sketch of this arithmetic follows the list). (2 enacted / 34 proposed)

H-02.2 Documented methodology: The testing methodology must be documented in sufficient detail for third-party review, including the protected characteristics tested, statistical measures used, datasets tested, and results. (1 enacted / 9 proposed)

H-02.3 Algorithmic impact assessment: A formal written assessment of the AI system's potential discriminatory impact must be completed before deployment, identifying risks and mitigation measures. It must be retained and made available to regulators on request. (4 enacted / 32 proposed)

H-02.4 Regulator submission of assessment: Proactive submission of the impact assessment to a regulatory authority on a defined schedule or upon request. (0 enacted / 4 proposed)

H-02.5 Public disclosure of assessment: Public disclosure of a summary or the full impact assessment. (0 enacted / 4 proposed)

H-02.6 Independent third-party audit: A qualified independent auditor with no material relationship to the developer or deployer must evaluate the system for bias and disparate impact. Currently required primarily for automated employment decision tools. (1 enacted / 17 proposed)

H-02.7 Public disclosure of audit results: Audit results, including selection rates and impact ratios across protected categories, must be published prior to or contemporaneous with deployment. (1 enacted / 13 proposed)

H-02.8 Periodic post-deployment discrimination review: Deployers must conduct periodic (at least annual) reviews of each deployed high-risk AI system to affirmatively verify the system is not causing algorithmic discrimination, separate from pre-deployment bias assessments. Reviews may be conducted internally or by a contracted third party. (2 enacted / 15 proposed)

H-02.10 Impact assessment records retention: Deployers must retain all impact assessments, associated records, and prior impact assessments for a period of time following the final deployment of each high-risk AI system, and make them available to regulators upon request. (1 enacted / 13 proposed)
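H-02.1 and H-02.7 turn on selection rates and impact ratios across protected categories. The following is a minimal sketch of that arithmetic, assuming a simple pass/fail outcome per applicant and a single protected-attribute column; the four-fifths threshold is the Uniform Guidelines' rule of thumb, not a universal legal standard.

```python
# Minimal sketch of H-02.1-style selection-rate testing. Assumes a simple
# pass/fail outcome per applicant and one protected-attribute column.
from collections import defaultdict

def selection_rates(records):
    """records: iterable of (group, selected) pairs. Returns rate per group."""
    selected = defaultdict(int)
    total = defaultdict(int)
    for group, was_selected in records:
        total[group] += 1
        selected[group] += int(was_selected)
    return {g: selected[g] / total[g] for g in total}

def four_fifths_flags(rates, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold` times the
    highest group's rate (the Uniform Guidelines' four-fifths rule of thumb)."""
    top = max(rates.values())
    return {g: round(r / top, 3) for g, r in rates.items() if r / top < threshold}

records = [("A", True), ("A", True), ("A", False),
           ("B", True), ("B", False), ("B", False)]
rates = selection_rates(records)   # {'A': 0.667, 'B': 0.333}
print(four_fifths_flags(rates))    # {'B': 0.5} -> adverse-impact flag
```

A real audit adds statistical significance testing, intersectional subgroups, and sample-size safeguards on top of this ratio check.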
Bills That Map to This Requirement (60 bills)

Each entry below gives the bill's status and effective date, the sub-obligations it maps to, the statutory section cited, a plain-language summary, and the operative statutory text.
Pending 2025-07-01
H-02.1
2 CCR § 11009(f)
Plain Language
Employers and other covered entities may not use an automated-decision system, qualification standard, employment test, or proxy that discriminates against applicants or employees on any FEHA-protected basis. In any discrimination claim or defense, evidence of anti-bias testing (or the absence of such testing) is relevant — including the quality, efficacy, recency, scope, results, and the entity's response to those results. This means that failure to conduct anti-bias testing on an automated-decision system can itself be used as evidence of discrimination, and conversely, robust testing may support a defense.
(f) It is unlawful for an employer or other covered entity to use an automated-decision system or selection criteria (including a qualification standard, employment test, or proxy) that discriminates against an applicant or employee or a class of applicants or employees on a basis protected by the Act, subject to any available defense. Relevant to any such claim or available defense is evidence, or the lack of evidence, of anti-bias testing or similar proactive efforts to avoid unlawful discrimination, including the quality, efficacy, recency, and scope of such effort, the results of such testing or other effort, and the response to the results.
Pending 2025-07-01
H-02.1, H-02.3
2 CCR § 11016(a)(2)
Plain Language
Employers may not use automated-decision systems (or any other method) in recruitment that restricts, excludes, classifies, or expresses preference for candidates on a FEHA-protected basis, or that uses advertising methods to communicate employment availability in a discriminatory manner. This extends to AI-driven job ad targeting, resume screening, and any other automated recruitment tool. The only exception is a permissible defense such as a bona fide occupational qualification.
(2) Prohibited Recruitment Practices. An employer or other covered entity shall not, unless pursuant to a permissible defense, engage in any recruitment activity, including but not limited to practices accomplished through the use of an automated-decision system, that:
(A) Restricts, excludes, or classifies individuals on a basis enumerated in the Act;
(B) Expresses a preference for individuals on a basis enumerated in the Act; or
(C) Communicates or uses advertising methods to communicate the availability of employment benefits in a manner intended to discriminate on a basis enumerated in the Act.
Pending 2025-07-01
H-02.1
2 CCR § 11016(b)(1)
Plain Language
Pre-employment inquiries, including those conducted through automated-decision systems, must not directly or indirectly identify individuals on a FEHA-protected basis unless a permissible defense applies. This means that automated screening questions, AI-driven assessments, and chatbot-based pre-employment inquiries must be designed to avoid eliciting or inferring protected-class information. Employers bear the burden of ensuring their automated systems do not function as proxy identifiers for protected characteristics.
(1) Limited Permissible Inquiries. An employer or other covered entity may make any pre-employment inquiries that do not discriminate on a basis enumerated in the Act. Inquiries, including but not limited to inquiries made through the use of an automated-decision system, that directly or indirectly identify an individual on a basis enumerated in the Act are unlawful unless made pursuant to a permissible defense.
Pending 2025-07-01
H-02.1, H-02.3
2 CCR § 11016(c)(3)(A), (c)(5)
Plain Language
Employers using online application technology or automated-decision systems that screen, rank, or prioritize candidates based on scheduling availability, skills, dexterity, reaction time, or other characteristics must ensure these systems do not discriminate against individuals with disabilities, religious creed, or medical conditions. When an ADS has an adverse impact on protected groups, it is unlawful unless job-related and consistent with business necessity. Employers may need to provide reasonable accommodations — for example, including mechanisms in online applications for applicants to request accommodations, or adjusting ADS assessments for applicants with disabilities.
(3)(A) The use of online application technology that limits, screens out, ranks, or prioritizes applicants based on their schedule may discriminate against applicants based on their religious creed, disability, or medical condition. Such a practice having an adverse impact is unlawful unless job-related and consistent with business necessity and the online application technology includes a mechanism for the applicant to request an accommodation.

(5) Automated-Decision Systems. The use of an automated-decision system that, for example, measures an applicant's skill, dexterity, reaction time, and/or other abilities or characteristics may discriminate against individuals with certain disabilities or other characteristics protected under the Act. To avoid unlawful discrimination, an employer or other covered entity may need to provide reasonable accommodation to an applicant as required by Article 8 (religious creed) or Article 9 (disability) of these regulations.
Pending 2025-07-01
H-02.1, H-02.3
2 CCR § 11017(a), (d)(1), (e)
Plain Language
Any employment selection policy, practice, or automated-decision system that has an adverse impact on applicants or employees based on FEHA-protected characteristics is unlawful unless the employer can demonstrate it is job-related and consistent with business necessity. The regulations incorporate the federal Uniform Guidelines on Employee Selection Procedures (29 C.F.R. 1607). ADS that analyze tone of voice, facial expressions, or other physical characteristics may discriminate based on race, national origin, gender, or disability. Employers must provide reasonable accommodations during testing and may need to modify ADS-administered assessments. Facially neutral ADS with adverse impact are only permissible upon a showing of job-relatedness and business necessity.
(a) Selection and Testing. Any policy or practice of an employer or other covered entity that has an adverse impact on employment opportunities of individuals on a basis enumerated in the Act is unlawful unless the policy or practice is job-related and consistent with business necessity (business necessity is defined in section 11010(b)). The Council herein adopts the Uniform Guidelines on Employee Selection Procedures promulgated by various federal agencies, including the EEOC and Department of Labor. [29 C.F.R. 1607 (1978)].

(d)(1) Automated-Decision Systems. An automated-decision system that, for example, analyzes an applicant's tone of voice, facial expressions or other physical characteristics or behavior may discriminate against individuals based on race, national origin, gender, disability, or other characteristics protected under the Act. To avoid unlawful discrimination, an employer or other covered entity may need to provide reasonable accommodation to an applicant as required by Article 8 (religious creed) or Article 9 (disability) of these regulations.

(e) Permissible Selection Devices. A testing device, automated-decision system, or other means of selection that is facially neutral, but that has an adverse impact (as defined in the Uniform Guidelines on Employee Selection Procedures (29 C.F.R. 1607 (1978))) upon persons on a basis enumerated in the Act, is permissible only upon a showing that the selection practice is job-related and consistent with business necessity (business necessity is defined in section 11010(b)).
Pending 2025-07-01
H-02.1
2 CCR § 11017.1(a)(1)
Plain Language
Automated-decision systems may not be used to inquire into an applicant's criminal history prior to making a conditional offer of employment. This extends the Fair Chance Act's prohibition on pre-offer criminal history inquiries to cover ADS-conducted background checks and automated screening. Employers must ensure that any ADS used in pre-employment screening does not access or consider criminal history information before a conditional offer has been extended.
(1) Prohibited consideration under this subsection includes, but is not limited to, inquiring about criminal history through an employment application, background check, or internet searches, or the use of an automated-decision system.
Pending 2025-07-01
H-02.1
2 CCR § 11020(b)
Plain Language
All aiding and abetting prohibitions — including assisting in unlawful discrimination, inciting or soliciting violations, coercing discriminatory conduct, concealing evidence, and advertising on a prohibited basis — apply equally when the prohibited practice is conducted through an automated-decision system. This means that vendors who develop or deploy ADS tools that facilitate discriminatory employment practices may also be liable for aiding and abetting discrimination.
(b) The prohibited practices set forth in subsection (a) include any such practice conducted in whole or in part through the use of an automated-decision system.
Pending 2025-07-01
H-02.1
2 CCR § 11028(b), (c), (m)
Plain Language
Automated-decision systems that discriminate based on accent, English proficiency, or national origin (or a proxy thereof) are unlawful. ADS used in screening that penalizes accents, non-native English proficiency, or national origin characteristics must be justified by business necessity. Anti-bias testing (or its absence) is relevant evidence. This covers voice analysis AI, language proficiency screening tools, and any ADS that uses linguistic characteristics as selection criteria.
(b) Discrimination based on an applicant's or employee's accent is unlawful unless the employer proves that the individual's accent interferes materially with the applicant's or employee's ability to perform the job in question. This prohibition also applies where such discrimination resulted, in whole or in part, from an employer's or other covered entity's use of an automated-decision system or selection criteria (including a qualification standard, employment test, or proxy).

(c) Discrimination based on an applicant's or employee's English proficiency is unlawful unless the proficiency requirement is justified by business necessity (i.e., the level of proficiency required is necessary to effectively fulfill the job duties of the position). In determining business necessity in this context, relevant factors include, but are not limited to, the type of proficiency required (e.g., spoken, written, aural, and/or reading comprehension), the degree of proficiency required, and the nature and job duties of the position. This prohibition also applies where such discrimination resulted, in whole or in part, from an employer's or other covered entity's use of an automated-decision system or selection criteria (including a qualification standard, employment test, or proxy).

(m) It is unlawful for an employer or other covered entity to use an automated-decision system or selection criteria (including a qualification standard, employment test, or proxy) that discriminates against an applicant or employee or a class of applicants or employees on the basis of national origin or a proxy of national origin, subject to any available defense. Relevant to any such claim or available defense is evidence, or the lack of evidence, of anti-bias testing or similar proactive efforts to avoid unlawful discrimination, including the quality, efficacy, recency, and scope of such effort, the results of such testing or other effort, and the response to the results.
Pending 2025-07-01
H-02.1
2 CCR § 11032(b)(4), (f)
Plain Language
Automated-decision systems and selection criteria (including qualification standards, employment tests, or proxies) that discriminate on the basis of sex are unlawful. This covers sex-based discrimination in pre-employment inquiries, applications, and employee selection. Anti-bias testing (or its absence) is relevant evidence in any such claim or defense. Employers using ADS for resume screening, interview analysis, or candidate scoring must ensure their systems do not produce discriminatory outcomes based on sex.
(4) It is unlawful for an employer or other covered entity to use an automated-decision system or selection criteria (including a qualification standard, employment test, or proxy) that discriminates against an applicant or employee or a class of applicants or employees on the basis of sex, subject to any available defense. Relevant to any such claim or available defense is evidence, or the lack of evidence, of anti-bias testing or similar proactive efforts to avoid unlawful discrimination, including the quality, efficacy, recency, and scope of such effort, the results of such testing or other effort, and the response to the results.

(f) It is unlawful for an employer or other covered entity to use an automated-decision system or selection criteria (including a qualification standard, employment test, or proxy) that discriminates against an applicant or employee or a class of applicants or employees on the basis of sex or any basis prohibited in subsections in (a) through (e) of this section, subject to any available defense. Relevant to any such claim or available defense is evidence, or the lack of evidence, of anti-bias testing or similar proactive efforts to avoid unlawful discrimination, including the quality, efficacy, recency, and scope of such effort, the results of such testing or other effort, and the response to the results.
Pending 2025-07-01
H-02.1
2 CCR § 11038(b)
Plain Language
Automated-decision systems that discriminate against applicants or employees on the basis of pregnancy or perceived pregnancy are unlawful. Anti-bias testing evidence (or lack thereof) is relevant to any claim or defense. Employers must ensure ADS used in employment decisions do not disadvantage pregnant individuals or those perceived as pregnant.
(b) It is unlawful for an employer or other covered entity to use an automated-decision system or selection criteria (including a qualification standard, employment test, or proxy) that discriminates against an applicant or employee or a class of applicants or employees on the basis of pregnancy or perceived pregnancy, subject to any available defense. Relevant to any such claim or available defense is evidence, or the lack of evidence, of anti-bias testing or similar proactive efforts to avoid unlawful discrimination, including the quality, efficacy, recency, and scope of such effort, the results of such testing or other effort, and the response to the results.
Pending 2025-07-01
H-02.1
2 CCR § 11039(a)(1)(J)
Plain Language
Employers may not use automated-decision systems or selection criteria that discriminate on the basis of pregnancy or perceived pregnancy in any employment decision including hiring, training, promotion, discharge, or terms and conditions of employment. Anti-bias testing evidence is relevant to claims and defenses. This is the employer-specific parallel to the broader covered-entity provision in § 11038(b).
(J) use an automated-decision system or selection criteria (including a qualification standard, employment test, or proxy) that discriminates against an applicant or employee or a class of applicants or employees on the basis of pregnancy or perceived pregnancy, subject to any available defense. Relevant to any such claim or available defense is evidence, or the lack of evidence, of anti-bias testing or similar proactive efforts to avoid unlawful discrimination, including the quality, efficacy, recency, and scope of such effort, the results of such testing or other effort, and the response to the results; or
Pending 2025-07-01
H-02.1
2 CCR § 11056(a)
Plain Language
Automated-decision systems used in pre-employment inquiries must not ask applicants to disclose their marital status. This extends the existing prohibition on marital-status inquiries to ADS-mediated screening. The only exception is a permissible defense.
(a) Impermissible Inquiries. It is unlawful to ask an applicant to disclose their marital status as part of a pre-employment inquiry, including an inquiry made through the use of an automated-decision system, unless pursuant to a permissible defense.
Pending 2025-07-01
H-02.1
2 CCR § 11063(b)
Plain Language
Automated-decision systems that discriminate on the basis of religious creed are unlawful. Anti-bias testing evidence is relevant to claims and defenses. Employers using ADS for scheduling, screening, or selection must ensure systems do not disadvantage individuals based on religious observance, practice, or belief.
(b) It is unlawful for an employer or other covered entity to use an automated-decision system or selection criteria (including a qualification standard, employment test, or proxy) that discriminates against an applicant or employee or a class of applicants or employees on the basis of religion, subject to any available defense. Relevant to any such claim or available defense is evidence, or the lack of evidence, of anti-bias testing or similar proactive efforts to avoid unlawful discrimination, including the quality, efficacy, recency, and scope of such effort, the results of such testing or other effort, and the response to the results.
Pending 2025-07-01
H-02.1
2 CCR § 11070(a)(2), (b)(2)
Plain Language
Employers may not use automated-decision systems to advertise employment in ways that discourage applicants with disabilities. ADS-mediated pre-employment screening, application forms, and questionnaires must not ask questions that elicit disability information before a job offer is made. This includes questions about medical history, workers' compensation, hospitalization, medical leave, and physical or mental limitations — whether asked by a human interviewer or an automated system.
(a)(2) It is unlawful to advertise or publicize, including but not limited to through the use of an automated-decision system, an employment benefit in any way that discourages or is designed to discourage applicants with disabilities from applying to a greater extent than individuals without disabilities.

(b)(2) Prohibited Inquiries. It is unlawful to ask general questions on disability or questions likely to elicit information about a disability in an application form, automated-decision system, or pre-employment questionnaire or at any time before a job offer is made. Examples of prohibited inquiries are: [list of examples]
Pending 2025-07-01
H-02.1
2 CCR § 11071(e)
Plain Language
Any medical or psychological examination or disability-related inquiry conducted through an automated-decision system is subject to the same restrictions as those conducted by humans. This means ADS-administered tests, games, puzzles, or challenges that are likely to elicit information about a disability are treated as medical or psychological examinations and are subject to pre-offer prohibition, post-offer conditions, and confidentiality requirements under the FEHA disability discrimination framework.
(e) As used in this article, "medical or psychological examination" (a term that is defined in section 11065 of these regulations) or a disability-related inquiry includes any such examination or inquiry administered through the use of an automated-decision system. Such examination or inquiry may include a test, question, puzzle, game, or other challenge that is likely to elicit information about a disability.
Pending 2025-07-01
H-02.1
2 CCR § 11072(b)(1)-(3)
Plain Language
Employers may not use qualification standards, employment tests, proxies, or other selection criteria — including those administered through automated-decision systems — that screen out or have an adverse impact on individuals with disabilities. This covers ADS that use uncorrected vision or hearing assessments, skill tests, and any other automated selection mechanism. Such criteria are only permissible if job-related and no less discriminatory alternative serves the employer's goals equally effectively. Employers bear the burden of demonstrating both job-relatedness and the unavailability of less discriminatory alternatives.
(1) In general. It is unlawful for an employer or other covered entity to use qualification standards, employment tests, proxies, or other selection criteria — including but not limited to those administered through the use of an automated-decision system — that screen out, tend to screen out, or otherwise have an adverse impact on an applicant or employee with a disability or a class of applicants or employees with disabilities, on the basis of disability. However, such standards, tests, or other selection criteria, as used by the employer or other covered entity, are not unlawful under this subsection when job-related for the position in question, and there is no less discriminatory standard, test, or other selection criteria that serves the employer's goals as effectively as the challenged standard, test, or other selection criteria.

(2) Qualification Standards and Tests Related to Uncorrected Vision or Uncorrected Hearing. An employer or other covered entity shall not use qualification standards, employment tests, proxies, or other selection criteria — including but not limited to those administered through the use of an automated-decision system — that discriminate against an applicant or employee based on uncorrected vision or uncorrected hearing. However, such standards, tests, or other selection criteria, as used by the employer or other covered entity, are not unlawful under this subsection when job-related for the position in question, and there is no less discriminatory standard, test, or other selection criteria that serves the employer's goals as effectively as the challenged standard, test, or other selection criteria.

(3) An employer or other covered entity shall not make use of any testing criterion, including but not limited to through the use of an automated-decision system, that discriminates against applicants or employees with disabilities, unless:
(A) the test score or other selection criterion used is shown to be job-related for the position in question; and
(B) an alternative job-related test or criterion that does not discriminate against applicants or employees with disabilities is unavailable or would impose an undue hardship on the employer.
Pending 2025-07-01
H-02.1
2 CCR § 11076(a)
Plain Language
A presumption of age discrimination arises whenever a facially neutral practice — including the use of an automated-decision system — has an adverse impact on applicants or employees age 40 or older. Employers must demonstrate the practice is job-related and consistent with business necessity. Even if that showing is made, the practice may still be unlawful if a less discriminatory alternative exists. In layoff or salary reduction contexts, preferring lower-paid workers alone does not overcome the presumption. Employers using ADS that screen based on experience levels, graduation dates, or other age-correlated factors should ensure these do not produce adverse impact.
(a) Employers. Discrimination on the basis of age may be established by showing that a job applicant's or employee's age of 40 or older was considered in the denial of employment or an employment benefit. There is a presumption of discrimination whenever a facially neutral practice, including but not limited to the use of an automated-decision system, has an adverse impact on an applicant(s) or employee(s) age 40 or older, unless the practice is job-related and consistent with business necessity as defined in section 11010(b). In the context of layoffs or salary reduction efforts that have an adverse impact on an employee(s) age 40 or older, an employer's preference to retain a lower paid worker(s), alone, is insufficient to negate the presumption. The practice may still be impermissible, even where it is job-related and consistent with business necessity, where it is shown that an alternative practice could accomplish the business purpose equally well with a lesser discriminatory impact.
Pending 2025-07-01
H-02.1
2 CCR § 11079(b), (c)(1)
Plain Language
Pre-employment inquiries through automated-decision systems that directly or indirectly identify applicants by age are unlawful unless age is a bona fide occupational qualification. Online job applications may not require age entry, use drop-down menus with age-based cutoffs, or employ automated selection criteria that screen out applicants age 40 and older. This covers ADS that use graduation dates, years of experience caps, or other age-correlated fields as screening criteria.
(b) Pre-employment Inquiries. Unless age is a bona fide occupational qualification for the position at issue, pre-employment inquiries that would result in the direct or indirect identification of persons on the basis of age, including, but not limited to, inquiries made through the use of an automated-decision system, are unlawful. Examples of prohibited inquiries are requests for age, date of birth, or graduation dates, except where age is a bona fide occupational qualification. This provision applies to oral and written inquiries and interviews.

(c)(1) Subsection (c) prohibits the use of online job applications that require entry of age in order to access or complete an application, or the use of drop-down menus that contain age-based cut-off dates or utilize automated selection criteria or algorithms that have the effect of screening out applicants age 40 and older. Use of online application technology or an automated-decision system that limits or screens out older applicants is discriminatory unless age is a bona fide occupational qualification. (See section 11010(a).)
Pending 2025-07-01
H-02.1
2 CCR § 11028(g)
Plain Language
Employers may not use automated-decision systems to discriminate against applicants or employees based on their possession of an AB 60 driver's license (issued to undocumented immigrants). This extends the existing prohibition to ADS-mediated screening. Automated background check or document verification systems must not flag or penalize AB 60 licenses.
(g) It is unlawful for an employer or other covered entity to discriminate against an applicant or employee because they hold or present a driver's license issued under section 12801.9 of the Vehicle Code. This prohibition also applies where such discrimination resulted, in whole or in part, from an employer's or other covered entity's use of an automated-decision system or selection criteria (including a qualification standard, employment test, or proxy).
Pending 2025-07-01
H-02.1
2 CCR § 11028(h)
Plain Language
Automated-decision systems that enforce citizenship requirements in a way that discriminates based on national origin or ancestry are unlawful unless a permissible defense applies. Employers using ADS for eligibility screening must ensure citizenship criteria are not pretextual or have discriminatory effects on national origin or ancestry-protected groups.
(h) Citizenship requirements. Citizenship requirements that are a pretext for discrimination or have the purpose or effect of discriminating against applicants or employees on the basis of national origin or ancestry are unlawful, unless pursuant to a permissible defense. This prohibition also applies where such discrimination resulted, in whole or in part, from an employer's or other covered entity's use of an automated-decision system or selection criteria (including a qualification standard, employment test, or proxy).
Pending 2025-07-01
H-02.1
2 CCR § 11072(b)(5)
Plain Language
Employers must ensure that employment tests — including those administered through automated-decision systems — accurately measure the skills, aptitude, or criteria they purport to measure, rather than reflecting an applicant's or employee's disability. Reasonable accommodations must be made in testing conditions. For ADS-administered assessments, this means gamified tests, puzzle-based assessments, and timed evaluations must not inadvertently measure disability rather than job-relevant competencies. Accommodations include accessible test sites, Braille or digital formats, screen readers, voice recognition, additional time, interpreters, and other modifications.
(5) An employer or other covered entity shall select and administer tests concerning employment so as to ensure that, when administered to any applicant or employee, including an applicant or employee with a disability, the test results accurately reflect the applicant's or employee's job skills, aptitude, or whatever other criteria the test purports to measure, rather than reflecting the applicant's or employee's disability, except where the skills affected by the disability are the criteria that the tests purport to measure. Tests concerning employment include, but are not limited to, those administered through the use of an automated-decision system. To accomplish this end, reasonable accommodation shall be made in testing conditions.
Pending 2025-07-01
H-02.1
2 CCR § 11072(b)(5)(F)
Plain Language
When modifying an ADS-administered test is inappropriate, employers may need to use alternate tests or individualized assessments. Importantly, simply running a candidate through an automated-decision system — without additional human review or process — does not constitute an individualized assessment for purposes of disability accommodation. This means employers cannot rely solely on ADS output as a substitute for the individualized assessment required when disability accommodation is at issue.
(F) Alternate tests or individualized assessments may be necessary where test modification is inappropriate. Competent expert advice may be sought before attempting such modification since the validity of the test may be affected. The use of an automated-decision system, in the absence of additional process or actions, does not constitute an individualized assessment.
Pending 2026-01-01
H-02.3
Bus. & Prof. Code § 22756.1(a)(1)-(2), (c)(2)(A)-(E)
Plain Language
Developers must complete an impact assessment before making a high-risk automated decision system publicly available (for systems available on or after January 1, 2026) or upon making a substantial modification (for systems available before that date). The impact assessment must cover the system's purpose, intended uses, intended outputs, data inputs, foreseeable disproportionate impacts on protected classifications, safeguards against algorithmic discrimination, and monitoring guidance for deployers. Developers must make the impact assessment statements available to deployers and potential deployers.
(a) (1) For a high-risk automated decision system made publicly available for use on or after January 1, 2026, a developer shall perform an impact assessment on the high-risk automated decision system before making the high-risk automated decision system publicly available for use. (2) For a high-risk automated decision system first made publicly available for use before January 1, 2026, a developer shall perform an impact assessment if the developer makes a substantial modification to the high-risk automated decision system.

(c) (1) A developer shall make available to deployers and potential deployers the statements included in the developer's impact assessment pursuant to paragraph (2). (2) An impact assessment prepared pursuant to this section shall include all of the following:
(A) A statement of the purpose of the high-risk automated decision system and its intended benefits, intended uses, and intended deployment contexts.
(B) A description of the high-risk automated decision system's intended outputs.
(C) A summary of the types of data intended to be used as inputs to the high-risk automated decision system and any processing of those data inputs recommended to ensure the intended functioning of the high-risk automated decision system.
(D) A summary of reasonably foreseeable potential disproportionate or unjustified impacts on a protected classification from the intended use by deployers of the high-risk automated decision system.
(E) A developer's impact assessment shall also include both of the following:
(i) A description of safeguards implemented or other measures taken by the developer to mitigate and guard against risks known to the developer of algorithmic discrimination arising from the use of the high-risk automated decision system.
(ii) A description of how the high-risk automated decision system can be monitored by a deployer for risks of algorithmic discrimination known to the developer.
Pending 2026-01-01
H-02.3
Bus. & Prof. Code § 22756.1(b)(1)-(2), (c)(2)(F)-(H)
Plain Language
Deployers must perform an impact assessment within two years of deploying a high-risk automated decision system first deployed after January 1, 2026. The deployer's impact assessment must address how the deployer's use aligns with or deviates from the developer's intended uses, what safeguards the deployer has implemented against discrimination risks, and how the system is and will be monitored. State agencies that are deployers may opt out of performing their own impact assessment if they use the system only for its intended purpose, the developer complies with applicable procurement and impact assessment requirements, the state agency has no reasonable basis to believe algorithmic discrimination is likely, and the state agency maintains a governance program under § 22756.3.
(b) (1) Except as provided in paragraph (2), for a high-risk automated decision system first deployed after January 1, 2026, a deployer shall perform an impact assessment within two years of deploying the high-risk automated decision system. (2) A state agency that is a deployer may opt out of performing an impact assessment if the state agency uses the automated decision system only for its intended use as determined by the developer and all of the following requirements are met:
(A) The state agency does not make a substantial modification to the high-risk automated decision system.
(B) The developer of the high-risk automated decision system is in compliance with Section 10285.8 of the Public Contract Code and subdivision (d).
(C) The state agency does not have a reasonable basis to believe that deployment of the high-risk automated decision system as intended by the developer is likely to result in algorithmic discrimination.
(D) The state agency is in compliance with Section 22756.3.

(c) (2) An impact assessment prepared pursuant to this section shall include all of the following:
(F) A statement of the extent to which the deployer's use of the high-risk automated decision system is consistent with, or varies from, the developer's statement of the high-risk automated decision system's purpose and intended benefits, intended uses, and intended deployment contexts.
(G) A description of safeguards implemented or other measures taken to mitigate and guard against any known risks to the deployer of discrimination arising from the high-risk automated decision system.
(H) A description of how the high-risk automated decision system has been, and will be, monitored and evaluated.
Pending 2026-01-01
Bus. & Prof. Code § 22756.5(a)-(b)
Plain Language
Developers and deployers are prohibited from deploying or making available a high-risk automated decision system when their impact assessment determines the system is likely to produce algorithmic discrimination. An exception exists: deployment is permitted if the entity implements safeguards to mitigate the known discrimination risks and then performs an updated impact assessment confirming that algorithmic discrimination has been mitigated and is not reasonably likely to occur. This creates a deployment gate tied to impact assessment outcomes — systems flagged for likely discrimination cannot ship without remediation and re-assessment.
(a) Except as provided in subdivision (b), a deployer or developer shall not deploy or make available for deployment a high-risk automated decision system if the impact assessment performed pursuant to this chapter determines that the high-risk automated decision system is likely to result in algorithmic discrimination.

(b) (1) A deployer or developer may deploy or make available for deployment a high-risk automated decision system if the impact assessment performed pursuant to this chapter determines that the high-risk automated decision system will result in algorithmic discrimination if the deployer or developer implements safeguards to mitigate the known risks of algorithmic discrimination. (2) A deployer or developer acting under the exception provided by paragraph (1) shall perform an updated impact assessment to verify that the algorithmic discrimination has been mitigated and is not reasonably likely to occur.
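As an illustration of the deployment gate, a deployer's release checklist could encode the subdivision (a)/(b) logic roughly as follows; the field and function names are assumptions for this sketch, not statutory terms.

```python
# Illustrative release-gate logic for the deployment prohibition above.
from dataclasses import dataclass

@dataclass
class ImpactAssessment:
    likely_discrimination: bool            # finding under subdivision (a)
    safeguards_implemented: bool           # mitigation under (b)(1)
    updated_assessment_clears_risk: bool   # re-assessment under (b)(2)

def may_deploy(a: ImpactAssessment) -> bool:
    """Block deployment when the assessment finds likely algorithmic
    discrimination, unless safeguards were added and an updated assessment
    verifies the risk is mitigated and not reasonably likely to occur."""
    if not a.likely_discrimination:
        return True
    return a.safeguards_implemented and a.updated_assessment_clears_risk
```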
Failed 2026-01-01
H-02.1
Health & Safety Code § 1339.76(c)(1)-(3)
Plain Language
Developers of AI models or systems used in healthcare settings must, working together with the health facilities, clinics, physician's offices, or group practices that use them, test for biased impacts in the AI system's outputs. Testing must account for the specific patient population of the health facility. Until the advisory board develops its own standardized testing system, developers must use an existing testing system designated by the board. Once the board's system is available, developers may optionally use it and may obtain a statewide certification confirming their AI model or system meets the board's bias standards. The testing obligation is mandatory; the certification is voluntary.
(c) (1) Developers of AI models or AI systems, in conjunction with health facilities, clinics, physician's offices, or offices of a group practice, shall test for biased impacts in the outputs produced by the specified AI model or AI system based on the health facility's patient population.

(2) Developers shall use an existing testing system designated by the advisory board until the advisory board has developed its standardized testing system described in paragraph (2) of subdivision (b). After the advisory board has developed its testing system, developers may alternatively use the board's testing system.

(3) After the advisory board has created the certification described in paragraph (3) of subdivision (b), developers may use the advisory board's standardized testing system to certify their AI models or AI systems.
Pending 2027-01-01
C.R.S. § 10-16-112.7(3)(c)-(d)
Plain Language
Covered entities must ensure their AI utilization review systems do not discriminate against individuals in violation of any state or federal law and are applied fairly and equitably, including in compliance with HHS regulations and guidance. While this cross-references existing anti-discrimination frameworks rather than creating an independent bias testing regime, it creates an affirmative duty to ensure non-discrimination specifically in the AI utilization review context — the entity must actively verify that the AI system's application does not produce discriminatory results.
(c) THE ARTIFICIAL INTELLIGENCE SYSTEM IS NOT USED IN ANY WAY THAT DISCRIMINATES AGAINST INDIVIDUALS IN VIOLATION OF OTHER STATE OR FEDERAL LAWS; (d) THE ARTIFICIAL INTELLIGENCE SYSTEM IS FAIRLY AND EQUITABLY APPLIED, INCLUDING IN ACCORDANCE WITH APPLICABLE REGULATIONS AND GUIDANCE ISSUED BY THE FEDERAL DEPARTMENT OF HEALTH AND HUMAN SERVICES;
Enacted 2023-07-01
H-02.3
C.R.S. § 6-1-1309(1)-(6)
Plain Language
Controllers must conduct and document a data protection assessment before engaging in any processing activity that poses heightened risk to consumers — specifically: targeted advertising, profiling that risks disparate impact or substantial injury, selling personal data, or processing sensitive data. Each assessment must weigh the benefits of the processing against risks to consumers, factoring in safeguards, de-identification, consumer expectations, and the controller-consumer relationship. Assessments must be made available to the AG upon request but are confidential and exempt from open records laws; disclosure to the AG does not waive privilege. A single assessment may cover comparable processing operations. The requirement applies only to processing activities created after July 1, 2023.
(1) A controller shall not conduct processing that presents a heightened risk of harm to a consumer without conducting and documenting a data protection assessment of each of its processing activities that involve personal data acquired on or after the effective date of this section that present a heightened risk of harm to a consumer.

(2) For purposes of this section, "processing that presents a heightened risk of harm to a consumer" includes the following: (a) Processing personal data for purposes of targeted advertising or for profiling if the profiling presents a reasonably foreseeable risk of: (I) Unfair or deceptive treatment of, or unlawful disparate impact on, consumers; (II) Financial or physical injury to consumers; (III) A physical or other intrusion upon the solitude or seclusion, or the private affairs or concerns, of consumers if the intrusion would be offensive to a reasonable person; or (IV) Other substantial injury to consumers; (b) Selling personal data; and (c) Processing sensitive data.

(3) Data protection assessments must identify and weigh the benefits that may flow, directly and indirectly, from the processing to the controller, the consumer, other stakeholders, and the public against the potential risks to the rights of the consumer associated with the processing, as mitigated by safeguards that the controller can employ to reduce the risks. The controller shall factor into this assessment the use of de-identified data and the reasonable expectations of consumers, as well as the context of the processing and the relationship between the controller and the consumer whose personal data will be processed.

(4) A controller shall make the data protection assessment available to the attorney general upon request. The attorney general may evaluate the data protection assessment for compliance with the duties contained in section 6-1-1308 and with other laws, including this article 1. Data protection assessments are confidential and exempt from public inspection and copying under the "Colorado Open Records Act", part 2 of article 72 of title 24. The disclosure of a data protection assessment pursuant to a request from the attorney general under this subsection (4) does not constitute a waiver of any attorney-client privilege or work-product protection that might otherwise exist with respect to the assessment and any information contained in the assessment.

(5) A single data protection assessment may address a comparable set of processing operations that include similar activities.

(6) Data protection assessment requirements apply to processing activities created or generated after July 1, 2023, and are not retroactive.
Enacted 2023-07-01
C.R.S. § 6-1-1308(6)
Plain Language
Controllers must ensure their personal data processing does not violate existing state or federal anti-discrimination laws. This is a pass-through obligation — it does not create a new anti-discrimination standard but confirms that CPA-covered controllers remain subject to all existing discrimination prohibitions when processing personal data.
(6) Duty to avoid unlawful discrimination. A controller shall not process personal data in violation of state or federal laws that prohibit unlawful discrimination against consumers.
Enacted 2026-06-30
H-02.1, H-02.3
C.R.S. § 6-1-1702(1)
Plain Language
Developers of high-risk AI systems must exercise reasonable care to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination arising from the system's intended and contracted uses. This is a general duty of care standard — not a checklist. However, developers receive a rebuttable presumption of compliance if they satisfy the specific obligations in this section plus any AG rules adopted under § 6-1-1707. The safe harbor is significant: it shifts the burden to the AG to prove non-compliance after a developer demonstrates statutory compliance.
(1) On and after June 30, 2026, a developer of a high-risk artificial intelligence system shall use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination arising from the intended and contracted uses of the high-risk artificial intelligence system. In any enforcement action brought on or after June 30, 2026, by the attorney general pursuant to section 6-1-1706, there is a rebuttable presumption that a developer used reasonable care as required under this section if the developer complied with this section and any additional requirements or obligations as set forth in rules adopted by the attorney general pursuant to section 6-1-1707.
Enacted 2026-06-30
H-02.1, H-02.3
C.R.S. § 6-1-1703(1)
Plain Language
Deployers of high-risk AI systems must exercise reasonable care to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination. Like the parallel developer duty in § 6-1-1702(1), deployers receive a rebuttable presumption of compliance if they meet the section's specific obligations and any AG rules. This is the overarching deployer duty — the specific sub-obligations are mapped separately below.
(1) On and after June 30, 2026, a deployer of a high-risk artificial intelligence system shall use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination. In any enforcement action brought on or after June 30, 2026, by the attorney general pursuant to section 6-1-1706, there is a rebuttable presumption that a deployer of a high-risk artificial intelligence system used reasonable care as required under this section if the deployer complied with this section and any additional requirements or obligations as set forth in rules adopted by the attorney general pursuant to section 6-1-1707.
Enacted 2026-06-30
H-02.3, H-02.10
C.R.S. § 6-1-1703(3)(a)
Plain Language
Deployers (or their contracted third parties) must complete an impact assessment for each high-risk AI system at deployment and at least annually thereafter, plus within 90 days of any intentional and substantial modification. This is a continuing obligation — the annual cadence ensures the assessment stays current even absent modifications. Exceptions exist in subsections (3)(d), (3)(e), and (6) of the original statute.
(3) (a) Except as provided in subsections (3)(d), (3)(e), and (6) of this section:
(I) A deployer, or a third party contracted by the deployer, that deploys a high-risk artificial intelligence system on or after June 30, 2026, shall complete an impact assessment for the high-risk artificial intelligence system; and
(II) On and after June 30, 2026, a deployer, or a third party contracted by the deployer, shall complete an impact assessment for a deployed high-risk artificial intelligence system at least annually and within ninety days after any intentional and substantial modification to the high-risk artificial intelligence system is made available.
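As a rough illustration of this cadence, a compliance tracker might compute the next assessment due date like this (names are hypothetical; the statute, not this sketch, controls):

```python
# Cadence sketch: assessments due at least annually and within 90 days of
# an intentional and substantial modification.
from datetime import date, timedelta

def next_assessment_due(last_assessment: date,
                        last_substantial_mod: date | None = None) -> date:
    annual_due = last_assessment + timedelta(days=365)
    if last_substantial_mod and last_substantial_mod > last_assessment:
        # A modification after the last assessment starts the 90-day clock.
        return min(annual_due, last_substantial_mod + timedelta(days=90))
    return annual_due

print(next_assessment_due(date(2026, 6, 30), date(2026, 9, 1)))  # 2026-11-30
```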
Enacted 2026-06-30
H-02.3
C.R.S. § 6-1-1703(3)(c)
Plain Language
When an impact assessment is triggered by an intentional and substantial modification (as opposed to the annual routine assessment), the deployer must include an additional statement disclosing whether the system was used consistently with or differently from the developer's intended uses. This requirement surfaces deployment drift — if the deployer has been using the system outside the developer's stated intended uses, this must be documented and disclosed in the post-modification impact assessment.
(c) In addition to the information required under subsection (3)(b) of this section, an impact assessment completed pursuant to this subsection (3) following an intentional and substantial modification to a high-risk artificial intelligence system on or after June 30, 2026, must include a statement disclosing the extent to which the high-risk artificial intelligence system was used in a manner that was consistent with, or varied from, the developer's intended uses of the high-risk artificial intelligence system.
Enacted 2026-06-30
H-02.8
C.R.S. § 6-1-1703(3)(g)
Plain Language
Deployers must conduct at least annual reviews of each deployed high-risk AI system to affirmatively verify that it is not causing algorithmic discrimination. This is a periodic deployment review obligation — distinct from the pre-deployment impact assessment. The first review must be completed by June 30, 2026, with annual reviews thereafter. This review can be conducted by the deployer itself or a contracted third party.
(g) On or before June 30, 2026, and at least annually thereafter, a deployer, or a third party contracted by the deployer, must review the deployment of each high-risk artificial intelligence system deployed by the deployer to ensure that the high-risk artificial intelligence system is not causing algorithmic discrimination.
Pending 2026-10-01
H-02.1, H-02.2, H-02.6, H-02.7
Sec. 8(a)(1)-(3)
Plain Language
Deployers must engage a Labor Commissioner-approved independent auditor to conduct a bias audit before deploying any automated employment-related decision process and annually thereafter. The initial audit must be completed no later than one year before intended deployment. The audit must evaluate performance and error rates across subgroups, assess disparate impact against protected classes, examine data sources and output quality, evaluate thresholds and scoring criteria, and test for less discriminatory alternatives. The auditor must have no financial or operational interest in the deployer or developer and must be on the Commissioner's approved registry.
(a) (1) Prior to deploying an automated employment-related decision process, and annually thereafter, a deployer shall contract with an independent auditor to complete a bias audit. Such bias audit shall be done not later than one year prior to the date the deployer intends to deploy such automated employment-related decision process.

(2) Each bias audit conducted pursuant to this subsection shall:
(A) Evaluate the automated employment-related decision process performance and error rates across relevant subgroups;
(B) Assess disparate impact caused by the automated employment-related decision process against protected classes;
(C) Examine the sources of data processed by the automated employment-related decision process and quality of content, decisions, predictions or recommendations generated by the automated employment-related decision process;
(D) Evaluate the effects of any thresholds, scoring or ranking criteria utilized by the automated employment-related decision process; and
(E) Test for less discriminatory alternatives or adjustments to such automated employment-related decision process.

(3) No deployer shall contract with an independent auditor who (A) has a financial or operational interest in the deployer or developer of the automated employment-related decision process, or (B) has not been approved by the Labor Commissioner pursuant to subsection (b) of this section.
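Element (A) of the audit, performance and error rates across relevant subgroups, reduces to per-group confusion-matrix arithmetic. A minimal sketch under the assumption of a binary decision with a ground-truth label per candidate (an approved auditor's methodology would go well beyond this):

```python
# Per-subgroup error-rate sketch: false positive rate (FPR) and false
# negative rate (FNR) computed separately for each subgroup.
from collections import defaultdict

def subgroup_error_rates(rows):
    """rows: iterable of (group, y_true, y_pred) booleans per candidate.
    Returns per-group false positive and false negative rates."""
    c = defaultdict(lambda: {"fp": 0, "neg": 0, "fn": 0, "pos": 0})
    for group, y_true, y_pred in rows:
        if y_true:
            c[group]["pos"] += 1
            c[group]["fn"] += int(not y_pred)
        else:
            c[group]["neg"] += 1
            c[group]["fp"] += int(y_pred)
    return {g: {"fpr": v["fp"] / v["neg"] if v["neg"] else None,
                "fnr": v["fn"] / v["pos"] if v["pos"] else None}
            for g, v in c.items()}
```

Large gaps in FPR or FNR between subgroups are the kind of signal element (A) is meant to surface before the disparate-impact analysis in element (B).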
Pending 2026-10-01
Sec. 8(d)
Plain Language
A deployer may not deploy or continue deploying an automated employment-related decision process that has been found in its most recent bias audit to cause disparate impact, unless the deployer can demonstrate all three of: (1) business necessity, (2) implementation of corrective actions approved by the Labor Commissioner, and (3) either that no less discriminatory alternative exists or that a less discriminatory alternative has been implemented. This is a deployment-gating obligation — disparate impact findings trigger a conditional ban unless all three conditions are satisfied.
(d) No automated employment-related decision process shall be deployed or continue to be deployed by a deployer if the most recent bias audit conducted pursuant to subsection (a) of this section identified any disparate impact caused by such automated employment-related decision process, except where the deployer can demonstrate (1) a business necessity, (2) such deployer has implemented corrective actions approved by the Labor Commissioner, and (3) that either (A) no less discriminatory alternative is available, or (B) a less discriminatory alternative has been implemented by the deployer.
Pending 2026-10-01
H-02.1
Sec. 18(b)(1)(A) (amending § 46a-60(b)(1)(A))
Plain Language
This amendment to Connecticut's antidiscrimination statute (§ 46a-60) makes it a discriminatory employment practice to use an automated employment-related decision process in any manner that has the effect of causing discrimination on the basis of any protected characteristic. This is a disparate impact standard — intent is not required. Notably, the provision also creates an evidentiary consideration: in any discrimination action involving an automated process, the commission or court must consider evidence (or lack thereof) of anti-bias testing or similar proactive efforts, including the quality, efficacy, recency, and scope of such testing. This effectively incentivizes bias testing by making it relevant as evidence but does not create a safe harbor.
(A) For an employer, by the employer or the employer's agent, except in the case of a bona fide occupational qualification or need, to refuse to hire or employ or to bar or to discharge from employment any individual or to discriminate against any individual in compensation or in terms, conditions or privileges of employment because of, or to use an automated employment-related decision process in any manner that has the effect of causing the employer to refuse to hire or employ or to bar or to discharge from employment any individual or to discriminate against any individual in compensation or in terms, conditions or privileges of employment on the basis of, the individual's race, color, religious creed, age, sex, gender identity or expression, marital status, national origin, ancestry, present or past history of mental disability, intellectual disability, learning disability, physical disability, including, but not limited to, blindness, status as a veteran, status as a victim of domestic violence, status as a victim of sexual assault or status as a victim of trafficking in persons. In any action for a discriminatory practice in violation of this subparagraph involving an automated employment-related decision process, the commission or the court shall consider any evidence, or lack of evidence, of anti-bias testing or similar proactive efforts to avoid such discriminatory practice, including, but not limited to, the quality, efficacy, recency and scope of such testing or efforts, the results of such testing or efforts and the response thereto.
Enacted 2023-07-01
H-02.3, H-02.8
Section 1(c)
Plain Language
Beginning February 1, 2024, the Department of Administrative Services must perform ongoing assessments of all AI systems used by state agencies to ensure they do not cause unlawful discrimination or disparate impact across an extensive list of protected characteristics (defined in Section 2(b)(1)(B)). The assessments must follow the policies and procedures established by the Office of Policy and Management. This is a continuing obligation — not a one-time pre-deployment check — requiring periodic review of deployed systems for bias. The protected characteristics include age, genetic information, color, ethnicity, race, creed, religion, national origin, ancestry, sex, gender identity or expression, sexual orientation, marital status, familial status, pregnancy, veteran status, disability, and lawful source of income.
(c) Beginning on February 1, 2024, the Department of Administrative Services shall perform ongoing assessments of systems that employ artificial intelligence and are in use by state agencies to ensure that no such system shall result in any unlawful discrimination or disparate impact described in subparagraph (B) of subdivision (1) of subsection (b) of section 2 of this act. The department shall perform such assessment in accordance with the policies and procedures established by the Office of Policy and Management pursuant to subsection (b) of section 2 of this act.
Enacted 2023-01-01
H-02.6, H-02.7
N.Y.C. Admin. Code § 20-871(a)(1)-(2)
Plain Language
Employers and employment agencies in NYC may not use an AEDT to screen candidates for hiring or employees for promotion unless two conditions are met: (1) the tool has undergone an independent bias audit within the preceding year, and (2) a summary of the audit results and the distribution date of the tool have been posted on the employer's or employment agency's website before the tool is used. The bias audit must assess disparate impact across EEO-1 race/ethnicity and sex categories. The summary must remain posted for at least 6 months after the last use of the AEDT. This creates both a mandatory independent audit obligation and a public disclosure obligation as preconditions to lawful AEDT use.
In the city, it shall be unlawful for an employer or an employment agency to use an automated employment decision tool to screen a candidate or employee for an employment decision unless: 1. Such tool has been the subject of a bias audit conducted no more than one year prior to the use of such tool; and 2. A summary of the results of the most recent bias audit of such tool as well as the distribution date of the tool to which such audit applies has been made publicly available on the website of the employer or employment agency prior to the use of such tool.
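The audit mechanics reduce to two computations: per-category selection rates and impact ratios relative to the highest-rated category. The sketch below illustrates both with hypothetical counts; the implementing rules, not this sketch, define the required EEO-1 sex and race/ethnicity categories and any intersectional breakdowns.

```python
# Illustrative impact-ratio computation for an AEDT bias audit.
# Category labels and counts are hypothetical.

def selection_rates(selected: dict[str, int], applicants: dict[str, int]) -> dict[str, float]:
    """Selection rate = number selected / number of applicants, per category."""
    return {cat: selected[cat] / applicants[cat] for cat in applicants}

def impact_ratios(rates: dict[str, float]) -> dict[str, float]:
    """Impact ratio = each category's selection rate divided by the highest rate."""
    top = max(rates.values())  # assumes at least one category has a nonzero rate
    return {cat: rate / top for cat, rate in rates.items()}

applicants = {"Female": 400, "Male": 600}   # hypothetical applicant counts
selected = {"Female": 80, "Male": 180}      # hypothetical selection counts

rates = selection_rates(selected, applicants)   # Female 0.20, Male 0.30
ratios = impact_ratios(rates)                   # Female ~0.67, Male 1.00

for cat in ratios:
    print(f"{cat}: selection rate {rates[cat]:.2f}, impact ratio {ratios[cat]:.2f}")
```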
Pending 2025-07-01
H-02.1
O.C.G.A. § 10-16-2(a)
Plain Language
Developers are categorically prohibited from selling, distributing, or making available to deployers any automated decision system that results in algorithmic discrimination. The prohibition covers discrimination or disparate impact across a broad set of protected characteristics in the context of consequential decisions. Self-testing for bias mitigation and diversity expansion are carved out, as are private clubs exempt under the Civil Rights Act.
No developer shall sell, distribute, or otherwise make available to deployers an automated decision system that results in algorithmic discrimination.
Pending 2025-07-01
H-02.1, H-02.2
O.C.G.A. § 10-16-2(e)(1)
Plain Language
Developers must affirmatively address algorithmic discrimination, invalidity, and errors by ensuring representative training data, implementing data governance, testing for disparate impact, and exploring less discriminatory alternatives. This is not a one-time pre-deployment obligation — developers must continue assessing and mitigating discrimination risk for the entire period any deployer uses the system.
A developer of an automated decision system shall take steps to address risks of algorithmic discrimination, invalidity, and errors, including, but not limited to, ensuring suitability and representativeness of data sources, implementing data governance measures, testing the automated decision system for disparate impact, and searching for less discriminatory alternative decision methods. Developers shall continue assessing and mitigating the risk of algorithmic discrimination in their automated decision systems so long as such automated decision systems are in use by any deployer.
Pending 2025-07-01
H-02.1
O.C.G.A. § 10-16-3(a)
Plain Language
Deployers are categorically prohibited from using an automated decision system in any manner that results in algorithmic discrimination. This mirrors the developer prohibition in § 10-16-2(a) but applies at the deployment stage rather than the distribution stage.
No deployer of an automated decision system shall use an automated decision system in a manner that results in algorithmic discrimination.
Pending 2025-07-01
H-02.3, H-02.8, H-02.10
O.C.G.A. § 10-16-3(e)-(j)
Plain Language
Deployers must complete an impact assessment before deploying each automated decision system, repeat it at least annually, and complete a new one within 90 days of any intentional and substantial modification. The assessment must cover system purpose and benefits, algorithmic discrimination risk analysis with mitigation steps, accessibility limitations, labor law risks, privacy intrusion risks, data inputs and outputs, validity and reliability analysis against contemporary social science standards, transparency measures, and post-deployment monitoring. If the assessment reveals discrimination risk, deployment is blocked until reasonable steps are taken to search for and implement less discriminatory alternatives. A single assessment may cover comparable systems, and assessments completed for other laws count if reasonably similar in scope. All assessments and associated records must be retained for at least three years after final deployment (the reassessment and retention timing is sketched after the statutory text below).
(e) Except as otherwise provided for in this chapter: (1) A deployer, or a third party contracted by the deployer, that deploys an automated decision system shall complete an impact assessment for the automated decision system; and (2) A deployer, or a third party contracted by the deployer, shall complete an impact assessment for a deployed automated decision system at least annually and within 90 days after any intentional and substantial modification to the automated decision system is made available. (f) An impact assessment completed pursuant to subsection (e) of this Code section shall include, at a minimum, and to the extent reasonably known by or available to the deployer: (1) A statement by the deployer disclosing the purpose, intended use cases, and deployment context of, and benefits afforded by, the automated decision system; (2) An analysis of whether the deployment of the automated decision system poses any known or reasonably foreseeable risks of: (A) Algorithmic discrimination and, if so, the nature of the algorithmic discrimination and the steps that have been taken to mitigate the risks; (B) Limits on accessibility for individuals who are pregnant, breastfeeding, or disabled, and, if so, what reasonable accommodations the deployer may provide that would mitigate any such limitations on accessibility; (C) Any violation of state or federal labor laws, including laws pertaining to wages, occupational health and safety, and the right to organize; or (D) Any physical or other intrusion upon the solitude or seclusion, or the private affairs or concerns, of consumers if such intrusion: (i) Would be offensive to a reasonable person; and (ii) May be redressed under the laws of this state; (3) A description of the categories of data the automated decision system processes as inputs and the outputs the automated decision system produces; (4) If the deployer used data to customize the automated decision system, an overview of the categories of data the deployer used to customize the automated decision system; (5) An analysis of the automated decision system's validity and reliability in accordance with contemporary social science standards, and a description of any metrics used to evaluate the performance and known limitations of the automated decision system; (6) A description of any transparency measures taken concerning the automated decision system, including any measures taken to disclose to a consumer that the automated decision system is in use when the automated decision system is in use; (7) A description of the post-deployment monitoring and user safeguards provided concerning the automated decision system, including the oversight, use, and learning process established by the deployer to address issues arising from the deployment of the automated decision system; and (8) When such impact assessment is completed following an intentional and substantial modification to an automated decision system, a statement disclosing the extent to which the automated decision system was used in a manner that was consistent with, or varied from, the developer's intended uses of the automated decision system. (g) If the analysis required by paragraph (2) of subsection (f) of this Code section reveals a risk of algorithmic discrimination, the deployer shall not deploy the automated decision system until the developer or deployer takes reasonable steps to search for and implement less discriminatory alternative decision methods. 
(h) A single impact assessment may address a comparable set of automated decision systems deployed by a deployer. (i) If a deployer, or a third party contracted by the deployer, completes an impact assessment for the purpose of complying with another applicable law or regulation, the impact assessment shall satisfy the requirements established in this Code section if the impact assessment is reasonably similar in scope and effect to the impact assessment that would otherwise be completed pursuant to this Code section. (j) A deployer shall maintain the most recently completed impact assessment for an automated decision system, all records concerning each impact assessment, and all prior impact assessments, if any, throughout the period of time that the automated decision system is deployed and for at least three years following the final deployment of the automated decision system.
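The timing rules in subsections (e) and (j) can be read as a simple schedule: annual reassessment, accelerated to 90 days after an intentional and substantial modification, plus a three-year retention tail after final deployment. A rough sketch, using 365-day years as an approximation; the helper names are illustrative:

```python
from datetime import date, timedelta
from typing import Optional

def next_assessment_due(last_assessment: date,
                        last_modification: Optional[date] = None) -> date:
    """Annual reassessment, accelerated to 90 days after any intentional
    and substantial modification (subsection (e)(2))."""
    annual_due = last_assessment + timedelta(days=365)
    if last_modification and last_modification > last_assessment:
        return min(annual_due, last_modification + timedelta(days=90))
    return annual_due

def retention_end(final_deployment: date) -> date:
    """Subsection (j): retain assessments and records for at least three
    years after final deployment."""
    return final_deployment + timedelta(days=3 * 365)

print(next_assessment_due(date(2026, 1, 15), date(2026, 6, 1)))  # 2026-08-30
print(retention_end(date(2027, 3, 1)))                           # 2030-02-28
```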
Pending 2025-07-01
H-02.8
O.C.G.A. § 10-16-3(k)
Plain Language
Deployers must conduct at least annual reviews of each deployed automated decision system specifically to verify it is not causing algorithmic discrimination. This is a standalone periodic review obligation separate from the annual impact assessment, focused specifically on ongoing discrimination detection rather than the broader assessment required by subsection (e).
At least annually a deployer, or a third party contracted by the deployer, shall review the deployment of each automated decision system deployed by the deployer to ensure that the automated decision system is not causing algorithmic discrimination.
Pending 2025-07-01
H-02.5
O.C.G.A. § 10-16-3(l)
Plain Language
Deployers must publicly post on their websites all impact assessments completed in the last three years. The Attorney General prescribes the form and manner. This is a rolling publication obligation — as new assessments are completed, they must be published and remain available for three years.
Deployers shall publish on their public websites all impact assessments completed within the preceding three years in a form and manner prescribed by the Attorney General.
Pending 2026-01-01
H-02.3, H-02.6, H-02.8
Section 15(a)-(b)
Plain Language
Before deploying any permitted ADMS, the employer must complete an initial impact assessment at least 30 days before implementation, signed by both the designated human reviewer and a qualified independent auditor. The auditor independence requirement is strict: anyone who in the prior 5 years was involved in developing, deploying, or licensing the system, had an employment relationship with the developer/deployer, or had a direct or material indirect financial interest in such entities is disqualified. After the initial assessment, subsequent assessments must be conducted at least every 2 years and before any material changes. Each assessment must cover, in plain language: system objectives and their achievability; algorithm and training descriptions; testing for disparate impact across a detailed list of protected characteristics, accessibility limitations, privacy and job quality impacts, cybersecurity vulnerabilities, public health/safety risks, foreseeable misuse, and sensitive data handling; and an employee notification mechanism.
(a) An employer seeking to use or apply an automated decision-making system permitted under Section 10 shall conduct an initial impact assessment, 30 days prior to implementation of the automated decision-making system, bearing the signature of: (1) one or more individuals responsible for meaningful human review of the system; and (2) an independent auditor. A person shall not be an independent auditor under this subsection if, at any point in the 5 years preceding the impact assessment, that person: (i) was involved in using, developing, offering, licensing, or deploying the automated decision-making system under review; (ii) had an employment relationship with a developer or deployer that uses, offers, or licenses the automated decision-making system under review; or (iii) had a direct or material indirect financial interest in a developer or deployer that uses, offers, or licenses the automated decision-making system under review. (b) Following the initial impact assessment, additional impact assessments shall be conducted at least once every 2 years and prior to any material changes to the automated decision-making system. Each impact assessment shall include, in plain language: (1) a description of the objectives of the automated decision-making system; (2) an evaluation of the system's ability to achieve those objectives; (3) a description and evaluation of the algorithms, computational models, and artificial intelligence tools used, including: (A) a summary of underlying algorithms and artificial intelligence tools; and (B) a description of the design and training to be used; (4) testing for: (A) disparate impact or discrimination based on protected characteristics, including, but not limited to discriminating against, persons based on their race, color, religious creed, national origin, sex, disability or perceived disability, gender identity, sexual orientation, genetic information, pregnancy or a condition related to pregnancy, ancestry, or status as a veteran and any actions to mitigate any impacts; (B) accessibility limitations for persons with disabilities; (C) privacy and job quality impacts, including wages, hours, and conditions and safeguards; (D) cybersecurity vulnerabilities and safeguards; (E) public health or safety risks; (F) foreseeable misuse and safeguards; and (G) use, storage, and control of sensitive or personal data; and (5) a notification mechanism for employees impacted by the use of the automated decision-making system.
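The five-year independence lookback in subsection (a) is mechanical enough to sketch. The Relationship record below is a hypothetical modeling choice; whether a given financial interest is "material" remains a judgment the code cannot make.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Relationship:
    kind: str    # e.g. "system_involvement", "employment", "financial_interest"
    ended: date  # last date on which the relationship existed

def is_independent(relationships: list[Relationship], assessment_date: date) -> bool:
    """Disqualified if any covered relationship existed at any point in the
    5 years preceding the assessment (365-day years as an approximation)."""
    cutoff = assessment_date - timedelta(days=5 * 365)
    return not any(r.ended >= cutoff for r in relationships)

print(is_independent(
    [Relationship("employment", ended=date(2019, 6, 30))],
    assessment_date=date(2026, 1, 1),
))  # True: the relationship ended more than 5 years before the assessment
```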
Pending 2026-01-01
Section 15(c)
Plain Language
If an impact assessment reveals that the ADMS produces discriminatory, biased, or inaccurate outcomes, or fails to meet the notice, appeals, and alternative review requirements of Section 10(b), the employer must immediately cease all use of the system and of any information it produced, and must take all steps necessary to remedy the harmful outcomes. This is a mandatory shutdown-and-remediate obligation triggered by assessment findings; there is no grace period or cure window, and cessation must be immediate.
(c) If an impact assessment finds that an automated decision-making system produces discriminatory, biased, or inaccurate outcomes or fails to meet or negatively impacts any of the measures described in subsection (b) of Section 10, the employer shall immediately cease any use or function of that system and of any information produced by it, and shall take all steps necessary to remedy the discriminatory, biased or inaccurate outcomes produced by the automated decision-making system.
Pending 2026-01-01
H-02.5
Section 15(d)-(e)
Plain Language
Employers must notify all affected employees and their exclusive bargaining representatives of each impact assessment's results, and must provide a copy of the full assessment upon request. Additionally, each impact assessment must be published on the employer's website, subject to the redaction limitations described in Section 20. This dual disclosure obligation — direct employee notification plus public website publication — ensures transparency to both workers and the public.
(d) The employer shall notify affected employees and any exclusive bargaining representative, the results of each impact assessment, and provide a copy of the impact assessment upon request. (e) Each impact assessment shall be published on the employer's website, subject to the limitations set forth in Section 20.
Pending 2027-01-01
H-02.3, H-02.4, H-02.10
Sections 10(a)-(c) and 35(a)-(c)
Plain Language
Deployers must conduct a comprehensive impact assessment for each automated decision tool they use, initially by January 1, 2027, and annually thereafter. The assessment must cover the tool's purpose, outputs, data types collected, analysis of potential adverse impacts across protected characteristics, safeguards for algorithmic discrimination risks, human oversight and monitoring arrangements, and validity evaluation. A new impact assessment must also be performed as soon as feasible whenever a significant update occurs, meaning a change to the tool's use case, key functionality, or expected outcomes. Within 60 days of completing each assessment, the deployer must submit it to the Attorney General. Knowing failure to submit triggers administrative fines of up to $10,000 per violation, and each day the tool is used without a submitted assessment counts as a separate violation (the exposure arithmetic is sketched after the statutory text below). Deployers with fewer than 25 employees are exempt unless their tool impacted more than 999 people in the prior year.
(a) On or before January 1, 2027, and annually thereafter, a deployer of an automated decision tool shall perform an impact assessment for any automated decision tool the deployer uses that includes all of the following: (1) a statement of the purpose of the automated decision tool and its intended benefits, uses, and deployment contexts; (2) a description of the automated decision tool's outputs and how they are used to make, or be a controlling factor in making, a consequential decision; (3) a summary of the type of data collected from natural persons and processed by the automated decision tool when it is used to make, or be a controlling factor in making, a consequential decision; (4) an analysis of potential adverse impacts on the basis of sex, race, color, ethnicity, religion, age, national origin, limited English proficiency, disability, veteran status, or genetic information from the deployer's use of the automated decision tool; (5) a description of the safeguards implemented, or that will be implemented, by the deployer to address any reasonably foreseeable risks of algorithmic discrimination arising from the use of the automated decision tool known to the deployer at the time of the impact assessment; (6) a description of how the automated decision tool will be used by a natural person, or monitored when it is used, to make, or be a controlling factor in making, a consequential decision; and (7) a description of how the automated decision tool has been or will be evaluated for validity or relevance. (b) A deployer shall, in addition to the impact assessment required by subsection (a), perform, as soon as feasible, an impact assessment with respect to any significant update. (c) This Section does not apply to a deployer with fewer than 25 employees unless, as of the end of the prior calendar year, the deployer deployed an automated decision tool that impacted more than 999 people per year.

Section 35. Impact assessment. (a) Within 60 days after completing an impact assessment required by this Act, a deployer shall provide the impact assessment to the Attorney General. (b) A deployer who knowingly violates this Section shall be liable for an administrative fine of not more than $10,000 per violation in an administrative enforcement action brought by the Attorney General. Each day on which an automated decision tool is used for which an impact assessment has not been submitted as required under this Section shall give rise to a distinct violation of this Section. (c) The Attorney General may share impact assessments with other State entities as appropriate.
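Because each day of use without a submitted assessment is a distinct violation, exposure under Section 35(b) compounds daily. A back-of-envelope sketch, assuming the $10,000 cap is reached on every violation, together with the small-deployer exemption test from Section 10(c):

```python
MAX_FINE_PER_VIOLATION = 10_000  # Section 35(b) cap, per violation

def max_exposure(days_in_use_without_submission: int, tools: int = 1) -> int:
    """Each day a tool is used without a submitted assessment is a distinct
    violation; this assumes the cap is imposed on every violation."""
    return days_in_use_without_submission * tools * MAX_FINE_PER_VIOLATION

def small_deployer_exempt(employees: int, max_people_impacted_prior_year: int) -> bool:
    """Section 10(c): under-25-employee deployers are exempt unless a deployed
    tool impacted more than 999 people in the prior calendar year."""
    return employees < 25 and max_people_impacted_prior_year <= 999

print(max_exposure(90))                 # 900000: one tool, 90 days of use
print(small_deployer_exempt(10, 1500))  # False: exemption lost at 1,000+ people
```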
Pending 2027-01-01
Section 30(a)-(c)
Plain Language
Deployers are categorically prohibited from using an automated decision tool that results in algorithmic discrimination — unjustified differential treatment or adverse impacts based on protected characteristics. Beginning January 1, 2028, individuals harmed by algorithmic discrimination may bring a private civil action for compensatory damages, declaratory relief, and reasonable attorney's fees and costs. The plaintiff bears the burden of proving that the deployer's use of the tool resulted in algorithmic discrimination causing actual harm. Two carve-outs apply: self-testing to identify or prevent discrimination, and use to expand applicant pools for diversity or to redress historical discrimination. Private clubs exempt under the Civil Rights Act of 1964 are also excluded.
(a) A deployer shall not use an automated decision tool that results in algorithmic discrimination. (b) On and after January 1, 2028, a person may bring a civil action against a deployer for violation of this Section. In an action brought under this subsection, the plaintiff shall have the burden of proof to demonstrate that the deployer's use of the automated decision tool resulted in algorithmic discrimination that caused actual harm to the person bringing the civil action. (c) In addition to any other remedy at law, a deployer that violates this Section shall be liable to a prevailing plaintiff for any of the following: (1) compensatory damages; (2) declaratory relief; and (3) reasonable attorney's fees and costs.
Pending 2026-07-01
H-02.1, H-02.2, H-02.3
IC 22-5-10.4-10(2)(A)
Plain Language
Before using any automated decision system output in an employment decision, the employer must ensure the system has undergone predeployment testing and validation covering four areas: (1) system efficacy, (2) compliance with all enumerated federal employment discrimination statutes (Title VII, ADEA, ADA Title I, GINA Title II, EPA, Rehabilitation Act, and PWFA), (3) absence of discriminatory impact across race, color, religion, sex (including pregnancy, sexual orientation, and gender identity), national origin, age, disability, and genetic information, and (4) compliance with the NIST AI Risk Management Framework or its successor. All four conditions must be satisfied before any ADS output may be used.
use an automated decision system output in making an employment related decision with respect to a covered individual unless: (A) the automated decision system used to generate the automated decision system output has had predeployment testing and validation with respect to: (i) the efficacy of the system; (ii) the compliance of the system with applicable employment discrimination laws, including Title VII of the Civil Rights Act of 1964 (42 U.S.C. 2000e et seq.), the Age Discrimination in Employment Act of 1967 (29 U.S.C. 621 et seq.), Title I of the Americans with Disabilities Act of 1990 (42 U.S.C. 12111 et seq.), Title II of the Genetic Information Nondiscrimination Act of 2008 (42 U.S.C. 2000ff et seq.), Section 6(d) of the Fair Labor Standards Act of 1938 (29 U.S.C. 206(d)), Sections 501 and 505 of the Rehabilitation Act of 1973 (29 U.S.C. 791 and 29 U.S.C. 793), and the Pregnant Workers Fairness Act (42 U.S.C. 2000gg); (iii) the lack of any potential discriminatory impact of the system, including discriminatory impact based on race, color, religion, sex (including pregnancy, sexual orientation, or gender identity), national origin, age, or disability, and genetic information (including family medical history); and (iv) the compliance of the system with the Artificial Intelligence Risk Management Framework released by the National Institute of Standards and Technology on January 26, 2023, or a successor framework;
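Since all four clauses of subparagraph (A) must be satisfied before any ADS output may be used, the predeployment gate is a conjunction. A minimal sketch follows; the check names paraphrase the statutory clauses and are not defined terms.

```python
# Minimal sketch of the four-part predeployment gate in IC 22-5-10.4-10(2)(A).
PREDEPLOYMENT_CHECKS = (
    "efficacy_validated",              # (i) system efficacy
    "federal_eeo_statutes_compliant",  # (ii) Title VII, ADEA, ADA, GINA, EPA, Rehab Act, PWFA
    "no_discriminatory_impact",        # (iii) across the listed protected characteristics
    "nist_ai_rmf_conformant",          # (iv) NIST AI RMF or successor framework
)

def may_use_ads_output(results: dict[str, bool]) -> bool:
    """All four predeployment conditions must hold before any ADS output
    may be used in an employment-related decision."""
    return all(results.get(check, False) for check in PREDEPLOYMENT_CHECKS)

print(may_use_ads_output({c: True for c in PREDEPLOYMENT_CHECKS}))  # True
print(may_use_ads_output({"efficacy_validated": True}))             # False
```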
Pending 2026-07-01
H-02.6, H-02.7, H-02.8
IC 22-5-10.4-10(2)(B)
Plain Language
As an ongoing condition of lawful use, the automated decision system must be independently tested at least annually for discriminatory impact across the protected characteristics listed in the predeployment requirements (race, color, religion, sex, national origin, age, disability, genetic information) and for potential biases. The results of each annual test must be made publicly available. 'Independently tested' implies a third party with no material relationship to the employer, though the statute does not specify auditor qualifications.
(B) the automated decision system is, not less than annually, independently tested for discriminatory impact described in clause (A)(iii) or potential biases and the results of the test are made publicly available;
Pre-filed 2025-07-07
H-02.1
Chapter 93M, Section 2(a)
Plain Language
Developers must exercise reasonable care to identify, mitigate, and disclose risks of algorithmic discrimination in their AI systems. This is a general duty of care obligation — it does not prescribe specific testing methodologies but requires affirmative steps to find and address discriminatory risks across all protected classifications under Massachusetts and federal law. The duty encompasses both pre-deployment identification and ongoing mitigation.
(a) Duty of Care: Developers must use reasonable care to identify, mitigate, and disclose risks of algorithmic discrimination.
Pre-filed 2025-07-07
H-02.3, H-02.8, H-02.10
Chapter 93M, Section 3(b)
Plain Language
Deployers of high-risk AI systems must complete a formal impact assessment annually for each system, covering the system's purpose and intended use, data categories and outputs, and discrimination risks with corresponding mitigation measures. Assessments must also be updated whenever a substantial modification is made to the system, regardless of the annual cycle. The state will provide templates to standardize and reduce the compliance burden. This creates both a periodic (annual) obligation and an event-driven (substantial modification) update requirement.
(b) Impact Assessments: (1) Deployers must complete an annual impact assessment for each high-risk AI system, including: (i) The purpose and intended use of the system; (ii) Data categories used and outputs generated; (iii) Potential risks of discrimination and mitigation measures. (2) Impact assessments must be updated after any substantial modification to the system. State-provided templates for these assessments will be made available to reduce compliance burdens.
Pre-filed
H-02.1, H-02.3
Chapter 93M § 2(a)
Plain Language
Developers of high-risk AI systems must exercise reasonable care to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination arising from the intended and contracted uses of the system. Compliance with this section and any AG rules creates a rebuttable presumption that reasonable care was used — but only in AG enforcement actions, not in any other legal proceeding. The self-testing and diversity-expansion carve-outs in the algorithmic discrimination definition mean that developers using their systems solely for bias testing or pool expansion are not subject to this duty.
(a) Not later than 6 months after the effective date of this act, a developer of a high-risk artificial intelligence system shall use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination arising from the intended and contracted uses of the high-risk artificial intelligence system. In any enforcement action brought not later than 6 months after the effective date of this act, by the attorney general pursuant to section 6, there is a rebuttable presumption that a developer used reasonable care as required under this section if the developer complied with this section and any additional requirements or obligations as set forth in rules promulgated by the attorney general pursuant to section 7.
Pre-filed
H-02.1, H-02.3
Chapter 93M § 3(a)
Plain Language
Deployers of high-risk AI systems must exercise reasonable care to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination. Compliance with Section 3 and any AG rules creates a rebuttable presumption that reasonable care was used — but this presumption applies only in AG enforcement actions, not in any other proceeding.
(a) Not later than 6 months after the effective date of this act, a deployer of a high-risk artificial intelligence system shall use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination. In any enforcement action brought not later than 6 months after the effective date of this act, by the attorney general pursuant to section 6, there is a rebuttable presumption that a deployer of a high-risk artificial intelligence system used reasonable care as required under this section if the deployer complied with this section and any additional requirements or obligations as set forth in rules promulgated by the attorney general pursuant to section 7.
Pre-filed
H-02.3, H-02.8, H-02.10
Chapter 93M § 3(c)(1)-(7)
Plain Language
Deployers must complete an impact assessment for each high-risk AI system before deployment, repeat it at least annually, and complete a new one within 90 days of any intentional and substantial modification. The impact assessment must cover: system purpose and benefits, algorithmic discrimination risk analysis and mitigation, data inputs and outputs, customization data, performance metrics and limitations, transparency measures, and post-deployment monitoring. A single assessment may cover comparable systems, and an assessment completed under another law satisfies this requirement if reasonably similar in scope. All impact assessments and records must be retained for at least three years after final deployment. Additionally, deployers must conduct at least annual reviews to verify each system is not causing algorithmic discrimination. Small deployers meeting the subsection (f) criteria are exempt.
(c) (1) except as provided in subsections (c)(4), (c)(5), and (f) of this section: (i) a deployer, or a third party contracted by the deployer, that deploys a high-risk artificial intelligence system not later than 6 months after the effective date of this act, shall complete an impact assessment for the high-risk artificial intelligence system; and (ii) Not later than 6 months after the effective date of this act, a deployer, or a third party contracted by the deployer, shall complete an impact assessment for a deployed high-risk artificial intelligence system at least annually and within ninety days after any intentional and substantial modification to the high-risk artificial intelligence system is made available. (2) an impact assessment completed pursuant to this subsection (c) must include, at a minimum, and to the extent reasonably known by or available to the deployer: (i) a statement by the deployer disclosing the purpose, intended use cases, and deployment context of, and benefits afforded by, the high-risk artificial intelligence system; (ii) an analysis of whether the deployment of the high-risk artificial intelligence system poses any known or reasonably foreseeable risks of algorithmic discrimination and, if so, the nature of the algorithmic discrimination and the steps that have been taken to mitigate the risks; (iii) a description of the categories of data the high-risk artificial intelligence system processes as inputs and the outputs the high-risk artificial intelligence system produces; (iv) if the deployer used data to customize the high-risk artificial intelligence system, an overview of the categories of data the deployer used to customize the high-risk artificial intelligence system; (v) any metrics used to evaluate the performance and known limitations of the high-risk artificial intelligence system; (vi) a description of any transparency measures taken concerning the high-risk artificial intelligence system, including any measures taken to disclose to a consumer that the high-risk artificial intelligence system is in use when the high-risk artificial intelligence system is in use; and (vii) a description of the post-deployment monitoring and user safeguards provided concerning the high-risk artificial intelligence system, including the oversight, use, and learning process established by the deployer to address issues arising from the deployment of the high-risk artificial intelligence system. (3) in addition to the information required under subsection (3)(b) of this section, an impact assessment completed pursuant to this subsection (c) following an intentional and substantial modification to a high-risk artificial intelligence system not later than 6 months after the effective date of this act, must include a statement disclosing the extent to which the high-risk artificial intelligence system was used in a manner that was consistent with, or varied from, the developer's intended uses of the high-risk artificial intelligence system. (4) a single impact assessment may address a comparable set of high-risk artificial intelligence systems deployed by a deployer. (5) if a deployer, or a third party contracted by the deployer, completes an impact assessment for the purpose of complying with another applicable law or regulation, the impact assessment satisfies the requirements established in this subsection (c) if the impact assessment is reasonably similar in scope and effect to the impact assessment that would otherwise be completed pursuant to this subsection (c). 
(6) a deployer shall maintain the most recently completed impact assessment for a high-risk artificial intelligence system as required under this subsection (c), all records concerning each impact assessment, and all prior impact assessments, if any, for at least three years following the final deployment of the high-risk artificial intelligence system. (7) Not later than 6 months after the effective date of this act, and at least annually thereafter, a deployer, or a third party contracted by the deployer, must review the deployment of each high-risk artificial intelligence system deployed by the deployer to ensure that the high-risk artificial intelligence system is not causing algorithmic discrimination.
Pending 2025-10-08
G.L. c. 176O, § 12(g)(1)(E)-(F)
Plain Language
Carriers and utilization review organizations must ensure that AI tools used in utilization review do not discriminate directly or indirectly against any insured in violation of state or federal law, including Massachusetts anti-discrimination law (Chapter 151B). The tools must also be applied fairly and equitably, consistent with applicable state and federal agency regulations and guidance. This imposes both a non-discrimination obligation and an affirmative fairness standard, though it does not specify testing methodology or audit requirements.
(E) The use of the artificial intelligence, algorithm, or other software tool does not discriminate, directly or indirectly, against any insured in violation of state or federal law, including but not limited to chapter 151B. (F) The artificial intelligence, algorithm, or other software tool is fairly and equitably applied, including in accordance with any applicable regulations and guidance issued by state and federal agencies.
Pending 2025-01-14
H-02.3, H-02.6
Ch. 149B § 2(j)
Plain Language
Employers may not use electronic monitoring (alone or with an ADS) unless the monitoring has been independently assessed. The impact assessment must be conducted within one year before use (or within six months of the effective date for existing monitoring) by an independent, impartial party free of financial or legal conflicts. It must evaluate data protection and security practices, identify the applicable allowable purposes, describe how the monitoring could violate applicable law and what steps would prevent such violations, and assess whether the monitoring may negatively affect employees' privacy and job quality.
(j) It shall be unlawful for an employer to use electronic monitoring, alone or in conjunction with an automated employment decision system, unless the employer's proposed use of electronic monitoring has been the subject of an impact assessment. Such impact assessments must: (i) be conducted no more than one year prior to the use of such electronic monitoring, or where the electronic monitoring began before the effective date of this article, within six months of the effective date of this article; (ii) be conducted by an independent and impartial party with no financial or legal conflicts of interest; (iii) evaluate whether the data protection and security practices surrounding the electronic monitoring are consistent with applicable law and cybersecurity industry best practices; (iv) identify which allowable purpose(s) described in this chapter; (vi) consider and describe any other ways in which the electronic monitoring could result in a violation of applicable law and, for any finding that a violation of law may occur, any necessary or appropriate steps to prevent such violation of law; and (vii) consider and describe whether the electronic monitoring may negatively impact employees' privacy and job quality, including wages, hours, and working conditions.
Pending 2025-01-14
H-02.1, H-02.2, H-02.3, H-02.6, H-02.7
Ch. 149B § 3(a)
Plain Language
Employers may not use an automated employment decision tool unless it has undergone an independent impact assessment within the prior year (or within six months of the effective date for existing tools). The assessment must be conducted by an independent party and must comprehensively evaluate: modeling attributes and techniques for scientific validity and proxy discrimination; training data disparities; output disparities across all Massachusetts protected classes; disability accessibility limitations; post-deployment adverse impact risks; whether each problematic feature is the least discriminatory available method; potential legal violations; and privacy and job quality impacts. The completed assessment (or accessible summary) must be submitted to the Department of Labor Standards for a public registry within 60 days and distributed to affected employees.
(a) It shall be unlawful for an employer to use an automated employment decision tool for an employment decision, alone or in conjunction with electronic monitoring, unless such tool has been the subject of an impact assessment. Impact assessments must: (i) be conducted no more than one year prior to the use of such tool, or where the tool was in use by the employer before the effective date of this article, within six months of the effective date of this article; (ii) be conducted by an independent and impartial party with no financial or legal conflicts of interest; (iii) identify and describe the attributes and modeling techniques that the tool uses to produce outputs; (iv) evaluate whether those attributes and techniques are a scientifically valid means of evaluating an employee or candidate's performance or ability to perform the essential functions of a role, and whether those attributes may function as a proxy for belonging to a protected class under chapter 151B or any other applicable law; (v) consider, identify, and describe any disparities in the data used to train or develop the tool and describe how those disparities may result in a disparate impact on persons based on their race, color, religious creed, national origin, sex, gender identity, sexual orientation, genetic information, pregnancy or a condition related to said pregnancy including, but not limited to, lactation or the need to express breast milk for a nursing child, ancestry or status as a veteran, and what actions may be taken by the employer or vendor of the tool to reduce or remedy any disparate impact; (vi) consider, identify, and describe any outputs produced by the tool that may result in a disparate impact on persons based on their race, color, religious creed, national origin, sex, gender identity, sexual orientation, genetic information, pregnancy or a condition related to said pregnancy including, but not limited to, lactation or the need to express breast milk for a nursing child, ancestry or status as a veteran, and what actions may be taken by the employer or vendor of the tool to reduce or remedy that disparate impact; (vii) evaluate whether the use of the tool may limit accessibility for persons with disabilities, or for persons with any specific disability, and what actions may be taken by the employer or vendor of the tool to reduce or remedy the concern; (viii) consider and describe potential sources of adverse impact against individuals or groups based on race, color, religious creed, national origin, sex, gender identity, sexual orientation, genetic information, pregnancy or a condition related to said pregnancy including, but not limited to, lactation or the need to express breast milk for a nursing child, ancestry or status as a veteran that may arise after the tool is deployed; (ix) identify and describe any other assessment of risks of discrimination or a disparate impact of the tool on individuals or groups based on race, color, religious creed, national origin, sex, gender identity, sexual orientation, genetic information, pregnancy or a condition related to said pregnancy including, but not limited to, lactation or the need to express breast milk for a nursing child, ancestry or status as a veteran that arise over the course of the impact assessment, and what actions may be taken to reduce or remedy that risk; (x) for any finding of a disparate impact or limit on accessibility, evaluate whether the data set, attribute, or feature of the tool at issue is the least discriminatory method of assessing a candidate's performance or ability to perform job functions; (xi) consider and describe any other ways in which the tool could result in a violation of applicable law and, for any finding that a violation of law may occur, any necessary or appropriate steps to prevent such violation of law; (xii) consider and describe whether use of the tool may negatively impact employees' privacy and job quality, including wages, hours, and working conditions; and (xiii) be submitted in its entirety or an accessible summary form to the department for inclusion in a public registry of such impact assessments within sixty days of completion and distributed to employees who may be subject to the tool.
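Clause (x) effectively calls for a least-discriminatory-alternative comparison. One hypothetical way to operationalize it: among candidate tool configurations that clear a validity floor, prefer the one with the smallest adverse-impact gap. All metrics and thresholds below are illustrative modeling choices, not requirements of the bill.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    validity: float          # e.g., criterion validity of the selection procedure
    min_impact_ratio: float  # worst-case impact ratio across protected groups

def least_discriminatory(candidates: list[Candidate],
                         validity_floor: float = 0.30) -> Candidate:
    """Among configurations meeting the validity floor, pick the one with
    the highest worst-case impact ratio (i.e., the smallest disparity)."""
    viable = [c for c in candidates if c.validity >= validity_floor]
    if not viable:
        raise ValueError("no candidate meets the validity floor")
    return max(viable, key=lambda c: c.min_impact_ratio)

picked = least_discriminatory([
    Candidate("baseline", validity=0.42, min_impact_ratio=0.61),
    Candidate("debiased", validity=0.38, min_impact_ratio=0.86),
])
print(picked.name)  # "debiased": still valid, with a smaller adverse-impact gap
```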
Pending 2025-01-14
H-02.8
Ch. 149B § 3(b)
Plain Language
Employers must conduct or commission annual impact assessments for the entire duration that an automated employment decision tool remains in use. Each subsequent assessment must meet the same thirteen requirements as the initial assessment and must specifically evaluate any changes in the tool's validity or disparate impact since the prior assessment.
(b) An employer shall conduct or commission subsequent impact assessments each year that the tool is in use to assist or replace employment decisions. Subsequent impact assessments shall comply with the requirements of paragraph (a) of this section, and shall assess and describe any change in the validity or disparate impact of the tool.
Pending 2025-01-14
H-02.3
Ch. 149B § 3(e)
Plain Language
If an impact assessment finds disparate impact across any protected class or unlawful accessibility limitations, the employer must immediately cease using the tool and may not resume until it either (1) takes reasonable remedial steps and provides written documentation of those steps to employees, the auditor, and the Department, or (2) demonstrates in writing that the finding is erroneous or that the problematic feature is the least discriminatory available method. This creates a mandatory stop-use-and-remediate obligation tied to adverse assessment findings.
(e) If an initial or subsequent impact assessment concludes that a data set, feature, or application of the automated employment decision tool results in a disparate impact on individuals or groups based on race, color, religious creed, national origin, sex, gender identity, sexual orientation, genetic information, pregnancy or a condition related to said pregnancy including, but not limited to, lactation or the need to express breast milk for a nursing child, ancestry or status as a veteran, or unlawfully limits accessibility for persons with disabilities, an employer shall refrain from using the tool until it: (i) takes reasonable and appropriate steps to remedy that disparate impact or limit on accessibility and describe in writing to employees, the auditor, and the department what steps were taken; and (ii) if the employer believes the impact assessment finding of a disparate impact or limit on accessibility is erroneous, or that the steps taken in accordance with subparagraph (i) of this paragraph sufficiently address those findings such that the tool may be lawfully used in accordance with this article, describes in writing to employees, the auditor, and the department how the data set, feature, or application of the tool is the least discriminatory method of assessing an employee's performance or ability to complete essential functions of a position.
Pending 2025-01-10
Ch. 176O § 12(g)(1)(E)-(F)
Plain Language
AI tools used in utilization review must not discriminate — directly or indirectly — against any insured in violation of state or federal anti-discrimination law, including Massachusetts Chapter 151B (the state's primary anti-discrimination statute). The tool must also be fairly and equitably applied, consistent with applicable regulatory guidance. While the bill does not prescribe a specific bias testing methodology or impact assessment, carriers must be able to demonstrate that their AI tools are non-discriminatory and equitably applied across their insured population.
(E) The use of the artificial intelligence, algorithm, or other software tool does not discriminate, directly or indirectly, against any insured in violation of state or federal law, including but not limited to chapter 151B. (F) The artificial intelligence, algorithm, or other software tool is fairly and equitably applied, including in accordance with any applicable regulations and guidance issued by state and federal agencies.
Pending 2026-10-01
H-02.1
Ins. § 15–10B–05.1(c)(5)-(6)
Plain Language
Covered entities must ensure that AI tools used in utilization review do not result in unfair discrimination and are applied fairly and equitably, including in compliance with applicable HHS regulations and guidance. While this provision does not specify a detailed bias testing methodology, it creates an affirmative obligation to monitor for and prevent discriminatory outcomes from AI-driven utilization review determinations.
(5) the use of an artificial intelligence, algorithm, or other software tool does not result in unfair discrimination; (6) an artificial intelligence, algorithm, or other software tool is fairly and equitably applied, including in accordance with any applicable regulations and guidance issued by the federal Department of Health and Human Services;
Pending 2026-10-01
H-02.1
Insurance Article § 15–10B–05.1(c)(5)-(6)
Plain Language
Carriers must ensure that AI tools used in utilization review do not result in unfair discrimination and are applied fairly and equitably, including in compliance with applicable HHS regulations and guidance. This imposes an ongoing non-discrimination obligation on AI-driven coverage decisions. This is existing law reenacted without amendment, but the new quarterly reporting of AI grievances by race and gender (§ 15–10A–06(a)(1)(iii)(9)(B)) creates a monitoring mechanism to enforce these requirements.
(5) the use of an artificial intelligence, algorithm, or other software tool does not result in unfair discrimination; (6) an artificial intelligence, algorithm, or other software tool is fairly and equitably applied, including in accordance with any applicable regulations and guidance issued by the federal Department of Health and Human Services;
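The grievance-reporting mechanism suggests a simple monitoring pattern: roll up AI-related utilization-review grievances by race and gender each quarter and watch for skew. A hypothetical sketch; the records and field names are illustrative, not drawn from the bill.

```python
from collections import Counter

# Hypothetical quarterly grievance records.
grievances = [
    {"race": "Black", "gender": "F", "ai_related": True},
    {"race": "White", "gender": "M", "ai_related": True},
    {"race": "Black", "gender": "F", "ai_related": False},
]

# Count AI-related grievances by (race, gender) group.
by_group = Counter(
    (g["race"], g["gender"]) for g in grievances if g["ai_related"]
)
for (race, gender), n in sorted(by_group.items()):
    print(f"{race}/{gender}: {n} AI-related grievance(s)")
```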
Failed 2026-01-01
H-02.1
24-A MRSA §4304(8)(A)(2)-(3)
Plain Language
AI-derived utilization review determinations must not directly or indirectly discriminate against enrollees on an extensive list of protected characteristics, including race, color, religion, national origin, ancestry, age, sex, gender, gender identity, gender expression, sexual orientation, present or predicted disability, expected length of life, degree of medical dependency, quality of life, or other health conditions. Determinations must also be fairly and equitably applied. Notably, the protected categories go beyond typical employment discrimination lists to include health-specific characteristics such as predicted disability, expected length of life, and degree of medical dependency — carriers should ensure their AI tools are tested against these categories.
Determinations derived from the use of artificial intelligence, including algorithms and other software tools, must: (2) Not directly or indirectly discriminate against an enrollee on the basis of race, color, religion, national origin, ancestry, age, sex, gender, gender identity, gender expression, sexual orientation, present or predicted disability, expected length of life, degree of medical dependency, quality of life or other health conditions; (3) Be fairly and equitably applied;
Pending 2026-02-24
H-02.3, H-02.4, H-02.5, H-02.6
Sec. 9(1)-(3)
Plain Language
Before deploying any automated decision tool or electronic monitoring tool, employers must commission an independent third-party impact assessment that evaluates the tool's algorithms, data, potential biases (including proxy discrimination under the Elliot-Larsen Civil Rights Act), accessibility limitations for disabled individuals, cybersecurity vulnerabilities, and effects on privacy and job quality. For new tools, the assessment must be completed one year before implementation; for tools already in use, within six months of the act's effective date. Within 60 days of completion, the employer must submit the assessment to the Department for inclusion in a public registry and distribute it to affected covered individuals. Annual reassessments are required for as long as the tool remains in use, evaluating any changes in validity or disparate impact.
Sec. 9. (1) Before an employer uses an automated decisions tool under section 4 or an electronic monitoring tool under section 5, the employer shall conduct an impact assessment of the tool that meets all of the following requirements: (a) Evaluates the tool's objectives, algorithms, data, cybersecurity vulnerabilities, and potential biases, including, but not limited to, discriminatory outcomes based on race, gender, or disability. (b) Is conducted 1 year before the tool is implemented, or, for a tool already in use on the effective date of this act, not more than 6 months after the effective date of this act. (c) Is conducted by an independent and impartial third party with no financial or legal conflicts of interests related to the use of the tool. (d) Identifies and describes the attributes and modeling techniques that the tool uses to produce outputs. (e) Evaluates whether the attributes and modeling techniques described in subdivision (d) are a scientifically valid means of evaluating a covered individual's performance or ability to perform the essential functions of a role, and whether those attributes may function as a proxy for belonging to a protected class under the Elliot-Larsen civil rights act, 1976 PA 453, MCL 37.2101 to 37.2804. (f) Considers, identifies, and describes both of the following that may result in a disparate impact on a covered individual based on the covered individual's qualified characteristic, and what actions may be taken by the employer to reduce or remedy any disparate impact. (i) Any disparities in the data used to train or develop the tool. (ii) Any outputs produced by the tool. (g) Evaluates whether the use of the tool may limit accessibility for covered individuals with disabilities, or for covered individuals with any specific disability, and what actions may be taken by the employer to reduce or remedy the limit on accessibility. (h) Considers and describes potential sources of adverse impact against covered individuals or groups based on a qualified characteristic that may arise after the tool is implemented. (i) Identifies and describes any other assessment of risks of discrimination or a disparate impact of the tool on covered individuals or groups based on a qualified characteristic, and what actions may be taken to reduce or remedy that risk. (j) For any finding of a disparate impact or limit on accessibility, evaluates whether the data set, attribute, or feature of the tool at issue is the least discriminatory method of assessing a covered individual's performance or ability to perform job functions. (k) Considers and describes any other ways in which the tool could result in a violation of applicable law and, for any finding that a violation of law may occur, any necessary or appropriate steps to prevent a violation. (l) Considers and describes whether use of the tool may negatively affect a covered individual's privacy or job quality, including wages, hours, and working conditions. (2) Not more than 60 days after an employer completes an assessment, the employer shall do both of the following: (a) Submit the assessment in its entirety or in an accessible summary form to the department for the department to include in a public registry of impact assessments. (b) Distribute the assessment to covered individuals who may be subject to the tool. (3) An employer shall conduct or commission subsequent impact assessments each year in which the electronic monitoring tool or automated decisions tool is in use. 
Subsequent impact assessments must comply with the requirements of subsection (1), as applicable, and must assess and describe any change in the validity or disparate impact of the tool.
Pending 2025-08-01
Minn. Stat. § 363A.08, subd. 9(b)(1)
Plain Language
Employers may not use AI in any employment context — including recruitment, hiring, promotion, termination, discipline, training selection, or setting terms of employment — if the AI has the effect of discriminating against employees or applicants based on any protected characteristic under the Minnesota Human Rights Act. This is a disparate impact standard: the employer need not intend to discriminate; it is sufficient that the AI has a discriminatory effect. The protected characteristics are extensive and include race, color, creed, religion, national origin, sex, gender identity, marital status, public assistance status, familial status, local commission membership, disability, sexual orientation, and age. Because this is added as an unfair employment practice under existing Chapter 363A, all existing MHRA enforcement mechanisms, remedies, and defenses apply.
(b) It is an unfair employment practice, with respect to recruitment, hiring, promotion, renewal of employment, selection for training or apprenticeship, discharge, discipline, tenure, or the terms, privileges, or conditions of employment, for an employer to: (1) use artificial intelligence that has the effect of subjecting an employee or applicant for employment to discrimination because of race, color, creed, religion, national origin, sex, gender identity, marital status, status with regard to public assistance, familial status, membership or activity in a local commission, disability, sexual orientation, or age;
Failed 2025-10-01
Section 1(1)(e)-(f)
Plain Language
Health insurance issuers must ensure their AI tools do not discriminate against enrollees — directly or indirectly — in violation of state or federal anti-discrimination law, including Montana's Human Rights Act (MCA § 49-2-309). The tools must also be fairly and equitably applied, consistent with applicable HHS regulations and guidance. This imposes both a non-discrimination obligation and an affirmative fair-application requirement, though the bill does not specify a particular testing or assessment methodology.
(e) the use of the artificial intelligence, algorithm, or other software tool does not discriminate, directly or indirectly, against enrollees in violation of state or federal law, including 49-2-309; (f) the artificial intelligence, algorithm, or other software tool is fairly and equitably applied, including in accordance with any applicable regulations and guidance issued by the federal department of health and human services;
Enacted 2025-12-15
H-02.1H-02.2H-02.3
N.J.A.C. 13:16-2.1(a)-(b)
Plain Language
All covered entities in New Jersey must ensure that their practices and policies — even facially neutral ones adopted without discriminatory intent — do not actually or predictably result in a disproportionately negative effect on members of any protected class. A practice causing such disparate impact violates the LAD unless the entity can show it is necessary to achieve a substantial, legitimate, nondiscriminatory interest and no less discriminatory alternative exists. Notably, liability can attach before a policy is implemented if it has been approved, announced, or finalized and there is evidence of predictable disparate impact. Policies still in internal deliberation cannot be challenged.
(a) Practices and policies that have a disparate impact, as defined at (b) below, on members of a protected class, even if these practices and policies are not discriminatory on their face (that is, facially neutral) and are not motivated by discriminatory intent, will be considered discriminatory and a violation of the Act, unless it is shown that such practices and policies are necessary to achieve a substantial, legitimate, nondiscriminatory interest and there is no less discriminatory alternative that would achieve the same interest. (b) A practice or policy has a disparate impact where it actually or predictably results in a disproportionately negative effect on members of a protected class. A practice or policy predictably can have a disparate impact when there is evidence that the practice or policy will have a disparate impact even though the practice or policy has not yet been implemented, if the practice or policy has been approved, announced, or otherwise finalized. However, a practice or policy that is simply being debated or deliberated internally by a covered entity cannot be challenged pursuant to this chapter before it is implemented, approved, announced, or otherwise finalized.
Enacted 2025-12-15
H-02.1
N.J.A.C. 13:16-3.2(c)(1)-(3)
Plain Language
Employers using automated employment decision tools — including AI resume screeners, facial analysis technology for interviews, and scheduling filters — must ensure these tools do not create a disparate impact on protected classes. The rules establish that tools trained on non-representative data (e.g., a mostly white, cisgender male workforce) may produce biased outputs. Critically, subsection (c)(3) effectively requires pre-deployment testing: an employer's use of an automated tool that has not been adequately tested and shown not to adversely affect members of a protected class may itself give rise to disparate impact liability. (A minimal testing sketch follows the rule text below.) Scheduling-based automated tools must also include a reasonable accommodation request mechanism. Facial analysis technology is flagged as particularly high-risk for bias against people with darker skin, people with disabilities, and people who wear religious headwear or maintain religiously mandated facial hair.
(c) Automated employment decision technology practices are as follows: 1. The use of automated employment decision tools to make employment decisions, including, but not limited to, decisions related to advertising, recruiting, screening, interviewing, hiring, and compensation, or any other terms, conditions, or privileges of employment, may have a disparate impact on applicants and employees based on their race, national origin, gender, disability, religion, and other protected characteristics. By way of example, but not limitation, an automated employment decision tool that uses data on a company's current employees to inform a search for candidates may have a disparate impact on members of protected classes that are not well represented in that company or industry. If most current employees at a computer science company are white, cisgender men, an automated employment decision tool that assesses applicants based on that pool may score women applicants lower because their resumes list "women's field hockey" rather than "football," or score Black applicants lower because their resumes list "Black Student Alliance," an organization in which the company's current employees are less likely to have been involved; 2. The use of an automated employment decision tool that limits or screens out applicants based on their schedule may have a disparate impact on applicants based on their religion, disability, or medical condition and must include a mechanism for applicants to request a reasonable accommodation. By way of example, but not limitation, an application asking if an applicant is available to work a proposed schedule of Monday through Saturday may screen out applicants who answer the question in the negative due to religious practices they engage in on Saturdays; and 3. An employer's use of an automated employment decision tool that has not been adequately tested and shown to not adversely affect people in a protected class before its use may have a disparate impact on members of that protected class. By way of example, but not limitation, an employer's use of facial analysis technology to detect personality traits during virtual interviews is likely to result in lower scores for interviewees whose facial expressions the tools have not been tested on and designed to read. If the technology was tested exclusively or predominantly on white people with no disabilities, then use of the technology may disproportionately impact interviewees with darker skin or interviewees with disabilities because the technology cannot match their facial expressions to those programmed into the tool and may not account for interviewees who cannot make certain facial expressions. i. The use of facial analysis technology may disproportionately impact interviewees wearing religious headwear or maintaining religiously mandated facial hair if the technology has not been tested on people with similar religious practices.
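To make the pre-deployment testing expectation in subsection (c)(3) concrete, the sketch below computes per-group selection rates and impact ratios over pilot or historical outcomes. The record shape is an assumption, and the 0.8 screen is the four-fifths rule from the federal Uniform Guidelines, used here purely as an illustrative benchmark rather than a threshold these rules adopt.

```python
# Minimal sketch: adverse impact check on screening-tool outcomes.
# Assumes a list of (group, selected) pairs; all names are illustrative.
from collections import defaultdict

def selection_rates(records):
    """Per-group selection rates from (group, selected) pairs."""
    totals, hits = defaultdict(int), defaultdict(int)
    for group, selected in records:
        totals[group] += 1
        hits[group] += int(selected)
    return {g: hits[g] / totals[g] for g in totals}

def impact_ratios(rates):
    """Each group's rate relative to the most-selected group's rate."""
    top = max(rates.values())
    return {g: r / top for g, r in rates.items()}

# Hypothetical pilot outcomes; real testing would use actual applicant data.
records = [("group_a", True), ("group_a", True), ("group_a", False),
           ("group_b", True), ("group_b", False), ("group_b", False)]
rates = selection_rates(records)
for group, ratio in impact_ratios(rates).items():
    flag = "REVIEW" if ratio < 0.8 else "ok"  # four-fifths rule as a screen
    print(f"{group}: rate={rates[group]:.2f} ratio={ratio:.2f} {flag}")
```

A group falling below the screen would warrant investigation and mitigation before deployment, with the methodology and results retained as documentation.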
Enacted 2025-12-15
N.J.A.C. 13:16-2.4(e)
Plain Language
If a covered entity uses an outside vendor's products, systems, or procedures — including third-party AI tools, scoring algorithms, or screening products — and those products cause a disparate impact, the entity cannot disclaim liability by pointing to the vendor. The covered entity must take reasonable steps to ensure that the vendor's tools comply with the LAD and these rules. This creates a vendor due diligence obligation: employers, housing providers, and other covered entities must affirmatively evaluate whether third-party tools they adopt produce discriminatory outcomes before and during use.
(e) If a respondent's practice or policy that results in a disparate impact based on a protected characteristic relies on conduct, standards, products, procedures, or systems of an outside person or vendor, the respondent must take reasonable steps to ensure that the outside person or vendor's conduct, standards, products, procedures, or systems are consistent with the Act and this chapter.
Enacted 2025-12-15
H-02.1H-02.2
N.J.A.C. 13:16-2.2(a)-(f)
Plain Language
In employment, public accommodations, and contracting contexts, a three-step burden-shifting framework applies to disparate impact claims. First, the complainant must present empirical (not speculative) evidence that the challenged practice has a disparate impact. Second, the respondent must demonstrate the practice is necessary to achieve a substantial, legitimate, nondiscriminatory interest — in employment, this means job-related and consistent with business necessity. Third, even if justified, the practice is unlawful if the complainant can identify a less discriminatory alternative. For product counsel, this means that any AI or automated system deployed in employment, public accommodations, or contracting must be defensible under all three steps: you need empirical evidence the tool does not cause disparate impact, a documented business necessity justification, and analysis showing no less discriminatory alternative was available. (A sketch of one common form of statistical evidence follows the rule text below.)
(a) A complainant challenging a practice or policy of a covered entity must show the practice or policy challenged has a disparate impact on members of a protected class. (b) In the employment, public accommodations, and contracting contexts, if the complainant meets the burden of proof at (a) above, the respondent has the burden of showing that the challenged practice or policy is necessary to achieve a substantial, legitimate, nondiscriminatory interest. In the employment context, whether a practice or policy is necessary to achieve a substantial, legitimate, nondiscriminatory interest is equivalent to whether the practice or policy is job related and consistent with a legitimate business necessity. A practice or policy is job related when it bears a demonstrable relationship to successful performance of the job and measures the person's fitness for the specific job. (c) In the employment, public accommodations, and contracting contexts, if the respondent meets the burden at (b) above, the complainant has the burden of showing that there is a less discriminatory alternative means of achieving the substantial, legitimate, nondiscriminatory interest. (d) To meet its burden of proof at (a), (b), or (c) above, a party must provide empirical evidence, meaning evidence that is not hypothetical or speculative, to support its allegations. For example, a complainant would not meet its burden to show an employment policy has a disparate impact on job applicants based on gender by speculating that the policy harms women more than men, but could meet its burden by providing empirical evidence, which could include applicant files or data or applicant selection rates by gender. Anecdotal evidence, while not sufficient on its own, may be introduced along with empirical evidence. For example, a complainant would not meet its burden to show an employment policy has a disparate impact on job applicants based on gender by solely providing that they know women who applied and did not receive a position but men who did. However, a complainant could introduce anecdotal evidence along with empirical evidence, such as applicant selection rates by gender. (e) The opposing party may rebut whether the party with the burden of proof at (a), (b), or (c) above has met its burden. (f) Additional proof may be required when challenging or defending particular practices or policies. Such requirements are noted in this chapter, where relevant.
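Because paragraph (d) requires empirical rather than speculative evidence, selection-rate comparisons are often paired with a statistical significance test. The sketch below runs a two-proportion z-test on hypothetical applicant counts; the figures and the 1.96 critical value are illustrative assumptions, not anything the rule prescribes.

```python
# Minimal sketch: two-proportion z-test on selection rates by group.
# All counts are hypothetical; a real analysis would use actual applicant data.
import math

def two_proportion_z(success_a, total_a, success_b, total_b):
    """Z statistic for the difference between two selection rates."""
    p_a, p_b = success_a / total_a, success_b / total_b
    pooled = (success_a + success_b) / (total_a + total_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / total_a + 1 / total_b))
    return (p_a - p_b) / se

# Hypothetical counts: 120 of 400 group-A applicants selected vs. 80 of 400.
z = two_proportion_z(120, 400, 80, 400)
print(f"z = {z:.2f}; |z| > 1.96 suggests a statistically significant disparity")
```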
Enacted 2025-12-15
H-02.1
N.J.A.C. 13:16-2.4(b)(1)-(2), (c)
Plain Language
When defending a practice that has been shown to cause disparate impact, the covered entity must prove two things: (1) the practice serves a core interest directly related to the entity's function that is genuine and non-pretextual and does not itself discriminate, and (2) the practice actually carries out that interest effectively. This is a case-specific, fact-based inquiry — generic justifications will not suffice. Notably, pursuing diversity or increasing access for underrepresented groups can itself constitute a legitimate justification. For AI tool deployers, this means you must be prepared to demonstrate with evidence that each automated tool serves a genuine business function and actually achieves its stated purpose.
(b) To establish that a challenged practice or policy is necessary to achieve a substantial, legitimate, nondiscriminatory interest, a respondent must establish that: 1. The practice or policy is necessary to achieve one or more substantial, legitimate, nondiscriminatory interests, where "substantial interest" means a core interest of the entity that has a direct relationship to the function of that entity, "legitimate" means that a justification is genuine and not false or pretextual, and "nondiscriminatory" means that the justification for a challenged practice or policy does not itself discriminate based on a protected characteristic; and 2. The practice or policy effectively carries out the identified interest. (c) The determination of whether an interest is substantial, legitimate, and nondiscriminatory requires a case-specific, fact-based inquiry. An interest in achieving diversity or increasing access for underrepresented or underserved members of a protected class may constitute a substantial, legitimate, nondiscriminatory interest.
Enacted 2025-12-15
H-02.1
N.J.A.C. 13:16-3.1(a)-(c)
Plain Language
All employment practices — from hiring and screening to compensation and termination — are subject to disparate impact analysis. Employers, labor organizations, and employment agencies must ensure their practices are job-related and consistent with business necessity if challenged, and must be prepared to show no less discriminatory alternative exists. Affirmative recruitment efforts to attract underrepresented groups are expressly permitted and will not create liability under this chapter. For AI tool deployers in the employment context, every automated screening, scoring, or decision tool must be defensible as job-related and necessary.
(a) Employment practices and policies may be unlawful if they have a disparate impact on members of a protected class. An employment practice or policy that has a disparate impact is prohibited unless, in accordance with N.J.A.C. 13:16-2.2, a respondent shows it is necessary to achieve a substantial, legitimate, nondiscriminatory interest. Whether an employment practice or policy is necessary to achieve a substantial, legitimate, nondiscriminatory interest is equivalent to whether the practice or policy is job related and consistent with a legitimate business necessity. An employment practice or policy may still be prohibited if necessary to achieve a substantial, legitimate, nondiscriminatory interest if a complainant shows there is a less discriminatory alternative that would achieve the same interest. (b) Nothing in this subchapter shall preclude affirmative efforts to utilize recruitment practices to attract an individual who is a member of an underrepresented or underserved member of a protected class covered by the Act. (c) This subchapter applies to the practices and policies of employers, labor organizations, employment agencies, and other covered entities.
Failed 2026-02-01
H-02.1H-02.3
Sec. 3(1)(a)-(b)
Plain Language
Developers of high-risk AI systems must exercise reasonable care to protect consumers from known risks of algorithmic discrimination arising from the system's intended and contracted uses. Compliance with all of Section 3's developer obligations creates a rebuttable presumption that reasonable care was used, but only in AG enforcement actions. Bias self-testing and diversity-expansion efforts are expressly carved out from the definition of algorithmic discrimination.
(1)(a) On and after February 1, 2026, a developer of a high-risk artificial intelligence system shall use reasonable care to protect consumers from any known risks of algorithmic discrimination arising from the intended and contracted uses of the high-risk artificial intelligence system. (b) In any enforcement action brought on or after February 1, 2026, by the Attorney General pursuant to section 7 of this act, there is a rebuttable presumption that a developer used reasonable care as required under this section if the developer complied with this section.
Failed 2026-02-01
H-02.1H-02.3
Sec. 4(1)(a)-(b)
Plain Language
Deployers of high-risk AI systems must exercise reasonable care to protect consumers from known algorithmic discrimination risks. Compliance with all deployer obligations in Section 4 creates a rebuttable presumption of reasonable care, applicable only in AG enforcement actions. This is the deployer counterpart to the developer's reasonable care obligation in Section 3(1).
(1)(a) On and after February 1, 2026, a deployer of any high-risk artificial intelligence system shall use reasonable care to protect consumers from each known risk of algorithmic discrimination. (b) In any enforcement action brought on or after February 1, 2026, by the Attorney General pursuant to section 7 of this act, there is a rebuttable presumption that a deployer of a high-risk artificial intelligence system used reasonable care as required under this section if the deployer complied with this section.
Failed 2026-02-01
H-02.3H-02.10
Sec. 4(3)(a)-(f)
Plain Language
Deployers must complete an impact assessment for each high-risk AI system deployed on or after February 1, 2026, and within 90 days of any intentional and substantial modification. The assessment must cover: system purpose and use cases, deployment context, benefits, algorithmic discrimination risk analysis and mitigation, data input/output categories, customization data, performance metrics, transparency measures, and post-deployment monitoring safeguards. Post-modification assessments must also disclose whether actual use deviated from the developer's intended use. A single assessment may cover a comparable set of systems, and an assessment completed under another law satisfies this requirement if it is reasonably similar in scope and effect. Deployers must retain the most recent assessment and all associated records, plus all prior assessments, for at least three years after final deployment. Deployers that meet the exemption criteria in Section 4(6) are exempt. (A structured-record sketch of the required fields follows the bill text below.)
(3)(a) Except as otherwise provided in this subsection or subsection (6) of this section: (i) An impact assessment shall be completed for each high-risk artificial intelligence system deployed on or after February 1, 2026. Such impact assessment shall be completed by the deployer or by a third party contracted by the deployer; and (ii) On and after February 1, 2026, for each deployed high-risk artificial intelligence system, a deployer or a third party contracted by the deployer shall complete an impact assessment within ninety days after any intentional and substantial modification to such high-risk artificial intelligence system is made available. (b) An impact assessment completed pursuant to this subsection shall include to the extent reasonably known by or available to the deployer: (i) A statement by the deployer disclosing: (A) The purpose of the high-risk artificial intelligence system; (B) Any intended-use case for the high-risk artificial intelligence system; (C) The deployment context of the high-risk artificial intelligence system; and (D) Any benefit afforded by the high-risk artificial intelligence system; (ii) An analysis of whether the deployment of the high-risk artificial intelligence system poses any known risk of algorithmic discrimination and, if so, the nature of the algorithmic discrimination and the steps that have been taken to mitigate any such risk; (iii) A high-level summary of the categories of data the high-risk artificial intelligence system processes as inputs and the outputs the high-risk artificial intelligence system produces; (iv) If the deployer used data to customize the high-risk artificial intelligence system, an overview of the categories of data the deployer used to customize the high-risk artificial intelligence system; (v) Any metric used to evaluate the performance and any known limitation of the high-risk artificial intelligence system; (vi) A description of any transparency measure taken concerning the high-risk artificial intelligence system, including any measure taken to disclose to a consumer when the high-risk artificial intelligence system is in use; and (vii) A description of each postdeployment monitoring and user safeguard provided concerning the high-risk artificial intelligence system, including the oversight, use, and learning process established by the deployer to address any issue that arises from the deployment of the high-risk artificial intelligence system. (c) Any impact assessment completed pursuant to this subsection following an intentional and substantial modification to a high-risk artificial intelligence system on or after February 1, 2026, shall include a statement that discloses the extent to which the high-risk artificial intelligence system was used in a manner that was consistent with or varied from any use of the high-risk artificial intelligence system intended by the developer. (d) A single impact assessment may address a comparable set of high-risk artificial intelligence systems deployed by a deployer. (e) Any impact assessment completed to comply with another applicable law or regulation by a deployer or by a third party contracted by the deployer shall satisfy this subsection if such impact assessment is reasonably similar in scope and effect to the impact assessment that would otherwise be completed pursuant to this subsection. 
(f) A deployer shall maintain: (i) The most recently completed impact assessment required under this subsection for each high-risk artificial intelligence system of the deployer; (ii) Each record concerning each such impact assessment; and (iii) For at least three years following the final deployment of each high-risk artificial intelligence system, each prior impact assessment, if any, and each record concerning such impact assessment.
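For deployers operationalizing Sec. 4(3), one way to keep assessments reviewable and retainable is a structured record mirroring the content enumerated in paragraph (b) and the retention window in paragraph (f). The sketch below is one possible shape, not a format the bill prescribes; all field names are illustrative.

```python
# Minimal sketch: a record mirroring Sec. 4(3)(b)'s required content and the
# three-year retention window in Sec. 4(3)(f). Names are not statutory terms.
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class ImpactAssessment:
    system_name: str
    purpose: str                               # Sec. 4(3)(b)(i)(A)
    intended_use_cases: list[str]              # Sec. 4(3)(b)(i)(B)
    deployment_context: str                    # Sec. 4(3)(b)(i)(C)
    benefits: list[str]                        # Sec. 4(3)(b)(i)(D)
    discrimination_risk_analysis: str          # Sec. 4(3)(b)(ii)
    input_data_categories: list[str]           # Sec. 4(3)(b)(iii)
    output_categories: list[str]               # Sec. 4(3)(b)(iii)
    customization_data_categories: list[str]   # Sec. 4(3)(b)(iv)
    performance_metrics: dict[str, float]      # Sec. 4(3)(b)(v)
    known_limitations: list[str]               # Sec. 4(3)(b)(v)
    transparency_measures: list[str]           # Sec. 4(3)(b)(vi)
    post_deployment_monitoring: str            # Sec. 4(3)(b)(vii)
    completed_on: date = field(default_factory=date.today)

def retention_deadline(final_deployment: date) -> date:
    """Earliest discard date under Sec. 4(3)(f): three years after final deployment."""
    return final_deployment + timedelta(days=3 * 365)
```

Serialized records of this shape, kept current each assessment cycle, give a deployer something concrete to produce when a regulator requests the assessment and its associated records.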
Pending
H-02.1H-02.2H-02.3
Section 2(j)
Plain Language
Employers, public entities, vendors, and contractors may not use AI decision systems, monitoring tools, or surveillance to obtain, infer, analyze, or factor into employment or public benefits decisions any data about protected-class characteristics (as defined under New Jersey's Law Against Discrimination), union membership or advocacy, or any other characteristic not directly related to work performance, work qualifications, or benefits eligibility. This prohibition extends to both direct inputs and inferred attributes — systems may not be designed to derive protected characteristics from non-protected data for use in decisions. A narrow carve-out permits ABSDS to retain information essential to providing specific public services (e.g., student academic records, individual health information) and information required to confirm beneficiary identity or determine eligibility.
No employer or public entity, vendor, or contractor acting on behalf of the employer or public entity shall: j. Use, deploy, develop, produce, sell, or offer for sale, an EMT or other surveillance of an employee, service beneficiary, or applicant for employment, or use, deploy, develop, produce, sell, or offer for sale, an AEDS or ABSDS, to obtain, infer, analyze, or use in making a hiring decision or other employment-related decision or decision regarding public benefits or services, any data or information about the employee's, service beneficiary's, or applicant for employment's being in or perceived to be in a classification, or having or being perceived to have a characteristic, protected under section 11 of P.L.1945, c.169 (C.10:5-12), or information about present or past union membership or advocacy or any other classification or characteristic, other than unlawful behavior, of the employee or applicant for employment which is not directly related to work performance or work qualifications, or of any other classification or characteristic of a service beneficiary which is not specifically required to confirm the identity of the beneficiary or determine eligibility for public benefits or services. An employer or public entity may not, in providing employee, applicant, or service beneficiary data or information for the AEDS or ABSDS or in directly making employment-related decisions or decisions about public benefits or services, use data or information about employee, applicant, or beneficiary classification or characteristics as identified in this subsection. It shall not be a violation of this subsection for an ABSDS to retain and use information essential to providing specific public services, such as student academic records in educational services and individual health information in health services, and information specifically required to determine eligibility for the public benefits or services;
Pending
H-02.3H-02.6H-02.7
Section 3(a)–(e)
Plain Language
No employer or public entity may implement an AEDS or EMT unless: (1) the system is verified through pretesting, validation, and impact assessment to serve one of six allowable purposes (assisting work functions, quality assurance, performance assessment, legal compliance, health/safety, or wage/benefit administration); (2) the system is limited to the least invasive means, smallest necessary scope of employees and data, and minimum collection frequency needed for those purposes; (3) data access is restricted to authorized agents; and (4) a pre-deployment independent impact assessment — conducted by an independent auditor (or by the Department of Labor for public employees) — affirms compliance with all substantive prohibitions in the act, including disparate impact analysis of training data across protected characteristics and verification of human oversight procedures. Vendors must provide the auditor or department with full system documentation, including design specifications, training data sources, accuracy/error rate analysis, and quantified estimates of employment displacement impacts. The five-year independence standard for auditors is notably strict.
An employer or public entity, or vendor acting on behalf of an employer or public entity shall not implement the use of an AEDS or an EMT or other surveillance of employees, or use an AEDS or information obtained through the EMT when making employment-related decisions regarding employees or applicants for employment, unless all of the following conditions are met: a. The EMT or other surveillance, and the AEDS, are primarily intended and demonstrably verified through appropriate pretesting, validation, and relevant impact assessments conducted pursuant to this section to accomplish any of the following allowable purposes: (1) assisting an employee to accomplish essential work functions; (2) ensuring the quality of goods and services; (3) making periodic assessments of employee performance, including to assist in making employment-related decisions; (4) ensuring compliance with provisions of employment, labor, or other relevant laws; (5) protecting the health, safety, or security of employees and the public; or (6) administering wages and benefits. b. The EMT and surveillance and the AEDS shall: (1) be limited to what is necessary to accomplish the allowable purposes specified in subsection a. of this section; (2) be used exclusively to accomplish those purposes; (3) use the means least invasive to employees or applicants for employment needed to accomplish those purposes; (4) be limited to the smallest number of employees and least amount of data and information needed to accomplish those purposes, and (5) have data and information collected no more frequently than is necessary to accomplish those purposes. c. The data and information about an employee or applicant collected by an EMT or other surveillance or used by the AEDS shall be accessed only by authorized agents of the employer, the public entity, or the employee or the employee's authorized representative. d. Prior to deployment or implementation, an objective and impartial impact assessment of the AEDS or EMT, including an assessment of the economic impacts of factors such as wages, hours, benefits, work opportunities, and advancement, has been conducted by an independent auditor, or, if the AEDS or EMT is to be applied to public employees, by the department, in which the auditor or the department determines and affirms in a report, with supporting documentation indicating: (1) that the EMT requires the implementation of procedures to ensure that it is used in a manner that complies with the requirements of subsections c., d., e., f., and g. of section 2 of this act; (2) that the AEDS or EMT complies with the requirements of subsections a., b., h. i., k. and l. of section 2 of this act and subsections a. and b. of this section, including the implementation of effective procedures to remedy potential risks to worker rights, including privacy, health and safety, dignity and autonomy, and to prevent inhibiting legally protected activity, including organizing and collective bargaining. (3) that the AEDS or EMT complies with the requirements of subsection j. 
of section 2 of this act, including that the auditor or the department, with respect to classifications and characteristics identified in that subsection of employees or applicants for employment, considers, identifies, and describes any disparities in the data used to train or develop the AEDS that may result in the outputs of the AEDS having a disparate, adverse impact on employees or applicants, and that the auditor or the department determines that the AEDS includes provisions to effectively remedy any such disparate, adverse impact; and (4) that the AEDS or EMT requires the implementation of effective procedures for monitoring, feedback, and ongoing human oversight, including full compliance with the requirements of section 9 of this act, as needed to prevent or remedy any potential discriminatory, biased, inaccurate, or harmful outcomes. e. The vendor has provided the auditor or the department with access to all information needed to conduct the impact assessment of either an AEDS or an EMT, including, in the case of an AEDS: (1) all documentation about its design and development, its technical specifications, the sources of data used to develop and train it, the individuals involved in its development, and a historical record of past versions of the AEDS; (2) a detailed description of its intended purpose, deployment context, rationale for use, the categories, sources, and methods of data it utilizes; (3) outputs and the types of employment-related decisions in which those outputs may be used; (4) what the benefits and effects are of using the AEDS to supplement non-automated decision-making, and the impacts its use may have on overall efficiency and output for the public entity or employer that deploys it, including quantified estimates of: the amounts of cost savings for the employer or public entity; any anticipated reductions of employment by the employer or public entity; any offset to the employment reductions caused by new employment related to the human oversight requirements of section 9 of this act; and the percentage of the cost savings attributable to reductions of employment, and these estimates shall be featured prominently in the summary of the impact assessment submitted to the department pursuant to subsection g. of this section and section 4 of this act and included in the notices provided to employees or service beneficiaries pursuant to section 6 of this act; and (5) an analysis of the accuracy, reliability, validity, and error rates of the AEDS, including the reasonably foreseeable effects of tuning, retraining, or modification.
Pending
H-02.3H-02.10
Section 3(f)
Plain Language
Impact assessments must be conducted no more than one year before deployment. For systems already in use when the act takes effect, the assessment must be completed within six months. Assessments must be updated whenever there is a substantial change to the data categories, metrics, thresholds, or benchmarks used by the system, or any substantial modification, retraining, repurposing, or updating that could change outputs. Updated assessments are subject to the same requirements as initial assessments, and the system must cease operating until the updated assessment is complete and approved.
The impact assessment shall be conducted not more than one year prior to deployment. For an AEDS or EMT already in use on the effective date of this act, the impact assessment shall be completed within six months after the effective date. Impact assessments shall be updated upon any substantial change in the categories, sources, quotas, metrics, thresholds, or benchmarks used by the EMT or the AEDS, or any substantial modification, retraining, repurposing, or updating which may change outputs of an AEDS. Any subsequent impact assessment or update conducted pursuant to this subsection shall be subject, in the same manner as an initial impact assessment, to all of the requirements of subsections d., e., g., and h. of this section. Until those requirements are met, the AEDS or EMT shall not be permitted to operate.
Pending
H-02.3H-02.4
Section 4(a)–(f)
Plain Language
Public entities may not implement an ABSDS for public benefits or services decisions unless the Department of Labor and Workforce Development has conducted and affirmed a pre-deployment impact assessment covering compliance with all substantive prohibitions, disparate impact analysis of training data across protected characteristics, and human oversight procedures — including protections against incorrect fraud-based benefit denials. Vendors must provide the department with full system documentation, including design, training data, accuracy/error rate analysis, and quantified employment displacement estimates. Data access is restricted to authorized agents. Assessments must be conducted no more than one year before deployment (for systems already in use, within one year after the effective date), updated upon substantial modifications, and submitted, with an accessible summary, to the department's public registry within 60 days of completion. The vendor pays the department's full direct costs of the assessment. The system may not operate until all requirements are met.
A public entity, or vendor acting on behalf of a public entity, shall not implement the use an ABSDS, or use the ABSDS when making decisions regarding provision of public benefits or services to service beneficiaries, unless all of the following conditions are met: a. An objective and impartial impact assessment of the ABSDS, including an assessment of its economic impacts of factors such as wages, hours, benefits, work opportunities, and advancement, has been conducted by the department, in which the department determines and affirms in a report, with supporting documentation indicating: (1) that the ABSDS complies with the requirements of subsections a., b., k. and l. of section 2 of this act, including by requiring the implementation of effective procedures to remedy potential risks to the rights of service beneficiaries, including privacy, health and safety, dignity and autonomy, and to prevent inhibiting legally protected activity; (2) that the ABSDS complies with the requirements of subsection j. of section 2 of this act, including that the department, with respect to classifications and characteristics identified in that subsection of service beneficiaries, considers, identifies, and describes any disparities in the data used to train or develop the ABSDS that may result in the outputs of the ABSDS having a disparate, adverse impact on service beneficiaries, and that the department determines that the ABSDS includes provisions to effectively remedy any such disparate, adverse impact; and (3) that the ABSDS requires the implementation of effective procedures for monitoring, feedback, and ongoing human oversight, including full compliance with the requirements of section 9 of this act, as needed to prevent or remedy any potential discriminatory, biased, inaccurate, or harmful outcomes, including incorrect denials of public benefits or services based on mistaken claims of fraud by beneficiaries. b. The vendor has provided the department with access to all information needed to conduct the impact assessment of an ABSDS, including: (1) all documentation about its design and development, its technical specifications, the sources of data used to develop and train it, the individuals involved in its development, and a historical record of past versions of the ABSDS; (2) a detailed description of its intended purpose, deployment context, rationale for use, the categories, sources, and methods of data it utilizes; (3) outputs and the types of employment-related decisions in which such outputs may be used; (4) what the benefits and effects are of using the ABSDS to supplement non-automated decision-making, and the impacts its use may have on overall efficiency and output for the public entity that deploys it, including quantified estimates of: the amounts of savings for the public entity; any anticipated reductions of employment by the employer or public entity; any offset to the employment reductions caused by new employment related to the human oversight requirements of section 9 of this act; and the percentage of cost savings attributable to reductions of employment, and these estimates shall be featured prominently in the summary of the impact assessment submitted to the department pursuant to subsection e. 
of this section and section 4 of this act and included in the notices submitted to employees or service beneficiaries pursuant to section 6 of this act; and (5) an analysis of the accuracy, reliability, validity, and error rates of the ABSDS, including the reasonably foreseeable effects of tuning, retraining, or modification. c. The data and information used by the ABSDS shall be accessed only by authorized agents of the public entity or service beneficiary. d. The impact assessment shall be conducted not more than one year prior to deployment. For an ABSDS already in use on the effective date of this act, the impact assessment shall be completed within one year after the effective date. Impact assessments shall be updated upon any substantial change in the categories, sources, quotas, metrics, thresholds, or benchmarks used by the ABSDS, or any substantial modification, retraining, repurposing, or updating which may change outputs of an ABSDS. Any subsequent impact assessment or update conducted pursuant to this subsection shall be subject, in the same manner as an initial impact assessment, to all of the requirements of subsections a. b., and e. of this section. Until those requirements are met, the ABSDS shall not be permitted to operate. e. The report of the impact assessment shall include all of the information and data used in making its determinations, including the full data and information provided pursuant to subsections a. and b. of this section, and shall, within 60 days of its completion, be submitted in its entirety, together with an accessible summary of the report, to the department, for inclusion in a public registry of impact assessments maintained by the department, and to the vendor, who shall provide it to any public entity seeking to implement the ABSDS. Impact assessments in the public registry shall be made available to affected service recipients, entities, applicants for employment and their authorized representatives. f. The vendor shall pay the department the full amount of the direct costs of making the impact assessment of the ABSDS.
Pre-filed
H-02.3
Section 7(a)
Plain Language
Any AI system used in employment, housing, healthcare, education, criminal justice, or public services must undergo an algorithmic impact assessment before deployment. Unusually, the assessment is performed by the state Office of Information Technology rather than by the developer or deployer themselves. The methodology is left to OIT to determine. This is a pre-deployment gate — the AI system may not be deployed until the assessment is complete. Violations are subject to civil penalties under Section 8.
High-risk AI systems implemented in New Jersey shall: a. Undergo algorithmic impact assessments prior to deployment. The Office of Information Technology in, but not of, the Department of the Treasury, shall perform the impact assessments, in a manner to be determined by the Office of Information Technology.
Pre-filed
H-02.1
Section 9(a)-(b)
Plain Language
The Attorney General may investigate and enforce complaints about AI-driven discrimination (AI outputs exhibiting bias based on protected characteristics) and unreasonable AI workplace surveillance (AI-powered monitoring of employee behavior, computer usage, and physical movements). Enforcement piggybacks on the existing Law Against Discrimination and New Jersey Civil Rights Act penalty frameworks. Although largely an enforcement mechanism, the provision also functions as a substantive prohibition: entities deploying AI systems must ensure they neither produce discriminatory outputs nor conduct unreasonable employee surveillance. The 'unreasonable' standard for workplace surveillance is undefined and will likely be developed through AG enforcement actions.
a. The Office of the Attorney General shall investigate complaints related to AI-driven discrimination, unreasonable AI workplace surveillance, and claims of violations of civil rights protections related to AI. The Attorney General shall enforce penalties pursuant to the "Law Against Discrimination," P.L.1945, c.169 (C.10:5-1 et seq.), and the "New Jersey Civil Rights Act," P.L.2004, c.143 (C.10:6-1 et seq.) for violations of this section. b. As used in this section: "AI-driven discrimination" means output resulting from AI systems that exhibit biases against individuals based on age, race, religion, or other protected classes. "AI workplace surveillance" means the use of AI to monitor and analyze employee behavior and performance through the use of technology tools that track employee activities including computer usage and physical movements.
Pending 2026-02-02
H-02.1
Section 1.d.
Plain Language
Employers using AI video interview analysis to screen candidates for in-person interviews must collect race and ethnicity demographic data at two stages: (1) which applicants are and are not advanced to in-person interviews following AI analysis, and (2) which applicants are ultimately offered positions or hired. This is a data collection and recordkeeping obligation that feeds into the annual reporting requirement under subsection e. The data enables analysis of whether AI-driven screening produces racially disparate outcomes. (A tabulation sketch follows the bill text below.)
An employer that uses an artificial intelligence analysis of a video interview to determine whether an applicant will be selected for an in-person interview shall collect and report the following demographic data: (1) the race and ethnicity of applicants who are and are not afforded the opportunity for an in-person interview after the use of artificial intelligence analysis; and (2) the race and ethnicity of applicants who are offered a position or hired.
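The two collection points in subsection d. amount to a simple funnel tabulation. Below is a minimal sketch assuming applicant records carry self-reported race/ethnicity plus advancement and hire flags; the record shape is an assumption, as the bill prescribes no format.

```python
# Minimal sketch: tabulate race/ethnicity outcomes at the two stages the bill
# identifies -- advancement to in-person interview, then offer/hire.
from collections import Counter

def stage_report(applicants):
    """applicants: iterable of dicts with keys 'race_ethnicity',
    'advanced' (bool), and 'hired' (bool); keys are illustrative."""
    advanced, not_advanced, hired = Counter(), Counter(), Counter()
    for a in applicants:
        key = a["race_ethnicity"]
        (advanced if a["advanced"] else not_advanced)[key] += 1
        if a["hired"]:
            hired[key] += 1
    return {"advanced": dict(advanced),
            "not_advanced": dict(not_advanced),
            "hired": dict(hired)}

sample = [{"race_ethnicity": "X", "advanced": True, "hired": True},
          {"race_ethnicity": "Y", "advanced": False, "hired": False}]
print(stage_report(sample))
```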
Pending 2026-05-13
H-02.3
Section 7(a)
Plain Language
Any AI system used in employment, housing, healthcare, education, criminal justice, or public services in New Jersey must undergo an algorithmic impact assessment before deployment. Unusually, the assessment is performed by the state Office of Information Technology rather than by the deployer or developer — the statute delegates the manner of assessment entirely to OIT. This creates a government gatekeeping function for high-risk AI deployment. Violations are subject to $1,000–$2,000 civil penalties under section 8.
High-risk AI systems implemented in New Jersey shall: a. Undergo algorithmic impact assessments prior to deployment. The Office of Information Technology in, but not of, the Department of the Treasury, shall perform the impact assessments, in a manner to be determined by the Office of Information Technology.
Pending 2027-01-01
H-02.1H-02.2H-02.3H-02.6H-02.7
GBL § 1551(1)(a)-(b)
Plain Language
Developers of high-risk AI decision systems must exercise reasonable care to protect consumers from known or reasonably foreseeable algorithmic discrimination risks. To earn a rebuttable presumption of compliance in an AG enforcement action, a developer must (1) comply with the documentation requirements of § 1551 and (2) retain an independent third-party auditor from a list published annually by the AG to complete bias and governance audits. The AG must publish the first list of qualified auditors by January 1, 2026. Self-testing and diversity-expansion uses of high-risk systems are expressly excluded from the definition of algorithmic discrimination.
(a) Beginning on January first, two thousand twenty-seven, each developer of a high-risk artificial intelligence decision system shall use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination arising from the intended and contracted uses of a high-risk artificial intelligence decision system. In any enforcement action brought on or after such date by the attorney general pursuant to this article, there shall be a rebuttable presumption that a developer used reasonable care as required pursuant to this subdivision if: (i) the developer complied with the provisions of this section; and (ii) an independent third party identified by the attorney general pursuant to paragraph (b) of this subdivision and retained by the developer completed bias and governance audits for the high-risk artificial intelligence decision system. (b) No later than January first, two thousand twenty-six, and at least annually thereafter, the attorney general shall: (i) identify independent third parties who, in the attorney general's opinion, are qualified to complete bias and governance audits for the purposes of subparagraph (ii) of paragraph (a) of this subdivision; and (ii) publish a list of such independent third parties available on the attorney general's website.
Pending 2027-01-01
H-02.1H-02.3H-02.6H-02.7
GBL § 1552(1)(a)-(b)
Plain Language
Deployers of high-risk AI decision systems must exercise reasonable care to protect consumers from known or reasonably foreseeable algorithmic discrimination. A rebuttable presumption of compliance in AG enforcement actions is available if the deployer (1) complies with all § 1552 obligations and (2) retains an AG-identified independent auditor to complete bias and governance audits. The AG must publish and maintain the qualified auditor list beginning January 1, 2027. This mirrors the developer reasonable care obligation in § 1551(1) but applies to deployers.
(a) Beginning on January first, two thousand twenty-seven, each deployer of a high-risk artificial intelligence decision system shall use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination. In any enforcement action brought on or after said date by the attorney general pursuant to this article, there shall be a rebuttable presumption that a deployer of a high-risk artificial intelligence decision system used reasonable care as required pursuant to this subdivision if: (i) the deployer complied with the provisions of this section; and (ii) an independent third party identified by the attorney general pursuant to paragraph (b) of this subdivision and retained by the deployer completed bias and governance audits for the high-risk artificial intelligence decision system. (b) No later than January first, two thousand twenty-seven, and at least annually thereafter, the attorney general shall: (i) identify the independent third parties who, in the attorney general's opinion, are qualified to complete bias and governance audits for the purposes of subparagraph (ii) of paragraph (a) of this subdivision; and (ii) make a list of such independent third parties available on the attorney general's web site.
Pending 2027-01-01
H-02.3H-02.10
GBL § 1552(3)(a)-(e)
Plain Language
Deployers must complete impact assessments of high-risk AI decision systems before deployment, at least annually thereafter, and within 90 days of any intentional and substantial modification. Each assessment must cover: system purpose and benefits; algorithmic discrimination risks and mitigation; data inputs and outputs; customization data; performance metrics and limitations; transparency measures; and post-deployment monitoring and safeguards. Post-modification assessments must additionally disclose whether actual use deviated from developer-intended uses. A single assessment may cover comparable systems, and assessments completed under other laws may satisfy this requirement if reasonably similar in scope. All impact assessments and related records must be retained for at least three years after final deployment. Predetermined learning changes documented in the initial assessment are excluded from the 'intentional and substantial modification' trigger.
(a) Except as provided in paragraphs (c) and (d) of this subdivision and subdivision seven of this section: (i) a deployer that deploys a high-risk artificial intelligence decision system on or after January first, two thousand twenty-seven, or a third party contracted by the deployer, shall complete an impact assessment of the high-risk artificial intelligence decision system; and (ii) beginning on January first, two thousand twenty-seven, a deployer, or a third party contracted by the deployer, shall complete an impact assessment of a deployed high-risk artificial intelligence decision system: (A) at least annually; and (B) no later than ninety days after an intentional and substantial modification to such high-risk artificial intelligence decision system is made available. (b) (i) Each impact assessment completed pursuant to this subdivision shall include, at a minimum and to the extent reasonably known by, or available to, the deployer: (A) a statement by the deployer disclosing the purpose, intended use cases and deployment context of, and benefits afforded by, the high-risk artificial intelligence decision system; (B) an analysis of whether the deployment of the high-risk artificial intelligence decision system poses any known or reasonably foreseeable risks of algorithmic discrimination and, if so, the nature of such algorithmic discrimination and the steps that have been taken to mitigate such risks; (C) A description of: (I) the categories of data the high-risk artificial intelligence decision system processes as inputs; and (II) the outputs such high-risk artificial intelligence decision system produces; (D) if the deployer used data to customize the high-risk artificial intelligence decision system, an overview of the categories of data the deployer used to customize such high-risk artificial intelligence decision system; (E) any metrics used to evaluate the performance and known limitations of the high-risk artificial intelligence decision system; (F) a description of any transparency measures taken concerning the high-risk artificial intelligence decision system, including, but not limited to, any measures taken to disclose to a consumer that such high-risk artificial intelligence decision system is in use when such high-risk artificial intelligence decision system is in use; and (G) a description of the post-deployment monitoring and user safeguards provided concerning such high-risk artificial intelligence decision system, including, but not limited to, the oversight, use, and learning process established by the deployer to address issues arising from deployment of such high-risk artificial intelligence decision system. (ii) In addition to the statement, analysis, descriptions, overview, and metrics required pursuant to subparagraph (i) of this paragraph, an impact assessment completed pursuant to this subdivision following an intentional and substantial modification made to a high-risk artificial intelligence decision system on or after January first, two thousand twenty-seven, shall include a statement disclosing the extent to which the high-risk artificial intelligence decision system was used in a manner that was consistent with, or varied from, the developer's intended uses of such high-risk artificial intelligence decision system. (c) A single impact assessment may address a comparable set of high-risk artificial intelligence decision systems deployed by a deployer. 
(d) If a deployer, or a third party contracted by the deployer, completes an impact assessment for the purpose of complying with another applicable law or regulation, such impact assessment shall be deemed to satisfy the requirements established in this subdivision if such impact assessment is reasonably similar in scope and effect to the impact assessment that would otherwise be completed pursuant to this subdivision. (e) A deployer shall maintain the most recently completed impact assessment of a high-risk artificial intelligence decision system as required pursuant to this subdivision, all records concerning each such impact assessment and all prior impact assessments, if any, for a period of at least three years following the final deployment of the high-risk artificial intelligence decision system.
Pending 2027-01-01
H-02.8
GBL § 1552(4)
Plain Language
Deployers must conduct at least annual reviews of each deployed high-risk AI decision system to affirmatively verify it is not causing algorithmic discrimination. This is a distinct obligation from the impact assessment — it focuses on operational performance monitoring rather than forward-looking risk assessment. Reviews may be performed by the deployer directly or by a contracted third party. The § 1552(7) developer-assumption exemption may apply. (A look-back review sketch follows the statutory text below.)
Except as provided in subdivision seven of this section, a deployer, or a third party contracted by the deployer, shall review, no later than January first, two thousand twenty-seven, and at least annually thereafter, the deployment of each high-risk artificial intelligence decision system deployed by the deployer to ensure that such high-risk artificial intelligence decision system is not causing algorithmic discrimination.
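Unlike the forward-looking impact assessment, the § 1552(4) review looks back at what the deployed system actually did. Below is a minimal sketch of such an annual check, assuming the deployer logs each decision with a group label; the record shape and the 0.8 screen are illustrative assumptions, not statutory requirements.

```python
# Minimal sketch: annual look-back review of logged production outcomes.
from collections import defaultdict

def annual_review(decision_log, threshold=0.8):
    """decision_log: iterable of (group, favorable) logged outcomes.
    Returns groups whose favorable-outcome ratio falls below the screen."""
    totals, favorable = defaultdict(int), defaultdict(int)
    for group, ok in decision_log:
        totals[group] += 1
        favorable[group] += int(ok)
    rates = {g: favorable[g] / totals[g] for g in totals}
    top = max(rates.values())
    return [g for g, r in rates.items() if top and r / top < threshold]

# Hypothetical logged outcomes for the review period.
flagged = annual_review([("group_a", True), ("group_a", True),
                         ("group_b", True), ("group_b", False)])
print("groups needing investigation:", flagged)
```

Flagged groups would feed the deployer's mitigation and documentation workflow.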
Pending 2025-01-01
H-02.6H-02.7H-02.8
Real Prop. Law § 227-g(2)(a)-(b)
Plain Language
A landlord may not use an automated housing decision making tool to screen applicants unless an independent auditor conducts a disparate impact analysis at least annually assessing whether the tool adversely impacts any group based on sex, race, ethnicity, or other protected class. The analysis must differentiate between applicants who were selected and those who were not. A summary of the most recent analysis and the distribution date of the tool version it covers must be publicly posted on the landlord's website before the tool is used and must also be accessible through any housing listing on a digital platform where the landlord intends to use the tool. This combines an independent audit mandate, a periodic review cadence, and a public disclosure obligation. (An illustrative summary format follows the rule text below.)
It shall be unlawful for a landlord to implement or use an automated housing decision making tool, including the use of an automated housing decision making tool that issues a score, classification, or recommendation, that fails to comply with the following provisions: (a) No less than annually, a disparate impact analysis shall be conducted to assess the actual impact of any automated housing decision making tool used by any landlord to select applicants for housing within the state. Such disparate impact analysis shall be provided to the landlord. (b) A summary of the most recent disparate impact analysis of such tool as well as the distribution date of the tool to which the analysis applies shall be made publicly available on the website of the landlord prior to the implementation or use of such tool. Such summary shall also be made accessible through any listing for housing on a digital platform for which the landlord intends to use an automated housing decision making tool to screen applicants for housing.
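Subsection (b) turns the annual analysis into a public artifact. Below is a hedged sketch of the summary a landlord might post, assuming the analysis yields per-group selection rates; the JSON structure and every field name are illustrative, as the statute prescribes no format.

```python
# Minimal sketch: assemble the publicly postable summary -- latest analysis
# results plus the distribution date of the tool version analyzed.
import json
from datetime import date

def public_summary(tool_version, distributed_on, analysis_date, group_rates):
    """Build a plain, machine-readable summary for website posting."""
    top = max(group_rates.values())
    return json.dumps({
        "tool_version": tool_version,
        "tool_distribution_date": distributed_on.isoformat(),
        "analysis_date": analysis_date.isoformat(),
        "selection_rates": group_rates,
        "impact_ratios": {g: round(r / top, 2) for g, r in group_rates.items()},
    }, indent=2)

print(public_summary("v2.1", date(2024, 6, 1), date(2025, 5, 30),
                     {"group_a": 0.42, "group_b": 0.35}))
```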
Pending 2025-04-27
H-02.1H-02.3
State Tech. Law § 505(1)-(4)
Plain Language
Designers, developers, and deployers must take proactive and continuous measures to prevent algorithmic discrimination. Required measures include equity assessments during system design, use of representative data, protection against proxy variables for demographic features, and accessibility assurance for persons with disabilities. Automated systems must undergo pre-deployment and ongoing disparity testing and mitigation under clear organizational oversight. The list of protected characteristics is expansive, including all New York Human Rights Law categories plus any other classification protected by law. (A proxy-screening sketch follows the bill text below.)
1. No New York resident shall face discrimination by algorithms, and all automated systems shall be used and designed in an equitable manner.
2. The designers, developers, and deployers of automated systems shall take proactive and continuous measures to protect New York residents and communities from algorithmic discrimination, ensuring the use and design of these systems in an equitable manner.
3. The protective measures required by this section shall include proactive equity assessments as part of the system design, use of representative data, protection against proxies for demographic features, and assurance of accessibility for New York residents with disabilities in design and development.
4. Automated systems shall undergo pre-deployment and ongoing disparity testing and mitigation, under clear organizational oversight.
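Subdivision four requires pre-deployment and ongoing disparity testing without naming a statistical method. One conventional choice, sketched below purely as an illustration, is a two-proportion z-test on favorable-outcome rates between groups; the 0.05 significance level and all example figures are assumptions, not the bill's.

```python
# Hedged sketch of recurring disparity testing: a two-proportion z-test
# comparing favorable-outcome rates between two groups.
import math

def two_proportion_ztest(success_a, n_a, success_b, n_b):
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Example: 120/400 favorable outcomes for group A vs. 80/400 for group B.
z, p = two_proportion_ztest(120, 400, 80, 400)
print(f"z = {z:.2f}, p = {p:.4f}")
if p < 0.05:  # illustrative significance level
    print("Statistically significant disparity: investigate and mitigate.")
```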
Pending 2025-04-27
H-02.5, H-02.6, H-02.7
State Tech. Law § 505(5)-(6)
Plain Language
All automated systems must undergo independent evaluations, and the results must be documented in plain-language algorithmic impact assessments that include disparity testing results and descriptions of mitigation steps taken. New York residents have the right to view these evaluations and reports, effectively requiring public disclosure. This applies to all automated systems — not limited to high-risk or employment contexts — making the scope significantly broader than typical independent audit requirements.
5. Independent evaluations and plain language reporting in the form of an algorithmic impact assessment, including disparity testing results and mitigation information, shall be conducted for all automated systems.
6. New York residents shall have the right to view such evaluations and reports.
Pending 2025-01-30
H-02.1
Insurance Law § 3224-e(a)(3)-(4)
Plain Language
AI tools used in utilization review must not discriminate — directly or indirectly — against individuals based on an extensive list of protected characteristics, including race, color, religion, national origin, ancestry, age, sex, gender, gender identity, gender expression, sexual orientation, disability (present or predicted), expected length of life, degree of medical dependency, quality of life, or other health conditions. The tool must also be applied fairly and equitably. Note that the protected characteristic list is broader than typical anti-discrimination provisions — it includes predicted disability, expected length of life, degree of medical dependency, and quality of life, which are healthcare-specific characteristics.
(3) The use of the artificial intelligence, algorithm, or other software tool does not adversely discriminate, directly or indirectly, against an individual on the basis of race, color, religion, national origin, ancestry, age, sex, gender, gender identity, gender expression, sexual orientation, present or predicted disability, expected length of life, degree of medical dependency, quality of life, or other health conditions. (4) The artificial intelligence, algorithm, or other software tool is fairly and equitably applied.
Pending 2025-01-01
H-02.3
Labor Law § 201-j(1)(a)-(d)
Plain Language
Before implementing any AI system, covered employers must conduct and complete an impact assessment covering the system's objectives, its ability to achieve those objectives, a summary of its algorithms and computational tools, the training data used in its development, and how it handles sensitive and personal data. Assessments must be repeated at least every two years and whenever a material change to the AI system could alter its outcomes or effects. This obligation applies to New York-resident businesses with more than 100 employees that are not independently owned small businesses. The assessment covers the technical design and data governance aspects of the AI system but also extends to workforce displacement estimates (mapped separately).
No employer shall utilize or apply any artificial intelligence unless the employer, or an entity acting on behalf of such employer, shall have conducted an impact assessment for the application and use of such artificial intelligence. Following the first impact assessment, an impact assessment shall be conducted at least once every two years. An impact assessment shall be conducted prior to any material change to the artificial intelligence that may change the outcome or effect of such system. Such impact assessments shall include: (a) a description of the objectives of the artificial intelligence; (b) an evaluation of the ability of the artificial intelligence to achieve its stated objectives; (c) a description and evaluation of the objectives and development of the artificial intelligence including: (i) a summary of the underlying algorithms, computational modes, and tools that are used within the artificial intelligence; and (ii) the design and training data used to develop the artificial intelligence process; (d) the extent to which the deployment and use of the artificial intelligence requires input of sensitive and personal data, how that data is used and stored, and any control users may have over their data;
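The required contents of the § 201-j assessment map naturally onto a structured record. The sketch below is only one possible shape, with field names tracking the statute's items (a) through (d); the class name, the storage format, and the leap-day-naive date arithmetic are all illustrative assumptions.

```python
# Illustrative structure for a § 201-j assessment record; the law
# prescribes content, not a storage format.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIImpactAssessment:
    objectives: str                 # (a) objectives of the AI
    objective_evaluation: str       # (b) ability to achieve them
    algorithm_summary: str          # (c)(i) algorithms/tools used
    training_data_description: str  # (c)(ii) design and training data
    sensitive_data_handling: str    # (d) sensitive/personal data use
    completed_on: date = field(default_factory=date.today)

    def next_due(self) -> date:
        # Repeat at least every two years; a material change triggers a
        # new assessment sooner. Leap-day edge cases ignored in this sketch.
        return self.completed_on.replace(year=self.completed_on.year + 2)

a = AIImpactAssessment(
    "screen job applicants", "validated on holdout data",
    "gradient-boosted trees over structured resume features",
    "2019-2024 applicant records", "resume text; encrypted at rest")
print(a.next_due())
```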
Pending 2025-08-18
Pub. Health Law § 4905-a(1)(e)-(f)
Plain Language
Utilization review agents must ensure their AI tools do not discriminate — directly or indirectly — against enrollees in violation of state or federal anti-discrimination law. The tools must also be applied fairly and equitably, in accordance with applicable HHS regulations and guidance. This is both a non-discrimination obligation (no disparate treatment or disparate impact in violation of law) and an equity obligation (fair and equitable application consistent with federal guidance). The statute does not prescribe specific testing methodologies, but the obligation to ensure non-discrimination implies a need for monitoring and assessment.
(e) The use of the artificial intelligence, algorithm, or other software tool does not discriminate, directly or indirectly, against enrollees in violation of state or federal law. (f) The artificial intelligence, algorithm, or other software tool is fairly and equitably applied, including in accordance with any applicable regulations and guidance issued by the federal department of health and human services.
Pending 2025-08-18
Ins. Law § 4905-a(1)(e)-(f)
Plain Language
Disability insurers must ensure their AI tools do not discriminate — directly or indirectly — against insureds in violation of state or federal anti-discrimination law, and that the tools are applied fairly and equitably in accordance with applicable HHS regulations and guidance. This mirrors the Public Health Law obligation for insurers regulated under the Insurance Law.
(e) The use of the artificial intelligence, algorithm, or other software tool does not discriminate, directly or indirectly, against insureds in violation of state or federal law. (f) The artificial intelligence, algorithm, or other software tool is fairly and equitably applied, including in accordance with any applicable regulations and guidance issued by the federal department of health and human services.
Pending 2026-06-09
H-02.1, H-02.3
Civ. Rights Law § 86(1)–(2)
Plain Language
Developers and deployers of high-risk AI systems must exercise reasonable care to prevent foreseeable algorithmic discrimination resulting from the use, sale, or sharing of the system. Before using, selling, or sharing a high-risk AI system, the developer or deployer must have completed an independent audit under § 87 confirming compliance with this reasonable-care standard. The definition of algorithmic discrimination covers an extensive list of protected characteristics and expressly exempts internal bias testing, diversity pool expansion, and private club operations. Failure to comply is an unlawful discriminatory practice.
1. A developer or deployer shall take reasonable care to prevent foreseeable risk of algorithmic discrimination that is a consequence of the use, sale, or sharing of a high-risk AI system or a product featuring a high-risk AI system. 2. Any developer or deployer that uses, sells, or shares a high-risk AI system shall have completed an independent audit, pursuant to section eighty-seven of this article, confirming that the developer or deployer has taken reasonable care to prevent foreseeable risk of algorithmic discrimination with respect to such high-risk AI system.
Pending 2026-06-09
H-02.6, H-02.7
Civ. Rights Law § 87(1)–(3), (5)–(9)
Plain Language
Both developers and deployers of high-risk AI systems must engage independent third-party auditors to evaluate their systems for algorithmic discrimination and risk management program conformity. Developers must complete a first audit within six months of offering or deploying the system, then annually thereafter. Deployers must complete a first audit within six months of deployment, a second within one year, then biennially. Deployer audits must also assess system accuracy and reliability. Auditor independence requirements are strict: no prior 12-month business relationship with the commissioning entity, no current or planned commercial competition, no contingent fees or bonuses for positive results. Auditors must have access to all prior regulatory reports; audits may use AI tools in part but may not be completed entirely by AI and may not use a different high-risk AI system to complete the audit. An audit satisfying equivalent federal, state, or local requirements may serve as a substitute. The AG may promulgate additional rules on auditor independence and community engagement. Note that the audit requirement takes effect two years after enactment (one year later than other provisions).
1. Developers of high-risk AI systems shall cause to be conducted third-party audits in accordance with this section. (a) A developer of a high-risk AI system shall complete at least: (i) a first audit within six months after completion of development of the high-risk AI system and the initial offering of the high-risk AI system to a deployer for deployment or, if the developer is first deployer to deploy the high-risk AI system, after initial deployment; and (ii) one audit every one year following the submission of the first audit. (b) A developer audit under this section shall include: (i) an evaluation and determination of whether the developer has taken reasonable care to prevent foreseeable risk of algorithmic discrimination with respect to such high-risk AI system; and (ii) an evaluation of the developer's documented risk management policy and program required under section eighty-nine of this article for conformity with subdivision one of such section eighty-nine. 2. Deployers of high-risk AI systems shall cause to be conducted third-party audits in accordance with this section. (a) A deployer of a high-risk AI system shall complete at least: (i) a first audit within six months after initial deployment; (ii) a second audit within one year following the submission of the first audit; and (iii) one audit every two years following the submission of the second audit. (b) A deployer audit under this section shall include: (i) an evaluation and determination of whether the deployer has taken reasonable care to prevent foreseeable risk of algorithmic discrimination with respect to such high-risk AI system; (ii) an evaluation of system accuracy and reliability with respect to such high-risk AI system's deployer-intended and actual use cases; and (iii) an evaluation of the deployer's documented risk management policy and program required under section eighty-nine of this article for conformity with subdivision one of such section eighty-nine. 3. A deployer or developer may hire more than one auditor to fulfill the requirements of this section. 5. The independent auditor shall have complete and unredacted copies of all reports previously filed by the deployer or developer under section eighty-eight of this article. 6. An audit conducted under this section may be completed in part, but shall not be completed entirely, with the assistance of an AI system. (a) Acceptable auditor uses of an AI system include, but are not limited to: (i) use of an audited high-risk AI system in a controlled environment without impacts on end users for system testing purposes; or (ii) detecting patterns in the behavior of an audited AI system. (b) An auditor shall not: (i) use a different high-risk AI system that is not the subject of an audit to complete an audit; or (ii) use an AI system to draft an audit under this section without meaningful human review and oversight. 7. (a) An auditor shall be an independent entity including but not limited to an individual, non-profit, firm, corporation, partnership, cooperative, or association. 
(b) For the purposes of this article, no auditor may be commissioned by a developer or deployer of a high-risk AI system if such entity: (i) has already been commissioned to provide any auditing or non-auditing service, including but not limited to financial auditing, cybersecurity auditing, or consulting services of any type, to the commissioning company in the past twelve months; or (ii) is, will be, or plans to be engaged in the business of developing or deploying an AI system that can compete commercially with such developer's or deployer's high-risk AI system in the five years following an audit. (c) Fees paid to auditors may not be contingent on the result of the audit and the commissioning company shall not provide any incentives or bonuses for a positive audit result. 8. The attorney general may promulgate further rules to ensure (a) the independence of auditors under this section, and (b) that teams conducting audits incorporate feedback from communities that may foreseeably be the subject of algorithmic discrimination with respect to the AI system being audited. 9. If a developer or deployer has an audit completed for the purpose of complying with another applicable federal, state, or local law or regulation, and the audit otherwise satisfies all other requirements of this section, such audit shall be deemed to satisfy the requirements of this section.
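Because the developer and deployer cadences differ, it can help to see the due dates worked out. The sketch below is one reading of the schedule, not statutory text: it treats each deadline as running from the prior audit's date, uses a month-arithmetic helper that ignores leap years, and all function names are invented.

```python
# Hedged sketch of the § 87 audit cadence; dates are illustrative.
from datetime import date

def add_months(d: date, months: int) -> date:
    y, m = divmod(d.month - 1 + months, 12)
    # clamp the day to avoid invalid dates like April 31 (leap years ignored)
    day = min(d.day, [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31][m])
    return date(d.year + y, m + 1, day)

def developer_audit_dates(first_offering: date, horizon_audits: int = 5):
    due = [add_months(first_offering, 6)]      # first audit: +6 months
    while len(due) < horizon_audits:
        due.append(add_months(due[-1], 12))    # then annually
    return due

def deployer_audit_dates(initial_deployment: date, horizon_years: int = 7):
    due = [add_months(initial_deployment, 6)]  # first audit: +6 months
    due.append(add_months(due[-1], 12))        # second: +1 year
    while due[-1].year < initial_deployment.year + horizon_years:
        due.append(add_months(due[-1], 24))    # then every two years
    return due

print(developer_audit_dates(date(2027, 1, 1)))
print(deployer_audit_dates(date(2027, 1, 1)))
```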
Pending 2026-06-09
Civ. Rights Law § 87(4)
Plain Language
The Attorney General has discretionary authority to promulgate additional audit rules to ensure audits properly assess algorithmic discrimination and compliance, and to recommend updated auditing frameworks to the legislature based on nationally or internationally recognized standards such as ISO frameworks. This creates a delegated rulemaking power but no immediate additional compliance obligation for developers or deployers — the obligation may expand through future rulemaking.
4. At the attorney general's discretion, the attorney general may: (a) promulgate further rules as necessary to ensure that audits under this section assess whether or not AI systems produce algorithmic discrimination and otherwise comply with the provisions of this article; and (b) recommend an updated AI system auditing framework to the legislature, where such recommendations are based on a standard or framework (i) designed to evaluate the risks of AI systems, and (ii) that is nationally or internationally recognized and consensus-driven, including but not limited to a relevant framework or standard created by the International Standards Organization.
Pending 2025-09-05
H-02.1, H-02.3, H-02.4, H-02.6, H-02.8
Real Prop. Law § 442-m(1)
Plain Language
Real estate brokers and online housing platforms using virtual agents or AI tools must have an independent auditor conduct a disparate impact analysis at least annually. The analysis must test whether the tool produces adverse impacts across protected classes (sex, race, ethnicity, and others under New York's Human Rights Law), whether any differential output serves a legitimate nondiscriminatory interest, and whether less discriminatory alternatives exist. A summary of the most recent analysis must be submitted to the attorney general's office. This is both a periodic independent audit obligation and a proactive regulatory submission obligation.
No less than annually, any real estate broker or online housing platform that uses virtual agents to assist with searches for available properties for sale or rental properties, and any online housing platform that uses AI tools, shall have a disparate impact analysis conducted and shall submit a summary of the most recent disparate impact analysis to the attorney general's office.
Pending 2025-09-05
H-02.1, H-02.2, H-02.8
Real Prop. Law § 442-m(2)(a)-(c)
Plain Language
Real estate brokers and online housing platforms using virtual agents or AI tools must undertake three ongoing anti-discrimination obligations: (1) proactively identify discriminatory outputs and modify systems to use less discriminatory alternatives, including reviewing training data for discriminatory predictive patterns; (2) ensure predictive parity across sex, race, ethnicity, and other protected classes, correcting any identified disparities; and (3) conduct regular end-to-end testing of advertising, captioning, and chatbot systems to detect discriminatory outcomes, including comparing ad delivery across demographic groups. These are continuous operational obligations, not one-time pre-deployment checks.
Any real estate broker or online housing platform that offers or uses virtual agents or AI tools shall: (a) proactively identify discriminatory algorithmic results and modify such virtual agents or AI tools to adopt less discriminatory alternatives, including but not limited to, assessing data used to train such virtual agents or AI tools and verifying that use of such data does not predict discriminatory outcomes; (b) ensure that the artificial intelligence or other computational or algorithmic systems upon which such virtual agents or AI tools are structured are similarly predictive across groups on the basis of sex, race, ethnicity or other protected classes, and make adjustments to correct any identified disparities in predictiveness for any such groups; and (c) conduct regular end-to-end testing of advertising, captioning, and chatbot systems to ensure that any discriminatory outcomes are detected, including but not limited to, comparing the delivery of advertisements across different demographic audiences.
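Clause (b)'s requirement that systems be "similarly predictive across groups" reads like the fairness literature's predictive parity criterion. As a hedged sketch only, the snippet below compares positive predictive value (PPV) by group; the 0.05 tolerance and all names are illustrative assumptions rather than anything § 442-m prescribes.

```python
# One reading of "similarly predictive across groups": comparable PPV
# per group. Threshold and data are illustrative only.
def ppv_by_group(records, group_key="group"):
    """records: dicts with 'predicted' (tool said yes), 'actual'
    (applicant in fact qualified), and a group label."""
    stats = {}
    for r in records:
        g = stats.setdefault(r[group_key], {"tp": 0, "pp": 0})
        if r["predicted"]:
            g["pp"] += 1
            g["tp"] += bool(r["actual"])
    return {g: s["tp"] / s["pp"] for g, s in stats.items() if s["pp"]}

records = [
    {"group": "A", "predicted": True, "actual": True},
    {"group": "A", "predicted": True, "actual": False},
    {"group": "B", "predicted": True, "actual": True},
    {"group": "B", "predicted": True, "actual": True},
]
ppvs = ppv_by_group(records)
spread = max(ppvs.values()) - min(ppvs.values())
print(ppvs, "disparity in predictiveness" if spread > 0.05 else "ok")
```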
Pending 2027-01-01
H-02.1, H-02.2, H-02.3
Civil Rights Law § 102(1)-(2)
Plain Language
Developers and deployers are prohibited from offering, licensing, promoting, selling, or using a covered algorithm for consequential actions in a manner that discriminates or causes disparate impact on the basis of any protected characteristic. The list of protected characteristics is exceptionally broad, including race, sex, disability, income level, immigration status, limited English proficiency, biometric information, and any other classification protected by federal or New York law. An action causing differential effect is unjustified unless the developer or deployer proves it is necessary for a substantial, legitimate, nondiscriminatory interest and that no less-discriminatory alternative exists. The algorithm is presumed to be analyzed holistically unless the developer or deployer proves separability by preponderance of evidence. Carve-outs exist for self-testing to prevent discrimination, diversity expansion, good-faith non-commercial research, and private clubs.
1. A developer or deployer shall not offer, license, promote, sell, or use a covered algorithm in a manner that: (a) causes or contributes to a disparate impact in a manner that prevents; (b) otherwise discriminates in a manner that prevents; or (c) otherwise makes unavailable, the equal enjoyment of goods, services, or other activities or opportunities, related to a consequential action, on the basis of a protected characteristic. 2. This section shall not apply to: (a) the offer, licensing, or use of a covered algorithm for the sole purpose of: (i) a developer's or deployer's self-testing (or auditing by an independent auditor at a developer's or deployer's request) to identify, prevent, or mitigate discrimination, or otherwise to ensure compliance with obligations, under federal or state law; (ii) expanding an applicant, participant, or customer pool to raise the likelihood of increasing diversity or redressing historic discrimination; or (iii) conducting good faith security research, or other research, if conducting the research is not part or all of a commercial act; or (b) any private club or other establishment not in fact open to the public, as described in section 201(e) of the Civil Rights Act of 1964 (42 U.S.C. 2000a(e)).
Pending 2027-01-01
H-02.1, H-02.2, H-02.3, H-02.6
Civil Rights Law § 103(1)-(3)
Plain Language
Before deploying, licensing, or offering any covered algorithm for a consequential action — including material changes to previously deployed algorithms — developers and deployers must conduct a two-stage pre-deployment evaluation. First, a preliminary evaluation assesses whether harm is plausible. If harm is not plausible, the developer or deployer must document a finding of no plausible harm and submit it to the Division. If harm is plausible, the developer must engage a qualified independent auditor to conduct a full pre-deployment evaluation. The full evaluation must cover algorithm design and methodology, training and testing data and methods (including demographic representation and protected characteristic testing), potential for harm and disparate impact, and recommendations for mitigation. The independent auditor must have no financial or employment relationship with the developer or deployer beyond the auditing engagement. The auditor submits a report with findings and recommendations to the developer. For material changes to existing algorithms, the scope may be limited to harms arising from the change.
1. Prior to deploying, licensing, or offering a covered algorithm (including deploying a material change to a previously-deployed covered algorithm or a material change made prior to deployment) for a consequential action, a developer or deployer shall conduct a pre-deployment evaluation in accordance with this section. 2. (a) The developer shall conduct a preliminary evaluation of the plausibility that any expected use of the covered algorithm may result in a harm. (b) The deployer shall conduct a preliminary evaluation of the plausibility that any intended use of the covered algorithm may result in a harm. (c) Based on the results of the preliminary evaluation, the developer or deployer shall: (i) in the event that a harm is not plausible, record a finding of no plausible harm, including a description of the developer's expected use or the deployer's intended use of the covered algorithm, how the preliminary evaluation was conducted, and an explanation for the finding, and submit such record to the division; and (ii) in the event that a harm is plausible, conduct a full pre-deployment evaluation as described in subdivision three or subdivision four of this section, as applicable. (d) When conducting a preliminary evaluation of a material change to, or new use of, a previously-deployed covered algorithm, the developer or deployer may limit the scope of the evaluation to whether use of the covered algorithm may result in a harm as a result of the material change or new use. 3. (a) If a developer determines a harm is plausible during the preliminary evaluation described in subdivision two of this section, the developer shall engage an independent auditor to conduct a pre-deployment evaluation. The evaluation required by this subdivision shall include a detailed review and description, sufficient for an individual having ordinary skill in the art to understand the functioning, risks, uses, benefits, limitations, and other pertinent attributes of the covered algorithm, including: (i) the covered algorithm's design and methodology, including the inputs the covered algorithm is designed to use to produce an output and the outputs the covered algorithm is designed to produce; (ii) how the covered algorithm was created, trained, and tested, including: (A) any metric used to test the performance of the covered algorithm; (B) defined benchmarks and goals that correspond to such metrics, including whether there was sufficient representation of demographic groups that are reasonably likely to use or be affected by the covered algorithm in the data used to create or train the algorithm, and whether there was reasonable testing, if any, across such demographic groups; (C) the outputs the covered algorithm actually produces in testing; (D) a description of any consultation with relevant stakeholders, including any communities that will be impacted by the covered algorithm, regarding the development of the covered algorithm, or a disclosure that no such consultation occurred; (E) a description of which protected characteristics, if any, were used for testing and evaluation, and how and why such characteristics were used, including: (1) whether the testing occurred in comparable contextual conditions to the conditions in which the covered algorithm is expected to be used; and (2) if protected characteristics were not available to conduct such testing, a description of alternative methods the developer used to conduct the required assessment; (F) any other computational algorithm incorporated into the 
development of the covered algorithm, regardless of whether such precursor computational algorithm involves a consequential action; (G) a description of the data and information used to develop, test, maintain, or update the covered algorithm, including: (1) each type of personal data used, each source from which the personal data was collected, and how each type of personal data was inferred and processed; (2) the legal authorization for collecting and processing the personal data; and (3) an explanation of how the data (including personal data) used is representative, proportional, and appropriate to the development and intended uses of the covered algorithm; and (H) a description of the training process for the covered algorithm which includes the training, validation, and test data utilized to confirm the intended outputs; (iii) the potential for the covered algorithm to produce a harm or to have a disparate impact in the equal enjoyment of goods, services, or other activities or opportunities, and a description of such potential harm or disparate impact; (iv) alternative practices and recommendations to prevent or mitigate harm and recommendations for how the developer could monitor for harm after offering, licensing, or deploying the covered algorithm; and (v) any other information the division deems pertinent to prevent the covered algorithm from causing harm or having a disparate impact in the equal enjoyment of goods, services, or other activities or opportunities, as prescribed by rules promulgated by the division. (b) The independent auditor shall submit to the developer a report on the evaluation conducted under this subdivision, including the findings and recommendations of such independent auditor.
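Item (ii)(B) asks whether demographic groups reasonably likely to use or be affected by the algorithm were sufficiently represented in the training data. One simple screen, sketched below under assumed names and an assumed 20% relative-shortfall threshold (the bill sets no numeric test), compares each group's training-data share against its share of a reference population.

```python
# Hedged sketch of a training-data representation check; all numbers
# and names are illustrative assumptions.
def representation_gaps(train_counts, population_shares, max_shortfall=0.2):
    total = sum(train_counts.values())
    gaps = {}
    for group, pop_share in population_shares.items():
        train_share = train_counts.get(group, 0) / total
        # relative shortfall of the training share vs. the population share
        shortfall = 1 - train_share / pop_share if pop_share else 0.0
        if shortfall > max_shortfall:
            gaps[group] = {"train_share": round(train_share, 3),
                           "population_share": pop_share}
    return gaps

train = {"A": 800, "B": 150, "C": 50}
population = {"A": 0.60, "B": 0.25, "C": 0.15}
# -> groups whose training share falls >20% below their population share
print(representation_gaps(train, population))
```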
Pending 2027-01-01
H-02.1, H-02.2, H-02.3, H-02.6
Civil Rights Law § 103(4)
Plain Language
Deployers face a parallel pre-deployment evaluation obligation when their preliminary assessment finds harm is plausible. The deployer must engage an independent auditor to conduct a full evaluation covering the deployment context: how the algorithm contributes to consequential actions, necessity and proportionality relative to the baseline process being replaced, data inputs and their representativeness, expected versus actual outputs in testing, stakeholder consultation, and potential for harm or disparate impact. The deployer's evaluation is context-specific — it focuses on how the algorithm will be used in the deployer's particular environment, as opposed to the developer's evaluation which focuses on the algorithm's general design and training. The independent auditor submits a report with findings and recommendations to the deployer.
4. (a) If a deployer determines a harm is plausible during the preliminary evaluation described in subdivision two of this section, the deployer shall engage an independent auditor to conduct a pre-deployment evaluation. The evaluation required by this subdivision shall include a detailed review and description, sufficient for an individual having ordinary skill in the art to understand the functioning, risks, uses, benefits, limitations, and other pertinent attributes of the covered algorithm, including: (i) the manner in which the covered algorithm makes or contributes to a consequential action and the purpose for which the covered algorithm will be deployed; (ii) the necessity and proportionality of the covered algorithm in relation to its planned use, including the intended benefits and limitations of the covered algorithm and a description of the baseline process being enhanced or replaced by the covered algorithm, if applicable; (iii) the inputs that the deployer plans to use to produce an output, including: (A) the type of personal data and information used and how the personal data and information will be collected, inferred, and processed; (B) the legal authorization for collecting and processing the personal data; and (C) an explanation of how the data used is representative, proportional, and appropriate to the deployment of the covered algorithm; (iv) the outputs the covered algorithm is expected to produce and the outputs the covered algorithm actually produces in testing; (v) a description of any additional testing or training completed by the deployer for the context in which the covered algorithm will be deployed; (vi) a description of any consultation with relevant stakeholders, including any communities that will be impacted by the covered algorithm, regarding the deployment of the covered algorithm; (vii) the potential for the covered algorithm to produce a harm or to have a disparate impact in the equal enjoyment of goods, services, or other activities or opportunities in the context in which the covered algorithm will be deployed and a description of such potential harm or disparate impact; (viii) alternative practices and recommendations to prevent or mitigate harm in the context in which the covered algorithm will be deployed and recommendations for how the deployer could monitor for harm after offering, licensing, or deploying the covered algorithm; and (ix) any other information the division deems pertinent to prevent the covered algorithm from causing harm or having a disparate impact in the equal enjoyment of goods, services, or other activities or opportunities as prescribed by rules promulgated by the division. (b) The independent auditor shall submit to the deployer a report on the evaluation conducted under this subdivision, including the findings and recommendations of such independent auditor.
Pending 2027-01-01
H-02.6, H-02.8
Civil Rights Law § 104(1)-(3)
Plain Language
Deployers must conduct annual post-deployment impact assessments for each covered algorithm. The process follows the same two-stage structure as the pre-deployment evaluation: a preliminary assessment identifies whether harm occurred during the reporting period. If no harm is identified, the deployer documents a no-harm finding and submits it to the Division. If harm occurred, the deployer must engage an independent auditor for a full impact assessment covering: the nature and extent of harm, disparate impact analysis with methodology, data inputs and their use for retraining, whether outputs matched expectations, how the algorithm was used in consequential actions, and mitigation actions taken. The auditor's report goes to the deployer, and within 30 days the deployer must share a summary with the developer. This is a continuing annual obligation for the entire life of the deployment.
1. After the deployment of a covered algorithm, a deployer shall, on an annual basis, conduct an impact assessment in accordance with this section. The deployer shall conduct a preliminary impact assessment of the covered algorithm to identify any harm that resulted from the covered algorithm during the reporting period and: (a) if no resulting harm is identified by such assessment, shall record a finding of no harm, including a description of the developer's expected use or the deployer's intended use of the covered algorithm, how the preliminary evaluation was conducted, and an explanation for such finding, and submit such finding to the division; and (b) if a resulting harm is identified by such assessment, shall conduct a full impact assessment as described in subdivision two of this section. 2. In the event that the covered algorithm resulted in a harm during the reporting period, the deployer shall engage an independent auditor to conduct a full impact assessment with respect to the reporting period, including: (a) an assessment of the harm that resulted or was reasonably likely to have been produced during the reporting period; (b) a description of the extent to which the covered algorithm produced a disparate impact in the equal enjoyment of goods, services, or other activities or opportunities, including the methodology for such evaluation, of how the covered algorithm produced or likely produced such disparity; (c) a description of the types of data input into the covered algorithm during the reporting period to produce an output, including: (i) documentation of how data input into the covered algorithm to produce an output is represented and complete descriptions of each field of data; and (ii) whether and to what extent the data input into the covered algorithm to produce an output was used to train or otherwise modify the covered algorithm; (d) whether and to what extent the covered algorithm produced the outputs it was expected to produce; (e) a detailed description of how the covered algorithm was used to make a consequential action; (f) any action taken to prevent or mitigate harms, including how relevant staff are informed of, trained about, and implement harm mitigation policies and practices, and recommendations for how the deployer could monitor for and prevent harm after offering, licensing, or deploying the covered algorithm; and (g) any other information the division deems pertinent to prevent the covered algorithm from causing harm or having a disparate impact in the equal enjoyment of goods, services, or other activities or opportunities as prescribed by rules promulgated by the division. 3. (a) After the engagement of the independent auditor, the independent auditor shall submit to the deployer a report on the impact assessment conducted under subdivision two of this section, including the findings and recommendations of such independent auditor. (b) Not later than thirty days after the submission of a report on an impact assessment under this section, a deployer shall submit to the developer of the covered algorithm a summary of such report, subject to the trade secret and privacy protections described in subdivision six of this section.
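The two-stage annual cycle (preliminary harm screen, then either a no-harm filing with the Division or a full independent assessment followed by a 30-day developer summary) is easiest to see as control flow. Every helper name in the sketch below is a hypothetical stand-in for a deployer's actual processes, not anything the bill defines.

```python
# Control-flow sketch of the § 104 annual cycle; helper names are
# hypothetical stand-ins.
from datetime import date, timedelta

def annual_impact_cycle(system, screen_for_harm, file_with_division,
                        engage_auditor, send_summary_to_developer):
    harms = screen_for_harm(system)          # preliminary assessment
    if not harms:
        file_with_division({"system": system, "finding": "no harm",
                            "period_end": date.today().isoformat()})
        return None
    report = engage_auditor(system, harms)   # full independent assessment
    deadline = date.today() + timedelta(days=30)
    send_summary_to_developer(report, by=deadline)
    return report

# Demo with trivial stand-ins: the screen finds no harm this period.
annual_impact_cycle(
    "screening-tool-v2",
    screen_for_harm=lambda s: [],
    file_with_division=print,
    engage_auditor=lambda s, h: {"system": s, "harms": h},
    send_summary_to_developer=lambda r, by: print("summary due", by),
)
```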
Pending 2027-01-01
H-02.8
Civil Rights Law § 104(4)
Plain Language
Developers must annually review every impact assessment summary received from deployers of their covered algorithms. The review must assess deployer usage patterns, data inputs and outputs, contractual compliance, real-world versus pre-deployment performance, ongoing harm potential, disparate impact by protected characteristic, need for algorithm modification, and any other Division-prescribed responsive actions. This creates a feedback loop: deployers conduct annual impact assessments and share summaries with developers, who must then affirmatively review those summaries and determine whether corrective action is needed. This obligation runs parallel to the deployer's annual assessment — developers cannot passively receive deployer summaries without acting on them.
4. A developer shall, on an annual basis, review each impact assessment summary submitted by a deployer of its covered algorithm under subdivision three of this section for the following purposes: (a) to assess how the deployer is using the covered algorithm, including the methodology for assessing such use; (b) to assess the type of data the deployer is inputting into the covered algorithm to produce an output and the types of outputs the covered algorithm is producing; (c) to assess whether the deployer is complying with any relevant contractual agreement with the developer and whether any remedial action is necessary; (d) to compare the covered algorithm's performance in real-world conditions versus pre-deployment testing, including the methodology used to evaluate such performance; (e) to assess whether the covered algorithm is causing harm or is reasonably likely to be causing harm; (f) to assess whether the covered algorithm is causing, or is reasonably likely to be causing, a disparate impact in the equal enjoyment of goods, services, or other activities or opportunities, and, if so, how and with respect to which protected characteristic; (g) to determine whether the covered algorithm needs modification; (h) to determine whether any other action is appropriate to ensure that the covered algorithm remains safe and effective; and (i) to undertake any other assessment or responsive action the division deems pertinent to prevent the covered algorithm from causing harm or having a disparate impact in the equal enjoyment of goods, services, or other activities or opportunities, as prescribed by rules promulgated by the division.
Pending 2026-01-01
H-02.1, H-02.3
Civ. Rights Law § 86(1)-(2)
Plain Language
Developers and deployers of high-risk AI systems must exercise reasonable care to prevent foreseeable algorithmic discrimination — defined broadly to cover unjustified differential treatment across a wide range of protected characteristics. Before using, selling, or sharing a high-risk AI system, they must have completed an independent audit confirming this duty has been met. Testing to identify and mitigate bias is explicitly carved out of the definition of algorithmic discrimination, as is expanding applicant pools for diversity purposes. This is both a substantive standard of care and a pre-condition for lawful deployment.
1. A developer or deployer shall take reasonable care to prevent foreseeable risk of algorithmic discrimination that is a consequence of the use, sale, or sharing of a high-risk AI system or a product featuring a high-risk AI system.
2. Any developer or deployer that uses, sells, or shares a high-risk AI system shall have completed an independent audit, pursuant to section eighty-seven of this article, confirming that the developer or deployer has taken reasonable care to prevent foreseeable risk of algorithmic discrimination with respect to such high-risk AI system.
Pending 2026-01-01
H-02.6, H-02.7
Civ. Rights Law § 87(1)-(3), (5)-(9)
Plain Language
Developers and deployers of high-risk AI systems must engage independent third-party auditors to assess whether they have taken reasonable care to prevent algorithmic discrimination and whether their risk management programs conform to statutory requirements. Developer audits are due within six months of initial offering/deployment, then annually. Deployer audits are due within six months of deployment, with a second audit within one year of the first, then biennially. Deployer audits additionally cover system accuracy and reliability. Independence requirements are strict: auditors cannot have provided any services to the commissioning entity in the prior 12 months, cannot be competitors, cannot receive contingent fees, and must receive all prior reports filed under § 88. Audits may use AI tools to assist (e.g., testing the system in a controlled environment) but cannot be completed entirely by AI and cannot use a different high-risk AI system. An audit completed under another law satisfies these requirements if it covers all required elements. This section takes effect two years after enactment.
1. Developers of high-risk AI systems shall cause to be conducted third-party audits in accordance with this section.
(a) A developer of a high-risk AI system shall complete at least:
(i) a first audit within six months after completion of development of the high-risk AI system and the initial offering of the high-risk AI system to a deployer for deployment or, if the developer is first deployer to deploy the high-risk AI system, after initial deployment; and
(ii) one audit every one year following the submission of the first audit.
(b) A developer audit under this section shall include:
(i) an evaluation and determination of whether the developer has taken reasonable care to prevent foreseeable risk of algorithmic discrimination with respect to such high-risk AI system; and
(ii) an evaluation of the developer's documented risk management policy and program required under section eighty-nine of this article for conformity with subdivision one of such section eighty-nine.
2. Deployers of high-risk AI systems shall cause to be conducted third-party audits in accordance with this section.
(a) A deployer of a high-risk AI system shall complete at least:
(i) a first audit within six months after initial deployment;
(ii) a second audit within one year following the submission of the first audit; and
(iii) one audit every two years following the submission of the second audit.
(b) A deployer audit under this section shall include:
(i) an evaluation and determination of whether the deployer has taken reasonable care to prevent foreseeable risk of algorithmic discrimination with respect to such high-risk AI system;
(ii) an evaluation of system accuracy and reliability with respect to such high-risk AI system's deployer-intended and actual use cases; and
(iii) an evaluation of the deployer's documented risk management policy and program required under section eighty-nine of this article for conformity with subdivision one of such section eighty-nine.
3. A deployer or developer may hire more than one auditor to fulfill the requirements of this section.
5. The independent auditor shall have complete and unredacted copies of all reports previously filed by the deployer or developer under section eighty-eight of this article.
6. An audit conducted under this section may be completed in part, but shall not be completed entirely, with the assistance of an AI system.
(a) Acceptable auditor uses of an AI system include, but are not limited to:
(i) use of an audited high-risk AI system in a controlled environment without impacts on end users for system testing purposes; or
(ii) detecting patterns in the behavior of an audited AI system.
(b) An auditor shall not:
(i) use a different high-risk AI system that is not the subject of an audit to complete an audit; or
(ii) use an AI system to draft an audit under this section without meaningful human review and oversight.
7. (a) An auditor shall be an independent entity including but not limited to an individual, non-profit, firm, corporation, partnership, cooperative, or association.
(b) For the purposes of this article, no auditor may be commissioned by a developer or deployer of a high-risk AI system if such entity:
(i) has already been commissioned to provide any auditing or non-auditing service, including but not limited to financial auditing, cybersecurity auditing, or consulting services of any type, to the commissioning company in the past twelve months; or
(ii) is, will be, or plans to be engaged in the business of developing or deploying an AI system that can compete commercially with such developer's or deployer's high-risk AI system in the five years following an audit.
(c) Fees paid to auditors may not be contingent on the result of the audit and the commissioning company shall not provide any incentives or bonuses for a positive audit result.
8. The attorney general may promulgate further rules to ensure (a) the independence of auditors under this section, and (b) that teams conducting audits incorporate feedback from communities that may foreseeably be the subject of algorithmic discrimination with respect to the AI system being audited.
9. If a developer or deployer has an audit completed for the purpose of complying with another applicable federal, state, or local law or regulation, and the audit otherwise satisfies all other requirements of this section, such audit shall be deemed to satisfy the requirements of this section.
Pending 2025-01-01
H-02.3
Labor Law § 201-j(1)(a)-(f)
Plain Language
Before using or applying any AI system, covered employers (New York-resident businesses with more than 100 employees that are not small businesses) must conduct a formal impact assessment. The assessment must cover the AI's objectives, its ability to meet those objectives, a summary of underlying algorithms and training data, the system's use of sensitive and personal data, and critically, estimates of the number of employees already displaced and expected to be displaced by the AI. Assessments must be repeated at least every two years and before any material change that could alter the AI system's outcomes. This obligation is notable for its workforce displacement focus — the required content goes well beyond typical bias or discrimination assessments to require quantified estimates of job losses.
No employer shall utilize or apply any artificial intelligence unless the employer, or an entity acting on behalf of such employer, shall have conducted an impact assessment for the application and use of such artificial intelligence. Following the first impact assessment, an impact assessment shall be conducted at least once every two years. An impact assessment shall be conducted prior to any material change to the artificial intelligence that may change the outcome or effect of such system. Such impact assessments shall include: (a) a description of the objectives of the artificial intelligence; (b) an evaluation of the ability of the artificial intelligence to achieve its stated objectives; (c) a description and evaluation of the objectives and development of the artificial intelligence including: (i) a summary of the underlying algorithms, computational modes, and tools that are used within the artificial intelligence; and (ii) the design and training data used to develop the artificial intelligence process; (d) the extent to which the deployment and use of the artificial intelligence requires input of sensitive and personal data, how that data is used and stored, and any control users may have over their data; (e) an estimate of the number of employees already displaced due to artificial intelligence; and (f) an estimate of the number of employees expected to be displaced or otherwise affected due to the increased use of artificial intelligence in the workplace.
Pending 2025-10-11
H-02.3, H-02.6, H-02.7
GBL § 1551(1)(a)-(b)
Plain Language
Developers of high-risk AI decision systems must exercise reasonable care to protect consumers from known or reasonably foreseeable algorithmic discrimination risks. A rebuttable presumption of reasonable care applies if the developer both complies with the documentation requirements in § 1551 and retains an AG-identified independent third party to complete bias and governance audits. The AG must publish and annually update a list of qualified independent auditors. The safe harbor incentivizes — but does not mandate — independent audits; developers who forgo audits lose the rebuttable presumption but may still demonstrate reasonable care by other means.
1. (a) Beginning on January first, two thousand twenty-seven, each developer of a high-risk artificial intelligence decision system shall use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination arising from the intended and contracted uses of a high-risk artificial intelligence decision system. In any enforcement action brought on or after such date by the attorney general pursuant to this article, there shall be a rebuttable presumption that a developer used reasonable care as required pursuant to this subdivision if: (i) the developer complied with the provisions of this section; and (ii) an independent third party identified by the attorney general pursuant to paragraph (b) of this subdivision and retained by the developer completed bias and governance audits for the high-risk artificial intelligence decision system. (b) No later than January first, two thousand twenty-six, and at least annually thereafter, the attorney general shall: (i) identify independent third parties who, in the attorney general's opinion, are qualified to complete bias and governance audits for the purposes of subparagraph (ii) of paragraph (a) of this subdivision; and (ii) publish a list of such independent third parties available on the attorney general's website.
Pending 2025-10-11
H-02.3, H-02.6, H-02.7
GBL § 1552(1)(a)-(b)
Plain Language
Deployers of high-risk AI decision systems must exercise reasonable care to protect consumers from known or foreseeable algorithmic discrimination risks. A rebuttable presumption of reasonable care applies if the deployer both complies with § 1552's risk management, impact assessment, and annual review requirements and retains an AG-identified independent third party for bias and governance audits. As with developers, the audit is incentivized through the safe harbor but not strictly mandated — deployers who forgo audits must demonstrate reasonable care by other means.
1. (a) Beginning on January first, two thousand twenty-seven, each deployer of a high-risk artificial intelligence decision system shall use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination. In any enforcement action brought on or after said date by the attorney general pursuant to this article, there shall be a rebuttable presumption that a deployer of a high-risk artificial intelligence decision system used reasonable care as required pursuant to this subdivision if: (i) the deployer complied with the provisions of this section; and (ii) an independent third party identified by the attorney general pursuant to paragraph (b) of this subdivision and retained by the deployer completed bias and governance audits for the high-risk artificial intelligence decision system. (b) No later than January first, two thousand twenty-seven, and at least annually thereafter, the attorney general shall: (i) identify the independent third parties who, in the attorney general's opinion, are qualified to complete bias and governance audits for the purposes of subparagraph (ii) of paragraph (a) of this subdivision; and (ii) make a list of such independent third parties available on the attorney general's web site.
Pending 2025-10-11
H-02.3, H-02.10
GBL § 1552(3)(a)-(e)
Plain Language
Deployers must complete an impact assessment for each high-risk AI decision system before deployment, then annually and within 90 days of any intentional and substantial modification. Each assessment must cover: system purpose, use cases, and benefits; algorithmic discrimination risk analysis and mitigation; input data categories and outputs; any customization data; performance metrics and limitations; transparency measures; and post-deployment monitoring safeguards. Post-modification assessments must also disclose whether the system was used consistently with the developer's intended uses. A single assessment may cover comparable systems. Assessments completed under other substantially similar laws are accepted. All assessments and associated records must be retained for at least three years after final deployment. Deployers meeting the § 1552(7) delegation conditions are exempt.
3. (a) Except as provided in paragraphs (c) and (d) of this subdivision and subdivision seven of this section: (i) a deployer that deploys a high-risk artificial intelligence decision system on or after January first, two thousand twenty-seven, or a third party contracted by the deployer, shall complete an impact assessment of the high-risk artificial intelligence decision system; and (ii) beginning on January first, two thousand twenty-seven, a deployer, or a third party contracted by the deployer, shall complete an impact assessment of a deployed high-risk artificial intelligence decision system: (A) at least annually; and (B) no later than ninety days after an intentional and substantial modification to such high-risk artificial intelligence decision system is made available. (b) (i) Each impact assessment completed pursuant to this subdivision shall include, at a minimum and to the extent reasonably known by, or available to, the deployer: (A) a statement by the deployer disclosing the purpose, intended use cases and deployment context of, and benefits afforded by, the high-risk artificial intelligence decision system; (B) an analysis of whether the deployment of the high-risk artificial intelligence decision system poses any known or reasonably foreseeable risks of algorithmic discrimination and, if so, the nature of such algorithmic discrimination and the steps that have been taken to mitigate such risks; (C) A description of: (I) the categories of data the high-risk artificial intelligence decision system processes as inputs; and (II) the outputs such high-risk artificial intelligence decision system produces; (D) if the deployer used data to customize the high-risk artificial intelligence decision system, an overview of the categories of data the deployer used to customize such high-risk artificial intelligence decision system; (E) any metrics used to evaluate the performance and known limitations of the high-risk artificial intelligence decision system; (F) a description of any transparency measures taken concerning the high-risk artificial intelligence decision system, including, but not limited to, any measures taken to disclose to a consumer that such high-risk artificial intelligence decision system is in use when such high-risk artificial intelligence decision system is in use; and (G) a description of the post-deployment monitoring and user safeguards provided concerning such high-risk artificial intelligence decision system, including, but not limited to, the oversight, use, and learning process established by the deployer to address issues arising from deployment of such high-risk artificial intelligence decision system. (ii) In addition to the statement, analysis, descriptions, overview, and metrics required pursuant to subparagraph (i) of this paragraph, an impact assessment completed pursuant to this subdivision following an intentional and substantial modification made to a high-risk artificial intelligence decision system on or after January first, two thousand twenty-seven, shall include a statement disclosing the extent to which the high-risk artificial intelligence decision system was used in a manner that was consistent with, or varied from, the developer's intended uses of such high-risk artificial intelligence decision system. (c) A single impact assessment may address a comparable set of high-risk artificial intelligence decision systems deployed by a deployer. 
(d) If a deployer, or a third party contracted by the deployer, completes an impact assessment for the purpose of complying with another applicable law or regulation, such impact assessment shall be deemed to satisfy the requirements established in this subdivision if such impact assessment is reasonably similar in scope and effect to the impact assessment that would otherwise be completed pursuant to this subdivision. (e) A deployer shall maintain the most recently completed impact assessment of a high-risk artificial intelligence decision system as required pursuant to this subdivision, all records concerning each such impact assessment and all prior impact assessments, if any, for a period of at least three years following the final deployment of the high-risk artificial intelligence decision system.
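The timing rules in § 1552(3) interact: annual reassessment, a 90-day window after an intentional and substantial modification, and at least three years of record retention after final deployment. The sketch below works out that arithmetic under assumed function names; it ignores leap-day edge cases and returns the earliest permissible disposal date, since the retention period is a floor.

```python
# Hedged sketch of the § 1552(3) timing rules; dates are illustrative.
from datetime import date, timedelta

def next_assessment_due(last_assessment, modification=None):
    annual = last_assessment.replace(year=last_assessment.year + 1)
    if modification:
        # an intentional and substantial modification opens a 90-day window
        return min(annual, modification + timedelta(days=90))
    return annual

def retention_expires(final_deployment: date) -> date:
    # retain assessments and records at least three years after final
    # deployment; this returns the earliest permissible disposal date
    return final_deployment.replace(year=final_deployment.year + 3)

print(next_assessment_due(date(2027, 3, 1)))                     # 2028-03-01
print(next_assessment_due(date(2027, 3, 1), date(2027, 11, 15))) # 2028-02-13
print(retention_expires(date(2030, 6, 30)))                      # 2033-06-30
```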
Pending 2025-10-11
H-02.8
GBL § 1552(4)
Plain Language
Deployers (or their contracted third parties) must conduct at least annual reviews of each deployed high-risk AI decision system to affirmatively verify it is not causing algorithmic discrimination. This is a distinct, ongoing operational obligation separate from the initial and periodic impact assessments under § 1552(3). The first review must be completed by January 1, 2027. Deployers meeting the § 1552(7) delegation conditions are exempt.
4. Except as provided in subdivision seven of this section, a deployer, or a third party contracted by the deployer, shall review, no later than January first, two thousand twenty-seven, and at least annually thereafter, the deployment of each high-risk artificial intelligence decision system deployed by the deployer to ensure that such high-risk artificial intelligence decision system is not causing algorithmic discrimination.
Pending 2025-08-11
H-02.1
Pub. Health Law § 4905-a(1)(e)-(f)
Plain Language
Utilization review agents must ensure that the AI tool does not discriminate directly or indirectly against enrollees in violation of state or federal law, and that it is applied fairly and equitably in accordance with applicable HHS regulations and guidance. This imposes both a non-discrimination obligation and an affirmative fairness requirement. The reference to indirect discrimination captures disparate impact, not just intentional discrimination.
(e) The use of the artificial intelligence, algorithm, or other software tool does not discriminate, directly or indirectly, against enrollees in violation of state or federal law. (f) The artificial intelligence, algorithm, or other software tool is fairly and equitably applied, including in accordance with any applicable regulations and guidance issued by the federal department of health and human services.
Pending 2025-08-11
H-02.1
Ins. Law § 4905-a(1)(e)-(f)
Plain Language
Disability insurers must ensure that the AI tool does not discriminate directly or indirectly against insureds in violation of state or federal law, and that it is applied fairly and equitably in accordance with applicable HHS regulations and guidance. This Insurance Law parallel mirrors the Public Health Law non-discrimination and fairness requirements.
(e) The use of the artificial intelligence, algorithm, or other software tool does not discriminate, directly or indirectly, against insureds in violation of state or federal law. (f) The artificial intelligence, algorithm, or other software tool is fairly and equitably applied, including in accordance with any applicable regulations and guidance issued by the federal department of health and human services.
Pending 2026-07-22
H-02.1
Exec. Law § 296(23)(a)
Plain Language
Employers may not use artificial intelligence for any employment decision — including recruitment, hiring, promotion, renewal, training selection, discharge, discipline, tenure, or terms and conditions of employment — if doing so has the effect of subjecting employees to discrimination on the basis of any protected class under the New York Human Rights Law. The statute explicitly prohibits using zip codes as a proxy for protected classes. This is a disparate impact standard — the prohibition is triggered by discriminatory effect, not just discriminatory intent. The covered protected classes are extensive, including age, race, creed, color, national origin, citizenship or immigration status, sexual orientation, gender identity or expression, military status, sex, disability, predisposing genetic characteristics, familial status, marital status, and domestic violence victim status.
(a) It shall be an unlawful discriminatory practice for an employer to use artificial intelligence for recruitment, hiring, promotion, renewal of employment, selection for training or apprenticeship, discharge, discipline, tenure, or the terms, privileges, or conditions of employment that has the effect of subjecting employees to discrimination on the basis of age, race, creed, color, national origin, citizenship or immigration status, sexual orientation, gender identity or expression, military status, sex, disability, predisposing genetic characteristics, familial status, marital status, or status as a victim of domestic violence or to use zip codes as a proxy for such protected classes.
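The statute names the prohibited effect but prescribes no test for it. One conventional screen, shown purely as an illustration and not as anything the bill requires, is the EEOC's four-fifths rule: compare each group's selection rate to the most-favored group's rate and flag ratios below 0.8.

```python
from collections import Counter

def impact_ratios(outcomes: list[tuple[str, bool]]) -> dict[str, float]:
    """Selection rate per group divided by the highest group's rate.
    `outcomes` pairs a group label with whether the person was selected."""
    selected, totals = Counter(), Counter()
    for group, was_selected in outcomes:
        totals[group] += 1
        selected[group] += was_selected
    rates = {g: selected[g] / totals[g] for g in totals}
    best = max(rates.values())
    if best == 0:
        return {g: 0.0 for g in rates}
    return {g: rate / best for g, rate in rates.items()}

# Hypothetical data: group B is selected at half the rate of group A,
# so its impact ratio of 0.5 falls below the 0.8 rule-of-thumb threshold.
ratios = impact_ratios([("A", True), ("A", True), ("A", False),
                        ("B", True), ("B", False), ("B", False)])
flagged = {g: r for g, r in ratios.items() if r < 0.8}  # {'B': 0.5}
```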
Pending 2025-01-01
H-02.1, H-02.3
State Technology Law § 403(2)
Plain Language
If a required impact assessment finds that an automated decision-making system produces discriminatory or biased outcomes, the agency must immediately cease all use of that system — including ceasing reliance on any information the system has already produced. This is a mandatory shutdown requirement with no cure period or remediation option: the statute says 'shall cease,' not 'shall mitigate.' The prohibition extends to derivative outputs (information produced using the system), which means agencies cannot continue using conclusions or recommendations the biased system previously generated.
Notwithstanding the provisions of this article or any other law, if an impact assessment finds that the automated decision-making system produces discriminatory or biased outcomes, the state agency shall cease any utilization, application, or function of such automated decision-making system, and of any information produced using such system.
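A minimal sketch of the cease-use logic the statute mandates, assuming a hypothetical registry of systems and the records they produced; note that the quarantine extends to derivative outputs, matching the "any information produced using such system" language.

```python
from dataclasses import dataclass, field

@dataclass
class AgencySystem:
    name: str
    enabled: bool = True
    derived_outputs: set[str] = field(default_factory=set)  # IDs of records it produced
    quarantined: set[str] = field(default_factory=set)

def apply_mandatory_shutdown(system: AgencySystem, assessment_found_bias: bool) -> None:
    """If the impact assessment finds discriminatory or biased outcomes, cease all
    use of the system and of anything it produced; there is no remediation branch."""
    if not assessment_found_bias:
        return
    system.enabled = False
    system.quarantined |= system.derived_outputs  # block reliance on prior outputs too
```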
Pending 2026-11-01
36 O.S. § 6567(A)(4)
Plain Language
AI tools used in utilization review must not discriminate against enrollees in violation of state or federal law. This is a general non-discrimination pass-through provision requiring that AI tools comply with existing anti-discrimination law in the healthcare context. While it creates no new protected classes or testing obligations beyond existing law, it makes clear that AI-driven discrimination constitutes a violation of the utilization review act itself, exposing entities to the act's penalty provisions in addition to existing anti-discrimination remedies.
4. Does not discriminate against enrollees in violation of state and federal law;
Pending 2026-10-06
H-02.1
35 Pa.C.S. § 3503(b)(2)-(3)
Plain Language
Facilities must ensure that their AI-based algorithms and training data do not directly or indirectly discriminate against patients in violation of federal or state law. The algorithms must be applied fairly and equitably, consistent with any applicable HHS regulations or guidance. This imposes both a non-discrimination obligation and an affirmative fairness standard that incorporates federal guidance by reference.
(2) The artificial intelligence-based algorithms and training data sets must not directly or indirectly discriminate against patients in violation of Federal or State law. (3) The artificial intelligence-based algorithms must be fairly and equitably applied, including in accordance with any applicable regulations and/or guidance issued by the United States Department of Health and Human Services.
Pending 2026-10-06
H-02.1
40 Pa.C.S. § 5203(b)(4)-(5)
Plain Language
Insurers must ensure that AI algorithms and training data used in utilization review do not directly or indirectly discriminate against covered persons in violation of federal or state law. The algorithms must be applied fairly and equitably, consistent with applicable HHS regulations or guidance. This imposes both a non-discrimination obligation and an affirmative fairness standard on insurer AI use in utilization review.
(4) The artificial intelligence-based algorithms and training data sets must not directly or indirectly discriminate against covered persons in violation of Federal or State law. (5) The artificial intelligence-based algorithms must be fairly and equitably applied, including in accordance with any applicable regulations or guidance issued by the United States Department of Health and Human Services.
Pending 2026-10-06
H-02.1
40 Pa.C.S. § 5303(b)(4)-(5)
Plain Language
MA/CHIP managed care plans must ensure their AI algorithms and training data do not discriminate against enrollees in violation of federal or state law. The algorithms must be applied fairly and equitably, consistent with applicable HHS regulations and guidance. This parallels the insurer non-discrimination requirement.
(4) The artificial intelligence-based algorithms and training data sets must not directly or indirectly discriminate against the enrollees in violation of Federal or State law. (5) The artificial intelligence-based algorithms must be fairly and equitably applied, including in accordance with any applicable regulations and guidance issued by the United States Department of Health and Human Services.
Pending 2027-01-09
H-02.1
35 Pa.C.S. § 3503(b)(2)-(3)
Plain Language
Facilities must ensure that their AI-based algorithms and training datasets do not discriminate — directly or indirectly — against patients in violation of federal or state law. The algorithms must be fairly and equitably applied, including compliance with any applicable HHS regulations or guidance. This creates both a non-discrimination obligation and an affirmative fairness requirement for clinical AI tools.
(2) The artificial-intelligence-based algorithms and training data sets must not directly or indirectly discriminate against patients in violation of Federal or State law. (3) The artificial-intelligence-based algorithms must be fairly and equitably applied, including in accordance with any applicable regulations and/or guidance issued by the United States Department of Health and Human Services.
Pending 2027-01-09
H-02.1
40 Pa.C.S. § 5203(b)(4)-(5)
Plain Language
Insurers must ensure their AI-based algorithms and training datasets do not discriminate — directly or indirectly — against covered persons in violation of federal or state law. The algorithms must be fairly and equitably applied consistent with applicable HHS regulations or guidance. This applies specifically to AI used in the utilization review process.
(4) The artificial-intelligence-based algorithms and training data sets must not directly or indirectly discriminate against covered persons in violation of Federal or State law. (5) The artificial-intelligence-based algorithms must be fairly and equitably applied, including in accordance with any applicable regulations or guidance issued by the United States Department of Health and Human Services.
Pending 2027-01-09
H-02.1
40 Pa.C.S. § 5303(b)(4)-(5)
Plain Language
MA or CHIP managed care plans must ensure their AI-based algorithms and training datasets do not discriminate against enrollees in violation of federal or state law, and must be fairly and equitably applied consistent with HHS guidance. This mirrors the insurer non-discrimination requirement but applies to Medicaid/CHIP managed care plans.
(4) The artificial-intelligence-based algorithms and training data sets must not directly or indirectly discriminate against the enrollees in violation of Federal or State law. (5) The artificial-intelligence-based algorithms must be fairly and equitably applied, including in accordance with any applicable regulations and guidance issued by the United States Department of Health and Human Services.
Pending
H-02.3, H-02.6, H-02.7
§ 28-5.2-2(k)
Plain Language
Employers may not use electronic monitoring — alone or with an ADS — unless the proposed use has undergone an independent impact assessment. The assessment must be conducted within one year prior to use (or within six months for monitoring already in place when the law takes effect), by an independent and impartial auditor with no financial or legal conflicts of interest and no involvement with the ADS in the preceding five years. The assessment must evaluate data protection and cybersecurity practices, identify allowable purposes, describe potential legal violations and steps to prevent them, and assess negative impacts on employee privacy and job quality. The full assessment must be disclosed in plain language to all affected workers and authorized representatives within 30 days, and workers have the right to comment on, challenge, and bargain over the proposed monitoring based on the findings.
(k) It shall be unlawful for an employer to use electronic monitoring, alone or in conjunction with an automated decision system, unless the employer's proposed use of electronic monitoring has been the subject of an impact assessment. Such impact assessments shall: (1) Be conducted no more than one year prior to the use of such electronic monitoring, or where the electronic monitoring began before the effective date of this section, within six (6) months of the effective date of this chapter; (2) Be conducted by an independent and impartial party with no financial or legal conflicts of interest; (3) Evaluate whether the data protection and security practices surrounding the electronic monitoring are consistent with applicable law and cybersecurity industry's best practices; (4) Identify the allowable purpose(s) as defined in this chapter; (5) Consider and describe any other ways in which the electronic monitoring could result in a violation of applicable law and, for any finding that a violation of law may occur, any necessary or appropriate steps to prevent such violation of law; (6) Consider and describe whether the electronic monitoring may negatively impact employees' privacy and job quality, including wages, hours, and working conditions; and (7) Be disclosed in full, in plain language, to all affected workers and their authorized representatives within thirty (30) days of the employer's receipt of the impact assessment. (i) Workers and their authorized representatives shall have the right to comment on, challenge and bargain over the proposed monitoring based on the assessment's findings.
Pending 2026-02-06
H-02.3, H-02.6, H-02.7
§ 28-5.2-2(k)
Plain Language
Employers may not use electronic monitoring (alone or with an ADS) unless the monitoring has been the subject of an independent impact assessment conducted within one year of use (or within six months of the chapter's effective date for existing monitoring). The assessor must be independent with no financial or legal conflicts, and a strict five-year look-back applies for conflicts. The assessment must evaluate data security, identify allowable purposes, describe potential legal violations and remediation steps, and analyze negative impacts on employee privacy and job quality. The full assessment must be disclosed in plain language to affected workers and their authorized representatives within 30 days. Workers have the right to comment on, challenge, and collectively bargain over the proposed monitoring based on the assessment's findings.
(k) It shall be unlawful for an employer to use electronic monitoring, alone or in conjunction with an automated decision system, unless the employer's proposed use of electronic monitoring has been the subject of an impact assessment. Such impact assessments shall: (1) Be conducted no more than one year prior to the use of such electronic monitoring, or where the electronic monitoring began before the effective date of this section, within six (6) months of the effective date of this chapter; (2) Be conducted by an independent and impartial party with no financial or legal conflicts of interest; (3) Evaluate whether the data protection and security practices surrounding the electronic monitoring are consistent with applicable law and cybersecurity industry's best practices; (4) Identify the allowable purpose(s) as defined in this chapter; (5) Consider and describe any other ways in which the electronic monitoring could result in a violation of applicable law and, for any finding that a violation of law may occur, any necessary or appropriate steps to prevent such violation of law; (6) Consider and describe whether the electronic monitoring may negatively impact employees' privacy and job quality, including wages, hours, and working conditions; and (7) Be disclosed in full, in plain language, to all affected workers and their authorized representatives within thirty (30) days of the employer's receipt of the impact assessment. (i) Workers and their authorized representatives shall have the right to comment on, challenge and bargain over the proposed monitoring based on the assessment's findings.
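A sketch of the timing rules in this subsection, under an assumed reading in which "six months" is approximated as 183 days; the date arithmetic and names are illustrative only.

```python
from datetime import date, timedelta

def assessment_timely(assessed_on: date, monitoring_start: date,
                      effective_date: date) -> bool:
    """New monitoring: assessed no more than one year before use begins.
    Monitoring predating the chapter: assessed within six months (~183 days,
    an assumed reading) of the effective date."""
    if monitoring_start >= effective_date:
        gap = monitoring_start - assessed_on
        return timedelta(0) <= gap <= timedelta(days=365)
    return assessed_on <= effective_date + timedelta(days=183)

def disclosure_deadline(received_on: date) -> date:
    """Full plain-language disclosure to affected workers is due within 30 days
    of the employer's receipt of the assessment."""
    return received_on + timedelta(days=30)
```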
Pending 2025-01-01
H-02.1, H-02.3
Section 37-31-20(A)
Plain Language
Developers of high-risk AI systems must exercise reasonable care to protect South Carolina consumers from known or reasonably foreseeable algorithmic discrimination risks arising from intended and contracted uses. A rebuttable presumption of reasonable care applies if the developer complied with all requirements of Section 37-31-20 and any AG-adopted rules. Self-testing for bias mitigation and diversity expansion uses are carved out from the definition of algorithmic discrimination.
(A) A developer of a high-risk artificial intelligence system shall use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination arising from the intended and contracted uses of the high-risk artificial intelligence system. In any enforcement action brought by the Attorney General pursuant to Section 37-31-60, there is a rebuttable presumption that a developer used reasonable care as required under this section if the developer complied with this section and any additional requirements or obligations as set forth in rules adopted by the Attorney General pursuant to Section 37-31-70.
Pending 2025-01-01
H-02.3
Section 37-31-30(A)
Plain Language
Deployers must exercise reasonable care to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination when deploying high-risk AI systems. A rebuttable presumption of compliance applies if the deployer complied with all requirements of Section 37-31-30 and any AG-adopted rules. This is the deployer-side analog to the developer duty in Section 37-31-20(A).
(A) A deployer of a high-risk artificial intelligence system shall use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination. In any enforcement action brought by the Attorney General pursuant to Section 37-31-70, there is a rebuttable presumption that a deployer of a high-risk artificial intelligence system used reasonable care as required under this section if the deployer complied with this section and any additional requirements or obligations as set forth in rules adopted by the Attorney General pursuant to Section 37-31-70.
Pending 2025-01-01
H-02.3, H-02.8, H-02.10
Section 37-31-30(C)(1)-(7)
Plain Language
Deployers must complete an impact assessment before deploying each high-risk AI system, and repeat it at least annually and within 90 days of any intentional and substantial modification. The assessment must cover: system purpose and deployment context, algorithmic discrimination risk analysis and mitigation, data inputs and outputs, customization data, performance metrics, transparency measures, and post-deployment monitoring safeguards. A single assessment may cover comparable systems, and an assessment completed under another law satisfies this requirement if reasonably similar in scope and effect. All impact assessments and records must be retained for at least three years after final deployment. Additionally, deployers must conduct at least annual reviews to verify that deployed systems are not causing algorithmic discrimination. Small deployers meeting the subsection (F) criteria are exempt.
(C)(1) Except as provided in items (4), (5), and subsection (F) of this section: (a) a deployer, or a third party contracted by the deployer, that deploys a high-risk artificial intelligence system shall complete an impact assessment for the high-risk artificial intelligence system; and (b) a deployer, or a third party contracted by the deployer, shall complete an impact assessment for a deployed high-risk artificial intelligence system at least annually and within ninety days after any intentional and substantial modification to the high-risk artificial intelligence system is made available. (2) An impact assessment completed pursuant to this subsection must include, at a minimum, and to the extent reasonably known by or available to the deployer: (a) a statement by the deployer disclosing the purpose, intended-use cases, and deployment context of, and benefits afforded by, the high-risk artificial intelligence system; (b) an analysis of whether the deployment of the high-risk artificial intelligence system poses any known or reasonably foreseeable risks of algorithmic discrimination and, if so, the nature of the algorithmic discrimination and the steps that have been taken to mitigate the risks; (c) a description of the categories of data the high-risk artificial intelligence system processes as inputs and the outputs the high-risk artificial intelligence system produces; (d) if the deployer used data to customize the high-risk artificial intelligence system, an overview of the categories of data the deployer used to customize the high-risk artificial intelligence system; (e) any metrics used to evaluate the performance and known limitations of the high-risk artificial intelligence system; (f) a description of any transparency measures taken concerning the high-risk artificial intelligence system, including any measures taken to disclose to a consumer that the high-risk artificial intelligence system is in use when the high-risk artificial intelligence system is in use; and (g) a description of the postdeployment monitoring and user safeguards provided concerning the high-risk artificial intelligence system, including the oversight, use, and learning process established by the deployer to address issues arising from the deployment of the high-risk artificial intelligence system. (3) In addition to the information required under item (2), an impact assessment completed pursuant to this item following an intentional and substantial modification to a high-risk artificial intelligence system must include a statement disclosing the extent to which the high-risk artificial intelligence system was used in a manner that was consistent with, or varied from, the developer's intended uses of the high-risk artificial intelligence system. (4) A single impact assessment may address a comparable set of high-risk artificial intelligence systems deployed by a deployer. (5) If a deployer, or a third party contracted by the deployer, completes an impact assessment for the purpose of complying with another applicable law or regulation, the impact assessment satisfies the requirements established in this subsection if the impact assessment is reasonably similar in scope and effect to the impact assessment that would otherwise be completed pursuant to this subsection. 
(6) A deployer shall maintain the most recently completed impact assessment for a high-risk artificial intelligence system as required under this subsection, all records concerning each impact assessment, and all prior impact assessments, if any, for at least three years following the final deployment of the high-risk artificial intelligence system. (7) At least annually, a deployer, or a third party contracted by the deployer, must review the deployment of each high-risk artificial intelligence system deployed by the deployer to ensure that the high-risk artificial intelligence system is not causing algorithmic discrimination.
Failed 2026-07-01
H-02.1, H-02.8
Va. Code § 2.2-1202.2(B)(4)
Plain Language
State agencies must annually test their automated decision systems for algorithmic discrimination — meaning unlawful differential treatment or impact based on a broad list of protected characteristics — and certify the system's compliance with federal and state law. Testing may be performed by the agency itself or by a contractor engaged by the agency. This is an ongoing annual obligation, not a one-time pre-deployment check.
The Director shall require any state agency that uses an automated decision system as a substantial factor in any employment decision to: 4. Annually test, or ensure that an appropriate contractor employed by such agency annually tests, the automated decision system for algorithmic discrimination and certify its compliance with federal and state law;
Failed 2026-07-01
H-02.1, H-02.8
Va. Code § 15.2-1500.2(B)(4)
Plain Language
Local government entities must annually test their automated decision systems for algorithmic discrimination and certify compliance with federal and state law. Testing may be conducted internally or by a contractor. Mirrors the state agency obligation under § 2.2-1202.2(B)(4).
Any department, office, board, commission, agency, or instrumentality of local government that uses an automated decision system as a substantial factor in any employment decision shall: 4. Annually test, or ensure that an appropriate contractor employed by such department, office, board, commission, agency, or instrumentality of local government annually tests, the automated decision system for algorithmic discrimination and certify its compliance with federal and state law;
Pending 2025-07-01
H-02.3
21 V.S.A. § 495q(g)
Plain Language
Before deploying any ADS, employers must complete a written impact assessment covering eight mandatory elements: system description and purpose, data used, outputs and decision types, necessity justification, validity/reliability assessment against social science standards, a five-part risk assessment (discrimination, legal rights chilling, health/safety/dignity harms, privacy harms, economic impacts), risk mitigation measures, and methodology description. The assessment must be provided to employees upon request, updated whenever the ADS undergoes significant changes, and may cover a comparable set of systems in a single document. The discrimination risk analysis must cover an extensive list of protected characteristics including ancestry, crime victim status, and physical or mental condition.
(g) Impact assessment of automated decision systems. (1) Prior to utilizing an automated decision system, an employer shall create a written impact assessment of the system that includes, at a minimum: (A) a detailed description of the automated decision system and its purpose; (B) a description of the data utilized by the system; (C) a description of the outputs produced by the system and the types of employment-related decisions in which those outputs may be utilized; (D) an assessment of the necessity for the system, including reasons for utilizing the system to supplement nonautomated means of decision making; (E) a detailed assessment of the system's validity and reliability in accordance with contemporary social science standards and a description of any metrics used to evaluate the performance and known limitations of the automated decision system; (F) a detailed assessment of the potential risks of utilizing the system, including the risk of: (i) discrimination against employees on the basis of race, color, religion, national origin, sex, sexual orientation, gender identity, ancestry, place of birth, age, crime victim status, or physical or mental condition; (ii) violating employees' legal rights or chilling employees' exercise of legal rights; (iii) directly or indirectly harming employees' physical health, mental health, safety, sense of well-being, dignity, or autonomy; (iv) harm to employee privacy, including through potential security breaches or inadvertent disclosure of information; and (v) negative economic and material impacts to employees, including potential effects on compensation, benefits, work conditions, evaluations, advancement, and work opportunities; (G) a detailed summary of measures taken by the employer to address or mitigate the risks identified pursuant to subdivision (E) of this subdivision (1); and (H) a description of any methodology used in preparing the assessment. (2) An employer shall provide a copy of the assessment prepared pursuant to subdivision (1) of this subsection to an employee upon request. (3) An employer shall update the assessment required pursuant to this subsection any time a significant change or update is made to the automated decision system. (4) A single impact assessment may address a comparable set of automated decision systems deployed by an employer.
Pending 2025-07-01
H-02.1
9 V.S.A. § 4193b
Plain Language
Developers and deployers are categorically prohibited from using, selling, or sharing an automated decision system for consequential decisions if the system produces algorithmic discrimination — meaning differential treatment or impact disfavoring individuals based on a broad list of protected characteristics. This is a strict liability prohibition: the system must not produce discriminatory outcomes regardless of intent. Testing to identify and mitigate discrimination, expanding applicant pools for diversity, and private club exemptions are carved out from the definition of algorithmic discrimination.
It shall be unlawful discrimination for a developer or deployer to use, sell, or share an automated decision system for use in a consequential decision or a product featuring an automated decision system for use in a consequential decision that produces algorithmic discrimination.
Pending 2025-07-01
H-02.6
9 V.S.A. § 4193c(f)
Plain Language
Developers may not use, sell, or share an automated decision system for consequential decisions unless the system has passed an independent audit under § 4193e. If the audit reveals algorithmic discrimination, the developer must halt all use, sale, or sharing until a post-adjustment audit confirms the discrimination has been rectified. This is a deployment-gating obligation — no system may enter the market without clearing the independent audit, and a discriminatory finding triggers a mandatory stop-ship until remediation is verified.
(f) A developer shall not use, sell, or share an automated decision system for use in a consequential decision or a product featuring an automated decision system for use in a consequential decision that has not passed an independent audit, in accordance with section 4193e of this title. If an independent audit finds that an automated decision system for use in a consequential decision does produce algorithmic discrimination, the developer shall not use, sell, or share the system until the algorithmic discrimination has been proven to be rectified by a post-adjustment audit.
Pending 2025-07-01
H-02.6, H-02.7
9 V.S.A. § 4193e(a)-(c)
Plain Language
Developers and deployers are jointly responsible for ensuring an independent audit is conducted at three stages: before deployment, six months after deployment, and at least every 18 months thereafter. The audit must cover data management and security compliance, system validity and reliability per use case, comparative demographic performance analysis for algorithmic discrimination, compliance with federal/state/local labor, civil rights, consumer protection, and privacy laws, and an evaluation of the risk management program. All completed audits must be delivered to the Attorney General regardless of findings. Developer and deployer must contractually allocate audit responsibilities; absent a contract, they are jointly and severally liable. Multiple auditors may be used.
(a) Prior to deployment of an automated decision system for use in a consequential decision, six months after deployment, and at least every 18 months thereafter for each calendar year an automated decision system is in use in consequential decisions after the first post-deployment audit, the developer and deployer shall be jointly responsible for ensuring that an independent audit is conducted in compliance with the provisions of this section to ensure that the product does not produce algorithmic discrimination and complies with the provisions of this subchapter. The developer and deployer shall enter into a contract specifying which party is responsible for the costs, oversight, and results of the audit. Absent an agreement of responsibility through contract, the developer and deployer shall be jointly and severally liable for any violations of this section. Regardless of final findings, the deployer or developer shall deliver all audits conducted under this section to the Attorney General. (b) A deployer or developer may contract with more than one auditor to fulfill the requirements of this section. (c) The audit shall include the following: (1) an analysis of data management policies, including whether personal or sensitive data relating to a consumer is subject to data security protection standards that comply with the requirements of applicable State law; (2) an analysis of the system validity and reliability according to each specified use case listed in the entity's reporting document filed by the developer or deployer pursuant to section 4193f of this title; (3) a comparative analysis of the system's performance when used on consumers of different demographic groups and a determination of whether the system produces algorithmic discrimination in violation of this subchapter by each intended and foreseeable identified use as identified by the deployer and developer pursuant to section 4193f of this title; (4) an analysis of how the technology complies with existing relevant federal, State, and local labor, civil rights, consumer protection, privacy, and data privacy laws; and (5) an evaluation of the developer's or deployer's documented risk management policy and program as set forth in section 4193g of this title for conformity with subsection 4193g(a) of this title.
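Under one plausible reading of subsection (a), the audit calendar looks like the sketch below: one audit before deployment, one six months after, then one at least every 18 months. The month arithmetic and the five-year horizon are illustrative assumptions, not statutory terms.

```python
import calendar
from datetime import date

def add_months(d: date, months: int) -> date:
    """Calendar-aware month addition, clamping to the last day of short months."""
    y, m0 = divmod(d.month - 1 + months, 12)
    y += d.year
    m = m0 + 1
    return date(y, m, min(d.day, calendar.monthrange(y, m)[1]))

def audit_due_dates(deployment: date, horizon_years: int = 5) -> list[date]:
    """Pre-deployment audit (modeled here as due on the deployment date itself),
    a second audit six months later, then one every 18 months while in use."""
    due = [deployment, add_months(deployment, 6)]
    end = add_months(deployment, 12 * horizon_years)
    while (nxt := add_months(due[-1], 18)) <= end:
        due.append(nxt)
    return due
```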
Pending 2025-07-01
H-02.6
9 V.S.A. § 4193e(f)-(g)
Plain Language
The audit must be completed entirely without AI assistance. The auditor must be truly independent — disqualified if they have provided any service to the commissioning company in the past 12 months, were involved in building or deploying the system, have an employment relationship with the developer or deployer, or have any direct or material indirect financial interest in them. Audit fees cannot be contingent on results, and no incentives or bonuses for positive findings are permitted. These are among the most stringent auditor independence requirements in any state ADS statute.
(f) An audit conducted under this section shall be completed in its entirety without the assistance of an automated decision system. (g)(1) An auditor shall be an independent entity, including an individual, nonprofit, firm, corporation, partnership, cooperative, or association. (2) For the purposes of this subchapter, no auditor may be commissioned by a developer or deployer of an automated decision system used in consequential decisions if the auditor: (A) has already been commissioned to provide any auditing or nonauditing service, including financial auditing, cybersecurity auditing, or consulting services of any type, to the commissioning company in the past 12 months; (B) is or was involved in using, developing, integrating, offering, licensing, or deploying the automated decision system; (C) has or had an employment relationship with a developer or deployer that uses, offers, or licenses the automated decision system; or (D) has or had a direct financial interest or a material indirect financial interest in a developer or deployer that uses, offers, or licenses the automated decision system. (3) Fees paid to auditors may not be contingent on the result of the audit and the commissioning company shall not provide any incentives or bonuses for a positive audit result.
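A sketch of the disqualification test in subsection (g), treating each statutory conflict as a boolean; how an organization establishes these facts (lookback periods, materiality of a financial interest) is left to the hypothetical caller.

```python
from dataclasses import dataclass

@dataclass
class AuditorProfile:
    served_commissioner_in_past_12_months: bool  # (g)(2)(A) any auditing or non-auditing service
    involved_with_the_system: bool               # (g)(2)(B) built, licensed, or deployed it
    employment_relationship: bool                # (g)(2)(C) past or present
    financial_interest: bool                     # (g)(2)(D) direct or material indirect
    fee_contingent_on_result: bool               # (g)(3) also bars incentives or bonuses

def auditor_eligible(a: AuditorProfile) -> bool:
    """Eligible only if no statutory disqualifier applies."""
    return not any([
        a.served_commissioner_in_past_12_months,
        a.involved_with_the_system,
        a.employment_relationship,
        a.financial_interest,
        a.fee_contingent_on_result,
    ])
```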
Pre-filed 2026-07-01
18 V.S.A. § 9423(a)(3)-(4)
Plain Language
Health plans must ensure their AI utilization review tools are fairly applied in compliance with applicable HHS regulations and guidance, and are configured and applied consistently across all health plans and insureds so that patients with similar clinical presentations receive the same decisions. This is a fairness and consistency requirement — it prohibits arbitrary variation in AI-driven outcomes across plans or patient populations but does not prescribe a specific bias testing methodology.
(3) The artificial intelligence, algorithm, or other software tool is fairly applied, including in accordance with any applicable regulations and guidance issued by the U.S. Department of Health and Human Services. (4) The artificial intelligence, algorithm, or other software tool is configured and applied in a standard, consistent manner for all health plans and insureds so that the resulting decisions are the same for all patients with similar clinical presentation and considerations.
Passed 2026-07-01
H-02.1
18 V.S.A. § 9771(a)(4)-(6)
Plain Language
Health plans must ensure AI utilization review tools do not supplant provider decision-making (reinforcing the human oversight requirement in § 9771(b)), do not discriminate directly or indirectly against covered individuals in violation of state or federal law, and are fairly and equitably applied consistent with HHS regulations and guidance. The nondiscrimination obligation covers both direct and indirect (disparate impact) discrimination.
(4) The artificial intelligence, algorithm, or other software tool does not supplant health care provider decision making. (5) The use of the artificial intelligence, algorithm, or other software tool does not discriminate, directly or indirectly, against covered individuals in violation of State or federal law. (6) The artificial intelligence, algorithm, or other software tool is fairly and equitably applied, including in accordance with any applicable regulations and guidance issued by the U.S. Department of Health and Human Services.
Pending 2027-01-01
H-02.3
Sec. 2(1)
Plain Language
Developers of high-risk AI systems must exercise reasonable care to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination arising from intended and contracted uses. Compliance with all other developer obligations in Section 2 creates a rebuttable presumption of reasonable care. Self-testing to identify or prevent discrimination, pool-expansion for diversity, and acts by private clubs are expressly excluded from the definition of algorithmic discrimination.
(1) A developer of a high-risk artificial intelligence system shall use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination arising from the intended and contracted uses of the high-risk artificial intelligence system. In a civil action brought against a developer pursuant to this chapter, there is a rebuttable presumption that a developer of a high-risk artificial intelligence system used reasonable care as required by this subsection (1) if the developer complied with the requirements of this section.
Pending 2027-01-01
H-02.3
Sec. 3(1)
Plain Language
Deployers must exercise reasonable care to protect consumers from known or reasonably foreseeable algorithmic discrimination risks. Full compliance with all other deployer obligations in Section 3 creates a rebuttable presumption of reasonable care. This is the deployer-side analog to the developer duty in Section 2(1) and establishes the overarching standard of care against which deployers will be measured in private litigation.
(1) A deployer of a high-risk artificial intelligence system shall use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination. In a civil action brought against a deployer pursuant to this chapter, there is a rebuttable presumption that a deployer of a high-risk artificial intelligence system used reasonable care as required by this subsection (1) if the deployer complied with the provisions of this section.
Pending 2027-01-01
H-02.3, H-02.10
Sec. 3(3)(a)-(c)
Plain Language
Deployers may not deploy or use a high-risk AI system for consequential decisions without first completing a detailed impact assessment. The assessment must cover nine minimum elements: purpose and use cases, discrimination risks and mitigation steps, consistency with developer-intended uses, data categories processed, customization data used, performance metrics and limitations, transparency measures, post-deployment monitoring and user safeguards, and validity/reliability analysis. A single assessment may cover comparable systems, and assessments completed under other laws may satisfy this requirement if reasonably similar in scope. All impact assessments and supporting records — including raw performance data — must be retained for at least three years after final deployment. Impact assessments must be updated before significant updates are used for consequential decisions.
(3)(a) Except as provided in (c) of this subsection (3), a deployer may not deploy or use a high-risk artificial intelligence system to make a consequential decision unless the deployer has completed an impact assessment for such high-risk artificial intelligence system. The deployer shall complete an impact assessment for a high-risk artificial intelligence system before the deployer initially deploys such high-risk artificial intelligence system and before a significant update to such high-risk artificial intelligence system is used to make a consequential decision. (b) An impact assessment completed pursuant to (a) of this subsection (3) must include, at a minimum: (i) A statement by the deployer disclosing the purpose, intended use cases, and deployment context of, and benefits afforded by, the high-risk artificial intelligence system; (ii) A statement by the deployer disclosing whether the deployment or use of the high-risk artificial intelligence system poses any known or reasonably foreseeable risk of algorithmic discrimination and, if so, the nature of such algorithmic discrimination and the steps that have been taken, to the extent feasible, to mitigate such risk; (iii) For each postdeployment impact assessment completed pursuant to this section, whether the intended use cases of the high-risk artificial intelligence system as updated were consistent with, or varied from, the developer's intended uses of such high-risk artificial intelligence system; (iv) A description of the categories of data the high-risk artificial intelligence system processes as inputs and the outputs such high-risk artificial intelligence system produces; (v) If the deployer used data to customize the high-risk artificial intelligence system, an overview of the categories of data the deployer used to customize such high-risk artificial intelligence system; (vi) A list of any metrics used to evaluate the performance and known limitations of the high-risk artificial intelligence system; (vii) A description of any transparency measures taken concerning the high-risk artificial intelligence system, including any measures taken to disclose to a consumer that such high-risk artificial intelligence system is in use when such high-risk artificial intelligence system is in use; (viii) A description of any postdeployment monitoring performed and user safeguards provided concerning such high-risk artificial intelligence system, including any oversight process established by the deployer to address issues arising from deployment or use of such high-risk artificial intelligence system as such issues arise; and (ix) An analysis of such high-risk artificial intelligence system's validity and reliability in accordance with standard industry practices. (c)(i) A single impact assessment may address a comparable set of high-risk artificial intelligence systems deployed or used by a deployer. (ii) If a deployer completes an impact assessment for the purpose of complying with another applicable law or regulation, such impact assessment satisfies the relevant requirements established in this section if such impact assessment is reasonably similar in scope and effect to the impact assessment that would otherwise be completed pursuant to this section. (iii) A deployer that completes an impact assessment pursuant to this section shall maintain such impact assessment and all records concerning the impact assessment for three years. 
Throughout the period of time that a high-risk artificial intelligence system is deployed and for a period of at least three years following the final deployment of the high-risk artificial intelligence system, the deployer shall retain all records concerning each impact assessment conducted on the high-risk artificial intelligence system, including all raw data used to evaluate the performance and known limitations of such system.
Pending 2026-07-01
H-02.1
Sec. 3(1)(a)-(b)
Plain Language
Deployers of high-risk AI systems must use industry-standard measures to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination — defined as unlawful differential impact based on protected characteristics under Washington's anti-discrimination law (RCW 49.60) or federal law. Compliance with the entire chapter creates a rebuttable presumption of reasonable care in any attorney general enforcement action. Testing for bias mitigation and diversity-expanding uses are expressly excluded from the definition of algorithmic discrimination.
(1)(a) Beginning July 1, 2027, each deployer of a high-risk artificial intelligence system must use industry-standard means to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination. (b) In any enforcement action brought on or after July 1, 2027, by the attorney general pursuant to section 9 of this act, there is a rebuttable presumption that a deployer of a high-risk artificial intelligence system used reasonable care as required under this section if the deployer complied with this chapter.
Pending 2026-07-01
H-02.8
Sec. 3(2)(a)
Plain Language
Deployers must conduct at least annual reviews of each deployed high-risk AI system to verify it is not causing algorithmic discrimination. Reviews may be conducted by the deployer itself or by a contracted third party. The first review must be completed by July 1, 2027, with subsequent reviews at least annually thereafter. This is a post-deployment monitoring obligation separate from the pre-deployment impact assessment required under Section 5.
(2)(a) By July 1, 2027, and at least annually thereafter, a deployer or third party contracted by the deployer shall review the deployment of each high-risk artificial intelligence system deployed by the deployer to ensure that the high-risk artificial intelligence system is not causing algorithmic discrimination.
Pending 2026-07-01
H-02.3, H-02.10
Sec. 5(1)-(7)
Plain Language
Deployers must complete an impact assessment for each high-risk AI system before or at deployment, and again within 90 days after any intentional and substantial modification. The assessment must cover the system's purpose and use cases, algorithmic discrimination risk analysis and mitigation steps, data inputs and outputs, performance metrics and limitations, transparency measures, and post-deployment monitoring safeguards. After a substantial modification, the assessment must also disclose whether the system was used consistently with the developer's intended uses. A single assessment may cover comparable systems, and an assessment completed for another law satisfies this requirement if reasonably similar in scope. Deployers must retain the most recent impact assessment, supporting records, and all prior assessments for at least three years after final deployment. The small-deployer exemption in Sec. 6 exempts deployers with fewer than 50 FTEs that do not use their own data to train the system, provided they make the developer's impact assessment available to consumers. Trade secrets and confidential information need not be disclosed.
(1) Except as provided in subsection (6) of this section, a deployer that deploys a high-risk artificial intelligence system on or after July 1, 2027, or a third party contracted by the deployer for such purposes, shall complete an impact assessment for: (a) The high-risk artificial intelligence system; and (b) A deployed high-risk artificial intelligence system no later than 90 days after any intentional and substantial modification to such high-risk artificial intelligence system is made available. (2) Each impact assessment completed pursuant to this section must include, at a minimum, and to the extent reasonably known by, or available to, the deployer: (a) A statement by the deployer disclosing the purpose, intended use cases and deployment context of, and benefits afforded by, the high-risk artificial intelligence system; (b) An analysis of whether the deployment of the high-risk artificial intelligence system poses any known or reasonably foreseeable risks of algorithmic discrimination and, if so, the nature of such algorithmic discrimination and the steps that have been taken to mitigate such risks; (c) A description of the following: (i) The categories of data the high-risk artificial intelligence system processes as inputs; (ii) The outputs the high-risk artificial intelligence system produces; (iii) Any metrics used to evaluate the performance and known limitations of the high-risk artificial intelligence system; (iv) A description of any transparency measures taken concerning the high-risk artificial intelligence system, such as any measures taken to disclose to a consumer that such high-risk artificial intelligence system is in use when such high-risk artificial intelligence system is in use; and (v) A description of the postdeployment monitoring and user safeguards provided concerning such high-risk artificial intelligence system, such as the oversight process established by the deployer to address issues arising from deployment of such high-risk artificial intelligence system. (3) In addition to the information required under subsection (2)(c) of this section, each impact assessment completed following an intentional and substantial modification made to a high-risk artificial intelligence system on or after July 1, 2027, must include a statement disclosing the extent to which the high-risk artificial intelligence system was used in a manner that was consistent with, or varied from, the developer's intended uses of such high-risk artificial intelligence system. (4) A single impact assessment may address a comparable set of high-risk artificial intelligence systems deployed by a deployer. (5) If a deployer, or a third party contracted by the deployer, completes an impact assessment for the purpose of complying with another applicable law or regulation, such impact assessment satisfies the requirements established in this section if such impact assessment is reasonably similar in scope and effect to the impact assessment that would otherwise be completed pursuant to this subsection. (6) A deployer shall maintain the most recently completed impact assessment for a high-risk artificial intelligence system as required under this section, relevant records supporting the impact assessment, and prior impact assessments, if any, for a period of at least three years following the final deployment of the high-risk artificial intelligence system. 
(7) Nothing in this section may be construed to require a deployer to disclose any trade secret, or other confidential or proprietary information.
Pending 2026-07-01
H-02.3
Sec. 6(1)
Plain Language
Small deployers — those with fewer than 50 full-time equivalent employees that do not use their own data to train the high-risk AI system — are exempt from the impact assessment and annual review requirements, provided three conditions are all continuously met: (1) the system is used only for its disclosed intended uses, (2) it continues learning only from non-deployer data, and (3) the deployer makes available to consumers a substantially similar impact assessment completed by the developer. This exemption is conditional and must be maintained throughout deployment — if any condition ceases to be met, the full obligations apply. This is a safe harbor modifying the impact assessment and annual review obligations, not an independent compliance obligation.
(1) The requirements in section 5 (1) through (3) of this act and section 3(2) of this act do not apply to a deployer if, at the time the deployer deploys a high-risk artificial intelligence system and at all times while the high-risk artificial intelligence system is deployed: (a) The deployer: (i) Employs fewer than 50 full-time equivalent employees; and (ii) Does not use the deployer's own data to train the high-risk artificial intelligence system; (b) The high-risk artificial intelligence system: (i) Is used for the intended uses that are disclosed by the deployer; and (ii) Continues learning based on data derived from sources other than the deployer's own data; and (c) The deployer makes available to consumers any impact assessment that: (i) The developer of the high-risk artificial intelligence system has completed and provided to the deployers; and (ii) Includes information that is substantially similar to the information in the impact assessment required under section 5 of this act.
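Because the exemption is conjunctive and continuous, it reduces to a boolean test that must be re-evaluated whenever any input changes. A sketch, with all field names hypothetical:

```python
from dataclasses import dataclass

@dataclass
class DeployerStatus:
    fte_count: int
    trains_system_with_own_data: bool
    used_only_for_disclosed_intended_uses: bool
    learns_only_from_non_deployer_data: bool
    developer_assessment_available_to_consumers: bool

def small_deployer_exempt(s: DeployerStatus) -> bool:
    """All five conditions must hold at deployment and at all times thereafter;
    if any fails, the full impact assessment and annual review duties attach."""
    return (s.fte_count < 50
            and not s.trains_system_with_own_data
            and s.used_only_for_disclosed_intended_uses
            and s.learns_only_from_non_deployer_data
            and s.developer_assessment_available_to_consumers)
```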
Pending 2027-01-01
H-02.1, H-02.2
Sec. 2(1)
Plain Language
Developers of high-risk AI systems must exercise reasonable care to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination arising from intended and contracted uses. Compliance with the full set of developer obligations in Section 2 creates a rebuttable presumption of reasonable care. The self-testing and diversity-expansion carve-outs in the definition of algorithmic discrimination mean that using the system solely to test for or mitigate bias does not itself constitute prohibited discrimination.
(1) A developer of a high-risk artificial intelligence system shall use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination arising from the intended and contracted uses of the high-risk artificial intelligence system. In a civil action brought against a developer pursuant to this chapter, there is a rebuttable presumption that a developer of a high-risk artificial intelligence system used reasonable care as required by this subsection (1) if the developer complied with the requirements of this section.
Pending 2027-01-01
H-02.1, H-02.2
Sec. 3(1)
Plain Language
Deployers of high-risk AI systems have a general duty to exercise reasonable care to protect consumers from known or reasonably foreseeable algorithmic discrimination risks. Full compliance with all deployer obligations in Section 3 creates a rebuttable presumption of reasonable care. This parallels the developer duty in Section 2(1) but applies to the deployment and operational context rather than the development phase.
(1) A deployer of a high-risk artificial intelligence system shall use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination. In a civil action brought against a deployer pursuant to this chapter, there is a rebuttable presumption that a deployer of a high-risk artificial intelligence system used reasonable care as required by this subsection (1) if the deployer complied with the provisions of this section.
Pending 2027-01-01
H-02.3, H-02.10
Sec. 3(3)(a)-(c)
Plain Language
Deployers must complete a comprehensive impact assessment before initially deploying a high-risk AI system and before using any significant update for consequential decisions. The assessment must cover at minimum: purpose and intended uses, discrimination risks and mitigation steps, data categories processed, customization data, performance metrics, transparency measures, post-deployment monitoring, oversight processes, and a validity and reliability analysis. A single assessment may cover comparable systems. Cross-compliance is available — an impact assessment completed for another law satisfies this requirement if reasonably similar in scope. All impact assessments and records, including raw performance evaluation data, must be retained for at least three years following final deployment.
(3)(a) Except as provided in (c) of this subsection (3), a deployer may not deploy or use a high-risk artificial intelligence system to make a consequential decision unless the deployer has completed an impact assessment for such high-risk artificial intelligence system. The deployer shall complete an impact assessment for a high-risk artificial intelligence system before the deployer initially deploys such high-risk artificial intelligence system and before a significant update to such high-risk artificial intelligence system is used to make a consequential decision. (b) An impact assessment completed pursuant to (a) of this subsection (3) must include, at a minimum: (i) A statement by the deployer disclosing the purpose, intended use cases, and deployment context of, and benefits afforded by, the high-risk artificial intelligence system; (ii) A statement by the deployer disclosing whether the deployment or use of the high-risk artificial intelligence system poses any known or reasonably foreseeable risk of algorithmic discrimination and, if so, the nature of such algorithmic discrimination and the steps that have been taken, to the extent feasible, to mitigate such risk; (iii) For each postdeployment impact assessment completed pursuant to this section, whether the intended use cases of the high-risk artificial intelligence system as updated were consistent with, or varied from, the developer's intended uses of such high-risk artificial intelligence system; (iv) A description of the categories of data the high-risk artificial intelligence system processes as inputs and the outputs such high-risk artificial intelligence system produces; (v) If the deployer used data to customize the high-risk artificial intelligence system, an overview of the categories of data the deployer used to customize such high-risk artificial intelligence system; (vi) A list of any metrics used to evaluate the performance and known limitations of the high-risk artificial intelligence system; (vii) A description of any transparency measures taken concerning the high-risk artificial intelligence system, including any measures taken to disclose to a consumer that such high-risk artificial intelligence system is in use when such high-risk artificial intelligence system is in use; (viii) A description of any postdeployment monitoring performed and user safeguards provided concerning such high-risk artificial intelligence system, including any oversight process established by the deployer to address issues arising from deployment or use of such high-risk artificial intelligence system as such issues arise; and (ix) An analysis of such high-risk artificial intelligence system's validity and reliability in accordance with standard industry practices. (c)(i) A single impact assessment may address a comparable set of high-risk artificial intelligence systems deployed or used by a deployer. (ii) If a deployer completes an impact assessment for the purpose of complying with another applicable law or regulation, such impact assessment satisfies the relevant requirements established in this section if such impact assessment is reasonably similar in scope and effect to the impact assessment that would otherwise be completed pursuant to this section. (iii) A deployer that completes an impact assessment pursuant to this section shall maintain such impact assessment and all records concerning the impact assessment for three years. 
Throughout the period of time that a high-risk artificial intelligence system is deployed and for a period of at least three years following the final deployment of the high-risk artificial intelligence system, the deployer shall retain all records concerning each impact assessment conducted on the high-risk artificial intelligence system, including all raw data used to evaluate the performance and known limitations of such system.
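For teams operationalizing this obligation, the enumerated contents of subsection (3)(b) translate naturally into a structured record. The sketch below is a minimal, hypothetical illustration in Python, not a legal form: every class, field, and function name is our own invention, and the three-year retention window is approximated in days rather than calendar years.

```python
# Hypothetical sketch of the minimum impact-assessment contents in
# (3)(b)(i)-(ix), plus the three-year records-retention window.
from dataclasses import dataclass, field
from datetime import date, timedelta
from typing import Optional

@dataclass
class ImpactAssessment:
    # (b)(i) purpose, intended use cases, deployment context, benefits
    purpose_and_intended_uses: str
    # (b)(ii) known or reasonably foreseeable discrimination risks + mitigation
    discrimination_risks_and_mitigation: str
    # (b)(iv) categories of input data and the outputs produced
    input_and_output_data_categories: str
    # (b)(vi) metrics used to evaluate performance and known limitations
    performance_metrics: list[str]
    # (b)(vii) transparency measures, e.g. in-use disclosure to consumers
    transparency_measures: str
    # (b)(viii) postdeployment monitoring, safeguards, oversight process
    postdeployment_monitoring: str
    # (b)(ix) validity and reliability analysis per standard industry practice
    validity_and_reliability_analysis: str
    # (b)(iii) postdeployment assessments only: actual use vs. developer intent
    use_vs_developer_intent: Optional[str] = None
    # (b)(v) only if the deployer customized the system with its own data
    customization_data_categories: Optional[str] = None
    completed_on: date = field(default_factory=date.today)

def retention_deadline(final_deployment: date) -> date:
    """Earliest date records may be discarded: three years after final
    deployment, approximated here as 3 * 365 days."""
    return final_deployment + timedelta(days=3 * 365)

print(retention_deadline(date(2028, 1, 15)))  # 2031-01-14
```

A production record would also need versioning and links to the raw evaluation data, since the retention provision reaches all records concerning the assessment, not just the final document.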
Pending 2026-07-01
H-02.1, H-02.8
Sec. 3(1)(a)-(b), (2)(a)-(b)
Plain Language
Deployers must use industry-standard means to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination from their high-risk AI systems. Beginning July 1, 2027, deployers must also conduct at least annual reviews (internally or via a contracted third party) to verify that each deployed high-risk AI system is not causing algorithmic discrimination; a sketch of one possible statistical screen for such a review follows the statutory text below. If discrimination is discovered, the deployer must notify the attorney general without unreasonable delay, and no later than 90 days after discovery. Compliance with the full chapter creates a rebuttable presumption of reasonable care. Trade secret protections apply: deployers need not disclose proprietary information. Algorithmic discrimination is defined by reference to Washington's Law Against Discrimination (chapter 49.60 RCW) and federal law, with carve-outs for bias testing, diversity expansion, and private clubs.
(1)(a) Beginning July 1, 2027, each deployer of a high-risk artificial intelligence system must use industry-standard means to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination.
(b) In any enforcement action brought on or after July 1, 2027, by the attorney general pursuant to section 10 of this act, there is a rebuttable presumption that a deployer of a high-risk artificial intelligence system used reasonable care as required under this section if the deployer complied with this chapter.
(2)(a) By July 1, 2027, and at least annually thereafter, a deployer or third party contracted by the deployer shall review the deployment of each high-risk artificial intelligence system deployed by the deployer to ensure that the high-risk artificial intelligence system is not causing algorithmic discrimination.
(b) If a deployer subsequently discovers that the high-risk artificial intelligence system has caused algorithmic discrimination, the deployer, without unreasonable delay, but no later than 90 days after the date of the discovery, shall send to the attorney general, in a form and manner prescribed by the attorney general, a notice disclosing the discovery.
(3) Nothing in this section may be construed to require a deployer to disclose any trade secret, or other confidential or proprietary information.
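The bill requires an annual review but does not prescribe a statistical methodology. One common screen, shown in the hedged sketch below, compares selection rates across groups and flags impact ratios below the four-fifths (0.8) threshold familiar from U.S. employment-discrimination practice. The group labels, the threshold default, and both helper names are illustrative assumptions, as is the 90-day notice-deadline calculation.

```python
# Hypothetical annual-review screen: selection rates and impact ratios
# by group, flagged under the common "four-fifths" rule. The bill does
# not mandate this (or any) particular test.
from datetime import date, timedelta

def impact_ratios(selected: dict[str, int], total: dict[str, int]) -> dict[str, float]:
    """Ratio of each group's selection rate to the highest group's rate."""
    rates = {g: selected.get(g, 0) / total[g] for g in total if total[g] > 0}
    top = max(rates.values())
    return {g: r / top for g, r in rates.items()}

def flag_groups(ratios: dict[str, float], threshold: float = 0.8) -> list[str]:
    """Groups whose impact ratio falls below the chosen threshold."""
    return [g for g, r in ratios.items() if r < threshold]

def ag_notice_deadline(discovered: date) -> date:
    """Notice to the attorney general is due no later than 90 days after discovery."""
    return discovered + timedelta(days=90)

# Example review over hypothetical outcome counts:
ratios = impact_ratios({"group_a": 48, "group_b": 30}, {"group_a": 100, "group_b": 100})
print(flag_groups(ratios))                    # ['group_b'] (0.30 / 0.48 = 0.625 < 0.8)
print(ag_notice_deadline(date(2027, 8, 1)))   # 2027-10-30
```

A flagged ratio would not itself establish algorithmic discrimination under chapter 49.60 RCW; it marks where the deployer's review should dig deeper and where the 90-day notice clock could begin if discrimination is in fact confirmed.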
Pending 2026-07-01
H-02.3, H-02.10
Sec. 6(1)-(7)
Plain Language
Deployers must complete an impact assessment before deploying any high-risk AI system on or after July 1, 2027, and within 90 days after any intentional and substantial modification; a deadline-calculation sketch follows the statutory text below. The assessment must cover: system purpose and intended uses, algorithmic discrimination risk analysis with mitigation steps, data input categories, outputs, performance metrics and limitations, transparency measures, and post-deployment monitoring and safeguards. Post-modification assessments must also disclose how actual use compared with the developer's intended uses. A single assessment may cover comparable systems. Assessments completed for other legal compliance purposes satisfy this requirement if reasonably similar in scope and effect. Records must be maintained for at least three years following final deployment, including the most recent assessment, supporting records, and prior assessments. Small deployer exemptions under Sec. 7 apply if the deployer has fewer than 50 FTEs, does not use its own data to train the system, uses the system for its disclosed intended uses, and makes the developer's impact assessment available to consumers. Trade secret protections apply.
(1) Except as provided in subsection (6) of this section, a deployer that deploys a high-risk artificial intelligence system on or after July 1, 2027, or a third party contracted by the deployer for such purposes, shall complete an impact assessment for:
(a) The high-risk artificial intelligence system; and
(b) A deployed high-risk artificial intelligence system no later than 90 days after any intentional and substantial modification to such high-risk artificial intelligence system is made available.
(2) Each impact assessment completed pursuant to this section must include, at a minimum, and to the extent reasonably known by, or available to, the deployer:
(a) A statement by the deployer disclosing the purpose, intended use cases and deployment context of, and benefits afforded by, the high-risk artificial intelligence system;
(b) An analysis of whether the deployment of the high-risk artificial intelligence system poses any known or reasonably foreseeable risks of algorithmic discrimination and, if so, the nature of such algorithmic discrimination and the steps that have been taken to mitigate such risks;
(c) A description of the following:
(i) The categories of data the high-risk artificial intelligence system processes as inputs;
(ii) The outputs the high-risk artificial intelligence system produces;
(iii) Any metrics used to evaluate the performance and known limitations of the high-risk artificial intelligence system;
(iv) A description of any transparency measures taken concerning the high-risk artificial intelligence system, such as any measures taken to disclose to a consumer that such high-risk artificial intelligence system is in use when such high-risk artificial intelligence system is in use; and
(v) A description of the postdeployment monitoring and user safeguards provided concerning such high-risk artificial intelligence system, such as the oversight process established by the deployer to address issues arising from deployment of such high-risk artificial intelligence system.
(3) In addition to the information required under subsection (2)(c) of this section, each impact assessment completed following an intentional and substantial modification made to a high-risk artificial intelligence system on or after July 1, 2027, must include a statement disclosing the extent to which the high-risk artificial intelligence system was used in a manner that was consistent with, or varied from, the developer's intended uses of such high-risk artificial intelligence system.
(4) A single impact assessment may address a comparable set of high-risk artificial intelligence systems deployed by a deployer.
(5) If a deployer, or a third party contracted by the deployer, completes an impact assessment for the purpose of complying with another applicable law or regulation, such impact assessment satisfies the requirements established in this section if such impact assessment is reasonably similar in scope and effect to the impact assessment that would otherwise be completed pursuant to this subsection.
(6) A deployer shall maintain the most recently completed impact assessment for a high-risk artificial intelligence system as required under this section, relevant records supporting the impact assessment, and prior impact assessments, if any, for a period of at least three years following the final deployment of the high-risk artificial intelligence system.
(7) Nothing in this section may be construed to require a deployer to disclose any trade secret, or other confidential or proprietary information.
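Sec. 6's timing rules reduce to a small amount of date arithmetic: an assessment in place before any deployment on or after July 1, 2027, a refreshed assessment within 90 days of each intentional and substantial modification, and record retention for at least three years after final deployment. The sketch below illustrates this under those assumptions; the function names are hypothetical and the three-year window is approximated as 3 × 365 days.

```python
# Hypothetical deadline calculator for the Sec. 6 timing rules.
from datetime import date, timedelta

EFFECTIVE = date(2027, 7, 1)  # assessments required for deployments on/after this date

def assessment_due_dates(deployed: date, modifications: list[date]) -> list[date]:
    """Latest dates by which each required impact assessment must exist."""
    due = []
    if deployed >= EFFECTIVE:
        due.append(deployed)  # initial assessment must exist no later than deployment
    # post-modification assessments: within 90 days of each substantial modification
    due += [m + timedelta(days=90) for m in modifications]
    return sorted(due)

def records_retention_until(final_deployment: date) -> date:
    """Most recent assessment, supporting records, and prior assessments must
    be kept for at least three years after final deployment (3 * 365 days here)."""
    return final_deployment + timedelta(days=3 * 365)

print(assessment_due_dates(date(2027, 9, 1), [date(2028, 2, 15)]))
# [datetime.date(2027, 9, 1), datetime.date(2028, 5, 15)]
```

Note that the 90-day clock in subsection (1)(b) runs from when the modification "is made available," so deployers tracking vendor release dates, rather than their own upgrade dates, will compute the more conservative deadline.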