AI systems used in high-stakes contexts must be tested and formally assessed for discriminatory impact across protected characteristics before deployment. Results must be documented and retained. Some jurisdictions require submission to regulators; others require independent third-party audits with public disclosure of results.
(f) It is unlawful for an employer or other covered entity to use an automated-decision system or selection criteria (including a qualification standard, employment test, or proxy) that discriminates against an applicant or employee or a class of applicants or employees on a basis protected by the Act, subject to any available defense. Relevant to any such claim or available defense is evidence, or the lack of evidence, of anti-bias testing or similar proactive efforts to avoid unlawful discrimination, including the quality, efficacy, recency, and scope of such effort, the results of such testing or other effort, and the response to the results.
(2) Prohibited Recruitment Practices. An employer or other covered entity shall not, unless pursuant to a permissible defense, engage in any recruitment activity, including but not limited to practices accomplished through the use of an automated-decision system, that: (A) Restricts, excludes, or classifies individuals on a basis enumerated in the Act; (B) Expresses a preference for individuals on a basis enumerated in the Act; or (C) Communicates or uses advertising methods to communicate the availability of employment benefits in a manner intended to discriminate on a basis enumerated in the Act.
(1) Limited Permissible Inquiries. An employer or other covered entity may make any pre-employment inquiries that do not discriminate on a basis enumerated in the Act. Inquiries, including but not limited to inquiries made through the use of an automated-decision system, that directly or indirectly identify an individual on a basis enumerated in the Act are unlawful unless made pursuant to a permissible defense.
(3)(A) The use of online application technology that limits, screens out, ranks, or prioritizes applicants based on their schedule may discriminate against applicants based on their religious creed, disability, or medical condition. Such a practice having an adverse impact is unlawful unless job-related and consistent with business necessity and the online application technology includes a mechanism for the applicant to request an accommodation. (5) Automated-Decision Systems. The use of an automated-decision system that, for example, measures an applicant's skill, dexterity, reaction time, and/or other abilities or characteristics may discriminate against individuals with certain disabilities or other characteristics protected under the Act. To avoid unlawful discrimination, an employer or other covered entity may need to provide reasonable accommodation to an applicant as required by Article 8 (religious creed) or Article 9 (disability) of these regulations.
(a) Selection and Testing. Any policy or practice of an employer or other covered entity that has an adverse impact on employment opportunities of individuals on a basis enumerated in the Act is unlawful unless the policy or practice is job-related and consistent with business necessity (business necessity is defined in section 11010(b)). The Council herein adopts the Uniform Guidelines on Employee Selection Procedures promulgated by various federal agencies, including the EEOC and Department of Labor. [29 C.F.R. 1607 (1978)]. (d)(1) Automated-Decision Systems. An automated-decision system that, for example, analyzes an applicant's tone of voice, facial expressions or other physical characteristics or behavior may discriminate against individuals based on race, national origin, gender, disability, or other characteristics protected under the Act. To avoid unlawful discrimination, an employer or other covered entity may need to provide reasonable accommodation to an applicant as required by Article 8 (religious creed) or Article 9 (disability) of these regulations. (e) Permissible Selection Devices. A testing device, automated-decision system, or other means of selection that is facially neutral, but that has an adverse impact (as defined in the Uniform Guidelines on Employee Selection Procedures (29 C.F.R. 1607 (1978))) upon persons on a basis enumerated in the Act, is permissible only upon a showing that the selection practice is job-related and consistent with business necessity (business necessity is defined in section 11010(b)).
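The Uniform Guidelines adopted here supply the familiar quantitative screen for adverse impact: under 29 C.F.R. 1607.4(D), a selection rate for any group that is less than four-fifths (80%) of the rate for the group with the highest rate will generally be regarded as evidence of adverse impact. A minimal sketch of that calculation follows; the group labels and counts are hypothetical, and the four-fifths figure is a rule of thumb, not a statutory threshold.

```python
# Minimal sketch of the Uniform Guidelines' "four-fifths rule" (29 C.F.R.
# 1607.4(D)): a selection rate less than 80% of the highest group's rate is
# generally regarded as evidence of adverse impact. Hypothetical data.

def selection_rates(outcomes):
    """outcomes maps group -> (selected, total applicants)."""
    return {g: sel / tot for g, (sel, tot) in outcomes.items() if tot > 0}

def adverse_impact_ratios(outcomes):
    """Each group's selection rate divided by the highest group's rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

if __name__ == "__main__":
    applicants = {"group_a": (48, 100), "group_b": (30, 100)}
    for group, ratio in adverse_impact_ratios(applicants).items():
        flag = "possible adverse impact" if ratio < 0.8 else "ok"
        print(f"{group}: impact ratio {ratio:.2f} ({flag})")
```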
(1) Prohibited consideration under this subsection includes, but is not limited to, inquiring about criminal history through an employment application, background check, or internet searches, or the use of an automated-decision system.
(b) The prohibited practices set forth in subsection (a) include any such practice conducted in whole or in part through the use of an automated-decision system.
(b) Discrimination based on an applicant's or employee's accent is unlawful unless the employer proves that the individual's accent interferes materially with the applicant's or employee's ability to perform the job in question. This prohibition also applies where such discrimination resulted, in whole or in part, from an employer's or other covered entity's use of an automated-decision system or selection criteria (including a qualification standard, employment test, or proxy). (c) Discrimination based on an applicant's or employee's English proficiency is unlawful unless the proficiency requirement at issue is justified by business necessity (i.e., the proficiency requirement is necessary to effectively fulfill the job duties of the position). In determining business necessity in this context, relevant factors include, but are not limited to, the type of proficiency required (e.g., spoken, written, aural, and/or reading comprehension), the degree of proficiency required, and the nature and job duties of the position. This prohibition also applies where such discrimination resulted, in whole or in part, from an employer's or other covered entity's use of an automated-decision system or selection criteria (including a qualification standard, employment test, or proxy). (m) It is unlawful for an employer or other covered entity to use an automated-decision system or selection criteria (including a qualification standard, employment test, or proxy) that discriminates against an applicant or employee or a class of applicants or employees on the basis of national origin or a proxy of national origin, subject to any available defense. Relevant to any such claim or available defense is evidence, or the lack of evidence, of anti-bias testing or similar proactive efforts to avoid unlawful discrimination, including the quality, efficacy, recency, and scope of such effort, the results of such testing or other effort, and the response to the results.
(4) It is unlawful for an employer or other covered entity to use an automated-decision system or selection criteria (including a qualification standard, employment test, or proxy) that discriminates against an applicant or employee or a class of applicants or employees on the basis of sex, subject to any available defense. Relevant to any such claim or available defense is evidence, or the lack of evidence, of anti-bias testing or similar proactive efforts to avoid unlawful discrimination, including the quality, efficacy, recency, and scope of such effort, the results of such testing or other effort, and the response to the results. (f) It is unlawful for an employer or other covered entity to use an automated-decision system or selection criteria (including a qualification standard, employment test, or proxy) that discriminates against an applicant or employee or a class of applicants or employees on the basis of sex or any basis prohibited in subsections in (a) through (e) of this section, subject to any available defense. Relevant to any such claim or available defense is evidence, or the lack of evidence, of anti-bias testing or similar proactive efforts to avoid unlawful discrimination, including the quality, efficacy, recency, and scope of such effort, the results of such testing or other effort, and the response to the results.
(b) It is unlawful for an employer or other covered entity to use an automated-decision system or selection criteria (including a qualification standard, employment test, or proxy) that discriminates against an applicant or employee or a class of applicants or employees on the basis of pregnancy or perceived pregnancy, subject to any available defense. Relevant to any such claim or available defense is evidence, or the lack of evidence, of anti-bias testing or similar proactive efforts to avoid unlawful discrimination, including the quality, efficacy, recency, and scope of such effort, the results of such testing or other effort, and the response to the results.
(J) use an automated-decision system or selection criteria (including a qualification standard, employment test, or proxy) that discriminates against an applicant or employee or a class of applicants or employees on the basis of pregnancy or perceived pregnancy, subject to any available defense. Relevant to any such claim or available defense is evidence, or the lack of evidence, of anti-bias testing or similar proactive efforts to avoid unlawful discrimination, including the quality, efficacy, recency, and scope of such effort, the results of such testing or other effort, and the response to the results; or
(a) Impermissible Inquiries. It is unlawful to ask an applicant to disclose their marital status as part of a pre-employment inquiry, including an inquiry made through the use of an automated-decision system, unless pursuant to a permissible defense.
(b) It is unlawful for an employer or other covered entity to use an automated-decision system or selection criteria (including a qualification standard, employment test, or proxy) that discriminates against an applicant or employee or a class of applicants or employees on the basis of religion, subject to any available defense. Relevant to any such claim or available defense is evidence, or the lack of evidence, of anti-bias testing or similar proactive efforts to avoid unlawful discrimination, including the quality, efficacy, recency, and scope of such effort, the results of such testing or other effort, and the response to the results.
(a)(2) It is unlawful to advertise or publicize, including but not limited to through the use of an automated-decision system, an employment benefit in any way that discourages or is designed to discourage applicants with disabilities from applying to a greater extent than individuals without disabilities. (b)(2) Prohibited Inquiries. It is unlawful to ask general questions on disability or questions likely to elicit information about a disability in an application form, automated-decision system, or pre-employment questionnaire or at any time before a job offer is made. Examples of prohibited inquiries are: [list of examples]
(e) As used in this article, "medical or psychological examination" (a term that is defined in section 11065 of these regulations) or a disability-related inquiry includes any such examination or inquiry administered through the use of an automated-decision system. Such examination or inquiry may include a test, question, puzzle, game, or other challenge that is likely to elicit information about a disability.
(1) In general. It is unlawful for an employer or other covered entity to use qualification standards, employment tests, proxies, or other selection criteria — including but not limited to those administered through the use of an automated-decision system — that screen out, tend to screen out, or otherwise have an adverse impact on an applicant or employee with a disability or a class of applicants or employees with disabilities, on the basis of disability. However, such standards, tests, or other selection criteria, as used by the employer or other covered entity, are not unlawful under this subsection when job-related for the position in question, and there is no less discriminatory standard, test, or other selection criterion that serves the employer's goals as effectively as the challenged standard, test, or other selection criterion. (2) Qualification Standards and Tests Related to Uncorrected Vision or Uncorrected Hearing. An employer or other covered entity shall not use qualification standards, employment tests, proxies, or other selection criteria — including but not limited to those administered through the use of an automated-decision system — that discriminate against an applicant or employee based on uncorrected vision or uncorrected hearing. However, such standards, tests, or other selection criteria, as used by the employer or other covered entity, are not unlawful under this subsection when job-related for the position in question, and there is no less discriminatory standard, test, or other selection criterion that serves the employer's goals as effectively as the challenged standard, test, or other selection criterion. (3) An employer or other covered entity shall not make use of any testing criterion, including but not limited to through the use of an automated-decision system, that discriminates against applicants or employees with disabilities, unless: (A) the test score or other selection criterion used is shown to be job-related for the position in question; and (B) an alternative job-related test or criterion that does not discriminate against applicants or employees with disabilities is unavailable or would impose an undue hardship on the employer.
(a) Employers. Discrimination on the basis of age may be established by showing that a job applicant's or employee's age of 40 or older was considered in the denial of employment or an employment benefit. There is a presumption of discrimination whenever a facially neutral practice, including but not limited to the use of an automated-decision system, has an adverse impact on an applicant(s) or employee(s) age 40 or older, unless the practice is job-related and consistent with business necessity as defined in section 11010(b). In the context of layoffs or salary reduction efforts that have an adverse impact on an employee(s) age 40 or older, an employer's preference to retain a lower paid worker(s), alone, is insufficient to negate the presumption. The practice may still be impermissible, even where it is job-related and consistent with business necessity, where it is shown that an alternative practice could accomplish the business purpose equally well with a lesser discriminatory impact.
(b) Pre-employment Inquiries. Unless age is a bona fide occupational qualification for the position at issue, pre-employment inquiries that would result in the direct or indirect identification of persons on the basis of age, including, but not limited to, inquiries made through the use of an automated-decision system, are unlawful. Examples of prohibited inquiries are requests for age, date of birth, or graduation dates, except where age is a bona fide occupational qualification. This provision applies to oral and written inquiries and interviews. (c)(1) Subsection (c) prohibits the use of online job applications that require entry of age in order to access or complete an application, or the use of drop-down menus that contain age-based cut-off dates or utilize automated selection criteria or algorithms that have the effect of screening out applicants age 40 and older. Use of online application technology or an automated-decision system that limits or screens out older applicants is discriminatory unless age is a bona fide occupational qualification. (See section 11010(a).)
(g) It is unlawful for an employer or other covered entity to discriminate against an applicant or employee because they hold or present a driver's license issued under section 12801.9 of the Vehicle Code. This prohibition also applies where such discrimination resulted, in whole or in part, from an employer's or other covered entity's use of an automated-decision system or selection criteria (including a qualification standard, employment test, or proxy).
(h) Citizenship requirements. Citizenship requirements that are a pretext for discrimination or have the purpose or effect of discriminating against applicants or employees on the basis of national origin or ancestry are unlawful, unless pursuant to a permissible defense. This prohibition also applies where such discrimination resulted, in whole or in part, from an employer's or other covered entity's use of an automated-decision system or selection criteria (including a qualification standard, employment test, or proxy).
(5) An employer or other covered entity shall select and administer tests concerning employment so as to ensure that, when administered to any applicant or employee, including an applicant or employee with a disability, the test results accurately reflect the applicant's or employee's job skills, aptitude, or whatever other criteria the test purports to measure, rather than reflecting the applicant's or employee's disability, except where the skills affected by the disability are the criteria that the test purports to measure. Tests concerning employment include, but are not limited to, those administered through the use of an automated-decision system. To accomplish this end, reasonable accommodation shall be made in testing conditions.
(F) Alternate tests or individualized assessments may be necessary where test modification is inappropriate. Competent expert advice may be sought before attempting such modification since the validity of the test may be affected. The use of an automated-decision system, in the absence of additional process or actions, does not constitute an individualized assessment.
(a) (1) For a high-risk automated decision system made publicly available for use on or after January 1, 2026, a developer shall perform an impact assessment on the high-risk automated decision system before making the high-risk automated decision system publicly available for use. (2) For a high-risk automated decision system first made publicly available for use before January 1, 2026, a developer shall perform an impact assessment if the developer makes a substantial modification to the high-risk automated decision system. (c) (1) A developer shall make available to deployers and potential deployers the statements included in the developer's impact assessment pursuant to paragraph (2). (2) An impact assessment prepared pursuant to this section shall include all of the following: (A) A statement of the purpose of the high-risk automated decision system and its intended benefits, intended uses, and intended deployment contexts. (B) A description of the high-risk automated decision system's intended outputs. (C) A summary of the types of data intended to be used as inputs to the high-risk automated decision system and any processing of those data inputs recommended to ensure the intended functioning of the high-risk automated decision system. (D) A summary of reasonably foreseeable potential disproportionate or unjustified impacts on a protected classification from the intended use by deployers of the high-risk automated decision system. (E) A developer's impact assessment shall also include both of the following: (i) A description of safeguards implemented or other measures taken by the developer to mitigate and guard against risks known to the developer of algorithmic discrimination arising from the use of the high-risk automated decision system. (ii) A description of how the high-risk automated decision system can be monitored by a deployer for risks of algorithmic discrimination known to the developer.
(b) (1) Except as provided in paragraph (2), for a high-risk automated decision system first deployed after January 1, 2026, a deployer shall perform an impact assessment within two years of deploying the high-risk automated decision system. (2) A state agency that is a deployer may opt out of performing an impact assessment if the state agency uses the automated decision system only for its intended use as determined by the developer and all of the following requirements are met: (A) The state agency does not make a substantial modification to the high-risk automated decision system. (B) The developer of the high-risk automated decision system is in compliance with Section 10285.8 of the Public Contract Code and subdivision (d). (C) The state agency does not have a reasonable basis to believe that deployment of the high-risk automated decision system as intended by the developer is likely to result in algorithmic discrimination. (D) The state agency is in compliance with Section 22756.3. (c) (2) An impact assessment prepared pursuant to this section shall include all of the following: (F) A statement of the extent to which the deployer's use of the high-risk automated decision system is consistent with, or varies from, the developer's statement of the high-risk automated decision system's purpose and intended benefits, intended uses, and intended deployment contexts. (G) A description of safeguards implemented or other measures taken to mitigate and guard against any known risks to the deployer of discrimination arising from the high-risk automated decision system. (H) A description of how the high-risk automated decision system has been, and will be, monitored and evaluated.
(a) Except as provided in subdivision (b), a deployer or developer shall not deploy or make available for deployment a high-risk automated decision system if the impact assessment performed pursuant to this chapter determines that the high-risk automated decision system is likely to result in algorithmic discrimination. (b) (1) A deployer or developer may deploy or make available for deployment a high-risk automated decision system if the impact assessment performed pursuant to this chapter determines that the high-risk automated decision system will result in algorithmic discrimination if the deployer or developer implements safeguards to mitigate the known risks of algorithmic discrimination. (2) A deployer or developer acting under the exception provided by paragraph (1) shall perform an updated impact assessment to verify that the algorithmic discrimination has been mitigated and is not reasonably likely to occur.
(c) (1) Developers of AI models or AI systems, in conjunction with health facilities, clinics, physician's offices, or offices of a group practice, shall test for biased impacts in the outputs produced by the specified AI model or AI system based on the health facility's patient population. (2) Developers shall use an existing testing system designated by the advisory board until the advisory board has developed its standardized testing system described in paragraph (2) of subdivision (b). After the advisory board has developed its testing system, developers may alternatively use the board's testing system. (3) After the advisory board has created the certification described in paragraph (3) of subdivision (b), developers may use the advisory board's standardized testing system to certify their AI models or AI systems.
(c) THE ARTIFICIAL INTELLIGENCE SYSTEM IS NOT USED IN ANY WAY THAT DISCRIMINATES AGAINST INDIVIDUALS IN VIOLATION OF OTHER STATE OR FEDERAL LAWS; (d) THE ARTIFICIAL INTELLIGENCE SYSTEM IS FAIRLY AND EQUITABLY APPLIED, INCLUDING IN ACCORDANCE WITH APPLICABLE REGULATIONS AND GUIDANCE ISSUED BY THE FEDERAL DEPARTMENT OF HEALTH AND HUMAN SERVICES;
(1) A controller shall not conduct processing that presents a heightened risk of harm to a consumer without conducting and documenting a data protection assessment of each of its processing activities that involve personal data acquired on or after the effective date of this section that present a heightened risk of harm to a consumer. (2) For purposes of this section, "processing that presents a heightened risk of harm to a consumer" includes the following: (a) Processing personal data for purposes of targeted advertising or for profiling if the profiling presents a reasonably foreseeable risk of: (I) Unfair or deceptive treatment of, or unlawful disparate impact on, consumers; (II) Financial or physical injury to consumers; (III) A physical or other intrusion upon the solitude or seclusion, or the private affairs or concerns, of consumers if the intrusion would be offensive to a reasonable person; or (IV) Other substantial injury to consumers; (b) Selling personal data; and (c) Processing sensitive data. (3) Data protection assessments must identify and weigh the benefits that may flow, directly and indirectly, from the processing to the controller, the consumer, other stakeholders, and the public against the potential risks to the rights of the consumer associated with the processing, as mitigated by safeguards that the controller can employ to reduce the risks. The controller shall factor into this assessment the use of de-identified data and the reasonable expectations of consumers, as well as the context of the processing and the relationship between the controller and the consumer whose personal data will be processed. (4) A controller shall make the data protection assessment available to the attorney general upon request. The attorney general may evaluate the data protection assessment for compliance with the duties contained in section 6-1-1308 and with other laws, including this article 1. Data protection assessments are confidential and exempt from public inspection and copying under the "Colorado Open Records Act", part 2 of article 72 of title 24. The disclosure of a data protection assessment pursuant to a request from the attorney general under this subsection (4) does not constitute a waiver of any attorney-client privilege or work-product protection that might otherwise exist with respect to the assessment and any information contained in the assessment. (5) A single data protection assessment may address a comparable set of processing operations that include similar activities. (6) Data protection assessment requirements apply to processing activities created or generated after July 1, 2023, and are not retroactive.
(6) Duty to avoid unlawful discrimination. A controller shall not process personal data in violation of state or federal laws that prohibit unlawful discrimination against consumers.
(1) On and after June 30, 2026, a developer of a high-risk artificial intelligence system shall use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination arising from the intended and contracted uses of the high-risk artificial intelligence system. In any enforcement action brought on or after June 30, 2026, by the attorney general pursuant to section 6-1-1706, there is a rebuttable presumption that a developer used reasonable care as required under this section if the developer complied with this section and any additional requirements or obligations as set forth in rules adopted by the attorney general pursuant to section 6-1-1707.
(1) On and after June 30, 2026, a deployer of a high-risk artificial intelligence system shall use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination. In any enforcement action brought on or after June 30, 2026, by the attorney general pursuant to section 6-1-1706, there is a rebuttable presumption that a deployer of a high-risk artificial intelligence system used reasonable care as required under this section if the deployer complied with this section and any additional requirements or obligations as set forth in rules adopted by the attorney general pursuant to section 6-1-1707.
(3) (a) Except as provided in subsections (3)(d), (3)(e), and (6) of this section: (I) A deployer, or a third party contracted by the deployer, that deploys a high-risk artificial intelligence system on or after June 30, 2026, shall complete an impact assessment for the high-risk artificial intelligence system; and (II) On and after June 30, 2026, a deployer, or a third party contracted by the deployer, shall complete an impact assessment for a deployed high-risk artificial intelligence system at least annually and within ninety days after any intentional and substantial modification to the high-risk artificial intelligence system is made available.
(c) In addition to the information required under subsection (3)(b) of this section, an impact assessment completed pursuant to this subsection (3) following an intentional and substantial modification to a high-risk artificial intelligence system on or after June 30, 2026, must include a statement disclosing the extent to which the high-risk artificial intelligence system was used in a manner that was consistent with, or varied from, the developer's intended uses of the high-risk artificial intelligence system.
(g) On or before June 30, 2026, and at least annually thereafter, a deployer, or a third party contracted by the deployer, must review the deployment of each high-risk artificial intelligence system deployed by the deployer to ensure that the high-risk artificial intelligence system is not causing algorithmic discrimination.
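The cadence these subsections establish — an impact assessment at least annually, and within ninety days after any intentional and substantial modification — reduces to a simple due-date rule. The helper below is an illustrative sketch, not statutory guidance; treating "annually" as 365 days is our assumption, not the statute's.

```python
# Illustrative due-date rule for the Colorado-style cadence: an impact
# assessment at least annually, and within 90 days after any intentional and
# substantial modification. The 365-day reading of "annually" is an assumption.
from datetime import date, timedelta
from typing import Optional

def next_assessment_due(last_assessment: date,
                        last_modification: Optional[date] = None) -> date:
    annual_due = last_assessment + timedelta(days=365)
    if last_modification and last_modification > last_assessment:
        # A substantial modification accelerates the deadline to 90 days out.
        return min(annual_due, last_modification + timedelta(days=90))
    return annual_due

# e.g. next_assessment_due(date(2026, 7, 1), date(2027, 3, 1)) -> 2027-05-30
```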
(a) (1) Prior to deploying an automated employment-related decision process, and annually thereafter, a deployer shall contract with an independent auditor to complete a bias audit. Such bias audit shall be conducted no more than one year prior to the date the deployer intends to deploy such automated employment-related decision process. (2) Each bias audit conducted pursuant to this subsection shall: (A) Evaluate the automated employment-related decision process's performance and error rates across relevant subgroups; (B) Assess disparate impact caused by the automated employment-related decision process against protected classes; (C) Examine the sources of data processed by the automated employment-related decision process and the quality of the content, decisions, predictions, or recommendations generated by the automated employment-related decision process; (D) Evaluate the effects of any thresholds, scoring, or ranking criteria utilized by the automated employment-related decision process; and (E) Test for less discriminatory alternatives or adjustments to such automated employment-related decision process. (3) No deployer shall contract with an independent auditor who (A) has a financial or operational interest in the deployer or developer of the automated employment-related decision process, or (B) has not been approved by the Labor Commissioner pursuant to subsection (b) of this section.
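Subparagraphs (2)(A) and (2)(B) call for quantitative evaluation. One hypothetical way an auditor might compute per-subgroup selection rates, false positive rates, and false negative rates from labeled outcomes is sketched below; the record format (group, actual outcome, tool decision) and the choice of metrics are our assumptions, not requirements of the statute.

```python
# Hypothetical sketch of per-subgroup metrics for a bias audit under
# (2)(A)-(B): selection rate, false positive rate, and false negative rate.
from collections import defaultdict

def subgroup_metrics(records):
    """records: iterable of (group, y_true, y_pred) with 0/1 outcomes."""
    c = defaultdict(lambda: {"tp": 0, "fp": 0, "fn": 0, "tn": 0})
    for group, y_true, y_pred in records:
        if y_pred and y_true:
            c[group]["tp"] += 1      # selected, would have succeeded
        elif y_pred:
            c[group]["fp"] += 1      # selected, would not have
        elif y_true:
            c[group]["fn"] += 1      # screened out despite qualification
        else:
            c[group]["tn"] += 1
    metrics = {}
    for group, v in c.items():
        n = sum(v.values())
        pos, neg = v["tp"] + v["fn"], v["fp"] + v["tn"]
        metrics[group] = {
            "selection_rate": (v["tp"] + v["fp"]) / n,
            "false_positive_rate": v["fp"] / neg if neg else None,
            "false_negative_rate": v["fn"] / pos if pos else None,
        }
    return metrics
```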
(d) No automated employment-related decision process shall be deployed or continue to be deployed by a deployer if the most recent bias audit conducted pursuant to subsection (a) of this section identified any disparate impact caused by such automated employment-related decision process, except where the deployer can demonstrate (1) a business necessity, (2) such deployer has implemented corrective actions approved by the Labor Commissioner, and (3) that either (A) no less discriminatory alternative is available, or (B) a less discriminatory alternative has been implemented by the deployer.
(A) For an employer, by the employer or the employer's agent, except in the case of a bona fide occupational qualification or need, to refuse to hire or employ or to bar or to discharge from employment any individual or to discriminate against any individual in compensation or in terms, conditions or privileges of employment because of, or to use an automated employment-related decision process in any manner that has the effect of causing the employer to refuse to hire or employ or to bar or to discharge from employment any individual or to discriminate against any individual in compensation or in terms, conditions or privileges of employment on the basis of, the individual's race, color, religious creed, age, sex, gender identity or expression, marital status, national origin, ancestry, present or past history of mental disability, intellectual disability, learning disability, physical disability, including, but not limited to, blindness, status as a veteran, status as a victim of domestic violence, status as a victim of sexual assault or status as a victim of trafficking in persons. In any action for a discriminatory practice in violation of this subparagraph involving an automated employment-related decision process, the commission or the court shall consider any evidence, or lack of evidence, of anti-bias testing or similar proactive efforts to avoid such discriminatory practice, including, but not limited to, the quality, efficacy, recency and scope of such testing or efforts, the results of such testing or efforts and the response thereto.
(c) Beginning on February 1, 2024, the Department of Administrative Services shall perform ongoing assessments of systems that employ artificial intelligence and are in use by state agencies to ensure that no such system shall result in any unlawful discrimination or disparate impact described in subparagraph (B) of subdivision (1) of subsection (b) of section 2 of this act. The department shall perform such assessment in accordance with the policies and procedures established by the Office of Policy and Management pursuant to subsection (b) of section 2 of this act.
In the city, it shall be unlawful for an employer or an employment agency to use an automated employment decision tool to screen a candidate or employee for an employment decision unless: 1. Such tool has been the subject of a bias audit conducted no more than one year prior to the use of such tool; and 2. A summary of the results of the most recent bias audit of such tool as well as the distribution date of the tool to which such audit applies has been made publicly available on the website of the employer or employment agency prior to the use of such tool.
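The statute leaves the audit methodology to implementing rules. Under the Department of Consumer and Worker Protection's rules as we read them, the central calculation is an "impact ratio": each category's selection rate (or, for tools that score candidates, its scoring rate — the share scoring above the sample median) divided by the rate of the most favored category. A hypothetical sketch:

```python
# Sketch of the impact-ratio calculations used in NYC bias audits, per our
# reading of the DCWP implementing rules. Categories and scores are
# hypothetical; this is an illustration, not the prescribed methodology.
import statistics

def scoring_rates(scores_by_group):
    """Share of each group scoring above the pooled median (scored tools)."""
    pooled = [s for scores in scores_by_group.values() for s in scores]
    median = statistics.median(pooled)
    return {g: sum(s > median for s in scores) / len(scores)
            for g, scores in scores_by_group.items()}

def impact_ratios(rates):
    """Each category's rate divided by the most favored category's rate."""
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

rates = scoring_rates({"group_a": [72, 65, 90, 81], "group_b": [60, 66, 68, 77]})
print(impact_ratios(rates))  # e.g. {'group_a': 1.0, 'group_b': 0.33...}
```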
No developer shall sell, distribute, or otherwise make available to deployers an automated decision system that results in algorithmic discrimination.
A developer of an automated decision system shall take steps to address risks of algorithmic discrimination, invalidity, and errors, including, but not limited to, ensuring suitability and representativeness of data sources, implementing data governance measures, testing the automated decision system for disparate impact, and searching for less discriminatory alternative decision methods. Developers shall continue assessing and mitigating the risk of algorithmic discrimination in their automated decision systems so long as such automated decision systems are in use by any deployer.
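"Searching for less discriminatory alternative decision methods" is not further specified; one illustrative operationalization is to compare candidate decision rules that clear a validity floor and prefer the one with the least disparate impact. Everything in the sketch below — the candidates, the accuracy floor, and the use of a worst-subgroup impact ratio as the comparison metric — is an assumption for illustration:

```python
# Illustrative "less discriminatory alternative" search: among candidate
# decision rules meeting a minimum validity (accuracy) floor, prefer the one
# whose worst subgroup impact ratio is highest. Candidates, metrics, and the
# floor are assumptions, not a prescribed method.

def pick_lda(candidates, min_accuracy=0.80):
    """candidates: list of dicts with "name", "accuracy", "min_impact_ratio"."""
    viable = [c for c in candidates if c["accuracy"] >= min_accuracy]
    if not viable:
        raise ValueError("no candidate meets the validity floor")
    return max(viable, key=lambda c: c["min_impact_ratio"])

best = pick_lda([
    {"name": "baseline",    "accuracy": 0.86, "min_impact_ratio": 0.71},
    {"name": "reweighted",  "accuracy": 0.84, "min_impact_ratio": 0.88},
    {"name": "threshold_b", "accuracy": 0.78, "min_impact_ratio": 0.95},
])
print(best["name"])  # "reweighted": comparable validity, less disparate impact
```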
No deployer of an automated decision system shall use an automated decision system in a manner that results in algorithmic discrimination.
(e) Except as otherwise provided for in this chapter: (1) A deployer, or a third party contracted by the deployer, that deploys an automated decision system shall complete an impact assessment for the automated decision system; and (2) A deployer, or a third party contracted by the deployer, shall complete an impact assessment for a deployed automated decision system at least annually and within 90 days after any intentional and substantial modification to the automated decision system is made available. (f) An impact assessment completed pursuant to subsection (e) of this Code section shall include, at a minimum, and to the extent reasonably known by or available to the deployer: (1) A statement by the deployer disclosing the purpose, intended use cases, and deployment context of, and benefits afforded by, the automated decision system; (2) An analysis of whether the deployment of the automated decision system poses any known or reasonably foreseeable risks of: (A) Algorithmic discrimination and, if so, the nature of the algorithmic discrimination and the steps that have been taken to mitigate the risks; (B) Limits on accessibility for individuals who are pregnant, breastfeeding, or disabled, and, if so, what reasonable accommodations the deployer may provide that would mitigate any such limitations on accessibility; (C) Any violation of state or federal labor laws, including laws pertaining to wages, occupational health and safety, and the right to organize; or (D) Any physical or other intrusion upon the solitude or seclusion, or the private affairs or concerns, of consumers if such intrusion: (i) Would be offensive to a reasonable person; and (ii) May be redressed under the laws of this state; (3) A description of the categories of data the automated decision system processes as inputs and the outputs the automated decision system produces; (4) If the deployer used data to customize the automated decision system, an overview of the categories of data the deployer used to customize the automated decision system; (5) An analysis of the automated decision system's validity and reliability in accordance with contemporary social science standards, and a description of any metrics used to evaluate the performance and known limitations of the automated decision system; (6) A description of any transparency measures taken concerning the automated decision system, including any measures taken to disclose to a consumer that the automated decision system is in use when the automated decision system is in use; (7) A description of the post-deployment monitoring and user safeguards provided concerning the automated decision system, including the oversight, use, and learning process established by the deployer to address issues arising from the deployment of the automated decision system; and (8) When such impact assessment is completed following an intentional and substantial modification to an automated decision system, a statement disclosing the extent to which the automated decision system was used in a manner that was consistent with, or varied from, the developer's intended uses of the automated decision system. (g) If the analysis required by paragraph (2) of subsection (f) of this Code section reveals a risk of algorithmic discrimination, the deployer shall not deploy the automated decision system until the developer or deployer takes reasonable steps to search for and implement less discriminatory alternative decision methods. 
(h) A single impact assessment may address a comparable set of automated decision systems deployed by a deployer. (i) If a deployer, or a third party contracted by the deployer, completes an impact assessment for the purpose of complying with another applicable law or regulation, the impact assessment shall satisfy the requirements established in this Code section if the impact assessment is reasonably similar in scope and effect to the impact assessment that would otherwise be completed pursuant to this Code section. (j) A deployer shall maintain the most recently completed impact assessment for an automated decision system, all records concerning each impact assessment, and all prior impact assessments, if any, throughout the period of time that the automated decision system is deployed and for at least three years following the final deployment of the automated decision system.
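For deployers operationalizing the contents enumerated in subsection (f) and the retention rule in subsection (j), a minimal record structure may help. The sketch below is ours: field names are invented shorthand keyed to the statutory paragraphs in comments, not statutory terms.

```python
# Minimal record structure mirroring subsection (f)'s enumerated contents and
# subsection (j)'s three-year retention period. Field names are our own
# shorthand, not statutory terms.
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class ImpactAssessment:
    purpose_uses_benefits: str            # (f)(1)
    discrimination_risk_analysis: str     # (f)(2)(A)
    accessibility_risk_analysis: str      # (f)(2)(B)
    labor_law_risk_analysis: str          # (f)(2)(C)
    privacy_intrusion_analysis: str       # (f)(2)(D)
    input_categories: list                # (f)(3)
    output_description: str               # (f)(3)
    customization_data_categories: list   # (f)(4)
    validity_and_reliability: str         # (f)(5)
    transparency_measures: str            # (f)(6)
    monitoring_and_safeguards: str        # (f)(7)
    variance_from_intended_use: str = ""  # (f)(8), post-modification only
    completed_on: date = field(default_factory=date.today)

def retain_until(final_deployment: date) -> date:
    """Earliest discard date under (j), assuming 365-day years."""
    return final_deployment + timedelta(days=3 * 365)
```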
At least annually a deployer, or a third party contracted by the deployer, shall review the deployment of each automated decision system deployed by the deployer to ensure that the automated decision system is not causing algorithmic discrimination.
Deployers shall publish on their public websites all impact assessments completed within the preceding three years in a form and manner prescribed by the Attorney General.
(a) An employer seeking to use or apply an automated decision-making system permitted under Section 10 shall conduct an initial impact assessment 30 days prior to implementation of the automated decision-making system, bearing the signature of: (1) one or more individuals responsible for meaningful human review of the system; and (2) an independent auditor. A person shall not be an independent auditor under this subsection if, at any point in the 5 years preceding the impact assessment, that person: (i) was involved in using, developing, offering, licensing, or deploying the automated decision-making system under review; (ii) had an employment relationship with a developer or deployer that uses, offers, or licenses the automated decision-making system under review; or (iii) had a direct or material indirect financial interest in a developer or deployer that uses, offers, or licenses the automated decision-making system under review. (b) Following the initial impact assessment, additional impact assessments shall be conducted at least once every 2 years and prior to any material changes to the automated decision-making system. Each impact assessment shall include, in plain language: (1) a description of the objectives of the automated decision-making system; (2) an evaluation of the system's ability to achieve those objectives; (3) a description and evaluation of the algorithms, computational models, and artificial intelligence tools used, including: (A) a summary of underlying algorithms and artificial intelligence tools; and (B) a description of the design and training to be used; (4) testing for: (A) disparate impact or discrimination based on protected characteristics, including, but not limited to, discrimination against persons based on their race, color, religious creed, national origin, sex, disability or perceived disability, gender identity, sexual orientation, genetic information, pregnancy or a condition related to pregnancy, ancestry, or status as a veteran, and any actions to mitigate any impacts; (B) accessibility limitations for persons with disabilities; (C) privacy and job quality impacts, including wages, hours, and working conditions, and safeguards; (D) cybersecurity vulnerabilities and safeguards; (E) public health or safety risks; (F) foreseeable misuse and safeguards; and (G) use, storage, and control of sensitive or personal data; and (5) a notification mechanism for employees impacted by the use of the automated decision-making system.
(c) If an impact assessment finds that an automated decision-making system produces discriminatory, biased, or inaccurate outcomes or fails to meet or negatively impacts any of the measures described in subsection (b) of Section 10, the employer shall immediately cease any use or function of that system and of any information produced by it, and shall take all steps necessary to remedy the discriminatory, biased or inaccurate outcomes produced by the automated decision-making system.
(d) The employer shall notify affected employees and any exclusive bargaining representative of the results of each impact assessment, and provide a copy of the impact assessment upon request. (e) Each impact assessment shall be published on the employer's website, subject to the limitations set forth in Section 20.
(a) On or before January 1, 2027, and annually thereafter, a deployer of an automated decision tool shall perform an impact assessment for any automated decision tool the deployer uses that includes all of the following: (1) a statement of the purpose of the automated decision tool and its intended benefits, uses, and deployment contexts; (2) a description of the automated decision tool's outputs and how they are used to make, or be a controlling factor in making, a consequential decision; (3) a summary of the type of data collected from natural persons and processed by the automated decision tool when it is used to make, or be a controlling factor in making, a consequential decision; (4) an analysis of potential adverse impacts on the basis of sex, race, color, ethnicity, religion, age, national origin, limited English proficiency, disability, veteran status, or genetic information from the deployer's use of the automated decision tool; (5) a description of the safeguards implemented, or that will be implemented, by the deployer to address any reasonably foreseeable risks of algorithmic discrimination arising from the use of the automated decision tool known to the deployer at the time of the impact assessment; (6) a description of how the automated decision tool will be used by a natural person, or monitored when it is used, to make, or be a controlling factor in making, a consequential decision; and (7) a description of how the automated decision tool has been or will be evaluated for validity or relevance. (b) A deployer shall, in addition to the impact assessment required by subsection (a), perform, as soon as feasible, an impact assessment with respect to any significant update. (c) This Section does not apply to a deployer with fewer than 25 employees unless, as of the end of the prior calendar year, the deployer deployed an automated decision tool that impacted more than 999 people per year. Section 35. Impact assessment. (a) Within 60 days after completing an impact assessment required by this Act, a deployer shall provide the impact assessment to the Attorney General. (b) A deployer who knowingly violates this Section shall be liable for an administrative fine of not more than $10,000 per violation in an administrative enforcement action brought by the Attorney General. Each day on which an automated decision tool is used for which an impact assessment has not been submitted as required under this Section shall give rise to a distinct violation of this Section. (c) The Attorney General may share impact assessments with other State entities as appropriate.
(a) A deployer shall not use an automated decision tool that results in algorithmic discrimination. (b) On and after January 1, 2028, a person may bring a civil action against a deployer for violation of this Section. In an action brought under this subsection, the plaintiff shall have the burden of proof to demonstrate that the deployer's use of the automated decision tool resulted in algorithmic discrimination that caused actual harm to the person bringing the civil action. (c) In addition to any other remedy at law, a deployer that violates this Section shall be liable to a prevailing plaintiff for any of the following: (1) compensatory damages; (2) declaratory relief; and (3) reasonable attorney's fees and costs.
use an automated decision system output in making an employment related decision with respect to a covered individual unless: (A) the automated decision system used to generate the automated decision system output has had predeployment testing and validation with respect to: (i) the efficacy of the system; (ii) the compliance of the system with applicable employment discrimination laws, including Title VII of the Civil Rights Act of 1964 (42 U.S.C. 2000e et seq.), the Age Discrimination in Employment Act of 1967 (29 U.S.C. 621 et seq.), Title I of the Americans with Disabilities Act of 1990 (42 U.S.C. 12111 et seq.), Title II of the Genetic Information Nondiscrimination Act of 2008 (42 U.S.C. 2000ff et seq.), Section 6(d) of the Fair Labor Standards Act of 1938 (29 U.S.C. 206(d)), Sections 501 and 505 of the Rehabilitation Act of 1973 (29 U.S.C. 791 and 29 U.S.C. 793), and the Pregnant Workers Fairness Act (42 U.S.C. 2000gg); (iii) the lack of any potential discriminatory impact of the system, including discriminatory impact based on race, color, religion, sex (including pregnancy, sexual orientation, or gender identity), national origin, age, disability, or genetic information (including family medical history); and (iv) the compliance of the system with the Artificial Intelligence Risk Management Framework released by the National Institute of Standards and Technology on January 26, 2023, or a successor framework;
(B) the automated decision system is, not less than annually, independently tested for discriminatory impact described in clause (A)(iii) or potential biases and the results of the test are made publicly available;
(a) Duty of Care: Developers must use reasonable care to identify, mitigate, and disclose risks of algorithmic discrimination.
(b) Impact Assessments: (1) Deployers must complete an annual impact assessment for each high-risk AI system, including: (i) The purpose and intended use of the system; (ii) Data categories used and outputs generated; (iii) Potential risks of discrimination and mitigation measures. (2) Impact assessments must be updated after any substantial modification to the system. State-provided templates for these assessments will be made available to reduce compliance burdens.
(a) Not later than 6 months after the effective date of this act, a developer of a high-risk artificial intelligence system shall use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination arising from the intended and contracted uses of the high-risk artificial intelligence system. In any enforcement action brought not later than 6 months after the effective date of this act, by the attorney general pursuant to section 6, there is a rebuttable presumption that a developer used reasonable care as required under this section if the developer complied with this section and any additional requirements or obligations as set forth in rules promulgated by the attorney general pursuant to section 7.
(a) Not later than 6 months after the effective date of this act, a deployer of a high-risk artificial intelligence system shall use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination. In any enforcement action brought not later than 6 months after the effective date of this act, by the attorney general pursuant to section 6, there is a rebuttable presumption that a deployer of a high-risk artificial intelligence system used reasonable care as required under this section if the deployer complied with this section and any additional requirements or obligations as set forth in rules promulgated by the attorney general pursuant to section 7.
(c) (1) except as provided in subsections (c)(4), (c)(5), and (f) of this section: (i) a deployer, or a third party contracted by the deployer, that deploys a high-risk artificial intelligence system not later than 6 months after the effective date of this act, shall complete an impact assessment for the high-risk artificial intelligence system; and (ii) Not later than 6 months after the effective date of this act, a deployer, or a third party contracted by the deployer, shall complete an impact assessment for a deployed high-risk artificial intelligence system at least annually and within ninety days after any intentional and substantial modification to the high-risk artificial intelligence system is made available. (2) an impact assessment completed pursuant to this subsection (c) must include, at a minimum, and to the extent reasonably known by or available to the deployer: (i) a statement by the deployer disclosing the purpose, intended use cases, and deployment context of, and benefits afforded by, the high-risk artificial intelligence system; (ii) an analysis of whether the deployment of the high-risk artificial intelligence system poses any known or reasonably foreseeable risks of algorithmic discrimination and, if so, the nature of the algorithmic discrimination and the steps that have been taken to mitigate the risks; (iii) a description of the categories of data the high-risk artificial intelligence system processes as inputs and the outputs the high-risk artificial intelligence system produces; (iv) if the deployer used data to customize the high-risk artificial intelligence system, an overview of the categories of data the deployer used to customize the high-risk artificial intelligence system; (v) any metrics used to evaluate the performance and known limitations of the high-risk artificial intelligence system; (vi) a description of any transparency measures taken concerning the high-risk artificial intelligence system, including any measures taken to disclose to a consumer that the high-risk artificial intelligence system is in use when the high-risk artificial intelligence system is in use; and (vii) a description of the post-deployment monitoring and user safeguards provided concerning the high-risk artificial intelligence system, including the oversight, use, and learning process established by the deployer to address issues arising from the deployment of the high-risk artificial intelligence system. (3) in addition to the information required under subsection (c)(2) of this section, an impact assessment completed pursuant to this subsection (c) following an intentional and substantial modification to a high-risk artificial intelligence system not later than 6 months after the effective date of this act, must include a statement disclosing the extent to which the high-risk artificial intelligence system was used in a manner that was consistent with, or varied from, the developer's intended uses of the high-risk artificial intelligence system. (4) a single impact assessment may address a comparable set of high-risk artificial intelligence systems deployed by a deployer. (5) if a deployer, or a third party contracted by the deployer, completes an impact assessment for the purpose of complying with another applicable law or regulation, the impact assessment satisfies the requirements established in this subsection (c) if the impact assessment is reasonably similar in scope and effect to the impact assessment that would otherwise be completed pursuant to this subsection (c).
(6) a deployer shall maintain the most recently completed impact assessment for a high-risk artificial intelligence system as required under this subsection (c), all records concerning each impact assessment, and all prior impact assessments, if any, for at least three years following the final deployment of the high-risk artificial intelligence system. (7) Not later than 6 months after the effective date of this act, and at least annually thereafter, a deployer, or a third party contracted by the deployer, must review the deployment of each high-risk artificial intelligence system deployed by the deployer to ensure that the high-risk artificial intelligence system is not causing algorithmic discrimination.
(E) The use of the artificial intelligence, algorithm, or other software tool does not discriminate, directly or indirectly, against any insured in violation of state or federal law, including but not limited to chapter 151B. (F) The artificial intelligence, algorithm, or other software tool is fairly and equitably applied, including in accordance with any applicable regulations and guidance issued by state and federal agencies.
(j) It shall be unlawful for an employer to use electronic monitoring, alone or in conjunction with an automated employment decision system, unless the employer's proposed use of electronic monitoring has been the subject of an impact assessment. Such impact assessments must: (i) be conducted no more than one year prior to the use of such electronic monitoring, or where the electronic monitoring began before the effective date of this article, within six months of the effective date of this article; (ii) be conducted by an independent and impartial party with no financial or legal conflicts of interest; (iii) evaluate whether the data protection and security practices surrounding the electronic monitoring are consistent with applicable law and cybersecurity industry best practices; (iv) identify which allowable purpose(s) described in this chapter the electronic monitoring serves; (vi) consider and describe any other ways in which the electronic monitoring could result in a violation of applicable law and, for any finding that a violation of law may occur, any necessary or appropriate steps to prevent such violation of law; and (vii) consider and describe whether the electronic monitoring may negatively impact employees' privacy and job quality, including wages, hours, and working conditions.
(a) It shall be unlawful for an employer to use an automated employment decision tool for an employment decision, alone or in conjunction with electronic monitoring, unless such tool has been the subject of an impact assessment. Impact assessments must: (i) be conducted no more than one year prior to the use of such tool, or where the tool was in use by the employer before the effective date of this article, within six months of the effective date of this article; (ii) be conducted by an independent and impartial party with no financial or legal conflicts of interest; (iii) identify and describe the attributes and modeling techniques that the tool uses to produce outputs; (iv) evaluate whether those attributes and techniques are a scientifically valid means of evaluating an employee's or candidate's performance or ability to perform the essential functions of a role, and whether those attributes may function as a proxy for belonging to a protected class under chapter 151B or any other applicable law; (v) consider, identify, and describe any disparities in the data used to train or develop the tool and describe how those disparities may result in a disparate impact on persons based on their race, color, religious creed, national origin, sex, gender identity, sexual orientation, genetic information, pregnancy or a condition related to said pregnancy including, but not limited to, lactation or the need to express breast milk for a nursing child, ancestry or status as a veteran, and what actions may be taken by the employer or vendor of the tool to reduce or remedy any disparate impact; (vi) consider, identify, and describe any outputs produced by the tool that may result in a disparate impact on persons based on their race, color, religious creed, national origin, sex, gender identity, sexual orientation, genetic information, pregnancy or a condition related to said pregnancy including, but not limited to, lactation or the need to express breast milk for a nursing child, ancestry or status as a veteran, and what actions may be taken by the employer or vendor of the tool to reduce or remedy that disparate impact; (vii) evaluate whether the use of the tool may limit accessibility for persons with disabilities, or for persons with any specific disability, and what actions may be taken by the employer or vendor of the tool to reduce or remedy the concern; (viii) consider and describe potential sources of adverse impact against individuals or groups based on race, color, religious creed, national origin, sex, gender identity, sexual orientation, genetic information, pregnancy or a condition related to said pregnancy including, but not limited to, lactation or the need to express breast milk for a nursing child, ancestry or status as a veteran that may arise after the tool is deployed; (ix) identify and describe any other assessment of risks of discrimination or a disparate impact of the tool on individuals or groups based on race, color, religious creed, national origin, sex, gender identity, sexual orientation, genetic information, pregnancy or a condition related to said pregnancy including, but not limited to, lactation or the need to express breast milk for a nursing child, ancestry or status as a veteran that arise over the course of the impact assessment, and what actions may be taken to reduce or remedy that risk; (x) for any finding of a disparate impact or limit on accessibility, evaluate whether the data set, attribute, or feature of the tool at issue is the least discriminatory method of assessing a candidate's performance or ability to perform job functions; (xi) consider and describe any other ways in which the tool could result in a violation of applicable law and, for any finding that a violation of law may occur, any necessary or appropriate steps to prevent such violation of law; (xii) consider and describe whether use of the tool may negatively impact employees' privacy and job quality, including wages, hours, and working conditions; and (xiii) be submitted in its entirety or in an accessible summary form to the department for inclusion in a public registry of such impact assessments within sixty days of completion and distributed to employees who may be subject to the tool.
(b) An employer shall conduct or commission subsequent impact assessments each year that the tool is in use to assist or replace employment decisions. Subsequent impact assessments shall comply with the requirements of paragraph (a) of this section, and shall assess and describe any change in the validity or disparate impact of the tool.
(e) If an initial or subsequent impact assessment concludes that a data set, feature, or application of the automated employment decision tool results in a disparate impact on individuals or groups based on race, color, religious creed, national origin, sex, gender identity, sexual orientation, genetic information, pregnancy or a condition related to said pregnancy including, but not limited to, lactation or the need to express breast milk for a nursing child, ancestry or status as a veteran, or unlawfully limits accessibility for persons with disabilities, an employer shall refrain from using the tool until it: (i) takes reasonable and appropriate steps to remedy that disparate impact or limit on accessibility and describes in writing to employees, the auditor, and the department what steps were taken; and (ii) if the employer believes the impact assessment finding of a disparate impact or limit on accessibility is erroneous, or that the steps taken in accordance with subparagraph (i) of this paragraph sufficiently address those findings such that the tool may be lawfully used in accordance with this article, describes in writing to employees, the auditor, and the department how the data set, feature, or application of the tool is the least discriminatory method of assessing an employee's performance or ability to complete essential functions of a position.
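The assessment and remediation duties above turn on detecting a disparate impact in a tool's outputs. As a purely editorial illustration, the customary first screen compares selection rates by group under the four-fifths guideline of the Uniform Guidelines on Employee Selection Procedures; none of the excerpted provisions prescribe this computation, and the group labels, counts, and flagging logic below are hypothetical assumptions.

```python
# Illustrative adverse-impact screen in the spirit of the four-fifths
# guideline; all data below is hypothetical.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of a group's applicants who were selected."""
    return selected / applicants if applicants else 0.0

def impact_ratios(groups: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Each group's selection rate divided by the highest group's rate."""
    rates = {g: selection_rate(s, n) for g, (s, n) in groups.items()}
    top = max(rates.values())
    return {g: (r / top if top else 0.0) for g, r in rates.items()}

# Hypothetical outcomes of an automated screen: (selected, applicants).
outcomes = {"group_a": (48, 100), "group_b": (30, 100)}

for group, ratio in impact_ratios(outcomes).items():
    flag = "below four-fifths; investigate" if ratio < 0.8 else "within guideline"
    print(f"{group}: impact ratio {ratio:.2f} -> {flag}")
```

An impact ratio below 0.8 is only a screening signal, not a legal conclusion; the provisions above still require the employer to remediate, document, and justify the tool.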
(E) The use of the artificial intelligence, algorithm, or other software tool does not discriminate, directly or indirectly, against any insured in violation of state or federal law, including but not limited to chapter 151B. (F) The artificial intelligence, algorithm, or other software tool is fairly and equitably applied, including in accordance with any applicable regulations and guidance issued by state and federal agencies.
(5) the use of an artificial intelligence, algorithm, or other software tool does not result in unfair discrimination; (6) an artificial intelligence, algorithm, or other software tool is fairly and equitably applied, including in accordance with any applicable regulations and guidance issued by the federal Department of Health and Human Services;
Determinations derived from the use of artificial intelligence, including algorithms and other software tools, must: (2) Not directly or indirectly discriminate against an enrollee on the basis of race, color, religion, national origin, ancestry, age, sex, gender, gender identity, gender expression, sexual orientation, present or predicted disability, expected length of life, degree of medical dependency, quality of life or other health conditions; (3) Be fairly and equitably applied;
Sec. 9. (1) Before an employer uses an automated decisions tool under section 4 or an electronic monitoring tool under section 5, the employer shall conduct an impact assessment of the tool that meets all of the following requirements: (a) Evaluates the tool's objectives, algorithms, data, cybersecurity vulnerabilities, and potential biases, including, but not limited to, discriminatory outcomes based on race, gender, or disability. (b) Is conducted not more than 1 year before the tool is implemented, or, for a tool already in use on the effective date of this act, not more than 6 months after the effective date of this act. (c) Is conducted by an independent and impartial third party with no financial or legal conflicts of interest related to the use of the tool. (d) Identifies and describes the attributes and modeling techniques that the tool uses to produce outputs. (e) Evaluates whether the attributes and modeling techniques described in subdivision (d) are a scientifically valid means of evaluating a covered individual's performance or ability to perform the essential functions of a role, and whether those attributes may function as a proxy for belonging to a protected class under the Elliot-Larsen civil rights act, 1976 PA 453, MCL 37.2101 to 37.2804. (f) Considers, identifies, and describes both of the following that may result in a disparate impact on a covered individual based on the covered individual's qualified characteristic, and what actions may be taken by the employer to reduce or remedy any disparate impact. (i) Any disparities in the data used to train or develop the tool. (ii) Any outputs produced by the tool. (g) Evaluates whether the use of the tool may limit accessibility for covered individuals with disabilities, or for covered individuals with any specific disability, and what actions may be taken by the employer to reduce or remedy the limit on accessibility. (h) Considers and describes potential sources of adverse impact against covered individuals or groups based on a qualified characteristic that may arise after the tool is implemented. (i) Identifies and describes any other assessment of risks of discrimination or a disparate impact of the tool on covered individuals or groups based on a qualified characteristic, and what actions may be taken to reduce or remedy that risk. (j) For any finding of a disparate impact or limit on accessibility, evaluates whether the data set, attribute, or feature of the tool at issue is the least discriminatory method of assessing a covered individual's performance or ability to perform job functions. (k) Considers and describes any other ways in which the tool could result in a violation of applicable law and, for any finding that a violation of law may occur, any necessary or appropriate steps to prevent a violation. (l) Considers and describes whether use of the tool may negatively affect a covered individual's privacy or job quality, including wages, hours, and working conditions. (2) Not more than 60 days after an employer completes an assessment, the employer shall do both of the following: (a) Submit the assessment in its entirety or in an accessible summary form to the department for the department to include in a public registry of impact assessments. (b) Distribute the assessment to covered individuals who may be subject to the tool. (3) An employer shall conduct or commission subsequent impact assessments each year in which the electronic monitoring tool or automated decisions tool is in use.
Subsequent impact assessments must comply with the requirements of subsection (1), as applicable, and must assess and describe any change in the validity or disparate impact of the tool.
(b) It is an unfair employment practice, with respect to recruitment, hiring, promotion, renewal of employment, selection for training or apprenticeship, discharge, discipline, tenure, or the terms, privileges, or conditions of employment, for an employer to: (1) use artificial intelligence that has the effect of subjecting an employee or applicant for employment to discrimination because of race, color, creed, religion, national origin, sex, gender identity, marital status, status with regard to public assistance, familial status, membership or activity in a local commission, disability, sexual orientation, or age;
(e) the use of the artificial intelligence, algorithm, or other software tool does not discriminate, directly or indirectly, against enrollees in violation of state or federal law, including 49-2-309; (f) the artificial intelligence, algorithm, or other software tool is fairly and equitably applied, including in accordance with any applicable regulations and guidance issued by the federal department of health and human services;
(a) Practices and policies that have a disparate impact, as defined at (b) below, on members of a protected class, even if these practices and policies are not discriminatory on their face (that is, facially neutral) and are not motivated by discriminatory intent, will be considered discriminatory and a violation of the Act, unless it is shown that such practices and policies are necessary to achieve a substantial, legitimate, nondiscriminatory interest and there is no less discriminatory alternative that would achieve the same interest. (b) A practice or policy has a disparate impact where it actually or predictably results in a disproportionately negative effect on members of a protected class. A practice or policy predictably can have a disparate impact when there is evidence that the practice or policy will have a disparate impact even though the practice or policy has not yet been implemented, if the practice or policy has been approved, announced, or otherwise finalized. However, a practice or policy that is simply being debated or deliberated internally by a covered entity cannot be challenged pursuant to this chapter before it is implemented, approved, announced, or otherwise finalized.
(c) Automated employment decision technology practices are as follows: 1. The use of automated employment decision tools to make employment decisions, including, but not limited to, decisions related to advertising, recruiting, screening, interviewing, hiring, and compensation, or any other terms, conditions, or privileges of employment, may have a disparate impact on applicants and employees based on their race, national origin, gender, disability, religion, and other protected characteristics. By way of example, but not limitation, an automated employment decision tool that uses data on a company's current employees to inform a search for candidates may have a disparate impact on members of protected classes that are not well represented in that company or industry. If most current employees at a computer science company are white, cisgender men, an automated employment decision tool that assesses applicants based on that pool may score women applicants lower because their resumes list "women's field hockey" rather than "football," or score Black applicants lower because their resumes list "Black Student Alliance," an organization in which the company's current employees are less likely to have been involved; 2. The use of an automated employment decision tool that limits or screens out applicants based on their schedule may have a disparate impact on applicants based on their religion, disability, or medical condition and must include a mechanism for applicants to request a reasonable accommodation. By way of example, but not limitation, an application asking if an applicant is available to work a proposed schedule of Monday through Saturday may screen out applicants who answer the question in the negative due to religious practices they engage in on Saturdays; and 3. An employer's use of an automated employment decision tool that has not been adequately tested and shown to not adversely affect people in a protected class before its use may have a disparate impact on members of that protected class. By way of example, but not limitation, an employer's use of facial analysis technology to detect personality traits during virtual interviews is likely to result in lower scores for interviewees whose facial expressions the tools have not been tested on and designed to read. If the technology was tested exclusively or predominantly on white people with no disabilities, then use of the technology may disproportionately impact interviewees with darker skin or interviewees with disabilities because the technology cannot match their facial expressions to those programmed into the tool and may not account for interviewees who cannot make certain facial expressions. i. The use of facial analysis technology may disproportionately impact interviewees wearing religious headwear or maintaining religiously mandated facial hair if the technology has not been tested on people with similar religious practices.
(e) If a respondent's practice or policy that results in a disparate impact based on a protected characteristic relies on conduct, standards, products, procedures, or systems of an outside person or vendor, the respondent must take reasonable steps to ensure that the outside person or vendor's conduct, standards, products, procedures, or systems are consistent with the Act and this chapter.
(a) A complainant challenging a practice or policy of a covered entity must show the practice or policy challenged has a disparate impact on members of a protected class. (b) In the employment, public accommodations, and contracting contexts, if the complainant meets the burden of proof at (a) above, the respondent has the burden of showing that the challenged practice or policy is necessary to achieve a substantial, legitimate, nondiscriminatory interest. In the employment context, whether a practice or policy is necessary to achieve a substantial, legitimate, nondiscriminatory interest is equivalent to whether the practice or policy is job related and consistent with a legitimate business necessity. A practice or policy is job related when it bears a demonstrable relationship to successful performance of the job and measures the person's fitness for the specific job. (c) In the employment, public accommodations, and contracting contexts, if the respondent meets the burden at (b) above, the complainant has the burden of showing that there is a less discriminatory alternative means of achieving the substantial, legitimate, nondiscriminatory interest. (d) To meet its burden of proof at (a), (b), or (c) above, a party must provide empirical evidence, meaning evidence that is not hypothetical or speculative, to support its allegations. For example, a complainant would not meet its burden to show an employment policy has a disparate impact on job applicants based on gender by speculating that the policy harms women more than men, but could meet its burden by providing empirical evidence, which could include applicant files or data or applicant selection rates by gender. Anecdotal evidence, while not sufficient on its own, may be introduced along with empirical evidence. For example, a complainant would not meet its burden to show an employment policy has a disparate impact on job applicants based on gender solely by attesting that they know women who applied and did not receive a position while men who applied did. However, a complainant could introduce anecdotal evidence along with empirical evidence, such as applicant selection rates by gender. (e) The opposing party may rebut whether the party with the burden of proof at (a), (b), or (c) above has met its burden. (f) Additional proof may be required when challenging or defending particular practices or policies. Such requirements are noted in this chapter, where relevant.
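To make the empirical-evidence standard above concrete: selection rates by gender of the kind the rule cites are often summarized with a simple significance test. The following is an editorial sketch on hypothetical numbers using a standard two-proportion z-test; the rule itself prescribes no particular statistical method.

```python
# One way a party might summarize selection-rate evidence statistically:
# a standard two-proportion z-test on hypothetical applicant data.
from math import sqrt

def two_proportion_z(sel_a: int, n_a: int, sel_b: int, n_b: int) -> float:
    """z-statistic for the difference between two selection rates (pooled)."""
    p_a, p_b = sel_a / n_a, sel_b / n_b
    pooled = (sel_a + sel_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical: 60 of 200 women selected versus 90 of 200 men.
z = two_proportion_z(60, 200, 90, 200)
print(f"selection rates: women 30%, men 45%; z = {z:.2f}")
# |z| >= 1.96 corresponds to significance at the conventional 5% level.
```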
(b) To establish that a challenged practice or policy is necessary to achieve a substantial, legitimate, nondiscriminatory interest, a respondent must establish that: 1. The practice or policy is necessary to achieve one or more substantial, legitimate, nondiscriminatory interests, where "substantial interest" means a core interest of the entity that has a direct relationship to the function of that entity, "legitimate" means that a justification is genuine and not false or pretextual, and "nondiscriminatory" means that the justification for a challenged practice or policy does not itself discriminate based on a protected characteristic; and 2. The practice or policy effectively carries out the identified interest. (c) The determination of whether an interest is substantial, legitimate, and nondiscriminatory requires a case-specific, fact-based inquiry. An interest in achieving diversity or increasing access for underrepresented or underserved members of a protected class may constitute a substantial, legitimate, nondiscriminatory interest.
(a) Employment practices and policies may be unlawful if they have a disparate impact on members of a protected class. An employment practice or policy that has a disparate impact is prohibited unless, in accordance with N.J.A.C. 13:16-2.2, a respondent shows it is necessary to achieve a substantial, legitimate, nondiscriminatory interest. Whether an employment practice or policy is necessary to achieve a substantial, legitimate, nondiscriminatory interest is equivalent to whether the practice or policy is job related and consistent with a legitimate business necessity. An employment practice or policy may still be prohibited, even if necessary to achieve a substantial, legitimate, nondiscriminatory interest, if a complainant shows there is a less discriminatory alternative that would achieve the same interest. (b) Nothing in this subchapter shall preclude affirmative efforts to utilize recruitment practices to attract an individual who is an underrepresented or underserved member of a protected class covered by the Act. (c) This subchapter applies to the practices and policies of employers, labor organizations, employment agencies, and other covered entities.
(1)(a) On and after February 1, 2026, a developer of a high-risk artificial intelligence system shall use reasonable care to protect consumers from any known risks of algorithmic discrimination arising from the intended and contracted uses of the high-risk artificial intelligence system. (b) In any enforcement action brought on or after February 1, 2026, by the Attorney General pursuant to section 7 of this act, there is a rebuttable presumption that a developer used reasonable care as required under this section if the developer complied with this section.
(1)(a) On and after February 1, 2026, a deployer of any high-risk artificial intelligence system shall use reasonable care to protect consumers from each known risk of algorithmic discrimination. (b) In any enforcement action brought on or after February 1, 2026, by the Attorney General pursuant to section 7 of this act, there is a rebuttable presumption that a deployer of a high-risk artificial intelligence system used reasonable care as required under this section if the deployer complied with this section.
(3)(a) Except as otherwise provided in this subsection or subsection (6) of this section: (i) An impact assessment shall be completed for each high-risk artificial intelligence system deployed on or after February 1, 2026. Such impact assessment shall be completed by the deployer or by a third party contracted by the deployer; and (ii) On and after February 1, 2026, for each deployed high-risk artificial intelligence system, a deployer or a third party contracted by the deployer shall complete an impact assessment within ninety days after any intentional and substantial modification to such high-risk artificial intelligence system is made available. (b) An impact assessment completed pursuant to this subsection shall include to the extent reasonably known by or available to the deployer: (i) A statement by the deployer disclosing: (A) The purpose of the high-risk artificial intelligence system; (B) Any intended-use case for the high-risk artificial intelligence system; (C) The deployment context of the high-risk artificial intelligence system; and (D) Any benefit afforded by the high-risk artificial intelligence system; (ii) An analysis of whether the deployment of the high-risk artificial intelligence system poses any known risk of algorithmic discrimination and, if so, the nature of the algorithmic discrimination and the steps that have been taken to mitigate any such risk; (iii) A high-level summary of the categories of data the high-risk artificial intelligence system processes as inputs and the outputs the high-risk artificial intelligence system produces; (iv) If the deployer used data to customize the high-risk artificial intelligence system, an overview of the categories of data the deployer used to customize the high-risk artificial intelligence system; (v) Any metric used to evaluate the performance and any known limitation of the high-risk artificial intelligence system; (vi) A description of any transparency measure taken concerning the high-risk artificial intelligence system, including any measure taken to disclose to a consumer when the high-risk artificial intelligence system is in use; and (vii) A description of each postdeployment monitoring and user safeguard provided concerning the high-risk artificial intelligence system, including the oversight, use, and learning process established by the deployer to address any issue that arises from the deployment of the high-risk artificial intelligence system. (c) Any impact assessment completed pursuant to this subsection following an intentional and substantial modification to a high-risk artificial intelligence system on or after February 1, 2026, shall include a statement that discloses the extent to which the high-risk artificial intelligence system was used in a manner that was consistent with or varied from any use of the high-risk artificial intelligence system intended by the developer. (d) A single impact assessment may address a comparable set of high-risk artificial intelligence systems deployed by a deployer. (e) Any impact assessment completed to comply with another applicable law or regulation by a deployer or by a third party contracted by the deployer shall satisfy this subsection if such impact assessment is reasonably similar in scope and effect to the impact assessment that would otherwise be completed pursuant to this subsection. 
(f) A deployer shall maintain: (i) The most recently completed impact assessment required under this subsection for each high-risk artificial intelligence system of the deployer; (ii) Each record concerning each such impact assessment; and (iii) For at least three years following the final deployment of each high-risk artificial intelligence system, each prior impact assessment, if any, and each record concerning such impact assessment.
No employer or public entity, vendor, or contractor acting on behalf of the employer or public entity shall: j. Use, deploy, develop, produce, sell, or offer for sale, an EMT or other surveillance of an employee, service beneficiary, or applicant for employment, or use, deploy, develop, produce, sell, or offer for sale, an AEDS or ABSDS, to obtain, infer, analyze, or use in making a hiring decision or other employment-related decision or decision regarding public benefits or services, any data or information about the employee's, service beneficiary's, or applicant for employment's being in or perceived to be in a classification, or having or being perceived to have a characteristic, protected under section 11 of P.L.1945, c.169 (C.10:5-12), or information about present or past union membership or advocacy or any other classification or characteristic, other than unlawful behavior, of the employee or applicant for employment which is not directly related to work performance or work qualifications, or of any other classification or characteristic of a service beneficiary which is not specifically required to confirm the identity of the beneficiary or determine eligibility for public benefits or services. An employer or public entity may not, in providing employee, applicant, or service beneficiary data or information for the AEDS or ABSDS or in directly making employment-related decisions or decisions about public benefits or services, use data or information about employee, applicant, or beneficiary classification or characteristics as identified in this subsection. It shall not be a violation of this subsection for an ABSDS to retain and use information essential to providing specific public services, such as student academic records in educational services and individual health information in health services, and information specifically required to determine eligibility for the public benefits or services;
An employer or public entity, or vendor acting on behalf of an employer or public entity shall not implement the use of an AEDS or an EMT or other surveillance of employees, or use an AEDS or information obtained through the EMT when making employment-related decisions regarding employees or applicants for employment, unless all of the following conditions are met: a. The EMT or other surveillance, and the AEDS, are primarily intended and demonstrably verified through appropriate pretesting, validation, and relevant impact assessments conducted pursuant to this section to accomplish any of the following allowable purposes: (1) assisting an employee to accomplish essential work functions; (2) ensuring the quality of goods and services; (3) making periodic assessments of employee performance, including to assist in making employment-related decisions; (4) ensuring compliance with provisions of employment, labor, or other relevant laws; (5) protecting the health, safety, or security of employees and the public; or (6) administering wages and benefits. b. The EMT and surveillance and the AEDS shall: (1) be limited to what is necessary to accomplish the allowable purposes specified in subsection a. of this section; (2) be used exclusively to accomplish those purposes; (3) use the means least invasive to employees or applicants for employment needed to accomplish those purposes; (4) be limited to the smallest number of employees and least amount of data and information needed to accomplish those purposes, and (5) have data and information collected no more frequently than is necessary to accomplish those purposes. c. The data and information about an employee or applicant collected by an EMT or other surveillance or used by the AEDS shall be accessed only by authorized agents of the employer, the public entity, or the employee or the employee's authorized representative. d. Prior to deployment or implementation, an objective and impartial impact assessment of the AEDS or EMT, including an assessment of the economic impacts of factors such as wages, hours, benefits, work opportunities, and advancement, has been conducted by an independent auditor, or, if the AEDS or EMT is to be applied to public employees, by the department, in which the auditor or the department determines and affirms in a report, with supporting documentation indicating: (1) that the EMT requires the implementation of procedures to ensure that it is used in a manner that complies with the requirements of subsections c., d., e., f., and g. of section 2 of this act; (2) that the AEDS or EMT complies with the requirements of subsections a., b., h., i., k., and l. of section 2 of this act and subsections a. and b. of this section, including the implementation of effective procedures to remedy potential risks to worker rights, including privacy, health and safety, dignity and autonomy, and to prevent inhibiting legally protected activity, including organizing and collective bargaining. (3) that the AEDS or EMT complies with the requirements of subsection j.
of section 2 of this act, including that the auditor or the department, with respect to classifications and characteristics identified in that subsection of employees or applicants for employment, considers, identifies, and describes any disparities in the data used to train or develop the AEDS that may result in the outputs of the AEDS having a disparate, adverse impact on employees or applicants, and that the auditor or the department determines that the AEDS includes provisions to effectively remedy any such disparate, adverse impact; and (4) that the AEDS or EMT requires the implementation of effective procedures for monitoring, feedback, and ongoing human oversight, including full compliance with the requirements of section 9 of this act, as needed to prevent or remedy any potential discriminatory, biased, inaccurate, or harmful outcomes. e. The vendor has provided the auditor or the department with access to all information needed to conduct the impact assessment of either an AEDS or an EMT, including, in the case of an AEDS: (1) all documentation about its design and development, its technical specifications, the sources of data used to develop and train it, the individuals involved in its development, and a historical record of past versions of the AEDS; (2) a detailed description of its intended purpose, deployment context, rationale for use, the categories, sources, and methods of data it utilizes; (3) outputs and the types of employment-related decisions in which those outputs may be used; (4) what the benefits and effects are of using the AEDS to supplement non-automated decision-making, and the impacts its use may have on overall efficiency and output for the public entity or employer that deploys it, including quantified estimates of: the amounts of cost savings for the employer or public entity; any anticipated reductions of employment by the employer or public entity; any offset to the employment reductions caused by new employment related to the human oversight requirements of section 9 of this act; and the percentage of the cost savings attributable to reductions of employment, and these estimates shall be featured prominently in the summary of the impact assessment submitted to the department pursuant to subsection g. of this section and section 4 of this act and included in the notices provided to employees or service beneficiaries pursuant to section 6 of this act; and (5) an analysis of the accuracy, reliability, validity, and error rates of the AEDS, including the reasonably foreseeable effects of tuning, retraining, or modification.
The impact assessment shall be conducted not more than one year prior to deployment. For an AEDS or EMT already in use on the effective date of this act, the impact assessment shall be completed within six months after the effective date. Impact assessments shall be updated upon any substantial change in the categories, sources, quotas, metrics, thresholds, or benchmarks used by the EMT or the AEDS, or any substantial modification, retraining, repurposing, or updating which may change outputs of an AEDS. Any subsequent impact assessment or update conducted pursuant to this subsection shall be subject, in the same manner as an initial impact assessment, to all of the requirements of subsections d., e., g., and h. of this section. Until those requirements are met, the AEDS or EMT shall not be permitted to operate.
A public entity, or vendor acting on behalf of a public entity, shall not implement the use of an ABSDS, or use the ABSDS when making decisions regarding provision of public benefits or services to service beneficiaries, unless all of the following conditions are met: a. An objective and impartial impact assessment of the ABSDS, including an assessment of its economic impacts of factors such as wages, hours, benefits, work opportunities, and advancement, has been conducted by the department, in which the department determines and affirms in a report, with supporting documentation indicating: (1) that the ABSDS complies with the requirements of subsections a., b., k. and l. of section 2 of this act, including by requiring the implementation of effective procedures to remedy potential risks to the rights of service beneficiaries, including privacy, health and safety, dignity and autonomy, and to prevent inhibiting legally protected activity; (2) that the ABSDS complies with the requirements of subsection j. of section 2 of this act, including that the department, with respect to classifications and characteristics identified in that subsection of service beneficiaries, considers, identifies, and describes any disparities in the data used to train or develop the ABSDS that may result in the outputs of the ABSDS having a disparate, adverse impact on service beneficiaries, and that the department determines that the ABSDS includes provisions to effectively remedy any such disparate, adverse impact; and (3) that the ABSDS requires the implementation of effective procedures for monitoring, feedback, and ongoing human oversight, including full compliance with the requirements of section 9 of this act, as needed to prevent or remedy any potential discriminatory, biased, inaccurate, or harmful outcomes, including incorrect denials of public benefits or services based on mistaken claims of fraud by beneficiaries. b. The vendor has provided the department with access to all information needed to conduct the impact assessment of an ABSDS, including: (1) all documentation about its design and development, its technical specifications, the sources of data used to develop and train it, the individuals involved in its development, and a historical record of past versions of the ABSDS; (2) a detailed description of its intended purpose, deployment context, rationale for use, the categories, sources, and methods of data it utilizes; (3) outputs and the types of employment-related decisions in which such outputs may be used; (4) what the benefits and effects are of using the ABSDS to supplement non-automated decision-making, and the impacts its use may have on overall efficiency and output for the public entity that deploys it, including quantified estimates of: the amounts of savings for the public entity; any anticipated reductions of employment by the employer or public entity; any offset to the employment reductions caused by new employment related to the human oversight requirements of section 9 of this act; and the percentage of cost savings attributable to reductions of employment, and these estimates shall be featured prominently in the summary of the impact assessment submitted to the department pursuant to subsection e.
of this section and section 4 of this act and included in the notices submitted to employees or service beneficiaries pursuant to section 6 of this act; and (5) an analysis of the accuracy, reliability, validity, and error rates of the ABSDS, including the reasonably foreseeable effects of tuning, retraining, or modification. c. The data and information used by the ABSDS shall be accessed only by authorized agents of the public entity or service beneficiary. d. The impact assessment shall be conducted not more than one year prior to deployment. For an ABSDS already in use on the effective date of this act, the impact assessment shall be completed within one year after the effective date. Impact assessments shall be updated upon any substantial change in the categories, sources, quotas, metrics, thresholds, or benchmarks used by the ABSDS, or any substantial modification, retraining, repurposing, or updating which may change outputs of an ABSDS. Any subsequent impact assessment or update conducted pursuant to this subsection shall be subject, in the same manner as an initial impact assessment, to all of the requirements of subsections a., b., and e. of this section. Until those requirements are met, the ABSDS shall not be permitted to operate. e. The report of the impact assessment shall include all of the information and data used in making its determinations, including the full data and information provided pursuant to subsections a. and b. of this section, and shall, within 60 days of its completion, be submitted in its entirety, together with an accessible summary of the report, to the department, for inclusion in a public registry of impact assessments maintained by the department, and to the vendor, who shall provide it to any public entity seeking to implement the ABSDS. Impact assessments in the public registry shall be made available to affected service recipients, entities, applicants for employment and their authorized representatives. f. The vendor shall pay the department the full amount of the direct costs of making the impact assessment of the ABSDS.
High-risk AI systems implemented in New Jersey shall: a. Undergo algorithmic impact assessments prior to deployment. The Office of Information Technology in, but not of, the Department of the Treasury, shall perform the impact assessments, in a manner to be determined by the Office of Information Technology.
a. The Office of the Attorney General shall investigate complaints related to AI-driven discrimination, unreasonable AI workplace surveillance, and claims of violations of civil rights protections related to AI. The Attorney General shall enforce penalties pursuant to the "Law Against Discrimination," P.L.1945, c.169 (C.10:5-1 et seq.), and the "New Jersey Civil Rights Act," P.L.2004, c.143 (C.10:6-1 et seq.) for violations of this section. b. As used in this section: "AI-driven discrimination" means output resulting from AI systems that exhibit biases against individuals based on age, race, religion, or other protected classes. "AI workplace surveillance" means the use of AI to monitor and analyze employee behavior and performance through the use of technology tools that track employee activities including computer usage and physical movements.
An employer that uses an artificial intelligence analysis of a video interview to determine whether an applicant will be selected for an in-person interview shall collect and report the following demographic data: (1) the race and ethnicity of applicants who are and are not afforded the opportunity for an in-person interview after the use of artificial intelligence analysis; and (2) the race and ethnicity of applicants who are offered a position or hired.
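The reporting obligation above reduces to a two-stage tally by race and ethnicity. A minimal editorial sketch follows; the record layout and field names are assumptions for illustration, not drawn from the statute.

```python
# Minimal sketch of the two-stage demographic tally the provision describes.
from collections import Counter

# Hypothetical applicant records; field names are editorial assumptions.
applicants = [
    {"race_ethnicity": "Black", "in_person_interview": True, "hired": False},
    {"race_ethnicity": "White", "in_person_interview": True, "hired": True},
    {"race_ethnicity": "Asian", "in_person_interview": False, "hired": False},
]

afforded = Counter(a["race_ethnicity"] for a in applicants if a["in_person_interview"])
not_afforded = Counter(a["race_ethnicity"] for a in applicants if not a["in_person_interview"])
hired = Counter(a["race_ethnicity"] for a in applicants if a["hired"])

print("afforded in-person interview:", dict(afforded))
print("not afforded in-person interview:", dict(not_afforded))
print("offered a position or hired:", dict(hired))
```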
(a) Beginning on January first, two thousand twenty-seven, each developer of a high-risk artificial intelligence decision system shall use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination arising from the intended and contracted uses of a high-risk artificial intelligence decision system. In any enforcement action brought on or after such date by the attorney general pursuant to this article, there shall be a rebuttable presumption that a developer used reasonable care as required pursuant to this subdivision if: (i) the developer complied with the provisions of this section; and (ii) an independent third party identified by the attorney general pursuant to paragraph (b) of this subdivision and retained by the developer completed bias and governance audits for the high-risk artificial intelligence decision system. (b) No later than January first, two thousand twenty-six, and at least annually thereafter, the attorney general shall: (i) identify independent third parties who, in the attorney general's opinion, are qualified to complete bias and governance audits for the purposes of subparagraph (ii) of paragraph (a) of this subdivision; and (ii) publish a list of such independent third parties on the attorney general's website.
(a) Beginning on January first, two thousand twenty-seven, each deployer of a high-risk artificial intelligence decision system shall use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination. In any enforcement action brought on or after said date by the attorney general pursuant to this article, there shall be a rebuttable presumption that a deployer of a high-risk artificial intelligence decision system used reasonable care as required pursuant to this subdivision if: (i) the deployer complied with the provisions of this section; and (ii) an independent third party identified by the attorney general pursuant to paragraph (b) of this subdivision and retained by the deployer completed bias and governance audits for the high-risk artificial intelligence decision system. (b) No later than January first, two thousand twenty-seven, and at least annually thereafter, the attorney general shall: (i) identify the independent third parties who, in the attorney general's opinion, are qualified to complete bias and governance audits for the purposes of subparagraph (ii) of paragraph (a) of this subdivision; and (ii) make a list of such independent third parties available on the attorney general's web site.
(a) Except as provided in paragraphs (c) and (d) of this subdivision and subdivision seven of this section: (i) a deployer that deploys a high-risk artificial intelligence decision system on or after January first, two thousand twenty-seven, or a third party contracted by the deployer, shall complete an impact assessment of the high-risk artificial intelligence decision system; and (ii) beginning on January first, two thousand twenty-seven, a deployer, or a third party contracted by the deployer, shall complete an impact assessment of a deployed high-risk artificial intelligence decision system: (A) at least annually; and (B) no later than ninety days after an intentional and substantial modification to such high-risk artificial intelligence decision system is made available. (b) (i) Each impact assessment completed pursuant to this subdivision shall include, at a minimum and to the extent reasonably known by, or available to, the deployer: (A) a statement by the deployer disclosing the purpose, intended use cases and deployment context of, and benefits afforded by, the high-risk artificial intelligence decision system; (B) an analysis of whether the deployment of the high-risk artificial intelligence decision system poses any known or reasonably foreseeable risks of algorithmic discrimination and, if so, the nature of such algorithmic discrimination and the steps that have been taken to mitigate such risks; (C) A description of: (I) the categories of data the high-risk artificial intelligence decision system processes as inputs; and (II) the outputs such high-risk artificial intelligence decision system produces; (D) if the deployer used data to customize the high-risk artificial intelligence decision system, an overview of the categories of data the deployer used to customize such high-risk artificial intelligence decision system; (E) any metrics used to evaluate the performance and known limitations of the high-risk artificial intelligence decision system; (F) a description of any transparency measures taken concerning the high-risk artificial intelligence decision system, including, but not limited to, any measures taken to disclose to a consumer that such high-risk artificial intelligence decision system is in use when such high-risk artificial intelligence decision system is in use; and (G) a description of the post-deployment monitoring and user safeguards provided concerning such high-risk artificial intelligence decision system, including, but not limited to, the oversight, use, and learning process established by the deployer to address issues arising from deployment of such high-risk artificial intelligence decision system. (ii) In addition to the statement, analysis, descriptions, overview, and metrics required pursuant to subparagraph (i) of this paragraph, an impact assessment completed pursuant to this subdivision following an intentional and substantial modification made to a high-risk artificial intelligence decision system on or after January first, two thousand twenty-seven, shall include a statement disclosing the extent to which the high-risk artificial intelligence decision system was used in a manner that was consistent with, or varied from, the developer's intended uses of such high-risk artificial intelligence decision system. (c) A single impact assessment may address a comparable set of high-risk artificial intelligence decision systems deployed by a deployer. 
(d) If a deployer, or a third party contracted by the deployer, completes an impact assessment for the purpose of complying with another applicable law or regulation, such impact assessment shall be deemed to satisfy the requirements established in this subdivision if such impact assessment is reasonably similar in scope and effect to the impact assessment that would otherwise be completed pursuant to this subdivision. (e) A deployer shall maintain the most recently completed impact assessment of a high-risk artificial intelligence decision system as required pursuant to this subdivision, all records concerning each such impact assessment and all prior impact assessments, if any, for a period of at least three years following the final deployment of the high-risk artificial intelligence decision system.
Except as provided in subdivision seven of this section, a deployer, or a third party contracted by the deployer, shall review, no later than January first, two thousand twenty-seven, and at least annually thereafter, the deployment of each high-risk artificial intelligence decision system deployed by the deployer to ensure that such high-risk artificial intelligence decision system is not causing algorithmic discrimination.
It shall be unlawful for a landlord to implement or use an automated housing decision making tool, including the use of an automated housing decision making tool that issues a score, classification, or recommendation, that fails to comply with the following provisions: (a) No less than annually, a disparate impact analysis shall be conducted to assess the actual impact of any automated housing decision making tool used by any landlord to select applicants for housing within the state. Such disparate impact analysis shall be provided to the landlord. (b) A summary of the most recent disparate impact analysis of such tool as well as the distribution date of the tool to which the analysis applies shall be made publicly available on the website of the landlord prior to the implementation or use of such tool. Such summary shall also be made accessible through any listing for housing on a digital platform for which the landlord intends to use an automated housing decision making tool to screen applicants for housing.
1. No New York resident shall face discrimination by algorithms, and all automated systems shall be used and designed in an equitable manner. 2. The designers, developers, and deployers of automated systems shall take proactive and continuous measures to protect New York residents and communities from algorithmic discrimination, ensuring the use and design of these systems in an equitable manner. 3. The protective measures required by this section shall include proactive equity assessments as part of the system design, use of representative data, protection against proxies for demographic features, and assurance of accessibility for New York residents with disabilities in design and development. 4. Automated systems shall undergo pre-deployment and ongoing disparity testing and mitigation, under clear organizational oversight.
5. Independent evaluations and plain language reporting in the form of an algorithmic impact assessment, including disparity testing results and mitigation information, shall be conducted for all automated systems. 6. New York residents shall have the right to view such evaluations and reports.
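Items 4 through 6 above contemplate pre-deployment and ongoing disparity testing with plain-language reporting. One hypothetical way such recurring testing might be operationalized is sketched below; the reporting periods, outcome rates, and the ten-point tolerance are editorial assumptions, not requirements of the excerpted text.

```python
# Sketch of ongoing disparity testing: compare per-period positive-outcome
# rates across groups and flag any period whose gap exceeds a tolerance.

def disparity_report(periods: dict[str, dict[str, float]], tol: float = 0.10) -> list[str]:
    """Flag any reporting period whose cross-group outcome gap exceeds tol."""
    findings = []
    for period, rates in periods.items():
        gap = max(rates.values()) - min(rates.values())
        status = "mitigation required" if gap > tol else "within tolerance"
        findings.append(f"{period}: max cross-group gap {gap:.2f} -> {status}")
    return findings

# Hypothetical positive-outcome rates by demographic group, per quarter.
monitoring = {
    "2026-Q1": {"group_a": 0.41, "group_b": 0.39},
    "2026-Q2": {"group_a": 0.44, "group_b": 0.31},
}
print("\n".join(disparity_report(monitoring)))
```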
(3) The use of the artificial intelligence, algorithm, or other software tool does not adversely discriminate, directly or indirectly, against an individual on the basis of race, color, religion, national origin, ancestry, age, sex, gender, gender identity, gender expression, sexual orientation, present or predicted disability, expected length of life, degree of medical dependency, quality of life, or other health conditions. (4) The artificial intelligence, algorithm, or other software tool is fairly and equitably applied.
No employer shall utilize or apply any artificial intelligence unless the employer, or an entity acting on behalf of such employer, shall have conducted an impact assessment for the application and use of such artificial intelligence. Following the first impact assessment, an impact assessment shall be conducted at least once every two years. An impact assessment shall be conducted prior to any material change to the artificial intelligence that may change the outcome or effect of such system. Such impact assessments shall include: (a) a description of the objectives of the artificial intelligence; (b) an evaluation of the ability of the artificial intelligence to achieve its stated objectives; (c) a description and evaluation of the objectives and development of the artificial intelligence including: (i) a summary of the underlying algorithms, computational models, and tools that are used within the artificial intelligence; and (ii) the design and training data used to develop the artificial intelligence process; (d) the extent to which the deployment and use of the artificial intelligence requires input of sensitive and personal data, how that data is used and stored, and any control users may have over their data;
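An impact assessment with the contents enumerated in (a) through (d) above is, in practice, a structured document. As an editorial illustration only, the record below mirrors those elements; every field name and value is hypothetical rather than statutory language.

```python
# One illustrative way to keep the enumerated assessment contents as a
# structured record; all field names and values are editorial assumptions.
from dataclasses import dataclass

@dataclass
class ImpactAssessment:
    objectives: str                    # (a) objectives of the artificial intelligence
    efficacy_evaluation: str           # (b) ability to achieve the stated objectives
    algorithms_and_tools: list[str]    # (c)(i) underlying algorithms, models, tools
    training_data_description: str     # (c)(ii) design and training data
    sensitive_data_practices: str      # (d) sensitive-data use, storage, user controls

assessment = ImpactAssessment(
    objectives="rank applications for recruiter review",
    efficacy_evaluation="validated against two years of hiring outcomes",
    algorithms_and_tools=["gradient-boosted trees"],
    training_data_description="historical applications; resume-derived features",
    sensitive_data_practices="no direct protected-class inputs; encrypted at rest",
)
print(assessment)
```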
(e) The use of the artificial intelligence, algorithm, or other software tool does not discriminate, directly or indirectly, against enrollees in violation of state or federal law. (f) The artificial intelligence, algorithm, or other software tool is fairly and equitably applied, including in accordance with any applicable regulations and guidance issued by the federal department of health and human services.
(e) The use of the artificial intelligence, algorithm, or other software tool does not discriminate, directly or indirectly, against insureds in violation of state or federal law. (f) The artificial intelligence, algorithm, or other software tool is fairly and equitably applied, including in accordance with any applicable regulations and guidance issued by the federal department of health and human services.
1. A developer or deployer shall take reasonable care to prevent foreseeable risk of algorithmic discrimination that is a consequence of the use, sale, or sharing of a high-risk AI system or a product featuring a high-risk AI system. 2. Any developer or deployer that uses, sells, or shares a high-risk AI system shall have completed an independent audit, pursuant to section eighty-seven of this article, confirming that the developer or deployer has taken reasonable care to prevent foreseeable risk of algorithmic discrimination with respect to such high-risk AI system.
1. Developers of high-risk AI systems shall cause to be conducted third-party audits in accordance with this section. (a) A developer of a high-risk AI system shall complete at least: (i) a first audit within six months after completion of development of the high-risk AI system and the initial offering of the high-risk AI system to a deployer for deployment or, if the developer is the first deployer to deploy the high-risk AI system, after initial deployment; and (ii) one audit every one year following the submission of the first audit. (b) A developer audit under this section shall include: (i) an evaluation and determination of whether the developer has taken reasonable care to prevent foreseeable risk of algorithmic discrimination with respect to such high-risk AI system; and (ii) an evaluation of the developer's documented risk management policy and program required under section eighty-nine of this article for conformity with subdivision one of such section eighty-nine. 2. Deployers of high-risk AI systems shall cause to be conducted third-party audits in accordance with this section. (a) A deployer of a high-risk AI system shall complete at least: (i) a first audit within six months after initial deployment; (ii) a second audit within one year following the submission of the first audit; and (iii) one audit every two years following the submission of the second audit. (b) A deployer audit under this section shall include: (i) an evaluation and determination of whether the deployer has taken reasonable care to prevent foreseeable risk of algorithmic discrimination with respect to such high-risk AI system; (ii) an evaluation of system accuracy and reliability with respect to such high-risk AI system's deployer-intended and actual use cases; and (iii) an evaluation of the deployer's documented risk management policy and program required under section eighty-nine of this article for conformity with subdivision one of such section eighty-nine. 3. A deployer or developer may hire more than one auditor to fulfill the requirements of this section. 5. The independent auditor shall have complete and unredacted copies of all reports previously filed by the deployer or developer under section eighty-eight of this article. 6. An audit conducted under this section may be completed in part, but shall not be completed entirely, with the assistance of an AI system. (a) Acceptable auditor uses of an AI system include, but are not limited to: (i) use of an audited high-risk AI system in a controlled environment without impacts on end users for system testing purposes; or (ii) detecting patterns in the behavior of an audited AI system. (b) An auditor shall not: (i) use a different high-risk AI system that is not the subject of an audit to complete an audit; or (ii) use an AI system to draft an audit under this section without meaningful human review and oversight. 7. (a) An auditor shall be an independent entity including but not limited to an individual, non-profit, firm, corporation, partnership, cooperative, or association.
(b) For the purposes of this article, no auditor may be commissioned by a developer or deployer of a high-risk AI system if such entity: (i) has already been commissioned to provide any auditing or non-auditing service, including but not limited to financial auditing, cybersecurity auditing, or consulting services of any type, to the commissioning company in the past twelve months; or (ii) is, will be, or plans to be engaged in the business of developing or deploying an AI system that can compete commercially with such developer's or deployer's high-risk AI system in the five years following an audit. (c) Fees paid to auditors may not be contingent on the result of the audit and the commissioning company shall not provide any incentives or bonuses for a positive audit result. 8. The attorney general may promulgate further rules to ensure (a) the independence of auditors under this section, and (b) that teams conducting audits incorporate feedback from communities that may foreseeably be the subject of algorithmic discrimination with respect to the AI system being audited. 9. If a developer or deployer has an audit completed for the purpose of complying with another applicable federal, state, or local law or regulation, and the audit otherwise satisfies all other requirements of this section, such audit shall be deemed to satisfy the requirements of this section.
4. At the attorney general's discretion, the attorney general may: (a) promulgate further rules as necessary to ensure that audits under this section assess whether or not AI systems produce algorithmic discrimination and otherwise comply with the provisions of this article; and (b) recommend an updated AI system auditing framework to the legislature, where such recommendations are based on a standard or framework (i) designed to evaluate the risks of AI systems, and (ii) that is nationally or internationally recognized and consensus-driven, including but not limited to a relevant framework or standard created by the International Standards Organization.
No less than annually, any real estate broker or online housing platform that uses virtual agents to assist with searches for available properties for sale or rental properties, and any online housing platform that uses AI tools, shall have a disparate impact analysis conducted and shall submit a summary of the most recent disparate impact analysis to the attorney general's office.
Any real estate broker or online housing platform that offers or uses virtual agents or AI tools shall: (a) proactively identify discriminatory algorithmic results and modify such virtual agents or AI tools to adopt less discriminatory alternatives, including but not limited to, assessing data used to train such virtual agents or AI tools and verifying that use of such data does not predict discriminatory outcomes; (b) ensure that the artificial intelligence or other computational or algorithmic systems upon which such virtual agents or AI tools are structured are similarly predictive across groups on the basis of sex, race, ethnicity or other protected classes, and make adjustments to correct any identified disparities in predictiveness for any such groups; and (c) conduct regular end-to-end testing of advertising, captioning, and chatbot systems to ensure that any discriminatory outcomes are detected, including but not limited to, comparing the delivery of advertisements across different demographic audiences.
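Clause (b) above asks whether a system is "similarly predictive across groups." One common reading is a predictive-parity check: compare the tool's positive predictive value by group and correct material gaps. The sketch below is editorial; the metric choice, the hypothetical labeled data, and the gap computation are assumptions, not terms of the excerpted provision.

```python
# Sketch of a "similarly predictive across groups" check: compare the
# tool's positive predictive value by group on hypothetical labeled data.

def ppv(pairs: list[tuple[int, int]]) -> float:
    """Positive predictive value over (prediction, outcome) pairs."""
    positives = [(p, o) for p, o in pairs if p == 1]
    if not positives:
        return 0.0
    return sum(o for _, o in positives) / len(positives)

# Hypothetical (prediction, outcome) pairs per demographic group.
by_group = {
    "group_a": [(1, 1), (1, 1), (1, 0), (0, 0)],
    "group_b": [(1, 1), (1, 0), (1, 0), (0, 1)],
}
scores = {g: ppv(pairs) for g, pairs in by_group.items()}
gap = max(scores.values()) - min(scores.values())
print(scores, f"PPV gap across groups: {gap:.2f}")
```

A material gap in predictiveness would, under clause (b), call for adjustments to the virtual agent or AI tool rather than continued use as-is.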
1. A developer or deployer shall not offer, license, promote, sell, or use a covered algorithm in a manner that: (a) causes or contributes to a disparate impact that prevents; (b) otherwise discriminates in a manner that prevents; or (c) otherwise makes unavailable, the equal enjoyment of goods, services, or other activities or opportunities, related to a consequential action, on the basis of a protected characteristic. 2. This section shall not apply to: (a) the offer, licensing, or use of a covered algorithm for the sole purpose of: (i) a developer's or deployer's self-testing (or auditing by an independent auditor at a developer's or deployer's request) to identify, prevent, or mitigate discrimination, or otherwise to ensure compliance with obligations, under federal or state law; (ii) expanding an applicant, participant, or customer pool to increase the likelihood of improving diversity or redressing historic discrimination; or (iii) conducting good faith security research, or other research, where conducting the research does not constitute part or all of a commercial act; or (b) any private club or other establishment not in fact open to the public, as described in section 201(e) of the Civil Rights Act of 1964 (42 U.S.C. 2000a(e)).
1. Prior to deploying, licensing, or offering a covered algorithm (including deploying a material change to a previously deployed covered algorithm or a material change made prior to deployment) for a consequential action, a developer or deployer shall conduct a pre-deployment evaluation in accordance with this section. 2. (a) The developer shall conduct a preliminary evaluation of the plausibility that any expected use of the covered algorithm may result in a harm. (b) The deployer shall conduct a preliminary evaluation of the plausibility that any intended use of the covered algorithm may result in a harm. (c) Based on the results of the preliminary evaluation, the developer or deployer shall: (i) in the event that a harm is not plausible, record a finding of no plausible harm, including a description of the developer's expected use or the deployer's intended use of the covered algorithm, how the preliminary evaluation was conducted, and an explanation for the finding, and submit such record to the division; and (ii) in the event that a harm is plausible, conduct a full pre-deployment evaluation as described in subdivision three or subdivision four of this section, as applicable. (d) When conducting a preliminary evaluation of a material change to, or new use of, a previously deployed covered algorithm, the developer or deployer may limit the scope of the evaluation to whether use of the covered algorithm may result in a harm as a result of the material change or new use. 3. (a) If a developer determines a harm is plausible during the preliminary evaluation described in subdivision two of this section, the developer shall engage an independent auditor to conduct a pre-deployment evaluation. The evaluation required by this subdivision shall include a detailed review and description, sufficient for an individual having ordinary skill in the art to understand the functioning, risks, uses, benefits, limitations, and other pertinent attributes of the covered algorithm, including: (i) the covered algorithm's design and methodology, including the inputs the covered algorithm is designed to use to produce an output and the outputs the covered algorithm is designed to produce; (ii) how the covered algorithm was created, trained, and tested, including: (A) any metric used to test the performance of the covered algorithm; (B) defined benchmarks and goals that correspond to such metrics, including whether there was sufficient representation of demographic groups that are reasonably likely to use or be affected by the covered algorithm in the data used to create or train the algorithm, and whether there was reasonable testing, if any, across such demographic groups; (C) the outputs the covered algorithm actually produces in testing; (D) a description of any consultation with relevant stakeholders, including any communities that will be impacted by the covered algorithm, regarding the development of the covered algorithm, or a disclosure that no such consultation occurred; (E) a description of which protected characteristics, if any, were used for testing and evaluation, and how and why such characteristics were used, including: (1) whether the testing occurred in contextual conditions comparable to those in which the covered algorithm is expected to be used; and (2) if protected characteristics were not available to conduct such testing, a description of alternative methods the developer used to conduct the required assessment; (F) any other computational algorithm incorporated into the development of the covered algorithm, regardless of whether such precursor computational algorithm involves a consequential action; (G) a description of the data and information used to develop, test, maintain, or update the covered algorithm, including: (1) each type of personal data used, each source from which the personal data was collected, and how each type of personal data was inferred and processed; (2) the legal authorization for collecting and processing the personal data; and (3) an explanation of how the data (including personal data) used is representative, proportional, and appropriate to the development and intended uses of the covered algorithm; and (H) a description of the training process for the covered algorithm, which includes the training, validation, and test data utilized to confirm the intended outputs; (iii) the potential for the covered algorithm to produce a harm or to have a disparate impact in the equal enjoyment of goods, services, or other activities or opportunities, and a description of such potential harm or disparate impact; (iv) alternative practices and recommendations to prevent or mitigate harm and recommendations for how the developer could monitor for harm after offering, licensing, or deploying the covered algorithm; and (v) any other information the division deems pertinent to prevent the covered algorithm from causing harm or having a disparate impact in the equal enjoyment of goods, services, or other activities or opportunities, as prescribed by rules promulgated by the division. (b) The independent auditor shall submit to the developer a report on the evaluation conducted under this subdivision, including the findings and recommendations of such independent auditor.
4. (a) If a deployer determines a harm is plausible during the preliminary evaluation described in subdivision two of this section, the deployer shall engage an independent auditor to conduct a pre-deployment evaluation. The evaluation required by this subdivision shall include a detailed review and description, sufficient for an individual having ordinary skill in the art to understand the functioning, risks, uses, benefits, limitations, and other pertinent attributes of the covered algorithm, including: (i) the manner in which the covered algorithm makes or contributes to a consequential action and the purpose for which the covered algorithm will be deployed; (ii) the necessity and proportionality of the covered algorithm in relation to its planned use, including the intended benefits and limitations of the covered algorithm and a description of the baseline process being enhanced or replaced by the covered algorithm, if applicable; (iii) the inputs that the deployer plans to use to produce an output, including: (A) the type of personal data and information used and how the personal data and information will be collected, inferred, and processed; (B) the legal authorization for collecting and processing the personal data; and (C) an explanation of how the data used is representative, proportional, and appropriate to the deployment of the covered algorithm; (iv) the outputs the covered algorithm is expected to produce and the outputs the covered algorithm actually produces in testing; (v) a description of any additional testing or training completed by the deployer for the context in which the covered algorithm will be deployed; (vi) a description of any consultation with relevant stakeholders, including any communities that will be impacted by the covered algorithm, regarding the deployment of the covered algorithm; (vii) the potential for the covered algorithm to produce a harm or to have a disparate impact in the equal enjoyment of goods, services, or other activities or opportunities in the context in which the covered algorithm will be deployed and a description of such potential harm or disparate impact; (viii) alternative practices and recommendations to prevent or mitigate harm in the context in which the covered algorithm will be deployed and recommendations for how the deployer could monitor for harm after offering, licensing, or deploying the covered algorithm; and (ix) any other information the division deems pertinent to prevent the covered algorithm from causing harm or having a disparate impact in the equal enjoyment of goods, services, or other activities or opportunities as prescribed by rules promulgated by the division. (b) The independent auditor shall submit to the deployer a report on the evaluation conducted under this subdivision, including the findings and recommendations of such independent auditor.
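For concreteness, the preliminary-evaluation gate in subdivision two above (record and submit a finding of no plausible harm, or escalate to a full evaluation under subdivision three or four) reduces to a simple decision record. The Python sketch below is purely illustrative; the class name, fields, and return strings are invented and are not drawn from the quoted text.

# Illustrative sketch of the preliminary-evaluation workflow; all names
# are hypothetical.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class PreliminaryEvaluation:
    algorithm_name: str
    role: str                # "developer" or "deployer"
    use_description: str     # expected use (developer) or intended use (deployer)
    method: str              # how the evaluation was conducted
    harm_plausible: bool
    explanation: str
    evaluated_on: date = field(default_factory=date.today)

def next_step(evaluation: PreliminaryEvaluation) -> str:
    """Route the record per subdivision two, paragraph (c)."""
    if evaluation.harm_plausible:
        # Paragraph (c)(ii): full pre-deployment evaluation by an
        # independent auditor (subdivision three or four, as applicable).
        return "engage an independent auditor for a full pre-deployment evaluation"
    # Paragraph (c)(i): record and submit the finding to the division.
    return "submit the finding of no plausible harm to the division"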
1. After the deployment of a covered algorithm, a deployer shall, on an annual basis, conduct an impact assessment in accordance with this section. The deployer shall conduct a preliminary impact assessment of the covered algorithm to identify any harm that resulted from the covered algorithm during the reporting period and: (a) if no resulting harm is identified by such assessment, shall record a finding of no harm, including a description of the developer's expected use or the deployer's intended use of the covered algorithm, how the preliminary impact assessment was conducted, and an explanation for such finding, and submit such finding to the division; and (b) if a resulting harm is identified by such assessment, shall conduct a full impact assessment as described in subdivision two of this section. 2. In the event that the covered algorithm resulted in a harm during the reporting period, the deployer shall engage an independent auditor to conduct a full impact assessment with respect to the reporting period, including: (a) an assessment of the harm that resulted or was reasonably likely to have been produced during the reporting period; (b) a description of the extent to which the covered algorithm produced a disparate impact in the equal enjoyment of goods, services, or other activities or opportunities, including the methodology for such evaluation and a description of how the covered algorithm produced or likely produced such disparity; (c) a description of the types of data input into the covered algorithm during the reporting period to produce an output, including: (i) documentation of how data input into the covered algorithm to produce an output is represented and complete descriptions of each field of data; and (ii) whether and to what extent the data input into the covered algorithm to produce an output was used to train or otherwise modify the covered algorithm; (d) whether and to what extent the covered algorithm produced the outputs it was expected to produce; (e) a detailed description of how the covered algorithm was used to make a consequential action; (f) any action taken to prevent or mitigate harms, including how relevant staff are informed of, trained about, and implement harm mitigation policies and practices, and recommendations for how the deployer could monitor for and prevent harm after offering, licensing, or deploying the covered algorithm; and (g) any other information the division deems pertinent to prevent the covered algorithm from causing harm or having a disparate impact in the equal enjoyment of goods, services, or other activities or opportunities as prescribed by rules promulgated by the division. 3. (a) Upon completion of the engagement, the independent auditor shall submit to the deployer a report on the impact assessment conducted under subdivision two of this section, including the findings and recommendations of such independent auditor. (b) Not later than thirty days after the submission of a report on an impact assessment under this section, a deployer shall submit to the developer of the covered algorithm a summary of such report, subject to the trade secret and privacy protections described in subdivision six of this section.
4. A developer shall, on an annual basis, review each impact assessment summary submitted by a deployer of its covered algorithm under subdivision three of this section for the following purposes: (a) to assess how the deployer is using the covered algorithm, including the methodology for assessing such use; (b) to assess the type of data the deployer is inputting into the covered algorithm to produce an output and the types of outputs the covered algorithm is producing; (c) to assess whether the deployer is complying with any relevant contractual agreement with the developer and whether any remedial action is necessary; (d) to compare the covered algorithm's performance in real-world conditions versus pre-deployment testing, including the methodology used to evaluate such performance; (e) to assess whether the covered algorithm is causing harm or is reasonably likely to be causing harm; (f) to assess whether the covered algorithm is causing, or is reasonably likely to be causing, a disparate impact in the equal enjoyment of goods, services, or other activities or opportunities, and, if so, how and with respect to which protected characteristic; (g) to determine whether the covered algorithm needs modification; (h) to determine whether any other action is appropriate to ensure that the covered algorithm remains safe and effective; and (i) to undertake any other assessment or responsive action the division deems pertinent to prevent the covered algorithm from causing harm or having a disparate impact in the equal enjoyment of goods, services, or other activities or opportunities, as prescribed by rules promulgated by the division.
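Paragraph (d) of the annual review above, comparing a covered algorithm's real-world performance against its pre-deployment testing, is at bottom a drift check. The sketch below is illustrative only; the metric names, values, and tolerance are hypothetical, not prescribed by the quoted text.

# Illustrative drift check between pre-deployment test results and the
# metrics reported in a deployer's impact assessment summary.
def performance_drift(baseline: dict, deployed: dict, tolerance: float = 0.05) -> dict:
    """Return each metric whose deployed value falls short of the
    pre-deployment baseline by more than the tolerance."""
    return {
        metric: round(baseline[metric] - deployed[metric], 3)
        for metric in baseline
        if metric in deployed and baseline[metric] - deployed[metric] > tolerance
    }

baseline = {"accuracy": 0.91, "selection_rate_ratio": 0.88}  # pre-deployment testing
deployed = {"accuracy": 0.84, "selection_rate_ratio": 0.71}  # real-world conditions
print(performance_drift(baseline, deployed))  # both metrics exceed the tolerance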
1. A developer or deployer shall take reasonable care to prevent foreseeable risk of algorithmic discrimination that is a consequence of the use, sale, or sharing of a high-risk AI system or a product featuring a high-risk AI system. 2. Any developer or deployer that uses, sells, or shares a high-risk AI system shall have completed an independent audit, pursuant to section eighty-seven of this article, confirming that the developer or deployer has taken reasonable care to prevent foreseeable risk of algorithmic discrimination with respect to such high-risk AI system.
No employer shall utilize or apply any artificial intelligence unless the employer, or an entity acting on behalf of such employer, has conducted an impact assessment for the application and use of such artificial intelligence. Following the first impact assessment, an impact assessment shall be conducted at least once every two years. An impact assessment shall also be conducted prior to any material change to the artificial intelligence that may change the outcome or effect of such system. Such impact assessments shall include: (a) a description of the objectives of the artificial intelligence; (b) an evaluation of the ability of the artificial intelligence to achieve its stated objectives; (c) a description and evaluation of the objectives and development of the artificial intelligence, including: (i) a summary of the underlying algorithms, computational models, and tools that are used within the artificial intelligence; and (ii) the design and training data used to develop the artificial intelligence process; (d) the extent to which the deployment and use of the artificial intelligence requires input of sensitive and personal data, how that data is used and stored, and any control users may have over their data; (e) an estimate of the number of employees already displaced due to artificial intelligence; and (f) an estimate of the number of employees expected to be displaced or otherwise affected due to the increased use of artificial intelligence in the workplace.
1. (a) Beginning on January first, two thousand twenty-seven, each developer of a high-risk artificial intelligence decision system shall use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination arising from the intended and contracted uses of a high-risk artificial intelligence decision system. In any enforcement action brought on or after such date by the attorney general pursuant to this article, there shall be a rebuttable presumption that a developer used reasonable care as required pursuant to this subdivision if: (i) the developer complied with the provisions of this section; and (ii) an independent third party identified by the attorney general pursuant to paragraph (b) of this subdivision and retained by the developer completed bias and governance audits for the high-risk artificial intelligence decision system. (b) No later than January first, two thousand twenty-six, and at least annually thereafter, the attorney general shall: (i) identify independent third parties who, in the attorney general's opinion, are qualified to complete bias and governance audits for the purposes of subparagraph (ii) of paragraph (a) of this subdivision; and (ii) publish a list of such independent third parties on the attorney general's website.
1. (a) Beginning on January first, two thousand twenty-seven, each deployer of a high-risk artificial intelligence decision system shall use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination. In any enforcement action brought on or after said date by the attorney general pursuant to this article, there shall be a rebuttable presumption that a deployer of a high-risk artificial intelligence decision system used reasonable care as required pursuant to this subdivision if: (i) the deployer complied with the provisions of this section; and (ii) an independent third party identified by the attorney general pursuant to paragraph (b) of this subdivision and retained by the deployer completed bias and governance audits for the high-risk artificial intelligence decision system. (b) No later than January first, two thousand twenty-seven, and at least annually thereafter, the attorney general shall: (i) identify the independent third parties who, in the attorney general's opinion, are qualified to complete bias and governance audits for the purposes of subparagraph (ii) of paragraph (a) of this subdivision; and (ii) make a list of such independent third parties available on the attorney general's website.
3. (a) Except as provided in paragraphs (c) and (d) of this subdivision and subdivision seven of this section: (i) a deployer that deploys a high-risk artificial intelligence decision system on or after January first, two thousand twenty-seven, or a third party contracted by the deployer, shall complete an impact assessment of the high-risk artificial intelligence decision system; and (ii) beginning on January first, two thousand twenty-seven, a deployer, or a third party contracted by the deployer, shall complete an impact assessment of a deployed high-risk artificial intelligence decision system: (A) at least annually; and (B) no later than ninety days after an intentional and substantial modification to such high-risk artificial intelligence decision system is made available. (b) (i) Each impact assessment completed pursuant to this subdivision shall include, at a minimum and to the extent reasonably known by, or available to, the deployer: (A) a statement by the deployer disclosing the purpose, intended use cases and deployment context of, and benefits afforded by, the high-risk artificial intelligence decision system; (B) an analysis of whether the deployment of the high-risk artificial intelligence decision system poses any known or reasonably foreseeable risks of algorithmic discrimination and, if so, the nature of such algorithmic discrimination and the steps that have been taken to mitigate such risks; (C) A description of: (I) the categories of data the high-risk artificial intelligence decision system processes as inputs; and (II) the outputs such high-risk artificial intelligence decision system produces; (D) if the deployer used data to customize the high-risk artificial intelligence decision system, an overview of the categories of data the deployer used to customize such high-risk artificial intelligence decision system; (E) any metrics used to evaluate the performance and known limitations of the high-risk artificial intelligence decision system; (F) a description of any transparency measures taken concerning the high-risk artificial intelligence decision system, including, but not limited to, any measures taken to disclose to a consumer that such high-risk artificial intelligence decision system is in use when such high-risk artificial intelligence decision system is in use; and (G) a description of the post-deployment monitoring and user safeguards provided concerning such high-risk artificial intelligence decision system, including, but not limited to, the oversight, use, and learning process established by the deployer to address issues arising from deployment of such high-risk artificial intelligence decision system. (ii) In addition to the statement, analysis, descriptions, overview, and metrics required pursuant to subparagraph (i) of this paragraph, an impact assessment completed pursuant to this subdivision following an intentional and substantial modification made to a high-risk artificial intelligence decision system on or after January first, two thousand twenty-seven, shall include a statement disclosing the extent to which the high-risk artificial intelligence decision system was used in a manner that was consistent with, or varied from, the developer's intended uses of such high-risk artificial intelligence decision system. (c) A single impact assessment may address a comparable set of high-risk artificial intelligence decision systems deployed by a deployer. 
(d) If a deployer, or a third party contracted by the deployer, completes an impact assessment for the purpose of complying with another applicable law or regulation, such impact assessment shall be deemed to satisfy the requirements established in this subdivision if such impact assessment is reasonably similar in scope and effect to the impact assessment that would otherwise be completed pursuant to this subdivision. (e) A deployer shall maintain the most recently completed impact assessment of a high-risk artificial intelligence decision system as required pursuant to this subdivision, all records concerning each such impact assessment, and all prior impact assessments, if any, for a period of at least three years following the final deployment of the high-risk artificial intelligence decision system.
4. Except as provided in subdivision seven of this section, a deployer, or a third party contracted by the deployer, shall review, no later than January first, two thousand twenty-seven, and at least annually thereafter, the deployment of each high-risk artificial intelligence decision system deployed by the deployer to ensure that such high-risk artificial intelligence decision system is not causing algorithmic discrimination.
(e) The use of the artificial intelligence, algorithm, or other software tool does not discriminate, directly or indirectly, against enrollees in violation of state or federal law. (f) The artificial intelligence, algorithm, or other software tool is fairly and equitably applied, including in accordance with any applicable regulations and guidance issued by the federal department of health and human services.
(e) The use of the artificial intelligence, algorithm, or other software tool does not discriminate, directly or indirectly, against insureds in violation of state or federal law. (f) The artificial intelligence, algorithm, or other software tool is fairly and equitably applied, including in accordance with any applicable regulations and guidance issued by the federal department of health and human services.
(a) It shall be an unlawful discriminatory practice for an employer to use artificial intelligence for recruitment, hiring, promotion, renewal of employment, selection for training or apprenticeship, discharge, discipline, tenure, or the terms, privileges, or conditions of employment that has the effect of subjecting employees to discrimination on the basis of age, race, creed, color, national origin, citizenship or immigration status, sexual orientation, gender identity or expression, military status, sex, disability, predisposing genetic characteristics, familial status, marital status, or status as a victim of domestic violence or to use zip codes as a proxy for such protected classes.
Notwithstanding the provisions of this article or any other law, if an impact assessment finds that the automated decision-making system produces discriminatory or biased outcomes, the state agency shall cease any utilization, application, or function of such automated decision-making system and shall cease using any information produced using such system.
4. Does not discriminate against enrollees in violation of state and federal law;
(2) The artificial intelligence-based algorithms and training data sets must not directly or indirectly discriminate against patients in violation of Federal or State law. (3) The artificial intelligence-based algorithms must be fairly and equitably applied, including in accordance with any applicable regulations or guidance issued by the United States Department of Health and Human Services.
(4) The artificial intelligence-based algorithms and training data sets must not directly or indirectly discriminate against covered persons in violation of Federal or State law. (5) The artificial intelligence-based algorithms must be fairly and equitably applied, including in accordance with any applicable regulations or guidance issued by the United States Department of Health and Human Services.
(4) The artificial intelligence-based algorithms and training data sets must not directly or indirectly discriminate against the enrollees in violation of Federal or State law. (5) The artificial intelligence-based algorithms must be fairly and equitably applied, including in accordance with any applicable regulations and guidance issued by the United States Department of Health and Human Services.
(2) The artificial-intelligence-based algorithms and training data sets must not directly or indirectly discriminate against patients in violation of Federal or State law. (3) The artificial-intelligence-based algorithms must be fairly and equitably applied, including in accordance with any applicable regulations or guidance issued by the United States Department of Health and Human Services.
(4) The artificial-intelligence-based algorithms and training data sets must not directly or indirectly discriminate against covered persons in violation of Federal or State law. (5) The artificial-intelligence-based algorithms must be fairly and equitably applied, including in accordance with any applicable regulations or guidance issued by the United States Department of Health and Human Services.
(4) The artificial-intelligence-based algorithms and training data sets must not directly or indirectly discriminate against the enrollees in violation of Federal or State law. (5) The artificial-intelligence-based algorithms must be fairly and equitably applied, including in accordance with any applicable regulations and guidance issued by the United States Department of Health and Human Services.
(k) It shall be unlawful for an employer to use electronic monitoring, alone or in conjunction with an automated decision system, unless the employer's proposed use of electronic monitoring has been the subject of an impact assessment. Such impact assessments shall: (1) Be conducted no more than one year prior to the use of such electronic monitoring or, where the electronic monitoring began before the effective date of this section, within six (6) months of the effective date of this chapter; (2) Be conducted by an independent and impartial party with no financial or legal conflicts of interest; (3) Evaluate whether the data protection and security practices surrounding the electronic monitoring are consistent with applicable law and the cybersecurity industry's best practices; (4) Identify the allowable purpose(s) as defined in this chapter; (5) Consider and describe any other ways in which the electronic monitoring could result in a violation of applicable law and, for any finding that a violation of law may occur, any necessary or appropriate steps to prevent such violation of law; (6) Consider and describe whether the electronic monitoring may negatively impact employees' privacy and job quality, including wages, hours, and working conditions; and (7) Be disclosed in full, in plain language, to all affected workers and their authorized representatives within thirty (30) days of the employer's receipt of the impact assessment. (i) Workers and their authorized representatives shall have the right to comment on, challenge, and bargain over the proposed monitoring based on the assessment's findings.
(A) A developer of a high-risk artificial intelligence system shall use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination arising from the intended and contracted uses of the high-risk artificial intelligence system. In any enforcement action brought by the Attorney General pursuant to Section 37-31-60, there is a rebuttable presumption that a developer used reasonable care as required under this section if the developer complied with this section and any additional requirements or obligations as set forth in rules adopted by the Attorney General pursuant to Section 37-31-70.
(A) A deployer of a high-risk artificial intelligence system shall use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination. In any enforcement action brought by the Attorney General pursuant to Section 37-31-70, there is a rebuttable presumption that a deployer of a high-risk artificial intelligence system used reasonable care as required under this section if the deployer complied with this section and any additional requirements or obligations as set forth in rules adopted by the Attorney General pursuant to Section 37-31-70.
(C)(1) Except as provided in items (4) and (5) of this subsection and subsection (F) of this section: (a) a deployer, or a third party contracted by the deployer, that deploys a high-risk artificial intelligence system shall complete an impact assessment for the high-risk artificial intelligence system; and (b) a deployer, or a third party contracted by the deployer, shall complete an impact assessment for a deployed high-risk artificial intelligence system at least annually and within ninety days after any intentional and substantial modification to the high-risk artificial intelligence system is made available. (2) An impact assessment completed pursuant to this subsection must include, at a minimum, and to the extent reasonably known by or available to the deployer: (a) a statement by the deployer disclosing the purpose, intended use cases, and deployment context of, and benefits afforded by, the high-risk artificial intelligence system; (b) an analysis of whether the deployment of the high-risk artificial intelligence system poses any known or reasonably foreseeable risks of algorithmic discrimination and, if so, the nature of the algorithmic discrimination and the steps that have been taken to mitigate the risks; (c) a description of the categories of data the high-risk artificial intelligence system processes as inputs and the outputs the high-risk artificial intelligence system produces; (d) if the deployer used data to customize the high-risk artificial intelligence system, an overview of the categories of data the deployer used to customize the high-risk artificial intelligence system; (e) any metrics used to evaluate the performance and known limitations of the high-risk artificial intelligence system; (f) a description of any transparency measures taken concerning the high-risk artificial intelligence system, including any measures taken to disclose to a consumer that the high-risk artificial intelligence system is in use when the high-risk artificial intelligence system is in use; and (g) a description of the postdeployment monitoring and user safeguards provided concerning the high-risk artificial intelligence system, including the oversight, use, and learning process established by the deployer to address issues arising from the deployment of the high-risk artificial intelligence system. (3) In addition to the information required under item (2), an impact assessment completed pursuant to this subsection following an intentional and substantial modification to a high-risk artificial intelligence system must include a statement disclosing the extent to which the high-risk artificial intelligence system was used in a manner that was consistent with, or varied from, the developer's intended uses of the high-risk artificial intelligence system. (4) A single impact assessment may address a comparable set of high-risk artificial intelligence systems deployed by a deployer. (5) If a deployer, or a third party contracted by the deployer, completes an impact assessment for the purpose of complying with another applicable law or regulation, the impact assessment satisfies the requirements established in this subsection if the impact assessment is reasonably similar in scope and effect to the impact assessment that would otherwise be completed pursuant to this subsection.
(6) A deployer shall maintain the most recently completed impact assessment for a high-risk artificial intelligence system as required under this subsection, all records concerning each impact assessment, and all prior impact assessments, if any, for at least three years following the final deployment of the high-risk artificial intelligence system. (7) At least annually, a deployer, or a third party contracted by the deployer, must review the deployment of each high-risk artificial intelligence system deployed by the deployer to ensure that the high-risk artificial intelligence system is not causing algorithmic discrimination.
The Director shall require any state agency that uses an automated decision system as a substantial factor in any employment decision to: 4. Annually test, or ensure that an appropriate contractor employed by such agency annually tests, the automated decision system for algorithmic discrimination and certify its compliance with federal and state law;
Any department, office, board, commission, agency, or instrumentality of local government that uses an automated decision system as a substantial factor in any employment decision shall: 4. Annually test, or ensure that an appropriate contractor employed by such department, office, board, commission, agency, or instrumentality of local government annually tests, the automated decision system for algorithmic discrimination and certify its compliance with federal and state law;
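The annual test for algorithmic discrimination required of state and local bodies in the two provisions above is commonly operationalized with an adverse impact analysis of the kind described in the Uniform Guidelines on Employee Selection Procedures, using the four-fifths (80%) rule of thumb. The sketch below is illustrative only; the group names, counts, and the 0.8 floor applied here are hypothetical inputs, not statutory requirements.

# Illustrative four-fifths (80%) rule check on selection rates by group.
def adverse_impact(selected: dict, considered: dict, floor: float = 0.8) -> dict:
    """Flag groups whose selection rate is less than `floor` times the
    highest group's selection rate."""
    rates = {g: selected[g] / considered[g] for g in considered}
    best = max(rates.values())
    return {g: round(rate / best, 3) for g, rate in rates.items() if rate / best < floor}

selected = {"group_a": 50, "group_b": 18}     # hypothetical selections
considered = {"group_a": 100, "group_b": 60}  # hypothetical applicant counts
print(adverse_impact(selected, considered))   # {'group_b': 0.6} -> flagged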
(g) Impact assessment of automated decision systems. (1) Prior to utilizing an automated decision system, an employer shall create a written impact assessment of the system that includes, at a minimum: (A) a detailed description of the automated decision system and its purpose; (B) a description of the data utilized by the system; (C) a description of the outputs produced by the system and the types of employment-related decisions in which those outputs may be utilized; (D) an assessment of the necessity for the system, including reasons for utilizing the system to supplement nonautomated means of decision making; (E) a detailed assessment of the system's validity and reliability in accordance with contemporary social science standards and a description of any metrics used to evaluate the performance and known limitations of the automated decision system; (F) a detailed assessment of the potential risks of utilizing the system, including the risk of: (i) discrimination against employees on the basis of race, color, religion, national origin, sex, sexual orientation, gender identity, ancestry, place of birth, age, crime victim status, or physical or mental condition; (ii) violating employees' legal rights or chilling employees' exercise of legal rights; (iii) directly or indirectly harming employees' physical health, mental health, safety, sense of well-being, dignity, or autonomy; (iv) harm to employee privacy, including through potential security breaches or inadvertent disclosure of information; and (v) negative economic and material impacts to employees, including potential effects on compensation, benefits, work conditions, evaluations, advancement, and work opportunities; (G) a detailed summary of measures taken by the employer to address or mitigate the risks identified pursuant to subdivision (F) of this subdivision (1); and (H) a description of any methodology used in preparing the assessment. (2) An employer shall provide a copy of the assessment prepared pursuant to subdivision (1) of this subsection to an employee upon request. (3) An employer shall update the assessment required pursuant to this subsection any time a significant change or update is made to the automated decision system. (4) A single impact assessment may address a comparable set of automated decision systems deployed by an employer.
It shall be unlawful discrimination for a developer or deployer to use, sell, or share an automated decision system for use in a consequential decision or a product featuring an automated decision system for use in a consequential decision that produces algorithmic discrimination.
(f) A developer shall not use, sell, or share an automated decision system for use in a consequential decision, or a product featuring an automated decision system for use in a consequential decision, that has not passed an independent audit in accordance with section 4193e of this title. If an independent audit finds that an automated decision system for use in a consequential decision does produce algorithmic discrimination, the developer shall not use, sell, or share the system until a post-adjustment audit confirms that the algorithmic discrimination has been rectified.
(a) Prior to deployment of an automated decision system for use in a consequential decision, six months after deployment, and, following the first post-deployment audit, at least once every 18 months for as long as the automated decision system remains in use in consequential decisions, the developer and deployer shall be jointly responsible for ensuring that an independent audit is conducted in compliance with the provisions of this section to ensure that the product does not produce algorithmic discrimination and complies with the provisions of this subchapter. The developer and deployer shall enter into a contract specifying which party is responsible for the costs, oversight, and results of the audit. Absent an agreement of responsibility through contract, the developer and deployer shall be jointly and severally liable for any violations of this section. Regardless of final findings, the deployer or developer shall deliver all audits conducted under this section to the Attorney General. (b) A deployer or developer may contract with more than one auditor to fulfill the requirements of this section. (c) The audit shall include the following: (1) an analysis of data management policies, including whether personal or sensitive data relating to a consumer is subject to data security protection standards that comply with the requirements of applicable State law; (2) an analysis of the system's validity and reliability according to each specified use case listed in the entity's reporting document filed by the developer or deployer pursuant to section 4193f of this title; (3) a comparative analysis of the system's performance when used on consumers of different demographic groups and a determination of whether the system produces algorithmic discrimination in violation of this subchapter for each intended and foreseeable use identified by the deployer and developer pursuant to section 4193f of this title; (4) an analysis of how the technology complies with existing relevant federal, State, and local labor, civil rights, consumer protection, privacy, and data privacy laws; and (5) an evaluation of the developer's or deployer's documented risk management policy and program as set forth in section 4193g of this title for conformity with subsection 4193g(a) of this title.
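Item (3) of the audit contents above calls for a comparative analysis of system performance across demographic groups. One common form of that analysis contrasts true-positive and false-positive rates by group; the sketch below is illustrative only, with hypothetical records of (group, decision, actual_outcome).

# Illustrative per-group error-rate comparison for a binary decision system.
from collections import defaultdict

def group_rates(records):
    """Per-group true-positive and false-positive rates, given records of
    (group, decision, actual_outcome) with 1 = positive."""
    tp, fn, fp, tn = (defaultdict(int) for _ in range(4))
    for group, decision, actual in records:
        if actual and decision:
            tp[group] += 1
        elif actual:
            fn[group] += 1
        elif decision:
            fp[group] += 1
        else:
            tn[group] += 1
    groups = set(tp) | set(fn) | set(fp) | set(tn)
    return {g: {"tpr": tp[g] / max(tp[g] + fn[g], 1),
                "fpr": fp[g] / max(fp[g] + tn[g], 1)} for g in groups}

records = [("A", 1, 1), ("A", 1, 0), ("A", 0, 0),
           ("B", 0, 1), ("B", 1, 0), ("B", 0, 0)]
print(group_rates(records))  # large TPR or FPR gaps between groups merit scrutiny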
(f) An audit conducted under this section shall be completed in its entirety without the assistance of an automated decision system. (g)(1) An auditor shall be an independent entity, including an individual, nonprofit, firm, corporation, partnership, cooperative, or association. (2) For the purposes of this subchapter, no auditor may be commissioned by a developer or deployer of an automated decision system used in consequential decisions if the auditor: (A) has already been commissioned to provide any auditing or nonauditing service, including financial auditing, cybersecurity auditing, or consulting services of any type, to the commissioning company in the past 12 months; (B) is or was involved in using, developing, integrating, offering, licensing, or deploying the automated decision system; (C) has or had an employment relationship with a developer or deployer that uses, offers, or licenses the automated decision system; or (D) has or had a direct financial interest or a material indirect financial interest in a developer or deployer that uses, offers, or licenses the automated decision system. (3) Fees paid to auditors may not be contingent on the result of the audit and the commissioning company shall not provide any incentives or bonuses for a positive audit result.
(3) The artificial intelligence, algorithm, or other software tool is fairly applied, including in accordance with any applicable regulations and guidance issued by the U.S. Department of Health and Human Services. (4) The artificial intelligence, algorithm, or other software tool is configured and applied in a standard, consistent manner for all health plans and insureds so that the resulting decisions are the same for all patients with similar clinical presentation and considerations.
(4) The artificial intelligence, algorithm, or other software tool does not supplant health care provider decision making. (5) The use of the artificial intelligence, algorithm, or other software tool does not discriminate, directly or indirectly, against covered individuals in violation of State or federal law. (6) The artificial intelligence, algorithm, or other software tool is fairly and equitably applied, including in accordance with any applicable regulations and guidance issued by the U.S. Department of Health and Human Services.
(1) A developer of a high-risk artificial intelligence system shall use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination arising from the intended and contracted uses of the high-risk artificial intelligence system. In a civil action brought against a developer pursuant to this chapter, there is a rebuttable presumption that a developer of a high-risk artificial intelligence system used reasonable care as required by this subsection (1) if the developer complied with the requirements of this section.
(1) A deployer of a high-risk artificial intelligence system shall use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination. In a civil action brought against a deployer pursuant to this chapter, there is a rebuttable presumption that a deployer of a high-risk artificial intelligence system used reasonable care as required by this subsection (1) if the deployer complied with the provisions of this section.
(3)(a) Except as provided in (c) of this subsection (3), a deployer may not deploy or use a high-risk artificial intelligence system to make a consequential decision unless the deployer has completed an impact assessment for such high-risk artificial intelligence system. The deployer shall complete an impact assessment for a high-risk artificial intelligence system before the deployer initially deploys such high-risk artificial intelligence system and before a significant update to such high-risk artificial intelligence system is used to make a consequential decision. (b) An impact assessment completed pursuant to (a) of this subsection (3) must include, at a minimum: (i) A statement by the deployer disclosing the purpose, intended use cases, and deployment context of, and benefits afforded by, the high-risk artificial intelligence system; (ii) A statement by the deployer disclosing whether the deployment or use of the high-risk artificial intelligence system poses any known or reasonably foreseeable risk of algorithmic discrimination and, if so, the nature of such algorithmic discrimination and the steps that have been taken, to the extent feasible, to mitigate such risk; (iii) For each postdeployment impact assessment completed pursuant to this section, whether the intended use cases of the high-risk artificial intelligence system as updated were consistent with, or varied from, the developer's intended uses of such high-risk artificial intelligence system; (iv) A description of the categories of data the high-risk artificial intelligence system processes as inputs and the outputs such high-risk artificial intelligence system produces; (v) If the deployer used data to customize the high-risk artificial intelligence system, an overview of the categories of data the deployer used to customize such high-risk artificial intelligence system; (vi) A list of any metrics used to evaluate the performance and known limitations of the high-risk artificial intelligence system; (vii) A description of any transparency measures taken concerning the high-risk artificial intelligence system, including any measures taken to disclose to a consumer that such high-risk artificial intelligence system is in use when such high-risk artificial intelligence system is in use; (viii) A description of any postdeployment monitoring performed and user safeguards provided concerning such high-risk artificial intelligence system, including any oversight process established by the deployer to address issues arising from deployment or use of such high-risk artificial intelligence system as such issues arise; and (ix) An analysis of such high-risk artificial intelligence system's validity and reliability in accordance with standard industry practices. (c)(i) A single impact assessment may address a comparable set of high-risk artificial intelligence systems deployed or used by a deployer. (ii) If a deployer completes an impact assessment for the purpose of complying with another applicable law or regulation, such impact assessment satisfies the relevant requirements established in this section if such impact assessment is reasonably similar in scope and effect to the impact assessment that would otherwise be completed pursuant to this section. (iii) A deployer that completes an impact assessment pursuant to this section shall maintain such impact assessment and all records concerning the impact assessment for three years. 
Throughout the period of time that a high-risk artificial intelligence system is deployed and for a period of at least three years following the final deployment of the high-risk artificial intelligence system, the deployer shall retain all records concerning each impact assessment conducted on the high-risk artificial intelligence system, including all raw data used to evaluate the performance and known limitations of such system.
(1)(a) Beginning July 1, 2027, each deployer of a high-risk artificial intelligence system must use industry-standard means to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination. (b) In any enforcement action brought on or after July 1, 2027, by the attorney general pursuant to section 9 of this act, there is a rebuttable presumption that a deployer of a high-risk artificial intelligence system used reasonable care as required under this section if the deployer complied with this chapter.
(2)(a) By July 1, 2027, and at least annually thereafter, a deployer or third party contracted by the deployer shall review the deployment of each high-risk artificial intelligence system deployed by the deployer to ensure that the high-risk artificial intelligence system is not causing algorithmic discrimination.
(1) Except as provided in subsection (6) of this section, a deployer that deploys a high-risk artificial intelligence system on or after July 1, 2027, or a third party contracted by the deployer for such purposes, shall complete an impact assessment for: (a) The high-risk artificial intelligence system; and (b) A deployed high-risk artificial intelligence system no later than 90 days after any intentional and substantial modification to such high-risk artificial intelligence system is made available. (2) Each impact assessment completed pursuant to this section must include, at a minimum, and to the extent reasonably known by, or available to, the deployer: (a) A statement by the deployer disclosing the purpose, intended use cases and deployment context of, and benefits afforded by, the high-risk artificial intelligence system; (b) An analysis of whether the deployment of the high-risk artificial intelligence system poses any known or reasonably foreseeable risks of algorithmic discrimination and, if so, the nature of such algorithmic discrimination and the steps that have been taken to mitigate such risks; (c) A description of the following: (i) The categories of data the high-risk artificial intelligence system processes as inputs; (ii) The outputs the high-risk artificial intelligence system produces; (iii) Any metrics used to evaluate the performance and known limitations of the high-risk artificial intelligence system; (iv) A description of any transparency measures taken concerning the high-risk artificial intelligence system, such as any measures taken to disclose to a consumer that such high-risk artificial intelligence system is in use when such high-risk artificial intelligence system is in use; and (v) A description of the postdeployment monitoring and user safeguards provided concerning such high-risk artificial intelligence system, such as the oversight process established by the deployer to address issues arising from deployment of such high-risk artificial intelligence system. (3) In addition to the information required under subsection (2)(c) of this section, each impact assessment completed following an intentional and substantial modification made to a high-risk artificial intelligence system on or after July 1, 2027, must include a statement disclosing the extent to which the high-risk artificial intelligence system was used in a manner that was consistent with, or varied from, the developer's intended uses of such high-risk artificial intelligence system. (4) A single impact assessment may address a comparable set of high-risk artificial intelligence systems deployed by a deployer. (5) If a deployer, or a third party contracted by the deployer, completes an impact assessment for the purpose of complying with another applicable law or regulation, such impact assessment satisfies the requirements established in this section if such impact assessment is reasonably similar in scope and effect to the impact assessment that would otherwise be completed pursuant to this subsection. (6) A deployer shall maintain the most recently completed impact assessment for a high-risk artificial intelligence system as required under this section, relevant records supporting the impact assessment, and prior impact assessments, if any, for a period of at least three years following the final deployment of the high-risk artificial intelligence system. 
(7) Nothing in this section may be construed to require a deployer to disclose any trade secret, or other confidential or proprietary information.
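For teams building compliance tooling, the minimum contents enumerated in subsection (2) above map naturally onto a structured record. The Python sketch below is one hypothetical way to encode those elements; the statute prescribes content, not format, and every class and field name here is our own illustrative choice.

```python
# Hypothetical schema mirroring the impact-assessment contents in
# subsection (2). Field names are illustrative only; the statute
# prescribes what must be covered, not how records are structured.
from dataclasses import dataclass, field


@dataclass
class ImpactAssessment:
    purpose: str                      # (2)(a) purpose of the system
    intended_use_cases: list[str]     # (2)(a) intended use cases
    deployment_context: str           # (2)(a) deployment context
    benefits: str                     # (2)(a) benefits afforded
    discrimination_risks: list[str]   # (2)(b) known/foreseeable risks
    mitigation_steps: list[str]       # (2)(b) steps taken to mitigate
    input_data_categories: list[str]  # (2)(c)(i) input data categories
    outputs: list[str]                # (2)(c)(ii) outputs produced
    performance_metrics: dict[str, float] = field(default_factory=dict)  # (2)(c)(iii)
    known_limitations: list[str] = field(default_factory=list)           # (2)(c)(iii)
    transparency_measures: list[str] = field(default_factory=list)       # (2)(c)(iv)
    monitoring_and_safeguards: list[str] = field(default_factory=list)   # (2)(c)(v)
    # (3) required only after an intentional and substantial modification:
    use_consistency_statement: str | None = None
```

Under subsection (4), a single such record could cover a comparable set of deployed systems, and under subsection (6) the record and its supporting materials must be retained for at least three years after final deployment.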
(1) The requirements in section 5 (1) through (3) of this act and section 3(2) of this act do not apply to a deployer if, at the time the deployer deploys a high-risk artificial intelligence system and at all times while the high-risk artificial intelligence system is deployed:
(a) The deployer: (i) Employs fewer than 50 full-time equivalent employees; and (ii) Does not use the deployer's own data to train the high-risk artificial intelligence system;
(b) The high-risk artificial intelligence system: (i) Is used for the intended uses that are disclosed by the deployer; and (ii) Continues learning based on data derived from sources other than the deployer's own data; and
(c) The deployer makes available to consumers any impact assessment that: (i) The developer of the high-risk artificial intelligence system has completed and provided to the deployer; and (ii) Includes information that is substantially similar to the information in the impact assessment required under section 5 of this act.
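The exemption in subsection (1) above is a conjunction of deployer-level, system-level, and disclosure conditions that must hold at deployment and at all times thereafter. Purely as an illustration of that logic, and with all parameter names being our own (eligibility is ultimately a fact-specific legal question), a qualification check might look like this:

```python
# Hypothetical check of the small-deployer exemption conditions in
# subsection (1). Because the conditions must hold "at all times while
# the high-risk artificial intelligence system is deployed," a real
# system would re-evaluate this check continuously, not once.
def exemption_applies(
    fte_count: int,
    trains_on_own_data: bool,
    used_only_for_disclosed_intended_uses: bool,
    learns_only_from_third_party_data: bool,
    developer_assessment_shared_with_consumers: bool,
) -> bool:
    return (
        fte_count < 50                                  # (a)(i)
        and not trains_on_own_data                      # (a)(ii)
        and used_only_for_disclosed_intended_uses       # (b)(i)
        and learns_only_from_third_party_data           # (b)(ii)
        and developer_assessment_shared_with_consumers  # (c)
    )
```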
(1) A developer of a high-risk artificial intelligence system shall use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination arising from the intended and contracted uses of the high-risk artificial intelligence system. In a civil action brought against a developer pursuant to this chapter, there is a rebuttable presumption that a developer of a high-risk artificial intelligence system used reasonable care as required by this subsection (1) if the developer complied with the requirements of this section.
(1) A deployer of a high-risk artificial intelligence system shall use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination. In a civil action brought against a deployer pursuant to this chapter, there is a rebuttable presumption that a deployer of a high-risk artificial intelligence system used reasonable care as required by this subsection (1) if the deployer complied with the provisions of this section.
(3)(a) Except as provided in (c) of this subsection (3), a deployer may not deploy or use a high-risk artificial intelligence system to make a consequential decision unless the deployer has completed an impact assessment for such high-risk artificial intelligence system. The deployer shall complete an impact assessment for a high-risk artificial intelligence system before the deployer initially deploys such high-risk artificial intelligence system and before a significant update to such high-risk artificial intelligence system is used to make a consequential decision.
(b) An impact assessment completed pursuant to (a) of this subsection (3) must include, at a minimum:
(i) A statement by the deployer disclosing the purpose, intended use cases, and deployment context of, and benefits afforded by, the high-risk artificial intelligence system;
(ii) A statement by the deployer disclosing whether the deployment or use of the high-risk artificial intelligence system poses any known or reasonably foreseeable risk of algorithmic discrimination and, if so, the nature of such algorithmic discrimination and the steps that have been taken, to the extent feasible, to mitigate such risk;
(iii) For each postdeployment impact assessment completed pursuant to this section, whether the intended use cases of the high-risk artificial intelligence system as updated were consistent with, or varied from, the developer's intended uses of such high-risk artificial intelligence system;
(iv) A description of the categories of data the high-risk artificial intelligence system processes as inputs and the outputs such high-risk artificial intelligence system produces;
(v) If the deployer used data to customize the high-risk artificial intelligence system, an overview of the categories of data the deployer used to customize such high-risk artificial intelligence system;
(vi) A list of any metrics used to evaluate the performance and known limitations of the high-risk artificial intelligence system;
(vii) A description of any transparency measures taken concerning the high-risk artificial intelligence system, including any measures taken to disclose to a consumer that such high-risk artificial intelligence system is in use when such high-risk artificial intelligence system is in use;
(viii) A description of any postdeployment monitoring performed and user safeguards provided concerning such high-risk artificial intelligence system, including any oversight process established by the deployer to address issues arising from deployment or use of such high-risk artificial intelligence system as such issues arise; and
(ix) An analysis of such high-risk artificial intelligence system's validity and reliability in accordance with standard industry practices.
(c)(i) A single impact assessment may address a comparable set of high-risk artificial intelligence systems deployed or used by a deployer. (ii) If a deployer completes an impact assessment for the purpose of complying with another applicable law or regulation, such impact assessment satisfies the relevant requirements established in this section if such impact assessment is reasonably similar in scope and effect to the impact assessment that would otherwise be completed pursuant to this section. (iii) A deployer that completes an impact assessment pursuant to this section shall maintain such impact assessment and all records concerning the impact assessment for three years.
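Item (b)(ix) above requires a validity-and-reliability analysis "in accordance with standard industry practices," and item (b)(ii) requires a discrimination-risk analysis. One widely used screening heuristic for the latter compares group selection rates, flagging any group selected at less than four fifths of the highest group's rate. The statute does not mandate this or any particular metric, so the sketch below (all names ours) is only one plausible ingredient of such an analysis, not a complete validity study.

```python
# Hypothetical adverse-impact screen: compares each group's selection
# rate to the highest group's rate. A ratio below 0.8 (the common
# "four-fifths" heuristic) flags the group for closer statistical
# review. This is one screening metric, not a full validity analysis.
def adverse_impact_ratios(
    selected: dict[str, int],    # group -> number selected
    applicants: dict[str, int],  # group -> number of applicants
    threshold: float = 0.8,
) -> dict[str, tuple[float, bool]]:
    rates = {g: selected[g] / applicants[g] for g in applicants if applicants[g]}
    best = max(rates.values())
    # Returns (impact ratio, flagged?) for each group.
    return {g: (r / best, r / best < threshold) for g, r in rates.items()}


# Example: group "B" is selected at half the rate of group "A" (flagged).
print(adverse_impact_ratios({"A": 50, "B": 20}, {"A": 100, "B": 80}))
# {'A': (1.0, False), 'B': (0.5, True)}
```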
Throughout the period of time that a high-risk artificial intelligence system is deployed and for a period of at least three years following the final deployment of the high-risk artificial intelligence system, the deployer shall retain all records concerning each impact assessment conducted on the high-risk artificial intelligence system, including all raw data used to evaluate the performance and known limitations of such system.
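Because the retention clock runs from final deployment, a records system needs the final deployment date to compute when disposal first becomes permissible. A trivial sketch, assuming only the statutory three-year floor (the helper name and the date handling are ours):

```python
# Records must be kept while the system is deployed and for at least
# three years after final deployment. Illustrative helper only.
from datetime import date


def earliest_disposal_date(final_deployment: date) -> date:
    try:
        return final_deployment.replace(year=final_deployment.year + 3)
    except ValueError:  # Feb 29 final deployment, non-leap target year
        return final_deployment.replace(year=final_deployment.year + 3, day=28)


print(earliest_disposal_date(date(2027, 7, 1)))  # 2030-07-01
```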
(1)(a) Beginning July 1, 2027, each deployer of a high-risk artificial intelligence system must use industry-standard means to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination.
(b) In any enforcement action brought on or after July 1, 2027, by the attorney general pursuant to section 10 of this act, there is a rebuttable presumption that a deployer of a high-risk artificial intelligence system used reasonable care as required under this section if the deployer complied with this chapter.
(2)(a) By July 1, 2027, and at least annually thereafter, a deployer or third party contracted by the deployer shall review the deployment of each high-risk artificial intelligence system deployed by the deployer to ensure that the high-risk artificial intelligence system is not causing algorithmic discrimination.
(b) If a deployer subsequently discovers that the high-risk artificial intelligence system has caused algorithmic discrimination, the deployer, without unreasonable delay, but no later than 90 days after the date of the discovery, shall send to the attorney general, in a form and manner prescribed by the attorney general, a notice disclosing the discovery.
(3) Nothing in this section may be construed to require a deployer to disclose any trade secret, or other confidential or proprietary information.
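Subsection (2) above pairs an annual review with a 90-day notice clock that starts on the date of discovery. A hypothetical skeleton of that workflow follows; the annual cadence and the 90-day deadline come from the statute, while the function names and the review's pass/fail encoding are our assumptions (the attorney general prescribes the actual form and manner of the notice).

```python
# Hypothetical annual-review skeleton for subsection (2): screen each
# deployed system; for any discovered algorithmic discrimination, the
# notice to the attorney general is due no later than 90 days after the
# date of discovery, and in any case "without unreasonable delay."
from datetime import date, timedelta


def notice_deadline(discovery: date) -> date:
    return discovery + timedelta(days=90)


def annual_review(findings: dict[str, bool]) -> list[tuple[str, date]]:
    """findings maps system name -> whether the review found that the
    system has caused algorithmic discrimination. Returns the
    (system, AG-notice deadline) pairs for each discovery."""
    today = date.today()
    return [(name, notice_deadline(today)) for name, found in findings.items() if found]


# Example: one system flagged during an annual review cycle.
print(annual_review({"resume-screener-v2": True, "scheduler-v1": False}))
```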