AI systems used in high-stakes contexts must be tested and formally assessed for discriminatory impact across protected characteristics before deployment. Results must be documented and retained. Some jurisdictions require submission to regulators; others require independent third-party audits with public disclosure of results.
(a) (1) For a high-risk automated decision system made publicly available for use on or after January 1, 2026, a developer shall perform an impact assessment on the high-risk automated decision system before making the high-risk automated decision system publicly available for use. (2) For a high-risk automated decision system first made publicly available for use before January 1, 2026, a developer shall perform an impact assessment if the developer makes a substantial modification to the high-risk automated decision system. (c) (2) An impact assessment prepared pursuant to this section shall include all of the following: (A) A statement of the purpose of the high-risk automated decision system and its intended benefits, intended uses, and intended deployment contexts. (B) A description of the high-risk automated decision system's intended outputs. (C) A summary of the types of data intended to be used as inputs to the high-risk automated decision system and any processing of those data inputs recommended to ensure the intended functioning of the high-risk automated decision system. (D) A summary of reasonably foreseeable potential disproportionate or unjustified impacts on a protected classification from the intended use by deployers of the high-risk automated decision system. (E) A developer's impact assessment shall also include both of the following: (i) A description of safeguards implemented or other measures taken by the developer to mitigate and guard against risks known to the developer of algorithmic discrimination arising from the use of the high-risk automated decision system. (ii) A description of how the high-risk automated decision system can be monitored by a deployer for risks of algorithmic discrimination known to the developer.
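For developers building compliance tooling, the required elements in subdivision (c)(2) map naturally onto a structured record. The sketch below is a hypothetical Python shape for such a record; the class and field names are illustrative, not statutory terms, and a completeness check like this is only a starting point, not evidence of legal compliance.

```python
from dataclasses import dataclass, field

# Hypothetical record mirroring the developer impact-assessment elements
# in subdivision (c)(2); names are illustrative, not statutory terms.
@dataclass
class DeveloperImpactAssessment:
    purpose: str                        # (A) purpose, intended benefits, uses, contexts
    intended_outputs: str               # (B) intended outputs
    input_data_summary: str             # (C) input data types and recommended processing
    foreseeable_impacts: list[str] = field(default_factory=list)  # (D) potential impacts
    safeguards: list[str] = field(default_factory=list)           # (E)(i) mitigations
    deployer_monitoring: str = ""       # (E)(ii) how deployers can monitor

    def is_complete(self) -> bool:
        """True when every required narrative element is non-empty."""
        return all([self.purpose, self.intended_outputs,
                    self.input_data_summary, self.deployer_monitoring])
```

A record left with an empty monitoring description, for example, would fail the completeness check until element (E)(ii) is filled in.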
(b) (1) Except as provided in paragraph (2), for a high-risk automated decision system first deployed after January 1, 2026, a deployer shall perform an impact assessment within two years of deploying the high-risk automated decision system. (2) A state agency that is a deployer may opt out of performing an impact assessment if the state agency uses the automated decision system only for its intended use as determined by the developer and all of the following requirements are met: (A) The state agency does not make a substantial modification to the high-risk automated decision system. (B) The developer of the high-risk automated decision system is in compliance with Section 10285.8 of the Public Contract Code and subdivision (d). (C) The state agency does not have a reasonable basis to believe that deployment of the high-risk automated decision system as intended by the developer is likely to result in algorithmic discrimination. (D) The state agency is in compliance with Section 22756.3. (c) (2) An impact assessment prepared pursuant to this section shall include all of the following: (F) A statement of the extent to which the deployer's use of the high-risk automated decision system is consistent with, or varies from, the developer's statement of the high-risk automated decision system's purpose and intended benefits, intended uses, and intended deployment contexts. (G) A description of safeguards implemented or other measures taken to mitigate and guard against any known risks to the deployer of discrimination arising from the high-risk automated decision system. (H) A description of how the high-risk automated decision system has been, and will be, monitored and evaluated.
(a) Except as provided in subdivision (b), a deployer or developer shall not deploy or make available for deployment a high-risk automated decision system if the impact assessment performed pursuant to this chapter determines that the high-risk automated decision system is likely to result in algorithmic discrimination. (b) (1) A deployer or developer may deploy or make available for deployment a high-risk automated decision system if the impact assessment performed pursuant to this chapter determines that the high-risk automated decision system will result in algorithmic discrimination if the deployer or developer implements safeguards to mitigate the known risks of algorithmic discrimination. (2) A deployer or developer acting under the exception provided by paragraph (1) shall perform an updated impact assessment to verify that the algorithmic discrimination has been mitigated and is not reasonably likely to occur.
(c) (1) Developers of AI models or AI systems, in conjunction with health facilities, clinics, physician's offices, or offices of a group practice, shall test for biased impacts in the outputs produced by the specified AI model or AI system based on the health facility's patient population. (2) Developers shall use an existing testing system designated by the advisory board until the advisory board has developed its standardized testing system described in paragraph (2) of subdivision (b). After the advisory board has developed its testing system, developers may alternatively use the board's testing system.
(c) THE ARTIFICIAL INTELLIGENCE SYSTEM IS NOT USED IN ANY WAY THAT DISCRIMINATES AGAINST INDIVIDUALS IN VIOLATION OF OTHER STATE OR FEDERAL LAWS; (d) THE ARTIFICIAL INTELLIGENCE SYSTEM IS FAIRLY AND EQUITABLY APPLIED, INCLUDING IN ACCORDANCE WITH APPLICABLE REGULATIONS AND GUIDANCE ISSUED BY THE FEDERAL DEPARTMENT OF HEALTH AND HUMAN SERVICES;
(1) On and after June 30, 2026, a developer of a high-risk artificial intelligence system shall use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination arising from the intended and contracted uses of the high-risk artificial intelligence system. In any enforcement action brought on or after June 30, 2026, by the attorney general pursuant to section 6-1-1706, there is a rebuttable presumption that a developer used reasonable care as required under this section if the developer complied with this section and any additional requirements or obligations as set forth in rules adopted by the attorney general pursuant to section 6-1-1707.
(1) On and after June 30, 2026, a deployer of a high-risk artificial intelligence system shall use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination. In any enforcement action brought on or after June 30, 2026, by the attorney general pursuant to section 6-1-1706, there is a rebuttable presumption that a deployer of a high-risk artificial intelligence system used reasonable care as required under this section if the deployer complied with this section and any additional requirements or obligations as set forth in rules adopted by the attorney general pursuant to section 6-1-1707.
(3) (a) Except as provided in subsections (3)(d), (3)(e), and (6) of this section: (I) A deployer, or a third party contracted by the deployer, that deploys a high-risk artificial intelligence system on or after June 30, 2026, shall complete an impact assessment for the high-risk artificial intelligence system; and (II) On and after June 30, 2026, a deployer, or a third party contracted by the deployer, shall complete an impact assessment for a deployed high-risk artificial intelligence system at least annually and within ninety days after any intentional and substantial modification to the high-risk artificial intelligence system is made available.
(c) In addition to the information required under subsection (3)(b) of this section, an impact assessment completed pursuant to this subsection (3) following an intentional and substantial modification to a high-risk artificial intelligence system on or after June 30, 2026, must include a statement disclosing the extent to which the high-risk artificial intelligence system was used in a manner that was consistent with, or varied from, the developer's intended uses of the high-risk artificial intelligence system.
(g) On or before June 30, 2026, and at least annually thereafter, a deployer, or a third party contracted by the deployer, must review the deployment of each high-risk artificial intelligence system deployed by the deployer to ensure that the high-risk artificial intelligence system is not causing algorithmic discrimination.
(a) (1) Prior to deploying an automated employment-related decision process, and annually thereafter, a deployer shall contract with an independent auditor to complete a bias audit. Such bias audit shall be completed not more than one year prior to the date the deployer intends to deploy such automated employment-related decision process. (2) Each bias audit conducted pursuant to this subsection shall: (A) Evaluate the automated employment-related decision process's performance and error rates across relevant subgroups; (B) Assess disparate impact caused by the automated employment-related decision process against protected classes; (C) Examine the sources of data processed by the automated employment-related decision process and the quality of content, decisions, predictions or recommendations generated by the automated employment-related decision process; (D) Evaluate the effects of any thresholds, scoring or ranking criteria utilized by the automated employment-related decision process; and (E) Test for less discriminatory alternatives or adjustments to such automated employment-related decision process. (3) No deployer shall contract with an independent auditor who (A) has a financial or operational interest in the deployer or developer of the automated employment-related decision process, or (B) has not been approved by the Labor Commissioner pursuant to subsection (b) of this section. (b) The Labor Commissioner shall establish and implement an approval process of independent auditors to conduct bias audits pursuant to subsection (a) of this section and shall maintain a registry of independent auditors approved by such process.
(c) Not later than thirty days after completing a bias audit pursuant to subsection (a) of this section, the deployer shall (1) in a form and manner prescribed by the Labor Commissioner, file a bias audit report and a plain-language summary of such report with the commissioner, and (2) publish a plain-language summary of such audit report on the deployer's Internet web site in a conspicuous place accessible to applicants for employment and employees. Such summary shall include (A) the methodology used in such bias audit, (B) the key findings and identified risks found by such bias audit, and (C) any corrective actions taken by the deployer. (d) No automated employment-related decision process shall be deployed or continue to be deployed by a deployer if the most recent bias audit conducted pursuant to subsection (a) of this section identified any disparate impact caused by such automated employment-related decision process, except where the deployer can demonstrate (1) a business necessity, (2) such deployer has implemented corrective actions approved by the Labor Commissioner, and (3) that either (A) no less discriminatory alternative is available, or (B) a less discriminatory alternative has been implemented by the deployer. (e) Each deployer shall maintain records relating to bias audits required pursuant to subsection (a) of this section for a period of not less than five years and shall make such records available to the Labor Commissioner upon request. (f) The Labor Commissioner may adopt regulations, in accordance with the provisions of chapter 54 of the general statutes, necessary to carry out the purposes of this section, including, but not limited to, establishing minimum qualifications for independent auditors and methodologic requirements for bias audits required pursuant to subsection (a) of this section.
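The disparate-impact element of the bias audit (subsection (a)(2)(B) above) does not prescribe a method, but a common starting point in practice is to compare selection rates across subgroups and compute impact ratios against the most-selected group; the EEOC's four-fifths guideline conventionally flags ratios below 0.8. The sketch below is illustrative, not a statutory test, and the 0.8 threshold comes from federal guidance rather than this bill.

```python
def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Per-subgroup selection rate, given {group: (selected, total)}."""
    return {g: selected / total for g, (selected, total) in outcomes.items()}

def impact_ratios(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Each group's selection rate divided by the highest group's rate.
    Ratios below 0.8 are commonly flagged under the four-fifths guideline."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {g: r / top for g, r in rates.items()}

# Example: group_b's ratio is 0.24 / 0.40 = 0.6, below the 0.8 guideline.
ratios = impact_ratios({"group_a": (40, 100), "group_b": (24, 100)})
```

An auditor would treat a low ratio as a trigger for the deeper analysis the statute requires — examining data sources, thresholds, and less discriminatory alternatives — rather than as a conclusion in itself.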
(A) For an employer, by the employer or the employer's agent, except in the case of a bona fide occupational qualification or need, to refuse to hire or employ or to bar or to discharge from employment any individual or to discriminate against any individual in compensation or in terms, conditions or privileges of employment because of, or to use an automated employment-related decision process in any manner that has the effect of causing the employer to refuse to hire or employ or to bar or to discharge from employment any individual or to discriminate against any individual in compensation or in terms, conditions or privileges of employment on the basis of, the individual's race, color, religious creed, age, sex, gender identity or expression, marital status, national origin, ancestry, present or past history of mental disability, intellectual disability, learning disability, physical disability, including, but not limited to, blindness, status as a veteran, status as a victim of domestic violence, status as a victim of sexual assault or status as a victim of trafficking in persons. In any action for a discriminatory practice in violation of this subparagraph involving an automated employment-related decision process, the commission or the court shall consider any evidence, or lack of evidence, of anti-bias testing or similar proactive efforts to avoid such discriminatory practice, including, but not limited to, the quality, efficacy, recency and scope of such testing or efforts, the results of such testing or efforts and the response thereto.
(c) Beginning on February 1, 2024, the Department of Administrative Services shall perform ongoing assessments of systems that employ artificial intelligence and are in use by state agencies to ensure that no such system shall result in any unlawful discrimination or disparate impact described in subparagraph (B) of subdivision (1) of subsection (b) of section 2 of this act. The department shall perform such assessment in accordance with the policies and procedures established by the Office of Policy and Management pursuant to subsection (b) of section 2 of this act.
No developer shall sell, distribute, or otherwise make available to deployers an automated decision system that results in algorithmic discrimination.
(1) A developer of an automated decision system shall take steps to address risks of algorithmic discrimination, invalidity, and errors, including, but not limited to, ensuring suitability and representativeness of data sources, implementing data governance measures, testing the automated decision system for disparate impact, and searching for less discriminatory alternative decision methods. Developers shall continue assessing and mitigating the risk of algorithmic discrimination in their automated decision systems so long as such automated decision systems are in use by any deployer. (2) A developer of an automated decision system shall disclose to the Attorney General, in a form and manner prescribed by the Attorney General, and to all known deployers or other developers of the automated decision system, any known or reasonably foreseeable risks of algorithmic discrimination arising from the intended uses of the automated decision system without unreasonable delay but no later than 90 days after the date on which: (A) The developer discovers through the developer's ongoing testing and analysis that the developer's automated decision system has been deployed and has caused or is reasonably likely to have caused algorithmic discrimination; or (B) The developer receives from a deployer a credible report that the automated decision system has been deployed and has caused algorithmic discrimination.
No deployer of an automated decision system shall use an automated decision system in a manner that results in algorithmic discrimination.
(e) Except as otherwise provided for in this chapter: (1) A deployer, or a third party contracted by the deployer, that deploys an automated decision system shall complete an impact assessment for the automated decision system; and (2) A deployer, or a third party contracted by the deployer, shall complete an impact assessment for a deployed automated decision system at least annually and within 90 days after any intentional and substantial modification to the automated decision system is made available. (f) An impact assessment completed pursuant to subsection (e) of this Code section shall include, at a minimum, and to the extent reasonably known by or available to the deployer: (1) A statement by the deployer disclosing the purpose, intended use cases, and deployment context of, and benefits afforded by, the automated decision system; (2) An analysis of whether the deployment of the automated decision system poses any known or reasonably foreseeable risks of: (A) Algorithmic discrimination and, if so, the nature of the algorithmic discrimination and the steps that have been taken to mitigate the risks; (B) Limits on accessibility for individuals who are pregnant, breastfeeding, or disabled, and, if so, what reasonable accommodations the deployer may provide that would mitigate any such limitations on accessibility; (C) Any violation of state or federal labor laws, including laws pertaining to wages, occupational health and safety, and the right to organize; or (D) Any physical or other intrusion upon the solitude or seclusion, or the private affairs or concerns, of consumers if such intrusion: (i) Would be offensive to a reasonable person; and (ii) May be redressed under the laws of this state; (3) A description of the categories of data the automated decision system processes as inputs and the outputs the automated decision system produces; (4) If the deployer used data to customize the automated decision system, an overview of the categories of data the 
deployer used to customize the automated decision system; (5) An analysis of the automated decision system's validity and reliability in accordance with contemporary social science standards, and a description of any metrics used to evaluate the performance and known limitations of the automated decision system; (6) A description of any transparency measures taken concerning the automated decision system, including any measures taken to disclose to a consumer that the automated decision system is in use when the automated decision system is in use; (7) A description of the post-deployment monitoring and user safeguards provided concerning the automated decision system, including the oversight, use, and learning process established by the deployer to address issues arising from the deployment of the automated decision system; and (8) When such impact assessment is completed following an intentional and substantial modification to an automated decision system, a statement disclosing the extent to which the automated decision system was used in a manner that was consistent with, or varied from, the developer's intended uses of the automated decision system. (g) If the analysis required by paragraph (2) of subsection (f) of this Code section reveals a risk of algorithmic discrimination, the deployer shall not deploy the automated decision system until the developer or deployer takes reasonable steps to search for and implement less discriminatory alternative decision methods. (h) A single impact assessment may address a comparable set of automated decision systems deployed by a deployer. 
(i) If a deployer, or a third party contracted by the deployer, completes an impact assessment for the purpose of complying with another applicable law or regulation, the impact assessment shall satisfy the requirements established in this Code section if the impact assessment is reasonably similar in scope and effect to the impact assessment that would otherwise be completed pursuant to this Code section. (j) A deployer shall maintain the most recently completed impact assessment for an automated decision system, all records concerning each impact assessment, and all prior impact assessments, if any, throughout the period of time that the automated decision system is deployed and for at least three years following the final deployment of the automated decision system.
At least annually a deployer, or a third party contracted by the deployer, shall review the deployment of each automated decision system deployed by the deployer to ensure that the automated decision system is not causing algorithmic discrimination.
Deployers shall publish on their public websites all impact assessments completed within the preceding three years in a form and manner prescribed by the Attorney General.
(a) An employer seeking to use or apply an automated decision-making system permitted under Section 10 shall conduct, at least 30 days prior to implementation of the automated decision-making system, an initial impact assessment bearing the signature of: (1) one or more individuals responsible for meaningful human review of the system; and (2) an independent auditor. A person shall not be an independent auditor under this subsection if, at any point in the 5 years preceding the impact assessment, that person: (i) was involved in using, developing, offering, licensing, or deploying the automated decision-making system under review; (ii) had an employment relationship with a developer or deployer that uses, offers, or licenses the automated decision-making system under review; or (iii) had a direct or material indirect financial interest in a developer or deployer that uses, offers, or licenses the automated decision-making system under review. (b) Following the initial impact assessment, additional impact assessments shall be conducted at least once every 2 years and prior to any material changes to the automated decision-making system.
Each impact assessment shall include, in plain language: (1) a description of the objectives of the automated decision-making system; (2) an evaluation of the system's ability to achieve those objectives; (3) a description and evaluation of the algorithms, computational models, and artificial intelligence tools used, including: (A) a summary of underlying algorithms and artificial intelligence tools; and (B) a description of the design and training to be used; (4) testing for: (A) disparate impact or discrimination based on protected characteristics, including, but not limited to discriminating against, persons based on their race, color, religious creed, national origin, sex, disability or perceived disability, gender identity, sexual orientation, genetic information, pregnancy or a condition related to pregnancy, ancestry, or status as a veteran and any actions to mitigate any impacts; (B) accessibility limitations for persons with disabilities; (C) privacy and job quality impacts, including wages, hours, and conditions and safeguards; (D) cybersecurity vulnerabilities and safeguards; (E) public health or safety risks; (F) foreseeable misuse and safeguards; and (G) use, storage, and control of sensitive or personal data; and (5) a notification mechanism for employees impacted by the use of the automated decision-making system.
(c) If an impact assessment finds that an automated decision-making system produces discriminatory, biased, or inaccurate outcomes or fails to meet or negatively impacts any of the measures described in subsection (b) of Section 10, the employer shall immediately cease any use or function of that system and of any information produced by it, and shall take all steps necessary to remedy the discriminatory, biased, or inaccurate outcomes produced by the automated decision-making system.
(a) On or before January 1, 2027, and annually thereafter, a deployer of an automated decision tool shall perform an impact assessment for any automated decision tool the deployer uses that includes all of the following: (1) a statement of the purpose of the automated decision tool and its intended benefits, uses, and deployment contexts; (2) a description of the automated decision tool's outputs and how they are used to make, or be a controlling factor in making, a consequential decision; (3) a summary of the type of data collected from natural persons and processed by the automated decision tool when it is used to make, or be a controlling factor in making, a consequential decision; (4) an analysis of potential adverse impacts on the basis of sex, race, color, ethnicity, religion, age, national origin, limited English proficiency, disability, veteran status, or genetic information from the deployer's use of the automated decision tool; (5) a description of the safeguards implemented, or that will be implemented, by the deployer to address any reasonably foreseeable risks of algorithmic discrimination arising from the use of the automated decision tool known to the deployer at the time of the impact assessment; (6) a description of how the automated decision tool will be used by a natural person, or monitored when it is used, to make, or be a controlling factor in making, a consequential decision; and (7) a description of how the automated decision tool has been or will be evaluated for validity or relevance. (b) A deployer shall, in addition to the impact assessment required by subsection (a), perform, as soon as feasible, an impact assessment with respect to any significant update. (c) This Section does not apply to a deployer with fewer than 25 employees unless, as of the end of the prior calendar year, the deployer deployed an automated decision tool that impacted more than 999 people per year.
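The small-deployer carve-out in subsection (c) reduces to a two-part check: fewer than 25 employees exempts a deployer unless a deployed tool impacted more than 999 people in the prior calendar year. A minimal sketch of that logic (the function name is ours, not the bill's):

```python
def impact_assessment_required(employee_count: int,
                               people_impacted_prior_year: int) -> bool:
    """Subsection (c) applicability: deployers with fewer than 25 employees
    are exempt unless a deployed tool impacted more than 999 people per year."""
    if employee_count >= 25:
        return True
    return people_impacted_prior_year > 999
```

So a 10-employee deployer whose tool touched 500 people is exempt, but the same deployer crossing 1,000 people impacted falls back within the requirement.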
(a) Within 60 days after completing an impact assessment required by this Act, a deployer shall provide the impact assessment to the Attorney General. (b) A deployer who knowingly violates this Section shall be liable for an administrative fine of not more than $10,000 per violation in an administrative enforcement action brought by the Attorney General. Each day on which an automated decision tool is used for which an impact assessment has not been submitted as required under this Section shall give rise to a distinct violation of this Section. (c) The Attorney General may share impact assessments with other State entities as appropriate.
(a) A deployer shall not use an automated decision tool that results in algorithmic discrimination. (b) On and after January 1, 2028, a person may bring a civil action against a deployer for violation of this Section. In an action brought under this subsection, the plaintiff shall have the burden of proof to demonstrate that the deployer's use of the automated decision tool resulted in algorithmic discrimination that caused actual harm to the person bringing the civil action. (c) In addition to any other remedy at law, a deployer that violates this Section shall be liable to a prevailing plaintiff for any of the following: (1) compensatory damages; (2) declaratory relief; and (3) reasonable attorney's fees and costs.
An employer may not: (2) use an automated decision system output in making an employment related decision with respect to a covered individual unless: (A) the automated decision system used to generate the automated decision system output has had predeployment testing and validation with respect to: (i) the efficacy of the system; (ii) the compliance of the system with applicable employment discrimination laws, including Title VII of the Civil Rights Act of 1964 (42 U.S.C. 2000e et seq.), the Age Discrimination in Employment Act of 1967 (29 U.S.C. 621 et seq.), Title I of the Americans with Disabilities Act of 1990 (42 U.S.C. 12111 et seq.), Title II of the Genetic Information Nondiscrimination Act of 2008 (42 U.S.C. 2000ff et seq.), Section 6(d) of the Fair Labor Standards Act of 1938 (29 U.S.C. 206(d)), Sections 501 and 505 of the Rehabilitation Act of 1973 (29 U.S.C. 791 and 29 U.S.C. 793), and the Pregnant Workers Fairness Act (42 U.S.C. 2000gg); (iii) the lack of any potential discriminatory impact of the system, including discriminatory impact based on race, color, religion, sex (including pregnancy, sexual orientation, or gender identity), national origin, age, or disability, and genetic information (including family medical history); and (iv) the compliance of the system with the Artificial Intelligence Risk Management Framework released by the National Institute of Standards and Technology on January 26, 2023, or a successor framework; (B) the automated decision system is, not less than annually, independently tested for discriminatory impact described in clause (A)(iii) or potential biases and the results of the test are made publicly available;
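The predeployment testing in clauses (A)(i) and (A)(iii), and the annual independent testing in clause (B), are in practice usually grounded in per-subgroup error rates. One conventional check — an illustration, not a method mandated by this text — compares false-positive and false-negative rates across subgroups:

```python
from collections import defaultdict

def subgroup_error_rates(records):
    """records: iterable of (group, y_true, y_pred) with 0/1 labels.
    Returns {group: (false_positive_rate, false_negative_rate)}."""
    fp = defaultdict(int); fn = defaultdict(int)
    neg = defaultdict(int); pos = defaultdict(int)
    for group, y_true, y_pred in records:
        if y_true == 1:
            pos[group] += 1
            if y_pred == 0:
                fn[group] += 1      # qualified candidate screened out
        else:
            neg[group] += 1
            if y_pred == 1:
                fp[group] += 1      # unqualified candidate advanced
    return {g: (fp[g] / neg[g] if neg[g] else 0.0,
                fn[g] / pos[g] if pos[g] else 0.0)
            for g in set(pos) | set(neg)}
```

Large gaps between groups' false-negative rates are exactly the kind of discriminatory impact clause (A)(iii) targets, and publishing these per-group figures is one natural way to satisfy the public-disclosure requirement in clause (B).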
Each health insurer and utilization review organization shall ensure that the artificial intelligence, algorithm or other software tool used to review and approve, modify, delay, or deny requests by providers: (C) does not supplant healthcare provider decision-making; (D) does not discriminate, directly or indirectly, against enrollees in violation of state or federal law; (E) is fairly and equitably applied, in accordance with any applicable regulations or guidance issued by the United States department of health and human services; (H) does not directly or indirectly cause harm to the enrollee.
(a) Duty of Care: Developers must use reasonable care to identify, mitigate, and disclose risks of algorithmic discrimination.
(b) Impact Assessments: (1) Deployers must complete an annual impact assessment for each high-risk AI system, including: (i) The purpose and intended use of the system; (ii) Data categories used and outputs generated; (iii) Potential risks of discrimination and mitigation measures. (2) Impact assessments must be updated after any substantial modification to the system. State-provided templates for these assessments will be made available to reduce compliance burdens.
(a) Not later than 6 months after the effective date of this act, a developer of a high-risk artificial intelligence system shall use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination arising from the intended and contracted uses of the high-risk artificial intelligence system. In any enforcement action brought on or after the date that is 6 months after the effective date of this act by the attorney general pursuant to section 6, there is a rebuttable presumption that a developer used reasonable care as required under this section if the developer complied with this section and any additional requirements or obligations as set forth in rules promulgated by the attorney general pursuant to section 7. (b) Not later than 6 months after the effective date of this act, and except as provided in subsection (f) of this section, a developer of a high-risk artificial intelligence system shall make available to the deployer or other developer of the high-risk artificial intelligence system: (1) a general statement describing the reasonably foreseeable uses and known harmful or inappropriate uses of the high-risk artificial intelligence system; (2) documentation disclosing: (i) high-level summaries of the type of data used to train the high-risk artificial intelligence system; (ii) known or reasonably foreseeable limitations of the high-risk artificial intelligence system, including known or reasonably foreseeable risks of algorithmic discrimination arising from the intended uses of the high-risk artificial intelligence system; (iii) the purpose of the high-risk artificial intelligence system; (iv) the intended benefits and uses of the high-risk artificial intelligence system; and (v) all other information necessary to allow the deployer to comply with the requirements of section 3; (3) documentation describing: (i) how the high-risk artificial intelligence system was evaluated for performance and mitigation of algorithmic
discrimination before the high-risk artificial intelligence system was offered, sold, leased, licensed, given, or otherwise made available to the deployer; (ii) the data governance measures used to cover the training datasets and the measures used to examine the suitability of data sources, possible biases, and appropriate mitigation; (iii) the intended outputs of the high-risk artificial intelligence system; (iv) the measures the developer has taken to mitigate known or reasonably foreseeable risks of algorithmic discrimination that may arise from the reasonably foreseeable deployment of the high-risk artificial intelligence system; and (v) how the high-risk artificial intelligence system should be used, not be used, and be monitored by an individual when the high-risk artificial intelligence system is used to make, or is a substantial factor in making, a consequential decision; and (4) any additional documentation that is reasonably necessary to assist the deployer in understanding the outputs and monitor the performance of the high-risk artificial intelligence system for risks of algorithmic discrimination. (f) nothing in subsections (b) to (e) of this section requires a developer to disclose a trade secret, information protected from disclosure by state or federal law, or information that would create a security risk to the developer.
(a) Not later than 6 months after the effective date of this act, a deployer of a high-risk artificial intelligence system shall use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination. In any enforcement action brought on or after the date 6 months after the effective date of this act by the attorney general pursuant to section 6, there is a rebuttable presumption that a deployer of a high-risk artificial intelligence system used reasonable care as required under this section if the deployer complied with this section and any additional requirements or obligations as set forth in rules promulgated by the attorney general pursuant to section 7.
(c) (1) Except as provided in subsections (c)(4), (c)(5), and (f) of this section: (i) a deployer, or a third party contracted by the deployer, that deploys a high-risk artificial intelligence system not later than 6 months after the effective date of this act shall complete an impact assessment for the high-risk artificial intelligence system; and (ii) not later than 6 months after the effective date of this act, a deployer, or a third party contracted by the deployer, shall complete an impact assessment for a deployed high-risk artificial intelligence system at least annually and within ninety days after any intentional and substantial modification to the high-risk artificial intelligence system is made available. (2) An impact assessment completed pursuant to this subsection (c) must include, at a minimum, and to the extent reasonably known by or available to the deployer: (i) a statement by the deployer disclosing the purpose, intended use cases, and deployment context of, and benefits afforded by, the high-risk artificial intelligence system; (ii) an analysis of whether the deployment of the high-risk artificial intelligence system poses any known or reasonably foreseeable risks of algorithmic discrimination and, if so, the nature of the algorithmic discrimination and the steps that have been taken to mitigate the risks; (iii) a description of the categories of data the high-risk artificial intelligence system processes as inputs and the outputs the high-risk artificial intelligence system produces; (iv) if the deployer used data to customize the high-risk artificial intelligence system, an overview of the categories of data the deployer used to customize the high-risk artificial intelligence system; (v) any metrics used to evaluate the performance and known limitations of the high-risk artificial intelligence system; (vi) a description of any transparency measures taken concerning the high-risk artificial intelligence system, including any measures taken to 
disclose to a consumer that the high-risk artificial intelligence system is in use when the high-risk artificial intelligence system is in use; and (vii) a description of the post-deployment monitoring and user safeguards provided concerning the high-risk artificial intelligence system, including the oversight, use, and learning process established by the deployer to address issues arising from the deployment of the high-risk artificial intelligence system. (3) In addition to the information required under subsection (c)(2) of this section, an impact assessment completed pursuant to this subsection (c) following an intentional and substantial modification to a high-risk artificial intelligence system not later than 6 months after the effective date of this act must include a statement disclosing the extent to which the high-risk artificial intelligence system was used in a manner that was consistent with, or varied from, the developer's intended uses of the high-risk artificial intelligence system. (4) A single impact assessment may address a comparable set of high-risk artificial intelligence systems deployed by a deployer. (5) If a deployer, or a third party contracted by the deployer, completes an impact assessment for the purpose of complying with another applicable law or regulation, the impact assessment satisfies the requirements established in this subsection (c) if the impact assessment is reasonably similar in scope and effect to the impact assessment that would otherwise be completed pursuant to this subsection (c). (6) A deployer shall maintain the most recently completed impact assessment for a high-risk artificial intelligence system as required under this subsection (c), all records concerning each impact assessment, and all prior impact assessments, if any, for at least three years following the final deployment of the high-risk artificial intelligence system. 
(7) Not later than 6 months after the effective date of this act, and at least annually thereafter, a deployer, or a third party contracted by the deployer, must review the deployment of each high-risk artificial intelligence system deployed by the deployer to ensure that the high-risk artificial intelligence system is not causing algorithmic discrimination.
(j) It shall be unlawful for an employer to use electronic monitoring, alone or in conjunction with an automated employment decision system, unless the employer's proposed use of electronic monitoring has been the subject of an impact assessment. Such impact assessments must: (i) be conducted no more than one year prior to the use of such electronic monitoring, or where the electronic monitoring began before the effective date of this article, within six months of the effective date of this article; (ii) be conducted by an independent and impartial party with no financial or legal conflicts of interest; (iii) evaluate whether the data protection and security practices surrounding the electronic monitoring are consistent with applicable law and cybersecurity industry best practices; (iv) identify which allowable purpose(s) described in this chapter the electronic monitoring is intended to serve; (vi) consider and describe any other ways in which the electronic monitoring could result in a violation of applicable law and, for any finding that a violation of law may occur, any necessary or appropriate steps to prevent such violation of law; and (vii) consider and describe whether the electronic monitoring may negatively impact employees' privacy and job quality, including wages, hours, and working conditions.
(a) It shall be unlawful for an employer to use an automated employment decision tool for an employment decision, alone or in conjunction with electronic monitoring, unless such tool has been the subject of an impact assessment. Impact assessments must: (i) be conducted no more than one year prior to the use of such tool, or where the tool was in use by the employer before the effective date of this article, within six months of the effective date of this article; (ii) be conducted by an independent and impartial party with no financial or legal conflicts of interest; (iii) identify and describe the attributes and modeling techniques that the tool uses to produce outputs; (iv) evaluate whether those attributes and techniques are a scientifically valid means of evaluating an employee's or candidate's performance or ability to perform the essential functions of a role, and whether those attributes may function as a proxy for belonging to a protected class under chapter 151B or any other applicable law; (v) consider, identify, and describe any disparities in the data used to train or develop the tool and describe how those disparities may result in a disparate impact on persons based on their race, color, religious creed, national origin, sex, gender identity, sexual orientation, genetic information, pregnancy or a condition related to said pregnancy including, but not limited to, lactation or the need to express breast milk for a nursing child, ancestry or status as a veteran, and what actions may be taken by the employer or vendor of the tool to reduce or remedy any disparate impact; (vi) consider, identify, and describe any outputs produced by the tool that may result in a disparate impact on persons based on their race, color, religious creed, national origin, sex, gender identity, sexual orientation, genetic information, pregnancy or a condition related to said pregnancy including, but not limited to, lactation or the need to express breast milk for a nursing child, 
ancestry or status as a veteran, and what actions may be taken by the employer or vendor of the tool to reduce or remedy that disparate impact; (vii) evaluate whether the use of the tool may limit accessibility for persons with disabilities, or for persons with any specific disability, and what actions may be taken by the employer or vendor of the tool to reduce or remedy the concern; (viii) consider and describe potential sources of adverse impact against individuals or groups based on race, color, religious creed, national origin, sex, gender identity, sexual orientation, genetic information, pregnancy or a condition related to said pregnancy including, but not limited to, lactation or the need to express breast milk for a nursing child, ancestry or status as a veteran that may arise after the tool is deployed; (ix) identify and describe any other assessment of risks of discrimination or a disparate impact of the tool on individuals or groups based on race, color, religious creed, national origin, sex, gender identity, sexual orientation, genetic information, pregnancy or a condition related to said pregnancy including, but not limited to, lactation or the need to express breast milk for a nursing child, ancestry or status as a veteran that arise over the course of the impact assessment, and what actions may be taken to reduce or remedy that risk; (x) for any finding of a disparate impact or limit on accessibility, evaluate whether the data set, attribute, or feature of the tool at issue is the least discriminatory method of assessing a candidate's performance or ability to perform job functions; (xi) consider and describe any other ways in which the tool could result in a violation of applicable law and, for any finding that a violation of law may occur, any necessary or appropriate steps to prevent such violation of law; (xii) consider and describe whether use of the tool may negatively impact employees' privacy and job quality, including wages, hours, and 
working conditions; and (xiii) be submitted in its entirety or in an accessible summary form to the department for inclusion in a public registry of such impact assessments within sixty days of completion and distributed to employees who may be subject to the tool. (b) An employer shall conduct or commission subsequent impact assessments each year that the tool is in use to assist or replace employment decisions. Subsequent impact assessments shall comply with the requirements of paragraph (a) of this section, and shall assess and describe any change in the validity or disparate impact of the tool.
(e) If an initial or subsequent impact assessment concludes that a data set, feature, or application of the automated employment decision tool results in a disparate impact on individuals or groups based on race, color, religious creed, national origin, sex, gender identity, sexual orientation, genetic information, pregnancy or a condition related to said pregnancy including, but not limited to, lactation or the need to express breast milk for a nursing child, ancestry or status as a veteran, or unlawfully limits accessibility for persons with disabilities, an employer shall refrain from using the tool until it: (i) takes reasonable and appropriate steps to remedy that disparate impact or limit on accessibility and describes in writing to employees, the auditor, and the department what steps were taken; and (ii) if the employer believes the impact assessment finding of a disparate impact or limit on accessibility is erroneous, or that the steps taken in accordance with subparagraph (i) of this paragraph sufficiently address those findings such that the tool may be lawfully used in accordance with this article, describes in writing to employees, the auditor, and the department how the data set, feature, or application of the tool is the least discriminatory method of assessing an employee's performance or ability to complete essential functions of a position. (f) It shall be unlawful for an independent auditor, vendor, or employer to manipulate, conceal, or misrepresent the results of an impact assessment.
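The employment-tool provisions above repeatedly require assessing whether a data set or output "results in a disparate impact," but none of the quoted texts prescribes a method. A common screening heuristic, used here purely for illustration, is the EEOC "four-fifths rule": flag any group whose selection rate falls below 80 percent of the highest group's rate. A minimal sketch, with hypothetical group labels and counts:

```python
# Illustrative sketch only: the statutes quoted above do not mandate this
# method. Group labels, counts, and the 0.8 threshold are assumptions.

def selection_rates(outcomes):
    """outcomes: dict mapping group -> (selected, total)."""
    return {g: sel / tot for g, (sel, tot) in outcomes.items() if tot > 0}

def adverse_impact_ratios(outcomes):
    """Each group's selection rate as a fraction of the highest rate."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {g: r / top for g, r in rates.items()}

def flag_disparate_impact(outcomes, threshold=0.8):
    """Groups whose impact ratio falls below the four-fifths threshold."""
    return [g for g, ratio in adverse_impact_ratios(outcomes).items()
            if ratio < threshold]

if __name__ == "__main__":
    # Hypothetical counts: (candidates selected, candidates assessed)
    data = {"group_a": (48, 100), "group_b": (30, 100)}
    print(flag_disparate_impact(data))  # group_b: 0.30/0.48 = 0.625 < 0.8
```

The 80 percent threshold is a screening heuristic rather than a legal standard of proof; an assessment under these statutes would pair it with the qualitative analyses the texts require (proxies, training-data disparities, accessibility).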
(D) The artificial intelligence, algorithm, or other software tool does not supplant health care provider decision-making. (E) The use of the artificial intelligence, algorithm, or other software tool does not discriminate, directly or indirectly, against any insured in violation of state or federal law, including but not limited to chapter 151B. (F) The artificial intelligence, algorithm, or other software tool is fairly and equitably applied, including in accordance with any applicable regulations and guidance issued by state and federal agencies.
(5) the use of an artificial intelligence, algorithm, or other software tool does not result in unfair discrimination; (6) an artificial intelligence, algorithm, or other software tool is fairly and equitably applied, including in accordance with any applicable regulations and guidance issued by the federal Department of Health and Human Services;
Determinations derived from the use of artificial intelligence, including algorithms and other software tools, must: (2) Not directly or indirectly discriminate against an enrollee on the basis of race, color, religion, national origin, ancestry, age, sex, gender, gender identity, gender expression, sexual orientation, present or predicted disability, expected length of life, degree of medical dependency, quality of life or other health conditions; (3) Be fairly and equitably applied;
Sec. 9. (1) Before an employer uses an automated decisions tool under section 4 or an electronic monitoring tool under section 5, the employer shall conduct an impact assessment of the tool that meets all of the following requirements: (a) Evaluates the tool's objectives, algorithms, data, cybersecurity vulnerabilities, and potential biases, including, but not limited to, discriminatory outcomes based on race, gender, or disability. (b) Is conducted not more than 1 year before the tool is implemented, or, for a tool already in use on the effective date of this act, not more than 6 months after the effective date of this act. (c) Is conducted by an independent and impartial third party with no financial or legal conflicts of interest related to the use of the tool. (d) Identifies and describes the attributes and modeling techniques that the tool uses to produce outputs. (e) Evaluates whether the attributes and modeling techniques described in subdivision (d) are a scientifically valid means of evaluating a covered individual's performance or ability to perform the essential functions of a role, and whether those attributes may function as a proxy for belonging to a protected class under the Elliot-Larsen civil rights act, 1976 PA 453, MCL 37.2101 to 37.2804. (f) Considers, identifies, and describes both of the following that may result in a disparate impact on a covered individual based on the covered individual's qualified characteristic, and what actions may be taken by the employer to reduce or remedy any disparate impact: (i) Any disparities in the data used to train or develop the tool. (ii) Any outputs produced by the tool. (g) Evaluates whether the use of the tool may limit accessibility for covered individuals with disabilities, or for covered individuals with any specific disability, and what actions may be taken by the employer to reduce or remedy the limit on accessibility. 
(h) Considers and describes potential sources of adverse impact against covered individuals or groups based on a qualified characteristic that may arise after the tool is implemented. (i) Identifies and describes any other assessment of risks of discrimination or a disparate impact of the tool on covered individuals or groups based on a qualified characteristic, and what actions may be taken to reduce or remedy that risk. (j) For any finding of a disparate impact or limit on accessibility, evaluates whether the data set, attribute, or feature of the tool at issue is the least discriminatory method of assessing a covered individual's performance or ability to perform job functions. (k) Considers and describes any other ways in which the tool could result in a violation of applicable law and, for any finding that a violation of law may occur, any necessary or appropriate steps to prevent a violation. (l) Considers and describes whether use of the tool may negatively affect a covered individual's privacy or job quality, including wages, hours, and working conditions. (2) Not more than 60 days after an employer completes an assessment, the employer shall do both of the following: (a) Submit the assessment in its entirety or in an accessible summary form to the department for the department to include in a public registry of impact assessments. (b) Distribute the assessment to covered individuals who may be subject to the tool. (3) An employer shall conduct or commission subsequent impact assessments each year in which the electronic monitoring tool or automated decisions tool is in use. Subsequent impact assessments must comply with the requirements of subsection (1), as applicable, and must assess and describe any change in the validity or disparate impact of the tool.
(b) It is an unfair employment practice, with respect to recruitment, hiring, promotion, renewal of employment, selection for training or apprenticeship, discharge, discipline, tenure, or the terms, privileges, or conditions of employment, for an employer to: (1) use artificial intelligence that has the effect of subjecting an employee or applicant for employment to discrimination because of race, color, creed, religion, national origin, sex, gender identity, marital status, status with regard to public assistance, familial status, membership or activity in a local commission, disability, sexual orientation, or age;
(e) the use of the artificial intelligence, algorithm, or other software tool does not discriminate, directly or indirectly, against enrollees in violation of state or federal law, including 49-2-309; (f) the artificial intelligence, algorithm, or other software tool is fairly and equitably applied, including in accordance with any applicable regulations and guidance issued by the federal department of health and human services;
(1)(a) On and after February 1, 2026, a developer of a high-risk artificial intelligence system shall use reasonable care to protect consumers from any known risks of algorithmic discrimination arising from the intended and contracted uses of the high-risk artificial intelligence system. (b) In any enforcement action brought on or after February 1, 2026, by the Attorney General pursuant to section 7 of this act, there is a rebuttable presumption that a developer used reasonable care as required under this section if the developer complied with this section.
(1)(a) On and after February 1, 2026, a deployer of any high-risk artificial intelligence system shall use reasonable care to protect consumers from each known risk of algorithmic discrimination. (b) In any enforcement action brought on or after February 1, 2026, by the Attorney General pursuant to section 7 of this act, there is a rebuttable presumption that a deployer of a high-risk artificial intelligence system used reasonable care as required under this section if the deployer complied with this section.
(3)(a) Except as otherwise provided in this subsection or subsection (6) of this section: (i) An impact assessment shall be completed for each high-risk artificial intelligence system deployed on or after February 1, 2026. Such impact assessment shall be completed by the deployer or by a third party contracted by the deployer; and (ii) On and after February 1, 2026, for each deployed high-risk artificial intelligence system, a deployer or a third party contracted by the deployer shall complete an impact assessment at least annually and within ninety days after any intentional and substantial modification to such high-risk artificial intelligence system is made available. (b) An impact assessment completed pursuant to this subsection shall include, to the extent reasonably known by or available to the deployer: (i) A statement by the deployer disclosing: (A) The purpose of the high-risk artificial intelligence system; (B) Any intended-use case for the high-risk artificial intelligence system; (C) The deployment context of the high-risk artificial intelligence system; and (D) Any benefit afforded by the high-risk artificial intelligence system; (ii) An analysis of whether the deployment of the high-risk artificial intelligence system poses any known risk of algorithmic discrimination and, if so, the nature of the algorithmic discrimination and the steps that have been taken to mitigate any such risk; (iii) A high-level summary of the categories of data the high-risk artificial intelligence system processes as inputs and the outputs the high-risk artificial intelligence system produces; (iv) If the deployer used data to customize the high-risk artificial intelligence system, an overview of the categories of data the deployer used to customize the high-risk artificial intelligence system; (v) Any metric used to evaluate the performance and any known limitation of the high-risk artificial intelligence system; (vi) A description of any transparency measure taken concerning the high-risk 
artificial intelligence system, including any measure taken to disclose to a consumer when the high-risk artificial intelligence system is in use; and (vii) A description of each postdeployment monitoring and user safeguard provided concerning the high-risk artificial intelligence system, including the oversight, use, and learning process established by the deployer to address any issue that arises from the deployment of the high-risk artificial intelligence system. (c) Any impact assessment completed pursuant to this subsection following an intentional and substantial modification to a high-risk artificial intelligence system on or after February 1, 2026, shall include a statement that discloses the extent to which the high-risk artificial intelligence system was used in a manner that was consistent with or varied from any use of the high-risk artificial intelligence system intended by the developer. (d) A single impact assessment may address a comparable set of high-risk artificial intelligence systems deployed by a deployer. (e) Any impact assessment completed to comply with another applicable law or regulation by a deployer or by a third party contracted by the deployer shall satisfy this subsection if such impact assessment is reasonably similar in scope and effect to the impact assessment that would otherwise be completed pursuant to this subsection. (f) A deployer shall maintain: (i) The most recently completed impact assessment required under this subsection for each high-risk artificial intelligence system of the deployer; (ii) Each record concerning each such impact assessment; and (iii) For at least three years following the final deployment of each high-risk artificial intelligence system, each prior impact assessment, if any, and each record concerning such impact assessment.
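The enumerated contents in paragraph (b) and the three-year retention duty in paragraph (f) lend themselves to a structured compliance record. As a hypothetical sketch (the class, field names, and helper below are illustrative, not statutory terms):

```python
# Hypothetical record structure mirroring the impact-assessment contents
# enumerated above. Field names are illustrative, not statutory language.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ImpactAssessment:
    system_name: str
    completed_on: date
    purpose: str
    intended_use_cases: list[str]
    deployment_context: str
    benefits: list[str]
    discrimination_risk_analysis: str   # known risks and mitigation steps
    input_data_categories: list[str]
    output_descriptions: list[str]
    customization_data_categories: list[str] = field(default_factory=list)
    performance_metrics: dict[str, float] = field(default_factory=dict)
    known_limitations: list[str] = field(default_factory=list)
    transparency_measures: list[str] = field(default_factory=list)
    post_deployment_monitoring: str = ""

def retention_deadline(final_deployment: date) -> date:
    """Records must be kept at least three years after final deployment.
    (Naive: a February 29 deployment date would need special handling.)"""
    return final_deployment.replace(year=final_deployment.year + 3)
```

A deployer would keep the most recent `ImpactAssessment` per system, plus all prior assessments and related records, at least until `retention_deadline(final_deployment)`.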
High-risk AI systems implemented in New Jersey shall: a. Undergo algorithmic impact assessments prior to deployment. The Office of Information Technology in, but not of, the Department of the Treasury, shall perform the impact assessments, in a manner to be determined by the Office of Information Technology.
f. The Department of Labor and Workforce Development shall analyze the data reported in accordance with subsection e. of this act and report to the Governor and the Legislature, as provided pursuant to section 2 of P.L.1991, c.164 (C.52:14-19.1), each year whether the data discloses a racial bias in the use of artificial intelligence.
a. The Office of the Attorney General shall investigate complaints related to AI-driven discrimination, unreasonable AI workplace surveillance, and claims of violations of civil rights protections related to AI. The Attorney General shall enforce penalties pursuant to the "Law Against Discrimination," P.L.1945, c.169 (C.10:5-1 et seq.), and the "New Jersey Civil Rights Act," P.L.2004, c.143 (C.10:6-1 et seq.) for violations of this section. b. As used in this section: "AI-driven discrimination" means output resulting from AI systems that exhibit biases against individuals based on age, race, religion, or other protected classes. "AI workplace surveillance" means the use of AI to monitor and analyze employee behavior and performance through the use of technology tools that track employee activities including computer usage and physical movements.
(a) Beginning on January first, two thousand twenty-seven, each developer of a high-risk artificial intelligence decision system shall use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination arising from the intended and contracted uses of a high-risk artificial intelligence decision system. In any enforcement action brought on or after such date by the attorney general pursuant to this article, there shall be a rebuttable presumption that a developer used reasonable care as required pursuant to this subdivision if: (i) the developer complied with the provisions of this section; and (ii) an independent third party identified by the attorney general pursuant to paragraph (b) of this subdivision and retained by the developer completed bias and governance audits for the high-risk artificial intelligence decision system. (b) No later than January first, two thousand twenty-six, and at least annually thereafter, the attorney general shall: (i) identify independent third parties who, in the attorney general's opinion, are qualified to complete bias and governance audits for the purposes of subparagraph (ii) of paragraph (a) of this subdivision; and (ii) publish a list of such independent third parties on the attorney general's website.
(a) Beginning on January first, two thousand twenty-seven, each deployer of a high-risk artificial intelligence decision system shall use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination. In any enforcement action brought on or after said date by the attorney general pursuant to this article, there shall be a rebuttable presumption that a deployer of a high-risk artificial intelligence decision system used reasonable care as required pursuant to this subdivision if: (i) the deployer complied with the provisions of this section; and (ii) an independent third party identified by the attorney general pursuant to paragraph (b) of this subdivision and retained by the deployer completed bias and governance audits for the high-risk artificial intelligence decision system. (b) No later than January first, two thousand twenty-seven, and at least annually thereafter, the attorney general shall: (i) identify the independent third parties who, in the attorney general's opinion, are qualified to complete bias and governance audits for the purposes of subparagraph (ii) of paragraph (a) of this subdivision; and (ii) make a list of such independent third parties available on the attorney general's website.
(a) Except as provided in paragraphs (c) and (d) of this subdivision and subdivision seven of this section: (i) a deployer that deploys a high-risk artificial intelligence decision system on or after January first, two thousand twenty-seven, or a third party contracted by the deployer, shall complete an impact assessment of the high-risk artificial intelligence decision system; and (ii) beginning on January first, two thousand twenty-seven, a deployer, or a third party contracted by the deployer, shall complete an impact assessment of a deployed high-risk artificial intelligence decision system: (A) at least annually; and (B) no later than ninety days after an intentional and substantial modification to such high-risk artificial intelligence decision system is made available. (b) (i) Each impact assessment completed pursuant to this subdivision shall include, at a minimum and to the extent reasonably known by, or available to, the deployer: (A) a statement by the deployer disclosing the purpose, intended use cases and deployment context of, and benefits afforded by, the high-risk artificial intelligence decision system; (B) an analysis of whether the deployment of the high-risk artificial intelligence decision system poses any known or reasonably foreseeable risks of algorithmic discrimination and, if so, the nature of such algorithmic discrimination and the steps that have been taken to mitigate such risks; (C) a description of: (I) the categories of data the high-risk artificial intelligence decision system processes as inputs; and (II) the outputs such high-risk artificial intelligence decision system produces; (D) if the deployer used data to customize the high-risk artificial intelligence decision system, an overview of the categories of data the deployer used to customize such high-risk artificial intelligence decision system; (E) any metrics used to evaluate the performance and known limitations of the high-risk artificial intelligence decision system; (F) a 
description of any transparency measures taken concerning the high-risk artificial intelligence decision system, including, but not limited to, any measures taken to disclose to a consumer that such high-risk artificial intelligence decision system is in use when such high-risk artificial intelligence decision system is in use; and (G) a description of the post-deployment monitoring and user safeguards provided concerning such high-risk artificial intelligence decision system, including, but not limited to, the oversight, use, and learning process established by the deployer to address issues arising from deployment of such high-risk artificial intelligence decision system. (ii) In addition to the statement, analysis, descriptions, overview, and metrics required pursuant to subparagraph (i) of this paragraph, an impact assessment completed pursuant to this subdivision following an intentional and substantial modification made to a high-risk artificial intelligence decision system on or after January first, two thousand twenty-seven, shall include a statement disclosing the extent to which the high-risk artificial intelligence decision system was used in a manner that was consistent with, or varied from, the developer's intended uses of such high-risk artificial intelligence decision system. (c) A single impact assessment may address a comparable set of high-risk artificial intelligence decision systems deployed by a deployer. (d) If a deployer, or a third party contracted by the deployer, completes an impact assessment for the purpose of complying with another applicable law or regulation, such impact assessment shall be deemed to satisfy the requirements established in this subdivision if such impact assessment is reasonably similar in scope and effect to the impact assessment that would otherwise be completed pursuant to this subdivision. 
(e) A deployer shall maintain the most recently completed impact assessment of a high-risk artificial intelligence decision system as required pursuant to this subdivision, all records concerning each such impact assessment and all prior impact assessments, if any, for a period of at least three years following the final deployment of the high-risk artificial intelligence decision system.
Except as provided in subdivision seven of this section, a deployer, or a third party contracted by the deployer, shall review, no later than January first, two thousand twenty-seven, and at least annually thereafter, the deployment of each high-risk artificial intelligence decision system deployed by the deployer to ensure that such high-risk artificial intelligence decision system is not causing algorithmic discrimination.
It shall be unlawful for a landlord to implement or use an automated housing decision making tool, including the use of an automated housing decision making tool that issues a score, classification, or recommendation, that fails to comply with the following provisions: (a) No less than annually, a disparate impact analysis shall be conducted to assess the actual impact of any automated housing decision making tool used by any landlord to select applicants for housing within the state. Such disparate impact analysis shall be provided to the landlord. (b) A summary of the most recent disparate impact analysis of such tool as well as the distribution date of the tool to which the analysis applies shall be made publicly available on the website of the landlord prior to the implementation or use of such tool. Such summary shall also be made accessible through any listing for housing on a digital platform for which the landlord intends to use an automated housing decision making tool to screen applicants for housing.
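A disparate impact analysis of the kind required in paragraph (a) is commonly operationalized as a comparison of per-group selection rates, for example using the four-fifths (80%) rule familiar from employment-discrimination screening. A minimal Python sketch; all group labels, decisions, and the threshold are hypothetical, and the four-fifths convention is only one screening heuristic, not a methodology the text prescribes:

```python
from collections import Counter

def selection_rates(decisions):
    """Per-group selection rates from (group, selected) pairs."""
    totals, selected = Counter(), Counter()
    for group, ok in decisions:
        totals[group] += 1
        if ok:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def four_fifths_ratios(rates):
    """Ratio of each group's selection rate to the highest group's rate.
    A ratio below 0.8 is the conventional four-fifths-rule flag for
    adverse impact; it is a screening heuristic, not a legal conclusion."""
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

# Hypothetical tenant-screening decisions: (demographic group, approved)
decisions = ([("A", True)] * 60 + [("A", False)] * 40
             + [("B", True)] * 40 + [("B", False)] * 60)

rates = selection_rates(decisions)    # A: 0.6, B: 0.4
ratios = four_fifths_ratios(rates)    # A: 1.0, B: ~0.67
flagged = [g for g, r in ratios.items() if r < 0.8]  # ["B"]
```

A summary of the rates, ratios, and any flagged groups is the sort of content that could populate the publicly posted summary described in paragraph (b).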
1. No New York resident shall face discrimination by algorithms, and all automated systems shall be used and designed in an equitable manner. 2. The designers, developers, and deployers of automated systems shall take proactive and continuous measures to protect New York residents and communities from algorithmic discrimination, ensuring the use and design of these systems in an equitable manner. 3. The protective measures required by this section shall include proactive equity assessments as part of the system design, use of representative data, protection against proxies for demographic features, and assurance of accessibility for New York residents with disabilities in design and development. 4. Automated systems shall undergo pre-deployment and ongoing disparity testing and mitigation, under clear organizational oversight.
5. Independent evaluations and plain language reporting in the form of an algorithmic impact assessment, including disparity testing results and mitigation information, shall be conducted for all automated systems. 6. New York residents shall have the right to view such evaluations and reports.
(3) The use of the artificial intelligence, algorithm, or other software tool does not adversely discriminate, directly or indirectly, against an individual on the basis of race, color, religion, national origin, ancestry, age, sex, gender, gender identity, gender expression, sexual orientation, present or predicted disability, expected length of life, degree of medical dependency, quality of life, or other health conditions. (4) The artificial intelligence, algorithm, or other software tool is fairly and equitably applied.
No employer shall utilize or apply any artificial intelligence unless the employer, or an entity acting on behalf of such employer, shall have conducted an impact assessment for the application and use of such artificial intelligence. Following the first impact assessment, an impact assessment shall be conducted at least once every two years. An impact assessment shall be conducted prior to any material change to the artificial intelligence that may change the outcome or effect of such system. Such impact assessments shall include: (a) a description of the objectives of the artificial intelligence; (b) an evaluation of the ability of the artificial intelligence to achieve its stated objectives; (c) a description and evaluation of the objectives and development of the artificial intelligence including: (i) a summary of the underlying algorithms, computational modes, and tools that are used within the artificial intelligence; and (ii) the design and training data used to develop the artificial intelligence process; (d) the extent to which the deployment and use of the artificial intelligence requires input of sensitive and personal data, how that data is used and stored, and any control users may have over their data; (e) an estimate of the number of employees already displaced due to artificial intelligence; and (f) an estimate of the number of employees expected to be displaced or otherwise affected due to the increased use of artificial intelligence in the workplace.
(e) The use of the artificial intelligence, algorithm, or other software tool does not discriminate, directly or indirectly, against enrollees in violation of state or federal law. (f) The artificial intelligence, algorithm, or other software tool is fairly and equitably applied, including in accordance with any applicable regulations and guidance issued by the federal department of health and human services.
(e) The use of the artificial intelligence, algorithm, or other software tool does not discriminate, directly or indirectly, against insureds in violation of state or federal law. (f) The artificial intelligence, algorithm, or other software tool is fairly and equitably applied, including in accordance with any applicable regulations and guidance issued by the federal department of health and human services.
1. A developer or deployer shall take reasonable care to prevent foreseeable risk of algorithmic discrimination that is a consequence of the use, sale, or sharing of a high-risk AI system or a product featuring a high-risk AI system. 2. Any developer or deployer that uses, sells, or shares a high-risk AI system shall have completed an independent audit, pursuant to section eighty-seven of this article, confirming that the developer or deployer has taken reasonable care to prevent foreseeable risk of algorithmic discrimination with respect to such high-risk AI system.
No less than annually, any real estate broker or online housing platform that uses virtual agents to assist with searches for available properties for sale or rent, and any online housing platform that uses AI tools, shall have a disparate impact analysis conducted and shall submit a summary of the most recent disparate impact analysis to the attorney general's office.
Any real estate broker or online housing platform that offers or uses virtual agents or AI tools shall: (a) proactively identify discriminatory algorithmic results and modify such virtual agents or AI tools to adopt less discriminatory alternatives, including but not limited to, assessing data used to train such virtual agents or AI tools and verifying that use of such data does not predict discriminatory outcomes; (b) ensure that the artificial intelligence or other computational or algorithmic systems upon which such virtual agents or AI tools are structured are similarly predictive across groups on the basis of sex, race, ethnicity or other protected classes, and make adjustments to correct any identified disparities in predictiveness for any such groups; and (c) conduct regular end-to-end testing of advertising, captioning, and chatbot systems to ensure that any discriminatory outcomes are detected, including but not limited to, comparing the delivery of advertisements across different demographic audiences.
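One common way to check paragraph (b)'s requirement that a tool be similarly predictive across groups is to compute an accuracy metric, such as positive predictive value (precision), separately for each protected class and compare the gap. A minimal sketch; the data, the choice of metric, and the idea of a gap threshold are illustrative assumptions, not requirements of the text:

```python
def per_group_ppv(records):
    """Positive predictive value (precision) per group, from
    (group, predicted_positive, actually_positive) triples."""
    pred_pos, true_pos = {}, {}
    for group, pred, actual in records:
        if pred:
            pred_pos[group] = pred_pos.get(group, 0) + 1
            if actual:
                true_pos[group] = true_pos.get(group, 0) + 1
    return {g: true_pos.get(g, 0) / n for g, n in pred_pos.items()}

def max_ppv_gap(ppv):
    """Largest gap between groups' PPVs; a large gap suggests the tool
    is not similarly predictive across groups and needs adjustment."""
    vals = list(ppv.values())
    return max(vals) - min(vals)

# Hypothetical outcomes: (group, tool flagged as qualified, actually qualified)
records = ([("A", True, True)] * 45 + [("A", True, False)] * 5
           + [("B", True, True)] * 30 + [("B", True, False)] * 20)

ppv = per_group_ppv(records)  # A: 0.9, B: 0.6
gap = max_ppv_gap(ppv)        # ~0.3: predictions are far less reliable for B
```

The same per-group comparison can be run on other metrics (false positive rate, calibration) depending on which notion of "similarly predictive" the auditor adopts.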
1. A developer or deployer shall not offer, license, promote, sell, or use a covered algorithm in a manner that: (a) causes or contributes to a disparate impact in a manner that prevents; (b) otherwise discriminates in a manner that prevents; or (c) otherwise makes unavailable, the equal enjoyment of goods, services, or other activities or opportunities, related to a consequential action, on the basis of a protected characteristic. 2. This section shall not apply to: (a) the offer, licensing, or use of a covered algorithm for the sole purpose of: (i) a developer's or deployer's self-testing (or auditing by an independent auditor at a developer's or deployer's request) to identify, prevent, or mitigate discrimination, or otherwise to ensure compliance with obligations, under federal or state law; (ii) expanding an applicant, participant, or customer pool to raise the likelihood of increasing diversity or redressing historic discrimination; or (iii) conducting good faith security research, or other research, if conducting the research is not part or all of a commercial act; or (b) any private club or other establishment not in fact open to the public, as described in section 201(e) of the Civil Rights Act of 1964 (42 U.S.C. 2000a(e)).
1. Prior to deploying, licensing, or offering a covered algorithm (including deploying a material change to a previously-deployed covered algorithm or a material change made prior to deployment) for a consequential action, a developer or deployer shall conduct a pre-deployment evaluation in accordance with this section. 2. (a) The developer shall conduct a preliminary evaluation of the plausibility that any expected use of the covered algorithm may result in a harm. (b) The deployer shall conduct a preliminary evaluation of the plausibility that any intended use of the covered algorithm may result in a harm. (c) Based on the results of the preliminary evaluation, the developer or deployer shall: (i) in the event that a harm is not plausible, record a finding of no plausible harm, including a description of the developer's expected use or the deployer's intended use of the covered algorithm, how the preliminary evaluation was conducted, and an explanation for the finding, and submit such record to the division; and (ii) in the event that a harm is plausible, conduct a full pre-deployment evaluation as described in subdivision three or subdivision four of this section, as applicable. (d) When conducting a preliminary evaluation of a material change to, or new use of, a previously-deployed covered algorithm, the developer or deployer may limit the scope of the evaluation to whether use of the covered algorithm may result in a harm as a result of the material change or new use. 3. (a) If a developer determines a harm is plausible during the preliminary evaluation described in subdivision two of this section, the developer shall engage an independent auditor to conduct a pre-deployment evaluation. 
The evaluation required by this subdivision shall include a detailed review and description, sufficient for an individual having ordinary skill in the art to understand the functioning, risks, uses, benefits, limitations, and other pertinent attributes of the covered algorithm, including: (i) the covered algorithm's design and methodology, including the inputs the covered algorithm is designed to use to produce an output and the outputs the covered algorithm is designed to produce; (ii) how the covered algorithm was created, trained, and tested, including: (A) any metric used to test the performance of the covered algorithm; (B) defined benchmarks and goals that correspond to such metrics, including whether there was sufficient representation of demographic groups that are reasonably likely to use or be affected by the covered algorithm in the data used to create or train the algorithm, and whether there was reasonable testing, if any, across such demographic groups; (C) the outputs the covered algorithm actually produces in testing; (D) a description of any consultation with relevant stakeholders, including any communities that will be impacted by the covered algorithm, regarding the development of the covered algorithm, or a disclosure that no such consultation occurred; (E) a description of which protected characteristics, if any, were used for testing and evaluation, and how and why such characteristics were used, including: (1) whether the testing occurred in comparable contextual conditions to the conditions in which the covered algorithm is expected to be used; and (2) if protected characteristics were not available to conduct such testing, a description of alternative methods the developer used to conduct the required assessment; (F) any other computational algorithm incorporated into the development of the covered algorithm, regardless of whether such precursor computational algorithm involves a consequential action; (G) a description of the data and 
information used to develop, test, maintain, or update the covered algorithm, including: (1) each type of personal data used, each source from which the personal data was collected, and how each type of personal data was inferred and processed; (2) the legal authorization for collecting and processing the personal data; and (3) an explanation of how the data (including personal data) used is representative, proportional, and appropriate to the development and intended uses of the covered algorithm; and (H) a description of the training process for the covered algorithm which includes the training, validation, and test data utilized to confirm the intended outputs; (iii) the potential for the covered algorithm to produce a harm or to have a disparate impact in the equal enjoyment of goods, services, or other activities or opportunities, and a description of such potential harm or disparate impact; (iv) alternative practices and recommendations to prevent or mitigate harm and recommendations for how the developer could monitor for harm after offering, licensing, or deploying the covered algorithm; and (v) any other information the division deems pertinent to prevent the covered algorithm from causing harm or having a disparate impact in the equal enjoyment of goods, services, or other activities or opportunities, as prescribed by rules promulgated by the division. (b) The independent auditor shall submit to the developer a report on the evaluation conducted under this subdivision, including the findings and recommendations of such independent auditor.
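The evaluation item in (ii)(B) above, whether demographic groups likely to use or be affected by the covered algorithm are sufficiently represented in the training data, is often approached by comparing each group's share of the training set against its share of a reference population. A minimal sketch; the counts, reference shares, and tolerance are all hypothetical:

```python
def representation_gaps(train_counts, population_shares):
    """Difference between each group's share of the training data and its
    share of the reference population (negative = underrepresented)."""
    total = sum(train_counts.values())
    return {g: train_counts.get(g, 0) / total - share
            for g, share in population_shares.items()}

def underrepresented(gaps, tolerance=0.05):
    """Groups whose training-data share trails their population share
    by more than a (hypothetical) tolerance."""
    return sorted(g for g, d in gaps.items() if d < -tolerance)

# Hypothetical training-set counts and reference-population shares
train = {"A": 700, "B": 200, "C": 100}
population = {"A": 0.55, "B": 0.30, "C": 0.15}

gaps = representation_gaps(train, population)  # B is short by about 0.10
flagged = underrepresented(gaps)               # ["B"]
```

The appropriate reference population and tolerance are judgment calls for the independent auditor; the statute specifies the question, not the cutoff.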
4. (a) If a deployer determines a harm is plausible during the preliminary evaluation described in subdivision two of this section, the deployer shall engage an independent auditor to conduct a pre-deployment evaluation. The evaluation required by this subdivision shall include a detailed review and description, sufficient for an individual having ordinary skill in the art to understand the functioning, risks, uses, benefits, limitations, and other pertinent attributes of the covered algorithm, including: (i) the manner in which the covered algorithm makes or contributes to a consequential action and the purpose for which the covered algorithm will be deployed; (ii) the necessity and proportionality of the covered algorithm in relation to its planned use, including the intended benefits and limitations of the covered algorithm and a description of the baseline process being enhanced or replaced by the covered algorithm, if applicable; (iii) the inputs that the deployer plans to use to produce an output, including: (A) the type of personal data and information used and how the personal data and information will be collected, inferred, and processed; (B) the legal authorization for collecting and processing the personal data; and (C) an explanation of how the data used is representative, proportional, and appropriate to the deployment of the covered algorithm; (iv) the outputs the covered algorithm is expected to produce and the outputs the covered algorithm actually produces in testing; (v) a description of any additional testing or training completed by the deployer for the context in which the covered algorithm will be deployed; (vi) a description of any consultation with relevant stakeholders, including any communities that will be impacted by the covered algorithm, regarding the deployment of the covered algorithm; (vii) the potential for the covered algorithm to produce a harm or to have a disparate impact in the equal enjoyment of goods, services, or other 
activities or opportunities in the context in which the covered algorithm will be deployed and a description of such potential harm or disparate impact; (viii) alternative practices and recommendations to prevent or mitigate harm in the context in which the covered algorithm will be deployed and recommendations for how the deployer could monitor for harm after offering, licensing, or deploying the covered algorithm; and (ix) any other information the division deems pertinent to prevent the covered algorithm from causing harm or having a disparate impact in the equal enjoyment of goods, services, or other activities or opportunities as prescribed by rules promulgated by the division. (b) The independent auditor shall submit to the deployer a report on the evaluation conducted under this subdivision, including the findings and recommendations of such independent auditor.
1. After the deployment of a covered algorithm, a deployer shall, on an annual basis, conduct an impact assessment in accordance with this section. The deployer shall conduct a preliminary impact assessment of the covered algorithm to identify any harm that resulted from the covered algorithm during the reporting period and: (a) if no resulting harm is identified by such assessment, shall record a finding of no harm, including a description of the developer's expected use or the deployer's intended use of the covered algorithm, how the preliminary evaluation was conducted, and an explanation for such finding, and submit such finding to the division; and (b) if a resulting harm is identified by such assessment, shall conduct a full impact assessment as described in subdivision two of this section. 2. In the event that the covered algorithm resulted in a harm during the reporting period, the deployer shall engage an independent auditor to conduct a full impact assessment with respect to the reporting period, including: (a) an assessment of the harm that resulted or was reasonably likely to have been produced during the reporting period; (b) a description of the extent to which the covered algorithm produced a disparate impact in the equal enjoyment of goods, services, or other activities or opportunities, including the methodology for such evaluation, of how the covered algorithm produced or likely produced such disparity; (c) a description of the types of data input into the covered algorithm during the reporting period to produce an output, including: (i) documentation of how data input into the covered algorithm to produce an output is represented and complete descriptions of each field of data; and (ii) whether and to what extent the data input into the covered algorithm to produce an output was used to train or otherwise modify the covered algorithm; (d) whether and to what extent the covered algorithm produced the outputs it was expected to produce; (e) a 
detailed description of how the covered algorithm was used to make a consequential action; (f) any action taken to prevent or mitigate harms, including how relevant staff are informed of, trained about, and implement harm mitigation policies and practices, and recommendations for how the deployer could monitor for and prevent harm after offering, licensing, or deploying the covered algorithm; and (g) any other information the division deems pertinent to prevent the covered algorithm from causing harm or having a disparate impact in the equal enjoyment of goods, services, or other activities or opportunities as prescribed by rules promulgated by the division. 3. (a) After the engagement of the independent auditor, the independent auditor shall submit to the deployer a report on the impact assessment conducted under subdivision two of this section, including the findings and recommendations of such independent auditor. (b) Not later than thirty days after the submission of a report on an impact assessment under this section, a deployer shall submit to the developer of the covered algorithm a summary of such report, subject to the trade secret and privacy protections described in subdivision six of this section.
4. A developer shall, on an annual basis, review each impact assessment summary submitted by a deployer of its covered algorithm under subdivision three of this section for the following purposes: (a) to assess how the deployer is using the covered algorithm, including the methodology for assessing such use; (b) to assess the type of data the deployer is inputting into the covered algorithm to produce an output and the types of outputs the covered algorithm is producing; (c) to assess whether the deployer is complying with any relevant contractual agreement with the developer and whether any remedial action is necessary; (d) to compare the covered algorithm's performance in real-world conditions versus pre-deployment testing, including the methodology used to evaluate such performance; (e) to assess whether the covered algorithm is causing harm or is reasonably likely to be causing harm; (f) to assess whether the covered algorithm is causing, or is reasonably likely to be causing, a disparate impact in the equal enjoyment of goods, services, or other activities or opportunities, and, if so, how and with respect to which protected characteristic; (g) to determine whether the covered algorithm needs modification; (h) to determine whether any other action is appropriate to ensure that the covered algorithm remains safe and effective; and (i) to undertake any other assessment or responsive action the division deems pertinent to prevent the covered algorithm from causing harm or having a disparate impact in the equal enjoyment of goods, services, or other activities or opportunities, as prescribed by rules promulgated by the division.
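The developer review in paragraph (d), comparing the covered algorithm's real-world performance against pre-deployment testing, is typically implemented as a drift check against the benchmarks recorded during the pre-deployment evaluation. A minimal sketch; the metric names, values, and tolerance are hypothetical:

```python
def performance_drift(pre_deployment, observed):
    """Per-metric drop from pre-deployment benchmark to observed
    real-world performance (positive = degradation)."""
    return {m: pre_deployment[m] - observed.get(m, 0.0)
            for m in pre_deployment}

def needs_review(drift, tolerance=0.05):
    """Metrics whose degradation exceeds a (hypothetical) tolerance,
    which would prompt the modification inquiry in paragraph (g)."""
    return sorted(m for m, d in drift.items() if d > tolerance)

# Hypothetical pre-deployment benchmarks vs. live monitoring results
pre = {"accuracy": 0.92, "group_parity": 0.97}
live = {"accuracy": 0.84, "group_parity": 0.95}

drift = performance_drift(pre, live)  # accuracy dropped ~0.08
flags = needs_review(drift)           # ["accuracy"]
```

Recording the methodology alongside the numbers, as paragraph (d) requires, lets successive annual reviews be compared on a consistent basis.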
3. (a) Except as provided in paragraphs (c) and (d) of this subdivision and subdivision seven of this section: (i) a deployer that deploys a high-risk artificial intelligence decision system on or after January first, two thousand twenty-seven, or a third party contracted by the deployer, shall complete an impact assessment of the high-risk artificial intelligence decision system; and (ii) beginning on January first, two thousand twenty-seven, a deployer, or a third party contracted by the deployer, shall complete an impact assessment of a deployed high-risk artificial intelligence decision system: (A) at least annually; and (B) no later than ninety days after an intentional and substantial modification to such high-risk artificial intelligence decision system is made available. (b) (i) Each impact assessment completed pursuant to this subdivision shall include, at a minimum and to the extent reasonably known by, or available to, the deployer: (A) a statement by the deployer disclosing the purpose, intended use cases and deployment context of, and benefits afforded by, the high-risk artificial intelligence decision system; (B) an analysis of whether the deployment of the high-risk artificial intelligence decision system poses any known or reasonably foreseeable risks of algorithmic discrimination and, if so, the nature of such algorithmic discrimination and the steps that have been taken to mitigate such risks; (C) a description of: (I) the categories of data the high-risk artificial intelligence decision system processes as inputs; and (II) the outputs such high-risk artificial intelligence decision system produces; (D) if the deployer used data to customize the high-risk artificial intelligence decision system, an overview of the categories of data the deployer used to customize such high-risk artificial intelligence decision system; (E) any metrics used to evaluate the performance and known limitations of the high-risk artificial intelligence decision system; 
(F) a description of any transparency measures taken concerning the high-risk artificial intelligence decision system, including, but not limited to, any measures taken to disclose to a consumer that such high-risk artificial intelligence decision system is in use when such high-risk artificial intelligence decision system is in use; and (G) a description of the post-deployment monitoring and user safeguards provided concerning such high-risk artificial intelligence decision system, including, but not limited to, the oversight, use, and learning process established by the deployer to address issues arising from deployment of such high-risk artificial intelligence decision system. (ii) In addition to the statement, analysis, descriptions, overview, and metrics required pursuant to subparagraph (i) of this paragraph, an impact assessment completed pursuant to this subdivision following an intentional and substantial modification made to a high-risk artificial intelligence decision system on or after January first, two thousand twenty-seven, shall include a statement disclosing the extent to which the high-risk artificial intelligence decision system was used in a manner that was consistent with, or varied from, the developer's intended uses of such high-risk artificial intelligence decision system. (c) A single impact assessment may address a comparable set of high-risk artificial intelligence decision systems deployed by a deployer. (d) If a deployer, or a third party contracted by the deployer, completes an impact assessment for the purpose of complying with another applicable law or regulation, such impact assessment shall be deemed to satisfy the requirements established in this subdivision if such impact assessment is reasonably similar in scope and effect to the impact assessment that would otherwise be completed pursuant to this subdivision. 
(e) A deployer shall maintain the most recently completed impact assessment of a high-risk artificial intelligence decision system as required pursuant to this subdivision, all records concerning each such impact assessment and all prior impact assessments, if any, for a period of at least three years following the final deployment of the high-risk artificial intelligence decision system.
4. Except as provided in subdivision seven of this section, a deployer, or a third party contracted by the deployer, shall review, no later than January first, two thousand twenty-seven, and at least annually thereafter, the deployment of each high-risk artificial intelligence decision system deployed by the deployer to ensure that such high-risk artificial intelligence decision system is not causing algorithmic discrimination.
(ii) an independent third party identified by the attorney general pursuant to paragraph (b) of this subdivision and retained by the developer completed bias and governance audits for the high-risk artificial intelligence decision system.
(a) It shall be an unlawful discriminatory practice for an employer to use artificial intelligence for recruitment, hiring, promotion, renewal of employment, selection for training or apprenticeship, discharge, discipline, tenure, or the terms, privileges, or conditions of employment that has the effect of subjecting employees to discrimination on the basis of age, race, creed, color, national origin, citizenship or immigration status, sexual orientation, gender identity or expression, military status, sex, disability, predisposing genetic characteristics, familial status, marital status, or status as a victim of domestic violence or to use zip codes as a proxy for such protected classes.
(2) The artificial intelligence-based algorithms and training data sets must not directly or indirectly discriminate against patients in violation of Federal or State law. (3) The artificial intelligence-based algorithms must be fairly and equitably applied, including in accordance with any applicable regulations or guidance issued by the United States Department of Health and Human Services.
(4) The artificial intelligence-based algorithms and training data sets must not directly or indirectly discriminate against covered persons in violation of Federal or State law. (5) The artificial intelligence-based algorithms must be fairly and equitably applied, including in accordance with any applicable regulations or guidance issued by the United States Department of Health and Human Services.
(4) The artificial intelligence-based algorithms and training data sets must not directly or indirectly discriminate against the enrollees in violation of Federal or State law. (5) The artificial intelligence-based algorithms must be fairly and equitably applied, including in accordance with any applicable regulations and guidance issued by the United States Department of Health and Human Services.
(2) The artificial-intelligence-based algorithms and training data sets must not directly or indirectly discriminate against patients in violation of Federal or State law. (3) The artificial-intelligence-based algorithms must be fairly and equitably applied, including in accordance with any applicable regulations or guidance issued by the United States Department of Health and Human Services.
(4) The artificial-intelligence-based algorithms and training data sets must not directly or indirectly discriminate against covered persons in violation of Federal or State law. (5) The artificial-intelligence-based algorithms must be fairly and equitably applied, including in accordance with any applicable regulations or guidance issued by the United States Department of Health and Human Services.
(4) The artificial-intelligence-based algorithms and training data sets must not directly or indirectly discriminate against the enrollees in violation of Federal or State law. (5) The artificial-intelligence-based algorithms must be fairly and equitably applied, including in accordance with any applicable regulations and guidance issued by the United States Department of Health and Human Services.
(k) It shall be unlawful for an employer to use electronic monitoring, alone or in conjunction with an automated decision system, unless the employer's proposed use of electronic monitoring has been the subject of an impact assessment. Such impact assessments shall: (1) Be conducted no more than one year prior to the use of such electronic monitoring, or where the electronic monitoring began before the effective date of this section, within six (6) months of the effective date of this chapter; (2) Be conducted by an independent and impartial party with no financial or legal conflicts of interest; (3) Evaluate whether the data protection and security practices surrounding the electronic monitoring are consistent with applicable law and cybersecurity industry's best practices; (4) Identify the allowable purpose(s) as defined in this chapter; (5) Consider and describe any other ways in which the electronic monitoring could result in a violation of applicable law and, for any finding that a violation of law may occur, any necessary or appropriate steps to prevent such violation of law; (6) Consider and describe whether the electronic monitoring may negatively impact employees' privacy and job quality, including wages, hours, and working conditions; and (7) Be disclosed in full, in plain language, to all affected workers and their authorized representatives within thirty (30) days of the employer's receipt of the impact assessment. (i) Workers and their authorized representatives shall have the right to comment on, challenge and bargain over the proposed monitoring based on the assessment's findings.
(A) A developer of a high-risk artificial intelligence system shall use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination arising from the intended and contracted uses of the high-risk artificial intelligence system. In any enforcement action brought by the Attorney General pursuant to Section 37-31-60, there is a rebuttable presumption that a developer used reasonable care as required under this section if the developer complied with this section and any additional requirements or obligations as set forth in rules adopted by the Attorney General pursuant to Section 37-31-70.
(A) A deployer of a high-risk artificial intelligence system shall use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination. In any enforcement action brought by the Attorney General pursuant to Section 37-31-70, there is a rebuttable presumption that a deployer of a high-risk artificial intelligence system used reasonable care as required under this section if the deployer complied with this section and any additional requirements or obligations as set forth in rules adopted by the Attorney General pursuant to Section 37-31-70.
(C)(1) Except as provided in items (4), (5), and subsection (F) of this section: (a) a deployer, or a third party contracted by the deployer, that deploys a high-risk artificial intelligence system shall complete an impact assessment for the high-risk artificial intelligence system; and (b) a deployer, or a third party contracted by the deployer, shall complete an impact assessment for a deployed high-risk artificial intelligence system at least annually and within ninety days after any intentional and substantial modification to the high-risk artificial intelligence system is made available. (2) An impact assessment completed pursuant to this subsection must include, at a minimum, and to the extent reasonably known by or available to the deployer: (a) a statement by the deployer disclosing the purpose, intended-use cases, and deployment context of, and benefits afforded by, the high-risk artificial intelligence system; (b) an analysis of whether the deployment of the high-risk artificial intelligence system poses any known or reasonably foreseeable risks of algorithmic discrimination and, if so, the nature of the algorithmic discrimination and the steps that have been taken to mitigate the risks; (c) a description of the categories of data the high-risk artificial intelligence system processes as inputs and the outputs the high-risk artificial intelligence system produces; (d) if the deployer used data to customize the high-risk artificial intelligence system, an overview of the categories of data the deployer used to customize the high-risk artificial intelligence system; (e) any metrics used to evaluate the performance and known limitations of the high-risk artificial intelligence system; (f) a description of any transparency measures taken concerning the high-risk artificial intelligence system, including any measures taken to disclose to a consumer that the high-risk artificial intelligence system is in use when the high-risk artificial intelligence system is 
in use; and (g) a description of the postdeployment monitoring and user safeguards provided concerning the high-risk artificial intelligence system, including the oversight, use, and learning process established by the deployer to address issues arising from the deployment of the high-risk artificial intelligence system. (3) In addition to the information required under item (2), an impact assessment completed pursuant to this item following an intentional and substantial modification to a high-risk artificial intelligence system must include a statement disclosing the extent to which the high-risk artificial intelligence system was used in a manner that was consistent with, or varied from, the developer's intended uses of the high-risk artificial intelligence system. (4) A single impact assessment may address a comparable set of high-risk artificial intelligence systems deployed by a deployer. (5) If a deployer, or a third party contracted by the deployer, completes an impact assessment for the purpose of complying with another applicable law or regulation, the impact assessment satisfies the requirements established in this subsection if the impact assessment is reasonably similar in scope and effect to the impact assessment that would otherwise be completed pursuant to this subsection. (6) A deployer shall maintain the most recently completed impact assessment for a high-risk artificial intelligence system as required under this subsection, all records concerning each impact assessment, and all prior impact assessments, if any, for at least three years following the final deployment of the high-risk artificial intelligence system. (7) At least annually, a deployer, or a third party contracted by the deployer, must review the deployment of each high-risk artificial intelligence system deployed by the deployer to ensure that the high-risk artificial intelligence system is not causing algorithmic discrimination.
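The minimum contents enumerated in item (2) above amount to a structured record, and item (6) adds a three-year retention duty. A compliance team might represent the assessment along these lines (a sketch; the field names are ours, not statutory terms):

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ImpactAssessment:
    # Item (2)(a)-(g): minimum required contents, to the extent known.
    purpose_and_benefits: str              # (a) purpose, use cases, context
    discrimination_analysis: str           # (b) foreseeable risks + mitigations
    input_data_categories: list[str]       # (c) data processed as inputs ...
    output_descriptions: list[str]         # (c) ... and outputs produced
    customization_data: list[str]          # (d) data used to customize, if any
    performance_metrics: dict[str, float]  # (e) metrics and known limitations
    transparency_measures: str             # (f) consumer-facing disclosures
    postdeployment_monitoring: str         # (g) oversight and user safeguards
    completed_on: date = field(default_factory=date.today)

def retention_deadline(final_deployment: date) -> date:
    """Item (6): retain assessments and records for at least three years
    following the final deployment of the system."""
    return final_deployment.replace(year=final_deployment.year + 3)
```

A fresh record would be completed annually and within ninety days of any intentional and substantial modification, per item (1)(b).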
Annually test, or ensure that an appropriate contractor employed by such agency annually tests, the automated decision system for algorithmic discrimination and certify its compliance with federal and state law;
Annually test, or ensure that an appropriate contractor employed by such department, office, board, commission, agency, or instrumentality of local government annually tests, the automated decision system for algorithmic discrimination and certify its compliance with federal and state law;
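The annual testing duty above names no method. A common screen (widely used but not required by these provisions) is the EEOC four-fifths rule of thumb, which compares selection rates across demographic groups:

```python
def selection_rates(outcomes):
    """outcomes: dict mapping group -> (selected_count, total_count)."""
    return {g: sel / tot for g, (sel, tot) in outcomes.items()}

def adverse_impact_ratio(outcomes):
    """Ratio of the lowest group selection rate to the highest."""
    rates = selection_rates(outcomes)
    return min(rates.values()) / max(rates.values())

def passes_four_fifths(outcomes, threshold=0.8):
    """Under the four-fifths rule of thumb, a ratio below 0.8
    is conventionally treated as evidence of adverse impact."""
    return adverse_impact_ratio(outcomes) >= threshold
```

For example, if one group is selected at a 30% rate and another at 50%, the ratio is 0.6 and the system would warrant further scrutiny before the agency could certify compliance.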
(g) Impact assessment of automated decision systems. (1) Prior to utilizing an automated decision system, an employer shall create a written impact assessment of the system that includes, at a minimum: (A) a detailed description of the automated decision system and its purpose; (B) a description of the data utilized by the system; (C) a description of the outputs produced by the system and the types of employment-related decisions in which those outputs may be utilized; (D) an assessment of the necessity for the system, including reasons for utilizing the system to supplement nonautomated means of decision making; (E) a detailed assessment of the system's validity and reliability in accordance with contemporary social science standards and a description of any metrics used to evaluate the performance and known limitations of the automated decision system; (F) a detailed assessment of the potential risks of utilizing the system, including the risk of: (i) discrimination against employees on the basis of race, color, religion, national origin, sex, sexual orientation, gender identity, ancestry, place of birth, age, crime victim status, or physical or mental condition; (ii) violating employees' legal rights or chilling employees' exercise of legal rights; (iii) directly or indirectly harming employees' physical health, mental health, safety, sense of well-being, dignity, or autonomy; (iv) harm to employee privacy, including through potential security breaches or inadvertent disclosure of information; and (v) negative economic and material impacts to employees, including potential effects on compensation, benefits, work conditions, evaluations, advancement, and work opportunities; (G) a detailed summary of measures taken by the employer to address or mitigate the risks identified pursuant to subdivision (F) of this subdivision (1); and (H) a description of any methodology used in preparing the assessment.
(2) An employer shall provide a copy of the assessment prepared pursuant to subdivision (1) of this subsection to an employee upon request. (3) An employer shall update the assessment required pursuant to this subsection any time a significant change or update is made to the automated decision system. (4) A single impact assessment may address a comparable set of automated decision systems deployed by an employer.
It shall be unlawful discrimination for a developer or deployer to use, sell, or share an automated decision system for use in a consequential decision or a product featuring an automated decision system for use in a consequential decision that produces algorithmic discrimination.
(f) A developer shall not use, sell, or share an automated decision system for use in a consequential decision or a product featuring an automated decision system for use in a consequential decision that has not passed an independent audit, in accordance with section 4193e of this title. If an independent audit finds that an automated decision system for use in a consequential decision does produce algorithmic discrimination, the developer shall not use, sell, or share the system until the algorithmic discrimination has been proven to be rectified by a post-adjustment audit.
(a) Prior to deployment of an automated decision system for use in a consequential decision, six months after deployment, and at least every 18 months thereafter for each calendar year an automated decision system is in use in consequential decisions after the first post-deployment audit, the developer and deployer shall be jointly responsible for ensuring that an independent audit is conducted in compliance with the provisions of this section to ensure that the product does not produce algorithmic discrimination and complies with the provisions of this subchapter. The developer and deployer shall enter into a contract specifying which party is responsible for the costs, oversight, and results of the audit. Absent an agreement of responsibility through contract, the developer and deployer shall be jointly and severally liable for any violations of this section. Regardless of final findings, the deployer or developer shall deliver all audits conducted under this section to the Attorney General. (b) A deployer or developer may contract with more than one auditor to fulfill the requirements of this section. 
(c) The audit shall include the following: (1) an analysis of data management policies, including whether personal or sensitive data relating to a consumer is subject to data security protection standards that comply with the requirements of applicable State law; (2) an analysis of the system validity and reliability according to each specified use case listed in the entity's reporting document filed by the developer or deployer pursuant to section 4193f of this title; (3) a comparative analysis of the system's performance when used on consumers of different demographic groups and a determination of whether the system produces algorithmic discrimination in violation of this subchapter by each intended and foreseeable identified use as identified by the deployer and developer pursuant to section 4193f of this title; (4) an analysis of how the technology complies with existing relevant federal, State, and local labor, civil rights, consumer protection, privacy, and data privacy laws; and (5) an evaluation of the developer's or deployer's documented risk management policy and program as set forth in section 4193g of this title for conformity with subsection 4193g(a) of this title. (e) The independent auditor shall have complete and unredacted copies of all reports previously filed by the deployer or developer pursuant to section 4193f of this title. (f) An audit conducted under this section shall be completed in its entirety without the assistance of an automated decision system. (g)(1) An auditor shall be an independent entity, including an individual, nonprofit, firm, corporation, partnership, cooperative, or association. 
(2) For the purposes of this subchapter, no auditor may be commissioned by a developer or deployer of an automated decision system used in consequential decisions if the auditor: (A) has already been commissioned to provide any auditing or nonauditing service, including financial auditing, cybersecurity auditing, or consulting services of any type, to the commissioning company in the past 12 months; (B) is or was involved in using, developing, integrating, offering, licensing, or deploying the automated decision system; (C) has or had an employment relationship with a developer or deployer that uses, offers, or licenses the automated decision system; or (D) has or had a direct financial interest or a material indirect financial interest in a developer or deployer that uses, offers, or licenses the automated decision system. (3) Fees paid to auditors may not be contingent on the result of the audit and the commissioning company shall not provide any incentives or bonuses for a positive audit result.
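Item (2)'s conflict-of-interest tests and item (3)'s fee rule are effectively a checklist. A hypothetical intake screen for auditor engagements might encode them directly (illustrative only; the flag names are ours):

```python
from dataclasses import dataclass

@dataclass
class AuditorProfile:
    # One flag per disqualifying condition in item (2)(A)-(D).
    served_commissioner_past_12_months: bool  # (A) prior auditing/non-auditing services
    involved_with_system: bool                # (B) developed/integrated/deployed it
    employment_relationship: bool             # (C) works or worked for dev/deployer
    financial_interest: bool                  # (D) direct or material indirect interest

def may_be_commissioned(profile: AuditorProfile,
                        fee_contingent_on_result: bool) -> bool:
    """True only if no item (2) conflict applies and, per item (3),
    the auditor's fee is not contingent on the audit result."""
    conflicts = (
        profile.served_commissioner_past_12_months
        or profile.involved_with_system
        or profile.employment_relationship
        or profile.financial_interest
    )
    return not conflicts and not fee_contingent_on_result
```

Any single conflict, or a contingent fee arrangement, disqualifies the engagement under this sketch.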
(3) The artificial intelligence, algorithm, or other software tool is fairly applied, including in accordance with any applicable regulations and guidance issued by the U.S. Department of Health and Human Services. (4) The artificial intelligence, algorithm, or other software tool is configured and applied in a standard, consistent manner for all health plans and insureds so that the resulting decisions are the same for all patients with similar clinical presentation and considerations.
(5) The use of the artificial intelligence, algorithm, or other software tool does not discriminate, directly or indirectly, against covered individuals in violation of State or federal law. (6) The artificial intelligence, algorithm, or other software tool is fairly and equitably applied, including in accordance with any applicable regulations and guidance issued by the U.S. Department of Health and Human Services.
(1) A developer of a high-risk artificial intelligence system shall use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination arising from the intended and contracted uses of the high-risk artificial intelligence system. In a civil action brought against a developer pursuant to this chapter, there is a rebuttable presumption that a developer of a high-risk artificial intelligence system used reasonable care as required by this subsection (1) if the developer complied with the requirements of this section.
(1) A deployer of a high-risk artificial intelligence system shall use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination. In a civil action brought against a deployer pursuant to this chapter, there is a rebuttable presumption that a deployer of a high-risk artificial intelligence system used reasonable care as required by this subsection (1) if the deployer complied with the provisions of this section.
(3)(a) Except as provided in (c) of this subsection (3), a deployer may not deploy or use a high-risk artificial intelligence system to make a consequential decision unless the deployer has completed an impact assessment for such high-risk artificial intelligence system. The deployer shall complete an impact assessment for a high-risk artificial intelligence system before the deployer initially deploys such high-risk artificial intelligence system and before a significant update to such high-risk artificial intelligence system is used to make a consequential decision. (b) An impact assessment completed pursuant to (a) of this subsection (3) must include, at a minimum: (i) A statement by the deployer disclosing the purpose, intended use cases, and deployment context of, and benefits afforded by, the high-risk artificial intelligence system; (ii) A statement by the deployer disclosing whether the deployment or use of the high-risk artificial intelligence system poses any known or reasonably foreseeable risk of algorithmic discrimination and, if so, the nature of such algorithmic discrimination and the steps that have been taken, to the extent feasible, to mitigate such risk; (iii) For each postdeployment impact assessment completed pursuant to this section, whether the intended use cases of the high-risk artificial intelligence system as updated were consistent with, or varied from, the developer's intended uses of such high-risk artificial intelligence system; (iv) A description of the categories of data the high-risk artificial intelligence system processes as inputs and the outputs such high-risk artificial intelligence system produces; (v) If the deployer used data to customize the high-risk artificial intelligence system, an overview of the categories of data the deployer used to customize such high-risk artificial intelligence system; (vi) A list of any metrics used to evaluate the performance and known limitations of the high-risk artificial intelligence 
system; (vii) A description of any transparency measures taken concerning the high-risk artificial intelligence system, including any measures taken to disclose to a consumer that such high-risk artificial intelligence system is in use when such high-risk artificial intelligence system is in use; (viii) A description of any postdeployment monitoring performed and user safeguards provided concerning such high-risk artificial intelligence system, including any oversight process established by the deployer to address issues arising from deployment or use of such high-risk artificial intelligence system as such issues arise; and (ix) An analysis of such high-risk artificial intelligence system's validity and reliability in accordance with standard industry practices. (c)(i) A single impact assessment may address a comparable set of high-risk artificial intelligence systems deployed or used by a deployer. (ii) If a deployer completes an impact assessment for the purpose of complying with another applicable law or regulation, such impact assessment satisfies the relevant requirements established in this section if such impact assessment is reasonably similar in scope and effect to the impact assessment that would otherwise be completed pursuant to this section. (iii) A deployer that completes an impact assessment pursuant to this section shall maintain such impact assessment and all records concerning the impact assessment for three years. Throughout the period of time that a high-risk artificial intelligence system is deployed and for a period of at least three years following the final deployment of the high-risk artificial intelligence system, the deployer shall retain all records concerning each impact assessment conducted on the high-risk artificial intelligence system, including all raw data used to evaluate the performance and known limitations of such system.
(1)(a) Beginning July 1, 2027, each deployer of a high-risk artificial intelligence system must use industry-standard means to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination. (b) In any enforcement action brought on or after July 1, 2027, by the attorney general pursuant to section 9 of this act, there is a rebuttable presumption that a deployer of a high-risk artificial intelligence system used reasonable care as required under this section if the deployer complied with this chapter. (2)(a) By July 1, 2027, and at least annually thereafter, a deployer or third party contracted by the deployer shall review the deployment of each high-risk artificial intelligence system deployed by the deployer to ensure that the high-risk artificial intelligence system is not causing algorithmic discrimination. (b) If a deployer subsequently discovers that the high-risk artificial intelligence system has caused algorithmic discrimination, the deployer, without unreasonable delay, but no later than 90 days after the date of the discovery, shall send to the attorney general, in a form and manner prescribed by the attorney general, a notice disclosing the discovery. (3) Nothing in this section may be construed to require a deployer to disclose any trade secret, or other confidential or proprietary information.
(1) Except as provided in subsection (6) of this section, a deployer that deploys a high-risk artificial intelligence system on or after July 1, 2027, or a third party contracted by the deployer for such purposes, shall complete an impact assessment for: (a) The high-risk artificial intelligence system; and (b) A deployed high-risk artificial intelligence system no later than 90 days after any intentional and substantial modification to such high-risk artificial intelligence system is made available. (2) Each impact assessment completed pursuant to this section must include, at a minimum, and to the extent reasonably known by, or available to, the deployer: (a) A statement by the deployer disclosing the purpose, intended use cases and deployment context of, and benefits afforded by, the high-risk artificial intelligence system; (b) An analysis of whether the deployment of the high-risk artificial intelligence system poses any known or reasonably foreseeable risks of algorithmic discrimination and, if so, the nature of such algorithmic discrimination and the steps that have been taken to mitigate such risks; (c) A description of the following: (i) The categories of data the high-risk artificial intelligence system processes as inputs; (ii) The outputs the high-risk artificial intelligence system produces; (iii) Any metrics used to evaluate the performance and known limitations of the high-risk artificial intelligence system; (iv) A description of any transparency measures taken concerning the high-risk artificial intelligence system, such as any measures taken to disclose to a consumer that such high-risk artificial intelligence system is in use when such high-risk artificial intelligence system is in use; and (v) A description of the postdeployment monitoring and user safeguards provided concerning such high-risk artificial intelligence system, such as the oversight process established by the deployer to address issues arising from deployment of such high-risk 
artificial intelligence system. (3) In addition to the information required under subsection (2)(c) of this section, each impact assessment completed following an intentional and substantial modification made to a high-risk artificial intelligence system on or after July 1, 2027, must include a statement disclosing the extent to which the high-risk artificial intelligence system was used in a manner that was consistent with, or varied from, the developer's intended uses of such high-risk artificial intelligence system. (4) A single impact assessment may address a comparable set of high-risk artificial intelligence systems deployed by a deployer. (5) If a deployer, or a third party contracted by the deployer, completes an impact assessment for the purpose of complying with another applicable law or regulation, such impact assessment satisfies the requirements established in this section if such impact assessment is reasonably similar in scope and effect to the impact assessment that would otherwise be completed pursuant to this subsection. (6) A deployer shall maintain the most recently completed impact assessment for a high-risk artificial intelligence system as required under this section, relevant records supporting the impact assessment, and prior impact assessments, if any, for a period of at least three years following the final deployment of the high-risk artificial intelligence system. (7) Nothing in this section may be construed to require a deployer to disclose any trade secret, or other confidential or proprietary information.
(1) The requirements in section 5 (1) through (3) of this act and section 3(2) of this act do not apply to a deployer if, at the time the deployer deploys a high-risk artificial intelligence system and at all times while the high-risk artificial intelligence system is deployed: (a) The deployer: (i) Employs fewer than 50 full-time equivalent employees; and (ii) Does not use the deployer's own data to train the high-risk artificial intelligence system; (b) The high-risk artificial intelligence system: (i) Is used for the intended uses that are disclosed by the deployer; and (ii) Continues learning based on data derived from sources other than the deployer's own data; and (c) The deployer makes available to consumers any impact assessment that: (i) The developer of the high-risk artificial intelligence system has completed and provided to the deployers; and (ii) Includes information that is substantially similar to the information in the impact assessment required under section 5 of this act. (2) Nothing in this section may be construed to require a deployer to disclose any trade secret, or other confidential or proprietary information.
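The small-deployer exemption above is conjunctive: every condition in subsection (1) must hold at deployment and at all times while the system is deployed. A minimal eligibility sketch (parameter names are our paraphrase of the statutory conditions, not statutory language):

```python
def deployer_exempt(employee_count: int,
                    trains_on_own_data: bool,
                    used_as_disclosed: bool,
                    learns_only_from_external_data: bool,
                    developer_assessment_shared: bool) -> bool:
    """All subsection (1) conditions must hold simultaneously:
    (a)(i) fewer than 50 FTEs, (a)(ii) no training on the deployer's own data,
    (b)(i) use matches disclosed intended uses, (b)(ii) continued learning
    draws only on non-deployer data, and (c) the developer's substantially
    similar impact assessment is made available to consumers."""
    return (employee_count < 50
            and not trains_on_own_data
            and used_as_disclosed
            and learns_only_from_external_data
            and developer_assessment_shared)
```

If any condition lapses while the system remains deployed, the exemption is lost and the full impact-assessment duties apply.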
(1) A developer of a high-risk artificial intelligence system shall use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination arising from the intended and contracted uses of the high-risk artificial intelligence system. In a civil action brought against a developer pursuant to this chapter, there is a rebuttable presumption that a developer of a high-risk artificial intelligence system used reasonable care as required by this subsection (1) if the developer complied with the requirements of this section. (2) A developer of a high-risk artificial intelligence system may not offer, sell, lease, give, or otherwise provide to a deployer or other developer a high-risk artificial intelligence system unless the developer makes available to the deployer or other developer: (a) A statement disclosing the intended uses of such high-risk artificial intelligence system; (b) Documentation disclosing the following: (i) The known or reasonably known limitations of such high-risk artificial intelligence system, including any and all known or reasonably foreseeable risks of algorithmic discrimination arising from the intended uses of such high-risk artificial intelligence system; (ii) The purpose of such high-risk artificial intelligence system and its intended outputs, benefits, and uses; (iii) A summary describing how such high-risk artificial intelligence system was evaluated for performance and for mitigation of algorithmic discrimination before it was licensed, sold, leased, given, or otherwise made available to a deployer or other developer; (iv) A description of the measures the developer has taken to mitigate known or reasonably foreseeable risks of algorithmic discrimination that may arise from the reasonably foreseeable deployment or use of such high-risk artificial intelligence system; and (v) A description of how the high-risk artificial intelligence system should be used, not be used, and be monitored by an 
individual when such system is used to make, or is a substantial factor in making, a consequential decision; and (c) Any additional documentation that is reasonably necessary to assist the deployer or other developer in understanding the outputs and monitoring performance of the high-risk artificial intelligence system for risks of algorithmic discrimination.
(1)(a) Beginning July 1, 2027, each deployer of a high-risk artificial intelligence system must use industry-standard means to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination.
(b) In any enforcement action brought on or after July 1, 2027, by the attorney general pursuant to section 10 of this act, there is a rebuttable presumption that a deployer of a high-risk artificial intelligence system used reasonable care as required under this section if the deployer complied with this chapter.
(2)(a) By July 1, 2027, and at least annually thereafter, a deployer or a third party contracted by the deployer shall review the deployment of each high-risk artificial intelligence system deployed by the deployer to ensure that the high-risk artificial intelligence system is not causing algorithmic discrimination.
(b) If a deployer subsequently discovers that the high-risk artificial intelligence system has caused algorithmic discrimination, the deployer, without unreasonable delay, but no later than 90 days after the date of the discovery, shall send to the attorney general, in a form and manner prescribed by the attorney general, a notice disclosing the discovery.
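The notice obligation in (2)(b) turns on one concrete figure: the notice is due no later than 90 days after the date of discovery. A minimal sketch of that deadline arithmetic follows; the function names are ours, and nothing here substitutes for the "without unreasonable delay" standard, which can require notice well before day 90.

```python
from datetime import date, timedelta

NOTICE_WINDOW_DAYS = 90  # (2)(b): notice due no later than 90 days after discovery

def notice_deadline(discovery_date: date) -> date:
    """Latest date to send the attorney general the required notice."""
    return discovery_date + timedelta(days=NOTICE_WINDOW_DAYS)

def notice_overdue(discovery_date: date, today: date) -> bool:
    """True once the 90-day window has fully elapsed."""
    return today > notice_deadline(discovery_date)
```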
(1) Except as provided in subsection (6) of this section, a deployer that deploys a high-risk artificial intelligence system on or after July 1, 2027, or a third party contracted by the deployer for such purposes, shall complete an impact assessment for:
(a) The high-risk artificial intelligence system; and
(b) A deployed high-risk artificial intelligence system, no later than 90 days after any intentional and substantial modification to such high-risk artificial intelligence system is made available.
(2) Each impact assessment completed pursuant to this section must include, at a minimum, and to the extent reasonably known by, or available to, the deployer:
(a) A statement by the deployer disclosing the purpose, intended use cases and deployment context of, and benefits afforded by, the high-risk artificial intelligence system;
(b) An analysis of whether the deployment of the high-risk artificial intelligence system poses any known or reasonably foreseeable risks of algorithmic discrimination and, if so, the nature of such algorithmic discrimination and the steps that have been taken to mitigate such risks;
(c) A description of the following:
(i) The categories of data the high-risk artificial intelligence system processes as inputs;
(ii) The outputs the high-risk artificial intelligence system produces;
(iii) Any metrics used to evaluate the performance and known limitations of the high-risk artificial intelligence system;
(iv) Any transparency measures taken concerning the high-risk artificial intelligence system, such as any measures taken to disclose to a consumer that such high-risk artificial intelligence system is in use when such high-risk artificial intelligence system is in use; and
(v) The postdeployment monitoring and user safeguards provided concerning such high-risk artificial intelligence system, such as the oversight process established by the deployer to address issues arising from deployment of such high-risk artificial intelligence system.
(3) In addition to the information required under subsection (2)(c) of this section, each impact assessment completed following an intentional and substantial modification made to a high-risk artificial intelligence system on or after July 1, 2027, must include a statement disclosing the extent to which the high-risk artificial intelligence system was used in a manner that was consistent with, or varied from, the developer's intended uses of such high-risk artificial intelligence system.
(4) A single impact assessment may address a comparable set of high-risk artificial intelligence systems deployed by a deployer.
(5) If a deployer, or a third party contracted by the deployer, completes an impact assessment for the purpose of complying with another applicable law or regulation, such impact assessment satisfies the requirements established in this section if such impact assessment is reasonably similar in scope and effect to the impact assessment that would otherwise be completed pursuant to this section.
(6) A deployer shall maintain the most recently completed impact assessment for a high-risk artificial intelligence system as required under this section, relevant records supporting the impact assessment, and prior impact assessments, if any, for a period of at least three years following the final deployment of the high-risk artificial intelligence system.
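Subsection (6) sets a floor, not a fixed date: records must survive for at least three years following the final deployment. A record-retention sketch under our own reading (function names hypothetical; "three years" taken as the calendar anniversary, with February 29 rolling to March 1):

```python
from datetime import date

RETENTION_YEARS = 3  # subsection (6): at least three years after final deployment

def retention_end(final_deployment: date) -> date:
    """Earliest date records could be discarded under this illustrative reading."""
    try:
        return final_deployment.replace(year=final_deployment.year + RETENTION_YEARS)
    except ValueError:  # final deployment on Feb 29 of a leap year
        return final_deployment.replace(
            year=final_deployment.year + RETENTION_YEARS, month=3, day=1
        )

def must_retain(final_deployment: date, today: date) -> bool:
    """True while the statutory minimum retention window is still open."""
    return today <= retention_end(final_deployment)
```

Because the statute says "at least," a conservative retention policy would keep records longer than this minimum, particularly while any related enforcement action remains possible.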