Plain Language
Before deploying any automated decision-making system, state agencies must conduct a comprehensive impact assessment signed by the individual(s) responsible for meaningful human review. The assessment must cover system objectives, effectiveness evaluation, technical description (algorithms, training data), bias and discrimination testing across an extensive list of protected characteristics, cybersecurity and privacy risks, public health and safety risks, foreseeable misuse, data handling practices, and notification mechanisms for affected individuals. After the initial assessment, agencies must conduct reassessments at least every two years and before any material change that could alter the system's outcomes. This is among the most detailed government AI impact assessment requirements in U.S. state legislation.
Statutory Text
State agencies seeking to utilize or apply an automated decision-making system permitted under section four hundred two of this article with continued and operational meaningful human review shall conduct or have conducted an impact assessment substantially completed and bearing the signature of one or more individuals responsible for meaningful human review for the lawful application and use of such automated decision-making system. Following the first impact assessment, an impact assessment shall be conducted in accordance with this section at least once every two years. An impact assessment shall be conducted prior to any material change to the automated decision-making system that may change the outcome or effect of such system. Such impact assessments shall include:

(a) a description of the objectives of the automated decision-making system;

(b) an evaluation of the ability of the automated decision-making system to achieve its stated objectives;

(c) a description and evaluation of the objectives and development of the automated decision-making system, including:
    (i) a summary of the underlying algorithms, computational models, and artificial intelligence tools that are used within the automated decision-making system; and
    (ii) the design and training data used to develop the automated decision-making system process;

(d) testing for:
    (i) accuracy, fairness, bias and discrimination, and an assessment of whether the use of the automated decision-making system produces discriminatory results on the basis of a consumer's or a class of consumers' actual or perceived race, color, ethnicity, religion, national origin, sex, gender, gender identity, sexual orientation, familial status, biometric information, lawful source of income, or disability, and outlines mitigations for any identified performance differences in outcomes across relevant groups impacted by such use;
    (ii) any cybersecurity vulnerabilities and privacy risks resulting from the deployment and use of the automated decision-making system, and the development or existence of safeguards to mitigate the risks;
    (iii) any public health or safety risks resulting from the deployment and use of the automated decision-making system;
    (iv) any reasonably foreseeable misuse of the automated decision-making system and the development or existence of safeguards against such misuse;

(e) the extent to which the deployment and use of the automated decision-making system requires input of sensitive and personal data, how that data is used and stored, and any control users may have over their data; and

(f) the notification mechanism or procedure, if any, by which individuals impacted by the utilization of the automated decision-making system may be notified of the use of such automated decision-making system and of the individual's personal data, and informed of their rights and options relating to such use.
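For agencies operationalizing these requirements, the statutory components above can be tracked as a simple checklist. The sketch below is a hypothetical illustration only: the class and field names are assumptions, not part of the statute, and the two-year reassessment check uses a plain 730-day calendar interval as a simplifying assumption.

```python
from dataclasses import dataclass, field, fields
from datetime import date, timedelta

# Hypothetical checklist mirroring subdivisions (a)-(f) of the impact
# assessment requirement; all names here are illustrative, not statutory.
@dataclass
class ImpactAssessment:
    signed_by_human_reviewer: bool = False      # signature of individual(s) responsible for meaningful human review
    objectives_described: bool = False          # (a) system objectives
    effectiveness_evaluated: bool = False       # (b) ability to achieve stated objectives
    technical_description: bool = False         # (c) algorithms, models, training data
    bias_testing: bool = False                  # (d)(i) accuracy, fairness, bias and discrimination
    cybersecurity_privacy_review: bool = False  # (d)(ii) cybersecurity and privacy risks
    health_safety_review: bool = False          # (d)(iii) public health or safety risks
    misuse_review: bool = False                 # (d)(iv) reasonably foreseeable misuse
    data_handling_described: bool = False       # (e) sensitive/personal data use, storage, user control
    notification_mechanism: bool = False        # (f) notice to impacted individuals
    completed_on: date = field(default_factory=date.today)

    def is_complete(self) -> bool:
        """True only if every statutory component has been addressed."""
        return all(
            v for v in (getattr(self, f.name) for f in fields(self))
            if isinstance(v, bool)
        )

    def reassessment_due(self, today: date) -> bool:
        # Statute requires a new assessment at least once every two years;
        # a material change to the system would also trigger one (not modeled here).
        return today >= self.completed_on + timedelta(days=730)
```

A partially filled-in assessment reports `is_complete()` as `False`, which makes the gap between "assessment started" and "assessment substantially completed and signed" explicit before deployment.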