Plain Language
Before deploying any automated decision-making system permitted under Section 10, the employer must complete an initial impact assessment 30 days prior to implementation. The assessment must be signed by both the human reviewer(s) responsible for meaningful human review of the system and an independent auditor. The auditor conflict-of-interest bar is strict: a person is disqualified if, at any point in the 5 years preceding the assessment, that person was involved in using, developing, offering, licensing, or deploying the system under review; had an employment relationship with its developer or deployer; or held a direct or material indirect financial interest in that developer or deployer. Subsequent assessments are required at least every 2 years and before any material change. Each assessment must include, in plain language: (1) the system's objectives and an evaluation of its ability to achieve them; (2) a description and evaluation of its algorithms, computational models, and AI tools, including design and training; (3) testing across seven risk categories (disparate impact on protected characteristics, accessibility for persons with disabilities, privacy and job quality, cybersecurity, public health and safety, foreseeable misuse, and handling of sensitive or personal data); and (4) a notification mechanism for affected employees.
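For teams building compliance tooling around this provision, the auditor disqualification test can be modeled as a simple predicate over a candidate's history. The sketch below is illustrative only, not legal advice; the data-structure fields and the interpretation of the 5-year lookback as a rolling window ending on the assessment date are assumptions.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List

LOOKBACK_YEARS = 5  # statutory lookback preceding the impact assessment


@dataclass
class AuditorHistory:
    """Illustrative record of a candidate auditor's ties to the system
    under review and its developer/deployer (field names are assumptions)."""
    # dates of involvement in using, developing, offering, licensing,
    # or deploying the system under review (clause (i))
    system_involvement: List[date] = field(default_factory=list)
    # dates of an employment relationship with a developer or deployer
    # that uses, offers, or licenses the system (clause (ii))
    employment: List[date] = field(default_factory=list)
    # dates of a direct or material indirect financial interest in such
    # a developer or deployer (clause (iii))
    financial_interest: List[date] = field(default_factory=list)


def within_lookback(event: date, assessment: date) -> bool:
    """True if the event falls inside the 5 years preceding the assessment."""
    try:
        cutoff = assessment.replace(year=assessment.year - LOOKBACK_YEARS)
    except ValueError:  # assessment dated Feb 29, lookback lands in a non-leap year
        cutoff = assessment.replace(month=2, day=28,
                                    year=assessment.year - LOOKBACK_YEARS)
    return cutoff <= event <= assessment


def is_independent(history: AuditorHistory, assessment: date) -> bool:
    """Apply the three disqualifying conditions of subsection (a):
    any qualifying tie within the lookback window bars the auditor."""
    disqualifying = (history.system_involvement
                     + history.employment
                     + history.financial_interest)
    return not any(within_lookback(d, assessment) for d in disqualifying)
```

For example, an auditor who left the deployer's payroll in mid-2021 would still be barred from signing a January 2024 assessment, because that employment falls inside the rolling 5-year window.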
Statutory Text
(a) An employer seeking to use or apply an automated decision-making system permitted under Section 10 shall conduct an initial impact assessment, 30 days prior to implementation of the automated decision-making system, bearing the signature of: (1) one or more individuals responsible for meaningful human review of the system; and (2) an independent auditor. A person shall not be an independent auditor under this subsection if, at any point in the 5 years preceding the impact assessment, that person: (i) was involved in using, developing, offering, licensing, or deploying the automated decision-making system under review; (ii) had an employment relationship with a developer or deployer that uses, offers, or licenses the automated decision-making system under review; or (iii) had a direct or material indirect financial interest in a developer or deployer that uses, offers, or licenses the automated decision-making system under review.

(b) Following the initial impact assessment, additional impact assessments shall be conducted at least once every 2 years and prior to any material changes to the automated decision-making system.
Each impact assessment shall include, in plain language: (1) a description of the objectives of the automated decision-making system; (2) an evaluation of the system's ability to achieve those objectives; (3) a description and evaluation of the algorithms, computational models, and artificial intelligence tools used, including: (A) a summary of underlying algorithms and artificial intelligence tools; and (B) a description of the design and training to be used; (4) testing for: (A) disparate impact or discrimination based on protected characteristics, including, but not limited to, discriminating against persons based on their race, color, religious creed, national origin, sex, disability or perceived disability, gender identity, sexual orientation, genetic information, pregnancy or a condition related to pregnancy, ancestry, or status as a veteran, and any actions to mitigate any impacts; (B) accessibility limitations for persons with disabilities; (C) privacy and job quality impacts, including wages, hours, and conditions and safeguards; (D) cybersecurity vulnerabilities and safeguards; (E) public health or safety risks; (F) foreseeable misuse and safeguards; and (G) use, storage, and control of sensitive or personal data; and (5) a notification mechanism for employees impacted by the use of the automated decision-making system.
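Subsection (b)'s reassessment cadence, at least once every 2 years and upon any material change, can likewise be sketched as a scheduling check. This is a minimal illustration under the assumption that the 2-year clock runs from the date of the last completed assessment; the function names are hypothetical.

```python
from datetime import date


def next_assessment_due(last_assessment: date) -> date:
    """Latest permissible date for the next periodic assessment under
    subsection (b): 2 years after the last completed assessment."""
    try:
        return last_assessment.replace(year=last_assessment.year + 2)
    except ValueError:  # last assessment dated Feb 29, anniversary in a non-leap year
        return last_assessment.replace(month=2, day=28,
                                       year=last_assessment.year + 2)


def assessment_required(last_assessment: date, today: date,
                        material_change_pending: bool) -> bool:
    """A new assessment is required when the 2-year clock has run out,
    or before any material change to the system regardless of the clock."""
    return material_change_pending or today >= next_assessment_due(last_assessment)
```

Note that a pending material change forces a new assessment even if the periodic deadline is years away, mirroring the "and prior to any material changes" language of subsection (b).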