Plain Language
Before an employer may use any automated decision system output in an employment decision, the system must have completed predeployment testing and validation covering four areas: (i) system efficacy; (ii) compliance with applicable federal employment discrimination statutes, including Title VII, the ADEA, Title I of the ADA, Title II of GINA, the Equal Pay Act (FLSA Section 6(d)), Sections 501 and 505 of the Rehabilitation Act, and the Pregnant Workers Fairness Act; (iii) absence of discriminatory impact based on race, color, religion, sex (including pregnancy, sexual orientation, and gender identity), national origin, age, disability, or genetic information; and (iv) compliance with the NIST Artificial Intelligence Risk Management Framework (released January 26, 2023) or a successor framework. In addition, the system must be independently tested for discriminatory impact or bias at least annually, and the results of that independent testing must be made publicly available. The annual testing requirement is ongoing, not a one-time predeployment check.
Statutory Text
An employer may not:
(2) use an automated decision system output in making an employment related decision with respect to a covered individual unless:
    (A) the automated decision system used to generate the automated decision system output has had predeployment testing and validation with respect to:
        (i) the efficacy of the system;
        (ii) the compliance of the system with applicable employment discrimination laws, including Title VII of the Civil Rights Act of 1964 (42 U.S.C. 2000e et seq.), the Age Discrimination in Employment Act of 1967 (29 U.S.C. 621 et seq.), Title I of the Americans with Disabilities Act of 1990 (42 U.S.C. 12111 et seq.), Title II of the Genetic Information Nondiscrimination Act of 2008 (42 U.S.C. 2000ff et seq.), Section 6(d) of the Fair Labor Standards Act of 1938 (29 U.S.C. 206(d)), Sections 501 and 505 of the Rehabilitation Act of 1973 (29 U.S.C. 791 and 29 U.S.C. 793), and the Pregnant Workers Fairness Act (42 U.S.C. 2000gg);
        (iii) the lack of any potential discriminatory impact of the system, including discriminatory impact based on race, color, religion, sex (including pregnancy, sexual orientation, or gender identity), national origin, age, or disability, and genetic information (including family medical history); and
        (iv) the compliance of the system with the Artificial Intelligence Risk Management Framework released by the National Institute of Standards and Technology on January 26, 2023, or a successor framework;
    (B) the automated decision system is, not less than annually, independently tested for discriminatory impact described in clause (A)(iii) or potential biases and the results of the test are made publicly available;