H-01
Human Oversight & Fairness
Human Oversight of Automated Decisions
Applies to: Developer, Deployer, Professional, Government. Sector: Employment, Financial Services, Healthcare, Government System.
Bills — Enacted: 2 unique bills
Bills — Proposed: 55
Last Updated: 2026-03-29
Core Obligation

When AI systems make or inform consequential decisions about individuals — typically covering employment, credit, housing, insurance, healthcare, and public benefits — those individuals must have meaningful rights to understand, review, challenge, and in high-stakes contexts override those decisions. The specific rights and processes vary by jurisdiction and context, but the core principle is that individuals should not be subject to consequential automated decisions without meaningful recourse.

Sub-Obligations (6)

H-01.1 Explanation right (1 enacted, 34 proposed): The individual must receive an explanation of the principal factors that drove the automated decision, in plain language specific enough to be actionable — not a generic statement that AI was used.

H-01.2 Data disclosure right (0 enacted, 19 proposed): The specific data inputs used in making the decision about this individual must be disclosed, including the right to know what data was used and to correct inaccurate data.

H-01.3 Pre-decision notice (2 enacted, 35 proposed): The individual must be notified before a consequential automated decision is made — informing them that an automated system will be used and what categories of decisions it can make.

H-01.4 Right to request human review (1 enacted, 25 proposed): The individual must have a clear, accessible mechanism to request human review of an automated decision. The right must be disclosed at or near the time of the decision. Human review must be available, but the individual must invoke it.

H-01.5 Appeal and contestation right (1 enacted, 22 proposed): A defined process must exist for the individual to formally contest an automated decision and receive a substantive response explaining the outcome. The process must be accessible without unreasonable burden.

H-01.6 Mandatory pre-action human sign-off (0 enacted, 24 proposed): Before action is taken on an AI recommendation in defined high-stakes contexts, a qualified human reviewer must affirmatively review and authorize the decision. The human must have authority and practical ability to override — not merely ratify — the AI output.
Bills That Map This Requirement (57 bills)
Pending 2027-01-01
H-01.4
Bus. & Prof. Code § 22627(a)-(c)
Plain Language
During business hours (8 a.m. to 6 p.m. daily), operators must provide human customer service and connect consumers to a human customer service agent within five minutes of a request. For telephone platforms, no individual hold may exceed five minutes and cumulative hold time may not exceed ten minutes per call; if a chatbot answers, human assistance must be provided within five minutes after the call is made. For online platforms, customers must be given the option to request human assistance, and that assistance must be provided within five minutes of the request. This is a human escalation right — the consumer must invoke it, but the operator must fulfill it within the specified timeframe. The obligation applies only during the defined business hours window.
(a) During the business hours of 8 a.m. to 6 p.m. daily, an operator of a large private business who provide goods and services to consumers in California shall provide consumers with human customer service support and communications. During these times, an operator shall connect a person interacting with a customer service chatbot, or automated customer support system, to a customer service agent within five minutes after a request for human customer service is made.
(b) For telephonic customer service platforms, the business shall ensure all of the following:
(1) That a customer call be answered quickly and, after the call is answered, that a customer is not placed on hold for more than 5 minutes at any point after the call is answered, and that cumulative hold times for a call not exceed more than 10 minutes total.
(2) If a call is answered by a customer service chatbot, the operator of the telephonic platform shall provide human assistance within five minutes after the call is made.
(c) For online customer service platforms, the business shall ensure that a customer is given option to request customer service assistance from a human being and, upon that request, the operator of the online platform shall provide human assistance within five minutes after the request is made.
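Read purely as timing rules, subsections (a) through (c) reduce to a handful of checks against a single call record. Here is a minimal sketch of those checks, assuming an invented data model (field names such as hold_segments and minutes_to_human are hypothetical; the bill prescribes no format and leaves details like when each clock starts to interpretation):

```python
from dataclasses import dataclass, field

BUSINESS_OPEN, BUSINESS_CLOSE = 8, 18  # 8 a.m. to 6 p.m. daily, per subdivision (a)

@dataclass
class CallRecord:
    """Hypothetical record of one telephonic customer service interaction."""
    start_hour: int              # local hour the call was placed (0-23)
    answered_by_chatbot: bool    # whether a chatbot answered the call
    minutes_to_human: float      # minutes until a human agent was provided
    hold_segments: list[float] = field(default_factory=list)  # minutes per hold

def apparent_violations(call: CallRecord) -> list[str]:
    """Flag timing rules in § 22627(a)-(b) that this call appears to break."""
    issues: list[str] = []
    if not (BUSINESS_OPEN <= call.start_hour < BUSINESS_CLOSE):
        return issues  # the obligation applies only during business hours
    if call.answered_by_chatbot and call.minutes_to_human > 5:
        issues.append("(b)(2): human assistance not within five minutes of the call")
    if any(seg > 5 for seg in call.hold_segments):
        issues.append("(b)(1): an individual hold exceeded five minutes")
    if sum(call.hold_segments) > 10:
        issues.append("(b)(1): cumulative hold time exceeded ten minutes")
    return issues
```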
Pending 2027-01-01
H-01.6
Labor Code § 2821(c)
Plain Language
Employers may not use or deploy AI, clinical decision support systems, or other technology in a manner that replaces or limits a health care worker's exercise of professional judgment in patient care. This is an affirmative prohibition — the employer must ensure that technology is deployed as a supplement to, not a replacement for, clinical judgment. In practice, this means AI outputs in patient care settings must remain advisory and cannot be treated as binding directives that override worker discretion. This goes beyond requiring that human review be available upon request — it categorically prohibits technology from supplanting professional judgment.
(c) An employer shall not use or deploy technology to replace or limit a worker's use of professional judgment in patient care.
Pending 2026-01-01
H-01.1, H-01.3
Bus. & Prof. Code § 22756.2(a)(1)-(5)
Plain Language
When a deployer uses a high-risk automated decision system to make a decision about a person, the deployer must notify that person and provide specific disclosures: the system's purpose and the specific decision made, how the system was used, the types of data used, the deployer's contact information, and a link to the deployer's public summary statement. This is a post-decision notification — it does not require pre-decision notice, but it does require disclosure of how and why the system was used in the specific decision affecting that individual.
(a) If a deployer uses a high-risk automated decision system to make a decision regarding a natural person, the deployer shall notify the natural person of that fact and disclose to that natural person all of the following: (1) The purpose of the high-risk automated decision system and the specific decision it was used to make. (2) How the high-risk automated decision system was used to make the decision. (3) The type of data used by the high-risk automated decision system. (4) Contact information for the deployer. (5) A link to the statement required by subdivision (b).
Pending 2026-01-01
H-01.4
Bus. & Prof. Code § 22756.2(c)
Plain Language
Deployers must provide affected individuals an opportunity to appeal decisions made by a high-risk automated decision system for human review, to the extent technically feasible. The 'technically feasible' qualifier gives deployers some flexibility, but the baseline obligation is to offer a human review appeal process. The statute does not prescribe the format, timeline, or substantive standard for the appeal.
(c) A deployer shall provide, as technically feasible, a natural person that is the subject of a decision made by a high-risk automated decision system an opportunity to appeal that decision for review by a natural person.
Failed 2026-01-01
H-01.3
Lab. Code § 1522(a), (c), (e)
Plain Language
Employers must provide a written pre-use notice to any worker (or their authorized representative) who will foreseeably be directly affected by an ADS used for employment-related decisions other than hiring. The notice must be delivered at least 30 days before initial deployment, by April 1, 2026 for systems already in use, or within 30 days of hiring a new worker. The notice must be a standalone, plain-language communication in the worker's routine language, and must describe the types of decisions affected, the categories and sources of worker data collected, key parameters that disproportionately affect ADS output, the ADS creator, any applicable quotas, and the worker's right to access and correct their data. This is a proactive pre-deployment obligation — employers cannot wait for workers to ask.
(a) An employer shall provide a written notice that an ADS, for the purpose of making employment-related decisions, not including hiring, is in use at the workplace to a worker who will foreseeably be directly affected by the ADS, or their authorized representative, according to the following: (1) At least 30 days before an ADS is first deployed by the employer. (2) If the employer is using an ADS to assist in making employment-related decisions at the time this title takes effect, no later than April 1, 2026. (3) To a new worker within 30 days of hiring the worker. (c) A written notice required by this section shall be all of the following: (1) Written in plain language as a separate, stand-alone communication. (2) In the language in which routine communications and other information are provided to workers. (3) Provided via a simple and easy-to-use method, including, but not limited to, an email, hyperlink, or other written format. (e) A notice issued pursuant to subdivision (a) shall contain the following information: (1) The type of employment-related decisions potentially affected by the ADS. (2) A general description of the categories of worker input data the ADS will use, the sources of worker input data, and how worker input data will be collected. (3) Any key parameters known to disproportionately affect the output of the ADS. (4) The individuals, vendors, or entities that created the ADS. (5) If applicable, a description of each quota set or measured by an ADS to which the worker is subject, including the quantified number of tasks to be performed or products to be produced, and any potential adverse employment action that could result from failure to meet the quota, as well as whether those quotas are subject to change and if any notice is given of changes in quotas. (6) A description of the worker's right to access and correct the worker's data used by the ADS. (7) That the employer is prohibited from retaliating against workers for exercising their rights described in paragraph (6).
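The three notice triggers in subdivision (a) are simple date arithmetic. A sketch under stated assumptions: days are read as calendar days (the bill does not say otherwise), and the function name and parameters are invented for illustration:

```python
from datetime import date, timedelta

EXISTING_USE_DEADLINE = date(2026, 4, 1)  # subdivision (a)(2)

def notice_deadline(first_deploy: date | None = None,
                    already_in_use: bool = False,
                    hire_date: date | None = None) -> date:
    """Latest date the written notice may be given under Lab. Code § 1522(a)."""
    if hire_date is not None:
        return hire_date + timedelta(days=30)   # (a)(3): new worker
    if already_in_use:
        return EXISTING_USE_DEADLINE            # (a)(2): ADS already in use
    return first_deploy - timedelta(days=30)    # (a)(1): 30 days before first deployment

# A first deployment planned for 2026-09-01 would require notice by 2026-08-02.
assert notice_deadline(first_deploy=date(2026, 9, 1)) == date(2026, 8, 2)
```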
Failed 2026-01-01
H-01.3
Lab. Code § 1522(d)
Plain Language
When an employer uses an ADS to make hiring decisions for a particular position, the employer must notify each applicant upon receiving their application. Notification may be delivered via automated reply or included in the job posting itself. Unlike the pre-deployment notice to existing workers in § 1522(a), this hiring-specific notice does not require the detailed content items (data categories, quotas, vendor identity, etc.) specified in § 1522(e). This applies only where the employer will actually use an ADS for the position the applicant is applying for.
(d) An employer shall notify a job applicant upon receiving the application that the employer utilizes an ADS when making hiring decisions, if the employer will use the ADS in making decisions for that position. Notifications may be made using an automatic reply mechanism or on a job posting.
Failed 2026-01-01
H-01.6
Lab. Code § 1524(c)(1)-(2)
Plain Language
Employers face two layered human oversight requirements for discipline, termination, and deactivation decisions. First, an employer may never rely solely on ADS output for these decisions — a human must always be in the loop. Second, when ADS output is the primary basis for such a decision, the employer must assign a human reviewer who affirmatively reviews the ADS output and also compiles and reviews other relevant information such as supervisor evaluations, personnel files, work product, peer reviews, and witness interviews. The 'primarily' threshold is lower than 'solely' — even partial reliance that is the dominant factor triggers the human review obligation. This creates a meaningful human-in-the-loop requirement, not merely a rubber-stamp.
(c) (1) An employer shall not rely solely on an ADS when making a discipline, termination, or deactivation decision. (2) When an employer relies primarily on ADS output to make a discipline, termination, or deactivation decision, the employer shall use a human reviewer to review the ADS output and compile and review other information that is relevant to the decision, if any. For purposes of this paragraph, "other information" may include, but is not limited to, any of the following: (A) Supervisory or managerial evaluations. (B) Personnel files. (C) Work product of workers. (D) Peer reviews. (E) Witness interviews, that may include relevant online customer reviews.
Failed 2026-01-01
H-01.1
Lab. Code § 1526(a)-(b)
Plain Language
When an employer primarily relied on ADS output to make a discipline, termination, or deactivation decision, the employer must provide a written post-decision notice to the affected worker at the time the worker is informed of the decision. The notice must be plain-language, standalone, in the worker's routine language, and must include: (1) a human contact for more information and data access requests, (2) disclosure that an ADS was used, (3) the worker's right to request a copy of their ADS data, and (4) notice that retaliation is prohibited. This is a post-decision transparency obligation — it provides the worker with information needed to understand and potentially challenge the decision. The trigger is 'primarily relied on,' meaning it only applies when ADS output was the dominant factor, not merely one of several inputs.
(a) An employer that primarily relied on an ADS to make a discipline, termination, or deactivation decision shall provide the affected worker with a written notice at the time the employer informs the worker of the decision. The notice shall be all of the following: (1) Written in plain language as a separate, stand-alone communication. (2) In the language in which routine communications and other information are provided to workers. (3) Provided via a simple and easy-to-use method, including an email, hyperlink, or other written format. (b) A notice issued pursuant to subdivision (a) shall contain all of the following information: (1) The human to contact for more information about the decision and the ability to request a copy of the worker's own worker data relied on in the decision. (2) That the employer used an ADS to assist the employer in one or more discipline, termination, or deactivation decisions with respect to the worker. (3) That the worker has the right to request a copy of the worker's data used by the ADS. (4) That the employer is prohibited from retaliating against the worker for exercising their rights under this part.
Pending 2027-01-01
H-01.6
Lab. Code § 1522(b)-(c)
Plain Language
Employers may never rely solely on an ADS for disciplinary, termination, or deactivation decisions. When an ADS output is used to assist such a decision, a human reviewer must conduct an independent investigation and compile corroborating evidence — which may include supervisory evaluations, personnel files, work product, peer reviews, or witness interviews. Critically, if the human reviewer cannot corroborate the ADS output or concludes it is inaccurate, incomplete, or misleading, the employer is prohibited from using that output to discipline, terminate, or deactivate the worker. This goes beyond standard human-in-the-loop requirements by requiring affirmative corroboration, not merely human review.
(b) (1) An employer shall not rely solely on an ADS when making a disciplinary, termination, or deactivation decision.
(2) If an employer uses an ADS output to assist in making a disciplinary, termination, or deactivation decision, the employer shall direct a human reviewer to conduct an independent investigation and compile corroborating or supporting information for the decision. For purposes of this paragraph, "other information" may include, but is not limited to, any of the following:
(A) Supervisory or managerial evaluations.
(B) Personnel files.
(C) Work product of workers.
(D) Peer reviews.
(E) Witness interviews, that may include relevant online customer reviews.
(c) If an employer cannot corroborate the ADS output, or the human reviewer has concluded that the ADS output is inaccurate, incomplete, or misleading, the employer shall not use the ADS output to discipline, terminate, or deactivate a worker.
Pending 2027-01-01
H-01.1, H-01.3
Lab. Code § 1524(a)-(b)
Plain Language
When an employer uses an ADS to assist in a disciplinary, termination, or deactivation decision, it must provide the affected worker with a written postuse notice at the time the decision is communicated. The notice must be a standalone plain-language document in the worker's routine communication language, delivered via an accessible method. It must disclose: (1) that an ADS was used; (2) that a human reviewer independently investigated and corroborated the output; (3) contact information for a human the worker can reach for more information and to exercise data access rights; and (4) that retaliation for exercising rights under this part is prohibited. This is a post-decision notice — not a pre-decision notification — which is unusual compared to most automated decision notice frameworks.
(a) An employer that uses an ADS to assist in making a disciplinary, termination, or deactivation decision shall provide the affected worker with a written postuse notice at the time the employer informs the worker of the decision. The notice shall comply with all of the following:
(1) It shall be written in plain language as a separate, stand-alone communication.
(2) It shall be in the language in which routine communications and other information are provided to workers.
(3) It shall be provided via a simple and easy-to-use method, including an email, hyperlink, or other written format.
(b) The post-use notice shall contain all of the following information:
(1) That the employer used an ADS to assist the employer in the disciplinary, termination, or deactivation decision with respect to the worker.
(2) That a human reviewer conducted an independent investigation and compiled evidence to corroborate the ADS output.
(3) Contact information for the human that the worker may contact for more information about the decision and the worker's right to access a copy of their own data and corroborating evidence that was used in the decision.
(4) That the employer is prohibited from retaliating against the worker for exercising their rights under this part.
Pending 2027-01-01
H-01.1, H-01.2
Lab. Code § 1524(c)
Plain Language
When a worker exercises their data access right, the employer must provide a written plain-language document accessible away from the workplace that details: the specific decision the ADS was used for, the specific input data used and output produced by the ADS, any additional corroborating information used, the ADS vendor name and product name, and a copy of any completed impact assessments for that ADS. This is a detailed transparency response triggered by a worker's request — it goes well beyond the initial postuse notice by providing the actual data, outputs, corroborating evidence, and vendor identification. Notably, the duty covers any completed impact assessments, so an employer must disclose them where they exist, although this section does not itself require that an assessment be conducted.
(c) When responding to a data access request pursuant to this section, an employer shall provide to the worker a written, plain language document using a simple and easy-to-use method that is accessible away from the workplace containing all of the following:
(1) The specific decision for which the employer used the ADS.
(2) The specific worker input data that the ADS used, and the specific worker output produced by the ADS.
(3) Any additional corroborating or supporting information used in addition to the ADS output in making the decision.
(4) The name of the vender or entity that created the ADS and the product name of the ADS.
(5) A copy of any completed impact assessments regarding the ADS in question.
Pending 2027-01-01
H-01.3
C.R.S. § 6-1-1704(1)-(2)
Plain Language
Before using a covered ADMT to materially influence a consequential decision, the deployer must give consumers clear and conspicuous notice that ADMT is or will be used and tell them how to get more information. The deployer can satisfy this requirement by maintaining a prominent public notice at consumer interaction points — such as a link or posting near the transaction. This is a pre-decision disclosure, not a post-adverse-outcome notice.
(1) PRIOR TO A DEPLOYER USING A COVERED ADMT TO MATERIALLY INFLUENCE A CONSEQUENTIAL DECISION, THE DEPLOYER SHALL PROVIDE A CLEAR AND CONSPICUOUS NOTICE TO A CONSUMER THAT THE DEPLOYER USED OR WILL USE A COVERED ADMT IN A CONSEQUENTIAL DECISION AFFECTING THE CONSUMER AND INSTRUCTIONS REGARDING HOW THE CONSUMER MAY OBTAIN THE ADDITIONAL INFORMATION DESCRIBED IN THIS SECTION. (2) A DEPLOYER COMPLIES WITH SUBSECTION (1) OF THIS SECTION BY MAINTAINING A PROMINENT PUBLIC NOTICE THAT IS REASONABLY ACCESSIBLE AT POINTS OF CONSUMER INTERACTION, INCLUDING THROUGH A LINK OR POSTING THAT IS REASONABLY PROXIMATE TO THE INTERACTION OR TRANSACTION IN WHICH A CONSEQUENTIAL DECISION MAY OCCUR.
Pending 2027-01-01
H-01.1, H-01.2
C.R.S. § 6-1-1704(3)(a)-(c)
Plain Language
When a covered ADMT materially influences a consequential decision that results in an adverse outcome for a consumer, the deployer must provide within 30 days: (1) a plain-language explanation of the decision and the ADMT's role; (2) instructions and a simple process for requesting additional information about the ADMT (including its name, version, developer, and categories and sources of personal data used); and (3) an explanation of the consumer's rights to data correction and human review. The deployer's obligation to disclose input details is limited to information the developer has provided under § 6-1-1702. Trade secrets and legally protected information may be withheld, but the deployer must notify the consumer of any withholding. The attorney general must adopt rules by January 1, 2027 to further clarify these post-adverse-outcome disclosure requirements.
(3) IF A DEPLOYER USES A COVERED ADMT TO MATERIALLY INFLUENCE A CONSEQUENTIAL DECISION THAT RESULTS IN AN ADVERSE OUTCOME FOR A CONSUMER, THE DEPLOYER SHALL PROVIDE WITHIN THIRTY DAYS AFTER MAKING THE DECISION: (a) A PLAIN LANGUAGE DESCRIPTION OF THE CONSEQUENTIAL DECISION AND THE ROLE THE COVERED ADMT PLAYED IN THE CONSEQUENTIAL DECISION; (b) INSTRUCTIONS AND A SIMPLE-TO-FOLLOW PROCESS TO REQUEST ADDITIONAL INFORMATION ABOUT THE COVERED ADMT AND THE INPUTS, INCLUDING THE NAME OF THE COVERED ADMT, THE COVERED ADMT VERSION NUMBER, IF APPLICABLE, THE COVERED ADMT DEVELOPER, AND THE TYPES, CATEGORIES, AND SOURCES OF PERSONAL DATA USED, TO THE EXTENT THE DEPLOYER RECEIVES THE NECESSARY INFORMATION FROM THE DEVELOPER IN COMPLIANCE WITH SECTION 6-1-1702; AND (c) AN EXPLANATION OF THE CONSUMER RIGHTS DESCRIBED IN SECTION 6-1-1705 AND HOW TO EXERCISE THEM.
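For compliance tooling, subsection (3) amounts to a thirty-day deadline plus a checklist of disclosure elements. One possible representation, with invented field names and a calendar-day reading of the clock (the statute does not define any data model):

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class AdverseOutcomeDisclosure:
    """Checklist of C.R.S. § 6-1-1704(3) elements; field names are illustrative."""
    decision_description: str          # (3)(a): plain-language decision and ADMT role
    admt_name: str                     # (3)(b): name of the covered ADMT
    admt_version: str | None           # (3)(b): version number, if applicable
    admt_developer: str                # (3)(b): developer of the covered ADMT
    data_types_and_sources: list[str]  # (3)(b): to the extent received from the developer
    rights_explanation: str            # (3)(c): § 6-1-1705 rights and how to exercise them

def disclosure_due(decision_date: date) -> date:
    """Thirty days after the consequential decision is made."""
    return decision_date + timedelta(days=30)
```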
Pending 2027-01-01
H-01.4
C.R.S. § 6-1-1705(1)(a)(II)
Plain Language
When a consumer experiences an adverse outcome from a consequential decision materially influenced by a covered ADMT, the consumer may request meaningful human review and reconsideration of the decision, and the deployer must provide it to the extent commercially reasonable. The reviewing individual must have authority to approve, modify, or override the decision; must consider primary evidence; must be trained; must not default to the system output; and must understand the output's intended use, limitations, input categories, and principal factors. The 'commercially reasonable' qualifier gives deployers some flexibility but does not eliminate the obligation. FERPA-subject deployers may comply through existing student record amendment and appeal processes.
(1) (a) WHEN A CONSUMER EXPERIENCES AN ADVERSE OUTCOME RESULTING FROM A CONSEQUENTIAL DECISION IN WHICH A COVERED ADMT MATERIALLY INFLUENCES THE CONSEQUENTIAL DECISION, THE CONSUMER MAY REQUEST AND THE DEPLOYER SHALL PROVIDE IN RESPONSE TO THE REQUEST: ... (II) AN OPPORTUNITY FOR MEANINGFUL HUMAN REVIEW AND RECONSIDERATION OF THE CONSEQUENTIAL DECISION, TO THE EXTENT COMMERCIALLY REASONABLE.
Pending 2027-01-01
H-01.1, H-01.2, H-01.4
C.R.S. § 6-1-1708(3)(a)-(e)
Plain Language
HIPAA covered entities and their business associates are broadly exempt from Part 17, except for employment-related consequential decisions. However, even exempt healthcare entities must: (1) provide patients a general notice that they use advanced technologies including ADMT, and (2) when using ADMT to determine financial assistance eligibility, provide specific disclosures including a plain-language description of the decision and ADMT's role, the types of information relied upon, how to request data correction under HIPAA, and how to request human review. These financial assistance disclosures may be provided either through advance general disclosure or within 30 days after an adverse outcome. Healthcare providers must be operating from a Colorado location to qualify for the HIPAA exemption.
(3) (a) SECTIONS 6-1-1701, 6-1-1702, 6-1-1703, 6-1-1704, 6-1-1705, AND 6-1-1706 DO NOT APPLY TO A COVERED ENTITY WITHIN THE MEANING OF THE FEDERAL "HEALTH INSURANCE PORTABILITY AND ACCOUNTABILITY ACT OF 1996", 42 U.S.C. SECS. 1320d TO 1320d-9, AND THE REGULATIONS PROMULGATED UNDER THE FEDERAL ACT, OR A COVERED ENTITY'S BUSINESS ASSOCIATES FOR ANY SERVICES RENDERED TO A COVERED ENTITY, TO THE EXTENT THE COVERED ENTITY IS DOING BUSINESS IN COLORADO, EXCEPT FOR A CONSEQUENTIAL DECISION RELATED TO EMPLOYMENT OR AN EMPLOYMENT OPPORTUNITY. (b) NOTWITHSTANDING SUBSECTION (3)(a) OF THIS SECTION, FOR A COVERED ENTITY THAT IS A HEALTH-CARE PROVIDER, AS DEFINED IN 45 CFR 160.103, THIS SUBSECTION (3) APPLIES ONLY IF THE HEALTH-CARE PROVIDER IS OPERATING FROM A LOCATION WITHIN COLORADO. (c) A COVERED ENTITY SHALL PROVIDE PATIENTS WITH A GENERAL NOTICE OF USE OF ADVANCED TECHNOLOGIES, INCLUDING A COVERED ADMT. THE NOTICE MAY BE INCORPORATED WITH OTHER NOTICES DESCRIBING PATIENT RIGHTS AND HOW THE COVERED ENTITY PROVIDES CARE. (d) NOTWITHSTANDING SUBSECTION (3)(a) OF THIS SECTION, A COVERED ENTITY THAT USES A COVERED ADMT TO DETERMINE A PATIENT'S ELIGIBILITY FOR FINANCIAL ASSISTANCE, INCLUDING DISCOUNTED CARE AS DESCRIBED IN SECTION 25.5-3-502, SHALL PROVIDE A PATIENT THE FOLLOWING DISCLOSURES: (I) A PLAIN LANGUAGE DESCRIPTION OF THE CONSEQUENTIAL DECISION AND THE ROLE OF THE COVERED ADMT IN THE CONSEQUENTIAL DECISION; (II) THE TYPES OF INFORMATION ABOUT THE INDIVIDUAL THE COVERED ENTITY RELIED UPON IN MAKING ITS DETERMINATION OF ELIGIBILITY, EXCEPT FOR TRADE SECRETS AND OTHER CONFIDENTIAL OR LEGALLY PROTECTED INFORMATION; (III) INFORMATION ON HOW TO REQUEST CORRECTION OF MATERIALLY INACCURATE PERSONAL DATA HELD BY THE COVERED ENTITY CONSISTENT WITH THE FEDERAL "HEALTH INSURANCE PORTABILITY AND ACCOUNTABILITY ACT OF 1996", 42 U.S.C. SECS. 1320d TO 1320d-9 AND SECTION 25.5-3-502; AND (IV) INFORMATION ON HOW TO REQUEST MEANINGFUL HUMAN REVIEW OR RECONSIDERATION, WHERE APPLICABLE. (e) A COVERED ENTITY MAY COMPLY WITH SUBSECTION (3)(d) OF THIS SECTION THROUGH EITHER AN ADVANCE GENERAL DISCLOSURE OF THE INFORMATION REQUIRED BY SUBSECTION (3)(d) OF THIS SECTION OR THROUGH A NOTICE PROVIDED WITHIN THIRTY CALENDAR DAYS AFTER AN ADVERSE OUTCOME. THIS SECTION DOES NOT CREATE A SEPARATE AND DUPLICATIVE DISCLOSURE PROCESS OR APPEAL PROCESS IF THE REVIEW OPPORTUNITIES AND INFORMATION DESCRIBED IN SUBSECTION (3)(d) OF THIS SECTION ARE PROVIDED.
Enacted 2026-06-30
H-01.1, H-01.3
C.R.S. § 6-1-1703(4)(a)
Plain Language
Deployers must, no later than the time the high-risk AI system is deployed to make or substantially factor in a consequential decision about a consumer, provide certain disclosures. The specific disclosures required are enumerated in the original SB 205 § 6-1-1703(4)(a) (e.g., that an AI system is being used, categories of decisions it makes, contact information for the deployer, a description of the purpose). This is a pre-decision or at-decision timing requirement — the deployer cannot make the consequential decision and disclose later.
(4) (a) On and after June 30, 2026, and no later than the time that a deployer deploys a high-risk artificial intelligence system to make, or be a substantial factor in making, a consequential decision concerning a consumer, the deployer shall:
Enacted 2026-06-30
H-01.1, H-01.4, H-01.5
C.R.S. § 6-1-1703(4)(b)
Plain Language
When a high-risk AI system makes or substantially factors into a consequential decision that is adverse to a consumer, the deployer must provide the consumer with specific information. The original SB 205 § 6-1-1703(4)(b) requires: a statement that an AI system was used, contact information for the deployer, a description of the purpose of the AI system, information about the consumer's right to opt out and to appeal, and other relevant details. This post-adverse-decision disclosure obligation gives affected consumers the information they need to exercise appeal rights.
(b) On and after June 30, 2026, a deployer that has deployed a high-risk artificial intelligence system to make, or be a substantial factor in making, a consequential decision concerning a consumer shall, if the consequential decision is adverse to the consumer, provide to the consumer:
Pending 2026-10-01
H-01.3
Sec. 5
Plain Language
Before making an employment-related decision that is made or substantially informed by an automated process, the deployer must provide the affected applicant or employee written notice covering eight categories of information: that an automated process is being used, its purpose and the nature of the decision, opt-out rights, deployer contact information, availability of human review, how to request reevaluation, a link to the most recent bias audit summary, and how to request further documentation. This is a pre-decision notice distinct from the pre-collection notice in Section 4.
Except as provided in subsection (b) of section 2 of this act, a deployer who has deployed an automated employment-related decision process to make, or be a substantial factor in making, an employment-related decision concerning an applicant for employment or employee in the state shall, before such employment-related decision is made, provide to such applicant or employee a written notice disclosing: (1) That the deployer has deployed an automated employment-related decision process; (2) The purpose of the automated employment-related decision process and the nature of such employment-related decision; (3) Information concerning the right, under subparagraph (C) of subdivision (5) of subsection (a) of section 42-518 of the general statutes, to opt out of the processing of personal data for the purposes set forth in said subparagraph; (4) Contact information for the deployer; (5) The availability of human review pursuant to section 7 of this act; (6) Information concerning how such applicant or employee may request a revaluation of any employment-related decision made in whole or in part by such automated employment-related decision process; (7) A link to the summary of the most recent bias audit required pursuant to section 8 of this act; and (8) Information concerning how to request additional documentation or information about such automated employment-related decision process.
Pending 2026-10-01
H-01.1, H-01.2, H-01.5
Sec. 6(a)-(b)
Plain Language
When an automated employment-related decision process makes or substantially contributes to an adverse decision about an applicant or employee, the deployer must provide: (1) a high-level, plain-language explanation of the principal reasons for the adverse decision, including the degree and manner of the automated process's contribution and the types and sources of data used; (2) the opportunity to examine the data used, correct inaccurate data, and appeal the decision with human review if it was based on incorrect data; and (3) upon request, a copy of the most recent bias audit. The explanation must be provided directly to the individual, in plain language, in all languages used in the deployer's ordinary business, and in an accessible format.
(a) Except as provided in subsection (b) of section 2 of this act, a deployer who has deployed an automated employment-related decision process to make, or be a substantial factor in making, an employment-related decision concerning an applicant for employment or employee in the state shall, if such employment-related decision is adverse to such applicant or employee, provide to such applicant or employee: (1) A high-level statement disclosing the principal reason or reasons for such adverse employment-related decision, including, but not limited to, (A) the degree to which, and manner in which, the automated employment-related decision process contributed to such adverse employment-related decision, (B) the type of data that were processed by such automated employment-related decision process in making, or as a substantial factor in making, such adverse employment-related decision, and (C) the source of the data described in subparagraph (B) of this subdivision; (2) An opportunity to (A) examine the data the automated employment-related decision process processed in making, or as a substantial factor in making, such adverse employment-related decision, (B) correct any incorrect data described in subparagraph (A) of this subdivision, and (C) appeal such adverse employment-related decision if such adverse employment-related decision is based upon any incorrect data described in subparagraph (A) of this subdivision. Such appeal shall allow for human review; and (3) Upon request by such applicant or employee, or such applicant or employee's representative, a copy of the most recent bias audit required pursuant to section 8 of this act. (b) A deployer who is required to provide a high-level statement to an applicant for employment or employee in the state pursuant to subdivision (1) of subsection (a) of this section shall provide such statement: (1) Directly to such applicant or employee; (2) In plain language; (3) In all languages in which such deployer, in the ordinary course of such deployer's business, provides contracts, disclaimers, sales announcements and other information to persons in the state; and (4) In a format that is accessible to individuals with disabilities.
Pending 2026-10-01
H-01.4, H-01.6
Sec. 7(a)-(c)
Plain Language
Deployers must implement mandatory human review over all automated employment-related decision processes before any adverse or final/determinative employment decision is made. The human reviewer must be a qualified individual with authority to change the decision, understanding of the system's limitations and bias risks, and who does not rely solely on the automated output. Deployers must also establish procedures to pause, correct, or reverse erroneous outputs, and must maintain logs of all human review activities and interventions. No automated process may be used for a final or determinative employment decision without human review — this is an absolute prohibition, not merely an option available upon request.
(a) For the purposes of this section "human review" means a review conducted by a qualified individual who (1) has the authority to make or change an employment-related decision, (2) understands the capabilities, limitations and risks of the automated employment-related decision process, including, but not limited to, patterns of bias, disparate impact and data quality issues, and (3) does not rely solely on the content, decision, prediction or recommendation generated by the automated employment-related decision process in making a final or determinative employment-related decision. (b) (1) A deployer who has deployed an automated employment-related decision process in making, or as a substantial factor in making, an employment-related decision concerning an applicant for employment or employee in the state shall implement human review over such automated employment-related decision process by providing for review of the content, decisions, predictions or recommendations generated by the automated employment-related decision process and any other information relevant to such content, decision, prediction or recommendation in order to confirm the accuracy of data processed by such automated employment-related decision process and, when appropriate, modify or veto any such content, decision, prediction or recommendation generated by such automated decision-making process prior to any adverse employment-related decision. (2) A deployer shall (A) establish procedures necessary to pause, correct or reverse erroneous or harmful content, decision, prediction or recommendation generated by an automated employment-related decision process, and (B) establish and maintain logs listing all human review reports and any intervention taken by an individual conducting such human review. (c) No automated employment-related decision process shall be used by a deployer in making a final or determinative employment-related decision without human review over such final or determinative employment-related decision.
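Subsection (b)(2)(B) requires deployers to keep logs of all human review reports and interventions. The structure of such a log entry might look like the following sketch; every field name here is an assumption, not a statutory term:

```python
from dataclasses import dataclass
from datetime import datetime
from enum import Enum

class ReviewAction(Enum):
    CONFIRMED = "confirmed"   # output verified accurate
    MODIFIED = "modified"     # output changed before the decision
    VETOED = "vetoed"         # output rejected outright
    PAUSED = "paused"         # erroneous output paused pending correction

@dataclass
class HumanReviewLogEntry:
    """One Sec. 7(b)(2)(B) log record; all fields are hypothetical."""
    reviewed_at: datetime
    reviewer_id: str              # the qualified individual under Sec. 7(a)
    system_output: str            # content, decision, prediction, or recommendation reviewed
    other_information: list[str]  # relevant non-automated information considered
    action: ReviewAction
    rationale: str                # why the reviewer confirmed, modified, or vetoed
```

An append-only store of entries like these would give the deployer the audit trail subsection (b)(2)(B) contemplates.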
Pending 2026-10-01
H-01.3
Sec. 18(b)(1)(B) (new § 46a-60(b)(1)(B))
Plain Language
Under amended § 46a-60, employers must provide advance written notice to individuals before using an automated employment-related decision process in any employment decision affecting them. The notice must disclose at minimum: that an automated process will be used, the trade name of the system, and the types and sources of personal information the system will process or analyze. Failure to provide this notice is a discriminatory employment practice enforceable through the Commission on Human Rights and Opportunities. This creates a separate notice obligation within the antidiscrimination framework, distinct from but overlapping with the Section 5 pre-decision notice.
(B) For an employer, by the employer or the employer's agent, to fail to provide to any individual advance written notice disclosing, at a minimum, that an automated employment-related decision process will be used to make, to assist in making or in the course of making a decision to hire or employ or to bar or to discharge from employment, or concerning the compensation or terms, conditions or privileges of employment, of such individual. Such notice shall, at a minimum, disclose the trade name of the automated employment-related decision process and the types and sources of personal information concerning the individual that the automated employment-related decision process will process or analyze.
Enacted 2023-01-01
H-01.3
N.Y.C. Admin. Code § 20-871(b)(1)-(2)
Plain Language
Employers and employment agencies must notify each NYC-resident candidate or employee at least 10 business days before using an AEDT that: (1) an AEDT will be used in the assessment or evaluation, with instructions for requesting an alternative selection process or reasonable accommodation; and (2) the specific job qualifications and characteristics the AEDT will evaluate. Per the implementing rules, notice may be provided on the employment section of the employer's website, in a job posting, or by U.S. mail or email. The notice must include instructions for how to request an alternative process or accommodation, though the law does not require employers to actually provide an alternative process.
In the city, any employer or employment agency that uses an automated employment decision tool to screen an employee or a candidate who has applied for a position for an employment decision shall notify each such employee or candidate who resides in the city of the following: 1. That an automated employment decision tool will be used in connection with the assessment or evaluation of such employee or candidate that resides in the city. Such notice shall be made no less than ten business days before such use and allow a candidate to request an alternative selection process or accommodation; 2. The job qualifications and characteristics that such automated employment decision tool will use in the assessment of such candidate or employee. Such notice shall be made no less than 10 business days before such use;
Pending 2025-07-01
H-01.3
O.C.G.A. § 10-16-4(a)
Plain Language
Before or at the time an automated decision system is used for a consequential decision about a consumer, the deployer must notify the consumer and provide: the system's purpose and the nature of the decision; deployer contact information; a plain-language description covering what attributes the system measures, how it measures them, why they are relevant, what human components exist, how automated components inform the decision, and a link to a public page with system logic, outputs, data sources, and the most recent impact assessment results; and instructions to access the deployer's public statement under § 10-16-5. This is a comprehensive pre-decision disclosure combining AI identity notice with detailed system explanation.
No later than the time that a deployer deploys an automated decision system to make, or assist in making, a consequential decision concerning a consumer, the deployer shall: (1) Notify the consumer that the deployer has deployed an automated decision system to make, or assist in making, a consequential decision; and (2) Provide to the consumer: (A) A statement disclosing the purpose of the automated decision system and the nature of the consequential decision; (B) The contact information for the deployer; (C) A description, in plain language, of the automated decision system, which description shall, at a minimum, include: (i) A description of the personal characteristics or attributes that the system will measure or assess; (ii) The method by which the system measures or assesses those attributes or characteristics; (iii) How those attributes or characteristics are relevant to the consequential decisions for which the system should be used; (iv) Any human components of such system; (v) How any automated components of such system are used to inform such consequential decision; and (vi) A direct link to a publicly accessible page on the deployer's public website that contains a plain-language description of the logic used in the system, including the key parameters that affect the output of the system; the system's outputs; the types and sources of data collected from natural persons and processed by the system when it is used to make, or assists in making, a consequential decision; and the results of the most recent impact assessment, or an active link to a web page where a consumer can review those results; and (D) Instructions on how to access the statement required by Code Section 10-16-5.
Pending 2025-07-01
H-01.1, H-01.2, H-01.4, H-01.5
O.C.G.A. § 10-16-4(b)-(d)
Plain Language
Within one business day of making or assisting in a consequential decision, the deployer must send the affected consumer a detailed post-decision notice covering: the principal factors and variables driving the decision, the degree of AI contribution, data sources used, how the consumer's personal data informed the decision, the consumer's right to correct data and provide supplemental information, what actions could have or could in the future change the outcome, how to correct inaccurate personal data, and how to appeal an adverse decision with human review if technically feasible. All notices must be provided directly, in plain language, in all languages the deployer uses commercially, and in disability-accessible formats. Critically, if a deployer cannot provide these notices and explanations, it may not use the automated decision system for the consequential decision at all.
(b) A deployer that has used an automated decision system to make, or assist in making, a consequential decision concerning a consumer shall transmit to such consumer within one business day after such decision a notice that includes: (1) A specific and accurate explanation that identifies the principal factors and variables that led to the consequential decision, including: (A) The degree to which, and manner in which, the automated decision system contributed to the consequential decision; (B) The source or sources of the data processed by the automated decision system; and (C) A plain-language explanation of how the consumer's personal data informed these principal factors and variables when the automated decision system made, or assisted in making, the consequential decision; (2) Information about consumers' right to correct, and how the consumer can submit corrections and provide supplementary information relevant to, the consequential decision; (3) What actions, if any, the consumer might have taken to secure a different decision and the actions that the consumer might take to secure a different decision in the future; (4) Information on opportunities to correct any incorrect personal data that the automated decision system processed in making, or assisting in making, the consequential decision; and (5) Information on opportunities to appeal an adverse consequential decision concerning the consumer arising from the deployment of an automated decision system, which appeal shall, if technically feasible, allow for human review. (c)(1) A deployer shall provide the notice, statement, contact information, and description required by subsections (a) and (b) of this Code section: (A) Directly to the consumer; (B) In plain language; (C) In all languages in which the deployer, in the ordinary course of the deployer's business, provides contracts, disclaimers, sale announcements, and other information to consumers; and (D) In a format that is accessible to consumers with disabilities. (2) If the deployer is unable to provide the notice, statement, contact information, and description directly to the consumer, the deployer shall make such information available in a manner that is reasonably calculated to ensure that the consumer receives it. (d) No deployer shall use an automated decision system to make, or assist in making, a consequential decision if it cannot provide notices and explanations that satisfy the requirements of this Code section.
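The one-business-day transmission clock in subsection (b) is unusually tight, and computing it is mostly weekend arithmetic. A sketch that assumes business days are simply weekdays (the quoted text does not define the term, and holidays are ignored here):

```python
from datetime import date, timedelta

def notice_due(decision_date: date) -> date:
    """Deadline to transmit the § 10-16-4(b) notice: one business day after the decision.

    Weekday-only sketch; a production version would also consult a holiday calendar.
    """
    d = decision_date + timedelta(days=1)
    while d.weekday() >= 5:  # 5 = Saturday, 6 = Sunday
        d += timedelta(days=1)
    return d

# A decision made on Friday 2025-07-11 would require notice by Monday 2025-07-14.
assert notice_due(date(2025, 7, 11)) == date(2025, 7, 14)
```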
Pending 2028-07-01
H-01.1, H-01.2, H-01.3
HRS § 321-__ (Consequential decisions; notice; statement; opt-out; corrections; appeal)(a)-(c)
Plain Language
Health care providers must provide patients (or their authorized representatives) with two separate written communications around AI-assisted consequential decisions. First, before using AI to make or substantially factor into a consequential health decision, the provider must deliver a pre-decision written notice that: informs the patient AI will be used, discloses the AI system's purpose and the nature of the decision, describes the system in plain language, and offers an opt-out from profiling using the patient's individually identifiable health information for decisions with legal or similarly significant effects. Second, after the decision is made, the provider must deliver a written statement explaining the decision, the principal reasons for it, the degree and manner of AI involvement, the data types and sources used, an opportunity to correct inaccurate data, and an opportunity to appeal with human review — unless appeal would risk the patient's life or safety. Both communications must be provided directly to the patient or authorized representative where feasible.
(a) Before using an artificial intelligence system to make, or be a substantial factor in making, a consequential decision, a health care provider shall provide the patient or the patient's authorized representative, as applicable, with a written notice that: (1) Informs the recipient that the health care provider will be using an artificial intelligence system to make, or be a substantial factor in making, the consequential decision; (2) Discloses the purpose of the artificial intelligence system and the nature of the consequential decision; (3) Describes the artificial intelligence system in plain language; and (4) Allows the patient to opt out of the processing of the patient's individually identifiable health information or other personal data for purposes of profiling in furtherance of decisions that have legal or similarly significant effects concerning the patient. (b) Any health care provider that used an artificial intelligence system to make, or be a substantial factor in making, a consequential decision shall provide the patient or the patient's authorized representative, as applicable, with: (1) A written statement that describes the consequential decision and the principal reasons for the consequential decision, including: (A) The degree to which, and manner in which, the artificial intelligence system contributed to the consequential decision; (B) The type of data that was processed by the artificial intelligence system in making the consequential decision; and (C) The sources of the data described in paragraph (B); (2) An opportunity to correct any incorrect health information or personal data that the artificial intelligence system processed in making, or as a substantial factor in making, the consequential decision; and (3) An opportunity to appeal the consequential decision, including allowing, to the extent technically feasible, human review of all information relating to the consequential decision; provided that this paragraph shall not apply if providing the opportunity for appeal is not in the best interest of the patient, including in instances in which any delay might pose a risk to the life or safety of the patient. (c) The notice and statement required pursuant to subsections (a) and (b), respectively, shall be provided directly to the patient or the patient's authorized representative, as applicable; provided that if the health care provider is unable to comply with this requirement, the health care provider shall provide the notice or statement in a manner that is reasonably calculated to ensure that the patient or the patient's authorized representative, as applicable, receives the notice or statement.
Pending 2028-07-01
H-01.6
HRS § 321-__ (Consequential decisions; review and validation by qualified oversight personnel)(a)-(c)
Plain Language
Health care providers using AI to make or substantially factor into consequential patient decisions must designate and maintain qualified AI oversight personnel — a natural person with the qualifications, experience, and expertise to evaluate AI outputs in health care. This person may be an employee or a contracted third party. The oversight personnel must continuously monitor the provider's AI systems and, critically, must review, evaluate, and either validate or override every AI output before it is used in a consequential decision. This is a mandatory human-in-the-loop requirement: no consequential AI-informed decision may proceed without affirmative human review and authorization.
(a) Any health care provider that uses an artificial intelligence system to make, or be a substantial factor in making, a consequential decision shall maintain an artificial intelligence oversight personnel. (b) The artificial intelligence oversight personnel: (1) Shall be a natural person; (2) Shall have the qualifications, experience, and expertise necessary to effectively evaluate outputs, including but not limited to any information, data, assumptions, predictions, scoring, recommendations, decisions, or conclusions generated by artificial intelligence systems in the field of health care; and (3) May be retained by contracting with a third-party. (c) The artificial intelligence oversight personnel shall: (1) Monitor the artificial intelligence systems used by the health care provider; and (2) Before the health care provider uses an output generated by an artificial intelligence system to make, or be a substantial factor in making, a consequential decision: (A) Review and evaluate the output; and (B) Validate or override the output.
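Subsection (c)(2) works as a hard gate: an output may not feed a consequential decision until oversight personnel have validated or overridden it. A minimal sketch of that gating logic, with hypothetical types (the statute imposes the duty but specifies no mechanism):

```python
from dataclasses import dataclass

class OversightRequired(Exception):
    """An AI output was about to be used without the review required by (c)(2)."""

@dataclass
class OversightReview:
    """Hypothetical record of the review step performed by oversight personnel."""
    reviewer: str                      # the designated natural person under subsection (b)
    validated: bool                    # True if the output was validated as-is
    override_value: str | None = None  # substitute output when overridden

def output_for_decision(ai_output: str, review: OversightReview | None) -> str:
    """Gate an AI output behind affirmative human review before consequential use."""
    if review is None:
        raise OversightRequired("no validate-or-override review recorded")
    if review.validated:
        return ai_output            # (c)(2)(B): reviewed and validated as-is
    if review.override_value is None:
        raise OversightRequired("override recorded without a substitute output")
    return review.override_value    # (c)(2)(B): reviewer substituted their own output
```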
Pending 2026-07-01
H-01.3
Iowa Code § 91F.2(1)-(3)
Plain Language
Employers must provide advance written notice to each employee (or their authorized representative) who will foreseeably be directly affected by an automated decision system used for employment-related decisions other than hiring. The notice must be delivered at least 30 days before a new ADS is deployed, by January 1, 2027 for systems already in use, or within 30 days of hiring a new employee. The notice must contain seven specific categories of information: the types of employment decisions affected, the data categories and sources, key parameters that disproportionately affect output, the ADS vendor, any quota details, the employee's data access and correction rights, and anti-retaliation protections. The notice must be a standalone plain-language communication delivered via a simple method such as email.
1. An employer shall provide a written notice that an automated decision system is in use for the purpose of making employment-related decisions, other than hiring decisions, at the workplace to an employee who will foreseeably be directly affected by the automated decision system, or the employee's authorized representative. The employer shall provide the notice by the following dates: a. At least thirty days before an automated decision system is first deployed by the employer. b. If the employer is using an automated decision system to assist in making employment-related decisions as of the effective date of this Act, no later than January 1, 2027. c. To a new employee within thirty days of hiring the employee. 2. A notice provided pursuant to subsection 1 shall contain all of the following information: a. The type of employment-related decisions potentially affected by the automated decision system. b. A general description of the categories of employee-input data the automated decision system will use, the sources of employee input data, and how employee input data will be collected. c. Any key parameters known to disproportionately affect the output of the automated decision system. d. The individuals, vendors, or entities that created the automated decision system. e. If applicable, a description of each quota set or measured by an automated decision system to which the employee is subject, including the quantified number of tasks to be performed or products to be produced, and any potential adverse employment action that could result from failure to meet the quota, as well as whether those quotas are subject to change and if any notice is given of changes in quotas. f. A description of the employee's right to access and correct the employee's data used by the automated decision system. g. That the employer is prohibited from retaliating against employees for exercising the rights provided in this chapter. 3. A written notice required by subsection 1 shall be written in plain language as a separate, stand-alone communication. The notice shall be in the language in which routine communications and other information are provided to employees. The notice shall be provided via a simple and easy-to-use method, including but not limited to an email, electronic link, or other written format.
Pending 2026-07-01
H-01.3
Iowa Code § 91F.2(4)
Plain Language
When an employer uses an ADS in hiring decisions, each applicant must be notified upon receipt of their application that the employer uses an ADS for hiring. This notification may be delivered via an automatic reply to the application or included in the job posting itself. This is a simpler notice than the detailed employee-facing notice in § 91F.2(1)-(3) — it does not require the same seven content categories and applies specifically to applicants rather than existing employees.
4. If an employer will use an automated decision system in making hiring decisions for a position, the employer shall notify an applicant for the position, upon receiving the application, that the employer utilizes an automated decision system when making hiring decisions. The employer may make the notification using an automatic reply mechanism or on a job posting.
Pending 2026-07-01
H-01.6
Iowa Code § 91F.3(1)(e), 91F.3(2)
Plain Language
Employers are categorically prohibited from relying solely on an ADS for discipline, termination, or deactivation decisions — human involvement is always required. When an employer relies primarily on ADS output for such decisions, a human reviewer must affirmatively review the ADS output and compile and review other relevant information, which may include supervisory evaluations, personnel files, employee work product, peer reviews, and witness interviews. This is a mandatory human-in-the-loop requirement specifically for adverse employment actions, not merely a right to request human review.
1. An employer shall not use an automated decision system to do any of the following: ... e. Rely solely on an automated decision system when making a discipline, termination, or deactivation decision. 2. When an employer relies primarily on output from an automated decision system to make a discipline, termination, or deactivation decision, the employer shall use a human reviewer to review the automated decision system output and compile and review other information that is relevant to the decision, if any. For purposes of this subsection, "other information" may include but is not limited to any of the following: a. Supervisory or managerial evaluations. b. Personnel files. c. Work product of employees. d. Peer reviews. e. Witness interviews, which may include relevant online customer reviews.
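The two-tier structure (an absolute bar on sole reliance, plus an affirmative review duty when reliance is primary) reduces to a simple gate, sketched below; the boolean inputs are illustrative assumptions rather than statutory terms.

# Hedged sketch of the Iowa § 91F.3 human-in-the-loop gate for adverse actions.
def adverse_action_permitted(relies_solely_on_ads: bool,
                             relies_primarily_on_ads: bool,
                             human_reviewed_output: bool,
                             other_relevant_info_reviewed: bool) -> bool:
    if relies_solely_on_ads:
        return False  # sole reliance is categorically prohibited
    if relies_primarily_on_ads:
        # Primary reliance requires affirmative human review of the output
        # plus compiling and reviewing other relevant information
        # (evaluations, personnel files, work product, peer reviews, interviews).
        return human_reviewed_output and other_relevant_info_reviewed
    return True  # the ADS played at most a supporting role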
Pending 2026-07-01
H-01.1
Iowa Code § 91F.4(1)-(3)
Plain Language
When an employer has primarily relied on an ADS to make a discipline, termination, or deactivation decision, the employer must provide a written post-decision notice to the affected employee at the time the employee is informed of the decision. The notice must include: (a) a contact person for more information, (b) disclosure that an ADS was used, (c) the employee's right to request a copy of the data used, and (d) notice of anti-retaliation protections. The notice must be a standalone plain-language communication delivered via a simple method. This is a post-decision explanation obligation — it tells the employee that an ADS was involved and what rights they have, at the moment they learn of the adverse action.
1. An employer that primarily relied on an automated decision system to make a discipline, termination, or deactivation decision shall provide the affected employee with a written notice at the time the employer informs the employee of the decision. 2. A notice provided pursuant to subsection 1 shall contain all of the following information: a. The individual to contact for more information about the decision. b. That the employer used an automated decision system to assist the employer in one or more discipline, termination, or deactivation decisions with respect to the employee. c. That the employee has the right to request a copy of the employee's data used by the automated decision system. d. That the employer is prohibited from retaliating against the employee for exercising the rights provided in this chapter. 3. A written notice required by subsection 1 shall be written in plain language as a separate, stand-alone communication. The notice shall be in the language in which routine communications and other information are provided to employees. The notice shall be provided via a simple and easy-to-use method, including but not limited to an email, electronic link, or other written format.
Pending 2026-01-01
H-01.6
Section 10(a)
Plain Language
Public employers may not use, procure, or acquire any automated decision-making system without meaningful and continuing human review when the system performs functions related to public assistance administration, employee rights or welfare, or constitutionally or statutorily protected employee rights. 'Meaningful human review' is a demanding standard — the human reviewer must understand the system's risks and limitations, be trained on it, have authority and obligation to intervene or reject uncorroborated outputs, and have adequate time and resources. This is not rubber-stamp oversight; the reviewer must exercise independent judgment and consider information beyond what the system collected.
(a) An employer shall not use or apply, or authorize any procurement, purchase, or acquisition of any service or system using or relying on any automated decision-making system, directly or indirectly, without meaningful and continuing human review when performing any function that: (1) is related to the administration of any public assistance program; (2) will have an adverse impact on the rights, civil liberties, safety, or welfare of any employee in this State; or (3) affects any statutorily or constitutionally provided rights of an employee.
Pending 2026-01-01
H-01.3H-01.4H-01.5
Section 10(b)
Plain Language
When an automated decision-making system is used for any function described in Section 10(a), the employer must: (1) notify the affected employee at or before the time the decision is issued that the decision was made using an automated system; (2) provide an appeals process for employees directly impacted; and (3) offer the employee an alternative review by a human reviewer who is independent of the automated system. These three protections are preconditions to any use of an ADMS in the covered contexts — without them, the use is prohibited.
(b) An employer shall not use or apply any automated decision-making system, directly or indirectly, to perform any function described in subsection (a) without providing: (1) a notice to any affected employee no later than the time a decision is issued to that employee that a decision concerning the employee was made using an automated decision-making system; (2) an appeals process for decisions made by automated decision-making system in which an employee is impacted as a direct result of the use of the automated decision-making system; and (3) the opportunity for an affected employee to have an appropriate alternative review, by an individual working for or on behalf of the employer with respect to the decision, independent of the automated decision-making system.
Pending 2027-01-01
H-01.3
Section 15(a)
Plain Language
Before or at the time an automated decision tool is used to make a consequential decision, the deployer must notify the affected individual that an automated tool is involved. The notification must include the tool's purpose, deployer contact information, and a plain-language description of the tool covering both its human and automated components and how the automated component informs the decision. This is a pre-decision or contemporaneous notice requirement — it cannot be satisfied after the decision has been made. The range of covered consequential decisions is very broad, spanning employment, education, housing, utilities, healthcare, financial services, criminal justice, legal services, voting, and access to benefits.
(a) A deployer shall, at or before the time an automated decision tool is used to make a consequential decision, notify any natural person who is the subject of the consequential decision that an automated decision tool is being used to make, or be a controlling factor in making, the consequential decision. A deployer shall provide to a natural person notified under this subsection all of the following: (1) a statement of the purpose of the automated decision tool; (2) the contact information for the deployer; and (3) a plain language description of the automated decision tool that includes a description of any human components and how any automated component is used to inform a consequential decision.
Pending 2027-01-01
H-01.4
Section 15(b)
Plain Language
When a consequential decision is made solely by an automated decision tool (with no human component in the final decision), the deployer must accommodate a request from the affected individual to opt out of the automated tool and instead be subject to an alternative process — but only if technically feasible. The deployer may ask the individual for identifying information to process the request, and if the individual does not provide it, the deployer has no obligation to provide an alternative. This right applies only when the decision is made solely by the tool; if there is meaningful human involvement in the decision, this opt-out provision does not apply.
(b) If a consequential decision is made solely based on the output of an automated decision tool, a deployer shall, if technically feasible, accommodate a natural person's request to not be subject to the automated decision tool and to be subject to an alternative selection process or accommodation. After a request is made under this subsection, a deployer may reasonably request, collect, and process information from a natural person for the purposes of identifying the person and the associated consequential decision. If the person does not provide that information, the deployer shall not be obligated to provide an alternative selection process or accommodation.
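The opt-out's conditional structure (solely automated decision, technical feasibility, identity verification) is a conjunction of three tests; this sketch uses hypothetical parameter names to restate it.

# Hedged sketch of the Section 15(b) opt-out conditions described above.
def must_offer_alternative_process(decision_solely_automated: bool,
                                   technically_feasible: bool,
                                   identity_info_provided: bool) -> bool:
    # The right attaches only when the decision was made solely by the tool;
    # meaningful human involvement in the final decision takes it out of scope.
    if not decision_solely_automated:
        return False
    # Even in scope, the duty holds only if an alternative is technically
    # feasible and the requester supplies reasonably requested identity info.
    return technically_feasible and identity_info_provided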
Pending 2026-01-01
H-01.4
Student Educational Technologies Rights Act § 15(a)(2)
Plain Language
Students and their parents have the right to request that a human teacher review any grade that was scored automatically or generated by AI. This is a human-review-on-demand right — the student or parent must invoke it, but when invoked, the school must provide a human teacher's review. The bill frames this as a state policy right rather than a procedural mandate with specific timelines, which may create ambiguity about enforcement mechanisms.
(a) It is the policy of this State that a student and the student's parent have the right to: (2) request a human teacher review any automated scored grade or scored grade generated by artificial intelligence;
Pending 2026-07-01
H-01.6
IC 22-5-10.4-10(1)
Plain Language
Employers are categorically prohibited from relying exclusively on an automated decision system to make any employment-related decision — including hiring, firing, discipline, pay, scheduling, benefits, and promotion. A human must always be meaningfully involved in the decision. This is an absolute prohibition with no exceptions or safe harbors.
An employer may not: (1) rely exclusively on an automated decision system in making an employment related decision with respect to a covered individual;
Pending 2026-07-01
H-01.6
IC 22-5-10.4-10(2)(E)
Plain Language
Each time an employer uses an ADS output in an employment decision, a human with appropriate and relevant experience must independently corroborate the output through meaningful oversight. This is not a rubber-stamp — the human must have the qualifications and practical authority to override the ADS output. This requirement applies at the point of each decision, not merely as a periodic review.
(E) the employer independently corroborates, via meaningful oversight by a human with appropriate and relevant experience, the automated decision system output;
Pending 2026-07-01
H-01.1H-01.2
IC 22-5-10.4-10(2)(F)
Plain Language
Within seven days of making any employment decision informed by an ADS output, the employer must provide the affected individual with comprehensive, free, plain-language documentation covering: a description of the ADS used, the input data (including a machine-readable copy), how the ADS output was used in the decision, and the employer's reasoning for relying on it. This is a post-decision explanation right — distinct from the pre-decision disclosure under Section 11 — and it is triggered automatically, not upon request.
(F) not later than seven (7) days after making the employment related decision, the employer provides full, accessible, and meaningful documentation in plain language and at no cost to the covered individual on the automated decision system output, including: (i) a description of the automated decision system used to generate the automated decision system output; (ii) a description and explanation, in plain language, of the input data to the automated decision system used to generate the automated decision system output and a machine readable copy of the data; (iii) a description and explanation of how the automated decision system output was used in making the employment related decision; and (iv) the reasoning for the use of the automated decision system output in the employment related decision;
Pending 2026-07-01
H-01.4H-01.5
IC 22-5-10.4-10(2)(G)
Plain Language
After receiving the post-decision documentation under clause (F), the covered individual has two distinct rights: (1) the right to dispute the ADS output itself to a qualified human reviewer, through a process that must be accessible, equitable, and not unreasonably burdensome; and (2) the right to appeal the overall employment decision to a different qualified human — one who was not the corroborating human under clause (E). The two-reviewer separation requirement is a structural independence safeguard preventing the same person from both corroborating the initial output and deciding the appeal.
(G) the employer allows the covered individual to, after receiving the documentation described in clause (F): (i) dispute, in a manner that is accessible, equitable, and does not pose an unreasonable burden on the covered individual, the automated decision system output to a human with appropriate and relevant experience; and (ii) appeal the employment related decision to a human with appropriate and relevant experience who is not the human for purposes of the corroboration under clause (E).
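The structural-independence rule, that the appeal reviewer must not be the same human who corroborated the output under clause (E), is straightforward to model; the function below is a hypothetical illustration, not part of the bill.

# Hedged sketch of the IC 22-5-10.4-10(2)(G) two-reviewer separation.
def assign_appeal_reviewer(corroborating_reviewer: str,
                           qualified_reviewers: list[str]) -> str:
    """Pick an appeal reviewer distinct from the clause (E) corroborator."""
    eligible = [r for r in qualified_reviewers if r != corroborating_reviewer]
    if not eligible:
        raise RuntimeError("no independent qualified reviewer available; "
                           "the appeal cannot be lawfully decided")
    return eligible[0]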
Pending 2026-07-01
H-01.3
IC 22-5-10.4-11(a)-(c)
Plain Language
Employers must proactively disclose detailed information to covered individuals before or at the start of employment. The disclosure covers five categories: (1) that ADS outputs are or will be used; (2) a detailed description of the system including data inputs, measured characteristics, their job-relevance, measurement methodology, and plain-language interpretation guidance; (3) the identity of the system operator; (4) how ADS outputs factor into employment decisions; and (5) how to dispute or appeal decisions. For existing employees as of July 1, 2026, the disclosure must be provided by August 1, 2026. For new hires, it must be provided before hiring. Updated disclosures are required within 30 days of any significant change.
Sec. 11. (a) An employer that uses or intends to use an automated decision system output in making an employment related decision with respect to a covered individual shall, in accordance with subsections (b) and (c), disclose to the covered individual: (1) that the employer uses or intends to use an automated decision system output in making an employment related decision; (2) a description and explanation of the automated decision system used or intended to be used to generate the automated decision system output, including: (A) the types of data collected or intended to be collected as inputs to the automated decision system and the circumstances of the collection; (B) the characteristics that the automated decision system measures or is intended to measure, such as the knowledge, skills, or abilities of the covered individual; (C) how the characteristics relate or would relate to any function required for the work or potential work of the covered individual; (D) how the system measures or is intended to measure the characteristics; and (E) how the covered individual can interpret the automated decision system output in plain language; (3) the identity of the covered individual or entity that operates the automated decision system that provides the automated decision system output; (4) how the employer uses or intends to use the automated decision system output in making the employment related decision; and (5) how the covered individual may dispute or appeal an employment related decision made with respect to the covered individual using an automated decision system output. (b) An employer shall provide the disclosures required by subsection (a) to a covered individual as follows: (1) In the case of a covered individual who was hired on or before July 1, 2026, the disclosure must be provided to the covered individual not later than August 1, 2026. (2) In the case of a covered individual who is hired after July 1, 2026, the disclosure must be provided to the covered individual before hiring. (c) Not later than thirty (30) days after: (1) any information provided by an employer to a covered individual through a disclosure required by subsection (a) significantly changes; or (2) any significant new information required to be provided in the disclosure becomes available; the employer shall provide the covered individual with an updated disclosure.
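The disclosure schedule splits on the July 1, 2026 hire-date cutoff, with a 30-day refresh duty on significant changes. The sketch below treats "before hiring" as the day before the hire date, which is an interpretive assumption of ours; the statute simply requires the disclosure to precede hiring.

from datetime import date, timedelta

# Hedged sketch of the IC 22-5-10.4-11(b)-(c) disclosure schedule.
HIRE_CUTOFF = date(2026, 7, 1)
EXISTING_EMPLOYEE_DEADLINE = date(2026, 8, 1)

def disclosure_due(hire_date: date) -> date:
    if hire_date <= HIRE_CUTOFF:
        return EXISTING_EMPLOYEE_DEADLINE  # hired on or before July 1, 2026
    return hire_date - timedelta(days=1)   # later hires: before hiring

def updated_disclosure_due(change_date: date) -> date:
    # A significant change to the disclosed information, or significant new
    # information, triggers an updated disclosure within 30 days.
    return change_date + timedelta(days=30)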
Pending 2026-07-01
IC 22-5-10.4-10(2)(D)
Plain Language
As a condition of lawfully using an ADS output in an employment decision, the employer must ensure the system's use is designed for the specific purpose of making the type of employment decision at hand. An employer may not repurpose an ADS designed for one context (e.g., customer analytics) to make employment decisions. This is a purpose-limitation requirement ensuring the ADS was intended and validated for the employment context in which it is being applied.
(D) the use is designed for purposes of making the employment related decision;
Passed 2025-03-13
H-01.1H-01.4H-01.5
Section 3(6)(b)
Plain Language
When a state agency AI system makes decisions affecting Kentucky citizens, the agency must: (1) explain how AI is used in the decision-making process, (2) disclose the extent of human involvement in validating the decision, and (3) provide readily available appeal options for individuals subject to consequential AI-involved decisions. This creates both a transparency obligation (explaining AI's role and human oversight level) and an appeal right for individuals affected by consequential automated decisions.
(b) When an artificial intelligence system makes external decisions related to citizens of the Commonwealth, a department, agency, or administrative body shall: 1. Disclose how artificial intelligence is used in the decision-making process; 2. Provide the extent of human involvement in validating and oversight of any decision made; and 3. Make readily available options for individuals to appeal a consequential decision that involves artificial intelligence.
Pending 2026-08-01
H-01.3
R.S. 23:972(A)-(C), (E)
Plain Language
Before deploying an ADS that will affect workers' employment-related decisions (other than hiring), employers must provide affected workers or their authorized representatives with a detailed written notice at least 30 days in advance. The notice must be in plain language, delivered as a standalone communication in the workers' customary language, and must disclose: what types of decisions the ADS affects, the data categories and sources used, any parameters known to cause disproportionate impact, who created the ADS, any quotas the ADS sets, the worker's data access and correction rights, anti-retaliation protections, and the right to appeal. For existing ADS already in use when the law takes effect, notice must be provided promptly; new hires must be notified within 30 days.
A. An employer shall provide written notice that an ADS, for the purpose of making employment-related decisions, not including hiring, is in use at the workplace to a worker who will foreseeably be directly affected by the ADS, or his authorized representative. The notice shall be provided at any of the following time periods: (1) At least thirty days before an ADS is first deployed by the employer. (2) If the employer is using an ADS to assist in making employment-related decisions at the time this Part takes effect. (3) To a new worker within thirty days of his hiring date. C. A written notice required by this Section shall meet all of the following requirements: (1) Written in plain language as a separate, standalone communication. (2) In the language in which routine communications and other information are provided to workers. (3) Provided via a simple and easy-to-use method, including but not limited to an email, hyperlink, or other written format. E. A notice issued pursuant to Subsection A of this Section shall contain all of the following information: (1) The type of employment-related decisions potentially affected by the ADS. (2) A general description of the categories of worker input data the ADS will use, the sources of worker input data, and how worker input data will be collected. (3) Any key parameters known to disproportionately affect the output of the ADS. (4) The individuals, vendors, or entities that created the ADS. (5) If applicable, a description of each quota set or measured by an ADS that the worker is subject to, including the quantified number of tasks to be performed or products to be produced, and any potential adverse employment action that could result from failure to meet the quota, as well as whether those quotas are subject to change and if any notice is given of changes in quotas. (6) A description of the worker's right to access and correct the worker's own data used by the ADS. (7) That the employer shall be prohibited from retaliating against a worker who exercises his rights as provided in Paragraph (6) of this Subsection. (8) That the worker has a right to appeal any decision made with the assistance of an ADS and the process to appeal that decision.
Pending 2026-08-01
H-01.3
R.S. 23:972(D)
Plain Language
Employers that use an ADS in hiring must notify each job applicant at the point of application receipt that the employer uses ADS for hiring decisions. This can be accomplished through an automatic reply or by including the notice on the job posting itself. Unlike the pre-use notice for existing workers (which requires detailed content), this hiring-context notice is simpler — it only needs to inform the applicant that ADS is used.
D. An employer who uses an ADS to make hiring decisions shall notify a job applicant upon receiving his application that the employer utilizes an ADS for hiring decisions. Notifications may be made using an automatic reply mechanism or on the job posting.
Pending 2026-08-01
H-01.6
R.S. 23:973(C)(1)-(3)
Plain Language
Employers may never rely solely on an ADS for discipline, termination, or deactivation decisions. For any employment-related decision assisted by ADS, the employer or vendor must: (1) verify the accuracy of the ADS output, and (2) designate an internal human reviewer who independently investigates and compiles corroborating evidence. The reviewer must have sufficient authority, discretion, resources, time, and expertise to meaningfully evaluate the ADS output — including the ability to interpret the system's outputs and relevant impact assessments. The reviewer is protected from retaliation. If the ADS output cannot be corroborated, or the reviewer finds it inaccurate, incomplete, or misleading, the employer may not rely on it. This is a robust human-in-the-loop requirement — the human reviewer must have genuine override capability, not merely rubber-stamp authority.
C.(1) An employer shall not rely solely on an ADS when making a discipline, termination, or deactivation decision. (2) If an employer or a vendor utilizes an ADS output to assist in making an employment-related decision, the employer or vendor shall do all of the following: (a) Ensure the accuracy of the ADS output. (b)(i) Use a designated internal reviewer to conduct a separate investigation and compile corroborating information for the decision. This information may include but is not limited to supervisory or managerial evaluations, personnel files, employee work products, or peer reviews. (ii) The designated internal reviewer required by this Subparagraph shall have all of the following: (aa) Sufficient authority, discretion, resources, and time to corroborate the ADS output. (bb) Sufficient expertise in the operation of similar systems and a sufficient understanding of the ADS in question to interpret its outputs as well as results of relevant impact assessments. (cc) Education, training, or experience sufficient to allow the reviewer to make a well-informed decision. (iii) The designated internal reviewer shall be protected from retaliation for exercising his responsibilities. (3) An employer shall not rely on an ADS to make an employment-related decision if the employer cannot corroborate the ADS output or the human reviewer has concluded that the ADS output is inaccurate, incomplete, or misleading.
Pending 2026-08-01
H-01.1H-01.3
R.S. 23:974(A)-(B)
Plain Language
When an employer primarily relies on an ADS for a discipline, termination, or deactivation decision, the employer must provide the affected worker with a post-decision written notice at the time the decision is made. The notice must be in plain language, standalone, in the worker's customary language, and delivered via an accessible method. It must disclose: a human contact for more information, that an ADS was used, the worker's right to request their data, anti-retaliation protections, and the right to appeal under § 975. This is an adverse-action notification — it triggers at the point the decision is made, complementing the pre-use notice under § 972.
A. An employer that primarily relies on an ADS to make a discipline, termination, or deactivation decision shall provide the affected worker with written notice at the time such decision is made. The notice shall meet all of the following requirements: (1) Written in plain language as a separate, standalone communication. (2) In the language in which routine communications and other information are provided to workers. (3) Provided via a simple and easy-to-use method, including but not limited to an email, hyperlink, or other written format. B. A notice issued pursuant to Subsection A of this Section shall contain all of the following information: (1) The human individual to contact for more information about the decision and the ability to request a copy of the worker's own worker data relied on in the decision. (2) That the employer used an ADS to assist the employer in any discipline, termination, or deactivation decisions with respect to the worker. (3) That the worker has the right to request a copy of the worker's data used by the ADS. (4) That the employer is prohibited from retaliating against the worker for exercising his right pursuant to this Part. (5) The worker's right to appeal the decision as provided in R.S. 23:975.
Pending 2026-08-01
H-01.4H-01.5
R.S. 23:975(A)-(C)
Plain Language
Workers affected by ADS-assisted employment decisions have the right to appeal within 30 days of notification. The employer or vendor must provide an appeal form (physical or electronic) that allows the worker to: request the ADS input and output data, request the human reviewer's corroborating evidence, submit their reason for appeal with supporting evidence, and designate an authorized representative. The employer or vendor must respond within 14 business days through a human reviewer who: (1) can objectively evaluate all evidence, (2) has sufficient authority, discretion, and resources to evaluate the decision, and (3) has authority to overturn it. The reviewer must not have been involved in the original decision. The response must be a clear written document explaining the outcome and reasoning. If the decision is overturned, the employer must rectify it within 21 business days.
A. If an employer has used an ADS to make an employment-related decision about a worker, the affected worker has the right to appeal that decision, request a human review, request submission of additional information, and correct any errors in the data used by the ADS. B. An employer or a vendor that used an ADS to make an employment-related decision shall provide an affected worker with a form or a hyperlink to an electronic form that provides that the worker has a right to appeal the decision within thirty days from the date that the worker was notified. The appeal form provided to an affected worker shall include all of the following: (1) The option to request access to the data used as input to or as output from the ADS. (2) The option to request access to any corroborating or supporting evidence provided by a human reviewer to verify output from the ADS. (3) The worker's reason or justification for an appeal and any evidence to support the appeal. (4) A designation for an authorized representative who can also access the data. C.(1) An employer or a vendor shall respond to an appeal within fourteen business days. (2)(a)(i) In responding to an appeal, the employer or vendor shall designate a human reviewer who shall meet all of the following requirements: (aa) He can objectively evaluate all evidence. (bb) He has sufficient authority, discretion, and resources to evaluate the decision. (cc) He has the authority to overturn the decision. (ii) The employer or vendor shall not designate a person who was involved in the decision that the worker is appealing. (b) The response provided to the worker shall be composed on a clear, written document which describes the result of the appeal and the reasons for that result. (3) If the human reviewer determines that the employment-related decision should be overturned, the employer or vendor shall rectify the decision within twenty-one business days.
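Because the response and rectification clocks run in business days while the appeal window runs in calendar days, the timeline is worth sketching. The business-day counter below ignores holidays, an assumption the quoted text does not address, and the rectification clock is measured here from the appeal response even though the statute's trigger is the reviewer's determination.

from datetime import date, timedelta

def add_business_days(start: date, n: int) -> date:
    """Count forward n weekdays; holidays are ignored in this sketch."""
    d = start
    while n > 0:
        d += timedelta(days=1)
        if d.weekday() < 5:  # Monday through Friday
            n -= 1
    return d

# Hedged sketch of the R.S. 23:975 appeal timeline described above.
def appeal_timeline(notice_date: date, appeal_filed: date) -> dict:
    if appeal_filed > notice_date + timedelta(days=30):
        raise ValueError("appeal filed outside the 30-day window")
    response_due = add_business_days(appeal_filed, 14)   # employer/vendor response
    rectify_due = add_business_days(response_due, 21)    # if decision is overturned
    return {"response_due": response_due, "rectify_if_overturned_by": rectify_due}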
Pending 2027-01-01
H-01.4H-01.5
R.S. 22:1260.49(E)(1)-(3)
Plain Language
Insureds have an explicit right to appeal any determination they learn was made with an AI or automated decision system recommendation. Any adverse determination where AI materially contributed is presumed invalid — the insurer bears the burden of proving the determination was independently reached through documented clinical judgment without reliance on algorithmic output. This is a rebuttable presumption that effectively shifts the burden of proof to the insurer. Additionally, if an adverse determination is appealed on AI-involvement grounds, the insurer is prohibited from using any AI or automated system in the subsequent review of that claim, requiring a fully human re-review.
E.(1) Any insured has the right to appeal a determination that he has learned was made with a recommendation from an artificial intelligence or an automated decision system. (2) Any adverse determination in which artificial intelligence or an automated decision system materially contributed to the determination shall be presumed invalid unless the health insurance issuer demonstrates that the determination was independently reached through documented clinical judgment without reliance upon algorithmic output. (3) If an adverse determination is appealed on the basis of the use of an artificial intelligence or an automated decision system, the insurer shall not use an artificial intelligence or an automated decision system in any subsequent review of the claim.
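The burden-shifting structure is compact enough to state as a predicate; the boolean inputs below are illustrative stand-ins for what would in practice be contested factual findings.

# Hedged sketch of the R.S. 22:1260.49(E)(2) rebuttable presumption.
def determination_stands(ai_materially_contributed: bool,
                         documented_independent_clinical_judgment: bool) -> bool:
    if not ai_materially_contributed:
        return True  # the presumption never attaches
    # Presumed invalid unless the issuer carries its burden of showing the
    # determination was independently reached through documented clinical
    # judgment without reliance on algorithmic output.
    return documented_independent_clinical_judgment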
Pending 2027-01-01
H-01.2
R.S. 22:2401(4)
Plain Language
As part of the appeals process, covered persons have the right to request and receive copies of all documents relevant to any AI or automated decision system used in the utilization review or coverage determination. This is a document access right — not merely a right to know that AI was used, but a right to review the underlying documentation of how the AI system was applied to the individual's claim. This supplements the general appeal rights under existing law by adding AI-specific transparency.
(4) Allow covered persons, upon request, to review and have copies of all documents relevant to any artificial intelligence or an automated decision system as defined in R.S. 22:1260.49(A)(1) used in the utilization review or determination process.
Pre-filed 2025-07-07
H-01.1H-01.3H-01.5
Chapter 93M, Section 3(c)
Plain Language
When an AI system materially influences a consequential decision about a consumer, deployers must: (1) notify the consumer that AI was involved, (2) explain the system's purpose and how it influenced the specific decision, and (3) provide a process for the consumer to appeal or correct adverse decisions. This creates three distinct consumer-facing obligations triggered by any consequential decision — covering employment, housing, healthcare, lending, insurance, education, and government services. The explanation must address how the system influenced the particular decision, not just a generic statement that AI was used.
(c) Consumer Protections: Deployers must: (1) Notify consumers when an AI system materially influences a consequential decision; (2) Provide consumers with: (i) The purpose of the system; (ii) An explanation of how the system influenced the decision; (iii) A process to appeal or correct adverse decisions.
Pending 2025-01-17
Ch. 110I, § 4(a)
Plain Language
Covered entities are categorically prohibited from using biometric data as an input to any decision that produces legal or similarly significant effects on end users. This is not a 'disclose and proceed' or 'human-in-the-loop' requirement — it is an absolute ban. The scope of covered decisions is broad and includes denial or degradation of financial services, housing, insurance, education, criminal justice, employment, healthcare, and access to basic necessities. This is stricter than most automated decision statutes, which typically require bias testing or human review rather than an outright prohibition on the use of a data type.
(a) Covered entities shall not use biometric data to help make decisions that produce legal effects or similarly significant effects concerning end users. Decisions that include legal effects or similarly significant effects concerning end users include, without limitation, denial or degradation of consequential services or support, such as financial or lending services, housing, insurance, educational enrollment, criminal justice, employment opportunities, health care services, and access to basic necessities, such as food and water.
Pre-filed
H-01.3
Chapter 93M § 3(d)(1)
Plain Language
Before making or substantially contributing to a consequential decision about a consumer, deployers must: (1) notify the consumer that a high-risk AI system will be used, (2) provide a plain-language statement disclosing the system's purpose, the nature of the decision, the deployer's contact information, and how to access the deployer's public website statement, and (3) inform the consumer of any applicable right to opt out of profiling for decisions with legal or similarly significant effects. This notification must be provided directly to the consumer in plain language, in all languages the deployer ordinarily uses, and in formats accessible to consumers with disabilities — or, if direct delivery is not possible, in a manner reasonably calculated to reach the consumer.
(d) (1) Not later than 6 months after the effective date of this act, and no later than the time that a deployer deploys a high-risk artificial intelligence system to make, or be a substantial factor in making, a consequential decision concerning a consumer, the deployer shall: (i) notify the consumer that the deployer has deployed a high-risk artificial intelligence system to make, or be a substantial factor in making, a consequential decision before the decision is made; (ii) provide to the consumer a statement disclosing the purpose of the high-risk artificial intelligence system and the nature of the consequential decision; the contact information for the deployer; a description, in plain language, of the high-risk artificial intelligence system; and instructions on how to access the statement required by subsection (5)(a) of this section; and (iii) provide to the consumer information, if applicable, regarding the consumer's right to opt out of the processing of personal data concerning the consumer for purposes of profiling in furtherance of decisions that produce legal or similarly significant effects concerning the consumer.
Pre-filed
H-01.1H-01.2H-01.5
Chapter 93M § 3(d)(2)
Plain Language
When a high-risk AI system contributes to an adverse consequential decision about a consumer, the deployer must provide: (1) a statement explaining the principal reasons for the decision — including the degree of AI involvement, the types of data processed, and data sources; (2) an opportunity to correct any incorrect personal data used in the decision; and (3) an opportunity to appeal, which must include human review if technically feasible, unless delay would endanger the consumer's life or safety. This is a post-decision adverse-action package — all three elements must be provided together.
(2) Not later than 6 months after the effective date of this act, a deployer that has deployed a high-risk artificial intelligence system to make, or be a substantial factor in making, a consequential decision concerning a consumer shall, if the consequential decision is adverse to the consumer, provide to the consumer: (i) a statement disclosing the principal reason or reasons for the consequential decision, including: (A) the degree to which, and manner in which, the high-risk artificial intelligence system contributed to the consequential decision; (B) the type of data that was processed by the high-risk artificial intelligence system in making the consequential decision; and (C) the source or sources of the data described in subsection (d)(2)(i)(B) of this section; (ii) an opportunity to correct any incorrect personal data that the high-risk artificial intelligence system processed in making, or as a substantial factor in making, the consequential decision; and (iii) an opportunity to appeal an adverse consequential decision concerning the consumer arising from the deployment of a high-risk artificial intelligence system, which appeal must, if technically feasible, allow for human review unless providing the opportunity for appeal is not in the best interest of the consumer, including in instances in which any delay might pose a risk to the life or safety of such consumer.
Pending 2025-01-14
H-01.6
Ch. 149B § 2(h)
Plain Language
Employers may not rely primarily on electronically monitored data when making hiring, promotion, discipline, termination, or compensation decisions. Three requirements must be satisfied: (1) meaningful human oversight must be established, including a designated internal reviewer with expertise, authority to dispute or reject outputs, and adequate time and resources; (2) a human decision-maker must actually review the monitored data, verify accuracy, address pending correction requests, and exercise independent judgment; and (3) the human must consider non-monitoring information such as supervisory evaluations, personnel files, and peer reviews.
(h) An employer shall not rely primarily on employee data collected through electronic monitoring when making hiring, promotion, disciplinary decisions up to and including termination, or compensation decisions. For an employer to satisfy the requirements of this paragraph: (i) An employer shall establish meaningful human oversight of such decisions based in whole or in part on data collected through electronic monitoring. (ii) A human decision-maker must actually review any information collected through electronic monitoring, verify that such information is accurate and up to date, review any pending employee requests to correct erroneous data, and exercise independent judgment in making each such decision; and (iii) The human decision-maker must consider information other than information collected through electronic monitoring when making each such decision, such as but not limited to, supervisory or managerial evaluations, personnel files, employee work products, or peer reviews.
Pending 2025-01-14
H-01.1H-01.2
Ch. 149B § 2(i)
Plain Language
When an employer makes any consequential employment decision (hiring, promotion, termination, discipline, or compensation) based in whole or in part on electronically monitored data, it must disclose to the affected employee at least 30 days before the decision takes effect: that monitoring data was used, the specific tools and how they work, the specific data and judgments used, and any non-monitoring information used. This is an unusually granular pre-decision disclosure obligation with a 30-day advance notice requirement.
(i) When an employer makes a hiring, promotion, termination, disciplinary or compensation decision based in whole or part on data gathered through the use of electronic monitoring, it shall disclose to affected employees no less than thirty days prior to the decision going into effect: (i) that the decision was based in whole or part on data gathered through electronic monitoring; (ii) the specific electronic monitoring tool or tools used to gather such data, how the tools work to gather and analyze the data, and the increments of time in which the data is gathered; (iii) the specific data, and judgments based upon such data, used in the decision-making process; and (iv) any information used in the decision-making process gathered through sources other than electronic monitoring.
Pending 2025-01-14
H-01.3
Ch. 149B § 4(a)-(b)
Plain Language
Employers must give employees and candidates at least 10 business days' notice before using an automated employment decision tool. The notice must cover six categories: that an ADS will be used; what qualifications/data/outputs the tool assesses; what data is collected and from where; the latest impact assessment results (including any disparate impact findings); how to request an alternative non-automated process or accommodation; and how to request reevaluation or file a civil complaint. The notice must be in plain language, included in job postings, posted on the employer's website in all languages used with employees, provided directly to candidates in their preferred language, and accessible to persons with disabilities.
(a) Any employer that uses an automated employment decision tool to assess or evaluate an employee or candidate shall notify employees and candidates subject to the tool no less than ten business days before such use: (i) that an automated employment decision tool will be used in connection with the assessment or evaluation of such employee or candidate; (ii) the job qualifications and characteristics that such automated employment decision tool will assess, what employee or candidate data or attributes the tool will use to conduct that assessment, and what kind of outputs the tool will produce as an evaluation of such employee or candidate; (iii) what employee or candidate data is collected for the automated employment decision tool, the source of such data and the employer's data retention policy. Information pursuant to this section shall not be disclosed where such disclosure would violate local, state, or federal law, or interfere with a law enforcement investigation; (iv) the results of the most recent impact assessment of the automated employment decision tool, including any findings of a disparate impact and associated response from the employer, or information about how to access that information if publicly available; (v) information about how an employee or candidate may request an alternative selection process or accommodation that does not involve the use of an automated employment decision tool and details about that alternative process or accommodation process; and (vi) information about how the employee or candidate may: (A) request reevaluation of the employment decision made by the automated employment decision tool in accordance with section one thousand thirteen of this article; and (B) notification of the employee or candidate's right to file a complaint in a civil court in accordance with section seven of this chapter or otherwise exercise the rights described in this chapter. (b) The notice required by this section shall be: (i) written in clear and plain language; (ii) included in each job posting or advertisement for each position for which the automated employment decision tool will be used; (iii) posted on the employer's website in any language that the employer regularly uses to communicate with employees; (iv) provided directly to each candidate who applies for a position in the language with which that candidate communicates with the employer; (v) made available in formats that are reasonably accessible to and usable by individuals with disabilities; and (vi) otherwise presented in a manner that ensures the notice clearly and effectively communicates the required information to employees.
Pending 2025-01-14
H-01.6
Ch. 149B § 5(b)
Plain Language
Employers may not rely primarily on ADS outputs for hiring, promotion, termination, discipline, or compensation decisions. They must establish meaningful human oversight with a qualified internal reviewer who has sufficient expertise (assessed by complexity of the tool, reviewer's experience, training, and ability to consult experts). A human decision-maker must actually review ADS outputs, exercise independent judgment, and must also consider non-ADS information (supervisory evaluations, personnel files, work products, peer reviews).
(b) An employer shall not rely primarily on output from an automated decision tool when making hiring, promotion, termination, disciplinary, or compensation decisions. For an employer to satisfy the requirements of this paragraph: (i) An employer must establish meaningful human oversight of such decisions based in whole or in part on the output of automated employment decision tools. In determining whether an internal reviewer employs the requisite knowledge and skill to provide meaningful human oversight, relevant factors include the relative complexity and specialized nature of the automated decision tool, the reviewer's general experience, the reviewer's training and experience in the field, the preparation and study the reviewer is able to give the matter and whether it is feasible to refer the matter to, or associate or consult with, an expert with established competence in the field of automated decision tools. (ii) A human decision-maker must actually review any output of an automated employment decision tool and exercise independent judgment in making each such decision; (iii) The human decision-maker must consider information other than automated employment decision tool outputs when making each such decision, such as but not limited to supervisory or managerial evaluations, personnel files, employee work products, or peer reviews; and (iv) An employer shall consider information other than automated employment decision tool outputs when making hiring, promotion, termination, disciplinary, or compensation decisions, such as supervisory or managerial evaluations, personnel files, employee work products, or peer reviews.
Pending 2025-01-14
H-01.4
Ch. 149B § 5(c)
Plain Language
Employers may not condition employment consideration on the employee's or candidate's consent to be assessed by an automated decision tool. Employers also may not discipline or disadvantage anyone who requests an alternative (non-ADS) accommodation. This means employers must offer a meaningful alternative evaluation process for those who decline automated assessment.
(c) An employer shall not require employees or candidates to consent to the use of an automated employment decision tool in an employment decision in order to be considered for an employment decision, nor shall an employer discipline or disadvantage an employee or candidate for employment as a result of their request for accommodation.
Pending 2026-02-24
H-01.3
Sec. 13(1)-(3)
Plain Language
Employers must provide multi-channel notice before using monitoring or automated decision tools: a workplace poster, written notice to all employees at least 30 days before implementation, inclusion in every job posting, website posting, direct notice to every applicant, and accessible formats (accounting for language and disability). The notice must include an opt-out right. If a covered individual opts out, the employer is categorically prohibited from using the tool for any employment-related decisions about that person. This is a meaningful opt-out — it creates an absolute bar, not merely a preference.
Sec. 13. (1) If an employer uses an electronic monitoring tool or automated decisions tool, the employer must display a poster at the employer's place of business, in a conspicuous place accessible to the employer's employees, that includes, but is not limited to, notice of the use of an electronic monitoring tool or automated decisions tool. (2) Not less than 30 days before an employer implements an electronic monitoring tool or automated decisions tool, the employer shall provide notice, in writing, of the tool's use to all of the employer's employees. The employer shall also include the notice in every job posting, post the notice on the employer's website, provide the notice directly to every applicant, and make the notice available in accessible formats that account for the applicant's first language, if it is not English, and any disability the applicant may have. The notice must provide a covered individual with the ability to opt out of the electronic monitoring tool or automated decisions tool. (3) If a covered individual opts out of the use of an electronic monitoring tool or automated decisions tool under subsection (2), the employer shall not use the electronic monitoring tool or automated decisions tool to make any employment-related decisions for that covered individual.
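Because an opt-out under subsection (3) creates an absolute bar rather than a preference, it behaves like a persistent deny-list; the registry class below is a hypothetical illustration of that semantics.

# Hedged sketch of the Sec. 13(2)-(3) opt-out bar described above.
class OptOutRegistry:
    def __init__(self) -> None:
        self._opted_out: set[str] = set()

    def record_opt_out(self, person_id: str) -> None:
        self._opted_out.add(person_id)

    def tool_may_inform_decision(self, person_id: str) -> bool:
        # Once a covered individual opts out, the monitoring or automated
        # decisions tool may not be used for ANY employment-related decision
        # about that person; the quoted text provides no override.
        return person_id not in self._opted_out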
Pending 2026-08-01
H-01.3
Minn. Stat. § 181.9922, subd. 1(a)-(f), subd. 2
Plain Language
Before deploying any automated decision system for employment-related decisions, employers must provide affected workers, their authorized representatives, and any union a detailed written pre-use notice at least 30 days in advance (or by September 1, 2026 for existing systems). The notice must be plain-language, standalone, in the workers' customary language, and include the system's purpose, data sources, logic, vendors, impact assessment results, a current list of all automated decision systems in use, and a description of worker rights. Job applicants and workers must give affirmative written consent before being subject to the system, and must be allowed to opt out where reasonable alternatives exist. A copy of every notice must also be filed with the Commissioner of Labor and Industry within 10 days.
Subdivision 1. Pre-use notice; provision. (a) An employer must provide a written notice that an automated decision system is in use at the workplace for the purpose of making employment-related decisions, to a worker who will be directly or indirectly affected by the automated decision system, or the worker's authorized representative, and to any union representing workers who could be directly or indirectly affected by the automated decision system. (b) The notice in paragraph (a) must be provided: (1) if the automated decision system is introduced after the effective date of this section, at least 30 days before the introduction of the automated decision system; (2) if the employer is using an existing automated decision system as of the effective date of this section, no later than September 1, 2026; (3) prominently to a job applicant or new worker, before the employer collects the applicant's or worker's personal information that the employer plans to process using the automated decision system; (4) at least 30 days before implementing any significant change to the automated decision system or how the employer is using the automated decision system; and (5) to a union representing workers who will be subject to the automated decision system, on a timeline that provides a meaningful opportunity to bargain over the use, scope, and impact of the automated decision system prior to deployment or modification of the tool. (c) Every time an employer provides a notice under paragraph (a), a copy of that notice must be submitted to the commissioner of labor and industry within ten days of the date the notice was provided to the worker. Copies of notices under paragraph (a) must also be made available to authorized representatives upon request. (d) Notices under paragraph (a) must be: (1) written in plain language as a separate and standalone communication; (2) in the language in which routine communications and other information are provided to workers; and (3) provided using a simple and easy-to-use method, including an email, hyperlink, or other written format. (e) A job applicant or worker must receive the notice required under this section and respond with affirmative written consent before the worker or applicant is subject to an automated decision system. (f) If reasonable alternatives to the use of the automated decision system exist, the worker must be allowed to opt out of being subject to the automated decision system. Subd. 2. Pre-use notice; contents. 
The notice required under subdivision 1, paragraph (a), must contain the following information: (1) a plain-language explanation of the nature, purpose, and scope of the decisions for which the automated decision system will be used, including the specific employment-related decisions potentially affected; (2) the specific category and sources of worker data the automated decision system will use or collect, and how that data was or will be collected; (3) the logic used in the automated decision system, including the key parameters that affect the output of the automated decision system, and the type of outputs the automated decision system will produce; (4) the individuals, vendors, and entities that created the automated decision system and the individuals, vendors, and entities that will run, manage, and interpret the results of the automated decision system output; (5) the job qualifications and characteristics that the automated decision system assesses, what worker data or attributes the system uses to conduct that assessment, and what kind of outputs the system produces as an evaluation of the worker; (6) the results of any impact assessments of the automated decision system, whether performed by the employer or the automated decision system vendor, and how to access that information; (7) an up-to-date list of all automated decision systems the employer is currently using; and (8) a description of the worker's rights under sections 181.9922 to 181.9927.
Pending 2026-08-01
H-01.6
Minn. Stat. § 181.9924, subd. 2(a)-(d)
Plain Language
Employers may never rely solely on an automated decision system for any employment-related decision. When relying in part on such a system, the employer must verify the accuracy of the system's output and designate a qualified internal reviewer to conduct an independent investigation and compile corroborating evidence. The reviewer must have real authority, AI literacy, and retaliation protection. If the reviewer cannot corroborate the output or finds it inaccurate, incomplete, or misleading, the employer must not use the automated output for the decision. This is a mandatory human-in-the-loop requirement with substantive reviewer qualifications — not a rubber-stamp review.
Subd. 2. Employment-related decisions. (a) An employer must not rely solely on an automated decision system when making an employment-related decision. (b) When an employer relies in part on an automated decision system in making an employment-related decision, the employer must: (1) ensure the accuracy of the automated decision system output; and (2) use a designated internal reviewer to conduct an investigation and compile corroborating information for the decision. This information may include but is not limited to supervisory or managerial evaluations, personnel files, employee work products, or peer reviews. (c) The designated internal reviewer must: (1) have sufficient authority, discretion, resources, and time to corroborate the automated decision system output; (2) have sufficient expertise in the operation of similar systems and a sufficient understanding of the automated decision system in question to interpret the outputs and results of relevant impact assessments; (3) have sufficient education, training, or experience to allow the reviewer to make a well-informed decision, including education about the limitations and biases of automated decision systems and training on workers' rights under sections 181.9922 to 181.9927; and (4) be protected from retaliation for exercising the reviewer's responsibilities. (d) When an employer cannot corroborate the automated decision system output, or the human reviewer has concluded that the automated decision system output is inaccurate, incomplete, or misleading, the employer must not rely on the automated decision system to make the employment-related decision.
Pending 2026-08-01
H-01.1 H-01.2
Minn. Stat. § 181.9925, subd. 1(a)-(d), subd. 2(a)-(b)
Plain Language
After making any employment-related decision involving an automated decision system, employers must provide the affected worker a post-decision written notice acknowledging AI was used, describing worker rights, providing an appeal form or link, and informing the worker that retaliation is prohibited. For routine decisions (same system, same use, multiple times per quarter), a full notice is required for the first use each quarter, with a summary notice at quarter-end covering frequency and dates. Workers who request access must receive within 14 days: a plain-language explanation of the decision, the specific data used and outputs produced, the rationale including human vs. AI roles, how the system's logic applied to them, comparative output statistics, the vendor name, and any impact assessments. Discipline or termination decisions require at least 30 days' advance notice.
Subdivision 1. Notice. (a) An employer that has used an automated decision system to make an employment-related decision must provide the affected worker with a written notice: (1) at the time the employer informs the worker of the decision, or no later than 15 business days from the date of the decision, whichever is earlier; or (2) if the decision results in the discipline or termination of the worker, at least 30 days before the discipline or termination takes effect. (b) The employer must provide a notice under paragraph (a) that is: (1) written in plain language as a separate and standalone communication; (2) in the language in which routine communications and other information are provided to workers; and (3) provided using a simple and easy-to-use method, including an email, hyperlink, or other written format. (c) A notice under paragraph (a) must contain the following information: (1) an acknowledgment that the employer used an automated decision system to make one or more employment-related decisions with respect to the worker; (2) a description of the worker's rights under sections 181.9922 to 181.9927; (3) a form or a hyperlink to an electronic form for the worker to file an appeal or request detailed information about the data and automated decision system used in the decision; and (4) that the employer is prohibited from retaliating against the worker for exercising the worker's rights under this section. (d) If an employer uses the same automated decision system in the same way multiple times a quarter, an employer must provide each affected employee: (1) the full notice required by this section for the first use of the automated decision system each quarter; and (2) a second notice at the end of the quarter that provides: (i) the number of times the employer or operator used the automated decision system that quarter; (ii) the dates the employer or operator used the automated decision system that quarter; and (iii) a description of the worker's rights under sections 181.9922 to 181.9927, including the right to access information about each decision. Subd. 2. Right to access. (a) When responding to a worker's access request, an employer must provide the following information to the worker: (1) a plain-language explanation of the specific decision for which the employer used the automated decision system; (2) in a simple and easy-to-use format, the specific worker data that the automated decision system used and all specific worker outputs produced by the automated decision system; (3) how the employer used the automated decision system output with respect to the worker, including: (i) the rationale for the decision, including the specific roles the output and human involvement played in the business's decision; (ii) any additional corroborating information or judgments the employer used in addition to the automated decision system output in making the decision; (iii) how the logic of the automated decision system, including its assumptions and limitations, was applied to the worker; (iv) the key parameters or performance metrics that affected the output of the automated decision system with respect to the worker and how those parameters applied to the worker; and (v) the range of possible outputs and aggregate output statistics, to help a worker understand how they compare to other workers; (4) the name of the entity that created the automated decision system and the product name of the automated decision system; and (5) a copy of any completed impact assessments of the automated decision system. (b) An employer must respond to an access request no later than 14 calendar days from the date the employer received the request.
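The timing rules here are mechanical enough to compute. A minimal sketch follows, with invented helper names and the simplifying assumption that business days are weekdays only, since the statute does not define a holiday calendar:

from datetime import date, timedelta

def add_business_days(start: date, n: int) -> date:
    # Count forward n weekdays; holidays are ignored (an assumption of
    # this sketch, not a statutory rule).
    d = start
    while n > 0:
        d += timedelta(days=1)
        if d.weekday() < 5:  # Monday=0 .. Friday=4
            n -= 1
    return d

def notice_deadline(decision_date: date, informed_date: date | None) -> date:
    # Subd. 1(a)(1): notice is due when the worker is informed of the
    # decision or 15 business days after the decision, whichever is earlier.
    outer = add_business_days(decision_date, 15)
    return min(informed_date, outer) if informed_date else outer

def earliest_discipline_effective(notice_given: date) -> date:
    # Subd. 1(a)(2): discipline or termination may take effect no sooner
    # than 30 days after the notice.
    return notice_given + timedelta(days=30)

def access_response_deadline(request_received: date) -> date:
    # Subd. 2(b): respond within 14 calendar days of receiving the request.
    return request_received + timedelta(days=14)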
Pending 2026-08-01
H-01.4 H-01.5
Minn. Stat. § 181.9926(a)-(f)
Plain Language
Workers have the right to appeal any employment-related decision that involved an automated decision system. The employer must provide an appeal form (or link) allowing the worker to request access to all input/output data and corroborating evidence, submit reasons and supporting evidence, and designate an authorized representative. Appeals must be filed within 30 days of post-use notice. The employer must respond within 5 business days by designating an independent human reviewer who was not involved in the original decision, has authority to overturn it, and has AI literacy training. The reviewer must produce a written decision with reasons, provided to both employer and worker. If the decision is overturned, the employer must rectify it within 5 business days.
(a) An employer that uses an automated decision system to make an employment-related decision must provide the affected worker with a form or a hyperlink to an electronic form to appeal the decision. (b) The appeal form provided to an affected worker must include: (1) the option to request access to the data used as input to or as output from the automated decision system; (2) the option to request access to any corroborating or supporting evidence provided by a human reviewer to verify output from the automated decision system; (3) space for the worker's reason for an appeal and any evidence the worker has to support the appeal; and (4) information on how the worker can designate an authorized representative who can also access the data. (c) A worker appealing the employment-related decision must submit their appeal form within 30 days of receiving the notification under section 181.9925. (d) Within five business days of receiving an appeal form, an employer must respond to the worker submitting the form. To respond to an appeal, the employer must designate a human reviewer who: (1) must objectively evaluate all evidence; (2) has sufficient authority, discretion, and resources to evaluate the decision, including education about the limitations and biases of automated decision systems and training on workers' rights under sections 181.9922 to 181.9927; (3) has the authority to overturn the employer's decision; and (4) was not involved in making the decision the worker is appealing. (e) After reviewing the evidence, the human reviewer must produce a clear, written document describing the result of the appeal and the reasons for that result. This document must be provided to both the employer and the worker. (f) If the human reviewer determines that the employment-related decision should be overturned, the employer must rectify the decision within five business days of receiving the decision.
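The appeal clock mixes calendar-day and business-day windows. A hedged sketch of the deadlines and the reviewer-eligibility test, reusing the add_business_days helper from the sketch above; function names are illustrative:

from datetime import date, timedelta

def appeal_filing_deadline(notice_received: date) -> date:
    # Subd. (c): the worker has 30 days from the section 181.9925 notice to file.
    return notice_received + timedelta(days=30)

def employer_response_deadline(appeal_filed: date) -> date:
    # Subd. (d): respond within 5 business days of receiving the appeal.
    return add_business_days(appeal_filed, 5)

def rectification_deadline(reviewer_decision: date) -> date:
    # Subd. (f): an overturned decision must be rectified within 5 business days.
    return add_business_days(reviewer_decision, 5)

def reviewer_is_eligible(involved_in_original_decision: bool,
                         can_overturn: bool, ads_trained: bool) -> bool:
    # Subd. (d)(1)-(4), compressed: independence from the original decision,
    # authority to overturn, and training on ADS limitations and worker rights.
    return (not involved_in_original_decision) and can_overturn and ads_trained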
Pending 2025-08-01
H-01.3
Minn. Stat. § 363A.08, subd. 9(b)(2)
Plain Language
Employers must notify employees and applicants when AI is being used in employment decisions covered by subdivision 9(b)(1) — i.e., recruitment, hiring, promotion, renewal of employment, training selection, discharge, discipline, tenure, or terms and conditions of employment. Failure to provide this notice is itself an independent unfair employment practice. The bill does not specify the form, timing, or content of the notice beyond requiring that it be given, leaving significant implementation discretion to employers and potential future regulatory guidance.
(2) fail to provide notice to an employee or applicant for employment that the employer is using artificial intelligence for the purposes described in clause (1).
Pending 2026-09-01
H-01.3
§ 181.9922, Subd. 1(a)-(f); Subd. 2
Plain Language
Before deploying any automated decision system for employment-related decisions, employers must provide a detailed written pre-use notice to every affected worker (including job applicants and independent contractors), their authorized representatives, and any relevant union. For new systems, notice must come at least 30 days in advance; for existing systems, by September 1, 2026. The notice must be plain-language, standalone, in workers' routine language, and must describe the system's purpose, data inputs, logic, vendor, impact assessment results, a full list of ADS in use, and workers' rights. The employer must also obtain affirmative written consent before subjecting any worker to the ADS and must allow opt-out where reasonable alternatives exist. A copy of every notice must be submitted to the commissioner of labor and industry within ten days. Union notice must allow meaningful bargaining opportunity before deployment or modification.
Subdivision 1. Pre-use notice; provision. (a) An employer must provide a written notice that an automated decision system is in use at the workplace for the purpose of making employment-related decisions, to a worker who will be directly or indirectly affected by the automated decision system, or the worker's authorized representative, and to any union representing workers who could be directly or indirectly affected by the automated decision system. (b) The notice in paragraph (a) must be provided: (1) if the automated decision system is introduced after the effective date of this section, at least 30 days before the introduction of the automated decision system; (2) if the employer is using an existing automated decision system as of the effective date of this section, no later than September 1, 2026; (3) prominently to a job applicant or new worker, before the employer collects the applicant's or worker's personal information that the employer plans to process using the automated decision system; (4) at least 30 days before implementing any significant change to the automated decision system or how the employer is using the automated decision system; and (5) to a union representing workers who will be subject to the automated decision system, on a timeline that provides a meaningful opportunity to bargain over the use, scope, and impact of the automated decision system prior to deployment or modification of the tool. (c) Every time an employer provides a notice under paragraph (a), a copy of that notice must be submitted to the commissioner of labor and industry within ten days of the date the notice was provided to the worker. Copies of notices under paragraph (a) must also be made available to authorized representatives upon request. (d) Notices under paragraph (a) must be: (1) written in plain language as a separate and standalone communication; (2) in the language in which routine communications and other information are provided to workers; and (3) provided using a simple and easy-to-use method, including an email, hyperlink, or other written format. (e) A job applicant or worker must receive the notice required under this section and respond with affirmative written consent before the worker or applicant is subject to an automated decision system. (f) If reasonable alternatives to the use of the automated decision system exist, the worker must be allowed to opt out of being subject to the automated decision system. Subd. 2. Pre-use notice; contents. The notice required under subdivision 1, paragraph (a), must contain the following information: (1) a plain-language explanation of the nature, purpose, and scope of the decisions for which the automated decision system will be used, including the specific employment-related decisions potentially affected; (2) the specific category and sources of worker data the automated decision system will use or collect, and how that data was or will be collected; (3) the logic used in the automated decision system, including the key parameters that affect the output of the automated decision system, and the type of outputs the automated decision system will produce; (4) the individuals, vendors, and entities that created the automated decision system and the individuals, vendors, and entities that will run, manage, and interpret the results of the automated decision system output; (5) the job qualifications and characteristics that the automated decision system assesses, what worker data or attributes the system uses to conduct that assessment, and what kind of outputs the system produces as an evaluation of the worker; (6) the results of any impact assessments of the automated decision system, whether performed by the employer or the automated decision system vendor, and how to access that information; (7) an up-to-date list of all automated decision systems the employer is currently using; and (8) a description of the worker's rights under sections 181.9922 to 181.9927.
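A deployer could self-check a draft notice against the eight content elements of subdivision 2 and the consent and opt-out gates of subdivision 1(e) and (f). A minimal sketch under that framing; the keys and function name are invented labels, not statutory terms:

REQUIRED_CONTENTS = (
    "decision_scope",            # (1) nature, purpose, and scope of affected decisions
    "data_categories",           # (2) worker-data categories, sources, collection method
    "system_logic",              # (3) logic, key parameters, output types
    "builders_and_operators",    # (4) who created, runs, and interprets the system
    "assessed_qualifications",   # (5) qualifications and characteristics assessed
    "impact_assessment_results", # (6) assessment results and how to access them
    "ads_inventory",             # (7) up-to-date list of all ADS in use
    "worker_rights",             # (8) rights under sections 181.9922 to 181.9927
)

def may_subject_worker(notice: dict, consented: bool,
                       alternatives_exist: bool, opted_out: bool) -> bool:
    if any(key not in notice for key in REQUIRED_CONTENTS):
        return False   # subd. 2: all eight content elements are required
    if not consented:
        return False   # subd. 1(e): affirmative written consent must come first
    if alternatives_exist and opted_out:
        return False   # subd. 1(f): opt-out honored where alternatives exist
    return True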
Pending 2026-09-01
H-01.6
§ 181.9924, Subd. 2(a)-(d)
Plain Language
Employers may never rely solely on an automated decision system for employment-related decisions. When using an ADS in part, they must verify the accuracy of the output and assign a designated internal reviewer who must independently investigate and compile corroborating information. The reviewer must have sufficient authority, expertise, ADS training, and retaliation protection. If the reviewer cannot corroborate the ADS output or finds it inaccurate, incomplete, or misleading, the employer must not rely on it. This creates a mandatory human-in-the-loop for every ADS-informed employment decision — not merely an option for human review, but a structural requirement that the human reviewer have the qualifications and authority to override.
Subd. 2. Employment-related decisions. (a) An employer must not rely solely on an automated decision system when making an employment-related decision. (b) When an employer relies in part on an automated decision system in making an employment-related decision, the employer must: (1) ensure the accuracy of the automated decision system output; and (2) use a designated internal reviewer to conduct an investigation and compile corroborating information for the decision. This information may include but is not limited to supervisory or managerial evaluations, personnel files, employee work products, or peer reviews. (c) The designated internal reviewer must: (1) have sufficient authority, discretion, resources, and time to corroborate the automated decision system output; (2) have sufficient expertise in the operation of similar systems and a sufficient understanding of the automated decision system in question to interpret the outputs and results of relevant impact assessments; (3) have sufficient education, training, or experience to allow the reviewer to make a well-informed decision, including education about the limitations and biases of automated decision systems and training on workers' rights under sections 181.9922 to 181.9927; and (4) be protected from retaliation for exercising the reviewer's responsibilities. (d) When an employer cannot corroborate the automated decision system output, or the human reviewer has concluded that the automated decision system output is inaccurate, incomplete, or misleading, the employer must not rely on the automated decision system to make the employment-related decision.
Pending 2026-09-01
H-01.1
§ 181.9925, Subd. 1(a)-(d)
Plain Language
After using an ADS for an employment-related decision, employers must provide the affected worker with a post-decision written notice that acknowledges ADS was used, describes the worker's rights, provides an appeal form or link, and states the anti-retaliation prohibition. For most decisions, notice must arrive at the time of decision or within 15 business days; for discipline or termination, at least 30 days before it takes effect. When the same ADS is used the same way multiple times per quarter, a full notice is required for the first use each quarter, followed by a summary notice at quarter-end listing usage dates and counts. All notices must be plain-language, standalone, and in the worker's routine communication language.
Subdivision 1. Notice. (a) An employer that has used an automated decision system to make an employment-related decision must provide the affected worker with a written notice: (1) at the time the employer informs the worker of the decision, or no later than 15 business days from the date of the decision, whichever is earlier; or (2) if the decision results in the discipline or termination of the worker, at least 30 days before the discipline or termination takes effect. (b) The employer must provide a notice under paragraph (a) that is: (1) written in plain language as a separate and standalone communication; (2) in the language in which routine communications and other information are provided to workers; and (3) provided using a simple and easy-to-use method, including an email, hyperlink, or other written format. (c) A notice under paragraph (a) must contain the following information: (1) an acknowledgment that the employer used an automated decision system to make one or more employment-related decisions with respect to the worker; (2) a description of the worker's rights under sections 181.9922 to 181.9927; (3) a form or a hyperlink to an electronic form for the worker to file an appeal or request detailed information about the data and automated decision system used in the decision; and (4) that the employer is prohibited from retaliating against the worker for exercising the worker's rights under this section. (d) If an employer uses the same automated decision system in the same way multiple times a quarter, an employer must provide each affected employee: (1) the full notice required by this section for the first use of the automated decision system each quarter; and (2) a second notice at the end of the quarter that provides: (i) the number of times the employer or operator used the automated decision system that quarter; (ii) the dates the employer or operator used the automated decision system that quarter; and (iii) a description of the worker's rights under sections 181.9922 to 181.9927, including the right to access information about each decision.
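The quarterly pattern in subdivision 1(d) is an aggregation rule: a full notice on the first same-way use of a system each quarter, then a quarter-end summary of counts and dates. A sketch with invented names:

from collections import defaultdict
from datetime import date

# Keyed by (system id, year, quarter); values are the dates of use.
uses: dict[tuple[str, int, int], list[date]] = defaultdict(list)

def record_use(ads_id: str, used_on: date) -> str:
    key = (ads_id, used_on.year, (used_on.month - 1) // 3 + 1)
    uses[key].append(used_on)
    # Subd. 1(d)(1): the first same-way use each quarter gets the full notice.
    return "full_notice_due" if len(uses[key]) == 1 else "covered_by_quarterly_summary"

def quarter_end_summary(ads_id: str, year: int, quarter: int) -> dict:
    # Subd. 1(d)(2): number of uses, dates of use, and a restatement of
    # worker rights, including the right to access information per decision.
    dates = uses[(ads_id, year, quarter)]
    return {"uses": len(dates), "dates": sorted(dates),
            "rights": "sections 181.9922 to 181.9927"}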
Pending 2026-09-01
H-01.1 H-01.2
§ 181.9925, Subd. 2(a)-(c)
Plain Language
Upon a worker's request, employers must provide within 14 calendar days a detailed explanation package covering: the specific decision made, the specific worker data inputs and outputs, the rationale including the respective roles of ADS output and human judgment, the system's logic and how it was applied to the worker, key parameters and performance metrics, aggregate statistics for context, the vendor and product name, and all completed impact assessments. Service providers, contractors, and vendors must cooperate fully in responding. This is an exceptionally detailed individual explanation right — it requires not just what the decision was, but how the ADS was applied to the specific worker, the human's role, and comparative context against other workers.
Subd. 2. Right to access. (a) When responding to a worker's access request, an employer must provide the following information to the worker: (1) a plain-language explanation of the specific decision for which the employer used the automated decision system; (2) in a simple and easy-to-use format, the specific worker data that the automated decision system used and all specific worker outputs produced by the automated decision system; (3) how the employer used the automated decision system output with respect to the worker, including: (i) the rationale for the decision, including the specific roles the output and human involvement played in the business's decision; (ii) any additional corroborating information or judgments the employer used in addition to the automated decision system output in making the decision; (iii) how the logic of the automated decision system, including its assumptions and limitations, was applied to the worker; (iv) the key parameters or performance metrics that affected the output of the automated decision system with respect to the worker and how those parameters applied to the worker; and (v) the range of possible outputs and aggregate output statistics, to help a worker understand how they compare to other workers; (4) the name of the entity that created the automated decision system and the product name of the automated decision system; and (5) a copy of any completed impact assessments of the automated decision system. (b) An employer must respond to an access request no later than 14 calendar days from the date the employer received the request. (c) A service provider, contractor, or vendor must provide full assistance to the employer in responding to a worker request for access, including any of that worker's input or output data in the service provider, contractor, or vendor's possession and any relevant information about the automated decision system.
Pending 2026-09-01
H-01.4 H-01.5
§ 181.9926(a)-(f)
Plain Language
Employers must provide affected workers with a formal appeal mechanism for any ADS-informed employment decision. The appeal form must allow workers to request input/output data, corroborating evidence, present their reasons and evidence, and designate an authorized representative. Workers have 30 days from post-use notice to file. The employer must respond within five business days by designating an independent human reviewer who was not involved in the original decision, has authority to overturn it, is trained on ADS limitations and worker rights, and who produces a written decision with reasons. If the appeal is sustained, the employer must rectify the decision within five business days. This creates a structured, time-bound, substantive appeal process with independence requirements for the reviewer.
(a) An employer that uses an automated decision system to make an employment-related decision must provide the affected worker with a form or a hyperlink to an electronic form to appeal the decision. (b) The appeal form provided to an affected worker must include: (1) the option to request access to the data used as input to or as output from the automated decision system; (2) the option to request access to any corroborating or supporting evidence provided by a human reviewer to verify output from the automated decision system; (3) space for the worker's reason for an appeal and any evidence the worker has to support the appeal; and (4) information on how the worker can designate an authorized representative who can also access the data. (c) A worker appealing the employment-related decision must submit their appeal form within 30 days of receiving the notification under section 181.9925. (d) Within five business days of receiving an appeal form, an employer must respond to the worker submitting the form. To respond to an appeal, the employer must designate a human reviewer who: (1) must objectively evaluate all evidence; (2) has sufficient authority, discretion, and resources to evaluate the decision, including education about the limitations and biases of automated decision systems and training on workers' rights under sections 181.9922 to 181.9927; (3) has the authority to overturn the employer's decision; and (4) was not involved in making the decision the worker is appealing. (e) After reviewing the evidence, the human reviewer must produce a clear, written document describing the result of the appeal and the reasons for that result. This document must be provided to both the employer and the worker. (f) If the human reviewer determines that the employment-related decision should be overturned, the employer must rectify the decision within five business days of receiving the decision.
Failed 2026-02-01
H-01.3
Sec. 4(4)(a)(i)-(iii)
Plain Language
Before deploying a high-risk AI system to make or substantially contribute to a consequential decision about a consumer, the deployer must: (1) notify the consumer that such a system is being used; (2) provide a statement disclosing the system's purpose and the nature of the consequential decision, the deployer's contact information, a plain-language system description, and instructions to access the deployer's public statement under Section 4(5); and (3) where applicable, inform the consumer of their opt-out right under Nebraska's data privacy law (Section 87-1107). This is a pre-decision notice obligation — it must be completed before the consequential decision is made.
(4)(a) On and after February 1, 2026, prior to deploying any high-risk artificial intelligence system to make or be a substantial factor in making any consequential decision concerning any consumer, the deployer shall: (i) Notify the consumer that the deployer has deployed a high-risk artificial intelligence system to make or be a substantial factor in making a consequential decision; (ii) Provide to the consumer: (A) A statement that discloses the purpose of the high-risk artificial intelligence system and the nature of the consequential decision; (B) The contact information for the deployer; (C) A description written in plain language that describes the high-risk artificial intelligence system; and (D) Instructions on how to access the statement described in subdivision (5)(a) of this section; and (iii) If applicable, provide information to the consumer regarding the consumer's right to opt out of the processing of personal data concerning the consumer for any purpose of profiling in furtherance of decisions that produce legal or similarly significant effects concerning the consumer under subdivision (2)(e)(iii) of section 87-1107.
Failed 2026-02-01
H-01.1 H-01.2 H-01.4 H-01.5
Sec. 4(4)(b)(i)-(iii)
Plain Language
When a high-risk AI system makes or substantially contributes to an adverse consequential decision about a consumer, the deployer must provide: (1) a statement explaining the principal reasons for the decision — including the AI system's degree and manner of contribution, the types of data processed, and each data source; (2) an opportunity to correct any incorrect personal data used in the decision; and (3) an opportunity to appeal the adverse decision, with human review if technically feasible. The appeal right has a narrow exception for situations where delay would risk the consumer's life or safety.
(b) On and after February 1, 2026, for each high-risk artificial intelligence system that makes or is a substantial factor in making any consequential decision that is adverse to any consumer, the deployer of such high-risk artificial intelligence system shall provide to such consumer: (i) A statement that discloses each principal reason for the consequential decision, including: (A) The degree to and manner in which the high-risk artificial intelligence system contributed to the consequential decision; (B) The type of data that was processed by the high-risk artificial intelligence system in making the consequential decision; and (C) Each source of the data described in subdivision (b)(i)(B) of this subsection; (ii) An opportunity to correct any incorrect personal data that the high-risk artificial intelligence system processed in making or processed as a substantial factor in making the consequential decision; and (iii) An opportunity to appeal any adverse consequential decision concerning the consumer arising from the deployment of the high-risk artificial intelligence system unless providing the opportunity for appeal is not in the best interest of the consumer, including instances when any delay might pose a risk to the life or safety of such consumer. Any such appeal shall allow for human review if technically feasible.
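The adverse-decision package in Sec. 4(4)(b) has a fixed shape, which makes a structured record a natural way to track completeness. A hedged sketch; the dataclass and its field names are illustrative, not statutory:

from dataclasses import dataclass

@dataclass
class AdverseDecisionDisclosure:
    principal_reasons: list[str]   # (b)(i): each principal reason for the decision
    ai_contribution: str           # (b)(i)(A): degree and manner of AI contribution
    data_types: list[str]          # (b)(i)(B): types of data processed
    data_sources: list[str]        # (b)(i)(C): each source of that data
    correction_channel: str        # (b)(ii): route to correct incorrect personal data
    appeal_offered: bool           # (b)(iii): omitted only if delay risks life or safety
    human_review: bool             # appeal allows human review if technically feasible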
Failed 2026-02-01
Sec. 4(4)(c)(i)-(ii)
Plain Language
All consumer notices required under Section 4(4)(a) and (b) — pre-decision notification and post-adverse-decision disclosures — must be delivered directly to the consumer, in plain language, in all languages the deployer normally uses for business communications, and in formats accessible to consumers with disabilities. If direct delivery is impossible, the deployer must use a method reasonably calculated to reach the consumer. This is a delivery-format requirement that qualifies the notice obligations in the preceding subsections.
(c)(i) Except as provided in subdivision (c)(ii) of this subsection, a deployer shall provide the notice, statement, contact information, and description required under subdivisions (4)(a) and (b) of this section: (A) Directly to the consumer; (B) In plain language; (C) In each language in which the deployer in the ordinary course of business provides any contract, disclaimer, sale announcement, or other information to any consumer; and (D) In a format that is accessible to any consumer with any disability. (ii) If the deployer is unable to provide the notice, statement, contact information, and description required under subdivisions (a) and (b) of this subsection directly to the consumer, the deployer shall make the notice, statement, contact information, and description available in a manner that is reasonably calculated to ensure that the consumer receives the notice, statement, contact information, and description.
Pending
H-01.1
Section 2(c)
Plain Language
When a business entity uses biometric surveillance data to deny a consumer access to its premises or to physically remove a consumer, the business must provide the consumer with a detailed explanation of the actions taken and the criteria the business used to make that determination. This is an adverse-action explanation requirement triggered specifically by access denial or removal decisions informed by biometric surveillance. The explanation must cover both what the business did and why — including the decision criteria, not merely that biometric data was used.
c. If a business entity uses information obtained through a biometric surveillance system to deny a consumer access to its premises or to remove a consumer from its premises, the business entity shall provide the consumer with a detailed explanation regarding its actions and the criteria used by the business entity in making its determination.
Pending
H-01.1 H-01.3
Section 6(a)–(b), Section 7
Plain Language
Employers and public entities must provide detailed written notice to all affected employees, service beneficiaries, and bargaining representatives at least 60 days before implementing an AEDS, ABSDS, or EMT. For existing systems, notice must be given within 60 days of the act's effective date; for new hires, within 30 days of hiring (with written acknowledgment required). The notice must describe: what system is being implemented and what decisions it affects; summaries of impact assessments with directions to the full reports on the public registry; the data to be collected and outputs to be used; individual rights to access data and contest decisions; performance standards and productivity quotas (employees only); and the employer's obligation to respond to bargaining representative concerns. Significant changes to systems require an additional 60-day advance notice. All notices must be in plain language, translated into languages spoken by at least 5% of the workforce, provided in hard copy (and electronically if possible), posted conspicuously in the workplace, and must include anti-retaliation disclosures.
6. a. An employer or public entity shall not implement an EMT or other surveillance or the use of an AEDS or ABSDS unless the employer or public entity has provided a written notice to all affected service beneficiaries and employees, including public employees making decisions about public benefits or services for service beneficiaries, and to any recognized bargaining representative of the employees, at least 60 days prior to implementation. If the EMT, AEDS, or ABSDS was in operation on the effective date of this act, the written notice shall be provided not more than 60 days after the effective date of this act. For an employee hired after the effective date of this act, written notice shall be provided not more than 30 days after the hiring, and the employer shall obtain a written acknowledgement of receipt of the notice by the employee. The notice shall include, except that the notice to service beneficiaries shall not include the disclosures indicated in paragraphs (5) and (6) of this subsection, the following disclosures: (1) that the use of an AEDS, ABSDS, or EMT or surveillance is being implemented, and what type of decisions will be affected by the AEDS, ABSDS or the EMT or surveillance; (2) copies of the summaries of the impact assessment reports of the AEDS, ABSDS, or EMT conducted by an independent auditor or the department conducted pursuant to subsection d. of section 3, or subsection a. of section 4, of the act, and directions on how to obtain the entire impact assessment report from the public registry maintained by the department; (3) a description of the data and information that will be collected and the outputs that will be used, specifying, in the case of an employee or public employee, for which of the allowable purposes identified in subsection a. of section 3 of this act they will be used; (4) the rights provided by this section and section 8 of this act to employees and service beneficiaries, and their authorized representatives, to have access to all relevant data and information and to contest any disclosure of the notice; (5) a description of any performance standard, productivity quota, or other related measure used in evaluating employees, including public employees making decisions about public benefits or services for service beneficiaries, a description of what data and information is collected, and a description of any adverse consequences or positive incentives associated with the standards or quotas; and (6) the obligation stipulated in subsection c. of this section for an employer or public entity, upon a request of the recognized bargaining representative of the employees, to respond, in the manner specified by that subsection, to concerns raised by the representative regarding the AEDS, ABSDS, EMT, or surveillance. b. Employers or public entities shall give employees, and any recognized bargaining representative of the employees, at least 60 days written notice before the implementation of any significant changes in the EMT, AEDS, or ABSDS or in the employer's or public entity's use of an EMT or surveillance or use of an AEDS or ABSDS. 7. All notices required to be provided to employees or service beneficiaries pursuant to section 6 of this act, and all summaries of impact assessment reports required to be included with those notices, shall: a. Be written in clear, plain language easily understood by workers without technical expertise; b. Be translated into any language spoken by at least five percent of the employer's or public entity's workforce; c. Be provided in hard copy form and, if possible, in electronic form; d. Be posted conspicuously in the workplace and made continuously available to workers and their recognized bargaining representative; and e. Disclose that employers and entities are prohibited from retaliating against employees or applicants for employment for exercising their rights under this act.
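The translation rule in section 7(b) is a simple proportion test. A minimal sketch with invented names:

from collections import Counter

def required_notice_languages(workforce_languages: list[str]) -> set[str]:
    # Section 7(b): translate into any language spoken by at least five
    # percent of the workforce.
    counts = Counter(workforce_languages)
    total = sum(counts.values())
    return {lang for lang, n in counts.items() if total and n / total >= 0.05}

# Example: 6 Spanish speakers in a 100-person workforce crosses the 5%
# threshold, so a Spanish translation would be required.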
Section 6(c)
Plain Language
If a bargaining representative raises specific compliance concerns within 30 days of receiving implementation or modification notice, the employer may not proceed with deployment until it provides a written response addressing each concern — including either agreed-upon modifications or an explanation of why no modification is necessary. This creates a mandatory pause-and-respond mechanism that effectively gives unions a pre-deployment challenge right. If the representative remains unsatisfied, they may pursue administrative, civil, or grievance/arbitration remedies.
c. If a recognized bargaining representative of the employees, within 30 days of receiving a notice pursuant to subsection a. or b. of this section, notifies the employer or public entity of specific concerns they have of an AEDS, ABSDS, EMT, or surveillance not being in compliance with the provisions of this act, other law, or applicable collective bargaining agreement, including whether the impact assessment was accurate in deeming the AEDS, ABSDS, EMT, or surveillance to be in compliance, the employer or public entity shall not implement the AEDS, ABSDS, EMT, or surveillance until the employer or public entity has provided the representative of the employees with a written response to the specific concerns which includes any modification of the AEDS, ABSDS, or EMT which the employer or public entity agrees is needed for compliance, or an explanation of why the employer or public entity believes no modification is necessary to be in compliance. If the employee representative is not satisfied with the response, the representative may seek relief in an administrative action pursuant to section 18 of this act, in a civil action pursuant to the provisions of section 19 of this act, or in a grievance or arbitration procedure outlined in an applicable collective bargaining agreement.
Section 6(d)
Plain Language
Employers may not take adverse employment actions based on productivity quotas or performance standards that were not previously disclosed to the employee in the written notice required under section 6(a)(5). This is a disclosure-gating rule: undisclosed performance metrics cannot serve as the basis for adverse decisions.
d. The employer or public entity shall not make any employment-related decision which has an adverse impact on an employee if the decision is based, in whole or in part, on a productivity quota or performance standard that was not previously disclosed to the employee pursuant to paragraph (5) of subsection a. of this section.
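Section 6(d) reduces to set containment: the metrics a decision rests on must be a subset of the metrics disclosed in the section 6(a)(5) notice. A one-function sketch with hypothetical names:

def adverse_decision_permitted(basis_metrics: set[str],
                               disclosed_metrics: set[str]) -> bool:
    # Section 6(d): every quota or standard the decision rests on, in whole
    # or in part, must have appeared in the prior written notice.
    return basis_metrics <= disclosed_metrics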
Pending
H-01.1 H-01.2 H-01.4 H-01.5
Section 8(a)–(b)
Plain Language
Before any adverse employment decision or reduction of public benefits takes effect, the employer or public entity must provide at least 10 days' written advance notice (or notice at the time of decision for applicant rejections) to the affected individual and any bargaining representative. The notice must explain the reasons for the decision, provide access to all relevant data including how the AI system contributed to the decision, and inform the individual of their rights to contest the decision. Upon request within 30 days, the individual is entitled to: (1) review and copy all data used in the decision, including a complete explanation of algorithmic weighting, factors, and processes; (2) appeal to correct inaccurate or biased data and contest any decision made with improper data or in violation of the act; and (3) have the appeal reviewed by a designated human reviewer who is a trained employee of the organization with full authority and discretion to modify or overturn the decision. If unsatisfied with the internal review outcome, the individual may pursue administrative, civil, or grievance/arbitration remedies.
8. a. In the case of an employer or public employer who, with respect to public employees, uses an EMT or other surveillance, or uses an AEDS, to make, or assist in making, an employment-related decision which adversely affects an employee, or in the case of a public entity which uses an ABSDS in making a decision to reduce public benefits or services to a service beneficiary, the employer or public entity shall, at least 10 days before the decision takes effect, provide the service beneficiary or employee and any recognized bargaining representative of the employee with a written notice, which: (1) describes and explains the reasons for the decision; (2) provides access to all relevant data and information about the decision, including a comprehensive explanation of how the EMT, ABSDS, or AEDS are being used in making the decision; and (3) explains that the employee, applicant for employment, service beneficiary, or an authorized representative shall have: the right to access all relevant data and information; the right to contest the decision through the procedures indicated in subsection b. of this section; and, if the employee, service beneficiary, or applicant is not satisfied with the outcome of that procedure, the right to seek relief in an administrative action pursuant to section 18 of this act, in a civil action pursuant to the provisions of section 19 of this act, or, if an employee is represented by a recognized bargaining representative, in a grievance or arbitration procedure outlined in an applicable collective bargaining agreement. In the case of an applicant for employment or public benefits or services, an employer or public entity that uses an AEDS or ABSDS to make, or assist in making, a decision to reject the application shall provide the written notice described in this subsection not later than the time that the decision is made. b. Upon a request from the employee, service beneficiary, applicant for employment or an authorized representative made not more than 30 days after the employer or public entity provides the written notice required by subsection a. of this section, or not more than 30 days after the adverse decision is implemented if the required notice is not given, the employer or public entity shall: (1) permit the employee, service beneficiary, applicant or authorized representative to review and copy any data and information collected or used to make the decision, and related personnel files; disclose complete data and information regarding the impact assessments of the EMT, the ABSDS, and the AEDS conducted pursuant to section 3 of this act and oversight of the EMT, the ABSDS, and the AEDS conducted pursuant to section 9 of this act, including whether the output of the AEDS or the ABSDS was modified in the oversight process, and if so, how; provide copies of the summaries of the impact assessment reports and disclose how to access the full assessment reports on the public registry maintained by the department; and provide a clear, complete explanation of how the AEDS or the ABSDS produced any outputs related to the decision, including information about the weighting of factors and the data, algorithms, and other processes involved in making the decision; (2) permit the employee, service beneficiary, or applicant for employment to make an appeal to: seek the correction of any inaccurate, incomplete or biased data or information; contest any adverse decision in which data or information was considered which was erroneous, incomplete or biased or which was collected, retained or used by the EMT, ABSDS, or AEDS in a manner which violates the provisions of this act; or contest any adverse decision in which the decision was otherwise made in a manner which violates the provisions of this act; and (3) designate a human reviewer who is an employee of the employer or public entity and who is required to objectively evaluate all evidence, has sufficient authority, discretion, resources, and time to evaluate the decision, has sufficient training and expertise to have a full understanding of the data, algorithms, and other processes involved in making the decision, and has the authority to modify or overturn the decision, including the correction of any inaccurate, incomplete or biased data or information. The reviewer shall consider the appeal made by the employee, service beneficiary, or applicant for employment regarding any of the matters indicated in paragraph (2) of this subsection and issue a determination which shall be the final outcome of the procedure of this subsection for an appeal made to the employer or public entity. An employee, service beneficiary, or applicant for employment who is not satisfied with this final outcome of the procedure may seek relief in an administrative action pursuant to section 18 of this act, in a civil action pursuant to the provisions of section 19 of this act, or, if the employee is represented by a recognized bargaining representative, in a grievance or arbitration procedure outlined in an applicable collective bargaining agreement.
Pending
H-01.6
Section 9(a)–(b)
Plain Language
Employers and public entities may never base employment or public benefits decisions exclusively or determinatively on AI system outputs, monitoring data, or third-party data broker information. All such data must be corroborated by designated internal human reviewers before use in decisions. The employer must establish a meaningful human oversight program that includes: (1) designating trained employee reviewers with expertise in the AI systems and familiarity with impact assessments; (2) granting reviewers authority to dispute, revise, or reject inaccurate, discriminatory, or invalid outputs; (3) requiring human decision-makers to exercise independent judgment and consider non-AI information (supervisory evaluations, personnel files, work product, peer reviews) for consequential decisions; and (4) ensuring reviewers have adequate time, resources, and availability for direct communication with affected individuals. This is the most structurally demanding human oversight requirement in the bill — it requires both a prohibition on sole reliance and an affirmative program of corroboration and independent judgment.
9. a. An employer or public entity shall not rely solely on data or information about employees or service beneficiaries collected through an EMT or other surveillance, or outputs of an AEDS or ABSDS, or information from third parties, including data brokers, when making employment-related decisions, or in the case of a public entity, when making employment-related decisions about its own employees, or making decisions about public benefits or services for service beneficiaries. Any data or information collected through an EMT or other surveillance, or used to produce, or be part of, outputs of an AEDS or ABSDS, shall be corroborated by internal reviewers designated by the employer or public entity pursuant to subsection b. of this section and shall be subject to review and challenge by the affected service beneficiary or employee or their authorized representative, as provided in paragraph (2) of subsection a. of section 5 of this act or subsection b. of section 8 of this act. No decision affecting the terms or conditions of employment or the provision of public benefits or services may be based exclusively or determinatively on AEDS or ABSDS outputs, or data and information collected by an EMT or other surveillance. b. An employer or public entity shall establish meaningful human oversight of all employment-related decisions or decisions about public benefits or services made utilizing data or information collected by an EMT or other surveillance or AEDS or ABSDS outputs. The oversight shall include: (1) designation of internal reviewers who are employees of the employer or public entity, and have sufficient training and expertise in the operation of whichever is used of the EMT, the ABSDS, or the AEDS, familiarity with the most recent impact assessments of the EMT, the ABSDS, or AEDS, and sufficient understanding of their use to identify potential errors, biases, or inaccuracies produced by their use; (2) authority and discretion for the reviewers to dispute, revise, or reject AEDS or ABSDS outputs or data or information collected by an EMT or other surveillance suspected, or found, to be inaccurate, discriminatory, or otherwise invalid; (3) a requirement that a human decision-maker review the data and information collected by an EMT or other surveillance and the AEDS and ABSDS outputs, exercise independent judgment, and consider information beyond AEDS and ABSDS outputs and data and information collected by an EMT or other surveillance, including, in the case of an employee, supervisory evaluations, personnel files, employee work product, or peer reviews, when making consequential employment-related decisions; and (4) a requirement that the reviewers have adequate time and resources to conduct the reviews, and are available for direct communication, in person or by phone or video conference, to applicants for employment or public benefits or services, service beneficiaries affected by an adverse decision regarding public benefits or services, and employees affected by adverse employment-related decisions.
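Section 9(b) enumerates four program elements that must all be present before ADS-informed decisions can proceed. A hedged configuration sketch; the class and field names are assumptions of this illustration, not terms from the bill:

from dataclasses import dataclass

@dataclass
class OversightProgram:
    reviewers_designated: bool   # (b)(1): trained internal reviewers, familiar with assessments
    can_reject_outputs: bool     # (b)(2): authority to dispute, revise, or reject outputs
    independent_judgment: bool   # (b)(3): human decision-maker weighs non-AI sources
    reviewer_capacity: bool      # (b)(4): time, resources, and direct availability

    def satisfies_section_9b(self) -> bool:
        # All four program elements must be present; none is optional.
        return all((self.reviewers_designated, self.can_reject_outputs,
                    self.independent_judgment, self.reviewer_capacity))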
Passed 2026-01-01
H-01.1
Section 2(c)
Plain Language
When a business entity uses biometric surveillance data to make an adverse decision about a consumer — specifically denying access to or removing the consumer from the business premises — the business must provide that consumer with a detailed explanation of the actions taken and the criteria the system applied. This is an adverse-action explanation requirement triggered only when biometric data drives a denial or removal decision. The statute requires the explanation to be 'detailed' and to cover both the actions and the criteria, which is more demanding than a generic notice that biometric data was used. The bill does not specify the format or timing of the explanation.
c. If a business entity uses information obtained through a biometric surveillance system to deny a consumer access to its premises or to remove a consumer from its premises, the business entity shall provide the consumer with a detailed explanation regarding its actions and the criteria used by the business entity in making its determination.
Pending 2027-01-01
H-01.1 H-01.2 H-01.3 H-01.5
GBL § 1552(5)(a)-(c)
Plain Language
Before using a high-risk AI system to make or substantially factor into a consequential decision about a consumer, deployers must provide pre-decision notice including: notice that AI is being used, the system's purpose, the nature of the decision, deployer contact information, a plain-language system description, and instructions for accessing the deployer's public summary. If the decision is adverse, the deployer must additionally provide: the principal reasons for the decision (including how the AI contributed, what data types were used, and data sources), the opportunity to correct incorrect personal data, and an appeal process with human review where technically feasible — unless delay would endanger the consumer. All notices must be provided directly to the consumer, in plain language, in all languages the deployer uses in its business, and in formats accessible to consumers with disabilities.
(a) Beginning on January first, two thousand twenty-seven, and before a deployer deploys a high-risk artificial intelligence decision system to make, or be a substantial factor in making, a consequential decision concerning a consumer, the deployer shall: (i) notify the consumer that the deployer has deployed a high-risk artificial intelligence decision system to make, or be a substantial factor in making, such consequential decision; and (ii) provide to the consumer: (A) a statement disclosing: (I) the purpose of such high-risk artificial intelligence decision system; and (II) the nature of such consequential decision; (B) contact information for such deployer; (C) a description, in plain language, of such high-risk artificial intelligence decision system; and (D) instructions on how to access the statement made available pursuant to paragraph (a) of subdivision six of this section. (b) Beginning on January first, two thousand twenty-seven, a deployer that has deployed a high-risk artificial intelligence decision system to make, or as a substantial factor in making, a consequential decision concerning a consumer shall, if such consequential decision is adverse to the consumer, provide to such consumer: (i) a statement disclosing the principal reason or reasons for such adverse consequential decision, including, but not limited to: (A) the degree to which, and manner in which, the high-risk artificial intelligence decision system contributed to such adverse consequential decision; (B) the type of data that was processed by such high-risk artificial intelligence decision system in making such adverse consequential decision; and (C) the source of such data; and (ii) an opportunity to: (A) correct any incorrect personal data that the high-risk artificial intelligence decision system processed in making, or as a substantial factor in making, such adverse consequential decision; and (B) appeal such adverse consequential decision, which shall, if technically feasible, allow for human review unless providing such opportunity is not in the best interest of such consumer, including, but not limited to, in instances in which any delay might pose a risk to the life or safety of such consumer. (c) The deployer shall provide the notice, statements, information, description, and instructions required pursuant to paragraphs (a) and (b) of this subdivision: (i) directly to the consumer; (ii) in plain language; (iii) in all languages in which such deployer, in the ordinary course of such deployer's business, provides contracts, disclaimers, sale announcements, and other information to consumers; and (iv) in a format that is accessible to consumers with disabilities.
Pending 2026-01-01
H-01.3
Labor Law § 203-g(2)(a)(i), (2)(b)
Plain Language
Employers and employment agencies that use automated employment decision tools to screen job applicants must notify each candidate, at least ten business days before the tool is used, that an automated tool will be part of their assessment. The notice must also allow the candidate to request an alternative selection process or accommodation. This is a pre-use notice requirement — it must be delivered before the tool is applied to the candidate, not after.
(a) Any employer or employment agency that uses an automated employment decision tool to screen candidates who have applied for a position for an employment decision shall notify each such candidate of the following: (i) That an automated employment decision tool will be used in connection with the assessment or evaluation of such candidate; (b) The notice required by paragraph (a) of this subdivision shall be made no less than ten business days before the use of such automated employment decision tool and shall allow such candidate to request an alternative selection process or accommodation.
Pending 2026-01-01
H-01.1
Labor Law § 203-g(2)(a)(ii)
Plain Language
As part of the required pre-use notice, employers and employment agencies must disclose the specific job qualifications and characteristics that the automated tool will evaluate when assessing the candidate. This goes beyond simply telling the candidate that AI will be used — it requires explaining the criteria the tool applies, enabling the candidate to understand what factors drive the automated assessment.
(ii) The job qualifications and characteristics that such automated employment decision tool will use in the assessment of such candidate;
Pending 2025-01-01
H-01.1 H-01.3
Real Prop. Law § 227-g(3)(a)(i)-(iv), (3)(b)
Plain Language
Landlords using an automated housing decision making tool must notify each applicant at least 24 hours before the tool is used. The notice must cover four elements: (1) that an automated tool will be used; (2) what characteristics the tool assesses; (3) the type of data collected, its source, and the landlord's data retention policy; and (4) if the application is denied, the reason for denial. The notice must also allow the applicant to request an alternative selection process or accommodation. The 24-hour advance notice requirement and the opt-out right are notable — they are more protective than comparable laws that only require pre-decision notice without an alternative process.
Any landlord that uses an automated housing decision making tool to screen applicants for housing shall notify each such applicant of the following: (i) That an automated housing decision making tool will be used in connection with the assessment or evaluation of such applicant; (ii) The characteristics that such automated housing decision making tool will use in the assessment of such applicant; (iii) Information about the type of data collected for such automated housing decision making tool, the source of such data, and the landlord's data retention policy; and (iv) If an application for housing is denied through use of the automated housing decision making tool, the reason for such denial. (b) The notice required by paragraph (a) of this subdivision shall be made no less than twenty-four hours before the use of such automated housing decision making tool and shall allow such applicant to request an alternative selection process or accommodation.
Pending 2025-04-27
H-01.1
State Tech. Law § 507(4)-(5)
Plain Language
Residents have the right to understand how and why an automated system determined an outcome affecting them — including when the system was only one factor in the decision. Explanations must be technically valid, meaningful to the individual, and proportionate to the risk level of the context. This is an individual explanation right, not a general transparency obligation — it applies to specific outcomes affecting specific individuals.
4. New York residents shall have the right to understand how and why an outcome impacting them was determined by an automated system, even when the automated system is not the sole determinant of the outcome.
5. Automated systems shall provide explanations that are technically valid, meaningful to the individual and any other persons who need to understand the system and proportionate to the level of risk based on the context.
Pending 2025-04-27
H-01.4H-01.5
State Tech. Law § 508(1)-(3)
Plain Language
Residents have the right to opt out of automated systems in favor of a human alternative where appropriate, based on reasonable expectations and the risk of harmful impacts. When a system fails, produces an error, or when a resident wishes to appeal or contest an outcome, they must have access to a timely human consideration and remedy through a fallback and escalation process. The human fallback process must be accessible, equitable, effective, maintained, accompanied by operator training, and must not impose an unreasonable burden. This creates both an opt-out right and a mandatory human review/appeal mechanism.
1. New York residents shall have the right to opt out of automated systems, where appropriate, in favor of a human alternative. The appropriateness of such an option shall be determined based on reasonable expectations in a given context, with a focus on ensuring broad accessibility and protecting the public from particularly harmful impacts. In some instances, a human or other alternative may be mandated by law.
2. New York residents shall have access to a timely human consideration and remedy through a fallback and escalation process if an automated system fails, produces an error, or if they wish to appeal or contest its impacts on them.
3. The human consideration and fallback process shall be accessible, equitable, effective, maintained, accompanied by appropriate operator training, and should not impose an unreasonable burden on the public.
Pending 2025-04-27
H-01.6
State Tech. Law § 508(4)
Plain Language
Automated systems used in sensitive domains — including criminal justice, employment, education, and health — face heightened requirements beyond the general human fallback provisions. These systems must be tailored to their purpose, provide meaningful oversight access, include user training for residents who interact with the system, and incorporate human consideration for adverse or high-risk decisions. The human consideration requirement for high-risk decisions in sensitive domains effectively mandates human-in-the-loop review before acting on adverse automated outcomes in these contexts.
4. Automated systems intended for use within sensitive domains, including but not limited to criminal justice, employment, education, and health, shall additionally be tailored to their purpose, provide meaningful access for oversight, include training for New York residents interacting with the system, and incorporate human consideration for adverse or high-risk decisions.
Pending 2026-06-09
H-01.3H-01.4
Civ. Rights Law § 86-a(1)(a)–(d)
Plain Language
Before using a high-risk AI system for a consequential decision, deployers must notify the end user at least five business days in advance — in clear, consumer-friendly terms and in all languages the company offers — that AI will be used. The deployer must also provide the end user a meaningful opportunity to opt out of the automated process and have the decision made by a human instead; opting out may not trigger adverse consequences and the deployer must render a decision within 45 days. When the AI decision would confer a benefit (e.g., social benefits, housing, emergency funds), the deployer must offer the user the option to waive the five-day advance notice, after which notice must still be given as early as practicable. Users are limited to one opt-out per consequential decision within a six-month period. An urgent-necessity exception exists: if compliance would cause imminent detriment to the end user, the notice and opt-out obligations are waived — but the right to request human review is never waived.
(a) Any deployer that employs a high-risk AI system for a consequential decision shall comply with the following requirements; provided, however, that where there is an urgent necessity for a decision to be made to confer a benefit to the end user, including, but not limited to, social benefits, housing access, or dispensing of emergency funds, and compliance with this section would cause imminent detriment to the welfare of the end user, such obligation shall be considered waived; provided further, that nothing in this section shall be construed to waive a natural person's option to request human review of the decision: (i) inform the end user at least five business days prior to the use of such system for the making of a consequential decision in clear, conspicuous, and consumer-friendly terms, made available in each of the languages in which the company offers its end services, that AI systems will be used to make a decision or to assist in making a decision; and (ii) allow sufficient time and opportunity in a clear, conspicuous, and consumer-friendly manner for the consumer to opt-out of the automated consequential decision process and for the decision to be made by a human representative. A consumer may not be punished or face any other adverse action for opting out of a decision by an AI system and the deployer shall render a decision to the consumer within forty-five days. (b) If a deployer employs a high-risk AI system for a consequential decision to determine whether to or on what terms to confer a benefit on an end user, the deployer shall offer the end user the option to waive their right to advance notice of five business days under this subdivision. (c) If the end user clearly and affirmatively waives their right to five business days' notice, the deployer shall then inform the end user as early as practicable before the making of the consequential decision in clear, conspicuous, and consumer-friendly terms, made available in each of the languages in which the company offers its end services, that AI systems will be used to make a decision or to assist in making a decision. The deployer shall allow sufficient time and opportunity in a clear, conspicuous, and consumer-friendly manner for the consumer to opt-out of the automated process and for the decision to be made by a human representative. A consumer may not be punished or face any other adverse action for opting out of a decision by an AI system and the deployer shall render a decision to the consumer within forty-five days. (d) An end user shall be entitled to no more than one opt-out with respect to the same consequential decision within a six-month period.
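The provision layers several clocks: five business days of advance notice, a forty-five-day decision deadline after an opt-out, and a cap of one opt-out per consequential decision per six-month period. A rough sketch of the bookkeeping a deployer might keep, treating "six-month period" as 182 days (the bill does not define the term) and using hypothetical names throughout:

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class OptOutLedger:
    # (user_id, decision_id) -> date of the last recorded opt-out
    history: dict = field(default_factory=dict)

    def may_opt_out(self, user_id: str, decision_id: str, today: date) -> bool:
        """Paragraph (d): no more than one opt-out for the same
        consequential decision within a six-month period (approximated
        here as 182 days)."""
        last = self.history.get((user_id, decision_id))
        return last is None or today - last >= timedelta(days=182)

    def record_opt_out(self, user_id: str, decision_id: str, today: date) -> date:
        """Record an opt-out and return the forty-five-day deadline by
        which a human representative must render the decision."""
        self.history[(user_id, decision_id)] = today
        return today + timedelta(days=45)

ledger = OptOutLedger()
print(ledger.may_opt_out("u1", "loan-123", date(2026, 2, 1)))     # True
print(ledger.record_opt_out("u1", "loan-123", date(2026, 2, 1)))  # 2026-03-18
print(ledger.may_opt_out("u1", "loan-123", date(2026, 4, 1)))     # False
```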
Pending 2026-06-09
H-01.4H-01.5
Civ. Rights Law § 86-a(2)(a)–(b)
Plain Language
After a high-risk AI system has been used in a consequential decision, the deployer must inform the end user within five days. The deployer must then provide a clear appeal process that allows the end user to (1) formally contest the decision, (2) submit supporting information, and (3) obtain meaningful human review. The deployer must respond to appeals within 45 days, extendable once by another 45 days for complex or voluminous appeals, with notice to the user of the extension and reasons. Users are limited to one appeal per consequential decision within a six-month period.
(a) Any deployer that employs a high-risk AI system for a consequential decision shall inform the end user within five days in a clear, conspicuous and consumer-friendly manner if a high-risk AI system has been used to make a consequential decision. The deployer shall then provide and explain a process for the end user to appeal the decision, which shall at minimum allow the end user to (i) formally contest the decision, (ii) provide information to support their position, and (iii) obtain meaningful human review of the decision. A deployer shall respond to an end user's appeal within forty-five days of receipt of the appeal. That period may be extended once by forty-five additional days where reasonably necessary, taking into account the complexity and number of appeals. The deployer shall inform the end user of any such extension within forty-five days of receipt of the appeal, together with the reasons for the delay. (b) An end user shall be entitled to no more than one appeal with respect to the same consequential decision in a six-month period.
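The appeal-response clock, with its single forty-five-day extension, is mechanical enough to sketch as well. One assumption: the extension notice must itself issue within the original forty-five days, since the text requires the user to be informed of the extension "within forty-five days of receipt of the appeal." All names are illustrative:

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class AppealClock:
    received: date        # date the appeal was received
    extended: bool = False

    @property
    def deadline(self) -> date:
        # Forty-five days to respond; ninety if validly extended once.
        return self.received + timedelta(days=90 if self.extended else 45)

    def extend(self, today: date, reasons: str) -> str:
        """Apply the single permitted forty-five-day extension; the user
        must be told of the extension, and the reasons for it, within
        forty-five days of receipt of the appeal."""
        if self.extended:
            raise ValueError("only one extension is permitted")
        if today > self.received + timedelta(days=45):
            raise ValueError("extension notice is due within 45 days of receipt")
        self.extended = True
        return f"Response deadline extended to {self.deadline}: {reasons}"
```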
Pending
H-01.6
Gen. Bus. Law § 1154
Plain Language
Before any news media content that was created in whole or material part by generative AI may be published, a human worker must review the content and have the authority to approve, deny, or modify it. This is a mandatory human-in-the-loop requirement — the human must have actual override authority, not merely a rubber-stamp role. The provision cross-references the consumer disclosure requirement in § 1153, tying the publication gate to the disclosure obligation. Note that the copyright exemption in § 1153 may create ambiguity about whether this review requirement applies to AI content that is copyright-eligible and therefore exempt from the disclosure.
Any news media content, including stories, articles, audio, visuals or images, which are created in whole or in material part by generative artificial intelligence shall be reviewed by a human worker who has the authority to approve, deny, or modify any decision recommended or made by the automated system before such content may be published with the disclosure under section eleven hundred fifty-three of this article.
Pending 2027-01-01
H-01.4
Civil Rights Law § 108(1)
Plain Language
The Division must promulgate regulations within two years specifying when and how deployers must give individuals the right to opt out of algorithmic decision-making and request a human alternative. The regulation must address notice clarity, timeliness, which types of consequential actions require a human alternative, feasibility, and the balance between individual control and practical effectiveness. Until regulations are promulgated, the specific scope and mechanics of the opt-out right are undefined — this is a directive to the Division, not a self-executing individual right. However, once regulations are finalized, deployers will need to implement opt-out mechanisms and human alternative pathways.
1. Not later than two years after the effective date of this article, the division shall promulgate regulations in accordance with specifying the circumstances and manner in which a deployer shall provide to an individual a means to opt-out of the use of a covered algorithm for a consequential action and to elect to have the consequential action concerning the individual undertaken by a human without the use of a covered algorithm. In promulgating the regulations under this subdivision, the division shall consider the following: (a) how to ensure that any notice or request from a deployer regarding the right to a human alternative is clear and conspicuous, in plain language, easy to execute, and at no cost to an individual; (b) how to ensure that any such notice to individuals is effective, timely, and useful; (c) the specific types of consequential actions for which a human alternative is appropriate, considering the magnitude of the action and risk of harm; (d) the extent to which a human alternative would be beneficial to individuals and the public interest; (e) the extent to which a human alternative can prevent or mitigate harm; (f) the risk of harm to individuals beyond the requestor if a human alternative is available or not available; (g) the feasibility of providing a human alternative in different circumstances; and (h) any other considerations the division deems appropriate to balance the need to give an individual control over a consequential action related to such individual with the practical feasibility and effectiveness of granting such control.
Pending 2027-01-01
H-01.4H-01.5
Civil Rights Law § 108(3)
Plain Language
The Division must promulgate regulations within two years specifying when and how deployers must provide individuals a mechanism to appeal algorithmic consequential actions to a human reviewer. The regulations must ensure the appeal is free, accessible (including to individuals with disabilities), proportionate to the action, non-discriminatory, and timely. The regulations must also address data correction rights and training requirements for human reviewers. Like the opt-out provision in § 108(1), this is a rulemaking directive — the specific appeal right does not become operative until the Division issues regulations. Once in effect, deployers will need human review infrastructure with trained reviewers capable of overriding algorithmic decisions.
3. Not later than two years after the effective date of this article, the division shall promulgate regulations specifying the circumstances and manner in which a deployer shall provide to an individual a mechanism to appeal to a human a consequential action resulting from the deployer's use of a covered algorithm. In promulgating the regulations under this subdivision, the division shall do the following: (a) ensure that the appeal mechanism is clear and conspicuous, in plain language, easy-to-execute, and at no cost to individuals; (b) ensure that the appeal mechanism is proportionate to the consequential action; (c) ensure that the appeal mechanism is reasonably accessible to individuals with disabilities, timely, usable, effective, and non-discriminatory; (d) require, where appropriate, a mechanism for individuals to identify and correct any personal data used by the covered algorithm; (e) specify training requirements for human reviewers with respect to a consequential action; and (f) consider any other circumstances, procedures, or matters the division deems appropriate to balance the need to give an individual a right to appeal a consequential action related to such individual with the practical feasibility and effectiveness of granting such right.
Pending 2026-01-01
H-01.3H-01.4
Civ. Rights Law § 86-a(1)(a)-(d)
Plain Language
Before using a high-risk AI system for a consequential decision, deployers must give end users at least five business days' advance notice — in clear, multilingual terms — that AI will be used. They must also give the end user sufficient time to opt out of the AI decision process and have the decision made by a human instead, with no adverse consequences for opting out. If the decision confers a benefit, the deployer must offer the end user the option to waive the five-day notice period, after which notice must still be given as early as practicable. End users get one opt-out per consequential decision per six-month period. The entire advance notice and opt-out obligation is waived in cases of urgent necessity to confer a benefit (e.g., emergency funds), but even then the end user's right to request human review cannot be waived. Deployers must render a decision within 45 days of an opt-out request.
(a) Any deployer that employs a high-risk AI system for a consequential decision shall comply with the following requirements; provided, however, that where there is an urgent necessity for a decision to be made to confer a benefit to the end user, including, but not limited to, social benefits, housing access, or dispensing of emergency funds, and compliance with this section would cause imminent detriment to the welfare of the end user, such obligation shall be considered waived; provided further, that nothing in this section shall be construed to waive a natural person's option to request human review of the decision:
(i) inform the end user at least five business days prior to the use of such system for the making of a consequential decision in clear, conspicuous, and consumer-friendly terms, made available in each of the languages in which the company offers its end services, that AI systems will be used to make a decision or to assist in making a decision; and
(ii) allow sufficient time and opportunity in a clear, conspicuous, and consumer-friendly manner for the consumer to opt-out of the automated consequential decision process and for the decision to be made by a human representative. A consumer may not be punished or face any other adverse action for opting out of a decision by an AI system and the deployer shall render a decision to the consumer within forty-five days.
(b) If a deployer employs a high-risk AI system for a consequential decision to determine whether to or on what terms to confer a benefit on an end user, the deployer shall offer the end user the option to waive their right to advance notice of five business days under this subdivision.
(c) If the end user clearly and affirmatively waives their right to five business days' notice, the deployer shall then inform the end user as early as practicable before the making of the consequential decision in clear, conspicuous, and consumer-friendly terms, made available in each of the languages in which the company offers its end services, that AI systems will be used to make a decision or to assist in making a decision. The deployer shall allow sufficient time and opportunity in a clear, conspicuous, and consumer-friendly manner for the consumer to opt-out of the automated process and for the decision to be made by a human representative. A consumer may not be punished or face any other adverse action for opting out of a decision by an AI system and the deployer shall render a decision to the consumer within forty-five days.
(d) An end user shall be entitled to no more than one opt-out with respect to the same consequential decision within a six-month period.
Pending 2026-01-01
H-01.4H-01.5
Civ. Rights Law § 86-a(2)(a)-(b)
Plain Language
After a consequential decision is made using a high-risk AI system, the deployer must notify the end user within five days. The deployer must also explain the appeal process, which must at minimum allow the end user to (1) formally contest the decision, (2) submit supporting information, and (3) obtain meaningful human review. The deployer must respond within 45 days, with one 45-day extension permitted where reasonably necessary. End users get one appeal per consequential decision per six-month period. Note the election requirement: under § 86-a(5), an end user may exercise either the pre-decision opt-out or the post-decision appeal, but not both for the same consequential decision.
(a) Any deployer that employs a high-risk AI system for a consequential decision shall inform the end user within five days in a clear, conspicuous and consumer-friendly manner if a high-risk AI system has been used to make a consequential decision. The deployer shall then provide and explain a process for the end user to appeal the decision, which shall at minimum allow the end user to (i) formally contest the decision, (ii) provide information to support their position, and (iii) obtain meaningful human review of the decision. A deployer shall respond to an end user's appeal within forty-five days of receipt of the appeal. That period may be extended once by forty-five additional days where reasonably necessary, taking into account the complexity and number of appeals. The deployer shall inform the end user of any such extension within forty-five days of receipt of the appeal, together with the reasons for the delay.
(b) An end user shall be entitled to no more than one appeal with respect to the same consequential decision in a six-month period.
Pending 2025-10-11
H-01.3
GBL § 1552(5)(a), § 1552(5)(c)
Plain Language
Before deploying a high-risk AI decision system to make or substantially influence a consequential decision about a consumer, the deployer must notify the consumer directly. The notice must include: that a high-risk AI system is being used, the system's purpose, the nature of the consequential decision, deployer contact information, a plain-language system description, and instructions for accessing the deployer's public statement under § 1552(6). All notices must be in plain language, in all languages the deployer normally uses for consumer communications, and in formats accessible to consumers with disabilities.
5. (a) Beginning on January first, two thousand twenty-seven, and before a deployer deploys a high-risk artificial intelligence decision system to make, or be a substantial factor in making, a consequential decision concerning a consumer, the deployer shall: (i) notify the consumer that the deployer has deployed a high-risk artificial intelligence decision system to make, or be a substantial factor in making, such consequential decision; and (ii) provide to the consumer: (A) a statement disclosing: (I) the purpose of such high-risk artificial intelligence decision system; and (II) the nature of such consequential decision; (B) contact information for such deployer; (C) a description, in plain language, of such high-risk artificial intelligence decision system; and (D) instructions on how to access the statement made available pursuant to paragraph (a) of subdivision six of this section. (c) The deployer shall provide the notice, statements, information, description, and instructions required pursuant to paragraphs (a) and (b) of this subdivision: (i) directly to the consumer; (ii) in plain language; (iii) in all languages in which such deployer, in the ordinary course of such deployer's business, provides contracts, disclaimers, sale announcements, and other information to consumers; and (iv) in a format that is accessible to consumers with disabilities.
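A deployer assembling this pre-deployment notice effectively has a five-element checklist. A minimal completeness check, with hypothetical field names (the bill prescribes content, not a data format):

```python
# The five content elements of the § 1552(5)(a) notice, keyed to the
# statutory subclauses. Field names are hypothetical.
REQUIRED_NOTICE_FIELDS = {
    "system_purpose": "(ii)(A)(I)",
    "decision_nature": "(ii)(A)(II)",
    "deployer_contact": "(ii)(B)",
    "plain_language_description": "(ii)(C)",
    "public_statement_instructions": "(ii)(D)",
}

def missing_notice_fields(notice: dict) -> dict:
    """Map each absent or empty required element to its statutory cite."""
    return {f: cite for f, cite in REQUIRED_NOTICE_FIELDS.items()
            if not notice.get(f)}

draft = {"system_purpose": "tenant screening",
         "deployer_contact": "compliance@example.com"}
print(missing_notice_fields(draft))  # flags the three absent elements
```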
Pending 2025-10-11
H-01.1H-01.2H-01.4H-01.5
GBL § 1552(5)(b), § 1552(5)(c)
Plain Language
When a high-risk AI decision system makes or substantially contributes to an adverse consequential decision about a consumer, the deployer must provide: (1) a statement of the principal reasons for the adverse decision, including the AI system's degree of contribution, the types of data it processed, and data sources; (2) an opportunity to correct inaccurate personal data used in the decision; and (3) an appeal mechanism that must include human review if technically feasible, unless delay would endanger the consumer. All adverse-decision communications must be delivered directly to the consumer, in plain language, in all languages the deployer uses for consumer communications, and in disability-accessible formats.
(b) Beginning on January first, two thousand twenty-seven, a deployer that has deployed a high-risk artificial intelligence decision system to make, or as a substantial factor in making, a consequential decision concerning a consumer shall, if such consequential decision is adverse to the consumer, provide to such consumer: (i) a statement disclosing the principal reason or reasons for such adverse consequential decision, including, but not limited to: (A) the degree to which, and manner in which, the high-risk artificial intelligence decision system contributed to such adverse consequential decision; (B) the type of data that was processed by such high-risk artificial intelligence decision system in making such adverse consequential decision; and (C) the source of such data; and (ii) an opportunity to: (A) correct any incorrect personal data that the high-risk artificial intelligence decision system processed in making, or as a substantial factor in making, such adverse consequential decision; and (B) appeal such adverse consequential decision, which shall, if technically feasible, allow for human review unless providing such opportunity is not in the best interest of such consumer, including, but not limited to, in instances in which any delay might pose a risk to the life or safety of such consumer. (c) The deployer shall provide the notice, statements, information, description, and instructions required pursuant to paragraphs (a) and (b) of this subdivision: (i) directly to the consumer; (ii) in plain language; (iii) in all languages in which such deployer, in the ordinary course of such deployer's business, provides contracts, disclaimers, sale announcements, and other information to consumers; and (iv) in a format that is accessible to consumers with disabilities.
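The adverse-decision package combines an unconditional correction right with a conditional human-review right. One way to read the branching, sketched with illustrative names; note that whether the life-or-safety carve-out waives human review alone or the whole appeal opportunity is an interpretive question the bill text leaves open, and this sketch takes the narrower reading:

```python
def adverse_decision_package(reasons: list, data_types: list, sources: list,
                             human_review_feasible: bool,
                             delay_risks_safety: bool) -> dict:
    """Assemble the response § 1552(5)(b) owes after an adverse decision.
    The correction opportunity under (ii)(A) is unconditional; human
    review attaches to the appeal when technically feasible, and this
    sketch reads the life-or-safety carve-out as waiving only the
    human-review step, not the appeal itself."""
    return {
        "principal_reasons": reasons,    # (i): incl. degree/manner of AI contribution
        "data_types": data_types,        # (i)(B)
        "data_sources": sources,         # (i)(C)
        "correction_opportunity": True,  # (ii)(A)
        "appeal": {
            "offered": True,             # (ii)(B)
            "human_review": human_review_feasible and not delay_risks_safety,
        },
    }
```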
Pending 2025-09-05
H-01.6
Gen. Bus. Law § 1154
Plain Language
Before any news media content that was created in whole or material part by generative AI may be published, a human worker must review the content and have the authority to approve, deny, or modify the AI-generated output. This is a mandatory human-in-the-loop review requirement — AI-generated news content cannot go directly to publication without human authorization. The human reviewer must have genuine decisional authority, not merely a rubber-stamp role. This provision is linked to the consumer disclosure requirement in § 1153, meaning it applies specifically to content that would trigger labeling obligations.
Any news media content, including stories, articles, audio, visuals or images, which are created in whole or in material part by generative artificial intelligence shall be reviewed by a human worker who has the authority to approve, deny, or modify any decision recommended or made by the automated system before such content may be published with the disclosure under section eleven hundred fifty-three of this article.
Pending 2026-07-22
H-01.3
Exec. Law § 296(23)(b)-(c)
Plain Language
Employers must notify employees when using artificial intelligence for recruitment, hiring, promotion, discharge, discipline, and the other employment purposes listed in paragraph (a). Failure to provide notice is itself an independent unlawful discriminatory practice. The specific circumstances requiring notice, its timing, and the method of delivery will be determined by Division of Human Rights rulemaking; until those rules are adopted, the notice obligation exists but its precise mechanics are undefined. Paragraph (c) delegates rulemaking authority and creates no independent compliance obligation; it is included here because it qualifies and conditions the notice requirement in paragraph (b).
(b) It shall be an unlawful discriminatory practice for an employer to fail to provide notice to an employee that such employer is using artificial intelligence for the purposes described in paragraph (a) of this subdivision. (c) The division shall adopt any rules or regulations necessary for the implementation and enforcement of this subdivision, including, but not limited to, rules on the circumstances and conditions that require notice, the time period for providing such notice and the means for providing such notice.
Pending 2025-01-01
H-01.6
State Technology Law § 402(1)
Plain Language
State agencies may not use any automated decision-making system — directly or through contractors — for public assistance benefits, rights-affecting functions, or functions materially impacting individual welfare unless the system is subject to continued and operational meaningful human review. The human reviewer must understand the system's risks and limitations, be trained on it, and have actual authority to approve, deny, or modify the system's decisions. This is not a one-time pre-launch check; the meaningful human review must be ongoing and operational throughout deployment.
No state agency, or any entity acting on behalf of such agency, which utilizes or applies any automated decision-making system, directly or indirectly, in performing any function that: (a) is related to the delivery of any public assistance benefit; (b) will have a material impact on the rights, civil liberties, safety or welfare of any individual within the state; or (c) affects any statutorily or constitutionally provided right of an individual, shall utilize such automated decision-making system, unless such automated decision-making system is subject to continued and operational meaningful human review.
Pending 2025-11-01
H-01.6
63 O.S. § 5503(B)
Plain Language
Qualified end-users must retain authority to amend or overrule any AI device output based on their own professional judgment. Deployers and all other entities are prohibited from pressuring the end-user to ignore or alter that professional judgment. This creates both a positive right (the clinician can always override) and a negative obligation (no one may pressure the clinician to defer to the AI). It is a structural safeguard ensuring meaningful human control over AI-assisted clinical decisions.
B. The qualified end-user of the AI device shall retain authority to amend or overrule outputs from the device based on their professional judgment, and without pressure from the deployer or any other entity to ignore or alter professional judgement.
Pending 2026-11-01
H-01.6
36 O.S. § 6567(D)
Plain Language
When a health benefit plan uses AI tools as an initial step in utilization review, a clinical peer reviewer must independently open, review, and document the individual clinical records or data before issuing any adverse determination. This is a mandatory human-in-the-loop requirement — the clinical peer reviewer cannot simply ratify the AI output but must affirmatively review the underlying clinical records. The documentation requirement ensures an audit trail demonstrating that human review occurred before any adverse action was taken.
D. A clinical peer reviewer who participates in a utilization review process for a health benefit plan that initially uses artificial intelligence tools for a utilization review shall open and document the utilization review of the individual clinical records or data prior to issuing an adverse determination.
Pending 2026-03-10
H-01.3
Section 3(c)
Plain Language
When a consumer requests it, the business entity must provide timely access to a human representative, provided one is reasonably available. This is a consumer-initiated right to escalate from an AI interaction to a human, but it is conditioned on reasonable availability — if no human representative is reasonably available, the obligation does not apply. This is distinct from the right to human review of high-impact decisions in Section 4; this provision applies to any consumer interaction, not just high-impact decisions.
(c) Human representatives.--Upon request, the business entity shall provide the consumer with timely access to a human representative, if a human representative is reasonably available.
Pending 2026-03-10
H-01.3H-01.4
Section 4(a)-(b)
Plain Language
Consumers have the right to request human review of any AI-involved consumer interaction that constitutes a high-impact decision — one that materially affects their legal rights, employment, housing, credit, education, health care, or access to government benefits. When a business entity uses AI in a consumer interaction involving a high-impact decision, it must clearly and conspicuously disclose the consumer's right to request human review. Unlike the general human representative access right in Section 3(c), this right is unconditional — there is no 'reasonably available' qualifier.
(a) Right to human review.--A consumer shall have the right to request that a human representing the business entity review any consumer interaction involving a high-impact decision. (b) Notice.--When the conditions under section 3 are met requiring the disclosure of the use of artificial intelligence in a consumer interaction and involve a high-impact decision, the business entity shall disclose in a clear and conspicuous manner that the consumer has a right to request a human review by the business entity involving the high-impact decision.
Pending 2026-03-10
H-01.4
Section 4(c)
Plain Language
When a consumer requests human review of a high-impact decision, the business entity must commence the review no later than 14 days after the request and must complete it, delivering a decision to the requester, no later than 28 days after the request. Both deadlines run from the date of the request; the bill does not say whether these are business or calendar days, and it provides no extension mechanism or exception for complexity.
(c) Time frame.--A business entity shall commence the human review not later than 14 days after the request for a human review is made. The human review shall be completed and the decision delivered to the requester not later than 28 days after the request for a human review is made.
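Since both deadlines run from the same request date, the arithmetic is simple; the sketch below assumes plain calendar days, which the bill neither confirms nor denies, and uses illustrative names:

```python
from datetime import date, timedelta

def review_deadlines(request_date: date) -> dict:
    """Section 4(c): both deadlines run from the request date, with no
    extension mechanism. Calendar days are assumed."""
    return {
        "commence_by": request_date + timedelta(days=14),
        "decide_by": request_date + timedelta(days=28),
    }

# A request made 2026-04-01 requires review to begin by 2026-04-15 and
# a delivered decision by 2026-04-29.
print(review_deadlines(date(2026, 4, 1)))
```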
Pending
H-01.1H-01.2H-01.6
§ 28-5.2-2(i)
Plain Language
Employers may not rely primarily on electronic monitoring data when making hiring, promotion, discipline, termination, or compensation decisions. Three requirements must be met: (1) the employer must establish meaningful human oversight — which requires designating an internal reviewer with ADS expertise, familiarity with the latest impact assessment, authority to dispute or reject outputs, and adequate time and resources; (2) a human decision-maker must verify the accuracy and currency of monitoring data, review pending correction requests, and exercise independent judgment; and (3) the human decision-maker must consider non-monitoring information (supervisor evaluations, personnel files, work products, peer reviews). This effectively prevents electronic monitoring from being the sole or primary basis for consequential employment decisions.
(i) An employer shall not rely primarily on employee data collected through electronic monitoring, when making hiring, promotion, disciplinary decisions up to and including termination, or compensation decisions. For an employer to satisfy the requirements of this subsection: (1) An employer shall establish meaningful human oversight of such decisions that are based, in whole or in part, on data collected through electronic monitoring. (2) A human decision-maker shall review any information collected through electronic monitoring, verify that such information is accurate and up to date, review any pending employee requests to correct erroneous data, and exercise independent judgment in making each such decision; and (3) The human decision-maker shall consider information other than information collected through electronic monitoring, when making each such decision including, but not limited to, supervisory or managerial evaluations, personnel files, employee work products, or peer reviews.
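As a conjunction of conditions, the subsection reads naturally as a single predicate. A sketch with illustrative parameter names; the bill's "meaningful human oversight" standard carries more definitional weight than a boolean can, so this captures only the checklist structure:

```python
def decision_may_proceed(data_verified: bool,
                         corrections_resolved: bool,
                         independent_judgment: bool,
                         non_monitoring_evidence: list) -> bool:
    """The § 28-5.2-2(i) checklist as a predicate: the human reviewer
    verified the monitoring data is accurate and current, resolved any
    pending correction requests, exercised independent judgment, and
    weighed at least one source outside the monitoring system (such as
    supervisory evaluations, personnel files, work product, or peer
    reviews)."""
    return (data_verified
            and corrections_resolved
            and independent_judgment
            and len(non_monitoring_evidence) > 0)
```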
Pending
H-01.1H-01.2
§ 28-5.2-2(j)
Plain Language
When an employer makes any consequential employment decision (hiring, promotion, termination, discipline, compensation) based in whole or in part on electronic monitoring data, it must disclose to the affected employee and their authorized representative within 30 days: (1) that monitoring data was used, (2) which specific tools were used, how they gather and analyze data, and the time increments used, (3) the specific data and judgments derived from it that informed the decision, and (4) any non-monitoring information also used. This post-decision disclosure must occur within 30 days of the decision being made or going into effect, whichever is sooner. The disclosure goes to both the employee and their union or other authorized representative.
(j) When an employer makes a hiring, promotion, termination, disciplinary or compensation decision, based, in whole or in part, on data gathered through the use of electronic monitoring, it shall disclose to affected employees and their authorized representative within thirty (30) days of the decision being made or going into effect, whichever is sooner: (1) That the decision was based, in whole or in part, on data gathered through electronic monitoring; (2) The specific electronic monitoring tool or tools used to gather such data, how the tools work to gather and analyze the data, and the increments of time in which the data is gathered; (3) The specific data, and judgments based upon such data, used in the decision-making process; and (4) Any information used in the decision-making process gathered through sources other than electronic monitoring.
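The "whichever is sooner" trigger is the one subtlety here: the thirty days run from the earlier of the decision date and its effective date. A minimal sketch, with illustrative names:

```python
from datetime import date, timedelta

def disclosure_deadline(decision_made: date, decision_effective: date) -> date:
    """§ 28-5.2-2(j): disclosure is due within thirty days of the
    decision being made or going into effect, whichever is sooner."""
    return min(decision_made, decision_effective) + timedelta(days=30)

# Decided 2026-01-10, effective 2026-02-01: disclosure is due by
# 2026-02-09, thirty days from the earlier date.
print(disclosure_deadline(date(2026, 1, 10), date(2026, 2, 1)))
```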
Pending 2026-02-06
H-01.6
§ 28-5.2-2(i)
Plain Language
Employers may not rely primarily on electronic monitoring data for hiring, promotion, discipline, termination, or compensation decisions. Every such decision must involve meaningful human oversight — defined as a process requiring a designated internal reviewer with ADS expertise, familiarity with the most recent impact assessment, and sufficient understanding of system outputs to spot biases and errors. The reviewer must have authority and discretion to dispute, rerun, or reject outputs, and sufficient time and resources to do so. The human decision-maker must independently verify data accuracy, address pending correction requests, exercise independent judgment, and consider non-monitoring information such as supervisory evaluations, personnel files, and peer reviews.
(i) An employer shall not rely primarily on employee data collected through electronic monitoring, when making hiring, promotion, disciplinary decisions up to and including termination, or compensation decisions. For an employer to satisfy the requirements of this subsection: (1) An employer shall establish meaningful human oversight of such decisions that are based, in whole or in part, on data collected through electronic monitoring. (2) A human decision-maker shall review any information collected through electronic monitoring, verify that such information is accurate and up to date, review any pending employee requests to correct erroneous data, and exercise independent judgment in making each such decision; and (3) The human decision-maker shall consider information other than information collected through electronic monitoring, when making each such decision including, but not limited to, supervisory or managerial evaluations, personnel files, employee work products, or peer reviews.
Pending 2026-02-06
H-01.1H-01.2
§ 28-5.2-2(j)
Plain Language
When an employer makes a hiring, promotion, termination, disciplinary, or compensation decision based in whole or in part on electronic monitoring data, the employer must disclose to the affected employee and their authorized representative — within 30 days of the decision or its effective date, whichever is sooner — four categories of information: that monitoring data was used, which specific tools were used and how they work, the specific data and data-based judgments used in the decision, and any non-monitoring information also considered. This is a post-decision explanation obligation that is more granular than a typical adverse-action notice.
(j) When an employer makes a hiring, promotion, termination, disciplinary or compensation decision, based, in whole or in part, on data gathered through the use of electronic monitoring, it shall disclose to affected employees and their authorized representative within thirty (30) days of the decision being made or going into effect, whichever is sooner: (1) That the decision was based, in whole or in part, on data gathered through electronic monitoring; (2) The specific electronic monitoring tool or tools used to gather such data, how the tools work to gather and analyze the data, and the increments of time in which the data is gathered; (3) The specific data, and judgments based upon such data, used in the decision-making process; and (4) Any information used in the decision-making process gathered through sources other than electronic monitoring.
Pending 2025-01-01
H-01.3
Section 37-31-30(D)(1)(a)-(c)
Plain Language
Before a high-risk AI system makes or substantially factors into a consequential decision about a consumer, the deployer must: (1) notify the consumer that AI is being used for this purpose, (2) provide a plain-language description of the AI system, its purpose, the nature of the consequential decision, and deployer contact information, and (3) inform the consumer of any applicable opt-out rights regarding profiling. This is a pre-decision notice requirement — the consumer must know before the decision is made.
(D)(1) No later than the time that a deployer deploys a high-risk artificial intelligence system to make, or be a substantial factor in making, a consequential decision concerning a consumer, the deployer shall: (a) notify the consumer that the deployer has deployed a high-risk artificial intelligence system to make, or be a substantial factor in making, a consequential decision before the decision is made; (b) provide to the consumer a statement disclosing the purpose of the high-risk artificial intelligence system and the nature of the consequential decision; the contact information for the deployer; a description, in plain language, of the high-risk artificial intelligence system; and instructions on how to access the statement required by this item; and (c) provide to the consumer information, if applicable, regarding the consumer's right to opt out of the processing of personal data concerning the consumer for purposes of profiling in furtherance of decisions that produce legal or similarly significant effects concerning the consumer pursuant to Section 30-31-60(A)(1)(a)(iii).
Pending 2025-01-01
H-01.1H-01.2H-01.4H-01.5
Section 37-31-30(D)(2)(a)-(c)
Plain Language
When a high-risk AI system contributes to an adverse consequential decision about a consumer, the deployer must provide: (1) an explanation of the principal reasons for the decision, including the AI system's degree of contribution, the type of data processed, and data sources; (2) an opportunity to correct inaccurate personal data used in the decision; and (3) an opportunity to appeal, with human review if technically feasible. The human review exception applies where appeal delay would pose a risk to the consumer's life or safety. These post-decision rights apply only to adverse decisions.
(2) A deployer that has deployed a high-risk artificial intelligence system to make, or be a substantial factor in making, a consequential decision concerning a consumer shall, if the consequential decision is adverse to the consumer, provide to the consumer: (a) a statement disclosing the principal reason or reasons for the consequential decision, including: (i) the degree to which, and manner in which, the high-risk artificial intelligence system contributed to the consequential decision; (ii) the type of data that was processed by the high-risk artificial intelligence system in making the consequential decision; and (iii) the source or sources of the data described in item (2)(a)(ii); (b) an opportunity to correct any incorrect personal data that the high-risk artificial intelligence system processed in making, or as a substantial factor in making, the consequential decision; and (c) an opportunity to appeal an adverse consequential decision concerning the consumer arising from the deployment of a high-risk artificial intelligence system, which appeal must, if technically feasible, allow for human review unless providing the opportunity for appeal is not in the best interest of the consumer, including in instances in which any delay might pose a risk to the life or safety of such consumer.
Pending 2025-01-01
H-01.3
Section 37-31-30(D)(3)(a)-(b)
Plain Language
All notices and disclosures required under Section 37-31-30(D)(1) and (2) must be provided directly to the consumer, in plain language, in all languages the deployer uses in its ordinary business communications, and in accessible formats for consumers with disabilities. If direct provision is not possible, the deployer must use an alternative method reasonably calculated to reach the consumer. This is a formatting and delivery requirement that conditions the obligations in the preceding subsections.
(3)(a) Except as provided in subitem (b), a deployer shall provide the notice, statement, contact information, and description required by items (1) and (2): (i) directly to the consumer; (ii) in plain language; (iii) in all languages in which the deployer, in the ordinary course of the deployer's business, provides contracts, disclaimers, sale announcements, and other information to consumers; and (iv) in a format that is accessible to consumers with disabilities. (b) If the deployer is unable to provide the notice, statement, contact information, and description required by items (1) and (2) directly to the consumer, the deployer shall make the notice, statement, contact information, and description available in a manner that is reasonably calculated to ensure that the consumer receives the notice, statement, contact information, and description.
Pending 2026-07-01
H-01.6
§ 19.2-11.14(B)
Plain Language
Every consequential criminal justice decision — from pretrial detention through sentencing, parole, and rehabilitation — must be made by the judicial officer or other authorized human decision-maker. No such decision may be made without human involvement. AI recommendations or predictions may inform the decision, but the system output is subject to challenge or objection under applicable law. This is both a mandatory human-in-the-loop requirement and a due process safeguard ensuring AI outputs in the criminal justice context remain challengeable.
B. All decisions related to the pre-trial detention or release, prosecution, adjudication, sentencing, probation, parole, correctional supervision, or rehabilitation of criminal offenders shall be made by the judicial officer or other person charged with making such decision. No such decision shall be made without the involvement of a human decision-maker. The use of any recommendation or prediction from an artificial intelligence-based tool shall be subject to any challenge or objection permitted by law.
Failed 2026-07-01
H-01.1H-01.3
Va. Code § 2.2-1202.2(B)(2)
Plain Language
State agencies using an automated decision system as a substantial factor in employment decisions must disclose to affected individuals five categories of information: (1) that an automated system is being used, (2) the system's intended employment use, (3) the types and sources of data inputs, (4) how the system factors into decision-making, and (5) the extent to which personal data will be shared with third parties or fed back into the system. This is a pre-decision notification and explanation obligation triggered whenever the automated system is a substantial factor in the employment decision.
The Director shall require any state agency that uses an automated decision system as a substantial factor in any employment decision to: 2. Disclose (i) the fact that an automated decision system is being used; (ii) the intended use of the automated decision system, including evaluating job candidates, making compensation decisions, or considering employees for promotion; (iii) the type of data inputs received by the automated decision system and the source of such data; (iv) how the automated decision system will be used in the state agency's decision-making processes; and (v) the extent to which an individual's personal data will be shared with third parties or used as future inputs for the automated decision system;
Failed 2026-07-01
H-01.6
Va. Code § 2.2-1202.2(C)
Plain Language
State agencies are prohibited from making any employment decision without the involvement of a human decision maker. Agencies may not rely solely on an automated decision system's recommendation or prediction — a human must be meaningfully involved in every employment decision. This is a categorical human-in-the-loop requirement for all state agency employment decisions, not just high-stakes ones.
No employment decision shall be made by a state agency without the involvement of a human decision maker. No state agency shall solely use any recommendation or prediction from an automated decision system to make an employment decision.
Failed 2026-07-01
H-01.5
Va. Code § 2.2-1202.2(D)
Plain Language
The Department of Human Resource Management must create and publicize a dedicated complaint process for applicants and employees to raise concerns about the use of automated decision systems in state employment decisions. The process must include investigation and resolution mechanisms, and must be separate from the existing dispute resolution process under § 2.2-1202.1. This is an infrastructure-building obligation on the Department itself, distinct from giving individual employees a right to contest a specific decision.
The Department shall establish and publicize a process for applicants for employment and employees to file concerns and complaints regarding the use of automated decision systems in the Commonwealth's employment decisions and a process for the investigation and resolution of any such concerns and complaints. Such process shall be separate and apart from the dispute resolution process described in § 2.2-1202.1.
Failed 2026-07-01
H-01.1H-01.3
Va. Code § 15.2-1500.2(B)(2)
Plain Language
Local government entities using an automated decision system as a substantial factor in employment decisions must disclose to affected individuals five categories of information: that an automated system is being used, its intended employment use, the types and sources of data inputs, how the system factors into decision-making, and the extent personal data will be shared with third parties or reused. This mirrors the state agency disclosure obligation under § 2.2-1202.2(B)(2) but applies to local government instrumentalities.
Any department, office, board, commission, agency, or instrumentality of local government that uses an automated decision system as a substantial factor in any employment decision shall: 2. Disclose (i) the fact that an automated decision system is being used; (ii) the intended use of the automated decision system, including evaluating job candidates, making compensation decisions, or considering employees for promotion; (iii) the type of data inputs received by the automated decision system and the source of such data; (iv) how the automated decision system will be used in the decision-making processes of the department, office, board, commission, agency, or instrumentality of local government; and (v) the extent to which an individual's personal data will be shared with third parties or used as future inputs for the automated decision system;
Failed 2026-07-01
H-01.6
Va. Code § 15.2-1500.2(C)
Plain Language
Local government entities are prohibited from making employment decisions without human involvement. They may not solely rely on automated decision system outputs. Mirrors the state agency obligation under § 2.2-1202.2(C).
No employment decision shall be made by a department, office, board, commission, agency, or instrumentality of local government without the involvement of a human decision maker. No department, office, board, commission, agency, or instrumentality of local government shall solely use any recommendation or prediction from an automated decision system to make an employment decision.
Failed 2026-07-01
H-01.5
Va. Code § 15.2-1500.2(D)
Plain Language
Local government entities using automated decision systems in employment decisions must create and publicize a complaint process for applicants and employees, including investigation and resolution mechanisms. Unlike the state agency version, there is no requirement for this process to be separate from existing dispute resolution processes. Mirrors the state agency obligation under § 2.2-1202.2(D) but for local government.
Any department, office, board, commission, agency, or instrumentality of local government that uses an automated decision system as a substantial factor in any employment decision shall establish and publicize a process for applicants for employment and employees to file concerns and complaints regarding the use of automated decision systems in employment decisions and a process for the investigation and resolution of any such concerns and complaints.
Failed 2026-07-01
H-01.6
Va. Code § 40.1-28.7:12(B)
Plain Language
Private employers may not make any employment decision without the involvement of a human decision maker. No employer may solely rely on automated decision system recommendations or predictions for employment decisions. This is the private-sector counterpart to the state and local government human-in-the-loop requirements. Note that the private employer definition of 'employment decision' adds the qualifier 'any final decision made by an employer,' narrowing the scope slightly compared to the government versions. Knowingly violating this requirement triggers civil penalties up to $500 for a first offense and $1,500 for subsequent offenses.
No employment decision shall be made by an employer without the involvement of a human decision maker. No employer shall solely use any recommendation or prediction from an automated decision system to make an employment decision.
Pending 2025-07-01
H-01.6
21 V.S.A. § 495q(f)(2)
Plain Language
Employers may never rely solely on automated decision system outputs for employment-related decisions. ADS outputs may be used only if three conditions are all satisfied: (1) the ADS outputs are corroborated by human oversight — specifically supervisory/managerial observations, work documentation, personnel records, and coworker consultations; (2) the employer has completed an impact assessment under subsection (g); and (3) the employer has provided the required pre-decision notice under subsection (f)(4). This is a strong human-in-the-loop requirement — human oversight must affirmatively corroborate the ADS output with independent evidence, not merely ratify it.
(2)(A) An employer shall not solely rely on outputs from an automated decision system when making employment-related decisions. (B) An employer may utilize an automated decision system in making employment-related decisions if: (i) the automated decision system outputs considered in making the employment-related decision are corroborated by human oversight of the employee, including supervisory or managerial observations and documentation of the employee's work, personnel records, and consultations with the employee's coworkers; (ii) the employer has conducted an impact assessment of the automated decision system pursuant to subsection (g) of this section; and (iii) the employer is in compliance with the notice requirements of subdivision (4) of this subsection (f).
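The three-condition structure of subdivision (f)(2)(B) lends itself to a gate check. A sketch with hypothetical names; whether "including" requires every listed corroboration source or merely some of them is an interpretive question, and this sketch requires at least one:

```python
CORROBORATION_SOURCES = (
    "supervisory_observations",  # supervisory or managerial observations
    "work_documentation",        # documentation of the employee's work
    "personnel_records",
    "coworker_consultations",
)

def ads_output_usable(oversight: dict,
                      impact_assessment_done: bool,
                      notice_given: bool) -> bool:
    """21 V.S.A. § 495q(f)(2)(B) as a conjunction of its three
    conditions. Corroboration is treated as satisfied by at least one
    of the named sources; reading 'including' to demand all of them
    would tighten the first test to all()."""
    corroborated = any(oversight.get(k) for k in CORROBORATION_SOURCES)
    return corroborated and impact_assessment_done and notice_given
```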
Pending 2025-07-01
H-01.1H-01.3
21 V.S.A. § 495q(f)(4)
Plain Language
Before using an ADS to make any employment-related decision about an employee, the employer must provide a pre-decision notice in plain, clear, concise language in the employee's primary language. The notice must cover ten specific items: the system's nature, purpose, and scope; its logic and key parameters; input data categories and sources (including electronic monitoring data); performance metrics; output types; developer identity; operator/monitor identity; how to access the impact assessment; employee data access and correction rights; and an anti-retaliation statement. This is a pre-decision notice requirement — it must be provided before the ADS is used, not after.
(4) Prior to using an automated decision system to make an employment-related decision about an employee, the employer must provide the employee with a notice that complies with subdivision (c)(3)(A) of this section and, at a minimum, contains the following information: (A) a plain language explanation of the nature, purpose, and scope for which the automated decision system will be used, including the specific employment-related decisions potentially affected; (B) the logic used in the automated decision system, including the key parameters that affect the output of the automated decision system; (C) the specific category and sources of employee input data that the automated decision system will use, including a specific description of any data collected through electronic monitoring; (D) any performance metrics the employer will consider using with the automated decision system; (E) the type of outputs the automated decision system will produce; (F) the individuals or entities that developed the automated decision system; (G) the individual or entities that will operate, monitor, and interpret the results of the automated decision system; (H) information about how an employee can access the results of the most recent impact assessment of the automated decision system; (I) a description of an employee's rights, pursuant to subsection (j) of this section, to access information about the employer's use of the automated decision system and to correct data used by the automated decision system; and (J) a statement that employees are protected from retaliation for exercising the rights described in the notice.
Pending 2025-07-01
H-01.3
9 V.S.A. § 4193c(a)-(b)
Plain Language
Before using an automated decision system for a consequential decision, deployers must provide consumers with a clear, conspicuous, multilingual pre-decision notice. The notice must describe which personal characteristics the system measures, how it measures them, their relevance to the decision, what human oversight exists, how the automated components contribute, and a link to a public webpage describing outputs, data sources, and the latest impact assessment results. This is an affirmative pre-use disclosure obligation — it must be delivered before the system is applied to the consumer.
(a) Any deployer that employs an automated decision system for a consequential decision shall inform the consumer prior to the use of the system for a consequential decision in clear, conspicuous, and consumer-friendly terms, made available in each of the languages in which the company offers its end services, that automated decision systems will be used to make a consequential decision or to assist in making a consequential decision. (b) Any notice provided by a deployer to the consumer pursuant to subsection (a) of this section shall include: (1) a description of the personal characteristics or attributes that the system will measure or assess; (2) the method by which the system measures or assesses those attributes or characteristics; (3) how those attributes or characteristics are relevant to the consequential decisions for which the system should be used; (4) any human components of the system; (5) how any automated components of the system are used to inform the consequential decision; and (6) a direct link to a publicly accessible page on the deployer's website that contains a plain-language description of the: (A) system's outputs; (B) types and sources of data collected from natural persons and processed by the system when it is used to make, or assists in making, a consequential decision; and (C) results of the most recent impact assessment, or an active link to a web page where a consumer can review those results.
Pending 2025-07-01
H-01.1H-01.2
9 V.S.A. § 4193c(c)
Plain Language
After a consequential decision is made using an automated decision system, deployers must provide the consumer with a single post-decision notice explaining the principal reasons for the decision. The notice must identify the developer, describe the system's output, explain how the system contributed to the decision, identify data types and sources used, explain in plain language how the consumer's personal data informed the outcome, and describe what actions the consumer could have taken or can take to secure a different result. This is an individualized explanation obligation — not a generic disclosure — and must be specific to the consumer's actual decision.
(c) Any deployer that employs an automated decision system for a consequential decision shall provide the consumer with a single notice containing a plain-language explanation of the decision that identifies the principal reason or reasons for the consequential decision, including: (1) the identity of the developer of the automated decision system used in the consequential decision, if the deployer is not also the developer; (2) a description of what the output of the automated decision system is, such as a score, recommendation, or other similar description; (3) the degree and manner to which the automated decision system contributed to the decision; (4) the types and sources of data processed by the automated decision system in making the consequential decision; (5) a plain language explanation of how the consumer's personal data informed the consequential decision; and (6) what actions, if any, the consumer might have taken to secure a different decision and the actions that the consumer might take to secure a different decision in the future.
Pending 2025-07-01
H-01.4H-01.5
9 V.S.A. § 4193c(d)
Plain Language
Deployers must establish and explain an appeal process allowing consumers to formally contest a consequential automated decision, submit supporting information, and obtain meaningful human review. The human reviewer must be trained, impartial, free from conflicts of interest, not involved in the original decision, protected from retaliation for their review decisions, and given sufficient resources. The reviewer must consider consumer-submitted information and may consider other relevant sources. Deployers must respond within 45 days, extendable once by 45 days for complexity, with notice of any extension. This is a robust procedural due process requirement that goes beyond simple human review on request.
(d)(1) A deployer shall provide and explain a process for a consumer to appeal a decision, which shall at minimum allow the consumer to: (A) formally contest the decision; (B) provide information to support their position; and (C) obtain meaningful human review of the decision. (2) For an appeal made pursuant to subdivision (1) of this subsection: (A) a deployer shall designate a human reviewer who: (i) is trained and qualified to understand the consequential decision being appealed, the consequences of the decision for the consumer, how to evaluate and how to serve impartially, including by avoiding prejudgment of the facts at issue, conflict of interest, and bias; (ii) does not have a conflict of interest for or against the deployer or the consumer; (iii) was not involved in the initial decision being appealed; (iv) shall enjoy protection from dismissal or its equivalent, disciplinary measures, or other adverse treatment for exercising their functions under this section; and (v) shall be allocated sufficient human resources by the deployer to conduct an effective appeal of the decision; and (B) the human reviewer shall consider the information provided by the consumer in their appeal and may consider other sources of information relevant to the consequential decision. (3) A deployer shall respond to a consumer's appeal not later than 45 days after receipt of the appeal. That period may be extended once by an additional 45 days where reasonably necessary, taking into account the complexity and number of appeals. The deployer shall inform the consumer of any extension not later than 45 days after receipt of the appeal, together with the reasons for the delay.
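The timing rules compose simply: one 45-day response window, one optional 45-day extension, and notice of any extension due within the original 45 days. Below is a sketch of the arithmetic, assuming calendar days (the bill text does not specify business days); the function name and the return structure are invented for illustration.

```python
from datetime import date, timedelta

RESPONSE_WINDOW = timedelta(days=45)   # initial response deadline
EXTENSION = timedelta(days=45)         # one extension permitted

def appeal_deadlines(received: date, extended: bool = False) -> dict[str, date]:
    """Compute the key appeal dates under subsection (d)(3), assuming calendar days."""
    initial = received + RESPONSE_WINDOW
    return {
        "extension_notice_due": initial,  # consumer must be told of any extension by day 45
        "response_due": initial + EXTENSION if extended else initial,
    }

# Example: appeal received 2026-01-05, deployer invokes the one permitted extension.
print(appeal_deadlines(date(2026, 1, 5), extended=True))
# {'extension_notice_due': datetime.date(2026, 2, 19), 'response_due': datetime.date(2026, 4, 5)}
```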
Pending 2027-01-01
H-01.1
Sec. 3(5)(a)-(c)
Plain Language
After making a consequential decision using a high-risk AI system, the deployer must transmit the decision to the consumer without undue delay. If the decision is adverse and relied on personal information beyond what the consumer directly provided, the deployer must also disclose the principal reasons for the decision, including the degree and manner of the AI system's contribution, the types of data processed, and the sources of that data. The adverse-decision explanation is conditioned on two triggers: (1) the decision must be adverse, and (2) it must be based on data the consumer did not directly provide. If either trigger is unmet, only timely transmittal of the decision itself is required.
(5) A deployer that has deployed a high-risk artificial intelligence system to make a consequential decision concerning a consumer shall transmit to the consumer the consequential decision without undue delay. If such consequential decision is adverse to the consumer and based on personal information beyond information that the consumer provided directly to the deployer, the deployer shall provide to the consumer a statement disclosing the principal reason or reasons for the consequential decision, including: (a) The degree to which and manner in which the high-risk artificial intelligence system contributed to the consequential decision; (b) The type of data that was processed by such system in making the consequential decision; and (c) The sources of such data.
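The conditional structure here (and in the nearly identical provision of the next bill below) is a two-part trigger: transmittal is always required, while the detailed reasons statement is required only when both triggers fire. A minimal sketch, with invented names for the decision record:

```python
from dataclasses import dataclass

@dataclass
class ConsequentialDecision:
    """Hypothetical decision record; field names are illustrative, not statutory."""
    adverse: bool                 # trigger 1: decision is adverse to the consumer
    used_third_party_data: bool   # trigger 2: personal info beyond what consumer provided

def deployer_duties(decision: ConsequentialDecision) -> list[str]:
    """Transmittal is unconditional; the reasons statement needs both triggers."""
    duties = ["transmit decision without undue delay"]
    if decision.adverse and decision.used_third_party_data:
        duties.append("disclose principal reasons, AI contribution, data types, data sources")
    return duties
```

So a favorable decision, or an adverse one based solely on consumer-provided data, yields only the transmittal duty.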
Pending 2026-07-01
H-01.3
Sec. 7(1)-(2)
Plain Language
Each time a deployer uses a high-risk AI system to make or substantially factor into a consequential decision about a consumer, the deployer must notify the consumer before the decision is made. The pre-decision notice must disclose the system's purpose, the nature of the consequential decisions it makes, the deployer's contact information, and a plain-language description of the AI system. This obligation has the earliest effective date in the bill — July 1, 2026 — one year before the risk management and impact assessment obligations take effect. Note that 'substantial factor' has a narrow statutory definition requiring the AI factor to be weighed more heavily than any other factor contributing to the decision.
Beginning July 1, 2026, each time a deployer deploys a high-risk artificial intelligence system to make, or be a substantial factor in making, a consequential decision concerning a consumer, the deployer shall: (1) Notify the consumer that the deployer has deployed a high-risk artificial intelligence system to make, or be a substantial factor in making, a consequential decision before the decision is made; and (2) Provide to the consumer a statement disclosing: (a) The purpose of the high-risk artificial intelligence system and the nature of the consequential decisions; (b) The contact information for the deployer; and (c) A description, in plain language, of the high-risk artificial intelligence system.
Pending 2027-01-01
H-01.1
Sec. 3(5)(a)-(c)
Plain Language
Deployers must transmit consequential decisions to consumers without undue delay. If the decision is adverse and relied on personal data beyond what the consumer directly provided, the deployer must additionally explain: the principal reasons for the decision, the degree and manner of the AI system's contribution, the types of data used, and the data sources. This explanation right is triggered only when (1) the decision is adverse and (2) personal data beyond the consumer's direct submissions was used — favorable decisions and decisions based solely on consumer-provided data do not require the detailed explanation.
(5) A deployer that has deployed a high-risk artificial intelligence system to make a consequential decision concerning a consumer shall transmit to the consumer the consequential decision without undue delay. If such consequential decision is adverse to the consumer and based on personal data beyond information that the consumer provided directly to the deployer, the deployer shall provide to the consumer a statement disclosing the principal reason or reasons for the consequential decision, including: (a) The degree to which and manner in which the high-risk artificial intelligence system contributed to the consequential decision; (b) The type of data that was processed by such system in making the consequential decision; and (c) The sources of such data.
Pending 2026-07-01
H-01.1H-01.3
Sec. 8(1)-(2)
Plain Language
Every time a deployer uses a high-risk AI system to make or substantially factor into a consequential decision about a consumer, the deployer must, before the decision is made: (1) notify the consumer that an AI system is being used, and (2) provide a statement disclosing the system's purpose, the nature of the consequential decisions it makes, the deployer's contact information, and a plain-language description of the AI system. This obligation takes effect July 1, 2026 — one year earlier than the risk management and impact assessment provisions. Consequential decisions cover a broad set of high-stakes domains including employment, housing, credit, healthcare, insurance, education, legal services, essential government services, and criminal justice releases.
Beginning July 1, 2026, each time a deployer deploys a high-risk artificial intelligence system to make, or be a substantial factor in making, a consequential decision concerning a consumer, the deployer shall: (1) Notify the consumer that the deployer has deployed an artificial intelligence system to make, or be a substantial factor in making, a consequential decision before the decision is made; and (2) Provide to the consumer a statement disclosing: (a) The purpose of the high-risk artificial intelligence system and the nature of the consequential decisions; (b) The contact information for the deployer; and (c) A description, in plain language, of the high-risk artificial intelligence system.