H-01
Human Oversight & Fairness
Human Oversight of Automated Decisions
Applies to: Developer, Deployer, Professional, Government
Sector: Employment, Financial Services, Healthcare, Government System
Bills — Enacted: 1 unique bill
Bills — Proposed: 54
Last Updated: 2026-03-29
Core Obligation

When AI systems make or inform consequential decisions about individuals — typically covering employment, credit, housing, insurance, healthcare, and public benefits — those individuals must have meaningful rights to understand, review, challenge, and in high-stakes contexts override those decisions. The specific rights and processes vary by jurisdiction and context, but the core principle is that individuals should not be subject to consequential automated decisions without meaningful recourse.

Sub-Obligations (6)
ID · Name & Description · Enacted · Proposed
H-01.1
Explanation right: The individual must receive an explanation of the principal factors that drove the automated decision, in plain language specific enough to be actionable — not a generic statement that AI was used.
1 enacted
31 proposed
H-01.2
Data disclosure right: The specific data inputs used in making the decision about this individual must be disclosed, including the right to know what data was used and to correct inaccurate data.
0 enacted
16 proposed
H-01.3
Pre-decision notice: The individual must be notified before a consequential automated decision is made — informing them that an automated system will be used and what categories of decisions it can make.
1 enacted
37 proposed
H-01.4
Right to request human review: The individual must have a clear, accessible mechanism to request human review of an automated decision. The right must be disclosed at or near the time of the decision. Human review must be available, but the individual must invoke it.
1 enacted
27 proposed
H-01.5
Appeal and contestation right: A defined process must exist for the individual to formally contest an automated decision and receive a substantive response explaining the outcome. The process must be accessible without unreasonable burden.
1 enacted
22 proposed
H-01.6
Mandatory pre-action human sign-off: Before action is taken on an AI recommendation in defined high-stakes contexts, a qualified human reviewer must affirmatively review and authorize the decision. The human must have authority and practical ability to override — not merely ratify — the AI output.
0 enacted
22 proposed
Bills That Map This Requirement (55 bills)
Bill · Status · Sub-Obligations · Section
Pending 2027-01-01
H-01.4
Bus. & Prof. Code § 22627(a)
Plain Language
During business hours (8 a.m. to 6 p.m. daily), operators of large private businesses must make human customer service available and must connect any consumer interacting with a chatbot or automated system to a live customer service agent within five minutes of the consumer's request. The five-minute clock starts when the request for human assistance is made, not when the initial interaction began. This obligation applies daily — not just business days — suggesting weekend and holiday coverage is required. Outside of 8 a.m. to 6 p.m., the five-minute human escalation requirement does not apply.
(a) During the business hours of 8 a.m. to 6 p.m. daily, an operator of a large private business who provide goods and services to consumers in California shall provide consumers with human customer service support and communications. During these times, an operator shall connect a person interacting with a customer service chatbot, or automated customer support system, to a customer service agent within five minutes after a request for human customer service is made.
Pending 2027-01-01
H-01.4
Bus. & Prof. Code § 22627(b)
Plain Language
For telephonic customer service platforms specifically, operators must ensure three things: (1) customer calls are answered quickly; (2) after the call is answered, the customer is never placed on hold for more than 5 minutes at any single point, and cumulative hold time for the call does not exceed 10 minutes total; and (3) if a chatbot initially answers the call, human assistance must be provided within five minutes of the call being placed. The five-minute-from-call-placement timer for chatbot-answered calls is stricter than the general five-minute-from-request timer in § 22627(a) because it runs from when the call is made, not from when the consumer requests a human.
(b) For telephonic customer service platforms, the business shall ensure all of the following: (1) That a customer call be answered quickly and, after the call is answered, that a customer is not placed on hold for more than 5 minutes at any point after the call is answered, and that cumulative hold times for a call not exceed more than 10 minutes total. (2) If a call is answered by a customer service chatbot, the operator of the telephonic platform shall provide human assistance within five minutes after the call is made.
Pending 2027-01-01
H-01.4
Bus. & Prof. Code § 22627(c)
Plain Language
For online customer service platforms, operators must give customers the option to request assistance from a human being — this means the option must be affirmatively presented, not buried or hidden. Once the customer makes the request, human assistance must be provided within five minutes. This parallels the general requirement in § 22627(a) but applies specifically to the online channel.
(c) For online customer service platforms, the business shall ensure that a customer is given option to request customer service assistance from a human being and, upon that request, the operator of the online platform shall provide human assistance within five minutes after the request is made.
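For readers who want to see how the § 22627 timing rules in the three entries above fit together, here is a minimal compliance-check sketch in Python. It assumes call timestamps and hold segments are already being recorded; the data structure, field names, and function names are illustrative and are not drawn from the bill.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

# Minimal sketch of the timing clocks in Bus. & Prof. Code § 22627(a)-(c).
# The record shape below is an assumption, not something the bill prescribes.

@dataclass
class Interaction:
    started: datetime                       # when the call was placed or the chat began
    answered_by_chatbot: bool
    human_requested: datetime | None        # when the consumer asked for a human, if ever
    human_reached: datetime | None          # when a live agent actually joined, if ever
    hold_segments: list[timedelta] = field(default_factory=list)  # telephone holds

FIVE_MIN = timedelta(minutes=5)
TEN_MIN = timedelta(minutes=10)

def in_covered_hours(t: datetime) -> bool:
    # Obligations apply from 8 a.m. to 6 p.m. daily.
    return 8 <= t.hour < 18

def timing_issues(x: Interaction) -> list[str]:
    issues = []
    # (a)/(c): human assistance within five minutes of the consumer's request.
    if x.human_requested and in_covered_hours(x.human_requested):
        if x.human_reached is None or x.human_reached - x.human_requested > FIVE_MIN:
            issues.append("human not provided within 5 minutes of the request")
    # (b)(2): chatbot-answered calls need a human within five minutes of call placement.
    if x.answered_by_chatbot and in_covered_hours(x.started):
        if x.human_reached is None or x.human_reached - x.started > FIVE_MIN:
            issues.append("chatbot-answered call not escalated within 5 minutes of placement")
    # (b)(1): no single hold over 5 minutes, no more than 10 minutes of hold in total.
    if any(seg > FIVE_MIN for seg in x.hold_segments):
        issues.append("a single hold exceeded 5 minutes")
    if sum(x.hold_segments, timedelta()) > TEN_MIN:
        issues.append("cumulative hold time exceeded 10 minutes")
    return issues
```

The sketch only models the clocks; it does not capture the underlying duty to make human customer service available during covered hours.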
Pending 2027-01-01
H-01.6
Labor Code § 2821(a), (c)
Plain Language
Employers may not use or deploy AI, clinical decision support systems, or other healthcare technology in a way that replaces or limits a direct patient care worker's ability to exercise professional judgment within their scope of practice. This is an affirmative prohibition — the employer cannot design workflows, policies, or system configurations that effectively override or constrain the clinician's independent judgment. The policy declaration in subdivision (a) provides interpretive context: the legislature's intent is that clinicians retain autonomy over patient care decisions even when AI tools are deployed. This effectively requires that any AI tool used in patient care operate in an advisory capacity, with the clinician retaining final decision-making authority.
(a) It is the public policy of the State of California that a worker providing direct patient care be free to use their professional judgment to make assessments and decisions within their scope of practice as appropriate for their patients. (c) An employer shall not use or deploy technology to replace or limit a worker's use of professional judgment in patient care.
Pending 2026-01-01
H-01.1, H-01.3
Bus. & Prof. Code § 22756.2(a)
Plain Language
When a deployer uses a high-risk automated decision system to make a decision about an individual, the deployer must notify that person and disclose: the system's purpose and the specific decision made, how the system was used in the decision, the types of data the system used, the deployer's contact information, and a link to the deployer's public website statement about its automated decision systems. This is a post-decision notification and explanation obligation — the statute does not explicitly require pre-decision notice, but it does require disclosure of the specific decision made.
(a) If a deployer uses a high-risk automated decision system to make a decision regarding a natural person, the deployer shall notify the natural person of that fact and disclose to that natural person all of the following: (1) The purpose of the high-risk automated decision system and the specific decision it was used to make. (2) How the high-risk automated decision system was used to make the decision. (3) The type of data used by the high-risk automated decision system. (4) Contact information for the deployer. (5) A link to the statement required by subdivision (b).
Pending 2026-01-01
H-01.4
Bus. & Prof. Code § 22756.2(c)
Plain Language
Deployers must provide individuals who are subject to a decision made by a high-risk automated decision system with an opportunity to appeal the decision for human review. This obligation is conditioned on technical feasibility, which provides some flexibility but does not eliminate the requirement where human review is practicable. The statute does not specify a timeframe for the appeal or the qualifications of the human reviewer.
(c) A deployer shall provide, as technically feasible, a natural person that is the subject of a decision made by a high-risk automated decision system an opportunity to appeal that decision for review by a natural person.
Passed 2026-01-01
H-01.3
Lab. Code § 1522(a)-(c), (e)
Plain Language
Employers must provide a written pre-use notice to workers (or their authorized representatives) before deploying an ADS for non-hiring employment-related decisions. The notice must be issued at least 30 days before first deployment, by April 1, 2026 for systems already in use, and within 30 days for new hires. The notice must be plain-language, stand-alone, in the worker's routine communication language, and must describe the types of affected decisions, data categories and sources, key parameters that disproportionately affect output, the ADS vendor, any quotas, and the worker's data access and correction rights. Employers must also maintain an updated list of all ADS currently in use.
(a) An employer shall provide a written notice that an ADS, for the purpose of making employment-related decisions, not including hiring, is in use at the workplace to a worker who will foreseeably be directly affected by the ADS, or their authorized representative, according to the following: (1) At least 30 days before an ADS is first deployed by the employer. (2) If the employer is using an ADS to assist in making employment-related decisions at the time this title takes effect, no later than April 1, 2026. (3) To a new worker within 30 days of hiring the worker. (b) An employer shall maintain an updated list of all ADS currently in use. (c) A written notice required by this section shall be all of the following: (1) Written in plain language as a separate, stand-alone communication. (2) In the language in which routine communications and other information are provided to workers. (3) Provided via a simple and easy-to-use method, including, but not limited to, an email, hyperlink, or other written format. (e) A notice issued pursuant to subdivision (a) shall contain the following information: (1) The type of employment-related decisions potentially affected by the ADS. (2) A general description of the categories of worker input data the ADS will use, the sources of worker input data, and how worker input data will be collected. (3) Any key parameters known to disproportionately affect the output of the ADS. (4) The individuals, vendors, or entities that created the ADS. (5) If applicable, a description of each quota set or measured by an ADS to which the worker is subject, including the quantified number of tasks to be performed or products to be produced, and any potential adverse employment action that could result from failure to meet the quota, as well as whether those quotas are subject to change and if any notice is given of changes in quotas. (6) A description of the worker's right to access and correct the worker's data used by the ADS. (7) That the employer is prohibited from retaliating against workers for exercising their rights described in paragraph (6).
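The three notice-timing triggers in subdivision (a) reduce to a simple deadline calculation. A minimal sketch follows; the function name and inputs are assumptions, and the sketch treats the triggers as mutually exclusive scenarios even though an employer may face more than one of them.

```python
from datetime import date, timedelta

# Sketch of the notice deadlines in Lab. Code § 1522(a); names are illustrative.
LEGACY_DEADLINE = date(2026, 4, 1)  # (a)(2): systems already in use when the title takes effect

def ads_notice_deadline(first_deployment: date,
                        in_use_at_effective_date: bool = False,
                        hire_date: date | None = None) -> date:
    """Latest date the written ADS notice may be given for one scenario."""
    if hire_date is not None:
        # (a)(3): new workers must receive the notice within 30 days of hire.
        return hire_date + timedelta(days=30)
    if in_use_at_effective_date:
        # (a)(2): legacy systems must be noticed no later than April 1, 2026.
        return LEGACY_DEADLINE
    # (a)(1): otherwise, the notice is due at least 30 days before first deployment.
    return first_deployment - timedelta(days=30)

# Example: an ADS first deployed on 2026-09-01 requires notice by 2026-08-02.
assert ads_notice_deadline(date(2026, 9, 1)) == date(2026, 8, 2)
```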
Passed 2026-01-01
H-01.3
Lab. Code § 1522(d)
Plain Language
When an employer uses an ADS to make hiring decisions for a particular position, the employer must notify each job applicant upon receipt of their application. This is a simpler notice than the pre-use worker notice — it may be delivered via an automatic reply mechanism or included in the job posting itself. Unlike the worker notice, no specific content requirements are enumerated; the employer must simply inform the applicant that an ADS is used in hiring decisions for that position.
(d) An employer shall notify a job applicant upon receiving the application that the employer utilizes an ADS when making hiring decisions, if the employer will use the ADS in making decisions for that position. Notifications may be made using an automatic reply mechanism or on a job posting.
Passed 2026-01-01
H-01.6
Lab. Code § 1524(c)
Plain Language
Employers face a two-tier restriction on ADS use in discipline, termination, and deactivation decisions. First, an employer may never rely solely on an ADS for such decisions — there must always be a human in the loop. Second, when the ADS output is the primary basis for the decision, a human reviewer must affirmatively review the ADS output and also compile and review other relevant information (supervisory evaluations, personnel files, work product, peer reviews, witness interviews). The human reviewer must have actual supplementary information to consider — not merely rubber-stamp the ADS output.
(c) (1) An employer shall not rely solely on an ADS when making a discipline, termination, or deactivation decision. (2) When an employer relies primarily on ADS output to make a discipline, termination, or deactivation decision, the employer shall use a human reviewer to review the ADS output and compile and review other information that is relevant to the decision, if any. For purposes of this paragraph, "other information" may include, but is not limited to, any of the following: (A) Supervisory or managerial evaluations. (B) Personnel files. (C) Work product of workers. (D) Peer reviews. (E) Witness interviews, that may include relevant online customer reviews.
Passed 2026-01-01
H-01.1, H-01.3
Lab. Code § 1526(a)-(b)
Plain Language
When an employer primarily relied on an ADS to make a discipline, termination, or deactivation decision, it must provide the affected worker with a post-decision written notice at the time the worker is informed of the decision. The notice must be plain-language, stand-alone, in the worker's routine communication language, and must identify: a human contact for further information, the fact that an ADS was used, the worker's right to request their data, and the prohibition on retaliation. This is a post-action notice obligation — distinct from the pre-deployment notice under Section 1522.
(a) An employer that primarily relied on an ADS to make a discipline, termination, or deactivation decision shall provide the affected worker with a written notice at the time the employer informs the worker of the decision. The notice shall be all of the following: (1) Written in plain language as a separate, stand-alone communication. (2) In the language in which routine communications and other information are provided to workers. (3) Provided via a simple and easy-to-use method, including an email, hyperlink, or other written format. (b) A notice issued pursuant to subdivision (a) shall contain all of the following information: (1) The human to contact for more information about the decision and the ability to request a copy of the worker's own worker data relied on in the decision. (2) That the employer used an ADS to assist the employer in one or more discipline, termination, or deactivation decisions with respect to the worker. (3) That the worker has the right to request a copy of the worker's data used by the ADS. (4) That the employer is prohibited from retaliating against the worker for exercising their rights under this part.
Pending 2027-01-01
H-01.6
Lab. Code § 1522(b)-(c)
Plain Language
Employers may never rely solely on an ADS for discipline, termination, or deactivation decisions. When ADS output assists such a decision, the employer must assign a human reviewer to conduct an independent investigation and compile corroborating evidence — which may include supervisory evaluations, personnel files, work product, peer reviews, or witness interviews. Critically, if the human reviewer cannot corroborate the ADS output, or concludes it is inaccurate, incomplete, or misleading, the employer is prohibited from using that output for the decision. This goes beyond a human-in-the-loop requirement: the human must have genuine override authority and the ADS output is disqualified absent independent corroboration.
(b) (1) An employer shall not rely solely on an ADS when making a disciplinary, termination, or deactivation decision.
(2) If an employer uses an ADS output to assist in making a disciplinary, termination, or deactivation decision, the employer shall direct a human reviewer to conduct an independent investigation and compile corroborating or supporting information for the decision. For purposes of this paragraph, "other information" may include, but is not limited to, any of the following:
(A) Supervisory or managerial evaluations.
(B) Personnel files.
(C) Work product of workers.
(D) Peer reviews.
(E) Witness interviews, that may include relevant online customer reviews.
(c) If an employer cannot corroborate the ADS output, or the human reviewer has concluded that the ADS output is inaccurate, incomplete, or misleading, the employer shall not use the ADS output to discipline, terminate, or deactivate a worker.
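The corroboration gate described above amounts to a short decision rule: ADS output that assists a discipline, termination, or deactivation decision may only be used after an independent human investigation produces corroboration and the reviewer does not find the output unreliable. The sketch below is illustrative; the record fields and function name are hypothetical, not statutory terms.

```python
from dataclasses import dataclass, field

# Illustrative model of the gate in the pending Lab. Code § 1522(b)-(c).
@dataclass
class AdsReview:
    ads_output_assisted: bool                 # ADS output was used to assist the decision
    independent_investigation_done: bool      # a human reviewer investigated independently
    corroborating_evidence: list[str] = field(default_factory=list)  # evaluations, files, etc.
    output_found_unreliable: bool = False     # inaccurate, incomplete, or misleading

def may_use_ads_output(review: AdsReview) -> bool:
    """Whether the ADS output may support a discipline, termination, or deactivation decision."""
    if not review.ads_output_assisted:
        return True   # the gate only applies when ADS output assists the decision
    if not review.independent_investigation_done:
        return False  # (b)(1)-(2): never the ADS alone; a human must investigate
    if not review.corroborating_evidence:
        return False  # (c): output that cannot be corroborated may not be used
    if review.output_found_unreliable:
        return False  # (c): inaccurate, incomplete, or misleading output is disqualified
    return True
```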
Pending 2027-01-01
H-01.1, H-01.2, H-01.3
Lab. Code § 1524(a)-(c)
Plain Language
When an employer uses ADS to assist in a discipline, termination, or deactivation decision, the employer must provide the affected worker a written postuse notice at the time the worker is informed of the decision. The notice must be a separate, stand-alone, plain-language communication in the worker's routine language, disclosing: (1) that ADS was used, (2) that a human reviewer independently investigated and corroborated the ADS output, (3) contact information for a human the worker can reach for more information and to exercise data access rights, and (4) anti-retaliation protections. If the worker then requests their data, the employer must provide — in a document accessible outside the workplace — the specific decision, the specific ADS inputs and outputs, corroborating evidence, the ADS vendor name and product name, and any completed impact assessments for that ADS. This is a two-stage obligation: the initial notice is automatic at the time of the decision, and the detailed data disclosure is triggered by worker request.
(a) An employer that uses an ADS to assist in making a disciplinary, termination, or deactivation decision shall provide the affected worker with a written postuse notice at the time the employer informs the worker of the decision. The notice shall comply with all of the following:
(1) It shall be written in plain language as a separate, stand-alone communication.
(2) It shall be in the language in which routine communications and other information are provided to workers.
(3) It shall be provided via a simple and easy-to-use method, including an email, hyperlink, or other written format.
(b) The post-use notice shall contain all of the following information:
(1) That the employer used an ADS to assist the employer in the disciplinary, termination, or deactivation decision with respect to the worker.
(2) That a human reviewer conducted an independent investigation and compiled evidence to corroborate the ADS output.
(3) Contact information for the human that the worker may contact for more information about the decision and the worker's right to access a copy of their own data and corroborating evidence that was used in the decision.
(4) That the employer is prohibited from retaliating against the worker for exercising their rights under this part.
(c) When responding to a data access request pursuant to this section, an employer shall provide to the worker a written, plain language document using a simple and easy-to-use method that is accessible away from the workplace containing all of the following:
(1) The specific decision for which the employer used the ADS.
(2) The specific worker input data that the ADS used, and the specific worker output produced by the ADS.
(3) Any additional corroborating or supporting information used in addition to the ADS output in making the decision.
(4) The name of the vender or entity that created the ADS and the product name of the ADS.
(5) A copy of any completed impact assessments regarding the ADS in question.
Enacted 2026-06-30
H-01.1, H-01.3
C.R.S. § 6-1-1703(4)(a)
Plain Language
Deployers must, no later than the time the high-risk AI system is deployed to make or substantially factor in a consequential decision about a consumer, provide certain disclosures. The specific disclosures required are enumerated in the original SB 205 § 6-1-1703(4)(a) (e.g., that an AI system is being used, categories of decisions it makes, contact information for the deployer, a description of the purpose). This is a pre-decision or at-decision timing requirement — the deployer cannot make the consequential decision and disclose later.
(4) (a) On and after June 30, 2026, and no later than the time that a deployer deploys a high-risk artificial intelligence system to make, or be a substantial factor in making, a consequential decision concerning a consumer, the deployer shall:
Enacted 2026-06-30
H-01.1, H-01.4, H-01.5
C.R.S. § 6-1-1703(4)(b)
Plain Language
When a high-risk AI system makes or substantially factors into a consequential decision that is adverse to a consumer, the deployer must provide the consumer with specific information. The original SB 205 § 6-1-1703(4)(b) requires: a statement that an AI system was used, contact information for the deployer, a description of the purpose of the AI system, information about the consumer's right to opt out and to appeal, and other relevant details. This post-adverse-decision disclosure obligation gives affected consumers the information they need to exercise appeal rights.
(b) On and after June 30, 2026, a deployer that has deployed a high-risk artificial intelligence system to make, or be a substantial factor in making, a consequential decision concerning a consumer shall, if the consequential decision is adverse to the consumer, provide to the consumer:
Pending 2026-10-01
H-01.3
Sec. 5
Plain Language
Before making any employment-related decision in which an automated process is used as the decision-maker or a substantial factor, the deployer must provide the affected applicant or employee a detailed written pre-decision notice. The notice must cover eight required elements: that an automated system is being used, its purpose, opt-out rights under CT data privacy law, deployer contact information, availability of human review, how to request reevaluation, a link to the most recent bias audit summary, and how to request additional documentation. This is distinct from the Section 4 data-collection notice — Section 5 is triggered by impending decision-making, not data collection.
Except as provided in subsection (b) of section 2 of this act, a deployer who has deployed an automated employment-related decision process to make, or be a substantial factor in making, an employment-related decision concerning an applicant for employment or employee in the state shall, before such employment-related decision is made, provide to such applicant or employee a written notice disclosing: (1) That the deployer has deployed an automated employment-related decision process; (2) The purpose of the automated employment-related decision process and the nature of such employment-related decision; (3) Information concerning the right, under subparagraph (C) of subdivision (5) of subsection (a) of section 42-518 of the general statutes, to opt out of the processing of personal data for the purposes set forth in said subparagraph; (4) Contact information for the deployer; (5) The availability of human review pursuant to section 7 of this act; (6) Information concerning how such applicant or employee may request a revaluation of any employment-related decision made in whole or in part by such automated employment-related decision process; (7) A link to the summary of the most recent bias audit required pursuant to section 8 of this act; and (8) Information concerning how to request additional documentation or information about such automated employment-related decision process.
Pending 2026-10-01
H-01.1, H-01.2, H-01.4, H-01.5
Sec. 6(a)-(b)
Plain Language
When an automated employment decision process makes or substantially contributes to an adverse employment decision, the deployer must provide the affected individual three things: (1) a high-level explanation of the principal reasons for the adverse decision — including how the automated system contributed, what data types were processed, and data sources; (2) an opportunity to examine the data used, correct inaccuracies, and appeal based on incorrect data with human review; and (3) upon request, a copy of the most recent bias audit. The explanation must be delivered directly, in plain language, in all languages the deployer ordinarily uses for business communications in the state, and in disability-accessible format.
(a) Except as provided in subsection (b) of section 2 of this act, a deployer who has deployed an automated employment-related decision process to make, or be a substantial factor in making, an employment-related decision concerning an applicant for employment or employee in the state shall, if such employment-related decision is adverse to such applicant or employee, provide to such applicant or employee: (1) A high-level statement disclosing the principal reason or reasons for such adverse employment-related decision, including, but not limited to, (A) the degree to which, and manner in which, the automated employment-related decision process contributed to such adverse employment-related decision, (B) the type of data that were processed by such automated employment-related decision process in making, or as a substantial factor in making, such adverse employment-related decision, and (C) the source of the data described in subparagraph (B) of this subdivision; (2) An opportunity to (A) examine the data the automated employment-related decision process processed in making, or as a substantial factor in making, such adverse employment-related decision, (B) correct any incorrect data described in subparagraph (A) of this subdivision, and (C) appeal such adverse employment-related decision if such adverse employment-related decision is based upon any incorrect data described in subparagraph (A) of this subdivision. Such appeal shall allow for human review; and (3) Upon request by such applicant or employee, or such applicant or employee's representative, a copy of the most recent bias audit required pursuant to section 8 of this act. (b) A deployer who is required to provide a high-level statement to an applicant for employment or employee in the state pursuant to subdivision (1) of subsection (a) of this section shall provide such statement: (1) Directly to such applicant or employee; (2) In plain language; (3) In all languages in which such deployer, in the ordinary course of such deployer's business, provides contracts, disclaimers, sales announcements and other information to persons in the state; and (4) In a format that is accessible to individuals with disabilities.
Pending 2026-10-01
H-01.6
Sec. 7(a)-(c)
Plain Language
Deployers must implement meaningful human review over every automated employment-related decision process. The human reviewer must have authority to change decisions, understand the system's limitations including bias risks, and not rely solely on the automated output. Specifically, the reviewer must confirm data accuracy and may modify or veto automated recommendations before any adverse decision. Deployers must also establish procedures to pause, correct, or reverse erroneous outputs, and must maintain logs of all human review reports and interventions. Critically, Section 7(c) imposes an absolute prohibition: no automated system may make a final or determinative employment decision without human review.
(a) For the purposes of this section "human review" means a review conducted by a qualified individual who (1) has the authority to make or change an employment-related decision, (2) understands the capabilities, limitations and risks of the automated employment-related decision process, including, but not limited to, patterns of bias, disparate impact and data quality issues, and (3) does not rely solely on the content, decision, prediction or recommendation generated by the automated employment-related decision process in making a final or determinative employment-related decision. (b) (1) A deployer who has deployed an automated employment-related decision process in making, or as a substantial factor in making, an employment-related decision concerning an applicant for employment or employee in the state shall implement human review over such automated employment-related decision process by providing for review of the content, decisions, predictions or recommendations generated by the automated employment-related decision process and any other information relevant to such content, decision, prediction or recommendation in order to confirm the accuracy of data processed by such automated employment-related decision process and, when appropriate, modify or veto any such content, decision, prediction or recommendation generated by such automated decision-making process prior to any adverse employment-related decision. (2) A deployer shall (A) establish procedures necessary to pause, correct or reverse erroneous or harmful content, decision, prediction or recommendation generated by an automated employment-related decision process, and (B) establish and maintain logs listing all human review reports and any intervention taken by an individual conducting such human review. (c) No automated employment-related decision process shall be used by a deployer in making a final or determinative employment-related decision without human review over such final or determinative employment-related decision.
Pending 2026-10-01
H-01.3
Sec. 18 (amending § 46a-60(b)(1)(B))
Plain Language
Under the amended anti-discrimination statute, it is a discriminatory practice for employers to fail to provide advance written notice that an automated employment-related decision process will be used in employment decisions affecting an individual. The notice must at minimum disclose the trade name of the automated system and the types and sources of personal information the system will process. This creates a separate notice obligation within Connecticut's anti-discrimination framework — enforced by CHRO — in addition to the deployer notice obligations in Sections 4 and 5.
(B) For an employer, by the employer or the employer's agent, to fail to provide to any individual advance written notice disclosing, at a minimum, that an automated employment-related decision process will be used to make, to assist in making or in the course of making a decision to hire or employ or to bar or to discharge from employment, or concerning the compensation or terms, conditions or privileges of employment, of such individual. Such notice shall, at a minimum, disclose the trade name of the automated employment-related decision process and the types and sources of personal information concerning the individual that the automated employment-related decision process will process or analyze.
Pending 2025-07-01
H-01.3, H-01.1
O.C.G.A. § 10-16-4(a)
Plain Language
Before or at the time a deployer uses an automated decision system to make or assist in a consequential decision about a consumer, the deployer must notify the consumer and provide: the system's purpose and the nature of the consequential decision, deployer contact information, a plain-language description of what personal characteristics the system assesses and how, identification of human and automated components, a link to a public webpage with the system's logic, parameters, outputs, data sources, and latest impact assessment results, and instructions for accessing the deployer's public statement under § 10-16-5. This is a pre-decision disclosure obligation — not a post-hoc notice.
(a) No later than the time that a deployer deploys an automated decision system to make, or assist in making, a consequential decision concerning a consumer, the deployer shall: (1) Notify the consumer that the deployer has deployed an automated decision system to make, or assist in making, a consequential decision; and (2) Provide to the consumer: (A) A statement disclosing the purpose of the automated decision system and the nature of the consequential decision; (B) The contact information for the deployer; (C) A description, in plain language, of the automated decision system, which description shall, at a minimum, include: (i) A description of the personal characteristics or attributes that the system will measure or assess; (ii) The method by which the system measures or assesses those attributes or characteristics; (iii) How those attributes or characteristics are relevant to the consequential decisions for which the system should be used; (iv) Any human components of such system; (v) How any automated components of such system are used to inform such consequential decision; and (vi) A direct link to a publicly accessible page on the deployer's public website that contains a plain-language description of the logic used in the system, including the key parameters that affect the output of the system; the system's outputs; the types and sources of data collected from natural persons and processed by the system when it is used to make, or assists in making, a consequential decision; and the results of the most recent impact assessment, or an active link to a web page where a consumer can review those results; and (D) Instructions on how to access the statement required by Code Section 10-16-5.
Pending 2025-07-01
H-01.1, H-01.2, H-01.4, H-01.5
O.C.G.A. § 10-16-4(b)-(d)
Plain Language
Within one business day after a consequential decision is made, deployers must send the affected consumer a detailed post-decision notice including: the principal factors and variables that drove the decision (with explanation of the AI's contribution, data sources, and how the consumer's personal data informed the factors), information on the right to correct data and submit supplementary information, guidance on actions the consumer could take to secure a different outcome, instructions for correcting any incorrect personal data used, and information about appeal opportunities (which must allow human review if technically feasible). All notices must be provided directly, in plain language, in all languages the deployer ordinarily uses with consumers, and in disability-accessible formats. If direct delivery is impossible, the deployer must use a method reasonably calculated to reach the consumer. A deployer may not use an automated decision system at all if it cannot provide these notices and explanations.
(b) A deployer that has used an automated decision system to make, or assist in making, a consequential decision concerning a consumer shall transmit to such consumer within one business day after such decision a notice that includes: (1) A specific and accurate explanation that identifies the principal factors and variables that led to the consequential decision, including: (A) The degree to which, and manner in which, the automated decision system contributed to the consequential decision; (B) The source or sources of the data processed by the automated decision system; and (C) A plain-language explanation of how the consumer's personal data informed these principal factors and variables when the automated decision system made, or assisted in making, the consequential decision; (2) Information about consumers' right to correct, and how the consumer can submit corrections and provide supplementary information relevant to, the consequential decision; (3) What actions, if any, the consumer might have taken to secure a different decision and the actions that the consumer might take to secure a different decision in the future; (4) Information on opportunities to correct any incorrect personal data that the automated decision system processed in making, or assisting in making, the consequential decision; and (5) Information on opportunities to appeal an adverse consequential decision concerning the consumer arising from the deployment of an automated decision system, which appeal shall, if technically feasible, allow for human review. (c)(1) A deployer shall provide the notice, statement, contact information, and description required by subsections (a) and (b) of this Code section: (A) Directly to the consumer; (B) In plain language; (C) In all languages in which the deployer, in the ordinary course of the deployer's business, provides contracts, disclaimers, sale announcements, and other information to consumers; and (D) In a format that is accessible to consumers with disabilities. (2) If the deployer is unable to provide the notice, statement, contact information, and description directly to the consumer, the deployer shall make such information available in a manner that is reasonably calculated to ensure that the consumer receives it. (d) No deployer shall use an automated decision system to make, or assist in making, a consequential decision if it cannot provide notices and explanations that satisfy the requirements of this Code section.
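The one-business-day transmission window in subsection (b) can be illustrated with a small date calculation. The quoted text does not define "business day", so the sketch below assumes Monday through Friday and ignores holidays; the function name is hypothetical.

```python
from datetime import date, timedelta

def post_decision_notice_due(decision_date: date) -> date:
    """One business day after the decision, assuming business days are Mon-Fri."""
    due = decision_date + timedelta(days=1)
    while due.weekday() >= 5:   # 5 = Saturday, 6 = Sunday
        due += timedelta(days=1)
    return due

# Example: a decision made on Friday 2026-07-10 would be due the following Monday.
assert post_decision_notice_due(date(2026, 7, 10)) == date(2026, 7, 13)
```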
Pending 2028-07-01
H-01.1, H-01.3
HRS § 321-__ (Consequential decisions; notice; statement; opt-out; corrections; appeal)(a)
Plain Language
Before using AI to make or substantially contribute to a consequential decision — any decision significantly affecting a patient's physical or mental health — the health care provider must give the patient or authorized representative a written pre-decision notice. The notice must: (1) inform the patient that AI will be used; (2) disclose the purpose of the AI system and the nature of the decision; (3) describe the AI system in plain language; and (4) allow the patient to opt out of profiling of their individually identifiable health information or personal data for decisions with legal or similarly significant effects. The 'substantial factor' trigger is broadly defined and captures any AI-generated content, prediction, or recommendation used as a basis for a consequential decision.
(a) Before using an artificial intelligence system to make, or be a substantial factor in making, a consequential decision, a health care provider shall provide the patient or the patient's authorized representative, as applicable, with a written notice that:
(1) Informs the recipient that the health care provider will be using an artificial intelligence system to make, or be a substantial factor in making, the consequential decision;
(2) Discloses the purpose of the artificial intelligence system and the nature of the consequential decision;
(3) Describes the artificial intelligence system in plain language; and
(4) Allows the patient to opt out of the processing of the patient's individually identifiable health information or other personal data for purposes of profiling in furtherance of decisions that have legal or similarly significant effects concerning the patient.
Pending 2028-07-01
H-01.1, H-01.2, H-01.4, H-01.5
HRS § 321-__ (Consequential decisions; notice; statement; opt-out; corrections; appeal)(b)-(c)
Plain Language
After a consequential decision has been made using AI, the health care provider must give the patient or authorized representative: (1) a written statement describing the decision and its principal reasons — including the degree and manner of AI contribution, the types of data the AI processed, and the sources of that data; (2) an opportunity to correct any incorrect health information or personal data the AI used; and (3) an opportunity to appeal the decision with human review of all related information, to the extent technically feasible. The appeal right has a safety exception: it does not apply when delay would risk the patient's life or safety. All notices and statements must be delivered directly to the patient or authorized representative, or if that is not possible, through a manner reasonably calculated to ensure receipt.
(b) Any health care provider that used an artificial intelligence system to make, or be a substantial factor in making, a consequential decision shall provide the patient or the patient's authorized representative, as applicable, with:
(1) A written statement that describes the consequential decision and the principal reasons for the consequential decision, including:
(A) The degree to which, and manner in which, the artificial intelligence system contributed to the consequential decision;
(B) The type of data that was processed by the artificial intelligence system in making the consequential decision; and
(C) The sources of the data described in paragraph (B);
(2) An opportunity to correct any incorrect health information or personal data that the artificial intelligence system processed in making, or as a substantial factor in making, the consequential decision; and
(3) An opportunity to appeal the consequential decision, including allowing, to the extent technically feasible, human review of all information relating to the consequential decision; provided that this paragraph shall not apply if providing the opportunity for appeal is not in the best interest of the patient, including in instances in which any delay might pose a risk to the life or safety of the patient.
(c) The notice and statement required pursuant to subsections (a) and (b), respectively, shall be provided directly to the patient or the patient's authorized representative, as applicable; provided that if the health care provider is unable to comply with this requirement, the health care provider shall provide the notice or statement in a manner that is reasonably calculated to ensure that the patient or the patient's authorized representative, as applicable, receives the notice or statement.
Pending 2028-07-01
H-01.6
HRS § 321-__ (Consequential decisions; review and validation by qualified oversight personnel)(a)-(c)
Plain Language
Health care providers using AI to make or substantially factor into consequential decisions must designate and maintain AI oversight personnel. This person must be a natural person with qualifications, experience, and expertise to effectively evaluate AI outputs in health care — and may be a third-party contractor. The oversight person must both (1) monitor the provider's AI systems on an ongoing basis and (2) before any AI output is used in a consequential decision, affirmatively review, evaluate, and then validate or override the output. This is a mandatory human-in-the-loop requirement: no AI output may be acted upon for a consequential decision without prior human review and an affirmative validation or override decision. The Department of Health will adopt rules specifying required qualifications for oversight personnel.
(a) Any health care provider that uses an artificial intelligence system to make, or be a substantial factor in making, a consequential decision shall maintain an artificial intelligence oversight personnel.
(b) The artificial intelligence oversight personnel:
(1) Shall be a natural person;
(2) Shall have the qualifications, experience, and expertise necessary to effectively evaluate outputs, including but not limited to any information, data, assumptions, predictions, scoring, recommendations, decisions, or conclusions generated by artificial intelligence systems in the field of health care; and
(3) May be retained by contracting with a third-party.
(c) The artificial intelligence oversight personnel shall:
(1) Monitor the artificial intelligence systems used by the health care provider; and
(2) Before the health care provider uses an output generated by an artificial intelligence system to make, or be a substantial factor in making, a consequential decision:
(A) Review and evaluate the output; and
(B) Validate or override the output.
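The review-and-validation requirement quoted above is, in effect, a precondition on using any AI output: no output feeds a consequential decision until the oversight person has reviewed it and either validated or overridden it. The sketch below illustrates that gate; the enum, names, and exception are assumptions rather than statutory terms.

```python
from enum import Enum
from typing import Any

class OversightOutcome(Enum):
    VALIDATED = "validated"
    OVERRIDDEN = "overridden"

def gated_output(ai_output: Any, review: OversightOutcome | None) -> Any | None:
    """Release an AI output for use in a consequential decision only after oversight review."""
    if review is None:
        # No review on record: the output may not be used at all.
        raise RuntimeError("oversight review required before the output can be used")
    if review is OversightOutcome.OVERRIDDEN:
        # The oversight person overrode the output; it must not drive the decision.
        return None
    return ai_output  # validated output may inform the decision
```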
Pending 2026-07-01
H-01.3
Iowa Code § 91F.2(1)-(3)
Plain Language
Employers must provide a plain-language, stand-alone written notice to any employee (or their authorized representative) who will foreseeably be directly affected by an automated decision system used for non-hiring employment decisions. The notice must be provided at least 30 days before first deploying an ADS, by January 1, 2027 for systems already in use, or within 30 days of hiring a new employee. The notice must describe the types of decisions affected, the data categories and sources used, key parameters that disproportionately affect output, the ADS vendor, any quotas, the employee's right to access and correct their data, and the employer's anti-retaliation obligations. The notice must be delivered in the same language used for routine workplace communications.
1. An employer shall provide a written notice that an automated decision system is in use for the purpose of making employment-related decisions, other than hiring decisions, at the workplace to an employee who will foreseeably be directly affected by the automated decision system, or the employee's authorized representative. The employer shall provide the notice by the following dates: a. At least thirty days before an automated decision system is first deployed by the employer. b. If the employer is using an automated decision system to assist in making employment-related decisions as of the effective date of this Act, no later than January 1, 2027. c. To a new employee within thirty days of hiring the employee. 2. A notice provided pursuant to subsection 1 shall contain all of the following information: a. The type of employment-related decisions potentially affected by the automated decision system. b. A general description of the categories of employee-input data the automated decision system will use, the sources of employee input data, and how employee input data will be collected. c. Any key parameters known to disproportionately affect the output of the automated decision system. d. The individuals, vendors, or entities that created the automated decision system. e. If applicable, a description of each quota set or measured by an automated decision system to which the employee is subject, including the quantified number of tasks to be performed or products to be produced, and any potential adverse employment action that could result from failure to meet the quota, as well as whether those quotas are subject to change and if any notice is given of changes in quotas. f. A description of the employee's right to access and correct the employee's data used by the automated decision system. g. That the employer is prohibited from retaliating against employees for exercising the rights provided in this chapter. 3. A written notice required by subsection 1 shall be written in plain language as a separate, stand-alone communication. The notice shall be in the language in which routine communications and other information are provided to employees. The notice shall be provided via a simple and easy-to-use method, including but not limited to an email, electronic link, or other written format.
Pending 2026-07-01
H-01.3
Iowa Code § 91F.2(4)
Plain Language
When an employer uses an ADS in hiring, it must notify each applicant upon receiving their application that an ADS is part of the hiring process. This notice may be provided via an automatic reply to the application or disclosed in the job posting itself. Unlike the more detailed advance notice required for current employees under § 91F.2(1)-(3), the applicant notification is simpler — it need only disclose that an ADS is used in hiring decisions.
4. If an employer will use an automated decision system in making hiring decisions for a position, the employer shall notify an applicant for the position, upon receiving the application, that the employer utilizes an automated decision system when making hiring decisions. The employer may make the notification using an automatic reply mechanism or on a job posting.
Pending 2026-07-01
H-01.6
Iowa Code § 91F.3(1)(e), (2)
Plain Language
Employers may never rely solely on an ADS for discipline, termination, or deactivation decisions — a human must always be involved. When the ADS output is the primary basis for such a decision, the employer must use a human reviewer who reviews both the ADS output and any other relevant information, which may include supervisory evaluations, personnel files, employee work product, peer reviews, and witness interviews (including online customer reviews). The human reviewer must affirmatively compile and review this supplementary information, not merely rubber-stamp the ADS output.
e. Rely solely on an automated decision system when making a discipline, termination, or deactivation decision. 2. When an employer relies primarily on output from an automated decision system to make a discipline, termination, or deactivation decision, the employer shall use a human reviewer to review the automated decision system output and compile and review other information that is relevant to the decision, if any. For purposes of this subsection, "other information" may include but is not limited to any of the following: a. Supervisory or managerial evaluations. b. Personnel files. c. Work product of employees. d. Peer reviews. e. Witness interviews, which may include relevant online customer reviews.
Pending 2026-07-01
H-01.1
Iowa Code § 91F.4(1)-(3)
Plain Language
When an employer primarily relies on ADS output to make a discipline, termination, or deactivation decision, it must provide the affected employee with a written notice at the time the decision is communicated. The notice must identify a contact person, state that an ADS was used, inform the employee of their right to request a copy of their data, and state that retaliation for exercising rights under the chapter is prohibited. The notice must be a separate, stand-alone plain-language communication delivered in the employee's routine communication language via an accessible method such as email.
1. An employer that primarily relied on an automated decision system to make a discipline, termination, or deactivation decision shall provide the affected employee with a written notice at the time the employer informs the employee of the decision. 2. A notice provided pursuant to subsection 1 shall contain all of the following information: a. The individual to contact for more information about the decision. b. That the employer used an automated decision system to assist the employer in one or more discipline, termination, or deactivation decisions with respect to the employee. c. That the employee has the right to request a copy of the employee's data used by the automated decision system. d. That the employer is prohibited from retaliating against the employee for exercising the rights provided in this chapter. 3. A written notice required by subsection 1 shall be written in plain language as a separate, stand-alone communication. The notice shall be in the language in which routine communications and other information are provided to employees. The notice shall be provided via a simple and easy-to-use method, including but not limited to an email, electronic link, or other written format.
Pending 2027-01-01
H-01.4, H-01.5
Iowa Code § 514F.8A(4) (new)
Plain Language
When a provider or covered person appeals a prior authorization denial or downgrade, the appeal must be conducted by a qualified reviewer or clinical peer (matched to the requesting provider type) who was not involved in the initial adverse determination. The appeal reviewer must consider the known clinical aspects of the services under review, including the covered person's relevant medical records and any medical literature the provider submits. This creates a substantive, individualized review obligation — not merely a procedural rubber stamp — and ensures independence from the initial decision-maker.
4. a. If a utilization review organization's decision to deny or downgrade a request for prior authorization is appealed by the requesting health care provider or covered person, the appeal shall be conducted by either of the following: (1) A qualified reviewer, if the health care provider requesting prior authorization is a physician. (2) A clinical peer, if the health care provider requesting prior authorization is not a physician. b. A qualified reviewer or clinical peer involved in the initial denial or downgrade determination of a request for prior authorization that is the subject of an appeal shall not conduct the appeal. c. When conducting an appeal of a request for prior authorization, the qualified reviewer or clinical peer shall consider the known clinical aspects of the health care services under review, including but not limited to medical records relevant to the covered person's medical condition that is the subject of the health care services for which prior authorization is requested, and any relevant medical literature submitted by the health care provider as part of the appeal.
Pending 2026-01-01
H-01.6
Section 10(a)
Plain Language
Public employers may not use, procure, or acquire any automated decision-making system for functions related to public assistance administration, employee rights, civil liberties, safety, or welfare without meaningful and continuing human review. The human reviewer must understand the system's risks and limitations, be trained on the system, have actual authority to intervene and override outputs (including rejecting uncorroborated outputs), and have adequate time and resources. This is not a one-time gate — human review must be continuing throughout the system's operation. The obligation covers both direct use and indirect use through contractors and subcontractors.
(a) An employer shall not use or apply, or authorize any procurement, purchase, or acquisition of any service or system using or relying on any automated decision-making system, directly or indirectly, without meaningful and continuing human review when performing any function that: (1) is related to the administration of any public assistance program; (2) will have an adverse impact on the rights, civil liberties, safety, or welfare of any employee in this State; or (3) affects any statutorily or constitutionally provided rights of an employee.
Pending 2026-01-01
H-01.3, H-01.4, H-01.5
Section 10(b)
Plain Language
An employer may not use an automated decision-making system, directly or indirectly, for any covered function unless it: (1) notifies the affected employee, no later than the time the decision is issued, that the decision was made using an automated decision-making system; (2) provides an appeals process for employees directly impacted by such decisions; and (3) provides the opportunity for an alternative review by an individual working for or on behalf of the employer, independent of the automated system. All three requirements are prerequisites to use. Because the alternative review must be conducted by a human independent of the automated system, the reviewer cannot simply rubber-stamp the system's output.
(b) An employer shall not use or apply any automated decision-making system, directly or indirectly, to perform any function described in subsection (a) without providing: (1) a notice to any affected employee no later than the time a decision is issued to that employee that a decision concerning the employee was made using an automated decision-making system; (2) an appeals process for decisions made by automated decision-making system in which an employee is impacted as a direct result of the use of the automated decision-making system; and (3) the opportunity for an affected employee to have an appropriate alternative review, by an individual working for or on behalf of the employer with respect to the decision, independent of the automated decision-making system.
Pending 2026-01-01
Section 10(d)
Plain Language
The deployment of an automated decision-making system must not diminish existing employee rights under collective bargaining agreements or alter existing representational or bargaining relationships between employers and labor organizations. This is a preservation clause — it creates no new obligation but confirms that AI adoption cannot be used to circumvent existing labor agreements.
(d) The use of an automated decision-making system shall not affect: (1) existing rights of employees covered by a collective bargaining agreement; or (2) existing representational relationships among labor organizations or bargaining relationships between an employer and a labor organization.
Pending 2027-01-01
H-01.3
Section 15(a)
Plain Language
Before or at the time an automated decision tool is used to make a consequential decision, the deployer must notify the affected individual that an automated tool is being used. The notification must include: the tool's purpose, the deployer's contact information, and a plain-language description of how the automated and human components work together to inform the decision. This is a broad pre-decision notice requirement covering all consequential decision domains — employment, education, housing, healthcare, financial services, criminal justice, and more.
(a) A deployer shall, at or before the time an automated decision tool is used to make a consequential decision, notify any natural person who is the subject of the consequential decision that an automated decision tool is being used to make, or be a controlling factor in making, the consequential decision. A deployer shall provide to a natural person notified under this subsection all of the following: (1) a statement of the purpose of the automated decision tool; (2) the contact information for the deployer; and (3) a plain language description of the automated decision tool that includes a description of any human components and how any automated component is used to inform a consequential decision.
Pending 2027-01-01
H-01.4
Section 15(b)
Plain Language
When a consequential decision is made solely by an automated decision tool — with no human involvement — the deployer must, if technically feasible, honor a person's request to opt out of the automated process and be subject to an alternative selection process or accommodation. The deployer may request identifying information to locate the person and the relevant decision; if the person declines to provide that information, the opt-out obligation does not apply. Note the two conditions: (1) the decision must be made solely by the tool, and (2) the alternative must be technically feasible. Decisions where a human plays any role do not trigger this opt-out right.
(b) If a consequential decision is made solely based on the output of an automated decision tool, a deployer shall, if technically feasible, accommodate a natural person's request to not be subject to the automated decision tool and to be subject to an alternative selection process or accommodation. After a request is made under this subsection, a deployer may reasonably request, collect, and process information from a natural person for the purposes of identifying the person and the associated consequential decision. If the person does not provide that information, the deployer shall not be obligated to provide an alternative selection process or accommodation.
Pending 2026-01-01
H-01.4
225 ILCS 60/67(b)(2)
Plain Language
Every AI-generated patient communication about clinical information must include clear instructions for how the patient can contact a human health care provider or other appropriate staff member. This is a standalone requirement that applies alongside the AI-generation disclaimer and ensures patients always have an accessible path to a human. The instructions must be included in every covered communication — the bill does not specify format requirements for this element beyond 'clear instructions.'
(2) Clear instructions describing how a patient may contact a human health care provider, employee of the health facility, clinic, physician's office, or office of a group provider, or other appropriate person.
Pending 2026-01-01
H-01.4
Student Educational Technologies Rights Act § 15(a)(2)
Plain Language
Students and their parents have the right to request that a human teacher review any grade that was scored automatically or generated by AI. This creates a right to human review of automated educational decisions. Schools must honor such requests, though the statute does not specify a timeframe for completing the review.
It is the policy of this State that a student and the student's parent have the right to: (2) request a human teacher review any automated scored grade or scored grade generated by artificial intelligence;
Pending 2026-07-01
H-01.6
IC 22-5-10.4-10(1)
Plain Language
Employers are categorically prohibited from relying exclusively on an automated decision system — with no human involvement — to make any employment-related decision affecting a covered individual. This is an absolute prohibition: no amount of predeployment testing, disclosure, or documentation can cure a fully automated employment decision. Every employment decision using an automated decision system must include meaningful human involvement.
An employer may not: (1) rely exclusively on an automated decision system in making an employment related decision with respect to a covered individual;
Pending 2026-07-01
H-01.6
IC 22-5-10.4-10(2)(E)
Plain Language
As a condition of using automated decision system output in any employment decision, the employer must have a human with appropriate and relevant experience independently corroborate the output through meaningful oversight. This is not a rubber-stamp review — the human must have subject-matter expertise relevant to the employment decision and must exercise independent judgment. The statute separately requires that the appeal reviewer (Section 10(2)(G)(ii)) be a different human than the one performing corroboration, creating a two-person minimum for human oversight.
the employer independently corroborates, via meaningful oversight by a human with appropriate and relevant experience, the automated decision system output;
Pending 2026-07-01
H-01.1H-01.2
IC 22-5-10.4-10(2)(F)
Plain Language
Within seven days after making an employment-related decision using an automated decision system output, the employer must provide the affected covered individual — at no cost — with comprehensive, plain-language documentation covering: (1) a description of the automated decision system, (2) a plain-language description and explanation of the input data used, plus a machine-readable copy of that data, (3) how the output was used in the decision, and (4) the reasoning for using the output. This is an individualized post-decision disclosure — not a general policy notice. The documentation must be 'full, accessible, and meaningful,' which likely requires more than boilerplate language.
not later than seven (7) days after making the employment related decision, the employer provides full, accessible, and meaningful documentation in plain language and at no cost to the covered individual on the automated decision system output, including: (i) a description of the automated decision system used to generate the automated decision system output; (ii) a description and explanation, in plain language, of the input data to the automated decision system used to generate the automated decision system output and a machine readable copy of the data; (iii) a description and explanation of how the automated decision system output was used in making the employment related decision; and (iv) the reasoning for the use of the automated decision system output in the employment related decision;
Pending 2026-07-01
H-01.4H-01.5
IC 22-5-10.4-10(2)(G)
Plain Language
After receiving the post-decision documentation, the covered individual must be allowed to (1) dispute the automated decision system output itself to a qualified human, through a process that is accessible, equitable, and not unreasonably burdensome, and (2) separately appeal the employment-related decision to a different qualified human — one who was not the person who corroborated the output under Section 10(2)(E). This creates two distinct rights: a challenge to the AI output and an appeal of the ultimate decision, with the appeal reviewer required to be independent from the initial corroboration step.
the employer allows the covered individual to, after receiving the documentation described in clause (F): (i) dispute, in a manner that is accessible, equitable, and does not pose an unreasonable burden on the covered individual, the automated decision system output to a human with appropriate and relevant experience; and (ii) appeal the employment related decision to a human with appropriate and relevant experience who is not the human for purposes of the corroboration under clause (E).
Pending 2026-07-01
H-01.3
IC 22-5-10.4-11(a)-(c)
Plain Language
Employers must provide a comprehensive advance disclosure to every covered individual describing: the fact that automated decision system outputs are or will be used, a detailed description of the system (data types collected, characteristics measured, job-relatedness of those characteristics, measurement methodology, and plain-language interpretation guidance), the identity of the system operator, how the output factors into employment decisions, and how to dispute or appeal. For employees hired on or before July 1, 2026, the disclosure must be provided by August 1, 2026. For individuals hired after July 1, 2026 — including candidates — the disclosure must be provided before hiring. Any significant changes to the disclosed information, or significant new information becoming available, triggers a 30-day update obligation. This is a pre-decision disclosure obligation distinct from the post-decision documentation required under Section 10(2)(F).
Sec. 11. (a) An employer that uses or intends to use an automated decision system output in making an employment related decision with respect to a covered individual shall, in accordance with subsections (b) and (c), disclose to the covered individual: (1) that the employer uses or intends to use an automated decision system output in making an employment related decision; (2) a description and explanation of the automated decision system used or intended to be used to generate the automated decision system output, including: (A) the types of data collected or intended to be collected as inputs to the automated decision system and the circumstances of the collection; (B) the characteristics that the automated decision system measures or is intended to measure, such as the knowledge, skills, or abilities of the covered individual; (C) how the characteristics relate or would relate to any function required for the work or potential work of the covered individual; (D) how the system measures or is intended to measure the characteristics; and (E) how the covered individual can interpret the automated decision system output in plain language; (3) the identity of the covered individual or entity that operates the automated decision system that provides the automated decision system output; (4) how the employer uses or intends to use the automated decision system output in making the employment related decision; and (5) how the covered individual may dispute or appeal an employment related decision made with respect to the covered individual using an automated decision system output. (b) An employer shall provide the disclosures required by subsection (a) to a covered individual as follows: (1) In the case of a covered individual who was hired on or before July 1, 2026, the disclosure must be provided to the covered individual not later than August 1, 2026. (2) In the case of a covered individual who is hired after July 1, 2026, the disclosure must be provided to the covered individual before hiring. (c) Not later than thirty (30) days after: (1) any information provided by an employer to a covered individual through a disclosure required by subsection (a) significantly changes; or (2) any significant new information required to be provided in the disclosure becomes available; the employer shall provide the covered individual with an updated disclosure.
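The disclosure timing in Section 11 reduces to straightforward date arithmetic that compliance teams sometimes encode directly. The sketch below is illustrative only and not part of the bill; the function and constant names are hypothetical, and reading "before hiring" as "no later than the day before the hire date" is an assumption.

```python
# Illustrative sketch only -- not bill text. Names are hypothetical; the
# cutoff dates follow the summary of IC 22-5-10.4-11(b)-(c) above.
from datetime import date, timedelta

CUTOFF = date(2026, 7, 1)           # hired on or before this date -> legacy rule
LEGACY_DEADLINE = date(2026, 8, 1)  # disclosure due by August 1, 2026
UPDATE_WINDOW_DAYS = 30             # updated disclosure due within 30 days

def initial_disclosure_deadline(hire_date: date) -> date:
    """Latest date the Section 11(a) disclosure may be provided (assumed reading)."""
    if hire_date <= CUTOFF:
        return LEGACY_DEADLINE
    # Hired after July 1, 2026: disclosure must come before hiring, so the
    # day before the hire date is treated here as the latest compliant date.
    return hire_date - timedelta(days=1)

def update_deadline(trigger_date: date) -> date:
    """Latest date for an updated disclosure after a significant change or
    significant new information becomes available (Section 11(c))."""
    return trigger_date + timedelta(days=UPDATE_WINDOW_DAYS)

print(initial_disclosure_deadline(date(2026, 5, 1)))   # 2026-08-01
print(initial_disclosure_deadline(date(2026, 9, 15)))  # 2026-09-14
print(update_deadline(date(2026, 10, 1)))              # 2026-10-31
```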
Pending 2025-08-01
H-01.3
R.S. 23:972(A)-(E)
Plain Language
Employers must provide detailed written pre-deployment notice to any worker (or authorized representative) who will foreseeably be directly affected by an ADS used for employment-related decisions other than hiring. Notice must be given at least 30 days before an ADS is first deployed, to workers already subject to an ADS in use when the law takes effect (the bill lists this scenario but does not state a deadline for it), and to new hires within 30 days of their start date. The notice must be plain-language, standalone, in the workers' routine communication language, and delivered via an accessible method. For hiring-specific ADS, employers must notify job applicants upon receiving their application — this can be done via auto-reply or on the job posting. The notice must include eight categories of information: the decision types affected, data categories and sources, key parameters known to disproportionately affect output, the identity of the ADS creator, quotas if applicable, data access and correction rights, the anti-retaliation protection, and appeal rights. Employers must also maintain an updated inventory of all ADS currently in use.
A. An employer shall provide written notice that an ADS, for the purpose of making employment-related decisions, not including hiring, is in use at the workplace to a worker who will foreseeably be directly affected by the ADS, or his authorized representative. The notice shall be provided at any of the following time periods: (1) At least thirty days before an ADS is first deployed by the employer. (2) If the employer is using an ADS to assist in making employment-related decisions at the time this Part takes effect. (3) To a new worker within thirty days of his hiring date. B. An employer shall maintain an updated list of all ADS currently in use. C. A written notice required by this Section shall meet all of the following requirements: (1) Written in plain language as a separate, standalone communication. (2) In the language in which routine communications and other information are provided to workers. (3) Provided via a simple and easy-to-use method, including but not limited to an email, hyperlink, or other written format. D. An employer who uses an ADS to make hiring decisions shall notify a job applicant upon receiving his application that the employer utilizes an ADS for hiring decisions. Notifications may be made using an automatic reply mechanism or on the job posting. E. A notice issued pursuant to Subsection A of this Section shall contain all of the following information: (1) The type of employment-related decisions potentially affected by the ADS. (2) A general description of the categories of worker input data the ADS will use, the sources of worker input data, and how worker input data will be collected. (3) Any key parameters known to disproportionately affect the output of the ADS. (4) The individuals, vendors, or entities that created the ADS. (5) If applicable, a description of each quota set or measure by an ADS that the worker is subject to, including the quantified number of tasks to be performed or products to be produced, and any potential adverse employment action that could result from failure to meet the quota, as well as whether those quotas are subject to change and if any notice is given of changes in quotas. (6) A description of the worker's right to access and correct the worker's own data used by the ADS. (7) That the employer shall be prohibited from retaliating against a worker who exercises his rights as provided in Paragraph (6) of this Subsection. (8) That the worker has a right to appeal any decision made with the assistance of an ADS and the process to appeal that decision.
Pending 2025-08-01
H-01.6
R.S. 23:973(C)(1)-(3)
Plain Language
Employers may never rely solely on an ADS for discipline, termination, or deactivation decisions — a human must always be involved. For any employment-related decision assisted by ADS output, the employer or vendor must: (1) ensure accuracy of the ADS output, and (2) assign a designated internal reviewer to independently investigate and compile corroborating evidence. The reviewer must have sufficient authority, expertise in the ADS, and training to make a well-informed decision, and must be protected from retaliation. If the employer cannot corroborate the ADS output, or if the reviewer concludes the output is inaccurate, incomplete, or misleading, the employer may not rely on the ADS for that decision. This goes beyond a right to request human review — it is a mandatory human review and corroboration requirement before any ADS-assisted employment decision.
C.(1) An employer shall not rely solely on an ADS when making a discipline, termination, or deactivation decision. (2) If an employer or a vendor utilizes an ADS output to assist in making an employment-related decision, the employer or vendor shall do all of the following: (a) Ensure the accuracy of the ADS output. (b)(i) Use a designated internal reviewer to conduct a separate investigation and compile corroborating information for the decision. This information may include but is not limited to supervisory or managerial evaluations, personnel files, employee work products, or peer reviews. (ii) The designated internal reviewer required by this Subparagraph shall have all of the following: (aa) Sufficient authority, discretion, resources, and time to corroborate the ADS output. (bb) Sufficient expertise in the operation of similar systems and a sufficient understanding of the ADS in question to interpret its outputs as well as results of relevant impact assessments. (cc) Education, training, or experience sufficient to allow the reviewer to make a well-informed decision. (iii) The designated internal reviewer shall be protected from retaliation for exercising his responsibilities. (3) An employer shall not rely on an ADS to make an employment-related decision if the employer cannot corroborate the ADS output or the human reviewer has concluded that the ADS output is inaccurate, incomplete, or misleading.
Pending 2025-08-01
H-01.1
R.S. 23:974(A)-(B)
Plain Language
When an employer primarily relies on an ADS to make a discipline, termination, or deactivation decision, the employer must provide the affected worker with a post-decision written notice at the time the decision is made. The notice must be plain-language, standalone, in the worker's routine language, and accessible. It must inform the worker of: the human contact for more information and data requests, that an ADS was used, the worker's right to request a copy of their data, the anti-retaliation prohibition, and the worker's right to appeal under R.S. 23:975. This is a post-decision notice — distinct from the pre-deployment notice in §972.
A. An employer that primarily relies on an ADS to make a discipline, termination, or deactivation decision shall provide the affected worker with written notice at the time such decision is made. The notice shall meet all of the following requirements: (1) Written in plain language as a separate, standalone communication. (2) In the language in which routine communications and other information are provided to workers. (3) Provided via a simple and easy-to-use method, including but not limited to an email, hyperlink, or other written format. B. A notice issued pursuant to Subsection A of this Section shall contain all of the following information: (1) The human individual to contact for more information about the decision and the ability to request a copy of the worker's own worker data relied on in the decision. (2) That the employer used an ADS to assist the employer in any discipline, termination, or deactivation decisions with respect to the worker. (3) That the worker has the right to request a copy of the worker's data used by the ADS. (4) That the employer is prohibited from retaliating against the worker for exercising his right pursuant to this Part. (5) The worker's right to appeal the decision as provided in R.S. 23:975.
Pending 2025-08-01
H-01.4H-01.5
R.S. 23:975(A)-(C)
Plain Language
Workers have a statutory right to appeal any ADS-assisted employment-related decision within 30 days of being notified of it. Employers or vendors must provide an appeal form (or a link to an electronic form) stating that right, through which the worker can request access to input/output data, request corroborating evidence, state reasons for the appeal, and designate an authorized representative. The employer or vendor must respond within 14 business days by assigning a human reviewer who was not involved in the original decision, has authority to overturn it, and can objectively evaluate all evidence. The written response must describe the appeal result and its reasons. If the reviewer determines the decision should be overturned, the employer or vendor must rectify it within 21 business days. Both employers and vendors are directly obligated under this section.
A. If an employer has used an ADS to make an employment-related decision about a worker, the affected worker has the right to appeal that decision, request a human review, request submission of additional information, and correct any errors in the data used by the ADS. B. An employer or a vendor that used an ADS to make an employment-related decision shall provide an affected worker with a form or a hyperlink to an electronic form that provides that the worker has a right to appeal the decision within thirty days from the date that the worker was notified. The appeal form provided to an affected worker shall include all of the following: (1) The option to request access to the data used as input to or as output from the ADS. (2) The option to request access to any corroborating or supporting evidence provided by a human reviewer to verify output from the ADS. (3) The worker's reason or justification for an appeal and any evidence to support the appeal. (4) A designation for an authorized representative who can also access the data. C.(1) An employer or a vendor shall respond to an appeal within fourteen business days. (2)(a)(i) In responding to an appeal, the employer or vendor shall designate a human reviewer who shall meet all of the following requirements: (aa) He can objectively evaluate all evidence. (bb) He has sufficient authority, discretion, and resources to evaluate the decision. (cc) He has the authority to overturn the decision. (ii) The employer or vendor shall not designate a person who was involved in the decision that the worker is appealing. (b) The response provided to the worker shall be composed on a clear, written document which describes the result of the appeal and the reasons for that result. (3) If the human reviewer determines that the employment-related decision should be overturned, the employer or vendor shall rectify the decision within twenty-one business days.
Pending 2027-01-01
H-01.4H-01.5
R.S. 22:1260.49(E)(1)-(3)
Plain Language
Insureds have an express right to appeal any determination they learn was made with an AI or automated decision system recommendation. Critically, any adverse determination where AI materially contributed is presumed invalid — the insurer bears the burden of demonstrating that the determination was independently reached through documented clinical judgment without reliance on algorithmic output. If an adverse determination is appealed on AI grounds, the insurer is prohibited from using AI in any subsequent review of that claim. This creates a strong rebuttable presumption against AI-influenced adverse determinations and a categorical ban on AI use in the re-review process.
E.(1) Any insured has the right to appeal a determination that he has learned was made with a recommendation from an artificial intelligence or an automated decision system. (2) Any adverse determination in which artificial intelligence or an automated decision system materially contributed to the determination shall be presumed invalid unless the health insurance issuer demonstrates that the determination was independently reached through documented clinical judgment without reliance upon algorithmic output. (3) If an adverse determination is appealed on the basis of the use of an artificial intelligence or an automated decision system, the insurer shall not use an artificial intelligence or an automated decision system in any subsequent review of the claim.
Pending 2027-01-01
H-01.2
R.S. 22:2401(4)
Plain Language
As part of the appeals process for coverage determinations, health insurance issuers must allow covered persons to review and obtain copies of all documents relevant to any AI or automated decision system used in the utilization review or determination process. This is a right to access AI-related documentation that supplements the existing appeals process requirements, giving insureds visibility into the AI tools that influenced their coverage decisions.
(4) Allow covered persons, upon request, to review and have copies of all documents relevant to any artificial intelligence or an automated decision system as defined in R.S. 22:1260.49(A)(1) used in the utilization review or determination process.
Pre-filed 2025-07-07
H-01.1H-01.3H-01.5
Chapter 93M, Section 3(c)
Plain Language
When an AI system materially influences a consequential decision about a consumer, deployers must: (1) notify the consumer that AI was involved, (2) explain the system's purpose and how it influenced the specific decision, and (3) provide a process for the consumer to appeal or correct adverse decisions. The notification trigger is 'material influence' on a consequential decision — meaning the AI system determines or heavily weighs inputs that directly affect the outcome. This bundles three distinct consumer rights: pre/at-decision notice, explanation, and appeal. The appeal process must cover both adverse decisions (reversal) and corrections (data accuracy).
(c) Consumer Protections: Deployers must: (1) Notify consumers when an AI system materially influences a consequential decision; (2) Provide consumers with: (i) The purpose of the system; (ii) An explanation of how the system influenced the decision; (iii) A process to appeal or correct adverse decisions.
Pre-filed 2025-07-07
H-01.3
Section 4(c)
Plain Language
Consumers must be notified when AI systems are targeting or influencing them in ways that materially impact their decisions, and when algorithms are used to determine pricing, eligibility, or access to services. This is a real-time notification obligation separate from the general public website disclosure in Section 4(a)-(b). It applies to any corporation using AI for targeting or behavioral influence — not limited to high-risk AI systems. The pricing and eligibility trigger is notable: algorithmic pricing and eligibility determinations always require consumer notification regardless of whether they rise to the level of a 'consequential decision' under Section 1(4).
(c) Consumer Notification: Consumers must be notified when: (1) They are being targeted or influenced by AI systems in a way that materially impacts their decisions; (2) Algorithms are used to determine pricing, eligibility, or access to services.
Pre-filed 2025-01-17
Ch. 110I, § 4(a)
Plain Language
Covered entities are categorically prohibited from using biometric data to make or assist in making decisions that produce legal effects or similarly significant effects on end users. This is a blanket ban — not a requirement for human oversight or impact assessment — covering a broad range of consequential decisions including financial services, housing, insurance, education, criminal justice, employment, healthcare, and access to basic necessities. Unlike most automated decision-making statutes that require safeguards, this provision prohibits the use of biometric data in such decisions entirely.
(a) Covered entities shall not use biometric data to help make decisions that produce legal effects or similarly significant effects concerning end users. Decisions that include legal effects or similarly significant effects concerning end users include, without limitation, denial or degradation of consequential services or support, such as financial or lending services, housing, insurance, educational enrollment, criminal justice, employment opportunities, health care services, and access to basic necessities, such as food and water.
Pre-filed 2025-07-17
H-01.1H-01.2H-01.3H-01.4H-01.5
Ch. 93M § 3(d)
Plain Language
Before making a consequential decision about a consumer using a high-risk AI system, the deployer must: (1) notify the consumer that AI will be used, (2) disclose the system's purpose, the nature of the decision, deployer contact information, and a plain-language description of the system, and (3) inform the consumer about opt-out rights for profiling. If the decision is adverse, the deployer must additionally provide: the principal reasons for the decision (including the AI system's contribution, data types used, and data sources), an opportunity to correct incorrect personal data, and an appeal mechanism that includes human review where technically feasible. All notices must be provided directly, in plain language, in all languages the deployer uses in its business, and in disability-accessible formats. If direct delivery is impossible, a method reasonably calculated to reach the consumer is acceptable.
(d) (1) Not later than 6 months after the effective date of this act, and no later than the time that a deployer deploys a high-risk artificial intelligence system to make, or be a substantial factor in making, a consequential decision concerning a consumer, the deployer shall: (i) notify the consumer that the deployer has deployed a high-risk artificial intelligence system to make, or be a substantial factor in making, a consequential decision before the decision is made; (ii) provide to the consumer a statement disclosing the purpose of the high-risk artificial intelligence system and the nature of the consequential decision; the contact information for the deployer; a description, in plain language, of the high-risk artificial intelligence system; and instructions on how to access the statement required by subsection (5)(a) of this section; and (iii) provide to the consumer information, if applicable, regarding the consumer's right to opt out of the processing of personal data concerning the consumer for purposes of profiling in furtherance of decisions that produce legal or similarly significant effects concerning the consumer. (2) Not later than 6 months after the effective date of this act, a deployer that has deployed a high-risk artificial intelligence system to make, or be a substantial factor in making, a consequential decision concerning a consumer shall, if the consequential decision is adverse to the consumer, provide to the consumer: (i) a statement disclosing the principal reason or reasons for the consequential decision, including: (A) the degree to which, and manner in which, the high-risk artificial intelligence system contributed to the consequential decision; (B) the type of data that was processed by the high-risk artificial intelligence system in making the consequential decision; and (C) the source or sources of the data described in subsection (d)(2)(i)(B) of this section; (ii) an opportunity to correct any incorrect personal data that the high-risk artificial intelligence system processed in making, or as a substantial factor in making, the consequential decision; and (iii) an opportunity to appeal an adverse consequential decision concerning the consumer arising from the deployment of a high-risk artificial intelligence system, which appeal must, if technically feasible, allow for human review unless providing the opportunity for appeal is not in the best interest of the consumer, including in instances in which any delay might pose a risk to the life or safety of such consumer. (3) (i) except as provided in subsection (d)(3)(ii) of this section, a deployer shall provide the notice, statement, contact information, and description required by subsections (c)(1) and (d)(2) of this section: (A) directly to the consumer; (B) in plain language; (C) in all languages in which the deployer, in the ordinary course of the deployer's business, provides contracts, disclaimers, sale announcements, and other information to consumers; and (D) in a format that is accessible to consumers with disabilities. (ii) if the deployer is unable to provide the notice, statement, contact information, and description required by subsections (d)(1) and (d)(2) of this section directly to the consumer, the deployer shall make the notice, statement, contact information, and description available in a manner that is reasonably calculated to ensure that the consumer receives the notice, statement, contact information, and description.
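Because the baseline notice and the adverse-decision addendum carry different required elements, deployers sometimes track them as separate checklists. The sketch below is a hypothetical illustration, not anything prescribed by the bill; the field names are invented shorthand for the elements summarized above.

```python
# Illustrative sketch only -- not bill text. Field names are hypothetical
# shorthand for the Ch. 93M section 3(d) elements summarized above.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class BaselineNotice:
    """Elements owed to every consumer before the decision, per subsection (d)(1)."""
    ai_use_disclosed: bool = False
    purpose_and_decision_nature: Optional[str] = None
    deployer_contact: Optional[str] = None
    plain_language_description: Optional[str] = None
    profiling_opt_out_info_provided: bool = False

@dataclass
class AdverseDecisionAddendum:
    """Additional elements owed when the decision is adverse, per subsection (d)(2)."""
    principal_reasons: Optional[str] = None
    data_types_processed: List[str] = field(default_factory=list)
    data_sources: List[str] = field(default_factory=list)
    correction_opportunity_offered: bool = False
    appeal_with_human_review_offered: bool = False

def missing_items(notice: BaselineNotice,
                  addendum: Optional[AdverseDecisionAddendum] = None) -> List[str]:
    """List the notice elements that are still unmet for a given decision."""
    gaps = [name for name, value in vars(notice).items() if not value]
    if addendum is not None:
        gaps += [name for name, value in vars(addendum).items() if not value]
    return gaps

print(missing_items(BaselineNotice(ai_use_disclosed=True)))
```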
Pre-filed 2025-01-14
H-01.6
Chapter 149B, § 2(h)
Plain Language
When employment decisions (hiring, promotion, discipline, termination, compensation) are based in whole or part on electronically monitored data, an employer may not rely primarily on that data. Three requirements apply: (1) the employer must establish meaningful human oversight — including a designated internal reviewer with expertise, authority to override, and adequate time/resources; (2) a human decision-maker must actually review the monitoring data, verify its accuracy, address pending correction requests, and exercise independent judgment; and (3) the human must consider non-monitoring information such as supervisory evaluations, personnel files, work products, or peer reviews.
(h) An employer shall not rely primarily on employee data collected through electronic monitoring when making hiring, promotion, disciplinary decisions up to and including termination, or compensation decisions. For an employer to satisfy the requirements of this paragraph: (i) An employer shall establish meaningful human oversight of such decisions based in whole or in part on data collected through electronic monitoring. (ii) A human decision-maker must actually review any information collected through electronic monitoring, verify that such information is accurate and up to date, review any pending employee requests to correct erroneous data, and exercise independent judgment in making each such decision; and (iii) The human decision-maker must consider information other than information collected through electronic monitoring when making each such decision, such as but not limited to, supervisory or managerial evaluations, personnel files, employee work products, or peer reviews.
Pre-filed 2025-01-14
H-01.1H-01.2
Chapter 149B, § 2(i)
Plain Language
When an employment decision is based in whole or part on electronically monitored data, the employer must disclose to affected employees — at least 30 days before the decision takes effect — four categories of information: that monitoring data was used, which specific tools were used and how they work, the specific data and derived judgments used in the decision, and any non-monitoring information also used. The 30-day advance notice requirement is unusually long and creates a significant operational constraint, effectively requiring employers to finalize their decision rationale a month before implementation.
(i) When an employer makes a hiring, promotion, termination, disciplinary or compensation decision based in whole or part on data gathered through the use of electronic monitoring, it shall disclose to affected employees no less than thirty days prior to the decision going into effect: (i) that the decision was based in whole or part on data gathered through electronic monitoring; (ii) the specific electronic monitoring tool or tools used to gather such data, how the tools work to gather and analyze the data, and the increments of time in which the data is gathered; (iii) the specific data, and judgments based upon such data, used in the decision-making process; and (iv) any information used in the decision-making process gathered through sources other than electronic monitoring.
Pre-filed 2025-01-14
H-01.3
Chapter 149B, § 4(a)-(b)
Plain Language
Employers using ADS tools to evaluate employees or candidates must provide detailed notice at least 10 business days before use. The notice must cover six categories: that an ADS tool will be used, the qualifications/characteristics assessed and data inputs/outputs, data sources and retention policies, results of the most recent impact assessment including disparate impact findings, how to request an alternative non-ADS process or accommodation, and how to request reevaluation or file a civil complaint. The notice must be in plain language, included in job postings, posted on the employer's website in all regularly used languages, provided directly to candidates in their language, and made accessible for persons with disabilities.
(a) Any employer that uses an automated employment decision tool to assess or evaluate an employee or candidate shall notify employees and candidates subject to the tool no less than ten business days before such use: (i) that an automated employment decision tool will be used in connection with the assessment or evaluation of such employee or candidate; (ii) the job qualifications and characteristics that such automated employment decision tool will assess, what employee or candidate data or attributes the tool will use to conduct that assessment, and what kind of outputs the tool will produce as an evaluation of such employee or candidate; (iii) what employee or candidate data is collected for the automated employment decision tool, the source of such data and the employer's data retention policy. Information pursuant to this section shall not be disclosed where such disclosure would violate local, state, or federal law, or interfere with a law enforcement investigation; (iv) the results of the most recent impact assessment of the automated employment decision tool, including any findings of a disparate impact and associated response from the employer, or information about how to access that information if publicly available; (v) information about how an employee or candidate may request an alternative selection process or accommodation that does not involve the use of an automated employment decision tool and details about that alternative process or accommodation process; and (vi) information about how the employee or candidate may: (A) request reevaluation of the employment decision made by the automated employment decision tool in accordance with section one thousand thirteen of this article; and (B) notification of the employee or candidate's right to file a complaint in a civil court in accordance with section seven of this chapter or otherwise exercise the rights described in this chapter. (b) The notice required by this section shall be: (i) written in clear and plain language; (ii) included in each job posting or advertisement for each position for which the automated employment decision tool will be used; (iii) posted on the employer's website in any language that the employer regularly uses to communicate with employees; (iv) provided directly to each candidate who applies for a position in the language with which that candidate communicates with the employer; (v) made available in formats that are reasonably accessible to and usable by individuals with disabilities; and (vi) otherwise presented in a manner that ensures the notice clearly and effectively communicates the required information to employees.
Pre-filed 2025-01-14
H-01.6
Chapter 149B, § 5(b)-(c)
Plain Language
Employers may not rely primarily on ADS output for consequential employment decisions. Four requirements apply: meaningful human oversight with a qualified internal reviewer (evaluated based on tool complexity, the reviewer's training and experience, and ability to consult experts); actual human review of ADS output with independent judgment; the human must consider non-ADS information; and the employer itself must consider non-ADS information. Additionally, employers may not condition employment consideration on consent to ADS evaluation and may not disadvantage anyone who requests an accommodation. The reviewer competency standard is more detailed than most jurisdictions, providing multi-factor guidance.
(b) An employer shall not rely primarily on output from an automated decision tool when making hiring, promotion, termination, disciplinary, or compensation decisions. For an employer to satisfy the requirements of this paragraph: (i) An employer must establish meaningful human oversight of such decisions based in whole or in part on the output of automated employment decision tools. In determining whether an internal reviewer employs the requisite knowledge and skill to provide meaningful human oversight, relevant factors include the relative complexity and specialized nature of the automated decision tool, the reviewer's general experience, the reviewer's training and experience in the field, the preparation and study the reviewer is able to give the matter and whether it is feasible to refer the matter to, or associate or consult with, an expert with established competence in the field automated decision tools. (ii) A human decision-maker must actually review any output of an automated employment decision tool and exercise independent judgment in making each such decision; (iii) The human decision-maker must consider information other than automated employment decision tool outputs when making each such decision, such as but not limited to supervisory or managerial evaluations, personnel files, employee work products, or peer reviews; and (iv) An employer shall consider information other than automated employment decision tool outputs when making hiring, promotion, termination, disciplinary, or compensation decisions, such as supervisory or managerial evaluations, personnel files, employee work products, or peer reviews. (c) An employer shall not require employees or candidates to consent to the use of an automated employment decision tool in an employment decision in order to be considered for an employment decision, nor shall an employer discipline or disadvantage an employee or candidate for employment as a result of their request for accommodation.
Pending 2026-02-24
H-01.3
Sec. 13(1)-(3)
Plain Language
Employers must display a workplace poster notifying employees of electronic monitoring or automated decision tool use. At least 30 days before implementing such a tool, the employer must provide written notice to all employees and must also include the notice in every job posting, post it on the employer's website, provide it directly to every applicant, and make it available in accessible formats accounting for non-English first languages and disabilities. The notice must include the right to opt out. If a covered individual opts out, the employer may not use the tool to make any employment-related decisions for that individual. The opt-out right is unusually strong — it creates an absolute prohibition on using the tool for any employment decision affecting the opting-out individual, not merely a right to alternative review.
Sec. 13. (1) If an employer uses an electronic monitoring tool or automated decisions tool, the employer must display a poster at the employer's place of business, in a conspicuous place accessible to the employer's employees, that includes, but is not limited to, notice of the use of an electronic monitoring tool or automated decisions tool. (2) Not less than 30 days before an employer implements an electronic monitoring tool or automated decisions tool, the employer shall provide notice, in writing, of the tool's use to all of the employer's employees. The employer shall also include the notice in every job posting, post the notice on the employer's website, provide the notice directly to every applicant, and make the notice available in accessible formats that account for the applicant's first language, if it is not English, and any disability the applicant may have. The notice must provide a covered individual with the ability to opt out of the electronic monitoring tool or automated decisions tool. (3) If a covered individual opts out of the use of an electronic monitoring tool or automated decisions tool under subsection (2), the employer shall not use the electronic monitoring tool or automated decisions tool to make any employment-related decisions for that covered individual.
Pending 2026-08-01
H-01.3
Minn. Stat. § 181.9922, subd. 1(a)-(f), subd. 2
Plain Language
Before deploying any automated decision system for employment-related decisions, employers must provide affected workers (including job applicants), their authorized representatives, and any representing union with a detailed written pre-use notice. For new systems, this notice must come at least 30 days before deployment; for existing systems, by September 1, 2026. The notice must be plain-language, standalone, in the worker's routine communication language, and must describe the system's purpose, data sources, logic, vendors, impact assessment results, a full list of ADS in use, and worker rights. Workers must provide affirmative written consent before being subjected to the ADS, and must be allowed to opt out if reasonable alternatives exist. A copy of each notice must also be filed with the Commissioner of Labor and Industry within ten days. Violations carry $1,000 per violation per day per affected worker.
Subdivision 1. Pre-use notice; provision. (a) An employer must provide a written notice that an automated decision system is in use at the workplace for the purpose of making employment-related decisions, to a worker who will be directly or indirectly affected by the automated decision system, or the worker's authorized representative, and to any union representing workers who could be directly or indirectly affected by the automated decision system. (b) The notice in paragraph (a) must be provided: (1) if the automated decision system is introduced after the effective date of this section, at least 30 days before the introduction of the automated decision system; (2) if the employer is using an existing automated decision system as of the effective date of this section, no later than September 1, 2026; (3) prominently to a job applicant or new worker, before the employer collects the applicant's or worker's personal information that the employer plans to process using the automated decision system; (4) at least 30 days before implementing any significant change to the automated decision system or how the employer is using the automated decision system; and (5) to a union representing workers who will be subject to the automated decision system, on a timeline that provides a meaningful opportunity to bargain over the use, scope, and impact of the automated decision system prior to deployment or modification of the tool. (c) Every time an employer provides a notice under paragraph (a), a copy of that notice must be submitted to the commissioner of labor and industry within ten days of the date the notice was provided to the worker. Copies of notices under paragraph (a) must also be made available to authorized representatives upon request. (d) Notices under paragraph (a) must be: (1) written in plain language as a separate and standalone communication; (2) in the language in which routine communications and other information are provided to workers; and (3) provided using a simple and easy-to-use method, including an email, hyperlink, or other written format. (e) A job applicant or worker must receive the notice required under this section and respond with affirmative written consent before the worker or applicant is subject to an automated decision system. (f) If reasonable alternatives to the use of the automated decision system exist, the worker must be allowed to opt out of being subject to the automated decision system. Subd. 2. Pre-use notice; contents. 
The notice required under subdivision 1, paragraph (a), must contain the following information: (1) a plain-language explanation of the nature, purpose, and scope of the decisions for which the automated decision system will be used, including the specific employment-related decisions potentially affected; (2) the specific category and sources of worker data the automated decision system will use or collect, and how that data was or will be collected; (3) the logic used in the automated decision system, including the key parameters that affect the output of the automated decision system, and the type of outputs the automated decision system will produce; (4) the individuals, vendors, and entities that created the automated decision system and the individuals, vendors, and entities that will run, manage, and interpret the results of the automated decision system output; (5) the job qualifications and characteristics that the automated decision system assesses, what worker data or attributes the system uses to conduct that assessment, and what kind of outputs the system produces as an evaluation of the worker; (6) the results of any impact assessments of the automated decision system, whether performed by the employer or the automated decision system vendor, and how to access that information; (7) an up-to-date list of all automated decision systems the employer is currently using; and (8) a description of the worker's rights under sections 181.9922 to 181.9927.
Pending 2026-08-01
H-01.6
Minn. Stat. § 181.9924, subd. 2(a)-(d)
Plain Language
Employers may never rely solely on an ADS for any employment-related decision — a human must always be involved. When using an ADS in part, the employer must verify the accuracy of the output and assign a designated internal reviewer who conducts an independent investigation and compiles corroborating evidence. The reviewer must have sufficient authority, discretion, expertise, training (including ADS bias awareness and worker rights), and protection from retaliation. Critically, if the reviewer cannot corroborate the ADS output or finds it inaccurate, incomplete, or misleading, the employer must not rely on the ADS for that decision. This is among the most prescriptive human-in-the-loop requirements in U.S. state AI law — it requires affirmative corroboration, not merely rubber-stamp human review.
Subd. 2. Employment-related decisions. (a) An employer must not rely solely on an automated decision system when making an employment-related decision. (b) When an employer relies in part on an automated decision system in making an employment-related decision, the employer must: (1) ensure the accuracy of the automated decision system output; and (2) use a designated internal reviewer to conduct an investigation and compile corroborating information for the decision. This information may include but is not limited to supervisory or managerial evaluations, personnel files, employee work products, or peer reviews. (c) The designated internal reviewer must: (1) have sufficient authority, discretion, resources, and time to corroborate the automated decision system output; (2) have sufficient expertise in the operation of similar systems and a sufficient understanding of the automated decision system in question to interpret the outputs and results of relevant impact assessments; (3) have sufficient education, training, or experience to allow the reviewer to make a well-informed decision, including education about the limitations and biases of automated decision systems and training on workers' rights under sections 181.9922 to 181.9927; and (4) be protected from retaliation for exercising the reviewer's responsibilities. (d) When an employer cannot corroborate the automated decision system output, or the human reviewer has concluded that the automated decision system output is inaccurate, incomplete, or misleading, the employer must not rely on the automated decision system to make the employment-related decision.
Pending 2026-08-01
H-01.1H-01.2
Minn. Stat. § 181.9925, subd. 1(a)-(d), subd. 2(a)-(c)
Plain Language
After using an ADS in an employment decision, the employer must provide the affected worker with a post-decision written notice — at the time of the decision or within 15 business days (whichever is earlier), or at least 30 days before discipline or termination takes effect. The notice must acknowledge ADS use, describe worker rights, provide an appeal form, and state the anti-retaliation prohibition. For repeated quarterly use of the same ADS, a full notice is required for the first use and a summary notice at quarter-end. Workers who request access must receive, within 14 calendar days, a detailed individualized explanation including: the specific data used, all outputs, the rationale for the decision, corroborating evidence, the system's logic as applied to them, key parameters, aggregate comparison statistics, vendor identity, and impact assessments. Vendors must fully assist employers in responding to access requests. This is one of the most detailed post-decision explanation and access regimes in U.S. state AI law.
Subdivision 1. Notice. (a) An employer that has used an automated decision system to make an employment-related decision must provide the affected worker with a written notice: (1) at the time the employer informs the worker of the decision, or no later than 15 business days from the date of the decision, whichever is earlier; or (2) if the decision results in the discipline or termination of the worker, at least 30 days before the discipline or termination takes effect. (b) The employer must provide a notice under paragraph (a) that is: (1) written in plain language as a separate and standalone communication; (2) in the language in which routine communications and other information are provided to workers; and (3) provided using a simple and easy-to-use method, including an email, hyperlink, or other written format. (c) A notice under paragraph (a) must contain the following information: (1) an acknowledgment that the employer used an automated decision system to make one or more employment-related decisions with respect to the worker; (2) a description of the worker's rights under sections 181.9922 to 181.9927; (3) a form or a hyperlink to an electronic form for the worker to file an appeal or request detailed information about the data and automated decision system used in the decision; and (4) that the employer is prohibited from retaliating against the worker for exercising the worker's rights under this section. (d) If an employer uses the same automated decision system in the same way multiple times a quarter, an employer must provide each affected employee: (1) the full notice required by this section for the first use of the automated decision system each quarter; and (2) a second notice at the end of the quarter that provides: (i) the number of times the employer or operator used the automated decision system that quarter; (ii) the dates the employer or operator used the automated decision system that quarter; and (iii) a description of the worker's rights under sections 181.9922 to 181.9927, including the right to access information about each decision. Subd. 2. Right to access. 
(a) When responding to a worker's access request, an employer must provide the following information to the worker: (1) a plain-language explanation of the specific decision for which the employer used the automated decision system; (2) in a simple and easy-to-use format, the specific worker data that the automated decision system used and all specific worker outputs produced by the automated decision system; (3) how the employer used the automated decision system output with respect to the worker, including: (i) the rationale for the decision, including the specific roles the output and human involvement played in the business's decision; (ii) any additional corroborating information or judgments the employer used in addition to the automated decision system output in making the decision; (iii) how the logic of the automated decision system, including its assumptions and limitations, was applied to the worker; (iv) the key parameters or performance metrics that affected the output of the automated decision system with respect to the worker and how those parameters applied to the worker; and (v) the range of possible outputs and aggregate output statistics, to help a worker understand how they compare to other workers; (4) the name of the entity that created the automated decision system and the product name of the automated decision system; and (5) a copy of any completed impact assessments of the automated decision system. (b) An employer must respond to an access request no later than 14 calendar days from the date the employer received the request. (c) A service provider, contractor, or vendor must provide full assistance to the employer in responding to a worker request for access, including any of that worker's input or output data in the service provider, contractor, or vender's possession and any relevant information about the automated decision system.
Pending 2026-08-01
H-01.4, H-01.5
Minn. Stat. § 181.9926(a)-(f)
Plain Language
Workers have a right to appeal any ADS-informed employment decision within 30 days of receiving post-decision notice. The employer must provide an appeal form that allows workers to request access to the underlying data and any corroborating evidence, submit their own evidence, and designate an authorized representative. The employer must respond within five business days by designating an independent human reviewer who was not involved in the original decision, has authority to overturn it, is trained on ADS limitations and worker rights, and objectively evaluates all evidence. The reviewer must produce a written decision with reasons, delivered to both employer and worker. If the decision is overturned, the employer must rectify it within five business days. This is a structured adversarial appeal process — the reviewer must be independent and empowered, not merely consultative.
(a) An employer that uses an automated decision system to make an employment-related decision must provide the affected worker with a form or a hyperlink to an electronic form to appeal the decision. (b) The appeal form provided to an affected worker must include: (1) the option to request access to the data used as input to or as output from the automated decision system; (2) the option to request access to any corroborating or supporting evidence provided by a human reviewer to verify output from the automated decision system; (3) space for the worker's reason for an appeal and any evidence the worker has to support the appeal; and (4) information on how the worker can designate an authorized representative who can also access the data. (c) A worker appealing the employment-related decision must submit their appeal form within 30 days of receiving the notification under section 181.9925. (d) Within five business days of receiving an appeal form, an employer must respond to the worker submitting the form. To respond to an appeal, the employer must designate a human reviewer who: (1) must objectively evaluate all evidence; (2) has sufficient authority, discretion, and resources to evaluate the decision, including education about the limitations and biases of automated decision systems and training on workers' rights under sections 181.9922 to 181.9927; (3) has the authority to overturn the employer's decision; and (4) was not involved in making the decision the worker is appealing. (e) After reviewing the evidence, the human reviewer must produce a clear, written document describing the result of the appeal and the reasons for that result. This document must be provided to both the employer and the worker. (f) If the human reviewer determines that the employment-related decision should be overturned, the employer must rectify the decision within five business days of receiving the decision.
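The appeal timeline quoted above can be sketched as simple deadline arithmetic. This is an illustrative aid only, not part of the bill: it assumes the 30-day appeal window runs in calendar days, treats "business days" as weekdays while ignoring holidays, and uses invented function and variable names.

from datetime import date, timedelta

def add_business_days(start: date, days: int) -> date:
    # Advance a date by the given number of weekday business days (holidays ignored).
    current, remaining = start, days
    while remaining > 0:
        current += timedelta(days=1)
        if current.weekday() < 5:  # Monday=0 through Friday=4
            remaining -= 1
    return current

def appeal_deadlines(notice_received: date, appeal_received: date, reviewer_decision: date) -> dict:
    return {
        "worker_appeal_deadline": notice_received + timedelta(days=30),       # 30 days to file
        "employer_response_deadline": add_business_days(appeal_received, 5),  # 5 business days to respond
        "rectification_deadline": add_business_days(reviewer_decision, 5),    # 5 business days to rectify
    }

# Example: notice on 2026-09-01, appeal filed 2026-09-10, reviewer decides 2026-09-15.
print(appeal_deadlines(date(2026, 9, 1), date(2026, 9, 10), date(2026, 9, 15)))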
Pending 2025-08-01
H-01.3
Minn. Stat. § 363A.08, subd. 9(b)(2)
Plain Language
Employers must provide notice to employees and applicants when the employer is using AI in employment decisions covered by subdivision 9(b)(1) — i.e., recruitment, hiring, promotion, renewal, training selection, discharge, discipline, tenure, or terms and conditions of employment. Failure to provide this notice is itself an independent unfair employment practice. The bill does not specify the timing, format, or content of the notice, which creates compliance ambiguity — employers should err on the side of providing clear written notice before or at the time AI is used in the relevant employment process.
(2) fail to provide notice to an employee or applicant for employment that the employer is using artificial intelligence for the purposes described in clause (1).
Pending 2026-09-01
H-01.3
§ 181.9922, Subd. 1(a)-(f), Subd. 2
Plain Language
Before deploying any automated decision system for employment-related decisions, employers must provide a detailed written pre-use notice to all affected workers (including job applicants and independent contractors), their authorized representatives, and any union representing affected workers. For new systems, notice must go out at least 30 days before introduction; for existing systems, by September 1, 2026. The notice must describe the system's purpose, what data it collects, the logic used, who created and manages it, what it evaluates, impact assessment results, a list of all ADS in use, and workers' rights. Notice must be in plain language and in the language normally used for workplace communications. Workers must provide affirmative written consent before being subject to an ADS, and must be allowed to opt out if reasonable alternatives exist. A copy of each notice must be filed with the Commissioner of Labor and Industry within 10 days.
Subdivision 1. Pre-use notice; provision. (a) An employer must provide a written notice that an automated decision system is in use at the workplace for the purpose of making employment-related decisions, to a worker who will be directly or indirectly affected by the automated decision system, or the worker's authorized representative, and to any union representing workers who could be directly or indirectly affected by the automated decision system. (b) The notice in paragraph (a) must be provided: (1) if the automated decision system is introduced after the effective date of this section, at least 30 days before the introduction of the automated decision system; (2) if the employer is using an existing automated decision system as of the effective date of this section, no later than September 1, 2026; (3) prominently to a job applicant or new worker, before the employer collects the applicant's or worker's personal information that the employer plans to process using the automated decision system; (4) at least 30 days before implementing any significant change to the automated decision system or how the employer is using the automated decision system; and (5) to a union representing workers who will be subject to the automated decision system, on a timeline that provides a meaningful opportunity to bargain over the use, scope, and impact of the automated decision system prior to deployment or modification of the tool. (c) Every time an employer provides a notice under paragraph (a), a copy of that notice must be submitted to the commissioner of labor and industry within ten days of the date the notice was provided to the worker. Copies of notices under paragraph (a) must also be made available to authorized representatives upon request. (d) Notices under paragraph (a) must be: (1) written in plain language as a separate and standalone communication; (2) in the language in which routine communications and other information are provided to workers; and (3) provided using a simple and easy-to-use method, including an email, hyperlink, or other written format. (e) A job applicant or worker must receive the notice required under this section and respond with affirmative written consent before the worker or applicant is subject to an automated decision system. (f) If reasonable alternatives to the use of the automated decision system exist, the worker must be allowed to opt out of being subject to the automated decision system. Subd. 2. Pre-use notice; contents. 
The notice required under subdivision 1, paragraph (a), must contain the following information: (1) a plain-language explanation of the nature, purpose, and scope of the decisions for which the automated decision system will be used, including the specific employment-related decisions potentially affected; (2) the specific category and sources of worker data the automated decision system will use or collect, and how that data was or will be collected; (3) the logic used in the automated decision system, including the key parameters that affect the output of the automated decision system, and the type of outputs the automated decision system will produce; (4) the individuals, vendors, and entities that created the automated decision system and the individuals, vendors, and entities that will run, manage, and interpret the results of the automated decision system output; (5) the job qualifications and characteristics that the automated decision system assesses, what worker data or attributes the system uses to conduct that assessment, and what kind of outputs the system produces as an evaluation of the worker; (6) the results of any impact assessments of the automated decision system, whether performed by the employer or the automated decision system vendor, and how to access that information; (7) an up-to-date list of all automated decision systems the employer is currently using; and (8) a description of the worker's rights under sections 181.9922 to 181.9927.
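A minimal sketch of the pre-use timing and consent gates summarized in this entry, under stated assumptions: the 30-day and 10-day windows are treated as calendar days, and the record fields and function names are invented for illustration rather than drawn from the bill.

from dataclasses import dataclass
from datetime import date, timedelta
from typing import Optional

@dataclass
class PreUseRecord:
    notice_sent: date                      # date the written pre-use notice was provided
    planned_introduction: date             # date the ADS will first be used on workers
    consent_received: bool                 # affirmative written consent on file
    filed_with_commissioner: Optional[date]  # date a copy was submitted to the commissioner

def pre_use_issues(rec: PreUseRecord) -> list:
    issues = []
    if rec.notice_sent + timedelta(days=30) > rec.planned_introduction:
        issues.append("notice given fewer than 30 days before introduction")
    if not rec.consent_received:
        issues.append("no affirmative written consent on file")
    if rec.filed_with_commissioner is None or \
       rec.filed_with_commissioner > rec.notice_sent + timedelta(days=10):
        issues.append("copy not filed with the commissioner within ten days of the notice")
    return issues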
Pending 2026-09-01
H-01.6
§ 181.9924, Subd. 2(a)-(d)
Plain Language
Employers may never rely solely on an ADS for employment-related decisions — a human must always be in the loop. When using an ADS to inform a decision, the employer must verify the output's accuracy and designate an internal reviewer who independently investigates and compiles corroborating information. The reviewer must have authority, discretion, resources, expertise in ADS operations, training on system biases and worker rights, and retaliation protection. Critically, if the reviewer cannot corroborate the ADS output or finds it inaccurate, incomplete, or misleading, the employer must not rely on the ADS for that decision. This goes beyond typical 'human in the loop' requirements by mandating affirmative corroboration with independent evidence and imposing disqualification when corroboration fails.
Subd. 2. Employment-related decisions. (a) An employer must not rely solely on an automated decision system when making an employment-related decision. (b) When an employer relies in part on an automated decision system in making an employment-related decision, the employer must: (1) ensure the accuracy of the automated decision system output; and (2) use a designated internal reviewer to conduct an investigation and compile corroborating information for the decision. This information may include but is not limited to supervisory or managerial evaluations, personnel files, employee work products, or peer reviews. (c) The designated internal reviewer must: (1) have sufficient authority, discretion, resources, and time to corroborate the automated decision system output; (2) have sufficient expertise in the operation of similar systems and a sufficient understanding of the automated decision system in question to interpret the outputs and results of relevant impact assessments; (3) have sufficient education, training, or experience to allow the reviewer to make a well-informed decision, including education about the limitations and biases of automated decision systems and training on workers' rights under sections 181.9922 to 181.9927; and (4) be protected from retaliation for exercising the reviewer's responsibilities. (d) When an employer cannot corroborate the automated decision system output, or the human reviewer has concluded that the automated decision system output is inaccurate, incomplete, or misleading, the employer must not rely on the automated decision system to make the employment-related decision.
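The corroboration rule above behaves like a hard gate on reliance. The decision-gate sketch below is only an illustration of that logic; the field and function names are assumptions, not statutory terms.

from dataclasses import dataclass

@dataclass
class ReviewOutcome:
    corroborated: bool        # independent evidence supports the ADS output
    flagged_inaccurate: bool  # reviewer found the output inaccurate, incomplete, or misleading

def may_rely_on_ads(review: ReviewOutcome) -> bool:
    # Reliance is permitted only when the output is corroborated and not flagged.
    return review.corroborated and not review.flagged_inaccurate

# If may_rely_on_ads(...) is False, the employment-related decision must rest on other evidence.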
Pending 2026-09-01
H-01.1, H-01.2
§ 181.9925, Subd. 1(a)-(d), Subd. 2(a)-(c)
Plain Language
After using an ADS to inform an employment decision, the employer must send the affected worker a post-decision notice acknowledging ADS use, describing the worker's rights, providing an appeal form or link, and affirming anti-retaliation protection. Standard timing is at the time the worker learns of the decision or within 15 business days, whichever is earlier; for discipline or termination, notice must come at least 30 days before the action takes effect. For repeated identical ADS use, a full notice is required for the first use each quarter, with a summary notice at quarter-end. Upon a worker's access request, the employer must respond within 14 calendar days with granular information: a plain-language explanation of the decision, the specific worker data used, all outputs, the rationale for the decision, the relative roles of ADS output and human judgment, how the system logic applied to the worker, key parameters, the range of possible outputs and aggregate statistics for comparison, the system vendor and product name, and any completed impact assessments. Vendors and contractors must fully assist the employer in fulfilling access requests.
Subdivision 1. Notice. (a) An employer that has used an automated decision system to make an employment-related decision must provide the affected worker with a written notice: (1) at the time the employer informs the worker of the decision, or no later than 15 business days from the date of the decision, whichever is earlier; or (2) if the decision results in the discipline or termination of the worker, at least 30 days before the discipline or termination takes effect. (b) The employer must provide a notice under paragraph (a) that is: (1) written in plain language as a separate and standalone communication; (2) in the language in which routine communications and other information are provided to workers; and (3) provided using a simple and easy-to-use method, including an email, hyperlink, or other written format. (c) A notice under paragraph (a) must contain the following information: (1) an acknowledgment that the employer used an automated decision system to make one or more employment-related decisions with respect to the worker; (2) a description of the worker's rights under sections 181.9922 to 181.9927; (3) a form or a hyperlink to an electronic form for the worker to file an appeal or request detailed information about the data and automated decision system used in the decision; and (4) that the employer is prohibited from retaliating against the worker for exercising the worker's rights under this section. (d) If an employer uses the same automated decision system in the same way multiple times a quarter, an employer must provide each affected employee: (1) the full notice required by this section for the first use of the automated decision system each quarter; and (2) a second notice at the end of the quarter that provides: (i) the number of times the employer or operator used the automated decision system that quarter; (ii) the dates the employer or operator used the automated decision system that quarter; and (iii) a description of the worker's rights under sections 181.9922 to 181.9927, including the right to access information about each decision. Subd. 2. Right to access. 
(a) When responding to a worker's access request, an employer must provide the following information to the worker: (1) a plain-language explanation of the specific decision for which the employer used the automated decision system; (2) in a simple and easy-to-use format, the specific worker data that the automated decision system used and all specific worker outputs produced by the automated decision system; (3) how the employer used the automated decision system output with respect to the worker, including: (i) the rationale for the decision, including the specific roles the output and human involvement played in the business's decision; (ii) any additional corroborating information or judgments the employer used in addition to the automated decision system output in making the decision; (iii) how the logic of the automated decision system, including its assumptions and limitations, was applied to the worker; (iv) the key parameters or performance metrics that affected the output of the automated decision system with respect to the worker and how those parameters applied to the worker; and (v) the range of possible outputs and aggregate output statistics, to help a worker understand how they compare to other workers; (4) the name of the entity that created the automated decision system and the product name of the automated decision system; and (5) a copy of any completed impact assessments of the automated decision system. (b) An employer must respond to an access request no later than 14 calendar days from the date the employer received the request. (c) A service provider, contractor, or vendor must provide full assistance to the employer in responding to a worker request for access, including any of that worker's input or output data in the service provider, contractor, or vender's possession and any relevant information about the automated decision system.
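The notice and access clocks in this entry can be approximated as follows. This is a rough sketch under simplifying assumptions: business days exclude weekends only, holidays are ignored, and the helper names are invented for illustration.

from datetime import date, timedelta
from typing import Optional

def add_business_days(start: date, days: int) -> date:
    current, remaining = start, days
    while remaining > 0:
        current += timedelta(days=1)
        if current.weekday() < 5:
            remaining -= 1
    return current

def post_decision_notice_deadline(decision_date: date, informed_date: date,
                                  is_discipline_or_termination: bool,
                                  action_effective: Optional[date] = None) -> date:
    if is_discipline_or_termination and action_effective is not None:
        # Notice must precede the discipline or termination by at least 30 days.
        return action_effective - timedelta(days=30)
    # Otherwise: when the worker is informed, or 15 business days after the decision,
    # whichever is earlier.
    return min(informed_date, add_business_days(decision_date, 15))

def access_response_deadline(request_received: date) -> date:
    # Access requests must be answered within 14 calendar days.
    return request_received + timedelta(days=14)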
Pending 2026-09-01
H-01.4, H-01.5
§ 181.9926(a)-(f)
Plain Language
Every worker subject to an ADS-informed employment decision must receive an appeal form or link. The form must allow the worker to request access to ADS input/output data and human reviewer evidence, state their reason for appeal with supporting evidence, and designate an authorized representative. Workers have 30 days from post-decision notice to submit the appeal. The employer must respond within 5 business days by assigning a human reviewer who was not involved in the original decision, has authority to overturn it, is trained on ADS limitations and worker rights, and must objectively evaluate all evidence. The reviewer must produce a written decision with reasoning provided to both the worker and employer. If the decision is overturned, the employer must rectify it within 5 business days. This creates a fully structured internal appeal process with independence requirements, written determinations, and mandatory remediation.
(a) An employer that uses an automated decision system to make an employment-related decision must provide the affected worker with a form or a hyperlink to an electronic form to appeal the decision. (b) The appeal form provided to an affected worker must include: (1) the option to request access to the data used as input to or as output from the automated decision system; (2) the option to request access to any corroborating or supporting evidence provided by a human reviewer to verify output from the automated decision system; (3) space for the worker's reason for an appeal and any evidence the worker has to support the appeal; and (4) information on how the worker can designate an authorized representative who can also access the data. (c) A worker appealing the employment-related decision must submit their appeal form within 30 days of receiving the notification under section 181.9925. (d) Within five business days of receiving an appeal form, an employer must respond to the worker submitting the form. To respond to an appeal, the employer must designate a human reviewer who: (1) must objectively evaluate all evidence; (2) has sufficient authority, discretion, and resources to evaluate the decision, including education about the limitations and biases of automated decision systems and training on workers' rights under sections 181.9922 to 181.9927; (3) has the authority to overturn the employer's decision; and (4) was not involved in making the decision the worker is appealing. (e) After reviewing the evidence, the human reviewer must produce a clear, written document describing the result of the appeal and the reasons for that result. This document must be provided to both the employer and the worker. (f) If the human reviewer determines that the employment-related decision should be overturned, the employer must rectify the decision within five business days of receiving the decision.
Pending 2026-02-01
H-01.1, H-01.3
Sec. 4(4)(a)(i)-(iii), (c)
Plain Language
Before deploying a high-risk AI system to make or substantially factor into a consequential decision about a consumer, the deployer must: notify the consumer that an AI system is being used for the decision; disclose the system's purpose and the nature of the decision; provide deployer contact information and a plain-language system description; and provide instructions for accessing the deployer's public statement. Where applicable under Nebraska's data privacy law (§ 87-1107), the deployer must also inform the consumer of their right to opt out of profiling. All disclosures must be direct, in plain language, multilingual where the deployer ordinarily communicates in multiple languages, and accessible to consumers with disabilities. If direct delivery is infeasible, the deployer must use a method reasonably calculated to reach the consumer.
(4)(a) On and after February 1, 2026, prior to deploying any high-risk artificial intelligence system to make or be a substantial factor in making any consequential decision concerning any consumer, the deployer shall: (i) Notify the consumer that the deployer has deployed a high-risk artificial intelligence system to make or be a substantial factor in making a consequential decision; (ii) Provide to the consumer: (A) A statement that discloses the purpose of the high-risk artificial intelligence system and the nature of the consequential decision; (B) The contact information for the deployer; (C) A description written in plain language that describes the high-risk artificial intelligence system; and (D) Instructions on how to access the statement described in subdivision (5)(a) of this section; and (iii) If applicable, provide information to the consumer regarding the consumer's right to opt out of the processing of personal data concerning the consumer for any purpose of profiling in furtherance of decisions that produce legal or similarly significant effects concerning the consumer under subdivision (2)(e)(iii) of section 87-1107. (c)(i) Except as provided in subdivision (c)(ii) of this subsection, a deployer shall provide the notice, statement, contact information, and description required under subdivisions (4)(a) and (b) of this section: (A) Directly to the consumer; (B) In plain language; (C) In each language in which the deployer in the ordinary course of business provides any contract, disclaimer, sale announcement, or other information to any consumer; and (D) In a format that is accessible to any consumer with any disability. (ii) If the deployer is unable to provide the notice, statement, contact information, and description required under subdivisions (a) and (b) of this subsection directly to the consumer, the deployer shall make the notice, statement, contact information, and description available in a manner that is reasonably calculated to ensure that the consumer receives the notice, statement, contact information, and description.
Pending 2026-02-01
H-01.1, H-01.2, H-01.4, H-01.5
Sec. 4(4)(b)(i)-(iii)
Plain Language
When a high-risk AI system makes or substantially factors into an adverse consequential decision about a consumer, the deployer must provide: a statement disclosing each principal reason for the decision (including how the AI contributed, the types of data processed, and the data sources); an opportunity to correct any incorrect personal data the system used; and an opportunity to appeal the decision, with human review if technically feasible. The appeal requirement has a narrow exception where delay would risk the consumer's life or safety. These are post-decision adverse action obligations — they supplement the pre-deployment notice in Sec. 4(4)(a).
(b) On and after February 1, 2026, for each high-risk artificial intelligence system that makes or is a substantial factor in making any consequential decision that is adverse to any consumer, the deployer of such high-risk artificial intelligence system shall provide to such consumer: (i) A statement that discloses each principal reason for the consequential decision, including: (A) The degree to and manner in which the high-risk artificial intelligence system contributed to the consequential decision; (B) The type of data that was processed by the high-risk artificial intelligence system in making the consequential decision; and (C) Each source of the data described in subdivision (b)(i)(B) of this subsection; (ii) An opportunity to correct any incorrect personal data that the high-risk artificial intelligence system processed in making or processed as a substantial factor in making the consequential decision; and (iii) An opportunity to appeal any adverse consequential decision concerning the consumer arising from the deployment of the high-risk artificial intelligence system unless providing the opportunity for appeal is not in the best interest of the consumer, including instances when any delay might pose a risk to the life or safety of such consumer. Any such appeal shall allow for human review if technically feasible.
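One way to think about the adverse-decision statement described above is as a structured record a deployer assembles before sending it. The record shape below is hypothetical, not drawn from the bill; field names are illustrative assumptions.

from dataclasses import dataclass
from typing import List

@dataclass
class AdverseDecisionStatement:
    principal_reasons: List[str]                 # each principal reason for the decision
    ai_contribution: str                         # degree and manner the system contributed
    data_types_processed: List[str]              # types of data the system processed
    data_sources: List[str]                      # source of each data type
    correction_opportunity_offered: bool = True  # consumer may correct incorrect personal data
    appeal_offered: bool = True                  # appeal available, subject to the narrow exception
    human_review_available: bool = True          # required where technically feasible
    life_safety_exception_applied: bool = False  # only basis for withholding the appeal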
Passed
H-01.1
Section 2(c)
Plain Language
When a business entity uses biometric surveillance data to deny a consumer premises access or to remove them, it must provide the consumer with a detailed explanation of both the actions taken and the criteria used to reach that determination. This is an adverse-action explanation obligation — it is triggered only when biometric data leads to a tangible exclusion decision, not merely by operating the system. The explanation must be detailed, not generic.
c. If a business entity uses information obtained through a biometric surveillance system to deny a consumer access to its premises or to remove a consumer from its premises, the business entity shall provide the consumer with a detailed explanation regarding its actions and the criteria used by the business entity in making its determination.
Passed 2026-01-01
H-01.1
Section 2(c)
Plain Language
When a business entity uses biometric surveillance data to make an adverse decision about a consumer — specifically denying access to or removing the consumer from the business premises — the business must provide that consumer with a detailed explanation of the actions taken and the criteria the business entity applied in making its determination. This is an adverse-action explanation requirement triggered only when biometric data drives a denial or removal decision. The statute requires the explanation to be 'detailed' and to cover both the actions and the criteria, which is more demanding than a generic notice that biometric data was used. The bill does not specify the format or timing of the explanation.
c. If a business entity uses information obtained through a biometric surveillance system to deny a consumer access to its premises or to remove a consumer from its premises, the business entity shall provide the consumer with a detailed explanation regarding its actions and the criteria used by the business entity in making its determination.
Pre-filed 2026-02-02
H-01.3
Section 1.a.(1)-(3)
Plain Language
Before requesting an applicant to complete a video interview that will be analyzed by AI, the employer must do three things: (1) notify the applicant that AI may be used to analyze the video and assess their fitness for the position; (2) explain how the AI works and what general types of characteristics it evaluates; and (3) obtain written consent (which may be electronic) from the applicant to be evaluated by the AI system. If the applicant does not consent, the employer may not use AI to evaluate that applicant. All three steps must be completed before the interview takes place.
a. An employer in the State that requests applicants to record video interviews and uses an artificial intelligence analysis of the applicant-submitted video shall, prior to making the request for a video interview: (1) notify an applicant before the interview that artificial intelligence may be used to analyze the applicant's video interview and consider the applicant's fitness for the position; (2) provide an applicant with information before the interview explaining how the artificial intelligence works and what general types of characteristics it uses to evaluate applicants; and (3) obtain, before the interview, written consent, which may be electronic, from the applicant to be evaluated by the artificial intelligence program as described in the information provided. An employer shall not use artificial intelligence to evaluate an applicant who has not consented to the use of artificial intelligence analysis.
Pending 2027-01-01
H-01.3
GBL § 1552(5)(a)
Plain Language
Before deploying a high-risk AI decision system to make or substantially contribute to a consequential decision about a consumer, the deployer must provide the consumer with pre-decision notice including: (1) that AI is being used to make or contribute to the decision, (2) the system's purpose, (3) the nature of the consequential decision, (4) deployer contact information, (5) a plain-language description of the system, and (6) instructions for accessing the deployer's public statement under § 1552(6). The notice must be provided directly to the consumer, in plain language, in all languages the deployer ordinarily uses, and in a disability-accessible format (per § 1552(5)(c)).
(a) Beginning on January first, two thousand twenty-seven, and before a deployer deploys a high-risk artificial intelligence decision system to make, or be a substantial factor in making, a consequential decision concerning a consumer, the deployer shall: (i) notify the consumer that the deployer has deployed a high-risk artificial intelligence decision system to make, or be a substantial factor in making, such consequential decision; and (ii) provide to the consumer: (A) a statement disclosing: (I) the purpose of such high-risk artificial intelligence decision system; and (II) the nature of such consequential decision; (B) contact information for such deployer; (C) a description, in plain language, of such high-risk artificial intelligence decision system; and (D) instructions on how to access the statement made available pursuant to paragraph (a) of subdivision six of this section.
Pending 2027-01-01
H-01.1, H-01.2, H-01.4, H-01.5
GBL § 1552(5)(b)-(c)
Plain Language
When a high-risk AI decision system makes or substantially contributes to an adverse consequential decision about a consumer, the deployer must provide the consumer with: (1) an explanation of the principal reasons for the decision, including the AI system's degree of contribution, the types of data processed, and data sources; (2) the opportunity to correct inaccurate personal data used in the decision; and (3) the opportunity to appeal the decision, which must include human review if technically feasible unless delay would endanger the consumer. All notices must be delivered directly, in plain language, in all languages the deployer ordinarily uses, and in disability-accessible formats. This creates a right to explanation, data correction, and human-reviewed appeal for adverse automated decisions.
(b) Beginning on January first, two thousand twenty-seven, a deployer that has deployed a high-risk artificial intelligence decision system to make, or as a substantial factor in making, a consequential decision concerning a consumer shall, if such consequential decision is adverse to the consumer, provide to such consumer: (i) a statement disclosing the principal reason or reasons for such adverse consequential decision, including, but not limited to: (A) the degree to which, and manner in which, the high-risk artificial intelligence decision system contributed to such adverse consequential decision; (B) the type of data that was processed by such high-risk artificial intelligence decision system in making such adverse consequential decision; and (C) the source of such data; and (ii) an opportunity to: (A) correct any incorrect personal data that the high-risk artificial intelligence decision system processed in making, or as a substantial factor in making, such adverse consequential decision; and (B) appeal such adverse consequential decision, which shall, if technically feasible, allow for human review unless providing such opportunity is not in the best interest of such consumer, including, but not limited to, in instances in which any delay might pose a risk to the life or safety of such consumer. (c) The deployer shall provide the notice, statements, information, description, and instructions required pursuant to paragraphs (a) and (b) of this subdivision: (i) directly to the consumer; (ii) in plain language; (iii) in all languages in which such deployer, in the ordinary course of such deployer's business, provides contracts, disclaimers, sale announcements, and other information to consumers; and (iv) in a format that is accessible to consumers with disabilities.
Pending 2026-01-01
H-01.1, H-01.3
Labor Law § 203-g(2)(a)-(b)
Plain Language
Employers and employment agencies that use automated employment decision tools to screen job applicants must provide each candidate with advance notice at least ten business days before the tool is used. The notice must cover three items: (1) that an automated tool will be used to assess the candidate, (2) the specific job qualifications and characteristics the tool evaluates, and (3) what data is collected, where it comes from, and the employer's data retention policy. In addition, the notice must allow candidates to request an alternative selection process or accommodation. This is a pre-decision transparency obligation — the notice must be delivered before the automated screening occurs, not after.
2. Notices required. (a) Any employer or employment agency that uses an automated employment decision tool to screen candidates who have applied for a position for an employment decision shall notify each such candidate of the following: (i) That an automated employment decision tool will be used in connection with the assessment or evaluation of such candidate; (ii) The job qualifications and characteristics that such automated employment decision tool will use in the assessment of such candidate; and (iii) Information about the type of data collected for such automated employment decision tool, the source of such data, and the employer or employment agency's data retention policy. (b) The notice required by paragraph (a) of this subdivision shall be made no less than ten business days before the use of such automated employment decision tool and shall allow such candidate to request an alternative selection process or accommodation.
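The ten-business-day lead time in this entry reduces to a simple counting check. The sketch below is illustrative only: it assumes business days exclude weekends, ignores holidays (which the summary does not address), and uses invented function names.

from datetime import date, timedelta

def business_days_between(start: date, end: date) -> int:
    # Count weekday business days strictly after `start` up to and including `end`.
    count, current = 0, start
    while current < end:
        current += timedelta(days=1)
        if current.weekday() < 5:
            count += 1
    return count

def notice_timely(notice_date: date, tool_use_date: date) -> bool:
    return business_days_between(notice_date, tool_use_date) >= 10

# Example: notice on a Monday for a tool first used two calendar weeks later passes the check.
print(notice_timely(date(2026, 3, 2), date(2026, 3, 16)))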
Pending 2025-01-23
H-01.1, H-01.3
Real Prop. Law § 227-g(3)(a)(i)-(iv)
Plain Language
Before screening an applicant, landlords must notify the applicant that an automated tool will be used, disclose which characteristics the tool evaluates, and provide information about the types of data collected, data sources, and the landlord's data retention policy. This is a pre-decision transparency obligation — the applicant must know an automated system is being used and understand what factors and data it relies on before the tool processes their application. If an applicant is denied housing through the automated tool, the landlord must disclose the reason for denial.
Any landlord that uses an automated housing decision making tool to screen applicants for housing shall notify each such applicant of the following: (i) That an automated housing decision making tool will be used in connection with the assessment or evaluation of such applicant; (ii) The characteristics that such automated housing decision making tool will use in the assessment of such applicant; (iii) Information about the type of data collected for such automated housing decision making tool, the source of such data, and the landlord's data retention policy; (iv) If an application for housing is denied through use of the automated housing decision making tool, the reason for such denial.
Pending 2025-01-23
H-01.1, H-01.4
Real Prop. Law § 227-g(3)(b)
Plain Language
The notice required under subdivision 3(a) must be provided at least 24 hours before the tool is used, and the applicant must be given the opportunity to request an alternative selection process or accommodation. The 24-hour advance notice requirement and the right to request an alternative process function together as a meaningful opt-out mechanism — applicants who do not wish to be evaluated by the automated tool can request a non-automated path before the tool is ever applied to them.
(b) The notice required by paragraph (a) of this subdivision shall be made no less than twenty-four hours before the use of such automated housing decision making tool and shall allow such applicant to request an alternative selection process or accommodation.
Pending 2025-04-27
H-01.1
State Tech. Law § 507(4)-(5)
Plain Language
Residents have the right to understand how and why an automated system contributed to an outcome affecting them — even when the system was only one factor in the decision. Explanations must be technically valid, meaningful to the affected individual (not just generic boilerplate), and proportionate to the level of risk in the specific context. Higher-risk decisions require more detailed explanations. This right extends to hybrid human-AI decisions, not only fully automated ones.
4. New York residents shall have the right to understand how and why an outcome impacting them was determined by an automated system, even when the automated system is not the sole determinant of the outcome.
5. Automated systems shall provide explanations that are technically valid, meaningful to the individual and any other persons who need to understand the system and proportionate to the level of risk based on the context.
Pending 2025-04-27
H-01.4, H-01.5
State Tech. Law § 508(1)-(3)
Plain Language
Residents have the right to opt out of automated systems in favor of a human alternative where appropriate — assessed based on reasonable expectations in context, with emphasis on broad accessibility and protection from harmful impacts. Separately, residents must have access to a timely human review and remedy process when an automated system fails, produces errors, or when they wish to appeal or contest an outcome. The human fallback process must be accessible, equitable, effective, maintained over time, accompanied by operator training, and must not impose unreasonable burdens on the public. The opt-out right is qualified ('where appropriate'), but the human fallback for errors and appeals appears to be a mandatory right.
1. New York residents shall have the right to opt out of automated systems, where appropriate, in favor of a human alternative. The appropriateness of such an option shall be determined based on reasonable expectations in a given context, with a focus on ensuring broad accessibility and protecting the public from particularly harmful impacts. In some instances, a human or other alternative may be mandated by law.
2. New York residents shall have access to a timely human consideration and remedy through a fallback and escalation process if an automated system fails, produces an error, or if they wish to appeal or contest its impacts on them.
3. The human consideration and fallback process shall be accessible, equitable, effective, maintained, accompanied by appropriate operator training, and should not impose an unreasonable burden on the public.
Pending 2025-04-27
H-01.6
State Tech. Law § 508(4)
Plain Language
Automated systems deployed in sensitive domains — specifically including criminal justice, employment, education, and health — face heightened obligations beyond the general requirements: they must be tailored to their specific purpose, provide meaningful oversight access, include user training for residents interacting with the system, and incorporate human consideration for adverse or high-risk decisions. The human consideration requirement for adverse decisions in sensitive domains is the strongest human oversight obligation in the statute — it effectively requires human-in-the-loop review for consequential decisions in these sectors.
4. Automated systems intended for use within sensitive domains, including but not limited to criminal justice, employment, education, and health, shall additionally be tailored to their purpose, provide meaningful access for oversight, include training for New York residents interacting with the system, and incorporate human consideration for adverse or high-risk decisions.
Pending
H-01.3, H-01.4
Civil Rights Law § 86-a(1)(a)-(d)
Plain Language
Before using a high-risk AI system to make or assist in making a consequential decision, a deployer must give the end user at least five business days' advance notice — in clear, conspicuous, multilingual terms — that AI will be used. The deployer must also provide a meaningful opportunity to opt out and have the decision made by a human instead, with no adverse consequences for opting out and a 45-day deadline to render the human decision. When the AI decision would confer a benefit, the deployer must offer the end user the option to waive the five-day waiting period; if waived, notice must still be given as early as practicable. End users may exercise the opt-out no more than once per consequential decision in a six-month period. An urgent-necessity exception applies where compliance would cause imminent detriment to the end user's welfare (e.g., emergency benefits), but even in that case the right to request human review is never waived. These rights cannot be waived by contract.
1. (a) Any deployer that employs a high-risk AI system for a consequential decision shall comply with the following requirements; provided, however, that where there is an urgent necessity for a decision to be made to confer a benefit to the end user, including, but not limited to, social benefits, housing access, or dispensing of emergency funds, and compliance with this section would cause imminent detriment to the welfare of the end user, such obligation shall be considered waived; provided further, that nothing in this section shall be construed to waive a natural person's option to request human review of the decision: (i) inform the end user at least five business days prior to the use of such system for the making of a consequential decision in clear, conspicuous, and consumer-friendly terms, made available in each of the languages in which the company offers its end services, that AI systems will be used to make a decision or to assist in making a decision; and (ii) allow sufficient time and opportunity in a clear, conspicuous, and consumer-friendly manner for the consumer to opt-out of the automated consequential decision process and for the decision to be made by a human representative. A consumer may not be punished or face any other adverse action for opting out of a decision by an AI system and the deployer shall render a decision to the consumer within forty-five days. (b) If a deployer employs a high-risk AI system for a consequential decision to determine whether to or on what terms to confer a benefit on an end user, the deployer shall offer the end user the option to waive their right to advance notice of five business days under this subdivision. (c) If the end user clearly and affirmatively waives their right to five business days' notice, the deployer shall then inform the end user as early as practicable before the making of the consequential decision in clear, conspicuous, and consumer-friendly terms, made available in each of the languages in which the company offers its end services, that AI systems will be used to make a decision or to assist in making a decision. The deployer shall allow sufficient time and opportunity in a clear, conspicuous, and consumer-friendly manner for the consumer to opt-out of the automated process and for the decision to be made by a human representative. A consumer may not be punished or face any other adverse action for opting out of a decision by an AI system and the deployer shall render a decision to the consumer within forty-five days. (d) An end user shall be entitled to no more than one opt-out with respect to the same consequential decision within a six-month period.
Pending
H-01.4, H-01.5
Civil Rights Law § 86-a(2)(a)-(b)
Plain Language
After a high-risk AI system has been used in a consequential decision, the deployer must notify the end user within five days and provide an accessible appeal process. The appeal must allow the end user to (1) formally contest the decision, (2) submit supporting information, and (3) obtain meaningful human review. The deployer must respond within 45 days, extendable once by 45 days for complex or high-volume appeals with notice and reasons to the end user. Each end user may appeal the same consequential decision only once within a six-month period. Under § 86-a(5), an end user who exercised the pre-decision opt-out right under subdivision 1 cannot also exercise the post-decision appeal right under this subdivision for the same decision.
2. (a) Any deployer that employs a high-risk AI system for a consequential decision shall inform the end user within five days in a clear, conspicuous and consumer-friendly manner if a high-risk AI system has been used to make a consequential decision. The deployer shall then provide and explain a process for the end user to appeal the decision, which shall at minimum allow the end user to (i) formally contest the decision, (ii) provide information to support their position, and (iii) obtain meaningful human review of the decision. A deployer shall respond to an end user's appeal within forty-five days of receipt of the appeal. That period may be extended once by forty-five additional days where reasonably necessary, taking into account the complexity and number of appeals. The deployer shall inform the end user of any such extension within forty-five days of receipt of the appeal, together with the reasons for the delay. (b) An end user shall be entitled to no more than one appeal with respect to the same consequential decision in a six-month period.
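The appeal-response clock in this entry, including the single permitted extension, can be expressed as date arithmetic. This is a sketch under assumptions: all periods are treated as calendar days and the function names are illustrative, not statutory.

from datetime import date, timedelta

def appeal_response_deadline(appeal_received: date, extended: bool = False) -> date:
    # Initial 45-day response window, extendable once by a further 45 days.
    base = appeal_received + timedelta(days=45)
    return base + timedelta(days=45) if extended else base

def extension_notice_deadline(appeal_received: date) -> date:
    # Any extension, with reasons, must be communicated within 45 days of receipt.
    return appeal_received + timedelta(days=45)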
Pending 2025-10-12
H-01.6
GBL § 1154
Plain Language
Before any news media content that was created in whole or in material part by generative AI may be published (with the consumer disclosure required by § 1153), a human worker must review the content and must have authority to approve, deny, or modify the AI system's output. This is a mandatory human-in-the-loop requirement — the human reviewer must have genuine override authority, not merely a rubber-stamp role. The obligation is tied to the publication act: AI-generated content cannot be published until it has been through this human review gate. Note the threshold here ('in whole or in material part') is broader than the consumer disclosure threshold ('substantially composed'), potentially requiring human review even when the § 1153 labeling requirement does not apply.
Any news media content, including stories, articles, audio, visuals or images, which are created in whole or in material part by generative artificial intelligence shall be reviewed by a human worker who has the authority to approve, deny, or modify any decision recommended or made by the automated system before such content may be published with the disclosure under section eleven hundred fifty-three of this article.
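A hypothetical publication gate reflecting the human-review requirement summarized above: AI-generated content moves to publication only after a human reviewer with approve/deny/modify authority has recorded a decision. All names are illustrative assumptions, not terms from the bill.

from dataclasses import dataclass
from typing import Optional

@dataclass
class HumanReview:
    reviewer: str
    decision: str  # "approve", "deny", or "modify"

def may_publish(ai_generated: bool, review: Optional[HumanReview]) -> bool:
    if not ai_generated:
        return True
    # Content created in whole or in material part by generative AI requires a recorded
    # human decision; a denial blocks publication.
    return review is not None and review.decision in ("approve", "modify")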
Pending 2027-01-01
H-01.4
Civil Rights Law § 108(1)-(2)
Plain Language
The Division must promulgate regulations within two years of the effective date specifying when and how deployers must provide individuals a right to opt out of algorithmic decision-making and elect a human-only alternative for consequential actions. The regulations must consider notice design, which consequential actions warrant a human alternative, feasibility, and the public interest. Separately, developers and deployers are immediately prohibited from using deceptive statements, dark patterns, or manipulative interface design to discourage individuals from exercising any right under the article. The opt-out/human alternative right itself will not take effect until implementing regulations are promulgated, but the prohibition on conditioning rights exercise through deception or manipulative design is operative from the effective date.
1. Not later than two years after the effective date of this article, the division shall promulgate regulations in accordance with specifying the circumstances and manner in which a deployer shall provide to an individual a means to opt-out of the use of a covered algorithm for a consequential action and to elect to have the consequential action concerning the individual undertaken by a human without the use of a covered algorithm. In promulgating the regulations under this subdivision, the division shall consider the following: (a) how to ensure that any notice or request from a deployer regarding the right to a human alternative is clear and conspicuous, in plain language, easy to execute, and at no cost to an individual; (b) how to ensure that any such notice to individuals is effective, timely, and useful; (c) the specific types of consequential actions for which a human alternative is appropriate, considering the magnitude of the action and risk of harm; (d) the extent to which a human alternative would be beneficial to individuals and the public interest; (e) the extent to which a human alternative can prevent or mitigate harm; (f) the risk of harm to individuals beyond the requestor if a human alternative is available or not available; (g) the feasibility of providing a human alternative in different circumstances; and (h) any other considerations the division deems appropriate to balance the need to give an individual control over a consequential action related to such individual with the practical feasibility and effectiveness of granting such control. 2. A developer or deployer may not condition, effectively condition, attempt to condition, or attempt to effectively condition the exercise of any individual right under this article or individual choice through: (a) the use of any false, fictitious, fraudulent, or materially misleading statement or representation; or (b) the design, modification, or manipulation of any user interface with the purpose or substantial effect of obscuring, subverting, or impairing a reasonable individual's autonomy, decision making, or choice to exercise any such right.
Pending 2027-01-01
H-01.5
Civil Rights Law § 108(3)
Plain Language
The Division must promulgate regulations within two years specifying when and how deployers must provide individuals a mechanism to appeal algorithmic consequential actions to a human reviewer. The regulations must ensure the appeal mechanism is free, accessible (including to individuals with disabilities), proportionate, non-discriminatory, and effective. Where appropriate, individuals must be able to identify and correct personal data used by the algorithm. Human reviewers must be trained. Like the opt-out right, this appeal right's specific parameters will be defined by regulation, but the legislative mandate to create an appeal mechanism is unambiguous.
3. Not later than two years after the effective date of this article, the division shall promulgate regulations specifying the circumstances and manner in which a deployer shall provide to an individual a mechanism to appeal to a human a consequential action resulting from the deployer's use of a covered algorithm. In promulgating the regulations under this subdivision, the division shall do the following: (a) ensure that the appeal mechanism is clear and conspicuous, in plain language, easy-to-execute, and at no cost to individuals; (b) ensure that the appeal mechanism is proportionate to the consequential action; (c) ensure that the appeal mechanism is reasonably accessible to individuals with disabilities, timely, usable, effective, and non-discriminatory; (d) require, where appropriate, a mechanism for individuals to identify and correct any personal data used by the covered algorithm; (e) specify training requirements for human reviewers with respect to a consequential action; and (f) consider any other circumstances, procedures, or matters the division deems appropriate to balance the need to give an individual a right to appeal a consequential action related to such individual with the practical feasibility and effectiveness of granting such right.
Pending 2027-01-01
H-01.3
Civil Rights Law § 110(6)-(7)
Plain Language
Deployers must create a short-form notice (max 500 words) for each covered algorithm that summarizes individual rights, highlights unexpected practices or consequential actions, and is written in plain language accessible to individuals with disabilities. For individuals with whom the deployer has an existing relationship, the notice must be delivered electronically at the first interaction with the algorithm. For individuals without a pre-existing relationship, the notice must be posted on the deployer's website. This is distinct from the comprehensive disclosure in § 110(1) — it is a concise, user-facing summary designed to provide meaningful notice at the point of algorithmic interaction.
6. A deployer shall provide a short-form notice regarding a covered algorithm it develops, offers, licenses, or uses in a manner that: (a) is concise, clear, conspicuous, in plain language, and not misleading; (b) is readily accessible to individuals with disabilities; (c) is based on what is reasonably anticipated within the context of the relationship between the individual and the deployer; (d) includes an overview of each applicable individual right and disclosure in a manner that draws attention to any practice that may be unexpected to a reasonable individual or that involves a consequential action; (e) is not more than five hundred words in length; and (f) is available to the public at no cost. 7. (a) If a deployer has a relationship with an individual, the deployer shall provide an electronic version of the short-form notice directly to the individual upon the individual's first interaction with the covered algorithm. (b) If a deployer does not have a relationship with an individual, the deployer shall provide the short-form notice in a clear, conspicuous, accessible, and not misleading manner on their website.
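The 500-word cap on the short-form notice invites a trivial automated check. The one-liner below is a minimal sketch; whitespace tokenization is an assumption, since the bill does not define how words are counted.

def short_form_notice_within_limit(notice_text: str, max_words: int = 500) -> bool:
    # True when the notice stays within the word cap under a simple whitespace split.
    return len(notice_text.split()) <= max_words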
Pending 2027-01-01
H-01.3
Civ. Rights Law § 86-a(1)(a)(i), (1)(b), (1)(c)
Plain Language
Before using a high-risk AI system for a consequential decision, a deployer must notify the end user at least five business days in advance — in clear, multilingual, consumer-friendly terms — that AI will be used. When the decision would confer a benefit on the end user, the deployer must offer the end user the option to waive the five-day notice; if waived, notice must still be given as early as practicable. An urgency exception applies when the decision confers a benefit and delay would cause imminent detriment to the end user, though even under the urgency exception, the end user's right to request human review is never waived. Notice must be provided in every language in which the deployer offers its end services.
(a) Any deployer that employs a high-risk AI system for a consequential decision shall comply with the following requirements; provided, however, that where there is an urgent necessity for a decision to be made to confer a benefit to the end user, including, but not limited to, social benefits, housing access, or dispensing of emergency funds, and compliance with this section would cause imminent detriment to the welfare of the end user, such obligation shall be considered waived; provided further, that nothing in this section shall be construed to waive a natural person's option to request human review of the decision:
(i) inform the end user at least five business days prior to the use of such system for the making of a consequential decision in clear, conspicuous, and consumer-friendly terms, made available in each of the languages in which the company offers its end services, that AI systems will be used to make a decision or to assist in making a decision; and
(b) If a deployer employs a high-risk AI system for a consequential decision to determine whether to or on what terms to confer a benefit on an end user, the deployer shall offer the end user the option to waive their right to advance notice of five business days under this subdivision.
(c) If the end user clearly and affirmatively waives their right to five business days' notice, the deployer shall then inform the end user as early as practicable before the making of the consequential decision in clear, conspicuous, and consumer-friendly terms, made available in each of the languages in which the company offers its end services, that AI systems will be used to make a decision or to assist in making a decision. The deployer shall allow sufficient time and opportunity in a clear, conspicuous, and consumer-friendly manner for the consumer to opt-out of the automated process and for the decision to be made by a human representative. A consumer may not be punished or face any other adverse action for opting out of a decision by an AI system and the deployer shall render a decision to the consumer within forty-five days.
Pending 2027-01-01
H-01.4
Civ. Rights Law § 86-a(1)(a)(ii), (1)(d)
Plain Language
Deployers must give end users a clear, accessible opportunity to opt out of having a consequential decision made by an AI system and instead have it made by a human representative. The deployer must render the human decision within 45 days. Consumers may not be punished or face any adverse action for exercising the opt-out. The opt-out right is limited to one exercise per consequential decision per six-month period. An end user cannot exercise both the opt-out right (pre-decision) and the appeal right (post-decision) with respect to the same consequential decision.
(ii) allow sufficient time and opportunity in a clear, conspicuous, and consumer-friendly manner for the consumer to opt-out of the automated consequential decision process and for the decision to be made by a human representative. A consumer may not be punished or face any other adverse action for opting out of a decision by an AI system and the deployer shall render a decision to the consumer within forty-five days.
(d) An end user shall be entitled to no more than one opt-out with respect to the same consequential decision within a six-month period.
Pending 2027-01-01
H-01.4H-01.5
Civ. Rights Law § 86-a(2)(a)-(b)
Plain Language
Within five days after a high-risk AI system has been used for a consequential decision, the deployer must inform the end user and provide an accessible appeal process. The appeal must, at minimum, allow the end user to formally contest the decision, submit supporting information, and obtain meaningful human review. The deployer must respond within 45 days, extendable once by 45 days if reasonably necessary — with notice and reasons provided to the end user. Each end user is limited to one appeal per consequential decision per six-month period. Notably, an end user cannot exercise both the pre-decision opt-out and the post-decision appeal for the same decision.
2. (a) Any deployer that employs a high-risk AI system for a consequential decision shall inform the end user within five days in a clear, conspicuous and consumer-friendly manner if a high-risk AI system has been used to make a consequential decision. The deployer shall then provide and explain a process for the end user to appeal the decision, which shall at minimum allow the end user to (i) formally contest the decision, (ii) provide information to support their position, and (iii) obtain meaningful human review of the decision. A deployer shall respond to an end user's appeal within forty-five days of receipt of the appeal. That period may be extended once by forty-five additional days where reasonably necessary, taking into account the complexity and number of appeals. The deployer shall inform the end user of any such extension within forty-five days of receipt of the appeal, together with the reasons for the delay.
(b) An end user shall be entitled to no more than one appeal with respect to the same consequential decision in a six-month period.
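The appeal timeline (a 45-day response window, one 45-day extension, and the one-appeal-per-six-months cap) can be sketched as follows. Calendar days and a 182-day approximation of six months are assumptions; the bill specifies neither convention.

```python
# Illustrative sketch of the appeal clock; calendar days assumed.
from datetime import date, timedelta

RESPONSE_DAYS = 45
EXTENSION_DAYS = 45

def appeal_response_deadline(received: date, extended: bool = False) -> date:
    """45 days from receipt, extendable once by 45 days where reasonably
    necessary (with notice of the extension and its reasons given within the
    initial 45-day period)."""
    return received + timedelta(days=RESPONSE_DAYS + (EXTENSION_DAYS if extended else 0))

def may_file_appeal(prior_appeals_same_decision: list[date], today: date) -> bool:
    """Subdivision 2(b): no more than one appeal per consequential decision
    within a six-month period (approximated here as 182 days)."""
    return all((today - prior).days > 182 for prior in prior_appeals_same_decision)
```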
Pending 2025-10-11
H-01.3
GBL § 1552(5)(a)
Plain Language
Before using a high-risk AI decision system to make or substantially contribute to a consequential decision about a consumer, the deployer must provide pre-decision notice including: notification that an AI system will be used, the system's purpose, the nature of the consequential decision, deployer contact information, a plain-language system description, and instructions for accessing the deployer's public summary statement under § 1552(6). This notice must be provided directly to the consumer, in plain language, in all languages the deployer ordinarily uses for consumer communications, and in an accessible format for consumers with disabilities.
5. (a) Beginning on January first, two thousand twenty-seven, and before a deployer deploys a high-risk artificial intelligence decision system to make, or be a substantial factor in making, a consequential decision concerning a consumer, the deployer shall: (i) notify the consumer that the deployer has deployed a high-risk artificial intelligence decision system to make, or be a substantial factor in making, such consequential decision; and (ii) provide to the consumer: (A) a statement disclosing: (I) the purpose of such high-risk artificial intelligence decision system; and (II) the nature of such consequential decision; (B) contact information for such deployer; (C) a description, in plain language, of such high-risk artificial intelligence decision system; and (D) instructions on how to access the statement made available pursuant to paragraph (a) of subdivision six of this section.
Pending 2025-10-11
H-01.1H-01.2H-01.4H-01.5
GBL § 1552(5)(b)-(c)
Plain Language
When a high-risk AI decision system makes or substantially contributes to an adverse consequential decision about a consumer, the deployer must provide: (1) a statement explaining the principal reasons for the adverse decision, including the degree and manner of AI contribution, the type of data processed, and the data source; (2) an opportunity to correct incorrect personal data the system used; and (3) an opportunity to appeal, which must include human review if technically feasible, unless delay would endanger the consumer. All notices must be delivered directly in plain language, in all languages the deployer uses in ordinary business, and in a disability-accessible format.
(b) Beginning on January first, two thousand twenty-seven, a deployer that has deployed a high-risk artificial intelligence decision system to make, or as a substantial factor in making, a consequential decision concerning a consumer shall, if such consequential decision is adverse to the consumer, provide to such consumer: (i) a statement disclosing the principal reason or reasons for such adverse consequential decision, including, but not limited to: (A) the degree to which, and manner in which, the high-risk artificial intelligence decision system contributed to such adverse consequential decision; (B) the type of data that was processed by such high-risk artificial intelligence decision system in making such adverse consequential decision; and (C) the source of such data; and (ii) an opportunity to: (A) correct any incorrect personal data that the high-risk artificial intelligence decision system processed in making, or as a substantial factor in making, such adverse consequential decision; and (B) appeal such adverse consequential decision, which shall, if technically feasible, allow for human review unless providing such opportunity is not in the best interest of such consumer, including, but not limited to, in instances in which any delay might pose a risk to the life or safety of such consumer. (c) The deployer shall provide the notice, statements, information, description, and instructions required pursuant to paragraphs (a) and (b) of this subdivision: (i) directly to the consumer; (ii) in plain language; (iii) in all languages in which such deployer, in the ordinary course of such deployer's business, provides contracts, disclaimers, sale announcements, and other information to consumers; and (iv) in a format that is accessible to consumers with disabilities.
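A deployer preparing the adverse-decision statement under § 1552(5)(b) needs a fixed set of components on hand before the notice goes out. The sketch below models them as a simple record with a completeness check; the field names are paraphrases for illustration, not statutory language.

```python
# Illustrative record of the components an adverse-decision statement must carry;
# field names are paraphrases, not statutory terms.
from dataclasses import dataclass, fields

@dataclass
class AdverseDecisionStatement:
    principal_reasons: list[str]      # principal reason(s) for the adverse decision
    ai_contribution: str              # degree to which, and manner in which, the system contributed
    data_types_processed: list[str]   # types of data the system processed
    data_sources: list[str]           # sources of that data
    correction_instructions: str      # how to correct incorrect personal data
    appeal_instructions: str          # how to appeal, with human review if technically feasible

def missing_components(stmt: AdverseDecisionStatement) -> list[str]:
    """Names of any empty required components."""
    return [f.name for f in fields(stmt) if not getattr(stmt, f.name)]
```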
Pending 2025-09-05
H-01.6
Gen. Bus. Law § 1154
Plain Language
Before any news media content created in whole or in material part by generative AI may be published, a human worker must review it and have the authority to approve, deny, or modify the AI's output. This is a mandatory human-in-the-loop requirement — not just a review right but an affirmative gating condition on publication. The human reviewer must have genuine override authority, not merely a rubber-stamp role. The provision references the § 1153 disclosure, linking the human oversight requirement to the consumer disclosure obligation — content that passes human review and is published must still carry the AI disclosure label (unless copyright-eligible).
Oversight of artificial intelligence systems. Any news media content, including stories, articles, audio, visuals or images, which are created in whole or in material part by generative artificial intelligence shall be reviewed by a human worker who has the authority to approve, deny, or modify any decision recommended or made by the automated system before such content may be published with the disclosure under section eleven hundred fifty-three of this article.
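The publication gate described above is a classic human-in-the-loop pattern: nothing ships until a named human with approve, deny, or modify authority has affirmatively acted. A minimal Python sketch follows; the ReviewDecision values and the publish callback are assumptions used for illustration, not terms from the bill.

```python
# Illustrative human-in-the-loop gate: AI-assisted content is not published until
# a human reviewer with approve/deny/modify authority has affirmatively acted.
from dataclasses import dataclass
from enum import Enum
from typing import Callable, Optional

class ReviewDecision(Enum):
    APPROVE = "approve"
    DENY = "deny"
    MODIFY = "modify"

@dataclass
class HumanReview:
    reviewer: str
    decision: ReviewDecision
    modified_content: Optional[str] = None

def publish_ai_content(content: str, review: Optional[HumanReview],
                       publish: Callable[[str], None]) -> bool:
    """Publish only after an affirmative human decision; denial blocks publication."""
    if review is None:
        raise PermissionError("AI-generated content requires human review before publication")
    if review.decision is ReviewDecision.DENY:
        return False
    final = content
    if review.decision is ReviewDecision.MODIFY and review.modified_content is not None:
        final = review.modified_content
    publish(final)  # the published version must still carry the required AI disclosure
    return True
```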
Pending 2026-07-22
H-01.3
Exec. Law § 296(23)(b)-(c)
Plain Language
Employers must notify employees when artificial intelligence is being used for any of the employment purposes covered by the statute (recruitment, hiring, promotion, discharge, discipline, etc.). Failure to provide this notice is itself an independent unlawful discriminatory practice—separate from and in addition to any substantive discrimination. The specific timing, format, and triggering conditions for the notice will be determined by Division of Human Rights rulemaking. Until regulations are adopted, employers should err on the side of providing clear written notice before or at the time AI is used in any covered employment decision. The delegation to the Division means compliance specifics may change; employers should monitor rulemaking.
(b) It shall be an unlawful discriminatory practice for an employer to fail to provide notice to an employee that such employer is using artificial intelligence for the purposes described in paragraph (a) of this subdivision. (c) The division shall adopt any rules or regulations necessary for the implementation and enforcement of this subdivision, including, but not limited to, rules on the circumstances and conditions that require notice, the time period for providing such notice and the means for providing such notice.
Pre-filed 2025-11-01
H-01.6
63 O.S. § 5503(B)
Plain Language
The qualified end-user must retain actual authority to amend or overrule any output from the AI device based on their independent professional judgment. Deployers and other entities are prohibited from pressuring the physician or other qualified end-user to ignore or alter that professional judgment regarding AI outputs. This is not merely a right to override; it is an affirmative prohibition on institutional interference with the end-user's professional judgment regarding AI device outputs.
B. The qualified end-user of the AI device shall retain authority to amend or overrule outputs from the device based on their professional judgment, and without pressure from the deployer or any other entity to ignore or alter professional judgement.
Pre-filed 2026-11-01
H-01.6
36 O.S. § 6567(D)
Plain Language
When a health benefit plan initially uses AI tools in a utilization review, any clinical peer reviewer involved in that process must personally open and review the individual enrollee's clinical records or data before issuing an adverse determination. The reviewer must also document that this review occurred. This creates a mandatory human-review-before-adverse-action requirement specifically for AI-assisted utilization reviews, ensuring that no adverse determination issues without a documented human review of individualized clinical data.
D. A clinical peer reviewer who participates in a utilization review process for a health benefit plan that initially uses artificial intelligence tools for a utilization review shall open and document the utilization review of the individual clinical records or data prior to issuing an adverse determination.
Pending 2026-03-10
H-01.3H-01.4
Section 4(a)-(c)
Plain Language
When an AI-assisted consumer interaction involves a high-impact decision, the business entity must (1) clearly and conspicuously notify the consumer of their right to request human review, and (2) honor any such request by commencing the review within 14 days and delivering a final decision within 28 days. High-impact decisions cover those materially affecting legal rights, employment, housing, credit, education, health care, or government benefits access. The human review right is consumer-invoked — it does not require pre-clearance by a human before the AI-informed decision takes effect. Unlike Section 3(c)'s access-to-human provision, this right has no 'reasonably available' qualifier and contains mandatory timelines.
(a) Right to human review.--A consumer shall have the right to request that a human representing the business entity review any consumer interaction involving a high-impact decision. (b) Notice.--When the conditions under section 3 are met requiring the disclosure of the use of artificial intelligence in a consumer interaction and involve a high-impact decision, the business entity shall disclose in a clear and conspicuous manner that the consumer has a right to request a human review by the business entity involving the high-impact decision. (c) Time frame.--A business entity shall commence the human review not later than 14 days after the request for a human review is made. The human review shall be completed and the decision delivered to the requester not later than 28 days after the request for a human review is made.
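Both milestones run from the consumer's request, not from the underlying decision. A small timing sketch, assuming calendar days since the bill does not say business days:

```python
# Illustrative milestone calculation; calendar days assumed.
from datetime import date, timedelta

def human_review_milestones(request_date: date) -> dict[str, date]:
    return {
        "review_must_commence_by": request_date + timedelta(days=14),
        "decision_must_be_delivered_by": request_date + timedelta(days=28),
    }

# Example: a request made 2026-03-10 must be taken up by 2026-03-24 and
# resolved, with the decision delivered, by 2026-04-07.
```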
Pending 2026-02-12
H-01.1H-01.2H-01.3
§ 28-5.2-2(c)
Plain Language
Before using any electronic monitoring tool, employers must provide prior written notice to all affected candidates and employees, obtain written acknowledgment, and conspicuously post the notice. The notice must be comprehensive — covering the monitoring purpose, specific data collected, monitoring schedule, whether data feeds into an ADS or employment decisions, use in discipline or litigation, productivity assessment use, data storage location and retention period, a least-invasiveness explanation, the employee's right to refuse data sale/transfer/disclosure, and how to exercise statutory rights. This is a pre-deployment transparency obligation — monitoring may not begin until notice is given and acknowledgment obtained.
(c) Any employer that uses an electronic monitoring tool shall give prior written notice and shall obtain written acknowledgment from all candidates and employees subject to electronic monitoring and shall also post said notice in a conspicuous place which is readily available for viewing by candidates for employment and employees. Such notice shall include, at a minimum, the following: (1) A description of the purpose for which the electronic monitoring tool will be used, as specified in subsection (a)(1) of this section; (2) A description of the specific employee data to be collected, stored, secured, and disposed of (and the schedule therefor), and the activities, locations, communications, and job roles that will be electronically monitored by the tool; (3) A description of the dates, times, and frequency that electronic monitoring will occur; (4) Whether and how any employee data collected by the electronic monitoring tool will be used as an input in an automated decision system; (5) Whether and how any employee data collected by the electronic monitoring tool will alone or in conjunction with an automated decision system be used to make an employment decision by the employer or employment agency; (6) Whether and how any employee data collected by the electronic monitoring tool may be stored and utilized in discipline, in internal policy compliance, in administrative agency adjudications, in litigation (whether or not it involves the employee or not as a party); (7) Whether any employee data collected by the electronic monitoring tool will be used to assess employees' productivity performance or to set productivity standards, and if so, how; (8) A description of where any employee data collected by the electronic monitoring tool will be stored and the length of time it will be retained; (9) An explanation for how the specific electronic monitoring practice is the least invasive means available to accomplish the monitoring purpose; (10) That an employee is entitled to notice and maintains the right to refuse the sale, transfer, or disclosure of their employee data, subject to the provisions of subsection (g) of this section; and (11) A clear and reasonably understandable description of how an employee can exercise the rights described in this chapter.
Pending 2026-02-12
H-01.6
§ 28-5.2-2(i)
Plain Language
Employers may not rely primarily on electronic monitoring data for hiring, promotion, discipline, termination, or compensation decisions. Three affirmative obligations apply: (1) the employer must establish meaningful human oversight — which requires designating an internal reviewer with ADS expertise, authority to reject outputs, and adequate time/resources; (2) the human decision-maker must verify data accuracy, address pending correction requests, and exercise independent judgment; and (3) the human must consider non-monitoring information such as supervisor evaluations, personnel files, work products, or peer reviews. This effectively prevents automated employment decisions based solely on surveillance data.
(i) An employer shall not rely primarily on employee data collected through electronic monitoring, when making hiring, promotion, disciplinary decisions up to and including termination, or compensation decisions. For an employer to satisfy the requirements of this subsection: (1) An employer shall establish meaningful human oversight of such decisions that are based, in whole or in part, on data collected through electronic monitoring. (2) A human decision-maker shall review any information collected through electronic monitoring, verify that such information is accurate and up to date, review any pending employee requests to correct erroneous data, and exercise independent judgment in making each such decision; and (3) The human decision-maker shall consider information other than information collected through electronic monitoring, when making each such decision including, but not limited to, supervisory or managerial evaluations, personnel files, employee work products, or peer reviews.
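The "no primary reliance" rule translates naturally into a pre-decision gate: before a covered employment decision is finalized, the record should show a designated human reviewer, verified data, resolved correction requests, and evidence drawn from sources other than monitoring. The following Python sketch is illustrative; the record structure is an assumption, not statutory text.

```python
# Illustrative pre-decision gate for the "no primary reliance" rule.
from dataclasses import dataclass, field

@dataclass
class EmploymentDecisionRecord:
    monitoring_evidence: list[str] = field(default_factory=list)      # ADS / monitoring-derived inputs
    non_monitoring_evidence: list[str] = field(default_factory=list)  # evaluations, personnel file, work product, peer review
    data_verified_current: bool = False                               # reviewer verified accuracy and currency
    pending_corrections_resolved: bool = False
    human_reviewer: str = ""                                          # designated reviewer exercising independent judgment

def oversight_gaps(record: EmploymentDecisionRecord) -> list[str]:
    gaps = []
    if not record.human_reviewer:
        gaps.append("no designated human reviewer")
    if not record.non_monitoring_evidence:
        gaps.append("no information beyond electronic monitoring was considered")
    if not record.data_verified_current:
        gaps.append("monitoring data not verified as accurate and up to date")
    if not record.pending_corrections_resolved:
        gaps.append("pending employee correction requests not addressed")
    return gaps
```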
Pending 2026-02-12
H-01.1H-01.2
§ 28-5.2-2(j)
Plain Language
When an employer makes a hiring, promotion, termination, disciplinary, or compensation decision based in whole or in part on electronic monitoring data, the employer must disclose four categories of information to the affected employee and their authorized representative within 30 days: (1) that monitoring data was used, (2) the specific tools used and how they gather and analyze data, (3) the specific data and judgments derived from it, and (4) any non-monitoring information also used. This is a post-decision explanation obligation — it is triggered by the decision, not by an employee request, and has a fixed 30-day timeline.
(j) When an employer makes a hiring, promotion, termination, disciplinary or compensation decision, based, in whole or in part, on data gathered through the use of electronic monitoring, it shall disclose to affected employees and their authorized representative within thirty (30) days of the decision being made or going into effect, whichever is sooner: (1) That the decision was based, in whole or in part, on data gathered through electronic monitoring; (2) The specific electronic monitoring tool or tools used to gather such data, how the tools work to gather and analyze the data, and the increments of time in which the data is gathered; (3) The specific data, and judgments based upon such data, used in the decision-making process; and (4) Any information used in the decision-making process gathered through sources other than electronic monitoring.
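The 30-day disclosure clock runs from the earlier of the decision being made or going into effect. A one-function sketch, assuming calendar days:

```python
# Illustrative deadline computation; calendar days assumed.
from datetime import date, timedelta

def disclosure_deadline(decision_made: date, decision_effective: date) -> date:
    """Thirty days from the decision being made or going into effect,
    whichever is sooner."""
    return min(decision_made, decision_effective) + timedelta(days=30)

# Example: a decision made 2026-05-01 that takes effect 2026-05-15 must be
# disclosed by 2026-05-31.
```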
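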
Pending 2026-02-06
H-01.1H-01.6
§ 28-5.2-2(i)-(j)
Plain Language
Employers may not rely primarily on electronic monitoring data when making hiring, promotion, discipline, termination, or compensation decisions. Every such decision must involve meaningful human oversight: a designated internal reviewer with expertise in the ADS, familiarity with the most recent impact assessment, authority to dispute or reject outputs, and sufficient time and resources. The human decision-maker must independently verify accuracy, address pending correction requests, exercise independent judgment, and consider non-monitoring information (supervisory evaluations, personnel files, work products, peer reviews). When such a decision is made, the employer must disclose to the affected employee and their authorized representative within 30 days: that monitoring data was used, which tools were used and how they work, the specific data and judgments relied upon, and any non-monitoring information considered.
(i) An employer shall not rely primarily on employee data collected through electronic monitoring, when making hiring, promotion, disciplinary decisions up to and including termination, or compensation decisions. For an employer to satisfy the requirements of this subsection: (1) An employer shall establish meaningful human oversight of such decisions that are based, in whole or in part, on data collected through electronic monitoring. (2) A human decision-maker shall review any information collected through electronic monitoring, verify that such information is accurate and up to date, review any pending employee requests to correct erroneous data, and exercise independent judgment in making each such decision; and (3) The human decision-maker shall consider information other than information collected through electronic monitoring, when making each such decision including, but not limited to, supervisory or managerial evaluations, personnel files, employee work products, or peer reviews. (j) When an employer makes a hiring, promotion, termination, disciplinary or compensation decision, based, in whole or in part, on data gathered through the use of electronic monitoring, it shall disclose to affected employees and their authorized representative within thirty (30) days of the decision being made or going into effect, whichever is sooner: (1) That the decision was based, in whole or in part, on data gathered through electronic monitoring; (2) The specific electronic monitoring tool or tools used to gather such data, how the tools work to gather and analyze the data, and the increments of time in which the data is gathered; (3) The specific data, and judgments based upon such data, used in the decision-making process; and (4) Any information used in the decision-making process gathered through sources other than electronic monitoring.
Pending
H-01.1H-01.2H-01.3H-01.4H-01.5
S.C. Code § 37-31-30(D)
Plain Language
Before making a consequential decision using a high-risk AI system, deployers must notify the consumer that AI will be used, disclose the system's purpose and the nature of the decision, provide deployer contact information and a plain-language description of the system, and inform the consumer of any applicable opt-out rights. If the decision is adverse, the deployer must additionally provide: the principal reasons for the decision (including how the AI contributed, what data types were used, and their sources), an opportunity to correct inaccurate personal data, and an opportunity to appeal with human review if technically feasible. All notices must be delivered directly, in plain language, in all languages the deployer normally uses for business communications, and in disability-accessible formats. If direct delivery is impossible, the deployer must use a method reasonably calculated to reach the consumer.
(D)(1) No later than the time that a deployer deploys a high-risk artificial intelligence system to make, or be a substantial factor in making, a consequential decision concerning a consumer, the deployer shall: (a) notify the consumer that the deployer has deployed a high-risk artificial intelligence system to make, or be a substantial factor in making, a consequential decision before the decision is made; (b) provide to the consumer a statement disclosing the purpose of the high-risk artificial intelligence system and the nature of the consequential decision; the contact information for the deployer; a description, in plain language, of the high-risk artificial intelligence system; and instructions on how to access the statement required by this item; and (c) provide to the consumer information, if applicable, regarding the consumer's right to opt out of the processing of personal data concerning the consumer for purposes of profiling in furtherance of decisions that produce legal or similarly significant effects concerning the consumer pursuant to Section 30-31-60(A)(1)(a)(iii). (2) A deployer that has deployed a high-risk artificial intelligence system to make, or be a substantial factor in making, a consequential decision concerning a consumer shall, if the consequential decision is adverse to the consumer, provide to the consumer: (a) a statement disclosing the principal reason or reasons for the consequential decision, including: (i) the degree to which, and manner in which, the high-risk artificial intelligence system contributed to the consequential decision; (ii) the type of data that was processed by the high-risk artificial intelligence system in making the consequential decision; and (iii) the source or sources of the data described in item (2)(a)(ii); (b) an opportunity to correct any incorrect personal data that the high-risk artificial intelligence system processed in making, or as a substantial factor in making, the consequential decision; and (c) an opportunity to appeal an adverse consequential decision concerning the consumer arising from the deployment of a high-risk artificial intelligence system, which appeal must, if technically feasible, allow for human review unless providing the opportunity for appeal is not in the best interest of the consumer, including in instances in which any delay might pose a risk to the life or safety of such consumer. (3)(a) Except as provided in subitem (b), a deployer shall provide the notice, statement, contact information, and description required by items (1) and (2): (i) directly to the consumer; (ii) in plain language; (iii) in all languages in which the deployer, in the ordinary course of the deployer's business, provides contracts, disclaimers, sale announcements, and other information to consumers; and (iv) in a format that is accessible to consumers with disabilities. (b) If the deployer is unable to provide the notice, statement, contact information, and description required by items (1) and (2) directly to the consumer, the deployer shall make the notice, statement, contact information, and description available in a manner that is reasonably calculated to ensure that the consumer receives the notice, statement, contact information, and description.
Pending 2026-07-01
H-01.6
Va. Code § 19.2-11.14(B)
Plain Language
Every decision regarding pre-trial detention, release, prosecution, adjudication, sentencing, probation, parole, correctional supervision, or rehabilitation must be made by the responsible judicial officer or authorized human decision-maker — never by an AI system alone. AI-based tools may inform these decisions, but a human must always be involved. Any AI recommendation or prediction used in such decisions remains subject to all legal challenges or objections available under existing law. This provision predates the new subsections added by HB 1294 but is reenacted as part of the amended section.
B. All decisions related to the pre-trial detention or release, prosecution, adjudication, sentencing, probation, parole, correctional supervision, or rehabilitation of criminal offenders shall be made by the judicial officer or other person charged with making such decision. No such decision shall be made without the involvement of a human decision-maker. The use of any recommendation or prediction from an artificial intelligence-based tool shall be subject to any challenge or objection permitted by law.
Pending 2026-07-01
H-01.3H-01.1
§ 2.2-1202.2(B)(2)
Plain Language
State agencies that use an automated decision system as a substantial factor in employment decisions must disclose five categories of information to affected individuals: (1) that an automated system is being used, (2) its intended use (e.g., evaluating candidates, compensation, promotion), (3) data input types and sources, (4) how the system fits into the agency's decision-making process, and (5) whether personal data will be shared with third parties or fed back into the system. This is a pre-decision notification and explanation obligation — the individual must be informed both that an automated system is in play and how it operates.
Disclose (i) the fact that an automated decision system is being used; (ii) the intended use of the automated decision system, including evaluating job candidates, making compensation decisions, or considering employees for promotion; (iii) the type of data inputs received by the automated decision system and the source of such data; (iv) how the automated decision system will be used in the state agency's decision-making processes; and (v) the extent to which an individual's personal data will be shared with third parties or used as future inputs for the automated decision system;
Pending 2026-07-01
H-01.6
§ 2.2-1202.2(C)
Plain Language
State agencies are categorically prohibited from making any employment decision without a human decision maker in the loop. An automated decision system's recommendation or prediction cannot be the sole basis for any employment decision — including recruitment, hiring, promotion, discipline, or termination. This is a mandatory human-in-the-loop requirement, not merely a right the individual must invoke. The prohibition applies regardless of whether the individual requests human review.
No employment decision shall be made by a state agency without the involvement of a human decision maker. No state agency shall solely use any recommendation or prediction from an automated decision system to make an employment decision.
Pending 2026-07-01
H-01.5
§ 2.2-1202.2(D)
Plain Language
The Department of Human Resource Management must create and publicly advertise a formal complaint process specifically for concerns about automated decision system use in state employment decisions. This process must include both a mechanism for filing concerns and a process for investigating and resolving them. It must be separate from the existing state employee dispute resolution process under § 2.2-1202.1, ensuring a dedicated channel for AI-related employment complaints.
The Department shall establish and publicize a process for applicants for employment and employees to file concerns and complaints regarding the use of automated decision systems in the Commonwealth's employment decisions and a process for the investigation and resolution of any such concerns and complaints. Such process shall be separate and apart from the dispute resolution process described in § 2.2-1202.1.
Pending 2026-07-01
H-01.3H-01.1
§ 15.2-1500.2(B)(2)
Plain Language
Local government entities that use an automated decision system as a substantial factor in employment decisions must make the same five-category disclosure as state agencies: that an automated system is used, its intended purpose, data input types and sources, how the system fits into the entity's decision-making, and whether personal data will be shared with third parties or recycled into the system. This mirrors the state agency obligation in § 2.2-1202.2(B)(2) but applies to all departments, offices, boards, commissions, agencies, and instrumentalities of local government.
Disclose (i) the fact that an automated decision system is being used; (ii) the intended use of the automated decision system, including evaluating job candidates, making compensation decisions, or considering employees for promotion; (iii) the type of data inputs received by the automated decision system and the source of such data; (iv) how the automated decision system will be used in the decision-making processes of the department, office, board, commission, agency, or instrumentality of local government; and (v) the extent to which an individual's personal data will be shared with third parties or used as future inputs for the automated decision system;
Pending 2026-07-01
H-01.6
§ 15.2-1500.2(C)
Plain Language
Local government entities are categorically prohibited from making any employment decision without a human decision maker involved. Automated system recommendations or predictions cannot be the sole basis for any employment decision. This mirrors the state agency human-in-the-loop requirement and is a mandatory structural requirement, not a right the individual must invoke.
No employment decision shall be made by a department, office, board, commission, agency, or instrumentality of local government without the involvement of a human decision maker. No department, office, board, commission, agency, or instrumentality of local government shall solely use any recommendation or prediction from an automated decision system to make an employment decision.
Pending 2026-07-01
H-01.5
§ 15.2-1500.2(D)
Plain Language
Each local government entity using automated decision systems as a substantial factor in employment decisions must create and publicize a formal complaint and investigation process. Unlike the state agency version, each local entity is responsible for establishing its own process — there is no centralized Department equivalent. The process must cover both filing of concerns and their investigation and resolution.
Any department, office, board, commission, agency, or instrumentality of local government that uses an automated decision system as a substantial factor in any employment decision shall establish and publicize a process for applicants for employment and employees to file concerns and complaints regarding the use of automated decision systems in employment decisions and a process for the investigation and resolution of any such concerns and complaints.
Pending 2026-07-01
H-01.6
§ 40.1-28.7:12(B)-(C)
Plain Language
Private employers are prohibited from making any employment decision without a human decision maker. No employer may solely rely on an automated decision system's recommendation or prediction for any final employment decision — covering recruitment, hiring, promotion, discipline, and termination. Note that the private employer definition of 'employment decision' is narrower than the government versions, requiring a 'final' decision. Knowing violations are subject to civil penalties up to $500 for a first offense and $1,500 for subsequent violations, assessed by the Commissioner of Labor and Industry after a notice-and-conference process. The Commissioner may also seek injunctive relief in circuit court. The penalty assessment considers business size and violation gravity.
B. No employment decision shall be made by an employer without the involvement of a human decision maker. No employer shall solely use any recommendation or prediction from an automated decision system to make an employment decision. C. Any employer that knowingly violates the provisions of this section shall be subject to a civil penalty not to exceed $500 for a first violation and $1,500 for each subsequent violation. The Commissioner shall notify any employer that he alleges has violated the provisions of this section by certified mail. Such notice shall contain a description of the alleged violation. Within 15 days of receipt of notice of the alleged violation, the employer may request an informal conference regarding such violation with the Commissioner. In determining the amount of any penalty to be imposed, the Commissioner shall consider the size of the business of the employer charged and the gravity of the violation. The decision of the Commissioner shall be final. Civil penalties under this section shall be assessed by the Commissioner and paid to the Literary Fund. The Commissioner shall prescribe procedures for the payment of proposed penalties that are not contested by employers. E. The Commissioner or his authorized representative shall have the right to petition a circuit court for injunctive or such other relief as may be necessary for the enforcement of this section.
Pre-filed 2025-07-01
H-01.1H-01.3H-01.6
21 V.S.A. § 495q(f)(1)-(4)
Plain Language
Employers face both prohibitions and conditions when using automated decision systems (ADS) for employment-related decisions. Five categorical prohibitions apply: the ADS may not violate law, predict behavior unrelated to essential job functions, profile legal-rights exercise likelihood, predict emotions or personality, or use customer/client data as input. Health-related ADS outputs may not be used for employment decisions. Employers may never solely rely on ADS outputs — all ADS-informed decisions must be corroborated by human oversight (supervisory observations, personnel records, coworker consultations), the employer must have completed an impact assessment, and the employer must have provided the employee with a detailed pre-use notice covering 10 specified items including system logic, data sources, output types, developer identity, and employee rights.
(f) Restrictions on use of automated decision systems. (1) An employer shall not use an automated decision system in a manner that: (A) violates or results in a violation of State or federal law; (B) makes predictions about an employee's behavior that are unrelated to the employee's essential job functions; (C) identifies, profiles, or predicts the likelihood that an employee will exercise the employee's legal rights; (D) makes predictions about an employee's emotions, personality, or other sentiments; or (E) use customer or client data, including customer or client reviews and feedback, as an input of the automated decision system. (2)(A) An employer shall not solely rely on outputs from an automated decision system when making employment-related decisions. (B) An employer may utilize an automated decision system in making employment-related decisions if: (i) the automated decision system outputs considered in making the employment-related decision are corroborated by human oversight of the employee, including supervisory or managerial observations and documentation of the employee's work, personnel records, and consultations with the employee's coworkers; (ii) the employer has conducted an impact assessment of the automated decision system pursuant to subsection (g) of this section; and (iii) the employer is in compliance with the notice requirements of subdivision (4) of this subsection (f). (3) An employer shall not use any automated decision system outputs regarding an employee's physical or mental health in relation to an employment-related decision. (4) Prior to using an automated decision system to make an employment-related decision about an employee, the employer must provide the employee with a notice that complies with subdivision (c)(3)(A) of this section and, at a minimum, contains the following information: (A) a plain language explanation of the nature, purpose, and scope for which the automated decision system will be used, including the specific employment-related decisions potentially affected; (B) the logic used in the automated decision system, including the key parameters that affect the output of the automated decision system; (C) the specific category and sources of employee input data that the automated decision system will use, including a specific description of any data collected through electronic monitoring; (D) any performance metrics the employer will consider using with the automated decision system; (E) the type of outputs the automated decision system will produce; (F) the individuals or entities that developed the automated decision system; (G) the individual or entities that will operate, monitor, and interpret the results of the automated decision system; (H) information about how an employee can access the results of the most recent impact assessment of the automated decision system; (I) a description of an employee's rights, pursuant to subsection (j) of this section, to access information about the employer's use of the automated decision system and to correct data used by the automated decision system; and (J) a statement that employees are protected from retaliation for exercising the rights described in the notice.
Pending 2025-07-01
H-01.3
9 V.S.A. § 4193c(a)-(b)
Plain Language
Before using an automated decision system for a consequential decision, deployers must provide consumers with a detailed pre-decision notice. The notice must be clear, conspicuous, consumer-friendly, and available in each language the deployer offers its services in. It must describe what personal characteristics the system measures, how it measures them, why they are relevant, what human and automated components contribute to the decision, and include a link to a public webpage with descriptions of the system's outputs, data types and sources, and the most recent impact assessment results. This is a proactive disclosure obligation — it must happen before the system is used, not after.
(a) Any deployer that employs an automated decision system for a consequential decision shall inform the consumer prior to the use of the system for a consequential decision in clear, conspicuous, and consumer-friendly terms, made available in each of the languages in which the company offers its end services, that automated decision systems will be used to make a consequential decision or to assist in making a consequential decision. (b) Any notice provided by a deployer to the consumer pursuant to subsection (a) of this section shall include: (1) a description of the personal characteristics or attributes that the system will measure or assess; (2) the method by which the system measures or assesses those attributes or characteristics; (3) how those attributes or characteristics are relevant to the consequential decisions for which the system should be used; (4) any human components of the system; (5) how any automated components of the system are used to inform the consequential decision; and (6) a direct link to a publicly accessible page on the deployer's website that contains a plain-language description of the: (A) system's outputs; (B) types and sources of data collected from natural persons and processed by the system when it is used to make, or assists in making, a consequential decision; and (C) results of the most recent impact assessment, or an active link to a web page where a consumer can review those results.
Pending 2025-07-01
H-01.1H-01.2
9 V.S.A. § 4193c(c)
Plain Language
After a consequential decision is made using an automated decision system, the deployer must provide the consumer with a single, plain-language notice explaining the decision. The notice must identify the principal reasons for the decision, the developer of the system (if different from the deployer), what the system's output was, how much the system contributed to the decision, what data was processed, how the consumer's personal data specifically informed the outcome, and what the consumer could have done or could do in the future to secure a different decision. This is a post-decision explanation obligation, distinct from the pre-decision notice in subsection (a).
(c) Any deployer that employs an automated decision system for a consequential decision shall provide the consumer with a single notice containing a plain-language explanation of the decision that identifies the principal reason or reasons for the consequential decision, including: (1) the identity of the developer of the automated decision system used in the consequential decision, if the deployer is not also the developer; (2) a description of what the output of the automated decision system is, such as a score, recommendation, or other similar description; (3) the degree and manner to which the automated decision system contributed to the decision; (4) the types and sources of data processed by the automated decision system in making the consequential decision; (5) a plain language explanation of how the consumer's personal data informed the consequential decision; and (6) what actions, if any, the consumer might have taken to secure a different decision and the actions that the consumer might take to secure a different decision in the future.
Pending 2025-07-01
H-01.4H-01.5
9 V.S.A. § 4193c(d)
Plain Language
Deployers must provide consumers with a meaningful appeal process for consequential decisions. At minimum, consumers must be able to formally contest the decision, submit supporting information, and obtain human review. The human reviewer must be trained, impartial, free of conflicts of interest, not involved in the initial decision, and protected from retaliation for exercising their review functions. The deployer must allocate sufficient resources for effective appeals. The deployer must respond within 45 days, with one possible 45-day extension for complex or high-volume appeals. This is an unusually detailed human review requirement — it specifies reviewer qualifications, independence, and anti-retaliation protections, going beyond most comparable state frameworks.
(d)(1) A deployer shall provide and explain a process for a consumer to appeal a decision, which shall at minimum allow the consumer to: (A) formally contest the decision; (B) provide information to support their position; and (C) obtain meaningful human review of the decision. (2) For an appeal made pursuant to subdivision (1) of this subsection: (A) a deployer shall designate a human reviewer who: (i) is trained and qualified to understand the consequential decision being appealed, the consequences of the decision for the consumer, how to evaluate and how to serve impartially, including by avoiding prejudgment of the facts at issue, conflict of interest, and bias; (ii) does not have a conflict of interest for or against the deployer or the consumer; (iii) was not involved in the initial decision being appealed; (iv) shall enjoy protection from dismissal or its equivalent, disciplinary measures, or other adverse treatment for exercising their functions under this section; and (v) shall be allocated sufficient human resources by the deployer to conduct an effective appeal of the decision; and (B) the human reviewer shall consider the information provided by the consumer in their appeal and may consider other sources of information relevant to the consequential decision. (3) A deployer shall respond to a consumer's appeal not later than 45 days after receipt of the appeal. That period may be extended once by an additional 45 days where reasonably necessary, taking into account the complexity and number of appeals. The deployer shall inform the consumer of any extension not later than 45 days after receipt of the appeal, together with the reasons for the delay.
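The reviewer-independence criteria in subdivision (d)(2)(A) amount to an eligibility screen that a deployer could apply before assigning an appeal. The sketch below is illustrative; the attribute names paraphrase the statutory criteria and are not defined terms.

```python
# Illustrative eligibility screen for the designated human reviewer.
from dataclasses import dataclass

@dataclass
class Reviewer:
    trained_on_decision_type: bool      # understands the decision and its consequences
    impartial: bool                     # no prejudgment of the facts or bias
    has_conflict_of_interest: bool      # for or against the deployer or the consumer
    involved_in_initial_decision: bool
    protected_from_retaliation: bool
    allocated_sufficient_resources: bool

def eligible_to_review(r: Reviewer) -> bool:
    return (r.trained_on_decision_type
            and r.impartial
            and not r.has_conflict_of_interest
            and not r.involved_in_initial_decision
            and r.protected_from_retaliation
            and r.allocated_sufficient_resources)
```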
Pending 2027-01-01
H-01.1
Sec. 3(5)(a)-(c)
Plain Language
Deployers must communicate consequential AI decisions to affected consumers without undue delay. When the decision is adverse and relies on personal information beyond what the consumer directly provided, the deployer must additionally provide a statement explaining the principal reasons for the decision, the degree and manner in which the AI contributed, the types of data the system processed, and the sources of that data. Note that the adverse-decision explanation is triggered only when the decision uses personal information from sources other than the consumer; if the decision is based solely on consumer-provided data, the explanation obligation does not apply, though the prompt-notification obligation still does.
(5) A deployer that has deployed a high-risk artificial intelligence system to make a consequential decision concerning a consumer shall transmit to the consumer the consequential decision without undue delay. If such consequential decision is adverse to the consumer and based on personal information beyond information that the consumer provided directly to the deployer, the deployer shall provide to the consumer a statement disclosing the principal reason or reasons for the consequential decision, including: (a) The degree to which and manner in which the high-risk artificial intelligence system contributed to the consequential decision; (b) The type of data that was processed by such system in making the consequential decision; and (c) The sources of such data.
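The explanation duty here has a two-part trigger: the decision must be adverse, and it must rest on personal information beyond what the consumer supplied directly. A minimal sketch of that conditional, for illustration only:

```python
# Illustrative sketch of the two-part trigger for the reason statement.
def explanation_required(adverse: bool, used_data_beyond_consumer_provided: bool) -> bool:
    return adverse and used_data_beyond_consumer_provided

def deployer_duties(adverse: bool, used_data_beyond_consumer_provided: bool) -> list[str]:
    duties = ["transmit the consequential decision without undue delay"]
    if explanation_required(adverse, used_data_beyond_consumer_provided):
        duties.append("provide a statement of principal reasons, the AI system's "
                      "contribution, the data types processed, and their sources")
    return duties
```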
Pending 2026-07-01
H-01.3
Sec. 7(1)-(2)
Plain Language
Beginning July 1, 2026, every time a deployer uses a high-risk AI system to make or substantially factor into a consequential decision about a consumer, the deployer must notify the consumer before the decision is made. The notification must include the system's purpose, the nature of the consequential decisions it makes, the deployer's contact information, and a plain-language description of the system. This is the earliest operative obligation in the bill — it takes effect a full year before the risk management and impact assessment obligations.
Beginning July 1, 2026, each time a deployer deploys a high-risk artificial intelligence system to make, or be a substantial factor in making, a consequential decision concerning a consumer, the deployer shall: (1) Notify the consumer that the deployer has deployed a high-risk artificial intelligence system to make, or be a substantial factor in making, a consequential decision before the decision is made; and (2) Provide to the consumer a statement disclosing: (a) The purpose of the high-risk artificial intelligence system and the nature of the consequential decisions; (b) The contact information for the deployer; and (c) A description, in plain language, of the high-risk artificial intelligence system.
Pending 2027-01-01
H-01.1H-01.3
Sec. 3(4)
Plain Language
Before or at the time a high-risk AI system interacts with a consumer, the deployer must disclose: that the consumer is interacting with AI, the system's purpose and nature, the nature of the consequential decision, deployer contact information, and a plain-language description covering what personal attributes the system measures, how it measures them, their relevance to the decision, human oversight components, and how automated components inform decisions. This is a comprehensive pre-decision disclosure requirement — considerably more detailed than a simple AI identity notice.
(4) Not later than the time that a deployer uses a high-risk artificial intelligence system to interact with a consumer, the deployer shall disclose to the consumer that the consumer is interacting with an artificial intelligence system. At such time, the deployer shall also disclose to the consumer: (a) The purpose of such high-risk artificial intelligence system; (b) The nature of such system; (c) The nature of the consequential decision; (d) The contact information for the deployer; and (e) A description of the artificial intelligence system in plain language, which must include: (i) A description of the personal characteristics or attributes that such system will measure or assess; (ii) The method by which the system measures or assesses such attributes or characteristics; (iii) How such attributes or characteristics are relevant to the consequential decisions for which the system should be used; (iv) Any human components of such system; and (v) How any automated components of such system are used to inform such consequential decisions.
Pending 2027-01-01
H-01.1
Sec. 3(5)
Plain Language
Deployers must transmit consequential decisions to affected consumers without undue delay. When the decision is adverse and relies on personal data beyond what the consumer directly provided, the deployer must also explain the principal reasons for the decision, how much and in what way the AI system contributed, what types of data were used, and where that data came from. This adverse-decision explanation obligation is triggered only when the decision relied on data the consumer did not directly supply; if the decision is based solely on consumer-provided information, only the decision itself must be communicated.
(5) A deployer that has deployed a high-risk artificial intelligence system to make a consequential decision concerning a consumer shall transmit to the consumer the consequential decision without undue delay. If such consequential decision is adverse to the consumer and based on personal data beyond information that the consumer provided directly to the deployer, the deployer shall provide to the consumer a statement disclosing the principal reason or reasons for the consequential decision, including: (a) The degree to which and manner in which the high-risk artificial intelligence system contributed to the consequential decision; (b) The type of data that was processed by such system in making the consequential decision; and (c) The sources of such data.
Pending 2026-07-01
H-01.1H-01.3
Sec. 8(1)-(2)
Plain Language
Each time a deployer uses a high-risk AI system to make or substantially factor into a consequential decision about a Washington consumer, the deployer must — before the decision is made — notify the consumer that AI is being used and provide a statement disclosing: the AI system's purpose and the nature of the consequential decisions it makes, the deployer's contact information, and a plain-language description of the system. This obligation takes effect July 1, 2026, one year earlier than the risk management and impact assessment obligations. The notification must occur on a per-decision basis, not just at onboarding.
Beginning July 1, 2026, each time a deployer deploys a high-risk artificial intelligence system to make, or be a substantial factor in making, a consequential decision concerning a consumer, the deployer shall: (1) Notify the consumer that the deployer has deployed an artificial intelligence system to make, or be a substantial factor in making, a consequential decision before the decision is made; and (2) Provide to the consumer a statement disclosing: (a) The purpose of the high-risk artificial intelligence system and the nature of the consequential decisions; (b) The contact information for the deployer; and (c) A description, in plain language, of the high-risk artificial intelligence system.