When AI systems make or inform consequential decisions about individuals (typically in employment, credit, housing, insurance, healthcare, and public benefits), those individuals must have meaningful rights to understand, review, challenge, and, in high-stakes contexts, override those decisions. The specific rights and processes vary by jurisdiction and context, but the core principle is that individuals should not be subject to consequential automated decisions without meaningful recourse.
(a) During the business hours of 8 a.m. to 6 p.m. daily, an operator of a large private business that provides goods and services to consumers in California shall provide consumers with human customer service support and communications. During these times, an operator shall connect a person interacting with a customer service chatbot, or automated customer support system, to a customer service agent within five minutes after a request for human customer service is made. (b) For telephonic customer service platforms, the business shall ensure all of the following: (1) That a customer call is answered quickly, that a customer is not placed on hold for more than five minutes at any point after the call is answered, and that cumulative hold times for a call do not exceed ten minutes total. (2) If a call is answered by a customer service chatbot, the operator of the telephonic platform shall provide human assistance within five minutes after the call is made. (c) For online customer service platforms, the business shall ensure that a customer is given the option to request customer service assistance from a human being and, upon that request, the operator of the online platform shall provide human assistance within five minutes after the request is made.
(c) An employer shall not use or deploy technology to replace or limit a worker's use of professional judgment in patient care.
(a) If a deployer uses a high-risk automated decision system to make a decision regarding a natural person, the deployer shall notify the natural person of that fact and disclose to that natural person all of the following: (1) The purpose of the high-risk automated decision system and the specific decision it was used to make. (2) How the high-risk automated decision system was used to make the decision. (3) The type of data used by the high-risk automated decision system. (4) Contact information for the deployer. (5) A link to the statement required by subdivision (b).
(c) A deployer shall provide, as technically feasible, a natural person who is the subject of a decision made by a high-risk automated decision system an opportunity to appeal that decision for review by a natural person.
(a) An employer shall provide a written notice that an ADS, for the purpose of making employment-related decisions, not including hiring, is in use at the workplace to a worker who will foreseeably be directly affected by the ADS, or their authorized representative, according to the following: (1) At least 30 days before an ADS is first deployed by the employer. (2) If the employer is using an ADS to assist in making employment-related decisions at the time this title takes effect, no later than April 1, 2026. (3) To a new worker within 30 days of hiring the worker. (c) A written notice required by this section shall be all of the following: (1) Written in plain language as a separate, stand-alone communication. (2) In the language in which routine communications and other information are provided to workers. (3) Provided via a simple and easy-to-use method, including, but not limited to, an email, hyperlink, or other written format. (e) A notice issued pursuant to subdivision (a) shall contain the following information: (1) The type of employment-related decisions potentially affected by the ADS. (2) A general description of the categories of worker input data the ADS will use, the sources of worker input data, and how worker input data will be collected. (3) Any key parameters known to disproportionately affect the output of the ADS. (4) The individuals, vendors, or entities that created the ADS. (5) If applicable, a description of each quota set or measured by an ADS to which the worker is subject, including the quantified number of tasks to be performed or products to be produced, and any potential adverse employment action that could result from failure to meet the quota, as well as whether those quotas are subject to change and if any notice is given of changes in quotas. (6) A description of the worker's right to access and correct the worker's data used by the ADS. 
(7) That the employer is prohibited from retaliating against workers for exercising their rights described in paragraph (6).
(d) An employer shall notify a job applicant upon receiving the application that the employer utilizes an ADS when making hiring decisions, if the employer will use the ADS in making decisions for that position. Notifications may be made using an automatic reply mechanism or on a job posting.
(c) (1) An employer shall not rely solely on an ADS when making a discipline, termination, or deactivation decision. (2) When an employer relies primarily on ADS output to make a discipline, termination, or deactivation decision, the employer shall use a human reviewer to review the ADS output and compile and review other information that is relevant to the decision, if any. For purposes of this paragraph, "other information" may include, but is not limited to, any of the following: (A) Supervisory or managerial evaluations. (B) Personnel files. (C) Work product of workers. (D) Peer reviews. (E) Witness interviews, which may include relevant online customer reviews.
(a) An employer that primarily relied on an ADS to make a discipline, termination, or deactivation decision shall provide the affected worker with a written notice at the time the employer informs the worker of the decision. The notice shall be all of the following: (1) Written in plain language as a separate, stand-alone communication. (2) In the language in which routine communications and other information are provided to workers. (3) Provided via a simple and easy-to-use method, including an email, hyperlink, or other written format. (b) A notice issued pursuant to subdivision (a) shall contain all of the following information: (1) The human to contact for more information about the decision and the ability to request a copy of the worker's own worker data relied on in the decision. (2) That the employer used an ADS to assist the employer in one or more discipline, termination, or deactivation decisions with respect to the worker. (3) That the worker has the right to request a copy of the worker's data used by the ADS. (4) That the employer is prohibited from retaliating against the worker for exercising their rights under this part.
(b) (1) An employer shall not rely solely on an ADS when making a disciplinary, termination, or deactivation decision. (2) If an employer uses an ADS output to assist in making a disciplinary, termination, or deactivation decision, the employer shall direct a human reviewer to conduct an independent investigation and compile corroborating or supporting information for the decision. For purposes of this paragraph, "corroborating or supporting information" may include, but is not limited to, any of the following: (A) Supervisory or managerial evaluations. (B) Personnel files. (C) Work product of workers. (D) Peer reviews. (E) Witness interviews, which may include relevant online customer reviews. (c) If an employer cannot corroborate the ADS output, or the human reviewer has concluded that the ADS output is inaccurate, incomplete, or misleading, the employer shall not use the ADS output to discipline, terminate, or deactivate a worker.
(a) An employer that uses an ADS to assist in making a disciplinary, termination, or deactivation decision shall provide the affected worker with a written post-use notice at the time the employer informs the worker of the decision. The notice shall comply with all of the following: (1) It shall be written in plain language as a separate, stand-alone communication. (2) It shall be in the language in which routine communications and other information are provided to workers. (3) It shall be provided via a simple and easy-to-use method, including an email, hyperlink, or other written format. (b) The post-use notice shall contain all of the following information: (1) That the employer used an ADS to assist the employer in the disciplinary, termination, or deactivation decision with respect to the worker. (2) That a human reviewer conducted an independent investigation and compiled evidence to corroborate the ADS output. (3) Contact information for the human that the worker may contact for more information about the decision and the worker's right to access a copy of their own data and corroborating evidence that was used in the decision. (4) That the employer is prohibited from retaliating against the worker for exercising their rights under this part.
(c) When responding to a data access request pursuant to this section, an employer shall provide to the worker a written, plain language document using a simple and easy-to-use method that is accessible away from the workplace containing all of the following: (1) The specific decision for which the employer used the ADS. (2) The specific worker input data that the ADS used, and the specific worker output produced by the ADS. (3) Any additional corroborating or supporting information used in addition to the ADS output in making the decision. (4) The name of the vendor or entity that created the ADS and the product name of the ADS. (5) A copy of any completed impact assessments regarding the ADS in question.
(1) PRIOR TO A DEPLOYER USING A COVERED ADMT TO MATERIALLY INFLUENCE A CONSEQUENTIAL DECISION, THE DEPLOYER SHALL PROVIDE A CLEAR AND CONSPICUOUS NOTICE TO A CONSUMER THAT THE DEPLOYER USED OR WILL USE A COVERED ADMT IN A CONSEQUENTIAL DECISION AFFECTING THE CONSUMER AND INSTRUCTIONS REGARDING HOW THE CONSUMER MAY OBTAIN THE ADDITIONAL INFORMATION DESCRIBED IN THIS SECTION. (2) A DEPLOYER COMPLIES WITH SUBSECTION (1) OF THIS SECTION BY MAINTAINING A PROMINENT PUBLIC NOTICE THAT IS REASONABLY ACCESSIBLE AT POINTS OF CONSUMER INTERACTION, INCLUDING THROUGH A LINK OR POSTING THAT IS REASONABLY PROXIMATE TO THE INTERACTION OR TRANSACTION IN WHICH A CONSEQUENTIAL DECISION MAY OCCUR.
(3) IF A DEPLOYER USES A COVERED ADMT TO MATERIALLY INFLUENCE A CONSEQUENTIAL DECISION THAT RESULTS IN AN ADVERSE OUTCOME FOR A CONSUMER, THE DEPLOYER SHALL PROVIDE WITHIN THIRTY DAYS AFTER MAKING THE DECISION: (a) A PLAIN LANGUAGE DESCRIPTION OF THE CONSEQUENTIAL DECISION AND THE ROLE THE COVERED ADMT PLAYED IN THE CONSEQUENTIAL DECISION; (b) INSTRUCTIONS AND A SIMPLE-TO-FOLLOW PROCESS TO REQUEST ADDITIONAL INFORMATION ABOUT THE COVERED ADMT AND THE INPUTS, INCLUDING THE NAME OF THE COVERED ADMT, THE COVERED ADMT VERSION NUMBER, IF APPLICABLE, THE COVERED ADMT DEVELOPER, AND THE TYPES, CATEGORIES, AND SOURCES OF PERSONAL DATA USED, TO THE EXTENT THE DEPLOYER RECEIVES THE NECESSARY INFORMATION FROM THE DEVELOPER IN COMPLIANCE WITH SECTION 6-1-1702; AND (c) AN EXPLANATION OF THE CONSUMER RIGHTS DESCRIBED IN SECTION 6-1-1705 AND HOW TO EXERCISE THEM.
(1) (a) WHEN A CONSUMER EXPERIENCES AN ADVERSE OUTCOME RESULTING FROM A CONSEQUENTIAL DECISION IN WHICH A COVERED ADMT MATERIALLY INFLUENCES THE CONSEQUENTIAL DECISION, THE CONSUMER MAY REQUEST AND THE DEPLOYER SHALL PROVIDE IN RESPONSE TO THE REQUEST: ... (II) AN OPPORTUNITY FOR MEANINGFUL HUMAN REVIEW AND RECONSIDERATION OF THE CONSEQUENTIAL DECISION, TO THE EXTENT COMMERCIALLY REASONABLE.
(3) (a) SECTIONS 6-1-1701, 6-1-1702, 6-1-1703, 6-1-1704, 6-1-1705, AND 6-1-1706 DO NOT APPLY TO A COVERED ENTITY WITHIN THE MEANING OF THE FEDERAL "HEALTH INSURANCE PORTABILITY AND ACCOUNTABILITY ACT OF 1996", 42 U.S.C. SECS. 1320d TO 1320d-9, AND THE REGULATIONS PROMULGATED UNDER THE FEDERAL ACT, OR A COVERED ENTITY'S BUSINESS ASSOCIATES FOR ANY SERVICES RENDERED TO A COVERED ENTITY, TO THE EXTENT THE COVERED ENTITY IS DOING BUSINESS IN COLORADO, EXCEPT FOR A CONSEQUENTIAL DECISION RELATED TO EMPLOYMENT OR AN EMPLOYMENT OPPORTUNITY. (b) NOTWITHSTANDING SUBSECTION (3)(a) OF THIS SECTION, FOR A COVERED ENTITY THAT IS A HEALTH-CARE PROVIDER, AS DEFINED IN 45 CFR 160.103, THIS SUBSECTION (3) APPLIES ONLY IF THE HEALTH-CARE PROVIDER IS OPERATING FROM A LOCATION WITHIN COLORADO. (c) A COVERED ENTITY SHALL PROVIDE PATIENTS WITH A GENERAL NOTICE OF USE OF ADVANCED TECHNOLOGIES, INCLUDING A COVERED ADMT. THE NOTICE MAY BE INCORPORATED WITH OTHER NOTICES DESCRIBING PATIENT RIGHTS AND HOW THE COVERED ENTITY PROVIDES CARE. (d) NOTWITHSTANDING SUBSECTION (3)(a) OF THIS SECTION, A COVERED ENTITY THAT USES A COVERED ADMT TO DETERMINE A PATIENT'S ELIGIBILITY FOR FINANCIAL ASSISTANCE, INCLUDING DISCOUNTED CARE AS DESCRIBED IN SECTION 25.5-3-502, SHALL PROVIDE A PATIENT THE FOLLOWING DISCLOSURES: (I) A PLAIN LANGUAGE DESCRIPTION OF THE CONSEQUENTIAL DECISION AND THE ROLE OF THE COVERED ADMT IN THE CONSEQUENTIAL DECISION; (II) THE TYPES OF INFORMATION ABOUT THE INDIVIDUAL THE COVERED ENTITY RELIED UPON IN MAKING ITS DETERMINATION OF ELIGIBILITY, EXCEPT FOR TRADE SECRETS AND OTHER CONFIDENTIAL OR LEGALLY PROTECTED INFORMATION; (III) INFORMATION ON HOW TO REQUEST CORRECTION OF MATERIALLY INACCURATE PERSONAL DATA HELD BY THE COVERED ENTITY CONSISTENT WITH THE FEDERAL "HEALTH INSURANCE PORTABILITY AND ACCOUNTABILITY ACT OF 1996", 42 U.S.C. SECS. 1320d TO 1320d-9 AND SECTION 25.5-3-502; AND (IV) INFORMATION ON HOW TO REQUEST MEANINGFUL HUMAN REVIEW OR RECONSIDERATION, WHERE APPLICABLE. 
(e) A COVERED ENTITY MAY COMPLY WITH SUBSECTION (3)(d) OF THIS SECTION THROUGH EITHER AN ADVANCE GENERAL DISCLOSURE OF THE INFORMATION REQUIRED BY SUBSECTION (3)(d) OF THIS SECTION OR THROUGH A NOTICE PROVIDED WITHIN THIRTY CALENDAR DAYS AFTER AN ADVERSE OUTCOME. THIS SECTION DOES NOT CREATE A SEPARATE AND DUPLICATIVE DISCLOSURE PROCESS OR APPEAL PROCESS IF THE REVIEW OPPORTUNITIES AND INFORMATION DESCRIBED IN SUBSECTION (3)(d) OF THIS SECTION ARE PROVIDED.
(4) (a) On and after June 30, 2026, and no later than the time that a deployer deploys a high-risk artificial intelligence system to make, or be a substantial factor in making, a consequential decision concerning a consumer, the deployer shall:
(b) On and after June 30, 2026, a deployer that has deployed a high-risk artificial intelligence system to make, or be a substantial factor in making, a consequential decision concerning a consumer shall, if the consequential decision is adverse to the consumer, provide to the consumer:
Except as provided in subsection (b) of section 2 of this act, a deployer who has deployed an automated employment-related decision process to make, or be a substantial factor in making, an employment-related decision concerning an applicant for employment or employee in the state shall, before such employment-related decision is made, provide to such applicant or employee a written notice disclosing: (1) That the deployer has deployed an automated employment-related decision process; (2) The purpose of the automated employment-related decision process and the nature of such employment-related decision; (3) Information concerning the right, under subparagraph (C) of subdivision (5) of subsection (a) of section 42-518 of the general statutes, to opt out of the processing of personal data for the purposes set forth in said subparagraph; (4) Contact information for the deployer; (5) The availability of human review pursuant to section 7 of this act; (6) Information concerning how such applicant or employee may request a reevaluation of any employment-related decision made in whole or in part by such automated employment-related decision process; (7) A link to the summary of the most recent bias audit required pursuant to section 8 of this act; and (8) Information concerning how to request additional documentation or information about such automated employment-related decision process.
(a) Except as provided in subsection (b) of section 2 of this act, a deployer who has deployed an automated employment-related decision process to make, or be a substantial factor in making, an employment-related decision concerning an applicant for employment or employee in the state shall, if such employment-related decision is adverse to such applicant or employee, provide to such applicant or employee: (1) A high-level statement disclosing the principal reason or reasons for such adverse employment-related decision, including, but not limited to, (A) the degree to which, and manner in which, the automated employment-related decision process contributed to such adverse employment-related decision, (B) the type of data that were processed by such automated employment-related decision process in making, or as a substantial factor in making, such adverse employment-related decision, and (C) the source of the data described in subparagraph (B) of this subdivision; (2) An opportunity to (A) examine the data the automated employment-related decision process processed in making, or as a substantial factor in making, such adverse employment-related decision, (B) correct any incorrect data described in subparagraph (A) of this subdivision, and (C) appeal such adverse employment-related decision if such adverse employment-related decision is based upon any incorrect data described in subparagraph (A) of this subdivision. Such appeal shall allow for human review; and (3) Upon request by such applicant or employee, or such applicant or employee's representative, a copy of the most recent bias audit required pursuant to section 8 of this act. 
(b) A deployer who is required to provide a high-level statement to an applicant for employment or employee in the state pursuant to subdivision (1) of subsection (a) of this section shall provide such statement: (1) Directly to such applicant or employee; (2) In plain language; (3) In all languages in which such deployer, in the ordinary course of such deployer's business, provides contracts, disclaimers, sales announcements and other information to persons in the state; and (4) In a format that is accessible to individuals with disabilities.
(a) For the purposes of this section "human review" means a review conducted by a qualified individual who (1) has the authority to make or change an employment-related decision, (2) understands the capabilities, limitations and risks of the automated employment-related decision process, including, but not limited to, patterns of bias, disparate impact and data quality issues, and (3) does not rely solely on the content, decision, prediction or recommendation generated by the automated employment-related decision process in making a final or determinative employment-related decision. (b) (1) A deployer who has deployed an automated employment-related decision process in making, or as a substantial factor in making, an employment-related decision concerning an applicant for employment or employee in the state shall implement human review over such automated employment-related decision process by providing for review of the content, decisions, predictions or recommendations generated by the automated employment-related decision process and any other information relevant to such content, decision, prediction or recommendation in order to confirm the accuracy of data processed by such automated employment-related decision process and, when appropriate, modify or veto any such content, decision, prediction or recommendation generated by such automated employment-related decision process prior to any adverse employment-related decision. (2) A deployer shall (A) establish procedures necessary to pause, correct or reverse erroneous or harmful content, decisions, predictions or recommendations generated by an automated employment-related decision process, and (B) establish and maintain logs listing all human review reports and any intervention taken by an individual conducting such human review.
(c) No automated employment-related decision process shall be used by a deployer in making a final or determinative employment-related decision without human review over such final or determinative employment-related decision.
(B) For an employer, by the employer or the employer's agent, to fail to provide to any individual advance written notice disclosing, at a minimum, that an automated employment-related decision process will be used to make, to assist in making or in the course of making a decision to hire or employ or to bar or to discharge from employment, or concerning the compensation or terms, conditions or privileges of employment, of such individual. Such notice shall, at a minimum, disclose the trade name of the automated employment-related decision process and the types and sources of personal information concerning the individual that the automated employment-related decision process will process or analyze.
In the city, any employer or employment agency that uses an automated employment decision tool to screen an employee or a candidate who has applied for a position for an employment decision shall notify each such employee or candidate who resides in the city of the following: 1. That an automated employment decision tool will be used in connection with the assessment or evaluation of such employee or candidate who resides in the city. Such notice shall be made no less than ten business days before such use and allow a candidate to request an alternative selection process or accommodation; 2. The job qualifications and characteristics that such automated employment decision tool will use in the assessment of such candidate or employee. Such notice shall be made no less than ten business days before such use;
No later than the time that a deployer deploys an automated decision system to make, or assist in making, a consequential decision concerning a consumer, the deployer shall: (1) Notify the consumer that the deployer has deployed an automated decision system to make, or assist in making, a consequential decision; and (2) Provide to the consumer: (A) A statement disclosing the purpose of the automated decision system and the nature of the consequential decision; (B) The contact information for the deployer; (C) A description, in plain language, of the automated decision system, which description shall, at a minimum, include: (i) A description of the personal characteristics or attributes that the system will measure or assess; (ii) The method by which the system measures or assesses those attributes or characteristics; (iii) How those attributes or characteristics are relevant to the consequential decisions for which the system should be used; (iv) Any human components of such system; (v) How any automated components of such system are used to inform such consequential decision; and (vi) A direct link to a publicly accessible page on the deployer's public website that contains a plain-language description of the logic used in the system, including the key parameters that affect the output of the system; the system's outputs; the types and sources of data collected from natural persons and processed by the system when it is used to make, or assists in making, a consequential decision; and the results of the most recent impact assessment, or an active link to a web page where a consumer can review those results; and (D) Instructions on how to access the statement required by Code Section 10-16-5.
(b) A deployer that has used an automated decision system to make, or assist in making, a consequential decision concerning a consumer shall transmit to such consumer within one business day after such decision a notice that includes: (1) A specific and accurate explanation that identifies the principal factors and variables that led to the consequential decision, including: (A) The degree to which, and manner in which, the automated decision system contributed to the consequential decision; (B) The source or sources of the data processed by the automated decision system; and (C) A plain-language explanation of how the consumer's personal data informed these principal factors and variables when the automated decision system made, or assisted in making, the consequential decision; (2) Information about consumers' right to correct, and how the consumer can submit corrections and provide supplementary information relevant to, the consequential decision; (3) What actions, if any, the consumer might have taken to secure a different decision and the actions that the consumer might take to secure a different decision in the future; (4) Information on opportunities to correct any incorrect personal data that the automated decision system processed in making, or assisting in making, the consequential decision; and (5) Information on opportunities to appeal an adverse consequential decision concerning the consumer arising from the deployment of an automated decision system, which appeal shall, if technically feasible, allow for human review. 
(c)(1) A deployer shall provide the notice, statement, contact information, and description required by subsections (a) and (b) of this Code section: (A) Directly to the consumer; (B) In plain language; (C) In all languages in which the deployer, in the ordinary course of the deployer's business, provides contracts, disclaimers, sale announcements, and other information to consumers; and (D) In a format that is accessible to consumers with disabilities. (2) If the deployer is unable to provide the notice, statement, contact information, and description directly to the consumer, the deployer shall make such information available in a manner that is reasonably calculated to ensure that the consumer receives it. (d) No deployer shall use an automated decision system to make, or assist in making, a consequential decision if it cannot provide notices and explanations that satisfy the requirements of this Code section.
(a) Before using an artificial intelligence system to make, or be a substantial factor in making, a consequential decision, a health care provider shall provide the patient or the patient's authorized representative, as applicable, with a written notice that: (1) Informs the recipient that the health care provider will be using an artificial intelligence system to make, or be a substantial factor in making, the consequential decision; (2) Discloses the purpose of the artificial intelligence system and the nature of the consequential decision; (3) Describes the artificial intelligence system in plain language; and (4) Allows the patient to opt out of the processing of the patient's individually identifiable health information or other personal data for purposes of profiling in furtherance of decisions that have legal or similarly significant effects concerning the patient. (b) Any health care provider that used an artificial intelligence system to make, or be a substantial factor in making, a consequential decision shall provide the patient or the patient's authorized representative, as applicable, with: (1) A written statement that describes the consequential decision and the principal reasons for the consequential decision, including: (A) The degree to which, and manner in which, the artificial intelligence system contributed to the consequential decision; (B) The type of data that was processed by the artificial intelligence system in making the consequential decision; and (C) The sources of the data described in paragraph (B); (2) An opportunity to correct any incorrect health information or personal data that the artificial intelligence system processed in making, or as a substantial factor in making, the consequential decision; and (3) An opportunity to appeal the consequential decision, including allowing, to the extent technically feasible, human review of all information relating to the consequential decision; provided that this paragraph shall not apply if 
providing the opportunity for appeal is not in the best interest of the patient, including in instances in which any delay might pose a risk to the life or safety of the patient. (c) The notice and statement required pursuant to subsections (a) and (b), respectively, shall be provided directly to the patient or the patient's authorized representative, as applicable; provided that if the health care provider is unable to comply with this requirement, the health care provider shall provide the notice or statement in a manner that is reasonably calculated to ensure that the patient or the patient's authorized representative, as applicable, receives the notice or statement.
(a) Any health care provider that uses an artificial intelligence system to make, or be a substantial factor in making, a consequential decision shall maintain an artificial intelligence oversight personnel. (b) The artificial intelligence oversight personnel: (1) Shall be a natural person; (2) Shall have the qualifications, experience, and expertise necessary to effectively evaluate outputs, including but not limited to any information, data, assumptions, predictions, scoring, recommendations, decisions, or conclusions generated by artificial intelligence systems in the field of health care; and (3) May be retained by contracting with a third-party. (c) The artificial intelligence oversight personnel shall: (1) Monitor the artificial intelligence systems used by the health care provider; and (2) Before the health care provider uses an output generated by an artificial intelligence system to make, or be a substantial factor in making, a consequential decision: (A) Review and evaluate the output; and (B) Validate or override the output.
1. An employer shall provide a written notice that an automated decision system is in use for the purpose of making employment-related decisions, other than hiring decisions, at the workplace to an employee who will foreseeably be directly affected by the automated decision system, or the employee's authorized representative. The employer shall provide the notice by the following dates: a. At least thirty days before an automated decision system is first deployed by the employer. b. If the employer is using an automated decision system to assist in making employment-related decisions as of the effective date of this Act, no later than January 1, 2027. c. To a new employee within thirty days of hiring the employee. 2. A notice provided pursuant to subsection 1 shall contain all of the following information: a. The type of employment-related decisions potentially affected by the automated decision system. b. A general description of the categories of employee input data the automated decision system will use, the sources of employee input data, and how employee input data will be collected. c. Any key parameters known to disproportionately affect the output of the automated decision system. d. The individuals, vendors, or entities that created the automated decision system. e. If applicable, a description of each quota set or measured by an automated decision system to which the employee is subject, including the quantified number of tasks to be performed or products to be produced, and any potential adverse employment action that could result from failure to meet the quota, as well as whether those quotas are subject to change and if any notice is given of changes in quotas. f. A description of the employee's right to access and correct the employee's data used by the automated decision system. g. That the employer is prohibited from retaliating against employees for exercising the rights provided in this chapter. 3. A written notice required by subsection 1 shall be written in plain language as a separate, stand-alone communication. The notice shall be in the language in which routine communications and other information are provided to employees. The notice shall be provided via a simple and easy-to-use method, including but not limited to an email, electronic link, or other written format.
4. If an employer will use an automated decision system in making hiring decisions for a position, the employer shall notify an applicant for the position, upon receiving the application, that the employer utilizes an automated decision system when making hiring decisions. The employer may make the notification using an automatic reply mechanism or on a job posting.
1. An employer shall not use an automated decision system to do any of the following: ... e. Rely solely on an automated decision system when making a discipline, termination, or deactivation decision. 2. When an employer relies primarily on output from an automated decision system to make a discipline, termination, or deactivation decision, the employer shall use a human reviewer to review the automated decision system output and compile and review other information that is relevant to the decision, if any. For purposes of this subsection, "other information" may include but is not limited to any of the following: a. Supervisory or managerial evaluations. b. Personnel files. c. Work product of employees. d. Peer reviews. e. Witness interviews, which may include relevant online customer reviews.
1. An employer that primarily relied on an automated decision system to make a discipline, termination, or deactivation decision shall provide the affected employee with a written notice at the time the employer informs the employee of the decision. 2. A notice provided pursuant to subsection 1 shall contain all of the following information: a. The individual to contact for more information about the decision. b. That the employer used an automated decision system to assist the employer in one or more discipline, termination, or deactivation decisions with respect to the employee. c. That the employee has the right to request a copy of the employee's data used by the automated decision system. d. That the employer is prohibited from retaliating against the employee for exercising the rights provided in this chapter. 3. A written notice required by subsection 1 shall be written in plain language as a separate, stand-alone communication. The notice shall be in the language in which routine communications and other information are provided to employees. The notice shall be provided via a simple and easy-to-use method, including but not limited to an email, electronic link, or other written format.
(a) An employer shall not use or apply, or authorize any procurement, purchase, or acquisition of any service or system using or relying on any automated decision-making system, directly or indirectly, without meaningful and continuing human review when performing any function that: (1) is related to the administration of any public assistance program; (2) will have an adverse impact on the rights, civil liberties, safety, or welfare of any employee in this State; or (3) affects any statutorily or constitutionally provided rights of an employee.
(b) An employer shall not use or apply any automated decision-making system, directly or indirectly, to perform any function described in subsection (a) without providing: (1) a notice to any affected employee no later than the time a decision is issued to that employee that a decision concerning the employee was made using an automated decision-making system; (2) an appeals process for decisions made by an automated decision-making system in which an employee is impacted as a direct result of the use of the automated decision-making system; and (3) the opportunity for an affected employee to have an appropriate alternative review, by an individual working for or on behalf of the employer with respect to the decision, independent of the automated decision-making system.
(a) A deployer shall, at or before the time an automated decision tool is used to make a consequential decision, notify any natural person who is the subject of the consequential decision that an automated decision tool is being used to make, or be a controlling factor in making, the consequential decision. A deployer shall provide to a natural person notified under this subsection all of the following: (1) a statement of the purpose of the automated decision tool; (2) the contact information for the deployer; and (3) a plain language description of the automated decision tool that includes a description of any human components and how any automated component is used to inform a consequential decision.
(b) If a consequential decision is made solely based on the output of an automated decision tool, a deployer shall, if technically feasible, accommodate a natural person's request to not be subject to the automated decision tool and to be subject to an alternative selection process or accommodation. After a request is made under this subsection, a deployer may reasonably request, collect, and process information from a natural person for the purposes of identifying the person and the associated consequential decision. If the person does not provide that information, the deployer shall not be obligated to provide an alternative selection process or accommodation.
(a) It is the policy of this State that a student and the student's parent have the right to: (2) request that a human teacher review any automatically scored grade or grade generated by artificial intelligence;
An employer may not: (1) rely exclusively on an automated decision system in making an employment related decision with respect to a covered individual;
(E) the employer independently corroborates, via meaningful oversight by a human with appropriate and relevant experience, the automated decision system output;
(F) not later than seven (7) days after making the employment related decision, the employer provides full, accessible, and meaningful documentation in plain language and at no cost to the covered individual on the automated decision system output, including: (i) a description of the automated decision system used to generate the automated decision system output; (ii) a description and explanation, in plain language, of the input data to the automated decision system used to generate the automated decision system output and a machine readable copy of the data; (iii) a description and explanation of how the automated decision system output was used in making the employment related decision; and (iv) the reasoning for the use of the automated decision system output in the employment related decision;
(G) the employer allows the covered individual to, after receiving the documentation described in clause (F): (i) dispute, in a manner that is accessible, equitable, and does not pose an unreasonable burden on the covered individual, the automated decision system output to a human with appropriate and relevant experience; and (ii) appeal the employment related decision to a human with appropriate and relevant experience who is not the human for purposes of the corroboration under clause (E).
Sec. 11. (a) An employer that uses or intends to use an automated decision system output in making an employment related decision with respect to a covered individual shall, in accordance with subsections (b) and (c), disclose to the covered individual: (1) that the employer uses or intends to use an automated decision system output in making an employment related decision; (2) a description and explanation of the automated decision system used or intended to be used to generate the automated decision system output, including: (A) the types of data collected or intended to be collected as inputs to the automated decision system and the circumstances of the collection; (B) the characteristics that the automated decision system measures or is intended to measure, such as the knowledge, skills, or abilities of the covered individual; (C) how the characteristics relate or would relate to any function required for the work or potential work of the covered individual; (D) how the system measures or is intended to measure the characteristics; and (E) how the covered individual can interpret the automated decision system output in plain language; (3) the identity of the covered individual or entity that operates the automated decision system that provides the automated decision system output; (4) how the employer uses or intends to use the automated decision system output in making the employment related decision; and (5) how the covered individual may dispute or appeal an employment related decision made with respect to the covered individual using an automated decision system output. (b) An employer shall provide the disclosures required by subsection (a) to a covered individual as follows: (1) In the case of a covered individual who was hired on or before July 1, 2026, the disclosure must be provided to the covered individual not later than August 1, 2026. 
(2) In the case of a covered individual who is hired after July 1, 2026, the disclosure must be provided to the covered individual before hiring. (c) Not later than thirty (30) days after: (1) any information provided by an employer to a covered individual through a disclosure required by subsection (a) significantly changes; or (2) any significant new information required to be provided in the disclosure becomes available; the employer shall provide the covered individual with an updated disclosure.
(D) the use is designed for purposes of making the employment related decision;
(b) When an artificial intelligence system makes external decisions related to citizens of the Commonwealth, a department, agency, or administrative body shall: 1. Disclose how artificial intelligence is used in the decision-making process; 2. Describe the extent of human involvement in validating and overseeing any decision made; and 3. Make readily available options for individuals to appeal a consequential decision that involves artificial intelligence.
A. An employer shall provide written notice that an ADS is in use at the workplace for the purpose of making employment-related decisions, other than hiring decisions, to a worker who will foreseeably be directly affected by the ADS, or his authorized representative. The notice shall be provided at the following times: (1) At least thirty days before an ADS is first deployed by the employer. (2) If the employer is using an ADS to assist in making employment-related decisions at the time this Part takes effect. (3) To a new worker within thirty days of his hiring date. C. A written notice required by this Section shall meet all of the following requirements: (1) Written in plain language as a separate, standalone communication. (2) In the language in which routine communications and other information are provided to workers. (3) Provided via a simple and easy-to-use method, including but not limited to an email, hyperlink, or other written format. E. A notice issued pursuant to Subsection A of this Section shall contain all of the following information: (1) The type of employment-related decisions potentially affected by the ADS. (2) A general description of the categories of worker input data the ADS will use, the sources of worker input data, and how worker input data will be collected. (3) Any key parameters known to disproportionately affect the output of the ADS. (4) The individuals, vendors, or entities that created the ADS. (5) If applicable, a description of each quota set or measured by an ADS that the worker is subject to, including the quantified number of tasks to be performed or products to be produced, and any potential adverse employment action that could result from failure to meet the quota, as well as whether those quotas are subject to change and if any notice is given of changes in quotas. (6) A description of the worker's right to access and correct the worker's own data used by the ADS.
(7) That the employer shall be prohibited from retaliating against a worker who exercises his rights as provided in Paragraph (6) of this Subsection. (8) That the worker has a right to appeal any decision made with the assistance of an ADS and the process to appeal that decision.
D. An employer who uses an ADS to make hiring decisions shall notify a job applicant upon receiving his application that the employer utilizes an ADS for hiring decisions. Notifications may be made using an automatic reply mechanism or on the job posting.
C.(1) An employer shall not rely solely on an ADS when making a discipline, termination, or deactivation decision. (2) If an employer or a vendor utilizes an ADS output to assist in making an employment-related decision, the employer or vendor shall do all of the following: (a) Ensure the accuracy of the ADS output. (b)(i) Use a designated internal reviewer to conduct a separate investigation and compile corroborating information for the decision. This information may include but is not limited to supervisory or managerial evaluations, personnel files, employee work products, or peer reviews. (ii) The designated internal reviewer required by this Subparagraph shall have all of the following: (aa) Sufficient authority, discretion, resources, and time to corroborate the ADS output. (bb) Sufficient expertise in the operation of similar systems and a sufficient understanding of the ADS in question to interpret its outputs as well as results of relevant impact assessments. (cc) Education, training, or experience sufficient to allow the reviewer to make a well-informed decision. (iii) The designated internal reviewer shall be protected from retaliation for exercising his responsibilities. (3) An employer shall not rely on an ADS to make an employment-related decision if the employer cannot corroborate the ADS output or the human reviewer has concluded that the ADS output is inaccurate, incomplete, or misleading.
A. An employer that primarily relies on an ADS to make a discipline, termination, or deactivation decision shall provide the affected worker with written notice at the time such decision is made. The notice shall meet all of the following requirements: (1) Written in plain language as a separate, standalone communication. (2) In the language in which routine communications and other information are provided to workers. (3) Provided via a simple and easy-to-use method, including but not limited to an email, hyperlink, or other written format. B. A notice issued pursuant to Subsection A of this Section shall contain all of the following information: (1) The human individual to contact for more information about the decision and the ability to request a copy of the worker's own worker data relied on in the decision. (2) That the employer used an ADS to assist the employer in any discipline, termination, or deactivation decisions with respect to the worker. (3) That the worker has the right to request a copy of the worker's data used by the ADS. (4) That the employer is prohibited from retaliating against the worker for exercising his right pursuant to this Part. (5) The worker's right to appeal the decision as provided in R.S. 23:975.
A. If an employer has used an ADS to make an employment-related decision about a worker, the affected worker has the right to appeal that decision, request a human review, request submission of additional information, and correct any errors in the data used by the ADS. B. An employer or a vendor that used an ADS to make an employment-related decision shall provide an affected worker with a form or a hyperlink to an electronic form that provides that the worker has a right to appeal the decision within thirty days from the date that the worker was notified. The appeal form provided to an affected worker shall include all of the following: (1) The option to request access to the data used as input to or as output from the ADS. (2) The option to request access to any corroborating or supporting evidence provided by a human reviewer to verify output from the ADS. (3) The worker's reason or justification for an appeal and any evidence to support the appeal. (4) A designation for an authorized representative who can also access the data. C.(1) An employer or a vendor shall respond to an appeal within fourteen business days. (2)(a)(i) In responding to an appeal, the employer or vendor shall designate a human reviewer who shall meet all of the following requirements: (aa) He can objectively evaluate all evidence. (bb) He has sufficient authority, discretion, and resources to evaluate the decision. (cc) He has the authority to overturn the decision. (ii) The employer or vendor shall not designate a person who was involved in the decision that the worker is appealing. (b) The response provided to the worker shall be set out in a clear, written document which describes the result of the appeal and the reasons for that result. (3) If the human reviewer determines that the employment-related decision should be overturned, the employer or vendor shall rectify the decision within twenty-one business days.
E.(1) Any insured has the right to appeal a determination that he has learned was made with a recommendation from an artificial intelligence or an automated decision system. (2) Any adverse determination in which artificial intelligence or an automated decision system materially contributed to the determination shall be presumed invalid unless the health insurance issuer demonstrates that the determination was independently reached through documented clinical judgment without reliance upon algorithmic output. (3) If an adverse determination is appealed on the basis of the use of an artificial intelligence or an automated decision system, the insurer shall not use an artificial intelligence or an automated decision system in any subsequent review of the claim.
(4) Allow covered persons, upon request, to review and have copies of all documents relevant to any artificial intelligence or an automated decision system as defined in R.S. 22:1260.49(A)(1) used in the utilization review or determination process.
(c) Consumer Protections: Deployers must: (1) Notify consumers when an AI system materially influences a consequential decision; (2) Provide consumers with: (i) The purpose of the system; (ii) An explanation of how the system influenced the decision; (iii) A process to appeal or correct adverse decisions.
(a) Covered entities shall not use biometric data to help make decisions that produce legal effects or similarly significant effects concerning end users. Decisions that include legal effects or similarly significant effects concerning end users include, without limitation, denial or degradation of consequential services or support, such as financial or lending services, housing, insurance, educational enrollment, criminal justice, employment opportunities, health care services, and access to basic necessities, such as food and water.
(d) (1) Not later than 6 months after the effective date of this act, and no later than the time that a deployer deploys a high-risk artificial intelligence system to make, or be a substantial factor in making, a consequential decision concerning a consumer, the deployer shall: (i) notify the consumer that the deployer has deployed a high-risk artificial intelligence system to make, or be a substantial factor in making, a consequential decision before the decision is made; (ii) provide to the consumer a statement disclosing the purpose of the high-risk artificial intelligence system and the nature of the consequential decision; the contact information for the deployer; a description, in plain language, of the high-risk artificial intelligence system; and instructions on how to access the statement required by subsection (5)(a) of this section; and (iii) provide to the consumer information, if applicable, regarding the consumer's right to opt out of the processing of personal data concerning the consumer for purposes of profiling in furtherance of decisions that produce legal or similarly significant effects concerning the consumer.
(2) Not later than 6 months after the effective date of this act, a deployer that has deployed a high-risk artificial intelligence system to make, or be a substantial factor in making, a consequential decision concerning a consumer shall, if the consequential decision is adverse to the consumer, provide to the consumer: (i) a statement disclosing the principal reason or reasons for the consequential decision, including: (A) the degree to which, and manner in which, the high-risk artificial intelligence system contributed to the consequential decision; (B) the type of data that was processed by the high-risk artificial intelligence system in making the consequential decision; and (C) the source or sources of the data described in subsection (d)(2)(i)(B) of this section; (ii) an opportunity to correct any incorrect personal data that the high-risk artificial intelligence system processed in making, or as a substantial factor in making, the consequential decision; and (iii) an opportunity to appeal an adverse consequential decision concerning the consumer arising from the deployment of a high-risk artificial intelligence system, which appeal must, if technically feasible, allow for human review unless providing the opportunity for appeal is not in the best interest of the consumer, including in instances in which any delay might pose a risk to the life or safety of such consumer.
(h) An employer shall not rely primarily on employee data collected through electronic monitoring when making hiring, promotion, disciplinary decisions up to and including termination, or compensation decisions. For an employer to satisfy the requirements of this paragraph: (i) An employer shall establish meaningful human oversight of such decisions based in whole or in part on data collected through electronic monitoring. (ii) A human decision-maker must actually review any information collected through electronic monitoring, verify that such information is accurate and up to date, review any pending employee requests to correct erroneous data, and exercise independent judgment in making each such decision; and (iii) The human decision-maker must consider information other than information collected through electronic monitoring when making each such decision, such as but not limited to, supervisory or managerial evaluations, personnel files, employee work products, or peer reviews.
(i) When an employer makes a hiring, promotion, termination, disciplinary or compensation decision based in whole or part on data gathered through the use of electronic monitoring, it shall disclose to affected employees no less than thirty days prior to the decision going into effect: (i) that the decision was based in whole or part on data gathered through electronic monitoring; (ii) the specific electronic monitoring tool or tools used to gather such data, how the tools work to gather and analyze the data, and the increments of time in which the data is gathered; (iii) the specific data, and judgments based upon such data, used in the decision-making process; and (iv) any information used in the decision-making process gathered through sources other than electronic monitoring.
(a) Any employer that uses an automated employment decision tool to assess or evaluate an employee or candidate shall notify employees and candidates subject to the tool no less than ten business days before such use: (i) that an automated employment decision tool will be used in connection with the assessment or evaluation of such employee or candidate; (ii) the job qualifications and characteristics that such automated employment decision tool will assess, what employee or candidate data or attributes the tool will use to conduct that assessment, and what kind of outputs the tool will produce as an evaluation of such employee or candidate; (iii) what employee or candidate data is collected for the automated employment decision tool, the source of such data and the employer's data retention policy. Information pursuant to this section shall not be disclosed where such disclosure would violate local, state, or federal law, or interfere with a law enforcement investigation; (iv) the results of the most recent impact assessment of the automated employment decision tool, including any findings of a disparate impact and associated response from the employer, or information about how to access that information if publicly available; (v) information about how an employee or candidate may request an alternative selection process or accommodation that does not involve the use of an automated employment decision tool and details about that alternative process or accommodation process; and (vi) information about how the employee or candidate may: (A) request reevaluation of the employment decision made by the automated employment decision tool in accordance with section one thousand thirteen of this article; and (B) receive notification of the employee or candidate's right to file a complaint in a civil court in accordance with section seven of this chapter or otherwise exercise the rights described in this chapter.
(b) The notice required by this section shall be: (i) written in clear and plain language; (ii) included in each job posting or advertisement for each position for which the automated employment decision tool will be used; (iii) posted on the employer's website in any language that the employer regularly uses to communicate with employees; (iv) provided directly to each candidate who applies for a position in the language with which that candidate communicates with the employer; (v) made available in formats that are reasonably accessible to and usable by individuals with disabilities; and (vi) otherwise presented in a manner that ensures the notice clearly and effectively communicates the required information to employees.
(b) An employer shall not rely primarily on output from an automated decision tool when making hiring, promotion, termination, disciplinary, or compensation decisions. For an employer to satisfy the requirements of this paragraph: (i) An employer must establish meaningful human oversight of such decisions based in whole or in part on the output of automated employment decision tools. In determining whether an internal reviewer employs the requisite knowledge and skill to provide meaningful human oversight, relevant factors include the relative complexity and specialized nature of the automated decision tool, the reviewer's general experience, the reviewer's training and experience in the field, the preparation and study the reviewer is able to give the matter, and whether it is feasible to refer the matter to, or associate or consult with, an expert with established competence in the field of automated decision tools. (ii) A human decision-maker must actually review any output of an automated employment decision tool and exercise independent judgment in making each such decision; (iii) The human decision-maker must consider information other than automated employment decision tool outputs when making each such decision, such as but not limited to supervisory or managerial evaluations, personnel files, employee work products, or peer reviews; and (iv) An employer shall consider information other than automated employment decision tool outputs when making hiring, promotion, termination, disciplinary, or compensation decisions, such as supervisory or managerial evaluations, personnel files, employee work products, or peer reviews.
(c) An employer shall not require employees or candidates to consent to the use of an automated employment decision tool in an employment decision in order to be considered for an employment decision, nor shall an employer discipline or disadvantage an employee or candidate for employment as a result of their request for accommodation.
Sec. 13. (1) If an employer uses an electronic monitoring tool or automated decisions tool, the employer must display a poster at the employer's place of business, in a conspicuous place accessible to the employer's employees, that includes, but is not limited to, notice of the use of an electronic monitoring tool or automated decisions tool. (2) Not less than 30 days before an employer implements an electronic monitoring tool or automated decisions tool, the employer shall provide notice, in writing, of the tool's use to all of the employer's employees. The employer shall also include the notice in every job posting, post the notice on the employer's website, provide the notice directly to every applicant, and make the notice available in accessible formats that account for the applicant's first language, if it is not English, and any disability the applicant may have. The notice must provide a covered individual with the ability to opt out of the electronic monitoring tool or automated decisions tool. (3) If a covered individual opts out of the use of an electronic monitoring tool or automated decisions tool under subsection (2), the employer shall not use the electronic monitoring tool or automated decisions tool to make any employment-related decisions for that covered individual.
Subdivision 1. Pre-use notice; provision. (a) An employer must provide a written notice that an automated decision system is in use at the workplace for the purpose of making employment-related decisions, to a worker who will be directly or indirectly affected by the automated decision system, or the worker's authorized representative, and to any union representing workers who could be directly or indirectly affected by the automated decision system. (b) The notice in paragraph (a) must be provided: (1) if the automated decision system is introduced after the effective date of this section, at least 30 days before the introduction of the automated decision system; (2) if the employer is using an existing automated decision system as of the effective date of this section, no later than September 1, 2026; (3) prominently to a job applicant or new worker, before the employer collects the applicant's or worker's personal information that the employer plans to process using the automated decision system; (4) at least 30 days before implementing any significant change to the automated decision system or how the employer is using the automated decision system; and (5) to a union representing workers who will be subject to the automated decision system, on a timeline that provides a meaningful opportunity to bargain over the use, scope, and impact of the automated decision system prior to deployment or modification of the tool. (c) Every time an employer provides a notice under paragraph (a), a copy of that notice must be submitted to the commissioner of labor and industry within ten days of the date the notice was provided to the worker. Copies of notices under paragraph (a) must also be made available to authorized representatives upon request. 
(d) Notices under paragraph (a) must be: (1) written in plain language as a separate and standalone communication; (2) in the language in which routine communications and other information are provided to workers; and (3) provided using a simple and easy-to-use method, including an email, hyperlink, or other written format. (e) A job applicant or worker must receive the notice required under this section and respond with affirmative written consent before the worker or applicant is subject to an automated decision system. (f) If reasonable alternatives to the use of the automated decision system exist, the worker must be allowed to opt out of being subject to the automated decision system. Subd. 2. Pre-use notice; contents. The notice required under subdivision 1, paragraph (a), must contain the following information: (1) a plain-language explanation of the nature, purpose, and scope of the decisions for which the automated decision system will be used, including the specific employment-related decisions potentially affected; (2) the specific category and sources of worker data the automated decision system will use or collect, and how that data was or will be collected; (3) the logic used in the automated decision system, including the key parameters that affect the output of the automated decision system, and the type of outputs the automated decision system will produce; (4) the individuals, vendors, and entities that created the automated decision system and the individuals, vendors, and entities that will run, manage, and interpret the results of the automated decision system output; (5) the job qualifications and characteristics that the automated decision system assesses, what worker data or attributes the system uses to conduct that assessment, and what kind of outputs the system produces as an evaluation of the worker; (6) the results of any impact assessments of the automated decision system, whether performed by the employer or the automated decision 
system vendor, and how to access that information; (7) an up-to-date list of all automated decision systems the employer is currently using; and (8) a description of the worker's rights under sections 181.9922 to 181.9927.
Subd. 2. Employment-related decisions. (a) An employer must not rely solely on an automated decision system when making an employment-related decision. (b) When an employer relies in part on an automated decision system in making an employment-related decision, the employer must: (1) ensure the accuracy of the automated decision system output; and (2) use a designated internal reviewer to conduct an investigation and compile corroborating information for the decision. This information may include but is not limited to supervisory or managerial evaluations, personnel files, employee work products, or peer reviews. (c) The designated internal reviewer must: (1) have sufficient authority, discretion, resources, and time to corroborate the automated decision system output; (2) have sufficient expertise in the operation of similar systems and a sufficient understanding of the automated decision system in question to interpret the outputs and results of relevant impact assessments; (3) have sufficient education, training, or experience to allow the reviewer to make a well-informed decision, including education about the limitations and biases of automated decision systems and training on workers' rights under sections 181.9922 to 181.9927; and (4) be protected from retaliation for exercising the reviewer's responsibilities. (d) When an employer cannot corroborate the automated decision system output, or the human reviewer has concluded that the automated decision system output is inaccurate, incomplete, or misleading, the employer must not rely on the automated decision system to make the employment-related decision.
Subdivision 1. Notice. (a) An employer that has used an automated decision system to make an employment-related decision must provide the affected worker with a written notice: (1) at the time the employer informs the worker of the decision, or no later than 15 business days from the date of the decision, whichever is earlier; or (2) if the decision results in the discipline or termination of the worker, at least 30 days before the discipline or termination takes effect. (b) The employer must provide a notice under paragraph (a) that is: (1) written in plain language as a separate and standalone communication; (2) in the language in which routine communications and other information are provided to workers; and (3) provided using a simple and easy-to-use method, including an email, hyperlink, or other written format. (c) A notice under paragraph (a) must contain the following information: (1) an acknowledgment that the employer used an automated decision system to make one or more employment-related decisions with respect to the worker; (2) a description of the worker's rights under sections 181.9922 to 181.9927; (3) a form or a hyperlink to an electronic form for the worker to file an appeal or request detailed information about the data and automated decision system used in the decision; and (4) that the employer is prohibited from retaliating against the worker for exercising the worker's rights under this section. 
(d) If an employer uses the same automated decision system in the same way multiple times a quarter, the employer must provide each affected employee: (1) the full notice required by this section for the first use of the automated decision system each quarter; and (2) a second notice at the end of the quarter that provides: (i) the number of times the employer or operator used the automated decision system that quarter; (ii) the dates the employer or operator used the automated decision system that quarter; and (iii) a description of the worker's rights under sections 181.9922 to 181.9927, including the right to access information about each decision. Subd. 2. Right to access. (a) When responding to a worker's access request, an employer must provide the following information to the worker: (1) a plain-language explanation of the specific decision for which the employer used the automated decision system; (2) in a simple and easy-to-use format, the specific worker data that the automated decision system used and all specific worker outputs produced by the automated decision system; (3) how the employer used the automated decision system output with respect to the worker, including: (i) the rationale for the decision, including the specific roles the output and human involvement played in the employer's decision; (ii) any additional corroborating information or judgments the employer used in addition to the automated decision system output in making the decision; (iii) how the logic of the automated decision system, including its assumptions and limitations, was applied to the worker; (iv) the key parameters or performance metrics that affected the output of the automated decision system with respect to the worker and how those parameters applied to the worker; and (v) the range of possible outputs and aggregate output statistics, to help a worker understand how they compare to other workers; (4) the name of the entity that created the automated decision system and
the product name of the automated decision system; and (5) a copy of any completed impact assessments of the automated decision system. (b) An employer must respond to an access request no later than 14 calendar days from the date the employer received the request. (c) A service provider, contractor, or vendor must provide full assistance to the employer in responding to a worker request for access, including any of that worker's input or output data in the service provider's, contractor's, or vendor's possession and any relevant information about the automated decision system.
(a) An employer that uses an automated decision system to make an employment-related decision must provide the affected worker with a form or a hyperlink to an electronic form to appeal the decision. (b) The appeal form provided to an affected worker must include: (1) the option to request access to the data used as input to or as output from the automated decision system; (2) the option to request access to any corroborating or supporting evidence provided by a human reviewer to verify output from the automated decision system; (3) space for the worker's reason for an appeal and any evidence the worker has to support the appeal; and (4) information on how the worker can designate an authorized representative who can also access the data. (c) A worker appealing the employment-related decision must submit their appeal form within 30 days of receiving the notification under section 181.9925. (d) Within five business days of receiving an appeal form, an employer must respond to the worker submitting the form. To respond to an appeal, the employer must designate a human reviewer who: (1) must objectively evaluate all evidence; (2) has sufficient authority, discretion, and resources to evaluate the decision, including education about the limitations and biases of automated decision systems and training on workers' rights under sections 181.9922 to 181.9927; (3) has the authority to overturn the employer's decision; and (4) was not involved in making the decision the worker is appealing. (e) After reviewing the evidence, the human reviewer must produce a clear, written document describing the result of the appeal and the reasons for that result. This document must be provided to both the employer and the worker. (f) If the human reviewer determines that the employment-related decision should be overturned, the employer must rectify the decision within five business days of receiving the decision.
(2) fail to provide notice to an employee or applicant for employment that the employer is using artificial intelligence for the purposes described in clause (1).
(4)(a) On and after February 1, 2026, prior to deploying any high-risk artificial intelligence system to make or be a substantial factor in making any consequential decision concerning any consumer, the deployer shall: (i) Notify the consumer that the deployer has deployed a high-risk artificial intelligence system to make or be a substantial factor in making a consequential decision; (ii) Provide to the consumer: (A) A statement that discloses the purpose of the high-risk artificial intelligence system and the nature of the consequential decision; (B) The contact information for the deployer; (C) A description written in plain language that describes the high-risk artificial intelligence system; and (D) Instructions on how to access the statement described in subdivision (5)(a) of this section; and (iii) If applicable, provide information to the consumer regarding the consumer's right to opt out of the processing of personal data concerning the consumer for any purpose of profiling in furtherance of decisions that produce legal or similarly significant effects concerning the consumer under subdivision (2)(e)(iii) of section 87-1107.
(b) On and after February 1, 2026, for each high-risk artificial intelligence system that makes or is a substantial factor in making any consequential decision that is adverse to any consumer, the deployer of such high-risk artificial intelligence system shall provide to such consumer: (i) A statement that discloses each principal reason for the consequential decision, including: (A) The degree to and manner in which the high-risk artificial intelligence system contributed to the consequential decision; (B) The type of data that was processed by the high-risk artificial intelligence system in making the consequential decision; and (C) Each source of the data described in subdivision (b)(i)(B) of this subsection; (ii) An opportunity to correct any incorrect personal data that the high-risk artificial intelligence system processed in making or processed as a substantial factor in making the consequential decision; and (iii) An opportunity to appeal any adverse consequential decision concerning the consumer arising from the deployment of the high-risk artificial intelligence system unless providing the opportunity for appeal is not in the best interest of the consumer, including instances when any delay might pose a risk to the life or safety of such consumer. Any such appeal shall allow for human review if technically feasible.
(c)(i) Except as provided in subdivision (c)(ii) of this subsection, a deployer shall provide the notice, statement, contact information, and description required under subdivisions (4)(a) and (b) of this section: (A) Directly to the consumer; (B) In plain language; (C) In each language in which the deployer in the ordinary course of business provides any contract, disclaimer, sale announcement, or other information to any consumer; and (D) In a format that is accessible to any consumer with any disability. (ii) If the deployer is unable to provide the notice, statement, contact information, and description required under subdivisions (4)(a) and (b) of this section directly to the consumer, the deployer shall make the notice, statement, contact information, and description available in a manner that is reasonably calculated to ensure that the consumer receives the notice, statement, contact information, and description.
c. If a business entity uses information obtained through a biometric surveillance system to deny a consumer access to its premises or to remove a consumer from its premises, the business entity shall provide the consumer with a detailed explanation regarding its actions and the criteria used by the business entity in making its determination.
6. a. An employer or public entity shall not implement an EMT or other surveillance or the use of an AEDS or ABSDS unless the employer or public entity has provided a written notice to all affected service beneficiaries and employees, including public employees making decisions about public benefits or services for service beneficiaries, and to any recognized bargaining representative of the employees, at least 60 days prior to implementation. If the EMT, AEDS, or ABSDS was in operation on the effective date of this act, the written notice shall be provided not more than 60 days after the effective date of this act. For an employee hired after the effective date of this act, written notice shall be provided not more than 30 days after the hiring, and the employer shall obtain a written acknowledgement of receipt of the notice by the employee. The notice shall include the following disclosures, except that the notice to service beneficiaries shall not include the disclosures indicated in paragraphs (5) and (6) of this subsection: (1) that the use of an AEDS, ABSDS, or EMT or surveillance is being implemented, and what type of decisions will be affected by the AEDS, ABSDS, or the EMT or surveillance; (2) copies of the summaries of the impact assessment reports of the AEDS, ABSDS, or EMT conducted by an independent auditor or the department pursuant to subsection d. of section 3, or subsection a. of section 4, of this act, and directions on how to obtain the entire impact assessment report from the public registry maintained by the department; (3) a description of the data and information that will be collected and the outputs that will be used, specifying, in the case of an employee or public employee, for which of the allowable purposes identified in subsection a. 
of section 3 of this act they will be used; (4) the rights provided by this section and section 8 of this act to employees and service beneficiaries, and their authorized representatives, to have access to all relevant data and information and to contest any adverse decision; (5) a description of any performance standard, productivity quota, or other related measure used in evaluating employees, including public employees making decisions about public benefits or services for service beneficiaries, a description of what data and information are collected, and a description of any adverse consequences or positive incentives associated with the standards or quotas; and (6) the obligation stipulated in subsection c. of this section that an employer or public entity, upon a request of the recognized bargaining representative of the employees, respond, in the manner specified by that subsection, to concerns raised by the representative regarding the AEDS, ABSDS, EMT, or surveillance. b. Employers or public entities shall give employees, and any recognized bargaining representative of the employees, at least 60 days written notice before the implementation of any significant changes in the EMT, AEDS, or ABSDS or in the employer's or public entity's use of an EMT or surveillance or use of an AEDS or ABSDS. 7. All notices required to be provided to employees or service beneficiaries pursuant to section 6 of this act, and all summaries of impact assessment reports required to be included with those notices, shall: a. Be written in clear, plain language easily understood by workers without technical expertise; b. Be translated into any language spoken by at least five percent of the employer's or public entity's workforce; c. Be provided in hard copy form and, if possible, in electronic form; d. Be posted conspicuously in the workplace and made continuously available to workers and their recognized bargaining representative; and e. 
Disclose that employers and entities are prohibited from retaliating against employees or applicants for employment for exercising their rights under this act.
c. If a recognized bargaining representative of the employees, within 30 days of receiving a notice pursuant to subsection a. or b. of this section, notifies the employer or public entity of specific concerns that an AEDS, ABSDS, EMT, or surveillance is not in compliance with the provisions of this act, other law, or an applicable collective bargaining agreement, including whether the impact assessment was accurate in deeming the AEDS, ABSDS, EMT, or surveillance to be in compliance, the employer or public entity shall not implement the AEDS, ABSDS, EMT, or surveillance until the employer or public entity has provided the representative of the employees with a written response to the specific concerns, which includes any modification of the AEDS, ABSDS, or EMT that the employer or public entity agrees is needed for compliance, or an explanation of why the employer or public entity believes no modification is necessary to be in compliance. If the employee representative is not satisfied with the response, the representative may seek relief in an administrative action pursuant to section 18 of this act, in a civil action pursuant to the provisions of section 19 of this act, or in a grievance or arbitration procedure outlined in an applicable collective bargaining agreement.
d. The employer or public entity shall not make any employment-related decision which has an adverse impact on an employee if the decision is based, in whole or in part, on a productivity quota or performance standard that was not previously disclosed to the employee pursuant to paragraph (5) of subsection a. of this section.
8. a. In the case of an employer or public entity which, with respect to public employees, uses an EMT or other surveillance, or uses an AEDS, to make, or assist in making, an employment-related decision which adversely affects an employee, or in the case of a public entity which uses an ABSDS in making a decision to reduce public benefits or services to a service beneficiary, the employer or public entity shall, at least 10 days before the decision takes effect, provide the service beneficiary or employee and any recognized bargaining representative of the employee with a written notice, which: (1) describes and explains the reasons for the decision; (2) provides access to all relevant data and information about the decision, including a comprehensive explanation of how the EMT, ABSDS, or AEDS are being used in making the decision; and (3) explains that the employee, applicant for employment, service beneficiary, or an authorized representative shall have: the right to access all relevant data and information; the right to contest the decision through the procedures indicated in subsection b. of this section; and, if the employee, service beneficiary, or applicant is not satisfied with the outcome of that procedure, the right to seek relief in an administrative action pursuant to section 18 of this act, in a civil action pursuant to the provisions of section 19 of this act, or, if an employee is represented by a recognized bargaining representative, in a grievance or arbitration procedure outlined in an applicable collective bargaining agreement. In the case of an applicant for employment or public benefits or services, an employer or public entity that uses an AEDS or ABSDS to make, or assist in making, a decision to reject the application shall provide the written notice described in this subsection not later than the time that the decision is made. b. 
Upon a request from the employee, service beneficiary, applicant for employment or an authorized representative made not more than 30 days after the employer or public entity provides the written notice required by subsection a. of this section, or not more than 30 days after the adverse decision is implemented if the required notice is not given, the employer or public entity shall: (1) permit the employee, service beneficiary, applicant or authorized representative to review and copy any data and information collected or used to make the decision, and related personnel files; disclose complete data and information regarding the impact assessments of the EMT, the ABSDS, and the AEDS conducted pursuant to section 3 of this act and oversight of the EMT, the ABSDS, and the AEDS conducted pursuant to section 9 of this act, including whether the output of the AEDS or the ABSDS was modified in the oversight process, and if so, how; provide copies of the summaries of the impact assessment reports and disclose how to access the full assessment reports on the public registry maintained by the department; and provide a clear, complete explanation of how the AEDS or the ABSDS produced any outputs related to the decision, including information about the weighting of factors and the data, algorithms, and other processes involved in making the decision; (2) permit the employee, service beneficiary, or applicant for employment to make an appeal to: seek the correction of any inaccurate, incomplete or biased data or information; contest any adverse decision in which data or information was considered which was erroneous, incomplete or biased or which was collected, retained or used by the EMT, ABSDS, or AEDS in a manner which violates the provisions of this act; or contest any adverse decision in which the decision was otherwise made in a manner which violates the provisions of this act; and (3) designate a human reviewer who is an employee of the employer or public entity and 
who is required to objectively evaluate all evidence, has sufficient authority, discretion, resources, and time to evaluate the decision, has sufficient training and expertise to have a full understanding of the data, algorithms, and other processes involved in making the decision, and has the authority to modify or overturn the decision, including the correction of any inaccurate, incomplete or biased data or information. The reviewer shall consider the appeal made by the employee, service beneficiary, or applicant for employment regarding any of the matters indicated in paragraph (2) of this subsection and issue a determination which shall be the final outcome of the procedure of this subsection for an appeal made to the employer or public entity. An employee, service beneficiary, or applicant for employment who is not satisfied with this final outcome of the procedure may seek relief in an administrative action pursuant to section 18 of this act, in a civil action pursuant to the provisions of section 19 of this act, or, if the employee is represented by a recognized bargaining representative, in a grievance or arbitration procedure outlined in an applicable collective bargaining agreement.
9. a. An employer or public entity shall not rely solely on data or information about employees or service beneficiaries collected through an EMT or other surveillance, or outputs of an AEDS or ABSDS, or information from third parties, including data brokers, when making employment-related decisions, or in the case of a public entity, when making employment-related decisions about its own employees, or making decisions about public benefits or services for service beneficiaries. Any data or information collected through an EMT or other surveillance, or used to produce, or be part of, outputs of an AEDS or ABSDS, shall be corroborated by internal reviewers designated by the employer or public entity pursuant to subsection b. of this section and shall be subject to review and challenge by the affected service beneficiary or employee or their authorized representative, as provided in paragraph (2) of subsection a. of section 5 of this act or subsection b. of section 8 of this act. No decision affecting the terms or conditions of employment or the provision of public benefits or services may be based exclusively or determinatively on AEDS or ABSDS outputs, or data and information collected by an EMT or other surveillance. b. An employer or public entity shall establish meaningful human oversight of all employment-related decisions or decisions about public benefits or services made utilizing data or information collected by an EMT or other surveillance or AEDS or ABSDS outputs. 
The oversight shall include: (1) designation of internal reviewers who are employees of the employer or public entity, and have sufficient training and expertise in the operation of the EMT, the ABSDS, or the AEDS, whichever is used, familiarity with the most recent impact assessments of the EMT, the ABSDS, or AEDS, and sufficient understanding of their use to identify potential errors, biases, or inaccuracies produced by their use; (2) authority and discretion for the reviewers to dispute, revise, or reject AEDS or ABSDS outputs or data or information collected by an EMT or other surveillance suspected, or found, to be inaccurate, discriminatory, or otherwise invalid; (3) a requirement that a human decision-maker review the data and information collected by an EMT or other surveillance and the AEDS and ABSDS outputs, exercise independent judgment, and consider information beyond AEDS and ABSDS outputs and data and information collected by an EMT or other surveillance, including, in the case of an employee, supervisory evaluations, personnel files, employee work product, or peer reviews, when making consequential employment-related decisions; and (4) a requirement that the reviewers have adequate time and resources to conduct the reviews, and are available for direct communication, in person or by phone or video conference, to applicants for employment or public benefits or services, service beneficiaries affected by an adverse decision regarding public benefits or services, and employees affected by adverse employment-related decisions.
c. If a business entity uses information obtained through a biometric surveillance system to deny a consumer access to its premises or to remove a consumer from its premises, the business entity shall provide the consumer with a detailed explanation regarding its actions and the criteria used by the business entity in making its determination.
(a) Beginning on January first, two thousand twenty-seven, and before a deployer deploys a high-risk artificial intelligence decision system to make, or be a substantial factor in making, a consequential decision concerning a consumer, the deployer shall: (i) notify the consumer that the deployer has deployed a high-risk artificial intelligence decision system to make, or be a substantial factor in making, such consequential decision; and (ii) provide to the consumer: (A) a statement disclosing: (I) the purpose of such high-risk artificial intelligence decision system; and (II) the nature of such consequential decision; (B) contact information for such deployer; (C) a description, in plain language, of such high-risk artificial intelligence decision system; and (D) instructions on how to access the statement made available pursuant to paragraph (a) of subdivision six of this section. (b) Beginning on January first, two thousand twenty-seven, a deployer that has deployed a high-risk artificial intelligence decision system to make, or as a substantial factor in making, a consequential decision concerning a consumer shall, if such consequential decision is adverse to the consumer, provide to such consumer: (i) a statement disclosing the principal reason or reasons for such adverse consequential decision, including, but not limited to: (A) the degree to which, and manner in which, the high-risk artificial intelligence decision system contributed to such adverse consequential decision; (B) the type of data that was processed by such high-risk artificial intelligence decision system in making such adverse consequential decision; and (C) the source of such data; and (ii) an opportunity to: (A) correct any incorrect personal data that the high-risk artificial intelligence decision system processed in making, or as a substantial factor in making, such adverse consequential decision; and (B) appeal such adverse consequential decision, which shall, if technically feasible, 
allow for human review unless providing such opportunity is not in the best interest of such consumer, including, but not limited to, in instances in which any delay might pose a risk to the life or safety of such consumer. (c) The deployer shall provide the notice, statements, information, description, and instructions required pursuant to paragraphs (a) and (b) of this subdivision: (i) directly to the consumer; (ii) in plain language; (iii) in all languages in which such deployer, in the ordinary course of such deployer's business, provides contracts, disclaimers, sale announcements, and other information to consumers; and (iv) in a format that is accessible to consumers with disabilities.
(a) Any employer or employment agency that uses an automated employment decision tool for an employment decision to screen candidates who have applied for a position shall notify each such candidate of the following: (i) That an automated employment decision tool will be used in connection with the assessment or evaluation of such candidate; (ii) The job qualifications and characteristics that such automated employment decision tool will use in the assessment of such candidate. (b) The notice required by paragraph (a) of this subdivision shall be made no less than ten business days before the use of such automated employment decision tool and shall allow such candidate to request an alternative selection process or accommodation.
Any landlord that uses an automated housing decision making tool to screen applicants for housing shall notify each such applicant of the following: (i) That an automated housing decision making tool will be used in connection with the assessment or evaluation of such applicant; (ii) The characteristics that such automated housing decision making tool will use in the assessment of such applicant; (iii) Information about the type of data collected for such automated housing decision making tool, the source of such data, and the landlord's data retention policy; and (iv) If an application for housing is denied through use of the automated housing decision making tool, the reason for such denial. (b) The notice required by paragraph (a) of this subdivision shall be made no less than twenty-four hours before the use of such automated housing decision making tool and shall allow such applicant to request an alternative selection process or accommodation.
4. New York residents shall have the right to understand how and why an outcome impacting them was determined by an automated system, even when the automated system is not the sole determinant of the outcome. 5. Automated systems shall provide explanations that are technically valid, meaningful to the individual and any other persons who need to understand the system, and proportionate to the level of risk based on the context.
1. New York residents shall have the right to opt out of automated systems, where appropriate, in favor of a human alternative. The appropriateness of such an option shall be determined based on reasonable expectations in a given context, with a focus on ensuring broad accessibility and protecting the public from particularly harmful impacts. In some instances, a human or other alternative may be mandated by law. 2. New York residents shall have access to timely human consideration and remedy through a fallback and escalation process if an automated system fails, produces an error, or if they wish to appeal or contest its impacts on them. 3. The human consideration and fallback process shall be accessible, equitable, effective, maintained, accompanied by appropriate operator training, and should not impose an unreasonable burden on the public.
4. Automated systems intended for use within sensitive domains, including but not limited to criminal justice, employment, education, and health, shall additionally be tailored to their purpose, provide meaningful access for oversight, include training for New York residents interacting with the system, and incorporate human consideration for adverse or high-risk decisions.
(a) Any deployer that employs a high-risk AI system for a consequential decision shall comply with the following requirements; provided, however, that where there is an urgent necessity for a decision to be made to confer a benefit to the end user, including, but not limited to, social benefits, housing access, or dispensing of emergency funds, and compliance with this section would cause imminent detriment to the welfare of the end user, such obligation shall be considered waived; provided further, that nothing in this section shall be construed to waive a natural person's option to request human review of the decision: (i) inform the end user at least five business days prior to the use of such system for the making of a consequential decision in clear, conspicuous, and consumer-friendly terms, made available in each of the languages in which the company offers its end services, that AI systems will be used to make a decision or to assist in making a decision; and (ii) allow sufficient time and opportunity in a clear, conspicuous, and consumer-friendly manner for the consumer to opt out of the automated consequential decision process and for the decision to be made by a human representative. A consumer may not be punished or face any other adverse action for opting out of a decision by an AI system and the deployer shall render a decision to the consumer within forty-five days. (b) If a deployer employs a high-risk AI system for a consequential decision to determine whether to or on what terms to confer a benefit on an end user, the deployer shall offer the end user the option to waive their right to advance notice of five business days under this subdivision.
(c) If the end user clearly and affirmatively waives their right to five business days' notice, the deployer shall then inform the end user as early as practicable before the making of the consequential decision in clear, conspicuous, and consumer-friendly terms, made available in each of the languages in which the company offers its end services, that AI systems will be used to make a decision or to assist in making a decision. The deployer shall allow sufficient time and opportunity in a clear, conspicuous, and consumer-friendly manner for the consumer to opt out of the automated process and for the decision to be made by a human representative. A consumer may not be punished or face any other adverse action for opting out of a decision by an AI system and the deployer shall render a decision to the consumer within forty-five days. (d) An end user shall be entitled to no more than one opt-out with respect to the same consequential decision within a six-month period.
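As an illustrative aid only (not part of the statutory text), the five-business-day advance-notice window described above can be sketched as a simple date computation. The function name is hypothetical, weekends are treated as non-business days, and public holidays are not modeled:

```python
from datetime import date, timedelta

def earliest_decision_date(notice_given: date, business_days: int = 5) -> date:
    """Earliest date a consequential decision could be made if notice must
    precede it by at least `business_days` business days (Mon-Fri only;
    holidays are not modeled in this sketch)."""
    d = notice_given
    remaining = business_days
    while remaining > 0:
        d += timedelta(days=1)
        if d.weekday() < 5:  # weekday() returns 0-4 for Mon-Fri
            remaining -= 1
    return d
```

For example, notice given on a Monday yields the following Monday as the earliest decision date, since the intervening weekend does not count toward the five business days.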
(a) Any deployer that employs a high-risk AI system for a consequential decision shall inform the end user within five days in a clear, conspicuous and consumer-friendly manner if a high-risk AI system has been used to make a consequential decision. The deployer shall then provide and explain a process for the end user to appeal the decision, which shall at minimum allow the end user to (i) formally contest the decision, (ii) provide information to support their position, and (iii) obtain meaningful human review of the decision. A deployer shall respond to an end user's appeal within forty-five days of receipt of the appeal. That period may be extended once by forty-five additional days where reasonably necessary, taking into account the complexity and number of appeals. The deployer shall inform the end user of any such extension within forty-five days of receipt of the appeal, together with the reasons for the delay. (b) An end user shall be entitled to no more than one appeal with respect to the same consequential decision in a six-month period.
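The appeal timeline above (a 45-day response window, extendable once by 45 days, with notice of any extension due within the original 45 days) can be modeled as a small deadline calculation. This is a hedged sketch of how a deployer's compliance tooling might track these dates; the function and key names are illustrative, not drawn from any statute:

```python
from datetime import date, timedelta

APPEAL_RESPONSE_DAYS = 45   # initial window to respond to an appeal
APPEAL_EXTENSION_DAYS = 45  # one extension permitted where reasonably necessary

def appeal_deadlines(received: date, extended: bool = False) -> dict:
    """Compute deadlines for responding to an end user's appeal.

    Notice of any extension (with reasons) must itself be given within
    45 days of receipt, so `extension_notice_due` always equals day 45.
    """
    base = received + timedelta(days=APPEAL_RESPONSE_DAYS)
    return {
        "extension_notice_due": base,
        "response_due": base + timedelta(days=APPEAL_EXTENSION_DAYS)
        if extended
        else base,
    }
```

An appeal received on January 1 would thus require a response (or extension notice) by February 15, and a final response by April 1 at the latest if the single extension is invoked.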
Any news media content, including stories, articles, audio, visuals or images, which is created in whole or in material part by generative artificial intelligence shall be reviewed by a human worker who has the authority to approve, deny, or modify any decision recommended or made by the automated system before such content may be published with the disclosure under section eleven hundred fifty-three of this article.
1. Not later than two years after the effective date of this article, the division shall promulgate regulations specifying the circumstances and manner in which a deployer shall provide to an individual a means to opt out of the use of a covered algorithm for a consequential action and to elect to have the consequential action concerning the individual undertaken by a human without the use of a covered algorithm. In promulgating the regulations under this subdivision, the division shall consider the following: (a) how to ensure that any notice or request from a deployer regarding the right to a human alternative is clear and conspicuous, in plain language, easy to execute, and at no cost to an individual; (b) how to ensure that any such notice to individuals is effective, timely, and useful; (c) the specific types of consequential actions for which a human alternative is appropriate, considering the magnitude of the action and risk of harm; (d) the extent to which a human alternative would be beneficial to individuals and the public interest; (e) the extent to which a human alternative can prevent or mitigate harm; (f) the risk of harm to individuals beyond the requestor if a human alternative is available or not available; (g) the feasibility of providing a human alternative in different circumstances; and (h) any other considerations the division deems appropriate to balance the need to give an individual control over a consequential action related to such individual with the practical feasibility and effectiveness of granting such control.
3. Not later than two years after the effective date of this article, the division shall promulgate regulations specifying the circumstances and manner in which a deployer shall provide to an individual a mechanism to appeal to a human a consequential action resulting from the deployer's use of a covered algorithm. In promulgating the regulations under this subdivision, the division shall do the following: (a) ensure that the appeal mechanism is clear and conspicuous, in plain language, easy-to-execute, and at no cost to individuals; (b) ensure that the appeal mechanism is proportionate to the consequential action; (c) ensure that the appeal mechanism is reasonably accessible to individuals with disabilities, timely, usable, effective, and non-discriminatory; (d) require, where appropriate, a mechanism for individuals to identify and correct any personal data used by the covered algorithm; (e) specify training requirements for human reviewers with respect to a consequential action; and (f) consider any other circumstances, procedures, or matters the division deems appropriate to balance the need to give an individual a right to appeal a consequential action related to such individual with the practical feasibility and effectiveness of granting such right.
(b) It shall be an unlawful discriminatory practice for an employer to fail to provide notice to an employee that such employer is using artificial intelligence for the purposes described in paragraph (a) of this subdivision. (c) The division shall adopt any rules or regulations necessary for the implementation and enforcement of this subdivision, including, but not limited to, rules on the circumstances and conditions that require notice, the time period for providing such notice and the means for providing such notice.
No state agency, or any entity acting on behalf of such agency, which utilizes or applies any automated decision-making system, directly or indirectly, in performing any function that: (a) is related to the delivery of any public assistance benefit; (b) will have a material impact on the rights, civil liberties, safety or welfare of any individual within the state; or (c) affects any statutorily or constitutionally provided right of an individual, shall utilize such automated decision-making system, unless such automated decision-making system is subject to continued and operational meaningful human review.
B. The qualified end-user of the AI device shall retain authority to amend or overrule outputs from the device based on their professional judgment, and without pressure from the deployer or any other entity to ignore or alter professional judgment.
D. A clinical peer reviewer who participates in a utilization review process for a health benefit plan that initially uses artificial intelligence tools for a utilization review shall open and document the utilization review of the individual clinical records or data prior to issuing an adverse determination.
(c) Human representatives.--Upon request, the business entity shall provide the consumer with timely access to a human representative, if a human representative is reasonably available.
(a) Right to human review.--A consumer shall have the right to request that a human representing the business entity review any consumer interaction involving a high-impact decision. (b) Notice.--When the conditions under section 3 are met requiring the disclosure of the use of artificial intelligence in a consumer interaction and involve a high-impact decision, the business entity shall disclose in a clear and conspicuous manner that the consumer has a right to request a human review by the business entity involving the high-impact decision.
(c) Time frame.--A business entity shall commence the human review not later than 14 days after the request for a human review is made. The human review shall be completed and the decision delivered to the requester not later than 28 days after the request for a human review is made.
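The two-stage time frame above (commence review within 14 days, deliver the decision within 28 days of the request) can be expressed as a short deadline calculation. This is a minimal sketch of how a business entity's tracking system might compute the dates; the function and key names are illustrative assumptions, and days are counted as calendar days:

```python
from datetime import date, timedelta

COMMENCE_DAYS = 14  # human review must commence within 14 days of the request
DELIVER_DAYS = 28   # decision must be delivered within 28 days of the request

def human_review_deadlines(requested: date) -> dict:
    """Deadlines for a requested human review of a high-impact decision."""
    return {
        "commence_by": requested + timedelta(days=COMMENCE_DAYS),
        "deliver_by": requested + timedelta(days=DELIVER_DAYS),
    }
```

A request made on March 1, for instance, requires the review to commence by March 15 and the decision to be delivered by March 29.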
(i) An employer shall not rely primarily on employee data collected through electronic monitoring when making hiring, promotion, disciplinary decisions up to and including termination, or compensation decisions. For an employer to satisfy the requirements of this subsection: (1) An employer shall establish meaningful human oversight of such decisions that are based, in whole or in part, on data collected through electronic monitoring; (2) A human decision-maker shall review any information collected through electronic monitoring, verify that such information is accurate and up to date, review any pending employee requests to correct erroneous data, and exercise independent judgment in making each such decision; and (3) The human decision-maker shall consider information other than information collected through electronic monitoring when making each such decision, including, but not limited to, supervisory or managerial evaluations, personnel files, employee work products, or peer reviews.
(j) When an employer makes a hiring, promotion, termination, disciplinary or compensation decision, based, in whole or in part, on data gathered through the use of electronic monitoring, it shall disclose to affected employees and their authorized representative within thirty (30) days of the decision being made or going into effect, whichever is sooner: (1) That the decision was based, in whole or in part, on data gathered through electronic monitoring; (2) The specific electronic monitoring tool or tools used to gather such data, how the tools work to gather and analyze the data, and the increments of time in which the data is gathered; (3) The specific data, and judgments based upon such data, used in the decision-making process; and (4) Any information used in the decision-making process gathered through sources other than electronic monitoring.
(D)(1) No later than the time that a deployer deploys a high-risk artificial intelligence system to make, or be a substantial factor in making, a consequential decision concerning a consumer, the deployer shall: (a) notify the consumer that the deployer has deployed a high-risk artificial intelligence system to make, or be a substantial factor in making, a consequential decision before the decision is made; (b) provide to the consumer a statement disclosing the purpose of the high-risk artificial intelligence system and the nature of the consequential decision; the contact information for the deployer; a description, in plain language, of the high-risk artificial intelligence system; and instructions on how to access the statement required by this item; and (c) provide to the consumer information, if applicable, regarding the consumer's right to opt out of the processing of personal data concerning the consumer for purposes of profiling in furtherance of decisions that produce legal or similarly significant effects concerning the consumer pursuant to Section 30-31-60(A)(1)(a)(iii).
(2) A deployer that has deployed a high-risk artificial intelligence system to make, or be a substantial factor in making, a consequential decision concerning a consumer shall, if the consequential decision is adverse to the consumer, provide to the consumer: (a) a statement disclosing the principal reason or reasons for the consequential decision, including: (i) the degree to which, and manner in which, the high-risk artificial intelligence system contributed to the consequential decision; (ii) the type of data that was processed by the high-risk artificial intelligence system in making the consequential decision; and (iii) the source or sources of the data described in item (2)(a)(ii); (b) an opportunity to correct any incorrect personal data that the high-risk artificial intelligence system processed in making, or as a substantial factor in making, the consequential decision; and (c) an opportunity to appeal an adverse consequential decision concerning the consumer arising from the deployment of a high-risk artificial intelligence system, which appeal must, if technically feasible, allow for human review unless providing the opportunity for appeal is not in the best interest of the consumer, including in instances in which any delay might pose a risk to the life or safety of such consumer.
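The adverse-decision statement in item (2)(a), together with the correction and appeal rights in (2)(b) and (2)(c), amounts to a template a deployer could assemble mechanically from the decision record. A minimal sketch, assuming hypothetical field names (the statute does not prescribe any particular format):

```python
def adverse_decision_statement(reasons, ai_contribution, data_types, data_sources):
    """Assemble the disclosure required by item (2)(a) into a single
    plain-language statement, with the (2)(b)/(2)(c) rights appended.
    All parameter names are hypothetical."""
    lines = [
        "Principal reason(s) for the decision: " + "; ".join(reasons),      # (2)(a)(i) context
        "How the AI system contributed: " + ai_contribution,                 # (2)(a)(i)
        "Data types processed: " + ", ".join(data_types),                    # (2)(a)(ii)
        "Data sources: " + ", ".join(data_sources),                          # (2)(a)(iii)
        "You may correct any incorrect personal data that was processed,",   # (2)(b)
        "and you may appeal this decision with human review where feasible.",  # (2)(c)
    ]
    return "\n".join(lines)
```

Keeping the statement generation separate from the decision logic makes it easier to audit that every required element is present in what the consumer actually receives.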
(3)(a) Except as provided in subitem (b), a deployer shall provide the notice, statement, contact information, and description required by items (1) and (2): (i) directly to the consumer; (ii) in plain language; (iii) in all languages in which the deployer, in the ordinary course of the deployer's business, provides contracts, disclaimers, sale announcements, and other information to consumers; and (iv) in a format that is accessible to consumers with disabilities. (b) If the deployer is unable to provide the notice, statement, contact information, and description required by items (1) and (2) directly to the consumer, the deployer shall make the notice, statement, contact information, and description available in a manner that is reasonably calculated to ensure that the consumer receives the notice, statement, contact information, and description.
B. All decisions related to the pre-trial detention or release, prosecution, adjudication, sentencing, probation, parole, correctional supervision, or rehabilitation of criminal offenders shall be made by the judicial officer or other person charged with making such decision. No such decision shall be made without the involvement of a human decision-maker. The use of any recommendation or prediction from an artificial intelligence-based tool shall be subject to any challenge or objection permitted by law.
The Director shall require any state agency that uses an automated decision system as a substantial factor in any employment decision to: 2. Disclose (i) the fact that an automated decision system is being used; (ii) the intended use of the automated decision system, including evaluating job candidates, making compensation decisions, or considering employees for promotion; (iii) the type of data inputs received by the automated decision system and the source of such data; (iv) how the automated decision system will be used in the state agency's decision-making processes; and (v) the extent to which an individual's personal data will be shared with third parties or used as future inputs for the automated decision system;
No employment decision shall be made by a state agency without the involvement of a human decision maker. No state agency shall solely use any recommendation or prediction from an automated decision system to make an employment decision.
The Department shall establish and publicize a process for applicants for employment and employees to file concerns and complaints regarding the use of automated decision systems in the Commonwealth's employment decisions and a process for the investigation and resolution of any such concerns and complaints. Such process shall be separate and apart from the dispute resolution process described in § 2.2-1202.1.
Any department, office, board, commission, agency, or instrumentality of local government that uses an automated decision system as a substantial factor in any employment decision shall: 2. Disclose (i) the fact that an automated decision system is being used; (ii) the intended use of the automated decision system, including evaluating job candidates, making compensation decisions, or considering employees for promotion; (iii) the type of data inputs received by the automated decision system and the source of such data; (iv) how the automated decision system will be used in the decision-making processes of the department, office, board, commission, agency, or instrumentality of local government; and (v) the extent to which an individual's personal data will be shared with third parties or used as future inputs for the automated decision system;
No employment decision shall be made by a department, office, board, commission, agency, or instrumentality of local government without the involvement of a human decision maker. No department, office, board, commission, agency, or instrumentality of local government shall solely use any recommendation or prediction from an automated decision system to make an employment decision.
Any department, office, board, commission, agency, or instrumentality of local government that uses an automated decision system as a substantial factor in any employment decision shall establish and publicize a process for applicants for employment and employees to file concerns and complaints regarding the use of automated decision systems in employment decisions and a process for the investigation and resolution of any such concerns and complaints.
No employment decision shall be made by an employer without the involvement of a human decision maker. No employer shall solely use any recommendation or prediction from an automated decision system to make an employment decision.
(2)(A) An employer shall not solely rely on outputs from an automated decision system when making employment-related decisions. (B) An employer may utilize an automated decision system in making employment-related decisions if: (i) the automated decision system outputs considered in making the employment-related decision are corroborated by human oversight of the employee, including supervisory or managerial observations and documentation of the employee's work, personnel records, and consultations with the employee's coworkers; (ii) the employer has conducted an impact assessment of the automated decision system pursuant to subsection (g) of this section; and (iii) the employer is in compliance with the notice requirements of subdivision (4) of this subsection (f).
(4) Prior to using an automated decision system to make an employment-related decision about an employee, the employer must provide the employee with a notice that complies with subdivision (c)(3)(A) of this section and, at a minimum, contains the following information: (A) a plain language explanation of the nature, purpose, and scope for which the automated decision system will be used, including the specific employment-related decisions potentially affected; (B) the logic used in the automated decision system, including the key parameters that affect the output of the automated decision system; (C) the specific category and sources of employee input data that the automated decision system will use, including a specific description of any data collected through electronic monitoring; (D) any performance metrics the employer will consider using with the automated decision system; (E) the type of outputs the automated decision system will produce; (F) the individuals or entities that developed the automated decision system; (G) the individuals or entities that will operate, monitor, and interpret the results of the automated decision system; (H) information about how an employee can access the results of the most recent impact assessment of the automated decision system; (I) a description of an employee's rights, pursuant to subsection (j) of this section, to access information about the employer's use of the automated decision system and to correct data used by the automated decision system; and (J) a statement that employees are protected from retaliation for exercising the rights described in the notice.
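Requirements (A) through (J) above form a fixed checklist, so a compliance tool can verify that a draft notice covers every item before it goes out. A minimal sketch, assuming the notice is modeled as a dictionary; all field names here are hypothetical, not taken from the statute:

```python
# Items (A)-(J) that the pre-use employee notice must contain.
REQUIRED_NOTICE_ITEMS = {
    "A": "purpose_and_scope",
    "B": "system_logic_and_key_parameters",
    "C": "input_data_categories_and_sources",
    "D": "performance_metrics",
    "E": "output_types",
    "F": "developer_identity",
    "G": "operator_identity",
    "H": "impact_assessment_access",
    "I": "employee_rights_description",
    "J": "anti_retaliation_statement",
}

def missing_notice_items(notice):
    """Return the subdivision letters whose content is absent or empty
    in a draft notice (a dict keyed by the hypothetical field names)."""
    return [letter for letter, key in REQUIRED_NOTICE_ITEMS.items()
            if not notice.get(key)]
```

An empty return value means the draft addresses every statutory item; any letters returned point directly at the subdivision that still needs content.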
(a) Any deployer that employs an automated decision system for a consequential decision shall, before using the system for that decision, inform the consumer in clear, conspicuous, and consumer-friendly terms, made available in each of the languages in which the deployer offers its end services, that automated decision systems will be used to make a consequential decision or to assist in making a consequential decision. (b) Any notice provided by a deployer to the consumer pursuant to subsection (a) of this section shall include: (1) a description of the personal characteristics or attributes that the system will measure or assess; (2) the method by which the system measures or assesses those attributes or characteristics; (3) how those attributes or characteristics are relevant to the consequential decisions for which the system should be used; (4) any human components of the system; (5) how any automated components of the system are used to inform the consequential decision; and (6) a direct link to a publicly accessible page on the deployer's website that contains a plain-language description of the: (A) system's outputs; (B) types and sources of data collected from natural persons and processed by the system when it is used to make, or assists in making, a consequential decision; and (C) results of the most recent impact assessment, or an active link to a web page where a consumer can review those results.
(c) Any deployer that employs an automated decision system for a consequential decision shall provide the consumer with a single notice containing a plain-language explanation of the decision that identifies the principal reason or reasons for the consequential decision, including: (1) the identity of the developer of the automated decision system used in the consequential decision, if the deployer is not also the developer; (2) a description of what the output of the automated decision system is, such as a score, recommendation, or other similar description; (3) the degree to which, and the manner in which, the automated decision system contributed to the decision; (4) the types and sources of data processed by the automated decision system in making the consequential decision; (5) a plain language explanation of how the consumer's personal data informed the consequential decision; and (6) what actions, if any, the consumer might have taken to secure a different decision and the actions that the consumer might take to secure a different decision in the future.
(d)(1) A deployer shall provide and explain a process for a consumer to appeal a decision, which shall at minimum allow the consumer to: (A) formally contest the decision; (B) provide information to support their position; and (C) obtain meaningful human review of the decision. (2) For an appeal made pursuant to subdivision (1) of this subsection: (A) a deployer shall designate a human reviewer who: (i) is trained and qualified to understand the consequential decision being appealed, the consequences of the decision for the consumer, how to evaluate the appeal, and how to serve impartially, including by avoiding prejudgment of the facts at issue, conflicts of interest, and bias; (ii) does not have a conflict of interest for or against the deployer or the consumer; (iii) was not involved in the initial decision being appealed; (iv) shall enjoy protection from dismissal or its equivalent, disciplinary measures, or other adverse treatment for exercising their functions under this section; and (v) shall be allocated sufficient human resources by the deployer to conduct an effective appeal of the decision; and (B) the human reviewer shall consider the information provided by the consumer in their appeal and may consider other sources of information relevant to the consequential decision. (3) A deployer shall respond to a consumer's appeal not later than 45 days after receipt of the appeal. That period may be extended once by an additional 45 days where reasonably necessary, taking into account the complexity and number of appeals. The deployer shall inform the consumer of any extension not later than 45 days after receipt of the appeal, together with the reasons for the delay.
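The timing rule in item (3) — a 45-day response window, extendable once by 45 days, with the extension notice itself due within the original 45 days — can be computed mechanically from the receipt date. A minimal sketch; calendar-day arithmetic is assumed, since the statute does not specify business versus calendar days:

```python
from datetime import date, timedelta

APPEAL_RESPONSE_DAYS = 45    # initial response window under item (3)
APPEAL_EXTENSION_DAYS = 45   # one optional extension, same item

def appeal_deadlines(received, extended=False):
    """Compute the response deadline and the latest date by which any
    extension notice must reach the consumer, given the appeal receipt date."""
    initial = received + timedelta(days=APPEAL_RESPONSE_DAYS)
    return {
        # An extension must be announced within the original 45-day window.
        "extension_notice_due": initial,
        "response_due": (initial + timedelta(days=APPEAL_EXTENSION_DAYS)
                         if extended else initial),
    }
```

Because the extension notice deadline coincides with the original response deadline, a deployer that misses the first 45-day mark has also lost the ability to extend.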
(5) A deployer that has deployed a high-risk artificial intelligence system to make a consequential decision concerning a consumer shall transmit to the consumer the consequential decision without undue delay. If such consequential decision is adverse to the consumer and based on personal information beyond information that the consumer provided directly to the deployer, the deployer shall provide to the consumer a statement disclosing the principal reason or reasons for the consequential decision, including: (a) The degree to which and manner in which the high-risk artificial intelligence system contributed to the consequential decision; (b) The type of data that was processed by such system in making the consequential decision; and (c) The sources of such data.
Beginning July 1, 2026, each time a deployer deploys a high-risk artificial intelligence system to make, or be a substantial factor in making, a consequential decision concerning a consumer, the deployer shall: (1) Notify the consumer that the deployer has deployed a high-risk artificial intelligence system to make, or be a substantial factor in making, a consequential decision before the decision is made; and (2) Provide to the consumer a statement disclosing: (a) The purpose of the high-risk artificial intelligence system and the nature of the consequential decisions; (b) The contact information for the deployer; and (c) A description, in plain language, of the high-risk artificial intelligence system.
(5) A deployer that has deployed a high-risk artificial intelligence system to make a consequential decision concerning a consumer shall transmit to the consumer the consequential decision without undue delay. If such consequential decision is adverse to the consumer and based on personal data beyond information that the consumer provided directly to the deployer, the deployer shall provide to the consumer a statement disclosing the principal reason or reasons for the consequential decision, including: (a) The degree to which and manner in which the high-risk artificial intelligence system contributed to the consequential decision; (b) The type of data that was processed by such system in making the consequential decision; and (c) The sources of such data.
Beginning July 1, 2026, each time a deployer deploys a high-risk artificial intelligence system to make, or be a substantial factor in making, a consequential decision concerning a consumer, the deployer shall: (1) Notify the consumer that the deployer has deployed a high-risk artificial intelligence system to make, or be a substantial factor in making, a consequential decision before the decision is made; and (2) Provide to the consumer a statement disclosing: (a) The purpose of the high-risk artificial intelligence system and the nature of the consequential decisions; (b) The contact information for the deployer; and (c) A description, in plain language, of the high-risk artificial intelligence system.