When AI systems make or inform consequential decisions about individuals — typically covering employment, credit, housing, insurance, healthcare, and public benefits — those individuals must have meaningful rights to understand, review, challenge, and, in high-stakes contexts, override those decisions. The specific rights and processes vary by jurisdiction and context, but the core principle is that individuals should not be subject to consequential automated decisions without meaningful recourse.
(a) During the business hours of 8 a.m. to 6 p.m. daily, an operator of a large private business that provides goods and services to consumers in California shall provide consumers with human customer service support and communications. During those hours, an operator shall connect a person interacting with a customer service chatbot, or automated customer support system, to a customer service agent within five minutes after a request for human customer service is made.
(b) For telephonic customer service platforms, the business shall ensure all of the following: (1) That a customer call is answered promptly, that the customer is not placed on hold for more than five minutes at any point after the call is answered, and that cumulative hold time for a call does not exceed ten minutes. (2) If a call is answered by a customer service chatbot, the operator of the telephonic platform shall provide human assistance within five minutes after the call is made.
(c) For online customer service platforms, the business shall ensure that a customer is given the option to request customer service assistance from a human being and, upon that request, the operator of the online platform shall provide human assistance within five minutes after the request is made.
(a) It is the public policy of the State of California that a worker providing direct patient care be free to use their professional judgment to make assessments and decisions within their scope of practice as appropriate for their patients. (c) An employer shall not use or deploy technology to replace or limit a worker's use of professional judgment in patient care.
(a) If a deployer uses a high-risk automated decision system to make a decision regarding a natural person, the deployer shall notify the natural person of that fact and disclose to that natural person all of the following: (1) The purpose of the high-risk automated decision system and the specific decision it was used to make. (2) How the high-risk automated decision system was used to make the decision. (3) The type of data used by the high-risk automated decision system. (4) Contact information for the deployer. (5) A link to the statement required by subdivision (b).
(c) A deployer shall provide, as technically feasible, a natural person that is the subject of a decision made by a high-risk automated decision system an opportunity to appeal that decision for review by a natural person.
(a) An employer shall provide a written notice that an ADS, for the purpose of making employment-related decisions, not including hiring, is in use at the workplace to a worker who will foreseeably be directly affected by the ADS, or their authorized representative, according to the following: (1) At least 30 days before an ADS is first deployed by the employer. (2) If the employer is using an ADS to assist in making employment-related decisions at the time this title takes effect, no later than April 1, 2026. (3) To a new worker within 30 days of hiring the worker. (b) An employer shall maintain an updated list of all ADS currently in use. (c) A written notice required by this section shall be all of the following: (1) Written in plain language as a separate, stand-alone communication. (2) In the language in which routine communications and other information are provided to workers. (3) Provided via a simple and easy-to-use method, including, but not limited to, an email, hyperlink, or other written format. (e) A notice issued pursuant to subdivision (a) shall contain the following information: (1) The type of employment-related decisions potentially affected by the ADS. (2) A general description of the categories of worker input data the ADS will use, the sources of worker input data, and how worker input data will be collected. (3) Any key parameters known to disproportionately affect the output of the ADS. (4) The individuals, vendors, or entities that created the ADS. (5) If applicable, a description of each quota set or measured by an ADS to which the worker is subject, including the quantified number of tasks to be performed or products to be produced, and any potential adverse employment action that could result from failure to meet the quota, as well as whether those quotas are subject to change and if any notice is given of changes in quotas. (6) A description of the worker's right to access and correct the worker's data used by the ADS. 
(7) That the employer is prohibited from retaliating against workers for exercising their rights described in paragraph (6).
(d) An employer shall notify a job applicant upon receiving the application that the employer utilizes an ADS when making hiring decisions, if the employer will use the ADS in making decisions for that position. Notifications may be made using an automatic reply mechanism or on a job posting.
(c) (1) An employer shall not rely solely on an ADS when making a discipline, termination, or deactivation decision. (2) When an employer relies primarily on ADS output to make a discipline, termination, or deactivation decision, the employer shall use a human reviewer to review the ADS output and compile and review other information that is relevant to the decision, if any. For purposes of this paragraph, "other information" may include, but is not limited to, any of the following: (A) Supervisory or managerial evaluations. (B) Personnel files. (C) Work product of workers. (D) Peer reviews. (E) Witness interviews, which may include relevant online customer reviews.
(a) An employer that primarily relied on an ADS to make a discipline, termination, or deactivation decision shall provide the affected worker with a written notice at the time the employer informs the worker of the decision. The notice shall be all of the following: (1) Written in plain language as a separate, stand-alone communication. (2) In the language in which routine communications and other information are provided to workers. (3) Provided via a simple and easy-to-use method, including an email, hyperlink, or other written format. (b) A notice issued pursuant to subdivision (a) shall contain all of the following information: (1) The human to contact for more information about the decision and the ability to request a copy of the worker's own worker data relied on in the decision. (2) That the employer used an ADS to assist the employer in one or more discipline, termination, or deactivation decisions with respect to the worker. (3) That the worker has the right to request a copy of the worker's data used by the ADS. (4) That the employer is prohibited from retaliating against the worker for exercising their rights under this part.
(b) (1) An employer shall not rely solely on an ADS when making a disciplinary, termination, or deactivation decision. (2) If an employer uses an ADS output to assist in making a disciplinary, termination, or deactivation decision, the employer shall direct a human reviewer to conduct an independent investigation and compile corroborating or supporting information for the decision. For purposes of this paragraph, "other information" may include, but is not limited to, any of the following: (A) Supervisory or managerial evaluations. (B) Personnel files. (C) Work product of workers. (D) Peer reviews. (E) Witness interviews, which may include relevant online customer reviews. (c) If an employer cannot corroborate the ADS output, or the human reviewer has concluded that the ADS output is inaccurate, incomplete, or misleading, the employer shall not use the ADS output to discipline, terminate, or deactivate a worker.
(a) An employer that uses an ADS to assist in making a disciplinary, termination, or deactivation decision shall provide the affected worker with a written post-use notice at the time the employer informs the worker of the decision. The notice shall comply with all of the following: (1) It shall be written in plain language as a separate, stand-alone communication. (2) It shall be in the language in which routine communications and other information are provided to workers. (3) It shall be provided via a simple and easy-to-use method, including an email, hyperlink, or other written format. (b) The post-use notice shall contain all of the following information: (1) That the employer used an ADS to assist the employer in the disciplinary, termination, or deactivation decision with respect to the worker. (2) That a human reviewer conducted an independent investigation and compiled evidence to corroborate the ADS output. (3) Contact information for the human that the worker may contact for more information about the decision and the worker's right to access a copy of their own data and corroborating evidence that was used in the decision. (4) That the employer is prohibited from retaliating against the worker for exercising their rights under this part. (c) When responding to a data access request pursuant to this section, an employer shall provide to the worker a written, plain language document using a simple and easy-to-use method that is accessible away from the workplace containing all of the following: (1) The specific decision for which the employer used the ADS. (2) The specific worker input data that the ADS used, and the specific worker output produced by the ADS. (3) Any additional corroborating or supporting information used in addition to the ADS output in making the decision. (4) The name of the vendor or entity that created the ADS and the product name of the ADS. (5) A copy of any completed impact assessments regarding the ADS in question.
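The required contents of the post-use notice above amount to a fixed checklist. As a hypothetical illustration only (the field names are invented for this sketch and are not statutory terms of art), a draft notice could be validated against that checklist like so:

```python
# Hypothetical checklist mirroring the four required post-use notice
# elements in subsection (b) above; names are illustrative.
REQUIRED_NOTICE_FIELDS = {
    "ads_used_statement",       # that an ADS assisted in the decision
    "independent_review",       # that a human reviewer investigated
    "contact_information",      # whom to contact and data-access rights
    "anti_retaliation_notice",  # that retaliation is prohibited
}

def missing_fields(notice: dict) -> set:
    """Return the required fields that are absent or empty in a draft notice."""
    return {f for f in REQUIRED_NOTICE_FIELDS if not notice.get(f)}
```

A notice dict containing non-empty text for all four keys yields an empty set; anything missing is reported by name, which makes the check easy to wire into a document-generation pipeline.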
(4) (a) On and after June 30, 2026, and no later than the time that a deployer deploys a high-risk artificial intelligence system to make, or be a substantial factor in making, a consequential decision concerning a consumer, the deployer shall:
(b) On and after June 30, 2026, a deployer that has deployed a high-risk artificial intelligence system to make, or be a substantial factor in making, a consequential decision concerning a consumer shall, if the consequential decision is adverse to the consumer, provide to the consumer:
Except as provided in subsection (b) of section 2 of this act, a deployer who has deployed an automated employment-related decision process to make, or be a substantial factor in making, an employment-related decision concerning an applicant for employment or employee in the state shall, before such employment-related decision is made, provide to such applicant or employee a written notice disclosing: (1) That the deployer has deployed an automated employment-related decision process; (2) The purpose of the automated employment-related decision process and the nature of such employment-related decision; (3) Information concerning the right, under subparagraph (C) of subdivision (5) of subsection (a) of section 42-518 of the general statutes, to opt out of the processing of personal data for the purposes set forth in said subparagraph; (4) Contact information for the deployer; (5) The availability of human review pursuant to section 7 of this act; (6) Information concerning how such applicant or employee may request a reevaluation of any employment-related decision made in whole or in part by such automated employment-related decision process; (7) A link to the summary of the most recent bias audit required pursuant to section 8 of this act; and (8) Information concerning how to request additional documentation or information about such automated employment-related decision process.
(a) Except as provided in subsection (b) of section 2 of this act, a deployer who has deployed an automated employment-related decision process to make, or be a substantial factor in making, an employment-related decision concerning an applicant for employment or employee in the state shall, if such employment-related decision is adverse to such applicant or employee, provide to such applicant or employee: (1) A high-level statement disclosing the principal reason or reasons for such adverse employment-related decision, including, but not limited to, (A) the degree to which, and manner in which, the automated employment-related decision process contributed to such adverse employment-related decision, (B) the type of data that were processed by such automated employment-related decision process in making, or as a substantial factor in making, such adverse employment-related decision, and (C) the source of the data described in subparagraph (B) of this subdivision; (2) An opportunity to (A) examine the data the automated employment-related decision process processed in making, or as a substantial factor in making, such adverse employment-related decision, (B) correct any incorrect data described in subparagraph (A) of this subdivision, and (C) appeal such adverse employment-related decision if such adverse employment-related decision is based upon any incorrect data described in subparagraph (A) of this subdivision. Such appeal shall allow for human review; and (3) Upon request by such applicant or employee, or such applicant or employee's representative, a copy of the most recent bias audit required pursuant to section 8 of this act. 
(b) A deployer who is required to provide a high-level statement to an applicant for employment or employee in the state pursuant to subdivision (1) of subsection (a) of this section shall provide such statement: (1) Directly to such applicant or employee; (2) In plain language; (3) In all languages in which such deployer, in the ordinary course of such deployer's business, provides contracts, disclaimers, sales announcements and other information to persons in the state; and (4) In a format that is accessible to individuals with disabilities.
(a) For the purposes of this section "human review" means a review conducted by a qualified individual who (1) has the authority to make or change an employment-related decision, (2) understands the capabilities, limitations and risks of the automated employment-related decision process, including, but not limited to, patterns of bias, disparate impact and data quality issues, and (3) does not rely solely on the content, decision, prediction or recommendation generated by the automated employment-related decision process in making a final or determinative employment-related decision. (b) (1) A deployer who has deployed an automated employment-related decision process in making, or as a substantial factor in making, an employment-related decision concerning an applicant for employment or employee in the state shall implement human review over such automated employment-related decision process by providing for review of the content, decisions, predictions or recommendations generated by the automated employment-related decision process and any other information relevant to such content, decision, prediction or recommendation in order to confirm the accuracy of data processed by such automated employment-related decision process and, when appropriate, modify or veto any such content, decision, prediction or recommendation generated by such automated decision-making process prior to any adverse employment-related decision. (2) A deployer shall (A) establish procedures necessary to pause, correct or reverse erroneous or harmful content, decision, prediction or recommendation generated by an automated employment-related decision process, and (B) establish and maintain logs listing all human review reports and any intervention taken by an individual conducting such human review. 
(c) No automated employment-related decision process shall be used by a deployer in making a final or determinative employment-related decision without human review over such final or determinative employment-related decision.
(B) For an employer, by the employer or the employer's agent, to fail to provide to any individual advance written notice disclosing, at a minimum, that an automated employment-related decision process will be used to make, to assist in making or in the course of making a decision to hire or employ or to bar or to discharge from employment, or concerning the compensation or terms, conditions or privileges of employment, of such individual. Such notice shall, at a minimum, disclose the trade name of the automated employment-related decision process and the types and sources of personal information concerning the individual that the automated employment-related decision process will process or analyze.
(a) No later than the time that a deployer deploys an automated decision system to make, or assist in making, a consequential decision concerning a consumer, the deployer shall: (1) Notify the consumer that the deployer has deployed an automated decision system to make, or assist in making, a consequential decision; and (2) Provide to the consumer: (A) A statement disclosing the purpose of the automated decision system and the nature of the consequential decision; (B) The contact information for the deployer; (C) A description, in plain language, of the automated decision system, which description shall, at a minimum, include: (i) A description of the personal characteristics or attributes that the system will measure or assess; (ii) The method by which the system measures or assesses those attributes or characteristics; (iii) How those attributes or characteristics are relevant to the consequential decisions for which the system should be used; (iv) Any human components of such system; (v) How any automated components of such system are used to inform such consequential decision; and (vi) A direct link to a publicly accessible page on the deployer's public website that contains a plain-language description of the logic used in the system, including the key parameters that affect the output of the system; the system's outputs; the types and sources of data collected from natural persons and processed by the system when it is used to make, or assists in making, a consequential decision; and the results of the most recent impact assessment, or an active link to a web page where a consumer can review those results; and (D) Instructions on how to access the statement required by Code Section 10-16-5.
(b) A deployer that has used an automated decision system to make, or assist in making, a consequential decision concerning a consumer shall transmit to such consumer within one business day after such decision a notice that includes: (1) A specific and accurate explanation that identifies the principal factors and variables that led to the consequential decision, including: (A) The degree to which, and manner in which, the automated decision system contributed to the consequential decision; (B) The source or sources of the data processed by the automated decision system; and (C) A plain-language explanation of how the consumer's personal data informed these principal factors and variables when the automated decision system made, or assisted in making, the consequential decision; (2) Information about consumers' right to correct, and how the consumer can submit corrections and provide supplementary information relevant to, the consequential decision; (3) What actions, if any, the consumer might have taken to secure a different decision and the actions that the consumer might take to secure a different decision in the future; (4) Information on opportunities to correct any incorrect personal data that the automated decision system processed in making, or assisting in making, the consequential decision; and (5) Information on opportunities to appeal an adverse consequential decision concerning the consumer arising from the deployment of an automated decision system, which appeal shall, if technically feasible, allow for human review. 
(c)(1) A deployer shall provide the notice, statement, contact information, and description required by subsections (a) and (b) of this Code section: (A) Directly to the consumer; (B) In plain language; (C) In all languages in which the deployer, in the ordinary course of the deployer's business, provides contracts, disclaimers, sale announcements, and other information to consumers; and (D) In a format that is accessible to consumers with disabilities. (2) If the deployer is unable to provide the notice, statement, contact information, and description directly to the consumer, the deployer shall make such information available in a manner that is reasonably calculated to ensure that the consumer receives it. (d) No deployer shall use an automated decision system to make, or assist in making, a consequential decision if it cannot provide notices and explanations that satisfy the requirements of this Code section.
(a) Before using an artificial intelligence system to make, or be a substantial factor in making, a consequential decision, a health care provider shall provide the patient or the patient's authorized representative, as applicable, with a written notice that: (1) Informs the recipient that the health care provider will be using an artificial intelligence system to make, or be a substantial factor in making, the consequential decision; (2) Discloses the purpose of the artificial intelligence system and the nature of the consequential decision; (3) Describes the artificial intelligence system in plain language; and (4) Allows the patient to opt out of the processing of the patient's individually identifiable health information or other personal data for purposes of profiling in furtherance of decisions that have legal or similarly significant effects concerning the patient.
(b) Any health care provider that used an artificial intelligence system to make, or be a substantial factor in making, a consequential decision shall provide the patient or the patient's authorized representative, as applicable, with: (1) A written statement that describes the consequential decision and the principal reasons for the consequential decision, including: (A) The degree to which, and manner in which, the artificial intelligence system contributed to the consequential decision; (B) The type of data that was processed by the artificial intelligence system in making the consequential decision; and (C) The sources of the data described in paragraph (B); (2) An opportunity to correct any incorrect health information or personal data that the artificial intelligence system processed in making, or as a substantial factor in making, the consequential decision; and (3) An opportunity to appeal the consequential decision, including allowing, to the extent technically feasible, human review of all information relating to the consequential decision; provided that this paragraph shall not apply if providing the opportunity for appeal is not in the best interest of the patient, including in instances in which any delay might pose a risk to the life or safety of the patient. (c) The notice and statement required pursuant to subsections (a) and (b), respectively, shall be provided directly to the patient or the patient's authorized representative, as applicable; provided that if the health care provider is unable to comply with this requirement, the health care provider shall provide the notice or statement in a manner that is reasonably calculated to ensure that the patient or the patient's authorized representative, as applicable, receives the notice or statement.
(a) Any health care provider that uses an artificial intelligence system to make, or be a substantial factor in making, a consequential decision shall maintain artificial intelligence oversight personnel. (b) The artificial intelligence oversight personnel: (1) Shall be a natural person; (2) Shall have the qualifications, experience, and expertise necessary to effectively evaluate outputs, including but not limited to any information, data, assumptions, predictions, scoring, recommendations, decisions, or conclusions generated by artificial intelligence systems in the field of health care; and (3) May be retained by contracting with a third party. (c) The artificial intelligence oversight personnel shall: (1) Monitor the artificial intelligence systems used by the health care provider; and (2) Before the health care provider uses an output generated by an artificial intelligence system to make, or be a substantial factor in making, a consequential decision: (A) Review and evaluate the output; and (B) Validate or override the output.
1. An employer shall provide a written notice that an automated decision system is in use for the purpose of making employment-related decisions, other than hiring decisions, at the workplace to an employee who will foreseeably be directly affected by the automated decision system, or the employee's authorized representative. The employer shall provide the notice by the following dates: a. At least thirty days before an automated decision system is first deployed by the employer. b. If the employer is using an automated decision system to assist in making employment-related decisions as of the effective date of this Act, no later than January 1, 2027. c. To a new employee within thirty days of hiring the employee. 2. A notice provided pursuant to subsection 1 shall contain all of the following information: a. The type of employment-related decisions potentially affected by the automated decision system. b. A general description of the categories of employee input data the automated decision system will use, the sources of employee input data, and how employee input data will be collected. c. Any key parameters known to disproportionately affect the output of the automated decision system. d. The individuals, vendors, or entities that created the automated decision system. e. If applicable, a description of each quota set or measured by an automated decision system to which the employee is subject, including the quantified number of tasks to be performed or products to be produced, and any potential adverse employment action that could result from failure to meet the quota, as well as whether those quotas are subject to change and if any notice is given of changes in quotas. f. A description of the employee's right to access and correct the employee's data used by the automated decision system. g. That the employer is prohibited from retaliating against employees for exercising the rights provided in this chapter. 3.
A written notice required by subsection 1 shall be written in plain language as a separate, stand-alone communication. The notice shall be in the language in which routine communications and other information are provided to employees. The notice shall be provided via a simple and easy-to-use method, including but not limited to an email, electronic link, or other written format.
4. If an employer will use an automated decision system in making hiring decisions for a position, the employer shall notify an applicant for the position, upon receiving the application, that the employer utilizes an automated decision system when making hiring decisions. The employer may make the notification using an automatic reply mechanism or on a job posting.
e. Rely solely on an automated decision system when making a discipline, termination, or deactivation decision. 2. When an employer relies primarily on output from an automated decision system to make a discipline, termination, or deactivation decision, the employer shall use a human reviewer to review the automated decision system output and compile and review other information that is relevant to the decision, if any. For purposes of this subsection, "other information" may include but is not limited to any of the following: a. Supervisory or managerial evaluations. b. Personnel files. c. Work product of employees. d. Peer reviews. e. Witness interviews, which may include relevant online customer reviews.
1. An employer that primarily relied on an automated decision system to make a discipline, termination, or deactivation decision shall provide the affected employee with a written notice at the time the employer informs the employee of the decision. 2. A notice provided pursuant to subsection 1 shall contain all of the following information: a. The individual to contact for more information about the decision. b. That the employer used an automated decision system to assist the employer in one or more discipline, termination, or deactivation decisions with respect to the employee. c. That the employee has the right to request a copy of the employee's data used by the automated decision system. d. That the employer is prohibited from retaliating against the employee for exercising the rights provided in this chapter. 3. A written notice required by subsection 1 shall be written in plain language as a separate, stand-alone communication. The notice shall be in the language in which routine communications and other information are provided to employees. The notice shall be provided via a simple and easy-to-use method, including but not limited to an email, electronic link, or other written format.
4. a. If a utilization review organization's decision to deny or downgrade a request for prior authorization is appealed by the requesting health care provider or covered person, the appeal shall be conducted by either of the following: (1) A qualified reviewer, if the health care provider requesting prior authorization is a physician. (2) A clinical peer, if the health care provider requesting prior authorization is not a physician. b. A qualified reviewer or clinical peer involved in the initial denial or downgrade determination of a request for prior authorization that is the subject of an appeal shall not conduct the appeal. c. When conducting an appeal of a request for prior authorization, the qualified reviewer or clinical peer shall consider the known clinical aspects of the health care services under review, including but not limited to medical records relevant to the covered person's medical condition that is the subject of the health care services for which prior authorization is requested, and any relevant medical literature submitted by the health care provider as part of the appeal.
(a) An employer shall not use or apply, or authorize any procurement, purchase, or acquisition of any service or system using or relying on any automated decision-making system, directly or indirectly, without meaningful and continuing human review when performing any function that: (1) is related to the administration of any public assistance program; (2) will have an adverse impact on the rights, civil liberties, safety, or welfare of any employee in this State; or (3) affects any statutorily or constitutionally provided rights of an employee.
(b) An employer shall not use or apply any automated decision-making system, directly or indirectly, to perform any function described in subsection (a) without providing: (1) a notice to any affected employee no later than the time a decision is issued to that employee that a decision concerning the employee was made using an automated decision-making system; (2) an appeals process for decisions made by an automated decision-making system in which an employee is impacted as a direct result of the use of the automated decision-making system; and (3) the opportunity for an affected employee to have an appropriate alternative review, by an individual working for or on behalf of the employer with respect to the decision, independent of the automated decision-making system.
(d) The use of an automated decision-making system shall not affect: (1) existing rights of employees covered by a collective bargaining agreement; or (2) existing representational relationships among labor organizations or bargaining relationships between an employer and a labor organization.
(a) A deployer shall, at or before the time an automated decision tool is used to make a consequential decision, notify any natural person who is the subject of the consequential decision that an automated decision tool is being used to make, or be a controlling factor in making, the consequential decision. A deployer shall provide to a natural person notified under this subsection all of the following: (1) a statement of the purpose of the automated decision tool; (2) the contact information for the deployer; and (3) a plain language description of the automated decision tool that includes a description of any human components and how any automated component is used to inform a consequential decision.
(b) If a consequential decision is made solely based on the output of an automated decision tool, a deployer shall, if technically feasible, accommodate a natural person's request to not be subject to the automated decision tool and to be subject to an alternative selection process or accommodation. After a request is made under this subsection, a deployer may reasonably request, collect, and process information from a natural person for the purposes of identifying the person and the associated consequential decision. If the person does not provide that information, the deployer shall not be obligated to provide an alternative selection process or accommodation.
(2) Clear instructions describing how a patient may contact a human health care provider, employee of the health facility, clinic, physician's office, or office of a group provider, or other appropriate person.
It is the policy of this State that a student and the student's parent have the right to: (2) request a human teacher review any automated scored grade or scored grade generated by artificial intelligence;
An employer may not: (1) rely exclusively on an automated decision system in making an employment related decision with respect to a covered individual;
the employer independently corroborates, via meaningful oversight by a human with appropriate and relevant experience, the automated decision system output;
not later than seven (7) days after making the employment related decision, the employer provides full, accessible, and meaningful documentation in plain language and at no cost to the covered individual on the automated decision system output, including: (i) a description of the automated decision system used to generate the automated decision system output; (ii) a description and explanation, in plain language, of the input data to the automated decision system used to generate the automated decision system output and a machine readable copy of the data; (iii) a description and explanation of how the automated decision system output was used in making the employment related decision; and (iv) the reasoning for the use of the automated decision system output in the employment related decision;
the employer allows the covered individual to, after receiving the documentation described in clause (F): (i) dispute, in a manner that is accessible, equitable, and does not pose an unreasonable burden on the covered individual, the automated decision system output to a human with appropriate and relevant experience; and (ii) appeal the employment related decision to a human with appropriate and relevant experience who is not the human for purposes of the corroboration under clause (E).
Sec. 11. (a) An employer that uses or intends to use an automated decision system output in making an employment related decision with respect to a covered individual shall, in accordance with subsections (b) and (c), disclose to the covered individual: (1) that the employer uses or intends to use an automated decision system output in making an employment related decision; (2) a description and explanation of the automated decision system used or intended to be used to generate the automated decision system output, including: (A) the types of data collected or intended to be collected as inputs to the automated decision system and the circumstances of the collection; (B) the characteristics that the automated decision system measures or is intended to measure, such as the knowledge, skills, or abilities of the covered individual; (C) how the characteristics relate or would relate to any function required for the work or potential work of the covered individual; (D) how the system measures or is intended to measure the characteristics; and (E) how the covered individual can interpret the automated decision system output in plain language; (3) the identity of the covered individual or entity that operates the automated decision system that provides the automated decision system output; (4) how the employer uses or intends to use the automated decision system output in making the employment related decision; and (5) how the covered individual may dispute or appeal an employment related decision made with respect to the covered individual using an automated decision system output. (b) An employer shall provide the disclosures required by subsection (a) to a covered individual as follows: (1) In the case of a covered individual who was hired on or before July 1, 2026, the disclosure must be provided to the covered individual not later than August 1, 2026. 
(2) In the case of a covered individual who is hired after July 1, 2026, the disclosure must be provided to the covered individual before hiring. (c) Not later than thirty (30) days after: (1) any information provided by an employer to a covered individual through a disclosure required by subsection (a) significantly changes; or (2) any significant new information required to be provided in the disclosure becomes available; the employer shall provide the covered individual with an updated disclosure.
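The disclosure timing in Sec. 11(b) and (c) reduces to a simple deadline rule: employees hired on or before July 1, 2026 must receive the disclosure by August 1, 2026; anyone hired later must receive it before hiring; and significant changes trigger an updated disclosure within 30 days. A minimal sketch of that rule follows. It is purely illustrative and not part of the statutory text; the function and constant names are invented for the example.

```python
from datetime import date, timedelta

CUTOFF = date(2026, 7, 1)          # hired on or before this date -> fixed deadline
FIXED_DEADLINE = date(2026, 8, 1)  # disclosure due date for existing employees

def disclosure_deadline(hire_date: date) -> date:
    """Latest date the Sec. 11(a) disclosure may be provided."""
    if hire_date <= CUTOFF:
        return FIXED_DEADLINE
    # Hired after July 1, 2026: disclosure must precede the hire date.
    return hire_date - timedelta(days=1)

def update_deadline(change_date: date) -> date:
    """Updated disclosure is due within 30 days of a significant change."""
    return change_date + timedelta(days=30)
```

For example, an employee hired June 1, 2026 is owed the disclosure by August 1, 2026, while a candidate hired September 10, 2026 must receive it no later than September 9, 2026.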
A. An employer shall provide written notice that an ADS, for the purpose of making employment-related decisions, not including hiring, is in use at the workplace to a worker who will foreseeably be directly affected by the ADS, or his authorized representative. The notice shall be provided at any of the following time periods: (1) At least thirty days before an ADS is first deployed by the employer. (2) If the employer is using an ADS to assist in making employment-related decisions at the time this Part takes effect. (3) To a new worker within thirty days of his hiring date. B. An employer shall maintain an updated list of all ADS currently in use. C. A written notice required by this Section shall meet all of the following requirements: (1) Written in plain language as a separate, standalone communication. (2) In the language in which routine communications and other information are provided to workers. (3) Provided via a simple and easy-to-use method, including but not limited to an email, hyperlink, or other written format. D. An employer who uses an ADS to make hiring decisions shall notify a job applicant upon receiving his application that the employer utilizes an ADS for hiring decisions. Notifications may be made using an automatic reply mechanism or on the job posting. E. A notice issued pursuant to Subsection A of this Section shall contain all of the following information: (1) The type of employment-related decisions potentially affected by the ADS. (2) A general description of the categories of worker input data the ADS will use, the sources of worker input data, and how worker input data will be collected. (3) Any key parameters known to disproportionately affect the output of the ADS. (4) The individuals, vendors, or entities that created the ADS. 
(5) If applicable, a description of each quota set or measured by an ADS that the worker is subject to, including the quantified number of tasks to be performed or products to be produced, and any potential adverse employment action that could result from failure to meet the quota, as well as whether those quotas are subject to change and if any notice is given of changes in quotas. (6) A description of the worker's right to access and correct the worker's own data used by the ADS. (7) That the employer shall be prohibited from retaliating against a worker who exercises his rights as provided in Paragraph (6) of this Subsection. (8) That the worker has a right to appeal any decision made with the assistance of an ADS and the process to appeal that decision.
C.(1) An employer shall not rely solely on an ADS when making a discipline, termination, or deactivation decision. (2) If an employer or a vendor utilizes an ADS output to assist in making an employment-related decision, the employer or vendor shall do all of the following: (a) Ensure the accuracy of the ADS output. (b)(i) Use a designated internal reviewer to conduct a separate investigation and compile corroborating information for the decision. This information may include but is not limited to supervisory or managerial evaluations, personnel files, employee work products, or peer reviews. (ii) The designated internal reviewer required by this Subparagraph shall have all of the following: (aa) Sufficient authority, discretion, resources, and time to corroborate the ADS output. (bb) Sufficient expertise in the operation of similar systems and a sufficient understanding of the ADS in question to interpret its outputs as well as results of relevant impact assessments. (cc) Education, training, or experience sufficient to allow the reviewer to make a well-informed decision. (iii) The designated internal reviewer shall be protected from retaliation for exercising his responsibilities. (3) An employer shall not rely on an ADS to make an employment-related decision if the employer cannot corroborate the ADS output or the human reviewer has concluded that the ADS output is inaccurate, incomplete, or misleading.
A. An employer that primarily relies on an ADS to make a discipline, termination, or deactivation decision shall provide the affected worker with written notice at the time such decision is made. The notice shall meet all of the following requirements: (1) Written in plain language as a separate, standalone communication. (2) In the language in which routine communications and other information are provided to workers. (3) Provided via a simple and easy-to-use method, including but not limited to an email, hyperlink, or other written format. B. A notice issued pursuant to Subsection A of this Section shall contain all of the following information: (1) The human individual to contact for more information about the decision and the ability to request a copy of the worker's own worker data relied on in the decision. (2) That the employer used an ADS to assist the employer in any discipline, termination, or deactivation decisions with respect to the worker. (3) That the worker has the right to request a copy of the worker's data used by the ADS. (4) That the employer is prohibited from retaliating against the worker for exercising his right pursuant to this Part. (5) The worker's right to appeal the decision as provided in R.S. 23:975.
A. If an employer has used an ADS to make an employment-related decision about a worker, the affected worker has the right to appeal that decision, request a human review, request submission of additional information, and correct any errors in the data used by the ADS. B. An employer or a vendor that used an ADS to make an employment-related decision shall provide an affected worker with a form or a hyperlink to an electronic form that provides that the worker has a right to appeal the decision within thirty days from the date that the worker was notified. The appeal form provided to an affected worker shall include all of the following: (1) The option to request access to the data used as input to or as output from the ADS. (2) The option to request access to any corroborating or supporting evidence provided by a human reviewer to verify output from the ADS. (3) The worker's reason or justification for an appeal and any evidence to support the appeal. (4) A designation for an authorized representative who can also access the data. C.(1) An employer or a vendor shall respond to an appeal within fourteen business days. (2)(a)(i) In responding to an appeal, the employer or vendor shall designate a human reviewer who shall meet all of the following requirements: (aa) He can objectively evaluate all evidence. (bb) He has sufficient authority, discretion, and resources to evaluate the decision. (cc) He has the authority to overturn the decision. (ii) The employer or vendor shall not designate a person who was involved in the decision that the worker is appealing. (b) The response provided to the worker shall be composed in a clear, written document that describes the result of the appeal and the reasons for that result. (3) If the human reviewer determines that the employment-related decision should be overturned, the employer or vendor shall rectify the decision within twenty-one business days.
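The appeal procedure above mixes calendar-day and business-day clocks: thirty calendar days for the worker to appeal, fourteen business days for the employer or vendor to respond, and twenty-one business days to rectify an overturned decision. The sketch below illustrates how those three deadlines could be computed; it is not part of the statute, the function names are invented, and it treats only weekends (not state holidays) as non-business days.

```python
from datetime import date, timedelta

def add_business_days(start: date, n: int) -> date:
    """Advance n business days (Mon-Fri), skipping weekends only."""
    d = start
    while n > 0:
        d += timedelta(days=1)
        if d.weekday() < 5:  # 0-4 are Monday through Friday
            n -= 1
    return d

def appeal_window_closes(notified: date) -> date:
    """Worker may appeal within thirty calendar days of notification."""
    return notified + timedelta(days=30)

def response_due(appeal_filed: date) -> date:
    """Employer or vendor must respond within fourteen business days."""
    return add_business_days(appeal_filed, 14)

def rectification_due(overturned: date) -> date:
    """Overturned decisions must be rectified within twenty-one business days."""
    return add_business_days(overturned, 21)
```

For a worker notified on Monday, January 5, 2026, the appeal window runs through February 4; an appeal filed that same Monday would be due a response by Friday, January 23.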
E.(1) Any insured has the right to appeal a determination that he has learned was made with a recommendation from an artificial intelligence or an automated decision system. (2) Any adverse determination in which artificial intelligence or an automated decision system materially contributed to the determination shall be presumed invalid unless the health insurance issuer demonstrates that the determination was independently reached through documented clinical judgment without reliance upon algorithmic output. (3) If an adverse determination is appealed on the basis of the use of an artificial intelligence or an automated decision system, the insurer shall not use an artificial intelligence or an automated decision system in any subsequent review of the claim.
(4) Allow covered persons, upon request, to review and have copies of all documents relevant to any artificial intelligence or an automated decision system as defined in R.S. 22:1260.49(A)(1) used in the utilization review or determination process.
(c) Consumer Protections: Deployers must: (1) Notify consumers when an AI system materially influences a consequential decision; (2) Provide consumers with: (i) The purpose of the system; (ii) An explanation of how the system influenced the decision; (iii) A process to appeal or correct adverse decisions.
(c) Consumer Notification: Consumers must be notified when: (1) They are being targeted or influenced by AI systems in a way that materially impacts their decisions; (2) Algorithms are used to determine pricing, eligibility, or access to services.
(a) Covered entities shall not use biometric data to help make decisions that produce legal effects or similarly significant effects concerning end users. Decisions that include legal effects or similarly significant effects concerning end users include, without limitation, denial or degradation of consequential services or support, such as financial or lending services, housing, insurance, educational enrollment, criminal justice, employment opportunities, health care services, and access to basic necessities, such as food and water.
(d) (1) Not later than 6 months after the effective date of this act, and no later than the time that a deployer deploys a high-risk artificial intelligence system to make, or be a substantial factor in making, a consequential decision concerning a consumer, the deployer shall: (i) notify the consumer that the deployer has deployed a high-risk artificial intelligence system to make, or be a substantial factor in making, a consequential decision before the decision is made; (ii) provide to the consumer a statement disclosing the purpose of the high-risk artificial intelligence system and the nature of the consequential decision; the contact information for the deployer; a description, in plain language, of the high-risk artificial intelligence system; and instructions on how to access the statement required by subsection (5)(a) of this section; and (iii) provide to the consumer information, if applicable, regarding the consumer's right to opt out of the processing of personal data concerning the consumer for purposes of profiling in furtherance of decisions that produce legal or similarly significant effects concerning the consumer. 
(2) Not later than 6 months after the effective date of this act, a deployer that has deployed a high-risk artificial intelligence system to make, or be a substantial factor in making, a consequential decision concerning a consumer shall, if the consequential decision is adverse to the consumer, provide to the consumer: (i) a statement disclosing the principal reason or reasons for the consequential decision, including: (A) the degree to which, and manner in which, the high-risk artificial intelligence system contributed to the consequential decision; (B) the type of data that was processed by the high-risk artificial intelligence system in making the consequential decision; and (C) the source or sources of the data described in subsection (d)(2)(i)(B) of this section; (ii) an opportunity to correct any incorrect personal data that the high-risk artificial intelligence system processed in making, or as a substantial factor in making, the consequential decision; and (iii) an opportunity to appeal an adverse consequential decision concerning the consumer arising from the deployment of a high-risk artificial intelligence system, which appeal must, if technically feasible, allow for human review unless providing the opportunity for appeal is not in the best interest of the consumer, including in instances in which any delay might pose a risk to the life or safety of such consumer. (3) (i) except as provided in subsection (d)(3)(ii) of this section, a deployer shall provide the notice, statement, contact information, and description required by subsections (d)(1) and (d)(2) of this section: (A) directly to the consumer; (B) in plain language; (C) in all languages in which the deployer, in the ordinary course of the deployer's business, provides contracts, disclaimers, sale announcements, and other information to consumers; and (D) in a format that is accessible to consumers with disabilities.
(ii) if the deployer is unable to provide the notice, statement, contact information, and description required by subsections (d)(1) and (d)(2) of this section directly to the consumer, the deployer shall make the notice, statement, contact information, and description available in a manner that is reasonably calculated to ensure that the consumer receives the notice, statement, contact information, and description.
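Subsection (d)(2) amounts to a six-item checklist for an adverse-decision statement: principal reasons, the system's degree and manner of contribution, the types and sources of data processed, a correction path, and an appeal path with human review where technically feasible. A deployer could validate a draft statement against that checklist roughly as follows; this is an illustrative sketch only, with invented field names keyed to the statutory clauses, not a compliance tool.

```python
# Required disclosure items, keyed to subsection (d)(2).
REQUIRED_FIELDS = {
    "principal_reasons",    # (d)(2)(i): reasons for the adverse decision
    "system_contribution",  # (d)(2)(i)(A): degree and manner of AI contribution
    "data_types",           # (d)(2)(i)(B): types of data processed
    "data_sources",         # (d)(2)(i)(C): sources of that data
    "correction_process",   # (d)(2)(ii): how to correct incorrect personal data
    "appeal_process",       # (d)(2)(iii): how to appeal, with human review
}

def missing_disclosures(statement: dict) -> set:
    """Return the required items that are absent or empty in a draft statement."""
    return {key for key in REQUIRED_FIELDS if not statement.get(key)}
```

A draft that omits, say, the appeal process would come back as `{"appeal_process"}`, flagging the gap before the statement is sent.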
(h) An employer shall not rely primarily on employee data collected through electronic monitoring when making hiring, promotion, disciplinary decisions up to and including termination, or compensation decisions. For an employer to satisfy the requirements of this paragraph: (i) An employer shall establish meaningful human oversight of such decisions based in whole or in part on data collected through electronic monitoring. (ii) A human decision-maker must actually review any information collected through electronic monitoring, verify that such information is accurate and up to date, review any pending employee requests to correct erroneous data, and exercise independent judgment in making each such decision; and (iii) The human decision-maker must consider information other than information collected through electronic monitoring when making each such decision, such as but not limited to, supervisory or managerial evaluations, personnel files, employee work products, or peer reviews.
(i) When an employer makes a hiring, promotion, termination, disciplinary or compensation decision based in whole or part on data gathered through the use of electronic monitoring, it shall disclose to affected employees no less than thirty days prior to the decision going into effect: (i) that the decision was based in whole or part on data gathered through electronic monitoring; (ii) the specific electronic monitoring tool or tools used to gather such data, how the tools work to gather and analyze the data, and the increments of time in which the data is gathered; (iii) the specific data, and judgments based upon such data, used in the decision-making process; and (iv) any information used in the decision-making process gathered through sources other than electronic monitoring.
(a) Any employer that uses an automated employment decision tool to assess or evaluate an employee or candidate shall notify employees and candidates subject to the tool no less than ten business days before such use: (i) that an automated employment decision tool will be used in connection with the assessment or evaluation of such employee or candidate; (ii) the job qualifications and characteristics that such automated employment decision tool will assess, what employee or candidate data or attributes the tool will use to conduct that assessment, and what kind of outputs the tool will produce as an evaluation of such employee or candidate; (iii) what employee or candidate data is collected for the automated employment decision tool, the source of such data and the employer's data retention policy. Information pursuant to this section shall not be disclosed where such disclosure would violate local, state, or federal law, or interfere with a law enforcement investigation; (iv) the results of the most recent impact assessment of the automated employment decision tool, including any findings of a disparate impact and associated response from the employer, or information about how to access that information if publicly available; (v) information about how an employee or candidate may request an alternative selection process or accommodation that does not involve the use of an automated employment decision tool and details about that alternative process or accommodation; and (vi) information about how the employee or candidate may: (A) request reevaluation of the employment decision made by the automated employment decision tool in accordance with section one thousand thirteen of this article; and (B) receive notification of the employee or candidate's right to file a complaint in a civil court in accordance with section seven of this chapter or otherwise exercise the rights described in this chapter.
(b) The notice required by this section shall be: (i) written in clear and plain language; (ii) included in each job posting or advertisement for each position for which the automated employment decision tool will be used; (iii) posted on the employer's website in any language that the employer regularly uses to communicate with employees; (iv) provided directly to each candidate who applies for a position in the language with which that candidate communicates with the employer; (v) made available in formats that are reasonably accessible to and usable by individuals with disabilities; and (vi) otherwise presented in a manner that ensures the notice clearly and effectively communicates the required information to employees.
(b) An employer shall not rely primarily on output from an automated decision tool when making hiring, promotion, termination, disciplinary, or compensation decisions. For an employer to satisfy the requirements of this paragraph: (i) An employer must establish meaningful human oversight of such decisions based in whole or in part on the output of automated employment decision tools. In determining whether an internal reviewer employs the requisite knowledge and skill to provide meaningful human oversight, relevant factors include the relative complexity and specialized nature of the automated decision tool, the reviewer's general experience, the reviewer's training and experience in the field, the preparation and study the reviewer is able to give the matter and whether it is feasible to refer the matter to, or associate or consult with, an expert with established competence in the field of automated decision tools. (ii) A human decision-maker must actually review any output of an automated employment decision tool and exercise independent judgment in making each such decision; (iii) The human decision-maker must consider information other than automated employment decision tool outputs when making each such decision, such as but not limited to supervisory or managerial evaluations, personnel files, employee work products, or peer reviews; and (iv) An employer shall consider information other than automated employment decision tool outputs when making hiring, promotion, termination, disciplinary, or compensation decisions, such as supervisory or managerial evaluations, personnel files, employee work products, or peer reviews. (c) An employer shall not require employees or candidates to consent to the use of an automated employment decision tool in an employment decision in order to be considered for an employment decision, nor shall an employer discipline or disadvantage an employee or candidate for employment as a result of their request for accommodation.
Sec. 13. (1) If an employer uses an electronic monitoring tool or automated decisions tool, the employer must display a poster at the employer's place of business, in a conspicuous place accessible to the employer's employees, that includes, but is not limited to, notice of the use of an electronic monitoring tool or automated decisions tool. (2) Not less than 30 days before an employer implements an electronic monitoring tool or automated decisions tool, the employer shall provide notice, in writing, of the tool's use to all of the employer's employees. The employer shall also include the notice in every job posting, post the notice on the employer's website, provide the notice directly to every applicant, and make the notice available in accessible formats that account for the applicant's first language, if it is not English, and any disability the applicant may have. The notice must provide a covered individual with the ability to opt out of the electronic monitoring tool or automated decisions tool. (3) If a covered individual opts out of the use of an electronic monitoring tool or automated decisions tool under subsection (2), the employer shall not use the electronic monitoring tool or automated decisions tool to make any employment-related decisions for that covered individual.
Subdivision 1. Pre-use notice; provision. (a) An employer must provide a written notice that an automated decision system is in use at the workplace for the purpose of making employment-related decisions, to a worker who will be directly or indirectly affected by the automated decision system, or the worker's authorized representative, and to any union representing workers who could be directly or indirectly affected by the automated decision system. (b) The notice in paragraph (a) must be provided: (1) if the automated decision system is introduced after the effective date of this section, at least 30 days before the introduction of the automated decision system; (2) if the employer is using an existing automated decision system as of the effective date of this section, no later than September 1, 2026; (3) prominently to a job applicant or new worker, before the employer collects the applicant's or worker's personal information that the employer plans to process using the automated decision system; (4) at least 30 days before implementing any significant change to the automated decision system or how the employer is using the automated decision system; and (5) to a union representing workers who will be subject to the automated decision system, on a timeline that provides a meaningful opportunity to bargain over the use, scope, and impact of the automated decision system prior to deployment or modification of the tool. (c) Every time an employer provides a notice under paragraph (a), a copy of that notice must be submitted to the commissioner of labor and industry within ten days of the date the notice was provided to the worker. Copies of notices under paragraph (a) must also be made available to authorized representatives upon request. 
(2) fail to provide notice to an employee or applicant for employment that the employer is using artificial intelligence for the purposes described in clause (1).
Subdivision 1. Pre-use notice; provision. (a) An employer must provide a written notice that an automated decision system is in use at the workplace for the purpose of making employment-related decisions, to a worker who will be directly or indirectly affected by the automated decision system, or the worker's authorized representative, and to any union representing workers who could be directly or indirectly affected by the automated decision system. (b) The notice in paragraph (a) must be provided: (1) if the automated decision system is introduced after the effective date of this section, at least 30 days before the introduction of the automated decision system; (2) if the employer is using an existing automated decision system as of the effective date of this section, no later than September 1, 2026; (3) prominently to a job applicant or new worker, before the employer collects the applicant's or worker's personal information that the employer plans to process using the automated decision system; (4) at least 30 days before implementing any significant change to the automated decision system or how the employer is using the automated decision system; and (5) to a union representing workers who will be subject to the automated decision system, on a timeline that provides a meaningful opportunity to bargain over the use, scope, and impact of the automated decision system prior to deployment or modification of the tool. (c) Every time an employer provides a notice under paragraph (a), a copy of that notice must be submitted to the commissioner of labor and industry within ten days of the date the notice was provided to the worker. Copies of notices under paragraph (a) must also be made available to authorized representatives upon request. 
(d) Notices under paragraph (a) must be: (1) written in plain language as a separate and standalone communication; (2) in the language in which routine communications and other information are provided to workers; and (3) provided using a simple and easy-to-use method, including an email, hyperlink, or other written format. (e) A job applicant or worker must receive the notice required under this section and respond with affirmative written consent before the worker or applicant is subject to an automated decision system. (f) If reasonable alternatives to the use of the automated decision system exist, the worker must be allowed to opt out of being subject to the automated decision system. Subd. 2. Pre-use notice; contents. The notice required under subdivision 1, paragraph (a), must contain the following information: (1) a plain-language explanation of the nature, purpose, and scope of the decisions for which the automated decision system will be used, including the specific employment-related decisions potentially affected; (2) the specific category and sources of worker data the automated decision system will use or collect, and how that data was or will be collected; (3) the logic used in the automated decision system, including the key parameters that affect the output of the automated decision system, and the type of outputs the automated decision system will produce; (4) the individuals, vendors, and entities that created the automated decision system and the individuals, vendors, and entities that will run, manage, and interpret the results of the automated decision system output; (5) the job qualifications and characteristics that the automated decision system assesses, what worker data or attributes the system uses to conduct that assessment, and what kind of outputs the system produces as an evaluation of the worker; (6) the results of any impact assessments of the automated decision system, whether performed by the employer or the automated decision 
system vendor, and how to access that information; (7) an up-to-date list of all automated decision systems the employer is currently using; and (8) a description of the worker's rights under sections 181.9922 to 181.9927.
Subd. 2. Employment-related decisions. (a) An employer must not rely solely on an automated decision system when making an employment-related decision. (b) When an employer relies in part on an automated decision system in making an employment-related decision, the employer must: (1) ensure the accuracy of the automated decision system output; and (2) use a designated internal reviewer to conduct an investigation and compile corroborating information for the decision. This information may include but is not limited to supervisory or managerial evaluations, personnel files, employee work products, or peer reviews. (c) The designated internal reviewer must: (1) have sufficient authority, discretion, resources, and time to corroborate the automated decision system output; (2) have sufficient expertise in the operation of similar systems and a sufficient understanding of the automated decision system in question to interpret the outputs and results of relevant impact assessments; (3) have sufficient education, training, or experience to allow the reviewer to make a well-informed decision, including education about the limitations and biases of automated decision systems and training on workers' rights under sections 181.9922 to 181.9927; and (4) be protected from retaliation for exercising the reviewer's responsibilities. (d) When an employer cannot corroborate the automated decision system output, or the human reviewer has concluded that the automated decision system output is inaccurate, incomplete, or misleading, the employer must not rely on the automated decision system to make the employment-related decision.
Subdivision 1. Notice. (a) An employer that has used an automated decision system to make an employment-related decision must provide the affected worker with a written notice: (1) at the time the employer informs the worker of the decision, or no later than 15 business days from the date of the decision, whichever is earlier; or (2) if the decision results in the discipline or termination of the worker, at least 30 days before the discipline or termination takes effect. (b) The employer must provide a notice under paragraph (a) that is: (1) written in plain language as a separate and standalone communication; (2) in the language in which routine communications and other information are provided to workers; and (3) provided using a simple and easy-to-use method, including an email, hyperlink, or other written format. (c) A notice under paragraph (a) must contain the following information: (1) an acknowledgment that the employer used an automated decision system to make one or more employment-related decisions with respect to the worker; (2) a description of the worker's rights under sections 181.9922 to 181.9927; (3) a form or a hyperlink to an electronic form for the worker to file an appeal or request detailed information about the data and automated decision system used in the decision; and (4) that the employer is prohibited from retaliating against the worker for exercising the worker's rights under this section. 
(d) If an employer uses the same automated decision system in the same way multiple times a quarter, an employer must provide each affected employee: (1) the full notice required by this section for the first use of the automated decision system each quarter; and (2) a second notice at the end of the quarter that provides: (i) the number of times the employer or operator used the automated decision system that quarter; (ii) the dates the employer or operator used the automated decision system that quarter; and (iii) a description of the worker's rights under sections 181.9922 to 181.9927, including the right to access information about each decision. Subd. 2. Right to access. (a) When responding to a worker's access request, an employer must provide the following information to the worker: (1) a plain-language explanation of the specific decision for which the employer used the automated decision system; (2) in a simple and easy-to-use format, the specific worker data that the automated decision system used and all specific worker outputs produced by the automated decision system; (3) how the employer used the automated decision system output with respect to the worker, including: (i) the rationale for the decision, including the specific roles the output and human involvement played in the business's decision; (ii) any additional corroborating information or judgments the employer used in addition to the automated decision system output in making the decision; (iii) how the logic of the automated decision system, including its assumptions and limitations, was applied to the worker; (iv) the key parameters or performance metrics that affected the output of the automated decision system with respect to the worker and how those parameters applied to the worker; and (v) the range of possible outputs and aggregate output statistics, to help a worker understand how they compare to other workers; (4) the name of the entity that created the automated decision system and 
the product name of the automated decision system; and (5) a copy of any completed impact assessments of the automated decision system. (b) An employer must respond to an access request no later than 14 calendar days from the date the employer received the request. (c) A service provider, contractor, or vendor must provide full assistance to the employer in responding to a worker request for access, including any of that worker's input or output data in the service provider, contractor, or vendor's possession and any relevant information about the automated decision system.
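The timing rules in this section (written notice at the earlier of when the worker is informed or 15 business days after the decision, notice at least 30 days before discipline or termination takes effect, and a 14-calendar-day window to answer access requests) amount to deadline arithmetic. The sketch below is illustrative only, not statutory language; the business-day helper assumes a Monday-to-Friday week and ignores holidays, which the statute does not address.

```python
from datetime import date, timedelta
from typing import Optional

def add_business_days(start: date, n: int) -> date:
    """Advance by n business days (Mon-Fri only; holidays ignored)."""
    d = start
    while n > 0:
        d += timedelta(days=1)
        if d.weekday() < 5:  # Monday=0 .. Friday=4
            n -= 1
    return d

def decision_notice_deadline(decision_date: date,
                             effect_date: Optional[date] = None,
                             discipline_or_termination: bool = False) -> date:
    """Latest compliant date for the written notice under subdivision 1."""
    if discipline_or_termination and effect_date is not None:
        # Notice must precede the discipline/termination effective date
        # by at least 30 days.
        return effect_date - timedelta(days=30)
    # Otherwise: no later than 15 business days from the decision date
    # (or when the worker is informed of it, whichever is earlier).
    return add_business_days(decision_date, 15)

def access_response_deadline(request_date: date) -> date:
    """14 calendar days to answer a worker's access request (subd. 2)."""
    return request_date + timedelta(days=14)
```

For example, a decision dated Monday, March 2, 2026 carries an outer notice deadline of March 23, 2026 (15 business days later), while a termination taking effect May 1, 2026 requires notice by April 1, 2026.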
(a) An employer that uses an automated decision system to make an employment-related decision must provide the affected worker with a form or a hyperlink to an electronic form to appeal the decision. (b) The appeal form provided to an affected worker must include: (1) the option to request access to the data used as input to or as output from the automated decision system; (2) the option to request access to any corroborating or supporting evidence provided by a human reviewer to verify output from the automated decision system; (3) space for the worker's reason for an appeal and any evidence the worker has to support the appeal; and (4) information on how the worker can designate an authorized representative who can also access the data. (c) A worker appealing the employment-related decision must submit their appeal form within 30 days of receiving the notification under section 181.9925. (d) Within five business days of receiving an appeal form, an employer must respond to the worker submitting the form. To respond to an appeal, the employer must designate a human reviewer who: (1) must objectively evaluate all evidence; (2) has sufficient authority, discretion, and resources to evaluate the decision, including education about the limitations and biases of automated decision systems and training on workers' rights under sections 181.9922 to 181.9927; (3) has the authority to overturn the employer's decision; and (4) was not involved in making the decision the worker is appealing. (e) After reviewing the evidence, the human reviewer must produce a clear, written document describing the result of the appeal and the reasons for that result. This document must be provided to both the employer and the worker. (f) If the human reviewer determines that the employment-related decision should be overturned, the employer must rectify the decision within five business days of receiving the decision.
(4)(a) On and after February 1, 2026, prior to deploying any high-risk artificial intelligence system to make or be a substantial factor in making any consequential decision concerning any consumer, the deployer shall: (i) Notify the consumer that the deployer has deployed a high-risk artificial intelligence system to make or be a substantial factor in making a consequential decision; (ii) Provide to the consumer: (A) A statement that discloses the purpose of the high-risk artificial intelligence system and the nature of the consequential decision; (B) The contact information for the deployer; (C) A description written in plain language that describes the high-risk artificial intelligence system; and (D) Instructions on how to access the statement described in subdivision (5)(a) of this section; and (iii) If applicable, provide information to the consumer regarding the consumer's right to opt out of the processing of personal data concerning the consumer for any purpose of profiling in furtherance of decisions that produce legal or similarly significant effects concerning the consumer under subdivision (2)(e)(iii) of section 87-1107. (c)(i) Except as provided in subdivision (c)(ii) of this subsection, a deployer shall provide the notice, statement, contact information, and description required under subdivisions (4)(a) and (b) of this section: (A) Directly to the consumer; (B) In plain language; (C) In each language in which the deployer in the ordinary course of business provides any contract, disclaimer, sale announcement, or other information to any consumer; and (D) In a format that is accessible to any consumer with any disability. 
(ii) If the deployer is unable to provide the notice, statement, contact information, and description required under subdivisions (a) and (b) of this subsection directly to the consumer, the deployer shall make the notice, statement, contact information, and description available in a manner that is reasonably calculated to ensure that the consumer receives the notice, statement, contact information, and description.
(b) On and after February 1, 2026, for each high-risk artificial intelligence system that makes or is a substantial factor in making any consequential decision that is adverse to any consumer, the deployer of such high-risk artificial intelligence system shall provide to such consumer: (i) A statement that discloses each principal reason for the consequential decision, including: (A) The degree to and manner in which the high-risk artificial intelligence system contributed to the consequential decision; (B) The type of data that was processed by the high-risk artificial intelligence system in making the consequential decision; and (C) Each source of the data described in subdivision (b)(i)(B) of this subsection; (ii) An opportunity to correct any incorrect personal data that the high-risk artificial intelligence system processed in making or processed as a substantial factor in making the consequential decision; and (iii) An opportunity to appeal any adverse consequential decision concerning the consumer arising from the deployment of the high-risk artificial intelligence system unless providing the opportunity for appeal is not in the best interest of the consumer, including instances when any delay might pose a risk to the life or safety of such consumer. Any such appeal shall allow for human review if technically feasible.
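The required contents of the adverse-decision statement in paragraph (b)(i) can be modeled as a simple record with a completeness check. This is a hypothetical illustration; the field names are my own and are not defined by the statute.

```python
from dataclasses import dataclass

@dataclass
class AdverseDecisionStatement:
    """Elements required in the statement under (b)(i); field names
    here are illustrative, not statutory."""
    principal_reasons: list       # each principal reason for the decision
    system_contribution: str      # degree and manner the system contributed
    data_types_processed: list    # types of data the system processed
    data_sources: list            # source of each type of data

    def missing_elements(self) -> list:
        """Names of required elements that are still empty."""
        return [name for name, value in vars(self).items() if not value]
```

A deployer-side review tool could flag an incomplete statement before it is sent, e.g. one drafted without its data sources listed.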
c. If a business entity uses information obtained through a biometric surveillance system to deny a consumer access to its premises or to remove a consumer from its premises, the business entity shall provide the consumer with a detailed explanation regarding its actions and the criteria used by the business entity in making its determination.
a. An employer in the State that requests applicants to record video interviews and uses an artificial intelligence analysis of the applicant-submitted video shall, prior to making the request for a video interview: (1) notify an applicant before the interview that artificial intelligence may be used to analyze the applicant's video interview and consider the applicant's fitness for the position; (2) provide an applicant with information before the interview explaining how the artificial intelligence works and what general types of characteristics it uses to evaluate applicants; and (3) obtain, before the interview, written consent, which may be electronic, from the applicant to be evaluated by the artificial intelligence program as described in the information provided. An employer shall not use artificial intelligence to evaluate an applicant who has not consented to the use of artificial intelligence analysis.
(a) Beginning on January first, two thousand twenty-seven, and before a deployer deploys a high-risk artificial intelligence decision system to make, or be a substantial factor in making, a consequential decision concerning a consumer, the deployer shall: (i) notify the consumer that the deployer has deployed a high-risk artificial intelligence decision system to make, or be a substantial factor in making, such consequential decision; and (ii) provide to the consumer: (A) a statement disclosing: (I) the purpose of such high-risk artificial intelligence decision system; and (II) the nature of such consequential decision; (B) contact information for such deployer; (C) a description, in plain language, of such high-risk artificial intelligence decision system; and (D) instructions on how to access the statement made available pursuant to paragraph (a) of subdivision six of this section.
(b) Beginning on January first, two thousand twenty-seven, a deployer that has deployed a high-risk artificial intelligence decision system to make, or as a substantial factor in making, a consequential decision concerning a consumer shall, if such consequential decision is adverse to the consumer, provide to such consumer: (i) a statement disclosing the principal reason or reasons for such adverse consequential decision, including, but not limited to: (A) the degree to which, and manner in which, the high-risk artificial intelligence decision system contributed to such adverse consequential decision; (B) the type of data that was processed by such high-risk artificial intelligence decision system in making such adverse consequential decision; and (C) the source of such data; and (ii) an opportunity to: (A) correct any incorrect personal data that the high-risk artificial intelligence decision system processed in making, or as a substantial factor in making, such adverse consequential decision; and (B) appeal such adverse consequential decision, which shall, if technically feasible, allow for human review unless providing such opportunity is not in the best interest of such consumer, including, but not limited to, in instances in which any delay might pose a risk to the life or safety of such consumer. (c) The deployer shall provide the notice, statements, information, description, and instructions required pursuant to paragraphs (a) and (b) of this subdivision: (i) directly to the consumer; (ii) in plain language; (iii) in all languages in which such deployer, in the ordinary course of such deployer's business, provides contracts, disclaimers, sale announcements, and other information to consumers; and (iv) in a format that is accessible to consumers with disabilities.
2. Notices required. (a) Any employer or employment agency that uses an automated employment decision tool to screen candidates who have applied for a position for an employment decision shall notify each such candidate of the following: (i) That an automated employment decision tool will be used in connection with the assessment or evaluation of such candidate; (ii) The job qualifications and characteristics that such automated employment decision tool will use in the assessment of such candidate; and (iii) Information about the type of data collected for such automated employment decision tool, the source of such data, and the employer or employment agency's data retention policy. (b) The notice required by paragraph (a) of this subdivision shall be made no less than ten business days before the use of such automated employment decision tool and shall allow such candidate to request an alternative selection process or accommodation.
Any landlord that uses an automated housing decision making tool to screen applicants for housing shall notify each such applicant of the following: (i) That an automated housing decision making tool will be used in connection with the assessment or evaluation of such applicant; (ii) The characteristics that such automated housing decision making tool will use in the assessment of such applicant; (iii) Information about the type of data collected for such automated housing decision making tool, the source of such data, and the landlord's data retention policy; and (iv) If an application for housing is denied through use of the automated housing decision making tool, the reason for such denial.
(b) The notice required by paragraph (a) of this subdivision shall be made no less than twenty-four hours before the use of such automated housing decision making tool and shall allow such applicant to request an alternative selection process or accommodation.
4. New York residents shall have the right to understand how and why an outcome impacting them was determined by an automated system, even when the automated system is not the sole determinant of the outcome. 5. Automated systems shall provide explanations that are technically valid, meaningful to the individual and any other persons who need to understand the system and proportionate to the level of risk based on the context.
1. New York residents shall have the right to opt out of automated systems, where appropriate, in favor of a human alternative. The appropriateness of such an option shall be determined based on reasonable expectations in a given context, with a focus on ensuring broad accessibility and protecting the public from particularly harmful impacts. In some instances, a human or other alternative may be mandated by law. 2. New York residents shall have access to a timely human consideration and remedy through a fallback and escalation process if an automated system fails, produces an error, or if they wish to appeal or contest its impacts on them. 3. The human consideration and fallback process shall be accessible, equitable, effective, maintained, accompanied by appropriate operator training, and should not impose an unreasonable burden on the public.
4. Automated systems intended for use within sensitive domains, including but not limited to criminal justice, employment, education, and health, shall additionally be tailored to their purpose, provide meaningful access for oversight, include training for New York residents interacting with the system, and incorporate human consideration for adverse or high-risk decisions.
1. (a) Any deployer that employs a high-risk AI system for a consequential decision shall comply with the following requirements; provided, however, that where there is an urgent necessity for a decision to be made to confer a benefit to the end user, including, but not limited to, social benefits, housing access, or dispensing of emergency funds, and compliance with this section would cause imminent detriment to the welfare of the end user, such obligation shall be considered waived; provided further, that nothing in this section shall be construed to waive a natural person's option to request human review of the decision: (i) inform the end user at least five business days prior to the use of such system for the making of a consequential decision in clear, conspicuous, and consumer-friendly terms, made available in each of the languages in which the company offers its end services, that AI systems will be used to make a decision or to assist in making a decision; and (ii) allow sufficient time and opportunity in a clear, conspicuous, and consumer-friendly manner for the consumer to opt-out of the automated consequential decision process and for the decision to be made by a human representative. A consumer may not be punished or face any other adverse action for opting out of a decision by an AI system and the deployer shall render a decision to the consumer within forty-five days. (b) If a deployer employs a high-risk AI system for a consequential decision to determine whether to or on what terms to confer a benefit on an end user, the deployer shall offer the end user the option to waive their right to advance notice of five business days under this subdivision. 
(c) If the end user clearly and affirmatively waives their right to five business days' notice, the deployer shall then inform the end user as early as practicable before the making of the consequential decision in clear, conspicuous, and consumer-friendly terms, made available in each of the languages in which the company offers its end services, that AI systems will be used to make a decision or to assist in making a decision. The deployer shall allow sufficient time and opportunity in a clear, conspicuous, and consumer-friendly manner for the consumer to opt-out of the automated process and for the decision to be made by a human representative. A consumer may not be punished or face any other adverse action for opting out of a decision by an AI system and the deployer shall render a decision to the consumer within forty-five days. (d) An end user shall be entitled to no more than one opt-out with respect to the same consequential decision within a six-month period.
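The one-opt-out-per-six-months limit in paragraph (d) amounts to a cooldown check against the end user's prior opt-outs on the same consequential decision. A minimal sketch follows, assuming "six-month period" is measured as 182 days; the statute does not define how the period is counted.

```python
from datetime import date, timedelta

# Assumption: the statute does not define "six-month period"; 182 days
# is used here purely for illustration.
SIX_MONTHS = timedelta(days=182)

def may_opt_out(prior_opt_out_dates: list, today: date) -> bool:
    """True if no opt-out on the same consequential decision falls
    within the preceding six months (paragraph (d))."""
    return all(today - prior >= SIX_MONTHS for prior in prior_opt_out_dates)
```

An end user with no prior opt-outs is always eligible; one who opted out a month earlier on the same decision is not.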
2. (a) Any deployer that employs a high-risk AI system for a consequential decision shall inform the end user within five days in a clear, conspicuous and consumer-friendly manner if a high-risk AI system has been used to make a consequential decision. The deployer shall then provide and explain a process for the end user to appeal the decision, which shall at minimum allow the end user to (i) formally contest the decision, (ii) provide information to support their position, and (iii) obtain meaningful human review of the decision. A deployer shall respond to an end user's appeal within forty-five days of receipt of the appeal. That period may be extended once by forty-five additional days where reasonably necessary, taking into account the complexity and number of appeals. The deployer shall inform the end user of any such extension within forty-five days of receipt of the appeal, together with the reasons for the delay. (b) An end user shall be entitled to no more than one appeal with respect to the same consequential decision in a six-month period.
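The appeal timeline in paragraph (a), a forty-five-day response window, one optional forty-five-day extension, and an extension notice due within the initial window, can be sketched as calendar-day arithmetic. This is an illustrative sketch, not statutory language.

```python
from datetime import date, timedelta

RESPONSE_DAYS = 45   # initial window to respond to an appeal
EXTENSION_DAYS = 45  # one permitted extension, where reasonably necessary

def appeal_response_deadline(received: date, extended: bool = False) -> date:
    """Latest date to resolve the appeal, counted in calendar days."""
    days = RESPONSE_DAYS + (EXTENSION_DAYS if extended else 0)
    return received + timedelta(days=days)

def extension_notice_deadline(received: date) -> date:
    """Notice of an extension, with the reasons for the delay, is due
    within the initial forty-five-day window."""
    return received + timedelta(days=RESPONSE_DAYS)
```

An appeal received January 1, 2026 must therefore be resolved by February 15, 2026, or by April 1, 2026 if the single extension is invoked and noticed in time.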
1. Not later than two years after the effective date of this article, the division shall promulgate regulations specifying the circumstances and manner in which a deployer shall provide to an individual a means to opt out of the use of a covered algorithm for a consequential action and to elect to have the consequential action concerning the individual undertaken by a human without the use of a covered algorithm. In promulgating the regulations under this subdivision, the division shall consider the following: (a) how to ensure that any notice or request from a deployer regarding the right to a human alternative is clear and conspicuous, in plain language, easy to execute, and at no cost to an individual; (b) how to ensure that any such notice to individuals is effective, timely, and useful; (c) the specific types of consequential actions for which a human alternative is appropriate, considering the magnitude of the action and risk of harm; (d) the extent to which a human alternative would be beneficial to individuals and the public interest; (e) the extent to which a human alternative can prevent or mitigate harm; (f) the risk of harm to individuals beyond the requestor if a human alternative is or is not available; (g) the feasibility of providing a human alternative in different circumstances; and (h) any other considerations the division deems appropriate to balance the need to give an individual control over a consequential action related to such individual with the practical feasibility and effectiveness of granting such control.
2. A developer or deployer may not condition, effectively condition, attempt to condition, or attempt to effectively condition the exercise of any individual right under this article or individual choice through: (a) the use of any false, fictitious, fraudulent, or materially misleading statement or representation; or (b) the design, modification, or manipulation of any user interface with the purpose or substantial effect of obscuring, subverting, or impairing a reasonable individual's autonomy, decision making, or choice to exercise any such right.
3. Not later than two years after the effective date of this article, the division shall promulgate regulations specifying the circumstances and manner in which a deployer shall provide to an individual a mechanism to appeal to a human a consequential action resulting from the deployer's use of a covered algorithm. In promulgating the regulations under this subdivision, the division shall do the following: (a) ensure that the appeal mechanism is clear and conspicuous, in plain language, easy-to-execute, and at no cost to individuals; (b) ensure that the appeal mechanism is proportionate to the consequential action; (c) ensure that the appeal mechanism is reasonably accessible to individuals with disabilities, timely, usable, effective, and non-discriminatory; (d) require, where appropriate, a mechanism for individuals to identify and correct any personal data used by the covered algorithm; (e) specify training requirements for human reviewers with respect to a consequential action; and (f) consider any other circumstances, procedures, or matters the division deems appropriate to balance the need to give an individual a right to appeal a consequential action related to such individual with the practical feasibility and effectiveness of granting such right.
6. A deployer shall provide a short-form notice regarding a covered algorithm it develops, offers, licenses, or uses in a manner that: (a) is concise, clear, conspicuous, in plain language, and not misleading; (b) is readily accessible to individuals with disabilities; (c) is based on what is reasonably anticipated within the context of the relationship between the individual and the deployer; (d) includes an overview of each applicable individual right and disclosure in a manner that draws attention to any practice that may be unexpected to a reasonable individual or that involves a consequential action; (e) is not more than five hundred words in length; and (f) is available to the public at no cost. 7. (a) If a deployer has a relationship with an individual, the deployer shall provide an electronic version of the short-form notice directly to the individual upon the individual's first interaction with the covered algorithm. (b) If a deployer does not have a relationship with an individual, the deployer shall provide the short-form notice in a clear, conspicuous, accessible, and not misleading manner on their website.
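The 500-word cap in subdivision 6(e) is the one mechanically checkable requirement in this notice provision. A minimal Python sketch of such a check; the function name and the whitespace-based word count are assumptions, since the statute does not define how words are counted:

```python
# Hypothetical compliance check for the short-form notice limit in
# subdivision 6(e): the notice must be 500 words or fewer. The counting
# method (splitting on whitespace) is an assumption, not statutory text.

SHORT_FORM_WORD_LIMIT = 500

def short_form_notice_compliant(notice_text: str) -> bool:
    """Return True if the notice text is within the 500-word limit."""
    return len(notice_text.split()) <= SHORT_FORM_WORD_LIMIT

# A three-word notice is within the limit.
print(short_form_notice_compliant("We use algorithms."))  # True
```

A real compliance review would also have to assess the qualitative requirements (plain language, accessibility, conspicuousness), which cannot be reduced to a word count.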
1. (a) Any deployer that employs a high-risk AI system for a consequential decision shall comply with the following requirements; provided, however, that where there is an urgent necessity for a decision to be made to confer a benefit to the end user, including, but not limited to, social benefits, housing access, or dispensing of emergency funds, and compliance with this section would cause imminent detriment to the welfare of the end user, such obligation shall be considered waived; provided further, that nothing in this section shall be construed to waive a natural person's option to request human review of the decision: (i) inform the end user at least five business days prior to the use of such system for the making of a consequential decision in clear, conspicuous, and consumer-friendly terms, made available in each of the languages in which the company offers its end services, that AI systems will be used to make a decision or to assist in making a decision; and (ii) allow sufficient time and opportunity in a clear, conspicuous, and consumer-friendly manner for the consumer to opt out of the automated consequential decision process and for the decision to be made by a human representative. A consumer may not be punished or face any other adverse action for opting out of a decision by an AI system, and the deployer shall render a decision to the consumer within forty-five days. (b) If a deployer employs a high-risk AI system for a consequential decision to determine whether, or on what terms, to confer a benefit on an end user, the deployer shall offer the end user the option to waive their right to advance notice of five business days under this subdivision. (c) If the end user clearly and affirmatively waives their right to five business days' notice, the deployer shall inform the end user as early as practicable before the making of the consequential decision, in clear, conspicuous, and consumer-friendly terms, made available in each of the languages in which the company offers its end services, that AI systems will be used to make a decision or to assist in making a decision. The deployer shall allow sufficient time and opportunity, in a clear, conspicuous, and consumer-friendly manner, for the consumer to opt out of the automated process and for the decision to be made by a human representative. A consumer may not be punished or face any other adverse action for opting out of a decision by an AI system, and the deployer shall render a decision to the consumer within forty-five days. (d) An end user shall be entitled to no more than one opt-out with respect to the same consequential decision within a six-month period.
2. (a) Any deployer that employs a high-risk AI system for a consequential decision shall inform the end user within five days in a clear, conspicuous and consumer-friendly manner if a high-risk AI system has been used to make a consequential decision. The deployer shall then provide and explain a process for the end user to appeal the decision, which shall at minimum allow the end user to (i) formally contest the decision, (ii) provide information to support their position, and (iii) obtain meaningful human review of the decision. A deployer shall respond to an end user's appeal within forty-five days of receipt of the appeal. That period may be extended once by forty-five additional days where reasonably necessary, taking into account the complexity and number of appeals. The deployer shall inform the end user of any such extension within forty-five days of receipt of the appeal, together with the reasons for the delay. (b) An end user shall be entitled to no more than one appeal with respect to the same consequential decision in a six-month period.
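The one-appeal-per-six-months limit in subdivision 2(b) can be sketched as a simple eligibility check. This is an illustrative Python sketch, not statutory language; in particular, it approximates the "six-month period" as 182 calendar days, a conversion the excerpt does not define:

```python
from datetime import date, timedelta

# Hypothetical eligibility check for subdivision 2(b): no more than one
# appeal of the same consequential decision within a six-month period.
# "Six months" is approximated here as 182 days (an assumption).

SIX_MONTHS = timedelta(days=182)

def may_appeal_again(prior_appeal_dates: list[date], today: date) -> bool:
    """Return True if no prior appeal of this decision falls within the last six months."""
    return all(today - d >= SIX_MONTHS for d in prior_appeal_dates)

print(may_appeal_again([date(2025, 1, 10)], date(2025, 3, 1)))  # False: appealed 50 days ago
print(may_appeal_again([date(2024, 1, 10)], date(2025, 3, 1)))  # True: last appeal over a year ago
```

Whether "six-month period" means a rolling window, as modeled here, or a fixed calendar period would be a question of statutory interpretation.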
5. (a) Beginning on January first, two thousand twenty-seven, and before a deployer deploys a high-risk artificial intelligence decision system to make, or be a substantial factor in making, a consequential decision concerning a consumer, the deployer shall: (i) notify the consumer that the deployer has deployed a high-risk artificial intelligence decision system to make, or be a substantial factor in making, such consequential decision; and (ii) provide to the consumer: (A) a statement disclosing: (I) the purpose of such high-risk artificial intelligence decision system; and (II) the nature of such consequential decision; (B) contact information for such deployer; (C) a description, in plain language, of such high-risk artificial intelligence decision system; and (D) instructions on how to access the statement made available pursuant to paragraph (a) of subdivision six of this section.
(b) Beginning on January first, two thousand twenty-seven, a deployer that has deployed a high-risk artificial intelligence decision system to make, or as a substantial factor in making, a consequential decision concerning a consumer shall, if such consequential decision is adverse to the consumer, provide to such consumer: (i) a statement disclosing the principal reason or reasons for such adverse consequential decision, including, but not limited to: (A) the degree to which, and manner in which, the high-risk artificial intelligence decision system contributed to such adverse consequential decision; (B) the type of data that was processed by such high-risk artificial intelligence decision system in making such adverse consequential decision; and (C) the source of such data; and (ii) an opportunity to: (A) correct any incorrect personal data that the high-risk artificial intelligence decision system processed in making, or as a substantial factor in making, such adverse consequential decision; and (B) appeal such adverse consequential decision, which shall, if technically feasible, allow for human review unless providing such opportunity is not in the best interest of such consumer, including, but not limited to, in instances in which any delay might pose a risk to the life or safety of such consumer. (c) The deployer shall provide the notice, statements, information, description, and instructions required pursuant to paragraphs (a) and (b) of this subdivision: (i) directly to the consumer; (ii) in plain language; (iii) in all languages in which such deployer, in the ordinary course of such deployer's business, provides contracts, disclaimers, sale announcements, and other information to consumers; and (iv) in a format that is accessible to consumers with disabilities.
Oversight of artificial intelligence systems. Any news media content, including stories, articles, audio, visuals, or images, that is created in whole or in material part by generative artificial intelligence shall be reviewed by a human worker who has the authority to approve, deny, or modify any decision recommended or made by the automated system before such content may be published with the disclosure under section eleven hundred fifty-three of this article.
(b) It shall be an unlawful discriminatory practice for an employer to fail to provide notice to an employee that such employer is using artificial intelligence for the purposes described in paragraph (a) of this subdivision. (c) The division shall adopt any rules or regulations necessary for the implementation and enforcement of this subdivision, including, but not limited to, rules on the circumstances and conditions that require notice, the time period for providing such notice and the means for providing such notice.
B. The qualified end-user of the AI device shall retain authority to amend or overrule outputs from the device based on their professional judgment, and without pressure from the deployer or any other entity to ignore or alter that professional judgment.
D. A clinical peer reviewer who participates in a utilization review process for a health benefit plan that initially uses artificial intelligence tools for a utilization review shall open and document the utilization review of the individual clinical records or data prior to issuing an adverse determination.
(a) Right to human review.--A consumer shall have the right to request that a human representing the business entity review any consumer interaction involving a high-impact decision. (b) Notice.--When the conditions under section 3 are met requiring the disclosure of the use of artificial intelligence in a consumer interaction and involve a high-impact decision, the business entity shall disclose in a clear and conspicuous manner that the consumer has a right to request a human review by the business entity involving the high-impact decision. (c) Time frame.--A business entity shall commence the human review not later than 14 days after the request for a human review is made. The human review shall be completed and the decision delivered to the requester not later than 28 days after the request for a human review is made.
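The 14-day commencement and 28-day completion windows in subsection (c) can be computed mechanically. A sketch under the assumption that the statute means calendar days (the excerpt does not say business days):

```python
from datetime import date, timedelta

# Hypothetical deadline calculator for subsection (c): human review must
# commence within 14 days of the request and be completed, with the
# decision delivered, within 28 days. Calendar days are assumed.

def human_review_deadlines(request_date: date) -> tuple[date, date]:
    """Return (latest commencement date, latest completion/delivery date)."""
    return request_date + timedelta(days=14), request_date + timedelta(days=28)

start_by, finish_by = human_review_deadlines(date(2025, 6, 1))
print(start_by, finish_by)  # 2025-06-15 2025-06-29
```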
(c) Any employer that uses an electronic monitoring tool shall give prior written notice and shall obtain written acknowledgment from all candidates and employees subject to electronic monitoring and shall also post said notice in a conspicuous place which is readily available for viewing by candidates for employment and employees. Such notice shall include, at a minimum, the following: (1) A description of the purpose for which the electronic monitoring tool will be used, as specified in subsection (a)(1) of this section; (2) A description of the specific employee data to be collected, stored, secured, and disposed of (and the schedule therefor), and the activities, locations, communications, and job roles that will be electronically monitored by the tool; (3) A description of the dates, times, and frequency that electronic monitoring will occur; (4) Whether and how any employee data collected by the electronic monitoring tool will be used as an input in an automated decision system; (5) Whether and how any employee data collected by the electronic monitoring tool will, alone or in conjunction with an automated decision system, be used to make an employment decision by the employer or employment agency; (6) Whether and how any employee data collected by the electronic monitoring tool may be stored and utilized in discipline, in internal policy compliance, in administrative agency adjudications, or in litigation (whether or not the employee is a party); (7) Whether any employee data collected by the electronic monitoring tool will be used to assess employees' productivity or performance or to set productivity standards, and if so, how; (8) A description of where any employee data collected by the electronic monitoring tool will be stored and the length of time it will be retained; (9) An explanation of how the specific electronic monitoring practice is the least invasive means available to accomplish the monitoring purpose; (10) That an employee is entitled to notice and maintains the right to refuse the sale, transfer, or disclosure of their employee data, subject to the provisions of subsection (g) of this section; and (11) A clear and reasonably understandable description of how an employee can exercise the rights described in this chapter.
(i) An employer shall not rely primarily on employee data collected through electronic monitoring when making hiring, promotion, or compensation decisions, or disciplinary decisions up to and including termination. For an employer to satisfy the requirements of this subsection: (1) An employer shall establish meaningful human oversight of such decisions that are based, in whole or in part, on data collected through electronic monitoring; (2) A human decision-maker shall review any information collected through electronic monitoring, verify that such information is accurate and up to date, review any pending employee requests to correct erroneous data, and exercise independent judgment in making each such decision; and (3) The human decision-maker shall consider information other than information collected through electronic monitoring when making each such decision, including, but not limited to, supervisory or managerial evaluations, personnel files, employee work products, or peer reviews.
(j) When an employer makes a hiring, promotion, termination, disciplinary, or compensation decision based, in whole or in part, on data gathered through the use of electronic monitoring, it shall disclose to affected employees and their authorized representative, within thirty (30) days of the decision being made or going into effect, whichever is sooner: (1) That the decision was based, in whole or in part, on data gathered through electronic monitoring; (2) The specific electronic monitoring tool or tools used to gather such data, how the tools work to gather and analyze the data, and the increments of time in which the data is gathered; (3) The specific data, and judgments based upon such data, used in the decision-making process; and (4) Any information used in the decision-making process gathered through sources other than electronic monitoring.
(D)(1) No later than the time that a deployer deploys a high-risk artificial intelligence system to make, or be a substantial factor in making, a consequential decision concerning a consumer, the deployer shall: (a) notify the consumer that the deployer has deployed a high-risk artificial intelligence system to make, or be a substantial factor in making, a consequential decision before the decision is made; (b) provide to the consumer a statement disclosing the purpose of the high-risk artificial intelligence system and the nature of the consequential decision; the contact information for the deployer; a description, in plain language, of the high-risk artificial intelligence system; and instructions on how to access the statement required by this item; and (c) provide to the consumer information, if applicable, regarding the consumer's right to opt out of the processing of personal data concerning the consumer for purposes of profiling in furtherance of decisions that produce legal or similarly significant effects concerning the consumer pursuant to Section 30-31-60(A)(1)(a)(iii). 
(2) A deployer that has deployed a high-risk artificial intelligence system to make, or be a substantial factor in making, a consequential decision concerning a consumer shall, if the consequential decision is adverse to the consumer, provide to the consumer: (a) a statement disclosing the principal reason or reasons for the consequential decision, including: (i) the degree to which, and manner in which, the high-risk artificial intelligence system contributed to the consequential decision; (ii) the type of data that was processed by the high-risk artificial intelligence system in making the consequential decision; and (iii) the source or sources of the data described in item (2)(a)(ii); (b) an opportunity to correct any incorrect personal data that the high-risk artificial intelligence system processed in making, or as a substantial factor in making, the consequential decision; and (c) an opportunity to appeal an adverse consequential decision concerning the consumer arising from the deployment of a high-risk artificial intelligence system, which appeal must, if technically feasible, allow for human review unless providing the opportunity for appeal is not in the best interest of the consumer, including in instances in which any delay might pose a risk to the life or safety of such consumer. (3)(a) Except as provided in subitem (b), a deployer shall provide the notice, statement, contact information, and description required by items (1) and (2): (i) directly to the consumer; (ii) in plain language; (iii) in all languages in which the deployer, in the ordinary course of the deployer's business, provides contracts, disclaimers, sale announcements, and other information to consumers; and (iv) in a format that is accessible to consumers with disabilities. 
(b) If the deployer is unable to provide the notice, statement, contact information, and description required by items (1) and (2) directly to the consumer, the deployer shall make the notice, statement, contact information, and description available in a manner that is reasonably calculated to ensure that the consumer receives the notice, statement, contact information, and description.
B. All decisions related to the pre-trial detention or release, prosecution, adjudication, sentencing, probation, parole, correctional supervision, or rehabilitation of criminal offenders shall be made by the judicial officer or other person charged with making such decision. No such decision shall be made without the involvement of a human decision-maker. The use of any recommendation or prediction from an artificial intelligence-based tool shall be subject to any challenge or objection permitted by law.
Disclose (i) the fact that an automated decision system is being used; (ii) the intended use of the automated decision system, including evaluating job candidates, making compensation decisions, or considering employees for promotion; (iii) the type of data inputs received by the automated decision system and the source of such data; (iv) how the automated decision system will be used in the state agency's decision-making processes; and (v) the extent to which an individual's personal data will be shared with third parties or used as future inputs for the automated decision system;
No employment decision shall be made by a state agency without the involvement of a human decision maker. No state agency shall solely use any recommendation or prediction from an automated decision system to make an employment decision.
The Department shall establish and publicize a process for applicants for employment and employees to file concerns and complaints regarding the use of automated decision systems in the Commonwealth's employment decisions and a process for the investigation and resolution of any such concerns and complaints. Such process shall be separate and apart from the dispute resolution process described in § 2.2-1202.1.
Disclose (i) the fact that an automated decision system is being used; (ii) the intended use of the automated decision system, including evaluating job candidates, making compensation decisions, or considering employees for promotion; (iii) the type of data inputs received by the automated decision system and the source of such data; (iv) how the automated decision system will be used in the decision-making processes of the department, office, board, commission, agency, or instrumentality of local government; and (v) the extent to which an individual's personal data will be shared with third parties or used as future inputs for the automated decision system;
No employment decision shall be made by a department, office, board, commission, agency, or instrumentality of local government without the involvement of a human decision maker. No department, office, board, commission, agency, or instrumentality of local government shall solely use any recommendation or prediction from an automated decision system to make an employment decision.
Any department, office, board, commission, agency, or instrumentality of local government that uses an automated decision system as a substantial factor in any employment decision shall establish and publicize a process for applicants for employment and employees to file concerns and complaints regarding the use of automated decision systems in employment decisions and a process for the investigation and resolution of any such concerns and complaints.
B. No employment decision shall be made by an employer without the involvement of a human decision maker. No employer shall solely use any recommendation or prediction from an automated decision system to make an employment decision. C. Any employer that knowingly violates the provisions of this section shall be subject to a civil penalty not to exceed $500 for a first violation and $1,500 for each subsequent violation. The Commissioner shall notify any employer that he alleges has violated the provisions of this section by certified mail. Such notice shall contain a description of the alleged violation. Within 15 days of receipt of notice of the alleged violation, the employer may request an informal conference regarding such violation with the Commissioner. In determining the amount of any penalty to be imposed, the Commissioner shall consider the size of the business of the employer charged and the gravity of the violation. The decision of the Commissioner shall be final. Civil penalties under this section shall be assessed by the Commissioner and paid to the Literary Fund. The Commissioner shall prescribe procedures for the payment of proposed penalties that are not contested by employers. E. The Commissioner or his authorized representative shall have the right to petition a circuit court for injunctive or such other relief as may be necessary for the enforcement of this section.
(f) Restrictions on use of automated decision systems. (1) An employer shall not use an automated decision system in a manner that: (A) violates or results in a violation of State or federal law; (B) makes predictions about an employee's behavior that are unrelated to the employee's essential job functions; (C) identifies, profiles, or predicts the likelihood that an employee will exercise the employee's legal rights; (D) makes predictions about an employee's emotions, personality, or other sentiments; or (E) uses customer or client data, including customer or client reviews and feedback, as an input of the automated decision system. (2)(A) An employer shall not solely rely on outputs from an automated decision system when making employment-related decisions. (B) An employer may utilize an automated decision system in making employment-related decisions if: (i) the automated decision system outputs considered in making the employment-related decision are corroborated by human oversight of the employee, including supervisory or managerial observations and documentation of the employee's work, personnel records, and consultations with the employee's coworkers; (ii) the employer has conducted an impact assessment of the automated decision system pursuant to subsection (g) of this section; and (iii) the employer is in compliance with the notice requirements of subdivision (4) of this subsection (f). (3) An employer shall not use any automated decision system outputs regarding an employee's physical or mental health in relation to an employment-related decision.
(4) Prior to using an automated decision system to make an employment-related decision about an employee, the employer must provide the employee with a notice that complies with subdivision (c)(3)(A) of this section and, at a minimum, contains the following information: (A) a plain language explanation of the nature, purpose, and scope for which the automated decision system will be used, including the specific employment-related decisions potentially affected; (B) the logic used in the automated decision system, including the key parameters that affect the output of the automated decision system; (C) the specific category and sources of employee input data that the automated decision system will use, including a specific description of any data collected through electronic monitoring; (D) any performance metrics the employer will consider using with the automated decision system; (E) the type of outputs the automated decision system will produce; (F) the individuals or entities that developed the automated decision system; (G) the individual or entities that will operate, monitor, and interpret the results of the automated decision system; (H) information about how an employee can access the results of the most recent impact assessment of the automated decision system; (I) a description of an employee's rights, pursuant to subsection (j) of this section, to access information about the employer's use of the automated decision system and to correct data used by the automated decision system; and (J) a statement that employees are protected from retaliation for exercising the rights described in the notice.
(a) Any deployer that employs an automated decision system for a consequential decision shall inform the consumer prior to the use of the system for a consequential decision in clear, conspicuous, and consumer-friendly terms, made available in each of the languages in which the company offers its end services, that automated decision systems will be used to make a consequential decision or to assist in making a consequential decision. (b) Any notice provided by a deployer to the consumer pursuant to subsection (a) of this section shall include: (1) a description of the personal characteristics or attributes that the system will measure or assess; (2) the method by which the system measures or assesses those attributes or characteristics; (3) how those attributes or characteristics are relevant to the consequential decisions for which the system should be used; (4) any human components of the system; (5) how any automated components of the system are used to inform the consequential decision; and (6) a direct link to a publicly accessible page on the deployer's website that contains a plain-language description of the: (A) system's outputs; (B) types and sources of data collected from natural persons and processed by the system when it is used to make, or assists in making, a consequential decision; and (C) results of the most recent impact assessment, or an active link to a web page where a consumer can review those results.
(c) Any deployer that employs an automated decision system for a consequential decision shall provide the consumer with a single notice containing a plain-language explanation of the decision that identifies the principal reason or reasons for the consequential decision, including: (1) the identity of the developer of the automated decision system used in the consequential decision, if the deployer is not also the developer; (2) a description of what the output of the automated decision system is, such as a score, recommendation, or other similar description; (3) the degree and manner to which the automated decision system contributed to the decision; (4) the types and sources of data processed by the automated decision system in making the consequential decision; (5) a plain language explanation of how the consumer's personal data informed the consequential decision; and (6) what actions, if any, the consumer might have taken to secure a different decision and the actions that the consumer might take to secure a different decision in the future.
(d)(1) A deployer shall provide and explain a process for a consumer to appeal a decision, which shall at minimum allow the consumer to: (A) formally contest the decision; (B) provide information to support their position; and (C) obtain meaningful human review of the decision. (2) For an appeal made pursuant to subdivision (1) of this subsection: (A) a deployer shall designate a human reviewer who: (i) is trained and qualified to understand the consequential decision being appealed, the consequences of the decision for the consumer, how to evaluate the appeal, and how to serve impartially, including by avoiding prejudgment of the facts at issue, conflicts of interest, and bias; (ii) does not have a conflict of interest for or against the deployer or the consumer; (iii) was not involved in the initial decision being appealed; (iv) shall enjoy protection from dismissal or its equivalent, disciplinary measures, or other adverse treatment for exercising their functions under this section; and (v) shall be allocated sufficient human resources by the deployer to conduct an effective appeal of the decision; and (B) the human reviewer shall consider the information provided by the consumer in their appeal and may consider other sources of information relevant to the consequential decision. (3) A deployer shall respond to a consumer's appeal not later than 45 days after receipt of the appeal. That period may be extended once by an additional 45 days where reasonably necessary, taking into account the complexity and number of appeals. The deployer shall inform the consumer of any extension not later than 45 days after receipt of the appeal, together with the reasons for the delay.
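The statute does not prescribe any implementation, but the response timeline in subdivision (d)(3) is a concrete date calculation: 45 days from receipt, extendable once by a further 45 days, with notice of any extension itself due within the initial 45 days. As an illustration only, it can be sketched as follows (the `appeal_deadlines` helper is hypothetical, not part of any statutory text):

```python
from datetime import date, timedelta

APPEAL_WINDOW = timedelta(days=45)  # subdivision (d)(3): 45 days, one 45-day extension

def appeal_deadlines(received: date, extended: bool = False) -> dict:
    """Compute compliance deadlines for responding to a consumer appeal.

    The deployer must respond within 45 days of receiving the appeal.
    That period may be extended once by a further 45 days, and notice
    of the extension (with reasons) is due within the initial 45 days.
    """
    initial = received + APPEAL_WINDOW
    return {
        # consumer must be told of any extension by this date
        "extension_notice_due": initial,
        # final response deadline, with or without the one extension
        "response_due": initial + APPEAL_WINDOW if extended else initial,
    }
```

For example, an appeal received on July 1 must be answered by August 15, or by September 29 if the single extension is invoked and noticed in time.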
Beginning July 1, 2026, each time a deployer deploys a high-risk artificial intelligence system to make, or be a substantial factor in making, a consequential decision concerning a consumer, the deployer shall: (1) Notify the consumer that the deployer has deployed a high-risk artificial intelligence system to make, or be a substantial factor in making, a consequential decision before the decision is made; and (2) Provide to the consumer a statement disclosing: (a) The purpose of the high-risk artificial intelligence system and the nature of the consequential decisions; (b) The contact information for the deployer; and (c) A description, in plain language, of the high-risk artificial intelligence system.
(4) Not later than the time that a deployer uses a high-risk artificial intelligence system to interact with a consumer, the deployer shall disclose to the consumer that the consumer is interacting with an artificial intelligence system. At such time, the deployer shall also disclose to the consumer: (a) The purpose of such high-risk artificial intelligence system; (b) The nature of such system; (c) The nature of the consequential decision; (d) The contact information for the deployer; and (e) A description of the artificial intelligence system in plain language, which must include: (i) A description of the personal characteristics or attributes that such system will measure or assess; (ii) The method by which the system measures or assesses such attributes or characteristics; (iii) How such attributes or characteristics are relevant to the consequential decisions for which the system should be used; (iv) Any human components of such system; and (v) How any automated components of such system are used to inform such consequential decisions.
(5) A deployer that has deployed a high-risk artificial intelligence system to make a consequential decision concerning a consumer shall transmit to the consumer the consequential decision without undue delay. If such consequential decision is adverse to the consumer and based on personal data beyond information that the consumer provided directly to the deployer, the deployer shall provide to the consumer a statement disclosing the principal reason or reasons for the consequential decision, including: (a) The degree to which and manner in which the high-risk artificial intelligence system contributed to the consequential decision; (b) The type of data that was processed by such system in making the consequential decision; and (c) The sources of such data.