R-02: Regulatory Disclosure & Submissions (Reporting & Regulatory Submissions)
Applies to: Developer, Deployer, Government Sector, Foundation Model, Government System
Bills — Enacted: 3 unique bills
Bills — Proposed: 49
Last Updated: 2026-03-29
Core Obligation

Developers or deployers of certain AI systems must submit documentation — including system descriptions, risk assessments, and safety evaluation results — to regulatory authorities either proactively on a defined schedule or in response to regulatory requests. Proactive submission requirements cannot be satisfied by waiting to be asked.

Sub-Obligations: 4
Bills That Map This Requirement: 52 bills
Bill | Status | Sub-Obligations | Section
Passed 2026-10-01
R-02.4
Section 1(b)(2)
Plain Language
Insurers must annually certify to the Alabama Department of Insurance that their AI prior authorization systems meet three standards: (1) they do not rely solely on group-level datasets; (2) they are configured and applied fairly, producing consistent results for enrollees with similar clinical profiles; and (3) they do not discriminate directly or indirectly against any subscriber group or enrollee in violation of state or federal law, including HHS regulations and guidance. This is a proactive annual regulatory submission — insurers cannot wait to be asked.
(2) An insurer shall certify annually to the department that the artificial intelligence used to make determinations on requests for prior authorization complies with all of the following: a. Does not rely solely on a group dataset to make determinations. b. Is configured and applied in a fair manner for each subscriber group and enrollee such that resulting determinations are consistent for enrollees who present with similar clinical considerations. c. Does not discriminate directly or indirectly against any subscriber group or enrollee in violation of state or federal law, including any regulation or guidance issued by the federal Department of Health and Human Services.
Passed 2026-10-01
R-02.4
Section 1(c)(2)
Plain Language
Insurers must annually certify to the Department of Insurance two things: first, that their AI systems and the outcomes they produce are periodically reviewed to maximize accuracy and reliability; and second, that AI use in utilization review complies with all of subsection (b)'s requirements (individualized clinical data, no sole reliance on group data, fairness, non-discrimination, and human review of adverse decisions). This certification is separate from and in addition to the subsection (b)(2) certification — subsection (b)(2) certifies the AI's configuration and fairness standards, while this provision certifies ongoing operational review and overall subsection (b) compliance.
(2) Certify annually to the department that: (i) use of artificial intelligence and the outcomes that it generates are reviewed on a periodic basis to maximize accuracy and reliability; and (ii) use of artificial intelligence in utilization review complies with the requirements of subsection (b).
Pending 2027-07-01
R-02.1
Bus. & Prof. Code § 22615(a)-(b)
Plain Language
This provision primarily imposes obligations on the Attorney General rather than on operators: the AG must adopt audit regulations by January 1, 2028 (covering auditor standards, eligibility, compliance assessment procedures, and report requirements), establish a public complaint mechanism for consumers, and establish a researcher access process for anonymized audit data. Beginning January 1, 2028, the AG must issue annual public reports summarizing audit results, compliance trends, emerging risks, best practices, and recommendations. For operators, the practical implication is that the audit framework — and thus the annual audit obligation under § 22614 — cannot begin until the AG completes this rulemaking. Operators should monitor the AG's regulatory timeline.
(a) On or before January 1, 2028, the Attorney General shall do all of the following: (1) Adopt regulations that include, at a minimum, all of the following: (A) Professional and ethical standards for auditors that ensure independence. (B) Eligibility requirements for auditors. (C) Procedures for auditors to assess compliance with this chapter. (D) Requirements for AI child safety audit reports. (2) Establish a public incident reporting mechanism for consumers to submit complaints relating to companion chatbots to the Attorney General. (3) Establish a process for qualified researchers to access anonymized and aggregated audit data for academic study of child safety in companion chatbots. (b) Beginning January 1, 2028, the Attorney General shall issue an annual public report that includes the following: (1) A high-level summary of each child safety audit report. (2) The total number of child safety audits conducted. (3) Common findings and trends across the companion chatbot industry. (4) Emerging child safety risks identified through audit reviews. (5) Best practices and effective mitigation strategies observed. (6) Aggregated data on compliance rates and common deficiencies. (7) Recommendations for operators, parents, and policymakers.
Pending 2026-01-01
R-02.2
Bus. & Prof. Code § 22756.6(a)
Plain Language
Developers must provide a copy of their impact assessment to the Attorney General or Civil Rights Department within 30 days of a request. The submitted impact assessment must be kept confidential by the receiving agency. This is a responsive submission obligation — developers are not required to proactively submit but must be able to produce the assessment on a 30-day turnaround. Note this obligation applies only to developers; the statute does not expressly require deployers to submit their assessments to the AG or CRD upon request.
(a) (1) A developer shall provide to the Attorney General or Civil Rights Department, within 30 days of a request from the Attorney General or the Civil Rights Department, a copy of an impact assessment performed pursuant to this chapter. (2) Notwithstanding any other law, an impact assessment provided to the Attorney General or Civil Rights Department pursuant to this subdivision shall be kept confidential.
Enacted 2026-01-01
R-02.1
Bus. & Prof. Code § 22757.12(d)
Plain Language
Large frontier developers must submit summaries of catastrophic risk assessments from internal use of their frontier models to the Office of Emergency Services on a quarterly basis, or on another reasonable schedule the developer specifies in writing to OES. This is a proactive, recurring submission obligation — not triggered by a specific incident. The developer has flexibility to propose an alternative schedule but must communicate it in writing. Updates must be provided as appropriate.
(d) A large frontier developer shall transmit to the Office of Emergency Services a summary of any assessment of catastrophic risk resulting from internal use of its frontier models every three months or pursuant to another reasonable schedule specified by the large frontier developer and communicated in writing to the Office of Emergency Services with written updates, as appropriate.
Pending 2027-01-01
R-02.1
C.R.S. § 10-16-112.7(4)(a)-(d)
Plain Language
Covered entities must provide written disclosures to the applicable state agency — the Division of Insurance, the Department of Human Services, or the Department of Health Care Policy and Financing — identifying: which utilization review functions use AI, at what points in the process AI is used, the human oversight process including reviewer qualifications and whether humans must approve adverse determinations, and the process for maintaining audit records demonstrating compliance. This is a proactive regulatory filing obligation — covered entities must submit the information without waiting to be asked.
(4) A PERSON DESCRIBED IN SUBSECTION (2) OF THIS SECTION SHALL PROVIDE WRITTEN DISCLOSURES TO THE DIVISION, THE DEPARTMENT OF HUMAN SERVICES, OR THE DEPARTMENT OF HEALTH CARE POLICY AND FINANCING, AS APPLICABLE, THAT IDENTIFY: (a) THE UTILIZATION REVIEW FUNCTIONS FOR WHICH THE ARTIFICIAL INTELLIGENCE SYSTEM WILL BE USED; (b) THE POINTS IN THE UTILIZATION REVIEW PROCESS WHEN THE ARTIFICIAL INTELLIGENCE SYSTEM IS USED; (c) THE HUMAN OVERSIGHT PROCESS, INCLUDING THE QUALIFICATIONS OF THE REVIEWER AND WHETHER A HUMAN MUST APPROVE AN ADVERSE DETERMINATION; AND (d) THE PROCESS FOR MAINTAINING AUDIT INFORMATION SUFFICIENT TO DEMONSTRATE COMPLIANCE WITH SUBSECTION (3) OF THIS SECTION.
Enacted 2026-06-30
R-02.1
C.R.S. § 6-1-1702(5)
Plain Language
Developers must proactively disclose to the attorney general (in a prescribed form) and to all known deployers any known or reasonably foreseeable risks of algorithmic discrimination, within 90 days of discovering such risks. This is not a wait-to-be-asked obligation — it triggers on knowledge or reasonable foreseeability of discrimination risks. The 90-day clock runs from the triggering date specified in the original SB 205 provisions.
(5) On and after June 30, 2026, a developer of a high-risk artificial intelligence system shall disclose to the attorney general, in a form and manner prescribed by the attorney general, and to all known deployers or other developers of the high-risk artificial intelligence system, any known or reasonably foreseeable risks of algorithmic discrimination arising from the intended uses of the high-risk artificial intelligence system without unreasonable delay but no later than ninety days after the date on which:
Enacted 2026-06-30
R-02.2
C.R.S. § 6-1-1702(7)
Plain Language
The attorney general may require developers to disclose documentation described in subsection (2) — including model cards, dataset cards, and related materials — within 90 days of the AG's request. The AG may evaluate these materials for compliance. Importantly, these disclosures are exempt from CORA (Colorado Open Records Act) and developers may designate materials as proprietary or trade secret. Attorney-client privilege and work-product protections are preserved. This on-demand regulatory disclosure power is separate from the proactive disclosure obligations in subsection (5).
(7) On and after June 30, 2026, the attorney general may require that a developer disclose to the attorney general, no later than ninety days after the request and in a form and manner prescribed by the attorney general, the statement or documentation described in subsection (2) of this section. The attorney general may evaluate such statement or documentation to ensure compliance with this part 17, and the statement or documentation is not subject to disclosure under the "Colorado Open Records Act", part 2 of article 72 of title 24. In a disclosure made pursuant to this subsection (7), a developer may designate the statement or documentation as including proprietary information or a trade secret. To the extent that any information contained in the statement or documentation includes information subject to attorney-client privilege or work-product protection, the disclosure does not constitute a waiver of the privilege or protection.
Enacted 2026-06-30
R-02.2
C.R.S. § 6-1-1703(9)
Plain Language
The attorney general may require deployers (or contracted third parties) to produce their risk management policy, impact assessments, or maintained records within 90 days of the AG's request. The AG may evaluate these materials for compliance with the statute. Materials are exempt from CORA, and deployers may designate them as proprietary or trade secrets. Attorney-client privilege and work-product protections are preserved. This mirrors the developer on-demand disclosure obligation in § 6-1-1702(7) but applies to deployer-side documentation.
(9) On and after June 30, 2026, the attorney general may require that a deployer, or a third party contracted by the deployer, disclose to the attorney general, no later than ninety days after the request and in a form and manner prescribed by the attorney general, the risk management policy implemented pursuant to subsection (2) of this section, the impact assessment completed pursuant to subsection (3) of this section, or the records maintained pursuant to subsection (3)(f) of this section. The attorney general may evaluate such risk management policy, impact assessment, or records to ensure compliance with this part 17, and the risk management policy, impact assessment, and records are not subject to disclosure under the "Colorado Open Records Act", part 2 of article 72 of title 24. In a disclosure made pursuant to this subsection (9), a deployer may designate the statement or documentation as including proprietary information or a trade secret. To the extent that any information contained in the risk management policy, impact assessment, or records includes information subject to attorney-client privilege or work-product protection, the disclosure does not constitute a waiver of the privilege or protection.
Pending 2025-07-01
R-02.1
O.C.G.A. § 10-16-2(b)
Plain Language
Developers must submit comprehensive documentation about each automated decision system to the Attorney General, in a form the AG prescribes. The required information covers foreseeable uses and misuses, system purpose and benefits, training data summaries, known limitations and discrimination risks, mitigation measures taken, pre-deployment evaluation methods, data governance measures, usage and monitoring guidance, and any additional documentation deployers need for compliance. Developers may make reasonable trade-secret redactions under § 10-16-2(f) but must notify the AG and provide a basis for the redaction, and may not redact information deployers need for their own compliance obligations.
Except as provided in subsection (f) of this Code section, a developer of an automated decision system shall provide certain information regarding such automated decision system to the Attorney General, in a form and manner prescribed by the Attorney General. Such information shall include, at a minimum: (1) A general statement describing the reasonably foreseeable uses and known harmful or inappropriate uses of the automated decision system; (2) Documentation disclosing: (A) The purpose of the automated decision system; (B) The intended benefits and uses of the automated decision system; (C) High-level summaries of the types of data used to train the automated decision system; (D) Known or reasonably foreseeable limitations of the automated decision system, including known or reasonably foreseeable risks of algorithmic discrimination arising from the intended uses of the automated decision system; (E) The measures the developer has taken to mitigate known or reasonably foreseeable risks of algorithmic discrimination; (F) How the automated decision system was evaluated for performance and mitigation of algorithmic discrimination before the automated decision system was offered, sold, leased, licensed, given, or otherwise made available to the deployer; (G) The data governance measures used to cover the training data sets and the measures used to examine the suitability of data sources, possible biases, and appropriate mitigation; (H) How the automated decision system should be used, not be used, and be monitored by an individual when the automated decision system is used to make, or assist in making, a consequential decision; and (I) All other information necessary to allow the deployer to comply with the requirements of Code Section 10-16-3; and (3) Any additional documentation that is reasonably necessary to assist the deployer in understanding the outputs and monitoring the performance of the automated decision system for risks of algorithmic discrimination.
Pending 2025-07-01
R-02.2
O.C.G.A. § 10-16-2(g)
Plain Language
The Attorney General may demand that developers produce any documentation or records required under § 10-16-2 within seven days. Records disclosed are exempt from Georgia open records requirements. Developers may designate materials as proprietary or trade secret, and disclosure does not waive attorney-client privilege or work-product protection. This is a regulatory-request power — the AG can compel production at any time, and developers must maintain records in a form ready for rapid assembly.
The Attorney General may require that a developer disclose to the Attorney General, within seven days and in a form and manner prescribed by the Attorney General, any documentation or records required by this Code section, including, but not limited to, the statement or documentation described in subsection (b) of this Code section. The Attorney General may evaluate such statement or documentation to ensure compliance with this chapter, and, notwithstanding the provisions of Article 4 of Chapter 18 of Title 50, relating to open records, such records shall not be open to inspection by or made available to the public. In a disclosure pursuant to this subsection, a developer may designate the statement or documentation as including proprietary information or a trade secret. To the extent that any information contained in the statement or documentation includes information subject to attorney-client privilege or work-product protection, the disclosure does not constitute a waiver of the privilege or protection.
Pending 2025-07-01
R-02.2
O.C.G.A. § 10-16-9
Plain Language
The Attorney General may demand that deployers (or their contracted third parties) produce any documentation or records required under this chapter within seven days. This includes risk management policies, impact assessments, and related records. Produced materials are exempt from Georgia open records requirements. Deployers may designate materials as proprietary or trade secret, and disclosure does not waive attorney-client privilege or work-product protection.
The Attorney General may require that a deployer, or a third party contracted by the deployer, disclose to the Attorney General, no later than seven days after and in a form and manner prescribed by the Attorney General, any documentation or records required by this chapter. The Attorney General may evaluate the risk management policy, impact assessment, or records to ensure compliance with this chapter, and the risk management policy, impact assessment, and such records, notwithstanding the provisions of Article 4 of Chapter 18 of Title 50, relating to open records, shall not be open to inspection by or made available to the public. In a disclosure pursuant to this Code section, a deployer may designate the statement or documentation as including proprietary information or a trade secret. To the extent that any information contained in the risk management policy, impact assessment, or records is subject to attorney-client privilege or work-product protection, the disclosure does not constitute a waiver of the privilege or protection.
Pending 2025-01-01
R-02.2
Section 10(a)
Plain Language
Health insurance issuers must comply with Department of Insurance requests for information and documentation during investigations or market conduct actions. The Department's authority extends to reviewing the development, implementation, and use of AI systems and predictive models, including their outcomes. Required documentation categories include AI governance and risk management protocols, pre-acquisition and pre-utilization diligence on third-party AI systems and data, monitoring and auditing records for third-party tools, and compliance records for the issuer's AI systems program. This is a respond-on-demand obligation — issuers must maintain documentation in a form that can be produced when the Department requests it.
(a) The Department's regulatory oversight of health insurance coverage includes oversight of the use of AI systems or predictive models to make or support adverse consumer outcomes. The Department's authority in an investigation or market conduct action includes review regarding the development, implementation, and use of AI systems or predictive models and the outcomes from the use of those AI systems or predictive models. The Department may also request other information or documentation relevant to an investigation or market conduct action, and a health insurance issuer or any other person described in subsection (b) of Section 132 of the Illinois Insurance Code must comply with that request. The Department's inquiries may include, but are not limited to, questions regarding any specific model, AI system, or application of a model or AI system. The Department may also make requests for information and documentation relating to AI systems governance, risk management, and use protocols; information and documentation relating to the health insurance issuer's preacquisition and preutilization diligence, monitoring, and auditing of data or AI systems developed or used by a third party; and information and documentation relating to implementation and compliance with the health insurance issuer's AI systems program.
Pending 2026-01-01
R-02.1
Section 20(a)-(c)
Plain Language
State agencies must submit each impact assessment to the Governor and General Assembly at least 30 days before deploying the automated system. Other public bodies must submit the assessment to their director or governing body leadership on the same 30-day pre-implementation timeline. Employers may redact information from the assessment under two circumstances: (1) where disclosure would substantially harm public health/safety, infringe privacy, or impair IT/operational security; or (2) where the assessment covers security, fraud detection, or anti-harassment technology. In either case, the redacted assessment must be published alongside an explanatory statement describing the redaction rationale.
(a) Each impact assessment conducted by a State agency under this Act shall be submitted to the Governor and the General Assembly at least 30 days prior to implementation of the automated decision-making system that is the subject of the assessment. Each impact assessment conducted by any other public body under this Act shall be submitted to the director of the public body or the executive officers or primary administrator of the relevant governing body at least 30 days prior to implementation of the automated decision-making system that is the subject of the assessment. (b) If the employer determines that disclosure of any information in the impact assessment would result in a substantial negative impact on public health or safety, infringe upon privacy rights, or significantly impair the employer's ability to protect its information technology or operational assets, the information may be redacted, if an explanatory statement describing the determination process for redaction is published along with the redacted assessment. (c) If the impact assessment covers technology used to prevent, detect, protect against, or respond to security incidents, identity theft, fraud, harassment, or other illegal activity, the employer may redact related information, if an explanatory statement describing the determination process for redaction is published along with the redacted assessment.
Pending 2025-06-01
R-02.2
Section 10(a)
Plain Language
The Department of Insurance has broad authority to investigate and request documentation from any insurer authorized to operate in Illinois regarding its development, implementation, and use of AI systems and predictive models. Insurers must comply with such requests. The scope of inquiry is expansive: the Department may ask about specific models or AI systems, AI governance and risk management protocols, due diligence on third-party AI vendors, and compliance with the insurer's own AI systems program. This effectively requires insurers to maintain documentation in a form that can be produced upon request, covering the full lifecycle of AI systems used in insurance decision-making.
(a) The Department's regulatory oversight of insurers includes oversight of an insurer's use of AI systems to make or support adverse determinations that affect consumers. Any insurer authorized to operate in the State is subject to review by the Department in an investigation or market conduct action regarding the development, implementation, and use of AI systems or predictive models and the outcomes from the use of those AI systems or predictive models. The Department may also request other information or documentation relevant to an investigation or market conduct action, and an insurer must comply with that request. The Department's inquiries may include, but are not limited to, questions regarding any specific model, AI system, or application of a model or AI system. The Department may also make requests for information and documentation relating to AI systems governance, risk management, and use protocols; information and documentation relating to the insurer's preacquisition and preutilization diligence, monitoring, and auditing of data or AI systems developed by a third party; and information and documentation relating to implementation and compliance with the insurer's AI systems program.
Pending 2026-07-01
R-02.2
IC 22-5-10.4-15
Plain Language
The Department of Labor has broad investigative and reporting authority. It may receive complaints, investigate potential violations, and require employers to file annual or special reports — or answer specific written questions — about their use of automated decision systems for employment decisions. When the Department requires a report, the employer must comply within the manner and timeframe the Department specifies. Separately, employers have a standing recordkeeping obligation: they must maintain, preserve, and make available to the Department all records pertaining to compliance with this chapter. This recordkeeping duty is ongoing and not contingent on a Department request.
Sec. 15. (a) The department may do the following: (1) Receive complaints regarding alleged violations of this chapter. (2) Investigate any facts, conditions, practices, or matters as the department deems necessary or appropriate to determine whether an employer has violated this chapter. (3) Require an employer to file with the department, on a form prescribed by the department, annual or special reports or answers in writing to specific questions relating to the use of an automated decision system for employment related decisions. (b) If the department requires an employer to file a report or answers under subsection (a)(3), the employer shall file the report or answers in the manner and time period required by the department. (c) An employer shall maintain, keep, preserve, and make available to the department records pertaining to compliance with this chapter.
Pre-filed 2025-07-17
R-02.2
Ch. 93M § 2(g)
Plain Language
The attorney general may require a developer to produce the documentation described in Section 2(b) — including training data summaries, bias evaluation methodology, data governance measures, and intended uses — within 90 days of the request; the quoted provision frames this authority as taking effect not later than six months after the act's effective date. The documentation is exempt from public records disclosure. Developers may designate materials as containing proprietary information or trade secrets, and producing privileged materials does not waive attorney-client privilege or work-product protection.
(g) Not later than 6 months after the effective date of this act, the attorney general may require that a developer disclose to the attorney general, no later than ninety days after the request and in a form and manner prescribed by the attorney general, the statement or documentation described in subsection (b) of this section. The attorney general may evaluate such statement or documentation to ensure compliance with this chapter, and the statement or documentation is not subject to disclosure under the "Massachusetts Public Records Law", chapter 66, section 10 of the General Laws. In a disclosure pursuant to this subsection (g), a developer may designate the statement or documentation as including proprietary information or a trade secret. To the extent that any information contained in the statement or documentation includes information subject to attorney-client privilege or work-product protection, the disclosure does not constitute a waiver of the privilege or protection.
Pre-filed 2025-07-17
R-02.2
Ch. 93M § 3(i)
Plain Language
The attorney general may require a deployer (or its contracted third party) to produce, within 90 days of the request, the risk management policy, any impact assessment, or retained records; the quoted provision frames this authority as taking effect not later than six months after the act's effective date. All such materials are exempt from Massachusetts public records disclosure. Deployers may designate materials as proprietary or trade secret, and producing privileged materials does not waive attorney-client or work-product protection.
(i) Not later than 6 months after the effective date of this act, the attorney general may require that a deployer, or a third party contracted by the deployer, disclose to the attorney general, no later than ninety days after the request and in a form and manner prescribed by the attorney general, the risk management policy implemented pursuant to subsection (b) of this section, the impact assessment completed pursuant to subsection (c) of this section, or the records maintained pursuant to subsection (c)(6) of this section. The attorney general may evaluate the risk management policy, impact assessment, or records to ensure compliance with this chapter, and the risk management policy, impact assessment, and records are not subject to disclosure under the "Massachusetts Public Records Law", chapter 66, section 10 of the General Laws. In a disclosure pursuant to this subsection (i), a deployer may designate the statement or documentation as including proprietary information or a trade secret. To the extent that any information contained in the risk management policy, impact assessment, or records include information subject to attorney-client privilege or work-product protection, the disclosure does not constitute a waiver of the privilege or protection.
Pending 2026-10-01
R-02.1
Insurance § 15–10A–06(a)(1)(iii)(9)
Plain Language
Carriers must include in their existing quarterly reports to the Commissioner the total number of grievances that received human review under the new AI-related grievance provision (§ 15–10A–02(b)(2)(vi)), broken down by type of claim, member race/gender/profession, and type of policy (individual, small group, large group, and whether purchased on the Health Benefit Exchange). This demographic disaggregation enables the Commissioner to monitor for potential disparate impact of AI-driven adverse decisions across protected classes.
9. THE TOTAL NUMBER OF GRIEVANCES REVIEWED UNDER § 15–10A–02(B)(2)(VI) OF THIS SUBTITLE AND AGGREGATED BY: A. TYPE OF CLAIM; B. RACE, GENDER, AND PROFESSION OF MEMBER; AND C. TYPE OF POLICY, INCLUDING INDIVIDUAL, SMALL GROUP, OR LARGE GROUP AND WHETHER THE POLICY WAS PURCHASED ON THE HEALTH BENEFIT EXCHANGE; AND
Pending 2026-10-01
R-02.1
Insurance § 15–10A–06(a)(1)(iii)(6)
Plain Language
Carriers must report in their quarterly submissions to the Commissioner whether an AI, algorithm, or other software tool was used in making each adverse decision, alongside existing reporting on the number of adverse decisions, whether prior authorization or step therapy was involved, and the type of service at issue. The underlying reporting obligation pre-exists; this bill amends the statutory text to add the AI-usage disclosure requirement — carriers must now track and report AI involvement in adverse decisions as part of their standard quarterly reporting.
6. the number of adverse decisions issued by the carrier under § 15–10A–02(f) of this subtitle, whether the adverse decision involved a prior authorization or step therapy protocol, the type of service at issue in the adverse decisions, and whether an artificial intelligence, algorithm, or other software tool was used in making the adverse decision;
Pending 2026-08-01
R-02.1
Minn. Stat. § 181.9922, subd. 1(c)
Plain Language
Each time an employer issues a pre-use notice to workers about an automated decision system, the employer must also submit a copy to the Commissioner of Labor and Industry within ten days. This is an event-driven regulatory submission — triggered each time a new or modified ADS notice is issued, rather than on a periodic schedule. Copies must also be made available to authorized representatives upon request. This provision is mapped separately from the pre-use notice obligation because it creates an independent duty to file with the regulator.
(c) Every time an employer provides a notice under paragraph (a), a copy of that notice must be submitted to the commissioner of labor and industry within ten days of the date the notice was provided to the worker. Copies of notices under paragraph (a) must also be made available to authorized representatives upon request.
Pending 2026-09-01
R-02.1
§ 181.9922, Subd. 1(c)
Plain Language
Each time an employer provides a pre-use notice to workers about an automated decision system, the employer must also file a copy of that same notice with the Commissioner of Labor and Industry within 10 days. This is a continuing, event-triggered filing obligation — not a one-time submission. Copies must also be available to authorized representatives on request.
(c) Every time an employer provides a notice under paragraph (a), a copy of that notice must be submitted to the commissioner of labor and industry within ten days of the date the notice was provided to the worker. Copies of notices under paragraph (a) must also be made available to authorized representatives upon request.
Pending 2026-01-01
R-02.3
G.S. § 114B-3(a)-(b)
Plain Language
No person may operate or distribute a chatbot that deals substantially with health information in North Carolina without first obtaining a health information chatbot license from the Department of Justice. The application must include comprehensive documentation covering technical architecture, data practices, security measures, privacy protections, quality control and testing procedures, risk assessment and mitigation strategies, evidence of regulatory compliance, proof of insurance, and required fees. The definition of 'health information' is extremely broad, covering physical and mental health data, reproductive and gender-affirming care information, biometric and genetic data, and even inferred health data. This is a pre-market licensing requirement — the chatbot cannot be operated or distributed until the license is granted.
(a) No person shall operate or distribute a chatbot that deals substantially with health information without first obtaining a health information chatbot license. (b) An application for a health information chatbot license shall include all of the following: (1) Detailed documentation of the chatbot's: a. Technical architecture and operational specifications. b. Data collection, processing, storage, and deletion practices. c. Security measures and protocols. d. Privacy protection mechanisms. (2) Quality control and testing procedures. (3) Risk assessment and mitigation strategies. (4) Evidence of compliance with applicable federal and state regulations. (5) Proof of insurance coverage. (6) Required application fees. (7) Any additional information required by the Department.
Pending 2026-01-01
R-02.2
G.S. § 114B-5(b)-(f)
Plain Language
The AG's designated enforcement staff may conduct both physical and digital inspections of licensed health information chatbots. Digital inspections cover source code, algorithms, ML models, data practices, cybersecurity, user privacy protections, chatbot response testing, and integration with other platforms. The Director may access all records relating to development, testing, validation, production, distribution, and performance. Trade secrets and confidential commercial information are protected from public records disclosure. After each inspection, the Director provides a detailed findings report with required corrective actions. Manufacturers and importers must establish and maintain records and submit reports as the Director requires by regulation. Licensees must maintain documentation in a form that can be produced for inspection.
(b) The Attorney General shall designate a Director, officers, and employees assigned to the oversight and enforcement of this Chapter. Upon presenting appropriate credentials and a written notice to the owner, operator, or agent in charge, those officers and employees are authorized to enter, at reasonable times, any factory, warehouse, or establishment in which chatbots licensed under this Chapter are manufactured, processed, or held, and to inspect, in a reasonable manner and within reasonable limits and in a reasonable time. In addition to physical inspections, the Department may conduct digital inspections of licensed chatbots under this Chapter, to include the following: (1) Examination of source code, algorithms, and machine learning models. (2) Review of data processing and storage practices. (3) Evaluation of cybersecurity measures and protocols. (4) Assessment of user data privacy protections. (5) Testing of chatbot responses and behaviors in various scenarios. (6) Audit of data collection, use, and retention practices. (7) Inspection of software development and update processes. (8) Review of remote access and monitoring capabilities. (9) Evaluation of integration with other digital health technologies or platforms. (c) As part of any inspection, whether physical or digital, the Director may require access to all records relating to the development, testing, validation, production, distribution, and performance of a chatbot licensed under this Chapter. (d) Any information obtained during an inspection which falls within the definition of a trade secret or confidential commercial information as defined in 21 CFR 20.61 shall be treated as confidential and shall not be disclosed under Chapter 132 of the General Statutes, except as may be necessary in proceedings under this Chapter or other applicable law. 
(e) Following any inspection, the Director shall provide a detailed report of findings to the manufacturer or importer, including any identified deficiencies and required corrective actions. (f) Every person who is a manufacturer or importer of a licensed chatbot under this Chapter shall establish and maintain such records, and make such reports to the Director, as the Director may by regulation reasonably require to assure the safety and effectiveness of such devices.
Pending 2027-01-01
R-02.1
Sec. 5(5)-(6)
Plain Language
Large frontier developers must submit to the Attorney General summaries of their catastrophic risk assessments from internal use of frontier models at least once every three months. The Attorney General will establish a confidential submission mechanism. This is a proactive, scheduled submission — the developer cannot wait to be asked. The obligation covers internal use specifically, distinguishing it from the pre-deployment public disclosure requirement in Sec. 4(4).
(5) The Attorney General shall establish a mechanism to be used by a large frontier developer to confidentially submit summaries of any assessments of the potential for catastrophic risk resulting from internal use of its frontier models. (6) A large frontier developer shall transmit to the Attorney General a summary of any assessment of catastrophic risk resulting from internal use of its frontier models no less frequently than every three months.
Pending 2026-02-01
R-02.2
Sec. 3(7)(a)-(d)
Plain Language
Upon written demand in connection with an ongoing investigation, the Attorney General may require a developer to produce the documentation described in Sec. 3(2) (uses, training data, limitations, discrimination risks, evaluations, data governance, etc.). The developer must produce it in the AG's prescribed form. Developers may designate materials as proprietary or trade secret, and such designated materials are exempt from public disclosure.
(7)(a) On and after February 1, 2026, the Attorney General may provide a written demand to any developer to disclose to the Attorney General the statement or documentation described in subsection (2) of this section if such a statement or documentation is relevant to an investigation related to the developer conducted by the Attorney General. Such statement or documentation shall be provided to the Attorney General in a form and manner prescribed by the Attorney General. (b) The Attorney General may evaluate such statement or documentation, if it is relevant to an investigation conducted by the Attorney General regarding a violation of the Artificial Intelligence Consumer Protection Act, to ensure compliance with the Artificial Intelligence Consumer Protection Act. (c) In any disclosure pursuant to this subsection, any developer may designate the statement or documentation as including proprietary information or a trade secret. (d) To the extent any such statement or documentation includes any proprietary information or any trade secret, such statement or documentation shall be exempt from disclosure.
Pending 2026-02-01
R-02.2
Sec. 4(8)(a)-(d)
Plain Language
In connection with an ongoing investigation, the AG may require a deployer (or its contracted third party) to produce its risk management policy, impact assessments, and associated records within 90 days. Disclosures to the AG are not public records under Nebraska's public records law, and deployers may designate materials as proprietary or trade secret. This requires deployers to maintain documentation in a form that can be produced to the AG on demand.
(8)(a) On and after February 1, 2026, in connection with an ongoing investigation related to the deployer, the Attorney General may require any deployer or third party contracted by a deployer to disclose any of the following to the Attorney General no later than ninety days after such request in a form and manner prescribed by the Attorney General: (i) The risk management policy implemented pursuant to subsection (2) of this section; (ii) The impact assessment completed pursuant to subsection (3) of this section; or (iii) The records maintained pursuant to subdivision (3)(f) of this section. (b) If such risk management policy, impact assessment, or record is relevant to an investigation conducted by the Attorney General regarding a violation of the Artificial Intelligence Consumer Protection Act, the Attorney General may evaluate the risk management policy, impact assessment, or records disclosed pursuant to subdivision (a) of this subsection to ensure compliance with the Artificial Intelligence Consumer Protection Act. (c) Any disclosure under this subsection shall not be a public record subject to disclosure pursuant to sections 84-712 to 84-712.09. (d) A deployer may designate any statement or documentation disclosed under this subsection as including proprietary information or a trade secret.
Pre-filed 2026-07-01
R-02.1
Section 1(c)(1)-(4)
Plain Language
Every artificial intelligence company must annually conduct safety tests on all AI technology it sells, develops, deploys, uses, or offers for sale in New Jersey, following the minimum requirements established by the Office of Information Technology. The company must then submit an annual report to OIT containing: a list of all AI technologies tested, a description of each safety test conducted and how it adheres to OIT's requirements, a list of any third parties used to conduct the tests, and the results of each test. The scope is extremely broad — it covers any private entity or public agency with any connection to AI technology in New Jersey, including entities that merely use AI technology. The bill does not specify penalties for failure to test or report.
An artificial intelligence company shall annually subject all artificial intelligence technology sold, developed, deployed, used, or offered for sale in this State to a safety test that adheres to the requirements established pursuant to subsection b. of this section and submit a report to the Office of Information Technology containing: (1) a list of all artificial intelligence technologies tested; (2) a description of each safety test conducted, including the safety test's adherence to the requirements established pursuant to subsection b. of this section; (3) a list of all third parties used to conduct safety tests, if any; and (4) the results of each safety test administered.
Pre-filed 2026-09-28
R-02.1
Section 5(a)(6)-(7), 5(b)
Plain Language
Employers with 100 or more employees who deploy AI systems that result in layoffs must file an AI Impact Disclosure with the Department of Labor and Workforce Development. The disclosure must include, at minimum, the date the AI tool was deployed, the date of layoffs, and the number of workers displaced. These employers must also make supplemental contributions to the AI Horizon Fund based on the number of AI-attributable layoffs, according to a schedule the Department will develop. The 100-employee threshold is a firm-wide headcount, not limited to the affected division or location.
(6) develop an AI Impact Disclosure that employers deploying AI systems that results in layoffs shall file with the department. This disclosure shall contain, at a minimum, the date on which the AI tool that resulted in layoffs was deployed, the date of layoffs, and the number of workers displaced by the AI tool deployment; and (7) develop a supplemental contribution schedule to the AI Horizon Fund based on the number of layoffs attributable to AI and develop a mechanism for assessment and payment of these assessments. b. The disclosure statements and supplemental contributions specified in paragraphs (6) and (7) of subsection a. of this section shall only be applicable to firms which have 100 or more employees.
Pre-filed 2026-09-28
R-02.1
Section 6(a)-(b)
Plain Language
AI infrastructure entities must conduct and file an environmental impact assessment with the Department of Labor and Workforce Development at the time of initial deployment and annually thereafter, plus an additional assessment with any capacity expansion. They must also submit annual reports detailing energy consumption, water usage, and carbon emissions. The manner of filing is to be determined by the Department. These are ongoing reporting and filing obligations — not one-time requirements.
Each AI infrastructure entity shall, at the time of initial deployment and annually thereafter, in a manner determined by the department: a. Conduct an environmental impact assessment and provide an additional environmental impact assessment with any capacity expansion, and file the assessment with the department; b. Submit annual reports to the department detailing energy consumption, water usage, and carbon emissions;
Pre-filed 2026-02-02
R-02.1
Section 1.d.-e.
Plain Language
Employers using AI video interview analysis to screen applicants for in-person interviews must collect demographic data on the race and ethnicity of applicants at two stages: (1) those who are and are not selected for in-person interviews after AI screening, and (2) those who are offered positions or hired. This data must be reported annually to the Department of Labor and Workforce Development. The obligation requires employers both to collect race and ethnicity data from applicants and to submit it on a defined schedule — this is an ongoing annual reporting requirement, not a one-time filing.
d. An employer that uses an artificial intelligence analysis of a video interview to determine whether an applicant will be selected for an in-person interview shall collect and report the following demographic data: (1) the race and ethnicity of applicants who are and are not afforded the opportunity for an in-person interview after the use of artificial intelligence analysis; and (2) the race and ethnicity of applicants who are offered a position or hired. e. The demographic data collected under subsection d. of this section shall be reported annually to the Department of Labor and Workforce Development.
Pending 2026-05-13
R-02.1
Section 6(a)-(c)
Plain Language
AI infrastructure entities must, at initial deployment and annually thereafter: (1) conduct and file environmental impact assessments with the Department (with additional assessments required for capacity expansions); (2) submit annual reports detailing energy consumption, water usage, and carbon emissions; and (3) enter into community benefit agreements with affected municipalities and file those agreements with the Department. The specific format and procedures will be determined by the Department. Violation carries civil penalties under section 8.
Each AI infrastructure entity shall, at the time of initial deployment and annually thereafter, in a manner determined by the department: a. Conduct an environmental impact assessment and provide an additional environmental impact assessment with any capacity expansion, and file the assessment with the department; b. Submit annual reports to the department detailing energy consumption, water usage, and carbon emissions; and c. Enter into community benefit agreements with affected municipalities, and file the agreement with the department.
Pending 2027-01-01
R-02.2
GBL § 1551(3)(a)-(b)
Plain Language
Developers distributing high-risk AI decision systems must, to the extent feasible, provide deployers and downstream developers with the documentation needed to complete impact assessments under this article, delivered through model cards, dataset cards, or similar artifacts. A developer that also serves as its own deployer is exempt from this documentation requirement unless the system is provided to an unaffiliated deployer. Trade secrets and security-sensitive information are exempt.
(a) Except as provided in subdivision five of this section, any developer that, on or after January first, two thousand twenty-seven, offers, sells, leases, licenses, gives, or otherwise makes available to a deployer or other developer a high-risk artificial intelligence decision system shall, to the extent feasible, make available to such deployers and other developers the documentation and information relating to such high-risk artificial intelligence decision system necessary for a deployer, or the third party contracted by a deployer, to complete an impact assessment pursuant to this article. The developer shall make such documentation and information available through artifacts such as model cards, dataset cards, or other impact assessments. (b) A developer that also serves as a deployer for any high-risk artificial intelligence decision system shall not be required to generate the documentation and information required pursuant to this section unless such high-risk artificial intelligence decision system is provided to an unaffiliated entity acting as a deployer.
Pending 2027-01-01
R-02.2
GBL § 1551(6)
Plain Language
The AG may require developers to produce their deployer-facing documentation and general statements as part of an investigation. Developers may designate submitted materials as trade secret, confidential, or privileged — such materials are exempt from public disclosure and production does not waive attorney-client privilege or work product protection. This is a responsive disclosure obligation — triggered by AG request, not on a defined schedule.
Beginning on January first, two thousand twenty-seven, the attorney general may require that a developer disclose to the attorney general, as part of an investigation conducted by the attorney general and in a form and manner prescribed by the attorney general, the general statement or documentation described in subdivision two of this section. The attorney general may evaluate such general statement or documentation to ensure compliance with the provisions of this section. In disclosing such general statement or documentation to the attorney general pursuant to this subdivision, the developer may designate such general statement or documentation as including any information that is exempt from disclosure pursuant to subdivision five of this section or article six of the public officers law. To the extent such general statement or documentation includes such information, such general statement or documentation shall be exempt from disclosure. To the extent any information contained in such general statement or documentation is subject to the attorney-client privilege or work product protection, such disclosure shall not constitute a waiver of such privilege or protection.
Pending 2027-01-01
R-02.2
GBL § 1552(9)
Plain Language
The AG may require deployers (or their contracted third parties) to produce risk management policies, impact assessments, and related records within 90 days of a request, as part of an AG investigation. Deployers may designate materials as trade secret or confidential, and production does not waive attorney-client privilege or work product protection. This is a responsive disclosure obligation triggered by AG request.
Beginning on January first, two thousand twenty-seven, the attorney general may require that a deployer, or a third party contracted by the deployer pursuant to subdivision three of this section, as applicable, disclose to the attorney general, as part of an investigation conducted by the attorney general, no later than ninety days after a request by the attorney general, and in a form and manner prescribed by the attorney general, the risk management policy implemented pursuant to subdivision two of this section, the impact assessment completed pursuant to subdivision three of this section; or records maintained pursuant to paragraph (e) of subdivision three of this section. The attorney general may evaluate such risk management policy, impact assessment or records to ensure compliance with the provisions of this section. In disclosing such risk management policy, impact assessment or records to the attorney general pursuant to this subdivision, the deployer or third-party contractor, as applicable, may designate such risk management policy, impact assessment or records as including any information that is exempt from disclosure pursuant to subdivision eight of this section or article six of the public officers law. To the extent such risk management policy, impact assessment, or records include such information, such risk management policy, impact assessment, or records shall be exempt from disclosure. To the extent any information contained in such risk management policy, impact assessment, or record is subject to the attorney-client privilege or work product protection, such disclosure shall not constitute a waiver of such privilege or protection.
Pending 2027-01-01
R-02.2
GBL § 1553(4)
Plain Language
The AG may require developers of general-purpose AI models to produce technical documentation maintained under § 1553 within 90 days of request, as part of an investigation. Developers may designate materials as trade secret or confidential, and production does not waive attorney-client privilege or work product protection.
Beginning on January first, two thousand twenty-seven, the attorney general may require that a developer disclose to the attorney general, as part of an investigation conducted by the attorney general, no later than ninety days after a request by the attorney general and in a form and manner prescribed by the attorney general, any documentation maintained pursuant to this section. The attorney general may evaluate such documentation to ensure compliance with the provisions of this section. In disclosing any documentation to the attorney general pursuant to this subdivision, the developer may designate such documentation as including any information that is exempt from disclosure pursuant to subdivision three of this section or article six of the public officers law. To the extent such documentation includes such information, such documentation shall be exempt from disclosure. To the extent any information contained in such documentation is subject to the attorney-client privilege or work product protection, such disclosure shall not constitute a waiver of such privilege or protection.
Pending 2025-07-26
R-02.3
State Tech. Law § 510(1)-(3)
Plain Language
Any person developing or operating a high-risk AI system in New York must register the system with the Secretary of State by applying for a license. Registration is triggered by active deployment and covers all updates, modifications, and capability expansions. For autonomous weapons systems specifically (§ 501(2)(i)), pre-development written disclosure is required before active development begins. The Secretary may order cessation of development or public access pending classification review, and determinations of high-risk status are made through formal public hearings. The registration duty applies to systems that more likely than not qualify as high-risk, with the Secretary empowered to proactively identify unregistered systems.
§ 510. Duty to register a high-risk advanced artificial intelligence system. 1. Any person who develops a high-risk advanced artificial intelligence system, whether in whole or in part, in the state that is presently performing functions for its intended purpose or within its designated operational parameters, shall have the duty to disclose the existence and function of said system to the secretary by applying for a license as required under section five hundred eleven of this article or, where applicable, a supplemental license under section five hundred twelve of this article. This duty to disclose shall be triggered by the system's active deployment and usage in its intended context or field of operation and is applicable irrespective of the system's location of operation. This duty extends to any updates, modifications, upgrades, or expansions of the system's capabilities or intended uses. 2. Any person developing a system as defined in paragraph (i) of subdivision two of section five hundred one of this article within the state shall disclose in writing to the secretary the development of such a system prior to active development of the system. Such writing shall set forth the names and addresses of all persons involved in the development of such system, a description of the system, the systems functions and intended use cases, and measures that will be taken to ensure that any risks posed by the system are mitigated. The secretary may, upon receipt of such writing, require such person to cease development of such a system where, in the secretary's discretion, the secretary believes the system has a high likelihood of violating section five hundred twenty-nine or section five hundred thirty of this article.
3. The duties set forth in this section shall apply only to advanced artificial intelligence systems that more likely than not fall under the definition of high-risk advanced artificial intelligence system as defined in section five hundred one of this article. The secretary shall send notice to any system that is presently performing functions for its intended purpose or within its designated operational parameters which, in their discretion, may fall under the definition of high-risk advanced artificial intelligence systems but that has not registered with the secretary. In the notice, the secretary may require the creators of the system to cease development and access by private individuals or the general public, pending review. Such notice shall be binding and have the effect of law. Determinations that a system is a high-risk advanced artificial intelligence system shall be made in a hearing held pursuant to the provisions of section five hundred nine of this article. In such hearing, the administrator of such hearing shall accept comments from the public. Such hearing shall, to the extent practicable, not disclose any proprietary information concerning the advanced artificial intelligence system to the public.
Pending 2025-07-26
R-02.3
State Tech. Law § 513(1)-(4)
Plain Language
License applications must include: the applicant's identity and corporate details, the names and addresses of all ethics and risk management board members, principals, and officers, and a description of all known general use cases of the AI system. The Secretary conducts a substantive review and may deny the license if the applicant's ethics, experience, character, and fitness do not command community confidence. Denied applicants receive a license fee refund but not an investigation fee refund. This functions as both a regulatory submission and a character-fitness assessment for AI operators.
§ 513. Application for licenses. 1. An application for a license required under this article shall be in writing, under oath, and in the form prescribed by the secretary, and shall contain the following: (a) the exact name and address of the applicant, and if the applicant be a co-partnership or association, the names of the members thereof, and if a corporation the date and place of its incorporation; (b) the name and the business and residential address of each member of the ethics and risk management board, each principal, and officer of the applicant; and (c) the description of all known general use cases of the advanced artificial intelligence system, including any purposes foreseen to be implemented by the applicant. A "use case" shall be defined as broad category of potential use. 2. After the filing of an application for a license accompanied by payment of the fees for license and investigation, it shall be substantively reviewed. After the application is deemed sufficient and complete, the secretary shall issue the license, or the secretary may refuse to issue the license if the secretary shall find that the ethics, experience, character and general fitness of the applicant or any person associated with the applicant are not such as to command the confidence of the community and to warrant the belief that the business will be conducted honestly, fairly and efficiently within the purposes and intent of this article. 3. If the secretary refuses to issue a license, the secretary shall notify the applicant of the denial, return to the applicant the sum paid as a license fee, but retain the investigation fee to cover the costs of investigating the applicant. 4. Each license issued pursuant to this article shall remain in full force unless it is surrendered by the licensee, revoked or suspended.
Pending 2025-07-26
R-02.1
State Tech. Law § 516(4)
Plain Language
The ethics and risk management board must annually submit a comprehensive report to the Secretary for each licensed system. The report must cover all possible use cases, detailed risk assessments, evaluation of which applications should be constrained, mitigation plans, incident and failure reviews, user education plans, board conflicts of interest, and compliance updates. This is a scheduled proactive regulatory submission — operators cannot wait to be asked. The scope is very broad, requiring assessment even of unlikely and unintended use cases.
4. Annually, the ethics and risk management board of each operator shall submit to the secretary a comprehensive report for each licensed high-risk advanced artificial intelligence system which consists of the following: (a) All possible use cases, whether intended or unintended, whether likely or unlikely. (b) A thorough risk assessment for each use case, considering and evaluating the potential for harm, irrespective of the probability of such risk materializing. This shall include, but not be limited to, the system's potential impact on privacy, security, fairness, economic implications, societal well-being, and safety of persons and the environment. (c) A detailed evaluation of known use cases of the system by users, exploring whether certain applications ought to be constrained or banned due to ethical considerations. This shall include an assessment of the operator's capacity to impose such constraints on use cases. (d) A mitigation plan for each identified risk, including preemptive measures, monitoring processes, and responsive actions. This shall also include a communication strategy to inform users and stakeholders about potential risks and steps taken to mitigate them. (e) A comprehensive review of any incidents or failures of the system in the past year, detailing the circumstances, impacts, measures taken to address the issue, and modifications made to prevent such incidents in the future. (f) Any existing attempts to educate users and, based on the existing use of the system by users, a detailed plan on how the operator intends to inform and instruct users on the safe and ethical use of the system, considering varying levels of digital literacy among users. (g) A disclosure of any conflicts of interest within the ethics board, which could potentially influence the board's decisions and recommendations. This shall include measures to manage and resolve such conflicts. 
(h) An update on the measures taken by the operator to ensure the system's adherence to existing laws, regulations, and ethical guidelines related to artificial intelligence.
Pending 2025-07-26
R-02.1
State Tech. Law § 519(1)-(5)
Plain Language
Licensees must obtain the Secretary's written approval before deploying any source code modification or upgrade. Modifications (changes to decision-making logic) and upgrades (new features) require a written submission detailing the purpose, the new or altered functions, the reasons for the change, and an assessment of new or heightened risks. The Secretary has 30 business days to approve, extendable in writing by up to 30 more; silence past the deadline operates as deemed approval. Rewrites (changes so substantial that they effectively produce a new version or nullify the original application findings) are reviewed as new applications on a 180-business-day timeline, extendable by up to 180 days. All modifications, upgrades, and rewrites must be developed in a pre-production environment. Updates, defined as minor enhancements, error corrections, and performance or security improvements, are exempt. The net effect is a pre-deployment approval gate for every material system change.
§ 519. Source code modifications, updates, upgrades, and rewrites. 1. Where a licensee intends to modify or upgrade the source code of their high-risk advanced artificial intelligence system, such licensee shall be required to inform the secretary of such modification or upgrade and shall be prohibited from implementing such modification or upgrade in an accessible version of the system without express consent of the secretary in writing. This section shall not apply to source code updates. 2. A licensee shall, in writing to the secretary, set forth the purpose of the modification or upgrade, the new functions added to the system or the functions modified, the reason for the modification or upgrade, and an assessment of new risks or risks that may be more probable as a result of the modification or upgrade. The secretary shall, upon receipt of notice, have thirty business days to provide the licensee with approval of the modification or upgrade. Where approval is not received within thirty business days, absent an extension in writing which shall not exceed thirty additional business days, the modification or upgrade shall be deemed approved. Nothing in this subdivision shall be construed as limiting the ability of the secretary to take any action they are authorized to take in relation to the approved modification or upgrade. Where the secretary rejects the modification or upgrade, the secretary shall set forth in writing the reasons for the rejection and steps that the licensee can take to receive approval. Where the secretary approves the modification or upgrade, the licensee may immediately implement such modification or upgrade in a publicly accessible version. 3. 
A licensee who rewrites the source code of its system shall comply with the same standards set forth in subdivisions one and two of this section provided however that the secretary shall examine such source code in the same manner as a new application and shall provide a letter of approval or rejection upon completion of such review within one hundred eighty business days of receipt of such notices except where the secretary requires an extension of time, then an extension of no more than one hundred eighty days shall be authorized. Where the secretary rejects the rewrite, such letter of rejection shall state the reasons for the rejection and steps that the licensee can take to correct such rejection, if any. Where the secretary approves the modification or upgrade, the licensee may immediately implement such modification or upgrade in a publicly accessible version. 4. All modifications, upgrades, and rewrites shall be conducted in a pre-production environment, which shall mean any stage prior to the accessible version. 5. For purposes of this section: (a) "Modify" shall mean altering the source code of the system to alter the way by which the system, or any features within the system, makes decisions. (b) "Upgrade" shall mean altering the source code of the system which gives it new features or functions. (c) "Rewrite" shall mean a change in the source code to such a substantial degree that: (i) it effectively results in a new version of the system; or (ii) the change nullifies all or a substantial amount of the initial findings of the secretary in the operator's original application. (d) "Update" shall mean a change to the source code that includes minor enhancements, improvements, modifications, error corrections, cosmetic changes, or any other change intended to increase the functionality, compatibility, security or performance of the system. 
(e) "Accessible version" shall mean a version of the software that is available to the public or for private use or that is presently operating within its designated operational parameters.
Pending 2025-07-26
R-02.2
State Tech. Law § 526(1)-(4)
Plain Language
The Secretary has broad investigative and examination authority over all licensees and any person suspected of violating this article. The Secretary may compel testimony under oath, subpoena witnesses, and require production of books, records, accounts, documents, source code, and logs. Examination costs — including travel and subsistence — are assessed against and paid by the examined licensee. All investigation reports and correspondence are confidential and not subject to subpoena, unless the Secretary determines publication serves justice and public advantage. Operators must be prepared to produce all records and source code on demand and must budget for examination cost assessments.
§ 526. Investigations and examinations. 1. The secretary shall have the power to make such investigations as the secretary shall deem necessary to determine whether any operator or any other person has violated any of the provisions of this article, or whether any licensee has conducted itself in such manner as would justify the revocation of its license, and to the extent necessary therefor, the secretary may require the attendance of and examine any person under oath, and shall have the power to compel the production of all relevant books, records, accounts, documents, source code, and logs. 2. The secretary shall have the power to make such examinations of the books, records, accounts, documents, source code, and logs used in the business of any licensee as the secretary shall deem necessary to determine whether any such licensee has violated any of the provisions of this article. 3. The expenses incurred in making any examination pursuant to this section shall be assessed against and paid by the licensee so examined, except that traveling and subsistence expenses so incurred shall be charged against and paid by licensees in such proportions as the secretary shall deem just and reasonable, and such proportionate charges shall be added to the assessment of the other expenses incurred upon each examination. Upon written notice by the secretary of the total amount of such assessment, the licensee shall become liable for and shall pay such assessment to the secretary. 4. 
All reports of examinations and investigations, and all correspondence and memoranda concerning or arising out of such examinations or investigations, including any duly authenticated copy or copies thereof in the possession of any licensee or the department, shall be confidential communications, shall not be subject to subpoena and shall not be made public unless, in the judgment of the secretary, the ends of justice and the public advantage will be subserved by the publication thereof, in which event the secretary may publish or authorize the publication of a copy of any such report or other material referred to in this subdivision, or any part thereof, in such manner as the secretary may deem proper.
Pending 2025-01-01
R-02.1
Labor Law § 201-j(2)
Plain Language
Employers must submit each completed impact assessment to the Department of Labor at least 30 days before implementing the AI system covered by the assessment. This is a proactive submission requirement — the employer cannot wait to be asked. The 30-day lead time creates a de facto pre-deployment review window for the Department, though the statute does not explicitly grant the Department authority to block implementation.
Any impact assessment conducted pursuant to this subdivision shall be submitted to the department at least thirty days prior to the implementation of the artificial intelligence that is the subject of such assessment.
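The 30-day lead time translates directly into an earliest-permissible implementation date. A minimal sketch (function names invented; calendar days assumed, since the statute does not specify otherwise):

```python
from datetime import date, timedelta

def earliest_implementation(submitted: date) -> date:
    """Under Labor Law § 201-j(2), the impact assessment must reach the
    department at least 30 days before implementation, so go-live may occur
    no earlier than the 30th day after submission."""
    return submitted + timedelta(days=30)

def may_implement(submitted: date, planned: date) -> bool:
    """True if the planned implementation date honors the 30-day lead time."""
    return planned >= earliest_implementation(submitted)
```

An assessment submitted June 1 thus cannot support an implementation before July 1 of the same year.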
Pending 2025-06-04
R-02.1
Gen. Bus. Law § 390-f(2)(b)
Plain Language
Each covered entity must file an annual certification of compliance with the responsible capability scaling policy requirement with the Chief Information Officer (or the Chief Cyber Officer or any successor office designated by the governor). The bill does not specify the form or content of the certification beyond affirming compliance — further detail is expected via CIO rulemaking. Entities that also file cybersecurity compliance certifications with the Department of Financial Services must file jointly (see § 390-f(3)).
Each such entity shall file an annual certification of compliance with this section with the chief information officer.
Pending 2025-06-04
R-02.2
Gen. Bus. Law § 390-f(2)(d)
Plain Language
The Attorney General, acting in consultation with the CIO, may audit the responsible capability scaling policies that entities have filed. This creates an obligation for covered entities to maintain policies in a form that can withstand regulatory audit — i.e., the policies must be substantive and documented, not merely nominal certifications. No specific audit timeline, notice requirements, or consequences for adverse audit findings are specified in the bill.
The attorney general, in consultation with the chief information officer, shall have the power to audit the policies filed by entities under this section.
Pending 2025-06-04
R-02.1
Gen. Bus. Law § 390-f(3)
Plain Language
Entities that are already required to file cybersecurity compliance certifications with the New York Department of Financial Services (e.g., under 23 NYCRR 500) must file their AI responsible capability scaling policy certification jointly with that cybersecurity filing. This is a procedural coordination requirement — it does not create a new substantive obligation but does affect the timing and format of the annual certification for DFS-regulated entities.
If an entity also has to file any certification of cybersecurity compliance with the department of financial services, such filings shall be done jointly.
Pending
R-02.1
Civil Rights Law § 88(1)-(3)
Plain Language
Developers must file reports with the attorney general on a defined schedule: within six months of initial offering or deployment, annually thereafter, and within six months of any substantial change. Developer reports must describe intended and disallowed uses, development methodology, training data overview, and information sufficient for deployers to monitor the system and fulfill their own obligations. Each report must be accompanied by a copy of the most recently completed independent audit. Substantial changes — new versions, releases, or updates affecting use cases, functionality, or expected outcomes — trigger an additional reporting obligation.
1. Every developer and deployer of a high-risk AI system shall comply with the reporting requirements of this section. 2. Together with each report required to be filed under this section, every developer and deployer shall file with the attorney general a copy of the last completed independent audit required by this article. 3. Developers of high-risk AI systems shall complete and file with the attorney general reports in accordance with this subdivision. (a) A developer of a high-risk AI system shall complete and file with the attorney general at least: (i) a first report within six months after completion of development of the high-risk AI system and the initial offering of the high-risk AI system to a deployer for deployment or, if the developer is first deployer to deploy the high-risk AI system, after initial deployment; (ii) one report annually following the submission of the first report; and (iii) one report within six months of any substantial change to the high-risk AI system. (b) A developer report under this section shall include: (i) a description of the system including: (A) the uses of the high-risk AI system that the developer intends; and (B) any explicitly unintended or disallowed uses of the high-risk AI system; (ii) an overview of how the high-risk AI system was developed; (iii) an overview of the high-risk AI system's training data; and (iv) any other information necessary to allow a deployer to: (A) understand the outputs and monitor the system for compliance with this article; and (B) fulfill its duties under this article.
Pending
R-02.1
Civil Rights Law § 88(4)
Plain Language
Deployers must file reports with the attorney general on a schedule: within six months of deployment, one year after the first report, then every two years, plus within six months of any substantial change. Deployer reports must describe actual and planned uses for consequential decisions, flag any developer-disallowed uses, and include an impact assessment covering algorithmic discrimination risks and mitigation steps, monetization details, and a cost-benefit evaluation for consumers. Each report must be accompanied by the latest independent audit. Entities that are both developer and deployer may file a single joint report covering both sets of requirements.
4. Deployers of high-risk AI systems shall complete and file with the attorney general reports in accordance with this subdivision. (a) A deployer of a high-risk AI system shall complete and file with the attorney general at least: (i) a first report within six months after initial deployment; (ii) a second report within one year following the completion and filing of the first report; (iii) one report every two years following the completion and filing of the second report; and (iv) one report within six months of any substantial change to the high-risk AI system. (b) A deployer report under this section shall include: (i) a description of the system including: (A) the deployer's actual, intended, or planned uses of the high-risk AI system with respect to consequential decisions; and (B) whether the deployer is using the high-risk AI system for any developer unintended or disallowed uses; and (ii) an impact assessment including: (A) whether the high-risk AI system poses a risk of algorithmic discrimination and the steps taken to address the risk of algorithmic discrimination; (B) if the high-risk AI system is or will be monetized, how it is or is planned to be monetized; and (C) an evaluation of the costs and benefits to consumers and other end users. (c) A deployer that is also a developer and is required to submit reports under subdivision three of this section may submit a single joint report provided it contains the information required in this subdivision.
Pending
R-02.1
Civil Rights Law § 88(6)
Plain Language
For high-risk AI systems already deployed when the article takes effect, developers and deployers receive an 18-month grace period to file their first report and associated audit. After the initial filing, developers must report annually and deployers every two years. This transitional provision gives existing operators more time than the six-month window that applies to newly developed or deployed systems under § 88(3)-(4).
6. For high-risk AI systems which are already in deployment at the time of the effective date of this article, developers and deployers shall have eighteen months from such effective date to complete and file the first report and associated independent audit required by this article. (a) Each developer of a high-risk AI system shall thereafter file at least one report annually following the submission of the first report under this subdivision. (b) Each deployer of a high-risk AI system shall thereafter file at least one report every two years following the submission of the first report under this subdivision.
Pending 2025-09-05
R-02.1
Real Prop. Law § 442-m(1)
Plain Language
The annual submission of the disparate impact analysis summary to the attorney general's office is a proactive, scheduled regulatory disclosure obligation. Covered entities must submit without waiting for a request. The submission must occur at least annually and must cover the most recent analysis. This provision is mapped separately from the underlying audit obligation because it creates an independent regulatory submission requirement.
No less than annually, any real estate broker or online housing platform that uses virtual agents to assist with searches for available properties for sale or rental properties, and any online housing platform that uses AI tools, shall have a disparate impact analysis conducted and shall submit a summary of the most recent disparate impact analysis to the attorney general's office.
Pending
R-02.1
GBL § 1712(1)-(2)
Plain Language
Developers must proactively submit documentation to the Attorney General affirming: (1) the identities and qualifications of the professional domain experts who participated, (2) which development phases each expert contributed to, and (3) any known risks, limitations, or ethical concerns identified during development. This is a proactive submission — developers cannot wait to be asked. Upon review, the Attorney General issues a certificate of compliance; developers found non-compliant are subject to investigation and penalties. The statute does not specify a submission schedule or deadline, which will likely be addressed through AG rulemaking under § 1714.
§ 1712. Documentation and compliance. 1. Developers of artificial intelligence technologies shall submit documentation to the attorney general affirming: (a) The identities and qualifications of professional domain experts involved in the AI technology, pursuant to section seventeen hundred eleven of this article; (b) The specific phases of development in which such professional domain experts contributed; and (c) Any known risks, limitations, or ethical concerns disclosed during development. 2. The attorney general or a duly authorized representative of the attorney general shall issue certificates of compliance to developers who have submitted documentation pursuant to subdivision one of this section and are found to be in compliance. Any technology and developers found to be not in compliance may be subject to investigation and penalties pursuant to section seventeen hundred thirteen of this article.
Pending 2026-01-21
R-02.1
Labor Law § 201-j(2)
Plain Language
Every covered business must file an annual report with the Department of Labor by March 1 covering AI use in the prior calendar year. The report has two components: (1) employment impact data, including estimates of employees displaced, hired, or positions eliminated due to AI; and (2) operational AI usage data, including objectives of AI use, human oversight measures, frequency and duration of use, sensitive personal data handling, and risk reduction measures. The enumerated items are a floor — the statute uses 'including but not limited to,' and the Department may develop additional reporting requirements under subdivision 3. The 90-day cure period in subdivision 5(b) operates as a safe harbor: if the Commissioner notifies a covered business of a violation and the business cures within 90 days to the Commissioner's satisfaction, penalties shall be waived or reduced.
2. Reporting requirement. On or before March first of every year, a covered business shall report to the department regarding the impact of artificial intelligence on its hiring and the nature of its artificial intelligence use in the calendar year ending the preceding December thirty-first. Such report shall include: (a) Employment data, including but not limited to: (i) An estimate of the number of employees displaced, or whose hours have been reduced, due in full or in part to use of artificial intelligence; (ii) An estimate of the number of employees hired, or whose hours have been increased, due in full or in part to use of artificial intelligence; and (iii) An estimate of the number of positions previously filled that the covered business has decided not to fill due in full or in part to use of artificial intelligence; and (b) Information on the nature of artificial intelligence usage, including but not limited to: (i) Descriptions of the objectives of the use of artificial intelligence; (ii) Information regarding any human oversight of artificial intelligence; (iii) Information on the frequency and length of use of artificial intelligence; (iv) Information on any use of artificial intelligence in relation to sensitive personal data, including storage and access protections related to use of artificial intelligence in relation to such personal data; and (v) Measures in place for oversight, risk reduction, or other protections related to use of artificial intelligence.
Pending 2027-01-01
R-02.1
Civil Rights Law § 104(6)
Plain Language
Within 30 days of completing any full pre-deployment evaluation, full impact assessment, or developer annual review, the developer or deployer must: (1) submit the complete evaluation, assessment, or review to the Division of Consumer Protection; (2) publish a summary on their website in an easily accessible manner; and (3) submit that summary to the Division. Upon legislative request, the full evaluation, assessment, or review must also be made available. All evaluations, assessments, and reviews must be retained for at least 10 years. Trade secrets may be redacted from public disclosure; personal data must be.
6. (a) A developer or deployer that conducts a full pre-deployment evaluation, full impact assessment, or developer annual review of assessments shall: (i) not later than thirty days after completion, submit the evaluation, assessment, or review to the division; (ii) upon request, make the evaluation, assessment, or review available to the legislature; and (iii) not later than thirty days after completion: (A) publish a summary of the evaluation, assessment, or review on the website of the developer or deployer in a manner that is easily accessible to individuals; and (B) submit such summary to the division. (b) A developer or deployer shall retain all evaluations, assessments, and reviews described in this section for a period of not fewer than ten years. (c) A developer or deployer: (i) may redact and segregate any trade secret (as defined in section 1839 of title 18, United States Code) from public disclosure under this subdivision; and (ii) shall redact and segregate personal data from public disclosure under this section.
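The three parallel 30-day duties and the 10-year retention floor in § 104(6) can be laid out as a single deadline calculation. This is a sketch under stated assumptions: the function name and dictionary keys are invented, and the retention floor uses a flat 3,650-day approximation because the statute does not define the counting convention.

```python
from datetime import date, timedelta

def filing_deadlines(completed: date) -> dict:
    """Per § 104(6)(a), within 30 days of completing an evaluation,
    assessment, or review the filer must (1) submit the full document to
    the division, (2) publish a summary on its website, and (3) submit
    that summary to the division. Retention under § 104(6)(b) runs at
    least 10 years (approximated here as 3,650 days)."""
    due = completed + timedelta(days=30)
    return {
        "submit_full_to_division_by": due,
        "publish_summary_by": due,
        "submit_summary_to_division_by": due,
        "retain_at_least_until": completed + timedelta(days=3650),
    }
```

All three filing duties share one deadline, so a single completion date drives the whole compliance calendar for that document.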
Pending 2027-01-01
R-02.1
Civ. Rights Law § 88(1)-(3)
Plain Language
Developers of high-risk AI systems must file reports with the Attorney General on a recurring schedule: first report within six months of initial offering or deployment; annually thereafter; and within six months of any substantial change. Each report must describe the system's intended uses, unintended or disallowed uses, development overview, training data overview, and any information deployers need to monitor the system and fulfill their own obligations. Each filing must also include the most recent independent audit. The training data overview requirement makes this a de facto training data disclosure obligation to the regulator. Substantial change includes new versions, releases, or updates that significantly change use cases, functionality, or expected outcomes.
1. Every developer and deployer of a high-risk AI system shall comply with the reporting requirements of this section.
2. Together with each report required to be filed under this section, every developer and deployer shall file with the attorney general a copy of the last completed independent audit required by this article.
3. Developers of high-risk AI systems shall complete and file with the attorney general reports in accordance with this subdivision.
(a) A developer of a high-risk AI system shall complete and file with the attorney general at least:
(i) a first report within six months after completion of development of the high-risk AI system and the initial offering of the high-risk AI system to a deployer for deployment or, if the developer is first deployer to deploy the high-risk AI system, after initial deployment;
(ii) one report annually following the submission of the first report; and
(iii) one report within six months of any substantial change to the high-risk AI system.
(b) A developer report under this section shall include:
(i) a description of the system including:
(A) the uses of the high-risk AI system that the developer intends; and
(B) any explicitly unintended or disallowed uses of the high-risk AI system;
(ii) an overview of how the high-risk AI system was developed;
(iii) an overview of the high-risk AI system's training data; and
(iv) any other information necessary to allow a deployer to:
(A) understand the outputs and monitor the system for compliance with this article; and
(B) fulfill its duties under this article.
Pending 2027-01-01
R-02.1
Civ. Rights Law § 88(4)
Plain Language
Deployers of high-risk AI systems must file reports with the Attorney General on a recurring schedule: first report within six months of deployment; second report one year later; biennially thereafter; and within six months of any substantial change. Reports must include a system description covering actual, intended, or planned uses and any developer-unintended uses, plus an impact assessment addressing algorithmic discrimination risk and mitigation, monetization plans, and a cost-benefit evaluation for consumers and end users. Each filing must also include the most recent independent audit. An entity that is both developer and deployer may submit a single joint report. For systems already deployed at the effective date, an 18-month transition period applies.
4. Deployers of high-risk AI systems shall complete and file with the attorney general reports in accordance with this subdivision.
(a) A deployer of a high-risk AI system shall complete and file with the attorney general at least:
(i) a first report within six months after initial deployment;
(ii) a second report within one year following the completion and filing of the first report;
(iii) one report every two years following the completion and filing of the second report; and
(iv) one report within six months of any substantial change to the high-risk AI system.
(b) A deployer report under this section shall include:
(i) a description of the system including:
(A) the deployer's actual, intended, or planned uses of the high-risk AI system with respect to consequential decisions; and
(B) whether the deployer is using the high-risk AI system for any developer unintended or disallowed uses; and
(ii) an impact assessment including:
(A) whether the high-risk AI system poses a risk of algorithmic discrimination and the steps taken to address the risk of algorithmic discrimination;
(B) if the high-risk AI system is or will be monetized, how it is or is planned to be monetized; and
(C) an evaluation of the costs and benefits to consumers and other end users.
(c) A deployer that is also a developer and is required to submit reports under subdivision three of this section may submit a single joint report provided it contains the information required in this subdivision.
Pending 2027-01-01
R-02.1
Civ. Rights Law § 88(6)
Plain Language
For high-risk AI systems already deployed at the effective date of this article, developers and deployers receive an 18-month transition period to complete and file their first report and associated independent audit. After the first filing, developers must file annually and deployers must file biennially. This is the grandfathering provision for legacy systems — it provides additional runway but does not exempt existing deployments from the statute's requirements.
6. For high-risk AI systems which are already in deployment at the time of the effective date of this article, developers and deployers shall have eighteen months from such effective date to complete and file the first report and associated independent audit required by this article.
(a) Each developer of a high-risk AI system shall thereafter file at least one report annually following the submission of the first report under this subdivision.
(b) Each deployer of a high-risk AI system shall thereafter file at least one report every two years following the submission of the first report under this subdivision.
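The filing cadences in subdivisions three, four, and six reduce to simple month arithmetic. The sketch below is illustrative only: the helper names are invented, month arithmetic clamps the day-of-month, and the additional report triggered by a substantial change is omitted.

```python
import calendar
from datetime import date

def add_months(d: date, months: int) -> date:
    """Shift a date by whole months, clamping the day (e.g. Aug 31 + 6 months -> Feb 28)."""
    total = d.month - 1 + months
    year, month = d.year + total // 12, total % 12 + 1
    return date(year, month, min(d.day, calendar.monthrange(year, month)[1]))

def filing_schedule(trigger: date, role: str, count: int = 4) -> list[date]:
    """First few AG report due dates under § 88.
    Developers: first report 6 months after initial offering or deployment,
    then annually. Deployers: first report 6 months after deployment, second
    one year later, then biennially. Substantial-change reports not modeled."""
    first = add_months(trigger, 6)
    due = [first]
    if role == "developer":
        for i in range(1, count):
            due.append(add_months(first, 12 * i))
    else:
        second = add_months(first, 12)
        due.append(second)
        for i in range(1, count - 1):
            due.append(add_months(second, 24 * i))
    return due[:count]
```

A deployer going live January 1, 2027 would thus file in July 2027, July 2028, July 2030, and so on; a developer on the same trigger date files every July.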
Pending 2025-01-01
R-02.1
Labor Law § 201-j(2)
Plain Language
Employers must submit each completed impact assessment to the Department of Labor at least 30 days before deploying the AI system that is the subject of the assessment. This is a proactive submission requirement — the employer cannot wait to be asked. Because assessments must also be updated biennially and before material changes, each updated assessment would also need to be submitted before the change takes effect.
Any impact assessment conducted pursuant to this subdivision shall be submitted to the department at least thirty days prior to the implementation of the artificial intelligence that is the subject of such assessment.
Pending 2025-10-11
R-02.2
GBL § 1551(6)
Plain Language
The AG may require developers to produce their deployer-facing documentation (the general statement and supporting documentation under § 1551(2)) as part of an investigation. Developers may designate trade secrets, FOIL-exempt information, and attorney-client privileged materials, which will remain exempt from public disclosure. Producing privileged materials to the AG does not waive the privilege.
6. Beginning on January first, two thousand twenty-seven, the attorney general may require that a developer disclose to the attorney general, as part of an investigation conducted by the attorney general and in a form and manner prescribed by the attorney general, the general statement or documentation described in subdivision two of this section. The attorney general may evaluate such general statement or documentation to ensure compliance with the provisions of this section. In disclosing such general statement or documentation to the attorney general pursuant to this subdivision, the developer may designate such general statement or documentation as including any information that is exempt from disclosure pursuant to subdivision five of this section or article six of the public officers law. To the extent such general statement or documentation includes such information, such general statement or documentation shall be exempt from disclosure. To the extent any information contained in such general statement or documentation is subject to the attorney-client privilege or work product protection, such disclosure shall not constitute a waiver of such privilege or protection.
Pending 2025-10-11
R-02.2
GBL § 1552(9)
Plain Language
The AG may require deployers (or their contracted third parties) to produce their risk management policies, impact assessments, and associated records within 90 days of a request, as part of an investigation. Deployers may designate trade secrets, FOIL-exempt information, and attorney-client privileged materials, which remain exempt from public disclosure. Producing privileged materials does not waive the privilege.
9. Beginning on January first, two thousand twenty-seven, the attorney general may require that a deployer, or a third party contracted by the deployer pursuant to subdivision three of this section, as applicable, disclose to the attorney general, as part of an investigation conducted by the attorney general, no later than ninety days after a request by the attorney general, and in a form and manner prescribed by the attorney general, the risk management policy implemented pursuant to subdivision two of this section, the impact assessment completed pursuant to subdivision three of this section; or records maintained pursuant to paragraph (e) of subdivision three of this section. The attorney general may evaluate such risk management policy, impact assessment or records to ensure compliance with the provisions of this section. In disclosing such risk management policy, impact assessment or records to the attorney general pursuant to this subdivision, the deployer or third-party contractor, as applicable, may designate such risk management policy, impact assessment or records as including any information that is exempt from disclosure pursuant to subdivision eight of this section or article six of the public officers law. To the extent such risk management policy, impact assessment, or records include such information, such risk management policy, impact assessment, or records shall be exempt from disclosure. To the extent any information contained in such risk management policy, impact assessment, or record is subject to the attorney-client privilege or work product protection, such disclosure shall not constitute a waiver of such privilege or protection.
Pending 2025-10-11
R-02.2
GBL § 1553(4)
Plain Language
The AG may require developers of general-purpose AI models to produce their technical documentation within 90 days of a request, as part of an investigation. Developers may designate trade secrets, FOIL-exempt information, and attorney-client privileged materials, which remain exempt from public disclosure. Producing privileged materials does not waive the privilege.
4. Beginning on January first, two thousand twenty-seven, the attorney general may require that a developer disclose to the attorney general, as part of an investigation conducted by the attorney general, no later than ninety days after a request by the attorney general and in a form and manner prescribed by the attorney general, any documentation maintained pursuant to this section. The attorney general may evaluate such documentation to ensure compliance with the provisions of this section. In disclosing any documentation to the attorney general pursuant to this subdivision, the developer may designate such documentation as including any information that is exempt from disclosure pursuant to subdivision three of this section or article six of the public officers law. To the extent such documentation includes such information, such documentation shall be exempt from disclosure. To the extent any information contained in such documentation is subject to the attorney-client privilege or work product protection, such disclosure shall not constitute a waiver of such privilege or protection.
Pending 2026-01-07
R-02.1
Labor Law § 201-j(2)
Plain Language
Every covered business must submit a report to the New York Department of Labor by March 1 each year covering the prior calendar year. The report must include two categories of information: (1) employment data estimating the number of employees displaced, hired, or positions eliminated due in whole or part to AI use; and (2) information on AI usage objectives, human oversight, frequency and duration of use, sensitive personal data involvement and protections, and risk reduction measures. The enumerated items are floors — 'including but not limited to' — so the Department may require additional disclosures. The obligation is annual and recurring, triggered by operating as a covered business in New York during the reporting year.
2. Reporting requirement. On or before March first of every year, a covered business shall report to the department regarding the impact of artificial intelligence on its hiring and the nature of its artificial intelligence use in the calendar year ending the preceding December thirty-first. Such report shall include: (a) Employment data, including but not limited to: (i) An estimate of the number of employees displaced, or whose hours have been reduced, due in full or in part to use of artificial intelligence; (ii) An estimate of the number of employees hired, or whose hours have been increased, due in full or in part to use of artificial intelligence; and (iii) An estimate of the number of positions previously filled that the covered business has decided not to fill due in full or in part to use of artificial intelligence; and (b) Information on the nature of artificial intelligence usage, including but not limited to: (i) Descriptions of the objectives of the use of artificial intelligence; (ii) Information regarding any human oversight of artificial intelligence; (iii) Information on the frequency and length of use of artificial intelligence; (iv) Information on any use of artificial intelligence in relation to sensitive personal data, including storage and access protections related to use of artificial intelligence in relation to such personal data; and (v) Measures in place for oversight, risk reduction, or other protections related to use of artificial intelligence.
Pending 2026-07-15
R-02.1
Labor Law § 860-b(1)(f)(i)-(ii)
Plain Language
Employers subject to New York's WARN Act who issue mass layoff, relocation, or employment loss notices must now include a statement disclosing whether the workforce reduction is attributable, in whole or in part, to the introduction, expansion, or adoption of AI systems, automation technologies, or machine-based processes that replaced or materially altered affected employees' duties. To the extent known at the time of notice, the employer must also estimate the percentage of positions affected and briefly describe the relevant technology or process. This is an addition to the existing WARN notice content requirements — it does not change who must file or when, only what the notice must contain.
(f) (i) Each notice required under this section shall include a statement indicating whether the employment losses described are the result, in whole or in part, of the introduction, expansion, or adoption of artificial intelligence (AI) systems, automation technologies, or machine-based processes that have replaced or materially altered the duties of affected employees. (ii) Such statement shall also include, to the extent known by the employer at the time of notice: (A) The estimated percentage of positions affected due to such automation or AI integration; and (B) A brief description of the technology or process that contributed to the reduction.
Pending 2025-01-01
R-02.1
Ohio Rev. Code § 3902.80(B)(1)-(2)
Plain Language
Each health plan issuer must file an annual report with the Superintendent of Insurance by March 1, covering network providers, enrollment counts, and — critically — whether the issuer uses AI-based algorithms in utilization review. If AI is used, the report must detail the algorithm criteria, training datasets, the algorithm itself, software outcomes, and data on how much time human reviewers spend examining adverse determinations before signing off. An officer must verify the report's contents. This is a comprehensive AI transparency filing obligation directed at the state insurance regulator.
(B)(1) Each health plan issuer, annually, on or before the first day of March, shall file a report with the superintendent of insurance covering all of the following information: (a) Each provider in the health plan issuer's network; (b) The number of covered persons enrolled in health benefit plans issued by the health plan issuer in this state in the preceding calendar year; (c) Whether the health plan issuer used, is using, or will use artificial intelligence-based algorithms in utilization review processes for those health benefit plans and, if so, all of the following information: (i) The algorithm criteria; (ii) Data sets used to train the algorithm; (iii) The algorithm itself; (iv) Outcomes of the software in which the algorithm is used; (v) Data on the amount of time a human reviewer spends examining an adverse determination prior to signing off on each such determination. (2) The health plan issuer shall submit the report in a form prescribed by the superintendent. An officer of the health plan issuer shall verify the contents of the report.
Pending 2025-01-01
R-02.2
Ohio Rev. Code § 3902.80(D)
Plain Language
The Superintendent of Insurance may audit any health plan issuer's use of AI-based algorithms at any time, without advance notice or any triggering event. The Superintendent may also engage third-party auditors to conduct these audits. For health plan issuers, this means they must maintain records and documentation of their AI use in a form that can be produced for audit at any time; there is no advance scheduling requirement or cure period before an audit may commence.
(D) The superintendent may audit a health plan issuer's use of an artificial intelligence-based algorithm at any time and may contract with a third party for the purposes of conducting such an audit.
Pending 2025-01-01
R-02.1
Ohio Rev. Code § 3902.80(B)(1)-(3)
Plain Language
Each health plan issuer must file an annual report with the Superintendent of Insurance by March 1 covering its provider network, covered person enrollment, and whether it uses AI-based algorithms in utilization review. If AI is used, the report must include detailed information: algorithm criteria, training data sets, the algorithm itself, software outcomes, and data on how much time human reviewers spend examining adverse determinations before signing off. An officer must verify the report. Both the superintendent and the health plan issuer must publish the report on their respective websites, making this both a regulatory filing and a public disclosure obligation.
(B)(1) Each health plan issuer, annually, on or before the first day of March, shall file a report with the superintendent of insurance covering all of the following information: (a) Each provider in the health plan issuer's network; (b) The number of covered persons enrolled in health benefit plans issued by the health plan issuer in this state in the preceding calendar year; (c) Whether the health plan issuer used, is using, or will use artificial intelligence-based algorithms in utilization review processes for those health benefit plans and, if so, all of the following information: (i) The algorithm criteria; (ii) Data sets used to train the algorithm; (iii) The algorithm itself; (iv) Outcomes of the software in which the algorithm is used; (v) Data on the amount of time a human reviewer spends examining an adverse determination prior to signing off on each such determination. (2) The health plan issuer shall submit the report in a form prescribed by the superintendent. An officer of the health plan issuer shall verify the contents of the report. (3) The superintendent shall publish a copy of the report on the web site of the department of insurance. The health plan issuer shall publish a copy of the report on the health plan issuer's publicly accessible web site.
Pending 2026-10-06
R-02.1R-02.4
35 Pa.C.S. § 3504(a)-(b)
Plain Language
Facilities using AI for clinical decision making must annually file a compliance statement with the Department of Health. The statement must include: a summary of AI algorithm function and scope; a logic or decision tree of the algorithms; a description of each training data set including data source; an attestation of compliance with the responsible use requirements with supporting evidence; and a description of the facility's oversight and validation process. This is both a regulatory submission and an annual certification obligation.
(a) Compliance statement required.--A facility using artificial intelligence-based algorithms for clinical decision making shall annually file with the department in the form and manner prescribed by the department an artificial intelligence compliance statement. (b) Contents.--Each compliance statement must: (1) Summarize the function and scope of artificial intelligence-based algorithms used for clinical decision making. (2) Provide a logic or decision tree of artificial intelligence-based algorithms used for clinical decision making. (3) Provide a description of each training data set used by artificial intelligence-based algorithms for clinical decision making, including the source of the data. (4) Attest that the artificial intelligence-based algorithms and the training data sets comply with section 3503 (relating to responsible use) and provide evidence of the compliance. (5) Describe the process of the facility for overseeing and validating the performance and compliance of the artificial intelligence-based algorithms in accordance with section 3503.
Pending 2026-10-06
R-02.1R-02.4
40 Pa.C.S. § 5204(a)-(b)
Plain Language
Insurers using AI in utilization review must annually file a compliance statement with the Insurance Department covering algorithm function and scope, logic/decision trees, training data descriptions with sources, an attestation of compliance with responsible use requirements with evidence, and a description of the insurer's AI oversight and validation process.
(a) Compliance statement required.--An insurer using artificial intelligence-based algorithms in the utilization review process shall annually file with the department in the form and manner prescribed by the department an artificial intelligence compliance statement. (b) Contents.--Each compliance statement must: (1) Summarize the function and scope of the artificial intelligence-based algorithms used for utilization review. (2) Provide a logic or decision tree of artificial intelligence-based algorithms used for utilization review. (3) Provide a description of each training data set used by artificial intelligence-based algorithms for utilization review, including the source of the data. (4) Attest that the artificial intelligence-based algorithms and the training data sets comply with section 5203 (relating to responsible use) and provide evidence of the compliance. (5) Describe the process of the insurer for overseeing and validating the performance and compliance of the artificial intelligence-based algorithms in accordance with section 5203.
Pending 2026-10-06
R-02.1R-02.4
40 Pa.C.S. § 5304(a)-(b)
Plain Language
MA or CHIP managed care plans using AI in utilization review must annually file a compliance statement with the Department of Human Services covering the same categories as the insurer filing: algorithm function and scope, logic/decision trees, training data descriptions with sources, compliance attestation with evidence, and oversight/validation process descriptions.
(a) Compliance statement required.--An MA or CHIP managed care plan using artificial intelligence-based algorithms in the utilization review process shall annually file with the department, in the form and manner prescribed by the department, an artificial intelligence compliance statement. (b) Contents.--Each compliance statement must: (1) Summarize the function and scope of the artificial intelligence-based algorithms used for utilization review. (2) Provide a logic or decision tree of artificial intelligence-based algorithms used for utilization review. (3) Provide a description of each training data set used by artificial intelligence-based algorithms for utilization review, including the source of the data. (4) Attest that the artificial intelligence-based algorithms and the training data sets comply with section 5303 (relating to responsible use) and provide evidence of the compliance. (5) Describe the process of the MA or CHIP managed care plan for overseeing and validating the performance and compliance of the artificial intelligence-based algorithms in accordance with section 5303.
Pending 2026-10-06
R-02.2
35 Pa.C.S. § 3507
Plain Language
The Department of Health may request additional information and evidence from facilities, beyond the annual compliance statement, regarding disclosure practices, responsible use compliance, and compliance statement contents, as necessary to ensure compliance. Facilities must be prepared to produce supplementary documentation on request.
The department may request additional information and evidence from a facility regarding the items provided under sections 3502 (relating to disclosure), 3503 (relating to responsible use) and 3504 (relating to artificial intelligence compliance statements) that are necessary to ensure compliance with this chapter.
Pending 2026-10-06
R-02.2
40 Pa.C.S. § 5208
Plain Language
The Insurance Department may request additional information and evidence from insurers beyond the annual compliance statement to ensure compliance. Insurers must maintain documentation in a form that can be produced on request.
The department may request additional information and evidence from an insurer regarding the items provided under sections 5202 (relating to disclosure), 5203 (relating to responsible use) and 5204 (relating to artificial intelligence compliance statements) that are necessary to ensure compliance with this chapter.
Pending 2026-10-06
R-02.2
40 Pa.C.S. § 5308
Plain Language
The Department of Human Services may request additional information and evidence from MA or CHIP managed care plans beyond the annual compliance statement to ensure compliance.
The department may request additional information and evidence from an MA or CHIP managed care plan regarding the items provided under section 5302 (relating to disclosure), 5303 (relating to responsible use) and 5304 (relating to artificial intelligence compliance statements) that are necessary to ensure compliance with this chapter.
Pending 2026-04-01
R-02.1R-02.3
12 Pa.C.S. § 7105(e)-(g)
Plain Language
Suppliers must file their written disclosure policy with the Bureau of Consumer Protection, along with the supplier's name and address, the chatbot's name, and an annual filing fee prescribed by the Bureau. The filing must follow the form and manner prescribed by the Bureau. Suppliers may voluntarily submit policy revisions and additional documentation. Critically, § 7105(g) requires suppliers to actually comply with the policy as filed — the filed policy becomes a binding compliance commitment, not merely a disclosure document. This effectively creates a registration requirement and converts the policy into an enforceable standard.
(e) Filing.--A supplier shall file the policy described under subsection (a) with the bureau, in the form and manner as prescribed by the bureau, along with: (1) The name and address of the supplier. (2) The name of the chatbot. (3) An annual filing fee as prescribed by the bureau. (f) Additional information.--A supplier may provide to the bureau, in the form and manner prescribed by the bureau: (1) Any revision to the policy described under subsection (a) and filed in accordance with subsection (e). (2) Any other documentation that the supplier deems appropriate to provide. (g) Compliance.--A supplier shall comply with the requirements of the policy filed in accordance with this section.
Pending 2027-01-09
R-02.1R-02.4
35 Pa.C.S. § 3504(a)-(b)
Plain Language
Facilities using AI for clinical decision making must annually file an AI compliance statement with the Department of Health. The statement must include: a summary of the AI algorithms' function and scope, a logic or decision tree, a description of each training data set and its source, an attestation of compliance with responsible use requirements with supporting evidence, and a description of the facility's oversight and validation process. The Department prescribes the form and manner of the filing.
(a) Compliance statement required.--A facility using artificial-intelligence-based algorithms for clinical decision making shall annually file with the department in the form and manner prescribed by the department an artificial intelligence compliance statement. (b) Contents.--A compliance statement must: (1) Summarize the function and scope of artificial-intelligence-based algorithms used for clinical decision making. (2) Provide a logic or decision tree of artificial-intelligence-based algorithms used for clinical decision making. (3) Provide a description of each training data set used by artificial-intelligence-based algorithms for clinical decision making, including the source of the data. (4) Attest that the artificial-intelligence-based algorithms and the training data sets comply with section 3503 (relating to responsible use) and provide evidence of the compliance. (5) Describe the process of the facility for overseeing and validating the performance and compliance of the artificial-intelligence-based algorithms in accordance with section 3503.
Pending 2027-01-09
R-02.1R-02.4
40 Pa.C.S. § 5204(a)-(b)
Plain Language
Insurers using AI in utilization review must annually file an AI compliance statement with the Insurance Department. Contents mirror the facility requirements: function/scope summary, logic/decision tree, training data descriptions with sources, compliance attestation with evidence, and oversight/validation process description.
(a) Compliance statement required.--An insurer using artificial-intelligence-based algorithms in the utilization review process shall annually file with the department in the form and manner prescribed by the department an artificial intelligence compliance statement. (b) Contents.--A compliance statement must: (1) Summarize the function and scope of the artificial-intelligence-based algorithms used for utilization review. (2) Provide a logic or decision tree of artificial-intelligence-based algorithms used for utilization review. (3) Provide a description of each training data set used by artificial-intelligence-based algorithms for utilization review, including the source of the data. (4) Attest that the artificial-intelligence-based algorithms and the training data sets comply with section 5203 (relating to responsible use) and provide evidence of the compliance. (5) Describe the process of the insurer for overseeing and validating the performance and compliance of the artificial-intelligence-based algorithms in accordance with section 5203.
Pending 2027-01-09
R-02.1R-02.4
40 Pa.C.S. § 5304(a)-(b)
Plain Language
MA or CHIP managed care plans using AI in utilization review must annually file an AI compliance statement with the Department of Human Services. Contents mirror the facility and insurer requirements.
(a) Compliance statement required.--An MA or CHIP managed care plan using artificial-intelligence-based algorithms in the utilization review process shall annually file with the department, in the form and manner prescribed by the department, an artificial intelligence compliance statement. (b) Contents.--A compliance statement must: (1) Summarize the function and scope of the artificial-intelligence-based algorithms used for utilization review. (2) Provide a logic or decision tree of artificial-intelligence-based algorithms used for utilization review. (3) Provide a description of each training data set used by artificial-intelligence-based algorithms for utilization review, including the source of the data. (4) Attest that the artificial-intelligence-based algorithms and the training data sets comply with section 5303 (relating to responsible use) and provide evidence of the compliance. (5) Describe the process of the MA or CHIP managed care plan for overseeing and validating the performance and compliance of the artificial-intelligence-based algorithms in accordance with section 5303.
Pending 2027-01-09
R-02.2
35 Pa.C.S. § 3507
Plain Language
The Department of Health may request additional information and evidence from facilities beyond the annual compliance statement regarding disclosure practices, responsible use compliance, and compliance statement contents. Facilities must be prepared to produce supporting documentation upon request.
The department may request additional information and evidence from a facility regarding the items provided under sections 3502 (relating to disclosure), 3503 (relating to responsible use) and 3504 (relating to artificial intelligence compliance statements) that are necessary to ensure compliance with this chapter.
Pending 2027-01-09
R-02.2
40 Pa.C.S. § 5208
Plain Language
The Insurance Department may request additional information and evidence from insurers beyond the annual compliance statement. Insurers must be prepared to produce supporting documentation upon request.
The department may request additional information and evidence from an insurer regarding the items provided under sections 5202 (relating to disclosure), 5203 (relating to responsible use) and 5204 (relating to artificial intelligence compliance statements) that are necessary to ensure compliance with this chapter.
Pending 2027-01-09
R-02.2
40 Pa.C.S. § 5308
Plain Language
The Department of Human Services may request additional information and evidence from MA or CHIP managed care plans beyond the annual compliance statement.
The department may request additional information and evidence from an MA or CHIP managed care plan regarding the items provided under section 5302 (relating to disclosure), 5303 (relating to responsible use) and 5304 (relating to artificial intelligence compliance statements) that are necessary to ensure compliance with this chapter.
Pending 2026-01-21
R-02.1
R.I. Gen. Laws § 27-84-3(a)(1)
Plain Language
Insurers must affirmatively disclose to OHIC and DBR how they use AI to manage claims and coverage. The disclosure is broad: it must cover the types of AI models used, AI's role in decision-making, training datasets, performance metrics, governance and risk management policies, and the specific decisions where AI made or substantially contributed to the outcome. This is a proactive disclosure obligation — insurers must provide this information without waiting for a regulatory request.
Insurers subject to this chapter shall disclose to the office of the health insurance commissioner ("OHIC") and the department of business regulation ("DBR") how they use artificial intelligence to manage healthcare claims and coverage including, but not limited to, the types of artificial intelligence models used, the role of artificial intelligence in the decision-making process, training datasets, performance metrics, governance and risk management policies, and the decisions on healthcare claims and coverage where artificial intelligence made, or was a substantial factor in making, the decisions.
Pending 2026-01-21
R-02.2
R.I. Gen. Laws § 27-84-3(a)(2)
Plain Language
Insurers must produce, upon request by OHIC or DBR, all information — including documents and software — needed for enforcement. This is an on-demand production obligation, not a scheduled submission. The scope is notably broad: it covers software itself, not just documentation about the software, meaning regulators may request access to the actual AI tools used in claims processing.
Insurers shall submit to the office of the health insurance commissioner and the department of business regulation, upon request, all information, including documents and software, that permits enforcement of this chapter.
Pending 2026-01-09
R-02.1
R.I. Gen. Laws § 27-84-3(a)(1)
Plain Language
Insurers must proactively disclose to OHIC and DBR how they use AI to manage healthcare claims and coverage. The disclosure must cover, at minimum: types of AI models used, AI's role in decision-making, training datasets, performance metrics, governance and risk management policies, and which claims and coverage decisions AI made or substantially influenced. This is a broad, affirmative disclosure obligation — not merely responsive to a regulator request — and the 'including, but not limited to' language means the enumerated categories are a floor, not a ceiling.
Insurers subject to this chapter shall disclose to the office of the health insurance commissioner ("OHIC") and the department of business regulation ("DBR") how they use artificial intelligence to manage healthcare claims and coverage including, but not limited to, the types of artificial intelligence models used, the role of artificial intelligence in the decision-making process, training datasets, performance metrics, governance and risk management policies, and the decisions on healthcare claims and coverage where artificial intelligence made, or was a substantial factor in making, the decisions.
Pending 2026-01-09
R-02.2
R.I. Gen. Laws § 27-84-3(a)(2)
Plain Language
Insurers must produce to OHIC and DBR, upon request, all information — including documents and software — necessary for enforcement of this chapter. This is a broad on-demand production obligation with no stated time limit for response. Notably, it encompasses software itself, not just documentation about software, which could require making AI tools available for regulator inspection or testing.
Insurers shall submit to the office of the health insurance commissioner and the department of business regulation, upon request, all information, including documents and software, that permits enforcement of this chapter.
Pending
R-02.2
S.C. Code § 37-31-20(G)
Plain Language
Upon request from the Attorney General, a developer must produce the deployer-facing documentation described in subsection (B) within 90 days. The documentation is exempt from South Carolina FOIA and may be designated as containing proprietary information or trade secrets. Attorney-client privilege and work-product protection are preserved. This is a responsive obligation — not a proactive filing requirement.
(G) The Attorney General may require that a developer disclose to the Attorney General, no later than ninety days after the request and in a form and manner prescribed by the Attorney General, the statement or documentation described in subsection (B). The Attorney General may evaluate such statement or documentation to ensure compliance with this chapter, and the statement or documentation is not subject to disclosure under the South Carolina Freedom of Information Act. In a disclosure made pursuant to this subsection, a developer may designate the statement or documentation as including proprietary information or a trade secret. To the extent that any information contained in the statement or documentation includes information subject to attorney-client privilege or work-product protection, the disclosure does not constitute a waiver of the privilege or protection.
Pending
R-02.2
S.C. Code § 37-31-30(I)
Plain Language
Upon request by the Attorney General, deployers must produce their risk management policy, impact assessments, and associated records within 90 days. These materials are exempt from South Carolina FOIA and may be designated as containing proprietary information or trade secrets. Attorney-client privilege and work-product protection are preserved. This requires maintaining documentation in a form that can be produced on demand.
(I) The Attorney General may require that a deployer, or a third party contracted by the deployer, disclose to him, no later than ninety days after the request and in a form and manner prescribed by him, the risk management policy implemented pursuant to subsection (B), the impact assessment completed pursuant to subsection (C), or the records maintained pursuant to subsection (C)(6). The Attorney General may evaluate the risk management policy, impact assessment, or records to ensure compliance with this chapter, and the risk management policy, impact assessment, and records are not subject to disclosure under the South Carolina Freedom of Information Act. In a disclosure made pursuant to this subsection, a deployer may designate the statement or documentation as including proprietary information or a trade secret. To the extent that any information contained in the risk management policy, impact assessment, or records includes information subject to attorney-client privilege or work-product protection, the disclosure does not constitute a waiver of the privilege or protection.
Pending 2026-07-01
R-02.1
Section 3
Plain Language
Health carriers using AI for utilization review — directly or through contracted entities — must compile and submit an annual report to the Executive Board of the Legislative Research Council by December 1 each year. The report must detail how AI tools were used in the utilization review process during the preceding fiscal year and describe the nature and degree of human review and oversight applied to affirm or negate determinations. This is a proactive, scheduled legislative reporting obligation — not triggered by an incident or regulatory request. Note the report goes to a legislative body, not a regulatory agency.
Any health carrier that makes determinations or provides advice about third-party payment for any health care services using an artificial intelligence, algorithm, or other software tool for the purpose of utilization review or that contracts with or otherwise works through an entity that uses an artificial intelligence, algorithm, or other software tool for the purpose of utilization review shall compile an annual report detailing how, during the preceding fiscal year, the artificial intelligence, algorithm, or other software tool was used in the utilization review process and the nature and degree of human review and oversight that was used to affirm or negate any determinations. The report must be forwarded to the Executive Board of the Legislative Research Council on or before December first of each year.
Pending 2026-07-01
R-02.2
Section 4
Plain Language
The Division of Insurance has unrestricted authority to inspect a health carrier's AI-based utilization review system at any time to verify compliance with the individualized-data and human-review requirements of this Act. Health carriers must therefore maintain their AI systems and related documentation in a state of inspection-readiness. If the Division finds noncompliance, it notifies the Attorney General, who may order the carrier to cease and desist from further noncompliant activities. There is no cure period or pre-inspection notice requirement specified.
The Division of Insurance may, at any time, inspect a health carrier's automated system to ensure that the health carrier's use of artificial intelligence, algorithms, or other software tools is in compliance with sections 1 and 2 of this Act. If the division determines that the automated system is not in compliance, the division shall notify the attorney general who may direct the health carrier to cease and desist from engaging in further noncompliant activities.
Pending 2026-07-01
R-02.1R-02.2
Va. Code § 38.2-3407.15(B)(15)(i)-(ii)
Plain Language
Carriers that use AI to manage insurance claims and coverage must proactively disclose to the Bureau of Insurance details about their AI use, including the underlying algorithms, the data used, and the resulting determinations. In addition, carriers must submit to the Bureau upon request all information — including documents and software — necessary for the Bureau to enforce this provision. The disclosure obligation in clause (i) is ongoing and does not depend on a regulatory request, while clause (ii) creates a separate on-demand production obligation that extends to the software itself.
Each carrier shall (i) publicly disclose, if applicable, to the Bureau the carrier's use of AI to manage insurance claims and coverage, including in underlying algorithms, data used, and resulting determinations; (ii) submit to the Bureau, upon request, all information, including documents and software, necessary for enforcement of this subdivision;
Pending 2025-07-01
R-02.1
9 V.S.A. § 4193f(a)-(b)
Plain Language
Every developer and deployer must file reports with the Attorney General before deployment and then annually or after each substantial change, whichever comes first. Each report must be accompanied by the most recent independent audit and a legal attestation either certifying compliance or disclosing known or potential violations with a remediation plan and summary. This creates a continuous disclosure obligation — the attestation requirement means developers and deployers must self-report potential violations when they file, not just when asked.
(a) Every developer and deployer of an automated decision system used in a consequential decision shall comply with the reporting requirements of this section. Regardless of final findings, reports shall be filed with the Attorney General prior to deployment of an automated decision system used in a consequential decision and then annually, or after each substantial change to the system, whichever comes first. (b) Together with each report required to be filed under this section, developers and deployers shall file with the Attorney General a copy of the last completed independent audit required by this subchapter and a legal attestation that the automated decision system used in a consequential decision: (1) does not violate any provision of this subchapter; or (2) may violate or does violate one or more provisions of this article, that there is a plan of remediation to bring the automated decision system into compliance with this subchapter, and a summary of the plan of remediation.
Pending 2025-07-01
R-02.1
9 V.S.A. § 4193f(c)
Plain Language
Developers must file a comprehensive report with the Attorney General covering nine categories of information: a system description (software stack, purpose, intended uses); intended outputs and secondary use potential; detailed training methodology including pre-processing, dataset descriptions, data quality and breadth, and legal compliance steps; use and data management policies; information enabling the deployer to understand and monitor the system's outputs; information enabling the deployer to meet its own reporting obligations; system capabilities, limitations, safeguards, and guardrail testing; an internal risk assessment covering algorithmic discrimination, validity, reliability, privacy, autonomy, safety, and security; and monitoring recommendations. This is one of the most granular developer-reporting requirements in any state AI bill — it requires disclosure of training data methodology and data gap analysis, not just system-level descriptions.
(c) Developers of automated decision systems shall file with the Attorney General a report containing the following: (1) a description of the system including: (A) a description of the system's software stack; (B) the purpose of the system and its expected benefits; and (C) the system's current and intended uses, including what consequential decisions it will support and what stakeholders will be impacted; (2) the intended outputs of the system and whether the outputs can be or are otherwise appropriate to be used for any purpose not previously articulated; (3) the methods for training of their models including: (A) any pre-processing steps taken to prepare datasets for the training of a model underlying an automated decision system; (B) descriptions of the datasets upon which models were trained and evaluated, how and why datasets were collected and the sources of those datasets, and how that training data will be used and maintained; (C) the quality and appropriateness of the data used in the automated decision system's design, development, testing, and operation; (D) whether the data contains sufficient breadth to address the range of real-world inputs the automated decision system might encounter and how any data gaps have been addressed; and (E) steps taken to ensure compliance with privacy, data privacy, data security, and copyright laws; (4) use and data management policies; (5) any other information necessary to allow the deployer to understand the outputs and monitor the system for compliance with this subchapter; (6) any other information necessary to allow the deployer to comply with the requirements of subsection (d) of this section; (7) a description of the system's capabilities and any developer-imposed limitations, including capabilities outside of its intended use, when the system should not be used, any safeguards or guardrails in place to protect against unintended, inappropriate, or disallowed uses, and testing of any safeguards or guardrails; 
(8) an internal risk assessment including documentation and results of testing conducted to identify all reasonably foreseeable risks related to algorithmic discrimination, validity and reliability, privacy and autonomy, and safety and security, as well as actions taken to address those risks, and subsequent testing to assess the efficacy of actions taken to address risks; and (9) whether the system should be monitored and, if so, how the system should be monitored.
Pending 2025-07-01
R-02.1
9 V.S.A. § 4193f(d)
Plain Language
Deployers must file their own report with the Attorney General covering eight categories: system description (software stack, purpose, intended uses); intended outputs and secondary use potential; monetization plans; whether the system makes or supports consequential decisions; capabilities, limitations, safeguards and guardrail testing; a cost-benefit assessment for consumers; an internal risk assessment covering algorithmic discrimination, accuracy, reliability, privacy, autonomy, safety, and security; and monitoring recommendations. The deployer report differs from the developer report in requiring revenue disclosure and consumer cost-benefit analysis while omitting training data methodology details.
(d) Deployers of automated decision systems used in consequential decisions shall file with the Attorney General a report containing the following: (1) a description of the system, including: (A) a description of the system's software stack; (B) the purpose of the system and its expected benefits; and (C) the system's current and intended uses, including what consequential decisions it will support and what stakeholders will be impacted; (2) the intended outputs of the system and whether the outputs can be or are otherwise appropriate to be used for any purpose not previously articulated; (3) whether the deployer collects revenue or plans to collect revenue from use of the automated decision system in a consequential decision and, if so, how it monetizes or plans to monetize use of the system; (4) whether the system is designed to make consequential decisions itself or whether and how it supports consequential decisions; (5) a description of the system's capabilities and any deployer-imposed limitations, including capabilities outside of its intended use, when the system should not be used, any safeguards or guardrails in place to protect against unintended, inappropriate, or disallowed uses, and testing of any safeguards or guardrails; (6) an assessment of the relative benefits and costs to the consumer given the system's purpose, capabilities, and probable use cases; (7) an internal risk assessment including documentation and results of testing conducted to identify all reasonably foreseeable risks related to algorithmic discrimination, accuracy and reliability, privacy and autonomy, and safety and security, as well as actions taken to address those risks, and subsequent testing to assess the efficacy of actions taken to address risks; and (8) whether the system should be monitored and, if so, how the system should be monitored.
Pending 2025-07-01
R-02.1
9 V.S.A. § 4193f(f)
Plain Language
Systems already deployed for consequential decisions as of July 1, 2025 receive a transitional period: developers and deployers have until January 1, 2027 (18 months after July 1, 2025) to complete and file all required reports and complete the independent audit. New systems deployed after July 1, 2025 must comply with the pre-deployment reporting and audit requirements before deployment. This is a grandfathering provision that gives existing deployments time to come into compliance.
(f) For automated decision systems already in deployment for use in consequential decisions on or before July 1, 2025, developers and deployers shall not later than 18 months after July 1, 2025 complete and file the reports and complete the independent audit required by this subchapter.
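The transitional deadline above is a simple date computation ("not later than 18 months after July 1, 2025"). As a quick sanity check, a minimal sketch in Python — the `add_months` helper is illustrative only, not part of any statute:

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Shift a date forward by a whole number of months, keeping the day-of-month."""
    total = d.month - 1 + months
    return date(d.year + total // 12, total % 12 + 1, d.day)

effective = date(2025, 7, 1)           # subchapter effective date
deadline = add_months(effective, 18)   # "not later than 18 months after July 1, 2025"
print(deadline)                        # 2027-01-01
```

This confirms the January 1, 2027 compliance date stated in the summary.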
Pending 2025-07-01
R-02.2
9 V.S.A. § 4193g(c)
Plain Language
The Attorney General may at any time require a developer or deployer to disclose its risk management policy and program in a form and manner the AG prescribes, and may evaluate the program for compliance. This is a demand-driven regulatory disclosure — developers and deployers should maintain their risk management documentation in a form that can be produced upon AG request.
(c) The Attorney General may require a developer or a deployer to disclose the risk management policy and program implemented pursuant to subsection (a) of this section in a form and manner prescribed by the Attorney General. The Attorney General may evaluate the risk management policy and program to ensure compliance with this section.
Pre-filed 2025-07-01
R-02.1
9 V.S.A. § 4193e(a)-(b)
Plain Language
Every deployer of an inherently dangerous AI system must submit an AI System Safety and Impact Assessment to the Division of Artificial Intelligence before deploying the system in Vermont, and must resubmit every two years. An updated assessment is also required upon any material and substantial change to the system's purpose or the type of data it processes or uses for training. The assessment must cover 13 enumerated elements including the system's purpose, deployment context, training data description, whether personal information and copyrighted content have been removed from training data, transparency measures, third-party dependencies, post-deployment monitoring, and the system's impact on consequential decisions or biometric data collection. If the Division learns that a deployer is not in compliance, it notifies the deployer in writing and grants a 45-day cure period; failure to submit triggers referral to the Attorney General.
(a) Each deployer of an inherently dangerous artificial intelligence system shall: (1) submit to the Division of Artificial Intelligence an Artificial Intelligence System Safety and Impact Assessment prior to deploying the inherently dangerous artificial intelligence system in this State, and every two years thereafter; and (2) submit to the Division of Artificial Intelligence an updated Artificial Intelligence System Safety and Impact Assessment if the deployer makes a material and substantial change to the inherently dangerous artificial intelligence system that includes: (A) the purpose for which the system is used for; or (B) the type of data the system processes or uses for training purposes. (b) Each Artificial Intelligence System Safety and Impact Assessment pursuant to subsection (a) of this section shall include, with respect to the inherently dangerous artificial intelligence system: (1) the purpose of the system; (2) the deployment context and intended use cases; (3) the benefits of use; (4) any foreseeable risk of unintended or unauthorized uses and the steps taken, to the extent reasonable, to mitigate the risk; (5) whether the model is proprietary; (6) a description of the data the system processes or uses for training purposes; (7) whether the data the system uses for training purposes has been processed to remove personal information, copyrighted information, and do not train data; (8) a description of transparency measures, including identifying to individuals when the system is in use; (9) identification of any third-party artificial intelligence systems or datasets the deployer relies on to train or operate the system, if applicable; (10) whether the developer of the system, if different than the deployer, disclosed the information pursuant to this subsection as well as the results of testing, vulnerabilities, and the parameters for safe and intended use; (11) a description of the data that the system, once deployed, processes as inputs; (12) a 
description of postdeployment monitoring and user safeguards, including a description of the oversight process in place to address issues as issues arise; and (13) a description of how the model impacts consequential decisions or the collection of biometric data.
Pre-filed 2025-07-01
R-02.2
9 V.S.A. § 4193c(c)(1)-(4)
Plain Language
The Attorney General may issue a civil investigative demand whenever there is reasonable cause to believe a violation of the subchapter has occurred. Developers and deployers must respond but may redact trade secrets or information protected by state or federal law, provided they affirmatively state the basis for redaction. Attorney-client privilege and work-product protection are preserved and not waived by disclosure. All information provided to the Attorney General under this subsection is exempt from public inspection under the Public Records Act. This creates an obligation to maintain documentation in a form producible upon demand, with defined confidentiality protections.
(c)(1) Whenever the Attorney General has reasonable cause to believe that any person has engaged in or is engaging in any violation of this subchapter, the Attorney General may issue a civil investigative demand. (2) In rendering and furnishing any information requested pursuant to a civil investigative demand, a developer or deployer may redact or omit any trade secrets or information protected from disclosure by State or federal law. If a developer or deployer refuses to disclose or redacts or omits information based on the exemption from disclosure of trade secrets, the developer or deployer shall affirmatively state to the Attorney General that the basis for nondisclosure, redaction, or omission is because the information is a trade secret. (3) To the extent that any information requested pursuant to a civil investigative demand is subject to attorney-client privilege or work-product protection, disclosure of the information shall not constitute a waiver of the privilege or protection. (4) Any information, statement, or documentation provided to the Attorney General pursuant to this subsection shall be exempt from public inspection and copying under the Public Records Act.
Passed 2026-07-01
R-02.3
18 V.S.A. § 9764(c)
Plain Language
To obtain the affirmative defense, suppliers must file with the Office of the Attorney General their name and address, the chatbot's name, the written compliance policy, and a $100 filing fee. Suppliers may also voluntarily submit policy revisions and additional documentation. This is a registration-like requirement — the filing is a prerequisite to claiming the affirmative defense, and the AG's office prescribes the form and manner of filing.
(c) To file a policy with the Office of the Attorney General under this section, a supplier of a mental health chatbot: (1) shall provide to the Office, in the form and manner prescribed by the Office: (A) the name and address of the supplier; (B) the name of the mental health chatbot supplied by the supplier; (C) the written policy described in subsection (b) of this section; and (D) a $100.00 filing fee; and (2) may provide to the Office: (A) any revisions to a policy filed under this section; and (B) any other documentation that the supplier elects to provide.
Pending 2026-07-01
R-02.2
Sec. 6(6)
Plain Language
Deployers must retain the most recent impact assessment, all supporting records, and all prior impact assessments for at least three years after final deployment of the high-risk AI system. While the statute does not explicitly require production to regulators, the AG's enforcement authority under Section 10 and the CPA's investigative powers implicitly require maintaining records in a form suitable for production. The three-year retention floor runs from final deployment — not from the date the assessment was completed.
A deployer shall maintain the most recently completed impact assessment for a high-risk artificial intelligence system as required under this section, relevant records supporting the impact assessment, and prior impact assessments, if any, for a period of at least three years following the final deployment of the high-risk artificial intelligence system.
Pending 2026-07-01
R-02.4
§ 16-5EE-9(a)-(b)
Plain Language
By December 31 of each year, every covered medical facility, research facility, company, or nonprofit must certify to the Attorney General that it is in compliance with all provisions of the Genomic Privacy Act. The certification must be submitted by an attorney representing the organization — this creates a professional-responsibility overlay, as the attorney is making a representation to the AG on behalf of the entity. Note that subsection (b) references '§16-5EE-8(a)' but appears to be a drafting error — in context, it should reference §16-5EE-9(a).
(a) Not later than December 31 of each year, a medical facility, research facility, company, or nonprofit organization subject to this §16-5EE-1 et seq. shall certify to the attorney general that the facility, company, or organization is in compliance with this chapter. (b) An attorney representing a medical facility, research facility, company, or nonprofit organization subject to this chapter shall submit the certification required under Subsection §16-5EE-8(a).