R-02: Reporting & Regulatory Submissions
Developers or deployers of certain AI systems must submit documentation — including system descriptions, risk assessments, and safety evaluation results — to regulatory authorities either proactively on a defined schedule or in response to regulatory requests. Proactive submission requirements cannot be satisfied by waiting to be asked.
Applies to: Developer, Deployer, Government Sector, Foundation Model, Government System
Bills — Enacted: 4 unique bills
Bills — Proposed: 51
Last Updated: 2026-03-29
Sub-Obligations: 4
Bills That Map This Requirement: 55

Each entry below lists the bill's status, mapped sub-obligation(s), statutory section, a plain-language summary, and the statutory text.
Passed 2026-10-01
R-02.4
Section 1(b)(2)
Plain Language
Insurers must annually certify to the Alabama Department of Insurance that their AI systems used in prior authorization comply with three requirements: (1) determinations are not based solely on group-level datasets; (2) the AI is configured fairly so that enrollees with similar clinical profiles receive consistent outcomes; and (3) the AI does not discriminate directly or indirectly in violation of state or federal law, including HHS guidance. This is a proactive annual certification obligation — insurers must affirmatively represent compliance, not merely respond to regulator inquiries.
(2) An insurer shall certify annually to the department that the artificial intelligence used to make determinations on requests for prior authorization complies with all of the following: a. Does not rely solely on a group dataset to make determinations. b. Is configured and applied in a fair manner for each subscriber group and enrollee such that resulting determinations are consistent for enrollees who present with similar clinical considerations. c. Does not discriminate directly or indirectly against any subscriber group or enrollee in violation of state or federal law, including any regulation or guidance issued by the federal Department of Health and Human Services.
Pending 2027-07-01
R-02.1
Bus. & Prof. Code § 22615(a)-(b)
Plain Language
This section imposes obligations on the Attorney General rather than on operators, establishing the regulatory infrastructure for this chapter. By January 1, 2028, the AG must: adopt regulations setting auditor standards, eligibility, compliance assessment procedures, and audit report requirements; establish a public consumer complaint mechanism for companion chatbots; and create a process for qualified researchers to access anonymized audit data. Beginning January 1, 2028, the AG must also issue annual public reports summarizing audit findings, industry compliance trends, emerging risks, best practices, and recommendations. While this section primarily imposes duties on the AG, operators should monitor the rulemaking process because the AG's regulations will define the specific audit requirements operators must satisfy.
(a) On or before January 1, 2028, the Attorney General shall do all of the following: (1) Adopt regulations that include, at a minimum, all of the following: (A) Professional and ethical standards for auditors that ensure independence. (B) Eligibility requirements for auditors. (C) Procedures for auditors to assess compliance with this chapter. (D) Requirements for AI child safety audit reports. (2) Establish a public incident reporting mechanism for consumers to submit complaints relating to companion chatbots to the Attorney General. (3) Establish a process for qualified researchers to access anonymized and aggregated audit data for academic study of child safety in companion chatbots. (b) Beginning January 1, 2028, the Attorney General shall issue an annual public report that includes the following: (1) A high-level summary of each child safety audit report. (2) The total number of child safety audits conducted. (3) Common findings and trends across the companion chatbot industry. (4) Emerging child safety risks identified through audit reviews. (5) Best practices and effective mitigation strategies observed. (6) Aggregated data on compliance rates and common deficiencies. (7) Recommendations for operators, parents, and policymakers.
Pending 2026-01-01
R-02.2
Bus. & Prof. Code § 22756.6(a)(1)-(2)
Plain Language
Developers must produce a copy of their impact assessment to the Attorney General or the Civil Rights Department within 30 days of a request. The impact assessment is treated as confidential regardless of other California disclosure laws. Note the 30-day response window is shorter than the 90-day default in many other jurisdictions. This applies only to developers — the statute does not impose a parallel production obligation on deployers to these regulators.
(a) (1) A developer shall provide to the Attorney General or Civil Rights Department, within 30 days of a request from the Attorney General or the Civil Rights Department, a copy of an impact assessment performed pursuant to this chapter. (2) Notwithstanding any other law, an impact assessment provided to the Attorney General or Civil Rights Department pursuant to this subdivision shall be kept confidential.
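The production windows noted across these entries vary widely: 30 days for California impact assessments, 90 days under the Colorado and Massachusetts provisions, and 7 days in the Georgia proposal. As a rough illustration (the helper name and the naive calendar-day arithmetic are ours; each statute may count days differently or run the clock from a different trigger), the comparison can be sketched as:

```python
from datetime import date, timedelta

# Production windows noted in this tracker, in calendar days.
RESPONSE_WINDOWS_DAYS = {
    "CA impact assessment (AG/CRD request)": 30,    # Bus. & Prof. Code § 22756.6
    "CO developer documentation (AG request)": 90,  # C.R.S. § 6-1-1702(7)
    "GA developer documentation (AG request)": 7,   # proposed O.C.G.A. § 10-16-2(g)
}

def production_deadline(request_date: date, window_days: int) -> date:
    """Last day to produce, counting naive calendar days from the request."""
    return request_date + timedelta(days=window_days)

# Example: a regulator request received on March 2, 2026.
for label, days in RESPONSE_WINDOWS_DAYS.items():
    print(label, "->", production_deadline(date(2026, 3, 2), days))
```

A compliance calendar built this way should still be checked against each statute's own counting rules (business days versus calendar days, date of receipt versus date of request) before anyone relies on it.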
Enacted 2026-01-01
R-02.1
Bus. & Prof. Code § 22757.12(d)
Plain Language
Large frontier developers must transmit to the Office of Emergency Services, every three months or on another reasonable schedule that the developer specifies and communicates in writing, summaries of any catastrophic risk assessments arising from internal use of their frontier models.
A large frontier developer shall transmit to the Office of Emergency Services a summary of any assessment of catastrophic risk resulting from internal use of its frontier models every three months or pursuant to another reasonable schedule specified by the large frontier developer and communicated in writing to the Office of Emergency Services with written updates, as appropriate.
Pending 2027-01-01
R-02.1
C.R.S. § 10-16-112.7(4)(a)-(d)
Plain Language
Covered entities must submit written disclosures to their applicable state regulator — the Division of Insurance, Department of Human Services, or Department of Health Care Policy and Financing — identifying: which utilization review functions use AI, at what points in the process AI is deployed, the human oversight process including reviewer qualifications and whether a human must approve adverse determinations, and the process for maintaining audit records sufficient to demonstrate compliance. This is a proactive regulatory submission — entities must provide these disclosures without waiting for a request.
(4) A PERSON DESCRIBED IN SUBSECTION (2) OF THIS SECTION SHALL PROVIDE WRITTEN DISCLOSURES TO THE DIVISION, THE DEPARTMENT OF HUMAN SERVICES, OR THE DEPARTMENT OF HEALTH CARE POLICY AND FINANCING, AS APPLICABLE, THAT IDENTIFY: (a) THE UTILIZATION REVIEW FUNCTIONS FOR WHICH THE ARTIFICIAL INTELLIGENCE SYSTEM WILL BE USED; (b) THE POINTS IN THE UTILIZATION REVIEW PROCESS WHEN THE ARTIFICIAL INTELLIGENCE SYSTEM IS USED; (c) THE HUMAN OVERSIGHT PROCESS, INCLUDING THE QUALIFICATIONS OF THE REVIEWER AND WHETHER A HUMAN MUST APPROVE AN ADVERSE DETERMINATION; AND (d) THE PROCESS FOR MAINTAINING AUDIT INFORMATION SUFFICIENT TO DEMONSTRATE COMPLIANCE WITH SUBSECTION (3) OF THIS SECTION.
Enacted 2023-07-01
R-02.2
C.R.S. § 6-1-1309(4)
Plain Language
Controllers must produce their data protection assessments to the Attorney General upon request. The AG may evaluate them for compliance with controller duties and other applicable laws. Assessments are confidential and exempt from open records requests. Disclosure to the AG does not waive attorney-client privilege or work-product protection. This is a regulatory submission obligation separate from the duty to conduct the assessment itself.
(4) A controller shall make the data protection assessment available to the attorney general upon request. The attorney general may evaluate the data protection assessment for compliance with the duties contained in section 6-1-1308 and with other laws, including this article 1. Data protection assessments are confidential and exempt from public inspection and copying under the "Colorado Open Records Act", part 2 of article 72 of title 24. The disclosure of a data protection assessment pursuant to a request from the attorney general under this subsection (4) does not constitute a waiver of any attorney-client privilege or work-product protection that might otherwise exist with respect to the assessment and any information contained in the assessment.
Enacted 2026-06-30
R-02.1
C.R.S. § 6-1-1702(5)
Plain Language
Developers must proactively disclose to the attorney general (in a prescribed form) and to all known deployers any known or reasonably foreseeable risks of algorithmic discrimination, within 90 days of discovering such risks. This is not a wait-to-be-asked obligation — it triggers on knowledge or reasonable foreseeability of discrimination risks. The 90-day clock runs from the triggering date specified in the original SB 205 provisions.
(5) On and after June 30, 2026, a developer of a high-risk artificial intelligence system shall disclose to the attorney general, in a form and manner prescribed by the attorney general, and to all known deployers or other developers of the high-risk artificial intelligence system, any known or reasonably foreseeable risks of algorithmic discrimination arising from the intended uses of the high-risk artificial intelligence system without unreasonable delay but no later than ninety days after the date on which:
Enacted 2026-06-30
R-02.2
C.R.S. § 6-1-1702(7)
Plain Language
The attorney general may require developers to disclose documentation described in subsection (2) — including model cards, dataset cards, and related materials — within 90 days of the AG's request. The AG may evaluate these materials for compliance. Importantly, these disclosures are exempt from CORA (Colorado Open Records Act) and developers may designate materials as proprietary or trade secret. Attorney-client privilege and work-product protections are preserved. This on-demand regulatory disclosure power is separate from the proactive disclosure obligations in subsection (5).
(7) On and after June 30, 2026, the attorney general may require that a developer disclose to the attorney general, no later than ninety days after the request and in a form and manner prescribed by the attorney general, the statement or documentation described in subsection (2) of this section. The attorney general may evaluate such statement or documentation to ensure compliance with this part 17, and the statement or documentation is not subject to disclosure under the "Colorado Open Records Act", part 2 of article 72 of title 24. In a disclosure made pursuant to this subsection (7), a developer may designate the statement or documentation as including proprietary information or a trade secret. To the extent that any information contained in the statement or documentation includes information subject to attorney-client privilege or work-product protection, the disclosure does not constitute a waiver of the privilege or protection.
Enacted 2026-06-30
R-02.2
C.R.S. § 6-1-1703(9)
Plain Language
The attorney general may require deployers (or contracted third parties) to produce their risk management policy, impact assessments, or maintained records within 90 days of the AG's request. The AG may evaluate these materials for compliance with the statute. Materials are exempt from CORA, and deployers may designate them as proprietary or trade secrets. Attorney-client privilege and work-product protections are preserved. This mirrors the developer on-demand disclosure obligation in § 6-1-1702(7) but applies to deployer-side documentation.
(9) On and after June 30, 2026, the attorney general may require that a deployer, or a third party contracted by the deployer, disclose to the attorney general, no later than ninety days after the request and in a form and manner prescribed by the attorney general, the risk management policy implemented pursuant to subsection (2) of this section, the impact assessment completed pursuant to subsection (3) of this section, or the records maintained pursuant to subsection (3)(f) of this section. The attorney general may evaluate such risk management policy, impact assessment, or records to ensure compliance with this part 17, and the risk management policy, impact assessment, and records are not subject to disclosure under the "Colorado Open Records Act", part 2 of article 72 of title 24. In a disclosure made pursuant to this subsection (9), a deployer may designate the statement or documentation as including proprietary information or a trade secret. To the extent that any information contained in the risk management policy, impact assessment, or records includes information subject to attorney-client privilege or work-product protection, the disclosure does not constitute a waiver of the privilege or protection.
Pending 2026-10-01
R-02.1
Sec. 8(c)
Plain Language
Within 30 days of completing each bias audit, deployers must both (1) file the full bias audit report and a plain-language summary with the Labor Commissioner and (2) publish the plain-language summary on their website in a conspicuous, accessible location. The summary must cover methodology, key findings and identified risks, and corrective actions taken. This creates both a regulatory filing and a public disclosure obligation tied to each annual bias audit cycle.
(c) Not later than thirty days after completing a bias audit pursuant to subsection (a) of this section, the deployer shall (1) in a form and manner prescribed by the Labor Commissioner, file a bias audit report and a plain-language summary of such report with the commissioner, and (2) publish a plain-language summary of such audit report on the deployer's Internet web site in a conspicuous place accessible to applicants for employment and employees. Such summary shall include (A) the methodology used in such bias audit, (B) the key findings and identified risks found by such bias audit, and (C) any corrective actions taken by the deployer.
Pending 2025-07-01
R-02.1
O.C.G.A. § 10-16-2(b)
Plain Language
Developers must submit comprehensive documentation about each automated decision system to the Attorney General, in a form the AG prescribes. The required documentation covers foreseeable uses and misuses, system purpose and benefits, training data summaries, known discrimination risks and mitigation measures, pre-deployment evaluation methodology, data governance measures, usage and monitoring instructions, and any other information deployers need for compliance. Trade secret redactions are permitted under § 10-16-2(f) but not where the information is necessary for deployer compliance.
Except as provided in subsection (f) of this Code section, a developer of an automated decision system shall provide certain information regarding such automated decision system to the Attorney General, in a form and manner prescribed by the Attorney General. Such information shall include, at a minimum: (1) A general statement describing the reasonably foreseeable uses and known harmful or inappropriate uses of the automated decision system; (2) Documentation disclosing: (A) The purpose of the automated decision system; (B) The intended benefits and uses of the automated decision system; (C) High-level summaries of the types of data used to train the automated decision system; (D) Known or reasonably foreseeable limitations of the automated decision system, including known or reasonably foreseeable risks of algorithmic discrimination arising from the intended uses of the automated decision system; (E) The measures the developer has taken to mitigate known or reasonably foreseeable risks of algorithmic discrimination; (F) How the automated decision system was evaluated for performance and mitigation of algorithmic discrimination before the automated decision system was offered, sold, leased, licensed, given, or otherwise made available to the deployer; (G) The data governance measures used to cover the training data sets and the measures used to examine the suitability of data sources, possible biases, and appropriate mitigation; (H) How the automated decision system should be used, not be used, and be monitored by an individual when the automated decision system is used to make, or assist in making, a consequential decision; and (I) All other information necessary to allow the deployer to comply with the requirements of Code Section 10-16-3; and (3) Any additional documentation that is reasonably necessary to assist the deployer in understanding the outputs and monitoring the performance of the automated decision system for risks of algorithmic discrimination.
Pending 2025-07-01
R-02.2
O.C.G.A. § 10-16-2(g)
Plain Language
The Attorney General may demand any documentation or records required under the developer obligations section, and the developer must produce them within seven days. Records submitted to the AG are exempt from Georgia's open records law. Developers may designate materials as trade secrets or proprietary, and disclosure does not waive attorney-client privilege or work-product protection. The seven-day production window is significantly shorter than the 90-day norm in other jurisdictions.
The Attorney General may require that a developer disclose to the Attorney General, within seven days and in a form and manner prescribed by the Attorney General, any documentation or records required by this Code section, including, but not limited to, the statement or documentation described in subsection (b) of this Code section. The Attorney General may evaluate such statement or documentation to ensure compliance with this chapter, and, notwithstanding the provisions of Article 4 of Chapter 18 of Title 50, relating to open records, such records shall not be open to inspection by or made available to the public. In a disclosure pursuant to this subsection, a developer may designate the statement or documentation as including proprietary information or a trade secret. To the extent that any information contained in the statement or documentation includes information subject to attorney-client privilege or work-product protection, the disclosure does not constitute a waiver of the privilege or protection.
Pending 2025-07-01
R-02.2
O.C.G.A. § 10-16-9
Plain Language
The Attorney General may demand any chapter-required documentation from a deployer (or its contracted third party) with a seven-day production deadline. Materials submitted are exempt from Georgia's open records law. Deployers may designate trade secrets and proprietary information, and privilege and work-product protections are preserved. This mirrors the developer on-demand disclosure in § 10-16-2(g).
The Attorney General may require that a deployer, or a third party contracted by the deployer, disclose to the Attorney General, no later than seven days after the request and in a form and manner prescribed by the Attorney General, any documentation or records required by this chapter. The Attorney General may evaluate the risk management policy, impact assessment, or records to ensure compliance with this chapter, and the risk management policy, impact assessment, and such records, notwithstanding the provisions of Article 4 of Chapter 18 of Title 50, relating to open records, shall not be open to inspection by or made available to the public. In a disclosure pursuant to this Code section, a deployer may designate the statement or documentation as including proprietary information or a trade secret. To the extent that any information contained in the risk management policy, impact assessment, or records is subject to attorney-client privilege or work-product protection, the disclosure does not constitute a waiver of the privilege or protection.
Pending 2025-01-01
R-02.2
Section 10(a)
Plain Language
The Department of Insurance has broad authority to request any information or documentation related to a health insurance issuer's use of AI systems during an investigation or market conduct action, and the issuer must comply. The scope of permissible requests is expansive: it covers AI governance, risk management, use protocols, third-party AI vendor diligence and monitoring, and the issuer's AI systems program implementation. This is a produce-on-demand obligation — the statute does not require proactive submission, but issuers must maintain documentation in a form that can be produced when requested. Both insurers and other persons subject to Section 132(b) of the Illinois Insurance Code are obligated to comply.
(a) The Department's regulatory oversight of health insurance coverage includes oversight of the use of AI systems or predictive models to make or support adverse consumer outcomes. The Department's authority in an investigation or market conduct action includes review regarding the development, implementation, and use of AI systems or predictive models and the outcomes from the use of those AI systems or predictive models. The Department may also request other information or documentation relevant to an investigation or market conduct action, and a health insurance issuer or any other person described in subsection (b) of Section 132 of the Illinois Insurance Code must comply with that request. The Department's inquiries may include, but are not limited to, questions regarding any specific model, AI system, or application of a model or AI system. The Department may also make requests for information and documentation relating to AI systems governance, risk management, and use protocols; information and documentation relating to the health insurance issuer's preacquisition and preutilization diligence, monitoring, and auditing of data or AI systems developed or used by a third party; and information and documentation relating to implementation and compliance with the health insurance issuer's AI systems program.
Pending 2026-01-01
R-02.1
Section 20(a)-(c)
Plain Language
State agencies must submit each impact assessment to the Governor and General Assembly at least 30 days before deploying the assessed system. Other public bodies must submit to their director or primary administrator on the same timeline. Two redaction exceptions apply: (1) if disclosure would substantially harm public health or safety, infringe privacy rights, or impair IT security, the information may be redacted; and (2) if the assessment covers security, fraud, or identity-theft technology, related information may be redacted. In both cases, the redacted assessment must be accompanied by a published explanatory statement describing the redaction rationale. This is a proactive submission requirement — the submitting body cannot wait to be asked.
(a) Each impact assessment conducted by a State agency under this Act shall be submitted to the Governor and the General Assembly at least 30 days prior to implementation of the automated decision-making system that is the subject of the assessment. Each impact assessment conducted by any other public body under this Act shall be submitted to the director of the public body or the executive officers or primary administrator of the relevant governing body at least 30 days prior to implementation of the automated decision-making system that is the subject of the assessment. (b) If the employer determines that disclosure of any information in the impact assessment would result in a substantial negative impact on public health or safety, infringe upon privacy rights, or significantly impair the employer's ability to protect its information technology or operational assets, the information may be redacted, if an explanatory statement describing the determination process for redaction is published along with the redacted assessment. (c) If the impact assessment covers technology used to prevent, detect, protect against, or respond to security incidents, identity theft, fraud, harassment, or other illegal activity, the employer may redact related information, if an explanatory statement describing the determination process for redaction is published along with the redacted assessment.
Pending 2025-06-01
R-02.2
Section 10(a)
Plain Language
Insurers must respond to Department of Insurance requests for information and documentation about their AI systems at any time during an investigation or market conduct action. The Department's authority is broad: it can ask about any specific AI model or system, AI governance and risk management protocols, due diligence and auditing of third-party AI vendors, and compliance with the insurer's own AI systems program. Insurers must comply with such requests. This creates an ongoing obligation to maintain documentation in a producible form, even though the statute does not specify a fixed production timeline.
The Department's regulatory oversight of insurers includes oversight of an insurer's use of AI systems to make or support adverse determinations that affect consumers. Any insurer authorized to operate in the State is subject to review by the Department in an investigation or market conduct action regarding the development, implementation, and use of AI systems or predictive models and the outcomes from the use of those AI systems or predictive models. The Department may also request other information or documentation relevant to an investigation or market conduct action, and an insurer must comply with that request. The Department's inquiries may include, but are not limited to, questions regarding any specific model, AI system, or application of a model or AI system. The Department may also make requests for information and documentation relating to AI systems governance, risk management, and use protocols; information and documentation relating to the insurer's preacquisition and preutilization diligence, monitoring, and auditing of data or AI systems developed by a third party; and information and documentation relating to implementation and compliance with the insurer's AI systems program.
Pending 2026-07-01
R-02.2
IC 22-5-10.4-15
Plain Language
The Department of Labor has authority to receive complaints, investigate potential violations, and require employers to file annual or special reports on their ADS use in employment decisions. When the Department requires a report, the employer must comply within the Department's specified timeframe and format. Separately, employers have a standing obligation to maintain and preserve all records pertaining to chapter compliance and make them available to the Department — this is a continuous recordkeeping duty, not triggered by a specific request.
Sec. 15. (a) The department may do the following: (1) Receive complaints regarding alleged violations of this chapter. (2) Investigate any facts, conditions, practices, or matters as the department deems necessary or appropriate to determine whether an employer has violated this chapter. (3) Require an employer to file with the department, on a form prescribed by the department, annual or special reports or answers in writing to specific questions relating to the use of an automated decision system for employment related decisions. (b) If the department requires an employer to file a report or answers under subsection (a)(3), the employer shall file the report or answers in the manner and time period required by the department. (c) An employer shall maintain, keep, preserve, and make available to the department records pertaining to compliance with this chapter.
Enacted 2025-07-01
R-02.1
IC 27-1-37.5-19(c)-(d)
Plain Language
Utilization review entities must publicly post detailed statistics on prior authorization approvals and denials on their website, broken down by provider specialty, medication or procedure, indication, reason for denial, appeal status, appeal outcomes, and response times. Additionally, they must compile an annual report of these statistics and submit it to the Indiana Department of Insurance by December 31 each year. This creates both a public transparency obligation and a regulatory reporting obligation covering operational performance of the prior authorization process.
(c) A utilization review entity shall make statistics available regarding prior authorization approvals and denials on the utilization review entity's website in a readily accessible format, including statistics for the following categories: (1) Health care provider specialty. (2) Medication or diagnostic test or procedure. (3) Indication offered. (4) Reason for denial. (5) If a decision was appealed. (6) If a decision was approved or denied on appeal. (7) The time between submission and the response. (d) Not later than December 31 of each year, a utilization review entity shall: (1) prepare a report of the statistics compiled under subsection (c); and (2) submit the report to the department.
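The seven statistics categories in subsection (c) amount to a per-request record that both the website posting and the December 31 annual report aggregate. A minimal sketch (the class and field names are ours, not the statute's) of how one such record might be structured:

```python
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass
class PriorAuthStat:
    """One prior-authorization outcome, covering the categories in IC 27-1-37.5-19(c)."""
    provider_specialty: str             # (1) health care provider specialty
    medication_or_procedure: str        # (2) medication or diagnostic test/procedure
    indication: str                     # (3) indication offered
    denial_reason: Optional[str]        # (4) None when the request was approved
    appealed: bool                      # (5) whether the decision was appealed
    approved_on_appeal: Optional[bool]  # (6) None when no appeal was filed
    response_time_days: int             # (7) time between submission and response

# Example record for an approved request that was never appealed.
row = PriorAuthStat(
    provider_specialty="cardiology",
    medication_or_procedure="cardiac MRI",
    indication="suspected myocarditis",
    denial_reason=None,
    appealed=False,
    approved_on_appeal=None,
    response_time_days=3,
)
print(asdict(row))
```

Grouping such records by each field yields the disaggregated statistics the entity must post and report.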
Passed 2025-03-13
R-02.1
Section 3(11)(a)-(b)
Plain Language
By December 1, 2025, and annually thereafter, the Commonwealth Office of Technology must report to the Legislative Research Commission and the Interim Joint Committee on State Government. The report must include the AI registry (inventory and use cases), all applications received for AI use with approval/disapproval decisions and rationales, and third-party AI developers and contractors submitted for review. To compile this report, each state department and agency must submit a report to the Office identifying potential AI deployment use cases with benefit and risk descriptions. This creates both a bottom-up reporting obligation on individual agencies and a top-down annual legislative reporting obligation on the Office.
(11) (a) The Commonwealth Office of Technology shall transmit reports to the Legislative Research Commission and the Interim Joint Committee on State Government by December 1, 2025, and annually every year thereafter. The reports shall include: 1. The artificial intelligence registry, which shall include the current inventory and use case of artificial intelligence utilized in state government; 2. Applications received for use of artificial intelligence, including the decision and rationale in approving or disapproving a request in compliance with subsection (5)(c) of this section; and 3. Third-party artificial intelligence developers, system administrators, providers, and contractors submitted for review in compliance with subsection (5) of this section. (b) To facilitate the report in paragraph (a) of this subsection, the Commonwealth Office of Technology shall receive from each department, agency, and administrative body a report examining and identifying potential use cases for the deployment of generative artificial intelligence systems and high-risk artificial intelligence systems, including a description of the benefits and risks to individuals, communities, government, and government employees.
Pre-filed
R-02.2
Chapter 93M § 2(g)
Plain Language
The attorney general may request at any time that a developer produce the documentation described in Section 2(b) within 90 days. The AG may evaluate it for compliance, but it is exempt from public records disclosure. Developers may designate materials as containing trade secrets or proprietary information, and submitting privileged materials does not waive privilege.
(g) Not later than 6 months after the effective date of this act, the attorney general may require that a developer disclose to the attorney general, no later than ninety days after the request and in a form and manner prescribed by the attorney general, the statement or documentation described in subsection (b) of this section. The attorney general may evaluate such statement or documentation to ensure compliance with this chapter, and the statement or documentation is not subject to disclosure under the "Massachusetts Public Records Law", chapter 66, section 10 of the General Laws. In a disclosure pursuant to this subsection (g), a developer may designate the statement or documentation as including proprietary information or a trade secret. To the extent that any information contained in the statement or documentation includes information subject to attorney-client privilege or work-product protection, the disclosure does not constitute a waiver of the privilege or protection.
Pre-filed
R-02.2
Chapter 93M § 3(i)
Plain Language
Beginning no later than six months after the act's effective date, the attorney general may require a deployer, or a third party contracted by the deployer, to produce the risk management policy, impact assessment, or records within 90 days of the request. These documents are exempt from public records disclosure and may be designated as containing trade secrets or proprietary information. Submission of privileged materials does not waive attorney-client privilege or work-product protection.
(i) Not later than 6 months after the effective date of this act, the attorney general may require that a deployer, or a third party contracted by the deployer, disclose to the attorney general, no later than ninety days after the request and in a form and manner prescribed by the attorney general, the risk management policy implemented pursuant to subsection (b) of this section, the impact assessment completed pursuant to subsection (c) of this section, or the records maintained pursuant to subsection (c)(6) of this section. The attorney general may evaluate the risk management policy, impact assessment, or records to ensure compliance with this chapter, and the risk management policy, impact assessment, and records are not subject to disclosure under the "Massachusetts Public Records Law", chapter 66, section 10 of the General Laws. In a disclosure pursuant to this subsection (i), a deployer may designate the statement or documentation as including proprietary information or a trade secret. To the extent that any information contained in the risk management policy, impact assessment, or records include information subject to attorney-client privilege or work-product protection, the disclosure does not constitute a waiver of the privilege or protection.
Pending 2026-10-01
R-02.1
Insurance Article § 15–10A–06(a)(1)(iii)(6), (a)(1)(iii)(9)
Plain Language
Carriers must include in their quarterly reports to the Commissioner two categories of AI-related data. First, for all adverse decisions, the report must disclose whether AI was used in making the decision (this is an existing requirement retained without amendment). Second, the bill adds a new requirement: carriers must report the total number of grievances that received human review under the new AI grievance provision (§ 15–10A–02(b)(2)(vi)), broken down by claim type, member race/gender/profession, and policy type (individual, small group, large group, or Exchange-purchased). The demographic disaggregation enables the Commissioner to monitor for patterns of disparate impact in AI-driven adverse decisions.
6. the number of adverse decisions issued by the carrier under § 15–10A–02(f) of this subtitle, whether the adverse decision involved a prior authorization or step therapy protocol, the type of service at issue in the adverse decisions, and whether an artificial intelligence, algorithm, or other software tool was used in making the adverse decision; ... 9. THE TOTAL NUMBER OF GRIEVANCES REVIEWED UNDER § 15–10A–02(B)(2)(VI) OF THIS SUBTITLE AND AGGREGATED BY: A. TYPE OF CLAIM; B. RACE, GENDER, AND PROFESSION OF MEMBER; AND C. TYPE OF POLICY, INCLUDING INDIVIDUAL, SMALL GROUP, OR LARGE GROUP AND WHETHER THE POLICY WAS PURCHASED ON THE HEALTH BENEFIT EXCHANGE;
Pending 2026-08-01
R-02.1
Minn. Stat. § 181.9922, subd. 1(c)
Plain Language
Each time an employer provides a pre-use notice to workers about an automated decision system, a copy of that same notice must be submitted to the Commissioner of Labor and Industry within 10 days. This is an event-triggered regulatory filing, not a periodic report — it applies at each new deployment, significant change, and worker notification. Copies must also be made available to authorized representatives on request.
(c) Every time an employer provides a notice under paragraph (a), a copy of that notice must be submitted to the commissioner of labor and industry within ten days of the date the notice was provided to the worker. Copies of notices under paragraph (a) must also be made available to authorized representatives upon request.
Pending 2026-08-01
R-02.2
Minn. Stat. § 325M.41, subd. 1(4)
Plain Language
If the attorney general requests it, the developer must grant access to the safety and security protocol with minimal redactions — only those required by federal law are permitted. This is a demand-driven disclosure obligation distinct from the proactive transmission of a redacted copy under subdivision 1(3). The practical effect is that the AG can obtain an essentially unredacted protocol on request, whereas the proactively transmitted version may carry broader redactions.
Before deploying an artificial intelligence model, a developer must: (4) grant the attorney general access to the safety and security protocol with redactions only to the extent required by federal law, if the attorney general requests access;
Pending 2026-09-01
R-02.1
Minn. Stat. § 181.9922, subd. 1(c)
Plain Language
Each time an employer issues a pre-use notice to workers about an automated decision system, the employer must also submit a copy of that notice to the commissioner of labor and industry within ten days. This is an event-triggered filing obligation — not an annual or periodic schedule, but triggered by each notice event. Copies must also be made available to authorized representatives upon request.
(c) Every time an employer provides a notice under paragraph (a), a copy of that notice must be submitted to the commissioner of labor and industry within ten days of the date the notice was provided to the worker. Copies of notices under paragraph (a) must also be made available to authorized representatives upon request.
Pending 2026-01-01
R-02.2
G.S. 114B-5(a)-(f)
Plain Language
The Department of Justice has broad inspection authority — both physical and digital — over licensed health information chatbots. Digital inspections can cover source code, algorithms, ML models, data practices, cybersecurity, user privacy protections, chatbot behavior testing, development processes, and integration with other platforms. The Director may require access to all records relating to development, testing, validation, production, distribution, and performance. Trade secrets and confidential commercial information receive protection under 21 CFR 20.61. Following inspections, the Director provides a detailed findings report with required corrective actions. Manufacturers and importers must also establish records and submit reports as the Director requires by regulation. This creates a comprehensive regulatory inspection and recordkeeping framework that licensees must be prepared to comply with at any time.
(a) The Department shall enforce the provisions of, and the rules adopted under, this Chapter. (b) The Attorney General shall designate a Director, officers, and employees assigned to the oversight and enforcement of this Chapter. Upon presenting appropriate credentials and a written notice to the owner, operator, or agent in charge, those officers and employees are authorized to enter, at reasonable times, any factory, warehouse, or establishment in which chatbots licensed under this Chapter are manufactured, processed, or held, and to inspect, in a reasonable manner and within reasonable limits and in a reasonable time. In addition to physical inspections, the Department may conduct digital inspections of licensed chatbots under this Chapter, to include the following: (1) Examination of source code, algorithms, and machine learning models. (2) Review of data processing and storage practices. (3) Evaluation of cybersecurity measures and protocols. (4) Assessment of user data privacy protections. (5) Testing of chatbot responses and behaviors in various scenarios. (6) Audit of data collection, use, and retention practices. (7) Inspection of software development and update processes. (8) Review of remote access and monitoring capabilities. (9) Evaluation of integration with other digital health technologies or platforms. (c) As part of any inspection, whether physical or digital, the Director may require access to all records relating to the development, testing, validation, production, distribution, and performance of a chatbot licensed under this Chapter. (d) Any information obtained during an inspection which falls within the definition of a trade secret or confidential commercial information as defined in 21 CFR 20.61 shall be treated as confidential and shall not be disclosed under Chapter 132 of the General Statutes, except as may be necessary in proceedings under this Chapter or other applicable law. 
(e) Following any inspection, the Director shall provide a detailed report of findings to the manufacturer or importer, including any identified deficiencies and required corrective actions. (f) Every person who is a manufacturer or importer of a licensed chatbot under this Chapter shall establish and maintain such records, and make such reports to the Director, as the Director may by regulation reasonably require to assure the safety and effectiveness of such devices.
Pending 2027-01-01
R-02.2
G.S. § 114B-6(a)-(f)
Plain Language
The Department of Justice has broad inspection authority over licensed health-information chatbots, including both physical and digital inspections. Digital inspections may cover source code, algorithms, ML models, data practices, cybersecurity, user privacy protections, chatbot behavior testing, and integration with other platforms. The Director may require access to all development, testing, validation, production, distribution, and performance records. Trade secrets and confidential commercial information obtained during inspections are protected from public disclosure. Following inspections, the Director issues a detailed findings report with required corrective actions. Manufacturers and importers must establish and maintain records and submit reports as required by regulation. Licensees must maintain documentation in a form that can be produced to the Department upon request.
(a) The Department shall enforce the provisions of, and the rules adopted under, this Chapter. (b) The Attorney General shall designate a Director, officers, and employees assigned to the oversight and enforcement of this Chapter. Upon presenting appropriate credentials and a written notice to the owner, operator, or agent in charge, those officers and employees are authorized to enter, at reasonable times, any factory, warehouse, or establishment in which chatbots licensed under this Chapter are manufactured, processed, or held, and to inspect, in a reasonable manner and within reasonable limits and in a reasonable time. In addition to physical inspections, the Department may conduct digital inspections of licensed chatbots under this Chapter, to include the following: (1) Examination of source code, algorithms, and machine learning models. (2) Review of data processing and storage practices. (3) Evaluation of cybersecurity measures and protocols. (4) Assessment of user data privacy protections. (5) Testing of chatbot responses and behaviors in various scenarios. (6) Audit of data collection, use, and retention practices. (7) Inspection of software development and update processes. (8) Review of remote access and monitoring capabilities. (9) Evaluation of integration with other digital health technologies or platforms. (c) As part of any inspection, whether physical or digital, the Director may require access to all records relating to the development, testing, validation, production, distribution, and performance of a chatbot licensed under this Chapter. (d) Any information obtained during an inspection which falls within the definition of a trade secret or confidential commercial information, as defined in 21 C.F.R. § 20.61, shall be treated as confidential and shall not be disclosed under Chapter 132 of the General Statutes, except as may be necessary in proceedings under this Chapter or other applicable law. 
(e) Following any inspection, the Director shall provide a detailed report of findings to the manufacturer or importer, including any identified deficiencies and required corrective actions. (f) Every person who is a manufacturer or importer of a licensed chatbot under this Chapter shall establish and maintain such records, and make such reports to the Director, as the Director may by regulation reasonably require to assure the safety and effectiveness of such devices.
Failed 2027-01-01
R-02.1
Sec. 5(5)-(6)
Plain Language
Large frontier developers must submit to the Attorney General confidential summaries of catastrophic risk assessments related to internal use of their frontier models at least quarterly. The Attorney General will establish a confidential submission mechanism for this purpose. This is a proactive, scheduled regulatory submission — the developer cannot wait to be asked.
(5) The Attorney General shall establish a mechanism to be used by a large frontier developer to confidentially submit summaries of any assessments of the potential for catastrophic risk resulting from internal use of its frontier models. (6) A large frontier developer shall transmit to the Attorney General a summary of any assessment of catastrophic risk resulting from internal use of its frontier models no less frequently than every three months.
Failed 2026-02-01
R-02.2
Sec. 3(7)(a)-(d)
Plain Language
The Attorney General may issue a written demand requiring a developer to produce the documentation described in Sec. 3(2) — including use statements, training data summaries, limitation disclosures, and bias evaluation documentation — in connection with an ongoing investigation. Developers may designate materials as proprietary or trade secret, and such materials are exempt from public disclosure. Documentation must be produced in the form and manner prescribed by the AG.
(7)(a) On and after February 1, 2026, the Attorney General may provide a written demand to any developer to disclose to the Attorney General the statement or documentation described in subsection (2) of this section if such a statement or documentation is relevant to an investigation related to the developer conducted by the Attorney General. Such statement or documentation shall be provided to the Attorney General in a form and manner prescribed by the Attorney General. (b) The Attorney General may evaluate such statement or documentation, if it is relevant to an investigation conducted by the Attorney General regarding a violation of the Artificial Intelligence Consumer Protection Act, to ensure compliance with the Artificial Intelligence Consumer Protection Act. (c) In any disclosure pursuant to this subsection, any developer may designate the statement or documentation as including proprietary information or a trade secret. (d) To the extent any such statement or documentation includes any proprietary information or any trade secret, such statement or documentation shall be exempt from disclosure.
Failed 2026-02-01
R-02.2
Sec. 4(8)(a)-(d)
Plain Language
In connection with an ongoing investigation, the Attorney General may require a deployer (or its contracted third party) to produce its risk management policy, impact assessments, and related records within 90 days. Disclosures are not public records and deployers may designate materials as proprietary or trade secret. This is a responsive disclosure obligation — triggered by AG demand, not a proactive filing requirement.
(8)(a) On and after February 1, 2026, in connection with an ongoing investigation related to the deployer, the Attorney General may require any deployer or third party contracted by a deployer to disclose any of the following to the Attorney General no later than ninety days after such request in a form and manner prescribed by the Attorney General: (i) The risk management policy implemented pursuant to subsection (2) of this section; (ii) The impact assessment completed pursuant to subsection (3) of this section; or (iii) The records maintained pursuant to subdivision (3)(f) of this section. (b) If such risk management policy, impact assessment, or record is relevant to an investigation conducted by the Attorney General regarding a violation of the Artificial Intelligence Consumer Protection Act, the Attorney General may evaluate the risk management policy, impact assessment, or records disclosed pursuant to subdivision (a) of this subsection to ensure compliance with the Artificial Intelligence Consumer Protection Act. (c) Any disclosure under this subsection shall not be a public record subject to disclosure pursuant to sections 84-712 to 84-712.09. (d) A deployer may designate any statement or documentation disclosed under this subsection as including proprietary information or a trade secret.
Pending
R-02.1R-02.3
Section 3(g)–(h)
Plain Language
The full impact assessment report — including all supporting data and determinations — must be submitted to the Department of Labor and Workforce Development within 60 days of completion, along with an accessible summary, for inclusion in a public registry maintained by the department. The vendor must also provide the report to any employer or public entity seeking to implement the system. The public registry must be accessible to affected employees, applicants, and their representatives. Proprietary information is protected from public disclosure unless essential, in which case it may only be disclosed in aggregated form. When the department conducts the assessment for public employee systems, the vendor must reimburse the department's full direct costs.
g. The report of the impact assessment shall include all of the information and data used in making its determinations, including the full data and information provided pursuant to subsections d. and e. of this section, and shall, within 60 days of its completion, submitted in its entirety, together with an accessible summary of the report, to the department, for inclusion in a public registry of impact assessments maintained by the department, and to the vendor, who shall provide the report to any employer or public entity seeking to implement the AEDS or EMT. Impact assessments in the public registry shall be made available to affected employees, applicants for employment and their authorized representatives. Proprietary information shall not be publicly disclosed unless essential, and then only in aggregated form. h. In the case of an impact assessment conducted by the department because the AEDS or EMT is to be applied to public employees, the vendor shall pay the department the full amount of direct costs of making the impact assessment of the AEDS or EMT.
Pre-filed 2026-07-01
R-02.1
Section 1(b)(2), Section 1(c)
Plain Language
OIT is required to review each annual safety test report submitted by AI companies. From the AI company's perspective, this confirms that the annual report is not merely filed and forgotten — OIT has an affirmative obligation to review each submission. While the primary reporting obligation is in Section 1(c), this provision establishes that OIT will actively review the submissions, creating an implicit expectation that reports must be substantive and complete enough to withstand regulatory scrutiny.
The Office of Information Technology shall: (2) review each annual report required to be submitted by an artificial intelligence company pursuant to subsection c. of this section.
Pre-filed
R-02.1
Section 5(a)(6)-(7), Section 5(b)
Plain Language
Employers with 100 or more employees that deploy AI systems resulting in layoffs must file an AI Impact Disclosure with the Department of Labor and Workforce Development. The disclosure must include the AI deployment date, the layoff date, and the number of displaced workers. In addition, these employers must make supplemental contributions to the AI Horizon Fund based on the number of AI-attributable layoffs, per a schedule the department will develop. Employers with fewer than 100 employees are exempt from both the disclosure and supplemental contribution obligations.
(6) develop an AI Impact Disclosure that employers deploying AI systems that results in layoffs shall file with the department. This disclosure shall contain, at a minimum, the date on which the AI tool that resulted in layoffs was deployed, the date of layoffs, and the number of workers displaced by the AI tool deployment; and (7) develop a supplemental contribution schedule to the AI Horizon Fund based on the number of layoffs attributable to AI and develop a mechanism for assessment and payment of these assessments. b. The disclosure statements and supplemental contributions specified in paragraphs (6) and (7) of subsection a. of this section shall only be applicable to firms which have 100 or more employees.
Pending 2026-02-02
R-02.1
Section 1.e.
Plain Language
Employers must submit the race and ethnicity demographic data collected under subsection d. to the New Jersey Department of Labor and Workforce Development on an annual basis. This is a proactive, scheduled regulatory submission — employers cannot wait to be asked. The Department will use this data to analyze whether AI video interview tools produce racial bias in hiring outcomes.
The demographic data collected under subsection d. of this section shall be reported annually to the Department of Labor and Workforce Development.
Pending 2027-01-01
R-02.2
GBL § 1551(3)(a)-(b)
Plain Language
Developers providing high-risk AI decision systems to deployers must, to the extent feasible, furnish documentation sufficient for deployers to complete their impact assessments — delivered through model cards, dataset cards, or similar artifacts. A developer that is also itself the sole deployer need not generate this documentation unless the system is provided to an unaffiliated deployer. This is a deployer-enablement obligation: the developer must provide enough information for downstream compliance, not merely describe its own system.
(a) Except as provided in subdivision five of this section, any developer that, on or after January first, two thousand twenty-seven, offers, sells, leases, licenses, gives, or otherwise makes available to a deployer or other developer a high-risk artificial intelligence decision system shall, to the extent feasible, make available to such deployers and other developers the documentation and information relating to such high-risk artificial intelligence decision system necessary for a deployer, or the third party contracted by a deployer, to complete an impact assessment pursuant to this article. The developer shall make such documentation and information available through artifacts such as model cards, dataset cards, or other impact assessments. (b) A developer that also serves as a deployer for any high-risk artificial intelligence decision system shall not be required to generate the documentation and information required pursuant to this section unless such high-risk artificial intelligence decision system is provided to an unaffiliated entity acting as a deployer.
Pending 2027-01-01
R-02.2
GBL § 1551(6)
Plain Language
The AG may require developers to produce the documentation described in § 1551(2) as part of an investigation. Developers may designate trade secrets, FOIL-exempt information, and attorney-client privileged materials as confidential, and such designations are respected — disclosure to the AG does not waive privilege. No fixed production timeline is specified for this subsection (compare § 1552(9) which provides 90 days). This is a reactive disclosure obligation triggered by AG investigation, not a proactive filing requirement.
Beginning on January first, two thousand twenty-seven, the attorney general may require that a developer disclose to the attorney general, as part of an investigation conducted by the attorney general and in a form and manner prescribed by the attorney general, the general statement or documentation described in subdivision two of this section. The attorney general may evaluate such general statement or documentation to ensure compliance with the provisions of this section. In disclosing such general statement or documentation to the attorney general pursuant to this subdivision, the developer may designate such general statement or documentation as including any information that is exempt from disclosure pursuant to subdivision five of this section or article six of the public officers law. To the extent such general statement or documentation includes such information, such general statement or documentation shall be exempt from disclosure. To the extent any information contained in such general statement or documentation is subject to the attorney-client privilege or work product protection, such disclosure shall not constitute a waiver of such privilege or protection.
Pending 2027-01-01
R-02.2
GBL § 1552(9)
Plain Language
The AG may require deployers (or their contracted third parties) to produce their risk management policy, impact assessments, and retained records within 90 days of a request, as part of an AG investigation. Deployers may designate trade secrets, FOIL-exempt materials, and privileged information, which will be protected from public disclosure. Production to the AG does not waive attorney-client privilege or work product protection. This is a reactive production obligation — deployers must maintain documentation in a form that can be assembled and produced within the 90-day window.
Beginning on January first, two thousand twenty-seven, the attorney general may require that a deployer, or a third party contracted by the deployer pursuant to subdivision three of this section, as applicable, disclose to the attorney general, as part of an investigation conducted by the attorney general, no later than ninety days after a request by the attorney general, and in a form and manner prescribed by the attorney general, the risk management policy implemented pursuant to subdivision two of this section, the impact assessment completed pursuant to subdivision three of this section; or records maintained pursuant to paragraph (e) of subdivision three of this section. The attorney general may evaluate such risk management policy, impact assessment or records to ensure compliance with the provisions of this section. In disclosing such risk management policy, impact assessment or records to the attorney general pursuant to this subdivision, the deployer or third-party contractor, as applicable, may designate such risk management policy, impact assessment or records as including any information that is exempt from disclosure pursuant to subdivision eight of this section or article six of the public officers law. To the extent such risk management policy, impact assessment, or records include such information, such risk management policy, impact assessment, or records shall be exempt from disclosure. To the extent any information contained in such risk management policy, impact assessment, or record is subject to the attorney-client privilege or work product protection, such disclosure shall not constitute a waiver of such privilege or protection.
Pending 2027-01-01
R-02.2
GBL § 1553(4)
Plain Language
The AG may require GPAI model developers to produce their technical documentation within 90 days of a request as part of an investigation. Developers may designate trade secrets, FOIL-exempt materials, and privileged information for protection. Production does not waive attorney-client privilege. This parallels the developer and deployer production obligations under §§ 1551(6) and 1552(9), extending the AG's investigative reach to GPAI-specific technical documentation.
Beginning on January first, two thousand twenty-seven, the attorney general may require that a developer disclose to the attorney general, as part of an investigation conducted by the attorney general, no later than ninety days after a request by the attorney general and in a form and manner prescribed by the attorney general, any documentation maintained pursuant to this section. The attorney general may evaluate such documentation to ensure compliance with the provisions of this section. In disclosing any documentation to the attorney general pursuant to this subdivision, the developer may designate such documentation as including any information that is exempt from disclosure pursuant to subdivision three of this section or article six of the public officers law. To the extent such documentation includes such information, such documentation shall be exempt from disclosure. To the extent any information contained in such documentation is subject to the attorney-client privilege or work product protection, such disclosure shall not constitute a waiver of such privilege or protection.
Pending 2025-07-26
R-02.3
State Tech. Law § 510(1)
Plain Language
Any person who develops a high-risk advanced AI system in New York that is actively deployed must disclose the system's existence and function to the Secretary of State by applying for a license or supplemental license. This registration duty is triggered by active deployment, applies regardless of where the system physically operates, and extends to any updates, modifications, upgrades, or expansions of the system's capabilities or intended uses. This is a continuing obligation — not a one-time filing.
Any person who develops a high-risk advanced artificial intelligence system, whether in whole or in part, in the state that is presently performing functions for its intended purpose or within its designated operational parameters, shall have the duty to disclose the existence and function of said system to the secretary by applying for a license as required under section five hundred eleven of this article or, where applicable, a supplemental license under section five hundred twelve of this article. This duty to disclose shall be triggered by the system's active deployment and usage in its intended context or field of operation and is applicable irrespective of the system's location of operation. This duty extends to any updates, modifications, upgrades, or expansions of the system's capabilities or intended uses.
Pending 2025-07-26
R-02.1
State Tech. Law § 510(2)
Plain Language
Developers of autonomous weapons systems (§ 501(2)(i)) must submit a written pre-development disclosure to the Secretary of State before beginning active development. The disclosure must include the names and addresses of all persons involved, a system description, functions and intended use cases, and risk mitigation measures. The Secretary may order a cease-development if the system is likely to violate the ethical code of conduct or the prohibited systems provisions. This is a heightened obligation that applies only to the autonomous weapons subcategory of high-risk systems.
Any person developing a system as defined in paragraph (i) of subdivision two of section five hundred one of this article within the state shall disclose in writing to the secretary the development of such a system prior to active development of the system. Such writing shall set forth the names and addresses of all persons involved in the development of such system, a description of the system, the systems functions and intended use cases, and measures that will be taken to ensure that any risks posed by the system are mitigated. The secretary may, upon receipt of such writing, require such person to cease development of such a system where, in the secretary's discretion, the secretary believes the system has a high likelihood of violating section five hundred twenty-nine or section five hundred thirty of this article.
Pending 2025-07-26
R-02.3
State Tech. Law § 513(1)-(4)
Plain Language
License applications must be in writing, under oath, and include: the applicant's name and address (including partnership/corporate details), the names and addresses of each ethics and risk management board member and each principal and officer, and a description of all known general use cases of the system. The Secretary substantively reviews each application and may refuse to issue a license based on the ethics, experience, character, and fitness of the applicant. A denied applicant receives a license fee refund but not the investigation fee. Licenses remain in force until surrendered, revoked, or suspended.
1. An application for a license required under this article shall be in writing, under oath, and in the form prescribed by the secretary, and shall contain the following: (a) the exact name and address of the applicant, and if the applicant be a co-partnership or association, the names of the members thereof, and if a corporation the date and place of its incorporation; (b) the name and the business and residential address of each member of the ethics and risk management board, each principal, and officer of the applicant; and (c) the description of all known general use cases of the advanced artificial intelligence system, including any purposes foreseen to be implemented by the applicant. A "use case" shall be defined as broad category of potential use. 2. After the filing of an application for a license accompanied by payment of the fees for license and investigation, it shall be substantively reviewed. After the application is deemed sufficient and complete, the secretary shall issue the license, or the secretary may refuse to issue the license if the secretary shall find that the ethics, experience, character and general fitness of the applicant or any person associated with the applicant are not such as to command the confidence of the community and to warrant the belief that the business will be conducted honestly, fairly and efficiently within the purposes and intent of this article. 3. If the secretary refuses to issue a license, the secretary shall notify the applicant of the denial, return to the applicant the sum paid as a license fee, but retain the investigation fee to cover the costs of investigating the applicant. 4. Each license issued pursuant to this article shall remain in full force unless it is surrendered by the licensee, revoked or suspended.
Pending 2025-07-26
R-02.1
State Tech. Law § 516(4)(a)-(h), (5)
Plain Language
The ethics and risk management board must submit an annual comprehensive report to the Secretary for each licensed high-risk AI system. The report must include: (1) all possible use cases, intended or unintended, likely or unlikely; (2) a thorough risk assessment for each use case, covering privacy, security, fairness, economic, societal, and environmental impacts; (3) an evaluation of whether known use cases should be constrained or banned; (4) a mitigation plan for each identified risk; (5) a review of all incidents and failures in the past year; (6) user education plans that account for varying digital literacy; (7) disclosure of board conflicts of interest; and (8) a compliance update. Board members who make false statements, fail to disclose conflicts, or misrepresent risks face criminal liability: a misdemeanor punishable by a fine of up to $500, imprisonment for up to six months, or both.
4. Annually, the ethics and risk management board of each operator shall submit to the secretary a comprehensive report for each licensed high-risk advanced artificial intelligence system which consists of the following: (a) All possible use cases, whether intended or unintended, whether likely or unlikely. (b) A thorough risk assessment for each use case, considering and evaluating the potential for harm, irrespective of the probability of such risk materializing. This shall include, but not be limited to, the system's potential impact on privacy, security, fairness, economic implications, societal well-being, and safety of persons and the environment. (c) A detailed evaluation of known use cases of the system by users, exploring whether certain applications ought to be constrained or banned due to ethical considerations. This shall include an assessment of the operator's capacity to impose such constraints on use cases. (d) A mitigation plan for each identified risk, including preemptive measures, monitoring processes, and responsive actions. This shall also include a communication strategy to inform users and stakeholders about potential risks and steps taken to mitigate them. (e) A comprehensive review of any incidents or failures of the system in the past year, detailing the circumstances, impacts, measures taken to address the issue, and modifications made to prevent such incidents in the future. (f) Any existing attempts to educate users and, based on the existing use of the system by users, a detailed plan on how the operator intends to inform and instruct users on the safe and ethical use of the system, considering varying levels of digital literacy among users. (g) A disclosure of any conflicts of interest within the ethics board, which could potentially influence the board's decisions and recommendations. This shall include measures to manage and resolve such conflicts. 
(h) An update on the measures taken by the operator to ensure the system's adherence to existing laws, regulations, and ethical guidelines related to artificial intelligence. 5. In addition to any applicable civil penalties pursuant to section five hundred eight of this article, a member of an ethics and risk management board who makes a false statement, fails to disclose conflicts of interest or misrepresents the risks or severity of the risks posed by a system in the performance of their duties as a member of such board, shall be guilty of a misdemeanor and, upon conviction, shall be fined not more than five hundred dollars or imprisoned for not more than six months or both, in the discretion of the court.
Pending 2025-07-26
R-02.2
State Tech. Law § 526(1)-(4)
Plain Language
The Secretary has broad investigative and examination authority, including the power to compel production of all relevant books, records, accounts, documents, source code, and logs, and to examine persons under oath. Examination expenses are assessed to and paid by the examined licensee. All examination and investigation reports are confidential and not subject to subpoena unless the Secretary determines publication serves justice and the public interest. Operators must maintain their records in a form that can be produced to the Secretary, and must bear the financial cost of regulatory examinations.
1. The secretary shall have the power to make such investigations as the secretary shall deem necessary to determine whether any operator or any other person has violated any of the provisions of this article, or whether any licensee has conducted itself in such manner as would justify the revocation of its license, and to the extent necessary therefor, the secretary may require the attendance of and examine any person under oath, and shall have the power to compel the production of all relevant books, records, accounts, documents, source code, and logs. 2. The secretary shall have the power to make such examinations of the books, records, accounts, documents, source code, and logs used in the business of any licensee as the secretary shall deem necessary to determine whether any such licensee has violated any of the provisions of this article. 3. The expenses incurred in making any examination pursuant to this section shall be assessed against and paid by the licensee so examined, except that traveling and subsistence expenses so incurred shall be charged against and paid by licensees in such proportions as the secretary shall deem just and reasonable, and such proportionate charges shall be added to the assessment of the other expenses incurred upon each examination. Upon written notice by the secretary of the total amount of such assessment, the licensee shall become liable for and shall pay such assessment to the secretary. 4. 
All reports of examinations and investigations, and all correspondence and memoranda concerning or arising out of such examinations or investigations, including any duly authenticated copy or copies thereof in the possession of any licensee or the department, shall be confidential communications, shall not be subject to subpoena and shall not be made public unless, in the judgment of the secretary, the ends of justice and the public advantage will be subserved by the publication thereof, in which event the secretary may publish or authorize the publication of a copy of any such report or other material referred to in this subdivision, or any part thereof, in such manner as the secretary may deem proper.
Pending 2025-01-01
R-02.1
Labor Law § 201-j(2)
Plain Language
Employers must submit completed AI impact assessments to the Department of Labor at least 30 days before deploying the AI system that is the subject of the assessment. This is a proactive pre-deployment submission requirement — employers cannot wait to be asked. The 30-day lead time creates a mandatory waiting period between submission and implementation, though the statute does not expressly grant the Department authority to block implementation based on the assessment's contents.
Any impact assessment conducted pursuant to this subdivision shall be submitted to the department at least thirty days prior to the implementation of the artificial intelligence that is the subject of such assessment.
Pending 2025-06-04
R-02.4
Gen. Bus. Law § 390-f(2)(b)
Plain Language
Every covered entity must file an annual certification of compliance with the responsible capability scaling policy requirement with the Chief Information Officer. This is a proactive filing obligation on a defined annual schedule — entities cannot wait to be asked. The certification attests to compliance with the section as a whole, meaning the entity has developed and presumably maintains its responsible capability scaling policy. The bill does not specify the form or content of the certification, leaving that to the CIO's rulemaking authority.
Each such entity shall file an annual certification of compliance with this section with the chief information officer.
Pending 2025-06-04
R-02.2
Gen. Bus. Law § 390-f(2)(d)
Plain Language
The Attorney General, acting in consultation with the Chief Information Officer, has authority to audit the responsible capability scaling policies that entities file. This implies that filed policies must be substantive enough to withstand audit scrutiny and that entities must maintain documentation supporting their policies. While this provision directly empowers the AG rather than imposing a duty on covered entities, it creates an implicit obligation on entities to maintain audit-ready policy documentation. Entities should treat their filed policies and supporting records as subject to regulatory review at any time.
The attorney general, in consultation with the chief information officer, shall have the power to audit the policies filed by entities under this section.
Pending 2026-06-09
R-02.1
Civ. Rights Law § 88(1)–(3)
Plain Language
Developers must file periodic reports with the Attorney General covering system description, intended and disallowed uses, development overview, training data overview, and information necessary for deployers to monitor compliance. The first report is due within six months of offering the system for deployment (or deploying it), with annual reports thereafter and an additional report within six months of any substantial change. Each report must be accompanied by the most recent independent audit. Developers who are also deployers should note the dual filing requirement.
1. Every developer and deployer of a high-risk AI system shall comply with the reporting requirements of this section. 2. Together with each report required to be filed under this section, every developer and deployer shall file with the attorney general a copy of the last completed independent audit required by this article. 3. Developers of high-risk AI systems shall complete and file with the attorney general reports in accordance with this subdivision. (a) A developer of a high-risk AI system shall complete and file with the attorney general at least: (i) a first report within six months after completion of development of the high-risk AI system and the initial offering of the high-risk AI system to a deployer for deployment or, if the developer is first deployer to deploy the high-risk AI system, after initial deployment; (ii) one report annually following the submission of the first report; and (iii) one report within six months of any substantial change to the high-risk AI system. (b) A developer report under this section shall include: (i) a description of the system including: (A) the uses of the high-risk AI system that the developer intends; and (B) any explicitly unintended or disallowed uses of the high-risk AI system; (ii) an overview of how the high-risk AI system was developed; (iii) an overview of the high-risk AI system's training data; and (iv) any other information necessary to allow a deployer to: (A) understand the outputs and monitor the system for compliance with this article; and (B) fulfill its duties under this article.
Pending 2026-06-09
R-02.1
Civ. Rights Law § 88(4)
Plain Language
Deployers must file periodic reports with the Attorney General covering system description, actual and planned uses, any deviation from developer-intended uses, and an impact assessment addressing algorithmic discrimination risk, monetization plans, and cost-benefit evaluation for consumers. The first report is due within six months of deployment, the second within one year, then biennially thereafter, plus within six months of any substantial change. Entities that are both developer and deployer may file a single joint report covering both sets of requirements.
4. Deployers of high-risk AI systems shall complete and file with the attorney general reports in accordance with this subdivision. (a) A deployer of a high-risk AI system shall complete and file with the attorney general at least: (i) a first report within six months after initial deployment; (ii) a second report within one year following the completion and filing of the first report; (iii) one report every two years following the completion and filing of the second report; and (iv) one report within six months of any substantial change to the high-risk AI system. (b) A deployer report under this section shall include: (i) a description of the system including: (A) the deployer's actual, intended, or planned uses of the high-risk AI system with respect to consequential decisions; and (B) whether the deployer is using the high-risk AI system for any developer unintended or disallowed uses; and (ii) an impact assessment including: (A) whether the high-risk AI system poses a risk of algorithmic discrimination and the steps taken to address the risk of algorithmic discrimination; (B) if the high-risk AI system is or will be monetized, how it is or is planned to be monetized; and (C) an evaluation of the costs and benefits to consumers and other end users. (c) A deployer that is also a developer and is required to submit reports under subdivision three of this section may submit a single joint report provided it contains the information required in this subdivision.
Pending 2026-06-09
R-02.1
Civ. Rights Law § 88(6)
Plain Language
For high-risk AI systems already in deployment when the law takes effect, developers and deployers have an 18-month grace period to complete and file their first report and associated audit. After the first filing, developers must file annually and deployers must file biennially. This transitional provision applies only to pre-existing deployments — new deployments after the effective date follow the standard six-month timeline under § 88(3)–(4).
6. For high-risk AI systems which are already in deployment at the time of the effective date of this article, developers and deployers shall have eighteen months from such effective date to complete and file the first report and associated independent audit required by this article. (a) Each developer of a high-risk AI system shall thereafter file at least one report annually following the submission of the first report under this subdivision. (b) Each deployer of a high-risk AI system shall thereafter file at least one report every two years following the submission of the first report under this subdivision.
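The transition cadence above (18 months to the first filing, then annual for developers and biennial for deployers) is straightforward date arithmetic. As a purely illustrative sketch (not legal advice; `filing_schedule` and `add_months` are hypothetical names, not anything defined by the bill):

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Shift a date by whole months, clamping the day for short months."""
    y, m = divmod(d.month - 1 + months, 12)
    y += d.year
    m += 1
    leap = y % 4 == 0 and (y % 100 != 0 or y % 400 == 0)
    days_in_month = [31, 29 if leap else 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
    return date(y, m, min(d.day, days_in_month[m - 1]))

def filing_schedule(effective: date, role: str, n: int = 3) -> list[date]:
    """Illustrative report due dates for a system already in deployment at
    the effective date: first filing 18 months out, then every 12 months
    (developer) or 24 months (deployer)."""
    first = add_months(effective, 18)
    interval = 12 if role == "developer" else 24
    return [first] + [add_months(first, interval * i) for i in range(1, n)]

filing_schedule(date(2026, 6, 9), "deployer")
# [date(2027, 12, 9), date(2029, 12, 9), date(2031, 12, 9)]
```

The sketch assumes the follow-up clock runs from the first filing date; the statutory text says "following the submission of the first report," so an actual compliance calendar should key off the date each report is in fact submitted.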
Pending 2026-06-09
R-02.2
Civ. Rights Law § 89(3)
Plain Language
The Attorney General may require developers or deployers to produce their risk management policy and program on demand, in a form and manner the AG prescribes, and may evaluate it for compliance. This is a responsive regulatory disclosure obligation — entities must maintain their risk management documentation in a form that can be produced to the AG when requested.
3. The attorney general may require a developer or a deployer to disclose the risk management policy and program implemented pursuant to subdivision one of this section in a form and manner prescribed by the attorney general. The attorney general may evaluate the risk management policy and program to ensure compliance with this section.
Pending
R-02.1
GBL § 1712(1)-(2)
Plain Language
Developers must submit documentation to the Attorney General affirming: (1) the identities and qualifications of the professional domain experts involved; (2) the specific development phases in which each expert contributed; and (3) any known risks, limitations, or ethical concerns identified during development. The Attorney General reviews submissions and issues certificates of compliance to compliant developers. Non-compliant developers may face investigation and penalties. The statute does not specify a submission schedule, so developers should submit before or at deployment to obtain their compliance certificate.
§ 1712. Documentation and compliance. 1. Developers of artificial intelligence technologies shall submit documentation to the attorney general affirming: (a) The identities and qualifications of professional domain experts involved in the AI technology, pursuant to section seventeen hundred eleven of this article; (b) The specific phases of development in which such professional domain experts contributed; and (c) Any known risks, limitations, or ethical concerns disclosed during development. 2. The attorney general or a duly authorized representative of the attorney general shall issue certificates of compliance to developers who have submitted documentation pursuant to subdivision one of this section and are found to be in compliance. Any technology and developers found to be not in compliance may be subject to investigation and penalties pursuant to section seventeen hundred thirteen of this article.
Pending 2026-01-21
R-02.1
Labor Law § 201-j(2)(a)-(b)
Plain Language
By March 1 of each year, every covered business must file a report with the Department of Labor covering the preceding calendar year. The report has two parts: (1) employment impact data, including estimates of employees displaced, hired, or positions eliminated due in whole or part to AI use; and (2) information on the nature of AI usage, including objectives, human oversight, frequency and duration, use involving sensitive personal data and related protections, and risk reduction measures. The enumerated items are floors — the report must include but is not limited to these categories. Failure to report triggers civil penalties of up to $500 per day, subject to a 90-day cure period upon notice of violation.
2. Reporting requirement. On or before March first of every year, a covered business shall report to the department regarding the impact of artificial intelligence on its hiring and the nature of its artificial intelligence use in the calendar year ending the preceding December thirty-first. Such report shall include: (a) Employment data, including but not limited to: (i) An estimate of the number of employees displaced, or whose hours have been reduced, due in full or in part to use of artificial intelligence; (ii) An estimate of the number of employees hired, or whose hours have been increased, due in full or in part to use of artificial intelligence; and (iii) An estimate of the number of positions previously filled that the covered business has decided not to fill due in full or in part to use of artificial intelligence; and (b) Information on the nature of artificial intelligence usage, including but not limited to: (i) Descriptions of the objectives of the use of artificial intelligence; (ii) Information regarding any human oversight of artificial intelligence; (iii) Information on the frequency and length of use of artificial intelligence; (iv) Information on any use of artificial intelligence in relation to sensitive personal data, including storage and access protections related to use of artificial intelligence in relation to such personal data; and (v) Measures in place for oversight, risk reduction, or other protections related to use of artificial intelligence.
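The penalty exposure described in the summary above (up to $500 per day of violation, with a 90-day cure period after notice) can be roughed out as simple arithmetic. A hypothetical upper-bound sketch; the cure-period mechanics here are an assumption about how the provision would be applied, and `max_accrued_penalty` is an illustrative name, not statutory machinery:

```python
from datetime import date

# Figures from the plain-language summary above; the quoted statutory
# text here does not itself state the penalty amount or cure window.
DAILY_PENALTY_CAP = 500
CURE_WINDOW_DAYS = 90

def max_accrued_penalty(due: date, cured: date, notice: date) -> int:
    """Upper bound on civil penalties for a late annual report, assuming
    (illustratively) that no penalty accrues when the violation is cured
    within the cure window after notice."""
    if (cured - notice).days <= CURE_WINDOW_DAYS:
        return 0
    violation_days = max((cured - due).days, 0)
    return violation_days * DAILY_PENALTY_CAP
```

For example, a report due March 1 and not cured until August 1, with notice given March 5, would accrue up to 153 days of exposure under this reading; curing within 90 days of the notice would zero it out.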
Pending 2027-01-01
R-02.1
Civil Rights Law § 104(6)(a)-(b)
Plain Language
Within 30 days of completing any full pre-deployment evaluation, full impact assessment, or developer annual review, the developer or deployer must: (1) submit the complete evaluation, assessment, or review to the Division of Consumer Protection; (2) publish a public summary on their website; and (3) submit the summary to the Division. All evaluations, assessments, and reviews must be retained for at least 10 years. Upon legislative request, the documents must also be made available to the legislature. Trade secrets may be redacted from public disclosures, and personal data must be redacted.
6. (a) A developer or deployer that conducts a full pre-deployment evaluation, full impact assessment, or developer annual review of assessments shall: (i) not later than thirty days after completion, submit the evaluation, assessment, or review to the division; (ii) upon request, make the evaluation, assessment, or review available to the legislature; and (iii) not later than thirty days after completion: (A) publish a summary of the evaluation, assessment, or review on the website of the developer or deployer in a manner that is easily accessible to individuals; and (B) submit such summary to the division. (b) A developer or deployer shall retain all evaluations, assessments, and reviews described in this section for a period of not fewer than ten years.
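The 30-day submission window and 10-year retention floor above reduce to date arithmetic. A minimal illustrative sketch (function names are hypothetical, not from the bill):

```python
from datetime import date, timedelta

SUBMISSION_WINDOW_DAYS = 30  # submit to the division within 30 days of completion
RETENTION_YEARS = 10         # retain evaluations for not fewer than 10 years

def submission_deadline(completed: date) -> date:
    """Last day to submit the evaluation, assessment, or review."""
    return completed + timedelta(days=SUBMISSION_WINDOW_DAYS)

def retention_until(completed: date) -> date:
    """Earliest date the record may be discarded (at least 10 years)."""
    try:
        return completed.replace(year=completed.year + RETENTION_YEARS)
    except ValueError:  # completed on Feb 29 of a leap year; roll to Mar 1
        return date(completed.year + RETENTION_YEARS, 3, 1)

submission_deadline(date(2027, 3, 15))
# date(2027, 4, 14)
```

Note that the same 30-day clock governs both the full submission to the division and the publication and submission of the public summary.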
Pending 2026-01-01
R-02.1
Civ. Rights Law § 88(1)-(3)
Plain Language
Developers must file reports with the Attorney General on a defined schedule: within six months of initial offering or deployment, annually thereafter, and within six months of any substantial change. Reports must include system description (intended and disallowed uses), development overview, training data overview, and sufficient information for deployers to monitor compliance. Each report must be accompanied by the most recent independent audit. Substantial change triggers include new versions, new releases, or updates significantly affecting use cases, functionality, or expected outcomes.
1. Every developer and deployer of a high-risk AI system shall comply with the reporting requirements of this section.
2. Together with each report required to be filed under this section, every developer and deployer shall file with the attorney general a copy of the last completed independent audit required by this article.
3. Developers of high-risk AI systems shall complete and file with the attorney general reports in accordance with this subdivision.
(a) A developer of a high-risk AI system shall complete and file with the attorney general at least:
(i) a first report within six months after completion of development of the high-risk AI system and the initial offering of the high-risk AI system to a deployer for deployment or, if the developer is first deployer to deploy the high-risk AI system, after initial deployment;
(ii) one report annually following the submission of the first report; and
(iii) one report within six months of any substantial change to the high-risk AI system.
(b) A developer report under this section shall include:
(i) a description of the system including:
(A) the uses of the high-risk AI system that the developer intends; and
(B) any explicitly unintended or disallowed uses of the high-risk AI system;
(ii) an overview of how the high-risk AI system was developed;
(iii) an overview of the high-risk AI system's training data; and
(iv) any other information necessary to allow a deployer to:
(A) understand the outputs and monitor the system for compliance with this article; and
(B) fulfill its duties under this article.
Pending 2026-01-01
R-02.1
Civ. Rights Law § 88(4)
Plain Language
Deployers must file reports with the Attorney General on a defined schedule: within six months of initial deployment, a second report one year later, then biennially, plus within six months of any substantial change. Reports must include a system description covering actual and intended uses with respect to consequential decisions and whether any developer-unintended uses are occurring. Reports must also include an impact assessment covering algorithmic discrimination risk and mitigation steps, monetization details, and a cost-benefit evaluation for consumers. Entities that are both developer and deployer may file a single joint report. Each report must be accompanied by the latest audit.
4. Deployers of high-risk AI systems shall complete and file with the attorney general reports in accordance with this subdivision.
(a) A deployer of a high-risk AI system shall complete and file with the attorney general at least:
(i) a first report within six months after initial deployment;
(ii) a second report within one year following the completion and filing of the first report;
(iii) one report every two years following the completion and filing of the second report; and
(iv) one report within six months of any substantial change to the high-risk AI system.
(b) A deployer report under this section shall include:
(i) a description of the system including:
(A) the deployer's actual, intended, or planned uses of the high-risk AI system with respect to consequential decisions; and
(B) whether the deployer is using the high-risk AI system for any developer unintended or disallowed uses; and
(ii) an impact assessment including:
(A) whether the high-risk AI system poses a risk of algorithmic discrimination and the steps taken to address the risk of algorithmic discrimination;
(B) if the high-risk AI system is or will be monetized, how it is or is planned to be monetized; and
(C) an evaluation of the costs and benefits to consumers and other end users.
(c) A deployer that is also a developer and is required to submit reports under subdivision three of this section may submit a single joint report provided it contains the information required in this subdivision.
Pending 2026-01-01
R-02.1
Civ. Rights Law § 88(5)-(6)
Plain Language
The Attorney General must create a redaction process for reports and maintain a publicly accessible online database of reports and audits, updated biannually. For high-risk AI systems already deployed at the effective date, developers and deployers have 18 months to complete and file their first report and audit, followed by annual (developers) or biennial (deployers) subsequent reports. This transition provision gives existing systems additional compliance runway beyond the standard six-month initial filing window.
5. The attorney general shall:
(a) promulgate rules for a process whereby developers and deployers may request redaction of portions of reports required under this section to ensure that they are not required to disclose sensitive and protected information; and
(b) maintain an online database that is accessible to the general public with reports, redacted in accordance with this subdivision, and audits required by this article, which database shall be updated biannually.
6. For high-risk AI systems which are already in deployment at the time of the effective date of this article, developers and deployers shall have eighteen months from such effective date to complete and file the first report and associated independent audit required by this article.
(a) Each developer of a high-risk AI system shall thereafter file at least one report annually following the submission of the first report under this subdivision.
(b) Each deployer of a high-risk AI system shall thereafter file at least one report every two years following the submission of the first report under this subdivision.
Pending 2026-01-01
R-02.2
Civ. Rights Law § 89(3)
Plain Language
The Attorney General may at any time require a developer or deployer to produce its risk management policy and program in a prescribed form. The AG may also evaluate the program for compliance. This means entities must maintain their risk management documentation in a form that can be produced on request — it is not sufficient to have a program only in concept.
3. The attorney general may require a developer or a deployer to disclose the risk management policy and program implemented pursuant to subdivision one of this section in a form and manner prescribed by the attorney general. The attorney general may evaluate the risk management policy and program to ensure compliance with this section.
Pending 2025-01-01
R-02.1
Labor Law § 201-j(2)
Plain Language
Employers must submit their completed AI impact assessments to the New York Department of Labor at least 30 days before implementing the AI system that is the subject of the assessment. This is a proactive submission requirement — employers cannot wait to be asked. The 30-day lead time creates a pre-implementation review window, though the bill does not expressly grant the Department authority to block implementation based on the assessment's contents.
Any impact assessment conducted pursuant to this subdivision shall be submitted to the department at least thirty days prior to the implementation of the artificial intelligence that is the subject of such assessment.
Pending 2025-10-11
R-02.2
GBL § 1551(6)
Plain Language
The Attorney General may require developers to produce their deployer-facing documentation (foreseeable uses, training data summaries, bias risks, mitigation measures, etc.) as part of an AG investigation. Developers may designate trade secrets, FOIL-exempt information, and attorney-client privileged material as confidential; designated material remains exempt from public disclosure, and production to the AG does not waive attorney-client privilege or work product protection. This is a demand-driven disclosure obligation, not a proactive filing requirement.
6. Beginning on January first, two thousand twenty-seven, the attorney general may require that a developer disclose to the attorney general, as part of an investigation conducted by the attorney general and in a form and manner prescribed by the attorney general, the general statement or documentation described in subdivision two of this section. The attorney general may evaluate such general statement or documentation to ensure compliance with the provisions of this section. In disclosing such general statement or documentation to the attorney general pursuant to this subdivision, the developer may designate such general statement or documentation as including any information that is exempt from disclosure pursuant to subdivision five of this section or article six of the public officers law. To the extent such general statement or documentation includes such information, such general statement or documentation shall be exempt from disclosure. To the extent any information contained in such general statement or documentation is subject to the attorney-client privilege or work product protection, such disclosure shall not constitute a waiver of such privilege or protection.
Pending 2025-10-11
R-02.2
GBL § 1552(9)
Plain Language
The Attorney General may require deployers (or their contracted third parties) to produce their risk management policy, impact assessments, and associated records within 90 days of a request, as part of an AG investigation. Deployers may designate trade secrets, FOIL-exempt information, and privileged material as confidential. Disclosure to the AG does not waive attorney-client privilege or work product protection. This is a demand-driven disclosure obligation with a 90-day response window.
9. Beginning on January first, two thousand twenty-seven, the attorney general may require that a deployer, or a third party contracted by the deployer pursuant to subdivision three of this section, as applicable, disclose to the attorney general, as part of an investigation conducted by the attorney general, no later than ninety days after a request by the attorney general, and in a form and manner prescribed by the attorney general, the risk management policy implemented pursuant to subdivision two of this section, the impact assessment completed pursuant to subdivision three of this section; or records maintained pursuant to paragraph (e) of subdivision three of this section. The attorney general may evaluate such risk management policy, impact assessment or records to ensure compliance with the provisions of this section. In disclosing such risk management policy, impact assessment or records to the attorney general pursuant to this subdivision, the deployer or third-party contractor, as applicable, may designate such risk management policy, impact assessment or records as including any information that is exempt from disclosure pursuant to subdivision eight of this section or article six of the public officers law. To the extent such risk management policy, impact assessment, or records include such information, such risk management policy, impact assessment, or records shall be exempt from disclosure. To the extent any information contained in such risk management policy, impact assessment, or record is subject to the attorney-client privilege or work product protection, such disclosure shall not constitute a waiver of such privilege or protection.
Pending 2025-10-11
R-02.2
GBL § 1553(3)-(4)
Plain Language
GPAI model developers need not disclose trade secrets or legally protected information in their technical documentation. The AG may require developers to produce their § 1553 technical documentation within 90 days as part of an investigation. Developers may designate trade secrets, FOIL-exempt information, and privileged material as confidential, and disclosure to the AG does not waive attorney-client privilege or work product protection.
3. Nothing in subdivision one of this section shall be construed to require a developer to disclose any information that is a trade secret or otherwise protected from disclosure pursuant to state or federal law. 4. Beginning on January first, two thousand twenty-seven, the attorney general may require that a developer disclose to the attorney general, as part of an investigation conducted by the attorney general, no later than ninety days after a request by the attorney general and in a form and manner prescribed by the attorney general, any documentation maintained pursuant to this section. The attorney general may evaluate such documentation to ensure compliance with the provisions of this section. In disclosing any documentation to the attorney general pursuant to this subdivision, the developer may designate such documentation as including any information that is exempt from disclosure pursuant to subdivision three of this section or article six of the public officers law. To the extent such documentation includes such information, such documentation shall be exempt from disclosure. To the extent any information contained in such documentation is subject to the attorney-client privilege or work product protection, such disclosure shall not constitute a waiver of such privilege or protection.
Pending 2026-01-07
R-02.1
Labor Law § 201-j(2)(a)
Plain Language
Covered businesses must submit an annual report to the Department of Labor by March 1 covering the prior calendar year. The report has two required components: (1) employment data estimating the number of employees displaced, hired, or whose hours changed due to AI, plus unfilled positions attributable to AI; and (2) information on the nature of AI usage, including objectives, human oversight, frequency of use, handling of sensitive personal data, and risk reduction measures. The enumerated items are non-exhaustive ('including but not limited to'), so the Department may expect additional relevant information. Because the report covers the preceding calendar year, covered businesses should begin tracking these metrics immediately upon the law's effective date.
2. Reporting requirement. On or before March first of every year, a covered business shall report to the department regarding the impact of artificial intelligence on its hiring and the nature of its artificial intelligence use in the calendar year ending the preceding December thirty-first. Such report shall include: (a) Employment data, including but not limited to: (i) An estimate of the number of employees displaced, or whose hours have been reduced, due in full or in part to use of artificial intelligence; (ii) An estimate of the number of employees hired, or whose hours have been increased, due in full or in part to use of artificial intelligence; and (iii) An estimate of the number of positions previously filled that the covered business has decided not to fill due in full or in part to use of artificial intelligence; and (b) Information on the nature of artificial intelligence usage, including but not limited to: (i) Descriptions of the objectives of the use of artificial intelligence; (ii) Information regarding any human oversight of artificial intelligence; (iii) Information on the frequency and length of use of artificial intelligence; (iv) Information on any use of artificial intelligence in relation to sensitive personal data, including storage and access protections related to use of artificial intelligence in relation to such personal data; and (v) Measures in place for oversight, risk reduction, or other protections related to use of artificial intelligence.
Pending 2025-01-01
R-02.1
State Technology Law § 404(1)
Plain Language
Every impact assessment must be submitted to the governor and legislative leaders at least 30 days before the agency implements the automated decision-making system covered by the assessment. This creates a mandatory waiting period: agencies cannot deploy a system until 30 days after the governor and legislature receive the completed assessment. While the statute does not explicitly grant the governor or legislature veto power, the 30-day window provides an opportunity for legislative intervention before deployment.
Each impact assessment conducted pursuant to this article shall be submitted to the governor, the temporary president of the senate, and the speaker of the assembly at least thirty days prior to the implementation of the automated decision-making system that is the subject of such assessment.
Pending 2025-01-01
R-02.1
Ohio Rev. Code § 3902.80(B)(1)-(2)
Plain Language
Health plan issuers must file an annual report with the superintendent of insurance by March 1 each year. The report must cover: the issuer's network providers, enrollment counts for the preceding year, and — if AI algorithms are used in utilization review — detailed information including the algorithm criteria, training data sets, the algorithm itself, software outcomes, and data on human reviewer time spent on each adverse determination before sign-off. The report must be submitted in a form prescribed by the superintendent and verified by an officer of the health plan issuer. This is a proactive, scheduled submission — not triggered by request.
(B)(1) Each health plan issuer, annually, on or before the first day of March, shall file a report with the superintendent of insurance covering all of the following information: (a) Each provider in the health plan issuer's network; (b) The number of covered persons enrolled in health benefit plans issued by the health plan issuer in this state in the preceding calendar year; (c) Whether the health plan issuer used, is using, or will use artificial intelligence-based algorithms in utilization review processes for those health benefit plans and, if so, all of the following information: (i) The algorithm criteria; (ii) Data sets used to train the algorithm; (iii) The algorithm itself; (iv) Outcomes of the software in which the algorithm is used; (v) Data on the amount of time a human reviewer spends examining an adverse determination prior to signing off on each such determination. (2) The health plan issuer shall submit the report in a form prescribed by the superintendent. An officer of the health plan issuer shall verify the contents of the report.
Pending 2025-01-01
R-02.2
Ohio Rev. Code § 3902.80(D)
Plain Language
The superintendent of insurance has authority to audit any health plan issuer's use of AI-based algorithms at any time, with no advance notice requirement specified. The superintendent may also engage third-party auditors. For health plan issuers, this means they must maintain their AI systems, documentation, and records in a state of audit readiness at all times. While this provision primarily grants authority to the superintendent, it imposes a practical obligation on issuers to be prepared to produce documentation on demand.
(D) The superintendent may audit a health plan issuer's use of an artificial intelligence-based algorithm at any time and may contract with a third party for the purposes of conducting such an audit.
Pending 2025-01-01
R-02.1
Ohio Rev. Code § 3902.80(B)(1)-(2)
Plain Language
Health plan issuers must file an annual report with the Superintendent of Insurance by March 1 each year. The report must cover the issuer's provider network, the number of covered persons enrolled in the preceding calendar year, and whether the issuer used, is using, or will use AI-based algorithms in utilization review. If AI is used, the issuer must disclose the algorithm criteria, training datasets, the algorithm itself, software outcomes, and data on how much time human reviewers spend examining adverse determinations before signing off. An officer must verify the report's contents. This is a proactive, scheduled regulatory submission; issuers cannot wait to be asked.
(B)(1) Each health plan issuer, annually, on or before the first day of March, shall file a report with the superintendent of insurance covering all of the following information: (a) Each provider in the health plan issuer's network; (b) The number of covered persons enrolled in health benefit plans issued by the health plan issuer in this state in the preceding calendar year; (c) Whether the health plan issuer used, is using, or will use artificial intelligence-based algorithms in utilization review processes for those health benefit plans and, if so, all of the following information: (i) The algorithm criteria; (ii) Data sets used to train the algorithm; (iii) The algorithm itself; (iv) Outcomes of the software in which the algorithm is used; (v) Data on the amount of time a human reviewer spends examining an adverse determination prior to signing off on each such determination. (2) The health plan issuer shall submit the report in a form prescribed by the superintendent. An officer of the health plan issuer shall verify the contents of the report.
Pending 2025-01-01
R-02.2
Ohio Rev. Code § 3902.80(D)
Plain Language
The Superintendent of Insurance may audit a health plan issuer's AI algorithm use at any time, including by engaging a third-party auditor. This creates an obligation for health plan issuers to maintain their AI systems, documentation, and records in a form that can be produced for audit at any time — not merely upon annual reporting. Issuers should treat this as a continuing readiness obligation to cooperate with regulatory examinations of their AI utilization review tools.
(D) The superintendent may audit a health plan issuer's use of an artificial intelligence-based algorithm at any time and may contract with a third party for the purposes of conducting such an audit.
Pending 2026-10-06
R-02.1, R-02.4
35 Pa.C.S. § 3504(a)-(b)
Plain Language
Facilities using AI for clinical decision making must annually file an AI compliance statement with the Department of Health. The statement must include: a summary of each AI algorithm's function and scope; a logic or decision tree of the algorithms; a description of each training data set including data sources; an attestation of compliance with responsible-use requirements with supporting evidence; and a description of the facility's oversight and validation process. This combines annual regulatory submission with annual compliance certification.
§ 3504. Artificial intelligence compliance statements. (a) Compliance statement required.--A facility using artificial intelligence-based algorithms for clinical decision making shall annually file with the department in the form and manner prescribed by the department an artificial intelligence compliance statement. (b) Contents.--Each compliance statement must: (1) Summarize the function and scope of artificial intelligence-based algorithms used for clinical decision making. (2) Provide a logic or decision tree of artificial intelligence-based algorithms used for clinical decision making. (3) Provide a description of each training data set used by artificial intelligence-based algorithms for clinical decision making, including the source of the data. (4) Attest that the artificial intelligence-based algorithms and the training data sets comply with section 3503 (relating to responsible use) and provide evidence of the compliance. (5) Describe the process of the facility for overseeing and validating the performance and compliance of the artificial intelligence-based algorithms in accordance with section 3503.
Pending 2026-10-06
R-02.2
35 Pa.C.S. § 3507
Plain Language
The Department of Health may request additional information and evidence from facilities at any time regarding their AI disclosure practices, responsible-use compliance, and compliance statements. Facilities must be prepared to produce supporting documentation on demand. This creates a continuing obligation to maintain records in a form that can be produced to the regulator upon request.
§ 3507. Oversight. The department may request additional information and evidence from a facility regarding the items provided under sections 3502 (relating to disclosure), 3503 (relating to responsible use) and 3504 (relating to artificial intelligence compliance statements) that are necessary to ensure compliance with this chapter.
Pending 2026-10-06
R-02.1, R-02.4
40 Pa.C.S. § 5204(a)-(b)
Plain Language
Insurers using AI in utilization review must annually file a compliance statement with the Insurance Department. The statement must summarize AI algorithm function and scope, provide decision trees, describe training data sets and sources, attest to compliance with responsible-use requirements with supporting evidence, and describe the insurer's oversight and validation process. This combines annual regulatory reporting with compliance certification.
§ 5204. Artificial intelligence compliance statements. (a) Compliance statement required.--An insurer using artificial intelligence-based algorithms in the utilization review process shall annually file with the department in the form and manner prescribed by the department an artificial intelligence compliance statement. (b) Contents.--Each compliance statement must: (1) Summarize the function and scope of the artificial intelligence-based algorithms used for utilization review. (2) Provide a logic or decision tree of artificial intelligence-based algorithms used for utilization review. (3) Provide a description of each training data set used by artificial intelligence-based algorithms for utilization review, including the source of the data. (4) Attest that the artificial intelligence-based algorithms and the training data sets comply with section 5203 (relating to responsible use) and provide evidence of the compliance. (5) Describe the process of the insurer for overseeing and validating the performance and compliance of the artificial intelligence-based algorithms in accordance with section 5203.
Pending 2026-10-06
R-02.2
40 Pa.C.S. § 5208
Plain Language
The Insurance Department may request additional information and evidence from insurers at any time regarding their AI disclosure, responsible use, and compliance statements. Insurers must maintain documentation in a form that can be produced on demand to the regulator.
§ 5208. Oversight. The department may request additional information and evidence from an insurer regarding the items provided under sections 5202 (relating to disclosure), 5203 (relating to responsible use) and 5204 (relating to artificial intelligence compliance statements) that are necessary to ensure compliance with this chapter.
Pending 2026-10-06
R-02.1, R-02.4
40 Pa.C.S. § 5304(a)-(b)
Plain Language
MA/CHIP managed care plans using AI in utilization review must annually file a compliance statement with the Department of Human Services, covering algorithm function and scope, decision trees, training data descriptions and sources, compliance attestation with evidence, and a description of the plan's oversight and validation processes. This parallels the insurer compliance statement under § 5204.
§ 5304. Artificial intelligence compliance statements. (a) Compliance statement required.--An MA or CHIP managed care plan using artificial intelligence-based algorithms in the utilization review process shall annually file with the department, in the form and manner prescribed by the department, an artificial intelligence compliance statement. (b) Contents.--Each compliance statement must: (1) Summarize the function and scope of the artificial intelligence-based algorithms used for utilization review. (2) Provide a logic or decision tree of artificial intelligence-based algorithms used for utilization review. (3) Provide a description of each training data set used by artificial intelligence-based algorithms for utilization review, including the source of the data. (4) Attest that the artificial intelligence-based algorithms and the training data sets comply with section 5303 (relating to responsible use) and provide evidence of the compliance. (5) Describe the process of the MA or CHIP managed care plan for overseeing and validating the performance and compliance of the artificial intelligence-based algorithms in accordance with section 5303.
Pending 2026-10-06
R-02.2
40 Pa.C.S. § 5308
Plain Language
The Department of Human Services may request additional information and evidence from MA/CHIP managed care plans at any time regarding their AI disclosure, responsible use, and compliance statements. Plans must maintain documentation ready for regulatory production on demand.
§ 5308. Oversight. The department may request additional information and evidence from an MA or CHIP managed care plan regarding the items provided under section 5302 (relating to disclosure), 5303 (relating to responsible use) and 5304 (relating to artificial intelligence compliance statements) that are necessary to ensure compliance with this chapter.
Pending 2026-04-01
R-02.3
12 Pa.C.S. § 7105(e)-(f)
Plain Language
Suppliers must file their written chatbot disclosure policy with the Bureau of Consumer Protection, along with the supplier's name and address, the chatbot's name, and an annual filing fee set by the bureau. This is a mandatory, registration-style obligation: the filing and the annual fee are required of every covered supplier. Suppliers may also voluntarily submit policy revisions and any other documentation they deem appropriate. The bureau prescribes the form and manner of filing.
(e) Filing.--A supplier shall file the policy described under subsection (a) with the bureau, in the form and manner as prescribed by the bureau, along with: (1) The name and address of the supplier. (2) The name of the chatbot. (3) An annual filing fee as prescribed by the bureau. (f) Additional information.--A supplier may provide to the bureau, in the form and manner prescribed by the bureau: (1) Any revision to the policy described under subsection (a) and filed in accordance with subsection (e). (2) Any other documentation that the supplier deems appropriate to provide.
Pending 2027-01-09
R-02.1, R-02.4
35 Pa.C.S. § 3504(a)-(b)
Plain Language
Facilities using AI for clinical decision making must annually file an AI compliance statement with the Department of Health. The statement must include: a summary of each AI algorithm's function and scope, a logic or decision tree, a description of each training dataset and its source, an attestation with evidence of compliance with responsible use requirements, and a description of the facility's oversight and validation process. This is a comprehensive annual regulatory filing combining compliance certification with substantive algorithm documentation.
(a) Compliance statement required.--A facility using artificial-intelligence-based algorithms for clinical decision making shall annually file with the department in the form and manner prescribed by the department an artificial intelligence compliance statement. (b) Contents.--A compliance statement must: (1) Summarize the function and scope of artificial-intelligence-based algorithms used for clinical decision making. (2) Provide a logic or decision tree of artificial-intelligence-based algorithms used for clinical decision making. (3) Provide a description of each training data set used by artificial-intelligence-based algorithms for clinical decision making, including the source of the data. (4) Attest that the artificial-intelligence-based algorithms and the training data sets comply with section 3503 (relating to responsible use) and provide evidence of the compliance. (5) Describe the process of the facility for overseeing and validating the performance and compliance of the artificial-intelligence-based algorithms in accordance with section 3503.
Pending 2027-01-09
R-02.1, R-02.4
40 Pa.C.S. § 5204(a)-(b)
Plain Language
Insurers using AI in utilization review must annually file an AI compliance statement with the Insurance Department. Contents mirror the facility filing requirement: algorithm function and scope summary, logic/decision tree, training data descriptions with sources, compliance attestation with evidence, and description of oversight and validation processes.
(a) Compliance statement required.--An insurer using artificial-intelligence-based algorithms in the utilization review process shall annually file with the department in the form and manner prescribed by the department an artificial intelligence compliance statement. (b) Contents.--A compliance statement must: (1) Summarize the function and scope of the artificial-intelligence-based algorithms used for utilization review. (2) Provide a logic or decision tree of artificial-intelligence-based algorithms used for utilization review. (3) Provide a description of each training data set used by artificial-intelligence-based algorithms for utilization review, including the source of the data. (4) Attest that the artificial-intelligence-based algorithms and the training data sets comply with section 5203 (relating to responsible use) and provide evidence of the compliance. (5) Describe the process of the insurer for overseeing and validating the performance and compliance of the artificial-intelligence-based algorithms in accordance with section 5203.
Pending 2027-01-09
R-02.1, R-02.4
40 Pa.C.S. § 5304(a)-(b)
Plain Language
MA or CHIP managed care plans using AI in utilization review must annually file an AI compliance statement with the Department of Human Services. Contents parallel the facility and insurer filing requirements.
(a) Compliance statement required.--An MA or CHIP managed care plan using artificial-intelligence-based algorithms in the utilization review process shall annually file with the department, in the form and manner prescribed by the department, an artificial intelligence compliance statement. (b) Contents.--A compliance statement must: (1) Summarize the function and scope of the artificial-intelligence-based algorithms used for utilization review. (2) Provide a logic or decision tree of artificial-intelligence-based algorithms used for utilization review. (3) Provide a description of each training data set used by artificial-intelligence-based algorithms for utilization review, including the source of the data. (4) Attest that the artificial-intelligence-based algorithms and the training data sets comply with section 5303 (relating to responsible use) and provide evidence of the compliance. (5) Describe the process of the MA or CHIP managed care plan for overseeing and validating the performance and compliance of the artificial-intelligence-based algorithms in accordance with section 5303.
Pending 2027-01-09
R-02.2
35 Pa.C.S. § 3507
Plain Language
The Department of Health may at any time request additional information and evidence from a facility regarding its AI disclosures, responsible use practices, and compliance statements. Facilities must be prepared to produce documentation on demand to support their regulatory filings.
The department may request additional information and evidence from a facility regarding the items provided under sections 3502 (relating to disclosure), 3503 (relating to responsible use) and 3504 (relating to artificial intelligence compliance statements) that are necessary to ensure compliance with this chapter.
Pending 2027-01-09
R-02.2
40 Pa.C.S. § 5208
Plain Language
The Insurance Department may request additional information and evidence from insurers regarding their AI disclosures, responsible use practices, and compliance statements at any time to ensure compliance.
The department may request additional information and evidence from an insurer regarding the items provided under sections 5202 (relating to disclosure), 5203 (relating to responsible use) and 5204 (relating to artificial intelligence compliance statements) that are necessary to ensure compliance with this chapter.
Pending 2027-01-09
R-02.2
40 Pa.C.S. § 5308
Plain Language
The Department of Human Services may request additional information and evidence from MA or CHIP managed care plans regarding their AI disclosures, responsible use practices, and compliance statements to ensure compliance.
The department may request additional information and evidence from an MA or CHIP managed care plan regarding the items provided under section 5302 (relating to disclosure), 5303 (relating to responsible use) and 5304 (relating to artificial intelligence compliance statements) that are necessary to ensure compliance with this chapter.
Pending 2026-01-21
R-02.1
R.I. Gen. Laws § 27-84-3(a)(1)
Plain Language
Insurers must proactively disclose to OHIC and DBR comprehensive information about how they use AI in claims and coverage management. This includes the types of AI models used, the role AI plays in decision-making, training datasets, performance metrics, governance and risk management policies, and which specific claims and coverage decisions AI made or substantially influenced. This is a broad regulatory transparency obligation — not a one-time filing but an ongoing disclosure duty covering all AI use in the claims and coverage lifecycle.
Insurers subject to this chapter shall disclose to the office of the health insurance commissioner ("OHIC") and the department of business regulation ("DBR") how they use artificial intelligence to manage healthcare claims and coverage including, but not limited to, the types of artificial intelligence models used, the role of artificial intelligence in the decision-making process, training datasets, performance metrics, governance and risk management policies, and the decisions on healthcare claims and coverage where artificial intelligence made, or was a substantial factor in making, the decisions.
Pending 2026-01-21
R-02.2
R.I. Gen. Laws § 27-84-3(a)(2)
Plain Language
Upon request from OHIC or DBR, insurers must produce all information — including documents and software — necessary for the regulators to enforce the chapter. This is an on-demand production obligation, not a scheduled filing. Notably, the scope includes software itself, meaning regulators could request access to or copies of the AI systems used in claims and coverage decisions. Insurers should maintain documentation and system access in a form that can be produced promptly upon request.
Insurers shall submit to the office of the health insurance commissioner and the department of business regulation, upon request, all information, including documents and software, that permits enforcement of this chapter.
Pending 2026-01-09
R-02.1
R.I. Gen. Laws § 27-84-3(a)(1)
Plain Language
Insurers must proactively disclose to OHIC and DBR how they use AI to manage healthcare claims and coverage. The disclosure must cover at minimum: the types of AI models used, the role AI plays in the decision-making process, training datasets, performance metrics, governance and risk management policies, and the specific claims and coverage decisions where AI made or substantially contributed to the outcome. This is a comprehensive transparency obligation to the regulator — not a one-time filing but a disclosure of the insurer's overall AI use practices.
Insurers subject to this chapter shall disclose to the office of the health insurance commissioner ("OHIC") and the department of business regulation ("DBR") how they use artificial intelligence to manage healthcare claims and coverage including, but not limited to, the types of artificial intelligence models used, the role of artificial intelligence in the decision-making process, training datasets, performance metrics, governance and risk management policies, and the decisions on healthcare claims and coverage where artificial intelligence made, or was a substantial factor in making, the decisions.
Pending 2026-01-09
R-02.2
R.I. Gen. Laws § 27-84-3(a)(2)
Plain Language
Upon request from OHIC or DBR, insurers must produce all information — including documents and software — necessary to enforce the chapter. This is notably broad: it encompasses not just documentation but the AI software itself, meaning regulators can demand access to the actual AI tools used in claims and coverage management. Insurers should maintain their AI systems and associated documentation in a state of readiness for production at any time.
Insurers shall submit to the office of the health insurance commissioner and the department of business regulation, upon request, all information, including documents and software, that permits enforcement of this chapter.
Pending 2025-01-01
R-02.2
Section 37-31-20(G)
Plain Language
The Attorney General may request that a developer produce the deployer-facing documentation described in Section 37-31-20(B) within 90 days. Developers may designate materials as proprietary or trade secret, and attorney-client privilege and work-product protections are preserved. The disclosed documentation is exempt from FOIA. This requires developers to maintain their documentation in a form producible to the AG on demand.
(G) The Attorney General may require that a developer disclose to the Attorney General, no later than ninety days after the request and in a form and manner prescribed by the Attorney General, the statement or documentation described in subsection (B). The Attorney General may evaluate such statement or documentation to ensure compliance with this chapter, and the statement or documentation is not subject to disclosure under the South Carolina Freedom of Information Act. In a disclosure made pursuant to this subsection, a developer may designate the statement or documentation as including proprietary information or a trade secret. To the extent that any information contained in the statement or documentation includes information subject to attorney-client privilege or work-product protection, the disclosure does not constitute a waiver of the privilege or protection.
Pending 2025-01-01
R-02.2
Section 37-31-30(I)
Plain Language
The Attorney General may request that a deployer produce its risk management policy, impact assessments, and associated records within 90 days. Deployers may designate materials as proprietary or trade secret, and attorney-client privilege and work-product protections are preserved. All disclosed materials are exempt from FOIA. This requires deployers to maintain documentation in a form producible to the AG on demand.
(I) The Attorney General may require that a deployer, or a third party contracted by the deployer, disclose to him, no later than ninety days after the request and in a form and manner prescribed by him, the risk management policy implemented pursuant to subsection (B), the impact assessment completed pursuant to subsection (C), or the records maintained pursuant to subsection (C)(6). The Attorney General may evaluate the risk management policy, impact assessment, or records to ensure compliance with this chapter, and the risk management policy, impact assessment, and records are not subject to disclosure under the South Carolina Freedom of Information Act. In a disclosure made pursuant to this subsection, a deployer may designate the statement or documentation as including proprietary information or a trade secret. To the extent that any information contained in the risk management policy, impact assessment, or records includes information subject to attorney-client privilege or work-product protection, the disclosure does not constitute a waiver of the privilege or protection.
Pending 2026-07-01
R-02.1
Section 3
Plain Language
Health carriers using AI for utilization review — directly or through contracted entities — must compile and submit an annual report to the Executive Board of the Legislative Research Council by December 1 each year. The report must detail how AI tools were used in the utilization review process during the preceding fiscal year, including the nature and degree of human review and oversight applied to affirm or negate determinations. This is a legislative oversight submission rather than a regulatory-agency filing — the recipient is the Executive Board of the Legislative Research Council, not the Division of Insurance.
Any health carrier that makes determinations or provides advice about third-party payment for any health care services using an artificial intelligence, algorithm, or other software tool for the purpose of utilization review or that contracts with or otherwise works through an entity that uses an artificial intelligence, algorithm, or other software tool for the purpose of utilization review shall compile an annual report detailing how, during the preceding fiscal year, the artificial intelligence, algorithm, or other software tool was used in the utilization review process and the nature and degree of human review and oversight that was used to affirm or negate any determinations. The report must be forwarded to the Executive Board of the Legislative Research Council on or before December first of each year.
Pending 2026-07-01
R-02.2
Section 4
Plain Language
The Division of Insurance has authority to inspect a health carrier's automated utilization review system at any time — no notice or triggering event is required. This means health carriers must maintain their AI systems and associated documentation in a state that can withstand regulatory inspection on demand. If the Division finds noncompliance, it must notify the Attorney General, who may then direct the carrier to cease and desist from noncompliant activities. This provision creates the enforcement mechanism for the substantive requirements in Sections 1 and 2 but also imposes an implicit readiness obligation — carriers must be able to demonstrate compliance upon inspection.
The Division of Insurance may, at any time, inspect a health carrier's automated system to ensure that the health carrier's use of artificial intelligence, algorithms, or other software tools is in compliance with sections 1 and 2 of this Act. If the division determines that the automated system is not in compliance, the division shall notify the attorney general who may direct the health carrier to cease and desist from engaging in further noncompliant activities.
Pending 2025-07-01
R-02.1
9 V.S.A. § 4193f(a)-(b)
Plain Language
Developers and deployers must file reports with the Attorney General before deployment and then annually or upon each substantial change, whichever comes first. Each report must be accompanied by the most recent independent audit and a legal attestation that either the system complies with the subchapter, or that it may violate or does violate provisions but includes a remediation plan and summary. This reporting is mandatory regardless of audit findings — even a system found to have issues must be reported with a remediation plan rather than withheld.
(a) Every developer and deployer of an automated decision system used in a consequential decision shall comply with the reporting requirements of this section. Regardless of final findings, reports shall be filed with the Attorney General prior to deployment of an automated decision system used in a consequential decision and then annually, or after each substantial change to the system, whichever comes first. (b) Together with each report required to be filed under this section, developers and deployers shall file with the Attorney General a copy of the last completed independent audit required by this subchapter and a legal attestation that the automated decision system used in a consequential decision: (1) does not violate any provision of this subchapter; or (2) may violate or does violate one or more provisions of this article, that there is a plan of remediation to bring the automated decision system into compliance with this subchapter, and a summary of the plan of remediation.
Pending 2025-07-01
R-02.1
9 V.S.A. § 4193f(c)
Plain Language
Developers must file a detailed report with the Attorney General covering nine categories: system description (including software stack, purpose, and intended uses), intended outputs and permissible secondary uses, training methods and data (including preprocessing, dataset descriptions, data quality, breadth assessment, and legal compliance steps), use and data management policies, information enabling deployers to understand system outputs and monitor for compliance, information enabling deployers to meet their own reporting obligations under subsection (d), system capabilities and limitations including safeguards, an internal risk assessment covering discrimination, reliability, privacy, and security risks with mitigation testing, and monitoring recommendations. This is an exceptionally comprehensive developer reporting obligation that combines training data disclosure, model documentation, and risk assessment into a single filing.
(c) Developers of automated decision systems shall file with the Attorney General a report containing the following: (1) a description of the system including: (A) a description of the system's software stack; (B) the purpose of the system and its expected benefits; and (C) the system's current and intended uses, including what consequential decisions it will support and what stakeholders will be impacted; (2) the intended outputs of the system and whether the outputs can be or are otherwise appropriate to be used for any purpose not previously articulated; (3) the methods for training of their models including: (A) any pre-processing steps taken to prepare datasets for the training of a model underlying an automated decision system; (B) descriptions of the datasets upon which models were trained and evaluated, how and why datasets were collected and the sources of those datasets, and how that training data will be used and maintained; (C) the quality and appropriateness of the data used in the automated decision system's design, development, testing, and operation; (D) whether the data contains sufficient breadth to address the range of real-world inputs the automated decision system might encounter and how any data gaps have been addressed; and (E) steps taken to ensure compliance with privacy, data privacy, data security, and copyright laws; (4) use and data management policies; (5) any other information necessary to allow the deployer to understand the outputs and monitor the system for compliance with this subchapter; (6) any other information necessary to allow the deployer to comply with the requirements of subsection (d) of this section; (7) a description of the system's capabilities and any developer-imposed limitations, including capabilities outside of its intended use, when the system should not be used, any safeguards or guardrails in place to protect against unintended, inappropriate, or disallowed uses, and testing of any safeguards or guardrails; 
(8) an internal risk assessment including documentation and results of testing conducted to identify all reasonably foreseeable risks related to algorithmic discrimination, validity and reliability, privacy and autonomy, and safety and security, as well as actions taken to address those risks, and subsequent testing to assess the efficacy of actions taken to address risks; and (9) whether the system should be monitored and, if so, how the system should be monitored.
Pending 2025-07-01
R-02.1
9 V.S.A. § 4193f(d)
Plain Language
Deployers must file a separate detailed report with the Attorney General covering eight categories: system description, intended outputs, revenue/monetization plans, the system's decision-making role (autonomous vs. supportive), capabilities and limitations with safeguards, a consumer cost-benefit assessment, an internal risk assessment covering discrimination, accuracy, privacy, and security risks with mitigation documentation, and monitoring plans. The deployer report differs from the developer report by including monetization disclosure and consumer cost-benefit analysis rather than training data methodology. Both reports are mandatory and must be filed on the same schedule (pre-deployment and annually or upon substantial change).
(d) Deployers of automated decision systems used in consequential decisions shall file with the Attorney General a report containing the following: (1) a description of the system, including: (A) a description of the system's software stack; (B) the purpose of the system and its expected benefits; and (C) the system's current and intended uses, including what consequential decisions it will support and what stakeholders will be impacted; (2) the intended outputs of the system and whether the outputs can be or are otherwise appropriate to be used for any purpose not previously articulated; (3) whether the deployer collects revenue or plans to collect revenue from use of the automated decision system in a consequential decision and, if so, how it monetizes or plans to monetize use of the system; (4) whether the system is designed to make consequential decisions itself or whether and how it supports consequential decisions; (5) a description of the system's capabilities and any deployer-imposed limitations, including capabilities outside of its intended use, when the system should not be used, any safeguards or guardrails in place to protect against unintended, inappropriate, or disallowed uses, and testing of any safeguards or guardrails; (6) an assessment of the relative benefits and costs to the consumer given the system's purpose, capabilities, and probable use cases; (7) an internal risk assessment including documentation and results of testing conducted to identify all reasonably foreseeable risks related to algorithmic discrimination, accuracy and reliability, privacy and autonomy, and safety and security, as well as actions taken to address those risks, and subsequent testing to assess the efficacy of actions taken to address risks; and (8) whether the system should be monitored and, if so, how the system should be monitored.
Pending 2025-07-01
R-02.1
9 V.S.A. § 4193f(f)
Plain Language
Systems already deployed for consequential decisions as of July 1, 2025 have an 18-month grace period — developers and deployers must complete and file all required reports and the independent audit no later than January 1, 2027. This transition provision accommodates existing systems, which could not retroactively produce the pre-deployment audits and reports the subchapter otherwise requires.
(f) For automated decision systems already in deployment for use in consequential decisions on or before July 1, 2025, developers and deployers shall not later than 18 months after July 1, 2025 complete and file the reports and complete the independent audit required by this subchapter.
Pending 2025-07-01
R-02.2
9 V.S.A. § 4193g(c)
Plain Language
The Attorney General may at any time require developers or deployers to disclose their risk management policy and program in a prescribed form, and may evaluate the program for compliance. This is a regulatory-on-demand disclosure obligation — entities must be prepared to produce their risk management documentation upon AG request and in the AG's specified format.
(c) The Attorney General may require a developer or a deployer to disclose the risk management policy and program implemented pursuant to subsection (a) of this section in a form and manner prescribed by the Attorney General. The Attorney General may evaluate the risk management policy and program to ensure compliance with this section.
Pre-filed 2025-07-01
R-02.1
9 V.S.A. § 4193e(a)
Plain Language
Before deploying any inherently dangerous AI system in Vermont, deployers must submit an AI System Safety and Impact Assessment to the Division of Artificial Intelligence. The assessment must be resubmitted every two years and also whenever the deployer makes a material and substantial change to the system's purpose or the type of data it processes or uses for training. This is a pre-deployment gate — deployment cannot proceed until the assessment is filed.
(a) Each deployer of an inherently dangerous artificial intelligence system shall: (1) submit to the Division of Artificial Intelligence an Artificial Intelligence System Safety and Impact Assessment prior to deploying the inherently dangerous artificial intelligence system in this State, and every two years thereafter; and (2) submit to the Division of Artificial Intelligence an updated Artificial Intelligence System Safety and Impact Assessment if the deployer makes a material and substantial change to the inherently dangerous artificial intelligence system that includes: (A) the purpose for which the system is used for; or (B) the type of data the system processes or uses for training purposes.
Pre-filed 2025-07-01
R-02.1
9 V.S.A. § 4193e(b)
Plain Language
The Safety and Impact Assessment submitted to the Division of Artificial Intelligence must cover thirteen specific elements: system purpose, deployment context and use cases, benefits, foreseeable misuse risks and mitigations, whether the model is proprietary, training data descriptions, whether training data was processed to remove personal information, copyrighted information, and 'do not train' data, transparency measures including user notification, third-party system and dataset dependencies, whether the developer has disclosed testing results and safe-use parameters, post-deployment input data descriptions, post-deployment monitoring and oversight processes, and the system's impact on consequential decisions or biometric data collection. This is a comprehensive documentation requirement that effectively requires deployers to understand and document the entire lifecycle of the AI system.
(b) Each Artificial Intelligence System Safety and Impact Assessment pursuant to subsection (a) of this section shall include, with respect to the inherently dangerous artificial intelligence system: (1) the purpose of the system; (2) the deployment context and intended use cases; (3) the benefits of use; (4) any foreseeable risk of unintended or unauthorized uses and the steps taken, to the extent reasonable, to mitigate the risk; (5) whether the model is proprietary; (6) a description of the data the system processes or uses for training purposes; (7) whether the data the system uses for training purposes has been processed to remove personal information, copyrighted information, and do not train data; (8) a description of transparency measures, including identifying to individuals when the system is in use; (9) identification of any third-party artificial intelligence systems or datasets the deployer relies on to train or operate the system, if applicable; (10) whether the developer of the system, if different than the deployer, disclosed the information pursuant to this subsection as well as the results of testing, vulnerabilities, and the parameters for safe and intended use; (11) a description of the data that the system, once deployed, processes as inputs; (12) a description of postdeployment monitoring and user safeguards, including a description of the oversight process in place to address issues as issues arise; and (13) a description of how the model impacts consequential decisions or the collection of biometric data.
Pre-filed 2025-07-01
R-02.1
9 V.S.A. § 4193e(c)
Plain Language
In the first year after deploying a high-risk AI system, deployers must submit testing results to the Division of Artificial Intelligence at three intervals: one month, six months, and twelve months after deployment. Each submission must show the reliability of the system's results, any variance over the testing periods, and strategies for mitigating variances. This post-deployment testing and reporting obligation applies specifically to high-risk AI systems — a subset of the broader 'inherently dangerous' category — and is a first-year-only requirement distinct from the biennial safety and impact assessment.
(c) Each deployer of a high-risk artificial intelligence system shall submit a one-, six-, and 12-month testing result to the Division of Artificial Intelligence showing the reliability of the results generated by the system, any variance in those results over the testing periods, and any mitigation strategies for variances, in the first year of deployment.
Pre-filed 2025-07-01
R-02.2
9 V.S.A. § 4193e(d)
Plain Language
When the Division of Artificial Intelligence learns a deployer is not in compliance with assessment requirements, it must immediately notify the deployer in writing and order submission of the required assessment. If the deployer fails to submit within 45 days, the Division refers the violation to the Attorney General. This creates a 45-day cure window between the Division's noncompliance notice and Attorney General referral. Deployers should treat the initial Division notice as an urgent compliance demand — the 45-day period is a hard deadline, not a suggestion.
(d) Upon the Division of Artificial Intelligence receiving notice that a deployer of an inherently dangerous artificial intelligence system is not in compliance with the requirements under this section, the Division shall immediately notify the deployer of the finding in writing and order the deployer to submit the assessment required pursuant to subsection (a) of this section. If the deployer fails to submit the assessment on or before 45 days after the deployer receives the notice, the Division of Artificial Intelligence shall notify the Attorney General in writing of the violation.
Pre-filed 2025-07-01
R-02.2
9 V.S.A. § 4193c(c)
Plain Language
The Attorney General may issue a civil investigative demand (CID) when there is reasonable cause to believe a violation has occurred. Developers and deployers must produce responsive documents but may redact trade secrets and legally protected information — provided they affirmatively state that the basis for withholding is a trade secret claim. Attorney-client privilege and work-product protections are preserved and disclosure does not waive them. All materials produced to the AG under a CID are exempt from public records disclosure. Practically, entities should maintain documentation in a form that allows rapid response to a CID, with trade-secret designations pre-identified.
(c)(1) Whenever the Attorney General has reasonable cause to believe that any person has engaged in or is engaging in any violation of this subchapter, the Attorney General may issue a civil investigative demand. (2) In rendering and furnishing any information requested pursuant to a civil investigative demand, a developer or deployer may redact or omit any trade secrets or information protected from disclosure by State or federal law. If a developer or deployer refuses to disclose or redacts or omits information based on the exemption from disclosure of trade secrets, the developer or deployer shall affirmatively state to the Attorney General that the basis for nondisclosure, redaction, or omission is because the information is a trade secret. (3) To the extent that any information requested pursuant to a civil investigative demand is subject to attorney-client privilege or work-product protection, disclosure of the information shall not constitute a waiver of the privilege or protection. (4) Any information, statement, or documentation provided to the Attorney General pursuant to this subsection shall be exempt from public inspection and copying under the Public Records Act.
Passed 2026-07-01
R-02.1
18 V.S.A. § 9764(c)
Plain Language
To activate the affirmative defense, suppliers must file with the Attorney General's office: the supplier's name and address, the chatbot's name, the comprehensive written policy described in § 9764(b), and a $100 filing fee. Suppliers may also voluntarily file policy revisions and additional documentation. While technically optional (the filing is part of an affirmative defense, not a standalone mandate), the practical incentive to file is strong for any supplier that wants regulatory protection.
(c) To file a policy with the Office of the Attorney General under this section, a supplier of a mental health chatbot: (1) shall provide to the Office, in the form and manner prescribed by the Office: (A) the name and address of the supplier; (B) the name of the mental health chatbot supplied by the supplier; (C) the written policy described in subsection (b) of this section; and (D) a $100.00 filing fee; and (2) may provide to the Office: (A) any revisions to a policy filed under this section; and (B) any other documentation that the supplier elects to provide.
Passed 2022-07-01
R-02.1
3 V.S.A. § 3303(a)(8); Sec. 4
Plain Language
The Secretary of Digital Services must include an annual update to the automated decision systems inventory in the Agency's annual report to the General Assembly, submitted concurrent with the Governor's annual budget request. This ensures the inventory is not a one-time exercise but a continuing reporting obligation to the legislature. Additionally, Sec. 4 requires a one-time report on the inventory to legislative committees by December 1, 2022, with recommendations on how the inventory should be maintained going forward.
(8) an annual update to the inventory required by section 3305 of this title.
Pending 2026-07-01
R-02.4
§ 16-5EE-9(a)-(b)
Plain Language
By December 31 each year, every covered medical facility, research facility, company, or nonprofit organization must certify to the Attorney General that it is in compliance with the Genomic Information Privacy Act. The certification must be submitted by an attorney representing the organization. This is a continuing annual obligation — not a one-time filing.
(a) Not later than December 31 of each year, a medical facility, research facility, company, or nonprofit organization subject to this §16-5EE-1 et seq. shall certify to the attorney general that the facility, company, or organization is in compliance with this chapter. (b) An attorney representing a medical facility, research facility, company, or nonprofit organization subject to this chapter shall submit the certification required under Subsection §16-5EE-8(a).