R-01
Reporting & Regulatory Submissions
Incident Reporting
Applies to: Developer · Deployer · Professional Sector · Foundation Model · Healthcare
Bills — Enacted: 4 unique bills
Bills — Proposed: 21
Last Updated: 2026-03-29
Core Obligation

Operators of AI systems must report significant safety incidents to designated regulatory authorities within specified timeframes. Timeframes vary by jurisdiction and incident severity — typically 15 days for standard incidents and 24–72 hours for incidents posing imminent risk of death or serious injury. Some jurisdictions also require notification to affected individuals.
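These tiered timeframes amount to a severity-to-deadline lookup. A minimal Python sketch of that mapping, with the tier names and the specific windows as illustrative assumptions rather than any one statute's terms:

```python
from datetime import datetime, timedelta

# Illustrative tiers only; actual categories and windows vary by jurisdiction.
REPORTING_WINDOWS = {
    "standard": timedelta(days=15),
    "imminent_risk": timedelta(hours=24),  # lower bound of the typical 24-72h range
}

def report_due(discovered_at: datetime, severity: str) -> datetime:
    """Filing deadline for an incident discovered at `discovered_at`."""
    return discovered_at + REPORTING_WINDOWS[severity]

print(report_due(datetime(2026, 3, 1, 9, 0), "standard"))      # 2026-03-16 09:00:00
print(report_due(datetime(2026, 3, 1, 9, 0), "imminent_risk")) # 2026-03-02 09:00:00
```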

Sub-Obligations: 3
Bills That Map This Requirement: 25 bills
Pending 2027-07-01
Bus. & Prof. Code § 22612(d)(8)
Plain Language
Operators must provide a public-facing incident reporting mechanism that allows any third party to report child safety incidents directly to the operator. The mechanism must also allow third parties to access other reports that have been submitted through it — creating a degree of public transparency around reported child safety incidents. This is distinct from the AG's separate complaint mechanism under Section 22615 and from the operator's internal crisis response protocol.
(8) A public incident reporting mechanism that enables a third party to report directly to the operator an incident regarding a child safety risk and to access other reports made through that reporting mechanism.
Enacted 2026-01-01
R-01.1
Bus. & Prof. Code § 22757.13(c)(1)-(4)
Plain Language
Frontier developers must report critical safety incidents to the Office of Emergency Services within 15 days of discovery. If the incident poses an imminent risk of death or serious physical injury, the developer must additionally notify an appropriate authority — such as a law enforcement or public safety agency — within 24 hours. Developers may file amended reports as new information emerges after the initial filing. Reporting critical safety incidents involving non-frontier foundation models is encouraged but not required.
(1) Subject to paragraph (2), a frontier developer shall report any critical safety incident pertaining to one or more of its frontier models to the Office of Emergency Services within 15 days of discovering the critical safety incident. (2) If a frontier developer discovers that a critical safety incident poses an imminent risk of death or serious physical injury, the frontier developer shall disclose that incident within 24 hours to an authority, including any law enforcement agency or public safety agency with jurisdiction, that is appropriate based on the nature of that incident and as required by law. (3) A frontier developer that discovers information about a critical safety incident after filing the initial report required by this subdivision may file an amended report. (4) A frontier developer is encouraged, but not required, to report critical safety incidents pertaining to foundation models that are not frontier models.
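The 24-hour disclosure under paragraph (2) is cumulative with, not a substitute for, the 15-day OES report. A sketch of that timing logic in Python (the recipient strings and function shape are illustrative, not statutory):

```python
from datetime import datetime, timedelta

def critical_incident_deadlines(discovered_at: datetime, imminent_risk: bool):
    """Timing sketch of Bus. & Prof. Code § 22757.13(c)(1)-(2)."""
    deadlines = [("Office of Emergency Services", discovered_at + timedelta(days=15))]
    if imminent_risk:
        # Paragraph (2) adds a 24-hour disclosure to an appropriate authority;
        # the 15-day OES report is still owed.
        deadlines.append(("appropriate authority (e.g., law enforcement)",
                          discovered_at + timedelta(hours=24)))
    return deadlines

for recipient, due in critical_incident_deadlines(datetime(2026, 7, 1), imminent_risk=True):
    print(f"notify {recipient} by {due}")
```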
Enacted 2026-06-30
R-01.3
C.R.S. § 6-1-1703(7)
Plain Language
If a deployer discovers that a deployed high-risk AI system has actually caused algorithmic discrimination, the deployer must notify the attorney general within 90 days of discovery, in the AG's prescribed form. This is a post-discovery incident reporting obligation, not a periodic reporting requirement. The 90-day window runs from actual discovery, and the deployer must not unreasonably delay even within that window.
(7) If a deployer deploys a high-risk artificial intelligence system on or after June 30, 2026, and subsequently discovers that the high-risk artificial intelligence system has caused algorithmic discrimination, the deployer, without unreasonable delay, but no later than ninety days after the date of the discovery, shall send to the attorney general, in a form and manner prescribed by the attorney general, a notice disclosing the discovery.
Pending 2025-07-01
R-01.3
O.C.G.A. § 10-16-2(e)(2)
Plain Language
When a developer discovers — through its own testing or via a credible deployer report — that its automated decision system has caused or is likely to have caused algorithmic discrimination, the developer must notify the Attorney General and all known deployers or other developers within 90 days. This is a mandatory disclosure triggered by discovery of actual or likely discrimination, not a routine reporting obligation.
A developer of an automated decision system shall disclose to the Attorney General, in a form and manner prescribed by the Attorney General, and to all known deployers or other developers of the automated decision system, any known or reasonably foreseeable risks of algorithmic discrimination arising from the intended uses of the automated decision system without unreasonable delay but no later than 90 days after the date on which: (A) The developer discovers through the developer's ongoing testing and analysis that the developer's automated decision system has been deployed and has caused or is reasonably likely to have caused algorithmic discrimination; or (B) The developer receives from a deployer a credible report that the automated decision system has been deployed and has caused algorithmic discrimination.
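The obligation has two alternative triggers (subparagraphs (A) and (B)) and two mandatory audiences. A hedged sketch of that structure; the flag names and recipient list are illustrative:

```python
from datetime import date, timedelta

def disclosure_required(found_via_own_testing: bool, credible_deployer_report: bool) -> bool:
    # Either trigger suffices: (A) the developer's own testing and analysis, or
    # (B) a credible report from a deployer.
    return found_via_own_testing or credible_deployer_report

def outer_deadline(trigger_date: date) -> date:
    # 90-day outer limit; "without unreasonable delay" can require filing sooner.
    return trigger_date + timedelta(days=90)

if disclosure_required(False, credible_deployer_report=True):
    recipients = ["Attorney General", "known deployer A", "known developer B"]  # illustrative
    print(recipients, "notice due by", outer_deadline(date(2026, 1, 10)))
```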
Pending 2025-07-01
R-01.3
O.C.G.A. § 10-16-7
Plain Language
Deployers that discover their automated decision system has caused algorithmic discrimination must notify the Attorney General within 90 days of discovery. This parallels the developer notification obligation in § 10-16-2(e)(2) but applies at the deployer level. The form and manner of the notice are prescribed by the AG.
If a deployer deploys an automated decision system and subsequently discovers that the automated decision system has caused algorithmic discrimination, the deployer, without unreasonable delay, but no later than 90 days after the date of the discovery, shall send to the Attorney General, in a form and manner prescribed by the Attorney General, a notice disclosing the discovery.
Pre-filed 2025-07-07
R-01.3
Chapter 93M, Section 2(c)
Plain Language
When a developer discovers or identifies a known or foreseeable risk of algorithmic discrimination in an AI system, it must notify both the Attorney General and all deployers of that system within 90 days. This is a discovery-triggered, event-driven disclosure, not a routine periodic report. The 90-day window runs from the point of discovery, not from a calendar date.
(c) Disclosure of Risks: Developers must notify the Attorney General and deployers of any known or foreseeable risks of discrimination within 90 days of discovery.
Pre-filed
R-01.3
Chapter 93M § 2(e)
Plain Language
When a developer discovers — through its own testing or a credible deployer report — that its high-risk AI system has caused or is reasonably likely to have caused algorithmic discrimination, the developer must notify both the attorney general and all known deployers/developers within 90 days. This is an event-triggered disclosure, not a routine reporting obligation. The notice must describe the known or foreseeable discrimination risks.
(e) Not later than 6 months after the effective date of this act, a developer of a high-risk artificial intelligence system shall disclose to the attorney general, in a form and manner prescribed by the attorney general, and to all known deployers or other developers of the high-risk artificial intelligence system, any known or reasonably foreseeable risks of algorithmic discrimination arising from the intended uses of the high-risk artificial intelligence system without unreasonable delay but no later than ninety days after the date on which: (1) the developer discovers through the developer's ongoing testing and analysis that the developer's high-risk artificial intelligence system has been deployed and has caused or is reasonably likely to have caused algorithmic discrimination; or (2) the developer receives from a deployer a credible report that the high-risk artificial intelligence system has been deployed and has caused algorithmic discrimination.
Pre-filed
R-01.3
Chapter 93M § 3(g)
Plain Language
When a deployer discovers that a deployed high-risk AI system has caused algorithmic discrimination, the deployer must notify the attorney general within 90 days of discovery, including the impact assessment information required under Section 3(c)(2). This is a deployer-side counterpart to the developer's discrimination notification obligation in Section 2(e).
(g) if a deployer deploys a high-risk artificial intelligence system not later than 6 months after the effective date of this act, and subsequently discovers that the high-risk artificial intelligence system has caused algorithmic discrimination, the deployer, without unreasonable delay, but no later than ninety days after the date of the discovery, shall send subsection (c)(2) to the attorney general, in a form and manner prescribed by the attorney general, a notice disclosing the discovery.
Pending 2026-02-24
R-01.1, R-01.2
Sec. 11(1)-(2)
Plain Language
Upon a security breach of data collected through electronic monitoring or automated decision tools, employers must: (1) promptly secure systems, mitigate harm, and certify corrective steps; (2) notify all affected covered individuals within 48 hours with breach details and response information; (3) provide extraordinarily comprehensive remediation benefits — including 10 years of paid identity theft protection with a $5 million insurance policy per individual, credit monitoring for individuals and their dependents, dark web monitoring, credit freezes, SSN monitoring and reissuance, and US-based fraud remediation; (4) notify the Department and Attorney General. After the breach, the employer must also commission a third-party security audit. The remediation benefits package is unusually prescriptive and costly compared to typical breach notification statutes.
Sec. 11. (1) If an employer has a security breach of data collected through an electronic monitoring tool or automated decisions tool, the employer must do all of the following: (a) Promptly secure the electronic monitoring systems or automated decisions tools, mitigate harm, and certify that corrective steps were taken. (b) Not more than 48 hours after the discovery of the security breach, provide notice of the security breach to all of the covered individuals whose data is affected by the security breach. The notice must include all of the following: (i) A summary of how the breach occurred. (ii) The specific data that was compromised, if known. (iii) How the employer is responding to the security breach. (iv) Information on any necessary steps the employee can take to help secure the employee's data or apply for employer-covered protections under subdivision (c). (c) Provide all of the following to the covered individuals whose data is affected by the security breach: (i) Ten years of paid premium identity theft protection and insurance, including, but not limited to, an insurance policy of not less than $5,000,000.00 that covers financial loss, expense reimbursement, and legal fees for each affected covered individual. (ii) Comprehensive credit monitoring that also covers a covered individual's dependents if the dependents' data is compromised. (iii) Dark web monitoring. (iv) Account breach alerts. (v) A 3-bureau credit freeze. (vi) Expert fraud remediation that is based in the United States. (vii) Social Security number monitoring and the cost of reissuance. (viii) Bank fraud and financial transaction monitoring. (d) Provide notice of the security breach to the department and the attorney general. (2) After a security breach has occurred as described in subsection (1), the employer must contract with a third party to perform an audit of the electronic monitoring tool or automated decisions tool to ensure that any vulnerabilities have been fixed.
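Read operationally, Sec. 11 is a fixed checklist with one hard clock (the 48-hour individual notice) plus regulator notice and a post-breach audit. A sketch of the package as data, with the benefits list abridged and all field names illustrative:

```python
from datetime import datetime, timedelta

def breach_response_checklist(discovered_at: datetime) -> dict:
    """Illustrative paraphrase of Sec. 11(1)-(2); not statutory text."""
    return {
        "secure_and_certify": "promptly",                              # (1)(a)
        "individual_notice_due": discovered_at + timedelta(hours=48),  # (1)(b)
        "remediation_benefits": [                                      # (1)(c), abridged
            "10-year identity theft protection ($5M policy per person)",
            "credit monitoring, incl. affected dependents",
            "dark web monitoring", "3-bureau credit freeze",
            "SSN monitoring and reissuance", "US-based fraud remediation",
        ],
        "regulator_notice": ["department", "attorney general"],        # (1)(d)
        "post_breach_audit": "third-party audit of the tool",          # (2)
    }

print(breach_response_checklist(datetime(2026, 3, 1, 12, 0))["individual_notice_due"])
```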
Pending 2026-01-01
R-01.1
§ 325M.41, subd. 4
Plain Language
Developers must report every safety incident to the attorney general within 72 hours of learning of the incident or forming a reasonable belief that one occurred. The report must include the date, the statutory basis for why the event qualifies as a safety incident, and a plain-language description. A safety incident includes actual critical harm events as well as precursor events — autonomous model behavior, model weight theft or unauthorized access, and unauthorized use — that provide demonstrable evidence of increased critical harm risk. The 72-hour clock starts on knowledge or constructive knowledge, whichever is earlier.
A developer must disclose each safety incident affecting the artificial intelligence model to the attorney general within 72 hours of the date the developer learns of the safety incident or within 72 hours of the date the developer learns sufficient facts to establish a reasonable belief that a safety incident has occurred. The disclosure must include: (1) the date of the safety incident; (2) the reasons the safety incident qualifies as a safety incident as defined in this section; and (3) a short statement describing in plain language the safety incident.
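The 72-hour window runs from whichever comes first: actual knowledge of the incident or facts sufficient for a reasonable belief that one occurred. A minimal earlier-of sketch (helper and parameter names are hypothetical):

```python
from datetime import datetime, timedelta
from typing import Optional

def disclosure_due(learned_of_incident: Optional[datetime],
                   reasonable_belief_formed: Optional[datetime]) -> datetime:
    """72-hour clock from the earlier trigger."""
    triggers = [t for t in (learned_of_incident, reasonable_belief_formed) if t is not None]
    return min(triggers) + timedelta(hours=72)

# Reasonable belief on Feb 3, confirmation on Feb 5: the clock runs from Feb 3.
print(disclosure_due(datetime(2026, 2, 5), datetime(2026, 2, 3)))  # 2026-02-06 00:00:00
```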
Pending 2026-08-01
R-01.1
Minn. Stat. § 325M.41, subd. 4
Plain Language
Developers must report each safety incident to the attorney general within 72 hours of learning of the incident or of learning facts sufficient to establish a reasonable belief one occurred — whichever is earlier. Safety incidents include known critical harm events, autonomous model behavior outside user requests, model weight theft or leaks, and unauthorized use — provided any of the latter three provide demonstrable evidence of increased critical harm risk. The report must include the incident date, why it qualifies as a safety incident under the statute, and a plain-language description. The 72-hour clock is aggressive and starts at knowledge or reasonable belief, not at confirmation.
A developer must disclose each safety incident affecting the artificial intelligence model to the attorney general within 72 hours of the date the developer learns of the safety incident or within 72 hours of the date the developer learns sufficient facts to establish a reasonable belief that a safety incident has occurred. The disclosure must include: (1) the date of the safety incident; (2) the reasons the safety incident qualifies as a safety incident as defined in this section; and (3) a short statement describing in plain language the safety incident.
Pending 2026-01-01
R-01.1, R-01.2
G.S. 114B-4(b)(2)
Plain Language
Licensed health information chatbot operators must report any data breach to the Department of Justice within 24 hours and notify affected consumers within 48 hours. This supersedes any contrary provision of law, meaning it applies even if other North Carolina data breach notification statutes would otherwise provide longer timelines. The 24-hour regulator notification and 48-hour consumer notification timelines are among the most aggressive in any U.S. AI or data breach statute.
(2) Report any data breaches within twenty-four (24) hours to the Department and within forty-eight (48) hours to affected consumers, notwithstanding any provision of law to the contrary.
Pending 2027-01-01
R-01.1, R-01.2
G.S. § 114B-4(b)(2)
Plain Language
Licensees must report data breaches to the Department of Justice within 24 hours and notify affected consumers within 48 hours. This obligation overrides any conflicting state breach notification timelines. The 24-hour regulator reporting and 48-hour consumer notification windows are among the most aggressive in any U.S. AI statute.
A licensee shall do all of the following: (2) Report any data breaches within 24 hours to the Department and within 48 hours to affected consumers, notwithstanding any provision of law to the contrary.
Failed 2027-01-01
R-01.1
Sec. 5(2)-(3)
Plain Language
All frontier developers (not just large frontier developers) must report critical safety incidents involving their frontier models to the Attorney General within 15 days of discovery. If the incident poses an imminent risk of death or serious physical injury, the developer must additionally disclose it within 24 hours to an appropriate authority, including law enforcement or public safety agencies. Critical safety incidents include unauthorized model weight access, mass-casualty events, loss of model control, and model deception of its developer.
(2) A frontier developer shall report any critical safety incident pertaining to one of its frontier models to the Attorney General within fifteen days after discovering the critical safety incident. (3) If a frontier developer discovers that a critical safety incident poses an imminent risk of death or serious physical injury, the frontier developer shall disclose that incident within twenty-four hours to an authority, including any law enforcement agency or public safety agency with jurisdiction, that is appropriate based on the nature of that incident and as required by law.
Failed 2027-01-01
R-01.1
Sec. 5(4)
Plain Language
Large chatbot providers must report any child safety incident involving their covered chatbots to the Attorney General within 15 days of discovery. A child safety incident includes chatbot behavior toward a minor that, if committed by a human, would constitute intentional or reckless causation of death, bodily injury, or severe emotional distress. This is a mandatory reporting obligation triggered by discovery, not by external complaint.
(4) A large chatbot provider shall report any child safety incident pertaining to one of its covered chatbots to the Attorney General within fifteen days after discovering the child safety incident.
Failed 2027-01-01
R-01.1
Sec. 5(1)(a)-(c)
Plain Language
The Attorney General must establish a public reporting mechanism for safety incidents usable by frontier developers, large chatbot providers, and members of the public. Reports must include the incident date, the reasons it qualifies as a safety incident, and a short and plain statement describing it. This provision creates infrastructure for the incident reporting obligations in the rest of Section 5, and also opens reporting to the general public.
(1) The Attorney General shall establish a mechanism to be used by a frontier developer, a large chatbot provider, or a member of the public to report a safety incident that includes all of the following: (a) The date of the safety incident; (b) The reasons the incident qualifies as a safety incident; and (c) A short and plain statement describing the safety incident.
Failed 2026-02-01
R-01.3
Sec. 3(5)(a)-(b)
Plain Language
When a developer discovers through its own testing or receives a credible report from a deployer that its high-risk AI system has caused or is reasonably likely to have caused algorithmic discrimination, the developer must disclose the known discrimination risks to all known deployers and other developers of that system without unreasonable delay. The Attorney General prescribes the form and manner of disclosure. This functions as a discrimination-specific incident notification obligation from developer to deployers.
(5)(a) On and after February 1, 2026, a developer of a high-risk artificial intelligence system shall disclose to all known deployers or other developers of the high-risk artificial intelligence system, each known risk of algorithmic discrimination arising from any intended use of the high-risk artificial intelligence system without unreasonable delay after the date on which: (i) The developer discovers through the developer's ongoing testing and analysis that the developer's high-risk artificial intelligence system has been deployed and has caused or is reasonably likely to have caused algorithmic discrimination; or (ii) The developer receives from a deployer a credible report that the high-risk artificial intelligence system has been deployed and has caused algorithmic discrimination. (b) The Attorney General shall prescribe the form and manner of the disclosure described in subdivision (a) of this subsection.
Pending
R-01.1, R-01.2
Section 5(c)
Plain Language
Employers and public entities must establish and maintain reasonable administrative and physical data security practices for all employee, service beneficiary, and applicant data, in compliance with commissioner-specified standards. In the event of a security breach, both the employer/public entity and any vendor holding the data must provide written notice to the department and each affected individual within 48 hours, describing the categories of data compromised and remediation steps. Employers and vendors are jointly and severally liable for damages caused by fraud or theft resulting from a failure to secure personal data.
c. An employer or public entity shall establish, implement, and maintain reasonable administrative and physical data security practices to protect the confidentiality, integrity and accessibility of employee, service beneficiary, or applicant data and information, which shall be in compliance with any recordkeeping, data retention, and security requirements specified by the commissioner. The employer or public entity, and any vendor keeping employee, service beneficiary, or applicant data, shall promptly provide the department and each affected employee or service beneficiary, a written notice of any security breach, within 48 hours of the breach, describing the specific categories of data that were, or are reasonably believed to have been, accessed or acquired by an unauthorized person, and the steps the employer or public entity and vendor will take to address the impact of the data breach on affected individuals. The employer or public entity and the vendor shall be jointly and severally liable for any damages caused to the employee, service beneficiary, or applicant for employment by fraud or theft made possible by a failure of the employer or public entity or vendor to secure personal data and information of the employee, service beneficiary, or applicant held by the employer or public entity.
Pending 2025-07-26
R-01.1
State Tech. Law § 520(1)-(2)
Plain Language
Licensees must report system malfunctions to the Department and, where applicable, to a relevant law enforcement agency or governmental entity. A malfunction is reportable when it lasts long enough that it had the capacity to harm, or did harm, a person. For systems that interact with law enforcement or government systems, perform government functions, or operate as weapons, the Department may impose additional agency-specific reporting requirements when issuing the license. The statute sets no specific reporting timeframe; it establishes the duty and delegates timing details to the Department.
1. A licensee shall have the duty to notify the department and, if applicable, a relevant law enforcement agency or governmental entity where the licensee's system fails to operate as intended for any significant period of time. A period of time is deemed "significant" for purposes of this section where the period of time that the malfunction occurred had the capacity to or has harmed a person or persons. 2. A licensee shall have the duty to notify a relevant law enforcement agency or governmental entity of a malfunction where designated by the department upon receipt of a license. The secretary shall issue such a requirement upon the licensee where such systems interact with law enforcement systems or the systems of a government agency, engage in law enforcement functions or the functions of a government agency, or where such systems operate, in whole or in part, or are, a weapon.
Pending 2025-09-02
R-01.1
Gen. Bus. Law § 1421(5)
Plain Language
Large developers must report each safety incident affecting a frontier model to the Division of Homeland Security and Emergency Services within 72 hours of learning of the incident or of learning facts establishing a reasonable belief that an incident has occurred. The report must include the incident date, the reasons it qualifies as a safety incident, and a plain-language description. Safety incidents include: autonomous model behavior beyond user requests, model weight theft or unauthorized access, critical failure of technical or administrative controls, and unauthorized model use — but only where the incident provides demonstrable evidence of increased critical harm risk. The 72-hour clock starts from actual knowledge or constructive knowledge.
A large developer shall disclose each safety incident affecting the frontier model to the division of homeland security and emergency services within seventy-two hours of the large developer learning of the safety incident or within seventy-two hours of the large developer learning facts sufficient to establish a reasonable belief that a safety incident has occurred. Such disclosure shall include: (a) the date of the safety incident; (b) the reasons the incident qualifies as a safety incident as defined in subdivision thirteen of section fourteen hundred twenty of this article; and (c) a short and plain statement describing the safety incident.
Enacted 2025-06-03
R-01.1
Gen. Bus. Law § 1421(4)
Plain Language
Large developers must report every safety incident to both the Attorney General and the Division of Homeland Security and Emergency Services within 72 hours of learning of the incident (or learning facts sufficient to establish a reasonable belief one occurred). The report must include the date, the classification basis under the statutory definition, and a plain-language description. Safety incidents include actual critical harm events as well as precursor incidents — autonomous model behavior, model weight theft/release, control failures, and unauthorized use — that provide demonstrable evidence of increased critical harm risk. The 72-hour clock starts from actual or constructive knowledge, creating an incentive for robust internal monitoring and escalation procedures.
A large developer shall disclose each safety incident affecting the frontier model to the attorney general and division of homeland security and emergency services within seventy-two hours of the large developer learning of the safety incident or within seventy-two hours of the large developer learning facts sufficient to establish a reasonable belief that a safety incident has occurred. Such disclosure shall include: (a) the date of the safety incident; (b) the reasons the incident qualifies as a safety incident as defined in subdivision thirteen of section fourteen hundred twenty of this article; and (c) a short and plain statement describing the safety incident.
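Qualification is a two-part test: the event must fall into one of the four statutory categories, and the precursor categories count only with demonstrable evidence of increased critical-harm risk. A sketch of that test (the enum names paraphrase the categories; they are not statutory terms):

```python
from enum import Enum, auto

class IncidentCategory(Enum):
    AUTONOMOUS_BEHAVIOR = auto()       # model acting beyond user requests
    WEIGHT_THEFT_OR_RELEASE = auto()   # theft/unauthorized access to model weights
    CONTROL_FAILURE = auto()           # critical failure of technical/admin controls
    UNAUTHORIZED_USE = auto()

def is_reportable(category: IncidentCategory, increased_critical_harm_risk: bool) -> bool:
    # Category membership alone is not enough: precursor events qualify only with
    # demonstrable evidence of an increased risk of critical harm.
    return isinstance(category, IncidentCategory) and increased_critical_harm_risk

# Under the enacted § 1421(4), a qualifying incident goes to both recipients within 72 hours.
RECIPIENTS = ["attorney general", "division of homeland security and emergency services"]
print(is_reportable(IncidentCategory.CONTROL_FAILURE, increased_critical_harm_risk=True))
```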
Passed 2025-06-25
R-01.1
Gen. Bus. Law § 1421(5)
Plain Language
Large developers must report each safety incident to the Division of Homeland Security and Emergency Services within 72 hours. The 72-hour clock starts when the developer learns of the incident or learns facts sufficient to establish a reasonable belief that an incident has occurred. Reports must include the date, the statutory basis for classification as a safety incident, and a plain statement describing what happened. Safety incidents are defined by four categories — autonomous behavior, model weight compromise, control failures, and unauthorized use — but only qualify when they provide demonstrable evidence of an increased risk of critical harm.
A large developer shall disclose each safety incident affecting the frontier model to the division of homeland security and emergency services within seventy-two hours of the large developer learning of the safety incident or within seventy-two hours of the large developer learning facts sufficient to establish a reasonable belief that a safety incident has occurred. Such disclosure shall include: (a) the date of the safety incident; (b) the reasons the incident qualifies as a safety incident as defined in subdivision thirteen of section fourteen hundred twenty of this article; and (c) a short and plain statement describing the safety incident.
Pending
R-01.1
S.C. Code § 39-81-50(A)(1)-(3)
Plain Language
When a covered entity learns that a user faces imminent risk of death or serious physical injury, it must make reasonable efforts within 24 hours to notify emergency services or law enforcement, using information it already has or can obtain through reasonable user-facing prompts. If the operator lacks sufficient information to enable an emergency response, it must instead: provide a prominent message urging the user to contact emergency services with crisis information, encourage the user to seek help, and document the steps taken and why direct notification was not practicable. Good-faith notifications are protected from liability absent willful misconduct or gross negligence. This is an emergency escalation obligation distinct from the crisis messaging requirement in § 39-81-40(B)(3) — it requires affirmative outreach to external emergency services, not just providing crisis information to the user.
(A)(1) If a covered entity obtains knowledge that a user faces an imminent risk of death or serious physical injury, then the operator must make reasonable efforts, within twenty-four hours, to notify appropriate emergency services or law enforcement, to the extent practicable based on information the operator already possesses or can obtain through reasonable, user-facing prompts for the purpose of facilitating emergency assistance. (2) If the operator cannot make a notification under item (1) because the operator lacks sufficient information to enable an emergency response, then the operator shall: (a) promptly provide a clear and prominent message urging the user to contact emergency services and provide crisis services information, (b) make reasonable efforts to encourage the user to seek immediate help from a trusted adult or emergency services, and (c) document the steps taken and the basis for the operator's determination that notification was not practicable. (3) An operator that makes a notification in good faith under this subsection is not liable for damages solely for making the notification, unless the operator acted with willful misconduct or gross negligence.
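Subsection (A) is a two-branch escalation: direct outreach when the operator has (or can prompt for) enough information, and an in-product fallback plus documentation when it does not. A hedged sketch of the branching (function and parameter names are hypothetical):

```python
from datetime import datetime, timedelta

def escalate_imminent_risk(knowledge_at: datetime, has_sufficient_information: bool):
    """Branching sketch of § 39-81-50(A); step strings paraphrase the statute."""
    if has_sufficient_information:
        # (A)(1): reasonable efforts, within 24 hours, to notify emergency
        # services or law enforcement.
        return [("notify emergency services / law enforcement",
                 knowledge_at + timedelta(hours=24))]
    # (A)(2): fallback when information is insufficient for an emergency response.
    return [
        ("prominent message urging user to contact emergency services", "promptly"),
        ("encourage help from a trusted adult or emergency services", "promptly"),
        ("document steps taken and why notification was not practicable", "promptly"),
    ]

for step, when in escalate_imminent_risk(datetime(2026, 1, 5, 8, 0), False):
    print(step, "-", when)
```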
Pending
R-01.1
S.C. Code § 39-81-50(B)(1)-(2), (C)(1)-(2)
Plain Language
Covered entities must report covered incidents to the Attorney General within 15 days of obtaining knowledge. A covered incident is one where a user suffered a covered harm — death, suicide attempt, self-harm requiring medical attention, psychiatric emergency requiring urgent treatment, or serious physical injury requiring medical attention — arising from chatbot interactions. The report must include the date of knowledge, incident date, description of the incident and its chatbot connection, and responsive actions taken. Supplemental reports may be filed within 60 days. All reports are confidential and FOIA-exempt, though the Attorney General may publish aggregate statistics that do not identify users or disclose trade secrets.
(B)(1) A covered entity shall submit a report to the Attorney General within fifteen days of obtaining knowledge of a covered incident connected to one or more of its chatbots, which, to the extent known at the time of the report, shall include: (a) the date the operator obtained knowledge of the incident; (b) the date of the incident, if known; (c) a brief description of the incident and the basis for the operator's belief that the incident is connected to the chatbot; and (d) a description of any actions the operator took in response. (2) A covered entity may submit a supplemental report within sixty days after the initial report to update or correct information learned through investigation. (C)(1) Reports submitted under this section shall be confidential and are not subject to disclosure pursuant to Chapter 4, Title 30, the Freedom of Information Act. (2) The Attorney General may publish aggregate information and statistics derived from the reports, so long as the publication does not identify individual users or disclose trade secrets.
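The initial report reduces to four fields and two clocks: 15 days from knowledge to file, and a 60-day supplemental window from the initial report. A dataclass sketch (field names paraphrase (B)(1)(a)-(d)):

```python
from dataclasses import dataclass, field
from datetime import date, timedelta
from typing import Optional

@dataclass
class CoveredIncidentReport:
    knowledge_date: date              # (B)(1)(a): date operator learned of the incident
    incident_date: Optional[date]     # (B)(1)(b): if known
    description_and_nexus: str        # (B)(1)(c): incident and its chatbot connection
    responsive_actions: list = field(default_factory=list)  # (B)(1)(d)

    @property
    def initial_report_due(self) -> date:
        return self.knowledge_date + timedelta(days=15)

    def supplemental_window_closes(self, initial_filed: date) -> date:
        # (B)(2): supplemental report allowed within 60 days after the initial report.
        return initial_filed + timedelta(days=60)

r = CoveredIncidentReport(date(2026, 4, 1), None, "self-harm incident linked to chatbot")
print(r.initial_report_due)  # 2026-04-16
```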
Pending 2025-01-01
R-01.3
Section 37-31-20(E)(1)-(2)
Plain Language
When a developer discovers — through its own testing or through a credible deployer report — that its high-risk AI system has caused or is reasonably likely to have caused algorithmic discrimination, the developer must notify both the Attorney General and all known deployers within 90 days. This dual-notification obligation is triggered by either the developer's own discovery or receipt of a credible external report.
(E) A developer of a high-risk artificial intelligence system shall disclose to the Attorney General, in a form and manner prescribed by the Attorney General, and to all known deployers or other developers of the high-risk artificial intelligence system, any known or reasonably foreseeable risks of algorithmic discrimination arising from the intended uses of the high-risk artificial intelligence system without unreasonable delay but no later than ninety days after the date on which: (1) the developer discovers through the developer's ongoing testing and analysis that the developer's high-risk artificial intelligence system has been deployed and has caused or is reasonably likely to have caused algorithmic discrimination; or (2) the developer receives from a deployer a credible report that the high-risk artificial intelligence system has been deployed and has caused algorithmic discrimination.
Pending 2025-01-01
R-01.3
Section 37-31-30(G)
Plain Language
If a deployer discovers that a deployed high-risk AI system has caused algorithmic discrimination, the deployer must notify the Attorney General within 90 days of discovery, in the form and manner prescribed by the AG. This is an incident-reporting obligation triggered by actual discovery of discrimination, not by a suspicion or risk assessment.
(G) If a deployer deploys a high-risk artificial intelligence system and subsequently discovers that the high-risk artificial intelligence system has caused algorithmic discrimination, the deployer, without unreasonable delay, but no later than ninety days after the date of the discovery, shall send to the Attorney General, in a form and manner prescribed by him, a notice disclosing the discovery.
Pending
R-01.1
S.C. Code § 39-81-50(A)
Plain Language
When a covered entity learns that a user faces imminent risk of death or serious physical injury, it must make reasonable efforts within 24 hours to notify emergency services or law enforcement, using information it already has or can obtain through reasonable user-facing prompts. If the operator lacks sufficient information to enable emergency contact, it must instead: (1) promptly display a prominent message urging the user to contact emergency services with crisis information, (2) encourage the user to seek help from a trusted adult or emergency services, and (3) document the steps taken and why direct notification was not practicable. A good-faith safe harbor protects operators from liability for making the notification unless they acted with willful misconduct or gross negligence. This is an imminent-risk escalation obligation that goes beyond the crisis message requirement in § 39-81-40(B)(3).
(A)(1) If a covered entity obtains knowledge that a user faces an imminent risk of death or serious physical injury, then the operator must make reasonable efforts, within twenty-four hours, to notify appropriate emergency services or law enforcement, to the extent practicable based on information the operator already possesses or can obtain through reasonable, user-facing prompts for the purpose of facilitating emergency assistance. (2) If the operator cannot make a notification under item (1) because the operator lacks sufficient information to enable an emergency response, then the operator shall: (a) promptly provide a clear and prominent message urging the user to contact emergency services and provide crisis services information, (b) make reasonable efforts to encourage the user to seek immediate help from a trusted adult or emergency services, and (c) document the steps taken and the basis for the operator's determination that notification was not practicable. (3) An operator that makes a notification in good faith under this subsection is not liable for damages solely for making the notification, unless the operator acted with willful misconduct or gross negligence.
Pending
R-01.1
S.C. Code § 39-81-50(B)-(C)
Plain Language
Covered entities must report to the Attorney General within 15 days of learning of a covered incident — defined as an incident where a user suffered death, a suicide attempt, self-harm requiring medical attention, a psychiatric emergency requiring urgent medical treatment, or serious physical injury requiring medical attention arising from chatbot interactions. The report must include dates, a description of the incident and its connection to the chatbot, and actions taken in response. A supplemental report may be filed within 60 days to update or correct information. All reports are confidential and exempt from FOIA disclosure, though the Attorney General may publish aggregate statistics that do not identify users or reveal trade secrets.
(B)(1) A covered entity shall submit a report to the Attorney General within fifteen days of obtaining knowledge of a covered incident connected to one or more of its chatbots, which, to the extent known at the time of the report, shall include: (a) the date the operator obtained knowledge of the incident; (b) the date of the incident, if known; (c) a brief description of the incident and the basis for the operator's belief that the incident is connected to the chatbot; and (d) a description of any actions the operator took in response. (2) A covered entity may submit a supplemental report within sixty days after the initial report to update or correct information learned through investigation. (C)(1) Reports submitted under this section shall be confidential and are not subject to disclosure pursuant to Chapter 4, Title 30, the Freedom of Information Act. (2) The Attorney General may publish aggregate information and statistics derived from the reports, so long as the publication does not identify individual users or disclose trade secrets.
Enacted 2024-05-01
R-01.1
Utah Code § 13-70-304(5)
Plain Language
Learning Laboratory participants must immediately report to the Office of Artificial Intelligence Policy any incident resulting in consumer harm, a privacy breach, or unauthorized data usage. This is a continuous obligation throughout the participation period. Failure to report — or the underlying incident itself — may result in removal from the Learning Laboratory and exposure to all applicable civil and criminal penalties.
A participant shall immediately report to the office any incidents resulting in consumer harm, privacy breach, or unauthorized data usage, which may result in removal of the participant from the learning laboratory.
Pending 2026-07-01
R-01.1
Va. Code § 59.1-616(A)(1)-(3)
Plain Language
When a covered entity learns that a user faces imminent risk of death or serious physical injury, it must make reasonable efforts within 24 hours to notify emergency services or law enforcement, using information it already has or can obtain through reasonable user-facing prompts. If the operator lacks sufficient information to enable emergency notification, it must instead: promptly display a crisis message urging the user to contact emergency services, encourage the user to seek help from a trusted adult, and document the steps taken and why direct notification was not practicable. Good-faith notifications are shielded from liability absent willful misconduct or gross negligence. This is an emergency-response obligation triggered by actual knowledge of imminent risk — not a routine reporting requirement.
A. 1. If a covered entity obtains knowledge that a user faces an imminent risk of death or serious physical injury, the operator shall make reasonable efforts, within 24 hours, to notify appropriate emergency services or law enforcement to the extent practicable based on information the operator already possesses or can obtain through reasonable, user-facing prompts for the purpose of facilitating emergency assistance. 2. If the operator cannot make a notification under subdivision 1 because the operator lacks sufficient information to enable emergency response, the operator shall: a. Promptly provide a clear and prominent message urging the user to contact emergency services and providing crisis services information; b. Make reasonable efforts to encourage the user to seek immediate help from a trusted adult or emergency services; and c. Document the steps taken and the basis for the operator's determination that notification was not practicable. 3. An operator that makes a notification in good faith under this subsection is not liable for damages solely for making the notification unless the operator acted with willful misconduct or gross negligence.
Pending 2026-07-01
R-01.1
Va. Code § 59.1-616(B)-(C)
Plain Language
Covered entities must report covered incidents to the Attorney General within 15 days of obtaining knowledge. A covered incident is one where a user suffered death, a suicide attempt, self-harm requiring medical attention, a psychiatric emergency requiring urgent medical treatment, or serious physical injury requiring medical attention arising from chatbot interactions. The report must include dates, a description of the incident and its connection to the chatbot, and any responsive actions taken. A supplemental report may be filed within 60 days to update or correct information. All reports are confidential, though the Attorney General may publish aggregate statistics that do not identify individual users or disclose trade secrets.
B. A covered entity shall submit a report to the Attorney General within 15 days of obtaining knowledge of a covered incident connected to one or more of its chatbots, which, to the extent known at the time of the report, shall include: 1. The date the operator obtained knowledge of the incident; 2. The date of the incident, if known; 3. A brief description of the incident and the basis for the operator's belief that the incident is connected to the chatbot; and 4. A description of any actions the operator took in response. A covered entity may submit a supplemental report within 60 days of the initial report to update or correct information learned through investigation. C. 1. Reports submitted under this section shall be confidential. 2. The Attorney General may publish aggregate information and statistics derived from such reports, so long as the publication does not identify individual users or disclose trade secrets.
Pending 2026-07-01
R-01.3
Sec. 3(2)(b)
Plain Language
If a deployer discovers that a deployed high-risk AI system has caused algorithmic discrimination, the deployer must notify the attorney general within 90 days of the discovery. The notice must be submitted in a form and manner prescribed by the attorney general. This is triggered by actual discovery of discrimination, not by a routine review cycle. The trade secret protection in Sec. 3(3) applies — nothing in this section requires disclosure of trade secrets or confidential or proprietary information.
(b) If a deployer subsequently discovers that the high-risk artificial intelligence system has caused algorithmic discrimination, the deployer, without unreasonable delay, but no later than 90 days after the date of the discovery, shall send to the attorney general, in a form and manner prescribed by the attorney general, a notice disclosing the discovery.
Pending 2026-07-01
R-01.3
Sec. 3(2)(b)
Plain Language
When a deployer discovers that a deployed high-risk AI system has caused algorithmic discrimination, it must notify the Attorney General within 90 days of discovery, using the form and manner the AG prescribes. This is an incident-triggered reporting obligation, distinct from the annual review requirement. The 90-day clock runs from the date of discovery, not the date the discrimination occurred.
If a deployer subsequently discovers that the high-risk artificial intelligence system has caused algorithmic discrimination, the deployer, without unreasonable delay, but no later than 90 days after the date of the discovery, shall send to the attorney general, in a form and manner prescribed by the attorney general, a notice disclosing the discovery.