R-01
Reporting & Regulatory Submissions
Incident Reporting
Operators of AI systems must report significant safety incidents to designated regulatory authorities within specified timeframes. Timeframes vary by jurisdiction and incident severity — typically 15 days for standard incidents and 24–72 hours for incidents posing imminent risk of death or serious injury. Some jurisdictions also require notification to affected individuals.
Applies to: Developer, Deployer, Professional Sector, Foundation Model, Healthcare
Bills — Enacted: 5 unique bills
Bills — Proposed: 18
Last Updated: 2026-03-29
Sub-Obligations: 3
Bills That Map This Requirement (23 bills)
Bill | Status | Sub-Obligations | Section
Pending 2027-07-01
Bus. & Prof. Code § 22612(d)(8)
Plain Language
Operators must establish a public incident reporting mechanism that allows any third party to report child safety incidents directly to the operator and to view other reports submitted through the mechanism. This is a transparency-oriented obligation — the mechanism must be public-facing and accessible, not merely an internal intake channel. The requirement that third parties can 'access other reports' implies some form of public incident log or database.
(8) A public incident reporting mechanism that enables a third party to report directly to the operator an incident regarding a child safety risk and to access other reports made through that reporting mechanism.
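To make the shape of this public-facing mechanism concrete, here is a minimal Python sketch of an intake channel paired with a publicly readable log. All class, field, and method names are hypothetical illustrations, not drawn from the statute.

```python
# Sketch of a public incident reporting mechanism in the spirit of
# Bus. & Prof. Code § 22612(d)(8). All names are hypothetical.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ChildSafetyReport:
    reporter: str       # any third party may file a report
    description: str
    submitted_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

class PublicIncidentLog:
    """Accepts third-party reports and exposes them for public access."""

    def __init__(self) -> None:
        self._reports: list[ChildSafetyReport] = []

    def submit(self, report: ChildSafetyReport) -> None:
        # Reports go directly to the operator through the mechanism.
        self._reports.append(report)

    def view_all(self) -> list[ChildSafetyReport]:
        # Third parties must be able to access other reports made
        # through the same mechanism.
        return list(self._reports)
```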
Enacted 2026-01-01
R-01.1
Bus. & Prof. Code § 22757.13(a), (c)(1)-(3)
Plain Language
All frontier developers must report critical safety incidents to the Office of Emergency Services within 15 days of discovery. Reports must include the date, reasons the incident qualifies, a plain statement describing it, and whether it involved internal model use. An accelerated 24-hour reporting timeline applies when the incident poses imminent risk of death or serious physical injury — in that case, the developer must disclose to an appropriate authority, including law enforcement or public safety agencies. Amended reports may be filed if additional information is discovered after the initial filing. Critical safety incidents include model weight breaches causing death/injury, materialized catastrophic risk, loss of model control causing death/injury, and deceptive model behavior subverting developer controls. A federal equivalency safe harbor is available under § 22757.13(h)-(i), allowing a developer to comply by meeting standards of designated substantially equivalent federal laws.
(a) The Office of Emergency Services shall establish a mechanism to be used by a frontier developer or a member of the public to report a critical safety incident that includes all of the following: (1) The date of the critical safety incident. (2) The reasons the incident qualifies as a critical safety incident. (3) A short and plain statement describing the critical safety incident. (4) Whether the incident was associated with internal use of a frontier model. ... (c) (1) Subject to paragraph (2), a frontier developer shall report any critical safety incident pertaining to one or more of its frontier models to the Office of Emergency Services within 15 days of discovering the critical safety incident. (2) If a frontier developer discovers that a critical safety incident poses an imminent risk of death or serious physical injury, the frontier developer shall disclose that incident within 24 hours to an authority, including any law enforcement agency or public safety agency with jurisdiction, that is appropriate based on the nature of that incident and as required by law. (3) A frontier developer that discovers information about a critical safety incident after filing the initial report required by this subdivision may file an amended report.
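The two-tier timeline lends itself to a small deadline calculator. The Python sketch below is illustrative only; the function and key names are assumptions, not statutory terms.

```python
# Hedged sketch of the § 22757.13(c) clocks: 15 days to the Office of
# Emergency Services for any critical safety incident, 24 hours to an
# appropriate authority when the incident poses an imminent risk of
# death or serious physical injury. All names are illustrative.
from datetime import datetime, timedelta

def reporting_deadlines(discovered: datetime,
                        imminent_risk: bool) -> dict[str, datetime]:
    deadlines = {"oes_report_due": discovered + timedelta(days=15)}
    if imminent_risk:
        # Accelerated disclosure to law enforcement or public safety.
        deadlines["authority_disclosure_due"] = discovered + timedelta(hours=24)
    return deadlines

# Example: an incident discovered 2026-02-01 that poses imminent risk.
print(reporting_deadlines(datetime(2026, 2, 1, 9, 0), imminent_risk=True))
```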
Enacted 2026-01-01
R-01.1
Bus. & Prof. Code § 22757.13(h)-(j)
Plain Language
The Office of Emergency Services may designate federal laws, regulations, or guidance documents as substantially equivalent alternatives for critical safety incident reporting compliance. Frontier developers may declare their intent to comply via the federal alternative instead, and will be deemed in compliance as long as they actually meet the federal standards. The federal alternative does not need to require reporting to California specifically. However, failure to meet the designated federal standard constitutes a violation of this chapter — the safe harbor converts the federal standard into a binding state obligation. OES must revoke the designation if the federal standard no longer meets equivalency requirements. This creates a dynamic federal equivalency mechanism that may reduce compliance burden if federal AI safety reporting requirements emerge.
(h) The Office of Emergency Services may adopt regulations designating one or more federal laws, regulations, or guidance documents that meet all of the following conditions for the purposes of subdivision (i): (1) (A) The law, regulation, or guidance document imposes or states standards or requirements for critical safety incident reporting that are substantially equivalent to, or stricter than, those required by this section. (B) The law, regulation, or guidance document described in subparagraph (A) does not need to require critical safety incident reporting to the State of California. (2) The law, regulation, or guidance document is intended to assess, detect, or mitigate the catastrophic risk. (i) (1) A frontier developer that intends to comply with this section by complying with the requirements of, or meeting the standards stated by, a federal law, regulation, or guidance document designated pursuant to subdivision (h) shall declare its intent to do so to the Office of Emergency Services. (2) After a frontier developer has declared its intent pursuant to paragraph (1), both of the following apply: (A) The frontier developer shall be deemed in compliance with this section to the extent that the frontier developer meets the standards of, or complies with the requirements imposed or stated by, the designated federal law, regulation, or guidance document until the frontier developer declares the revocation of that intent to the Office of Emergency Services or the Office of Emergency Services revokes a relevant regulation pursuant to subdivision (j). (B) The failure by a frontier developer to meet the standards of, or comply with the requirements stated by, the federal law, regulation, or guidance document designated pursuant to subdivision (h) shall constitute a violation of this chapter. (j) The Office of Emergency Services shall revoke a regulation adopted under subdivision (h) if the requirements of subdivision (h) are no longer met.
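The deemed-compliance mechanics of subdivisions (h)-(j) can be read as a simple decision rule: once intent is declared and a designation is in force, the federal standard becomes the binding yardstick. A hedged Python sketch, with all names assumed for illustration:

```python
# Illustrative decision rule for the § 22757.13(h)-(j) safe harbor.
# All names are assumptions.
from enum import Enum, auto

class Status(Enum):
    COMPLIANT = auto()
    VIOLATION = auto()

def evaluate(declared_intent: bool, designation_in_force: bool,
             meets_federal_standard: bool, meets_state_rule: bool) -> Status:
    if declared_intent and designation_in_force:
        # Failure to meet the designated federal standard is itself a
        # violation of the chapter.
        return Status.COMPLIANT if meets_federal_standard else Status.VIOLATION
    # Without a declared (and still-designated) federal alternative,
    # the developer is judged against the state rule directly.
    return Status.COMPLIANT if meets_state_rule else Status.VIOLATION
```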
Enacted 2026-01-01
R-01.1
Bus. & Prof. Code § 22757.13(c)(1)
Plain Language
Frontier developers must report critical safety incidents to the Office of Emergency Services within 15 days of discovery. If the incident poses an imminent risk of death or serious physical injury, the developer must additionally notify an appropriate authority — such as a law enforcement or public safety agency — within 24 hours. Developers may file amended reports as new information emerges after the initial filing. Reporting critical safety incidents involving non-frontier foundation models is encouraged but not required.
(1) Subject to paragraph (2), a frontier developer shall report any critical safety incident pertaining to one or more of its frontier models to the Office of Emergency Services within 15 days of discovering the critical safety incident. (2) If a frontier developer discovers that a critical safety incident poses an imminent risk of death or serious physical injury, the frontier developer shall disclose that incident within 24 hours to an authority, including any law enforcement agency or public safety agency with jurisdiction, that is appropriate based on the nature of that incident and as required by law. (3) A frontier developer that discovers information about a critical safety incident after filing the initial report required by this subdivision may file an amended report. (4) A frontier developer is encouraged, but not required, to report critical safety incidents pertaining to foundation models that are not frontier models.
Enacted 2026-06-30
R-01.3
C.R.S. § 6-1-1703(7)
Plain Language
If a deployer discovers that a deployed high-risk AI system has actually caused algorithmic discrimination, the deployer must notify the attorney general within 90 days of discovery, in the AG's prescribed form. This is a post-discovery incident reporting obligation, not a periodic reporting requirement. The 90-day window runs from actual discovery, and the deployer must not unreasonably delay even within that window.
(7) If a deployer deploys a high-risk artificial intelligence system on or after June 30, 2026, and subsequently discovers that the high-risk artificial intelligence system has caused algorithmic discrimination, the deployer, without unreasonable delay, but no later than ninety days after the date of the discovery, shall send to the attorney general, in a form and manner prescribed by the attorney general, a notice disclosing the discovery.
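The discovery-triggered clock can be sketched as follows; the 90-day limit is a statutory outer bound, while "without unreasonable delay" is a separate, earlier constraint. All names here are illustrative.

```python
# Sketch of the § 6-1-1703(7) clock. The 90-day limit is an outer
# bound; "without unreasonable delay" can require notice sooner.
from datetime import date, timedelta

def ag_notice_outer_limit(discovery: date) -> date:
    return discovery + timedelta(days=90)

def is_overdue(discovery: date, today: date, notice_sent: bool) -> bool:
    return (not notice_sent) and today > ag_notice_outer_limit(discovery)
```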
Pending 2025-07-01
R-01.3
O.C.G.A. § 10-16-7
Plain Language
When a deployer discovers that a deployed automated decision system has caused algorithmic discrimination, the deployer must notify the Attorney General within 90 days of discovery, in a form and manner prescribed by the AG. This is a mandatory incident-reporting obligation triggered by the deployer's discovery of actual algorithmic discrimination — not merely a risk of discrimination.
If a deployer deploys an automated decision system and subsequently discovers that the automated decision system has caused algorithmic discrimination, the deployer, without unreasonable delay, but no later than 90 days after the date of the discovery, shall send to the Attorney General, in a form and manner prescribed by the Attorney General, a notice disclosing the discovery.
Pre-filed 2025-07-07
R-01.3
Chapter 93M, Section 2(c)
Plain Language
When a developer discovers — or should foresee — that an AI system poses risks of algorithmic discrimination, the developer must notify both the Attorney General and all deployers within 90 days. This is a triggered reporting obligation: the 90-day clock runs from discovery, not from deployment. The obligation covers both known risks (actual discovery) and foreseeable risks, which broadens the trigger beyond confirmed discrimination to encompass risks a reasonable developer should anticipate.
(c) Disclosure of Risks: Developers must notify the Attorney General and deployers of any known or foreseeable risks of discrimination within 90 days of discovery.
Pre-filed 2025-07-17
R-01.3
Ch. 93M § 2(e)
Plain Language
When a developer discovers — through its own testing or a credible deployer report — that its high-risk AI system has caused or is reasonably likely to have caused algorithmic discrimination, the developer must notify the attorney general and all known deployers or other developers within 90 days. The disclosure must describe the known or reasonably foreseeable discrimination risks. This is a dual-trigger obligation: it applies both when the developer self-discovers and when it receives a credible external report.
(e) Not later than 6 months after the effective date of this act, a developer of a high-risk artificial intelligence system shall disclose to the attorney general, in a form and manner prescribed by the attorney general, and to all known deployers or other developers of the high-risk artificial intelligence system, any known or reasonably foreseeable risks of algorithmic discrimination arising from the intended uses of the high-risk artificial intelligence system without unreasonable delay but no later than ninety days after the date on which: (1) the developer discovers through the developer's ongoing testing and analysis that the developer's high-risk artificial intelligence system has been deployed and has caused or is reasonably likely to have caused algorithmic discrimination; or (2) the developer receives from a deployer a credible report that the high-risk artificial intelligence system has been deployed and has caused algorithmic discrimination.
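Because the 90-day clock runs from the earlier of the two triggers, a compliance tracker might compute the deadline as below. This is a sketch with hypothetical names.

```python
# Sketch of the Ch. 93M § 2(e) dual trigger: the 90-day clock starts at
# the earlier of self-discovery through testing or receipt of a credible
# deployer report. All names are hypothetical.
from datetime import date, timedelta
from typing import Optional

def disclosure_deadline(self_discovery: Optional[date],
                        credible_report: Optional[date]) -> Optional[date]:
    triggers = [d for d in (self_discovery, credible_report) if d is not None]
    if not triggers:
        return None  # no trigger yet, so no clock is running
    return min(triggers) + timedelta(days=90)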
Pre-filed 2025-07-17
R-01.3
Ch. 93M § 3(g)
Plain Language
If a deployer discovers that a deployed high-risk AI system has caused algorithmic discrimination, the deployer must notify the attorney general within 90 days of discovery, in the form and manner the AG prescribes. The notice must disclose the discovery. This is a deployer-side counterpart to the developer's reporting obligation under Section 2(e).
(g) if a deployer deploys a high-risk artificial intelligence system not later than 6 months after the effective date of this act, and subsequently discovers that the high-risk artificial intelligence system has caused algorithmic discrimination, the deployer, without unreasonable delay, but no later than ninety days after the date of the discovery, shall send subsection (c)(2) to the attorney general, in a form and manner prescribed by the attorney general, a notice disclosing the discovery.
Pending 2026-02-24
R-01.1, R-01.2
Sec. 11(1)-(2)
Plain Language
When a security breach occurs involving data collected through electronic monitoring or automated decision tools, the employer must promptly secure the affected systems, mitigate harm, and certify corrective steps. Within 48 hours of discovery, the employer must notify all affected covered individuals with details of the breach, the data compromised, the employer's response, and available protections. The employer must also notify the Department of Labor and Economic Opportunity and the attorney general. Affected covered individuals must receive an extraordinarily comprehensive remediation package including 10 years of paid premium identity theft protection with a minimum $5 million insurance policy per individual, comprehensive credit monitoring (including dependent coverage), dark web monitoring, breach alerts, a three-bureau credit freeze, US-based fraud remediation, SSN monitoring with reissuance costs, and bank fraud monitoring. The employer must also engage a third party to audit the breached tool for vulnerabilities.
Sec. 11. (1) If an employer has a security breach of data collected through an electronic monitoring tool or automated decisions tool, the employer must do all of the following: (a) Promptly secure the electronic monitoring systems or automated decisions tools, mitigate harm, and certify that corrective steps were taken. (b) Not more than 48 hours after the discovery of the security breach, provide notice of the security breach to all of the covered individuals whose data is affected by the security breach. The notice must include all of the following: (i) A summary of how the breach occurred. (ii) The specific data that was compromised, if known. (iii) How the employer is responding to the security breach. (iv) Information on any necessary steps the employee can take to help secure the employee's data or apply for employer-covered protections under subdivision (c). (c) Provide all of the following to the covered individuals whose data is affected by the security breach: (i) Ten years of paid premium identity theft protection and insurance, including, but not limited to, an insurance policy of not less than $5,000,000.00 that covers financial loss, expense reimbursement, and legal fees for each affected covered individual. (ii) Comprehensive credit monitoring that also covers a covered individual's dependents if the dependents' data is compromised. (iii) Dark web monitoring. (iv) Account breach alerts. (v) A 3-bureau credit freeze. (vi) Expert fraud remediation that is based in the United States. (vii) Social Security number monitoring and the cost of reissuance. (viii) Bank fraud and financial transaction monitoring. (d) Provide notice of the security breach to the department and the attorney general. (2) After a security breach has occurred as described in subsection (1), the employer must contract with a third party to perform an audit of the electronic monitoring tool or automated decisions tool to ensure that any vulnerabilities have been fixed.
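A compliance team might track the Sec. 11 duties as a remediation checklist plus a 48-hour notice deadline. The following Python sketch is illustrative; the item labels paraphrase the statute and all identifiers are assumptions.

```python
# Illustrative tracker for the Sec. 11 breach duties. Item labels
# paraphrase the statute; all identifiers are assumptions.
from datetime import datetime, timedelta

REMEDIATION_ITEMS = {
    "10-year premium identity theft protection with $5M policy",
    "credit monitoring, including affected dependents",
    "dark web monitoring",
    "account breach alerts",
    "three-bureau credit freeze",
    "US-based expert fraud remediation",
    "SSN monitoring and reissuance costs",
    "bank fraud and transaction monitoring",
}

def individual_notice_deadline(discovered: datetime) -> datetime:
    # Notice to affected covered individuals within 48 hours of discovery.
    return discovered + timedelta(hours=48)

def response_complete(items_provided: set[str],
                      regulators_notified: bool,
                      third_party_audit_done: bool) -> bool:
    # Remediation package, notice to the department and AG, and the
    # post-breach third-party audit must all be complete.
    return (REMEDIATION_ITEMS <= items_provided
            and regulators_notified
            and third_party_audit_done)
```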
Pending 2026-01-01
R-01.1
Minn. Stat. § 325M.41, subd. 4
Plain Language
Developers must report every safety incident to the attorney general within 72 hours of learning of it or of learning enough facts to form a reasonable belief one occurred. The report must include the date, the statutory basis for classifying it as a safety incident, and a plain-language description. Safety incidents include actual critical harm events as well as precursor events — autonomous model behavior, model weight theft or unauthorized access, and unauthorized model use — if they provide demonstrable evidence of increased critical-harm risk. The 72-hour clock starts at knowledge or reasonable belief, whichever is earlier.
A developer must disclose each safety incident affecting the artificial intelligence model to the attorney general within 72 hours of the date the developer learns of the safety incident or within 72 hours of the date the developer learns sufficient facts to establish a reasonable belief that a safety incident has occurred. The disclosure must include: (1) the date of the safety incident; (2) the reasons the safety incident qualifies as a safety incident as defined in this section; and (3) a short statement describing in plain language the safety incident.
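Since the 72-hour clock runs from the earlier of actual knowledge and constructive knowledge (facts supporting a reasonable belief), the deadline logic might be sketched as follows, with illustrative names.

```python
# Sketch of the subd. 4 clock: 72 hours from the earlier of actual
# knowledge or facts supporting a reasonable belief that a safety
# incident occurred. All names are illustrative.
from datetime import datetime, timedelta
from typing import Optional

def disclosure_due(actual_knowledge: Optional[datetime],
                   reasonable_belief: Optional[datetime]) -> Optional[datetime]:
    starts = [t for t in (actual_knowledge, reasonable_belief) if t is not None]
    return min(starts) + timedelta(hours=72) if starts else None
```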
Pre-filed 2026-08-01
R-01.1
Minn. Stat. § 325M.41, subd. 4
Plain Language
Developers must report every safety incident to the Minnesota attorney general within 72 hours of learning of the incident or learning sufficient facts to reasonably believe one occurred. The report must include the date, an explanation of why the event qualifies as a safety incident, and a plain-language description. Safety incidents include actual critical harm events and events demonstrating increased risk of critical harm — such as autonomous model behavior, model weight theft or leakage, and unauthorized model use. The 72-hour clock starts at actual or constructive knowledge, not at the time of the incident itself.
A developer must disclose each safety incident affecting the artificial intelligence model to the attorney general within 72 hours of the date the developer learns of the safety incident or within 72 hours of the date the developer learns sufficient facts to establish a reasonable belief that a safety incident has occurred. The disclosure must include: (1) the date of the safety incident; (2) the reasons the safety incident qualifies as a safety incident as defined in this section; and (3) a short statement describing in plain language the safety incident.
Pending 2026-01-01
R-01.1, R-01.2
G.S. § 114B-4(b)(2)
Plain Language
Licensed health information chatbot operators must report any data breach to the Department of Justice within 24 hours and notify affected consumers within 48 hours. This accelerated timeline overrides any contrary state notification law. Note that this obligation is also captured in mapping NC-S-624-D-01-a as part of the broader data security requirements; it is separately mapped here because the incident reporting timeline is independently actionable and maps to a distinct taxonomy requirement.
Report any data breaches within twenty-four (24) hours to the Department and within forty-eight (48) hours to affected consumers, notwithstanding any provision of law to the contrary.
Pending 2027-01-01
R-01.1
Sec. 5(2)-(3)
Plain Language
All frontier developers (not just large frontier developers) must report any critical safety incident to the Attorney General within 15 days of discovery. Critical safety incidents include model weight exfiltration, mass casualty events, loss of model control, and deceptive model behavior. If the incident poses an imminent risk of death or serious physical injury, accelerated 24-hour reporting to an appropriate authority (including law enforcement or public safety agencies) is required. Note the broader scope — this obligation applies to all frontier developers, not just those meeting the $500M revenue threshold.
(2) A frontier developer shall report any critical safety incident pertaining to one of its frontier models to the Attorney General within fifteen days after discovering the critical safety incident. (3) If a frontier developer discovers that a critical safety incident poses an imminent risk of death or serious physical injury, the frontier developer shall disclose that incident within twenty-four hours to an authority, including any law enforcement agency or public safety agency with jurisdiction, that is appropriate based on the nature of that incident and as required by law.
Pending 2027-01-01
R-01.1
Sec. 5(4)
Plain Language
Large chatbot providers must report any child safety incident involving their covered chatbots to the Attorney General within 15 days of discovery. A child safety incident is defined as chatbot behavior that, if committed by a human, would constitute intentionally or recklessly causing death, bodily injury, or severe emotional distress to a minor.
(4) A large chatbot provider shall report any child safety incident pertaining to one of its covered chatbots to the Attorney General within fifteen days after discovering the child safety incident.
Pending 2026-02-01
R-01.3
Sec. 3(5)(a)-(b)
Plain Language
When a developer discovers — through its own testing or via a credible deployer report — that a deployed high-risk AI system has caused or is reasonably likely to have caused algorithmic discrimination, the developer must notify all known deployers and other developers without unreasonable delay. The Attorney General will prescribe the form and manner of this notification. This is a reactive disclosure obligation triggered by discovery of actual or likely discrimination, not a periodic reporting requirement.
(5)(a) On and after February 1, 2026, a developer of a high-risk artificial intelligence system shall disclose to all known deployers or other developers of the high-risk artificial intelligence system, each known risk of algorithmic discrimination arising from any intended use of the high-risk artificial intelligence system without unreasonable delay after the date on which: (i) The developer discovers through the developer's ongoing testing and analysis that the developer's high-risk artificial intelligence system has been deployed and has caused or is reasonably likely to have caused algorithmic discrimination; or (ii) The developer receives from a deployer a credible report that the high-risk artificial intelligence system has been deployed and has caused algorithmic discrimination. (b) The Attorney General shall prescribe the form and manner of the disclosure described in subdivision (a) of this subsection.
Pending 2025-07-26
R-01.1
State Tech. Law § 520(1)-(2)
Plain Language
Licensees must report system malfunctions to the Department whenever the system fails to operate as intended for any period during which the malfunction harmed, or had the capacity to harm, a person. For systems interacting with law enforcement, government agencies, or weapons systems, additional notification to relevant law enforcement or government entities is required as a license condition. The 'significant period' trigger is harm-based rather than time-based — any malfunction with harm capacity triggers the duty regardless of duration.
§ 520. Malfunction and incident reports; duty to notify. 1. A licensee shall have the duty to notify the department and, if applicable, a relevant law enforcement agency or governmental entity where the licensee's system fails to operate as intended for any significant period of time. A period of time is deemed "significant" for purposes of this section where the period of time that the malfunction occurred had the capacity to or has harmed a person or persons. 2. A licensee shall have the duty to notify a relevant law enforcement agency or governmental entity of a malfunction where designated by the department upon receipt of a license. The secretary shall issue such a requirement upon the licensee where such systems interact with law enforcement systems or the systems of a government agency, engage in law enforcement functions or the functions of a government agency, or where such systems operate, in whole or in part, or are, a weapon.
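The harm-based trigger and the conditional second notification can be expressed as a small routing rule. An illustrative Python sketch, with all names assumed:

```python
# Illustrative routing rule for § 520: the trigger is harm-based, not
# time-based, and certain systems carry a second notification duty.
# All names are assumptions.
def notification_recipients(harmed_person: bool,
                            capacity_to_harm: bool,
                            law_enforcement_or_weapons_system: bool) -> set[str]:
    recipients: set[str] = set()
    if harmed_person or capacity_to_harm:
        recipients.add("department")
        if law_enforcement_or_weapons_system:
            recipients.add("relevant law enforcement or government entity")
    return recipients
```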
Pending 2025-09-02
R-01.1
Gen. Bus. Law § 1421(5)
Plain Language
Large developers must report each safety incident to the Division of Homeland Security and Emergency Services within 72 hours of learning of the incident or of learning facts sufficient to establish a reasonable belief that one has occurred. The report must include the incident date, the specific statutory basis for why it qualifies as a safety incident, and a short plain-language description. The 72-hour clock starts from actual or constructive knowledge, creating an ongoing monitoring obligation. Safety incidents include autonomous model behavior, model weight theft or leakage, critical control failures, and unauthorized model use — but only when the incident provides demonstrable evidence of increased critical harm risk.
A large developer shall disclose each safety incident affecting the frontier model to the division of homeland security and emergency services within seventy-two hours of the large developer learning of the safety incident or within seventy-two hours of the large developer learning facts sufficient to establish a reasonable belief that a safety incident has occurred. Such disclosure shall include: (a) the date of the safety incident; (b) the reasons the incident qualifies as a safety incident as defined in subdivision thirteen of section fourteen hundred twenty of this article; and (c) a short and plain statement describing the safety incident.
Enacted 2025-06-03
R-01.1
Gen. Bus. Law § 1421(4)
Plain Language
Large developers must report every safety incident to both the Attorney General and the Division of Homeland Security and Emergency Services within 72 hours of learning of the incident (or learning facts sufficient to establish a reasonable belief one occurred). The report must include the date, the classification basis under the statutory definition, and a plain-language description. Safety incidents include actual critical harm events as well as precursor incidents — autonomous model behavior, model weight theft/release, control failures, and unauthorized use — that provide demonstrable evidence of increased critical harm risk. The 72-hour clock starts from actual or constructive knowledge, creating an incentive for robust internal monitoring and escalation procedures.
A large developer shall disclose each safety incident affecting the frontier model to the attorney general and division of homeland security and emergency services within seventy-two hours of the large developer learning of the safety incident or within seventy-two hours of the large developer learning facts sufficient to establish a reasonable belief that a safety incident has occurred. Such disclosure shall include: (a) the date of the safety incident; (b) the reasons the incident qualifies as a safety incident as defined in subdivision thirteen of section fourteen hundred twenty of this article; and (c) a short and plain statement describing the safety incident.
Pending 2025-06-25
R-01.1
Gen. Bus. Law § 1421(5)
Plain Language
Large developers must report every safety incident affecting a frontier model to the Division of Homeland Security and Emergency Services within 72 hours of learning of the incident or learning facts establishing a reasonable belief one occurred. A safety incident is not any operational issue — it must provide demonstrable evidence of increased risk of critical harm and fall into one of four categories: autonomous model behavior not requested by a user, theft or unauthorized access to model weights, critical failure of technical or administrative controls, or unauthorized use. Each report must include the incident date, the specific reason it qualifies as a safety incident, and a plain-language description.
A large developer shall disclose each safety incident affecting the frontier model to the division of homeland security and emergency services within seventy-two hours of the large developer learning of the safety incident or within seventy-two hours of the large developer learning facts sufficient to establish a reasonable belief that a safety incident has occurred. Such disclosure shall include: (a) the date of the safety incident; (b) the reasons the incident qualifies as a safety incident as defined in subdivision thirteen of section fourteen hundred twenty of this article; and (c) a short and plain statement describing the safety incident.
Pending
R-01.3
S.C. Code § 37-31-20(E)
Plain Language
Developers must notify both the Attorney General and all known deployers within 90 days of discovering — through their own testing or a credible deployer report — that their high-risk AI system has caused or is reasonably likely to have caused algorithmic discrimination. This is a mandatory disclosure triggered by actual or likely discrimination, not a routine reporting obligation. The AG prescribes the form and manner of disclosure.
(E) A developer of a high-risk artificial intelligence system shall disclose to the Attorney General, in a form and manner prescribed by the Attorney General, and to all known deployers or other developers of the high-risk artificial intelligence system, any known or reasonably foreseeable risks of algorithmic discrimination arising from the intended uses of the high-risk artificial intelligence system without unreasonable delay but no later than ninety days after the date on which: (1) the developer discovers through the developer's ongoing testing and analysis that the developer's high-risk artificial intelligence system has been deployed and has caused or is reasonably likely to have caused algorithmic discrimination; or (2) the developer receives from a deployer a credible report that the high-risk artificial intelligence system has been deployed and has caused algorithmic discrimination.
Pending
R-01.3
S.C. Code § 37-31-30(G)
Plain Language
Deployers who discover that a deployed high-risk AI system has caused algorithmic discrimination must report the discovery to the Attorney General within 90 days in a form prescribed by the AG. This is a mandatory incident-triggered obligation — not a routine periodic filing.
(G) If a deployer deploys a high-risk artificial intelligence system and subsequently discovers that the high-risk artificial intelligence system has caused algorithmic discrimination, the deployer, without unreasonable delay, but no later than ninety days after the date of the discovery, shall send to the Attorney General, in a form and manner prescribed by him, a notice disclosing the discovery.
Pending 2026-01-01
R-01.1
S.C. Code § 39-81-50(A)(1)-(3)
Plain Language
When an operator learns a user faces imminent risk of death or serious physical injury, it must make reasonable efforts within 24 hours to notify emergency services or law enforcement, using information it already has or can obtain through reasonable user-facing prompts. If the operator lacks sufficient information to enable emergency response, it must instead promptly urge the user to contact emergency services, provide crisis information, encourage the user to seek help from a trusted adult, and document its steps and reasoning. A good-faith safe harbor protects operators from damages for making emergency notifications, unless they acted with willful misconduct or gross negligence.
(A)(1) If a covered entity obtains knowledge that a user faces an imminent risk of death or serious physical injury, then the operator must make reasonable efforts, within twenty-four hours, to notify appropriate emergency services or law enforcement, to the extent practicable based on information the operator already possesses or can obtain through reasonable, user-facing prompts for the purpose of facilitating emergency assistance. (2) If the operator cannot make a notification under item (1) because the operator lacks sufficient information to enable an emergency response, then the operator shall: (a) promptly provide a clear and prominent message urging the user to contact emergency services and provide crisis services information, (b) make reasonable efforts to encourage the user to seek immediate help from a trusted adult or emergency services, and (c) document the steps taken and the basis for the operator's determination that notification was not practicable. (3) An operator that makes a notification in good faith under this subsection is not liable for damages solely for making the notification, unless the operator acted with willful misconduct or gross negligence.
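The statute's practicability branch is effectively a decision tree. A hedged sketch in Python, with hypothetical names:

```python
# Sketch of the § 39-81-50(A) decision tree. All names are hypothetical.
def emergency_response_actions(imminent_risk: bool,
                               sufficient_info: bool) -> list[str]:
    if not imminent_risk:
        return []
    if sufficient_info:
        # Reasonable efforts, within 24 hours, to notify emergency
        # services or law enforcement.
        return ["notify appropriate emergency services or law enforcement "
                "within 24 hours"]
    # Fallback when notification is not practicable.
    return [
        "display a clear, prominent message urging the user to contact "
        "emergency services, with crisis services information",
        "encourage the user to seek immediate help from a trusted adult "
        "or emergency services",
        "document the steps taken and why notification was not practicable",
    ]
```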
Pending 2026-01-01
R-01.1
S.C. Code § 39-81-50(B)-(C)
Plain Language
Covered entities must report covered incidents to the South Carolina Attorney General within 15 days of obtaining knowledge of the incident. Covered incidents include death, suicide attempts, self-harm requiring medical attention, psychiatric emergencies requiring urgent treatment, or serious physical injury arising from chatbot interactions. Reports must include (to the extent known): dates, incident description, basis for believing the chatbot is connected, and responsive actions taken. Supplemental reports may be filed within 60 days. Reports are confidential and exempt from FOIA, though the AG may publish aggregate anonymized statistics. This is a 15-day standard reporting timeline — contrast with the 24-hour emergency notification to law enforcement under § 39-81-50(A) for imminent risk.
(B)(1) A covered entity shall submit a report to the Attorney General within fifteen days of obtaining knowledge of a covered incident connected to one or more of its chatbots, which, to the extent known at the time of the report, shall include: (a) the date the operator obtained knowledge of the incident; (b) the date of the incident, if known; (c) a brief description of the incident and the basis for the operator's belief that the incident is connected to the chatbot; and (d) a description of any actions the operator took in response. (2) A covered entity may submit a supplemental report within sixty days after the initial report to update or correct information learned through investigation. (C)(1) Reports submitted under this section shall be confidential and are not subject to disclosure pursuant to Chapter 4, Title 30, the Freedom of Information Act. (2) The Attorney General may publish aggregate information and statistics derived from the reports, so long as the publication does not identify individual users or disclose trade secrets.
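The paired 15-day and 60-day windows might be tracked as follows; all names in this sketch are illustrative.

```python
# Sketch of the § 39-81-50(B) windows: initial report due 15 days after
# knowledge; supplemental report allowed within 60 days of the initial
# report. All names are illustrative.
from datetime import date, timedelta
from typing import Optional

def report_timeline(knowledge: date,
                    initial_filed: Optional[date] = None) -> dict[str, date]:
    timeline = {"initial_report_due": knowledge + timedelta(days=15)}
    if initial_filed is not None:
        timeline["supplemental_window_ends"] = initial_filed + timedelta(days=60)
    return timeline
```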
Enacted 2024-05-01
R-01.1
Utah Code § 13-70-304(5)
Plain Language
Learning Laboratory participants must immediately report to the Office of Artificial Intelligence Policy any incident resulting in consumer harm, a privacy breach, or unauthorized data usage. This is a continuous obligation throughout the participation period. Failure to report — or the underlying incident itself — may result in removal from the Learning Laboratory and exposure to all applicable civil and criminal penalties.
A participant shall immediately report to the office any incidents resulting in consumer harm, privacy breach, or unauthorized data usage, which may result in removal of the participant from the learning laboratory.
Pending 2026-07-01
R-01.1
Va. Code § 59.1-616(A)(1)-(3)
Plain Language
When a covered entity learns that a user faces imminent risk of death or serious physical injury, it must make reasonable efforts within 24 hours to notify emergency services or law enforcement, using information it already has or can obtain through reasonable user-facing prompts. If the operator lacks sufficient information to make the notification, it must instead: (1) display a clear, prominent crisis message urging the user to contact emergency services, (2) encourage the user to seek help from a trusted adult or emergency services, and (3) document the steps taken and why direct notification was not practicable. A good-faith safe harbor protects operators from liability for making the notification itself, unless the operator acted with willful misconduct or gross negligence.
A. 1. If a covered entity obtains knowledge that a user faces an imminent risk of death or serious physical injury, the operator shall make reasonable efforts, within 24 hours, to notify appropriate emergency services or law enforcement to the extent practicable based on information the operator already possesses or can obtain through reasonable, user-facing prompts for the purpose of facilitating emergency assistance.
2. If the operator cannot make a notification under subdivision 1 because the operator lacks sufficient information to enable emergency response, the operator shall:
a. Promptly provide a clear and prominent message urging the user to contact emergency services and providing crisis services information;
b. Make reasonable efforts to encourage the user to seek immediate help from a trusted adult or emergency services; and
c. Document the steps taken and the basis for the operator's determination that notification was not practicable.
3. An operator that makes a notification in good faith under this subsection is not liable for damages solely for making the notification unless the operator acted with willful misconduct or gross negligence.
Pending 2026-07-01
R-01.1
Va. Code § 59.1-616(B)-(C)
Plain Language
Within 15 days of learning that a user suffered a covered harm (death, suicide attempt, self-harm requiring medical attention, psychiatric emergency, or serious physical injury) connected to one of its chatbots, a covered entity must submit a confidential report to the Attorney General. The report must include the date of knowledge, date of incident if known, a description of what happened and why the operator believes it is connected to the chatbot, and the operator's response actions. A supplemental report may be filed within 60 days to update or correct information. Reports are confidential, though the Attorney General may publish aggregate statistics that do not identify users or disclose trade secrets.
B. A covered entity shall submit a report to the Attorney General within 15 days of obtaining knowledge of a covered incident connected to one or more of its chatbots, which, to the extent known at the time of the report, shall include:
1. The date the operator obtained knowledge of the incident;
2. The date of the incident, if known;
3. A brief description of the incident and the basis for the operator's belief that the incident is connected to the chatbot; and
4. A description of any actions the operator took in response.
A covered entity may submit a supplemental report within 60 days of the initial report to update or correct information learned through investigation.
C. 1. Reports submitted under this section shall be confidential.
2. The Attorney General may publish aggregate information and statistics derived from such reports, so long as the publication does not identify individual users or disclose trade secrets.
Pending 2026-07-01
R-01.3
Sec. 3(2)(b)
Plain Language
When a deployer discovers that a deployed high-risk AI system has caused algorithmic discrimination, the deployer must notify the attorney general within 90 days of discovery, using the form and manner the attorney general prescribes. This is a reactive disclosure triggered by discovery, not a scheduled report. The obligation runs in parallel with the annual review requirement — the review is how discovery is expected to occur, but the notification obligation applies regardless of how discrimination is discovered.
If a deployer subsequently discovers that the high-risk artificial intelligence system has caused algorithmic discrimination, the deployer, without unreasonable delay, but no later than 90 days after the date of the discovery, shall send to the attorney general, in a form and manner prescribed by the attorney general, a notice disclosing the discovery.
Pending 2026-07-01
R-01.3
Sec. 3(2)(b)
Plain Language
When a deployer discovers that a deployed high-risk AI system has caused algorithmic discrimination, it must notify the Attorney General within 90 days of discovery, using a form and manner the AG prescribes. This is a reactive obligation triggered by actual discovery of discrimination, separate from the annual review obligation that requires proactively checking for discrimination. Trade secrets and confidential information are protected from disclosure under Section 3(3).
If a deployer subsequently discovers that the high-risk artificial intelligence system has caused algorithmic discrimination, the deployer, without unreasonable delay, but no later than 90 days after the date of the discovery, shall send to the attorney general, in a form and manner prescribed by the attorney general, a notice disclosing the discovery.