Operators of AI systems must report significant safety incidents to designated regulatory authorities within specified timeframes. These timeframes vary by jurisdiction and incident severity: typically 15 days for standard incidents, and 24–72 hours for incidents posing an imminent risk of death or serious injury. Some jurisdictions also require notification to affected individuals.
(8) A public incident reporting mechanism that enables a third party to report directly to the operator an incident regarding a child safety risk and to access other reports made through that reporting mechanism.
(a) The Office of Emergency Services shall establish a mechanism to be used by a frontier developer or a member of the public to report a critical safety incident that includes all of the following: (1) The date of the critical safety incident. (2) The reasons the incident qualifies as a critical safety incident. (3) A short and plain statement describing the critical safety incident. (4) Whether the incident was associated with internal use of a frontier model. ... (c) (1) Subject to paragraph (2), a frontier developer shall report any critical safety incident pertaining to one or more of its frontier models to the Office of Emergency Services within 15 days of discovering the critical safety incident. (2) If a frontier developer discovers that a critical safety incident poses an imminent risk of death or serious physical injury, the frontier developer shall disclose that incident within 24 hours to an authority, including any law enforcement agency or public safety agency with jurisdiction, that is appropriate based on the nature of that incident and as required by law. (3) A frontier developer that discovers information about a critical safety incident after filing the initial report required by this subdivision may file an amended report.
(h) The Office of Emergency Services may adopt regulations designating one or more federal laws, regulations, or guidance documents that meet all of the following conditions for the purposes of subdivision (i): (1) (A) The law, regulation, or guidance document imposes or states standards or requirements for critical safety incident reporting that are substantially equivalent to, or stricter than, those required by this section. (B) The law, regulation, or guidance document described in subparagraph (A) does not need to require critical safety incident reporting to the State of California. (2) The law, regulation, or guidance document is intended to assess, detect, or mitigate the catastrophic risk. (i) (1) A frontier developer that intends to comply with this section by complying with the requirements of, or meeting the standards stated by, a federal law, regulation, or guidance document designated pursuant to subdivision (h) shall declare its intent to do so to the Office of Emergency Services. (2) After a frontier developer has declared its intent pursuant to paragraph (1), both of the following apply: (A) The frontier developer shall be deemed in compliance with this section to the extent that the frontier developer meets the standards of, or complies with the requirements imposed or stated by, the designated federal law, regulation, or guidance document until the frontier developer declares the revocation of that intent to the Office of Emergency Services or the Office of Emergency Services revokes a relevant regulation pursuant to subdivision (j). (B) The failure by a frontier developer to meet the standards of, or comply with the requirements stated by, the federal law, regulation, or guidance document designated pursuant to subdivision (h) shall constitute a violation of this chapter. (j) The Office of Emergency Services shall revoke a regulation adopted under subdivision (h) if the requirements of subdivision (h) are no longer met.
(1) Subject to paragraph (2), a frontier developer shall report any critical safety incident pertaining to one or more of its frontier models to the Office of Emergency Services within 15 days of discovering the critical safety incident. (2) If a frontier developer discovers that a critical safety incident poses an imminent risk of death or serious physical injury, the frontier developer shall disclose that incident within 24 hours to an authority, including any law enforcement agency or public safety agency with jurisdiction, that is appropriate based on the nature of that incident and as required by law. (3) A frontier developer that discovers information about a critical safety incident after filing the initial report required by this subdivision may file an amended report. (4) A frontier developer is encouraged, but not required, to report critical safety incidents pertaining to foundation models that are not frontier models.
(7) If a deployer deploys a high-risk artificial intelligence system on or after June 30, 2026, and subsequently discovers that the high-risk artificial intelligence system has caused algorithmic discrimination, the deployer, without unreasonable delay, but no later than ninety days after the date of the discovery, shall send to the attorney general, in a form and manner prescribed by the attorney general, a notice disclosing the discovery.
If a deployer deploys an automated decision system and subsequently discovers that the automated decision system has caused algorithmic discrimination, the deployer, without unreasonable delay, but no later than 90 days after the date of the discovery, shall send to the Attorney General, in a form and manner prescribed by the Attorney General, a notice disclosing the discovery.
(c) Disclosure of Risks: Developers must notify the Attorney General and deployers of any known or foreseeable risks of discrimination within 90 days of discovery.
(e) Not later than 6 months after the effective date of this act, a developer of a high-risk artificial intelligence system shall disclose to the attorney general, in a form and manner prescribed by the attorney general, and to all known deployers or other developers of the high-risk artificial intelligence system, any known or reasonably foreseeable risks of algorithmic discrimination arising from the intended uses of the high-risk artificial intelligence system without unreasonable delay but no later than ninety days after the date on which: (1) the developer discovers through the developer's ongoing testing and analysis that the developer's high-risk artificial intelligence system has been deployed and has caused or is reasonably likely to have caused algorithmic discrimination; or (2) the developer receives from a deployer a credible report that the high-risk artificial intelligence system has been deployed and has caused algorithmic discrimination.
(g) If a deployer deploys a high-risk artificial intelligence system not later than 6 months after the effective date of this act, and subsequently discovers that the high-risk artificial intelligence system has caused algorithmic discrimination, the deployer, without unreasonable delay, but no later than ninety days after the date of the discovery, shall send to the attorney general, in a form and manner prescribed by the attorney general under subsection (c)(2), a notice disclosing the discovery.
Sec. 11. (1) If an employer has a security breach of data collected through an electronic monitoring tool or automated decisions tool, the employer must do all of the following: (a) Promptly secure the electronic monitoring systems or automated decisions tools, mitigate harm, and certify that corrective steps were taken. (b) Not more than 48 hours after the discovery of the security breach, provide notice of the security breach to all of the covered individuals whose data is affected by the security breach. The notice must include all of the following: (i) A summary of how the breach occurred. (ii) The specific data that was compromised, if known. (iii) How the employer is responding to the security breach. (iv) Information on any necessary steps the employee can take to help secure the employee's data or apply for employer-covered protections under subdivision (c). (c) Provide all of the following to the covered individuals whose data is affected by the security breach: (i) Ten years of paid premium identity theft protection and insurance, including, but not limited to, an insurance policy of not less than $5,000,000.00 that covers financial loss, expense reimbursement, and legal fees for each affected covered individual. (ii) Comprehensive credit monitoring that also covers a covered individual's dependents if the dependents' data is compromised. (iii) Dark web monitoring. (iv) Account breach alerts. (v) A 3-bureau credit freeze. (vi) Expert fraud remediation that is based in the United States. (vii) Social Security number monitoring and the cost of reissuance. (viii) Bank fraud and financial transaction monitoring. (d) Provide notice of the security breach to the department and the attorney general. (2) After a security breach has occurred as described in subsection (1), the employer must contract with a third party to perform an audit of the electronic monitoring tool or automated decisions tool to ensure that any vulnerabilities have been fixed.
A developer must disclose each safety incident affecting the artificial intelligence model to the attorney general within 72 hours of the date the developer learns of the safety incident or within 72 hours of the date the developer learns sufficient facts to establish a reasonable belief that a safety incident has occurred. The disclosure must include: (1) the date of the safety incident; (2) the reasons the safety incident qualifies as a safety incident as defined in this section; and (3) a short statement describing in plain language the safety incident.
Report any data breaches within twenty-four (24) hours to the Department and within forty-eight (48) hours to affected consumers, notwithstanding any provision of law to the contrary.
(2) A frontier developer shall report any critical safety incident pertaining to one of its frontier models to the Attorney General within fifteen days after discovering the critical safety incident. (3) If a frontier developer discovers that a critical safety incident poses an imminent risk of death or serious physical injury, the frontier developer shall disclose that incident within twenty-four hours to an authority, including any law enforcement agency or public safety agency with jurisdiction, that is appropriate based on the nature of that incident and as required by law.
(4) A large chatbot provider shall report any child safety incident pertaining to one of its covered chatbots to the Attorney General within fifteen days after discovering the child safety incident.
(5)(a) On and after February 1, 2026, a developer of a high-risk artificial intelligence system shall disclose to all known deployers or other developers of the high-risk artificial intelligence system, each known risk of algorithmic discrimination arising from any intended use of the high-risk artificial intelligence system without unreasonable delay after the date on which: (i) The developer discovers through the developer's ongoing testing and analysis that the developer's high-risk artificial intelligence system has been deployed and has caused or is reasonably likely to have caused algorithmic discrimination; or (ii) The developer receives from a deployer a credible report that the high-risk artificial intelligence system has been deployed and has caused algorithmic discrimination. (b) The Attorney General shall prescribe the form and manner of the disclosure described in subdivision (a) of this subsection.
§ 520. Malfunction and incident reports; duty to notify. 1. A licensee shall have the duty to notify the department and, if applicable, a relevant law enforcement agency or governmental entity where the licensee's system fails to operate as intended for any significant period of time. A period of time is deemed "significant" for purposes of this section where the period of time that the malfunction occurred had the capacity to or has harmed a person or persons. 2. A licensee shall have the duty to notify a relevant law enforcement agency or governmental entity of a malfunction where designated by the department upon receipt of a license. The secretary shall issue such a requirement upon the licensee where such systems interact with law enforcement systems or the systems of a government agency, engage in law enforcement functions or the functions of a government agency, or where such systems operate, in whole or in part, or are, a weapon.
A large developer shall disclose each safety incident affecting the frontier model to the division of homeland security and emergency services within seventy-two hours of the large developer learning of the safety incident or within seventy-two hours of the large developer learning facts sufficient to establish a reasonable belief that a safety incident has occurred. Such disclosure shall include: (a) the date of the safety incident; (b) the reasons the incident qualifies as a safety incident as defined in subdivision thirteen of section fourteen hundred twenty of this article; and (c) a short and plain statement describing the safety incident.
A large developer shall disclose each safety incident affecting the frontier model to the attorney general and division of homeland security and emergency services within seventy-two hours of the large developer learning of the safety incident or within seventy-two hours of the large developer learning facts sufficient to establish a reasonable belief that a safety incident has occurred. Such disclosure shall include: (a) the date of the safety incident; (b) the reasons the incident qualifies as a safety incident as defined in subdivision thirteen of section fourteen hundred twenty of this article; and (c) a short and plain statement describing the safety incident.
(E) A developer of a high-risk artificial intelligence system shall disclose to the Attorney General, in a form and manner prescribed by the Attorney General, and to all known deployers or other developers of the high-risk artificial intelligence system, any known or reasonably foreseeable risks of algorithmic discrimination arising from the intended uses of the high-risk artificial intelligence system without unreasonable delay but no later than ninety days after the date on which: (1) the developer discovers through the developer's ongoing testing and analysis that the developer's high-risk artificial intelligence system has been deployed and has caused or is reasonably likely to have caused algorithmic discrimination; or (2) the developer receives from a deployer a credible report that the high-risk artificial intelligence system has been deployed and has caused algorithmic discrimination.
(G) If a deployer deploys a high-risk artificial intelligence system and subsequently discovers that the high-risk artificial intelligence system has caused algorithmic discrimination, the deployer, without unreasonable delay, but no later than ninety days after the date of the discovery, shall send to the Attorney General, in a form and manner prescribed by him, a notice disclosing the discovery.
(A)(1) If a covered entity obtains knowledge that a user faces an imminent risk of death or serious physical injury, then the operator must make reasonable efforts, within twenty-four hours, to notify appropriate emergency services or law enforcement, to the extent practicable based on information the operator already possesses or can obtain through reasonable, user-facing prompts for the purpose of facilitating emergency assistance. (2) If the operator cannot make a notification under item (1) because the operator lacks sufficient information to enable an emergency response, then the operator shall: (a) promptly provide a clear and prominent message urging the user to contact emergency services and provide crisis services information, (b) make reasonable efforts to encourage the user to seek immediate help from a trusted adult or emergency services, and (c) document the steps taken and the basis for the operator's determination that notification was not practicable. (3) An operator that makes a notification in good faith under this subsection is not liable for damages solely for making the notification, unless the operator acted with willful misconduct or gross negligence.
(B)(1) A covered entity shall submit a report to the Attorney General within fifteen days of obtaining knowledge of a covered incident connected to one or more of its chatbots, which, to the extent known at the time of the report, shall include: (a) the date the operator obtained knowledge of the incident; (b) the date of the incident, if known; (c) a brief description of the incident and the basis for the operator's belief that the incident is connected to the chatbot; and (d) a description of any actions the operator took in response. (2) A covered entity may submit a supplemental report within sixty days after the initial report to update or correct information learned through investigation. (C)(1) Reports submitted under this section shall be confidential and are not subject to disclosure pursuant to Chapter 4, Title 30, the Freedom of Information Act. (2) The Attorney General may publish aggregate information and statistics derived from the reports, so long as the publication does not identify individual users or disclose trade secrets.
A participant shall immediately report to the office any incidents resulting in consumer harm, privacy breach, or unauthorized data usage, which may result in removal of the participant from the learning laboratory.
A. 1. If a covered entity obtains knowledge that a user faces an imminent risk of death or serious physical injury, the operator shall make reasonable efforts, within 24 hours, to notify appropriate emergency services or law enforcement to the extent practicable based on information the operator already possesses or can obtain through reasonable, user-facing prompts for the purpose of facilitating emergency assistance. 2. If the operator cannot make a notification under subdivision 1 because the operator lacks sufficient information to enable emergency response, the operator shall: a. Promptly provide a clear and prominent message urging the user to contact emergency services and providing crisis services information; b. Make reasonable efforts to encourage the user to seek immediate help from a trusted adult or emergency services; and c. Document the steps taken and the basis for the operator's determination that notification was not practicable. 3. An operator that makes a notification in good faith under this subsection is not liable for damages solely for making the notification unless the operator acted with willful misconduct or gross negligence.
B. A covered entity shall submit a report to the Attorney General within 15 days of obtaining knowledge of a covered incident connected to one or more of its chatbots, which, to the extent known at the time of the report, shall include: 1. The date the operator obtained knowledge of the incident; 2. The date of the incident, if known; 3. A brief description of the incident and the basis for the operator's belief that the incident is connected to the chatbot; and 4. A description of any actions the operator took in response. A covered entity may submit a supplemental report within 60 days of the initial report to update or correct information learned through investigation. C. 1. Reports submitted under this section shall be confidential. 2. The Attorney General may publish aggregate information and statistics derived from such reports, so long as the publication does not identify individual users or disclose trade secrets.
If a deployer subsequently discovers that the high-risk artificial intelligence system has caused algorithmic discrimination, the deployer, without unreasonable delay, but no later than 90 days after the date of the discovery, shall send to the attorney general, in a form and manner prescribed by the attorney general, a notice disclosing the discovery.