S-03
Safety & Prohibited Conduct
Frontier Model Safety Obligations
Developers of frontier AI models — defined by compute thresholds — face a distinct set of safety obligations focused on catastrophic and systemic risk. These go beyond general AI system safety obligations to address existential-scale harms, dual-use potential for weapons of mass destruction, and deployment gating based on risk thresholds.
Applies to: Developer, Deployer
Sector: Foundation Model
Bills — Enacted: 2 unique bills
Bills — Proposed: 6
Last Updated: 2026-03-29
Sub-Obligations (5)
ID
Name & Description
Enacted
Proposed
S-03.1
Catastrophic risk assessment and mitigation: Frontier model developers must assess and document the risk that their models could cause catastrophic harm — such as mass casualties, critical infrastructure attacks, or other existential-scale outcomes — and implement appropriate safeguards to prevent unreasonable risk of such harm.
0 enacted
3 proposed
S-03.2
CBRN and critical infrastructure risk evaluation: Developers must evaluate whether the model provides meaningful uplift to individuals seeking to develop chemical, biological, radiological, or nuclear weapons, or to plan attacks on critical infrastructure. Evaluations must be documented and updated as capabilities change.
1 enacted
0 proposed
S-03.3
Risk-threshold deployment prohibition: A developer may not deploy a frontier model if doing so would create an unreasonable risk of critical harm. Critical harm is defined in most statutes as CBRN weapon creation or mass-casualty autonomous AI conduct causing death or serious injury to 100+ people or $1B+ in damages.
1 enacted
5 proposed
S-03.4
Compute and capability reporting: Developers of models trained above defined compute thresholds must report model characteristics — including training compute, architecture, capabilities, and safety evaluation results — to designated regulatory authorities.
0 enacted
0 proposed
S-03.5
Frontier AI safety framework publication: Large frontier model developers must write, implement, comply with, and publicly publish a frontier AI safety framework detailing how the developer handles catastrophic risk assessment and thresholds, safety oversight, third-party evaluation processes, cybersecurity protections, and whistleblower procedures. The framework must be kept current and updated following material changes to the developer's systems or risk profile.
1 enacted
6 proposed
Bills That Map This Requirement (8 bills)
Bill
Status
Sub-Obligations
Section
Enacted 2026-01-01
Bus. & Prof. Code § 22757.12(e)(1)(A)
Plain Language
Frontier developers are prohibited from making materially false or misleading public statements about catastrophic risks posed by their frontier models or how they manage those risks.
(1)(A) A frontier developer shall not make a materially false or misleading statement about catastrophic risk from its frontier models or its management of catastrophic risk... (2) This subdivision does not apply to a statement that was made in good faith and was reasonable under the circumstances. 
Pending 2026-01-01
S-03.5
Sec. 7(1)(a)-(b)
Plain Language
Large developers must produce, implement, comply with, and publicly publish a safety and security protocol that addresses critical risks (CBRN weapons, cyberattacks, autonomous criminal conduct) as defined by section 5. If the protocol is materially modified, the changes must be conspicuously published within 30 days. This is a continuous operating obligation — the developer must both follow and publish the protocol, not merely document it.
(1) Beginning on January 1, 2026, a large developer shall do all of the following: (a) Produce, implement, follow, and conspicuously publish a safety and security protocol. (b) If materially modifying the safety and security protocol under subdivision (a), conspicuously publish the modifications not more than 30 days after the material modification was made.
Pending 2026-01-01
S-03.5
Sec. 5(a)-(l)
Plain Language
This section specifies the mandatory contents of the safety and security protocol. The protocol must detail: risk exclusion criteria for lower-risk models, intolerable risk thresholds and escalation procedures, testing and assessment procedures (including evasion, misuse, and model proliferation scenarios), deployment gating procedures, physical/digital/organizational security protections against unauthorized access, safeguard efficacy assessments, critical risk incident response procedures, re-assessment triggers for model modifications, incident reporting conditions, protocol modification conditions, scientific reproducibility disclosures, and the role of financially disinterested third parties. This is a content specification for the protocol required by section 7(1)(a), not an independent obligation.
Sec. 5. A safety and security protocol must describe in detail all of the following, as applicable: (a) How the large developer excludes certain foundation models from being covered by the safety and security protocol when those foundation models pose a limited critical risk. (b) The thresholds at which critical risks would be considered intolerable, any justification for the thresholds, and what the large developer will do if a threshold is surpassed. (c) The testing and assessment procedures the large developer uses to investigate critical risks and how the tests and procedures account for the possibility that a foundation model could evade the control of the large developer or user or be misused, modified, executed with increased computational resources, or used to create another foundation model. (d) The procedure the large developer will use to determine if and how to deploy a foundation model when doing so poses critical risks. (e) The physical, digital, and organizational security protection the large developer will implement to prevent insiders or third parties from accessing foundation models within the large developer's control in a manner that is unauthorized by the developer and could create a critical risk. (f) Any safeguards and risk mitigation measures the large developer uses to reduce critical risks from the large developer's foundation models and how the large developer assesses efficacy and limitations. (g) How the large developer will respond if a critical risk materializes or is imminent. (h) The procedures that the large developer uses to determine whether to conduct additional assessments for a critical risk when the large developer modifies or expands access to the large developer's foundation models or combines the foundation models with other software and how such assessments are conducted. 
(i) The conditions under which the large developer will report an incident relevant to a critical risk that occurs in connection with 1 or more of the large developer's foundation models and the entities to which the large developer will make those reports. (j) The conditions under which the large developer will modify the large developer's safety and security protocol. (k) The parts of the safety and security protocol that the large developer believes provide sufficient scientific detail to allow for the independent assessment of the methods used to generate the results, evidence, and analysis, and to which experts any unredacted versions are made available. (l) Any other role a financially disinterested third party plays under subdivisions (a) to (k).
Pending 2026-01-01
S-03.3
Sec. 7(2)
Plain Language
Large developers are prohibited from knowingly making false or materially misleading statements or omissions in any documents produced under the act, including the safety and security protocol, transparency reports, and testing records. This anti-fraud provision applies to all published documents and creates independent liability — a developer that publishes technically compliant documents containing knowing falsehoods violates this subsection regardless of whether the underlying safety obligations are met.
(2) A large developer shall not knowingly make false or materially misleading statements or omissions in or regarding documents produced in accordance with this section.
Pending 2026-01-01
S-03.5
§ 325M.41, subd. 1(1)-(6)
Plain Language
Before deploying any AI model, developers must create and implement a written safety and security protocol covering risk reduction measures, cybersecurity protections, detailed testing procedures, and designation of responsible senior personnel. Developers must publicly publish a redacted version and transmit a copy to the attorney general, retain the unredacted version plus all testing records for the deployment period plus five years, grant the AG access to the unredacted protocol upon request (with redactions only as required by federal law), and implement safeguards against unreasonable risk of critical harm. This is a comprehensive pre-deployment gate — no model may be deployed without these steps completed.
Before deploying an artificial intelligence model, a developer must: (1) implement a written safety and security protocol; (2) retain an unredacted copy of the safety and security protocol, including records and dates of updates or revisions, for the entire period of time an artificial intelligence model is deployed, plus five years; (3) conspicuously publish a copy of the safety and security protocol with appropriate redactions, and transmit a copy of the redacted safety and security protocol to the attorney general; (4) grant the attorney general access to the safety and security protocol with redactions only to the extent required by federal law, if the attorney general requests access; (5) record and retain information on the specific tests and test results used in any assessment of the artificial intelligence model required under this section or by the developer's safety and security protocol that provides sufficient detail for third parties to replicate the testing procedure for the entire period of time an artificial intelligence model is deployed, plus five years; and (6) implement appropriate safeguards to prevent unreasonable risk of critical harm.
Pending 2026-01-01
S-03.3
§ 325M.41, subd. 2
Plain Language
Developers are categorically prohibited from deploying any AI model that creates an unreasonable risk of critical harm. Critical harm covers CBRN weapon creation/use and autonomous criminal conduct resulting in death, serious injury, or mental injury to 25+ people, or $1M+ in property/monetary damages. This is a hard deployment prohibition — no compliance program or safety protocol can cure it if the unreasonable risk exists.
A developer must not deploy an artificial intelligence model if doing so creates an unreasonable risk of critical harm.
Pending 2026-08-01
S-03.5
Minn. Stat. § 325M.41, subd. 1(1)-(2)
Plain Language
Before deploying any AI model, a developer must create and implement a written safety and security protocol that covers risk reduction measures, cybersecurity protections against unauthorized access by sophisticated actors, detailed testing procedures for evaluating unreasonable risk of critical harm, and designation of senior compliance personnel. The developer must retain an unredacted copy of the protocol — including all revision history — for the entire deployment period plus five years. This is a pre-deployment gating requirement: deployment may not proceed until the protocol is in place.
Before deploying an artificial intelligence model, a developer must: (1) implement a written safety and security protocol; (2) retain an unredacted copy of the safety and security protocol, including records and dates of updates or revisions, for the entire period of time an artificial intelligence model is deployed, plus five years;
Pending 2026-08-01
S-03.1
Minn. Stat. § 325M.41, subd. 1(6)
Plain Language
Developers must implement appropriate safeguards to prevent unreasonable risk of critical harm before deploying any AI model. Critical harm is narrowly defined to cover CBRN weapon creation/use and autonomous criminal conduct causing mass casualties (25+ people) or $1M+ in damages. This is a substantive pre-deployment safety obligation — developers must have working safeguards, not merely a documented protocol. The 'appropriate' and 'unreasonable risk' standards introduce a reasonableness balancing test.
Before deploying an artificial intelligence model, a developer must: (6) implement appropriate safeguards to prevent unreasonable risk of critical harm.
Pending 2026-08-01
S-03.3
Minn. Stat. § 325M.41, subd. 2
Plain Language
Developers are categorically prohibited from deploying any AI model if deployment would create an unreasonable risk of critical harm. This is a deployment-gating prohibition — not a process obligation. Even full compliance with the safety and security protocol requirement does not authorize deployment if the model still poses unreasonable critical harm risk. The standard is 'unreasonable risk,' implying some level of risk may be acceptable if adequately mitigated.
A developer must not deploy an artificial intelligence model if doing so creates an unreasonable risk of critical harm.
Failed 2027-01-01
S-03.5
Sec. 4(1)(a)(i)-(vi), (1)(c)(i)-(iv)
Plain Language
Large frontier developers must write, implement, comply with, and conspicuously publish on their website a public safety and child protection plan that details how they assess catastrophic risk thresholds, apply mitigations, review risks before deployment or extensive internal use, use third-party evaluators, secure unreleased model weights, and manage risks from internal model use including evasion of oversight. The plan must also describe how the developer incorporates national and international standards, revisits and updates the plan, identifies and responds to safety incidents, and maintains internal governance for implementation. This is a continuing obligation — the plan must be kept current and compliance is ongoing.
(1) A large frontier developer or large chatbot provider shall write, implement, comply with, and clearly and conspicuously publish on its website a public safety and child protection plan that describes in detail: (a) For a large frontier developer, how the large frontier developer: (i) Defines and assesses thresholds used by the large frontier developer to identify and assess whether a frontier model has capabilities that could pose a catastrophic risk, which may include multiple-tiered thresholds; (ii) Applies mitigations to address the potential for catastrophic risks based on the results of the assessments undertaken pursuant to subdivision (1)(a)(i) of this section; (iii) Reviews assessments of catastrophic risk and adequacy of mitigations of catastrophic risk as part of the decision to deploy a frontier model or use it extensively internally; (iv) Uses third parties to assess the potential for catastrophic risks and the effectiveness of mitigations of catastrophic risks; (v) Implements cybersecurity practices to secure unreleased frontier model weights from unauthorized modification or transfer by internal or external parties; and (vi) Assesses and manages catastrophic risk resulting from the internal use of its frontier models, including risks resulting from a frontier model circumventing oversight mechanisms; (c) For both large frontier developers and large chatbot providers, how the large frontier developer or large chatbot provider: (i) Incorporates national standards, international standards, and industry-consensus best practices into its public safety and child protection plan; (ii) Revisits and updates the public safety and child protection plan, including any criteria that trigger updates and how such developer or provider determines when its foundation models or frontier models are substantially modified enough to require disclosures pursuant to subsection (3) or subsection (4) of this section; (iii) Identifies and responds to safety incidents; and 
(iv) Institutes internal governance practices to ensure implementation of its public safety and child protection plan.
Failed 2027-01-01
S-03.5
Sec. 4(1)(b)(i)-(iii), (1)(c)(i)-(iv)
Plain Language
Large chatbot providers must write, implement, comply with, and conspicuously publish on their website a public safety and child protection plan that describes how they assess child safety risks, apply mitigations based on those assessments, and use third parties to evaluate risks and mitigation effectiveness. The plan must also cover incorporation of standards and best practices, update triggers, safety incident identification and response, and internal governance practices. This is the chatbot-provider-specific counterpart to the large frontier developer's plan obligations.
(b) For a large chatbot provider, how the large chatbot provider: (i) Assesses potential for child safety risks. (ii) Applies mitigations to address the potential for child safety risks based on the results of the assessments undertaken pursuant to subdivision (1)(b)(i) of this section; and (iii) Uses third parties to assess the potential for child safety risks and the effectiveness of mitigations of child safety risks; and (c) For both large frontier developers and large chatbot providers, how the large frontier developer or large chatbot provider: (i) Incorporates national standards, international standards, and industry-consensus best practices into its public safety and child protection plan; (ii) Revisits and updates the public safety and child protection plan, including any criteria that trigger updates and how such developer or provider determines when its foundation models or frontier models are substantially modified enough to require disclosures pursuant to subsection (3) or subsection (4) of this section; (iii) Identifies and responds to safety incidents; and (iv) Institutes internal governance practices to ensure implementation of its public safety and child protection plan.
Pending 2025-09-02
S-03.5
Gen. Bus. Law § 1421(1)(a)-(c)
Plain Language
Before deploying any frontier model, a large developer must implement a written safety and security protocol covering critical harm risk reduction, cybersecurity, testing procedures, misuse assessment, compliance requirements, and designation of responsible senior personnel. The developer must retain an unredacted copy for the deployment period plus five years, publicly publish an appropriately redacted version, transmit the redacted version to the Division of Homeland Security and Emergency Services, and grant the Division or Attorney General access to unredacted versions (with redactions only where required by federal law) upon request. Appropriate redactions are limited to public safety, trade secrets, legally required confidentiality, employee/customer privacy, and state/federally controlled information.
Before deploying a frontier model, the large developer of such frontier model shall do all of the following: (a) Implement a written safety and security protocol; (b) Retain an unredacted copy of the safety and security protocol, including records and dates of any updates or revisions. Such unredacted copy of the safety and security protocol, including records and dates of any updates or revisions, shall be retained for as long as a frontier model is deployed plus five years; (c) (i) Conspicuously publish a copy of the safety and security protocol with appropriate redactions and transmit a copy of such redacted safety and security protocol to the division of homeland security and emergency services; (ii) Grant the division of homeland security and emergency services or the attorney general access to the safety and security protocol, with redactions only to the extent required by federal law, upon request;
Pending 2025-09-02
S-03.1
Gen. Bus. Law § 1421(1)(d)-(e)
Plain Language
Before deployment, the large developer must record and retain detailed information about all tests and test results used to assess the frontier model — with enough detail that a third party could replicate the testing procedure. Records must be retained for the deployment period plus five years. The developer must also implement appropriate safeguards to prevent unreasonable risk of critical harm, which encompasses CBRN weapon creation or autonomous AI conduct causing mass casualties or over $1 billion in damages. The intervening-actor carve-out means a developer is only liable for harm inflicted by a human if the developer's activities made that harm substantially easier or more likely.
(d) Record, as and when reasonably possible, and retain for as long as the frontier model is deployed plus five years information on the specific tests and test results used in any assessment of the frontier model that provides sufficient detail for third parties to replicate the testing procedure; and (e) Implement appropriate safeguards to prevent unreasonable risk of critical harm.
Pending 2025-09-02
S-03.3
Gen. Bus. Law § 1421(2)
Plain Language
This is a deployment gate: a large developer is categorically prohibited from deploying a frontier model if deployment would create an unreasonable risk of critical harm. The standard is 'unreasonable risk,' not zero risk. The critical harm threshold is high — death or serious injury of 100+ people or $1 billion+ in damages through CBRN weapon use or autonomous criminal AI conduct. This prohibition operates independently of whether the developer has implemented a safety and security protocol.
A large developer shall not deploy a frontier model if doing so would create an unreasonable risk of critical harm.
Pending 2025-09-02
S-03.5
Gen. Bus. Law § 1421(7)(a)-(b)
Plain Language
A person who is not yet a large developer but is about to begin training a model that, upon completion, would cause them to meet the large developer thresholds must implement a written safety and security protocol before training begins. This pre-qualification protocol need not include the detailed testing procedure description (§ 1420(12)(c)) or the misuse assessment details (§ 1420(12)(d)) required of full safety and security protocols. The person must transmit an appropriately redacted copy to the Division of Homeland Security and Emergency Services. Unlike large developers, this person need not publicly publish the protocol. The academic research exemption applies. This creates a pre-training early-warning obligation — entities cannot begin training a frontier-scale model without safety governance in place.
Any person who is not a large developer, but who sets out to train a frontier model that if completed as planned would qualify such person as a large developer (i.e. at the end of the training, such person will have spent five million dollars in compute costs on one frontier model and one hundred million dollars in compute costs in aggregate in training frontier models, excluding accredited colleges and universities to the extent such colleges and universities are engaging in academic research) shall, before training such model: (a) Implement a written safety and security protocol, excluding the requirements described in paragraphs (c) and (d) of subdivision twelve of section fourteen hundred twenty of this article; and (b) Transmit a copy of an appropriately redacted safety and security protocol to the division of homeland security and emergency services.
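The compute-cost test quoted above reduces to simple arithmetic: both prongs must be met at the planned completion of training. The sketch below is illustrative only — the threshold dollar values come from the quoted statutory text, but the function name, input shape, and `academic_research` flag are hypothetical conveniences, not anything defined by the statute.

```python
# Illustrative sketch of the two-part "large developer" compute-cost test
# quoted in Gen. Bus. Law § 1421(7). Thresholds are taken from the quoted
# text; names and structure are hypothetical.

SINGLE_MODEL_THRESHOLD = 5_000_000     # $5M compute cost on one frontier model
AGGREGATE_THRESHOLD = 100_000_000      # $100M compute cost across all frontier models

def would_qualify_as_large_developer(per_model_compute_costs, academic_research=False):
    """Return True if completing the planned training would satisfy both
    statutory prongs (accredited academic research is carved out)."""
    if academic_research or not per_model_compute_costs:
        return False
    # Both conditions must hold: at least one model >= $5M AND aggregate >= $100M.
    return (max(per_model_compute_costs) >= SINGLE_MODEL_THRESHOLD
            and sum(per_model_compute_costs) >= AGGREGATE_THRESHOLD)

# Example: planned spend of $60M, $45M, and $3M across three models.
# The aggregate is $108M and the largest single run is $60M, so both prongs are met.
print(would_qualify_as_large_developer([60_000_000, 45_000_000, 3_000_000]))  # True
```

Note the conjunctive structure: an entity spending $120M spread thinly across many sub-$5M runs would not trip the single-model prong, while a single $60M run alone would not trip the aggregate prong.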
Enacted 2025-06-03
S-03.5
Gen. Bus. Law § 1421(1)(c)(i)-(ii)
Plain Language
Large developers must conspicuously publish a copy of their safety and security protocol, which may include appropriate redactions (for trade secrets, public safety, privacy, and legally protected information). A copy of this redacted protocol must also be transmitted to the AG and Division of Homeland Security and Emergency Services. Separately, upon request, the AG and DHSES must be given access to the protocol with redactions limited only to those required by federal law — meaning the regulator version is substantially less redacted than the public version. This creates a two-tier disclosure regime: a more heavily redacted public version and a nearly unredacted regulator version.
(i) Conspicuously publish a copy of the safety and security protocol with appropriate redactions and transmit a copy of such redacted safety and security protocol to the attorney general and division of homeland security and emergency services; (ii) Grant the attorney general and division of homeland security and emergency services access to the safety and security protocol, with redactions only to the extent required by federal law, upon request.
Enacted 2025-06-03
S-03.3
Gen. Bus. Law § 1421(2)
Plain Language
This is an absolute deployment prohibition: a large developer may not deploy a frontier model if doing so would create an unreasonable risk of critical harm. Critical harm is defined narrowly — it requires either CBRN weapon creation/use or autonomous AI criminal conduct, and must cause death or serious injury to 100+ people or $1 billion+ in damages. The 'unreasonable risk' standard introduces a reasonableness analysis rather than a zero-risk requirement. Note that deployment excludes internal training, evaluation, and legal compliance use. The intervening human actor causation limitation in the critical harm definition provides a defense where harm is caused by an unforeseeable third party.
A large developer shall not deploy a frontier model if doing so would create an unreasonable risk of critical harm.
Enacted 2025-06-03
S-03.2
Gen. Bus. Law § 1420(12)(c)
Plain Language
The safety and security protocol must include detailed testing procedures that evaluate two things: (1) whether the frontier model poses an unreasonable risk of critical harm, and (2) whether the model could be misused, modified, scaled up, escape developer/user control, combined with other software, or used to create another frontier model in ways that increase critical harm risk. This effectively mandates red-teaming and adversarial testing across multiple attack vectors and misuse scenarios. Because the critical harm definition encompasses CBRN weapon creation, this subsection requires CBRN-specific risk evaluation as part of the testing regime.
"Safety and security protocol" means documented technical and organizational protocols that: ... (c) Describe in detail the testing procedure to evaluate if the frontier model poses an unreasonable risk of critical harm and whether the frontier model could be misused, be modified, be executed with increased computational resources, evade the control of its large developer or user, be combined with other software or be used to create another frontier model in a manner that would increase the risk of critical harm.
Passed 2025-06-25
S-03.5
Gen. Bus. Law § 1421(1)(a)-(b)
Plain Language
Before deploying any frontier model, large developers must create and implement a comprehensive written safety and security protocol covering critical harm risk reduction, cybersecurity protections, detailed testing procedures, misuse assessment, compliance requirements, and designation of senior personnel responsible for compliance. The unredacted protocol — including all update history — must be retained for the duration of deployment plus five years. The protocol definition is prescriptive: it must be detailed enough that a third party can determine whether it has been followed.
Before deploying a frontier model, the large developer of such frontier model shall do all of the following: (a) Implement a written safety and security protocol; (b) Retain an unredacted copy of the safety and security protocol, including records and dates of any updates or revisions. Such unredacted copy of the safety and security protocol, including records and dates of any updates or revisions, shall be retained for as long as a frontier model is deployed plus five years;
Passed 2025-06-25
S-03.1
Gen. Bus. Law § 1421(1)(e)
Plain Language
Large developers must implement appropriate safeguards — before deployment — to prevent unreasonable risk of critical harm. Critical harm is defined narrowly: death or serious injury to 100+ people, or $1B+ in property damages, caused through either CBRN weapon creation/use or autonomous AI conduct constituting a crime. The intervening-actor carve-out limits liability where a human independently chooses to cause harm unless the developer's activities made it substantially easier or more likely. This is an affirmative obligation to implement safeguards, separate from the prohibition on deploying models that pose unreasonable risk.
(e) Implement appropriate safeguards to prevent unreasonable risk of critical harm.
Passed 2025-06-25
S-03.3
Gen. Bus. Law § 1421(2)
Plain Language
This is a categorical deployment gate: large developers may not deploy a frontier model if doing so would create an unreasonable risk of critical harm. Unlike the safeguard-implementation requirement in § 1421(1)(e), this is an absolute prohibition — no amount of safeguards can cure an unreasonable risk. The 'unreasonable risk' standard provides a reasonableness-based threshold rather than a zero-risk requirement.
A large developer shall not deploy a frontier model if doing so would create an unreasonable risk of critical harm.
Passed 2025-06-25
S-03.5
Gen. Bus. Law § 1421(7)(a)-(b)
Plain Language
Persons who are not yet large developers but who plan to train a model that would make them qualify must, before beginning training, implement a written safety and security protocol and transmit a redacted copy to DHSES. The protocol is slightly less demanding than the full large developer protocol — it need not include the detailed testing procedure description (paragraph (c)) or the misuse/modification assessment (paragraph (d)) of the safety and security protocol definition. This pre-qualification obligation catches entities before they cross the large developer threshold, ensuring safety protocols are in place before training begins. The academic research exemption applies here as well.
Any person who is not a large developer, but who sets out to train a frontier model that if completed as planned would qualify such person as a large developer (i.e. at the end of the training, such person will have spent five million dollars in compute costs on one frontier model and one hundred million dollars in compute costs in aggregate in training frontier models, excluding accredited colleges and universities to the extent such colleges and universities are engaging in academic research) shall, before training such model: (a) Implement a written safety and security protocol, excluding the requirements described in paragraphs (c) and (d) of subdivision twelve of section fourteen hundred twenty of this article; and (b) Transmit a copy of an appropriately redacted safety and security protocol to the division of homeland security and emergency services.