S-03
Safety & Prohibited Conduct
Frontier Model Safety Obligations
Developers of frontier AI models — defined by compute thresholds — face a distinct set of safety obligations focused on catastrophic and systemic risk. These go beyond general AI system safety obligations to address existential-scale harms, dual-use potential for weapons of mass destruction, and deployment gating based on risk thresholds.
Applies to: Developer, Deployer · Sector: Foundation Model
Bills — Enacted: 3 unique bills
Bills — Proposed: 6
Last Updated: 2026-03-29
Sub-Obligations (5)
ID
Name & Description
Enacted
Proposed
S-03.1
Catastrophic risk assessment and mitigation
Frontier model developers must assess and document the risk that their models could cause catastrophic harm, such as mass casualties, critical infrastructure attacks, or other existential-scale outcomes, and implement appropriate safeguards to prevent unreasonable risk of such harm.
0 enacted
1 proposed
S-03.2
CBRN and critical infrastructure risk evaluation
Developers must evaluate whether the model provides meaningful uplift to individuals seeking to develop chemical, biological, radiological, or nuclear weapons, or to plan attacks on critical infrastructure. These evaluations must be documented and updated as model capabilities change.
1 enacted
0 proposed
S-03.3
Risk-threshold deployment prohibition
A developer may not deploy a frontier model if doing so would create an unreasonable risk of critical harm. Critical harm is defined in most statutes as CBRN weapon creation or mass-casualty autonomous AI conduct causing death or serious injury to 100+ people or $1B+ in damages.
1 enacted
5 proposed
S-03.4
Compute and capability reporting
Developers of models trained above defined compute thresholds must report model characteristics, including training compute, architecture, capabilities, and safety evaluation results, to designated regulatory authorities.
0 enacted
0 proposed
S-03.5
Frontier AI safety framework publication
Large frontier model developers must write, implement, comply with, and publicly publish a frontier AI safety framework detailing how the developer handles catastrophic risk assessment and thresholds, safety oversight, third-party evaluation processes, cybersecurity protections, and whistleblower procedures. The framework must be kept current and updated following material changes to the developer's systems or risk profile.
2 enacted
6 proposed
Bills That Map This Requirement (9 bills)
Bill
Status
Sub-Obligations
Section
Enacted 2026-01-01
S-03.5
Bus. & Prof. Code § 22757.12(a)(1)-(10)
Plain Language
Large frontier developers must create, implement, comply with, and publicly publish on their website a comprehensive frontier AI framework covering ten specified domains: incorporation of national/international standards and best practices, catastrophic risk threshold definition and assessment, risk mitigation, pre-deployment review, third-party evaluation, framework update criteria, cybersecurity for unreleased model weights, critical safety incident response, internal governance, and management of catastrophic risk from internal model use. This is both a documentation obligation and an operational compliance obligation — the developer must actually comply with its own published framework, and failure to do so is independently enforceable. The framework must be publicly accessible, not merely filed with a regulator.
(a) A large frontier developer shall write, implement, comply with, and clearly and conspicuously publish on its internet website a frontier AI framework that applies to the large frontier developer's frontier models and describes how the large frontier developer approaches all of the following: (1) Incorporating national standards, international standards, and industry-consensus best practices into its frontier AI framework. (2) Defining and assessing thresholds used by the large frontier developer to identify and assess whether a frontier model has capabilities that could pose a catastrophic risk, which may include multiple-tiered thresholds. (3) Applying mitigations to address the potential for catastrophic risks based on the results of assessments undertaken pursuant to paragraph (2). (4) Reviewing assessments and adequacy of mitigations as part of the decision to deploy a frontier model or use it extensively internally. (5) Using third parties to assess the potential for catastrophic risks and the effectiveness of mitigations of catastrophic risks. (6) Revisiting and updating the frontier AI framework, including any criteria that trigger updates and how the large frontier developer determines when its frontier models are substantially modified enough to require disclosures pursuant to subdivision (c). (7) Cybersecurity practices to secure unreleased model weights from unauthorized modification or transfer by internal or external parties. (8) Identifying and responding to critical safety incidents. (9) Instituting internal governance practices to ensure implementation of these processes. (10) Assessing and managing catastrophic risk resulting from the internal use of its frontier models, including risks resulting from a frontier model circumventing oversight mechanisms.
Enacted 2026-01-01
S-03.5
Bus. & Prof. Code § 22757.12(b)(1)-(2)
Plain Language
Large frontier developers must review their frontier AI framework at least annually and update it as appropriate. When a material modification is made, the updated framework and a justification for the change must be publicly published within 30 days. This is an ongoing maintenance obligation — the annual review is required regardless of whether any changes are made, and the 30-day publication deadline is triggered by any material modification, whether during or outside the annual review cycle.
(b) (1) A large frontier developer shall review and, as appropriate, update its frontier AI framework at least once per year. (2) If a large frontier developer makes a material modification to its frontier AI framework, the large frontier developer shall clearly and conspicuously publish the modified frontier AI framework and a justification for that modification within 30 days.
Enacted 2026-01-01
Bus. & Prof. Code § 22757.12(e)(1)(A)
Plain Language
Frontier developers are prohibited from making materially false or misleading public statements about catastrophic risks posed by their frontier models or how they manage those risks.
(1)(A) A frontier developer shall not make a materially false or misleading statement about catastrophic risk from its frontier models or its management of catastrophic risk... (2) This subdivision does not apply to a statement that was made in good faith and was reasonable under the circumstances. 
Pending 2026-01-01
S-03.5
Sec. 7(1)(a)-(b)
Plain Language
Large developers must produce, implement, follow, and conspicuously publish a safety and security protocol that addresses critical risks as defined by statute. If the protocol is materially modified, the modifications must be published within 30 days. The protocol must be publicly accessible on the developer's website. This is a continuing obligation — the developer must not only write and publish the protocol but actively follow it.
(1) Beginning on January 1, 2026, a large developer shall do all of the following: (a) Produce, implement, follow, and conspicuously publish a safety and security protocol. (b) If materially modifying the safety and security protocol under subdivision (a), conspicuously publish the modifications not more than 30 days after the material modification was made.
Pending 2026-01-01
S-03.5
Sec. 5
Plain Language
The safety and security protocol must cover twelve detailed areas: model exclusion criteria for limited-risk models, intolerable risk thresholds and responses, testing and assessment procedures (including evasion and misuse scenarios), deployment decision procedures, physical/digital/organizational security against unauthorized access, safeguard efficacy assessments, incident response procedures, procedures for reassessment upon model modification or expanded access, incident reporting conditions, protocol modification conditions, scientific reproducibility details, and the role of financially disinterested third parties. This section defines the mandatory contents of the protocol required under Section 7.
Sec. 5. A safety and security protocol must describe in detail all of the following, as applicable: (a) How the large developer excludes certain foundation models from being covered by the safety and security protocol when those foundation models pose a limited critical risk. (b) The thresholds at which critical risks would be considered intolerable, any justification for the thresholds, and what the large developer will do if a threshold is surpassed. (c) The testing and assessment procedures the large developer uses to investigate critical risks and how the tests and procedures account for the possibility that a foundation model could evade the control of the large developer or user or be misused, modified, executed with increased computational resources, or used to create another foundation model. (d) The procedure the large developer will use to determine if and how to deploy a foundation model when doing so poses critical risks. (e) The physical, digital, and organizational security protection the large developer will implement to prevent insiders or third parties from accessing foundation models within the large developer's control in a manner that is unauthorized by the developer and could create a critical risk. (f) Any safeguards and risk mitigation measures the large developer uses to reduce critical risks from the large developer's foundation models and how the large developer assesses efficacy and limitations. (g) How the large developer will respond if a critical risk materializes or is imminent. (h) The procedures that the large developer uses to determine whether to conduct additional assessments for a critical risk when the large developer modifies or expands access to the large developer's foundation models or combines the foundation models with other software and how such assessments are conducted. 
(i) The conditions under which the large developer will report an incident relevant to a critical risk that occurs in connection with 1 or more of the large developer's foundation models and the entities to which the large developer will make those reports. (j) The conditions under which the large developer will modify the large developer's safety and security protocol. (k) The parts of the safety and security protocol that the large developer believes provide sufficient scientific detail to allow for the independent assessment of the methods used to generate the results, evidence, and analysis, and to which experts any unredacted versions are made available. (l) Any other role a financially disinterested third party plays under subdivisions (a) to (k).
Pending 2026-01-01
S-03.3
Sec. 7(2)
Plain Language
Large developers are prohibited from knowingly including false or materially misleading statements or omissions in any document produced under Section 7, including the safety and security protocol and transparency reports. This is a scienter-based prohibition — it requires knowledge, not mere negligence.
(2) A large developer shall not knowingly make false or materially misleading statements or omissions in or regarding documents produced in accordance with this section.
Pending 2026-01-01
S-03.5
Minn. Stat. § 325M.41, subd. 1(1)-(6)
Plain Language
Before deploying any AI model, a developer must create and implement a written safety and security protocol that covers risk-reduction procedures, cybersecurity protections, and detailed testing procedures. The developer must publicly publish a redacted version and transmit it to the attorney general, and must grant the AG access to the unredacted version (with only federally-required redactions) upon request. The developer must also retain the unredacted protocol and all test records in sufficient detail for third-party replication for the entire deployment period plus five years. Additionally, the developer must implement appropriate safeguards to prevent unreasonable risk of critical harm. The protocol must designate senior personnel responsible for compliance.
Before deploying an artificial intelligence model, a developer must: (1) implement a written safety and security protocol; (2) retain an unredacted copy of the safety and security protocol, including records and dates of updates or revisions, for the entire period of time an artificial intelligence model is deployed, plus five years; (3) conspicuously publish a copy of the safety and security protocol with appropriate redactions, and transmit a copy of the redacted safety and security protocol to the attorney general; (4) grant the attorney general access to the safety and security protocol with redactions only to the extent required by federal law, if the attorney general requests access; (5) record and retain information on the specific tests and test results used in any assessment of the artificial intelligence model required under this section or by the developer's safety and security protocol that provides sufficient detail for third parties to replicate the testing procedure for the entire period of time an artificial intelligence model is deployed, plus five years; and (6) implement appropriate safeguards to prevent unreasonable risk of critical harm.
Pending 2026-01-01
S-03.3
Minn. Stat. § 325M.41, subd. 2
Plain Language
Developers are categorically prohibited from deploying an AI model if doing so would create an unreasonable risk of critical harm. Critical harm has a specific statutory definition keyed to CBRN weapons or autonomous criminal conduct causing death, serious injury, or mental injury of 25+ people or $1M+ in property damage. This is a deployment-gating prohibition — not a risk-mitigation obligation. If the risk is unreasonable, the model must not be deployed regardless of what safeguards are in place.
A developer must not deploy an artificial intelligence model if doing so creates an unreasonable risk of critical harm.
Pre-filed 2026-08-01
S-03.5
Minn. Stat. § 325M.41, subd. 1(1)-(6)
Plain Language
Before deploying any AI model, a developer must write and implement a comprehensive safety and security protocol covering risk reduction measures, cybersecurity protections, and detailed testing procedures. The protocol must designate senior personnel responsible for compliance. Developers must publicly publish an appropriately redacted version, transmit a copy to the attorney general, and grant the AG access to the less-redacted version upon request (with redactions limited to those required by federal law). All testing records must be detailed enough for third-party replication and retained for the deployment period plus five years. Developers must also implement safeguards to prevent unreasonable risk of critical harm. This is a comprehensive pre-deployment gating obligation — no model may be deployed until all six requirements are satisfied.
Before deploying an artificial intelligence model, a developer must: (1) implement a written safety and security protocol; (2) retain an unredacted copy of the safety and security protocol, including records and dates of updates or revisions, for the entire period of time an artificial intelligence model is deployed, plus five years; (3) conspicuously publish a copy of the safety and security protocol with appropriate redactions, and transmit a copy of the redacted safety and security protocol to the attorney general; (4) grant the attorney general access to the safety and security protocol with redactions only to the extent required by federal law, if the attorney general requests access; (5) record and retain information on the specific tests and test results used in any assessment of the artificial intelligence model required under this section or by the developer's safety and security protocol that provides sufficient detail for third parties to replicate the testing procedure for the entire period of time an artificial intelligence model is deployed, plus five years; and (6) implement appropriate safeguards to prevent unreasonable risk of critical harm.
Pre-filed 2026-08-01
S-03.3
Minn. Stat. § 325M.41, subd. 2
Plain Language
Developers are categorically prohibited from deploying an AI model if doing so would create an unreasonable risk of critical harm. This is a deployment gate — not a mitigation obligation. If the risk of critical harm is unreasonable, the model may not be deployed at all, regardless of what safeguards are in place. Critical harm is defined by reference to CBRN weapon creation or autonomous criminal conduct causing mass casualties or $1M+ in damages.
A developer must not deploy an artificial intelligence model if doing so creates an unreasonable risk of critical harm.
Pending 2027-01-01
S-03.5
Sec. 4(1)(a)
Plain Language
Large frontier developers must write, implement, comply with, and publicly publish on their website a detailed public safety and child protection plan covering catastrophic risk. The plan must describe how the developer defines and assesses catastrophic risk thresholds (which may be multi-tiered), applies mitigations, reviews risk assessments as part of deployment and internal-use decisions, uses third-party evaluators, implements cybersecurity to protect unreleased model weights, and manages catastrophic risk from internal model use including risks from models circumventing oversight. This is both a documentation obligation and a continuous operational requirement — the developer must implement and comply with the plan, not merely publish it.
(1) A large frontier developer or large chatbot provider shall write, implement, comply with, and clearly and conspicuously publish on its website a public safety and child protection plan that describes in detail: (a) For a large frontier developer, how the large frontier developer: (i) Defines and assesses thresholds used by the large frontier developer to identify and assess whether a frontier model has capabilities that could pose a catastrophic risk, which may include multiple-tiered thresholds; (ii) Applies mitigations to address the potential for catastrophic risks based on the results of the assessments undertaken pursuant to subdivision (1)(a)(i) of this section; (iii) Reviews assessments of catastrophic risk and adequacy of mitigations of catastrophic risk as part of the decision to deploy a frontier model or use it extensively internally; (iv) Uses third parties to assess the potential for catastrophic risks and the effectiveness of mitigations of catastrophic risks; (v) Implements cybersecurity practices to secure unreleased frontier model weights from unauthorized modification or transfer by internal or external parties; and (vi) Assesses and manages catastrophic risk resulting from the internal use of its frontier models, including risks resulting from a frontier model circumventing oversight mechanisms;
Pending 2027-01-01
S-03.5
Sec. 4(1)(b)
Plain Language
Large chatbot providers must include in their public safety and child protection plan a detailed description of how they assess child safety risks, apply mitigations based on those assessments, and use third parties to evaluate child safety risk potential and mitigation effectiveness. This plan must be written, implemented, complied with, and published on the provider's website per the parent obligation in Sec. 4(1).
(b) For a large chatbot provider, how the large chatbot provider: (i) Assesses potential for child safety risks. (ii) Applies mitigations to address the potential for child safety risks based on the results of the assessments undertaken pursuant to subdivision (1)(b)(i) of this section; and (iii) Uses third parties to assess the potential for child safety risks and the effectiveness of mitigations of child safety risks;
Pending 2027-01-01
S-03.5
Sec. 4(1)(c)
Plain Language
Both large frontier developers and large chatbot providers must describe in their public safety and child protection plan how they incorporate national and international standards and industry best practices, how they revisit and update the plan (including triggers for updates and criteria for determining when models are substantially modified enough to require new disclosures), how they identify and respond to safety incidents, and what internal governance practices ensure the plan is actually implemented. This shared section applies to both entity types on top of their entity-specific plan requirements.
(c) For both large frontier developers and large chatbot providers, how the large frontier developer or large chatbot provider: (i) Incorporates national standards, international standards, and industry-consensus best practices into its public safety and child protection plan; (ii) Revisits and updates the public safety and child protection plan, including any criteria that trigger updates and how such developer or provider determines when its foundation models or frontier models are substantially modified enough to require disclosures pursuant to subsection (3) or subsection (4) of this section; (iii) Identifies and responds to safety incidents; and (iv) Institutes internal governance practices to ensure implementation of its public safety and child protection plan.
Pending 2025-09-02
S-03.5
Gen. Bus. Law § 1421(1)(a)-(c)
Plain Language
Before deploying any frontier model, the large developer must create and implement a written safety and security protocol — a comprehensive document covering risk reduction measures, cybersecurity protections against unauthorized access, detailed testing procedures for critical harm risk, misuse and evasion assessment, compliance specifics, and designation of responsible senior personnel. The unredacted protocol must be retained for the duration of deployment plus five years. A redacted version must be conspicuously published and transmitted to the Division of Homeland Security and Emergency Services. The unredacted version must be made available to the Division or the Attorney General upon request, with redactions permitted only to the extent required by federal law.
Before deploying a frontier model, the large developer of such frontier model shall do all of the following: (a) Implement a written safety and security protocol; (b) Retain an unredacted copy of the safety and security protocol, including records and dates of any updates or revisions. Such unredacted copy of the safety and security protocol, including records and dates of any updates or revisions, shall be retained for as long as a frontier model is deployed plus five years; (c) (i) Conspicuously publish a copy of the safety and security protocol with appropriate redactions and transmit a copy of such redacted safety and security protocol to the division of homeland security and emergency services; (ii) Grant the division of homeland security and emergency services or the attorney general access to the safety and security protocol, with redactions only to the extent required by federal law, upon request;
Pending 2025-09-02
S-03.1
Gen. Bus. Law § 1421(1)(d)-(e)
Plain Language
Before deployment, large developers must document the specific tests and results from all frontier model assessments in sufficient detail for third-party replication, and retain those records for the duration of deployment plus five years. Additionally, developers must implement appropriate safeguards to prevent unreasonable risk of critical harm. The safeguards obligation is ongoing — it does not end at deployment. Note the intervening-actor limitation in the critical harm definition: harms caused by a human third party are attributable to the developer only if the developer's activities made the harm substantially easier or more likely.
(d) Record, as and when reasonably possible, and retain for as long as the frontier model is deployed plus five years information on the specific tests and test results used in any assessment of the frontier model that provides sufficient detail for third parties to replicate the testing procedure; and (e) Implement appropriate safeguards to prevent unreasonable risk of critical harm.
Pending 2025-09-02
S-03.3
Gen. Bus. Law § 1421(2)
Plain Language
Large developers are categorically prohibited from deploying a frontier model if deployment would create an unreasonable risk of critical harm. This is a deployment gate — it requires an affirmative determination that the risk is not unreasonable before making the model available. 'Critical harm' is defined to require either CBRN weapon enablement or autonomous criminal AI conduct causing 100+ deaths/serious injuries or $1B+ in damages, so the threshold for this prohibition is extremely high. The 'unreasonable risk' standard implies a reasonableness assessment, not a zero-risk requirement.
A large developer shall not deploy a frontier model if doing so would create an unreasonable risk of critical harm.
Pending 2025-09-02
S-03.5
Gen. Bus. Law § 1421(7)
Plain Language
Persons who are not yet large developers but who intend to train a model that would qualify them as large developers upon completion must, before beginning training: (1) implement a written safety and security protocol (though without the detailed testing procedure and misuse assessment elements normally required), and (2) transmit a redacted copy to the Division of Homeland Security and Emergency Services. This is a pre-qualification obligation — it ensures that entities approaching the frontier model threshold have safety protocols in place before, not after, training a qualifying model. The academic research exclusion for accredited colleges and universities applies here as well.
Any person who is not a large developer, but who sets out to train a frontier model that if completed as planned would qualify such person as a large developer (i.e. at the end of the training, such person will have spent five million dollars in compute costs on one frontier model and one hundred million dollars in compute costs in aggregate in training frontier models, excluding accredited colleges and universities to the extent such colleges and universities are engaging in academic research) shall, before training such model: (a) Implement a written safety and security protocol, excluding the requirements described in paragraphs (c) and (d) of subdivision twelve of section fourteen hundred twenty of this article; and (b) Transmit a copy of an appropriately redacted safety and security protocol to the division of homeland security and emergency services.
Enacted 2025-06-03
S-03.5
Gen. Bus. Law § 1421(1)(c)(i)-(ii)
Plain Language
Large developers must conspicuously publish a copy of their safety and security protocol, which may include appropriate redactions (for trade secrets, public safety, privacy, and legally protected information). A copy of this redacted protocol must also be transmitted to the AG and Division of Homeland Security and Emergency Services. Separately, upon request, the AG and DHSES must be given access to the protocol with redactions limited only to those required by federal law — meaning the regulator version is substantially less redacted than the public version. This creates a two-tier disclosure regime: a more heavily redacted public version and a nearly unredacted regulator version.
(i) Conspicuously publish a copy of the safety and security protocol with appropriate redactions and transmit a copy of such redacted safety and security protocol to the attorney general and division of homeland security and emergency services; (ii) Grant the attorney general and division of homeland security and emergency services or the attorney general access to the safety and security protocol, with redactions only to the extent required by federal law, upon request.
Enacted 2025-06-03
S-03.3
Gen. Bus. Law § 1421(2)
Plain Language
This is an absolute deployment prohibition: a large developer may not deploy a frontier model if doing so would create an unreasonable risk of critical harm. Critical harm is defined narrowly — it requires either CBRN weapon creation/use or autonomous AI criminal conduct, and must cause death or serious injury to 100+ people or $1 billion+ in damages. The 'unreasonable risk' standard introduces a reasonableness analysis rather than a zero-risk requirement. Note that deployment excludes internal training, evaluation, and legal compliance use. The intervening human actor causation limitation in the critical harm definition provides a defense where harm is caused by an unforeseeable third party.
A large developer shall not deploy a frontier model if doing so would create an unreasonable risk of critical harm.
Enacted 2025-06-03
S-03.2
Gen. Bus. Law § 1420(12)(c)
Plain Language
The safety and security protocol must include detailed testing procedures that evaluate two things: (1) whether the frontier model poses an unreasonable risk of critical harm, and (2) whether the model could be misused, modified, scaled up, escape developer/user control, combined with other software, or used to create another frontier model in ways that increase critical harm risk. This effectively mandates red-teaming and adversarial testing across multiple attack vectors and misuse scenarios. Because the critical harm definition encompasses CBRN weapon creation, this subsection requires CBRN-specific risk evaluation as part of the testing regime.
"Safety and security protocol" means documented technical and organizational protocols that: ... (c) Describe in detail the testing procedure to evaluate if the frontier model poses an unreasonable risk of critical harm and whether the frontier model could be misused, be modified, be executed with increased computational resources, evade the control of its large developer or user, be combined with other software or be used to create another frontier model in a manner that would increase the risk of critical harm.
Pending 2025-06-25
S-03.5
Gen. Bus. Law § 1421(1)(a)-(b), § 1421(1)(c)
Plain Language
Before deploying any frontier model, a large developer must write, implement, and maintain a detailed safety and security protocol covering risk reduction measures, cybersecurity protections, testing procedures, compliance requirements, and designated senior personnel responsible for compliance. The unredacted protocol must be retained for as long as the model is deployed plus five years. A redacted version must be conspicuously published and transmitted to the Division of Homeland Security and Emergency Services. The unredacted version must be made available to DHSES or the Attorney General upon request, with redactions permitted only to the extent required by federal law. Permissible redactions for the published version cover trade secrets, public safety risks, employee/customer privacy, and information controlled by law.
Before deploying a frontier model, the large developer of such frontier model shall do all of the following: (a) Implement a written safety and security protocol; (b) Retain an unredacted copy of the safety and security protocol, including records and dates of any updates or revisions. Such unredacted copy of the safety and security protocol, including records and dates of any updates or revisions, shall be retained for as long as a frontier model is deployed plus five years; (c) (i) Conspicuously publish a copy of the safety and security protocol with appropriate redactions and transmit a copy of such redacted safety and security protocol to the division of homeland security and emergency services; (ii) Grant the division of homeland security and emergency services or the attorney general access to the safety and security protocol, with redactions only to the extent required by federal law, upon request;
Pending 2025-06-25
S-03.3
Gen. Bus. Law § 1421(2)
Plain Language
A large developer is categorically prohibited from deploying a frontier model if deployment would create an unreasonable risk of critical harm. This is a deployment gate — not merely a best-efforts obligation. The standard is 'unreasonable risk,' meaning some residual risk may be acceptable, but the developer bears the burden of ensuring the risk does not cross the unreasonable threshold. Note the carve-outs in the definition of 'deploy': internal use for training, evaluation, or legal compliance is not deployment, so those activities are not subject to this prohibition.
A large developer shall not deploy a frontier model if doing so would create an unreasonable risk of critical harm.
Pending 2025-06-25
S-03.5
Gen. Bus. Law § 1421(7)(a)-(b)
Plain Language
Persons who are not yet large developers but who plan to train a frontier model that would qualify them as large developers must, before beginning training, implement a written safety and security protocol and transmit a redacted copy to DHSES. The protocol need not include the detailed testing-procedure descriptions required by paragraphs (c) and (d) of the safety and security protocol definition — reflecting that the model has not yet been built or tested. Accredited colleges and universities engaged in academic research are exempt. This is a forward-looking trigger: the obligation attaches when a person sets out to train a qualifying model, not when the training is completed.
Any person who is not a large developer, but who sets out to train a frontier model that if completed as planned would qualify such person as a large developer (i.e. at the end of the training, such person will have spent five million dollars in compute costs on one frontier model and one hundred million dollars in compute costs in aggregate in training frontier models, excluding accredited colleges and universities to the extent such colleges and universities are engaging in academic research) shall, before training such model: (a) Implement a written safety and security protocol, excluding the requirements described in paragraphs (c) and (d) of subdivision twelve of section fourteen hundred twenty of this article; and (b) Transmit a copy of an appropriately redacted safety and security protocol to the division of homeland security and emergency services.