Developers of frontier AI models, typically defined by compute thresholds, face a distinct set of safety obligations focused on catastrophic and systemic risk. These obligations go beyond general AI system safety requirements to address existential-scale harms, dual-use potential for weapons of mass destruction, and deployment gating based on risk thresholds.
(a) A large frontier developer shall write, implement, comply with, and clearly and conspicuously publish on its internet website a frontier AI framework that applies to the large frontier developer's frontier models and describes how the large frontier developer approaches all of the following: (1) Incorporating national standards, international standards, and industry-consensus best practices into its frontier AI framework. (2) Defining and assessing thresholds used by the large frontier developer to identify and assess whether a frontier model has capabilities that could pose a catastrophic risk, which may include multiple-tiered thresholds. (3) Applying mitigations to address the potential for catastrophic risks based on the results of assessments undertaken pursuant to paragraph (2). (4) Reviewing assessments and adequacy of mitigations as part of the decision to deploy a frontier model or use it extensively internally. (5) Using third parties to assess the potential for catastrophic risks and the effectiveness of mitigations of catastrophic risks. (6) Revisiting and updating the frontier AI framework, including any criteria that trigger updates and how the large frontier developer determines when its frontier models are substantially modified enough to require disclosures pursuant to subdivision (c). (7) Cybersecurity practices to secure unreleased model weights from unauthorized modification or transfer by internal or external parties. (8) Identifying and responding to critical safety incidents. (9) Instituting internal governance practices to ensure implementation of these processes. (10) Assessing and managing catastrophic risk resulting from the internal use of its frontier models, including risks resulting from a frontier model circumventing oversight mechanisms.
(b) (1) A large frontier developer shall review and, as appropriate, update its frontier AI framework at least once per year. (2) If a large frontier developer makes a material modification to its frontier AI framework, the large frontier developer shall clearly and conspicuously publish the modified frontier AI framework and a justification for that modification within 30 days.
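The "multiple-tiered thresholds" contemplated in paragraph (2) of subdivision (a) could be operationalized in many ways; the statute prescribes none. As a purely illustrative sketch (the tier names, scores, and mitigations below are invented, not statutory), a framework might map an assessment score to a capability tier and its required mitigations:

```python
from dataclasses import dataclass

# Hypothetical tiers: the statute permits "multiple-tiered thresholds"
# but does not define tier names, scoring scales, or mitigations.
@dataclass(frozen=True)
class CapabilityTier:
    name: str
    min_score: float                      # score at which the tier applies
    required_mitigations: tuple[str, ...]

TIERS = (
    CapabilityTier("baseline", 0.0, ()),
    CapabilityTier("elevated", 0.5, ("enhanced red-teaming",)),
    CapabilityTier("critical", 0.8, ("enhanced red-teaming", "deployment hold")),
)

def applicable_tier(assessment_score: float) -> CapabilityTier:
    """Return the highest tier whose threshold the score meets."""
    tier = TIERS[0]
    for t in TIERS:
        if assessment_score >= t.min_score:
            tier = t
    return tier
```

Under subdivision (a)(3), the mitigations attached to the resulting tier would then gate the deployment decision reviewed in (a)(4).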
(1)(A) A frontier developer shall not make a materially false or misleading statement about catastrophic risk from its frontier models or its management of catastrophic risk... (2) This subdivision does not apply to a statement that was made in good faith and was reasonable under the circumstances.
(1) Beginning on January 1, 2026, a large developer shall do all of the following: (a) Produce, implement, follow, and conspicuously publish a safety and security protocol. (b) If materially modifying the safety and security protocol under subdivision (a), conspicuously publish the modifications not more than 30 days after the material modification was made.
Sec. 5. A safety and security protocol must describe in detail all of the following, as applicable: (a) How the large developer excludes certain foundation models from being covered by the safety and security protocol when those foundation models pose a limited critical risk. (b) The thresholds at which critical risks would be considered intolerable, any justification for the thresholds, and what the large developer will do if a threshold is surpassed. (c) The testing and assessment procedures the large developer uses to investigate critical risks and how the tests and procedures account for the possibility that a foundation model could evade the control of the large developer or user or be misused, modified, executed with increased computational resources, or used to create another foundation model. (d) The procedure the large developer will use to determine if and how to deploy a foundation model when doing so poses critical risks. (e) The physical, digital, and organizational security protection the large developer will implement to prevent insiders or third parties from accessing foundation models within the large developer's control in a manner that is unauthorized by the developer and could create a critical risk. (f) Any safeguards and risk mitigation measures the large developer uses to reduce critical risks from the large developer's foundation models and how the large developer assesses efficacy and limitations. (g) How the large developer will respond if a critical risk materializes or is imminent. (h) The procedures that the large developer uses to determine whether to conduct additional assessments for a critical risk when the large developer modifies or expands access to the large developer's foundation models or combines the foundation models with other software and how such assessments are conducted. 
(i) The conditions under which the large developer will report an incident relevant to a critical risk that occurs in connection with 1 or more of the large developer's foundation models and the entities to which the large developer will make those reports. (j) The conditions under which the large developer will modify the large developer's safety and security protocol. (k) The parts of the safety and security protocol that the large developer believes provide sufficient scientific detail to allow for the independent assessment of the methods used to generate the results, evidence, and analysis, and to which experts any unredacted versions are made available. (l) Any other role a financially disinterested third party plays under subdivisions (a) to (k).
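Elements (a) through (l) of Sec. 5 amount to a completeness checklist for the protocol document. A compliance team might encode it roughly as follows; the section keys are paraphrases of the statutory elements, not statutory language:

```python
# Paraphrased from Sec. 5 elements (a)-(l); not statutory text.
REQUIRED_SECTIONS = {
    "exclusions",            # (a) models excluded as posing limited critical risk
    "risk_thresholds",       # (b) intolerable-risk thresholds and responses
    "testing_procedures",    # (c) tests covering misuse, modification, loss of control
    "deployment_decision",   # (d) deployment procedure under critical risk
    "security_protections",  # (e) physical, digital, organizational security
    "safeguards",            # (f) mitigations and their assessed efficacy
    "incident_response",     # (g) response if a critical risk materializes
    "reassessment",          # (h) triggers for additional assessments
    "incident_reporting",    # (i) reporting conditions and recipient entities
    "modification_policy",   # (j) when the protocol itself will be modified
    "scientific_detail",     # (k) independently assessable portions
    "third_party_roles",     # (l) roles of financially disinterested third parties
}

def missing_sections(protocol: dict) -> set[str]:
    """Return the required sections a draft protocol has not yet addressed."""
    return REQUIRED_SECTIONS - protocol.keys()
```

Note the "as applicable" qualifier in Sec. 5: in practice some sections may legitimately be empty rather than absent.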
(2) A large developer shall not knowingly make false or materially misleading statements or omissions in or regarding documents produced in accordance with this section.
Before deploying an artificial intelligence model, a developer must: (1) implement a written safety and security protocol; (2) retain an unredacted copy of the safety and security protocol, including records and dates of updates or revisions, for the entire period of time an artificial intelligence model is deployed, plus five years; (3) conspicuously publish a copy of the safety and security protocol with appropriate redactions, and transmit a copy of the redacted safety and security protocol to the attorney general; (4) grant the attorney general access to the safety and security protocol with redactions only to the extent required by federal law, if the attorney general requests access; (5) record and retain information on the specific tests and test results used in any assessment of the artificial intelligence model required under this section or by the developer's safety and security protocol that provides sufficient detail for third parties to replicate the testing procedure for the entire period of time an artificial intelligence model is deployed, plus five years; and (6) implement appropriate safeguards to prevent unreasonable risk of critical harm.
A developer must not deploy an artificial intelligence model if doing so creates an unreasonable risk of critical harm.
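The retention rule in items (2) and (5) above (the deployment period plus five years) is a simple date computation, sketched below. This is illustrative only; the leap-day rollover convention is an assumption, not something the statute addresses:

```python
from datetime import date

def retention_end(deployment_end: date, years: int = 5) -> date:
    """Earliest date retained records may be discarded: the end of the
    deployment period plus five years. Illustrative sketch only."""
    try:
        return deployment_end.replace(year=deployment_end.year + years)
    except ValueError:
        # Feb 29 with no counterpart in the target year; roll to Mar 1.
        return deployment_end.replace(year=deployment_end.year + years,
                                      month=3, day=1)
```

Because the clock runs from the end of deployment, the retention obligation is open-ended while the model remains deployed.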
(1) A large frontier developer or large chatbot provider shall write, implement, comply with, and clearly and conspicuously publish on its website a public safety and child protection plan that describes in detail: (a) For a large frontier developer, how the large frontier developer: (i) Defines and assesses thresholds used by the large frontier developer to identify and assess whether a frontier model has capabilities that could pose a catastrophic risk, which may include multiple-tiered thresholds; (ii) Applies mitigations to address the potential for catastrophic risks based on the results of the assessments undertaken pursuant to subdivision (1)(a)(i) of this section; (iii) Reviews assessments of catastrophic risk and adequacy of mitigations of catastrophic risk as part of the decision to deploy a frontier model or use it extensively internally; (iv) Uses third parties to assess the potential for catastrophic risks and the effectiveness of mitigations of catastrophic risks; (v) Implements cybersecurity practices to secure unreleased frontier model weights from unauthorized modification or transfer by internal or external parties; and (vi) Assesses and manages catastrophic risk resulting from the internal use of its frontier models, including risks resulting from a frontier model circumventing oversight mechanisms;
(b) For a large chatbot provider, how the large chatbot provider: (i) Assesses the potential for child safety risks; (ii) Applies mitigations to address the potential for child safety risks based on the results of the assessments undertaken pursuant to subdivision (1)(b)(i) of this section; and (iii) Uses third parties to assess the potential for child safety risks and the effectiveness of mitigations of child safety risks;
(c) For both large frontier developers and large chatbot providers, how the large frontier developer or large chatbot provider: (i) Incorporates national standards, international standards, and industry-consensus best practices into its public safety and child protection plan; (ii) Revisits and updates the public safety and child protection plan, including any criteria that trigger updates and how such developer or provider determines when its foundation models or frontier models are substantially modified enough to require disclosures pursuant to subsection (3) or subsection (4) of this section; (iii) Identifies and responds to safety incidents; and (iv) Institutes internal governance practices to ensure implementation of its public safety and child protection plan.
Before deploying a frontier model, the large developer of such frontier model shall do all of the following: (a) Implement a written safety and security protocol; (b) Retain an unredacted copy of the safety and security protocol, including records and dates of any updates or revisions. Such unredacted copy of the safety and security protocol, including records and dates of any updates or revisions, shall be retained for as long as a frontier model is deployed plus five years; (c) (i) Conspicuously publish a copy of the safety and security protocol with appropriate redactions and transmit a copy of such redacted safety and security protocol to the division of homeland security and emergency services; (ii) Grant the division of homeland security and emergency services or the attorney general access to the safety and security protocol, with redactions only to the extent required by federal law, upon request;
(d) Record, as and when reasonably possible, and retain for as long as the frontier model is deployed plus five years information on the specific tests and test results used in any assessment of the frontier model that provides sufficient detail for third parties to replicate the testing procedure; and (e) Implement appropriate safeguards to prevent unreasonable risk of critical harm.
A large developer shall not deploy a frontier model if doing so would create an unreasonable risk of critical harm.
Any person who is not a large developer, but who sets out to train a frontier model that if completed as planned would qualify such person as a large developer (i.e. at the end of the training, such person will have spent five million dollars in compute costs on one frontier model and one hundred million dollars in compute costs in aggregate in training frontier models, excluding accredited colleges and universities to the extent such colleges and universities are engaging in academic research) shall, before training such model: (a) Implement a written safety and security protocol, excluding the requirements described in paragraphs (c) and (d) of subdivision twelve of section fourteen hundred twenty of this article; and (b) Transmit a copy of an appropriately redacted safety and security protocol to the division of homeland security and emergency services.
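The qualification test in the parenthetical above is conjunctive: five million dollars in compute on one frontier model and one hundred million dollars in aggregate. A sketch of that test, with the academic-research exemption simplified to a flag (an assumption for illustration, since the statute ties it to accreditation and the scope of the research):

```python
# Thresholds from the statute: $5M compute on one frontier model AND
# $100M aggregate across frontier models, both projected at completion
# of the planned training run.
SINGLE_MODEL_THRESHOLD = 5_000_000
AGGREGATE_THRESHOLD = 100_000_000

def would_qualify_as_large_developer(
    projected_single_model_cost: int,
    projected_aggregate_cost: int,
    is_accredited_academic_research: bool = False,
) -> bool:
    """True if completing the planned run would make the trainer a large
    developer, triggering the pre-training duties in (a) and (b)."""
    if is_accredited_academic_research:
        return False  # simplified exemption flag; see statutory text
    return (projected_single_model_cost >= SINGLE_MODEL_THRESHOLD
            and projected_aggregate_cost >= AGGREGATE_THRESHOLD)
```

A trainer meeting this test must implement a (reduced-scope) safety and security protocol and transmit a redacted copy before training begins, not before deployment.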
(i) Conspicuously publish a copy of the safety and security protocol with appropriate redactions and transmit a copy of such redacted safety and security protocol to the attorney general and the division of homeland security and emergency services; (ii) Grant the attorney general and the division of homeland security and emergency services access to the safety and security protocol, with redactions only to the extent required by federal law, upon request.
"Safety and security protocol" means documented technical and organizational protocols that: ... (c) Describe in detail the testing procedure to evaluate if the frontier model poses an unreasonable risk of critical harm and whether the frontier model could be misused, be modified, be executed with increased computational resources, evade the control of its large developer or user, be combined with other software, or be used to create another frontier model in a manner that would increase the risk of critical harm.