S-06953
NY · State · USA
● Passed
Proposed Effective Date
2025-06-25
New York Senate Bill 6953-A — Responsible AI Safety and Education Act (RAISE Act)
Summary

Imposes safety, transparency, and reporting obligations on 'large developers' of frontier AI models — defined by a dual compute-cost threshold ($5M on a single frontier model and $100M aggregate). Before deploying a frontier model, large developers must implement, publish, and retain a written safety and security protocol; conduct pre-deployment testing; implement safeguards against unreasonable risk of critical harm; and may not deploy if the model poses an unreasonable risk of critical harm. Large developers must retain independent third-party auditors annually, report safety incidents to the Division of Homeland Security and Emergency Services within 72 hours, and protect employees who disclose safety concerns from retaliation. Enforcement is through Attorney General civil actions with penalties up to $10M for a first violation and $30M for subsequent violations. Accredited colleges and universities engaged in academic research are excluded from the large developer definition.

Enforcement & Penalties
Enforcement Authority
Attorney General enforcement only. The Attorney General may bring a civil action for violations of this article. The Division of Homeland Security and Emergency Services receives safety and security protocols, audit reports, and safety incident disclosures and must make critical safety incident disclosures available to the Attorney General upon request, but is not granted independent enforcement authority. No private right of action for violations of § 1421 (transparency/safety requirements). Employees harmed by retaliation under § 1422 may petition a court for temporary or preliminary injunctive relief, but this is limited to anti-retaliation claims — not general enforcement.
Penalties
For violations of § 1421 (frontier model safety and transparency requirements): civil penalty up to $10,000,000 for a first violation and up to $30,000,000 for subsequent violations. For violations of § 1422 (employee retaliation): civil penalty up to $10,000 per employee per violation, awarded to the affected employee. Injunctive or declaratory relief is available for violations of either section. Employees harmed by retaliation may independently petition for temporary or preliminary injunctive relief. Contractual waivers or liability-shifting provisions are void as a matter of public policy. Courts may pierce corporate formalities and impose joint and several liability on affiliated entities that structured corporate arrangements to purposely and unreasonably limit or avoid liability.
Who Is Covered
"Large developer" means a person that has trained at least one frontier model, the compute cost of which exceeds five million dollars, and has spent over one hundred million dollars in compute costs in aggregate in training frontier models. Accredited colleges and universities shall not be considered large developers under this article to the extent that such colleges and universities are engaging in academic research. If a person subsequently transfers full intellectual property rights of the frontier model to another person (including the right to resell the model) and retains none of those rights for themself, then the receiving person shall be considered the large developer and shall be subject to the responsibilities and requirements of this article after such transfer.
"Person" means an individual, proprietorship, firm, partnership, joint venture, syndicate, business trust, company, corporation, limited liability company, association, committee, or any other nongovernmental organization or group of persons acting in concert.
What Is Covered
"Frontier model" means either of the following: (a) an artificial intelligence model trained using greater than 10§26 computational operations (e.g., integer or floating-point operations), the compute cost of which exceeds one hundred million dollars; or (b) an artificial intelligence model produced by applying knowledge distillation to a frontier model as defined in paragraph (a) of this subdivision.
Compliance Obligations (14 obligations)
S-03 Frontier Model Safety Obligations · S-03.5 · Developer · Frontier AI System
Gen. Bus. Law § 1421(1)(a)-(b)
Plain Language
Before deploying any frontier model, large developers must create and implement a comprehensive written safety and security protocol covering critical harm risk reduction, cybersecurity protections, detailed testing procedures, misuse assessment, compliance requirements, and designation of senior personnel responsible for compliance. The unredacted protocol — including all update history — must be retained for the duration of deployment plus five years. The protocol definition is prescriptive: it must be detailed enough that a third party can determine whether it has been followed.
Statutory Text
Before deploying a frontier model, the large developer of such frontier model shall do all of the following: (a) Implement a written safety and security protocol; (b) Retain an unredacted copy of the safety and security protocol, including records and dates of any updates or revisions. Such unredacted copy of the safety and security protocol, including records and dates of any updates or revisions, shall be retained for as long as a frontier model is deployed plus five years;
G-02 Public Transparency & Documentation · G-02.3 · Developer · Frontier AI System
Gen. Bus. Law § 1421(1)(c)
Plain Language
Before deployment, large developers must conspicuously publish their safety and security protocol — with permitted redactions for trade secrets, public safety, privacy, and legally controlled information — and transmit a copy to the Division of Homeland Security and Emergency Services. On request, the developer must also grant DHSES or the Attorney General access to the protocol, redacted only to the extent federal law requires. This creates a two-tier disclosure regime: the public version may carry broader redactions, while the regulator version may be redacted only as required by federal law.
Statutory Text
(c) (i) Conspicuously publish a copy of the safety and security protocol with appropriate redactions and transmit a copy of such redacted safety and security protocol to the division of homeland security and emergency services; (ii) Grant the division of homeland security and emergency services or the attorney general access to the safety and security protocol, with redactions only to the extent required by federal law, upon request;
G-01 AI Governance Program & Documentation · G-01.3 · Developer · Frontier AI System
Gen. Bus. Law § 1421(1)(d)
Plain Language
Large developers must contemporaneously record, and retain for the life of deployment plus five years, all testing information used in assessing the frontier model, including the specific tests conducted and the results obtained. The records must be detailed enough to enable a third party to replicate the testing procedure. The 'as and when reasonably possible' qualifier provides some flexibility in the timing of record creation but does not excuse a failure to record altogether.
Statutory Text
(d) Record, as and when reasonably possible, and retain for as long as the frontier model is deployed plus five years information on the specific tests and test results used in any assessment of the frontier model that provides sufficient detail for third parties to replicate the testing procedure;
S-03 Frontier Model Safety Obligations · S-03.1 · Developer · Frontier AI System
Gen. Bus. Law § 1421(1)(e)
Plain Language
Large developers must implement appropriate safeguards — before deployment — to prevent unreasonable risk of critical harm. Critical harm is defined narrowly: death or serious injury to 100+ people, or $1B+ in property damages, caused through either CBRN weapon creation/use or autonomous AI conduct constituting a crime. The intervening-actor carve-out limits liability where a human independently chooses to cause harm unless the developer's activities made it substantially easier or more likely. This is an affirmative obligation to implement safeguards, separate from the prohibition on deploying models that pose unreasonable risk.
Statutory Text
(e) Implement appropriate safeguards to prevent unreasonable risk of critical harm.
S-03 Frontier Model Safety Obligations · S-03.3 · Developer · Frontier AI System
Gen. Bus. Law § 1421(2)
Plain Language
This is a categorical deployment gate: large developers may not deploy a frontier model if doing so would create an unreasonable risk of critical harm. Unlike the safeguard-implementation requirement in § 1421(1)(e), this is an absolute prohibition — no amount of safeguards can cure an unreasonable risk. The 'unreasonable risk' standard provides a reasonableness-based threshold rather than a zero-risk requirement.
Statutory Text
A large developer shall not deploy a frontier model if doing so would create an unreasonable risk of critical harm.
G-01 AI Governance Program & Documentation · G-01.2 · Developer · Frontier AI System
Gen. Bus. Law § 1421(3)
Plain Language
Large developers must conduct an annual review of their safety and security protocol, considering changes to model capabilities and industry best practices. If modifications are warranted, the developer must update the protocol and re-publish it publicly with appropriate redactions and re-transmit it to the Division of Homeland Security and Emergency Services. This is a continuing obligation — not a one-time pre-deployment exercise. The review must happen regardless of whether modifications are ultimately made; the publication obligation is triggered only when modifications occur.
Statutory Text
A large developer shall conduct an annual review of any safety and security protocol required by this section to account for any changes to the capabilities of their frontier models and industry best practices and, if necessary, make modifications to such safety and security protocol. If any modifications are made, the large developer shall publish the safety and security protocol in the same manner as required pursuant to paragraph (c) of subdivision one of this section.
G-01 AI Governance Program & Documentation · G-01.5 · Developer · Frontier AI System
Gen. Bus. Law § 1421(4)(a)-(e)
Plain Language
Large developers must retain an independent third-party auditor annually to assess compliance with all § 1421 requirements. The auditor must have access to unredacted materials and must produce a certified report covering: compliance steps taken, any noncompliance instances with remediation recommendations, and an assessment of internal controls including senior personnel designation. The developer must retain the unredacted report for the deployment period plus five years, publish a redacted version publicly, transmit it to the Division of Homeland Security and Emergency Services, and provide the AG or DHSES with access to a version redacted only as required by federal law upon request. The audit clock starts at the later of the act's effective date or 90 days after a person first qualifies as a large developer.
Statutory Text
(a) Beginning on the effective date of this article, or ninety days after a developer first qualifies as a large developer, whichever is later, a large developer shall annually retain a third party to perform an independent audit of compliance with the requirements of this section. Such third party shall conduct audits consistent with best practices. (b) The third party shall be granted access to unredacted materials as necessary to comply with the third party's obligations under this subdivision. (c) The third party shall produce a report including all of the following: (i) A detailed assessment of the large developer's steps to comply with the requirements of this section; (ii) If applicable, any identified instances of noncompliance with the requirements of this section, and any recommendations for how the developer can improve its policies and processes for ensuring compliance with the requirements of this section; (iii) A detailed assessment of the large developer's internal controls, including its designation and empowerment of senior personnel responsible for ensuring compliance by the large developer, its employees, and its contractors; and (iv) The signature of the lead auditor certifying the results of the audit. (d) The large developer shall retain an unredacted copy of the report for as long as a frontier model is deployed plus five years. (e) (i) The large developer shall conspicuously publish a copy of the third party's report with appropriate redactions and transmit a copy of such redacted report to the division of homeland security and emergency services. (ii) The large developer shall grant the division of homeland security and emergency services or the attorney general access to the third party's report, with redactions only to the extent required by federal law, upon request.
R-01 Incident Reporting · R-01.1 · Developer · Frontier AI System
Gen. Bus. Law § 1421(5)
Plain Language
Large developers must report each safety incident to the Division of Homeland Security and Emergency Services within 72 hours. The 72-hour clock starts when the developer learns of the incident or learns facts sufficient to establish a reasonable belief that an incident has occurred. Reports must include the date, the statutory basis for classification as a safety incident, and a plain statement describing what happened. Safety incidents are defined by four categories — autonomous behavior, model weight compromise, control failures, and unauthorized use — but only qualify when they provide demonstrable evidence of an increased risk of critical harm.
Statutory Text
A large developer shall disclose each safety incident affecting the frontier model to the division of homeland security and emergency services within seventy-two hours of the large developer learning of the safety incident or within seventy-two hours of the large developer learning facts sufficient to establish a reasonable belief that a safety incident has occurred. Such disclosure shall include: (a) the date of the safety incident; (b) the reasons the incident qualifies as a safety incident as defined in subdivision thirteen of section fourteen hundred twenty of this article; and (c) a short and plain statement describing the safety incident.
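The 72-hour window is a straightforward deadline computation once the trigger time is fixed. The sketch below is illustrative; determining when a developer "learned of" an incident or of facts establishing reasonable belief is a legal question, not a programmatic one:

```python
# Illustrative sketch of the § 1421(5) disclosure deadline.
# The trigger timestamp is hypothetical input; the 72-hour window is statutory.
from datetime import datetime, timedelta, timezone

REPORTING_WINDOW = timedelta(hours=72)

def disclosure_deadline(learned_at: datetime) -> datetime:
    """Deadline runs from when the developer learned of the safety incident,
    or learned facts sufficient to establish a reasonable belief that one
    has occurred, whichever applies."""
    return learned_at + REPORTING_WINDOW

learned = datetime(2026, 3, 2, 9, 30, tzinfo=timezone.utc)
deadline = disclosure_deadline(learned)  # 72 hours later: 2026-03-05 09:30 UTC
```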
Other · Frontier AI System
Gen. Bus. Law § 1421(6)
Plain Language
Large developers may not knowingly include false or materially misleading statements or omissions in any documents produced under § 1421, including safety and security protocols, testing records, and audit reports. This is an anti-fraud provision that attaches liability to dishonest regulatory and public disclosures, rather than creating a standalone affirmative compliance obligation.
Statutory Text
A large developer shall not knowingly make false or materially misleading statements or omissions in or regarding documents produced pursuant to this section.
S-03 Frontier Model Safety Obligations · S-03.5 · Developer · Frontier AI System
Gen. Bus. Law § 1421(7)(a)-(b)
Plain Language
Persons who are not yet large developers but who plan to train a model that would make them qualify must, before beginning training, implement a written safety and security protocol and transmit a redacted copy to DHSES. The protocol is slightly less demanding than the full large developer protocol — it need not include the detailed testing procedure description (paragraph (c)) or the misuse/modification assessment (paragraph (d)) of the safety and security protocol definition. This pre-qualification obligation catches entities before they cross the large developer threshold, ensuring safety protocols are in place before training begins. The academic research exemption applies here as well.
Statutory Text
Any person who is not a large developer, but who sets out to train a frontier model that if completed as planned would qualify such person as a large developer (i.e. at the end of the training, such person will have spent five million dollars in compute costs on one frontier model and one hundred million dollars in compute costs in aggregate in training frontier models, excluding accredited colleges and universities to the extent such colleges and universities are engaging in academic research) shall, before training such model: (a) Implement a written safety and security protocol, excluding the requirements described in paragraphs (c) and (d) of subdivision twelve of section fourteen hundred twenty of this article; and (b) Transmit a copy of an appropriately redacted safety and security protocol to the division of homeland security and emergency services.
G-03 Whistleblower & Anti-Retaliation Protections · G-03.3 · Developer · Frontier AI System
Gen. Bus. Law § 1422(1)-(2)
Plain Language
Large developers, their contractors, and subcontractors may not prevent employees from disclosing — or threaten or retaliate against employees for disclosing — information about activities they reasonably believe pose an unreasonable or substantial risk of critical harm. Protected disclosures may be made internally to the large developer or externally to the Attorney General. The anti-retaliation protection applies regardless of whether the employer is otherwise in compliance with applicable law. 'Employee' is defined broadly to include contractors, subcontractors, unpaid advisors involved in critical harm risk work, and corporate officers. Employees harmed by retaliation may independently petition a court for temporary or preliminary injunctive relief.
Statutory Text
A large developer or a contractor or subcontractor of a large developer shall not prevent an employee from disclosing, or threatening to disclose, or retaliate against an employee for disclosing or threatening to disclose, information to the large developer or the attorney general, if the employee has reasonable cause to believe that the large developer's activities pose an unreasonable or substantial risk of critical harm, regardless of the employer's compliance with applicable law. 2. An employee harmed by a violation of this section may petition a court for appropriate temporary or preliminary injunctive relief.
G-03 Whistleblower & Anti-Retaliation Protections · G-03.4 · Developer · Frontier AI System
Gen. Bus. Law § 1422(3)
Plain Language
Large developers must notify all employees of their whistleblower protections, rights, and obligations within 90 days of the statute's effective date (or within 90 days of first qualifying as a large developer, whichever is later). New employees must be notified upon hire. A physical notice must also be posted conspicuously in well-lighted, easily accessible locations customarily frequented by employees. The statute does not specify a separate accommodation for remote workers, which may create practical compliance questions for distributed workforces.
Statutory Text
A large developer shall inform employees of their protections, rights and obligations under this article within ninety days of the effective date of this article or of becoming a large developer, whichever is later, upon commencement of employment, and by posting a notice thereof. Such notice shall be posted conspicuously in easily accessible and well-lighted places customarily frequented by employees.
Other · Frontier AI System
Gen. Bus. Law § 1423(2)(a)-(b)
Plain Language
Contracts that waive, shift, or burden enforcement of liability under this article — including adhesion contracts with users or third parties — are void as a matter of public policy. This prevents large developers from contractually insulating themselves via terms of service, licensing agreements, or downstream liability-shifting provisions. Courts are also authorized to pierce corporate formalities and impose joint and several liability on affiliated entities where the corporate structure was purposely designed to limit or avoid liability under this statute. These provisions strengthen enforcement but do not create independent compliance obligations.
Statutory Text
(a) A provision within a contract or agreement that seeks to waive, preclude, or burden the enforcement of a liability arising from a violation of this article, or to shift that liability to any person or entity in exchange for their use or access of, or right to use or access, a large developer's products or services, including by means of a contract of adhesion, is void as a matter of public policy. (b) A court shall disregard corporate formalities and impose joint and several liability on affiliated entities for purposes of effectuating the intent of this section to the maximum extent allowed by law if the court concludes that both of the following are true: (i) The affiliated entities, in the development of the corporate structure among the affiliated entities, took steps to purposely and unreasonably limit or avoid liability; and (ii) As the result of the steps described in subparagraph (i) of this paragraph, the corporate structure of the large developer or affiliated entities would frustrate recovery of penalties, damages, or injunctive relief under this section.
Other · Frontier AI System
Gen. Bus. Law § 1423(3)
Plain Language
The Division of Homeland Security and Emergency Services must share critical safety incident disclosures received from large developers with the Attorney General upon request. This enables AG enforcement actions based on incident reports submitted to DHSES but creates no new obligation on the regulated entity — the developer's reporting obligation is already captured under § 1421(5).
Statutory Text
The division of homeland security and emergency services shall make any critical safety incident disclosure available to the attorney general upon request.