S-06953
NY · State · USA
● Pending
Proposed Effective Date
2025-06-25
New York Senate Bill 6953-A — Responsible AI Safety and Education Act (RAISE Act)
Imposes safety, transparency, and accountability obligations on 'large developers' of frontier AI models — defined as persons who have trained at least one frontier model costing over $5M in compute and have spent over $100M in aggregate compute costs on frontier models. Core obligations include implementing, publishing, and annually reviewing a safety and security protocol; conducting pre-deployment safety assessments; retaining detailed testing records; reporting safety incidents to the Division of Homeland Security and Emergency Services within 72 hours; and retaining an independent third-party auditor annually. Prohibits deploying a frontier model that would create an unreasonable risk of critical harm (defined as 100+ deaths/serious injuries or $1B+ damages via CBRN weapons or autonomous criminal conduct). Includes robust whistleblower protections for employees, contractors, and unpaid advisors. Enforcement is exclusively through the Attorney General, with civil penalties up to $10M for a first violation and $30M for subsequent violations. Accredited colleges and universities engaged in academic research are exempt.
Enforcement & Penalties
Enforcement Authority
Attorney General enforcement only. The Attorney General may bring a civil action for violations of the article. The Division of Homeland Security and Emergency Services receives safety and security protocols, audit reports, and safety incident disclosures, and must make critical safety incident disclosures available to the Attorney General upon request, but is not granted independent enforcement authority. Employees harmed by whistleblower retaliation may petition a court for temporary or preliminary injunctive relief. Beyond that employee remedy, the article creates no general private right of action.
Penalties
For violations of § 1421 (transparency and safety requirements): civil penalty up to $10,000,000 for a first violation and up to $30,000,000 for any subsequent violation. For violations of § 1422 (employee retaliation): civil penalty up to $10,000 per employee per violation, awarded to the harmed employee. Injunctive or declaratory relief is available for violations of either section. Employees harmed by retaliation may separately petition for temporary or preliminary injunctive relief. Contract provisions that waive, preclude, burden, or shift liability arising from violations are void as a matter of public policy. Courts may disregard corporate formalities and impose joint and several liability on affiliated entities that purposely structured to avoid liability.
Who Is Covered
"Large developer" means a person that has trained at least one frontier model, the compute cost of which exceeds five million dollars, and has spent over one hundred million dollars in compute costs in aggregate in training frontier models. Accredited colleges and universities shall not be considered large developers under this article to the extent that such colleges and universities are engaging in academic research. If a person subsequently transfers full intellectual property rights of the frontier model to another person (including the right to resell the model) and retains none of those rights for themself, then the receiving person shall be considered the large developer and shall be subject to the responsibilities and requirements of this article after such transfer.
"Person" means an individual, proprietorship, firm, partnership, joint venture, syndicate, business trust, company, corporation, limited liability company, association, committee, or any other nongovernmental organization or group of persons acting in concert.
What Is Covered
"Frontier model" means either of the following: (a) an artificial intelligence model trained using greater than 10§26 computational operations (e.g., integer or floating-point operations), the compute cost of which exceeds one hundred million dollars; or (b) an artificial intelligence model produced by applying knowledge distillation to a frontier model as defined in paragraph (a) of this subdivision.
Compliance Obligations · 11 obligations
S-03 Frontier Model Safety Obligations · S-03.5 · Developer · Frontier AI System
Gen. Bus. Law § 1421(1)(a)-(b), § 1421(1)(c)
Plain Language
Before deploying any frontier model, a large developer must write, implement, and maintain a detailed safety and security protocol covering risk reduction measures, cybersecurity protections, testing procedures, compliance requirements, and designated senior personnel responsible for compliance. The unredacted protocol must be retained for as long as the model is deployed plus five years. A redacted version must be conspicuously published and transmitted to the Division of Homeland Security and Emergency Services. The unredacted version must be made available to DHSES or the Attorney General upon request, with redactions permitted only to the extent required by federal law. Permissible redactions for the published version cover trade secrets, public safety risks, employee/customer privacy, and information controlled by law.
Statutory Text
Before deploying a frontier model, the large developer of such frontier model shall do all of the following: (a) Implement a written safety and security protocol; (b) Retain an unredacted copy of the safety and security protocol, including records and dates of any updates or revisions. Such unredacted copy of the safety and security protocol, including records and dates of any updates or revisions, shall be retained for as long as a frontier model is deployed plus five years; (c) (i) Conspicuously publish a copy of the safety and security protocol with appropriate redactions and transmit a copy of such redacted safety and security protocol to the division of homeland security and emergency services; (ii) Grant the division of homeland security and emergency services or the attorney general access to the safety and security protocol, with redactions only to the extent required by federal law, upon request;
S-01 AI System Safety Program · S-01.1 · Developer · Frontier AI System
Gen. Bus. Law § 1421(1)(d)-(e)
Plain Language
Before deploying a frontier model, the large developer must record and retain detailed information on all tests and test results used to assess the model, in sufficient detail for third parties to replicate the testing procedure. These records must be retained for the duration of deployment plus five years. Additionally, the developer must implement appropriate safeguards to prevent unreasonable risk of critical harm. The 'reasonably possible' qualifier on recordkeeping provides some flexibility, but the obligation is a pre-deployment prerequisite — the developer may not deploy until both the testing documentation and safeguards are in place.
Statutory Text
(d) Record, as and when reasonably possible, and retain for as long as the frontier model is deployed plus five years information on the specific tests and test results used in any assessment of the frontier model that provides sufficient detail for third parties to replicate the testing procedure; and (e) Implement appropriate safeguards to prevent unreasonable risk of critical harm.
S-03 Frontier Model Safety Obligations · S-03.3 · Developer · Frontier AI System
Gen. Bus. Law § 1421(2)
Plain Language
A large developer is categorically prohibited from deploying a frontier model if deployment would create an unreasonable risk of critical harm. This is a deployment gate — not merely a best-efforts obligation. The standard is 'unreasonable risk,' meaning some residual risk may be acceptable, but the developer bears the burden of ensuring the risk does not cross the unreasonable threshold. Note the carve-outs in the definition of 'deploy': internal use for training, evaluation, or legal compliance is not deployment, so those activities are not subject to this prohibition.
Statutory Text
A large developer shall not deploy a frontier model if doing so would create an unreasonable risk of critical harm.
G-01 AI Governance Program & Documentation · G-01.2 · Developer · Frontier AI System
Gen. Bus. Law § 1421(3)
Plain Language
Large developers must conduct an annual review of their safety and security protocol to ensure it accounts for changes in model capabilities and evolving industry best practices. If the review identifies needed modifications, the developer must update the protocol and re-publish the redacted version and transmit it to DHSES in the same manner as the initial publication. This is a continuing obligation — the protocol is not a static document filed once at deployment.
Statutory Text
A large developer shall conduct an annual review of any safety and security protocol required by this section to account for any changes to the capabilities of their frontier models and industry best practices and, if necessary, make modifications to such safety and security protocol. If any modifications are made, the large developer shall publish the safety and security protocol in the same manner as required pursuant to paragraph (c) of subdivision one of this section.
G-01 AI Governance Program & Documentation · G-01.5 · Developer · Frontier AI System
Gen. Bus. Law § 1421(4)(a)-(e)
Plain Language
Large developers must annually retain an independent third-party auditor to evaluate compliance with all § 1421 requirements. The auditor must receive full unredacted access and produce a signed report covering: compliance assessment, any noncompliance instances with recommendations, and assessment of internal controls including the designation and empowerment of senior compliance personnel. The developer must retain the unredacted report for the deployment period plus five years, conspicuously publish a redacted version, transmit the redacted version to DHSES, and make the unredacted version available to DHSES or the Attorney General upon request (with redactions only as required by federal law). The 90-day grace period for newly qualifying large developers means the first audit must be retained no later than 90 days after the developer first meets the large developer threshold.
Statutory Text
(a) Beginning on the effective date of this article, or ninety days after a developer first qualifies as a large developer, whichever is later, a large developer shall annually retain a third party to perform an independent audit of compliance with the requirements of this section. Such third party shall conduct audits consistent with best practices. (b) The third party shall be granted access to unredacted materials as necessary to comply with the third party's obligations under this subdivision. (c) The third party shall produce a report including all of the following: (i) A detailed assessment of the large developer's steps to comply with the requirements of this section; (ii) If applicable, any identified instances of noncompliance with the requirements of this section, and any recommendations for how the developer can improve its policies and processes for ensuring compliance with the requirements of this section; (iii) A detailed assessment of the large developer's internal controls, including its designation and empowerment of senior personnel responsible for ensuring compliance by the large developer, its employees, and its contractors; and (iv) The signature of the lead auditor certifying the results of the audit. (d) The large developer shall retain an unredacted copy of the report for as long as a frontier model is deployed plus five years. (e) (i) The large developer shall conspicuously publish a copy of the third party's report with appropriate redactions and transmit a copy of such redacted report to the division of homeland security and emergency services. (ii) The large developer shall grant the division of homeland security and emergency services or the attorney general access to the third party's report, with redactions only to the extent required by federal law, upon request.
R-01 Incident Reporting · R-01.1 · Developer · Frontier AI System
Gen. Bus. Law § 1421(5)
Plain Language
Large developers must report every safety incident affecting a frontier model to the Division of Homeland Security and Emergency Services within 72 hours of learning of the incident or learning facts establishing a reasonable belief one occurred. Not every operational issue is a safety incident: the incident must provide demonstrable evidence of increased risk of critical harm and fall into one of four categories: autonomous model behavior not requested by a user, theft or unauthorized access to model weights, critical failure of technical or administrative controls, or unauthorized use. Each report must include the incident date, the specific reason it qualifies as a safety incident, and a plain-language description.
Statutory Text
A large developer shall disclose each safety incident affecting the frontier model to the division of homeland security and emergency services within seventy-two hours of the large developer learning of the safety incident or within seventy-two hours of the large developer learning facts sufficient to establish a reasonable belief that a safety incident has occurred. Such disclosure shall include: (a) the date of the safety incident; (b) the reasons the incident qualifies as a safety incident as defined in subdivision thirteen of section fourteen hundred twenty of this article; and (c) a short and plain statement describing the safety incident.
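The 72-hour clock and the three required disclosure elements can be sketched as follows. The trigger timestamp (when the developer "learned" of the incident or of facts establishing a reasonable belief) is a factual question the statute leaves open, and the record structure and names here are assumptions for illustration only.

```python
# Sketch of the 72-hour disclosure window and the three statutorily
# required disclosure elements under Gen. Bus. Law § 1421(5).

from datetime import datetime, timedelta, timezone

DISCLOSURE_WINDOW = timedelta(hours=72)

def disclosure_deadline(learned_at: datetime) -> datetime:
    """Latest time a disclosure to DHSES may be made, given the moment the
    developer learned of the incident (or of facts establishing a
    reasonable belief that one occurred)."""
    return learned_at + DISCLOSURE_WINDOW

def build_disclosure(incident_date: str, qualifying_reason: str, description: str) -> dict:
    """Minimal record carrying the three required elements."""
    return {
        "incident_date": incident_date,          # § 1421(5)(a)
        "qualifying_reason": qualifying_reason,  # § 1421(5)(b): why it is a safety incident
        "description": description,              # § 1421(5)(c): short and plain statement
    }
```

Because the clock can start from constructive knowledge ("facts sufficient to establish a reasonable belief"), a developer's internal triage process effectively determines `learned_at`; a conservative implementation would start the clock at the earlier of the two trigger events.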
Other · Frontier AI System
Gen. Bus. Law § 1421(6)
Plain Language
Large developers are prohibited from knowingly making false or materially misleading statements or omissions in any documents produced under § 1421 — including the safety and security protocol, testing records, audit reports, and safety incident disclosures. This is an integrity requirement that applies across all documentation obligations in the article.
Statutory Text
A large developer shall not knowingly make false or materially misleading statements or omissions in or regarding documents produced pursuant to this section.
S-03 Frontier Model Safety Obligations · S-03.5 · Developer · Frontier AI System
Gen. Bus. Law § 1421(7)(a)-(b)
Plain Language
Persons who are not yet large developers but who plan to train a frontier model that would qualify them as large developers must, before beginning training, implement a written safety and security protocol and transmit a redacted copy to DHSES. The protocol need not include the detailed testing-procedure descriptions required by paragraphs (c) and (d) of the safety and security protocol definition — reflecting that the model has not yet been built or tested. Accredited colleges and universities engaged in academic research are exempt. This is a forward-looking trigger: the obligation attaches when a person sets out to train a qualifying model, not when the training is completed.
Statutory Text
Any person who is not a large developer, but who sets out to train a frontier model that if completed as planned would qualify such person as a large developer (i.e. at the end of the training, such person will have spent five million dollars in compute costs on one frontier model and one hundred million dollars in compute costs in aggregate in training frontier models, excluding accredited colleges and universities to the extent such colleges and universities are engaging in academic research) shall, before training such model: (a) Implement a written safety and security protocol, excluding the requirements described in paragraphs (c) and (d) of subdivision twelve of section fourteen hundred twenty of this article; and (b) Transmit a copy of an appropriately redacted safety and security protocol to the division of homeland security and emergency services.
G-03 Whistleblower & Anti-Retaliation Protections · G-03.3 · Developer · Frontier AI System
Gen. Bus. Law § 1422(1)-(2)
Plain Language
Large developers and their contractors/subcontractors are prohibited from preventing or retaliating against employees who disclose (or threaten to disclose) information to the developer itself or the Attorney General about activities they reasonably believe pose an unreasonable or substantial risk of critical harm. The protection applies regardless of whether the employer is in compliance with applicable law — so an employee can blow the whistle even if the developer has not technically violated the statute yet. The 'employee' definition is broad: it covers traditional employees, contractors, subcontractors, unpaid advisors involved in risk assessment, and corporate officers. Employees harmed by retaliation can petition a court directly for temporary or preliminary injunctive relief — this is a limited private right of action for injunctive relief only, not damages.
Statutory Text
A large developer or a contractor or subcontractor of a large developer shall not prevent an employee from disclosing, or threatening to disclose, or retaliate against an employee for disclosing or threatening to disclose, information to the large developer or the attorney general, if the employee has reasonable cause to believe that the large developer's activities pose an unreasonable or substantial risk of critical harm, regardless of the employer's compliance with applicable law. 2. An employee harmed by a violation of this section may petition a court for appropriate temporary or preliminary injunctive relief.
G-03 Whistleblower & Anti-Retaliation Protections · G-03.4 · Developer · Frontier AI System
Gen. Bus. Law § 1422(3)
Plain Language
Large developers must provide written notice to all employees of their whistleblower protections, rights, and obligations under the RAISE Act. The notice must be distributed within 90 days of the article's effective date or of the developer first qualifying as a large developer, whichever is later, and at the commencement of each new employee's employment. Additionally, a physical notice must be posted conspicuously in well-lighted, easily accessible locations where employees frequently gather. The statute does not specify an electronic distribution method for remote workers.
Statutory Text
A large developer shall inform employees of their protections, rights and obligations under this article within ninety days of the effective date of this article or of becoming a large developer, whichever is later, upon commencement of employment, and by posting a notice thereof. Such notice shall be posted conspicuously in easily accessible and well-lighted places customarily frequented by employees.
Other · Frontier AI System
Gen. Bus. Law § 1423(2)(a)-(b)
Plain Language
Any contractual provision that waives, precludes, burdens, or shifts liability arising from violations of the RAISE Act — including through contracts of adhesion tied to product access — is void as a matter of public policy. This means large developers cannot use terms of service or licensing agreements to shift liability downstream to deployers or users. Additionally, courts are directed to pierce corporate formalities and impose joint and several liability on affiliated entities when those entities purposely structured themselves to limit or avoid liability under the article, and where the resulting structure would frustrate recovery. This is a significant anti-evasion provision targeting corporate restructuring strategies.
Statutory Text
(a) A provision within a contract or agreement that seeks to waive, preclude, or burden the enforcement of a liability arising from a violation of this article, or to shift that liability to any person or entity in exchange for their use or access of, or right to use or access, a large developer's products or services, including by means of a contract of adhesion, is void as a matter of public policy. (b) A court shall disregard corporate formalities and impose joint and several liability on affiliated entities for purposes of effectuating the intent of this section to the maximum extent allowed by law if the court concludes that both of the following are true: (i) The affiliated entities, in the development of the corporate structure among the affiliated entities, took steps to purposely and unreasonably limit or avoid liability; and (ii) As the result of the steps described in subparagraph (i) of this paragraph, the corporate structure of the large developer or affiliated entities would frustrate recovery of penalties, damages, or injunctive relief under this section.