AB-6453
NY · State · USA
● Enacted
Effective Date: 2025-06-03
New York Assembly Bill 6453-B — An Act to amend the general business law, in relation to the training and use of artificial intelligence frontier models (Responsible AI Safety and Education Act)
Summary

The RAISE Act imposes safety, transparency, and reporting obligations on large developers of frontier AI models that are developed, deployed, or operating in New York. Before deployment, large developers must implement and publicly publish a safety and security protocol, conduct pre-deployment testing, retain detailed test records, and implement safeguards to prevent unreasonable risk of critical harm. Developers are prohibited from deploying frontier models that pose unreasonable risk of critical harm and must report safety incidents to the Attorney General and Division of Homeland Security within 72 hours. Enforcement is exclusively through the Attorney General, with civil penalties up to $10 million for a first violation and $30 million for subsequent violations. No private right of action exists. Accredited colleges and universities engaged in academic research are exempted.

Enforcement & Penalties
Enforcement Authority
Attorney General enforcement only. The AG may bring a civil action for violations. No private right of action.
Penalties
Civil penalties not exceeding $10 million for a first violation and not exceeding $30 million for any subsequent violation, determined based on severity. Injunctive or declaratory relief also available. No statutory minimum — penalties are capped, not floored.
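The penalty structure above can be expressed as simple arithmetic. This is an illustrative sketch only, not a compliance tool: the function name is hypothetical, and the statute sets maximums that a court applies based on severity, not fixed amounts.

```python
# Hypothetical sketch of the RAISE Act civil-penalty caps.
# The statute caps penalties ($10M first violation, $30M each subsequent
# violation) but sets no floor; actual amounts depend on severity.

def penalty_cap(violation_number: int) -> int:
    """Return the statutory maximum civil penalty in USD for the Nth violation."""
    if violation_number < 1:
        raise ValueError("violation_number starts at 1")
    return 10_000_000 if violation_number == 1 else 30_000_000
```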
Who Is Covered
"Large developer" means a person that has trained at least one frontier model and has spent over one hundred million dollars in compute costs in aggregate in training frontier models. Accredited colleges and universities shall not be considered large developers under this article to the extent that such colleges and universities are engaging in academic research. If a person subsequently transfers full intellectual property rights of the frontier model to another person (including the right to resell the model) and retains none of those rights for themself, then the receiving person shall be considered the large developer and shall be subject to the responsibilities and requirements of this article after such transfer.
What Is Covered
"Frontier model" means either of the following: (a) an artificial intelligence model trained using greater than 10^26 computational operations (e.g., integer or floating-point operations), the compute cost of which exceeds one hundred million dollars; or (b) an artificial intelligence model produced by applying knowledge distillation to a frontier model as defined in paragraph (a) of this subdivision, provided that the compute cost for such model produced by applying knowledge distillation exceeds five million dollars.
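The two definitions above reduce to a pair of threshold tests. The sketch below restates them as code purely for illustration; all function and constant names are hypothetical, and real coverage analysis (e.g., what counts as "compute cost" or "academic research") requires legal judgment the code does not capture.

```python
# Illustrative restatement of the "frontier model" and "large developer"
# threshold tests from the RAISE Act definitions. Not a compliance tool.

FRONTIER_OPS_THRESHOLD = 10**26        # computational operations (prong (a))
FRONTIER_COST_THRESHOLD = 100_000_000  # USD compute cost (prong (a))
DISTILLED_COST_THRESHOLD = 5_000_000   # USD compute cost (prong (b), distillation)
LARGE_DEVELOPER_SPEND = 100_000_000    # aggregate USD compute spend on frontier models


def is_frontier_model(training_ops: float, compute_cost_usd: float,
                      distilled_from_frontier: bool = False) -> bool:
    """Apply the two-pronged 'frontier model' definition."""
    # Prong (a): trained with > 10^26 operations at a compute cost > $100M.
    if training_ops > FRONTIER_OPS_THRESHOLD and compute_cost_usd > FRONTIER_COST_THRESHOLD:
        return True
    # Prong (b): knowledge distillation of a frontier model, compute cost > $5M.
    if distilled_from_frontier and compute_cost_usd > DISTILLED_COST_THRESHOLD:
        return True
    return False


def is_large_developer(trained_a_frontier_model: bool,
                       aggregate_compute_spend_usd: float,
                       academic_research_only: bool = False) -> bool:
    """Apply the 'large developer' definition, with the academic exemption."""
    if academic_research_only:
        return False  # accredited colleges/universities doing academic research
    return trained_a_frontier_model and aggregate_compute_spend_usd > LARGE_DEVELOPER_SPEND
```

Note the interaction: a distilled model can qualify as a frontier model at a far lower cost ($5M vs. $100M), but the developer still only becomes a "large developer" once aggregate frontier-model training spend exceeds $100M.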
Compliance Obligations (11 obligations)
S-01 AI System Safety Program · S-01.1–S-01.5 · Developer · Foundation Model · Frontier AI System
Gen. Bus. Law § 1421(1)(a)
Plain Language
Before deploying any frontier model, the large developer must have a written safety and security protocol in place. The protocol must cover risk reduction procedures, cybersecurity protections (including against sophisticated actors), detailed testing procedures, and must designate senior personnel responsible for compliance. This is a pre-deployment prerequisite — no frontier model may be deployed without this documentation and these safeguards in place.
Statutory Text
Before deploying a frontier model, the large developer of such frontier model shall do all of the following: (a) Implement a written safety and security protocol;
G-01 AI Governance Program & Documentation · G-01.3 · Developer · Foundation Model · Frontier AI System
Gen. Bus. Law § 1421(1)(b)
Plain Language
Large developers must retain a complete, unredacted version of the safety and security protocol — including a changelog of all updates and revisions — for the entire period the frontier model is deployed plus five additional years. This is a document retention obligation. Note that the publicly published version may include appropriate redactions (see § 1421(1)(c)), but the retained internal version must be unredacted. Organizations should ensure their records management systems can track versioning with dates.
Statutory Text
Retain an unredacted copy of the safety and security protocol, including records and dates of any updates or revisions. Such unredacted copy of the safety and security protocol, including records and dates of any updates or revisions, shall be retained for as long as a frontier model is deployed plus five years.
S-03 Frontier Model Safety Obligations · S-03.5 · Developer · Foundation Model · Frontier AI System
Gen. Bus. Law § 1421(1)(c)(i)-(ii)
Plain Language
Large developers must conspicuously publish a copy of their safety and security protocol, which may include appropriate redactions (for trade secrets, public safety, privacy, and legally protected information). A copy of this redacted protocol must also be transmitted to the AG and Division of Homeland Security and Emergency Services. Separately, upon request, the AG and DHSES must be given access to the protocol with redactions limited only to those required by federal law — meaning the regulator version is substantially less redacted than the public version. This creates a two-tier disclosure regime: a more heavily redacted public version and a nearly unredacted regulator version.
Statutory Text
(i) Conspicuously publish a copy of the safety and security protocol with appropriate redactions and transmit a copy of such redacted safety and security protocol to the attorney general and division of homeland security and emergency services; (ii) Grant the division of homeland security and emergency services or the attorney general access to the safety and security protocol, with redactions only to the extent required by federal law, upon request.
G-01 AI Governance Program & Documentation · G-01.3 · Developer · Foundation Model · Frontier AI System
Gen. Bus. Law § 1421(1)(d)
Plain Language
Large developers must record and retain detailed information about all tests and test results from frontier model assessments — both those required by the statute and those required by the developer's own safety and security protocol. Records must contain sufficient detail for third parties to replicate the testing procedure, creating a reproducibility standard. Retention period is the duration of deployment plus five years. The 'as and when reasonably possible' qualifier provides some flexibility for real-time testing contexts where immediate documentation may be impractical.
Statutory Text
Record, as and when reasonably possible, and retain for as long as the frontier model is deployed plus five years information on the specific tests and test results used in any assessment of the frontier model required by this section or the developer's safety and security protocol that provides sufficient detail for third parties to replicate the testing procedure.
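The retention period ("deployed plus five years") is simple date arithmetic, but teams often need to compute the earliest permissible disposal date in records-management tooling. A minimal sketch, assuming a known deployment end date; the helper name is hypothetical and this is not legal advice.

```python
# Illustrative retention-date helper for the "deployment plus five years"
# record-retention rule. Hypothetical; not a compliance determination.
from datetime import date


def retention_end(deployment_end: date) -> date:
    """Earliest date test records may be discarded: deployment end + 5 years."""
    try:
        return deployment_end.replace(year=deployment_end.year + 5)
    except ValueError:
        # Feb 29 with a non-leap target year: fall back to Feb 28.
        return deployment_end.replace(year=deployment_end.year + 5, day=28)
```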
S-01 AI System Safety Program · S-01.1–S-01.5 · Developer · Foundation Model · Frontier AI System
Gen. Bus. Law § 1421(1)(e)
Plain Language
Before deploying any frontier model, the large developer must implement appropriate safeguards to prevent unreasonable risk of critical harm.
Statutory Text
Before deploying a frontier model, the large developer of such frontier model shall do all of the following: ... (e) Implement appropriate safeguards to prevent unreasonable risk of critical harm.
S-03 Frontier Model Safety Obligations · S-03.3 · Developer · Foundation Model · Frontier AI System
Gen. Bus. Law § 1421(2)
Plain Language
This is an absolute deployment prohibition: a large developer may not deploy a frontier model if doing so would create an unreasonable risk of critical harm. Critical harm is defined narrowly — it requires either CBRN weapon creation/use or autonomous AI criminal conduct, and must cause death or serious injury to 100+ people or $1 billion+ in damages. The 'unreasonable risk' standard introduces a reasonableness analysis rather than a zero-risk requirement. Note that deployment excludes internal training, evaluation, and legal compliance use. The intervening human actor causation limitation in the critical harm definition provides a defense where harm is caused by an unforeseeable third party.
Statutory Text
A large developer shall not deploy a frontier model if doing so would create an unreasonable risk of critical harm.
G-01 AI Governance Program & Documentation · G-01.2 · Developer · Foundation Model · Frontier AI System
Gen. Bus. Law § 1421(3)
Plain Language
Large developers must review their safety and security protocol at least annually, with the review accounting for changes in frontier model capabilities and evolving industry best practices. If the review results in material modifications, the updated protocol must be re-published publicly (with appropriate redactions) and re-transmitted to the AG and Division of Homeland Security. This creates a continuing maintenance obligation — the protocol is not a one-time pre-deployment document but a living document requiring annual reassessment. The trigger for re-publication is 'material modifications,' which introduces a materiality judgment call.
Statutory Text
A large developer shall conduct an annual review of any safety and security protocol required by this section to account for any changes to the capabilities of their frontier models and industry best practices and, if necessary, make modifications to such safety and security protocol. If any material modifications are made, the large developer shall publish the safety and security protocol in the same manner as required pursuant to paragraph (c) of subdivision one of this section.
R-01 Incident Reporting · R-01.1 · Developer · Foundation Model · Frontier AI System
Gen. Bus. Law § 1421(4)
Plain Language
Large developers must report every safety incident to both the Attorney General and the Division of Homeland Security and Emergency Services within 72 hours of learning of the incident (or learning facts sufficient to establish a reasonable belief one occurred). The report must include the date, the classification basis under the statutory definition, and a plain-language description. Safety incidents include actual critical harm events as well as precursor incidents — autonomous model behavior, model weight theft/release, control failures, and unauthorized use — that provide demonstrable evidence of increased critical harm risk. The 72-hour clock starts from actual or constructive knowledge, creating an incentive for robust internal monitoring and escalation procedures.
Statutory Text
A large developer shall disclose each safety incident affecting the frontier model to the attorney general and division of homeland security and emergency services within seventy-two hours of the large developer learning of the safety incident or within seventy-two hours of the large developer learning facts sufficient to establish a reasonable belief that a safety incident has occurred. Such disclosure shall include: (a) the date of the safety incident; (b) the reasons the incident qualifies as a safety incident as defined in subdivision thirteen of section fourteen hundred twenty of this article; and (c) a short and plain statement describing the safety incident.
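Because the 72-hour clock runs from actual or constructive knowledge, incident-response runbooks typically compute the disclosure deadline from the timestamp of first knowledge. A minimal sketch, with hypothetical names; in practice the knowledge timestamp itself is the contested input, not the arithmetic.

```python
# Hypothetical deadline helper for the 72-hour safety-incident disclosure
# window, which starts when the developer learns of the incident (or learns
# facts sufficient to establish a reasonable belief one occurred).
from datetime import datetime, timedelta

DISCLOSURE_WINDOW = timedelta(hours=72)


def disclosure_deadline(knowledge_time: datetime) -> datetime:
    """Latest time to disclose to the Attorney General and DHSES."""
    return knowledge_time + DISCLOSURE_WINDOW
```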
G-01 AI Governance Program & Documentation · G-01.6 · Foundation Model · Frontier AI System
Gen. Bus. Law § 1420(12)(e)
Plain Language
The safety and security protocol must designate senior personnel responsible for ensuring compliance with the statute. This effectively creates a mandatory accountability role — a named senior individual or individuals who bear responsibility for the developer's compliance with the RAISE Act. While embedded within the protocol definition rather than stated as a standalone obligation, it is independently actionable because a protocol that omits this designation is deficient on its face.
Statutory Text
"Safety and security protocol" means documented technical and organizational protocols that: ... (e) Designate senior personnel to be responsible for ensuring compliance.
G-01 AI Governance Program & Documentation · G-01.4 · Developer · Foundation Model · Frontier AI System
Gen. Bus. Law § 1421(5)
Plain Language
Large developers are prohibited from knowingly making false or materially misleading statements or omissions in any documents produced under the statute — including the safety and security protocol, test records, and safety incident reports. This is an anti-fraud provision that applies to all documentary submissions and publications required by the RAISE Act. The 'knowingly' mens rea standard means the developer must have actual awareness that the statement is false or misleading; negligent inaccuracies would not violate this provision.
Statutory Text
A large developer shall not knowingly make false or materially misleading statements or omissions in or regarding documents produced pursuant to this section.
S-03 Frontier Model Safety Obligations · S-03.2 · Foundation Model · Frontier AI System
Gen. Bus. Law § 1420(12)(c)
Plain Language
The safety and security protocol must include detailed testing procedures that evaluate two things: (1) whether the frontier model poses an unreasonable risk of critical harm, and (2) whether the model could be misused, modified, scaled up, escape developer/user control, combined with other software, or used to create another frontier model in ways that increase critical harm risk. This effectively mandates red-teaming and adversarial testing across multiple attack vectors and misuse scenarios. Because the critical harm definition encompasses CBRN weapon creation, this subsection requires CBRN-specific risk evaluation as part of the testing regime.
Statutory Text
"Safety and security protocol" means documented technical and organizational protocols that: ... (c) Describe in detail the testing procedure to evaluate if the frontier model poses an unreasonable risk of critical harm and whether the frontier model could be misused, be modified, be executed with increased computational resources, evade the control of its large developer or user, be combined with other software or be used to create another frontier model in a manner that would increase the risk of critical harm.