A-06453
NY · State · USA
● Pending
Proposed Effective Date
2025-09-02
New York Assembly Bill 6453-A — An Act to amend the general business law, in relation to the training and use of artificial intelligence frontier models (Responsible AI Safety and Education Act / RAISE Act)
Summary

The RAISE Act imposes safety, transparency, and governance obligations on 'large developers' of frontier AI models — defined as entities that have spent over $5 million in compute on at least one frontier model and over $100 million in aggregate compute on frontier models. Core obligations include implementing and publicly publishing a written safety and security protocol before deployment, conducting annual third-party compliance audits, reporting safety incidents to the Division of Homeland Security and Emergency Services within 72 hours, and refraining from deploying frontier models that pose an unreasonable risk of critical harm (defined as 100+ deaths/serious injuries or $1B+ in damages via CBRN weapons or autonomous criminal AI conduct). The bill includes whistleblower anti-retaliation protections for employees, contractors, and unpaid advisors. Enforcement is through the Attorney General, with civil penalties up to $10 million for a first violation and up to $30 million for each subsequent violation of the safety requirements. Accredited colleges and universities engaged in academic research are excluded from the large developer definition.

Enforcement & Penalties
Enforcement Authority
Attorney General enforcement only. The Attorney General may bring a civil action for violations of the transparency/safety requirements (§ 1421) and the employee protection requirements (§ 1422). The Division of Homeland Security and Emergency Services receives safety incident disclosures and safety and security protocols but is not granted independent enforcement authority; it must make critical safety incident disclosures available to the Attorney General upon request. No private right of action is created. Employees harmed by retaliation may petition a court for temporary or preliminary injunctive relief, but this is limited to whistleblower retaliation claims under § 1422 and does not constitute a general private right of action. Contract provisions that waive, preclude, or burden enforcement of liability under the article — including through contracts of adhesion — are void as a matter of public policy. Courts may pierce corporate formalities and impose joint and several liability on affiliated entities that purposely structured themselves to avoid liability.
Penalties
For violations of § 1421 (transparency and safety requirements): civil penalty up to $10 million for a first violation and up to $30 million for subsequent violations. For violations of § 1422 (employee retaliation): civil penalty up to $10,000 per employee per violation, awarded to the harmed employee. Injunctive or declaratory relief is available for violations of either section. Employees harmed by retaliation may independently petition a court for temporary or preliminary injunctive relief. Courts may disregard corporate formalities and impose joint and several liability on affiliated entities that structured themselves to avoid liability. Contract provisions waiving or shifting liability under this article are void as a matter of public policy.
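The penalty caps above reduce to simple arithmetic. The sketch below is illustrative only: the function name and inputs are hypothetical, and actual penalties are discretionary amounts *up to* these caps, not fixed sums.

```python
# Illustrative upper-bound calculator for civil penalties under the RAISE Act.
# Caps: Section 1421 -- up to $10M for a first violation, up to $30M for each
# subsequent violation; Section 1422 -- up to $10,000 per employee per violation.
# Hypothetical helper, not a statutory formula.

def max_penalty_exposure(safety_violations: int, retaliation_counts: int = 0) -> int:
    """Return the maximum USD exposure for the given violation counts.

    retaliation_counts is the number of (employee, violation) pairs under Section 1422.
    """
    exposure = 0
    if safety_violations >= 1:
        exposure += 10_000_000                            # first Section 1421 violation
        exposure += 30_000_000 * (safety_violations - 1)  # each subsequent violation
    exposure += 10_000 * retaliation_counts               # Section 1422, per employee per violation
    return exposure
```

For example, three safety violations plus retaliation against five employees would cap out at $70 million plus $50,000 under this reading.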
Who Is Covered
"Large developer" means a person that has trained at least one frontier model, the compute cost of which exceeds five million dollars, and has spent over one hundred million dollars in compute costs in aggregate in training frontier models. Accredited colleges and universities shall not be considered large developers under this article to the extent that such colleges and universities are engaging in academic research. If a person subsequently transfers full intellectual property rights of the frontier model to another person (including the right to resell the model) and retains none of those rights for themself, then the receiving person shall be considered the large developer and shall be subject to the responsibilities and requirements of this article after such transfer.
"Person" means an individual, proprietorship, firm, partnership, joint venture, syndicate, business trust, company, corporation, limited liability company, association, committee, or any other nongovernmental organization or group of persons acting in concert.
What Is Covered
"Frontier model" means either of the following: (a) an artificial intelligence model trained using greater than 10^26 computational operations (e.g., integer or floating-point operations), the compute cost of which exceeds one hundred million dollars; or (b) an artificial intelligence model produced by applying knowledge distillation to a frontier model as defined in paragraph (a) of this subdivision.
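The two definitions above are conjunctive threshold tests, which can be made concrete in code. This is a minimal sketch of the statutory logic as quoted; the function names and arguments are hypothetical, while the numeric thresholds come from the bill text.

```python
# Illustrative encoding of the statutory definitions quoted above.
# Function names and arguments are hypothetical; thresholds come from the bill.

FRONTIER_OPS = 10**26              # computational operations threshold (frontier model)
FRONTIER_COST = 100_000_000        # USD compute cost threshold (frontier model)
LARGE_DEV_SINGLE = 5_000_000       # USD, single-model prong (large developer)
LARGE_DEV_AGGREGATE = 100_000_000  # USD, aggregate prong (large developer)

def is_frontier_model(training_ops: float, compute_cost_usd: float,
                      distilled_from_frontier: bool = False) -> bool:
    """Paragraph (a): ops AND cost thresholds; paragraph (b): knowledge distillation."""
    if distilled_from_frontier:
        return True
    return training_ops > FRONTIER_OPS and compute_cost_usd > FRONTIER_COST

def is_large_developer(max_single_model_cost_usd: float, aggregate_cost_usd: float,
                       accredited_academic_research: bool = False) -> bool:
    """Both spending prongs must be exceeded; accredited academic research is excluded."""
    if accredited_academic_research:
        return False
    return (max_single_model_cost_usd > LARGE_DEV_SINGLE
            and aggregate_cost_usd > LARGE_DEV_AGGREGATE)
```

Note that both definitions are conjunctive: a model trained above 10^26 operations at under $100 million in compute cost is not a frontier model under paragraph (a), and a developer exceeding only one spending prong is not a large developer.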
Compliance Obligations (10 obligations)
S-03 Frontier Model Safety Obligations · S-03.5 · Developer · Frontier AI System
Gen. Bus. Law § 1421(1)(a)-(c)
Plain Language
Before deploying any frontier model, the large developer must create and implement a written safety and security protocol — a comprehensive document covering risk reduction measures, cybersecurity protections against unauthorized access, detailed testing procedures for critical harm risk, misuse and evasion assessment, compliance specifics, and designation of responsible senior personnel. The unredacted protocol must be retained for the duration of deployment plus five years. A redacted version must be conspicuously published and transmitted to the Division of Homeland Security and Emergency Services. The unredacted version must be made available to the Division or the Attorney General upon request, with redactions permitted only to the extent required by federal law.
Statutory Text
Before deploying a frontier model, the large developer of such frontier model shall do all of the following: (a) Implement a written safety and security protocol; (b) Retain an unredacted copy of the safety and security protocol, including records and dates of any updates or revisions. Such unredacted copy of the safety and security protocol, including records and dates of any updates or revisions, shall be retained for as long as a frontier model is deployed plus five years; (c) (i) Conspicuously publish a copy of the safety and security protocol with appropriate redactions and transmit a copy of such redacted safety and security protocol to the division of homeland security and emergency services; (ii) Grant the division of homeland security and emergency services or the attorney general access to the safety and security protocol, with redactions only to the extent required by federal law, upon request;
S-03 Frontier Model Safety Obligations · S-03.1 · Developer · Frontier AI System
Gen. Bus. Law § 1421(1)(d)-(e)
Plain Language
Before deployment, large developers must document the specific tests and results from all frontier model assessments in sufficient detail for third-party replication, and retain those records for the duration of deployment plus five years. Additionally, developers must implement appropriate safeguards to prevent unreasonable risk of critical harm. The safeguards obligation is ongoing — it does not end at deployment. Note the intervening-actor limitation in the critical harm definition: harms caused by a human third party are attributable to the developer only if the developer's activities made the harm substantially easier or more likely.
Statutory Text
(d) Record, as and when reasonably possible, and retain for as long as the frontier model is deployed plus five years information on the specific tests and test results used in any assessment of the frontier model that provides sufficient detail for third parties to replicate the testing procedure; and (e) Implement appropriate safeguards to prevent unreasonable risk of critical harm.
S-03 Frontier Model Safety Obligations · S-03.3 · Developer · Frontier AI System
Gen. Bus. Law § 1421(2)
Plain Language
Large developers are categorically prohibited from deploying a frontier model if deployment would create an unreasonable risk of critical harm. This is a deployment gate — it requires an affirmative determination that the risk is not unreasonable before making the model available. 'Critical harm' requires 100+ deaths/serious injuries or $1B+ in damages caused by the creation or use of a CBRN weapon or by autonomous criminal AI conduct, so the threshold for this prohibition is extremely high. The 'unreasonable risk' standard implies a reasonableness assessment, not a zero-risk requirement.
Statutory Text
A large developer shall not deploy a frontier model if doing so would create an unreasonable risk of critical harm.
G-01 AI Governance Program & Documentation · G-01.2 · Developer · Frontier AI System
Gen. Bus. Law § 1421(3)
Plain Language
Large developers must annually review their safety and security protocols, accounting for changes in model capabilities and evolving industry best practices. If modifications are needed, the developer must update the protocol and re-publish the redacted version conspicuously and re-transmit it to the Division of Homeland Security and Emergency Services — following the same publication process as the initial protocol. This is a continuing obligation that ensures protocols do not become stale as models evolve.
Statutory Text
A large developer shall conduct an annual review of any safety and security protocol required by this section to account for any changes to the capabilities of their frontier models and industry best practices and, if necessary, make modifications to such safety and security protocol. If any modifications are made, the large developer shall publish the safety and security protocol in the same manner as required pursuant to paragraph (c) of subdivision one of this section.
G-01 AI Governance Program & Documentation · G-01.5 · Developer · Frontier AI System
Gen. Bus. Law § 1421(4)(a)-(e)
Plain Language
Large developers must annually engage an independent third-party auditor to assess compliance with all § 1421 requirements. The auditor must receive unredacted access to all necessary materials and must produce a detailed report covering: compliance steps taken, identified noncompliance instances and improvement recommendations, an assessment of internal controls including the empowerment of designated senior personnel, and a certifying signature from the lead auditor. The unredacted audit report must be retained for the duration of deployment plus five years. A redacted version must be conspicuously published and transmitted to the Division of Homeland Security and Emergency Services. The unredacted report must be provided to the Division or Attorney General upon request, redacted only as required by federal law. The 90-day grace period for new large developers means this obligation applies promptly once an entity qualifies.
Statutory Text
(a) Beginning on the effective date of this article, or ninety days after a developer first qualifies as a large developer, whichever is later, a large developer shall annually retain a third party to perform an independent audit of compliance with the requirements of this section. Such third party shall conduct audits consistent with best practices. (b) The third party shall be granted access to unredacted materials as necessary to comply with the third party's obligations under this subdivision. (c) The third party shall produce a report including all of the following: (i) A detailed assessment of the large developer's steps to comply with the requirements of this section; (ii) If applicable, any identified instances of noncompliance with the requirements of this section, and any recommendations for how the developer can improve its policies and processes for ensuring compliance with the requirements of this section; (iii) A detailed assessment of the large developer's internal controls, including its designation and empowerment of senior personnel responsible for ensuring compliance by the large developer, its employees, and its contractors; and (iv) The signature of the lead auditor certifying the results of the audit. (d) The large developer shall retain an unredacted copy of the report for as long as a frontier model is deployed plus five years. (e) (i) The large developer shall conspicuously publish a copy of the third party's report with appropriate redactions and transmit a copy of such redacted report to the division of homeland security and emergency services. (ii) The large developer shall grant the division of homeland security and emergency services or the attorney general access to the third party's report, with redactions only to the extent required by federal law, upon request.
R-01 Incident Reporting · R-01.1 · Developer · Frontier AI System
Gen. Bus. Law § 1421(5)
Plain Language
Large developers must report each safety incident to the Division of Homeland Security and Emergency Services within 72 hours of learning of the incident or of learning facts sufficient to establish a reasonable belief that one has occurred. The report must include the incident date, the specific statutory basis for why it qualifies as a safety incident, and a short plain-language description. The 72-hour clock starts from actual or constructive knowledge, creating an ongoing monitoring obligation. Safety incidents include autonomous model behavior, model weight theft or leakage, critical control failures, and unauthorized model use — but only when the incident provides demonstrable evidence of increased critical harm risk.
Statutory Text
A large developer shall disclose each safety incident affecting the frontier model to the division of homeland security and emergency services within seventy-two hours of the large developer learning of the safety incident or within seventy-two hours of the large developer learning facts sufficient to establish a reasonable belief that a safety incident has occurred. Such disclosure shall include: (a) the date of the safety incident; (b) the reasons the incident qualifies as a safety incident as defined in subdivision thirteen of section fourteen hundred twenty of this article; and (c) a short and plain statement describing the safety incident.
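The 72-hour clock in § 1421(5) runs from the earlier of two triggers: actual knowledge of the incident, or learning facts sufficient for a reasonable belief that one occurred. A minimal sketch of the deadline computation, with illustrative (non-statutory) names:

```python
# Minimal sketch of the 72-hour disclosure window in Section 1421(5).
# The clock runs from the earlier of actual knowledge or the point at which
# facts sufficient for a reasonable belief were learned. Names are illustrative.
from datetime import datetime, timedelta
from typing import Optional

REPORTING_WINDOW = timedelta(hours=72)

def disclosure_deadline(actual_knowledge: Optional[datetime],
                        reasonable_belief: Optional[datetime]) -> datetime:
    """Return the latest permissible disclosure time, measured from the earliest trigger."""
    triggers = [t for t in (actual_knowledge, reasonable_belief) if t is not None]
    if not triggers:
        raise ValueError("no trigger event has occurred yet")
    return min(triggers) + REPORTING_WINDOW
```

Because the reasonable-belief trigger can precede confirmation, a developer that waits for certainty before starting the clock risks missing the statutory window.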
Other · Developer · Frontier AI System
Gen. Bus. Law § 1421(6)
Plain Language
Large developers must not knowingly make false or materially misleading statements — or material omissions — in any documents produced under the article's transparency and safety requirements. This includes the safety and security protocol, test records, audit reports, and safety incident disclosures. The 'knowingly' standard means inadvertent errors may not trigger this prohibition, but deliberate misrepresentation or omission will.
Statutory Text
A large developer shall not knowingly make false or materially misleading statements or omissions in or regarding documents produced pursuant to this section.
S-03 Frontier Model Safety Obligations · S-03.5 · Developer · Frontier AI System
Gen. Bus. Law § 1421(7)
Plain Language
Persons who are not yet large developers but who intend to train a model that would qualify them as large developers upon completion must, before beginning training: (1) implement a written safety and security protocol (though without the detailed testing procedure and misuse assessment elements normally required), and (2) transmit a redacted copy to the Division of Homeland Security and Emergency Services. This is a pre-qualification obligation — it ensures that entities approaching the frontier model threshold have safety protocols in place before, not after, training a qualifying model. The academic research exclusion for accredited colleges and universities applies here as well.
Statutory Text
Any person who is not a large developer, but who sets out to train a frontier model that if completed as planned would qualify such person as a large developer (i.e. at the end of the training, such person will have spent five million dollars in compute costs on one frontier model and one hundred million dollars in compute costs in aggregate in training frontier models, excluding accredited colleges and universities to the extent such colleges and universities are engaging in academic research) shall, before training such model: (a) Implement a written safety and security protocol, excluding the requirements described in paragraphs (c) and (d) of subdivision twelve of section fourteen hundred twenty of this article; and (b) Transmit a copy of an appropriately redacted safety and security protocol to the division of homeland security and emergency services.
G-03 Whistleblower & Anti-Retaliation Protections · G-03.3 · Developer · Frontier AI System
Gen. Bus. Law § 1422(1)
Plain Language
Large developers and their contractors/subcontractors are prohibited from preventing or retaliating against employees who disclose — or threaten to disclose — information to the developer itself or to the Attorney General, if the employee reasonably believes the developer's activities pose an unreasonable or substantial risk of critical harm. The anti-retaliation protection applies regardless of whether the employer is otherwise in compliance with the law. The employee definition is notably broad — it includes not only traditional employees but also contractors, subcontractors, unpaid advisors involved in risk assessment, and corporate officers. Employees harmed by retaliation may seek temporary or preliminary injunctive relief from a court.
Statutory Text
A large developer or a contractor or subcontractor of a large developer shall not prevent an employee from disclosing, or threatening to disclose, or retaliate against an employee for disclosing or threatening to disclose, information to the large developer or the attorney general, if the employee has reasonable cause to believe that the large developer's activities pose an unreasonable or substantial risk of critical harm, regardless of the employer's compliance with applicable law.
G-03 Whistleblower & Anti-Retaliation Protections · G-03.4 · Developer · Frontier AI System
Gen. Bus. Law § 1422(3)
Plain Language
Large developers must provide written notice to employees of their whistleblower protections and other rights under the RAISE Act within 90 days of the law's effective date or of becoming a large developer. The notice must also be provided at onboarding for new employees and must be posted conspicuously in physical locations frequented by employees. The statute does not specifically address remote workers, which may create a practical compliance gap — consider whether electronic posting or distribution is needed to reach remote employees.
Statutory Text
A large developer shall inform employees of their protections, rights and obligations under this article within ninety days of the effective date of this article or of becoming a large developer, whichever is later, upon commencement of employment, and by posting a notice thereof. Such notice shall be posted conspicuously in easily accessible and well-lighted places customarily frequented by employees.