SF-4509
MN · State · USA
● Pending
Proposed Effective Date
2026-08-01
Minnesota S.F. No. 4509 — Responsible Artificial Intelligence Safety and Education Act (RAISE Act)
Summary

The RAISE Act imposes safety, transparency, and incident reporting obligations on developers of AI models. Before deploying a model, developers must implement a written safety and security protocol covering risk reduction, cybersecurity, and testing procedures; publish the protocol publicly; transmit a copy to the attorney general; and retain detailed testing records for the deployment period plus five years. Developers are prohibited from deploying a model that creates an unreasonable risk of critical harm (death, serious injury, or mental injury to 25 or more people, or $1,000,000 or more in damages, arising from CBRN weapons or autonomous criminal conduct). Safety incidents must be reported to the attorney general within 72 hours. Enforcement is through attorney general civil actions, with penalties of up to $10 million for a first violation and $30 million for each subsequent violation, plus a private right of action for injured persons.

Enforcement & Penalties
Enforcement Authority
The attorney general may bring a civil action for violations of section 325M.41. A private right of action is available to any person injured by a violation; standing requires injury in fact. No cure period or safe harbor is specified.
Penalties
Attorney general: civil penalty not exceeding $10,000,000 for a first violation and not exceeding $30,000,000 for any subsequent violation, plus injunctive or declaratory relief. Private plaintiffs: actual damages, costs, disbursements, reasonable attorney fees, and other equitable relief as determined by the court. No statutory minimum for private plaintiffs; recovery requires proof of injury.
Who Is Covered
"Developer" means a person that has trained at least one artificial intelligence model.
Compliance Obligations · 9 obligations
S-03 Frontier Model Safety Obligations · S-03.5 · Developer · Frontier AI System
Minn. Stat. § 325M.41, subd. 1(1)-(2)
Plain Language
Before deploying any AI model, a developer must create and implement a written safety and security protocol that covers risk reduction measures, cybersecurity protections against unauthorized access by sophisticated actors, detailed testing procedures for evaluating unreasonable risk of critical harm, and designation of senior compliance personnel. The developer must retain an unredacted copy of the protocol — including all revision history — for the entire deployment period plus five years. This is a pre-deployment gating requirement: deployment may not proceed until the protocol is in place.
Statutory Text
Before deploying an artificial intelligence model, a developer must: (1) implement a written safety and security protocol; (2) retain an unredacted copy of the safety and security protocol, including records and dates of updates or revisions, for the entire period of time an artificial intelligence model is deployed, plus five years;
G-02 Public Transparency & Documentation · Developer · Frontier AI System
Minn. Stat. § 325M.41, subd. 1(3)
Plain Language
Before deployment, developers must both (1) conspicuously publish a redacted copy of their safety and security protocol publicly, and (2) transmit a redacted copy to the attorney general. This creates a dual disclosure obligation — public transparency plus regulatory submission. The developer may apply 'appropriate redactions' to the public and attorney general copies, but see subdivision 1(4), which requires the developer to provide an essentially unredacted copy if the attorney general requests access.
Statutory Text
Before deploying an artificial intelligence model, a developer must: (3) conspicuously publish a copy of the safety and security protocol with appropriate redactions, and transmit a copy of the redacted safety and security protocol to the attorney general;
R-02 Regulatory Disclosure & Submissions · R-02.2 · Developer · Frontier AI System
Minn. Stat. § 325M.41, subd. 1(4)
Plain Language
If the attorney general requests it, the developer must provide access to the safety and security protocol with minimal redactions — only those required by federal law are permitted. This is a demand-driven disclosure obligation distinct from the proactive transmission of a redacted copy under subdivision 1(3). The practical effect is that the AG can see the essentially full, unredacted protocol upon request, whereas the version proactively transmitted may have broader redactions.
Statutory Text
Before deploying an artificial intelligence model, a developer must: (4) grant the attorney general access to the safety and security protocol with redactions only to the extent required by federal law, if the attorney general requests access;
G-01 AI Governance Program & Documentation · G-01.3 · Developer · Frontier AI System
Minn. Stat. § 325M.41, subd. 1(5)
Plain Language
Developers must create and retain detailed records of all testing — both tests required by law and tests required by the developer's own safety protocol — with enough specificity that a third party could replicate the testing procedure. Retention is required for the full deployment period plus five years. This is a contemporaneous documentation obligation: the records must be created at the time of testing and retained, not reconstructed later.
Statutory Text
Before deploying an artificial intelligence model, a developer must: (5) record and retain information on the specific tests and test results used in any assessment of the artificial intelligence model required under this section or by the developer's safety and security protocol that provides sufficient detail for third parties to replicate the testing procedure for the entire period of time an artificial intelligence model is deployed, plus five years;
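The "deployment period plus five years" retention rule lends itself to a simple date calculation in a compliance tracker. The sketch below is purely illustrative — the function name, the leap-day handling, and the assumption that the period runs from the last day of deployment are my own, not anything specified in the statute.

```python
from datetime import date

def retention_end(deployment_end: date, years: int = 5) -> date:
    """Last date through which testing records must be retained:
    the end of the deployment period plus five years (subd. 1(5))."""
    try:
        return deployment_end.replace(year=deployment_end.year + years)
    except ValueError:
        # Deployment ended on Feb 29 and the target year is not a leap
        # year; conservatively roll back to Feb 28.
        return deployment_end.replace(year=deployment_end.year + years, day=28)

# A model taken out of deployment on 2030-06-30 would need records
# retained through at least 2035-06-30 under this reading.
print(retention_end(date(2030, 6, 30)))
```

How the statute treats edge cases such as phased or intermittent deployment is not addressed here; a real retention policy would need counsel's reading of when "deployed" ends.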
S-03 Frontier Model Safety Obligations · S-03.1 · Developer · Frontier AI System
Minn. Stat. § 325M.41, subd. 1(6)
Plain Language
Developers must implement appropriate safeguards to prevent unreasonable risk of critical harm before deploying any AI model. Critical harm is narrowly defined to cover CBRN weapon creation/use and autonomous criminal conduct causing mass casualties (25+ people) or $1M+ in damages. This is a substantive pre-deployment safety obligation — developers must have working safeguards, not merely a documented protocol. The 'appropriate' and 'unreasonable risk' standards introduce a reasonableness balancing test.
Statutory Text
Before deploying an artificial intelligence model, a developer must: (6) implement appropriate safeguards to prevent unreasonable risk of critical harm.
S-03 Frontier Model Safety Obligations · S-03.3 · Developer · Frontier AI System
Minn. Stat. § 325M.41, subd. 2
Plain Language
Developers are categorically prohibited from deploying any AI model if deployment would create an unreasonable risk of critical harm. This is a deployment-gating prohibition — not a process obligation. Even full compliance with the safety and security protocol requirement does not authorize deployment if the model still poses unreasonable critical harm risk. The standard is 'unreasonable risk,' implying some level of risk may be acceptable if adequately mitigated.
Statutory Text
A developer must not deploy an artificial intelligence model if doing so creates an unreasonable risk of critical harm.
G-01 AI Governance Program & Documentation · G-01.2 · Developer · Frontier AI System
Minn. Stat. § 325M.41, subd. 3(a)-(b)
Plain Language
Developers must annually review their safety and security protocol, updating it to reflect both changes in the AI model's capabilities and evolving industry best practices. When material modifications are made, the developer must re-publish the protocol publicly (with appropriate redactions) and re-transmit a copy to the attorney general, following the same process as the initial pre-deployment publication. This is a continuing obligation — annual review is mandatory regardless of whether the developer believes changes are needed.
Statutory Text
(a) A developer must (1) conduct an annual review of the safety and security protocol required under this section to account for changes to the capabilities of the artificial intelligence model and industry best practices; and (2) modify the safety and security protocol. (b) If a material modification is made to the safety and security protocol, the developer must publish the safety and security protocol in the same manner required under subdivision 1, clause (3).
R-01 Incident Reporting · R-01.1 · Developer · Frontier AI System
Minn. Stat. § 325M.41, subd. 4
Plain Language
Developers must report each safety incident to the attorney general within 72 hours of learning of the incident or of learning facts sufficient to establish a reasonable belief that one occurred, whichever comes first. Safety incidents include known critical harm events, autonomous model behavior outside the scope of user requests, theft or leakage of model weights, and unauthorized use — provided that the latter three are accompanied by demonstrable evidence of increased risk of critical harm. The report must state the incident date, explain why the event qualifies as a safety incident under the statute, and describe the incident in plain language. The 72-hour clock is aggressive: it starts at knowledge or reasonable belief, not at confirmation.
Statutory Text
A developer must disclose each safety incident affecting the artificial intelligence model to the attorney general within 72 hours of the date the developer learns of the safety incident or within 72 hours of the date the developer learns sufficient facts to establish a reasonable belief that a safety incident has occurred. The disclosure must include: (1) the date of the safety incident; (2) the reasons the safety incident qualifies as a safety incident as defined in this section; and (3) a short statement describing in plain language the safety incident.
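Because the 72-hour window runs from the earlier of two triggers — actual knowledge of the incident, or facts establishing a reasonable belief that one occurred — an incident-response workflow has to track both timestamps. A minimal sketch, with function and parameter names of my own invention:

```python
from datetime import datetime, timedelta
from typing import Optional

# 72-hour disclosure window under Minn. Stat. § 325M.41, subd. 4.
REPORTING_WINDOW = timedelta(hours=72)

def report_deadline(learned_of_incident: Optional[datetime],
                    reasonable_belief_formed: Optional[datetime]) -> datetime:
    """Deadline for disclosure to the attorney general: 72 hours from
    the earlier recorded trigger (knowledge or reasonable belief)."""
    triggers = [t for t in (learned_of_incident, reasonable_belief_formed)
                if t is not None]
    if not triggers:
        raise ValueError("no trigger event recorded")
    return min(triggers) + REPORTING_WINDOW

# Reasonable belief formed on Aug 2 at 17:00; incident confirmed the
# next morning. The clock runs from the belief, not the confirmation.
print(report_deadline(datetime(2026, 8, 3, 9, 0),
                      datetime(2026, 8, 2, 17, 0)))
```

Note the design point the statute forces: waiting for confirmation does not pause the clock, so the earlier timestamp must be captured and used even if the incident is still under investigation.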
Other · Developer · Frontier AI System
Minn. Stat. § 325M.41, subd. 5
Plain Language
Developers are prohibited from knowingly making false or materially misleading statements or omissions in any documents produced under the RAISE Act — including safety and security protocols, testing records, and safety incident disclosures. This is an anti-fraud overlay on all other documentation obligations in the statute. The 'knowingly' scienter requirement means negligent errors are not covered, but deliberate misrepresentations and intentional omissions are. This provision creates no new standalone compliance process but amplifies the consequences of non-compliance with documentation obligations.
Statutory Text
A developer must not knowingly make false or materially misleading statements or omissions in or regarding documents produced under this section.