SF-4509
MN · State · USA
Status: Pre-filed
Proposed Effective Date: 2026-08-01
Minnesota S.F. No. 4509 — Responsible Artificial Intelligence Safety and Education Act (RAISE Act)
Summary

The RAISE Act imposes safety and transparency obligations on developers of AI models in Minnesota. Developers must implement and publish a written safety and security protocol before deploying any AI model, conduct annual reviews, retain detailed testing records for the deployment period plus five years, and report safety incidents to the attorney general within 72 hours. Deployment is prohibited if it creates an unreasonable risk of critical harm, defined as death, serious injury, or mental injury to 25+ people or $1M+ in damages resulting from CBRN weapon creation or autonomous criminal conduct. Enforcement is by the attorney general (up to $10M first violation, $30M subsequent) and by private right of action for injured persons.

Enforcement & Penalties
Enforcement Authority
The attorney general may bring a civil action for violations of section 325M.41. A private right of action is available to any person injured by a violation; private plaintiffs must demonstrate injury. No cure period or safe harbor is specified.
Penalties
Attorney general may recover civil penalties up to $10,000,000 for a first violation and up to $30,000,000 for subsequent violations, plus injunctive or declaratory relief. Private plaintiffs may recover actual damages, costs, disbursements, reasonable attorney fees, and other equitable relief as determined by the court. Private action requires injury — no statutory minimum damages are specified for private plaintiffs.
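To make the two-tier cap structure concrete, a minimal Python sketch follows; the constant and function names are hypothetical illustrations, not terms from the bill:

```python
# Illustrative sketch of the RAISE Act's two-tier civil penalty caps.
# All names are hypothetical; the statute sets caps, not a formula.

FIRST_VIOLATION_CAP = 10_000_000       # up to $10,000,000 for a first violation
SUBSEQUENT_VIOLATION_CAP = 30_000_000  # up to $30,000,000 for each subsequent violation

def penalty_cap(prior_violations: int) -> int:
    """Maximum civil penalty the attorney general may seek for the next violation."""
    return FIRST_VIOLATION_CAP if prior_violations == 0 else SUBSEQUENT_VIOLATION_CAP
```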
Who Is Covered
"Developer" means a person that has trained at least one artificial intelligence model.
Compliance Obligations · 5 obligations
S-03 Frontier Model Safety Obligations · S-03.5 · Developer · Frontier AI System
Minn. Stat. § 325M.41, subd. 1(1)-(6)
Plain Language
Before deploying any AI model, a developer must write and implement a comprehensive safety and security protocol covering risk reduction measures, cybersecurity protections, and detailed testing procedures. The protocol must designate senior personnel responsible for compliance. Developers must publicly publish an appropriately redacted version, transmit a copy to the attorney general, and grant the AG access to the less-redacted version upon request (with redactions limited to those required by federal law). All testing records must be detailed enough for third-party replication and retained for the deployment period plus five years. Developers must also implement safeguards to prevent unreasonable risk of critical harm. This is a comprehensive pre-deployment gating obligation — no model may be deployed until all six requirements are satisfied.
Statutory Text
Before deploying an artificial intelligence model, a developer must: (1) implement a written safety and security protocol; (2) retain an unredacted copy of the safety and security protocol, including records and dates of updates or revisions, for the entire period of time an artificial intelligence model is deployed, plus five years; (3) conspicuously publish a copy of the safety and security protocol with appropriate redactions, and transmit a copy of the redacted safety and security protocol to the attorney general; (4) grant the attorney general access to the safety and security protocol with redactions only to the extent required by federal law, if the attorney general requests access; (5) record and retain information on the specific tests and test results used in any assessment of the artificial intelligence model required under this section or by the developer's safety and security protocol that provides sufficient detail for third parties to replicate the testing procedure for the entire period of time an artificial intelligence model is deployed, plus five years; and (6) implement appropriate safeguards to prevent unreasonable risk of critical harm.
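To illustrate how the six requirements operate as a single deployment gate, here is a minimal sketch of an internal compliance checklist; the class and field names are hypothetical, not statutory terms:

```python
from dataclasses import dataclass, fields

@dataclass
class PreDeploymentChecklist:
    """Hypothetical tracker for the six requirements of subd. 1."""
    protocol_implemented: bool      # (1) written safety and security protocol
    unredacted_copy_retained: bool  # (2) retained for deployment period plus five years
    redacted_copy_published: bool   # (3) published and transmitted to the attorney general
    ag_access_granted: bool         # (4) AG access, redactions only as federal law requires
    test_records_replicable: bool   # (5) records detailed enough for third-party replication
    harm_safeguards_in_place: bool  # (6) safeguards against unreasonable risk of critical harm

    def may_deploy(self) -> bool:
        # Deployment is gated on every requirement being satisfied.
        return all(getattr(self, f.name) for f in fields(self))
```

Note that requirements (2) and (5) also carry a retention tail: the records must survive for the full deployment period plus five years.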
S-03 Frontier Model Safety Obligations · S-03.3 · Developer · Frontier AI System
Minn. Stat. § 325M.41, subd. 2
Plain Language
Developers are categorically prohibited from deploying an AI model if doing so would create an unreasonable risk of critical harm. This is a deployment gate — not a mitigation obligation. If the risk of critical harm is unreasonable, the model may not be deployed at all, regardless of what safeguards are in place. Critical harm is defined by reference to CBRN weapon creation or autonomous criminal conduct causing mass casualties or $1M+ in damages.
Statutory Text
A developer must not deploy an artificial intelligence model if doing so creates an unreasonable risk of critical harm.
G-01 AI Governance Program & Documentation · G-01.2 · Developer · Frontier AI System
Minn. Stat. § 325M.41, subd. 3(a)-(b)
Plain Language
Developers must conduct an annual review of their safety and security protocol, accounting for both changes to the AI model's capabilities and evolving industry best practices. The protocol must be modified as needed following review. If the modifications are material, the developer must re-publish the updated protocol (with appropriate redactions) and transmit a copy to the attorney general — the same publication requirements that apply to the initial protocol. This is a continuing obligation, not a one-time pre-deployment exercise.
Statutory Text
(a) A developer must (1) conduct an annual review of the safety and security protocol required under this section to account for changes to the capabilities of the artificial intelligence model and industry best practices; and (2) modify the safety and security protocol. (b) If a material modification is made to the safety and security protocol, the developer must publish the safety and security protocol in the same manner required under subdivision 1, clause (3).
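A small sketch of the review-and-republish workflow, assuming a simple internal record of review outcomes (all names hypothetical):

```python
from datetime import date, timedelta

REVIEW_INTERVAL = timedelta(days=365)  # annual review cadence

def next_review_due(last_review: date) -> date:
    """Annual review accounting for model capability changes and industry best practices."""
    return last_review + REVIEW_INTERVAL

def post_review_actions(modification_is_material: bool) -> list[str]:
    """Material modifications re-trigger the subdivision 1, clause (3) publication duties."""
    actions = ["record review and any protocol modifications"]
    if modification_is_material:
        actions += ["publish redacted protocol", "transmit redacted copy to attorney general"]
    return actions
```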
R-01 Incident Reporting · R-01.1 · Developer · Frontier AI System
Minn. Stat. § 325M.41, subd. 4
Plain Language
Developers must report every safety incident to the Minnesota attorney general within 72 hours of learning of the incident or learning sufficient facts to reasonably believe one occurred. The report must include the date, an explanation of why the event qualifies as a safety incident, and a plain-language description. Safety incidents include actual critical harm events and events demonstrating increased risk of critical harm — such as autonomous model behavior, model weight theft or leakage, and unauthorized model use. The 72-hour clock starts at actual or constructive knowledge, not at the time of the incident itself.
Statutory Text
A developer must disclose each safety incident affecting the artificial intelligence model to the attorney general within 72 hours of the date the developer learns of the safety incident or within 72 hours of the date the developer learns sufficient facts to establish a reasonable belief that a safety incident has occurred. The disclosure must include: (1) the date of the safety incident; (2) the reasons the safety incident qualifies as a safety incident as defined in this section; and (3) a short statement describing in plain language the safety incident.
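Because the clock runs from knowledge rather than from the incident, a deadline calculation needs the earlier of the two knowledge triggers. A minimal sketch, with hypothetical names:

```python
from datetime import datetime, timedelta

REPORTING_WINDOW = timedelta(hours=72)

def disclosure_deadline(learned_of_incident: datetime | None,
                        learned_sufficient_facts: datetime | None) -> datetime:
    """72-hour disclosure deadline, measured from actual knowledge or from
    facts establishing a reasonable belief that a safety incident occurred,
    whichever comes first."""
    triggers = [t for t in (learned_of_incident, learned_sufficient_facts) if t is not None]
    if not triggers:
        raise ValueError("no knowledge trigger recorded")
    return min(triggers) + REPORTING_WINDOW
```

For example, a developer that learns at 09:00 on June 1 of facts supporting a reasonable belief must disclose by 09:00 on June 4, even if the incident itself occurred weeks earlier.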
Other · Frontier AI System
Minn. Stat. § 325M.41, subd. 5
Plain Language
Developers must not knowingly include false or materially misleading statements or omissions in any documents produced under the Act — including the safety and security protocol, testing records, and safety incident disclosures. This is an anti-fraud provision that backstops all documentation obligations in the statute. It applies to both the content of documents and statements made about those documents.
Statutory Text
A developer must not knowingly make false or materially misleading statements or omissions in or regarding documents produced under this section.