HF-4532
MN · State · USA
● Pending
Proposed Effective Date
2026-01-01
Minnesota HF 4532 — Responsible Artificial Intelligence Safety and Education Act (RAISE Act)
Summary

The RAISE Act imposes safety and transparency obligations on developers of AI models. Developers must implement, publish, and annually review a safety and security protocol before deploying any AI model, and must retain unredacted copies plus testing records for the deployment period plus five years. Developers are prohibited from deploying models that create an unreasonable risk of critical harm. Safety incidents must be reported to the attorney general within 72 hours, and developers may not make false or misleading statements in required documentation. Enforcement is through both attorney general civil actions (up to $10M/$30M penalties) and a private right of action for injured persons.

Enforcement & Penalties
Enforcement Authority
The attorney general may bring a civil action for violations of § 325M.41. A private right of action is available to any person injured by a violation; injury is required for private plaintiffs. No cure period or safe harbor is specified.
Penalties
Attorney general enforcement: civil penalty up to $10,000,000 for a first violation and up to $30,000,000 for subsequent violations, plus injunctive or declaratory relief. Private right of action: actual damages, costs, disbursements, reasonable attorney fees, and other equitable relief as determined by the court. No statutory minimum damages for private plaintiffs; recovery requires proof of injury.
Who Is Covered
"Developer" means a person that has trained at least one artificial intelligence model.
Compliance Obligations · 5 obligations
S-03 Frontier Model Safety Obligations · S-03.5 · Developer · Frontier AI System
§ 325M.41, subd. 1(1)-(6)
Plain Language
Before deploying any AI model, developers must create and implement a written safety and security protocol covering risk reduction measures, cybersecurity protections, detailed testing procedures, and designation of responsible senior personnel. Developers must publicly publish a redacted version and transmit a copy to the attorney general, retain the unredacted version plus all testing records for the deployment period plus five years, grant the AG access to the unredacted protocol upon request (with redactions only as required by federal law), and implement safeguards against unreasonable risk of critical harm. This is a comprehensive pre-deployment gate — no model may be deployed without these steps completed.
Statutory Text
Before deploying an artificial intelligence model, a developer must: (1) implement a written safety and security protocol; (2) retain an unredacted copy of the safety and security protocol, including records and dates of updates or revisions, for the entire period of time an artificial intelligence model is deployed, plus five years; (3) conspicuously publish a copy of the safety and security protocol with appropriate redactions, and transmit a copy of the redacted safety and security protocol to the attorney general; (4) grant the attorney general access to the safety and security protocol with redactions only to the extent required by federal law, if the attorney general requests access; (5) record and retain information on the specific tests and test results used in any assessment of the artificial intelligence model required under this section or by the developer's safety and security protocol that provides sufficient detail for third parties to replicate the testing procedure for the entire period of time an artificial intelligence model is deployed, plus five years; and (6) implement appropriate safeguards to prevent unreasonable risk of critical harm.
S-03 Frontier Model Safety Obligations · S-03.3 · Developer · Frontier AI System
§ 325M.41, subd. 2
Plain Language
Developers are categorically prohibited from deploying any AI model that creates an unreasonable risk of critical harm. Critical harm covers CBRN weapon creation/use and autonomous criminal conduct resulting in death, serious injury, or mental injury to 25+ people, or $1M+ in property/monetary damages. This is a hard deployment prohibition — no compliance program or safety protocol can cure it if the unreasonable risk exists.
Statutory Text
A developer must not deploy an artificial intelligence model if doing so creates an unreasonable risk of critical harm.
G-01 AI Governance Program & Documentation · G-01.2 · Developer · Frontier AI System
§ 325M.41, subd. 3(a)-(b)
Plain Language
Developers must annually review and update their safety and security protocol to reflect changes in model capabilities and evolving industry best practices. If a material modification results from the review, the developer must re-publish the updated protocol (with appropriate redactions) and re-transmit it to the attorney general, following the same publication requirements as the initial deployment. This is not a one-time exercise — it is a continuing annual obligation.
Statutory Text
(a) A developer must (1) conduct an annual review of the safety and security protocol required under this section to account for changes to the capabilities of the artificial intelligence model and industry best practices; and (2) modify the safety and security protocol. (b) If a material modification is made to the safety and security protocol, the developer must publish the safety and security protocol in the same manner required under subdivision 1, clause (3).
R-01 Incident Reporting · R-01.1 · Developer · Frontier AI System
§ 325M.41, subd. 4
Plain Language
Developers must report every safety incident to the attorney general within 72 hours of learning of the incident, or within 72 hours of learning facts sufficient to establish a reasonable belief that one occurred. The report must include the date, the reasons the event qualifies as a safety incident under the statute, and a plain-language description. A safety incident includes actual critical harm events as well as precursor events (autonomous model behavior, model weight theft or unauthorized access, and unauthorized use) that provide demonstrable evidence of increased critical harm risk. The 72-hour clock runs from the earlier of actual knowledge and reasonable belief.
Statutory Text
A developer must disclose each safety incident affecting the artificial intelligence model to the attorney general within 72 hours of the date the developer learns of the safety incident or within 72 hours of the date the developer learns sufficient facts to establish a reasonable belief that a safety incident has occurred. The disclosure must include: (1) the date of the safety incident; (2) the reasons the safety incident qualifies as a safety incident as defined in this section; and (3) a short statement describing in plain language the safety incident.
Other · Developer · Frontier AI System
§ 325M.41, subd. 5
Plain Language
Developers are prohibited from knowingly making false or materially misleading statements or omissions in any document produced under the RAISE Act, including the safety and security protocol, testing records, and safety incident reports. This is a knowledge-based standard — negligent inaccuracies are not covered, but deliberate falsehoods and material omissions are. This operates as an anti-fraud backstop for the entire documentation framework rather than an independent compliance obligation.
Statutory Text
A developer must not knowingly make false or materially misleading statements or omissions in or regarding documents produced under this section.