HF-4532
MN · State · USA
● Pending
Proposed Effective Date
2026-01-01
Minnesota H.F. No. 4532 — Responsible Artificial Intelligence Safety and Education Act (RAISE Act)
Summary

The RAISE Act imposes safety, transparency, and incident-reporting obligations on AI model developers. Before deploying any AI model, a developer must implement, publish, and transmit to the attorney general a written safety and security protocol describing protections against critical harm, cybersecurity measures, and testing procedures. Developers are prohibited from deploying models that create an unreasonable risk of critical harm — defined as death, serious injury, or mental injury of 25+ people, or $1M+ in damages from CBRN weapons or autonomous criminal conduct. Developers must conduct annual protocol reviews, report safety incidents to the attorney general within 72 hours, and retain testing records for the deployment period plus five years. Enforcement is through the attorney general (up to $10M/$30M civil penalties) and a private right of action for injured persons.

Enforcement & Penalties
Enforcement Authority
The attorney general may bring a civil action for violations of section 325M.41. A private right of action is available to any person injured by a violation; injury is required for private plaintiffs. No cure period or safe harbor is specified.
Penalties
The attorney general may recover civil penalties of up to $10,000,000 for a first violation and up to $30,000,000 for each subsequent violation, plus injunctive or declaratory relief. Private plaintiffs may recover actual damages, costs, disbursements, reasonable attorney fees, and other equitable relief as determined by the court. There is no statutory minimum for private actions; the plaintiff must show injury.
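The two-tier penalty structure above can be modeled as a simple lookup. This is an illustrative sketch only, not legal guidance; the function name and 1-indexed violation count are assumptions, and the statute sets caps ("up to"), not fixed amounts.

```python
# Illustrative model of the RAISE Act's tiered civil-penalty caps.
# These are statutory maximums a court may award, not fixed fines.

FIRST_VIOLATION_CAP = 10_000_000       # up to $10M for a first violation
SUBSEQUENT_VIOLATION_CAP = 30_000_000  # up to $30M for each subsequent violation

def max_civil_penalty(violation_number: int) -> int:
    """Return the statutory cap (in USD) for the nth violation, 1-indexed."""
    if violation_number < 1:
        raise ValueError("violation_number must be >= 1")
    if violation_number == 1:
        return FIRST_VIOLATION_CAP
    return SUBSEQUENT_VIOLATION_CAP
```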
Who Is Covered
"Developer" means a person that has trained at least one artificial intelligence model.
Compliance Obligations · 5 obligations
S-03 Frontier Model Safety Obligations · S-03.5 · Developer · Frontier AI System
Minn. Stat. § 325M.41, subd. 1(1)-(6)
Plain Language
Before deploying any AI model, a developer must create and implement a written safety and security protocol covering risk-reduction procedures, cybersecurity protections, and detailed testing procedures. The developer must conspicuously publish a redacted version, transmit it to the attorney general, and grant the AG access to the unredacted version (with only federally required redactions) on request. The developer must also retain the unredacted protocol and all test records, in sufficient detail for third-party replication, for the entire deployment period plus five years. Additionally, the developer must implement appropriate safeguards to prevent an unreasonable risk of critical harm. The protocol must designate senior personnel responsible for compliance.
Statutory Text
Before deploying an artificial intelligence model, a developer must: (1) implement a written safety and security protocol; (2) retain an unredacted copy of the safety and security protocol, including records and dates of updates or revisions, for the entire period of time an artificial intelligence model is deployed, plus five years; (3) conspicuously publish a copy of the safety and security protocol with appropriate redactions, and transmit a copy of the redacted safety and security protocol to the attorney general; (4) grant the attorney general access to the safety and security protocol with redactions only to the extent required by federal law, if the attorney general requests access; (5) record and retain information on the specific tests and test results used in any assessment of the artificial intelligence model required under this section or by the developer's safety and security protocol that provides sufficient detail for third parties to replicate the testing procedure for the entire period of time an artificial intelligence model is deployed, plus five years; and (6) implement appropriate safeguards to prevent unreasonable risk of critical harm.
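The "deployment period plus five years" retention rule in clauses (2) and (5) can be sketched as a date calculation. This is a minimal illustration, not legal guidance; the function name and the leap-day fallback are assumptions, since the statute does not specify how the five-year period is computed.

```python
# Illustrative sketch of the records-retention rule in
# Minn. Stat. § 325M.41, subd. 1(2) and 1(5): retain records for the
# entire deployment period, plus five years after deployment ends.
from datetime import date

RETENTION_YEARS = 5

def retention_expiry(deployment_end: date) -> date:
    """Earliest date records may be discarded: deployment end + 5 years."""
    try:
        return deployment_end.replace(year=deployment_end.year + RETENTION_YEARS)
    except ValueError:
        # Feb 29 with a non-leap target year; fall back to Feb 28 (assumption).
        return deployment_end.replace(
            year=deployment_end.year + RETENTION_YEARS, month=2, day=28
        )
```

For example, records for a model taken out of deployment on 2030-06-30 would, under this reading, need to be retained through at least 2035-06-30.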
S-03 Frontier Model Safety Obligations · S-03.3 · Developer · Frontier AI System
Minn. Stat. § 325M.41, subd. 2
Plain Language
Developers are categorically prohibited from deploying an AI model if doing so would create an unreasonable risk of critical harm. Critical harm has a specific statutory definition keyed to CBRN weapons or autonomous criminal conduct causing death, serious injury, or mental injury of 25+ people or $1M+ in property damage. This is a deployment-gating prohibition — not a risk-mitigation obligation. If the risk is unreasonable, the model must not be deployed regardless of what safeguards are in place.
Statutory Text
A developer must not deploy an artificial intelligence model if doing so creates an unreasonable risk of critical harm.
G-01 AI Governance Program & Documentation · G-01.2 · Developer · Frontier AI System
Minn. Stat. § 325M.41, subd. 3(a)-(b)
Plain Language
Developers must annually review and update their safety and security protocol to reflect changes in the AI model's capabilities and evolving industry best practices. If the review results in a material modification, the developer must republish the updated protocol publicly with appropriate redactions and transmit a copy to the attorney general — the same dual-publication obligation that applies at initial deployment. The annual review is mandatory, as is modification — the statute says 'modify,' not 'modify if necessary,' suggesting continuous improvement is expected.
Statutory Text
(a) A developer must (1) conduct an annual review of the safety and security protocol required under this section to account for changes to the capabilities of the artificial intelligence model and industry best practices; and (2) modify the safety and security protocol. (b) If a material modification is made to the safety and security protocol, the developer must publish the safety and security protocol in the same manner required under subdivision 1, clause (3).
R-01 Incident Reporting · R-01.1 · Developer · Frontier AI System
Minn. Stat. § 325M.41, subd. 4
Plain Language
Developers must report every safety incident to the attorney general within 72 hours of learning of it or of learning enough facts to form a reasonable belief one occurred. The report must include the date, the statutory basis for classifying it as a safety incident, and a plain-language description. Safety incidents include actual critical harm events as well as precursor events — autonomous model behavior, model weight theft or unauthorized access, and unauthorized model use — if they provide demonstrable evidence of increased critical-harm risk. The 72-hour clock starts at knowledge or reasonable belief, whichever is earlier.
Statutory Text
A developer must disclose each safety incident affecting the artificial intelligence model to the attorney general within 72 hours of the date the developer learns of the safety incident or within 72 hours of the date the developer learns sufficient facts to establish a reasonable belief that a safety incident has occurred. The disclosure must include: (1) the date of the safety incident; (2) the reasons the safety incident qualifies as a safety incident as defined in this section; and (3) a short statement describing in plain language the safety incident.
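The 72-hour clock described above runs from the earlier of actual knowledge or reasonable belief. A minimal sketch of that deadline computation, assuming hypothetical trigger timestamps (this is an illustration, not legal guidance):

```python
# Hedged sketch of the 72-hour disclosure window in
# Minn. Stat. § 325M.41, subd. 4. Function and parameter names are
# illustrative assumptions, not statutory terms.
from datetime import datetime, timedelta
from typing import Optional

DISCLOSURE_WINDOW = timedelta(hours=72)

def disclosure_deadline(learned_of_incident: Optional[datetime],
                        formed_reasonable_belief: Optional[datetime]) -> datetime:
    """Deadline = the earlier trigger (knowledge or reasonable belief) + 72 hours."""
    triggers = [t for t in (learned_of_incident, formed_reasonable_belief)
                if t is not None]
    if not triggers:
        raise ValueError("no trigger event has occurred yet")
    return min(triggers) + DISCLOSURE_WINDOW
```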
G-01 AI Governance Program & Documentation · Developer · Frontier AI System
Minn. Stat. § 325M.41, subd. 5
Plain Language
Developers must not knowingly include false or materially misleading statements or omissions in any documents produced under the RAISE Act — including the safety and security protocol, test records, and safety incident disclosures. This is a truthfulness-in-reporting obligation that applies to all documents the statute requires the developer to create, retain, publish, or submit to the attorney general. The 'knowingly' scienter requirement means negligent misstatements are not covered, but deliberate misrepresentation or deliberate omission of material facts is prohibited.
Statutory Text
A developer must not knowingly make false or materially misleading statements or omissions in or regarding documents produced under this section.