The RAISE Act imposes safety, transparency, and incident-reporting obligations on large developers of frontier AI models. Before deploying a covered model, a developer must implement, publish, and transmit to the attorney general a written safety and security protocol describing its safeguards against critical harm, its cybersecurity measures, and its testing procedures. Developers may not deploy a model that creates an unreasonable risk of critical harm, defined as the death or serious injury of 100 or more people, or at least $1 billion in damages, caused by a CBRN weapon or by a model engaging in autonomous criminal conduct. Developers must review the protocol annually, report safety incidents to the attorney general within 72 hours, and retain testing records for the period of deployment plus five years. Enforcement runs through the attorney general, with civil penalties of up to $10 million for a first violation and $30 million for subsequent violations, alongside a private right of action for injured persons.