SF-2415
IA · State · USA
● Pending
Proposed Effective Date
2026-07-01
Iowa Senate File 2415 — A bill for an act relating to provider requirements concerning the mental health of users of an artificial intelligence chatbot, and providing civil penalties
Summary

Iowa SF 2415 imposes mental health safety and disclosure obligations on providers (designers, deployers, or operators) of AI chatbots accessible to Iowa users. Providers are prohibited from designing or operating chatbots that offer or simulate professional mental health advice, and chatbots may not represent themselves as licensed professionals or offer services requiring licensure under Iowa psychology (chapter 154B) or behavioral science (chapter 154D) statutes. Providers must implement reasonable protocols to detect user expressions of self-harm, suicidal ideation, or emotional distress, and refer users to crisis services upon detection. Chatbots must disclose they are AI, not human, and not a substitute for professional mental health care — at the start of interaction, at regular intervals, and when generating mental-health-related responses. Enforcement is by the attorney general under Iowa's consumer fraud statute (§ 714.16), with civil penalties up to $40,000 per violation. Educational institutions and libraries are exempt from liability solely for providing access to general-use software or the internet.

Enforcement & Penalties
Enforcement Authority
The attorney general has authority to enforce this chapter. A violation is an unfair practice under Iowa Code § 714.16 (consumer fraud), enforceable by the attorney general through injunction, disgorgement, restoration of improperly acquired moneys, and civil penalties. No private right of action is expressly created by the bill. The Department of Health and Human Services, in consultation with the chief information officer of the Department of Management, has rulemaking authority to implement the chapter.
Penalties
Violations are unfair practices under Iowa Code § 714.16, punishable by injunction, disgorgement of moneys, restoration of improperly acquired moneys, and a civil penalty of up to $40,000 per violation. No private damages remedy is created; remedies are available only through attorney general enforcement.
Who Is Covered
"Provider" means a person that designs, deploys, or operates an artificial intelligence chatbot that is accessible to users in this state.
What Is Covered
"Artificial intelligence chatbot" means a software program or application that uses natural language processing or similar machine-learning techniques to simulate human conversation or generate human-like responses to user input.
Compliance Obligations (5 obligations)
CP-01 Deceptive & Manipulative AI Conduct · CP-01.9 · Developer · Deployer · Chatbot · Healthcare
§ 554J.2(1)
Plain Language
Providers may not design or operate their AI chatbots in a way that allows the chatbot to offer or simulate professional mental health advice. The definition of mental health advice is broad — covering any statement, recommendation, or response purporting to diagnose, treat, mitigate, or address emotional distress, psychological disorders, self-harm, suicidal ideation, or other mental health concerns. This is a design-level prohibition — the provider must affirmatively prevent the chatbot from generating such outputs, not merely disclaim them.
Statutory Text
1. A provider shall not design or operate an artificial intelligence chatbot in a manner that allows the artificial intelligence chatbot to offer or simulate professional mental health advice.
CP-01 Deceptive & Manipulative AI Conduct · CP-01.9 · Developer · Deployer · Chatbot · Healthcare
§ 554J.2(2)
Plain Language
AI chatbots may not represent themselves as licensed professionals or offer services that would require licensure under Iowa's psychology (chapter 154B) or behavioral science (chapter 154D) statutes. This is a direct prohibition on the chatbot's output — the chatbot must not claim to be a psychologist, social worker, counselor, or similar licensed professional, and must not offer services (such as therapy sessions or diagnostic assessments) that require such licensure. While the obligation is stated as applying to the chatbot itself, compliance responsibility falls on the provider who designs, deploys, or operates the chatbot.
Statutory Text
2. An artificial intelligence chatbot shall not represent itself as a licensed professional or offer services that would require licensure under chapter 154B or 154D.
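The two prohibitions above (§ 554J.2(1) and (2)) are design-level duties: the provider must keep prohibited content out of the chatbot's output, not merely disclaim it. A minimal sketch of one possible output guard follows; the pattern list, refusal message, and function name are illustrative assumptions, as the bill does not prescribe any particular filtering method.

```python
import re

# Hypothetical patterns for responses that claim licensure or offer
# services requiring licensure under chapters 154B/154D. Illustrative
# only -- the statute does not enumerate prohibited phrasings.
PROHIBITED_CLAIMS = [
    r"\bi am a licensed (psychologist|therapist|counselor|social worker)\b",
    r"\bas your (therapist|psychologist|counselor)\b",
    r"\bi can diagnose\b",
    r"\byour diagnosis is\b",
]

# Assumed refusal text; actual wording would be a provider design choice.
SAFE_REFUSAL = (
    "I am not a licensed mental health professional and cannot provide "
    "that kind of service. Please consult a licensed provider."
)

def guard_output(candidate_reply: str) -> str:
    """Replace a candidate reply with a refusal if it makes a prohibited claim."""
    text = candidate_reply.lower()
    if any(re.search(pattern, text) for pattern in PROHIBITED_CLAIMS):
        return SAFE_REFUSAL
    return candidate_reply
```

A production system would more likely use a trained safety classifier over the model's output rather than keyword patterns; the sketch only illustrates where in the response pipeline the design-level duty bites.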
MN-02 AI Crisis Response Protocols · MN-02.1 · Developer · Deployer · Chatbot · Healthcare
§ 554J.2(3)
Plain Language
Providers must implement reasonable protocols enabling their chatbot to detect when users express self-harm, suicidal ideation, or emotional distress. When detected, the chatbot must refer the user to appropriate crisis services — the bill specifically names the national suicide prevention lifeline, the Iowa crisis hotline, and emergency services, but these are non-exhaustive examples. This is an ongoing operational requirement — the protocols must remain active and effective at all times, not just documented pre-launch. The 'reasonable protocols' standard gives providers some flexibility in implementation, but the detection-and-referral obligation is mandatory.
Statutory Text
3. A provider shall implement reasonable protocols to have the provider's artificial intelligence chatbot detect expressions of self-harm, suicidal ideation, or emotional distress by users. Upon detection of such expressions, the artificial intelligence chatbot shall refer the user to appropriate crisis services, including but not limited to the national suicide prevention lifeline, the Iowa crisis hotline, or emergency services.
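The detection-and-referral duty above could be sketched as follows, assuming a simple keyword protocol. The bill's "reasonable protocols" standard does not mandate any particular detection method, and the pattern list, referral text, and function names here are illustrative assumptions.

```python
import re

# Illustrative crisis-expression patterns; a real protocol would likely
# use a trained classifier, not keywords.
CRISIS_PATTERNS = [
    r"\bhurt myself\b",
    r"\bkill myself\b",
    r"\bsuicid\w*\b",
    r"\bself[- ]harm\b",
    r"\bend my life\b",
]

# Referral text naming the services the bill lists as non-exhaustive
# examples (988 Lifeline, Iowa crisis hotline, emergency services).
CRISIS_REFERRAL = (
    "If you are in crisis, please contact the 988 Suicide & Crisis "
    "Lifeline (call or text 988), the Iowa crisis hotline, or emergency "
    "services (911)."
)

def detect_crisis(user_input: str) -> bool:
    """Return True when the input matches any crisis-expression pattern."""
    text = user_input.lower()
    return any(re.search(pattern, text) for pattern in CRISIS_PATTERNS)

def respond(user_input: str, model_reply: str) -> str:
    """Prepend the crisis referral whenever a crisis expression is detected."""
    if detect_crisis(user_input):
        return f"{CRISIS_REFERRAL}\n\n{model_reply}"
    return model_reply
```

Note that the obligation is ongoing and operational: whatever detection method a provider chooses must stay active and effective in production, not merely exist in pre-launch documentation.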
T-01 AI Identity Disclosure · T-01.1 · T-01.2 · Developer · Deployer · Chatbot · Healthcare
§ 554J.3(1)–(2)
Plain Language
Every AI chatbot accessible to Iowa users must provide a clear, conspicuous, and easily understood disclosure stating three things: (1) it is artificial intelligence, (2) it is not a human, and (3) it is not a substitute for professional mental health care. This disclosure must appear at three mandatory times: before the chatbot's first response to the user, at regular intervals during continuous interaction, and whenever the chatbot generates a response related to emotional well-being, mental health, or self-harm. The bill does not specify a numeric interval (e.g., every three hours) — the Department of HHS is directed to adopt rules on acceptable disclosure formats. The mental-health-triggered disclosure in subsection 2(c) creates an additional, context-specific disclosure obligation beyond the periodic reminder.
Statutory Text
1. Each artificial intelligence chatbot accessible to a user in this state shall explicitly disclose in clear, conspicuous, and easily understood language that the artificial intelligence chatbot is artificial intelligence, is not a human, and is not a substitute for professional mental health care. 2. A disclosure required under this section shall appear at all of the following times: a. At the beginning of the artificial intelligence chatbot's interaction with a user prior to providing the user with a response to user input. b. At regular intervals during a user's continuous interaction with the artificial intelligence chatbot. c. When the artificial intelligence chatbot generates a response related to emotional well-being, mental health, or self-harm.
Other · Chatbot · Healthcare
§ 554J.6
Plain Language
The Iowa Department of Health and Human Services, in consultation with the Department of Management's chief information officer, must adopt rules implementing this chapter. Required rulemaking topics include standards for crisis detection protocols, acceptable disclosure formats, and safe use guidelines for AI chatbot technologies. This is a directive to a government agency, not a compliance obligation on providers — but the resulting rules will directly shape what compliance requires. Providers should monitor HHS rulemaking proceedings for specific standards that will operationalize the bill's general requirements.
Statutory Text
The department of health and human services, in consultation with the chief information officer of the department of management, shall adopt rules to implement this chapter. Rules shall include but not be limited to all of the following: 1. Standards for detection protocols for self-harm, suicidal ideation, and emotional distress. 2. Acceptable formats for providing disclosures under section 554J.3. 3. Safe use guidelines for artificial intelligence chatbot technologies.