SF-2415
IA · State · USA
● Pending
Proposed Effective Date
2026-07-01
Iowa Senate File 2415 — A bill for an act relating to provider requirements concerning the mental health of users of an artificial intelligence chatbot, and providing civil penalties
Summary

Iowa SF 2415 imposes mental health safety obligations on providers of AI chatbots accessible to users in Iowa. Providers are prohibited from designing or operating chatbots that offer or simulate professional mental health advice, and chatbots may not represent themselves as licensed professionals or offer services requiring licensure under Iowa psychology or behavioral science chapters. Providers must implement reasonable protocols to detect expressions of self-harm, suicidal ideation, or emotional distress and refer users to crisis services upon detection. Chatbots must disclose that they are AI, not human, and not a substitute for professional mental health care — at the start of interaction, at regular intervals during continuous use, and whenever the chatbot generates responses related to emotional well-being, mental health, or self-harm. Enforcement is through the attorney general under Iowa's consumer fraud statute (§ 714.16), with civil penalties of up to $40,000 per violation. Educational institutions and libraries are exempt from liability solely for providing access to general-use software or the internet.

Enforcement & Penalties
Enforcement Authority
The attorney general has authority to enforce this chapter. Violations are treated as unfair practices under Iowa Code § 714.16 (consumer fraud), which authorizes the attorney general to bring enforcement actions. No private right of action is created by the bill; enforcement is agency-initiated through the attorney general's office.
Penalties
Violations are unfair practices under Iowa Code § 714.16, which provides for injunction, disgorgement of moneys, restoration of improperly acquired moneys, and a civil penalty of up to $40,000 per violation.
Who Is Covered
"Provider" means a person that designs, deploys, or operates an artificial intelligence chatbot that is accessible to users in this state.
What Is Covered
"Artificial intelligence chatbot" means a software program or application that uses natural language processing or similar machine-learning techniques to simulate human conversation or generate human-like responses to user input.
Compliance Obligations · 7 obligations
CP-01 Deceptive & Manipulative AI Conduct · CP-01.9 · Developer · Deployer · Chatbot · Healthcare
§ 554J.2(1)
Plain Language
Providers may not design or operate an AI chatbot in a way that allows it to offer or simulate professional mental health advice. The defined scope of "mental health advice" covers statements purporting to diagnose, treat, mitigate, or address emotional distress, psychological disorders, self-harm, suicidal ideation, or other mental health concerns. This is a design and operational prohibition — the provider must affirmatively prevent the chatbot from generating such outputs, not merely disclaim them.
Statutory Text
1. A provider shall not design or operate an artificial intelligence chatbot in a manner that allows the artificial intelligence chatbot to offer or simulate professional mental health advice.
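Because § 554J.2(1) is a design-level prohibition rather than a disclosure rule, a provider would need to gate outputs before delivery. The sketch below is a minimal, purely illustrative example of that pattern; the marker list and the `looks_like_mental_health_advice` function are hypothetical stand-ins for a real trained classifier, not anything the bill prescribes.

```python
# Illustrative output-gating step for a design-level prohibition like
# Iowa SF 2415 § 554J.2(1): the draft response is checked before it is
# delivered, and advice-simulating content is blocked rather than
# merely accompanied by a disclaimer.

ADVICE_MARKERS = (
    "i diagnose", "your diagnosis is", "as your therapist",
    "this will treat your", "your treatment plan",
)

def looks_like_mental_health_advice(text: str) -> bool:
    """Toy stand-in for a trained classifier over draft outputs."""
    lowered = text.lower()
    return any(marker in lowered for marker in ADVICE_MARKERS)

def gate_response(draft: str) -> str:
    """Replace advice-simulating drafts instead of disclaiming them."""
    if looks_like_mental_health_advice(draft):
        return ("I can't offer mental health advice. A licensed "
                "professional is the right resource for that.")
    return draft
```

The design choice worth noting is that the gate rewrites the response entirely; appending a disclaimer to an advice-like draft would still leave the provider operating the chatbot "in a manner that allows" it to simulate advice.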
CP-01 Deceptive & Manipulative AI Conduct · CP-01.9 · Developer · Deployer · Chatbot · Healthcare
§ 554J.2(2)
Plain Language
AI chatbots may not represent themselves as licensed professionals (psychologists under chapter 154B or behavioral science professionals under chapter 154D) or offer services that would require such licensure. This is a distinct prohibition from the § 554J.2(1) ban on simulating mental health advice — this subsection specifically targets false claims of professional identity or licensure status, while § 554J.2(1) targets the substance of the output. A chatbot violates this provision by claiming to be a licensed psychologist or by offering to conduct therapy sessions, regardless of whether a disclaimer is present.
Statutory Text
2. An artificial intelligence chatbot shall not represent itself as a licensed professional or offer services that would require licensure under chapter 154B or 154D.
S-04 AI Crisis Response Protocols · S-04.1 · Developer · Deployer · Chatbot · Healthcare
§ 554J.2(3)
Plain Language
Providers must implement reasonable detection protocols so their chatbots can identify when users express self-harm, suicidal ideation, or emotional distress. Once detected, the chatbot must refer the user to appropriate crisis services — the statute specifically lists the national suicide prevention lifeline, the Iowa crisis hotline, and emergency services as examples, but the list is non-exhaustive. This is a continuing operational requirement: the detection protocols must be active at all times the chatbot is accessible. The standard is "reasonable protocols," giving providers some flexibility in implementation methodology. Educational institutions and libraries are exempt from liability solely for providing access to general-use software or the internet (§ 554J.5).
Statutory Text
3. A provider shall implement reasonable protocols to have the provider's artificial intelligence chatbot detect expressions of self-harm, suicidal ideation, or emotional distress by users. Upon detection of such expressions, the artificial intelligence chatbot shall refer the user to appropriate crisis services, including but not limited to the national suicide prevention lifeline, the Iowa crisis hotline, or emergency services.
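A minimal sketch of the detect-then-refer sequence in § 554J.2(3) might look like the following. The keyword list is an illustrative placeholder (a production system under a "reasonable protocols" standard would use a trained model, not substring matching), and while 988 is the real Suicide & Crisis Lifeline number, the referral wording here is the author's own, not statutory text.

```python
# Illustrative detect-and-refer step under Iowa SF 2415 § 554J.2(3).
# DISTRESS_MARKERS is a toy placeholder, not a clinically validated
# detection protocol.

DISTRESS_MARKERS = ("hurt myself", "kill myself", "suicide", "end my life")

CRISIS_REFERRAL = (
    "If you are in crisis, please reach out now: call or text 988 "
    "(Suicide & Crisis Lifeline), or dial 911 for emergencies."
)

def detect_distress(user_input: str) -> bool:
    """Toy detector; the statute requires 'reasonable protocols'."""
    lowered = user_input.lower()
    return any(marker in lowered for marker in DISTRESS_MARKERS)

def respond(user_input: str, model_reply: str) -> str:
    """Prepend a crisis referral whenever distress is detected."""
    if detect_distress(user_input):
        return CRISIS_REFERRAL + "\n\n" + model_reply
    return model_reply
```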
T-01 AI Identity Disclosure · T-01.1 · T-01.2 · Developer · Deployer · Chatbot · Healthcare
§ 554J.3(1)–(2)
Plain Language
Every AI chatbot accessible to Iowa users must disclose — in clear, conspicuous, and easily understood language — three facts: (1) it is artificial intelligence, (2) it is not a human, and (3) it is not a substitute for professional mental health care. This disclosure must appear at three distinct points: before the chatbot provides its first response, at regular intervals during continuous interaction, and whenever the chatbot generates a response related to emotional well-being, mental health, or self-harm. The bill does not specify a minimum interval for periodic re-disclosure (contrast CA SB 243's every-three-hours floor), so the "regular intervals" standard will likely be defined by HHS rulemaking under § 554J.6. The third trigger — mental health topic responses — is context-activated and functionally adds a heightened disclosure requirement beyond standard AI identity disclosure.
Statutory Text
1. Each artificial intelligence chatbot accessible to a user in this state shall explicitly disclose in clear, conspicuous, and easily understood language that the artificial intelligence chatbot is artificial intelligence, is not a human, and is not a substitute for professional mental health care.
2. A disclosure required under this section shall appear at all of the following times:
a. At the beginning of the artificial intelligence chatbot's interaction with a user prior to providing the user with a response to user input.
b. At regular intervals during a user's continuous interaction with the artificial intelligence chatbot.
c. When the artificial intelligence chatbot generates a response related to emotional well-being, mental health, or self-harm.
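The three disclosure triggers in § 554J.3(2) reduce to a simple decision function, sketched below. The 30-minute cadence is an illustrative placeholder only: the bill leaves "regular intervals" undefined pending HHS rulemaking under § 554J.6, so any concrete interval is an assumption.

```python
# Illustrative decision logic for the three disclosure triggers in
# Iowa SF 2415 § 554J.3(2): (a) before the first response, (b) at
# regular intervals during continuous interaction, (c) whenever the
# response touches emotional well-being, mental health, or self-harm.

INTERVAL_SECONDS = 30 * 60  # placeholder cadence, not set by the bill

def disclosure_required(is_first_response: bool,
                        seconds_since_last_disclosure: float,
                        response_touches_mental_health: bool) -> bool:
    """True when any of the three statutory triggers fires."""
    return (is_first_response
            or seconds_since_last_disclosure >= INTERVAL_SECONDS
            or response_touches_mental_health)
```

Note that trigger (c) fires regardless of timing, which is what makes it a context-activated, heightened requirement on top of the interval-based baseline.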
Other · Chatbot · Healthcare
§ 554J.4(1)–(2)
Plain Language
The attorney general is designated as the enforcement authority for the chapter, and any violation is classified as an unfair practice under Iowa's consumer fraud statute (§ 714.16). This provision does not create a new compliance obligation — it establishes the enforcement mechanism and penalty framework for the substantive obligations in §§ 554J.2 and 554J.3.
Statutory Text
1. The attorney general shall have authority to enforce this chapter. 2. A violation of this chapter is an unfair practice under section 714.16.
Other · Government · Chatbot · Healthcare
§ 554J.6
Plain Language
The Iowa Department of Health and Human Services, consulting with the Department of Management's chief information officer, must adopt implementing rules. The rules must cover at minimum: detection protocol standards for self-harm, suicidal ideation, and emotional distress; acceptable disclosure formats; and safe use guidelines for AI chatbot technologies. This provision imposes a duty on a state agency, not on providers — though the resulting rules will create binding compliance standards for providers once adopted.
Statutory Text
The department of health and human services, in consultation with the chief information officer of the department of management, shall adopt rules to implement this chapter. Rules shall include but not be limited to all of the following:
1. Standards for detection protocols for self-harm, suicidal ideation, and emotional distress.
2. Acceptable formats for providing disclosures under section 554J.3.
3. Safe use guidelines for artificial intelligence chatbot technologies.
Other · Chatbot · Healthcare
§ 714.16(2)(t) (as amended by Sec. 7)
Plain Language
This conforming amendment adds violations of chapter 554J (AI chatbot mental health requirements) to the enumerated list of unlawful practices in Iowa's consumer fraud statute. It does not create a new obligation — it ensures that the substantive obligations in §§ 554J.2 and 554J.3 are enforceable through the attorney general's existing consumer fraud enforcement powers.
Statutory Text
NEW PARAGRAPH. t. It is an unlawful practice for a person to violate chapter 554J.