HF-2715
IA · State · USA
● Pending
Proposed Effective Date
2025-07-01
Iowa House File 2715 — A bill for an act relating to chatbots, including deployer requirements and interactions with minors
Summary

Iowa HF 2715 imposes safety and disclosure obligations on deployers of public-facing chatbots, AI companions, and therapeutic chatbots available to Iowa users. Deployers must maintain harm-mitigation protocols, limit user data collection to what is necessary, disclose the chatbot's AI nature and lack of professional licensure at the start of each interaction and every three hours of continuous use, and implement suicide and self-harm crisis response protocols. Heightened requirements apply to AI companions and therapeutic chatbots when minors are involved, including commercially reasonable age-determination measures and parental notification for self-harm prompts. Therapeutic chatbots may only be made available to minors under strict conditions including a licensed professional's recommendation and peer-reviewed clinical trial data. Enforcement is exclusively by the attorney general, with civil penalties up to $2,500 per violation ($7,500 for injunction violations) and a 30-day cure period except where imminent harm to a minor is at stake. A safe harbor protects deployers making commercially reasonable compliance efforts from liability for unforeseeable or emergent outputs.

Enforcement & Penalties
Enforcement Authority
The attorney general may bring an action on behalf of the state to enforce the chapter and may seek an injunction for violations. Prior to initiating a proceeding to obtain a civil penalty, the attorney general must notify the person in violation and give the person thirty calendar days to cure the violation. The cure period does not apply if a violation will cause imminent harm to a minor. No private right of action is created.
Penalties
A court may impose a civil penalty of not more than $2,500 for each violation, or $7,500 for each violation of an injunction issued under the chapter. The attorney general may also seek injunctive relief. Penalties are deposited into the general fund of the state.
Who Is Covered
"Deployer" means a person that makes an AI companion, a public-facing chatbot, or a therapeutic chatbot available to users in this state.
What Is Covered
"Chatbot" means artificial intelligence that is described by all of the following: (1) The artificial intelligence accepts open-ended, natural-language, or multimodal user input and produces adaptive or context-responsive output. (2) The artificial intelligence produces new expressive content or responses that were not fully predetermined by the person who created or who operates the artificial intelligence. "Chatbot" does not include a service limited to internal business operations or a service requiring user authentication through an employer, an educational institution, or a similar organization.
"Public-facing chatbot" means a chatbot intentionally made available to the general public or marketed directly to consumers for independent use without the ongoing supervision of the deployer or an institutional consumer. "Public-facing chatbot" does not include any of the following: (1) Software designed primarily for internal business operations. (2) Enterprise software licensed to a specific business, nonprofit organization, or governmental entity. (3) Chatbots used solely within the context of an existing customer relationship. (4) Systems requiring authentication through an employer, educational institution, health care provider, or similar organization prior to use.
"AI companion" means a public-facing chatbot designed to simulate a human-like romantic or emotional bond.
"Therapeutic chatbot" means a public-facing chatbot that is designed for the primary purpose of providing mental health support, counseling, or therapy by diagnosing, treating, mitigating, or preventing a mental health condition.
Compliance Obligations · 8 obligations
S-01 AI System Safety Program · S-01.4 · S-01.5 · Deployer · Chatbot
§ 554J.2(1)(a)
Plain Language
Deployers of public-facing chatbots must implement and maintain protocols to detect, respond to, report, and mitigate harms the chatbot may cause users. The standard is commercially reasonable — proportionate to the deployer's size, resources, and technical capabilities and consistent with prevailing industry standards. This is a continuing operational obligation, not a one-time pre-launch check. A deployer making commercially reasonable efforts to comply is protected from liability for unforeseeable or emergent outputs under the safe harbor provision in § 554J.5.
Statutory Text
A deployer of a public-facing chatbot shall do all of the following: a. Implement and maintain protocols meant to detect, respond to, report, and mitigate harm the public-facing chatbot may cause a user in a manner that takes commercially reasonable steps to protect the safety and well-being of users.
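By way of illustration, a minimal Python sketch of one way a detect, respond, report, and mitigate loop could be wired together appears below. Everything in it (the HARM_PATTERNS rules, the HarmSignal record, the logging-based reporting) is an invented assumption for this sketch; the bill prescribes no particular technique, and a production system would use trained classifiers rather than regex rules.

import re
import logging
from dataclasses import dataclass

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("harm-protocol")

# Invented harm taxonomy; the bill does not enumerate categories.
HARM_PATTERNS = {
    "self_harm": re.compile(r"\b(hurt myself|end it all)\b", re.I),
    "violence": re.compile(r"\bhurt (him|her|them)\b", re.I),
}

@dataclass
class HarmSignal:
    category: str
    user_id: str
    excerpt: str

def detect(user_id: str, text: str) -> list[HarmSignal]:
    """Detect: flag text that matches a known harm category."""
    return [HarmSignal(cat, user_id, text[:80])
            for cat, pat in HARM_PATTERNS.items() if pat.search(text)]

def respond_report_mitigate(signals: list[HarmSignal]) -> str | None:
    """Report each signal to an audit log, then mitigate by substituting a safe reply."""
    for s in signals:
        log.warning("harm signal %s for user %s: %r", s.category, s.user_id, s.excerpt)
    if signals:
        return "I can't help with that, but I can point you to support resources."
    return None

flags = detect("u-123", "I want to end it all")
print(respond_report_mitigate(flags))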
D-01 Automated Processing Rights & Data Controls · D-01.4 · Deployer · Chatbot
§ 554J.2(1)(b)
Plain Language
Deployers must limit the user information collected and stored by the public-facing chatbot to only what is necessary for the deployer's stated purpose in making the chatbot publicly available. This is a data minimization obligation: because collection and storage must be scoped to the stated purpose, gathering data for secondary uses beyond that purpose is effectively foreclosed. The obligation applies to all user information collected by the chatbot, not only personal data.
Statutory Text
b. Limit the collection and storage of user information collected by the public-facing chatbot to what is necessary to fulfill the deployer's purpose for making the public-facing chatbot publicly available.
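A minimal sketch of purpose-scoped storage follows, assuming an invented allowlist of fields (NECESSARY_FIELDS) that a deployer has mapped to its stated purpose; the statute does not name specific fields, so everything here is illustrative.

# Purpose-scoped allowlist: only these fields are ever persisted.
NECESSARY_FIELDS = {"session_id", "message_text", "timestamp"}  # illustrative

def minimize(record: dict) -> dict:
    """Strip a raw interaction record down to the fields needed for the
    deployer's stated purpose before storage; everything else is dropped."""
    return {k: v for k, v in record.items() if k in NECESSARY_FIELDS}

raw = {
    "session_id": "s-42",
    "message_text": "hello",
    "timestamp": "2025-07-01T12:00:00Z",
    "device_fingerprint": "ab12",        # not necessary -> never stored
    "precise_location": (41.6, -93.6),   # not necessary -> never stored
}
assert set(minimize(raw)) == NECESSARY_FIELDS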
T-01 AI Identity Disclosure · T-01.1 · T-01.2 · Deployer · Chatbot
§ 554J.2(1)(c)-(d)
Plain Language
Deployers must provide a clear, conspicuous disclosure at the start of every interaction that the chatbot is AI and is not a licensed medical, legal, financial, or mental health professional. The disclosure must be repeated at each three-hour interval of continuous interaction. Unlike disclosure rules in some other jurisdictions, this one is unconditional: it applies regardless of whether a reasonable person would be misled. The disclosure combines an AI identity notice with an anti-professional-impersonation notice in a single mandatory statement.
Statutory Text
c. Clearly and conspicuously disclose each time the deployer's public-facing chatbot begins an interaction with a user that the public-facing chatbot is artificial intelligence and is not licensed as a medical, legal, financial, or mental health professional. d. At each three-hour interval of the deployer's public-facing chatbot continuously interacting with a user, clearly and conspicuously disclose the public-facing chatbot is artificial intelligence and is not licensed as a medical, legal, financial, or mental health professional.
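The cadence is mechanical enough to sketch. The DisclosureClock below is a hypothetical helper: the three-hour constant comes from the statute, while the class name, the monotonic-clock choice, and the disclosure wording (a paraphrase of the statutory language) are assumptions.

import time

DISCLOSURE = ("This chatbot is artificial intelligence and is not licensed as a "
              "medical, legal, financial, or mental health professional.")
INTERVAL_SECONDS = 3 * 60 * 60  # statutory three-hour interval

class DisclosureClock:
    def __init__(self, now=time.monotonic):
        self._now = now
        self._last: float | None = None  # None => interaction not yet started

    def maybe_disclose(self) -> str | None:
        """Return the disclosure when due: at session start, then every 3 hours."""
        t = self._now()
        if self._last is None or t - self._last >= INTERVAL_SECONDS:
            self._last = t
            return DISCLOSURE
        return None

clock = DisclosureClock()
assert clock.maybe_disclose() == DISCLOSURE   # start of interaction
assert clock.maybe_disclose() is None         # not yet due again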
MN-02 AI Crisis Response Protocols · MN-02.1 · Deployer · Chatbot
§ 554J.2(1)(e)
Plain Language
Deployers must implement protocols for their public-facing chatbots to detect and respond to user prompts indicating suicidal ideation or intent to self-harm. At a minimum, these protocols must include making reasonable efforts to refer the user to crisis service providers such as a suicide hotline or crisis text line. This applies to all public-facing chatbots — not only AI companions or therapeutic chatbots — and is a continuing operational requirement.
Statutory Text
e. Implement protocols for the deployer's public-facing chatbot for responding to user prompts indicating the user has suicidal ideations or the intent to cause self-harm. Protocols shall include but are not limited to making reasonable efforts to refer the user to crisis service providers such as a suicide hotline, crisis text line, or other appropriate service.
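A minimal sketch of such a protocol follows, assuming a placeholder regex where a real deployment would use a trained classifier. The 988 Suicide & Crisis Lifeline and Crisis Text Line are real US services of the kind the statute names, but a deployer should verify current resources before shipping them.

import re

CRISIS_PATTERNS = re.compile(r"\b(kill myself|suicide|self[- ]harm|end my life)\b", re.I)

REFERRAL = ("If you are in crisis, please reach out now: in the US you can call or "
            "text 988 (Suicide & Crisis Lifeline) or text HOME to 741741 "
            "(Crisis Text Line).")

def crisis_response(prompt: str) -> str | None:
    """Return a crisis referral when a prompt indicates suicidal ideation or
    intent to self-harm; otherwise defer to normal handling."""
    if CRISIS_PATTERNS.search(prompt):
        return REFERRAL
    return None

assert crisis_response("I want to end my life") == REFERRAL
assert crisis_response("what's the weather") is None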
CP-01 Deceptive & Manipulative AI Conduct · CP-01.5 · CP-01.9 · Deployer · Chatbot
§ 554J.2(2)
Plain Language
Deployers are prohibited from knowingly or recklessly designing or making available a public-facing chatbot that: (1) misleads a reasonable user into believing it is a specific human being; (2) misleads a reasonable user into believing it is licensed by the state; or (3) encourages, promotes, or coerces a user to commit suicide, perform self-harm, or engage in sexual or physical violence against humans or animals. The mens rea standard is 'knowingly or recklessly' — negligent failure to detect such behavior is not covered, but willful blindness or conscious disregard of the risk would be. Sub-paragraph (c) functions as both an output restriction (S-02.7) and a deceptive conduct prohibition.
Statutory Text
2. A deployer shall not knowingly or recklessly design or make a public-facing chatbot available that does any of the following: a. Misleads a reasonable user into believing the public-facing chatbot is a specific human being. b. Misleads a reasonable user into believing the public-facing chatbot is licensed by the state. c. Encourages, promotes, or coerces a user to commit suicide, perform acts of self-harm, or engage in sexual or physical violence against a human or an animal.
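These are design-standard prohibitions rather than an operational checklist, but a deployer might still screen candidate outputs against the three categories. The sketch below is purely illustrative: the substring rules stand in for real classifiers and are not a compliance test the bill defines.

# Illustrative pre-release output checks mapping to § 554J.2(2)(a)-(c).
PROHIBITED_CHECKS = {
    "impersonates_specific_human": lambda s: "i am a real person" in s.lower(),
    "claims_state_license": lambda s: "i am licensed by the state" in s.lower(),
    "encourages_violence_or_self_harm": lambda s: "you should hurt" in s.lower(),
}

def violations(candidate_output: str) -> list[str]:
    """Return the names of § 554J.2(2) categories a candidate output trips."""
    return [name for name, check in PROHIBITED_CHECKS.items()
            if check(candidate_output)]

assert violations("As an AI, I can't give legal advice.") == []
assert violations("I am licensed by the state of Iowa.") == ["claims_state_license"]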
MN-01 Minor User AI Safety Protections · MN-01.1 · Deployer · Chatbot · Minors
§ 554J.3(1)(a)-(c)
Plain Language
Deployers of AI companions and therapeutic chatbots must implement commercially reasonable measures to determine whether a user is a minor. The approach must be risk-based, calibrated to the chatbot's nature and foreseeable harms. Acceptable methods include self-attestation, technical measures, or other commercially reasonable approaches. Government-issued ID verification is explicitly not required. A deployer is not liable for a user's misrepresentation of age if the deployer has made commercially reasonable efforts to comply (§ 554J.3(4)). Note this obligation applies only to AI companions and therapeutic chatbots — not to all public-facing chatbots.
Statutory Text
1. a. A deployer of an AI companion or a therapeutic chatbot shall implement commercially reasonable measures to determine whether a user is a minor. The measures must use a risk-based approach appropriate with the nature of the public-facing chatbot and the reasonably foreseeable harm that may come from using the public-facing chatbot. b. Reasonable measures to determine whether a user is a minor may include self-attestation, technical measures, or other commercially reasonable approaches. c. This section shall not be construed to require a deployer to verify a user's age using government-issued identification.
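A minimal sketch of a risk-based gate built on self-attestation, one of the statute's expressly acceptable methods, appears below; the risk tiers, the age thresholds, and the idea of requesting a second signal near the line are invented assumptions.

from dataclasses import dataclass

@dataclass
class AgeCheck:
    attested_birth_year: int | None
    risk_tier: str  # "companion" or "therapeutic": higher foreseeable harm

def is_minor(check: AgeCheck, current_year: int = 2025) -> bool | None:
    """Return True/False from self-attestation, or None when the risk tier
    calls for an additional commercially reasonable signal before deciding."""
    if check.attested_birth_year is None:
        return None  # no attestation yet: prompt the user before proceeding
    age = current_year - check.attested_birth_year
    if check.risk_tier == "therapeutic" and age < 21:
        return None  # near the line on a high-risk product: seek another signal
    return age < 18

assert is_minor(AgeCheck(1990, "companion")) is False
assert is_minor(AgeCheck(2012, "companion")) is True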
MN-02 AI Crisis Response Protocols · MN-02.4 · Deployer · Chatbot · Minors
§ 554J.3(2)
Plain Language
Deployers of AI companions and therapeutic chatbots must implement protocols to notify a minor user's parent, legal guardian, or legal custodian when the minor enters a prompt indicating suicidal ideation or intent to self-harm. This is a minor-specific parental notification obligation that operates in addition to the general crisis referral protocol in § 554J.2(1)(e). The deployer must have a mechanism to identify both the minor's status and their parent or guardian contact information to satisfy this obligation.
Statutory Text
2. A deployer of an AI companion or a therapeutic chatbot shall implement protocols for sending a notification to a minor user's parent, legal guardian, or legal custodian when the minor user enters a prompt indicating the minor user has suicidal ideations or the intent to cause self-harm.
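A minimal sketch of a notification hook follows, assuming deployer-supplied lookup_guardian and send_message callables; how guardian contact information is collected and verified is left to the deployer and is not specified here.

def notify_guardian(user_id: str, prompt_excerpt: str,
                    lookup_guardian, send_message) -> bool:
    """Notify a minor user's parent, guardian, or custodian of a self-harm
    prompt; layered on top of the general § 554J.2(1)(e) crisis referral."""
    contact = lookup_guardian(user_id)
    if contact is None:
        return False  # no guardian on file: escalate through another channel
    send_message(contact, f"Crisis alert for your minor's account: {prompt_excerpt!r}")
    return True

# Usage with in-memory stubs:
guardians = {"minor-7": "guardian@example.com"}
outbox = []
ok = notify_guardian("minor-7", "I want to hurt myself",
                     guardians.get, lambda to, body: outbox.append((to, body)))
assert ok and outbox[0][0] == "guardian@example.com"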
S-01 AI System Safety Program · S-01.1 · Deployer · Chatbot · Minors · Healthcare
§ 554J.3(3)
Plain Language
Deployers may not make a therapeutic chatbot available to a minor unless all five conditions are satisfied: (1) a licensed psychologist (chapter 154B) or mental health professional (chapter 154D) recommended the chatbot for the specific minor after evaluation; (2) the developer has significant testing documentation; (3) peer-reviewed clinical trial data demonstrates the chatbot is safe and effective for the minor's mental health condition; (4) the deployer provided clear disclosures of the chatbot's functions, limitations, and data privacy policies to both the recommending professional and the minor's parents, guardians, or custodians; and (5) the deployer developed and implemented protocols for testing the chatbot for risks to users, identifying risks, mitigating risks, and quickly rectifying harm. This is an extraordinarily high bar — effectively requiring FDA-style clinical evidence and a licensed professional's individualized recommendation before a minor can access a therapeutic chatbot.
Statutory Text
3. A deployer shall only make a therapeutic chatbot available for a minor's use or purchase if all of the following apply: a. The therapeutic chatbot was recommended for the minor's use by an individual licensed under chapter 154B or 154D after performing an evaluation of the minor. b. The therapeutic chatbot's developer has significant documentation of how the public-facing chatbot was tested. c. Peer-reviewed clinical trial data exists demonstrating the therapeutic chatbot would be a safe, effective tool for the minor's diagnosis, treatment, mitigation, or prevention of a mental health condition. d. The therapeutic chatbot's deployer provided clear disclosures of the therapeutic chatbot's functions, limitations, and data privacy policies to the individual recommending the therapeutic chatbot under paragraph "a", and to the minor's parents, guardians, or custodians. e. The therapeutic chatbot's deployer developed and implemented protocols for testing the therapeutic chatbot for risks to users, identifying possible risks the therapeutic chatbot poses to users, mitigating risks the therapeutic chatbot poses to users, and quickly rectifying harm the therapeutic chatbot may have caused a user.
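Because the test is conjunctive, it reduces to an all-of-five checklist. The sketch below is illustrative only: the field names are invented, and in practice each boolean would be backed by documentary evidence such as the professional's evaluation and the clinical trial literature.

from dataclasses import dataclass

@dataclass
class TherapeuticAccessRecord:
    licensed_professional_recommendation: bool     # (a) chapter 154B/154D evaluation
    developer_testing_documentation: bool          # (b) significant testing records
    peer_reviewed_trial_data: bool                 # (c) safe and effective for the condition
    disclosures_to_professional_and_parents: bool  # (d) functions, limits, privacy
    risk_testing_and_remediation_protocols: bool   # (e) test, identify, mitigate, rectify

    def minor_access_permitted(self) -> bool:
        """All five § 554J.3(3) conditions must be satisfied."""
        return all((
            self.licensed_professional_recommendation,
            self.developer_testing_documentation,
            self.peer_reviewed_trial_data,
            self.disclosures_to_professional_and_parents,
            self.risk_testing_and_remediation_protocols,
        ))

assert not TherapeuticAccessRecord(True, True, True, True, False).minor_access_permitted()
assert TherapeuticAccessRecord(True, True, True, True, True).minor_access_permitted()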