HF-2715
IA · State · USA
● Pending
Iowa House File 2715 — A bill for an act relating to chatbots, including deployer requirements and interactions with minors
Summary

Iowa HF 2715 imposes safety, disclosure, and conduct requirements on deployers of public-facing chatbots, with heightened obligations for AI companions and therapeutic chatbots used by minors. Deployers must maintain harm-detection and mitigation protocols, limit data collection to what is necessary, disclose AI identity and non-licensure status at the start of every interaction and every three hours during continuous use, and implement crisis referral protocols for suicidal ideation and self-harm. Deployers of AI companions and therapeutic chatbots must implement commercially reasonable age-determination measures and parental notification protocols when a minor expresses suicidal ideation. Therapeutic chatbots may only be made available to minors under strict conditions including a licensed professional's recommendation, peer-reviewed clinical trial data, and deployer safety testing protocols. Enforcement is exclusively through the attorney general, with civil penalties up to $2,500 per violation ($7,500 for injunction violations) and a 30-day cure period except for imminent harm to minors.

Enforcement & Penalties
Enforcement Authority
Attorney general enforcement. The attorney general may bring an action on behalf of the state and may seek an injunction. Prior to initiating a proceeding to obtain a civil penalty, the attorney general must notify the person of the violation and give the person thirty calendar days to cure, unless the violation will cause imminent harm to a minor. No private right of action.
Penalties
Civil penalty of not more than $2,500 per violation, or $7,500 per violation of an injunction issued under the chapter. Injunctive relief is available. Penalties are deposited into the general fund of the state. A deployer that makes commercially reasonable efforts to comply is not subject to liability for unforeseeable or emergent outputs generated by the deployer's public-facing chatbot (safe harbor).
Who Is Covered
"Deployer" means a person that makes an AI companion, a public-facing chatbot, or a therapeutic chatbot available to users in this state.
What Is Covered
"Chatbot" means artificial intelligence that is described by all of the following: (1) The artificial intelligence accepts open-ended, natural-language, or multimodal user input and produces adaptive or context-responsive output. (2) The artificial intelligence produces new expressive content or responses that were not fully predetermined by the person who created or who operates the artificial intelligence. "Chatbot" does not include a service limited to internal business operations or a service requiring user authentication through an employer, an educational institution, or a similar organization.
"Public-facing chatbot" means a chatbot intentionally made available to the general public or marketed directly to consumers for independent use without the ongoing supervision of the deployer or an institutional consumer. "Public-facing chatbot" does not include any of the following: (1) Software designed primarily for internal business operations. (2) Enterprise software licensed to a specific business, nonprofit organization, or governmental entity. (3) Chatbots used solely within the context of an existing customer relationship. (4) Systems requiring authentication through an employer, educational institution, health care provider, or similar organization prior to use.
"AI companion" means a public-facing chatbot designed to simulate a human-like romantic or emotional bond.
"Therapeutic chatbot" means a public-facing chatbot that is designed for the primary purpose of providing mental health support, counseling, or therapy by diagnosing, treating, mitigating, or preventing a mental health condition.
Compliance Obligations (8 obligations)
S-01 AI System Safety Program · S-01.4 · S-01.5 · Deployer · Chatbot
§ 554J.2(1)(a)
Plain Language
Deployers must implement and maintain ongoing protocols to detect, respond to, report, and mitigate harms their public-facing chatbot may cause users. The protocols must take commercially reasonable steps — meaning steps consistent with prevailing industry standards and proportionate to the deployer's size and resources — to protect user safety and well-being. This is a continuous operating requirement, not a one-time pre-launch check. A deployer that makes commercially reasonable efforts to comply with the entire chapter is not liable for unforeseeable or emergent outputs (safe harbor under § 554J.5).
Statutory Text
A deployer of a public-facing chatbot shall do all of the following: a. Implement and maintain protocols meant to detect, respond to, report, and mitigate harm the public-facing chatbot may cause a user in a manner that takes commercially reasonable steps to protect the safety and well-being of users.
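For illustration only: the four statutory verbs (detect, respond to, report, mitigate) map naturally onto an incident lifecycle that stays open until every step is complete, which captures the continuous-operation framing noted above. A minimal Python sketch under that reading follows; nothing in the bill prescribes this structure, and all names are hypothetical.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class HarmIncident:
    """One detected harm event in the § 554J.2(1)(a) lifecycle:
    detect, respond, report, mitigate. Fields are illustrative."""
    description: str
    detected_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))
    responded: bool = False
    reported: bool = False
    mitigated: bool = False

    def open_steps(self) -> list[str]:
        """An incident remains open until each statutory step is done."""
        return [name for name, done in (("respond", self.responded),
                                        ("report", self.reported),
                                        ("mitigate", self.mitigated))
                if not done]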
D-01 Automated Processing Rights & Data Controls · D-01.4 · Deployer · Chatbot
§ 554J.2(1)(b)
Plain Language
Deployers must minimize the collection and storage of user information gathered by their public-facing chatbot to only what is necessary to fulfill the deployer's stated purpose for making the chatbot publicly available. This is a data minimization obligation — deployers may not collect or retain user data beyond what the chatbot's stated purpose requires.
Statutory Text
b. Limit the collection and storage of user information collected by the public-facing chatbot to what is necessary to fulfill the deployer's purpose for making the public-facing chatbot publicly available.
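For illustration, a purpose-scoped field allowlist applied before storage, plus a retention window, is one common way to operationalize a minimization duty like this. The sketch below assumes that approach; the field names and the 30-day window are hypothetical, not drawn from the bill.

from datetime import datetime, timedelta, timezone

# Hypothetical purpose-scoped allowlist and retention window; HF 2715
# requires minimization but does not prescribe fields or periods.
ALLOWED_FIELDS = {"session_id", "message_text", "timestamp"}
RETENTION = timedelta(days=30)

def minimize(record: dict) -> dict:
    """Keep only the fields needed for the chatbot's stated purpose."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

def expired(stored_at: datetime) -> bool:
    """True once a stored record has outlived the retention window."""
    return datetime.now(timezone.utc) - stored_at > RETENTION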
T-01 AI Identity Disclosure · T-01.1 · T-01.2 · Deployer · Chatbot
§ 554J.2(1)(c)-(d)
Plain Language
Deployers must provide a clear and conspicuous disclosure at the start of every interaction that the chatbot is AI and is not a licensed medical, legal, financial, or mental health professional. This disclosure must be repeated every three hours during continuous interactions. Unlike some jurisdictions, this is unconditional — it applies regardless of whether a reasonable person would be misled. The disclosure includes both AI identity and a non-licensure disclaimer, combining transparency and anti-deception functions.
Statutory Text
c. Clearly and conspicuously disclose each time the deployer's public-facing chatbot begins an interaction with a user that the public-facing chatbot is artificial intelligence and is not licensed as a medical, legal, financial, or mental health professional. d. At each three-hour interval of the deployer's public-facing chatbot continuously interacting with a user, clearly and conspicuously disclose the public-facing chatbot is artificial intelligence and is not licensed as a medical, legal, financial, or mental health professional.
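One plausible implementation of the cadence is a per-session timer checked on every user turn, so the disclosure fires when the interaction begins and again on the first turn at or after each three-hour mark. The sketch below assumes that design; the statute fixes the disclosure's content and timing, not the mechanism, and the class and parameter names are hypothetical.

import time

DISCLOSURE = ("This chatbot is artificial intelligence and is not licensed "
              "as a medical, legal, financial, or mental health professional.")
THREE_HOURS = 3 * 60 * 60  # statutory interval, in seconds

class DisclosureTimer:
    """Emit the disclosure at the start of an interaction and at each
    three-hour interval of continuous interaction (§ 554J.2(1)(c)-(d))."""

    def __init__(self) -> None:
        self._last: float | None = None

    def maybe_disclose(self, send) -> None:
        # 'send' is whatever delivers text to the user interface.
        now = time.monotonic()
        if self._last is None or now - self._last >= THREE_HOURS:
            send(DISCLOSURE)
            self._last = now

Checking on every turn means a lull longer than three hours still produces a fresh disclosure on the next message, a conservative reading of "continuously interacting."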
S-04 AI Crisis Response Protocols · S-04.1 · Deployer · Chatbot
§ 554J.2(1)(e)
Plain Language
Deployers must implement protocols for their public-facing chatbot to detect and respond to user prompts indicating suicidal ideation or self-harm intent. At a minimum, the protocols must make reasonable efforts to refer the user to crisis service providers such as a suicide hotline, crisis text line, or other appropriate service. This applies to all public-facing chatbots — not just AI companions or therapeutic chatbots.
Statutory Text
e. Implement protocols for the deployer's public-facing chatbot for responding to user prompts indicating the user has suicidal ideations or the intent to cause self-harm. Protocols shall include but are not limited to making reasonable efforts to refer the user to crisis service providers such as a suicide hotline, crisis text line, or other appropriate service.
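The statute mandates the outcome (reasonable efforts to refer) rather than any detection technique. As a deliberately simplistic sketch, a deployer might screen each prompt for crisis indicators and attach a referral; a production system would use a far more robust classifier. The 988 Suicide & Crisis Lifeline is a real U.S. service; the pattern list and function names are hypothetical.

# Deliberately simplistic indicators; the bill requires the referral
# outcome, not any particular detection method.
CRISIS_PATTERNS = ("kill myself", "end my life", "hurt myself")

CRISIS_REFERRAL = ("If you are in crisis, help is available now: call or "
                   "text 988 to reach the Suicide & Crisis Lifeline.")

def crisis_referral(prompt: str) -> str | None:
    """Return a referral message for a crisis-indicating prompt, else None."""
    lowered = prompt.lower()
    return CRISIS_REFERRAL if any(p in lowered for p in CRISIS_PATTERNS) else None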
CP-01 Deceptive & Manipulative AI Conduct · CP-01.5 · CP-01.9 · Deployer · Chatbot
§ 554J.2(2)
Plain Language
Deployers are prohibited from knowingly or recklessly designing or making available a public-facing chatbot that: (a) misleads a reasonable user into thinking the chatbot is a specific human being; (b) misleads a reasonable user into thinking the chatbot is state-licensed; or (c) encourages, promotes, or coerces a user to commit suicide, self-harm, or sexual or physical violence against a human or animal. The knowledge standard is "knowingly or recklessly"; negligent design alone does not trigger liability. Subparagraph (c) overlaps with S-02.7 (self-harm content restrictions) but is grouped here because it is part of a single enumerated prohibition list.
Statutory Text
2. A deployer shall not knowingly or recklessly design or make a public-facing chatbot available that does any of the following: a. Misleads a reasonable user into believing the public-facing chatbot is a specific human being. b. Misleads a reasonable user into believing the public-facing chatbot is licensed by the state. c. Encourages, promotes, or coerces a user to commit suicide, perform acts of self-harm, or engage in sexual or physical violence against a human or an animal.
MN-01 Minor User AI Safety Protections · MN-01.1 · Deployer · Chatbot · Minors
§ 554J.3(1)(a)-(c)
Plain Language
Deployers of AI companions or therapeutic chatbots must implement commercially reasonable measures to determine whether a user is a minor. The measures must use a risk-based approach proportionate to the nature of the chatbot and its foreseeable harm potential. Acceptable measures include self-attestation, technical measures, or other commercially reasonable approaches. Government-issued ID verification is explicitly not required. A deployer is not liable for a user's misrepresentation of age if the deployer has made commercially reasonable efforts to comply (safe harbor under § 554J.3(4)).
Statutory Text
1. a. A deployer of an AI companion or a therapeutic chatbot shall implement commercially reasonable measures to determine whether a user is a minor. The measures must use a risk-based approach appropriate with the nature of the public-facing chatbot and the reasonably foreseeable harm that may come from using the public-facing chatbot. b. Reasonable measures to determine whether a user is a minor may include self-attestation, technical measures, or other commercially reasonable approaches. c. This section shall not be construed to require a deployer to verify a user's age using government-issued identification.
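Since the bill names self-attestation as acceptable and expressly rules out mandatory government-ID checks, a minimal flow could be a date-of-birth attestation, escalating to stronger technical measures only where foreseeable harm is higher. The sketch below assumes that reading; the risk tiers and all names are hypothetical.

from datetime import date
from enum import Enum

class AgeMeasure(Enum):
    # Hypothetical tiers for the bill's risk-based approach.
    SELF_ATTESTATION = "self_attestation"
    TECHNICAL_MEASURES = "additional_technical_measures"

def measure_for(high_foreseeable_harm: bool) -> AgeMeasure:
    """Stronger measures where foreseeable harm from the chatbot is greater."""
    return (AgeMeasure.TECHNICAL_MEASURES if high_foreseeable_harm
            else AgeMeasure.SELF_ATTESTATION)

def is_minor_by_attestation(birth_date: date, today: date | None = None) -> bool:
    """Self-attested date of birth, one measure § 554J.3(1)(b) names."""
    today = today or date.today()
    age = today.year - birth_date.year - (
        (today.month, today.day) < (birth_date.month, birth_date.day))
    return age < 18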
S-04 AI Crisis Response Protocols · MN-01.10 · Deployer · Chatbot · Minors
§ 554J.3(2)
Plain Language
Deployers of AI companions or therapeutic chatbots must implement protocols to notify a minor user's parent, legal guardian, or legal custodian when the minor enters a prompt indicating suicidal ideation or intent to self-harm. This is a parental notification obligation specific to minors, triggered by crisis-indicating prompts — it operates alongside the general crisis referral protocol required under § 554J.2(1)(e) for all users.
Statutory Text
2. A deployer of an AI companion or a therapeutic chatbot shall implement protocols for sending a notification to a minor user's parent, legal guardian, or legal custodian when the minor user enters a prompt indicating the minor user has suicidal ideations or the intent to cause self-harm.
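Operationally, this protocol composes with the general crisis referral: the same detection event that triggers the § 554J.2(1)(e) referral for any user also triggers guardian notification when the user is a known minor. A minimal sketch, assuming the deployer already holds guardian contact details (the bill does not say how they are collected), with all names hypothetical:

from dataclasses import dataclass

@dataclass
class GuardianContact:
    # Hypothetical record; HF 2715 does not specify how guardian
    # contact details are gathered or stored.
    name: str
    email: str

def on_crisis_prompt(is_minor: bool, guardian: GuardianContact | None,
                     send_referral, notify_guardian) -> None:
    """Route one crisis-indicating prompt to both required protocols."""
    send_referral()  # § 554J.2(1)(e): referral owed to every user
    if is_minor and guardian is not None:
        notify_guardian(  # § 554J.3(2): parental notification for minors
            guardian,
            "A prompt indicating possible suicidal ideation or self-harm "
            "intent was detected on your child's account.")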
Other · Deployer · Chatbot · Healthcare · Minors
§ 554J.3(3)(a)-(e)
Plain Language
A deployer may only make a therapeutic chatbot available to a minor if five cumulative conditions are met: (a) the chatbot was recommended by a licensed psychologist (ch. 154B) or mental health professional (ch. 154D) who evaluated the minor; (b) the developer has significant documentation of how the chatbot was tested; (c) peer-reviewed clinical trial data demonstrates the chatbot is safe and effective for the minor's mental health condition; (d) the deployer disclosed the chatbot's functions, limitations, and data privacy policies to both the recommending professional and the minor's parents or guardians; and (e) the deployer has developed and implemented protocols for testing, risk identification, risk mitigation, and harm rectification. All five conditions must be satisfied before access is permitted — this functions as a gating pre-authorization regime.
Statutory Text
3. A deployer shall only make a therapeutic chatbot available for a minor's use or purchase if all of the following apply: a. The therapeutic chatbot was recommended for the minor's use by an individual licensed under chapter 154B or 154D after performing an evaluation of the minor. b. The therapeutic chatbot's developer has significant documentation of how the public-facing chatbot was tested. c. Peer-reviewed clinical trial data exists demonstrating the therapeutic chatbot would be a safe, effective tool for the minor's diagnosis, treatment, mitigation, or prevention of a mental health condition. d. The therapeutic chatbot's deployer provided clear disclosures of the therapeutic chatbot's functions, limitations, and data privacy policies to the individual recommending the therapeutic chatbot under paragraph "a", and to the minor's parents, guardians, or custodians. e. The therapeutic chatbot's deployer developed and implemented protocols for testing the therapeutic chatbot for risks to users, identifying possible risks the therapeutic chatbot poses to users, mitigating risks the therapeutic chatbot poses to users, and quickly rectifying harm the therapeutic chatbot may have caused a user.
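Because all five conditions must hold before a minor may be given access, the regime maps naturally onto an all-or-nothing gate. The sketch below records each statutory condition as a flag and permits access only when every one is satisfied; the field names are illustrative shorthand, not statutory terms. A failure on any single flag, for example missing clinical trial data, blocks availability regardless of the other four.

from dataclasses import dataclass

@dataclass
class TherapeuticAccessRecord:
    """The five cumulative conditions of § 554J.3(3)(a)-(e)."""
    professional_recommendation: bool   # (a) ch. 154B/154D evaluation
    testing_documentation: bool         # (b) developer test records
    clinical_trial_data: bool           # (c) peer-reviewed, condition-specific
    disclosures_made: bool              # (d) to professional and guardians
    risk_protocols_implemented: bool    # (e) testing, mitigation, rectification

def may_offer_to_minor(record: TherapeuticAccessRecord) -> bool:
    """Any single unmet condition gates access entirely."""
    return all(vars(record).values())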