HF-2204
IA · State · USA
● Pending
Proposed Effective Date
2025-07-01
Iowa House File 2204 — A bill for an act relating to the requirements for chatbot deployers, including required protocols, limitations on data collection, and requirements for minors to interact with artificial intelligence companions and therapeutic chatbots, and providing civil penalties, punitive penalties, and civil causes of action
Summary

Imposes safety, data minimization, and minor-protection obligations on deployers of chatbots in Iowa. All chatbot deployers must maintain harm-detection and mitigation protocols that prioritize user safety over deployer interests, and must limit data collection to what is necessary for the chatbot's stated purpose. Deployers must implement reasonable age verification to prevent minors from using AI companions. A chatbot knowingly designed to impersonate a real individual (living or deceased) may not be made publicly available without consent, subject to narrow exceptions for educational, research, artistic, cultural, or political-value chatbots. Therapeutic chatbots may only be made available to minors if multiple conditions are satisfied, including a licensed professional recommendation, peer-reviewed clinical trial data, and deployer safety protocols. Enforcement is via attorney general civil actions (up to $2,500 per violation) and a private right of action for minors (punitive damages of $100–$750 or actual damages, plus emotional distress damages and attorney fees). The chapter does not apply to chatbots that provide only generic responses where a reasonable person would not expect the responses to create an emotional bond.

Enforcement & Penalties
Enforcement Authority
The attorney general may bring an action on behalf of the state to enforce the chapter and may seek injunctive relief. A minor who uses a chatbot that does not comply with the chapter may bring a private civil action to recover damages. No cure period or safe harbor is specified.
Penalties
AG enforcement: civil penalty of up to $2,500 per violation, or up to $7,500 per violation of an injunction; penalties deposited into the state general fund. Private right of action for minors: the greater of (1) punitive damages of not less than $100 but not more than $750 or (2) actual damages; plus emotional distress damages, court costs, and reasonable attorney fees. Statutory punitive damages do not require proof of actual monetary harm.
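The private-action recovery described above follows a simple formula: the greater of statutory punitive damages ($100–$750) or actual damages, plus emotional distress damages, court costs, and attorney fees. A minimal sketch of that arithmetic (illustrative only, not legal advice; function and parameter names are hypothetical):

```python
def minor_private_action_damages(actual_damages: float,
                                 emotional_distress: float,
                                 costs_and_fees: float,
                                 punitive_award: float = 100.0) -> float:
    """Greater of statutory punitive ($100-$750) or actual damages,
    plus emotional distress damages, court costs, and attorney fees."""
    if not 100.0 <= punitive_award <= 750.0:
        raise ValueError("statutory punitive award must be $100-$750")
    base = max(punitive_award, actual_damages)
    return base + emotional_distress + costs_and_fees

# Statutory punitive damages require no proof of monetary harm, so a
# minor with zero actual damages still recovers at least the minimum.
print(minor_private_action_damages(0.0, 0.0, 0.0))  # 100.0
```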
Who Is Covered
"Deployer" means a person that owns an artificial intelligence available for public use.
What Is Covered
"Chatbot" means an artificial intelligence that is made to simulate human conversation with a user through text or audio output.
"AI companion" means a chatbot that interacts with users to simulate a human-like romantic or emotional bond.
"Therapeutic chatbot" means a chatbot designed for the primary purpose of providing mental health support, counseling, or therapy by diagnosing, treating, mitigating, or preventing a mental health condition.
Compliance Obligations (6 obligations)
S-01 AI System Safety Program · S-01.5 · Deployer · Chatbot
§ 554J.2(1)
Plain Language
Every deployer of a chatbot must establish and maintain ongoing protocols designed to detect, respond to, report, and mitigate harms the chatbot may cause users. These protocols must prioritize user safety and well-being over the deployer's commercial or other interests. This is a continuing obligation — the protocols must be maintained, not merely established once. The statute does not specify the content of the protocols in detail, leaving significant discretion to deployers but also creating compliance ambiguity.
Statutory Text
A deployer of a chatbot shall do all of the following: 1. Implement and maintain protocols meant to detect, respond to, report, and mitigate harm the chatbot may cause a user in a manner that prioritizes the safety and well-being of users over the deployer's interests.
D-01 Automated Processing Rights & Data Controls · D-01.4 · Deployer · Chatbot
§ 554J.2(2)
Plain Language
Deployers must minimize the collection and storage of user information gathered by the chatbot to only what is necessary for the deployer's stated purpose in making the chatbot publicly available. This is a data minimization obligation — it prohibits collecting data beyond what is functionally necessary. The standard is tied to the deployer's purpose, which creates some ambiguity about how broadly or narrowly that purpose may be defined.
Statutory Text
A deployer of a chatbot shall do all of the following: ... 2. Limit the collection and storage of user information collected by the chatbot to what is necessary to fulfill the deployer's purpose for making the chatbot publicly available.
MN-01 Minor User AI Safety Protections · MN-01.1 · Deployer · Chatbot · Minors
§ 554J.3(1)
Plain Language
Deployers must implement reasonable age verification — using government ID, financial documents evidencing age, or another widely accepted practice — to ensure that no minor can use or purchase an AI companion the deployer makes publicly available. This is a categorical prohibition on minor access to AI companions (chatbots simulating romantic or emotional bonds), not merely an enhanced-obligations regime. The obligation is on the deployer to verify age, not on the minor to self-certify. Note that this applies specifically to AI companions, not to all chatbots.
Statutory Text
1. A deployer shall implement reasonable age verification measures to ensure that a minor cannot use or purchase an AI companion the deployer makes publicly available.
CP-02 Non-Consensual Intimate Imagery · CP-02.4 · Deployer · Chatbot
§ 554J.3(2)
Plain Language
Deployers may not make publicly available any chatbot knowingly designed to impersonate a real person — living or dead — without first obtaining permission from the individual (or their legal representative) if living, or from the person responsible for the estate if deceased. A narrow exception applies for deceased individuals with no responsible estate: the chatbot may be deployed without permission if it was designed solely as an educational or research tool, or if a reasonable person would believe the chatbot has objective artistic, cultural, or political value. This functions as a digital likeness consent requirement applied to AI chatbots, covering both living and post-mortem rights.
Statutory Text
2. A deployer shall not make a chatbot publicly available if the chatbot was knowingly designed to impersonate a real individual, regardless of whether the individual is living or deceased, unless the deployer first obtains permission to impersonate the individual from any of the following: a. For a living individual, from the individual or the individual's legal representative. b. For a deceased individual, from the person responsible for the deceased individual's estate. If no person is responsible for the deceased individual's estate, a deployer may make a chatbot that was designed to knowingly impersonate a deceased individual publicly available without permission if the chatbot was designed solely as an educational or research tool or if a reasonable person would believe the chatbot has objective artistic, cultural, or political value.
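The consent rule above is a short decision tree: living subjects require consent from the individual or their representative, deceased subjects require consent from whoever is responsible for the estate, and only when no one is responsible for the estate do the narrow exceptions apply. A hedged sketch of that branching (condition names are paraphrases of the statute; illustrative only, not legal advice):

```python
def may_deploy_impersonating_chatbot(living: bool,
                                     has_subject_consent: bool,
                                     has_estate: bool,
                                     has_estate_consent: bool,
                                     educational_or_research_only: bool,
                                     artistic_cultural_political_value: bool) -> bool:
    """Decision tree for a chatbot knowingly designed to impersonate
    a real individual under section 554J.3(2)."""
    if living:
        # Consent from the individual or the individual's legal representative.
        return has_subject_consent
    if has_estate:
        # Consent from the person responsible for the deceased's estate.
        return has_estate_consent
    # Deceased with no responsible estate: narrow exceptions only.
    return educational_or_research_only or artistic_cultural_political_value
```

Note that the exceptions never reach a living individual or an estate with a responsible person; consent is the only path in those branches.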
MN-01 Minor User AI Safety Protections · Deployer · Chatbot · Minors · Healthcare
§ 554J.3(3)
Plain Language
Deployers may not make a therapeutic chatbot available to minors unless six cumulative conditions are met: (a) a clear disclaimer at the start of each interaction that the chatbot is AI and not a licensed professional; (b) a licensed psychologist (chapter 154B) or mental health professional (chapter 154D) recommended the chatbot after evaluating the specific minor; (c) the developer has significant testing documentation; (d) peer-reviewed clinical trial data demonstrates safety and efficacy for the minor's condition; (e) the deployer disclosed the chatbot's functions, limitations, and data privacy policies to both the recommending professional and the minor's parents, guardians, or custodians; and (f) the deployer has developed and implemented protocols for testing, risk identification, risk mitigation, and harm rectification. All six conditions must be satisfied — failure to meet any one is a violation. This is one of the most restrictive minor-access regimes for therapeutic AI chatbots in any U.S. jurisdiction.
Statutory Text
3. A deployer shall not make a therapeutic chatbot available for a minor's use or purchase unless all of the following apply: a. The therapeutic chatbot provides a clear and conspicuous disclaimer at the beginning of each interaction with the therapeutic chatbot that the therapeutic chatbot is an artificial intelligence and is not a licensed professional. b. The therapeutic chatbot was recommended for the minor's use by an individual licensed under chapter 154B or 154D after performing an evaluation of the minor. c. The therapeutic chatbot's developer has significant documentation of how the therapeutic chatbot was tested. d. Peer-reviewed clinical trial data exists demonstrating the therapeutic chatbot would be a safe, effective tool for the minor's diagnosis, treatment, mitigation, or prevention of a mental health condition. e. The therapeutic chatbot's deployer provided clear disclosures of the chatbot's functions, limitations, and data privacy policies to the individual recommending the therapeutic chatbot under paragraph "b", and to the minor's parents, guardians, or custodians. f. The therapeutic chatbot's deployer developed and implemented protocols for testing the therapeutic chatbot for risks to users, identifying possible risks the therapeutic chatbot poses to users, mitigating risks the therapeutic chatbot poses to users, and quickly rectifying harm the therapeutic chatbot may have caused a user.
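Because all six conditions in paragraphs (a) through (f) are cumulative, a compliance check reduces to a conjunction over six facts. A minimal sketch, assuming hypothetical field names that paraphrase the statutory paragraphs (illustrative only, not legal advice):

```python
from dataclasses import dataclass

@dataclass
class TherapeuticChatbotRecord:
    ai_disclaimer_each_session: bool   # (a) clear disclaimer at start of each interaction
    licensed_recommendation: bool      # (b) ch. 154B/154D professional evaluated this minor
    testing_documentation: bool        # (c) developer has significant testing documentation
    peer_reviewed_trial_data: bool     # (d) clinical data for the minor's condition
    disclosures_to_parties: bool       # (e) disclosures to recommender and parents/guardians
    risk_protocols_implemented: bool   # (f) testing, risk identification/mitigation, harm rectification

def available_to_minor(r: TherapeuticChatbotRecord) -> bool:
    """All six conditions must hold; failure of any one is a violation."""
    return all([r.ai_disclaimer_each_session, r.licensed_recommendation,
                r.testing_documentation, r.peer_reviewed_trial_data,
                r.disclosures_to_parties, r.risk_protocols_implemented])
```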
Other · Chatbot
§ 554J.4
Plain Language
The entire chapter does not apply to chatbots designed to provide only generic responses (including encouragement) where a reasonable person would not expect the responses to create an emotional bond. This carves out simple customer-service or FAQ-style bots from all obligations in the chapter. Both prongs must be satisfied — the chatbot must be designed to provide only generic responses, and a reasonable person must not expect the responses to create an emotional bond.
Statutory Text
This chapter shall not apply to a chatbot that is designed to only provide generic responses, including encouragement, to input from a user, and a reasonable person would not expect the responses to create an emotional bond between two individuals having the same conversation.