HF-2507
IA · State · USA
Status: Pending
Proposed Effective Date
2027-07-01
Iowa House File 2507 — A bill for an act establishing requirements and guidelines for conversational AI services, and providing civil penalties, and including applicability provisions
Summary

Iowa HF 2507 establishes requirements for operators of conversational AI services — defined as publicly accessible AI systems whose primary purpose is simulating human conversation. Operators must disclose AI identity to minor account holders (via persistent disclaimer or session-start plus every-three-hours notice), prevent sexually explicit content and emotional dependency outputs directed at minors, offer privacy management tools for minors and their parents/guardians, adopt suicide and self-harm crisis referral protocols for all users, and refrain from misrepresenting their service as providing licensed psychology or behavioral health services. The bill is enforced exclusively by the Iowa Attorney General with civil penalties of up to $1,000 per violation (capped at $500,000 per operator) and injunctive relief; no private right of action is created. The bill applies July 1, 2027.

Enforcement & Penalties
Enforcement Authority
The attorney general has exclusive authority to enforce this chapter and must adopt rules pursuant to chapter 17A to administer it. Enforcement is agency-initiated: the chapter expressly provides that it does not create a private right of action under this chapter or any other law. The chapter also shields a developer of an artificial intelligence model from liability solely because a third party used the developer's model to create or train a conversational AI service.
Penalties
An operator that violates this chapter is subject to injunctive relief and liable for the greater of actual damages or a civil penalty of $1,000 per violation, with the civil penalty capped at $500,000 per operator.
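As a back-of-the-envelope illustration of that penalty structure (a hedged sketch, not legal advice; the $1,000 figure, the $500,000 cap, and the greater-of comparison come from § 554J.6(1), while how individual "violations" are counted is not defined by the bill and is an assumption left open here):

def estimated_exposure(actual_damages: float, violation_count: int) -> float:
    """Rough civil-exposure estimate under HF 2507 section 554J.6(1).

    Liability is the greater of actual damages or a $1,000-per-violation
    civil penalty; the $500,000-per-operator cap is read here as applying
    to the penalty component. How a "violation" is counted (per user, per
    session, per output) is not specified in the bill.
    """
    PER_VIOLATION = 1_000
    OPERATOR_CAP = 500_000
    civil_penalty = min(violation_count * PER_VIOLATION, OPERATOR_CAP)
    return max(actual_damages, civil_penalty)

# Example: 750 counted violations and no proven actual damages
# -> min(750 * 1,000, 500,000) = 500,000, so exposure is $500,000.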
Who Is Covered
"Operator" means a person who develops and makes a conversational AI service available to the public. "Operator" does not include a mobile device application store or a search engine solely because the mobile device application store or a search engine provides access to a conversational AI service.
What Is Covered
"Conversational AI service" means an artificial intelligence, available by software application, web interface, or computer program, that is accessible to the general public and that has the primary purpose of simulating human conversation and interaction through text, audio communication, or visual communication. "Conversational AI service" does not include a software application, web interface, or computer program that is any of the following: (1) Primarily designed and marketed for research and development purposes. (2) A feature within another software application, web interface, or computer program that does not have the primary purpose of simulating human conversation and interaction through text, audio communication, or visual communication. (3) Designed to provide outputs relating to a narrow and discrete topic. (4) Primarily designed and marketed for commercial use by business entities to assist customers in obtaining services or purchasing goods from the business. (5) Functions as a speaker and voice command interface or voice-activated virtual assistant for an electronic device widely available to consumers. (6) Used by a business solely for internal purposes.
Compliance Obligations · 9 obligations
T-01 AI Identity Disclosure · T-01.1, T-01.2 · Deployer · Chatbot · Minors
§ 554J.2(1)
Plain Language
Operators must clearly and conspicuously disclose to minor account holders that they are interacting with AI. The operator may satisfy this obligation through either (a) a persistent visible disclaimer always displayed during the interaction, or (b) a disclaimer at the beginning of each interaction plus a recurring disclaimer at least every three hours of continuous interaction. This is an unconditional disclosure requirement — it applies whenever the operator knows or is reasonably certain the user is under 18, regardless of whether the chatbot could be mistaken for a human.
Statutory Text
1. An operator shall clearly and conspicuously disclose to a minor account holder that the minor account holder is interacting with artificial intelligence through any of the following: a. A persistent visible disclaimer. b. All of the following: (1) A disclaimer that appears at the beginning of each interaction between the operator's conversational AI service and a minor account holder. (2) A disclaimer that appears at least once every three hours of continuous interaction between the operator's conversational AI service and a minor account holder.
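A minimal sketch of how an operator might track the option (b) cadence above (illustrative only; the session-start disclaimer and the three-hour interval come from § 554J.2(1)(b), while everything else, including the class and method names, is assumed):

from datetime import datetime, timedelta

RECURRING_INTERVAL = timedelta(hours=3)  # section 554J.2(1)(b)(2)

class MinorDisclosureTracker:
    """Tracks when the AI-identity disclaimer is due for a minor account holder.

    Covers option (b) of section 554J.2(1): a disclaimer at the beginning of
    each interaction plus one at least every three hours of continuous
    interaction. Option (a), a persistent visible disclaimer, needs no timer.
    """

    def __init__(self) -> None:
        self.last_shown: datetime | None = None

    def disclaimer_due(self, now: datetime) -> bool:
        if self.last_shown is None:              # beginning of the interaction
            return True
        return now - self.last_shown >= RECURRING_INTERVAL

    def mark_shown(self, now: datetime) -> None:
        self.last_shown = now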
MN-01 Minor User AI Safety Protections · MN-01.4 · Deployer · Chatbot · Minors
§ 554J.2(2)
Plain Language
Operators are prohibited from using variable-ratio reward mechanics (points or similar rewards at unpredictable intervals) toward minor users when the intent is to encourage increased engagement with the conversational AI service. This is an anti-addictive-design prohibition targeting variable reinforcement schedules specifically directed at minors.
Statutory Text
2. An operator shall not provide a minor user with points or similar rewards at unpredictable intervals with the intent to encourage increased engagement with the operator's conversational AI service.
MN-01 Minor User AI Safety Protections · MN-01.5 · Deployer · Chatbot · Minors
§ 554J.2(4)
Plain Language
Operators must implement reasonable measures to prevent the conversational AI service from generating statements that would mislead a reasonable person into thinking they are interacting with a human when interacting with minor account holders. The statute provides a non-exhaustive list of prohibited statement types: claims of sentience or being human, simulated emotional dependence on the minor, simulated romantic interactions or sexual innuendo, and role-playing adult-minor romantic relationships. The 'reasonable measures' standard and the 'including but not limited to' framing mean these are minimum examples — operators must also address analogous deceptive statements not specifically enumerated.
Statutory Text
4. An operator shall institute reasonable measures to prevent the operator's conversational AI service from generating statements that would lead a reasonable individual to believe that the individual is interacting with a human, including but not limited to all of the following: a. Explicit claims that the conversational AI service is sentient or human. b. Statements that simulate emotional dependence on a minor account holder. c. Statements that simulate a romantic interaction or a sexual innuendo. d. Role-playing an adult-minor romantic relationship.
S-02 Prohibited Conduct & Output Restrictions · S-02.6 · Deployer · Chatbot · Minors
§ 554J.2(3)
Plain Language
Operators must implement reasonable measures to prevent their conversational AI service from (a) producing visual depictions of sexually explicit material for minor account holders, (b) directing or encouraging minor account holders to engage in sexually explicit conduct, and (c) sexually objectifying minor account holders. 'Sexually explicit conduct' and 'visual depiction' incorporate the federal definitions from 18 U.S.C. § 2256. This is a reasonable-measures standard, not an absolute prohibition — but operators must demonstrate affirmative steps to prevent these outputs.
Statutory Text
3. An operator shall institute reasonable measures to prevent the operator's conversational AI service from doing any of the following for minor account holders: a. Producing visual depictions of sexually explicit material. b. Stating that the minor account holder should engage in sexually explicit conduct. c. Sexually objectifying the minor account holder.
MN-01 Minor User AI Safety Protections · MN-01.3 · Deployer · Chatbot · Minors
§ 554J.2(5)
Plain Language
Operators must provide privacy and account management tools to three categories of recipients: (a) all minor account holders themselves; (b) parents or guardians of minors under thirteen; and (c) parents or guardians of minors who have additional risk factors identified by attorney general rule. For minors aged 13–17 with no identified risk factors, only the minor receives tools; for minors under 13, both the minor and the parent/guardian must have them. The attorney general's rulemaking authority to define additional risk factors can extend parental tools to minors 13 and older.
Statutory Text
5. a. An operator shall offer tools for minor account holders to manage the minor account holder's privacy and account settings. b. An operator shall offer tools for the parent or guardian of a minor account holder to manage the minor account holder's privacy and account settings if the minor is under thirteen years of age. c. An operator shall offer tools for the parent or guardian of a minor account holder to manage the minor account holder's privacy and account settings if the minor has additional risk factors identified by the attorney general by rule.
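A compact way to express who must be offered those tools (a sketch; the under-thirteen threshold and risk-factor trigger come from § 554J.2(5), while the risk-factor check itself is a placeholder pending attorney general rulemaking):

def tool_recipients(minor_age: int, has_ag_risk_factor: bool) -> set[str]:
    """Parties an operator must offer privacy/account management tools,
    per section 554J.2(5).

    - Every minor account holder receives tools (5.a).
    - A parent or guardian also receives tools if the minor is under
      thirteen (5.b) or has additional risk factors identified by the
      attorney general by rule (5.c).
    """
    recipients = {"minor_account_holder"}                 # 554J.2(5)(a)
    if minor_age < 13 or has_ag_risk_factor:              # 554J.2(5)(b)-(c)
        recipients.add("parent_or_guardian")
    return recipients

# tool_recipients(15, False) -> {"minor_account_holder"}
# tool_recipients(11, False) -> {"minor_account_holder", "parent_or_guardian"}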
T-01 AI Identity Disclosure · T-01.1 · Deployer · Chatbot
§ 554J.3
Plain Language
If a reasonable person using the conversational AI service would believe they are interacting with a human, the operator must display a persistent visible disclaimer that the service is AI. This is a conditional obligation — it triggers only when a reasonable person would be misled. The disclosure mechanism must be a persistent visible disclaimer, not a one-time notice. This applies to all users, not just minors. Compare to the minor-specific unconditional disclosure in § 554J.2(1).
Statutory Text
An operator shall clearly and conspicuously disclose using a persistent visible disclaimer that the operator's conversational AI service is artificial intelligence if a reasonable individual interacting with the conversational AI service would believe that the individual is interacting with a human.
S-04 AI Crisis Response Protocols · S-04.1 · Deployer · Chatbot
§ 554J.4
Plain Language
Operators must adopt and maintain protocols for their conversational AI service to respond to user prompts involving suicidal ideation or self-harm. At minimum, these protocols must include making reasonable efforts to refer users to crisis service providers — such as suicide hotlines, crisis text lines, or other appropriate crisis services. The 'includes but is not limited to' language means crisis referral is a floor, not a ceiling; additional protocol measures may be expected. This obligation applies to all users, not just minors.
Statutory Text
An operator shall adopt protocols for the operator's conversational AI service for responding to user prompts regarding suicidal ideation or self-harm that includes but is not limited to making reasonable efforts to refer the user to crisis service providers such as a suicide hotline, crisis text line, or other appropriate crisis service.
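A minimal sketch of the referral step (illustrative only; the bill sets crisis referral as the floor but does not prescribe a detection method, so the flags_self_harm check below is a stub an operator would replace with its own rules or classifier, and the listed resources are examples of the "crisis service providers" the statute names):

CRISIS_RESOURCES = [
    "988 Suicide & Crisis Lifeline (call or text 988)",
    "Crisis Text Line (text HOME to 741741)",
]

def flags_self_harm(prompt: str) -> bool:
    """Placeholder for the operator's own detection method (keyword rules,
    a trained classifier, or both); not specified by the bill."""
    raise NotImplementedError

def apply_crisis_protocol(prompt: str, draft_reply: str) -> str:
    """Append crisis referrals when a prompt involves suicidal ideation or
    self-harm, per section 554J.4. Referral is the statutory minimum; a full
    protocol may add escalation, rate limits, or human review."""
    if flags_self_harm(prompt):
        referral = "\n\nIf you are in crisis, please reach out now:\n" + \
                   "\n".join(f"- {r}" for r in CRISIS_RESOURCES)
        return draft_reply + referral
    return draft_reply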
CP-01 Deceptive & Manipulative AI Conduct · CP-01.9 · Deployer · Chatbot · Healthcare
§ 554J.5
Plain Language
Operators may not knowingly and intentionally cause or program their conversational AI service to make representations or statements that would lead a reasonable person to believe the service provides professional psychology or behavioral health services requiring licensure under Iowa chapters 154B (psychology) or 154D (behavioral health). This is a mental-state-gated prohibition — it requires both knowing and intentional conduct, so accidental or emergent outputs that happen to resemble professional health advice may not trigger liability. The standard is what a reasonable individual would believe, not what the service actually provides.
Statutory Text
An operator shall not knowingly and intentionally cause or program a conversational AI service to make a representation or statement that would lead a reasonable individual to believe that the conversational AI service is designed to provide professional psychology or behavioral health services that an individual would require licensure under chapter 154B or 154D to provide.
Other · Chatbot
§ 554J.6(1)-(4)
Plain Language
This provision establishes the enforcement and penalty framework for the chapter. Violating operators face injunctive relief and the greater of actual damages or $1,000 per violation (capped at $500,000 per operator). Enforcement is exclusively by the attorney general, who must also adopt implementing rules. No private right of action exists. Upstream AI model developers are shielded from liability solely because a third party used their model to create a conversational AI service. This provision creates no new affirmative compliance obligation — it structures enforcement of obligations imposed by other sections.
Statutory Text
1. An operator that violates this chapter shall be subject to an injunction and liable for the greater of the following: a. Actual damages. b. A civil penalty of one thousand dollars per violation, up to a maximum of five hundred thousand dollars per operator. 2. The attorney general shall have the authority to enforce this chapter and shall adopt rules pursuant to chapter 17A to administer this chapter. 3. This chapter shall not be construed to create a private right of action under this chapter or any other law. 4. This section shall not be construed to make a developer of an artificial intelligence model liable solely because a third party used the developer's artificial intelligence model to create or train a conversational AI service.