HF-2507
IA · State · USA
● Pending
Proposed Effective Date
2027-07-01
Iowa House File 2507 — A bill for an act establishing requirements and guidelines for conversational AI services, and providing civil penalties, and including applicability provisions.
Summary

Iowa HF 2507 establishes requirements for operators of conversational AI services — defined as publicly accessible AI systems whose primary purpose is simulating human conversation. The bill imposes heightened obligations when users are minors, including mandatory AI identity disclosure via persistent or recurring disclaimers, prohibitions on addictive reward mechanisms, and reasonable measures to prevent sexually explicit content, emotional dependency simulations, and deceptive human-like interactions. For all users, operators must disclose AI identity when a reasonable person could be misled, adopt suicide and self-harm response protocols with crisis referrals, and refrain from representing the service as providing licensed psychology or behavioral health services. Enforcement is exclusively through the attorney general, with civil penalties up to $1,000 per violation and a $500,000 cap per operator. No private right of action is created.

Enforcement & Penalties
Enforcement Authority
The attorney general has authority to enforce the chapter and shall adopt rules pursuant to chapter 17A to administer it. Enforcement is agency-initiated. The statute expressly provides that it shall not be construed to create a private right of action under this chapter or any other law. The statute also provides that it shall not be construed to make a developer of an artificial intelligence model liable solely because a third party used the developer's model to create or train a conversational AI service.
Penalties
An operator that violates the chapter is subject to an injunction and liable for the greater of actual damages or a civil penalty of $1,000 per violation, up to a maximum of $500,000 per operator. The civil penalty does not require proof of actual monetary harm.
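The exposure math can be sketched as a simple calculation. This is a hypothetical illustration of the statute's structure, not legal advice; it assumes the $500,000 cap applies to the aggregate civil penalty per operator, which would ultimately be a question of statutory interpretation.

```python
def max_exposure(violations: int, actual_damages: float = 0.0) -> float:
    """Greater of actual damages or $1,000 per violation, reading the
    $500,000 cap as limiting the aggregate civil penalty per operator.
    Illustrative only; names and assumptions are the editor's, not the bill's."""
    civil_penalty = min(1_000 * violations, 500_000)
    return max(actual_damages, civil_penalty)
```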
Who Is Covered
"Operator" means a person who develops and makes a conversational AI service available to the public. "Operator" does not include a mobile device application store or a search engine solely because the mobile device application store or a search engine provides access to a conversational AI service.
What Is Covered
"Conversational AI service" means an artificial intelligence, available by software application, web interface, or computer program, that is accessible to the general public and that has the primary purpose of simulating human conversation and interaction through text, audio communication, or visual communication. "Conversational AI service" does not include a software application, web interface, or computer program that is any of the following: (1) Primarily designed and marketed for research and development purposes. (2) A feature within another software application, web interface, or computer program that does not have the primary purpose of simulating human conversation and interaction through text, audio communication, or visual communication. (3) Designed to provide outputs relating to a narrow and discrete topic. (4) Primarily designed and marketed for commercial use by business entities to assist customers in obtaining services or purchasing goods from the business. (5) Functions as a speaker and voice command interface or voice-activated virtual assistant for an electronic device widely available to consumers. (6) Used by a business solely for internal purposes.
Compliance Obligations · 8 obligations
T-01 AI Identity Disclosure · T-01.1 · T-01.2 · Deployer · Chatbot · Minors
§ 554J.2(1)
Plain Language
When the operator knows or is reasonably certain a user is under 18, it must clearly and conspicuously disclose that the user is interacting with AI. The operator may satisfy this through either (a) a persistent visible disclaimer always on screen, or (b) a disclaimer at the beginning of each interaction plus a recurring disclaimer at least every three hours of continuous use. This is an unconditional obligation for minor account holders — no reasonable-person trigger is required. The operator has flexibility to choose between the two disclosure methods.
Statutory Text
1. An operator shall clearly and conspicuously disclose to a minor account holder that the minor account holder is interacting with artificial intelligence through any of the following: a. A persistent visible disclaimer. b. All of the following: (1) A disclaimer that appears at the beginning of each interaction between the operator's conversational AI service and a minor account holder. (2) A disclaimer that appears at least once every three hours of continuous interaction between the operator's conversational AI service and a minor account holder.
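For operators choosing option (b), the timing rule reduces to a small amount of session logic. The sketch below is a hypothetical illustration of that logic only; class and variable names are the editor's, and it assumes "continuous interaction" is measured by elapsed session time.

```python
import time

DISCLOSURE_TEXT = "You are interacting with artificial intelligence, not a human."
RECURRENCE_SECONDS = 3 * 60 * 60  # at least once every three hours of continuous use

class MinorDisclosureTracker:
    """Tracks when the AI-identity disclaimer must be (re)shown to a minor
    account holder under the option (b) disclosure method. Illustrative only."""

    def __init__(self):
        self.last_shown = None  # None => a new interaction has just begun

    def disclaimer_due(self, now=None):
        now = time.monotonic() if now is None else now
        if self.last_shown is None:
            return True  # beginning of each interaction: disclaimer required
        return (now - self.last_shown) >= RECURRENCE_SECONDS

    def mark_shown(self, now=None):
        self.last_shown = time.monotonic() if now is None else now
```

An operator using option (a), a persistent visible disclaimer, would not need timing logic at all, which is part of the flexibility the section grants.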
MN-01 Minor User AI Safety Protections · MN-01.4 · Deployer · Chatbot · Minors
§ 554J.2(2)
Plain Language
Operators may not provide minor users with points or similar rewards at unpredictable intervals (a variable-ratio reward schedule, e.g., points or badges delivered on an unpredictable cadence) with the intent to encourage increased engagement with the conversational AI service. The prohibition is intent-based: it reaches reward mechanisms deliberately designed to drive engagement, not every gamification feature. Note that the statute uses "minor user" here rather than "minor account holder," which may give this provision a broader scope.
Statutory Text
2. An operator shall not provide a minor user with points or similar rewards at unpredictable intervals with the intent to encourage increased engagement with the operator's conversational AI service.
S-02 Prohibited Conduct & Output Restrictions · S-02.6 · Deployer · Chatbot · Minors
§ 554J.2(3)
Plain Language
Operators must implement reasonable measures to prevent their conversational AI service from: (1) producing visual depictions of sexually explicit material for minor account holders, (2) telling minors they should engage in sexually explicit conduct, and (3) sexually objectifying minor account holders. The definitions of "sexually explicit conduct" and "visual depiction" incorporate the federal definitions under 18 U.S.C. §2256. The standard is "reasonable measures" — not absolute prevention — so operators have some implementation flexibility but must demonstrate affirmative steps to block this content for minors.
Statutory Text
3. An operator shall institute reasonable measures to prevent the operator's conversational AI service from doing any of the following for minor account holders: a. Producing visual depictions of sexually explicit material. b. Stating that the minor account holder should engage in sexually explicit conduct. c. Sexually objectifying the minor account holder.
MN-01 Minor User AI Safety Protections · MN-01.5 · Deployer · Chatbot · Minors
§ 554J.2(4)
Plain Language
Operators must take reasonable measures to prevent the conversational AI service from generating statements that would lead a reasonable person to believe they are interacting with a human when engaging with minor account holders. The bill provides a non-exhaustive list of prohibited content: claims of sentience or humanity, emotional dependence simulations, romantic or sexually suggestive statements, and adult-minor romantic role-playing. This provision combines anti-deception and emotional dependency protections specifically for minors. The "including but not limited to" language means the listed behaviors are illustrative — operators must also address other statements that could create a false impression of human interaction.
Statutory Text
4. An operator shall institute reasonable measures to prevent the operator's conversational AI service from generating statements that would lead a reasonable individual to believe that the individual is interacting with a human, including but not limited to all of the following: a. Explicit claims that the conversational AI service is sentient or human. b. Statements that simulate emotional dependence on a minor account holder. c. Statements that simulate a romantic interaction or a sexual innuendo. d. Role-playing an adult-minor romantic relationship.
MN-01 Minor User AI Safety Protections · MN-01.3 · Deployer · Chatbot · Minors
§ 554J.2(5)
Plain Language
Operators must provide privacy and account management tools directly to minor account holders. When the minor is under 13, operators must also provide such tools to the minor's parent or guardian, and the attorney general may identify by rule additional risk factors that trigger the same parent/guardian tool requirement for older minors. This creates a tiered system: all minors get self-management tools, while for under-13 minors and minors with AG-identified risk factors the parent or guardian also gets management tools. The specific features required are not prescribed, but the tools must cover privacy and account settings.
Statutory Text
5. a. An operator shall offer tools for minor account holders to manage the minor account holder's privacy and account settings. b. An operator shall offer tools for the parent or guardian of a minor account holder to manage the minor account holder's privacy and account settings if the minor is under thirteen years of age. c. An operator shall offer tools for the parent or guardian of a minor account holder to manage the minor account holder's privacy and account settings if the minor has additional risk factors identified by the attorney general by rule.
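The tiering collapses into a single predicate. The sketch below is purely illustrative of the statute's structure; the function name and parameters are hypothetical, and "risk factors" would depend on rules the attorney general has not yet adopted.

```python
def parental_tools_required(age: int, ag_risk_factors: bool = False) -> bool:
    """Whether paragraphs 5(b)-(c) require parent/guardian management tools,
    in addition to the self-management tools all minor account holders
    receive under 5(a). Hypothetical compliance-logic sketch only."""
    return age < 13 or ag_risk_factors
```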
T-01 AI Identity Disclosure · T-01.1 · Deployer · Chatbot
§ 554J.3
Plain Language
For all users (not just minors), operators must display a persistent visible disclaimer identifying the conversational AI service as AI, but only when a reasonable person would otherwise believe they are interacting with a human. Unlike the minor-specific obligation in § 554J.2(1), this is a conditional trigger — if no reasonable person would be misled, no disclosure is required. The disclosure must be persistent and visible, meaning it must remain on screen during the interaction rather than appearing only once.
Statutory Text
An operator shall clearly and conspicuously disclose using a persistent visible disclaimer that the operator's conversational AI service is artificial intelligence if a reasonable individual interacting with the conversational AI service would believe that the individual is interacting with a human.
MN-02 AI Crisis Response Protocols · MN-02.1 · Deployer · Chatbot
§ 554J.4
Plain Language
Operators must adopt and maintain protocols governing how the conversational AI service responds when any user (not limited to minors) expresses suicidal ideation or self-harm. At a minimum, the protocol must include making reasonable efforts to refer the user to crisis services such as a suicide hotline or crisis text line. The "includes but is not limited to" language means crisis referral is a floor, not a ceiling — operators should consider additional response measures. Unlike CA SB 243, this bill does not require the protocol to be published on the operator's website or impose annual reporting obligations.
Statutory Text
An operator shall adopt protocols for the operator's conversational AI service for responding to user prompts regarding suicidal ideation or self-harm that includes but is not limited to making reasonable efforts to refer the user to crisis service providers such as a suicide hotline, crisis text line, or other appropriate crisis service.
CP-01 Deceptive & Manipulative AI Conduct · CP-01.9 · Deployer · Chatbot · Healthcare
§ 554J.5
Plain Language
Operators are prohibited from knowingly and intentionally causing or programming a conversational AI service to represent — through its outputs, interface, or marketing — that it is designed to provide professional psychology or behavioral health services that would require Iowa licensure under chapter 154B (psychologists) or 154D (behavioral health). The mental state requirement is dual: the operator must act both knowingly and intentionally. This does not prevent AI from discussing mental health topics generally — it prohibits creating the impression that the AI is a licensed professional service. The provision applies to all users, not just minors.
Statutory Text
An operator shall not knowingly and intentionally cause or program a conversational AI service to make a representation or statement that would lead a reasonable individual to believe that the conversational AI service is designed to provide professional psychology or behavioral health services that an individual would require licensure under chapter 154B or 154D to provide.