SF-2417
IA · State · USA
● Pending
Proposed Effective Date
2027-07-01
Iowa Senate File 2417 — An Act establishing requirements and guidelines for conversational AI services, and providing civil penalties, and including applicability provisions
Summary

Iowa SF 2417 imposes safety and disclosure obligations on operators of conversational AI services accessible to the general public. It requires operators to disclose AI identity to minor account holders via persistent disclaimers or session-start plus periodic reminders, prohibits variable-ratio reward mechanics targeting minors, mandates reasonable measures to prevent sexually explicit content and emotional dependency simulations directed at minors, and requires adoption of crisis response protocols for suicidal ideation and self-harm. The bill also prohibits operators from misrepresenting their AI as providing licensed psychology or behavioral health services. Enforcement is exclusively through the Iowa attorney general, with civil penalties of up to $1,000 per violation capped at $500,000 per operator, plus injunctive relief. The bill applies July 1, 2027.

Enforcement & Penalties
Enforcement Authority
The attorney general has authority to enforce the chapter and shall adopt rules pursuant to chapter 17A to administer the chapter. Enforcement is agency-initiated by the attorney general. No private right of action is created under this chapter or any other law. Upstream AI model developers are shielded from liability solely because a third party used the developer's model to create or train a conversational AI service.
Penalties
The greater of actual damages or a civil penalty of $1,000 per violation, capped at $500,000 per operator. Injunctive relief is also available. Civil penalties collected are deposited into the general fund of the state. Statutory damages do not require proof of actual monetary harm.
Who Is Covered
"Operator" means a person who develops and makes a conversational AI service available to the public. "Operator" does not include a mobile device application store or a search engine solely because the mobile device application store or a search engine provides access to a conversational AI service.
What Is Covered
"Conversational AI service" means an artificial intelligence, available by software application, web interface, or computer program, that is accessible to the general public and that has the primary purpose of simulating human conversation and interaction through text, audio communication, or visual communication. "Conversational AI service" does not include a software application, web interface, or computer program that is any of the following: (1) Primarily designed and marketed for research and development purposes. (2) A feature within another software application, web interface, or computer program that does not have the primary purpose of simulating human conversation and interaction through text, audio communication, or visual communication. (3) Designed to provide outputs relating to a narrow and discrete topic. (4) Primarily designed and marketed for commercial use by business entities to assist customers in obtaining services or purchasing goods from the business. (5) Functions as a speaker and voice command interface or voice-activated virtual assistant for an electronic device widely available to consumers. (6) Used by a business solely for internal purposes.
Compliance Obligations · 7 obligations
T-01 AI Identity Disclosure · T-01.1 · T-01.2 · Deployer · Chatbot · Minors
§ 554J.2(1)
Plain Language
Operators must clearly and conspicuously disclose to minor account holders that they are interacting with AI. The operator may satisfy this through either (a) a persistent visible disclaimer that remains on screen, or (b) a disclaimer at the beginning of each interaction plus a recurring disclaimer at least every three hours during continuous sessions. Unlike the general consumer disclosure in § 554J.3, this minor-specific obligation is unconditional — it applies regardless of whether a reasonable person would be misled.
Statutory Text
1. An operator shall clearly and conspicuously disclose to a minor account holder that the minor account holder is interacting with artificial intelligence through any of the following: a. A persistent visible disclaimer. b. All of the following: (1) A disclaimer that appears at the beginning of each interaction between the operator's conversational AI service and a minor account holder. (2) A disclaimer that appears at least once every three hours of continuous interaction between the operator's conversational AI service and a minor account holder.
MN-01 Minor User AI Safety Protections · MN-01.4 · Deployer · Chatbot · Minors
§ 554J.2(2)
Plain Language
Operators may not use variable-ratio reward mechanics — such as points or similar incentives delivered at unpredictable intervals — to drive engagement by minor users. This targets addictive design patterns (sometimes called 'loot box' or 'slot machine' mechanics) that exploit unpredictability to encourage compulsive use. The prohibition requires intent to encourage increased engagement, so incidental or fixed-schedule rewards are not covered.
Statutory Text
2. An operator shall not provide a minor user with points or similar rewards at unpredictable intervals with the intent to encourage increased engagement with the operator's conversational AI service.
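One way to read the prohibition operationally: rewards to minor users must follow a fixed, predictable schedule, never a variable-ratio one. The sketch below assumes a hypothetical every-Nth-interaction schedule; the statute does not prescribe any particular mechanism, and the intent element is a legal question the code cannot capture.

```python
# Illustrative guard for § 554J.2(2): issue rewards to minors only on a
# deterministic schedule, so no reward ever arrives at an unpredictable
# interval. FIXED_REWARD_EVERY_N is an assumed design choice, not statutory.
FIXED_REWARD_EVERY_N = 10

def reward_allowed(is_minor: bool, interaction_count: int) -> bool:
    """Permit a reward only on a fixed, predictable cadence for minor users."""
    if not is_minor:
        return True  # the prohibition is limited to minor users
    return interaction_count % FIXED_REWARD_EVERY_N == 0
```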
MN-01 Minor User AI Safety Protections · MN-01.5 · MN-01.6 · Deployer · Chatbot · Minors
§ 554J.2(3)-(4)
Plain Language
Operators must implement reasonable measures to prevent their conversational AI service from (1) producing sexually explicit visual content for minor account holders, (2) encouraging minors to engage in sexually explicit conduct, (3) sexually objectifying minors, and (4) generating statements that would lead a reasonable person to believe they are interacting with a human — including claims of sentience, simulated emotional dependence on a minor, simulated romantic interactions or sexual innuendo, and adult-minor romantic role-playing. The sexually explicit conduct and visual depiction definitions incorporate the federal definitions at 18 U.S.C. § 2256. The standard is 'reasonable measures,' not absolute prevention, providing operators some latitude in implementation.
Statutory Text
3. An operator shall institute reasonable measures to prevent the operator's conversational AI service from doing any of the following for minor account holders: a. Producing visual depictions of sexually explicit material. b. Stating that the minor account holder should engage in sexually explicit conduct. c. Sexually objectifying the minor account holder. 4. An operator shall institute reasonable measures to prevent the operator's conversational AI service from generating statements that would lead a reasonable individual to believe that the individual is interacting with a human, including but not limited to all of the following: a. Explicit claims that the conversational AI service is sentient or human. b. Statements that simulate emotional dependence on a minor account holder. c. Statements that simulate a romantic interaction or a sexual innuendo. d. Role-playing an adult-minor romantic relationship.
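The 'reasonable measures' standard is commonly implemented as an output gate applied to minor accounts. The sketch below is a deliberately simplified illustration: a real system would rely on trained classifiers to detect the categories, and what counts as 'reasonable' is ultimately a legal judgment. The category labels mirror the statute; everything else is assumed.

```python
# Output categories prohibited for minor account holders under
# § 554J.2(3)-(4). Detection of these categories (e.g., by a content
# classifier) is assumed to happen upstream of this check.
PROHIBITED_FOR_MINORS = {
    "sexually_explicit_visual",      # § 554J.2(3)(a)
    "sexual_conduct_encouragement",  # § 554J.2(3)(b)
    "sexual_objectification",        # § 554J.2(3)(c)
    "sentience_or_human_claim",      # § 554J.2(4)(a)
    "emotional_dependence",          # § 554J.2(4)(b)
    "romantic_or_sexual_innuendo",   # § 554J.2(4)(c)
    "adult_minor_roleplay",          # § 554J.2(4)(d)
}

def output_permitted(is_minor_account: bool, detected: set[str]) -> bool:
    """Block an output for a minor if any prohibited category was detected."""
    if not is_minor_account:
        return True
    return not (detected & PROHIBITED_FOR_MINORS)
```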
MN-01 Minor User AI Safety Protections · MN-01.3 · Deployer · Chatbot · Minors
§ 554J.2(5)
Plain Language
Operators must provide three tiers of privacy and account management tools: (a) tools directly available to all minor account holders to manage their own privacy and account settings; (b) tools available to a parent or guardian to manage a minor's privacy and account settings when the minor is under thirteen; and (c) tools available to a parent or guardian to manage a minor's privacy and account settings as appropriate based on relevant risks, regardless of the minor's age. For minors under thirteen, both the minor-facing and parent-facing tools must be provided. The 'as appropriate based on relevant risks' language in subsection (c) gives operators discretion to calibrate parental tools to the risk profile of the platform.
Statutory Text
5. a. An operator shall offer tools for minor account holders to manage the minor account holder's privacy and account settings. b. An operator shall offer tools for the parent or guardian of a minor account holder to manage the minor account holder's privacy and account settings if the minor is under thirteen years of age. c. An operator shall offer tools for the parent or guardian of a minor account holder to manage the minor account holder's privacy and account settings as appropriate based on relevant risks.
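The three-tier tool matrix can be summarized as a small decision function. This is a sketch under stated assumptions: the `platform_risk_flag` input stands in for an operator's own risk assessment under subsection (c), which the statute leaves to operator discretion.

```python
# Tool tiers under § 554J.2(5):
#   (a) minor-facing settings tools, required for every minor account holder;
#   (b) parent/guardian tools, mandatory when the minor is under 13;
#   (c) parent/guardian tools "as appropriate based on relevant risks,"
#       regardless of the minor's age.
def required_tools(minor_age: int, platform_risk_flag: bool) -> set[str]:
    tools = {"minor_settings"}             # (a): always required
    if minor_age < 13:
        tools.add("parental_settings")     # (b): mandatory under thirteen
    elif platform_risk_flag:
        tools.add("parental_settings")     # (c): risk-calibrated at any age
    return tools
```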
T-01 AI Identity Disclosure · T-01.1 · T-01.2 · Deployer · Chatbot
§ 554J.3
Plain Language
If a reasonable person interacting with the conversational AI service would believe they are talking to a human, the operator must disclose that the service is AI. The disclosure must be made via either a persistent visible disclaimer or a disclaimer appearing at least every three hours of continuous interaction. This is a conditional obligation — it triggers only when a reasonable person could be misled. Compare to the unconditional minor-specific disclosure in § 554J.2(1), which applies regardless of whether a reasonable person would be misled.
Statutory Text
An operator shall clearly and conspicuously disclose using a persistent visible disclaimer, or a disclaimer that appears after every three hours of continuous interaction with the operator's conversational AI service, that the operator's conversational AI service is artificial intelligence if a reasonable individual interacting with the conversational AI service would believe that the individual is interacting with a human.
MN-02 AI Crisis Response Protocols · MN-02.1 · Deployer · Chatbot
§ 554J.4
Plain Language
Operators must adopt and maintain protocols governing how their conversational AI service responds to user prompts expressing suicidal ideation or self-harm. At minimum, the protocol must include making reasonable efforts to refer users to crisis service providers — such as a suicide hotline, crisis text line, or equivalent service. The 'includes but is not limited to' language means crisis referral is a floor, not a ceiling — additional response measures may be appropriate. Unlike CA SB 243, there is no requirement to publish the protocol on the operator's website or to report crisis metrics to a state agency.
Statutory Text
An operator shall adopt protocols for the operator's conversational AI service for responding to user prompts regarding suicidal ideation or self-harm that includes but is not limited to making reasonable efforts to refer the user to crisis service providers such as a suicide hotline, crisis text line, or other appropriate crisis service.
CP-01 Deceptive & Manipulative AI Conduct · CP-01.9 · Deployer · Chatbot · Healthcare
§ 554J.5
Plain Language
Operators may not knowingly and intentionally cause their conversational AI service to represent — whether through explicit statements or implied functionality — that it provides professional psychology or behavioral health services requiring licensure under Iowa chapters 154B (psychology) or 154D (behavioral science). The mens rea requirement is dual: the operator must both 'knowingly and intentionally' cause or program the misrepresentation. This prohibits designing or configuring AI to present itself as a licensed mental health professional, but does not impose strict liability for unexpected model outputs — the prohibition targets deliberate design choices.
Statutory Text
An operator shall not knowingly and intentionally cause or program a conversational AI service to make a representation or statement that would lead a reasonable individual to believe that the conversational AI service is designed to provide professional psychology or behavioral health services that an individual would require licensure under chapter 154B or 154D to provide.