SF-2417
IA · State · USA
● Passed
Proposed Effective Date
2027-07-01
Iowa SF 2417 — A bill for an act establishing requirements and guidelines for conversational AI services, and providing civil penalties, and including applicability provisions
Summary

Iowa SF 2417 establishes safety, disclosure, and minor protection requirements for operators of conversational AI services — defined as publicly accessible AI systems whose primary purpose is simulating human conversation. Operators must disclose to minor account holders that they are interacting with AI, must not use addictive reward patterns with minors, must prevent sexually explicit content and emotional dependency simulations directed at minors, and must offer privacy management tools to minors and their parents. All users must receive AI identity disclosure when a reasonable person would believe they are interacting with a human. Operators must adopt crisis response protocols for suicidal ideation and self-harm prompts and may not represent their AI as providing licensed psychology or behavioral health services. Enforcement is exclusively by the attorney general, with civil penalties of up to $1,000 per violation and a $500,000 cap per operator. No private right of action is created. The bill applies July 1, 2027.

Enforcement & Penalties
Enforcement Authority
The attorney general has exclusive authority to enforce the chapter and must adopt rules pursuant to chapter 17A to administer it. Enforcement is agency-initiated; no private right of action is created under this chapter or any other law.
Penalties
Greater of actual damages or a civil penalty of $1,000 per violation, up to a maximum of $500,000 per operator. Injunctive relief is also available. Civil penalties collected are deposited into the general fund of the state. The civil penalty does not require proof of actual monetary harm.
Who Is Covered
"Operator" means a person who develops and makes a conversational AI service available to the public. "Operator" does not include a mobile device application store or a search engine solely because the mobile device application store or a search engine provides access to a conversational AI service.
What Is Covered
"Conversational AI service" means an artificial intelligence, available by software application, web interface, or computer program, that is accessible to the general public and that has the primary purpose of simulating human conversation and interaction through text, audio communication, or visual communication. "Conversational AI service" does not include a software application, web interface, or computer program that is any of the following: (1) Primarily designed and marketed for research and development purposes. (2) A feature within another software application, web interface, or computer program that does not have the primary purpose of simulating human conversation and interaction through text, audio communication, or visual communication. (3) Designed to provide outputs relating to a narrow and discrete topic. (4) Primarily designed and marketed for commercial use by business entities to assist customers in obtaining services or purchasing goods from the business. (5) Functions as a speaker and voice command interface or voice-activated virtual assistant for an electronic device widely available to consumers. (6) Used by a business solely for internal purposes.
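To make the definitional boundary concrete, the following Python sketch expresses the statutory test as a checklist. It is illustrative only: the field names are hypothetical, and whether a given exclusion applies is a legal judgment the code cannot make.

from dataclasses import dataclass

@dataclass
class ServiceProfile:
    # Hypothetical profile of a software offering; every field is illustrative.
    publicly_accessible: bool
    primary_purpose_is_conversation: bool   # simulating human conversation via text, audio, or visual communication
    research_and_development_only: bool     # exclusion (1)
    embedded_feature_only: bool             # exclusion (2)
    narrow_discrete_topic: bool             # exclusion (3)
    business_customer_service_tool: bool    # exclusion (4)
    device_voice_assistant: bool            # exclusion (5)
    internal_business_use_only: bool        # exclusion (6)

def is_conversational_ai_service(s: ServiceProfile) -> bool:
    # Sketch of the definitional test; not legal advice.
    if not (s.publicly_accessible and s.primary_purpose_is_conversation):
        return False
    return not any([
        s.research_and_development_only,
        s.embedded_feature_only,
        s.narrow_discrete_topic,
        s.business_customer_service_tool,
        s.device_voice_assistant,
        s.internal_business_use_only,
    ])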
Compliance Obligations · 9 obligations
T-01 AI Identity Disclosure · T-01.1 · T-01.2 · Deployer · Chatbot · Minors
§ 554J.2(1)
Plain Language
Operators must clearly and conspicuously disclose to minor account holders that they are interacting with AI. The disclosure may be delivered through either (a) a persistent visible disclaimer that remains on screen, or (b) a disclaimer at the beginning of each interaction plus a recurring reminder at least once every three hours of continuous interaction. This is an unconditional obligation: it applies to every minor account holder, regardless of whether a reasonable person would be misled.
Statutory Text
1. An operator shall clearly and conspicuously disclose to a minor account holder that the minor account holder is interacting with artificial intelligence through any of the following: a. A persistent visible disclaimer. b. All of the following: (1) A disclaimer that appears at the beginning of each interaction between the operator's conversational AI service and a minor account holder. (2) A disclaimer that appears at least once every three hours of continuous interaction between the operator's conversational AI service and a minor account holder.
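A minimal sketch of the two compliance paths in Python, assuming a hypothetical session model; the three-hour cadence and the start-of-interaction disclaimer come directly from the statutory text.

from datetime import datetime, timedelta

THREE_HOURS = timedelta(hours=3)

def minor_disclaimer_due(
    persistent_disclaimer_on_screen: bool,
    last_disclaimer_at: datetime | None,  # None means the interaction just began
    now: datetime,
) -> bool:
    # Path (a): a persistent visible disclaimer satisfies the duty continuously.
    if persistent_disclaimer_on_screen:
        return False
    # Path (b)(1): disclose at the beginning of each interaction.
    if last_disclaimer_at is None:
        return True
    # Path (b)(2): repeat at least once every three hours of continuous interaction.
    return now - last_disclaimer_at >= THREE_HOURS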
MN-01 Minor User AI Safety Protections · MN-01.4 · Deployer · Chatbot · Minors
§ 554J.2(2)
Plain Language
Operators may not give minor users points or similar rewards at unpredictable intervals intended to drive increased engagement with their conversational AI service. This targets variable-ratio reward schedules — a common addictive design pattern. The prohibition is intent-based: the operator must have the intent to encourage increased engagement through the unpredictable reward mechanism.
Statutory Text
2. An operator shall not provide a minor user with points or similar rewards at unpredictable intervals with the intent to encourage increased engagement with the operator's conversational AI service.
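To illustrate the design pattern at issue, here is a toy contrast between a predictable reward cadence and the randomized, variable-ratio cadence the subsection prohibits for minors; both helpers are hypothetical.

import random

def next_reward_delay_fixed(interval_minutes: int = 60) -> int:
    # Predictable cadence: the user can anticipate when rewards arrive,
    # so the rewards are not delivered "at unpredictable intervals".
    return interval_minutes

def next_reward_delay_variable() -> int:
    # Randomized cadence: the variable-ratio pattern that, when aimed at a
    # minor with intent to encourage increased engagement, violates § 554J.2(2).
    return random.randint(5, 180)

def schedule_reward_delay(user_is_minor: bool) -> int:
    # Conservative posture (an assumption, not statutory text): never serve
    # minors a randomized reward cadence, removing the intent question entirely.
    return next_reward_delay_fixed() if user_is_minor else next_reward_delay_variable()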
MN-01 Minor User AI Safety Protections · MN-01.6 · Deployer · Chatbot · Minors
§ 554J.2(3)
Plain Language
Operators must implement reasonable measures to prevent their conversational AI service from producing visual depictions of sexually explicit material for minor account holders, directing minor account holders to engage in sexually explicit conduct, or sexually objectifying minor account holders. The terms 'sexually explicit conduct' and 'visual depiction' are defined by reference to federal law at 18 U.S.C. § 2256. The standard is 'reasonable measures' — not absolute prevention — giving operators some implementation flexibility.
Statutory Text
3. An operator shall institute reasonable measures to prevent the operator's conversational AI service from doing any of the following for minor account holders: a. Producing visual depictions of sexually explicit material. b. Stating that the minor account holder should engage in sexually explicit conduct. c. Sexually objectifying the minor account holder.
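One way to operationalize "reasonable measures" is an output gate that screens candidate responses to minor accounts before delivery. The sketch below assumes operator-supplied classifier callables; nothing here is mandated by the statute.

from enum import Enum, auto
from typing import Callable, Iterable

class Violation(Enum):
    SEXUALLY_EXPLICIT_VISUAL = auto()   # § 554J.2(3)(a)
    DIRECTS_EXPLICIT_CONDUCT = auto()   # § 554J.2(3)(b)
    SEXUAL_OBJECTIFICATION = auto()     # § 554J.2(3)(c)

def screen_output_for_minor(
    text: str,
    images: list[bytes],
    classify_text: Callable[[str], Iterable[Violation]],     # hypothetical classifier
    classify_image: Callable[[bytes], Iterable[Violation]],  # hypothetical classifier
) -> list[Violation]:
    # Collect every violation category the classifiers detect; a non-empty
    # result means the response should be suppressed or regenerated.
    found: list[Violation] = []
    found.extend(classify_text(text))
    for img in images:
        found.extend(classify_image(img))
    return found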
MN-01 Minor User AI Safety Protections · MN-01.5 · Deployer · Chatbot · Minors
§ 554J.2(4)
Plain Language
Operators must take reasonable measures to prevent their conversational AI service from generating statements that would lead a reasonable person to believe they are interacting with a human when interacting with a minor account holder. The statute provides a non-exhaustive list of prohibited statement types: claims of sentience or humanity, simulated emotional dependence on the minor, simulated romantic interaction or sexual innuendo, and role-playing an adult-minor romantic relationship. The 'including but not limited to' language means the obligation extends beyond the enumerated examples to any statement that would create the belief of human interaction.
Statutory Text
4. An operator shall institute reasonable measures to prevent the operator's conversational AI service from generating statements that would lead a reasonable individual to believe that the individual is interacting with a human, including but not limited to all of the following: a. Explicit claims that the conversational AI service is sentient or human. b. Statements that simulate emotional dependence on a minor account holder. c. Statements that simulate a romantic interaction or a sexual innuendo. d. Role-playing an adult-minor romantic relationship.
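As a rough illustration of the enumerated statement types, the pattern guard below uses keyword heuristics; real "reasonable measures" would rely on model-based classification with conversational context, and these regexes are illustrative only.

import re

# Illustrative patterns keyed to § 554J.2(4)(a)-(c); the statutory list is
# non-exhaustive, so production coverage must be broader. Item (d), role-playing
# an adult-minor romantic relationship, requires conversational context and is
# not regex-detectable.
ENUMERATED_PATTERNS = {
    "claims_sentience_or_humanity": re.compile(r"\bi am (a )?(real )?(human|person|sentient)\b", re.I),
    "simulated_emotional_dependence": re.compile(r"\bi (need you|can't live without you)\b", re.I),
    "romantic_or_sexual_innuendo": re.compile(r"\b(my love|darling|sweetheart)\b", re.I),
}

def human_belief_flags(candidate_reply: str) -> list[str]:
    # Return the enumerated categories a candidate reply appears to trigger.
    return [name for name, pattern in ENUMERATED_PATTERNS.items()
            if pattern.search(candidate_reply)]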
MN-01 Minor User AI Safety Protections · MN-01.3 · Deployer · Chatbot · Minors
§ 554J.2(5)
Plain Language
Operators must provide three tiers of privacy and account management tools: (a) tools for all minor account holders themselves to manage their own privacy and account settings; (b) tools for parents or guardians to manage the minor's privacy and account settings when the minor is under thirteen; and (c) tools for parents or guardians to manage the minor's settings as appropriate based on relevant risks, regardless of age. The under-thirteen parental tools are mandatory; the risk-based parental tools apply to all minors and give operators discretion to calibrate based on assessed risks.
Statutory Text
5. a. An operator shall offer tools for minor account holders to manage the minor account holder's privacy and account settings. b. An operator shall offer tools for the parent or guardian of a minor account holder to manage the minor account holder's privacy and account settings if the minor is under thirteen years of age. c. An operator shall offer tools for the parent or guardian of a minor account holder to manage the minor account holder's privacy and account settings as appropriate based on relevant risks.
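The three tiers map naturally onto a small permissions function. In the sketch below, the age-13 threshold is statutory, while the risk flag stands in for whatever risk assessment an operator actually performs.

def settings_managers(minor_age: int, elevated_risk: bool) -> set[str]:
    # Who may manage a minor account holder's privacy and account settings.
    managers = {"minor"}                     # (a): always the minor themselves
    if minor_age < 13:
        managers.add("parent_or_guardian")   # (b): mandatory under thirteen
    elif elevated_risk:
        managers.add("parent_or_guardian")   # (c): risk-calibrated, any age
    return managers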
T-01 AI Identity Disclosure · T-01.1 · T-01.2 · Deployer · Chatbot
§ 554J.3
Plain Language
For all users (not just minors), operators must disclose that their conversational AI service is artificial intelligence when a reasonable person would believe they are interacting with a human. The disclosure must be made either via a persistent visible disclaimer or via a disclaimer that appears after every three hours of continuous interaction. This is a conditional obligation — it is triggered only when the AI is realistic enough that a reasonable person could be misled. Compare to § 554J.2(1), which imposes an unconditional disclosure obligation for minors regardless of whether a reasonable person would be misled.
Statutory Text
An operator shall clearly and conspicuously disclose using a persistent visible disclaimer, or a disclaimer that appears after every three hours of continuous interaction with the operator's conversational AI service, that the operator's conversational AI service is artificial intelligence if a reasonable individual interacting with the conversational AI service would believe that the individual is interacting with a human.
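The conditional trigger is what distinguishes this from the minors rule. A sketch, assuming the operator has already made the "reasonable person" assessment (a legal judgment the code cannot make for it):

def general_disclosure_due(
    reasonable_person_would_believe_human: bool,  # the § 554J.3 trigger condition
    persistent_disclaimer_on_screen: bool,
    hours_since_last_disclaimer: float,
) -> bool:
    # No trigger, no duty: contrast § 554J.2(1), which is unconditional for
    # minor account holders.
    if not reasonable_person_would_believe_human:
        return False
    if persistent_disclaimer_on_screen:
        return False
    return hours_since_last_disclaimer >= 3.0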
S-04 AI Crisis Response Protocols · S-04.1 · Deployer · Chatbot
§ 554J.4
Plain Language
Operators must adopt and maintain protocols governing how their conversational AI service responds to user prompts involving suicidal ideation or self-harm. At a minimum, the protocol must include making reasonable efforts to refer users to crisis service providers such as a suicide hotline, crisis text line, or equivalent. The 'includes but is not limited to' language means the referral is a floor — operators may need additional protocol elements depending on context. This applies to all users, not just minors.
Statutory Text
An operator shall adopt protocols for the operator's conversational AI service for responding to user prompts regarding suicidal ideation or self-harm that includes but is not limited to making reasonable efforts to refer the user to crisis service providers such as a suicide hotline, crisis text line, or other appropriate crisis service.
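A minimal sketch of the statutory floor: classify the prompt, and if it signals crisis, refer. The detector is a placeholder; the resources listed are the real U.S. 988 Suicide & Crisis Lifeline and Crisis Text Line.

from typing import Callable

CRISIS_RESOURCES = [
    "988 Suicide & Crisis Lifeline: call or text 988",
    "Crisis Text Line: text HOME to 741741",
]

def crisis_referral(prompt: str, detect_self_harm: Callable[[str], bool]) -> str | None:
    # `detect_self_harm` is a hypothetical operator-supplied classifier. Returning
    # a referral satisfies the statutory floor; the "includes but is not limited
    # to" language means a full protocol also needs escalation, logging, and
    # follow-up elements beyond this sketch.
    if detect_self_harm(prompt):
        return ("It sounds like you may be going through something serious. "
                "Trained counselors are available right now:\n- "
                + "\n- ".join(CRISIS_RESOURCES))
    return None  # no crisis signal; the normal response pipeline continues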
CP-01 Deceptive & Manipulative AI Conduct · CP-01.9 · Deployer · Chatbot · Healthcare
§ 554J.5
Plain Language
Operators may not knowingly and intentionally cause or program their conversational AI service to make statements or representations that would lead a reasonable person to believe it is designed to provide professional psychology or behavioral health services requiring licensure under Iowa chapters 154B (psychologists) or 154D (behavioral science). This is a scienter-based prohibition: it requires both knowledge and intent. Accidental or emergent outputs that a user might interpret as therapeutic advice do not violate this provision unless the operator knowingly and intentionally caused or programmed the behavior.
Statutory Text
An operator shall not knowingly and intentionally cause or program a conversational AI service to make a representation or statement that would lead a reasonable individual to believe that the conversational AI service is designed to provide professional psychology or behavioral health services that an individual would require licensure under chapter 154B or 154D to provide.
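Because the prohibition turns on what the operator knowingly causes or programs, the practical control is developer-side. A sketch of one such control, with illustrative wording that is an assumption rather than statutory language:

# Hypothetical system-level instruction an operator might program into the service.
SELF_DESCRIPTION_GUARDRAIL = (
    "You are an AI assistant. Never state or imply that you provide professional "
    "psychology or behavioral health services, or any service requiring licensure "
    "under Iowa chapter 154B or 154D."
)

def claims_licensed_services(reply: str) -> bool:
    # Crude regression check that a reply does not claim licensed status; a
    # production test suite would sweep far more phrasings.
    banned = ("licensed psychologist", "licensed therapist",
              "professional psychology services")
    return any(phrase in reply.lower() for phrase in banned)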
Other · Chatbot
§ 554J.6
Plain Language
This provision establishes the enforcement framework and penalty structure for the chapter. Violations are subject to injunctive relief and the greater of actual damages or $1,000 per violation (capped at $500,000 per operator). Enforcement authority is vested exclusively in the attorney general, who is also directed to adopt administrative rules. No private right of action is created. Upstream AI model developers are not liable solely because a third party used their model to create or train a conversational AI service. This creates no independent compliance obligation — it is an enforcement and liability provision.
Statutory Text
1. An operator that violates this chapter shall be subject to an injunction and liable for the greater of the following: a. Actual damages. b. A civil penalty of one thousand dollars per violation, up to a maximum of five hundred thousand dollars per operator. 2. The attorney general shall have the authority to enforce this chapter and shall adopt rules pursuant to chapter 17A to administer this chapter. 3. A civil penalty collected under this section shall be deposited into the general fund of the state. 4. This chapter shall not be construed to create a private right of action under this chapter or any other law. 5. This section shall not be construed to make a developer of an artificial intelligence model liable solely because a third party used the developer's artificial intelligence model to create or train a conversational AI service.
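The liability arithmetic is mechanical enough to state directly; a sketch using the $1,000 per-violation figure and $500,000 cap from subsection 1.

def operator_liability(actual_damages: float, violations: int) -> float:
    # Greater of actual damages or $1,000 per violation, with the civil
    # penalty capped at $500,000 per operator.
    civil_penalty = min(1_000 * violations, 500_000)
    return max(actual_damages, civil_penalty)

# Example: 600 violations and $50,000 in actual damages.
# civil_penalty = min(600,000, 500,000) = 500,000; liability = max(50,000, 500,000).
assert operator_liability(50_000, 600) == 500_000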