SSB-3011
IA · State · USA
● Pending
Proposed Effective Date
2025-07-01
Iowa Senate Study Bill 3011 — A bill for an act establishing requirements and guidelines for chatbots, making appropriations, and providing civil penalties
Summary

Imposes safety and disclosure obligations on any person who designs, develops, or makes a chatbot available. Prohibits making a chatbot available with knowledge or reckless disregard that it encourages suicide, self-injury, or physical or sexual violence. Requires chatbots to disclose their non-human identity at the start of each conversation and at thirty-minute intervals, to truthfully identify as non-human when asked, to disclaim the provision of professional services at the start of each conversation and at regular intervals, and to be programmed to prevent representing themselves as licensed professionals. Enforced exclusively by the Iowa attorney general through civil actions, with penalties up to $100,000 per violation. The attorney general is directed to adopt implementing rules.

Enforcement & Penalties
Enforcement Authority
Attorney general enforcement only. The attorney general may bring a civil action to enjoin a violation of or enforce compliance with the chapter or rules adopted pursuant to the chapter. No private right of action is created. No cure period or safe harbor is specified.
Penalties
Civil penalties of not more than $100,000 for each violation. The attorney general may also seek restitution or other appropriate relief. Penalties collected are credited to the general fund of the state and appropriated to the attorney general for the purpose of performing duties under the chapter.
Who Is Covered
Any person who designs, develops, or makes a chatbot available.
What Is Covered
"Chatbot" means any interactive computer service or software application that does all of the following:
a. Produces new expressive content or responses not fully predetermined by the developer or operator of the interactive computer service or software application.
b. Accepts open-ended, natural-language, or multimodal user input and produces adaptive or context-responsive output.
"Chatbot" does not include an interactive computer service or a software application described by all of the following:
a. The responses of the interactive computer service or software application are limited to information only contained within the interactive computer service or software application, including user input, except for information necessary to make the interactive computer service or software application coherent.
b. The interactive computer service or software application is only able to respond to topics in a narrow, specified field.
Compliance Obligations · 4 obligations
S-02 Prohibited Conduct & Output Restrictions · S-02.7 · Developer · Deployer · Chatbot
§ 554J.2(1)
Plain Language
It is unlawful for any person to design, develop, or make a chatbot available if the person knows — or recklessly disregards the possibility — that the chatbot encourages, promotes, or coerces users to commit suicide, perform self-injury, or perform acts of physical or sexual violence on humans or animals. The mental state threshold is knowledge or reckless disregard, not negligence — mere failure to foresee is likely insufficient. The prohibition covers the entire lifecycle: design, development, and making available. This goes beyond suicide and self-harm content restrictions in other jurisdictions by also covering physical and sexual violence against humans and animals.
Statutory Text
It shall be unlawful for a person to design, develop, or make a chatbot available with the knowledge, or with reckless disregard for the possibility, that the chatbot encourages, promotes, or coerces a user to commit suicide, perform acts of self-injury, or perform acts of physical or sexual violence on humans or animals.
T-01 AI Identity Disclosure · T-01.1, T-01.2 · Developer · Deployer · Chatbot
§ 554J.2(2)(a)
Plain Language
Every chatbot must provide a clear and conspicuous disclosure that it is a chatbot and not a human being. This disclosure must appear at two points: (1) at the beginning of each conversation, and (2) at thirty-minute intervals during ongoing conversations. This is an unconditional requirement — it applies regardless of whether a reasonable person would be misled. The thirty-minute re-disclosure interval is more frequent than some comparable statutes (e.g., CA SB 243's three-hour interval).
Statutory Text
Each chatbot shall meet all of the following requirements:
a. Clearly and conspicuously disclose that the chatbot is a chatbot and not a human being at the beginning of each conversation and at thirty-minute intervals.
T-01 AI Identity Disclosure · T-01.3 · Developer · Deployer · Chatbot
§ 554J.2(2)(b)
Plain Language
Chatbots must be programmed so they cannot claim to be human and cannot respond deceptively when a user directly asks whether the chatbot is a human. This is both a proactive prohibition (no affirmative claims of humanity) and a reactive on-demand obligation (truthful response when asked). The 'programmed to prevent' language suggests a design-level requirement, not merely a policy-level instruction.
Statutory Text
Be programmed to prevent the chatbot from claiming to be a human or respond deceptively when asked by a user if the chatbot is a human.
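A minimal sketch of the "programmed to prevent" requirement in § 554J.2(2)(b), assuming a regex-based output guard. Real deployments would likely combine model training, system instructions, and classifiers; the patterns, the `guard_reply` helper, and the fixed answer below are hypothetical and show only the shape of a design-level (rather than policy-level) check.

```python
import re

# Hypothetical patterns; a production guard would be far broader.
HUMAN_QUESTION = re.compile(r"\bare\s+you\s+(a\s+)?human\b", re.IGNORECASE)
HUMAN_CLAIM = re.compile(r"\bI\s+am\s+(a\s+)?human\b", re.IGNORECASE)

TRUTHFUL_ANSWER = "No. I am a chatbot, not a human being."


def guard_reply(user_message: str, model_reply: str) -> str:
    """Answer direct 'are you human?' questions truthfully and strip
    affirmative claims of humanity from the model's output."""
    if HUMAN_QUESTION.search(user_message):
        # Reactive obligation: respond truthfully when asked.
        return TRUTHFUL_ANSWER
    # Proactive prohibition: never let the output claim to be human.
    return HUMAN_CLAIM.sub("I am a chatbot", model_reply)
```

The two branches mirror the statute's two prongs: a reactive on-demand duty when the user asks, and a proactive bar on affirmative claims of humanity at any time.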
CP-01 Deceptive & Manipulative AI Conduct · CP-01.9 · Developer · Deployer · Chatbot
§ 554J.2(2)(c)-(d)
Plain Language
Two related obligations apply to every chatbot. First, the chatbot must clearly and conspicuously disclaim that it does not provide medical, legal, financial, or psychological services and must direct the user to consult a licensed professional. This disclaimer must appear at the beginning of each conversation and at regular intervals (the statute does not specify the interval length, unlike the thirty-minute interval for AI identity disclosure — 'regular intervals' will likely be clarified by attorney general rulemaking). Second, the chatbot must be programmed to prevent it from representing itself as a licensed professional of any kind, including therapists, physicians, lawyers, and financial advisors. The enumerated list is illustrative, not exhaustive.
Statutory Text
c. Clearly and conspicuously disclose that the chatbot does not provide medical, legal, financial, or psychological services and that the user should consult a licensed professional for such services at the beginning of each conversation and at regular intervals.
d. Be programmed to prevent the chatbot from representing that the chatbot is a licensed professional, including but not limited to a therapist, physician, lawyer, financial advisor, or other professional.
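The two obligations in § 554J.2(2)(c)-(d) could be sketched together: a recurring services disclaimer plus a guard against licensed-professional self-representation. Everything below is a hypothetical illustration. In particular, the statute does not define "regular intervals" for the disclaimer, so the turn-count placeholder is an assumption pending attorney general rulemaking, and the pattern list mirrors but does not exhaust the statute's "including but not limited to" enumeration.

```python
import re

SERVICES_DISCLAIMER = (
    "Notice: this chatbot does not provide medical, legal, financial, or "
    "psychological services. Please consult a licensed professional for "
    "such services."
)
# Placeholder cadence; the statute leaves the interval unspecified.
DISCLAIMER_INTERVAL_TURNS = 10

# Illustrative, non-exhaustive patterns of professional self-representation.
PROFESSIONAL_CLAIM = re.compile(
    r"\bI\s+am\s+(your\s+)?(a\s+)?(licensed\s+)?"
    r"(therapist|physician|doctor|lawyer|attorney|financial\s+advisor)\b",
    re.IGNORECASE,
)


def apply_disclaimer(turn_index: int, reply: str) -> str:
    """Prepend the disclaimer on the first turn and every Nth turn after,
    and rewrite any licensed-professional self-representation."""
    if turn_index % DISCLAIMER_INTERVAL_TURNS == 0:
        reply = f"{SERVICES_DISCLAIMER}\n\n{reply}"
    return PROFESSIONAL_CLAIM.sub(
        "I am a chatbot, not a licensed professional", reply
    )
```

Note that subsection (d), like § 554J.2(2)(b), uses "programmed to prevent," which suggests the guard belongs in the system's design rather than in a revocable policy instruction.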