SSB-3011
IA · State · USA
● Pending
Proposed Effective Date
2025-07-01
Iowa Senate Study Bill 3011 — A bill for an act establishing requirements and guidelines for chatbots, making appropriations, and providing civil penalties
Summary

Imposes disclosure and safety obligations on any person who designs, develops, or makes available a chatbot in Iowa. Requires chatbots to disclose their non-human identity at the start of each conversation and at thirty-minute intervals, to truthfully identify as non-human when asked, to disclaim the provision of professional services, and to refrain from representing themselves as licensed professionals. Prohibits persons from knowingly or recklessly making available chatbots that encourage suicide, self-injury, or physical or sexual violence. Enforced exclusively by the Iowa attorney general through civil actions, with penalties up to $100,000 per violation. The attorney general is directed to adopt implementing rules.

Enforcement & Penalties
Enforcement Authority
Attorney general enforcement. The attorney general may bring a civil action to enjoin a violation of or enforce compliance with the chapter or rules adopted pursuant to the chapter. No private right of action is created. Enforcement is agency-initiated.
Penalties
Civil penalties of not more than $100,000 per violation. The attorney general may also seek restitution or other appropriate relief. Penalties collected are credited to the state general fund and appropriated to the attorney general for performing duties under the chapter.
Who Is Covered
What Is Covered
"Chatbot" means any interactive computer service or software application that does all of the following: a. Produces new expressive content or responses not fully predetermined by the developer or operator of the interactive computer service or software application. b. Accepts open-ended, natural-language, or multimodal user input and produces adaptive or context-responsive output. "Chatbot" does not include an interactive computer service or a software application described by all of the following: a. The responses of the interactive computer service or software application are limited to information only contained within the interactive computer service or software application, including user input, except for information necessary to make the interactive computer service or software application coherent. b. The interactive computer service or software application is only able to respond to topics in a narrow, specified field.
Compliance Obligations · 5 obligations
S-02 Prohibited Conduct & Output Restrictions · S-02.7 · Developer · Deployer · Chatbot
§ 554J.2(1)
Plain Language
No person may design, develop, or make available a chatbot if they know — or recklessly disregard the possibility — that the chatbot encourages, promotes, or coerces users to commit suicide, self-injury, or acts of physical or sexual violence against humans or animals. The mental state threshold is knowledge or reckless disregard, not strict liability. This prohibition covers the full lifecycle: design, development, and distribution. Note that the violence prohibition extends beyond self-harm to include violence against others and animals, which is broader than most comparable chatbot safety statutes.
Statutory Text
It shall be unlawful for a person to design, develop, or make a chatbot available with the knowledge, or with reckless disregard for the possibility, that the chatbot encourages, promotes, or coerces a user to commit suicide, perform acts of self-injury, or perform acts of physical or sexual violence on humans or animals.
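The bill sets a mental-state standard (knowledge or reckless disregard) for persons rather than a technical specification, but a deployer seeking to avoid reckless disregard might screen outputs before delivery. The sketch below is purely illustrative: the category names, the keyword check, and the substitute message are hypothetical placeholders, and a production system would use a trained safety classifier rather than string matching.

```python
# Illustrative output-screening hook; nothing here is mandated by SSB 3011.
# Category names and the trivial keyword classifier are placeholders.

PROHIBITED_TOPICS = {
    "suicide_encouragement",
    "self_injury_encouragement",
    "violence_encouragement",  # physical or sexual, against humans or animals
}

def classify_output(text: str) -> set[str]:
    """Placeholder classifier. A real deployment would call a safety model;
    this keyword check exists only to make the sketch runnable."""
    flags = set()
    if "you should hurt yourself" in text.lower():
        flags.add("self_injury_encouragement")
    return flags

def screen_response(text: str) -> str:
    """Suppress a response whose classification intersects the prohibited set."""
    if classify_output(text) & PROHIBITED_TOPICS:
        return ("I can't help with that. If you're struggling, "
                "please reach out to a crisis line.")
    return text
```

The design choice worth noting is the intersection test: screening is driven by a declared policy set, so the prohibited categories can be extended (for example, by later attorney general rules) without touching the delivery path.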
T-01 AI Identity Disclosure · T-01.1 · T-01.2 · Developer · Deployer · Chatbot
§ 554J.2(2)(a)
Plain Language
Every chatbot must provide a clear and conspicuous disclosure that it is a chatbot, not a human, at two points: (1) at the beginning of each conversation, and (2) at recurring thirty-minute intervals during ongoing interactions. This is an unconditional obligation — it applies regardless of whether a reasonable person would be misled. The thirty-minute interval is notably more frequent than comparable statutes (e.g., California SB 243 requires three-hour intervals for minors only).
Statutory Text
Each chatbot shall meet all of the following requirements: a. Clearly and conspicuously disclose that the chatbot is a chatbot and not a human being at the beginning of each conversation and at thirty-minute intervals.
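The two trigger points (conversation start and thirty-minute recurrence) can be tracked with a simple timer attached to each conversation. This is a minimal sketch, not a compliance implementation: the disclosure wording, class names, and clock-injection pattern are all assumptions for illustration.

```python
import time

DISCLOSURE = "You are chatting with an automated chatbot, not a human being."
INTERVAL_SECONDS = 30 * 60  # thirty minutes, per § 554J.2(2)(a)

class DisclosureTimer:
    """Tracks when the identity disclosure is due: at conversation start
    and again every thirty minutes. The clock is injectable for testing."""

    def __init__(self, clock=time.monotonic):
        self._clock = clock
        self._last_disclosed = None  # None means no disclosure made yet

    def disclosure_due(self) -> bool:
        now = self._clock()
        if self._last_disclosed is None or now - self._last_disclosed >= INTERVAL_SECONDS:
            self._last_disclosed = now
            return True
        return False

def wrap_reply(timer: DisclosureTimer, reply: str) -> str:
    """Prepend the disclosure to the outgoing message whenever it is due."""
    return f"{DISCLOSURE}\n{reply}" if timer.disclosure_due() else reply
```

Using a monotonic clock avoids missed or duplicated disclosures if the system clock is adjusted mid-conversation; the timer would be stored per conversation, not globally.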
T-01 AI Identity Disclosure · T-01.3 · Developer · Deployer · Chatbot
§ 554J.2(2)(b)
Plain Language
Chatbots must be programmed so they cannot claim to be human and must respond truthfully when a user asks whether the chatbot is a human. This is both a proactive prohibition (no affirmative claims of humanity) and a reactive obligation (honest response on demand). The term 'respond deceptively' goes beyond simple non-disclosure to prohibit any misleading answer to a direct identity question.
Statutory Text
Be programmed to prevent the chatbot from claiming to be a human or respond deceptively when asked by a user if the chatbot is a human.
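Both halves of the obligation can be sketched as guards around the model: a reactive guard that intercepts a direct identity question with a truthful canned answer, and a proactive guard that blocks any generated claim of humanity. The regexes below are hypothetical stand-ins; a real system would need intent classification, not pattern matching.

```python
import re

# Placeholder patterns; production systems would use intent detection.
IDENTITY_QUESTION = re.compile(r"\bare you\b.*\b(human|person|real|bot|robot|ai)\b",
                               re.IGNORECASE)
HUMAN_CLAIM = re.compile(r"\bI('m| am) (a )?(human|real person)\b", re.IGNORECASE)

TRUTHFUL_ANSWER = "No, I am not a human. I am an automated chatbot."

def answer_identity_question(user_message: str):
    """Reactive guard: return the mandated truthful answer when the user asks
    whether the chatbot is human; otherwise None, so normal generation runs."""
    if IDENTITY_QUESTION.search(user_message):
        return TRUTHFUL_ANSWER
    return None

def filter_reply(reply: str) -> str:
    """Proactive guard: never let an outgoing reply claim to be human."""
    if HUMAN_CLAIM.search(reply):
        return "Just to be clear: I am an automated chatbot, not a human."
    return reply
```

Handling the identity question before generation, rather than relying on the model to answer honestly, is the safer reading of "be programmed to prevent."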
CP-01 Deceptive & Manipulative AI Conduct · CP-01.9 · Developer · Deployer · Chatbot
§ 554J.2(2)(c)-(d)
Plain Language
Two related obligations apply. First, every chatbot must display a clear and conspicuous disclaimer at the start of each conversation and at regular intervals stating that it does not provide medical, legal, financial, or psychological services and directing users to consult a licensed professional. Second, the chatbot must be programmed to prevent it from representing itself as a licensed professional of any type — therapist, physician, lawyer, financial advisor, or otherwise. Together these provisions prevent chatbots from impersonating or substituting for licensed professionals. Unlike the thirty-minute interval specified for AI identity disclosure, the interval for this professional services disclaimer is 'regular' — leaving the specific cadence to implementing rules or operator judgment.
Statutory Text
c. Clearly and conspicuously disclose that the chatbot does not provide medical, legal, financial, or psychological services and that the user should consult a licensed professional for such services at the beginning of each conversation and at regular intervals.
d. Be programmed to prevent the chatbot from representing that the chatbot is a licensed professional, including but not limited to a therapist, physician, lawyer, financial advisor, or other professional.
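Because the statute requires "regular intervals" without fixing a cadence for this disclaimer, an implementation must pick one pending attorney general rules. The sketch below uses a message-count cadence as one reasonable reading; the disclaimer text, class name, and default of every 20 replies are all assumptions, not statutory requirements.

```python
PROFESSIONAL_DISCLAIMER = (
    "This chatbot does not provide medical, legal, financial, or psychological "
    "services. Please consult a licensed professional for such services."
)

class DisclaimerCadence:
    """Emits the professional-services disclaimer on the first reply and then
    every `every_n` replies. every_n=20 is an arbitrary placeholder: the bill
    says 'regular intervals' and leaves the cadence to implementing rules."""

    def __init__(self, every_n: int = 20):
        self.every_n = every_n
        self.count = 0

    def attach(self, reply: str) -> str:
        due = self.count % self.every_n == 0  # due on reply 0, 20, 40, ...
        self.count += 1
        return f"{PROFESSIONAL_DISCLAIMER}\n{reply}" if due else reply
```

A time-based cadence (like the thirty-minute identity timer) would also satisfy a plain reading of "regular intervals"; the count-based version is shown here only to illustrate that the two disclosures can run on independent schedules.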
Other · Chatbot
§ 554J.4
Plain Language
The attorney general is directed to adopt administrative rules under Iowa's Administrative Procedure Act (chapter 17A) to implement the chatbot requirements chapter. This creates no new compliance obligation on chatbot developers or operators but signals that additional requirements may be forthcoming via rulemaking. Compliance teams should monitor rulemaking proceedings for additional obligations.
Statutory Text
The attorney general shall adopt rules pursuant to chapter 17A to implement this chapter.