HSB-611
IA · State · USA
Status: Pending
Proposed Effective Date: 2025-07-01
Iowa House Study Bill 611 — A bill for an act establishing requirements and guidelines for chatbots, making appropriations, and providing civil penalties
Summary

Imposes safety and disclosure obligations on persons who design, develop, or make chatbots available. Prohibits making chatbots available with knowledge or reckless disregard that the chatbot encourages suicide, self-injury, or physical or sexual violence. Requires all chatbots to disclose their non-human identity at the start of each conversation and at thirty-minute intervals, to respond truthfully when asked if they are human, to disclaim that they do not provide medical, legal, financial, or psychological services, and to be programmed to prevent representing themselves as licensed professionals. Enforcement is exclusively through the attorney general, who may seek civil penalties of up to $100,000 per violation, restitution, or injunctive relief. The attorney general is directed to adopt implementing rules.

Enforcement & Penalties
Enforcement Authority
Attorney general enforcement only. The attorney general may bring a civil action to enjoin a violation of or enforce compliance with the chapter or rules adopted pursuant to the chapter. No private right of action is created. Enforcement is agency-initiated at the attorney general's discretion.
Penalties
Civil penalties of not more than $100,000 per violation. The attorney general may also seek restitution or other appropriate relief. Penalties collected are credited to the general fund of the state and appropriated to the attorney general for performing duties under the chapter.
Who Is Covered
Persons who design, develop, or make a chatbot available.
What Is Covered
"Chatbot" means any interactive computer service or software application that does all of the following:
a. Produces new expressive content or responses not fully predetermined by the developer or operator of the interactive computer service or software application.
b. Accepts open-ended, natural-language, or multimodal user input and produces adaptive or context-responsive output.
"Chatbot" does not include an interactive computer service or a software application described by all of the following:
a. The responses of the interactive computer service or software application are limited to information only contained within the interactive computer service or software application, including user input, except for information necessary to make the interactive computer service or software application coherent.
b. The interactive computer service or software application is only able to respond to topics in a narrow, specified field.
Compliance Obligations (4 obligations)
S-02 Prohibited Conduct & Output Restrictions · S-02.7 · Developer · Deployer · Chatbot
§ 554J.2(1)
Plain Language
It is unlawful for any person to design, develop, or make available a chatbot if that person knows — or recklessly disregards the possibility — that the chatbot encourages, promotes, or coerces users to commit suicide, perform self-injury, or commit physical or sexual violence against humans or animals. The scienter requirement is knowledge or reckless disregard, not mere negligence. The prohibition covers the full lifecycle chain: designers, developers, and those who make chatbots available to users. This goes beyond restricting self-harm content to also prohibit content encouraging violence against others and animals.
Statutory Text
It shall be unlawful for a person to design, develop, or make a chatbot available with the knowledge, or with reckless disregard for the possibility, that the chatbot encourages, promotes, or coerces a user to commit suicide, perform acts of self-injury, or perform acts of physical or sexual violence on humans or animals.
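The statute sets the scienter bar at knowledge or reckless disregard, which in practice pushes developers toward screening chatbot output before release. A minimal sketch of such an output gate follows; the category labels, the `SafetyVerdict` type, and the caller-supplied `classify` function are all hypothetical — the statute prescribes no particular mechanism.

```python
from dataclasses import dataclass

# Hypothetical category labels mirroring the conduct prohibited
# by § 554J.2(1); the statute does not define a taxonomy.
PROHIBITED_CATEGORIES = {
    "encourages_suicide",
    "encourages_self_injury",
    "encourages_physical_violence",
    "encourages_sexual_violence",
}

@dataclass
class SafetyVerdict:
    allowed: bool
    flagged_categories: set

def gate_response(draft: str, classify) -> SafetyVerdict:
    """Run a draft chatbot response through a safety classifier and
    block it if any prohibited category is detected.

    `classify` is an assumed caller-supplied function mapping text to
    a set of category labels (e.g. a moderation model)."""
    hits = set(classify(draft)) & PROHIBITED_CATEGORIES
    return SafetyVerdict(allowed=not hits, flagged_categories=hits)
```

A gate like this documents diligence, but it does not by itself settle the reckless-disregard question; the quality of the underlying classifier matters.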
T-01 AI Identity Disclosure · T-01.1 · T-01.2 · Developer · Deployer · Chatbot
§ 554J.2(2)(a)
Plain Language
Every chatbot must clearly and conspicuously disclose to the user that it is a chatbot and not a human. This disclosure must occur at two points: (1) at the beginning of each conversation, and (2) at recurring thirty-minute intervals during ongoing interactions. This is an unconditional obligation — the disclosure is required regardless of whether a reasonable person would be misled. The thirty-minute interval is more frequent than some comparable state laws (e.g., California SB 243's three-hour interval).
Statutory Text
Each chatbot shall meet all of the following requirements: a. Clearly and conspicuously disclose that the chatbot is a chatbot and not a human being at the beginning of each conversation and at thirty-minute intervals.
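The two trigger points — conversation start and thirty-minute intervals — reduce to a small timing problem. A minimal scheduler sketch, assuming a monotonic clock and a per-conversation instance (class and method names are illustrative, not from the bill):

```python
import time

# Thirty-minute cadence required by § 554J.2(2)(a).
INTERVAL_SECONDS = 30 * 60

class DisclosureScheduler:
    """Track when the identity disclosure is due: at the start of the
    conversation, then whenever thirty minutes have elapsed since the
    last disclosure."""

    def __init__(self, clock=time.monotonic):
        self._clock = clock
        self._last_disclosed = None  # None => no disclosure made yet

    def disclosure_due(self) -> bool:
        if self._last_disclosed is None:
            return True  # start of conversation
        return self._clock() - self._last_disclosed >= INTERVAL_SECONDS

    def mark_disclosed(self):
        self._last_disclosed = self._clock()
```

One instance would be created per conversation; the chat loop checks `disclosure_due()` before each response and calls `mark_disclosed()` after emitting the notice.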
T-01 AI Identity Disclosure · T-01.3 · Developer · Deployer · Chatbot
§ 554J.2(2)(b)
Plain Language
Chatbots must be programmed so that they cannot claim to be human and cannot respond deceptively when a user asks whether the chatbot is a human. This is both a proactive design requirement (the chatbot must be prevented from spontaneously claiming human identity) and an on-demand disclosure obligation (the chatbot must truthfully identify itself as non-human when asked). The 'respond deceptively' standard is broader than merely requiring a truthful answer — it prohibits evasive or misleading responses as well.
Statutory Text
Be programmed to prevent the chatbot from claiming to be a human or respond deceptively when asked by a user if the chatbot is a human.
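The on-demand half of this obligation can be approximated by intercepting humanity questions before they reach the generative model and returning a fixed truthful answer. A sketch under that assumption — the regex, fixed answer, and function name are all illustrative; a production system would want a more robust intent classifier than keyword patterns:

```python
import re

# Hypothetical patterns for questions about the chatbot's humanity.
HUMANITY_QUESTION = re.compile(
    r"\b(are|r)\s+(you|u)\s+(a\s+)?(human|person|real|bot|ai)\b",
    re.IGNORECASE,
)

TRUTHFUL_ANSWER = "No, I am not a human. I am an automated chatbot."

def answer_if_humanity_question(user_message: str):
    """Return a fixed truthful identity answer when the user asks
    whether the chatbot is human; otherwise return None so the
    normal response pipeline handles the message."""
    if HUMANITY_QUESTION.search(user_message):
        return TRUTHFUL_ANSWER
    return None
```

Because the statute prohibits deceptive as well as false responses, routing these questions to a fixed answer rather than to the generative model also guards against evasive phrasing.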
CP-01 Deceptive & Manipulative AI Conduct · CP-01.9 · Developer · Deployer · Chatbot
§ 554J.2(2)(c)-(d)
Plain Language
Chatbots must satisfy two related professional-services obligations. First, they must clearly and conspicuously disclose at the beginning of each conversation and at regular intervals that they do not provide medical, legal, financial, or psychological services and that users should consult a licensed professional for such services. Second, chatbots must be programmed to prevent the system from representing itself as a licensed professional — including therapists, physicians, lawyers, financial advisors, and other professionals. Unlike the thirty-minute interval specified for AI identity disclosure, the interval for the professional-services disclaimer is left to 'regular intervals' without a specific time floor, leaving the precise cadence to implementing rules or reasonable judgment.
Statutory Text
c. Clearly and conspicuously disclose that the chatbot does not provide medical, legal, financial, or psychological services and that the user should consult a licensed professional for such services at the beginning of each conversation and at regular intervals. d. Be programmed to prevent the chatbot from representing that the chatbot is a licensed professional, including but not limited to a therapist, physician, lawyer, financial advisor, or other professional.
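Both halves of this obligation can be sketched in code: a fixed disclaimer with an assumed cadence, and a crude output filter for licensed-professional self-descriptions. All names and the thirty-minute cadence are assumptions — subsection (c) says only "regular intervals", and a substring deny-list is far weaker than what a production filter would need under subsection (d):

```python
PROFESSIONAL_DISCLAIMER = (
    "I do not provide medical, legal, financial, or psychological "
    "services. Please consult a licensed professional for such services."
)

# § 554J.2(2)(c) leaves the cadence at "regular intervals"; this sketch
# assumes the same thirty-minute floor used for identity disclosure,
# pending implementing rules from the attorney general.
DISCLAIMER_INTERVAL_SECONDS = 30 * 60

# Hypothetical deny-list of self-descriptions that would represent the
# chatbot as a licensed professional under § 554J.2(2)(d).
BLOCKED_SELF_DESCRIPTIONS = (
    "i am a therapist",
    "i am a physician",
    "i am a doctor",
    "i am a lawyer",
    "i am your financial advisor",
)

def claims_licensed_role(draft: str) -> bool:
    """Flag a draft response that represents the chatbot as a licensed
    professional (naive substring check for illustration only)."""
    lowered = draft.lower()
    return any(phrase in lowered for phrase in BLOCKED_SELF_DESCRIPTIONS)
```

A flagged draft would be regenerated or replaced with the disclaimer before delivery; the deny-list approach is shown only to make the programming requirement concrete.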