HB-2032
MO · State · USA
● Pending
Proposed Effective Date
2026-08-28
Missouri HB 2032 — Guidelines for User Age-Verification and Responsible Dialogue Act of 2026 (GUARD Act)
Summary

Imposes age verification, AI identity disclosure, and content safety obligations on covered entities that own, operate, or make available artificial intelligence chatbots to individuals in Missouri. Requires all chatbot users to create accounts and undergo a reasonable age verification process that goes beyond self-certification; existing accounts must be frozen and re-verified. Minors are categorically prohibited from accessing AI companions. Makes it unlawful to design, develop, or make available chatbots that solicit minors to engage in sexually explicit conduct or that encourage suicide, self-harm, or imminent violence. Requires chatbots to disclose their AI nature at the start of each conversation and every 30 minutes, and prohibits chatbots from representing themselves as licensed professionals. Enforced by the Attorney General with civil penalties up to $100,000 per violation and direct fines up to $100,000 per offense for prohibited conduct.

Enforcement & Penalties
Enforcement Authority
Attorney General enforcement. The AG may bring a civil action in circuit court to enjoin violations, enforce compliance, or obtain civil penalties, restitution, or other appropriate relief for violations of subsections 5 or 6 (age verification and minor access restrictions). The AG may issue subpoenas, administer oaths, and compel production of documents or testimony. The AG may also act as parens patriae on behalf of Missouri residents to obtain injunctive relief. Subsections 3 and 4 (prohibited chatbot conduct) carry direct fines imposed on the violator without requiring AG action. No private right of action is created.
Penalties
For violations of subsection 3 (sexual content involving minors) or subsection 4 (encouraging suicide/self-harm/violence): fine not more than $100,000 per offense. For violations of subsection 5 (age verification) or subsection 6 (minor access to AI companions): civil penalty not to exceed $100,000 per violation, with each violation considered a separate violation. AG may also obtain injunctive relief, restitution, or other appropriate relief.
Who Is Covered
"Covered entity", any person who owns, operates, or otherwise makes available an artificial intelligence chatbot to individuals in this state.
What Is Covered
"Artificial intelligence chatbot", (a) Any interactive computer service or software application that: a. Produces new expressive content or responses not fully predetermined by the developer or operator of the service or application; and b. Accepts open-ended natural language or multimodal user input and produces adaptive or context-responsive output; and (b) Does not include an interactive computer service or software application, the responses of which are limited to contextualized replies and that is unable to respond on a range of topics outside of a narrow, specified purpose.
"AI companion", an artificial intelligence chatbot that: (a) Provides adaptive, human-like responses to user inputs; and (b) Is designed to encourage or facilitate the simulation of interpersonal or emotional interaction, friendship, companionship, or therapeutic communication.
Compliance Obligations · 6 obligations
S-02 Prohibited Conduct & Output Restrictions · S-02.6 · Developer · Deployer · Chatbot · Minors
§ 1.2058(3)(1)-(2)
Plain Language
It is unlawful for any person to design, develop, or make available an AI chatbot with knowledge or reckless disregard that the chatbot poses a risk of soliciting, encouraging, or inducing minors to engage in, describe, or simulate sexually explicit conduct, or to create or transmit visual depictions of sexually explicit conduct. The mental state requirement is knowledge or reckless disregard — not strict liability. Each offense carries a fine of up to $100,000. This obligation applies to any person, not just covered entities — it extends to developers and anyone in the supply chain who makes the chatbot available.
Statutory Text
3. (1) It shall be unlawful to design, develop, or make available an artificial intelligence chatbot knowing or with reckless disregard for the fact that the artificial intelligence chatbot poses a risk of soliciting, encouraging, or inducing minors to: (a) Engage in, describe, or simulate sexually explicit conduct; or (b) Create or transmit any visual depiction of sexually explicit conduct, including any visual depiction described in section 573.010. (2) Any person who violates subdivision (1) of this subsection shall be fined not more than one hundred thousand dollars per offense.
S-02 Prohibited Conduct & Output Restrictions · S-02.7 · Developer · Deployer · Chatbot
§ 1.2058(4)(1)-(2)
Plain Language
It is unlawful for any person to design, develop, or make available an AI chatbot with knowledge or reckless disregard that the chatbot encourages, promotes, or coerces suicide, nonsuicidal self-injury, or imminent physical or sexual violence. Unlike subsection 3, this prohibition is not limited to minors — it applies to chatbots accessible to any user. The mental state threshold is knowledge or reckless disregard. Each offense carries a fine of up to $100,000. This applies to any person involved in the design, development, or distribution chain.
Statutory Text
4. (1) It shall be unlawful to design, develop, or make available an artificial intelligence chatbot knowing or with reckless disregard for the fact that the artificial intelligence chatbot encourages, promotes, or coerces suicide, nonsuicidal self-injury, or imminent physical or sexual violence. (2) Any person who violates subdivision (1) of this subsection shall be fined not more than one hundred thousand dollars per offense.
MN-01 Minor User AI Safety Protections · MN-01.1 · Deployer · Chatbot · Minors
§ 1.2058(5)(1)-(2)(a)-(e)
Plain Language
Covered entities must require all chatbot users to create accounts. For existing accounts as of August 28, 2026, covered entities must freeze the account and require re-verification before restoring access. For new accounts, age verification must occur at account creation. All users must be classified as minor or adult. Importantly, self-certification (e.g., clicking 'I am 18+' or entering a birth date) is explicitly insufficient — the process must use government ID or another commercially reasonable method that can reliably determine adult status. Nor does sharing an IP address or hardware identifier with a verified adult user qualify as verification. Covered entities may use third-party verification services but remain fully liable. Age verification data must be subject to data minimization, encryption, retention limits, and a prohibition on sharing, transferring, or selling the data to any other entity. Periodic re-verification of existing accounts is also required.
Statutory Text
5. (1) A covered entity shall require each individual accessing an artificial intelligence chatbot to make a user account in order to use or otherwise interact with such chatbot. (2) (a) With respect to each user account of an artificial intelligence chatbot that exists as of August 28, 2026, a covered entity shall: a. On such date, freeze any such account; b. In order to restore the functionality of such account, require that the user provide age data that is verifiable using a reasonable age verification process, subject to paragraph (d) of this subdivision; and c. Using such age data, classify each user as a minor or an adult. (b) At the time an individual creates a new user account to use or interact with an artificial intelligence chatbot, a covered entity shall: a. Request age data from the individual; b. Verify the individual's age using a reasonable age verification process, subject to paragraph (d) of this subdivision; and c. Using such age data, classify each user as a minor or an adult. (c) A covered entity shall periodically review previously verified user accounts using a reasonable age verification process, subject to paragraph (d) of this subdivision, to ensure compliance with this section. (d) For purposes of subparagraph b. of paragraph (a) of this subdivision, subparagraph b. of paragraph (b) of this subdivision, and paragraph (c) of this subdivision, a covered entity may contract with a third party to employ reasonable age verification measures as part of the covered entity's reasonable age verification process, but the use of such third party shall not relieve the covered entity of its obligations under this section or from liability under this section. (e) A covered entity shall: a. Establish, implement, and maintain reasonable data security to: (i) Limit collection of personal data to that which is minimally necessary to verify a user's age or maintain compliance with this section; and (ii) Protect such age verification data against unauthorized access; b. Protect the integrity and confidentiality of such data by only transmitting such data using industry-standard encryption protocols; c. Retain such data for no longer than is reasonably necessary to verify a user's age or maintain compliance with this section; and d. Not share with, transfer to, or sell to any other entity such data.
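The freeze-verify-classify sequence above can be sketched in code. This is a minimal illustration, not a compliance implementation: the `UserAccount` type, the `verifier` callable (standing in for a government-ID or third-party verification service), and the 18-year adult threshold are all assumptions introduced here for clarity.

```python
from dataclasses import dataclass
from enum import Enum


class AgeClass(Enum):
    UNVERIFIED = "unverified"
    MINOR = "minor"
    ADULT = "adult"


@dataclass
class UserAccount:
    user_id: str
    age_class: AgeClass = AgeClass.UNVERIFIED
    frozen: bool = False


def freeze_existing_accounts(accounts):
    """Per subsection 5(2)(a): accounts existing as of the effective date
    are frozen until the user completes re-verification."""
    for acct in accounts:
        acct.frozen = True


def verify_and_classify(acct, verifier):
    """Run a reasonable age verification process and classify the user.

    `verifier` is a hypothetical callable returning a verified age; under
    the bill, self-certification (a typed birth date or an "I am 18+"
    click) would not be an acceptable implementation of it.
    """
    verified_age = verifier(acct.user_id)
    acct.age_class = AgeClass.ADULT if verified_age >= 18 else AgeClass.MINOR
    acct.frozen = False  # functionality restored only after verification
    return acct.age_class
```

Note that even when verification is delegated to a third party, the covered entity remains liable, so the classification result still belongs to the covered entity's own records.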
T-01 AI Identity Disclosure · T-01.1 · T-01.2 · T-01.3 · Deployer · Chatbot
§ 1.2058(5)(3)(a)
Plain Language
Every AI chatbot must clearly and conspicuously disclose to the user at the start of each conversation — and again every 30 minutes during the conversation — that it is an AI system and not a human being. This is an unconditional requirement that applies regardless of whether the user would otherwise be misled. Additionally, the chatbot must be programmed so that it does not claim to be human or respond deceptively when a user asks whether it is a human. The on-demand honesty requirement is ongoing — the chatbot must accurately self-identify whenever asked, not just during the initial or periodic disclosures.
Statutory Text
(3) (a) Each artificial intelligence chatbot made available to users shall: a. At the initiation of each conversation with a user and at thirty-minute intervals, clearly and conspicuously disclose to the user that the chatbot is an artificial intelligence system and not a human being; and b. Be programmed to ensure that the chatbot does not claim to be a human being or otherwise respond deceptively when asked by a user if the chatbot is a human being.
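The timing rule in subparagraph a. — disclose at conversation start and again at thirty-minute intervals — can be sketched as a small tracker. The class name, the disclosure wording, and the injectable `clock` parameter are illustrative assumptions; only the 30-minute interval comes from the statute.

```python
import time

DISCLOSURE = ("Notice: you are interacting with an artificial intelligence "
              "system, not a human being.")
INTERVAL_SECONDS = 30 * 60  # thirty-minute statutory interval


class DisclosureTracker:
    """Emits the AI-identity disclosure at conversation start and again
    whenever 30 minutes have elapsed since the last disclosure."""

    def __init__(self, clock=time.monotonic):
        self._clock = clock           # injectable for testing
        self._last_disclosed = None   # None => conversation not yet started

    def maybe_disclose(self):
        """Return the disclosure text if one is due, else None.

        Called before each chatbot turn; the first call always discloses.
        """
        now = self._clock()
        if (self._last_disclosed is None
                or now - self._last_disclosed >= INTERVAL_SECONDS):
            self._last_disclosed = now
            return DISCLOSURE
        return None
```

The on-demand honesty duty in subparagraph b. is separate: it constrains the model's responses whenever a user asks "are you human?", and cannot be satisfied by this timer alone.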
CP-01 Deceptive & Manipulative AI Conduct · CP-01.9 · Deployer · Chatbot
§ 1.2058(5)(3)(b)
Plain Language
AI chatbots are prohibited from representing — directly or indirectly — that they are licensed professionals of any kind, including therapists, physicians, lawyers, or financial advisors. In addition to this prohibition, chatbots must affirmatively disclose at the start of each conversation and at reasonably regular intervals that they do not provide medical, legal, financial, or psychological services and that users should consult a licensed professional for such advice. This is both a negative prohibition (do not claim to be a professional) and a positive disclosure obligation (affirmatively tell users to seek licensed professionals). The 'reasonably regular intervals' standard for re-disclosure is less precise than the 30-minute interval in subsection 5(3)(a).
Statutory Text
(b) a. An artificial intelligence chatbot shall not represent, directly or indirectly, that the chatbot is a licensed professional, including a therapist, physician, lawyer, financial advisor, or other professional. b. Each artificial intelligence chatbot made available to users shall, at the initiation of each conversation with a user and at reasonably regular intervals, clearly and conspicuously disclose to the user that: (i) The chatbot does not provide medical, legal, financial, or psychological services; and (ii) Users of the chatbot should consult a licensed professional for such advice.
MN-01 Minor User AI Safety Protections · MN-01.6 · Deployer · Chatbot · Minors
§ 1.2058(6)
Plain Language
If the age verification process determines that a user is a minor (17 or under), the covered entity must completely prohibit that minor from accessing or using any AI companion the covered entity owns, operates, or makes available. This is a categorical access ban — not a content restriction or feature limitation. Note the scope: this prohibition applies specifically to AI companions (chatbots designed to simulate interpersonal or emotional relationships), not to all AI chatbots. A covered entity could potentially allow a verified minor to use non-companion AI chatbots while blocking access to companion products.
Statutory Text
6. If the age verification process described in subdivision (2) of subsection 5 of this section determines that an individual is a minor, a covered entity shall prohibit the minor from accessing or using any AI companion owned, operated, or otherwise made available by the covered entity.
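The scope distinction above — a categorical ban on AI companions for minors, but not on every chatbot — reduces to a simple access gate. This sketch is illustrative only; the function name and parameters are assumptions, and whether a given product is an "AI companion" is a legal classification under the bill's definition, not a boolean a deployer can freely choose.

```python
def may_access(product_is_companion: bool, age_class: str) -> bool:
    """Subsection 6 gate: a user verified as a minor is blocked from any
    AI companion; other chatbots are outside this particular prohibition
    (the bill's remaining duties still apply to them)."""
    if age_class == "minor" and product_is_companion:
        return False
    return True
```

For example, a verified minor could be routed to a narrow-purpose assistant while being denied the companion product entirely.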