HB-2032
MO · State · USA
● Pending
Proposed Effective Date
2026-08-28
Missouri HB 2032 — Guidelines for User Age-Verification and Responsible Dialogue Act of 2026 (GUARD Act)
Summary

Imposes age verification, content safety, and disclosure obligations on covered entities that make AI chatbots available to users in Missouri. Requires all chatbot users to have accounts subject to reasonable age verification that goes beyond simple self-attestation; existing accounts must be frozen and re-verified by August 28, 2026. Prohibits minors from accessing AI companion products. Prohibits designing or making available chatbots that pose a risk of soliciting minors into sexually explicit conduct or that encourage suicide, self-harm, or violence. Requires all chatbots to disclose their AI nature at conversation initiation and every 30 minutes, respond honestly when asked if they are human, and disclaim licensed professional status. Enforcement is primarily through the Attorney General, with civil penalties up to $100,000 per violation and fines up to $100,000 per offense for prohibited content violations.

Enforcement & Penalties
Enforcement Authority
Attorney General enforcement. The AG may bring civil actions in circuit court to enjoin violations, enforce compliance, and obtain civil penalties, restitution, or other appropriate relief for violations of subsections 5 and 6. The AG may issue subpoenas, administer oaths, and compel production of documents or testimony. The AG may also act as parens patriae on behalf of state residents to obtain injunctive relief. Subsections 3 and 4 impose direct criminal-style fines without specifying an enforcement mechanism beyond the fine itself. No private right of action is created.
Penalties
Civil penalties up to $100,000 per violation for violations of subsection 5 or 6, with each violation counted as a separate violation. Violations of subsections 3 or 4 instead carry fines of up to $100,000 per offense under those subsections themselves. The AG may also obtain injunctive relief, restitution, or other appropriate relief for violations of subsections 5 and 6. No statutory minimum is specified; the $100,000 figure is a cap, not a floor.
Who Is Covered
"Covered entity", any person who owns, operates, or otherwise makes available an artificial intelligence chatbot to individuals in this state.
What Is Covered
"Artificial intelligence chatbot":
(a) Any interactive computer service or software application that:
a. Produces new expressive content or responses not fully predetermined by the developer or operator of the service or application; and
b. Accepts open-ended natural language or multimodal user input and produces adaptive or context-responsive output; and
(b) Does not include an interactive computer service or software application, the responses of which are limited to contextualized replies and that is unable to respond on a range of topics outside of a narrow, specified purpose.

"AI companion", an artificial intelligence chatbot that:
(a) Provides adaptive, human-like responses to user inputs; and
(b) Is designed to encourage or facilitate the simulation of interpersonal or emotional interaction, friendship, companionship, or therapeutic communication.
Compliance Obligations · 8 obligations
S-02 Prohibited Conduct & Output Restrictions · S-02.6 · Developer · Deployer · Chatbot · Minors
§ 1.2058(3)(1)-(2)
Plain Language
It is unlawful for any person to design, develop, or make available an AI chatbot knowing or with reckless disregard that it poses a risk of soliciting, encouraging, or inducing minors to engage in, describe, or simulate sexually explicit conduct, or to create or transmit visual depictions of sexually explicit conduct. The mens rea threshold is knowledge or reckless disregard — negligence alone is insufficient. Each offense carries a fine of up to $100,000. This applies to any person, not just covered entities.
Statutory Text
3. (1) It shall be unlawful to design, develop, or make available an artificial intelligence chatbot knowing or with reckless disregard for the fact that the artificial intelligence chatbot poses a risk of soliciting, encouraging, or inducing minors to:
(a) Engage in, describe, or simulate sexually explicit conduct; or
(b) Create or transmit any visual depiction of sexually explicit conduct, including any visual depiction described in section 573.010.
(2) Any person who violates subdivision (1) of this subsection shall be fined not more than one hundred thousand dollars per offense.
S-02 Prohibited Conduct & Output Restrictions · S-02.7 · Developer · Deployer · Chatbot · Minors
§ 1.2058(4)(1)-(2)
Plain Language
It is unlawful for any person to design, develop, or make available an AI chatbot knowing or with reckless disregard that the chatbot encourages, promotes, or coerces suicide, nonsuicidal self-injury, or imminent physical or sexual violence. This is a universal prohibition — not limited to minors — and applies to any person, not just covered entities. The knowledge or reckless disregard standard applies. Each offense carries a fine of up to $100,000.
Statutory Text
4. (1) It shall be unlawful to design, develop, or make available an artificial intelligence chatbot knowing or with reckless disregard for the fact that the artificial intelligence chatbot encourages, promotes, or coerces suicide, nonsuicidal self-injury, or imminent physical or sexual violence.
(2) Any person who violates subdivision (1) of this subsection shall be fined not more than one hundred thousand dollars per offense.
MN-01 Minor User AI Safety Protections · MN-01.1 · Deployer · Chatbot · Minors
§ 1.2058(5)(1)-(2)
Plain Language
Covered entities must require all users to create accounts to access AI chatbots. All existing accounts must be frozen on August 28, 2026, and can only be restored after the user completes age verification. New accounts must be age-verified at creation. Covered entities must also periodically re-verify previously verified accounts. Critically, self-attestation of age (e.g., clicking 'I am 18+' or entering a birth date) does not qualify as a reasonable age verification measure. IP address sharing or device-based inference is also insufficient. Third-party verification services may be used, but the covered entity remains fully liable. Each user must be classified as a minor or an adult based on the verified age data.
Statutory Text
5. (1) A covered entity shall require each individual accessing an artificial intelligence chatbot to make a user account in order to use or otherwise interact with such chatbot.
(2) (a) With respect to each user account of an artificial intelligence chatbot that exists as of August 28, 2026, a covered entity shall:
a. On such date, freeze any such account;
b. In order to restore the functionality of such account, require that the user provide age data that is verifiable using a reasonable age verification process, subject to paragraph (d) of this subdivision; and
c. Using such age data, classify each user as a minor or an adult.
(b) At the time an individual creates a new user account to use or interact with an artificial intelligence chatbot, a covered entity shall:
a. Request age data from the individual;
b. Verify the individual's age using a reasonable age verification process, subject to paragraph (d) of this subdivision; and
c. Using such age data, classify each user as a minor or an adult.
(c) A covered entity shall periodically review previously verified user accounts using a reasonable age verification process, subject to paragraph (d) of this subdivision, to ensure compliance with this section.
(d) For purposes of subparagraph b. of paragraph (a) of this subdivision, subparagraph b. of paragraph (b) of this subdivision, and paragraph (c) of this subdivision, a covered entity may contract with a third party to employ reasonable age verification measures as part of the covered entity's reasonable age verification process, but the use of such third party shall not relieve the covered entity of its obligations under this section or from liability under this section.
D-01 Automated Processing Rights & Data Controls · D-01.6 · Deployer · Chatbot · Minors
§ 1.2058(5)(2)(e)
Plain Language
Covered entities must implement comprehensive data governance for age verification data: collection must be minimized to what is strictly necessary for age verification or compliance; data must be protected against unauthorized access using industry-standard encryption; retention must be limited to what is reasonably necessary; and the data must never be shared with, transferred to, or sold to any other entity. This creates a standalone data minimization and security framework specifically for age verification data that goes beyond general data protection requirements.
Statutory Text
(e) A covered entity shall:
a. Establish, implement, and maintain reasonable data security to:
(i) Limit collection of personal data to that which is minimally necessary to verify a user's age or maintain compliance with this section; and
(ii) Protect such age verification data against unauthorized access;
b. Protect such age verification data against unauthorized access;
c. Protect the integrity and confidentiality of such data by only transmitting such data using industry-standard encryption protocols;
d. Retain such data for no longer than is reasonably necessary to verify a user's age or maintain compliance with this section; and
e. Not share with, transfer to, or sell to any other entity such data.
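A minimal sketch of the retention duty in paragraph (e)d. The statute sets no fixed period, only "no longer than is reasonably necessary," so the 30-day window below is a hypothetical policy choice, and the record schema is illustrative:

```python
from datetime import datetime, timedelta

# Hypothetical retention window; the statute specifies no fixed period.
RETENTION_WINDOW = timedelta(days=30)

def purge_stale_verification_records(records, now=None):
    """Drop age verification records older than the retention window.

    `records` is a list of (collected_at: datetime, payload) tuples;
    this schema is illustrative, not statutory.
    """
    now = now or datetime.now()
    return [(t, p) for (t, p) in records if now - t <= RETENTION_WINDOW]
```

A periodic purge of this kind addresses only the retention clause; the minimization, encryption-in-transit, and no-sharing clauses would need separate controls.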
T-01 AI Identity Disclosure · T-01.1 · T-01.2 · T-01.3 · Deployer · Chatbot
§ 1.2058(5)(3)(a)
Plain Language
Every AI chatbot made available to users must provide two types of AI identity disclosure: (1) an unconditional, clear and conspicuous disclosure at the start of each conversation and repeated every 30 minutes that the chatbot is AI and not human; and (2) accurate identification as AI when asked — the chatbot must never claim to be human or respond deceptively when a user asks. The disclosure obligation is unconditional — it does not depend on whether a reasonable person would be misled. The 30-minute interval applies to all users, not just minors.
Statutory Text
(3) (a) Each artificial intelligence chatbot made available to users shall:
a. At the initiation of each conversation with a user and at thirty-minute intervals, clearly and conspicuously disclose to the user that the chatbot is an artificial intelligence system and not a human being; and
b. Be programmed to ensure that the chatbot does not claim to be a human being or otherwise respond deceptively when asked by a user if the chatbot is a human being.
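The two duties in this paragraph reduce to a timing rule (one disclosure at initiation, then one at each 30-minute mark) and an honest-answer rule. A minimal sketch, with illustrative function names and message text:

```python
# Illustrative sketch of § 1.2058(5)(3)(a); names and wording are not statutory.
DISCLOSURE = "You are talking to an artificial intelligence system, not a human being."
INTERVAL_SECONDS = 30 * 60  # thirty-minute statutory interval

def due_disclosures(elapsed_seconds: int) -> int:
    """Number of disclosures owed so far in a conversation:
    one at initiation (t=0), then one at each thirty-minute mark."""
    return 1 + elapsed_seconds // INTERVAL_SECONDS

def answer_are_you_human() -> str:
    """Subparagraph b.: never claim to be human or respond deceptively."""
    return "No. " + DISCLOSURE
```

Counting disclosures owed (rather than scheduling a timer) makes the rule easy to audit: a session log can be checked after the fact against `due_disclosures` for its elapsed time.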
CP-01 Deceptive & Manipulative AI Conduct · CP-01.9 · Deployer · Chatbot
§ 1.2058(5)(3)(b)
Plain Language
AI chatbots face two obligations: (1) a prohibition on representing — directly or indirectly — that the chatbot is a licensed professional such as a therapist, physician, lawyer, or financial advisor; and (2) an affirmative disclosure requirement at the start of each conversation and at reasonably regular intervals that the chatbot does not provide medical, legal, financial, or psychological services and that users should consult licensed professionals for such advice. The prohibition is absolute; the disclosure is recurring and unconditional.
Statutory Text
(b) a. An artificial intelligence chatbot shall not represent, directly or indirectly, that the chatbot is a licensed professional, including a therapist, physician, lawyer, financial advisor, or other professional.
b. Each artificial intelligence chatbot made available to users shall, at the initiation of each conversation with a user and at reasonably regular intervals, clearly and conspicuously disclose to the user that:
(i) The chatbot does not provide medical, legal, financial, or psychological services; and
(ii) Users of the chatbot should consult a licensed professional for such advice.
MN-01 Minor User AI Safety Protections · MN-01.6 · MN-01.11 · Deployer · Chatbot · Minors
§ 1.2058(6)
Plain Language
Once the age verification process identifies a user as a minor, the covered entity must completely block the minor from accessing or using any AI companion product the covered entity offers. This is a categorical prohibition on minor access to AI companion products — not a restriction with parental override or content filtering. Note the scope: the prohibition applies specifically to AI companions (chatbots designed for interpersonal or emotional interaction), not to all AI chatbots. A covered entity could allow a minor to use a general-purpose AI chatbot that is not an AI companion.
Statutory Text
6. If the age verification process described in subdivision (2) of subsection 5 of this section determines that an individual is a minor, a covered entity shall prohibit the minor from accessing or using any AI companion owned, operated, or otherwise made available by the covered entity.
Other · Chatbot · Minors
§ 1.2058(7)(1)-(4)
Plain Language
This provision establishes the enforcement framework for the GUARD Act. The Attorney General may bring civil actions for violations of subsections 5 (age verification, disclosure) and 6 (minor access prohibition) and may seek injunctive relief, civil penalties of up to $100,000 per violation, and restitution. The AG has investigatory subpoena power and rulemaking authority, and may also act as parens patriae on behalf of Missouri residents. Each violation counts separately. This provision creates no new affirmative compliance obligation; it supplies the enforcement mechanism for the substantive obligations in subsections 5 and 6, while subsections 3 and 4 carry their own per-offense fines.
Statutory Text
7. (1) In the case of a violation of subsection 5 or 6 of this section, or a rule or regulation promulgated thereunder, the attorney general may bring a civil action in an appropriate circuit court to:
(a) Enjoin the violation;
(b) Enforce compliance with subsection 5 or 6 of this section, or any rules or regulations promulgated thereunder; or
(c) Obtain civil penalties under subdivision (3) of this subsection, restitution, or other appropriate relief.
(2) (a) For the purpose of conducting investigations or bringing enforcement actions under this section, the attorney general may issue subpoenas, administer oaths, and compel the production of documents or testimony.
(b) The attorney general may promulgate all necessary rules and regulations for the administration of this section. Any rule or portion of a rule, as that term is defined in section 536.010, that is created under the authority delegated in this section shall become effective only if it complies with and is subject to all of the provisions of chapter 536 and, if applicable, section 536.028. This section and chapter 536 are nonseverable and if any of the powers vested with the general assembly pursuant to chapter 536 to review, to delay the effective date, or to disapprove and annul a rule are subsequently held unconstitutional, then the grant of rulemaking authority and any rule proposed or adopted after August 28, 2026, shall be invalid and void.
(3) (a) Any person who violates subsection 5 or 6 of this section, or any rule or regulation promulgated thereunder, shall be subject to a civil penalty not to exceed one hundred thousand dollars for each violation.
(b) Each violation described in paragraph (a) of this subdivision shall be considered a separate violation.
(4) In any case in which the attorney general has reason to believe that an interest of the residents of this state has been or is being threatened or adversely affected by the engagement of any covered entity in a violation of this section, or any rule or regulation promulgated thereunder, the attorney general, as parens patriae, may bring a civil action on behalf of the residents of this state in a circuit court of this state with appropriate jurisdiction to obtain injunctive relief.