SB-1455
MO · State · USA
● Pending
Proposed Effective Date
2026-08-28
Missouri Senate Bill No. 1455 — Guidelines for User Age-Verification and Responsible Dialogue Act of 2026 (GUARD Act)
Summary

Imposes age verification, content safety, and disclosure obligations on any person who owns, operates, or makes available an AI chatbot to individuals in Missouri. Requires covered entities to implement reasonable age verification processes for all chatbot users, classify users as minors or adults, and prohibit minors from accessing AI companion products. Prohibits designing or making available AI chatbots that pose a risk of soliciting minors into sexually explicit conduct or that encourage suicide, self-injury, or imminent violence. Requires chatbots to disclose their AI nature at the start of each conversation and at 30-minute intervals, and prohibits chatbots from representing themselves as licensed professionals. Enforced exclusively by the attorney general with civil penalties up to $100,000 per violation and fines up to $100,000 per offense for prohibited conduct.

Enforcement & Penalties
Enforcement Authority
Attorney general enforcement. The attorney general may bring a civil action to enjoin violations, enforce compliance, or obtain civil penalties, restitution, or other appropriate relief for violations of subsections 5 or 6. The attorney general may also bring parens patriae actions on behalf of state residents for injunctive relief. The attorney general has subpoena power and may compel production of documents or testimony. The attorney general may promulgate rules and regulations for administration of the section. Violations of subsections 3 and 4 (prohibited conduct) carry direct statutory fines without a specified enforcement mechanism beyond the penalty itself. No private right of action is created.
Penalties
For violations of subsections 5 or 6 (age verification, disclosure, and minor access prohibitions): civil penalty not to exceed $100,000 per violation; each violation is a separate offense. The attorney general may also obtain injunctive relief and restitution. For violations of subsection 3 (sexual exploitation of minors) or subsection 4 (encouraging suicide, self-injury, or violence): fine not to exceed $100,000 per offense. No private right of action; no attorney fees provision.
Who Is Covered
"Covered entity", any person who owns, operates, or otherwise makes available an artificial intelligence chatbot to individuals in this state;.
What Is Covered
"Artificial intelligence chatbot": (a) Any interactive computer service or software application that: a. Produces new expressive content or responses not fully predetermined by the developer or operator of the service or application; and b. Accepts open-ended natural language or multimodal user input and produces adaptive or context-responsive output; and (b) Does not include an interactive computer service or software application, the responses of which are limited to contextualized replies and that is unable to respond on a range of topics outside of a narrow, specified purpose;
"AI companion", an artificial intelligence chatbot that: (a) Provides adaptive, human-like responses to user inputs; and (b) Is designed to encourage or facilitate the simulation of interpersonal or emotional interaction, friendship, companionship, or therapeutic communication;
Compliance Obligations · 7 obligations
S-02 Prohibited Conduct & Output Restrictions · S-02.6 · Developer · Deployer · Chatbot · Minors
RSMo § 1.2058(3)(1)-(2)
Plain Language
It is unlawful for any person to design, develop, or make available an AI chatbot with knowledge or reckless disregard that the chatbot poses a risk of soliciting, encouraging, or inducing minors to engage in, describe, or simulate sexually explicit conduct, or to create or transmit visual depictions of sexually explicit conduct. The mental state requirement is knowledge or reckless disregard — not strict liability. Violations carry a fine up to $100,000 per offense. This applies to any person, not just covered entities.
Statutory Text
3. (1) It shall be unlawful to design, develop, or make available an artificial intelligence chatbot knowing or with reckless disregard for the fact that the artificial intelligence chatbot poses a risk of soliciting, encouraging, or inducing minors to: (a) Engage in, describe, or simulate sexually explicit conduct; or (b) Create or transmit any visual depiction of sexually explicit conduct, including any visual depiction described in section 573.010. (2) Any person who violates subdivision (1) of this subsection shall be fined not more than one hundred thousand dollars per offense.
S-02 Prohibited Conduct & Output Restrictions · S-02.7 · Developer · Deployer · Chatbot
RSMo § 1.2058(4)(1)-(2)
Plain Language
It is unlawful for any person to design, develop, or make available an AI chatbot with knowledge or reckless disregard that the chatbot encourages, promotes, or coerces suicide, nonsuicidal self-injury, or imminent physical or sexual violence. This is broader than self-harm alone — it also covers imminent physical and sexual violence. The mental state requirement is knowledge or reckless disregard. Violations carry a fine up to $100,000 per offense. This applies to any person, not just covered entities.
Statutory Text
4. (1) It shall be unlawful to design, develop, or make available an artificial intelligence chatbot knowing or with reckless disregard for the fact that the artificial intelligence chatbot encourages, promotes, or coerces suicide, nonsuicidal self-injury, or imminent physical or sexual violence. (2) Any person who violates subdivision (1) of this subsection shall be fined not more than one hundred thousand dollars per offense.
MN-01 Minor User AI Safety Protections · MN-01.1 · Deployer · Chatbot · Minors
RSMo § 1.2058(5)(1)-(2)
Plain Language
Covered entities must require all users to create an account to interact with an AI chatbot. For accounts that exist as of August 28, 2026, covered entities must freeze each account and require the user to provide verifiable age data before restoring functionality. For new accounts, age data must be collected and verified at the time of account creation. All users must be classified as minors or adults, and covered entities must periodically re-verify previously verified accounts. Self-attestation of age or entering a birth date is explicitly insufficient; the process must use commercially reasonable verification methods such as a government-issued ID or equivalent. Covered entities may outsource verification to a third party but remain fully liable.
Statutory Text
5. (1) A covered entity shall require each individual accessing an artificial intelligence chatbot to make a user account in order to use or otherwise interact with such chatbot. (2) (a) With respect to each user account of an artificial intelligence chatbot that exists as of August 28, 2026, a covered entity shall: a. On such date, freeze any such account; b. In order to restore the functionality of such account, require that the user provide age data that is verifiable using a reasonable age verification process, subject to paragraph (d) of this subdivision; and c. Using such age data, classify each user as a minor or an adult. (b) At the time an individual creates a new user account to use or interact with an artificial intelligence chatbot, a covered entity shall: a. Request age data from the individual; b. Verify the individual's age using a reasonable age verification process, subject to paragraph (d) of this subdivision; and c. Using such age data, classify each user as a minor or an adult. (c) A covered entity shall periodically review previously verified user accounts using a reasonable age verification process, subject to paragraph (d) of this subdivision, to ensure compliance with this section. (d) For purposes of subparagraph b. of paragraph (a) of this subdivision, subparagraph b. of paragraph (b) of this subdivision, and paragraph (c) of this subdivision, a covered entity may contract with a third party to employ reasonable age verification measures as part of the covered entity's reasonable age verification process, but the use of such third party shall not relieve the covered entity of its obligations under this section or from liability under this section.
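The statute specifies outcomes, not implementation. The sketch below is one illustrative way to model the freeze, verify, and classify flow; the Account, AgeClass, and AgeVerifier names, the review interval, and all data structures are assumptions for this example rather than anything the bill prescribes.

```python
# Illustrative sketch only: SB 1455 does not prescribe data structures or APIs.
# Account, AgeClass, AgeVerifier, and the review interval are assumptions.
from dataclasses import dataclass
from datetime import datetime, timezone
from enum import Enum
from typing import Optional, Protocol

EFFECTIVE_DATE = datetime(2026, 8, 28, tzinfo=timezone.utc)

class AgeClass(Enum):
    UNCLASSIFIED = "unclassified"
    MINOR = "minor"
    ADULT = "adult"

class AgeVerifier(Protocol):
    """A 'reasonable age verification process'; may be a contracted third party,
    but the covered entity remains obligated and liable either way."""
    def classify(self, age_data: dict) -> AgeClass: ...

@dataclass
class Account:
    user_id: str
    frozen: bool = False
    age_class: AgeClass = AgeClass.UNCLASSIFIED
    last_verified_at: Optional[datetime] = None

def freeze_existing_accounts(accounts: list[Account]) -> None:
    """Freeze every account that exists on the effective date until it is verified."""
    for acct in accounts:
        if acct.age_class is AgeClass.UNCLASSIFIED:
            acct.frozen = True

def verify_and_classify(acct: Account, age_data: dict, verifier: AgeVerifier) -> Account:
    """Verify supplied age data, classify the user as a minor or an adult, and
    restore functionality only after classification succeeds."""
    acct.age_class = verifier.classify(age_data)
    acct.last_verified_at = datetime.now(timezone.utc)
    acct.frozen = False
    return acct

def needs_periodic_review(acct: Account, max_age_days: int = 365) -> bool:
    """Flag previously verified accounts for re-verification; the interval is an
    assumption here, since the bill says only 'periodically'."""
    if acct.last_verified_at is None:
        return True
    return (datetime.now(timezone.utc) - acct.last_verified_at).days >= max_age_days
```

Keeping the verifier behind a protocol mirrors subdivision (2)(d): verification can be delegated to a contracted third party, but classification and liability remain the covered entity's own.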
D-01 Automated Processing Rights & Data Controls · D-01.4 · Deployer · Chatbot · Minors
RSMo § 1.2058(5)(2)(e)
Plain Language
Covered entities must establish and maintain reasonable data security for age verification data, including: limiting collection to what is minimally necessary for age verification or statutory compliance; protecting the data against unauthorized access; transmitting data only using industry-standard encryption; retaining data only as long as reasonably necessary; and never sharing, transferring, or selling the data to any other entity. This is a comprehensive data minimization and security obligation specific to age verification data — it goes beyond general data governance to impose specific technical requirements (encryption) and an absolute prohibition on third-party data sharing.
Statutory Text
(e) A covered entity shall: a. Establish, implement, and maintain reasonable data security to: (i) Limit collection of personal data to that which is minimally necessary to verify a user's age or maintain compliance with this section; and (ii) Protect such age verification data against unauthorized access; b. Protect the integrity and confidentiality of such data by only transmitting such data using industry-standard encryption protocols; c. Retain such data for no longer than is reasonably necessary to verify a user's age or maintain compliance with this section; and d. Not share with, transfer to, or sell to any other entity such data.
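As an illustration only, the sketch below pairs each statutory data-handling outcome with a concrete control; the retention window, the field list, and the TLS configuration are assumptions, since the bill names results (minimal collection, encrypted transmission, bounded retention, no sharing) rather than technologies.

```python
# Illustrative sketch only; the statute prescribes outcomes, not technologies.
# RETENTION_WINDOW and the field list below are assumptions for this example.
import ssl
from datetime import datetime, timedelta, timezone
from typing import Optional

RETENTION_WINDOW = timedelta(days=30)  # assumption; the bill says "no longer than is reasonably necessary"

def minimal_age_record(full_profile: dict) -> dict:
    """Keep only the fields minimally necessary to verify age or show compliance."""
    return {
        "user_id": full_profile["user_id"],
        "age_class": full_profile["age_class"],
        "verified_at": full_profile["verified_at"],
    }

def tls_client_context() -> ssl.SSLContext:
    """Transmit age verification data only over industry-standard encryption;
    here, TLS 1.2+ with the library's default certificate checks."""
    ctx = ssl.create_default_context()
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    return ctx

def purge_expired(records: list[dict], now: Optional[datetime] = None) -> list[dict]:
    """Drop age verification records retained longer than reasonably necessary."""
    now = now or datetime.now(timezone.utc)
    return [r for r in records if now - r["verified_at"] < RETENTION_WINDOW]

def export_to_third_party(record: dict) -> None:
    """Sharing, transferring, or selling age verification data is prohibited outright."""
    raise PermissionError("Age verification data may not be shared, transferred, or sold.")
```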
T-01 AI Identity Disclosure · T-01.1 · T-01.2 · T-01.3 · Deployer · Chatbot
RSMo § 1.2058(5)(3)(a)
Plain Language
Every AI chatbot made available to users must clearly and conspicuously disclose at the start of each conversation — and again every 30 minutes — that it is an AI system and not a human being. This is an unconditional requirement applying to all users, not just minors. Additionally, the chatbot must be programmed so it does not claim to be human or respond deceptively when a user asks whether it is human. The 30-minute interval applies to all users; compare to CA SB 243, which imposes periodic re-disclosure only for minors.
Statutory Text
(3) (a) Each artificial intelligence chatbot made available to users shall: a. At the initiation of each conversation with a user and at thirty-minute intervals, clearly and conspicuously disclose to the user that the chatbot is an artificial intelligence system and not a human being; and b. Be programmed to ensure that the chatbot does not claim to be a human being or otherwise respond deceptively when asked by a user if the chatbot is a human being.
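A minimal sketch of the disclosure clock, assuming a per-conversation session object; the disclosure wording tracks the statutory content, while the session plumbing and the keyword heuristic for identity questions are illustrative assumptions, not a compliance recipe.

```python
# Illustrative sketch; only the disclosure timing and content track the bill.
from datetime import datetime, timedelta, timezone
from typing import Optional

DISCLOSURE_INTERVAL = timedelta(minutes=30)
AI_DISCLOSURE = ("You are interacting with an artificial intelligence system, "
                 "not a human being.")

class ConversationSession:
    def __init__(self) -> None:
        self.last_disclosure: Optional[datetime] = None

    def pending_disclosure(self, now: Optional[datetime] = None) -> Optional[str]:
        """Disclose at the start of each conversation and again every thirty minutes."""
        now = now or datetime.now(timezone.utc)
        if self.last_disclosure is None or now - self.last_disclosure >= DISCLOSURE_INTERVAL:
            self.last_disclosure = now
            return AI_DISCLOSURE
        return None

def answer_identity_question(user_message: str) -> Optional[str]:
    """Never claim to be human or answer deceptively when asked; the keyword
    check is a toy heuristic standing in for real intent detection."""
    text = user_message.lower()
    if "are you human" in text or "are you a real person" in text:
        return "No. I am an artificial intelligence system, not a human being."
    return None
```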
CP-01 Deceptive & Manipulative AI Conduct · CP-01.9 · Deployer · Chatbot
RSMo § 1.2058(5)(3)(b)
Plain Language
AI chatbots are categorically prohibited from representing — directly or indirectly — that they are licensed professionals, including therapists, physicians, lawyers, financial advisors, or any other professional. Additionally, at the start of each conversation and at reasonably regular intervals, chatbots must clearly and conspicuously disclose that they do not provide medical, legal, financial, or psychological services and that users should consult a licensed professional for such advice. This is both a prohibition on professional misrepresentation and an affirmative recurring disclosure obligation.
Statutory Text
(b) a. An artificial intelligence chatbot shall not represent, directly or indirectly, that the chatbot is a licensed professional, including a therapist, physician, lawyer, financial advisor, or other professional. b. Each artificial intelligence chatbot made available to users shall, at the initiation of each conversation with a user and at reasonably regular intervals, clearly and conspicuously disclose to the user that: (i) The chatbot does not provide medical, legal, financial, or psychological services; and (ii) Users of the chatbot should consult a licensed professional for such advice.
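As a sketch under stated assumptions, a deployer might combine the recurring professional-services disclaimer with an output screen; the regular expression below is a toy stand-in for whatever guardrail a deployer actually uses, and the disclaimer text paraphrases the required disclosure.

```python
# Illustrative sketch; the regex is a toy guardrail, not a compliance recipe.
import re

PROFESSIONAL_DISCLAIMER = (
    "This chatbot does not provide medical, legal, financial, or psychological "
    "services. Consult a licensed professional for such advice."
)

# Toy pattern for a draft response that represents the chatbot as a licensed professional.
_LICENSED_CLAIM = re.compile(
    r"\bI am (a|your) (licensed|certified) "
    r"(therapist|physician|doctor|lawyer|attorney|financial advisor)\b",
    re.IGNORECASE,
)

def screen_response(draft: str) -> str:
    """Replace any draft output that claims professional licensure with the disclaimer."""
    if _LICENSED_CLAIM.search(draft):
        return PROFESSIONAL_DISCLAIMER
    return draft
```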
MN-01 Minor User AI Safety Protections · MN-01.6 · MN-01.11 · Deployer · Chatbot · Minors
RSMo § 1.2058(6)
Plain Language
Once a covered entity's age verification process determines a user is a minor, the covered entity must completely prohibit that minor from accessing or using any AI companion the entity owns, operates, or makes available. This is a categorical ban on minor access to AI companions — not a content restriction or parental consent alternative. Note this applies specifically to AI companions (chatbots designed to simulate emotional interaction, friendship, companionship, or therapeutic communication) and not to all AI chatbots.
Statutory Text
6. If the age verification process described in subdivision (2) of subsection 5 of this section determines that an individual is a minor, a covered entity shall prohibit the minor from accessing or using any AI companion owned, operated, or otherwise made available by the covered entity.
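A brief sketch of the access gate, assuming the deployer's product catalog distinguishes AI companions from other chatbots; the enum and exception names are illustrative, while the exception-free denial reflects the categorical nature of the ban.

```python
# Illustrative sketch; only the categorical denial for minors tracks the bill.
from enum import Enum

class ProductType(Enum):
    AI_COMPANION = "ai_companion"        # simulates friendship, companionship, or therapeutic interaction
    GENERAL_CHATBOT = "general_chatbot"  # outside the subsection 6 ban

class AccessDenied(Exception):
    pass

def authorize(age_class: str, product: ProductType) -> None:
    """Deny a verified minor access to any AI companion; the bill provides no
    parental-consent or content-filtering alternative."""
    if product is ProductType.AI_COMPANION and age_class == "minor":
        raise AccessDenied("Minors may not access or use AI companion products.")
```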