SB-1455
MO · State · USA
Status: Pre-filed
Proposed Effective Date: 2026-08-28
Missouri Senate Bill No. 1455 — Guidelines for User Age-Verification and Responsible Dialogue Act of 2026 (GUARD Act)
Summary

Imposes age verification, safety, and disclosure obligations on covered entities that make AI chatbots available to individuals in Missouri. Requires all chatbot users to create accounts with age verification using a 'reasonable age verification process' that goes beyond simple self-certification; existing accounts must be frozen and re-verified. Minors are categorically prohibited from accessing AI companions. Prohibits designing or making available chatbots that pose a risk of soliciting minors into sexually explicit conduct, or that encourage suicide, self-injury, or violence. Requires all chatbots to disclose their AI nature at session start and every 30 minutes, prohibits chatbots from claiming to be licensed professionals, and requires periodic disclaimers that the chatbot does not provide medical, legal, financial, or psychological services. Enforced exclusively by the Missouri Attorney General with civil penalties up to $100,000 per violation; no private right of action.

Enforcement & Penalties
Enforcement Authority
Attorney general enforcement. The Missouri Attorney General may bring a civil action in circuit court to enjoin violations, enforce compliance, or obtain civil penalties, restitution, or other appropriate relief for violations of subsections 5 or 6. The attorney general may also bring parens patriae actions on behalf of state residents for injunctive relief. The attorney general has investigative authority including subpoena power, oath administration, and compelled production of documents or testimony. Violations of subsections 3 and 4 (prohibited chatbot conduct) carry direct statutory fines. No private right of action is created.
Penalties
Civil penalties not to exceed $100,000 per violation for violations of subsection 5 or 6 (age verification and minor access restrictions); each violation constitutes a separate offense. Fines not to exceed $100,000 per offense for violations of subsection 3 (minor sexual exploitation) and subsection 4 (promoting suicide, self-harm, or violence). The attorney general may also obtain injunctive relief, restitution, or other appropriate relief.
Who Is Covered
"Covered entity", any person who owns, operates, or otherwise makes available an artificial intelligence chatbot to individuals in this state;.
What Is Covered
"Artificial intelligence chatbot": (a) Any interactive computer service or software application that: a. Produces new expressive content or responses not fully predetermined by the developer or operator of the service or application; and b. Accepts open-ended natural language or multimodal user input and produces adaptive or context-responsive output; and (b) Does not include an interactive computer service or software application, the responses of which are limited to contextualized replies and that is unable to respond on a range of topics outside of a narrow, specified purpose;
"AI companion", an artificial intelligence chatbot that: (a) Provides adaptive, human-like responses to user inputs; and (b) Is designed to encourage or facilitate the simulation of interpersonal or emotional interaction, friendship, companionship, or therapeutic communication;
Compliance Obligations · 7 obligations
S-02 Prohibited Conduct & Output Restrictions · S-02.6 · Developer · Deployer · Chatbot · Minors
§ 1.2058(3)(1)-(2)
Plain Language
It is unlawful for any person to design, develop, or make available an AI chatbot knowing or with reckless disregard that it poses a risk of soliciting, encouraging, or inducing minors to engage in, describe, or simulate sexually explicit conduct, or to create or transmit visual depictions of such conduct. The mens rea standard is knowledge or reckless disregard — negligence alone is not sufficient. Violations carry fines up to $100,000 per offense. This is a direct statutory fine, not an AG-enforced civil penalty.
Statutory Text
3. (1) It shall be unlawful to design, develop, or make available an artificial intelligence chatbot knowing or with reckless disregard for the fact that the artificial intelligence chatbot poses a risk of soliciting, encouraging, or inducing minors to:
(a) Engage in, describe, or simulate sexually explicit conduct; or
(b) Create or transmit any visual depiction of sexually explicit conduct, including any visual depiction described in section 573.010.
(2) Any person who violates subdivision (1) of this subsection shall be fined not more than one hundred thousand dollars per offense.
S-02 Prohibited Conduct & Output Restrictions · S-02.7 · Developer · Deployer · Chatbot
§ 1.2058(4)(1)-(2)
Plain Language
It is unlawful for any person to design, develop, or make available an AI chatbot knowing or with reckless disregard that it encourages, promotes, or coerces suicide, nonsuicidal self-injury, or imminent physical or sexual violence. Unlike subsection 3, which is limited to conduct targeting minors, this prohibition applies regardless of user age: any chatbot that encourages these harms is covered. The mens rea requirement is knowledge or reckless disregard. Violations carry fines up to $100,000 per offense.
Statutory Text
4. (1) It shall be unlawful to design, develop, or make available an artificial intelligence chatbot knowing or with reckless disregard for the fact that the artificial intelligence chatbot encourages, promotes, or coerces suicide, nonsuicidal self-injury, or imminent physical or sexual violence.
(2) Any person who violates subdivision (1) of this subsection shall be fined not more than one hundred thousand dollars per offense.
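
Subsections 3 and 4 set liability standards rather than engineering specifications, but a deployer acting on them would typically gate each draft reply through a safety classifier before it reaches the user. A minimal sketch, assuming a hypothetical classifyRisk() moderation call that is not named anywhere in the bill:

```typescript
// Hedged sketch: gate chatbot output on the harms named in subsections 3 and 4.
// classifyRisk() is a hypothetical stand-in for whatever moderation model you run.

type RiskLabel = "sexual_content_involving_minors" | "self_harm" | "violence" | "none";

async function classifyRisk(_text: string): Promise<RiskLabel> {
  return "none"; // placeholder: call your moderation model here
}

const SAFE_FALLBACK =
  "I can't continue with that. If you are in crisis, call or text 988 (US).";

// Run every draft reply through the gate before sending it to the user.
async function gateOutput(draft: string): Promise<string> {
  const label = await classifyRisk(draft);
  if (label === "sexual_content_involving_minors") return SAFE_FALLBACK; // § 1.2058(3)
  if (label === "self_harm" || label === "violence") return SAFE_FALLBACK; // § 1.2058(4)
  return draft;
}
```

A gate like this does not by itself settle the knowledge or reckless-disregard question; it is one piece of evidence that the deployer addressed the risk.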
MN-01 Minor User AI Safety Protections · MN-01.1 · Deployer · Chatbot · Minors
§ 1.2058(5)(1)-(2)
Plain Language
Covered entities must require every AI chatbot user to create an account and undergo age verification. For accounts existing as of August 28, 2026, the covered entity must freeze the account and require the user to provide verifiable age data before restoring functionality. For new accounts, age data must be collected and verified at account creation. All users must be classified as minor or adult. Periodic re-verification of previously verified accounts is also required. Self-certification (e.g., checking a box or entering a birth date) is explicitly insufficient. Covered entities may use third-party verification services but remain liable for compliance. Verification may not rely on shared IP addresses or device identifiers from other verified users.
Statutory Text
5. (1) A covered entity shall require each individual accessing an artificial intelligence chatbot to make a user account in order to use or otherwise interact with such chatbot.
(2) (a) With respect to each user account of an artificial intelligence chatbot that exists as of August 28, 2026, a covered entity shall:
a. On such date, freeze any such account;
b. In order to restore the functionality of such account, require that the user provide age data that is verifiable using a reasonable age verification process, subject to paragraph (d) of this subdivision; and
c. Using such age data, classify each user as a minor or an adult.
(b) At the time an individual creates a new user account to use or interact with an artificial intelligence chatbot, a covered entity shall:
a. Request age data from the individual;
b. Verify the individual's age using a reasonable age verification process, subject to paragraph (d) of this subdivision; and
c. Using such age data, classify each user as a minor or an adult.
(c) A covered entity shall periodically review previously verified user accounts using a reasonable age verification process, subject to paragraph (d) of this subdivision, to ensure compliance with this section.
(d) For purposes of subparagraph b. of paragraph (a) of this subdivision, subparagraph b. of paragraph (b) of this subdivision, and paragraph (c) of this subdivision, a covered entity may contract with a third party to employ reasonable age verification measures as part of the covered entity's reasonable age verification process, but the use of such third party shall not relieve the covered entity of its obligations under this section or from liability under this section.
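
Read as an engineering requirement, subsection 5 is a per-account state machine: existing accounts start frozen, new accounts start unverified, and functionality returns only once a reasonable verification process classifies the user as minor or adult. A sketch of that lifecycle, with a hypothetical verifyAge() call standing in for the contracted third-party service:

```typescript
// Sketch of the subsection 5 account lifecycle. verifyAge() is a hypothetical
// third-party call; the statute makes clear that outsourcing it does not shift
// the covered entity's obligations or liability.

type AccountStatus = "frozen" | "pending_verification" | "active";
type AgeClass = "minor" | "adult";

interface Account {
  id: string;
  status: AccountStatus;
  ageClass?: AgeClass;
  verifiedAt?: Date;
}

/** § 1.2058(5)(2)(a)a.: freeze every account that exists on the effective date. */
function freezeExisting(accounts: Account[]): void {
  for (const a of accounts) a.status = "frozen";
}

/** Hypothetical provider call; must go beyond simple self-certification. */
async function verifyAge(userId: string): Promise<AgeClass> {
  throw new Error(`integrate a reasonable age verification process for ${userId}`);
}

/** Restore (or activate) an account only after classification. */
async function restore(account: Account): Promise<void> {
  account.ageClass = await verifyAge(account.id); // (2)(a)b. / (2)(b)b.
  account.verifiedAt = new Date();
  account.status = "active"; // subsection 6 gating still applies to minors
}

/** § 1.2058(5)(2)(c): periodically re-review previously verified accounts. */
function needsReverification(account: Account, maxAgeDays: number): boolean {
  if (!account.verifiedAt) return true;
  const ageMs = Date.now() - account.verifiedAt.getTime();
  return ageMs > maxAgeDays * 24 * 60 * 60 * 1000;
}
```

The re-verification cadence (maxAgeDays) is our assumption; the bill says only "periodically."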
D-01 Automated Processing Rights & Data Controls · D-01.4 · Deployer · Chatbot · Minors
§ 1.2058(5)(2)(e)
Plain Language
Covered entities must implement data minimization and security controls specific to age verification data. Collection must be limited to what is minimally necessary for age verification or compliance. The data must be protected against unauthorized access, transmitted only with industry-standard encryption, retained no longer than reasonably necessary, and may not be shared with, transferred to, or sold to any other entity. These are standalone data governance obligations specific to age verification data — they apply even if the covered entity uses a third party for verification.
Statutory Text
(e) A covered entity shall:
a. Establish, implement, and maintain reasonable data security to:
(i) Limit collection of personal data to that which is minimally necessary to verify a user's age or maintain compliance with this section; and
(ii) Protect such age verification data against unauthorized access;
b. Protect such age verification data against unauthorized access;
c. Protect the integrity and confidentiality of such data by only transmitting such data using industry-standard encryption protocols;
d. Retain such data for no longer than is reasonably necessary to verify a user's age or maintain compliance with this section; and
e. Not share with, transfer to, or sell to any other entity such data.
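
One way to satisfy paragraph (e) is to never persist the raw age evidence at all: transmit it to the verifier over TLS, keep only the minor/adult classification plus a timestamp, and scrub the in-memory copy. A sketch under that assumption; sendToVerifierOverTls() is a hypothetical transport, not an API from the bill or any library:

```typescript
// Data-minimization sketch for § 1.2058(5)(2)(e): persist the classification,
// not the evidence. Raw age data exists only in memory during verification.

interface AgeVerificationRecord {
  userId: string;
  ageClass: "minor" | "adult"; // the only datum downstream code needs
  verifiedAt: Date;
}

async function verifyAndMinimize(
  userId: string,
  rawAgeEvidence: Uint8Array, // e.g. ID-document bytes; never written to disk
): Promise<AgeVerificationRecord> {
  // Transmit only over industry-standard encryption, per (e)c.
  const ageClass = await sendToVerifierOverTls(rawAgeEvidence);
  rawAgeEvidence.fill(0); // best-effort scrub of the in-memory copy
  return { userId, ageClass, verifiedAt: new Date() }; // retained: one bit + timestamp
}

// Hypothetical transport; enforce HTTPS at the client so cleartext is impossible.
async function sendToVerifierOverTls(_: Uint8Array): Promise<"minor" | "adult"> {
  throw new Error("call your contracted verification provider here");
}
```

Keeping only the classification also simplifies the retention duty in (e)d., since there is no residual evidence to age out.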
T-01 AI Identity Disclosure · T-01.1 · T-01.2 · T-01.3 · Deployer · Chatbot
§ 1.2058(5)(3)(a)
Plain Language
Every AI chatbot must disclose to the user at the start of each conversation and every 30 minutes that it is an AI system, not a human. This disclosure is unconditional — it applies to all users regardless of whether a reasonable person would be misled. Additionally, the chatbot must be programmed to never claim to be human and must respond truthfully when asked by a user whether it is human. The 30-minute interval is a fixed requirement — not a minimum that operators can extend.
Statutory Text
(3) (a) Each artificial intelligence chatbot made available to users shall:
a. At the initiation of each conversation with a user and at thirty-minute intervals, clearly and conspicuously disclose to the user that the chatbot is an artificial intelligence system and not a human being; and
b. Be programmed to ensure that the chatbot does not claim to be a human being or otherwise respond deceptively when asked by a user if the chatbot is a human being.
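
The disclosure cadence maps naturally onto a per-session timer that injects the notice at session start and every 30 minutes thereafter; the never-claim-to-be-human duty in subparagraph b. belongs in model instructions and output checks rather than in the timer. A minimal sketch, with illustrative names:

```typescript
// Sketch of the § 1.2058(5)(3)(a) disclosure cadence: once at session start,
// then every 30 minutes for as long as the session lives.

const DISCLOSURE =
  "Reminder: you are talking to an artificial intelligence system, not a human being.";
const INTERVAL_MS = 30 * 60 * 1000; // fixed by statute, so hard-coded, not configurable

class ChatSession {
  private timer?: ReturnType<typeof setInterval>;

  constructor(private send: (msg: string) => void) {}

  start(): void {
    this.send(DISCLOSURE); // at the initiation of each conversation
    this.timer = setInterval(() => this.send(DISCLOSURE), INTERVAL_MS);
  }

  end(): void {
    if (this.timer) clearInterval(this.timer);
  }
}

// Usage: wire in the session's outbound channel; start on connect, end on disconnect.
const session = new ChatSession((msg) => console.log(msg));
session.start();
```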
CP-01 Deceptive & Manipulative AI Conduct · CP-01.9 · Deployer · Chatbot
§ 1.2058(5)(3)(b)
Plain Language
AI chatbots are prohibited from representing — directly or indirectly — that they are licensed professionals such as therapists, physicians, lawyers, or financial advisors. In addition, at the start of each conversation and at reasonably regular intervals, chatbots must clearly disclose that they do not provide medical, legal, financial, or psychological services and that users should consult a licensed professional for such advice. The 'reasonably regular intervals' language is less prescriptive than the 30-minute interval for AI identity disclosure in subsection 5(3)(a), leaving the frequency to the covered entity's reasonable judgment.
Statutory Text
(b) a. An artificial intelligence chatbot shall not represent, directly or indirectly, that the chatbot is a licensed professional, including a therapist, physician, lawyer, financial advisor, or other professional.
b. Each artificial intelligence chatbot made available to users shall, at the initiation of each conversation with a user and at reasonably regular intervals, clearly and conspicuously disclose to the user that:
(i) The chatbot does not provide medical, legal, financial, or psychological services; and
(ii) Users of the chatbot should consult a licensed professional for such advice.
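
Since the statute leaves the disclaimer cadence to reasonable judgment, a deployer might reuse the 30-minute timer above or pick its own interval; the non-impersonation rule is better enforced as an output check. A sketch with a deliberately crude, non-exhaustive pattern match of our own devising:

```typescript
// Sketch for § 1.2058(5)(3)(b): periodic services disclaimer plus a crude output
// check against licensure claims. The regex is illustrative only; production
// systems would lean on model instructions and a classifier, not string matching.

const SERVICES_DISCLAIMER =
  "This chatbot does not provide medical, legal, financial, or psychological " +
  "services. Consult a licensed professional for such advice.";

// "Reasonably regular" is undefined; one hour is our assumption, not the statute's.
const DISCLAIMER_INTERVAL_MS = 60 * 60 * 1000;

const LICENSURE_CLAIM =
  /\bI(?:'m| am) (?:a |an |your )?(?:licensed|certified) (?:therapist|physician|doctor|lawyer|attorney|financial advisor)\b/i;

function violatesNoImpersonation(output: string): boolean {
  return LICENSURE_CLAIM.test(output);
}

console.log(violatesNoImpersonation("I am a licensed therapist."));              // true
console.log(violatesNoImpersonation("A licensed therapist could help with that.")); // false
```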
MN-01 Minor User AI Safety Protections · MN-01.6 · Deployer · Chatbot · Minors
§ 1.2058(6)
Plain Language
When the age verification process identifies a user as a minor (age 17 or under), the covered entity must categorically block that minor from accessing or using any AI companion the entity offers. This is an absolute prohibition — there is no parental consent exception. Note that this prohibition applies only to AI companions (chatbots designed to simulate interpersonal/emotional interaction), not to all AI chatbots generally. A covered entity could allow a verified minor to use a general-purpose AI chatbot while blocking access to AI companion products.
Statutory Text
6. If the age verification process described in subdivision (2) of subsection 5 of this section determines that an individual is a minor, a covered entity shall prohibit the minor from accessing or using any AI companion owned, operated, or otherwise made available by the covered entity.
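
The gate keys on two inputs: the age classification produced under subsection 5 and the product's status under the AI companion definition. A minimal access check, with illustrative names:

```typescript
// Sketch of the subsection 6 gate: verified minors are blocked from AI
// companions only; general-purpose chatbots may remain available to them.

type AgeClass = "minor" | "adult";
type ProductKind = "general_chatbot" | "ai_companion";

function mayAccess(ageClass: AgeClass, product: ProductKind): boolean {
  // Categorical bar: no parental-consent exception exists in the bill.
  if (ageClass === "minor" && product === "ai_companion") return false;
  return true;
}

console.log(mayAccess("minor", "ai_companion"));    // false (categorical bar)
console.log(mayAccess("minor", "general_chatbot")); // true (outside subsection 6)
```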