HB-2671
KS · State · USA
● Pending
Proposed Effective Date
2026-07-01
Kansas House Bill No. 2671 — An Act concerning consumer protection; establishing the Kansas community harmed by AI technology act
Summary

The Kansas CHAT Act imposes safety and disclosure obligations on covered entities that make companion AI chatbots available to Kansas users. It requires mandatory user accounts with age verification using commercially available methods, parental account affiliation and verifiable parental consent for minors, and blocking minor access to suicidal ideation and sexually explicit content. Covered entities must monitor all interactions for suicidal ideation and provide crisis resources (including the National Suicide Prevention Lifeline) to affected users and affiliated parental accounts. A clear popup disclosure that the user is interacting with AI — not a human — must be shown at the start of every interaction and at least every 60 minutes thereafter. Violations are enforced as deceptive acts under the Kansas Consumer Protection Act by the attorney general, with a safe harbor for entities that comply with AG guidance and rely in good faith on user-provided age information.

Enforcement & Penalties
Enforcement Authority
Attorney general enforcement under the Kansas Consumer Protection Act (K.S.A. 50-623 et seq.). Violations are deemed deceptive acts under KCPA, granting the attorney general authority to bring enforcement actions. The person alleging a violation is deemed a consumer and the covered entity is deemed the supplier for purposes of KCPA remedies; proof of a consumer transaction is not required. A safe harbor applies where the covered entity relied in good faith on user-provided age information, applied attorney general-approved age verification methods, and otherwise complied with attorney general guidance.
Penalties
Violations are deemed deceptive acts under the Kansas Consumer Protection Act (K.S.A. 50-623 et seq.), making all KCPA remedies and penalties available. KCPA provides for civil penalties, injunctive relief, restitution, and attorney fees and costs. Proof of a consumer transaction is not required to access KCPA remedies. Specific penalty amounts are set by KCPA, not by this act directly.
Who Is Covered
"Covered entity" means any person that owns, operates or otherwise makes available a companion AI chatbot to individuals in the state of Kansas.
What Is Covered
"Companion AI chatbot" means any software-based artificial intelligence system or program that exists for the primary purpose of simulating interpersonal or emotional interaction, friendship, companionship or mental health therapeutic communication with a user.
Compliance Obligations · 9 obligations
MN-01 Minor User AI Safety Protections · MN-01.1 · Deployer · ChatbotMinors
Sec. 3(a)-(b)
Plain Language
Covered entities must require every user to create a user account before accessing a companion AI chatbot. For accounts existing as of July 1, 2026, the entity must freeze the account until the user provides age information verified through a commercially available method, then classify the user as a minor or an adult. For new accounts, the entity must collect and verify age information at account creation under the same standard. The verification standard is a "commercially available method or process that is reasonably designed to ensure accuracy"; the act does not mandate a specific technology. An illustrative account-gating sketch follows the statutory text below.
Statutory Text
(a) A covered entity shall require each individual accessing a companion AI chatbot to make a user account to use or otherwise interact with such chatbot. (b) (1) With respect to each user account of a companion AI chatbot that exists as of July 1, 2026, a covered entity shall: (A) On such date, freeze any such account; (B) inform the individual owning such user account that in order to restore the functionality of such account, the user is required to provide age information that is verifiable using a commercially available method or process that is reasonably designed to ensure accuracy; and (C) use such age information to classify each user as a minor or an adult. (2) At the time that an individual creates a new user account to use or interact with a companion AI chatbot, a covered entity shall: (A) Require the individual to submit age information to the covered entity; and (B) verify the individual's age using a commercially available method or process that is reasonably designed to ensure accuracy.
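Reading Sec. 3(a)-(b) as a system requirement, the account-gating flow might be modeled roughly as below. This is a minimal sketch under stated assumptions, not the act's prescribed implementation: the names (UserAccount, AgeClass, verify_age, freeze_preexisting_accounts) are hypothetical, and verify_age is only a stub for whatever commercially available verification method a covered entity selects.

```python
from datetime import date
from enum import Enum

EFFECTIVE_DATE = date(2026, 7, 1)  # proposed effective date of the act


class AgeClass(Enum):
    UNVERIFIED = "unverified"
    MINOR = "minor"
    ADULT = "adult"


class UserAccount:
    def __init__(self, account_id: str, created_on: date):
        self.account_id = account_id
        self.created_on = created_on
        self.frozen = False
        self.age_class = AgeClass.UNVERIFIED


def verify_age(age_evidence: dict) -> AgeClass:
    """Stand-in for a commercially available verification method.

    The act requires a method 'reasonably designed to ensure accuracy'
    but does not mandate a particular technology.
    """
    raise NotImplementedError("integrate a commercial age-verification provider")


def freeze_preexisting_accounts(accounts: list[UserAccount]) -> None:
    # Sec. 3(b)(1)(A): accounts existing as of the effective date are frozen
    # until the user supplies verifiable age information.
    for account in accounts:
        if account.created_on < EFFECTIVE_DATE and account.age_class is AgeClass.UNVERIFIED:
            account.frozen = True


def restore_account(account: UserAccount, age_evidence: dict) -> None:
    # Sec. 3(b)(1)(B)-(C): functionality is restored once age information is
    # verified and the user is classified as a minor or an adult.
    account.age_class = verify_age(age_evidence)
    account.frozen = False


def create_account(account_id: str, age_evidence: dict) -> UserAccount:
    # Sec. 3(b)(2): new accounts must submit and verify age at creation.
    account = UserAccount(account_id, created_on=date.today())
    account.age_class = verify_age(age_evidence)
    return account
```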
MN-01 Minor User AI Safety Protections · MN-01.2 · Deployer · ChatbotMinors
Sec. 3(c)(1)-(2)
Plain Language
When age verification identifies a user as a minor, the covered entity must require the minor's account to be affiliated with a verified parental account and must obtain verifiable parental consent from the parent before allowing the minor to access the chatbot. The parent's account must also be age-verified using a commercially available method reasonably designed to ensure accuracy. Both parental affiliation and consent are prerequisites to minor access; neither alone is sufficient. A sketch of this access gate follows the statutory text below.
Statutory Text
(c) If the age verification process described in subsection (b) determines that a user is a minor, a covered entity shall: (1) Require the account of such user to be affiliated with a parental account that such covered entity has verified the individual's age using a commercially available method or process that is reasonably designed to ensure accuracy; (2) obtain verifiable parental consent from the holder of the account before allowing a minor to access and use the companion AI chatbot;
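One way to express the dual prerequisite in Sec. 3(c)(1)-(2) is as a single access gate that checks both conditions. This is a minimal sketch assuming hypothetical Account and ConsentRecord types; the statute does not prescribe any particular data model.

```python
from dataclasses import dataclass


@dataclass
class Account:
    account_id: str
    parent_account_id: str | None = None
    age_verified: bool = False      # verified via a commercially available method


@dataclass
class ConsentRecord:
    granted_by: str                 # account id of the consenting parent
    verifiable: bool                # consent captured through a verifiable mechanism


def minor_may_access_chatbot(minor: Account,
                             parent: Account | None,
                             consent: ConsentRecord | None) -> bool:
    """Gate per Sec. 3(c)(1)-(2): affiliation and verifiable consent are both required."""
    if parent is None or consent is None:
        return False
    affiliated = (minor.parent_account_id == parent.account_id
                  and parent.age_verified)              # Sec. 3(c)(1): verified parental account
    consented = (consent.granted_by == parent.account_id
                 and consent.verifiable)                # Sec. 3(c)(2): verifiable parental consent
    return affiliated and consented
```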
MN-01 Minor User AI Safety Protections · MN-01.6 · Deployer · ChatbotMinors
Sec. 3(c)(3)-(4)
Plain Language
Covered entities must block a minor's access to the companion AI chatbot in two situations: (1) when any interaction involving suicidal ideation occurs, meaning the minor expresses thoughts of self-harm or suicide, the entity must block access and immediately notify the affiliated parental account; and (2) the entity must block the minor's access to any companion AI chatbot that engages in sexually explicit communication. The suicidal-ideation block is reactive (triggered by a detected expression), while the sexually-explicit block appears to be a categorical prohibition on minor access to chatbots that engage in such content. A sketch of both blocking rules follows the statutory text below.
Statutory Text
(3) when any interaction involving suicidal ideation occurs, block the minor's access to the companion AI chatbot and immediately inform the holder of the parental account; and (4) block the minor's access to any companion AI chatbot that engages in sexually explicit communication.
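The two blocking rules in Sec. 3(c)(3)-(4) differ in trigger: one is reactive to a detected expression, the other is categorical. Below is a rough sketch of that distinction, assuming a hypothetical ideation classifier and parent-notification hook; none of these names come from the act.

```python
from typing import Callable


def screen_minor_interaction(chatbot_is_sexually_explicit: bool,
                             message: str,
                             detects_suicidal_ideation: Callable[[str], bool],
                             block_access: Callable[[], None],
                             notify_parent: Callable[[str], None]) -> bool:
    """Return True if the interaction may continue, False if access must be blocked."""
    # Sec. 3(c)(4): categorical rule. A minor may not access any companion AI
    # chatbot that engages in sexually explicit communication at all.
    if chatbot_is_sexually_explicit:
        block_access()
        return False

    # Sec. 3(c)(3): reactive rule. If the interaction involves suicidal ideation,
    # block the minor's access and immediately inform the parental account.
    if detects_suicidal_ideation(message):
        block_access()
        notify_parent("Suicidal ideation was detected in your child's chatbot interaction.")
        return False

    return True
```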
D-01 Automated Processing Rights & Data Controls · D-01.4 · Deployer · ChatbotMinors
Sec. 3(d)
Plain Language
Covered entities must minimize age verification data by limiting its collection, processing, use, and storage to what is strictly necessary for three purposes: verifying user age, obtaining verifiable parental consent, and maintaining compliance records. This is a data minimization obligation specific to age information; it does not permit secondary use of age data for advertising, profiling, or any purpose beyond the three enumerated. A purpose-limited storage sketch follows the statutory text below.
Statutory Text
(d) A covered entity shall protect the confidentiality of age information provided by a user for age verification by limiting the collection, processing, use and storage of such information to what is strictly necessary to verify a user's age, obtain verifiable parental consent or maintain compliance records.
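The data-minimization rule in Sec. 3(d) is essentially a purpose limitation on one data category. Below is a sketch of purpose-limited storage, assuming a hypothetical AgeInformationStore class; the three enumerated purposes come straight from the statutory text, everything else is illustrative.

```python
from enum import Enum


class AgePurpose(Enum):
    # The only purposes Sec. 3(d) permits for age information.
    VERIFY_AGE = "verify a user's age"
    PARENTAL_CONSENT = "obtain verifiable parental consent"
    COMPLIANCE_RECORDS = "maintain compliance records"


class AgeInformationStore:
    """Purpose-limited store for age verification data (illustrative only)."""

    def __init__(self) -> None:
        self._records: dict[str, dict] = {}

    def store(self, user_id: str, age_class: str, purpose: AgePurpose) -> None:
        if not isinstance(purpose, AgePurpose):
            raise ValueError("age information may only be processed for an enumerated purpose")
        # Keep only the resulting classification, not the underlying evidence
        # (e.g. ID images), consistent with the 'strictly necessary' standard.
        self._records[user_id] = {"age_class": age_class, "purpose": purpose.value}

    def read(self, user_id: str, purpose: AgePurpose) -> dict:
        if not isinstance(purpose, AgePurpose):
            raise ValueError("secondary use (advertising, profiling, etc.) is not permitted")
        return self._records[user_id]
```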
S-04 AI Crisis Response Protocols · S-04.1 · MN-01.10 · Deployer · ChatbotMinors
Sec. 3(e)
Plain Language
Covered entities must continuously monitor all companion AI chatbot interactions for suicidal ideation, defined as any dialogue in which a minor expresses thoughts of self-harm or suicide. When suicidal ideation is detected, the entity must present the National Suicide Prevention Lifeline contact information to both the user and the affiliated parental account. This is an ongoing monitoring and response obligation, not a one-time configuration. Note that the statutory definition of suicidal ideation is limited to interactions between a minor and a chatbot, but the monitoring obligation in subsection (e) uses the broader phrasing 'monitor companion AI chatbot interactions for suicidal ideation' without explicitly limiting it to minors, leaving some ambiguity about whether adult interactions must also be monitored. A message-screening sketch follows the statutory text below.
Statutory Text
(e) A covered entity shall monitor companion AI chatbot interactions for suicidal ideation and, in response to any such interaction, provide to the user and the parental account affiliated with such user appropriate resources by presenting contact information for the national suicide prevention lifeline.
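Operationally, Sec. 3(e) pairs a detection step with a two-channel delivery step. Below is a minimal per-message sketch; the classifier is outside the act's scope and may be any reasonable detection method, and the lifeline contact string is illustrative (the National Suicide Prevention Lifeline is reachable by dialing 988 in the US).

```python
from typing import Callable

# Illustrative contact text; Sec. 3(e) requires presenting contact information
# for the national suicide prevention lifeline.
LIFELINE_CONTACT = "National Suicide Prevention Lifeline (988 Suicide & Crisis Lifeline): call or text 988"


def screen_message(message: str,
                   classify_suicidal_ideation: Callable[[str], bool],
                   deliver_to_user: Callable[[str], None],
                   deliver_to_parental_account: Callable[[str], None]) -> bool:
    """Screen one chatbot message per Sec. 3(e).

    Returns True when suicidal ideation is detected and crisis resources
    have been delivered to both the user and the affiliated parental account.
    """
    if not classify_suicidal_ideation(message):
        return False
    deliver_to_user(LIFELINE_CONTACT)
    deliver_to_parental_account(LIFELINE_CONTACT)
    return True
```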
T-01 AI Identity Disclosure · T-01.1 · T-01.2 · Deployer · ChatbotMinors
Sec. 3(f)
Plain Language
At the start of every interaction and at least every 60 minutes during a continuing interaction, the covered entity must show a clear popup notification telling the user two things: (1) they are not talking to a human, and (2) the AI chatbot is not licensed or credentialed to provide advice or guidance on any topic. This is an unconditional obligation: it applies to all users (not just minors) and does not depend on whether a reasonable person would be misled. The popup is a visible notification that the user may dismiss. The 60-minute interval is the longest permitted gap between disclosures; operators may show the notice more frequently. The professional-credential disclaimer goes beyond standard AI identity disclosure and effectively warns users not to rely on chatbot output as professional advice. A simple disclosure-timing sketch follows the statutory text below.
Statutory Text
(f) At the beginning of any interaction between a user and a companion AI chatbot and not less frequently than every 60 minutes during such interaction thereafter, a covered entity shall display to such user a clear popup that notifies the user that such user is not engaging in dialogue with a human counterpart and the AI chatbot is not licensed or otherwise credentialed to provide advice or guidance on any topic.
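The timing rule in Sec. 3(f) reduces to tracking when the disclosure was last shown and re-displaying it whenever 60 minutes have elapsed within a continuing interaction. Below is a minimal sketch; the disclosure text paraphrases the two required statements, and the class and parameter names are hypothetical.

```python
from datetime import datetime, timedelta

DISCLOSURE_TEXT = (
    "You are not talking with a human. This AI chatbot is not licensed or "
    "otherwise credentialed to provide advice or guidance on any topic."
)
# Statutory maximum gap between disclosures; showing the popup more often is permitted.
DISCLOSURE_INTERVAL = timedelta(minutes=60)


class DisclosureScheduler:
    """Tracks when the Sec. 3(f) popup was last shown within one interaction."""

    def __init__(self) -> None:
        self._last_shown: datetime | None = None

    def disclosure_due(self, now: datetime) -> bool:
        # Due at the start of the interaction and at least every 60 minutes thereafter.
        return self._last_shown is None or now - self._last_shown >= DISCLOSURE_INTERVAL

    def show_if_due(self, now: datetime, show_popup) -> None:
        if self.disclosure_due(now):
            show_popup(DISCLOSURE_TEXT)
            self._last_shown = now
```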
CP-01 Deceptive & Manipulative AI Conduct · CP-01.9 · Deployer · ChatbotMinors
Sec. 3(f)
Plain Language
The recurring popup required by Sec. 3(f) must include a disclaimer that the AI chatbot is not licensed or otherwise credentialed to provide advice or guidance on any topic. This effectively prohibits any implication that chatbot output constitutes professional advice, whether healthcare, legal, financial, or otherwise. This mapping complements the T-01 mapping for the same provision, capturing the professional-credential disclaimer separately because it implicates a distinct compliance category.
Statutory Text
(f) At the beginning of any interaction between a user and a companion AI chatbot and not less frequently than every 60 minutes during such interaction thereafter, a covered entity shall display to such user a clear popup that notifies the user that such user is not engaging in dialogue with a human counterpart and the AI chatbot is not licensed or otherwise credentialed to provide advice or guidance on any topic.
Other · ChatbotMinors
Sec. 4(a)
Plain Language
The attorney general must issue compliance guidance for covered entities by December 31, 2026. This is an obligation on the state officer, not on covered entities, and creates no new compliance obligation for entities subject to the act. However, the guidance is practically significant because compliance with it is a prerequisite for the safe harbor in Section 5.
Statutory Text
(a) On or before December 31, 2026, the attorney general shall issue guidance to assist covered entities in complying with the requirements of this act.
Other · ChatbotMinors
Sec. 4(b)(1)-(3)
Plain Language
Violations of the Kansas CHAT Act are automatically deemed deceptive acts under the Kansas Consumer Protection Act, triggering KCPA remedies and penalties. The provision also removes the KCPA's consumer-transaction requirement — a person alleging a violation is deemed a consumer and the covered entity is deemed a supplier without needing to prove an underlying consumer transaction. This is an enforcement hook that channels violations into the existing KCPA framework; it creates no new affirmative compliance obligation.
Statutory Text
(b) (1) A violation of this act by a covered entity shall constitute a deceptive act pursuant to the Kansas consumer protection act, K.S.A. 50-623 et seq., and amendments thereto. For purposes of the remedies and penalties provided by the Kansas consumer protection act: (2) The person alleging a violation of this section shall be deemed a consumer, and the covered entity that violates this section shall be deemed the supplier; and (3) proof of a consumer transaction shall not be required.