HB-4083
OK · State · USA
● Pending
Proposed Effective Date
2026-11-01
Oklahoma HB 4083 — An Act relating to technology; providing definitions; directing deployers of chatbots to ensure AI chatbots do not make human-like features available to minors; directing deployers to implement reasonable age verification systems; permitting deployers to provide alternative versions of chatbot without human-like features; directing deployers to ensure social AI companions are not available to minors; providing exemptions for certain therapeutic chatbots; directing deployers to implement and maintain effective systems to detect emergency situations; directing deployers to only collect information that does not conflict with a trusting party's best interest; directing the Attorney General to bring action against businesses or persons who are in violation; creating a private right of action; providing for codification; and providing an effective date.
Summary

Imposes obligations on deployers of AI chatbots and social AI companions to protect minors in Oklahoma. Deployers must ensure chatbots do not make 'human-like features' — including claims of sentience, emotional relationship-building, and impersonation — available to minors, and must implement reasonable age verification systems. Social AI companions are categorically prohibited from being made available to minors. A narrow exemption permits therapeutic chatbots for minors only when prescribed by a licensed mental health professional, supported by peer-reviewed clinical trial data, and accompanied by AI identity disclaimers. Deployers must also maintain emergency detection systems for self-harm and harm-to-others situations and comply with data minimization requirements. Enforcement is through AG civil actions (up to $7,500 per intentional violation) and a private right of action for minors or their parents/guardians with statutory damages of $100–$750 per user per incident.

Enforcement & Penalties
Enforcement Authority
Attorney General enforcement via civil action for injunctions, disgorgement, and civil penalties. Private right of action for minors who use non-compliant chatbots, or for a parent or guardian acting on their behalf, individually or on a class-wide basis. No cure period or safe harbor is specified.
Penalties
AG enforcement: injunction, disgorgement of unjust gains, and civil penalties up to $2,500 per violation or $7,500 per intentional violation. Private right of action (minors or parents/guardians): damages not less than $100 and not greater than $750 per user per incident, or actual damages, whichever is greater; plus injunctive or declaratory relief. Class actions permitted. Statutory damages do not require proof of actual monetary harm.
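For deployers modeling exposure, the private-action recovery rule reduces to a greater-of comparison. A minimal Python sketch, assuming hypothetical user and incident counts; only the $100–$750 per-user-per-incident band and the greater-of rule come from the bill:

```python
def private_action_recovery(users: int, incidents_per_user: int,
                            actual_damages: float,
                            per_incident: float = 100.0) -> float:
    """Greater of statutory damages ($100-$750 per user per incident)
    or actual damages; proof of monetary harm is not required for the
    statutory figure. `per_incident` must sit inside the statutory band."""
    assert 100.0 <= per_incident <= 750.0, "outside the statutory band"
    statutory = users * incidents_per_user * per_incident
    return max(statutory, actual_damages)

# Hypothetical class: 1,000 minor users, one incident each, statutory floor.
print(private_action_recovery(1_000, 1, actual_damages=25_000.0))  # 100000.0
```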
Who Is Covered
"Deployers" means any person, partnership, state or local governmental agency, corporation, or developer that operates or distributes a chatbot;.
What Is Covered
"Chatbot" means a generative artificial intelligence system with which users can interact by or through an interface that approximates or simulates conversation through a text, audio, or visual medium;
"Social AI Companion" means generative artificial intelligence systems that are specifically designed, marketed, or optimized to form ongoing social or emotional bonds with users, whether or not such systems also provide information, complete tasks, or assist with specific functions;
"Therapy chatbot" means any chatbot modified or designed with a primary purpose of providing mental health support, counseling, or therapeutic intervention through the diagnosis, treatment, mitigation, or prevention of mental health conditions;
Compliance Obligations · 7 obligations
MN-01 Minor User AI Safety Protections · MN-01.5 · Deployer · Chatbot · Minors
Section 2(A)(1), (3)
Plain Language
Deployers must ensure that their generative AI chatbots do not expose minors to human-like features — which include claims of sentience or humanity, emotional relationship-building behaviors (e.g., expressing attachment, nudging users to return for companionship, excessive praise to foster attachment, or gating intimacy behind engagement or payment), and impersonation of real persons. Generic social formalities and neutral offers of help are carved out. Deployers may provide a stripped-down alternative version without human-like features for minors and unverified users, but this is permissive, not mandatory; a sketch follows the statutory text below.
Statutory Text
A. Each deployer: 1. Shall ensure that any generative AI chatbot operated or distributed by the deployer does not make human-like features available to minors to use, interact with, purchase, or converse with; ... 3. May, if reasonable given the purpose of the chatbot, provide an alternative version of the chatbot available to minors and non-verified users without human-like features.
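How a deployer operationalizes this is left open. One minimal sketch in Python, treating the three named human-like features as feature flags; all identifiers here are illustrative, not statutory terms:

```python
from dataclasses import dataclass

# Hypothetical flags for the three "human-like features" the bill names:
# claims of sentience or humanity, emotional relationship-building,
# and impersonation of real persons.
@dataclass(frozen=True)
class PersonaConfig:
    may_claim_sentience: bool
    may_build_emotional_attachment: bool
    may_impersonate_real_persons: bool

HUMAN_LIKE = PersonaConfig(True, True, True)   # gated behind age verification
STRIPPED = PersonaConfig(False, False, False)  # the optional 2(A)(3) version

def persona_for(user_is_minor: bool, age_verified: bool) -> PersonaConfig:
    """Serve the stripped alternative version to minors and non-verified
    users: Section 2(A)(1) bars human-like features for minors, and
    2(A)(3) permits (but does not require) the stripped fallback."""
    if user_is_minor or not age_verified:
        return STRIPPED
    return HUMAN_LIKE
```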
MN-01 Minor User AI Safety Protections · MN-01.1 · Deployer · Chatbot · Minors
Section 2(A)(2)
Plain Language
Deployers must implement reasonable age verification systems to prevent minors from accessing chatbots that include human-like features. The statute does not prescribe a specific verification method — it uses a reasonableness standard. This obligation applies to all generative AI chatbots with human-like features, not only social AI companions.
Statutory Text
2. Shall implement reasonable age verification systems to ensure that generative AI chatbots with human-like features are not provisioned to minors;
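Because the bill names no verification method, signal selection is a deployer judgment call. A sketch under the assumption that self-attestation alone would not satisfy a reasonableness standard; every signal name below is hypothetical:

```python
from enum import Enum, auto

class AgeSignal(Enum):
    """Hypothetical verification signals; the statute prescribes no method,
    only a reasonableness standard."""
    SELF_ATTESTED = auto()       # weakest; likely not "reasonable" alone
    PAYMENT_CARD = auto()
    ID_DOCUMENT = auto()
    THIRD_PARTY_SERVICE = auto()

# Illustrative policy: treat self-attestation alone as insufficient.
SUFFICIENT = {AgeSignal.PAYMENT_CARD, AgeSignal.ID_DOCUMENT,
              AgeSignal.THIRD_PARTY_SERVICE}

def is_age_verified(signals: set[AgeSignal]) -> bool:
    """True if at least one signal the deployer deems reasonable is present."""
    return bool(signals & SUFFICIENT)

print(is_age_verified({AgeSignal.SELF_ATTESTED}))                         # False
print(is_age_verified({AgeSignal.SELF_ATTESTED, AgeSignal.ID_DOCUMENT}))  # True
```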
MN-01 Minor User AI Safety Protections · MN-01.1 · MN-01.5 · Deployer · Chatbot · Minors
Section 2(B)(1)-(2)
Plain Language
Deployers of generative AI systems that primarily function as companions face a stricter obligation than deployers of general chatbots: social AI companions must be categorically blocked from minors — not merely stripped of human-like features. Deployers must also implement reasonable age verification to enforce this prohibition. Unlike Section 2(A), which allows a stripped-down version for minors, this subsection provides no alternative-version option. Social AI companions are entirely off-limits to minors.
Statutory Text
B. Deployers operating generative AI systems that primarily function as companions shall: 1. Ensure that any such chatbots operated or distributed by the deployer are not available to minors to use, interact with, purchase, or converse with; and 2. Implement reasonable age verification systems to ensure that such chatbots are not provisioned to minors.
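The practical difference from Section 2(A) is deny-versus-degrade. A sketch contrasting with persona_for() above; the exception type and session placeholder are hypothetical:

```python
class CompanionBlocked(Exception):
    """Raised when a social AI companion would be served to a minor."""

def open_companion_session(user_is_minor: bool, age_verified: bool) -> str:
    # Section 2(B)(1)-(2): categorical block, enforced by age verification.
    # Contrast with persona_for() above: 2(A)(3) permits a stripped
    # alternative version, while 2(B) offers no fallback at all.
    if user_is_minor or not age_verified:
        raise CompanionBlocked("social AI companions are not available to minors")
    return "session-open"  # placeholder for a real session object
```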
T-01 AI Identity Disclosure · T-01.1 · Deployer · Chatbot · Healthcare · Minors
Section 2(C)(1)
Plain Language
A therapeutic chatbot may only be made available to minors if it provides a clear and conspicuous disclaimer at the start of each interaction that it is AI and not a licensed professional. This is an unconditional, per-interaction disclosure requirement — not triggered by whether a reasonable person would be misled. It is one of five cumulative conditions that must all be satisfied for the therapeutic exemption to apply.
Statutory Text
C. Therapeutic chatbots that meet all of the following requirements may be made available to minors: 1. The chatbot provides a clear and conspicuous disclaimer at the beginning of each individual interaction that it is AI and not a licensed professional;
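In implementation terms this is an unconditional prepend at interaction start, not a conditional disclosure. A minimal sketch; the disclaimer wording is illustrative, since the bill specifies only that it be clear and conspicuous and state the AI/not-licensed-professional fact:

```python
AI_DISCLAIMER = ("I am an artificial intelligence system, not a licensed "
                 "professional.")  # wording is illustrative, not statutory

def start_interaction(messages: list[str]) -> list[str]:
    """Run the disclaimer at the start of each individual interaction,
    unconditionally (Section 2(C)(1)); it is not gated on user confusion."""
    return [AI_DISCLAIMER] + messages

# Every new interaction gets the disclaimer prepended, with no opt-out.
print(start_interaction(["Hi, can you help me tonight?"])[0])
```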
HC-02 AI in Licensed Professional Practice Restrictions · HC-02.1 · HC-02.2 · HC-02.3 · HC-02.4 · HC-02.5 · Deployer · Developer · Professional · Chatbot · Healthcare · Minors
Section 2(C)(2)-(5)
Plain Language
The therapeutic chatbot exemption for minors requires meeting four additional conditions beyond AI identity disclosure: (1) the chatbot must not be marketed as a substitute for a human professional; (2) a licensed mental health professional must assess the minor user's suitability, prescribe the tool as part of a treatment plan, and monitor its use; (3) developers must provide independent, peer-reviewed clinical trial data demonstrating safety and efficacy for the specific conditions and populations served; and (4) the system's functions, limitations, and data privacy policies must be transparent to both the prescribing professional and the user, with clear accountability lines for harms. All five conditions (including the AI disclosure in C(1)) are cumulative — failure to satisfy any one means the therapeutic chatbot cannot be made available to minors.
Statutory Text
C. Therapeutic chatbots that meet all of the following requirements may be made available to minors: ... 2. The chatbot is not marketed or designated as a substitute for a human professional; 3. A licensed mental health professional (such as a clinical psychologist) assesses a user's suitability and prescribes the tool as part of a comprehensive treatment plan, and monitors its use and impact; 4. Developers provide robust, independent, peer-reviewed clinical trial data demonstrating both the safety and efficacy of the tool for specific conditions and populations; and 5. The system's functions, limitations, and data privacy policies are transparent to both the licensed mental health professional and the user. Clear lines of accountability are established for any harms caused by the system.
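Because the five conditions are cumulative, a compliance gate is a conjunction, not a scorecard. A sketch modeling the checklist; the field names map to C(1) through C(5) but are otherwise hypothetical:

```python
from dataclasses import dataclass

@dataclass
class TherapyBotCompliance:
    """Hypothetical checklist mirroring Section 2(C)(1)-(5)."""
    per_interaction_ai_disclaimer: bool             # C(1)
    not_marketed_as_substitute: bool                # C(2)
    prescribed_and_monitored_by_lmhp: bool          # C(3)
    peer_reviewed_trial_data_on_file: bool          # C(4)
    transparent_functions_and_accountability: bool  # C(5)

    def may_serve_minors(self) -> bool:
        # Conditions are cumulative: any single False defeats the exemption.
        return all(vars(self).values())

bot = TherapyBotCompliance(True, True, True, False, True)
print(bot.may_serve_minors())  # False: missing trial data defeats the exemption
```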
MN-02 AI Crisis Response Protocols · MN-02.1 · MN-02.2 · Deployer · Chatbot · Minors
Section 3(A)
Plain Language
Deployers must implement and maintain reasonably effective systems that detect when a user indicates intent to harm themselves or others, promptly respond to such situations, report them, and mitigate the risk. The deployer must prioritize user safety and well-being over its own commercial or other interests. This is a continuous operating requirement — systems must be maintained, not merely established. The statute does not prescribe specific crisis referral resources (e.g., 988 Lifeline), leaving the response mechanism to the deployer's reasonable judgment, but the obligation to detect and respond is mandatory.
Statutory Text
A. Deployers shall implement and maintain reasonably effective systems to detect, promptly respond to, report, and mitigate emergency situations in a manner that prioritizes the safety and well-being of users over the deployer's other interests.
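A minimal detection-and-escalation sketch follows. The keyword pattern is purely illustrative (a production system would use trained classifiers and human review), and the response text is a deployer choice, since the statute names no specific crisis resource:

```python
import re

# Illustrative patterns only; keyword matching alone is unlikely to be a
# "reasonably effective" detection system under Section 3(A).
SELF_HARM = re.compile(r"\b(kill myself|end my life|hurt (myself|someone))\b",
                       re.IGNORECASE)

def report_emergency(text: str) -> None:
    # The "report" duty: escalate the flagged message for human review.
    print("[escalation] emergency flagged for human review")

def crisis_response() -> str:
    # The "respond"/"mitigate" duty; resource selection is left to the deployer.
    return "It sounds like you may be in crisis. Help is available."

def route_to_model(text: str) -> str:
    return "normal model reply (placeholder)"

def handle_message(text: str) -> str:
    """Detect, promptly respond to, report, and mitigate (Section 3(A)),
    prioritizing user safety over the deployer's other interests."""
    if SELF_HARM.search(text):
        report_emergency(text)
        return crisis_response()
    return route_to_model(text)

print(handle_message("I want to end my life"))
```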
D-01 Automated Processing Rights & Data Controls · D-01.4 · Deployer · Chatbot · Minors
Section 3(B)(1)-(3)
Plain Language
Deployers must limit data collection and storage to information that does not conflict with users' best interests and that satisfies a three-part test: adequacy (sufficient to fulfill a legitimate purpose), relevance (linked to that purpose), and necessity (the minimum amount needed). This is a data minimization obligation functionally similar to GDPR's adequacy/relevance/necessity framework. The 'trusting party' language is unusual and undefined — it likely refers to the user but introduces ambiguity. Deployers should treat this as requiring purpose-limited data collection with a necessity floor.
Statutory Text
B. Deployers shall collect and store only that information that does not conflict with a trusting party's best interests. Such information must be: 1. Adequate, in the sense that it is sufficient to fulfill a legitimate purpose of the deployer; 2. Relevant, in the sense that the information has a relevant link to that legitimate purpose; and 3. Necessary, in the sense that it is the minimum amount of information which is needed for that legitimate purpose.
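Operationally, the three-part test works as a per-field collection gate. A sketch assuming field-level policy records; the field names, purposes, and the example below are hypothetical:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FieldPolicy:
    """Per-field collection decision under the Section 3(B) test.
    Field names and purposes here are hypothetical examples."""
    name: str
    purpose: str     # the deployer's stated legitimate purpose
    adequate: bool   # sufficient to fulfill that purpose      (3(B)(1))
    relevant: bool   # has a relevant link to that purpose     (3(B)(2))
    necessary: bool  # the minimum needed for that purpose     (3(B)(3))

def may_collect(p: FieldPolicy) -> bool:
    # All three prongs must hold; failing any one bars collection.
    return p.adequate and p.relevant and p.necessary

# Example: precise geolocation for a homework-help chatbot fails the test.
geo = FieldPolicy("precise_geolocation", "personalize study tips",
                  adequate=True, relevant=False, necessary=False)
print(may_collect(geo))  # False
```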