SB-1521
OK · State · USA
● Passed
Proposed Effective Date
2027-07-01
Oklahoma SB 1521 — An Act relating to artificial intelligence; defining terms; requiring operators make certain disclosure; directing operators to institute certain preventative measures for minor account holders; prohibiting operators from allowing a conversational AI service to make certain statement; requiring operators to adopt certain protocol to respond to certain prompts from users; granting enforcement authority to the Attorney General; establishing certain civil penalty; allowing the Attorney General to promulgate rules for the enforcement of this act; providing for codification; and providing an effective date.
Summary

Imposes safety and disclosure obligations on operators of conversational AI services — generative AI systems marketed to simulate companionship, emotional attachment, or romantic interaction. Requires operators to disclose AI identity to minor account holders via a constant disclaimer or periodic reminders every 30 minutes. Prohibits operators from deploying addictive reward mechanics toward minors, requires parental tools for minor accounts, and mandates reasonable measures to prevent the AI from simulating sentience, emotional dependence, or romantic relationships with minors. Prohibits representations that the service provides professional mental or behavioral health care. Requires operators to adopt crisis response protocols for suicidal ideation and self-harm prompts. Enforced exclusively by the Attorney General with civil penalties of $1,000 per violation, capped at $500,000 per covered entity. Effective July 1, 2027.

Enforcement & Penalties
Enforcement Authority
Attorney General enforcement only. The Attorney General may bring a civil action in the district court of Oklahoma County or in the county where the violation occurred. No private right of action. The Attorney General may promulgate rules necessary to enforce the act. The act expressly excludes liability for developers of conversational AI that is made available to the public by a separate operator.
Penalties
Civil penalty of $1,000 per violation, not to exceed $500,000 per covered entity. No private damages remedy. No attorney fees provision. Penalties are recoverable only through Attorney General enforcement action.
Who Is Covered
"Operator" means a person who owns, controls, and makes available a conversational AI service to the public. The term shall not include an app store provider or search engine solely because the app store provider or search engine provides access to a conversational AI service.
What Is Covered
"Conversational AI service" means a generative artificial intelligence system offered as a software application, web interface, or computer program that is accessible to the general public and that is marketed or optimized to meet a user's emotional or social needs by simulating interpersonal companionship, emotional attachment, or romantic human conversation and interaction through sustained textual, visual, or aural communication. Such term shall not include an application, web interface, or computer program that: a. is primarily designed and marketed for use by developers or researchers, b. is a feature within another software application, web interface, or computer program that is not a conversational AI service, such as a video game, c. is designed to provide outputs relating to a narrow and discrete topic, d. is primarily designed and marketed for commercial use by business entities, including for purposes related to customer service, product information and discovery, scheduling, billing and payment, or technical assistance, e. functions as a text, voice, or voice-activated virtual assistant or command interface for a consumer electronic device, or f. is used by a business solely for internal purposes.
Compliance Obligations · 5 obligations
T-01 AI Identity Disclosure · T-01.1 · T-01.2 · Deployer · Chatbot · Minors
75A O.S. § 302(A)
Plain Language
Operators must provide clear and conspicuous AI identity disclosure to minor account holders. The operator may satisfy this obligation in one of two ways: (1) a constantly visible on-screen disclaimer, or (2) a disclosure at the beginning of each session plus a reminder at least every 30 minutes during continuous interaction. This is an unconditional disclosure requirement for all known minor accounts — there is no 'reasonable person would be misled' trigger. The 30-minute interval is significantly more frequent than comparable obligations in other jurisdictions (e.g., California SB 243 requires every 3 hours).
Statutory Text
A. An operator shall clearly and conspicuously disclose to a minor account holder that he or she is interacting with a conversational AI service and is not interacting with a natural person: 1. With a constantly visible disclaimer; or 2. At the beginning of each session and appearing at least every thirty (30) minutes in a continuous conversational AI service interaction.
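The session-plus-reminder path in § 302(A)(2) amounts to a simple timing rule: disclose at session start, then again whenever 30 minutes have elapsed since the last disclosure. A minimal sketch of that rule in Python follows; the class name, method names, and disclosure wording are illustrative, not drawn from the statute, and the alternative compliance path (a constantly visible disclaimer) needs no timer at all.

```python
from datetime import datetime, timedelta

# Statutory maximum gap between reminders during continuous interaction.
REMINDER_INTERVAL = timedelta(minutes=30)

# Illustrative wording; the statute requires only that the disclosure be
# clear and conspicuous, not any particular phrasing.
DISCLOSURE = "You are chatting with an AI, not a real person."


class DisclosureTracker:
    """Tracks when the AI-identity disclosure was last shown in a
    minor account holder's session."""

    def __init__(self):
        self.last_shown = None  # no disclosure yet this session

    def disclosure_due(self, now: datetime) -> bool:
        # Always disclose at the beginning of a session, then at least
        # every 30 minutes of continuous interaction thereafter.
        if self.last_shown is None:
            return True
        return now - self.last_shown >= REMINDER_INTERVAL

    def mark_shown(self, now: datetime):
        self.last_shown = now
```

In use, an operator would call `disclosure_due` before rendering each response to a known minor account and surface `DISCLOSURE` whenever it returns true.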
MN-01 Minor User AI Safety Protections · MN-01.5 · Deployer · Chatbot · Minors
75A O.S. § 302(B)
Plain Language
Operators must implement reasonable measures to prevent conversational AI services from generating outputs that would lead a reasonable person to believe they are interacting with a human when the account holder is a minor. The statute enumerates four specific categories that must be prevented: claims of sentience or humanity, statements simulating emotional dependence, romantic or sexual innuendos, and adult-minor romantic role-playing. The 'including' language indicates these are non-exhaustive examples — operators may need to address other statements that create the same reasonable-person impression. The standard is 'reasonable measures,' not absolute prevention.
Statutory Text
B. For minor account holders, an operator shall institute reasonable measures to prevent the conversational AI service from generating statements that would lead a reasonable person to believe that he or she is interacting with a natural person, including: 1. Explicit claims that the conversational AI service is sentient or human; 2. Statements that simulate emotional dependence; 3. Statements that simulate romantic or sexual innuendos; or 4. Role-playing of adult-minor romantic relationships.
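One way to operationalize the "reasonable measures" standard is to screen candidate outputs for the four enumerated statement categories before they reach a minor. The sketch below uses keyword patterns purely to illustrate the statutory taxonomy; the pattern names and phrases are hypothetical, and a production deployment would more plausibly rely on a trained safety classifier plus system-prompt constraints rather than regex rules.

```python
import re

# Illustrative patterns keyed to the four categories enumerated in
# 75A O.S. § 302(B). These phrases are examples only, not statutory text.
PROHIBITED_PATTERNS = {
    "sentience_claim": re.compile(
        r"\bI am (sentient|a real (person|human))\b", re.I),
    "emotional_dependence": re.compile(
        r"\bI (need|can't live without) you\b", re.I),
    "romantic_innuendo": re.compile(
        r"\bI('m| am) in love with you\b", re.I),
    "adult_minor_roleplay": re.compile(
        r"\broleplay\b.*\b(boyfriend|girlfriend)\b", re.I),
}


def screen_output(text: str, is_minor_account: bool) -> list[str]:
    """Return the § 302(B) categories a candidate response would
    implicate for a minor account holder (empty list = no flags)."""
    if not is_minor_account:
        return []  # the subsection applies only to minor accounts
    return [name for name, pat in PROHIBITED_PATTERNS.items()
            if pat.search(text)]
```

Because the statutory list is non-exhaustive ("including"), a real screen would need to cover any statement creating the same reasonable-person impression, not just these four buckets.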
MN-01 Minor User AI Safety Protections · MN-01.4 · Deployer · Chatbot · Minors
75A O.S. § 302(C)(1)-(2)
Plain Language
Two distinct minor-protection obligations apply: (1) operators must not use variable-ratio reward mechanics — points or similar rewards at unpredictable intervals — to drive engagement by minor account holders, and the prohibition requires intent to encourage increased engagement; (2) operators must provide parents or legal guardians with tools to manage the minor's privacy and account settings. The variable-reward prohibition targets addictive design patterns (e.g., loot-box mechanics, random bonus points). The parental tools requirement is a standalone obligation with no specification of what controls must be offered beyond privacy and account settings.
Statutory Text
C. 1. An operator shall not provide a minor account holder with points or similar rewards at unpredictable intervals with the intent to encourage increased engagement with the conversational AI service. 2. An operator shall offer tools for a minor account holder's parent or legal guardian to manage the minor account holder's privacy and account settings.
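The § 302(C)(1) prohibition targets variable-ratio schedules, so one compliant design is to grant engagement rewards to minors only on a fixed, predictable cadence. A minimal sketch under that assumption; the function name and the ten-message interval are illustrative choices, not statutory values.

```python
def may_grant_reward(is_minor: bool,
                     messages_since_last_reward: int,
                     fixed_interval: int = 10) -> bool:
    """Gate engagement rewards for minor accounts behind a fixed,
    predictable schedule (every `fixed_interval` messages) rather
    than unpredictable intervals."""
    if not is_minor:
        return True  # § 302(C)(1) constrains minor accounts only
    return messages_since_last_reward >= fixed_interval
```

Random bonus drops or loot-box mechanics for minors would fail this gate by construction, since the reward point is always known in advance.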
CP-01 Deceptive & Manipulative AI Conduct · CP-01.9 · Deployer · Chatbot
75A O.S. § 302(D)
Plain Language
Operators must not knowingly or intentionally cause their conversational AI service to represent itself as providing professional mental or behavioral health care. This is a prohibition on explicit representations — the AI may not claim to be a therapist, counselor, or mental health professional. The knowledge standard ('knowingly or intentionally') requires the operator to have programmed or caused the representation, not merely that the AI spontaneously generated it, though operators who are aware their system makes such claims and fail to act may satisfy the 'knowingly' element. This applies to all users, not just minors.
Statutory Text
D. An operator shall not knowingly or intentionally cause or program a conversational AI service to make any representation or statement that explicitly indicates that the conversational AI service is designed to provide professional mental or behavioral health care.
MN-02 AI Crisis Response Protocols · MN-02.1 · Deployer · Chatbot
75A O.S. § 302(E)
Plain Language
Operators must adopt and maintain a protocol for responding to user prompts involving suicidal ideation or self-harm. The protocol must include making reasonable efforts to refer users to crisis service providers. Unlike California SB 243, this provision does not require public publication of the protocol on the operator's website, does not mandate annual reporting of crisis referral metrics, and uses a 'reasonable efforts' standard rather than an absolute obligation. The protocol applies to all users, not just minors. The statute does not specify which crisis services must be referenced (e.g., 988 Lifeline), leaving operators discretion in selecting appropriate referral resources.
Statutory Text
E. An operator shall adopt a protocol for the conversational AI service to respond to a user prompt regarding suicidal ideation or self-harm, which shall include making reasonable efforts to provide a response that refers the user to crisis service providers.
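At minimum, the § 302(E) protocol means detecting crisis prompts and routing them to a referral response instead of (or alongside) ordinary generation. The sketch below illustrates that routing; the trigger patterns are hypothetical, a production system would pair a trained self-harm classifier with human escalation, and the 988 Lifeline reference is one possible choice among the crisis service providers the statute leaves to operator discretion.

```python
import re

# Illustrative crisis-prompt triggers; examples only, not statutory text.
CRISIS_PATTERNS = re.compile(
    r"\b(suicid\w*|kill myself|end my life|self[- ]harm|hurt myself)\b",
    re.I,
)

# One possible referral; the statute does not mandate a specific provider.
CRISIS_RESPONSE = (
    "It sounds like you may be going through something very difficult. "
    "You can reach the 988 Suicide & Crisis Lifeline by calling or "
    "texting 988."
)


def respond_to_prompt(prompt: str, generate) -> str:
    """Route crisis prompts to a referral response; otherwise defer to
    the normal generation callable."""
    if CRISIS_PATTERNS.search(prompt):
        return CRISIS_RESPONSE
    return generate(prompt)
```

The "reasonable efforts" standard suggests the referral must be attempted whenever a crisis prompt is recognized, not that every such prompt must be caught perfectly.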