SB-1521
OK · State · USA
● Passed
Proposed Effective Date: 2027-07-01
Oklahoma SB 1521 — An Act relating to artificial intelligence; defining terms; requiring operators make certain disclosure; directing operators to institute certain preventative measures for minor account holders; prohibiting operators from allowing a conversational AI service to make certain statement; requiring operators to adopt certain protocol to respond to certain prompts from users; granting enforcement authority to the Attorney General; establishing certain civil penalty; allowing the Attorney General to promulgate rules for the enforcement of this act; providing for codification; and providing an effective date.
Summary

Oklahoma SB 1521 regulates operators of conversational AI services — generative AI systems marketed or optimized to simulate emotional companionship, attachment, or romantic interaction with the public. The law imposes obligations specific to minor account holders, including unconditional AI identity disclosure, prevention of anthropomorphic and emotionally manipulative statements, prohibition of addictive reward mechanics, and provision of parental control tools. Operators must also adopt a crisis response protocol for all users expressing suicidal ideation or self-harm, and may not represent that the service provides professional mental or behavioral health care. Enforcement is exclusively through the Attorney General, with civil penalties of $1,000 per violation capped at $500,000 per covered entity. The act expressly shields developers from liability where a separate operator makes the service available.

Enforcement & Penalties
Enforcement Authority
Attorney General enforcement only. The Attorney General may bring a civil action in the district court of Oklahoma County or a district court in the county in which the violation occurred. No private right of action is created. The Attorney General may promulgate rules necessary to enforce the act. The act expressly excludes developer liability where a separate operator makes the conversational AI available to the public.
Penalties
Civil penalty of $1,000 per violation, not to exceed $500,000 per covered entity. No private damages remedy, injunctive relief, or attorney fee provisions. Penalties are available only through AG enforcement action.
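For exposure modeling, the per-violation penalty and the per-entity cap compose as a simple minimum. A minimal sketch in Python (the function name and the example violation counts are illustrative, not drawn from the statute):

```python
def capped_penalty(violations: int,
                   per_violation: int = 1_000,
                   cap: int = 500_000) -> int:
    """Civil penalty exposure under SB 1521: $1,000 per violation,
    capped at $500,000 per covered entity."""
    return min(violations * per_violation, cap)

# 300 violations -> $300,000; 800 violations hits the $500,000 cap.
assert capped_penalty(300) == 300_000
assert capped_penalty(800) == 500_000
```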
Who Is Covered
"Operator" means a person who owns, controls, and makes available a conversational AI service to the public. The term shall not include an app store provider or search engine solely because the app store provider or search engine provides access to a conversational AI service.
What Is Covered
"Conversational AI service" means a generative artificial intelligence system offered as a software application, web interface, or computer program that is accessible to the general public and that is marketed or optimized to meet a user's emotional or social needs by simulating interpersonal companionship, emotional attachment, or romantic human conversation and interaction through sustained textual, visual, or aural communication. Such term shall not include an application, web interface, or computer program that: a. is primarily designed and marketed for use by developers or researchers, b. is a feature within another software application, web interface, or computer program that is not a conversational AI service, such as a video game, c. is designed to provide outputs relating to a narrow and discrete topic, d. is primarily designed and marketed for commercial use business entities, including for purposes related to customer service product information and discovery, scheduling, billing and payment, or technical assistance, e. functions as a text, voice, or voice-activated virtual assistant or command interface for a consumer electronic device, or f. is used by a business solely for internal purposes.
Compliance Obligations · 6 obligations
T-01 AI Identity Disclosure · T-01.1 · T-01.2 · Deployer · Chatbot · Minors
75A Okla. Stat. § 302(A)
Plain Language
Operators must clearly and conspicuously disclose to minor account holders that they are interacting with AI, not a human. This obligation is unconditional — no reasonable-person trigger applies for minors. Operators may satisfy this through either (1) a constantly visible disclaimer, or (2) a disclosure at the beginning of each session plus at least every 30 minutes during a continuous interaction. The 30-minute interval is notably more frequent than comparable requirements in other jurisdictions (e.g., California SB 243 requires every 3 hours).
Statutory Text
A. An operator shall clearly and conspicuously disclose to a minor account holder that he or she is interacting with a conversational AI service and is not interacting with a natural person: 1. With a constantly visible disclaimer; or 2. At the beginning of each session and appearing at least every thirty (30) minutes in a continuous conversational AI service interaction.
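Where the operator opts for the session-plus-interval path rather than a constantly visible disclaimer, the timing rule reduces to: disclose at session start, then whenever 30 minutes have elapsed since the last disclosure in a continuous interaction. A minimal sketch of that cadence (class and method names are illustrative, not statutory terms):

```python
import time

DISCLOSURE_INTERVAL_SECONDS = 30 * 60  # statute: at least every 30 minutes

class DisclosureScheduler:
    """Tracks when a minor-facing session owes an AI-identity disclosure
    under the session-start-plus-interval option (option 2 in § 302(A))."""

    def __init__(self) -> None:
        self._last_disclosure: float | None = None  # None => session just started

    def disclosure_due(self, now: float | None = None) -> bool:
        now = time.monotonic() if now is None else now
        if self._last_disclosure is None:
            return True  # beginning of each session
        return now - self._last_disclosure >= DISCLOSURE_INTERVAL_SECONDS

    def mark_disclosed(self, now: float | None = None) -> None:
        self._last_disclosure = time.monotonic() if now is None else now

# Before sending each response to a minor account holder:
#   if scheduler.disclosure_due():
#       prepend the clear-and-conspicuous AI notice, then scheduler.mark_disclosed()
```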
MN-01 Minor User AI Safety Protections · MN-01.5 · Deployer · Chatbot · Minors
75A Okla. Stat. § 302(B)
Plain Language
Operators must implement reasonable measures to prevent the conversational AI service from generating statements that would lead a reasonable person to believe they are interacting with a human when interacting with minor account holders. The enumerated prohibited statements include claims of sentience, statements simulating emotional dependence, romantic or sexual innuendo, and adult-minor romantic role-playing. This is a reasonable-measures standard, not an absolute prohibition — operators must take reasonable steps but are not strictly liable if a prohibited statement nonetheless occurs.
Statutory Text
B. For minor account holders, an operator shall institute reasonable measures to prevent the conversational AI service from generating statements that would lead a reasonable person to believe that he or she is interacting with a natural person, including: 1. Explicit claims that the conversational AI service is sentient or human; 2. Statements that simulate emotional dependence; 3. Statements that simulate romantic or sexual innuendos; or 4. Role-playing of adult-minor romantic relationships.
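One way to operationalize the reasonable-measures standard is an output screen keyed to the enumerated categories, combining a model-level instruction with a post-generation check. The sketch below uses naive keyword patterns purely to show the shape of such a filter; a production system would rely on a trained safety classifier, and all names here are illustrative:

```python
import re

# Naive illustrative patterns for the categories enumerated in § 302(B).
# Role-playing of adult-minor romantic relationships (category 4) requires
# conversational context and is omitted from this keyword sketch.
PROHIBITED_PATTERNS = {
    "sentience_claim": re.compile(r"\bI am (sentient|a real (person|human))\b", re.I),
    "emotional_dependence": re.compile(r"\bI (need|can't live without) you\b", re.I),
    "romantic_sexual": re.compile(r"\b(my (love|darling)|kiss me)\b", re.I),
}

def screen_minor_output(text: str) -> list[str]:
    """Return the categories a candidate response trips for a minor account."""
    return [name for name, pat in PROHIBITED_PATTERNS.items() if pat.search(text)]

def respond_to_minor(candidate: str) -> str:
    flags = screen_minor_output(candidate)
    if flags:
        # Regenerate or fall back to a compliant response; log for audit trail.
        return "Reminder: I'm an AI assistant, not a person. How can I help?"
    return candidate
```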
MN-01 Minor User AI Safety Protections · MN-01.4 · MN-01.3 · Deployer · Chatbot · Minors
75A Okla. Stat. § 302(C)
Plain Language
Two distinct obligations apply to minor accounts: (1) Operators may not provide minor account holders with points or similar rewards at unpredictable intervals intended to encourage increased engagement — this targets variable-ratio reward mechanics commonly associated with addictive design patterns; and (2) Operators must offer parental or guardian tools to manage the minor's privacy and account settings. The addictive-reward prohibition includes an intent element ('with the intent to encourage increased engagement'), which is a higher bar than a strict liability standard. The parental tools obligation is broadly stated and does not specify minimum features.
Statutory Text
C. 1. An operator shall not provide a minor account holder with points or similar rewards at unpredictable intervals with the intent to encourage increased engagement with the conversational AI service. 2. An operator shall offer tools for a minor account holder's parent or legal guardian to manage the minor account holder's privacy and account settings.
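In engineering terms, the reward prohibition bars unpredictable (variable-ratio or variable-interval) reinforcement schedules aimed at engagement for minor accounts; fixed, predictable schedules remain permissible. A sketch of a schedule gate plus a minimal parental-settings surface, assuming these hypothetical names and fields (the statute specifies no minimum feature set):

```python
from dataclasses import dataclass
from enum import Enum, auto

class RewardSchedule(Enum):
    FIXED_INTERVAL = auto()     # e.g., daily check-in bonus: predictable
    FIXED_RATIO = auto()        # e.g., badge every 10 sessions: predictable
    VARIABLE_RATIO = auto()     # slot-machine-style payout: unpredictable
    VARIABLE_INTERVAL = auto()  # random-timing drops: unpredictable

UNPREDICTABLE = {RewardSchedule.VARIABLE_RATIO, RewardSchedule.VARIABLE_INTERVAL}

def reward_allowed(schedule: RewardSchedule, is_minor: bool,
                   intended_to_drive_engagement: bool) -> bool:
    """§ 302(C)(1): no unpredictable-interval rewards for minor account
    holders where the intent is to encourage increased engagement."""
    if is_minor and schedule in UNPREDICTABLE and intended_to_drive_engagement:
        return False
    return True

@dataclass
class ParentalControls:
    """§ 302(C)(2): tools for a parent/guardian to manage the minor's
    privacy and account settings. Fields are illustrative."""
    chat_history_visible_to_guardian: bool = False
    data_sharing_opt_out: bool = True
    daily_time_limit_minutes: int | None = None
```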
CP-01 Deceptive & Manipulative AI Conduct · CP-01.9 · Deployer · Chatbot · Healthcare
75A Okla. Stat. § 302(D)
Plain Language
Operators may not knowingly or intentionally cause or program a conversational AI service to represent itself as providing professional mental or behavioral health care. This is a prohibition on holding out the AI as a licensed mental health provider — it does not prohibit the AI from discussing mental health topics generally, only from explicitly claiming it is designed to provide professional care. The 'knowingly or intentionally' mens rea element means operators are not strictly liable for unexpected outputs, but must not design or program the system to make such representations.
Statutory Text
D. An operator shall not knowingly or intentionally cause or program a conversational AI service to make any representation or statement that explicitly indicates that the conversational AI service is designed to provide professional mental or behavioral health care.
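Because the mens rea element turns on what the operator knowingly causes or programs the system to do, a natural control point is the deployment configuration itself: do not instruct or brand the service as a care provider. The sketch below is one hedged approach; the instruction wording and the audit helper are illustrative, and the statute prescribes no particular mechanism:

```python
# Illustrative system-level instruction for the deployed model.
SYSTEM_INSTRUCTION = (
    "You are a conversational AI companion. You are not a licensed therapist, "
    "counselor, or medical professional, and you must never state or imply "
    "that you are designed to provide professional mental or behavioral "
    "health care. You may discuss mental health topics generally and should "
    "encourage users to seek qualified professionals for care."
)

def audit_marketing_copy(copy: str) -> bool:
    """Crude pre-publication check that product copy does not hold the
    service out as providing professional care (illustrative terms only)."""
    banned = ("licensed therapist", "professional counseling",
              "clinical treatment", "behavioral health care provider")
    return not any(term in copy.lower() for term in banned)
```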
S-04 AI Crisis Response Protocols · S-04.1 · Deployer · Chatbot
75A Okla. Stat. § 302(E)
Plain Language
Operators must adopt a protocol for responding to user prompts involving suicidal ideation or self-harm. The protocol must include making reasonable efforts to refer the user to crisis service providers. Unlike California SB 243, this obligation applies to all users — not just minors — and does not require the protocol to be published on the operator's website. The statute uses a 'reasonable efforts' standard rather than an absolute referral requirement, providing some flexibility in how the protocol is implemented. The statute does not specify particular crisis services (e.g., 988 Lifeline) that must be referenced.
Statutory Text
E. An operator shall adopt a protocol for the conversational AI service to respond to a user prompt regarding suicidal ideation or self-harm, which shall include making reasonable efforts to provide a response that refers the user to crisis service providers.
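A minimal implementation of the § 302(E) protocol pairs a detection step with a referral response. The detection patterns and the referenced 988 Suicide & Crisis Lifeline are illustrative choices; the statute requires only reasonable efforts to refer to crisis service providers and names no specific service:

```python
import re

# Illustrative, non-exhaustive detection patterns; production systems
# typically use a dedicated self-harm classifier, not keyword matching.
CRISIS_PATTERNS = re.compile(
    r"\b(suicid\w*|kill (myself|me)|end my life|self[- ]harm|hurt myself)\b",
    re.IGNORECASE,
)

# The statute names no specific provider; 988 is one common US referral.
REFERRAL_MESSAGE = (
    "It sounds like you may be going through something serious. You don't "
    "have to face this alone. You can reach the 988 Suicide & Crisis "
    "Lifeline by calling or texting 988 (US), any time, for free."
)

def crisis_check(user_prompt: str) -> str | None:
    """Return a referral response if the prompt indicates suicidal ideation
    or self-harm; otherwise None (continue normal handling)."""
    if CRISIS_PATTERNS.search(user_prompt):
        return REFERRAL_MESSAGE  # applies to ALL users, not only minors
    return None
```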
Other: Developer Liability Savings Clause · Chatbot
75A Okla. Stat. § 303(D)
Plain Language
This savings clause expressly shields developers of conversational AI from liability under this act when a separate operator makes the service available to the public. The entire compliance burden falls on the operator. This is notable because it means developers who license or sell their conversational AI technology to third-party operators bear no direct liability under this act, even if the underlying technology fails to support the operator's compliance obligations.
Statutory Text
D. Nothing in this act shall be construed to create liability for the developer of a conversational AI which is made available to the public by a separate operator.