HB-3544
OK · State · USA
● Pending
Proposed Effective Date
2026-11-01
Oklahoma HB 3544 — An Act relating to technology; defining terms; directing deployers to ensure social artificial intelligence companions are not knowingly made available to minors; permitting lawful access by adults; directing deployers to implement protocols to prioritize safety and well-being of users; establishing civil penalties; granting the attorney general rulemaking authority
Summary

Oklahoma HB 3544 targets deployers of 'social AI companions' — AI systems primarily designed to simulate sustained interpersonal companionship, emotional attachment, or romantic interaction. Deployers are prohibited from knowingly making such systems available to minors and must implement reasonable measures to prevent minor access. Deployers must also adopt protocols for responding to user prompts indicating suicidal ideation or self-harm, including crisis service referrals. The bill is narrowly scoped with extensive carve-outs for customer service bots, virtual assistants, search engines, general-purpose AI, and narrow-topic systems. Enforcement is exclusively through the Attorney General, with civil penalties up to $2,500 per violation ($7,500 for intentional violations), injunctive relief, and disgorgement.

Enforcement & Penalties
Enforcement Authority
Enforcement is exclusively by the Attorney General through civil actions; the statute creates no private right of action and no complaint-driven mechanism, so enforcement is agency-initiated. The Attorney General may also promulgate rules necessary to enforce the act.
Penalties
Civil penalty of not more than $2,500 per violation or $7,500 per intentional violation. Each day a violation continues constitutes a separate violation. Deployers are also subject to injunction and disgorgement of profits directly attributable to the violation. No statutory minimum is specified — penalties are capped, not floored.
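Because each continuing day counts as a separate violation, maximum exposure scales linearly with duration. A minimal sketch of that arithmetic (the function name and structure are illustrative, not from the statute):

```python
def max_penalty_exposure(days: int, intentional: bool = False) -> int:
    """Upper-bound civil penalty under HB 3544's per-violation caps.

    Each day a violation continues is a separate violation; penalties
    are capped at $2,500 per violation ($7,500 if intentional), with
    no statutory floor.
    """
    per_violation_cap = 7_500 if intentional else 2_500
    return days * per_violation_cap

# A violation continuing for 30 days:
print(max_penalty_exposure(30))                    # 75000
print(max_penalty_exposure(30, intentional=True))  # 225000
```

Note that these figures are ceilings: a court may impose less, and injunctive relief and disgorgement apply on top of any penalty.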
Who Is Covered
"Deployer" means any person, partnership, corporation, or governmental entity that operates, controls, or makes available a social AI companion to users in this state. A deployer does not include a mobile application store, search engine, Internet service provider, or provider of general-purpose artificial intelligence models solely because such entity provides access to, hosts, or transmits a system developed or controlled by another person.
What Is Covered
"Social AI companion" means an artificial intelligence system primarily designed or marketed to simulate sustained interpersonal companionship and emotional attachment or romantic interaction with a user as the system's core functionality. The term "Social AI companion" does not include:
a. a system used solely for customer service, business operations, productivity, analysis related to source information, internal purposes, research purposes, or technical assistance,
b. a stand-alone consumer electronic device that incorporates a speaker or voice or text command interface, acts as a virtual assistant, and does not sustain a relationship across multiple interactions and generate outputs intended to create emotional attachment with the user,
c. a search engine feature that provides information in response to user queries and is not designed or marketed to simulate companionship,
d. a system designed to provide outputs relating to a narrow and discrete functional topic and not primarily intended to simulate interpersonal companionship,
e. a system that is not primarily designed or marketed for companionship where the developer does not control the specific deployment context in which the system interacts with end users, or
f. a general-purpose artificial intelligence system that is not primarily designed or marketed to simulate interpersonal companionship, including systems used for education, counseling, research, productivity, or professional assistance.
Compliance Obligations · 2 obligations
MN-01 Minor User AI Safety Protections · MN-01.1 · Deployer · ChatbotMinors
Section 3(A)(1)-(2), (B) (75A Okla. Stat. § 11)
Plain Language
Deployers of social AI companions must not knowingly — or where they reasonably should know — make a social AI companion available to a minor (under 18). Beyond avoiding knowing provision, deployers must affirmatively implement reasonable measures designed to prevent minors from accessing the system. The 'reasonably should know' standard goes beyond actual knowledge and imposes a duty of reasonable inquiry. The statute expressly preserves lawful adult access, so the age-gating measures must be calibrated to block minors without unduly restricting adults. The bill does not specify what constitutes 'reasonable measures,' leaving room for the Attorney General to elaborate by rule.
Statutory Text
A. Each deployer:
1. Shall not knowingly, or under circumstances where the deployer reasonably should know, make a social AI companion available to a minor; and
2. Shall implement reasonable measures designed to prevent minors from accessing a social AI companion.
B. Nothing in this section shall be construed to restrict lawful access to such systems by adults.
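The statute leaves "reasonable measures" undefined, pending possible Attorney General rulemaking. Purely as an illustration of the design problem (treating unverified users as potential minors to address the "reasonably should know" standard, while preserving adult access), a deployer's age gate might look like the following sketch; every name and the default-deny choice here are assumptions, not statutory requirements:

```python
from dataclasses import dataclass
from typing import Optional

MINIMUM_AGE = 18  # the bill defines a minor as a person under 18

@dataclass
class User:
    # Age as confirmed by some age-verification step; None if unverified.
    verified_age: Optional[int]

def may_access_companion(user: User) -> bool:
    """Deny access unless verification confirms the user is an adult.

    Default-deny for unverified users is one calibration of the
    "reasonably should know" standard; the statute prescribes no
    specific verification method.
    """
    return user.verified_age is not None and user.verified_age >= MINIMUM_AGE

print(may_access_companion(User(verified_age=21)))   # True  (lawful adult access preserved)
print(may_access_companion(User(verified_age=16)))   # False (minor blocked)
print(may_access_companion(User(verified_age=None))) # False (unverified treated as minor)
```

How strict the unverified-user default must be is exactly the kind of question the Attorney General's rulemaking authority could resolve.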
S-04 AI Crisis Response Protocols · S-04.1 · Deployer · ChatbotMinors
Section 4 (75A Okla. Stat. § 12)
Plain Language
Deployers must adopt a protocol governing how their social AI companion responds when a user's prompts indicate suicidal ideation or threats of self-harm. At a minimum, the protocol must include making reasonable efforts to refer the user to crisis service providers — such as a suicide hotline, crisis text line, or equivalent crisis services. The 'includes, but is not limited to' language means the crisis referral is a floor, not a ceiling — additional protective measures may be required. Note that unlike California SB 243, this statute does not require public publication of the protocol on the deployer's website, nor does it require annual reporting of crisis referral metrics. The obligation applies to all users, not just minors.
Statutory Text
A deployer shall adopt a protocol for a social AI companion to respond to user prompts indicating suicidal ideation or threats of self-harm that includes, but is not limited to, making reasonable efforts to provide a response to the user that refers them to crisis service providers such as a suicide hotline, crisis text line, or other appropriate crisis services.
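The statute sets crisis referral as a floor but mandates no detection method. As a purely illustrative sketch of that floor (the keyword list, function names, and message text are assumptions; a production protocol would use far more robust detection, such as classifiers and human escalation), the referral step might look like:

```python
from typing import Optional

# Illustrative signals only; not drawn from the statute.
CRISIS_SIGNALS = ("suicide", "kill myself", "self-harm", "end my life")

# 988 is the real US Suicide & Crisis Lifeline number.
CRISIS_REFERRAL = (
    "If you are in crisis, help is available: call or text 988 "
    "(Suicide & Crisis Lifeline) to reach trained counselors."
)

def crisis_response(prompt: str) -> Optional[str]:
    """Return a crisis referral when a prompt suggests suicidal ideation.

    This keyword scan only sketches the statutory minimum: a reasonable
    effort to refer the user to crisis services. The "not limited to"
    language means additional protective measures may also be required.
    """
    lowered = prompt.lower()
    if any(signal in lowered for signal in CRISIS_SIGNALS):
        return CRISIS_REFERRAL
    return None

print(crisis_response("I want to end my life") is not None)  # True
print(crisis_response("what's the weather today") is None)   # True
```

Because the obligation applies to all users rather than only minors, such a check would sit in the response path for every conversation, not behind an age gate.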