LB-1185
NE · State · USA
● Pending
Proposed Effective Date
2027-07-01
Nebraska LB 1185 — Conversational Artificial Intelligence Safety Act
Summary

Imposes safety and disclosure obligations on operators of conversational AI services accessible to the general public in Nebraska. Requires operators to disclose AI identity to all users when a reasonable person could be misled, with stricter unconditional disclosure and periodic reminders for known minor account holders. Prohibits addictive engagement patterns for minors, requires reasonable measures to prevent sexually explicit content and emotional manipulation outputs targeting minors, and mandates parental/guardian privacy tools. Requires operators to adopt crisis response protocols for suicidal ideation and self-harm and prohibits representations that the service provides professional mental or behavioral health care. Enforced exclusively by the Attorney General with civil penalties of $1,000–$500,000 per operator; no private right of action. Operative July 1, 2027.

Enforcement & Penalties
Enforcement Authority
Attorney General enforcement only. The Attorney General may bring a civil action for appropriate relief against an operator for a violation, on behalf of the State of Nebraska or on behalf of any person aggrieved by a violation. The statute explicitly provides that nothing in the act can be interpreted as creating a private right of action. The act also shields developers of underlying AI models from liability for violations committed by third-party systems built on those models.
Penalties
Preliminary and other equitable or declaratory relief as may be appropriate. Actual damages. Civil penalties of at least $1,000 per violation, but no more than $500,000 per operator. Reasonable expenses incurred in bringing the civil action, including court costs, reasonable attorney's fees, investigative costs, witness fees, and deposition costs.
Who Is Covered
(6)(a) Operator means a person who develops and makes available a conversational artificial intelligence service to the public. (b) Operator does not include mobile application stores or search engines solely because they provide access to a conversational artificial intelligence service.
What Is Covered
(2)(a) Conversational artificial intelligence service means an artificial intelligence software application, web interface, or computer program that is accessible to the general public and that primarily simulates human conversation and interaction through textual, visual, or aural communications. (b) Conversational artificial intelligence service does not include an application, web interface, or computer program that is any of the following: (i) Primarily designed and marketed for use by developers or researchers; (ii) A feature within another software application, web interface, or computer program that is not a conversational artificial intelligence service; (iii) Designed to provide outputs relating to a narrow and discrete topic; (iv) Primarily designed and marketed for commercial use by business entities; (v) Functions as a speaker and voice command interface or voice-activated virtual assistant for a consumer electronic device; or (vi) Used by a business solely for internal purposes.
Compliance Obligations · 8 obligations
T-01 AI Identity Disclosure · T-01.1 · T-01.2 · Deployer · Chatbot · Minors
Sec. 3(1)
Plain Language
Operators must unconditionally disclose to every known minor account holder that they are interacting with AI. The operator may satisfy this either with a persistent on-screen disclaimer visible at all times, or by disclosing at the beginning of each session and at least every three hours in a continuous interaction. Unlike the general disclosure in Sec. 4, this obligation is not conditional on whether a reasonable person would be misled — it applies whenever the operator knows or has reasonable certainty the user is under 18.
Statutory Text
(1) An operator shall clearly and conspicuously disclose to each minor account holder that such minor account holder is interacting with artificial intelligence: (a) As a persistent visible disclaimer; or (b) Both: (i) At the beginning of each session; and (ii) Appearing at least every three hours in a continuous conversational artificial intelligence service interaction.
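For operators choosing the session-based option in Sec. 3(1)(b) over a persistent disclaimer, the timing rule can be sketched in code. This is an illustrative sketch only, not legal guidance: the class and method names are assumptions, and only the three-hour interval and the session-start trigger come from the statutory text.

```python
from datetime import datetime, timedelta

# Sec. 3(1)(b): disclose AI identity at the beginning of each session
# and at least every three hours in a continuous interaction.
# The interval is statutory; everything else is an assumed model.
DISCLOSURE_INTERVAL = timedelta(hours=3)

class MinorSession:
    """Tracks when the AI-identity disclosure is due for a known minor account holder."""

    def __init__(self, start: datetime):
        self.start = start
        self.last_disclosure: datetime | None = None

    def disclosure_due(self, now: datetime) -> bool:
        # Always disclose at the beginning of each session...
        if self.last_disclosure is None:
            return True
        # ...and again at least every three hours thereafter.
        return now - self.last_disclosure >= DISCLOSURE_INTERVAL

    def mark_disclosed(self, now: datetime) -> None:
        self.last_disclosure = now
```

A persistent on-screen disclaimer under Sec. 3(1)(a) would satisfy the obligation without any timing logic; this sketch only applies to the periodic-reminder alternative.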
MN-01 Minor User AI Safety Protections · MN-01.4 · Deployer · Chatbot · Minors
Sec. 3(2)
Plain Language
Operators may not use variable-ratio reward mechanics — such as points, badges, or similar incentives delivered at unpredictable intervals — to encourage minors to engage more with the conversational AI service. The prohibition requires intent to encourage increased engagement, so incidental or fixed-schedule reward systems would not be covered.
Statutory Text
(2) An operator shall not provide a minor account holder with points or similar rewards at unpredictable intervals with the intent to encourage increased engagement with the conversational artificial intelligence service.
S-02 Prohibited Conduct & Output Restrictions · S-02.6 · Deployer · Chatbot · Minors
Sec. 3(3)
Plain Language
Operators must implement reasonable measures to prevent the conversational AI from producing three categories of sexually harmful output directed at minor account holders: (1) visual depictions of sexually explicit conduct (as defined under federal law at 18 U.S.C. 2256), (2) direct statements urging the minor to engage in sexually explicit conduct, and (3) statements that sexually objectify the minor. The standard is reasonable measures — not an absolute guarantee — but operators must demonstrate affirmative steps to prevent these outputs.
Statutory Text
(3) An operator shall, for minor account holders, institute reasonable measures to prevent the conversational artificial intelligence service from: (a) Producing visual depictions of sexually explicit conduct; (b) Generating direct statements that the account holder should engage in sexually explicit conduct; or (c) Generating statements that sexually objectify the account holder.
MN-01 Minor User AI Safety Protections · MN-01.5 · Deployer · Chatbot · Minors
Sec. 3(4)
Plain Language
Operators must implement reasonable measures to prevent the conversational AI from producing outputs that would mislead minor account holders into believing they are interacting with a human. The enumerated prohibited categories include claims of sentience, emotional dependence statements, romantic or sexual innuendos, and adult-minor romantic role-playing. The list is non-exhaustive ('including'), so operators should consider other output categories that could similarly mislead minors into perceiving the AI as human.
Statutory Text
(4) For minor account holders, the operator shall institute reasonable measures to prevent the conversational artificial intelligence service from generating statements that would lead a reasonable person to believe that they are interacting with a human, including: (a) Explicit claims that the conversational artificial intelligence service is sentient or human; (b) Statements that simulate emotional dependence; (c) Statements that simulate romantic or sexual innuendos; or (d) Role-playing of adult-minor romantic relationships.
MN-01 Minor User AI Safety Protections · MN-01.3 · Deployer · Chatbot · Minors
Sec. 3(5)
Plain Language
Operators must provide privacy and account management tools to minor account holders. For minors under 13, these tools must also be provided directly to parents or guardians. For minors 13 and older, the operator must also offer related tools to parents or guardians as appropriate based on relevant risks — giving operators some discretion on the scope of parental tools for older teens. The provision does not define what specific settings must be manageable, but the obligation to offer tools is mandatory.
Statutory Text
(5) An operator shall offer tools for minor account holders, and, when such account holders are younger than thirteen years of age, their parents or guardians, to manage the account holders' privacy and account settings. An operator shall also offer related tools to the parents or guardians of minor account holders thirteen years of age and older, as appropriate based on relevant risks.
T-01 AI Identity Disclosure · T-01.1 · Deployer · Chatbot
Sec. 4
Plain Language
If a reasonable person could be misled into thinking they are talking to a human, the operator must provide a clear and conspicuous disclosure that the service is AI. This is a conditional trigger — it applies only when the interaction could mislead a reasonable person. Unlike the minor-specific disclosure in Sec. 3(1), this provision applies to all users but only when the deception threshold is met. Compare to CA SB 243, which uses the same conditional reasonable-person standard for general users.
Statutory Text
If a reasonable person interacting with a conversational artificial intelligence system would be misled to believe that the person is interacting with a human, an operator shall clearly and conspicuously disclose that the conversational artificial intelligence service is artificial intelligence.
MN-02 AI Crisis Response Protocols · MN-02.1 · Deployer · Chatbot
Sec. 5
Plain Language
Operators must adopt and maintain a protocol for responding to user prompts about suicidal ideation or self-harm. At minimum, the protocol must include reasonable efforts to refer users to crisis service providers such as suicide hotlines or crisis text lines. The standard is reasonable efforts — not a guarantee of successful referral. Note that unlike CA SB 243, this provision does not require the operator to publicly post the protocol details on its website, nor does it require annual reporting on crisis referral metrics. The obligation applies to all users, not just minors.
Statutory Text
An operator shall adopt a protocol for the conversational artificial intelligence service to respond to user prompts regarding suicidal ideation or self-harm that includes, but is not limited to, making reasonable efforts to provide a response to the user that refers them to crisis service providers such as a suicide hotline, crisis text line, or other appropriate crisis services.
CP-01 Deceptive & Manipulative AI Conduct · CP-01.9 · Deployer · Chatbot
Sec. 6
Plain Language
Operators may not knowingly and intentionally cause their conversational AI service to represent that it provides professional mental or behavioral health care. The prohibition covers explicit representations — the AI must not claim or indicate it is a licensed therapist, counselor, psychiatrist, or similar professional. The mens rea standard requires both knowledge and intent, so inadvertent outputs that a user interprets as therapeutic would likely not violate this provision. This is narrower than some other jurisdictions, which prohibit implying equivalence to licensed professional services through interface design or terminology, not just explicit representations.
Statutory Text
An operator shall not knowingly and intentionally cause or program a conversational artificial intelligence service to make any representation or statement that explicitly indicates that the conversational artificial intelligence service is designed to provide professional mental or behavioral health care.