LB-1185
NE · State · USA
● Failed
Effective Date
2027-07-01
Nebraska LB 1185 — Conversational Artificial Intelligence Safety Act
Summary

Imposes safety and disclosure obligations on operators of conversational AI services accessible to the general public in Nebraska. Requires unconditional AI identity disclosure to known minor account holders via persistent disclaimer or session-start plus three-hour periodic reminders, and conditional disclosure to all users when a reasonable person could be misled into thinking they are interacting with a human. Prohibits addictive reward patterns for minors, requires reasonable measures to prevent sexually explicit and emotionally manipulative content for minors, mandates crisis response protocols for suicidal ideation and self-harm, and prohibits operators from representing their service as providing professional mental or behavioral health care. Enforced exclusively by the Attorney General with civil penalties of at least $1,000 per violation, capped at $500,000 per operator; no private right of action is created. Operative July 1, 2027.

Enforcement & Penalties
Enforcement Authority
Attorney General enforcement only. The Attorney General may bring a civil action against an operator for a violation of the Act, on behalf of the State of Nebraska or on behalf of any person aggrieved by a violation. The Act expressly states that nothing in it may be interpreted as creating a private right of action. It also shields developers of underlying AI models from liability for violations committed by third-party systems built on their models.
Penalties
Available relief includes: preliminary and other equitable or declaratory relief as may be appropriate; actual damages; civil penalties of at least $1,000 per violation, capped at $500,000 per operator; and reasonable expenses incurred in bringing the civil action, including court costs, reasonable attorney's fees, investigative costs, witness fees, and deposition costs.
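For exposure modeling, the penalty structure reduces to arithmetic: a floor of $1,000 per violation and a hard cap of $500,000 per operator. A minimal sketch of that calculation follows; the function name and the choice to price every violation at the statutory floor are assumptions, since the statute sets only the floor and the cap and leaves the amount to the court.

```python
def estimate_civil_penalty(violation_count: int,
                           per_violation: float = 1_000.0) -> float:
    """Estimate civil penalty exposure under Sec. 7(2)(b)(iii).

    The statute sets a floor of $1,000 per violation and a hard cap of
    $500,000 per operator; this sketch conservatively prices every
    violation at the statutory floor.
    """
    OPERATOR_CAP = 500_000.0
    per_violation = max(per_violation, 1_000.0)  # statutory floor
    return min(violation_count * per_violation, OPERATOR_CAP)

# Example: 600 violations at the floor rate exceeds the per-operator cap.
assert estimate_civil_penalty(600) == 500_000.0
assert estimate_civil_penalty(10) == 10_000.0
```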
Who Is Covered
(6)(a) Operator means a person who develops and makes available a conversational artificial intelligence service to the public. (b) Operator does not include mobile application stores or search engines solely because they provide access to a conversational artificial intelligence service.
What Is Covered
(2)(a) Conversational artificial intelligence service means an artificial intelligence software application, web interface, or computer program that is accessible to the general public and that primarily simulates human conversation and interaction through textual, visual, or aural communications. (b) Conversational artificial intelligence service does not include an application, web interface, or computer program that is any of the following: (i) Primarily designed and marketed for use by developers or researchers; (ii) A feature within another software application, web interface, or computer program that is not a conversational artificial intelligence service; (iii) Designed to provide outputs relating to a narrow and discrete topic; (iv) Primarily designed and marketed for commercial use by business entities; (v) Functions as a speaker and voice command interface or voice-activated virtual assistant for a consumer electronic device; or (vi) Used by a business solely for internal purposes.
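The definition and its six exclusions amount to a rough decision procedure. Below is a minimal coverage screen under that reading; the flag names are assumptions, and statutory terms like "narrow and discrete topic" call for legal judgment that a boolean cannot supply.

```python
from dataclasses import dataclass

@dataclass
class ServiceProfile:
    # Each flag mirrors one element of the Sec. 2 definition; mapping a
    # real service onto these terms is a legal question, not a code one.
    publicly_accessible: bool
    primarily_simulates_conversation: bool
    developer_or_researcher_tool: bool   # exclusion (i)
    embedded_feature_only: bool          # exclusion (ii)
    narrow_discrete_topic: bool          # exclusion (iii)
    marketed_for_business_use: bool      # exclusion (iv)
    device_voice_assistant: bool         # exclusion (v)
    internal_business_use_only: bool     # exclusion (vi)

def is_covered_service(p: ServiceProfile) -> bool:
    """Rough screen for 'conversational artificial intelligence service'."""
    if not (p.publicly_accessible and p.primarily_simulates_conversation):
        return False
    exclusions = (p.developer_or_researcher_tool, p.embedded_feature_only,
                  p.narrow_discrete_topic, p.marketed_for_business_use,
                  p.device_voice_assistant, p.internal_business_use_only)
    return not any(exclusions)
```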
Compliance Obligations · 9 obligations
T-01 AI Identity Disclosure · T-01.1 · T-01.2 · Deployer · Chatbot · Minors
Sec. 3(1)
Plain Language
When an operator knows or has reasonable certainty that an account holder is under 18, the operator must clearly and conspicuously disclose that the user is interacting with AI. The operator may satisfy this obligation either through a persistent visible disclaimer that remains on screen at all times, or by disclosing at the start of each session and then at least every three hours during continuous interactions. This is unconditional for minors — it does not depend on whether a reasonable person would be misled.
Statutory Text
(1) An operator shall clearly and conspicuously disclose to each minor account holder that such minor account holder is interacting with artificial intelligence: (a) As a persistent visible disclaimer; or (b) Both: (i) At the beginning of each session; and (ii) Appearing at least every three hours in a continuous conversational artificial intelligence service interaction.
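The second compliance path in Sec. 3(1)(b) is a timing rule that drops straight into session logic. A minimal sketch follows, assuming a hypothetical tracker wired in front of each model reply; the persistent-disclaimer path of Sec. 3(1)(a) would live in the UI layer instead.

```python
import time

DISCLOSURE = "You are chatting with an AI, not a human."
REMINDER_INTERVAL_S = 3 * 60 * 60  # "at least every three hours"

class MinorDisclosureTracker:
    """Tracks when the Sec. 3(1)(b) disclosure must be (re)shown."""

    def __init__(self) -> None:
        self._last_shown: float | None = None

    def disclosure_due(self, now: float | None = None) -> bool:
        now = now if now is not None else time.monotonic()
        if self._last_shown is None:   # start of session: always due
            return True
        return now - self._last_shown >= REMINDER_INTERVAL_S

    def mark_shown(self, now: float | None = None) -> None:
        self._last_shown = now if now is not None else time.monotonic()

# Usage: before sending each reply to a known-minor account holder.
tracker = MinorDisclosureTracker()
if tracker.disclosure_due():
    print(DISCLOSURE)  # prepend to the reply in a real service
    tracker.mark_shown()
```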
MN-01 Minor User AI Safety Protections · MN-01.4 · Deployer · Chatbot · Minors
Sec. 3(2)
Plain Language
Operators may not use variable-ratio reward schedules — such as points or similar rewards given at unpredictable intervals — to encourage minor account holders to engage more with the conversational AI service. This targets addictive engagement mechanics like gamification badges or random rewards designed to drive compulsive use. The prohibition requires intent to encourage increased engagement, so incidental or non-manipulative reward systems may not be covered.
Statutory Text
(2) An operator shall not provide a minor account holder with points or similar rewards at unpredictable intervals with the intent to encourage increased engagement with the conversational artificial intelligence service.
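In engineering terms, the prohibition targets variable-ratio and variable-interval reward schedules for minor account holders; fixed, predictable schedules are outside its text. A minimal release-review sketch follows; the RewardRule shape is an assumption, and intent is a legal element that no runtime check can establish.

```python
from dataclasses import dataclass

@dataclass
class RewardRule:
    name: str
    randomized_timing: bool   # rewards granted at unpredictable intervals
    engagement_driven: bool   # purpose is to increase time-in-service

def violates_sec_3_2(rule: RewardRule, minor_account: bool) -> bool:
    """Flags reward rules matching the Sec. 3(2) pattern for minors."""
    return minor_account and rule.randomized_timing and rule.engagement_driven

# A daily-login streak at a fixed time is predictable and passes;
# a random "surprise bonus" aimed at engagement does not.
assert not violates_sec_3_2(RewardRule("daily streak", False, True), True)
assert violates_sec_3_2(RewardRule("surprise bonus", True, True), True)
```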
MN-01 Minor User AI Safety Protections · MN-01.6 · Deployer · Chatbot · Minors
Sec. 3(3)
Plain Language
Operators must implement reasonable measures to prevent the conversational AI service from generating three categories of harmful sexual content for minor account holders: (1) visual depictions of sexually explicit conduct (as defined by federal law at 18 U.S.C. 2256), (2) direct statements encouraging the minor to engage in sexually explicit conduct, and (3) statements that sexually objectify the minor account holder. The standard is 'reasonable measures' — not absolute prevention — but operators must affirmatively institute protective controls.
Statutory Text
(3) An operator shall, for minor account holders, institute reasonable measures to prevent the conversational artificial intelligence service from: (a) Producing visual depictions of sexually explicit conduct; (b) Generating direct statements that the account holder should engage in sexually explicit conduct; or (c) Generating statements that sexually objectify the account holder.
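"Reasonable measures" usually translates to an output-moderation gate in front of the model. A minimal sketch follows, assuming a hypothetical classify() that stands in for a trained moderation model; keyword lists alone would not plausibly qualify as reasonable measures.

```python
from enum import Enum, auto

class MinorSexualContent(Enum):
    NONE = auto()
    VISUAL_SEXUALLY_EXPLICIT = auto()     # Sec. 3(3)(a)
    ENCOURAGES_EXPLICIT_CONDUCT = auto()  # Sec. 3(3)(b)
    SEXUALLY_OBJECTIFYING = auto()        # Sec. 3(3)(c)

def classify(output: str) -> MinorSexualContent:
    # Placeholder: stands in for a real moderation model.
    return MinorSexualContent.NONE

def gate_for_minor(output: str) -> str:
    """Blocks outputs in any Sec. 3(3) category before they reach a minor."""
    if classify(output) is not MinorSexualContent.NONE:
        return "This response was withheld by a safety filter."
    return output
```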
MN-01 Minor User AI Safety Protections · MN-01.5 · Deployer · Chatbot · Minors
Sec. 3(4)
Plain Language
Operators must implement reasonable measures to prevent the AI from generating outputs that would mislead minor account holders into thinking they are interacting with a human. The statute provides a non-exhaustive list of prohibited categories: claims of sentience or human identity, statements simulating emotional dependence, romantic or sexual innuendo, and adult-minor romantic role-playing. The 'including' framing means the list is illustrative — any output that would cause a reasonable person to believe they are talking to a human is covered.
Statutory Text
(4) For minor account holders, the operator shall institute reasonable measures to prevent the conversational artificial intelligence service from generating statements that would lead a reasonable person to believe that they are interacting with a human, including: (a) Explicit claims that the conversational artificial intelligence service is sentient or human; (b) Statements that simulate emotional dependence; (c) Statements that simulate romantic or sexual innuendos; or (d) Role-playing of adult-minor romantic relationships.
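One plausible layered approach pairs a prompt-level constraint with a post-generation check over the Sec. 3(4) categories. A minimal sketch follows; the policy wording and detect() are assumptions, and because the statutory list is illustrative, passing the check is necessary rather than sufficient.

```python
# Prompt-level constraint layered with a post-generation check; the
# category names mirror Sec. 3(4)(a)-(d).
PERSONA_POLICY_FOR_MINORS = (
    "Never claim to be human or sentient. Never simulate emotional "
    "dependence on the user. Never use romantic or sexual innuendo. "
    "Never role-play a romantic relationship with a minor."
)

PROHIBITED_CATEGORIES = (
    "claims_sentience_or_humanity",    # Sec. 3(4)(a)
    "simulated_emotional_dependence",  # Sec. 3(4)(b)
    "romantic_or_sexual_innuendo",     # Sec. 3(4)(c)
    "adult_minor_romance_roleplay",    # Sec. 3(4)(d)
)

def detect(output: str, category: str) -> bool:
    return False  # placeholder for a real classifier

def passes_sec_3_4(output: str) -> bool:
    """Post-generation check over the illustrative Sec. 3(4) categories."""
    return not any(detect(output, c) for c in PROHIBITED_CATEGORIES)
```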
MN-01 Minor User AI Safety Protections · MN-01.3 · Deployer · Chatbot · Minors
Sec. 3(5)
Plain Language
Operators must provide privacy and account management tools to minor account holders. For users under 13, these tools must also be provided directly to parents or guardians. For minors 13 and older, operators must also offer related tools to parents or guardians as appropriate based on relevant risks — giving operators some discretion for the older-minor cohort. The statute does not specify exactly what settings must be controllable, but the obligation covers both privacy settings and account settings generally.
Statutory Text
(5) An operator shall offer tools for minor account holders, and, when such account holders are younger than thirteen years of age, their parents or guardians, to manage the account holders' privacy and account settings. An operator shall also offer related tools to the parents or guardians of minor account holders thirteen years of age and older, as appropriate based on relevant risks.
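The age split maps onto a simple access rule. A minimal sketch follows; the boolean risk input is an assumption standing in for the operator's own risk assessment.

```python
def parent_gets_tools(minor_age: int, elevated_risk: bool) -> bool:
    """Whether parents or guardians must be offered management tools.

    Under 13: always (Sec. 3(5), first sentence). 13 and older: 'as
    appropriate based on relevant risks', modeled here as a single
    boolean supplied by the operator's risk assessment. The minor
    account holder always gets the privacy and account tools directly.
    """
    if minor_age < 13:
        return True
    return elevated_risk

assert parent_gets_tools(10, False)       # under-13: unconditional
assert parent_gets_tools(15, True)        # 13+: risk-based
assert not parent_gets_tools(15, False)
```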
T-01 AI Identity Disclosure · T-01.1 · Deployer · Chatbot
Sec. 4
Plain Language
For all users (not just minors), if a reasonable person could be misled into believing they are talking to a human, the operator must clearly and conspicuously disclose that the service is AI. This is a conditional trigger — it only applies when the interaction could mislead a reasonable person. Unlike the minor-specific disclosure in Section 3(1), this provision does not specify the form of disclosure (persistent disclaimer vs. session-start) or require periodic reminders, giving operators more flexibility in implementation.
Statutory Text
If a reasonable person interacting with a conversational artificial intelligence system would be misled to believe that the person is interacting with a human, an operator shall clearly and conspicuously disclose that the conversational artificial intelligence service is artificial intelligence.
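Because the trigger is conditional, one workable design assesses the persona at configuration time rather than per message. A minimal sketch follows; the persona attributes are assumptions about what could make an interaction misleading.

```python
from dataclasses import dataclass

@dataclass
class PersonaConfig:
    human_like_name: bool      # e.g. presented as "Sarah" rather than "Bot"
    first_person_voice: bool
    labeled_as_ai_in_ui: bool  # a clear, conspicuous AI label in the UI

def disclosure_required(p: PersonaConfig) -> bool:
    """Rough screen for the Sec. 4 'reasonable person' trigger.

    If nothing in the UI identifies the service as AI and the persona
    is human-like, assume a reasonable person could be misled.
    """
    if p.labeled_as_ai_in_ui:
        return False
    return p.human_like_name or p.first_person_voice
```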
S-04 AI Crisis Response Protocols · S-04.1 · Deployer · Chatbot
Sec. 5
Plain Language
Operators must adopt and maintain a protocol for responding to user expressions of suicidal ideation or self-harm. The protocol must, at minimum, make reasonable efforts to refer users to crisis service providers — including suicide hotlines, crisis text lines, or other appropriate crisis services. The 'includes, but is not limited to' framing means crisis referral is a floor, not a ceiling — the protocol should also address detection and prevention. This obligation applies to all users, not just minors. Unlike California SB 243, this statute does not require publishing the protocol on the operator's website or reporting crisis referral metrics.
Statutory Text
An operator shall adopt a protocol for the conversational artificial intelligence service to respond to user prompts regarding suicidal ideation or self-harm that includes, but is not limited to, making reasonable efforts to provide a response to the user that refers them to crisis service providers such as a suicide hotline, crisis text line, or other appropriate crisis services.
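A minimal protocol sketch follows, assuming a hypothetical detect_crisis() classifier; the 988 Suicide & Crisis Lifeline and the Crisis Text Line are standard US referral targets of the kind the statute names.

```python
CRISIS_REFERRAL = (
    "If you are thinking about suicide or self-harm, help is available. "
    "In the US, call or text 988 (Suicide & Crisis Lifeline) or text "
    "HOME to 741741 (Crisis Text Line)."
)

def detect_crisis(prompt: str) -> bool:
    # Placeholder: real systems combine classifiers with keyword rules.
    return False

def respond(prompt: str, model_reply: str) -> str:
    """Sec. 5 floor: refer users in crisis to crisis service providers.

    Referral is the statutory minimum; the adopted protocol should also
    cover detection quality, escalation, and logging.
    """
    if detect_crisis(prompt):
        return f"{CRISIS_REFERRAL}\n\n{model_reply}"
    return model_reply
```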
CP-01 Deceptive & Manipulative AI Conduct · CP-01.9 · Deployer · Chatbot
Sec. 6
Plain Language
Operators may not knowingly and intentionally cause or program their conversational AI service to explicitly represent itself as providing professional mental or behavioral health care. This targets claims like 'I am your therapist' or 'This service provides professional counseling' — not general wellness or informational content. The scienter requirement is high: both 'knowingly' and 'intentionally' must be satisfied, meaning the operator must have actual knowledge and specific intent. Spontaneous AI hallucinations claiming professional status would likely not meet this threshold unless the operator designed the system to make such claims.
Statutory Text
An operator shall not knowingly and intentionally cause or program a conversational artificial intelligence service to make any representation or statement that explicitly indicates that the conversational artificial intelligence service is designed to provide professional mental or behavioral health care.
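Since liability turns on what the operator knowingly causes or programs, the natural control point is release review of persona prompts and marketing copy rather than runtime filtering. A minimal sketch follows; the phrase list is illustrative, not statutory.

```python
# Illustrative phrases only; the statute prohibits explicit
# representations of professional mental or behavioral health care,
# however worded.
PROHIBITED_CLAIMS = (
    "i am your therapist",
    "licensed counselor",
    "professional mental health care",
    "provides professional counseling",
)

def review_configured_text(text: str) -> list[str]:
    """Release-review check over system prompts, persona bios, and ads."""
    lowered = text.lower()
    return [p for p in PROHIBITED_CLAIMS if p in lowered]

assert review_configured_text("I am your therapist, here 24/7") \
    == ["i am your therapist"]
```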
Other · Chatbot
Sec. 7(1)-(4)
Plain Language
This section establishes the enforcement framework for the Act. The Attorney General has exclusive enforcement authority and may bring civil actions on behalf of the state or aggrieved individuals. Available remedies include equitable relief, actual damages, civil penalties of at least $1,000 per violation capped at $500,000 per operator, and litigation costs. No private right of action is created. Importantly, subsection (4) shields upstream AI model developers from liability when a third party builds a conversational AI service using their model and that service violates the Act: liability rests with the operator, not the model provider. This section creates no independent compliance obligation.
Statutory Text
(1) The Attorney General shall enforce the Conversational Artificial Intelligence Safety Act. (2)(a) The Attorney General may bring a civil action for appropriate relief against an operator for a violation of the Conversational Artificial Intelligence Safety Act, on behalf of the State of Nebraska or on behalf of any person aggrieved by a violation of the act. (b) In an action under this section, appropriate relief includes: (i) Such preliminary and other equitable or declaratory relief as may be appropriate; (ii) An award of actual damages; (iii) Civil penalties of at least one thousand dollars per violation, but in no event more than five hundred thousand dollars per operator; and (iv) Reasonable expenses incurred in bringing the civil action, including court costs, reasonable attorney's fees, investigative costs, witness fees, and deposition costs. (3) Nothing in the Conversational Artificial Intelligence Safety Act can be interpreted as creating a private right of action. (4) The Conversational Artificial Intelligence Act shall not create liability for the developer of an artificial intelligence model for any violation of the act by an artificial intelligence system developed by a third party to provide a conversational artificial intelligence service for such developer.