LB-1083
NE · State · USA
● Pending
Proposed Effective Date
2027-01-01
Nebraska LB 1083 — Transparency in Artificial Intelligence Risk Management Act
Summary

Nebraska LB 1083 imposes transparency and safety obligations on two categories of covered entities: large frontier developers (frontier model developers with $500M+ annual revenue) and large chatbot providers (operators of covered chatbots with $25M+ annual revenue and 1M+ monthly users). Both must write, implement, comply with, and publicly publish a 'public safety and child protection plan' addressing catastrophic risks (for frontier developers) and child safety risks (for chatbot providers). Large frontier developers must publish catastrophic risk assessment summaries before deploying frontier models and submit quarterly internal-use risk assessments to the Attorney General. All frontier developers must report critical safety incidents within 15 days (24 hours if imminent death/injury risk), and large chatbot providers must report child safety incidents within 15 days. The bill includes robust whistleblower protections with a private right of action for retaliation, and civil penalties up to $1M per violation for large frontier developers and $50K per violation for large chatbot providers. A federal compliance safe harbor allows entities to satisfy state obligations by complying with substantially equivalent designated federal standards.

Enforcement & Penalties
Enforcement Authority
Attorney General enforcement. The Attorney General may bring an action to enforce the act. There is no private right of action for substantive violations of the act; however, employees or applicants who suffer whistleblower retaliation may bring a private civil action in district court within one year of the alleged violation or its discovery. The Attorney General establishes reporting mechanisms and may transmit reports to the Legislature, Governor, federal government, or appropriate state agencies. A federal compliance safe harbor is available: a frontier developer or large chatbot provider may declare its intent to comply with a designated, substantially equivalent federal law, regulation, or guidance document, in which case compliance with the federal standard satisfies the corresponding state obligation.
Penalties
Civil penalties enforced by the Attorney General: up to $1,000,000 per violation for large frontier developers and up to $50,000 per violation for large chatbot providers, with amounts scaled to the severity of the violation. For whistleblower retaliation claims brought by aggrieved employees or applicants, courts may award temporary or permanent injunctive relief, general and special damages, and reasonable attorney's fees and court costs. Penalties collected by the Attorney General are remitted to the State Treasurer for distribution under Article VII, section 5, of the Nebraska Constitution.
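For exposure tracking, the two-tier penalty structure reduces to a per-violation cap lookup. A minimal illustrative sketch in Python; the `EntityType` enum and `max_exposure` helper are hypothetical constructs, not terms from the bill, and actual awards scale with severity below these caps.

```python
from enum import Enum

class EntityType(Enum):
    LARGE_FRONTIER_DEVELOPER = "large_frontier_developer"
    LARGE_CHATBOT_PROVIDER = "large_chatbot_provider"

# Per-violation civil penalty caps under LB 1083, enforced by the AG.
PENALTY_CAPS = {
    EntityType.LARGE_FRONTIER_DEVELOPER: 1_000_000,
    EntityType.LARGE_CHATBOT_PROVIDER: 50_000,
}

def max_exposure(entity: EntityType, violation_count: int) -> int:
    """Upper bound on civil penalties; actual amounts depend on severity."""
    return PENALTY_CAPS[entity] * violation_count

# e.g. max_exposure(EntityType.LARGE_CHATBOT_PROVIDER, 3) == 150_000
```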
Who Is Covered
(11) Frontier developer means a person who has trained, or initiated the training of, a frontier model, with respect to which the person has used, or intends to use, at least as much computing power to train the frontier model as would meet the technical specifications found in subdivision (12) of this section, except as otherwise provided by rules and regulations adopted and promulgated pursuant to section 6 of this act. Accredited postsecondary educational institutions shall not be considered frontier developers under the act to the extent that such institutions are developing or using frontier models exclusively for academic research purposes. If a person subsequently transfers full intellectual property rights of a frontier model to another person, including the right to resell the model, and retains none of those rights for themself, then the receiving person shall be considered the frontier developer with respect to that frontier model on and after such transfer;
(14) Large frontier developer means, unless otherwise provided by rules and regulations adopted and promulgated pursuant to section 6 of this act, a frontier developer who together with its affiliates had a collective annual revenue in the preceding calendar year of five hundred million dollars or more;
(13) Large chatbot provider means a person who makes a covered chatbot available in this state and who, together with its affiliates, collectively had an annual revenue in the preceding calendar year of twenty-five million dollars or more, except as otherwise specified by rules and regulations adopted and promulgated pursuant to section 6 of this act.
What Is Covered
(12) Frontier model means a foundation model that was trained using a quantity of computing power greater than 10^26 integer or floating-point operations, except as otherwise provided by rules and regulations adopted and promulgated pursuant to section 6 of this act. The quantity of computing power described in this subdivision shall include computing for the original training run and for any subsequent fine-tuning, reinforcement learning, or other material modifications the developer applies to a preceding foundation model;
(6) Covered chatbot means a service that: (a) Allows an ordinary person to have conversations in which human-like responses are generated by a foundation model; (b) Is foreseeably likely to be accessed by minors; and (c) Has at least one million active users monthly;
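To make the layered coverage thresholds concrete, here is a hedged Python sketch of the tests as the definitions read today (the Attorney General may update them by rule under section 6). All field and function names are invented for illustration; the revenue, compute, and user figures are assumed inputs the entity would supply.

```python
from dataclasses import dataclass

FRONTIER_COMPUTE_THRESHOLD = 1e26       # integer or floating-point operations
LARGE_DEVELOPER_REVENUE = 500_000_000   # preceding calendar year, with affiliates
LARGE_PROVIDER_REVENUE = 25_000_000     # preceding calendar year, with affiliates
CHATBOT_MAU_THRESHOLD = 1_000_000       # monthly active users

@dataclass
class ModelTraining:
    original_training_ops: float
    # Subsequent fine-tuning, reinforcement learning, and other material
    # modifications count toward the threshold under subdivision (12).
    modification_ops: float = 0.0

def is_frontier_model(t: ModelTraining) -> bool:
    """Foundation model trained with more than 10^26 total operations."""
    return (t.original_training_ops + t.modification_ops) > FRONTIER_COMPUTE_THRESHOLD

def is_large_frontier_developer(annual_revenue_with_affiliates: float) -> bool:
    return annual_revenue_with_affiliates >= LARGE_DEVELOPER_REVENUE

def is_large_chatbot_provider(annual_revenue_with_affiliates: float,
                              monthly_active_users: int,
                              foreseeably_accessed_by_minors: bool) -> bool:
    # The provider threshold is revenue-based; the chatbot itself must also
    # be a "covered chatbot" (foundation-model conversations, foreseeably
    # likely to be accessed by minors, 1M+ monthly active users).
    return (annual_revenue_with_affiliates >= LARGE_PROVIDER_REVENUE
            and monthly_active_users >= CHATBOT_MAU_THRESHOLD
            and foreseeably_accessed_by_minors)
```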
Compliance Obligations · 16 obligations
S-03 Frontier Model Safety Obligations · S-03.5 · Developer · Frontier AI System
Sec. 4(1)(a)
Plain Language
Large frontier developers must write, implement, comply with, and publicly publish on their website a detailed public safety and child protection plan covering catastrophic risk. The plan must describe how the developer defines and assesses catastrophic risk thresholds (which may be multi-tiered), applies mitigations, reviews risk assessments as part of deployment and internal-use decisions, uses third-party evaluators, implements cybersecurity to protect unreleased model weights, and manages catastrophic risk from internal model use including risks from models circumventing oversight. This is both a documentation obligation and a continuous operational requirement — the developer must implement and comply with the plan, not merely publish it.
Statutory Text
(1) A large frontier developer or large chatbot provider shall write, implement, comply with, and clearly and conspicuously publish on its website a public safety and child protection plan that describes in detail: (a) For a large frontier developer, how the large frontier developer: (i) Defines and assesses thresholds used by the large frontier developer to identify and assess whether a frontier model has capabilities that could pose a catastrophic risk, which may include multiple-tiered thresholds; (ii) Applies mitigations to address the potential for catastrophic risks based on the results of the assessments undertaken pursuant to subdivision (1)(a)(i) of this section; (iii) Reviews assessments of catastrophic risk and adequacy of mitigations of catastrophic risk as part of the decision to deploy a frontier model or use it extensively internally; (iv) Uses third parties to assess the potential for catastrophic risks and the effectiveness of mitigations of catastrophic risks; (v) Implements cybersecurity practices to secure unreleased frontier model weights from unauthorized modification or transfer by internal or external parties; and (vi) Assesses and manages catastrophic risk resulting from the internal use of its frontier models, including risks resulting from a frontier model circumventing oversight mechanisms;
S-03 Frontier Model Safety Obligations · S-03.5 · Deployer · Chatbot · Minors
Sec. 4(1)(b)
Plain Language
Large chatbot providers must include in their public safety and child protection plan a detailed description of how they assess child safety risks, apply mitigations based on those assessments, and use third parties to evaluate child safety risk potential and mitigation effectiveness. This plan must be written, implemented, complied with, and published on the provider's website per the parent obligation in Sec. 4(1).
Statutory Text
(b) For a large chatbot provider, how the large chatbot provider: (i) Assesses the potential for child safety risks; (ii) Applies mitigations to address the potential for child safety risks based on the results of the assessments undertaken pursuant to subdivision (1)(b)(i) of this section; and (iii) Uses third parties to assess the potential for child safety risks and the effectiveness of mitigations of child safety risks;
S-03 Frontier Model Safety Obligations · S-03.5 · Developer · Deployer · Frontier AI System · Chatbot · Minors
Sec. 4(1)(c)
Plain Language
Both large frontier developers and large chatbot providers must describe in their public safety and child protection plan how they incorporate national and international standards and industry best practices, how they revisit and update the plan (including triggers for updates and criteria for determining when models are substantially modified enough to require new disclosures), how they identify and respond to safety incidents, and what internal governance practices ensure the plan is actually implemented. This shared section applies to both entity types on top of their entity-specific plan requirements.
Statutory Text
(c) For both large frontier developers and large chatbot providers, how the large frontier developer or large chatbot provider: (i) Incorporates national standards, international standards, and industry-consensus best practices into its public safety and child protection plan; (ii) Revisits and updates the public safety and child protection plan, including any criteria that trigger updates and how such developer or provider determines when its foundation models or frontier models are substantially modified enough to require disclosures pursuant to subsection (3) or subsection (4) of this section; (iii) Identifies and responds to safety incidents; and (iv) Institutes internal governance practices to ensure implementation of its public safety and child protection plan.
G-02 Public Transparency & Documentation · G-02.4 · Developer · Deployer · Frontier AI System · Chatbot · Minors
Sec. 4(2)
Plain Language
When a large frontier developer or large chatbot provider materially modifies its public safety and child protection plan, it must publish the updated plan and a written justification for the changes on its website within 30 days. This is an ongoing disclosure obligation triggered by material plan modifications — not a one-time publication requirement.
Statutory Text
(2) If a large frontier developer or large chatbot provider makes a material modification to its public safety and child protection plan, the large frontier developer or large chatbot provider shall clearly and conspicuously publish on such developer's or provider's website the modified public safety and child protection plan and a justification for such modification within thirty days after such material modification.
G-02 Public Transparency & Documentation · G-02.4 · Deployer · Chatbot · Minors
Sec. 4(3)
Plain Language
Before or concurrently with integrating a new or substantially modified foundation model into a covered chatbot, a large chatbot provider must publish on its website summaries of its child safety risk assessments, the assessment results, the degree of third-party evaluator involvement, and other steps taken to fulfill the child protection plan. This disclosure is triggered each time a new or substantially modified foundation model is integrated into a covered chatbot.
Statutory Text
(3) Before, or concurrently with, integrating a new foundation model, or a version of an existing foundation model that has been substantially modified, into a covered chatbot operated by the large chatbot provider, a large chatbot provider shall conspicuously publish on its website summaries of all of the following: (i) Assessments of child safety risks conducted pursuant to the large chatbot provider's public safety and child protection plan; (ii) The results of such assessments; (iii) The extent to which third-party evaluators were involved in such assessments; and (iv) Other steps taken to fulfill the requirements of the public safety and child protection plan with respect to child safety risks.
G-02 Public Transparency & Documentation · G-02.3 · Developer · Frontier AI System
Sec. 4(4)(a)-(b)
Plain Language
Before or concurrently with deploying a new or substantially modified frontier model, a large frontier developer must publish on its website summaries of its catastrophic risk assessments, assessment results, third-party evaluator involvement, and other steps taken to address catastrophic risks. Publication as part of a system card or model card satisfies this requirement. This disclosure is triggered each time a new or substantially modified frontier model is deployed.
Statutory Text
(4)(a) Before, or concurrently with, deploying a new frontier model or a version of an existing frontier model that the large frontier developer has substantially modified, a large frontier developer shall conspicuously publish on its website summaries of all of the following: (i) Assessments of catastrophic risks from the frontier model conducted pursuant to the large frontier developer's public safety and child protection plan; (ii) The results of such assessments; (iii) The extent to which third-party evaluators were involved in such assessments; and (iv) Other steps taken to fulfill the requirements of the public safety and child protection plan with respect to catastrophic risks from the frontier model. (b) A large frontier developer that publishes the information described in subdivision (4)(a) of this section as part of a larger document, including a system card or model card, shall be deemed in compliance with this subsection.
CP-01 Deceptive & Manipulative AI Conduct · Developer · Deployer · Frontier AI System · Chatbot · Minors
Sec. 4(5)(a)-(b)
Plain Language
Large frontier developers and large chatbot providers are prohibited from making materially false or misleading statements or omissions about (1) covered risks from their activities or management of those risks, or (2) their implementation of or compliance with their public safety and child protection plan. A good-faith safe harbor applies: the prohibition does not cover statements made in good faith that were reasonable under the circumstances. This is a deceptive conduct prohibition that could be violated by public communications, marketing, investor disclosures, or regulatory submissions.
Statutory Text
(5)(a)(i) A large frontier developer or large chatbot provider shall not make a materially false or misleading statement or omission about covered risks from its activities or its management of covered risks. (ii) A large frontier developer or large chatbot provider shall not make a materially false or misleading statement or omission about its implementation of, or compliance with, its public safety and child protection plan. (b) Subdivision (5)(a) of this section does not apply to a statement that was made in good faith and was reasonable under the circumstances.
G-01 AI Governance Program & Documentation · G-01.3 · Developer · Deployer · Frontier AI System · Chatbot · Minors
Sec. 4(6)(a)-(b)
Plain Language
When publishing documents to comply with Sec. 4, large frontier developers and large chatbot providers may redact information necessary to protect trade secrets, cybersecurity, public safety, national security, or to comply with law. However, any redaction must be accompanied by a description of the character and justification of the redaction in the published document, and the unredacted information must be retained for five years. This creates a recordkeeping obligation that survives the publication event.
Statutory Text
(6)(a) When a large frontier developer or large chatbot provider publishes documents to comply with this section, the large frontier developer or large chatbot provider may make redactions to those documents that are necessary to protect the large frontier developer's trade secrets, the large frontier developer's or large chatbot provider's cybersecurity, public safety, or the national security of the United States or to comply with any federal or state law. (b) If a large frontier developer or large chatbot provider redacts information in a document pursuant to subdivision (6)(a) of this section, the large frontier developer or large chatbot provider shall describe the character and justification of the redaction in any published version of the document to the extent permitted by the concerns that justify redaction and shall retain the unredacted information for five years.
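Because the redaction rule pairs a publication-time duty (describe the redaction in the published copy) with a five-year recordkeeping duty, a compliance team might track both in one record. A minimal sketch, assuming a simple record structure the bill itself does not prescribe:

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Permissible redaction grounds under Sec. 4(6)(a).
REDACTION_GROUNDS = {
    "trade_secret", "cybersecurity", "public_safety",
    "national_security", "legal_compliance",
}

RETENTION_PERIOD = timedelta(days=5 * 365)  # five years, approximated in days

@dataclass
class RedactionRecord:
    document: str
    ground: str                  # should be one of REDACTION_GROUNDS
    published_description: str   # character/justification shown in the public copy
    unredacted_text: str         # retained internally, never published
    publication_date: date

    def retain_until(self) -> date:
        """Earliest date the unredacted information may be discarded."""
        return self.publication_date + RETENTION_PERIOD
```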
R-01 Incident Reporting · R-01.1 · Developer · Frontier AI System
Sec. 5(2)-(3)
Plain Language
All frontier developers (not just large frontier developers) must report any critical safety incident to the Attorney General within 15 days of discovery. Critical safety incidents include model weight exfiltration, mass casualty events, loss of model control, and deceptive model behavior. If the incident poses an imminent risk of death or serious physical injury, accelerated 24-hour reporting to an appropriate authority (including law enforcement or public safety agencies) is required. Note the broader scope — this obligation applies to all frontier developers, not just those meeting the $500M revenue threshold.
Statutory Text
(2) A frontier developer shall report any critical safety incident pertaining to one of its frontier models to the Attorney General within fifteen days after discovering the critical safety incident. (3) If a frontier developer discovers that a critical safety incident poses an imminent risk of death or serious physical injury, the frontier developer shall disclose that incident within twenty-four hours to an authority, including any law enforcement agency or public safety agency with jurisdiction, that is appropriate based on the nature of that incident and as required by law.
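The reporting clock is tiered: fifteen days to the Attorney General for any critical safety incident, compressed to twenty-four hours to an appropriate authority when imminent death or serious physical injury is at risk. An illustrative deadline calculator, assuming discovery timestamps are tracked; nothing in the bill specifies a timekeeping scheme:

```python
from datetime import datetime, timedelta

def reporting_deadlines(discovered_at: datetime, imminent_harm: bool) -> dict:
    """Deadlines under LB 1083 Sec. 5(2)-(3), keyed by recipient."""
    deadlines = {
        # Applies to all frontier developers, not only "large" ones.
        "attorney_general": discovered_at + timedelta(days=15),
    }
    if imminent_harm:
        # Imminent risk of death or serious physical injury: twenty-four
        # hours to an appropriate authority, e.g. a law enforcement or
        # public safety agency with jurisdiction.
        deadlines["appropriate_authority"] = discovered_at + timedelta(hours=24)
    return deadlines

# Example: an incident discovered 2027-03-01 09:00 with imminent harm must
# reach an appropriate authority by 2027-03-02 09:00 and the Attorney
# General by 2027-03-16 09:00.
```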
R-01 Incident Reporting · R-01.1 · Deployer · Chatbot · Minors
Sec. 5(4)
Plain Language
Large chatbot providers must report any child safety incident involving their covered chatbots to the Attorney General within 15 days of discovery. A child safety incident is defined as chatbot behavior that, if committed by a human, would constitute intentionally or recklessly causing death, bodily injury, or severe emotional distress to a minor.
Statutory Text
(4) A large chatbot provider shall report any child safety incident pertaining to one of its covered chatbots to the Attorney General within fifteen days after discovering the child safety incident.
R-02 Regulatory Disclosure & Submissions · R-02.1 · Developer · Frontier AI System
Sec. 5(5)-(6)
Plain Language
Large frontier developers must submit to the Attorney General summaries of their catastrophic risk assessments from internal use of frontier models at least every three months (quarterly). The Attorney General will establish a confidential submission mechanism. This is a proactive, scheduled submission — the developer cannot wait to be asked. The obligation covers internal use specifically, distinguishing it from the pre-deployment public disclosure requirement in Sec. 4(4).
Statutory Text
(5) The Attorney General shall establish a mechanism to be used by a large frontier developer to confidentially submit summaries of any assessments of the potential for catastrophic risk resulting from internal use of its frontier models. (6) A large frontier developer shall transmit to the Attorney General a summary of any assessment of catastrophic risk resulting from internal use of its frontier models no less frequently than every three months.
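Because "no less frequently than every three months" makes each submission start the next three-month clock, a scheduler only needs calendar arithmetic. A minimal sketch, with `next_summary_due` as a hypothetical helper name:

```python
import calendar
from datetime import date

def add_three_months(d: date) -> date:
    """Same day of the month three months later, clamped for short months."""
    month = d.month + 3
    year, month = d.year + (month - 1) // 12, (month - 1) % 12 + 1
    day = min(d.day, calendar.monthrange(year, month)[1])
    return date(year, month, day)

def next_summary_due(last_submitted: date) -> date:
    # Each quarterly internal-use risk summary restarts the clock
    # for the next one.
    return add_three_months(last_submitted)

# e.g. next_summary_due(date(2027, 11, 30)) == date(2028, 2, 29)
```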
G-03 Whistleblower & Anti-Retaliation Protections · G-03.3 · Developer · Deployer · Frontier AI System · Chatbot · Minors
Sec. 7(2)-(4)
Plain Language
Frontier developers and large chatbot providers must not retaliate against employees for good-faith disclosures to the Attorney General, federal authorities, or internal personnel about activities posing a specific and substantial danger to public health, safety, or minors' health or safety, or about violations of the act. Retaliation protections also cover employees and applicants who testify, assist, or participate in investigations or proceedings concerning violations. Employment agreements and NDAs cannot waive these protections — any such waiver is void and unenforceable as against public policy. An aggrieved employee or applicant may bring a civil action within one year for injunctive relief, general and special damages, and attorney's fees.
Statutory Text
(2) A frontier developer or large chatbot provider shall not take adverse action against or otherwise penalize an employee for disclosing information to the Attorney General, a federal authority, a person with authority over the employee, or another employee who has authority to investigate, discover, or correct the reported issue, if the employee has reasonable cause to believe that the information discloses either of the following: (a) The frontier developer's or large chatbot provider's activities pose a specific and substantial danger to the public health or safety or to the health or safety of a minor; or (b) The frontier developer or large chatbot provider has violated the Transparency in Artificial Intelligence Risk Management Act. (3) A frontier developer or large chatbot provider shall not require an employee or applicant to waive or limit any protection granted under this section as a condition of continued employment or of applying for or receiving an offer of employment. Any agreement to waive any right or protection under the act is against the public policy of this state and is void and unenforceable. (4) A frontier developer or large chatbot provider shall not retaliate, discriminate or take adverse action against an employee or applicant because the employee or applicant testifies, assists, or participates in an investigation, proceeding, or action concerning a violation of the Transparency in Artificial Intelligence Risk Management Act.
G-03 Whistleblower & Anti-Retaliation Protections · G-03.1 · G-03.2 · Developer · Frontier AI System
Sec. 7(5)(a)-(b)
Plain Language
Large frontier developers must establish an internal anonymous disclosure process for employees who believe in good faith that the company's activities pose a specific and substantial threat to public health, safety, or minor safety, or that the company has violated the act. The process must include monthly status updates to the disclosing employee on the investigation and response. All disclosures and responses must be shared with officers and directors quarterly, except that an officer or director accused of wrongdoing in a disclosure is excluded from receiving that disclosure. This is an operational infrastructure requirement — the process must exist, function, and be maintained.
Statutory Text
(5)(a) A large frontier developer shall provide a reasonable internal process through which an employee may anonymously disclose information to the large frontier developer if the employee believes in good faith that the information indicates that (i) the large frontier developer's activities pose a specific and substantial threat to the public health or safety or to the health or safety of a minor or (ii) the large frontier developer or large chatbot provider has violated the Transparency in Artificial Intelligence Risk Management Act. Such internal process shall include providing a monthly update to the person who made the disclosure regarding the status of the large frontier developer's investigation of the disclosure and the actions taken by the large frontier developer in response to the disclosure. Except as provided in subdivision (b) of this subsection, the disclosures and responses of the process required by this subdivision shall be shared with officers and directors of the large frontier developer at least once each quarter. (b) If an employee has alleged wrongdoing by an officer or director of the large frontier developer in a disclosure or response, subdivision (a) of this subsection shall not apply with respect to that officer or director.
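The internal process runs on two clocks (monthly updates to the discloser, quarterly sharing with officers and directors) plus an exclusion rule for accused officers or directors. A hedged sketch of the exclusion logic; the data model is invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class Disclosure:
    disclosure_id: str
    summary: str
    # Officers or directors the discloser alleges committed wrongdoing.
    accused: set = field(default_factory=set)

def quarterly_recipients(disclosure: Disclosure, officers_and_directors: set) -> set:
    """Sec. 7(5)(b): an officer or director accused of wrongdoing in a
    disclosure or response does not receive that disclosure."""
    return officers_and_directors - disclosure.accused

# Separately, Sec. 7(5)(a) requires a monthly status update to the
# anonymous discloser on the investigation and the responsive actions.
```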
Other · Frontier AI System · Chatbot · Minors
Sec. 5(1)
Plain Language
The Attorney General must establish a public-facing mechanism for frontier developers, large chatbot providers, and members of the public to report safety incidents. Reports must include the date, the qualifying reasons, and a short description. This is an obligation on the Attorney General to build regulatory infrastructure — not a direct compliance obligation on covered entities, though it creates the reporting channel that entities must use under Sec. 5(2) and (4).
Statutory Text
(1) The Attorney General shall establish a mechanism to be used by a frontier developer, a large chatbot provider, or a member of the public to report a safety incident that includes all of the following: (a) The date of the safety incident; (b) The reasons the incident qualifies as a safety incident; and (c) A short and plain statement describing the safety incident.
Other · Frontier AI System · Chatbot · Minors
Sec. 6(1)-(2)
Plain Language
The Attorney General must annually assess developments in AI and may update the statutory definitions of frontier model, frontier developer, large frontier developer, and large chatbot provider by rule. In doing so, the AG must consider international and federal standards, stakeholder input, ease and timing of coverage determinations, external verifiability, and thresholds used by other states. The AG must align updated definitions with federal law to the extent consistent with the act's purposes. This creates a dynamic regulatory framework — entities must monitor AG rulemaking to determine whether they fall within updated coverage definitions.
Statutory Text
(1) On or before January 1, 2027, and annually thereafter, the Attorney General shall assess recent evidence and developments relevant to the purposes of the Transparency in Artificial Intelligence Risk Management Act and may adopt and promulgate rules and regulations to update definitions for any of the following terms for the purposes of the act to ensure that such definitions accurately reflect technological developments, scientific literature, and widely accepted national and international standards: (a) Frontier model, so that such definition applies to foundation models at the frontier of artificial intelligence development; (b) Frontier developer, so that such definition applies to developers of frontier models who are themselves at the frontier of artificial intelligence development; (c) Large frontier developer, so that such definition applies to well-resourced frontier developers; and (d) Large chatbot provider, so that such definition applies to well-resourced companies developing covered chatbots that may pose child safety risks. (2) In adopting and promulgating rules and regulations pursuant to this section, the Attorney General shall take into account all of the following: (a) Similar thresholds used in international standards or federal law, guidance, or regulations for the management of catastrophic risks or child safety risks. The Attorney General shall align any updated definition with a definition adopted in a federal law or regulation to the extent that it is consistent with the purposes of the Transparency in Artificial Intelligence Risk Management Act; (b) Input from stakeholders, such as academic and technology industry professionals, the open-source community, and governmental entities; (c) The extent to which a person will be able to determine, before beginning to train or deploy a foundation model, whether that person will be subject to the definition as a frontier developer or as a large frontier developer with an aim toward allowing earlier determinations if possible; (d) The complexity of determining whether a person or foundation model is covered, with an aim toward allowing simpler determinations if possible; (e) The external verifiability of determining whether a person or foundation model is covered, with an aim toward definitions that are verifiable by parties other than the frontier developer; and (f) Thresholds used by other states in similar laws.
Other · Frontier AI System · Chatbot · Minors
Sec. 5(8)-(10)
Plain Language
The Attorney General may designate substantially equivalent federal laws, regulations, or guidance documents as safe harbors. A frontier developer or large chatbot provider may declare to the AG its intent to comply with such a designated federal standard instead of the state requirements. Once declared, compliance with the federal standard satisfies the corresponding state obligations. However, failure to comply with the elected federal standard is itself a violation of this act. The developer may revoke or modify its declaration, and the AG must revoke a designation if the federal standard no longer meets the equivalency requirements. This creates a reciprocal commitment — opting in to the federal safe harbor is voluntary but binding once elected.
Statutory Text
(8) The Attorney General may adopt and promulgate rules and regulations designating one or more federal laws, regulations, or guidance documents that meet all of the following conditions for the purposes of subsection (9) of this section: (a) The law, regulation, or guidance document imposes or states standards or requirements for safety incident reporting that are substantially equivalent to, or stricter than, those required by this section for critical safety incidents, child safety incidents, or both. A law, regulation, or guidance document may satisfy this subdivision even if it does not require safety incident reporting to the State of Nebraska; and (b) The law, regulation, or guidance document is intended to assess, detect, or mitigate catastrophic risk, child safety risk, or both. (9)(a) A frontier developer or large chatbot provider that intends to comply with all or part of this section by complying with the requirements of, or meeting the standards stated by, a federal law, regulation, or guidance document designated pursuant to subsection (8) of this section by the Attorney General shall declare its intent to do so to the Attorney General. (b) After a frontier developer or large chatbot provider has declared its intent pursuant to subdivision (9)(a) of this section, the following shall apply: (i) To the extent that such developer or provider meets the standards of, or complies with the requirements imposed or stated by, the designated federal law, regulation, or guidance document, such developer or provider shall be deemed in compliance with the obligations under this section pertaining to: (A) Critical safety incidents, if such designated law, regulation, or document is intended to assess, detect, or mitigate catastrophic risk; and (B) Child safety incidents, if such designated law, regulation, or document is intended to assess, detect, or mitigate child safety risk; and (ii) The failure by such developer or provider to meet the standards of, or comply with the requirements stated by, such designated law, regulation, or document, shall be considered a violation of the Transparency in Artificial Intelligence Risk Management Act. (c) Subdivision (9)(b) of this section shall not apply to a frontier developer or large chatbot provider to the extent that: (i) Such developer or provider makes a declaration of intent to the Attorney General to modify or revoke a declaration of intent under subdivision (9)(a) of this section; or (ii) The Attorney General revokes a rule or regulation pursuant to subsection (10) of this section. (10) The Attorney General shall revoke a rule or regulation adopted under or promulgated under subsection (8) of this section if the requirements of subsection (8) are no longer met.
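The election logic in Sec. 5(9) maps a designated federal standard's purpose onto the state incident-reporting obligations it can satisfy, and treats noncompliance with the elected standard as a state violation. An illustrative sketch of that mapping, with invented names and simplified inputs:

```python
from dataclasses import dataclass

@dataclass
class FederalStandard:
    name: str
    addresses_catastrophic_risk: bool
    addresses_child_safety_risk: bool
    designated_by_ag: bool  # designated by rule under Sec. 5(8)

def election_outcome(std: FederalStandard, entity_complies: bool):
    """Returns (state obligations deemed satisfied, whether a violation
    arises) after a declaration of intent under Sec. 5(9)(a)."""
    if not std.designated_by_ag:
        return set(), False  # no valid election; state rules apply directly
    if not entity_complies:
        # Sec. 5(9)(b)(ii): failing the elected federal standard is itself
        # a violation of the act.
        return set(), True
    satisfied = set()
    if std.addresses_catastrophic_risk:
        satisfied.add("critical_safety_incident_reporting")
    if std.addresses_child_safety_risk:
        satisfied.add("child_safety_incident_reporting")
    return satisfied, False
```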