LB-1083
NE · State · USA
● Failed
Effective Date
2027-01-01
Nebraska LB 1083 — Transparency in Artificial Intelligence Risk Management Act
Summary

Imposes transparency and safety obligations on large frontier AI model developers and large chatbot providers operating in Nebraska. Large frontier developers (≥$500M annual revenue) must publish a public safety and child protection plan addressing catastrophic risk assessment, mitigation, third-party evaluation, cybersecurity, and internal governance. Large chatbot providers (≥$25M annual revenue, operating chatbots with ≥1M monthly active users that are foreseeably accessible to minors) must publish a similar plan addressing child safety risks. Both must report safety incidents to the Attorney General within 15 days (24 hours for imminent life-threatening incidents). Large frontier developers must also submit confidential catastrophic risk assessment summaries at least quarterly. The act includes whistleblower protections with a private right of action for retaliation and provides a federal compliance safe harbor mechanism. Enforced by the Attorney General, with civil penalties up to $1M per violation for large frontier developers and $50K per violation for large chatbot providers.

Enforcement & Penalties
Enforcement Authority
The Attorney General has enforcement authority and may bring civil actions to enforce the act. There is no private right of action for general violations of the act. Employees and applicants who suffer retaliation under the whistleblower provision (Sec. 7) may bring a civil action in district court within one year of the alleged violation or its discovery, whichever is later. The Attorney General also establishes reporting mechanisms for safety incidents and receives confidential submissions from frontier developers.
Penalties
For AG enforcement: large frontier developers face civil penalties of up to $1,000,000 per violation, and large chatbot providers of up to $50,000 per violation, in each case depending on severity. For whistleblower retaliation claims brought by employees or applicants: appropriate relief, including temporary or permanent injunctive relief, general and special damages, and reasonable attorney's fees and court costs. No statutory minimum is specified for whistleblower claims.
Who Is Covered
(11) Frontier developer means a person who has trained, or initiated the training of, a frontier model, with respect to which the person has used, or intends to use, at least as much computing power to train the frontier model as would meet the technical specifications found in subdivision (12) of this section, except as otherwise provided by rules and regulations adopted and promulgated pursuant to section 6 of this act. Accredited postsecondary educational institutions shall not be considered frontier developers under the act to the extent that such institutions are developing or using frontier models exclusively for academic research purposes. If a person subsequently transfers full intellectual property rights of a frontier model to another person, including the right to resell the model, and retains none of those rights for themself, then the receiving person shall be considered the frontier developer with respect to that frontier model on and after such transfer;
(14) Large frontier developer means, unless otherwise provided by rules and regulations adopted and promulgated pursuant to section 6 of this act, a frontier developer who together with its affiliates had a collective annual revenue in the preceding calendar year of five hundred million dollars or more;
(13) Large chatbot provider means a person who makes a covered chatbot available in this state and who, together with its affiliates, collectively had an annual revenue in the preceding calendar year of twenty-five million dollars or more, except as otherwise specified by rules and regulations adopted and promulgated pursuant to section 6 of this act.
What Is Covered
(12) Frontier model means a foundation model that was trained using a quantity of computing power greater than 10^26 integer or floating-point operations, except as otherwise provided by rules and regulations adopted and promulgated pursuant to section 6 of this act. The quantity of computing power described in this subdivision shall include computing for the original training run and for any subsequent fine-tuning, reinforcement learning, or other material modifications the developer applies to a preceding foundation model;
(6) Covered chatbot means a service that: (a) Allows an ordinary person to have conversations in which human-like responses are generated by a foundation model; (b) Is foreseeably likely to be accessed by minors; and (c) Has at least one million active users monthly;
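Taken together, the definitions above reduce to a handful of numeric thresholds. As a first-pass screening aid only, they can be sketched in Python; the `Entity` fields and function names here are illustrative, and the statute's carve-outs (academic research institutions, IP transfers, and any AG rulemaking under section 6 that adjusts these thresholds) are deliberately omitted.

```python
from dataclasses import dataclass

# Default thresholds from the act's definitions; the Attorney General may
# update these by rulemaking, so treat them as defaults, not constants.
FRONTIER_COMPUTE_FLOP = 1e26          # Sec. (12): strictly greater than 10^26 ops
LARGE_FRONTIER_REVENUE = 500_000_000  # Sec. (14): $500M or more, preceding year
LARGE_CHATBOT_REVENUE = 25_000_000    # Sec. (13): $25M or more, preceding year
CHATBOT_MONTHLY_USERS = 1_000_000     # Sec. (6)(c): at least 1M monthly actives

@dataclass
class Entity:
    training_flop: float          # total compute, incl. fine-tuning/modifications
    annual_revenue: float         # entity plus affiliates, preceding calendar year
    chatbot_monthly_users: int    # 0 if no covered chatbot offered in-state
    chatbot_minor_accessible: bool = False  # "foreseeably likely to be accessed by minors"

def is_large_frontier_developer(e: Entity) -> bool:
    # Frontier model compute threshold is strictly "greater than" 10^26;
    # the revenue threshold is "or more" (inclusive).
    return (e.training_flop > FRONTIER_COMPUTE_FLOP
            and e.annual_revenue >= LARGE_FRONTIER_REVENUE)

def is_large_chatbot_provider(e: Entity) -> bool:
    # All three conditions of the covered-chatbot and revenue definitions
    # must hold for large-chatbot-provider obligations to attach.
    return (e.annual_revenue >= LARGE_CHATBOT_REVENUE
            and e.chatbot_monthly_users >= CHATBOT_MONTHLY_USERS
            and e.chatbot_minor_accessible)
```

An entity can satisfy both definitions at once, in which case both sets of plan, disclosure, and reporting obligations apply.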
Compliance Obligations (17 obligations)
S-03 Frontier Model Safety Obligations · S-03.5 · Developer · Frontier AI System
Sec. 4(1)(a)(i)-(vi), (1)(c)(i)-(iv)
Plain Language
Large frontier developers must write, implement, comply with, and conspicuously publish on their website a public safety and child protection plan that details how they assess catastrophic risk thresholds, apply mitigations, review risks before deployment or extensive internal use, use third-party evaluators, secure unreleased model weights, and manage risks from internal model use including evasion of oversight. The plan must also describe how the developer incorporates national and international standards, revisits and updates the plan, identifies and responds to safety incidents, and maintains internal governance for implementation. This is a continuing obligation — the plan must be kept current and compliance is ongoing.
Statutory Text
(1) A large frontier developer or large chatbot provider shall write, implement, comply with, and clearly and conspicuously publish on its website a public safety and child protection plan that describes in detail: (a) For a large frontier developer, how the large frontier developer: (i) Defines and assesses thresholds used by the large frontier developer to identify and assess whether a frontier model has capabilities that could pose a catastrophic risk, which may include multiple-tiered thresholds; (ii) Applies mitigations to address the potential for catastrophic risks based on the results of the assessments undertaken pursuant to subdivision (1)(a)(i) of this section; (iii) Reviews assessments of catastrophic risk and adequacy of mitigations of catastrophic risk as part of the decision to deploy a frontier model or use it extensively internally; (iv) Uses third parties to assess the potential for catastrophic risks and the effectiveness of mitigations of catastrophic risks; (v) Implements cybersecurity practices to secure unreleased frontier model weights from unauthorized modification or transfer by internal or external parties; and (vi) Assesses and manages catastrophic risk resulting from the internal use of its frontier models, including risks resulting from a frontier model circumventing oversight mechanisms; (c) For both large frontier developers and large chatbot providers, how the large frontier developer or large chatbot provider: (i) Incorporates national standards, international standards, and industry-consensus best practices into its public safety and child protection plan; (ii) Revisits and updates the public safety and child protection plan, including any criteria that trigger updates and how such developer or provider determines when its foundation models or frontier models are substantially modified enough to require disclosures pursuant to subsection (3) or subsection (4) of this section; (iii) Identifies and responds to safety incidents; and 
(iv) Institutes internal governance practices to ensure implementation of its public safety and child protection plan.
S-03 Frontier Model Safety Obligations · S-03.5 · Deployer · Chatbot · Minors
Sec. 4(1)(b)(i)-(iii), (1)(c)(i)-(iv)
Plain Language
Large chatbot providers must write, implement, comply with, and conspicuously publish on their website a public safety and child protection plan that describes how they assess child safety risks, apply mitigations based on those assessments, and use third parties to evaluate risks and mitigation effectiveness. The plan must also cover incorporation of standards and best practices, update triggers, safety incident identification and response, and internal governance practices. This is the chatbot-provider-specific counterpart to the large frontier developer's plan obligations.
Statutory Text
(b) For a large chatbot provider, how the large chatbot provider: (i) Assesses potential for child safety risks. (ii) Applies mitigations to address the potential for child safety risks based on the results of the assessments undertaken pursuant to subdivision (1)(b)(i) of this section; and (iii) Uses third parties to assess the potential for child safety risks and the effectiveness of mitigations of child safety risks; and (c) For both large frontier developers and large chatbot providers, how the large frontier developer or large chatbot provider: (i) Incorporates national standards, international standards, and industry-consensus best practices into its public safety and child protection plan; (ii) Revisits and updates the public safety and child protection plan, including any criteria that trigger updates and how such developer or provider determines when its foundation models or frontier models are substantially modified enough to require disclosures pursuant to subsection (3) or subsection (4) of this section; (iii) Identifies and responds to safety incidents; and (iv) Institutes internal governance practices to ensure implementation of its public safety and child protection plan.
G-02 Public Transparency & Documentation · G-02.4 · Developer · Deployer · Frontier AI System · Chatbot · Minors
Sec. 4(2)
Plain Language
Whenever a large frontier developer or large chatbot provider materially modifies its public safety and child protection plan, it must publish the updated plan and a justification for the changes on its website within 30 days. This ensures ongoing public transparency about safety plan evolution.
Statutory Text
(2) If a large frontier developer or large chatbot provider makes a material modification to its public safety and child protection plan, the large frontier developer or large chatbot provider shall clearly and conspicuously publish on such developer's or provider's website the modified public safety and child protection plan and a justification for such modification within thirty days after such material modification.
G-02 Public Transparency & Documentation · G-02.3 · Deployer · Chatbot · Minors
Sec. 4(3)(i)-(iv)
Plain Language
Before or when integrating a new or substantially modified foundation model into a covered chatbot, the large chatbot provider must publish summaries of its child safety risk assessments, the results, the extent of third-party evaluator involvement, and other steps taken to address child safety risks. This ensures that each model change triggers fresh public disclosure about child safety evaluation. The timing obligation is tied to model integration, not a fixed calendar schedule.
Statutory Text
(3) Before, or concurrently with, integrating a new foundation model, or a version of an existing foundation model that has been substantially modified, into a covered chatbot operated by the large chatbot provider, a large chatbot provider shall conspicuously publish on its website summaries of all of the following: (i) Assessments of child safety risks conducted pursuant to the large chatbot provider's public safety and child protection plan; (ii) The results of such assessments; (iii) The extent to which third-party evaluators were involved in such assessments; and (iv) Other steps taken to fulfill the requirements of the public safety and child protection plan with respect to child safety risks.
G-02 Public Transparency & Documentation · G-02.3 · Developer · Frontier AI System
Sec. 4(4)(a)(i)-(iv), (4)(b)
Plain Language
Before or when deploying a new or substantially modified frontier model, the large frontier developer must publish summaries of catastrophic risk assessments, their results, third-party evaluator involvement, and other safety steps taken. Publishing this information as part of a system card or model card satisfies the requirement. This is a per-deployment obligation — each new model or substantial modification triggers a new publication.
Statutory Text
(4)(a) Before, or concurrently with, deploying a new frontier model or a version of an existing frontier model that the large frontier developer has substantially modified, a large frontier developer shall conspicuously publish on its website summaries of all of the following: (i) Assessments of catastrophic risks from the frontier model conducted pursuant to the large frontier developer's public safety and child protection plan; (ii) The results of such assessments; (iii) The extent to which third-party evaluators were involved in such assessments; and (iv) Other steps taken to fulfill the requirements of the public safety and child protection plan with respect to catastrophic risks from the frontier model. (b) A large frontier developer that publishes the information described in subdivision (4)(a) of this section as part of a larger document, including a system card or model card, shall be deemed in compliance with this subsection.
CP-01 Deceptive & Manipulative AI Conduct · Developer · Deployer · Frontier AI System · Chatbot · Minors
Sec. 4(5)(a)(i)-(ii), (5)(b)
Plain Language
Large frontier developers and large chatbot providers are prohibited from making materially false or misleading statements or omissions about covered risks from their activities, their management of those risks, or their implementation of or compliance with their public safety and child protection plan. A good-faith safe harbor applies: the prohibition does not cover statements made in good faith that were reasonable under the circumstances. This effectively creates an anti-fraud obligation specific to AI safety communications.
Statutory Text
(5)(a)(i) A large frontier developer or large chatbot provider shall not make a materially false or misleading statement or omission about covered risks from its activities or its management of covered risks. (ii) A large frontier developer or large chatbot provider shall not make a materially false or misleading statement or omission about its implementation of, or compliance with, its public safety and child protection plan. (b) Subdivision (5)(a) of this section does not apply to a statement that was made in good faith and was reasonable under the circumstances.
G-01 AI Governance Program & Documentation · G-01.3 · Developer · Deployer · Frontier AI System · Chatbot · Minors
Sec. 4(6)(a)-(b)
Plain Language
When publishing safety plan documents, large frontier developers and large chatbot providers may redact information to protect trade secrets, cybersecurity, public safety, national security, or to comply with law. However, any redaction must be described and justified in the published version (to the extent the justifying concerns permit), and the unredacted version must be retained for five years. This creates both a permissive redaction framework and a mandatory recordkeeping obligation for the unredacted originals.
Statutory Text
(6)(a) When a large frontier developer or large chatbot provider publishes documents to comply with this section, the large frontier developer or large chatbot provider may make redactions to those documents that are necessary to protect the large frontier developer's trade secrets, the large frontier developer's or large chatbot provider's cybersecurity, public safety, or the national security of the United States or to comply with any federal or state law. (b) If a large frontier developer or large chatbot provider redacts information in a document pursuant to subdivision (6)(a) of this section, the large frontier developer or large chatbot provider shall describe the character and justification of the redaction in any published version of the document to the extent permitted by the concerns that justify redaction and shall retain the unredacted information for five years.
R-01 Incident Reporting · R-01.1 · Developer · Frontier AI System
Sec. 5(2)-(3)
Plain Language
All frontier developers (not just large frontier developers) must report critical safety incidents involving their frontier models to the Attorney General within 15 days of discovery. If the incident poses an imminent risk of death or serious physical injury, the developer must additionally disclose it within 24 hours to an appropriate authority, including law enforcement or public safety agencies. Critical safety incidents include unauthorized model weight access, mass-casualty events, loss of model control, and model deception of its developer.
Statutory Text
(2) A frontier developer shall report any critical safety incident pertaining to one of its frontier models to the Attorney General within fifteen days after discovering the critical safety incident. (3) If a frontier developer discovers that a critical safety incident poses an imminent risk of death or serious physical injury, the frontier developer shall disclose that incident within twenty-four hours to an authority, including any law enforcement agency or public safety agency with jurisdiction, that is appropriate based on the nature of that incident and as required by law.
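The two reporting clocks in Sec. 5(2)-(3) both run from discovery and can overlap for a single incident. A minimal sketch of the deadline arithmetic, using a hypothetical `reporting_deadlines` helper and leaving aside the legal question of when "discovery" actually occurs:

```python
from datetime import datetime, timedelta

def reporting_deadlines(discovered_at: datetime, imminent_risk: bool) -> dict:
    """Deadlines under Sec. 5(2)-(3): report to the Attorney General within
    15 days of discovering a critical safety incident; if the incident poses
    an imminent risk of death or serious physical injury, an additional
    disclosure to an appropriate authority is due within 24 hours."""
    deadlines = {"attorney_general": discovered_at + timedelta(days=15)}
    if imminent_risk:
        # The 24-hour disclosure supplements, not replaces, the 15-day report.
        deadlines["appropriate_authority"] = discovered_at + timedelta(hours=24)
    return deadlines
```

Note that the 24-hour disclosure goes to "an authority ... appropriate based on the nature of that incident" (e.g. law enforcement), not necessarily to the Attorney General, and the 15-day AG report remains due in either case.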
R-01 Incident Reporting · R-01.1 · Deployer · Chatbot · Minors
Sec. 5(4)
Plain Language
Large chatbot providers must report any child safety incident involving their covered chatbots to the Attorney General within 15 days of discovery. A child safety incident includes chatbot behavior toward a minor that, if committed by a human, would constitute intentional or reckless causation of death, bodily injury, or severe emotional distress. This is a mandatory reporting obligation triggered by discovery, not by external complaint.
Statutory Text
(4) A large chatbot provider shall report any child safety incident pertaining to one of its covered chatbots to the Attorney General within fifteen days after discovering the child safety incident.
R-02 Regulatory Disclosure & Submissions · R-02.1 · Developer · Frontier AI System
Sec. 5(5)-(6)
Plain Language
Large frontier developers must submit to the Attorney General confidential summaries of catastrophic risk assessments related to internal use of their frontier models at least every three months (quarterly). The Attorney General will establish a confidential submission mechanism for this purpose. This is a proactive scheduled regulatory submission — the developer cannot wait to be asked.
Statutory Text
(5) The Attorney General shall establish a mechanism to be used by a large frontier developer to confidentially submit summaries of any assessments of the potential for catastrophic risk resulting from internal use of its frontier models. (6) A large frontier developer shall transmit to the Attorney General a summary of any assessment of catastrophic risk resulting from internal use of its frontier models no less frequently than every three months.
R-01 Incident Reporting · R-01.1 · Developer · Deployer · Frontier AI System · Chatbot · Minors
Sec. 5(1)(a)-(c)
Plain Language
The Attorney General must establish a public reporting mechanism for safety incidents usable by frontier developers, large chatbot providers, and members of the public. Reports must include the incident date, the reasons it qualifies as a safety incident, and a short and plain statement describing it. This provision creates infrastructure for the incident reporting obligations in the rest of Section 5, and also opens reporting to the general public.
Statutory Text
(1) The Attorney General shall establish a mechanism to be used by a frontier developer, a large chatbot provider, or a member of the public to report a safety incident that includes all of the following: (a) The date of the safety incident; (b) The reasons the incident qualifies as a safety incident; and (c) A short and plain statement describing the safety incident.
Other · Frontier AI System · Chatbot · Minors
Sec. 5(8)-(10)
Plain Language
The Attorney General may designate federal laws, regulations, or guidance as substantially equivalent to the act's safety incident reporting requirements. A frontier developer or large chatbot provider may then declare intent to comply via the designated federal framework instead. If accepted, compliance with the federal standard satisfies the state obligation — but failure to meet the federal standard constitutes a violation of the Nebraska act. The AG must revoke a designation if its prerequisites are no longer met, and entities can revoke their declarations. This is a safe harbor mechanism, not a new affirmative obligation.
Statutory Text
(8) The Attorney General may adopt and promulgate rules and regulations designating one or more federal laws, regulations, or guidance documents that meet all of the following conditions for the purposes of subsection (9) of this section: (a) The law, regulation, or guidance document imposes or states standards or requirements for safety incident reporting that are substantially equivalent to, or stricter than, those required by this section for critical safety incidents, child safety incidents, or both. A law, regulation, or guidance document may satisfy this subdivision even if it does not require safety incident reporting to the State of Nebraska; and (b) The law, regulation, or guidance document is intended to assess, detect, or mitigate catastrophic risk, child safety risk, or both. (9)(a) A frontier developer or large chatbot provider that intends to comply with all or part of this section by complying with the requirements of, or meeting the standards stated by, a federal law, regulation, or guidance document designated pursuant to subsection (8) of this section by the Attorney General shall declare its intent to do so to the Attorney General. 
(b) After a frontier developer or large chatbot provider has declared its intent pursuant to subdivision (9)(a) of this section, the following shall apply: (i) To the extent that such developer or provider meets the standards of, or complies with the requirements imposed or stated by, the designated federal law, regulation, or guidance document, such developer or provider shall be deemed in compliance with the obligations under this section pertaining to: (A) Critical safety incidents, if such designated law, regulation, or document is intended to assess, detect, or mitigate catastrophic risk; and (B) Child safety incidents, if such designated law, regulation, or document is intended to assess, detect, or mitigate child safety risk; and (ii) The failure by such developer or provider to meet the standards of, or comply with the requirements stated by, such designated law, regulation, or document, shall be considered a violation of the Transparency in Artificial Intelligence Risk Management Act. (c) Subdivision (9)(b) of this section shall not apply to a frontier developer or large chatbot provider to the extent that: (i) Such developer or provider makes a declaration of intent to the Attorney General to modify or revoke a declaration of intent under subdivision (9)(a) of this section; or (ii) The Attorney General revokes a rule or regulation pursuant to subsection (10) of this section. (10) The Attorney General shall revoke a rule or regulation adopted under or promulgated under subsection (8) of this section if the requirements of subsection (8) are no longer met.
G-03 Whistleblower & Anti-Retaliation Protections · G-03.3 · Developer · Deployer · Frontier AI System · Chatbot · Minors
Sec. 7(2)-(4)
Plain Language
Frontier developers and large chatbot providers are prohibited from retaliating against employees who report potential public safety dangers or violations of the act to the Attorney General, federal authorities, or authorized internal personnel. The anti-retaliation protection extends to employees who testify or participate in investigations. Employers cannot require employees or applicants to waive these protections as a condition of employment — any such waiver is void and unenforceable. Note that the scope of the 'employee' definition is limited to individuals employed by large frontier developers or large chatbot providers, even though subsections (2)-(4) refer more broadly to 'frontier developer or large chatbot provider.'
Statutory Text
(2) A frontier developer or large chatbot provider shall not take adverse action against or otherwise penalize an employee for disclosing information to the Attorney General, a federal authority, a person with authority over the employee, or another employee who has authority to investigate, discover, or correct the reported issue, if the employee has reasonable cause to believe that the information discloses either of the following: (a) The frontier developer's or large chatbot provider's activities pose a specific and substantial danger to the public health or safety or to the health or safety of a minor; or (b) The frontier developer or large chatbot provider has violated the Transparency in Artificial Intelligence Risk Management Act. (3) A frontier developer or large chatbot provider shall not require an employee or applicant to waive or limit any protection granted under this section as a condition of continued employment or of applying for or receiving an offer of employment. Any agreement to waive any right or protection under the act is against the public policy of this state and is void and unenforceable. (4) A frontier developer or large chatbot provider shall not retaliate, discriminate or take adverse action against an employee or applicant because the employee or applicant testifies, assists, or participates in an investigation, proceeding, or action concerning a violation of the Transparency in Artificial Intelligence Risk Management Act.
G-03 Whistleblower & Anti-Retaliation Protections · G-03.1 · G-03.2 · Developer · Frontier AI System
Sec. 7(5)(a)-(b)
Plain Language
Large frontier developers must establish an internal anonymous reporting process for employees who believe in good faith that the developer's activities pose a specific and substantial threat to public health or safety (including the safety of minors) or that the developer has violated the act. The process must provide monthly status updates to the disclosing employee on the investigation and response. Disclosures and responses must be shared with officers and directors at least quarterly — except that if a disclosure alleges wrongdoing by a specific officer or director, that person is excluded from receiving the report.
Statutory Text
(5)(a) A large frontier developer shall provide a reasonable internal process through which an employee may anonymously disclose information to the large frontier developer if the employee believes in good faith that the information indicates that the large frontier developer's activities (i) pose a specific and substantial threat to the public health or safety or to the health or safety of a minor or (ii) that the large frontier developer or large chatbot provider has violated the Transparency in Artificial Intelligence Risk Management Act. Such internal process shall include providing a monthly update to the person who made the disclosure regarding the status of the large frontier developer's investigation of the disclosure and the actions taken by the large frontier developer in response to the disclosure. Except as provided in subdivision (b) of this subsection, the disclosures and responses of the process required by this subdivision shall be shared with officers and directors of the large frontier developer at least once each quarter. (b) If an employee has alleged wrongdoing by an officer or director of the large frontier developer in a disclosure or response, subdivision (a) of this subsection shall not apply with respect to that officer or director.
G-03 Whistleblower & Anti-Retaliation Protections · G-03.3 · Developer · Deployer · Frontier AI System · Chatbot · Minors
Sec. 7(6)
Plain Language
Employees or applicants who suffer retaliation for protected whistleblower activity may bring a civil action in district court within one year of the violation or its discovery, whichever is later. Successful plaintiffs may recover temporary or permanent injunctive relief, general and special damages, and reasonable attorney's fees and costs. This is the only private right of action in the act — it is limited to whistleblower retaliation claims and does not extend to general violations of the act.
Statutory Text
(6) Upon violation of this section, an aggrieved employee or applicant may, in addition to any other available remedy, institute a civil action within one year after the date of the alleged violation or the discovery of the alleged violation, whichever is later. The employee or applicant shall file an action directly in the district court of the county where such alleged violation occurred. The district court shall file and try such case as any other civil action, and any successful complainant shall be entitled to appropriate relief, including temporary or permanent injunctive relief, general and special damages, and reasonable attorney's fees and court costs.
Other · Frontier AI System · Chatbot · Minors
Sec. 6(1)-(2)
Plain Language
The Attorney General must annually assess developments in AI and may update the definitions of frontier model, frontier developer, large frontier developer, and large chatbot provider by rulemaking. The AG must consider federal alignment, stakeholder input, predictability for covered entities, simplicity, external verifiability, and sister-state thresholds. This is a delegation of regulatory authority to the AG, not a direct compliance obligation on covered entities — but practitioners should monitor for definitional changes that could expand or contract coverage.
Statutory Text
(1) On or before January 1, 2027, and annually thereafter, the Attorney General shall assess recent evidence and developments relevant to the purposes of the Transparency in Artificial Intelligence Risk Management Act and may adopt and promulgate rules and regulations to update definitions for any of the following terms for the purposes of the act to ensure that such definitions accurately reflect technological developments, scientific literature, and widely accepted national and international standards: (a) Frontier model, so that such definition applies to foundation models at the frontier of artificial intelligence development; (b) Frontier developer, so that such definition applies to developers of frontier models who are themselves at the frontier of artificial intelligence development; (c) Large frontier developer so that such definition applies to well-resourced frontier developers; and (d) Large chatbot provider so that such definition applies to well-resourced companies developing covered chatbots that may pose child safety risks. (2) In adopting and promulgating rules and regulations pursuant to this section, the Attorney General shall take into account all of the following: (a) Similar thresholds used in international standards or federal law, guidance, or regulations for the management of catastrophic risks or child safety risks. 
The Attorney General shall align any updated definition with a definition adopted in a federal law or regulation to the extent that it is consistent with the purposes of the Transparency in Artificial Intelligence Risk Management Act; (b) Input from stakeholders, such as academic and technology industry professionals, the open-source community, and governmental entities; (c) The extent to which a person will be able to determine, before beginning to train or deploy a foundation model, whether that person will be subject to the definition as a frontier developer or as a large frontier developer with an aim toward allowing earlier determinations if possible; (d) The complexity of determining whether a person or foundation model is covered, with an aim toward allowing simpler determinations if possible; (e) The external verifiability of determining whether a person or foundation model is covered, with an aim toward definitions that are verifiable by parties other than the frontier developer; and (f) Thresholds used by other states in similar laws.
Other · Frontier AI System · Chatbot · Minors
Sec. 11 (amending § 84-712.05(30))
Plain Language
Safety incident notifications, catastrophic risk assessment summaries submitted to the AG, and whistleblower disclosures under the act are added to the list of records that may be withheld from public records requests under Nebraska's public records law. This protects the confidentiality of regulatory submissions and whistleblower reports but creates no new obligation on covered entities.
Statutory Text
(30) A notification or summary of assessment submitted under section 5 of this act or a disclosure made pursuant to section 7 of this act.