LB-642
NE · State · USA
● Failed
Effective Date
2026-02-01
Nebraska LB 642 — Artificial Intelligence Consumer Protection Act
Summary

Nebraska LB 642 imposes obligations on developers and deployers of high-risk AI systems — defined as AI systems that make consequential decisions without human review — to protect consumers from algorithmic discrimination. Developers must provide deployers with documentation on intended uses, known risks, training data summaries, and bias mitigation measures, and must maintain a public use case inventory. Deployers must implement risk management programs, complete impact assessments, notify consumers before and after consequential automated decisions, and provide appeal and data correction rights. The bill includes extensive carve-outs for insurers, financial institutions, federal contractors, and systems approved by federal agencies. Enforcement is exclusively by the Attorney General with a mandatory 90-day cure period; no private right of action is created. The Act is modeled closely on Colorado SB 205 and takes effect February 1, 2026.

Enforcement & Penalties
Enforcement Authority
The Attorney General has exclusive authority to enforce the Act. Enforcement is agency-initiated. Prior to initiating any enforcement action, the Attorney General must issue a notice of violation describing the alleged violation and the actions required to cure it. If the developer, deployer, or other person fails to cure within 90 days of receipt, the Attorney General may bring an action. The 90-day cure period does not apply where existing rights, claims, remedies, presumptions, or defenses at law or in equity are involved — the Act's rebuttable presumptions and affirmative defenses apply only to AG enforcement actions. No private right of action is created.
Penalties
The Act does not specify statutory damages, civil penalties, or specific remedy types. The Act expressly does not provide the basis for any private right of action. It preserves all existing rights, claims, remedies, presumptions, and defenses available at law or in equity, but does not itself define monetary remedies. Enforcement remedies are left to the Attorney General's general enforcement authority.
Who Is Covered
(6) Deployer means a person doing business in this state that deploys a high-risk artificial intelligence system in this state;
(7) Developer means a person doing business in this state that develops or intentionally and substantially modifies a high-risk artificial intelligence system in this state;
What Is Covered
(9)(a) High-risk artificial intelligence system means any artificial intelligence system that, when deployed, makes a consequential decision without human review or intervention; and (b) High-risk artificial intelligence system does not include: (i) Any artificial intelligence system if the artificial intelligence system is intended to: (A) Perform a narrow procedural task; (B) Improve the result of a previously completed human activity; (C) Perform a preparatory task to an assessment that is relevant to a consequential decision; or (D) Detect decisionmaking patterns or deviations from preexisting decisionmaking patterns and is not intended to replace or influence a previously completed human assessment without sufficient human review; and (ii) Any of the following technology: (A) Antifraud technology; (B) Antimalware; (C) Antivirus; (D) Artificial intelligence-enabled video game; (E) Calculator; (F) Cybersecurity; (G) Database; (H) Data storage; (I) Firewall; (J) Internet domain registration; (K) Internet website loading; (L) Networking; (M) Spam-filtering; (N) Robocall-filtering; (O) Spell-checking; (P) Spreadsheet; (Q) Web caching; (R) Web hosting or any similar technology; or (S) Technology that: (I) Communicates with any consumer in natural language for the purpose of providing such consumer with information, making any referral or recommendation, or answering any question; and (II) Is subject to an acceptable use policy that prohibits generating content that is unlawful or harmful;
Compliance Obligations · 17 obligations
H-02 Non-Discrimination & Bias Assessment · H-02.1, H-02.3 · Developer · Automated Decisionmaking
Sec. 3(1)(a)-(b)
Plain Language
Developers of high-risk AI systems must exercise reasonable care to protect consumers from known risks of algorithmic discrimination arising from the system's intended and contracted uses. Compliance with all of Section 3's developer obligations creates a rebuttable presumption that reasonable care was used, but only in AG enforcement actions. Self-testing for bias and diversity expansion efforts are expressly carved out from the definition of algorithmic discrimination.
Statutory Text
(1)(a) On and after February 1, 2026, a developer of a high-risk artificial intelligence system shall use reasonable care to protect consumers from any known risks of algorithmic discrimination arising from the intended and contracted uses of the high-risk artificial intelligence system. (b) In any enforcement action brought on or after February 1, 2026, by the Attorney General pursuant to section 7 of this act, there is a rebuttable presumption that a developer used reasonable care as required under this section if the developer complied with this section.
G-02 Public Transparency & Documentation · G-02.1 · Developer · Automated Decisionmaking
Sec. 3(2)(a)-(d)
Plain Language
Developers must provide deployers (or downstream developers) with comprehensive documentation covering: intended and harmful uses, training data summary, known limitations and discrimination risks, system purpose, pre-deployment bias evaluation methods, data governance measures, intended outputs, discrimination mitigation steps, usage and monitoring guidance, and output-understanding documentation. This is a deployer-facing disclosure — not a public posting — and is subject to the trade secret exemption in Sec. 3(6). A developer that also serves as its own deployer is exempt unless the system is provided to an unaffiliated deployer.
Statutory Text
(2) Except as otherwise provided in subsection (6) of this section, on and after February 1, 2026, each developer of a high-risk artificial intelligence system shall make available to the deployer or other developer of the high-risk artificial intelligence system: (a) A general statement describing the uses and known harmful or inappropriate uses of the high-risk artificial intelligence system; (b) Documentation disclosing: (i) A high-level summary of the types of data used to train the high-risk artificial intelligence system; (ii) Each known limitation of the high-risk artificial intelligence system, including each known or reasonably foreseeable risk of algorithmic discrimination arising from the intended use of the high-risk artificial intelligence system; (iii) The purpose of the high-risk artificial intelligence system; (iv) Any intended benefit and use of the high-risk artificial intelligence system; and (v) Information necessary to allow the deployer to comply with the requirements of section 4 of this act; (c) Documentation describing: (i) How the high-risk artificial intelligence system was evaluated for performance and mitigation of algorithmic discrimination before the high-risk artificial intelligence system was offered, sold, leased, licensed, given, or otherwise made available to the deployer; (ii) The data governance measures used to cover the training datasets and the measures used to examine the suitability of data sources, possible biases, and appropriate mitigation; (iii) Intended outputs of the high-risk artificial intelligence system; (iv) The measures the developer has taken to mitigate known risks of algorithmic discrimination that could arise from the deployment of the high-risk artificial intelligence system; and (v) How the high-risk artificial intelligence system should be used, not be used, and be monitored by an individual when the high-risk artificial intelligence system is used to make, or is a substantial factor in making, a consequential decision; and (d) Documentation that is reasonably necessary to assist the deployer in understanding each output and monitor the performance of the high-risk artificial intelligence system for each risk of algorithmic discrimination.
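Purely as an illustrative sketch, not anything the Act prescribes, a developer's compliance team might track the Sec. 3(2) deployer-facing package as a structured record; every field and method name below is a hypothetical label that simply mirrors the statutory items.

```python
from dataclasses import dataclass, fields

@dataclass
class DeveloperDocumentationPackage:
    """Hypothetical record mirroring the Sec. 3(2)(a)-(d) items a developer
    makes available to a deployer or other developer."""
    general_use_statement: str = ""          # 3(2)(a): uses and known harmful or inappropriate uses
    training_data_summary: str = ""          # 3(2)(b)(i)
    known_limitations_and_risks: str = ""    # 3(2)(b)(ii)
    system_purpose: str = ""                 # 3(2)(b)(iii)
    intended_benefits_and_uses: str = ""     # 3(2)(b)(iv)
    deployer_compliance_info: str = ""       # 3(2)(b)(v): information needed for Sec. 4 compliance
    pre_deployment_evaluation: str = ""      # 3(2)(c)(i)
    data_governance_measures: str = ""       # 3(2)(c)(ii)
    intended_outputs: str = ""               # 3(2)(c)(iii)
    discrimination_mitigation: str = ""      # 3(2)(c)(iv)
    use_and_monitoring_guidance: str = ""    # 3(2)(c)(v)
    output_understanding_docs: str = ""      # 3(2)(d)

    def missing_items(self) -> list[str]:
        """Names of statutory items still blank before release to a deployer."""
        return [f.name for f in fields(self) if not getattr(self, f.name).strip()]
```

Calling missing_items() before a system is made available to a deployer would flag any Sec. 3(2) item still left blank.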
G-02 Public Transparency & Documentation · G-02.1 · Developer · Automated Decisionmaking
Sec. 3(3)(a)-(b)
Plain Language
When a developer makes a high-risk AI system available to a deployer, it must provide — to the extent feasible — documentation sufficient for the deployer to complete an impact assessment under Section 4(3). This includes any model card or impact assessment the developer has already completed. The self-deploying developer exemption applies: this obligation only triggers when the system is provided to an unaffiliated deployer.
Statutory Text
(3)(a) Except as otherwise provided in subsection (6) of this section, on or after February 1, 2026, a developer that offers, sells, leases, licenses, gives, or otherwise makes any high-risk artificial intelligence system available to a deployer or other developer shall to the extent feasible make available to the deployer or other developer the documentation and information necessary for the deployer or a third party contracted by the deployer to complete an impact assessment pursuant to subsection (3) of section 4 of this act. Such documentation and information includes any model card or other impact assessment. (b) A developer that also serves as a deployer for a high-risk artificial intelligence system is not required to generate the documentation required by this section unless the high-risk artificial intelligence system is provided to an unaffiliated entity acting as a deployer.
G-02 Public Transparency & Documentation · G-02.4 · Developer · Automated Decisionmaking
Sec. 3(4)(a)-(b)
Plain Language
Developers must maintain a publicly available use case inventory summarizing: the types of high-risk AI systems they currently offer, any systems they have intentionally and substantially modified, and how they manage known algorithmic discrimination risks. This inventory must be kept accurate on an ongoing basis and updated within 90 days of any intentional and substantial modification.
Statutory Text
(4)(a) On and after February 1, 2026, a developer shall make a statement summarizing the following available in a manner that is clear and readily available in a public use case inventory: (i) The types of high-risk artificial intelligence systems that the developer has developed and currently makes available to a deployer or other developer; (ii) The types of high-risk artificial intelligence system that the developer has intentionally and substantially modified and currently makes available to a deployer or other developer; and (iii) How the developer manages known risks of algorithmic discrimination that could arise from the development or intentional and substantial modification of the types of high-risk artificial intelligence systems described in subdivisions (4)(a)(i) and (ii) of this section. (b) A developer shall update the statement described in subdivision (4)(a) of this section: (i) As necessary to ensure that the statement remains accurate; and (ii) No later than ninety days after the developer intentionally and substantially modifies any high-risk artificial intelligence system described in subdivision (4)(a)(ii) of this section.
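As a hedged illustration of the timing rule only (the Act does not prescribe any tooling), the Sec. 3(4)(b)(ii) 90-day update window could be checked with simple date arithmetic; the function names are assumptions.

```python
from datetime import date, timedelta

UPDATE_WINDOW_DAYS = 90  # Sec. 3(4)(b)(ii): update no later than 90 days after modification

def inventory_update_deadline(modification_date: date) -> date:
    """Latest date by which the public use case inventory should reflect an
    intentional and substantial modification (illustrative reading only)."""
    return modification_date + timedelta(days=UPDATE_WINDOW_DAYS)

def inventory_overdue(modification_date: date, last_inventory_update: date,
                      today: date | None = None) -> bool:
    """True if the inventory has not been updated since the modification and
    the 90-day window has passed."""
    today = today or date.today()
    return (last_inventory_update < modification_date
            and today > inventory_update_deadline(modification_date))
```

For example, an intentional and substantial modification on 2026-03-01 would put the latest inventory update at 2026-05-30.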
R-01 Incident Reporting · R-01.3 · Developer · Automated Decisionmaking
Sec. 3(5)(a)-(b)
Plain Language
When a developer discovers through its own testing or receives a credible report from a deployer that its high-risk AI system has caused or is reasonably likely to have caused algorithmic discrimination, the developer must disclose the known discrimination risks to all known deployers and other developers of that system without unreasonable delay. The Attorney General prescribes the form and manner of disclosure. This functions as a discrimination-specific incident notification obligation from developer to deployers.
Statutory Text
(5)(a) On and after February 1, 2026, a developer of a high-risk artificial intelligence system shall disclose to all known deployers or other developers of the high-risk artificial intelligence system, each known risk of algorithmic discrimination arising from any intended use of the high-risk artificial intelligence system without unreasonable delay after the date on which: (i) The developer discovers through the developer's ongoing testing and analysis that the developer's high-risk artificial intelligence system has been deployed and has caused or is reasonably likely to have caused algorithmic discrimination; or (ii) The developer receives from a deployer a credible report that the high-risk artificial intelligence system has been deployed and has caused algorithmic discrimination. (b) The Attorney General shall prescribe the form and manner of the disclosure described in subdivision (a) of this subsection.
R-02 Regulatory Disclosure & Submissions · R-02.2 · Developer · Automated Decisionmaking
Sec. 3(7)(a)-(d)
Plain Language
The Attorney General may issue a written demand requiring a developer to produce the documentation described in Sec. 3(2) — including use statements, training data summaries, limitation disclosures, and bias evaluation documentation — in connection with an ongoing investigation. Developers may designate materials as proprietary or trade secret, and such materials are exempt from public disclosure. Documentation must be produced in the form and manner prescribed by the AG.
Statutory Text
(7)(a) On and after February 1, 2026, the Attorney General may provide a written demand to any developer to disclose to the Attorney General the statement or documentation described in subsection (2) of this section if such a statement or documentation is relevant to an investigation related to the developer conducted by the Attorney General. Such statement or documentation shall be provided to the Attorney General in a form and manner prescribed by the Attorney General. (b) The Attorney General may evaluate such statement or documentation, if it is relevant to an investigation conducted by the Attorney General regarding a violation of the Artificial Intelligence Consumer Protection Act, to ensure compliance with the Artificial Intelligence Consumer Protection Act. (c) In any disclosure pursuant to this subsection, any developer may designate the statement or documentation as including proprietary information or a trade secret. (d) To the extent any such statement or documentation includes any proprietary information or any trade secret, such statement or documentation shall be exempt from disclosure.
H-02 Non-Discrimination & Bias Assessment · H-02.1, H-02.3 · Deployer · Automated Decisionmaking
Sec. 4(1)(a)-(b)
Plain Language
Deployers of high-risk AI systems must exercise reasonable care to protect consumers from known algorithmic discrimination risks. Compliance with all deployer obligations in Section 4 creates a rebuttable presumption of reasonable care, applicable only in AG enforcement actions. This is the deployer counterpart to the developer's reasonable care obligation in Section 3(1).
Statutory Text
(1)(a) On and after February 1, 2026, a deployer of any high-risk artificial intelligence system shall use reasonable care to protect consumers from each known risk of algorithmic discrimination. (b) In any enforcement action brought on or after February 1, 2026, by the Attorney General pursuant to section 7 of this act, there is a rebuttable presumption that a deployer of a high-risk artificial intelligence system used reasonable care as required under this section if the deployer complied with this section.
G-01 AI Governance Program & Documentation · G-01.1 · Deployer · Automated Decisionmaking
Sec. 4(2)(a)-(b)
Plain Language
Deployers must implement a risk management policy and program governing their deployment of high-risk AI systems. Conformity with the NIST AI RMF or ISO/IEC 42001 (as of January 1, 2025) creates a presumption of compliance. A single program may cover multiple high-risk AI systems. Small deployers (those with fewer than 50 full-time equivalent employees that do not use their own data to train the system) are exempt under Section 4(6).
Statutory Text
(2)(a) Except as otherwise provided in subsection (6) of this section, on and after February 1, 2026, a deployer of a high-risk artificial intelligence system shall implement a risk management policy and program to govern the deployer's deployment of the high-risk artificial intelligence system. High-risk artificial intelligence systems that are in conformity with the guidance and standards set forth in the following as of January 1, 2025, shall be presumed to be in conformity with this section: (i) The Artificial Intelligence Risk Management Framework published by the National Institute of Standards and Technology; or (ii) The standard ISO/IEC 42001 of the International Organization for Standardization. (b) Any risk management policy and program implemented pursuant to subdivision (a) of this subsection may cover multiple high-risk artificial intelligence systems deployed by the deployer.
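As a loose sketch of the structure Sec. 4(2) describes (nothing here is required by the Act), a deployer might record which framework it relies on for the presumption of conformity and which systems the single program covers; the class and field names are assumptions.

```python
from dataclasses import dataclass, field

# Frameworks that Sec. 4(2)(a) treats as creating a presumption of conformity,
# as they existed on January 1, 2025 (labels are illustrative).
PRESUMPTION_FRAMEWORKS = {"NIST AI RMF", "ISO/IEC 42001"}

@dataclass
class RiskManagementProgram:
    """Hypothetical record for a deployer's Sec. 4(2) policy and program.
    One program may cover multiple high-risk AI systems (Sec. 4(2)(b))."""
    framework: str
    covered_systems: list[str] = field(default_factory=list)

    @property
    def presumed_conformant(self) -> bool:
        return self.framework in PRESUMPTION_FRAMEWORKS
```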
H-02 Non-Discrimination & Bias Assessment · H-02.3, H-02.10 · Deployer · Automated Decisionmaking
Sec. 4(3)(a)-(f)
Plain Language
Deployers must complete an impact assessment for each high-risk AI system deployed on or after February 1, 2026, and within 90 days of any intentional and substantial modification. The assessment must cover: system purpose and use cases, deployment context, benefits, algorithmic discrimination risk analysis and mitigation, data input/output categories, customization data, performance metrics, transparency measures, and post-deployment monitoring safeguards. Post-modification assessments must also disclose whether actual use deviated from the developer's intended use. A single assessment may cover comparable systems. Assessments completed under other substantially equivalent laws satisfy this requirement. Deployers must retain the most recent assessment and all records, plus all prior assessments for at least three years after final deployment. Small deployers meeting the Section 4(6) exemption criteria are exempt.
Statutory Text
(3)(a) Except as otherwise provided in this subsection or subsection (6) of this section: (i) An impact assessment shall be completed for each high-risk artificial intelligence system deployed on or after February 1, 2026. Such impact assessment shall be completed by the deployer or by a third party contracted by the deployer; and (ii) On and after February 1, 2026, for each deployed high-risk artificial intelligence system, a deployer or a third party contracted by the deployer shall complete an impact assessment within ninety days after any intentional and substantial modification to such high-risk artificial intelligence system is made available. (b) An impact assessment completed pursuant to this subsection shall include to the extent reasonably known by or available to the deployer: (i) A statement by the deployer disclosing: (A) The purpose of the high-risk artificial intelligence system; (B) Any intended-use case for the high-risk artificial intelligence system; (C) The deployment context of the high-risk artificial intelligence system; and (D) Any benefit afforded by the high-risk artificial intelligence system; (ii) An analysis of whether the deployment of the high-risk artificial intelligence system poses any known risk of algorithmic discrimination and, if so, the nature of the algorithmic discrimination and the steps that have been taken to mitigate any such risk; (iii) A high-level summary of the categories of data the high-risk artificial intelligence system processes as inputs and the outputs the high-risk artificial intelligence system produces; (iv) If the deployer used data to customize the high-risk artificial intelligence system, an overview of the categories of data the deployer used to customize the high-risk artificial intelligence system; (v) Any metric used to evaluate the performance and any known limitation of the high-risk artificial intelligence system; (vi) A description of any transparency measure taken concerning the high-risk artificial intelligence system, including any measure taken to disclose to a consumer when the high-risk artificial intelligence system is in use; and (vii) A description of each postdeployment monitoring and user safeguard provided concerning the high-risk artificial intelligence system, including the oversight, use, and learning process established by the deployer to address any issue that arises from the deployment of the high-risk artificial intelligence system. (c) Any impact assessment completed pursuant to this subsection following an intentional and substantial modification to a high-risk artificial intelligence system on or after February 1, 2026, shall include a statement that discloses the extent to which the high-risk artificial intelligence system was used in a manner that was consistent with or varied from any use of the high-risk artificial intelligence system intended by the developer. (d) A single impact assessment may address a comparable set of high-risk artificial intelligence systems deployed by a deployer. (e) Any impact assessment completed to comply with another applicable law or regulation by a deployer or by a third party contracted by the deployer shall satisfy this subsection if such impact assessment is reasonably similar in scope and effect to the impact assessment that would otherwise be completed pursuant to this subsection. 
(f) A deployer shall maintain: (i) The most recently completed impact assessment required under this subsection for each high-risk artificial intelligence system of the deployer; (ii) Each record concerning each such impact assessment; and (iii) For at least three years following the final deployment of each high-risk artificial intelligence system, each prior impact assessment, if any, and each record concerning such impact assessment.
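The following is an illustrative data-structure sketch of the Sec. 4(3)(b)-(c) contents, offered only as a reading aid; the field names are hypothetical labels, not statutory terms, and the Act does not mandate any particular format.

```python
from dataclasses import dataclass, field
from datetime import date

RETENTION_YEARS = 3  # Sec. 4(3)(f)(iii): prior assessments and records kept for at
                     # least three years following final deployment of the system

@dataclass
class ImpactAssessment:
    """Hypothetical record mirroring the Sec. 4(3)(b) contents."""
    system_purpose: str                      # 4(3)(b)(i)(A)
    intended_use_cases: list[str]            # 4(3)(b)(i)(B)
    deployment_context: str                  # 4(3)(b)(i)(C)
    benefits: list[str]                      # 4(3)(b)(i)(D)
    discrimination_risk_analysis: str        # 4(3)(b)(ii): known risks and mitigation steps
    input_output_data_categories: str        # 4(3)(b)(iii)
    customization_data_categories: str       # 4(3)(b)(iv), if the deployer customized the system
    performance_metrics_and_limits: str      # 4(3)(b)(v)
    transparency_measures: str               # 4(3)(b)(vi)
    post_deployment_monitoring: str          # 4(3)(b)(vii)
    completed_on: date = field(default_factory=date.today)
    # Sec. 4(3)(c): only required for assessments completed after an
    # intentional and substantial modification.
    deviation_from_intended_use: str | None = None
```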
H-01 Human Oversight of Automated Decisions · H-01.3 · Deployer · Automated Decisionmaking
Sec. 4(4)(a)(i)-(iii)
Plain Language
Before deploying a high-risk AI system to make or substantially contribute to a consequential decision about a consumer, the deployer must: (1) notify the consumer that such a system is being used; (2) provide a statement disclosing the system's purpose and the nature of the consequential decision, the deployer's contact information, a plain-language system description, and instructions to access the deployer's public statement under Section 4(5); and (3) where applicable, inform the consumer of their opt-out right under Nebraska's data privacy law (Section 87-1107). This is a pre-decision notice obligation — it must be completed before the consequential decision is made.
Statutory Text
(4)(a) On and after February 1, 2026, prior to deploying any high-risk artificial intelligence system to make or be a substantial factor in making any consequential decision concerning any consumer, the deployer shall: (i) Notify the consumer that the deployer has deployed a high-risk artificial intelligence system to make or be a substantial factor in making a consequential decision; (ii) Provide to the consumer: (A) A statement that discloses the purpose of the high-risk artificial intelligence system and the nature of the consequential decision; (B) The contact information for the deployer; (C) A description written in plain language that describes the high-risk artificial intelligence system; and (D) Instructions on how to access the statement described in subdivision (5)(a) of this section; and (iii) If applicable, provide information to the consumer regarding the consumer's right to opt out of the processing of personal data concerning the consumer for any purpose of profiling in furtherance of decisions that produce legal or similarly significant effects concerning the consumer under subdivision (2)(e)(iii) of section 87-1107.
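A minimal sketch of the Sec. 4(4)(a) pre-decision notice contents, assuming a hypothetical payload shape that the Act itself does not prescribe:

```python
from dataclasses import dataclass

@dataclass
class PreDecisionNotice:
    """Hypothetical payload covering the Sec. 4(4)(a) pre-decision items."""
    uses_high_risk_ai: bool                 # 4(4)(a)(i): notice that a high-risk system is in use
    system_purpose_and_decision: str        # 4(4)(a)(ii)(A)
    deployer_contact_info: str              # 4(4)(a)(ii)(B)
    plain_language_description: str         # 4(4)(a)(ii)(C)
    public_statement_instructions: str      # 4(4)(a)(ii)(D): how to find the Sec. 4(5) statement
    opt_out_information: str | None = None  # 4(4)(a)(iii), only if the Sec. 87-1107 opt-out applies
```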
H-01 Human Oversight of Automated Decisions · H-01.1, H-01.2, H-01.4, H-01.5 · Deployer · Automated Decisionmaking
Sec. 4(4)(b)(i)-(iii)
Plain Language
When a high-risk AI system makes or substantially contributes to an adverse consequential decision about a consumer, the deployer must provide: (1) a statement explaining the principal reasons for the decision — including the AI system's degree and manner of contribution, the types of data processed, and each data source; (2) an opportunity to correct any incorrect personal data used in the decision; and (3) an opportunity to appeal the adverse decision, with human review if technically feasible. The appeal right has a narrow exception for situations where delay would risk the consumer's life or safety.
Statutory Text
(b) On and after February 1, 2026, for each high-risk artificial intelligence system that makes or is a substantial factor in making any consequential decision that is adverse to any consumer, the deployer of such high-risk artificial intelligence system shall provide to such consumer: (i) A statement that discloses each principal reason for the consequential decision, including: (A) The degree to and manner in which the high-risk artificial intelligence system contributed to the consequential decision; (B) The type of data that was processed by the high-risk artificial intelligence system in making the consequential decision; and (C) Each source of the data described in subdivision (b)(i)(B) of this subsection; (ii) An opportunity to correct any incorrect personal data that the high-risk artificial intelligence system processed in making or processed as a substantial factor in making the consequential decision; and (iii) An opportunity to appeal any adverse consequential decision concerning the consumer arising from the deployment of the high-risk artificial intelligence system unless providing the opportunity for appeal is not in the best interest of the consumer, including instances when any delay might pose a risk to the life or safety of such consumer. Any such appeal shall allow for human review if technically feasible.
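Similarly, a hypothetical record of the Sec. 4(4)(b) post-adverse-decision items might look like the following; the boolean flags are illustrative shorthand for the correction and appeal opportunities, not statutory language.

```python
from dataclasses import dataclass

@dataclass
class AdverseDecisionDisclosure:
    """Hypothetical record for the Sec. 4(4)(b) post-adverse-decision items."""
    principal_reasons: list[str]          # 4(4)(b)(i): each principal reason for the decision
    ai_contribution_description: str      # 4(4)(b)(i)(A): degree and manner of contribution
    data_types_processed: list[str]       # 4(4)(b)(i)(B)
    data_sources: list[str]               # 4(4)(b)(i)(C)
    correction_opportunity: bool = True   # 4(4)(b)(ii)
    appeal_opportunity: bool = True       # 4(4)(b)(iii); may be omitted only when delay would
                                          # risk the consumer's life or safety
    human_review_available: bool = True   # appeal must allow human review if technically feasible
```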
H-01 Human Oversight of Automated Decisions · Deployer · Automated Decisionmaking
Sec. 4(4)(c)(i)-(ii)
Plain Language
All consumer notices required under Section 4(4)(a) and (b) — pre-decision notification and post-adverse-decision disclosures — must be delivered directly to the consumer, in plain language, in all languages the deployer normally uses for business communications, and in formats accessible to consumers with disabilities. If direct delivery is impossible, the deployer must use a method reasonably calculated to reach the consumer. This is a delivery-format requirement that qualifies the notice obligations in the preceding subsections.
Statutory Text
(c)(i) Except as provided in subdivision (c)(ii) of this subsection, a deployer shall provide the notice, statement, contact information, and description required under subdivisions (4)(a) and (b) of this section: (A) Directly to the consumer; (B) In plain language; (C) In each language in which the deployer in the ordinary course of business provides any contract, disclaimer, sale announcement, or other information to any consumer; and (D) In a format that is accessible to any consumer with any disability. (ii) If the deployer is unable to provide the notice, statement, contact information, and description required under subdivisions (a) and (b) of this subsection directly to the consumer, the deployer shall make the notice, statement, contact information, and description available in a manner that is reasonably calculated to ensure that the consumer receives the notice, statement, contact information, and description.
G-02 Public Transparency & Documentation · G-02.4 · Deployer · Automated Decisionmaking
Sec. 4(5)(a)-(b)
Plain Language
Deployers must publish and maintain a clear, readily available public statement disclosing: the types of high-risk AI systems they currently deploy, how they manage algorithmic discrimination risks, and the nature, source, and extent of information they collect and use. This statement must be updated at least annually. Small deployers meeting Section 4(6) criteria are exempt.
Statutory Text
(5)(a) Except as provided in subsection (6) of this section, on and after February 1, 2026, a deployer shall make a statement with the following information available in a manner that is clear and readily available: (i) The types of high-risk artificial intelligence systems that are currently deployed by the deployer; (ii) How the deployer manages known risks of algorithmic discrimination that may arise from the deployment of the types of high-risk artificial intelligence systems described in subdivision (a)(ii) of this subsection; and (iii) A description of the nature, source, and extent of the information collected and used by the deployer. (b) A deployer shall update the statement described in subdivision (a) of this subsection at least once each year.
R-02 Regulatory Disclosure & Submissions · R-02.2 · Deployer · Automated Decisionmaking
Sec. 4(8)(a)-(d)
Plain Language
In connection with an ongoing investigation, the Attorney General may require a deployer (or its contracted third party) to produce its risk management policy, impact assessments, and related records within 90 days. Disclosures are not public records and deployers may designate materials as proprietary or trade secret. This is a responsive disclosure obligation — triggered by AG demand, not a proactive filing requirement.
Statutory Text
(8)(a) On and after February 1, 2026, in connection with an ongoing investigation related to the deployer, the Attorney General may require any deployer or third party contracted by a deployer to disclose any of the following to the Attorney General no later than ninety days after such request in a form and manner prescribed by the Attorney General: (i) The risk management policy implemented pursuant to subsection (2) of this section; (ii) The impact assessment completed pursuant to subsection (3) of this section; or (iii) The records maintained pursuant to subdivision (3)(f) of this section. (b) If such risk management policy, impact assessment, or record is relevant to an investigation conducted by the Attorney General regarding a violation of the Artificial Intelligence Consumer Protection Act, the Attorney General may evaluate the risk management policy, impact assessment, or records disclosed pursuant to subdivision (a) of this subsection to ensure compliance with the Artificial Intelligence Consumer Protection Act. (c) Any disclosure under this subsection shall not be a public record subject to disclosure pursuant to sections 84-712 to 84-712.09. (d) A deployer may designate any statement or documentation disclosed under this subsection as including proprietary information or a trade secret.
T-01 AI Identity Disclosure · T-01.1 · Developer, Deployer · Automated Decisionmaking
Sec. 5(1)-(2)
Plain Language
Any deployer or developer that makes available an AI system intended to interact with consumers must disclose to each interacting consumer that they are interacting with an AI system. This applies to all AI systems (not just high-risk ones) that are designed to interact with consumers. Disclosure is not required where it would be obvious to a reasonable person that they are interacting with AI. Note this is broader than the high-risk AI system framework — it covers any consumer-facing AI system.
Statutory Text
(1) On and after February 1, 2026, and except as otherwise provided in subsection (2) of this section, a deployer or other developer that deploys, offers, sells, leases, licenses, gives, or otherwise makes available any artificial intelligence system that is intended to interact with any consumer shall include in the disclosure to each consumer who interacts with such artificial intelligence system that the consumer is interacting with an artificial intelligence system. (2) Disclosure is not required under subsection (1) of this section under any circumstance when it would be obvious to a reasonable person that the person is interacting with an artificial intelligence system.
D-01 Automated Processing Rights & Data Controls · D-01.3 · Deployer · Automated Decisionmaking
Sec. 4(4)(a)(iii)
Plain Language
Where applicable, deployers must inform consumers of their existing right under Nebraska's data privacy law (Section 87-1107) to opt out of the processing of personal data for profiling in furtherance of consequential decisions. This is a cross-reference disclosure obligation — it does not create a new opt-out right but requires deployers to inform consumers of their existing one at the point of AI-driven consequential decision-making.
Statutory Text
(iii) If applicable, provide information to the consumer regarding the consumer's right to opt out of the processing of personal data concerning the consumer for any purpose of profiling in furtherance of decisions that produce legal or similarly significant effects concerning the consumer under subdivision (2)(e)(iii) of section 87-1107.
Other · Automated Decisionmaking
Sec. 7(1)-(6)
Plain Language
The Attorney General has exclusive enforcement authority with a mandatory 90-day cure period before bringing suit. An affirmative defense is available to entities that discover and cure violations through user feedback, adversarial testing/red teaming, or internal review, provided they also comply with NIST AI RMF, ISO/IEC 42001, or an equivalent framework. The burden of proving the affirmative defense rests with the entity. The Act expressly does not create any private right of action and does not preempt existing legal rights or remedies. The Act's rebuttable presumptions and affirmative defenses apply only to AG enforcement actions.
Statutory Text
(1) The Attorney General has exclusive authority to enforce the Artificial Intelligence Consumer Protection Act. (2) Except as provided in subsection (5) of this section, the Attorney General shall, prior to initiating any action for a violation of the Artificial Intelligence Consumer Protection Act, issue a notice of violation to the developer, deployer, or other person describing with specificity the alleged violation and the actions that shall be taken by the recipient of the notice to cure the violation. If the developer, deployer, or other person fails to cure such violation not later than ninety days after receipt of the notice of violation, the Attorney General may bring an action under the Artificial Intelligence Consumer Protection Act. (3) In any action commenced by the Attorney General to enforce the Artificial Intelligence Consumer Protection Act, it is an affirmative defense that the developer, deployer, or other person: (a) Discovers and cures a violation of the Artificial Intelligence Consumer Protection Act as a result of: (i) Feedback that the developer, deployer, or other person encourages deployers or users to provide to the developer, deployer, or other person; (ii) Adversarial testing or red teaming; or (iii) An internal review process; and (b) Is otherwise in compliance with: (i) The Artificial Intelligence Risk Management Framework published by the National Institute of Standards and Technology and standard ISO/IEC 42001 of the International Organization for Standardization, as such framework and standard existed on January 1, 2025; (ii) Another nationally or internationally recognized risk management framework for artificial intelligence systems, if the standards are substantially equivalent to or more stringent than the requirements of the Artificial Intelligence Consumer Protection Act as determined by the Attorney General; or (iii) Any risk management framework for artificial intelligence systems designated and publicly disseminated by the Attorney General. (4) Any developer, deployer, or other person bears the burden of demonstrating to the Attorney General that the requirements of subsection (3) of this section have been satisfied. (5)(a) The Artificial Intelligence Consumer Protection Act shall not be construed to preempt or otherwise affect any right, claim, remedy, presumption, or defense available at law or in equity. (b) Any rebuttable presumption or affirmative defense under the Artificial Intelligence Consumer Protection Act applies only to an enforcement action brought by the Attorney General pursuant to this section and shall not apply to any right, claim, remedy, presumption, or defense available at law or in equity. (6) The Artificial Intelligence Consumer Protection Act does not provide the basis for and is not subject to any private right of action for any violation of the Artificial Intelligence Consumer Protection Act or any other law.
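As an illustrative reading of the Sec. 7(2) timing only (not legal advice and not part of the Act), the cure window can be expressed as simple date arithmetic; the function name is an assumption.

```python
from datetime import date, timedelta

CURE_PERIOD_DAYS = 90  # Sec. 7(2): cure within 90 days after receipt of the notice of violation

def may_bring_action(notice_received: date, cured_on: date | None, today: date) -> bool:
    """Illustrative reading of Sec. 7(2): the Attorney General may bring an
    action only if the recipient has not cured the violation within 90 days
    of receiving the notice of violation."""
    deadline = notice_received + timedelta(days=CURE_PERIOD_DAYS)
    cured_in_time = cured_on is not None and cured_on <= deadline
    return today > deadline and not cured_in_time
```

For example, a notice of violation received on 2026-03-02 and never cured would leave the Attorney General free to bring an action after 2026-05-31.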