LB-642
NE · State · USA
● Pending
Proposed Effective Date
2026-02-01
Nebraska LB 642 — Artificial Intelligence Consumer Protection Act
Summary

Nebraska LB 642 establishes the Artificial Intelligence Consumer Protection Act, imposing obligations on developers and deployers of high-risk AI systems — defined as systems that make consequential decisions (employment, housing, lending, healthcare, insurance, education, legal services, government services, and criminal justice) without human review. Developers must provide deployers with documentation on intended uses, training data, known limitations, discrimination risks, and mitigation measures, and must maintain a public use case inventory. Deployers must implement risk management programs, complete impact assessments, notify consumers before AI-driven consequential decisions, explain adverse decisions, and offer data correction and appeal opportunities. The Attorney General has exclusive enforcement authority with a mandatory 90-day cure period before initiating action; no private right of action exists. Extensive carve-outs apply for insurers, financial institutions under prudential regulation, federal contractors, federally approved systems, and healthcare providers with human-in-the-loop.

Enforcement & Penalties
Enforcement Authority
The Attorney General has exclusive enforcement authority; enforcement is AG-initiated only. Before initiating any action, the Attorney General must issue a notice of violation describing the alleged violation and the required cure actions. If the developer, deployer, or other person fails to cure within 90 days of receipt, the Attorney General may bring an enforcement action. An affirmative defense is available to entities that discover and cure violations through feedback, adversarial testing, or internal review and are otherwise in compliance with the NIST AI RMF, ISO/IEC 42001, or another recognized or AG-designated risk management framework.
Penalties
The Act does not specify statutory damages, civil penalty amounts, or monetary remedies. The Act explicitly does not provide the basis for any private right of action. The Act preserves all existing rights, claims, remedies, presumptions, and defenses available at law or in equity. Rebuttable presumptions and affirmative defenses under the Act apply only to AG enforcement actions.
Who Is Covered
Deployer means a person doing business in this state that deploys a high-risk artificial intelligence system in this state.
Developer means a person doing business in this state that develops or intentionally and substantially modifies a high-risk artificial intelligence system in this state.
What Is Covered
(9)(a) High-risk artificial intelligence system means any artificial intelligence system that, when deployed, makes a consequential decision without human review or intervention; and (b) High-risk artificial intelligence system does not include: (i) Any artificial intelligence system if the artificial intelligence system is intended to: (A) Perform a narrow procedural task; (B) Improve the result of a previously completed human activity; (C) Perform a preparatory task to an assessment that is relevant to a consequential decision; or (D) Detect decisionmaking patterns or deviations from preexisting decisionmaking patterns and is not intended to replace or influence a previously completed human assessment without sufficient human review; and (ii) Any of the following technology: (A) Antifraud technology; (B) Antimalware; (C) Antivirus; (D) Artificial intelligence-enabled video game; (E) Calculator; (F) Cybersecurity; (G) Database; (H) Data storage; (I) Firewall; (J) Internet domain registration; (K) Internet website loading; (L) Networking; (M) Spam-filtering; (N) Robocall-filtering; (O) Spell-checking; (P) Spreadsheet; (Q) Web caching; (R) Web hosting or any similar technology; or (S) Technology that: (I) Communicates with any consumer in natural language for the purpose of providing such consumer with information, making any referral or recommendation, or answering any question; and (II) Is subject to an acceptable use policy that prohibits generating content that is unlawful or harmful;
Compliance Obligations · 15 obligations
H-02 Non-Discrimination & Bias Assessment · H-02.1 · Developer · Automated Decisionmaking
Sec. 3(1)(a)-(b)
Plain Language
Developers of high-risk AI systems must exercise reasonable care to protect consumers from known risks of algorithmic discrimination arising from the system's intended and contracted uses. Compliance with all developer obligations under the Act creates a rebuttable presumption that reasonable care was used, but this presumption applies only to AG enforcement actions. Self-testing for bias and diversity-expanding uses are carved out of the definition of algorithmic discrimination.
Statutory Text
(1)(a) On and after February 1, 2026, a developer of a high-risk artificial intelligence system shall use reasonable care to protect consumers from any known risks of algorithmic discrimination arising from the intended and contracted uses of the high-risk artificial intelligence system. (b) In any enforcement action brought on or after February 1, 2026, by the Attorney General pursuant to section 7 of this act, there is a rebuttable presumption that a developer used reasonable care as required under this section if the developer complied with this section.
G-02 Public Transparency & Documentation · G-02.1 · Developer · Automated Decisionmaking
Sec. 3(2)(a)-(d)
Plain Language
Developers must provide deployers with comprehensive documentation covering: intended and harmful uses, training data summaries, known limitations and discrimination risks, system purpose and benefits, pre-deployment performance and bias evaluations, data governance measures, intended outputs, discrimination mitigation steps, usage and monitoring guidance, and information needed for deployers to complete their own impact assessments. This documentation need not be made publicly available — it is a deployer-facing disclosure obligation. A developer that also serves as a deployer is exempt unless the system is provided to an unaffiliated deployer. Trade secrets and security-sensitive information may be withheld.
Statutory Text
(2) Except as otherwise provided in subsection (6) of this section, on and after February 1, 2026, each developer of a high-risk artificial intelligence system shall make available to the deployer or other developer of the high-risk artificial intelligence system: (a) A general statement describing the uses and known harmful or inappropriate uses of the high-risk artificial intelligence system; (b) Documentation disclosing: (i) A high-level summary of the types of data used to train the high-risk artificial intelligence system; (ii) Each known limitation of the high-risk artificial intelligence system, including each known or reasonably foreseeable risk of algorithmic discrimination arising from the intended use of the high-risk artificial intelligence system; (iii) The purpose of the high-risk artificial intelligence system; (iv) Any intended benefit and use of the high-risk artificial intelligence system; and (v) Information necessary to allow the deployer to comply with the requirements of section 4 of this act; (c) Documentation describing: (i) How the high-risk artificial intelligence system was evaluated for performance and mitigation of algorithmic discrimination before the high-risk artificial intelligence system was offered, sold, leased, licensed, given, or otherwise made available to the deployer; (ii) The data governance measures used to cover the training datasets and the measures used to examine the suitability of data sources, possible biases, and appropriate mitigation; (iii) Intended outputs of the high-risk artificial intelligence system; (iv) The measures the developer has taken to mitigate known risks of algorithmic discrimination that could arise from the deployment of the high-risk artificial intelligence system; and (v) How the high-risk artificial intelligence system should be used, not be used, and be monitored by an individual when the high-risk artificial intelligence system is used to make, or is a substantial factor in making, a consequential decision; and (d) Documentation that is reasonably necessary to assist the deployer in understanding each output and monitor the performance of the high-risk artificial intelligence system for each risk of algorithmic discrimination.
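Developers tracking this obligation may find it useful to model the Sec. 3(2) package as a completeness checklist. The sketch below is a hypothetical illustration, not a compliance artifact; field names are assumptions mapped to the cited subdivisions.

```python
# Hypothetical checklist of the deployer-facing documentation a developer
# must furnish under Sec. 3(2). Field names are illustrative assumptions.
from dataclasses import dataclass, fields

@dataclass
class DeveloperDocumentation:
    intended_and_harmful_uses: str         # Sec. 3(2)(a)
    training_data_summary: str             # Sec. 3(2)(b)(i)
    known_limitations_and_bias_risks: str  # Sec. 3(2)(b)(ii)
    system_purpose: str                    # Sec. 3(2)(b)(iii)
    intended_benefits_and_uses: str        # Sec. 3(2)(b)(iv)
    deployer_compliance_info: str          # Sec. 3(2)(b)(v)
    predeployment_evaluation: str          # Sec. 3(2)(c)(i)
    data_governance_measures: str          # Sec. 3(2)(c)(ii)
    intended_outputs: str                  # Sec. 3(2)(c)(iii)
    discrimination_mitigations: str        # Sec. 3(2)(c)(iv)
    usage_and_monitoring_guidance: str     # Sec. 3(2)(c)(v)
    output_monitoring_documentation: str   # Sec. 3(2)(d)

def missing_fields(doc: DeveloperDocumentation) -> list[str]:
    """Names of required items left blank -- a simple completeness check."""
    return [f.name for f in fields(doc) if not getattr(doc, f.name).strip()]
```

A blank record reports all twelve items as missing; filling one in removes it from the gap list. The statute does not prescribe any such schema; the grouping into (a) through (d) is the statute's own.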
G-02 Public Transparency & Documentation · G-02.4 · Developer · Automated Decisionmaking
Sec. 3(4)(a)-(b)
Plain Language
Developers must publish and maintain a public use case inventory summarizing: the types of high-risk AI systems they have developed or substantially modified that are currently available, and how they manage known algorithmic discrimination risks. This must be kept current and updated within 90 days of any intentional and substantial modification. Changes from ongoing machine learning that were pre-planned and documented in the initial impact assessment are excluded from the modification trigger.
Statutory Text
(4)(a) On and after February 1, 2026, a developer shall make a statement summarizing the following available in a manner that is clear and readily available in a public use case inventory: (i) The types of high-risk artificial intelligence systems that the developer has developed and currently makes available to a deployer or other developer; (ii) The types of high-risk artificial intelligence system that the developer has intentionally and substantially modified and currently makes available to a deployer or other developer; and (iii) How the developer manages known risks of algorithmic discrimination that could arise from the development or intentional and substantial modification of the types of high-risk artificial intelligence systems described in subdivisions (4)(a)(i) and (ii) of this section. (b) A developer shall update the statement described in subdivision (4)(a) of this section: (i) As necessary to ensure that the statement remains accurate; and (ii) No later than ninety days after the developer intentionally and substantially modifies any high-risk artificial intelligence system described in subdivision (4)(a)(ii) of this section.
R-01 Incident Reporting · R-01.3 · Developer · Automated Decisionmaking
Sec. 3(5)(a)-(b)
Plain Language
When a developer discovers — through its own testing or via a credible deployer report — that a deployed high-risk AI system has caused or is reasonably likely to have caused algorithmic discrimination, the developer must notify all known deployers and other developers without unreasonable delay. The Attorney General will prescribe the form and manner of this notification. This is a reactive disclosure obligation triggered by discovery of actual or likely discrimination, not a periodic reporting requirement.
Statutory Text
(5)(a) On and after February 1, 2026, a developer of a high-risk artificial intelligence system shall disclose to all known deployers or other developers of the high-risk artificial intelligence system, each known risk of algorithmic discrimination arising from any intended use of the high-risk artificial intelligence system without unreasonable delay after the date on which: (i) The developer discovers through the developer's ongoing testing and analysis that the developer's high-risk artificial intelligence system has been deployed and has caused or is reasonably likely to have caused algorithmic discrimination; or (ii) The developer receives from a deployer a credible report that the high-risk artificial intelligence system has been deployed and has caused algorithmic discrimination. (b) The Attorney General shall prescribe the form and manner of the disclosure described in subdivision (a) of this subsection.
R-02 Regulatory Disclosure & Submissions · R-02.2 · Developer · Automated Decisionmaking
Sec. 3(7)(a)-(d)
Plain Language
Upon written demand in connection with an ongoing investigation, the Attorney General may require a developer to produce the documentation described in Sec. 3(2) (uses, training data, limitations, discrimination risks, evaluations, data governance, etc.). The developer must produce it in the AG's prescribed form. Developers may designate materials as proprietary or trade secret, and such designated materials are exempt from public disclosure.
Statutory Text
(7)(a) On and after February 1, 2026, the Attorney General may provide a written demand to any developer to disclose to the Attorney General the statement or documentation described in subsection (2) of this section if such a statement or documentation is relevant to an investigation related to the developer conducted by the Attorney General. Such statement or documentation shall be provided to the Attorney General in a form and manner prescribed by the Attorney General. (b) The Attorney General may evaluate such statement or documentation, if it is relevant to an investigation conducted by the Attorney General regarding a violation of the Artificial Intelligence Consumer Protection Act, to ensure compliance with the Artificial Intelligence Consumer Protection Act. (c) In any disclosure pursuant to this subsection, any developer may designate the statement or documentation as including proprietary information or a trade secret. (d) To the extent any such statement or documentation includes any proprietary information or any trade secret, such statement or documentation shall be exempt from disclosure.
H-02 Non-Discrimination & Bias Assessment · H-02.1 · Deployer · Automated Decisionmaking
Sec. 4(1)(a)-(b)
Plain Language
Deployers of high-risk AI systems must exercise reasonable care to protect consumers from each known risk of algorithmic discrimination. Full compliance with all deployer obligations under Section 4 creates a rebuttable presumption of reasonable care, but only in AG enforcement actions. This is the overarching deployer duty — the specific compliance obligations that follow (risk management, impact assessments, consumer notifications) flesh out what reasonable care requires in practice.
Statutory Text
(1)(a) On and after February 1, 2026, a deployer of any high-risk artificial intelligence system shall use reasonable care to protect consumers from each known risk of algorithmic discrimination. (b) In any enforcement action brought on or after February 1, 2026, by the Attorney General pursuant to section 7 of this act, there is a rebuttable presumption that a deployer of a high-risk artificial intelligence system used reasonable care as required under this section if the deployer complied with this section.
G-01 AI Governance Program & Documentation · G-01.1 · Deployer · Automated Decisionmaking
Sec. 4(2)(a)-(b)
Plain Language
Deployers must implement a risk management policy and program governing their deployment of high-risk AI systems. Conformity with the NIST AI RMF or ISO/IEC 42001 (as of January 1, 2025) creates a presumption of compliance. A single program may cover multiple high-risk systems. Small deployers (fewer than 50 FTEs who do not use their own data to train the system) are exempt under the conditions specified in Sec. 4(6).
Statutory Text
(2)(a) Except as otherwise provided in subsection (6) of this section, on and after February 1, 2026, a deployer of a high-risk artificial intelligence system shall implement a risk management policy and program to govern the deployer's deployment of the high-risk artificial intelligence system. High-risk artificial intelligence systems that are in conformity with the guidance and standards set forth in the following as of January 1, 2025, shall be presumed to be in conformity with this section: (i) The Artificial Intelligence Risk Management Framework published by the National Institute of Standards and Technology; or (ii) The standard ISO/IEC 42001 of the International Organization for Standardization. (b) Any risk management policy and program implemented pursuant to subdivision (a) of this subsection may cover multiple high-risk artificial intelligence systems deployed by the deployer.
H-02 Non-Discrimination & Bias Assessment · H-02.3 · H-02.10 · Deployer · Automated Decisionmaking
Sec. 4(3)(a)-(f)
Plain Language
Deployers must complete an impact assessment for each high-risk AI system before deployment and within 90 days of any intentional and substantial modification. The assessment must cover: system purpose and intended uses, deployment context, benefits, analysis of algorithmic discrimination risks and mitigations, data input/output summaries, customization data, performance metrics and known limitations, transparency measures, and post-deployment monitoring and user safeguards. Post-modification assessments must also disclose how actual use compared to developer-intended use. A single assessment may cover comparable systems. Assessments completed for other regulatory compliance satisfy this requirement if reasonably similar in scope. Deployers must retain current assessments, all records, and prior assessments for at least three years after final deployment. Small deployer exemption applies under Sec. 4(6) conditions.
Statutory Text
(3)(a) Except as otherwise provided in this subsection or subsection (6) of this section: (i) An impact assessment shall be completed for each high-risk artificial intelligence system deployed on or after February 1, 2026. Such impact assessment shall be completed by the deployer or by a third party contracted by the deployer; and (ii) On and after February 1, 2026, for each deployed high-risk artificial intelligence system, a deployer or a third party contracted by the deployer shall complete an impact assessment within ninety days after any intentional and substantial modification to such high-risk artificial intelligence system is made available. (b) An impact assessment completed pursuant to this subsection shall include to the extent reasonably known by or available to the deployer: (i) A statement by the deployer disclosing: (A) The purpose of the high-risk artificial intelligence system; (B) Any intended-use case for the high-risk artificial intelligence system; (C) The deployment context of the high-risk artificial intelligence system; and (D) Any benefit afforded by the high-risk artificial intelligence system; (ii) An analysis of whether the deployment of the high-risk artificial intelligence system poses any known risk of algorithmic discrimination and, if so, the nature of the algorithmic discrimination and the steps that have been taken to mitigate any such risk; (iii) A high-level summary of the categories of data the high-risk artificial intelligence system processes as inputs and the outputs the high-risk artificial intelligence system produces; (iv) If the deployer used data to customize the high-risk artificial intelligence system, an overview of the categories of data the deployer used to customize the high-risk artificial intelligence system; (v) Any metric used to evaluate the performance and any known limitation of the high-risk artificial intelligence system; (vi) A description of any transparency measure taken concerning the high-risk artificial intelligence system, including any measure taken to disclose to a consumer when the high-risk artificial intelligence system is in use; and (vii) A description of each postdeployment monitoring and user safeguard provided concerning the high-risk artificial intelligence system, including the oversight, use, and learning process established by the deployer to address any issue that arises from the deployment of the high-risk artificial intelligence system. (c) Any impact assessment completed pursuant to this subsection following an intentional and substantial modification to a high-risk artificial intelligence system on or after February 1, 2026, shall include a statement that discloses the extent to which the high-risk artificial intelligence system was used in a manner that was consistent with or varied from any use of the high-risk artificial intelligence system intended by the developer. (d) A single impact assessment may address a comparable set of high-risk artificial intelligence systems deployed by a deployer. (e) Any impact assessment completed to comply with another applicable law or regulation by a deployer or by a third party contracted by the deployer shall satisfy this subsection if such impact assessment is reasonably similar in scope and effect to the impact assessment that would otherwise be completed pursuant to this subsection. (f) A deployer shall maintain: (i) The most recently completed impact assessment required under this subsection for each high-risk artificial intelligence system of the deployer; (ii) Each record concerning each such impact assessment; and (iii) For at least three years following the final deployment of each high-risk artificial intelligence system, each prior impact assessment, if any, and each record concerning such impact assessment.
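The Sec. 4(3)(f)(iii) retention clock runs three years from the final deployment of each system. A hedged sketch of the deadline arithmetic (dates are illustrative; the statute does not define how partial years or leap days are counted, so the leap-day handling here is an assumption):

```python
# Hypothetical retention-deadline helper for Sec. 4(3)(f)(iii): prior
# impact assessments and records must be kept at least three years after
# final deployment. Leap-day rollover to March 1 is an assumption.
from datetime import date

def retention_deadline(final_deployment: date, years: int = 3) -> date:
    """Earliest date records may be discarded under the three-year floor."""
    try:
        return final_deployment.replace(year=final_deployment.year + years)
    except ValueError:  # Feb 29 anniversary falls in a non-leap year
        return final_deployment.replace(year=final_deployment.year + years,
                                        month=3, day=1)

# A system finally deployed on the Act's effective date:
assert retention_deadline(date(2026, 2, 1)) == date(2029, 2, 1)
```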
H-01 Human Oversight of Automated Decisions · H-01.1 · H-01.3 · Deployer · Automated Decisionmaking
Sec. 4(4)(a)(i)-(iii), (c)
Plain Language
Before deploying a high-risk AI system to make or substantially factor into a consequential decision about a consumer, the deployer must: notify the consumer that an AI system is being used for the decision; disclose the system's purpose and the nature of the decision; provide deployer contact information and a plain-language system description; and provide instructions for accessing the deployer's public statement. Where applicable under Nebraska's data privacy law (§ 87-1107), the deployer must also inform the consumer of their right to opt out of profiling. All disclosures must be direct, in plain language, multilingual where the deployer ordinarily communicates in multiple languages, and accessible to consumers with disabilities. If direct delivery is infeasible, the deployer must use a method reasonably calculated to reach the consumer.
Statutory Text
(4)(a) On and after February 1, 2026, prior to deploying any high-risk artificial intelligence system to make or be a substantial factor in making any consequential decision concerning any consumer, the deployer shall: (i) Notify the consumer that the deployer has deployed a high-risk artificial intelligence system to make or be a substantial factor in making a consequential decision; (ii) Provide to the consumer: (A) A statement that discloses the purpose of the high-risk artificial intelligence system and the nature of the consequential decision; (B) The contact information for the deployer; (C) A description written in plain language that describes the high-risk artificial intelligence system; and (D) Instructions on how to access the statement described in subdivision (5)(a) of this section; and (iii) If applicable, provide information to the consumer regarding the consumer's right to opt out of the processing of personal data concerning the consumer for any purpose of profiling in furtherance of decisions that produce legal or similarly significant effects concerning the consumer under subdivision (2)(e)(iii) of section 87-1107. (c)(i) Except as provided in subdivision (c)(ii) of this subsection, a deployer shall provide the notice, statement, contact information, and description required under subdivisions (4)(a) and (b) of this section: (A) Directly to the consumer; (B) In plain language; (C) In each language in which the deployer in the ordinary course of business provides any contract, disclaimer, sale announcement, or other information to any consumer; and (D) In a format that is accessible to any consumer with any disability. 
(ii) If the deployer is unable to provide the notice, statement, contact information, and description required under subdivisions (a) and (b) of this subsection directly to the consumer, the deployer shall make the notice, statement, contact information, and description available in a manner that is reasonably calculated to ensure that the consumer receives the notice, statement, contact information, and description.
H-01 Human Oversight of Automated Decisions · H-01.1 · H-01.2 · H-01.4 · H-01.5 · Deployer · Automated Decisionmaking
Sec. 4(4)(b)(i)-(iii)
Plain Language
When a high-risk AI system makes or substantially factors into an adverse consequential decision about a consumer, the deployer must provide: a statement disclosing each principal reason for the decision (including how the AI contributed, the types of data processed, and the data sources); an opportunity to correct any incorrect personal data the system used; and an opportunity to appeal the decision, with human review if technically feasible. The appeal requirement has a narrow exception where delay would risk the consumer's life or safety. These are post-decision adverse action obligations — they supplement the pre-deployment notice in Sec. 4(4)(a).
Statutory Text
(b) On and after February 1, 2026, for each high-risk artificial intelligence system that makes or is a substantial factor in making any consequential decision that is adverse to any consumer, the deployer of such high-risk artificial intelligence system shall provide to such consumer: (i) A statement that discloses each principal reason for the consequential decision, including: (A) The degree to and manner in which the high-risk artificial intelligence system contributed to the consequential decision; (B) The type of data that was processed by the high-risk artificial intelligence system in making the consequential decision; and (C) Each source of the data described in subdivision (b)(i)(B) of this subsection; (ii) An opportunity to correct any incorrect personal data that the high-risk artificial intelligence system processed in making or processed as a substantial factor in making the consequential decision; and (iii) An opportunity to appeal any adverse consequential decision concerning the consumer arising from the deployment of the high-risk artificial intelligence system unless providing the opportunity for appeal is not in the best interest of the consumer, including instances when any delay might pose a risk to the life or safety of such consumer. Any such appeal shall allow for human review if technically feasible.
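The three Sec. 4(4)(b) components can be thought of as one adverse-action package: reasons, correction, and appeal, with appeal dropped only in the narrow life-or-safety case. The sketch below is a hypothetical illustration of that structure; field names and the function itself are assumptions, not anything the Act prescribes.

```python
# Hypothetical assembly of a post-decision adverse-action notice under
# Sec. 4(4)(b). Structure and field names are illustrative assumptions.

def adverse_decision_notice(principal_reasons, ai_contribution,
                            data_types, data_sources,
                            life_or_safety_risk=False,
                            human_review_feasible=True):
    notice = {
        "principal_reasons": principal_reasons,     # Sec. 4(4)(b)(i)
        "ai_contribution": ai_contribution,         # Sec. 4(4)(b)(i)(A)
        "data_types_processed": data_types,         # Sec. 4(4)(b)(i)(B)
        "data_sources": data_sources,               # Sec. 4(4)(b)(i)(C)
        "correction_opportunity": True,             # Sec. 4(4)(b)(ii)
        # Appeal is required unless delay would risk life or safety:
        "appeal_opportunity": not life_or_safety_risk,  # Sec. 4(4)(b)(iii)
    }
    if notice["appeal_opportunity"]:
        # Appeals must allow human review "if technically feasible".
        notice["appeal_human_review"] = human_review_feasible
    return notice
```

Note that the correction opportunity has no statutory exception, while the appeal opportunity does; the sketch encodes that asymmetry.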
G-02 Public Transparency & Documentation · G-02.4 · Deployer · Automated Decisionmaking
Sec. 4(5)(a)-(b)
Plain Language
Deployers must publish and maintain a clear, readily available public statement describing: the types of high-risk AI systems they currently deploy, how they manage known algorithmic discrimination risks, and the nature, source, and extent of information they collect and use. This statement must be updated at least annually. Small deployer exemption applies under Sec. 4(6) conditions.
Statutory Text
(5)(a) Except as provided in subsection (6) of this section, on and after February 1, 2026, a deployer shall make a statement with the following information available in a manner that is clear and readily available: (i) The types of high-risk artificial intelligence systems that are currently deployed by the deployer; (ii) How the deployer manages known risks of algorithmic discrimination that may arise from the deployment of the types of high-risk artificial intelligence systems described in subdivision (a)(ii) of this subsection; and (iii) A description of the nature, source, and extent of the information collected and used by the deployer. (b) A deployer shall update the statement described in subdivision (a) of this subsection at least once each year.
R-02 Regulatory Disclosure & Submissions · R-02.2 · Deployer · Automated Decisionmaking
Sec. 4(8)(a)-(d)
Plain Language
In connection with an ongoing investigation, the AG may require a deployer (or its contracted third party) to produce its risk management policy, impact assessments, and associated records within 90 days. Disclosures to the AG are not public records under Nebraska's public records law, and deployers may designate materials as proprietary or trade secret. This requires deployers to maintain documentation in a form that can be produced to the AG on demand.
Statutory Text
(8)(a) On and after February 1, 2026, in connection with an ongoing investigation related to the deployer, the Attorney General may require any deployer or third party contracted by a deployer to disclose any of the following to the Attorney General no later than ninety days after such request in a form and manner prescribed by the Attorney General: (i) The risk management policy implemented pursuant to subsection (2) of this section; (ii) The impact assessment completed pursuant to subsection (3) of this section; or (iii) The records maintained pursuant to subdivision (3)(f) of this section. (b) If such risk management policy, impact assessment, or record is relevant to an investigation conducted by the Attorney General regarding a violation of the Artificial Intelligence Consumer Protection Act, the Attorney General may evaluate the risk management policy, impact assessment, or records disclosed pursuant to subdivision (a) of this subsection to ensure compliance with the Artificial Intelligence Consumer Protection Act. (c) Any disclosure under this subsection shall not be a public record subject to disclosure pursuant to sections 84-712 to 84-712.09. (d) A deployer may designate any statement or documentation disclosed under this subsection as including proprietary information or a trade secret.
T-01 AI Identity Disclosure · T-01.1 · Developer · Deployer · Automated Decisionmaking
Sec. 5(1)-(2)
Plain Language
Any deployer or developer that makes available an AI system intended to interact with consumers must disclose to each interacting consumer that they are interacting with an AI system. This obligation applies broadly to all AI systems intended for consumer interaction — not just high-risk systems. Disclosure is not required where it would be obvious to a reasonable person that they are interacting with AI. Note this is the inverse of CA SB 243's trigger: here, disclosure is required by default unless it would be obviously unnecessary, rather than triggered only when a reasonable person could be misled.
Statutory Text
(1) On and after February 1, 2026, and except as otherwise provided in subsection (2) of this section, a deployer or other developer that deploys, offers, sells, leases, licenses, gives, or otherwise makes available any artificial intelligence system that is intended to interact with any consumer shall include in the disclosure to each consumer who interacts with such artificial intelligence system that the consumer is interacting with an artificial intelligence system. (2) Disclosure is not required under subsection (1) of this section under any circumstance when it would be obvious to a reasonable person that the person is interacting with an artificial intelligence system.
D-01 Automated Processing Rights & Data Controls · D-01.3 · Deployer · Automated Decisionmaking
Sec. 4(4)(a)(iii)
Plain Language
Where Nebraska's data privacy law (§ 87-1107) provides consumers a right to opt out of profiling for consequential decisions, deployers of high-risk AI systems must affirmatively inform consumers of that right before using the system to make or substantially factor into a consequential decision. This provision creates a notification obligation about an existing right under another statute — it does not independently create the opt-out right itself.
Statutory Text
(iii) If applicable, provide information to the consumer regarding the consumer's right to opt out of the processing of personal data concerning the consumer for any purpose of profiling in furtherance of decisions that produce legal or similarly significant effects concerning the consumer under subdivision (2)(e)(iii) of section 87-1107.
G-02 Public Transparency & Documentation · G-02.1 · Developer · Automated Decisionmaking
Sec. 3(3)(a)-(b)
Plain Language
When making a high-risk AI system available to a deployer, developers must provide — to the extent feasible — all documentation and information needed for the deployer to complete its own impact assessment, including any model card or developer impact assessment. Developer-deployers that use the system only internally are exempt unless the system is provided to an unaffiliated deployer. This is a deployer-enabling disclosure obligation distinct from the general documentation requirements in Sec. 3(2).
Statutory Text
(3)(a) Except as otherwise provided in subsection (6) of this section, on or after February 1, 2026, a developer that offers, sells, leases, licenses, gives, or otherwise makes any high-risk artificial intelligence system available to a deployer or other developer shall to the extent feasible make available to the deployer or other developer the documentation and information necessary for the deployer or a third party contracted by the deployer to complete an impact assessment pursuant to subsection (3) of section 4 of this act. Such documentation and information includes any model card or other impact assessment. (b) A developer that also serves as a deployer for a high-risk artificial intelligence system is not required to generate the documentation required by this section unless the high-risk artificial intelligence system is provided to an unaffiliated entity acting as a deployer.