A-00768
NY · State · USA
● Pending
Proposed Effective Date
2027-01-01
New York Assembly Bill 768 — New York Artificial Intelligence Consumer Protection Act
Summary

Imposes obligations on developers and deployers of high-risk AI decision systems — systems that make or substantially contribute to consequential decisions in employment, housing, credit, insurance, healthcare, education, legal services, and government services — to protect New York consumers from algorithmic discrimination. Developers must provide deployers with documentation on training data, bias risks, and mitigation measures, and must publish public summaries of their high-risk AI systems. Deployers must implement risk management programs aligned with NIST AI RMF or equivalent frameworks, complete and retain impact assessments (at least annually and within 90 days of substantial modifications), conduct annual discrimination reviews, and provide consumers with pre-decision notice and post-adverse-decision explanations with appeal rights. Developers of general-purpose AI models face separate technical documentation and downstream disclosure obligations. Enforcement is exclusively by the Attorney General under an unfair trade practices theory; no private right of action is created. A 60-day mandatory cure period applies during the first year of enforcement.

Enforcement & Penalties
Enforcement Authority
Exclusive enforcement authority rests with the Attorney General. During the first year of enforcement (January 1, 2027 through January 1, 2028), the AG must, for curable violations, issue a notice of violation and allow a 60-day cure period before initiating an action. After January 1, 2028, the AG has discretion whether to offer a cure opportunity, considering factors such as the number of violations, the size of the entity, the likelihood of public injury, and whether the violation was caused by human or technical error. An affirmative defense is available for violations discovered through red-teaming, provided the violation is cured within 60 days and reported to the AG with evidence of harm mitigation, and the entity is otherwise in compliance with NIST AI RMF, ISO/IEC 42001, or a substantially equivalent framework. A violation constitutes an unfair trade practice under GBL § 349, enforced solely by the AG; GBL § 349(h), the private action provision, is expressly excluded, and no private right of action is created.
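As a quick operational illustration, the cure-period timeline described above reduces to a date check. This is a hypothetical sketch; the function and constant names are ours, not the bill's, and the boundary treatment of January 1, 2028 is an assumption.

```python
from datetime import date

# Illustrative constants drawn from the summary above (not statutory text).
MANDATORY_CURE_START = date(2027, 1, 1)  # proposed effective date
MANDATORY_CURE_END = date(2028, 1, 1)    # end of the first year of enforcement

def cure_notice_required(action_date: date, violation_curable: bool) -> bool:
    """Return True if the AG must offer a 60-day cure period before suing.

    During the first year of enforcement the cure period is mandatory for
    curable violations; after that it is discretionary.
    """
    in_first_year = MANDATORY_CURE_START <= action_date <= MANDATORY_CURE_END
    return in_first_year and violation_curable
```

After the first-year window, `cure_notice_required` returns False even for curable violations, reflecting that the cure opportunity becomes discretionary rather than unavailable.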
Penalties
Violations constitute unfair trade practices under GBL § 349, enforced solely by the Attorney General. The act sets no statutory minimums, per-violation penalty amounts, or damages provisions of its own; remedies are those generally available to the AG in a GBL § 349 enforcement action. GBL § 349(h), the private action provision, is expressly excluded.
Who Is Covered
"Developer" shall mean any person doing business in this state that develops, or intentionally and substantially modifies, an artificial intelligence decision system.
"Deployer" shall mean any person doing business in this state that deploys a high-risk artificial intelligence decision system.
"Person" shall mean any individual, association, corporation, limited liability company, partnership, trust or other legal entity authorized to do business in this state.
What Is Covered
"Artificial intelligence decision system" shall mean any computational process, derived from machine learning, statistical modeling, data analytics, or artificial intelligence, that issues simplified output, including any content, decision, prediction, or recommendation, that is used to substantially assist or replace discretionary decision making for making consequential decisions that impact consumers.
"High-risk artificial intelligence decision system": (a) shall mean any artificial intelligence decision system that, when deployed, makes, or is a substantial factor in making, a consequential decision; and (b) shall not include: (i) any artificial intelligence decision system that is intended to: (A) perform any narrow procedural task; or (B) detect decision-making patterns, or deviations from decision-making patterns, unless such artificial intelligence decision system is intended to replace or influence any assessment previously completed by an individual without sufficient human review; or (ii) unless the technology, when deployed, makes, or is a substantial factor in making, a consequential decision: (A) any anti-fraud technology that does not make use of facial recognition technology; (B) any artificial intelligence-enabled video game technology; (C) any anti-malware, anti-virus, calculator, cybersecurity, database, data storage, firewall, Internet domain registration, Internet-web-site loading, networking, robocall-filtering, spam-filtering, spellchecking, spreadsheet, web-caching, web-hosting, or similar technology; (D) any technology that performs tasks exclusively related to an entity's internal management affairs, including, but not limited to, ordering office supplies or processing payments; or (E) any technology that communicates with consumers in natural language for the purpose of providing consumers with information, making referrals or recommendations, and answering questions, and is subject to an accepted use policy that prohibits generating content that is discriminatory or harmful.
"General-purpose artificial intelligence model": (a) shall mean any form of artificial intelligence decision system that: (i) displays significant generality; (ii) is capable of competently performing a wide range of distinct tasks; and (iii) can be integrated into a variety of downstream applications or systems; and (b) shall not include any artificial intelligence model that is used for development, prototyping, and research activities before such artificial intelligence model is released on the market.
Compliance Obligations · 20 obligations
H-02 Non-Discrimination & Bias Assessment · H-02.1 · H-02.2 · H-02.6 · H-02.3 · Developer · Automated Decisionmaking
GBL § 1551(1)(a)-(b)
Plain Language
Developers of high-risk AI decision systems must exercise reasonable care to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination arising from intended and contracted uses. A rebuttable presumption of reasonable care applies if the developer (1) complies with the documentation requirements in § 1551 and (2) retains an AG-approved independent third party to complete bias and governance audits. The AG must publish and annually update a list of qualified independent auditors. Self-testing to identify discrimination and pool-expansion activities are carved out from the definition of algorithmic discrimination.
Statutory Text
(a) Beginning on January first, two thousand twenty-seven, each developer of a high-risk artificial intelligence decision system shall use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination arising from the intended and contracted uses of a high-risk artificial intelligence decision system. In any enforcement action brought on or after such date by the attorney general pursuant to this article, there shall be a rebuttable presumption that a developer used reasonable care as required pursuant to this subdivision if: (i) the developer complied with the provisions of this section; and (ii) an independent third party identified by the attorney general pursuant to paragraph (b) of this subdivision and retained by the developer completed bias and governance audits for the high-risk artificial intelligence decision system. (b) No later than January first, two thousand twenty-six, and at least annually thereafter, the attorney general shall: (i) identify independent third parties who, in the attorney general's opinion, are qualified to complete bias and governance audits for the purposes of subparagraph (ii) of paragraph (a) of this subdivision; and (ii) publish a list of such independent third parties available on the attorney general's website.
G-02 Public Transparency & Documentation · G-02.1 · Developer · Automated Decisionmaking
GBL § 1551(2)(a)-(d)
Plain Language
Developers of high-risk AI decision systems must provide deployers and downstream developers with comprehensive documentation covering: foreseeable and harmful uses, training data summaries, known limitations and discrimination risks, purpose and intended benefits, pre-deployment evaluation methodology, data governance measures, intended outputs, discrimination mitigation steps, human monitoring instructions, and any additional documentation needed for compliance. This is a deployer-facing documentation obligation — not a public disclosure requirement. Trade secrets and security-sensitive information are exempt under § 1551(5).
Statutory Text
Beginning on January first, two thousand twenty-seven, and except as provided in subdivision five of this section, a developer of a high-risk artificial intelligence decision system shall make available to each deployer or other developer the following information: (a) A general statement describing the reasonably foreseeable uses, and the known harmful or inappropriate uses, of such high-risk artificial intelligence decision system; (b) Documentation disclosing: (i) high-level summaries of the type of data used to train such high-risk artificial intelligence decision system; (ii) the known or reasonably foreseeable limitations of such high-risk artificial intelligence decision system, including, but not limited to, the known or reasonably foreseeable risks of algorithmic discrimination arising from the intended uses of such high-risk artificial intelligence decision system; (iii) the purpose of such high-risk artificial intelligence decision system; (iv) the intended benefits and uses of such high-risk artificial intelligence decision system; and (v) any other information necessary to enable such deployer or other developer to comply with the provisions of this article; (c) Documentation describing: (i) how such high-risk artificial intelligence decision system was evaluated for performance, and mitigation of algorithmic discrimination, before such high-risk artificial intelligence decision system was offered, sold, leased, licensed, given, or otherwise made available to such deployer or other developer; (ii) the data governance measures used to cover the training datasets and examine the suitability of data sources, possible biases, and appropriate mitigation; (iii) the intended outputs of such high-risk artificial intelligence decision system; (iv) the measures such deployer or other developer has taken to mitigate any known or reasonably foreseeable risks of algorithmic discrimination that may arise from deployment of such high-risk artificial intelligence 
decision system; and (v) how such high-risk artificial intelligence decision system should be used, not be used, and be monitored by an individual when such high-risk artificial intelligence decision system is used to make, or as a substantial factor in making, a consequential decision; and (d) Any additional documentation that is reasonably necessary to assist a deployer or other developer to: (i) understand the outputs of such high-risk artificial intelligence decision system; and (ii) monitor the performance of such high-risk artificial intelligence decision system for risks of algorithmic discrimination.
T-03 Training Data Disclosure · T-03.3 · Developer · Automated Decisionmaking
GBL § 1551(2)(c)(ii)
Plain Language
Developers must disclose to deployers the data governance measures applied to training datasets, including how data source suitability was evaluated, possible biases identified, and mitigation steps taken. This is part of the broader documentation package required under § 1551(2) and is specifically a training data governance disclosure obligation to downstream deployers.
Statutory Text
Documentation describing: (ii) the data governance measures used to cover the training datasets and examine the suitability of data sources, possible biases, and appropriate mitigation;
R-02 Regulatory Disclosure & Submissions · R-02.2 · Developer · Automated Decisionmaking
GBL § 1551(3)(a)-(b)
Plain Language
Developers distributing high-risk AI decision systems must, to the extent feasible, provide deployers and downstream developers with the documentation needed to complete impact assessments under this article, delivered through model cards, dataset cards, or similar artifacts. A developer that also serves as its own deployer is exempt from this documentation requirement unless the system is provided to an unaffiliated deployer. Trade secrets and security-sensitive information are exempt.
Statutory Text
(a) Except as provided in subdivision five of this section, any developer that, on or after January first, two thousand twenty-seven, offers, sells, leases, licenses, gives, or otherwise makes available to a deployer or other developer a high-risk artificial intelligence decision system shall, to the extent feasible, make available to such deployers and other developers the documentation and information relating to such high-risk artificial intelligence decision system necessary for a deployer, or the third party contracted by a deployer, to complete an impact assessment pursuant to this article. The developer shall make such documentation and information available through artifacts such as model cards, dataset cards, or other impact assessments. (b) A developer that also serves as a deployer for any high-risk artificial intelligence decision system shall not be required to generate the documentation and information required pursuant to this section unless such high-risk artificial intelligence decision system is provided to an unaffiliated entity acting as a deployer.
G-02 Public Transparency & Documentation · G-02.4 · Developer · Automated Decisionmaking
GBL § 1551(4)(a)-(b)
Plain Language
Developers must publish on their website or a public use case inventory a clear, readily available summary of the types of high-risk AI decision systems they currently offer and how they manage algorithmic discrimination risks associated with those systems. This statement must be kept current and updated within 90 days of any intentional and substantial modification. Trade secrets and security-sensitive information are exempt under § 1551(5).
Statutory Text
(a) Beginning on January first, two thousand twenty-seven, each developer shall publish, in a manner that is clear and readily available, on such developer's website, or a public use case inventory, a statement summarizing: (i) the types of high-risk artificial intelligence decision systems that such developer: (A) has developed or intentionally and substantially modified; and (B) currently makes available to a deployer or other developer; and (ii) how such developer manages any known or reasonably foreseeable risks of algorithmic discrimination that may arise from the development or intentional and substantial modification of the types of high-risk artificial intelligence decision systems described in subparagraph (i) of this subdivision. (b) Each developer shall update the statement described in paragraph (a) of this subdivision: (i) as necessary to ensure that such statement remains accurate; and (ii) no later than ninety days after the developer intentionally and substantially modifies any high-risk artificial intelligence decision system described in subparagraph (i) of paragraph (a) of this subdivision.
R-02 Regulatory Disclosure & Submissions · R-02.2 · Developer · Automated Decisionmaking
GBL § 1551(6)
Plain Language
The AG may require developers to produce their deployer-facing documentation and general statements as part of an investigation. Developers may designate submitted materials as trade secret, confidential, or privileged — such materials are exempt from public disclosure and production does not waive attorney-client privilege or work product protection. This is a responsive disclosure obligation — triggered by AG request, not on a defined schedule.
Statutory Text
Beginning on January first, two thousand twenty-seven, the attorney general may require that a developer disclose to the attorney general, as part of an investigation conducted by the attorney general and in a form and manner prescribed by the attorney general, the general statement or documentation described in subdivision two of this section. The attorney general may evaluate such general statement or documentation to ensure compliance with the provisions of this section. In disclosing such general statement or documentation to the attorney general pursuant to this subdivision, the developer may designate such general statement or documentation as including any information that is exempt from disclosure pursuant to subdivision five of this section or article six of the public officers law. To the extent such general statement or documentation includes such information, such general statement or documentation shall be exempt from disclosure. To the extent any information contained in such general statement or documentation is subject to the attorney-client privilege or work product protection, such disclosure shall not constitute a waiver of such privilege or protection.
H-02 Non-Discrimination & Bias Assessment · H-02.1 · H-02.2 · H-02.6 · H-02.3 · Deployer · Automated Decisionmaking
GBL § 1552(1)(a)-(b)
Plain Language
Deployers of high-risk AI decision systems must exercise reasonable care to protect consumers from algorithmic discrimination. A rebuttable presumption of reasonable care applies if the deployer (1) complies with all § 1552 requirements and (2) retains an AG-approved independent third-party auditor to complete bias and governance audits. The AG must publish and annually update a list of qualified auditors. This mirrors the developer reasonable care obligation in § 1551(1) but applies to deployers.
Statutory Text
(a) Beginning on January first, two thousand twenty-seven, each deployer of a high-risk artificial intelligence decision system shall use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination. In any enforcement action brought on or after said date by the attorney general pursuant to this article, there shall be a rebuttable presumption that a deployer of a high-risk artificial intelligence decision system used reasonable care as required pursuant to this subdivision if: (i) the deployer complied with the provisions of this section; and (ii) an independent third party identified by the attorney general pursuant to paragraph (b) of this subdivision and retained by the deployer completed bias and governance audits for the high-risk artificial intelligence decision system. (b) No later than January first, two thousand twenty-seven, and at least annually thereafter, the attorney general shall: (i) identify the independent third parties who, in the attorney general's opinion, are qualified to complete bias and governance audits for the purposes of subparagraph (ii) of paragraph (a) of this subdivision; and (ii) make a list of such independent third parties available on the attorney general's web site.
G-01 AI Governance Program & Documentation · G-01.1 · G-01.2 · Deployer · Automated Decisionmaking
GBL § 1552(2)(a)-(b)
Plain Language
Deployers must implement and maintain a risk management policy and program governing their deployment of high-risk AI decision systems, covering principles, processes, and personnel for identifying, documenting, and mitigating algorithmic discrimination risks. The program must be iterative and regularly reviewed and updated over the system lifecycle. Reasonableness is assessed against NIST AI RMF, ISO/IEC 42001, or an equivalent framework, scaled by the deployer's size and complexity, the nature of the deployed systems, and data sensitivity and volume. A single policy and program may cover multiple high-risk systems. The obligation may be shifted to the developer by contract under the § 1552(7) exemption conditions.
Statutory Text
(a) Beginning on January first, two thousand twenty-seven, and except as provided in subdivision seven of this section, each deployer of a high-risk artificial intelligence decision system shall implement and maintain a risk management policy and program to govern such deployer's deployment of the high-risk artificial intelligence decision system. The risk management policy and program shall specify and incorporate the principles, processes, and personnel that the deployer shall use to identify, document, and mitigate any known or reasonably foreseeable risks of algorithmic discrimination. The risk management policy shall be the product of an iterative process, the risk management program shall be an iterative process and both the risk management policy and program shall be planned, implemented, and regularly and systematically reviewed and updated over the lifecycle of the high-risk artificial intelligence decision system. Each risk management policy and program implemented and maintained pursuant to this subdivision shall be reasonable, considering: (i) the guidance and standards set forth in the latest version of: (A) the "Artificial Intelligence Risk Management Framework" published by the national institute of standards and technology; (B) ISO or IEC 42001 of the international organization for standardization; or (C) a nationally or internationally recognized risk management framework for artificial intelligence decision systems, other than the guidance and standards specified in clauses (A) and (B) of this subparagraph, that imposes requirements that are substantially equivalent to, and at least as stringent as, the requirements established pursuant to this section for risk management policies and programs; (ii) the size and complexity of the deployer; (iii) the nature and scope of the high-risk artificial intelligence decision systems deployed by the deployer, including, but not limited to, the intended uses of such high-risk artificial intelligence decision 
systems; and (iv) the sensitivity and volume of data processed in connection with the high-risk artificial intelligence decision systems deployed by the deployer. (b) A risk management policy and program implemented and maintained pursuant to paragraph (a) of this subdivision may cover multiple high-risk artificial intelligence decision systems deployed by the deployer.
H-02 Non-Discrimination & Bias Assessment · H-02.3 · H-02.10 · Deployer · Automated Decisionmaking
GBL § 1552(3)(a)-(e)
Plain Language
Deployers must complete impact assessments for each high-risk AI decision system before deployment, at least annually thereafter, and within 90 days of any intentional and substantial modification. Each assessment must cover: system purpose and deployment context, algorithmic discrimination risk analysis and mitigation steps, data input and output descriptions, customization data overview, performance metrics and limitations, transparency measures, and post-deployment monitoring and safeguards. Post-modification assessments must also disclose how the system was used relative to the developer's intended uses. A single assessment may cover a comparable set of systems. Assessments completed for other regulatory purposes count if reasonably similar in scope. All impact assessments and associated records must be retained for at least three years after final deployment. The obligation may be shifted to the developer by contract under § 1552(7).
Statutory Text
(a) Except as provided in paragraphs (c) and (d) of this subdivision and subdivision seven of this section: (i) a deployer that deploys a high-risk artificial intelligence decision system on or after January first, two thousand twenty-seven, or a third party contracted by the deployer, shall complete an impact assessment of the high-risk artificial intelligence decision system; and (ii) beginning on January first, two thousand twenty-seven, a deployer, or a third party contracted by the deployer, shall complete an impact assessment of a deployed high-risk artificial intelligence decision system: (A) at least annually; and (B) no later than ninety days after an intentional and substantial modification to such high-risk artificial intelligence decision system is made available. (b) (i) Each impact assessment completed pursuant to this subdivision shall include, at a minimum and to the extent reasonably known by, or available to, the deployer: (A) a statement by the deployer disclosing the purpose, intended use cases and deployment context of, and benefits afforded by, the high-risk artificial intelligence decision system; (B) an analysis of whether the deployment of the high-risk artificial intelligence decision system poses any known or reasonably foreseeable risks of algorithmic discrimination and, if so, the nature of such algorithmic discrimination and the steps that have been taken to mitigate such risks; (C) A description of: (I) the categories of data the high-risk artificial intelligence decision system processes as inputs; and (II) the outputs such high-risk artificial intelligence decision system produces; (D) if the deployer used data to customize the high-risk artificial intelligence decision system, an overview of the categories of data the deployer used to customize such high-risk artificial intelligence decision system; (E) any metrics used to evaluate the performance and known limitations of the high-risk artificial intelligence decision system; (F) a 
description of any transparency measures taken concerning the high-risk artificial intelligence decision system, including, but not limited to, any measures taken to disclose to a consumer that such high-risk artificial intelligence decision system is in use when such high-risk artificial intelligence decision system is in use; and (G) a description of the post-deployment monitoring and user safeguards provided concerning such high-risk artificial intelligence decision system, including, but not limited to, the oversight, use, and learning process established by the deployer to address issues arising from deployment of such high-risk artificial intelligence decision system. (ii) In addition to the statement, analysis, descriptions, overview, and metrics required pursuant to subparagraph (i) of this paragraph, an impact assessment completed pursuant to this subdivision following an intentional and substantial modification made to a high-risk artificial intelligence decision system on or after January first, two thousand twenty-seven, shall include a statement disclosing the extent to which the high-risk artificial intelligence decision system was used in a manner that was consistent with, or varied from, the developer's intended uses of such high-risk artificial intelligence decision system. (c) A single impact assessment may address a comparable set of high-risk artificial intelligence decision systems deployed by a deployer. (d) If a deployer, or a third party contracted by the deployer, completes an impact assessment for the purpose of complying with another applicable law or regulation, such impact assessment shall be deemed to satisfy the requirements established in this subdivision if such impact assessment is reasonably similar in scope and effect to the impact assessment that would otherwise be completed pursuant to this subdivision. 
(e) A deployer shall maintain the most recently completed impact assessment of a high-risk artificial intelligence decision system as required pursuant to this subdivision, all records concerning each such impact assessment and all prior impact assessments, if any, for a period of at least three years following the final deployment of the high-risk artificial intelligence decision system.
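The assessment cadence above (annual, plus within 90 days of an intentional and substantial modification, with three-year record retention) is simple date arithmetic. The sketch below is illustrative only; the function names and the 365-day approximation of "annually" are our assumptions, not statutory terms.

```python
from datetime import date, timedelta
from typing import Optional

ANNUAL = timedelta(days=365)          # "at least annually" (approximation)
POST_MODIFICATION = timedelta(days=90)  # deadline after a substantial modification
RETENTION = timedelta(days=3 * 365)   # minimum record retention period

def next_assessment_due(last_assessment: date,
                        last_modification: Optional[date] = None) -> date:
    """Earlier of the annual deadline and the 90-day post-modification deadline."""
    due = last_assessment + ANNUAL
    if last_modification is not None:
        due = min(due, last_modification + POST_MODIFICATION)
    return due

def retain_records_until(final_deployment: date) -> date:
    """Impact assessment records must be kept at least three years after final deployment."""
    return final_deployment + RETENTION
```

A modification mid-cycle pulls the deadline forward: a system last assessed on January 1 but substantially modified on June 1 is due again 90 days after the modification, not at the one-year mark.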
H-02 Non-Discrimination & Bias Assessment · H-02.8 · Deployer · Automated Decisionmaking
GBL § 1552(4)
Plain Language
Deployers must conduct at least annual reviews of each deployed high-risk AI decision system to verify it is not causing algorithmic discrimination. This is a separate, ongoing operational obligation distinct from the pre-deployment impact assessment — it requires affirmative verification that the live system is not producing discriminatory outcomes. Reviews may be conducted by the deployer or a contracted third party. The obligation may be shifted to the developer by contract under § 1552(7).
Statutory Text
Except as provided in subdivision seven of this section, a deployer, or a third party contracted by the deployer, shall review, no later than January first, two thousand twenty-seven, and at least annually thereafter, the deployment of each high-risk artificial intelligence decision system deployed by the deployer to ensure that such high-risk artificial intelligence decision system is not causing algorithmic discrimination.
H-01 Human Oversight of Automated Decisions · H-01.3 · Deployer · Automated Decisionmaking
GBL § 1552(5)(a)
Plain Language
Before deploying a high-risk AI decision system to make or substantially contribute to a consequential decision about a consumer, the deployer must provide the consumer with pre-decision notice including: (1) that AI is being used to make or contribute to the decision, (2) the system's purpose, (3) the nature of the consequential decision, (4) deployer contact information, (5) a plain-language description of the system, and (6) instructions for accessing the deployer's public statement under § 1552(6). The notice must be provided directly to the consumer, in plain language, in all languages the deployer ordinarily uses, and in a disability-accessible format (per § 1552(5)(c)).
Statutory Text
(a) Beginning on January first, two thousand twenty-seven, and before a deployer deploys a high-risk artificial intelligence decision system to make, or be a substantial factor in making, a consequential decision concerning a consumer, the deployer shall: (i) notify the consumer that the deployer has deployed a high-risk artificial intelligence decision system to make, or be a substantial factor in making, such consequential decision; and (ii) provide to the consumer: (A) a statement disclosing: (I) the purpose of such high-risk artificial intelligence decision system; and (II) the nature of such consequential decision; (B) contact information for such deployer; (C) a description, in plain language, of such high-risk artificial intelligence decision system; and (D) instructions on how to access the statement made available pursuant to paragraph (a) of subdivision six of this section.
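The six pre-decision notice elements enumerated above can be modeled as a simple record with a completeness check. This is a hypothetical sketch; the class and field names are our shorthand, not statutory terms.

```python
from dataclasses import dataclass, fields

@dataclass
class PreDecisionNotice:
    ai_in_use: str                    # (1) disclosure that a high-risk AI system is used
    system_purpose: str               # (2) purpose of the system
    decision_nature: str              # (3) nature of the consequential decision
    deployer_contact: str             # (4) contact information for the deployer
    plain_language_description: str   # (5) plain-language description of the system
    public_statement_access: str      # (6) how to access the § 1552(6) public statement

def is_complete(notice: PreDecisionNotice) -> bool:
    """Every element must be populated before the consequential decision is made."""
    return all(getattr(notice, f.name).strip() for f in fields(notice))
```

Delivery requirements (directly to the consumer, in plain language, in all languages the deployer ordinarily uses, in accessible formats) sit alongside the content requirements and are not captured in this sketch.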
H-01 Human Oversight of Automated Decisions · H-01.1 · H-01.2 · H-01.4 · H-01.5 · Deployer · Automated Decisionmaking
GBL § 1552(5)(b)-(c)
Plain Language
When a high-risk AI decision system makes or substantially contributes to an adverse consequential decision about a consumer, the deployer must provide the consumer with: (1) an explanation of the principal reasons for the decision, including the AI system's degree of contribution, the types of data processed, and data sources; (2) the opportunity to correct inaccurate personal data used in the decision; and (3) the opportunity to appeal the decision, which must include human review if technically feasible unless delay would endanger the consumer. All notices must be delivered directly, in plain language, in all languages the deployer ordinarily uses, and in disability-accessible formats. This creates a right to explanation, data correction, and human-reviewed appeal for adverse automated decisions.
Statutory Text
(b) Beginning on January first, two thousand twenty-seven, a deployer that has deployed a high-risk artificial intelligence decision system to make, or as a substantial factor in making, a consequential decision concerning a consumer shall, if such consequential decision is adverse to the consumer, provide to such consumer: (i) a statement disclosing the principal reason or reasons for such adverse consequential decision, including, but not limited to: (A) the degree to which, and manner in which, the high-risk artificial intelligence decision system contributed to such adverse consequential decision; (B) the type of data that was processed by such high-risk artificial intelligence decision system in making such adverse consequential decision; and (C) the source of such data; and (ii) an opportunity to: (A) correct any incorrect personal data that the high-risk artificial intelligence decision system processed in making, or as a substantial factor in making, such adverse consequential decision; and (B) appeal such adverse consequential decision, which shall, if technically feasible, allow for human review unless providing such opportunity is not in the best interest of such consumer, including, but not limited to, in instances in which any delay might pose a risk to the life or safety of such consumer. (c) The deployer shall provide the notice, statements, information, description, and instructions required pursuant to paragraphs (a) and (b) of this subdivision: (i) directly to the consumer; (ii) in plain language; (iii) in all languages in which such deployer, in the ordinary course of such deployer's business, provides contracts, disclaimers, sale announcements, and other information to consumers; and (iv) in a format that is accessible to consumers with disabilities.
G-02 Public Transparency & Documentation · G-02.4 · Deployer · Automated Decisionmaking
GBL § 1552(6)(a)-(b)
Plain Language
Deployers must publish on their website a clear, readily available statement summarizing: the types of high-risk AI decision systems they deploy, how they manage algorithmic discrimination risks for each system, and detailed information about the nature, source, and extent of data they collect and use. The statement must be periodically updated to remain current. The obligation may be shifted to the developer by contract under § 1552(7).
Statutory Text
(a) Beginning on January first, two thousand twenty-seven, and except as provided in subdivision seven of this section, each deployer shall make available, in a manner that is clear and readily available on such deployer's website, a statement summarizing: (i) the types of high-risk artificial intelligence decision systems that are currently deployed by such deployer; (ii) how such deployer manages any known or reasonably foreseeable risks of algorithmic discrimination that may arise from deployment of each high-risk artificial intelligence decision system described in subparagraph (i) of this paragraph; and (iii) in detail, the nature, source and extent of the information collected and used by such deployer. (b) Each deployer shall periodically update the statement required pursuant to paragraph (a) of this subdivision.
R-02 Regulatory Disclosure & Submissions · R-02.2 · Deployer · Automated Decisionmaking
GBL § 1552(9)
Plain Language
The AG may require deployers (or their contracted third parties) to produce risk management policies, impact assessments, and related records within 90 days of a request, as part of an AG investigation. Deployers may designate materials as trade secret or confidential, and production does not waive attorney-client privilege or work product protection. This is a responsive disclosure obligation triggered by AG request.
Statutory Text
Beginning on January first, two thousand twenty-seven, the attorney general may require that a deployer, or a third party contracted by the deployer pursuant to subdivision three of this section, as applicable, disclose to the attorney general, as part of an investigation conducted by the attorney general, no later than ninety days after a request by the attorney general, and in a form and manner prescribed by the attorney general, the risk management policy implemented pursuant to subdivision two of this section, the impact assessment completed pursuant to subdivision three of this section; or records maintained pursuant to paragraph (e) of subdivision three of this section. The attorney general may evaluate such risk management policy, impact assessment or records to ensure compliance with the provisions of this section. In disclosing such risk management policy, impact assessment or records to the attorney general pursuant to this subdivision, the deployer or third-party contractor, as applicable, may designate such risk management policy, impact assessment or records as including any information that is exempt from disclosure pursuant to subdivision eight of this section or article six of the public officers law. To the extent such risk management policy, impact assessment, or records include such information, such risk management policy, impact assessment, or records shall be exempt from disclosure. To the extent any information contained in such risk management policy, impact assessment, or record is subject to the attorney-client privilege or work product protection, such disclosure shall not constitute a waiver of such privilege or protection.
G-01 AI Governance Program & Documentation · G-01.3 · Developer · Automated Decisionmaking · General-Purpose AI
GBL § 1553(1)(a)
Plain Language
Developers of general-purpose AI models must create and maintain technical documentation covering: training and testing processes, compliance evaluation results, intended tasks, integration contexts, acceptable use policies, release date, distribution methods, and input/output modalities. Documentation must be reviewed and revised at least annually. The scope of required content scales with the model's size and risk profile. This obligation is distinct from the high-risk system documentation obligations in § 1551 and applies specifically to GPAI models. Under § 1553(2)(a), open-source models with publicly available parameters are exempt from this documentation requirement (though not from the downstream disclosure obligations of § 1553(1)(b)). Models used exclusively for internal management affairs are fully exempt under § 1553(2)(b).
Statutory Text
(a) create and maintain technical documentation for the general-purpose artificial intelligence model, which shall: (i) include: (A) the training and testing processes for such general-purpose artificial intelligence model; and (B) the results of an evaluation of such general-purpose artificial intelligence model performed to determine whether such general-purpose artificial intelligence model is in compliance with the provisions of this article; (ii) include, as appropriate, considering the size and risk profile of such general-purpose artificial intelligence model, at least: (A) the tasks such general-purpose artificial intelligence model is intended to perform; (B) the type and nature of artificial intelligence decision systems in which such general-purpose artificial intelligence model is intended to be integrated; (C) acceptable use policies for such general-purpose artificial intelligence model; (D) the date such general-purpose artificial intelligence model is released; (E) the methods by which such general-purpose artificial intelligence model is distributed; and (F) the modality and format of inputs and outputs for such general-purpose artificial intelligence model; and (iii) be reviewed and revised at least annually, or more frequently, as necessary to maintain the accuracy of such technical documentation;
G-02 Public Transparency & Documentation · G-02.1 · Developer · Automated Decisionmaking · General-Purpose AI
GBL § 1553(1)(b)
Plain Language
Developers of general-purpose AI models must create, maintain, and make available to downstream integrators documentation enabling them to understand model capabilities and limitations and comply with their own obligations under this article. At minimum, the documentation must cover technical integration requirements and the model specification information (intended tasks, integration contexts, acceptable use policies, release date, distribution methods, and I/O modalities). Documentation must be reviewed and revised at least annually. Open-source models with public parameters are exempt from the annual review requirement but not from the initial documentation obligation. Trade secrets are exempt under § 1553(3).
Statutory Text
(b) create, implement, maintain and make available to persons that intend to integrate such general-purpose artificial intelligence model into such persons' artificial intelligence decision systems documentation and information that: (i) enables such persons to: (A) understand the capabilities and limitations of such general-purpose artificial intelligence model; and (B) comply with such persons' obligations pursuant to this article; (ii) discloses, at a minimum: (A) the technical means required for such general-purpose artificial intelligence model to be integrated into such persons' artificial intelligence decision systems; (B) the information listed in subparagraph (ii) of paragraph (a) of this subdivision; and (iii) except as provided in subdivision two of this section, is reviewed and revised at least annually, or more frequently, as necessary to maintain the accuracy of such documentation and information.
G-01 AI Governance Program & Documentation · G-01.1 · Developer · Automated Decisionmaking · General-Purpose AI
GBL § 1553(2)(d)
Plain Language
Developers of general-purpose AI models that qualify for the internal-use/multi-entity exemption under § 1553(2)(a)(ii) — i.e., models not offered for market sale, not intended to interact with consumers, and used solely for internal purposes — must still establish and maintain an AI risk management framework. The framework must be iterative and ongoing, and must include at minimum: internal governance, risk context mapping, risk management, and risk measurement/tracking functions. This is a residual governance obligation for otherwise-exempt internal-use GPAI models.
Statutory Text
(d) A developer that is exempt pursuant to subparagraph (ii) of paragraph (a) of this subdivision shall establish and maintain an artificial intelligence risk management framework, which shall: (i) be the product of an iterative process and ongoing efforts; and (ii) include, at a minimum: (A) an internal governance function; (B) a map function that shall establish the context to frame risks; (C) a risk management function; and (D) a function to measure identified risks by assessing, analyzing and tracking such risks.
R-02 Regulatory Disclosure & Submissions · R-02.2 · Developer · Automated Decisionmaking · General-Purpose AI
GBL § 1553(4)
Plain Language
The AG may require developers of general-purpose AI models to produce technical documentation maintained under § 1553 within 90 days of request, as part of an investigation. Developers may designate materials as trade secret or confidential, and production does not waive attorney-client privilege or work product protection.
Statutory Text
Beginning on January first, two thousand twenty-seven, the attorney general may require that a developer disclose to the attorney general, as part of an investigation conducted by the attorney general, no later than ninety days after a request by the attorney general and in a form and manner prescribed by the attorney general, any documentation maintained pursuant to this section. The attorney general may evaluate such documentation to ensure compliance with the provisions of this section. In disclosing any documentation to the attorney general pursuant to this subdivision, the developer may designate such documentation as including any information that is exempt from disclosure pursuant to subdivision three of this section or article six of the public officers law. To the extent such documentation includes such information, such documentation shall be exempt from disclosure. To the extent any information contained in such documentation is subject to the attorney-client privilege or work product protection, such disclosure shall not constitute a waiver of such privilege or protection.
T-01 AI Identity Disclosure · T-01.1 · Deployer · Automated Decisionmaking
GBL § 1554(1)-(2)
Plain Language
Any person doing business in New York that deploys or makes available a consumer-facing AI decision system must disclose to each interacting consumer that they are interacting with an AI system. This is a broad obligation applying to all AI decision systems intended to interact with consumers — not limited to high-risk systems. No disclosure is required where a reasonable person would deem it obvious that they are interacting with an AI system. The obligation covers deployers and any other person making an AI system available to consumers.
Statutory Text
1. Beginning on January first, two thousand twenty-seven, and except as provided in subdivision two of this section, each person doing business in this state, including, but not limited to, each deployer that deploys, offers, sells, leases, licenses, gives, or otherwise makes available, as applicable, any artificial intelligence decision system that is intended to interact with consumers shall ensure that it is disclosed to each consumer who interacts with such artificial intelligence decision system that such consumer is interacting with an artificial intelligence decision system. 2. No disclosure shall be required pursuant to subdivision one of this section under circumstances in which a reasonable person would deem it obvious that such person is interacting with an artificial intelligence decision system.
T-02 AI Content Labeling & Provenance · T-02.1 · Deployer · Automated Decisionmaking
GBL § 1554(1)-(2)
Plain Language
While the primary mapping of this provision is to T-01 (AI identity disclosure), the bill also defines 'synthetic digital content' in § 1550(15), and the disclosure obligation in § 1554 applies to any AI decision system intended to interact with consumers — which would include content-generating systems. The obligation to disclose that a consumer is interacting with AI effectively serves as a labeling function for AI-generated content in interactive contexts. However, the bill does not impose standalone content provenance or watermarking requirements.
Statutory Text
1. Beginning on January first, two thousand twenty-seven, and except as provided in subdivision two of this section, each person doing business in this state, including, but not limited to, each deployer that deploys, offers, sells, leases, licenses, gives, or otherwise makes available, as applicable, any artificial intelligence decision system that is intended to interact with consumers shall ensure that it is disclosed to each consumer who interacts with such artificial intelligence decision system that such consumer is interacting with an artificial intelligence decision system. 2. No disclosure shall be required pursuant to subdivision one of this section under circumstances in which a reasonable person would deem it obvious that such person is interacting with an artificial intelligence decision system.