S-01962
NY · State · USA
● Pending
Proposed Effective Date
2025-10-11
New York Senate Bill 1962 — New York Artificial Intelligence Consumer Protection Act
Summary

Imposes obligations on developers and deployers of high-risk AI decision systems used to make or substantially influence consequential decisions affecting New York consumers in areas such as employment, housing, credit, healthcare, education, insurance, and legal services. Developers must provide deployers with documentation on training data, known risks of algorithmic discrimination, intended uses, and mitigation measures, and must publish a public summary of their high-risk AI systems. Deployers must implement and maintain a risk management policy and program, complete annual impact assessments, conduct annual anti-discrimination reviews, provide pre-decision consumer notice, and offer adverse-decision explanations with appeal rights including human review. Developers of general-purpose AI models must create and maintain technical documentation and make it available to downstream integrators. Enforcement is exclusively by the Attorney General under an unfair trade practices framework, with a mandatory 60-day cure period during the first year and a red-teaming affirmative defense tied to NIST AI RMF or equivalent frameworks. No private right of action.

Enforcement & Penalties
Enforcement Authority
The Attorney General has exclusive enforcement authority. During the first year (January 1, 2027 through January 1, 2028), the AG must issue a notice of violation and provide a 60-day cure period before initiating an action, if the violation is curable. After January 1, 2028, the AG has discretion on whether to offer a cure opportunity. Violations constitute unfair trade practices under GBL § 349 but are enforced solely by the AG; the private right of action under § 349(h) is expressly excluded. An affirmative defense is available to entities that discover a violation through red-teaming, cure within 60 days, notify the AG with evidence of mitigation, and are otherwise in compliance with NIST AI RMF, ISO/IEC 42001, or an equivalent risk management framework.
Penalties
Violations are treated as unfair trade practices under GBL § 349, enforceable solely by the Attorney General. The AG may seek civil penalties, injunctive relief, and other remedies available under § 349 and the AG's general enforcement powers. The statute expressly excludes the private right of action under § 349(h). No statutory minimum damages are specified. The statute preserves all other rights, claims, remedies, presumptions, and defenses available at law or in equity.
Who Is Covered
"Developer" shall mean any person doing business in this state that develops, or intentionally and substantially modifies, an artificial intelligence decision system.
"Deployer" shall mean any person doing business in this state that deploys a high-risk artificial intelligence decision system.
"Person" shall mean any individual, association, corporation, limited liability company, partnership, trust or other legal entity authorized to do business in this state.
What Is Covered
"Artificial intelligence decision system" shall mean any computational process, derived from machine learning, statistical modeling, data analytics, or artificial intelligence, that issues simplified output, including any content, decision, prediction, or recommendation, that is used to substantially assist or replace discretionary decision making for making consequential decisions that impact consumers.
"High-risk artificial intelligence decision system": (a) shall mean any artificial intelligence decision system that, when deployed, makes, or is a substantial factor in making, a consequential decision; and (b) shall not include: (i) any artificial intelligence decision system that is intended to: (A) perform any narrow procedural task; or (B) detect decision-making patterns, or deviations from decision-making patterns, unless such artificial intelligence decision system is intended to replace or influence any assessment previously completed by an individual without sufficient human review; or (ii) unless the technology, when deployed, makes, or is a substantial factor in making, a consequential decision: (A) any anti-fraud technology that does not make use of facial recognition technology; (B) any artificial intelligence-enabled video game technology; (C) any anti-malware, anti-virus, calculator, cybersecurity, database, data storage, firewall, Internet domain registration, Internet-web-site loading, networking, robocall-filtering, spam-filtering, spellchecking, spreadsheet, web-caching, web-hosting, or similar technology; (D) any technology that performs tasks exclusively related to an entity's internal management affairs, including, but not limited to, ordering office supplies or processing payments; or (E) any technology that communicates with consumers in natural language for the purpose of providing consumers with information, making referrals or recommendations, and answering questions, and is subject to an accepted use policy that prohibits generating content that is discriminatory or harmful.
"General-purpose artificial intelligence model": (a) shall mean any form of artificial intelligence decision system that: (i) displays significant generality; (ii) is capable of competently performing a wide range of distinct tasks; and (iii) can be integrated into a variety of downstream applications or systems; and (b) shall not include any artificial intelligence model that is used for development, prototyping, and research activities before such artificial intelligence model is released on the market.
Compliance Obligations · 23 obligations
S-01 AI System Safety Program · S-01.5 · Developer · Automated Decisionmaking
GBL § 1551(1)(a)-(b)
Plain Language
Developers of high-risk AI decision systems must exercise reasonable care to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination arising from intended uses. A rebuttable presumption of reasonable care arises if the developer complies with § 1551 requirements and retains an AG-identified independent third party to conduct bias and governance audits. The AG must identify qualified independent auditors and publish a list on its website by January 1, 2026, updated annually.
Statutory Text
1. (a) Beginning on January first, two thousand twenty-seven, each developer of a high-risk artificial intelligence decision system shall use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination arising from the intended and contracted uses of a high-risk artificial intelligence decision system. In any enforcement action brought on or after such date by the attorney general pursuant to this article, there shall be a rebuttable presumption that a developer used reasonable care as required pursuant to this subdivision if: (i) the developer complied with the provisions of this section; and (ii) an independent third party identified by the attorney general pursuant to paragraph (b) of this subdivision and retained by the developer completed bias and governance audits for the high-risk artificial intelligence decision system. (b) No later than January first, two thousand twenty-six, and at least annually thereafter, the attorney general shall: (i) identify independent third parties who, in the attorney general's opinion, are qualified to complete bias and governance audits for the purposes of subparagraph (ii) of paragraph (a) of this subdivision; and (ii) publish a list of such independent third parties available on the attorney general's website.
G-02 Public Transparency & Documentation · G-02.1 · Developer · Automated Decisionmaking
GBL § 1551(2)(a)-(d)
Plain Language
Developers must provide deployers and other downstream developers with comprehensive documentation covering: foreseeable and harmful uses, training data summaries, known limitations and discrimination risks, system purpose and intended benefits, pre-deployment performance and bias evaluation methods, data governance measures, intended outputs, discrimination mitigation measures, usage and monitoring guidance, and any additional documentation necessary for downstream compliance. This documentation must be made available beginning January 1, 2027, subject to a trade secret and security risk carve-out under subdivision 5.
Statutory Text
2. Beginning on January first, two thousand twenty-seven, and except as provided in subdivision five of this section, a developer of a high-risk artificial intelligence decision system shall make available to each deployer or other developer the following information: (a) A general statement describing the reasonably foreseeable uses, and the known harmful or inappropriate uses, of such high-risk artificial intelligence decision system; (b) Documentation disclosing: (i) high-level summaries of the type of data used to train such high-risk artificial intelligence decision system; (ii) the known or reasonably foreseeable limitations of such high-risk artificial intelligence decision system, including, but not limited to, the known or reasonably foreseeable risks of algorithmic discrimination arising from the intended uses of such high-risk artificial intelligence decision system; (iii) the purpose of such high-risk artificial intelligence decision system; (iv) the intended benefits and uses of such high-risk artificial intelligence decision system; and (v) any other information necessary to enable such deployer or other developer to comply with the provisions of this article; (c) Documentation describing: (i) how such high-risk artificial intelligence decision system was evaluated for performance, and mitigation of algorithmic discrimination, before such high-risk artificial intelligence decision system was offered, sold, leased, licensed, given, or otherwise made available to such deployer or other developer; (ii) the data governance measures used to cover the training datasets and examine the suitability of data sources, possible biases, and appropriate mitigation; (iii) the intended outputs of such high-risk artificial intelligence decision system; (iv) the measures such deployer or other developer has taken to mitigate any known or reasonably foreseeable risks of algorithmic discrimination that may arise from deployment of such high-risk artificial intelligence decision system; and (v) how such high-risk artificial intelligence decision system should be used, not be used, and be monitored by an individual when such high-risk artificial intelligence decision system is used to make, or as a substantial factor in making, a consequential decision; and (d) Any additional documentation that is reasonably necessary to assist a deployer or other developer to: (i) understand the outputs of such high-risk artificial intelligence decision system; and (ii) monitor the performance of such high-risk artificial intelligence decision system for risks of algorithmic discrimination.
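For teams operationalizing the § 1551(2) documentation package, a minimal sketch of how the required disclosures might be structured follows, written in Python for illustration. The dataclass and field names are assumptions, not statutory terms; the statute prescribes content, not format.

```python
from dataclasses import dataclass, field


@dataclass
class DeveloperDocumentation:
    # § 1551(2)(a): general statement of uses
    foreseeable_uses: str
    known_harmful_or_inappropriate_uses: str
    # § 1551(2)(b): core disclosures
    training_data_summary: str            # high-level summary of training data types
    known_limitations: list[str]          # incl. algorithmic discrimination risks
    purpose: str
    intended_benefits_and_uses: str
    # § 1551(2)(c): evaluation, governance, and guidance
    pre_release_evaluation: str           # performance and bias evaluation methods
    data_governance_measures: str         # source suitability, possible biases, mitigation
    intended_outputs: str
    discrimination_mitigations: list[str]
    usage_and_monitoring_guidance: str
    # § 1551(2)(d): anything else downstream parties need to comply
    additional_documentation: list[str] = field(default_factory=list)

    def missing_items(self) -> list[str]:
        """List required free-text items that are still empty."""
        return [name for name, value in vars(self).items()
                if isinstance(value, str) and not value.strip()]
```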
T-03 Training Data Disclosure · T-03.3 · Developer · Automated Decisionmaking
GBL § 1551(2)(c)(ii)
Plain Language
Developers must disclose to deployers the data governance measures applied to training datasets, including examination of data source suitability, possible biases, and mitigation steps taken. This is a component of the broader documentation package required under § 1551(2) and specifically addresses training data governance transparency to downstream deployers.
Statutory Text
(ii) the data governance measures used to cover the training datasets and examine the suitability of data sources, possible biases, and appropriate mitigation;
G-01 AI Governance Program & Documentation · G-01.3 · Developer · Automated Decisionmaking
GBL § 1551(3)(a)-(b)
Plain Language
When a developer distributes a high-risk AI decision system to deployers, it must make available — to the extent feasible — all documentation and information needed for the deployer to complete an impact assessment, delivered through model cards, dataset cards, or similar artifacts. Developers that also act as deployers of the same system are exempt from this documentation obligation unless the system is provided to an unaffiliated deployer.
Statutory Text
3. (a) Except as provided in subdivision five of this section, any developer that, on or after January first, two thousand twenty-seven, offers, sells, leases, licenses, gives, or otherwise makes available to a deployer or other developer a high-risk artificial intelligence decision system shall, to the extent feasible, make available to such deployers and other developers the documentation and information relating to such high-risk artificial intelligence decision system necessary for a deployer, or the third party contracted by a deployer, to complete an impact assessment pursuant to this article. The developer shall make such documentation and information available through artifacts such as model cards, dataset cards, or other impact assessments. (b) A developer that also serves as a deployer for any high-risk artificial intelligence decision system shall not be required to generate the documentation and information required pursuant to this section unless such high-risk artificial intelligence decision system is provided to an unaffiliated entity acting as a deployer.
G-02 Public Transparency & Documentation · G-02.4 · Developer · Automated Decisionmaking
GBL § 1551(4)(a)-(b)
Plain Language
Developers must publish on their website or a public use case inventory a clear summary of the types of high-risk AI decision systems they currently make available and how they manage known or foreseeable risks of algorithmic discrimination. This statement must be kept current and updated within 90 days of any intentional and substantial modification to a covered system. Continuous-learning changes that were predetermined and documented in the initial impact assessment do not trigger this update obligation.
Statutory Text
4. (a) Beginning on January first, two thousand twenty-seven, each developer shall publish, in a manner that is clear and readily available, on such developer's website, or a public use case inventory, a statement summarizing: (i) the types of high-risk artificial intelligence decision systems that such developer: (A) has developed or intentionally and substantially modified; and (B) currently makes available to a deployer or other developer; and (ii) how such developer manages any known or reasonably foreseeable risks of algorithmic discrimination that may arise from the development or intentional and substantial modification of the types of high-risk artificial intelligence decision systems described in subparagraph (i) of this subdivision. (b) Each developer shall update the statement described in paragraph (a) of this subdivision: (i) as necessary to ensure that such statement remains accurate; and (ii) no later than ninety days after the developer intentionally and substantially modifies any high-risk artificial intelligence decision system described in subparagraph (i) of paragraph (a) of this subdivision.
R-02 Regulatory Disclosure & Submissions · R-02.2 · Developer · Automated Decisionmaking
GBL § 1551(6)
Plain Language
The AG may require developers to produce their deployer-facing documentation (the general statement and supporting documentation under § 1551(2)) as part of an investigation. Developers may designate trade secrets, FOIL-exempt information, and attorney-client privileged materials, which will remain exempt from public disclosure. Producing privileged materials to the AG does not waive the privilege.
Statutory Text
6. Beginning on January first, two thousand twenty-seven, the attorney general may require that a developer disclose to the attorney general, as part of an investigation conducted by the attorney general and in a form and manner prescribed by the attorney general, the general statement or documentation described in subdivision two of this section. The attorney general may evaluate such general statement or documentation to ensure compliance with the provisions of this section. In disclosing such general statement or documentation to the attorney general pursuant to this subdivision, the developer may designate such general statement or documentation as including any information that is exempt from disclosure pursuant to subdivision five of this section or article six of the public officers law. To the extent such general statement or documentation includes such information, such general statement or documentation shall be exempt from disclosure. To the extent any information contained in such general statement or documentation is subject to the attorney-client privilege or work product protection, such disclosure shall not constitute a waiver of such privilege or protection.
S-01 AI System Safety Program · S-01.5 · Deployer · Automated Decisionmaking
GBL § 1552(1)(a)-(b)
Plain Language
Deployers of high-risk AI decision systems must exercise reasonable care to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination. A rebuttable presumption of reasonable care arises if the deployer complies with § 1552's requirements and retains an AG-identified independent third party for bias and governance audits. This mirrors the parallel developer obligation in § 1551(1) but applies to deployers.
Statutory Text
1. (a) Beginning on January first, two thousand twenty-seven, each deployer of a high-risk artificial intelligence decision system shall use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination. In any enforcement action brought on or after said date by the attorney general pursuant to this article, there shall be a rebuttable presumption that a deployer of a high-risk artificial intelligence decision system used reasonable care as required pursuant to this subdivision if: (i) the deployer complied with the provisions of this section; and (ii) an independent third party identified by the attorney general pursuant to paragraph (b) of this subdivision and retained by the deployer completed bias and governance audits for the high-risk artificial intelligence decision system. (b) No later than January first, two thousand twenty-seven, and at least annually thereafter, the attorney general shall: (i) identify the independent third parties who, in the attorney general's opinion, are qualified to complete bias and governance audits for the purposes of subparagraph (ii) of paragraph (a) of this subdivision; and (ii) make a list of such independent third parties available on the attorney general's web site.
G-01 AI Governance Program & Documentation · G-01.1 · G-01.2 · Deployer · Automated Decisionmaking
GBL § 1552(2)(a)-(b)
Plain Language
Deployers must implement and maintain a risk management policy and program covering all deployed high-risk AI decision systems. The program must specify principles, processes, and personnel for identifying, documenting, and mitigating algorithmic discrimination risks. Both the policy and program must be iterative and regularly reviewed and updated over the system lifecycle. Reasonableness is evaluated against NIST AI RMF, ISO/IEC 42001, or an equivalent framework, adjusted for the deployer's size, the system's nature and scope, and data sensitivity and volume. A single risk management program may cover multiple high-risk systems. Deployers meeting the conditions in subdivision 7 (developer contract assumption, non-exclusive data, and impact assessment pass-through) are exempt.
Statutory Text
2. (a) Beginning on January first, two thousand twenty-seven, and except as provided in subdivision seven of this section, each deployer of a high-risk artificial intelligence decision system shall implement and maintain a risk management policy and program to govern such deployer's deployment of the high-risk artificial intelligence decision system. The risk management policy and program shall specify and incorporate the principles, processes, and personnel that the deployer shall use to identify, document, and mitigate any known or reasonably foreseeable risks of algorithmic discrimination. The risk management policy shall be the product of an iterative process, the risk management program shall be an iterative process and both the risk management policy and program shall be planned, implemented, and regularly and systematically reviewed and updated over the lifecycle of the high-risk artificial intelligence decision system. Each risk management policy and program implemented and maintained pursuant to this subdivision shall be reasonable, considering: (i) the guidance and standards set forth in the latest version of: (A) the "Artificial Intelligence Risk Management Framework" published by the national institute of standards and technology; (B) ISO or IEC 42001 of the international organization for standardization; or (C) a nationally or internationally recognized risk management framework for artificial intelligence decision systems, other than the guidance and standards specified in clauses (A) and (B) of this subparagraph, that imposes requirements that are substantially equivalent to, and at least as stringent as, the requirements established pursuant to this section for risk management policies and programs; (ii) the size and complexity of the deployer; (iii) the nature and scope of the high-risk artificial intelligence decision systems deployed by the deployer, including, but not limited to, the intended uses of such high-risk artificial intelligence decision systems; and (iv) the sensitivity and volume of data processed in connection with the high-risk artificial intelligence decision systems deployed by the deployer. (b) A risk management policy and program implemented and maintained pursuant to paragraph (a) of this subdivision may cover multiple high-risk artificial intelligence decision systems deployed by the deployer.
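As an illustration of the iterative identify-document-mitigate cycle § 1552(2) describes, the following Python sketch models a program that covers multiple systems and is reviewed over the lifecycle. All class, field, and method names are assumptions; the statute defers to NIST AI RMF, ISO/IEC 42001, or an equivalent framework rather than prescribing an implementation.

```python
from __future__ import annotations

from dataclasses import dataclass, field
from datetime import date


@dataclass
class DiscriminationRisk:
    description: str
    mitigation: str
    identified_on: date
    resolved: bool = False


@dataclass
class RiskManagementProgram:
    framework: str                 # e.g. "NIST AI RMF" or "ISO/IEC 42001"
    covered_systems: list[str]     # § 1552(2)(b): one program may cover several systems
    risks: list[DiscriminationRisk] = field(default_factory=list)
    last_reviewed: date | None = None

    def review(self, today: date) -> list[DiscriminationRisk]:
        """One iteration of the cycle: record the review date and surface
        unresolved risks for documentation and mitigation."""
        self.last_reviewed = today
        return [r for r in self.risks if not r.resolved]
```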
H-02 Non-Discrimination & Bias Assessment · H-02.3 · H-02.8 · H-02.10 · Deployer · Automated Decisionmaking
GBL § 1552(3)(a)-(e)
Plain Language
Deployers must complete an impact assessment for each high-risk AI decision system before deployment and at least annually thereafter, plus within 90 days of any intentional and substantial modification. The assessment must cover the system's purpose and deployment context, discrimination risk analysis and mitigation steps, data input categories and outputs, customization data, performance metrics and limitations, transparency measures, and post-deployment monitoring safeguards. Assessments following a substantial modification must also address whether the system was used consistently with the developer's intended uses. A single assessment may cover comparable systems. Cross-compliance credit is available if another law requires a reasonably similar assessment. All impact assessments and related records must be retained for at least three years following final deployment. Deployers meeting the subdivision 7 conditions are exempt.
Statutory Text
3. (a) Except as provided in paragraphs (c) and (d) of this subdivision and subdivision seven of this section: (i) a deployer that deploys a high-risk artificial intelligence decision system on or after January first, two thousand twenty-seven, or a third party contracted by the deployer, shall complete an impact assessment of the high-risk artificial intelligence decision system; and (ii) beginning on January first, two thousand twenty-seven, a deployer, or a third party contracted by the deployer, shall complete an impact assessment of a deployed high-risk artificial intelligence decision system: (A) at least annually; and (B) no later than ninety days after an intentional and substantial modification to such high-risk artificial intelligence decision system is made available. (b) (i) Each impact assessment completed pursuant to this subdivision shall include, at a minimum and to the extent reasonably known by, or available to, the deployer: (A) a statement by the deployer disclosing the purpose, intended use cases and deployment context of, and benefits afforded by, the high-risk artificial intelligence decision system; (B) an analysis of whether the deployment of the high-risk artificial intelligence decision system poses any known or reasonably foreseeable risks of algorithmic discrimination and, if so, the nature of such algorithmic discrimination and the steps that have been taken to mitigate such risks; (C) A description of: (I) the categories of data the high-risk artificial intelligence decision system processes as inputs; and (II) the outputs such high-risk artificial intelligence decision system produces; (D) if the deployer used data to customize the high-risk artificial intelligence decision system, an overview of the categories of data the deployer used to customize such high-risk artificial intelligence decision system; (E) any metrics used to evaluate the performance and known limitations of the high-risk artificial intelligence decision system; (F) a description of any transparency measures taken concerning the high-risk artificial intelligence decision system, including, but not limited to, any measures taken to disclose to a consumer that such high-risk artificial intelligence decision system is in use when such high-risk artificial intelligence decision system is in use; and (G) a description of the post-deployment monitoring and user safeguards provided concerning such high-risk artificial intelligence decision system, including, but not limited to, the oversight, use, and learning process established by the deployer to address issues arising from deployment of such high-risk artificial intelligence decision system. (ii) In addition to the statement, analysis, descriptions, overview, and metrics required pursuant to subparagraph (i) of this paragraph, an impact assessment completed pursuant to this subdivision following an intentional and substantial modification made to a high-risk artificial intelligence decision system on or after January first, two thousand twenty-seven, shall include a statement disclosing the extent to which the high-risk artificial intelligence decision system was used in a manner that was consistent with, or varied from, the developer's intended uses of such high-risk artificial intelligence decision system. (c) A single impact assessment may address a comparable set of high-risk artificial intelligence decision systems deployed by a deployer. 
(d) If a deployer, or a third party contracted by the deployer, completes an impact assessment for the purpose of complying with another applicable law or regulation, such impact assessment shall be deemed to satisfy the requirements established in this subdivision if such impact assessment is reasonably similar in scope and effect to the impact assessment that would otherwise be completed pursuant to this subdivision. (e) A deployer shall maintain the most recently completed impact assessment of a high-risk artificial intelligence decision system as required pursuant to this subdivision, all records concerning each such impact assessment and all prior impact assessments, if any, for a period of at least three years following the final deployment of the high-risk artificial intelligence decision system.
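The elements enumerated in § 1552(3)(b)(i)(A) through (G) lend themselves to a structured record. Below is a hedged Python sketch of such a record, with a renewal check reflecting the at-least-annual cadence; the field names are illustrative assumptions, not statutory language.

```python
from dataclasses import dataclass
from datetime import date


@dataclass
class ImpactAssessment:
    completed_on: date
    purpose_and_context: str               # (A) purpose, use cases, deployment context, benefits
    discrimination_analysis: str           # (B) foreseeable risks and mitigation steps
    input_data_categories: list[str]       # (C)(I) categories of input data
    outputs: list[str]                     # (C)(II) outputs produced
    customization_data: list[str]          # (D) data used to customize, if any
    performance_metrics: dict[str, float]  # (E) metrics and known limitations
    transparency_measures: str             # (F) e.g. in-use disclosure to consumers
    post_deployment_monitoring: str        # (G) oversight and user safeguards

    def is_due_for_renewal(self, today: date) -> bool:
        """§ 1552(3)(a)(ii)(A): assessments must recur at least annually."""
        return (today - self.completed_on).days >= 365
```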
H-02 Non-Discrimination & Bias Assessment · H-02.8 · Deployer · Automated Decisionmaking
GBL § 1552(4)
Plain Language
Deployers must conduct an annual review — separate from the impact assessment — of each deployed high-risk AI decision system to affirmatively verify it is not causing algorithmic discrimination. This ongoing review obligation applies in addition to the impact assessment requirement and may be performed by a contracted third party. Deployers meeting the subdivision 7 conditions are exempt.
Statutory Text
4. Except as provided in subdivision seven of this section, a deployer, or a third party contracted by the deployer, shall review, no later than January first, two thousand twenty-seven, and at least annually thereafter, the deployment of each high-risk artificial intelligence decision system deployed by the deployer to ensure that such high-risk artificial intelligence decision system is not causing algorithmic discrimination.
H-01 Human Oversight of Automated Decisions · H-01.3 · Deployer · Automated Decisionmaking
GBL § 1552(5)(a)
Plain Language
Before using a high-risk AI decision system to make or substantially contribute to a consequential decision about a consumer, the deployer must provide pre-decision notice including: notification that an AI system will be used, the system's purpose, the nature of the consequential decision, deployer contact information, a plain-language system description, and instructions for accessing the deployer's public summary statement under § 1552(6). This notice must be provided directly to the consumer, in plain language, in all languages the deployer ordinarily uses for consumer communications, and in an accessible format for consumers with disabilities.
Statutory Text
5. (a) Beginning on January first, two thousand twenty-seven, and before a deployer deploys a high-risk artificial intelligence decision system to make, or be a substantial factor in making, a consequential decision concerning a consumer, the deployer shall: (i) notify the consumer that the deployer has deployed a high-risk artificial intelligence decision system to make, or be a substantial factor in making, such consequential decision; and (ii) provide to the consumer: (A) a statement disclosing: (I) the purpose of such high-risk artificial intelligence decision system; and (II) the nature of such consequential decision; (B) contact information for such deployer; (C) a description, in plain language, of such high-risk artificial intelligence decision system; and (D) instructions on how to access the statement made available pursuant to paragraph (a) of subdivision six of this section.
H-01 Human Oversight of Automated Decisions · H-01.1 · H-01.2 · H-01.4 · H-01.5 · Deployer · Automated Decisionmaking
GBL § 1552(5)(b)-(c)
Plain Language
When a high-risk AI decision system makes or substantially contributes to an adverse consequential decision about a consumer, the deployer must provide: (1) a statement explaining the principal reasons for the adverse decision, including the degree and manner of AI contribution, the type of data processed, and the data source; (2) an opportunity to correct incorrect personal data the system used; and (3) an opportunity to appeal, which must include human review if technically feasible, unless delay would endanger the consumer. All notices must be delivered directly in plain language, in all languages the deployer uses in ordinary business, and in a disability-accessible format.
Statutory Text
(b) Beginning on January first, two thousand twenty-seven, a deployer that has deployed a high-risk artificial intelligence decision system to make, or as a substantial factor in making, a consequential decision concerning a consumer shall, if such consequential decision is adverse to the consumer, provide to such consumer: (i) a statement disclosing the principal reason or reasons for such adverse consequential decision, including, but not limited to: (A) the degree to which, and manner in which, the high-risk artificial intelligence decision system contributed to such adverse consequential decision; (B) the type of data that was processed by such high-risk artificial intelligence decision system in making such adverse consequential decision; and (C) the source of such data; and (ii) an opportunity to: (A) correct any incorrect personal data that the high-risk artificial intelligence decision system processed in making, or as a substantial factor in making, such adverse consequential decision; and (B) appeal such adverse consequential decision, which shall, if technically feasible, allow for human review unless providing such opportunity is not in the best interest of such consumer, including, but not limited to, in instances in which any delay might pose a risk to the life or safety of such consumer. (c) The deployer shall provide the notice, statements, information, description, and instructions required pursuant to paragraphs (a) and (b) of this subdivision: (i) directly to the consumer; (ii) in plain language; (iii) in all languages in which such deployer, in the ordinary course of such deployer's business, provides contracts, disclaimers, sale announcements, and other information to consumers; and (iv) in a format that is accessible to consumers with disabilities.
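A hedged sketch of how a deployer might assemble the § 1552(5)(b) adverse-decision disclosures follows. The names are illustrative assumptions; § 1552(5)(c)'s multilingual and accessibility delivery requirements are only noted in a comment, not implemented.

```python
from dataclasses import dataclass


@dataclass
class AdverseDecisionNotice:
    principal_reasons: list[str]     # (i) reasons for the adverse decision
    ai_contribution: str             # (i)(A) degree and manner of AI contribution
    data_types_processed: list[str]  # (i)(B)
    data_sources: list[str]          # (i)(C)
    correction_instructions: str     # (ii)(A) correcting incorrect personal data
    appeal_instructions: str         # (ii)(B) appeal path
    human_review_available: bool     # human review, if technically feasible

    def render_plain_language(self) -> str:
        """Assemble a plain-language statement; delivery in every business
        language and in accessible formats (§ 1552(5)(c)) is not modeled."""
        review = ("A human review is available on appeal."
                  if self.human_review_available
                  else "Human review of the appeal was not technically feasible.")
        return (f"Principal reasons: {'; '.join(self.principal_reasons)}. "
                f"AI contribution: {self.ai_contribution}. "
                f"To correct your data: {self.correction_instructions}. "
                f"To appeal: {self.appeal_instructions}. {review}")
```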
G-02 Public Transparency & Documentation · G-02.4 · Deployer · Automated Decisionmaking
GBL § 1552(6)(a)-(b)
Plain Language
Deployers must publish and maintain on their website a clear, readily available statement summarizing: the types of high-risk AI decision systems they currently deploy, how they manage known or foreseeable algorithmic discrimination risks for each system, and the nature, source, and extent of information they collect and use. The statement must be periodically updated. Deployers meeting the subdivision 7 conditions are exempt.
Statutory Text
6. (a) Beginning on January first, two thousand twenty-seven, and except as provided in subdivision seven of this section, each deployer shall make available, in a manner that is clear and readily available on such deployer's website, a statement summarizing: (i) the types of high-risk artificial intelligence decision systems that are currently deployed by such deployer; (ii) how such deployer manages any known or reasonably foreseeable risks of algorithmic discrimination that may arise from deployment of each high-risk artificial intelligence decision system described in subparagraph (i) of this paragraph; and (iii) in detail, the nature, source and extent of the information collected and used by such deployer. (b) Each deployer shall periodically update the statement required pursuant to paragraph (a) of this subdivision.
R-02 Regulatory Disclosure & Submissions · R-02.2 · Deployer · Automated Decisionmaking
GBL § 1552(9)
Plain Language
The AG may require deployers (or their contracted third parties) to produce their risk management policies, impact assessments, and associated records within 90 days of a request, as part of an investigation. Deployers may designate trade secrets, FOIL-exempt information, and attorney-client privileged materials, which remain exempt from public disclosure. Producing privileged materials does not waive the privilege.
Statutory Text
9. Beginning on January first, two thousand twenty-seven, the attorney general may require that a deployer, or a third party contracted by the deployer pursuant to subdivision three of this section, as applicable, disclose to the attorney general, as part of an investigation conducted by the attorney general, no later than ninety days after a request by the attorney general, and in a form and manner prescribed by the attorney general, the risk management policy implemented pursuant to subdivision two of this section, the impact assessment completed pursuant to subdivision three of this section; or records maintained pursuant to paragraph (e) of subdivision three of this section. The attorney general may evaluate such risk management policy, impact assessment or records to ensure compliance with the provisions of this section. In disclosing such risk management policy, impact assessment or records to the attorney general pursuant to this subdivision, the deployer or third-party contractor, as applicable, may designate such risk management policy, impact assessment or records as including any information that is exempt from disclosure pursuant to subdivision eight of this section or article six of the public officers law. To the extent such risk management policy, impact assessment, or records include such information, such risk management policy, impact assessment, or records shall be exempt from disclosure. To the extent any information contained in such risk management policy, impact assessment, or record is subject to the attorney-client privilege or work product protection, such disclosure shall not constitute a waiver of such privilege or protection.
G-01 AI Governance Program & Documentation · G-01.3 · Developer · General-Purpose AI · Automated Decisionmaking
GBL § 1553(1)(a)
Plain Language
Developers of general-purpose AI models must create and maintain technical documentation covering: training and testing processes, compliance evaluation results, intended tasks, types of downstream AI systems the model is intended for, acceptable use policies, release date, distribution methods, and input/output modalities and formats. The documentation must be reviewed and revised at least annually. The scope is calibrated to the model's size and risk profile. Exemptions apply for open-source models (subdivision 2(a)) and models used solely for internal purposes (subdivision 2(b)).
Statutory Text
1. Beginning on January first, two thousand twenty-seven, each developer of a general-purpose artificial intelligence model shall, except as provided in subdivision two of this section: (a) create and maintain technical documentation for the general-purpose artificial intelligence model, which shall: (i) include: (A) the training and testing processes for such general-purpose artificial intelligence model; and (B) the results of an evaluation of such general-purpose artificial intelligence model performed to determine whether such general-purpose artificial intelligence model is in compliance with the provisions of this article; (ii) include, as appropriate, considering the size and risk profile of such general-purpose artificial intelligence model, at least: (A) the tasks such general-purpose artificial intelligence model is intended to perform; (B) the type and nature of artificial intelligence decision systems in which such general-purpose artificial intelligence model is intended to be integrated; (C) acceptable use policies for such general-purpose artificial intelligence model; (D) the date such general-purpose artificial intelligence model is released; (E) the methods by which such general-purpose artificial intelligence model is distributed; and (F) the modality and format of inputs and outputs for such general-purpose artificial intelligence model; and (iii) be reviewed and revised at least annually, or more frequently, as necessary to maintain the accuracy of such technical documentation;
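As a rough illustration, the technical documentation § 1553(1)(a) calls for could be serialized as a simple key-value file. The keys below are assumptions chosen to mirror clauses (i) and (ii)(A) through (F); the values are placeholders, not real disclosures.

```python
gpai_technical_documentation = {
    "training_and_testing_processes": "...",  # (i)(A)
    "compliance_evaluation_results": "...",   # (i)(B)
    "intended_tasks": ["..."],                # (ii)(A)
    "target_decision_system_types": ["..."],  # (ii)(B)
    "acceptable_use_policies": ["..."],       # (ii)(C)
    "release_date": "YYYY-MM-DD",             # (ii)(D)
    "distribution_methods": ["..."],          # (ii)(E)
    "io_modalities": {"inputs": ["text"], "outputs": ["text"]},  # (ii)(F)
    "last_reviewed": "YYYY-MM-DD",            # (iii) reviewed at least annually
}
```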
G-02 Public Transparency & Documentation · G-02.1 · Developer · General-Purpose AI · Automated Decisionmaking
GBL § 1553(1)(b)
Plain Language
Developers of general-purpose AI models must create, maintain, and make available to downstream integrators documentation enabling them to understand the model's capabilities and limitations, comply with their own obligations under this article, and technically integrate the model. The documentation must disclose the technical requirements for integration along with all the model-level information required in the technical documentation (tasks, target systems, acceptable use policies, release date, distribution methods, I/O formats), and it must be reviewed and revised at least annually.
Statutory Text
(b) create, implement, maintain and make available to persons that intend to integrate such general-purpose artificial intelligence model into such persons' artificial intelligence decision systems documentation and information that: (i) enables such persons to: (A) understand the capabilities and limitations of such general-purpose artificial intelligence model; and (B) comply with such persons' obligations pursuant to this article; (ii) discloses, at a minimum: (A) the technical means required for such general-purpose artificial intelligence model to be integrated into such persons' artificial intelligence decision systems; (B) the information listed in subparagraph (ii) of paragraph (a) of this subdivision; and (iii) except as provided in subdivision two of this section, is reviewed and revised at least annually, or more frequently, as necessary to maintain the accuracy of such documentation and information.
G-01 AI Governance Program & Documentation · G-01.1 · Developer · General-Purpose AI · Automated Decisionmaking
GBL § 1553(2)(d)
Plain Language
Developers of general-purpose AI models that qualify for the internal-use exemption from technical documentation requirements must still establish and maintain an AI risk management framework. The framework must be iterative and ongoing, and must include at minimum: an internal governance function, a map function that establishes the context to frame risks, a risk management function, and a measurement function that assesses, analyzes, and tracks identified risks. This ensures that even internally used models have a baseline governance structure despite being exempt from external-facing documentation.
Statutory Text
(d) A developer that is exempt pursuant to subparagraph (ii) of paragraph (a) of this subdivision shall establish and maintain an artificial intelligence risk management framework, which shall: (i) be the product of an iterative process and ongoing efforts; and (ii) include, at a minimum: (A) an internal governance function; (B) a map function that shall establish the context to frame risks; (C) a risk management function; and (D) a function to measure identified risks by assessing, analyzing and tracking such risks.
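The four functions in § 1553(2)(d)(ii) track the NIST AI RMF govern/map/measure/manage functions. A skeletal Python sketch follows; the class and method names are assumptions and the bodies are placeholders, since the statute specifies functions, not implementations.

```python
class InternalRiskManagementFramework:
    """Iterative, ongoing framework per § 1553(2)(d)(i); names assumed."""

    def govern(self) -> None:
        """(ii)(A) Internal governance: roles, accountability, escalation."""

    def map_context(self) -> dict[str, str]:
        """(ii)(B) Establish the context to frame risks."""
        return {}

    def manage(self, risks: dict[str, str]) -> None:
        """(ii)(C) Prioritize and act on identified risks."""

    def measure(self, risks: dict[str, str]) -> dict[str, float]:
        """(ii)(D) Assess, analyze, and track identified risks."""
        return {name: 0.0 for name in risks}
```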
R-02 Regulatory Disclosure & Submissions · R-02.2 · Developer · General-Purpose AI · Automated Decisionmaking
GBL § 1553(4)
Plain Language
The AG may require developers of general-purpose AI models to produce their technical documentation within 90 days of a request, as part of an investigation. Developers may designate trade secrets, FOIL-exempt information, and attorney-client privileged materials, which remain exempt from public disclosure. Producing privileged materials does not waive the privilege.
Statutory Text
4. Beginning on January first, two thousand twenty-seven, the attorney general may require that a developer disclose to the attorney general, as part of an investigation conducted by the attorney general, no later than ninety days after a request by the attorney general and in a form and manner prescribed by the attorney general, any documentation maintained pursuant to this section. The attorney general may evaluate such documentation to ensure compliance with the provisions of this section. In disclosing any documentation to the attorney general pursuant to this subdivision, the developer may designate such documentation as including any information that is exempt from disclosure pursuant to subdivision three of this section or article six of the public officers law. To the extent such documentation includes such information, such documentation shall be exempt from disclosure. To the extent any information contained in such documentation is subject to the attorney-client privilege or work product protection, such disclosure shall not constitute a waiver of such privilege or protection.
T-01 AI Identity Disclosure · T-01.1 · Deployer · Automated Decisionmaking
GBL § 1554(1)-(2)
Plain Language
Any person doing business in New York that makes available an AI decision system intended to interact with consumers must disclose to each consumer that they are interacting with an AI system. Unlike the high-risk obligations in §§ 1551–1552, this duty reaches any consumer-facing AI decision system, not just high-risk systems, and it binds any person making such a system available, not just deployers. Disclosure is not required where a reasonable person would find it obvious that they are interacting with AI.
Statutory Text
1. Beginning on January first, two thousand twenty-seven, and except as provided in subdivision two of this section, each person doing business in this state, including, but not limited to, each deployer that deploys, offers, sells, leases, licenses, gives, or otherwise makes available, as applicable, any artificial intelligence decision system that is intended to interact with consumers shall ensure that it is disclosed to each consumer who interacts with such artificial intelligence decision system that such consumer is interacting with an artificial intelligence decision system. 2. No disclosure shall be required pursuant to subdivision one of this section under circumstances in which a reasonable person would deem it obvious that such person is interacting with an artificial intelligence decision system.
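Because the duty turns on two facts (consumer interaction and obviousness), it reduces to a simple conditional. A toy Python sketch follows; the function and parameter names are assumptions for illustration only.

```python
def ai_disclosure_required(interacts_with_consumers: bool,
                           obviously_ai_to_reasonable_person: bool) -> bool:
    """GBL § 1554(1)-(2): disclose that the consumer is interacting with an
    AI decision system, unless a reasonable person would deem it obvious."""
    return interacts_with_consumers and not obviously_ai_to_reasonable_person
```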
H-02 Non-Discrimination & Bias Assessment · H-02.6 · Developer · Deployer · Automated Decisionmaking
GBL § 1551(1)(a)(ii), § 1552(1)(a)(ii)
Plain Language
Both developers and deployers may obtain a rebuttable presumption of reasonable care by retaining an AG-identified independent third party to complete bias and governance audits. The audit itself is not mandatory, but it is the statutory path to the safe harbor. It must include, at minimum, an assessment of the system's disparate impact across the enumerated protected characteristics. The AG maintains and publishes an annually updated list of qualified auditors.
Statutory Text
(ii) an independent third party identified by the attorney general pursuant to paragraph (b) of this subdivision and retained by the developer completed bias and governance audits for the high-risk artificial intelligence decision system.
Other · Automated Decisionmaking
GBL § 1552(7)
Plain Language
A deployer is exempt from the risk management program, impact assessment, annual discrimination review, and public website summary requirements if all of the following conditions are continuously met: (1) the developer has contractually assumed those duties; (2) the deployer does not exclusively use its own data to train the system; (3) the system is used for developer-disclosed intended uses; (4) the system continues learning from broad data sources; and (5) the deployer makes the developer's impact assessment available to consumers with substantially similar content. This is a conditional exemption that shifts compliance burden to the developer via contract.
Statutory Text
7. The provisions of subdivisions two, three, four, and six of this section shall not apply to a deployer if, at the time the deployer deploys a high-risk artificial intelligence decision system, and at all times while the high-risk artificial intelligence decision system is deployed: (a) the deployer: (i) has entered into a contract with the developer in which the developer has agreed to assume the deployer's duties pursuant to subdivisions two, three, four, or six of this section; and (ii) does not exclusively use such deployer's own data to train such high-risk artificial intelligence decision system; (b) such high-risk artificial intelligence decision system: (i) is used for the intended uses that are disclosed to such deployer pursuant to subparagraph (iv) of paragraph (b) of subdivision two of section one thousand five hundred fifty-one of this article; and (ii) continues learning based on a broad range of data sources and not solely based on the deployer's own data; and (c) such deployer makes available to consumers any impact assessment that: (i) the developer of such high-risk artificial intelligence decision system has completed and provided to such deployer; and (ii) includes information that is substantially similar to the information included in the statement, analysis, descriptions, overview, and metrics required pursuant to subparagraph (i) of paragraph (b) of subdivision three of this section.
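The exemption is a conjunction of conditions that must hold at deployment and at all times thereafter. A hypothetical Python predicate makes that structure explicit; all parameter names are assumptions, not statutory language.

```python
def subdivision7_exemption_applies(
    developer_contract_assumes_duties: bool,            # (a)(i)
    trains_exclusively_on_own_data: bool,               # (a)(ii), negated below
    used_within_disclosed_intended_uses: bool,          # (b)(i)
    learns_from_broad_data_sources: bool,               # (b)(ii)
    developer_assessment_shared_with_consumers: bool,   # (c)
) -> bool:
    """True only if every § 1552(7) condition holds, continuously,
    for as long as the high-risk system is deployed."""
    return (developer_contract_assumes_duties
            and not trains_exclusively_on_own_data
            and used_within_disclosed_intended_uses
            and learns_from_broad_data_sources
            and developer_assessment_shared_with_consumers)
```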
Other · Automated Decisionmaking
GBL § 1552(8)
Plain Language
Deployers are not required to disclose trade secrets or legally protected information to consumers. However, if a deployer withholds information on these grounds, it must notify the affected consumer that information is being withheld and explain the basis for the withholding. This creates transparency about the limits of disclosure without requiring the protected information itself to be revealed.
Statutory Text
8. Nothing in this subdivision or subdivisions two, three, four, five, or six of this section shall be construed to require a deployer to disclose any information that is a trade secret or otherwise protected from disclosure pursuant to state or federal law. If a deployer withholds any information from a consumer pursuant to this subdivision, the deployer shall send notice to such consumer disclosing: (a) that the deployer is withholding such information from such consumer; and (b) the basis for the deployer's decision to withhold such information from such consumer.
T-02 AI Content Labeling & Provenance · T-02.1 · Developer · Deployer · Automated Decisionmaking · Content Generation
GBL § 1550(15)
Plain Language
The statute defines 'synthetic digital content' broadly to cover any audio, image, text, or video produced or manipulated by an AI decision system. The bill, however, contains no standalone operative provision requiring labeling or provenance marking of synthetic digital content. The definition appears to be anticipatory, or to support the general disclosure obligations elsewhere in the article; no independent labeling obligation is triggered by the definition alone.
Statutory Text
"Synthetic digital content" shall mean any digital content, including, but not limited to, any audio, image, text, or video, that is produced or manipulated by an artificial intelligence decision system, including, but not limited to, a general-purpose artificial intelligence model.