S-01962
NY · State · USA
● Pending
Proposed Effective Date
2025-10-11
New York Senate Bill 1962 — New York Artificial Intelligence Consumer Protection Act
Summary

The NY AI Consumer Protection Act imposes obligations on developers and deployers of high-risk AI decision systems — systems that make or substantially factor into consequential decisions in education, employment, lending, government services, healthcare, housing, insurance, or legal services. Developers must provide deployers with documentation on foreseeable uses, training data, bias risks, and mitigation measures, and must publish a public summary of their high-risk systems and discrimination risk management. Deployers must implement and maintain risk management programs, complete annual impact assessments, conduct annual reviews to ensure systems are not causing algorithmic discrimination, notify consumers before consequential decisions, and provide explanations, data correction, and appeal rights following adverse decisions. Developers of general-purpose AI models must maintain technical documentation and make downstream integration information available. The Attorney General has exclusive enforcement authority, with a mandatory 60-day cure period in the first year and an affirmative defense for entities that discover and cure violations through red-teaming while maintaining compliance with the NIST AI RMF or equivalent frameworks. No private right of action is created.

Enforcement & Penalties
Enforcement Authority
The Attorney General has exclusive enforcement authority. During the first year (January 1, 2027 through January 1, 2028), the AG must issue a notice of violation and allow a 60-day cure period before initiating enforcement if the violation is curable. After January 1, 2028, the AG has discretion over whether to offer a cure opportunity, considering factors such as the number of violations, entity size, public injury likelihood, and whether the violation was caused by human or technical error. Violations constitute unfair trade practices under GBL § 349, enforceable solely by the AG; the private right of action otherwise available under § 349(h) is expressly excluded. An affirmative defense is available if the entity discovered the violation through red-teaming, cured it within 60 days, notified the AG with evidence of harm mitigation, and is otherwise in compliance with the NIST AI RMF, ISO/IEC 42001, or a substantially equivalent risk management framework. No private right of action is created.
Penalties
Violations are treated as unfair trade practices under GBL § 349, giving the AG access to the remedies available under that section (injunctive relief, restitution, and civil penalties of up to $5,000 per violation under Executive Law § 63(12)). The act itself sets no separate minimum or maximum penalty. The private right of action and treble damages otherwise available under § 349(h) are expressly excluded.
Who Is Covered
"Deployer" shall mean any person doing business in this state that deploys a high-risk artificial intelligence decision system.
"Developer" shall mean any person doing business in this state that develops, or intentionally and substantially modifies, an artificial intelligence decision system.
What Is Covered
"Artificial intelligence decision system" shall mean any computational process, derived from machine learning, statistical modeling, data analytics, or artificial intelligence, that issues simplified output, including any content, decision, prediction, or recommendation, that is used to substantially assist or replace discretionary decision making for making consequential decisions that impact consumers.
"High-risk artificial intelligence decision system": (a) shall mean any artificial intelligence decision system that, when deployed, makes, or is a substantial factor in making, a consequential decision; and (b) shall not include: (i) any artificial intelligence decision system that is intended to: (A) perform any narrow procedural task; or (B) detect decision-making patterns, or deviations from decision-making patterns, unless such artificial intelligence decision system is intended to replace or influence any assessment previously completed by an individual without sufficient human review; or (ii) unless the technology, when deployed, makes, or is a substantial factor in making, a consequential decision: (A) any anti-fraud technology that does not make use of facial recognition technology; (B) any artificial intelligence-enabled video game technology; (C) any anti-malware, anti-virus, calculator, cybersecurity, database, data storage, firewall, Internet domain registration, Internet-web-site loading, networking, robocall-filtering, spam-filtering, spellchecking, spreadsheet, web-caching, web-hosting, or similar technology; (D) any technology that performs tasks exclusively related to an entity's internal management affairs, including, but not limited to, ordering office supplies or processing payments; or (E) any technology that communicates with consumers in natural language for the purpose of providing consumers with information, making referrals or recommendations, and answering questions, and is subject to an accepted use policy that prohibits generating content that is discriminatory or harmful.
"General-purpose artificial intelligence model": (a) shall mean any form of artificial intelligence decision system that: (i) displays significant generality; (ii) is capable of competently performing a wide range of distinct tasks; and (iii) can be integrated into a variety of downstream applications or systems; and (b) shall not include any artificial intelligence model that is used for development, prototyping, and research activities before such artificial intelligence model is released on the market.
Compliance Obligations · 22 obligations
H-02 Non-Discrimination & Bias Assessment · H-02.3 · H-02.6 · H-02.7 · Developer · Automated Decisionmaking
GBL § 1551(1)(a)-(b)
Plain Language
Developers of high-risk AI decision systems must exercise reasonable care to protect consumers from known or reasonably foreseeable algorithmic discrimination risks. A rebuttable presumption of reasonable care applies if the developer both complies with the documentation requirements in § 1551 and retains an AG-identified independent third party to complete bias and governance audits. The AG must publish and annually update a list of qualified independent auditors. The safe harbor incentivizes — but does not mandate — independent audits; developers who forgo audits lose the rebuttable presumption but may still demonstrate reasonable care by other means.
Statutory Text
1. (a) Beginning on January first, two thousand twenty-seven, each developer of a high-risk artificial intelligence decision system shall use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination arising from the intended and contracted uses of a high-risk artificial intelligence decision system. In any enforcement action brought on or after such date by the attorney general pursuant to this article, there shall be a rebuttable presumption that a developer used reasonable care as required pursuant to this subdivision if: (i) the developer complied with the provisions of this section; and (ii) an independent third party identified by the attorney general pursuant to paragraph (b) of this subdivision and retained by the developer completed bias and governance audits for the high-risk artificial intelligence decision system. (b) No later than January first, two thousand twenty-six, and at least annually thereafter, the attorney general shall: (i) identify independent third parties who, in the attorney general's opinion, are qualified to complete bias and governance audits for the purposes of subparagraph (ii) of paragraph (a) of this subdivision; and (ii) publish a list of such independent third parties available on the attorney general's website.
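Read operationally, the presumption quoted above is a two-prong conjunctive test: documentation compliance plus a completed audit by an AG-listed third party. A minimal sketch of that logic in Python; the record type and field names are hypothetical shorthand, not terms defined in the bill.

```python
from dataclasses import dataclass

@dataclass
class DeveloperComplianceRecord:
    """Hypothetical record of the two prongs of the GBL § 1551(1)(a) presumption."""
    documentation_complete: bool  # prong (i): § 1551 documentation duties satisfied
    auditor: str | None           # prong (ii): retained independent auditor, if any

def qualifies_for_presumption(record: DeveloperComplianceRecord,
                              ag_auditor_list: set[str]) -> bool:
    # Both prongs must hold, and the retained auditor must appear on the
    # AG-published list of qualified independent third parties.
    return (record.documentation_complete
            and record.auditor is not None
            and record.auditor in ag_auditor_list)
```

Failing this test does not establish a violation; it only forfeits the presumption, leaving the developer to demonstrate reasonable care by other means.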
G-02 Public Transparency & Documentation · G-02.1 · Developer · Automated Decisionmaking
GBL § 1551(2)(a)-(d), § 1551(3)(a)-(b), § 1551(5)
Plain Language
Developers must provide deployers and downstream developers with comprehensive pre-deployment documentation covering: foreseeable and harmful uses, training data summaries, known limitations and discrimination risks, system purpose, performance evaluation methods, data governance measures, intended outputs, discrimination mitigation steps, and usage/monitoring guidance. Documentation must be delivered through model cards, dataset cards, or equivalent artifacts and must be sufficient for deployers to complete their own impact assessments. A developer that is also the sole deployer of a system is exempt unless the system is provided to an unaffiliated deployer. Trade secrets and security-sensitive information are exempt from disclosure.
Statutory Text
2. Beginning on January first, two thousand twenty-seven, and except as provided in subdivision five of this section, a developer of a high-risk artificial intelligence decision system shall make available to each deployer or other developer the following information: (a) A general statement describing the reasonably foreseeable uses, and the known harmful or inappropriate uses, of such high-risk artificial intelligence decision system; (b) Documentation disclosing: (i) high-level summaries of the type of data used to train such high-risk artificial intelligence decision system; (ii) the known or reasonably foreseeable limitations of such high-risk artificial intelligence decision system, including, but not limited to, the known or reasonably foreseeable risks of algorithmic discrimination arising from the intended uses of such high-risk artificial intelligence decision system; (iii) the purpose of such high-risk artificial intelligence decision system; (iv) the intended benefits and uses of such high-risk artificial intelligence decision system; and (v) any other information necessary to enable such deployer or other developer to comply with the provisions of this article; (c) Documentation describing: (i) how such high-risk artificial intelligence decision system was evaluated for performance, and mitigation of algorithmic discrimination, before such high-risk artificial intelligence decision system was offered, sold, leased, licensed, given, or otherwise made available to such deployer or other developer; (ii) the data governance measures used to cover the training datasets and examine the suitability of data sources, possible biases, and appropriate mitigation; (iii) the intended outputs of such high-risk artificial intelligence decision system; (iv) the measures such deployer or other developer has taken to mitigate any known or reasonably foreseeable risks of algorithmic discrimination that may arise from deployment of such high-risk artificial intelligence decision system; and (v) how such high-risk artificial intelligence decision system should be used, not be used, and be monitored by an individual when such high-risk artificial intelligence decision system is used to make, or as a substantial factor in making, a consequential decision; and (d) Any additional documentation that is reasonably necessary to assist a deployer or other developer to: (i) understand the outputs of such high-risk artificial intelligence decision system; and (ii) monitor the performance of such high-risk artificial intelligence decision system for risks of algorithmic discrimination. 3. (a) Except as provided in subdivision five of this section, any developer that, on or after January first, two thousand twenty-seven, offers, sells, leases, licenses, gives, or otherwise makes available to a deployer or other developer a high-risk artificial intelligence decision system shall, to the extent feasible, make available to such deployers and other developers the documentation and information relating to such high-risk artificial intelligence decision system necessary for a deployer, or the third party contracted by a deployer, to complete an impact assessment pursuant to this article. The developer shall make such documentation and information available through artifacts such as model cards, dataset cards, or other impact assessments. 
(b) A developer that also serves as a deployer for any high-risk artificial intelligence decision system shall not be required to generate the documentation and information required pursuant to this section unless such high-risk artificial intelligence decision system is provided to an unaffiliated entity acting as a deployer. 5. Nothing in subdivisions two or four of this section shall be construed to require a developer to disclose any information: (a) that is a trade secret or otherwise protected from disclosure pursuant to state or federal law; or (b) the disclosure of which would present a security risk to such developer.
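The disclosure duties in § 1551(2) amount to a fixed checklist of model-card content. A sketch of how a developer might structure that checklist; the field names are shorthand mapped to the statutory subparagraphs, not terms from the bill.

```python
from dataclasses import dataclass, field

@dataclass
class HighRiskSystemModelCard:
    """Illustrative model-card fields mapped to GBL § 1551(2)."""
    foreseeable_uses: str            # (2)(a): foreseeable and known harmful/inappropriate uses
    training_data_summary: str       # (2)(b)(i): high-level summary of training data types
    known_limitations: str           # (2)(b)(ii): limitations, incl. discrimination risks
    purpose: str                     # (2)(b)(iii)
    intended_benefits_and_uses: str  # (2)(b)(iv)
    evaluation_methods: str          # (2)(c)(i): pre-release performance/bias evaluation
    data_governance_measures: str    # (2)(c)(ii): dataset suitability and bias mitigation
    intended_outputs: str            # (2)(c)(iii)
    discrimination_mitigations: list[str] = field(default_factory=list)  # (2)(c)(iv)
    usage_and_monitoring_guidance: str = ""  # (2)(c)(v): use, non-use, human monitoring
```

Section 1551(3)(a) then requires delivering this material through artifacts such as model cards or dataset cards, in enough detail for the deployer to complete its own § 1552(3) impact assessment.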
G-02 Public Transparency & Documentation · G-02.4 · Developer · Automated Decisionmaking
GBL § 1551(4)(a)-(b)
Plain Language
Developers must publish and maintain on their website or a public use case inventory a clear summary describing: the types of high-risk AI decision systems they have developed or substantially modified and currently make available, and how they manage known or foreseeable algorithmic discrimination risks. The statement must be updated as needed for accuracy and within 90 days of any intentional and substantial modification to a covered system.
Statutory Text
4. (a) Beginning on January first, two thousand twenty-seven, each developer shall publish, in a manner that is clear and readily available, on such developer's website, or a public use case inventory, a statement summarizing: (i) the types of high-risk artificial intelligence decision systems that such developer: (A) has developed or intentionally and substantially modified; and (B) currently makes available to a deployer or other developer; and (ii) how such developer manages any known or reasonably foreseeable risks of algorithmic discrimination that may arise from the development or intentional and substantial modification of the types of high-risk artificial intelligence decision systems described in subparagraph (i) of this subdivision. (b) Each developer shall update the statement described in paragraph (a) of this subdivision: (i) as necessary to ensure that such statement remains accurate; and (ii) no later than ninety days after the developer intentionally and substantially modifies any high-risk artificial intelligence decision system described in subparagraph (i) of paragraph (a) of this subdivision.
R-02 Regulatory Disclosure & Submissions · R-02.2 · Developer · Automated Decisionmaking
GBL § 1551(6)
Plain Language
The Attorney General may require developers to produce their deployer-facing documentation (foreseeable uses, training data summaries, bias risks, mitigation measures, etc.) as part of an AG investigation. Developers may designate trade secrets, FOIL-exempt information, and attorney-client privileged material as confidential, and such designations are honored — disclosure to the AG does not waive privilege. This is a demand-driven disclosure obligation, not a proactive filing requirement.
Statutory Text
6. Beginning on January first, two thousand twenty-seven, the attorney general may require that a developer disclose to the attorney general, as part of an investigation conducted by the attorney general and in a form and manner prescribed by the attorney general, the general statement or documentation described in subdivision two of this section. The attorney general may evaluate such general statement or documentation to ensure compliance with the provisions of this section. In disclosing such general statement or documentation to the attorney general pursuant to this subdivision, the developer may designate such general statement or documentation as including any information that is exempt from disclosure pursuant to subdivision five of this section or article six of the public officers law. To the extent such general statement or documentation includes such information, such general statement or documentation shall be exempt from disclosure. To the extent any information contained in such general statement or documentation is subject to the attorney-client privilege or work product protection, such disclosure shall not constitute a waiver of such privilege or protection.
H-02 Non-Discrimination & Bias Assessment · H-02.3 · H-02.6 · H-02.7 · Deployer · Automated Decisionmaking
GBL § 1552(1)(a)-(b)
Plain Language
Deployers of high-risk AI decision systems must exercise reasonable care to protect consumers from known or foreseeable algorithmic discrimination risks. A rebuttable presumption of reasonable care applies if the deployer both complies with § 1552's risk management, impact assessment, and annual review requirements and retains an AG-identified independent third party for bias and governance audits. As with developers, the audit is incentivized through the safe harbor but not strictly mandated — deployers who forgo audits must demonstrate reasonable care by other means.
Statutory Text
1. (a) Beginning on January first, two thousand twenty-seven, each deployer of a high-risk artificial intelligence decision system shall use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination. In any enforcement action brought on or after said date by the attorney general pursuant to this article, there shall be a rebuttable presumption that a deployer of a high-risk artificial intelligence decision system used reasonable care as required pursuant to this subdivision if: (i) the deployer complied with the provisions of this section; and (ii) an independent third party identified by the attorney general pursuant to paragraph (b) of this subdivision and retained by the deployer completed bias and governance audits for the high-risk artificial intelligence decision system. (b) No later than January first, two thousand twenty-seven, and at least annually thereafter, the attorney general shall: (i) identify the independent third parties who, in the attorney general's opinion, are qualified to complete bias and governance audits for the purposes of subparagraph (ii) of paragraph (a) of this subdivision; and (ii) make a list of such independent third parties available on the attorney general's web site.
G-01 AI Governance Program & Documentation · G-01.1 · G-01.2 · Deployer · Automated Decisionmaking
GBL § 1552(2)(a)-(b)
Plain Language
Deployers must implement and maintain a risk management policy and program governing their high-risk AI decision system deployments. The program must specify principles, processes, and personnel for identifying, documenting, and mitigating algorithmic discrimination risks, and must be iteratively reviewed and updated over each system's lifecycle. Reasonableness is calibrated to recognized frameworks (NIST AI RMF, ISO/IEC 42001, or substantially equivalent standards), deployer size and complexity, system scope, and data sensitivity. A single program may cover multiple high-risk systems. Deployers that meet the conditions of § 1552(7) — where the developer has contractually assumed these duties — are exempt.
Statutory Text
2. (a) Beginning on January first, two thousand twenty-seven, and except as provided in subdivision seven of this section, each deployer of a high-risk artificial intelligence decision system shall implement and maintain a risk management policy and program to govern such deployer's deployment of the high-risk artificial intelligence decision system. The risk management policy and program shall specify and incorporate the principles, processes, and personnel that the deployer shall use to identify, document, and mitigate any known or reasonably foreseeable risks of algorithmic discrimination. The risk management policy shall be the product of an iterative process, the risk management program shall be an iterative process and both the risk management policy and program shall be planned, implemented, and regularly and systematically reviewed and updated over the lifecycle of the high-risk artificial intelligence decision system. Each risk management policy and program implemented and maintained pursuant to this subdivision shall be reasonable, considering: (i) the guidance and standards set forth in the latest version of: (A) the "Artificial Intelligence Risk Management Framework" published by the national institute of standards and technology; (B) ISO or IEC 42001 of the international organization for standardization; or (C) a nationally or internationally recognized risk management framework for artificial intelligence decision systems, other than the guidance and standards specified in clauses (A) and (B) of this subparagraph, that imposes requirements that are substantially equivalent to, and at least as stringent as, the requirements established pursuant to this section for risk management policies and programs; (ii) the size and complexity of the deployer; (iii) the nature and scope of the high-risk artificial intelligence decision systems deployed by the deployer, including, but not limited to, the intended uses of such high-risk artificial intelligence decision systems; and (iv) the sensitivity and volume of data processed in connection with the high-risk artificial intelligence decision systems deployed by the deployer. (b) A risk management policy and program implemented and maintained pursuant to paragraph (a) of this subdivision may cover multiple high-risk artificial intelligence decision systems deployed by the deployer.
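The statute does not prescribe a program format; it scores reasonableness against a named framework plus three proportionality factors. A sketch of a program record capturing those inputs, with hypothetical field names:

```python
from dataclasses import dataclass

@dataclass
class RiskManagementProgram:
    """Illustrative § 1552(2)(a) program record; field names are shorthand."""
    framework: str                     # (i): e.g. NIST AI RMF, ISO/IEC 42001, or equivalent
    deployer_size_and_complexity: str  # (ii): proportionality factor
    system_nature_and_scope: str       # (iii): incl. intended uses of covered systems
    data_sensitivity_and_volume: str   # (iv): proportionality factor
    principles: list[str]              # how discrimination risks are identified and documented
    processes: list[str]               # mitigation processes, reviewed iteratively
    personnel: list[str]               # responsible roles
    covered_systems: list[str]         # one program may cover multiple high-risk systems
```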
H-02 Non-Discrimination & Bias Assessment · H-02.3 · H-02.10 · Deployer · Automated Decisionmaking
GBL § 1552(3)(a)-(e)
Plain Language
Deployers must complete an impact assessment for each high-risk AI decision system before deployment, then annually and within 90 days of any intentional and substantial modification. Each assessment must cover: system purpose, use cases, and benefits; algorithmic discrimination risk analysis and mitigation; input data categories and outputs; any customization data; performance metrics and limitations; transparency measures; and post-deployment monitoring safeguards. Post-modification assessments must also disclose whether the system was used consistently with the developer's intended uses. A single assessment may cover comparable systems. Assessments completed under other substantially similar laws are accepted. All assessments and associated records must be retained for at least three years after final deployment. Deployers meeting the § 1552(7) delegation conditions are exempt.
Statutory Text
3. (a) Except as provided in paragraphs (c) and (d) of this subdivision and subdivision seven of this section: (i) a deployer that deploys a high-risk artificial intelligence decision system on or after January first, two thousand twenty-seven, or a third party contracted by the deployer, shall complete an impact assessment of the high-risk artificial intelligence decision system; and (ii) beginning on January first, two thousand twenty-seven, a deployer, or a third party contracted by the deployer, shall complete an impact assessment of a deployed high-risk artificial intelligence decision system: (A) at least annually; and (B) no later than ninety days after an intentional and substantial modification to such high-risk artificial intelligence decision system is made available. (b) (i) Each impact assessment completed pursuant to this subdivision shall include, at a minimum and to the extent reasonably known by, or available to, the deployer: (A) a statement by the deployer disclosing the purpose, intended use cases and deployment context of, and benefits afforded by, the high-risk artificial intelligence decision system; (B) an analysis of whether the deployment of the high-risk artificial intelligence decision system poses any known or reasonably foreseeable risks of algorithmic discrimination and, if so, the nature of such algorithmic discrimination and the steps that have been taken to mitigate such risks; (C) A description of: (I) the categories of data the high-risk artificial intelligence decision system processes as inputs; and (II) the outputs such high-risk artificial intelligence decision system produces; (D) if the deployer used data to customize the high-risk artificial intelligence decision system, an overview of the categories of data the deployer used to customize such high-risk artificial intelligence decision system; (E) any metrics used to evaluate the performance and known limitations of the high-risk artificial intelligence decision system; (F) a description of any transparency measures taken concerning the high-risk artificial intelligence decision system, including, but not limited to, any measures taken to disclose to a consumer that such high-risk artificial intelligence decision system is in use when such high-risk artificial intelligence decision system is in use; and (G) a description of the post-deployment monitoring and user safeguards provided concerning such high-risk artificial intelligence decision system, including, but not limited to, the oversight, use, and learning process established by the deployer to address issues arising from deployment of such high-risk artificial intelligence decision system. (ii) In addition to the statement, analysis, descriptions, overview, and metrics required pursuant to subparagraph (i) of this paragraph, an impact assessment completed pursuant to this subdivision following an intentional and substantial modification made to a high-risk artificial intelligence decision system on or after January first, two thousand twenty-seven, shall include a statement disclosing the extent to which the high-risk artificial intelligence decision system was used in a manner that was consistent with, or varied from, the developer's intended uses of such high-risk artificial intelligence decision system. (c) A single impact assessment may address a comparable set of high-risk artificial intelligence decision systems deployed by a deployer. 
(d) If a deployer, or a third party contracted by the deployer, completes an impact assessment for the purpose of complying with another applicable law or regulation, such impact assessment shall be deemed to satisfy the requirements established in this subdivision if such impact assessment is reasonably similar in scope and effect to the impact assessment that would otherwise be completed pursuant to this subdivision. (e) A deployer shall maintain the most recently completed impact assessment of a high-risk artificial intelligence decision system as required pursuant to this subdivision, all records concerning each such impact assessment and all prior impact assessments, if any, for a period of at least three years following the final deployment of the high-risk artificial intelligence decision system.
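The assessment cadence above is mechanical: one assessment before deployment, then annually, accelerated to 90 days after any intentional and substantial modification, with a three-year retention tail. A sketch of the timing rule, assuming a 365-day year for the annual cycle:

```python
from datetime import date, timedelta

def next_assessment_due(last_completed: date,
                        modified_on: date | None = None) -> date:
    """Annual cadence under § 1552(3)(a)(ii)(A), accelerated by a substantial
    modification under § 1552(3)(a)(ii)(B): the earlier deadline controls."""
    annual_deadline = last_completed + timedelta(days=365)
    if modified_on is not None:
        return min(annual_deadline, modified_on + timedelta(days=90))
    return annual_deadline

def retention_expires(final_deployment: date) -> date:
    """Assessments and records must be kept at least three years after the
    final deployment of the system (§ 1552(3)(e))."""
    return final_deployment + timedelta(days=3 * 365)
```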
H-02 Non-Discrimination & Bias Assessment · H-02.8 · Deployer · Automated Decisionmaking
GBL § 1552(4)
Plain Language
Deployers (or their contracted third parties) must conduct at least annual reviews of each deployed high-risk AI decision system to affirmatively verify it is not causing algorithmic discrimination. This is a distinct, ongoing operational obligation separate from the initial and periodic impact assessments under § 1552(3). The first review must be completed by January 1, 2027. Deployers meeting the § 1552(7) delegation conditions are exempt.
Statutory Text
4. Except as provided in subdivision seven of this section, a deployer, or a third party contracted by the deployer, shall review, no later than January first, two thousand twenty-seven, and at least annually thereafter, the deployment of each high-risk artificial intelligence decision system deployed by the deployer to ensure that such high-risk artificial intelligence decision system is not causing algorithmic discrimination.
H-01 Human Oversight of Automated Decisions · H-01.3 · Deployer · Automated Decisionmaking
GBL § 1552(5)(a), § 1552(5)(c)
Plain Language
Before deploying a high-risk AI decision system to make or substantially influence a consequential decision about a consumer, the deployer must notify the consumer directly. The notice must include: that a high-risk AI system is being used, the system's purpose, the nature of the consequential decision, deployer contact information, a plain-language system description, and instructions for accessing the deployer's public statement under § 1552(6). All notices must be in plain language, in all languages the deployer normally uses for consumer communications, and in formats accessible to consumers with disabilities.
Statutory Text
5. (a) Beginning on January first, two thousand twenty-seven, and before a deployer deploys a high-risk artificial intelligence decision system to make, or be a substantial factor in making, a consequential decision concerning a consumer, the deployer shall: (i) notify the consumer that the deployer has deployed a high-risk artificial intelligence decision system to make, or be a substantial factor in making, such consequential decision; and (ii) provide to the consumer: (A) a statement disclosing: (I) the purpose of such high-risk artificial intelligence decision system; and (II) the nature of such consequential decision; (B) contact information for such deployer; (C) a description, in plain language, of such high-risk artificial intelligence decision system; and (D) instructions on how to access the statement made available pursuant to paragraph (a) of subdivision six of this section. (c) The deployer shall provide the notice, statements, information, description, and instructions required pursuant to paragraphs (a) and (b) of this subdivision: (i) directly to the consumer; (ii) in plain language; (iii) in all languages in which such deployer, in the ordinary course of such deployer's business, provides contracts, disclaimers, sale announcements, and other information to consumers; and (iv) in a format that is accessible to consumers with disabilities.
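The pre-decision notice has a fixed content list. A sketch of a notice payload; the field names are shorthand for the § 1552(5)(a) items, with the delivery fields reflecting the § 1552(5)(c) format rules:

```python
from dataclasses import dataclass

@dataclass
class PreDecisionNotice:
    """Illustrative § 1552(5)(a) notice contents; names are shorthand."""
    system_purpose: str                 # (ii)(A)(I)
    decision_nature: str                # (ii)(A)(II)
    deployer_contact: str               # (ii)(B)
    plain_language_description: str     # (ii)(C)
    public_statement_instructions: str  # (ii)(D): locating the § 1552(6) statement
    languages: list[str]                # (c)(iii): every language used with consumers
    accessible_formats: list[str]       # (c)(iv): disability-accessible formats
```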
H-01 Human Oversight of Automated Decisions · H-01.1 · H-01.2 · H-01.4 · H-01.5 · Deployer · Automated Decisionmaking
GBL § 1552(5)(b), § 1552(5)(c)
Plain Language
When a high-risk AI decision system makes or substantially contributes to an adverse consequential decision about a consumer, the deployer must provide: (1) a statement of the principal reasons for the adverse decision, including the AI system's degree of contribution, the types of data it processed, and data sources; (2) an opportunity to correct inaccurate personal data used in the decision; and (3) an appeal mechanism that must include human review if technically feasible, unless delay would endanger the consumer. All adverse-decision communications must be delivered directly to the consumer, in plain language, in all languages the deployer uses for consumer communications, and in disability-accessible formats.
Statutory Text
(b) Beginning on January first, two thousand twenty-seven, a deployer that has deployed a high-risk artificial intelligence decision system to make, or as a substantial factor in making, a consequential decision concerning a consumer shall, if such consequential decision is adverse to the consumer, provide to such consumer: (i) a statement disclosing the principal reason or reasons for such adverse consequential decision, including, but not limited to: (A) the degree to which, and manner in which, the high-risk artificial intelligence decision system contributed to such adverse consequential decision; (B) the type of data that was processed by such high-risk artificial intelligence decision system in making such adverse consequential decision; and (C) the source of such data; and (ii) an opportunity to: (A) correct any incorrect personal data that the high-risk artificial intelligence decision system processed in making, or as a substantial factor in making, such adverse consequential decision; and (B) appeal such adverse consequential decision, which shall, if technically feasible, allow for human review unless providing such opportunity is not in the best interest of such consumer, including, but not limited to, in instances in which any delay might pose a risk to the life or safety of such consumer. (c) The deployer shall provide the notice, statements, information, description, and instructions required pursuant to paragraphs (a) and (b) of this subdivision: (i) directly to the consumer; (ii) in plain language; (iii) in all languages in which such deployer, in the ordinary course of such deployer's business, provides contracts, disclaimers, sale announcements, and other information to consumers; and (iv) in a format that is accessible to consumers with disabilities.
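Adverse-decision handling pairs a disclosure with two consumer rights. A sketch with hypothetical names mapping to § 1552(5)(b):

```python
from dataclasses import dataclass

@dataclass
class AdverseDecisionDisclosure:
    """Illustrative § 1552(5)(b)(i) statement of principal reasons."""
    principal_reasons: list[str]
    ai_contribution: str             # (i)(A): degree and manner of the system's role
    data_types_processed: list[str]  # (i)(B)
    data_sources: list[str]          # (i)(C)

@dataclass
class ConsumerRemedies:
    """Illustrative § 1552(5)(b)(ii) rights attached to the disclosure."""
    data_correction_channel: str  # (ii)(A): correct incorrect personal data
    appeal_channel: str           # (ii)(B): appeal the adverse decision
    human_review: bool            # required if technically feasible, unless delay
                                  # would endanger the consumer's life or safety
```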
G-02 Public Transparency & Documentation · G-02.4 · Deployer · Automated Decisionmaking
GBL § 1552(6)(a)-(b)
Plain Language
Deployers must publish and maintain on their website a clear statement summarizing: the types of high-risk AI decision systems they currently deploy, their algorithmic discrimination risk management practices for each system, and detailed information about the nature, source, and extent of data collected and used. The statement must be periodically updated. Deployers meeting the § 1552(7) delegation conditions are exempt.
Statutory Text
6. (a) Beginning on January first, two thousand twenty-seven, and except as provided in subdivision seven of this section, each deployer shall make available, in a manner that is clear and readily available on such deployer's website, a statement summarizing: (i) the types of high-risk artificial intelligence decision systems that are currently deployed by such deployer; (ii) how such deployer manages any known or reasonably foreseeable risks of algorithmic discrimination that may arise from deployment of each high-risk artificial intelligence decision system described in subparagraph (i) of this paragraph; and (iii) in detail, the nature, source and extent of the information collected and used by such deployer. (b) Each deployer shall periodically update the statement required pursuant to paragraph (a) of this subdivision.
Other · Automated Decisionmaking
GBL § 1552(7)(a)-(c)
Plain Language
A deployer is exempt from the risk management policy, impact assessment, annual review, and public disclosure requirements if all of the following conditions are continuously met: (1) the developer has contractually assumed those duties; (2) the deployer does not exclusively use its own data to train the system; (3) the system is used for developer-disclosed intended uses; (4) the system learns from a broad range of data sources; and (5) the deployer makes the developer's impact assessment available to consumers. All conditions must hold at all times during deployment — if any lapses, the exemption is lost. This is a conditional delegation mechanism, not an independent obligation.
Statutory Text
7. The provisions of subdivisions two, three, four, and six of this section shall not apply to a deployer if, at the time the deployer deploys a high-risk artificial intelligence decision system, and at all times while the high-risk artificial intelligence decision system is deployed: (a) the deployer: (i) has entered into a contract with the developer in which the developer has agreed to assume the deployer's duties pursuant to subdivisions two, three, four, or six of this section; and (ii) does not exclusively use such deployer's own data to train such high-risk artificial intelligence decision system; (b) such high-risk artificial intelligence decision system: (i) is used for the intended uses that are disclosed to such deployer pursuant to subparagraph (iv) of paragraph (b) of subdivision two of section one thousand five hundred fifty-one of this article; and (ii) continues learning based on a broad range of data sources and not solely based on the deployer's own data; and (c) such deployer makes available to consumers any impact assessment that: (i) the developer of such high-risk artificial intelligence decision system has completed and provided to such deployer; and (ii) includes information that is substantially similar to the information included in the statement, analysis, descriptions, overview, and metrics required pursuant to subparagraph (i) of paragraph (b) of subdivision three of this section.
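Because the exemption is strictly conjunctive and continuous, it reduces to an all-conditions check that must remain true for the entire deployment. A sketch; the parameter names paraphrase § 1552(7):

```python
def deployer_exemption_applies(
    developer_assumed_duties_by_contract: bool,         # (a)(i)
    trains_exclusively_on_own_data: bool,               # (a)(ii): disqualifying if True
    used_within_disclosed_intended_uses: bool,          # (b)(i)
    learns_from_broad_data_sources: bool,               # (b)(ii)
    developer_assessment_available_to_consumers: bool,  # (c)
) -> bool:
    # Re-evaluate continuously: a single lapse in any condition ends the exemption.
    return (developer_assumed_duties_by_contract
            and not trains_exclusively_on_own_data
            and used_within_disclosed_intended_uses
            and learns_from_broad_data_sources
            and developer_assessment_available_to_consumers)
```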
Other · Deployer · Automated Decisionmaking
GBL § 1552(8)
Plain Language
Deployers are not required to disclose trade secrets or information otherwise protected under law to consumers. However, when a deployer withholds information from a consumer on this basis, it must notify the consumer that information is being withheld and explain the legal basis for the withholding. This creates a transparency-about-non-disclosure obligation — consumers must know that something was withheld and why, even if they cannot see the underlying information.
Statutory Text
8. Nothing in this subdivision or subdivisions two, three, four, five, or six of this section shall be construed to require a deployer to disclose any information that is a trade secret or otherwise protected from disclosure pursuant to state or federal law. If a deployer withholds any information from a consumer pursuant to this subdivision, the deployer shall send notice to such consumer disclosing: (a) that the deployer is withholding such information from such consumer; and (b) the basis for the deployer's decision to withhold such information from such consumer.
R-02 Regulatory Disclosure & Submissions · R-02.2 · Deployer · Automated Decisionmaking
GBL § 1552(9)
Plain Language
The Attorney General may require deployers (or their contracted third parties) to produce their risk management policy, impact assessments, and associated records within 90 days of a request, as part of an AG investigation. Deployers may designate trade secrets, FOIL-exempt information, and privileged material as confidential. Disclosure to the AG does not waive attorney-client privilege or work product protection. This is a demand-driven disclosure obligation with a 90-day response window.
Statutory Text
9. Beginning on January first, two thousand twenty-seven, the attorney general may require that a deployer, or a third party contracted by the deployer pursuant to subdivision three of this section, as applicable, disclose to the attorney general, as part of an investigation conducted by the attorney general, no later than ninety days after a request by the attorney general, and in a form and manner prescribed by the attorney general, the risk management policy implemented pursuant to subdivision two of this section, the impact assessment completed pursuant to subdivision three of this section; or records maintained pursuant to paragraph (e) of subdivision three of this section. The attorney general may evaluate such risk management policy, impact assessment or records to ensure compliance with the provisions of this section. In disclosing such risk management policy, impact assessment or records to the attorney general pursuant to this subdivision, the deployer or third-party contractor, as applicable, may designate such risk management policy, impact assessment or records as including any information that is exempt from disclosure pursuant to subdivision eight of this section or article six of the public officers law. To the extent such risk management policy, impact assessment, or records include such information, such risk management policy, impact assessment, or records shall be exempt from disclosure. To the extent any information contained in such risk management policy, impact assessment, or record is subject to the attorney-client privilege or work product protection, such disclosure shall not constitute a waiver of such privilege or protection.
G-01 AI Governance Program & Documentation · G-01.1 · G-01.2 · G-01.3 · Developer · Automated Decisionmaking · General-Purpose AI
GBL § 1553(1)(a)-(b)
Plain Language
Developers of general-purpose AI models must create and maintain technical documentation covering training and testing processes, evaluation results for article compliance, intended tasks, target integration systems, acceptable use policies, release dates, distribution methods, and input/output formats. Documentation must be reviewed and revised at least annually. Developers must also make available to downstream integrators documentation enabling them to understand model capabilities and limitations, comply with their own obligations under the article, and integrate the model technically. This downstream-facing documentation must also be reviewed at least annually. Open-source models, internal-only models, and internal management tools may qualify for exemptions under § 1553(2).
Statutory Text
1. Beginning on January first, two thousand twenty-seven, each developer of a general-purpose artificial intelligence model shall, except as provided in subdivision two of this section: (a) create and maintain technical documentation for the general-purpose artificial intelligence model, which shall: (i) include: (A) the training and testing processes for such general-purpose artificial intelligence model; and (B) the results of an evaluation of such general-purpose artificial intelligence model performed to determine whether such general-purpose artificial intelligence model is in compliance with the provisions of this article; (ii) include, as appropriate, considering the size and risk profile of such general-purpose artificial intelligence model, at least: (A) the tasks such general-purpose artificial intelligence model is intended to perform; (B) the type and nature of artificial intelligence decision systems in which such general-purpose artificial intelligence model is intended to be integrated; (C) acceptable use policies for such general-purpose artificial intelligence model; (D) the date such general-purpose artificial intelligence model is released; (E) the methods by which such general-purpose artificial intelligence model is distributed; and (F) the modality and format of inputs and outputs for such general-purpose artificial intelligence model; and (iii) be reviewed and revised at least annually, or more frequently, as necessary to maintain the accuracy of such technical documentation; and (b) create, implement, maintain and make available to persons that intend to integrate such general-purpose artificial intelligence model into such persons' artificial intelligence decision systems documentation and information that: (i) enables such persons to: (A) understand the capabilities and limitations of such general-purpose artificial intelligence model; and (B) comply with such persons' obligations pursuant to this article; (ii) discloses, at a minimum: (A) the technical means required for such general-purpose artificial intelligence model to be integrated into such persons' artificial intelligence decision systems; (B) the information listed in subparagraph (ii) of paragraph (a) of this subdivision; and (iii) except as provided in subdivision two of this section, is reviewed and revised at least annually, or more frequently, as necessary to maintain the accuracy of such documentation and information.
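The § 1553(1)(a) documentation again reduces to a fixed field list with an annual review clock. A sketch, with shorthand names that are not statutory terms:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class GPAITechnicalDocumentation:
    """Illustrative § 1553(1)(a) documentation fields."""
    training_and_testing_processes: str  # (i)(A)
    compliance_evaluation_results: str   # (i)(B)
    intended_tasks: list[str]            # (ii)(A)
    target_system_types: list[str]       # (ii)(B): systems it may be integrated into
    acceptable_use_policies: str         # (ii)(C)
    release_date: date                   # (ii)(D)
    distribution_methods: list[str]      # (ii)(E)
    input_output_modalities: str         # (ii)(F)
    last_reviewed: date                  # (iii): reviewed and revised at least annually
```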
Other · Automated Decisionmaking · General-Purpose AI
GBL § 1553(2)(a)-(c)
Plain Language
Three categories of GPAI models are partially or fully exempt from the § 1553 technical documentation requirements: (1) open-source models released with publicly available parameters, which are exempt from internal technical documentation and from annual revision of downstream documentation but not from the initial downstream documentation obligation (and the open-source exemption is unavailable where the model is deployed as a high-risk system); (2) internal-use-only models not offered commercially or intended to interact with consumers; and (3) models performing exclusively internal management tasks. The developer bears the burden of proving that an exemption applies. These are scope limitations, not independent obligations.
Statutory Text
2. (a) The provisions of paragraph (a) and subparagraph (iii) of paragraph (b) of subdivision one of this section shall not apply to a developer that develops, or intentionally and substantially modifies, a general-purpose artificial intelligence model on or after January first, two thousand twenty-seven, if: (i) (A) the developer releases such general-purpose artificial intelligence model under a free and open-source license that allows for: (I) access to, and modification, distribution, and usage of, such general-purpose artificial intelligence model; and (II) the parameters of such general-purpose artificial intelligence model to be made publicly available pursuant to clause (B) of this subparagraph; and (B) unless such general-purpose artificial intelligence model is deployed as a high-risk artificial intelligence decision system, the parameters of such general-purpose artificial intelligence model, including, but not limited to, the weights and information concerning the model architecture and model usage for such general-purpose artificial intelligence model, are made publicly available; or (ii) the general-purpose artificial intelligence model is: (A) not offered for sale in the market; (B) not intended to interact with consumers; and (C) solely utilized: (I) for an entity's internal purposes; or (II) pursuant to an agreement between multiple entities for such entities' internal purposes. (b) The provisions of this section shall not apply to a developer that develops, or intentionally and substantially modifies, a general-purpose artificial intelligence model on or after January first, two thousand twenty-seven, if such general purpose artificial intelligence model performs tasks exclusively related to an entity's internal management affairs, including, but not limited to, ordering office supplies or processing payments. (c) A developer that takes any action under an exemption pursuant to paragraph (a) or (b) of this subdivision shall bear the burden of demonstrating that such action qualifies for such exemption.
G-01 AI Governance Program & Documentation · G-01.1 · Developer · Automated Decisionmaking · General-Purpose AI
GBL § 1553(2)(d)
Plain Language
Developers of internal-use-only GPAI models that are exempt from technical documentation requirements under § 1553(2)(a)(ii) must still establish and maintain an AI risk management framework. The framework must be iterative and include: internal governance, risk context mapping, risk management, and risk measurement/tracking functions. This ensures internal-use GPAI models are subject to baseline governance even though they are exempt from documentation disclosure obligations.
Statutory Text
(d) A developer that is exempt pursuant to subparagraph (ii) of paragraph (a) of this subdivision shall establish and maintain an artificial intelligence risk management framework, which shall: (i) be the product of an iterative process and ongoing efforts; and (ii) include, at a minimum: (A) an internal governance function; (B) a map function that shall establish the context to frame risks; (C) a risk management function; and (D) a function to measure identified risks by assessing, analyzing and tracking such risks.
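The four required functions track the govern/map/measure/manage structure of the NIST AI RMF. A minimal sketch of the framework skeleton, with hypothetical names:

```python
from dataclasses import dataclass

@dataclass
class InternalGPAIRiskFramework:
    """Illustrative § 1553(2)(d)(ii) minimum functions."""
    governance: str   # (A): internal governance function
    mapping: str      # (B): establish the context to frame risks
    management: str   # (C): risk management function
    measurement: str  # (D): assess, analyze, and track identified risks
```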
R-02 Regulatory Disclosure & Submissions · R-02.2 · Developer · Automated Decisionmaking · General-Purpose AI
GBL § 1553(3)-(4)
Plain Language
GPAI model developers need not disclose trade secrets or legally protected information in their technical documentation. The AG may require developers to produce their § 1553 technical documentation within 90 days as part of an investigation. Developers may designate trade secrets, FOIL-exempt information, and privileged material as confidential, and disclosure to the AG does not waive attorney-client privilege or work product protection.
Statutory Text
3. Nothing in subdivision one of this section shall be construed to require a developer to disclose any information that is a trade secret or otherwise protected from disclosure pursuant to state or federal law. 4. Beginning on January first, two thousand twenty-seven, the attorney general may require that a developer disclose to the attorney general, as part of an investigation conducted by the attorney general, no later than ninety days after a request by the attorney general and in a form and manner prescribed by the attorney general, any documentation maintained pursuant to this section. The attorney general may evaluate such documentation to ensure compliance with the provisions of this section. In disclosing any documentation to the attorney general pursuant to this subdivision, the developer may designate such documentation as including any information that is exempt from disclosure pursuant to subdivision three of this section or article six of the public officers law. To the extent such documentation includes such information, such documentation shall be exempt from disclosure. To the extent any information contained in such documentation is subject to the attorney-client privilege or work product protection, such disclosure shall not constitute a waiver of such privilege or protection.
T-01 AI Identity Disclosure · T-01.1 · Deployer · Automated Decisionmaking
GBL § 1554(1)-(2)
Plain Language
Any person doing business in New York — including deployers — that offers an AI decision system intended to interact with consumers must disclose to each consumer that they are interacting with an AI system. The disclosure obligation is conditional: it does not apply where a reasonable person would find it obvious they are interacting with AI. This applies to all AI decision systems intended for consumer interaction, not just high-risk systems. The broader 'person doing business in this state' scope means this obligation reaches beyond the defined developer/deployer roles.
Statutory Text
1. Beginning on January first, two thousand twenty-seven, and except as provided in subdivision two of this section, each person doing business in this state, including, but not limited to, each deployer that deploys, offers, sells, leases, licenses, gives, or otherwise makes available, as applicable, any artificial intelligence decision system that is intended to interact with consumers shall ensure that it is disclosed to each consumer who interacts with such artificial intelligence decision system that such consumer is interacting with an artificial intelligence decision system. 2. No disclosure shall be required pursuant to subdivision one of this section under circumstances in which a reasonable person would deem it obvious that such person is interacting with an artificial intelligence decision system.
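The disclosure duty is a simple two-input rule: it attaches to any consumer-facing AI decision system and switches off only when the AI nature of the interaction would be obvious to a reasonable person. A sketch:

```python
def identity_disclosure_required(interacts_with_consumers: bool,
                                 obvious_to_reasonable_person: bool) -> bool:
    """GBL § 1554: disclose the AI interaction unless obviousness excuses it.
    Applies to all consumer-facing AI decision systems, not only high-risk ones."""
    return interacts_with_consumers and not obvious_to_reasonable_person
```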
Other · Automated Decisionmaking
GBL § 1555(1)-(5)
Plain Language
This comprehensive preemption and savings clause preserves existing rights to comply with other laws, cooperate with law enforcement, conduct research, respond to security incidents (by means other than facial recognition technology), effectuate recalls, and repair technical errors. It exempts: high-risk systems approved or certified by a federal agency (such as the FDA or FAA) or by an entity regulated by the FHFA; research supporting federal approvals or certifications; work under DOD, DOC, or NASA contracts (except for employment or housing decisions); HIPAA-covered entities providing non-high-risk AI healthcare recommendations that require provider implementation; systems acquired by or for the federal government (except high-risk systems used for employment or housing decisions); and any situation where compliance would violate an evidentiary privilege or First Amendment rights. These are scope limitations and exemptions; they create no new compliance obligations.
Statutory Text
1. Nothing in this article shall be construed to restrict a developer's, deployer's, or other person's ability to: (a) comply with federal, state or municipal law; (b) comply with a civil, criminal or regulatory inquiry, investigation, subpoena, or summons by a federal, state, municipal, or other governmental authority; (c) cooperate with a law enforcement agency concerning conduct or activity that the developer, deployer, or other person reasonably and in good faith believes may violate federal, state, or municipal law; (d) investigate, establish, exercise, prepare for, or defend a legal claim; (e) take immediate steps to protect an interest that is essential for the life or physical safety of a consumer or another individual; (f) (i) by any means other than facial recognition technology, prevent, detect, protect against, or respond to: (A) a security incident; (B) a malicious or deceptive activity; or (C) identity theft, fraud, harassment or any other illegal activity; (ii) investigate, report, or prosecute the persons responsible for any action described in subparagraph (i) of this paragraph; or (iii) preserve the integrity or security of systems; (g) engage in public or peer-reviewed scientific or statistical research in the public interest that: (i) adheres to all other applicable ethics and privacy laws; and (ii) is conducted in accordance with: (A) part forty-six of title forty-five of the code of federal regulations, as amended; or (B) relevant requirements established by the federal food and drug administration; (h) conduct research, testing, and development activities regarding an artificial intelligence decision system or model, other than testing conducted pursuant to real world conditions, before such artificial intelligence decision system or model is placed on the market, deployed, or put into service, as applicable; (i) effectuate a product recall; (j) identify and repair technical errors that impair existing or intended functionality; or (k) assist another developer, deployer, or person with any of the obligations imposed pursuant to this article. 2. The obligations imposed on developers, deployers, or other persons pursuant to this article shall not apply where compliance by the developer, deployer, or other person with the provisions of this article would violate an evidentiary privilege pursuant to state law. 3. Nothing in this article shall be construed to impose any obligation on a developer, deployer, or other person that adversely affects the rights or freedoms of any person, including, but not limited to, the rights of any person: (a) to freedom of speech or freedom of the press guaranteed in: (i) the first amendment to the United States constitution; and (ii) section eight of the New York state constitution; or (b) pursuant to section seventy-nine-h of the civil rights law. 4. 
Nothing in this article shall be construed to apply to any developer, deployer, or other person: (a) insofar as such developer, deployer or other person develops, deploys, puts into service, or intentionally and substantially modifies, as applicable, a high-risk artificial intelligence decision system: (i) that has been approved, authorized, certified, cleared, developed, or granted by: (A) a federal agency, including, but not limited to, the federal food and drug administration or the federal aviation administration, acting within the scope of such federal agency's authority; or (B) a regulated entity subject to supervision and regulation by the federal housing finance agency; or (ii) in compliance with standards that are: (A) established by: (I) any federal agency, including, but not limited to, the federal office of the national coordinator for health information technology; or (II) a regulated entity subject to supervision and regulation by the federal housing finance agency; and (B) substantially equivalent to, and at least as stringent as, the standards established pursuant to this article; (b) conducting research to support an application: (i) for approval or certification from any federal agency, including, but not limited to, the federal food and drug administration, the federal aviation administration, or the federal communications commission; or (ii) that is otherwise subject to review by any federal agency; (c) performing work pursuant to, or in connection with, a contract with the federal department of commerce, the federal department of defense, or the national aeronautics and space administration, unless such developer, deployer, or other person is performing such work on a high-risk artificial intelligence decision system that is used to make, or as a substantial factor in making, a decision concerning employment or housing; or (d) that is a covered entity, as defined by the health insurance portability and accountability act of 1996 and the regulations promulgated thereunder, as amended, and providing health care recommendations that: (i) are generated by an artificial intelligence decision system; (ii) require a health care provider to take action to implement such recommendations; and (iii) are not considered to be high risk. 5. Nothing in this article shall be construed to apply to any artificial intelligence decision system that is acquired by or for the federal government or any federal agency or department, including, but not limited to, the federal department of commerce, the federal department of defense, or the national aeronautics and space administration, unless such artificial intelligence decision system is a high-risk artificial intelligence decision system that is used to make, or as a substantial factor in making, a decision concerning employment or housing.
Other · Automated Decisionmaking · Financial Services
GBL § 1555(6)-(7)
Plain Language
Insurers and fraternal benefit societies are deemed in full compliance if they implement and maintain a written AI decision systems program meeting all requirements established by the Superintendent of Financial Services. Banks, credit unions, and their affiliates or subsidiaries are deemed in full compliance if they are subject to examination by a state or federal prudential regulator under published guidance or regulations that are substantially equivalent to, and at least as stringent as, this article and that, at a minimum, require regular anti-discrimination audits and mitigation of algorithmic discrimination. These are safe harbor provisions: they create no new obligations but instead recognize compliance achieved through existing regulatory channels.
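As a reading aid only, the two safe harbors might be sketched as below; the field names are hypothetical and collapse several statutory qualifiers (for example, the "substantially equivalent and at least as stringent" test) into booleans.

# Illustrative sketch of the GBL § 1555(6)-(7) safe harbors. Hypothetical
# field names; statutory qualifiers are collapsed into booleans.
from dataclasses import dataclass

@dataclass
class Entity:
    is_insurer_or_fraternal_society: bool
    has_dfs_compliant_ai_program: bool        # written program meeting all DFS requirements
    is_bank_or_credit_union: bool             # includes affiliates and subsidiaries
    prudential_guidance_equivalent: bool      # substantially equivalent and at least as stringent
    guidance_requires_audits_and_mitigation: bool

def deemed_compliant(e: Entity) -> bool:
    """True if either safe harbor applies; neither harbor adds new obligations."""
    insurer_harbor = e.is_insurer_or_fraternal_society and e.has_dfs_compliant_ai_program
    bank_harbor = (e.is_bank_or_credit_union
                   and e.prudential_guidance_equivalent
                   and e.guidance_requires_audits_and_mitigation)
    return insurer_harbor or bank_harbor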
Statutory Text
6. Any insurer, as defined by section five hundred one of the insurance law, or fraternal benefit society, as defined by section four thousand five hundred one of the insurance law, shall be deemed to be in full compliance with the provisions of this article if such insurer or fraternal benefit society has implemented and maintains a written artificial intelligence decision systems program in accordance with all requirements established by the superintendent of financial services.

7. (a) Any bank, out-of-state bank, New York credit union, federal credit union, or out-of-state credit union, or any affiliate or subsidiary thereof, shall be deemed to be in full compliance with the provisions of this article if such bank, out-of-state bank, New York credit union, federal credit union, out-of-state credit union, affiliate, or subsidiary is subject to examination by any state or federal prudential regulator pursuant to any published guidance or regulations that apply to the use of high-risk artificial intelligence decision systems, and such guidance or regulations: (i) impose requirements that are substantially equivalent to, and at least as stringent as, the requirements of this article; and (ii) at a minimum, require such bank, out-of-state bank, New York credit union, federal credit union, out-of-state credit union, affiliate, or subsidiary to: (A) regularly audit such bank's, out-of-state bank's, New York credit union's, federal credit union's, out-of-state credit union's, affiliate's, or subsidiary's use of high-risk artificial intelligence decision systems for compliance with state and federal anti-discrimination laws and regulations applicable to such bank, out-of-state bank, New York credit union, federal credit union, out-of-state credit union, affiliate, or subsidiary; and (B) mitigate any algorithmic discrimination caused by the use of a high-risk artificial intelligence decision system, or any risk of algorithmic discrimination that is reasonably foreseeable as a result of the use of a high-risk artificial intelligence decision system.
Other · Automated Decisionmaking
GBL § 1556(1)-(6)
Plain Language
The AG has exclusive enforcement authority. During the first year (January 1, 2027 through January 1, 2028), a mandatory 60-day cure period applies to curable violations. Beginning January 1, 2028, cure opportunities are discretionary, based on factors such as the number of violations, entity size and complexity, likelihood of public injury, safety, and whether the violation stemmed from human or technical error. Violations constitute unfair trade practices under GBL § 349, but the private right of action under § 349(h) is expressly excluded. An affirmative defense exists for entities that discover violations through red-teaming, cure within 60 days, notify the AG with evidence of harm mitigation, and otherwise comply with the NIST AI RMF, ISO/IEC 42001, or a substantially equivalent framework. The statute expressly preserves all other legal rights, claims, and remedies; the rebuttable presumptions and affirmative defenses apply only to AG enforcement actions. This section creates no new affirmative compliance obligation.
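The timing and defense rules above reduce to a small amount of date and condition logic, sketched below in Python. This is illustrative only: the function names and fields are ours, not the bill's, and questions such as what counts as a curable violation or adequate mitigation evidence are glossed over.

# Illustrative sketch of the GBL § 1556 cure-period timing and the
# four-part affirmative defense. Hypothetical names; a reading aid only.
from datetime import date

MANDATORY_CURE_START = date(2027, 1, 1)
MANDATORY_CURE_END = date(2028, 1, 1)

def cure_opportunity(action_date: date, violation_is_curable: bool) -> str:
    """Classify the cure posture for a prospective AG enforcement action."""
    if MANDATORY_CURE_START <= action_date <= MANDATORY_CURE_END:
        # First-year regime: notice plus a 60-day cure window is mandatory
        # whenever the AG determines the violation is curable.
        return "mandatory 60-day cure" if violation_is_curable else "no cure required"
    # From January 1, 2028: cure is discretionary, weighing factors such as
    # violation count, entity size, public-injury likelihood, and error type.
    return "discretionary"

def affirmative_defense(discovered_by_red_teaming: bool,
                        days_from_discovery_to_cure: int,
                        notified_ag_with_mitigation_evidence: bool,
                        maintains_recognized_framework: bool) -> bool:
    """All four conditions must hold: discovery via red-teaming, cure within
    60 days, notice to the AG with evidence of harm mitigation, and ongoing
    compliance with NIST AI RMF, ISO/IEC 42001, or an equivalent framework."""
    return (discovered_by_red_teaming
            and days_from_discovery_to_cure <= 60
            and notified_ag_with_mitigation_evidence
            and maintains_recognized_framework)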
Statutory Text
1. The attorney general shall have exclusive authority to enforce the provisions of this article.

2. Except as provided in subdivision six of this section, during the period beginning on January first, two thousand twenty-seven, and ending on January first, two thousand twenty-eight, the attorney general shall, prior to initiating any action for a violation of this section, issue a notice of violation to the developer, deployer, or other person if the attorney general determines that it is possible to cure such violation. If the developer, deployer, or other person fails to cure such violation within sixty days after receipt of such notice of violation, the attorney general may bring an action pursuant to this section.

3. Except as provided in subdivision six of this section, beginning on January first, two thousand twenty-eight, the attorney general may, in determining whether to grant a developer, deployer, or other person the opportunity to cure a violation described in subdivision two of this section, consider: (a) the number of violations; (b) the size and complexity of the developer, deployer, or other person; (c) the nature and extent of the developer's, deployer's, or other person's business; (d) the substantial likelihood of injury to the public; (e) the safety of persons or property; and (f) whether such violation was likely caused by human or technical error.

4. Nothing in this article shall be construed as providing the basis for a private right of action for violations of the provisions of this article.

5. Except as provided in subdivisions one, two, three, four, and six of this section, a violation of the requirements established in this article shall constitute an unfair trade practice for purposes of section three hundred forty-nine of this chapter and shall be enforced solely by the attorney general; provided, however, that subdivision (h) of section three hundred forty-nine of this chapter shall not apply to any such violation.

6. (a) In any action commenced by the attorney general for any violation of this article, it shall be an affirmative defense that the developer, deployer, or other person: (i) discovers a violation of any provision of this article through red-teaming; (ii) no later than sixty days after discovering such violation through red-teaming: (A) cures such violation; and (B) provides to the attorney general, in a form and manner prescribed by the attorney general, notice that such violation has been cured and evidence that any harm caused by such violation has been mitigated; and (iii) is otherwise in compliance with the latest version of: (A) the Artificial Intelligence Risk Management Framework published by the national institute of standards and technology; (B) ISO/IEC 42001 of the international organization for standardization and the international electrotechnical commission; (C) a nationally or internationally recognized risk management framework for artificial intelligence decision systems, other than the risk management frameworks described in clauses (A) and (B) of this subparagraph, that imposes requirements that are substantially equivalent to, and at least as stringent as, the requirements established pursuant to this article; or (D) any risk management framework for artificial intelligence decision systems that is substantially equivalent to, and at least as stringent as, the risk management frameworks described in clauses (A), (B), and (C) of this subparagraph.
(b) The developer, deployer, or other person bears the burden of demonstrating to the attorney general that the requirements established pursuant to paragraph (a) of this subdivision have been satisfied. (c) Nothing in this article, including, but not limited to, the enforcement authority granted to the attorney general pursuant to this section, shall be construed to preempt or otherwise affect any right, claim, remedy, presumption, or defense available at law or in equity. Any rebuttable presumption or affirmative defense established pursuant to this article shall apply only to an enforcement action brought by the attorney general pursuant to this section and shall not apply to any right, claim, remedy, presumption, or defense available at law or in equity.