A-08884
NY · State · USA
● Pending
Proposed Effective Date
2026-06-09
New York Assembly Bill 8884 — An Act to amend the civil rights law and the executive law, in relation to the use of artificial intelligence systems (New York Artificial Intelligence Act)
Summary

The New York AI Act imposes obligations on developers and deployers of high-risk AI systems used in consequential decisions (employment, housing, credit, healthcare, education, law enforcement, legal services, and financial services). Core obligations include a duty of reasonable care to prevent algorithmic discrimination, mandatory independent third-party audits on a recurring schedule, periodic reporting to the Attorney General, and implementation of a documented risk management program aligned with NIST AI RMF. End users must receive advance notice before AI-driven consequential decisions, the right to opt out in favor of human decision-making, and a post-decision appeal with meaningful human review. The bill categorically prohibits social scoring AI systems. Enforcement is through both AG action (injunctive relief, up to $20,000 per violation) and a private right of action with a plaintiff-friendly presumption at the motion-to-dismiss stage. Audit requirements take effect two years after enactment; all other provisions take effect one year after enactment.

Enforcement & Penalties
Enforcement Authority
Attorney General enforcement: the AG may apply to the Supreme Court for injunctive relief and civil penalties upon finding a violation; no proof of actual injury is required. Private right of action: any person harmed by a violation may commence a plenary proceeding. At the motion-to-dismiss stage, the court shall presume that the AI system was created or operated in violation of the article and that the violation caused the alleged harm; the defendant may rebut the presumption by clear and convincing evidence, and a developer may also rebut by demonstrating compliance with the safe harbor under § 89-b. The Supreme Court has jurisdiction over all enforcement actions.
Penalties
AG enforcement: civil penalty of up to $20,000 per violation; injunctive relief; restitution; attorney's fees and costs as provided under CPLR § 8303(a)(6). Private right of action: compensatory damages and legal fees to the prevailing party. AG injunctive relief does not require proof that any person was injured or damaged. Whistleblower retaliation claims may obtain appropriate relief as provided under Labor Law § 740(5).
Who Is Covered
"Deployer" means any person, partnership, association or corporation that offers or uses an AI system for commerce in the state of New York, or provides an AI system for use by the general public in the state of New York. A deployer shall not include any natural person using an AI system for personal use. A developer may also be considered a deployer if its actions satisfy this definition.
"Developer" means a person, partnership, or corporation that designs, codes, or produces an AI system, or creates a substantial change with respect to an AI system, whether for its own use in the state of New York or for use by a third party in the state of New York. A deployer may also be considered a developer if its actions satisfy this definition.
What Is Covered
"Artificial intelligence system" or "AI system" means a machine-based system or combination of systems, that for explicit and implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Artificial intelligence system shall not include: (a) any system that (i) is used by a business entity solely for internal purposes and (ii) is not used as a substantial factor in a consequential decision; or (b) any software used primarily for basic computerized processes, such as anti-malware, anti-virus, auto-correct functions, calculators, databases, data storage, electronic communications, firewall, internet domain registration, internet website loading, networking, spam and robocall-filtering, spellcheck tools, spreadsheets, web caching, web hosting, or any tool that relates only to internal management affairs such as ordering office supplies or processing payments, and that do not materially affect the rights, liberties, benefits, safety or welfare of any individual within the state.
"High-risk AI system" means any AI system that, when deployed: (a) is a substantial factor in making a consequential decision; or (b) will have a material impact on the statutory or constitutional rights, civil liberties, safety, or welfare of an individual in the state.
Compliance Obligations · 17 obligations
H-02 Non-Discrimination & Bias Assessment · H-02.1–H-02.3 · Developer · Deployer · Automated Decisionmaking
Civ. Rights Law § 86(1)–(2)
Plain Language
Developers and deployers of high-risk AI systems must exercise reasonable care to prevent foreseeable algorithmic discrimination resulting from the use, sale, or sharing of the system. Before using, selling, or sharing a high-risk AI system, the developer or deployer must have completed an independent audit under § 87 confirming compliance with this reasonable-care standard. The definition of algorithmic discrimination covers an extensive list of protected characteristics and expressly exempts internal bias testing, diversity pool expansion, and private club operations. Failure to comply is an unlawful discriminatory practice.
Statutory Text
1. A developer or deployer shall take reasonable care to prevent foreseeable risk of algorithmic discrimination that is a consequence of the use, sale, or sharing of a high-risk AI system or a product featuring a high-risk AI system. 2. Any developer or deployer that uses, sells, or shares a high-risk AI system shall have completed an independent audit, pursuant to section eighty-seven of this article, confirming that the developer or deployer has taken reasonable care to prevent foreseeable risk of algorithmic discrimination with respect to such high-risk AI system.
H-01 Human Oversight of Automated Decisions · H-01.3–H-01.4 · Deployer · Automated Decisionmaking
Civ. Rights Law § 86-a(1)(a)–(d)
Plain Language
Before using a high-risk AI system for a consequential decision, deployers must notify the end user at least five business days in advance — in clear, consumer-friendly terms and in all languages the company offers — that AI will be used. The deployer must also provide the end user a meaningful opportunity to opt out of the automated process and have the decision made by a human instead; opting out may not trigger adverse consequences and the deployer must render a decision within 45 days. When the AI decision would confer a benefit (e.g., social benefits, housing, emergency funds), the deployer must offer the user the option to waive the five-day advance notice, after which notice must still be given as early as practicable. Users are limited to one opt-out per consequential decision within a six-month period. An urgent-necessity exception exists: if compliance would cause imminent detriment to the end user, the notice and opt-out obligations are waived — but the right to request human review is never waived.
Statutory Text
(a) Any deployer that employs a high-risk AI system for a consequential decision shall comply with the following requirements; provided, however, that where there is an urgent necessity for a decision to be made to confer a benefit to the end user, including, but not limited to, social benefits, housing access, or dispensing of emergency funds, and compliance with this section would cause imminent detriment to the welfare of the end user, such obligation shall be considered waived; provided further, that nothing in this section shall be construed to waive a natural person's option to request human review of the decision: (i) inform the end user at least five business days prior to the use of such system for the making of a consequential decision in clear, conspicuous, and consumer-friendly terms, made available in each of the languages in which the company offers its end services, that AI systems will be used to make a decision or to assist in making a decision; and (ii) allow sufficient time and opportunity in a clear, conspicuous, and consumer-friendly manner for the consumer to opt-out of the automated consequential decision process and for the decision to be made by a human representative. A consumer may not be punished or face any other adverse action for opting out of a decision by an AI system and the deployer shall render a decision to the consumer within forty-five days. (b) If a deployer employs a high-risk AI system for a consequential decision to determine whether to or on what terms to confer a benefit on an end user, the deployer shall offer the end user the option to waive their right to advance notice of five business days under this subdivision. 
(c) If the end user clearly and affirmatively waives their right to five business days' notice, the deployer shall then inform the end user as early as practicable before the making of the consequential decision in clear, conspicuous, and consumer-friendly terms, made available in each of the languages in which the company offers its end services, that AI systems will be used to make a decision or to assist in making a decision. The deployer shall allow sufficient time and opportunity in a clear, conspicuous, and consumer-friendly manner for the consumer to opt-out of the automated process and for the decision to be made by a human representative. A consumer may not be punished or face any other adverse action for opting out of a decision by an AI system and the deployer shall render a decision to the consumer within forty-five days. (d) An end user shall be entitled to no more than one opt-out with respect to the same consequential decision within a six-month period.
H-01 Human Oversight of Automated Decisions · H-01.4–H-01.5 · Deployer · Automated Decisionmaking
Civ. Rights Law § 86-a(2)(a)–(b)
Plain Language
After a high-risk AI system has been used in a consequential decision, the deployer must inform the end user within five days. The deployer must then provide a clear appeal process that allows the end user to (1) formally contest the decision, (2) submit supporting information, and (3) obtain meaningful human review. The deployer must respond to appeals within 45 days, extendable once by another 45 days for complex or voluminous appeals, with notice to the user of the extension and reasons. Users are limited to one appeal per consequential decision within a six-month period.
Statutory Text
(a) Any deployer that employs a high-risk AI system for a consequential decision shall inform the end user within five days in a clear, conspicuous and consumer-friendly manner if a high-risk AI system has been used to make a consequential decision. The deployer shall then provide and explain a process for the end user to appeal the decision, which shall at minimum allow the end user to (i) formally contest the decision, (ii) provide information to support their position, and (iii) obtain meaningful human review of the decision. A deployer shall respond to an end user's appeal within forty-five days of receipt of the appeal. That period may be extended once by forty-five additional days where reasonably necessary, taking into account the complexity and number of appeals. The deployer shall inform the end user of any such extension within forty-five days of receipt of the appeal, together with the reasons for the delay. (b) An end user shall be entitled to no more than one appeal with respect to the same consequential decision in a six-month period.
Other · Automated Decisionmaking
Civ. Rights Law § 86-a(3)–(5)
Plain Language
Developers and deployers are legally responsible for the quality and accuracy of all consequential decisions made by their high-risk AI systems, including any resulting bias or algorithmic discrimination. The rights and obligations of Article 8-A cannot be waived by contract. An end user must choose between exercising the pre-decision opt-out (§ 86-a(1)) or the post-decision appeal (§ 86-a(2)) — they may not use both for the same decision. These provisions establish liability rules and structural constraints on existing obligations; they do not create independent affirmative compliance requirements.
Statutory Text
3. The deployer or developer of a high-risk AI system is legally responsible for quality and accuracy of all consequential decisions made, including any bias or algorithmic discrimination resulting from the operation of the AI system on their behalf. 4. The rights and obligations under this section may not be waived by any person, partnership, association or corporation. 5. With respect to a single consequential decision, an end user may not exercise both its right to opt-out of a consequential decision under subdivision one of this section and its right to appeal a consequential decision under subdivision two of this section.
G-03 Whistleblower & Anti-Retaliation Protections · G-03.1–G-03.3 · Developer · Deployer · Automated Decisionmaking
Civ. Rights Law § 86-b(1)–(2), (4)–(5)
Plain Language
Developers and deployers of high-risk AI systems must not prevent employees — including former employees and independent contractors — from disclosing suspected violations to the Attorney General, including through employment terms or NDAs, and must not retaliate against employees who make such disclosures. Each developer and deployer must also provide an anonymous internal reporting process for employees who believe the entity has violated Article 8-A, any other law, or has made false or misleading statements about its risk management program. The internal process must include at least monthly status updates to the disclosing employee. Employees harmed by retaliation may petition a court for relief under Labor Law § 740(5), and nothing in this section limits protections under that law.
Statutory Text
1. Developers and/or deployers of high-risk AI systems shall not: (a) prevent any of their employees from disclosing information to the attorney general, including through terms and conditions of employment or seeking to enforce terms and conditions of employment, if the employee has reasonable cause to believe the information indicates a violation of this article; or (b) retaliate against an employee for disclosing information to the attorney general pursuant to this section. 2. An employee harmed by a violation of this article may petition a court for appropriate relief as provided in subdivision five of section seven hundred forty of the labor law. 4. Each developer and deployer shall provide a reasonable internal process through which an employee may anonymously disclose information to the developer or deployer if the employee believes in good faith that the information indicates that the developer or deployer has violated any provision of this article or any other law, or has made false or materially misleading statements related to its risk management policy and program, or failed to disclose known risks to employees, including, at a minimum, a monthly update to the person who made the disclosure regarding the status of the developer's or deployer's investigation of the disclosure and the actions taken by the developer or deployer in response to the disclosure. 5. This section does not limit protections provided to employees under section seven hundred forty of the labor law.
G-03 Whistleblower & Anti-Retaliation Protections · G-03.4 · Developer · Deployer · Automated Decisionmaking
Civ. Rights Law § 86-b(3)
Plain Language
Developers and deployers must provide clear notice to all employees working on high-risk AI systems of their rights and responsibilities under Article 8-A, including the right of contractor and subcontractor employees to use the developer's internal disclosure process. A presumption of compliance attaches if the entity either (a) continuously posts the notice in all workplaces, provides it to new employees, and periodically distributes it to remote workers, or (b) provides written notice annually and obtains acknowledgment from all employees.
Statutory Text
3. Developers and deployers of high-risk AI systems shall provide a clear notice to all of their employees working on such AI systems of their rights and responsibilities under this article, including the right of employees of contractors and subcontractors to use the developer's internal process for making protected disclosures pursuant to subdivision four of this section. A developer or deployer is presumed to be in compliance with the requirements of this subdivision if the developer or deployer does either of the following: (a) at all times post and display within all workplaces maintained by the developer or deployer a notice to all employees of their rights and responsibilities under this article, ensure that all new employees receive equivalent notice, and ensure that employees who work remotely periodically receive an equivalent notice; or (b) no less frequently than once every year, provide written notice to all employees of their rights and responsibilities under this article and ensure that the notice is received and acknowledged by all of those employees.
H-02 Non-Discrimination & Bias Assessment · H-02.6–H-02.7 · Developer · Deployer · Automated Decisionmaking
Civ. Rights Law § 87(1)–(3), (5)–(9)
Plain Language
Both developers and deployers of high-risk AI systems must engage independent third-party auditors to evaluate their systems for algorithmic discrimination and risk management program conformity. Developers must complete a first audit within six months of offering or deploying the system, then annually thereafter. Deployers must complete a first audit within six months of deployment, a second within one year, then biennially. Deployer audits must also assess system accuracy and reliability. Auditor independence requirements are strict: no auditing or non-auditing service engagement with the commissioning entity in the preceding twelve months, no commercially competing AI development in the five years following the audit, and no contingent fees or bonuses for positive results. Auditors must have access to all prior regulatory reports; audits may use AI tools in part but may not be completed entirely by AI and may not use a different high-risk AI system to complete the audit. An audit satisfying equivalent federal, state, or local requirements may serve as a substitute. The AG may promulgate additional rules on auditor independence and community engagement. Note that the audit requirement takes effect two years after enactment (one year later than other provisions).
Statutory Text
1. Developers of high-risk AI systems shall cause to be conducted third-party audits in accordance with this section. (a) A developer of a high-risk AI system shall complete at least: (i) a first audit within six months after completion of development of the high-risk AI system and the initial offering of the high-risk AI system to a deployer for deployment or, if the developer is first deployer to deploy the high-risk AI system, after initial deployment; and (ii) one audit every one year following the submission of the first audit. (b) A developer audit under this section shall include: (i) an evaluation and determination of whether the developer has taken reasonable care to prevent foreseeable risk of algorithmic discrimination with respect to such high-risk AI system; and (ii) an evaluation of the developer's documented risk management policy and program required under section eighty-nine of this article for conformity with subdivision one of such section eighty-nine. 2. Deployers of high-risk AI systems shall cause to be conducted third-party audits in accordance with this section. (a) A deployer of a high-risk AI system shall complete at least: (i) a first audit within six months after initial deployment; (ii) a second audit within one year following the submission of the first audit; and (iii) one audit every two years following the submission of the second audit. (b) A deployer audit under this section shall include: (i) an evaluation and determination of whether the deployer has taken reasonable care to prevent foreseeable risk of algorithmic discrimination with respect to such high-risk AI system; (ii) an evaluation of system accuracy and reliability with respect to such high-risk AI system's deployer-intended and actual use cases; and (iii) an evaluation of the deployer's documented risk management policy and program required under section eighty-nine of this article for conformity with subdivision one of such section eighty-nine. 3. 
A deployer or developer may hire more than one auditor to fulfill the requirements of this section. 5. The independent auditor shall have complete and unredacted copies of all reports previously filed by the deployer or developer under section eighty-eight of this article. 6. An audit conducted under this section may be completed in part, but shall not be completed entirely, with the assistance of an AI system. (a) Acceptable auditor uses of an AI system include, but are not limited to: (i) use of an audited high-risk AI system in a controlled environment without impacts on end users for system testing purposes; or (ii) detecting patterns in the behavior of an audited AI system. (b) An auditor shall not: (i) use a different high-risk AI system that is not the subject of an audit to complete an audit; or (ii) use an AI system to draft an audit under this section without meaningful human review and oversight. 7. (a) An auditor shall be an independent entity including but not limited to an individual, non-profit, firm, corporation, partnership, cooperative, or association. (b) For the purposes of this article, no auditor may be commissioned by a developer or deployer of a high-risk AI system if such entity: (i) has already been commissioned to provide any auditing or non-auditing service, including but not limited to financial auditing, cybersecurity auditing, or consulting services of any type, to the commissioning company in the past twelve months; or (ii) is, will be, or plans to be engaged in the business of developing or deploying an AI system that can compete commercially with such developer's or deployer's high-risk AI system in the five years following an audit. (c) Fees paid to auditors may not be contingent on the result of the audit and the commissioning company shall not provide any incentives or bonuses for a positive audit result. 8. 
The attorney general may promulgate further rules to ensure (a) the independence of auditors under this section, and (b) that teams conducting audits incorporate feedback from communities that may foreseeably be the subject of algorithmic discrimination with respect to the AI system being audited. 9. If a developer or deployer has an audit completed for the purpose of complying with another applicable federal, state, or local law or regulation, and the audit otherwise satisfies all other requirements of this section, such audit shall be deemed to satisfy the requirements of this section.
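The audit cadence in subdivisions one and two differs by role, and the recurring deadlines chain off each prior submission. As an illustrative aid only (not part of the bill, and not legal advice), the schedule can be sketched as simple date arithmetic; the function and variable names below are hypothetical:

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    # Minimal month arithmetic; assumes the day of month exists in the
    # target month (true for the dates used in this sketch).
    y, m = divmod(d.month - 1 + months, 12)
    return d.replace(year=d.year + y, month=m + 1)

def audit_schedule(start: date, role: str, count: int = 4) -> list[date]:
    """Sketch of the Civ. Rights Law § 87(1)-(2) audit cadence.

    start: for a developer, the initial offering/deployment date;
           for a deployer, the initial deployment date.
    First audit is due within six months of `start`. Developers then
    audit annually; deployers audit again within one year, then every
    two years, each interval measured from the prior submission.
    """
    due = [add_months(start, 6)]  # first audit: within six months
    while len(due) < count:
        if role == "developer":
            due.append(add_months(due[-1], 12))   # annually thereafter
        else:  # deployer
            gap = 12 if len(due) == 1 else 24     # second within a year, then biennial
            due.append(add_months(due[-1], gap))
    return due
```

For a system first deployed on 2028-06-09, the sketch yields a first audit due 2028-12-09 for both roles, after which the developer track is annual and the deployer track settles into a two-year rhythm.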
H-02 Non-Discrimination & Bias Assessment · Government · Automated Decisionmaking
Civ. Rights Law § 87(4)
Plain Language
The Attorney General has discretionary authority to promulgate additional audit rules to ensure audits properly assess algorithmic discrimination and compliance, and to recommend updated auditing frameworks to the legislature based on nationally or internationally recognized standards such as ISO frameworks. This creates a delegated rulemaking power but no immediate additional compliance obligation for developers or deployers — the obligation may expand through future rulemaking.
Statutory Text
4. At the attorney general's discretion, the attorney general may: (a) promulgate further rules as necessary to ensure that audits under this section assess whether or not AI systems produce algorithmic discrimination and otherwise comply with the provisions of this article; and (b) recommend an updated AI system auditing framework to the legislature, where such recommendations are based on a standard or framework (i) designed to evaluate the risks of AI systems, and (ii) that is nationally or internationally recognized and consensus-driven, including but not limited to a relevant framework or standard created by the International Standards Organization.
R-02 Regulatory Disclosure & Submissions · R-02.1 · Developer · Automated Decisionmaking
Civ. Rights Law § 88(1)–(3)
Plain Language
Developers must file periodic reports with the Attorney General covering system description, intended and disallowed uses, development overview, training data overview, and information necessary for deployers to monitor compliance. The first report is due within six months of offering the system for deployment (or deploying it), with annual reports thereafter and an additional report within six months of any substantial change. Each report must be accompanied by the most recent independent audit. Developers that are also deployers are subject to both reporting tracks, though § 88(4)(c) permits a single joint report covering both sets of requirements.
Statutory Text
1. Every developer and deployer of a high-risk AI system shall comply with the reporting requirements of this section. 2. Together with each report required to be filed under this section, every developer and deployer shall file with the attorney general a copy of the last completed independent audit required by this article. 3. Developers of high-risk AI systems shall complete and file with the attorney general reports in accordance with this subdivision. (a) A developer of a high-risk AI system shall complete and file with the attorney general at least: (i) a first report within six months after completion of development of the high-risk AI system and the initial offering of the high-risk AI system to a deployer for deployment or, if the developer is first deployer to deploy the high-risk AI system, after initial deployment; (ii) one report annually following the submission of the first report; and (iii) one report within six months of any substantial change to the high-risk AI system. (b) A developer report under this section shall include: (i) a description of the system including: (A) the uses of the high-risk AI system that the developer intends; and (B) any explicitly unintended or disallowed uses of the high-risk AI system; (ii) an overview of how the high-risk AI system was developed; (iii) an overview of the high-risk AI system's training data; and (iv) any other information necessary to allow a deployer to: (A) understand the outputs and monitor the system for compliance with this article; and (B) fulfill its duties under this article.
R-02 Regulatory Disclosure & Submissions · R-02.1 · Deployer · Automated Decisionmaking
Civ. Rights Law § 88(4)
Plain Language
Deployers must file periodic reports with the Attorney General covering system description, actual and planned uses, any deviation from developer-intended uses, and an impact assessment addressing algorithmic discrimination risk, monetization plans, and cost-benefit evaluation for consumers. The first report is due within six months of deployment, the second within one year, then biennially thereafter, plus within six months of any substantial change. Entities that are both developer and deployer may file a single joint report covering both sets of requirements.
Statutory Text
4. Deployers of high-risk AI systems shall complete and file with the attorney general reports in accordance with this subdivision. (a) A deployer of a high-risk AI system shall complete and file with the attorney general at least: (i) a first report within six months after initial deployment; (ii) a second report within one year following the completion and filing of the first report; (iii) one report every two years following the completion and filing of the second report; and (iv) one report within six months of any substantial change to the high-risk AI system. (b) A deployer report under this section shall include: (i) a description of the system including: (A) the deployer's actual, intended, or planned uses of the high-risk AI system with respect to consequential decisions; and (B) whether the deployer is using the high-risk AI system for any developer unintended or disallowed uses; and (ii) an impact assessment including: (A) whether the high-risk AI system poses a risk of algorithmic discrimination and the steps taken to address the risk of algorithmic discrimination; (B) if the high-risk AI system is or will be monetized, how it is or is planned to be monetized; and (C) an evaluation of the costs and benefits to consumers and other end users. (c) A deployer that is also a developer and is required to submit reports under subdivision three of this section may submit a single joint report provided it contains the information required in this subdivision.
G-02 Public Transparency & Documentation · G-02.4 · Government · Automated Decisionmaking
Civ. Rights Law § 88(5)
Plain Language
The Attorney General must maintain a publicly accessible online database, updated biannually, containing the reports and audits filed by developers and deployers. Developers and deployers may request redactions of sensitive or protected information under a process to be promulgated by the AG. While the AG maintains the database, the obligation to file reportable content falls on the developers and deployers — the public accessibility of their filings effectively creates a public transparency obligation.
Statutory Text
5. The attorney general shall: (a) promulgate rules for a process whereby developers and deployers may request redaction of portions of reports required under this section to ensure that they are not required to disclose sensitive and protected information; and (b) maintain an online database that is accessible to the general public with reports, redacted in accordance with this subdivision, and audits required by this article, which database shall be updated biannually.
R-02 Regulatory Disclosure & Submissions · R-02.1 · Developer · Deployer · Automated Decisionmaking
Civ. Rights Law § 88(6)
Plain Language
For high-risk AI systems already in deployment when the law takes effect, developers and deployers have an 18-month grace period to complete and file their first report and associated audit. After the first filing, developers must file annually and deployers must file biennially. This transitional provision applies only to pre-existing deployments — new deployments after the effective date follow the standard six-month timeline under § 88(3)–(4).
Statutory Text
6. For high-risk AI systems which are already in deployment at the time of the effective date of this article, developers and deployers shall have eighteen months from such effective date to complete and file the first report and associated independent audit required by this article. (a) Each developer of a high-risk AI system shall thereafter file at least one report annually following the submission of the first report under this subdivision. (b) Each deployer of a high-risk AI system shall thereafter file at least one report every two years following the submission of the first report under this subdivision.
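The transitional rule changes only the first filing deadline; the recurring cadence is unchanged. As an illustrative aid only (not part of the bill, and not legal advice), the distinction between pre-existing and new deployments can be sketched as follows, with hypothetical names:

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    # Minimal month arithmetic; assumes the day of month exists in the
    # target month (true for the dates used in this sketch).
    y, m = divmod(d.month - 1 + months, 12)
    return d.replace(year=d.year + y, month=m + 1)

def first_report_due(effective_date: date, deployment_date: date) -> date:
    """Sketch of the Civ. Rights Law § 88(3)-(4), (6) first-report deadlines.

    Systems already in deployment on the effective date get eighteen
    months from that date (§ 88(6)); new deployments follow the standard
    six-month timeline from deployment.
    """
    if deployment_date <= effective_date:
        return add_months(effective_date, 18)  # § 88(6) transitional grace period
    return add_months(deployment_date, 6)      # standard § 88(3)-(4) timeline
```

So a system deployed before a hypothetical 2027-06-09 effective date would owe its first report by 2028-12-09, while one deployed on 2027-09-01 would owe it by 2028-03-01.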
G-01 AI Governance Program & Documentation · G-01.1 · G-01.2 · Developer · Deployer · Automated Decisionmaking
Civ. Rights Law § 89(1)–(2)
Plain Language
Every developer and deployer of high-risk AI systems must plan, document, and implement a risk management policy and program covering the principles, processes, and personnel used to identify, document, and mitigate foreseeable algorithmic discrimination risks. The program must be iterative and systematically reviewed and updated over the system's life cycle. Reasonableness is assessed considering NIST AI RMF 1.0 (or an AG-approved equivalent framework), the entity's size and complexity, the system's nature and scope, and the sensitivity and volume of data processed. A single policy and program may cover multiple high-risk AI systems developed or deployed by the same entity, provided it is sufficient for each. This creates both an establishment obligation and an ongoing maintenance obligation.
Statutory Text
1. Each developer or deployer of high-risk AI systems shall plan, document, and implement a risk management policy and program to govern development or deployment, as applicable, of such high-risk AI system. The risk management policy and program shall specify and incorporate the principles, processes, and personnel that the deployer uses to identify, document, and mitigate known or reasonably foreseeable risks of algorithmic discrimination covered under subdivision one of section eighty-six of this article. The risk management policy and program shall be an iterative process planned, implemented, and regularly and systematically reviewed and updated over the life cycle of a high-risk AI system, requiring regular, systematic review and updates, including updates to documentation. A risk management policy and program implemented and maintained pursuant to this section shall be reasonable considering: (a) The guidance and standards set forth in: (i) version 1.0 of the "Artificial Intelligence Risk Management Framework" published by the National Institute of Standards and Technology in the United States department of commerce, or (ii) another substantially equivalent framework selected at the discretion of the attorney general, if such framework was designed to manage risks associated with AI systems, is nationally or internationally recognized and consensus-driven, and is at least as stringent as version 1.0 of the "Artificial Intelligence Risk Management Framework" published by the National Institute of Standards and Technology; (b) The size and complexity of the developer or deployer; (c) The nature, scope, and intended uses of the high-risk AI system developed or deployed; and (d) The sensitivity and volume of data processed in connection with the high-risk AI system.
2. A risk management policy and program implemented pursuant to subdivision one of this section may cover multiple high-risk AI systems developed by the same developer or deployed by the same deployer if sufficient.
R-02 Regulatory Disclosure & Submissions · R-02.2 · Developer · Deployer · Automated Decisionmaking
Civ. Rights Law § 89(3)
Plain Language
The Attorney General may require developers or deployers to produce their risk management policy and program on demand, in a form and manner the AG prescribes, and may evaluate it for compliance. This is a responsive regulatory disclosure obligation — entities must maintain their risk management documentation in a form that can be produced to the AG when requested.
Statutory Text
3. The attorney general may require a developer or a deployer to disclose the risk management policy and program implemented pursuant to subdivision one of this section in a form and manner prescribed by the attorney general. The attorney general may evaluate the risk management policy and program to ensure compliance with this section.
S-02 Prohibited Conduct & Output Restrictions · S-02.1 · Developer · Deployer · Automated Decisionmaking
Civ. Rights Law § 89-a
Plain Language
No person or entity may develop, deploy, use, or sell an AI system that evaluates or classifies individuals' trustworthiness over time based on social behavior or personal characteristics, where the resulting social score leads to: differential treatment in unrelated social contexts, unjustified or disproportionate differential treatment, or infringement of constitutional or statutory rights. This is a categorical prohibition — there is no compliance pathway; social scoring AI systems meeting these criteria are simply banned.
Statutory Text
No person, partnership, association or corporation shall develop, deploy, use, or sell an AI system which evaluates or classifies the trustworthiness of natural persons over a certain period of time based on their social behavior or known or predicted personal or personality characteristics, with the social score leading to any of the following: 1. differential treatment of certain natural persons or whole groups thereof in social contexts which are unrelated to the contexts in which the data was originally generated or collected; 2. differential treatment of certain natural persons or whole groups thereof that is unjustified or disproportionate to their social behavior or its gravity; or 3. the infringement of any right guaranteed under the United States constitution, the New York constitution, or state or federal law.
Other · Automated Decisionmaking
Civ. Rights Law § 89-b
Plain Language
A developer may qualify for a safe harbor exempting it from all substantive obligations under Article 8-A if it meets four conditions: (1) obtains signed contractual commitments from every deployer that the AI system will not be used as a high-risk AI system; (2) implements reasonable technical safeguards to prevent or detect high-risk use, or otherwise demonstrates reasonable steps to ensure that unauthorized deployments are not being used as high-risk AI systems; (3) prominently displays on its website, marketing materials, and licensing agreements that the system cannot be used as a high-risk AI system; and (4) maintains deployer agreements for at least five years. The safe harbor may be used as evidence to rebut the presumption of violation in private actions under § 89-c(3)(b). This provision creates no new affirmative compliance obligation; it defines an exemption pathway.
Statutory Text
A developer may be exempt from its duties and obligations under sections eighty-six, eighty-six-a, eighty-six-b, eighty-seven, eighty-eight, and eighty-nine of this article if such developer: 1. receives a written and signed contractual agreement from each deployer authorized to use the artificial intelligence system developed by such developer, including the developer if they are also a deployer, that such artificial intelligence system will not be used as a high-risk AI system; 2. implements reasonable technical safeguards designed to prevent or detect high-risk AI system use cases or otherwise demonstrates reasonable steps taken to ensure that any unauthorized deployments of its AI systems are not being used as a high-risk AI system; 3. prominently displays on its website, in marketing materials, and in all licensing agreements offered to prospective deployers of its AI system that the AI system cannot be used as a high-risk AI system; and 4. maintains records of deployer agreements for a period of not less than five years.
Other · Automated Decisionmaking
Exec. Law § 296(23)
Plain Language
Violations of the AI algorithmic discrimination prohibition under Civil Rights Law § 86 are also unlawful discriminatory practices under Executive Law § 296 (the New York Human Rights Law). This extends the existing Human Rights Law enforcement framework — including the New York State Division of Human Rights complaint process — to AI discrimination violations. It creates no new compliance obligation beyond what § 86 already requires.
Statutory Text
23. It shall be an unlawful discriminatory practice under this section for a deployer or a developer, as such terms are defined in section eighty-five of the civil rights law, to engage in an unlawful discriminatory practice under section eighty-six of the civil rights law.