A-08884
NY · State · USA
● Pending
New York Assembly Bill 8884 — An Act to amend the civil rights law and the executive law, in relation to the use of artificial intelligence systems (New York Artificial Intelligence Act)
Summary

The New York AI Act imposes comprehensive obligations on developers and deployers of high-risk AI systems used in consequential decisions affecting employment, housing, credit, healthcare, education, law enforcement, and other domains. Core requirements include a duty to take reasonable care to prevent algorithmic discrimination, mandatory independent third-party audits on a recurring schedule, periodic reporting to the attorney general with public database access, and establishment of a risk management policy and program aligned with the NIST AI RMF. End users must receive advance notice before a high-risk AI system is used in a consequential decision and must be offered the right to opt out in favor of human decision-making or to appeal after the fact. Social scoring AI systems are categorically prohibited. Enforcement is through both the attorney general (with civil penalties up to $20,000 per violation and injunctive relief) and a private right of action with a plaintiff-favorable presumption at the motion to dismiss stage. The audit provisions take effect two years after enactment; all other provisions take effect one year after enactment.

Enforcement & Penalties
Enforcement Authority
Attorney general enforcement: the attorney general may apply to the supreme court for injunctive relief against violations of sections 86-a, 86-b, 87, 88, 89, or 89-a, without requiring proof of actual injury. Private right of action: any person harmed by a violation of sections 86-a, 86-b, 87, 88, 89, or 89-a may commence a plenary proceeding. At the motion to dismiss stage, the court shall presume the AI system was operated in violation of law and that the violation caused the alleged harm; the defendant must rebut by clear and convincing evidence. A developer may rebut presumptions by demonstrating compliance with the safe harbor under section 89-b. The supreme court has jurisdiction over all enforcement actions.
Penalties
AG enforcement: civil penalty of up to $20,000 per violation, injunctive relief, and restitution. The court may make allowances to the attorney general under CPLR § 8303(a)(6). Private right of action: compensatory damages and legal fees to the prevailing party. Whistleblower claims: appropriate relief as provided in Labor Law § 740(5). There is no statutory minimum recovery for private plaintiffs; compensatory damages require proof of actual harm.
Who Is Covered
"Deployer" means any person, partnership, association or corporation that offers or uses an AI system for commerce in the state of New York, or provides an AI system for use by the general public in the state of New York. A deployer shall not include any natural person using an AI system for personal use. A developer may also be considered a deployer if its actions satisfy this definition.
"Developer" means a person, partnership, or corporation that designs, codes, or produces an AI system, or creates a substantial change with respect to an AI system, whether for its own use in the state of New York or for use by a third party in the state of New York. A deployer may also be considered a developer if its actions satisfy this definition.
What Is Covered
"Artificial intelligence system" or "AI system" means a machine-based system or combination of systems, that for explicit and implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Artificial intelligence system shall not include: (a) any system that (i) is used by a business entity solely for internal purposes and (ii) is not used as a substantial factor in a consequential decision; or (b) any software used primarily for basic computerized processes, such as anti-malware, anti-virus, auto-correct functions, calculators, databases, data storage, electronic communications, firewall, internet domain registration, internet website loading, networking, spam and robocall-filtering, spellcheck tools, spreadsheets, web caching, web hosting, or any tool that relates only to internal management affairs such as ordering office supplies or processing payments, and that do not materially affect the rights, liberties, benefits, safety or welfare of any individual within the state.
"High-risk AI system" means any AI system that, when deployed: (a) is a substantial factor in making a consequential decision; or (b) will have a material impact on the statutory or constitutional rights, civil liberties, safety, or welfare of an individual in the state.
Compliance Obligations · 14 obligations
H-02 Non-Discrimination & Bias Assessment · H-02.1 · H-02.3 · Developer · Deployer · Automated Decisionmaking
Civil Rights Law § 86(1)-(2)
Plain Language
Developers and deployers of high-risk AI systems must exercise reasonable care to prevent foreseeable algorithmic discrimination resulting from the use, sale, or sharing of their systems. Before using, selling, or sharing a high-risk AI system, the developer or deployer must have completed an independent audit under § 87 confirming reasonable care was taken. Algorithmic discrimination covers unjustified differential treatment based on an extensive list of protected characteristics. Self-testing to identify and mitigate bias, pool-expansion efforts for diversity, and private club exemptions are carved out from the definition of algorithmic discrimination. This provision is also declared an unlawful discriminatory practice under Executive Law § 296(23), bringing it within the jurisdiction of New York's human rights enforcement framework.
Statutory Text
1. A developer or deployer shall take reasonable care to prevent foreseeable risk of algorithmic discrimination that is a consequence of the use, sale, or sharing of a high-risk AI system or a product featuring a high-risk AI system. 2. Any developer or deployer that uses, sells, or shares a high-risk AI system shall have completed an independent audit, pursuant to section eighty-seven of this article, confirming that the developer or deployer has taken reasonable care to prevent foreseeable risk of algorithmic discrimination with respect to such high-risk AI system.
H-01 Human Oversight of Automated Decisions · H-01.3 · H-01.4 · Deployer · Automated Decisionmaking
Civil Rights Law § 86-a(1)(a)-(d)
Plain Language
Before using a high-risk AI system to make or assist in making a consequential decision, a deployer must give the end user at least five business days' advance notice — in clear, conspicuous, multilingual terms — that AI will be used. The deployer must also provide a meaningful opportunity to opt out and have the decision made by a human instead, with no adverse consequences for opting out and a 45-day deadline to render the human decision. When the AI decision would confer a benefit, the deployer must offer the end user the option to waive the five-day waiting period; if waived, notice must still be given as early as practicable. End users may exercise the opt-out no more than once per consequential decision in a six-month period. An urgent-necessity exception applies where compliance would cause imminent detriment to the end user's welfare (e.g., emergency benefits), but even in that case the right to request human review is never waived. These rights cannot be waived by contract.
Statutory Text
1. (a) Any deployer that employs a high-risk AI system for a consequential decision shall comply with the following requirements; provided, however, that where there is an urgent necessity for a decision to be made to confer a benefit to the end user, including, but not limited to, social benefits, housing access, or dispensing of emergency funds, and compliance with this section would cause imminent detriment to the welfare of the end user, such obligation shall be considered waived; provided further, that nothing in this section shall be construed to waive a natural person's option to request human review of the decision: (i) inform the end user at least five business days prior to the use of such system for the making of a consequential decision in clear, conspicuous, and consumer-friendly terms, made available in each of the languages in which the company offers its end services, that AI systems will be used to make a decision or to assist in making a decision; and (ii) allow sufficient time and opportunity in a clear, conspicuous, and consumer-friendly manner for the consumer to opt-out of the automated consequential decision process and for the decision to be made by a human representative. A consumer may not be punished or face any other adverse action for opting out of a decision by an AI system and the deployer shall render a decision to the consumer within forty-five days. (b) If a deployer employs a high-risk AI system for a consequential decision to determine whether to or on what terms to confer a benefit on an end user, the deployer shall offer the end user the option to waive their right to advance notice of five business days under this subdivision. 
(c) If the end user clearly and affirmatively waives their right to five business days' notice, the deployer shall then inform the end user as early as practicable before the making of the consequential decision in clear, conspicuous, and consumer-friendly terms, made available in each of the languages in which the company offers its end services, that AI systems will be used to make a decision or to assist in making a decision. The deployer shall allow sufficient time and opportunity in a clear, conspicuous, and consumer-friendly manner for the consumer to opt-out of the automated process and for the decision to be made by a human representative. A consumer may not be punished or face any other adverse action for opting out of a decision by an AI system and the deployer shall render a decision to the consumer within forty-five days. (d) An end user shall be entitled to no more than one opt-out with respect to the same consequential decision within a six-month period.
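Taken together, the notice, waiver, and opt-out rules above are essentially timing logic, and can be sketched in a few lines. The Python below is an illustrative sketch only, not a compliance tool: the function names are hypothetical, holidays are ignored in the business-day count, and the statute's six-month opt-out window is approximated as 183 days.

```python
from datetime import date, timedelta

def add_business_days(start: date, days: int) -> date:
    """Advance `start` by `days` business days (Mon-Fri; holidays ignored)."""
    current = start
    while days > 0:
        current += timedelta(days=1)
        if current.weekday() < 5:  # Monday=0 .. Friday=4
            days -= 1
    return current

def earliest_decision_date(notice_sent: date, waived: bool = False) -> date:
    """Earliest date the consequential decision may be made.

    Section 86-a(1)(a)(i) requires notice at least five business days
    before the system is used; section 86-a(1)(b)-(c) lets the end user
    waive that period for a benefit-conferring decision.
    """
    return notice_sent if waived else add_business_days(notice_sent, 5)

def opt_out_available(prior_opt_outs: list[date], today: date) -> bool:
    """Section 86-a(1)(d): at most one opt-out for the same consequential
    decision within a six-month window (approximated here as 183 days)."""
    window_start = today - timedelta(days=183)
    return not any(d >= window_start for d in prior_opt_outs)
```

On this sketch, notice sent on Monday, January 6, 2025 permits a decision no earlier than Monday, January 13, 2025; a valid waiver collapses the wait, though notice must still be given as early as practicable.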
H-01 Human Oversight of Automated Decisions · H-01.4 · H-01.5 · Deployer · Automated Decisionmaking
Civil Rights Law § 86-a(2)(a)-(b)
Plain Language
After a high-risk AI system has been used in a consequential decision, the deployer must notify the end user within five days and provide an accessible appeal process. The appeal must allow the end user to (1) formally contest the decision, (2) submit supporting information, and (3) obtain meaningful human review. The deployer must respond within 45 days, extendable once by 45 days for complex or high-volume appeals with notice and reasons to the end user. Each end user may appeal the same consequential decision only once within a six-month period. Under § 86-a(5), an end user who exercised the pre-decision opt-out right under subdivision 1 cannot also exercise the post-decision appeal right under this subdivision for the same decision.
Statutory Text
2. (a) Any deployer that employs a high-risk AI system for a consequential decision shall inform the end user within five days in a clear, conspicuous and consumer-friendly manner if a high-risk AI system has been used to make a consequential decision. The deployer shall then provide and explain a process for the end user to appeal the decision, which shall at minimum allow the end user to (i) formally contest the decision, (ii) provide information to support their position, and (iii) obtain meaningful human review of the decision. A deployer shall respond to an end user's appeal within forty-five days of receipt of the appeal. That period may be extended once by forty-five additional days where reasonably necessary, taking into account the complexity and number of appeals. The deployer shall inform the end user of any such extension within forty-five days of receipt of the appeal, together with the reasons for the delay. (b) An end user shall be entitled to no more than one appeal with respect to the same consequential decision in a six-month period.
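The appeal timeline in subdivision two is likewise mechanical: a 45-day response clock, one optional 45-day extension, a one-appeal-per-six-months limit, and (per § 86-a(5)) mutual exclusivity with the pre-decision opt-out. A hedged Python sketch, with hypothetical names and the six-month window again approximated as 183 days:

```python
from datetime import date, timedelta

def appeal_response_deadline(received: date, extended: bool = False) -> date:
    """Section 86-a(2)(a): respond within 45 days of receiving the appeal;
    the period may be extended once by 45 additional days, with notice of
    the extension (and reasons) given within the original 45 days."""
    return received + timedelta(days=90 if extended else 45)

def appeal_available(prior_appeals: list[date], opted_out: bool, today: date) -> bool:
    """Section 86-a(2)(b) limits the end user to one appeal per decision
    in a six-month window; section 86-a(5) bars an appeal where the
    pre-decision opt-out was already exercised for the same decision."""
    if opted_out:
        return False
    window_start = today - timedelta(days=183)
    return not any(d >= window_start for d in prior_appeals)
```

For example, an appeal received January 1, 2025 must be answered by February 15, 2025, or by April 1, 2025 if the single extension is properly invoked.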
Other · Automated Decisionmaking
Civil Rights Law § 86-a(3)
Plain Language
Developers and deployers are legally responsible for the quality and accuracy of all consequential decisions made by their high-risk AI systems, including any resulting bias or algorithmic discrimination. This is a strict liability allocation — the developer or deployer cannot disclaim responsibility by attributing errors to the AI system itself. Combined with the private right of action in § 89-c(2), this creates direct liability exposure for harm caused by inaccurate or discriminatory AI outputs.
Statutory Text
3. The deployer or developer of a high-risk AI system is legally responsible for quality and accuracy of all consequential decisions made, including any bias or algorithmic discrimination resulting from the operation of the AI system on their behalf.
G-03 Whistleblower & Anti-Retaliation Protections · G-03.3 · Developer · Deployer · Automated Decisionmaking
Civil Rights Law § 86-b(1)-(2)
Plain Language
Developers and deployers of high-risk AI systems must not prevent employees — including independent contractors and former employees — from disclosing information to the attorney general when the employee reasonably believes it indicates a violation of the article. Retaliation against employees who make such disclosures is prohibited. Employment contracts and terms of employment cannot restrict these disclosures. Harmed employees may seek court relief under Labor Law § 740(5). These protections supplement but do not limit the general whistleblower protections under Labor Law § 740.
Statutory Text
1. Developers and/or deployers of high-risk AI systems shall not: (a) prevent any of their employees from disclosing information to the attorney general, including through terms and conditions of employment or seeking to enforce terms and conditions of employment, if the employee has reasonable cause to believe the information indicates a violation of this article; or (b) retaliate against an employee for disclosing information to the attorney general pursuant to this section. 2. An employee harmed by a violation of this article may petition a court for appropriate relief as provided in subdivision five of section seven hundred forty of the labor law.
G-03 Whistleblower & Anti-Retaliation Protections · G-03.4 · Developer · Deployer · Automated Decisionmaking
Civil Rights Law § 86-b(3)
Plain Language
Developers and deployers must provide clear notice to all employees working on high-risk AI systems of their rights under the article, including the right of contractor and subcontractor employees to use the developer's internal anonymous disclosure process. Two safe-harbor methods create a presumption of compliance: (a) continuously posting workplace notices, onboarding new employees with equivalent notice, and periodically notifying remote workers; or (b) providing written notice at least annually to all employees with documented receipt and acknowledgment.
Statutory Text
3. Developers and deployers of high-risk AI systems shall provide a clear notice to all of their employees working on such AI systems of their rights and responsibilities under this article, including the right of employees of contractors and subcontractors to use the developer's internal process for making protected disclosures pursuant to subdivision four of this section. A developer or deployer is presumed to be in compliance with the requirements of this subdivision if the developer or deployer does either of the following: (a) at all times post and display within all workplaces maintained by the developer or deployer a notice to all employees of their rights and responsibilities under this article, ensure that all new employees receive equivalent notice, and ensure that employees who work remotely periodically receive an equivalent notice; or (b) no less frequently than once every year, provide written notice to all employees of their rights and responsibilities under this article and ensure that the notice is received and acknowledged by all of those employees.
G-03 Whistleblower & Anti-Retaliation Protections · G-03.1 · Developer · Deployer · Automated Decisionmaking
Civil Rights Law § 86-b(4)
Plain Language
Each developer and deployer must maintain a reasonable internal anonymous disclosure channel for employees who believe in good faith that the entity has violated the article, violated any other law, made false or misleading statements about its risk management program, or failed to disclose known risks. The process must include at minimum a monthly status update to the disclosing employee on the investigation and any actions taken. The anonymous disclosure channel must cover contractor and subcontractor employees as well, per the notice requirement in § 86-b(3).
Statutory Text
4. Each developer and deployer shall provide a reasonable internal process through which an employee may anonymously disclose information to the developer or deployer if the employee believes in good faith that the information indicates that the developer or deployer has violated any provision of this article or any other law, or has made false or materially misleading statements related to its risk management policy and program, or failed to disclose known risks to employees, including, at a minimum, a monthly update to the person who made the disclosure regarding the status of the developer's or deployer's investigation of the disclosure and the actions taken by the developer or deployer in response to the disclosure.
G-01 AI Governance Program & Documentation · G-01.5 · Developer · Deployer · Automated Decisionmaking
Civil Rights Law § 87(1)-(3), (5)-(9)
Plain Language
Developers and deployers of high-risk AI systems must engage independent third-party auditors on recurring schedules. Developers must complete a first audit within six months of initial offering or deployment, then annually. Deployers must complete a first audit within six months of deployment, a second one year later, then every two years. Developer audits must evaluate reasonable care against algorithmic discrimination and conformity of the risk management program. Deployer audits must also assess system accuracy and reliability against intended and actual use cases. Auditors must be independent — no prior service relationship with the company in the past 12 months, no competitive conflict for 5 years post-audit, no contingent fees. Audits may use AI tools in part (e.g., controlled testing, pattern detection) but cannot be completed entirely by AI — a different high-risk AI system cannot be used for auditing, and AI-drafted audits require meaningful human review. Auditors must receive all prior § 88 reports. Cross-compliance: audits conducted under other applicable law that satisfy all § 87 requirements are deemed compliant.
Statutory Text
1. Developers of high-risk AI systems shall cause to be conducted third-party audits in accordance with this section. (a) A developer of a high-risk AI system shall complete at least: (i) a first audit within six months after completion of development of the high-risk AI system and the initial offering of the high-risk AI system to a deployer for deployment or, if the developer is first deployer to deploy the high-risk AI system, after initial deployment; and (ii) one audit every one year following the submission of the first audit. (b) A developer audit under this section shall include: (i) an evaluation and determination of whether the developer has taken reasonable care to prevent foreseeable risk of algorithmic discrimination with respect to such high-risk AI system; and (ii) an evaluation of the developer's documented risk management policy and program required under section eighty-nine of this article for conformity with subdivision one of such section eighty-nine. 2. Deployers of high-risk AI systems shall cause to be conducted third-party audits in accordance with this section. (a) A deployer of a high-risk AI system shall complete at least: (i) a first audit within six months after initial deployment; (ii) a second audit within one year following the submission of the first audit; and (iii) one audit every two years following the submission of the second audit. (b) A deployer audit under this section shall include: (i) an evaluation and determination of whether the deployer has taken reasonable care to prevent foreseeable risk of algorithmic discrimination with respect to such high-risk AI system; (ii) an evaluation of system accuracy and reliability with respect to such high-risk AI system's deployer-intended and actual use cases; and (iii) an evaluation of the deployer's documented risk management policy and program required under section eighty-nine of this article for conformity with subdivision one of such section eighty-nine. 3. 
A deployer or developer may hire more than one auditor to fulfill the requirements of this section. 5. The independent auditor shall have complete and unredacted copies of all reports previously filed by the deployer or developer under section eighty-eight of this article. 6. An audit conducted under this section may be completed in part, but shall not be completed entirely, with the assistance of an AI system. (a) Acceptable auditor uses of an AI system include, but are not limited to: (i) use of an audited high-risk AI system in a controlled environment without impacts on end users for system testing purposes; or (ii) detecting patterns in the behavior of an audited AI system. (b) An auditor shall not: (i) use a different high-risk AI system that is not the subject of an audit to complete an audit; or (ii) use an AI system to draft an audit under this section without meaningful human review and oversight. 7. (a) An auditor shall be an independent entity including but not limited to an individual, non-profit, firm, corporation, partnership, cooperative, or association. (b) For the purposes of this article, no auditor may be commissioned by a developer or deployer of a high-risk AI system if such entity: (i) has already been commissioned to provide any auditing or non-auditing service, including but not limited to financial auditing, cybersecurity auditing, or consulting services of any type, to the commissioning company in the past twelve months; or (ii) is, will be, or plans to be engaged in the business of developing or deploying an AI system that can compete commercially with such developer's or deployer's high-risk AI system in the five years following an audit. (c) Fees paid to auditors may not be contingent on the result of the audit and the commissioning company shall not provide any incentives or bonuses for a positive audit result. 8. 
The attorney general may promulgate further rules to ensure (a) the independence of auditors under this section, and (b) that teams conducting audits incorporate feedback from communities that may foreseeably be the subject of algorithmic discrimination with respect to the AI system being audited. 9. If a developer or deployer has an audit completed for the purpose of complying with another applicable federal, state, or local law or regulation, and the audit otherwise satisfies all other requirements of this section, such audit shall be deemed to satisfy the requirements of this section.
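The audit cadence in subdivisions one and two reduces to a deadline schedule: developers owe a first audit within six months and one annually thereafter, while deployers owe a first audit within six months, a second a year later, then one every two years. A minimal sketch under stated assumptions (names are illustrative; "within six months" is modeled as a calendar-month deadline, and each audit is assumed submitted on its due date, since each follow-up interval runs from the prior submission):

```python
import calendar
from datetime import date

def add_months(d: date, months: int) -> date:
    """Calendar-month arithmetic, clamping to the target month's last day."""
    m = d.month - 1 + months
    y, mo = d.year + m // 12, m % 12 + 1
    return date(y, mo, min(d.day, calendar.monthrange(y, mo)[1]))

def audit_due_dates(trigger: date, role: str, horizon: int = 5) -> list[date]:
    """Outer deadlines under section 87, starting from `trigger` (initial
    offering to a deployer, or initial deployment)."""
    due = [add_months(trigger, 6)]               # first audit: within 6 months
    if role == "developer":
        for _ in range(horizon):
            due.append(add_months(due[-1], 12))  # § 87(1)(a): then one every year
    else:
        due.append(add_months(due[-1], 12))      # § 87(2)(a): second audit one year later
        for _ in range(horizon):
            due.append(add_months(due[-1], 24))  # then one every two years
    return due
```

The divergence shows up from the third audit onward: a system first offered January 15, 2026 puts a developer on a July 2026, 2027, 2028 rhythm, while a deployer of the same system skips to July 2029 after its second audit.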
R-02 Regulatory Disclosure & Submissions · R-02.1 · Developer · Automated Decisionmaking
Civil Rights Law § 88(1)-(3)
Plain Language
Developers must file reports with the attorney general on a defined schedule: within six months of initial offering or deployment, annually thereafter, and within six months of any substantial change. Developer reports must describe intended and disallowed uses, development methodology, training data overview, and information sufficient for deployers to monitor the system and fulfill their own obligations. Each report must be accompanied by a copy of the most recently completed independent audit. Substantial changes — new versions, releases, or updates affecting use cases, functionality, or expected outcomes — trigger an additional reporting obligation.
Statutory Text
1. Every developer and deployer of a high-risk AI system shall comply with the reporting requirements of this section. 2. Together with each report required to be filed under this section, every developer and deployer shall file with the attorney general a copy of the last completed independent audit required by this article. 3. Developers of high-risk AI systems shall complete and file with the attorney general reports in accordance with this subdivision. (a) A developer of a high-risk AI system shall complete and file with the attorney general at least: (i) a first report within six months after completion of development of the high-risk AI system and the initial offering of the high-risk AI system to a deployer for deployment or, if the developer is first deployer to deploy the high-risk AI system, after initial deployment; (ii) one report annually following the submission of the first report; and (iii) one report within six months of any substantial change to the high-risk AI system. (b) A developer report under this section shall include: (i) a description of the system including: (A) the uses of the high-risk AI system that the developer intends; and (B) any explicitly unintended or disallowed uses of the high-risk AI system; (ii) an overview of how the high-risk AI system was developed; (iii) an overview of the high-risk AI system's training data; and (iv) any other information necessary to allow a deployer to: (A) understand the outputs and monitor the system for compliance with this article; and (B) fulfill its duties under this article.
R-02 Regulatory Disclosure & Submissions · R-02.1 · Deployer · Automated Decisionmaking
Civil Rights Law § 88(4)
Plain Language
Deployers must file reports with the attorney general on a schedule: within six months of deployment, one year after the first report, then every two years, plus within six months of any substantial change. Deployer reports must describe actual and planned uses for consequential decisions, flag any developer-disallowed uses, and include an impact assessment covering algorithmic discrimination risks and mitigation steps, monetization details, and a cost-benefit evaluation for consumers. Each report must be accompanied by the latest independent audit. Entities that are both developer and deployer may file a single joint report covering both sets of requirements.
Statutory Text
4. Deployers of high-risk AI systems shall complete and file with the attorney general reports in accordance with this subdivision. (a) A deployer of a high-risk AI system shall complete and file with the attorney general at least: (i) a first report within six months after initial deployment; (ii) a second report within one year following the completion and filing of the first report; (iii) one report every two years following the completion and filing of the second report; and (iv) one report within six months of any substantial change to the high-risk AI system. (b) A deployer report under this section shall include: (i) a description of the system including: (A) the deployer's actual, intended, or planned uses of the high-risk AI system with respect to consequential decisions; and (B) whether the deployer is using the high-risk AI system for any developer unintended or disallowed uses; and (ii) an impact assessment including: (A) whether the high-risk AI system poses a risk of algorithmic discrimination and the steps taken to address the risk of algorithmic discrimination; (B) if the high-risk AI system is or will be monetized, how it is or is planned to be monetized; and (C) an evaluation of the costs and benefits to consumers and other end users. (c) A deployer that is also a developer and is required to submit reports under subdivision three of this section may submit a single joint report provided it contains the information required in this subdivision.
G-02 Public Transparency & Documentation · G-02.4 · Government · Automated Decisionmaking
Civil Rights Law § 88(5)
Plain Language
The attorney general must maintain a publicly accessible online database containing all developer and deployer reports and audits, updated biannually. Reports are published with redactions where developers or deployers have successfully requested protection of sensitive information through a process the attorney general will establish by rule. While this provision primarily directs the attorney general, it creates a constructive public disclosure obligation for developers and deployers — their reports and audits will be published unless they affirmatively seek and obtain redactions.
Statutory Text
5. The attorney general shall: (a) promulgate rules for a process whereby developers and deployers may request redaction of portions of reports required under this section to ensure that they are not required to disclose sensitive and protected information; and (b) maintain an online database that is accessible to the general public with reports, redacted in accordance with this subdivision, and audits required by this article, which database shall be updated biannually.
R-02 Regulatory Disclosure & Submissions · R-02.1 · Developer · Deployer · Automated Decisionmaking
Civil Rights Law § 88(6)
Plain Language
For high-risk AI systems already deployed when the article takes effect, developers and deployers receive an 18-month grace period to file their first report and associated audit. After the initial filing, developers must report annually and deployers every two years. This transitional provision gives existing operators more time than the six-month window that applies to newly developed or deployed systems under § 88(3)-(4).
Statutory Text
6. For high-risk AI systems which are already in deployment at the time of the effective date of this article, developers and deployers shall have eighteen months from such effective date to complete and file the first report and associated independent audit required by this article. (a) Each developer of a high-risk AI system shall thereafter file at least one report annually following the submission of the first report under this subdivision. (b) Each deployer of a high-risk AI system shall thereafter file at least one report every two years following the submission of the first report under this subdivision.
G-01 AI Governance Program & Documentation · G-01.1 · G-01.2 · Developer · Deployer · Automated Decisionmaking
Civil Rights Law § 89(1)-(3)
Plain Language
Each developer and deployer of a high-risk AI system must plan, document, and implement a risk management policy and program covering the identification, documentation, and mitigation of known and reasonably foreseeable algorithmic discrimination risks. The program must be iterative — regularly and systematically reviewed and updated over the system's life cycle, including updates to documentation. Reasonableness is assessed against NIST AI RMF v1.0 or an AG-approved equivalent framework, the entity's size and complexity, the system's nature and intended uses, and the sensitivity and volume of data processed. A single program may cover multiple high-risk AI systems if sufficient. The attorney general may require disclosure of the program and evaluate it for compliance. This obligation is the foundation that the independent audit under § 87 evaluates for conformity.
Statutory Text
1. Each developer or deployer of high-risk AI systems shall plan, document, and implement a risk management policy and program to govern development or deployment, as applicable, of such high-risk AI system. The risk management policy and program shall specify and incorporate the principles, processes, and personnel that the deployer uses to identify, document, and mitigate known or reasonably foreseeable risks of algorithmic discrimination covered under subdivision one of section eighty-six of this article. The risk management policy and program shall be an iterative process planned, implemented, and regularly and systematically reviewed and updated over the life cycle of a high-risk AI system, requiring regular, systematic review and updates, including updates to documentation. A risk management policy and program implemented and maintained pursuant to this section shall be reasonable considering: (a) The guidance and standards set forth in: (i) version 1.0 of the "Artificial Intelligence Risk Management Framework" published by the National Institute of Standards and Technology in the United States department of commerce, or (ii) another substantially equivalent framework selected at the discretion of the attorney general, if such framework was designed to manage risks associated with AI systems, is nationally or internationally recognized and consensus-driven, and is at least as stringent as version 1.0 of the "Artificial Intelligence Risk Management Framework" published by the National Institute of Standards and Technology; (b) The size and complexity of the developer or deployer; (c) The nature, scope, and intended uses of the high-risk AI system developed or deployed; and (d) The sensitivity and volume of data processed in connection with the high-risk AI system.
2. A risk management policy and program implemented pursuant to subdivision one of this section may cover multiple high-risk AI systems developed by the same developer or deployed by the same deployer if sufficient. 3. The attorney general may require a developer or a deployer to disclose the risk management policy and program implemented pursuant to subdivision one of this section in a form and manner prescribed by the attorney general. The attorney general may evaluate the risk management policy and program to ensure compliance with this section.
S-02 Prohibited Conduct & Output Restrictions · S-02.1 · Developer · Deployer · Automated Decisionmaking
Civil Rights Law § 89-a
Plain Language
No entity may develop, deploy, use, or sell an AI system that evaluates or classifies individuals' trustworthiness over time based on social behavior or personality characteristics where the resulting social score leads to differential treatment in unrelated contexts, unjustified or disproportionate differential treatment, or infringement of constitutional or statutory rights. This is a categorical prohibition: there is no compliance pathway that permits social scoring AI. The prohibition applies broadly to any person, partnership, association, or corporation, not just to developers or deployers of high-risk AI systems.
Statutory Text
No person, partnership, association or corporation shall develop, deploy, use, or sell an AI system which evaluates or classifies the trustworthiness of natural persons over a certain period of time based on their social behavior or known or predicted personal or personality characteristics, with the social score leading to any of the following: 1. differential treatment of certain natural persons or whole groups thereof in social contexts which are unrelated to the contexts in which the data was originally generated or collected; 2. differential treatment of certain natural persons or whole groups thereof that is unjustified or disproportionate to their social behavior or its gravity; or 3. the infringement of any right guaranteed under the United States constitution, the New York constitution, or state or federal law.