S-01169
NY · State · USA
● Pending
Proposed Effective Date
2026-01-01
New York Senate Bill 1169-A — An Act to amend the civil rights law and the executive law, in relation to the use of artificial intelligence systems (New York Artificial Intelligence Act)
Summary

The New York AI Act imposes comprehensive obligations on developers and deployers of high-risk AI systems used in consequential decisions affecting employment, education, housing, healthcare, financial services, law enforcement, and legal services. Core requirements include a duty of reasonable care to prevent algorithmic discrimination, mandatory independent third-party audits, periodic reporting to the Attorney General, and implementation of a risk management policy and program aligned with the NIST AI RMF. Deployers must provide end users with advance notice, opt-out rights, and post-decision appeal with meaningful human review. The bill prohibits social scoring AI systems and includes whistleblower protections. Enforcement is through the Attorney General (injunctions, up to $20,000 per violation, restitution) and a private right of action with compensatory damages, legal fees, and a plaintiff-favorable rebuttable presumption at the motion-to-dismiss stage. The audit requirements take effect two years after enactment; all other provisions take effect one year after enactment.

Enforcement & Penalties
Enforcement Authority
Attorney General enforcement and private right of action. The Attorney General may apply to the Supreme Court for injunctive relief and civil penalties for violations of sections 86-a, 86-b, 87, 88, 89, or 89-a without requiring proof of actual injury. Any person harmed by a violation of those sections may bring a private plenary proceeding. At the motion to dismiss stage, the court presumes the AI system was operated in violation of the specified law and that the violation caused the alleged harm; defendants must rebut by clear and convincing evidence. Developers may invoke the safe harbor under section 89-b as a rebuttal defense.
Penalties
AG enforcement: civil penalty of up to $20,000 per violation, injunctive relief, and restitution. No proof of actual injury required for AG injunction. Private right of action: compensatory damages and legal fees to the prevailing party. Private plaintiffs must demonstrate harm, but benefit from a rebuttable presumption of violation and causation at the motion to dismiss stage. Whistleblower employees may petition for appropriate relief as provided in Labor Law § 740(5).
Who Is Covered
"Deployer" means any person, partnership, association or corporation that offers or uses an AI system for commerce in the state of New York, or provides an AI system for use by the general public in the state of New York. A deployer shall not include any natural person using an AI system for personal use. A developer may also be considered a deployer if its actions satisfy this definition.
"Developer" means a person, partnership, or corporation that designs, codes, or produces an AI system, or creates a substantial change with respect to an AI system, whether for its own use in the state of New York or for use by a third party in the state of New York. A deployer may also be considered a developer if its actions satisfy this definition.
What Is Covered
"Artificial intelligence system" or "AI system" means a machine-based system or combination of systems, that for explicit and implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Artificial intelligence system shall not include: (a) any system that (i) is used by a business entity solely for internal purposes and (ii) is not used as a substantial factor in a consequential decision; or (b) any software used primarily for basic computerized processes, such as anti-malware, anti-virus, auto-correct functions, calculators, databases, data storage, electronic communications, firewall, internet domain registration, internet website loading, networking, spam and robocall-filtering, spellcheck tools, spreadsheets, web caching, web hosting, or any tool that relates only to internal management affairs such as ordering office supplies or processing payments, and that do not materially affect the rights, liberties, benefits, safety or welfare of any individual within the state.
"High-risk AI system" means any AI system that, when deployed: (a) is a substantial factor in making a consequential decision; or (b) will have a material impact on the statutory or constitutional rights, civil liberties, safety, or welfare of an individual in the state.
Compliance Obligations · 17 obligations
H-02 Non-Discrimination & Bias Assessment · H-02.1, H-02.3 · Developer, Deployer · Automated Decisionmaking
Civ. Rights Law § 86(1)-(2)
Plain Language
Developers and deployers of high-risk AI systems must exercise reasonable care to prevent foreseeable algorithmic discrimination — defined broadly to cover unjustified differential treatment across a wide range of protected characteristics. Before using, selling, or sharing a high-risk AI system, they must have completed an independent audit confirming this duty has been met. Testing to identify and mitigate bias is explicitly carved out of the definition of algorithmic discrimination, as is expanding applicant pools for diversity purposes. This is both a substantive standard of care and a pre-condition for lawful deployment.
Statutory Text
1. A developer or deployer shall take reasonable care to prevent foreseeable risk of algorithmic discrimination that is a consequence of the use, sale, or sharing of a high-risk AI system or a product featuring a high-risk AI system. 2. Any developer or deployer that uses, sells, or shares a high-risk AI system shall have completed an independent audit, pursuant to section eighty-seven of this article, confirming that the developer or deployer has taken reasonable care to prevent foreseeable risk of algorithmic discrimination with respect to such high-risk AI system.
H-01 Human Oversight of Automated Decisions · H-01.3, H-01.4 · Deployer · Automated Decisionmaking
Civ. Rights Law § 86-a(1)(a)-(d)
Plain Language
Before using a high-risk AI system for a consequential decision, deployers must give end users at least five business days' advance notice — in clear, multilingual terms — that AI will be used. They must also give the end user sufficient time to opt out of the AI decision process and have the decision made by a human instead, with no adverse consequences for opting out. If the decision confers a benefit, the deployer must offer the end user the option to waive the five-day notice period, after which notice must still be given as early as practicable. End users get one opt-out per consequential decision per six-month period. The entire advance notice and opt-out obligation is waived in cases of urgent necessity to confer a benefit (e.g., emergency funds), but even then the end user's right to request human review cannot be waived. Deployers must render a decision within 45 days of an opt-out request.
Statutory Text
(a) Any deployer that employs a high-risk AI system for a consequential decision shall comply with the following requirements; provided, however, that where there is an urgent necessity for a decision to be made to confer a benefit to the end user, including, but not limited to, social benefits, housing access, or dispensing of emergency funds, and compliance with this section would cause imminent detriment to the welfare of the end user, such obligation shall be considered waived; provided further, that nothing in this section shall be construed to waive a natural person's option to request human review of the decision: (i) inform the end user at least five business days prior to the use of such system for the making of a consequential decision in clear, conspicuous, and consumer-friendly terms, made available in each of the languages in which the company offers its end services, that AI systems will be used to make a decision or to assist in making a decision; and (ii) allow sufficient time and opportunity in a clear, conspicuous, and consumer-friendly manner for the consumer to opt-out of the automated consequential decision process and for the decision to be made by a human representative. A consumer may not be punished or face any other adverse action for opting out of a decision by an AI system and the deployer shall render a decision to the consumer within forty-five days. (b) If a deployer employs a high-risk AI system for a consequential decision to determine whether to or on what terms to confer a benefit on an end user, the deployer shall offer the end user the option to waive their right to advance notice of five business days under this subdivision. (c) If the end user clearly and affirmatively waives their right to five business days' notice, the deployer shall then inform the end user as early as practicable before the making of the consequential decision in clear, conspicuous, and consumer-friendly terms, made available in each of the languages in which the company offers its end services, that AI systems will be used to make a decision or to assist in making a decision. The deployer shall allow sufficient time and opportunity in a clear, conspicuous, and consumer-friendly manner for the consumer to opt-out of the automated process and for the decision to be made by a human representative. A consumer may not be punished or face any other adverse action for opting out of a decision by an AI system and the deployer shall render a decision to the consumer within forty-five days. (d) An end user shall be entitled to no more than one opt-out with respect to the same consequential decision within a six-month period.
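Illustrative Sketch
The timing rules above reduce to a few date computations: a five-business-day notice window, a 45-day decision deadline after an opt-out, and a one-opt-out-per-six-months limit. The Python sketch below models them under stated simplifications (business days exclude weekends but not holidays; a "six-month period" is approximated as 182 calendar days); all names are illustrative, not statutory terms.

```python
# Minimal sketch of the § 86-a(1) timing rules. Simplifications: business
# days exclude weekends only (not holidays), and a "six-month period" is
# approximated as 182 calendar days. All names are illustrative.
from datetime import date, timedelta

def earliest_decision_date(notice_given: date, business_days: int = 5) -> date:
    """Earliest date the AI-assisted consequential decision may be made:
    at least five business days after the end user is notified."""
    d, counted = notice_given, 0
    while counted < business_days:
        d += timedelta(days=1)
        if d.weekday() < 5:  # Monday=0 .. Friday=4
            counted += 1
    return d

def opt_out_available(prior_opt_outs: list[date], today: date) -> bool:
    """One opt-out per consequential decision within a six-month period."""
    return all((today - prior).days > 182 for prior in prior_opt_outs)

def human_decision_deadline(opt_out_received: date) -> date:
    """After an opt-out, the deployer must render a decision within 45 days."""
    return opt_out_received + timedelta(days=45)

if __name__ == "__main__":
    notice = date(2026, 3, 2)                          # a Monday
    print(earliest_decision_date(notice))              # 2026-03-09
    print(human_decision_deadline(date(2026, 3, 10)))  # 2026-04-24
```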
H-01 Human Oversight of Automated Decisions · H-01.4, H-01.5 · Deployer · Automated Decisionmaking
Civ. Rights Law § 86-a(2)(a)-(b)
Plain Language
After a consequential decision is made using a high-risk AI system, the deployer must notify the end user within five days. The deployer must also explain the appeal process, which must at minimum allow the end user to (1) formally contest the decision, (2) submit supporting information, and (3) obtain meaningful human review. The deployer must respond within 45 days, with one 45-day extension permitted where reasonably necessary. End users get one appeal per consequential decision per six-month period. Note the election requirement: under § 86-a(5), an end user may exercise either the pre-decision opt-out or the post-decision appeal, but not both for the same consequential decision.
Statutory Text
(a) Any deployer that employs a high-risk AI system for a consequential decision shall inform the end user within five days in a clear, conspicuous and consumer-friendly manner if a high-risk AI system has been used to make a consequential decision. The deployer shall then provide and explain a process for the end user to appeal the decision, which shall at minimum allow the end user to (i) formally contest the decision, (ii) provide information to support their position, and (iii) obtain meaningful human review of the decision. A deployer shall respond to an end user's appeal within forty-five days of receipt of the appeal. That period may be extended once by forty-five additional days where reasonably necessary, taking into account the complexity and number of appeals. The deployer shall inform the end user of any such extension within forty-five days of receipt of the appeal, together with the reasons for the delay. (b) An end user shall be entitled to no more than one appeal with respect to the same consequential decision in a six-month period.
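Illustrative Sketch
The appeal clock follows the same pattern: a 45-day response deadline, at most one 45-day extension, one appeal per six-month period, and the § 86-a(5) election between opt-out and appeal. A companion sketch, with the same caveats and hypothetical names:

```python
# Illustrative sketch of the § 86-a(2) appeal clock and the § 86-a(5)
# election between opt-out and appeal. Calendar-day math only; all
# identifiers are hypothetical.
from datetime import date, timedelta

RESPONSE_DAYS = 45   # deployer must respond within 45 days of the appeal
EXTENSION_DAYS = 45  # at most one extension, where reasonably necessary

def appeal_response_deadline(received: date, extended: bool = False) -> date:
    """Deadline for the deployer's response; the end user must be told of
    any extension, with reasons, within the original 45-day window."""
    return received + timedelta(days=RESPONSE_DAYS + (EXTENSION_DAYS if extended else 0))

def may_appeal(used_opt_out: bool, appeals_in_last_six_months: int) -> bool:
    """One appeal per consequential decision per six-month period; an end
    user who exercised the pre-decision opt-out may not also appeal."""
    return not used_opt_out and appeals_in_last_six_months == 0

if __name__ == "__main__":
    print(appeal_response_deadline(date(2026, 4, 1)))                   # 2026-05-16
    print(appeal_response_deadline(date(2026, 4, 1), extended=True))    # 2026-06-30
    print(may_appeal(used_opt_out=True, appeals_in_last_six_months=0))  # False
```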
Other · Automated Decisionmaking
Civ. Rights Law § 86-a(3)
Plain Language
Developers and deployers are legally responsible for the quality, accuracy, and non-discrimination of all consequential decisions made by or with the assistance of their high-risk AI systems. This is a liability attribution provision — it ensures that developers and deployers cannot disclaim responsibility for outcomes produced by their AI systems. It does not prescribe specific compliance steps; rather, it establishes who bears legal responsibility when other obligations are breached.
Statutory Text
The deployer or developer of a high-risk AI system is legally responsible for quality and accuracy of all consequential decisions made, including any bias or algorithmic discrimination resulting from the operation of the AI system on their behalf.
Other · Automated Decisionmaking
Civ. Rights Law § 86-a(4)
Plain Language
No party — whether developer, deployer, or end user — may contractually waive the rights or obligations under Section 86-a. This means end users cannot be required to waive their notice, opt-out, or appeal rights by contract, and developers and deployers cannot shift their obligations by agreement. This is a structural provision reinforcing the mandatory nature of the section's obligations rather than creating a new compliance duty.
Statutory Text
The rights and obligations under this section may not be waived by any person, partnership, association or corporation.
G-03 Whistleblower & Anti-Retaliation Protections · G-03.3 · Developer, Deployer · Automated Decisionmaking
Civ. Rights Law § 86-b(1)
Plain Language
Developers and deployers of high-risk AI systems may not prevent employees — including former employees and independent contractors — from disclosing information to the Attorney General when the employee reasonably believes the information indicates a violation of the New York AI Act. Employers may not enforce employment terms that would restrict such disclosures, and may not retaliate against employees who make them. Harmed employees may petition a court for relief under Labor Law § 740(5).
Statutory Text
Developers and/or deployers of high-risk AI systems shall not: (a) prevent any of their employees from disclosing information to the attorney general, including through terms and conditions of employment or seeking to enforce terms and conditions of employment, if the employee has reasonable cause to believe the information indicates a violation of this article; or (b) retaliate against an employee for disclosing information to the attorney general pursuant to this section.
G-03 Whistleblower & Anti-Retaliation Protections · G-03.4 · Developer, Deployer · Automated Decisionmaking
Civ. Rights Law § 86-b(3)
Plain Language
Developers and deployers must notify all employees working on high-risk AI systems of their rights under the Act, including the right of contractor and subcontractor employees to use the developer's internal disclosure process. Compliance is presumed if the entity either (a) continuously posts workplace notices, provides equivalent notice to new employees, and periodically notifies remote employees, or (b) provides annual written notice received and acknowledged by all employees. The two options are alternative safe harbors for demonstrating compliance.
Statutory Text
Developers and deployers of high-risk AI systems shall provide a clear notice to all of their employees working on such AI systems of their rights and responsibilities under this article, including the right of employees of contractors and subcontractors to use the developer's internal process for making protected disclosures pursuant to subdivision four of this section. A developer or deployer is presumed to be in compliance with the requirements of this subdivision if the developer or deployer does either of the following: (a) at all times post and display within all workplaces maintained by the developer or deployer a notice to all employees of their rights and responsibilities under this article, ensure that all new employees receive equivalent notice, and ensure that employees who work remotely periodically receive an equivalent notice; or (b) no less frequently than once every year, provide written notice to all employees of their rights and responsibilities under this article and ensure that the notice is received and acknowledged by all of those employees.
G-03 Whistleblower & Anti-Retaliation Protections · G-03.1 · Developer, Deployer · Automated Decisionmaking
Civ. Rights Law § 86-b(4)
Plain Language
Every developer and deployer must establish a reasonable internal process enabling employees to anonymously report suspected violations of the Act, false or misleading risk management statements, or failure to disclose known risks. The process must include at least monthly status updates to the disclosing employee on the investigation and any responsive actions taken. The scope of covered disclosures is broad — it covers violations of any provision of the Act, any other law, or misleading risk management statements.
Statutory Text
Each developer and deployer shall provide a reasonable internal process through which an employee may anonymously disclose information to the developer or deployer if the employee believes in good faith that the information indicates that the developer or deployer has violated any provision of this article or any other law, or has made false or materially misleading statements related to its risk management policy and program, or failed to disclose known risks to employees, including, at a minimum, a monthly update to the person who made the disclosure regarding the status of the developer's or deployer's investigation of the disclosure and the actions taken by the developer or deployer in response to the disclosure.
H-02 Non-Discrimination & Bias Assessment · H-02.6, H-02.7 · Developer, Deployer · Automated Decisionmaking
Civ. Rights Law § 87(1)-(3), (5)-(9)
Plain Language
Developers and deployers of high-risk AI systems must engage independent third-party auditors to assess whether they have taken reasonable care to prevent algorithmic discrimination and whether their risk management programs conform to statutory requirements. Developer audits are due within six months of the initial offering or deployment, then annually. Deployer audits are due within six months of deployment, with a second audit one year later and subsequent audits every two years. Deployer audits additionally cover system accuracy and reliability. Independence requirements are strict: auditors cannot have provided any auditing or non-auditing services to the commissioning entity in the prior 12 months, cannot compete (or plan to compete within five years after the audit) with the audited system, cannot receive contingent fees, and must receive complete, unredacted copies of all prior reports filed under § 88. Audits may use AI tools to assist (e.g., testing the system in a controlled environment) but cannot be completed entirely by AI and cannot use a different high-risk AI system. An audit completed under another law satisfies these requirements if it covers all required elements. This section takes effect two years after enactment.
Statutory Text
1. Developers of high-risk AI systems shall cause to be conducted third-party audits in accordance with this section. (a) A developer of a high-risk AI system shall complete at least: (i) a first audit within six months after completion of development of the high-risk AI system and the initial offering of the high-risk AI system to a deployer for deployment or, if the developer is first deployer to deploy the high-risk AI system, after initial deployment; and (ii) one audit every one year following the submission of the first audit. (b) A developer audit under this section shall include: (i) an evaluation and determination of whether the developer has taken reasonable care to prevent foreseeable risk of algorithmic discrimination with respect to such high-risk AI system; and (ii) an evaluation of the developer's documented risk management policy and program required under section eighty-nine of this article for conformity with subdivision one of such section eighty-nine. 2. Deployers of high-risk AI systems shall cause to be conducted third-party audits in accordance with this section. (a) A deployer of a high-risk AI system shall complete at least: (i) a first audit within six months after initial deployment; (ii) a second audit within one year following the submission of the first audit; and (iii) one audit every two years following the submission of the second audit. (b) A deployer audit under this section shall include: (i) an evaluation and determination of whether the deployer has taken reasonable care to prevent foreseeable risk of algorithmic discrimination with respect to such high-risk AI system; (ii) an evaluation of system accuracy and reliability with respect to such high-risk AI system's deployer-intended and actual use cases; and (iii) an evaluation of the deployer's documented risk management policy and program required under section eighty-nine of this article for conformity with subdivision one of such section eighty-nine. 3. A deployer or developer may hire more than one auditor to fulfill the requirements of this section. 5. The independent auditor shall have complete and unredacted copies of all reports previously filed by the deployer or developer under section eighty-eight of this article. 6. An audit conducted under this section may be completed in part, but shall not be completed entirely, with the assistance of an AI system. (a) Acceptable auditor uses of an AI system include, but are not limited to: (i) use of an audited high-risk AI system in a controlled environment without impacts on end users for system testing purposes; or (ii) detecting patterns in the behavior of an audited AI system. (b) An auditor shall not: (i) use a different high-risk AI system that is not the subject of an audit to complete an audit; or (ii) use an AI system to draft an audit under this section without meaningful human review and oversight. 7. (a) An auditor shall be an independent entity including but not limited to an individual, non-profit, firm, corporation, partnership, cooperative, or association. 
(b) For the purposes of this article, no auditor may be commissioned by a developer or deployer of a high-risk AI system if such entity: (i) has already been commissioned to provide any auditing or non-auditing service, including but not limited to financial auditing, cybersecurity auditing, or consulting services of any type, to the commissioning company in the past twelve months; or (ii) is, will be, or plans to be engaged in the business of developing or deploying an AI system that can compete commercially with such developer's or deployer's high-risk AI system in the five years following an audit. (c) Fees paid to auditors may not be contingent on the result of the audit and the commissioning company shall not provide any incentives or bonuses for a positive audit result. 8. The attorney general may promulgate further rules to ensure (a) the independence of auditors under this section, and (b) that teams conducting audits incorporate feedback from communities that may foreseeably be the subject of algorithmic discrimination with respect to the AI system being audited. 9. If a developer or deployer has an audit completed for the purpose of complying with another applicable federal, state, or local law or regulation, and the audit otherwise satisfies all other requirements of this section, such audit shall be deemed to satisfy the requirements of this section.
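Illustrative Sketch
The § 87 audit cadences can be expressed as simple recurrence rules: developers audit six months after initial offering and annually thereafter; deployers audit six months after deployment, again one year later, then every two years. A minimal sketch, approximating six months as 182 days and ignoring any re-audit triggers elsewhere in the Act; names are illustrative:

```python
# Sketch of the § 87 audit schedules. Approximations: 6 months = 182 days,
# 1 year = 365 days, 2 years = 730 days; each due date is measured from the
# prior submission, which is assumed to occur on its due date.
from datetime import date, timedelta

SIX_MONTHS, ONE_YEAR, TWO_YEARS = (timedelta(days=d) for d in (182, 365, 730))

def developer_audit_dates(initial_offering: date, n: int = 4) -> list[date]:
    """First audit within six months of initial offering/deployment,
    then one audit each year after the prior submission (§ 87(1)(a))."""
    dates = [initial_offering + SIX_MONTHS]
    while len(dates) < n:
        dates.append(dates[-1] + ONE_YEAR)
    return dates

def deployer_audit_dates(initial_deployment: date, n: int = 4) -> list[date]:
    """First audit within six months of deployment, a second one year after
    the first, then one every two years (§ 87(2)(a))."""
    dates = [initial_deployment + SIX_MONTHS]
    if n > 1:
        dates.append(dates[0] + ONE_YEAR)
    while len(dates) < n:
        dates.append(dates[-1] + TWO_YEARS)
    return dates

if __name__ == "__main__":
    print(developer_audit_dates(date(2027, 1, 1), n=3))
    print(deployer_audit_dates(date(2027, 1, 1), n=3))
```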
R-02 Regulatory Disclosure & Submissions · R-02.1 · Developer · Automated Decisionmaking
Civ. Rights Law § 88(1)-(3)
Plain Language
Developers must file reports with the Attorney General on a defined schedule: within six months of initial offering or deployment, annually thereafter, and within six months of any substantial change. Reports must include system description (intended and disallowed uses), development overview, training data overview, and sufficient information for deployers to monitor compliance. Each report must be accompanied by the most recent independent audit. Substantial change triggers include new versions, new releases, or updates significantly affecting use cases, functionality, or expected outcomes.
Statutory Text
1. Every developer and deployer of a high-risk AI system shall comply with the reporting requirements of this section. 2. Together with each report required to be filed under this section, every developer and deployer shall file with the attorney general a copy of the last completed independent audit required by this article. 3. Developers of high-risk AI systems shall complete and file with the attorney general reports in accordance with this subdivision. (a) A developer of a high-risk AI system shall complete and file with the attorney general at least: (i) a first report within six months after completion of development of the high-risk AI system and the initial offering of the high-risk AI system to a deployer for deployment or, if the developer is first deployer to deploy the high-risk AI system, after initial deployment; (ii) one report annually following the submission of the first report; and (iii) one report within six months of any substantial change to the high-risk AI system. (b) A developer report under this section shall include: (i) a description of the system including: (A) the uses of the high-risk AI system that the developer intends; and (B) any explicitly unintended or disallowed uses of the high-risk AI system; (ii) an overview of how the high-risk AI system was developed; (iii) an overview of the high-risk AI system's training data; and (iv) any other information necessary to allow a deployer to: (A) understand the outputs and monitor the system for compliance with this article; and (B) fulfill its duties under this article.
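Illustrative Sketch
Because § 88(3)(b) enumerates the required contents of a developer report, a compliance team might model it as a structured record. A hypothetical sketch; field names are illustrative shorthand, not statutory language:

```python
# Hypothetical record type for a § 88(3) developer report. Field names are
# illustrative shorthand for the statutory items cited in the comments.
from dataclasses import dataclass

@dataclass
class DeveloperReport:
    intended_uses: list[str]    # § 88(3)(b)(i)(A): developer-intended uses
    disallowed_uses: list[str]  # § 88(3)(b)(i)(B): explicitly unintended/disallowed uses
    development_overview: str   # § 88(3)(b)(ii)
    training_data_overview: str # § 88(3)(b)(iii)
    deployer_guidance: str      # § 88(3)(b)(iv): info deployers need to monitor and comply
    latest_audit_attached: bool # § 88(2): each report is filed with the last completed audit

    def ready_to_file(self) -> bool:
        """Crude completeness check before filing with the Attorney General."""
        return bool(self.intended_uses and self.development_overview
                    and self.training_data_overview) and self.latest_audit_attached
```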
R-02 Regulatory Disclosure & Submissions · R-02.1 · Deployer · Automated Decisionmaking
Civ. Rights Law § 88(4)
Plain Language
Deployers must file reports with the Attorney General on a defined schedule: within six months of initial deployment, a second report one year later, then biennially, plus within six months of any substantial change. Reports must include a system description covering actual and intended uses with respect to consequential decisions and whether any developer-unintended uses are occurring. Reports must also include an impact assessment covering algorithmic discrimination risk and mitigation steps, monetization details, and a cost-benefit evaluation for consumers. Entities that are both developer and deployer may file a single joint report. Each report must be accompanied by the latest audit.
Statutory Text
4. Deployers of high-risk AI systems shall complete and file with the attorney general reports in accordance with this subdivision. (a) A deployer of a high-risk AI system shall complete and file with the attorney general at least: (i) a first report within six months after initial deployment; (ii) a second report within one year following the completion and filing of the first report; (iii) one report every two years following the completion and filing of the second report; and (iv) one report within six months of any substantial change to the high-risk AI system. (b) A deployer report under this section shall include: (i) a description of the system including: (A) the deployer's actual, intended, or planned uses of the high-risk AI system with respect to consequential decisions; and (B) whether the deployer is using the high-risk AI system for any developer unintended or disallowed uses; and (ii) an impact assessment including: (A) whether the high-risk AI system poses a risk of algorithmic discrimination and the steps taken to address the risk of algorithmic discrimination; (B) if the high-risk AI system is or will be monetized, how it is or is planned to be monetized; and (C) an evaluation of the costs and benefits to consumers and other end users. (c) A deployer that is also a developer and is required to submit reports under subdivision three of this section may submit a single joint report provided it contains the information required in this subdivision.
R-02 Regulatory Disclosure & Submissions · R-02.1 · Developer, Deployer · Automated Decisionmaking
Civ. Rights Law § 88(5)-(6)
Plain Language
The Attorney General must create a redaction process for reports and maintain a publicly accessible online database of reports and audits, updated biannually. For high-risk AI systems already deployed at the effective date, developers and deployers have 18 months to complete and file their first report and audit, followed by annual (developers) or biennial (deployers) subsequent reports. This transition provision gives existing systems additional compliance runway beyond the standard six-month initial filing window.
Statutory Text
5. The attorney general shall: (a) promulgate rules for a process whereby developers and deployers may request redaction of portions of reports required under this section to ensure that they are not required to disclose sensitive and protected information; and (b) maintain an online database that is accessible to the general public with reports, redacted in accordance with this subdivision, and audits required by this article, which database shall be updated biannually. 6. For high-risk AI systems which are already in deployment at the time of the effective date of this article, developers and deployers shall have eighteen months from such effective date to complete and file the first report and associated independent audit required by this article. (a) Each developer of a high-risk AI system shall thereafter file at least one report annually following the submission of the first report under this subdivision. (b) Each deployer of a high-risk AI system shall thereafter file at least one report every two years following the submission of the first report under this subdivision.
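Illustrative Sketch
The first-filing deadline therefore depends on whether the system predates the Act's effective date: eighteen months from that date for pre-existing systems (§ 88(6)), six months from initial offering or deployment otherwise (§ 88(3)-(4)). A small sketch, using this tracker's proposed 2026-01-01 effective date as an assumption and approximating months as day counts:

```python
# Sketch of the § 88 first-report deadline. Assumptions: effective date taken
# from this tracker's "Proposed Effective Date" field (the bill is still
# pending); 6 months = 182 days, 18 months = 548 days. Names are illustrative.
from datetime import date, timedelta

EFFECTIVE_DATE = date(2026, 1, 1)  # assumed

def first_report_due(trigger: date, preexisting: bool) -> date:
    """Pre-existing systems: 18 months from the effective date (§ 88(6)).
    New systems: 6 months from initial offering/deployment (§ 88(3)-(4))."""
    if preexisting:
        return EFFECTIVE_DATE + timedelta(days=548)
    return trigger + timedelta(days=182)

if __name__ == "__main__":
    print(first_report_due(EFFECTIVE_DATE, preexisting=True))     # 2027-07-03
    print(first_report_due(date(2027, 3, 1), preexisting=False))  # 2027-08-30
```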
G-01 AI Governance Program & Documentation · G-01.1, G-01.2 · Developer, Deployer · Automated Decisionmaking
Civ. Rights Law § 89(1)-(2)
Plain Language
Every developer and deployer of high-risk AI systems must plan, document, and implement a risk management policy and program addressing algorithmic discrimination risks. The program must specify the principles, processes, and personnel used to identify, document, and mitigate known or foreseeable discrimination risks. It must be iterative — regularly and systematically reviewed and updated over the AI system's lifecycle. Reasonableness is evaluated considering the NIST AI RMF 1.0 (or an equivalent framework selected by the AG), the entity's size and complexity, the system's nature and intended uses, and the sensitivity and volume of data processed. A single program may cover multiple high-risk AI systems. The AG may require disclosure of the program and evaluate it for compliance.
Statutory Text
1. Each developer or deployer of high-risk AI systems shall plan, document, and implement a risk management policy and program to govern development or deployment, as applicable, of such high-risk AI system. The risk management policy and program shall specify and incorporate the principles, processes, and personnel that the deployer uses to identify, document, and mitigate known or reasonably foreseeable risks of algorithmic discrimination covered under subdivision one of section eighty-six of this article. The risk management policy and program shall be an iterative process planned, implemented, and regularly and systematically reviewed and updated over the life cycle of a high-risk AI system, requiring regular, systematic review and updates, including updates to documentation. A risk management policy and program implemented and maintained pursuant to this section shall be reasonable considering: (a) The guidance and standards set forth in: (i) version 1.0 of the "Artificial Intelligence Risk Management Framework" published by the National Institute of Standards and Technology in the United States department of commerce, or (ii) another substantially equivalent framework selected at the discretion of the attorney general, if such framework was designed to manage risks associated with AI systems, is nationally or internationally recognized and consensus-driven, and is at least as stringent as version 1.0 of the "Artificial Intelligence Risk Management Framework" published by the National Institute of Standards and Technology; (b) The size and complexity of the developer or deployer; (c) The nature, scope, and intended uses of the high-risk AI system developed or deployed; and (d) The sensitivity and volume of data processed in connection with the high-risk AI system. 2. A risk management policy and program implemented pursuant to subdivision one of this section may cover multiple high-risk AI systems developed by the same developer or deployed by the same deployer if sufficient.
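Illustrative Sketch
One way to keep the § 89 program producible on demand (see § 89(3) below) is to maintain it as structured documentation. A hypothetical record capturing the § 89(1) reasonableness factors; names are assumptions, not statutory terms:

```python
# Illustrative record for a § 89 risk management policy and program. The
# fields mirror the § 89(1)(a)-(d) reasonableness factors.
from dataclasses import dataclass
from datetime import date

@dataclass
class RiskManagementProgram:
    framework: str                    # § 89(1)(a): NIST AI RMF 1.0 or an AG-approved equivalent
    covered_system_ids: list[str]     # § 89(2): one program may cover multiple systems
    entity_size_and_complexity: str   # § 89(1)(b)
    system_nature_scope_and_uses: str # § 89(1)(c)
    data_sensitivity_and_volume: str  # § 89(1)(d)
    last_reviewed: date               # § 89(1): iterative, regularly and systematically reviewed

    def review_overdue(self, today: date, max_age_days: int = 365) -> bool:
        """Flag programs not reviewed within a self-imposed cadence; the Act
        requires regular, systematic review without fixing an interval."""
        return (today - self.last_reviewed).days > max_age_days
```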
R-02 Regulatory Disclosure & Submissions · R-02.2 · Developer, Deployer · Automated Decisionmaking
Civ. Rights Law § 89(3)
Plain Language
The Attorney General may at any time require a developer or deployer to produce its risk management policy and program in a prescribed form. The AG may also evaluate the program for compliance. This means entities must maintain their risk management documentation in a form that can be produced on request — it is not sufficient to have a program only in concept.
Statutory Text
3. The attorney general may require a developer or a deployer to disclose the risk management policy and program implemented pursuant to subdivision one of this section in a form and manner prescribed by the attorney general. The attorney general may evaluate the risk management policy and program to ensure compliance with this section.
S-02 Prohibited Conduct & Output Restrictions · S-02.1 · Developer, Deployer · Automated Decisionmaking
Civ. Rights Law § 89-a
Plain Language
No person or entity may develop, deploy, use, or sell an AI system that evaluates or classifies individuals' trustworthiness over time based on social behavior or personal characteristics where the resulting social score leads to: (1) differential treatment in unrelated social contexts, (2) unjustified or disproportionate differential treatment, or (3) infringement of constitutional or statutory rights. This is a categorical prohibition — no compliance program, testing, or disclosure can authorize social scoring systems that meet these criteria.
Statutory Text
No person, partnership, association or corporation shall develop, deploy, use, or sell an AI system which evaluates or classifies the trustworthiness of natural persons over a certain period of time based on their social behavior or known or predicted personal or personality characteristics, with the social score leading to any of the following: 1. differential treatment of certain natural persons or whole groups thereof in social contexts which are unrelated to the contexts in which the data was originally generated or collected; 2. differential treatment of certain natural persons or whole groups thereof that is unjustified or disproportionate to their social behavior or its gravity; or 3. the infringement of any right guaranteed under the United States constitution, the New York constitution, or state or federal law.
Other · Automated Decisionmaking
Civ. Rights Law § 89-b
Plain Language
A developer may claim exemption from essentially all obligations under the Act if it: (1) obtains signed written agreements from all deployers that the system will not be used as a high-risk AI system, (2) implements reasonable technical safeguards to prevent or detect high-risk use, (3) prominently discloses on its website, in marketing materials, and in licensing agreements that the system cannot be used as a high-risk AI system, and (4) maintains deployer agreement records for at least five years. This is a safe harbor — it does not create a new obligation but provides a defense that developers can invoke to rebut the presumption of violation in private actions under § 89-c(3)(b). All four conditions must be met.
Statutory Text
A developer may be exempt from its duties and obligations under sections eighty-six, eighty-six-a, eighty-six-b, eighty-seven, eighty-eight, and eighty-nine of this article if such developer: 1. receives a written and signed contractual agreement from each deployer authorized to use the artificial intelligence system developed by such developer, including the developer if they are also a deployer, that such artificial intelligence system will not be used as a high-risk AI system; 2. implements reasonable technical safeguards designed to prevent or detect high-risk AI system use cases or otherwise demonstrates reasonable steps taken to ensure that any unauthorized deployments of its AI systems are not being used as a high-risk AI system; 3. prominently displays on its website, in marketing materials, and in all licensing agreements offered to prospective deployers of its AI system that the AI system cannot be used as a high-risk AI system; and 4. maintains records of deployer agreements for a period of not less than five years.
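Illustrative Sketch
Because the exemption is conjunctive, it can be checked mechanically: all four § 89-b conditions must hold. A minimal sketch with hypothetical field names:

```python
# Minimal sketch of the § 89-b safe-harbor test: all four conditions must
# hold before a developer may invoke the exemption. Field names are
# hypothetical shorthand for the statutory conditions.
from dataclasses import dataclass

@dataclass
class SafeHarborFacts:
    signed_non_high_risk_agreements_from_all_deployers: bool  # § 89-b(1)
    technical_safeguards_against_high_risk_use: bool          # § 89-b(2)
    prominent_no_high_risk_use_disclosures: bool              # § 89-b(3): website, marketing, licenses
    agreement_records_retained_years: int                     # § 89-b(4)

def safe_harbor_available(f: SafeHarborFacts) -> bool:
    """Conjunctive test: every condition must be satisfied."""
    return (f.signed_non_high_risk_agreements_from_all_deployers
            and f.technical_safeguards_against_high_risk_use
            and f.prominent_no_high_risk_use_disclosures
            and f.agreement_records_retained_years >= 5)
```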
Other · Automated Decisionmaking
Exec. Law § 296(23) (as added by § 4 of the Act)
Plain Language
This provision extends the reach of the New York Human Rights Law (Executive Law § 296) by making any violation of the AI Act's unlawful discriminatory practice provisions (Civil Rights Law § 86) simultaneously a violation of the Human Rights Law. This potentially opens enforcement through the Division of Human Rights and the full apparatus of the Human Rights Law, including its remedies and procedures, in addition to the enforcement mechanisms in the AI Act itself. It creates no new substantive obligation but expands the enforcement pathways available.
Statutory Text
It shall be an unlawful discriminatory practice under this section for a deployer or a developer, as such terms are defined in section eighty-five of the civil rights law, to engage in an unlawful discriminatory practice under section eighty-six of the civil rights law.