S-01169
NY · State · USA
● Pending
Proposed Effective Date
2027-01-01
New York Senate Bill 1169-A — An Act to amend the civil rights law and the executive law, in relation to the use of artificial intelligence systems (New York Artificial Intelligence Act)
Summary

Comprehensive AI anti-discrimination and governance framework for New York. Imposes obligations on developers and deployers of high-risk AI systems used in consequential decisions (employment, housing, credit, healthcare, education, law enforcement, legal services, financial services). Core requirements include: a duty of reasonable care to prevent algorithmic discrimination; mandatory independent third-party audits on a recurring schedule; periodic reporting to the Attorney General with public database publication; a formal risk management policy and program aligned with the NIST AI RMF; pre-decision notice and opt-out rights for end users; post-decision appeal with meaningful human review; whistleblower protections; and a categorical ban on social scoring AI. Enforcement is dual: the AG may seek injunctions, restitution, and civil penalties up to $20,000 per violation, and harmed individuals have a private right of action with a plaintiff-favorable presumption at the motion to dismiss stage. A developer safe harbor is available where the developer obtains written deployer agreements not to use the system as high-risk, implements technical safeguards, and maintains records for five years.

Enforcement & Penalties
Enforcement Authority
Dual enforcement. The Attorney General may bring an action in the Supreme Court for injunctive relief, restitution, and civil penalties without proof of actual injury. Private right of action by plenary proceeding for any person harmed by a violation of §§ 86-a, 86-b, 87, 88, 89, or 89-a. At the motion to dismiss stage, the court shall presume the AI system was operated in violation of specified law and that such violation caused the alleged harm; the defendant may rebut this presumption by clear and convincing evidence. A developer may rebut the presumptions by demonstrating compliance with the safe harbor under § 89-b.
Penalties
AG enforcement: civil penalty of up to $20,000 per violation, injunctive relief, and restitution; no proof of actual injury required for injunction. Private right of action: compensatory damages and legal fees to the prevailing party. Private plaintiffs must show harm. Court allowances to the AG as provided in CPLR § 8303(a)(6).
Who Is Covered
"Deployer" means any person, partnership, association or corporation that offers or uses an AI system for commerce in the state of New York, or provides an AI system for use by the general public in the state of New York. A deployer shall not include any natural person using an AI system for personal use. A developer may also be considered a deployer if its actions satisfy this definition.
"Developer" means a person, partnership, or corporation that designs, codes, or produces an AI system, or creates a substantial change with respect to an AI system, whether for its own use in the state of New York or for use by a third party in the state of New York. A deployer may also be considered a developer if its actions satisfy this definition.
What Is Covered
"Artificial intelligence system" or "AI system" means a machine-based system, or combination of systems, that, for explicit and implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Artificial intelligence system shall not include: (a) any system that (i) is used by a business entity solely for internal purposes and (ii) is not used as a substantial factor in a consequential decision; or (b) any software used primarily for basic computerized processes, such as anti-malware, anti-virus, auto-correct functions, calculators, databases, data storage, electronic communications, firewall, internet domain registration, internet website loading, networking, spam and robocall-filtering, spellcheck tools, spreadsheets, web caching, web hosting, or any tool that relates only to internal management affairs such as ordering office supplies or processing payments, and that do not materially affect the rights, liberties, benefits, safety or welfare of any individual within the state.
"High-risk AI system" means any AI system that, when deployed: (a) is a substantial factor in making a consequential decision; or (b) will have a material impact on the statutory or constitutional rights, civil liberties, safety, or welfare of an individual in the state.
Compliance Obligations · 17 obligations
H-02 Non-Discrimination & Bias Assessment · H-02.1–H-02.3 · Developer · Deployer · Automated Decisionmaking
Civ. Rights Law § 86(1)-(2)
Plain Language
Developers and deployers of high-risk AI systems must exercise reasonable care to prevent foreseeable algorithmic discrimination arising from the use, sale, or sharing of those systems. Any developer or deployer that uses, sells, or shares a high-risk AI system must have completed an independent audit (per § 87) confirming that such reasonable care was taken. The definition of algorithmic discrimination covers an extensive list of protected characteristics. Importantly, an entity's own bias testing of its system, efforts to expand applicant pools to increase diversity, and acts by private clubs exempt under the federal Civil Rights Act are excluded from the definition of algorithmic discrimination. This is a foundational duty — failure to comply is an unlawful discriminatory practice actionable under the Human Rights Law.
Statutory Text
1. A developer or deployer shall take reasonable care to prevent foreseeable risk of algorithmic discrimination that is a consequence of the use, sale, or sharing of a high-risk AI system or a product featuring a high-risk AI system. 2. Any developer or deployer that uses, sells, or shares a high-risk AI system shall have completed an independent audit, pursuant to section eighty-seven of this article, confirming that the developer or deployer has taken reasonable care to prevent foreseeable risk of algorithmic discrimination with respect to such high-risk AI system.
H-01 Human Oversight of Automated Decisions · H-01.3 · Deployer · Automated Decisionmaking
Civ. Rights Law § 86-a(1)(a)(i), (1)(b), (1)(c)
Plain Language
Before using a high-risk AI system for a consequential decision, a deployer must notify the end user at least five business days in advance — in clear, multilingual, consumer-friendly terms — that AI will be used. When the decision would confer a benefit on the end user, the deployer must offer the end user the option to waive the five-day notice; if waived, notice must still be given as early as practicable. An urgency exception applies when the decision confers a benefit and delay would cause imminent detriment to the end user, though even under the urgency exception, the end user's right to request human review is never waived. Notice must be provided in every language in which the deployer offers its end services.
Statutory Text
(a) Any deployer that employs a high-risk AI system for a consequential decision shall comply with the following requirements; provided, however, that where there is an urgent necessity for a decision to be made to confer a benefit to the end user, including, but not limited to, social benefits, housing access, or dispensing of emergency funds, and compliance with this section would cause imminent detriment to the welfare of the end user, such obligation shall be considered waived; provided further, that nothing in this section shall be construed to waive a natural person's option to request human review of the decision: (i) inform the end user at least five business days prior to the use of such system for the making of a consequential decision in clear, conspicuous, and consumer-friendly terms, made available in each of the languages in which the company offers its end services, that AI systems will be used to make a decision or to assist in making a decision; and (b) If a deployer employs a high-risk AI system for a consequential decision to determine whether to or on what terms to confer a benefit on an end user, the deployer shall offer the end user the option to waive their right to advance notice of five business days under this subdivision. (c) If the end user clearly and affirmatively waives their right to five business days' notice, the deployer shall then inform the end user as early as practicable before the making of the consequential decision in clear, conspicuous, and consumer-friendly terms, made available in each of the languages in which the company offers its end services, that AI systems will be used to make a decision or to assist in making a decision. The deployer shall allow sufficient time and opportunity in a clear, conspicuous, and consumer-friendly manner for the consumer to opt-out of the automated process and for the decision to be made by a human representative. 
A consumer may not be punished or face any other adverse action for opting out of a decision by an AI system and the deployer shall render a decision to the consumer within forty-five days.
H-01 Human Oversight of Automated Decisions · H-01.4 · Deployer · Automated Decisionmaking
Civ. Rights Law § 86-a(1)(a)(ii), (1)(d)
Plain Language
Deployers must give end users a clear, accessible opportunity to opt out of having a consequential decision made by an AI system and instead have it made by a human representative. The deployer must render the human decision within 45 days. Consumers may not be punished or face any adverse action for exercising the opt-out. The opt-out right is limited to one exercise per consequential decision per six-month period. An end user cannot exercise both the opt-out right (pre-decision) and the appeal right (post-decision) with respect to the same consequential decision.
Statutory Text
(ii) allow sufficient time and opportunity in a clear, conspicuous, and consumer-friendly manner for the consumer to opt-out of the automated consequential decision process and for the decision to be made by a human representative. A consumer may not be punished or face any other adverse action for opting out of a decision by an AI system and the deployer shall render a decision to the consumer within forty-five days. (d) An end user shall be entitled to no more than one opt-out with respect to the same consequential decision within a six-month period.
H-01 Human Oversight of Automated Decisions · H-01.4–H-01.5 · Deployer · Automated Decisionmaking
Civ. Rights Law § 86-a(2)(a)-(b)
Plain Language
Within five days after a high-risk AI system has been used for a consequential decision, the deployer must inform the end user and provide an accessible appeal process. The appeal must, at minimum, allow the end user to formally contest the decision, submit supporting information, and obtain meaningful human review. The deployer must respond within 45 days, extendable once by 45 days if reasonably necessary — with notice and reasons provided to the end user. Each end user is limited to one appeal per consequential decision per six-month period. Notably, an end user cannot exercise both the pre-decision opt-out and the post-decision appeal for the same decision.
Statutory Text
2. (a) Any deployer that employs a high-risk AI system for a consequential decision shall inform the end user within five days in a clear, conspicuous and consumer-friendly manner if a high-risk AI system has been used to make a consequential decision. The deployer shall then provide and explain a process for the end user to appeal the decision, which shall at minimum allow the end user to (i) formally contest the decision, (ii) provide information to support their position, and (iii) obtain meaningful human review of the decision. A deployer shall respond to an end user's appeal within forty-five days of receipt of the appeal. That period may be extended once by forty-five additional days where reasonably necessary, taking into account the complexity and number of appeals. The deployer shall inform the end user of any such extension within forty-five days of receipt of the appeal, together with the reasons for the delay. (b) An end user shall be entitled to no more than one appeal with respect to the same consequential decision in a six-month period.
Other · Automated Decisionmaking
Civ. Rights Law § 86-a(3)-(4)
Plain Language
Developers and deployers are legally responsible for the quality, accuracy, bias, and any algorithmic discrimination resulting from their high-risk AI systems used in consequential decisions. This responsibility cannot be contractually disclaimed or waived. This is a liability assignment and anti-waiver provision rather than a discrete compliance obligation — it confirms that covered entities bear liability for the downstream effects of their AI systems.
Statutory Text
3. The deployer or developer of a high-risk AI system is legally responsible for quality and accuracy of all consequential decisions made, including any bias or algorithmic discrimination resulting from the operation of the AI system on their behalf. 4. The rights and obligations under this section may not be waived by any person, partnership, association or corporation.
G-03 Whistleblower & Anti-Retaliation Protections · G-03.3 · Developer · Deployer · Automated Decisionmaking
Civ. Rights Law § 86-b(1)-(2)
Plain Language
Developers and deployers of high-risk AI systems must not prevent employees — including former employees and independent contractors — from disclosing to the Attorney General information the employee reasonably believes indicates a violation of this article. Retaliation against employees for such disclosures is prohibited, including through employment terms or enforcement of employment terms. Employees harmed by violations may petition a court for relief under Labor Law § 740(5). This provision does not limit other protections available under Labor Law § 740.
Statutory Text
1. Developers and/or deployers of high-risk AI systems shall not: (a) prevent any of their employees from disclosing information to the attorney general, including through terms and conditions of employment or seeking to enforce terms and conditions of employment, if the employee has reasonable cause to believe the information indicates a violation of this article; or (b) retaliate against an employee for disclosing information to the attorney general pursuant to this section. 2. An employee harmed by a violation of this article may petition a court for appropriate relief as provided in subdivision five of section seven hundred forty of the labor law.
G-03 Whistleblower & Anti-Retaliation Protections · G-03.4 · Developer · Deployer · Automated Decisionmaking
Civ. Rights Law § 86-b(3)
Plain Language
Developers and deployers must provide clear notice to all employees working on high-risk AI systems of their rights and responsibilities under the Act, including the right of contractor and subcontractor employees to use the developer's internal disclosure process. A presumption of compliance applies if the entity either (a) continuously posts notice in all workplaces, provides equivalent notice to new employees, and periodically notifies remote workers, or (b) provides annual written notice received and acknowledged by all employees. Both options serve as safe harbors — either alone creates the presumption.
Statutory Text
3. Developers and deployers of high-risk AI systems shall provide a clear notice to all of their employees working on such AI systems of their rights and responsibilities under this article, including the right of employees of contractors and subcontractors to use the developer's internal process for making protected disclosures pursuant to subdivision four of this section. A developer or deployer is presumed to be in compliance with the requirements of this subdivision if the developer or deployer does either of the following: (a) at all times post and display within all workplaces maintained by the developer or deployer a notice to all employees of their rights and responsibilities under this article, ensure that all new employees receive equivalent notice, and ensure that employees who work remotely periodically receive an equivalent notice; or (b) no less frequently than once every year, provide written notice to all employees of their rights and responsibilities under this article and ensure that the notice is received and acknowledged by all of those employees.
G-03 Whistleblower & Anti-Retaliation Protections · G-03.1 · Developer · Deployer · Automated Decisionmaking
Civ. Rights Law § 86-b(4)
Plain Language
Every developer and deployer must establish a reasonable internal anonymous disclosure process for employees. Employees may use the process to report good-faith concerns that the entity has violated any provision of this article or any other law, made false or materially misleading statements about its risk management program, or failed to disclose known risks. The process must include at least monthly status updates to the disclosing employee on the investigation and responsive actions. Contractor and subcontractor employees also have the right to use the developer's internal process, as established in § 86-b(3).
Statutory Text
4. Each developer and deployer shall provide a reasonable internal process through which an employee may anonymously disclose information to the developer or deployer if the employee believes in good faith that the information indicates that the developer or deployer has violated any provision of this article or any other law, or has made false or materially misleading statements related to its risk management policy and program, or failed to disclose known risks to employees, including, at a minimum, a monthly update to the person who made the disclosure regarding the status of the developer's or deployer's investigation of the disclosure and the actions taken by the developer or deployer in response to the disclosure.
G-01 AI Governance Program & Documentation · G-01.5 · Developer · Deployer · Automated Decisionmaking
Civ. Rights Law § 87(1)-(2), (3)-(9)
Plain Language
Both developers and deployers of high-risk AI systems must engage independent third-party auditors to evaluate their systems on a recurring schedule. Developers must complete a first audit within six months of offering or deploying the system, then annually. Deployers must complete a first audit within six months of deployment, a second audit one year later, then biennially. Audits must evaluate: (1) whether the entity has taken reasonable care to prevent algorithmic discrimination, (2) conformity of the risk management program with § 89, and for deployers additionally (3) system accuracy and reliability against intended and actual use cases. Strict auditor independence requirements apply — no entity that provided any service to the commissioning company in the past 12 months, and no competitor planning to compete for 5 years post-audit. Audit fees cannot be contingent on results. Audits may use AI as a tool (e.g., controlled testing) but may not be completed entirely by AI, and a separate high-risk AI system may not be used to complete the audit. An audit completed for compliance with another law satisfies this section if it meets all requirements. For systems already deployed at the effective date, an 18-month transition period applies (per § 88(6)).
Statutory Text
1. Developers of high-risk AI systems shall cause to be conducted third-party audits in accordance with this section. (a) A developer of a high-risk AI system shall complete at least: (i) a first audit within six months after completion of development of the high-risk AI system and the initial offering of the high-risk AI system to a deployer for deployment or, if the developer is first deployer to deploy the high-risk AI system, after initial deployment; and (ii) one audit every one year following the submission of the first audit. (b) A developer audit under this section shall include: (i) an evaluation and determination of whether the developer has taken reasonable care to prevent foreseeable risk of algorithmic discrimination with respect to such high-risk AI system; and (ii) an evaluation of the developer's documented risk management policy and program required under section eighty-nine of this article for conformity with subdivision one of such section eighty-nine. 2. Deployers of high-risk AI systems shall cause to be conducted third-party audits in accordance with this section. (a) A deployer of a high-risk AI system shall complete at least: (i) a first audit within six months after initial deployment; (ii) a second audit within one year following the submission of the first audit; and (iii) one audit every two years following the submission of the second audit. (b) A deployer audit under this section shall include: (i) an evaluation and determination of whether the deployer has taken reasonable care to prevent foreseeable risk of algorithmic discrimination with respect to such high-risk AI system; (ii) an evaluation of system accuracy and reliability with respect to such high-risk AI system's deployer-intended and actual use cases; and (iii) an evaluation of the deployer's documented risk management policy and program required under section eighty-nine of this article for conformity with subdivision one of such section eighty-nine. 3. 
A deployer or developer may hire more than one auditor to fulfill the requirements of this section. 4. At the attorney general's discretion, the attorney general may: (a) promulgate further rules as necessary to ensure that audits under this section assess whether or not AI systems produce algorithmic discrimination and otherwise comply with the provisions of this article; and (b) recommend an updated AI system auditing framework to the legislature, where such recommendations are based on a standard or framework (i) designed to evaluate the risks of AI systems, and (ii) that is nationally or internationally recognized and consensus-driven, including but not limited to a relevant framework or standard created by the International Standards Organization. 5. The independent auditor shall have complete and unredacted copies of all reports previously filed by the deployer or developer under section eighty-eight of this article. 6. An audit conducted under this section may be completed in part, but shall not be completed entirely, with the assistance of an AI system. (a) Acceptable auditor uses of an AI system include, but are not limited to: (i) use of an audited high-risk AI system in a controlled environment without impacts on end users for system testing purposes; or (ii) detecting patterns in the behavior of an audited AI system. (b) An auditor shall not: (i) use a different high-risk AI system that is not the subject of an audit to complete an audit; or (ii) use an AI system to draft an audit under this section without meaningful human review and oversight. 7. (a) An auditor shall be an independent entity including but not limited to an individual, non-profit, firm, corporation, partnership, cooperative, or association. 
(b) For the purposes of this article, no auditor may be commissioned by a developer or deployer of a high-risk AI system if such entity: (i) has already been commissioned to provide any auditing or non-auditing service, including but not limited to financial auditing, cybersecurity auditing, or consulting services of any type, to the commissioning company in the past twelve months; or (ii) is, will be, or plans to be engaged in the business of developing or deploying an AI system that can compete commercially with such developer's or deployer's high-risk AI system in the five years following an audit. (c) Fees paid to auditors may not be contingent on the result of the audit and the commissioning company shall not provide any incentives or bonuses for a positive audit result. 8. The attorney general may promulgate further rules to ensure (a) the independence of auditors under this section, and (b) that teams conducting audits incorporate feedback from communities that may foreseeably be the subject of algorithmic discrimination with respect to the AI system being audited. 9. If a developer or deployer has an audit completed for the purpose of complying with another applicable federal, state, or local law or regulation, and the audit otherwise satisfies all other requirements of this section, such audit shall be deemed to satisfy the requirements of this section.
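For compliance teams tracking these deadlines, the audit cadence in § 87(1)-(2) reduces to simple date arithmetic. The sketch below is illustrative only, not anything the Act prescribes: it assumes each audit is submitted on its due date (the statute keys each later audit to the submission of the prior one, so real timelines shift with actual submission dates), and the helper names (`add_months`, `audit_due_dates`, the `role` labels) are our own.

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Shift a date by whole months, clamping to the month's last day."""
    y, m = divmod(d.month - 1 + months, 12)
    y += d.year
    m += 1
    leap = y % 4 == 0 and (y % 100 != 0 or y % 400 == 0)
    last = [31, 29 if leap else 28, 31, 30, 31, 30,
            31, 31, 30, 31, 30, 31][m - 1]
    return date(y, m, min(d.day, last))

def audit_due_dates(trigger: date, role: str, horizon: int = 4) -> list[date]:
    """Projected audit due dates, assuming on-time submissions.

    `trigger` is the event starting the clock: initial offering (or first
    deployment by the developer) under § 87(1), or initial deployment
    under § 87(2). Developers: first audit at 6 months, then annually.
    Deployers: first audit at 6 months, second one year later, then
    biennially.
    """
    due = [add_months(trigger, 6)]               # first audit
    if role == "developer":
        while len(due) < horizon:
            due.append(add_months(due[-1], 12))  # annual thereafter
    else:
        due.append(add_months(due[0], 12))       # second audit, one year on
        while len(due) < horizon:
            due.append(add_months(due[-1], 24))  # biennial thereafter
    return due[:horizon]
```

For a system first deployed on the bill's proposed effective date (2027-01-01), this projects a developer's first four audits at 2027-07-01, 2028-07-01, 2029-07-01, and 2030-07-01, while a deployer's third audit would not fall due until 2030-07-01.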
R-02 Regulatory Disclosure & Submissions · R-02.1 · Developer · Automated Decisionmaking
Civ. Rights Law § 88(1)-(3)
Plain Language
Developers of high-risk AI systems must file reports with the Attorney General on a recurring schedule: first report within six months of initial offering or deployment; annually thereafter; and within six months of any substantial change. Each report must describe the system's intended uses, unintended or disallowed uses, development overview, training data overview, and any information deployers need to monitor the system and fulfill their own obligations. Each filing must also include the most recent independent audit. The training data overview requirement makes this a de facto training data disclosure obligation to the regulator. Substantial change includes new versions, releases, or updates that significantly change use cases, functionality, or expected outcomes.
Statutory Text
1. Every developer and deployer of a high-risk AI system shall comply with the reporting requirements of this section. 2. Together with each report required to be filed under this section, every developer and deployer shall file with the attorney general a copy of the last completed independent audit required by this article. 3. Developers of high-risk AI systems shall complete and file with the attorney general reports in accordance with this subdivision. (a) A developer of a high-risk AI system shall complete and file with the attorney general at least: (i) a first report within six months after completion of development of the high-risk AI system and the initial offering of the high-risk AI system to a deployer for deployment or, if the developer is first deployer to deploy the high-risk AI system, after initial deployment; (ii) one report annually following the submission of the first report; and (iii) one report within six months of any substantial change to the high-risk AI system. (b) A developer report under this section shall include: (i) a description of the system including: (A) the uses of the high-risk AI system that the developer intends; and (B) any explicitly unintended or disallowed uses of the high-risk AI system; (ii) an overview of how the high-risk AI system was developed; (iii) an overview of the high-risk AI system's training data; and (iv) any other information necessary to allow a deployer to: (A) understand the outputs and monitor the system for compliance with this article; and (B) fulfill its duties under this article.
R-02 Regulatory Disclosure & Submissions · R-02.1 · Deployer · Automated Decisionmaking
Civ. Rights Law § 88(4)
Plain Language
Deployers of high-risk AI systems must file reports with the Attorney General on a recurring schedule: first report within six months of deployment; second report one year later; biennially thereafter; and within six months of any substantial change. Reports must include a system description covering actual, intended, or planned uses and any developer-unintended uses, plus an impact assessment addressing algorithmic discrimination risk and mitigation, monetization plans, and a cost-benefit evaluation for consumers and end users. Each filing must also include the most recent independent audit. An entity that is both developer and deployer may submit a single joint report. For systems already deployed at the effective date, an 18-month transition period applies.
Statutory Text
4. Deployers of high-risk AI systems shall complete and file with the attorney general reports in accordance with this subdivision. (a) A deployer of a high-risk AI system shall complete and file with the attorney general at least: (i) a first report within six months after initial deployment; (ii) a second report within one year following the completion and filing of the first report; (iii) one report every two years following the completion and filing of the second report; and (iv) one report within six months of any substantial change to the high-risk AI system. (b) A deployer report under this section shall include: (i) a description of the system including: (A) the deployer's actual, intended, or planned uses of the high-risk AI system with respect to consequential decisions; and (B) whether the deployer is using the high-risk AI system for any developer unintended or disallowed uses; and (ii) an impact assessment including: (A) whether the high-risk AI system poses a risk of algorithmic discrimination and the steps taken to address the risk of algorithmic discrimination; (B) if the high-risk AI system is or will be monetized, how it is or is planned to be monetized; and (C) an evaluation of the costs and benefits to consumers and other end users. (c) A deployer that is also a developer and is required to submit reports under subdivision three of this section may submit a single joint report provided it contains the information required in this subdivision.
G-02 Public Transparency & Documentation · G-02.4 · Government · Automated Decisionmaking
Civ. Rights Law § 88(5)
Plain Language
The Attorney General must maintain a publicly accessible online database containing all reports and audits filed under this article, updated biannually. Developers and deployers may request redaction of sensitive and protected information through a process to be established by AG rulemaking. This effectively creates a public transparency obligation — while the filing obligation is to the AG, the public database means the substantive content of reports and audits will be publicly available in redacted form.
Statutory Text
5. The attorney general shall: (a) promulgate rules for a process whereby developers and deployers may request redaction of portions of reports required under this section to ensure that they are not required to disclose sensitive and protected information; and (b) maintain an online database that is accessible to the general public with reports, redacted in accordance with this subdivision, and audits required by this article, which database shall be updated biannually.
G-01 AI Governance Program & Documentation · G-01.1–G-01.2 · Developer · Deployer · Automated Decisionmaking
Civ. Rights Law § 89(1)-(3)
Plain Language
Every developer and deployer of high-risk AI systems must plan, document, and implement a risk management policy and program covering the identification, documentation, and mitigation of known or reasonably foreseeable risks of algorithmic discrimination. The program must be iterative, regularly and systematically reviewed and updated over the AI system's lifecycle, including updates to documentation. Reasonableness is evaluated against the NIST AI RMF 1.0 or a substantially equivalent framework designated by the AG, and must account for the entity's size and complexity, the system's nature and intended uses, and the sensitivity and volume of data processed. A single program may cover multiple high-risk AI systems, provided it sufficiently addresses each. The AG may require disclosure of the program and evaluate it for compliance. This is a continuing obligation, not a one-time pre-deployment exercise.
Statutory Text
1. Each developer or deployer of high-risk AI systems shall plan, document, and implement a risk management policy and program to govern development or deployment, as applicable, of such high-risk AI system. The risk management policy and program shall specify and incorporate the principles, processes, and personnel that the deployer uses to identify, document, and mitigate known or reasonably foreseeable risks of algorithmic discrimination covered under subdivision one of section eighty-six of this article. The risk management policy and program shall be an iterative process planned, implemented, and regularly and systematically reviewed and updated over the life cycle of a high-risk AI system, requiring regular, systematic review and updates, including updates to documentation. A risk management policy and program implemented and maintained pursuant to this section shall be reasonable considering: (a) The guidance and standards set forth in: (i) version 1.0 of the "Artificial Intelligence Risk Management Framework" published by the National Institute of Standards and Technology in the United States department of commerce, or (ii) another substantially equivalent framework selected at the discretion of the attorney general, if such framework was designed to manage risks associated with AI systems, is nationally or internationally recognized and consensus-driven, and is at least as stringent as version 1.0 of the "Artificial Intelligence Risk Management Framework" published by the National Institute of Standards and Technology; (b) The size and complexity of the developer or deployer; (c) The nature, scope, and intended uses of the high-risk AI system developed or deployed; and (d) The sensitivity and volume of data processed in connection with the high-risk AI system.
2. A risk management policy and program implemented pursuant to subdivision one of this section may cover multiple high-risk AI systems developed by the same developer or deployed by the same deployer if sufficient. 3. The attorney general may require a developer or a deployer to disclose the risk management policy and program implemented pursuant to subdivision one of this section in a form and manner prescribed by the attorney general. The attorney general may evaluate the risk management policy and program to ensure compliance with this section.
S-02 Prohibited Conduct & Output Restrictions · S-02.1 · Developer · Deployer · Automated Decisionmaking
Civ. Rights Law § 89-a
Plain Language
No person or entity may develop, deploy, use, or sell an AI system that evaluates or classifies the trustworthiness of individuals based on their social behavior or personal characteristics where the resulting social score leads to: differential treatment in unrelated social contexts, unjustified or disproportionate differential treatment, or infringement of constitutional or statutory rights. This is a categorical prohibition applying to all persons and entities — not limited to developers or deployers — and covers the entire lifecycle from development through sale.
Statutory Text
No person, partnership, association or corporation shall develop, deploy, use, or sell an AI system which evaluates or classifies the trustworthiness of natural persons over a certain period of time based on their social behavior or known or predicted personal or personality characteristics, with the social score leading to any of the following: 1. differential treatment of certain natural persons or whole groups thereof in social contexts which are unrelated to the contexts in which the data was originally generated or collected; 2. differential treatment of certain natural persons or whole groups thereof that is unjustified or disproportionate to their social behavior or its gravity; or 3. the infringement of any right guaranteed under the United States constitution, the New York constitution, or state or federal law.
Other · Automated Decisionmaking
Civ. Rights Law § 89-b
Plain Language
A developer may be exempt from all substantive obligations under this article — including the anti-discrimination duty, deployer/developer obligations, whistleblower requirements, audits, reporting, and risk management — if the developer: (1) obtains written, signed contractual agreements from every deployer (including itself, if also deploying) that the system will not be used as a high-risk AI system; (2) implements reasonable technical safeguards to prevent or detect high-risk use cases; (3) prominently displays on its website, in marketing materials, and in all licensing agreements that the system cannot be used as high-risk; and (4) retains deployer agreements for at least five years. All four conditions must be satisfied. This safe harbor is available to defendants as a rebuttal to the motion-to-dismiss presumption under § 89-c(3)(b).
Statutory Text
A developer may be exempt from its duties and obligations under sections eighty-six, eighty-six-a, eighty-six-b, eighty-seven, eighty-eight, and eighty-nine of this article if such developer: 1. receives a written and signed contractual agreement from each deployer authorized to use the artificial intelligence system developed by such developer, including the developer if they are also a deployer, that such artificial intelligence system will not be used as a high-risk AI system; 2. implements reasonable technical safeguards designed to prevent or detect high-risk AI system use cases or otherwise demonstrates reasonable steps taken to ensure that any unauthorized deployments of its AI systems are not being used as a high-risk AI system; 3. prominently displays on its website, in marketing materials, and in all licensing agreements offered to prospective deployers of its AI system that the AI system cannot be used as a high-risk AI system; and 4. maintains records of deployer agreements for a period of not less than five years.
R-02 Regulatory Disclosure & Submissions · R-02.1 · Developer · Deployer · Automated Decisionmaking
Civ. Rights Law § 88(6)
Plain Language
For high-risk AI systems already deployed at the effective date of this article, developers and deployers receive an 18-month transition period to complete and file their first report and associated independent audit. After the first filing, developers must file annually and deployers must file biennially. This is the grandfathering provision for legacy systems — it provides additional runway but does not exempt existing deployments from the statute's requirements.
Statutory Text
6. For high-risk AI systems which are already in deployment at the time of the effective date of this article, developers and deployers shall have eighteen months from such effective date to complete and file the first report and associated independent audit required by this article. (a) Each developer of a high-risk AI system shall thereafter file at least one report annually following the submission of the first report under this subdivision. (b) Each deployer of a high-risk AI system shall thereafter file at least one report every two years following the submission of the first report under this subdivision.
Other · Automated Decisionmaking
Exec. Law § 296(23)
Plain Language
This provision integrates the AI anti-discrimination obligations from Civil Rights Law § 86 into the Executive Law's unlawful discriminatory practice framework (§ 296). By making a violation of § 86 an unlawful discriminatory practice under § 296, it opens the door to enforcement through the Division of Human Rights complaint process and remedies available under the Human Rights Law, in addition to the enforcement mechanisms in § 89-c.
Statutory Text
23. It shall be an unlawful discriminatory practice under this section for a deployer or a developer, as such terms are defined in section eighty-five of the civil rights law, to engage in an unlawful discriminatory practice under section eighty-six of the civil rights law.