A-03265
NY · State · USA
● Pending
Proposed Effective Date
2025-04-27
New York Assembly Bill 3265 — An Act to amend the state technology law, in relation to enacting the New York artificial intelligence bill of rights
Summary

Establishes a broad 'AI Bill of Rights' framework for New York residents affected by automated systems that meaningfully impact their civil rights, equal opportunities, or access to critical resources. Imposes obligations on designers, developers, and deployers of automated systems covering five pillars: system safety and effectiveness (including pre-deployment testing and ongoing monitoring), algorithmic discrimination protections (including equity assessments and disparity testing), data privacy (including data minimization, consent requirements, and surveillance restrictions), notice and explanation of automated outcomes, and human alternatives and fallback processes. Enforcement is exclusively through the Attorney General, with treble damages based on actual harm caused. The statute expressly bars any private right of action. Obligations are aspirational in tone and lack detailed implementation specifics, which may create compliance ambiguity.

Enforcement & Penalties
Enforcement Authority
Attorney General enforcement only. The penalty may be recovered in an action brought by the Attorney General in any court of competent jurisdiction. No private right of action: the statute expressly provides that nothing in it shall be construed as creating, establishing, or authorizing a private cause of action by an aggrieved person.
Penalties
Civil penalty of not less than three times the damages caused. Treble damages are the floor, but recovery requires proof of actual damages as the base; there is no fixed statutory minimum independent of harm. The enforcement section specifies no injunctive relief, attorney's fees, or costs.
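As an illustrative arithmetic sketch only (the function name and structure are hypothetical, and this is not legal advice), the penalty floor scales with the harm the Attorney General proves:

```python
def minimum_penalty(actual_damages: float) -> float:
    """Illustrative sketch of the treble-damages floor: the statute
    sets the penalty at not less than three times the actual damages
    proven. With no proven damages there is no independent statutory
    minimum, so the floor collapses to zero."""
    if actual_damages <= 0:
        return 0.0  # no fixed minimum without proven harm
    return 3 * actual_damages

# e.g., $10,000 in proven actual damages -> penalty of at least $30,000
```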
Who Is Covered
What Is Covered
"Automated system" means any system, software, or process that affects New York residents and that uses computation as a whole or part of a system to determine outcomes, make or aid decisions, inform policy implementation, collect data or observations, or otherwise interact with New York residents or communities. Automated systems shall include, but not be limited to, systems derived from machine learning, statistics, or other data processing or artificial intelligence techniques, and shall exclude passive computing infrastructure.
Compliance Obligations · 18 obligations
S-01 AI System Safety Program · S-01.1 · S-01.4 · Developer · Deployer · Automated Decisionmaking
State Tech. Law § 504(1)-(2)
Plain Language
Automated systems that meaningfully impact New York residents must undergo pre-deployment testing, risk identification, and risk mitigation before going live. Systems must also be subjected to ongoing monitoring post-deployment to demonstrate continued safety and effectiveness based on intended use, foreseeable misuse, and domain-specific standards. Development must include collaboration with diverse communities and domain experts. The obligations apply broadly to any computational system affecting New York residents, excluding only passive computing infrastructure.
Statutory Text
1. New York residents have the right to be protected from unsafe or ineffective automated systems. These systems must be developed in collaboration with diverse communities, stakeholders, and domain experts to identify and address any potential concerns, risks, or impacts.
2. Automated systems shall undergo pre-deployment testing, risk identification and mitigation, and shall also be subjected to ongoing monitoring that demonstrates they are safe and effective based on their intended use, mitigation of unsafe outcomes including those beyond the intended use, and adherence to domain-specific standards.
S-01 AI System Safety Program · S-01.1 · Developer · Deployer · Automated Decisionmaking
State Tech. Law § 504(3)-(4)
Plain Language
Automated systems that fail safety and effectiveness requirements must not be deployed — and if already deployed, must be pulled from service. No system may be designed with the intent or reasonably foreseeable possibility of endangering the safety of New York residents. Systems must also be affirmatively designed to protect against foreseeable harms even from unintended uses. This effectively creates a deployment-gating obligation and a continuing removal obligation.
Statutory Text
3. If an automated system fails to meet the requirements of this section, it shall not be deployed or, if already in use, shall be removed. No automated system shall be designed with the intent or a reasonably foreseeable possibility of endangering the safety of any New York resident or New York communities.
4. Automated systems shall be designed to proactively protect New York residents from harm stemming from unintended, yet foreseeable, uses or impacts.
S-01 AI System Safety Program · S-01.3 · Developer · Deployer · Automated Decisionmaking
State Tech. Law § 504(6)
Plain Language
Independent evaluations must be conducted to confirm that automated systems are safe and effective, including documentation of harm mitigation steps. Results must be made public 'whenever possible.' The qualifier 'whenever possible' introduces ambiguity about when public disclosure is actually required — it appears to contemplate exceptions but does not define them.
Statutory Text
6. Independent evaluation and reporting that confirms that the system is safe and effective, including reporting of steps taken to mitigate potential harms, shall be performed and the results made public whenever possible.
D-01 Automated Processing Rights & Data Controls · D-01.4 · Developer · Deployer · Automated Decisionmaking
State Tech. Law § 504(5)
Plain Language
Designers, developers, and deployers must ensure that data used in automated systems is appropriate and relevant to the system's purpose. Residents are also protected from compounded harms arising from data reuse — meaning data collected for one automated system purpose should not be repurposed in ways that compound risk or harm. This is a data minimization and purpose limitation obligation applied to AI system design and deployment.
Statutory Text
5. New York residents are entitled to protection from inappropriate or irrelevant data use in the design, development, and deployment of automated systems, and from the compounded harm of its reuse.
H-02 Non-Discrimination & Bias Assessment · H-02.1 · H-02.3 · H-02.5 · Developer · Deployer · Automated Decisionmaking
State Tech. Law § 505(1)-(4)
Plain Language
Designers, developers, and deployers must take proactive and continuous measures to prevent algorithmic discrimination across an expansive list of protected characteristics. Required measures include proactive equity assessments during system design, use of representative training data, protections against proxy discrimination (e.g., using non-protected features that correlate with protected characteristics), and ensuring accessibility for users with disabilities. Systems must undergo both pre-deployment and ongoing disparity testing with clear organizational oversight. The protected characteristics list is broad, including New York-specific categories such as domestic violence victim status, predisposing genetic characteristics, and prior arrest or conviction record.
Statutory Text
1. No New York resident shall face discrimination by algorithms, and all automated systems shall be used and designed in an equitable manner.
2. The designers, developers, and deployers of automated systems shall take proactive and continuous measures to protect New York residents and communities from algorithmic discrimination, ensuring the use and design of these systems in an equitable manner.
3. The protective measures required by this section shall include proactive equity assessments as part of the system design, use of representative data, protection against proxies for demographic features, and assurance of accessibility for New York residents with disabilities in design and development.
4. Automated systems shall undergo pre-deployment and ongoing disparity testing and mitigation, under clear organizational oversight.
H-02 Non-Discrimination & Bias Assessment · H-02.5 · H-02.6 · H-02.7 · Developer · Deployer · Automated Decisionmaking
State Tech. Law § 505(5)-(6)
Plain Language
All automated systems must undergo independent evaluations resulting in a plain-language algorithmic impact assessment that includes disparity testing results and mitigation measures. New York residents have the right to view these evaluations and reports. The scope is notable — this applies to all automated systems within the statute's coverage, not just high-risk systems. The statute does not define who qualifies as an 'independent' evaluator or specify publication timing or format requirements.
Statutory Text
5. Independent evaluations and plain language reporting in the form of an algorithmic impact assessment, including disparity testing results and mitigation information, shall be conducted for all automated systems.
6. New York residents shall have the right to view such evaluations and reports.
D-01 Automated Processing Rights & Data Controls · D-01.4 · Developer · Deployer · Automated Decisionmaking
State Tech. Law § 506(1)-(2)
Plain Language
Automated systems must incorporate privacy protections by design and by default. Data collection must conform to reasonable user expectations and must be limited to data that is strictly necessary for the specific context of use. This is a combined privacy-by-design and data minimization obligation. The 'strictly necessary' standard is among the most restrictive formulations — it goes beyond 'reasonably necessary' or 'proportionate' standards used in other jurisdictions.
Statutory Text
1. New York residents shall be protected from abusive data practices via built-in protections and shall maintain agency over the use of their personal data.
2. Privacy violations shall be mitigated through design choices that include privacy protections by default, ensuring that data collection conforms to reasonable expectations and that only strictly necessary data for the specific context is collected.
D-01 Automated Processing Rights & Data Controls · D-01.1 · D-01.2 · D-01.3 · Developer · Deployer · Automated Decisionmaking
State Tech. Law § 506(3)-(6)
Plain Language
Designers, developers, and deployers must respect New York residents' decisions regarding their data — including collection, use, access, transfer, and deletion — to the fullest extent possible, and must implement privacy-by-design alternatives where full user control is not feasible. Systems may not use dark patterns or privacy-invasive defaults that obscure user choice. Consent may only justify data collection where it can be meaningfully given — not through lengthy, incomprehensible terms of service. This effectively prohibits reliance on blanket consent buried in complex notices and requires that consent mechanisms be brief, plain-language, and context-specific.
Statutory Text
3. Designers, developers, and deployers of automated systems must seek and respect the decisions of New York residents regarding the collection, use, access, transfer, and deletion of their data in all appropriate ways and to the fullest extent possible. Where not possible, alternative privacy by design safeguards must be implemented.
4. Automated systems shall not employ user experience or design decisions that obscure user choice or burden users with default settings that are privacy-invasive.
5. Consent shall be used to justify the collection of data only in instances where it can be appropriately and meaningfully given. Any consent requests shall be brief, understandable in plain language, and provide New York residents with agency over data collection and its specific context of use.
6. Any existing practice of complex notice-and-choice for broad data use shall be transformed, emphasizing clarity and user comprehension.
D-01 Automated Processing Rights & Data Controls · D-01.4 · D-01.5 · Developer · Deployer · Automated Decisionmaking
State Tech. Law § 506(7)
Plain Language
In sensitive domains — areas where activities can cause material harms to human rights, autonomy, dignity, or civil liberties — data and inferences about individuals may only be used for necessary functions. This data must be safeguarded by ethical review processes and subject to use prohibitions. The sensitive data definition is extremely broad, capturing genomic data, biometric data, behavioral data, geolocation data, criminal justice data, relationship history, and all data generated by minors. The combination of the broad sensitive data definition and the 'necessary functions only' restriction creates a strict purpose limitation regime for AI systems operating in these domains.
Statutory Text
7. Enhanced protections and restrictions shall be established for data and inferences related to sensitive domains. In sensitive domains, individual data and related inferences may only be used for necessary functions, safeguarded by ethical review and use prohibitions.
S-02 Prohibited Conduct & Output Restrictions · S-02.2 · Developer · Deployer · Automated Decisionmaking · Biometrics
State Tech. Law § 506(8)-(9)
Plain Language
Surveillance technologies must undergo pre-deployment harm assessments and be subject to scope limitations protecting privacy and civil liberties. Continuous surveillance and monitoring are prohibited in education, work, housing, or any context where such use is likely to limit rights, opportunities, or access. The surveillance technology definition is exceptionally broad — covering any product or service that can be used to detect, monitor, collect, or retain data about New York residents. The continuous surveillance prohibition in education, work, and housing contexts is a categorical restriction, not a qualified one.
Statutory Text
8. New York residents and New York communities shall be free from unchecked surveillance; surveillance technologies shall be subject to heightened oversight, including at least pre-deployment assessment of their potential harms and scope limits to protect privacy and civil liberties.
9. Continuous surveillance and monitoring shall not be used in education, work, housing, or any other contexts where the use of such surveillance technologies is likely to limit rights, opportunities, or access.
G-02 Public Transparency & Documentation · G-02.4 · Developer · Deployer · Automated Decisionmaking
State Tech. Law § 506(10)
Plain Language
Whenever possible, operators must make available reporting that confirms they are respecting residents' data choices and that assesses the impact of surveillance technologies on residents' rights, opportunities, and access. The 'whenever possible' qualifier makes this a soft obligation with unclear enforceability.
Statutory Text
10. Whenever possible, New York residents shall have access to reporting that confirms respect for their data decisions and provides an assessment of the potential impact of surveillance technologies on their rights, opportunities, or access.
T-01 AI Identity Disclosure · T-01.1 · Developer · Deployer · Automated Decisionmaking
State Tech. Law § 507(1)-(3)
Plain Language
New York residents must be informed whenever an automated system is in use that impacts them. Designers, developers, and deployers must provide accessible, plain-language documentation covering: how the system works overall, the role of automation in decisions, notice that the system is in use, identification of the responsible organization or individual, and clear explanations of outcomes. This documentation must be kept current, and residents must be notified of significant changes to use cases or key functionalities. This is a broad notice and documentation obligation that applies to all covered automated systems, not just high-risk ones.
Statutory Text
1. New York residents shall be informed when an automated system is in use and New York residents shall be informed how and why the system contributes to outcomes that impact them.
2. Designers, developers, and deployers of automated systems shall provide accessible plain language documentation, including clear descriptions of the overall system functioning, the role of automation, notice of system use, identification of the individual or organization responsible for the system, and clear, timely, and accessible explanations of outcomes.
3. The provided notice shall be kept up-to-date, and New York residents impacted by the system shall be notified of any significant changes to use cases or key functionalities.
H-01 Human Oversight of Automated Decisions · H-01.1 · Developer · Deployer · Automated Decisionmaking
State Tech. Law § 507(4)-(5)
Plain Language
Residents have the right to understand how and why an automated system contributed to an outcome affecting them — even when the system was only one factor in the decision. Explanations must be technically valid, meaningful to the affected individual (not just generic boilerplate), and proportionate to the level of risk in the specific context. Higher-risk decisions require more detailed explanations. This right extends to hybrid human-AI decisions, not only fully automated ones.
Statutory Text
4. New York residents shall have the right to understand how and why an outcome impacting them was determined by an automated system, even when the automated system is not the sole determinant of the outcome.
5. Automated systems shall provide explanations that are technically valid, meaningful to the individual and any other persons who need to understand the system and proportionate to the level of risk based on the context.
G-02 Public Transparency & Documentation · G-02.4 · Developer · Deployer · Automated Decisionmaking
State Tech. Law § 507(6)
Plain Language
Summary reports about automated systems — including assessments of how clear and high-quality the notice and explanations provided to residents actually are — must be made publicly available whenever possible. This is a public transparency obligation focused on the quality of notice, not just the existence of notice. The 'whenever possible' qualifier introduces discretion about when disclosure is required.
Statutory Text
6. Summary reporting, including plain language information about these automated systems and assessments of the clarity and quality of notice and explanations, shall be made public whenever possible.
H-01 Human Oversight of Automated Decisions · H-01.4 · H-01.5 · Deployer · Automated Decisionmaking
State Tech. Law § 508(1)-(3)
Plain Language
Residents have the right to opt out of automated systems in favor of a human alternative where appropriate — assessed based on reasonable expectations in context, with emphasis on broad accessibility and protection from harmful impacts. Separately, residents must have access to a timely human review and remedy process when an automated system fails, produces errors, or when they wish to appeal or contest an outcome. The human fallback process must be accessible, equitable, effective, maintained over time, and accompanied by operator training, and should not impose an unreasonable burden on the public. The opt-out right is qualified ('where appropriate'), but the human fallback for errors and appeals appears to be a mandatory right.
Statutory Text
1. New York residents shall have the right to opt out of automated systems, where appropriate, in favor of a human alternative. The appropriateness of such an option shall be determined based on reasonable expectations in a given context, with a focus on ensuring broad accessibility and protecting the public from particularly harmful impacts. In some instances, a human or other alternative may be mandated by law.
2. New York residents shall have access to a timely human consideration and remedy through a fallback and escalation process if an automated system fails, produces an error, or if they wish to appeal or contest its impacts on them.
3. The human consideration and fallback process shall be accessible, equitable, effective, maintained, accompanied by appropriate operator training, and should not impose an unreasonable burden on the public.
H-01 Human Oversight of Automated Decisions · H-01.6 · Developer · Deployer · Automated Decisionmaking
State Tech. Law § 508(4)
Plain Language
Automated systems deployed in sensitive domains — specifically including criminal justice, employment, education, and health — face heightened obligations beyond the general requirements: they must be tailored to their specific purpose, provide meaningful oversight access, include user training for residents interacting with the system, and incorporate human consideration for adverse or high-risk decisions. The human consideration requirement for adverse decisions in sensitive domains is the strongest human oversight obligation in the statute — it effectively requires human-in-the-loop review for consequential decisions in these sectors.
Statutory Text
4. Automated systems intended for use within sensitive domains, including but not limited to criminal justice, employment, education, and health, shall additionally be tailored to their purpose, provide meaningful access for oversight, include training for New York residents interacting with the system, and incorporate human consideration for adverse or high-risk decisions.
G-02 Public Transparency & Documentation · G-02.4 · Deployer · Automated Decisionmaking
State Tech. Law § 508(5)
Plain Language
Summary reports describing human governance processes — including their timeliness, accessibility, outcomes, and effectiveness — must be made publicly available whenever possible. This public disclosure obligation applies to the human alternatives and fallback mechanisms required by § 508, allowing residents and the public to assess whether human oversight processes are functioning as intended.
Statutory Text
5. Summary reporting, which includes a description of such human governance processes and an assessment of their timeliness, accessibility, outcomes, and effectiveness, shall be made publicly available whenever possible.
Other · Automated Decisionmaking
State Tech. Law § 502
Plain Language
The statute's obligations apply to persons developing automated systems that have the potential to meaningfully impact New York residents' civil rights, civil liberties, privacy, equal opportunities (including education, housing, credit, and employment), or access to critical resources and services (including healthcare, financial services, social services, and government benefits). The scope trigger is broad — 'potential to meaningfully impact' is a low threshold, and the covered domains encompass nearly all consequential sectors of daily life. Notably, § 502 references only 'persons developing' automated systems, though the operative sections also impose obligations on deployers.
Statutory Text
The rights contained within this article shall be construed as applying to New York residents against persons developing automated systems that have the potential to meaningfully impact New York residents': 1. civil rights, civil liberties, and privacy; 2. equal opportunities; or 3. access to critical resources or services.