A-03265
NY · State · USA
● Pending
Proposed Effective Date
2025-04-27
New York Assembly Bill 3265 — An Act to amend the state technology law, in relation to enacting the New York artificial intelligence bill of rights
Summary

Establishes a broad set of rights for New York residents affected by automated systems that meaningfully impact their civil rights, equal opportunities, or access to critical resources. Core obligations include pre-deployment safety testing with ongoing monitoring, proactive algorithmic discrimination assessments and disparity testing, data minimization and privacy-by-design, notice and explanation of automated decision-making, and human fallback and opt-out mechanisms. Enforcement is exclusively through the Attorney General, who may seek treble damages; no private right of action is created. The bill's definitions are extremely broad — 'automated system' covers virtually any computational system that affects New York residents — which could raise significant scope and preemption concerns.

Enforcement & Penalties
Enforcement Authority
Attorney General enforcement only. The penalty may be recovered by an action brought by the attorney general in any court of competent jurisdiction. No private cause of action is created — the statute expressly disclaims any private right of action by an aggrieved person against an operator.
Penalties
A penalty of not less than three times the damages caused, recoverable by the attorney general. Because the penalty is pegged to actual damages (treble damages), proof of actual harm is required. No statutory minimum dollar amount is specified.
Who Is Covered
What Is Covered
"Automated system" means any system, software, or process that affects New York residents and that uses computation as a whole or part of a system to determine outcomes, make or aid decisions, inform policy implementation, collect data or observations, or otherwise interact with New York residents or communities. Automated systems shall include, but not be limited to, systems derived from machine learning, statistics, or other data processing or artificial intelligence techniques, and shall exclude passive computing infrastructure.
Compliance Obligations · 18 obligations
S-01 AI System Safety Program · S-01.1 · S-01.4 · Developer · Deployer · Automated Decisionmaking
State Tech. Law § 504(1)-(2)
Plain Language
Automated systems must undergo pre-deployment testing covering risk identification and mitigation, and must be subject to ongoing post-deployment monitoring to demonstrate continued safety and effectiveness. Testing and monitoring must be measured against the system's intended use, foreseeable misuse, and domain-specific standards. Additionally, systems must be developed with input from diverse communities, stakeholders, and domain experts to surface concerns before deployment.
Statutory Text
1. New York residents have the right to be protected from unsafe or ineffective automated systems. These systems must be developed in collaboration with diverse communities, stakeholders, and domain experts to identify and address any potential concerns, risks, or impacts. 2. Automated systems shall undergo pre-deployment testing, risk identification and mitigation, and shall also be subjected to ongoing monitoring that demonstrates they are safe and effective based on their intended use, mitigation of unsafe outcomes including those beyond the intended use, and adherence to domain-specific standards.
S-01 AI System Safety Program · S-01.1 · Developer · Deployer · Automated Decisionmaking
State Tech. Law § 504(3)-(4)
Plain Language
An automated system that fails to meet the safety and effectiveness requirements of § 504 must not be deployed — or if already deployed, must be removed. There is a categorical prohibition on designing systems with the intent or reasonably foreseeable possibility of endangering New York residents. Systems must also be proactively designed to protect against harms from foreseeable but unintended uses. This creates both a deployment-gating requirement and a proactive safety-by-design obligation.
Statutory Text
3. If an automated system fails to meet the requirements of this section, it shall not be deployed or, if already in use, shall be removed. No automated system shall be designed with the intent or a reasonably foreseeable possibility of endangering the safety of any New York resident or New York communities. 4. Automated systems shall be designed to proactively protect New York residents from harm stemming from unintended, yet foreseeable, uses or impacts.
S-01 AI System Safety Program · S-01.3 · Developer · Deployer · Automated Decisionmaking
State Tech. Law § 504(6)
Plain Language
Independent evaluations must be conducted to confirm that automated systems are safe and effective, including documentation of harm mitigation steps. Results must be made public whenever possible. The 'whenever possible' qualifier introduces ambiguity about when public disclosure is actually required, but the independent evaluation itself appears mandatory.
Statutory Text
6. Independent evaluation and reporting that confirms that the system is safe and effective, including reporting of steps taken to mitigate potential harms, shall be performed and the results made public whenever possible.
H-02 Non-Discrimination & Bias Assessment · H-02.1 · H-02.3 · Developer · Deployer · Automated Decisionmaking
State Tech. Law § 505(1)-(4)
Plain Language
Designers, developers, and deployers must take proactive and continuous measures to prevent algorithmic discrimination. Required measures include equity assessments during system design, use of representative data, protection against proxy variables for demographic features, and accessibility assurance for persons with disabilities. Automated systems must undergo pre-deployment and ongoing disparity testing and mitigation under clear organizational oversight. The list of protected characteristics is expansive, including all New York Human Rights Law categories plus any other classification protected by law.
Statutory Text
1. No New York resident shall face discrimination by algorithms, and all automated systems shall be used and designed in an equitable manner. 2. The designers, developers, and deployers of automated systems shall take proactive and continuous measures to protect New York residents and communities from algorithmic discrimination, ensuring the use and design of these systems in an equitable manner. 3. The protective measures required by this section shall include proactive equity assessments as part of the system design, use of representative data, protection against proxies for demographic features, and assurance of accessibility for New York residents with disabilities in design and development. 4. Automated systems shall undergo pre-deployment and ongoing disparity testing and mitigation, under clear organizational oversight.
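The statute mandates "pre-deployment and ongoing disparity testing" without prescribing a metric. One convention practitioners often reach for is the four-fifths (80%) selection-rate ratio; the Python sketch below is a minimal illustration under that assumption — the group labels, outcome data, and 0.8 threshold are hypothetical choices, not anything the bill specifies.

```python
from collections import Counter

def selection_rates(outcomes):
    """Per-group favorable-outcome rates from (group, selected) pairs."""
    totals, selected = Counter(), Counter()
    for group, picked in outcomes:
        totals[group] += 1
        if picked:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def disparity_ratios(outcomes, threshold=0.8):
    """Ratio of each group's selection rate to the best-off group's rate.
    Groups below `threshold` (the common four-fifths convention) are
    flagged as candidates for the mitigation the statute requires."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: (r / best, r / best >= threshold) for g, r in rates.items()}

# Hypothetical loan-approval outcomes: (group, approved?)
data = ([("a", True)] * 80 + [("a", False)] * 20
        + [("b", True)] * 50 + [("b", False)] * 50)
print(disparity_ratios(data))
```

A deployer running this pre-deployment and on a schedule post-deployment would satisfy the cadence the section describes, though the bill leaves metric choice and remediation steps to the operator.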
H-02 Non-Discrimination & Bias Assessment · H-02.5 · H-02.6 · H-02.7 · Developer · Deployer · Automated Decisionmaking
State Tech. Law § 505(5)-(6)
Plain Language
All automated systems must undergo independent evaluations, and the results must be documented in plain-language algorithmic impact assessments that include disparity testing results and descriptions of mitigation steps taken. New York residents have the right to view these evaluations and reports, effectively requiring public disclosure. This applies to all automated systems — not limited to high-risk or employment contexts — making the scope significantly broader than typical independent audit requirements.
Statutory Text
5. Independent evaluations and plain language reporting in the form of an algorithmic impact assessment, including disparity testing results and mitigation information, shall be conducted for all automated systems. 6. New York residents shall have the right to view such evaluations and reports.
D-01 Automated Processing Rights & Data Controls · D-01.4 · Developer · Deployer · Automated Decisionmaking
State Tech. Law § 506(1)-(2)
Plain Language
Automated systems must incorporate privacy protections by default. Data collection must conform to reasonable expectations and must be limited to what is strictly necessary for the specific context — a data minimization requirement. This is a design-level obligation requiring privacy-by-design architecture, not merely a policy commitment.
Statutory Text
1. New York residents shall be protected from abusive data practices via built-in protections and shall maintain agency over the use of their personal data. 2. Privacy violations shall be mitigated through design choices that include privacy protections by default, ensuring that data collection conforms to reasonable expectations and that only strictly necessary data for the specific context is collected.
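Section 506(2)'s requirement that "only strictly necessary data for the specific context is collected" reads as a design-level rule rather than a policy commitment. A minimal sketch of one way to enforce it is a per-context allow-list applied at the point of collection; the context names and field names below are illustrative assumptions, not drawn from the bill.

```python
# Hypothetical allow-list: for each processing context, only the fields
# strictly necessary for that context may be collected. All names here
# are illustrative.
NECESSARY_FIELDS = {
    "loan_decision": {"income", "credit_history"},
    "shipping": {"name", "address"},
}

def minimize(record, context):
    """Drop every field the declared context does not strictly need."""
    allowed = NECESSARY_FIELDS[context]
    return {k: v for k, v in record.items() if k in allowed}

raw = {
    "name": "Jane Doe",
    "address": "123 Main St",
    "income": 50000,
    "credit_history": "good",
    "browsing_history": ["site1", "site2"],  # never necessary here
}
print(minimize(raw, "loan_decision"))
```

Filtering at ingestion, rather than after storage, is what makes this "privacy by default" in the sense the section describes.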
D-01 Automated Processing Rights & Data Controls · D-01.3 · Developer · Deployer · Automated Decisionmaking
State Tech. Law § 506(3)-(6)
Plain Language
Designers, developers, and deployers must respect residents' decisions regarding collection, use, access, transfer, and deletion of their data. Where honoring those decisions is not possible, alternative privacy-by-design safeguards must be used. Systems may not use dark patterns or privacy-invasive defaults. Consent may only justify data collection where it can be meaningfully given, and consent requests must be brief, in plain language, and context-specific. Existing complex notice-and-choice practices must be simplified. This effectively creates a right to opt out of data collection and requires affirmative, meaningful consent practices.
Statutory Text
3. Designers, developers, and deployers of automated systems must seek and respect the decisions of New York residents regarding the collection, use, access, transfer, and deletion of their data in all appropriate ways and to the fullest extent possible. Where not possible, alternative privacy by design safeguards must be implemented. 4. Automated systems shall not employ user experience or design decisions that obscure user choice or burden users with default settings that are privacy-invasive. 5. Consent shall be used to justify the collection of data only in instances where it can be appropriately and meaningfully given. Any consent requests shall be brief, understandable in plain language, and provide New York residents with agency over data collection and its specific context of use. 6. Any existing practice of complex notice-and-choice for broad data use shall be transformed, emphasizing clarity and user comprehension.
D-01 Automated Processing Rights & Data Controls · D-01.4 · D-01.5 · Developer · Deployer · Automated Decisionmaking
State Tech. Law § 506(7)
Plain Language
In sensitive domains — broadly defined as areas where activities can cause material harms to human rights, autonomy, dignity, or civil liberties — individual data and related inferences may only be used for necessary functions. These uses must be safeguarded by ethical review and subject to use prohibitions. The definition of 'sensitive data' is extremely broad, encompassing data generated by minors, biometric data, genomic data, behavioral data, geolocation data, criminal justice data, and data with reasonable potential to cause harm. This effectively imposes a strict-necessity standard for data use in sensitive contexts.
Statutory Text
7. Enhanced protections and restrictions shall be established for data and inferences related to sensitive domains. In sensitive domains, individual data and related inferences may only be used for necessary functions, safeguarded by ethical review and use prohibitions.
Other · Developer · Deployer · Automated Decisionmaking
State Tech. Law § 506(8)-(9)
Plain Language
Surveillance technologies must undergo pre-deployment assessment of potential harms and be subject to scope limits to protect privacy and civil liberties. Continuous surveillance and monitoring is prohibited in education, work, housing, or any other context where it is likely to limit rights, opportunities, or access. The definition of 'surveillance technology' is extremely broad — encompassing virtually any product or service that can collect, monitor, or retain data about individuals.
Statutory Text
8. New York residents and New York communities shall be free from unchecked surveillance; surveillance technologies shall be subject to heightened oversight, including at least pre-deployment assessment of their potential harms and scope limits to protect privacy and civil liberties. 9. Continuous surveillance and monitoring shall not be used in education, work, housing, or any other contexts where the use of such surveillance technologies is likely to limit rights, opportunities, or access.
D-01 Automated Processing Rights & Data Controls · D-01.1 · Developer · Deployer · Automated Decisionmaking
State Tech. Law § 506(10)
Plain Language
Residents should have access to reporting that confirms their data preferences are being honored and that assesses the impact of surveillance technologies on their rights and access. The 'whenever possible' qualifier creates ambiguity about when this right is actually enforceable.
Statutory Text
10. Whenever possible, New York residents shall have access to reporting that confirms respect for their data decisions and provides an assessment of the potential impact of surveillance technologies on their rights, opportunities, or access.
T-01 AI Identity Disclosure · T-01.1 · Developer · Deployer · Automated Decisionmaking
State Tech. Law § 507(1)-(3)
Plain Language
Residents must be informed whenever an automated system is in use and must be told how and why it contributes to outcomes affecting them. Designers, developers, and deployers must provide accessible plain-language documentation covering system functioning, the role of automation, notice of use, identification of the responsible party, and explanations of outcomes. This documentation must be kept current, and residents must be notified of significant changes to use cases or functionality. This combines AI identity disclosure with an ongoing documentation and change-notification obligation.
Statutory Text
1. New York residents shall be informed when an automated system is in use and New York residents shall be informed how and why the system contributes to outcomes that impact them. 2. Designers, developers, and deployers of automated systems shall provide accessible plain language documentation, including clear descriptions of the overall system functioning, the role of automation, notice of system use, identification of the individual or organization responsible for the system, and clear, timely, and accessible explanations of outcomes. 3. The provided notice shall be kept up-to-date, and New York residents impacted by the system shall be notified of any significant changes to use cases or key functionalities.
H-01 Human Oversight of Automated Decisions · H-01.1 · Developer · Deployer · Automated Decisionmaking
State Tech. Law § 507(4)-(5)
Plain Language
Residents have the right to understand how and why an automated system determined an outcome affecting them — including when the system was only one factor in the decision. Explanations must be technically valid, meaningful to the individual, and proportionate to the risk level of the context. This is an individual explanation right, not a general transparency obligation — it applies to specific outcomes affecting specific individuals.
Statutory Text
4. New York residents shall have the right to understand how and why an outcome impacting them was determined by an automated system, even when the automated system is not the sole determinant of the outcome. 5. Automated systems shall provide explanations that are technically valid, meaningful to the individual and any other persons who need to understand the system and proportionate to the level of risk based on the context.
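Section 507(5) requires explanations that are "technically valid" and "meaningful to the individual" without prescribing a technique. For a simple linear score, one plausible approach is ranking feature contributions and phrasing the top drivers in plain language; the sketch below is only an illustration of that idea, and the feature names and weights are hypothetical.

```python
def explain(weights, features, top_n=2):
    """Rank the factors that most moved a linear score and phrase them
    plainly. `top_n` is one crude way to keep the explanation
    proportionate to risk, as the statute requires; all inputs here
    are illustrative."""
    contribs = {name: weights[name] * value
                for name, value in features.items()}
    ranked = sorted(contribs.items(), key=lambda kv: abs(kv[1]),
                    reverse=True)
    return [f"{name} {'raised' if c > 0 else 'lowered'} your score"
            for name, c in ranked[:top_n]]

print(explain({"income": 0.5, "debt": -0.8, "tenure": 0.1},
              {"income": 1.0, "debt": 2.0, "tenure": 3.0}))
```

Because the right attaches even when the system is only one factor in the decision, an operator would also need to describe the automated component's role alongside output like this.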
G-02 Public Transparency & Documentation · G-02.1 · Developer · Deployer · Automated Decisionmaking
State Tech. Law § 507(6)
Plain Language
Plain-language summary reporting about automated systems — including assessments of the quality and clarity of the notice and explanations provided to residents — must be made public whenever possible. The 'whenever possible' qualifier creates significant ambiguity about when this obligation is enforceable.
Statutory Text
6. Summary reporting, including plain language information about these automated systems and assessments of the clarity and quality of notice and explanations, shall be made public whenever possible.
H-01 Human Oversight of Automated Decisions · H-01.4 · H-01.5 · Deployer · Automated Decisionmaking
State Tech. Law § 508(1)-(3)
Plain Language
Residents have the right to opt out of automated systems in favor of a human alternative where appropriate, based on reasonable expectations and the risk of harmful impacts. When a system fails, produces an error, or when a resident wishes to appeal or contest an outcome, they must have access to a timely human consideration and remedy through a fallback and escalation process. The human fallback process must be accessible, equitable, effective, maintained, accompanied by operator training, and must not impose an unreasonable burden. This creates both an opt-out right and a mandatory human review/appeal mechanism.
Statutory Text
1. New York residents shall have the right to opt out of automated systems, where appropriate, in favor of a human alternative. The appropriateness of such an option shall be determined based on reasonable expectations in a given context, with a focus on ensuring broad accessibility and protecting the public from particularly harmful impacts. In some instances, a human or other alternative may be mandated by law. 2. New York residents shall have access to a timely human consideration and remedy through a fallback and escalation process if an automated system fails, produces an error, or if they wish to appeal or contest its impacts on them. 3. The human consideration and fallback process shall be accessible, equitable, effective, maintained, accompanied by appropriate operator training, and should not impose an unreasonable burden on the public.
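Section 508(2)-(3) requires a timely, maintained fallback and escalation process but sets no deadline. A minimal sketch of an appeal queue follows, assuming a hypothetical 7-day review window — the SLA, field names, and queue structure are all assumptions, not statutory requirements.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

REVIEW_SLA = timedelta(days=7)  # assumed window; the bill says only "timely"

@dataclass
class Appeal:
    """A resident's request for human consideration of an automated outcome."""
    decision_id: str
    reason: str
    filed: datetime = field(default_factory=datetime.utcnow)
    due: datetime = field(init=False)

    def __post_init__(self):
        self.due = self.filed + REVIEW_SLA

def escalate(queue, appeal):
    """Route a contested outcome to the human-review queue."""
    queue.append(appeal)
    return (f"Appeal {appeal.decision_id} routed to human reviewer, "
            f"due {appeal.due:%Y-%m-%d}")

review_queue = []
print(escalate(review_queue, Appeal("D-123", "score based on stale data")))
```

Tracking the due date per appeal is one way an operator could later produce the timeliness reporting § 508(5) asks for.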
H-01 Human Oversight of Automated Decisions · H-01.6 · Developer · Deployer · Automated Decisionmaking
State Tech. Law § 508(4)
Plain Language
Automated systems used in sensitive domains — including criminal justice, employment, education, and health — face heightened requirements beyond the general human fallback provisions. These systems must be tailored to their purpose, provide meaningful oversight access, include user training for residents who interact with the system, and incorporate human consideration for adverse or high-risk decisions. The human consideration requirement for high-risk decisions in sensitive domains effectively mandates human-in-the-loop review before acting on adverse automated outcomes in these contexts.
Statutory Text
4. Automated systems intended for use within sensitive domains, including but not limited to criminal justice, employment, education, and health, shall additionally be tailored to their purpose, provide meaningful access for oversight, include training for New York residents interacting with the system, and incorporate human consideration for adverse or high-risk decisions.
G-02 Public Transparency & Documentation · Deployer · Automated Decisionmaking
State Tech. Law § 508(5)
Plain Language
Summary reporting describing human governance processes — including their timeliness, accessibility, outcomes, and effectiveness — must be made publicly available whenever possible. This is a public transparency obligation related to the human fallback and consideration mechanisms, not a general model documentation requirement.
Statutory Text
5. Summary reporting, which includes a description of such human governance processes and an assessment of their timeliness, accessibility, outcomes, and effectiveness, shall be made publicly available whenever possible.
Other · Automated Decisionmaking
State Tech. Law § 509(1)-(3)
Plain Language
Operators who violate any right in this article face a penalty of at least treble the damages caused, recoverable only by the Attorney General. The statute expressly disclaims any private cause of action. This is a penalty and enforcement provision — it creates no independent compliance obligation but establishes the consequences for violating the substantive rights in §§ 504–508.
Statutory Text
1. Where an operator of an automated system violates or causes a violation of any of the rights stated within this article, such operator shall be liable to the people of this state for a penalty not less than three times such damages caused. 2. The penalty provided for in subdivision one of this section may be recovered by an action brought by the attorney general in any court of competent jurisdiction. 3. Nothing set forth in this article shall be construed as creating, establishing, or authorizing a private cause of action by an aggrieved person against an operator of an automated system who has violated, or is alleged to have violated, any provision of this article.
S-01 AI System Safety Program · Developer · Deployer · Automated Decisionmaking
State Tech. Law § 504(5)
Plain Language
Residents must be protected from the use of inappropriate or irrelevant data in the design, development, and deployment of automated systems, including protection from the compounded harm of reusing such data across systems. This creates a data quality and relevance obligation in the safety context — data used to train and operate automated systems must be appropriate and relevant to the system's purpose.
Statutory Text
5. New York residents are entitled to protection from inappropriate or irrelevant data use in the design, development, and deployment of automated systems, and from the compounded harm of its reuse.