A-00235
NY · State · USA
● Pending
Proposed Effective Date
2026-01-08
New York Assembly Bill 235 — An Act to amend the general business law, in relation to unauthorized depictions of public officials generated by artificial intelligence
Summary

Requires owners, licensees, or operators of visual or audio generative AI systems accessible to New York residents to implement reasonable methods to prevent users from creating unauthorized realistic depictions of public officeholders and candidates for public office, within 60 days of receiving notice from the covered person. Operators must also provide an accessible notice mechanism for covered persons to submit opt-out requests. The bill creates a private right of action for covered persons at $100 per unauthorized depiction, capped at $100,000 in aggregate, with safe harbors for reasonable compliance efforts and for depictions that are incidental or unforeseeable. Does not apply where a third party with no ownership or control over the underlying model processes the system's outputs.

Enforcement & Penalties
Enforcement Authority
Private right of action by covered persons (public officeholders or candidates for public office). No designated agency enforcer. A covered person who has sent notice and whose realistic depiction is generated after the prescribed compliance period may bring a civil action against the owner, licensee, or operator of the covered system. No cure period beyond the 60-day implementation window following notice.
Penalties
$100 per unauthorized depiction, capped at $100,000 in the aggregate per covered person. Statutory damages do not require proof of actual monetary harm. No provision for injunctive relief, attorney fees, or costs. No liability where the owner, licensee, or operator implemented a reasonable method but was unable to prevent the depiction, or where the depiction was incidental or created in an unforeseeable way.
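The statutory damages formula reduces to a simple capped product. As an illustrative sketch (the function name and constants' packaging are ours, not the bill's):

```python
def statutory_damages(unauthorized_depictions: int) -> int:
    """Illustrative calculation of statutory damages under the bill:
    $100 per unauthorized depiction, capped at $100,000 in the
    aggregate per covered person. No proof of actual monetary harm
    is required. (Hypothetical helper, not drawn from the bill text.)"""
    PER_DEPICTION = 100
    AGGREGATE_CAP = 100_000
    return min(unauthorized_depictions * PER_DEPICTION, AGGREGATE_CAP)

# 50 depictions -> $5,000; the cap binds at 1,000 depictions or more
print(statutory_damages(50))     # 5000
print(statutory_damages(2_500))  # 100000
```

Note that because the cap binds at 1,000 depictions, the marginal exposure per depiction drops to zero past that point for any single covered person.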
Who Is Covered
"Covered person" shall mean any person holding a public office or a candidate for public office. In the context of notice, covered person shall include the covered person's agent, employee or representative acting at the direction of the covered person.
What Is Covered
"Visual or audio generative artificial intelligence system" or "covered system" shall mean any artificial intelligence system that is accessible to New York residents whose primary function is to generate visual or auditory media.
Compliance Obligations · 4 obligations
CP-02 Non-Consensual Intimate Imagery · CP-02.4 · Deployer · Content Generation
Gen. Bus. Law § 390-f(2)
Plain Language
Within 60 days of receiving notice from a public officeholder or candidate (or their authorized agent) that they do not want realistic depictions of themselves generated, the owner, licensee, or operator of a covered generative AI system must implement a reasonable method to prevent users from creating such depictions. A method is deemed reasonable if it is consistent with industry standards, not overly burdensome on the system, cost-effective to implement and maintain, and up to date. This is a notice-triggered obligation — there is no duty to proactively block depictions of public officials absent notice. The 60-day window begins upon receipt of notice, not upon enactment.
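The notice-triggered window can be expressed as simple date arithmetic. This is an illustrative sketch only (the helper name is assumed, and the bill does not itself prescribe day-counting rules):

```python
from datetime import date, timedelta

def compliance_deadline(notice_received: date) -> date:
    """Sketch of the 60-day implementation window: the duty to block
    runs from receipt of the covered person's notice, not from the
    bill's enactment or effective date. (Hypothetical helper.)"""
    return notice_received + timedelta(days=60)

# Notice received March 1, 2026 -> method must be in place by April 30, 2026
print(compliance_deadline(date(2026, 3, 1)))  # 2026-04-30
```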
Statutory Text
2. The owner, licensee or operator of a visual or audio generative artificial intelligence system shall implement a reasonable method to prohibit its users from creating unauthorized realistic depictions of a covered person within sixty days of being notified by such covered person that such covered person does not want a realistic depiction of themselves to be generated by the owner, licensee or operator's system. An implemented method to prevent the unauthorized creation of realistic depictions of a covered person shall be considered reasonable when the owner, licensee or operator of the covered system has implemented a method that, in relation to the method of user inputs used by the covered system, is consistent with industry standards, not overly burdensome on the system, cost-effective to implement and maintain and is up to date.
CP-02 Non-Consensual Intimate Imagery · CP-02.3 · Deployer · Content Generation
Gen. Bus. Law § 390-f(3)
Plain Language
Operators must provide a notice intake mechanism that covered persons (public officials, candidates, or their authorized agents) can use to request that their likeness be blocked from generation. The mechanism must be easy to access, understand, complete, and submit. Operators must also provide timely status updates on each request. Operators may require reasonable identification to verify the requester's identity. This is an infrastructure obligation — it must be in place regardless of whether any covered person has yet submitted a notice.
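One way to picture the intake mechanism's minimum features (receipt, optional identity verification, and timely status updates to the sender) is a small record type. All names here are hypothetical; the bill prescribes outcomes, not implementations:

```python
from dataclasses import dataclass, field
from datetime import datetime
from enum import Enum

class Status(Enum):
    RECEIVED = "received"              # notice submitted by covered person or agent
    VERIFYING = "verifying identity"   # operator may request reasonable identification
    IMPLEMENTED = "block implemented"  # reasonable prevention method in place

@dataclass
class OptOutRequest:
    covered_person: str
    status: Status = Status.RECEIVED
    history: list = field(default_factory=list)  # (timestamp, status) updates for the sender

    def update(self, status: Status) -> None:
        """Record a status change so the sender receives clear, timely updates."""
        self.status = status
        self.history.append((datetime.now(), status))

req = OptOutRequest("Example Officeholder")
req.update(Status.VERIFYING)
req.update(Status.IMPLEMENTED)
print(req.status.value)  # block implemented
```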
Statutory Text
3. The owner, licensee or operator of a covered system shall implement a reasonable method for covered persons to send notice to them under this section provided that such method is easy to access, understand, complete and send and that such method provides clear updates to the sender on the status of their request in a timely manner. The owner, licensee or operator of a covered system may request a reasonable means of identification to process such requests.
CP-02 Non-Consensual Intimate Imagery · CP-02.4 · Deployer · Content Generation
Gen. Bus. Law § 390-f(4)
Plain Language
Operators may choose to allow covered persons (or their authorized agents) to continue generating realistic depictions of themselves even after an opt-out notice, provided the operator implements reasonable safeguards to ensure only the covered person or their authorized representatives can do so. If the safeguards are not reasonable — measured by the same industry-standard, cost-effectiveness, and up-to-date criteria as the blocking obligation — the operator is liable as if they failed to implement blocking at all. This is a permissive carve-out with a conditional liability hook: operators are not required to offer this self-use exception, but if they do, they must safeguard it adequately.
Statutory Text
4. Nothing in this section shall prohibit the owner, licensee or operator of a covered system from implementing reasonable safeguards to permit the covered person, their agent, employee or representative to use such covered system to generate realistic depictions of such covered person, provided however that such owner, licensee or operator of such covered system shall be liable in the same manner as if they had violated subdivision two of this section where such safeguards are not reasonable. A safeguard is considered reasonable for purposes of this subdivision where the owner, licensee or operator of a covered system implements measures that are consistent with industry standards, not overly burdensome on the system, cost-effective to implement and maintain and are up to date.
Other · Content Generation
Gen. Bus. Law § 390-f(6)
Plain Language
The entire section does not apply where the visual or audio outputs of the generative AI system are processed by a third party that has no ownership or control over the underlying generative model. This carve-out appears intended to exempt downstream processing services (e.g., a video editing platform that takes AI-generated outputs from another system and further processes them) where the processing party does not control the model that generated the content. The scope and practical application of this exclusion are ambiguous: it is unclear whether it exempts the upstream model operator when a third party processes the output, the third-party processor itself, or both.
Statutory Text
6. This section shall not apply to the owner, licensee or operator of a visual or audio generative artificial intelligence system where the visual or audio outputs of the system are processed by a third party that has no ownership or control over the underlying generative model.