A-08962
NY · State · USA
● Pending
Proposed Effective Date
2025-10-12
New York Assembly Bill 8962-A — An act to amend the general business law and the civil rights law, in relation to enacting the "New York fundamental artificial intelligence requirements in news (FAIR news) act"
Summary

The FAIR News Act imposes obligations on news media employers regarding the use of generative AI in newsrooms. It requires employers to disclose to workers when and how generative AI tools are used in content creation, requires conspicuous consumer-facing labeling of news content substantially created by generative AI (unless the content is eligible for copyright registration), mandates human review and approval authority before AI-generated news content may be published, prohibits training AI on worker work product without notice and consent, prohibits displacement of workers through AI adoption, and requires employers of journalists to establish safeguards protecting journalistic sources from AI access. The bill contains no enforcement mechanism, penalty provisions, or private right of action.

Enforcement & Penalties
Enforcement Authority
The bill specifies no enforcement authority: there is no designated agency enforcer, no private right of action, and no penalty provisions. The enforcement mechanism is therefore unclear; the obligations might be enforceable under New York's general consumer protection statutes (GBL §§ 349 and 350) or through labor law frameworks, but the bill itself does not say.
Penalties
The bill does not specify any penalties, damages, or remedies for violations. No statutory damages, civil penalties, injunctive relief, or attorney fee provisions are included.
Compliance Obligations · 7 obligations
T-01 AI Identity Disclosure · T-01.1 · Deployer · Content Generation · Employment
GBL § 1152
Plain Language
News media employers must fully disclose to their workers when and how any generative AI tool is used in the workplace for content creation, including writing, recordings, and transcripts. The disclosure must include a description of the AI system and a summary of its purpose and use. This is an internal, worker-facing disclosure obligation, distinct from the consumer-facing labeling requirement in § 1153. The bill does not specify the timing, format, or frequency of the disclosure beyond requiring that it be 'full.'
Statutory Text
News media employers shall fully disclose to workers when and how any generative artificial intelligence tool is used in the workplace as it relates to the creation of content, including, but not limited to, writing, recordings and transcripts. Such disclosure shall include a description of the artificial intelligence system and a summary of the purpose and use of such system.
T-02 AI Content Labeling & Provenance · T-02.1 · Deployer · Content Generation
GBL § 1153
Plain Language
News media content that was substantially created by generative AI and is published, broadcast, or otherwise accessible in New York must carry a conspicuous label. For visual content, the label must be imprinted at the top of the page, webpage, image, graphic, or video. For audio content, the disclosure must be verbally stated at the onset. Critically, this obligation does not apply if the content is eligible for copyright registration — which creates a significant carve-out, since human-supervised AI-assisted content may qualify for copyright protection. The threshold trigger is 'substantially composed, authored, or otherwise created' by generative AI, which is not further defined.
Statutory Text
Any news media content published, broadcast, or otherwise disseminated or accessible within the state of New York, which was substantially composed, authored, or otherwise created through the use of generative artificial intelligence shall conspicuously imprint on the top of the page, webpage, image, graphic, video or other visual or audio/visual content, or verbally orate at the onset of audio content, that such content was substantially created by generative artificial intelligence. If the content is eligible for copyright registration such disclosure requirement shall not apply.
H-01 Human Oversight of Automated Decisions · H-01.6 · Deployer · Content Generation
GBL § 1154
Plain Language
Before any news media content that was created in whole or in material part by generative AI may be published (with the consumer disclosure required by § 1153), a human worker must review the content and must have authority to approve, deny, or modify the AI system's output. This is a mandatory human-in-the-loop requirement — the human reviewer must have genuine override authority, not merely a rubber-stamp role. The obligation is tied to the publication act: AI-generated content cannot be published until it has been through this human review gate. Note the threshold here ('in whole or in material part') is broader than the consumer disclosure threshold ('substantially composed'), potentially requiring human review even when the § 1153 labeling requirement does not apply.
Statutory Text
Any news media content, including stories, articles, audio, visuals or images, which are created in whole or in material part by generative artificial intelligence shall be reviewed by a human worker who has the authority to approve, deny, or modify any decision recommended or made by the automated system before such content may be published with the disclosure under section eleven hundred fifty-three of this article.
Other · Content Generation · Employment
GBL § 1155(1)
Plain Language
News media employers may not authorize the training of any generative AI system on news media worker work product — whether directly or through a third party — unless the employer has provided notice to the worker, obtained the worker's consent, and given the worker an opportunity to bargain over appropriate remuneration. Workers cannot be penalized for declining to consent. This creates both a consent-before-training obligation and an anti-retaliation protection. The scope is broad: it covers any third-party arrangement where the employer authorizes training, not just in-house AI development.
Statutory Text
News media employers shall not directly or through a third party authorize the training of a generative artificial intelligence system on the work product of a news media worker without notice, consent and an opportunity to bargain over appropriate remuneration. A news media employer shall not penalize a news media worker for declining to consent to allow their work product to be used to train a generative artificial intelligence system.
Other · Content Generation · Employment
GBL § 1155(2)(a)
Plain Language
The adoption of generative AI or automated employment decision-making tools by news media employers must not diminish employees' existing rights under collective bargaining agreements, existing representational relationships among employee organizations, or existing bargaining relationships between employers and employee organizations. This is both a preservation clause and an affirmative constraint — employers cannot use AI adoption as a mechanism to weaken existing labor agreements or union relationships.
Statutory Text
The use of generative artificial intelligence or automated employment decision-making tools shall not diminish (i) the existing rights of employees pursuant to an existing collective bargaining agreement; or (ii) the existing representational relationships among employee organizations or the bargaining relationships between the employer and an employee organization.
Other · Content Generation · Employment
GBL § 1155(2)(b)
Plain Language
News media employers may not use generative AI in a manner that results in the discharge, displacement, or loss of position of workers — including partial displacement such as reduced hours, wages, or benefits — or that impairs existing collective bargaining agreements. The prohibition also extends to transferring existing duties and functions from human employees to AI systems. This is a sweeping anti-displacement provision: it effectively prohibits news media employers from replacing human workers with AI for any existing job functions. The provision uses 'shall not result in' language, which appears to create strict liability for displacement outcomes regardless of employer intent.
Statutory Text
The use of generative artificial intelligence systems shall not result in: (i) discharge, displacement or loss of position, including partial displacement such as a reduction in the hours of non-overtime work, wages, or employment benefits, or result in the impairment of existing collective bargaining agreements; or (ii) transfer of existing duties and functions previously performed by employees or workers.
Other · Content Generation · Employment
Civil Rights Law § 79-h(h)
Plain Language
Employers of professional journalists and newscasters must establish safeguards to protect journalistic sources and confidential materials from being accessed by any AI technology. This covers materials gathered through location tracking, surveillance, or any other means. The obligation is framed broadly — it applies to 'any artificial intelligence technology' that could access confidential source information, not just generative AI. This amends New York's existing journalist shield law (Civil Rights Law § 79-h) to add an AI-specific source protection requirement. The bill does not specify what 'safeguards' must entail, leaving significant discretion to employers.
Statutory Text
Employers of professional journalists and newscasters shall establish safeguards to protect journalistic sources and confidential materials gathered through location tracking, surveillance or any other means, which can be accessed by any artificial intelligence technology, as defined by section eleven hundred fifty-one of the general business law.