T-02
Transparency & Disclosure
AI Content Labeling & Provenance
Applies to: Developer, Deployer, Distributor, Manufacturer, Government. Sectors: Foundation Model, Social Media, Communications, Search, Recording Device, Political Advertising, Model Hosting
Bills — Enacted: 1 unique bill
Bills — Proposed: 15
Last Updated: 2026-03-29
Core Obligation

AI-generated content must be identifiable. This obligation falls on three different actor types — content generators, platforms, and hardware manufacturers — and ranges from visible human-perceptible labels to embedded machine-readable provenance signals to platform detection.

Sub-Obligations (7)
ID
Name & Description
Enacted
Proposed
T-02.1
Visible or audible label: AI-generated content must carry a human-perceptible label — a watermark, caption, audio tag, or other conspicuous indicator — identifying it as AI-generated. Political content triggers stricter requirements in most jurisdictions imposing this obligation.
0 enacted
13 proposed
T-02.2
Embedded provenance metadata: AI-generated content must carry embedded machine-readable provenance signals at the point of generation, enabling downstream detection even if visible labels are removed. Signals must be durable and survive common transformations such as compression and format conversion.
1 enacted
6 proposed
T-02.3
Provenance standard compliance: Provenance signals must conform to an interoperable standard enabling third-party verification (e.g., C2PA Content Credentials), rather than a proprietary system that only the developer can verify.
1 enacted
2 proposed
T-02.4
Platform provenance detection duty: Large online platforms must scan content they distribute to detect whether standards-compliant provenance data is embedded in or attached to it.
1 enacted
1 proposed
T-02.5
Platform user disclosure duty: Large online platforms must provide a user-facing interface that clearly discloses when content carries provenance data indicating AI origin, including the name of the generating system and whether digital signatures are available.
1 enacted
1 proposed
T-02.6
Platform preservation duty: Large online platforms must not knowingly strip standards-compliant provenance data or digital signatures from content uploaded or distributed on the platform, to the extent technically feasible.
1 enacted
1 proposed
T-02.7
Detection tool availability: Developers of large-scale AI content generation systems must offer a publicly accessible tool or API that accepts content as input and returns a determination of whether the content was AI-generated by that developer's systems.
0 enacted
1 proposed
Bills That Map This Requirement (16 bills)
Bill
Status
Sub-Obligations
Section
Pending 2026-10-01
T-02.1, T-02.2
Section 2(a)(1)-(3), Section 2(b)(1)-(7)
Plain Language
Developers of generative AI systems available in Alabama must ensure that any image, video, or audiovisual content the system produces carries two layers of disclosure: (1) a human-perceptible label that is clear, conspicuous, unavoidable, format-matched (visual for visual content, visual-and-audible for audiovisual content), understandable to a reasonable person, and not contradicted by the content itself; and (2) embedded metadata identifying the content as AI-generated, the tool used, and the creation timestamp. Both disclosures must, to the extent technically feasible, be permanent or unable to be easily removed by subsequent users. The disclosure obligation is triggered only when the content meets the definition of AI-generated content — i.e., it materially alters a reasonable person's understanding of the content's meaning or significance. Minor AI enhancements that do not cross this materiality threshold would not trigger the obligation.
(a) A developer of a generative artificial intelligence system made available in this state shall ensure that any generative artificial intelligence system that produces images, video, or audiovisual content includes a clear and conspicuous disclosure on AI-generated content that meets all of the following requirements: (1) The disclosure shall include a clear and conspicuous notice appropriate for the medium of the content which identifies the content as AI-generated content. (2) The output's metadata shall identify the content as AI-generated content, identify the tool used to create the content, and the date and time the content was created. (3) The disclosure, to the extent technically feasible, shall be permanent or unable to be easily removed by subsequent users. (b) For a disclosure to be clear and conspicuous as required by subsection (a), the disclosure shall meet all of the following criteria: (1) For content that is solely visual, the disclosure shall be made visually in the same means the content is presented. (2) For content that is both visual and audible, the disclosure shall be visual and audible. (3) A visual disclosure shall stand out from any accompanying text or other visual elements by its size, contrast, location, the length of time it appears, and other characteristics so that the disclosure is easily noticed, read, and understood. (4) An audible disclosure shall be delivered in a volume, speed, and cadence sufficient for a reasonable person to easily hear and understand the disclosure. (5) The disclosure shall be unavoidable. (6) The disclosure shall use diction and syntax understandable to a reasonable person. (7) The disclosure shall not be contradicted, mitigated by, or inconsistent with, anything else in the communication.
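As a rough illustration of the two layers required by subsections (a)(1)-(3) quoted above, the sketch below stamps a human-perceptible caption on a generated image and writes machine-readable metadata naming the tool and the creation time. It uses Pillow and PNG text chunks purely for demonstration; text chunks are trivially strippable, so an implementation aiming at the "permanent or unable to be easily removed" standard would more plausibly use a durable provenance format such as C2PA. All field names and wording are assumptions, not statutory text.

```python
# Illustrative sketch only; field names, wording, and the PNG text-chunk
# carrier are assumptions, not requirements of the bill.
from datetime import datetime, timezone

from PIL import Image, ImageDraw
from PIL.PngImagePlugin import PngInfo

def label_output(img: Image.Image, tool_name: str) -> tuple[Image.Image, PngInfo]:
    """Attach both disclosure layers to a generated image."""
    # Layer 1: clear and conspicuous, format-matched notice (visual content
    # gets a visual disclosure).
    ImageDraw.Draw(img).text((10, 10), "AI-generated content", fill="white")

    # Layer 2: metadata identifying the content as AI-generated, the tool
    # used, and the date and time of creation (the Section 2(a)(2) elements).
    meta = PngInfo()
    meta.add_text("ai_generated", "true")
    meta.add_text("generator_tool", tool_name)
    meta.add_text("created_at", datetime.now(timezone.utc).isoformat())
    return img, meta

img, meta = label_output(Image.new("RGB", (512, 512)), "ExampleImageGen 1.0")
img.save("output.png", pnginfo=meta)  # metadata travels in the PNG text chunks
```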
Pending 2026-10-01
T-02.1
Section 2(c)(1)-(3)
Plain Language
Developers must implement reasonable procedures to prevent their generative AI systems from being used downstream without the required disclosures. At a minimum, developers must: (1) contractually require end users and third-party licensees not to remove disclosures from AI-generated content; (2) obtain certifications from end users and third-party licensees that they will not remove disclosures; and (3) terminate access when the developer has reason to believe an end user or third-party licensee has removed a required disclosure. This is an affirmative procedural obligation — developers must build and maintain these downstream safeguards, not merely include passive terms of service.
(c) A developer of a generative artificial intelligence system shall implement reasonable procedures to prevent downstream use of a generative artificial intelligence system without the disclosures required under subsection (a), which shall include: (1) Requiring by contract that end users and third-party licensees of the generative artificial intelligence system refrain from removing any required disclosure from AI-generated content; (2) Requiring certification that end users and third-party licensees will not remove any disclosure from AI-generated content; and (3) Terminating access to the generative artificial intelligence system when the developer has reason to believe that an end user or third-party licensee has removed the required disclosure from AI-generated content.
Pending 2026-10-01
T-02.1
Section 2(d)(1)-(3)
Plain Language
Third-party licensees of generative AI systems face a parallel downstream-prevention obligation to developers. They must implement reasonable procedures to prevent end users from using the system without required disclosures, including: (1) contractually requiring end users not to remove disclosures; (2) obtaining certifications from end users; and (3) terminating access when there is reason to believe an end user has removed a required disclosure. This obligation runs independently of the developer's own obligation under Section 2(c), meaning both the developer and the third-party licensee are separately responsible for downstream compliance.
(d) Any third-party licensee of a generative artificial intelligence system shall implement reasonable procedures to prevent downstream use of a generative artificial intelligence system without the disclosures required under subsection (a). The procedures shall include: (1) Requiring by contract that end users of the generative artificial intelligence system refrain from removing any required disclosure from AI-generated content; (2) Requiring certification that end users will not remove any disclosure from AI-generated content; and (3) Terminating access to the generative artificial intelligence system when the developer has reason to believe that an end user has removed the required disclosure from AI-generated content.
Enacted 2026-08-02
T-02.4, T-02.5
Bus. & Prof. Code § 22757.3.1(a)(1)-(2)
Plain Language
Large online platforms (social media, file-sharing, mass messaging platforms, and stand-alone search engines with 2M+ unique monthly users) must scan content distributed on their platform to detect any provenance data that conforms to widely adopted standards-body specifications. Where system provenance data is found indicating AI generation, substantial AI alteration, or capture-device origin, the platform must provide a user interface that clearly and conspicuously discloses: whether provenance data exists, the name of the GenAI system or capture device that created or altered the content, and whether digital signatures are available. This is a detect-and-display obligation — the platform need only surface provenance signals that are already embedded in content using recognized standards.
(a) A large online platform shall do all of the following: (1) Detect whether any provenance data that is compliant with widely adopted specifications adopted by an established standards-setting body is embedded into or attached to content distributed on the large online platform. (2) (A) Provide a user interface to disclose the availability of system provenance data that reliably indicates that the content was generated or substantially altered by a GenAI system or captured by a capture device. (B) The user interface required by this paragraph shall make clearly and conspicuously available to users information sufficient to identify the content's authenticity, origin, or history of modification, including, but not limited to, all of the following: (i) Whether provenance data is available. (ii) The name of the GenAI system or capture device that created or substantially altered the content, if applicable. (iii) Whether any digital signatures are available.
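A minimal sketch of the detect-and-display duty quoted above, assuming (for illustration only) that provenance data arrives in the PNG text-chunk fields used in the earlier example; a real platform would instead look for provenance data compliant with a widely adopted specification such as C2PA. The returned keys mirror the three items in subparagraph (B) and are otherwise invented.

```python
# Hypothetical platform-side check; field names and the PNG text-chunk carrier
# are stand-ins for a standards-compliant manifest.
from PIL import Image

def provenance_disclosure(path: str) -> dict:
    """Build the user-facing fields described in § 22757.3.1(a)(2)(B)."""
    info = getattr(Image.open(path), "text", {}) or {}  # (a)(1): detect embedded data
    available = info.get("ai_generated") == "true"
    return {
        "provenance_data_available": available,                                  # (B)(i)
        "generating_system": info.get("generator_tool") if available else None,  # (B)(ii)
        "digital_signatures_available": "signature" in info,                     # (B)(iii)
    }

print(provenance_disclosure("output.png"))
```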
Enacted 2026-08-02
T-02.5
Bus. & Prof. Code § 22757.3.1(a)(3)
Plain Language
Beyond detecting and disclosing provenance data, large online platforms must allow users to inspect the full system provenance data in an easily accessible way. The platform may satisfy this through any of three methods: (1) inline display through its own UI, (2) enabling the user to download the content with provenance data attached, or (3) providing a link to the provenance data on a website or app (the platform's own or a third party's). Platforms have flexibility in which method to use but must offer at least one.
(a) A large online platform shall do all of the following: ... (3) Allow a user to inspect all available system provenance data that is compliant with widely adopted specifications adopted by an established standards-setting body in an easily accessible manner by any of the following means: (A) Directly through the large online platform's user interface pursuant to paragraph (2). (B) Allow the user to download a version of the content with its attached system provenance data. (C) Provide a link to the content's system provenance data displayed on an internet website or in another application provided either by the large online platform or a third party.
Enacted 2026-08-02
T-02.6
Bus. & Prof. Code § 22757.3.1(b)
Plain Language
Large online platforms must not knowingly strip standards-compliant system provenance data or digital signatures from content that is uploaded to or distributed on the platform, to the extent this is technically feasible. This is a preservation duty — the platform need not add provenance data, but it must not remove what is already there. The 'knowingly' and 'technically feasible' qualifiers provide a safe harbor for incidental or unavoidable data loss during normal processing, but deliberate removal of recognized provenance signals is prohibited.
(b) A large online platform shall not, to the extent technically feasible, knowingly strip any system provenance data or digital signature that is compliant with widely adopted specifications adopted by an established standards-setting body from content uploaded or distributed on the large online platform.
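A sketch of what not stripping provenance can look like in an ordinary processing pipeline: when the platform re-encodes an upload, recognized provenance fields are copied across instead of being dropped. The key names and the PNG text-chunk carrier continue the earlier illustration and are assumptions, not the statute's terms.

```python
# Illustrative re-encode step that preserves recognized provenance fields.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

PROVENANCE_KEYS = {"ai_generated", "generator_tool", "created_at", "signature"}

def reencode_preserving_provenance(src: str, dst: str, max_size=(1920, 1920)) -> None:
    img = Image.open(src)
    original = getattr(img, "text", {}) or {}

    meta = PngInfo()
    for key, value in original.items():
        if key in PROVENANCE_KEYS:   # keep recognized provenance data
            meta.add_text(key, value)

    img.thumbnail(max_size)          # normal platform processing (downscaling)
    img.save(dst, pnginfo=meta)      # provenance survives the re-encode
```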
Enacted 2026-08-02
T-02.2, T-02.3
Bus. & Prof. Code § 22757.3.3(a)-(b)
Plain Language
Beginning January 1, 2028, manufacturers of cameras, phones, voice recorders, and similar capture devices must — for any device first produced for sale in California on or after that date — (1) give users the option to embed latent (machine-readable, not human-visible) provenance disclosures in captured content, conveying the manufacturer name, device name and version, and creation/alteration timestamp; and (2) enable this disclosure feature by default. Both obligations are subject to two safety valves: technical feasibility and compliance with widely adopted standards-body specifications. Assembly-only firms are excluded from the manufacturer definition. This is a notable extension of AI provenance law into hardware, aimed at creating an authenticity baseline for non-AI-generated content.
(a) A capture device manufacturer shall, with respect to any capture device the capture device manufacturer first produced for sale in the state on or after January 1, 2028, do both of the following: (1) Provide a user with the option to include a latent disclosure in content captured by the capture device that conveys all of the following information: (A) The name of the capture device manufacturer. (B) The name and version number of the capture device that created or altered the content. (C) The time and date of the content's creation or alteration. (2) Embed latent disclosures in content captured by the device by default. (b) A capture device manufacturer shall comply with this section only to the extent technically feasible and compliant with widely adopted specifications adopted by an established standards-setting body.
Enacted 2026-08-02
T-02.2, T-02.3
Bus. & Prof. Code § 22757.3.2(a)
Plain Language
Platforms that host GenAI systems for download (source code or model weights) must not knowingly make available any GenAI system that fails to include the latent provenance disclosures required by § 22757.3 (the existing covered-provider disclosure obligations). This effectively extends enforcement upstream: hosting platforms become gatekeepers that must verify their hosted GenAI systems embed proper provenance data before distribution. The 'knowingly' standard provides a scienter requirement — hosting platforms are not strictly liable for every non-compliant system but must not distribute them with actual knowledge of non-compliance.
(a) A GenAI system hosting platform shall not knowingly make available a GenAI system that does not place disclosures pursuant to Section 22757.3.
Pending 2027-01-01
T-02.2
Section 10(f)
Plain Language
Covered AI tool providers must embed a machine-readable provenance label in every image, video, or audio content instance their AI generates. The label must be readable by the provider's provenance label reading tool, must be permanent or extraordinarily difficult to remove (to the extent technically feasible), and must convey the provider's name, the AI system name and version, timestamp of creation or alteration, and a unique content identifier. This is an automatic embedding requirement — it applies to every content instance, not just content that users choose to label.
(f) A covered artificial intelligence tool provider shall include a provenance label in any image, video, or audio content instance created by its artificial intelligence. A provenance label required under this subsection shall: (1) be readable by the provenance label reading tool required by this Section; (2) be, to the extent technically feasible, permanent or extraordinarily difficult to remove; (3) convey, to the extent technically feasible, either directly or through a link to a permanent website, the following system provenance data: (A) the name of the covered artificial intelligence tool provider; (B) the name and version number of the artificial intelligence that created or altered the content; (C) the time and date of the content's creation or alteration; and (D) a unique identifier of the content.
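The system provenance data items in subsection (f)(3) quoted above map naturally onto a small structured payload. The JSON shape below, and the choice of a SHA-256 content hash as the "unique identifier of the content," are assumptions made for the sketch; the bill prescribes the data elements, not the encoding.

```python
# Hypothetical provenance label payload covering items (A)-(D) of subsection (f)(3).
import hashlib
import json
from datetime import datetime, timezone

def build_provenance_label(content: bytes, provider: str, system: str, version: str) -> str:
    label = {
        "provider": provider,                                  # (A) provider name
        "system": f"{system} {version}",                       # (B) AI name and version
        "created_at": datetime.now(timezone.utc).isoformat(),  # (C) time and date
        "content_id": hashlib.sha256(content).hexdigest(),     # (D) unique identifier
    }
    return json.dumps(label)

print(build_provenance_label(b"...generated media bytes...", "ExampleCo", "ExampleGen", "2.1"))
```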
Pending 2027-01-01
T-02.7
Section 10(a)-(e)
Plain Language
Covered AI tool providers must offer a free, publicly accessible provenance label reading tool via a conspicuous link on their website and mobile app. The tool must accept content uploads or URLs and must also be accessible via API for programmatic submissions. It must include a user feedback mechanism, and the provider must use that feedback to improve the tool. The provider may not collect or retain personal information from users of the tool (except voluntarily provided feedback contact info), may not output personal provenance data detected in content, and may not retain submitted content longer than necessary to comply with the Act. This effectively requires a public detection tool — similar to but more prescriptive than a verification API.
(a) A covered artificial intelligence tool provider shall make available, at no cost to a person, a provenance label reading tool. The provenance label reading tool shall be made publicly accessible through a conspicuous link on the covered artificial intelligence tool provider's website and any corresponding mobile application. The provenance label reading tool shall allow a person to: (1) upload an image, video, text, or audio content; or (2) provide a uniform resource locator that links to an image, video, text, or audio content. (b) The provenance label reading tool shall support access by an application programming interface that allows a person to programmatically submit content for assessment without accessing the covered artificial intelligence tool provider's website. (c) The provenance label reading tool shall provide a mechanism for a person to submit feedback regarding the tool's efficacy. A covered artificial intelligence tool provider shall consider and use this feedback to improve the provenance label reading tool. (d) A covered artificial intelligence tool provider shall not collect or retain any personal information from a person who uses the provenance label reading tool, except that it may retain contact information voluntarily provided by a person who submits feedback in accordance with subsection (c). The provenance label reading tool shall not output any personal provenance data detected in the content. (e) A covered artificial intelligence tool provider shall not retain any content submitted to the provenance label reading tool for longer than is necessary to comply with this Act.
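A rough sketch of the reading tool's public surface, assuming a small HTTP service: one endpoint accepts an upload or a URL and returns an assessment (also usable programmatically, per subsection (b)), and a second collects feedback. Route names, the detect_label() placeholder, and the response fields are hypothetical, and nothing submitted is stored beyond the request, mirroring subsections (d) and (e).

```python
# Hypothetical service layout; routes, fields, and detect_label() are invented.
from flask import Flask, jsonify, request

app = Flask(__name__)

def detect_label(data: bytes) -> dict:
    """Placeholder for the provider's actual label-detection logic."""
    return {"label_found": False, "system_provenance_data": None}

@app.post("/provenance/check")             # API-accessible per subsection (b)
def check():
    upload = request.files.get("content")  # (a)(1) uploaded content
    url = request.form.get("url")          # (a)(2) URL to content
    if upload is None and url is None:
        return jsonify(error="provide an upload or a url"), 400
    data = upload.read() if upload else b""  # a real tool would also fetch the URL
    return jsonify(detect_label(data))       # content is not retained after the response

@app.post("/provenance/feedback")          # subsection (c) feedback mechanism
def feedback():
    note = request.form.get("message", "")
    contact = request.form.get("contact")  # only voluntarily provided contact info is kept
    return jsonify(received=bool(note), contact_on_file=contact is not None)
```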
Pending 2027-01-01
T-02.4, T-02.5
Section 15(a)
Plain Language
Large online platforms must (1) detect standards-compliant provenance labels embedded in distributed content (to the extent technically feasible), (2) clearly and conspicuously disclose to users when provenance data is available, and (3) allow users to inspect all available system provenance data either through the platform UI or by downloading the content with its attached metadata. The detection obligation is limited to provenance labels compliant with widely adopted standards-body specifications, not proprietary formats.
(a) A large online platform shall: (1) to the extent technically feasible, detect whether any provenance label that is compliant with widely adopted specifications adopted by an established standards-setting body is embedded in or attached to content distributed on the large online platform; (2) provide a mechanism to disclose any machine-readable provenance label detected in content distributed on the large online platform, which shall, in a clear and conspicuous manner, indicate to a user that provenance data is available; and (3) allow a user to inspect all available system provenance data in an easily accessible manner, either directly through the platform's user interface or by providing a means for the user to download the content with its attached system provenance data.
Pending 2027-01-01
T-02.6
Section 15(b)
Plain Language
Large online platforms are prohibited from (1) knowingly stripping standards-compliant provenance labels or system provenance data from uploaded or distributed content, to the extent technically feasible, and (2) retaining any personal provenance data from content shared on the platform. The first prohibition covers only labels compliant with established standards-body specifications. The second is an absolute prohibition — no technical feasibility qualifier — on retaining provenance data that could identify individual users.
(b) A large online platform shall not: (1) to the extent technically feasible, knowingly strip any provenance label or system provenance data that is compliant with widely adopted specifications adopted by an established standards-setting body from content uploaded to or distributed on the large online platform; or (2) retain any personal provenance data from content shared on the large online platform.
Pending 2027-01-01
T-02.2, T-02.3
Section 20
Plain Language
For capture devices first produced for sale in Illinois on or after January 1, 2027, manufacturers must embed provenance labels by default in captured content, to the extent technically feasible and compliant with established standards-body specifications. The label must convey the manufacturer name, device name and version, and creation timestamp. Manufacturers must also give users the option to include the label, inform users of provenance label settings on first use of a recording function, provide a clear opt-out mechanism in device settings, and ensure provenance capabilities are available to both the default capture app and third-party apps using the device's capture functions. Entities engaged exclusively in assembly from others' components are excluded from the definition of capture device manufacturer.
With respect to any capture device that a capture device manufacturer first produces for sale in this State on or after the effective date of this Act, the capture device manufacturer, to the extent technically feasible and compliant with widely adopted specifications adopted by an established standards-setting body, shall: (1) provide a user with the option to include a provenance label in content captured by the capture device that conveys the following system provenance data: (A) the name of the capture device manufacturer; (B) the name and version number of the capture device that created the content; and (C) the time and date of the content's creation; (2) embed the provenance label described in paragraph (1) in content captured by the device by default; (3) clearly inform a user of the existence of settings relating to the provenance label upon the user's first use of a recording function on the capture device; (4) provide in the capture device's settings a clear and accessible mechanism for a user to opt out of the inclusion of a provenance label in the user's captured content; and (5) ensure the capabilities required by this Section are available for the capture device's default capture application and are made available to third-party applications that use the device's capture functionalities.
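The settings behavior in paragraphs (1) through (4) above (label on by default, a first-use notice about the setting, and a clear opt-out) can be pictured with a small sketch. The class, field names, and device strings below are invented for illustration.

```python
# Illustrative capture-device behavior; names and strings are invented.
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class CaptureSettings:
    embed_provenance_label: bool = True   # paragraph (2): enabled by default
    first_use_notice_shown: bool = False

def on_record_start(settings: CaptureSettings) -> Optional[dict]:
    if not settings.first_use_notice_shown:
        # paragraph (3): inform the user about the setting on first use
        print("This device embeds a provenance label in captures. "
              "You can turn this off in Settings.")
        settings.first_use_notice_shown = True
    if not settings.embed_provenance_label:  # paragraph (4): user opted out
        return None
    return {                                 # paragraph (1): system provenance data
        "manufacturer": "ExampleCam Co.",
        "device": "ExampleCam X 3.2",
        "captured_at": datetime.now(timezone.utc).isoformat(),
    }
```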
Pending 2027-01-01
T-02.2
Section 25(a)-(c)
Plain Language
When a covered AI tool provider licenses its AI to a third party, the provider must contractually require the licensee to maintain the provenance labeling capability. If the provider gains actual knowledge that a licensee has removed that capability, the provider must revoke the license within 96 hours. Once revoked, the licensee is prohibited from continuing to use the AI. This creates a dual obligation: the provider must include and enforce the contractual requirement, and the licensee must cease use upon revocation. The 96-hour clock runs from actual knowledge, not constructive notice.
(a) If a covered artificial intelligence tool provider licenses its artificial intelligence to a third party, the covered artificial intelligence tool provider shall require by contract that the licensee maintain the system's capability to include a provenance label as required by subsection (f) of Section 10. (b) If a covered artificial intelligence tool provider has actual knowledge that a third-party licensee has modified an artificial intelligence to remove its capability to include a provenance label, the covered artificial intelligence tool provider shall revoke the third party's license to use the artificial intelligence within 96 hours after obtaining the knowledge. (c) A third-party licensee whose license to use artificial intelligence is revoked under this Section shall not use the artificial intelligence after the revocation.
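A trivial sketch of the timing rule in subsection (b): the 96-hour window is measured from the moment of actual knowledge, so the deadline is simply that timestamp plus 96 hours. The timestamp below is hypothetical.

```python
# The revocation clock runs from actual knowledge, not from the modification itself.
from datetime import datetime, timedelta, timezone

actual_knowledge_at = datetime(2027, 3, 1, 9, 0, tzinfo=timezone.utc)  # hypothetical
revocation_deadline = actual_knowledge_at + timedelta(hours=96)
print(revocation_deadline.isoformat())  # 2027-03-05T09:00:00+00:00
```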
Pending 2027-01-01
T-02.2
Section 25(d)
Plain Language
Operators of model hosting platforms — websites or applications that make AI source code or model weights available for download — may not knowingly host AI that lacks the provenance labeling capability required by Section 10(f). This obligation applies based on actual knowledge that the hosted AI does not embed provenance labels; it does not impose a duty to affirmatively audit every hosted model. The obligation falls on the hosting platform operator, not the model developer.
(d) The operator of a website or application that makes available for download the source code or model weights of artificial intelligence shall not knowingly make available artificial intelligence that does not place disclosures into content as required by subsection (f) of Section 10.
Pending 2026-08-01
T-02.1
R.S. 51:1430(B)
Plain Language
Any AI system that generates images, videos, audio, or multimedia content must include a clear and conspicuous label on the output identifying it as AI-generated. The obligation is unconditional — there are no exceptions for parody, satire, artistic expression, editorial use, or de minimis content. The statute does not specify a particular label format or standard, only that the disclosure be 'clear and conspicuous.' Because the statute does not name a specific obligated party, the practical compliance burden falls on whoever develops or deploys the AI system that produces the content. Violations are classified as deceptive and unfair trade practices subject to civil fines of up to $10,000 per violation, enforceable by the attorney general.
Any artificial intelligence system that produces images, videos, audio, or multimedia artificial intelligence-generated content shall include on such artificial intelligence-generated content a clear and conspicuous disclosure that identifies the content as generated by artificial intelligence.
Pending 2025-10-01
T-02.1, T-02.2
Section 1(2)
Plain Language
All publicly distributed online media generated in whole or in part by AI must carry two layers of labeling: (1) human-perceptible markers — such as watermarks, labels, disclaimers, or audio cues — that alert users the content involves AI, and (2) embedded machine-readable markers that survive deletion of the visible markers, enabling downstream detection of AI involvement. The bill does not specify who bears responsibility for applying these markers (the AI developer, the content creator, or the distributor), nor does it specify a technical standard for the embedded markers. Governmental entities are excluded from this requirement.
Any publicly distributed online media generated in whole or in part by artificial intelligence must contain identifiable markers that alert users to the use of artificial intelligence, as well as embedded markers that allow identification of the use of artificial intelligence should the original identifiable markers be deleted.
Pending 2027-01-01
T-02.1
GBL § 1554(1)-(2)
Plain Language
While the primary mapping of this provision is to T-01 (AI identity disclosure), the bill also defines 'synthetic digital content' in § 1550(15), and the disclosure obligation in § 1554 applies to any AI decision system intended to interact with consumers — which would include content-generating systems. The obligation to disclose that a consumer is interacting with AI effectively serves as a labeling function for AI-generated content in interactive contexts. However, the bill does not impose standalone content provenance or watermarking requirements.
1. Beginning on January first, two thousand twenty-seven, and except as provided in subdivision two of this section, each person doing business in this state, including, but not limited to, each deployer that deploys, offers, sells, leases, licenses, gives, or otherwise makes available, as applicable, any artificial intelligence decision system that is intended to interact with consumers shall ensure that it is disclosed to each consumer who interacts with such artificial intelligence decision system that such consumer is interacting with an artificial intelligence decision system. 2. No disclosure shall be required pursuant to subdivision one of this section under circumstances in which a reasonable person would deem it obvious that such person is interacting with an artificial intelligence decision system.
Pending 2025-03-11
T-02.1
Gen. Bus. Law § 338(1)-(2)
Plain Language
Any book published in New York that was created wholly or partially using generative AI must carry a conspicuous on-cover disclosure stating that AI was used in its creation. This applies to all printed and digital books regardless of audience, encompassing text, pictures, audio, puzzles, games, and combinations thereof. The definition of 'generative artificial intelligence' is extremely broad — it extends well beyond large language models and image generators to encompass virtually any machine learning or algorithmic system that performs tasks, makes predictions, or approximates cognitive functions. Publishers should note the bill does not specify any penalties for noncompliance, nor does it designate an enforcement authority or create a private right of action.
1. Any book that was wholly or partially created through the use of generative artificial intelligence, published in this state, shall conspicuously disclose upon the cover of the book, that such book was created with the use of generative artificial intelligence. 2. Books subject to the provisions of this section shall include, but not be limited to, all printed and digital books, regardless of such books' target age group or audience, consisting of text, pictures, audio, puzzles, games or any combination thereof.
Pending 2025-10-12
T-02.1
GBL § 1153
Plain Language
News media content that was substantially created by generative AI and is published, broadcast, or otherwise accessible in New York must carry a conspicuous label. For visual content, the label must be imprinted at the top of the page, webpage, image, graphic, or video. For audio content, the disclosure must be verbally stated at the onset. Critically, this obligation does not apply if the content is eligible for copyright registration — which creates a significant carve-out, since human-supervised AI-assisted content may qualify for copyright protection. The threshold trigger is 'substantially composed, authored, or otherwise created' by generative AI, which is not further defined.
Any news media content published, broadcast, or otherwise disseminated or accessible within the state of New York, which was substantially composed, authored, or otherwise created through the use of generative artificial intelligence shall conspicuously imprint on the top of the page, webpage, image, graphic, video or other visual or audio/visual content, or verbally orate at the onset of audio content, that such content was substantially created by generative artificial intelligence. If the content is eligible for copyright registration such disclosure requirement shall not apply.
Pending
T-02.1
Gen. Bus. Law § 399-ss(2)(a)-(b)
Plain Language
When a search engine displays information generated by AI, it must label that information in two ways: (1) a clear, plain-language notice placed directly above the AI-generated content, and (2) a watermark displayed across the content. Both labels must be in the same font size as the AI-generated information itself. The obligation is triggered whenever AI-generated information is displayed — there is no materiality threshold, user opt-out, or exception for brief or incidental AI-generated content. The bill does not define 'search engine,' creating ambiguity about whether the obligation extends to general-purpose search engines only or also to internal site search, AI assistants with search capabilities, or other retrieval interfaces.
2. Where a search engine displays information which was generated by artificial intelligence, the search engine shall in clear, plain language in the same font size as such information, inform the user that such information was generated by artificial intelligence: (a) directly above such information; and (b) as a watermark across such information.
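One way to picture the dual labeling requirement quoted above is a rendering helper that places a plain-language notice directly above the AI-generated block and overlays a watermark across it, both inheriting the same font size. The markup, class names, and styling below are assumptions; the bill prescribes placement and font size, not any particular HTML.

```python
# Hypothetical rendering helper; markup and styling are illustrative only.
def render_ai_answer(ai_html: str, font_size_px: int = 16) -> str:
    style = f"font-size:{font_size_px}px"  # same font size as the AI-generated text
    return (
        f'<div class="ai-result" style="{style}; position:relative">'
        # (a) clear, plain-language notice directly above the information
        f'<p class="ai-notice" style="{style}">'
        f'This information was generated by artificial intelligence.</p>'
        # (b) watermark displayed across the information
        f'<div class="ai-watermark" style="{style}; position:absolute; inset:0; '
        f'opacity:0.15; pointer-events:none; display:flex; '
        f'align-items:center; justify-content:center">Generated by AI</div>'
        f'{ai_html}'
        f'</div>'
    )

print(render_ai_answer("<p>Example AI-generated answer.</p>"))
```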
Pending
T-02.1
Gen. Bus. Law § 338(1)-(2)
Plain Language
Any publisher of a book in New York that was created in whole or in part using generative AI must place a conspicuous disclosure on the cover of the book stating that AI was used in its creation. This applies to all printed and digital books — regardless of audience or age group — including those consisting of text, pictures, audio, puzzles, games, or any combination. The definition of generative AI is extremely broad, encompassing virtually any machine learning, neural network, or automated system that performs cognitive tasks or learns from data. The bill does not specify the exact wording of the disclosure, only that it must be conspicuous and appear on the cover. It also does not define what degree of AI involvement constitutes 'partially created,' leaving significant ambiguity about whether incidental AI use (e.g., grammar checking, formatting assistance) would trigger the requirement.
1. Any book that was wholly or partially created through the use of generative artificial intelligence, published in this state, shall conspicuously disclose upon the cover of the book, that such book was created with the use of generative artificial intelligence. 2. Books subject to the provisions of this section shall include, but not be limited to, all printed and digital books, regardless of such books' target age group or audience, consisting of text, pictures, audio, puzzles, games or any combination thereof.
Pending 2025-10-11
T-02.1
GBL § 1550(15)
Plain Language
The statute defines 'synthetic digital content' broadly to cover any audio, image, text, or video produced or manipulated by an AI decision system. While this definition is established, the bill does not contain a standalone operative provision requiring labeling or provenance marking of synthetic digital content. The definition appears to be anticipatory or supporting the general disclosure obligations elsewhere in the article. No independent labeling obligation is triggered by this definition alone.
"Synthetic digital content" shall mean any digital content, including, but not limited to, any audio, image, text, or video, that is produced or manipulated by an artificial intelligence decision system, including, but not limited to, a general-purpose artificial intelligence model.
Pending 2025-09-05
T-02.1
Gen. Bus. Law § 1153
Plain Language
News media content that was substantially created using generative AI and is published or accessible in New York must carry a conspicuous consumer-facing disclosure. For visual content, the label must be imprinted at the top of the page, webpage, image, graphic, or video. For audio content, the disclosure must be verbally stated at the onset. A notable carve-out applies: if the content is eligible for copyright registration, the disclosure requirement does not apply. This carve-out is significant because copyright eligibility generally requires sufficient human authorship — meaning content with enough human creative input to qualify for copyright is exempt, while purely or substantially AI-generated content (which the U.S. Copyright Office has indicated is not copyrightable) must be labeled.
Disclosure to consumers. Any news media content published, broadcast, or otherwise disseminated or accessible within the state of New York, which was substantially composed, authored, or otherwise created through the use of generative artificial intelligence shall conspicuously imprint on the top of the page, webpage, image, graphic, video or other visual or audio/visual content, or verbally orate at the onset of audio content, that such content was substantially created by generative artificial intelligence. If the content is eligible for copyright registration such disclosure requirement shall not apply.
Pending 2026-07-01
T-02.1
Va. Code § 19.2-11.14(D)
Plain Language
Any police report or law-enforcement record produced during a criminal investigation using generative AI must: (1) carry a disclaimer that it contains AI-generated content; (2) where technically feasible, specifically identify which portions were generated by AI; and (3) include a certification from the human author that they have read and reviewed the document for accuracy. This applies to reports written in whole or in part by generative AI but does not apply to mere spell-checking or grammar-checking, which are excluded from the definition of covered AI.
D. An official police report or other law-enforcement record generated during a criminal investigation that was created in whole or in part by using generative artificial intelligence shall:

1. Include a disclaimer that the report or record contains content generated by artificial intelligence;

2. Where technically feasible, identify the specific content in the report or record that was generated by artificial intelligence; and

3. Include a certification by the author of the report or record that the author has read and reviewed the report or record for accuracy.
Passed 2027-02-01
T-02.2, T-02.3
Sec. 2(1)-(4)
Plain Language
Covered providers must embed provenance data in any video, image, or audio content (or combination thereof) that is created or materially altered by their generative AI system, to the extent commercially and technically reasonable. The provenance data must enable users to assess whether content was AI-generated or materially altered. Providers must also use commercially and technically reasonable methods to make the provenance data tamper-resistant. Using a commonly supported standard such as C2PA is deemed compliant with the tamper-resistance requirement. Provenance data may not include personally identifiable information. 'Materially altered' excludes minor adjustments like brightness changes, cropping, resizing, denoising, and similar cosmetic edits. The chapter does not apply to video games, interactive e-commerce experiences, systems used solely for upscaling/noise reduction/compression, or business-to-business uses.
(1) To the extent commercially and technically reasonable, a covered provider shall include provenance data in any video, image, or audio content, or content that is any combination thereof, created or materially altered by the covered provider's generative artificial intelligence system and that is subject to the terms of this chapter. The provenance data must allow a user to assess whether image, video, or audio content, or content that is any combination thereof, was created or materially altered by the covered provider's generative artificial intelligence system. (2) A covered provider must use commercially and technically reasonable methods to make the provenance data difficult to remove or tamper with. The use of a commonly supported technical standard for watermarking or metadata, such as the coalition for content provenance and authenticity specification, for provenance data is considered compliant with this subsection. (3) A covered provider may not be required under this section to include any information relating to an identified or reasonably identifiable individual in provenance data included in content created or content materially altered by the covered provider's generative artificial intelligence system. (4) For the purposes of this section, "materially altered" means a significant change that substantially alters the data in content. "Materially altered" does not include minor modifications that do not lead to significant changes to the perceived content or meaning of the content. Minor modifications include: Changes to brightness, contrast, or color; sharpening; saturating; applying filters; resizing; scaling; cropping; format conversions; resampling; denoising; and removal of background noise in audio.
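Subsection (2) above treats a commonly supported standard such as the C2PA specification as compliant; as a simplified stand-in, the sketch below makes a provenance payload tamper-evident by signing it, so any alteration fails verification. The payload fields and the key handling are assumptions and are not production-grade.

```python
# Simplified tamper-evidence sketch; a real deployment would use C2PA or
# similar, and would not hard-code a signing key.
import hashlib
import hmac
import json

SIGNING_KEY = b"provider-held-secret"  # assumption: key managed by the covered provider

def sign_provenance(payload: dict) -> dict:
    body = json.dumps(payload, sort_keys=True).encode()
    tag = hmac.new(SIGNING_KEY, body, hashlib.sha256).hexdigest()
    return {"provenance": payload, "signature": tag}

def verify_provenance(record: dict) -> bool:
    body = json.dumps(record["provenance"], sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])

record = sign_provenance({"generated_by": "ExampleGen 2.1", "materially_altered": False})
assert verify_provenance(record)
record["provenance"]["materially_altered"] = True  # tampering with the provenance data
assert not verify_provenance(record)               # is detectable on verification
```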
Pending 2027-01-01
T-02.1, T-02.2
Sec. 2(7)(a)-(c)
Plain Language
Developers of high-risk generative AI systems that produce synthetic content must ensure their outputs are identifiable and detectable using industry-standard tools or developer-provided tools, and must apply such identification at the point of generation. For audio, image, or video content in artistic, creative, satirical, or fictional works, the identification must not hinder enjoyment of the work. Exemptions apply to text-only content, content in the public interest, content unlikely to mislead a reasonable person, outputs from assistive editing tools that do not substantially alter input data, and law enforcement-authorized outputs.
(7)(a) A developer of a high-risk generative artificial intelligence system that generates or substantially modifies synthetic content shall ensure that the outputs of such high-risk artificial intelligence system: (i) Are identifiable and detectable in a manner that is accessible by consumers using industry-standard tools or tools provided by the developer; (ii) comply with any applicable accessibility requirements, as synthetic content, to the extent reasonably feasible; and (iii) apply such identification at the time the output is generated. (b) If such synthetic content is an audio, image, or video format that forms part of an evidently artistic, creative, satirical, fictional, or analogous work or program, the requirement for identifying outputs of high-risk artificial intelligence systems pursuant to (a) of this subsection (7) is limited to a manner that does not hinder the display or enjoyment of such work or program. (c) The identification of outputs required by (a) of this subsection (7) do not apply to: (i) Synthetic content that consists exclusively of text, is published to inform the public on any matter of public interest, or is unlikely to mislead a reasonable person consuming such synthetic content; or (ii) the outputs of a high-risk artificial intelligence system that performs an assistive function for standard editing, does not substantially alter the input data provided by the developer, or is used to detect, prevent, investigate, or prosecute any crime as authorized by law.
Pending 2027-01-01
T-02.1, T-02.2
Sec. 2(7)
Plain Language
Developers of high-risk generative AI systems that produce or substantially modify synthetic content must ensure outputs are identifiable and detectable using industry-standard tools or developer-provided tools at the time of generation. For artistic, creative, satirical, or fictional works, the identification must not hinder display or enjoyment. Significant carve-outs apply: text-only synthetic content, content published in the public interest, content unlikely to mislead a reasonable person, outputs from assistive editing tools that do not substantially alter inputs, and law enforcement-authorized crime detection uses are all exempt from the identification requirement.
(7)(a) A developer of a high-risk generative artificial intelligence system that generates or substantially modifies synthetic content shall ensure that the outputs of such high-risk artificial intelligence system: (i) Are identifiable and detectable in a manner that is accessible by consumers using industry-standard tools or tools provided by the developer; (ii) comply with any applicable accessibility requirements, as synthetic content, to the extent reasonably feasible; and (iii) apply such identification at the time the output is generated. (b) If such synthetic content is an audio, image, or video format that forms part of an evidently artistic, creative, satirical, fictional, or analogous work or program, the requirement for identifying outputs of high-risk artificial intelligence systems pursuant to (a) of this subsection (7) is limited to a manner that does not hinder the display or enjoyment of such work or program. (c) The identification of outputs required by (a) of this subsection (7) do not apply to: (i) Synthetic content that consists exclusively of text, is published to inform the public on any matter of public interest, or is unlikely to mislead a reasonable person consuming such synthetic content; or (ii) the outputs of a high-risk artificial intelligence system that performs an assistive function for standard editing, does not substantially alter the input data provided by the developer, or is used to detect, prevent, investigate, or prosecute any crime as authorized by law.