T-02
Transparency & Disclosure
AI Content Labeling & Provenance
AI-generated content must be identifiable. This obligation falls on three different actor types — content generators, platforms, and hardware manufacturers — and ranges from visible human-perceptible labels to embedded machine-readable provenance signals to platform detection.
Applies to: Developer, Deployer, Distributor, Manufacturer, Government
Sector: Foundation Model, Social Media, Communications, Search, Recording Device, Political Advertising, Model Hosting
Bills — Enacted: 1 unique bill
Bills — Proposed: 13
Last Updated: 2026-03-29

Sub-Obligations (7)
T-02.1 Visible or audible label (0 enacted, 11 proposed)
AI-generated content must carry a human-perceptible label — a watermark, caption, audio tag, or other conspicuous indicator — identifying it as AI-generated. Political content triggers stricter requirements in most jurisdictions imposing this obligation.

T-02.2 Embedded provenance metadata (1 enacted, 6 proposed)
AI-generated content must carry embedded machine-readable provenance signals at the point of generation, enabling downstream detection even if visible labels are removed. Signals must be durable and survive common transformations such as compression and format conversion.

T-02.3 Provenance standard compliance (1 enacted, 1 proposed)
Provenance signals must conform to an interoperable standard enabling third-party verification (e.g., C2PA Content Credentials), rather than a proprietary system that only the developer can verify.

T-02.4 Platform provenance detection duty (1 enacted, 1 proposed)
Large online platforms must scan content they distribute to detect whether standards-compliant provenance data is embedded in or attached to it.

T-02.5 Platform user disclosure duty (1 enacted, 1 proposed)
Large online platforms must provide a user-facing interface that clearly discloses when content carries provenance data indicating AI origin, including the name of the generating system and whether digital signatures are available.

T-02.6 Platform preservation duty (1 enacted, 1 proposed)
Large online platforms must not knowingly strip standards-compliant provenance data or digital signatures from content uploaded or distributed on the platform, to the extent technically feasible.

T-02.7 Detection tool availability (0 enacted, 1 proposed)
Developers of large-scale AI content generation systems must offer a publicly accessible tool or API that accepts content as input and returns a determination of whether the content was AI-generated by that developer's systems.
Bills That Map This Requirement (14 bills)
Pending 2026-10-01
T-02.1, T-02.2
Section 2(a)(1)-(3), Section 2(b)
Plain Language
Developers of generative AI systems available in Alabama must ensure that any image, video, or audiovisual output that meets the AI-generated content threshold carries two layers of disclosure: (1) a human-perceptible label appropriate for the medium — visual for visual content, visual and audible for audiovisual content — that is conspicuous, unavoidable, understandable, and not contradicted by the content; and (2) embedded metadata identifying the content as AI-generated, the tool used to create it, and the creation timestamp. Both disclosures must, to the extent technically feasible, be permanent or not easily removable. The disclosure obligation applies only when the AI's involvement materially alters a reasonable person's understanding of the content — minor AI edits that do not change the meaning or significance would not trigger the obligation.
(a) A developer of a generative artificial intelligence system made available in this state shall ensure that any generative artificial intelligence system that produces images, video, or audiovisual content includes a clear and conspicuous disclosure on AI-generated content that meets all of the following requirements: (1) The disclosure shall include a clear and conspicuous notice appropriate for the medium of the content which identifies the content as AI-generated content. (2) The output's metadata shall identify the content as AI-generated content, identify the tool used to create the content, and the date and time the content was created. (3) The disclosure, to the extent technically feasible, shall be permanent or unable to be easily removed by subsequent users. (b) For a disclosure to be clear and conspicuous as required by subsection (a), the disclosure shall meet all of the following criteria: (1) For content that is solely visual, the disclosure shall be made visually in the same means the content is presented. (2) For content that is both visual and audible, the disclosure shall be visual and audible. (3) A visual disclosure shall stand out from any accompanying text or other visual elements by its size, contrast, location, the length of time it appears, and other characteristics so that the disclosure is easily noticed, read, and understood. (4) An audible disclosure shall be delivered in a volume, speed, and cadence sufficient for a reasonable person to easily hear and understand the disclosure. (5) The disclosure shall be unavoidable. (6) The disclosure shall use diction and syntax understandable to a reasonable person. (7) The disclosure shall not be contradicted, mitigated by, or inconsistent with, anything else in the communication.
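The metadata layer in subsection (a)(2) is the machine-readable half of the disclosure. A minimal sketch of the three required fields in Python; the field names and JSON container are assumptions, since the bill mandates what information must be conveyed but not a schema:

```python
import json
from datetime import datetime, timezone

def build_ai_metadata(tool_name: str) -> str:
    """Sketch of the fields Section 2(a)(2) requires in output metadata.

    Field names are hypothetical; the bill specifies the information,
    not a particular schema or container format.
    """
    record = {
        "ai_generated": True,          # identifies the content as AI-generated
        "generation_tool": tool_name,  # the tool used to create the content
        "created_at": datetime.now(timezone.utc).isoformat(),  # creation date and time
    }
    return json.dumps(record)
```

In practice such a record would be embedded in the output file itself (for example as C2PA Content Credentials or format-native metadata) so that it travels with the content, as the permanence requirement in (a)(3) contemplates.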
Pending 2026-10-01
T-02.2
Section 2(c)
Plain Language
Developers must implement reasonable downstream enforcement procedures to prevent their generative AI systems from being used without the required disclosures. At minimum, developers must: (1) contractually require end users and third-party licensees not to remove disclosures; (2) obtain certifications from end users and licensees that they will not remove disclosures; and (3) terminate access when the developer has reason to believe a user or licensee has removed a required disclosure. This is a procedural and contractual obligation — developers must build a compliance enforcement chain, not merely include disclosures at the point of generation.
(c) A developer of a generative artificial intelligence system shall implement reasonable procedures to prevent downstream use of a generative artificial intelligence system without the disclosures required under subsection (a), which shall include: (1) Requiring by contract that end users and third-party licensees of the generative artificial intelligence system refrain from removing any required disclosure from AI-generated content; (2) Requiring certification that end users and third-party licensees will not remove any disclosure from AI-generated content; and (3) Terminating access to the generative artificial intelligence system when the developer has reason to believe that an end user or third-party licensee has removed the required disclosure from AI-generated content.
Pending 2026-10-01
T-02.2
Section 2(d)
Plain Language
Third-party licensees — persons who license a developer's generative AI system for their own purposes — bear a parallel downstream enforcement obligation toward their own end users. Licensees must contractually prohibit end users from removing disclosures, obtain certifications, and terminate access when there is reason to believe disclosures have been removed. This mirrors the developer obligation in Section 2(c) but shifts the duty to the intermediary licensee layer. Note: the text of subsection (d)(3) refers to 'the developer' having reason to believe, which appears to be a drafting error — the obligation falls on the third-party licensee.
(d) Any third-party licensee of a generative artificial intelligence system shall implement reasonable procedures to prevent downstream use of a generative artificial intelligence system without the disclosures required under subsection (a). The procedures shall include: (1) Requiring by contract that end users of the generative artificial intelligence system refrain from removing any required disclosure from AI-generated content; (2) Requiring certification that end users will not remove any disclosure from AI-generated content; and (3) Terminating access to the generative artificial intelligence system when the developer has reason to believe that an end user has removed the required disclosure from AI-generated content.
Enacted 2026-08-02
T-02.4, T-02.5
Bus. & Prof. Code § 22757.3.1(a)(1)-(2)
Plain Language
Large online platforms (social media, file-sharing, mass messaging platforms, and stand-alone search engines with 2M+ unique monthly users) must scan content distributed on their platform to detect any provenance data that conforms to widely adopted standards-body specifications. Where system provenance data is found indicating AI generation, substantial AI alteration, or capture-device origin, the platform must provide a user interface that clearly and conspicuously discloses: whether provenance data exists, the name of the GenAI system or capture device that created or altered the content, and whether digital signatures are available. This is a detect-and-display obligation — the platform need only surface provenance signals that are already embedded in content using recognized standards.
(a) A large online platform shall do all of the following: (1) Detect whether any provenance data that is compliant with widely adopted specifications adopted by an established standards-setting body is embedded into or attached to content distributed on the large online platform. (2) (A) Provide a user interface to disclose the availability of system provenance data that reliably indicates that the content was generated or substantially altered by a GenAI system or captured by a capture device. (B) The user interface required by this paragraph shall make clearly and conspicuously available to users information sufficient to identify the content's authenticity, origin, or history of modification, including, but not limited to, all of the following: (i) Whether provenance data is available. (ii) The name of the GenAI system or capture device that created or substantially altered the content, if applicable. (iii) Whether any digital signatures are available.
Enacted 2026-08-02
T-02.5
Bus. & Prof. Code § 22757.3.1(a)(3)
Plain Language
Beyond detecting and disclosing provenance data, large online platforms must allow users to inspect the full system provenance data in an easily accessible way. The platform may satisfy this through any of three methods: (1) inline display through its own UI, (2) enabling the user to download the content with provenance data attached, or (3) providing a link to the provenance data on a website or app (the platform's own or a third party's). Platforms have flexibility in which method to use but must offer at least one.
(a) A large online platform shall do all of the following: ... (3) Allow a user to inspect all available system provenance data that is compliant with widely adopted specifications adopted by an established standards-setting body in an easily accessible manner by any of the following means: (A) Directly through the large online platform's user interface pursuant to paragraph (2). (B) Allow the user to download a version of the content with its attached system provenance data. (C) Provide a link to the content's system provenance data displayed on an internet website or in another application provided either by the large online platform or a third party.
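The detect-and-disclose pipeline in § 22757.3.1(a)(1)-(2) reduces to: scan for standards-compliant provenance data, then surface its availability, the generating system's name, and signature status. A hedged sketch, using a made-up `PROV|<system>|<signed?>` marker in place of a real standard such as C2PA Content Credentials:

```python
def surface_provenance(content: bytes) -> dict:
    """Sketch of the platform duties in Bus. & Prof. Code § 22757.3.1(a)(1)-(2).

    The 'PROV|<system>|<signed?>' marker is a stand-in; a real platform
    would parse a standards-body format such as C2PA instead.
    """
    idx = content.find(b"PROV|")
    if idx == -1:
        # (a)(1): no compliant provenance data detected in the content
        return {"provenance_available": False}
    fields = content[idx:].split(b"|", 3)
    return {
        "provenance_available": True,                   # (a)(2)(B)(i)
        "system_name": fields[1].decode(),              # (a)(2)(B)(ii)
        "signature_available": fields[2] == b"signed",  # (a)(2)(B)(iii)
    }
```

The dict this returns is exactly the information the statute requires the user interface to make "clearly and conspicuously available"; the inspection options in (a)(3) would then expose the full underlying provenance record.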
Enacted 2026-08-02
T-02.6
Bus. & Prof. Code § 22757.3.1(b)
Plain Language
Large online platforms must not knowingly strip standards-compliant system provenance data or digital signatures from content that is uploaded to or distributed on the platform, to the extent this is technically feasible. This is a preservation duty — the platform need not add provenance data, but it must not remove what is already there. The 'knowingly' and 'technically feasible' qualifiers provide a safe harbor for incidental or unavoidable data loss during normal processing, but deliberate removal of recognized provenance signals is prohibited.
(b) A large online platform shall not, to the extent technically feasible, knowingly strip any system provenance data or digital signature that is compliant with widely adopted specifications adopted by an established standards-setting body from content uploaded or distributed on the large online platform.
Enacted 2026-08-02
T-02.2, T-02.3
Bus. & Prof. Code § 22757.3.3(a)-(b)
Plain Language
Beginning January 1, 2028, manufacturers of cameras, phones, voice recorders, and similar capture devices must — for any device first produced for sale in California on or after that date — (1) give users the option to embed latent (machine-readable, not human-visible) provenance disclosures in captured content, conveying the manufacturer name, device name and version, and creation/alteration timestamp; and (2) enable this disclosure feature by default. Both obligations are subject to two safety valves: technical feasibility and compliance with widely adopted standards-body specifications. Assembly-only firms are excluded from the manufacturer definition. This is a notable extension of AI provenance law into hardware, aimed at creating an authenticity baseline for non-AI-generated content.
(a) A capture device manufacturer shall, with respect to any capture device the capture device manufacturer first produced for sale in the state on or after January 1, 2028, do both of the following: (1) Provide a user with the option to include a latent disclosure in content captured by the capture device that conveys all of the following information: (A) The name of the capture device manufacturer. (B) The name and version number of the capture device that created or altered the content. (C) The time and date of the content's creation or alteration. (2) Embed latent disclosures in content captured by the device by default. (b) A capture device manufacturer shall comply with this section only to the extent technically feasible and compliant with widely adopted specifications adopted by an established standards-setting body.
Enacted 2026-08-02
T-02.2, T-02.3
Bus. & Prof. Code § 22757.3.2(a)
Plain Language
Platforms that host GenAI systems for download (source code or model weights) must not knowingly make available any GenAI system that fails to include the latent provenance disclosures required by § 22757.3 (the existing covered-provider disclosure obligations). This effectively extends enforcement upstream: hosting platforms become gatekeepers that must verify their hosted GenAI systems embed proper provenance data before distribution. The 'knowingly' standard provides a scienter requirement — hosting platforms are not strictly liable for every non-compliant system but must not distribute them with actual knowledge of non-compliance.
(a) A GenAI system hosting platform shall not knowingly make available a GenAI system that does not place disclosures pursuant to Section 22757.3.
Pending 2027-01-01
T-02.7
Section 10(a)-(e)
Plain Language
Covered AI tool providers must offer a free, publicly accessible provenance label reading tool on their website, mobile app, and via API. The tool must allow any person to upload content or submit a URL to determine whether the content was generated or altered by the provider's AI. The tool must also accept user feedback, which the provider must use to improve the tool. Providers may not collect or retain personal information from users of the reading tool (except voluntary contact info for feedback), may not output personal provenance data, and may not retain submitted content longer than necessary.
(a) A covered artificial intelligence tool provider shall make available, at no cost to a person, a provenance label reading tool. The provenance label reading tool shall be made publicly accessible through a conspicuous link on the covered artificial intelligence tool provider's website and any corresponding mobile application. The provenance label reading tool shall allow a person to: (1) upload an image, video, text, or audio content; or (2) provide a uniform resource locator that links to an image, video, text, or audio content. (b) The provenance label reading tool shall support access by an application programming interface that allows a person to programmatically submit content for assessment without accessing the covered artificial intelligence tool provider's website. (c) The provenance label reading tool shall provide a mechanism for a person to submit feedback regarding the tool's efficacy. A covered artificial intelligence tool provider shall consider and use this feedback to improve the provenance label reading tool. (d) A covered artificial intelligence tool provider shall not collect or retain any personal information from a person who uses the provenance label reading tool, except that it may retain contact information voluntarily provided by a person who submits feedback in accordance with subsection (c). The provenance label reading tool shall not output any personal provenance data detected in the content. (e) A covered artificial intelligence tool provider shall not retain any content submitted to the provenance label reading tool for longer than is necessary to comply with this Act.
Pending 2027-01-01
T-02.2
Section 10(f)
Plain Language
Covered AI tool providers must embed a machine-readable provenance label in every image, video, or audio content instance their AI creates. The label must be readable by the provider's own reading tool, must be permanent or extraordinarily difficult to remove (to the extent technically feasible), and must convey system provenance data including the provider's name, the AI system's name and version, the timestamp of creation or alteration, and a unique content identifier. The label may convey this information directly or through a link to a permanent website.
(f) A covered artificial intelligence tool provider shall include a provenance label in any image, video, or audio content instance created by its artificial intelligence. A provenance label required under this subsection shall: (1) be readable by the provenance label reading tool required by this Section; (2) be, to the extent technically feasible, permanent or extraordinarily difficult to remove; (3) convey, to the extent technically feasible, either directly or through a link to a permanent website, the following system provenance data: (A) the name of the covered artificial intelligence tool provider; (B) the name and version number of the artificial intelligence that created or altered the content; (C) the time and date of the content's creation or alteration; and (D) a unique identifier of the content.
Pending 2027-01-01
T-02.4, T-02.5
Section 15(a)
Plain Language
Large online platforms must (1) detect standards-compliant provenance labels in content they distribute, (2) clearly and conspicuously disclose to users when provenance data is available, and (3) allow users to inspect all available system provenance data either through the platform's UI or by downloading the content with its metadata. Detection is subject to a technical feasibility qualifier. Only provenance labels compliant with widely adopted standards-body specifications trigger the detection obligation.
(a) A large online platform shall: (1) to the extent technically feasible, detect whether any provenance label that is compliant with widely adopted specifications adopted by an established standards-setting body is embedded in or attached to content distributed on the large online platform; (2) provide a mechanism to disclose any machine-readable provenance label detected in content distributed on the large online platform, which shall, in a clear and conspicuous manner, indicate to a user that provenance data is available; and (3) allow a user to inspect all available system provenance data in an easily accessible manner, either directly through the platform's user interface or by providing a means for the user to download the content with its attached system provenance data.
Pending 2027-01-01
T-02.6
Section 15(b)(1)
Plain Language
Large online platforms must not knowingly strip standards-compliant provenance labels or system provenance data from content uploaded to or distributed on the platform, to the extent technically feasible. The prohibition applies only to provenance data that is compliant with widely adopted specifications adopted by an established standards-setting body, and only when the stripping is knowing — inadvertent data loss through standard processing is not a violation.
(b) A large online platform shall not: (1) to the extent technically feasible, knowingly strip any provenance label or system provenance data that is compliant with widely adopted specifications adopted by an established standards-setting body from content uploaded to or distributed on the large online platform;
Pending 2027-01-01
T-02.2
Section 20
Plain Language
For capture devices first produced for sale in Illinois on or after January 1, 2027, manufacturers must embed provenance labels by default in captured content, to the extent technically feasible and compliant with established standards-body specifications. The label must convey manufacturer name, device name and version, and creation timestamp. Manufacturers must also give users the option to include or opt out of provenance labels, inform users of label settings on first use of a recording function, and ensure provenance capabilities are available to both the default capture app and third-party apps using the device's capture functions. Because the manufacturer definition excludes persons engaged exclusively in assembling devices from components made by others, contract assemblers are not covered.
With respect to any capture device that a capture device manufacturer first produces for sale in this State on or after the effective date of this Act, the capture device manufacturer, to the extent technically feasible and compliant with widely adopted specifications adopted by an established standards-setting body, shall: (1) provide a user with the option to include a provenance label in content captured by the capture device that conveys the following system provenance data: (A) the name of the capture device manufacturer; (B) the name and version number of the capture device that created the content; and (C) the time and date of the content's creation; (2) embed the provenance label described in paragraph (1) in content captured by the device by default; (3) clearly inform a user of the existence of settings relating to the provenance label upon the user's first use of a recording function on the capture device; (4) provide in the capture device's settings a clear and accessible mechanism for a user to opt out of the inclusion of a provenance label in the user's captured content; and (5) ensure the capabilities required by this Section are available for the capture device's default capture application and are made available to third-party applications that use the device's capture functionalities.
Pending 2027-01-01
T-02.2
Section 25(a)-(c)
Plain Language
When a covered AI tool provider licenses its AI to a third party, the provider must contractually require the licensee to maintain the system's provenance labeling capability. If the provider learns that a licensee has removed provenance labeling capability, the provider must revoke the license within 96 hours. A licensee whose license is revoked must immediately cease use. This creates a chain-of-custody enforcement mechanism ensuring provenance labeling obligations survive downstream licensing arrangements. The 96-hour revocation clock starts upon actual knowledge, not constructive notice.
(a) If a covered artificial intelligence tool provider licenses its artificial intelligence to a third party, the covered artificial intelligence tool provider shall require by contract that the licensee maintain the system's capability to include a provenance label as required by subsection (f) of Section 10. (b) If a covered artificial intelligence tool provider has actual knowledge that a third-party licensee has modified an artificial intelligence to remove its capability to include a provenance label, the covered artificial intelligence tool provider shall revoke the third party's license to use the artificial intelligence within 96 hours after obtaining the knowledge. (c) A third-party licensee whose license to use artificial intelligence is revoked under this Section shall not use the artificial intelligence after the revocation.
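The 96-hour revocation clock in Section 25(b) is simple deadline arithmetic running from the moment of actual knowledge; a sketch:

```python
from datetime import datetime, timedelta, timezone

REVOCATION_WINDOW = timedelta(hours=96)  # Section 25(b)

def revocation_deadline(actual_knowledge_at: datetime) -> datetime:
    """The clock runs from actual knowledge, not constructive notice."""
    return actual_knowledge_at + REVOCATION_WINDOW

def revocation_overdue(actual_knowledge_at: datetime, now: datetime) -> bool:
    return now > revocation_deadline(actual_knowledge_at)
```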
Pending 2027-01-01
T-02.2
Section 25(d)
Plain Language
Model hosting platforms — websites or applications that make AI source code or model weights available for download — must not knowingly distribute AI systems that lack provenance labeling capability as required by Section 10(f). This creates a gatekeeper obligation for hosting platforms to verify that AI models they distribute can embed provenance labels in generated content. The knowledge standard is 'knowingly,' so platforms are not strictly liable for unknowing distribution but must act on actual knowledge.
(d) The operator of a website or application that makes available for download the source code or model weights of artificial intelligence shall not knowingly make available artificial intelligence that does not place disclosures into content as required by subsection (f) of Section 10.
Pending 2026-08-01
T-02.1
R.S. 51:1430(B)
Plain Language
Any AI system that generates images, videos, audio, or multimedia content must include a clear and conspicuous disclosure on the output identifying it as AI-generated. The statute does not specify the form of the disclosure (watermark, caption, audio tag, etc.) — only that it be clear, conspicuous, and present on the content itself. There are no exemptions for artistic expression, satire, or de minimis use. The obligation attaches to the content at the point of generation and applies broadly to any AI system producing the covered content types, which in practice means the entity operating or controlling the system bears compliance responsibility.
Any artificial intelligence system that produces images, videos, audio, or multimedia artificial intelligence-generated content shall include on such artificial intelligence-generated content a clear and conspicuous disclosure that identifies the content as generated by artificial intelligence.
Failed 2025-10-01
T-02.1, T-02.2
Section 1(2)
Plain Language
All publicly distributed online media generated in whole or in part by AI must carry two layers of markers: (1) identifiable (human-perceptible) markers that alert users that AI was involved in generating the content, and (2) embedded (machine-readable or latent) markers that persist and allow detection of AI involvement even if the visible markers are removed. The definition of 'markers' is broad, encompassing visual marks, audio flaws, watermarks, content labels, bylines, disclaimers, and similar disclosures. Government entities are exempt. The bill does not specify which entity in the content creation or distribution chain bears this obligation — it applies to 'any publicly distributed online media,' creating ambiguity about whether the obligation falls on the creator, distributor, or platform.
Any publicly distributed online media generated in whole or in part by artificial intelligence must contain identifiable markers that alert users to the use of artificial intelligence, as well as embedded markers that allow identification of the use of artificial intelligence should the original identifiable markers be deleted.
Pending
T-02.1
Gen. Bus. Law § 338(1)-(2)
Plain Language
Any book published in New York that was created wholly or partially using generative AI must carry a conspicuous disclosure on its cover stating that AI was used in its creation. This applies to all printed and digital books regardless of target audience, encompassing books made up of text, pictures, audio, puzzles, games, or any combination. The obligation falls on the publisher of the book. The definition of generative AI is exceptionally broad — it encompasses virtually any machine learning, automation, or algorithmic system, which could sweep in tools like spell-checkers, grammar correction software, or layout automation. The bill does not specify the precise wording of the required disclosure, only that it must be conspicuous and appear on the cover.
1. Any book that was wholly or partially created through the use of generative artificial intelligence, published in this state, shall conspicuously disclose upon the cover of the book, that such book was created with the use of generative artificial intelligence. 2. Books subject to the provisions of this section shall include, but not be limited to, all printed and digital books, regardless of such books' target age group or audience, consisting of text, pictures, audio, puzzles, games or any combination thereof.
Pending
T-02.1
Gen. Bus. Law § 1153
Plain Language
Any news media content that was substantially created using generative AI must carry a conspicuous, human-perceptible label disclosing that fact. For visual content (pages, webpages, images, graphics, video), the label must appear at the top. For audio content, a verbal disclosure must be made at the beginning. However, if the content is eligible for copyright registration, the disclosure requirement does not apply. This exemption is notable because it effectively limits the labeling obligation to content that is not copyrightable — which in practice may narrow the scope significantly, given that many AI-assisted works may qualify for copyright if they involve sufficient human creative input.
Any news media content published, broadcast, or otherwise disseminated or accessible within the state of New York, which was substantially composed, authored, or otherwise created through the use of generative artificial intelligence shall conspicuously imprint on the top of the page, webpage, image, graphic, video or other visual or audio/visual content, or verbally orate at the onset of audio content, that such content was substantially created by generative artificial intelligence. If the content is eligible for copyright registration such disclosure requirement shall not apply.
Pending
T-02.1
Gen. Bus. Law § 399-ss(2)(a)-(b)
Plain Language
When a search engine displays information that was generated by AI, it must inform the user that the information is AI-generated, using clear, plain language in the same font size as the AI-generated content. The disclosure must appear in two forms simultaneously: (1) a text notice directly above the AI-generated information, and (2) a watermark overlaid across the AI-generated information. The bill does not define 'search engine,' so the scope of covered entities depends on the ordinary meaning of that term. The dual-format requirement (above-text notice plus watermark) is unusually prescriptive compared to other AI content labeling laws.
2. Where a search engine displays information which was generated by artificial intelligence, the search engine shall in clear, plain language in the same font size as such information, inform the user that such information was generated by artificial intelligence: (a) directly above such information; and (b) as a watermark across such information.
Pending
T-02.1
Gen. Bus. Law § 338(1)-(2)
Plain Language
Any book published in New York that was created in whole or in part using generative AI must carry a conspicuous disclosure on its cover stating that the book was created with the use of generative AI. This applies to all printed and digital books regardless of audience or age group, including books of text, pictures, audio, puzzles, games, or any combination. The definition of generative AI is extremely broad, encompassing virtually any machine learning or algorithmic system. The bill does not specify the exact wording of the disclosure, only that it must be conspicuous and on the cover. No penalties or enforcement mechanism are provided, which may limit practical enforceability.
1. Any book that was wholly or partially created through the use of generative artificial intelligence, published in this state, shall conspicuously disclose upon the cover of the book, that such book was created with the use of generative artificial intelligence. 2. Books subject to the provisions of this section shall include, but not be limited to, all printed and digital books, regardless of such books' target age group or audience, consisting of text, pictures, audio, puzzles, games or any combination thereof.
Pending 2025-09-05
T-02.1
Gen. Bus. Law § 1153
Plain Language
News media content that was substantially created by generative AI and is published or accessible in New York must carry a conspicuous label. For visual content, the label must appear at the top of the page, webpage, image, graphic, or video. For audio content, the disclosure must be verbally stated at the onset. Critically, this requirement does not apply if the content is eligible for copyright registration — which creates a significant carve-out, since content with sufficient human authorship to qualify for copyright protection would be exempt. The practical effect is that this labeling requirement targets fully or predominantly AI-generated content that lacks the human creative input necessary for copyright eligibility.
Any news media content published, broadcast, or otherwise disseminated or accessible within the state of New York, which was substantially composed, authored, or otherwise created through the use of generative artificial intelligence shall conspicuously imprint on the top of the page, webpage, image, graphic, video or other visual or audio/visual content, or verbally orate at the onset of audio content, that such content was substantially created by generative artificial intelligence. If the content is eligible for copyright registration such disclosure requirement shall not apply.
Pending 2026-07-01
T-02.1
§ 19.2-11.14(D)
Plain Language
Any police report or law-enforcement record created in whole or in part using generative AI must carry three elements: (1) a disclaimer stating it contains AI-generated content; (2) identification of the specific AI-generated portions where technically feasible; and (3) a certification by the authoring officer that they have read and reviewed the document for accuracy. This applies to all generative AI used in report writing, not just tools meeting the broader 'covered artificial intelligence' definition. The accuracy certification effectively creates a human-review requirement for all AI-assisted report drafting.
D. An official police report or other law-enforcement record generated during a criminal investigation that was created in whole or in part by using generative artificial intelligence shall:

1. Include a disclaimer that the report or record contains content generated by artificial intelligence;

2. Where technically feasible, identify the specific content in the report or record that was generated by artificial intelligence; and

3. Include a certification by the author of the report or record that the author has read and reviewed the report or record for accuracy.
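The three statutory elements map naturally onto a structured record that refuses to issue until each is satisfied. The sketch below is a hypothetical illustration only: the class, field names, and disclaimer wording are assumptions, not taken from the bill, and span-level identification is left optional to mirror the "where technically feasible" qualifier.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

# Hypothetical wording; the bill does not prescribe exact disclaimer text.
AI_DISCLAIMER = "DISCLAIMER: This report contains content generated by artificial intelligence."

@dataclass
class AIAssistedReport:
    body: str
    author: str
    # Element 2: offsets of AI-generated passages. Left empty when span-level
    # identification is not technically feasible, mirroring the bill's qualifier.
    ai_spans: List[Tuple[int, int]] = field(default_factory=list)
    # Element 3: the author affirms they read and reviewed the record for accuracy.
    accuracy_certified: bool = False

    def render(self) -> str:
        if not self.accuracy_certified:
            raise ValueError("cannot issue report without the author's accuracy certification")
        lines = [AI_DISCLAIMER, "", self.body, ""]  # Element 1: the disclaimer
        for start, end in self.ai_spans:            # Element 2: flag specific AI content
            lines.append(f"[AI-generated, chars {start}-{end}]: {self.body[start:end]}")
        lines.append(f"Read and reviewed for accuracy by {self.author}.")
        return "\n".join(lines)
```

The certification gate is the operative part: because element 3 applies to every AI-assisted report, human review becomes a precondition of issuance rather than an after-the-fact attestation.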
Passed 2027-02-01
T-02.2, T-02.3
Sec. 2(1)-(4)
Plain Language
Covered providers must embed provenance data in any image, video, or audio content created or materially altered by their generative AI system, to the extent commercially and technically reasonable. The provenance data must enable users to assess whether the content was AI-generated or materially altered. Providers must also use commercially and technically reasonable methods to make the provenance data tamper-resistant. Use of a commonly supported technical standard such as C2PA is deemed compliant with the tamper-resistance requirement. Provenance data may not include personal information about identifiable individuals. "Materially altered" excludes routine editing adjustments like brightness, color, cropping, resizing, format conversion, and audio noise removal. The chapter does not apply to video games, interactive e-commerce experiences, or systems used solely for upscaling, noise reduction, or compression, and does not apply to B2B uses, sales, licensing, or distribution of generative AI systems.
(1) To the extent commercially and technically reasonable, a covered provider shall include provenance data in any video, image, or audio content, or content that is any combination thereof, created or materially altered by the covered provider's generative artificial intelligence system and that is subject to the terms of this chapter. The provenance data must allow a user to assess whether image, video, or audio content, or content that is any combination thereof, was created or materially altered by the covered provider's generative artificial intelligence system.

(2) A covered provider must use commercially and technically reasonable methods to make the provenance data difficult to remove or tamper with. The use of a commonly supported technical standard for watermarking or metadata, such as the coalition for content provenance and authenticity specification, for provenance data is considered compliant with this subsection.

(3) A covered provider may not be required under this section to include any information relating to an identified or reasonably identifiable individual in provenance data included in content created or content materially altered by the covered provider's generative artificial intelligence system.

(4) For the purposes of this section, "materially altered" means a significant change that substantially alters the data in content. "Materially altered" does not include minor modifications that do not lead to significant changes to the perceived content or meaning of the content. Minor modifications include: Changes to brightness, contrast, or color; sharpening; saturating; applying filters; resizing; scaling; cropping; format conversions; resampling; denoising; and removal of background noise in audio.
Pending 2027-01-01
T-02.1, T-02.2
Sec. 2(7)(a)-(c)
Plain Language
Developers of high-risk generative AI systems that generate or substantially modify synthetic content must ensure outputs are identifiable and detectable by consumers using industry-standard tools or tools provided by the developer, with identification applied at the time of generation. For audio, image, or video content that forms part of an evidently artistic, creative, satirical, or fictional work, the identification must not hinder display or enjoyment of the work. The exemptions are disjunctive: content that consists exclusively of text, is published to inform the public on a matter of public interest, or is unlikely to mislead a reasonable person is exempt, as are outputs of systems that perform an assistive editing function, do not substantially alter the input data, or are used to detect, prevent, investigate, or prosecute crime.
(7)(a) A developer of a high-risk generative artificial intelligence system that generates or substantially modifies synthetic content shall ensure that the outputs of such high-risk artificial intelligence system: (i) Are identifiable and detectable in a manner that is accessible by consumers using industry-standard tools or tools provided by the developer; (ii) comply with any applicable accessibility requirements, as synthetic content, to the extent reasonably feasible; and (iii) apply such identification at the time the output is generated.

(b) If such synthetic content is an audio, image, or video format that forms part of an evidently artistic, creative, satirical, fictional, or analogous work or program, the requirement for identifying outputs of high-risk artificial intelligence systems pursuant to (a) of this subsection (7) is limited to a manner that does not hinder the display or enjoyment of such work or program.

(c) The identification of outputs required by (a) of this subsection (7) do not apply to: (i) Synthetic content that consists exclusively of text, is published to inform the public on any matter of public interest, or is unlikely to mislead a reasonable person consuming such synthetic content; or (ii) the outputs of a high-risk artificial intelligence system that performs an assistive function for standard editing, does not substantially alter the input data provided by the developer, or is used to detect, prevent, investigate, or prosecute any crime as authorized by law.
Pending 2027-01-01
T-02.1, T-02.2
Sec. 2(7)(a)-(c)
Plain Language
Developers of high-risk generative AI systems that produce or substantially modify synthetic content must ensure outputs are identifiable and detectable using industry-standard or developer-provided tools, with identification embedded at the time of generation. For audio, image, or video in artistic, creative, satirical, or fictional works, identification must not hinder display or enjoyment. Text-only content, content published on matters of public interest, content unlikely to mislead a reasonable person, assistive editing outputs, and law enforcement tools are exempt. This obligation applies only to high-risk generative systems — non-high-risk generative tools are not covered.
(7)(a) A developer of a high-risk generative artificial intelligence system that generates or substantially modifies synthetic content shall ensure that the outputs of such high-risk artificial intelligence system: (i) Are identifiable and detectable in a manner that is accessible by consumers using industry-standard tools or tools provided by the developer; (ii) comply with any applicable accessibility requirements, as synthetic content, to the extent reasonably feasible; and (iii) apply such identification at the time the output is generated.

(b) If such synthetic content is an audio, image, or video format that forms part of an evidently artistic, creative, satirical, fictional, or analogous work or program, the requirement for identifying outputs of high-risk artificial intelligence systems pursuant to (a) of this subsection (7) is limited to a manner that does not hinder the display or enjoyment of such work or program.

(c) The identification of outputs required by (a) of this subsection (7) do not apply to: (i) Synthetic content that consists exclusively of text, is published to inform the public on any matter of public interest, or is unlikely to mislead a reasonable person consuming such synthetic content; or (ii) the outputs of a high-risk artificial intelligence system that performs an assistive function for standard editing, does not substantially alter the input data provided by the developer, or is used to detect, prevent, investigate, or prosecute any crime as authorized by law.