AI-generated content must be identifiable. The provisions below place this obligation on three different actor types (content generators, platforms, and hardware manufacturers), and the required mechanisms range from visible, human-perceptible labels to embedded machine-readable provenance signals to platform-level detection duties.
(a) A developer of a generative artificial intelligence system made available in this state shall ensure that any generative artificial intelligence system that produces images, video, or audiovisual content includes a clear and conspicuous disclosure on AI-generated content that meets all of the following requirements:
  (1) The disclosure shall include a clear and conspicuous notice, appropriate for the medium of the content, that identifies the content as AI-generated content.
  (2) The output's metadata shall identify the content as AI-generated content, identify the tool used to create the content, and include the date and time the content was created.
  (3) The disclosure, to the extent technically feasible, shall be permanent or unable to be easily removed by subsequent users.
(b) For a disclosure to be clear and conspicuous as required by subsection (a), the disclosure shall meet all of the following criteria:
  (1) For content that is solely visual, the disclosure shall be made visually, by the same means the content is presented.
  (2) For content that is both visual and audible, the disclosure shall be visual and audible.
  (3) A visual disclosure shall stand out from any accompanying text or other visual elements by its size, contrast, location, the length of time it appears, and other characteristics, so that the disclosure is easily noticed, read, and understood.
  (4) An audible disclosure shall be delivered in a volume, speed, and cadence sufficient for a reasonable person to easily hear and understand the disclosure.
  (5) The disclosure shall be unavoidable.
  (6) The disclosure shall use diction and syntax understandable to a reasonable person.
  (7) The disclosure shall not be contradicted or mitigated by, or be inconsistent with, anything else in the communication.
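A minimal illustration of the metadata requirement in paragraph (a)(2), assuming PNG output: the required fields can be carried in a tEXt chunk spliced in ahead of IEND (chunk framing per the PNG specification). All field names and the sample tool string are illustrative, not mandated by the bill, and plaintext chunks are trivially stripped, so a sketch like this would not by itself meet the permanence goal of paragraph (a)(3); durable mechanisms such as C2PA manifests or watermarks would need to be layered on top.

```python
import struct
import zlib

PNG_SIG = b"\x89PNG\r\n\x1a\n"

def _chunk(ctype: bytes, data: bytes) -> bytes:
    # PNG chunk layout: 4-byte length, 4-byte type, data, CRC32 over type+data.
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data) & 0xFFFFFFFF))

def minimal_png() -> bytes:
    # A valid 1x1 grayscale PNG, just to have something to label.
    ihdr = struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0)
    idat = zlib.compress(b"\x00\x00")  # filter byte + one pixel
    return PNG_SIG + _chunk(b"IHDR", ihdr) + _chunk(b"IDAT", idat) + _chunk(b"IEND", b"")

def add_text_chunk(png: bytes, keyword: str, value: str) -> bytes:
    # Splice a tEXt chunk (keyword, NUL, value) in front of the IEND chunk.
    assert png.startswith(PNG_SIG), "not a PNG"
    data = keyword.encode("latin-1") + b"\x00" + value.encode("latin-1")
    iend = png.rindex(b"IEND") - 4  # back up over IEND's 4-byte length field
    return png[:iend] + _chunk(b"tEXt", data) + png[iend:]

# Illustrative keys covering the (a)(2) elements: AI-generated flag,
# tool identity, and creation timestamp.
labeled = add_text_chunk(minimal_png(), "ai-generated", "true")
labeled = add_text_chunk(labeled, "ai-tool", "ExampleGen/1.0 2027-01-01T00:00:00Z")
```

Standard image viewers ignore unknown tEXt chunks, so the label travels with the file without altering its pixels.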
(c) A developer of a generative artificial intelligence system shall implement reasonable procedures to prevent downstream use of a generative artificial intelligence system without the disclosures required under subsection (a), which shall include:
  (1) Requiring by contract that end users and third-party licensees of the generative artificial intelligence system refrain from removing any required disclosure from AI-generated content;
  (2) Requiring certification that end users and third-party licensees will not remove any disclosure from AI-generated content; and
  (3) Terminating access to the generative artificial intelligence system when the developer has reason to believe that an end user or third-party licensee has removed the required disclosure from AI-generated content.
(d) Any third-party licensee of a generative artificial intelligence system shall implement reasonable procedures to prevent downstream use of a generative artificial intelligence system without the disclosures required under subsection (a). The procedures shall include:
  (1) Requiring by contract that end users of the generative artificial intelligence system refrain from removing any required disclosure from AI-generated content;
  (2) Requiring certification that end users will not remove any disclosure from AI-generated content; and
  (3) Terminating access to the generative artificial intelligence system when the third-party licensee has reason to believe that an end user has removed the required disclosure from AI-generated content.
(a) A large online platform shall do all of the following:
  (1) Detect whether any provenance data that is compliant with widely adopted specifications adopted by an established standards-setting body is embedded into or attached to content distributed on the large online platform.
  (2) (A) Provide a user interface to disclose the availability of system provenance data that reliably indicates that the content was generated or substantially altered by a GenAI system or captured by a capture device.
      (B) The user interface required by this paragraph shall make clearly and conspicuously available to users information sufficient to identify the content's authenticity, origin, or history of modification, including, but not limited to, all of the following:
        (i) Whether provenance data is available.
        (ii) The name of the GenAI system or capture device that created or substantially altered the content, if applicable.
        (iii) Whether any digital signatures are available.
(a) A large online platform shall do all of the following: ...
  (3) Allow a user to inspect all available system provenance data that is compliant with widely adopted specifications adopted by an established standards-setting body in an easily accessible manner by any of the following means:
    (A) Directly through the large online platform's user interface pursuant to paragraph (2).
    (B) Allow the user to download a version of the content with its attached system provenance data.
    (C) Provide a link to the content's system provenance data displayed on an internet website or in another application provided either by the large online platform or a third party.
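The detection duty in paragraph (1) presupposes some machine check for standards-compliant provenance. As a rough sketch only: C2PA manifests are carried in JUMBF boxes whose description carries a "c2pa" label, so a platform's ingest path might run a cheap byte-level screen before handing candidates to a conformant C2PA validator. The function and field names below are assumptions for illustration, not any platform's actual API.

```python
def looks_like_c2pa(data: bytes) -> bool:
    # Crude screen: C2PA manifest stores live in JUMBF boxes labeled
    # "c2pa". A conformant implementation would parse the JUMBF box
    # structure and verify signatures rather than scan raw bytes.
    return b"jumb" in data and b"c2pa" in data

def disclosure_panel(data: bytes, generator_name=None) -> dict:
    # Minimal information set in the spirit of paragraph (2)(B):
    # availability plus generator name if known; signature validation
    # is out of scope for this sketch.
    available = looks_like_c2pa(data)
    return {
        "provenance_available": available,
        "generator": generator_name if available else None,
    }
```

A real pipeline would cache this result per upload so the paragraph (3) inspection views can be served without rescanning the content.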
(b) A large online platform shall not, to the extent technically feasible, knowingly strip any system provenance data or digital signature that is compliant with widely adopted specifications adopted by an established standards-setting body from content uploaded or distributed on the large online platform.
(a) A capture device manufacturer shall, with respect to any capture device the capture device manufacturer first produced for sale in the state on or after January 1, 2028, do both of the following:
  (1) Provide a user with the option to include a latent disclosure in content captured by the capture device that conveys all of the following information:
    (A) The name of the capture device manufacturer.
    (B) The name and version number of the capture device that created or altered the content.
    (C) The time and date of the content's creation or alteration.
  (2) Embed latent disclosures in content captured by the device by default.
(b) A capture device manufacturer shall comply with this section only to the extent technically feasible and compliant with widely adopted specifications adopted by an established standards-setting body.
(a) A GenAI system hosting platform shall not knowingly make available a GenAI system that does not place disclosures pursuant to Section 22757.3.
(f) A covered artificial intelligence tool provider shall include a provenance label in any image, video, or audio content instance created by its artificial intelligence. A provenance label required under this subsection shall:
  (1) be readable by the provenance label reading tool required by this Section;
  (2) be, to the extent technically feasible, permanent or extraordinarily difficult to remove;
  (3) convey, to the extent technically feasible, either directly or through a link to a permanent website, the following system provenance data:
    (A) the name of the covered artificial intelligence tool provider;
    (B) the name and version number of the artificial intelligence that created or altered the content;
    (C) the time and date of the content's creation or alteration; and
    (D) a unique identifier of the content.
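The four data elements in paragraph (3) map naturally onto a small structured record. A content hash is one plausible reading of the "unique identifier" in (3)(D), since it is reproducible from the content itself; the field names below are illustrative assumptions, not anything the bill specifies.

```python
import hashlib
from datetime import datetime, timezone

def make_provenance_label(provider: str, tool: str, version: str,
                          content: bytes) -> dict:
    # One possible label payload covering (3)(A) through (3)(D).
    return {
        "provider": provider,                               # (A) provider name
        "tool": f"{tool}/{version}",                        # (B) name and version
        "created": datetime.now(timezone.utc).isoformat(),  # (C) time and date
        "content_id": hashlib.sha256(content).hexdigest(),  # (D) unique identifier
    }
```

Because the identifier is derived from the content bytes, anyone holding the content can recompute it and match it against a provider's published label, even if the embedded copy has been stripped.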
(a) A covered artificial intelligence tool provider shall make available, at no cost to a person, a provenance label reading tool. The provenance label reading tool shall be made publicly accessible through a conspicuous link on the covered artificial intelligence tool provider's website and any corresponding mobile application. The provenance label reading tool shall allow a person to:
  (1) upload an image, video, text, or audio content; or
  (2) provide a uniform resource locator that links to an image, video, text, or audio content.
(b) The provenance label reading tool shall support access by an application programming interface that allows a person to programmatically submit content for assessment without accessing the covered artificial intelligence tool provider's website.
(c) The provenance label reading tool shall provide a mechanism for a person to submit feedback regarding the tool's efficacy. A covered artificial intelligence tool provider shall consider and use this feedback to improve the provenance label reading tool.
(d) A covered artificial intelligence tool provider shall not collect or retain any personal information from a person who uses the provenance label reading tool, except that it may retain contact information voluntarily provided by a person who submits feedback in accordance with subsection (c). The provenance label reading tool shall not output any personal provenance data detected in the content.
(e) A covered artificial intelligence tool provider shall not retain any content submitted to the provenance label reading tool for longer than is necessary to comply with this Act.
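A reading tool under this section is, at its core, an extractor that locates an embedded label and returns it with personal data withheld per subsection (d). The sketch below assumes a made-up framing (a sentinel-delimited JSON blob); a real tool would parse whatever standard the labels actually use, and the URL-submission path in (a)(2) and the API surface in (b) are omitted here.

```python
import json

# Assumed framing for the embedded label; the Act does not specify one.
SENTINEL = b"\x00PROV\x00"

def read_label(content: bytes):
    """Return the embedded provenance label as a dict, or None if absent."""
    start = content.find(SENTINEL)
    if start < 0:
        return None
    start += len(SENTINEL)
    end = content.find(SENTINEL, start)
    if end < 0:
        return None  # truncated or malformed framing
    label = json.loads(content[start:end])
    # Subsection (d): the tool must not output personal provenance data.
    label.pop("personal_data", None)
    return label
```

Note that nothing about the submitted content is stored; the function is stateless, which also lines up with the retention limit in subsection (e).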
(a) A large online platform shall:
  (1) to the extent technically feasible, detect whether any provenance label that is compliant with widely adopted specifications adopted by an established standards-setting body is embedded in or attached to content distributed on the large online platform;
  (2) provide a mechanism to disclose any machine-readable provenance label detected in content distributed on the large online platform, which shall, in a clear and conspicuous manner, indicate to a user that provenance data is available; and
  (3) allow a user to inspect all available system provenance data in an easily accessible manner, either directly through the platform's user interface or by providing a means for the user to download the content with its attached system provenance data.
(b) A large online platform shall not: (1) to the extent technically feasible, knowingly strip any provenance label or system provenance data that is compliant with widely adopted specifications adopted by an established standards-setting body from content uploaded to or distributed on the large online platform; or (2) retain any personal provenance data from content shared on the large online platform.
With respect to any capture device that a capture device manufacturer first produces for sale in this State on or after the effective date of this Act, the capture device manufacturer, to the extent technically feasible and compliant with widely adopted specifications adopted by an established standards-setting body, shall:
  (1) provide a user with the option to include a provenance label in content captured by the capture device that conveys the following system provenance data:
    (A) the name of the capture device manufacturer;
    (B) the name and version number of the capture device that created the content; and
    (C) the time and date of the content's creation;
  (2) embed the provenance label described in paragraph (1) in content captured by the device by default;
  (3) clearly inform a user of the existence of settings relating to the provenance label upon the user's first use of a recording function on the capture device;
  (4) provide in the capture device's settings a clear and accessible mechanism for a user to opt out of the inclusion of a provenance label in the user's captured content; and
  (5) ensure the capabilities required by this Section are available for the capture device's default capture application and are made available to third-party applications that use the device's capture functionalities.
(a) If a covered artificial intelligence tool provider licenses its artificial intelligence to a third party, the covered artificial intelligence tool provider shall require by contract that the licensee maintain the system's capability to include a provenance label as required by subsection (f) of Section 10.
(b) If a covered artificial intelligence tool provider has actual knowledge that a third-party licensee has modified an artificial intelligence to remove its capability to include a provenance label, the covered artificial intelligence tool provider shall revoke the third party's license to use the artificial intelligence within 96 hours after obtaining the knowledge.
(c) A third-party licensee whose license to use artificial intelligence is revoked under this Section shall not use the artificial intelligence after the revocation.
(d) The operator of a website or application that makes available for download the source code or model weights of artificial intelligence shall not knowingly make available artificial intelligence that does not place disclosures into content as required by subsection (f) of Section 10.
Any artificial intelligence system that produces images, videos, audio, or multimedia artificial intelligence-generated content shall include on such artificial intelligence-generated content a clear and conspicuous disclosure that identifies the content as generated by artificial intelligence.
Any publicly distributed online media generated in whole or in part by artificial intelligence must contain identifiable markers that alert users to the use of artificial intelligence, as well as embedded markers that allow identification of the use of artificial intelligence should the original identifiable markers be deleted.
1. Beginning on January first, two thousand twenty-seven, and except as provided in subdivision two of this section, each person doing business in this state, including, but not limited to, each deployer that deploys, offers, sells, leases, licenses, gives, or otherwise makes available, as applicable, any artificial intelligence decision system that is intended to interact with consumers shall ensure that it is disclosed to each consumer who interacts with such artificial intelligence decision system that such consumer is interacting with an artificial intelligence decision system. 2. No disclosure shall be required pursuant to subdivision one of this section under circumstances in which a reasonable person would deem it obvious that such person is interacting with an artificial intelligence decision system.
1. Any book that was wholly or partially created through the use of generative artificial intelligence, published in this state, shall conspicuously disclose upon the cover of the book, that such book was created with the use of generative artificial intelligence. 2. Books subject to the provisions of this section shall include, but not be limited to, all printed and digital books, regardless of such books' target age group or audience, consisting of text, pictures, audio, puzzles, games or any combination thereof.
Any news media content published, broadcast, or otherwise disseminated or accessible within the state of New York, which was substantially composed, authored, or otherwise created through the use of generative artificial intelligence shall conspicuously imprint on the top of the page, webpage, image, graphic, video or other visual or audio/visual content, or verbally orate at the onset of audio content, that such content was substantially created by generative artificial intelligence. If the content is eligible for copyright registration such disclosure requirement shall not apply.
2. Where a search engine displays information which was generated by artificial intelligence, the search engine shall in clear, plain language in the same font size as such information, inform the user that such information was generated by artificial intelligence: (a) directly above such information; and (b) as a watermark across such information.
"Synthetic digital content" shall mean any digital content, including, but not limited to, any audio, image, text, or video, that is produced or manipulated by an artificial intelligence decision system, including, but not limited to, a general-purpose artificial intelligence model.
D. An official police report or other law-enforcement record generated during a criminal investigation that was created in whole or in part by using generative artificial intelligence shall:
  1. Include a disclaimer that the report or record contains content generated by artificial intelligence;
  2. Where technically feasible, identify the specific content in the report or record that was generated by artificial intelligence; and
  3. Include a certification by the author of the report or record that the author has read and reviewed the report or record for accuracy.
(1) To the extent commercially and technically reasonable, a covered provider shall include provenance data in any video, image, or audio content, or content that is any combination thereof, created or materially altered by the covered provider's generative artificial intelligence system and that is subject to the terms of this chapter. The provenance data must allow a user to assess whether image, video, or audio content, or content that is any combination thereof, was created or materially altered by the covered provider's generative artificial intelligence system.
(2) A covered provider must use commercially and technically reasonable methods to make the provenance data difficult to remove or tamper with. The use of a commonly supported technical standard for watermarking or metadata, such as the Coalition for Content Provenance and Authenticity specification, for provenance data is considered compliant with this subsection.
(3) A covered provider may not be required under this section to include any information relating to an identified or reasonably identifiable individual in provenance data included in content created or content materially altered by the covered provider's generative artificial intelligence system.
(4) For the purposes of this section, "materially altered" means a significant change that substantially alters the data in content. "Materially altered" does not include minor modifications that do not lead to significant changes to the perceived content or meaning of the content. Minor modifications include: changes to brightness, contrast, or color; sharpening; saturating; applying filters; resizing; scaling; cropping; format conversions; resampling; denoising; and removal of background noise in audio.
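The enumerated minor modifications in paragraph (4) invite a simple allow-list check: an edit pipeline counts as material only if it performs some operation outside the enumerated set. The operation tags below are illustrative names, and a real determination would also have to weigh the statute's perceptual-significance test, which no allow-list captures.

```python
# The statute's enumerated minor modifications, as illustrative tags.
MINOR_MODIFICATIONS = {
    "brightness", "contrast", "color", "sharpen", "saturate", "filter",
    "resize", "scale", "crop", "format-conversion", "resample",
    "denoise", "audio-background-noise-removal",
}

def is_materially_altered(operations: set) -> bool:
    # Material only if some operation falls outside the enumerated set.
    return bool(operations - MINOR_MODIFICATIONS)
```

For example, a crop-plus-resize pipeline stays minor, while any unlisted operation (say, object removal) flips the content to materially altered.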
(7)(a) A developer of a high-risk generative artificial intelligence system that generates or substantially modifies synthetic content shall ensure that the outputs of such high-risk artificial intelligence system:
  (i) Are identifiable and detectable in a manner that is accessible by consumers using industry-standard tools or tools provided by the developer;
  (ii) comply with any applicable accessibility requirements, as synthetic content, to the extent reasonably feasible; and
  (iii) apply such identification at the time the output is generated.
(b) If such synthetic content is in an audio, image, or video format that forms part of an evidently artistic, creative, satirical, fictional, or analogous work or program, the requirement for identifying outputs of high-risk artificial intelligence systems pursuant to (a) of this subsection (7) is limited to a manner that does not hinder the display or enjoyment of such work or program.
(c) The identification of outputs required by (a) of this subsection (7) does not apply to:
  (i) Synthetic content that consists exclusively of text, is published to inform the public on any matter of public interest, or is unlikely to mislead a reasonable person consuming such synthetic content; or
  (ii) the outputs of a high-risk artificial intelligence system that performs an assistive function for standard editing, does not substantially alter the input data provided by the developer, or is used to detect, prevent, investigate, or prosecute any crime as authorized by law.