G-02
Governance & Documentation
Public Transparency & Documentation
Applies to: Developer, Deployer, Government Sector, Foundation Model
Bills — Enacted: 2 unique bills
Bills — Proposed: 31
Last Updated: 2026-03-29
Core Obligation

Developers of AI systems must publish standardized documentation describing model capabilities, limitations, intended uses, safety measures, and risk assessments. The primary audience is the public and downstream deployers — this is distinct from confidential regulatory submissions. Publication must occur before or at deployment and must be kept current.

Sub-Obligations: 3
Bills That Map This Requirement: 33 bills
Bill | Status | Sub-Obligations | Section
Pending 2027-01-01
G-02.1
Health & Safety Code § 1339.76(a)-(c)
Plain Language
Health facilities, clinics, physician's offices, and group practices that use or deploy AI or clinical decision support systems in patient care must provide a comprehensive disclosure to any licensed health care professional or other person who uses the tool or views its outputs. The disclosure must cover twelve categories of information: developer identity and funding; intended use and patient population; out-of-scope uses and known risks; the tool's inputs; how outputs are generated; training data details including demographic representativeness and bias; validation processes; performance measures; ongoing maintenance procedures; update and continued-validation processes; a notice of health care entity and developer liability; and a notice that the worker may override the tool's output based on professional judgment. The disclosure must be provided at the time of tool use, in plain language, linked in the patient's health record, and with sufficient time for the professional to make informed decisions. This functions as a detailed model card requirement specific to health care AI tools, directed at the deploying health care entity rather than the AI developer.
(a) A health facility, clinic, physician's office, or office of a group practice that uses or deploys a covered tool for patient care shall disclose required information, described in subdivision (b), to any licensed health care professional or other person using a covered tool or viewing outputs from a covered tool. (b) Required information under subdivision (a) shall include all of the following: (1) Details on the covered tool, including developer, funding source, any foundation model used, and description of output. (2) Intended use of the covered tool, including intended patient population, intended users, and intended decisionmaking role. (3) Cautioned out-of-scope use of the covered tool, including known risks and limitations. (4) List of the inputs into the covered tool. (5) Description of how the covered tool generates outputs. (6) Development details of the covered tool, including, but not limited to, all of the following: (A) Description of the training set or clinical research underlying recommendations, including demographic representativeness and known biases based on protected characteristics. (B) Description of the relevance of training data to deployed setting. (C) Process used to ensure fairness in development of the intervention. (7) Description of the validation process. (8) Qualitative measures of performance. (9) Description of ongoing maintenance of intervention implementation and use. (10) Description of updates and continued validation or fairness assessment process. (11) Notice that health care entities and developers are liable for harm that results from the use of artificial intelligence in patient care. (12) Notice that a worker providing direct patient care is permitted to override the output of a covered tool if, in the judgment of the worker acting in their scope of practice, such an override is appropriate for the patient, or as necessary to comply with applicable law, including civil rights law. (c) (1) A disclosure made pursuant to this section shall be provided at the time the licensed health care professional or other person uses the covered tool or views any recommendation or output generated by the covered tool. (2) The disclosure shall be provided in plain language to, and linked in the health record of, any patient whose care was affected by the output of the covered tool or whose health information was used as an input to the covered tool. (3) The disclosure shall be provided with ample time for the licensed health care professional or other person to review and make reasoned decisions based on their professional judgment on whether and how to use the covered tool.
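For readers building compliance tooling, a minimal sketch of how the twelve § 1339.76(b) categories might be captured as a single record. The field names are hypothetical; the statute prescribes disclosure content, not any particular format.

```python
from dataclasses import dataclass

# Hypothetical record mirroring Health & Safety Code § 1339.76(b)(1)-(12).
# Field names are illustrative only; the statute fixes content, not format.
@dataclass
class CoveredToolDisclosure:
    tool_details: str          # (1) developer, funding source, foundation model, output description
    intended_use: str          # (2) patient population, intended users, decisionmaking role
    out_of_scope_uses: str     # (3) cautioned uses, known risks and limitations
    inputs: list[str]          # (4) inputs into the covered tool
    output_generation: str     # (5) how the tool generates outputs
    development_details: str   # (6) training data, representativeness, biases, fairness process
    validation_process: str    # (7) description of the validation process
    performance_measures: str  # (8) qualitative measures of performance
    maintenance: str           # (9) ongoing maintenance of implementation and use
    update_process: str        # (10) updates and continued validation/fairness assessment
    liability_notice: str      # (11) entity and developer liability for harm
    override_notice: str       # (12) worker may override output within scope of practice
```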
Pending 2027-07-01
G-02.4
Bus. & Prof. Code § 22612(c)
Plain Language
Operators must publish a child safety policy on their website that describes the protective measures they have taken to mitigate child safety risks identified through their risk assessments. The policy must be kept current and updated as needed. This is a public-facing transparency obligation — the policy must be accessible to parents, users, and the public, not just regulators.
Publish on its internet website, and update as needed to ensure accuracy, a child safety policy.
Pending 2026-01-01
G-02.4
Bus. & Prof. Code § 22756.2(b)(1)-(3)
Plain Language
Deployers must publish a summary statement on their website describing the types of high-risk automated decision systems they deploy, how they manage algorithmic discrimination risks, and the nature and source of data used by those systems. This is a standing public transparency obligation — it must be maintained on the deployer's website and kept current as systems change.
(b) A deployer shall make available on its internet website a statement summarizing all of the following: (1) The types of high-risk automated decision systems it currently deploys. (2) How the deployer manages known or reasonably foreseeable risks of algorithmic discrimination arising from the deployment of those high-risk automated decision systems. (3) The nature and source of the information collected and used by the high-risk automated decision systems deployed by the deployer.
Enacted 2026-01-01
G-02.1
Bus. & Prof. Code § 22757.12(c)(1)
Plain Language
All frontier developers must publish a transparency report on their website at or before deployment of each new or substantially modified frontier model, disclosing key model characteristics including supported languages, output modalities, intended uses, and use restrictions. Section 22757.12(c)(3) explicitly provides that publishing this information as part of a system card or model card satisfies the requirement, and Section 22757.12(c)(4) encourages developers to make these disclosures consistent with industry best practices.
Before, or concurrently with, deploying a new frontier model or a substantially modified version of an existing frontier model, a frontier developer shall clearly and conspicuously publish on its internet website a transparency report containing all of the following: (A) The internet website of the frontier developer. (B) A mechanism that enables a natural person to communicate with the frontier developer. (C) The release date of the frontier model. (D) The languages supported by the frontier model. (E) The modalities of output supported by the frontier model. (F) The intended uses of the frontier model. (G) Any generally applicable restrictions or conditions on uses of the frontier model.
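As a sketch only (field names hypothetical), the § 22757.12(c)(1)(A)-(G) report reduces to a small fixed schema, which is why a conventional model or system card can satisfy it under (c)(3):

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical schema for the Bus. & Prof. Code § 22757.12(c)(1)(A)-(G) fields.
@dataclass
class FrontierTransparencyReport:
    developer_website: str           # (A) internet website of the frontier developer
    contact_mechanism: str           # (B) way for a natural person to reach the developer
    release_date: date               # (C) release date of the frontier model
    supported_languages: list[str]   # (D) languages supported by the model
    output_modalities: list[str]     # (E) modalities of output supported
    intended_uses: list[str]         # (F) intended uses of the model
    use_restrictions: list[str]      # (G) generally applicable restrictions or conditions
```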
Enacted 2026-01-01
G-02.3
Bus. & Prof. Code § 22757.12(c)(2)
Plain Language
Large frontier developers must include in each deployment transparency report summaries of their catastrophic risk assessments, assessment results, third-party evaluator involvement, and other steps taken under their frontier AI framework for that model.
Before, or concurrently with, deploying a new frontier model or a substantially modified version of an existing frontier model, a large frontier developer shall include in the transparency report required by paragraph (1) summaries of all of the following: (A) Assessments of catastrophic risks from the frontier model conducted pursuant to the large frontier developer's frontier AI framework. (B) The results of those assessments. (C) The extent to which third-party evaluators were involved. (D) Other steps taken to fulfill the requirements of the frontier AI framework with respect to the frontier model.
Enacted 2026-01-01
Bus. & Prof. Code § 22757.12(f)
Plain Language
When a frontier developer redacts published compliance documents for trade secret, cybersecurity, public safety, or national security reasons, it must describe the nature and justification of each redaction and retain the unredacted information for five years.
(1)  When a frontier developer publishes documents to comply with this section, the frontier developer may make redactions to those documents that are necessary to protect the frontier developer’s trade secrets, the frontier developer’s cybersecurity, public safety, or the national security of the United States or to comply with any federal or state law. (2) If a frontier developer redacts information in a document pursuant to this subdivision, the frontier developer shall describe the character and justification of the redaction in any published version of the document to the extent permitted by the concerns that justify redaction and shall retain the unredacted information for five years.
Enacted 2026-06-30
G-02.4
C.R.S. § 6-1-1702(4)(a)
Plain Language
Developers must publish on their website or in a public use case inventory a clear, readily available statement summarizing their high-risk AI systems. The specific content of this summary is defined in the original SB 205 § 6-1-1702(4)(a) (types of high-risk AI systems developed, how the developer manages known or foreseeable risks of algorithmic discrimination, etc.). This is a public-facing transparency obligation distinct from the deployer-facing documentation requirement.
(4) (a) On and after June 30, 2026, a developer shall make available, in a manner that is clear and readily available on the developer's website or in a public use case inventory, a statement summarizing:
Enacted 2026-06-30
G-02.4
C.R.S. § 6-1-1703(5)(a)
Plain Language
Deployers must publish a clear, readily available statement on their website summarizing their deployed high-risk AI systems. The specific summary content is defined in the original SB 205 § 6-1-1703(5)(a) (types of systems deployed, how the deployer manages known or foreseeable discrimination risks, etc.). This is the deployer counterpart to the developer's public use case inventory obligation in § 6-1-1702(4)(a).
(5) (a) On and after June 30, 2026, and except as provided in subsection (6) of this section, a deployer shall make available, in a manner that is clear and readily available on the deployer's website, a statement summarizing:
Pending 2027-01-01
G-02.4
O.C.G.A. § 10-1-972(1)
Plain Language
Production companies deploying AI systems for use in production in Georgia must, no later than December 31, 2027, and annually thereafter, conduct an inventory of all AI systems in use and publish it on a publicly accessible website. The inventory must include for each system: (1) the system name and vendor; (2) a description of capabilities and uses; (3) how the system makes, informs, or supports conclusions, decisions, or judgments; and (4) whether and how an impact assessment was conducted prior to implementation. This obligation applies only to production companies approved by the Department of Economic Development, not to interactive entertainment production companies.
Any production company deploying artificial intelligence systems for use in production in this state shall: (1) Not later than December 31, 2027, and annually thereafter, conduct an inventory of all systems that employ artificial intelligence and are in use and publish such inventory on a publicly accessible website. Each inventory shall include, but not be limited to, the following information for each artificial intelligence system: (A) The name of such system and the vendor, if any, that provided such system; (B) A description of the general capabilities and uses of such system; (C) The manner in which such system is able to be used to independently make, inform, or materially support a conclusion, decision, or judgment; and (D) The manner in which such system underwent an impact assessment prior to implementation;
Pending 2025-07-01
G-02.4
O.C.G.A. § 10-16-2(d)
Plain Language
Developers must publish and maintain on their public website or in a public use case inventory a clear summary describing the types of automated decision systems they offer and how they manage algorithmic discrimination risks. This statement must be kept current and updated within 90 days of any intentional and substantial modification to a covered system.
(1) A developer shall make available to the public, in a manner that is clear and readily available on the developer's public website or in a public use case inventory, a statement summarizing: (A) The types of automated decision systems that the developer has developed or intentionally and substantially modified and currently makes available to a deployer or other developer; and (B) How the developer manages known or reasonably foreseeable risks of algorithmic discrimination. (2) A developer shall update the statement described in paragraph (1) of this subsection: (A) As necessary to ensure that the statement remains accurate; and (B) No later than 90 days after the developer intentionally and substantially modifies any automated decision system described in such statement.
Pending 2025-07-01
G-02.4
O.C.G.A. § 10-16-5(a)-(b)
Plain Language
Deployers must publish and keep current on their public website a clear summary covering: the types of automated decision systems they currently deploy, their approach to managing algorithmic discrimination risks for each system, and detailed information about the nature, source, and extent of data they collect and use. The small deployer exemption in § 10-16-6 applies. This is a deployer-side analog to the developer public statement obligation in § 10-16-2(d), with an additional data collection disclosure element.
(a) Except as provided in Code Section 10-16-6, a deployer shall make available, in a manner that is clear and readily available on the deployer's public website, a statement summarizing: (1) The types of automated decision systems that are currently deployed by the deployer; (2) How the deployer manages known or reasonably foreseeable risks of algorithmic discrimination that may arise from the deployment of each such automated decision system; and (3) In detail, the nature, source, and extent of the information collected and used by the deployer. (b) A deployer shall periodically update the statement described in subsection (a) of this Code section.
Failed 2026-01-01
G-02.4
Section 20(a)
Plain Language
Businesses using AI systems in Illinois must publish on their official website a public report explaining how they comply with the five AI governance principles. The report must be updated annually and whenever significant changes are made to the AI system — such as algorithm modifications, substantial changes to data inputs, or shifts in operational context. The report must cover system design, key design decisions (including testing metrics), training data, risk mitigation strategies, and any impact assessments conducted. It must be written in two tiers: plain language for the general public and a more detailed version for specialized audiences who may need to challenge the system's outputs. The Department of Innovation and Technology will adopt rules to specify additional triggers for updates and further report requirements. Applies only to businesses with 10 or more employees (Section 25).
(a) The Illinois Department of Innovation and Technology shall adopt rules to ensure that a business using an AI system in Illinois publishes on the business's official Internet website accessible to the public a report explaining compliance with the 5 principles of AI governance iterated in this Act. This report shall: (1) be updated annually and whenever significant changes are made to the AI system, such as modifications to algorithms, substantial alterations to data inputs, or shifts in operational contexts, additional significant change shall be established by the Department of Innovation and Technology; (2) include information on the design, major decisions made during the design process (such as testing metrics), training data, risk mitigation strategies, and any impact assessments conducted; and (3) be written in plain language to ensure accessibility for the general public, while also providing a more detailed explanation for specialized audiences; this 2-level approach ensures clarity for everyone while offering enough depth for those who may need to understand or challenge the system or its outputs, such as in cases of fairness or discrimination.
Pending 2027-01-01
G-02.4
Section 25
Plain Language
Deployers must publish a publicly accessible, clear policy summarizing (1) the types of automated decision tools they currently use or make available and (2) how they manage the foreseeable risks of algorithmic discrimination associated with those tools. This is a standing public disclosure obligation — the policy must be maintained and kept current as the deployer's tool inventory and risk management practices change. Unlike the impact assessment submission to the Attorney General, this disclosure is directed at the public.
A deployer shall make publicly available, in a readily accessible manner, a clear policy that provides a summary of both of the following: (1) the types of automated decision tools currently in use or made available to others by the deployer; and (2) how the deployer manages the reasonably foreseeable risks of algorithmic discrimination that may arise from the use of the automated decision tools it currently uses or makes available to others.
Passed 2025-03-13
G-02.1
Section 3(6)(c)
Plain Language
Public disclaimers about government AI use must also include information about any third-party AI products or programs involved, including documentation on how the high-risk AI or generative AI system works — such as system cards or other developer-provided documentation. This effectively requires state agencies to pass through developer-provided documentation (e.g., model cards) as part of their public disclosures.
(c) Any disclaimer under paragraph (a) of this subsection shall also provide information regarding third-party artificial intelligence products or programs, including but not limited to information as to how the high-risk artificial intelligence system or generative artificial intelligence system works, such as system cards or other documented information provided by developers.
Pre-filed 2025-07-07
G-02.1
Chapter 93M, Section 2(b)
Plain Language
Developers must furnish downstream deployers with documentation covering three areas: (1) the AI system's intended and foreseeable uses, (2) known limitations and risks including algorithmic discrimination potential, and (3) training dataset information and the bias mitigation measures applied. This is a deployer-facing documentation obligation; the bill does not require this documentation to be made public (the public-statement obligation appears separately in Section 2(d)).
(b) Documentation Requirements: Developers must provide deployers with: (1) A summary of intended and foreseeable uses of the AI system; (2) Known limitations and risks, including algorithmic discrimination; (3) Information on the datasets used for training, including measures taken to mitigate biases.
Pre-filed 2025-07-07
G-02.4
Chapter 93M, Section 2(d)
Plain Language
Developers must publish a plain-language summary on their public website describing the types of AI systems they develop, the measures they take to mitigate algorithmic discrimination, and contact information for inquiries. This is an ongoing public transparency obligation — the summary must be accessible to anyone, not just deployers or regulators.
(d) Public Statement: Developers must publish a plain-language summary on their website, detailing: (1) Types of AI systems they develop; (2) Measures to mitigate algorithmic discrimination; (3) Contact information for inquiries.
Pre-filed 2025-07-07
G-02.4
Chapter 93M, Section 3(d)
Plain Language
Deployers must publicly disclose which types of high-risk AI systems they use and the strategies they employ to mitigate risks. This is a public-facing transparency obligation distinct from the deployer-to-consumer notifications in Section 3(c) — it requires general public disclosure rather than individual notice at the point of decision.
(d) Transparency: Deployers must publicly disclose the types of high-risk AI systems in use and their risk mitigation strategies.
Pre-filed
G-02.1
Chapter 93M § 2(b)(1)-(4), (c), (f)
Plain Language
Developers must provide deployers and downstream developers with comprehensive documentation about each high-risk AI system, including: a statement of foreseeable and harmful uses; summaries of training data types; known limitations and discrimination risks; purpose and intended uses; pre-deployment evaluation methodology; data governance measures; mitigation steps taken; human monitoring guidance; and any additional documentation needed for the deployer to complete impact assessments. This documentation may be delivered through model cards, dataset cards, or similar artifacts. Trade secrets, legally protected information, and security-sensitive information are exempt. A developer that is also the sole deployer of its own system need not generate this documentation unless the system is provided to an unaffiliated deployer.
(b) Not later than 6 months after the effective date of this act, and except as provided in subsection (f) of this section, a developer of a high-risk artificial intelligence system shall make available to the deployer or other developer of the high-risk artificial intelligence system: (1) a general statement describing the reasonably foreseeable uses and known harmful or inappropriate uses of the high-risk artificial intelligence system; (2) documentation disclosing: (i) high-level summaries of the type of data used to train the high-risk artificial intelligence system; (ii) known or reasonably foreseeable limitations of the high-risk artificial intelligence system, including known or reasonably foreseeable risks of algorithmic discrimination arising from the intended uses of the high-risk artificial intelligence system; (iii) the purpose of the high-risk artificial intelligence system; (iv) the intended benefits and uses of the high-risk artificial intelligence system; and (v) all other information necessary to allow the deployer to comply with the requirements of section 3; (3) documentation describing: (i) how the high-risk artificial intelligence system was evaluated for performance and mitigation of algorithmic discrimination before the high-risk artificial intelligence system was offered, sold, leased, licensed, given, or otherwise made available to the deployer; (ii) the data governance measures used to cover the training datasets and the measures used to examine the suitability of data sources, possible biases, and appropriate mitigation; (iii) the intended outputs of the high-risk artificial intelligence system; (iv) the measures the developer has taken to mitigate known or reasonably foreseeable risks of algorithmic discrimination that may arise from the reasonably foreseeable deployment of the high-risk artificial intelligence system; and (v) how the high-risk artificial intelligence system should be used, not be used, and be monitored by an individual when the high-risk artificial intelligence system is used to make, or is a substantial factor in making, a consequential decision; and (4) any additional documentation that is reasonably necessary to assist the deployer in understanding the outputs and monitor the performance of the high-risk artificial intelligence system for risks of algorithmic discrimination. (c) (1) except as provided in subsection (f) of this section, a developer that offers, sells, leases, licenses, gives, or otherwise makes available to a deployer or other developer a high-risk artificial intelligence system not later than 6 months after the effective date of this act, shall make available to the deployer or other developer, to the extent feasible, the documentation and information, through artifacts such as model cards, dataset cards, or other impact assessments, necessary for a deployer, or for a third party contracted by a deployer, to complete an impact assessment pursuant to section 3 (c). (2) a developer that also serves as a deployer for a high-risk artificial intelligence system is not required to generate the documentation required by this section unless the high-risk artificial intelligence system is provided to an unaffiliated entity acting as a deployer. (f) nothing in subsections (b) to (e) of this section requires a developer to disclose a trade secret, information protected from disclosure by state or federal law, or information that would create a security risk to the developer.
Pre-filed
G-02.4
Chapter 93M § 2(d)
Plain Language
Developers must publish on their website or in a public use case inventory a clear statement summarizing: (1) the types of high-risk AI systems they currently make available, and (2) how they manage known or foreseeable algorithmic discrimination risks from those systems. This statement must be kept current and updated within 90 days of any intentional and substantial modification to a listed system.
(d) (1) Not later than 6 months after the effective date of this act, a developer shall make available, in a manner that is clear and readily available on the developer's website or in a public use case inventory, a statement summarizing: (i) the types of high-risk artificial intelligence systems that the developer has developed or intentionally and substantially modified and currently makes available to a deployer or other developer; and (ii) how the developer manages known or reasonably foreseeable risks of algorithmic discrimination that may arise from the development or intentional and substantial modification of the types of high-risk artificial intelligence systems described in accordance with subsection (d)(1)(i) of this section. (2) a developer shall update the statement described in subsection (d)(1) of this section: (i) as necessary to ensure that the statement remains accurate; and (ii) no later than ninety days after the developer intentionally and substantially modifies any high-risk artificial intelligence system described in subsection (d)(1)(i) of this section.
Pre-filed
G-02.4
Chapter 93M § 3(e)
Plain Language
Deployers must publish on their website a clear summary describing: the types of high-risk AI systems they currently deploy, how they manage algorithmic discrimination risks for each, and detailed information about the data they collect and use. This statement must be periodically updated. Small deployers meeting the subsection (f) criteria are exempt.
(e) (1) Not later than 6 months after the effective date of this act, and except as provided in subsection (f) of this section, a deployer shall make available, in a manner that is clear and readily available on the deployer's website, a statement summarizing: (i) the types of high-risk artificial intelligence systems that are currently deployed by the deployer; (ii) how the deployer manages known or reasonably foreseeable risks of algorithmic discrimination that may arise from the deployment of each high-risk artificial intelligence system described pursuant to subsection (e)(1)(i) of this section; and (iii) in detail, the nature, source, and extent of the information collected and used by the deployer. (2) a deployer shall periodically update the statement described in subsection (e)(1) of this section.
Pending 2026-01-01
G-02.3
Sec. 7(1)(c)
Plain Language
Large developers must publicly publish a transparency report at least every 90 days. Each report covers a 90-day window running from 120 days before publication to 30 days before publication; at the maximum 90-day cadence, consecutive windows abut with no gap, and reporting more frequently produces overlapping windows (see the sketch after the quoted text). Reports must include: conclusions of all risk assessments conducted during the period, updated capability assessments for the highest-risk foundation model for each critical risk type (if changed), and, when a newly deployed or modified model poses higher critical risk than existing deployed models, the decision rationale and safeguards implemented. This creates ongoing public visibility into the developer's risk posture.
(c) Not less than once every 90 days, produce and conspicuously publish a transparency report that covers the period of 120 days before the publishing of the report to 30 days before the publishing of the report that includes all of the following information: (i) The conclusion of any risk assessments made during the reporting period in accordance with the safety and security protocol under subdivision (a). (ii) If different from the preceding reporting period, for each type of critical risk, an assessment of the relevant capability of the foundation model to create that critical risk of whichever of the large developer's foundation models, whether deployed or not, would pose the highest level of that critical risk if deployed without adequate safeguards and protections. (iii) If, during the reporting period, the large developer has deployed or modified a foundation model that would pose a higher level of critical risk than any of the large developer's existing deployed foundation models if deployed without adequate safeguards and protections, both of the following: (A) The grounds on which and the process by which the large developer decided to deploy the foundation model. (B) Any safeguards and protections implemented by the large developer to mitigate critical risks.
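A worked sketch of the timing rule, using hypothetical publication dates: the provision fixes a 90-day window ending 30 days before each report, so reports published exactly 90 days apart cover abutting windows.

```python
from datetime import date, timedelta

def reporting_window(pub: date) -> tuple[date, date]:
    # Sec. 7(1)(c): each report covers 120 days before publication
    # through 30 days before publication, a 90-day span.
    return pub - timedelta(days=120), pub - timedelta(days=30)

first = reporting_window(date(2026, 5, 1))    # 2026-01-01 through 2026-04-01
second = reporting_window(date(2026, 7, 30))  # 2026-04-01 through 2026-06-30
assert first[1] == second[0]  # max 90-day cadence: windows abut; a faster cadence overlaps
```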
Pending 2026-08-01
G-02.1
Minn. Stat. § 325G.64, Subd. 2
Plain Language
Before selling or distributing any program that contains AI, the seller or distributor must disclose five categories of information to the buyer or recipient: (1) the business names of the AI's manufacturers or creators, (2) contact information for technical support, (3) the functions the AI performs, (4) the types of modeling the AI uses, and (5) all safety features, including human-in-the-loop integration. The bill does not specify the format, medium, or recipient of these disclosures, nor does it define 'program,' 'seller,' or 'distributor.' The obligation applies broadly to any AI system as defined — essentially any machine-based system that infers outputs from inputs — with no thresholds, sector limitations, or risk-level filters.
Before selling or distributing a program containing artificial intelligence technology, the seller or distributor must disclose: (1) the business names of the manufacturers or creators of the AI; (2) contact information for technical experts who assist users with the AI; (3) the functions the AI performs; (4) the types of modeling the AI uses; and (5) all safety features of the AI, including but not limited to the integration of human intelligence.
Pending 2026-08-01
Minn. Stat. § 325M.41, subd. 1(3)
Plain Language
Before deployment, developers must both (1) conspicuously publish a redacted copy of their safety and security protocol, and (2) transmit a redacted copy to the attorney general. This creates a dual disclosure obligation — public transparency plus regulatory submission. The developer may apply 'appropriate redactions' to the public and AG copies, but see subdivision 1(4), which requires the developer to provide an essentially unredacted copy if the AG requests access.
Before deploying an artificial intelligence model, a developer must: (3) conspicuously publish a copy of the safety and security protocol with appropriate redactions, and transmit a copy of the redacted safety and security protocol to the attorney general;
Failed 2027-01-01
G-02.4
Sec. 4(2)
Plain Language
Whenever a large frontier developer or large chatbot provider materially modifies its public safety and child protection plan, it must publish the updated plan and a justification for the changes on its website within 30 days. This ensures ongoing public transparency about safety plan evolution.
(2) If a large frontier developer or large chatbot provider makes a material modification to its public safety and child protection plan, the large frontier developer or large chatbot provider shall clearly and conspicuously publish on such developer's or provider's website the modified public safety and child protection plan and a justification for such modification within thirty days after such material modification.
Failed 2027-01-01
G-02.3
Sec. 4(3)(i)-(iv)
Plain Language
Before or when integrating a new or substantially modified foundation model into a covered chatbot, the large chatbot provider must publish summaries of its child safety risk assessments, the results, the extent of third-party evaluator involvement, and other steps taken to address child safety risks. This ensures that each model change triggers fresh public disclosure about child safety evaluation. The timing obligation is tied to model integration, not a fixed calendar schedule.
(3) Before, or concurrently with, integrating a new foundation model, or a version of an existing foundation model that has been substantially modified, into a covered chatbot operated by the large chatbot provider, a large chatbot provider shall conspicuously publish on its website summaries of all of the following: (i) Assessments of child safety risks conducted pursuant to the large chatbot provider's public safety and child protection plan; (ii) The results of such assessments; (iii) The extent to which third-party evaluators were involved in such assessments; and (iv) Other steps taken to fulfill the requirements of the public safety and child protection plan with respect to child safety risks.
Failed 2027-01-01
G-02.3
Sec. 4(4)(a)(i)-(iv), (4)(b)
Plain Language
Before or when deploying a new or substantially modified frontier model, the large frontier developer must publish summaries of catastrophic risk assessments, their results, third-party evaluator involvement, and other safety steps taken. Publishing this information as part of a system card or model card satisfies the requirement. This is a per-deployment obligation — each new model or substantial modification triggers a new publication.
(4)(a) Before, or concurrently with, deploying a new frontier model or a version of an existing frontier model that the large frontier developer has substantially modified, a large frontier developer shall conspicuously publish on its website summaries of all of the following: (i) Assessments of catastrophic risks from the frontier model conducted pursuant to the large frontier developer's public safety and child protection plan; (ii) The results of such assessments; (iii) The extent to which third-party evaluators were involved in such assessments; and (iv) Other steps taken to fulfill the requirements of the public safety and child protection plan with respect to catastrophic risks from the frontier model. (b) A large frontier developer that publishes the information described in subdivision (5)(a) of this section as part of a larger document, including a system card or model card, shall be deemed in compliance with this subsection.
Failed 2026-02-01
G-02.1
Sec. 3(2)(a)-(d)
Plain Language
Developers must provide deployers (or downstream developers) with comprehensive documentation covering: intended and harmful uses, training data summary, known limitations and discrimination risks, system purpose, pre-deployment bias evaluation methods, data governance measures, intended outputs, discrimination mitigation steps, usage and monitoring guidance, and output-understanding documentation. This is a deployer-facing disclosure — not a public posting — and is subject to the trade secret exemption in Sec. 3(6). A developer that also serves as its own deployer is exempt unless the system is provided to an unaffiliated deployer.
(2) Except as otherwise provided in subsection (6) of this section, on and after February 1, 2026, each developer of a high-risk artificial intelligence system shall make available to the deployer or other developer of the high-risk artificial intelligence system: (a) A general statement describing the uses and known harmful or inappropriate uses of the high-risk artificial intelligence system; (b) Documentation disclosing: (i) A high-level summary of the types of data used to train the high-risk artificial intelligence system; (ii) Each known limitation of the high-risk artificial intelligence system, including each known or reasonably foreseeable risk of algorithmic discrimination arising from the intended use of the high-risk artificial intelligence system; (iii) The purpose of the high-risk artificial intelligence system; (iv) Any intended benefit and use of the high-risk artificial intelligence system; and (v) Information necessary to allow the deployer to comply with the requirements of section 4 of this act; (c) Documentation describing: (i) How the high-risk artificial intelligence system was evaluated for performance and mitigation of algorithmic discrimination before the high-risk artificial intelligence system was offered, sold, leased, licensed, given, or otherwise made available to the deployer; (ii) The data governance measures used to cover the training datasets and the measures used to examine the suitability of data sources, possible biases, and appropriate mitigation; (iii) Intended outputs of the high-risk artificial intelligence system; (iv) The measures the developer has taken to mitigate known risks of algorithmic discrimination that could arise from the deployment of the high-risk artificial intelligence system; and (v) How the high-risk artificial intelligence system should be used, not be used, and be monitored by an individual when the high-risk artificial intelligence system is used to make, or is a substantial factor in making, a consequential decision; and (d) Documentation that is reasonably necessary to assist the deployer in understanding each output and monitor the performance of the high-risk artificial intelligence system for each risk of algorithmic discrimination.
Failed 2026-02-01
G-02.1
Sec. 3(3)(a)-(b)
Plain Language
When a developer makes a high-risk AI system available to a deployer, it must provide — to the extent feasible — documentation sufficient for the deployer to complete an impact assessment under Section 4(3). This includes any model card or impact assessment the developer has already completed. The self-deploying developer exemption applies: this obligation only triggers when the system is provided to an unaffiliated deployer.
(3)(a) Except as otherwise provided in subsection (6) of this section, on or after February 1, 2026, a developer that offers, sells, leases, licenses, gives, or otherwise makes any high-risk artificial intelligence system available to a deployer or other developer shall to the extent feasible make available to the deployer or other developer the documentation and information necessary for the deployer or a third party contracted by the deployer to complete an impact assessment pursuant to subsection (3) of section 4 of this act. Such documentation and information includes any model card or other impact assessment. (b) A developer that also serves as a deployer for a high-risk artificial intelligence system is not required to generate the documentation required by this section unless the high-risk artificial intelligence system is provided to an unaffiliated entity acting as a deployer.
Failed 2026-02-01
G-02.4
Sec. 3(4)(a)-(b)
Plain Language
Developers must maintain a publicly available use case inventory summarizing: the types of high-risk AI systems they currently offer, any systems they have intentionally and substantially modified, and how they manage known algorithmic discrimination risks. This inventory must be kept accurate on an ongoing basis and updated within 90 days of any intentional and substantial modification.
(4)(a) On and after February 1, 2026, a developer shall make a statement summarizing the following available in a manner that is clear and readily available in a public use case inventory: (i) The types of high-risk artificial intelligence systems that the developer has developed and currently makes available to a deployer or other developer; (ii) The types of high-risk artificial intelligence system that the developer has intentionally and substantially modified and currently makes available to a deployer or other developer; and (iii) How the developer manages known risks of algorithmic discrimination that could arise from the development or intentional and substantial modification of the types of high-risk artificial intelligence systems described in subdivisions (4)(a)(i) and (ii) of this section. (b) A developer shall update the statement described in subdivision (4)(a) of this section: (i) As necessary to ensure that the statement remains accurate; and (ii) No later than ninety days after the developer intentionally and substantially modifies any high-risk artificial intelligence system described in subdivision (4)(a)(ii) of this section.
Failed 2026-02-01
G-02.4
Sec. 4(5)(a)-(b)
Plain Language
Deployers must publish and maintain a clear, readily available public statement disclosing: the types of high-risk AI systems they currently deploy, how they manage algorithmic discrimination risks, and the nature, source, and extent of information they collect and use. This statement must be updated at least annually. Small deployers meeting Section 4(6) criteria are exempt.
(5)(a) Except as provided in subsection (6) of this section, on and after February 1, 2026, a deployer shall make a statement with the following information available in a manner that is clear and readily available: (i) The types of high-risk artificial intelligence systems that are currently deployed by the deployer; (ii) How the deployer manages known risks of algorithmic discrimination that may arise from the deployment of the types of high-risk artificial intelligence systems described in subdivision (a)(ii) of this subsection; and (iii) A description of the nature, source, and extent of the information collected and used by the deployer. (b) A deployer shall update the statement described in subdivision (a) of this subsection at least once each year.
Pending 2027-01-01
G-02.1
GBL § 1551(2)(a)-(d)
Plain Language
Developers must provide downstream deployers and other developers with comprehensive documentation covering: foreseeable and harmful uses; training data summaries; known limitations and discrimination risks; system purpose and intended benefits; pre-deployment evaluation methods for performance and bias; data governance measures; intended outputs; discrimination mitigation measures; usage and monitoring guidance; and any additional documentation reasonably necessary for deployers to understand outputs and monitor discrimination risk. Trade secrets and security-sensitive information are exempt from disclosure under § 1551(5). This documentation is distinct from the public-facing summary required under § 1551(4) — this obligation runs to downstream business recipients, not the public.
Beginning on January first, two thousand twenty-seven, and except as provided in subdivision five of this section, a developer of a high-risk artificial intelligence decision system shall make available to each deployer or other developer the following information: (a) A general statement describing the reasonably foreseeable uses, and the known harmful or inappropriate uses, of such high-risk artificial intelligence decision system; (b) Documentation disclosing: (i) high-level summaries of the type of data used to train such high-risk artificial intelligence decision system; (ii) the known or reasonably foreseeable limitations of such high-risk artificial intelligence decision system, including, but not limited to, the known or reasonably foreseeable risks of algorithmic discrimination arising from the intended uses of such high-risk artificial intelligence decision system; (iii) the purpose of such high-risk artificial intelligence decision system; (iv) the intended benefits and uses of such high-risk artificial intelligence decision system; and (v) any other information necessary to enable such deployer or other developer to comply with the provisions of this article; (c) Documentation describing: (i) how such high-risk artificial intelligence decision system was evaluated for performance, and mitigation of algorithmic discrimination, before such high-risk artificial intelligence decision system was offered, sold, leased, licensed, given, or otherwise made available to such deployer or other developer; (ii) the data governance measures used to cover the training datasets and examine the suitability of data sources, possible biases, and appropriate mitigation; (iii) the intended outputs of such high-risk artificial intelligence decision system; (iv) the measures such deployer or other developer has taken to mitigate any known or reasonably foreseeable risks of algorithmic discrimination that may arise from deployment of such high-risk artificial intelligence decision system; and (v) how such high-risk artificial intelligence decision system should be used, not be used, and be monitored by an individual when such high-risk artificial intelligence decision system is used to make, or as a substantial factor in making, a consequential decision; and (d) Any additional documentation that is reasonably necessary to assist a deployer or other developer to: (i) understand the outputs of such high-risk artificial intelligence decision system; and (ii) monitor the performance of such high-risk artificial intelligence decision system for risks of algorithmic discrimination.
Pending 2027-01-01
G-02.4
GBL § 1551(4)(a)-(b)
Plain Language
Developers must publish and maintain on their website or a public use case inventory a clear summary of the types of high-risk AI decision systems they offer, along with a description of how they manage algorithmic discrimination risks. The summary must be updated whenever accuracy requires and within 90 days of any intentional and substantial modification. Trade secrets and security-sensitive information are exempt under § 1551(5).
(a) Beginning on January first, two thousand twenty-seven, each developer shall publish, in a manner that is clear and readily available, on such developer's website, or a public use case inventory, a statement summarizing: (i) the types of high-risk artificial intelligence decision systems that such developer: (A) has developed or intentionally and substantially modified; and (B) currently makes available to a deployer or other developer; and (ii) how such developer manages any known or reasonably foreseeable risks of algorithmic discrimination that may arise from the development or intentional and substantial modification of the types of high-risk artificial intelligence decision systems described in subparagraph (i) of this subdivision. (b) Each developer shall update the statement described in paragraph (a) of this subdivision: (i) as necessary to ensure that such statement remains accurate; and (ii) no later than ninety days after the developer intentionally and substantially modifies any high-risk artificial intelligence decision system described in subparagraph (i) of paragraph (a) of this subdivision.
Pending 2027-01-01
G-02.4
GBL § 1552(6)(a)-(b)
Plain Language
Deployers must publish and maintain on their website a clear summary describing: the types of high-risk AI decision systems they deploy, how they manage algorithmic discrimination risks from each system, and the nature, source, and extent of data collected and used. This statement must be periodically updated. The § 1552(7) developer-assumption exemption may apply. Trade secrets are protected under § 1552(8), but deployers withholding information must notify consumers that information is being withheld and explain the basis.
(a) Beginning on January first, two thousand twenty-seven, and except as provided in subdivision seven of this section, each deployer shall make available, in a manner that is clear and readily available on such deployer's website, a statement summarizing: (i) the types of high-risk artificial intelligence decision systems that are currently deployed by such deployer; (ii) how such deployer manages any known or reasonably foreseeable risks of algorithmic discrimination that may arise from deployment of each high-risk artificial intelligence decision system described in subparagraph (i) of this paragraph; and (iii) in detail, the nature, source and extent of the information collected and used by such deployer. (b) Each deployer shall periodically update the statement required pursuant to paragraph (a) of this subdivision.
Pending 2027-01-01
G-02.1
GBL § 1553(1)(b)
Plain Language
Developers of general-purpose AI models must create, maintain, and make available to downstream integrators documentation enabling those integrators to understand the model's capabilities and limitations and to comply with their own obligations under this article. At minimum, the documentation must disclose the technical integration requirements and the model information specified in § 1553(1)(a)(ii) (intended tasks, downstream systems, acceptable use policies, release date, distribution methods, and I/O formats). Documentation must be reviewed and revised at least annually. This is the GPAI equivalent of the high-risk system deployer-facing documentation in § 1551(2).
(b) create, implement, maintain and make available to persons that intend to integrate such general-purpose artificial intelligence model into such persons' artificial intelligence decision systems documentation and information that: (i) enables such persons to: (A) understand the capabilities and limitations of such general-purpose artificial intelligence model; and (B) comply with such persons' obligations pursuant to this article; (ii) discloses, at a minimum: (A) the technical means required for such general-purpose artificial intelligence model to be integrated into such persons' artificial intelligence decision systems; (B) the information listed in subparagraph (ii) of paragraph (a) of this subdivision; and (iii) except as provided in subdivision two of this section, is reviewed and revised at least annually, or more frequently, as necessary to maintain the accuracy of such documentation and information.
Pending 2025-04-27
G-02.1
State Tech. Law § 507(6)
Plain Language
Plain-language summary reporting about automated systems — including assessments of the quality and clarity of the notice and explanations provided to residents — must be made public whenever possible. The 'whenever possible' qualifier creates significant ambiguity about when this obligation is enforceable.
6. Summary reporting, including plain language information about these automated systems and assessments of the clarity and quality of notice and explanations, shall be made public whenever possible.
Pending 2025-04-27
State Tech. Law § 508(5)
Plain Language
Summary reporting describing human governance processes — including their timeliness, accessibility, outcomes, and effectiveness — must be made publicly available whenever possible. This is a public transparency obligation related to the human fallback and consideration mechanisms, not a general model documentation requirement.
5. Summary reporting, which includes a description of such human governance processes and an assessment of their timeliness, accessibility, outcomes, and effectiveness, shall be made publicly available whenever possible.
Pending 2025-07-26
G-02.4
State Tech. Law § 514(1)-(2)
Plain Language
Licensees must conspicuously display their license in their physical office and, if they have an internet presence, on their website or mobile application. The license must state the licensee's name, address, and corporate details. Licenses are non-transferable and non-assignable. This is a public transparency obligation — the purpose is to enable users and the public to verify that an operator is licensed.
1. Any license issued under this article shall state the name and address of the licensee, and if the licensee be a co-partnership or association, the names of the members thereof, and if a corporation the date and place of its incorporation. 2. Such license or licenses shall be kept conspicuously posted in the office of the licensee and, where such licensee has a public internet presence, on the website or mobile application of the licensee and shall not be transferable or assignable.
Pending 2026-06-09
G-02.4
Civ. Rights Law § 88(5)
Plain Language
The Attorney General must maintain a publicly accessible online database, updated biannually, containing the reports and audits filed by developers and deployers. Developers and deployers may request redactions of sensitive or protected information under a process to be promulgated by the AG. While the AG maintains the database, the obligation to file reportable content falls on the developers and deployers — the public accessibility of their filings effectively creates a public transparency obligation.
5. The attorney general shall: (a) promulgate rules for a process whereby developers and deployers may request redaction of portions of reports required under this section to ensure that they are not required to disclose sensitive and protected information; and (b) maintain an online database that is accessible to the general public with reports, redacted in accordance with this subdivision, and audits required by this article, which database shall be updated biannually.
Pending 2025-09-05
G-02.4
Real Prop. Law § 442-m(3)(d)
Plain Language
Real estate brokers and online housing platforms using AI tools must document their compliance with the housing advertising non-discrimination requirements (subdivision 3), retain those records, and publish a public-facing compliance report on their website describing compliance measures and internal auditing methods used. This is both a recordkeeping and a public transparency obligation.
(d) document, retain, and provide public-facing reporting on such real estate broker's or online housing platform's website, information on compliance with this subdivision, and any internal auditing methods used for such compliance.
Pending 2027-01-01
G-02.4
Civil Rights Law § 104(6)(a)(iii)
Plain Language
Developers and deployers must publish on their website a public summary of each full pre-deployment evaluation, impact assessment, or developer annual review within 30 days of completion. The summary must be easily accessible to individuals. Trade secrets may be redacted; personal data must be redacted. This is the public transparency component of the broader submission obligation — the full document goes to the Division, while the summary goes to the public.
(iii) not later than thirty days after completion: (A) publish a summary of the evaluation, assessment, or review on the website of the developer or deployer in a manner that is easily accessible to individuals; and (B) submit such summary to the division.
Pending 2027-01-01
G-02.4
Civil Rights Law § 110(1)-(5)
Plain Language
Every developer and deployer must publish a comprehensive public disclosure covering their AI practices, including: entity identity and contact information, links to evaluation and assessment summaries, categories of personal data collected and processing purposes, third-party data transfers, individual rights exercise instructions, compliance practices, a mandatory disclaimer about the limitations of the audit, and the disclosure's effective date. The disclosure must be in plain language, accessible to individuals with disabilities, and available in the top 10 languages spoken in New York. Material changes require advance notification to affected individuals via direct electronic communication. All previous disclosure versions must be retained for 10 years and published on the website, along with a public change log describing the date and nature of each material change.
1. Each developer or deployer shall make publicly available, in plain language and in a clear, conspicuous, not misleading, easy-to-read, and readily accessible manner, a disclosure that provides a detailed and accurate representation of the developer or deployer's practices regarding the requirements under this article. 2. The disclosure required under subdivision one of this section shall include, at a minimum, the following: (a) the identity and the contact information of: (i) the developer or deployer to which the disclosure applies (including the developer or deployer's point of contact and electronic and physical mail address, as applicable for any inquiry concerning a covered algorithm or individual rights under this article); and (ii) any other entity within the same corporate structure as the developer or deployer to which personal data is transferred by the developer or deployer. (b) a link to the website containing the developer or deployer's summaries of pre-deployment evaluations, impact assessments, and annual review of assessments, as applicable; (c) the categories of personal data the developer or deployer collects or processes in the development or deployment of a covered algorithm and the processing purpose for each such category; (d) whether the developer or deployer transfers personal data, and, if so, each third party to which the developer or deployer transfers such data and the purpose for which such data is transferred, except with respect to a transfer to a governmental entity pursuant to a court order or law that prohibits the developer or deployer from disclosing such transfer; (e) a prominent description of how an individual can exercise the rights described in this article; (f) a general description of the developer or deployer's practices for compliance with the requirements described in sections one hundred three and one hundred six of this article; (g) the following disclosure: "The audit of this algorithm was conducted to comply with the New York Artificial Intelligence Civil Rights Act, which seeks to avoid the use of any algorithm that has a disparate impact on certain protected classes of individuals. The audit does not guarantee that this algorithm is safe or in compliance with all applicable laws."; and (h) the effective date of the disclosure. 3. The disclosure required under this section shall be made available in each covered language in which the developer or deployer operates or provides a good or service. 4. Any disclosure provided under this section shall be made available in a manner that is reasonably accessible to and usable by individuals with disabilities. 5. (a) If a developer or deployer makes a material change to the disclosure required under this section, the developer or deployer shall notify each individual affected by such material change prior to implementing the material change. (b) Each developer or deployer shall take all reasonable measures to provide to each affected individual a direct electronic notification regarding any material change to the disclosure, in each covered language in which the disclosure is made available and taking into account available technology and the nature of the relationship with such individual. 
(c) (i) Beginning after the effective date of this article, each developer or deployer shall retain a copy of each previous version of the disclosure required under this section for a period of at least ten years after the last day on which such version was effective and publish each such version on its website. Each developer or deployer shall make publicly available, in a clear, conspicuous, and readily accessible manner, a log describing the date and nature of each material change to its disclosure during the retention period, and such descriptions shall be sufficient for a reasonable individual to understand the material effect of each material change. (ii) The obligations described in this paragraph shall not apply to any previous version of a developer or deployer's disclosure of practices regarding the collection, processing, and transfer of personal data, or any material change to such disclosure, that precedes the effective date of this article.
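The version-retention and change-log mechanics in subdivision 5(c) are concrete enough to model directly: each superseded disclosure version is kept for at least ten years past its last effective day, and every material change gets a dated, plain-language log entry. A minimal Python sketch of that bookkeeping follows; all class and field names are invented for illustration, and the advance-notification duty under 5(a)-(b) is deliberately out of scope.

    from dataclasses import dataclass, field
    from datetime import date, timedelta

    @dataclass
    class DisclosureVersion:
        text: str
        effective_from: date
        effective_until: date | None = None  # None while this version is current

        def retain_until(self) -> date | None:
            # 5(c)(i): retain at least ten years after the last effective day
            # (3653 days comfortably covers ten calendar years with leap days).
            if self.effective_until is None:
                return None  # still in effect; the retention clock has not started
            return self.effective_until + timedelta(days=3653)

    @dataclass
    class ChangeLogEntry:
        changed_on: date
        description: str  # must convey the material effect to a reasonable individual

    @dataclass
    class PublicDisclosure:
        versions: list[DisclosureVersion] = field(default_factory=list)
        change_log: list[ChangeLogEntry] = field(default_factory=list)

        def publish_material_change(self, new_text: str, on: date, description: str) -> None:
            # Close out the current version, publish the new one, and log the change;
            # 5(c)(i) requires prior versions to stay published alongside the log.
            if self.versions:
                self.versions[-1].effective_until = on
            self.versions.append(DisclosureVersion(text=new_text, effective_from=on))
            self.change_log.append(ChangeLogEntry(changed_on=on, description=description))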
Pending 2025-10-11
G-02.1
GBL § 1551(2)(a)-(d), § 1551(3)(a)-(b), § 1551(5)
Plain Language
Developers must provide deployers and downstream developers with comprehensive pre-deployment documentation covering: foreseeable and harmful uses, training data summaries, known limitations and discrimination risks, system purpose, performance evaluation methods, data governance measures, intended outputs, discrimination mitigation steps, and usage/monitoring guidance. Documentation must be delivered through model cards, dataset cards, or equivalent artifacts and must be sufficient for deployers to complete their own impact assessments. A developer that is also the sole deployer of a system is exempt unless the system is provided to an unaffiliated deployer. Trade secrets and security-sensitive information are exempt from disclosure.
2. Beginning on January first, two thousand twenty-seven, and except as provided in subdivision five of this section, a developer of a high-risk artificial intelligence decision system shall make available to each deployer or other developer the following information: (a) A general statement describing the reasonably foreseeable uses, and the known harmful or inappropriate uses, of such high-risk artificial intelligence decision system; (b) Documentation disclosing: (i) high-level summaries of the type of data used to train such high-risk artificial intelligence decision system; (ii) the known or reasonably foreseeable limitations of such high-risk artificial intelligence decision system, including, but not limited to, the known or reasonably foreseeable risks of algorithmic discrimination arising from the intended uses of such high-risk artificial intelligence decision system; (iii) the purpose of such high-risk artificial intelligence decision system; (iv) the intended benefits and uses of such high-risk artificial intelligence decision system; and (v) any other information necessary to enable such deployer or other developer to comply with the provisions of this article; (c) Documentation describing: (i) how such high-risk artificial intelligence decision system was evaluated for performance, and mitigation of algorithmic discrimination, before such high-risk artificial intelligence decision system was offered, sold, leased, licensed, given, or otherwise made available to such deployer or other developer; (ii) the data governance measures used to cover the training datasets and examine the suitability of data sources, possible biases, and appropriate mitigation; (iii) the intended outputs of such high-risk artificial intelligence decision system; (iv) the measures such deployer or other developer has taken to mitigate any known or reasonably foreseeable risks of algorithmic discrimination that may arise from deployment of such high-risk artificial intelligence decision system; and (v) how such high-risk artificial intelligence decision system should be used, not be used, and be monitored by an individual when such high-risk artificial intelligence decision system is used to make, or as a substantial factor in making, a consequential decision; and (d) Any additional documentation that is reasonably necessary to assist a deployer or other developer to: (i) understand the outputs of such high-risk artificial intelligence decision system; and (ii) monitor the performance of such high-risk artificial intelligence decision system for risks of algorithmic discrimination. 3. (a) Except as provided in subdivision five of this section, any developer that, on or after January first, two thousand twenty-seven, offers, sells, leases, licenses, gives, or otherwise makes available to a deployer or other developer a high-risk artificial intelligence decision system shall, to the extent feasible, make available to such deployers and other developers the documentation and information relating to such high-risk artificial intelligence decision system necessary for a deployer, or the third party contracted by a deployer, to complete an impact assessment pursuant to this article. The developer shall make such documentation and information available through artifacts such as model cards, dataset cards, or other impact assessments. 
(b) A developer that also serves as a deployer for any high-risk artificial intelligence decision system shall not be required to generate the documentation and information required pursuant to this section unless such high-risk artificial intelligence decision system is provided to an unaffiliated entity acting as a deployer. 5. Nothing in subdivisions two or four of this section shall be construed to require a developer to disclose any information: (a) that is a trade secret or otherwise protected from disclosure pursuant to state or federal law; or (b) the disclosure of which would present a security risk to such developer.
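Because § 1551(3)(a) points to artifacts such as model cards and dataset cards, the subdivision 2 categories read naturally as a completeness checklist for a deployer-facing model card. A rough Python sketch of that reading follows; the keys and the checking function are my own framing, not anything the bill prescribes.

    # Hypothetical checklist mirroring GBL § 1551(2)(a)-(d); key names are illustrative.
    REQUIRED_MODEL_CARD_SECTIONS = {
        "foreseeable_and_harmful_uses": "2(a): reasonably foreseeable and known harmful uses",
        "training_data_summary": "2(b)(i): high-level summary of training data types",
        "limitations_and_discrimination_risks": "2(b)(ii): known or foreseeable limitations and risks",
        "system_purpose": "2(b)(iii): purpose of the system",
        "intended_benefits_and_uses": "2(b)(iv): intended benefits and uses",
        "performance_evaluation": "2(c)(i): pre-release performance and discrimination evaluation",
        "data_governance": "2(c)(ii): training-data governance and bias mitigation",
        "intended_outputs": "2(c)(iii): intended outputs",
        "discrimination_mitigations": "2(c)(iv): mitigation measures taken",
        "use_and_monitoring_guidance": "2(c)(v): how the system should be used, not used, and monitored",
    }

    def missing_sections(model_card: dict) -> list[str]:
        """Return the checklist sections that are absent or empty in a draft card.
        Catch-all items like 2(b)(v) and 2(d) require judgment and are not
        captured by a mechanical check."""
        return [key for key in REQUIRED_MODEL_CARD_SECTIONS if not model_card.get(key)]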
Pending 2025-10-11
G-02.4
GBL § 1551(4)(a)-(b)
Plain Language
Developers must publish and maintain on their website or a public use case inventory a clear summary describing: the types of high-risk AI decision systems they have developed or substantially modified and currently make available, and how they manage known or foreseeable algorithmic discrimination risks. The statement must be updated as needed for accuracy and within 90 days of any intentional and substantial modification to a covered system.
4. (a) Beginning on January first, two thousand twenty-seven, each developer shall publish, in a manner that is clear and readily available, on such developer's website, or a public use case inventory, a statement summarizing: (i) the types of high-risk artificial intelligence decision systems that such developer: (A) has developed or intentionally and substantially modified; and (B) currently makes available to a deployer or other developer; and (ii) how such developer manages any known or reasonably foreseeable risks of algorithmic discrimination that may arise from the development or intentional and substantial modification of the types of high-risk artificial intelligence decision systems described in subparagraph (i) of this subdivision. (b) Each developer shall update the statement described in paragraph (a) of this subdivision: (i) as necessary to ensure that such statement remains accurate; and (ii) no later than ninety days after the developer intentionally and substantially modifies any high-risk artificial intelligence decision system described in subparagraph (i) of paragraph (a) of this subdivision.
Pending 2025-10-11
G-02.4
GBL § 1552(6)(a)-(b)
Plain Language
Deployers must publish and maintain on their website a clear statement summarizing: the types of high-risk AI decision systems they currently deploy, their algorithmic discrimination risk management practices for each system, and detailed information about the nature, source, and extent of data collected and used. The statement must be periodically updated. Deployers meeting the § 1552(7) delegation conditions are exempt.
6. (a) Beginning on January first, two thousand twenty-seven, and except as provided in subdivision seven of this section, each deployer shall make available, in a manner that is clear and readily available on such deployer's website, a statement summarizing: (i) the types of high-risk artificial intelligence decision systems that are currently deployed by such deployer; (ii) how such deployer manages any known or reasonably foreseeable risks of algorithmic discrimination that may arise from deployment of each high-risk artificial intelligence decision system described in subparagraph (i) of this paragraph; and (iii) in detail, the nature, source and extent of the information collected and used by such deployer. (b) Each deployer shall periodically update the statement required pursuant to paragraph (a) of this subdivision.
Passed 2025-06-25
G-02.3
Gen. Bus. Law § 1421(1)(c)
Plain Language
Before deployment, large developers must conspicuously publish their safety and security protocol — with permitted redactions for trade secrets, public safety, privacy, and legally controlled information — and transmit a copy to the Division of Homeland Security and Emergency Services. Additionally, the developer must grant DHSES or the Attorney General access to the protocol upon request, with redactions limited only to what federal law requires (a narrower redaction standard than the public-facing version). This creates a two-tier disclosure regime: the public version may have broader redactions, while the regulator version may only be redacted as required by federal law.
(c) (i) Conspicuously publish a copy of the safety and security protocol with appropriate redactions and transmit a copy of such redacted safety and security protocol to the division of homeland security and emergency services; (ii) Grant the division of homeland security and emergency services or the attorney general access to the safety and security protocol, with redactions only to the extent required by federal law, upon request;
Pending 2025-01-01
G-02.1
Section 37-31-20(B)(1)-(4), (F)
Plain Language
Developers must provide deployers with comprehensive documentation covering: foreseeable and harmful uses, training data summaries, system limitations, algorithmic discrimination risks, performance evaluation methodology, data governance measures, intended outputs, mitigation measures, and usage/monitoring guidance. This is essentially a model card obligation directed at downstream deployers. The obligation does not require disclosure of trade secrets, legally protected information, or information creating security risks for the developer.
(B) Except as provided in subsection (F), a developer of a high-risk artificial intelligence system shall make available to the deployer or other developer of the high-risk artificial intelligence system: (1) a general statement describing the reasonably foreseeable uses and known harmful or inappropriate uses of the high-risk artificial intelligence system; (2) documentation disclosing: (a) high-level summaries of the type of data used to train the high-risk artificial intelligence system; (b) known or reasonably foreseeable limitations of the high-risk artificial intelligence system, including known or reasonably foreseeable risks of algorithmic discrimination arising from the intended uses of the high-risk artificial intelligence system; (c) the purpose of the high-risk artificial intelligence system; (d) the intended benefits and uses of the high-risk artificial intelligence system; and (e) all other information necessary to allow the deployer to comply with the requirements of Section 37-31-30; (3) documentation describing: (a) how the high-risk artificial intelligence system was evaluated for performance and mitigation of algorithmic discrimination before the high-risk artificial intelligence system was offered, sold, leased, licensed, given, or otherwise made available to the deployer; (b) the data governance measures used to cover the training datasets and the measures used to examine the suitability of data sources, possible biases, and appropriate mitigation; (c) the intended outputs of the high-risk artificial intelligence system; (d) the measures the developer has taken to mitigate known or reasonably foreseeable risks of algorithmic discrimination that may arise from the reasonably foreseeable deployment of the high-risk artificial intelligence system; and (e) how the high-risk artificial intelligence system should be used, not be used, and be monitored by an individual when the high-risk artificial intelligence system is used to make, or is a substantial factor in making, a consequential decision; and (4) any additional documentation that is reasonably necessary to assist the deployer in understanding the outputs and monitor the performance of the high-risk artificial intelligence system for risks of algorithmic discrimination. (F) Nothing in subsections (B) through (E) requires a developer to disclose a trade secret, information protected from disclosure by state or federal law, or information that would create a security risk to the developer.
Pending 2025-01-01
G-02.1
Section 37-31-20(C)(1)-(2)
Plain Language
Developers must provide deployers with the documentation and artifacts — such as model cards, dataset cards, or impact assessments — needed for deployers to complete their own impact assessments. This obligation applies to the extent feasible and does not require a developer that also serves as its own deployer to generate this documentation unless the system is provided to an unaffiliated deployer.
(C)(1) Except as provided in subsection (F), a developer that offers, sells, leases, licenses, gives, or otherwise makes available to a deployer or other developer a high-risk artificial intelligence system shall make available to the deployer or other developer, to the extent feasible, the documentation and information, through artifacts such as model cards, dataset cards, or other impact assessments, necessary for a deployer, or for a third party contracted by a deployer, to complete an impact assessment pursuant to Section 37-31-30(C). (2) A developer that also serves as a deployer for a high-risk artificial intelligence system is not required to generate the documentation required by this section unless the high-risk artificial intelligence system is provided to an unaffiliated entity acting as a deployer.
Pending 2025-01-01
G-02.4
Section 37-31-20(D)(1)-(2)
Plain Language
Developers must publish on their website or in a public use case inventory a clear summary of the types of high-risk AI systems they offer and how they manage algorithmic discrimination risks. This statement must be kept current and updated within 90 days of any intentional and substantial modification to a covered system. Routine post-deployment learning that was anticipated in the initial impact assessment and documented in technical documentation does not trigger the update obligation.
(D)(1) A developer shall make available, in a manner that is clear and readily available on the developer's website or in a public-use case inventory, a statement summarizing: (a) the types of high-risk artificial intelligence systems that the developer has developed or intentionally and substantially modified and currently makes available to a deployer or other developer; and (b) how the developer manages known or reasonably foreseeable risks of algorithmic discrimination that may arise from the development or intentional and substantial modification of the types of high-risk artificial intelligence systems described in accordance with item (1)(a). (2) A developer shall update the statement described in item (1): (a) as necessary to ensure that the statement remains accurate; and (b) no later than ninety days after the developer intentionally and substantially modifies any high-risk artificial intelligence system described in item (1)(a).
Pending 2025-01-01
G-02.4
Section 37-31-30(E)(1)-(2)
Plain Language
Deployers must publish on their website a clear summary of: the types of high-risk AI systems they deploy, how they manage algorithmic discrimination risks for each system, and the nature, source, and extent of data they collect and use. The statement must be periodically updated. Small deployers meeting the subsection (F) criteria are exempt.
(E)(1) Except as provided in subsection (F), a deployer shall make available, in a manner that is clear and readily available on the deployer's website, a statement summarizing: (a) the types of high-risk artificial intelligence systems that are currently deployed by the deployer; (b) how the deployer manages known or reasonably foreseeable risks of algorithmic discrimination that may arise from the deployment of each high-risk artificial intelligence system described pursuant to subitem (a); and (c) in detail, the nature, source, and extent of the information collected and used by the deployer. (2) A deployer shall periodically update the statement described in item (1) of this section.
Pending 2027-01-01
§ 59.1-619(B)
Plain Language
Operators must publicly publish the findings of any safety testing conducted to ensure compliance with the minor safety requirements in § 59.1-615. This is an ongoing publication obligation — each round of safety testing conducted in connection with the minor-specific prohibited conduct provisions must result in published findings. The provision does not specify the format or location of publication, but the findings must be made public.
B. Operators shall publish safety test findings for any safety testing conducted in furtherance of § 59.1-615.
Failed 2027-07-01
G-02.1
Va. Code § 59.1-615(A)(1)-(7)
Plain Language
Developers of base AI models (i.e., large-scale foundation models trained on broad data) must clearly and conspicuously disclose seven categories of basic model information in the terms of service: model name, developer identity, developer's incorporation location, most recent version release date, training data recency date, supported languages, and a link to the terms of service. The disclosure must be appropriate for the medium and easily accessible to users. This is a relatively lightweight model card-style obligation focused on identity and provenance metadata rather than capabilities, limitations, or safety assessments.
A. A developer of a base artificial intelligence model shall clearly and conspicuously disclose, in a manner that is appropriate for the medium of the content and is easily accessible to the user of such model, in the terms of service governing the use of such model: 1. The name of the model; 2. The developer of the model; 3. The location where the developer is incorporated; 4. The release date of the most recent version of the model; 5. The date that the model's training data was most recently updated; 6. Supported languages for the model; and 7. A link to the model's terms of service.
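Because the Virginia provision is pure identity-and-provenance metadata, the seven items map cleanly onto a small structured record that could be generated alongside the terms of service. A minimal sketch, with invented class and field names and invented example values:

    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class BaseModelDisclosure:
        """Illustrative record of the seven items listed in § 59.1-615(A)."""
        model_name: str                 # (1) name of the model
        developer: str                  # (2) developer of the model
        incorporation_location: str     # (3) where the developer is incorporated
        latest_version_release: date    # (4) release date of the most recent version
        training_data_cutoff: date      # (5) most recent training data update
        supported_languages: list[str]  # (6) supported languages
        terms_of_service_url: str       # (7) link to the model's terms of service

    # Example entry (values are invented for illustration):
    disclosure = BaseModelDisclosure(
        model_name="ExampleModel-1",
        developer="Example AI, Inc.",
        incorporation_location="Delaware, USA",
        latest_version_release=date(2026, 1, 15),
        training_data_cutoff=date(2025, 9, 30),
        supported_languages=["en", "es"],
        terms_of_service_url="https://example.com/terms",
    )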
Pending 2025-07-01
G-02.4
9 V.S.A. § 4193f(e)(2)
Plain Language
The Attorney General must maintain a publicly accessible online database containing all reports and audits filed under this subchapter, redacted where appropriate, and updated biannually. This creates an indirect public transparency obligation for developers and deployers — their filings will be publicly accessible through the AG's database. While the direct obligation falls on the AG, developers and deployers should assume their reports and audit results will be publicly available in redacted form.
(e) The Attorney General shall: ... (2) maintain an online database that is accessible to the general public with reports, redacted in accordance with this section, and audits required by this subchapter, which shall be updated biannually.
Pre-filed 2025-07-01
G-02.1
9 V.S.A. § 4193f(b)
Plain Language
Developers of inherently dangerous AI systems must document and disclose to all actual and potential deployers: (1) all reasonably foreseeable risks — including from unintended or unauthorized uses — that could cause any of the nine categories of harm enumerated in § 4193f(a), and (2) risk mitigation processes reasonably foreseeable to mitigate those harms. This is a pre-deployment downstream disclosure obligation — developers must affirmatively push risk and mitigation information to deployers, not merely make it available on request. The disclosure covers both the risk landscape and the developer's recommended mitigation approaches.
(b) Each developer of an inherently dangerous artificial intelligence system shall document and disclose to any actual or potential deployer of the artificial intelligence system any: (1) reasonably foreseeable risk, including by unintended or unauthorized uses, that causes or is likely to cause any of the injuries as set forth in subsection (a) of this section; and (2) risk mitigation processes that are reasonably foreseeable to mitigate any injury as set forth in subsection (a) of this section.
Pre-filed 2026-07-01
G-02.4
9 V.S.A. § 4193c(d)
Plain Language
Chatbot providers must publish information about their chatbot on their website, updated monthly, with the specific categories of information to be defined by Attorney General rulemaking. This is a public transparency obligation distinct from the data security program publication requirement in § 4193b(d). The full scope of what must be disclosed will depend on AG rules, but the monthly update cadence ensures ongoing disclosure rather than a one-time publication.
(d) Chatbot information. A chatbot provider shall make information about its chatbot publicly available on its website on a monthly basis as set forth in rules adopted by the Attorney General pursuant to this subchapter.
Pending 2027-01-01
G-02.1
Sec. 2(2)(a)-(c)
Plain Language
Developers may not distribute a high-risk AI system to deployers or other developers unless they provide comprehensive documentation covering: intended uses, known limitations and foreseeable discrimination risks, purpose and intended outputs, a performance and bias evaluation summary, discrimination mitigation measures, monitoring and use/misuse guidance, and any additional documentation reasonably needed for the deployer to understand and monitor the system. This is a pre-distribution gating requirement — the system may not be provided until these disclosures are made available to the recipient.
(2) A developer of a high-risk artificial intelligence system may not offer, sell, lease, give, or otherwise provide to a deployer or other developer a high-risk artificial intelligence system unless the developer makes available to the deployer or other developer: (a) A statement disclosing the intended uses of such high-risk artificial intelligence system; (b) Documentation disclosing the following: (i) The known or reasonably known limitations of such high-risk artificial intelligence system, including any and all known or reasonably foreseeable risks of algorithmic discrimination arising from the intended uses of such high-risk artificial intelligence system; (ii) The purpose of such high-risk artificial intelligence system and its intended outputs, benefits, and uses; (iii) A summary describing how such high-risk artificial intelligence system was evaluated for performance and for mitigation of algorithmic discrimination before it was licensed, sold, leased, given, or otherwise made available to a deployer or other developer; (iv) A description of the measures the developer has taken to mitigate known or reasonably foreseeable risks of algorithmic discrimination that may arise from the reasonably foreseeable deployment or use of such high-risk artificial intelligence system; and (v) A description of how the high-risk artificial intelligence system should be used, not be used, and be monitored by an individual when such system is used to make, or is a substantial factor in making, a consequential decision; and (c) Any additional documentation that is reasonably necessary to assist the deployer or other developer in understanding the outputs and monitoring performance of the high-risk artificial intelligence system for risks of algorithmic discrimination.
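Since Sec. 2(2) is framed as a prohibition ("may not offer ... unless"), one way to picture compliance is as a release gate that blocks distribution while any required disclosure category is missing. A rough sketch with invented names, which deliberately leaves the open-ended 2(c) judgment call outside the mechanical check:

    # Illustrative release gate for the Sec. 2(2) "may not provide unless" structure.
    REQUIRED_DISCLOSURES = (
        "intended_uses_statement",       # 2(a)
        "limitations_and_risks",         # 2(b)(i)
        "purpose_outputs_benefits",      # 2(b)(ii)
        "evaluation_summary",            # 2(b)(iii)
        "mitigation_measures",           # 2(b)(iv)
        "use_and_monitoring_guidance",   # 2(b)(v)
    )

    def may_distribute(documentation: dict) -> bool:
        """True only if every enumerated Sec. 2(2) category is present and non-empty.
        Sec. 2(2)(c) (additional reasonably necessary documentation) cannot be
        verified mechanically and still needs human review."""
        return all(documentation.get(key) for key in REQUIRED_DISCLOSURES)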
Pending 2027-01-01
G-02.1
Sec. 2(3)
Plain Language
Developers must provide deployers with information and documentation — including system cards, predeployment impact assessments, and risk management policies — sufficient to enable the deployer or its contracted third party to complete the deployer-side impact assessment required by Section 3(3). This is a feasibility-qualified obligation; developers must provide what is feasible and necessary. This complements but is distinct from the Section 2(2) documentation — it specifically targets impact assessment enablement.
(3) A developer that offers, sells, leases, gives, or otherwise makes available to a deployer or other developer a high-risk artificial intelligence system shall make available to the deployer or other developer to the extent feasible and necessary, information and documentation to enable the deployer, other developer, or a third party contracted by the deployer to complete an impact assessment required by section 3(3) of this act. Such information and documentation must include artifacts, such as system cards or predeployment impact assessments, including relevant risk management policies and impact assessments.
Pending 2027-01-01
G-02.4
Sec. 3(6)
Plain Language
Deployers must publish or make readily available a clear public summary describing how they manage foreseeable algorithmic discrimination risks from their high-risk AI systems. This is a standalone public transparency obligation — separate from the impact assessment and from the consumer-facing disclosures at the point of interaction. The statement must be affirmatively made available, not merely produced on request.
(6) A deployer shall make readily available a clear statement summarizing how the deployer manages any reasonably foreseeable risk of algorithmic discrimination that may arise from the use or deployment of the high-risk artificial intelligence system.
Pending 2027-01-01
G-02.1
Sec. 3(8)
Plain Language
When a deployer makes an intentional and substantial modification to a high-risk AI system, the deployer is treated as a developer for documentation purposes and must comply with all Section 2 developer disclosure requirements — including making available intended use statements, limitation documentation, evaluation summaries, mitigation descriptions, use/misuse guidance, and impact assessment enablement documentation. This effectively means a modifying deployer assumes dual obligations.
(8) A deployer who performs an intentional and substantial modification to a high-risk artificial intelligence system shall comply with the documentation and disclosure requirements for developers pursuant to section 2 of this act.
Pending 2027-01-01
G-02.1
Sec. 2(2)(a)-(c)
Plain Language
Developers may not provide a high-risk AI system to any deployer or other developer without delivering comprehensive documentation covering: intended uses, known limitations and discrimination risks, purpose and intended outputs, a summary of pre-deployment performance and bias evaluations, discrimination mitigation measures taken, guidance on proper use, misuse, and human monitoring, plus any additional documentation reasonably necessary for the deployer to understand outputs and monitor for discrimination. This is a pre-distribution prerequisite — the system cannot be made available until the documentation is delivered.
(2) A developer of a high-risk artificial intelligence system may not offer, sell, lease, give, or otherwise provide to a deployer or other developer a high-risk artificial intelligence system unless the developer makes available to the deployer or other developer: (a) A statement disclosing the intended uses of such high-risk artificial intelligence system; (b) Documentation disclosing the following: (i) The known or reasonably known limitations of such high-risk artificial intelligence system, including any and all known or reasonably foreseeable risks of algorithmic discrimination arising from the intended uses of such high-risk artificial intelligence system; (ii) The purpose of such high-risk artificial intelligence system and its intended outputs, benefits, and uses; (iii) A summary describing how such high-risk artificial intelligence system was evaluated for performance and for mitigation of algorithmic discrimination before it was licensed, sold, leased, given, or otherwise made available to a deployer or other developer; (iv) A description of the measures the developer has taken to mitigate known or reasonably foreseeable risks of algorithmic discrimination that may arise from the reasonably foreseeable deployment or use of such high-risk artificial intelligence system; and (v) A description of how the high-risk artificial intelligence system should be used, not be used, and be monitored by an individual when such system is used to make, or is a substantial factor in making, a consequential decision; and (c) Any additional documentation that is reasonably necessary to assist the deployer or other developer in understanding the outputs and monitoring performance of the high-risk artificial intelligence system for risks of algorithmic discrimination.
Pending 2027-01-01
G-02.1
Sec. 2(3)
Plain Language
Developers must provide deployers with the information and artifacts — including system cards, pre-deployment impact assessments, and risk management policies — necessary for the deployer or its contracted third party to complete the impact assessment required under Section 3(3). This obligation is limited to what is feasible and necessary but ensures deployers have adequate upstream documentation to conduct their own compliance assessments.
(3) A developer that offers, sells, leases, gives, or otherwise makes available to a deployer or other developer a high-risk artificial intelligence system shall make available to the deployer or other developer to the extent feasible and necessary, information and documentation to enable the deployer, other developer, or a third party contracted by the deployer to complete an impact assessment required by section 3(3) of this act. Such information and documentation must include artifacts, such as system cards or predeployment impact assessments, including relevant risk management policies and impact assessments.
Pending 2027-01-01
G-02.1
Sec. 2(6)
Plain Language
Developers must update all deployer-facing disclosures within 90 days of performing an intentional and substantial modification to a high-risk AI system. This is a continuing accuracy obligation — documentation delivered under Section 2(2) must remain current. Notably, routine deployer customizations and changes arising from predetermined continuous learning that were included in the initial impact assessment do not trigger this update obligation.
(6) For a disclosure required pursuant to this section, a developer shall, no later than 90 days after the developer performs an intentional and substantial modification to a high-risk artificial intelligence system, update such disclosure as necessary to ensure that such disclosure remains accurate.
Pending 2027-01-01
G-02.4
Sec. 3(6)
Plain Language
Deployers must publish a clear, readily available summary of how they manage algorithmic discrimination risks arising from their high-risk AI systems. 'Readily available' implies public accessibility — not buried in terms of service or available only upon request. This is a standalone public transparency obligation distinct from the impact assessment requirement and consumer-facing disclosures.
(6) A deployer shall make readily available a clear statement summarizing how the deployer manages any reasonably foreseeable risk of algorithmic discrimination that may arise from the use or deployment of the high-risk artificial intelligence system.
Pending 2027-01-01
G-02.1
Sec. 3(7)
Plain Language
Deployers must update all required disclosures within 30 days of being notified by the developer of an intentional and substantial modification to the high-risk AI system. This is a continuing accuracy obligation — deployer disclosures to consumers and the public must remain current as the system evolves. Note the 30-day deployer window is shorter than the developer's 90-day update window under Section 2(6).
(7) For a disclosure required pursuant to this section, each deployer shall, no later than 30 days after the deployer is notified by the developer that the developer has performed an intentional and substantial modification to a high-risk artificial intelligence system, update such disclosure as necessary to ensure that such disclosure remains accurate.
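The two update windows are easy to conflate because the clocks start from different events: the developer's 90 days run from the modification itself (Sec. 2(6)), while the deployer's 30 days run from the developer's notification (Sec. 3(7)). A small sketch makes the arithmetic concrete; the function names are invented.

    from datetime import date, timedelta

    def developer_update_deadline(modified_on: date) -> date:
        # Sec. 2(6): 90 days from the intentional and substantial modification.
        return modified_on + timedelta(days=90)

    def deployer_update_deadline(notified_on: date) -> date:
        # Sec. 3(7): 30 days from the developer's notification of the modification.
        return notified_on + timedelta(days=30)

    # Example: a system modified on 2027-06-01 and notified to the deployer on
    # 2027-06-10 needs current developer disclosures by 2027-08-30 and current
    # deployer disclosures by 2027-07-10.
    print(developer_update_deadline(date(2027, 6, 1)))   # 2027-08-30
    print(deployer_update_deadline(date(2027, 6, 10)))   # 2027-07-10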
Pending 2027-01-01
G-02.1
Sec. 3(8)
Plain Language
When a deployer performs an intentional and substantial modification to a high-risk AI system, the deployer steps into the developer's shoes and must comply with all developer documentation and disclosure obligations under Section 2. This effectively means the deployer must produce and deliver the same comprehensive documentation package (intended uses, limitations, bias evaluations, mitigation measures, monitoring guidance) that a developer would produce. This triggers only for modifications that create new material discrimination risks or materially change the system's purpose.
(8) A deployer who performs an intentional and substantial modification to a high-risk artificial intelligence system shall comply with the documentation and disclosure requirements for developers pursuant to section 2 of this act.