G-02
Governance & Documentation
Public Transparency & Documentation
Applies to: Developer, Deployer, Government Sector, Foundation Model
Bills — Enacted: 3 unique bills
Bills — Proposed: 28
Last Updated: 2026-03-29
Core Obligation

Developers of AI systems must publish standardized documentation describing model capabilities, limitations, intended uses, safety measures, and risk assessments. The primary audience is the public and downstream deployers — this is distinct from confidential regulatory submissions. Publication must occur before or at deployment and must be kept current.

Sub-Obligations (3)
Bills That Map This Requirement (31)
Bill · Status · Sub-Obligations · Section
Pending 2027-01-01
G-02.1
Health & Safety Code § 1339.76(a)-(c)
Plain Language
Health facilities, clinics, physician offices, and group practice offices that use any AI or clinical decision support system for patient care must provide a comprehensive disclosure to every healthcare professional or other person who uses the tool or views its outputs. The disclosure must cover twelve categories of information: developer and funding details, intended use and patient population, out-of-scope risks and limitations, system inputs and output generation methods, training data characteristics including demographic representativeness and known biases, fairness processes, validation methodology, performance measures, ongoing maintenance plans, update and continued validation processes, a liability notice, and a notice that direct patient care workers may override the tool's output. The disclosure must be provided at the time of use, in plain language, linked to the patient's health record, and with sufficient time for the professional to make reasoned decisions about whether and how to use the tool.
(a) A health facility, clinic, physician's office, or office of a group practice that uses or deploys a covered tool for patient care shall disclose required information, described in subdivision (b), to any licensed health care professional or other person using a covered tool or viewing outputs from a covered tool. (b) Required information under subdivision (a) shall include all of the following: (1) Details on the covered tool, including developer, funding source, any foundation model used, and description of output. (2) Intended use of the covered tool, including intended patient population, intended users, and intended decisionmaking role. (3) Cautioned out-of-scope use of the covered tool, including known risks and limitations. (4) List of the inputs into the covered tool. (5) Description of how the covered tool generates outputs. (6) Development details of the covered tool, including, but not limited to, all of the following: (A) Description of the training set or clinical research underlying recommendations, including demographic representativeness and known biases based on protected characteristics. (B) Description of the relevance of training data to deployed setting. (C) Process used to ensure fairness in development of the intervention. (7) Description of the validation process. (8) Qualitative measures of performance. (9) Description of ongoing maintenance of intervention implementation and use. (10) Description of updates and continued validation or fairness assessment process. (11) Notice that health care entities and developers are liable for harm that results from the use of artificial intelligence in patient care. (12) Notice that a worker providing direct patient care is permitted to override the output of a covered tool if, in the judgment of the worker acting in their scope of practice, such an override is appropriate for the patient, or as necessary to comply with applicable law, including civil rights law. (c) (1) A disclosure made pursuant to this section shall be provided at the time the licensed health care professional or other person uses the covered tool or views any recommendation or output generated by the covered tool. (2) The disclosure shall be provided in plain language to, and linked in the health record of, any patient whose care was affected by the output of the covered tool or whose health information was used as an input to the covered tool. (3) The disclosure shall be provided with ample time for the licensed health care professional or other person to review and make reasoned decisions based on their professional judgment on whether and how to use the covered tool.
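The twelve required categories in subdivision (b) map naturally onto a structured record that a facility could generate once per covered tool and link into the health record as subdivision (c)(2) requires. A minimal sketch in TypeScript; every type and field name below is an illustrative paraphrase of the statute's paragraphs, not a defined term:

```typescript
// Illustrative record for the twelve § 1339.76(b) disclosure categories.
// Field names paraphrase paragraphs (b)(1)-(b)(12); they are not statutory terms.
interface CoveredToolDisclosure {
  toolDetails: {                    // (b)(1)
    developer: string;
    fundingSource: string;
    foundationModel?: string;       // "any foundation model used"
    outputDescription: string;
  };
  intendedUse: {                    // (b)(2)
    patientPopulation: string;
    intendedUsers: string;
    decisionMakingRole: string;
  };
  outOfScopeUse: string;            // (b)(3) known risks and limitations
  inputs: string[];                 // (b)(4)
  outputGenerationMethod: string;   // (b)(5)
  developmentDetails: {             // (b)(6)
    trainingDataDescription: string; // incl. demographic representativeness, known biases
    trainingDataRelevance: string;   // relevance to the deployed setting
    fairnessProcess: string;
  };
  validationProcess: string;        // (b)(7)
  performanceMeasures: string;      // (b)(8) qualitative measures
  ongoingMaintenance: string;       // (b)(9)
  updateAndRevalidation: string;    // (b)(10)
  liabilityNotice: string;          // (b)(11)
  overrideNotice: string;           // (b)(12)
  healthRecordLink?: string;        // (c)(2): linked in the affected patient's record
}
```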
Pending 2027-07-01
G-02.4
Bus. & Prof. Code § 22612(c)
Plain Language
Operators must publish on their website a child safety policy — a public-facing document describing the protective measures they take to mitigate identified child safety risks — and keep it updated as needed. This must be in place by July 1, 2027. The policy must reflect the risks identified through the annual risk assessment required by § 22612(a).
Publish on its internet website, and update as needed to ensure accuracy, a child safety policy.
Pending 2026-01-01
G-02.1
Bus. & Prof. Code § 22756.1(c)(1)
Plain Language
Developers must make the content of their impact assessment available to current and prospective deployers. This is a downstream disclosure obligation — distinct from the confidential submission to regulators under § 22756.6 — ensuring that entities considering deploying a high-risk system can review the developer's assessment of purpose, intended uses, data inputs, foreseeable discriminatory impacts, safeguards, and monitoring guidance before making a deployment decision.
(c) (1) A developer shall make available to deployers and potential deployers the statements included in the developer's impact assessment pursuant to paragraph (2).
Pending 2026-01-01
G-02.4
Bus. & Prof. Code § 22756.2(b)
Plain Language
Deployers must publish and maintain on their website a public summary disclosing which types of high-risk automated decision systems they currently deploy, how they manage known or foreseeable risks of algorithmic discrimination, and the nature and source of information the systems collect and use. This is a standing public disclosure obligation — not a one-time filing — and must be kept current as deployed systems change.
(b) A deployer shall make available on its internet website a statement summarizing all of the following: (1) The types of high-risk automated decision systems it currently deploys. (2) How the deployer manages known or reasonably foreseeable risks of algorithmic discrimination arising from the deployment of those high-risk automated decision systems. (3) The nature and source of the information collected and used by the high-risk automated decision systems deployed by the deployer.
Enacted 2026-01-01
G-02.1
Bus. & Prof. Code § 22757.12(c)(1)(A)-(G), (c)(3)-(4)
Plain Language
All frontier developers — not just large frontier developers — must publish a transparency report on their website before or concurrently with deploying a new frontier model or a substantially modified version. The report must cover: the developer's website, a contact mechanism, release date, supported languages and output modalities, intended uses, and any use restrictions. A safe harbor allows compliance by including this information in a model card or system card. Developers are encouraged, but not required, to make these disclosures consistent with, or superior to, industry best practices.
(c) (1) Before, or concurrently with, deploying a new frontier model or a substantially modified version of an existing frontier model, a frontier developer shall clearly and conspicuously publish on its internet website a transparency report containing all of the following: (A) The internet website of the frontier developer. (B) A mechanism that enables a natural person to communicate with the frontier developer. (C) The release date of the frontier model. (D) The languages supported by the frontier model. (E) The modalities of output supported by the frontier model. (F) The intended uses of the frontier model. (G) Any generally applicable restrictions or conditions on uses of the frontier model. (3) A frontier developer that publishes the information described in paragraph (1) or (2) as part of a larger document, including a system card or model card, shall be deemed in compliance with the applicable paragraph. (4) A frontier developer is encouraged, but not required, to make disclosures described in this subdivision that are consistent with, or superior to, industry best practices.
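Because subdivision (c)(3) deems publication inside a model card or system card compliant, the seven (c)(1) fields can be treated as a small, embeddable schema rather than a standalone document. A minimal sketch under that reading, with illustrative names throughout:

```typescript
// Illustrative schema for the § 22757.12(c)(1)(A)-(G) fields.
interface FrontierTransparencyReport {
  developerWebsite: string;     // (A)
  contactMechanism: string;     // (B) a way for a natural person to reach the developer
  releaseDate: string;          // (C) ISO 8601 date
  supportedLanguages: string[]; // (D)
  outputModalities: string[];   // (E) e.g., "text", "image", "audio"
  intendedUses: string[];       // (F)
  useRestrictions: string[];    // (G) generally applicable restrictions or conditions
}

// Per (c)(3), embedding the report in a larger document such as a model card
// or system card is deemed compliant.
interface ModelCard {
  transparencyReport: FrontierTransparencyReport;
  // ...other sections of the card
}
```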
Enacted 2026-01-01
G-02.3
Bus. & Prof. Code § 22757.12(c)(2)(A)-(D)
Plain Language
Large frontier developers must include additional content in their transparency reports beyond what is required of all frontier developers. Specifically, before or at deployment, the report must summarize catastrophic risk assessments conducted under the developer's frontier AI framework, the results of those assessments, the extent of third-party evaluator involvement, and other steps taken to fulfill the framework's requirements. This is a public-facing disclosure — the summaries must be published on the developer's website, not merely submitted to a regulator.
(2) Before, or concurrently with, deploying a new frontier model or a substantially modified version of an existing frontier model, a large frontier developer shall include in the transparency report required by paragraph (1) summaries of all of the following: (A) Assessments of catastrophic risks from the frontier model conducted pursuant to the large frontier developer's frontier AI framework. (B) The results of those assessments. (C) The extent to which third-party evaluators were involved. (D) Other steps taken to fulfill the requirements of the frontier AI framework with respect to the frontier model.
Enacted 2026-01-01
G-02.1
Bus. & Prof. Code § 22757.12(c)(1)
Plain Language
All frontier developers must publish a transparency report on their website at or before deployment of each new or substantially modified frontier model, disclosing key model characteristics including supported languages, output modalities, intended uses, and use restrictions. Section 22757.12(c)(3) explicitly notes that publishing this information as part of a system card or model card satisfies the requirement. Section 22757.12(c)(4) encourages, but does not require, disclosures consistent with, or superior to, industry best practices.
Before, or concurrently with, deploying a new frontier model or a substantially modified version of an existing frontier model, a frontier developer shall clearly and conspicuously publish on its internet website a transparency report containing all of the following: (A) The internet website of the frontier developer. (B) A mechanism that enables a natural person to communicate with the frontier developer. (C) The release date of the frontier model. (D) The languages supported by the frontier model. (E) The modalities of output supported by the frontier model. (F) The intended uses of the frontier model. (G) Any generally applicable restrictions or conditions on uses of the frontier model.
Enacted 2026-01-01
G-02.3
Bus. & Prof. Code § 22757.12(c)(2)
Plain Language
Large frontier developers must include in each deployment transparency report summaries of their catastrophic risk assessments, assessment results, third-party evaluator involvement, and other steps taken under their frontier AI framework for that model.
Before, or concurrently with, deploying a new frontier model or a substantially modified version of an existing frontier model, a large frontier developer shall include in the transparency report required by paragraph (1) summaries of all of the following: (A) Assessments of catastrophic risks from the frontier model conducted pursuant to the large frontier developer's frontier AI framework. (B) The results of those assessments. (C) The extent to which third-party evaluators were involved. (D) Other steps taken to fulfill the requirements of the frontier AI framework with respect to the frontier model.
Enacted 2026-01-01
Bus. & Prof. Code § 22757.12(f)
Plain Language
When a frontier developer redacts published compliance documents for trade secret, cybersecurity, public safety, or national security reasons, it must describe the nature and justification of each redaction and retain the unredacted information for five years.
(1) When a frontier developer publishes documents to comply with this section, the frontier developer may make redactions to those documents that are necessary to protect the frontier developer’s trade secrets, the frontier developer’s cybersecurity, public safety, or the national security of the United States or to comply with any federal or state law. (2) If a frontier developer redacts information in a document pursuant to this subdivision, the frontier developer shall describe the character and justification of the redaction in any published version of the document to the extent permitted by the concerns that justify redaction and shall retain the unredacted information for five years.
Enacted 2026-06-30
G-02.4
C.R.S. § 6-1-1702(4)(a)
Plain Language
Developers must publish on their website or in a public use case inventory a clear, readily available statement summarizing their high-risk AI systems. The specific content of this summary is defined in the original SB 205 § 6-1-1702(4)(a) (types of high-risk AI systems developed, how the developer manages known or foreseeable risks of algorithmic discrimination, etc.). This is a public-facing transparency obligation distinct from the deployer-facing documentation requirement.
(4) (a) On and after June 30, 2026, a developer shall make available, in a manner that is clear and readily available on the developer's website or in a public use case inventory, a statement summarizing:
Enacted 2026-06-30
G-02.4
C.R.S. § 6-1-1703(5)(a)
Plain Language
Deployers must publish a clear, readily available statement on their website summarizing their deployed high-risk AI systems. The specific summary content is defined in the original SB 205 § 6-1-1703(5)(a) (types of systems deployed, how the deployer manages known or foreseeable discrimination risks, etc.). This is the deployer counterpart to the developer's public use case inventory obligation in § 6-1-1702(4)(a).
(5) (a) On and after June 30, 2026, and except as provided in subsection (6) of this section, a deployer shall make available, in a manner that is clear and readily available on the deployer's website, a statement summarizing:
Pending 2027-01-01
G-02.4
O.C.G.A. § 10-1-972(1)
Plain Language
Production companies deploying AI systems for production in Georgia must conduct and publish a comprehensive inventory of all AI systems in use, beginning no later than December 31, 2027, and annually thereafter. Each inventory entry must include the system name and vendor, a description of its capabilities and uses, how it can independently make or inform decisions, and how it underwent an impact assessment prior to implementation. This is a public transparency obligation — the inventory must be posted on a publicly accessible website. Note that this applies to all AI systems in use, not just those used for digital replicas.
Any production company deploying artificial intelligence systems for use in production in this state shall: (1) Not later than December 31, 2027, and annually thereafter, conduct an inventory of all systems that employ artificial intelligence and are in use and publish such inventory on a publicly accessible website. Each inventory shall include, but not be limited to, the following information for each artificial intelligence system: (A) The name of such system and the vendor, if any, that provided such system; (B) A description of the general capabilities and uses of such system; (C) The manner in which such system is able to be used to independently make, inform, or materially support a conclusion, decision, or judgment; and (D) The manner in which such system underwent an impact assessment prior to implementation;
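Each inventory row carries the same four fixed fields, republished annually. A compact sketch of one entry, using illustrative names rather than the bill's terms:

```typescript
// Illustrative entry for the annual § 10-1-972(1) public AI inventory.
interface AiInventoryEntry {
  systemName: string;              // (A)
  vendor: string | null;           // (A) "the vendor, if any"
  capabilitiesAndUses: string;     // (B)
  decisionRole: string;            // (C) how it can independently make, inform,
                                   //     or materially support decisions
  impactAssessmentMethod: string;  // (D) how it was assessed before implementation
}

// The full inventory is published on a publicly accessible website no later
// than December 31, 2027, and annually thereafter.
type AnnualInventory = AiInventoryEntry[];
```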
Pending 2025-07-01
G-02.4
O.C.G.A. § 10-16-2(d)
Plain Language
Developers must publish on their website or in a public use case inventory a clear summary of the types of automated decision systems they currently make available and how they manage algorithmic discrimination risks. This statement must be kept accurate on an ongoing basis and updated within 90 days of any intentional and substantial modification to a described system. Continuous learning changes that were predetermined and documented in the initial impact assessment are excluded from the modification trigger.
(1) A developer shall make available to the public, in a manner that is clear and readily available on the developer's public website or in a public use case inventory, a statement summarizing: (A) The types of automated decision systems that the developer has developed or intentionally and substantially modified and currently makes available to a deployer or other developer; and (B) How the developer manages known or reasonably foreseeable risks of algorithmic discrimination. (2) A developer shall update the statement described in paragraph (1) of this subsection: (A) As necessary to ensure that the statement remains accurate; and (B) No later than 90 days after the developer intentionally and substantially modifies any automated decision system described in such statement.
Pending 2025-07-01
G-02.4
O.C.G.A. § 10-16-5(a)-(b)
Plain Language
Deployers must publish a clear, readily accessible statement on their public website summarizing the types of automated decision systems they deploy, how they manage algorithmic discrimination risks for each system, and detailed information about the nature, source, and extent of data they collect and use. The statement must be periodically updated. Small deployers meeting all § 10-16-6 conditions are exempt.
(a) Except as provided in Code Section 10-16-6, a deployer shall make available, in a manner that is clear and readily available on the deployer's public website, a statement summarizing: (1) The types of automated decision systems that are currently deployed by the deployer; (2) How the deployer manages known or reasonably foreseeable risks of algorithmic discrimination that may arise from the deployment of each such automated decision system; and (3) In detail, the nature, source, and extent of the information collected and used by the deployer. (b) A deployer shall periodically update the statement described in subsection (a) of this Code section.
Pending 2026-01-01
G-02.4
Section 20(a)
Plain Language
Businesses using AI systems in Illinois must publish a publicly accessible compliance report on their website explaining how they comply with the five AI governance principles. The report must include information on system design, key design decisions (including testing metrics), training data, risk mitigation strategies, and any impact assessments conducted. It must be written in a two-tier format: plain language for general audiences and a more detailed version for specialized audiences who may need to evaluate or challenge the system. The report must be updated annually and whenever significant changes are made to the AI system — such as algorithm modifications, substantial data input changes, or shifts in operational context — with additional triggering events to be defined by Department rulemaking. Applies only to businesses with 10 or more employees.
(a) The Illinois Department of Innovation and Technology shall adopt rules to ensure that a business using an AI system in Illinois publishes on the business's official Internet website accessible to the public a report explaining compliance with the 5 principles of AI governance iterated in this Act. This report shall: (1) be updated annually and whenever significant changes are made to the AI system, such as modifications to algorithms, substantial alterations to data inputs, or shifts in operational contexts, additional significant change shall be established by the Department of Innovation and Technology; (2) include information on the design, major decisions made during the design process (such as testing metrics), training data, risk mitigation strategies, and any impact assessments conducted; and (3) be written in plain language to ensure accessibility for the general public, while also providing a more detailed explanation for specialized audiences; this 2-level approach ensures clarity for everyone while offering enough depth for those who may need to understand or challenge the system or its outputs, such as in cases of fairness or discrimination.
Pending 2026-01-01
G-02.4
Section 15(d)-(e)
Plain Language
Employers must notify affected employees and exclusive bargaining representatives of the results of each impact assessment and provide a copy of the assessment upon request. Additionally, each impact assessment must be published on the employer's website. Publication is subject to redaction limitations under Section 20, which permits redaction where disclosure would substantially harm public health or safety, infringe privacy, impair cybersecurity, or reveal security-related technology details — but any redaction must be accompanied by a published explanatory statement.
(d) The employer shall notify affected employees and any exclusive bargaining representative, the results of each impact assessment, and provide a copy of the impact assessment upon request. (e) Each impact assessment shall be published on the employer's website, subject to the limitations set forth in Section 20.
Pending 2027-01-01
G-02.4
Section 25
Plain Language
Deployers must publish a clear, readily accessible public policy summarizing: (1) the types of automated decision tools they currently use or make available, and (2) how they manage the foreseeable risks of algorithmic discrimination arising from those tools. This is a standalone public transparency requirement — distinct from the impact assessment, which is submitted to the Attorney General. The policy must be kept current (it covers tools 'currently in use') and must be accessible to the general public, not just regulators.
A deployer shall make publicly available, in a readily accessible manner, a clear policy that provides a summary of both of the following: (1) the types of automated decision tools currently in use or made available to others by the deployer; and (2) how the deployer manages the reasonably foreseeable risks of algorithmic discrimination that may arise from the use of the automated decision tools it currently uses or makes available to others.
Pre-filed 2025-07-07
G-02.1
Chapter 93M, Section 2(b)
Plain Language
Developers must furnish deployers with documentation covering three areas: (1) a summary of the system's intended and foreseeable uses, (2) known limitations and risks, specifically including algorithmic discrimination risks, and (3) information about training datasets and bias mitigation measures applied. This is a pre-deployment downstream disclosure obligation — deployers cannot comply with their own impact assessment and risk management obligations without this documentation from developers.
(b) Documentation Requirements: Developers must provide deployers with: (1) A summary of intended and foreseeable uses of the AI system; (2) Known limitations and risks, including algorithmic discrimination; (3) Information on the datasets used for training, including measures taken to mitigate biases.
Pre-filed 2025-07-07
G-02.4
Chapter 93M, Section 2(d)
Plain Language
Developers must publish on their website a plain-language public summary covering the types of AI systems they develop, the measures they take to mitigate algorithmic discrimination, and contact information for public inquiries. This is a standing public transparency obligation — the summary must be accessible to the general public, not just deployers or regulators.
(d) Public Statement: Developers must publish a plain-language summary on their website, detailing: (1) Types of AI systems they develop; (2) Measures to mitigate algorithmic discrimination; (3) Contact information for inquiries.
Pre-filed 2025-07-07
G-02.4
Chapter 93M, Section 3(d)
Plain Language
Deployers must publicly disclose what types of high-risk AI systems they operate and how they mitigate the associated risks. This is a standing public transparency obligation distinct from the deployer-facing documentation developers must provide under Section 2(b) and from the impact assessments under Section 3(b). The provision does not specify the format or publication location, though the AG has rulemaking authority under Section 7 to elaborate on requirements.
(d) Transparency: Deployers must publicly disclose the types of high-risk AI systems in use and their risk mitigation strategies.
Pre-filed 2025-07-07
Section 4(a)-(b)
Plain Language
Any corporation operating in Massachusetts that uses AI to target consumer groups or influence consumer behavior must disclose the methods, purposes, and contexts of that targeting, the specific ways AI is designed to influence behavior, and details of third-party entities involved in the design, deployment, or operation of those AI systems. These disclosures must be posted publicly on the corporation's website in an accessible format and included in terms and conditions provided to consumers before significant interaction with an AI system. Proprietary information is protected under state confidentiality laws. This Section 4 obligation is broader than Section 3's high-risk system focus — it applies to any AI used for consumer targeting or behavioral influence, regardless of whether it qualifies as high-risk.
(a) Disclosure of AI Use: Any corporation operating in Massachusetts that uses artificial intelligence systems or related tools to target specific consumer groups or influence behavior must disclose: (1) Purpose of AI Use: The methods, purposes, and contexts in which AI systems are used to identify or target specific classes of individuals; (2) Behavioral Influence: The specific ways in which AI tools are designed to influence consumer behavior; (3) Third-Party Partnerships: Details of any third-party entities involved in the design, deployment, or operation of AI systems used for targeting or behavioral influence. Proprietary information will be safeguarded and exempt from public disclosure under state confidentiality laws. (b) Public Disclosure Requirements: Corporations must make these disclosures: (1) Publicly available on their website in a manner that is easily accessible and comprehensible; (2) Included in terms and conditions provided to consumers prior to significant interaction with an AI system.
Pre-filed 2025-07-17
G-02.1
Ch. 93M § 2(c)
Plain Language
Developers must provide deployers with documentation — such as model cards, dataset cards, or impact assessments — sufficient for deployers to complete the impact assessments required under Section 3(c). This is a feasibility-qualified obligation. A developer that also serves as the deployer for a system is exempt from generating this documentation unless the system is also provided to an unaffiliated deployer.
(c) (1) except as provided in subsection (f) of this section, a developer that offers, sells, leases, licenses, gives, or otherwise makes available to a deployer or other developer a high-risk artificial intelligence system not later than 6 months after the effective date of this act, shall make available to the deployer or other developer, to the extent feasible, the documentation and information, through artifacts such as model cards, dataset cards, or other impact assessments, necessary for a deployer, or for a third party contracted by a deployer, to complete an impact assessment pursuant to section 3 (c). (2) a developer that also serves as a deployer for a high-risk artificial intelligence system is not required to generate the documentation required by this section unless the high-risk artificial intelligence system is provided to an unaffiliated entity acting as a deployer.
Pre-filed 2025-07-17
G-02.4
Ch. 93M § 2(d)
Plain Language
Developers must publish on their website or in a public use case inventory a clear statement summarizing: (1) the types of high-risk AI systems they develop or substantially modify and make available, and (2) how they manage algorithmic discrimination risks for those systems. This statement must be kept current and updated within 90 days of any intentional and substantial modification. Note that the definition of 'intentional and substantial modification' excludes continuous learning changes that were predetermined and documented in the initial impact assessment.
(d) (1) Not later than 6 months after the effective date of this act, a developer shall make available, in a manner that is clear and readily available on the developer's website or in a public use case inventory, a statement summarizing: (i) the types of high-risk artificial intelligence systems that the developer has developed or intentionally and substantially modified and currently makes available to a deployer or other developer; and (ii) how the developer manages known or reasonably foreseeable risks of algorithmic discrimination that may arise from the development or intentional and substantial modification of the types of high-risk artificial intelligence systems described in accordance with subsection (d)(1)(i) of this section. (2) a developer shall update the statement described in subsection (d)(1) of this section: (i) as necessary to ensure that the statement remains accurate; and (ii) no later than ninety days after the developer intentionally and substantially modifies any high-risk artificial intelligence system described in subsection (d)(1)(i) of this section.
Pre-filed 2025-07-17
G-02.4
Ch. 93M § 3(e)
Plain Language
Deployers must publish on their website a clear, readily accessible statement summarizing: (1) the types of high-risk AI systems currently deployed, (2) how they manage algorithmic discrimination risks for each system, and (3) detailed information about the nature, source, and extent of data they collect and use. The statement must be periodically updated. The small-deployer exemption under Section 3(f) applies to this obligation.
(e) (1) Not later than 6 months after the effective date of this act, and except as provided in subsection (f) of this section, a deployer shall make available, in a manner that is clear and readily available on the deployer's website, a statement summarizing: (i) the types of high-risk artificial intelligence systems that are currently deployed by the deployer; (ii) how the deployer manages known or reasonably foreseeable risks of algorithmic discrimination that may arise from the deployment of each high-risk artificial intelligence system described pursuant to subsection (e)(1)(i) of this section; and (iii) in detail, the nature, source, and extent of the information collected and used by the deployer. (2) a deployer shall periodically update the statement described in subsection (e)(1) of this section.
Pending 2026-01-01
G-02.3
Sec. 7(1)(c)
Plain Language
Large developers must publish a transparency report at least every 90 days covering a rolling window from 120 days to 30 days before publication. Each report must include risk assessment conclusions, updated capability assessments for each critical risk type (if changed), and — if a new or modified model posing higher critical risk was deployed — the rationale for deployment and safeguards implemented. The 30-day lookback gap allows time for report preparation. Reports must be conspicuously published.
(c) Not less than once every 90 days, produce and conspicuously publish a transparency report that covers the period of 120 days before the publishing of the report to 30 days before the publishing of the report that includes all of the following information: (i) The conclusion of any risk assessments made during the reporting period in accordance with the safety and security protocol under subdivision (a). (ii) If different from the preceding reporting period, for each type of critical risk, an assessment of the relevant capability of the foundation model to create that critical risk of whichever of the large developer's foundation models, whether deployed or not, would pose the highest level of that critical risk if deployed without adequate safeguards and protections. (iii) If, during the reporting period, the large developer has deployed or modified a foundation model that would pose a higher level of critical risk than any of the large developer's existing deployed foundation models if deployed without adequate safeguards and protections, both of the following: (A) The grounds on which and the process by which the large developer decided to deploy the foundation model. (B) Any safeguards and protections implemented by the large developer to mitigate critical risks.
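The fixed offsets in Sec. 7(1)(c) have a useful property: if reports are published exactly 90 days apart, consecutive 120-to-30-day windows tile the timeline with no gaps or overlaps. A small sketch of the date arithmetic (the function name is ours, not the bill's):

```typescript
// Sec. 7(1)(c): a report published at time t covers [t - 120 days, t - 30 days].
const DAY_MS = 24 * 60 * 60 * 1000;

function reportingWindow(published: Date): { start: Date; end: Date } {
  return {
    start: new Date(published.getTime() - 120 * DAY_MS),
    end: new Date(published.getTime() - 30 * DAY_MS),
  };
}

// At the maximum 90-day cadence, window(t).end equals window(t + 90d).start,
// since (t + 90) - 120 = t - 30, so coverage is continuous.
const first = reportingWindow(new Date("2027-04-01T00:00:00Z"));
const next = reportingWindow(new Date("2027-06-30T00:00:00Z")); // 90 days later
console.log(first.end.getTime() === next.start.getTime()); // true
```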
Pending 2027-01-01
G-02.4
Sec. 4(2)
Plain Language
When a large frontier developer or large chatbot provider materially modifies its public safety and child protection plan, it must publish the updated plan and a written justification for the changes on its website within 30 days. This is an ongoing disclosure obligation triggered by material plan modifications — not a one-time publication requirement.
(2) If a large frontier developer or large chatbot provider makes a material modification to its public safety and child protection plan, the large frontier developer or large chatbot provider shall clearly and conspicuously publish on such developer's or provider's website the modified public safety and child protection plan and a justification for such modification within thirty days after such material modification.
Pending 2027-01-01
G-02.4
Sec. 4(3)
Plain Language
Before or concurrently with integrating a new or substantially modified foundation model into a covered chatbot, a large chatbot provider must publish on its website summaries of its child safety risk assessments, the assessment results, the degree of third-party evaluator involvement, and other steps taken to fulfill the child protection plan. This disclosure is triggered each time a new or substantially modified foundation model is integrated into a covered chatbot.
(3) Before, or concurrently with, integrating a new foundation model, or a version of an existing foundation model that has been substantially modified, into a covered chatbot operated by the large chatbot provider, a large chatbot provider shall conspicuously publish on its website summaries of all of the following: (i) Assessments of child safety risks conducted pursuant to the large chatbot provider's public safety and child protection plan; (ii) The results of such assessments; (iii) The extent to which third-party evaluators were involved in such assessments; and (iv) Other steps taken to fulfill the requirements of the public safety and child protection plan with respect to child safety risks.
Pending 2027-01-01
G-02.3
Sec. 4(4)(a)-(b)
Plain Language
Before or concurrently with deploying a new or substantially modified frontier model, a large frontier developer must publish on its website summaries of its catastrophic risk assessments, assessment results, third-party evaluator involvement, and other steps taken to address catastrophic risks. Publication as part of a system card or model card satisfies this requirement. This disclosure is triggered each time a new or substantially modified frontier model is deployed.
(4)(a) Before, or concurrently with, deploying a new frontier model or a version of an existing frontier model that the large frontier developer has substantially modified, a large frontier developer shall conspicuously publish on its website summaries of all of the following: (i) Assessments of catastrophic risks from the frontier model conducted pursuant to the large frontier developer's public safety and child protection plan; (ii) The results of such assessments; (iii) The extent to which third-party evaluators were involved in such assessments; and (iv) Other steps taken to fulfill the requirements of the public safety and child protection plan with respect to catastrophic risks from the frontier model. (b) A large frontier developer that publishes the information described in subdivision (5)(a) of this section as part of a larger document, including a system card or model card, shall be deemed in compliance with this subsection.
Pending 2026-02-01
G-02.1
Sec. 3(2)(a)-(d)
Plain Language
Developers must provide deployers with comprehensive documentation covering: intended and harmful uses, training data summaries, known limitations and discrimination risks, system purpose and benefits, pre-deployment performance and bias evaluations, data governance measures, intended outputs, discrimination mitigation steps, usage and monitoring guidance, and information needed for deployers to complete their own impact assessments. This documentation need not be made publicly available — it is a deployer-facing disclosure obligation. A developer that also serves as a deployer is exempt unless the system is provided to an unaffiliated deployer. Trade secrets and security-sensitive information may be withheld.
(2) Except as otherwise provided in subsection (6) of this section, on and after February 1, 2026, each developer of a high-risk artificial intelligence system shall make available to the deployer or other developer of the high-risk artificial intelligence system: (a) A general statement describing the uses and known harmful or inappropriate uses of the high-risk artificial intelligence system; (b) Documentation disclosing: (i) A high-level summary of the types of data used to train the high-risk artificial intelligence system; (ii) Each known limitation of the high-risk artificial intelligence system, including each known or reasonably foreseeable risk of algorithmic discrimination arising from the intended use of the high-risk artificial intelligence system; (iii) The purpose of the high-risk artificial intelligence system; (iv) Any intended benefit and use of the high-risk artificial intelligence system; and (v) Information necessary to allow the deployer to comply with the requirements of section 4 of this act; (c) Documentation describing: (i) How the high-risk artificial intelligence system was evaluated for performance and mitigation of algorithmic discrimination before the high-risk artificial intelligence system was offered, sold, leased, licensed, given, or otherwise made available to the deployer; (ii) The data governance measures used to cover the training datasets and the measures used to examine the suitability of data sources, possible biases, and appropriate mitigation; (iii) Intended outputs of the high-risk artificial intelligence system; (iv) The measures the developer has taken to mitigate known risks of algorithmic discrimination that could arise from the deployment of the high-risk artificial intelligence system; and (v) How the high-risk artificial intelligence system should be used, not be used, and be monitored by an individual when the high-risk artificial intelligence system is used to make, or is a substantial factor in making, a consequential decision; and (d) Documentation that is reasonably necessary to assist the deployer in understanding each output and monitor the performance of the high-risk artificial intelligence system for each risk of algorithmic discrimination.
Pending 2026-02-01
G-02.4
Sec. 3(4)(a)-(b)
Plain Language
Developers must publish and maintain a public use case inventory summarizing: the types of high-risk AI systems they have developed or substantially modified that are currently available, and how they manage known algorithmic discrimination risks. This must be kept current and updated within 90 days of any intentional and substantial modification. Changes from ongoing machine learning that were pre-planned and documented in the initial impact assessment are excluded from the modification trigger.
(4)(a) On and after February 1, 2026, a developer shall make a statement summarizing the following available in a manner that is clear and readily available in a public use case inventory: (i) The types of high-risk artificial intelligence systems that the developer has developed and currently makes available to a deployer or other developer; (ii) The types of high-risk artificial intelligence system that the developer has intentionally and substantially modified and currently makes available to a deployer or other developer; and (iii) How the developer manages known risks of algorithmic discrimination that could arise from the development or intentional and substantial modification of the types of high-risk artificial intelligence systems described in subdivisions (4)(a)(i) and (ii) of this section. (b) A developer shall update the statement described in subdivision (4)(a) of this section: (i) As necessary to ensure that the statement remains accurate; and (ii) No later than ninety days after the developer intentionally and substantially modifies any high-risk artificial intelligence system described in subdivision (4)(a)(ii) of this section.
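The 90-day update clock in Sec. 3(4)(b)(ii) runs only from modifications that are intentional and substantial, and pre-planned continuous-learning changes documented in the initial impact assessment are carved out of that trigger. A sketch of the trigger logic, with illustrative names:

```typescript
const DAY_MS = 24 * 60 * 60 * 1000;

interface SystemModification {
  date: Date;
  intentionalAndSubstantial: boolean;
  // Pre-planned continuous-learning changes documented in the initial
  // impact assessment do not count as intentional and substantial.
  predeterminedContinuousLearning: boolean;
}

// Returns the deadline for updating the public use case inventory statement,
// or null if the modification does not start the 90-day clock.
function statementUpdateDeadline(mod: SystemModification): Date | null {
  if (!mod.intentionalAndSubstantial || mod.predeterminedContinuousLearning) {
    return null;
  }
  return new Date(mod.date.getTime() + 90 * DAY_MS);
}
```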
Pending 2026-02-01
G-02.4
Sec. 4(5)(a)-(b)
Plain Language
Deployers must publish and maintain a clear, readily available public statement describing: the types of high-risk AI systems they currently deploy, how they manage known algorithmic discrimination risks, and the nature, source, and extent of information they collect and use. This statement must be updated at least annually. A small-deployer exemption applies under the conditions in Sec. 4(6).
(5)(a) Except as provided in subsection (6) of this section, on and after February 1, 2026, a deployer shall make a statement with the following information available in a manner that is clear and readily available: (i) The types of high-risk artificial intelligence systems that are currently deployed by the deployer; (ii) How the deployer manages known risks of algorithmic discrimination that may arise from the deployment of the types of high-risk artificial intelligence systems described in subdivision (a)(ii) of this subsection; and (iii) A description of the nature, source, and extent of the information collected and used by the deployer. (b) A deployer shall update the statement described in subdivision (a) of this subsection at least once each year.
Pending 2026-02-01
G-02.1
Sec. 3(3)(a)-(b)
Plain Language
When making a high-risk AI system available to a deployer, developers must provide — to the extent feasible — all documentation and information needed for the deployer to complete its own impact assessment, including any model card or developer impact assessment. Developer-deployers that use the system only internally are exempt unless the system is provided to an unaffiliated deployer. This is a deployer-enabling disclosure obligation distinct from the general documentation requirements in Sec. 3(2).
(3)(a) Except as otherwise provided in subsection (6) of this section, on or after February 1, 2026, a developer that offers, sells, leases, licenses, gives, or otherwise makes any high-risk artificial intelligence system available to a deployer or other developer shall to the extent feasible make available to the deployer or other developer the documentation and information necessary for the deployer or a third party contracted by the deployer to complete an impact assessment pursuant to subsection (3) of section 4 of this act. Such documentation and information includes any model card or other impact assessment. (b) A developer that also serves as a deployer for a high-risk artificial intelligence system is not required to generate the documentation required by this section unless the high-risk artificial intelligence system is provided to an unaffiliated entity acting as a deployer.
Pending 2027-01-01
G-02.1
GBL § 1551(2)(a)-(d)
Plain Language
Developers of high-risk AI decision systems must provide deployers and downstream developers with comprehensive documentation covering: foreseeable and harmful uses, training data summaries, known limitations and discrimination risks, purpose and intended benefits, pre-deployment evaluation methodology, data governance measures, intended outputs, discrimination mitigation steps, human monitoring instructions, and any additional documentation needed for compliance. This is a deployer-facing documentation obligation — not a public disclosure requirement. Trade secrets and security-sensitive information are exempt under § 1551(5).
Beginning on January first, two thousand twenty-seven, and except as provided in subdivision five of this section, a developer of a high-risk artificial intelligence decision system shall make available to each deployer or other developer the following information: (a) A general statement describing the reasonably foreseeable uses, and the known harmful or inappropriate uses, of such high-risk artificial intelligence decision system; (b) Documentation disclosing: (i) high-level summaries of the type of data used to train such high-risk artificial intelligence decision system; (ii) the known or reasonably foreseeable limitations of such high-risk artificial intelligence decision system, including, but not limited to, the known or reasonably foreseeable risks of algorithmic discrimination arising from the intended uses of such high-risk artificial intelligence decision system; (iii) the purpose of such high-risk artificial intelligence decision system; (iv) the intended benefits and uses of such high-risk artificial intelligence decision system; and (v) any other information necessary to enable such deployer or other developer to comply with the provisions of this article; (c) Documentation describing: (i) how such high-risk artificial intelligence decision system was evaluated for performance, and mitigation of algorithmic discrimination, before such high-risk artificial intelligence decision system was offered, sold, leased, licensed, given, or otherwise made available to such deployer or other developer; (ii) the data governance measures used to cover the training datasets and examine the suitability of data sources, possible biases, and appropriate mitigation; (iii) the intended outputs of such high-risk artificial intelligence decision system; (iv) the measures such deployer or other developer has taken to mitigate any known or reasonably foreseeable risks of algorithmic discrimination that may arise from deployment of such high-risk artificial intelligence decision system; and (v) how such high-risk artificial intelligence decision system should be used, not be used, and be monitored by an individual when such high-risk artificial intelligence decision system is used to make, or as a substantial factor in making, a consequential decision; and (d) Any additional documentation that is reasonably necessary to assist a deployer or other developer to: (i) understand the outputs of such high-risk artificial intelligence decision system; and (ii) monitor the performance of such high-risk artificial intelligence decision system for risks of algorithmic discrimination.
Pending 2027-01-01
G-02.4
GBL § 1551(4)(a)-(b)
Plain Language
Developers must publish on their website or a public use case inventory a clear, readily available summary of the types of high-risk AI decision systems they currently offer and how they manage algorithmic discrimination risks associated with those systems. This statement must be kept current and updated within 90 days of any intentional and substantial modification. Trade secrets and security-sensitive information are exempt under § 1551(5).
(a) Beginning on January first, two thousand twenty-seven, each developer shall publish, in a manner that is clear and readily available, on such developer's website, or a public use case inventory, a statement summarizing: (i) the types of high-risk artificial intelligence decision systems that such developer: (A) has developed or intentionally and substantially modified; and (B) currently makes available to a deployer or other developer; and (ii) how such developer manages any known or reasonably foreseeable risks of algorithmic discrimination that may arise from the development or intentional and substantial modification of the types of high-risk artificial intelligence decision systems described in subparagraph (i) of this subdivision. (b) Each developer shall update the statement described in paragraph (a) of this subdivision: (i) as necessary to ensure that such statement remains accurate; and (ii) no later than ninety days after the developer intentionally and substantially modifies any high-risk artificial intelligence decision system described in subparagraph (i) of paragraph (a) of this subdivision.
Pending 2027-01-01
G-02.4
GBL § 1552(6)(a)-(b)
Plain Language
Deployers must publish on their website a clear, readily available statement summarizing: the types of high-risk AI decision systems they deploy, how they manage algorithmic discrimination risks for each system, and detailed information about the nature, source, and extent of data they collect and use. The statement must be periodically updated to remain current. The obligation may be shifted to the developer by contract under § 1552(7).
(a) Beginning on January first, two thousand twenty-seven, and except as provided in subdivision seven of this section, each deployer shall make available, in a manner that is clear and readily available on such deployer's website, a statement summarizing: (i) the types of high-risk artificial intelligence decision systems that are currently deployed by such deployer; (ii) how such deployer manages any known or reasonably foreseeable risks of algorithmic discrimination that may arise from deployment of each high-risk artificial intelligence decision system described in subparagraph (i) of this paragraph; and (iii) in detail, the nature, source and extent of the information collected and used by such deployer. (b) Each deployer shall periodically update the statement required pursuant to paragraph (a) of this subdivision.
Pending 2027-01-01
G-02.1
GBL § 1553(1)(b)
Plain Language
Developers of general-purpose AI models must create, maintain, and make available to downstream integrators documentation enabling them to understand model capabilities and limitations and comply with their own obligations under this article. At minimum, the documentation must cover technical integration requirements and the model specification information (intended tasks, integration contexts, acceptable use policies, release date, distribution methods, and I/O modalities). Documentation must be reviewed and revised at least annually. Open-source models with public parameters are exempt from the annual review requirement but not from the initial documentation obligation. Trade secrets are exempt under § 1553(3).
(b) create, implement, maintain and make available to persons that intend to integrate such general-purpose artificial intelligence model into such persons' artificial intelligence decision systems documentation and information that: (i) enables such persons to: (A) understand the capabilities and limitations of such general-purpose artificial intelligence model; and (B) comply with such persons' obligations pursuant to this article; (ii) discloses, at a minimum: (A) the technical means required for such general-purpose artificial intelligence model to be integrated into such persons' artificial intelligence decision systems; (B) the information listed in subparagraph (ii) of paragraph (a) of this subdivision; and (iii) except as provided in subdivision two of this section, is reviewed and revised at least annually, or more frequently, as necessary to maintain the accuracy of such documentation and information.
Pending 2025-04-27
G-02.4
State Tech. Law § 506(10)
Plain Language
Where possible, operators must make available reporting that confirms they are respecting residents' data choices and that assesses the impact of surveillance technologies on residents' rights, opportunities, and access. The 'whenever possible' qualifier makes this a soft obligation with unclear enforceability.
10. Whenever possible, New York residents shall have access to reporting that confirms respect for their data decisions and provides an assessment of the potential impact of surveillance technologies on their rights, opportunities, or access.
Pending 2025-04-27
G-02.4
State Tech. Law § 507(6)
Plain Language
Summary reports about automated systems — including assessments of how clear and high-quality the notice and explanations provided to residents actually are — must be made publicly available whenever possible. This is a public transparency obligation focused on the quality of notice, not just the existence of notice. The 'whenever possible' qualifier introduces discretion about when disclosure is required.
6. Summary reporting, including plain language information about these automated systems and assessments of the clarity and quality of notice and explanations, shall be made public whenever possible.
Pending 2025-04-27
G-02.4
State Tech. Law § 508(5)
Plain Language
Summary reports describing human governance processes — including their timeliness, accessibility, outcomes, and effectiveness — must be made publicly available whenever possible. This public disclosure obligation applies to the human alternatives and fallback mechanisms required by § 508, allowing residents and the public to assess whether human oversight processes are functioning as intended.
5. Summary reporting, which includes a description of such human governance processes and an assessment of their timeliness, accessibility, outcomes, and effectiveness, shall be made publicly available whenever possible.
Pending 2025-07-26
G-02.4
State Tech. Law § 514(1)-(2)
Plain Language
Operators must conspicuously post their AI license in their physical office and, if they have a public internet presence, on their website or mobile application. The license is non-transferable and non-assignable. This creates a public transparency obligation — users and the public can verify an operator's licensed status.
§ 514. License provisions and posting. 1. Any license issued under this article shall state the name and address of the licensee, and if the licensee be a co-partnership or association, the names of the members thereof, and if a corporation the date and place of its incorporation. 2. Such license or licenses shall be kept conspicuously posted in the office of the licensee and, where such licensee has a public internet presence, on the website or mobile application of the licensee and shall not be transferable or assignable.
Pending
G-02.4
Civil Rights Law § 88(5)
Plain Language
The attorney general must maintain a publicly accessible online database containing all developer and deployer reports and audits, updated biannually. Reports are published with redactions where developers or deployers have successfully requested protection of sensitive information through a process the attorney general will establish by rule. While this provision primarily directs the attorney general, it creates a constructive public disclosure obligation for developers and deployers — their reports and audits will be published unless they affirmatively seek and obtain redactions.
5. The attorney general shall: (a) promulgate rules for a process whereby developers and deployers may request redaction of portions of reports required under this section to ensure that they are not required to disclose sensitive and protected information; and (b) maintain an online database that is accessible to the general public with reports, redacted in accordance with this subdivision, and audits required by this article, which database shall be updated biannually.
Pending 2025-09-05
G-02.4
Real Prop. Law § 442-m(3)(d)
Plain Language
Real estate brokers and online housing platforms using AI tools must document and retain compliance information related to the housing advertisement anti-discrimination requirements in subdivision 3, and publish a public-facing report on their website describing their compliance status and the internal auditing methods used. This is an ongoing public transparency obligation — the information must be maintained and accessible on the entity's website, not merely filed with a regulator.
(d) document, retain, and provide public-facing reporting on such real estate broker's or online housing platform's website, information on compliance with this subdivision, and any internal auditing methods used for such compliance.
Pending 2027-01-01
G-02.4
Civil Rights Law § 104(6)(a)(iii)
Plain Language
Developers and deployers must publish a summary of each full pre-deployment evaluation, full impact assessment, and developer annual review on their website within 30 days of completion, in a manner easily accessible to individuals. This is the public-facing disclosure component of the evaluation/assessment submission process — ensuring that the public has access to information about how covered algorithms have been evaluated for harm and disparate impact.
(iii) not later than thirty days after completion: (A) publish a summary of the evaluation, assessment, or review on the website of the developer or deployer in a manner that is easily accessible to individuals; and (B) submit such summary to the division.
Pending 2027-01-01
G-02.4
Civil Rights Law § 110(1)-(5)
Plain Language
Every developer and deployer must publish a comprehensive public disclosure covering: identity and contact information (including corporate affiliates receiving personal data), links to evaluation/assessment summaries, categories of personal data collected and processing purposes, third-party data transfers, individual rights descriptions, compliance practices, a mandated audit disclaimer, and the disclosure's effective date. Disclosures must be in plain language, accessible to individuals with disabilities, and available in the top 10 languages spoken in New York. Material changes require prior electronic notification to affected individuals in each covered language. All previous disclosure versions must be retained for at least 10 years and published online with a public change log describing each material change. This is one of the most detailed public disclosure requirements in any U.S. AI bill.
1. Each developer or deployer shall make publicly available, in plain language and in a clear, conspicuous, not misleading, easy-to-read, and readily accessible manner, a disclosure that provides a detailed and accurate representation of the developer or deployer's practices regarding the requirements under this article. 2. The disclosure required under subdivision one of this section shall include, at a minimum, the following: (a) the identity and the contact information of: (i) the developer or deployer to which the disclosure applies (including the developer or deployer's point of contact and electronic and physical mail address, as applicable for any inquiry concerning a covered algorithm or individual rights under this article); and (ii) any other entity within the same corporate structure as the developer or deployer to which personal data is transferred by the developer or deployer. (b) a link to the website containing the developer or deployer's summaries of pre-deployment evaluations, impact assessments, and annual review of assessments, as applicable; (c) the categories of personal data the developer or deployer collects or processes in the development or deployment of a covered algorithm and the processing purpose for each such category; (d) whether the developer or deployer transfers personal data, and, if so, each third party to which the developer or deployer transfers such data and the purpose for which such data is transferred, except with respect to a transfer to a governmental entity pursuant to a court order or law that prohibits the developer or deployer from disclosing such transfer; (e) a prominent description of how an individual can exercise the rights described in this article; (f) a general description of the developer or deployer's practices for compliance with the requirements described in sections one hundred three and one hundred six of this article; (g) the following disclosure: "The audit of this algorithm was conducted to comply with the New York Artificial Intelligence Civil Rights Act, which seeks to avoid the use of any algorithm that has a disparate impact on certain protected classes of individuals. The audit does not guarantee that this algorithm is safe or in compliance with all applicable laws."; and (h) the effective date of the disclosure. 3. The disclosure required under this section shall be made available in each covered language in which the developer or deployer operates or provides a good or service. 4. Any disclosure provided under this section shall be made available in a manner that is reasonably accessible to and usable by individuals with disabilities. 5. (a) If a developer or deployer makes a material change to the disclosure required under this section, the developer or deployer shall notify each individual affected by such material change prior to implementing the material change. (b) Each developer or deployer shall take all reasonable measures to provide to each affected individual a direct electronic notification regarding any material change to the disclosure, in each covered language in which the disclosure is made available and taking into account available technology and the nature of the relationship with such individual. 
(c) (i) Beginning after the effective date of this article, each developer or deployer shall retain a copy of each previous version of the disclosure required under this section for a period of at least ten years after the last day on which such version was effective and publish each such version on its website. Each developer or deployer shall make publicly available, in a clear, conspicuous, and readily accessible manner, a log describing the date and nature of each material change to its disclosure during the retention period, and such descriptions shall be sufficient for a reasonable individual to understand the material effect of each material change. (ii) The obligations described in this paragraph shall not apply to any previous version of a developer or deployer's disclosure of practices regarding the collection, processing, and transfer of personal data, or any material change to such disclosure, that precedes the effective date of this article.
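For compliance engineers, the retention and change-log mechanics of § 110(5)(c) map naturally onto a versioned record. The Python sketch below is illustrative only — every name (`DisclosureVersion`, `publish_material_change`, and so on) is ours, not a statutory term — and it approximates the ten-year retention clock with calendar arithmetic; a production system would need exact date handling plus the pre-change notification workflow of § 110(5)(a)-(b).

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

RETENTION_YEARS = 10  # § 110(5)(c)(i): retain each version for >= 10 years

@dataclass
class DisclosureVersion:
    text: str                        # full disclosure text for this version
    effective: date                  # first day this version was in force
    superseded: date | None = None   # last day in force (None = current)

@dataclass
class MaterialChange:
    changed_on: date
    description: str  # must let a reasonable reader grasp the change's effect

@dataclass
class PublicDisclosure:
    versions: list[DisclosureVersion] = field(default_factory=list)
    change_log: list[MaterialChange] = field(default_factory=list)

    def publish_material_change(self, new_text: str, effective: date,
                                description: str) -> None:
        """Record a new version and log the change. Affected individuals
        must be notified *before* the change takes effect (§ 110(5)(a))."""
        if self.versions:
            self.versions[-1].superseded = effective
        self.versions.append(DisclosureVersion(new_text, effective))
        self.change_log.append(MaterialChange(effective, description))

    def must_remain_published(self, today: date) -> list[DisclosureVersion]:
        """Versions whose retention clock (10 years from the last day the
        version was effective) has not yet run; leap years approximated."""
        horizon = timedelta(days=365 * RETENTION_YEARS)
        return [v for v in self.versions
                if (v.superseded or today) + horizon >= today]
```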
Pending 2027-01-01
G-02.4
Civil Rights Law § 88(5)
Plain Language
The Attorney General must maintain a publicly accessible online database containing all reports and audits filed under this article, updated biannually. Developers and deployers may request redaction of sensitive and protected information through a process to be established by AG rulemaking. This effectively creates a public transparency obligation — while the filing obligation is to the AG, the public database means the substantive content of reports and audits will be publicly available in redacted form.
5. The attorney general shall:
(a) promulgate rules for a process whereby developers and deployers may request redaction of portions of reports required under this section to ensure that they are not required to disclose sensitive and protected information; and
(b) maintain an online database that is accessible to the general public with reports, redacted in accordance with this subdivision, and audits required by this article, which database shall be updated biannually.
Pending 2025-10-11
G-02.1
GBL § 1551(2)(a)-(d)
Plain Language
Developers must provide deployers and other downstream developers with comprehensive documentation covering: foreseeable and harmful uses, training data summaries, known limitations and discrimination risks, system purpose and intended benefits, pre-deployment performance and bias evaluation methods, data governance measures, intended outputs, discrimination mitigation measures, usage and monitoring guidance, and any additional documentation necessary for downstream compliance. This documentation must be made available beginning January 1, 2027, subject to a trade secret and security risk carve-out under subdivision 5.
2. Beginning on January first, two thousand twenty-seven, and except as provided in subdivision five of this section, a developer of a high-risk artificial intelligence decision system shall make available to each deployer or other developer the following information: (a) A general statement describing the reasonably foreseeable uses, and the known harmful or inappropriate uses, of such high-risk artificial intelligence decision system; (b) Documentation disclosing: (i) high-level summaries of the type of data used to train such high-risk artificial intelligence decision system; (ii) the known or reasonably foreseeable limitations of such high-risk artificial intelligence decision system, including, but not limited to, the known or reasonably foreseeable risks of algorithmic discrimination arising from the intended uses of such high-risk artificial intelligence decision system; (iii) the purpose of such high-risk artificial intelligence decision system; (iv) the intended benefits and uses of such high-risk artificial intelligence decision system; and (v) any other information necessary to enable such deployer or other developer to comply with the provisions of this article; (c) Documentation describing: (i) how such high-risk artificial intelligence decision system was evaluated for performance, and mitigation of algorithmic discrimination, before such high-risk artificial intelligence decision system was offered, sold, leased, licensed, given, or otherwise made available to such deployer or other developer; (ii) the data governance measures used to cover the training datasets and examine the suitability of data sources, possible biases, and appropriate mitigation; (iii) the intended outputs of such high-risk artificial intelligence decision system; (iv) the measures such deployer or other developer has taken to mitigate any known or reasonably foreseeable risks of algorithmic discrimination that may arise from deployment of such high-risk artificial intelligence decision system; and (v) how such high-risk artificial intelligence decision system should be used, not be used, and be monitored by an individual when such high-risk artificial intelligence decision system is used to make, or as a substantial factor in making, a consequential decision; and (d) Any additional documentation that is reasonably necessary to assist a deployer or other developer to: (i) understand the outputs of such high-risk artificial intelligence decision system; and (ii) monitor the performance of such high-risk artificial intelligence decision system for risks of algorithmic discrimination.
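Taken together, § 1551(2)(a)-(d) reads like a model-card checklist. The sketch below (Python; all field names are our own labels for the statutory categories, not defined terms) shows one way a developer might structure the package and sanity-check it for completeness before handing it to a deployer:

```python
from dataclasses import dataclass

# A minimal sketch of the developer-to-deployer documentation package
# under GBL § 1551(2). Field names are illustrative, not statutory.
@dataclass
class HighRiskSystemDocs:
    foreseeable_uses: str               # (a): reasonably foreseeable uses
    known_harmful_uses: str             # (a): known harmful/inappropriate uses
    training_data_summary: str          # (b)(i)
    known_limitations: str              # (b)(ii), incl. discrimination risks
    purpose: str                        # (b)(iii)
    intended_benefits_and_uses: str     # (b)(iv)
    performance_evaluation: str         # (c)(i): pre-release evaluation method
    data_governance_measures: str       # (c)(ii)
    intended_outputs: str               # (c)(iii)
    discrimination_mitigations: str     # (c)(iv)
    usage_and_monitoring_guidance: str  # (c)(v)
    supplemental_docs: list[str]        # (d): anything else deployers need

    def missing_fields(self) -> list[str]:
        """Names of any empty narrative fields — a quick completeness
        check before the package is made available downstream."""
        return [name for name, value in vars(self).items()
                if isinstance(value, str) and not value.strip()]
```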
Pending 2025-10-11
G-02.4
GBL § 1551(4)(a)-(b)
Plain Language
Developers must publish on their website or a public use case inventory a clear summary of the types of high-risk AI decision systems they currently make available and how they manage known or foreseeable risks of algorithmic discrimination. This statement must be kept current and updated within 90 days of any intentional and substantial modification to a covered system. Continuous-learning changes that were predetermined and documented in the initial impact assessment do not trigger this update obligation.
4. (a) Beginning on January first, two thousand twenty-seven, each developer shall publish, in a manner that is clear and readily available, on such developer's website, or a public use case inventory, a statement summarizing: (i) the types of high-risk artificial intelligence decision systems that such developer: (A) has developed or intentionally and substantially modified; and (B) currently makes available to a deployer or other developer; and (ii) how such developer manages any known or reasonably foreseeable risks of algorithmic discrimination that may arise from the development or intentional and substantial modification of the types of high-risk artificial intelligence decision systems described in subparagraph (i) of this subdivision. (b) Each developer shall update the statement described in paragraph (a) of this subdivision: (i) as necessary to ensure that such statement remains accurate; and (ii) no later than ninety days after the developer intentionally and substantially modifies any high-risk artificial intelligence decision system described in subparagraph (i) of paragraph (a) of this subdivision.
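The ninety-day window in § 1551(4)(b)(ii) is straightforward to operationalize. A minimal sketch, assuming the statute means calendar days (it does not say otherwise):

```python
from datetime import date, timedelta

UPDATE_WINDOW_DAYS = 90  # GBL § 1551(4)(b)(ii)

def statement_update_deadline(modified_on: date) -> date:
    """Latest date the public use-case statement may be updated after an
    intentional and substantial modification (calendar-day assumption)."""
    return modified_on + timedelta(days=UPDATE_WINDOW_DAYS)

# e.g. a system substantially modified on 2027-03-01 needs its statement
# refreshed no later than 2027-05-30:
assert statement_update_deadline(date(2027, 3, 1)) == date(2027, 5, 30)
```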
Pending 2025-10-11
G-02.4
GBL § 1552(6)(a)-(b)
Plain Language
Deployers must publish and maintain on their website a clear, readily available statement summarizing: the types of high-risk AI decision systems they currently deploy, how they manage known or foreseeable algorithmic discrimination risks for each system, and the nature, source, and extent of information they collect and use. The statement must be periodically updated. Deployers meeting the subdivision 7 conditions are exempt.
6. (a) Beginning on January first, two thousand twenty-seven, and except as provided in subdivision seven of this section, each deployer shall make available, in a manner that is clear and readily available on such deployer's website, a statement summarizing: (i) the types of high-risk artificial intelligence decision systems that are currently deployed by such deployer; (ii) how such deployer manages any known or reasonably foreseeable risks of algorithmic discrimination that may arise from deployment of each high-risk artificial intelligence decision system described in subparagraph (i) of this paragraph; and (iii) in detail, the nature, source and extent of the information collected and used by such deployer. (b) Each deployer shall periodically update the statement required pursuant to paragraph (a) of this subdivision.
Pending 2025-10-11
G-02.1
GBL § 1553(1)(b)
Plain Language
Developers of general-purpose AI models must create, maintain, and make available to downstream integrators documentation enabling them to understand the model's capabilities and limitations, comply with their own obligations under this article, and integrate the model technically. The documentation must disclose the technical means of integration and all of the model-level information required in the technical documentation (tasks, target systems, acceptable use policies, release date, distribution methods, I/O formats), and it must be reviewed and revised at least annually, or more frequently as needed to keep it accurate.
(b) create, implement, maintain and make available to persons that intend to integrate such general-purpose artificial intelligence model into such persons' artificial intelligence decision systems documentation and information that: (i) enables such persons to: (A) understand the capabilities and limitations of such general-purpose artificial intelligence model; and (B) comply with such persons' obligations pursuant to this article; (ii) discloses, at a minimum: (A) the technical means required for such general-purpose artificial intelligence model to be integrated into such persons' artificial intelligence decision systems; (B) the information listed in subparagraph (ii) of paragraph (a) of this subdivision; and (iii) except as provided in subdivision two of this section, is reviewed and revised at least annually, or more frequently, as necessary to maintain the accuracy of such documentation and information.
Pending
G-02.1
S.C. Code § 37-31-20(B)
Plain Language
Developers must provide deployers (and other downstream developers) with comprehensive documentation covering: intended and harmful uses, training data summaries, known limitations and discrimination risks, purpose and benefits, pre-deployment bias evaluation methodology, data governance measures, intended outputs, discrimination mitigation steps, and human oversight guidance. This is a developer-to-deployer disclosure obligation — not a public-facing requirement. The trade secret exception in subsection (F) applies.
(B) Except as provided in subsection (F), a developer of a high-risk artificial intelligence system shall make available to the deployer or other developer of the high-risk artificial intelligence system: (1) a general statement describing the reasonably foreseeable uses and known harmful or inappropriate uses of the high-risk artificial intelligence system; (2) documentation disclosing: (a) high-level summaries of the type of data used to train the high-risk artificial intelligence system; (b) known or reasonably foreseeable limitations of the high-risk artificial intelligence system, including known or reasonably foreseeable risks of algorithmic discrimination arising from the intended uses of the high-risk artificial intelligence system; (c) the purpose of the high-risk artificial intelligence system; (d) the intended benefits and uses of the high-risk artificial intelligence system; and (e) all other information necessary to allow the deployer to comply with the requirements of Section 37-31-30; (3) documentation describing: (a) how the high-risk artificial intelligence system was evaluated for performance and mitigation of algorithmic discrimination before the high-risk artificial intelligence system was offered, sold, leased, licensed, given, or otherwise made available to the deployer; (b) the data governance measures used to cover the training datasets and the measures used to examine the suitability of data sources, possible biases, and appropriate mitigation; (c) the intended outputs of the high-risk artificial intelligence system; (d) the measures the developer has taken to mitigate known or reasonably foreseeable risks of algorithmic discrimination that may arise from the reasonably foreseeable deployment of the high-risk artificial intelligence system; and (e) how the high-risk artificial intelligence system should be used, not be used, and be monitored by an individual when the high-risk artificial intelligence system is used to make, or is a substantial factor in making, a consequential decision; and (4) any additional documentation that is reasonably necessary to assist the deployer in understanding the outputs and monitor the performance of the high-risk artificial intelligence system for risks of algorithmic discrimination.
Pending
G-02.1
S.C. Code § 37-31-20(C)
Plain Language
Developers must supply deployers with the documentation — such as model cards, dataset cards, or impact assessments — needed for the deployer to complete its own impact assessment. This obligation is qualified by feasibility. A developer that is also the sole deployer of a system does not need to generate this documentation unless the system is provided to an unaffiliated deployer.
(C)(1) Except as provided in subsection (F), a developer that offers, sells, leases, licenses, gives, or otherwise makes available to a deployer or other developer a high-risk artificial intelligence system shall make available to the deployer or other developer, to the extent feasible, the documentation and information, through artifacts such as model cards, dataset cards, or other impact assessments, necessary for a deployer, or for a third party contracted by a deployer, to complete an impact assessment pursuant to Section 37-31-30(C). (2) A developer that also serves as a deployer for a high-risk artificial intelligence system is not required to generate the documentation required by this section unless the high-risk artificial intelligence system is provided to an unaffiliated entity acting as a deployer.
Pending
G-02.4
S.C. Code § 37-31-20(D)
Plain Language
Developers must publicly post on their website or in a public-use case inventory a summary of the types of high-risk AI systems they make available and how they manage discrimination risks. This statement must be kept current and updated within 90 days of any intentional and substantial modification to a covered system. This is a public-facing transparency obligation distinct from the deployer-facing documentation in subsection (B).
(D)(1) A developer shall make available, in a manner that is clear and readily available on the developer's website or in a public-use case inventory, a statement summarizing: (a) the types of high-risk artificial intelligence systems that the developer has developed or intentionally and substantially modified and currently makes available to a deployer or other developer; and (b) how the developer manages known or reasonably foreseeable risks of algorithmic discrimination that may arise from the development or intentional and substantial modification of the types of high-risk artificial intelligence systems described in accordance with item (1)(a). (2) A developer shall update the statement described in item (1): (a) as necessary to ensure that the statement remains accurate; and (b) no later than ninety days after the developer intentionally and substantially modifies any high-risk artificial intelligence system described in item (1)(a).
Pending
G-02.4
S.C. Code § 37-31-30(E)
Plain Language
Deployers must publish and periodically update on their website a clear summary of: the types of high-risk AI systems they deploy, how they manage algorithmic discrimination risks for each, and detailed information about the nature, source, and extent of information collected and used. The small deployer exemption in subsection (F) applies.
(E)(1) Except as provided in subsection (F), a deployer shall make available, in a manner that is clear and readily available on the deployer's website, a statement summarizing: (a) the types of high-risk artificial intelligence systems that are currently deployed by the deployer; (b) how the deployer manages known or reasonably foreseeable risks of algorithmic discrimination that may arise from the deployment of each high-risk artificial intelligence system described pursuant to subitem (a); and (c) in detail, the nature, source, and extent of the information collected and used by the deployer. (2) A deployer shall periodically update the statement described in item (1) of this section.
Pending 2027-01-01
G-02.4
§ 59.1-619(B)
Plain Language
Operators must publish the findings of any safety testing they conduct to comply with the minor-safety requirements of § 59.1-615. This is a public disclosure obligation — the results of safety testing aimed at preventing harmful chatbot interactions with minors must be made available to the public. The statute does not specify the format, timing, or level of detail required in the published findings.
B. Operators shall publish safety test findings for any safety testing conducted in furtherance of § 59.1-615.
Pending 2027-07-01
G-02.1
§ 59.1-615(A)-(B)
Plain Language
Developers of base AI models must publish seven enumerated items clearly and conspicuously in the model's terms of service: the model name, developer name, developer's incorporation location, most recent version release date, training data update date, supported languages, and a link to the terms of service. The disclosure must be appropriate for the medium and easily accessible to users. Importantly, making this disclosure does not insulate the developer from liability — subsection B explicitly states that providing the disclosure is not a defense to harm claims.
A. A developer of a base artificial intelligence model shall clearly and conspicuously disclose, in a manner that is appropriate for the medium of the content and is easily accessible to the user of such model, in the terms of service governing the use of such model: 1. The name of the model; 2. The developer of the model; 3. The location where the developer is incorporated; 4. The release date of the most recent version of the model; 5. The date that the model's training data was most recently updated; 6. Supported languages for the model; and 7. A link to the model's terms of service. B. The provision of such disclosure to a user shall not be a defense to liability for any harm caused to a plaintiff.
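Because subsection A enumerates exactly seven items, the disclosure lends itself to a fixed schema. The sketch below uses hypothetical keys and values — none are statutory terms — to illustrate a completeness check; per subsection B, passing such a check is a compliance floor, not a liability shield:

```python
# Hypothetical schema for the seven items § 59.1-615(A) requires in a
# base model's terms of service. Keys and values are illustrative only.
BASE_MODEL_TOS_DISCLOSURE = {
    "model_name": "ExampleModel",
    "developer": "Example AI, Inc.",
    "incorporation_location": "Delaware, USA",
    "latest_version_release_date": "2026-11-01",
    "training_data_last_updated": "2026-08-15",
    "supported_languages": ["en", "es", "fr"],
    "terms_of_service_url": "https://example.com/tos",
}

REQUIRED_KEYS = set(BASE_MODEL_TOS_DISCLOSURE)

def is_complete(disclosure: dict) -> bool:
    """True if all seven enumerated items are present and non-empty.
    Note § 59.1-615(B): completeness is not a defense to liability."""
    return all(disclosure.get(key) for key in REQUIRED_KEYS)
```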
Pending 2025-07-01
G-02.4
9 V.S.A. § 4193f(e)(2)
Plain Language
The Attorney General must maintain a publicly accessible online database containing the filed reports and audit results required by this subchapter, updated biannually. Reports may be redacted under rules the AG adopts to protect sensitive and protected information. While this is primarily an AG obligation, it has compliance implications for developers and deployers because their filed reports and audit results will be public — they should prepare filings with the understanding that they will be disclosed (subject to approved redactions).
(e) The Attorney General shall: (2) maintain an online database that is accessible to the general public with reports, redacted in accordance with this section, and audits required by this subchapter, which shall be updated biannually.
Pre-filed 2026-07-01
G-02.4
9 V.S.A. § 4193c(d)
Plain Language
Chatbot providers must publish information about their chatbot on their website on a monthly basis. The specific categories of information to be disclosed will be defined by AG rulemaking under § 4193d(a)(3). This is a recurring public transparency obligation — not a one-time publication — requiring monthly updates.
(d) Chatbot information. A chatbot provider shall make information about its chatbot publicly available on its website on a monthly basis as set forth in rules adopted by the Attorney General pursuant to this subchapter.
Pending 2027-01-01
G-02.1
Sec. 2(2)(a)-(c)
Plain Language
Before making a high-risk AI system available to any deployer or downstream developer, the developer must provide comprehensive documentation covering: intended uses, known limitations and discrimination risks, purpose and intended outputs, a summary of pre-deployment performance and bias evaluations, mitigation measures taken, usage guidelines (including what the system should and should not be used for and how humans should monitor it), and any additional documentation reasonably necessary for the deployer to understand outputs and monitor for discrimination. This is a condition precedent to distribution — the developer may not provide the system without first making this documentation available.
(2) A developer of a high-risk artificial intelligence system may not offer, sell, lease, give, or otherwise provide to a deployer or other developer a high-risk artificial intelligence system unless the developer makes available to the deployer or other developer: (a) A statement disclosing the intended uses of such high-risk artificial intelligence system; (b) Documentation disclosing the following: (i) The known or reasonably known limitations of such high-risk artificial intelligence system, including any and all known or reasonably foreseeable risks of algorithmic discrimination arising from the intended uses of such high-risk artificial intelligence system; (ii) The purpose of such high-risk artificial intelligence system and its intended outputs, benefits, and uses; (iii) A summary describing how such high-risk artificial intelligence system was evaluated for performance and for mitigation of algorithmic discrimination before it was licensed, sold, leased, given, or otherwise made available to a deployer or other developer; (iv) A description of the measures the developer has taken to mitigate known or reasonably foreseeable risks of algorithmic discrimination that may arise from the reasonably foreseeable deployment or use of such high-risk artificial intelligence system; and (v) A description of how the high-risk artificial intelligence system should be used, not be used, and be monitored by an individual when such system is used to make, or is a substantial factor in making, a consequential decision; and (c) Any additional documentation that is reasonably necessary to assist the deployer or other developer in understanding the outputs and monitoring performance of the high-risk artificial intelligence system for risks of algorithmic discrimination.
Pending 2027-01-01
G-02.1
Sec. 2(3)
Plain Language
Developers must provide deployers (or their contracted third parties) with sufficient documentation to complete the deployer's required impact assessment. This includes artifacts such as system cards, predeployment impact assessments, and relevant risk management policies. The obligation is qualified by feasibility and necessity, but the developer bears the obligation to make the information available — the deployer should not have to request it.
(3) A developer that offers, sells, leases, gives, or otherwise makes available to a deployer or other developer a high-risk artificial intelligence system shall make available to the deployer or other developer to the extent feasible and necessary, information and documentation to enable the deployer, other developer, or a third party contracted by the deployer to complete an impact assessment required by section 3(3) of this act. Such information and documentation must include artifacts, such as system cards or predeployment impact assessments, including relevant risk management policies and impact assessments.
Pending 2027-01-01
G-02.4
Sec. 3(6)
Plain Language
Deployers must make publicly available a clear summary of how they manage foreseeable algorithmic discrimination risks for each high-risk AI system they deploy. 'Readily available' implies public accessibility — not merely available upon request. This is a standalone public transparency obligation separate from the individual consumer disclosures required by Section 3(4) and the impact assessment documentation required by Section 3(3).
(6) A deployer shall make readily available a clear statement summarizing how the deployer manages any reasonably foreseeable risk of algorithmic discrimination that may arise from the use or deployment of the high-risk artificial intelligence system.
Pending 2027-01-01
G-02.1
Sec. 3(8)
Plain Language
When a deployer itself performs an intentional and substantial modification to a high-risk AI system — as opposed to receiving a modified system from a developer — the deployer steps into the developer's shoes and must comply with all of the documentation and disclosure obligations that Section 2 imposes on developers. This ensures that whoever modifies the system in a material way bears the documentation burden, regardless of whether they are formally classified as a developer or deployer.
(8) A deployer who performs an intentional and substantial modification to a high-risk artificial intelligence system shall comply with the documentation and disclosure requirements for developers pursuant to section 2 of this act.
Pending 2027-01-01
G-02.1
Sec. 2(3)
Plain Language
Developers must provide deployers with the information and artifacts — such as system cards, pre-deployment impact assessments, and risk management policies — that the deployer needs to complete its own impact assessment under Section 3(3). This obligation is scoped by feasibility and necessity. The intent is to prevent deployers from being unable to comply with their impact assessment obligations because the developer withheld upstream documentation.
(3) A developer that offers, sells, leases, gives, or otherwise makes available to a deployer or other developer a high-risk artificial intelligence system shall make available to the deployer or other developer to the extent feasible and necessary, information and documentation to enable the deployer, other developer, or a third party contracted by the deployer to complete an impact assessment required by section 3(3) of this act. Such information and documentation must include artifacts, such as system cards or predeployment impact assessments, including relevant risk management policies and impact assessments.
Pending 2027-01-01
G-02.4
Sec. 3(6)
Plain Language
Deployers must make a publicly accessible, clear summary statement describing how they manage algorithmic discrimination risks from their high-risk AI systems. This is a standalone public transparency obligation — separate from the impact assessment (which is internal/retained documentation) and the consumer-facing pre-decision disclosures. The statement must be 'readily available,' suggesting publication on a website or similar public channel.
(6) A deployer shall make readily available a clear statement summarizing how the deployer manages any reasonably foreseeable risk of algorithmic discrimination that may arise from the use or deployment of the high-risk artificial intelligence system.