Developers of AI systems must publish standardized documentation describing model capabilities, limitations, intended uses, safety measures, and risk assessments. The primary audience is the public and downstream deployers; this is distinct from confidential regulatory submissions. Publication must occur before or at deployment and must be kept current.
(a) A health facility, clinic, physician's office, or office of a group practice that uses or deploys a covered tool for patient care shall disclose required information, described in subdivision (b), to any licensed health care professional or other person using a covered tool or viewing outputs from a covered tool. (b) Required information under subdivision (a) shall include all of the following: (1) Details on the covered tool, including developer, funding source, any foundation model used, and description of output. (2) Intended use of the covered tool, including intended patient population, intended users, and intended decisionmaking role. (3) Cautioned out-of-scope use of the covered tool, including known risks and limitations. (4) List of the inputs into the covered tool. (5) Description of how the covered tool generates outputs. (6) Development details of the covered tool, including, but not limited to, all of the following: (A) Description of the training set or clinical research underlying recommendations, including demographic representativeness and known biases based on protected characteristics. (B) Description of the relevance of training data to deployed setting. (C) Process used to ensure fairness in development of the intervention. (7) Description of the validation process. (8) Qualitative measures of performance. (9) Description of ongoing maintenance of intervention implementation and use. (10) Description of updates and continued validation or fairness assessment process. (11) Notice that health care entities and developers are liable for harm that results from the use of artificial intelligence in patient care. (12) Notice that a worker providing direct patient care is permitted to override the output of a covered tool if, in the judgment of the worker acting in their scope of practice, such an override is appropriate for the patient, or as necessary to comply with applicable law, including civil rights law. 
(c) (1) A disclosure made pursuant to this section shall be provided at the time the licensed health care professional or other person uses the covered tool or views any recommendation or output generated by the covered tool. (2) The disclosure shall be provided in plain language to, and linked in the health record of, any patient whose care was affected by the output of the covered tool or whose health information was used as an input to the covered tool. (3) The disclosure shall be provided with ample time for the licensed health care professional or other person to review and make reasoned decisions based on their professional judgment on whether and how to use the covered tool.
Publish on its internet website, and update as needed to ensure accuracy, a child safety policy.
(c) (1) A developer shall make available to deployers and potential deployers the statements included in the developer's impact assessment pursuant to paragraph (2).
(b) A deployer shall make available on its internet website a statement summarizing all of the following: (1) The types of high-risk automated decision systems it currently deploys. (2) How the deployer manages known or reasonably foreseeable risks of algorithmic discrimination arising from the deployment of those high-risk automated decision systems. (3) The nature and source of the information collected and used by the high-risk automated decision systems deployed by the deployer.
(c) (1) Before, or concurrently with, deploying a new frontier model or a substantially modified version of an existing frontier model, a frontier developer shall clearly and conspicuously publish on its internet website a transparency report containing all of the following: (A) The internet website of the frontier developer. (B) A mechanism that enables a natural person to communicate with the frontier developer. (C) The release date of the frontier model. (D) The languages supported by the frontier model. (E) The modalities of output supported by the frontier model. (F) The intended uses of the frontier model. (G) Any generally applicable restrictions or conditions on uses of the frontier model. (3) A frontier developer that publishes the information described in paragraph (1) or (2) as part of a larger document, including a system card or model card, shall be deemed in compliance with the applicable paragraph. (4) A frontier developer is encouraged, but not required, to make disclosures described in this subdivision that are consistent with, or superior to, industry best practices.
(2) Before, or concurrently with, deploying a new frontier model or a substantially modified version of an existing frontier model, a large frontier developer shall include in the transparency report required by paragraph (1) summaries of all of the following: (A) Assessments of catastrophic risks from the frontier model conducted pursuant to the large frontier developer's frontier AI framework. (B) The results of those assessments. (C) The extent to which third-party evaluators were involved. (D) Other steps taken to fulfill the requirements of the frontier AI framework with respect to the frontier model.
(1) When a frontier developer publishes documents to comply with this section, the frontier developer may make redactions to those documents that are necessary to protect the frontier developer's trade secrets, the frontier developer's cybersecurity, public safety, or the national security of the United States or to comply with any federal or state law. (2) If a frontier developer redacts information in a document pursuant to this subdivision, the frontier developer shall describe the character and justification of the redaction in any published version of the document to the extent permitted by the concerns that justify redaction and shall retain the unredacted information for five years.
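The transparency-report contents enumerated in these frontier-model provisions map naturally onto a machine-readable structure. A minimal sketch in Python, with the caveat that the statutes mandate the information, not any particular schema; the field names below are illustrative assumptions, not statutory terms:

```python
# Hypothetical machine-readable transparency report mirroring the
# statutory items (A)-(G); field names are illustrative, not mandated.
REQUIRED_FIELDS = [
    "developer_website",    # (A) internet website of the frontier developer
    "contact_mechanism",    # (B) way for a natural person to reach the developer
    "release_date",         # (C) release date of the frontier model
    "supported_languages",  # (D) languages supported by the model
    "output_modalities",    # (E) modalities of output supported
    "intended_uses",        # (F) intended uses of the model
    "use_restrictions",     # (G) generally applicable restrictions or conditions
]

def missing_fields(report: dict) -> list[str]:
    """Return the statutory items absent or empty in a draft report."""
    return [f for f in REQUIRED_FIELDS if not report.get(f)]

report = {
    "developer_website": "https://example.com",
    "contact_mechanism": "mailto:contact@example.com",
    "release_date": "2026-01-15",
    "supported_languages": ["en", "es"],
    "output_modalities": ["text"],
    "intended_uses": "General-purpose assistant",
    "use_restrictions": "No use for autonomous targeting",
}
print(missing_fields(report))  # → []
```

A structure like this also makes the "larger document" safe harbor easy to honor: the same fields can be embedded in a system card or model card without a separate filing.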
(4) (a) On and after June 30, 2026, a developer shall make available, in a manner that is clear and readily available on the developer's website or in a public use case inventory, a statement summarizing:
(5) (a) On and after June 30, 2026, and except as provided in subsection (6) of this section, a deployer shall make available, in a manner that is clear and readily available on the deployer's website, a statement summarizing:
Any production company deploying artificial intelligence systems for use in production in this state shall: (1) Not later than December 31, 2027, and annually thereafter, conduct an inventory of all systems that employ artificial intelligence and are in use and publish such inventory on a publicly accessible website. Each inventory shall include, but not be limited to, the following information for each artificial intelligence system: (A) The name of such system and the vendor, if any, that provided such system; (B) A description of the general capabilities and uses of such system; (C) The manner in which such system is able to be used to independently make, inform, or materially support a conclusion, decision, or judgment; and (D) The manner in which such system underwent an impact assessment prior to implementation;
(1) A developer shall make available to the public, in a manner that is clear and readily available on the developer's public website or in a public use case inventory, a statement summarizing: (A) The types of automated decision systems that the developer has developed or intentionally and substantially modified and currently makes available to a deployer or other developer; and (B) How the developer manages known or reasonably foreseeable risks of algorithmic discrimination. (2) A developer shall update the statement described in paragraph (1) of this subsection: (A) As necessary to ensure that the statement remains accurate; and (B) No later than 90 days after the developer intentionally and substantially modifies any automated decision system described in such statement.
(a) Except as provided in Code Section 10-16-6, a deployer shall make available, in a manner that is clear and readily available on the deployer's public website, a statement summarizing: (1) The types of automated decision systems that are currently deployed by the deployer; (2) How the deployer manages known or reasonably foreseeable risks of algorithmic discrimination that may arise from the deployment of each such automated decision system; and (3) In detail, the nature, source, and extent of the information collected and used by the deployer. (b) A deployer shall periodically update the statement described in subsection (a) of this Code section.
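Update obligations like those above typically combine an open-ended accuracy duty with a hard deadline after a substantial modification (ninety days in the developer provision preceding this one). That deadline reduces to a simple date calculation; the sketch below assumes a 90-day window and uses illustrative names:

```python
from datetime import date, timedelta

# Assumed statutory window: statement must be updated within 90 days
# of a substantial modification (per the developer provision above).
UPDATE_DEADLINE = timedelta(days=90)

def statement_overdue(last_modification: date,
                      last_statement_update: date,
                      today: date) -> bool:
    """True if the public statement has not been updated within the
    window following the most recent substantial modification."""
    if last_statement_update >= last_modification:
        return False  # statement already reflects the modification
    return today > last_modification + UPDATE_DEADLINE

# A system substantially modified on March 1 whose statement was last
# updated in February is overdue once more than 90 days have passed.
print(statement_overdue(date(2026, 3, 1), date(2026, 2, 1), date(2026, 7, 1)))  # → True
```

The accuracy-driven duty ("as necessary to ensure that the statement remains accurate") is not date-based and would still require review on any material change, regardless of this calculation.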
(a) The Illinois Department of Innovation and Technology shall adopt rules to ensure that a business using an AI system in Illinois publishes, on the business's official Internet website accessible to the public, a report explaining compliance with the 5 principles of AI governance enumerated in this Act. This report shall: (1) be updated annually and whenever significant changes are made to the AI system, such as modifications to algorithms, substantial alterations to data inputs, or shifts in operational contexts; additional categories of significant change shall be established by the Department of Innovation and Technology; (2) include information on the design, major decisions made during the design process (such as testing metrics), training data, risk mitigation strategies, and any impact assessments conducted; and (3) be written in plain language to ensure accessibility for the general public, while also providing a more detailed explanation for specialized audiences; this 2-level approach ensures clarity for everyone while offering enough depth for those who may need to understand or challenge the system or its outputs, such as in cases of fairness or discrimination.
(d) The employer shall notify affected employees and any exclusive bargaining representative of the results of each impact assessment, and shall provide a copy of the impact assessment upon request. (e) Each impact assessment shall be published on the employer's website, subject to the limitations set forth in Section 20.
A deployer shall make publicly available, in a readily accessible manner, a clear policy that provides a summary of both of the following: (1) the types of automated decision tools currently in use or made available to others by the deployer; and (2) how the deployer manages the reasonably foreseeable risks of algorithmic discrimination that may arise from the use of the automated decision tools it currently uses or makes available to others.
(b) Documentation Requirements: Developers must provide deployers with: (1) A summary of intended and foreseeable uses of the AI system; (2) Known limitations and risks, including algorithmic discrimination; (3) Information on the datasets used for training, including measures taken to mitigate biases.
(d) Public Statement: Developers must publish a plain-language summary on their website, detailing: (1) Types of AI systems they develop; (2) Measures to mitigate algorithmic discrimination; (3) Contact information for inquiries.
(d) Transparency: Deployers must publicly disclose the types of high-risk AI systems in use and their risk mitigation strategies.
(a) Disclosure of AI Use: Any corporation operating in Massachusetts that uses artificial intelligence systems or related tools to target specific consumer groups or influence behavior must disclose: (1) Purpose of AI Use: The methods, purposes, and contexts in which AI systems are used to identify or target specific classes of individuals; (2) Behavioral Influence: The specific ways in which AI tools are designed to influence consumer behavior; (3) Third-Party Partnerships: Details of any third-party entities involved in the design, deployment, or operation of AI systems used for targeting or behavioral influence. Proprietary information will be safeguarded and exempt from public disclosure under state confidentiality laws. (b) Public Disclosure Requirements: Corporations must make these disclosures: (1) Publicly available on their website in a manner that is easily accessible and comprehensible; (2) Included in terms and conditions provided to consumers prior to significant interaction with an AI system.
(c) (1) Except as provided in subsection (f) of this section, a developer that offers, sells, leases, licenses, gives, or otherwise makes available to a deployer or other developer a high-risk artificial intelligence system shall, not later than 6 months after the effective date of this act, make available to the deployer or other developer, to the extent feasible, the documentation and information, through artifacts such as model cards, dataset cards, or other impact assessments, necessary for a deployer, or for a third party contracted by a deployer, to complete an impact assessment pursuant to section 3 (c). (2) A developer that also serves as a deployer for a high-risk artificial intelligence system is not required to generate the documentation required by this section unless the high-risk artificial intelligence system is provided to an unaffiliated entity acting as a deployer.
(d) (1) Not later than 6 months after the effective date of this act, a developer shall make available, in a manner that is clear and readily available on the developer's website or in a public use case inventory, a statement summarizing: (i) the types of high-risk artificial intelligence systems that the developer has developed or intentionally and substantially modified and currently makes available to a deployer or other developer; and (ii) how the developer manages known or reasonably foreseeable risks of algorithmic discrimination that may arise from the development or intentional and substantial modification of the types of high-risk artificial intelligence systems described in accordance with subsection (d)(1)(i) of this section. (2) A developer shall update the statement described in subsection (d)(1) of this section: (i) as necessary to ensure that the statement remains accurate; and (ii) no later than ninety days after the developer intentionally and substantially modifies any high-risk artificial intelligence system described in subsection (d)(1)(i) of this section.
(e) (1) Not later than 6 months after the effective date of this act, and except as provided in subsection (f) of this section, a deployer shall make available, in a manner that is clear and readily available on the deployer's website, a statement summarizing: (i) the types of high-risk artificial intelligence systems that are currently deployed by the deployer; (ii) how the deployer manages known or reasonably foreseeable risks of algorithmic discrimination that may arise from the deployment of each high-risk artificial intelligence system described pursuant to subsection (e)(1)(i) of this section; and (iii) in detail, the nature, source, and extent of the information collected and used by the deployer. (2) A deployer shall periodically update the statement described in subsection (e)(1) of this section.
(c) Not less than once every 90 days, produce and conspicuously publish a transparency report that covers the period of 120 days before the publishing of the report to 30 days before the publishing of the report that includes all of the following information: (i) The conclusion of any risk assessments made during the reporting period in accordance with the safety and security protocol under subdivision (a). (ii) If different from the preceding reporting period, for each type of critical risk, an assessment of the relevant capability of the foundation model to create that critical risk of whichever of the large developer's foundation models, whether deployed or not, would pose the highest level of that critical risk if deployed without adequate safeguards and protections. (iii) If, during the reporting period, the large developer has deployed or modified a foundation model that would pose a higher level of critical risk than any of the large developer's existing deployed foundation models if deployed without adequate safeguards and protections, both of the following: (A) The grounds on which and the process by which the large developer decided to deploy the foundation model. (B) Any safeguards and protections implemented by the large developer to mitigate critical risks.
(2) If a large frontier developer or large chatbot provider makes a material modification to its public safety and child protection plan, the large frontier developer or large chatbot provider shall clearly and conspicuously publish on such developer's or provider's website the modified public safety and child protection plan and a justification for such modification within thirty days after such material modification.
(3) Before, or concurrently with, integrating a new foundation model, or a version of an existing foundation model that has been substantially modified, into a covered chatbot operated by the large chatbot provider, a large chatbot provider shall conspicuously publish on its website summaries of all of the following: (i) Assessments of child safety risks conducted pursuant to the large chatbot provider's public safety and child protection plan; (ii) The results of such assessments; (iii) The extent to which third-party evaluators were involved in such assessments; and (iv) Other steps taken to fulfill the requirements of the public safety and child protection plan with respect to child safety risks.
(4)(a) Before, or concurrently with, deploying a new frontier model or a version of an existing frontier model that the large frontier developer has substantially modified, a large frontier developer shall conspicuously publish on its website summaries of all of the following: (i) Assessments of catastrophic risks from the frontier model conducted pursuant to the large frontier developer's public safety and child protection plan; (ii) The results of such assessments; (iii) The extent to which third-party evaluators were involved in such assessments; and (iv) Other steps taken to fulfill the requirements of the public safety and child protection plan with respect to catastrophic risks from the frontier model. (b) A large frontier developer that publishes the information described in subdivision (4)(a) of this section as part of a larger document, including a system card or model card, shall be deemed in compliance with this subsection.
(2) Except as otherwise provided in subsection (6) of this section, on and after February 1, 2026, each developer of a high-risk artificial intelligence system shall make available to the deployer or other developer of the high-risk artificial intelligence system: (a) A general statement describing the uses and known harmful or inappropriate uses of the high-risk artificial intelligence system; (b) Documentation disclosing: (i) A high-level summary of the types of data used to train the high-risk artificial intelligence system; (ii) Each known limitation of the high-risk artificial intelligence system, including each known or reasonably foreseeable risk of algorithmic discrimination arising from the intended use of the high-risk artificial intelligence system; (iii) The purpose of the high-risk artificial intelligence system; (iv) Any intended benefit and use of the high-risk artificial intelligence system; and (v) Information necessary to allow the deployer to comply with the requirements of section 4 of this act; (c) Documentation describing: (i) How the high-risk artificial intelligence system was evaluated for performance and mitigation of algorithmic discrimination before the high-risk artificial intelligence system was offered, sold, leased, licensed, given, or otherwise made available to the deployer; (ii) The data governance measures used to cover the training datasets and the measures used to examine the suitability of data sources, possible biases, and appropriate mitigation; (iii) Intended outputs of the high-risk artificial intelligence system; (iv) The measures the developer has taken to mitigate known risks of algorithmic discrimination that could arise from the deployment of the high-risk artificial intelligence system; and (v) How the high-risk artificial intelligence system should be used, not be used, and be monitored by an individual when the high-risk artificial intelligence system is used to make, or is a substantial factor in making, a consequential decision; and (d) Documentation that is reasonably necessary to assist the deployer in understanding each output and monitoring the performance of the high-risk artificial intelligence system for each risk of algorithmic discrimination.
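The developer documentation required by provisions like the one above is essentially a model-card-style artifact. A minimal sketch of such a package as a Python dataclass follows; the field names are assumptions chosen to mirror the statutory categories, not mandated terminology, and the sample values are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class DeveloperDocumentation:
    """Illustrative model-card-style package mirroring the statutory
    categories; field names are assumptions, not statutory terms."""
    general_statement: str              # uses and known harmful/inappropriate uses
    training_data_summary: str          # high-level summary of training data types
    known_limitations: list[str]        # incl. foreseeable discrimination risks
    purpose: str                        # purpose of the system
    intended_benefits_and_uses: str     # intended benefit and use
    evaluation_summary: str             # pre-release performance/bias evaluation
    data_governance_measures: str       # data-source suitability, bias examination
    intended_outputs: str               # intended outputs of the system
    mitigation_measures: list[str]      # steps taken against known risks
    usage_and_monitoring_guidance: str  # how the system should (not) be used

doc = DeveloperDocumentation(
    general_statement="Resume screening; not for medical triage.",
    training_data_summary="De-identified hiring records, 2015-2024.",
    known_limitations=["Lower precision for non-English resumes"],
    purpose="Rank applicants for recruiter review",
    intended_benefits_and_uses="Reduce time-to-shortlist",
    evaluation_summary="Disparate-impact testing across protected classes",
    data_governance_measures="Source vetting and bias audits before training",
    intended_outputs="A 0-100 suitability score per applicant",
    mitigation_measures=["Threshold calibration reviewed by subgroup"],
    usage_and_monitoring_guidance="A human reviewer must confirm any rejection",
)
print(doc.purpose)  # → Rank applicants for recruiter review
```

Packaging the categories as a typed structure makes it straightforward to hand the same artifact to every downstream deployer, which is the statute's point: the deployer needs it to complete its own impact assessment.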
(4)(a) On and after February 1, 2026, a developer shall make a statement summarizing the following available in a manner that is clear and readily available in a public use case inventory: (i) The types of high-risk artificial intelligence systems that the developer has developed and currently makes available to a deployer or other developer; (ii) The types of high-risk artificial intelligence systems that the developer has intentionally and substantially modified and currently makes available to a deployer or other developer; and (iii) How the developer manages known risks of algorithmic discrimination that could arise from the development or intentional and substantial modification of the types of high-risk artificial intelligence systems described in subdivisions (4)(a)(i) and (ii) of this section. (b) A developer shall update the statement described in subdivision (4)(a) of this section: (i) As necessary to ensure that the statement remains accurate; and (ii) No later than ninety days after the developer intentionally and substantially modifies any high-risk artificial intelligence system described in subdivision (4)(a)(ii) of this section.
(5)(a) Except as provided in subsection (6) of this section, on and after February 1, 2026, a deployer shall make a statement with the following information available in a manner that is clear and readily available: (i) The types of high-risk artificial intelligence systems that are currently deployed by the deployer; (ii) How the deployer manages known risks of algorithmic discrimination that may arise from the deployment of the types of high-risk artificial intelligence systems described in subdivision (a)(i) of this subsection; and (iii) A description of the nature, source, and extent of the information collected and used by the deployer. (b) A deployer shall update the statement described in subdivision (a) of this subsection at least once each year.
(3)(a) Except as otherwise provided in subsection (6) of this section, on or after February 1, 2026, a developer that offers, sells, leases, licenses, gives, or otherwise makes any high-risk artificial intelligence system available to a deployer or other developer shall to the extent feasible make available to the deployer or other developer the documentation and information necessary for the deployer or a third party contracted by the deployer to complete an impact assessment pursuant to subsection (3) of section 4 of this act. Such documentation and information includes any model card or other impact assessment. (b) A developer that also serves as a deployer for a high-risk artificial intelligence system is not required to generate the documentation required by this section unless the high-risk artificial intelligence system is provided to an unaffiliated entity acting as a deployer.
Beginning on January first, two thousand twenty-seven, and except as provided in subdivision five of this section, a developer of a high-risk artificial intelligence decision system shall make available to each deployer or other developer the following information: (a) A general statement describing the reasonably foreseeable uses, and the known harmful or inappropriate uses, of such high-risk artificial intelligence decision system; (b) Documentation disclosing: (i) high-level summaries of the type of data used to train such high-risk artificial intelligence decision system; (ii) the known or reasonably foreseeable limitations of such high-risk artificial intelligence decision system, including, but not limited to, the known or reasonably foreseeable risks of algorithmic discrimination arising from the intended uses of such high-risk artificial intelligence decision system; (iii) the purpose of such high-risk artificial intelligence decision system; (iv) the intended benefits and uses of such high-risk artificial intelligence decision system; and (v) any other information necessary to enable such deployer or other developer to comply with the provisions of this article; (c) Documentation describing: (i) how such high-risk artificial intelligence decision system was evaluated for performance, and mitigation of algorithmic discrimination, before such high-risk artificial intelligence decision system was offered, sold, leased, licensed, given, or otherwise made available to such deployer or other developer; (ii) the data governance measures used to cover the training datasets and examine the suitability of data sources, possible biases, and appropriate mitigation; (iii) the intended outputs of such high-risk artificial intelligence decision system; (iv) the measures such developer has taken to mitigate any known or reasonably foreseeable risks of algorithmic discrimination that may arise from deployment of such high-risk artificial intelligence decision system; and (v) how such high-risk artificial intelligence decision system should be used, not be used, and be monitored by an individual when such high-risk artificial intelligence decision system is used to make, or as a substantial factor in making, a consequential decision; and (d) Any additional documentation that is reasonably necessary to assist a deployer or other developer to: (i) understand the outputs of such high-risk artificial intelligence decision system; and (ii) monitor the performance of such high-risk artificial intelligence decision system for risks of algorithmic discrimination.
(a) Beginning on January first, two thousand twenty-seven, each developer shall publish, in a manner that is clear and readily available, on such developer's website, or a public use case inventory, a statement summarizing: (i) the types of high-risk artificial intelligence decision systems that such developer: (A) has developed or intentionally and substantially modified; and (B) currently makes available to a deployer or other developer; and (ii) how such developer manages any known or reasonably foreseeable risks of algorithmic discrimination that may arise from the development or intentional and substantial modification of the types of high-risk artificial intelligence decision systems described in subparagraph (i) of this subdivision. (b) Each developer shall update the statement described in paragraph (a) of this subdivision: (i) as necessary to ensure that such statement remains accurate; and (ii) no later than ninety days after the developer intentionally and substantially modifies any high-risk artificial intelligence decision system described in subparagraph (i) of paragraph (a) of this subdivision.
(a) Beginning on January first, two thousand twenty-seven, and except as provided in subdivision seven of this section, each deployer shall make available, in a manner that is clear and readily available on such deployer's website, a statement summarizing: (i) the types of high-risk artificial intelligence decision systems that are currently deployed by such deployer; (ii) how such deployer manages any known or reasonably foreseeable risks of algorithmic discrimination that may arise from deployment of each high-risk artificial intelligence decision system described in subparagraph (i) of this paragraph; and (iii) in detail, the nature, source and extent of the information collected and used by such deployer. (b) Each deployer shall periodically update the statement required pursuant to paragraph (a) of this subdivision.
(b) create, implement, maintain and make available to persons that intend to integrate such general-purpose artificial intelligence model into such persons' artificial intelligence decision systems documentation and information that: (i) enables such persons to: (A) understand the capabilities and limitations of such general-purpose artificial intelligence model; and (B) comply with such persons' obligations pursuant to this article; (ii) discloses, at a minimum: (A) the technical means required for such general-purpose artificial intelligence model to be integrated into such persons' artificial intelligence decision systems; (B) the information listed in subparagraph (ii) of paragraph (a) of this subdivision; and (iii) except as provided in subdivision two of this section, is reviewed and revised at least annually, or more frequently, as necessary to maintain the accuracy of such documentation and information.
10. Whenever possible, New York residents shall have access to reporting that confirms respect for their data decisions and provides an assessment of the potential impact of surveillance technologies on their rights, opportunities, or access.
6. Summary reporting, including plain language information about these automated systems and assessments of the clarity and quality of notice and explanations, shall be made public whenever possible.
5. Summary reporting, which includes a description of such human governance processes and an assessment of their timeliness, accessibility, outcomes, and effectiveness, shall be made publicly available whenever possible.
§ 514. License provisions and posting. 1. Any license issued under this article shall state the name and address of the licensee, and if the licensee be a co-partnership or association, the names of the members thereof, and if a corporation the date and place of its incorporation. 2. Such license or licenses shall be kept conspicuously posted in the office of the licensee and, where such licensee has a public internet presence, on the website or mobile application of the licensee and shall not be transferable or assignable.
5. The attorney general shall: (a) promulgate rules for a process whereby developers and deployers may request redaction of portions of reports required under this section to ensure that they are not required to disclose sensitive and protected information; and (b) maintain an online database that is accessible to the general public with reports, redacted in accordance with this subdivision, and audits required by this article, which database shall be updated biannually.
(d) document, retain, and provide, in public-facing reporting on such real estate broker's or online housing platform's website, information on compliance with this subdivision and any internal auditing methods used for such compliance.
(iii) not later than thirty days after completion: (A) publish a summary of the evaluation, assessment, or review on the website of the developer or deployer in a manner that is easily accessible to individuals; and (B) submit such summary to the division.
1. Each developer or deployer shall make publicly available, in plain language and in a clear, conspicuous, not misleading, easy-to-read, and readily accessible manner, a disclosure that provides a detailed and accurate representation of the developer or deployer's practices regarding the requirements under this article. 2. The disclosure required under subdivision one of this section shall include, at a minimum, the following: (a) the identity and the contact information of: (i) the developer or deployer to which the disclosure applies (including the developer or deployer's point of contact and electronic and physical mail address, as applicable for any inquiry concerning a covered algorithm or individual rights under this article); and (ii) any other entity within the same corporate structure as the developer or deployer to which personal data is transferred by the developer or deployer. (b) a link to the website containing the developer or deployer's summaries of pre-deployment evaluations, impact assessments, and annual review of assessments, as applicable; (c) the categories of personal data the developer or deployer collects or processes in the development or deployment of a covered algorithm and the processing purpose for each such category; (d) whether the developer or deployer transfers personal data, and, if so, each third party to which the developer or deployer transfers such data and the purpose for which such data is transferred, except with respect to a transfer to a governmental entity pursuant to a court order or law that prohibits the developer or deployer from disclosing such transfer; (e) a prominent description of how an individual can exercise the rights described in this article; (f) a general description of the developer or deployer's practices for compliance with the requirements described in sections one hundred three and one hundred six of this article; (g) the following disclosure: "The audit of this algorithm was conducted to comply 
with the New York Artificial Intelligence Civil Rights Act, which seeks to avoid the use of any algorithm that has a disparate impact on certain protected classes of individuals. The audit does not guarantee that this algorithm is safe or in compliance with all applicable laws."; and (h) the effective date of the disclosure. 3. The disclosure required under this section shall be made available in each covered language in which the developer or deployer operates or provides a good or service. 4. Any disclosure provided under this section shall be made available in a manner that is reasonably accessible to and usable by individuals with disabilities. 5. (a) If a developer or deployer makes a material change to the disclosure required under this section, the developer or deployer shall notify each individual affected by such material change prior to implementing the material change. (b) Each developer or deployer shall take all reasonable measures to provide to each affected individual a direct electronic notification regarding any material change to the disclosure, in each covered language in which the disclosure is made available and taking into account available technology and the nature of the relationship with such individual. (c) (i) Beginning after the effective date of this article, each developer or deployer shall retain a copy of each previous version of the disclosure required under this section for a period of at least ten years after the last day on which such version was effective and publish each such version on its website. Each developer or deployer shall make publicly available, in a clear, conspicuous, and readily accessible manner, a log describing the date and nature of each material change to its disclosure during the retention period, and such descriptions shall be sufficient for a reasonable individual to understand the material effect of each material change. 
(ii) The obligations described in this paragraph shall not apply to any previous version of a developer or deployer's disclosure of practices regarding the collection, processing, and transfer of personal data, or any material change to such disclosure, that precedes the effective date of this article.
2. Beginning on January first, two thousand twenty-seven, and except as provided in subdivision five of this section, a developer of a high-risk artificial intelligence decision system shall make available to each deployer or other developer the following information: (a) A general statement describing the reasonably foreseeable uses, and the known harmful or inappropriate uses, of such high-risk artificial intelligence decision system; (b) Documentation disclosing: (i) high-level summaries of the type of data used to train such high-risk artificial intelligence decision system; (ii) the known or reasonably foreseeable limitations of such high-risk artificial intelligence decision system, including, but not limited to, the known or reasonably foreseeable risks of algorithmic discrimination arising from the intended uses of such high-risk artificial intelligence decision system; (iii) the purpose of such high-risk artificial intelligence decision system; (iv) the intended benefits and uses of such high-risk artificial intelligence decision system; and (v) any other information necessary to enable such deployer or other developer to comply with the provisions of this article; (c) Documentation describing: (i) how such high-risk artificial intelligence decision system was evaluated for performance, and mitigation of algorithmic discrimination, before such high-risk artificial intelligence decision system was offered, sold, leased, licensed, given, or otherwise made available to such deployer or other developer; (ii) the data governance measures used to cover the training datasets and examine the suitability of data sources, possible biases, and appropriate mitigation; (iii) the intended outputs of such high-risk artificial intelligence decision system; (iv) the measures such developer has taken to mitigate any known or reasonably foreseeable risks of algorithmic discrimination that may arise from deployment of such high-risk artificial intelligence
decision system; and (v) how such high-risk artificial intelligence decision system should be used, not be used, and be monitored by an individual when such high-risk artificial intelligence decision system is used to make, or as a substantial factor in making, a consequential decision; and (d) Any additional documentation that is reasonably necessary to assist a deployer or other developer to: (i) understand the outputs of such high-risk artificial intelligence decision system; and (ii) monitor the performance of such high-risk artificial intelligence decision system for risks of algorithmic discrimination.
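The developer documentation enumerated above reads as a model-card-like schema. The Python sketch below mirrors items (a) through (d) of the subdivision; field names are invented for illustration and are not a statutory or standard format:

```python
from dataclasses import dataclass, field

# Hypothetical documentation bundle a developer would hand to a deployer;
# each field corresponds to an item in the subdivision above.
@dataclass
class DeveloperDocumentation:
    # (a) general statement of foreseeable and harmful/inappropriate uses
    foreseeable_uses: str
    known_harmful_uses: str
    # (b) disclosures
    training_data_summary: str
    known_limitations: list[str] = field(default_factory=list)
    discrimination_risks: list[str] = field(default_factory=list)
    purpose: str = ""
    intended_benefits_and_uses: str = ""
    # (c) descriptions
    performance_evaluation: str = ""
    data_governance_measures: str = ""
    intended_outputs: str = ""
    mitigation_measures: list[str] = field(default_factory=list)
    usage_and_monitoring_guidance: str = ""
    # (d) anything else the deployer needs to understand outputs and
    # monitor for algorithmic discrimination
    additional_documentation: list[str] = field(default_factory=list)

    def missing_required_fields(self) -> list[str]:
        """Names of narrative fields that are still empty."""
        required = ("foreseeable_uses", "known_harmful_uses",
                    "training_data_summary", "purpose",
                    "performance_evaluation", "usage_and_monitoring_guidance")
        return [name for name in required if not getattr(self, name)]
```

A developer could run `missing_required_fields()` as a pre-release check before the system is offered, sold, leased, licensed, or given to a deployer.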
4. (a) Beginning on January first, two thousand twenty-seven, each developer shall publish, in a manner that is clear and readily available, on such developer's website, or a public use case inventory, a statement summarizing: (i) the types of high-risk artificial intelligence decision systems that such developer: (A) has developed or intentionally and substantially modified; and (B) currently makes available to a deployer or other developer; and (ii) how such developer manages any known or reasonably foreseeable risks of algorithmic discrimination that may arise from the development or intentional and substantial modification of the types of high-risk artificial intelligence decision systems described in subparagraph (i) of this subdivision. (b) Each developer shall update the statement described in paragraph (a) of this subdivision: (i) as necessary to ensure that such statement remains accurate; and (ii) no later than ninety days after the developer intentionally and substantially modifies any high-risk artificial intelligence decision system described in subparagraph (i) of paragraph (a) of this subdivision.
6. (a) Beginning on January first, two thousand twenty-seven, and except as provided in subdivision seven of this section, each deployer shall make available, in a manner that is clear and readily available on such deployer's website, a statement summarizing: (i) the types of high-risk artificial intelligence decision systems that are currently deployed by such deployer; (ii) how such deployer manages any known or reasonably foreseeable risks of algorithmic discrimination that may arise from deployment of each high-risk artificial intelligence decision system described in subparagraph (i) of this paragraph; and (iii) in detail, the nature, source and extent of the information collected and used by such deployer. (b) Each deployer shall periodically update the statement required pursuant to paragraph (a) of this subdivision.
(B) Except as provided in subsection (F), a developer of a high-risk artificial intelligence system shall make available to the deployer or other developer of the high-risk artificial intelligence system: (1) a general statement describing the reasonably foreseeable uses and known harmful or inappropriate uses of the high-risk artificial intelligence system; (2) documentation disclosing: (a) high-level summaries of the type of data used to train the high-risk artificial intelligence system; (b) known or reasonably foreseeable limitations of the high-risk artificial intelligence system, including known or reasonably foreseeable risks of algorithmic discrimination arising from the intended uses of the high-risk artificial intelligence system; (c) the purpose of the high-risk artificial intelligence system; (d) the intended benefits and uses of the high-risk artificial intelligence system; and (e) all other information necessary to allow the deployer to comply with the requirements of Section 37-31-30; (3) documentation describing: (a) how the high-risk artificial intelligence system was evaluated for performance and mitigation of algorithmic discrimination before the high-risk artificial intelligence system was offered, sold, leased, licensed, given, or otherwise made available to the deployer; (b) the data governance measures used to cover the training datasets and the measures used to examine the suitability of data sources, possible biases, and appropriate mitigation; (c) the intended outputs of the high-risk artificial intelligence system; (d) the measures the developer has taken to mitigate known or reasonably foreseeable risks of algorithmic discrimination that may arise from the reasonably foreseeable deployment of the high-risk artificial intelligence system; and (e) how the high-risk artificial intelligence system should be used, not be used, and be monitored by an individual when the high-risk artificial intelligence system is used to make, or is a 
substantial factor in making, a consequential decision; and (4) any additional documentation that is reasonably necessary to assist the deployer in understanding the outputs and monitoring the performance of the high-risk artificial intelligence system for risks of algorithmic discrimination.
(C)(1) Except as provided in subsection (F), a developer that offers, sells, leases, licenses, gives, or otherwise makes available to a deployer or other developer a high-risk artificial intelligence system shall make available to the deployer or other developer, to the extent feasible, the documentation and information, through artifacts such as model cards, dataset cards, or other impact assessments, necessary for a deployer, or for a third party contracted by a deployer, to complete an impact assessment pursuant to Section 37-31-30(C). (2) A developer that also serves as a deployer for a high-risk artificial intelligence system is not required to generate the documentation required by this section unless the high-risk artificial intelligence system is provided to an unaffiliated entity acting as a deployer.
(D)(1) A developer shall make available, in a manner that is clear and readily available on the developer's website or in a public-use case inventory, a statement summarizing: (a) the types of high-risk artificial intelligence systems that the developer has developed or intentionally and substantially modified and currently makes available to a deployer or other developer; and (b) how the developer manages known or reasonably foreseeable risks of algorithmic discrimination that may arise from the development or intentional and substantial modification of the types of high-risk artificial intelligence systems described in accordance with item (1)(a). (2) A developer shall update the statement described in item (1): (a) as necessary to ensure that the statement remains accurate; and (b) no later than ninety days after the developer intentionally and substantially modifies any high-risk artificial intelligence system described in item (1)(a).
(E)(1) Except as provided in subsection (F), a deployer shall make available, in a manner that is clear and readily available on the deployer's website, a statement summarizing: (a) the types of high-risk artificial intelligence systems that are currently deployed by the deployer; (b) how the deployer manages known or reasonably foreseeable risks of algorithmic discrimination that may arise from the deployment of each high-risk artificial intelligence system described pursuant to subitem (a); and (c) in detail, the nature, source, and extent of the information collected and used by the deployer. (2) A deployer shall periodically update the statement described in item (1) of this section.
B. Operators shall publish safety test findings for any safety testing conducted in furtherance of § 59.1-615.
A. A developer of a base artificial intelligence model shall clearly and conspicuously disclose, in a manner that is appropriate for the medium of the content and is easily accessible to the user of such model, in the terms of service governing the use of such model: 1. The name of the model; 2. The developer of the model; 3. The location where the developer is incorporated; 4. The release date of the most recent version of the model; 5. The date that the model's training data was most recently updated; 6. Supported languages for the model; and 7. A link to the model's terms of service. B. The provision of such disclosure to a user shall not be a defense to liability for any harm caused to a plaintiff.
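The seven terms-of-service items required of a base model developer amount to a fixed set of disclosure fields. The following Python sketch is illustrative only; the key names and placeholder values are invented, not drawn from the statute:

```python
from datetime import date

# Hypothetical example of the seven required disclosure items for a
# base model's terms of service; all values are invented placeholders.
base_model_disclosure = {
    "model_name": "ExampleModel-1",                           # item 1
    "developer": "Example AI Labs, Inc.",                     # item 2
    "place_of_incorporation": "Delaware, USA",                # item 3
    "latest_release_date": date(2025, 6, 1).isoformat(),      # item 4
    "training_data_cutoff": date(2025, 3, 1).isoformat(),     # item 5
    "supported_languages": ["en", "es", "fr"],                # item 6
    "terms_of_service_url": "https://example.com/terms",      # item 7
}

REQUIRED_KEYS = {
    "model_name", "developer", "place_of_incorporation",
    "latest_release_date", "training_data_cutoff",
    "supported_languages", "terms_of_service_url",
}

# Basic completeness check before publication: every required item is
# present and non-empty.
assert REQUIRED_KEYS <= base_model_disclosure.keys()
assert all(base_model_disclosure[k] for k in REQUIRED_KEYS)
```

Because the statute requires the disclosure to be "clear and conspicuous" in the terms of service, a machine-readable record like this would supplement, not replace, the human-readable text.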
(e) The Attorney General shall: (2) maintain an online database that is accessible to the general public with reports, redacted in accordance with this section, and audits required by this subchapter, which shall be updated biannually.
(d) Chatbot information. A chatbot provider shall make information about its chatbot publicly available on its website on a monthly basis as set forth in rules adopted by the Attorney General pursuant to this subchapter.
(2) A developer of a high-risk artificial intelligence system may not offer, sell, lease, give, or otherwise provide to a deployer or other developer a high-risk artificial intelligence system unless the developer makes available to the deployer or other developer: (a) A statement disclosing the intended uses of such high-risk artificial intelligence system; (b) Documentation disclosing the following: (i) The known or reasonably foreseeable limitations of such high-risk artificial intelligence system, including any and all known or reasonably foreseeable risks of algorithmic discrimination arising from the intended uses of such high-risk artificial intelligence system; (ii) The purpose of such high-risk artificial intelligence system and its intended outputs, benefits, and uses; (iii) A summary describing how such high-risk artificial intelligence system was evaluated for performance and for mitigation of algorithmic discrimination before it was licensed, sold, leased, given, or otherwise made available to a deployer or other developer; (iv) A description of the measures the developer has taken to mitigate known or reasonably foreseeable risks of algorithmic discrimination that may arise from the reasonably foreseeable deployment or use of such high-risk artificial intelligence system; and (v) A description of how the high-risk artificial intelligence system should be used, not be used, and be monitored by an individual when such system is used to make, or is a substantial factor in making, a consequential decision; and (c) Any additional documentation that is reasonably necessary to assist the deployer or other developer in understanding the outputs and monitoring performance of the high-risk artificial intelligence system for risks of algorithmic discrimination.
(3) A developer that offers, sells, leases, gives, or otherwise makes available to a deployer or other developer a high-risk artificial intelligence system shall make available to the deployer or other developer to the extent feasible and necessary, information and documentation to enable the deployer, other developer, or a third party contracted by the deployer to complete an impact assessment required by section 3(3) of this act. Such information and documentation must include artifacts, such as system cards or predeployment impact assessments, including relevant risk management policies and impact assessments.
(6) A deployer shall make readily available a clear statement summarizing how the deployer manages any reasonably foreseeable risk of algorithmic discrimination that may arise from the use or deployment of the high-risk artificial intelligence system.
(8) A deployer who performs an intentional and substantial modification to a high-risk artificial intelligence system shall comply with the documentation and disclosure requirements for developers pursuant to section 2 of this act.