G-01
Governance & Documentation
AI Governance Program & Documentation
Applies to: Developer, Deployer, Professional, Government
Bills — Enacted: 4 unique bills
Bills — Proposed: 65
Last Updated: 2026-03-29
Core Obligation

Organizations developing or deploying AI must establish a formal AI governance program, maintain contemporaneous records of AI system design, testing, and deployment decisions, and designate a responsible individual or office for AI governance. Program establishment is not a one-time exercise — ongoing maintenance, recordkeeping, and accountability designation are continuing obligations.

Sub-Obligations (6)

G-01.1 Risk management program establishment: A formal AI risk management program must be established, documented, and approved by appropriate organizational leadership. It must cover risk identification, assessment criteria, mitigation strategies, and escalation procedures. NIST AI RMF is commonly cited as a safe harbor framework. (2 enacted, 30 proposed)

G-01.2 Ongoing program maintenance and update: The program must be reviewed and updated periodically — typically annually — and following material changes to AI systems in scope or to the regulatory environment. (3 enacted, 18 proposed)

G-01.3 Recordkeeping and audit trail: Documentation of AI system design decisions, training data characteristics, bias testing results, safety evaluation results, and deployment parameters must be created contemporaneously and retained for defined periods — typically 2–5 years depending on jurisdiction. (2 enacted, 39 proposed)

G-01.4 Regulatory production of records: Records must be organized and maintained in a form that can be produced to regulatory authorities upon request within a reasonable timeframe. (1 enacted, 19 proposed)

G-01.5 Third-party audit and certification: High-risk AI systems must be submitted to a qualified independent auditor for evaluation, and the results disclosed to regulators or publicly. (0 enacted, 7 proposed)

G-01.6 Designated AI accountability role: A specific individual or office must be formally designated as responsible for AI governance, with defined responsibilities, authority, and resources. Public disclosure of the designated role may be required. (1 enacted, 5 proposed)
Bills That Map to This Requirement (69 bills)
Bill
Status
Sub-Obligations
Section
Pending 2026-01-01
G-01.1
A.R.S. § 44-1383.01(D)
Plain Language
Chatbot providers must develop, implement, and maintain a written comprehensive data security program with administrative, technical, and physical safeguards proportionate to the volume and nature of the personal data and chat logs they hold. The program must be publicly available on the provider's website. This is both a governance obligation (establishing and maintaining the program) and a transparency obligation (public posting).
A chatbot provider shall develop, implement and maintain a comprehensive data security program that contains administrative, technical and physical safeguards that are proportionate to the volume and nature of personal data and chat logs that are maintained by the chatbot provider. The program shall be written and made publicly available on the chatbot provider's website.
Pending 2027-01-01
G-01.3
Bus. & Prof. Code § 22587.3(a)(1)-(3)
Plain Language
Operators must maintain documentation of three categories of information for every companion chatbot they make available in California: (1) whether a graduated response system exists, (2) all credible crisis expressions detected by the chatbot, and (3) the duration and conditions of every crisis interruption pause initiated. This is a contemporaneous recordkeeping obligation — operators need systems to log crisis detections and pauses as they occur. The documentation serves as the basis for the annual reporting obligation to the Office of Suicide Prevention beginning January 1, 2028.
(a) An operator shall document all of the following with respect to any companion chatbot that the operator makes available in this state: (1) The existence of a graduated response system. (2) All credible crisis expressions detected by the companion chatbot. (3) The duration and conditions of a crisis interruption pause initiated by the companion chatbot.
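A minimal sketch of what contemporaneous logging under § 22587.3(a) could look like in practice. Every field name below is hypothetical; the statute prescribes the three documentation categories but no schema or storage format.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class CrisisLogEntry:
    """One record per credible crisis expression, written at detection time.

    Field names are illustrative only; Bus. & Prof. Code § 22587.3(a)
    names the categories of information, not a format."""
    chatbot_id: str
    detected_at: str                        # (a)(2): credible crisis expression
    graduated_response_exists: bool         # (a)(1): graduated response system
    pause_started_at: str | None = None     # (a)(3): crisis interruption pause
    pause_duration_seconds: int | None = None
    pause_conditions: str = ""

def log_crisis_event(path: str, entry: CrisisLogEntry) -> None:
    """Append the record as it occurs, so documentation is contemporaneous
    rather than reconstructed later for the annual report."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(entry)) + "\n")

# Example: a detection that triggered a 300-second interruption pause.
log_crisis_event("crisis_log.jsonl", CrisisLogEntry(
    chatbot_id="companion-01",
    detected_at=datetime.now(timezone.utc).isoformat(),
    graduated_response_exists=True,
    pause_started_at=datetime.now(timezone.utc).isoformat(),
    pause_duration_seconds=300,
    pause_conditions="referral to crisis resources displayed",
))
```

Per-event records of this kind would then roll up into the annual report to the Office of Suicide Prevention noted above.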
Pending 2025-07-01
G-01.3, G-01.4
2 CCR § 11013(c)
Plain Language
Employers and other covered entities must preserve all employment records — now explicitly including automated-decision system data, selection criteria, and all application and personnel records — for at least four years from the date the record was made or the personnel action occurred, whichever is later. This is a significant expansion: the retention period has been increased from two years to four years, and the scope of records now expressly includes data used in or generated by automated-decision systems. Records must be available to CRD investigators and to support any administrative or judicial proceeding.
(c) Preservation of Records. Any personnel or other employment records created or received by any employer or other covered entity dealing with any employment practice and affecting any employment benefit of any applicant or employee shall be preserved by the employer or other covered entity for a period of four years from the date of the making of the record or the date of the personnel action involved, whichever occurs later. This includes all applications, personnel records, membership records, employment referral records, selection criteria, automated-decision system data, and other records created or received by the employer or other covered entity dealing with any employment practice and affecting any employment benefit of any applicant or employee.
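The "whichever occurs later" trigger is the detail most easily mis-implemented. A minimal sketch of the four-year retention math, assuming both dates are tracked per record:

```python
from datetime import date

RETENTION_YEARS = 4  # 2 CCR § 11013(c), as amended (previously two years)

def retention_expiry(record_made: date, personnel_action: date) -> date:
    """Retention runs from the record date or the personnel action date,
    whichever occurs later."""
    start = max(record_made, personnel_action)
    # Naive year arithmetic; a Feb 29 start date needs real calendar logic.
    return start.replace(year=start.year + RETENTION_YEARS)

# An application filed 2025-03-01 that supported a personnel action on
# 2026-01-15 must be preserved until at least 2030-01-15.
print(retention_expiry(date(2025, 3, 1), date(2026, 1, 15)))  # 2030-01-15
```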
Pending 2027-07-01
G-01.5
Bus. & Prof. Code § 22614(a)-(c)
Plain Language
Operators must submit to an annual independent audit of their compliance with this chapter, conducted by an auditor certified by the Attorney General. The audit must begin 180 days after the AG adopts implementing regulations (due by January 1, 2028), meaning the first audit obligation triggers approximately mid-2028. Within 90 days of completing an audit, the auditor — not the operator — must submit the audit report to the AG. Audit reports are confidential by default, but the AG may disclose specific information to government agencies and public prosecutors for enforcement, to qualified researchers for child safety studies, and to child safety organizations for developing safety standards, in each case subject to confidentiality protections. Operators cannot control the auditor's submission to the AG.
(a) Beginning on the date that is 180 days after the Attorney General adopts regulations pursuant to Section 22615, and annually thereafter, an operator shall submit to an independent audit assessing the operator's compliance with this chapter. (b) Within 90 days of completing an independent audit pursuant to subdivision (a), the auditor shall submit an AI child safety audit report to the Attorney General for any audited companion chatbot. (c) (1) Notwithstanding any other law, except as provided in paragraph (2), an AI child safety audit report submitted pursuant to this section is confidential. (2) The Attorney General may disclose specific information from an AI child safety audit report to any of the following: (A) A government agency or a public prosecutor in the state as necessary for enforcement purposes. (B) A qualified researcher conducting a study on child safety, subject to confidentiality agreements and data protection requirements set by the Attorney General. (C) An independent child safety organization or advocacy group for the purpose of developing safety standards or educational resources, subject to appropriate confidentiality protections.
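The "approximately mid-2028" estimate follows from simple date arithmetic, assuming the Attorney General adopts regulations exactly on the January 1, 2028 deadline:

```python
from datetime import date, timedelta

regs_adopted = date(2028, 1, 1)  # § 22615 deadline (assumed met on the day)
first_audit_trigger = regs_adopted + timedelta(days=180)
# The 90-day report clock runs from audit completion; treating the trigger
# date as the completion date gives an earliest-case illustration only.
earliest_report_due = first_audit_trigger + timedelta(days=90)

print(first_audit_trigger)   # 2028-06-29, i.e., mid-2028 as noted above
print(earliest_report_due)   # 2028-09-27
```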
Pending 2026-01-01
G-01.1
Bus. & Prof. Code § 22756.3(a)-(b)
Plain Language
Both developers and deployers must establish, document, implement, and maintain a formal governance program with reasonable administrative and technical safeguards to address foreseeable risks of algorithmic discrimination. The program must be proportionate to the system's intended use, the entity's size and complexity, the nature of its activities, and the technical feasibility and cost of available risk management tools. This is a continuing obligation — the program must be maintained, not merely established once.
(a) A developer or a deployer shall establish, document, implement, and maintain a governance program that contains reasonable administrative and technical safeguards to govern the reasonably foreseeable risks of algorithmic discrimination associated with the use, or intended use, of a high-risk automated decision system. (b) The governance program required by this subdivision shall be appropriately designed with respect to all of the following: (1) The use, or intended use, of the high-risk automated decision system. (2) The size, complexity, and resources of the deployer or developer. (3) The nature, context, and scope of the activities of the deployer or developer in connection with the high-risk automated decision system. (4) The technical feasibility and cost of available tools, assessments, and other means used by a deployer or developer to map, measure, manage, and govern the risks associated with a high-risk automated decision system.
Failed 2026-01-01
G-01.1, G-01.2, G-01.6
Civ. Code § 1798.91.3(a)-(c)(1)-(2)(10)
Plain Language
Covered deployers must develop, implement, and maintain a comprehensive written information security program containing administrative, technical, and physical safeguards scaled to their size, resources, data volume, and confidentiality needs. The program must be consistent with existing state and federal data protection requirements, designate one or more employees to maintain it, and be reviewed at least annually and whenever there is a material change in business practices affecting personal information security. This is a continuing obligation — the program must be maintained, not merely created.
(a) A covered deployer conducting business in this state shall have a duty to protect personal information held by the covered deployer as provided by this section.
(b) A covered deployer whose high-risk artificial intelligence systems process personal information shall develop, implement, and maintain a comprehensive information security program that is written in one or more readily accessible parts and contains administrative, technical, and physical safeguards that are appropriate for all of the following:
(1) The covered deployer's size, scope, and type of business.
(2) The amount of resources available to the covered deployer.
(3) The amount of data stored by the covered deployer.
(4) The need for security and confidentiality of personal information stored by the covered deployer.
(c) The comprehensive information security program required by subdivision (a) shall meet all of the following requirements:
(1) The program shall incorporate safeguards that are consistent with the safeguards for the protection of personal information and information of a similar character under state or federal laws and regulations applicable to the covered deployer.
(2) The program shall include the designation of one or more employees of the covered deployer to maintain the program.
(10) The program shall require the regular review of the scope of the program's security measures that must occur subject to both of the following timeframes:
(A) At least annually.
(B) Whenever there is a material change in the covered deployer's business practices that may reasonably affect the security or integrity of records containing personal information.
Failed 2026-01-01
G-01.3
Civ. Code § 1798.91.3(c)(3)-(6)(8)(9)(11)
Plain Language
The information security program must include detailed operational components: risk identification and assessment for internal and external threats to personal information; ongoing employee and contractor training on security procedures; mandatory compliance with program policies with disciplinary measures for violations; policies governing off-premises storage, access, and transportation of personal information records; measures to revoke terminated employees' access; physical access restrictions including locked storage; regular monitoring to prevent unauthorized access; and documented breach response including mandatory post-incident review. These are the operational elements that give the program practical effect beyond the structural requirements in § 1798.91.3(a)-(b).
(3) The program shall require the identification and assessment of reasonably foreseeable internal and external risks to the security, confidentiality, and integrity of any electronic, paper, or other record containing personal information, and the establishment of a process for evaluating and improving, as necessary, the effectiveness of the current safeguards for limiting those risks, including by all of the following:
(A) Requiring ongoing employee and contractor education and training, including education and training for temporary employees and contractors of the covered deployer, on the proper use of security procedures and protocols and the importance of personal information security.
(B) Mandating employee compliance with policies and procedures established under the program.
(C) Providing a means for detecting and preventing security system failures.
(4) The program shall include security policies for the covered deployer's employees relating to the storage, access, and transportation of records containing personal information outside of the covered deployer's physical business premises.
(5) The program shall provide disciplinary measures for violations of a policy or procedure established under the program.
(6) The program shall include measures for preventing a terminated employee from accessing records containing personal information.
(8) The program shall provide reasonable restrictions on physical access to records containing personal information, including by requiring the records containing the data to be stored in a locked facility, storage area, or container.
(9) The program shall include regular monitoring to ensure that the program is operating in a manner reasonably calculated to prevent unauthorized access to or unauthorized use of personal information and, as necessary, upgrading information safeguards to limit the risk of unauthorized access to or unauthorized use of personal information.
(11) The program shall require the documentation of responsive actions taken in connection with any incident involving a breach of security, including a mandatory postincident review of each event and the actions taken, if any, in response to that event to make changes in business practices relating to protection of personal information.
Failed 2026-01-01
Civ. Code § 1798.91.3(c)(7)
Plain Language
Covered deployers must include in their information security program policies for overseeing third-party service providers that handle personal information. This includes conducting reasonable due diligence in selecting and retaining providers capable of maintaining appropriate security, and contractually requiring those providers to implement and maintain security measures for personal information. This is a supply-chain security obligation — deployers cannot outsource data processing without ensuring downstream protections.
(7) The program shall provide policies for the supervision of third-party service providers that include both of the following:
(A) Taking reasonable steps to select and retain third-party service providers that are capable of maintaining appropriate security measures to protect personal information consistent with applicable law.
(B) Requiring third-party service providers by contract to implement and maintain appropriate security measures for personal information.
Failed 2026-01-01
Civ. Code § 1798.91.3(c)(12)
Plain Language
To the extent feasible, the information security program must include specific technical security controls: secure user authentication (credential management, password security, account lockout); role-based access controls limiting personal information access to employees and contractors who need it; encryption for data in transit over public or wireless networks and at rest on portable devices; system monitoring for unauthorized access; current firewall protection and OS patches for internet-connected systems; and current malware protection software with regular updates. These are prescriptive technical minimums — deployers may implement higher-security alternatives. The 'to the extent feasible' qualifier provides some flexibility for smaller organizations.
(12) The program shall, to the extent feasible, include all of the following procedures and protocols with respect to computer system security requirements or procedures and protocols providing a higher degree of security, for the protection of personal information:
(A) The use of secure user authentication protocols that include all of the following features:
(i) The control of user login credentials and other identifiers.
(ii) The use of a reasonably secure method of assigning and selecting passwords or using unique identifier technologies, which may include biometrics or token devices.
(iii) The control of data security passwords to ensure that the passwords are kept in a location and a format that do not compromise the security of the data the passwords protect.
(iv) The restriction of access to only active users and active user accounts.
(v) The blocking of access to user credentials or identification after multiple unsuccessful attempts to gain access.
(B) The use of secure access control measures that include both of the following:
(i) The restriction of access to records and files containing personal information to only employees or contractors who need access to that personal information to perform the job duties of the employees or contractors.
(ii) The assignment of a unique identification and a password to each employee or contractor with access to a computer containing personal information, that may not be a vendor-supplied default password, or the use of another protocol reasonably designed to maintain the integrity of the security of the access controls to personal information.
(C) The encryption of both of the following:
(i) Transmitted records and files containing personal information that will travel across public networks.
(ii) Data containing personal information that is transmitted wirelessly.
(D) The use of reasonable monitoring of systems for unauthorized use of or access to personal information.
(E) The encryption of all personal information stored on laptop computers or other portable devices.
(F) For files containing personal information on a system that is connected to the internet, the use of reasonably current firewall protection and operating system security patches that are reasonably designed to maintain the integrity of the personal information.
(G) The use of both of the following:
(i) A reasonably current version of system security agent software that shall include malware protection and reasonably current patches and virus definitions.
(ii) A version of a system security agent software that is supportable with current patches and virus definitions, and is set to receive the most current security updates on a regular basis.
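Read as an implementation checklist, (c)(12) maps onto a fairly conventional security baseline. The sketch below is one way a deployer might encode the minimums for self-assessment; every key name and numeric threshold is an assumption, since the statute speaks qualitatively ("reasonably secure," "multiple unsuccessful attempts"):

```python
# Hypothetical self-assessment baseline for Civ. Code § 1798.91.3(c)(12).
SECURITY_BASELINE = {
    "authentication": {                       # (c)(12)(A)
        "credential_controls": True,          # (A)(i)
        "secure_password_assignment": True,   # (A)(ii)
        "password_storage_protected": True,   # (A)(iii)
        "inactive_accounts_disabled": True,   # (A)(iv)
        "lockout_after_failed_attempts": 5,   # (A)(v): statute says "multiple"
    },
    "access_control": {                       # (c)(12)(B)
        "need_to_know_restriction": True,     # (B)(i)
        "no_vendor_default_passwords": True,  # (B)(ii)
    },
    "encryption": {                           # (c)(12)(C) and (E)
        "in_transit_public_networks": True,
        "wireless_transmission": True,
        "portable_devices_at_rest": True,
    },
    "monitoring_for_unauthorized_use": True,  # (c)(12)(D)
    "perimeter": {                            # (c)(12)(F)
        "firewall_current": True,
        "os_patches_current": True,
    },
    "endpoint": {                             # (c)(12)(G)
        "malware_protection_current": True,
        "auto_security_updates": True,
    },
}

def gaps(baseline: dict, prefix: str = "") -> list[str]:
    """List any controls marked False, i.e., candidates for remediation
    or for a documented higher-security alternative."""
    out = []
    for key, val in baseline.items():
        path = f"{prefix}{key}"
        if isinstance(val, dict):
            out += gaps(val, path + ".")
        elif val is False:
            out.append(path)
    return out

print(gaps(SECURITY_BASELINE))  # [] when every control is in place
```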
Enacted 2026-01-01
Bus. & Prof. Code § 22757.12(a)
Plain Language
Large frontier developers must create, follow, and publicly publish a comprehensive frontier AI framework covering catastrophic risk assessment thresholds, mitigations, third-party evaluations, cybersecurity for model weights, incident response, internal governance, and management of internal-use risks. This is in effect a mandatory AI risk management program.
A large frontier developer shall write, implement, comply with, and clearly and conspicuously publish on its internet website a frontier AI framework that applies to the large frontier developer's frontier models and describes how the large frontier developer approaches all of the following: (1) Incorporating national standards, international standards, and industry-consensus best practices into its frontier AI framework. (2) Defining and assessing thresholds used by the large frontier developer to identify and assess whether a frontier model has capabilities that could pose a catastrophic risk, which may include multiple-tiered thresholds. (3) Applying mitigations to address the potential for catastrophic risks based on the results of assessments undertaken pursuant to paragraph (2). (4) Reviewing assessments and adequacy of mitigations as part of the decision to deploy a frontier model or use it extensively internally. (5) Using third parties to assess the potential for catastrophic risks and the effectiveness of mitigations of catastrophic risks. (6) Revisiting and updating the frontier AI framework, including any criteria that trigger updates and how the large frontier developer determines when its frontier models are substantially modified enough to require disclosures pursuant to subdivision (c). (7) Cybersecurity practices to secure unreleased model weights from unauthorized modification or transfer by internal or external parties. (8) Identifying and responding to critical safety incidents. (9) Instituting internal governance practices to ensure implementation of these processes. (10) Assessing and managing catastrophic risk resulting from the internal use of its frontier models, including risks resulting from a frontier model circumventing oversight mechanisms.
Enacted 2026-01-01
G-01.2
Bus. & Prof. Code § 22757.12(b)(1)
Plain Language
Large frontier developers must review, and as appropriate update, their frontier AI framework at least once per year.
A large frontier developer shall review and, as appropriate, update its frontier AI framework at least once per year.
Enacted 2026-01-01
G-01.2
Bus. & Prof. Code § 22757.12(b)(2)
Plain Language
When a large frontier developer makes a material modification to its frontier AI framework, it must publish the updated framework and a written justification for the change within 30 days of making that modification.
If a large frontier developer makes a material modification to its frontier AI framework, the large frontier developer shall clearly and conspicuously publish the modified frontier AI framework and a justification for that modification within 30 days.
Enacted 2026-01-01
Bus. & Prof. Code § 22757.12(e)(1)(B)
Plain Language
Large frontier developers must not misrepresent their implementation of or compliance with their own frontier AI framework.
(B) A large frontier developer shall not make a materially false or misleading statement about its implementation of, or compliance with, its frontier AI framework... (2) This subdivision does not apply to a statement that was made in good faith and was reasonable under the circumstances.
Enacted 2026-01-01
Labor Code § 1107.1(a)
Plain Language
Frontier developers must not adopt rules, policies, or contracts that prevent covered employees from reporting catastrophic-risk dangers or violations of the Transparency in Frontier Artificial Intelligence Act (TFAIA) to the Attorney General, federal authorities, or authorized internal personnel, and must not retaliate against employees who make such disclosures.
A frontier developer shall not make, adopt, enforce, or enter into a rule, regulation, policy, or contract that prevents a covered employee from disclosing, or retaliates against a covered employee for disclosing, information to the Attorney General, a federal authority, a person with authority over the covered employee, or another covered employee who has authority to investigate, discover, or correct the reported issue, if the covered employee has reasonable cause to believe that the information discloses either of the following: (1) The frontier developer's activities pose a specific and substantial danger to the public health or safety resulting from a catastrophic risk. (2) The frontier developer has violated Chapter 25.1 (commencing with Section 22757.10) of Division 8 of the Business and Professions Code.
Enacted 2026-01-01
Labor Code § 1107.1(b)
Plain Language
Frontier developers may not include provisions in contracts that prohibit or restrict employees from making whistleblower disclosures protected under California Labor Code Section 1102.5.
A frontier developer shall not enter into a contract that prevents a covered employee from making a disclosure protected under Section 1102.5.
Enacted 2026-01-01
Labor Code § 1107.1(d)
Plain Language
Frontier developers must provide clear notice to all covered employees of their whistleblower rights, either through continuous workplace posting (including periodic notice for remote workers) or annual written notice acknowledged by each employee.
A frontier developer shall provide a clear notice to all covered employees of their rights and responsibilities under this section, including by doing either of the following: (1) At all times posting and displaying within any workplace maintained by the frontier developer a notice to all covered employees of their rights under this section, ensuring that any new covered employee receives equivalent notice, and ensuring that any covered employee who works remotely periodically receives an equivalent notice. (2) At least once each year, providing written notice to each covered employee of the covered employee's rights under this section and ensuring that the notice is received and acknowledged by all of those covered employees.
Enacted 2026-01-01
Labor Code § 1107.1(e)(1)
Plain Language
Large frontier developers must establish an anonymous internal reporting process for covered employees to disclose good-faith concerns about catastrophic safety risks or violations of California's frontier AI law. The developer must provide the reporting employee with monthly status updates on the investigation and any actions taken in response. Disclosures and responses must be shared with the company's officers and directors at least quarterly — except that if an employee has alleged wrongdoing by a specific officer or director, that individual must be excluded from receiving the relevant disclosures.
A large frontier developer shall provide a reasonable internal process through which a covered employee may anonymously disclose information to the large frontier developer if the covered employee believes in good faith that the information indicates that the large frontier developer's activities present a specific and substantial danger to the public health or safety resulting from a catastrophic risk or that the large frontier developer violated Chapter 25.1 (commencing with Section 22757.10) of Division 8 of the Business and Professions Code, including a monthly update to the person who made the disclosure regarding the status of the large frontier developer's investigation of the disclosure and the actions taken by the large frontier developer in response to the disclosure. (2)(A)  Except as provided in subparagraph (B), the disclosures and responses of the process required by this subdivision shall be shared with officers and directors of the large frontier developer at least once each quarter. (B)  If a covered employee has alleged wrongdoing by an officer or director of the large frontier developer in a disclosure or response, subparagraph (A) shall not apply with respect to that officer or director. 
Failed 2026-01-01
G-01.3
Lab. Code § 1522(b)
Plain Language
Employers must maintain and keep current an inventory list of all automated decision systems in use. This is an ongoing recordkeeping obligation — the list must be updated whenever an ADS is added or removed. The statute does not specify the format or content of the list beyond identifying systems currently in use, nor does it require the list to be published publicly or submitted to regulators.
(b) An employer shall maintain an updated list of all ADS currently in use.
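Because the statute specifies neither format nor content beyond identifying "all ADS currently in use," even a minimal structure satisfies the letter of the text. A sketch follows; every field beyond the system's identity and in-use status is an assumption:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ADSInventoryEntry:
    """One row in the employer's ADS inventory (Lab. Code § 1522(b)).

    Only an updated list of systems currently in use is required;
    vendor, use case, and dates are illustrative extras."""
    name: str
    vendor: str
    use_case: str            # e.g., resume screening, shift scheduling
    added: date
    retired: date | None = None

    @property
    def in_use(self) -> bool:
        return self.retired is None

def current_inventory(entries: list[ADSInventoryEntry]) -> list[ADSInventoryEntry]:
    """The statutory 'updated list' is the entries with no retirement date."""
    return [e for e in entries if e.in_use]
```

Near-identical inventory duties appear in the Iowa and Louisiana entries below, so one structure could serve all three.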
Pending 2027-01-01
G-01.3, G-01.4
C.R.S. § 10-16-112.7(3)(e)
Plain Language
Covered entities must ensure their AI utilization review systems produce and retain documentation, audit logs, and model-governance records sufficient to demonstrate compliance with both this section and Section 10-3-1104.9 (which governs insurance company record-keeping). This is a contemporaneous documentation and retention obligation — the records must be generated as the system operates, not reconstructed after the fact, and must be maintained in a form available for regulatory inspection.
(e) THE ARTIFICIAL INTELLIGENCE SYSTEM PRODUCES AND RETAINS DOCUMENTATION, AUDIT LOGS, AND MODEL-GOVERNANCE RECORDS IN ORDER TO DEMONSTRATE COMPLIANCE WITH THIS SECTION AND SECTION 10-3-1104.9;
Pending 2027-01-01
G-01.3
C.R.S. § 6-1-1702(2)(a)-(b)
Plain Language
Developers must notify each deployer within a reasonable time of material updates, intentional and substantial modifications, and changes to the covered ADMT's intended use, limitations, or risk mitigation. Developers may satisfy this obligation through public release notes if they also provide direct notice to each deployer that the release notes have been published. This is an ongoing notification obligation that applies whenever the developer makes qualifying changes — not just at initial deployment.
(2) (a) A DEVELOPER SHALL PROVIDE TO EACH DEPLOYER OF A COVERED ADMT DEVELOPED BY THE DEVELOPER A NOTICE OF MATERIAL UPDATES, INTENTIONAL AND SUBSTANTIAL MODIFICATIONS, AND CHANGES TO THE INTENDED USE OF, LIMITATIONS FOR, OR RISK MITIGATION FOR THE COVERED ADMT WITHIN A REASONABLE TIME. (b) A DEVELOPER MAY USE PUBLIC RELEASE NOTES CONTAINING THE INFORMATION REQUIRED BY SUBSECTION (2)(a) OF THIS SECTION TO COMPLY WITH THIS SUBSECTION (2) IF THE DEVELOPER PROVIDES DIRECT NOTICE OF THE PUBLIC RELEASE TO EACH DEPLOYER OF THE COVERED ADMT.
Pending 2027-01-01
G-01.3, G-01.4
C.R.S. § 6-1-1702(4)
Plain Language
Developers must retain all records necessary to demonstrate compliance with their documentation and notification obligations for at least three years after creation, or longer if required by other law. Records include system version identifiers, changelogs, and copies of material update notices provided to deployers. This is a continuing recordkeeping obligation — each new record starts a fresh three-year retention clock.
A DEVELOPER SHALL RETAIN, FOR NOT LESS THAN THREE YEARS AFTER THE CREATION OF A RECORD REQUIRED OR CREATED UNDER THIS SECTION OR FOR A LONGER PERIOD IF REQUIRED BY APPLICABLE STATE OR FEDERAL LAW, RECORDS REASONABLY NECESSARY TO DEMONSTRATE COMPLIANCE WITH THIS SECTION. RECORDS INCLUDE SYSTEM VERSION IDENTIFIERS, CHANGELOGS, AND DOCUMENTATION AND NOTICES OF MATERIAL UPDATES PROVIDED TO DEPLOYERS PURSUANT TO SUBSECTION (2) OF THIS SECTION.
Pending 2027-01-01
G-01.3, G-01.4
C.R.S. § 6-1-1703
Plain Language
Deployers must retain records necessary to demonstrate compliance with the entire Part 17 for at least three years after each consequential decision, or longer if required by other law. Records may include ADMT version identifiers, changelogs, and documentation of material mitigation changes. Note that the retention clock runs from the date of each consequential decision, not from record creation — this means records supporting recurring decisions may need to be retained for well beyond three years from their creation date.
A DEPLOYER SHALL RETAIN, FOR NOT LESS THAN THREE YEARS AFTER THE DATE OF A CONSEQUENTIAL DECISION OR FOR A LONGER PERIOD IF REQUIRED BY APPLICABLE STATE OR FEDERAL LAW, RECORDS REASONABLY NECESSARY TO DEMONSTRATE COMPLIANCE WITH THIS PART 17. RECORDS MAY INCLUDE, AS APPLICABLE, COVERED ADMT VERSION IDENTIFIERS, CHANGELOGS, AND DOCUMENTATION OF MATERIAL MITIGATION CHANGES.
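The two Colorado retention clocks share a three-year floor but differ in trigger, and the consequence flagged above (deployer records outliving a creation-based period) is easiest to see in a worked example. A minimal sketch:

```python
from datetime import date

def developer_retention_expiry(record_created: date) -> date:
    """C.R.S. § 6-1-1702(4): the clock runs from creation of the record.
    (Naive year arithmetic; Feb 29 dates need real calendar handling.)"""
    return record_created.replace(year=record_created.year + 3)

def deployer_retention_expiry(decision_dates: list[date]) -> date:
    """C.R.S. § 6-1-1703: the clock runs from EACH consequential decision,
    so a record supporting recurring decisions is held until three years
    after the most recent one."""
    latest = max(decision_dates)
    return latest.replace(year=latest.year + 3)

# A changelog created 2027-02-01 expires for the developer in 2030, but if
# it supports consequential decisions through 2030-05-01, the deployer must
# keep it until 2033 -- well past three years from its creation.
print(developer_retention_expiry(date(2027, 2, 1)))                      # 2030-02-01
print(deployer_retention_expiry([date(2028, 3, 1), date(2030, 5, 1)]))   # 2033-05-01
```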
Pending 2027-01-01
C.R.S. § 6-1-1702(3), (5)
Plain Language
These provisions scope the developer's obligations under § 6-1-1702. The developer's documentation and notification duties apply only when the ADMT was marketed, advertised, configured, contracted, sold, or licensed for use in consequential decisions — or when the developer becomes aware the system is being used for such decisions in a manner consistent with its intended and contracted uses. Developers are not responsible for deployers' unintended or off-label uses. This is a scoping limitation on other obligations, not an independent compliance requirement.
(3) A DEVELOPER IS SUBJECT TO THE DISCLOSURE REQUIREMENTS DESCRIBED IN SUBSECTIONS (1) AND (2) OF THIS SECTION ONLY FOR A DEPLOYER'S USE OF A COVERED ADMT WHERE THE ADMT WAS MARKETED, ADVERTISED, CONFIGURED, CONTRACTED, SOLD, OR LICENSED TO BE USED TO MATERIALLY INFLUENCE A CONSEQUENTIAL DECISION. (5) THIS SECTION APPLIES WHEN A DEVELOPER CREATES A COVERED ADMT THAT IS INTENDED, DOCUMENTED, MARKETED, ADVERTISED, CONFIGURED, OR CONTRACTED TO BE USED TO MAKE CONSEQUENTIAL DECISIONS OR WHEN THE DEVELOPER BECOMES AWARE THAT THE COVERED ADMT IS BEING USED TO MAKE CONSEQUENTIAL DECISIONS IN A MANNER CONSISTENT WITH THE INTENDED AND CONTRACTED USES.
Enacted 2026-06-30
G-01.1, G-01.2
C.R.S. § 6-1-1703(2)(a)
Plain Language
Deployers must implement and maintain a formal risk management policy and program governing their deployment of high-risk AI systems. The program must cover the principles, processes, and personnel used to identify, document, and mitigate algorithmic discrimination risks. Critically, this is not a one-time exercise — it must be iterative, regularly and systematically reviewed, and updated over the full lifecycle of the AI system. Reasonableness is assessed based on factors specified in the original SB 205 (size/complexity of the deployer, nature/scope of the AI system, sensitivity of data, etc.). This maps closely to the NIST AI RMF approach.
(2) (a) On and after June 30, 2026, and except as provided in subsection (6) of this section, a deployer of a high-risk artificial intelligence system shall implement a risk management policy and program to govern the deployer's deployment of the high-risk artificial intelligence system. The risk management policy and program must specify and incorporate the principles, processes, and personnel that the deployer uses to identify, document, and mitigate known or reasonably foreseeable risks of algorithmic discrimination. The risk management policy and program must be an iterative process planned, implemented, and regularly and systematically reviewed and updated over the life cycle of a high-risk artificial intelligence system, requiring regular, systematic review and updates. A risk management policy and program implemented and maintained pursuant to this subsection (2) must be reasonable considering:
Pending 2026-10-01
G-01.3, G-01.4
Sec. 8(e)
Plain Language
Deployers must retain all bias audit records for at least five years and produce them to the Labor Commissioner upon request. This is both a recordkeeping and a regulatory access obligation — the five-year retention period exceeds the typical two- to three-year standard in other jurisdictions.
(e) Each deployer shall maintain records relating to bias audits required pursuant to subsection (a) of this section for a period of not less than five years and shall make such records available to the Labor Commissioner upon request.
Pending 2025-07-01
G-01.1, G-01.2
O.C.G.A. § 10-16-3(b)-(c)
Plain Language
Deployers must implement a formal risk management policy and program governing their use of automated decision systems. The program must specify the principles, processes, and personnel for identifying, documenting, and mitigating algorithmic discrimination risks. It must be iterative and regularly updated over the system lifecycle. The program must consider the NIST AI RMF, ISO/IEC 42001, or an equivalent framework — or any framework the AG designates — as well as the deployer's size, system scope, and data sensitivity. A single program may cover multiple systems. Small deployers meeting the exemption criteria in § 10-16-6 are exempt from this obligation.
Except as provided in Code Section 10-16-6, a deployer of an automated decision system shall implement a risk management policy and program to govern the deployer's deployment of the automated decision system. The risk management policy and program shall specify and incorporate the principles, processes, and personnel that the deployer uses to identify, document, and mitigate known or reasonably foreseeable risks of algorithmic discrimination. The risk management policy and program shall be an iterative process planned, implemented, and regularly and systematically reviewed and updated over the life cycle of an automated decision system, requiring regular, systematic review and updates. A risk management policy and program implemented and maintained pursuant to this subsection shall take into consideration: (1) Either: (A) The guidance and standards set forth in the latest version of the Artificial Intelligence Risk Management Framework published by the National Institute of Standards and Technology of the United States Department of Commerce, standard ISO/IEC 42001 of the International Organization for Standardization, or another nationally or internationally recognized risk management framework for artificial intelligence systems, if the standards are substantially equivalent to or more stringent than the requirements of this chapter; or (B) Any risk management framework for artificial intelligence systems that the Attorney General, in the Attorney General's discretion, may designate; (2) The size and complexity of the deployer; (3) The nature and scope of the automated decision systems deployed by the deployer, including the intended uses of the automated decision systems; and (4) The sensitivity and volume of data processed in connection with the automated decision systems deployed by the deployer. A risk management policy and program implemented pursuant to this Code section may cover multiple automated decision systems deployed by the deployer.
Pending 2025-07-01
G-01.3
O.C.G.A. § 10-16-3(d)
Plain Language
Deployers must establish and follow written policies for acquiring and relying on third-party automated decision systems, including contractual controls ensuring developers provide all necessary compliance documentation. They must also maintain procedures for reporting errors or algorithmic discrimination back to developers, and for remediating incorrect information in their own systems. This creates a documented vendor management and error-correction framework.
Each deployer shall establish and adhere to: (1) Written standards, policies, procedures, and protocols for the acquisition, use of, or reliance on automated decision systems developed by third-party developers, including reasonable contractual controls ensuring that the developer statements and summaries described in subsection (b) of Code Section 10-16-2 include all information necessary for the deployer to fulfill its obligations under this Code section; (2) Procedures for reporting any incorrect information or evidence of algorithmic discrimination to a developer for further investigation and mitigation, as necessary; and (3) Procedures to remediate and eliminate incorrect information from its automated decision systems that the deployer has identified or has been reported to a developer.
Pending 2028-07-01
G-01.3
HRS § 321-__ (Monitoring; performance evaluation; record keeping)(4)
Plain Language
Health care providers using AI in consequential decisions must maintain four categories of records: (A) an up-to-date inventory of all AI systems in use; (B) documentation of each system's design, intended use, and training data; (C) records of all monitoring, performance evaluations, and oversight activities; and (D) documentation of findings and remediation actions taken in response to identified deficiencies. The bill does not specify a retention period — this will likely be addressed in DOH rulemaking. This is a continuing recordkeeping obligation that must be kept current as systems change.
(4) Maintain: (A) An updated inventory of the artificial intelligence systems; (B) Documentation on the system design, intended use, and training data of the artificial intelligence systems; (C) Record of the monitoring, performance evaluations, and oversight activities; and (D) Documentation of findings and actions taken to address any deficiencies identified through the monitoring or performance evaluations.
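A sketch of the four (A) through (D) categories as a per-system record. Structure and field names are assumptions; the bill names the categories and leaves format and retention period to DOH rulemaking:

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """HRS § 321-__ (4): the four record categories per AI system in use."""
    system_name: str                          # (A) entry in the updated inventory
    design_documentation: str = ""            # (B) design, intended use,
    intended_use: str = ""                    #     and training data
    training_data_summary: str = ""
    monitoring_and_evaluations: list[str] = field(default_factory=list)  # (C)
    findings_and_remediation: list[str] = field(default_factory=list)    # (D)
```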
Pending 2026-07-01
G-01.3
Iowa Code § 91F.2(5)
Plain Language
Employers must maintain a current inventory of all automated decision systems in use. This is a recordkeeping obligation designed to support compliance with the notice requirements. It does not require public disclosure or submission to regulators — it is an internal record that must be kept updated as systems are added or retired.
5. An employer shall maintain an updated list of all automated decision systems currently in use by the employer to facilitate implementation of this section.
Pending 2025-01-01
G-01.1
Section 20(b)
Plain Language
Health insurance issuers must establish and maintain an AI systems program — encompassing governance, risk management, and internal audit functions — with policies and procedures that ensure compliance with this Act by all employees, officers, agents, and contractors involved in administering health insurance coverage. The issuer bears ultimate responsibility for noncompliance, including noncompliance by third-party vendors. Other persons involved in administering coverage are not relieved of their own liability for failing to cooperate with Department investigations. This effectively requires a formal governance program that extends compliance controls across the issuer's entire supply chain of AI-related services.
(b) A health insurance issuer shall ensure that its health insurance coverage is administered in conformity with this Act. The health insurance issuer's AI systems program shall include policies and procedures to ensure such conformity by all employees, directors, trustees, agents, representatives, and persons directly or indirectly contracted to administer the health insurance coverage. The health insurance issuer shall be responsible for any noncompliance under this Act with respect to its health insurance coverage. Nothing in this Section relieves any other person from liability for failure to comply with the Department's investigations or market conduct actions related to a health insurance issuer's compliance with this Act.
Failed 2026-01-01
G-01.1
Section 15
Plain Language
The Department of Innovation and Technology must adopt rules ensuring that businesses using AI systems comply with five governance principles: safety (systems must not cause harm), transparency (clear explanations of how systems work and decide), accountability (responsible parties must be identified), fairness (bias prevention and equitable treatment), and contestability (individuals can challenge AI decisions). This is a rulemaking mandate — the statute articulates high-level principles but delegates the substantive compliance requirements to the Department to define through rulemaking. Businesses cannot yet determine their specific compliance obligations until rules are adopted. Applies only to businesses with 10 or more employees (Section 25).
To address the concerns detailed in the findings in Section 5 of this Act and to ensure that negative impacts of AI system use are prevented, the Department of Innovation and Technology shall adopt rules as may be necessary to ensure that businesses using AI systems are compliant with the 5 principles of AI governance as follows: (1) Safety: Ensuring systems operate without causing harm to individuals. (2) Transparency: Providing clear and understandable explanations of how systems work and make decisions. (3) Accountability: Identifying and holding individuals or companies responsible for the system's performance and outcomes. (4) Fairness: Preventing and mitigating bias to ensure equitable treatment for all individuals. (5) Contestability: Allowing individuals to challenge and seek redress for decisions made by the system.
Pending 2025-06-01
G-01.1
Section 5 (definition of "AI systems program"), Section 10(a), Section 20
Plain Language
Every insurer authorized to do business in Illinois must develop, implement, and maintain a written AI systems program governing the responsible use of AI systems that make or support decisions related to regulated insurance practices. This is established by the definition of 'AI systems program' and reinforced by Section 10(a)'s reference to Department oversight of compliance with the insurer's AI systems program. The statute requires the program to be ongoing — developed, implemented, and maintained — not merely created once. The specifics of what the program must contain are likely to be fleshed out by Department rulemaking.
"AI systems program" means a written program for the responsible use of AI systems that makes or supports decisions related to regulated insurance practices to be developed, implemented, and maintained by all insurers authorized to do business in the State.
Pending 2027-01-01
G-01.1, G-01.2, G-01.3, G-01.6
Section 20(a)-(d)
Plain Language
Deployers must establish, document, implement, and maintain a governance program with reasonable administrative and technical safeguards to manage algorithmic discrimination risks. The program's safeguards must be proportionate to the tool's use, the deployer's size and resources, and the technical feasibility and cost of available risk management tools. The program must be designed to identify and implement discrimination safeguards, integrate impact assessment processes, conduct annual compliance reviews, retain impact assessment results for at least two years, and make reasonable adjustments in response to material changes in technology, risk profile, standards, or business operations. Deployers must designate at least one employee responsible for overseeing the governance program, who has authority to raise compliance concerns and whose employer must promptly and fully assess any issue raised. Small deployers (fewer than 25 employees, unless the tool impacts more than 999 people per year) are exempt.
(a) A deployer shall establish, document, implement, and maintain a governance program that contains reasonable administrative and technical safeguards to map, measure, manage, and govern the reasonably foreseeable risks of algorithmic discrimination associated with the use or intended use of an automated decision tool. The safeguards required by this subsection shall be appropriate to all of the following: (1) the use or intended use of the automated decision tool; (2) the deployer's role as a deployer; (3) the size, complexity, and resources of the deployer; (4) the nature, context, and scope of the activities of the deployer in connection with the automated decision tool; and (5) the technical feasibility and cost of available tools, assessments, and other means used by a deployer to map, measure, manage, and govern the risks associated with an automated decision tool. (b) The governance program required by this Section shall be designed to do all of the following: (1) identify and implement safeguards to address reasonably foreseeable risks of algorithmic discrimination resulting from the use or intended use of an automated decision tool; (2) if established by a deployer, provide for the performance of impact assessments as required by Section 10; (3) conduct an annual and comprehensive review of policies, practices, and procedures to ensure compliance with this Act; (4) maintain for 2 years after completion the results of an impact assessment; and (5) evaluate and make reasonable adjustments to administrative and technical safeguards in light of material changes in technology, the risks associated with the automated decision tool, the state of technical standards, and changes in business arrangements or operations of the deployer. (c) A deployer shall designate at least one employee to be responsible for overseeing and maintaining the governance program and compliance with this Act. An employee designated under this subsection shall have the authority to assert to the employee's employer a good faith belief that the design, production, or use of an automated decision tool fails to comply with the requirements of this Act. An employer of an employee designated under this subsection shall conduct a prompt and complete assessment of any compliance issue raised by that employee. (d) This Section does not apply to a deployer with fewer than 25 employees unless, as of the end of the prior calendar year, the deployer deployed an automated decision tool that impacted more than 999 people per year.
Pending 2027-01-01
G-01.5
Section 20(a)
Plain Language
Every two years, operators must engage an independent third-party auditor to assess their compliance with the entire Act — covering prohibited design practices, user safeguards, AI identity notifications, and crisis intervention protocols. Operators must then publish a high-level summary of the audit findings on their website, excluding confidential or proprietary information. The audit is a comprehensive compliance assessment, not limited to bias or safety — it covers all obligations under the Act.
(a) At least once every 2 years, an operator shall obtain an independent, third-party audit to assess the operator's compliance with this Act. The operator shall make publicly available on its website a high-level summary of the audit's findings, excluding confidential or proprietary information.
Passed 2025-03-13
G-01.1
Section 3(1)(a)-(b)
Plain Language
The Commonwealth Office of Technology must create an AI Governance Committee responsible for developing policy standards and guiding principles aligned with ISO/IEC 42001 to mitigate risks and protect citizen data and privacy. The Committee must also establish technology standards for how state agencies use generative AI and high-risk AI systems. This is an internal government governance program establishment obligation — it applies only to state agencies, not private-sector entities.
(1) The Commonwealth Office of Technology shall create an Artificial Intelligence Governance Committee to govern the use of artificial intelligence systems by state departments, state agencies, and state administrative bodies by: (a) Developing policy standards and guiding principles to mitigate risks and protect data and privacy of Kentucky citizens and businesses that adhere to the latest version of Standard ISO/IEC 42001 of the International Organization for Standardization; (b) Establishing technology standards to provide protocols and requirements for the use of generative artificial intelligence and high-risk artificial intelligence systems;
Passed 2025-03-13
G-01.3
Section 3(2)(a)-(b)
Plain Language
State agencies must verify their use and development of generative AI and high-risk AI systems and follow responsible, ethical, and transparent procedures. Specifically, all AI models must have comprehensive documentation available for review; human review and intervention must be required based on use case and risk level; and AI systems must be resilient, accountable, and explainable. This creates both a documentation obligation and a human oversight requirement for state agency AI deployments.
(2) The Artificial Intelligence Governance Committee shall develop policies and procedures to ensure that any department, program, cabinet, agency, or administrative body that utilizes and accesses the Commonwealth's information technology and technology infrastructure shall: (a) Verify the use and development of generative artificial intelligence systems and high-risk artificial intelligence systems; and (b) Act in compliance with responsible, ethical, and transparent procedures to implement the use of artificial intelligence technologies by: 1. Ensuring artificial intelligence models have comprehensive and complete documentation that is available for review and inspection; 2. Requiring review and intervention by humans dependent on the use case and potential risk for all outcomes from generative and high-risk artificial intelligence systems; and 3. Ensuring the use of generative artificial intelligence and high-risk artificial intelligence systems are resilient, accountable, and explainable.
Passed 2025-03-13
G-01.2
Section 3(7)
Plain Language
The Commonwealth Office of Technology must establish legal and ethical framework policies ensuring all state agency AI systems comply with existing laws, regulations, and guidelines. These policies must be updated at least annually to keep pace with evolving technology and industry best practices. This is a continuing governance maintenance obligation with a mandatory annual review cycle.
(7) The Commonwealth Office of Technology shall establish policies to encompass legal and ethical frameworks to ensure that any artificial intelligence systems shall align with existing laws, administrative regulations, and guidelines, which shall be updated at least annually to maintain compliance as technology and industry best practices evolve.
Passed 2025-03-13
G-01.1
Section 3(8)(a)-(b)
Plain Language
State agencies may not use a high-risk AI system to make a consequential decision without first designing and implementing a risk management policy and program. The policy must specify governing principles, processes, and responsible personnel, and must identify, mitigate, and document any bias risks in consequential decision-making. The policy must adhere to ISO/IEC 42001 or another recognized international AI risk management framework, and must be scaled to the deployer's size and complexity, the system's nature and intended use, and the sensitivity and volume of data processed. This is a mandatory prerequisite — no high-risk AI consequential decision is permitted without a conforming risk management program in place.
(8) (a) Operating standards for utilization of high-risk artificial intelligence systems shall prohibit the use of a high-risk artificial intelligence system to render a consequential decision without the design and implementation of a risk management policy and program for high-risk artificial intelligence systems. The risk management policy shall: 1. Specify principles, process, and personnel that shall be utilized to maintain the risk management program; and 2. Identify, mitigate, and document any bias or potential bias that is a potential consequence of use in making a consequential decision. (b) Each risk management policy designed and implemented shall at a minimum adhere to the latest version of Standard ISO/IEC 42001 of the International Organization for Standardization, or another national or internationally recognized risk management framework for artificial intelligence systems, and consider the: 1. Size and complexity of the deployer; 2. Nature, scope, and intended use of the high-risk artificial intelligence system and its deployer; and 3. Sensitivity and volume of data processed.
Passed 2025-03-13
KRS 42.726(2)(q)
Plain Language
The Commonwealth Office of Technology must establish, publish, maintain, and implement comprehensive policy standards and procedures for responsible, ethical, and transparent use of generative AI and high-risk AI by state agencies. These standards must cover procurement, implementation, ongoing assessment, data security and privacy, and acceptable use guidelines for high-risk AI integration. This is the enabling authority for the Office's AI governance role, codified as an ongoing duty.
(q) Establishing, publishing, maintaining, and implementing comprehensive policy standards and procedures for the responsible, ethical, and transparent use of generative artificial intelligence systems and high-risk artificial intelligence systems by departments, agencies, and administrative bodies, including but not limited to policy standards and procedures that: 1. Govern their procurement, implementation, and ongoing assessment; 2. Address and provide resources for security of data and privacy; and 3. Create guidelines for acceptable use policies for integrating high-risk artificial intelligence systems;
Pending 2026-08-01
G-01.3
R.S. 23:972(B)
Plain Language
Employers must maintain a current, updated inventory of all automated decision systems in use at the workplace. This is a continuing recordkeeping obligation — not a one-time exercise — and the list must reflect any additions or removals of ADS over time.
B. An employer shall maintain an updated list of all ADS currently in use.
Pre-filed 2025-07-07
G-01.1
Chapter 93M, Section 3(a)
Plain Language
Deployers of high-risk AI systems must establish and maintain a formal risk management program that identifies and mitigates known or foreseeable risks of algorithmic discrimination. The program must align with recognized industry standards, with the NIST AI Risk Management Framework cited as an example benchmark. This is a continuing obligation — the program must be maintained, not just created. Small businesses with fewer than 50 employees that do not use proprietary data to train AI systems are exempt per Section 5(1).
(a) Risk Management Policy: Deployers of high-risk AI systems must implement and maintain a risk management program that: (1) Identifies and mitigates known or foreseeable risks of algorithmic discrimination; (2) Aligns with industry standards, such as the National Institute of Standards and Technology (NIST) AI Risk Management Framework.
Pre-filed
G-01.1, G-01.2
Chapter 93M § 3(b)
Plain Language
Deployers must implement a risk management policy and program governing their deployment of high-risk AI systems. The program must identify, document, and mitigate algorithmic discrimination risks; specify the principles, processes, and personnel involved; and be iteratively reviewed and updated over the system's life cycle. The program's reasonableness is assessed relative to the NIST AI RMF, ISO/IEC 42001, or another recognized framework, as well as the deployer's size, system scope, and data sensitivity. A single program may cover multiple high-risk systems. Small deployers (fewer than 50 FTEs who do not use their own data to train) are exempt per subsection (f).
(b) (1) Not later than 6 months after the effective date of this act, and except as provided in subsection (f) of this section, a deployer of a high-risk artificial intelligence system shall implement a risk management policy and program to govern the deployer's deployment of the high-risk artificial intelligence system. The risk management policy and program must specify and incorporate the principles, processes, and personnel that the deployer uses to identify, document, and mitigate known or reasonably foreseeable risks of algorithmic discrimination. The risk management policy and program must be an iterative process planned, implemented, and regularly and systematically reviewed and updated over the life cycle of a high-risk artificial intelligence system, requiring regular, systematic review and updates. A risk management policy and program implemented and maintained pursuant to this subsection (b) must be reasonable considering: (i) (A) the guidance and standards set forth in the latest version of the "Artificial Intelligence Risk Management Framework" published by the National Institute of Standards and Technology in the United States Department of Commerce, standard ISO/IEC 42001 of the International Organization for Standardization, or another nationally or internationally recognized risk management framework for artificial intelligence systems, if the standards are substantially equivalent to or more stringent than the requirements of this chapter; or (B) any risk management framework for artificial intelligence systems that the attorney general, in the attorney general's discretion, may designate; (ii) the size and complexity of the deployer; (iii) the nature and scope of the high-risk artificial intelligence systems deployed by the deployer, including the intended uses of the high-risk artificial intelligence systems; and (iv) the sensitivity and volume of data processed in connection with the high-risk artificial intelligence systems deployed by the deployer. (2) a risk management policy and program implemented pursuant to subsection (b)(1) of this section may cover multiple high-risk artificial intelligence systems deployed by the deployer.
Pending 2025-01-14
G-01.3
Ch. 149B § 2(c)
Plain Language
Employers must maintain accurate records of all data collected through electronic monitoring for three years, enabling compliance with employee or commissioner data requests. Employee data must be destroyed no later than 37 months after collection unless the employee provides written, informed consent for longer retention. Employers must implement reasonable administrative, technical, and physical data security practices. Employees have the right to request corrections to erroneous data collected about them.
(c) An employer shall establish, maintain, and preserve for three years contemporaneous, true, and accurate records of data collected via an electronic monitoring tool to ensure compliance with employee or commissioner requests for data. The employer shall destroy any employee information collected via an electronic monitoring tool no later than thirty-seven months after collection unless the employee has provided written and informed consent to the retention of their data by the employer. An employer shall establish, implement and maintain reasonable administrative, technical and physical data security practices to protect the confidentiality, integrity and accessibility of employee data appropriate to the volume and nature of the employee data at issue. An employee shall have the right to request corrections to erroneous employee data.
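For teams operationalizing this kind of retention rule in a records system, the destruction deadline reduces to a small amount of date arithmetic. The sketch below is illustrative only: the field names are hypothetical, and it relies on the third-party dateutil package for exact month arithmetic. The statute dictates the periods; the code merely applies them.

```python
from datetime import datetime, timezone
from dateutil.relativedelta import relativedelta  # third-party: pip install python-dateutil

def must_destroy(collected_at: datetime, has_written_consent: bool,
                 now: datetime | None = None) -> bool:
    """True once a monitoring record passes the 37-month destruction
    deadline without written, informed consent to longer retention.
    Assumes timezone-aware datetimes throughout."""
    now = now or datetime.now(timezone.utc)
    deadline = collected_at + relativedelta(months=37)
    return not has_written_consent and now >= deadline
```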
Pending 2025-01-14
G-01.3, G-01.4
Ch. 149B § 3(c)
Plain Language
Employers and vendors must retain all documentation related to the design, development, use, and data of automated employment decision tools sufficient to support impact assessments. Employers must have licensed access to vendor-held documentation and must be able to share it with labor organizations as required by law or courts. Required documentation includes training data sources, technical specifications, developer identities, historical use data, and a full version history enabling attestation about the tool's state at any given time of an employment decision. Documentation must be stored per commissioner specifications and must be legible and accessible for auditors.
(c) An employer or its vendor shall retain all documentation pertaining to the design, development, use, and data of an automated employment decision tool that may be necessary to conduct an impact assessment. To the extent held by a vendor, the employer shall be granted a license to access this documentation and share this documentation with a labor organization to the extent required by federal or state law, or to the extent required by a court or agency in connection with employment or labor litigation. This includes but is not limited to the source of the data used to develop the tool, the technical specifications of the tool, individuals involved in the development of the tool, and historical use data for the tool. Such documentation must include a historical record of versions of the tool, such that an employer shall be able to attest in the event of litigation disputing an employment decision, the nature and specifications of the tool as it was used at the time of that employment decision. Such documentation shall be stored in accordance with such record-keeping, data retention, and security requirements as the commissioner may specify, and in such a manner as to be legible and accessible to the party conducting an impact assessment.
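The attestation requirement in particular implies an append-only version history keyed by effective date, so the employer can answer "which version of the tool was in use when this decision was made?" A minimal sketch, with hypothetical names (ToolVersion, as_of), of the lookup this implies:

```python
import bisect
from dataclasses import dataclass
from datetime import datetime

@dataclass(frozen=True)
class ToolVersion:
    effective_at: datetime
    version: str
    spec_uri: str  # pointer to the retained technical specification

class VersionHistory:
    """Append-only version log answering: which version of the tool was
    in effect at the time of a given employment decision?"""

    def __init__(self) -> None:
        self._versions: list[ToolVersion] = []

    def record(self, v: ToolVersion) -> None:
        self._versions.append(v)
        self._versions.sort(key=lambda x: x.effective_at)

    def as_of(self, decision_time: datetime) -> ToolVersion:
        times = [v.effective_at for v in self._versions]
        i = bisect.bisect_right(times, decision_time)
        if i == 0:
            raise LookupError("no tool version on record before the decision time")
        return self._versions[i - 1]
```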
Pending 2026-01-01
G-01.3, G-01.4
Sec. 7(1)(d)
Plain Language
Large developers must create and retain detailed records of all critical risk assessments — including the specific tests used and results obtained — for at least 5 years. Records must be sufficiently detailed that a qualified third party could replicate the testing. This is a contemporaneous recordkeeping obligation that supports both the annual audit requirement (section 9) and the attorney general's inspection rights.
(d) Record and retain for 5 years any specific tests used and results obtained as a part of an assessment of critical risk with sufficient detail for qualified third parties to replicate the testing.
Pending 2026-01-01
G-01.4
Sec. 7(3)-(4)
Plain Language
All documents published under this act must appear on a conspicuous page of the developer's website. Developers (and auditors for audit reports) may redact for trade secrets, public safety, national security, or legal compliance, but must: (1) retain the unredacted version for at least 5 years and make it available to the attorney general on request, and (2) describe the nature and justification of each redaction in the published version. This creates a dual-track system — the public gets a redacted version with explained redactions, while the attorney general can inspect unredacted originals.
(3) If a large developer publishes a document in accordance with the requirements of this act, the large developer shall publish the information on a conspicuous page on the large developer's website. The large developer may redact the document as reasonably necessary to protect the large developer's trade secrets, public safety, or national security, or to comply with applicable law. An auditor required to perform an audit and produce a report under section 9 may redact information from the report using the same procedure described in this subsection before the publication of that report under section 9(3). (4) If a large developer or auditor makes a redaction under subsection (3), the large developer or auditor shall do both of the following: (a) Retain an unredacted version of the document for not less than 5 years and provide the attorney general with the ability to inspect the unredacted document on request. (b) Describe the character and justification of the redactions in the published version of the document.
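A compliance system for this dual-track scheme needs to pair every published redaction with its stated character and justification, and to keep a pointer to the retained unredacted original. The record shape below is a sketch under assumed names, not a prescribed format:

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class Redaction:
    passage_id: str    # locator for the redacted span in the document
    character: str     # e.g. "trade secret", "public safety", "national security"
    justification: str # published alongside the redacted version

@dataclass(frozen=True)
class PublishedDocument:
    public_uri: str                    # conspicuous page on the developer's website
    redactions: tuple[Redaction, ...]  # each described in the published version
    unredacted_archive_uri: str        # retained at least 5 years, AG-inspectable
    retained_until: date               # earliest permissible disposal date
```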
Pending 2026-01-01
G-01.5
Sec. 9(1)-(4)
Plain Language
At least once per year, large developers must retain a reputable third-party auditor to assess: (1) whether the developer complied with its own safety and security protocol and document any noncompliance, (2) whether the protocol was stated clearly enough to determine compliance, and (3) whether the developer made false or misleading statements or violated publication/redaction requirements. The auditor must include at least one individual with corporate compliance expertise and one with technical expertise in foundation model safety. The developer must grant the auditor full access to all materials produced under the act and any other reasonably necessary materials. The audit report must be publicly published within 90 days of completion (subject to the redaction procedures in section 7(3)-(4)).
(1) Beginning on January 1, 2026, not less than once per year, a large developer shall retain a reputable third-party auditor to produce a report that assesses all of the following: (a) If the large developer has complied with the large developer's safety and security protocol and any instances of noncompliance. (b) Any instance where the large developer's safety and security protocol was not stated clearly enough to determine if the large developer has complied with the safety and security protocol. (c) Any instance that the auditor believes the large developer violated section 7(2), (3), or (4). (2) A large developer shall grant the auditor access to all materials produced to comply with this act and any other materials reasonably necessary to perform the assessment under subsection (1). (3) Not more than 90 days after the completion of the auditor's report under subsection (1), a large developer shall conspicuously publish that report. (4) In conducting an audit under this section, an auditor shall employ or contract 1 or more individuals with expertise in corporate compliance and 1 or more individuals with technical expertise in the safety of foundation models.
Pending 2026-02-24
G-01.3
Sec. 7(1)-(3)
Plain Language
Employers must delete collected covered individual data within 3 years after the monitoring or decision purpose is achieved (or as a CBA specifies), and must immediately delete any data not actually used. Selling or licensing covered individual data — including deidentified or aggregated data — is categorically prohibited. Sharing data with government is prohibited except to provide information to the Department, comply with law, or comply with a court order. These are strict data governance guardrails that go beyond typical retention periods by requiring immediate deletion of unused data and prohibiting any commercial transfer.
Sec. 7. (1) An employer that collects a covered individual's data shall retain the data for not more than 3 years after the date on which the purpose for using the electronic monitoring tool or automated decisions tool is achieved, unless otherwise specified by a collective bargaining agreement. If the employer does not use any specific data of a covered individual, the employer must delete that data immediately. (2) An employer shall not sell or license a covered individual's data, including, but not limited to, data that is deidentified or aggregated. (3) An employer shall not share data collected under section 4 or 5 with this state or a local unit of government unless otherwise necessary to do any of the following: (a) Provide information to the department. (b) Comply with the requirements of federal, state, or local law. (c) Comply with a court-issued subpoena, warrant, or order.
Pending 2026-02-24
G-01.3, G-01.4
Sec. 9(4)-(7)
Plain Language
Employers must retain comprehensive documentation related to the design, development, use, and data of all electronic monitoring and automated decision tools — including data sources, technical specifications, developer identities, historical use data, and version history. Service providers must grant employers access to this documentation. Employers must share documentation with labor organizations when required by law or in connection with employment litigation. The Director will prescribe storage standards to ensure legibility and accessibility for third-party assessors. These are ongoing documentation obligations that support the impact assessment requirements in Sec. 9(1)-(3).
(4) An employer shall retain all documentation pertaining to the design, development, use, and data of an electronic monitoring tool or automated decisions tool that may be necessary to conduct an impact assessment. The documentation includes, but is not limited to, the source of the data used to develop the tool, the technical specifications of the tool, individuals involved in the development of the tool, historical use data for the tool, and a historical record of the versions of the tool the employer uses. (5) A service provider that contracts with an employer to provide electronic monitoring or automated decisions shall allow the employer access to the documentation described in subsection (4). (6) An employer shall share the documentation described in subsection (4) with a labor organization as required under law or as required by a court or agency in connection with any employment or labor litigation to which the employer is a party. (7) The documentation described in subsection (4) must be stored in a manner as prescribed by the director. The director shall prescribe the manner so that the documentation is legible and accessible to the party that conducts an impact assessment of the tool.
Pending 2026-08-01
G-01.3, G-01.4
Minn. Stat. § 181.9923, subd. 1(a)-(c)
Plain Language
Employers must retain all worker data and input/output data collected, used, or produced by an automated decision system — including corroborating evidence from human reviewers — for 36 months from the most recent collection, production, or use. The data must then be destroyed no later than 37 months after that point, unless the worker has given written, informed consent to continued retention. Records must be maintained in a form that supports compliance with worker access requests and Commissioner data requests. Employers must also protect worker data using security practices consistent with applicable data and cyber privacy laws, appropriate to the volume and nature of data collected.
Subdivision 1. Data records. (a) Employers must maintain records of worker data collected, used, or produced by an automated decision system and any input or output data used or produced by the automated decision system or used as corroborating evidence by a human reviewer for 36 months after the data's most recent collection, production, or use to ensure compliance with requests for data from workers or the commissioner of labor and industry. (b) Employers must destroy any worker data collected, used, or produced by an automated decision system and any input or output data used or produced by the automated decision system or used as corroborating evidence by a human reviewer no later than 37 months after its most recent collection, production, or use, unless the worker has provided written and informed consent to the retention of the worker's data by the employer. (c) Employers must protect the confidentiality, integrity, and accessibility of worker data using data security practices consistent with data and cyber privacy laws and appropriate to the volume and nature of the worker data collected.
Pending 2026-01-01
G-01.2
§ 325M.41, subd. 3(a)-(b)
Plain Language
Developers must annually review and update their safety and security protocol to reflect changes in model capabilities and evolving industry best practices. If a material modification results from the review, the developer must re-publish the updated protocol (with appropriate redactions) and re-transmit it to the attorney general, following the same publication requirements as the initial deployment. This is not a one-time exercise — it is a continuing annual obligation.
(a) A developer must (1) conduct an annual review of the safety and security protocol required under this section to account for changes to the capabilities of the artificial intelligence model and industry best practices; and (2) modify the safety and security protocol. (b) If a material modification is made to the safety and security protocol, the developer must publish the safety and security protocol in the same manner required under subdivision 1, clause (3).
Pending 2026-08-01
G-01.3
Minn. Stat. § 325M.41, subd. 1(5)
Plain Language
Developers must create and retain detailed records of all testing — both tests required by law and tests required by the developer's own safety protocol — with enough specificity that a third party could replicate the testing procedure. Retention is required for the full deployment period plus five years. This is a contemporaneous documentation obligation: the records must be created at the time of testing and retained, not reconstructed later.
Before deploying an artificial intelligence model, a developer must: (5) record and retain information on the specific tests and test results used in any assessment of the artificial intelligence model required under this section or by the developer's safety and security protocol that provides sufficient detail for third parties to replicate the testing procedure for the entire period of time an artificial intelligence model is deployed, plus five years;
Pending 2026-08-01
G-01.2
Minn. Stat. § 325M.41, subd. 3(a)-(b)
Plain Language
Developers must annually review their safety and security protocol, updating it to reflect both changes in the AI model's capabilities and evolving industry best practices. When material modifications are made, the developer must re-publish the protocol publicly (with appropriate redactions) and re-transmit a copy to the attorney general, following the same process as the initial pre-deployment publication. This is a continuing obligation — annual review is mandatory regardless of whether the developer believes changes are needed.
(a) A developer must (1) conduct an annual review of the safety and security protocol required under this section to account for changes to the capabilities of the artificial intelligence model and industry best practices; and (2) modify the safety and security protocol. (b) If a material modification is made to the safety and security protocol, the developer must publish the safety and security protocol in the same manner required under subdivision 1, clause (3).
Pending 2026-09-01
G-01.3, G-01.4
§ 181.9923, Subd. 1(a)-(c)
Plain Language
Employers must retain all worker data collected, used, or produced by an automated decision system — including input/output data and human reviewer corroborating evidence — for 36 months from the most recent collection, production, or use. The data must then be destroyed no later than 37 months unless the worker provides written, informed consent to longer retention. Employers must also protect worker data using security practices consistent with applicable data and cyber privacy laws and appropriate to the volume and nature of data collected. This creates both a minimum retention floor and a mandatory destruction ceiling.
Subdivision 1. Data records. (a) Employers must maintain records of worker data collected, used, or produced by an automated decision system and any input or output data used or produced by the automated decision system or used as corroborating evidence by a human reviewer for 36 months after the data's most recent collection, production, or use to ensure compliance with requests for data from workers or the commissioner of labor and industry. (b) Employers must destroy any worker data collected, used, or produced by an automated decision system and any input or output data used or produced by the automated decision system or used as corroborating evidence by a human reviewer no later than 37 months after its most recent collection, production, or use, unless the worker has provided written and informed consent to the retention of the worker's data by the employer. (c) Employers must protect the confidentiality, integrity, and accessibility of worker data using data security practices consistent with data and cyber privacy laws and appropriate to the volume and nature of the worker data collected.
Pending 2026-08-28
G-01.3
RSMo § 1.566(1)
Plain Language
Any private entity that possesses biometric identifiers or biometric information must create and publicly publish a written policy establishing a retention schedule and guidelines for permanent destruction of biometric data. Destruction must occur when the original purpose for collection has been satisfied or within one year of the individual's last interaction with the entity — whichever comes first. The entity must comply with its own published schedule unless a valid warrant or subpoena requires retention. This is both a documentation and a data lifecycle management obligation.
1. Any private entity in possession of biometric identifiers or biometric information shall develop a written policy, made available to the public, establishing a retention schedule and guidelines for permanently destroying biometric identifiers and biometric information when the initial purpose for collecting or obtaining such identifiers or information has been satisfied or within one year of the individual's last interaction with the private entity, whichever occurs first. Absent a valid warrant or subpoena issued by a court of competent jurisdiction, a private entity in possession of biometric identifiers or biometric information shall comply with its established retention schedule and destruction guidelines.
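The "whichever occurs first" trigger is easy to get wrong in a retention schedule, because one branch may be unknown until the purpose is actually satisfied. A hedged sketch of the deadline computation, with assumed field names and the third-party dateutil package:

```python
from datetime import date
from dateutil.relativedelta import relativedelta  # third-party: pip install python-dateutil

def destruction_deadline(last_interaction: date,
                         purpose_satisfied_on: date | None) -> date:
    """Earlier of: the date the collection purpose was satisfied, or one
    year after the individual's last interaction with the entity."""
    one_year_out = last_interaction + relativedelta(years=1)
    if purpose_satisfied_on is None:
        return one_year_out  # purpose still pending; the one-year clock governs
    return min(purpose_satisfied_on, one_year_out)
```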
Pending 2026-01-01
G-01.3, G-01.4
G.S. 114B-4(a)-(b)(1)
Plain Language
Licensed health information chatbot operators must maintain professional liability insurance at a Department-specified amount, implement industry-standard encryption for data in transit and at rest, maintain detailed access logs, and conduct security audits at least every six months. These are ongoing operational and governance obligations that require contemporaneous documentation of security measures and audit results.
(a) A licensee shall maintain professional liability insurance in an amount not less than the amount per occurrence required by the Department. (b) A licensee shall do all of the following: (1) Implement industry-standard encryption for data in transit and at rest, maintain detailed access logs, and conduct regular security audits no less than once every six (6) months.
Pending 2026-01-01
G-01.5
G.S. 114B-4(e)
Plain Language
Licensed health information chatbot operators must conduct regular inspections and undergo an annual third-party audit. All inspection and audit results must be provided to the Department of Justice. The statute does not specify what the inspections or audits must cover, leaving scope to be determined by Department rules, but the annual third-party audit is mandatory, not optional.
(e) A licensee shall conduct regular inspections and perform an annual third-party audit. Results of all inspections and audits must be made available to the Department.
Pending 2027-01-01
G-01.5
G.S. § 114B-4(e)
Plain Language
Licensees must conduct regular internal inspections and an annual independent third-party audit of their health-information chatbot. All inspection and audit results must be made available to the Department of Justice. This creates both an ongoing self-inspection obligation and a mandatory annual external audit with regulatory disclosure.
A licensee shall conduct regular inspections and perform an annual third-party audit. Results of all inspections and audits must be made available to the Department.
Pending 2027-01-01
G-01.3, G-01.4
G.S. § 114B-6(f)
Plain Language
Manufacturers and importers of licensed health-information chatbots must establish and maintain records and submit reports as required by the Director through regulation. The specific records and reports will be defined by regulation, but the obligation to establish and maintain a recordkeeping system capable of producing documentation on regulatory demand is immediate. This is a standing recordkeeping obligation tied to the Director's regulatory authority.
Every person who is a manufacturer or importer of a licensed chatbot under this Chapter shall establish and maintain such records, and make such reports to the Director, as the Director may by regulation reasonably require to assure the safety and effectiveness of such devices.
Pending 2027-01-01
G-01.1
G.S. § 114B-4(a)
Plain Language
Licensees operating health-information chatbots must maintain professional liability insurance at a minimum coverage level set by the Department of Justice. This is a financial responsibility requirement that ensures chatbot operators have insurance to cover potential harms. The specific minimum coverage amount will be determined by Department rulemaking.
A licensee shall maintain professional liability insurance in an amount not less than the amount per occurrence required by the Department.
Failed 2027-01-01
G-01.3
Sec. 4(6)(a)-(b)
Plain Language
When publishing safety plan documents, large frontier developers and large chatbot providers may redact information to protect trade secrets, cybersecurity, public safety, national security, or to comply with law. However, any redaction must be described and justified in the published version (to the extent the justifying concerns permit), and the unredacted version must be retained for five years. This creates both a permissive redaction framework and a mandatory recordkeeping obligation for the unredacted originals.
(6)(a) When a large frontier developer or large chatbot provider publishes documents to comply with this section, the large frontier developer or large chatbot provider may make redactions to those documents that are necessary to protect the large frontier developer's trade secrets, the large frontier developer's or large chatbot provider's cybersecurity, public safety, or the national security of the United States or to comply with any federal or state law. (b) If a large frontier developer or large chatbot provider redacts information in a document pursuant to subdivision (6)(a) of this section, the large frontier developer or large chatbot provider shall describe the character and justification of the redaction in any published version of the document to the extent permitted by the concerns that justify redaction and shall retain the unredacted information for five years.
Failed 2026-02-01
G-01.1
Sec. 4(2)(a)-(b)
Plain Language
Deployers must implement a risk management policy and program governing their deployment of high-risk AI systems. Conformity with the NIST AI RMF or ISO/IEC 42001 (as of January 1, 2025) creates a presumption of compliance. A single program may cover multiple high-risk AI systems. Small deployers (fewer than 50 FTEs who do not use their own data to train) are exempt under Section 4(6).
(2)(a) Except as otherwise provided in subsection (6) of this section, on and after February 1, 2026, a deployer of a high-risk artificial intelligence system shall implement a risk management policy and program to govern the deployer's deployment of the high-risk artificial intelligence system. High-risk artificial intelligence systems that are in conformity with the guidance and standards set forth in the following as of January 1, 2025, shall be presumed to be in conformity with this section: (i) The Artificial Intelligence Risk Management Framework published by the National Institute of Standards and Technology; or (ii) The standard ISO/IEC 42001 of the International Organization for Standardization. (b) Any risk management policy and program implemented pursuant to subdivision (a) of this subsection may cover multiple high-risk artificial intelligence systems deployed by the deployer.
Pending
G-01.3, G-01.4
Section 5(b)
Plain Language
Employers and public entities must maintain complete and accurate records for at least three years, covering all data collected by monitoring tools, all data used by automated decision systems for outputs, all performance evaluations, validation results, and impact assessments. Data for which an applicant has exercised their destruction right is exempt once destroyed. All data must be destroyed no later than 37 months after collection unless the individual provides uncoerced written consent for longer retention. This creates both a minimum retention floor (three years for compliance documentation) and a maximum retention ceiling (37 months for personal data).
b. An employer or public entity shall make, keep, and preserve, for not less than three years, true and accurate records, including complete records of data and information about an employee or applicant, or service beneficiary, or applicant for employment collected by an EMT or other surveillance and all data and information used by an AEDS for outputs concerning the employee, service beneficiary, or applicant, and all performance evaluations, validation results and impact assessments. Any data or information for which an applicant has exercised their right to have destroyed pursuant to subsection a. of this section shall be exempt from the record retention requirements of this subsection once the records are destroyed. The employer or public entity shall destroy the data and information no later than 37 months after collection unless the employee, service beneficiary, or applicant has provided uncoerced written consent for the employer or public entity to retain them.
Pending 2027-01-01
G-01.1, G-01.2
GBL § 1552(2)(a)-(b)
Plain Language
Deployers must implement and maintain a risk management policy and program governing their high-risk AI system deployments. The program must specify the principles, processes, and personnel used to identify, document, and mitigate algorithmic discrimination risks. Both the policy and program must be iterative and regularly reviewed and updated over the system lifecycle. Reasonableness is assessed against NIST AI RMF, ISO/IEC 42001, or a substantially equivalent framework, considering the deployer's size, complexity, system scope, and data sensitivity. A single program may cover multiple high-risk systems. Deployers may be exempt under § 1552(7) if the developer contractually assumes these duties and certain other conditions are met.
(a) Beginning on January first, two thousand twenty-seven, and except as provided in subdivision seven of this section, each deployer of a high-risk artificial intelligence decision system shall implement and maintain a risk management policy and program to govern such deployer's deployment of the high-risk artificial intelligence decision system. The risk management policy and program shall specify and incorporate the principles, processes, and personnel that the deployer shall use to identify, document, and mitigate any known or reasonably foreseeable risks of algorithmic discrimination. The risk management policy shall be the product of an iterative process, the risk management program shall be an iterative process and both the risk management policy and program shall be planned, implemented, and regularly and systematically reviewed and updated over the lifecycle of the high-risk artificial intelligence decision system. Each risk management policy and program implemented and maintained pursuant to this subdivision shall be reasonable, considering: (i) the guidance and standards set forth in the latest version of: (A) the "Artificial Intelligence Risk Management Framework" published by the national institute of standards and technology; (B) ISO or IEC 42001 of the international organization for standardization; or (C) a nationally or internationally recognized risk management framework for artificial intelligence decision systems, other than the guidance and standards specified in clauses (A) and (B) of this subparagraph, that imposes requirements that are substantially equivalent to, and at least as stringent as, the requirements established pursuant to this section for risk management policies and programs; (ii) the size and complexity of the deployer; (iii) the nature and scope of the high-risk artificial intelligence decision systems deployed by the deployer, including, but not limited to, the intended uses of such high-risk artificial intelligence decision systems; and (iv) the sensitivity and volume of data processed in connection with the high-risk artificial intelligence decision systems deployed by the deployer. (b) A risk management policy and program implemented and maintained pursuant to paragraph (a) of this subdivision may cover multiple high-risk artificial intelligence decision systems deployed by the deployer.
Pending 2027-01-01
G-01.1, G-01.2, G-01.3
GBL § 1553(1)(a)
Plain Language
Developers of general-purpose AI models must create, maintain, and annually review technical documentation covering: training and testing processes, compliance evaluation results, intended tasks, intended downstream integration contexts, acceptable use policies, release date, distribution methods, and input/output modalities and formats. Documentation depth should be proportionate to the model's size and risk profile. This is a continuing obligation — documentation must be kept current. Open-source models released under qualifying free licenses may be partially exempt under § 1553(2)(a), and models used exclusively for internal purposes are fully exempt under § 1553(2)(b). Trade secrets are protected under § 1553(3).
Beginning on January first, two thousand twenty-seven, each developer of a general-purpose artificial intelligence model shall, except as provided in subdivision two of this section: (a) create and maintain technical documentation for the general-purpose artificial intelligence model, which shall: (i) include: (A) the training and testing processes for such general-purpose artificial intelligence model; and (B) the results of an evaluation of such general-purpose artificial intelligence model performed to determine whether such general-purpose artificial intelligence model is in compliance with the provisions of this article; (ii) include, as appropriate, considering the size and risk profile of such general-purpose artificial intelligence model, at least: (A) the tasks such general-purpose artificial intelligence model is intended to perform; (B) the type and nature of artificial intelligence decision systems in which such general-purpose artificial intelligence model is intended to be integrated; (C) acceptable use policies for such general-purpose artificial intelligence model; (D) the date such general-purpose artificial intelligence model is released; (E) the methods by which such general-purpose artificial intelligence model is distributed; and (F) the modality and format of inputs and outputs for such general-purpose artificial intelligence model; and (iii) be reviewed and revised at least annually, or more frequently, as necessary to maintain the accuracy of such technical documentation;
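Teams tracking this documentation obligation could model the minimum required contents as a structured record with a review timestamp. The schema below is a sketch; the field names are ours, not the statute's:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class GPAITechnicalDocumentation:
    """Minimum contents per the summary above; illustrative names only."""
    training_and_testing_processes: str
    compliance_evaluation_results: str
    intended_tasks: list[str]
    intended_integration_contexts: list[str]
    acceptable_use_policy_uri: str
    release_date: date
    distribution_methods: list[str]
    io_modalities: dict[str, list[str]]  # e.g. {"input": ["text"], "output": ["text"]}
    last_reviewed: date                  # must be within the past year
```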
Pending 2027-01-01
G-01.1
GBL § 1553(2)(a)-(d)
Plain Language
Developers of general-purpose AI models may be exempt from the technical documentation creation/maintenance and annual downstream documentation review requirements if: (1) the model is released under a qualifying open-source license with publicly available parameters (unless deployed as high-risk), or (2) the model is not offered commercially, not consumer-facing, and used solely for internal purposes. Internal management models (office supplies, payments) are fully exempt. However, developers claiming the internal-use-only exemption under (a)(ii) must still establish and maintain an AI risk management framework with governance, risk mapping, risk management, and risk measurement functions. All exemptions carry a burden of proof on the developer. This provision creates a conditional governance obligation for internal-use GPAI developers even while exempting them from the full technical documentation requirements.
(a) The provisions of paragraph (a) and subparagraph (iii) of paragraph (b) of subdivision one of this section shall not apply to a developer that develops, or intentionally and substantially modifies, a general-purpose artificial intelligence model on or after January first, two thousand twenty-seven, if: (i) (A) the developer releases such general-purpose artificial intelligence model under a free and open-source license that allows for: (I) access to, and modification, distribution, and usage of, such general-purpose artificial intelligence model; and (II) the parameters of such general-purpose artificial intelligence model to be made publicly available pursuant to clause (B) of this subparagraph; and (B) unless such general-purpose artificial intelligence model is deployed as a high-risk artificial intelligence decision system, the parameters of such general-purpose artificial intelligence model, including, but not limited to, the weights and information concerning the model architecture and model usage for such general-purpose artificial intelligence model, are made publicly available; or (ii) the general-purpose artificial intelligence model is: (A) not offered for sale in the market; (B) not intended to interact with consumers; and (C) solely utilized: (I) for an entity's internal purposes; or (II) pursuant to an agreement between multiple entities for such entities' internal purposes. (b) The provisions of this section shall not apply to a developer that develops, or intentionally and substantially modifies, a general-purpose artificial intelligence model on or after January first, two thousand twenty-seven, if such general purpose artificial intelligence model performs tasks exclusively related to an entity's internal management affairs, including, but not limited to, ordering office supplies or processing payments. (c) A developer that takes any action under an exemption pursuant to paragraph (a) or (b) of this subdivision shall bear the burden of demonstrating that such action qualifies for such exemption. (d) A developer that is exempt pursuant to subparagraph (ii) of paragraph (a) of this subdivision shall establish and maintain an artificial intelligence risk management framework, which shall: (i) be the product of an iterative process and ongoing efforts; and (ii) include, at a minimum: (A) an internal governance function; (B) a map function that shall establish the context to frame risks; (C) a risk management function; and (D) a function to measure identified risks by assessing, analyzing and tracking such risks.
Pending 2025-07-26
G-01.6
State Tech. Law § 516(1)-(3)
Plain Language
Every operator of a licensed high-risk AI system must establish an ethics and risk management board of at least five individuals. Board members must be independent — they cannot be members, officers, or directors of the operator's entity and need not be employed by the operator. The board must assess the ethical implications of all possible use cases (intended and unintended, likely and unlikely) and the system's current operational outcomes. Entity operators with multiple licensed systems need only one board. The board must adopt its own governance rules, which cannot conflict with the statute.
1. Every operator of a licensed high-risk advanced artificial intelligence system or systems shall establish an ethics and risk management board composed of no less than five individuals who shall have the responsibility to assess the ethical implications of all possible use cases of the system, whether such use cases are intended or unintended, and whether likely or unlikely to be used, and the current operational outcomes of the system. Such operator, other than an operator who is a natural person, operating more than one high-risk advanced artificial intelligence system with a supplemental license shall not be required to have more than one ethics and risk management board for each system. 2. No member of an ethics and risk management board shall be a member, officer, or director within the operator's entity. No member shall be required to be employed by the operator. 3. Such board shall adopt rules governing its decision-making processes, duties and responsibilities. Such rules shall not conflict with the provisions of this article.
Pending 2025-07-26
G-01.3, G-01.4
State Tech. Law § 524
Plain Language
Every licensed high-risk AI system must automatically generate chronological logs every time it operates. Logs must record significant or notable occurrences, actions, and anomalies. The Secretary sets detailed standards for: what events must be logged, log format, who may access logs, encryption and cybersecurity protocols, and preservation and disposal procedures. Logs must be preserved for 10 years from generation and are subject to Secretary inspection. This is one of the longest recordkeeping retention periods in AI regulation — most jurisdictions require 2-5 years.
Every time a licensee's system operates it shall automatically generate a log. Standards related to the specific types of events that are required to be logged, the format in which logs must be kept, the individuals or entities permitted to access logs and the conditions governing such access, the encryption and cybersecurity protocols to be applied to logs, the procedures for both the preservation and disposal of logs, and any other actions pertinent to log management shall conform to the standards set by the secretary. Such logs shall be preserved for a period of ten years from the date they are generated and shall be subject to inspection under section five hundred twenty-six of this article.
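On the engineering side, this reads as a requirement for structured, per-operation event logging with a 10-year retention policy applied downstream. A minimal sketch, assuming a JSON-lines format; the secretary's standards, not this code, would govern the real event taxonomy, format, access controls, and encryption:

```python
import json
import logging
from datetime import datetime, timezone

logger = logging.getLogger("licensed_system.operations")

def log_operation(event_type: str, detail: dict) -> None:
    """Emit one chronological record per system operation; the storage
    tier holding these records must keep them for 10 years from generation."""
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "event": event_type,  # significant occurrences, actions, anomalies
        "detail": detail,
    }
    logger.info(json.dumps(record))
```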
Pending 2025-07-26
G-01.3, G-01.4
State Tech. Law § 527(1)-(2)
Plain Language
Every operator must maintain all books, records, source code, and logs as required by the Secretary, with minimum requirements including all system-generated logs and a backup of every version of the system, stored securely as prescribed. Operators must file an annual report with the Secretary covering business and operations for the preceding calendar year, subscribed under penalties of perjury. The Secretary may also require additional regular or special reports at any time. All reports must be affirmed as true under penalty of perjury. The system version backup requirement is distinctive — operators must maintain a historical archive of every version of their AI system.
1. Every operator shall maintain such books, records, source code, and logs as the secretary shall require provided however that every operator shall, at least, maintain a copy of all logs generated from the system as well as a backup of every version of the system which shall be stored in a safe manner as prescribed by the secretary. 2. By a date to be set by the secretary, each operator shall annually file a report with the secretary giving such information as the secretary may require concerning the business and operations during the preceding calendar year of the operator within the state under the authority of this article. Such report shall be subscribed and affirmed as true by the operator under the penalties of perjury and be in the form prescribed by the secretary. In addition to such annual reports, the secretary may require of operators such additional regular or special reports as the secretary may deem necessary to the proper supervision of operators under this article. Such additional reports shall be in the form prescribed by the secretary and shall be subscribed and affirmed as true under the penalties of perjury.
Pending 2025-09-02
G-01.2
Gen. Bus. Law § 1421(3)
Plain Language
Large developers must conduct an annual review of their safety and security protocol, considering changes in frontier model capabilities and industry best practices. If modifications result from the review, the updated protocol must be published with appropriate redactions and transmitted to the Division of Homeland Security and Emergency Services in the same manner as the initial publication. This is an ongoing obligation — it is not satisfied by the initial pre-deployment implementation.
A large developer shall conduct an annual review of any safety and security protocol required by this section to account for any changes to the capabilities of their frontier models and industry best practices and, if necessary, make modifications to such safety and security protocol. If any modifications are made, the large developer shall publish the safety and security protocol in the same manner as required pursuant to paragraph (c) of subdivision one of this section.
Pending 2025-09-02
G-01.5
Gen. Bus. Law § 1421(4)(a)-(e)
Plain Language
Large developers must annually retain an independent third-party auditor to assess compliance with all requirements of § 1421. The auditor must have access to unredacted materials, and the audit report must include: a detailed compliance assessment, identified noncompliance instances with improvement recommendations, an assessment of internal controls and designated senior compliance personnel, and the lead auditor's certification. The developer must retain the unredacted report for deployment plus five years, publicly publish an appropriately redacted version, transmit the redacted version to the Division of Homeland Security and Emergency Services, and provide unredacted access to the Division or Attorney General upon request (federal-law redactions only). The 90-day grace period applies to entities that newly qualify as large developers after the act's effective date.
(a) Beginning on the effective date of this article, or ninety days after a developer first qualifies as a large developer, whichever is later, a large developer shall annually retain a third party to perform an independent audit of compliance with the requirements of this section. Such third party shall conduct audits consistent with best practices. (b) The third party shall be granted access to unredacted materials as necessary to comply with the third party's obligations under this subdivision. (c) The third party shall produce a report including all of the following: (i) A detailed assessment of the large developer's steps to comply with the requirements of this section; (ii) If applicable, any identified instances of noncompliance with the requirements of this section, and any recommendations for how the developer can improve its policies and processes for ensuring compliance with the requirements of this section; (iii) A detailed assessment of the large developer's internal controls, including its designation and empowerment of senior personnel responsible for ensuring compliance by the large developer, its employees, and its contractors; and (iv) The signature of the lead auditor certifying the results of the audit. (d) The large developer shall retain an unredacted copy of the report for as long as a frontier model is deployed plus five years. (e) (i) The large developer shall conspicuously publish a copy of the third party's report with appropriate redactions and transmit a copy of such redacted report to the division of homeland security and emergency services. (ii) The large developer shall grant the division of homeland security and emergency services or the attorney general access to the third party's report, with redactions only to the extent required by federal law, upon request.
Pending 2025-06-04
G-01.1
Gen. Bus. Law § 390-f(2)(a)
Plain Language
Every entity doing business or offering products to consumers in New York must develop a responsible capability scaling policy governing its use and development of AI. The policy must constitute a set of best practices that identify, monitor, and rectify or mitigate risk of harm. This is an extremely broad mandate — it applies to any entity that uses or develops AI, with no size threshold, compute threshold, or risk-level trigger. The Chief Information Officer may issue waivers or designate exempt categories, which may narrow the practical scope considerably once rules are promulgated.
Every person, firm, partnership, association or corporation doing business or offering products to consumers in New York state shall develop a responsible capability scaling policy for the use and development of artificial intelligence by such entity.
Pending 2026-06-09
G-01.1, G-01.2
Civ. Rights Law § 89(1)–(2)
Plain Language
Every developer and deployer of high-risk AI systems must plan, document, and implement a risk management policy and program covering the principles, processes, and personnel used to identify, document, and mitigate foreseeable algorithmic discrimination risks. The program must be iterative and systematically reviewed and updated over the system's life cycle. Reasonableness is assessed considering NIST AI RMF 1.0 (or an AG-approved equivalent framework), the entity's size and complexity, the system's nature and scope, and the sensitivity and volume of data processed. A single program may cover multiple high-risk AI systems if sufficient. This creates both an establishment obligation and an ongoing maintenance obligation.
1. Each developer or deployer of high-risk AI systems shall plan, document, and implement a risk management policy and program to govern development or deployment, as applicable, of such high-risk AI system. The risk management policy and program shall specify and incorporate the principles, processes, and personnel that the deployer uses to identify, document, and mitigate known or reasonably foreseeable risks of algorithmic discrimination covered under subdivision one of section eighty-six of this article. The risk management policy and program shall be an iterative process planned, implemented, and regularly and systematically reviewed and updated over the life cycle of a high-risk AI system, requiring regular, systematic review and updates, including updates to documentation. A risk management policy and program implemented and maintained pursuant to this section shall be reasonable considering: (a) The guidance and standards set forth in: (i) version 1.0 of the "Artificial Intelligence Risk Management Framework" published by the National Institute of Standards and Technology in the United States department of commerce, or (ii) another substantially equivalent framework selected at the discretion of the attorney general, if such framework was designed to manage risks associated with AI systems, is nationally or internationally recognized and consensus-driven, and is at least as stringent as version 1.0 of the "Artificial Intelligence Risk Management Framework" published by the National Institute of Standards and Technology; (b) The size and complexity of the developer or deployer; (c) The nature, scope, and intended uses of the high-risk AI system developed or deployed; and (d) The sensitivity and volume of data processed in connection with the high-risk AI system. 2. A risk management policy and program implemented pursuant to subdivision one of this section may cover multiple high-risk AI systems developed by the same developer or deployed by the same deployer if sufficient.
Enacted 2025-06-03
G-01.3
Gen. Bus. Law § 1421(1)(b)
Plain Language
Large developers must retain a complete, unredacted version of the safety and security protocol — including a changelog of all updates and revisions — for the entire period the frontier model is deployed plus five additional years. This is a document retention obligation. Note that the publicly published version may include appropriate redactions (see § 1421(1)(c)), but the retained internal version must be unredacted. Organizations should ensure their records management systems can track versioning with dates.
Retain an unredacted copy of the safety and security protocol, including records and dates of any updates or revisions. Such unredacted copy of the safety and security protocol, including records and dates of any updates or revisions, shall be retained for as long as a frontier model is deployed plus five years.
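A records system satisfying this obligation needs dated, immutable revision entries plus a retention clock that only starts running once deployment ends. A sketch under assumed names, again using dateutil for the year arithmetic:

```python
from dataclasses import dataclass
from datetime import date
from dateutil.relativedelta import relativedelta  # third-party: pip install python-dateutil

@dataclass(frozen=True)
class ProtocolRevision:
    revised_on: date
    version: str
    document_hash: str  # integrity check for the retained unredacted copy

def retention_expires(deployment_ended_on: date) -> date:
    """The unredacted protocol and its full changelog must survive
    for as long as the model is deployed, plus five years."""
    return deployment_ended_on + relativedelta(years=5)
```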
Enacted 2025-06-03
G-01.3
Gen. Bus. Law § 1421(1)(d)
Plain Language
Large developers must record and retain detailed information about all tests and test results from frontier model assessments — both those required by the statute and those required by the developer's own safety and security protocol. Records must contain sufficient detail for third parties to replicate the testing procedure, creating a reproducibility standard. Retention period is the duration of deployment plus five years. The 'as and when reasonably possible' qualifier provides some flexibility for real-time testing contexts where immediate documentation may be impractical.
Record, as and when reasonably possible, and retain for as long as the frontier model is deployed plus five years information on the specific tests and test results used in any assessment of the frontier model required by this section or the developer's safety and security protocol that provides sufficient detail for third parties to replicate the testing procedure.
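The reproducibility standard suggests that each test record capture everything a third party would need to re-run the procedure. The fields below are one plausible decomposition, not a statutory checklist:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TestRecord:
    """Illustrative record of a single frontier-model assessment."""
    test_name: str
    model_identifier: str    # exact model and version under test
    procedure_uri: str       # pointer to the step-by-step test protocol
    parameters: dict         # prompts, thresholds, sampling settings, etc.
    random_seed: int | None  # pinned where the test harness supports it
    results_uri: str         # raw outputs and scored results
    recorded_at: str         # ISO-8601 timestamp, recorded contemporaneously
```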
Enacted 2025-06-03
G-01.2
Gen. Bus. Law § 1421(3)
Plain Language
Large developers must review their safety and security protocol at least annually, with the review accounting for changes in frontier model capabilities and evolving industry best practices. If the review results in material modifications, the updated protocol must be re-published publicly (with appropriate redactions) and re-transmitted to the AG and Division of Homeland Security. This creates a continuing maintenance obligation — the protocol is not a one-time pre-deployment document but a living document requiring annual reassessment. The trigger for re-publication is 'material modifications,' which introduces a materiality judgment call.
A large developer shall conduct an annual review of any safety and security protocol required by this section to account for any changes to the capabilities of their frontier models and industry best practices and, if necessary, make modifications to such safety and security protocol. If any material modifications are made, the large developer shall publish the safety and security protocol in the same manner as required pursuant to paragraph (c) of subdivision one of this section.
Enacted 2025-06-03
G-01.6
Gen. Bus. Law § 1420(12)(e)
Plain Language
The safety and security protocol must designate senior personnel responsible for ensuring compliance with the statute. This effectively creates a mandatory accountability role — a named senior individual or individuals who bear responsibility for the developer's compliance with the RAISE Act. While embedded within the protocol definition rather than stated as a standalone obligation, it is independently actionable because a protocol that omits this designation is deficient on its face.
"Safety and security protocol" means documented technical and organizational protocols that: ... (e) Designate senior personnel to be responsible for ensuring compliance.
Enacted 2025-06-03
G-01.4
Gen. Bus. Law § 1421(5)
Plain Language
Large developers are prohibited from knowingly making false or materially misleading statements or omissions in any documents produced under the statute — including the safety and security protocol, test records, and safety incident reports. This is an anti-fraud provision that applies to all documentary submissions and publications required by the RAISE Act. The 'knowingly' mens rea standard means the developer must have actual awareness that the statement is false or misleading; negligent inaccuracies would not violate this provision.
A large developer shall not knowingly make false or materially misleading statements or omissions in or regarding documents produced pursuant to this section.
Pending 2026-01-01
G-01.1G-01.2
Civ. Rights Law § 89(1)-(2)
Plain Language
Every developer and deployer of high-risk AI systems must plan, document, and implement a risk management policy and program addressing algorithmic discrimination risks. The program must specify the principles, processes, and personnel used to identify, document, and mitigate known or foreseeable discrimination risks. It must be iterative — regularly and systematically reviewed and updated over the AI system's lifecycle. Reasonableness is evaluated considering the NIST AI RMF 1.0 (or an equivalent framework selected by the AG), the entity's size and complexity, the system's nature and intended uses, and the sensitivity and volume of data processed. A single program may cover multiple high-risk AI systems. The AG may require disclosure of the program and evaluate it for compliance.
1. Each developer or deployer of high-risk AI systems shall plan, document, and implement a risk management policy and program to govern development or deployment, as applicable, of such high-risk AI system. The risk management policy and program shall specify and incorporate the principles, processes, and personnel that the deployer uses to identify, document, and mitigate known or reasonably foreseeable risks of algorithmic discrimination covered under subdivision one of section eighty-six of this article. The risk management policy and program shall be an iterative process planned, implemented, and regularly and systematically reviewed and updated over the life cycle of a high-risk AI system, requiring regular, systematic review and updates, including updates to documentation. A risk management policy and program implemented and maintained pursuant to this section shall be reasonable considering:
(a) The guidance and standards set forth in:
(i) version 1.0 of the "Artificial Intelligence Risk Management Framework" published by the National Institute of Standards and Technology in the United States department of commerce, or
(ii) another substantially equivalent framework selected at the discretion of the attorney general, if such framework was designed to manage risks associated with AI systems, is nationally or internationally recognized and consensus-driven, and is at least as stringent as version 1.0 of the "Artificial Intelligence Risk Management Framework" published by the National Institute of Standards and Technology;
(b) The size and complexity of the developer or deployer;
(c) The nature, scope, and intended uses of the high-risk AI system developed or deployed; and
(d) The sensitivity and volume of data processed in connection with the high-risk AI system.
2. A risk management policy and program implemented pursuant to subdivision one of this section may cover multiple high-risk AI systems developed by the same developer or deployed by the same deployer if sufficient.
Pending 2025-10-11
G-01.1G-01.2
GBL § 1552(2)(a)-(b)
Plain Language
Deployers must implement and maintain a risk management policy and program governing their high-risk AI decision system deployments. The program must specify principles, processes, and personnel for identifying, documenting, and mitigating algorithmic discrimination risks, and must be iteratively reviewed and updated over each system's lifecycle. Reasonableness is calibrated to recognized frameworks (NIST AI RMF, ISO/IEC 42001, or substantially equivalent standards), deployer size and complexity, system scope, and data sensitivity. A single program may cover multiple high-risk systems. Deployers that meet the conditions of § 1552(7) — where the developer has contractually assumed these duties — are exempt.
2. (a) Beginning on January first, two thousand twenty-seven, and except as provided in subdivision seven of this section, each deployer of a high-risk artificial intelligence decision system shall implement and maintain a risk management policy and program to govern such deployer's deployment of the high-risk artificial intelligence decision system. The risk management policy and program shall specify and incorporate the principles, processes, and personnel that the deployer shall use to identify, document, and mitigate any known or reasonably foreseeable risks of algorithmic discrimination. The risk management policy shall be the product of an iterative process, the risk management program shall be an iterative process and both the risk management policy and program shall be planned, implemented, and regularly and systematically reviewed and updated over the lifecycle of the high-risk artificial intelligence decision system. Each risk management policy and program implemented and maintained pursuant to this subdivision shall be reasonable, considering: (i) the guidance and standards set forth in the latest version of: (A) the "Artificial Intelligence Risk Management Framework" published by the national institute of standards and technology; (B) ISO or IEC 42001 of the international organization for standardization; or (C) a nationally or internationally recognized risk management framework for artificial intelligence decision systems, other than the guidance and standards specified in clauses (A) and (B) of this subparagraph, that imposes requirements that are substantially equivalent to, and at least as stringent as, the requirements established pursuant to this section for risk management policies and programs; (ii) the size and complexity of the deployer; (iii) the nature and scope of the high-risk artificial intelligence decision systems deployed by the deployer, including, but not limited to, the intended uses of such high-risk artificial intelligence decision systems; and (iv) the sensitivity and volume of data processed in connection with the high-risk artificial intelligence decision systems deployed by the deployer. (b) A risk management policy and program implemented and maintained pursuant to paragraph (a) of this subdivision may cover multiple high-risk artificial intelligence decision systems deployed by the deployer.
Pending 2025-10-11
G-01.1G-01.2G-01.3
GBL § 1553(1)(a)-(b)
Plain Language
Developers of general-purpose AI models must create and maintain technical documentation covering training and testing processes, evaluation results for article compliance, intended tasks, target integration systems, acceptable use policies, release dates, distribution methods, and input/output formats. Documentation must be reviewed and revised at least annually. Developers must also make available to downstream integrators documentation enabling them to understand model capabilities and limitations, comply with their own obligations under the article, and integrate the model technically. This downstream-facing documentation must also be reviewed at least annually. Open-source models, internal-only models, and internal management tools may qualify for exemptions under § 1553(2).
1. Beginning on January first, two thousand twenty-seven, each developer of a general-purpose artificial intelligence model shall, except as provided in subdivision two of this section: (a) create and maintain technical documentation for the general-purpose artificial intelligence model, which shall: (i) include: (A) the training and testing processes for such general-purpose artificial intelligence model; and (B) the results of an evaluation of such general-purpose artificial intelligence model performed to determine whether such general-purpose artificial intelligence model is in compliance with the provisions of this article; (ii) include, as appropriate, considering the size and risk profile of such general-purpose artificial intelligence model, at least: (A) the tasks such general-purpose artificial intelligence model is intended to perform; (B) the type and nature of artificial intelligence decision systems in which such general-purpose artificial intelligence model is intended to be integrated; (C) acceptable use policies for such general-purpose artificial intelligence model; (D) the date such general-purpose artificial intelligence model is released; (E) the methods by which such general-purpose artificial intelligence model is distributed; and (F) the modality and format of inputs and outputs for such general-purpose artificial intelligence model; and (iii) be reviewed and revised at least annually, or more frequently, as necessary to maintain the accuracy of such technical documentation; and (b) create, implement, maintain and make available to persons that intend to integrate such general-purpose artificial intelligence model into such persons' artificial intelligence decision systems documentation and information that: (i) enables such persons to: (A) understand the capabilities and limitations of such general-purpose artificial intelligence model; and (B) comply with such persons' obligations pursuant to this article; (ii) discloses, at a minimum: (A) the technical means required for such general-purpose artificial intelligence model to be integrated into such persons' artificial intelligence decision systems; (B) the information listed in subparagraph (ii) of paragraph (a) of this subdivision; and (iii) except as provided in subdivision two of this section, is reviewed and revised at least annually, or more frequently, as necessary to maintain the accuracy of such documentation and information.
Pending 2025-10-11
G-01.1
GBL § 1553(2)(d)
Plain Language
Developers of internal-use-only GPAI models that are exempt from technical documentation requirements under § 1553(2)(a)(ii) must still establish and maintain an AI risk management framework. The framework must be iterative and include: internal governance, risk context mapping, risk management, and risk measurement/tracking functions. This ensures internal-use GPAI models are subject to baseline governance even though they are exempt from documentation disclosure obligations.
(d) A developer that is exempt pursuant to subparagraph (ii) of paragraph (a) of this subdivision shall establish and maintain an artificial intelligence risk management framework, which shall: (i) be the product of an iterative process and ongoing efforts; and (ii) include, at a minimum: (A) an internal governance function; (B) a map function that shall establish the context to frame risks; (C) a risk management function; and (D) a function to measure identified risks by assessing, analyzing and tracking such risks.
Passed 2025-06-25
G-01.3
Gen. Bus. Law § 1421(1)(d)
Plain Language
Large developers must contemporaneously record and retain for the life of deployment plus five years all testing information used in assessing the frontier model, including specific tests conducted and results obtained. The records must be detailed enough to enable a third party to replicate the testing procedure. The 'as and when reasonably possible' qualifier provides some flexibility in timing of record creation but does not excuse failure to record altogether.
(d) Record, as and when reasonably possible, and retain for as long as the frontier model is deployed plus five years information on the specific tests and test results used in any assessment of the frontier model that provides sufficient detail for third parties to replicate the testing procedure;
Passed 2025-06-25
G-01.2
Gen. Bus. Law § 1421(3)
Plain Language
Large developers must conduct an annual review of their safety and security protocol, considering changes to model capabilities and industry best practices. If modifications are warranted, the developer must update the protocol and re-publish it publicly with appropriate redactions and re-transmit it to the Division of Homeland Security and Emergency Services. This is a continuing obligation — not a one-time pre-deployment exercise. The review must happen regardless of whether modifications are ultimately made; the publication obligation is triggered only when modifications occur.
A large developer shall conduct an annual review of any safety and security protocol required by this section to account for any changes to the capabilities of their frontier models and industry best practices and, if necessary, make modifications to such safety and security protocol. If any modifications are made, the large developer shall publish the safety and security protocol in the same manner as required pursuant to paragraph (c) of subdivision one of this section.
Passed 2025-06-25
G-01.5
Gen. Bus. Law § 1421(4)(a)-(e)
Plain Language
Large developers must retain an independent third-party auditor annually to assess compliance with all § 1421 requirements. The auditor must have access to unredacted materials and must produce a certified report covering: compliance steps taken, any noncompliance instances with remediation recommendations, and an assessment of internal controls including senior personnel designation. The developer must retain the unredacted report for the deployment period plus five years, publish a redacted version publicly, transmit it to the Division of Homeland Security and Emergency Services, and provide the AG or DHSES with access to a version redacted only as required by federal law upon request. The audit clock starts at the later of the act's effective date or 90 days after a person first qualifies as a large developer.
(a) Beginning on the effective date of this article, or ninety days after a developer first qualifies as a large developer, whichever is later, a large developer shall annually retain a third party to perform an independent audit of compliance with the requirements of this section. Such third party shall conduct audits consistent with best practices. (b) The third party shall be granted access to unredacted materials as necessary to comply with the third party's obligations under this subdivision. (c) The third party shall produce a report including all of the following: (i) A detailed assessment of the large developer's steps to comply with the requirements of this section; (ii) If applicable, any identified instances of noncompliance with the requirements of this section, and any recommendations for how the developer can improve its policies and processes for ensuring compliance with the requirements of this section; (iii) A detailed assessment of the large developer's internal controls, including its designation and empowerment of senior personnel responsible for ensuring compliance by the large developer, its employees, and its contractors; and (iv) The signature of the lead auditor certifying the results of the audit. (d) The large developer shall retain an unredacted copy of the report for as long as a frontier model is deployed plus five years. (e) (i) The large developer shall conspicuously publish a copy of the third party's report with appropriate redactions and transmit a copy of such redacted report to the division of homeland security and emergency services. (ii) The large developer shall grant the division of homeland security and emergency services or the attorney general access to the third party's report, with redactions only to the extent required by federal law, upon request.
Pending 2025-11-01
G-01.3G-01.4
63 O.S. § 5503(D)
Plain Language
All AI device documentation must meet state and federal medical record-keeping standards and be available for regulatory inspection. Deployers must specifically maintain summary reports documenting when qualified end-users override or disagree with AI device outputs, including the frequency and nature of those overrides and the percentage or number of disagreements. This creates a quantitative tracking obligation — deployers must know and record how often their physicians reject AI recommendations and why.
D. All documentation shall comply with state and federal medical record-keeping requirements and be accessible for regulatory review. Documentation of relevant instances where a qualified end-user overrides or disagrees with AI device-generated outputs must be maintained through a summary report indicating the frequency and nature of overrides. Deployers shall document the percentage or number of such overrides or disagreements.
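The override-tracking requirement reduces to simple aggregation over an event log. A sketch, assuming a hypothetical log where each entry records whether the clinician overrode the AI output and why:

from collections import Counter

def override_summary(events: list[dict]) -> dict:
    # Frequency, nature, and percentage of end-user overrides.
    total = len(events)
    overrides = [e for e in events if e["overridden"]]
    return {
        "total_ai_outputs": total,
        "override_count": len(overrides),
        "override_pct": round(100 * len(overrides) / total, 1) if total else 0.0,
        "by_nature": dict(Counter(e["reason"] for e in overrides)),
    }

events = [
    {"overridden": True, "reason": "dose out of range"},
    {"overridden": False, "reason": None},
    {"overridden": True, "reason": "conflicting imaging"},
]
print(override_summary(events))
# {'total_ai_outputs': 3, 'override_count': 2, 'override_pct': 66.7,
#  'by_nature': {'dose out of range': 1, 'conflicting imaging': 1}}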
Pending 2025-11-01
G-01.6
63 O.S. § 5504(A)
Plain Language
Deployers must establish a formal AI governance group that includes representation from the qualified end-users (licensed physicians) who actually use the AI devices. This governance group is responsible for overseeing compliance with all requirements of the act. This goes beyond designating a single individual — it requires a multi-stakeholder governance body with practitioner representation.
A. Deployers of any artificial intelligence (AI) device shall establish an AI governance group with representation from qualified end-users. This governance group is responsible for overseeing compliance with this act.
Pending 2025-11-01
G-01.3
63 O.S. § 5504(B)
Plain Language
Deployers must maintain a current inventory of all AI devices they have deployed, along with each device's instructions for use and any relevant safety and effectiveness documentation. All of this must be made accessible to the qualified end-users (licensed physicians) who use the devices. This is both an inventory obligation and an internal documentation accessibility requirement — physicians must be able to find and review device documentation.
B. Deployers shall maintain an updated inventory of deployed AI devices, with device instructions for use and any relevant safety and effectiveness documentation made accessible to all qualified end-users of the device.
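An inventory entry needs little more than the device identity plus pointers to its instructions for use and safety documentation. A minimal sketch; the device names and paths are hypothetical:

from dataclasses import dataclass

@dataclass
class DeviceEntry:
    device_name: str
    version: str
    instructions_for_use: str   # path or URL to the IFU
    safety_docs: list[str]      # safety/effectiveness documentation

inventory: dict[str, DeviceEntry] = {
    "sepsis-predictor": DeviceEntry(
        device_name="Sepsis Predictor",
        version="2.1",
        instructions_for_use="/docs/sepsis-ifu.pdf",
        safety_docs=["/docs/sepsis-validation.pdf"],
    ),
}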
Pending 2025-11-01
G-01.3
63 O.S. § 5504(E)
Plain Language
Deployers must create and maintain documentation of the intended use case for each AI device and the training procedures that qualified end-users must complete before using it. This ensures there is a written record of why an AI device was deployed and how users were prepared to use it, which supports both internal governance and regulatory review.
E. Deployers shall document the use case and user training procedure for the AI device.
Pending 2026-10-06
G-01.3G-01.4
35 Pa.C.S. § 3506
Plain Language
Facilities must retain records related to their AI algorithms for a period to be determined by the Department of Health. While the specific retention period will be set by regulation, facilities should begin organizing AI-related documentation in anticipation. The department will establish the retention policy with input from facilities and providers.
§ 3506. Retention of records. The department shall establish a record retention policy and determine the amount of time a facility shall retain records related to artificial-intelligence algorithms. The department may request input from facilities and health care providers or their representatives in making the determination under this section.
Pending 2026-04-01
G-01.3
12 Pa.C.S. § 7105(d)
Plain Language
Suppliers must create and maintain internal documentation covering five categories related to the chatbot's development and implementation: the foundation models used, training data, privacy law compliance, consumer data collection and sharing practices, and ongoing accuracy/reliability/fairness/safety efforts. This documentation is distinct from the consumer-facing disclosure policy — it is an internal recordkeeping obligation. The statute does not specify a retention period or require the documentation to be produced to regulators on demand, but it must be maintained.
(d) Documentation.--A supplier shall maintain documentation regarding the development and implementation of the chatbot that describes: (1) Foundation models used in development. (2) Training data used. (3) Compliance with Federal and State privacy law. (4) Consumer data collection and sharing practices. (5) Ongoing efforts to ensure accuracy, reliability, fairness and safety.
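The five categories map naturally onto a simple manifest that can be version-controlled alongside the chatbot itself. A sketch; the keys and placeholder values are illustrative, not statutory language:

CHATBOT_DOCUMENTATION = {
    "foundation_models": ["<model name and version>"],
    "training_data": "<description of corpora used>",
    "privacy_compliance": "<federal and state privacy analysis>",
    "data_practices": "<consumer data collection and sharing>",
    "quality_efforts": "<ongoing accuracy, reliability, fairness, safety work>",
}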
Pending 2026-04-01
12 Pa.C.S. § 7105(g)
Plain Language
Suppliers are legally bound to comply with the disclosure policy they file with the Bureau of Consumer Protection. This transforms the filed policy from a mere disclosure document into an enforceable set of commitments — any deviation from the filed policy's stated procedures constitutes a violation of the chapter. This is a compliance pass-through mechanism: the specific obligations vary by supplier based on what they disclosed in their policy, but the requirement to adhere to it is mandatory.
(g) Compliance.--A supplier shall comply with the requirements of the policy filed in accordance with this section.
Pending 2027-01-09
G-01.3G-01.4
35 Pa.C.S. § 3506
Plain Language
The Department of Health will establish a record retention policy specifying how long facilities must retain records related to AI algorithms. While the specific retention period is deferred to department rulemaking, facilities should anticipate a mandatory retention obligation and begin organizing records in a form suitable for production. The department may consult with facilities and providers in setting the policy.
The department shall establish a record retention policy and determine the amount of time a facility shall retain records related to artificial-intelligence algorithms. The department may request input from facilities and health care providers or their representatives in making the determination under this section.
Pending 2027-01-09
G-01.3G-01.4
40 Pa.C.S. § 5207
Plain Language
The Insurance Department will establish a record retention policy for insurers' AI-related records. Insurers must retain records for the period to be determined by the department.
The department shall establish a record retention policy and determine the amount of time an insurer shall retain records. The department may request input from insurers or their representatives in making this determination.
Pending 2027-01-09
G-01.3G-01.4
40 Pa.C.S. § 5307
Plain Language
The Department of Human Services will establish a record retention policy for MA or CHIP managed care plans' AI-related records, specifying how long plans must retain them. The department may seek input from plans or their representatives in making that determination.
The department shall establish a record retention policy and determine the amount of time an MA or CHIP managed care plan shall retain records. The department may request input from an MA or CHIP managed care plan or their representative to make this determination.
Pending 2026-01-21
G-01.3G-01.4
R.I. Gen. Laws § 27-84-3(a)(3)
Plain Language
Insurers must retain documentation of all AI-involved decisions for at least five years. This expressly includes adverse benefit determinations where AI made or was a substantial factor in the decision. The retention obligation covers both administrative and non-administrative adverse determinations. Insurers should ensure their recordkeeping systems capture the AI's role, the decision output, and supporting rationale for every AI-influenced claims and coverage decision, and that records are maintained in a form producible to regulators under § 27-84-3(a)(2).
Insurers shall maintain documentation of artificial intelligence decisions for at least five (5) years including adverse benefit determinations where artificial intelligence made, or was a substantial factor in making, the adverse benefit determination.
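A decision log meeting this obligation needs to capture the AI's role, its output, the rationale, and a computable retention date. A sketch with hypothetical field names; the five-year offset is approximated in days:

from dataclasses import dataclass
from datetime import date, timedelta

@dataclass(frozen=True)
class AIDecisionRecord:
    decided_on: date
    claim_id: str
    ai_role: str    # "made decision" or "substantial factor"
    ai_output: str
    rationale: str
    adverse: bool   # adverse benefit determination?

    def retain_until(self) -> date:
        # Five-year minimum from the decision date; the day count is
        # an approximation of five calendar years.
        return self.decided_on + timedelta(days=5 * 365)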
Pending
G-01.3G-01.4
§ 28-5.2-2(d)
Plain Language
Employers must create and maintain contemporaneous, true, and accurate records of all electronic monitoring data used in employment decisions (hiring, promotion, termination, discipline, compensation) and retain them for five years. All employee information collected via electronic monitoring must be destroyed no later than 61 months after collection unless the employee provides written, informed consent to longer retention. Employers must also implement reasonable administrative, technical, and physical data security practices appropriate to the data's volume and nature. Employees have the right to request corrections to erroneous data. Records must be producible upon request by the employee, their authorized representative, or the Department of Labor and Training.
(d) An employer shall establish, maintain, and preserve for five (5) years contemporaneous, true, and accurate records of data gathered through the use of an electronic monitoring tool and used in a hiring, promotion, termination, disciplinary or compensation decision to ensure compliance with the employee or their authorized representative or the department requests for data. The employer shall destroy any employee information collected via an electronic monitoring tool no later than sixty-one (61) months after collection unless the employee has provided written and informed consent to the retention of their data by the employer. An employer shall establish, implement and maintain reasonable administrative, technical and physical data security practices to protect the confidentiality, integrity and accessibility of employee data, appropriate to the volume and nature of the employee data at issue. An employee shall have the right to request corrections to erroneous employee data.
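Two clocks run here: a five-year preservation floor for decision-relevant records and a 61-month destruction ceiling for collected data absent consent. A sketch of the destruction deadline, assuming calendar-month counting:

import calendar
from datetime import date

def destruction_deadline(collected: date, consented: bool) -> date | None:
    # 61 months after collection, unless the employee gave written,
    # informed consent to longer retention.
    if consented:
        return None  # governed by the consent terms instead
    months = collected.month - 1 + 61
    year, month = collected.year + months // 12, months % 12 + 1
    day = min(collected.day, calendar.monthrange(year, month)[1])
    return date(year, month, day)

print(destruction_deadline(date(2026, 3, 15), consented=False))  # 2031-04-15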
Pending 2026-01-09
G-01.3G-01.4
R.I. Gen. Laws § 27-84-3(a)(3)
Plain Language
Insurers must retain documentation of all AI-driven decisions for at least five years. This expressly includes adverse benefit determinations where AI made or was a substantial factor in the decision. The retention obligation is broad — it covers all AI decisions, not just adverse ones — and the five-year minimum is among the longer retention periods seen in state AI legislation. Insurers should ensure their documentation systems capture both the AI output and the decision context for every claim and coverage determination involving AI.
Insurers shall maintain documentation of artificial intelligence decisions for at least five (5) years including adverse benefit determinations where artificial intelligence made, or was a substantial factor in making, the adverse benefit determination.
Pending 2026-02-06
G-01.3
§ 28-5.2-2(d)
Plain Language
Employers must maintain contemporaneous, true, and accurate records of all electronic monitoring data used in hiring, promotion, termination, disciplinary, or compensation decisions for five years and make them available to the employee, their authorized representative, or the Department of Labor and Training upon request. All employee data collected via monitoring must be destroyed no later than 61 months after collection absent informed written consent. Employers must maintain reasonable administrative, technical, and physical data security practices. Employees have the right to request corrections to erroneous data; this correction right applies independently of any automated decision challenge.
(d) An employer shall establish, maintain, and preserve for five (5) years contemporaneous, true, and accurate records of data gathered through the use of an electronic monitoring tool and used in a hiring, promotion, termination, disciplinary or compensation decision to ensure compliance with the employee or their authorized representative or the department requests for data. The employer shall destroy any employee information collected via an electronic monitoring tool no later than sixty-one (61) months after collection unless the employee has provided written and informed consent to the retention of their data by the employer. An employer shall establish, implement and maintain reasonable administrative, technical and physical data security practices to protect the confidentiality, integrity and accessibility of employee data, appropriate to the volume and nature of the employee data at issue. An employee shall have the right to request corrections to erroneous employee data.
Pending
G-01.1
S.C. Code § 39-80-20(D)
Plain Language
Chatbot providers must establish, document, and maintain a comprehensive written data security program with administrative, technical, and physical safeguards scaled to the volume and sensitivity of the personal data and chat logs they hold. The written program must be publicly posted on the provider's website. This is both a governance obligation (establish a formal program) and a transparency obligation (make it publicly available).
(D) A chatbot provider shall develop, implement, and maintain a comprehensive data security program that contains administrative, technical, and physical safeguards that are proportionate to the volume and nature of personal data and chat logs that are maintained by the chatbot provider. The program must be written and made publicly available on the chatbot provider's website.
Pending
G-01.1
S.C. Code § 39-80-20(D)
Plain Language
Chatbot providers must develop, implement, and maintain a written, comprehensive data security program with administrative, technical, and physical safeguards proportionate to the volume and nature of the personal data and chat logs they hold. The program must be publicly available on the provider's website. This is both a governance obligation (maintain a written program) and a transparency obligation (publish it publicly). The proportionality standard means the security measures must scale with the sensitivity and volume of data processed.
(D) A chatbot provider shall develop, implement, and maintain a comprehensive data security program that contains administrative, technical, and physical safeguards that are proportionate to the volume and nature of personal data and chat logs that are maintained by the chatbot provider. The program must be written and made publicly available on the chatbot provider's website.
Pending 2025-01-01
G-01.1G-01.2
Section 37-31-30(B)(1)-(2)
Plain Language
Deployers must establish and maintain a risk management policy and program covering their deployment of high-risk AI systems. The program must identify, document, and mitigate algorithmic discrimination risks, specify responsible personnel, and be iteratively reviewed and updated throughout each system's lifecycle. Reasonableness is calibrated to the NIST AI RMF, ISO/IEC 42001, or another recognized or AG-designated framework, as well as the deployer's size, system scope, and data sensitivity. A single program may cover multiple high-risk AI systems. Small deployers (fewer than 50 employees, not training with own data) using the system for intended purposes and passing through the developer's impact assessment are exempt per subsection (F).
(B)(1) Except as provided in subsection (F), a deployer of a high-risk artificial intelligence system shall implement a risk management policy and program to govern the deployer's deployment of the high-risk artificial intelligence system. The risk management policy and program must specify and incorporate the principles, processes, and personnel that the deployer uses to identify, document, and mitigate known or reasonably foreseeable risks of algorithmic discrimination. The risk management policy and program must be an iterative process planned, implemented, and regularly and systematically reviewed and updated over the life cycle of a high-risk artificial intelligence system, requiring regular, systematic review and updates. A risk management policy and program implemented and maintained pursuant to this subsection must be reasonable considering: (a)(i) The guidance and standards set forth in the latest version of the "Artificial Intelligence Risk Management Framework" published by the National Institute of Standards and Technology in the United States Department of Commerce, standard ISO/IEC 42001 of the International Organization for Standardization, or another nationally or internationally recognized risk management framework for artificial intelligence systems, if the standards are substantially equivalent to or more stringent than the requirements of this chapter; or (ii) any risk management framework for artificial intelligence systems that the Attorney General, in his discretion, may designate; (b) the size and complexity of the deployer; (c) the nature and scope of the high-risk artificial intelligence systems deployed by the deployer, including the intended uses of the high-risk artificial intelligence systems; and (d) the sensitivity and volume of data processed in connection with the high-risk artificial intelligence systems deployed by the deployer. (2) A risk management policy and program implemented pursuant to item (1) may cover multiple high-risk artificial intelligence systems deployed by the deployer.
Passed 2025-09-01
G-01.1
Gov't Code § 2054.702(a)-(c)
Plain Language
DIR must establish by rule an AI code of ethics aligned with NIST AI RMF 1.0, covering human oversight, fairness, transparency, data privacy, redress/accountability, and evaluation frequency. All state agencies and local governments that procure, develop, deploy, or use AI systems must adopt this code. This is a mandatory adoption requirement — agencies cannot opt out or develop their own competing framework. The NIST AI RMF 1.0 alignment provides a recognized safe harbor framework for the code's substance.
Sec. 2054.702. ARTIFICIAL INTELLIGENCE SYSTEM CODE OF ETHICS. (a) The department by rule shall establish an artificial intelligence system code of ethics for use by state agencies and local governments that procure, develop, deploy, or use artificial intelligence systems. (b) At a minimum, the artificial intelligence system code of ethics must include guidance for the deployment and use of artificial intelligence systems and heightened scrutiny artificial intelligence systems that aligns with the Artificial Intelligence Risk Management Framework (AI RMF 1.0) published by the National Institute of Standards and Technology. The guidance must address: (1) human oversight and control; (2) fairness and accuracy; (3) transparency, including consumer disclosures; (4) data privacy and security; (5) public and internal redress, including accountability and liability; and (6) the frequency of evaluations and documentation of improvements. (c) State agencies and local governments shall adopt the code of ethics developed under this section.
Passed 2025-09-01
G-01.1
Gov't Code § 2054.703(a)-(c)
Plain Language
DIR must develop minimum risk management and governance standards — consistent with NIST AI RMF 1.0 — specifically for heightened scrutiny AI systems used by state agencies and local governments. These standards must require accountability reports, pre-deployment assessments covering security risks, performance metrics, and transparency, and re-assessments upon material changes to the system, data, or intended use. Standards must also address vendor risk management through contractual requirements, employee training, and acceptable use policies. All state agencies and local governments must adopt these standards. Pre-deployment testing is carved out from the definition of unlawful harm, creating a safe harbor for good-faith compliance testing.
Sec. 2054.703. MINIMUM STANDARDS FOR HEIGHTENED SCRUTINY ARTIFICIAL INTELLIGENCE SYSTEMS. (a) The department by rule shall develop minimum risk management and governance standards for the development, procurement, deployment, and use of heightened scrutiny artificial intelligence systems by a state agency or local government. (b) The minimum standards must be consistent with the Artificial Intelligence Risk Management Framework (AI RMF 1.0) published by the National Institute of Standards and Technology and must: (1) establish accountability measures, such as required reports describing the use of, limitations of, and safeguards for the heightened scrutiny artificial intelligence system; (2) require the assessment and documentation of the heightened scrutiny artificial intelligence system's known security risks, performance metrics, and transparency measures: (A) before deploying the system; and (B) at the time any material change is made to: (i) the system; (ii) the state or local data used by the system; or (iii) the intended use of the system; (3) provide to local governments resources that advise on managing, procuring, and deploying a heightened scrutiny artificial intelligence system, including data protection measures and employee training; and (4) establish guidelines for: (A) risk management frameworks, acceptable use policies, and training employees; and (B) mitigating the risk of unlawful harm by contractually requiring vendors to implement risk management frameworks when deploying heightened scrutiny artificial intelligence systems on behalf of state agencies or local governments. (c) State agencies and local governments shall adopt the standards developed under Subsection (a).
Passed 2025-09-01
G-01.6
Gov't Code § 2054.137(a-1), (c)
Plain Language
Small state agencies (150 or fewer full-time employees) may either designate a full-time employee as a data management officer or share a data management officer with other agencies, subject to DIR approval. The data management officer must annually post at least three high-value data sets on the Texas Open Data Portal, excluding confidential information. Although the provision concerns data management rather than AI governance as such, the role intersects with AI governance because AI systems rely on government data, and the broader bill context (Subchapter S) makes it relevant here.
(a-1) A state agency with 150 or fewer full-time employees may: (1) designate a full-time employee of the agency to serve as a data management officer; or (2) enter into an agreement with one or more state agencies to jointly employ a data management officer if approved by the department. (c) In accordance with department guidelines, the data management officer for a state agency shall annually post on the Texas Open Data Portal established by the department under Section 2054.070 at least three high-value data sets as defined by Section 2054.1265. The high-value data sets may not include information that is confidential or protected from disclosure under state or federal law.
Enacted 2024-05-01
G-01.3
Utah Code § 13-70-304(2), (4)
Plain Language
Participants in the AI Learning Laboratory must provide information to state agencies and report to the Office as specified in their participation agreement. They must also retain records as required by Office rules or the agreement. The specifics of what information, what reports, and what records will be determined by the Office's rules and the individual participation agreement — the statute delegates those details.
(2) A participant shall: (a) provide required information to state agencies in accordance with the terms of the participation agreement; and (b) report to the office as required in the participation agreement. ... (4) A participant shall retain records as required by office rule or the participation agreement.
Enacted 2024-05-01
G-01.1
Utah Code § 13-70-302(4), (6)
Plain Language
Each regulatory mitigation agreement must specify scope limitations on the AI technology's use (user types, geographic boundaries, and other implementation constraints), safeguards that must be in place, and the specific regulatory relief granted. Critically, participants remain fully subject to every legal and regulatory requirement that the agreement does not expressly waive or modify. This provision structures the sandbox as a limited, documented departure from baseline regulation rather than a blanket exemption.
(4) A regulatory mitigation agreement between a participant and the office and relevant agencies shall specify: (a) limitations on scope of the use of the participant's artificial intelligence technology, including: (i) the number and types of users; (ii) geographic limitations; and (iii) other limitations to implementation; (b) safeguards to be implemented; and (c) any regulatory mitigation granted to the applicant. ... (6) A participant remains subject to all legal and regulatory requirements not expressly waived or modified by the terms of the regulatory mitigation agreement.
Pending 2026-07-01
G-01.3
§ 19.2-11.14(E)
Plain Language
Law-enforcement agencies must retain the first draft of any AI-generated report or record for as long as the final version is retained. The generative AI program used must maintain an audit trail that at minimum identifies who used the AI, tracks all changes made after the initial draft, and logs any video or audio footage used as source material for report generation. This creates both a document-retention obligation on the agency and a functional requirement on the AI tool itself to support audit trail capabilities.
E. The first draft of any report or record created in whole or in part by using generative artificial intelligence shall be retained for as long as the final report is retained. The program used to generate a draft or final report shall maintain an audit trail that, at a minimum, identifies (i) the person who used artificial intelligence to create or edit the report; (ii) any changes made to the report following the initial draft; and (iii) the video and audio footage used to create a report, if any.
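The statute effectively specifies the minimum audit-trail schema. A sketch of one way a tool might structure it; the class and method names are assumptions, not requirements drawn from the bill:

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ReportAuditTrail:
    author: str                 # who used the AI to create or edit
    first_draft: str            # retained as long as the final report
    source_footage: list[str]   # video/audio used for generation, if any
    edits: list[tuple[datetime, str, str]] = field(default_factory=list)

    def record_edit(self, editor: str, change: str) -> None:
        # Every change after the initial draft is logged with
        # editor identity and timestamp.
        self.edits.append((datetime.now(timezone.utc), editor, change))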
Pending 2026-07-01
G-01.3
§ 38.2-3407.15(B)(15)(iii)
Plain Language
Health carriers must maintain documentation of all AI-driven decisions related to insurance claims and coverage management for at least three years. This is a standalone recordkeeping obligation — carriers need a system capable of logging and retaining records of each AI decision, including adverse determinations, for the full retention period. The three-year window runs from the date of the decision.
Each carrier shall ... (iii) maintain documentation of AI decisions for at least three years;
Pending 2025-07-01
G-01.1G-01.2
9 V.S.A. § 4193g(a)-(b)
Plain Language
Every developer and deployer must plan, document, and implement a risk management policy and program governing their automated decision systems. The program must identify, document, and mitigate known or foreseeable risks of algorithmic discrimination, and must be iteratively reviewed and updated over the system's lifecycle. Reasonableness is assessed against the NIST AI RMF v1.0 (or a later version if the AG determines it is at least as stringent), the entity's size and complexity, the system's nature and scope, and data sensitivity and volume. A single program may cover multiple systems if sufficient. The NIST AI RMF reference functions as a reasonableness benchmark rather than a strict safe harbor.
(a) Each developer or deployer of automated decision systems used in consequential decisions shall plan, document, and implement a risk management policy and program to govern development or deployment, as applicable, of the automated decision system. The risk management policy and program shall specify and incorporate the principles, processes, and personnel that the deployer uses to identify, document, and mitigate known or reasonably foreseeable risks of algorithmic discrimination covered under section 4193b of this title. The risk management policy and program shall be an iterative process planned, implemented, and regularly and systematically reviewed and updated over the life cycle of an automated decision system, requiring regular, systematic review and updates, including updates to documentation. A risk management policy and program implemented and maintained pursuant to this subsection shall be reasonable considering the: (1) guidance and standards set forth in version 1.0 of the Artificial Intelligence Risk Management Framework published by the National Institute of Standards and Technology in the U.S. Department of Commerce, or the latest version of the Artificial Intelligence Risk Management Framework published by the National Institute of Standards and Technology if, in the Attorney General's discretion, the latest version of the Artificial Intelligence Risk Management Framework published by the National Institute of Standards and Technology in the U.S. Department of Commerce is at least as stringent as version 1.0; (2) size and complexity of the developer or deployer; (3) nature, scope, and intended uses of the automated decision system developed or deployed for use in consequential decisions; and (4) sensitivity and volume of data processed in connection with the automated decision system. (b) A risk management policy and program implemented pursuant to subsection (a) of this section may cover multiple automated decision systems developed by the same developer or deployed by the same deployer for use in consequential decisions if sufficient.
Pre-filed 2025-07-01
G-01.1
9 V.S.A. § 4193g(b)
Plain Language
Deployers may not deploy an inherently dangerous AI system or any AI system creating foreseeable risks of harm unless they have first designed and implemented a risk management policy and program. The policy must specify the principles, processes, and personnel for ongoing risk identification, mitigation, and documentation. The program must meet the NIST AI RMF as a floor and must be reasonable considering the deployer's size and complexity, the nature and scope of the system (including intended and unintended uses and deployer modifications), and the data the system processes post-deployment. This is a pre-deployment prerequisite with ongoing maintenance obligations — the program must be 'maintained,' not just created.
(b) No deployer shall deploy an inherently dangerous artificial intelligence system or an artificial intelligence system that creates reasonably foreseeable risks pursuant to section 4193f of this subchapter unless the deployer has designed and implemented a risk management policy and program for the model or system. The risk management policy shall specify the principles, processes, and personnel that the deployer shall use in maintaining the risk management program to identify, mitigate, and document any risk that is a reasonably foreseeable consequence of deploying or using the system. Each risk management policy and program designed, implemented, and maintained pursuant to this subsection shall be: (1) at least as stringent as the latest version of the Artificial Intelligence Risk Management Framework published by the NIST; and (2) reasonable considering: (A) the size and complexity of the deployer; (B) the nature and scope of the system, including the intended uses and unintended uses and the modifications made to the system by the deployer; and (C) the data that the system, once deployed, processes as inputs.
Pre-filed 2026-07-01
G-01.1
9 V.S.A. § 4193b(d)
Plain Language
Chatbot providers must develop, implement, and maintain a written, comprehensive data security program with administrative, technical, and physical safeguards proportionate to the volume and sensitivity of the personal data and chat logs they hold. The written program must be published on the provider's website. This is both a governance obligation (establishing and documenting the program) and a public transparency obligation (publishing it). The proportionality standard means that providers handling larger volumes of sensitive data need correspondingly stronger safeguards.
(d) Data security program. A chatbot provider shall develop, implement, and maintain a comprehensive data security program that contains administrative, technical, and physical safeguards that are proportionate to the volume and nature of the personal data and chat logs maintained by the chatbot provider. The program shall be written and made publicly available on the chatbot provider's website.
Passed 2026-07-01
G-01.1G-01.3
18 V.S.A. § 9764(a)-(b)
Plain Language
Suppliers of mental health chatbots may assert an affirmative defense to professional conduct enforcement actions if they can demonstrate they: (1) created, maintained, and implemented a comprehensive written policy covering the chatbot's intended purposes, abilities, limitations, safety procedures (including licensed provider involvement in development, clinical best-practice compliance, pre- and post-deployment testing, adverse outcome identification, user harm reporting mechanisms, real-time crisis response protocols, regular safety audits, nondiscrimination measures, and HIPAA-equivalent compliance); (2) maintained documentation of foundation models used, training tools, privacy compliance, data practices, and ongoing accuracy/safety efforts; (3) filed the policy with the Attorney General; and (4) complied with the filed policy at the time of the alleged violation. This is structured as a safe harbor rather than an affirmative obligation — but practically, any supplier that wants access to the defense must build and maintain this comprehensive governance program.
(a) It is an affirmative defense to liability in an action for unlawful or unprofessional conduct brought against a supplier by the Office of Professional Regulation or the Board of Medical Practice if the supplier demonstrates that the supplier meets all of the following conditions: (1) the supplier created, maintained, and implemented a policy that meets the requirements of subsection (b) of this section; (2) the supplier maintains documentation regarding the development and implementation of the mental health chatbot that describes: (A) foundation models used in development; (B) training tools used; (C) compliance with federal health privacy regulations; (D) user data collection and sharing practices; and (E) ongoing efforts to ensure accuracy, reliability, fairness, and safety; (3) the supplier filed the policy with the Office of the Attorney General; and (4) the supplier complied with all requirements of the filed policy at the time of the alleged violation. (b) A policy described in subdivision (a)(1) of this section shall meet all of the following requirements: (1) be in writing; (2) clearly state: (A) the intended purposes of the mental health chatbot; and (B) the abilities and limitations of the mental health chatbot; (3) describe the procedures by which the supplier: (A) ensures that qualified mental health providers licensed in Vermont or in one or more other states, or both, are involved in the development and review process; (B) ensures that the mental health chatbot is developed and monitored in a manner consistent with clinical best practices; (C) conducts testing prior to making the mental health chatbot publicly available and regularly thereafter to ensure that the output of the mental health chatbot poses no greater risk to a user than that posed to an individual in psychotherapy with a licensed mental health provider; (D) identifies reasonably foreseeable adverse outcomes to and potentially harmful interactions with users that could result from using the mental health chatbot; (E) provides a mechanism for a user to report any potentially harmful interactions from use of the mental health chatbot; (F) implements protocols to assess and respond to risk of harm to users or other individuals; (G) details actions taken to prevent or mitigate any such adverse outcomes or potentially harmful interactions; (H) implements protocols to respond in real time to acute risk of physical harm; (I) reasonably ensures regular, objective reviews of safety, accuracy, and efficacy, which may include internal or external audits; (J) provides users any necessary instructions on the safe use of the mental health chatbot; (K) ensures users understand that they are interacting with artificial intelligence; (L) ensures users understand the intended purpose, capabilities, and limitations of the mental health chatbot; (M) prioritizes user mental health and safety over engagement metrics or profit; (N) implements measures to prevent discriminatory treatment of users; and (O) ensures compliance with the security and privacy protections of 45 C.F.R. Part 160 and 45 C.F.R. Part 164, Subparts A, C, and E, as if the supplier were a covered entity, and applicable consumer protection requirements, including sections 9761-9763 of this subchapter.
Pending 2027-01-01
G-01.1
Sec. 2(5)
Plain Language
Developers of high-risk AI systems that conform to the NIST AI RMF, ISO/IEC 42001, or an equivalent nationally or internationally recognized AI risk management framework receive a presumption of compliance with the developer obligations in Section 2. This is a safe harbor — not a standalone obligation — but it incentivizes adoption of recognized risk management frameworks. Developers should document their conformity to invoke this presumption.
(5) High-risk artificial intelligence systems that are in conformity with the latest version of the artificial intelligence risk management framework published by the national institute of standards and technology, standard ISO/IEC 42001 of the international organization for standardization, or another nationally or internationally recognized risk management framework for artificial intelligence systems, or parts thereof, are presumed to be in conformity with related requirements set out in this section.
Pending 2027-01-01
G-01.2
Sec. 2(6)
Plain Language
Developers must update all Section 2 disclosures within 90 days of performing an intentional and substantial modification to a high-risk AI system. An intentional and substantial modification is a deliberate change that creates a new material risk of algorithmic discrimination, or for GPAI models, one that affects compliance or materially changes purpose. Routine deployer customizations and predetermined continuous-learning changes covered in the initial impact assessment are excluded from the modification definition.
(6) For a disclosure required pursuant to this section, a developer shall, no later than 90 days after the developer performs an intentional and substantial modification to a high-risk artificial intelligence system, update such disclosure as necessary to ensure that such disclosure remains accurate.
Pending 2027-01-01
G-01.1
Sec. 3(2)(a)-(c)
Plain Language
Deployers may not use a high-risk AI system for consequential decisions without first designing and implementing a formal risk management policy and program specifying the principles, processes, and personnel for identifying, mitigating, and documenting algorithmic discrimination risks. Alignment with the NIST AI RMF, ISO/IEC 42001, or a substantially equivalent framework creates a rebuttable presumption of compliance. This is a pre-deployment gating requirement — the system cannot be used until the risk management program is in place.
(2)(a) A deployer may not deploy or use a high-risk artificial intelligence system to make a consequential decision unless the deployer has designed and implemented a risk management policy and program for such high-risk artificial intelligence system. The risk management policy must specify the principles, processes, and personnel that the deployer must use in maintaining the risk management program to identify, mitigate, and document any risk of algorithmic discrimination that is a reasonably foreseeable consequence of deploying or using such high-risk artificial intelligence system to make a consequential decision. (b) A risk management policy and program designed, implemented, and maintained pursuant to this section is presumed to be in conformity with related requirements set out in this section if the policy and program align with the guidance and standards set forth in the latest version of: (i) The artificial intelligence risk management framework published by the national institute of standards and technology; (ii) Standard ISO/IEC 42001 of the international organization for standardization; or (iii) A nationally or internationally recognized risk management framework for artificial intelligence systems with requirements that are substantially equivalent to, and at least as stringent as, the guidance and standards described in (b)(i) and (ii) of this subsection (2). (c) High-risk artificial intelligence systems that are in conformity with the latest version of the artificial intelligence risk management framework published by the national institute of standards and technology, standard ISO/IEC 42001 of the international organization for standardization, or another nationally or internationally recognized risk management framework for artificial intelligence systems, or parts thereof, are presumed to be in conformity with related requirements set out in this section.
Pending 2027-01-01
G-01.2
Sec. 3(7)
Plain Language
Deployers must update all Section 3 disclosures within 30 days of being notified by the developer that an intentional and substantial modification has been made to the high-risk AI system. This is a shorter window than the developer's 90-day update obligation, reflecting the deployer's downstream position. Deployers should therefore establish a process for receiving and acting on developer modification notices promptly; a minimal intake-log sketch follows the quoted text.
(7) For a disclosure required pursuant to this section, each deployer shall, no later than 30 days after the deployer is notified by the developer that the developer has performed an intentional and substantial modification to a high-risk artificial intelligence system, update such disclosure as necessary to ensure that such disclosure remains accurate.
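A minimal intake-log sketch for tracking developer notices against the 30-day window. The record layout and all names are our own, and calendar days are assumed:

```python
from __future__ import annotations
from dataclasses import dataclass
from datetime import date, timedelta

DEPLOYER_UPDATE_WINDOW = timedelta(days=30)  # Sec. 3(7); calendar days assumed

@dataclass
class ModificationNotice:
    """One developer notice of an intentional and substantial modification."""
    system_id: str
    received: date                        # date the deployer was notified
    disclosure_updated: date | None = None

    @property
    def deadline(self) -> date:
        return self.received + DEPLOYER_UPDATE_WINDOW

    def is_overdue(self, today: date) -> bool:
        return self.disclosure_updated is None and today > self.deadline

# Usage: flag notices whose 30-day update window has lapsed.
log = [ModificationNotice("hiring-screener-v2", received=date(2027, 2, 1))]
print([n.system_id for n in log if n.is_overdue(date(2027, 3, 10))])
# ['hiring-screener-v2']  (the deadline was 2027-03-03)
```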
Pending 2026-07-01
G-01.1, G-01.2
Sec. 4(1)-(2)
Plain Language
Deployers must establish and maintain a risk management policy and program governing deployment of high-risk AI systems. The program must specify the principles, processes, and personnel used to identify, document, and mitigate risks of algorithmic discrimination, and must include an iterative process that is regularly and systematically reviewed and updated over the system's lifecycle. The program must be reasonable in light of the deployer's size, system scope, data sensitivity, and adherence to a recognized risk framework: the NIST AI RMF and ISO/IEC 42001 are expressly cited as safe harbors, as is any framework the attorney general may designate. A single program may cover multiple high-risk AI systems (see the record sketch after the quoted text). Under the small-deployer exemption in Sec. 6, deployers with fewer than 50 FTEs that do not use their own data to train the system are exempt, subject to conditions.
(1) Beginning July 1, 2027, and except as provided in section 5(6) of this act, each deployer of a high-risk artificial intelligence system shall implement and maintain a risk management policy and program to govern the deployer's deployment of a high-risk artificial intelligence system. (2)(a) The risk management policy and program must specify and incorporate the principles, processes, and personnel that the deployer uses to identify, document, and mitigate known or reasonably foreseeable risks of algorithmic discrimination. The risk management policy and program must include an iterative process that is planned, implemented, and regularly and systematically reviewed and updated over the lifecycle of the high-risk artificial intelligence system. (b) A risk management policy and program implemented and maintained pursuant to this subsection must be reasonable, considering: (i) The size and complexity of the deployer; (ii) The nature and scope of the high-risk artificial intelligence systems deployed by the deployer including, but not limited to, the intended uses of such high-risk artificial intelligence systems; (iii) The sensitivity and volume of data processed in connection with the high-risk artificial intelligence systems deployed by the deployer; and (iv) A risk management framework that either: (A) Adheres to the guidance and standards set forth in the latest version of the artificial intelligence risk management framework published by the national institute of standards and technology, ISO/IEC 42001, or another nationally or internationally recognized risk management framework for artificial intelligence systems, if the standards are substantially equivalent to or more stringent than the requirements; or (B) Complies with any risk management framework for artificial intelligence systems that the attorney general, in the attorney general's discretion, may designate. (c) A risk management policy and program implemented and maintained pursuant to this subsection (2) may cover multiple high-risk artificial intelligence systems deployed by the deployer.
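One way to keep the Sec. 4(2)(b) reasonableness factors auditable is to capture them in the program record itself. A minimal sketch with field names of our own choosing; the recognized-framework set reflects the two frameworks the bill names, with room for AG-designated additions:

```python
from dataclasses import dataclass
from datetime import date

# Frameworks named in the bill; an AG-designated framework could be added here.
RECOGNIZED_FRAMEWORKS = {"NIST AI RMF", "ISO/IEC 42001"}

@dataclass
class RiskManagementProgram:
    """Illustrative Sec. 4 program record; per Sec. 4(2)(c), one program
    may cover multiple high-risk systems."""
    covered_systems: list[str]
    framework: str
    last_reviewed: date      # supports the iterative lifecycle-review duty
    deployer_fte: int        # Sec. 4(2)(b)(i): size and complexity
    data_sensitivity: str    # Sec. 4(2)(b)(iii), e.g. "high" for biometrics

    def framework_recognized(self) -> bool:
        return self.framework in RECOGNIZED_FRAMEWORKS

program = RiskManagementProgram(
    covered_systems=["credit-scorer", "tenant-screener"],
    framework="ISO/IEC 42001",
    last_reviewed=date(2027, 9, 1),
    deployer_fte=320,
    data_sensitivity="high",
)
print(program.framework_recognized())  # True
```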
Pending 2027-01-01
G-01.1
Sec. 2(5)
Plain Language
This provision establishes a safe harbor for developers: high-risk AI systems conforming to the NIST AI RMF, ISO/IEC 42001, or an equivalent nationally or internationally recognized risk management framework are presumed to comply with the developer obligations in Section 2. This is a rebuttable presumption — conformity with a recognized framework does not guarantee compliance but shifts the burden of proof. The safe harbor applies to the full scope of Section 2 developer obligations, including documentation, disclosure, and anti-discrimination duties.
(5) High-risk artificial intelligence systems that are in conformity with the latest version of the artificial intelligence risk management framework published by the national institute of standards and technology, standard ISO/IEC 42001 of the international organization for standardization, or another nationally or internationally recognized risk management framework for artificial intelligence systems, or parts thereof, are presumed to be in conformity with related requirements set out in this section.
Pending 2027-01-01
G-01.1
Sec. 3(2)(a)-(c)
Plain Language
Deployers may not use a high-risk AI system to make consequential decisions without first designing and implementing a risk management policy and program. The policy must specify the principles, processes, and personnel for identifying, mitigating, and documenting algorithmic discrimination risks. Alignment with the NIST AI RMF, ISO/IEC 42001, or a substantially equivalent framework creates a rebuttable presumption of compliance for both the risk management program and the high-risk system itself. This is a deployment prerequisite — the system cannot be used for consequential decisions until the program is in place.
(2)(a) A deployer may not deploy or use a high-risk artificial intelligence system to make a consequential decision unless the deployer has designed and implemented a risk management policy and program for such high-risk artificial intelligence system. The risk management policy must specify the principles, processes, and personnel that the deployer must use in maintaining the risk management program to identify, mitigate, and document any risk of algorithmic discrimination that is a reasonably foreseeable consequence of deploying or using such high-risk artificial intelligence system to make a consequential decision. (b) A risk management policy and program designed, implemented, and maintained pursuant to this section is presumed to be in conformity with related requirements set out in this section if the policy and program align with the guidance and standards set forth in the latest version of: (i) The artificial intelligence risk management framework published by the national institute of standards and technology; (ii) Standard ISO/IEC 42001 of the international organization for standardization; or (iii) A nationally or internationally recognized risk management framework for artificial intelligence systems with requirements that are substantially equivalent to, and at least as stringent as, the guidance and standards described in (b)(i) and (ii) of this subsection (2). (c) High-risk artificial intelligence systems that are in conformity with the latest version of the artificial intelligence risk management framework published by the national institute of standards and technology, standard ISO/IEC 42001 of the international organization for standardization, or another nationally or internationally recognized risk management framework for artificial intelligence systems, or parts thereof, are presumed to be in conformity with related requirements set out in this section.
Pending 2026-07-01
G-01.1, G-01.2
Sec. 4(1)-(3)
Plain Language
Deployers must implement and maintain a formal risk management policy and program governing each high-risk AI system deployment by July 1, 2027. The program must specify the principles, processes, and personnel used to identify, document, and mitigate algorithmic discrimination risks, and must include an iterative lifecycle review process. Reasonableness is assessed based on deployer size and complexity, system scope, data sensitivity and volume, and adherence to a recognized risk framework: the NIST AI RMF, ISO/IEC 42001, or an equivalent or more stringent standard serves as a safe harbor, and the AG may designate additional acceptable frameworks. A single program may cover multiple high-risk AI systems. Small-deployer exemptions apply under Sec. 7, and trade secret protections apply.
(1) Beginning July 1, 2027, and except as provided in section 6(6) of this act, each deployer of a high-risk artificial intelligence system shall implement and maintain a risk management policy and program to govern the deployer's deployment of a high-risk artificial intelligence system. (2)(a) The risk management policy and program must specify and incorporate the principles, processes, and personnel that the deployer uses to identify, document, and mitigate known or reasonably foreseeable risks of algorithmic discrimination. The risk management policy and program must include an iterative process that is planned, implemented, and regularly and systematically reviewed and updated over the lifecycle of the high-risk artificial intelligence system. (b) A risk management policy and program implemented and maintained pursuant to this subsection must be reasonable, considering: (i) The size and complexity of the deployer; (ii) The nature and scope of the high-risk artificial intelligence systems deployed by the deployer including, but not limited to, the intended uses of such high-risk artificial intelligence systems; (iii) The sensitivity and volume of data processed in connection with the high-risk artificial intelligence systems deployed by the deployer; and (iv) A risk management framework that either: (A) Adheres to the guidance and standards set forth in the latest version of the artificial intelligence risk management framework published by the national institute of standards and technology, ISO/IEC 42001, or another nationally or internationally recognized risk management framework for artificial intelligence systems, if the standards are substantially equivalent to or more stringent than the requirements; or (B) Complies with any risk management framework for artificial intelligence systems that the attorney general, in the attorney general's discretion, may designate. (c) A risk management policy and program implemented and maintained pursuant to this subsection (2) may cover multiple high-risk artificial intelligence systems deployed by the deployer. (3) Nothing in this section may be construed to require a deployer to disclose any trade secret, or other confidential or proprietary information.
Pending 2026-07-01
G-01.1, G-01.2
Sec. 5(1)-(5)
Plain Language
Developers of high-risk AI systems must implement and maintain a risk management policy and program parallel to the deployer obligation, with the same reasonableness factors and safe-harbor frameworks (NIST AI RMF, ISO/IEC 42001, or AG-designated frameworks). A developer that also serves as a deployer is not required to produce the documentation required by this section unless the system is provided to an unaffiliated entity acting as a deployer. The section does not apply to developers with fewer than 50 full-time equivalent employees, and trade secret protections apply. A sketch of how these carve-outs compose follows the quoted text.
(1) Beginning July 1, 2027, and except as provided in section 6(6) of this act, each developer of a high-risk artificial intelligence system shall implement and maintain a risk management policy and program to govern the developer's deployment of a high-risk artificial intelligence system. (2)(a) The risk management policy and program must specify and incorporate the principles, processes, and personnel that the deployer uses to identify, document, and mitigate known or reasonably foreseeable risks of algorithmic discrimination. The risk management policy and program must include an iterative process that is planned, implemented, and regularly and systematically reviewed and updated over the life cycle of the high-risk artificial intelligence system. (b) A risk management policy and program implemented and maintained pursuant to this subsection must be reasonable, considering: (i) The size and complexity of the developer; (ii) The nature and scope of the high-risk artificial intelligence systems deployed by the developer including, but not limited to, the intended uses of such high-risk artificial intelligence systems; (iii) The sensitivity and volume of data processed in connection with the high-risk artificial intelligence systems deployed by the developer; and (iv) A risk management framework that either: (A) Adheres to the guidance and standards set forth in the latest version of the artificial intelligence risk management framework published by the national institute of standards and technology, ISO/IEC 42001, or another nationally or internationally recognized risk management framework for artificial intelligence systems, if the standards are substantially equivalent to or more stringent than the requirements; or (B) Complies with any risk management framework for artificial intelligence systems that the attorney general, in the attorney general's discretion, may designate. (c) A risk management policy and program implemented and maintained pursuant to this subsection (2) may cover multiple high-risk artificial intelligence systems deployed by the deployer. (3) A developer that also serves as a deployer for any high-risk artificial intelligence system may not be required to generate the documentation required by this section unless such high-risk artificial intelligence system is provided to an unaffiliated entity acting as a deployer or as otherwise required by law. (4) Nothing in this section may be construed to require a developer to disclose any trade secret, or other confidential or proprietary information. (5) This section does not apply to a developer with fewer than 50 full-time equivalent employees.
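Under our reading of the quoted text, the Sec. 5 carve-outs compose as follows. A minimal sketch with hypothetical parameter names; the "or as otherwise required by law" proviso in Sec. 5(3) is deliberately omitted:

```python
def developer_documentation_required(
    fte_count: int,
    also_acts_as_deployer: bool,
    provided_to_unaffiliated_deployer: bool,
) -> bool:
    """Sketch of the Sec. 5 carve-outs, under our reading of the quoted text."""
    if fte_count < 50:
        # Sec. 5(5): the section does not apply to small developers at all.
        return False
    if also_acts_as_deployer:
        # Sec. 5(3): a developer-deployer need not generate the documentation
        # unless the system goes to an unaffiliated entity acting as deployer.
        return provided_to_unaffiliated_deployer
    return True

assert developer_documentation_required(200, False, False)
assert not developer_documentation_required(30, False, False)
assert not developer_documentation_required(200, True, False)
```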