G-01
Governance & Documentation
AI Governance Program & Documentation
Organizations developing or deploying AI must establish a formal AI governance program, maintain contemporaneous records of AI system design, testing, and deployment decisions, and designate a responsible individual or office for AI governance. Program establishment is not a one-time exercise — ongoing maintenance, recordkeeping, and accountability designation are continuing obligations.
Applies to: Developer, Deployer, Professional, Government
Bills — Enacted: 5 unique bills
Bills — Proposed: 59
Last Updated: 2026-03-29
Sub-Obligations (6)
ID
Name & Description
Enacted
Proposed
G-01.1
Risk management program establishment: A formal AI risk management program must be established, documented, and approved by appropriate organizational leadership. It must cover risk identification, assessment criteria, mitigation strategies, and escalation procedures. The NIST AI RMF is commonly cited as a safe harbor framework.
2 enacted
28 proposed
G-01.2
Ongoing program maintenance and update: The program must be reviewed and updated periodically — typically annually — and following material changes to AI systems in scope or to the regulatory environment.
3 enacted
18 proposed
G-01.3
Record keeping and audit trail: Documentation of AI system design decisions, training data characteristics, bias testing results, safety evaluation results, and deployment parameters must be created contemporaneously and retained for defined periods — typically 2–5 years depending on jurisdiction.
3 enacted
32 proposed
G-01.4
Regulatory production of records: Records must be organized and maintained in a form that can be produced to regulatory authorities upon request within a reasonable timeframe.
1 enacted
17 proposed
G-01.5
Third-party audit and certification: High-risk AI systems must be submitted to a qualified independent auditor for evaluation, and results disclosed to regulators or publicly.
0 enacted
9 proposed
G-01.6
Designated AI accountability role: A specific individual or office must be formally designated as responsible for AI governance, with defined responsibilities, authority, and resources. Public disclosure of the designated role may be required.
1 enacted
4 proposed
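The recordkeeping obligation in G-01.3 is operational as well as legal: records must be created at decision time and tagged with a retention horizon. The following is a minimal, hypothetical sketch of such a record; the field names are illustrative and not drawn from any statute, and the retention period varies by jurisdiction (typically 2–5 years):

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

# Hypothetical structure for a contemporaneous AI governance record
# (G-01.3). Field names are illustrative, not statutory terms.
@dataclass
class GovernanceRecord:
    category: str          # e.g. "design_decision", "bias_test", "deployment"
    description: str
    retention_years: int   # jurisdiction-dependent, typically 2-5
    created_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

    @property
    def retain_until(self) -> datetime:
        # Approximate each retention year as 365 days.
        return self.created_at + timedelta(days=365 * self.retention_years)

rec = GovernanceRecord("bias_test", "Disparate-impact evaluation of v2 model", 5)
assert rec.retain_until > rec.created_at
```

The key design point matching the obligation is that `created_at` is populated when the record is made, not backfilled later, and the retention horizon is computed rather than left implicit.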
Bills That Map This Requirement (64 bills)
Bill
Status
Sub-Obligations
Section
Pending 2026-01-01
G-01.1
A.R.S. § 44-1383.01(D)
Plain Language
Chatbot providers must develop, implement, and maintain a written comprehensive data security program with administrative, technical, and physical safeguards proportionate to the volume and nature of personal data and chat logs they hold. The written program must be publicly posted on the provider's website. This is both a governance obligation (establish and maintain a program) and a transparency obligation (publish it publicly).
A chatbot provider shall develop, implement and maintain a comprehensive data security program that contains administrative, technical and physical safeguards that are proportionate to the volume and nature of personal data and chat logs that are maintained by the chatbot provider. The program shall be written and made publicly available on the chatbot provider's website.
Pending 2027-01-01
G-01.3
Bus. & Prof. Code § 22587.3(a)
Plain Language
Operators must maintain contemporaneous documentation for every companion chatbot they make available in California covering three categories: (1) confirmation that a graduated response system exists, (2) all credible crisis expressions detected by the chatbot, and (3) the duration and triggering conditions of every crisis interruption pause initiated. This is an ongoing recordkeeping obligation — operators must document each crisis detection and pause event as it occurs, not merely attest to having a system in place. These records form the basis of the annual reporting obligation under § 22587.3(b).
(a) An operator shall document all of the following with respect to any companion chatbot that the operator makes available in this state: (1) The existence of a graduated response system. (2) All credible crisis expressions detected by the companion chatbot. (3) The duration and conditions of a crisis interruption pause initiated by the companion chatbot.
Pending 2027-07-01
G-01.5
Bus. & Prof. Code § 22614(a)-(c)
Plain Language
Operators must submit to an annual independent audit of their compliance with this entire chapter, beginning 180 days after the Attorney General adopts implementing regulations (which are due by January 1, 2028). The auditor — who must be certified by the Attorney General — must submit the audit report to the Attorney General within 90 days of completing the audit. Reports are confidential by default, but the Attorney General may disclose specific information to government agencies and public prosecutors for enforcement, qualified researchers subject to confidentiality agreements, and child safety organizations for standards development. Operators cannot select an uncertified auditor.
(a) Beginning on the date that is 180 days after the Attorney General adopts regulations pursuant to Section 22615, and annually thereafter, an operator shall submit to an independent audit assessing the operator's compliance with this chapter. (b) Within 90 days of completing an independent audit pursuant to subdivision (a), the auditor shall submit an AI child safety audit report to the Attorney General for any audited companion chatbot. (c) (1) Notwithstanding any other law, except as provided in paragraph (2), an AI child safety audit report submitted pursuant to this section is confidential. (2) The Attorney General may disclose specific information from an AI child safety audit report to any of the following: (A) A government agency or a public prosecutor in the state as necessary for enforcement purposes. (B) A qualified researcher conducting a study on child safety, subject to confidentiality agreements and data protection requirements set by the Attorney General. (C) An independent child safety organization or advocacy group for the purpose of developing safety standards or educational resources, subject to appropriate confidentiality protections.
Pending 2026-01-01
G-01.1
Bus. & Prof. Code § 22756.3(a)-(b)
Plain Language
Both developers and deployers must establish, document, implement, and maintain a formal governance program with reasonable administrative and technical safeguards against algorithmic discrimination risks. The program must be proportionate to the system's intended use, the entity's size and resources, the nature and scope of activities, and the technical feasibility and cost of available risk management tools. This is a continuing obligation — the program must be maintained, not merely created — and applies to each high-risk automated decision system in use or intended for use.
(a) A developer or a deployer shall establish, document, implement, and maintain a governance program that contains reasonable administrative and technical safeguards to govern the reasonably foreseeable risks of algorithmic discrimination associated with the use, or intended use, of a high-risk automated decision system. (b) The governance program required by this subdivision shall be appropriately designed with respect to all of the following: (1) The use, or intended use, of the high-risk automated decision system. (2) The size, complexity, and resources of the deployer or developer. (3) The nature, context, and scope of the activities of the deployer or developer in connection with the high-risk automated decision system. (4) The technical feasibility and cost of available tools, assessments, and other means used by a deployer or developer to map, measure, manage, and govern the risks associated with a high-risk automated decision system.
Pending 2026-01-01
G-01.1, G-01.2
Civ. Code § 1798.91.3(a)-(c)(1)-(3), (5)-(6), (9)-(11)
Plain Language
Covered deployers must develop, implement, and maintain a comprehensive written information security program with administrative, technical, and physical safeguards scaled to the deployer's size, resources, data volume, and sensitivity. The program must designate at least one responsible employee, identify and assess reasonably foreseeable internal and external security risks, require ongoing employee and contractor training, mandate compliance with program policies, include disciplinary measures for violations, prevent terminated employees from accessing personal information, incorporate regular monitoring for unauthorized access, and document incident response actions with post-incident reviews. The program must be reviewed at least annually and whenever there is a material change in business practices affecting data security. Safeguards must be consistent with existing applicable state and federal data protection requirements.
(a) A covered deployer conducting business in this state shall have a duty to protect personal information held by the covered deployer as provided by this section.
(b) A covered deployer whose high-risk artificial intelligence systems process personal information shall develop, implement, and maintain a comprehensive information security program that is written in one or more readily accessible parts and contains administrative, technical, and physical safeguards that are appropriate for all of the following:
(1) The covered deployer's size, scope, and type of business.
(2) The amount of resources available to the covered deployer.
(3) The amount of data stored by the covered deployer.
(4) The need for security and confidentiality of personal information stored by the covered deployer.
(c) The comprehensive information security program required by subdivision (a) shall meet all of the following requirements:
(1) The program shall incorporate safeguards that are consistent with the safeguards for the protection of personal information and information of a similar character under state or federal laws and regulations applicable to the covered deployer.
(2) The program shall include the designation of one or more employees of the covered deployer to maintain the program.
(3) The program shall require the identification and assessment of reasonably foreseeable internal and external risks to the security, confidentiality, and integrity of any electronic, paper, or other record containing personal information, and the establishment of a process for evaluating and improving, as necessary, the effectiveness of the current safeguards for limiting those risks, including by all of the following:
(A) Requiring ongoing employee and contractor education and training, including education and training for temporary employees and contractors of the covered deployer, on the proper use of security procedures and protocols and the importance of personal information security.
(B) Mandating employee compliance with policies and procedures established under the program.
(C) Providing a means for detecting and preventing security system failures.
(5) The program shall provide disciplinary measures for violations of a policy or procedure established under the program.
(6) The program shall include measures for preventing a terminated employee from accessing records containing personal information.
(9) The program shall include regular monitoring to ensure that the program is operating in a manner reasonably calculated to prevent unauthorized access to or unauthorized use of personal information and, as necessary, upgrading information safeguards to limit the risk of unauthorized access to or unauthorized use of personal information.
(10) The program shall require the regular review of the scope of the program's security measures that must occur subject to both of the following timeframes:
(A) At least annually.
(B) Whenever there is a material change in the covered deployer's business practices that may reasonably affect the security or integrity of records containing personal information.
(11) The program shall require the documentation of responsive actions taken in connection with any incident involving a breach of security, including a mandatory postincident review of each event and the actions taken, if any, in response to that event to make changes in business practices relating to protection of personal information.
Pending 2026-01-01
G-01.6
Civ. Code § 1798.91.3(c)(2)
Plain Language
The covered deployer must formally designate one or more employees as responsible for maintaining the comprehensive information security program. This is a standing obligation — the designation must be current at all times, not merely established at launch.
(2) The program shall include the designation of one or more employees of the covered deployer to maintain the program.
Pending 2026-01-01
G-01.3
Civ. Code § 1798.91.3(c)(4), (7)-(8)
Plain Language
The information security program must include written security policies governing off-premises storage, access, and transportation of personal information records by employees. It must also include policies for supervising third-party service providers — requiring reasonable diligence in selecting providers capable of maintaining appropriate security and contractually obligating providers to implement and maintain those security measures. Physical access to records containing personal information must be reasonably restricted, including storage in locked facilities or containers.
(4) The program shall include security policies for the covered deployer's employees relating to the storage, access, and transportation of records containing personal information outside of the covered deployer's physical business premises.
(7) The program shall provide policies for the supervision of third-party service providers that include both of the following:
(A) Taking reasonable steps to select and retain third-party service providers that are capable of maintaining appropriate security measures to protect personal information consistent with applicable law.
(B) Requiring third-party service providers by contract to implement and maintain appropriate security measures for personal information.
(8) The program shall provide reasonable restrictions on physical access to records containing personal information, including by requiring the records containing the data to be stored in a locked facility, storage area, or container.
Pending 2026-01-01
G-01.3
Civ. Code § 1798.91.3(c)(12)
Plain Language
To the extent feasible, the information security program must include specific computer system security protocols: secure user authentication (credential control, secure password methods, access restricted to active users, lockout after failed attempts); least-privilege access controls with unique credentials per employee/contractor (no vendor-default passwords); encryption of personal information transmitted over public or wireless networks and stored on portable devices; monitoring for unauthorized access; reasonably current firewall protection and OS patches for internet-connected systems; and current anti-malware software with regular security updates. The 'to the extent feasible' qualifier provides limited flexibility, but covered deployers should document why any listed protocol was not implemented.
(12) The program shall, to the extent feasible, include all of the following procedures and protocols with respect to computer system security requirements or procedures and protocols providing a higher degree of security, for the protection of personal information:
(A) The use of secure user authentication protocols that include all of the following features:
(i) The control of user login credentials and other identifiers.
(ii) The use of a reasonably secure method of assigning and selecting passwords or using unique identifier technologies, which may include biometrics or token devices.
(iii) The control of data security passwords to ensure that the passwords are kept in a location and a format that do not compromise the security of the data the passwords protect.
(iv) The restriction of access to only active users and active user accounts.
(v) The blocking of access to user credentials or identification after multiple unsuccessful attempts to gain access.
(B) The use of secure access control measures that include both of the following:
(i) The restriction of access to records and files containing personal information to only employees or contractors who need access to that personal information to perform the job duties of the employees or contractors.
(ii) The assignment of a unique identification and a password to each employee or contractor with access to a computer containing personal information, that may not be a vendor-supplied default password, or the use of another protocol reasonably designed to maintain the integrity of the security of the access controls to personal information.
(C) The encryption of both of the following:
(i) Transmitted records and files containing personal information that will travel across public networks.
(ii) Data containing personal information that is transmitted wirelessly.
(D) The use of reasonable monitoring of systems for unauthorized use of or access to personal information.
(E) The encryption of all personal information stored on laptop computers or other portable devices.
(F) For files containing personal information on a system that is connected to the internet, the use of reasonably current firewall protection and operating system security patches that are reasonably designed to maintain the integrity of the personal information.
(G) The use of both of the following:
(i) A reasonably current version of system security agent software that shall include malware protection and reasonably current patches and virus definitions.
(ii) A version of a system security agent software that is supportable with current patches and virus definitions, and is set to receive the most current security updates on a regular basis.
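The authentication protocols in paragraph (12)(A) correspond to familiar engineering controls. As one illustration, the lockout requirement in clause (v) can be sketched as follows; the threshold and all names are hypothetical, since the statute does not prescribe a specific number of attempts:

```python
# Illustrative lockout control for Civ. Code § 1798.91.3(c)(12)(A)(v):
# block access after multiple unsuccessful attempts to gain access.
# The statute fixes no threshold; 5 is a common, hypothetical choice.
MAX_FAILED_ATTEMPTS = 5

class AccountLockout:
    def __init__(self) -> None:
        self._failures: dict[str, int] = {}
        self._locked: set[str] = set()

    def record_failure(self, user: str) -> None:
        self._failures[user] = self._failures.get(user, 0) + 1
        if self._failures[user] >= MAX_FAILED_ATTEMPTS:
            self._locked.add(user)

    def record_success(self, user: str) -> None:
        if user not in self._locked:  # a locked account stays locked
            self._failures.pop(user, None)

    def is_locked(self, user: str) -> bool:
        return user in self._locked

lock = AccountLockout()
for _ in range(5):
    lock.record_failure("alice")
assert lock.is_locked("alice")
```

A production implementation would add lockout expiry, administrator unlock, and audit logging of lockout events (which would itself feed the recordkeeping obligations elsewhere in this section).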
Enacted 2026-01-01
G-01.3
Bus. & Prof. Code § 22757.12(f)(1)-(2)
Plain Language
Frontier developers may redact published compliance documents when necessary to protect trade secrets, cybersecurity, public safety, national security, or to comply with other law. However, any redaction must be accompanied by a description of the character and justification of what was redacted, to the extent the underlying concern permits disclosure of that description. Unredacted versions must be retained for five years. This creates both a permissive redaction right and an affirmative documentation/retention obligation — developers cannot simply omit information without explanation or destroy the original.
(f) (1) When a frontier developer publishes documents to comply with this section, the frontier developer may make redactions to those documents that are necessary to protect the frontier developer's trade secrets, the frontier developer's cybersecurity, public safety, or the national security of the United States or to comply with any federal or state law. (2) If a frontier developer redacts information in a document pursuant to this subdivision, the frontier developer shall describe the character and justification of the redaction in any published version of the document to the extent permitted by the concerns that justify redaction and shall retain the unredacted information for five years.
Enacted 2026-01-01
Bus. & Prof. Code § 22757.12(a)
Plain Language
Large frontier developers must create, follow, and publicly publish a comprehensive frontier AI framework covering catastrophic risk assessment thresholds, mitigations, third-party evaluations, cybersecurity for model weights, incident response, internal governance, and management of internal-use risks. This is in effect a mandatory AI risk management program.
A large frontier developer shall write, implement, comply with, and clearly and conspicuously publish on its internet website a frontier AI framework that applies to the large frontier developer's frontier models and describes how the large frontier developer approaches all of the following: (1) Incorporating national standards, international standards, and industry-consensus best practices into its frontier AI framework. (2) Defining and assessing thresholds used by the large frontier developer to identify and assess whether a frontier model has capabilities that could pose a catastrophic risk, which may include multiple-tiered thresholds. (3) Applying mitigations to address the potential for catastrophic risks based on the results of assessments undertaken pursuant to paragraph (2). (4) Reviewing assessments and adequacy of mitigations as part of the decision to deploy a frontier model or use it extensively internally. (5) Using third parties to assess the potential for catastrophic risks and the effectiveness of mitigations of catastrophic risks. (6) Revisiting and updating the frontier AI framework, including any criteria that trigger updates and how the large frontier developer determines when its frontier models are substantially modified enough to require disclosures pursuant to subdivision (c). (7) Cybersecurity practices to secure unreleased model weights from unauthorized modification or transfer by internal or external parties. (8) Identifying and responding to critical safety incidents. (9) Instituting internal governance practices to ensure implementation of these processes. (10) Assessing and managing catastrophic risk resulting from the internal use of its frontier models, including risks resulting from a frontier model circumventing oversight mechanisms.
Enacted 2026-01-01
G-01.2
Bus. & Prof. Code § 22757.12(b)(1)
Plain Language
Large frontier developers must review their frontier AI framework at least annually.
A large frontier developer shall review and, as appropriate, update its frontier AI framework at least once per year.
Enacted 2026-01-01
G-01.2
Bus. & Prof. Code § 22757.12(b)(2)
Plain Language
When a large frontier developer makes a material modification to its frontier AI framework, it must publish the updated framework and a written justification for the change within 30 days of making that modification.
If a large frontier developer makes a material modification to its frontier AI framework, the large frontier developer shall clearly and conspicuously publish the modified frontier AI framework and a justification for that modification within 30 days.
Enacted 2026-01-01
Bus. & Prof. Code § 22757.12(e)(1)(B)
Plain Language
Large frontier developers must not misrepresent their implementation of or compliance with their own frontier AI framework.
(B) A large frontier developer shall not make a materially false or misleading statement about its implementation of, or compliance with, its frontier AI framework... (2) This subdivision does not apply to a statement that was made in good faith and was reasonable under the circumstances.
Enacted 2026-01-01
Labor Code § 1107.1(a)
Plain Language
Frontier developers must not adopt rules, policies, or contracts that prevent covered employees from reporting catastrophic risk dangers or TFAIA violations to the Attorney General, federal authorities, or authorized internal personnel, and must not retaliate against employees who make such disclosures.
A frontier developer shall not make, adopt, enforce, or enter into a rule, regulation, policy, or contract that prevents a covered employee from disclosing, or retaliates against a covered employee for disclosing, information to the Attorney General, a federal authority, a person with authority over the covered employee, or another covered employee who has authority to investigate, discover, or correct the reported issue, if the covered employee has reasonable cause to believe that the information discloses either of the following: (1) The frontier developer's activities pose a specific and substantial danger to the public health or safety resulting from a catastrophic risk. (2) The frontier developer has violated Chapter 25.1 (commencing with Section 22757.10) of Division 8 of the Business and Professions Code.
Enacted 2026-01-01
Labor Code § 1107.1(b)
Plain Language
Frontier developers may not include provisions in contracts that prohibit or restrict employees from making whistleblower disclosures protected under California Labor Code Section 1102.5.
A frontier developer shall not enter into a contract that prevents a covered employee from making a disclosure protected under Section 1102.5.
Enacted 2026-01-01
Labor Code § 1107.1(d)
Plain Language
Frontier developers must provide clear notice to all covered employees of their whistleblower rights, either through continuous workplace posting (including periodic notice for remote workers) or annual written notice acknowledged by each employee.
A frontier developer shall provide a clear notice to all covered employees of their rights and responsibilities under this section, including by doing either of the following: (1) At all times posting and displaying within any workplace maintained by the frontier developer a notice to all covered employees of their rights under this section, ensuring that any new covered employee receives equivalent notice, and ensuring that any covered employee who works remotely periodically receives an equivalent notice. (2) At least once each year, providing written notice to each covered employee of the covered employee's rights under this section and ensuring that the notice is received and acknowledged by all of those covered employees.
Enacted 2026-01-01
Labor Code § 1107.1(e)(1)
Plain Language
Large frontier developers must establish an anonymous internal reporting process for covered employees to disclose good-faith concerns about catastrophic safety risks or violations of California's frontier AI law. The developer must provide the reporting employee with monthly status updates on the investigation and any actions taken in response. Disclosures and responses must be shared with the company's officers and directors at least quarterly — except that if an employee has alleged wrongdoing by a specific officer or director, that individual must be excluded from receiving the relevant disclosures.
A large frontier developer shall provide a reasonable internal process through which a covered employee may anonymously disclose information to the large frontier developer if the covered employee believes in good faith that the information indicates that the large frontier developer's activities present a specific and substantial danger to the public health or safety resulting from a catastrophic risk or that the large frontier developer violated Chapter 25.1 (commencing with Section 22757.10) of Division 8 of the Business and Professions Code, including a monthly update to the person who made the disclosure regarding the status of the large frontier developer's investigation of the disclosure and the actions taken by the large frontier developer in response to the disclosure. (2)(A)  Except as provided in subparagraph (B), the disclosures and responses of the process required by this subdivision shall be shared with officers and directors of the large frontier developer at least once each quarter. (B)  If a covered employee has alleged wrongdoing by an officer or director of the large frontier developer in a disclosure or response, subparagraph (A) shall not apply with respect to that officer or director. 
Pending 2027-01-01
G-01.3, G-01.4
C.R.S. § 10-16-112.7(3)(e)
Plain Language
Entities using AI for utilization review must ensure the AI system produces and retains documentation, audit logs, and model-governance records sufficient to demonstrate compliance with the utilization review requirements in this section and with the existing insurance regulatory requirements in section 10-3-1104.9. This is both a contemporaneous documentation obligation and a retention obligation — records must be created as part of ongoing operations and maintained for regulatory inspection.
(e) THE ARTIFICIAL INTELLIGENCE SYSTEM PRODUCES AND RETAINS DOCUMENTATION, AUDIT LOGS, AND MODEL-GOVERNANCE RECORDS IN ORDER TO DEMONSTRATE COMPLIANCE WITH THIS SECTION AND SECTION 10-3-1104.9;
Enacted 2026-06-30
G-01.1, G-01.2
C.R.S. § 6-1-1703(2)(a)
Plain Language
Deployers must implement and maintain a formal risk management policy and program governing their deployment of high-risk AI systems. The program must cover the principles, processes, and personnel used to identify, document, and mitigate algorithmic discrimination risks. Critically, this is not a one-time exercise — it must be iterative, regularly and systematically reviewed, and updated over the full lifecycle of the AI system. Reasonableness is assessed based on factors specified in the original SB 205 (size/complexity of the deployer, nature/scope of the AI system, sensitivity of data, etc.). This maps closely to the NIST AI RMF approach.
(2) (a) On and after June 30, 2026, and except as provided in subsection (6) of this section, a deployer of a high-risk artificial intelligence system shall implement a risk management policy and program to govern the deployer's deployment of the high-risk artificial intelligence system. The risk management policy and program must specify and incorporate the principles, processes, and personnel that the deployer uses to identify, document, and mitigate known or reasonably foreseeable risks of algorithmic discrimination. The risk management policy and program must be an iterative process planned, implemented, and regularly and systematically reviewed and updated over the life cycle of a high-risk artificial intelligence system, requiring regular, systematic review and updates. A risk management policy and program implemented and maintained pursuant to this subsection (2) must be reasonable considering:
Pending 2025-07-01
G-01.1, G-01.2
O.C.G.A. § 10-16-3(b)-(c)
Plain Language
Deployers must implement and maintain a risk management policy and program governing their use of automated decision systems. The program must specify the principles, processes, and personnel used to identify, document, and mitigate algorithmic discrimination risks. It must be iterative — regularly and systematically reviewed and updated over the system's lifecycle — and must consider the NIST AI RMF, ISO/IEC 42001, or equivalent frameworks, as well as the deployer's size and complexity, nature and scope of deployed systems, and sensitivity and volume of data processed. A single program may cover multiple deployed systems. Small deployers meeting all conditions in § 10-16-6 (fewer than 15 employees, fewer than 1,000 affected consumers, no own-data training, etc.) are exempt.
(b) Except as provided in Code Section 10-16-6, a deployer of an automated decision system shall implement a risk management policy and program to govern the deployer's deployment of the automated decision system. The risk management policy and program shall specify and incorporate the principles, processes, and personnel that the deployer uses to identify, document, and mitigate known or reasonably foreseeable risks of algorithmic discrimination. The risk management policy and program shall be an iterative process planned, implemented, and regularly and systematically reviewed and updated over the life cycle of an automated decision system, requiring regular, systematic review and updates. A risk management policy and program implemented and maintained pursuant to this subsection shall take into consideration: (1) Either: (A) The guidance and standards set forth in the latest version of the Artificial Intelligence Risk Management Framework published by the National Institute of Standards and Technology of the United States Department of Commerce, standard ISO/IEC 42001 of the International Organization for Standardization, or another nationally or internationally recognized risk management framework for artificial intelligence systems, if the standards are substantially equivalent to or more stringent than the requirements of this chapter; or (B) Any risk management framework for artificial intelligence systems that the Attorney General, in the Attorney General's discretion, may designate; (2) The size and complexity of the deployer; (3) The nature and scope of the automated decision systems deployed by the deployer, including the intended uses of the automated decision systems; and (4) The sensitivity and volume of data processed in connection with the automated decision systems deployed by the deployer. (c) A risk management policy and program implemented pursuant to this Code section may cover multiple automated decision systems deployed by the deployer.
Pending 2025-07-01
G-01.3
O.C.G.A. § 10-16-3(d)
Plain Language
Deployers must establish and follow written standards, policies, and procedures governing their acquisition and use of third-party automated decision systems. This includes contractual controls ensuring developers provide all information needed for deployer compliance, procedures for reporting errors or evidence of algorithmic discrimination back to developers, and procedures for remediating and eliminating incorrect information from deployed systems. These are standing governance obligations — not one-time documentation exercises.
Each deployer shall establish and adhere to: (1) Written standards, policies, procedures, and protocols for the acquisition, use of, or reliance on automated decision systems developed by third-party developers, including reasonable contractual controls ensuring that the developer statements and summaries described in subsection (b) of Code Section 10-16-2 include all information necessary for the deployer to fulfill its obligations under this Code section; (2) Procedures for reporting any incorrect information or evidence of algorithmic discrimination to a developer for further investigation and mitigation, as necessary; and (3) Procedures to remediate and eliminate incorrect information from its automated decision systems that the deployer has identified or has been reported to a developer.
Pending 2028-07-01
G-01.3
HRS § 321-__ (Monitoring; performance evaluation; record keeping)(4)
Plain Language
Health care providers must maintain four categories of records: (1) an updated inventory of all AI systems used in consequential decisions; (2) documentation of each system's design, intended use, and training data; (3) records of all ongoing monitoring, performance evaluations, and oversight activities; and (4) documentation of findings and remedial actions taken when deficiencies are identified. These are continuing recordkeeping obligations — the inventory must be kept current, and documentation must be maintained as monitoring and evaluations occur. The bill does not specify a retention period, which may be addressed in implementing rules.
(4) Maintain:
(A) An updated inventory of the artificial intelligence systems;
(B) Documentation on the system design, intended use, and training data of the artificial intelligence systems;
(C) Record of the monitoring, performance evaluations, and oversight activities; and
(D) Documentation of findings and actions taken to address any deficiencies identified through the monitoring or performance evaluations.
Pending 2026-07-01
G-01.3
Iowa Code § 91F.2(5)
Plain Language
Employers must maintain a current inventory of all automated decision systems they use. This is a recordkeeping obligation designed to support compliance with the notice requirements — the list must be kept up to date as systems are deployed or retired. There is no explicit requirement to publish the list or submit it to a regulator, but it must exist and be current.
5. An employer shall maintain an updated list of all automated decision systems currently in use by the employer to facilitate implementation of this section.
Pending 2025-07-01
G-01.3
§ 554J.2(1)(a)-(c)
Plain Language
Any private entity that possesses biometric data must develop a written retention and destruction policy establishing how long it will keep biometric data before destroying it. The policy must be publicly available. Regardless of what the policy states, the hard ceiling is three years after the subject's last interaction with the entity or until the original collection purpose has been fulfilled — whichever is longer. This creates three distinct obligations: (1) create a written policy, (2) make it publicly available, and (3) comply with the maximum retention period.
1. a. A private entity in possession of biometric data shall develop a written policy to establish a schedule for how long the private entity will retain biometric data before the private entity destroys the biometric data. b. A written policy shall be available to the public. c. A private entity shall not retain biometric data for more than three years after the subject of the biometric data last interacts with the private entity or until the purposes for which the biometric data was collected have been accomplished, whichever is longer.
Pending 2025-01-01
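The "whichever is longer" ceiling in the Iowa biometric provision above is easy to misread as a fixed three-year limit. A minimal sketch of the date logic — a hypothetical illustration only, with the function name mine and three years approximated as 3 × 365 days rather than calendar-year arithmetic:

```python
from datetime import date, timedelta
from typing import Optional

def destruction_deadline(last_interaction: date,
                         purpose_fulfilled: Optional[date]) -> Optional[date]:
    """Latest retention date under the quoted Iowa provision: three
    years after the subject's last interaction with the entity, or
    when the original collection purpose is fulfilled, whichever is
    LONGER. Returns None while the purpose remains unfulfilled,
    since the 'longer' prong is then open-ended."""
    # Approximation: 3 years as 1095 days (real compliance logic
    # would use calendar-year math).
    three_years_after = last_interaction + timedelta(days=3 * 365)
    if purpose_fulfilled is None:
        return None
    return max(three_years_after, purpose_fulfilled)
```

Note the asymmetry this makes visible: an unfulfilled purpose keeps the retention window open indefinitely, while an early-fulfilled purpose still permits retention out to the three-year mark.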
G-01.1
Section 20(b)
Plain Language
Health insurance issuers must establish and maintain an AI systems program — defined as their controls and processes for responsible AI use, including governance, risk management, and internal audit functions — that includes policies and procedures ensuring compliance with this Act by all employees, directors, trustees, agents, representatives, and contractors involved in administering coverage. The issuer bears ultimate responsibility for noncompliance regardless of whether a third party performed the noncompliant action. Separately, third parties and other persons are not relieved of liability for failing to cooperate with Department investigations or market conduct actions.
(b) A health insurance issuer shall ensure that its health insurance coverage is administered in conformity with this Act. The health insurance issuer's AI systems program shall include policies and procedures to ensure such conformity by all employees, directors, trustees, agents, representatives, and persons directly or indirectly contracted to administer the health insurance coverage. The health insurance issuer shall be responsible for any noncompliance under this Act with respect to its health insurance coverage. Nothing in this Section relieves any other person from liability for failure to comply with the Department's investigations or market conduct actions related to a health insurance issuer's compliance with this Act.
Pending 2026-01-01
G-01.1
Section 15
Plain Language
The Department of Innovation and Technology must adopt rules requiring businesses using AI systems to comply with five governance principles: safety (no harm to individuals), transparency (clear explanations of how systems work and decide), accountability (identifying responsible parties), fairness (preventing bias), and contestability (allowing individuals to challenge AI decisions). The principles themselves are high-level and aspirational as written — the operative compliance details will be determined through future Department rulemaking. Applies only to businesses with 10 or more employees (per Section 25).
To address the concerns detailed in the findings in Section 5 of this Act and to ensure that negative impacts of AI system use are prevented, the Department of Innovation and Technology shall adopt rules as may be necessary to ensure that businesses using AI systems are compliant with the 5 principles of AI governance as follows: (1) Safety: Ensuring systems operate without causing harm to individuals. (2) Transparency: Providing clear and understandable explanations of how systems work and make decisions. (3) Accountability: Identifying and holding individuals or companies responsible for the system's performance and outcomes. (4) Fairness: Preventing and mitigating bias to ensure equitable treatment for all individuals. (5) Contestability: Allowing individuals to challenge and seek redress for decisions made by the system.
Pending 2027-01-01
G-01.1G-01.2G-01.3G-01.6
Section 20(a)-(d)
Plain Language
Deployers must establish, document, implement, and maintain a governance program with reasonable administrative and technical safeguards to manage the risks of algorithmic discrimination from their automated decision tools. The safeguards must be proportionate to the tool's use, the deployer's size and resources, and the technical feasibility of available risk management tools. The program must include: risk identification and safeguard implementation, integration with the impact assessment process, an annual comprehensive compliance review, retention of impact assessment results for at least two years after completion, and ongoing adjustments in response to material changes in technology or operations. At least one designated employee must be responsible for overseeing the program and compliance. That employee has the authority to raise compliance concerns in good faith, and the employer must promptly and completely assess any such concern. Deployers with fewer than 25 employees are exempt unless their tool impacted more than 999 people in the prior calendar year.
(a) A deployer shall establish, document, implement, and maintain a governance program that contains reasonable administrative and technical safeguards to map, measure, manage, and govern the reasonably foreseeable risks of algorithmic discrimination associated with the use or intended use of an automated decision tool. The safeguards required by this subsection shall be appropriate to all of the following: (1) the use or intended use of the automated decision tool; (2) the deployer's role as a deployer; (3) the size, complexity, and resources of the deployer; (4) the nature, context, and scope of the activities of the deployer in connection with the automated decision tool; and (5) the technical feasibility and cost of available tools, assessments, and other means used by a deployer to map, measure, manage, and govern the risks associated with an automated decision tool. (b) The governance program required by this Section shall be designed to do all of the following: (1) identify and implement safeguards to address reasonably foreseeable risks of algorithmic discrimination resulting from the use or intended use of an automated decision tool; (2) if established by a deployer, provide for the performance of impact assessments as required by Section 10; (3) conduct an annual and comprehensive review of policies, practices, and procedures to ensure compliance with this Act; (4) maintain for 2 years after completion the results of an impact assessment; and (5) evaluate and make reasonable adjustments to administrative and technical safeguards in light of material changes in technology, the risks associated with the automated decision tool, the state of technical standards, and changes in business arrangements or operations of the deployer. (c) A deployer shall designate at least one employee to be responsible for overseeing and maintaining the governance program and compliance with this Act. 
An employee designated under this subsection shall have the authority to assert to the employee's employer a good faith belief that the design, production, or use of an automated decision tool fails to comply with the requirements of this Act. An employer of an employee designated under this subsection shall conduct a prompt and complete assessment of any compliance issue raised by that employee. (d) This Section does not apply to a deployer with fewer than 25 employees unless, as of the end of the prior calendar year, the deployer deployed an automated decision tool that impacted more than 999 people per year.
Pending 2027-01-01
G-01.5
Section 20(a)
Plain Language
Operators must obtain an independent third-party compliance audit at least every two years covering all obligations under the Act. The operator must publish a high-level summary of the audit findings on its website, though confidential or proprietary information may be excluded. The audit must assess compliance with the full Act — including prohibited design practices, user safeguards, AI identity notifications, and crisis intervention protocols. This creates both an audit obligation and a public transparency obligation.
(a) At least once every 2 years, an operator shall obtain an independent, third-party audit to assess the operator's compliance with this Act. The operator shall make publicly available on its website a high-level summary of the audit's findings, excluding confidential or proprietary information.
Pre-filed 2025-07-07
G-01.1
Chapter 93M, Section 3(a)
Plain Language
Deployers of high-risk AI systems must establish and maintain a formal risk management program that identifies and mitigates known or foreseeable risks of algorithmic discrimination and aligns with industry standards such as the NIST AI Risk Management Framework. This is a continuing obligation — the program must be maintained, not merely created. NIST AI RMF alignment is cited as a benchmark but the provision uses 'such as,' suggesting it is illustrative rather than an exclusive safe harbor. The AG has rulemaking authority under Section 7 to designate recognized frameworks.
(a) Risk Management Policy: Deployers of high-risk AI systems must implement and maintain a risk management program that: (1) Identifies and mitigates known or foreseeable risks of algorithmic discrimination; (2) Aligns with industry standards, such as the National Institute of Standards and Technology (NIST) AI Risk Management Framework.
Pre-filed 2025-07-17
G-01.1G-01.2
Ch. 93M § 3(b)
Plain Language
Deployers must implement a documented risk management policy and program governing their deployment of high-risk AI systems. The program must identify, document, and mitigate algorithmic discrimination risks using defined principles, processes, and personnel. It must be iterative, regularly and systematically reviewed and updated over the system lifecycle. Reasonableness is assessed against the NIST AI RMF, ISO/IEC 42001, or other recognized frameworks (or AG-designated frameworks), the deployer's size and complexity, the nature of deployed systems, and data sensitivity and volume. A single program may cover multiple systems. A small-deployer exemption applies under Section 3(f) for deployers with fewer than 50 employees that do not use their own data to train the system.
(b) (1) Not later than 6 months after the effective date of this act, and except as provided in subsection (f) of this section, a deployer of a high-risk artificial intelligence system shall implement a risk management policy and program to govern the deployer's deployment of the high-risk artificial intelligence system. The risk management policy and program must specify and incorporate the principles, processes, and personnel that the deployer uses to identify, document, and mitigate known or reasonably foreseeable risks of algorithmic discrimination. The risk management policy and program must be an iterative process planned, implemented, and regularly and systematically reviewed and updated over the life cycle of a high-risk artificial intelligence system, requiring regular, systematic review and updates. A risk management policy and program implemented and maintained pursuant to this subsection (b) must be reasonable considering: (i) (A) the guidance and standards set forth in the latest version of the "Artificial Intelligence Risk Management Framework" published by the National Institute of Standards and Technology in the United States Department of Commerce, standard ISO/IEC 42001 of the International Organization for Standardization, or another nationally or internationally recognized risk management framework for artificial intelligence systems, if the standards are substantially equivalent to or more stringent than the requirements of this chapter; or (B) any risk management framework for artificial intelligence systems that the attorney general, in the attorney general's discretion, may designate; (ii) the size and complexity of the deployer; (iii) the nature and scope of the high-risk artificial intelligence systems deployed by the deployer, including the intended uses of the high-risk artificial intelligence systems; and (iv) the sensitivity and volume of data processed in connection with the high-risk artificial intelligence systems deployed by the 
deployer. (2) a risk management policy and program implemented pursuant to subsection (b)(1) of this section may cover multiple high-risk artificial intelligence systems deployed by the deployer.
Pre-filed 2025-01-14
G-01.3
Chapter 149B, § 2(c)
Plain Language
Employers must maintain contemporaneous, true, and accurate records of all electronically monitored data for three years, and destroy the data no later than 37 months after collection absent employee consent to longer retention. Employers must implement reasonable administrative, technical, and physical data security measures appropriate to the data's volume and nature. Employees have the right to request corrections to erroneous data. The 37-month window slightly exceeds the 3-year record preservation obligation, giving employers a one-month buffer.
(c) An employer shall establish, maintain, and preserve for three years contemporaneous, true, and accurate records of data collected via an electronic monitoring tool to ensure compliance with employee or commissioner requests for data. The employer shall destroy any employee information collected via an electronic monitoring tool no later than thirty-seven months after collection unless the employee has provided written and informed consent to the retention of their data by the employer. An employer shall establish, implement and maintain reasonable administrative, technical and physical data security practices to protect the confidentiality, integrity and accessibility of employee data appropriate to the volume and nature of the employee data at issue. An employee shall have the right to request corrections to erroneous employee data.
Pre-filed 2025-01-14
G-01.3G-01.4
Chapter 149B, § 3(c)-(d)
Plain Language
Employers and their vendors must retain all documentation necessary for impact assessments, including data sources, technical specifications, developer identities, historical use data, and a version history of the tool. Vendors must grant employers a license to access this documentation for sharing with labor organizations or courts as required by law. Documentation must be stored in a form that is legible and accessible to auditors, per commissioner specifications. Employee data collected for impact assessments must be handled to protect privacy, cannot be shared with the employer, and may only be shared with others strictly necessary for the assessment.
(c) An employer or its vendor shall retain all documentation pertaining to the design, development, use, and data of an automated employment decision tool that may be necessary to conduct an impact assessment. To the extent held by a vendor, the employer shall be granted a license to access this documentation and share this documentation with a labor organization to the extent required by federal or state law, or to the extent required by a court or agency in connection with employment or labor litigation. This includes but is not limited to the source of the data used to develop the tool, the technical specifications of the tool, individuals involved in the development of the tool, and historical use data for the tool. Such documentation must include a historical record of versions of the tool, such that an employer shall be able to attest in the event of litigation disputing an employment decision, the nature and specifications of the tool as it was used at the time of that employment decision. Such documentation shall be stored in accordance with such record-keeping, data retention, and security requirements as the commissioner may specify, and in such a manner as to be legible and accessible to the party conducting an impact assessment. (d) If an initial or subsequent impact assessment requires the collection of employee data to assess a tool's disparate impact on employees, such data shall be collected, processed, stored, retained, and disposed of in such a manner as to protect the privacy of employees, and shall comply with any data retention and security requirements specified by the commissioner. Employee data provided to auditors for the purpose of an impact assessment shall not be shared with the employer, nor shall it be shared with any person, business entity, or other organization unless strictly necessary for the completion of the impact assessment.
Pre-filed 2025-01-17
G-01.3
Ch. 93M § 2(a)
Plain Language
Any private entity that possesses biometric identifiers or biometric information must create a written retention and destruction policy and make it available to the individuals whose data was collected. The policy must establish a schedule for permanently destroying biometric data when the original purpose for collection has been satisfied or within one year of the individual's last interaction with the entity, whichever comes first. The entity must then follow its own policy — the only exception is a valid court order, warrant, subpoena, or governmental agency request. This is both a policy-creation obligation and an ongoing compliance obligation to adhere to the policy once created.
(a) A private entity in possession of biometric identifiers or biometric information must develop a written policy, made available to the person from whom biometric information is to be collected or was collected, establishing a retention schedule and guidelines for permanently destroying biometric identifiers and biometric information when the initial purpose for collecting or obtaining such identifiers or information has been satisfied or within 1 year of the individual's last interaction with the private entity, whichever occurs first. Absent a valid order, warrant, or subpoena issued by a court of competent jurisdiction or a local or federal governmental agency, a private entity in possession of biometric identifiers or biometric information must comply with its established retention schedule and destruction guidelines.
Pending 2026-01-01
G-01.3G-01.4
Sec. 7(1)(d), Sec. 7(3)-(4)
Plain Language
Large developers must record and retain all critical risk testing details — tests used and results obtained — for five years with sufficient detail for third-party replication. All documents published under the act must appear on a conspicuous page on the developer's website. Redactions are permitted for trade secrets, public safety, national security, or legal compliance, but if any redaction is made, the developer must retain the unredacted version for at least five years, provide the Attorney General access on request, and describe the character and justification of each redaction in the published version. The same redaction and retention rules apply to auditors publishing reports under Section 9.
(d) Record and retain for 5 years any specific tests used and results obtained as a part of an assessment of critical risk with sufficient detail for qualified third parties to replicate the testing. (3) If a large developer publishes a document in accordance with the requirements of this act, the large developer shall publish the information on a conspicuous page on the large developer's website. The large developer may redact the document as reasonably necessary to protect the large developer's trade secrets, public safety, or national security, or to comply with applicable law. An auditor required to perform an audit and produce a report under section 9 may redact information from the report using the same procedure described in this subsection before the publication of that report under section 9(3). (4) If a large developer or auditor makes a redaction under subsection (3), the large developer or auditor shall do both of the following: (a) Retain an unredacted version of the document for not less than 5 years and provide the attorney general with the ability to inspect the unredacted document on request. (b) Describe the character and justification of the redactions in the published version of the document.
Pending 2026-01-01
G-01.5
Sec. 9(1)-(4)
Plain Language
At least once per year, large developers must hire a reputable third-party auditor to assess (1) compliance with the developer's own safety and security protocol, (2) any instances where the protocol was too vague to determine compliance, and (3) any potential violations of the truthfulness, publication, and redaction requirements in Section 7. The developer must grant the auditor access to all act-related materials and any other materials reasonably necessary. The audit team must include at least one corporate compliance expert and one technical AI safety expert. The completed report must be conspicuously published within 90 days of completion, subject to the same redaction rules as other published documents under the act.
(1) Beginning on January 1, 2026, not less than once per year, a large developer shall retain a reputable third-party auditor to produce a report that assesses all of the following: (a) If the large developer has complied with the large developer's safety and security protocol and any instances of noncompliance. (b) Any instance where the large developer's safety and security protocol was not stated clearly enough to determine if the large developer has complied with the safety and security protocol. (c) Any instance that the auditor believes the large developer violated section 7(2), (3), or (4). (2) A large developer shall grant the auditor access to all materials produced to comply with this act and any other materials reasonably necessary to perform the assessment under subsection (1). (3) Not more than 90 days after the completion of the auditor's report under subsection (1), a large developer shall conspicuously publish that report. (4) In conducting an audit under this section, an auditor shall employ or contract 1 or more individuals with expertise in corporate compliance and 1 or more individuals with technical expertise in the safety of foundation models.
Pending 2026-02-24
G-01.3
Sec. 7(1)-(3)
Plain Language
Employers must delete collected covered-individual data no later than 3 years after the purpose for which it was collected is achieved, unless a collective bargaining agreement specifies a different period. Data that the employer never actually uses must be deleted immediately. Employers are flatly prohibited from selling or licensing covered-individual data in any form — including deidentified or aggregated data. Data sharing with state or local government is also prohibited except when providing information to the Department of Labor and Economic Opportunity, complying with law, or responding to a court order. The sale/license prohibition is notably absolute and includes deidentified data.
Sec. 7. (1) An employer that collects a covered individual's data shall retain the data for not more than 3 years after the date on which the purpose for using the electronic monitoring tool or automated decisions tool is achieved, unless otherwise specified by a collective bargaining agreement. If the employer does not use any specific data of a covered individual, the employer must delete that data immediately. (2) An employer shall not sell or license a covered individual's data, including, but not limited to, data that is deidentified or aggregated. (3) An employer shall not share data collected under section 4 or 5 with this state or a local unit of government unless otherwise necessary to do any of the following: (a) Provide information to the department. (b) Comply with the requirements of federal, state, or local law. (c) Comply with a court-issued subpoena, warrant, or order.
Pending 2026-02-24
G-01.3G-01.4
Sec. 9(4)-(7)
Plain Language
Employers must retain all documentation related to the design, development, use, and data of their electronic monitoring and automated decision tools — including data sources, technical specifications, developer identities, historical use data, and version history. Service providers that supply these tools must give employers access to this documentation. Employers must share it with labor organizations as required by law or court order in connection with employment litigation. Documentation must be stored in a manner prescribed by the Director of the Department of Labor and Economic Opportunity to ensure legibility and accessibility for the third-party assessor conducting the impact assessment. No retention period is specified in this subsection (the 3-year limit in Sec. 7 applies to covered individual data, not tool documentation).
(4) An employer shall retain all documentation pertaining to the design, development, use, and data of an electronic monitoring tool or automated decisions tool that may be necessary to conduct an impact assessment. The documentation includes, but is not limited to, the source of the data used to develop the tool, the technical specifications of the tool, individuals involved in the development of the tool, historical use data for the tool, and a historical record of the versions of the tool the employer uses. (5) A service provider that contracts with an employer to provide electronic monitoring or automated decisions shall allow the employer access to the documentation described in subsection (4). (6) An employer shall share the documentation described in subsection (4) with a labor organization as required under law or as required by a court or agency in connection with any employment or labor litigation to which the employer is a party. (7) The documentation described in subsection (4) must be stored in manner as prescribed by the director. The director shall prescribe the manner so that the documentation is legible and accessible to the party that conducts an impact assessment of the tool.
Pending 2026-08-01
G-01.3G-01.4
Minn. Stat. § 181.9923, subd. 1(a)-(c)
Plain Language
Employers must retain all worker data collected, used, or produced by an ADS — including input/output data and human reviewer corroborating evidence — for 36 months from the most recent collection, production, or use. These records must be available to workers and the Commissioner of Labor and Industry upon request. Data must be destroyed no later than 37 months after its most recent use unless the worker gives written, informed consent for longer retention. Employers must also protect worker data with security practices appropriate to the data's volume and nature, consistent with applicable data and cyber privacy laws. Violations carry $2,500 per violation per day per affected worker.
Subdivision 1. Data records. (a) Employers must maintain records of worker data collected, used, or produced by an automated decision system and any input or output data used or produced by the automated decision system or used as corroborating evidence by a human reviewer for 36 months after the data's most recent collection, production, or use to ensure compliance with requests for data from workers or the commissioner of labor and industry. (b) Employers must destroy any worker data collected, used, or produced by an automated decision system and any input or output data used or produced by the automated decision system or used as corroborating evidence by a human reviewer no later than 37 months after its most recent collection, production, or use, unless the worker has provided written and informed consent to the retention of the worker's data by the employer. (c) Employers must protect the confidentiality, integrity, and accessibility of worker data using data security practices consistent with data and cyber privacy laws and appropriate to the volume and nature of the worker data collected.
Pending 2026-01-01
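The 36-month retention floor and 37-month destruction ceiling in the Minnesota record above form an unusually narrow compliance window. A sketch of the calendar arithmetic — hypothetical illustration only, with both function names mine and day-of-month clamping as one reasonable reading of "months after":

```python
from datetime import date
from typing import Optional, Tuple

def add_months(d: date, months: int) -> date:
    """Calendar-month addition, clamping the day to the target
    month's length (e.g. Jan 31 + 1 month -> Feb 28/29)."""
    y, m = divmod(d.month - 1 + months, 12)
    year, month = d.year + y, m + 1
    leap = year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
    days = [31, 29 if leap else 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
    return date(year, month, min(d.day, days[month - 1]))

def retention_window(most_recent_use: date,
                     written_consent: bool) -> Tuple[date, Optional[date]]:
    """(retain_until, destroy_by) under the quoted Minnesota text:
    keep records for 36 months after the most recent collection,
    production, or use; destroy no later than 37 months after,
    unless the worker consents in writing to longer retention."""
    retain_until = add_months(most_recent_use, 36)
    destroy_by = None if written_consent else add_months(most_recent_use, 37)
    return retain_until, destroy_by
```

Because each new use of the data restarts the clock, the window is anchored to the *most recent* use, not the original collection date.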
G-01.2
Minn. Stat. § 325M.41, subd. 3(a)-(b)
Plain Language
Developers must annually review and update their safety and security protocol to reflect changes in the AI model's capabilities and evolving industry best practices. If the review results in a material modification, the developer must republish the updated protocol publicly with appropriate redactions and transmit a copy to the attorney general — the same dual-publication obligation that applies at initial deployment. The annual review is mandatory, as is modification — the statute says 'modify,' not 'modify if necessary,' suggesting continuous improvement is expected.
(a) A developer must (1) conduct an annual review of the safety and security protocol required under this section to account for changes to the capabilities of the artificial intelligence model and industry best practices; and (2) modify the safety and security protocol. (b) If a material modification is made to the safety and security protocol, the developer must publish the safety and security protocol in the same manner required under subdivision 1, clause (3).
Pending 2026-01-01
Minn. Stat. § 325M.41, subd. 5
Plain Language
Developers must not knowingly include false or materially misleading statements or omissions in any documents produced under the RAISE Act — including the safety and security protocol, test records, and safety incident disclosures. This is a truthfulness-in-reporting obligation that applies to all documents the statute requires the developer to create, retain, publish, or submit to the attorney general. The 'knowingly' scienter requirement means negligent misstatements are not covered, but deliberate misrepresentation or deliberate omission of material facts is prohibited.
A developer must not knowingly make false or materially misleading statements or omissions in or regarding documents produced under this section.
Pre-filed 2026-08-01
G-01.2
Minn. Stat. § 325M.41, subd. 3(a)-(b)
Plain Language
Developers must conduct an annual review of their safety and security protocol, accounting for both changes to the AI model's capabilities and evolving industry best practices. The protocol must be modified as needed following review. If the modifications are material, the developer must re-publish the updated protocol (with appropriate redactions) and transmit a copy to the attorney general — the same publication requirements that apply to the initial protocol. This is a continuing obligation, not a one-time pre-deployment exercise.
(a) A developer must (1) conduct an annual review of the safety and security protocol required under this section to account for changes to the capabilities of the artificial intelligence model and industry best practices; and (2) modify the safety and security protocol. (b) If a material modification is made to the safety and security protocol, the developer must publish the safety and security protocol in the same manner required under subdivision 1, clause (3).
Pending 2026-09-01
G-01.3G-01.4
§ 181.9923, Subd. 1(a)-(c)
Plain Language
Employers must retain all worker data collected, used, or produced by an automated decision system — including ADS inputs, outputs, and corroborating evidence used by human reviewers — for 36 months from the most recent collection, production, or use. Data must be destroyed no later than 37 months unless the worker has given written, informed consent to longer retention. Employers must also protect this data using security practices consistent with applicable data and cyber privacy laws, proportionate to the volume and nature of data collected. This creates both a retention floor (36 months) and a mandatory destruction ceiling (37 months), which is an unusually narrow and prescriptive window.
Subdivision 1. Data records. (a) Employers must maintain records of worker data collected, used, or produced by an automated decision system and any input or output data used or produced by the automated decision system or used as corroborating evidence by a human reviewer for 36 months after the data's most recent collection, production, or use to ensure compliance with requests for data from workers or the commissioner of labor and industry. (b) Employers must destroy any worker data collected, used, or produced by an automated decision system and any input or output data used or produced by the automated decision system or used as corroborating evidence by a human reviewer no later than 37 months after its most recent collection, production, or use, unless the worker has provided written and informed consent to the retention of the worker's data by the employer. (c) Employers must protect the confidentiality, integrity, and accessibility of worker data using data security practices consistent with data and cyber privacy laws and appropriate to the volume and nature of the worker data collected.
Pre-filed 2026-08-28
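The 36-month retention floor and 37-month destruction ceiling above are mechanical enough to encode in compliance tooling. A minimal Python sketch, assuming a calendar-month reading of the deadlines (function and parameter names are illustrative, not statutory terms, and this is not legal advice):

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Add calendar months to a date, clamping the day to the month's end."""
    y, m = divmod(d.month - 1 + months, 12)
    year, month = d.year + y, m + 1
    # Clamp the day (e.g. Jan 31 + 1 month -> Feb 28/29).
    last_day = [31, 29 if year % 4 == 0 and (year % 100 != 0 or year % 400 == 0) else 28,
                31, 30, 31, 30, 31, 31, 30, 31, 30, 31][month - 1]
    return date(year, month, min(d.day, last_day))

def retention_window(last_use: date, worker_consented: bool = False):
    """Worker-ADS-data window as summarized above: retain for 36 months from
    the most recent collection/production/use; destroy no later than 37 months,
    unless the worker gave written, informed consent to longer retention."""
    retain_until = add_months(last_use, 36)
    destroy_by = None if worker_consented else add_months(last_use, 37)
    return retain_until, destroy_by

retain, destroy = retention_window(date(2026, 9, 1))
# retain -> 2029-09-01, destroy -> 2029-10-01
```

Note that each new collection, production, or use resets the clock, so the `last_use` input must track the most recent event, not the original collection date.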
G-01.3
§ 1.566(1)
Plain Language
Any private entity possessing biometric identifiers or biometric information must create and publicly publish a written retention and destruction policy. The policy must establish a retention schedule and guidelines for permanently destroying biometric data when the original collection purpose has been satisfied or within one year of the individual's last interaction with the entity — whichever comes first. The entity must actually comply with its own published schedule and destruction guidelines, and may deviate only pursuant to a valid warrant or subpoena. This is both a documentation obligation (creating the policy) and an operational obligation (following it).
1. Any private entity in possession of biometric identifiers or biometric information shall develop a written policy, made available to the public, establishing a retention schedule and guidelines for permanently destroying biometric identifiers and biometric information when the initial purpose for collecting or obtaining such identifiers or information has been satisfied or within one year of the individual's last interaction with the private entity, whichever occurs first. Absent a valid warrant or subpoena issued by a court of competent jurisdiction, a private entity in possession of biometric identifiers or biometric information shall comply with its established retention schedule and destruction guidelines.
Pending 2026-01-01
G-01.3G-01.5
G.S. § 114B-4(e)-(f)
Plain Language
Licensed health information chatbot operators must conduct regular self-inspections and undergo an annual third-party audit, with all results made available to the Department of Justice. They must also implement continuous monitoring for safety and risk indicators and submit quarterly performance reports that include incident reports. The quarterly reporting obligation creates a regular cadence of regulatory submissions beyond what most AI statutes require.
(e) A licensee shall conduct regular inspections and perform an annual third-party audit. Results of all inspections and audits must be made available to the Department. (f) A licensee shall implement continuous monitoring systems for safety and risk indicators and submit quarterly performance reports including incident reports.
Pending 2027-01-01
G-01.3
Sec. 4(6)(a)-(b)
Plain Language
When publishing documents to comply with Sec. 4, large frontier developers and large chatbot providers may redact information necessary to protect trade secrets, cybersecurity, public safety, national security, or to comply with law. However, any redaction must be accompanied by a description of the character and justification of the redaction in the published document, and the unredacted information must be retained for five years. This creates a recordkeeping obligation that survives the publication event.
(6)(a) When a large frontier developer or large chatbot provider publishes documents to comply with this section, the large frontier developer or large chatbot provider may make redactions to those documents that are necessary to protect the large frontier developer's trade secrets, the large frontier developer's or large chatbot provider's cybersecurity, public safety, or the national security of the United States or to comply with any federal or state law. (b) If a large frontier developer or large chatbot provider redacts information in a document pursuant to subdivision (6)(a) of this section, the large frontier developer or large chatbot provider shall describe the character and justification of the redaction in any published version of the document to the extent permitted by the concerns that justify redaction and shall retain the unredacted information for five years.
Pending 2026-02-01
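Because the recordkeeping obligation above survives the publication event, a redaction log that pairs each redaction with its justification and the retained unredacted text is a natural data model. A sketch, assuming the five-year retention clock runs from publication (field and class names are illustrative, not statutory terms):

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class RedactionRecord:
    """One redaction in a published document."""
    character: str        # what kind of information was redacted
    justification: str    # trade secret / cybersecurity / public safety / ...
    unredacted_text: str  # must be retained, not discarded at publication

@dataclass
class PublishedDocument:
    published_on: date
    redactions: list[RedactionRecord] = field(default_factory=list)

    def unredacted_retention_until(self) -> date:
        """Retain unredacted information for five years (here: from publication)."""
        try:
            return self.published_on.replace(year=self.published_on.year + 5)
        except ValueError:  # Feb 29 publication date
            return self.published_on.replace(year=self.published_on.year + 5, day=28)
```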
G-01.1
Sec. 4(2)(a)-(b)
Plain Language
Deployers must implement a risk management policy and program governing their deployment of high-risk AI systems. Conformity with the NIST AI RMF or ISO/IEC 42001 (as of January 1, 2025) creates a presumption of compliance. A single program may cover multiple high-risk systems. Small deployers (those with fewer than 50 full-time equivalent employees that do not use their own data to train the system) are exempt under the conditions specified in Sec. 4(6).
(2)(a) Except as otherwise provided in subsection (6) of this section, on and after February 1, 2026, a deployer of a high-risk artificial intelligence system shall implement a risk management policy and program to govern the deployer's deployment of the high-risk artificial intelligence system. High-risk artificial intelligence systems that are in conformity with the guidance and standards set forth in the following as of January 1, 2025, shall be presumed to be in conformity with this section: (i) The Artificial Intelligence Risk Management Framework published by the National Institute of Standards and Technology; or (ii) The standard ISO/IEC 42001 of the International Organization for Standardization. (b) Any risk management policy and program implemented pursuant to subdivision (a) of this subsection may cover multiple high-risk artificial intelligence systems deployed by the deployer.
Pending 2027-01-01
G-01.1G-01.2
GBL § 1552(2)(a)-(b)
Plain Language
Deployers must implement and maintain a risk management policy and program governing their deployment of high-risk AI decision systems, covering principles, processes, and personnel for identifying, documenting, and mitigating algorithmic discrimination risks. The program must be iterative and regularly reviewed and updated over the system lifecycle. Reasonableness is assessed against NIST AI RMF, ISO/IEC 42001, or an equivalent framework, scaled by the deployer's size and complexity, the nature of the deployed systems, and data sensitivity and volume. A single policy and program may cover multiple high-risk systems. The obligation may be shifted to the developer by contract under the § 1552(7) exemption conditions.
(a) Beginning on January first, two thousand twenty-seven, and except as provided in subdivision seven of this section, each deployer of a high-risk artificial intelligence decision system shall implement and maintain a risk management policy and program to govern such deployer's deployment of the high-risk artificial intelligence decision system. The risk management policy and program shall specify and incorporate the principles, processes, and personnel that the deployer shall use to identify, document, and mitigate any known or reasonably foreseeable risks of algorithmic discrimination. The risk management policy shall be the product of an iterative process, the risk management program shall be an iterative process and both the risk management policy and program shall be planned, implemented, and regularly and systematically reviewed and updated over the lifecycle of the high-risk artificial intelligence decision system. Each risk management policy and program implemented and maintained pursuant to this subdivision shall be reasonable, considering: (i) the guidance and standards set forth in the latest version of: (A) the "Artificial Intelligence Risk Management Framework" published by the national institute of standards and technology; (B) ISO or IEC 42001 of the international organization for standardization; or (C) a nationally or internationally recognized risk management framework for artificial intelligence decision systems, other than the guidance and standards specified in clauses (A) and (B) of this subparagraph, that imposes requirements that are substantially equivalent to, and at least as stringent as, the requirements established pursuant to this section for risk management policies and programs; (ii) the size and complexity of the deployer; (iii) the nature and scope of the high-risk artificial intelligence decision systems deployed by the deployer, including, but not limited to, the intended uses of such high-risk artificial intelligence decision 
systems; and (iv) the sensitivity and volume of data processed in connection with the high-risk artificial intelligence decision systems deployed by the deployer. (b) A risk management policy and program implemented and maintained pursuant to paragraph (a) of this subdivision may cover multiple high-risk artificial intelligence decision systems deployed by the deployer.
Pending 2027-01-01
G-01.3
GBL § 1553(1)(a)
Plain Language
Developers of general-purpose AI models must create and maintain technical documentation covering: training and testing processes, compliance evaluation results, intended tasks, integration contexts, acceptable use policies, release date, distribution methods, and input/output modalities. Documentation must be reviewed and revised at least annually. The scope of required content scales with the model's size and risk profile. This obligation is distinct from the high-risk system documentation obligations in § 1551 and applies specifically to GPAI models. Under § 1553(2)(a), open-source models with publicly available parameters are exempt from this documentation requirement, though not from the downstream disclosure obligation in § 1553(1)(b). Models used exclusively for internal management affairs are fully exempt under § 1553(2)(b).
(a) create and maintain technical documentation for the general-purpose artificial intelligence model, which shall: (i) include: (A) the training and testing processes for such general-purpose artificial intelligence model; and (B) the results of an evaluation of such general-purpose artificial intelligence model performed to determine whether such general-purpose artificial intelligence model is in compliance with the provisions of this article; (ii) include, as appropriate, considering the size and risk profile of such general-purpose artificial intelligence model, at least: (A) the tasks such general-purpose artificial intelligence model is intended to perform; (B) the type and nature of artificial intelligence decision systems in which such general-purpose artificial intelligence model is intended to be integrated; (C) acceptable use policies for such general-purpose artificial intelligence model; (D) the date such general-purpose artificial intelligence model is released; (E) the methods by which such general-purpose artificial intelligence model is distributed; and (F) the modality and format of inputs and outputs for such general-purpose artificial intelligence model; and (iii) be reviewed and revised at least annually, or more frequently, as necessary to maintain the accuracy of such technical documentation;
Pending 2027-01-01
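The enumerated content items above lend themselves to a structured documentation record with an annual-review check. A sketch, assuming the review clock runs from the last completed review (field names are illustrative, not statutory terms):

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class GPAIModelDocumentation:
    """Technical-documentation record tracking the content items summarized above."""
    training_and_testing_processes: str
    compliance_evaluation_results: str
    intended_tasks: list[str]
    integration_contexts: list[str]
    acceptable_use_policies: str
    release_date: date
    distribution_methods: list[str]
    input_output_modalities: str
    last_reviewed: date

    def review_overdue(self, today: date) -> bool:
        """Documentation must be reviewed and revised at least annually."""
        try:
            due = self.last_reviewed.replace(year=self.last_reviewed.year + 1)
        except ValueError:  # Feb 29 review date
            due = self.last_reviewed.replace(year=self.last_reviewed.year + 1, day=28)
        return today > due
```

The statute also allows more frequent revision "as necessary to maintain accuracy," so this annual check is a floor, not a complete trigger.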
G-01.1
GBL § 1553(2)(d)
Plain Language
Developers of general-purpose AI models that qualify for the internal-use/multi-entity exemption under § 1553(2)(a)(ii) — i.e., models not offered for market sale, not intended to interact with consumers, and used solely for internal purposes — must still establish and maintain an AI risk management framework. The framework must be iterative and ongoing, and must include at minimum: internal governance, risk context mapping, risk management, and risk measurement/tracking functions. This is a residual governance obligation for otherwise-exempt internal-use GPAI models.
(d) A developer that is exempt pursuant to subparagraph (ii) of paragraph (a) of this subdivision shall establish and maintain an artificial intelligence risk management framework, which shall: (i) be the product of an iterative process and ongoing efforts; and (ii) include, at a minimum: (A) an internal governance function; (B) a map function that shall establish the context to frame risks; (C) a risk management function; and (D) a function to measure identified risks by assessing, analyzing and tracking such risks.
Pending 2025-07-26
G-01.1G-01.5G-01.6
State Tech. Law § 516(1)-(5)
Plain Language
Every operator must establish an independent ethics and risk management board of at least five individuals, none of whom may be members, officers, or directors of the operator's entity. The board must annually submit to the Secretary a comprehensive report covering: all possible use cases, thorough risk assessments for each use case, evaluation of whether certain applications should be constrained, mitigation plans, incident review, user education plans, conflicts of interest disclosure, and compliance updates. Board members face criminal liability (misdemeanor, up to $500 fine and/or 6 months imprisonment) for false statements, undisclosed conflicts, or misrepresentation of risks. Operators with multiple licensed systems need only one board. The independence requirement — no insiders on the board — is a key compliance detail.
§ 516. Ethics and risk management board and reports. 1. Every operator of a licensed high-risk advanced artificial intelligence system or systems shall establish an ethics and risk management board composed of no less than five individuals who shall have the responsibility to assess the ethical implications of all possible use cases of the system, whether such use cases are intended or unintended, and whether likely or unlikely to be used, and the current operational outcomes of the system. Such operator, other than an operator who is a natural person, operating more than one high-risk advanced artificial intelligence system with a supplemental license shall not be required to have more than one ethics and risk management board for each system. 2. No member of an ethics and risk management board shall be a member, officer, or director within the operator's entity. No member shall be required to be employed by the operator. 3. Such board shall adopt rules governing its decision-making processes, duties and responsibilities. Such rules shall not conflict with the provisions of this article. 4. Annually, the ethics and risk management board of each operator shall submit to the secretary a comprehensive report for each licensed high-risk advanced artificial intelligence system which consists of the following: (a) All possible use cases, whether intended or unintended, whether likely or unlikely. (b) A thorough risk assessment for each use case, considering and evaluating the potential for harm, irrespective of the probability of such risk materializing. This shall include, but not be limited to, the system's potential impact on privacy, security, fairness, economic implications, societal well-being, and safety of persons and the environment. (c) A detailed evaluation of known use cases of the system by users, exploring whether certain applications ought to be constrained or banned due to ethical considerations. 
This shall include an assessment of the operator's capacity to impose such constraints on use cases. (d) A mitigation plan for each identified risk, including preemptive measures, monitoring processes, and responsive actions. This shall also include a communication strategy to inform users and stakeholders about potential risks and steps taken to mitigate them. (e) A comprehensive review of any incidents or failures of the system in the past year, detailing the circumstances, impacts, measures taken to address the issue, and modifications made to prevent such incidents in the future. (f) Any existing attempts to educate users and, based on the existing use of the system by users, a detailed plan on how the operator intends to inform and instruct users on the safe and ethical use of the system, considering varying levels of digital literacy among users. (g) A disclosure of any conflicts of interest within the ethics board, which could potentially influence the board's decisions and recommendations. This shall include measures to manage and resolve such conflicts. (h) An update on the measures taken by the operator to ensure the system's adherence to existing laws, regulations, and ethical guidelines related to artificial intelligence. 5. In addition to any applicable civil penalties pursuant to section five hundred eight of this article, a member of an ethics and risk management board who makes a false statement, fails to disclose conflicts of interest or misrepresents the risks or severity of the risks posed by a system in the performance of their duties as a member of such board, shall be guilty of a misdemeanor and, upon conviction, shall be fined not more than five hundred dollars or imprisoned for not more than six months or both, in the discretion of the court.
Pending 2025-07-26
G-01.3G-01.4
State Tech. Law § 524
Plain Language
Every licensed high-risk AI system must automatically generate operational logs every time it operates. Logs must conform to Secretary-prescribed standards covering event types, format, access controls, encryption, cybersecurity, preservation, and disposal. Logs must be retained for 10 years from generation and are subject to regulatory inspection. The 10-year retention period is significantly longer than typical AI recordkeeping requirements (usually 2–5 years). Operators should plan for substantial data storage and security infrastructure to meet this obligation.
§ 524. Logging. Every time a licensee's system operates it shall automatically generate a log. Standards related to the specific types of events that are required to be logged, the format in which logs must be kept, the individuals or entities permitted to access logs and the conditions governing such access, the encryption and cybersecurity protocols to be applied to logs, the procedures for both the preservation and disposal of logs, and any other actions pertinent to log management shall conform to the standards set by the secretary. Such logs shall be preserved for a period of ten years from the date they are generated and shall be subject to inspection under section five hundred twenty-six of this article.
Pending 2025-07-26
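The ten-year preservation rule above is a simple elapsed-time test, though disposal itself must follow whatever procedures the Secretary prescribes. A minimal sketch (names are illustrative):

```python
from datetime import date

LOG_RETENTION_YEARS = 10  # ten years from the date the log is generated

def disposal_eligible(generated_on: date, today: date) -> bool:
    """True once a log's ten-year preservation period has elapsed.
    Elapsed time is a necessary condition only: actual disposal must also
    follow the Secretary-prescribed procedures for log management."""
    try:
        expiry = generated_on.replace(year=generated_on.year + LOG_RETENTION_YEARS)
    except ValueError:  # Feb 29 generation date
        expiry = generated_on.replace(year=generated_on.year + LOG_RETENTION_YEARS, day=28)
    return today >= expiry
```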
G-01.3G-01.4
State Tech. Law § 527(1)-(2)
Plain Language
Operators must maintain all books, records, source code, and logs required by the Secretary, including at minimum all system-generated logs and a backup of every version of the system, stored securely per Secretary standards. Operators must also file annual reports on business and operations, sworn under penalty of perjury. The Secretary may demand additional regular or special reports at any time. Combined with the § 524 logging requirement and 10-year retention, this creates a comprehensive documentation and recordkeeping obligation covering the full operational lifecycle of every licensed system.
§ 527. Books, records, source code, and logs to be kept. 1. Every operator shall maintain such books, records, source code, and logs as the secretary shall require provided however that every operator shall, at least, maintain a copy of all logs generated from the system as well as a backup of every version of the system which shall be stored in a safe manner as prescribed by the secretary. 2. By a date to be set by the secretary, each operator shall annually file a report with the secretary giving such information as the secretary may require concerning the business and operations during the preceding calendar year of the operator within the state under the authority of this article. Such report shall be subscribed and affirmed as true by the operator under the penalties of perjury and be in the form prescribed by the secretary. In addition to such annual reports, the secretary may require of operators such additional regular or special reports as the secretary may deem necessary to the proper supervision of operators under this article. Such additional reports shall be in the form prescribed by the secretary and shall be subscribed and affirmed as true under the penalties of perjury.
Pending 2025-09-02
G-01.2
Gen. Bus. Law § 1421(3)
Plain Language
Large developers must annually review their safety and security protocols, accounting for changes in model capabilities and evolving industry best practices. If modifications are needed, the developer must update the protocol and re-publish the redacted version conspicuously and re-transmit it to the Division of Homeland Security and Emergency Services — following the same publication process as the initial protocol. This is a continuing obligation that ensures protocols do not become stale as models evolve.
A large developer shall conduct an annual review of any safety and security protocol required by this section to account for any changes to the capabilities of their frontier models and industry best practices and, if necessary, make modifications to such safety and security protocol. If any modifications are made, the large developer shall publish the safety and security protocol in the same manner as required pursuant to paragraph (c) of subdivision one of this section.
Pending 2025-09-02
G-01.5
Gen. Bus. Law § 1421(4)(a)-(e)
Plain Language
Large developers must annually engage an independent third-party auditor to assess compliance with all § 1421 requirements. The auditor must receive unredacted access to all necessary materials and must produce a detailed report covering: compliance steps taken, identified noncompliance instances and improvement recommendations, an assessment of internal controls including the empowerment of designated senior personnel, and a certifying signature from the lead auditor. The unredacted audit report must be retained for the duration of deployment plus five years. A redacted version must be conspicuously published and transmitted to the Division of Homeland Security and Emergency Services. The unredacted report must be provided to the Division or Attorney General upon request, redacted only as required by federal law. The 90-day grace period for new large developers means this obligation attaches promptly once a developer qualifies.
(a) Beginning on the effective date of this article, or ninety days after a developer first qualifies as a large developer, whichever is later, a large developer shall annually retain a third party to perform an independent audit of compliance with the requirements of this section. Such third party shall conduct audits consistent with best practices. (b) The third party shall be granted access to unredacted materials as necessary to comply with the third party's obligations under this subdivision. (c) The third party shall produce a report including all of the following: (i) A detailed assessment of the large developer's steps to comply with the requirements of this section; (ii) If applicable, any identified instances of noncompliance with the requirements of this section, and any recommendations for how the developer can improve its policies and processes for ensuring compliance with the requirements of this section; (iii) A detailed assessment of the large developer's internal controls, including its designation and empowerment of senior personnel responsible for ensuring compliance by the large developer, its employees, and its contractors; and (iv) The signature of the lead auditor certifying the results of the audit. (d) The large developer shall retain an unredacted copy of the report for as long as a frontier model is deployed plus five years. (e) (i) The large developer shall conspicuously publish a copy of the third party's report with appropriate redactions and transmit a copy of such redacted report to the division of homeland security and emergency services. (ii) The large developer shall grant the division of homeland security and emergency services or the attorney general access to the third party's report, with redactions only to the extent required by federal law, upon request.
Pending 2025-06-04
G-01.1
Gen. Bus. Law § 390-f(2)(a)
Plain Language
Every entity doing business or offering products to New York consumers must develop a responsible capability scaling policy governing its use and development of AI. The policy must constitute a set of best practices that identify, monitor, and rectify or mitigate risk of harm. The bill does not prescribe specific content requirements beyond this general framework — the CIO is empowered to promulgate implementing regulations. The CIO may issue waivers or designate categories of entities that are covered or exempt from this requirement.
Every person, firm, partnership, association or corporation doing business or offering products to consumers in New York state shall develop a responsible capability scaling policy for the use and development of artificial intelligence by such entity.
Pending
G-01.5
Civil Rights Law § 87(1)-(3), (5)-(9)
Plain Language
Developers and deployers of high-risk AI systems must engage independent third-party auditors on recurring schedules. Developers must complete a first audit within six months of initial offering or deployment, then annually. Deployers must complete a first audit within six months of deployment, a second one year later, then every two years. Developer audits must evaluate reasonable care against algorithmic discrimination and conformity of the risk management program. Deployer audits must also assess system accuracy and reliability against intended and actual use cases. Auditors must be independent — no prior service relationship with the company in the past 12 months, no competitive conflict for 5 years post-audit, no contingent fees. Audits may use AI tools in part (e.g., controlled testing, pattern detection) but cannot be completed entirely by AI — a different high-risk AI system cannot be used for auditing, and AI-drafted audits require meaningful human review. Auditors must receive all prior § 88 reports. Cross-compliance: audits conducted under other applicable law that satisfy all § 87 requirements are deemed compliant.
1. Developers of high-risk AI systems shall cause to be conducted third-party audits in accordance with this section.
(a) A developer of a high-risk AI system shall complete at least:
(i) a first audit within six months after completion of development of the high-risk AI system and the initial offering of the high-risk AI system to a deployer for deployment or, if the developer is first deployer to deploy the high-risk AI system, after initial deployment; and
(ii) one audit every one year following the submission of the first audit.
(b) A developer audit under this section shall include:
(i) an evaluation and determination of whether the developer has taken reasonable care to prevent foreseeable risk of algorithmic discrimination with respect to such high-risk AI system; and
(ii) an evaluation of the developer's documented risk management policy and program required under section eighty-nine of this article for conformity with subdivision one of such section eighty-nine.
2. Deployers of high-risk AI systems shall cause to be conducted third-party audits in accordance with this section.
(a) A deployer of a high-risk AI system shall complete at least:
(i) a first audit within six months after initial deployment;
(ii) a second audit within one year following the submission of the first audit; and
(iii) one audit every two years following the submission of the second audit.
(b) A deployer audit under this section shall include:
(i) an evaluation and determination of whether the deployer has taken reasonable care to prevent foreseeable risk of algorithmic discrimination with respect to such high-risk AI system;
(ii) an evaluation of system accuracy and reliability with respect to such high-risk AI system's deployer-intended and actual use cases; and
(iii) an evaluation of the deployer's documented risk management policy and program required under section eighty-nine of this article for conformity with subdivision one of such section eighty-nine.
3. A deployer or developer may hire more than one auditor to fulfill the requirements of this section.
5. The independent auditor shall have complete and unredacted copies of all reports previously filed by the deployer or developer under section eighty-eight of this article.
6. An audit conducted under this section may be completed in part, but shall not be completed entirely, with the assistance of an AI system.
(a) Acceptable auditor uses of an AI system include, but are not limited to:
(i) use of an audited high-risk AI system in a controlled environment without impacts on end users for system testing purposes; or
(ii) detecting patterns in the behavior of an audited AI system.
(b) An auditor shall not:
(i) use a different high-risk AI system that is not the subject of an audit to complete an audit; or
(ii) use an AI system to draft an audit under this section without meaningful human review and oversight.
7. (a) An auditor shall be an independent entity including but not limited to an individual, non-profit, firm, corporation, partnership, cooperative, or association.
(b) For the purposes of this article, no auditor may be commissioned by a developer or deployer of a high-risk AI system if such entity:
(i) has already been commissioned to provide any auditing or non-auditing service, including but not limited to financial auditing, cybersecurity auditing, or consulting services of any type, to the commissioning company in the past twelve months; or
(ii) is, will be, or plans to be engaged in the business of developing or deploying an AI system that can compete commercially with such developer's or deployer's high-risk AI system in the five years following an audit.
(c) Fees paid to auditors may not be contingent on the result of the audit and the commissioning company shall not provide any incentives or bonuses for a positive audit result.
8. The attorney general may promulgate further rules to ensure (a) the independence of auditors under this section, and (b) that teams conducting audits incorporate feedback from communities that may foreseeably be the subject of algorithmic discrimination with respect to the AI system being audited.
9. If a developer or deployer has an audit completed for the purpose of complying with another applicable federal, state, or local law or regulation, and the audit otherwise satisfies all other requirements of this section, such audit shall be deemed to satisfy the requirements of this section.
Pending
G-01.1, G-01.2
Civ. Rights Law § 89(1)-(3)
Plain Language
Each developer and deployer of a high-risk AI system must plan, document, and implement a risk management policy and program covering the identification, documentation, and mitigation of known and reasonably foreseeable algorithmic discrimination risks. The program must be iterative: regularly and systematically reviewed and updated over the system's life cycle, including updates to documentation. Reasonableness is assessed against NIST AI RMF v1.0 or an AG-approved equivalent framework, the entity's size and complexity, the system's nature and intended uses, and the sensitivity and volume of data processed. A single program may cover multiple high-risk AI systems if it is sufficient to address each. The attorney general may require disclosure of the program and evaluate it for compliance. This program is the foundation of the article's compliance scheme: the independent audit under § 87 evaluates it for conformity.
1. Each developer or deployer of high-risk AI systems shall plan, document, and implement a risk management policy and program to govern development or deployment, as applicable, of such high-risk AI system. The risk management policy and program shall specify and incorporate the principles, processes, and personnel that the deployer uses to identify, document, and mitigate known or reasonably foreseeable risks of algorithmic discrimination covered under subdivision one of section eighty-six of this article. The risk management policy and program shall be an iterative process planned, implemented, and regularly and systematically reviewed and updated over the life cycle of a high-risk AI system, requiring regular, systematic review and updates, including updates to documentation. A risk management policy and program implemented and maintained pursuant to this section shall be reasonable considering:
(a) The guidance and standards set forth in:
(i) version 1.0 of the "Artificial Intelligence Risk Management Framework" published by the National Institute of Standards and Technology in the United States department of commerce, or
(ii) another substantially equivalent framework selected at the discretion of the attorney general, if such framework was designed to manage risks associated with AI systems, is nationally or internationally recognized and consensus-driven, and is at least as stringent as version 1.0 of the "Artificial Intelligence Risk Management Framework" published by the National Institute of Standards and Technology;
(b) The size and complexity of the developer or deployer;
(c) The nature, scope, and intended uses of the high-risk AI system developed or deployed; and
(d) The sensitivity and volume of data processed in connection with the high-risk AI system.
2. A risk management policy and program implemented pursuant to subdivision one of this section may cover multiple high-risk AI systems developed by the same developer or deployed by the same deployer if sufficient.
3. The attorney general may require a developer or a deployer to disclose the risk management policy and program implemented pursuant to subdivision one of this section in a form and manner prescribed by the attorney general. The attorney general may evaluate the risk management policy and program to ensure compliance with this section.
Pending 2027-01-01
G-01.3, G-01.4
Civ. Rights Law § 107(1)-(3)
Plain Language
Developers must support deployer compliance by: (1) providing pre-deployment evaluation reports and the information necessary for deployers to conduct their own evaluations upon reasonable request; and (2) either cooperating with deployer-initiated assessments or arranging independent auditor assessments of the developer's policies and practices. When developers license covered algorithms to deployers, the written contract must specify data processing procedures, deployment instructions, data types, processing duration, mutual rights and obligations, and material change notification methods. Contracts must prohibit data combination across parties, may not relieve either party of statutory liability, and may not prohibit either party from reporting concerns to enforcement agencies. Developers must retain all deployer contracts for at least 10 years. The government entity provision (§ 107(4)) extends these developer obligations to situations where the downstream user is a government entity.
1. A developer shall do the following:
(a) upon the reasonable request of the deployer, make available to the deployer information necessary to demonstrate the compliance of the deployer with the requirements of this article, including:
(i) making available a report of the pre-deployment evaluation described in section one hundred three of this article or the annual review of assessments conducted by the developer under section one hundred four of this article; and
(ii) providing information necessary to enable the deployer to conduct and document a pre-deployment evaluation under section one hundred three or an impact assessment described in section one hundred four of this article; and
(b) either:
(i) allow and cooperate with reasonable assessments conducted by the deployer or the deployer's designated independent auditor; or
(ii) arrange for an independent auditor to conduct an assessment of the developer's policies and practices in support of the obligations under this article using an appropriate and accepted control standard or framework and assessment procedure for such assessments and provide a report of such assessment to the deployer upon request.
2. A developer may offer or license a covered algorithm to a deployer pursuant to a written contract between the developer and deployer, provided that the contract:
(a) clearly sets forth the data processing procedures of the developer with respect to any collection, processing, or transfer of data performed on behalf of the deployer;
(b) clearly sets forth:
(i) instructions for collecting, processing, transferring, or disposing of data by the developer or deployer in the context of the use of the covered algorithm;
(ii) instructions for deploying the covered algorithm as intended;
(iii) the nature and purpose of any collection, processing, or transferring of data;
(iv) the type of data subject to such collection, processing, or transferring;
(v) the duration of such processing of data; and
(vi) the rights and obligations of both parties, including a method by which the developer shall notify the deployer of material changes to its covered algorithm;
(c) shall not relieve a developer or deployer of any requirement or liability imposed on such developer or deployer under this article;
(d) prohibits both the developer and deployer from combining data received from or collected on behalf of the other party with data the developer or deployer received from or collected on behalf of another party; and
(e) shall not prohibit a developer or deployer from raising concerns to any relevant enforcement agency with respect to the other party.
3. Each developer shall retain for a period of ten years a copy of each contract entered into with a deployer to which it provides requested products or services.
Enacted 2025-06-03
G-01.3
Gen. Bus. Law § 1421(1)(b)
Plain Language
Large developers must retain a complete, unredacted version of the safety and security protocol — including a changelog of all updates and revisions — for the entire period the frontier model is deployed plus five additional years. This is a document retention obligation. Note that the publicly published version may include appropriate redactions (see § 1421(1)(c)), but the retained internal version must be unredacted. Organizations should ensure their records management systems can track versioning with dates.
Retain an unredacted copy of the safety and security protocol, including records and dates of any updates or revisions. Such unredacted copy of the safety and security protocol, including records and dates of any updates or revisions, shall be retained for as long as a frontier model is deployed plus five years.
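The retention clock runs from the end of deployment, not from publication, so no end date exists while the model remains deployed. As a rough sketch only (function and variable names are illustrative, not drawn from the statute), the retention horizon might be computed as:

```python
from datetime import date

def protocol_retention_end(deployment_end: date, years: int = 5) -> date:
    """End of the retention period under Gen. Bus. Law § 1421(1)(b):
    the date deployment ceases plus five years. While the model is
    still deployed, retention is open-ended."""
    try:
        return deployment_end.replace(year=deployment_end.year + years)
    except ValueError:
        # deployment_end fell on Feb 29 and the target year is not a
        # leap year; clamp to Feb 28.
        return deployment_end.replace(year=deployment_end.year + years, day=28)

print(protocol_retention_end(date(2030, 6, 1)))   # 2035-06-01
print(protocol_retention_end(date(2032, 2, 29)))  # 2037-02-28
```

A records system implementing this would also need to store the changelog of protocol updates and revisions alongside the retained copy.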
Enacted 2025-06-03
G-01.3
Gen. Bus. Law § 1421(1)(d)
Plain Language
Large developers must record and retain detailed information about all tests and test results from frontier model assessments — both those required by the statute and those required by the developer's own safety and security protocol. Records must contain sufficient detail for third parties to replicate the testing procedure, creating a reproducibility standard. Retention period is the duration of deployment plus five years. The 'as and when reasonably possible' qualifier provides some flexibility for real-time testing contexts where immediate documentation may be impractical.
Record, as and when reasonably possible, and retain for as long as the frontier model is deployed plus five years information on the specific tests and test results used in any assessment of the frontier model required by this section or the developer's safety and security protocol that provides sufficient detail for third parties to replicate the testing procedure.
Enacted 2025-06-03
G-01.2
Gen. Bus. Law § 1421(3)
Plain Language
Large developers must review their safety and security protocol at least annually, with the review accounting for changes in frontier model capabilities and evolving industry best practices. If the review results in material modifications, the updated protocol must be re-published publicly (with appropriate redactions) and re-transmitted to the AG and Division of Homeland Security. This creates a continuing maintenance obligation — the protocol is not a one-time pre-deployment document but a living document requiring annual reassessment. The trigger for re-publication is 'material modifications,' which introduces a materiality judgment call.
A large developer shall conduct an annual review of any safety and security protocol required by this section to account for any changes to the capabilities of their frontier models and industry best practices and, if necessary, make modifications to such safety and security protocol. If any material modifications are made, the large developer shall publish the safety and security protocol in the same manner as required pursuant to paragraph (c) of subdivision one of this section.
Enacted 2025-06-03
G-01.6
Gen. Bus. Law § 1420(12)(e)
Plain Language
The safety and security protocol must designate senior personnel responsible for ensuring compliance with the statute. This effectively creates a mandatory accountability role — a named senior individual or individuals who bear responsibility for the developer's compliance with the RAISE Act. While embedded within the protocol definition rather than stated as a standalone obligation, it is independently actionable because a protocol that omits this designation is deficient on its face.
"Safety and security protocol" means documented technical and organizational protocols that: ... (e) Designate senior personnel to be responsible for ensuring compliance.
Enacted 2025-06-03
G-01.4
Gen. Bus. Law § 1421(5)
Plain Language
Large developers are prohibited from knowingly making false or materially misleading statements or omissions in any documents produced under the statute — including the safety and security protocol, test records, and safety incident reports. This is an anti-fraud provision that applies to all documentary submissions and publications required by the RAISE Act. The 'knowingly' mens rea standard means the developer must have actual awareness that the statement is false or misleading; negligent inaccuracies would not violate this provision.
A large developer shall not knowingly make false or materially misleading statements or omissions in or regarding documents produced pursuant to this section.
Pending 2027-01-01
G-01.5
Civ. Rights Law § 87(1)-(2), (3)-(9)
Plain Language
Both developers and deployers of high-risk AI systems must engage independent third-party auditors to evaluate their systems on a recurring schedule. Developers must complete a first audit within six months of offering or deploying the system, then annually. Deployers must complete a first audit within six months of deployment, a second audit one year later, then biennially. Audits must evaluate: (1) whether the entity has taken reasonable care to prevent algorithmic discrimination, (2) conformity of the risk management program with § 89, and for deployers additionally (3) system accuracy and reliability against intended and actual use cases. Strict auditor independence requirements apply — no entity that provided any service to the commissioning company in the past 12 months, and no competitor planning to compete for 5 years post-audit. Audit fees cannot be contingent on results. Audits may use AI as a tool (e.g., controlled testing) but may not be completed entirely by AI, and a separate high-risk AI system may not be used to complete the audit. An audit completed for compliance with another law satisfies this section if it meets all requirements. For systems already deployed at the effective date, an 18-month transition period applies (per § 88(6)).
1. Developers of high-risk AI systems shall cause to be conducted third-party audits in accordance with this section.
(a) A developer of a high-risk AI system shall complete at least:
(i) a first audit within six months after completion of development of the high-risk AI system and the initial offering of the high-risk AI system to a deployer for deployment or, if the developer is first deployer to deploy the high-risk AI system, after initial deployment; and
(ii) one audit every one year following the submission of the first audit.
(b) A developer audit under this section shall include:
(i) an evaluation and determination of whether the developer has taken reasonable care to prevent foreseeable risk of algorithmic discrimination with respect to such high-risk AI system; and
(ii) an evaluation of the developer's documented risk management policy and program required under section eighty-nine of this article for conformity with subdivision one of such section eighty-nine.
2. Deployers of high-risk AI systems shall cause to be conducted third-party audits in accordance with this section.
(a) A deployer of a high-risk AI system shall complete at least:
(i) a first audit within six months after initial deployment;
(ii) a second audit within one year following the submission of the first audit; and
(iii) one audit every two years following the submission of the second audit.
(b) A deployer audit under this section shall include:
(i) an evaluation and determination of whether the deployer has taken reasonable care to prevent foreseeable risk of algorithmic discrimination with respect to such high-risk AI system;
(ii) an evaluation of system accuracy and reliability with respect to such high-risk AI system's deployer-intended and actual use cases; and
(iii) an evaluation of the deployer's documented risk management policy and program required under section eighty-nine of this article for conformity with subdivision one of such section eighty-nine.
3. A deployer or developer may hire more than one auditor to fulfill the requirements of this section.
4. At the attorney general's discretion, the attorney general may:
(a) promulgate further rules as necessary to ensure that audits under this section assess whether or not AI systems produce algorithmic discrimination and otherwise comply with the provisions of this article; and
(b) recommend an updated AI system auditing framework to the legislature, where such recommendations are based on a standard or framework (i) designed to evaluate the risks of AI systems, and (ii) that is nationally or internationally recognized and consensus-driven, including but not limited to a relevant framework or standard created by the International Standards Organization.
5. The independent auditor shall have complete and unredacted copies of all reports previously filed by the deployer or developer under section eighty-eight of this article.
6. An audit conducted under this section may be completed in part, but shall not be completed entirely, with the assistance of an AI system.
(a) Acceptable auditor uses of an AI system include, but are not limited to:
(i) use of an audited high-risk AI system in a controlled environment without impacts on end users for system testing purposes; or
(ii) detecting patterns in the behavior of an audited AI system.
(b) An auditor shall not:
(i) use a different high-risk AI system that is not the subject of an audit to complete an audit; or
(ii) use an AI system to draft an audit under this section without meaningful human review and oversight.
7. (a) An auditor shall be an independent entity including but not limited to an individual, non-profit, firm, corporation, partnership, cooperative, or association.
(b) For the purposes of this article, no auditor may be commissioned by a developer or deployer of a high-risk AI system if such entity:
(i) has already been commissioned to provide any auditing or non-auditing service, including but not limited to financial auditing, cybersecurity auditing, or consulting services of any type, to the commissioning company in the past twelve months; or
(ii) is, will be, or plans to be engaged in the business of developing or deploying an AI system that can compete commercially with such developer's or deployer's high-risk AI system in the five years following an audit.
(c) Fees paid to auditors may not be contingent on the result of the audit and the commissioning company shall not provide any incentives or bonuses for a positive audit result.
8. The attorney general may promulgate further rules to ensure (a) the independence of auditors under this section, and (b) that teams conducting audits incorporate feedback from communities that may foreseeably be the subject of algorithmic discrimination with respect to the AI system being audited.
9. If a developer or deployer has an audit completed for the purpose of complying with another applicable federal, state, or local law or regulation, and the audit otherwise satisfies all other requirements of this section, such audit shall be deemed to satisfy the requirements of this section.
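The deployer schedule above (six months, then one year, then every two years) is anchored to submission dates, so each deadline depends on when the prior audit was actually filed. A minimal sketch of the calendar, assuming for illustration that each audit is submitted exactly on its deadline (all names are illustrative, not statutory):

```python
import calendar
from datetime import date

def _add_months(d: date, months: int) -> date:
    # Advance by whole months, clamping the day when the target month is shorter.
    y, m = divmod(d.month - 1 + months, 12)
    y += d.year
    m += 1
    return date(y, m, min(d.day, calendar.monthrange(y, m)[1]))

def deployer_audit_deadlines(initial_deployment: date, count: int) -> list:
    """Deadlines per Civ. Rights Law § 87(2)(a): first audit six months
    after initial deployment, second within one year after the first is
    submitted, then one every two years after the second. In practice the
    clock runs from actual submission dates, not these idealized ones."""
    deadlines = [_add_months(initial_deployment, 6)]
    if count >= 2:
        deadlines.append(_add_months(deadlines[0], 12))
    while len(deadlines) < count:
        deadlines.append(_add_months(deadlines[-1], 24))
    return deadlines[:count]

# Prints 2027-09-01, 2028-09-01, 2030-09-01, 2032-09-01 (one per line).
for due in deployer_audit_deadlines(date(2027, 3, 1), 4):
    print(due)
```

The developer schedule differs only in cadence (annual after the first audit), so the same pattern applies with a twelve-month step throughout.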
Pending 2027-01-01
G-01.1, G-01.2
Civ. Rights Law § 89(1)-(3)
Plain Language
Every developer and deployer of high-risk AI systems must plan, document, and implement a risk management policy and program covering the identification, documentation, and mitigation of known or reasonably foreseeable risks of algorithmic discrimination. The program must be iterative, regularly and systematically reviewed and updated over the AI system's lifecycle, including documentation updates. Reasonableness is evaluated against the NIST AI RMF 1.0 or an equivalent framework designated by the AG, and must account for the entity's size and complexity, the system's nature and intended uses, and the sensitivity and volume of data processed. A single program may cover multiple high-risk AI systems if sufficient. The AG may require disclosure of the program and evaluate it for compliance. This is a continuing obligation — not a one-time pre-deployment exercise.
1. Each developer or deployer of high-risk AI systems shall plan, document, and implement a risk management policy and program to govern development or deployment, as applicable, of such high-risk AI system. The risk management policy and program shall specify and incorporate the principles, processes, and personnel that the deployer uses to identify, document, and mitigate known or reasonably foreseeable risks of algorithmic discrimination covered under subdivision one of section eighty-six of this article. The risk management policy and program shall be an iterative process planned, implemented, and regularly and systematically reviewed and updated over the life cycle of a high-risk AI system, requiring regular, systematic review and updates, including updates to documentation. A risk management policy and program implemented and maintained pursuant to this section shall be reasonable considering:
(a) The guidance and standards set forth in:
(i) version 1.0 of the "Artificial Intelligence Risk Management Framework" published by the National Institute of Standards and Technology in the United States department of commerce, or
(ii) another substantially equivalent framework selected at the discretion of the attorney general, if such framework was designed to manage risks associated with AI systems, is nationally or internationally recognized and consensus-driven, and is at least as stringent as version 1.0 of the "Artificial Intelligence Risk Management Framework" published by the National Institute of Standards and Technology;
(b) The size and complexity of the developer or deployer;
(c) The nature, scope, and intended uses of the high-risk AI system developed or deployed; and
(d) The sensitivity and volume of data processed in connection with the high-risk AI system.
2. A risk management policy and program implemented pursuant to subdivision one of this section may cover multiple high-risk AI systems developed by the same developer or deployed by the same deployer if sufficient.
3. The attorney general may require a developer or a deployer to disclose the risk management policy and program implemented pursuant to subdivision one of this section in a form and manner prescribed by the attorney general. The attorney general may evaluate the risk management policy and program to ensure compliance with this section.
Pending 2025-10-11
G-01.3
Gen. Bus. Law § 1551(3)(a)-(b)
Plain Language
When a developer distributes a high-risk AI decision system to deployers, it must make available — to the extent feasible — all documentation and information needed for the deployer to complete an impact assessment, delivered through model cards, dataset cards, or similar artifacts. Developers that also act as deployers of the same system are exempt from this documentation obligation unless the system is provided to an unaffiliated deployer.
3. (a) Except as provided in subdivision five of this section, any developer that, on or after January first, two thousand twenty-seven, offers, sells, leases, licenses, gives, or otherwise makes available to a deployer or other developer a high-risk artificial intelligence decision system shall, to the extent feasible, make available to such deployers and other developers the documentation and information relating to such high-risk artificial intelligence decision system necessary for a deployer, or the third party contracted by a deployer, to complete an impact assessment pursuant to this article. The developer shall make such documentation and information available through artifacts such as model cards, dataset cards, or other impact assessments.
(b) A developer that also serves as a deployer for any high-risk artificial intelligence decision system shall not be required to generate the documentation and information required pursuant to this section unless such high-risk artificial intelligence decision system is provided to an unaffiliated entity acting as a deployer.
Pending 2025-10-11
G-01.1, G-01.2
Gen. Bus. Law § 1552(2)(a)-(b)
Plain Language
Deployers must implement and maintain a risk management policy and program covering all deployed high-risk AI decision systems. The program must specify principles, processes, and personnel for identifying, documenting, and mitigating algorithmic discrimination risks. Both the policy and program must be iterative and regularly reviewed and updated over the system lifecycle. Reasonableness is evaluated against NIST AI RMF, ISO/IEC 42001, or an equivalent framework, adjusted for the deployer's size, the system's nature and scope, and data sensitivity and volume. A single risk management program may cover multiple high-risk systems. Deployers meeting the conditions in subdivision 7 (developer contract assumption, non-exclusive data, and impact assessment pass-through) are exempt.
2. (a) Beginning on January first, two thousand twenty-seven, and except as provided in subdivision seven of this section, each deployer of a high-risk artificial intelligence decision system shall implement and maintain a risk management policy and program to govern such deployer's deployment of the high-risk artificial intelligence decision system. The risk management policy and program shall specify and incorporate the principles, processes, and personnel that the deployer shall use to identify, document, and mitigate any known or reasonably foreseeable risks of algorithmic discrimination. The risk management policy shall be the product of an iterative process, the risk management program shall be an iterative process and both the risk management policy and program shall be planned, implemented, and regularly and systematically reviewed and updated over the lifecycle of the high-risk artificial intelligence decision system. Each risk management policy and program implemented and maintained pursuant to this subdivision shall be reasonable, considering:
(i) the guidance and standards set forth in the latest version of:
(A) the "Artificial Intelligence Risk Management Framework" published by the national institute of standards and technology;
(B) ISO or IEC 42001 of the international organization for standardization; or
(C) a nationally or internationally recognized risk management framework for artificial intelligence decision systems, other than the guidance and standards specified in clauses (A) and (B) of this subparagraph, that imposes requirements that are substantially equivalent to, and at least as stringent as, the requirements established pursuant to this section for risk management policies and programs;
(ii) the size and complexity of the deployer;
(iii) the nature and scope of the high-risk artificial intelligence decision systems deployed by the deployer, including, but not limited to, the intended uses of such high-risk artificial intelligence decision systems; and
(iv) the sensitivity and volume of data processed in connection with the high-risk artificial intelligence decision systems deployed by the deployer.
(b) A risk management policy and program implemented and maintained pursuant to paragraph (a) of this subdivision may cover multiple high-risk artificial intelligence decision systems deployed by the deployer.
Pending 2025-10-11
G-01.3
Gen. Bus. Law § 1553(1)(a)
Plain Language
Developers of general-purpose AI models must create and maintain technical documentation covering: training and testing processes, compliance evaluation results, intended tasks, types of downstream AI systems the model is intended for, acceptable use policies, release date, distribution methods, and input/output modalities and formats. The documentation must be reviewed and revised at least annually. The scope is calibrated to the model's size and risk profile. Exemptions apply for open-source models (subdivision 2(a)) and models used solely for internal purposes (subdivision 2(b)).
1. Beginning on January first, two thousand twenty-seven, each developer of a general-purpose artificial intelligence model shall, except as provided in subdivision two of this section:
(a) create and maintain technical documentation for the general-purpose artificial intelligence model, which shall:
(i) include:
(A) the training and testing processes for such general-purpose artificial intelligence model; and
(B) the results of an evaluation of such general-purpose artificial intelligence model performed to determine whether such general-purpose artificial intelligence model is in compliance with the provisions of this article;
(ii) include, as appropriate, considering the size and risk profile of such general-purpose artificial intelligence model, at least:
(A) the tasks such general-purpose artificial intelligence model is intended to perform;
(B) the type and nature of artificial intelligence decision systems in which such general-purpose artificial intelligence model is intended to be integrated;
(C) acceptable use policies for such general-purpose artificial intelligence model;
(D) the date such general-purpose artificial intelligence model is released;
(E) the methods by which such general-purpose artificial intelligence model is distributed; and
(F) the modality and format of inputs and outputs for such general-purpose artificial intelligence model; and
(iii) be reviewed and revised at least annually, or more frequently, as necessary to maintain the accuracy of such technical documentation;
Pending 2025-10-11
G-01.1
Gen. Bus. Law § 1553(2)(d)
Plain Language
Developers of general-purpose AI models that qualify for the internal-use exemption from technical documentation requirements must still establish and maintain an AI risk management framework. The framework must be iterative and ongoing, and must include at minimum: internal governance, risk-framing (map function), risk management, and risk measurement (assess, analyze, and track). This ensures that even internally-used models have a baseline governance structure despite being exempt from external-facing documentation.
(d) A developer that is exempt pursuant to subparagraph (ii) of paragraph (a) of this subdivision shall establish and maintain an artificial intelligence risk management framework, which shall:
(i) be the product of an iterative process and ongoing efforts; and
(ii) include, at a minimum:
(A) an internal governance function;
(B) a map function that shall establish the context to frame risks;
(C) a risk management function; and
(D) a function to measure identified risks by assessing, analyzing and tracking such risks.
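The four minimum functions track the structure of the NIST AI RMF (govern, map, manage, measure). A hedged sketch of a simple gap check follows; the short function labels are illustrative shorthand, not statutory terms:

```python
# Minimum functions for the internal-use exemption framework under
# Gen. Bus. Law § 1553(2)(d). Labels are illustrative, not statutory.
REQUIRED_FUNCTIONS = {
    "govern",   # internal governance function
    "map",      # establish the context to frame risks
    "manage",   # risk management function
    "measure",  # assess, analyze, and track identified risks
}

def framework_gaps(documented):
    """Return the minimum functions missing from a documented framework."""
    return REQUIRED_FUNCTIONS - set(documented)

print(framework_gaps({"govern", "map", "measure"}))  # {'manage'}
```

An empty result indicates the documented framework covers the statutory floor; it says nothing about whether each function is adequately implemented.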
Pending 2025-06-25
G-01.2
Gen. Bus. Law § 1421(3)
Plain Language
Large developers must conduct an annual review of their safety and security protocol to ensure it accounts for changes in model capabilities and evolving industry best practices. If the review identifies needed modifications, the developer must update the protocol and re-publish the redacted version and transmit it to DHSES in the same manner as the initial publication. This is a continuing obligation — the protocol is not a static document filed once at deployment.
A large developer shall conduct an annual review of any safety and security protocol required by this section to account for any changes to the capabilities of their frontier models and industry best practices and, if necessary, make modifications to such safety and security protocol. If any modifications are made, the large developer shall publish the safety and security protocol in the same manner as required pursuant to paragraph (c) of subdivision one of this section.
Pending 2025-06-25
G-01.5
Gen. Bus. Law § 1421(4)(a)-(e)
Plain Language
Large developers must annually retain an independent third-party auditor to evaluate compliance with all § 1421 requirements. The auditor must receive full unredacted access and produce a signed report covering: a compliance assessment, any noncompliance instances with recommendations, and an assessment of internal controls, including the designation and empowerment of senior compliance personnel. The developer must retain the unredacted report for the deployment period plus five years, conspicuously publish a redacted version, transmit the redacted version to DHSES, and make the unredacted version available to DHSES or the Attorney General upon request (with redactions only as required by federal law). Under the 90-day grace period, a newly qualifying large developer must retain its first auditor no later than 90 days after first meeting the large developer threshold, or on the article's effective date, whichever is later.
(a) Beginning on the effective date of this article, or ninety days after a developer first qualifies as a large developer, whichever is later, a large developer shall annually retain a third party to perform an independent audit of compliance with the requirements of this section. Such third party shall conduct audits consistent with best practices. (b) The third party shall be granted access to unredacted materials as necessary to comply with the third party's obligations under this subdivision. (c) The third party shall produce a report including all of the following: (i) A detailed assessment of the large developer's steps to comply with the requirements of this section; (ii) If applicable, any identified instances of noncompliance with the requirements of this section, and any recommendations for how the developer can improve its policies and processes for ensuring compliance with the requirements of this section; (iii) A detailed assessment of the large developer's internal controls, including its designation and empowerment of senior personnel responsible for ensuring compliance by the large developer, its employees, and its contractors; and (iv) The signature of the lead auditor certifying the results of the audit. (d) The large developer shall retain an unredacted copy of the report for as long as a frontier model is deployed plus five years. (e) (i) The large developer shall conspicuously publish a copy of the third party's report with appropriate redactions and transmit a copy of such redacted report to the division of homeland security and emergency services. (ii) The large developer shall grant the division of homeland security and emergency services or the attorney general access to the third party's report, with redactions only to the extent required by federal law, upon request.
Pre-filed 2025-11-01
G-01.3, G-01.4
63 O.S. § 5503(D)
Plain Language
All documentation related to AI device use must comply with state and federal medical record-keeping requirements and be accessible for regulatory review by the State Department of Health. In addition, deployers must specifically track and document instances where a qualified end-user overrides or disagrees with AI device outputs, maintaining a summary report that includes the frequency and nature of overrides and the percentage or number of such disagreements. This creates both a general documentation compliance obligation and a specific AI-override tracking obligation.
D. All documentation shall comply with state and federal medical record-keeping requirements and be accessible for regulatory review. Documentation of relevant instances where a qualified end-user overrides or disagrees with AI device-generated outputs must be maintained through a summary report indicating the frequency and nature of overrides. Deployers shall document the percentage or number of such overrides or disagreements.
Pre-filed 2025-11-01
G-01.6
63 O.S. § 5504(A)
Plain Language
Every deployer of an AI medical device must establish a formal AI governance group that includes representation from qualified end-users (licensed physicians trained on the devices). This group is the designated body responsible for overseeing compliance with all provisions of the act. This is a structural governance requirement — the deployer must create and maintain this body, not merely designate a single compliance officer.
A. Deployers of any artificial intelligence (AI) device shall establish an AI governance group with representation from qualified end-users. This governance group is responsible for overseeing compliance with this act.
Pre-filed 2025-11-01
G-01.3
63 O.S. § 5504(B)
Plain Language
Deployers must maintain a current inventory of all deployed AI devices. For each device, the deployer must ensure that instructions for use and relevant safety and effectiveness documentation are accessible to all qualified end-users. This is an ongoing maintenance obligation — the inventory must be kept up to date as devices are added or removed.
B. Deployers shall maintain an updated inventory of deployed AI devices, with device instructions for use and any relevant safety and effectiveness documentation made accessible to all qualified end-users of the device.
Pre-filed 2025-11-01
G-01.3
63 O.S. § 5504(E)
Plain Language
For each deployed AI device, deployers must create and maintain documentation covering (1) the intended use case for the device in their clinical setting and (2) the training procedure for users. This ensures that each AI device has a documented purpose and that there is a formal training protocol for the qualified end-users who will operate it.
E. Deployers shall document the use case and user training procedure for the AI device.
Pending 2026-10-06
G-01.3, G-01.4
35 Pa.C.S. § 3506
Plain Language
Facilities must retain records related to AI algorithms for a period to be determined by the Department of Health. While the specific retention period will be set by department policy, facilities should begin preserving all AI-related records — including compliance statements, training data documentation, performance reviews, and disclosure records — from the effective date of the chapter. The obligation is on the facility to retain; the department will set the timeframe.
The department shall establish a record retention policy and determine the amount of time a facility shall retain records related to artificial-intelligence algorithms. The department may request input from facilities and health care providers or their representatives in making the determination under this section.
Pending 2026-10-06
G-01.3, G-01.4
40 Pa.C.S. § 5207
Plain Language
Insurers must retain records related to AI use in utilization review for a period to be determined by the Insurance Department. Insurers should preserve all AI-related records pending department guidance on the specific retention period.
The department shall establish a record retention policy and determine the amount of time an insurer shall retain records. The department may request input from insurers or their representatives in making this determination.
Pending 2026-10-06
G-01.3, G-01.4
40 Pa.C.S. § 5307
Plain Language
MA or CHIP managed care plans must retain records related to AI use for a period to be determined by the Department of Human Services.
The department shall establish a record retention policy and determine the amount of time an MA or CHIP managed care plan shall retain records. The department may request input from an MA or CHIP managed care plan or their representative to make this determination.
Pending 2026-04-01
G-01.3
12 Pa.C.S. § 7105(d)
Plain Language
Suppliers must maintain contemporaneous documentation describing five categories of information about the chatbot: the foundation models used in development, the training data used, compliance with federal and state privacy law, consumer data collection and sharing practices, and ongoing efforts to ensure accuracy, reliability, fairness, and safety. This is an internal documentation requirement — distinct from the public-facing policy or the Bureau filing — creating a recordkeeping obligation that covers the full lifecycle from development through operation.
(d) Documentation.--A supplier shall maintain documentation regarding the development and implementation of the chatbot that describes: (1) Foundation models used in development. (2) Training data used. (3) Compliance with Federal and State privacy law. (4) Consumer data collection and sharing practices. (5) Ongoing efforts to ensure accuracy, reliability, fairness and safety.
Pending 2027-01-09
G-01.3, G-01.4
35 Pa.C.S. § 3506
Plain Language
The Department of Health will establish a record retention policy specifying how long facilities must retain records related to AI algorithms. The specific retention period will be set by the Department, with input from facilities and providers. Facilities should anticipate a retention obligation once the Department acts, and should begin preserving records from the effective date.
The department shall establish a record retention policy and determine the amount of time a facility shall retain records related to artificial-intelligence algorithms. The department may request input from facilities and health care providers or their representatives in making the determination under this section.
Pending 2027-01-09
G-01.3, G-01.4
40 Pa.C.S. § 5207
Plain Language
The Insurance Department will establish a record retention policy specifying how long insurers must retain AI-related records. Insurers should anticipate a retention obligation and begin preserving records from the effective date.
The department shall establish a record retention policy and determine the amount of time an insurer shall retain records. The department may request input from insurers or their representatives in making this determination.
Pending 2027-01-09
G-01.3, G-01.4
40 Pa.C.S. § 5307
Plain Language
The Department of Human Services will establish a record retention policy specifying how long MA or CHIP managed care plans must retain AI-related records.
The department shall establish a record retention policy and determine the amount of time an MA or CHIP managed care plan shall retain records. The department may request input from an MA or CHIP managed care plan or their representative to make this determination.
Pending 2026-01-21
G-01.3, G-01.4
R.I. Gen. Laws § 27-84-3(a)(3)
Plain Language
Insurers must maintain records of all AI-driven decisions for at least five years. The retention obligation is not limited to adverse determinations — it covers all AI decisions — but specifically calls out adverse benefit determinations where AI made or substantially factored into the decision. Practically, insurers need a records management system that captures and preserves documentation of every claim or coverage decision involving AI, including the AI output, the human review (if any), and the rationale. These records must be producible to OHIC/DBR upon request under § 27-84-3(a)(2).
Insurers shall maintain documentation of artificial intelligence decisions for at least five (5) years including adverse benefit determinations where artificial intelligence made, or was a substantial factor in making, the adverse benefit determination.
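As a concrete (non-authoritative) sketch, the records-management system described above could capture each AI-involved decision in a structure like the following. All class, field, and function names are illustrative assumptions; only the five-year minimum and the producibility-on-request requirement come from the bill text, and the five years are approximated as 5 × 365 days.

```python
import json
from dataclasses import dataclass, asdict
from datetime import date, timedelta
from typing import List, Optional

@dataclass
class AIDecisionRecord:
    """One claim or coverage decision involving AI (illustrative field names)."""
    claim_id: str
    decided_on: date
    ai_output: str                  # what the AI produced
    ai_substantial_factor: bool
    adverse_determination: bool     # specifically called out by § 27-84-3(a)(3)
    human_review: Optional[str] = None
    rationale: str = ""

    def retain_until(self) -> date:
        # Statutory minimum is five years; approximated here as 5 x 365 days.
        return self.decided_on + timedelta(days=5 * 365)

def produce_for_regulator(records: List[AIDecisionRecord]) -> str:
    """Serialize records for an OHIC/DBR production request under § 27-84-3(a)(2)."""
    return json.dumps(
        [asdict(r) | {"decided_on": r.decided_on.isoformat()} for r in records],
        indent=2,
    )
```

The point of the sketch is that the record preserves the AI output, the human review (if any), and the rationale together, so a production request can be answered without reassembling evidence from separate systems.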
Pending 2026-02-12
G-01.3, G-01.4
§ 28-5.2-2(d)
Plain Language
Employers must maintain contemporaneous, true, and accurate records of all monitoring data used in hiring, promotion, termination, discipline, or compensation decisions for five years, and must be able to produce them upon employee, authorized representative, or department request. All monitoring data must be destroyed no later than 61 months after collection unless the employee provides written informed consent to longer retention. Employers must implement reasonable administrative, technical, and physical data security practices appropriate to the data's volume and nature. Employees have the right to request corrections to erroneous data — this right is not time-limited.
(d) An employer shall establish, maintain, and preserve for five (5) years contemporaneous, true, and accurate records of data gathered through the use of an electronic monitoring tool and used in a hiring, promotion, termination, disciplinary or compensation decision to ensure compliance with the employee or their authorized representative or the department requests for data. The employer shall destroy any employee information collected via an electronic monitoring tool no later than sixty-one (61) months after collection unless the employee has provided written and informed consent to the retention of their data by the employer. An employer shall establish, implement and maintain reasonable administrative, technical and physical data security practices to protect the confidentiality, integrity and accessibility of employee data, appropriate to the volume and nature of the employee data at issue. An employee shall have the right to request corrections to erroneous employee data.
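The subsection above starts two clocks for each piece of monitoring data used in an employment decision: a five-year preservation period and a 61-month destruction deadline (absent written consent). A minimal sketch of computing both dates follows; the function names are assumptions, only the periods come from the bill.

```python
from datetime import date

RETENTION_YEARS = 5       # preserve decision records for five years
DESTRUCTION_MONTHS = 61   # destroy monitoring data within 61 months absent consent

def add_months(d: date, months: int) -> date:
    """Same day-of-month `months` later, clamped to the end of shorter months."""
    y, m = divmod(d.year * 12 + (d.month - 1) + months, 12)
    leap = y % 4 == 0 and (y % 100 != 0 or y % 400 == 0)
    last_day = [31, 29 if leap else 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31][m]
    return date(y, m + 1, min(d.day, last_day))

def retention_window(collected: date, consent_to_retain: bool = False):
    """Return (preserve_until, destroy_by) for one monitoring record."""
    preserve_until = add_months(collected, 12 * RETENTION_YEARS)
    destroy_by = None if consent_to_retain else add_months(collected, DESTRUCTION_MONTHS)
    return preserve_until, destroy_by
```

Note the two deadlines are only one month apart, so in practice the destruction job has a narrow window after the preservation period lapses.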
Pending 2026-01-09
G-01.3, G-01.4
R.I. Gen. Laws § 27-84-3(a)(3)
Plain Language
Insurers must maintain documentation of all AI decisions — including but not limited to adverse benefit determinations where AI made or substantially contributed to the determination — for at least five years. This encompasses both administrative and non-administrative adverse benefit determinations. The retention period is notably longer than the two- to three-year periods seen in many other AI governance statutes. Records must be maintained in a form that supports production to OHIC/DBR upon request under § 27-84-3(a)(2).
Insurers shall maintain documentation of artificial intelligence decisions for at least five (5) years including adverse benefit determinations where artificial intelligence made, or was a substantial factor in making, the adverse benefit determination.
Pending 2026-02-06
G-01.3, G-01.4
§ 28-5.2-2(d)
Plain Language
Employers must create and retain contemporaneous, true, and accurate records of all electronic monitoring data used in employment decisions (hiring, promotion, termination, discipline, compensation) for five years. All monitoring-collected employee data must be destroyed no later than 61 months after collection unless the employee provides written informed consent for longer retention. Employers must maintain reasonable administrative, technical, and physical data security practices proportionate to the data volume and nature. Employees have a right to request corrections to erroneous data. Records must be producible upon request by the employee, their authorized representative, or the Department.
(d) An employer shall establish, maintain, and preserve for five (5) years contemporaneous, true, and accurate records of data gathered through the use of an electronic monitoring tool and used in a hiring, promotion, termination, disciplinary or compensation decision to ensure compliance with the employee or their authorized representative or the department requests for data. The employer shall destroy any employee information collected via an electronic monitoring tool no later than sixty-one (61) months after collection unless the employee has provided written and informed consent to the retention of their data by the employer. An employer shall establish, implement and maintain reasonable administrative, technical and physical data security practices to protect the confidentiality, integrity and accessibility of employee data, appropriate to the volume and nature of the employee data at issue. An employee shall have the right to request corrections to erroneous employee data.
Pre-filed 2026-01-01
G-01.1
S.C. Code § 39-80-20(D)
Plain Language
Chatbot providers must develop, implement, and maintain a written, comprehensive data security program covering administrative, technical, and physical safeguards. Safeguards must be proportionate to the volume and nature of personal data and chat logs maintained. The written program must be made publicly available on the provider's website. This is both a governance obligation (formal program documentation) and a transparency obligation (public publication).
(D) A chatbot provider shall develop, implement, and maintain a comprehensive data security program that contains administrative, technical, and physical safeguards that are proportionate to the volume and nature of personal data and chat logs that are maintained by the chatbot provider. The program must be written and made publicly available on the chatbot provider's website.
Pending 2025-01-01
G-01.1
S.C. Code § 39-80-20(D)
Plain Language
Chatbot providers must develop, implement, and maintain a written comprehensive data security program covering administrative, technical, and physical safeguards. The program must be proportionate to the volume and nature of personal data and chat logs the provider maintains. The written program must be made publicly available on the provider's website. This is both an operational requirement (implement and maintain) and a transparency requirement (publish on website).
(D) A chatbot provider shall develop, implement, and maintain a comprehensive data security program that contains administrative, technical, and physical safeguards that are proportionate to the volume and nature of personal data and chat logs that are maintained by the chatbot provider. The program must be written and made publicly available on the chatbot provider's website.
Pending
G-01.1, G-01.2
S.C. Code § 37-31-30(B)
Plain Language
Deployers must establish and maintain a risk management policy and program covering the identification, documentation, and mitigation of algorithmic discrimination risks. The program must specify principles, processes, and personnel, and must be iteratively reviewed and updated over the system's lifecycle. Reasonableness is assessed considering the NIST AI RMF, ISO/IEC 42001, or an AG-designated framework, as well as the deployer's size, system scope, and data sensitivity. A single program may cover multiple high-risk AI systems. The small deployer exemption in subsection (F) applies.
(B)(1) Except as provided in subsection (F), a deployer of a high-risk artificial intelligence system shall implement a risk management policy and program to govern the deployer's deployment of the high-risk artificial intelligence system. The risk management policy and program must specify and incorporate the principles, processes, and personnel that the deployer uses to identify, document, and mitigate known or reasonably foreseeable risks of algorithmic discrimination. The risk management policy and program must be an iterative process planned, implemented, and regularly and systematically reviewed and updated over the life cycle of a high-risk artificial intelligence system, requiring regular, systematic review and updates. A risk management policy and program implemented and maintained pursuant to this subsection must be reasonable considering: (a)(i) The guidance and standards set forth in the latest version of the "Artificial Intelligence Risk Management Framework" published by the National Institute of Standards and Technology in the United States Department of Commerce, standard ISO/IEC 42001 of the International Organization for Standardization, or another nationally or internationally recognized risk management framework for artificial intelligence systems, if the standards are substantially equivalent to or more stringent than the requirements of this chapter; or (ii) any risk management framework for artificial intelligence systems that the Attorney General, in his discretion, may designate; (b) the size and complexity of the deployer; (c) the nature and scope of the high-risk artificial intelligence systems deployed by the deployer, including the intended uses of the high-risk artificial intelligence systems; and (d) the sensitivity and volume of data processed in connection with the high-risk artificial intelligence systems deployed by the deployer. 
(2) A risk management policy and program implemented pursuant to item (1) may cover multiple high-risk artificial intelligence systems deployed by the deployer.
Enacted 2024-05-01
G-01.3
Utah Code § 13-70-304(2), (4)
Plain Language
Participants in the AI Learning Laboratory must provide information to state agencies and report to the Office as specified in their participation agreement. They must also retain records as required by Office rules or the agreement. The specifics of what information, what reports, and what records will be determined by the Office's rules and the individual participation agreement — the statute delegates those details.
(2) A participant shall: (a) provide required information to state agencies in accordance with the terms of the participation agreement; and (b) report to the office as required in the participation agreement. ... (4) A participant shall retain records as required by office rule or the participation agreement.
Enacted 2024-05-01
G-01.1
Utah Code § 13-70-302(4), (6)
Plain Language
Each regulatory mitigation agreement must specify scope limitations on the AI technology's use (user types, geographic boundaries, and other implementation constraints), safeguards that must be in place, and the specific regulatory relief granted. Critically, participants remain fully subject to every legal and regulatory requirement that the agreement does not expressly waive or modify. This provision structures the sandbox as a limited, documented departure from baseline regulation rather than a blanket exemption.
(4) A regulatory mitigation agreement between a participant and the office and relevant agencies shall specify: (a) limitations on scope of the use of the participant's artificial intelligence technology, including: (i) the number and types of users; (ii) geographic limitations; and (iii) other limitations to implementation; (b) safeguards to be implemented; and (c) any regulatory mitigation granted to the applicant. ... (6) A participant remains subject to all legal and regulatory requirements not expressly waived or modified by the terms of the regulatory mitigation agreement.
Pending 2026-07-01
G-01.3, G-01.4
Va. Code § 19.2-11.14(E)
Plain Language
Law-enforcement agencies must retain the first draft of any AI-generated report or record for the same duration as the final version. The AI program used must maintain an audit trail identifying: who used AI to create or edit the report, all changes made after the initial draft, and any video or audio footage used as input. This creates a documentation and records-retention obligation that ensures the provenance and revision history of AI-generated law-enforcement records can be reconstructed for litigation, oversight, or audit purposes.
E. The first draft of any report or record created in whole or in part by using generative artificial intelligence shall be retained for as long as the final report is retained. The program used to generate a draft or final report shall maintain an audit trail that, at a minimum, identifies (i) the person who used artificial intelligence to create or edit the report; (ii) any changes made to the report following the initial draft; and (iii) the video and audio footage used to create a report, if any.
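The three audit-trail elements named in the provision above map naturally onto a small record type. This is a sketch under stated assumptions, not a prescribed implementation: the class and field names are illustrative, and the statute fixes only what must be captured (the AI user, post-draft changes, and footage inputs) plus retention of the first draft.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List

@dataclass(frozen=True)
class ReportEdit:
    """(ii) One change made to the report after the initial draft."""
    editor: str
    timestamp: datetime
    summary: str

@dataclass
class AIReportAuditTrail:
    """Minimum audit-trail elements named in § 19.2-11.14(E)."""
    report_id: str
    ai_user: str                      # (i) person who used AI to create or edit
    first_draft: str                  # retained as long as the final report is
    footage_refs: List[str] = field(default_factory=list)  # (iii) video/audio inputs
    edits: List[ReportEdit] = field(default_factory=list)  # (ii) post-draft changes

    def record_edit(self, editor: str, summary: str) -> None:
        self.edits.append(ReportEdit(editor, datetime.now(timezone.utc), summary))
```

Keeping the first draft and the edit log on the same record is what lets the revision history be reconstructed later for litigation or audit.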
Pending 2026-07-01
G-01.3, G-01.4
Va. Code § 38.2-3407.15(B)(15)(iii)
Plain Language
Carriers must maintain documentation of all AI-driven decisions related to claims and coverage management for a minimum of three years. This is a recordkeeping obligation — the documentation must be retained in a form that can be produced to the Bureau upon request under subdivision (ii). The three-year retention period runs from the date of the AI decision, creating an ongoing rolling retention window.
Each carrier shall (iii) maintain documentation of AI decisions for at least three years;
Pending 2025-07-01
G-01.1, G-01.2
9 V.S.A. § 4193g(a)-(b)
Plain Language
Every developer and deployer must plan, document, and implement a risk management policy and program governing its automated decision systems. The program must specify the principles, processes, and personnel used to identify, document, and mitigate known or foreseeable risks of algorithmic discrimination. It must be iterative with regular systematic reviews and updates over the system's lifecycle. Reasonableness is assessed considering: NIST AI RMF v1.0 (or a later version if the AG determines it is at least as stringent); the entity's size and complexity; the nature and scope of the system; and the sensitivity and volume of data processed. A single program may cover multiple systems if sufficient. The NIST AI RMF reference provides a benchmark standard, though compliance is evaluated based on reasonableness rather than strict conformity.
(a) Each developer or deployer of automated decision systems used in consequential decisions shall plan, document, and implement a risk management policy and program to govern development or deployment, as applicable, of the automated decision system. The risk management policy and program shall specify and incorporate the principles, processes, and personnel that the deployer uses to identify, document, and mitigate known or reasonably foreseeable risks of algorithmic discrimination covered under section 4193b of this title. The risk management policy and program shall be an iterative process planned, implemented, and regularly and systematically reviewed and updated over the life cycle of an automated decision system, requiring regular, systematic review and updates, including updates to documentation. A risk management policy and program implemented and maintained pursuant to this subsection shall be reasonable considering the: (1) guidance and standards set forth in version 1.0 of the Artificial Intelligence Risk Management Framework published by the National Institute of Standards and Technology in the U.S. Department of Commerce, or the latest version of the Artificial Intelligence Risk Management Framework published by the National Institute of Standards and Technology if, in the Attorney General's discretion, the latest version of the Artificial Intelligence Risk Management Framework published by the National Institute of Standards and Technology in the U.S. Department of Commerce is at least as stringent as version 1.0; (2) size and complexity of the developer or deployer; (3) nature, scope, and intended uses of the automated decision system developed or deployed for use in consequential decisions; and (4) sensitivity and volume of data processed in connection with the automated decision system. 
(b) A risk management policy and program implemented pursuant to subsection (a) of this section may cover multiple automated decision systems developed by the same developer or deployed by the same deployer for use in consequential decisions if sufficient.
Pre-filed 2025-07-01
G-01.1
9 V.S.A. § 4193g(b)
Plain Language
Deployers may not deploy an inherently dangerous AI system or any AI system creating reasonably foreseeable risks unless they have designed and implemented a risk management policy and program for that system. The policy must specify the principles, processes, and personnel the deployer will use to identify, mitigate, and document foreseeable risks. The program must be at least as stringent as the latest NIST AI Risk Management Framework, and must also be reasonable in light of the deployer's size and complexity, the nature and scope of the system (including intended and unintended uses and deployer modifications), and the data the system processes as inputs. This is a deployment prerequisite — the program must be in place before the system goes live.
(b) No deployer shall deploy an inherently dangerous artificial intelligence system or an artificial intelligence system that creates reasonably foreseeable risks pursuant to section 4193f of this subchapter unless the deployer has designed and implemented a risk management policy and program for the model or system. The risk management policy shall specify the principles, processes, and personnel that the deployer shall use in maintaining the risk management program to identify, mitigate, and document any risk that is a reasonably foreseeable consequence of deploying or using the system. Each risk management policy and program designed, implemented, and maintained pursuant to this subsection shall be: (1) at least as stringent as the latest version of the Artificial Intelligence Risk Management Framework published by the NIST; and (2) reasonable considering: (A) the size and complexity of the deployer; (B) the nature and scope of the system, including the intended uses and unintended uses and the modifications made to the system by the deployer; and (C) the data that the system, once deployed, processes as inputs.
Pre-filed 2026-07-01
G-01.1
9 V.S.A. § 4193b(d)
Plain Language
Chatbot providers must develop, implement, and maintain a written comprehensive data security program with administrative, technical, and physical safeguards proportionate to the volume and nature of personal data and chat logs they maintain. The program must be published on the provider's website. This is both an operational requirement (the program must actually exist and function) and a public transparency requirement (the written program must be publicly accessible).
(d) Data security program. A chatbot provider shall develop, implement, and maintain a comprehensive data security program that contains administrative, technical, and physical safeguards that are proportionate to the volume and nature of the personal data and chat logs maintained by the chatbot provider. The program shall be written and made publicly available on the chatbot provider's website.
Passed 2026-07-01
G-01.1, G-01.3
18 V.S.A. § 9764(a)-(b)
Plain Language
Mental health chatbot suppliers may claim an affirmative defense against professional regulation enforcement actions if they have created, maintained, and implemented a comprehensive written policy meeting the requirements of subsection (b), including fifteen enumerated procedures; maintained documentation of the chatbot's development and implementation (including foundation models, training tools, privacy compliance, data practices, and safety efforts); filed the policy with the Attorney General; and complied with the filed policy at the time of the alleged violation. The required policy is extensive — it must cover clinical professional involvement, best-practices monitoring, pre- and post-deployment testing benchmarked against human therapy safety, adverse outcome identification, user harm reporting mechanisms, real-time acute harm protocols, regular safety audits, user disclosure of AI nature and limitations, prioritization of user safety over engagement, anti-discrimination measures, and HIPAA-equivalent privacy compliance. While structured as an affirmative defense rather than a mandatory obligation, as a practical matter any supplier seeking regulatory protection will need to comply with all requirements.
(a) It is an affirmative defense to liability in an action for unlawful or unprofessional conduct brought against a supplier by the Office of Professional Regulation or the Board of Medical Practice if the supplier demonstrates that the supplier meets all of the following conditions: (1) the supplier created, maintained, and implemented a policy that meets the requirements of subsection (b) of this section; (2) the supplier maintains documentation regarding the development and implementation of the mental health chatbot that describes: (A) foundation models used in development; (B) training tools used; (C) compliance with federal health privacy regulations; (D) user data collection and sharing practices; and (E) ongoing efforts to ensure accuracy, reliability, fairness, and safety; (3) the supplier filed the policy with the Office of the Attorney General; and (4) the supplier complied with all requirements of the filed policy at the time of the alleged violation. (b) A policy described in subdivision (a)(1) of this section shall meet all of the following requirements: (1) be in writing; (2) clearly state: (A) the intended purposes of the mental health chatbot; and (B) the abilities and limitations of the mental health chatbot; (3) describe the procedures by which the supplier: (A) ensures that qualified mental health providers licensed in Vermont or in one or more other states, or both, are involved in the development and review process; (B) ensures that the mental health chatbot is developed and monitored in a manner consistent with clinical best practices; (C) conducts testing prior to making the mental health chatbot publicly available and regularly thereafter to ensure that the output of the mental health chatbot poses no greater risk to a user than that posed to an individual in psychotherapy with a licensed mental health provider; (D) identifies reasonably foreseeable adverse outcomes to and potentially harmful interactions with users that could result from 
using the mental health chatbot; (E) provides a mechanism for a user to report any potentially harmful interactions from use of the mental health chatbot; (F) implements protocols to assess and respond to risk of harm to users or other individuals; (G) details actions taken to prevent or mitigate any such adverse outcomes or potentially harmful interactions; (H) implements protocols to respond in real time to acute risk of physical harm; (I) reasonably ensures regular, objective reviews of safety, accuracy, and efficacy, which may include internal or external audits; (J) provides users any necessary instructions on the safe use of the mental health chatbot; (K) ensures users understand that they are interacting with artificial intelligence; (L) ensures users understand the intended purpose, capabilities, and limitations of the mental health chatbot; (M) prioritizes user mental health and safety over engagement metrics or profit; (N) implements measures to prevent discriminatory treatment of users; and (O) ensures compliance with the security and privacy protections of 45 C.F.R. Part 160 and 45 C.F.R. Part 164, Subparts A, C, and E, as if the supplier were a covered entity, and applicable consumer protection requirements, including sections 9761-9763 of this subchapter.
Pending 2027-01-01
G-01.1
Sec. 2(5)
Plain Language
Developers that conform their high-risk AI systems to the NIST AI RMF, ISO/IEC 42001, or another nationally or internationally recognized risk management framework receive a presumption of compliance with the developer obligations in Section 2. This operates as a rebuttable presumption rather than an outright safe harbor: it does not eliminate the underlying obligation, but it shifts the burden to whoever challenges compliance. Developers choosing not to follow one of these frameworks must independently demonstrate compliance.
(5) High-risk artificial intelligence systems that are in conformity with the latest version of the artificial intelligence risk management framework published by the national institute of standards and technology, standard ISO/IEC 42001 of the international organization for standardization, or another nationally or internationally recognized risk management framework for artificial intelligence systems, or parts thereof, are presumed to be in conformity with related requirements set out in this section.
Pending 2027-01-01
G-01.2
Sec. 2(6)
Plain Language
When a developer performs an intentional and substantial modification to a high-risk AI system, the developer must update all previously provided disclosures within 90 days to keep them accurate. Routine deployer customizations and predetermined continuous-learning changes covered in the initial impact assessment are excluded from the definition of intentional and substantial modification and do not trigger this update obligation.
(6) For a disclosure required pursuant to this section, a developer shall, no later than 90 days after the developer performs an intentional and substantial modification to a high-risk artificial intelligence system, update such disclosure as necessary to ensure that such disclosure remains accurate.
Pending 2027-01-01
G-01.1
Sec. 3(2)(a)-(c)
Plain Language
Deployers may not use a high-risk AI system for consequential decisions unless they have designed and implemented a risk management policy and program specifying the principles, processes, and personnel for identifying, mitigating, and documenting algorithmic discrimination risks. Aligning the program with the NIST AI RMF, ISO/IEC 42001, or a substantially equivalent framework creates a rebuttable presumption of compliance. This is a prerequisite to deployment — the program must exist before the system is used for consequential decisions.
(2)(a) A deployer may not deploy or use a high-risk artificial intelligence system to make a consequential decision unless the deployer has designed and implemented a risk management policy and program for such high-risk artificial intelligence system. The risk management policy must specify the principles, processes, and personnel that the deployer must use in maintaining the risk management program to identify, mitigate, and document any risk of algorithmic discrimination that is a reasonably foreseeable consequence of deploying or using such high-risk artificial intelligence system to make a consequential decision. (b) A risk management policy and program designed, implemented, and maintained pursuant to this section is presumed to be in conformity with related requirements set out in this section if the policy and program align with the guidance and standards set forth in the latest version of: (i) The artificial intelligence risk management framework published by the national institute of standards and technology; (ii) Standard ISO/IEC 42001 of the international organization for standardization; or (iii) A nationally or internationally recognized risk management framework for artificial intelligence systems with requirements that are substantially equivalent to, and at least as stringent as, the guidance and standards described in (b)(i) and (ii) of this subsection (2). (c) High-risk artificial intelligence systems that are in conformity with the latest version of the artificial intelligence risk management framework published by the national institute of standards and technology, standard ISO/IEC 42001 of the international organization for standardization, or another nationally or internationally recognized risk management framework for artificial intelligence systems, or parts thereof, are presumed to be in conformity with related requirements set out in this section.
Pending 2027-01-01
G-01.2
Sec. 3(7)
Plain Language
When a developer notifies a deployer of an intentional and substantial modification to a high-risk AI system, the deployer must update all of its consumer-facing disclosures within 30 days to ensure accuracy. This is a shorter window than the 90 days developers have under Section 2(6), reflecting the expectation that deployers can update their disclosures more quickly once they receive the developer's updated documentation.
(7) For a disclosure required pursuant to this section, each deployer shall, no later than 30 days after the deployer is notified by the developer that the developer has performed an intentional and substantial modification to a high-risk artificial intelligence system, update such disclosure as necessary to ensure that such disclosure remains accurate.
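For compliance teams tracking these windows in software, the two update obligations reduce to simple date arithmetic: 90 days from the modification for developers under Section 2(6), and 30 days from notification for deployers under Section 3(7). The Python sketch below is illustrative only; the role labels and function names are ours, not statutory terms.

```python
from datetime import date, timedelta

# Illustrative sketch (not statutory text). Under this act, a developer has
# 90 days from performing an intentional and substantial modification, and a
# deployer has 30 days from being notified of one, to restore the accuracy
# of its required disclosures. Role labels and names here are hypothetical.
UPDATE_WINDOWS = {
    "developer": timedelta(days=90),  # runs from the modification date
    "deployer": timedelta(days=30),   # runs from the notification date
}

def disclosure_update_deadline(role: str, trigger_date: date) -> date:
    """Latest date by which the disclosure must be updated."""
    return trigger_date + UPDATE_WINDOWS[role]
```

For example, a deployer notified of a modification on 2027-03-01 would need its disclosures updated by 2027-03-31, while a developer that performed a modification on 2027-01-01 would have until 2027-04-01.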
Pending 2026-07-01
G-01.1G-01.2
Sec. 4(1)-(3)
Plain Language
Beginning July 1, 2027, deployers must implement and maintain a risk management policy and program governing deployment of each high-risk AI system. The program must specify the principles, processes, and personnel used to identify, document, and mitigate algorithmic discrimination risks, and must be iteratively reviewed and updated throughout the system lifecycle. Reasonableness is judged by deployer size and complexity, system scope and intended uses, data sensitivity and volume, and adherence to a recognized framework such as the NIST AI RMF, ISO/IEC 42001, or another framework designated by the attorney general. A single program may cover multiple high-risk systems. Small deployers (fewer than 50 FTEs that do not use their own data to train the system) may be exempt under Section 5(6) if additional conditions are met.
(1) Beginning July 1, 2027, and except as provided in section 5(6) of this act, each deployer of a high-risk artificial intelligence system shall implement and maintain a risk management policy and program to govern the deployer's deployment of a high-risk artificial intelligence system. (2)(a) The risk management policy and program must specify and incorporate the principles, processes, and personnel that the deployer uses to identify, document, and mitigate known or reasonably foreseeable risks of algorithmic discrimination. The risk management policy and program must include an iterative process that is planned, implemented, and regularly and systematically reviewed and updated over the lifecycle of the high-risk artificial intelligence system. (b) A risk management policy and program implemented and maintained pursuant to this subsection must be reasonable, considering: (i) The size and complexity of the deployer; (ii) The nature and scope of the high-risk artificial intelligence systems deployed by the deployer including, but not limited to, the intended uses of such high-risk artificial intelligence systems; (iii) The sensitivity and volume of data processed in connection with the high-risk artificial intelligence systems deployed by the deployer; and (iv) A risk management framework that either: (A) Adheres to the guidance and standards set forth in the latest version of the artificial intelligence risk management framework published by the national institute of standards and technology, ISO/IEC 42001, or another nationally or internationally recognized risk management framework for artificial intelligence systems, if the standards are substantially equivalent to or more stringent than the requirements; or (B) Complies with any risk management framework for artificial intelligence systems that the attorney general, in the attorney general's discretion, may designate. 
(c) A risk management policy and program implemented and maintained pursuant to this subsection (2) may cover multiple high-risk artificial intelligence systems deployed by the deployer. (3) Nothing in this section may be construed to require a deployer to disclose any trade secret, or other confidential or proprietary information.
Pending 2027-01-01
G-01.2
Sec. 2(6)
Plain Language
When a developer makes an intentional and substantial modification to a high-risk AI system, all disclosures required under Section 2 must be updated within 90 days to remain accurate. The definition of intentional and substantial modification narrows the trigger to changes that create new material discrimination risks — routine deployer customizations within scope and pre-approved continuous learning changes do not trigger the update obligation.
(6) For a disclosure required pursuant to this section, a developer shall, no later than 90 days after the developer performs an intentional and substantial modification to a high-risk artificial intelligence system, update such disclosure as necessary to ensure that such disclosure remains accurate.
Pending 2027-01-01
G-01.1
Sec. 3(2)
Plain Language
Deployers may not use a high-risk AI system for consequential decisions unless they have designed and implemented a risk management policy and program covering the principles, processes, and personnel for identifying, mitigating, and documenting algorithmic discrimination risks. Alignment with the NIST AI RMF, ISO/IEC 42001, or a substantially equivalent recognized framework creates a rebuttable presumption of conformity. This is a deployment prerequisite — the deployer must have the program in place before using the system.
(2)(a) A deployer may not deploy or use a high-risk artificial intelligence system to make a consequential decision unless the deployer has designed and implemented a risk management policy and program for such high-risk artificial intelligence system. The risk management policy must specify the principles, processes, and personnel that the deployer must use in maintaining the risk management program to identify, mitigate, and document any risk of algorithmic discrimination that is a reasonably foreseeable consequence of deploying or using such high-risk artificial intelligence system to make a consequential decision. (b) A risk management policy and program designed, implemented, and maintained pursuant to this section is presumed to be in conformity with related requirements set out in this section if the policy and program align with the guidance and standards set forth in the latest version of: (i) The artificial intelligence risk management framework published by the national institute of standards and technology; (ii) Standard ISO/IEC 42001 of the international organization for standardization; or (iii) A nationally or internationally recognized risk management framework for artificial intelligence systems with requirements that are substantially equivalent to, and at least as stringent as, the guidance and standards described in (b)(i) and (ii) of this subsection (2). (c) High-risk artificial intelligence systems that are in conformity with the latest version of the artificial intelligence risk management framework published by the national institute of standards and technology, standard ISO/IEC 42001 of the international organization for standardization, or another nationally or internationally recognized risk management framework for artificial intelligence systems, or parts thereof, are presumed to be in conformity with related requirements set out in this section.
Pending 2027-01-01
G-01.2
Sec. 3(7)-(8)
Plain Language
Deployers must update all required disclosures within 30 days after being notified by the developer of an intentional and substantial modification to the AI system. Separately, if a deployer itself performs an intentional and substantial modification, it must also comply with all developer-level documentation and disclosure requirements under Section 2. This means a deployer that significantly modifies a system effectively steps into the developer's shoes for documentation purposes.
(7) For a disclosure required pursuant to this section, each deployer shall, no later than 30 days after the deployer is notified by the developer that the developer has performed an intentional and substantial modification to a high-risk artificial intelligence system, update such disclosure as necessary to ensure that such disclosure remains accurate. (8) A deployer who performs an intentional and substantial modification to a high-risk artificial intelligence system shall comply with the documentation and disclosure requirements for developers pursuant to section 2 of this act.
Pending 2026-07-01
G-01.1G-01.2
Sec. 4(1)-(2)
Plain Language
Each deployer of a high-risk AI system must implement and maintain a risk management policy and program beginning July 1, 2027. The program must identify, document, and mitigate known or reasonably foreseeable risks of algorithmic discrimination, using an iterative process regularly reviewed and updated over the system lifecycle. The program's reasonableness is evaluated based on the deployer's size and complexity, the nature and scope of deployed systems, data sensitivity and volume, and adherence to a recognized risk management framework (NIST AI RMF, ISO/IEC 42001, an equivalent international standard, or a framework designated by the attorney general). A single program may cover multiple high-risk AI systems. Small deployers meeting the conditions of Section 6(6) are exempt: the deployer has fewer than 50 employees, does not use its own data to train the system, uses the system only for its disclosed intended purposes, and makes the developer's impact assessment available to consumers.
(1) Beginning July 1, 2027, and except as provided in section 6(6) of this act, each deployer of a high-risk artificial intelligence system shall implement and maintain a risk management policy and program to govern the deployer's deployment of a high-risk artificial intelligence system. (2)(a) The risk management policy and program must specify and incorporate the principles, processes, and personnel that the deployer uses to identify, document, and mitigate known or reasonably foreseeable risks of algorithmic discrimination. The risk management policy and program must include an iterative process that is planned, implemented, and regularly and systematically reviewed and updated over the lifecycle of the high-risk artificial intelligence system. (b) A risk management policy and program implemented and maintained pursuant to this subsection must be reasonable, considering: (i) The size and complexity of the deployer; (ii) The nature and scope of the high-risk artificial intelligence systems deployed by the deployer including, but not limited to, the intended uses of such high-risk artificial intelligence systems; (iii) The sensitivity and volume of data processed in connection with the high-risk artificial intelligence systems deployed by the deployer; and (iv) A risk management framework that either: (A) Adheres to the guidance and standards set forth in the latest version of the artificial intelligence risk management framework published by the national institute of standards and technology, ISO/IEC 42001, or another nationally or internationally recognized risk management framework for artificial intelligence systems, if the standards are substantially equivalent to or more stringent than the requirements; or (B) Complies with any risk management framework for artificial intelligence systems that the attorney general, in the attorney general's discretion, may designate. 
(c) A risk management policy and program implemented and maintained pursuant to this subsection (2) may cover multiple high-risk artificial intelligence systems deployed by the deployer.
Pending 2026-07-01
G-01.1G-01.2
Sec. 5(1)-(5)
Plain Language
Each developer of a high-risk AI system with 50 or more full-time equivalent employees must implement and maintain a risk management policy and program beginning July 1, 2027, with the same substantive requirements as the deployer program (Sec. 4): the program must identify, document, and mitigate algorithmic discrimination risks using an iterative process and must align with NIST AI RMF, ISO/IEC 42001, an equivalent framework, or one designated by the attorney general. A developer that also deploys its own system is not required to produce the developer-side documentation unless the system is provided to an unaffiliated deployer. Developers with fewer than 50 FTEs are entirely exempt from this section.
(1) Beginning July 1, 2027, and except as provided in section 6(6) of this act, each developer of a high-risk artificial intelligence system shall implement and maintain a risk management policy and program to govern the developer's deployment of a high-risk artificial intelligence system. (2)(a) The risk management policy and program must specify and incorporate the principles, processes, and personnel that the deployer uses to identify, document, and mitigate known or reasonably foreseeable risks of algorithmic discrimination. The risk management policy and program must include an iterative process that is planned, implemented, and regularly and systematically reviewed and updated over the life cycle of the high-risk artificial intelligence system. (b) A risk management policy and program implemented and maintained pursuant to this subsection must be reasonable, considering: (i) The size and complexity of the developer; (ii) The nature and scope of the high-risk artificial intelligence systems deployed by the developer including, but not limited to, the intended uses of such high-risk artificial intelligence systems; (iii) The sensitivity and volume of data processed in connection with the high-risk artificial intelligence systems deployed by the developer; and (iv) A risk management framework that either: (A) Adheres to the guidance and standards set forth in the latest version of the artificial intelligence risk management framework published by the national institute of standards and technology, ISO/IEC 42001, or another nationally or internationally recognized risk management framework for artificial intelligence systems, if the standards are substantially equivalent to or more stringent than the requirements; or (B) Complies with any risk management framework for artificial intelligence systems that the attorney general, in the attorney general's discretion, may designate. 
(c) A risk management policy and program implemented and maintained pursuant to this subsection (2) may cover multiple high-risk artificial intelligence systems deployed by the deployer. (3) A developer that also serves as a deployer for any high-risk artificial intelligence system may not be required to generate the documentation required by this section unless such high-risk artificial intelligence system is provided to an unaffiliated entity acting as a deployer or as otherwise required by law. (5) This section does not apply to a developer with fewer than 50 full-time equivalent employees.
Pending 2026-06-06
G-01.1
§ 15-17-3(a)
Plain Language
Any private entity that possesses biometric identifiers or biometric information must develop and make publicly available a written retention and destruction policy. The policy must establish a schedule for permanently destroying biometric data when the original purpose for collection has been satisfied or within three years of the individual's last interaction with the entity — whichever comes first. The entity must then actually comply with its own published schedule and destruction guidelines, absent a valid warrant or subpoena. This is both a documentation obligation and an ongoing operational obligation.
(a) A private entity in possession of biometric identifiers or biometric information must develop a written policy, made available to the public, establishing a retention schedule and guidelines for permanently destroying biometric identifiers and biometric information when the initial purpose for collecting or obtaining such identifiers or information has been satisfied or within three years of the individual's last interaction with the private entity, whichever occurs first. Absent a valid warrant or subpoena issued by a court of competent jurisdiction, a private entity in possession of biometric identifiers or biometric information must comply with its established retention schedule and destruction guidelines.
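For entities operationalizing this retention rule, the destruction deadline is a "whichever occurs first" computation: the earlier of the purpose-satisfaction date and three years after the individual's last interaction. The sketch below is a hypothetical illustration (field and function names are ours, and 3 × 365 days simplifies "three years"); real tooling should use calendar-accurate year arithmetic and legal review.

```python
from datetime import date, timedelta
from typing import Optional

# Illustrative sketch (not statutory text). The statute requires permanent
# destruction when the initial collection purpose has been satisfied or
# within three years of the individual's last interaction with the entity,
# whichever occurs first. All names here are hypothetical.
THREE_YEARS = timedelta(days=3 * 365)  # simplification of "three years"

def destruction_deadline(purpose_satisfied: Optional[date],
                         last_interaction: date) -> date:
    """Earlier of purpose satisfaction and last interaction plus three years."""
    outer_limit = last_interaction + THREE_YEARS
    if purpose_satisfied is None:  # collection purpose still ongoing
        return outer_limit
    return min(purpose_satisfied, outer_limit)

def is_overdue(deadline: date, today: date, legal_hold: bool = False) -> bool:
    """A valid warrant or subpoena (modeled here as legal_hold) suspends destruction."""
    return today > deadline and not legal_hold
```

For example, if the collection purpose was satisfied on 2025-01-15 and the individual's last interaction was 2024-06-01, the purpose-satisfaction date is the earlier of the two triggers, so destruction is due by 2025-01-15 absent a warrant or subpoena.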