S-01
Safety & Prohibited Conduct
AI System Safety Program
Applies to: Developer, Deployer, Professional, Government Sector · Foundation Model, Healthcare, Government System
Bills — Enacted: 2 unique bills
Bills — Proposed: 22
Last Updated: 2026-03-29
Core Obligation

Developers and deployers of high-risk AI systems must conduct documented safety evaluations before deployment, conduct adversarial testing to identify misuse potential and failure modes, and maintain ongoing safety controls. Pre-deployment evaluation does not permanently satisfy this obligation — post-deployment monitoring and re-evaluation are continuing requirements.

Sub-Obligations (6)
S-01.1 Internal pre-deployment safety evaluation (1 enacted, 9 proposed)
A documented safety evaluation covering the system's behavior across intended use cases and reasonably foreseeable misuse cases must be conducted and retained before deployment. Must identify failure modes and the harms they could cause.

S-01.2 Red-teaming and adversarial testing (0 enacted, 0 proposed)
Structured adversarial testing must be conducted to identify the system's potential for misuse, harmful output elicitation, jailbreaking, and dangerous capability expression. Covers both internal and, for frontier models, independent external red-teaming.

S-01.3 Third-party safety evaluation (0 enacted, 1 proposed)
For frontier or high-capability models, independent external safety evaluation by a qualified third party is required or strongly expected. The third party must have meaningful model access and freedom to probe without restriction.

S-01.4 Post-deployment monitoring and re-evaluation (0 enacted, 11 proposed)
Deployed AI systems must be monitored for drift, unexpected behavior, and safety incidents. Material model updates and safety incidents trigger re-evaluation obligations.

S-01.5 Ongoing risk management program (2 enacted, 7 proposed)
A formal documented AI risk management program must be established and maintained, covering risk identification, assessment criteria, mitigation strategies, and escalation procedures. The NIST AI RMF is commonly cited as a safe harbor framework.

S-01.7 Continuous post-deployment quality assurance (0 enacted, 15 proposed)
Deployed AI tools must be subject to periodic performance review and revision to maximize accuracy, reliability, and safety on an ongoing operational basis, distinct from pre-deployment testing or incident response.
Bills That Map This Requirement (24)
Bill · Status · Sub-Obligations · Section
Passed 2026-10-01
S-01.7
Section 1(c)(2)
Plain Language
Insurers must annually certify to the Department of Insurance that (1) the AI system and its outputs are periodically reviewed to maximize accuracy and reliability, and (2) AI use in utilization review complies with the individualized data, fairness, and non-discrimination requirements of subsection (b). The first element is a substantive periodic performance review obligation — the insurer must actually conduct ongoing reviews, not merely certify at year-end. The second element is a compliance certification overlapping with the subsection (b)(2) annual certification but framed in the context of ongoing utilization review operations.
(2) Certify annually to the department that: (i) use of artificial intelligence and the outcomes that it generates are reviewed on a periodic basis to maximize accuracy and reliability; and (ii) use of artificial intelligence in utilization review complies with the requirements of subsection (b).
Pending 2026-01-01
S-01.4, S-01.7
A.R.S. § 44-1383.02(C)
Plain Language
Chatbot providers must, on a monthly basis, evaluate their chatbots for potential risk of harm to users and publish information about their chatbots on their website. Providers must also mitigate any identified risks of harm. The specific form and content of evaluations and the definition of risk of harm will be established by Attorney General rulemaking. This is a continuous, monthly operational obligation — significantly more frequent than the annual reviews required by most other AI safety statutes.
In compliance with the rules adopted by the attorney general pursuant to section 44-1383.03, a chatbot provider shall: 1. On a monthly basis: (a) Evaluate its chatbot for potential risk of harm to users. (b) Make information about its chatbot publicly available on its website. 2. Mitigate any risk of harm to users.
Pending 2027-07-01
S-01.5
Bus. & Prof. Code § 22612(a)-(b)
Plain Language
Operators must annually conduct and document a comprehensive child safety risk assessment covering the likelihood of covered harms, differential risks by age and developmental stage, known child vulnerabilities, empirical usage data, and relevant research and regulatory guidance. Operators must then take and document reasonable mitigation measures for each identified risk. This is not a one-time exercise — it must be performed annually and is grounded in actual use data. The covered harm definition is broad, encompassing physical, financial, psychological, privacy, and discrimination harms.
On or before July 1, 2027, an operator shall do all of the following: (a) Annually perform and document a comprehensive risk assessment to identify any child safety risk posed by the design, configuration, and operation of the companion chatbot that assesses all of the following: (1) The likelihood of a covered harm occurring to users. (2) Differential risks across age groups and developmental stages. (3) Known vulnerabilities of children. (4) Empirical data from actual use. (5) Relevant academic research and regulatory guidance. (b) Take and document measures that reasonably mitigate any child safety risk identified in a risk assessment conducted pursuant to subdivision (a).
Pending 2028-07-01
S-01.4, S-01.7
HRS § 321-__ (Monitoring; performance evaluation; record keeping)(1)-(3)
Plain Language
Health care providers using AI in consequential patient decisions must maintain an ongoing program of monitoring, performance evaluation, and remediation for those AI systems. Monitoring must cover actual usage in consequential decision-making. Regular performance evaluations must assess potential biases and risks to patient safety and data confidentiality, and must develop mitigation strategies for identified risks. The provider must also implement procedures to address deficiencies discovered through monitoring or evaluation, up to and including suspending or recalibrating the AI system. The frequency of evaluations will be established by Department of Health rules.
Any health care provider that uses an artificial intelligence system to make, or be a substantial factor in making, a consequential decision shall: (1) Monitor the usage of artificial intelligence systems to make, or be a substantial factor in making, consequential decisions; (2) Conduct regular performance evaluations of the artificial intelligence systems, including the assessment of: (A) Potential biases; (B) Risks to the safety and rights of patients, including the confidentiality of personal data; and (C) Mitigation strategies for any identified risks; (3) Implement procedures to address any deficiencies identified through the monitoring or performance evaluations, including the suspension or recalibration of any artificial intelligence system;
Pending 2025-07-01
S-01.5
§ 554J.2(1)
Plain Language
Every deployer of a chatbot must establish and maintain ongoing protocols designed to detect, respond to, report, and mitigate harms the chatbot may cause users. These protocols must prioritize user safety and well-being over the deployer's commercial or other interests. This is a continuing obligation — the protocols must be maintained, not merely established once. The statute does not specify the content of the protocols in detail, leaving significant discretion to deployers but also creating compliance ambiguity.
A deployer of a chatbot shall do all of the following: 1. Implement and maintain protocols meant to detect, respond to, report, and mitigate harm the chatbot may cause a user in a manner that prioritizes the safety and well-being of users over the deployer's interests.
Pending
S-01.4, S-01.5
§ 554J.2(1)(a)
Plain Language
Deployers must implement and maintain ongoing protocols to detect, respond to, report, and mitigate harms their public-facing chatbot may cause users. The protocols must take commercially reasonable steps — meaning steps consistent with prevailing industry standards and proportionate to the deployer's size and resources — to protect user safety and well-being. This is a continuous operating requirement, not a one-time pre-launch check. A deployer that makes commercially reasonable efforts to comply with the entire chapter is not liable for unforeseeable or emergent outputs (safe harbor under § 554J.5).
A deployer of a public-facing chatbot shall do all of the following: a. Implement and maintain protocols meant to detect, respond to, report, and mitigate harm the public-facing chatbot may cause a user in a manner that takes commercially reasonable steps to protect the safety and well-being of users.
Pending
S-01.4, S-01.5
§ 554J.2(1)
Plain Language
Deployers of chatbots must establish and continuously maintain protocols that detect, respond to, report on, and mitigate harms the chatbot may cause users. The protocols must prioritize user safety and well-being over the deployer's own commercial or operational interests. This is a continuing operational obligation — not a one-time pre-deployment check. The statute does not specify the form, content, or review cadence of these protocols, giving deployers discretion on implementation details.
A deployer of a chatbot shall do all of the following: 1. Implement and maintain protocols meant to detect, respond to, report, and mitigate harm the chatbot may cause a user in a manner that prioritizes the safety and well-being of users over the deployer's interests.
Pending 2026-10-01
S-01.7
Insurance Article § 15–10B–05.1(c)(9)
Plain Language
Carriers must review the performance, use, and outcomes of their AI utilization review tools at least quarterly and revise them as necessary to maximize accuracy and reliability. This is a continuing operational review obligation — not a one-time pre-deployment test. The quarterly cadence is more frequent than the annual review requirement seen in many other jurisdictions. This is existing law reenacted without amendment.
(9) the performance, use, and outcomes of an artificial intelligence, algorithm, or other software tool are reviewed and revised, if necessary and at least on a quarterly basis, to maximize accuracy and reliability;
Pending 2026-01-01
S-01.1, S-01.7
G.S. 114B-4(d)
Plain Language
Licensed health information chatbot operators must demonstrate the chatbot's effectiveness through three separate requirements: (1) peer-reviewed, controlled trials with adequate sample sizes using real-world performance data, (2) comparative analysis against human expert performance, and (3) meeting minimum domain benchmarks set by the Department. These are substantive efficacy validation requirements — not just safety testing — and are unusual in requiring peer-reviewed trials and human-expert benchmarking for a chatbot product.
(d) A licensees shall do all of the following: (1) Demonstrate effectiveness through peer-reviewed, controlled trials with appropriate validation studies done on appropriate sample sizes with real-world performance data. (2) Demonstrate effectiveness in a comparative analysis to human expert performance. (3) Meet minimum domain benchmarks as established by the Department.
Pending 2027-01-01
S-01.4, S-01.7
G.S. § 114B-4(b)(1)
Plain Language
Licensees operating health-information chatbots must implement industry-standard encryption for data both in transit and at rest, maintain detailed access logs of system activity, and conduct security audits at least every six months. This is an ongoing operational security requirement — not a one-time pre-launch check.
A licensee shall do all of the following: (1) Implement industry-standard encryption for data in transit and at rest, maintain detailed access logs, and conduct regular security audits no less than once every six months.
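As an illustrative sketch only: the statute names the controls (at-rest encryption, detailed access logs) but prescribes no tooling, so the library choice (`cryptography`'s Fernet), the JSON log format, and the inline key generation below are all assumptions for demonstration. A production system would source keys from a key-management service.

```python
# Illustrative only: the statute requires "industry-standard encryption" for
# data in transit and at rest plus "detailed access logs", but prescribes no
# tooling. Library choice (Fernet) and log format are assumptions.
import json
import logging
from datetime import datetime, timezone

from cryptography.fernet import Fernet  # pip install cryptography

# At-rest encryption key. In production this would come from a KMS/HSM,
# not be generated inline at startup.
fernet = Fernet(Fernet.generate_key())

logging.basicConfig(filename="access.log", level=logging.INFO)
access_log = logging.getLogger("chatbot.access")

def _log(actor: str, action: str) -> None:
    """Append a structured access-log entry."""
    access_log.info(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "actor": actor,
        "action": action,
    }))

def store_message(user_id: str, plaintext: str) -> bytes:
    """Encrypt a message before it is persisted, and log the write."""
    _log(user_id, "write")
    return fernet.encrypt(plaintext.encode("utf-8"))  # persist ciphertext only

def read_message(user_id: str, ciphertext: bytes) -> str:
    """Decrypt a stored message and log the read."""
    _log(user_id, "read")
    return fernet.decrypt(ciphertext).decode("utf-8")
```

The six-month security audit requirement is procedural and has no code analogue.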
Pending 2027-01-01
S-01.1
G.S. § 114B-4(d)(1)-(3)
Plain Language
Licensees must demonstrate their health-information chatbot's effectiveness through three mechanisms: peer-reviewed controlled trials with real-world performance data, a comparative analysis against human expert performance, and meeting minimum domain benchmarks set by the Department. This is an unusually rigorous validation requirement — resembling FDA clinical trial standards more than typical AI regulation — and requires ongoing demonstration, not just pre-deployment testing.
A licensee shall do all of the following: (1) Demonstrate effectiveness through peer-reviewed, controlled trials with appropriate validation studies done on appropriate sample sizes with real-world performance data. (2) Demonstrate effectiveness in a comparative analysis to human expert performance. (3) Meet minimum domain benchmarks as established by the Department.
Pending 2027-01-01
G.S. § 170-6(d)
Plain Language
All covered platforms must encrypt all messages transmitted between users and chatbots during transit. The statute defines transport encryption as data encrypted during transmission but potentially accessible in unencrypted form at endpoints or by intermediary service providers. This is a baseline security requirement — notably, it mandates only transport-layer encryption (e.g., TLS), not end-to-end encryption, meaning the platform itself may access message content at rest.
All covered platforms shall utilize transport encryption for all messages between a user and a chatbot.
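The transport-versus-end-to-end distinction the statute draws can be made concrete with a short sketch using Python's standard `ssl` module. The hostname is a placeholder; nothing here comes from the bill. The point is that TLS protects bytes on the wire while both endpoints, including the platform's own server, handle plaintext.

```python
# Transport-encryption sketch: TLS protects the message on the wire, but the
# server terminates TLS and sees plaintext, matching the statute's definition
# of transport encryption (as opposed to end-to-end encryption).
import socket
import ssl

HOST = "chatbot.example.com"  # placeholder hostname, not from the bill

context = ssl.create_default_context()  # verifies certificates by default

with socket.create_connection((HOST, 443)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=HOST) as tls_sock:
        print("negotiated:", tls_sock.version())  # e.g. "TLSv1.3"
        # Bytes sent here are encrypted in transit...
        tls_sock.sendall(b"GET / HTTP/1.1\r\nHost: %s\r\n\r\n" % HOST.encode())
        # ...but are decrypted at the server endpoint, where the platform may
        # store or read them unless it separately applies at-rest encryption.
        reply = tls_sock.recv(4096)
```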
Pre-filed 2026-07-01
S-01.1
Section 1(b)(1)(a)-(c)
Plain Language
The Office of Information Technology must establish minimum requirements for AI safety tests. At a minimum, safety tests must include: an analysis of cybersecurity threats and vulnerabilities; an analysis of data sources and potential sources of bias, inaccuracy, or legal violations (criminal, copyright, patent, trade secret); and descriptions of remedies or defensive measures to address identified issues. This provision obligates OIT to create the testing framework — the corresponding obligation on AI companies to actually conduct these tests is in Section 1(c).
The Office of Information Technology shall: (1) establish minimum requirements for an artificial intelligence safety test for artificial intelligence technology sold, developed, deployed, used, or offered for sale in this State that is conducted by an artificial intelligence company pursuant to subsection c. of this section, which requirements shall include but not be limited to: (a) an analysis of potential cybersecurity threats and vulnerabilities; (b) an analysis of an artificial intelligence technology's data sources and potential sources of bias, incorrect or inaccurate information, or violations of State or federal criminal, copyright, patent, or trade secret laws; and (c) descriptions of possible remedies or defensive measures that can be taken by the artificial intelligence company to address all potential cybersecurity threats and vulnerabilities, potential sources of bias, incorrect or inaccurate information, or potential violations of State or federal criminal, copyright, patent, or trade secret laws identified during the conducting of the safety test
Pre-filed 2026-07-01
S-01.1
Section 1(c)(1)-(4)
Plain Language
Every AI company (broadly defined to include any private entity or public agency that sells, develops, deploys, uses, or offers AI technology for sale in New Jersey) must annually conduct safety tests on all of its AI technologies. The tests must meet OIT's minimum requirements (covering cybersecurity, bias, inaccuracy, and legal compliance). This is both a testing obligation and a reporting obligation — after conducting the tests, the company must submit a report to OIT listing all technologies tested, describing each test and its adherence to OIT requirements, identifying any third parties used, and providing results. The annual cadence means this is a recurring obligation, not a one-time pre-deployment assessment.
An artificial intelligence company shall annually subject all artificial intelligence technology sold, developed, deployed, used, or offered for sale in this State to a safety test that adheres to the requirements established pursuant to subsection b. of this section and submit a report to the Office of Information Technology containing: (1) a list of all artificial intelligence technologies tested; (2) a description of each safety test conducted, including the safety test's adherence to the requirements established pursuant to subsection b. of this section; (3) a list of all third parties used to conduct safety tests, if any; and (4) the results of each safety test administered.
Pending 2025-04-27
S-01.1, S-01.4
State Tech. Law § 504(1)-(2)
Plain Language
Automated systems must undergo pre-deployment testing covering risk identification and mitigation, and must be subject to ongoing post-deployment monitoring to demonstrate continued safety and effectiveness. Testing and monitoring must be measured against the system's intended use, foreseeable misuse, and domain-specific standards. Additionally, systems must be developed with input from diverse communities, stakeholders, and domain experts to surface concerns before deployment.
1. New York residents have the right to be protected from unsafe or ineffective automated systems. These systems must be developed in collaboration with diverse communities, stakeholders, and domain experts to identify and address any potential concerns, risks, or impacts.
2. Automated systems shall undergo pre-deployment testing, risk identification and mitigation, and shall also be subjected to ongoing monitoring that demonstrates they are safe and effective based on their intended use, mitigation of unsafe outcomes including those beyond the intended use, and adherence to domain-specific standards.
Pending 2025-04-27
S-01.1
State Tech. Law § 504(3)-(4)
Plain Language
An automated system that fails to meet the safety and effectiveness requirements of § 504 must not be deployed — or if already deployed, must be removed. There is a categorical prohibition on designing systems with the intent or reasonably foreseeable possibility of endangering New York residents. Systems must also be proactively designed to protect against harms from foreseeable but unintended uses. This creates both a deployment-gating requirement and a proactive safety-by-design obligation.
3. If an automated system fails to meet the requirements of this section, it shall not be deployed or, if already in use, shall be removed. No automated system shall be designed with the intent or a reasonably foreseeable possibility of endangering the safety of any New York resident or New York communities.
4. Automated systems shall be designed to proactively protect New York residents from harm stemming from unintended, yet foreseeable, uses or impacts.
Pending 2025-04-27
S-01.3
State Tech. Law § 504(6)
Plain Language
Independent evaluations must be conducted to confirm that automated systems are safe and effective, including documentation of harm mitigation steps. Results must be made public whenever possible. The 'whenever possible' qualifier introduces ambiguity about when public disclosure is actually required, but the independent evaluation itself appears mandatory.
6. Independent evaluation and reporting that confirms that the system is safe and effective, including reporting of steps taken to mitigate potential harms, shall be performed and the results made public whenever possible.
Pending 2025-04-27
State Tech. Law § 504(5)
Plain Language
Residents must be protected from the use of inappropriate or irrelevant data in the design, development, and deployment of automated systems, including protection from the compounded harm of reusing such data across systems. This creates a data quality and relevance obligation in the safety context — data used to train and operate automated systems must be appropriate and relevant to the system's purpose.
5. New York residents are entitled to protection from inappropriate or irrelevant data use in the design, development, and deployment of automated systems, and from the compounded harm of its reuse.
Pending 2025-07-26
S-01.4, S-01.7
State Tech. Law § 517(1)-(4)
Plain Language
The Secretary periodically evaluates the source code and outcomes of each licensed high-risk AI system to determine compliance, with review frequency based on system risk, complexity, update frequency, and compliance history. After review, the Secretary issues binding recommendations for alignment with the ethical code of conduct, prohibited systems restrictions, and source code modification procedures. Operators must consult with the Secretary, provide a binding implementation plan and timeline, and may request amendments for unexpected circumstances (subject to 30-day Secretary approval). The Secretary monitors implementation and may impose fines for non-compliance. While the Secretary initiates the review, the operator has an affirmative obligation to cooperate, develop the compliance plan, and implement recommendations.
1. The secretary shall conduct periodic evaluations of the source code and outcomes associated with each high-risk advanced artificial intelligence system. These examinations shall determine whether the system is in compliance with this article. The timing and frequency of these reviews shall be determined at the secretary's discretion, taking into account the potential risk posed by the system, the complexity of the system, the frequency of updates and upgrades, the complexity of such updates and upgrades, and any previous issues of non-compliance. 2. Upon completion of the review, the secretary is empowered to make binding recommendations to the operator to ensure the system's functionality and outcomes are aligned with the principles in the advanced artificial intelligence ethical code of conduct pursuant to section five hundred twenty-nine of this article, restrictions on prohibited artificial intelligence systems pursuant to section five hundred thirty of this article, and limitations and procedures for source code modifications, updates, upgrades, and rewrites pursuant to section five hundred nineteen of this article. 3. Following receipt of the secretary's recommendations, the operator shall consult with the secretary to determine the feasibility of implementing the recommendations and the time frame in which such recommendations can be implemented to ensure full compliance with the secretary's recommendations. The operator shall provide a detailed plan outlining how the recommendations will be addressed, along with a timeline for their implementation. The detailed plan shall be binding on the operator; provided however that where an unexpected occurrence arises which causes changes to such plan, the operator shall be entitled to extend such timeline or alter such plans where such operator notifies the secretary in writing regarding the unexpected occurrence and, within such writing, sets forth amendments to the detailed plan and timeline. The secretary shall have thirty days to approve or reject such amendments. Where such amendments are rejected, the operator shall continue with their original plan and timeline. 4. The secretary shall monitor the operator's compliance with such recommendations and may impose fines and other penalties pursuant to the provisions of this article for non-compliance that the secretary shall deem just and proportionate to the violation.
Pending 2027-01-01
S-01.7
Civil Rights Law § 106(1)(a)-(b)
Plain Language
Developers and deployers must take reasonable measures to prevent and mitigate any harms identified by pre-deployment evaluations or post-deployment impact assessments. This is not merely an obligation to evaluate — it requires affirmative remediation of identified harms. Additionally, developers and deployers must ensure that independent auditors have all information necessary to conduct accurate evaluations and assessments. This creates a duty of cooperation with auditors that cannot be evaded by withholding information.
(a) take reasonable measures to prevent and mitigate any harm identified by a pre-deployment evaluation described in section one hundred three or an impact assessment described in section one hundred four of this article; (b) take reasonable measures to ensure that an independent auditor has all necessary information to complete an accurate and effective pre-deployment evaluation described in section one hundred three or an impact assessment described in section one hundred four of this article;
Pending 2027-01-01
S-01.1
Civil Rights Law § 106(2)(b)-(c)
Plain Language
Developers may not knowingly offer or license a covered algorithm for any consequential action that was not covered by the pre-deployment evaluation. Deployers may not knowingly use a covered algorithm for unevaluated consequential actions, unless the deployer assumes full developer responsibilities under the act. This effectively gates deployment to evaluated use cases — any expansion into new consequential action domains requires a new or supplemental evaluation. The deployer assumption-of-developer-responsibilities pathway provides a safety valve but at a significant compliance cost.
(b) It shall be unlawful for a developer to knowingly offer or license a covered algorithm for any consequential action other than those evaluated in the pre-deployment evaluation described in section one hundred three of this article. (c) It shall be unlawful for a deployer to knowingly use a covered algorithm for any consequential action other than a use evaluated in the pre-deployment evaluation described in section one hundred three of this article, unless the deployer agrees to assume the responsibilities of a developer required by this article.
Enacted 2025-06-03
S-01.1, S-01.5
Gen. Bus. Law § 1421(1)(a)
Plain Language
Before deploying any frontier model, the large developer must have a written safety and security protocol in place. The protocol must cover risk reduction procedures, cybersecurity protections (including against sophisticated actors), detailed testing procedures, and must designate senior personnel responsible for compliance. This is a pre-deployment prerequisite — no frontier model may be deployed without this documentation and these safeguards in place.
Before deploying a frontier model, the large developer of such frontier model shall do all of the following: (a) Implement a written safety and security protocol;
Enacted 2025-06-03
S-01.1, S-01.5
Gen. Bus. Law § 1421(1)(e)
Plain Language
Before deploying any frontier model, the large developer must implement appropriate safeguards to prevent unreasonable risk of critical harm.
Before deploying a frontier model, the large developer of such frontier model shall do all of the following: ... (e) Implement appropriate safeguards to prevent unreasonable risk of critical harm.
Pending 2025-11-01
S-01.5
63 O.S. § 5502(C)
Plain Language
Deployers must establish and maintain an ongoing Quality Assurance Program for their AI devices. This is a cross-reference to the detailed governance requirements in Section 4 (§ 5504), which specifies the components of the program including the governance group, inventory, review and selection processes, use case documentation, and continuous monitoring. This provision creates the overarching mandate; the specific requirements are mapped separately under their respective Section 4 provisions.
C. Deployers shall implement and maintain a Quality Assurance Program, as outlined in Section 4 of this act, to ensure the safe, effective, and compliant use of AI devices in patient care.
Pending 2025-11-01
S-01.4, S-01.7
63 O.S. § 5503(C)
Plain Language
Deployers must regularly evaluate AI device performance and conduct risk assessments, documenting the results. These evaluations should incorporate feedback solicited from the licensed physicians who use the devices and, where feasible, participation in national specialty society AI assessment registries. When performance concerns surface, deployers must take corrective action to mitigate patient risk. This is a continuing post-deployment obligation — not satisfied by pre-deployment testing alone.
C. Deployers of an AI device shall conduct and document regular performance evaluations and risk assessments of the device. Such evaluations and assessments should be informed by invited feedback from qualified end-users and, when applicable, participation in national specialty society-administered AI assessment registries. Whenever AI device performance concerns are identified, deployers shall implement appropriate corrective actions to mitigate risk to patients.
Pending 2025-11-01
S-01.1
63 O.S. § 5504(D)
Plain Language
Before deploying an AI device, deployers must have a diligent review and selection process in place. This is a pre-deployment evaluation requirement — deployers cannot simply adopt any AI device without first conducting due diligence on the device's suitability, safety, and effectiveness for their intended clinical use case. The statute does not specify what 'diligent review' entails, which may be further defined by State Department of Health rulemaking.
D. Deployers shall have a diligent review and selection process for the deployed AI device.
Pending 2025-11-01
S-01.4, S-01.7
63 O.S. § 5504(F)-(G)
Plain Language
Deployers must continuously monitor the performance of every deployed AI device, with specific attention to patient safety impacts and care quality. As part of this monitoring, deployers must participate in national specialty society-administered AI assessment registries when feasible. The registry participation requirement is qualified by feasibility — if no applicable registry exists for a given specialty or device, the obligation does not apply. The continuous monitoring obligation itself, however, is mandatory and ongoing.
F. Deployers shall continuously monitor the performance of all deployed AI devices, including assessing any impact on patient safety or the quality of patient care. G. In conducting performance monitoring described in subsection F of this section, deployers must participate in national specialty society-administered artificial intelligence assessment registries when feasible.
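A minimal sketch of what continuous performance monitoring with a corrective-action trigger might look like. The rolling window size, the agreement-rate metric, and the 0.90 threshold are illustrative assumptions; the bill mandates monitoring and corrective action but prescribes no method.

```python
# Illustrative monitoring loop: the bill mandates continuous performance
# monitoring with corrective action, but no method. The rolling window,
# agreement-rate metric, and 0.90 threshold here are assumptions.
from collections import deque

WINDOW = 500        # recent cases to evaluate over (assumed)
THRESHOLD = 0.90    # minimum acceptable agreement rate (assumed)

recent: deque[bool] = deque(maxlen=WINDOW)

def record_case(ai_output: bool, clinician_finding: bool) -> None:
    """Record whether the AI device agreed with the eventual clinical
    finding, and trigger corrective action if the rolling rate drops."""
    recent.append(ai_output == clinician_finding)
    if len(recent) == WINDOW and sum(recent) / WINDOW < THRESHOLD:
        trigger_corrective_action(sum(recent) / WINDOW)

def trigger_corrective_action(rate: float) -> None:
    """Stand-in for the deployer's documented remediation process
    (e.g., suspend or recalibrate the device, notify governance)."""
    print(f"ALERT: rolling agreement {rate:.1%} below {THRESHOLD:.0%}")
```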
Pending 2026-10-06
S-01.7
35 Pa.C.S. § 3503(b)(5)
Plain Language
Facilities must periodically review the performance, use, and outcomes of their AI-based algorithms used in clinical decision making, and revise them as needed to maximize accuracy and reliability. This is an ongoing operational obligation — not a one-time pre-deployment check. The statute does not specify the review frequency, leaving that to department regulations or guidance.
(5) The performance, use and outcomes of the artificial intelligence-based algorithms must be periodically reviewed and revised to maximize accuracy and reliability.
Pending 2026-10-06
S-01.7
40 Pa.C.S. § 5203(b)(7)
Plain Language
Insurers must periodically review the performance, use, and outcomes of AI algorithms used in utilization review, and revise them as needed to maximize accuracy and reliability. This is a continuing obligation requiring ongoing monitoring and improvement, not a one-time assessment.
(7) The performance, use and outcomes of the artificial intelligence-based algorithms must be periodically reviewed and revised to maximize accuracy and reliability.
Pending 2026-10-06
S-01.7
40 Pa.C.S. § 5303(b)(7)
Plain Language
MA/CHIP managed care plans must periodically review and revise their AI algorithms used in utilization review to maximize accuracy and reliability. This is an ongoing operational review obligation, not a one-time assessment.
(7) The performance, use and outcomes of the artificial intelligence-based algorithms must be periodically reviewed and revised to maximize accuracy and reliability.
Pending 2026-04-01
S-01.1, S-01.4, S-01.5, S-01.7
12 Pa.C.S. § 7105(c)(4)(i)-(xi)
Plain Language
Suppliers must disclose in their written policy the specific procedures they use to ensure chatbot safety, covering eleven enumerated topics: pre-launch and ongoing testing ensuring outputs pose no greater risk than human interaction; identification of foreseeable adverse outcomes and harmful interactions; a mechanism for consumers to report harmful interactions; protocols for assessing and responding to risk of harm; actions taken to prevent or mitigate adverse outcomes; protocols for responding promptly to acute physical harm risks; regular objective reviews of safety, accuracy, and efficacy (including possible audits); instructions for safe use of the chatbot; prioritization of consumer mental health and safety over engagement metrics or profit; measures to prevent discriminatory treatment; and compliance with HIPAA privacy and security rules as if the supplier were a covered entity. This is both a disclosure obligation (requiring these procedures to be described in the policy) and a substantive safety obligation (requiring the procedures to actually exist and be implemented, per § 7105(g)).
(4) The procedures by which the supplier: (i) Conducts testing, prior to making the chatbot publicly available and regularly thereafter, to ensure that the output of the chatbot poses no greater risk to a consumer than that posed to an individual communicating with a human. (ii) Identifies reasonably foreseeable adverse outcomes to, and potentially harmful interactions with, consumers that could result from using the chatbot. (iii) Provides a mechanism for a consumer to report any potentially harmful interactions from the use of the chatbot. (iv) Implements protocols to assess and respond to risk of harm to consumers or other individuals. (v) Details actions taken to prevent or mitigate any adverse outcomes or potentially harmful interactions. (vi) Implements protocols to respond, as soon as practicable, to acute risks of physical harm. (vii) Reasonably ensures regular, objective reviews of safety, accuracy and efficacy, which may include internal or external audits. (viii) Provides consumers with instructions on the safe use of the chatbot. (ix) Prioritizes consumer mental health and safety over engagement metrics or profit. (x) Implements measures to prevent discriminatory treatment of consumers. (xi) Ensures compliance with the security and privacy provisions of 45 CFR Pts. 160 (relating to general administrative requirements) and 164 (relating to security and privacy), as if the supplier were a covered entity.
Pending 2027-01-09
S-01.7
35 Pa.C.S. § 3503(b)(5)
Plain Language
Facilities must periodically review and revise the performance, use, and outcomes of their AI-based algorithms to maximize accuracy and reliability. This is an ongoing operational obligation — not a one-time pre-deployment check — requiring continuous monitoring and improvement of AI tools used in clinical decision making.
(5) The performance, use and outcomes of the artificial-intelligence-based algorithms must be periodically reviewed and revised to maximize accuracy and reliability.
Pending 2027-01-09
S-01.7
40 Pa.C.S. § 5203(b)(7)
Plain Language
Insurers must periodically review and revise the performance, use, and outcomes of their AI-based algorithms used in utilization review to maximize accuracy and reliability. This mirrors the facility obligation under Chapter 35.
(7) The performance, use and outcomes of the artificial-intelligence-based algorithms must be periodically reviewed and revised to maximize accuracy and reliability.
Pending 2027-01-09
S-01.7
40 Pa.C.S. § 5303(b)(7)
Plain Language
MA or CHIP managed care plans must periodically review and revise the performance, use, and outcomes of AI-based algorithms used in utilization review to maximize accuracy and reliability.
(7) The performance, use and outcomes of the artificial-intelligence-based algorithms must be periodically reviewed and revised to maximize accuracy and reliability.
Pending 2027-01-09
S-01.1
35 Pa.C.S. § 3503(b)(7)
Plain Language
AI-based algorithms used by facilities for clinical decision making must not create foreseeable, material risks of harm to patients. This is a substantive safety standard — facilities must ensure their AI tools do not expose patients to predictable, significant risks of harm. Compliance likely requires pre-deployment and ongoing safety evaluation.
(7) The artificial-intelligence-based algorithms must not create foreseeable, material risks of harm to the patient.
Pending 2027-01-09
S-01.1
40 Pa.C.S. § 5203(b)(9)
Plain Language
Insurers' AI-based algorithms used in utilization review must not create foreseeable, material risks of harm to covered persons. This is a substantive safety standard paralleling the facility requirement.
(9) The artificial-intelligence-based algorithms must not create foreseeable, material risks of harm to the covered person.
Pending 2027-01-09
S-01.1
40 Pa.C.S. § 5303(b)(9)
Plain Language
MA or CHIP managed care plans' AI-based algorithms used in utilization review must not create foreseeable, material risks of harm to enrollees.
(9) The artificial-intelligence-based algorithms must not create foreseeable, material risks of harm to the enrollee.
Pending
S-01.4, S-01.7
S.C. Code § 39-80-30(C)
Plain Language
Chatbot providers must conduct monthly safety evaluations of their chatbot for potential risk of harm to users, publish information about their chatbot on their website monthly, and mitigate any identified risks. The specific scope of these evaluations and the form of the public disclosures will be defined by Attorney General rulemaking. This creates a rolling monthly cycle of evaluation, disclosure, and mitigation — significantly more frequent than annual review requirements in other jurisdictions.
(C) In compliance with the rules adopted and the regulations promulgated by the Attorney General pursuant to Section 39-80-40, a chatbot provider shall: (1) on a monthly basis: (a) evaluate its chatbot for potential risk of harm to users; and (b) make information about its chatbot publicly available on its website; and (2) mitigate any risk of harm to users.
Pending
S-01.4, S-01.7
S.C. Code § 39-80-30(C)
Plain Language
Chatbot providers must conduct monthly evaluations of their chatbot for potential risk of harm to users, publish information about the chatbot on their website on a monthly basis, and mitigate any identified risks. The specific evaluation methodology and risk categories will be further defined by AG regulations (Section 39-80-40). The monthly cadence is notably frequent compared to most AI safety evaluation requirements. The mitigation obligation is ongoing and not limited to specific risk categories — any identified risk of harm must be addressed.
(C) In compliance with the rules adopted and the regulations promulgated by the Attorney General pursuant to Section 39-80-40, a chatbot provider shall: (1) on a monthly basis: (a) evaluate its chatbot for potential risk of harm to users; and (b) make information about its chatbot publicly available on its website; and (2) mitigate any risk of harm to users.
Enacted 2024-05-01
S-01.5
Utah Code § 13-70-303(1)
Plain Language
To qualify for regulatory mitigation (reduced enforcement terms) within the Learning Laboratory, a participant must affirmatively demonstrate to the Office: technical capability, sufficient financial resources, that the AI technology's consumer benefits potentially outweigh risks from relaxed enforcement, an effective risk monitoring and minimization plan, and that the proposed testing is appropriately scoped and limited based on risk assessments. These are eligibility prerequisites — the Office evaluates them before granting any mitigation agreement.
To be eligible for regulatory mitigation, a participant shall demonstrate to the office that: (a) the participant has the technical expertise and capability to responsibly develop and test the proposed artificial intelligence technology; (b) the participant has sufficient financial resources to meet obligations during testing; (c) the artificial intelligence technology provides potential substantial consumer benefits that may outweigh identified risks from mitigated enforcement of regulations; (d) the participant has an effective plan to monitor and minimize identified risks from testing; and (e) the scale, scope, and duration of proposed testing is appropriately limited based on risk assessments.
Pre-filed 2025-07-01
S-01.1, S-01.5
9 V.S.A. § 4193g(a)
Plain Language
Developers may not place an inherently dangerous AI system in commerce unless they have first conducted documented testing, evaluation, verification, and validation at least as stringent as the latest NIST AI Risk Management Framework. For any AI system creating reasonably foreseeable risks of harm under § 4193f, the developer must mitigate those risks to the extent possible, consider alternatives, and disclose vulnerabilities and mitigation tactics to downstream deployers. This is a pre-distribution gate — developers cannot release the product without completing NIST-level safety evaluation and documentation, and must affirmatively disclose residual risks to deployers.
(a) No developer shall offer, sell, lease, give, or otherwise place in the stream of commerce: (1) an inherently dangerous artificial intelligence system, unless the developer has conducted a documented testing, evaluation, verification, and validation of that system at least as stringent as the latest version of the Artificial Intelligence Risk Management Framework published by the National Institute of Standards and Technology (NIST); or (2) an artificial intelligence system that creates reasonably foreseeable risks pursuant to section 4193f of this subchapter, unless the developer mitigates these risks to the extent possible, considers alternatives, and discloses vulnerabilities and mitigation tactics to a deployer.
Pre-filed 2025-07-01
S-01.5
9 V.S.A. § 4193f(a)
Plain Language
Developers and deployers of inherently dangerous AI systems that could reasonably be expected to impact Vermont consumers must exercise reasonable care to avoid foreseeable risks across nine categories of harm: criminal conduct, unfair or deceptive treatment, physical/financial/relational/reputational injury, highly offensive psychological injuries, privacy intrusion, intellectual property violations, discrimination across a broad enumeration of protected characteristics, behavioral distortion causing harm, and exploitation of vulnerable groups (by age or disability) to distort behavior harmfully. This is a general negligence-style standard of care with an enumerated list of harm categories — it functions as the statute's core safety obligation, defining the harms that developers and deployers must affirmatively work to prevent. Compliance with the subchapter creates a rebuttable presumption that the standard of care was met (per § 4193i(a)).
(a) Each developer or deployer of any inherently dangerous artificial intelligence system that could be reasonably expected to impact consumers shall exercise reasonable care to avoid any reasonably foreseeable risk arising out of the development of, intentional and substantial modification to, or deployment of an artificial intelligence system that causes or is likely to cause: (1) the commission of a crime or unlawful act; (2) any unfair or deceptive treatment of or unlawful impact on an individual; (3) any physical, financial, relational, or reputational injury on an individual; (4) psychological injuries that would be highly offensive to a reasonable person; (5) any physical or other intrusion upon the solitude or seclusion, or the private affairs or concerns of a person, if the intrusion would be offensive to a reasonable person; (6) any violation to the intellectual property rights of persons under applicable State and federal laws; (7) discrimination on the basis of a person's or class of persons' actual or perceived race, color, ethnicity, sex, sexual orientation, gender identity, sex characteristics, religion, national origin, familial status, biometric information, or disability status; (8) distortion of a person's behavior in a manner that causes or is likely to cause that person or another person physical or psychological harm; or (9) the exploitation of the vulnerabilities of a specific group of persons due to their age or physical or mental disability in order to materially distort the behavior of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm.
Pre-filed 2026-07-01
S-01.7
9 V.S.A. § 4193c(c)
Plain Language
Chatbot providers must assess their chatbots for risks of harm to users on a monthly basis, using metrics defined by Attorney General rulemaking, and must actively mitigate any identified risks. This is an ongoing operational obligation — not a one-time pre-deployment assessment. The monthly cadence is notably more frequent than the annual reviews required by most other state AI safety frameworks. The specific risk categories and assessment metrics will be determined by AG rulemaking, so the full scope of the obligation will depend on those rules.
(c) Risk assessment. A chatbot provider shall on a monthly basis, according to metrics as set forth in rules adopted by the Attorney General pursuant to this subchapter, assess its chatbot for risks of harm to users and actively mitigate any risks of harm.
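Because the assessment metrics await Attorney General rulemaking, any implementation today is speculative. A minimal harness sketch follows, with a single hypothetical metric standing in for whatever the rules eventually require; the metric name, the stand-in classifier, and the JSON record format are all placeholders.

```python
# Monthly risk-assessment harness sketch. The statute defers metrics to
# Attorney General rulemaking, so the single metric below is a hypothetical
# placeholder, not a statutory requirement.
import json
from datetime import date

def flagged_response_rate(transcripts: list[str]) -> float:
    """Hypothetical metric: share of sampled transcripts a (stand-in)
    harm classifier flags."""
    flagged = sum(1 for t in transcripts if "FLAGGED" in t)
    return flagged / max(len(transcripts), 1)

METRICS = {"flagged_response_rate": flagged_response_rate}

def run_monthly_assessment(transcripts: list[str]) -> dict:
    """Run each registered metric and persist a dated assessment record;
    mitigating any identified risk is a separate, mandatory follow-up."""
    report = {
        "assessment_date": date.today().isoformat(),
        "results": {name: fn(transcripts) for name, fn in METRICS.items()},
    }
    with open(f"risk-assessment-{report['assessment_date']}.json", "w") as fh:
        json.dump(report, fh, indent=2)
    return report
```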