S-01
Safety & Prohibited Conduct
AI System Safety Program
Applies to: Developer, Deployer, Professional, Government Sector, Foundation Model, Healthcare, Government System
Bills — Enacted: 2 unique bills
Bills — Proposed: 25
Last Updated: 2026-03-29
Core Obligation

Developers and deployers of high-risk AI systems must conduct documented safety evaluations before deployment, conduct adversarial testing to identify misuse potential and failure modes, and maintain ongoing safety controls. Pre-deployment evaluation does not permanently satisfy this obligation — post-deployment monitoring and re-evaluation are continuing requirements.

Sub-Obligations (6 sub-obligations)

S-01.1 Internal pre-deployment safety evaluation: A documented safety evaluation covering the system's behavior across intended use cases and reasonably foreseeable misuse cases must be conducted and retained before deployment. Must identify failure modes and the harms they could cause. (1 enacted, 13 proposed)
S-01.2 Red-teaming and adversarial testing: Structured adversarial testing must be conducted to identify the system's potential for misuse, harmful output elicitation, jailbreaking, and dangerous capability expression. Covers both internal and, for frontier models, independent external red-teaming. (0 enacted, 0 proposed)
S-01.3 Third-party safety evaluation: For frontier or high-capability models, independent external safety evaluation by a qualified third party is required or strongly expected. The third party must have meaningful model access and freedom to probe without restriction. (0 enacted, 1 proposed)
S-01.4 Post-deployment monitoring and re-evaluation: Deployed AI systems must be monitored for drift, unexpected behavior, and safety incidents. Material model updates and safety incidents trigger re-evaluation obligations. (0 enacted, 14 proposed)
S-01.5 Ongoing risk management program: A formal documented AI risk management program must be established and maintained, covering risk identification, assessment criteria, mitigation strategies, and escalation procedures. The NIST AI RMF is commonly cited as a safe harbor framework. (2 enacted, 11 proposed)
S-01.7 Continuous post-deployment quality assurance: Deployed AI tools must be subject to periodic performance review and revision to maximize accuracy, reliability, and safety on an ongoing operational basis, distinct from pre-deployment testing or incident response. (0 enacted, 12 proposed)
Bills That Map This Requirement (27 bills)
Bill · Status · Sub-Obligations · Section
Pending 2026-01-01
S-01.4, S-01.7
A.R.S. § 44-1383.02(C)
Plain Language
Chatbot providers must, on a monthly basis: (1) evaluate their chatbot for potential risk of harm to users, and (2) publish information about the chatbot on their website. Providers must also mitigate any identified risks of harm. The specifics of what constitutes 'risk of harm' and what risk-reduction requirements apply will be defined by Attorney General rulemaking. This is a continuing, monthly operational obligation — significantly more frequent than the typical annual review cadence in other jurisdictions.
In compliance with the rules adopted by the attorney general pursuant to section 44-1383.03, a chatbot provider shall: 1. On a monthly basis: (a) Evaluate its chatbot for potential risk of harm to users. (b) Make information about its chatbot publicly available on its website. 2. Mitigate any risk of harm to users.
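As an illustration of the cadence this creates, the sketch below models one monthly compliance cycle in Python: evaluate, publish, and record a mitigation for each identified risk. It is a minimal sketch; the class, field, and method names are our assumptions, not terms from the statute or the forthcoming Attorney General rules.

```python
from dataclasses import dataclass, field


@dataclass
class MonthlyChatbotReview:
    """One monthly cycle under A.R.S. § 44-1383.02(C): evaluate, publish, mitigate."""
    period: str                                    # e.g. "2026-01"
    evaluation_completed: bool = False             # monthly risk-of-harm evaluation ran
    published_url: str | None = None               # where chatbot information was posted
    risks_identified: list[str] = field(default_factory=list)
    mitigations: dict[str, str] = field(default_factory=dict)  # risk -> action taken

    def cycle_complete(self) -> bool:
        # Complete only if the evaluation ran, the public disclosure was made,
        # and every identified risk has a recorded mitigation.
        return (
            self.evaluation_completed
            and self.published_url is not None
            and all(r in self.mitigations for r in self.risks_identified)
        )
```

Under this sketch, a provider would open a new record each calendar month and treat any record that is still incomplete at month end as a compliance gap.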
Pending 2027-07-01
S-01.1, S-01.5
Bus. & Prof. Code § 22612(a)-(b)
Plain Language
Operators must conduct and document a comprehensive child safety risk assessment annually, beginning by July 1, 2027. The assessment must evaluate the likelihood of covered harms, differential risks across age groups and developmental stages, known child vulnerabilities, empirical usage data, and relevant academic and regulatory guidance. Operators must then take and document reasonable mitigation measures for every risk identified. This is not a one-time exercise — it is an annual recurring obligation that must incorporate empirical data from actual deployment.
(a) Annually perform and document a comprehensive risk assessment to identify any child safety risk posed by the design, configuration, and operation of the companion chatbot that assesses all of the following: (1) The likelihood of a covered harm occurring to users. (2) Differential risks across age groups and developmental stages. (3) Known vulnerabilities of children. (4) Empirical data from actual use. (5) Relevant academic research and regulatory guidance. (b) Take and document measures that reasonably mitigate any child safety risk identified in a risk assessment conducted pursuant to subdivision (a).
Pending 2028-07-01
S-01.4, S-01.7
HRS § 321-__ (Monitoring; performance evaluation; record keeping)(1)-(3)
Plain Language
Health care providers using AI in consequential decisions must: (1) monitor AI system usage on an ongoing basis; (2) conduct regular performance evaluations covering potential biases, patient safety and rights risks (including data confidentiality), and mitigation strategies for identified risks; and (3) implement procedures to remediate deficiencies found through monitoring or evaluations, including suspension or recalibration of AI systems as needed. The frequency of 'regular' performance evaluations will be specified by Department of Health rules. This is a continuing operational obligation — not a one-time pre-deployment assessment.
Any health care provider that uses an artificial intelligence system to make, or be a substantial factor in making, a consequential decision shall:
(1) Monitor the usage of artificial intelligence systems to make, or be a substantial factor in making, consequential decisions;
(2) Conduct regular performance evaluations of the artificial intelligence systems, including the assessment of:
(A) Potential biases;
(B) Risks to the safety and rights of patients, including the confidentiality of personal data; and
(C) Mitigation strategies for any identified risks;
(3) Implement procedures to address any deficiencies identified through the monitoring or performance evaluations, including the suspension or recalibration of any artificial intelligence system;
Pre-filed 2025-07-01
S-01.4, S-01.5
§ 554J.2(1)
Plain Language
Every deployer of a chatbot must establish, implement, and continuously maintain protocols designed to detect potential harms the chatbot may cause users, respond to those harms, report on them, and mitigate them. The statute expressly requires that these protocols prioritize user safety and well-being over the deployer's own commercial or operational interests. This is a continuing operational obligation — not a one-time pre-launch exercise.
A deployer of a chatbot shall do all of the following: 1. Implement and maintain protocols meant to detect, respond to, report, and mitigate harm the chatbot may cause a user in a manner that prioritizes the safety and well-being of users over the deployer's interests.
Pending 2025-07-01
S-01.4, S-01.5
§ 554J.2(1)(a)
Plain Language
Deployers of public-facing chatbots must implement and maintain protocols to detect, respond to, report, and mitigate harms the chatbot may cause users. The standard is commercially reasonable — proportionate to the deployer's size, resources, and technical capabilities and consistent with prevailing industry standards. This is a continuing operational obligation, not a one-time pre-launch check. A deployer making commercially reasonable efforts to comply is protected from liability for unforeseeable or emergent outputs under the safe harbor provision in § 554J.5.
A deployer of a public-facing chatbot shall do all of the following: a. Implement and maintain protocols meant to detect, respond to, report, and mitigate harm the public-facing chatbot may cause a user in a manner that takes commercially reasonable steps to protect the safety and well-being of users.
Pending 2025-07-01
S-01.1
§ 554J.3(3)
Plain Language
Deployers may not make a therapeutic chatbot available to a minor unless all five conditions are satisfied: (1) a licensed psychologist (chapter 154B) or mental health professional (chapter 154D) recommended the chatbot for the specific minor after evaluation; (2) the developer has significant testing documentation; (3) peer-reviewed clinical trial data demonstrates the chatbot is safe and effective for the minor's mental health condition; (4) the deployer provided clear disclosures of the chatbot's functions, limitations, and data privacy policies to both the recommending professional and the minor's parents, guardians, or custodians; and (5) the deployer developed and implemented protocols for testing the chatbot for risks to users, identifying risks, mitigating risks, and quickly rectifying harm. This is an extraordinarily high bar — effectively requiring FDA-style clinical evidence and a licensed professional's individualized recommendation before a minor can access a therapeutic chatbot.
3. A deployer shall only make a therapeutic chatbot available for a minor's use or purchase if all of the following apply: a. The therapeutic chatbot was recommended for the minor's use by an individual licensed under chapter 154B or 154D after performing an evaluation of the minor. b. The therapeutic chatbot's developer has significant documentation of how the public-facing chatbot was tested. c. Peer-reviewed clinical trial data exists demonstrating the therapeutic chatbot would be a safe, effective tool for the minor's diagnosis, treatment, mitigation, or prevention of a mental health condition. d. The therapeutic chatbot's deployer provided clear disclosures of the therapeutic chatbot's functions, limitations, and data privacy policies to the individual recommending the therapeutic chatbot under paragraph "a", and to the minor's parents, guardians, or custodians. e. The therapeutic chatbot's deployer developed and implemented protocols for testing the therapeutic chatbot for risks to users, identifying possible risks the therapeutic chatbot poses to users, mitigating risks the therapeutic chatbot poses to users, and quickly rectifying harm the therapeutic chatbot may have caused a user.
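Because all five conditions are conjunctive, the availability decision reduces to an all-or-nothing gate. The sketch below is a hypothetical way to record the evidence and enforce that gate; the field names paraphrase paragraphs (a) through (e) and are not statutory terms.

```python
from dataclasses import dataclass


@dataclass
class TherapeuticChatbotMinorGate:
    """Evidence for the five § 554J.3(3) conditions (paraphrased field names)."""
    professional_recommendation: bool   # (a) licensed professional recommended it after evaluating the minor
    testing_documentation: bool         # (b) developer has significant documentation of testing
    clinical_trial_evidence: bool       # (c) peer-reviewed data showing safety and effectiveness for the condition
    disclosures_made: bool              # (d) functions, limitations, privacy policies disclosed to professional and parents
    risk_protocols_implemented: bool    # (e) testing, risk identification, mitigation, rapid rectification protocols


def may_offer_to_minor(evidence: TherapeuticChatbotMinorGate) -> bool:
    # Conjunctive gate: any single missing condition blocks availability to the minor.
    return all(vars(evidence).values())
```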
Pending 2026-07-01
S-01.4, S-01.5
§ 554J.2(1)
Plain Language
Deployers must implement and maintain ongoing protocols for detecting, responding to, reporting, and mitigating harms their chatbot may cause users. The protocols must prioritize user safety and well-being over the deployer's commercial or business interests. This is a continuing operational obligation — not a one-time pre-launch check. The statute does not specify to whom harm must be reported, leaving that to the deployer's protocol design.
A deployer of a chatbot shall do all of the following: 1. Implement and maintain protocols meant to detect, respond to, report, and mitigate harm the chatbot may cause a user in a manner that prioritizes the safety and well-being of users over the deployer's interests.
Pending 2026-07-01
S-01.5
IC 22-5-10.4-10(2)(A)(iv)
Plain Language
As a precondition to using automated decision system output in any employment decision, the employer must validate the system's compliance with the NIST AI Risk Management Framework (version released January 26, 2023) or any successor framework. This is a mandatory compliance requirement, not a safe harbor — the NIST AI RMF is referenced as a required baseline rather than an optional benchmark. Employers must be prepared to demonstrate this compliance as part of their predeployment testing documentation.
the compliance of the system with the Artificial Intelligence Risk Management Framework released by the National Institute of Standards and Technology on January 26, 2023, or a successor framework;
Pending 2026-10-01
S-01.7
Insurance § 15–10A–06(a)(3)
Plain Language
If more than a Commissioner-determined percentage of a carrier's adverse decisions made using the same AI, algorithm, or software tool result in grievances within any six-month period, the carrier must conduct a model review of that AI tool and submit the findings in its quarterly report. The specific grievance-rate threshold that triggers this obligation will be set by the Commissioner — the statute delegates that threshold determination. This creates a performance-triggered audit requirement: carriers must monitor grievance rates per AI tool and initiate a formal review process when the threshold is exceeded. The review findings must be documented and submitted to the Commissioner alongside the regular quarterly reporting.
(3) IF, WITHIN A 6–MONTH PERIOD, MORE THAN A SPECIFIED PERCENTAGE, AS DETERMINED BY THE COMMISSIONER, OF A CARRIER'S ADVERSE DECISIONS MADE USING THE SAME ARTIFICIAL INTELLIGENCE, ALGORITHM, OR SOFTWARE TOOL RESULT IN A GRIEVANCE, THE CARRIER SHALL PROVIDE FOR A MODEL REVIEW PROCESS OF THE ARTIFICIAL INTELLIGENCE, ALGORITHM, OR SOFTWARE TOOL AND SUBMIT THE FINDINGS IN THE REPORT REQUIRED UNDER PARAGRAPH (1) OF THIS SUBSECTION.
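The trigger is arithmetic: grievances attributable to a given tool, as a share of that tool's adverse decisions over a trailing six-month window, measured against a Commissioner-set percentage. Below is a hedged sketch of that check; the threshold input and function names are placeholders, not regulatory figures.

```python
from datetime import date, timedelta


def model_review_required(
    adverse_decisions: list[date],   # dates of adverse decisions made with one AI tool
    grievances: list[date],          # dates of grievances tied to those decisions
    threshold_pct: float,            # Commissioner-determined percentage (placeholder input)
    window_end: date,
) -> bool:
    """True if grievances exceed the specified share of the tool's adverse
    decisions within the trailing 6-month window ending at window_end."""
    window_start = window_end - timedelta(days=182)  # approximate six months
    in_window = lambda d: window_start <= d <= window_end
    decisions = sum(1 for d in adverse_decisions if in_window(d))
    grieved = sum(1 for d in grievances if in_window(d))
    if decisions == 0:
        return False
    return (grieved / decisions) * 100 > threshold_pct
```

A carrier exceeding the threshold for a tool would then document the model review and attach the findings to the next quarterly report.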
Pending 2026-01-01
S-01.1, S-01.7
G.S. § 114B-4(d)
Plain Language
Licensed health information chatbot operators must demonstrate their chatbot's effectiveness through peer-reviewed controlled trials with adequate sample sizes and real-world performance data, comparative analysis against human expert performance, and compliance with minimum domain benchmarks set by the Department. This is an ongoing operational requirement — licensees must continue to meet these standards, not merely demonstrate them at the time of application. The requirement for peer-reviewed trials is notably more rigorous than typical AI safety evaluation requirements in other jurisdictions.
(d) A licensees shall do all of the following: (1) Demonstrate effectiveness through peer-reviewed, controlled trials with appropriate validation studies done on appropriate sample sizes with real-world performance data. (2) Demonstrate effectiveness in a comparative analysis to human expert performance. (3) Meet minimum domain benchmarks as established by the Department.
Pre-filed 2026-07-01
S-01.1, S-01.5
Section 1(b)(1)(a)-(c)
Plain Language
The Office of Information Technology must establish minimum requirements for AI safety tests that all AI companies must follow. The required safety test must include at minimum: analysis of cybersecurity threats and vulnerabilities, analysis of data sources for bias, inaccuracies, and potential legal violations (criminal, copyright, patent, or trade secret), and descriptions of remedial or defensive measures the company can take to address identified issues. This provision obligates the government agency to create the testing framework; the corresponding company obligation to actually conduct the tests is in subsection c.
The Office of Information Technology shall: (1) establish minimum requirements for an artificial intelligence safety test for artificial intelligence technology sold, developed, deployed, used, or offered for sale in this State that is conducted by an artificial intelligence company pursuant to subsection c. of this section, which requirements shall include but not be limited to: (a) an analysis of potential cybersecurity threats and vulnerabilities; (b) an analysis of an artificial intelligence technology's data sources and potential sources of bias, incorrect or inaccurate information, or violations of State or federal criminal, copyright, patent, or trade secret laws; and (c) descriptions of possible remedies or defensive measures that can be taken by the artificial intelligence company to address all potential cybersecurity threats and vulnerabilities, potential sources of bias, incorrect or inaccurate information, or potential violations of State or federal criminal, copyright, patent, or trade secret laws identified during the conducting of the safety test
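Once OIT defines the framework, a company's safety test report must contain at least the three components in (a) through (c). The sketch below is a minimal completeness check; the section keys are invented labels for those components, not OIT terminology.

```python
# Invented keys standing in for the three minimum components in Section 1(b)(1)(a)-(c).
REQUIRED_SECTIONS = {
    "cybersecurity_threats",   # (a) potential threats and vulnerabilities
    "data_source_analysis",    # (b) bias, inaccuracies, criminal/copyright/patent/trade-secret issues
    "remedial_measures",       # (c) remedies or defenses for everything identified in (a) and (b)
}


def report_meets_minimums(report: dict[str, str]) -> bool:
    """A section counts only if it is present and non-empty."""
    return all(report.get(section, "").strip() for section in REQUIRED_SECTIONS)
```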
Pre-filed 2026-07-01
S-01.4
Section 1(b)(2)
Plain Language
The Office of Information Technology is required to review every annual safety test report submitted by AI companies. This creates a regulatory review obligation on OIT, ensuring that submitted reports are not merely filed but actually examined. The bill does not specify what actions OIT must take if a report reveals deficiencies or what standard of review applies.
The Office of Information Technology shall: (2) review each annual report required to be submitted by an artificial intelligence company pursuant to subsection c. of this section.
Pending 2025-04-27
S-01.1, S-01.4
State Tech. Law § 504(1)-(2)
Plain Language
Automated systems that meaningfully impact New York residents must undergo pre-deployment testing, risk identification, and risk mitigation before going live. Systems must also be subjected to ongoing monitoring post-deployment to demonstrate continued safety and effectiveness based on intended use, foreseeable misuse, and domain-specific standards. Development must include collaboration with diverse communities and domain experts. The obligations apply broadly to any computational system affecting New York residents, excluding only passive computing infrastructure.
1. New York residents have the right to be protected from unsafe or ineffective automated systems. These systems must be developed in collaboration with diverse communities, stakeholders, and domain experts to identify and address any potential concerns, risks, or impacts.
2. Automated systems shall undergo pre-deployment testing, risk identification and mitigation, and shall also be subjected to ongoing monitoring that demonstrates they are safe and effective based on their intended use, mitigation of unsafe outcomes including those beyond the intended use, and adherence to domain-specific standards.
Pending 2025-04-27
S-01.1
State Tech. Law § 504(3)-(4)
Plain Language
Automated systems that fail safety and effectiveness requirements must not be deployed — and if already deployed, must be pulled from service. No system may be designed with the intent or reasonably foreseeable possibility of endangering the safety of New York residents. Systems must also be affirmatively designed to protect against foreseeable harms even from unintended uses. This effectively creates a deployment-gating obligation and a continuing removal obligation.
3. If an automated system fails to meet the requirements of this section, it shall not be deployed or, if already in use, shall be removed. No automated system shall be designed with the intent or a reasonably foreseeable possibility of endangering the safety of any New York resident or New York communities.
4. Automated systems shall be designed to proactively protect New York residents from harm stemming from unintended, yet foreseeable, uses or impacts.
Pending 2025-04-27
S-01.3
State Tech. Law § 504(6)
Plain Language
Independent evaluations must be conducted to confirm that automated systems are safe and effective, including documentation of harm mitigation steps. Results must be made public 'whenever possible.' The qualifier 'whenever possible' introduces ambiguity about when public disclosure is actually required — it appears to contemplate exceptions but does not define them.
6. Independent evaluation and reporting that confirms that the system is safe and effective, including reporting of steps taken to mitigate potential harms, shall be performed and the results made public whenever possible.
Pending 2025-07-26
S-01.4, S-01.7
State Tech. Law § 517(1)-(4)
Plain Language
The Secretary conducts periodic source code and outcome reviews of each licensed high-risk AI system, at a frequency determined by system risk, complexity, update frequency, and compliance history. The Secretary issues binding recommendations based on these reviews. Operators must then consult with the Secretary, produce a binding detailed implementation plan with a timeline, and execute it. Plan amendments are permitted only for unexpected occurrences and require Secretary approval within 30 days. Non-compliance with recommendations triggers fines and penalties. This creates an ongoing government-supervised safety review cycle — not a one-time pre-deployment check.
§ 517. Source code and outcome review. 1. The secretary shall conduct periodic evaluations of the source code and outcomes associated with each high-risk advanced artificial intelligence system. These examinations shall determine whether the system is in compliance with this article. The timing and frequency of these reviews shall be determined at the secretary's discretion, taking into account the potential risk posed by the system, the complexity of the system, the frequency of updates and upgrades, the complexity of such updates and upgrades, and any previous issues of non-compliance. 2. Upon completion of the review, the secretary is empowered to make binding recommendations to the operator to ensure the system's functionality and outcomes are aligned with the principles in the advanced artificial intelligence ethical code of conduct pursuant to section five hundred twenty-nine of this article, restrictions on prohibited artificial intelligence systems pursuant to section five hundred thirty of this article, and limitations and procedures for source code modifications, updates, upgrades, and rewrites pursuant to section five hundred nineteen of this article. 3. Following receipt of the secretary's recommendations, the operator shall consult with the secretary to determine the feasibility of implementing the recommendations and the time frame in which such recommendations can be implemented to ensure full compliance with the secretary's recommendations. The operator shall provide a detailed plan outlining how the recommendations will be addressed, along with a timeline for their implementation. The detailed plan shall be binding on the operator; provided however that where an unexpected occurrence arises which causes changes to such plan, the operator shall be entitled to extend such timeline or alter such plans where such operator notifies the secretary in writing regarding the unexpected occurrence and, within such writing, sets forth amendments to the detailed plan and timeline. The secretary shall have thirty days to approve or reject such amendments. Where such amendments are rejected, the operator shall continue with their original plan and timeline. 4. The secretary shall monitor the operator's compliance with such recommendations and may impose fines and other penalties pursuant to the provisions of this article for non-compliance that the secretary shall deem just and proportionate to the violation.
Pending 2025-07-26
S-01.1
State Tech. Law § 518(1)-(5)
Plain Language
Developers of high-risk AI systems — whether licensed or not — may not willfully or negligently allow their source code to become uncontained (i.e., reproduced so widely it becomes impossible to control). Written Secretary authorization is required for any intentional release that could lead to uncontainment. Criminal penalties attach to individuals: class E felony for willful uncontainment, class A misdemeanor for negligent uncontainment, and class C felony for uncontaining financial systems or prohibited AI systems. The knowledge defense protects individuals who had no explicit or implicit awareness of the risk. This effectively creates a containment obligation for high-risk AI source code.
§ 518. Willfully or negligently uncontaining high-risk source code. 1. No licensee or non-licensee who develops a high-risk advanced artificial intelligence system shall willfully or negligently uncontain their source code except where authorized by the secretary in writing. 2. Any member, officer, director or employee of an entity who willfully violates subdivision one of this section shall be guilty of a class E felony. 3. Any member, officer, director or employee of an entity who negligently violates subdivision one of this section shall be guilty of a class A misdemeanor. 4. Where any member, officer, director or employee or an entity willfully or negligently uncontains a high-risk advanced artificial intelligence system described in paragraph (f) of subdivision two of section five hundred one of this article or a prohibited high-risk advanced artificial intelligence system as described in section five hundred thirty of this article shall be guilty of a class C felony. 5. The provisions of this section shall not be construed as imposing liability on any member, officer, director or employee who had no explicit or implicit knowledge of the risk or circumstances that caused the uncontainment of the high-risk advanced artificial intelligence system.
Pending 2025-07-26
S-01.1
State Tech. Law § 525
Plain Language
Every licensee must maintain kill-switch capability — internal controls that can safely and indefinitely halt the operation of the entire system or a major part of it within a reasonable time after initiation. This is an ongoing operational requirement, not a one-time design obligation. The controls must be able to sustain indefinite shutdown, not just temporary pauses.
§ 525. Internal controls; ceasing operation. Every licensee shall have in place internal controls that, within a reasonable time following initiation, can safely and indefinitely cease the operation of the system or a major part of the system.
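The engineering pattern this contemplates is a shutdown control that every component of the system respects and that stays in force until deliberately lifted. A hypothetical sketch follows; the class and method names are ours, not the statute's.

```python
import threading


class HaltableSystem:
    """Internal-control pattern: a halt flag every worker checks before doing work."""

    def __init__(self) -> None:
        self._halted = threading.Event()

    def initiate_halt(self, reason: str) -> None:
        # Once set, the flag stays set until operators deliberately clear it,
        # so the shutdown can be sustained indefinitely rather than merely paused.
        print(f"halt initiated: {reason}")
        self._halted.set()

    def serve_request(self, request: str) -> str | None:
        if self._halted.is_set():
            return None  # refuse all new work while halted
        return f"processed {request}"
```

In a distributed deployment the flag would presumably live in shared, durable state rather than a single process, so the halt survives restarts and covers a "major part of the system" as the section requires.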
Pending
S-01.1, S-01.4
GBL § 1711(1)-(2)
Plain Language
Any developer of AI technology intended for use in a professional domain regulated under Title Eight of the New York Education Law must ensure that at least one professional domain expert — a credentialed individual with at least three years of experience in the relevant field — is directly and substantially involved in: (a) technology design, (b) data selection and training, (c) validation and testing of outputs, and (d) ongoing risk assessment and post-deployment evaluation. This covers healthcare diagnostics, legal decision-making, financial advising, educational tools, construction/architecture safety, and public safety technologies, among others. The requirement is not limited to the enumerated areas or the four listed phases — both lists are non-exhaustive. This is both a pre-deployment and ongoing obligation given the post-deployment evaluation requirement.
§ 1711. Professional oversight requirement. 1. Any developer of an artificial intelligence technology intended for use in a professional domain regulated under title eight of the education law shall demonstrate that at least one professional domain expert has been directly and substantially involved in at least, but not limited to: (a) the technology design phase; (b) the data selection and training process; (c) validation and testing of system outputs; and (d) ongoing risk assessment and post-deployment evaluation. 2. The provisions of subdivision one of this section shall apply to artificial intelligence technology used in areas such as, but not limited to: (a) health care diagnostics, treatment recommendations, or patient monitoring; (b) legal decision-making or document generation; (c) financial advising or lending tools; (d) educational curriculum or assessment tools; (e) construction, architecture, or structural safety systems; and (f) public safety, law enforcement, or surveillance technologies.
Pending 2027-01-01
S-01.5, S-01.7
Civil Rights Law § 106(1)
Plain Language
Developers and deployers must take affirmative steps to maintain covered algorithm safety and performance. Specifically, they must: (1) take reasonable measures to prevent and mitigate harms identified in pre-deployment evaluations and impact assessments; (2) ensure independent auditors have all necessary information for accurate evaluations; (3) consult impacted stakeholders and communities before deploying; (4) certify that the algorithm is not likely to cause harm, disparate impact, or deceptive practices, and that benefits outweigh harms; (5) ensure the algorithm performs at a reasonable standard consistent with its publicly-advertised purpose; (6) ensure data used is relevant and appropriate to the deployment context; and (7) ensure the algorithm's intended use is not likely to violate the article. The certification requirement is a meaningful compliance gate — developers and deployers must affirmatively attest based on evaluation results.
1. A developer or deployer shall do the following: (a) take reasonable measures to prevent and mitigate any harm identified by a pre-deployment evaluation described in section one hundred three or an impact assessment described in section one hundred four of this article; (b) take reasonable measures to ensure that an independent auditor has all necessary information to complete an accurate and effective pre-deployment evaluation described in section one hundred three or an impact assessment described in section one hundred four of this article; (c) with respect to a covered algorithm, consult stakeholders, including any communities that will be impacted by the covered algorithm, regarding the development or deployment of the covered algorithm prior to the deploying, licensing, or offering the covered algorithm; (d) with respect to a covered algorithm, certify that, based on the results of a pre-deployment evaluation described in section one hundred three or an impact assessment described in section one hundred four of this article: (i) use of the covered algorithm is not likely to result in harm or disparate impact in the equal enjoyment of goods, services, or other activities or opportunities; (ii) the benefits from the use of the covered algorithm to individuals affected by the covered algorithm likely outweigh the harms from the use of the covered algorithm to such individuals; and (iii) use of the covered algorithm is not likely to result in a deceptive act or practice; (e) ensure that any covered algorithm of the developer or deployer functions at a level that would be considered reasonable performance by an individual with ordinary skill in the art; and in a manner that is consistent with its expected and publicly-advertised performance, purpose, or use; (f) ensure any data used in the design, development, deployment, or use of the covered algorithm is relevant and appropriate to the deployment context and the publicly-advertised purpose or use; and (g) ensure use of the covered algorithm as intended is not likely to result in a violation of this article.
Pending 2027-01-01
S-01.1
Civil Rights Law § 106(2)(b)-(c)
Plain Language
Developers may not knowingly offer or license a covered algorithm for any consequential action that was not evaluated in the pre-deployment evaluation. Deployers may not knowingly use a covered algorithm for any unevaluated consequential action unless the deployer assumes full developer responsibilities under the article. This effectively gates each covered algorithm's permissible uses to those specifically evaluated for harm — if a new use case arises, the algorithm cannot be deployed for that use until a new evaluation is completed.
(b) It shall be unlawful for a developer to knowingly offer or license a covered algorithm for any consequential action other than those evaluated in the pre-deployment evaluation described in section one hundred three of this article. (c) It shall be unlawful for a deployer to knowingly use a covered algorithm for any consequential action other than a use evaluated in the pre-deployment evaluation described in section one hundred three of this article, unless the deployer agrees to assume the responsibilities of a developer required by this article.
Enacted 2025-06-03
S-01.1, S-01.5
Gen. Bus. Law § 1421(1)(a)
Plain Language
Before deploying any frontier model, the large developer must have a written safety and security protocol in place. The protocol must cover risk reduction procedures, cybersecurity protections (including against sophisticated actors), detailed testing procedures, and must designate senior personnel responsible for compliance. This is a pre-deployment prerequisite — no frontier model may be deployed without this documentation and these safeguards in place.
Before deploying a frontier model, the large developer of such frontier model shall do all of the following: (a) Implement a written safety and security protocol;
Enacted 2025-06-03
S-01.1, S-01.5
Gen. Bus. Law § 1421(1)(e)
Plain Language
Before deploying any frontier model, the large developer must implement appropriate safeguards to prevent unreasonable risk of critical harm.
Before deploying a frontier model, the large developer of such frontier model shall do all of the following: ... (e) Implement appropriate safeguards to prevent unreasonable risk of critical harm.
Pending 2025-10-11
S-01.5
GBL § 1551(1)(a)-(b)
Plain Language
Developers of high-risk AI decision systems must exercise reasonable care to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination arising from intended uses. A rebuttable presumption of reasonable care arises if the developer complies with § 1551 requirements and retains an AG-identified independent third party to conduct bias and governance audits. The AG must identify qualified independent auditors and publish a list on its website by January 1, 2026, updated annually.
1. (a) Beginning on January first, two thousand twenty-seven, each developer of a high-risk artificial intelligence decision system shall use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination arising from the intended and contracted uses of a high-risk artificial intelligence decision system. In any enforcement action brought on or after such date by the attorney general pursuant to this article, there shall be a rebuttable presumption that a developer used reasonable care as required pursuant to this subdivision if: (i) the developer complied with the provisions of this section; and (ii) an independent third party identified by the attorney general pursuant to paragraph (b) of this subdivision and retained by the developer completed bias and governance audits for the high-risk artificial intelligence decision system. (b) No later than January first, two thousand twenty-six, and at least annually thereafter, the attorney general shall: (i) identify independent third parties who, in the attorney general's opinion, are qualified to complete bias and governance audits for the purposes of subparagraph (ii) of paragraph (a) of this subdivision; and (ii) publish a list of such independent third parties available on the attorney general's website.
Pending 2025-10-11
S-01.5
GBL § 1552(1)(a)-(b)
Plain Language
Deployers of high-risk AI decision systems must exercise reasonable care to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination. A rebuttable presumption of reasonable care arises if the deployer complies with § 1552's requirements and retains an AG-identified independent third party for bias and governance audits. This mirrors the parallel developer obligation in § 1551(1) but applies to deployers.
1. (a) Beginning on January first, two thousand twenty-seven, each deployer of a high-risk artificial intelligence decision system shall use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination. In any enforcement action brought on or after said date by the attorney general pursuant to this article, there shall be a rebuttable presumption that a deployer of a high-risk artificial intelligence decision system used reasonable care as required pursuant to this subdivision if: (i) the deployer complied with the provisions of this section; and (ii) an independent third party identified by the attorney general pursuant to paragraph (b) of this subdivision and retained by the deployer completed bias and governance audits for the high-risk artificial intelligence decision system. (b) No later than January first, two thousand twenty-seven, and at least annually thereafter, the attorney general shall: (i) identify the independent third parties who, in the attorney general's opinion, are qualified to complete bias and governance audits for the purposes of subparagraph (ii) of paragraph (a) of this subdivision; and (ii) make a list of such independent third parties available on the attorney general's web site.
Pending 2025-06-25
S-01.1
Gen. Bus. Law § 1421(1)(d)-(e)
Plain Language
Before deploying a frontier model, the large developer must record and retain detailed information on all tests and test results used to assess the model, in sufficient detail for third parties to replicate the testing procedure. These records must be retained for the duration of deployment plus five years. Additionally, the developer must implement appropriate safeguards to prevent unreasonable risk of critical harm. The 'reasonably possible' qualifier on recordkeeping provides some flexibility, but the obligation is a pre-deployment prerequisite — the developer may not deploy until both the testing documentation and safeguards are in place.
(d) Record, as and when reasonably possible, and retain for as long as the frontier model is deployed plus five years information on the specific tests and test results used in any assessment of the frontier model that provides sufficient detail for third parties to replicate the testing procedure; and (e) Implement appropriate safeguards to prevent unreasonable risk of critical harm.
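Two practical consequences follow: each test record must be detailed enough for an outside party to re-run it, and it must carry a retention date of deployment end plus five years. The sketch below is illustrative only; the record fields and helper are assumptions, not statutory language.

```python
from dataclasses import dataclass
from datetime import date


@dataclass
class FrontierTestRecord:
    """A test record granular enough for a third party to replicate the procedure."""
    test_name: str
    procedure: str       # step-by-step method, prompts or datasets used, evaluation criteria
    configuration: str   # model version, parameters, evaluation-harness details
    results: str


def retention_deadline(deployment_end: date) -> date:
    """Records must be kept for the deployment period plus five years."""
    try:
        return deployment_end.replace(year=deployment_end.year + 5)
    except ValueError:  # deployment_end was Feb 29 and the target year is not a leap year
        return deployment_end.replace(year=deployment_end.year + 5, day=28)
```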
Pre-filed 2025-11-01
S-01.5
63 O.S. § 5502(C)
Plain Language
Deployers (hospitals, physician practices, and other healthcare facilities) must implement and maintain a formal Quality Assurance Program to ensure AI devices are used safely, effectively, and in compliance with the act. This is a standing programmatic obligation — not a one-time setup — that requires ongoing maintenance. The specific components of the QA Program are detailed in Section 4 of the act (63 O.S. § 5504).
C. Deployers shall implement and maintain a Quality Assurance Program, as outlined in Section 4 of this act, to ensure the safe, effective, and compliant use of AI devices in patient care.
Pre-filed 2025-11-01
S-01.4, S-01.7
63 O.S. § 5503(C)
Plain Language
Deployers must conduct and document regular performance evaluations and risk assessments of each AI device. The evaluations should incorporate feedback solicited from qualified end-users and, when feasible, participation in national specialty society AI assessment registries. When performance concerns are identified, deployers must take corrective action to mitigate patient risk. This is an ongoing obligation — not a one-time pre-deployment assessment — covering the entire lifecycle of the deployed device.
C. Deployers of an AI device shall conduct and document regular performance evaluations and risk assessments of the device. Such evaluations and assessments should be informed by invited feedback from qualified end-users and, when applicable, participation in national specialty society-administered AI assessment registries. Whenever AI device performance concerns are identified, deployers shall implement appropriate corrective actions to mitigate risk to patients.
Pre-filed 2025-11-01
S-01.1
63 O.S. § 5504(D)
Plain Language
Before deploying an AI medical device, deployers must conduct a diligent review and selection process. While the statute does not specify the elements of this process, it requires that the selection of AI devices not be ad hoc — deployers must be able to demonstrate a deliberate evaluation process was followed. This is a pre-deployment obligation that applies to each AI device the deployer selects for use in patient care.
D. Deployers shall have a diligent review and selection process for the deployed AI device.
Pre-filed 2025-11-01
S-01.4
63 O.S. § 5504(F)-(G)
Plain Language
Deployers must continuously monitor all deployed AI devices for performance issues, with specific attention to patient safety and care quality impacts. As part of this monitoring, deployers must participate in national specialty society-administered AI assessment registries when feasible. The feasibility qualifier means participation is mandatory when a relevant registry exists and participation is practicable, but not when no applicable registry exists or participation would be impracticable. This is a continuous post-deployment obligation distinct from the periodic performance evaluations required by Section 5503(C).
F. Deployers shall continuously monitor the performance of all deployed AI devices, including assessing any impact on patient safety or the quality of patient care. G. In conducting performance monitoring described in subsection F of this section, deployers must participate in national specialty society-administered artificial intelligence assessment registries when feasible.
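Continuous monitoring implies comparing live performance against the baseline validated during the Section 5504(D) selection process and escalating when it slips. A hedged sketch of that comparison follows; the tolerance value and names are illustrative, not figures from the act.

```python
def metrics_needing_corrective_action(
    baseline: dict[str, float],   # metric -> value validated during device selection
    observed: dict[str, float],   # metric -> value from continuous post-deployment monitoring
    tolerance: float = 0.05,      # illustrative drift tolerance, not a statutory figure
) -> list[str]:
    """Return the metrics that have drifted below baseline by more than the
    tolerance, i.e. performance concerns requiring corrective action."""
    return [
        name for name, base in baseline.items()
        if base - observed.get(name, 0.0) > tolerance
    ]
```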
Pending 2026-10-06
S-01.7
35 Pa.C.S. § 3503(b)(5),(7)
Plain Language
Facilities must periodically review and revise the performance, use, and outcomes of AI algorithms used in clinical decision making to maximize accuracy and reliability. Additionally, the algorithms must not create foreseeable, material risks of harm to patients. This is an ongoing operational obligation — not a one-time pre-deployment check — requiring continuous monitoring and improvement of AI algorithm performance.
(5) The performance, use and outcomes of the artificial intelligence-based algorithms must be periodically reviewed and revised to maximize accuracy and reliability. (7) The artificial intelligence-based algorithms must not create foreseeable, material risks of harm to the patient.
Pending 2026-10-06
S-01.7
40 Pa.C.S. § 5203(b)(7),(9)
Plain Language
Insurers must periodically review and revise AI algorithms used in utilization review to maximize accuracy and reliability, and the algorithms must not create foreseeable, material risks of harm to covered persons. This is a continuing operational obligation requiring ongoing monitoring and improvement.
(7) The performance, use and outcomes of the artificial intelligence-based algorithms must be periodically reviewed and revised to maximize accuracy and reliability. (9) The artificial intelligence-based algorithms must not create foreseeable, material risks of harm to the covered person.
Pending 2026-10-06
S-01.7
40 Pa.C.S. § 5303(b)(7),(9)
Plain Language
MA or CHIP managed care plans must periodically review and revise AI algorithms used in utilization review to maximize accuracy and reliability, and the algorithms must not create foreseeable, material risks of harm to enrollees.
(7) The performance, use and outcomes of the artificial intelligence-based algorithms must be periodically reviewed and revised to maximize accuracy and reliability. (9) The artificial intelligence-based algorithms must not create foreseeable, material risks of harm to the enrollee.
Pending 2026-04-01
S-01.1, S-01.4, S-01.5, S-01.7
12 Pa.C.S. § 7105(c)(4)
Plain Language
Suppliers must disclose in their written policy the procedures covering a comprehensive safety program for the chatbot. This includes: pre-deployment and ongoing testing benchmarked against the risk level of human communication; identification of foreseeable adverse outcomes and harmful interactions; a consumer harm-reporting mechanism; protocols for assessing and responding to risk of harm; documentation of actions taken to prevent or mitigate adverse outcomes; protocols for rapid response to acute physical harm risks; regular objective safety, accuracy, and efficacy reviews (which may include internal or external audits); safe-use instructions for consumers; prioritization of consumer mental health and safety over engagement metrics or profit; anti-discrimination measures; and HIPAA-equivalent privacy and security compliance as if the supplier were a covered entity. The supplier must not merely describe these procedures — under § 7105(g), the supplier must actually comply with the policy as filed.
(4) The procedures by which the supplier: (i) Conducts testing, prior to making the chatbot publicly available and regularly thereafter, to ensure that the output of the chatbot poses no greater risk to a consumer than that posed to an individual communicating with a human. (ii) Identifies reasonably foreseeable adverse outcomes to, and potentially harmful interactions with, consumers that could result from using the chatbot. (iii) Provides a mechanism for a consumer to report any potentially harmful interactions from the use of the chatbot. (iv) Implements protocols to assess and respond to risk of harm to consumers or other individuals. (v) Details actions taken to prevent or mitigate any adverse outcomes or potentially harmful interactions. (vi) Implements protocols to respond, as soon as practicable, to acute risks of physical harm. (vii) Reasonably ensures regular, objective reviews of safety, accuracy and efficacy, which may include internal or external audits. (viii) Provides consumers with instructions on the safe use of the chatbot. (ix) Prioritizes consumer mental health and safety over engagement metrics or profit. (x) Implements measures to prevent discriminatory treatment of consumers. (xi) Ensures compliance with the security and privacy provisions of 45 CFR Pts. 160 (relating to general administrative requirements) and 164 (relating to security and privacy), as if the supplier were a covered entity.
Pending 2027-01-09
S-01.1
35 Pa.C.S. § 3503(b)(7)
Plain Language
Facilities must ensure that their AI algorithms used in clinical decision making do not create foreseeable, material risks of harm to patients. This is an affirmative safety obligation — the facility must evaluate and ensure its AI tools do not pose material harm risks, not merely react after harm occurs.
(7) The artificial-intelligence-based algorithms must not create foreseeable, material risks of harm to the patient.
Pending 2027-01-09
S-01.1
40 Pa.C.S. § 5203(b)(9)
Plain Language
Insurers must ensure that their AI algorithms used in utilization review do not create foreseeable, material risks of harm to covered persons.
(9) The artificial-intelligence-based algorithms must not create foreseeable, material risks of harm to the covered person.
Pending 2027-01-09
S-01.1
40 Pa.C.S. § 5303(b)(9)
Plain Language
MA or CHIP managed care plans must ensure that their AI algorithms used in utilization review do not create foreseeable, material risks of harm to enrollees.
(9) The artificial-intelligence-based algorithms must not create foreseeable, material risks of harm to the enrollee.
Pre-filed 2026-01-01
S-01.4, S-01.7
S.C. Code § 39-80-30(C)
Plain Language
Chatbot providers must, on a monthly basis, evaluate their chatbot for potential risk of harm to users and publish information about the chatbot on their website. They must also mitigate any identified risks of harm. The specific form and content of the evaluation and public disclosure will be governed by Attorney General regulations. This creates a continuous monthly safety evaluation cycle — not a one-time pre-deployment assessment — paired with an ongoing public transparency obligation and a duty to remediate identified risks.
(C) In compliance with the rules adopted and the regulations promulgated by the Attorney General pursuant to Section 39-80-40, a chatbot provider shall: (1) on a monthly basis: (a) evaluate its chatbot for potential risk of harm to users; and (b) make information about its chatbot publicly available on its website; and (2) mitigate any risk of harm to users.
Pending 2025-01-01
S-01.4, S-01.7
S.C. Code § 39-80-30(C)
Plain Language
Chatbot providers must conduct monthly safety evaluations of their chatbots to identify potential risks of harm to users and must publish information about their chatbot on their website on the same monthly cadence. When risks are identified, the provider must mitigate them. The specific content and form of the evaluations and public disclosures will be defined by Attorney General regulations. The monthly frequency makes this one of the most frequent mandatory safety evaluation cadences in U.S. AI legislation. The mitigation obligation is open-ended — any risk of harm identified must be addressed.
(C) In compliance with the rules adopted and the regulations promulgated by the Attorney General pursuant to Section 39-80-40, a chatbot provider shall: (1) on a monthly basis: (a) evaluate its chatbot for potential risk of harm to users; and (b) make information about its chatbot publicly available on its website; and (2) mitigate any risk of harm to users.
Enacted 2024-05-01
S-01.5
Utah Code § 13-70-303(1)
Plain Language
To qualify for regulatory mitigation (reduced enforcement terms) within the Learning Laboratory, a participant must affirmatively demonstrate to the Office: technical capability, sufficient financial resources, that the AI technology's consumer benefits potentially outweigh risks from relaxed enforcement, an effective risk monitoring and minimization plan, and that the proposed testing is appropriately scoped and limited based on risk assessments. These are eligibility prerequisites — the Office evaluates them before granting any mitigation agreement.
To be eligible for regulatory mitigation, a participant shall demonstrate to the office that: (a) the participant has the technical expertise and capability to responsibly develop and test the proposed artificial intelligence technology; (b) the participant has sufficient financial resources to meet obligations during testing; (c) the artificial intelligence technology provides potential substantial consumer benefits that may outweigh identified risks from mitigated enforcement of regulations; (d) the participant has an effective plan to monitor and minimize identified risks from testing; and (e) the scale, scope, and duration of proposed testing is appropriately limited based on risk assessments.
Pre-filed 2025-07-01
S-01.1, S-01.5
9 V.S.A. § 4193f(a)-(b)
Plain Language
Developers and deployers of inherently dangerous AI systems that could reasonably impact consumers must exercise reasonable care to prevent nine enumerated categories of foreseeable harm, ranging from criminal facilitation and deceptive practices to discrimination, privacy intrusion, IP violations, psychological harm, behavioral distortion, and exploitation of vulnerable populations. Additionally, developers must document and disclose to actual or potential deployers all reasonably foreseeable risks (including misuse risks) and available risk mitigation processes. This is a general duty of care provision — compliance with the subchapter creates a rebuttable presumption that the standard was met (per § 4193i(a)). A deployer who is not the developer is shielded from liability if they deploy in accordance with the developer's instructions and disclosures (per § 4193i(b)).
(a) Each developer or deployer of any inherently dangerous artificial intelligence system that could be reasonably expected to impact consumers shall exercise reasonable care to avoid any reasonably foreseeable risk arising out of the development of, intentional and substantial modification to, or deployment of an artificial intelligence system that causes or is likely to cause: (1) the commission of a crime or unlawful act; (2) any unfair or deceptive treatment of or unlawful impact on an individual; (3) any physical, financial, relational, or reputational injury on an individual; (4) psychological injuries that would be highly offensive to a reasonable person; (5) any physical or other intrusion upon the solitude or seclusion, or the private affairs or concerns of a person, if the intrusion would be offensive to a reasonable person; (6) any violation to the intellectual property rights of persons under applicable State and federal laws; (7) discrimination on the basis of a person's or class of persons' actual or perceived race, color, ethnicity, sex, sexual orientation, gender identity, sex characteristics, religion, national origin, familial status, biometric information, or disability status; (8) distortion of a person's behavior in a manner that causes or is likely to cause that person or another person physical or psychological harm; or (9) the exploitation of the vulnerabilities of a specific group of persons due to their age or physical or mental disability in order to materially distort the behavior of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm. (b) Each developer of an inherently dangerous artificial intelligence system shall document and disclose to any actual or potential deployer of the artificial intelligence system any: (1) reasonably foreseeable risk, including by unintended or unauthorized uses, that causes or is likely to cause any of the injuries as set forth in subsection (a) of this section; and (2) risk mitigation processes that are reasonably foreseeable to mitigate any injury as set forth in subsection (a) of this section.
Pre-filed 2025-07-01
S-01.1
9 V.S.A. § 4193g(a)(1)
Plain Language
Developers are prohibited from placing an inherently dangerous AI system into the stream of commerce unless they have first conducted documented testing, evaluation, verification, and validation at least as stringent as the latest NIST AI Risk Management Framework. For AI systems that create reasonably foreseeable risks under the standard-of-care provision (§ 4193f), the developer must mitigate risks to the extent possible, consider alternatives, and disclose vulnerabilities and mitigation tactics to deployers. This is a pre-market gate — the system cannot be offered, sold, leased, or given away without satisfying these conditions.
(a) No developer shall offer, sell, lease, give, or otherwise place in the stream of commerce: (1) an inherently dangerous artificial intelligence system, unless the developer has conducted a documented testing, evaluation, verification, and validation of that system at least as stringent as the latest version of the Artificial Intelligence Risk Management Framework published by the National Institute of Standards and Technology (NIST); or (2) an artificial intelligence system that creates reasonably foreseeable risks pursuant to section 4193f of this subchapter, unless the developer mitigates these risks to the extent possible, considers alternatives, and discloses vulnerabilities and mitigation tactics to a deployer.
Pre-filed 2026-07-01
S-01.4, S-01.7
9 V.S.A. § 4193c(c)
Plain Language
Chatbot providers must conduct monthly risk assessments of their chatbots for risks of harm to users, using metrics defined by AG rulemaking, and must actively mitigate any identified risks. This is an unusually frequent assessment cadence — monthly rather than the annual or pre-deployment assessments common in other jurisdictions. The specific metrics and risk categories will be defined by future AG rules, so the scope of this obligation is not yet fully determined. The mitigation obligation is ongoing and immediate upon risk identification.
(c) Risk assessment. A chatbot provider shall on a monthly basis, according to metrics as set forth in rules adopted by the Attorney General pursuant to this subchapter, assess its chatbot for risks of harm to users and actively mitigate any risks of harm.
Pending 2026-06-06
§ 15-17-3(e)
Plain Language
Private entities must protect biometric identifiers and biometric information through two cumulative security standards: (1) the reasonable standard of care within the entity's industry, and (2) protections at least as strong as those the entity applies to its other confidential and sensitive information (e.g., Social Security numbers, account numbers, PINs). Both standards must be met simultaneously — the entity must apply whichever is more protective. This covers storage, transmission, and protection from disclosure.
(e) A private entity in possession of a biometric identifier or biometric information shall: (1) Store, transmit, and protect from disclosure all biometric identifiers and biometric information using the reasonable standard of care within the private entity's industry; and (2) Store, transmit, and protect from disclosure all biometric identifiers and biometric information in a manner that is the same as or more protective than the way the private entity stores, transmits, and protects other confidential and sensitive information.