H-0341
VT · State · USA
● Pre-filed
Proposed Effective Date
2025-07-01
Vermont H.341 — An act relating to creating oversight and safety standards for developers and deployers of inherently dangerous artificial intelligence systems
Summary

VT H.341 establishes safety and oversight standards for developers and deployers of 'inherently dangerous' AI systems — a category encompassing high-risk AI systems, dual-use foundational models, and generative AI systems. Deployers must submit an AI System Safety and Impact Assessment to the Division of Artificial Intelligence before deployment and every two years thereafter, with updated assessments upon material changes. Developers and deployers must exercise reasonable care to avoid foreseeable harms including discrimination, deception, privacy intrusion, and physical or psychological injury. Developers may not place inherently dangerous AI systems in commerce without documented testing at least as stringent as the NIST AI Risk Management Framework. Deployers must implement a risk management program meeting the same standard. The bill applies only to businesses that are not small businesses and creates both Attorney General enforcement and a private right of action for harmed consumers, with a rebuttable presumption that the standard of care was met for entities that comply with the subchapter's requirements.

Enforcement & Penalties
Enforcement Authority
Attorney General enforcement. The Attorney General may bring an action in the name of the State against a deployer or developer for noncompliance, seeking a temporary or permanent injunction, dissolution, or revocation of a certificate of authority. The Attorney General may issue civil investigative demands when there is reasonable cause to believe a violation has occurred. The Division of Artificial Intelligence within the Agency of Digital Services collects and reviews AI System Safety and Impact Assessments and refers noncompliance to the Attorney General after a 45-day cure period. A private right of action is available to consumers harmed by a violation. Complaints may be submitted through an online mechanism on the Attorney General's website.
Penalties
Consumers harmed by a violation may bring an action in Superior Court for damages incurred, injunctive relief, punitive damages in the case of an intentional violation, and reasonable costs and attorney's fees. Violations also constitute unfair practices in commerce under 9 V.S.A. § 2453. No statutory minimum per-violation amount is specified; damages require proof of harm incurred.
Who Is Covered
"Deployer" means a person, including a developer, who uses or operates an artificial intelligence system for internal use or for use by third parties in the State.
"Developer" means a person who designs, codes, produces, owns, or substantially modifies an artificial intelligence system for internal use or for use by a third party in the State.
What Is Covered
"Inherently dangerous artificial intelligence system" means a high-risk artificial intelligence system, dual-use foundational model, or generative artificial intelligence system.
"High-risk artificial intelligence system" means any artificial intelligence system, regardless of the number of parameters and supervision structure, that is: (A) used, or reasonably foreseeable as being used: (i) as a controlling factor in making a consequential decision; (ii) to categorize groups of persons by sensitive and protected characteristics, such as race, ethnic origin, or religious belief; (iii) in the direct management or operation of critical infrastructure; (iv) in vehicles, medical devices, or in the safety system of a product; or (v) to influence elections or voters; or (B) used to collect the biometric data of an individual from a biometric identification system without consent.
"Dual-use foundational model" means an artificial intelligence system that: (A) is trained on broad data; (B) generally uses self-supervision; (C) contains at least 10 billion parameters; (D) is applicable across a wide range of contexts; and (E) exhibits, or could be easily modified to exhibit, high levels of performance at tasks that pose a serious risk to economic security, public health or safety, or any combination of those matters, such as by: (i) substantially lowering the barrier of entry for nonexperts to design, synthesize, acquire, or use chemical, biological, radiological, or nuclear (CBRN) weapons; (ii) enabling powerful offensive cyber operations through automated vulnerability discovery and exploitation against a wide range of potential targets of cyberattacks; or (iii) permitting the evasion of human control or oversight through means of deception or obfuscation.
"Generative artificial intelligence system" means an artificial intelligence system that can generate derived synthetic content, such as text, images, video, and audio, that emulates the structure and characteristics of the artificial intelligence's training data. This definition includes an artificial intelligence agent.
Compliance Obligations · 13 obligations
R-02 Regulatory Disclosure & Submissions · R-02.1 · Deployer · Automated Decisionmaking · Frontier AI System · Content Generation
9 V.S.A. § 4193e(a)
Plain Language
Before deploying any inherently dangerous AI system in Vermont, deployers must submit an AI System Safety and Impact Assessment to the Division of Artificial Intelligence. The assessment must be resubmitted every two years and also whenever the deployer makes a material and substantial change to the system's purpose or the type of data it processes or uses for training. This is a pre-deployment gate — deployment cannot proceed until the assessment is filed.
Statutory Text
(a) Each deployer of an inherently dangerous artificial intelligence system shall: (1) submit to the Division of Artificial Intelligence an Artificial Intelligence System Safety and Impact Assessment prior to deploying the inherently dangerous artificial intelligence system in this State, and every two years thereafter; and (2) submit to the Division of Artificial Intelligence an updated Artificial Intelligence System Safety and Impact Assessment if the deployer makes a material and substantial change to the inherently dangerous artificial intelligence system that includes: (A) the purpose for which the system is used for; or (B) the type of data the system processes or uses for training purposes.
R-02 Regulatory Disclosure & Submissions · R-02.1 · Deployer · Automated Decisionmaking · Frontier AI System · Content Generation
9 V.S.A. § 4193e(b)
Plain Language
The Safety and Impact Assessment submitted to the Division of Artificial Intelligence must cover thirteen specific elements: (1) system purpose; (2) deployment context and intended use cases; (3) benefits of use; (4) foreseeable misuse risks and mitigations; (5) whether the model is proprietary; (6) training data descriptions; (7) whether training data was processed to remove personal information, copyrighted information, and 'do not train' data; (8) transparency measures, including user notification; (9) third-party system and dataset dependencies; (10) whether the developer has disclosed testing results and safe-use parameters; (11) post-deployment input data descriptions; (12) post-deployment monitoring and oversight processes; and (13) the system's impact on consequential decisions or biometric data collection. This is a comprehensive documentation requirement that effectively requires deployers to understand and document the entire lifecycle of the AI system.
Statutory Text
(b) Each Artificial Intelligence System Safety and Impact Assessment pursuant to subsection (a) of this section shall include, with respect to the inherently dangerous artificial intelligence system: (1) the purpose of the system; (2) the deployment context and intended use cases; (3) the benefits of use; (4) any foreseeable risk of unintended or unauthorized uses and the steps taken, to the extent reasonable, to mitigate the risk; (5) whether the model is proprietary; (6) a description of the data the system processes or uses for training purposes; (7) whether the data the system uses for training purposes has been processed to remove personal information, copyrighted information, and do not train data; (8) a description of transparency measures, including identifying to individuals when the system is in use; (9) identification of any third-party artificial intelligence systems or datasets the deployer relies on to train or operate the system, if applicable; (10) whether the developer of the system, if different than the deployer, disclosed the information pursuant to this subsection as well as the results of testing, vulnerabilities, and the parameters for safe and intended use; (11) a description of the data that the system, once deployed, processes as inputs; (12) a description of postdeployment monitoring and user safeguards, including a description of the oversight process in place to address issues as issues arise; and (13) a description of how the model impacts consequential decisions or the collection of biometric data.
R-02 Regulatory Disclosure & Submissions · R-02.1 · Deployer · Automated Decisionmaking
9 V.S.A. § 4193e(c)
Plain Language
In the first year after deploying a high-risk AI system, deployers must submit testing results to the Division of Artificial Intelligence at three intervals: one month, six months, and twelve months after deployment. Each submission must show the reliability of the system's results, any variance over the testing periods, and strategies for mitigating variances. This post-deployment testing and reporting obligation applies specifically to high-risk AI systems — a subset of the broader 'inherently dangerous' category — and is a first-year-only requirement distinct from the biennial safety and impact assessment.
Statutory Text
(c) Each deployer of a high-risk artificial intelligence system shall submit a one-, six-, and 12-month testing result to the Division of Artificial Intelligence showing the reliability of the results generated by the system, any variance in those results over the testing periods, and any mitigation strategies for variances, in the first year of deployment.
S-01 AI System Safety Program · S-01.1–S-01.5 · Developer · Automated Decisionmaking · Frontier AI System · Content Generation
9 V.S.A. § 4193g(a)
Plain Language
Developers may not place an inherently dangerous AI system in commerce unless they have first conducted documented testing, evaluation, verification, and validation at least as stringent as the latest NIST AI Risk Management Framework. For any AI system creating reasonably foreseeable risks of harm under § 4193f, the developer must mitigate those risks to the extent possible, consider alternatives, and disclose vulnerabilities and mitigation tactics to downstream deployers. This is a pre-distribution gate — developers cannot release the product without completing NIST-level safety evaluation and documentation, and must affirmatively disclose residual risks to deployers.
Statutory Text
(a) No developer shall offer, sell, lease, give, or otherwise place in the stream of commerce: (1) an inherently dangerous artificial intelligence system, unless the developer has conducted a documented testing, evaluation, verification, and validation of that system at least as stringent as the latest version of the Artificial Intelligence Risk Management Framework published by the National Institute of Standards and Technology (NIST); or (2) an artificial intelligence system that creates reasonably foreseeable risks pursuant to section 4193f of this subchapter, unless the developer mitigates these risks to the extent possible, considers alternatives, and discloses vulnerabilities and mitigation tactics to a deployer.
G-01 AI Governance Program & Documentation · G-01.1 · Deployer · Automated Decisionmaking · Frontier AI System · Content Generation
9 V.S.A. § 4193g(b)
Plain Language
Deployers may not deploy an inherently dangerous AI system or any AI system creating foreseeable risks of harm unless they have first designed and implemented a risk management policy and program. The policy must specify the principles, processes, and personnel for ongoing risk identification, mitigation, and documentation. The program must meet the NIST AI RMF as a floor and must be reasonable considering the deployer's size and complexity, the nature and scope of the system (including intended and unintended uses and deployer modifications), and the data the system processes post-deployment. This is a pre-deployment prerequisite with ongoing maintenance obligations — the program must be 'maintained,' not just created.
Statutory Text
(b) No deployer shall deploy an inherently dangerous artificial intelligence system or an artificial intelligence system that creates reasonably foreseeable risks pursuant to section 4193f of this subchapter unless the deployer has designed and implemented a risk management policy and program for the model or system. The risk management policy shall specify the principles, processes, and personnel that the deployer shall use in maintaining the risk management program to identify, mitigate, and document any risk that is a reasonably foreseeable consequence of deploying or using the system. Each risk management policy and program designed, implemented, and maintained pursuant to this subsection shall be: (1) at least as stringent as the latest version of the Artificial Intelligence Risk Management Framework published by the NIST; and (2) reasonable considering: (A) the size and complexity of the deployer; (B) the nature and scope of the system, including the intended uses and unintended uses and the modifications made to the system by the deployer; and (C) the data that the system, once deployed, processes as inputs.
S-01 AI System Safety Program · S-01.5 · Developer · Deployer · Automated Decisionmaking · Frontier AI System · Content Generation
9 V.S.A. § 4193f(a)
Plain Language
Developers and deployers of inherently dangerous AI systems that could reasonably be expected to impact Vermont consumers must exercise reasonable care to avoid foreseeable risks across nine categories of harm: criminal conduct, unfair or deceptive treatment, physical/financial/relational/reputational injury, highly offensive psychological injuries, privacy intrusion, intellectual property violations, discrimination across a broad enumeration of protected characteristics, behavioral distortion causing harm, and exploitation of vulnerable groups (by age or disability) to distort behavior harmfully. This is a general negligence-style standard of care with an enumerated list of harm categories — it functions as the statute's core safety obligation, defining the harms that developers and deployers must affirmatively work to prevent. Compliance with the subchapter creates a rebuttable presumption that the standard of care was met (per § 4193i(a)).
Statutory Text
(a) Each developer or deployer of any inherently dangerous artificial intelligence system that could be reasonably expected to impact consumers shall exercise reasonable care to avoid any reasonably foreseeable risk arising out of the development of, intentional and substantial modification to, or deployment of an artificial intelligence system that causes or is likely to cause: (1) the commission of a crime or unlawful act; (2) any unfair or deceptive treatment of or unlawful impact on an individual; (3) any physical, financial, relational, or reputational injury on an individual; (4) psychological injuries that would be highly offensive to a reasonable person; (5) any physical or other intrusion upon the solitude or seclusion, or the private affairs or concerns of a person, if the intrusion would be offensive to a reasonable person; (6) any violation to the intellectual property rights of persons under applicable State and federal laws; (7) discrimination on the basis of a person's or class of persons' actual or perceived race, color, ethnicity, sex, sexual orientation, gender identity, sex characteristics, religion, national origin, familial status, biometric information, or disability status; (8) distortion of a person's behavior in a manner that causes or is likely to cause that person or another person physical or psychological harm; or (9) the exploitation of the vulnerabilities of a specific group of persons due to their age or physical or mental disability in order to materially distort the behavior of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm.
G-02 Public Transparency & Documentation · G-02.1 · Developer · Automated Decisionmaking · Frontier AI System · Content Generation
9 V.S.A. § 4193f(b)
Plain Language
Developers of inherently dangerous AI systems must document and disclose to all actual and potential deployers: (1) all reasonably foreseeable risks — including from unintended or unauthorized uses — that could cause any of the nine categories of harm enumerated in § 4193f(a), and (2) risk mitigation processes reasonably foreseeable to mitigate those harms. This is a pre-deployment downstream disclosure obligation — developers must affirmatively push risk and mitigation information to deployers, not merely make it available on request. The disclosure covers both the risk landscape and the developer's recommended mitigation approaches.
Statutory Text
(b) Each developer of an inherently dangerous artificial intelligence system shall document and disclose to any actual or potential deployer of the artificial intelligence system any: (1) reasonably foreseeable risk, including by unintended or unauthorized uses, that causes or is likely to cause any of the injuries as set forth in subsection (a) of this section; and (2) risk mitigation processes that are reasonably foreseeable to mitigate any injury as set forth in subsection (a) of this section.
Other · Automated Decisionmaking · Frontier AI System · Content Generation
9 V.S.A. § 4193h(a)
Plain Language
Violations of this subchapter are declared to be per se unfair practices in commerce under Vermont's Consumer Protection Act (9 V.S.A. § 2453). This activates the existing CPA enforcement framework — including the Attorney General's enforcement powers under that act — but creates no new, independent compliance obligation beyond what the subchapter already requires.
Statutory Text
(a) A person who violates this subchapter or rules adopted under this subchapter commits an unfair practice in commerce in violation of section 2453 of this title.
Other · Automated Decisionmaking · Frontier AI System · Content Generation
9 V.S.A. § 4193h(b)
Plain Language
A consumer harmed by any violation of this subchapter may sue in Superior Court for damages incurred, injunctive relief, punitive damages (if the violation was intentional), and reasonable costs and attorney's fees. This creates the private enforcement mechanism but imposes no new compliance obligation on developers or deployers beyond what other provisions already require.
Statutory Text
(b) A consumer harmed by a violation of this subchapter or rules adopted under this subchapter may bring an action in Superior Court for damages incurred, injunctive relief, punitive damages in the case of an intentional violation, and reasonable costs and attorney's fees.
Other · Automated Decisionmaking · Frontier AI System · Content Generation
9 V.S.A. § 4193i(a)-(b)
Plain Language
Two safe harbors apply: (1) a rebuttable presumption that a developer or deployer met the standard of care if they complied with all provisions of the subchapter, meaning a plaintiff must produce evidence to overcome that presumption; and (2) a deployer who is not the developer is not liable if the deployer deployed the system in accordance with the developer's instructions and risk information under § 4193f. The second safe harbor is significant for downstream deployers using off-the-shelf systems — it effectively shifts liability upstream to the developer when the deployer followed the developer's guidance.
Statutory Text
(a) In any civil action brought against a deployer or developer pursuant to section 4193h of this subchapter, there shall be a rebuttable presumption that a developer or deployer upheld the standard of care if the developer or deployer complied with the provisions of this subchapter. (b) A deployer who is not also the developer of an inherently dangerous artificial intelligence system shall not be found in violation of this subchapter if the deployer deploys the system in accordance with the developer's instructions and information as set forth in section 4193f of this subchapter.
Other · Government · Automated Decisionmaking · Frontier AI System · Content Generation
9 V.S.A. § 4193d
Plain Language
The Attorney General must publish on the AG's website information about developer, distributor, and deployer obligations under § 4193f, and an online complaint submission mechanism for consumers. This is a government-facing obligation — it imposes no compliance burden on developers or deployers.
Statutory Text
The Attorney General shall post on the Attorney General's website: (1) information relating to the responsibilities of a developer, distributor, and deployer pursuant to section 4193f of this title; and (2) an online mechanism through which a consumer may submit a complaint under this subchapter to the Attorney General.
R-02 Regulatory Disclosure & Submissions · R-02.2 · Deployer · Automated Decisionmaking · Frontier AI System · Content Generation
9 V.S.A. § 4193e(d)
Plain Language
When the Division of Artificial Intelligence learns a deployer is not in compliance with assessment requirements, it must immediately notify the deployer in writing and order submission of the required assessment. If the deployer fails to submit within 45 days, the Division refers the violation to the Attorney General. This creates a 45-day cure window between the Division's noncompliance notice and Attorney General referral. Deployers should treat the initial Division notice as an urgent compliance demand — the 45-day period is a hard deadline, not a suggestion.
Statutory Text
(d) Upon the Division of Artificial Intelligence receiving notice that a deployer of an inherently dangerous artificial intelligence system is not in compliance with the requirements under this section, the Division shall immediately notify the deployer of the finding in writing and order the deployer to submit the assessment required pursuant to subsection (a) of this section. If the deployer fails to submit the assessment on or before 45 days after the deployer receives the notice, the Division of Artificial Intelligence shall notify the Attorney General in writing of the violation.
R-02 Regulatory Disclosure & Submissions · R-02.2 · Developer · Deployer · Automated Decisionmaking · Frontier AI System · Content Generation
9 V.S.A. § 4193c(c)
Plain Language
The Attorney General may issue a civil investigative demand (CID) when there is reasonable cause to believe a violation has occurred. Developers and deployers must produce responsive documents but may redact trade secrets and legally protected information — provided they affirmatively state that the basis for withholding is a trade secret claim. Attorney-client privilege and work-product protections are preserved and disclosure does not waive them. All materials produced to the AG under a CID are exempt from public records disclosure. Practically, entities should maintain documentation in a form that allows rapid response to a CID, with trade-secret designations pre-identified.
Statutory Text
(c)(1) Whenever the Attorney General has reasonable cause to believe that any person has engaged in or is engaging in any violation of this subchapter, the Attorney General may issue a civil investigative demand. (2) In rendering and furnishing any information requested pursuant to a civil investigative demand, a developer or deployer may redact or omit any trade secrets or information protected from disclosure by State or federal law. If a developer or deployer refuses to disclose or redacts or omits information based on the exemption from disclosure of trade secrets, the developer or deployer shall affirmatively state to the Attorney General that the basis for nondisclosure, redaction, or omission is because the information is a trade secret. (3) To the extent that any information requested pursuant to a civil investigative demand is subject to attorney-client privilege or work-product protection, disclosure of the information shall not constitute a waiver of the privilege or protection. (4) Any information, statement, or documentation provided to the Attorney General pursuant to this subsection shall be exempt from public inspection and copying under the Public Records Act.