H-0341
VT · State · USA
● Pre-filed
Proposed Effective Date
2025-07-01
Vermont H.341 — An act relating to creating oversight and safety standards for developers and deployers of inherently dangerous artificial intelligence systems
Summary

Vermont H.341 creates safety and oversight standards for developers and deployers of "inherently dangerous" AI systems, defined to include high-risk AI systems, dual-use foundational models, and generative AI systems. Deployers must submit an AI System Safety and Impact Assessment to the Division of Artificial Intelligence before deployment and every two years thereafter, covering purpose, deployment context, training data, risk mitigation, post-deployment monitoring, and impacts on consequential decisions or biometric data collection. Developers must conduct NIST AI RMF-aligned testing before placing inherently dangerous systems in commerce and must disclose foreseeable risks and mitigation processes to deployers. Deployers must design and implement a NIST AI RMF-aligned risk management program. The bill applies only to non-small businesses (as defined by the SBA) operating in Vermont. Enforcement is by the Attorney General, with a private right of action allowing harmed consumers to recover actual damages, injunctive relief, punitive damages for intentional violations, and attorney's fees.

Enforcement & Penalties
Enforcement Authority
Attorney General enforcement. The Attorney General shall enforce the subchapter and may bring an action in the name of the State against a deployer or developer for noncompliance, including seeking a temporary or permanent injunction, dissolution, or revocation of a certificate of authority. The Attorney General may issue civil investigative demands upon reasonable cause to believe a violation has occurred. The Division of Artificial Intelligence within the Agency of Digital Services collects and reviews AI System Safety and Impact Assessments and refers noncompliant deployers to the Attorney General after a 45-day cure period. A private right of action is available to any consumer harmed by a violation.
Penalties
A consumer harmed by a violation may bring an action in Superior Court for damages incurred, injunctive relief, punitive damages in the case of an intentional violation, and reasonable costs and attorney's fees. Violations also constitute unfair practices in commerce under 9 V.S.A. § 2453. No statutory minimum is specified; actual damages must be proven, and punitive damages are available only for intentional violations.
Who Is Covered
"Deployer" means a person, including a developer, who uses or operates an artificial intelligence system for internal use or for use by third parties in the State.
"Developer" means a person who designs, codes, produces, owns, or substantially modifies an artificial intelligence system for internal use or for use by a third party in the State.
What Is Covered
"Inherently dangerous artificial intelligence system" means a high-risk artificial intelligence system, dual-use foundational model, or generative artificial intelligence system.
"High-risk artificial intelligence system" means any artificial intelligence system, regardless of the number of parameters and supervision structure, that is: (A) used, or reasonably foreseeable as being used: (i) as a controlling factor in making a consequential decision; (ii) to categorize groups of persons by sensitive and protected characteristics, such as race, ethnic origin, or religious belief; (iii) in the direct management or operation of critical infrastructure; (iv) in vehicles, medical devices, or in the safety system of a product; or (v) to influence elections or voters; or (B) used to collect the biometric data of an individual from a biometric identification system without consent.
"Dual-use foundational model" means an artificial intelligence system that: (A) is trained on broad data; (B) generally uses self-supervision; (C) contains at least 10 billion parameters; (D) is applicable across a wide range of contexts; and (E) exhibits, or could be easily modified to exhibit, high levels of performance at tasks that pose a serious risk to economic security, public health or safety, or any combination of those matters, such as by: (i) substantially lowering the barrier of entry for nonexperts to design, synthesize, acquire, or use chemical, biological, radiological, or nuclear (CBRN) weapons; (ii) enabling powerful offensive cyber operations through automated vulnerability discovery and exploitation against a wide range of potential targets of cyberattacks; or (iii) permitting the evasion of human control or oversight through means of deception or obfuscation.
"Generative artificial intelligence system" means an artificial intelligence system that can generate derived synthetic content, such as text, images, video, and audio, that emulates the structure and characteristics of the artificial intelligence's training data. This definition includes an artificial intelligence agent.
Compliance Obligations · 7 obligations
R-02 Regulatory Disclosure & Submissions · R-02.1 · Deployer · Automated Decisionmaking · Foundation Model · Content Generation
9 V.S.A. § 4193e(a)-(b)
Plain Language
Every deployer of an inherently dangerous AI system must submit an AI System Safety and Impact Assessment to the Division of Artificial Intelligence before deploying the system in Vermont, and must resubmit every two years. An updated assessment is also required upon any material and substantial change to the system's purpose or the type of data it processes or uses for training. The assessment must cover 13 enumerated elements including the system's purpose, deployment context, training data description, whether personal information and copyrighted content have been removed from training data, transparency measures, third-party dependencies, post-deployment monitoring, and the system's impact on consequential decisions or biometric data collection. If a deployer is not in compliance, the Division notifies the deployer in writing and grants a 45-day cure period; failure to submit triggers referral to the Attorney General. A date-logic sketch of the filing cadence follows the statutory text below.
Statutory Text
(a) Each deployer of an inherently dangerous artificial intelligence system shall: (1) submit to the Division of Artificial Intelligence an Artificial Intelligence System Safety and Impact Assessment prior to deploying the inherently dangerous artificial intelligence system in this State, and every two years thereafter; and (2) submit to the Division of Artificial Intelligence an updated Artificial Intelligence System Safety and Impact Assessment if the deployer makes a material and substantial change to the inherently dangerous artificial intelligence system that includes: (A) the purpose for which the system is used for; or (B) the type of data the system processes or uses for training purposes. (b) Each Artificial Intelligence System Safety and Impact Assessment pursuant to subsection (a) of this section shall include, with respect to the inherently dangerous artificial intelligence system: (1) the purpose of the system; (2) the deployment context and intended use cases; (3) the benefits of use; (4) any foreseeable risk of unintended or unauthorized uses and the steps taken, to the extent reasonable, to mitigate the risk; (5) whether the model is proprietary; (6) a description of the data the system processes or uses for training purposes; (7) whether the data the system uses for training purposes has been processed to remove personal information, copyrighted information, and do not train data; (8) a description of transparency measures, including identifying to individuals when the system is in use; (9) identification of any third-party artificial intelligence systems or datasets the deployer relies on to train or operate the system, if applicable; (10) whether the developer of the system, if different than the deployer, disclosed the information pursuant to this subsection as well as the results of testing, vulnerabilities, and the parameters for safe and intended use; (11) a description of the data that the system, once deployed, processes as inputs; (12) a description of postdeployment monitoring and user safeguards, including a description of the oversight process in place to address issues as issues arise; and (13) a description of how the model impacts consequential decisions or the collection of biometric data.
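As flagged above, the filing cadence reduces to a small piece of date logic: file before deployment, refile on a material and substantial change, and otherwise refile every two years. A minimal Python sketch under those assumptions; the function and parameter names are invented for illustration.

```python
from datetime import date

RESUBMISSION_INTERVAL_YEARS = 2  # § 4193e(a)(1)

def assessment_due(last_submitted: date, today: date,
                   material_change: bool) -> bool:
    """True if a new Safety and Impact Assessment must be filed.

    A material and substantial change to the system's purpose or to the
    data it processes or trains on triggers an immediate update under
    § 4193e(a)(2); otherwise resubmission falls due every two years.
    """
    if material_change:
        return True
    try:
        next_due = last_submitted.replace(
            year=last_submitted.year + RESUBMISSION_INTERVAL_YEARS)
    except ValueError:
        # last_submitted was Feb 29; two years later is never a leap year.
        next_due = date(last_submitted.year + RESUBMISSION_INTERVAL_YEARS, 2, 28)
    return today >= next_due
```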
R-03 Operational Performance Reporting · R-03.1 · Deployer · Automated Decisionmaking
9 V.S.A. § 4193e(c)
Plain Language
In the first year after deploying a high-risk AI system, deployers must submit testing results at the one-month, six-month, and twelve-month marks to the Division of Artificial Intelligence. These results must demonstrate the reliability of the system's outputs, document any variance over time, and describe mitigation strategies for those variances. This is distinct from the broader Safety and Impact Assessment: it is a performance-monitoring submission specific to high-risk systems in their first operational year. A sketch of the resulting schedule follows the statutory text below.
Statutory Text
(c) Each deployer of a high-risk artificial intelligence system shall submit a one-, six-, and 12-month testing result to the Division of Artificial Intelligence showing the reliability of the results generated by the system, any variance in those results over the testing periods, and any mitigation strategies for variances, in the first year of deployment.
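The one-, six-, and twelve-month cadence referenced above can be computed mechanically from the deployment date. A minimal sketch, assuming a hypothetical `deployed_on` date; only the three offsets come from the statute.

```python
from datetime import date

# § 4193e(c): reliability reports due 1, 6, and 12 months after deployment.
REPORT_OFFSETS_MONTHS = (1, 6, 12)

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole months, clamping the day-of-month."""
    month_index = d.month - 1 + months
    year, month = d.year + month_index // 12, month_index % 12 + 1
    leap = year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
    days_in_month = [31, 29 if leap else 28, 31, 30, 31, 30,
                     31, 31, 30, 31, 30, 31][month - 1]
    return date(year, month, min(d.day, days_in_month))

def first_year_reporting_schedule(deployed_on: date) -> list[date]:
    """Due dates for the three first-year testing submissions."""
    return [add_months(deployed_on, m) for m in REPORT_OFFSETS_MONTHS]

# e.g. first_year_reporting_schedule(date(2025, 7, 1))
#   -> [date(2025, 8, 1), date(2026, 1, 1), date(2026, 7, 1)]
```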
S-01 AI System Safety Program · S-01.1–S-01.5 · Developer · Deployer · Automated Decisionmaking · Foundation Model · Content Generation
9 V.S.A. § 4193f(a)-(b)
Plain Language
Developers and deployers of inherently dangerous AI systems that could reasonably impact consumers must exercise reasonable care to prevent nine enumerated categories of foreseeable harm, ranging from criminal facilitation and deceptive practices to discrimination, privacy intrusion, IP violations, psychological harm, behavioral distortion, and exploitation of vulnerable populations. Additionally, developers must document and disclose to actual or potential deployers all reasonably foreseeable risks (including misuse risks) and available risk mitigation processes. This is a general duty-of-care provision: compliance with the subchapter creates a rebuttable presumption that the standard was met (per § 4193i(a)). A deployer who is not the developer is shielded from liability if they deploy in accordance with the developer's instructions and disclosures (per § 4193i(b)). A sketch of a disclosure record keyed to the nine harm categories follows the statutory text below.
Statutory Text
(a) Each developer or deployer of any inherently dangerous artificial intelligence system that could be reasonably expected to impact consumers shall exercise reasonable care to avoid any reasonably foreseeable risk arising out of the development of, intentional and substantial modification to, or deployment of an artificial intelligence system that causes or is likely to cause: (1) the commission of a crime or unlawful act; (2) any unfair or deceptive treatment of or unlawful impact on an individual; (3) any physical, financial, relational, or reputational injury on an individual; (4) psychological injuries that would be highly offensive to a reasonable person; (5) any physical or other intrusion upon the solitude or seclusion, or the private affairs or concerns of a person, if the intrusion would be offensive to a reasonable person; (6) any violation to the intellectual property rights of persons under applicable State and federal laws; (7) discrimination on the basis of a person's or class of persons' actual or perceived race, color, ethnicity, sex, sexual orientation, gender identity, sex characteristics, religion, national origin, familial status, biometric information, or disability status; (8) distortion of a person's behavior in a manner that causes or is likely to cause that person or another person physical or psychological harm; or (9) the exploitation of the vulnerabilities of a specific group of persons due to their age or physical or mental disability in order to materially distort the behavior of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm. (b) Each developer of an inherently dangerous artificial intelligence system shall document and disclose to any actual or potential deployer of the artificial intelligence system any: (1) reasonably foreseeable risk, including by unintended or unauthorized uses, that causes or is likely to cause any of the injuries as set forth in subsection (a) of this section; and (2) risk mitigation processes that are reasonably foreseeable to mitigate any injury as set forth in subsection (a) of this section.
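One way to operationalize the § 4193f(b) disclosure duty is to keep each disclosed risk keyed to the harm category it implicates, as promised above. The enum below simply mirrors the nine paragraphs of subsection (a); the record structure itself is a hypothetical sketch, not anything the bill prescribes.

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class HarmCategory(Enum):
    """The nine enumerated harms of § 4193f(a)(1)-(9)."""
    CRIME_OR_UNLAWFUL_ACT = auto()
    UNFAIR_OR_DECEPTIVE_TREATMENT = auto()
    PHYSICAL_FINANCIAL_RELATIONAL_REPUTATIONAL_INJURY = auto()
    PSYCHOLOGICAL_INJURY = auto()
    INTRUSION_ON_SECLUSION = auto()
    IP_VIOLATION = auto()
    DISCRIMINATION = auto()
    BEHAVIORAL_DISTORTION = auto()
    EXPLOITATION_OF_VULNERABLE_GROUPS = auto()

@dataclass
class RiskDisclosure:
    """One § 4193f(b) disclosure from developer to deployer."""
    category: HarmCategory
    risk_description: str  # incl. unintended or unauthorized uses, § 4193f(b)(1)
    mitigation_processes: list[str] = field(default_factory=list)  # § 4193f(b)(2)
```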
S-01 AI System Safety Program · S-01.1 · Developer · Automated Decisionmaking · Foundation Model · Content Generation
9 V.S.A. § 4193g(a)(1)
Plain Language
Developers are prohibited from placing an inherently dangerous AI system into the stream of commerce unless they have first conducted documented testing, evaluation, verification, and validation at least as stringent as the latest NIST AI Risk Management Framework. For AI systems that create reasonably foreseeable risks under the standard-of-care provision (§ 4193f), the developer must mitigate risks to the extent possible, consider alternatives, and disclose vulnerabilities and mitigation tactics to deployers. This is a pre-market gate: the system cannot be offered, sold, leased, or given away without satisfying these conditions. A sketch of the gate check follows the statutory text below.
Statutory Text
(a) No developer shall offer, sell, lease, give, or otherwise place in the stream of commerce: (1) an inherently dangerous artificial intelligence system, unless the developer has conducted a documented testing, evaluation, verification, and validation of that system at least as stringent as the latest version of the Artificial Intelligence Risk Management Framework published by the National Institute of Standards and Technology (NIST); or (2) an artificial intelligence system that creates reasonably foreseeable risks pursuant to section 4193f of this subchapter, unless the developer mitigates these risks to the extent possible, considers alternatives, and discloses vulnerabilities and mitigation tactics to a deployer.
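The pre-market condition above is a pure gate: documented TEVV at least as stringent as the latest NIST AI RMF, or no commerce. A minimal sketch of that check; the record fields are hypothetical and deliberately coarse.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TevvRecord:
    """Documented testing, evaluation, verification, and validation."""
    documented: bool
    framework_used: str  # e.g. "NIST AI RMF" plus the version applied
    at_least_as_stringent_as_latest_nist_rmf: bool

def may_place_in_commerce(inherently_dangerous: bool,
                          tevv: Optional[TevvRecord]) -> bool:
    """§ 4193g(a)(1): no commerce for an inherently dangerous system
    without documented, NIST-aligned TEVV. (The separate § 4193g(a)(2)
    duty for systems with foreseeable risks is not modeled here.)"""
    if not inherently_dangerous:
        return True
    return (tevv is not None
            and tevv.documented
            and tevv.at_least_as_stringent_as_latest_nist_rmf)
```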
G-01 AI Governance Program & Documentation · G-01.1 · Deployer · Automated Decisionmaking · Foundation Model · Content Generation
9 V.S.A. § 4193g(b)
Plain Language
Deployers may not deploy an inherently dangerous AI system or any AI system creating reasonably foreseeable risks unless they have designed and implemented a risk management policy and program for that system. The policy must specify the principles, processes, and personnel the deployer will use to identify, mitigate, and document foreseeable risks. The program must be at least as stringent as the latest NIST AI Risk Management Framework, and must also be reasonable in light of the deployer's size and complexity, the nature and scope of the system (including intended and unintended uses and deployer modifications), and the data the system processes as inputs. This is a deployment prerequisite: the program must be in place before the system goes live. A sketch of a readiness check follows the statutory text below.
Statutory Text
(b) No deployer shall deploy an inherently dangerous artificial intelligence system or an artificial intelligence system that creates reasonably foreseeable risks pursuant to section 4193f of this subchapter unless the deployer has designed and implemented a risk management policy and program for the model or system. The risk management policy shall specify the principles, processes, and personnel that the deployer shall use in maintaining the risk management program to identify, mitigate, and document any risk that is a reasonably foreseeable consequence of deploying or using the system. Each risk management policy and program designed, implemented, and maintained pursuant to this subsection shall be: (1) at least as stringent as the latest version of the Artificial Intelligence Risk Management Framework published by the NIST; and (2) reasonable considering: (A) the size and complexity of the deployer; (B) the nature and scope of the system, including the intended uses and unintended uses and the modifications made to the system by the deployer; and (C) the data that the system, once deployed, processes as inputs.
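The deployment prerequisite above can be modeled as a readiness check over the policy's three required ingredients (principles, processes, personnel) plus RMF alignment. A hypothetical sketch; the reasonableness factors in (2)(A)-(C) are judgment calls and are only recorded here, not evaluated.

```python
from dataclasses import dataclass, field

@dataclass
class RiskManagementPolicy:
    """§ 4193g(b): what the policy must specify, before deployment."""
    principles: list[str] = field(default_factory=list)
    processes: list[str] = field(default_factory=list)
    responsible_personnel: list[str] = field(default_factory=list)
    nist_rmf_version: str = ""      # should track the latest published RMF
    reasonableness_notes: str = ""  # size/complexity, system scope, input data

def ready_to_deploy(policy: RiskManagementPolicy,
                    latest_rmf_version: str) -> bool:
    """Deployment gate: a populated policy aligned to the latest NIST AI RMF."""
    return (bool(policy.principles)
            and bool(policy.processes)
            and bool(policy.responsible_personnel)
            and policy.nist_rmf_version == latest_rmf_version)
```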
Other · Automated Decisionmaking · Foundation Model · Content Generation
9 V.S.A. § 4193h(a)-(b)
Plain Language
Any violation of the subchapter constitutes an unfair practice in commerce under Vermont's existing consumer protection statute (9 V.S.A. § 2453), which independently empowers the Attorney General to seek civil penalties. Additionally, any consumer harmed by a violation may bring a private action in Superior Court for actual damages, injunctive relief, punitive damages (for intentional violations), and reasonable attorney's fees and costs. This dual enforcement channel (an AG action under the consumer protection act plus a direct private right of action) is the bill's enforcement mechanism.
Statutory Text
(a) A person who violates this subchapter or rules adopted under this subchapter commits an unfair practice in commerce in violation of section 2453 of this title. (b) A consumer harmed by a violation of this subchapter or rules adopted under this subchapter may bring an action in Superior Court for damages incurred, injunctive relief, punitive damages in the case of an intentional violation, and reasonable costs and attorney's fees.
R-02 Regulatory Disclosure & Submissions · R-02.2 · Developer · Deployer · Automated Decisionmaking · Foundation Model · Content Generation
9 V.S.A. § 4193c(c)(1)-(4)
Plain Language
The Attorney General may issue a civil investigative demand whenever there is reasonable cause to believe a violation of the subchapter has occurred. Developers and deployers must respond but may redact trade secrets or information protected by state or federal law, provided they affirmatively state the basis for redaction. Attorney-client privilege and work-product protection are preserved and not waived by disclosure. All information provided to the Attorney General under this subsection is exempt from public inspection under the Public Records Act. This creates an obligation to maintain documentation in a form producible upon demand, with defined confidentiality protections. A sketch of the redaction logic follows the statutory text below.
Statutory Text
(c)(1) Whenever the Attorney General has reasonable cause to believe that any person has engaged in or is engaging in any violation of this subchapter, the Attorney General may issue a civil investigative demand. (2) In rendering and furnishing any information requested pursuant to a civil investigative demand, a developer or deployer may redact or omit any trade secrets or information protected from disclosure by State or federal law. If a developer or deployer refuses to disclose or redacts or omits information based on the exemption from disclosure of trade secrets, the developer or deployer shall affirmatively state to the Attorney General that the basis for nondisclosure, redaction, or omission is because the information is a trade secret. (3) To the extent that any information requested pursuant to a civil investigative demand is subject to attorney-client privilege or work-product protection, disclosure of the information shall not constitute a waiver of the privilege or protection. (4) Any information, statement, or documentation provided to the Attorney General pursuant to this subsection shall be exempt from public inspection and copying under the Public Records Act.
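The redaction mechanics referenced above pair each withheld item with the affirmative statement § 4193c(c)(2) requires. A minimal sketch; the production format is invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class CidItem:
    """One record produced in response to a civil investigative demand."""
    content: str
    is_trade_secret: bool

def produce(item: CidItem) -> dict:
    """Redact trade secrets but affirmatively state the basis (§ 4193c(c)(2))."""
    if item.is_trade_secret:
        return {
            "content": "[REDACTED]",
            # The statute requires an affirmative statement that the basis
            # for nondisclosure, redaction, or omission is trade-secret status.
            "basis_for_redaction": "trade secret",
        }
    return {"content": item.content, "basis_for_redaction": None}
```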