Vermont H.341 creates safety and oversight standards for developers and deployers of "inherently dangerous" AI systems, defined to include high-risk AI systems, dual-use foundational models, and generative AI systems. Deployers must submit an AI System Safety and Impact Assessment to the Division of Artificial Intelligence before deployment and every two years thereafter, covering purpose, deployment context, training data, risk mitigation, post-deployment monitoring, and impacts on consequential decisions or biometric data collection. Developers must conduct testing aligned with the NIST AI Risk Management Framework (AI RMF) before placing inherently dangerous systems in commerce and must disclose foreseeable risks and mitigation processes to deployers; deployers in turn must design and implement a NIST AI RMF-aligned risk management program. The bill applies only to businesses operating in Vermont that are not small businesses as defined by the U.S. Small Business Administration. Enforcement is by the Attorney General; harmed consumers also have a private right of action for actual damages, injunctive relief, punitive damages for intentional violations, and attorney's fees.
"Deployer" means a person, including a developer, who uses or operates an artificial intelligence system for internal use or for use by third parties in the State.
"Developer" means a person who designs, codes, produces, owns, or substantially modifies an artificial intelligence system for internal use or for use by a third party in the State.
"Inherently dangerous artificial intelligence system" means a high-risk artificial intelligence system, dual-use foundational model, or generative artificial intelligence system.
"High-risk artificial intelligence system" means any artificial intelligence system, regardless of the number of parameters and supervision structure, that is: (A) used, or reasonably foreseeable as being used: (i) as a controlling factor in making a consequential decision; (ii) to categorize groups of persons by sensitive and protected characteristics, such as race, ethnic origin, or religious belief; (iii) in the direct management or operation of critical infrastructure; (iv) in vehicles, medical devices, or in the safety system of a product; or (v) to influence elections or voters; or (B) used to collect the biometric data of an individual from a biometric identification system without consent.
"Dual-use foundational model" means an artificial intelligence system that: (A) is trained on broad data; (B) generally uses self-supervision; (C) contains at least 10 billion parameters; (D) is applicable across a wide range of contexts; and (E) exhibits, or could be easily modified to exhibit, high levels of performance at tasks that pose a serious risk to economic security, public health or safety, or any combination of those matters, such as by: (i) substantially lowering the barrier of entry for nonexperts to design, synthesize, acquire, or use chemical, biological, radiological, or nuclear (CBRN) weapons; (ii) enabling powerful offensive cyber operations through automated vulnerability discovery and exploitation against a wide range of potential targets of cyberattacks; or (iii) permitting the evasion of human control or oversight through means of deception or obfuscation.
"Generative artificial intelligence system" means an artificial intelligence system that can generate derived synthetic content, such as text, images, video, and audio, that emulates the structure and characteristics of the artificial intelligence's training data. This definition includes an artificial intelligence agent.
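The three definitions above nest into a single test for "inherently dangerous" status: a system qualifies if it falls into any one category. The decision structure can be sketched as follows; all field names and the `AISystem` type are illustrative assumptions for modeling purposes, not statutory terms, and real coverage determinations would of course turn on legal analysis rather than boolean flags.

```python
from dataclasses import dataclass, field

# Use categories tracking the high-risk prongs in subsection (A)(i)-(v).
HIGH_RISK_USES = {
    "consequential_decision_factor",      # (A)(i)
    "sensitive_characteristic_grouping",  # (A)(ii)
    "critical_infrastructure",            # (A)(iii)
    "vehicle_medical_or_safety_system",   # (A)(iv)
    "election_or_voter_influence",        # (A)(v)
}

@dataclass
class AISystem:
    """Hypothetical attribute model of a system under review."""
    uses: set = field(default_factory=set)
    collects_biometrics_without_consent: bool = False    # prong (B)
    # Dual-use foundational model criteria (all five must hold):
    trained_on_broad_data: bool = False
    self_supervised: bool = False
    parameter_count: int = 0
    wide_applicability: bool = False
    serious_risk_capability: bool = False  # CBRN, offensive cyber, or evasion of oversight
    # Generative system criterion:
    generates_synthetic_content: bool = False

def is_high_risk(s: AISystem) -> bool:
    # Either any enumerated use under (A), or nonconsensual biometric collection under (B).
    return bool(s.uses & HIGH_RISK_USES) or s.collects_biometrics_without_consent

def is_dual_use_foundational(s: AISystem) -> bool:
    # All of criteria (A)-(E) must be satisfied, including at least 10 billion parameters.
    return (s.trained_on_broad_data
            and s.self_supervised
            and s.parameter_count >= 10_000_000_000
            and s.wide_applicability
            and s.serious_risk_capability)

def is_generative(s: AISystem) -> bool:
    return s.generates_synthetic_content

def is_inherently_dangerous(s: AISystem) -> bool:
    # Membership in any one of the three categories suffices.
    return is_high_risk(s) or is_dual_use_foundational(s) or is_generative(s)
```

Note that the categories are disjunctive for the top-level definition but the dual-use criteria are conjunctive: a 5-billion-parameter model fails the dual-use test no matter how capable, yet may still be covered as high-risk or generative.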