H-0814
VT · State · USA
● Passed
Proposed Effective Date: 2026-07-01
Vermont H.814 — An act relating to neurological rights and the use of artificial intelligence technology in health and human services
Summary

Vermont H.814 addresses AI in health and human services through three main pillars. First, it creates a new neurological rights chapter requiring written informed consent before collecting, sharing, or using neural data from brain-computer interfaces, with prohibitions on consciousness bypass without specific consent. Second, it regulates mental health chatbots by requiring suppliers to protect user data, restrict targeted advertising, disclose AI identity, and maintain detailed safety and compliance policies filed with the Attorney General. Third, it restricts the use of AI in health insurance utilization review, requiring individualized clinical data, prohibiting AI from making medical necessity determinations (reserving those to licensed human providers), and mandating periodic review and nondiscrimination. Enforcement is primarily through the Attorney General with civil penalties up to $10,000 per violation for neurological rights and $2,500 per violation for health care AI provisions, with consumer remedies available under the Vermont Consumer Protection Act.

Enforcement & Penalties
Enforcement Authority
For the neurological rights chapter (18 V.S.A. ch. 42C): Violations constitute unfair or deceptive acts under 9 V.S.A. chapter 63 (Vermont Consumer Protection Act). The Attorney General has authority to make rules, conduct civil investigations, enter into assurances of discontinuance, and bring civil actions. Consumers have the same rights and remedies as provided under 9 V.S.A. chapter 63, subchapter 1. For the AI in health care chapter (18 V.S.A. ch. 233): The Attorney General may impose administrative penalties and file actions in Superior Court with the same investigative and remedial authority as under the Vermont Consumer Protection Act. Violations of § 9752 by licensed health care providers are also subject to the jurisdiction of the Office of Professional Regulation and the Board of Medical Practice. The mental health chatbot affirmative defense provisions (§ 9764) contemplate enforcement actions by the Office of Professional Regulation or Board of Medical Practice against suppliers.
Penalties
For neurological rights violations (18 V.S.A. § 1895): Civil penalty of up to $10,000 per violation. Consumers have the same rights and remedies as under 9 V.S.A. chapter 63, subchapter 1 (Vermont Consumer Protection Act), which provides for actual damages, injunctive relief, and attorney's fees. For AI in health care violations (18 V.S.A. § 9753): Administrative penalty of up to $2,500 per violation. The Attorney General may obtain remedies as if the action were brought under the Vermont Consumer Protection Act. Each violation constitutes a separate violation.
Who Is Covered
"Supplier" means a seller, lessor, assignor, offeror, broker, or other person who regularly solicits, engages in, or enforces consumer transactions, regardless of whether the person deals directly with the consumer.
What Is Covered
"Mental health chatbot" means an artificial intelligence technology that: (i) uses generative artificial intelligence to engage in interactive conversations with a user of the mental health chatbot similar to the confidential communications that an individual would have with a licensed mental health provider; and (ii) a supplier represents, or a reasonable person would believe, can or will provide psychotherapy or help a user manage or treat mental health conditions. "Mental health chatbot" does not include artificial intelligence technology that only: (i) provides scripted output, such as guided meditations or mindfulness exercises; or (ii) analyzes an individual's input for the purpose of connecting the individual with a human mental health provider.
Compliance Obligations (19 obligations)
D-01 Automated Processing Rights & Data Controls · D-01.8 · Deployer · Manufacturer · Biometrics · Healthcare
18 V.S.A. § 1893(a)-(b)
Plain Language
No person may collect or record neural data from a brain-computer interface without first providing the individual with a written notice explaining how the data will be used, then obtaining written informed consent. This is an affirmative opt-in consent requirement — collection is prohibited by default. The consent must be voluntary, informed as to nature/benefits/risks/consequences, and may be given by an agent, guardian, or surrogate on behalf of an individual who lacks capacity.
Statutory Text
(a) Prohibition. Subject to the limited exceptions provided in this section, no person shall: (1) collect or record an individual's neural data gathered from a brain-computer interface; or (2) share with a third party an individual's neural data gathered from a brain-computer interface. (b) Consent to collect. A person shall not collect or record an individual's neural data gathered from a brain-computer interface unless the person: (1) provides the individual with a written notice explaining how the person will use the individual's neural data; and (2) thereafter receives written informed consent from the individual to collect or record the individual's neural data.
D-01 Automated Processing Rights & Data Controls · D-01.8 · Deployer · Manufacturer · Biometrics · Healthcare
18 V.S.A. § 1893(c)
Plain Language
Sharing neural data with any third party requires a separate written informed consent process, distinct from the consent to collect. The person must identify the specific third party by name and address and explain the purposes for sharing before obtaining consent. This is more granular than most biometric consent statutes, which typically bundle collection and sharing consent.
Statutory Text
(c) Consent to share. A person shall not share with a third party an individual's neural data gathered from a brain-computer interface unless the person: (1) provides the individual with a written request for the individual's neural data to be shared with a third party and for what purposes, including the name and address of the third party; and (2) thereafter receives written informed consent from the individual to share the individual's neural data with the third party.
D-01 Automated Processing Rights & Data Controls · D-01.3 · Deployer · Manufacturer · Biometrics · Healthcare
18 V.S.A. § 1893(d)
Plain Language
Individuals have the right to revoke consent for neural data collection or sharing at any time by written notice. The revocation process must be at least as easy as the original consent process. Upon receiving revocation notice, the entity must destroy all neural data records within 10 days, immediately cease sharing with all third parties, and notify all third parties that consent has been revoked. This creates a deletion obligation more aggressive than most data privacy laws (10-day destruction deadline).
Statutory Text
(d) Revocation of consent. (1) An individual who has provided written informed consent allowing a person to collect, record, or share the individual's neural data pursuant to this section has the right to revoke consent at any time thereafter by providing written notice to the person initially receiving the consent. This revocation of consent notice shall be as easy or easier for the individual to provide as compared to the requirements for initially providing consent. (2) A person who receives written notice from an individual revoking consent pursuant to subdivision (1) of this subsection shall: (A) destroy all records of the individual's neural data not later than 10 days after receiving the notice; and (B) in the case of the revocation of consent to share an individual's neural data, immediately: (i) cease sharing an individual's neural data with all third parties upon receipt of the notice; and (ii) inform all third parties with whom the person has shared the individual's neural data that the individual has revoked consent.
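The revocation workflow in § 1893(d) has a mechanical core: a 10-day destruction deadline, an immediate halt to sharing, and notice to every third-party recipient. As an illustrative sketch only (the function and field names below are hypothetical, not drawn from the bill), a covered entity's compliance tooling might track those duties like this:

```python
from datetime import date, timedelta

# § 1893(d)(2)(A): records must be destroyed within 10 days of the notice.
DESTRUCTION_DEADLINE_DAYS = 10

def handle_revocation(notice_date: date, third_parties: list[str]) -> dict:
    """Hypothetical sketch of the § 1893(d) revocation duties.

    Returns the three obligations triggered by a written revocation notice.
    """
    return {
        # (2)(A): destroy all records of the individual's neural data by this date
        "destroy_records_by": notice_date + timedelta(days=DESTRUCTION_DEADLINE_DAYS),
        # (2)(B)(i): sharing ceases immediately upon receipt of the notice
        "cease_sharing_effective": notice_date,
        # (2)(B)(ii): every third party that received the data must be informed
        "third_parties_to_notify": list(third_parties),
    }
```

This is a compliance-tracking sketch under stated assumptions, not legal advice; the statute itself governs edge cases such as notices delivered by an agent, guardian, or surrogate.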
Other · Manufacturer · Biometrics · Healthcare
18 V.S.A. § 1894(a)-(b)
Plain Language
Manufacturers of brain-computer interfaces may not allow their devices to bypass an individual's conscious decision-making unless they obtain specific, written informed consent for each category of action the device will perform. Consent obtained through a consciousness bypass itself is categorically invalid. Manufacturers must keep records of all consent. Consent may be revoked at any time by the individual (or their agent/guardian/surrogate) with a process at least as easy as the original consent. This is a first-of-its-kind neurological autonomy protection that has no direct analogue in AI regulation.
Statutory Text
(a) Specific consent required. (1) A person shall not allow a brain-computer interface it manufactures to be used to bypass the conscious decision making of an individual unless the person has received specific, written informed consent from the individual. As used in this section, "specific" means written consent for each and every category of action performed by the brain-computer interface. (2) A person receiving written informed consent from an individual shall keep a record of the individual's consent. (3) Consent obtained by using a consciousness bypass is not informed consent. (b) Revoking consent. (1) An individual who has provided specific, written informed consent allowing a brain-computer interface to be used to bypass the conscious decision making of the individual pursuant to this section has the right to revoke consent at any time thereafter by providing notice to the person initially receiving the consent. This revocation of consent notice shall be as easy or easier for the individual to provide as compared to the requirements for initially providing consent. (2) An individual's agent, guardian, or surrogate has the right to revoke consent on behalf of the individual pursuant to subdivision (1) of this subsection.
T-01 AI Identity Disclosure · T-01.1 · Deployer · Healthcare · Chatbot
18 V.S.A. § 9752(a)-(b)
Plain Language
Health care providers using generative AI to produce patient communications about clinical information must include two things: (1) a disclaimer that the communication was AI-generated, with specific placement rules depending on the medium (prominently at the beginning for letters/emails, displayed throughout for chat and video, verbally at start and end for audio); and (2) clear instructions for contacting a human provider. There is a safe harbor: if a licensed human health care provider reads and reviews the AI-generated communication before it reaches the patient, no disclaimer is required. This creates a practical choice for providers — either have a human review every AI-generated communication, or label it.
Statutory Text
(a) Except as provided in subsection (b) of this section, any health care provider that uses generative artificial intelligence to generate written or verbal patient communications relating to patient clinical information shall ensure that those communications include both of the following: (1) A disclaimer that indicates to the patient that the communication was generated by generative artificial intelligence. (A) For written communications involving physical and digital media, including letters, emails, and other occasional messages, the disclaimer shall appear prominently at the beginning of each communication. (B) For written communications involving continuous online interactions, including chat-based telehealth, the disclaimer shall be prominently displayed throughout the interaction. (C) For audio communications, the disclaimer shall be provided verbally at the start and end of the interaction. (D) For video communications, the disclaimer shall be prominently displayed throughout the interaction. (2) Clear instructions describing how a patient may contact a human health care provider; an employee of the health care facility, clinic, physician's office, or office of a group provider; or other appropriate person. (b) If a communication is generated by generative artificial intelligence and read and reviewed by a licensed human health care provider, the requirements of subsection (a) of this section shall not apply.
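The § 9752 structure reduces to two checks: whether a disclaimer is required at all (the human-review safe harbor in subsection (b)) and, if so, where it must appear (the per-medium rules in (a)(1)(A)-(D)). A minimal sketch, assuming hypothetical key and function names not found in the bill:

```python
# Illustrative mapping of the § 9752(a)(1) placement rules by communication medium.
# The dictionary keys are hypothetical labels; the statute states these rules in prose.
PLACEMENT_RULES = {
    "letter_or_email": "prominently at the beginning of each communication",   # (A)
    "chat":            "prominently displayed throughout the interaction",      # (B)
    "audio":           "provided verbally at the start and end of the interaction",  # (C)
    "video":           "prominently displayed throughout the interaction",      # (D)
}

def disclaimer_required(ai_generated: bool, human_reviewed: bool) -> bool:
    """§ 9752(b) safe harbor: review by a licensed human provider
    removes the disclaimer requirement entirely."""
    return ai_generated and not human_reviewed
```

The sketch makes the provider's practical choice concrete: route every AI-generated communication through human review (`disclaimer_required` returns `False`), or label it according to the applicable placement rule.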
D-01 Automated Processing Rights & Data Controls · D-01.4 · Deployer · Chatbot · Healthcare
18 V.S.A. § 9761(a)-(b)
Plain Language
Suppliers of mental health chatbots are prohibited from selling or sharing Vermont users' individually identifiable health information or user inputs with third parties, with three narrow exceptions: (1) health care providers requesting data with user consent, (2) health plans at user request, and (3) contractors necessary for chatbot functionality, who must comply with HIPAA privacy and security rules as if the supplier were a HIPAA covered entity. This effectively extends HIPAA-equivalent obligations to mental health chatbot suppliers who would not otherwise be covered entities under federal law.
Statutory Text
(a)(1) Except as provided in subdivision (2) of this subsection, a supplier of a mental health chatbot shall not sell to or share with any third party any: (A) individually identifiable health information of a Vermont user; or (B) user input of a Vermont user. (2) The prohibition set forth in subdivision (1) of this subsection shall not apply to individually identifiable health information that is: (A) requested by a health care provider with the consent of the Vermont user; (B) provided to a health plan of a Vermont user upon request of the Vermont user; or (C) shared in compliance with subsection (b) of this section. (b)(1) A supplier may share individually identifiable health information necessary to ensure the effective functionality of the mental health chatbot with another person with whom the supplier has a contract related to such functionality. (2) When sharing information pursuant to subdivision (1) of this subsection, the supplier and the other person shall comply with all applicable privacy and security provisions of 45 C.F.R. Part 160 and 45 C.F.R. Part 164, Subparts A and E, as if the supplier were a covered entity and the other person were a business associate, as those terms are defined in 45 C.F.R. § 160.103.
CP-01 Deceptive & Manipulative AI Conduct · CP-01.1 · Deployer · Chatbot · Healthcare
18 V.S.A. § 9762(a)-(c)
Plain Language
Suppliers of mental health chatbots face two advertising restrictions: (1) any in-conversation advertisement must be clearly labeled as an ad and must disclose all sponsorship, business affiliations, and third-party promotional agreements; and (2) suppliers may not use user inputs to target, select, or customize advertisements (with a narrow exception for advertising the chatbot itself). The ban on using user inputs for ad targeting is a categorical prohibition — not merely a disclosure obligation. Recommending that a user seek help from a licensed provider is expressly carved out and is not considered advertising.
Statutory Text
(a) A supplier shall not use a mental health chatbot to advertise a specific product or service to a Vermont user in a conversation between the Vermont user and the mental health chatbot unless the mental health chatbot: (1) clearly and conspicuously identifies the advertisement as an advertisement; and (2) clearly and conspicuously discloses to the Vermont user any: (A) sponsorship; (B) business affiliation; or (C) agreement that the supplier has with a third party to promote, advertise, or recommend the product or service. (b) A supplier of a mental health chatbot shall not use a Vermont user's input to: (1) determine whether to display an advertisement for a product or service to the Vermont user, unless the advertisement is for the mental health chatbot itself; (2) determine a product, service, or category of product or service to advertise to the Vermont user; or (3) customize how an advertisement is presented to a Vermont user. (c) Nothing in this section shall be construed to prohibit a mental health chatbot from recommending that a Vermont user seek psychotherapy or other assistance from a licensed health care provider, including a specific licensed health care provider.
T-01 AI Identity Disclosure · T-01.1 · T-01.3 · Deployer · Chatbot · Healthcare
18 V.S.A. § 9763(a)-(b)
Plain Language
Suppliers must ensure the mental health chatbot clearly and conspicuously discloses that it is AI and not a human at three trigger points: (1) before the user can access chatbot features (unconditional initial disclosure); (2) at the beginning of any interaction after a 7-day gap in access (re-disclosure after absence); and (3) whenever a user asks or prompts about whether AI is being used (on-demand disclosure). This is an unconditional disclosure — it applies regardless of whether a reasonable person would be misled. The 7-day re-disclosure trigger is a distinctive feature compared to other states' periodic re-disclosure requirements.
Statutory Text
(a) A supplier of a mental health chatbot shall cause the mental health chatbot to clearly and conspicuously disclose to a Vermont user that the mental health chatbot is an artificial intelligence technology and not a human. (b) The disclosure described in subsection (a) of this section shall be made: (1) before the Vermont user may access the features of the mental health chatbot; (2) at the beginning of any interaction with the Vermont user if the Vermont user has not accessed the mental health chatbot within the previous seven days; and (3) any time a Vermont user asks or otherwise prompts the mental health chatbot about whether artificial intelligence is being used.
G-01 AI Governance Program & Documentation · G-01.1 · G-01.3 · Deployer · Chatbot · Healthcare
18 V.S.A. § 9764(a)-(b)
Plain Language
Suppliers of mental health chatbots may assert an affirmative defense to professional conduct enforcement actions if they can demonstrate they: (1) created, maintained, and implemented a comprehensive written policy covering the chatbot's intended purposes, abilities, limitations, safety procedures (including licensed provider involvement in development, clinical best-practice compliance, pre- and post-deployment testing, adverse outcome identification, user harm reporting mechanisms, real-time crisis response protocols, regular safety audits, nondiscrimination measures, and HIPAA-equivalent compliance); (2) maintained documentation of foundation models used, training tools, privacy compliance, data practices, and ongoing accuracy/safety efforts; (3) filed the policy with the Attorney General; and (4) complied with the filed policy at the time of the alleged violation. This is structured as a safe harbor rather than an affirmative obligation — but practically, any supplier that wants access to the defense must build and maintain this comprehensive governance program.
Statutory Text
(a) It is an affirmative defense to liability in an action for unlawful or unprofessional conduct brought against a supplier by the Office of Professional Regulation or the Board of Medical Practice if the supplier demonstrates that the supplier meets all of the following conditions: (1) the supplier created, maintained, and implemented a policy that meets the requirements of subsection (b) of this section; (2) the supplier maintains documentation regarding the development and implementation of the mental health chatbot that describes: (A) foundation models used in development; (B) training tools used; (C) compliance with federal health privacy regulations; (D) user data collection and sharing practices; and (E) ongoing efforts to ensure accuracy, reliability, fairness, and safety; (3) the supplier filed the policy with the Office of the Attorney General; and (4) the supplier complied with all requirements of the filed policy at the time of the alleged violation. (b) A policy described in subdivision (a)(1) of this section shall meet all of the following requirements: (1) be in writing; (2) clearly state: (A) the intended purposes of the mental health chatbot; and (B) the abilities and limitations of the mental health chatbot; (3) describe the procedures by which the supplier: (A) ensures that qualified mental health providers licensed in Vermont or in one or more other states, or both, are involved in the development and review process; (B) ensures that the mental health chatbot is developed and monitored in a manner consistent with clinical best practices; (C) conducts testing prior to making the mental health chatbot publicly available and regularly thereafter to ensure that the output of the mental health chatbot poses no greater risk to a user than that posed to an individual in psychotherapy with a licensed mental health provider; (D) identifies reasonably foreseeable adverse outcomes to and potentially harmful interactions with users that could result from using the mental health chatbot; (E) provides a mechanism for a user to report any potentially harmful interactions from use of the mental health chatbot; (F) implements protocols to assess and respond to risk of harm to users or other individuals; (G) details actions taken to prevent or mitigate any such adverse outcomes or potentially harmful interactions; (H) implements protocols to respond in real time to acute risk of physical harm; (I) reasonably ensures regular, objective reviews of safety, accuracy, and efficacy, which may include internal or external audits; (J) provides users any necessary instructions on the safe use of the mental health chatbot; (K) ensures users understand that they are interacting with artificial intelligence; (L) ensures users understand the intended purpose, capabilities, and limitations of the mental health chatbot; (M) prioritizes user mental health and safety over engagement metrics or profit; (N) implements measures to prevent discriminatory treatment of users; and (O) ensures compliance with the security and privacy protections of 45 C.F.R. Part 160 and 45 C.F.R. Part 164, Subparts A, C, and E, as if the supplier were a covered entity, and applicable consumer protection requirements, including sections 9761-9763 of this subchapter.
R-02 Regulatory Disclosure & Submissions · R-02.1 · Deployer · Chatbot · Healthcare
18 V.S.A. § 9764(c)
Plain Language
To activate the affirmative defense, suppliers must file with the Attorney General's office: the supplier's name and address, the chatbot's name, the comprehensive written policy described in § 9764(b), and a $100 filing fee. Suppliers may also voluntarily file policy revisions and additional documentation. While technically optional (the filing is part of an affirmative defense, not a standalone mandate), the practical incentive to file is strong for any supplier that wants regulatory protection.
Statutory Text
(c) To file a policy with the Office of the Attorney General under this section, a supplier of a mental health chatbot: (1) shall provide to the Office, in the form and manner prescribed by the Office: (A) the name and address of the supplier; (B) the name of the mental health chatbot supplied by the supplier; (C) the written policy described in subsection (b) of this section; and (D) a $100.00 filing fee; and (2) may provide to the Office: (A) any revisions to a policy filed under this section; and (B) any other documentation that the supplier elects to provide.
HC-01 Healthcare AI Decision Restrictions · HC-01.1 · HC-01.2 · HC-01.3 · Deployer · Healthcare
18 V.S.A. § 9771(a)(1)-(2), (b)
Plain Language
Health plans using AI, algorithms, or software tools for utilization review must ensure those tools base determinations on individualized patient data — the individual's medical history, clinical circumstances presented by the provider, and other relevant clinical records — and may not rely solely on group-level datasets. Critically, the AI tool may not deny, delay, or modify health care services based on medical necessity; only a licensed human health care provider competent in the relevant clinical specialty may make medical necessity determinations, considering the requesting provider's recommendation and the patient's individual circumstances. This applies whether the health plan uses AI internally or contracts with a third-party entity. The obligation covers prospective, retrospective, and concurrent utilization review (§ 9771(c)).
Statutory Text
(a) A health plan, as defined in section 9418 of this title, that uses an artificial intelligence, algorithm, or other software tool for the purpose of utilization review or utilization management functions, based in whole or in part on medical necessity, or that contracts with or otherwise works through an entity that uses artificial intelligence, algorithm, or other software tool for the purpose of utilization review or utilization management functions, based in whole or in part on medical necessity, shall ensure all of the following: (1) The artificial intelligence, algorithm, or other software tool bases its determination on the following information, as applicable: (A) a covered individual's medical or other clinical history; (B) the specific clinical circumstances as presented by the requesting health care provider; and (C) other relevant clinical information contained in the covered individual's medical or other clinical record. (2) The artificial intelligence, algorithm, or other software tool does not base its determination solely on a group dataset. (b) Notwithstanding subsection (a) of this section, the artificial intelligence, algorithm, or other software tool shall not deny, delay, or modify health care services based in whole or in part on medical necessity. A determination of medical necessity shall be made only by a licensed human health care provider who is competent to evaluate the specific clinical issues involved in the health care services requested by a treating health care provider by reviewing and considering the requesting provider's recommendation; the covered individual's medical or other clinical history, as appropriate; and the specific clinical circumstances.
HC-01 Healthcare AI Decision Restrictions · HC-01.4 · Deployer · Healthcare
18 V.S.A. § 9771(a)(9)
Plain Language
Health plans must periodically review and revise the performance, use, and outcomes of any AI, algorithm, or software tool used in utilization review to maximize accuracy and reliability. This is an ongoing operational obligation — not a one-time pre-deployment assessment.
Statutory Text
(9) The artificial intelligence, algorithm, or other software tool's performance, use, and outcomes are periodically reviewed and revised to maximize accuracy and reliability.
HC-01 Healthcare AI Decision Restrictions · HC-01.5 · Deployer · Healthcare
18 V.S.A. § 9771(a)(10)
Plain Language
Patient data used by AI tools in utilization review must not be used beyond its intended and stated purpose. Compliance must be consistent with Vermont's health information technology chapter (18 V.S.A. ch. 42B) and with HIPAA privacy and security rules. This is a purpose limitation rule specific to healthcare AI data.
Statutory Text
(10) Patient data is not used beyond its intended and stated purpose, consistent with chapter 42B of this title and with the security and privacy protections of 45 C.F.R. Part 160 and 45 C.F.R. Part 164, Subparts A and E, as applicable.
HC-01 Healthcare AI Decision Restrictions · HC-01.7 · Deployer · Healthcare
18 V.S.A. § 9771(a)(7)-(8)
Plain Language
Health plans must ensure their AI utilization review tools are open to inspection and audit by the Department of Financial Regulation and other state agencies. Plans must also include disclosures about AI use and oversight in their written policies and procedures to the extent the Department of Financial Regulation requires. This creates both a regulatory access obligation and a documentation/disclosure obligation.
Statutory Text
(7) The artificial intelligence, algorithm, or other software tool is open to inspection for audit or compliance reviews by the Department of Financial Regulation and by other State agencies and departments pursuant to applicable State and federal law. (8) Disclosures pertaining to the use and oversight of the artificial intelligence, algorithm, or other software tool are contained in the health plan's written policies and procedures to the extent required by the Department of Financial Regulation.
H-02 Non-Discrimination & Bias Assessment · H-02.1 · Deployer · Healthcare
18 V.S.A. § 9771(a)(4)-(6)
Plain Language
Health plans must ensure AI utilization review tools do not supplant provider decision-making (reinforcing the human oversight requirement in § 9771(b)), do not discriminate directly or indirectly against covered individuals in violation of state or federal law, and are fairly and equitably applied consistent with HHS regulations and guidance. The nondiscrimination obligation covers both direct and indirect (disparate impact) discrimination.
Statutory Text
(4) The artificial intelligence, algorithm, or other software tool does not supplant health care provider decision making. (5) The use of the artificial intelligence, algorithm, or other software tool does not discriminate, directly or indirectly, against covered individuals in violation of State or federal law. (6) The artificial intelligence, algorithm, or other software tool is fairly and equitably applied, including in accordance with any applicable regulations and guidance issued by the U.S. Department of Health and Human Services.
Other · Deployer · Healthcare
18 V.S.A. § 9771(a)(11)
Plain Language
Health plans must ensure their AI utilization review tools do not directly or indirectly cause harm to covered individuals. This is a broad duty of care — it goes beyond nondiscrimination and accuracy to create a general prohibition on harmful outcomes from AI in utilization review. The scope of "harm" is not defined, which creates significant interpretive latitude.
Statutory Text
(11) The artificial intelligence, algorithm, or other software tool does not directly or indirectly cause harm to the covered individual.
Other · Biometrics · Healthcare
18 V.S.A. § 1895(a)-(c)
Plain Language
Violations of the neurological rights chapter are deemed unfair or deceptive trade practices under Vermont's Consumer Protection Act, subjecting violators to civil penalties of up to $10,000 per violation. The Attorney General has full CPA enforcement authority, and consumers have the same rights and remedies as under the CPA (including private right of action). This is an enforcement hook — it does not create new substantive obligations.
Statutory Text
(a) A violation of this chapter shall constitute an unfair or deceptive act or practice in commerce under 9 V.S.A. chapter 63, Vermont's Consumer Protection Act. (b) A person who violates this chapter shall be subject to a civil penalty of not more than $10,000.00 for each violation. (c) The Attorney General shall have the same authority to make rules, conduct civil investigations, enter into assurances of discontinuance, and bring civil actions as provided under 9 V.S.A. chapter 63, subchapter 1. Consumers shall have the same rights and remedies as provided under 9 V.S.A. chapter 63, subchapter 1.
Other · Healthcare · Chatbot
18 V.S.A. § 9753
Plain Language
The Attorney General may impose administrative penalties of up to $2,500 per violation of the AI in health care chapter and may file Superior Court actions with CPA-equivalent investigative and remedial authority. Each violation is separately actionable. This is an enforcement mechanism, not a substantive compliance obligation.
Statutory Text
The Attorney General may impose an administrative penalty of not more than $2,500.00 for each violation of this chapter. In addition, and in addition to any other remedy provided by law, the Attorney General may file an action in Superior Court for a violation of this chapter. The Attorney General shall have the same authority to investigate and obtain remedies as if the action were brought under the Vermont Consumer Protection Act, 9 V.S.A. chapter 63. Each violation of this chapter constitutes a separate violation for which the Attorney General may obtain relief.
Other · Healthcare
18 V.S.A. § 9771(a)(3)
Plain Language
AI utilization review tools must comply with existing Vermont insurance law (8 V.S.A. ch. 107), health care administration law (18 V.S.A. ch. 221), and all other applicable state and federal laws. This is a compliance pass-through confirming that AI tools are not exempt from existing regulatory frameworks — it does not create a new AI-specific obligation.
Statutory Text
(3) The artificial intelligence, algorithm, or other software tool's criteria and guidelines comply with 8 V.S.A. chapter 107, chapter 221 of this title, and other applicable State and federal laws.