H-0814
VT · State · USA
VT
USA
● Passed
Proposed Effective Date
2026-07-01
Vermont H.814 — An act relating to neurological rights and the use of artificial intelligence technology in health and human services
Summary

Vermont H.814 is a multi-part bill addressing AI in health care, neurological rights, and mental health chatbots. It creates new neurological rights protections requiring written informed consent before collecting, recording, or sharing neural data from brain-computer interfaces, and prohibiting consciousness bypass without specific written consent. It imposes disclosure requirements on health care providers using generative AI for patient communications and restricts the use of AI in health insurance utilization review by requiring individualized clinical data and human decision-making for medical necessity determinations. The bill regulates mental health chatbot suppliers by prohibiting sale/sharing of user health data, restricting targeted advertising, requiring AI identity disclosure, and creating an affirmative defense framework tied to filing a comprehensive safety and compliance policy with the Attorney General. Enforcement is primarily through the Attorney General under the Consumer Protection Act, with additional professional regulation jurisdiction.

Enforcement & Penalties
Enforcement Authority
Chapter 42C (Neurological Rights): Attorney General enforcement under 9 V.S.A. chapter 63 (Consumer Protection Act), with authority to make rules, conduct civil investigations, enter into assurances of discontinuance, and bring civil actions. Consumers have the same rights and remedies as provided under 9 V.S.A. chapter 63, subchapter 1. Chapter 233 (AI in Health Care): Attorney General may impose administrative penalties and file actions in Superior Court with the same investigative and remedial authority as under the Vermont Consumer Protection Act. For § 9752 violations by licensed health care providers, the Office of Professional Regulation and Board of Medical Practice also have jurisdiction. For mental health chatbot suppliers, the Office of Professional Regulation and Board of Medical Practice may bring actions for unlawful or unprofessional conduct.
Penalties
Chapter 42C (Neurological Rights): Civil penalty of not more than $10,000 per violation. Consumers have the same rights and remedies as under 9 V.S.A. chapter 63, subchapter 1 (Vermont Consumer Protection Act). Chapter 233 (AI in Health Care): Administrative penalty of not more than $2,500 per violation. Each violation constitutes a separate violation for which the Attorney General may obtain relief. Attorney General may also obtain remedies available under the Vermont Consumer Protection Act. Mental health chatbot suppliers may also face professional regulation actions.
Who Is Covered
"Supplier" means a seller, lessor, assignor, offeror, broker, or other person who regularly solicits, engages in, or enforces consumer transactions, regardless of whether the person deals directly with the consumer.
What Is Covered
"Mental health chatbot" means an artificial intelligence technology that: (i) uses generative artificial intelligence to engage in interactive conversations with a user of the mental health chatbot similar to the confidential communications that an individual would have with a licensed mental health provider; and (ii) a supplier represents, or a reasonable person would believe, can or will provide psychotherapy or help a user manage or treat mental health conditions. "Mental health chatbot" does not include artificial intelligence technology that only: (i) provides scripted output, such as guided meditations or mindfulness exercises; or (ii) analyzes an individual's input for the purpose of connecting the individual with a human mental health provider.
Compliance Obligations · 17 obligations
D-01 Automated Processing Rights & Data Controls · D-01.8 · Deployer, Manufacturer · Biometrics, Healthcare
18 V.S.A. § 1893(a)-(b)
Plain Language
No person may collect or record neural data from a brain-computer interface unless they first provide the individual with a written notice explaining how the data will be used, and then receive written informed consent. This is an affirmative opt-in requirement — collection is prohibited by default. The written notice must precede and be separate from the consent itself. Consent must be voluntary, from an individual with capacity, and given after full disclosure of the nature, benefits, risks, and consequences.
Statutory Text
(a) Prohibition. Subject to the limited exceptions provided in this section, no person shall: (1) collect or record an individual's neural data gathered from a brain-computer interface; or (2) share with a third party an individual's neural data gathered from a brain-computer interface. (b) Consent to collect. A person shall not collect or record an individual's neural data gathered from a brain-computer interface unless the person: (1) provides the individual with a written notice explaining how the person will use the individual's neural data; and (2) thereafter receives written informed consent from the individual to collect or record the individual's neural data.
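The opt-in sequence above, written notice first, then a separate written informed consent, with collection prohibited by default, can be illustrated as a minimal compliance gate. This is a hypothetical sketch; `ConsentRecord` and `may_collect_neural_data` are illustrative names, not anything defined by the bill:

```python
from dataclasses import dataclass

@dataclass
class ConsentRecord:
    """Illustrative per-individual state under 18 V.S.A. § 1893(a)-(b)."""
    notice_delivered: bool = False   # written notice explaining intended use
    written_consent: bool = False    # informed consent given after the notice

def may_collect_neural_data(record: ConsentRecord) -> bool:
    """Collection is allowed only when the written notice preceded
    a separate written informed consent; the default answer is no."""
    return record.notice_delivered and record.written_consent
```

Note that consent alone is not enough under this sketch: `ConsentRecord(written_consent=True)` still fails the gate because the statute requires the notice to come first.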
D-01 Automated Processing Rights & Data Controls · D-01.8 · Deployer, Manufacturer · Biometrics, Healthcare
18 V.S.A. § 1893(c)
Plain Language
Before sharing any individual's neural data from a brain-computer interface with a third party, the person must provide a written request to the individual specifying the purpose for sharing and the name and address of the third party, and must receive written informed consent. This is a separate consent requirement from collection — even if an individual consented to collection, sharing requires its own specific, written informed consent naming each third-party recipient.
Statutory Text
(c) Consent to share. A person shall not share with a third party an individual's neural data gathered from a brain-computer interface unless the person: (1) provides the individual with a written request for the individual's neural data to be shared with a third party and for what purposes, including the name and address of the third party; and (2) thereafter receives written informed consent from the individual to share the individual's neural data with the third party.
D-01 Automated Processing Rights & Data Controls · D-01.3 · Deployer, Manufacturer · Biometrics, Healthcare
18 V.S.A. § 1893(d)
Plain Language
Individuals have the right to revoke consent to collect, record, or share neural data at any time. The revocation mechanism must be at least as easy as the original consent process. Upon receiving a revocation notice, the person must destroy all records of the individual's neural data within 10 days, immediately cease all third-party sharing, and notify all third parties with whom neural data was shared. This creates both a deletion right and a downstream notification obligation — merely stopping collection is insufficient.
Statutory Text
(d) Revocation of consent. (1) An individual who has provided written informed consent allowing a person to collect, record, or share the individual's neural data pursuant to this section has the right to revoke consent at any time thereafter by providing written notice to the person initially receiving the consent. This revocation of consent notice shall be as easy or easier for the individual to provide as compared to the requirements for initially providing consent. (2) A person who receives written notice from an individual revoking consent pursuant to subdivision (1) of this subsection shall: (A) destroy all records of the individual's neural data not later than 10 days after receiving the notice; and (B) in the case of the revocation of consent to share an individual's neural data, immediately: (i) cease sharing an individual's neural data with all third parties upon receipt of the notice; and (ii) inform all third parties with whom the person has shared the individual's neural data that the individual has revoked consent.
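The three duties triggered by a revocation notice (10-day destruction deadline, immediate cessation of sharing, and downstream notification) can be sketched as follows. This is an illustrative outline, not a definitive implementation; the function name and return structure are assumptions:

```python
from datetime import date, timedelta

DESTRUCTION_WINDOW_DAYS = 10  # § 1893(d)(2)(A)

def revocation_duties(notice_received: date, third_parties: list[str]) -> dict:
    """Sketch of the obligations that attach when a written
    revocation notice is received under § 1893(d)(2)."""
    return {
        # (A): destroy all records within 10 days of the notice
        "destroy_records_by": notice_received + timedelta(days=DESTRUCTION_WINDOW_DAYS),
        # (B)(i): stop all third-party sharing upon receipt
        "cease_sharing_immediately": True,
        # (B)(ii): tell every prior recipient that consent was revoked
        "notify_third_parties": sorted(third_parties),
    }
```

The point the sketch makes concrete: merely stopping collection is insufficient; deletion and downstream notification are independent duties with their own triggers.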
Other · Biometrics, Healthcare
18 V.S.A. § 1894(a)-(b)
Plain Language
Manufacturers of brain-computer interfaces may not allow their devices to bypass an individual's conscious decision making unless they have received specific, written informed consent for each category of action the device will perform. Consent obtained through a consciousness bypass itself is void. Records of consent must be maintained. Individuals (or their agents, guardians, or surrogates) may revoke consent at any time, and the revocation process must be at least as easy as the original consent process. This is a first-of-its-kind neurotechnology consent provision with no close analog in existing AI regulation.
Statutory Text
(a) Specific consent required. (1) A person shall not allow a brain-computer interface it manufactures to be used to bypass the conscious decision making of an individual unless the person has received specific, written informed consent from the individual. As used in this section, "specific" means written consent for each and every category of action performed by the brain-computer interface. (2) A person receiving written informed consent from an individual shall keep a record of the individual's consent. (3) Consent obtained by using a consciousness bypass is not informed consent. (b) Revoking consent. (1) An individual who has provided specific, written informed consent allowing a brain-computer interface to be used to bypass the conscious decision making of the individual pursuant to this section has the right to revoke consent at any time thereafter by providing notice to the person initially receiving the consent. This revocation of consent notice shall be as easy or easier for the individual to provide as compared to the requirements for initially providing consent. (2) An individual's agent, guardian, or surrogate has the right to revoke consent on behalf of the individual pursuant to subdivision (1) of this subsection.
T-01 AI Identity Disclosure · T-01.1 · Deployer · Healthcare, Chatbot
18 V.S.A. § 9752(a)-(b)
Plain Language
Health care providers using generative AI to create patient communications about clinical information must include a disclaimer that the communication was AI-generated and provide clear instructions for contacting a human provider. The disclaimer placement varies by medium: at the beginning for letters/emails, throughout for chat and video, and verbally at start and end for audio. A critical safe harbor applies: if a licensed human provider reads and reviews the AI-generated communication before it is sent, none of these requirements apply. Additionally, violations by licensed providers are subject to jurisdiction of the Office of Professional Regulation and Board of Medical Practice.
Statutory Text
(a) Except as provided in subsection (b) of this section, any health care provider that uses generative artificial intelligence to generate written or verbal patient communications relating to patient clinical information shall ensure that those communications include both of the following: (1) A disclaimer that indicates to the patient that the communication was generated by generative artificial intelligence. (A) For written communications involving physical and digital media, including letters, emails, and other occasional messages, the disclaimer shall appear prominently at the beginning of each communication. (B) For written communications involving continuous online interactions, including chat-based telehealth, the disclaimer shall be prominently displayed throughout the interaction. (C) For audio communications, the disclaimer shall be provided verbally at the start and end of the interaction. (D) For video communications, the disclaimer shall be prominently displayed throughout the interaction. (2) Clear instructions describing how a patient may contact a human health care provider; an employee of the health care facility, clinic, physician's office, or office of a group provider; or other appropriate person. (b) If a communication is generated by generative artificial intelligence and read and reviewed by a licensed human health care provider, the requirements of subsection (a) of this section shall not apply.
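The medium-dependent disclaimer placement and the human-review safe harbor lend themselves to a simple lookup. The following is a hedged sketch under assumed medium labels; `required_disclosures` and the dictionary keys are illustrative, not statutory terms:

```python
# Placement rules from § 9752(a)(1)(A)-(D), keyed by assumed medium labels
DISCLAIMER_PLACEMENT = {
    "letter_or_email": "prominently at the beginning of each communication",
    "chat": "prominently displayed throughout the interaction",
    "audio": "verbally at the start and end of the interaction",
    "video": "prominently displayed throughout the interaction",
}

def required_disclosures(medium: str, human_reviewed: bool):
    """Return the disclosure duties for an AI-generated patient
    communication, or None when the § 9752(b) safe harbor applies."""
    if human_reviewed:  # read and reviewed by a licensed human provider
        return None
    return {
        "disclaimer": DISCLAIMER_PLACEMENT[medium],
        "human_contact_instructions": True,  # § 9752(a)(2)
    }
```

Under this sketch, human review switches off both duties at once, which is how the safe harbor reads: subsection (a) "shall not apply" in its entirety.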
D-01 Automated Processing Rights & Data Controls · D-01.4 · Deployer · Chatbot, Healthcare
18 V.S.A. § 9761(a)-(b)
Plain Language
Suppliers of mental health chatbots are broadly prohibited from selling or sharing Vermont users' individually identifiable health information or user inputs with third parties. Three narrow exceptions apply: (1) when a health care provider requests the information with user consent, (2) when the user requests the information be sent to their health plan, or (3) when sharing with a contractor is necessary for the chatbot's effective functionality. In the contractor-sharing exception, both the supplier and contractor must comply with HIPAA privacy and security rules as if the supplier were a HIPAA covered entity and the contractor a business associate. Notably, user input is protected absolutely — even the contractor exception applies only to individually identifiable health information, not to user input.
Statutory Text
(a)(1) Except as provided in subdivision (2) of this subsection, a supplier of a mental health chatbot shall not sell to or share with any third party any: (A) individually identifiable health information of a Vermont user; or (B) user input of a Vermont user. (2) The prohibition set forth in subdivision (1) of this subsection shall not apply to individually identifiable health information that is: (A) requested by a health care provider with the consent of the Vermont user; (B) provided to a health plan of a Vermont user upon request of the Vermont user; or (C) shared in compliance with subsection (b) of this section. (b)(1) A supplier may share individually identifiable health information necessary to ensure the effective functionality of the mental health chatbot with another person with whom the supplier has a contract related to such functionality. (2) When sharing information pursuant to subdivision (1) of this subsection, the supplier and the other person shall comply with all applicable privacy and security provisions of 45 C.F.R. Part 160 and 45 C.F.R. Part 164, Subparts A and E, as if the supplier were a covered entity and the other person were a business associate, as those terms are defined in 45 C.F.R. § 160.103.
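The asymmetry noted above, user input protected absolutely while identifiable health information has three narrow exceptions, can be made concrete with a default-deny check. The basis labels below are illustrative shorthand for the statutory exceptions, not terms from the bill:

```python
# Assumed shorthand for the three exceptions in § 9761(a)(2)(A)-(C)
PERMITTED_BASES = {
    "provider_request_with_user_consent",        # (A)
    "health_plan_at_user_request",               # (B)
    "hipaa_compliant_functionality_contractor",  # (C), subject to (b)
}

def may_share_with_third_party(data_type: str, basis: str) -> bool:
    """Sketch of § 9761: user input may never be sold or shared;
    identifiable health information only on an enumerated basis."""
    if data_type == "user_input":
        return False  # no exception reaches user input
    if data_type == "identifiable_health_info":
        return basis in PERMITTED_BASES
    return False  # default-deny anything unrecognized
```

The first assertion in the accompanying check is the key one: even the contractor exception does not unlock user input.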
CP-01 Deceptive & Manipulative AI Conduct · CP-01.5 · Deployer · Chatbot, Healthcare
18 V.S.A. § 9762(a)-(c)
Plain Language
Suppliers of mental health chatbots face two layers of advertising restrictions. First, any in-conversation advertisement must be clearly labeled as an advertisement and must disclose any sponsorship, affiliation, or third-party promotional agreement. Second, and more restrictively, suppliers may not use any Vermont user input to decide whether, what, or how to advertise — this is effectively a ban on personalized advertising within mental health chatbot conversations, with a narrow exception for promoting the chatbot itself. Recommending that a user seek therapy from a licensed provider (including a specific one) is expressly permitted and is not considered advertising under this section.
Statutory Text
(a) A supplier shall not use a mental health chatbot to advertise a specific product or service to a Vermont user in a conversation between the Vermont user and the mental health chatbot unless the mental health chatbot: (1) clearly and conspicuously identifies the advertisement as an advertisement; and (2) clearly and conspicuously discloses to the Vermont user any: (A) sponsorship; (B) business affiliation; or (C) agreement that the supplier has with a third party to promote, advertise, or recommend the product or service. (b) A supplier of a mental health chatbot shall not use a Vermont user's input to: (1) determine whether to display an advertisement for a product or service to the Vermont user, unless the advertisement is for the mental health chatbot itself; (2) determine a product, service, or category of product or service to advertise to the Vermont user; or (3) customize how an advertisement is presented to a Vermont user. (c) Nothing in this section shall be construed to prohibit a mental health chatbot from recommending that a Vermont user seek psychotherapy or other assistance from a licensed health care provider, including a specific licensed health care provider.
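The second layer, the near-total ban on input-driven advertising with a single carve-out, reduces to a short predicate. This is a minimal sketch assuming the hypothetical label `"mental_health_chatbot_itself"` for the § 9762(b)(1) carve-out:

```python
def ad_targeting_permitted(uses_user_input: bool, advertised_item: str) -> bool:
    """Sketch of § 9762(b): a Vermont user's input may drive whether,
    what, or how to advertise only when the ad is for the chatbot itself."""
    if not uses_user_input:
        # Subsection (a) labeling and sponsorship-disclosure duties still apply.
        return True
    return advertised_item == "mental_health_chatbot_itself"  # (b)(1) carve-out
```

Recommendations that a user seek care from a licensed provider fall outside this check entirely, since § 9762(c) says they are not advertising.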
T-01 AI Identity Disclosure · T-01.1, T-01.3 · Deployer · Chatbot, Healthcare
18 V.S.A. § 9763(a)-(b)
Plain Language
Mental health chatbot suppliers must ensure the chatbot clearly and conspicuously discloses to Vermont users that it is AI and not a human. This disclosure is unconditional — it is not triggered by whether a reasonable person would be misled. Timing requirements are: (1) before the user can access chatbot features (initial gating), (2) at the start of any interaction if the user hasn't used the chatbot within 7 days (re-disclosure after inactivity), and (3) whenever the user asks whether AI is being used (on-demand). The 7-day re-disclosure threshold is less aggressive than CA SB 243's 3-hour rule but applies to a broader trigger (any gap over 7 days, not just continuous sessions).
Statutory Text
(a) A supplier of a mental health chatbot shall cause the mental health chatbot to clearly and conspicuously disclose to a Vermont user that the mental health chatbot is an artificial intelligence technology and not a human. (b) The disclosure described in subsection (a) of this section shall be made: (1) before the Vermont user may access the features of the mental health chatbot; (2) at the beginning of any interaction with the Vermont user if the Vermont user has not accessed the mental health chatbot within the previous seven days; and (3) any time a Vermont user asks or otherwise prompts the mental health chatbot about whether artificial intelligence is being used.
G-01 AI Governance Program & Documentation · G-01.1, G-01.3 · Deployer · Chatbot, Healthcare
18 V.S.A. § 9764(a)-(b)
Plain Language
Mental health chatbot suppliers may claim an affirmative defense against professional regulation enforcement actions if they have created, maintained, and implemented a comprehensive written policy covering 15 enumerated requirements, maintained documentation of the chatbot's development and implementation (including foundation models, training tools, privacy compliance, data practices, and safety efforts), filed the policy with the Attorney General, and complied with it at the time of the alleged violation. The required policy is extensive — it must cover clinical professional involvement, best-practices monitoring, pre- and post-deployment testing benchmarked against human therapy safety, adverse outcome identification, user harm reporting mechanisms, real-time acute harm protocols, regular safety audits, user disclosure of AI nature and limitations, prioritization of user safety over engagement, anti-discrimination measures, and HIPAA-equivalent privacy compliance. While structured as an affirmative defense rather than a mandatory obligation, as a practical matter any supplier seeking regulatory protection will need to comply with all requirements.
Statutory Text
(a) It is an affirmative defense to liability in an action for unlawful or unprofessional conduct brought against a supplier by the Office of Professional Regulation or the Board of Medical Practice if the supplier demonstrates that the supplier meets all of the following conditions: (1) the supplier created, maintained, and implemented a policy that meets the requirements of subsection (b) of this section; (2) the supplier maintains documentation regarding the development and implementation of the mental health chatbot that describes: (A) foundation models used in development; (B) training tools used; (C) compliance with federal health privacy regulations; (D) user data collection and sharing practices; and (E) ongoing efforts to ensure accuracy, reliability, fairness, and safety; (3) the supplier filed the policy with the Office of the Attorney General; and (4) the supplier complied with all requirements of the filed policy at the time of the alleged violation. (b) A policy described in subdivision (a)(1) of this section shall meet all of the following requirements: (1) be in writing; (2) clearly state: (A) the intended purposes of the mental health chatbot; and (B) the abilities and limitations of the mental health chatbot; (3) describe the procedures by which the supplier: (A) ensures that qualified mental health providers licensed in Vermont or in one or more other states, or both, are involved in the development and review process; (B) ensures that the mental health chatbot is developed and monitored in a manner consistent with clinical best practices; (C) conducts testing prior to making the mental health chatbot publicly available and regularly thereafter to ensure that the output of the mental health chatbot poses no greater risk to a user than that posed to an individual in psychotherapy with a licensed mental health provider; (D) identifies reasonably foreseeable adverse outcomes to and potentially harmful interactions with users that could result from using the mental health chatbot; (E) provides a mechanism for a user to report any potentially harmful interactions from use of the mental health chatbot; (F) implements protocols to assess and respond to risk of harm to users or other individuals; (G) details actions taken to prevent or mitigate any such adverse outcomes or potentially harmful interactions; (H) implements protocols to respond in real time to acute risk of physical harm; (I) reasonably ensures regular, objective reviews of safety, accuracy, and efficacy, which may include internal or external audits; (J) provides users any necessary instructions on the safe use of the mental health chatbot; (K) ensures users understand that they are interacting with artificial intelligence; (L) ensures users understand the intended purpose, capabilities, and limitations of the mental health chatbot; (M) prioritizes user mental health and safety over engagement metrics or profit; (N) implements measures to prevent discriminatory treatment of users; and (O) ensures compliance with the security and privacy protections of 45 C.F.R. Part 160 and 45 C.F.R. Part 164, Subparts A, C, and E, as if the supplier were a covered entity, and applicable consumer protection requirements, including sections 9761-9763 of this subchapter.
R-02 Regulatory Disclosure & Submissions · R-02.3 · Deployer · Chatbot, Healthcare
18 V.S.A. § 9764(c)
Plain Language
To obtain the affirmative defense, suppliers must file with the Office of the Attorney General their name and address, the chatbot's name, the written compliance policy, and a $100 filing fee. Suppliers may also voluntarily submit policy revisions and additional documentation. This is a registration-like requirement — the filing is a prerequisite to claiming the affirmative defense, and the AG's office prescribes the form and manner of filing.
Statutory Text
(c) To file a policy with the Office of the Attorney General under this section, a supplier of a mental health chatbot: (1) shall provide to the Office, in the form and manner prescribed by the Office: (A) the name and address of the supplier; (B) the name of the mental health chatbot supplied by the supplier; (C) the written policy described in subsection (b) of this section; and (D) a $100.00 filing fee; and (2) may provide to the Office: (A) any revisions to a policy filed under this section; and (B) any other documentation that the supplier elects to provide.
HC-01 Healthcare AI Decision Restrictions · HC-01.1, HC-01.2, HC-01.3 · Deployer · Healthcare
18 V.S.A. § 9771(a)(1)-(2), (a)(4), (b)
Plain Language
Health plans using AI, algorithms, or other software for utilization review based on medical necessity must ensure the tool bases determinations on individualized clinical data — the enrollee's medical history, the treating provider's clinical presentation, and other relevant records — and does not rely solely on group datasets. The AI tool may not supplant provider decision making. Most critically, subsection (b) provides an absolute prohibition: AI may not deny, delay, or modify health care services based on medical necessity. Only a licensed human provider competent in the relevant clinical specialty may make medical necessity determinations, after reviewing the treating provider's recommendation and the individual's clinical record. This applies to prospective, retrospective, and concurrent utilization review. The obligation extends to contracted utilization review entities.
Statutory Text
(a) A health plan, as defined in section 9418 of this title, that uses an artificial intelligence, algorithm, or other software tool for the purpose of utilization review or utilization management functions, based in whole or in part on medical necessity, or that contracts with or otherwise works through an entity that uses artificial intelligence, algorithm, or other software tool for the purpose of utilization review or utilization management functions, based in whole or in part on medical necessity, shall ensure all of the following: (1) The artificial intelligence, algorithm, or other software tool bases its determination on the following information, as applicable: (A) a covered individual's medical or other clinical history; (B) the specific clinical circumstances as presented by the requesting health care provider; and (C) other relevant clinical information contained in the covered individual's medical or other clinical record. (2) The artificial intelligence, algorithm, or other software tool does not base its determination solely on a group dataset. (4) The artificial intelligence, algorithm, or other software tool does not supplant health care provider decision making. (b) Notwithstanding subsection (a) of this section, the artificial intelligence, algorithm, or other software tool shall not deny, delay, or modify health care services based in whole or in part on medical necessity. A determination of medical necessity shall be made only by a licensed human health care provider who is competent to evaluate the specific clinical issues involved in the health care services requested by a treating health care provider by reviewing and considering the requesting provider's recommendation; the covered individual's medical or other clinical history, as appropriate; and the specific clinical circumstances.
HC-01 Healthcare AI Decision Restrictions · HC-01.4 · Deployer · Healthcare
18 V.S.A. § 9771(a)(9)
Plain Language
Health plans must periodically review and revise the AI tools used in utilization review to maximize accuracy and reliability. This is an ongoing operational obligation — not a one-time pre-deployment check. The statute does not specify a review cadence, leaving the frequency to the health plan's discretion, but the obligation is continuous.
Statutory Text
(9) The artificial intelligence, algorithm, or other software tool's performance, use, and outcomes are periodically reviewed and revised to maximize accuracy and reliability.
HC-01 Healthcare AI Decision Restrictions · HC-01.5 · Deployer · Healthcare
18 V.S.A. § 9771(a)(10)
Plain Language
Patient data used by AI tools in utilization review may not be used beyond its intended and stated purpose. This purpose limitation must be consistent with Vermont's existing health privacy law (chapter 42B) and HIPAA privacy and security rules. Health plans must ensure that patient clinical data ingested by AI for utilization review is not repurposed for other uses such as marketing, risk profiling, or model training.
Statutory Text
(10) Patient data is not used beyond its intended and stated purpose, consistent with chapter 42B of this title and with the security and privacy protections of 45 C.F.R. Part 160 and 45 C.F.R. Part 164, Subparts A and E, as applicable.
H-02 Non-Discrimination & Bias Assessment · H-02.1 · Deployer · Healthcare
18 V.S.A. § 9771(a)(5)-(6)
Plain Language
Health plans must ensure their AI utilization review tools do not discriminate directly or indirectly against covered individuals in violation of state or federal law, and that the tools are applied fairly and equitably in accordance with HHS regulations and guidance. This creates both a non-discrimination compliance obligation and a fairness standard — the tool must not produce disparate outcomes, and it must be applied consistently across the covered population.
Statutory Text
(5) The use of the artificial intelligence, algorithm, or other software tool does not discriminate, directly or indirectly, against covered individuals in violation of State or federal law. (6) The artificial intelligence, algorithm, or other software tool is fairly and equitably applied, including in accordance with any applicable regulations and guidance issued by the U.S. Department of Health and Human Services.
HC-01 Healthcare AI Decision Restrictions · HC-01.7 · Deployer · Healthcare
18 V.S.A. § 9771(a)(7)-(8)
Plain Language
Health plans must make their AI utilization review tools available for inspection and audit by the Department of Financial Regulation and other state agencies. Additionally, the health plan's written policies and procedures must contain disclosures about the use and oversight of the AI tool, to the extent the Department of Financial Regulation requires. This creates both a regulatory audit access obligation and a documentation/disclosure requirement, though the scope of the disclosure obligation is partially delegated to DFR rulemaking.
Statutory Text
(7) The artificial intelligence, algorithm, or other software tool is open to inspection for audit or compliance reviews by the Department of Financial Regulation and by other State agencies and departments pursuant to applicable State and federal law. (8) Disclosures pertaining to the use and oversight of the artificial intelligence, algorithm, or other software tool are contained in the health plan's written policies and procedures to the extent required by the Department of Financial Regulation.
Other · Deployer · Healthcare
18 V.S.A. § 9771(a)(11)
Plain Language
Health plans must ensure that AI tools used in utilization review do not directly or indirectly cause harm to covered individuals. This is a broad, outcomes-based obligation — not limited to discrimination or denial of services, but encompassing any harm the AI tool may cause. The standard is strict: the tool must not cause harm, period. Combined with the other provisions of § 9771, this creates a comprehensive duty of care for AI-assisted utilization review.
Statutory Text
(11) The artificial intelligence, algorithm, or other software tool does not directly or indirectly cause harm to the covered individual.
Other · Healthcare
18 V.S.A. § 9771(a)(3)
Plain Language
AI tools used for utilization review must comply with Vermont's existing insurance laws (8 V.S.A. chapter 107), health care administration chapter (18 V.S.A. chapter 221), and other applicable state and federal law. This is a compliance pass-through confirming that AI-assisted processes remain subject to the same regulatory framework as human-directed processes.
Statutory Text
(3) The artificial intelligence, algorithm, or other software tool's criteria and guidelines comply with 8 V.S.A. chapter 107, chapter 221 of this title, and other applicable State and federal laws.