H-0784
VT · State · USA
Status: Pre-filed
Proposed Effective Date: 2026-07-01
Vermont H.784 — An act relating to the regulation of chatbots
Summary

Regulates providers of chatbots by imposing comprehensive data privacy restrictions, transparency obligations, and safety requirements. Chatbot providers may not process personal data beyond input data without affirmative consent, may not use chat logs for advertising, and must obtain parental consent before processing data for users under 18. Providers must disclose to users that they are interacting with a chatbot — not a human — prior to any outputs, hourly thereafter, and upon user inquiry. Providers may not imply that chatbot outputs are endorsed by or equivalent to those of licensed professionals. The bill requires monthly risk assessments, a written data security program published on the provider's website, and monthly public disclosure of chatbot information. Chatbots are classified as products for product liability purposes, with strict liability for provider-caused injuries. Enforcement is through the Attorney General, State's Attorney, or a private right of action with $5,000 liquidated damages per violation for data privacy breaches.

Enforcement & Penalties
Enforcement Authority
The Attorney General or a State's Attorney may bring a civil action against a chatbot provider that violates the subchapter to enjoin violations, enforce compliance, and obtain damages, civil penalties, restitution, or other remedies on behalf of state residents. A violation of § 4193b (data privacy and security) or § 4193c(a) or (b) (licensed professional misrepresentation and AI identity disclosure) constitutes an injury in fact to a user, enabling private suit. A user injured under those provisions may bring an action in Superior Court. The statute also establishes strict product liability for chatbot providers — a provider is liable for any injury caused to a user through the use of its chatbot even if the provider exercised all reasonable care or did not directly distribute the chatbot to the user.
Penalties
For violations of § 4193b (data privacy and security): liquidated damages of $5,000 per violation or actual damages, whichever is greater. For violations of § 4193c(a) (licensed professional misrepresentation) or § 4193c(b) (AI identity disclosure): liquidated damages of $5,000 in total for all violations or actual damages, whichever is greater. Punitive damages are available for reckless and knowing violations. Courts may also award injunctive relief, declaratory relief, and reasonable attorney's fees and litigation costs. The Attorney General or State's Attorney may separately obtain damages, civil penalties, restitution, other remedies, and reasonable attorney's fees and litigation costs.
Who Is Covered
"Chatbot provider" means any person creating, distributing, or otherwise making available a chatbot.
What Is Covered
"Chatbot" means any artificial intelligence, algorithmic, or automated system that generates information via text, audio, image, or video in a manner that simulates interpersonal interactions or conversation.
Compliance Obligations · 19 obligations
D-01 Automated Processing Rights & Data Controls · D-01.4 · Deployer · Chatbot
9 V.S.A. § 4193b(a)(1)
Plain Language
Chatbot providers may not use personal data — beyond the user's own input data — to inform chatbot outputs unless two conditions are met: (1) the processing is necessary to fulfill an express user request, and (2) the user has given affirmative consent. This effectively creates a default prohibition on enriching chatbot responses with personal data from external sources, behavioral profiles, or cross-session data unless the user specifically asks for it and consents. The affirmative consent standard is stringent — it cannot be bundled into general terms of service, must be presented as a standalone disclosure, and the refuse option must be at least as prominent as the accept option.
Statutory Text
(1) process personal data other than input data to inform chatbot outputs unless the processing of personal data is necessary to fulfill an express request made by a user and that user has provided affirmative consent;
D-01 Automated Processing Rights & Data Controls · D-01.4 · Deployer · Chatbot
9 V.S.A. § 4193b(a)(2)
Plain Language
Chatbot providers are categorically prohibited from using a user's chat logs for any advertising purpose — whether to decide whether to show an ad, what to advertise, or how to customize or present an ad. This is an absolute prohibition with no consent exception. Chat logs include both user inputs and chatbot outputs, so providers cannot mine conversational history for ad targeting under any circumstances.
Statutory Text
(2) process a user's chat log to: (A) determine whether to display an advertisement for a product or service to the user; (B) determine a product, service, or category of product or service to advertise to the user; or (C) customize an advertisement or how an advertisement is presented to the user;
D-01 Automated Processing Rights & Data Controls · D-01.4 · Deployer · Chatbot
9 V.S.A. § 4193b(a)(3)(C)-(D)
Plain Language
Chatbot providers face two restrictions on processing adult users' data: (1) they may not use chat logs or personal data of adult users for training purposes unless the provider first obtains affirmative consent, and (2) they may not use chat logs or personal data for profiling — classifying users' personality or behavioral characteristics — beyond what is necessary to fulfill an express user request. The training definition carves out safety testing and legally required modifications, so those activities do not require consent. Similarly, chat log processing for user safety is excluded from the profiling prohibition.
Statutory Text
(3) process a user's chat log or personal data: (C) of a user over 18 years of age for training purposes, unless the chatbot provider first obtains affirmative consent; or (D) to engage in profiling beyond what is necessary to fulfill an express request from the user;
D-01 Automated Processing Rights & Data Controls · D-01.6 · Deployer · Chatbot · Minors
9 V.S.A. § 4193b(a)(3)(A)-(B)
Plain Language
When a chatbot provider knows or should know that a user is under 18, the provider faces two distinct restrictions: (1) no processing of the minor's chat logs or personal data at all without affirmative consent from a parent or legal guardian, and (2) an absolute prohibition on using the minor's chat logs or personal data for model training — with no consent carve-out even from a parent. The knowledge standard is constructive — 'should have known based on knowledge fairly implied on the basis of objective circumstances' — meaning providers cannot avoid the obligation by simply not asking about age.
Statutory Text
(3) process a user's chat log or personal data: (A) if the chatbot provider knows or should have known, based on knowledge fairly implied on the basis of objective circumstances, that the user is under 18 years of age without the affirmative consent of that user's parent or legal guardian; (B) for training purposes, if the chatbot provider knows or should have known, based on knowledge fairly implied on the basis of objective circumstances, that a user is under 18 years of age;
D-01 Automated Processing Rights & Data Controls · D-01.4 · Deployer · Chatbot
9 V.S.A. § 4193b(a)(4)
Plain Language
This is a downstream use restriction on profiling outputs: even if a chatbot provider has legitimately profiled a user (e.g., to fulfill an express request), the resulting personality or behavioral classifications may not be used for any purpose beyond what is necessary to fulfill that express request. This prevents providers from building and then repurposing user behavioral profiles for marketing, content personalization, or other secondary purposes.
Statutory Text
(4) use any classification or designation of a user's personality or behavioral characteristics created through profiling beyond what is necessary to fulfill an express request made by the user;
Other · Deployer · Chatbot
9 V.S.A. § 4193b(a)(5)
Plain Language
Chatbot providers may not sell user chat logs under any circumstances. This is an absolute prohibition with no consent exception. The definition of 'sell' is broad — it covers not only monetary exchange but also making data available for other valuable consideration, including data-sharing arrangements. Exceptions exist only for service provider disclosures, user-directed sharing, and intentionally public data, but these narrow the definition of 'sell' rather than creating exceptions to the prohibition.
Statutory Text
(5) sell a user's chat logs;
Other · Deployer · Chatbot
9 V.S.A. § 4193b(a)(6)
Plain Language
Chatbot providers must delete user chat logs within 10 years of collection, unless longer retention is necessary to comply with this subchapter or is otherwise required by law. This functions as a statutory retention maximum: providers may retain data for shorter periods at their discretion but may not exceed the 10-year limit.
Statutory Text
(6) retain a user's chat log for longer than 10 years, unless retention is necessary to comply with this subchapter or otherwise required by law;
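Operationally, this retention ceiling lends itself to a scheduled purge job. The sketch below is a hypothetical illustration, not language from the bill: the `collected_at` timestamps, the `legally_held_ids` carve-out, and the 10-year window approximated in days are all assumptions.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical policy constant: the statute sets a 10-year maximum, not a
# minimum, so a provider may choose any shorter window. 365 * 10 days is an
# approximation; a calendar-exact implementation would account for leap years.
RETENTION_MAX = timedelta(days=365 * 10)

def logs_due_for_deletion(chat_logs, now=None, legally_held_ids=frozenset()):
    """Return IDs of chat logs past the statutory retention maximum.

    `chat_logs` is an iterable of (log_id, collected_at) pairs;
    `legally_held_ids` models the carve-out for logs whose retention is
    necessary to comply with the subchapter or otherwise required by law.
    """
    now = now or datetime.now(timezone.utc)
    return [
        log_id
        for log_id, collected_at in chat_logs
        if now - collected_at > RETENTION_MAX and log_id not in legally_held_ids
    ]
```

A purge job would run this check periodically and delete the returned logs, keeping an audit trail of the legal-hold exemptions it applied.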
Other · Deployer · Chatbot
9 V.S.A. § 4193b(a)(7)
Plain Language
Chatbot providers may not discriminate against or retaliate against users who refuse to consent to the use of their chat logs or personal data for training. Prohibited retaliation includes denying services, charging higher prices, or degrading service quality. This ensures that the consent required by § 4193b(a)(3)(C) for adult training use is genuinely voluntary — users cannot be punished for exercising their right to refuse.
Statutory Text
(7) discriminate or retaliate against any user, including by denying products or services, charging different prices or rates for products or services, or providing lower-quality products or services to the user, for refusing to consent to the use of chat logs or personal data for training purposes; or
Other · Deployer · Chatbot
9 V.S.A. § 4193b(a)(8)
Plain Language
Chatbot providers may not tell users that their input data or chat logs are confidential. This is a flat prohibition — providers cannot make confidentiality representations regardless of the security measures they employ. The practical effect is to ensure users understand that their conversations are not private in the way that, for example, attorney-client or therapist-patient communications would be. This complements the broader transparency obligations in the bill.
Statutory Text
(8) represent to a user that the user's input data or chat log is confidential.
D-01 Automated Processing Rights & Data Controls · D-01.1 · Deployer · Chatbot
9 V.S.A. § 4193b(b)(1)-(2)
Plain Language
Users have the right to access their own chat logs at any time in a portable, downloadable, human-readable and machine-readable format. Chatbot providers must make this data available on demand. Providers may not discriminate or retaliate against users for exercising this access right — the same anti-retaliation protections that apply to training consent refusal also apply here, covering denial of service, price discrimination, and quality degradation.
Statutory Text
(b) Right to access. A user has the right to access, in a portable and readily usable format and at any time, any of the user's own chat logs that a chatbot provider has retained. (1) Chat logs must be made available to users in a downloadable and human- and machine-readable format. (2) A chatbot provider shall not discriminate or retaliate against any user, including by denying products or services, charging different prices or rates for products or services, or providing lower-quality products or services to the user, for accessing their own chat logs.
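A "downloadable and human- and machine-readable format" can be satisfied by exporting each log in two parallel renderings. The sketch below is illustrative only; the message shape (`role`/`text` dicts) is a hypothetical internal structure, not one defined by the bill.

```python
import json

def export_chat_log(messages):
    """Render one chat log in both machine- and human-readable forms.

    `messages` is a list of {"role": ..., "text": ...} dicts (a
    hypothetical internal shape). Returns (json_str, text_str) suitable
    for bundling into a single on-demand download.
    """
    # Machine-readable: structured JSON, preserving non-ASCII characters.
    machine = json.dumps(messages, indent=2, ensure_ascii=False)
    # Human-readable: a plain transcript, one line per message.
    human = "\n".join(f"{m['role']}: {m['text']}" for m in messages)
    return machine, human
```

Serving both renderings from the same underlying record keeps the portable copy faithful to what the provider actually retained.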
Other · Government · Chatbot
9 V.S.A. § 4193b(c)
Plain Language
Government agencies may not compel chatbot providers to produce user input data or chat logs without a wiretap warrant issued under the Vermont Electronic Communication Privacy Act. This creates a heightened privacy protection for chatbot conversations by treating them as analogous to wiretapped communications rather than ordinary business records. The obligation falls on public agencies, not on chatbot providers — providers benefit from the protection but do not need to take affirmative compliance action beyond refusing warrantless demands.
Statutory Text
(c) Compelling production. A public agency, as that term is defined in 1 V.S.A. § 317, shall not compel the production of or access to input data or chat logs from a chatbot provider without a duly issued wiretap warrant pursuant to 13 V.S.A. chapter 232 (Vermont Electronic Communication Privacy Act).
G-01 AI Governance Program & Documentation · G-01.1 · Deployer · Chatbot
9 V.S.A. § 4193b(d)
Plain Language
Chatbot providers must develop, implement, and maintain a written, comprehensive data security program with administrative, technical, and physical safeguards proportionate to the volume and sensitivity of the personal data and chat logs they hold. The written program must be published on the provider's website. This is both a governance obligation (establishing and documenting the program) and a public transparency obligation (publishing it). The proportionality standard means that providers handling larger volumes of sensitive data need correspondingly stronger safeguards.
Statutory Text
(d) Data security program. A chatbot provider shall develop, implement, and maintain a comprehensive data security program that contains administrative, technical, and physical safeguards that are proportionate to the volume and nature of the personal data and chat logs maintained by the chatbot provider. The program shall be written and made publicly available on the chatbot provider's website.
CP-01 Deceptive & Manipulative AI Conduct · CP-01.9 · Deployer · Chatbot
9 V.S.A. § 4193c(a)(1)-(2)
Plain Language
Chatbot providers may not use any language in their advertising, interface, or chatbot outputs that indicates or implies that the output is provided by, endorsed by, or equivalent to services from a licensed healthcare, legal, accounting, or financial professional, or any professional regulated by Vermont's Office of Professional Regulation. This covers the entire user-facing surface — marketing materials, the chatbot interface, and the chatbot's own generated responses. A violation is deemed an unfair and deceptive act in commerce, triggering enforcement under the subchapter's penalty provisions.
Statutory Text
(a) Licensed professionals. (1) A chatbot provider shall not use any term, letter, or phrase in the advertising, interface, or outputs of a chatbot that indicates or implies that any output data is being provided by or endorsed by or is equivalent to that provided by: (A) a licensed health care professional; (B) a licensed legal professional; (C) a licensed accounting professional; (D) a certified financial fiduciary or planner; or (E) any licensed or certified professional regulated by the Office of Professional Regulation. (2) A violation of subdivision (1) of this subsection is an unfair and deceptive act in commerce, subject to enforcement and penalties as provided in this subchapter.
T-01 AI Identity Disclosure · T-01.1 · T-01.2 · T-01.3 · Deployer · Chatbot
9 V.S.A. § 4193c(b)(1)-(3)
Plain Language
Chatbot providers must proactively inform every user that they are interacting with a chatbot, not a human, at three touchpoints: (1) before the chatbot generates any outputs, (2) every hour during continuing interactions, and (3) any time the user asks whether the chatbot is a real person. This is an unconditional obligation — it applies regardless of whether a reasonable person would be misled. The notice must appear in the user's interaction language, in a font at least as large as the largest other text on the interface, be accessible to users with disabilities, and comply with Attorney General rules. The every-hour periodic reminder and on-demand disclosure make this one of the more comprehensive AI identity disclosure requirements among state bills.
Statutory Text
(b) Disclosure. Chatbot providers shall provide clear, conspicuous, and explicit notice to users that users are interacting with a chatbot rather than a human prior to the chatbot generating any outputs, every hour thereafter, and each time a user prompts the chatbot about whether it is a real person subject to the following: (1) The text of this notice must appear in the same language as the one in which the user is interacting with the chatbot, in a font size easily readable by an average user, and no smaller than the largest font size of other text appearing on the interface on which the chatbot is provided. (2) This notice must be accessible to users with disabilities. (3) This notice must comply with rules adopted by the Attorney General pursuant to this subchapter.
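The three disclosure triggers (pre-output, hourly, on inquiry) reduce to a small cadence check in the serving loop. The sketch below is a hypothetical illustration; the parameter names and session model are assumptions, not statutory terms.

```python
from datetime import datetime, timedelta, timezone

HOUR = timedelta(hours=1)

def disclosure_due(last_notice_at, now, user_asked_if_human, any_output_yet):
    """Decide whether the 'you are interacting with a chatbot' notice is due.

    Mirrors the three statutory triggers: before the chatbot generates any
    outputs, every hour during a continuing interaction, and each time the
    user asks whether the chatbot is a real person.
    """
    if not any_output_yet:        # prior to generating any outputs
        return True
    if user_asked_if_human:       # each time the user prompts about it
        return True
    return now - last_notice_at >= HOUR   # hourly thereafter
```

The form, language, font size, and accessibility of the notice itself are separate requirements governed by subdivisions (1)-(3) and Attorney General rules; this sketch covers timing only.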
S-01 AI System Safety Program · S-01.7 · Deployer · Chatbot
9 V.S.A. § 4193c(c)
Plain Language
Chatbot providers must assess their chatbots for risks of harm to users on a monthly basis, using metrics defined by Attorney General rulemaking, and must actively mitigate any identified risks. This is an ongoing operational obligation — not a one-time pre-deployment assessment. The monthly cadence is notably more frequent than the annual reviews required by most other state AI safety frameworks. The specific risk categories and assessment metrics will be determined by AG rulemaking, so the full scope of the obligation will depend on those rules.
Statutory Text
(c) Risk assessment. A chatbot provider shall on a monthly basis, according to metrics as set forth in rules adopted by the Attorney General pursuant to this subchapter, assess its chatbot for risks of harm to users and actively mitigate any risks of harm.
G-02 Public Transparency & Documentation · G-02.4 · Deployer · Chatbot
9 V.S.A. § 4193c(d)
Plain Language
Chatbot providers must publish information about their chatbot on their website, updated monthly, with the specific categories of information to be defined by Attorney General rulemaking. This is a public transparency obligation distinct from the data security program publication requirement in § 4193b(d). The full scope of what must be disclosed will depend on AG rules, but the monthly update cadence ensures ongoing disclosure rather than a one-time publication.
Statutory Text
(d) Chatbot information. A chatbot provider shall make information about its chatbot publicly available on its website on a monthly basis as set forth in rules adopted by the Attorney General pursuant to this subchapter.
Other · Chatbot
9 V.S.A. § 4193e(a)-(c)
Plain Language
Chatbots are classified as products for product liability purposes, and chatbot providers are subject to strict liability for any injury caused to a user. The strict liability standard means that exercising reasonable care is not a defense, and providers are liable even if they did not directly distribute the chatbot to the injured user or have a contractual relationship with them. This is a significant expansion of liability: it removes due care as a defense and extends liability beyond the direct distribution chain. While this does not create a specific compliance checklist item, it fundamentally changes the risk calculus for chatbot providers.
Statutory Text
(a) A chatbot is a product for the purposes of product liability actions. (b) A chatbot provider has a duty to ensure that the use of its chatbot does not cause injury to a user. (c) A chatbot provider is liable for any injury it caused a user through the use of its chatbot, even if: (1) the chatbot provider exercised all reasonable care in the design and distribution of the chatbot; or (2) the chatbot provider did not directly distribute the chatbot to the user or otherwise enter into a contractual relationship with the user.
Other · Chatbot
9 V.S.A. § 4193e(d)
Plain Language
This savings clause confirms that the chatbot regulation subchapter does not preempt or displace any existing legal rights, claims, remedies, or defenses — including antidiscrimination, consumer protection, labor, and civil rights laws. This means chatbot providers remain subject to all pre-existing legal frameworks in addition to the new obligations created by this bill. It creates no new compliance obligation.
Statutory Text
(d) Nothing in this subchapter preempts or otherwise affects any right, claim, remedy, presumption, or defense available at law or in equity, including antidiscrimination, consumer protection, labor, and civil rights laws.
Other · Government · Chatbot
9 V.S.A. § 4193d(a)-(b)
Plain Language
The Attorney General is directed to adopt rules specifying: (1) the form and content of AI identity disclosure notices (including an example template), (2) risk assessment metrics that providers must use for monthly assessments, and (3) categories of chatbot information that must be publicly disclosed. The AG may also adopt additional rules as needed. This provision obligates the AG to create the regulatory framework — until these rules are adopted, the disclosure, risk assessment, and public information obligations in § 4193c have incomplete compliance standards.
Statutory Text
(a) The Attorney General shall adopt rules: (1) describing the form and content of the disclosures and providing an example template for the disclosures required pursuant to subsection 4193c(b) of this subchapter; (2) describing risks of harm to users and the metrics that each chatbot provider shall use to assess its chatbots for these risks of harm to users pursuant to subsection 4193c(c) of this subchapter; and (3) identifying and describing categories of information that each chatbot provider must make publicly available about its chatbots pursuant to subsection 4193c(d) of this subchapter. (b) The Attorney General may adopt other rules as necessary to implement the provisions of this subchapter.