H-0784
VT · State · USA
● Pre-filed
Proposed Effective Date
2026-07-01
Vermont H.784 — An act relating to the regulation of chatbots
Summary

Regulates providers of chatbots — broadly defined as any AI system generating information that simulates interpersonal interactions — by imposing data privacy restrictions, transparency obligations, and safety requirements. Prohibits processing personal data beyond input data without affirmative consent, bans use of chat logs for advertising targeting, prohibits processing minor users' data for training, and bars the sale of chat logs. Requires clear AI identity disclosure before first output, every hour thereafter, and on user demand. Prohibits chatbot providers from implying outputs are provided by licensed professionals. Requires monthly risk assessments, public chatbot information disclosures, a written data security program, and user access to their own chat logs. Establishes strict product liability for chatbot providers and creates a private right of action with $5,000 liquidated damages per violation of data privacy provisions. Enforced by the Attorney General and State's Attorneys.

Enforcement & Penalties
Enforcement Authority
The Attorney General or a State's Attorney may bring a civil action against a chatbot provider that violates the subchapter to enjoin violations, enforce compliance, and obtain damages, civil penalties, restitution, or other remedies on behalf of residents, together with attorney's fees and litigation costs. Private right of action for users: a violation of § 4193b or § 4193c(a) or (b) constitutes an injury in fact to a user, and the injured user may bring an action in Superior Court against the chatbot provider. No cure period or safe harbor is provided.
Penalties
For violations of § 4193b (data privacy and security): greater of $5,000 per violation or actual damages. For violations of § 4193c(a) (licensed professional misrepresentation) or § 4193c(b) (AI identity disclosure): greater of $5,000 in total for all violations or actual damages. Punitive damages are available for reckless and knowing violations. Injunctive relief, declaratory relief, and reasonable attorney's fees and litigation costs are also available. AG enforcement may obtain damages, civil penalties, restitution, other remedies, and attorney's fees. Statutory liquidated damages do not require proof of actual monetary harm — a violation of § 4193b or § 4193c(a) or (b) constitutes injury in fact by statute.
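The two liquidated-damages bases aggregate differently: § 4193b damages stack per violation, while § 4193c(a)/(b) damages are capped at $5,000 in total for all violations. A minimal sketch of that distinction (hypothetical helper functions, not part of the bill; not a legal determination):

```python
def privacy_damages(violations: int, actual: float) -> float:
    """Section 4193b basis: greater of $5,000 PER violation or actual damages."""
    if violations == 0:
        return 0.0
    return max(5_000 * violations, actual)

def disclosure_damages(violations: int, actual: float) -> float:
    """Section 4193c(a)/(b) basis: greater of $5,000 IN TOTAL or actual damages."""
    if violations == 0:
        return 0.0
    return max(5_000, actual)
```

For example, three privacy violations with $2,000 of actual damages yield a $15,000 floor, while three disclosure violations on the same facts yield only $5,000.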
Who Is Covered
"Chatbot provider" means any person creating, distributing, or otherwise making available a chatbot.
What Is Covered
"Chatbot" means any artificial intelligence, algorithmic, or automated system that generates information via text, audio, image, or video in a manner that simulates interpersonal interactions or conversation.
Compliance Obligations · 16 obligations
D-01 Automated Processing Rights & Data Controls · D-01.4 · Deployer · Chatbot
9 V.S.A. § 4193b(a)(1)
Plain Language
Chatbot providers may not process any personal data beyond the user's direct input data to inform chatbot outputs, unless the processing is necessary to fulfill an express user request and the user has given affirmative consent. 'Affirmative consent' is defined with strict requirements — it must be a clear standalone request, cannot be bundled in general terms of use, and cannot be inferred from inaction or continued use. This effectively creates a data minimization requirement: by default, only input data may be used to generate outputs.
Statutory Text
A chatbot provider shall not: (1) process personal data other than input data to inform chatbot outputs unless the processing of personal data is necessary to fulfill an express request made by a user and that user has provided affirmative consent;
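The default data-minimization rule above can be read as a simple gate: input data may always inform outputs, and anything beyond that requires both an express user request and affirmative consent meeting the statute's conditions. A sketch under those assumptions (the field names and helper are hypothetical illustrations, not bill text, and not a legal determination):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Consent:
    # Hypothetical model of 'affirmative consent' conditions in the bill:
    standalone_request: bool  # a clear standalone request, not bundled in terms of use
    explicit_action: bool     # not inferred from inaction or continued use

def may_process(data_is_input: bool, fulfills_express_request: bool,
                consent: Optional[Consent]) -> bool:
    """Gate on whether personal data may inform chatbot outputs."""
    if data_is_input:
        return True  # input data may always inform outputs
    # Non-input personal data: both conditions must hold.
    return (fulfills_express_request
            and consent is not None
            and consent.standalone_request
            and consent.explicit_action)
```

Note that a consent bundled into general terms of use (`standalone_request=False`) fails the gate even when an express request exists.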
D-01 Automated Processing Rights & Data Controls · D-01.4 · Deployer · Chatbot
9 V.S.A. § 4193b(a)(2)
Plain Language
Chatbot providers are categorically prohibited from using a user's chat logs for any advertising-related purposes — including deciding whether to show an ad, selecting what category of ad to show, or customizing how an ad is presented. This is an absolute prohibition with no consent override. Note that 'advertisement' is broadly defined to include any promotional content displayed in exchange for monetary or other valuable consideration, including data-sharing arrangements between the chatbot provider and the advertiser.
Statutory Text
A chatbot provider shall not: (2) process a user's chat log to: (A) determine whether to display an advertisement for a product or service to the user; (B) determine a product, service, or category of product or service to advertise to the user; or (C) customize an advertisement or how an advertisement is presented to the user;
D-01 Automated Processing Rights & Data Controls · D-01.4 · D-01.6 · Deployer · Chatbot · Minors
9 V.S.A. § 4193b(a)(3)
Plain Language
This provision imposes four distinct data processing restrictions: (A) For known or constructively known minor users, all processing of chat logs or personal data requires parental or guardian affirmative consent. (B) Minor users' chat logs and personal data may never be used for model training — this is an absolute prohibition with no consent override. (C) Adult users' chat logs and personal data may be used for training only with prior affirmative consent. (D) Profiling — classifying personality or behavioral characteristics — may not exceed what is necessary to fulfill an express user request. Note that 'training' carves out safety testing and compliance activities, and 'profiling' carves out processing for user safety purposes.
Statutory Text
A chatbot provider shall not: (3) process a user's chat log or personal data: (A) if the chatbot provider knows or should have known, based on knowledge fairly implied on the basis of objective circumstances, that the user is under 18 years of age without the affirmative consent of that user's parent or legal guardian; (B) for training purposes, if the chatbot provider knows or should have known, based on knowledge fairly implied on the basis of objective circumstances, that a user is under 18 years of age; (C) of a user over 18 years of age for training purposes, unless the chatbot provider first obtains affirmative consent; or (D) to engage in profiling beyond what is necessary to fulfill an express request from the user;
D-01 Automated Processing Rights & Data Controls · D-01.4 · Deployer · Chatbot
9 V.S.A. § 4193b(a)(4)
Plain Language
Even where profiling results already exist, chatbot providers may not use any classification or designation of a user's personality or behavioral characteristics beyond what is necessary to fulfill an express user request. This is a use restriction on profiling outputs — distinct from the § 4193b(a)(3)(D) prohibition on engaging in profiling, this provision restricts downstream use of profiling-derived classifications.
Statutory Text
A chatbot provider shall not: (4) use any classification or designation of a user's personality or behavioral characteristics created through profiling beyond what is necessary to fulfill an express request made by the user;
Other · Chatbot
9 V.S.A. § 4193b(a)(5)
Plain Language
Chatbot providers are categorically prohibited from selling a user's chat logs. 'Sell' is broadly defined to include exchanging data for monetary or other valuable consideration, or making data available to a third party. Exceptions exist for disclosures to service providers acting on the chatbot provider's behalf, user-directed disclosures with affirmative consent, and disclosures of data the user intentionally made public. This is an absolute prohibition — no consent mechanism can override it.
Statutory Text
A chatbot provider shall not: (5) sell a user's chat logs;
Other · Chatbot
9 V.S.A. § 4193b(a)(6)
Plain Language
Chatbot providers may not retain a user's chat log for more than 10 years, except where retention is necessary for legal compliance. This creates a hard outer limit on chat log retention regardless of user consent.
Statutory Text
A chatbot provider shall not: (6) retain a user's chat log for longer than 10 years, unless retention is necessary to comply with this subchapter or otherwise required by law;
Other · Chatbot
9 V.S.A. § 4193b(a)(7)
Plain Language
Chatbot providers may not punish users in any way for refusing to consent to the use of their chat logs or personal data for training purposes. Prohibited retaliation includes denying services, imposing different pricing, or degrading service quality. This ensures that the consent requirement for training data use is genuinely voluntary.
Statutory Text
A chatbot provider shall not: (7) discriminate or retaliate against any user, including by denying products or services, charging different prices or rates for products or services, or providing lower-quality products or services to the user, for refusing to consent to the use of chat logs or personal data for training purposes;
CP-01 Deceptive & Manipulative AI Conduct · CP-01.5 · Deployer · Chatbot
9 V.S.A. § 4193b(a)(8)
Plain Language
Chatbot providers are prohibited from claiming or implying to users that their input data or chat logs are confidential. This is a deceptive conduct prohibition — providers must not create false impressions about the privacy status of user interactions. Given the broad definition of 'sell' and the data access rights elsewhere in the statute, this provision prevents providers from suggesting a level of privacy protection that does not exist.
Statutory Text
A chatbot provider shall not: (8) represent to a user that the user's input data or chat log is confidential.
D-01 Automated Processing Rights & Data Controls · D-01.1 · Deployer · Chatbot
9 V.S.A. § 4193b(b)-(b)(2)
Plain Language
Users have the right to access any of their own retained chat logs at any time, in a portable, downloadable, human- and machine-readable format. Chat logs include both the user's input data and the chatbot's generated outputs. Chatbot providers may not discriminate or retaliate against users for exercising this access right — including through service denial, price changes, or quality degradation.
Statutory Text
(b) Right to access. A user has the right to access, in a portable and readily usable format and at any time, any of the user's own chat logs that a chatbot provider has retained. (1) Chat logs must be made available to users in a downloadable and human- and machine-readable format. (2) A chatbot provider shall not discriminate or retaliate against any user, including by denying products or services, charging different prices or rates for products or services, or providing lower-quality products or services to the user, for accessing their own chat logs.
Other · Chatbot
9 V.S.A. § 4193b(c)
Plain Language
Public agencies may not compel a chatbot provider to produce or grant access to input data or chat logs without first obtaining a wiretap warrant under the Vermont Electronic Communication Privacy Act. This elevates the legal standard for government access to chatbot interaction data to the wiretap warrant level — higher than a standard subpoena or court order.
Statutory Text
(c) Compelling production. A public agency, as that term is defined in 1 V.S.A. § 317, shall not compel the production of or access to input data or chat logs from a chatbot provider without a duly issued wiretap warrant pursuant to 13 V.S.A. chapter 232 (Vermont Electronic Communication Privacy Act).
G-01 AI Governance Program & Documentation · G-01.1 · Deployer · Chatbot
9 V.S.A. § 4193b(d)
Plain Language
Chatbot providers must develop, implement, and maintain a written comprehensive data security program with administrative, technical, and physical safeguards proportionate to the volume and nature of personal data and chat logs they maintain. The program must be published on the provider's website. This is both an operational requirement (the program must actually exist and function) and a public transparency requirement (the written program must be publicly accessible).
Statutory Text
(d) Data security program. A chatbot provider shall develop, implement, and maintain a comprehensive data security program that contains administrative, technical, and physical safeguards that are proportionate to the volume and nature of the personal data and chat logs maintained by the chatbot provider. The program shall be written and made publicly available on the chatbot provider's website.
CP-01 Deceptive & Manipulative AI Conduct · CP-01.9 · Deployer · Chatbot
9 V.S.A. § 4193c(a)(1)-(2)
Plain Language
Chatbot providers may not use any language in their advertising, chatbot interface, or chatbot outputs that indicates or implies that AI-generated output is being provided by, endorsed by, or equivalent to the services of a licensed or certified professional — including healthcare, legal, accounting, and financial professionals, as well as any professional regulated by the Vermont Office of Professional Regulation. A violation is deemed an unfair and deceptive act in commerce. This is a broad prohibition covering the entire user experience from advertising through to generated outputs.
Statutory Text
(a) Licensed professionals. (1) A chatbot provider shall not use any term, letter, or phrase in the advertising, interface, or outputs of a chatbot that indicates or implies that any output data is being provided by or endorsed by or is equivalent to that provided by: (A) a licensed health care professional; (B) a licensed legal professional; (C) a licensed accounting professional; (D) a certified financial fiduciary or planner; or (E) any licensed or certified professional regulated by the Office of Professional Regulation. (2) A violation of subdivision (1) of this subsection is an unfair and deceptive act in commerce, subject to enforcement and penalties as provided in this subchapter.
T-01 AI Identity Disclosure · T-01.1 · T-01.2 · T-01.3 · Deployer · Chatbot
9 V.S.A. § 4193c(b)-(b)(3)
Plain Language
Chatbot providers must unconditionally disclose to users that they are interacting with an AI, not a human, at three trigger points: (1) before the chatbot generates any output; (2) every hour during continuing interactions; and (3) whenever a user asks whether the chatbot is a real person. The notice must be in the user's interaction language, in a font at least as large as the largest text on the interface, accessible to users with disabilities, and compliant with AG rules. This is an unconditional disclosure — it applies regardless of whether a reasonable person would be misled. The hourly re-disclosure frequency is stricter than CA SB 243's three-hour interval.
Statutory Text
(b) Disclosure. Chatbot providers shall provide clear, conspicuous, and explicit notice to users that users are interacting with a chatbot rather than a human prior to the chatbot generating any outputs, every hour thereafter, and each time a user prompts the chatbot about whether it is a real person subject to the following: (1) The text of this notice must appear in the same language as the one in which the user is interacting with the chatbot, in a font size easily readable by an average user, and no smaller than the largest font size of other text appearing on the interface on which the chatbot is provided. (2) This notice must be accessible to users with disabilities. (3) This notice must comply with rules adopted by the Attorney General pursuant to this subchapter.
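The three notice triggers amount to straightforward session logic. A sketch of one possible implementation (the class, method names, and prompt check are hypothetical design assumptions, not requirements from the bill):

```python
DISCLOSURE = "You are interacting with a chatbot, not a human."
HOUR = 3600.0  # re-disclosure interval in seconds during a continuing interaction

class DisclosureTracker:
    """Sketch of the three notice triggers: first output, hourly, and on demand."""

    def __init__(self) -> None:
        self.last_notice: float | None = None  # no notice shown yet

    def notice_due(self, user_prompt: str, now: float) -> bool:
        # Trigger 1: before the chatbot generates any output.
        if self.last_notice is None:
            return True
        # Trigger 2: every hour during a continuing interaction.
        if now - self.last_notice >= HOUR:
            return True
        # Trigger 3: whenever the user asks whether the chatbot is a real person.
        if "are you a real person" in user_prompt.lower():
            return True
        return False

    def mark_shown(self, now: float) -> None:
        self.last_notice = now
```

The sketch covers only timing; the statute's formatting conditions (same language as the interaction, font size, accessibility, AG rules) would need separate handling in the rendering layer.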
S-01 AI System Safety Program · S-01.4 · S-01.7 · Deployer · Chatbot
9 V.S.A. § 4193c(c)
Plain Language
Chatbot providers must conduct monthly risk assessments of their chatbots for risks of harm to users, using metrics defined by AG rulemaking, and must actively mitigate any identified risks. This is an unusually frequent assessment cadence — monthly rather than the annual or pre-deployment assessments common in other jurisdictions. The specific metrics and risk categories will be defined by future AG rules, so the scope of this obligation is not yet fully determined. The mitigation obligation is ongoing and immediate upon risk identification.
Statutory Text
(c) Risk assessment. A chatbot provider shall on a monthly basis, according to metrics as set forth in rules adopted by the Attorney General pursuant to this subchapter, assess its chatbot for risks of harm to users and actively mitigate any risks of harm.
G-02 Public Transparency & Documentation · G-02.4 · Deployer · Chatbot
9 V.S.A. § 4193c(d)
Plain Language
Chatbot providers must publish information about their chatbot on their website on a monthly basis. The specific categories of information to be disclosed will be defined by AG rulemaking under § 4193d(a)(3). This is a recurring public transparency obligation — not a one-time publication — requiring monthly updates.
Statutory Text
(d) Chatbot information. A chatbot provider shall make information about its chatbot publicly available on its website on a monthly basis as set forth in rules adopted by the Attorney General pursuant to this subchapter.
Other · Chatbot
9 V.S.A. § 4193e(a)-(c)
Plain Language
This provision establishes that chatbots are legally classified as products for product liability purposes — meaning traditional product liability doctrines apply. It imposes a duty on chatbot providers to ensure their chatbots do not cause injury and creates strict liability: a provider is liable for any user injury caused by the chatbot even if the provider exercised all reasonable care (eliminating a negligence defense) and even if the provider did not directly distribute the chatbot to the injured user (eliminating a privity-of-contract defense). This is among the most aggressive liability provisions in any pending U.S. AI bill.
Statutory Text
(a) A chatbot is a product for the purposes of product liability actions. (b) A chatbot provider has a duty to ensure that the use of its chatbot does not cause injury to a user. (c) A chatbot provider is liable for any injury it caused a user through the use of its chatbot, even if: (1) the chatbot provider exercised all reasonable care in the design and distribution of the chatbot; or (2) the chatbot provider did not directly distribute the chatbot to the user or otherwise enter into a contractual relationship with the user.