HB-635
VA · State · USA
● Pending
Proposed Effective Date
2027-01-01
Virginia HB 635 — Artificial Intelligence Chatbots Act (Chapter 60, §§ 59.1-614 through 59.1-620)
Summary

Establishes the Virginia Artificial Intelligence Chatbots Act (Chapter 60), imposing safety, disclosure, transparency, and data privacy obligations on operators of companion chatbots — generative AI systems that simulate sustained human-like relationships with users. Prohibits operators from making companion chatbots available to minors if the chatbot is capable of encouraging self-harm, providing unsupervised mental health therapy, engaging in sexually explicit interactions, or optimizing engagement over safety. Requires persistent AI identity disclosure to all users, periodic pop-up reminders every 90 minutes, on-demand disclosure, and crisis response protocols with referral to the 988 hotline. Prohibits training on minor inputs without parental consent. Requires semiannual public reporting on crisis redirects and harmful content incidents, publication of safety test findings, and a public adverse incident catalog. Violations are enforced under the Virginia Consumer Protection Act, with both AG enforcement and a private right of action.

Enforcement & Penalties
Enforcement Authority
Enforced as a prohibited practice under the Virginia Consumer Protection Act (§ 59.1-196 et seq.). The Attorney General has enforcement authority. Private right of action is available under § 59.1-204 of the VCPA for any person who suffers loss as a result of a violation. No separate agency is designated for AI-specific compliance oversight. A 30-day cure period applies under the VCPA before a private plaintiff may bring suit.
Penalties
Violations are enforced under the Virginia Consumer Protection Act (§ 59.1-196 et seq.). Private plaintiffs who suffer loss may recover actual damages or $500, whichever is greater, with treble damages available for willful violations (up to $1,000 if actual damages are not proven). Attorney's fees and costs are recoverable. The Attorney General may seek injunctive relief and civil penalties up to $2,500 per violation.
Who Is Covered
"Developer" means a person, partnership, corporation, deployer, or state or local governmental agency that designs, codes, substantially modifies, or otherwise produces a companion chatbot in the Commonwealth.
"Deployer" means a person, partnership, corporation, developer, or state or local governmental agency or any contractor or agent of those entities that uses a companion chatbot for a commercial or public purpose in the Commonwealth.
"Operator" means a person, partnership, corporation, entity, developer, deployer, or state or local government agency that makes a companion chatbot available to a user in the Commonwealth.
What Is Covered
"Companion chatbot" is a generative artificial intelligence system with a natural language interface that simulates a sustained human-like relationship with a user by doing any of the following: (i) retaining information on prior interactions or user sessions and user preferences to personalize the interaction and facilitate ongoing engagement with the companion chatbot; (ii) asking unprompted or unsolicited questions that go beyond a direct response to a user prompt; and (iii) sustaining an ongoing dialogue concerning matters personal to the user. "Companion chatbot" does not include a system that is used (a) by a partnership, corporation, or state or local government agency solely for customer service or to strictly provide users with information about available services or products provided by that entity, customer service account information, or other information strictly related to its customer service; (b) by a partnership or corporation solely for internal purposes or employee productivity; (c) primarily for customer service, a business' operational purposes, productivity, or analysis related to source information, internal research, or technical assistance; or (d) a bot that is a feature of a video game and is limited to replies related to the video game that cannot discuss topics related to mental health, self-harm, or sexually explicit conduct or maintain a dialogue on other topics unrelated to the video game.
Compliance Obligations · 10 obligations
S-02 Prohibited Conduct & Output Restrictions · S-02.6 · S-02.7 · Deployer · Chatbot · Minors
§ 59.1-615(A)
Plain Language
Operators may not make a companion chatbot available to a minor if the chatbot is capable of: encouraging self-harm, suicidal ideation, violence, drug/alcohol use, or disordered eating; offering unsupervised mental health therapy or discouraging the minor from seeking professional help; encouraging harm to others or illegal activity including CSAM creation; engaging in sexually explicit interactions or grooming; encouraging secrecy or isolation; prioritizing language mirroring or validation over safety; or optimizing engagement over safety guardrails. This is a capability-based prohibition — if the chatbot is capable of any listed behavior, it may not be made available to minors, regardless of whether the behavior actually occurs. The knowledge standard for minor status is governed by § 59.1-615(C) and shifts from actual knowledge (pre-2027) to a reasonable determination standard (post-2027).
Statutory Text
A. No operator shall make a companion chatbot available to a minor if the companion chatbot is capable of any of the following: 1. Encouraging or manipulating the minor user to engage in self-harm, suicidal ideation, violence, consumption of drugs or alcohol, or disordered eating; 2. Offering mental health therapy to the minor user without the direct supervision of a licensed professional or discouraging the minor user from seeking help from a licensed professional or appropriate adult; 3. Encouraging or manipulating the minor user to harm others or participate in an illegal activity, including the creation of child sexual abuse materials; 4. Engaging in erotic or sexually explicit interactions with the minor user or engaging in activities designed to lure minor users into such interactions; 5. Encouraging or manipulating the minor user to maintain secrecy about interactions or to self-isolate; 6. Prioritizing mirroring the minor's language or validating the minor user over the minor user's safety; or 7. Optimizing engagement so that it supersedes the companion chatbot's safety guardrails.
MN-01 Minor User AI Safety Protections · MN-01.1 · Deployer · Chatbot · Minors
§ 59.1-615(B)-(C)
Plain Language
Operators must implement commercially reasonable age verification methods — such as a neutral age screen — to determine whether a user is a minor. The standard for minor status shifts over time: before January 1, 2027, the safety prohibitions in § 59.1-615(A) only apply if the operator has actual knowledge the user is a minor; from January 1, 2027 onward, the operator must have 'reasonably determined' the user is not a minor to avoid those obligations, creating a constructive knowledge standard. Because the entire act takes effect January 1, 2027, the actual-knowledge carve-out in clause (i) would only apply if the act were to take effect before that date.
Statutory Text
B. An operator shall use commercially reasonable methods, such as a neutral age screen mechanism, to determine whether a user is a minor. C. A user shall not be considered a minor for the purposes of subsection A if (i) prior to January 1, 2027, the operator does not have actual knowledge that the user is a minor or (ii) beginning on January 1, 2027, the operator has reasonably determined that the user is not a minor.
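The two-phase knowledge standard in subsection C can be sketched in code. This is a purely illustrative sketch, not an implementation the act prescribes; the function and parameter names are assumptions for clarity.

```python
from datetime import date

# Hypothetical encoding of § 59.1-615(C)'s two-phase minor-status standard.
STANDARD_SHIFT = date(2027, 1, 1)

def treated_as_minor(today: date,
                     has_actual_knowledge_minor: bool,
                     reasonably_determined_not_minor: bool) -> bool:
    """Return True if the user must be treated as a minor for subsection A."""
    if today < STANDARD_SHIFT:
        # Phase 1: prohibitions attach only on actual knowledge of minor status.
        return has_actual_knowledge_minor
    # Phase 2: the operator bears the burden of a reasonable determination
    # that the user is NOT a minor (a constructive-knowledge standard).
    return not reasonably_determined_not_minor
```

Note the asymmetry the sketch captures: before the shift date, inaction favors the operator; after it, the default flips and the operator must affirmatively establish non-minor status.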
T-01 AI Identity Disclosure · T-01.1 · T-01.2 · T-01.3 · Deployer · Chatbot
§ 59.1-616(A)
Plain Language
Operators must provide AI identity disclosure to all users (not just minors) through two mechanisms: (1) a static, persistent disclaimer visible at all times indicating the companion chatbot is not a human, and (2) active pop-up notifications (or equivalent if pop-ups are not feasible) at three intervals — upon login, every 90 minutes of sustained engagement, and whenever the user asks. The persistent disclosure is always-on; the pop-up notifications are triggered at defined intervals. Unlike CA SB 243, which conditions disclosure on whether a reasonable person could be misled, Virginia requires unconditional disclosure to all users. The 90-minute re-disclosure interval is more frequent than some jurisdictions (e.g., CA SB 243's 3-hour interval).
Statutory Text
A. An operator shall (i) include a disclaimer to users of all ages that a companion chatbot is not a human via a static, persistent disclosure and (ii) notify a user via a pop-up, or other communication if a pop-up is not feasible, that the user is not engaging with a human counterpart at the following intervals: 1. Upon login to the companion chatbot; 2. Every 90 minutes of sustained user engagement; and 3. When prompted by the user.
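The three pop-up triggers in subsection A(ii) reduce to a small piece of timing logic. The sketch below is illustrative only, assuming the operator tracks elapsed session time and the time of the last notification; nothing in the act mandates this structure.

```python
# Hypothetical cadence check for the § 59.1-616(A) pop-up notification.
NINETY_MINUTES = 90 * 60  # re-disclosure interval, in seconds

def popup_due(seconds_since_login: float,
              seconds_since_last_popup: float,
              user_requested: bool) -> bool:
    if seconds_since_login == 0:
        return True                 # interval 1: upon login
    if user_requested:
        return True                 # interval 3: when prompted by the user
    # interval 2: every 90 minutes of sustained engagement
    return seconds_since_last_popup >= NINETY_MINUTES
```

The static, persistent disclaimer under subsection A(i) is separate and always-on; this logic governs only the active pop-up channel.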
CP-01 Deceptive & Manipulative AI Conduct · CP-01.9 · Deployer · Chatbot
§ 59.1-616(B)
Plain Language
Operators must not use any language in their advertising or product interface that indicates or implies the chatbot's output comes from a licensed professional. This covers any regulated profession — not just healthcare. For example, an operator could not label a chatbot feature as 'therapy,' 'legal advice,' or 'financial counseling' in a way that implies a licensed professional is providing the output. This is a prohibition on misleading professional-status claims, not a prohibition on discussing those topics.
Statutory Text
B. No operator shall use any term, letter, or phrase in the advertising or interface that indicates or implies that any output data is being provided by a professional that is regulated by a licensed industry.
MN-02 AI Crisis Response Protocols · MN-02.1 · Deployer · Chatbot
§ 59.1-617
Plain Language
Operators may not operate or provide a companion chatbot to any user — not just minors — unless the chatbot has an active protocol for detecting and responding to expressions of suicidal ideation or self-harm. Upon detection, the system must refer the user to crisis service providers such as the 988 Suicide and Crisis Lifeline, a crisis text line, or other appropriate services. The standard is 'reasonable efforts' for detection. This is a continuous operating prerequisite — the protocol must be active at all times as a condition of operation, and failure to maintain it makes operation itself unlawful.
Statutory Text
It is unlawful for any operator to operate or provide a companion chatbot to a user unless such companion chatbot contains a protocol to take reasonable efforts for detecting and addressing expressions of suicidal ideation or self-harm by a user to the companion chatbot. This protocol shall include detection of user expressions of suicidal ideation or self-harm and a notification to the user that refers the user to crisis service providers such as the 9-8-8 suicide prevention and behavioral health crisis hotline, a crisis text line, or other appropriate crisis services upon detection of such user's expressions of suicidal ideation or self-harm.
D-01 Automated Processing Rights & Data Controls · D-01.4 · Deployer · Chatbot · Minors
§ 59.1-618
Plain Language
Operators may not use a minor's inputs to train the companion chatbot's underlying model unless the minor's parent or guardian has provided affirmative written consent specifically authorizing use of the minor's personal information for that purpose. This is an opt-in requirement — consent must be affirmative and written, and must be specific to the training purpose. General terms of service acceptance would not satisfy this requirement. The prohibition applies to the underlying model training, not to session-level personalization or contextual memory.
Statutory Text
An operator shall not train the underlying model of a companion chatbot with the inputs of a minor unless the minor's parent or guardian has affirmatively provided written consent to the operator to use the minor's personal information for that specific purpose.
Other · Deployer · Chatbot
§ 59.1-619(A)
Plain Language
Operators must create a mechanism allowing any user to report adverse incidents related to chatbot use directly to the operator. Operators must also maintain and publish an anonymized, aggregated catalog of these reported incidents that is publicly accessible. This creates two distinct obligations: (1) a user-facing reporting channel, and (2) a public incident catalog. The statute does not define 'adverse incidents,' leaving operators to determine scope, though enforcement under the VCPA consumer protection framework suggests the term should be construed broadly.
Statutory Text
A. Operators shall establish a mechanism for any user of the chatbot to report adverse incidents related to use of the chatbot to the company and shall make an anonymized and aggregated catalog of such incidents publicly available and accessible to consumers.
G-02 Public Transparency & Documentation · G-02.4 · Deployer · Chatbot
§ 59.1-619(B)
Plain Language
Operators must publicly publish the findings of any safety testing they conduct to comply with the minor safety requirements of § 59.1-615. This is a public disclosure obligation — the results of safety testing related to preventing harmful chatbot capabilities for minors must be made available to the public. The statute does not specify format, timing, or the level of detail required in the published findings.
Statutory Text
B. Operators shall publish safety test findings for any safety testing conducted in furtherance of § 59.1-615.
R-03 Operational Performance Reporting · R-03.1 · Deployer · Chatbot
§ 59.1-619(C)
Plain Language
Operators must publish a semiannual public report containing two categories of quantitative metrics: (1) the number of times the chatbot provided information about suicide, self-harm, suicidal ideation, harming others, or illegal activity, and (2) the number of times a mental health redirect (crisis referral) was provided to users. This is a public reporting obligation — unlike CA SB 243, which requires reporting to a government office, Virginia requires the report to be made publicly available. The statute does not specify the format, publication location, or the first reporting period.
Statutory Text
C. Operators shall publish a semiannual report available to the public on the number of times (i) the chatbot provided information about suicide, self-harm, suicidal ideation, harming others, or illegal activity and (ii) a mental health redirect has been provided to users.
Other · Chatbot
§ 59.1-620
Plain Language
Any violation of the Artificial Intelligence Chatbots Act is deemed a prohibited practice under the Virginia Consumer Protection Act (VCPA). This means all VCPA enforcement mechanisms apply, including Attorney General enforcement actions, civil penalties, and the private right of action available under § 59.1-204. This is the enabling enforcement provision — it does not create a standalone obligation but rather channels all violations through the existing VCPA framework.
Statutory Text
Any violation of this chapter shall constitute a prohibited practice under the provisions of § 59.1-200 and shall be subject to any and all of the enforcement provisions of the Virginia Consumer Protection Act (§ 59.1-196 et seq.).