HB-635
VA · State · USA
VA
USA
● Pending
Proposed Effective Date
2027-01-01
Virginia HB 635 — Artificial Intelligence Chatbots Act; established, prohibited practices, penalties (proposed Chapter 60, §§ 59.1-614 through 59.1-620)
Summary

Virginia HB 635 creates the Artificial Intelligence Chatbots Act, imposing safety, disclosure, and transparency obligations on operators of companion chatbots. Operators must prevent companion chatbots from engaging in specified harmful behaviors when available to minors, including encouraging self-harm, offering unsupervised mental health therapy, sexually explicit interactions, and engagement optimization that overrides safety guardrails. Operators must provide persistent AI identity disclosure and periodic pop-up reminders every 90 minutes during sustained engagement. A crisis response protocol for detecting and responding to suicidal ideation and self-harm is required as a condition of operation. Operators must obtain parental consent before using minor inputs for model training, publish safety test findings, maintain a public incident catalog, and publish semiannual reports on crisis-related outputs and mental health redirects. Violations are enforceable under the Virginia Consumer Protection Act, including private actions and Attorney General enforcement.

Enforcement & Penalties
Enforcement Authority
Enforcement proceeds through the Virginia Consumer Protection Act (§ 59.1-196 et seq.). The Attorney General and local Commonwealth's Attorneys have enforcement authority, and an individual consumer who suffers a loss as a result of a violation may bring a private action under § 59.1-204 for violations of § 59.1-200. The chapter creates no separate designated agency enforcer.
Penalties
Remedies available under the Virginia Consumer Protection Act (§ 59.1-204): individual consumers may recover the greater of actual damages or $500; for willful violations, the greater of three times actual damages or $1,000. The Attorney General may seek civil penalties of up to $2,500 per violation. Reasonable attorney's fees and costs may be awarded, and injunctive relief is available.
Who Is Covered
"Developer" means a person, partnership, corporation, deployer, or state or local governmental agency that designs, codes, substantially modifies, or otherwise produces a companion chatbot in the Commonwealth.
"Deployer" means a person, partnership, corporation, developer, or state or local governmental agency or any contractor or agent of those entities that uses a companion chatbot for a commercial or public purpose in the Commonwealth.
"Operator" means a person, partnership, corporation, entity, developer, deployer, or state or local government agency that makes a companion chatbot available to a user in the Commonwealth.
What Is Covered
"Companion chatbot" is a generative artificial intelligence system with a natural language interface that simulates a sustained human-like relationship with a user by doing any of the following: (i) retaining information on prior interactions or user sessions and user preferences to personalize the interaction and facilitate ongoing engagement with the companion chatbot; (ii) asking unprompted or unsolicited questions that go beyond a direct response to a user prompt; and (iii) sustaining an ongoing dialogue concerning matters personal to the user. "Companion chatbot" does not include a system that is used (a) by a partnership, corporation, or state or local government agency solely for customer service or to strictly provide users with information about available services or products provided by that entity, customer service account information, or other information strictly related to its customer service; (b) by a partnership or corporation solely for internal purposes or employee productivity; (c) primarily for customer service, a business' operational purposes, productivity, or analysis related to source information, internal research, or technical assistance; or (d) a bot that is a feature of a video game and is limited to replies related to the video game that cannot discuss topics related to mental health, self-harm, or sexually explicit conduct or maintain a dialogue on other topics unrelated to the video game.
Compliance Obligations · 11 obligations
S-02 Prohibited Conduct & Output Restrictions · S-02.6 · S-02.7 · Deployer · Chatbot · Minors
§ 59.1-615(A)
Plain Language
Operators may not make a companion chatbot available to a minor if the chatbot is capable of any of seven enumerated harmful behaviors: encouraging self-harm, suicidal ideation, violence, drug or alcohol use, or disordered eating; offering unsupervised mental health therapy or discouraging the minor from seeking professional help; encouraging harm to others or illegal activity including CSAM creation; engaging in sexually explicit interactions or luring minors into them; encouraging secrecy or self-isolation; prioritizing language mirroring or validation over safety; or allowing engagement optimization to override safety guardrails. The obligation is framed as a prohibition on making the chatbot available at all if it retains any of these capabilities for minors — operators must ensure these capabilities are blocked before a minor can access the system.
Statutory Text
A. No operator shall make a companion chatbot available to a minor if the companion chatbot is capable of any of the following: 1. Encouraging or manipulating the minor user to engage in self-harm, suicidal ideation, violence, consumption of drugs or alcohol, or disordered eating; 2. Offering mental health therapy to the minor user without the direct supervision of a licensed professional or discouraging the minor user from seeking help from a licensed professional or appropriate adult; 3. Encouraging or manipulating the minor user to harm others or participate in an illegal activity, including the creation of child sexual abuse materials; 4. Engaging in erotic or sexually explicit interactions with the minor user or engaging in activities designed to lure minor users into such interactions; 5. Encouraging or manipulating the minor user to maintain secrecy about interactions or to self-isolate; 6. Prioritizing mirroring the minor's language or validating the minor user over the minor user's safety; or 7. Optimizing engagement so that it supersedes the companion chatbot's safety guardrails.
MN-01 Minor User AI Safety Protections · MN-01.1 · Deployer · Chatbot · Minors
§ 59.1-615(B)-(C)
Plain Language
Operators must implement commercially reasonable age verification methods — such as a neutral age screen — to determine whether each user is a minor. The knowledge standard shifts over time: before January 1, 2027, subsection A only applies if the operator has actual knowledge the user is a minor; from January 1, 2027 onward, the standard tightens to require that the operator must have affirmatively and reasonably determined the user is not a minor. This effectively creates a safe harbor for operators who lack actual knowledge during the initial period, which narrows once the reasonable-determination standard takes effect.
Statutory Text
B. An operator shall use commercially reasonable methods, such as a neutral age screen mechanism, to determine whether a user is a minor. C. A user shall not be considered a minor for the purposes of subsection A if (i) prior to January 1, 2027, the operator does not have actual knowledge that the user is a minor or (ii) beginning on January 1, 2027, the operator has reasonably determined that the user is not a minor.
T-01 AI Identity Disclosure · T-01.1 · T-01.2 · T-01.3 · Deployer · Chatbot
§ 59.1-616(A)
Plain Language
Operators must provide AI identity disclosure in two forms: (1) a static, persistent disclaimer visible at all times indicating the companion chatbot is not human, and (2) active pop-up notifications (or equivalent if pop-ups are not feasible) at three specific intervals — upon login, every 90 minutes of sustained engagement, and whenever the user asks. Unlike some jurisdictions that condition disclosure on a reasonable-person deception standard, this obligation is unconditional and applies to all users regardless of age. The 90-minute re-disclosure interval is shorter than the three-hour interval in California SB 243.
Statutory Text
A. An operator shall (i) include a disclaimer to users of all ages that a companion chatbot is not a human via a static, persistent disclosure and (ii) notify a user via a pop-up, or other communication if a pop-up is not feasible, that the user is not engaging with a human counterpart at the following intervals: 1. Upon login to the companion chatbot; 2. Every 90 minutes of sustained user engagement; and 3. When prompted by the user.
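For operators building the pop-up requirement into a chat session, the three statutory triggers reduce to simple session-state logic. The sketch below is illustrative only; the `DisclosureTracker` class and its method names are hypothetical, and the bill does not define how "sustained user engagement" is to be measured (the sketch assumes elapsed time since the last disclosure).

```python
from datetime import datetime, timedelta

# § 59.1-616(A)(2): re-disclosure every 90 minutes of sustained engagement.
REDISCLOSURE_INTERVAL = timedelta(minutes=90)

class DisclosureTracker:
    """Tracks when the not-a-human pop-up notice is due for one user session."""

    def __init__(self):
        self.last_disclosed = None  # no disclosure shown yet this session

    def popup_due(self, now, user_asked=False):
        # Trigger 1: upon login (no disclosure shown yet this session).
        if self.last_disclosed is None:
            return True
        # Trigger 3: when prompted by the user.
        if user_asked:
            return True
        # Trigger 2: 90 minutes have elapsed since the last disclosure.
        return now - self.last_disclosed >= REDISCLOSURE_INTERVAL

    def mark_disclosed(self, now):
        self.last_disclosed = now
```

Note that this covers only the pop-up prong; the static, persistent disclaimer in prong (i) is a separate, always-visible interface element and needs no timing logic.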
CP-01 Deceptive & Manipulative AI Conduct · CP-01.9 · Deployer · Chatbot
§ 59.1-616(B)
Plain Language
Operators must not use any language in advertising or in the chatbot interface that indicates or implies that the chatbot's output is provided by a licensed professional. This is a broad prohibition covering any regulated profession — not limited to healthcare or mental health — and applies to both the marketing of the product and the in-product user experience. Operators should audit interface copy, chatbot persona descriptions, and advertising materials to ensure no term implies licensed professional involvement.
Statutory Text
B. No operator shall use any term, letter, or phrase in the advertising or interface that indicates or implies that any output data is being provided by a professional that is regulated by a licensed industry.
S-04 AI Crisis Response Protocols · S-04.1 · Deployer · Chatbot
§ 59.1-617
Plain Language
Operators may not operate or provide a companion chatbot to any user (not just minors) unless the chatbot maintains an active protocol for detecting and responding to expressions of suicidal ideation or self-harm. Upon detection, the chatbot must refer the user to crisis service providers such as the 988 Suicide and Crisis Lifeline, a crisis text line, or equivalent services. This is a continuous operating prerequisite — the protocol must be active at all times as a condition of lawful operation. The standard is 'reasonable efforts,' providing some flexibility in implementation.
Statutory Text
It is unlawful for any operator to operate or provide a companion chatbot to a user unless such companion chatbot contains a protocol to take reasonable efforts for detecting and addressing expressions of suicidal ideation or self-harm by a user to the companion chatbot. This protocol shall include detection of user expressions of suicidal ideation or self-harm and a notification to the user that refers the user to crisis service providers such as the 9-8-8 suicide prevention and behavioral health crisis hotline, a crisis text line, or other appropriate crisis services upon detection of such user's expressions of suicidal ideation or self-harm.
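The statute requires two linked steps: detection of a user's expression of suicidal ideation or self-harm, and a notification referring the user to crisis services. The sketch below shows that flow in its simplest form, under loudly stated assumptions: keyword matching is a simplistic stand-in for real detection (production systems would use a classifier), and the function name and referral wording are hypothetical, not drawn from the bill.

```python
import re

# Illustrative stand-in for detection; a real protocol would use a
# trained classifier, not a keyword list.
CRISIS_PATTERNS = re.compile(
    r"\b(suicide|suicidal|kill myself|self[- ]harm|hurt myself)\b",
    re.IGNORECASE,
)

# Referral notice per § 59.1-617: point the user to 988 or a crisis text line.
REFERRAL_NOTICE = (
    "If you are thinking about suicide or self-harm, help is available: "
    "call or text 988 (Suicide and Crisis Lifeline) or use a crisis text line."
)

def crisis_referral(user_message: str):
    """Return the crisis referral notice if the message signals
    suicidal ideation or self-harm; otherwise return None."""
    if CRISIS_PATTERNS.search(user_message):
        return REFERRAL_NOTICE
    return None
```

Because the statute makes the protocol a condition of lawful operation, a check like this would need to run on every user message, not only in flagged conversations.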
D-01 Automated Processing Rights & Data Controls · D-01.6 · Deployer · Chatbot · Minors
§ 59.1-618
Plain Language
Operators may not use a minor's inputs to train the companion chatbot's underlying model without first obtaining affirmative written consent from the minor's parent or guardian. The consent must be specific to the purpose of using the minor's personal information for model training — a general terms-of-service consent would not satisfy this requirement. This applies regardless of whether the training occurs in real time or in batch. Operators should implement consent flows that clearly identify model training as a distinct purpose and require a separate written affirmation.
Statutory Text
An operator shall not train the underlying model of a companion chatbot with the inputs of a minor unless the minor's parent or guardian has affirmatively provided written consent to the operator to use the minor's personal information for that specific purpose.
Other · Deployer · Chatbot
§ 59.1-619(A)
Plain Language
Operators must create a mechanism for users to report adverse incidents involving the companion chatbot, and must publish an anonymized, aggregated catalog of those incidents that is publicly accessible to consumers. This is a dual obligation: both the intake channel and the public disclosure of aggregated incident data are required. The provision does not specify the format, frequency of catalog updates, or what constitutes an 'adverse incident,' leaving those details to the operator's reasonable interpretation.
Statutory Text
A. Operators shall establish a mechanism for any user of the chatbot to report adverse incidents related to use of the chatbot to the company and shall make an anonymized and aggregated catalog of such incidents publicly available and accessible to consumers.
G-02 Public Transparency & Documentation · Deployer · Chatbot · Minors
§ 59.1-619(B)
Plain Language
Operators must publicly publish the findings of any safety testing conducted to ensure compliance with the minor safety requirements in § 59.1-615. This is an ongoing publication obligation — each round of safety testing conducted in connection with the minor-specific prohibited conduct provisions must result in published findings. The provision does not specify the format or location of publication, but the findings must be made public.
Statutory Text
B. Operators shall publish safety test findings for any safety testing conducted in furtherance of § 59.1-615.
R-03 Operational Performance Reporting · R-03.1 · Deployer · Chatbot
§ 59.1-619(C)
Plain Language
Operators must publish a semiannual public report disclosing two categories of quantitative data: (1) how many times the chatbot provided information about suicide, self-harm, suicidal ideation, harming others, or illegal activity, and (2) how many times a mental health crisis redirect was provided to users. Unlike some jurisdictions that require reporting to a regulatory authority, Virginia requires public publication of this data. The provision does not specify the format, publication location, or whether the first report covers a full six-month period or a partial initial period.
Statutory Text
C. Operators shall publish a semiannual report available to the public on the number of times (i) the chatbot provided information about suicide, self-harm, suicidal ideation, harming others, or illegal activity and (ii) a mental health redirect has been provided to users.
Other · Chatbot
§ 59.1-620
Plain Language
Violations of the AI Chatbots Act are declared to be prohibited practices under the Virginia Consumer Protection Act, making them subject to VCPA enforcement mechanisms including Attorney General actions and private suits. This is a liability and enforcement hook — it creates no new compliance obligation but activates an existing enforcement framework.
Statutory Text
Any violation of this chapter shall constitute a prohibited practice under the provisions of § 59.1-200 and shall be subject to any and all of the enforcement provisions of the Virginia Consumer Protection Act (§ 59.1-196 et seq.).
Other · Chatbot
§ 59.1-200(A)(88)
Plain Language
This amendment adds violations of the new AI Chatbots Act (Chapter 60) to the enumerated list of prohibited practices in the VCPA. This is a procedural cross-reference that ensures VCPA enforcement applies to Chapter 60 violations. It creates no independent compliance obligation.
Statutory Text
88. Violating any provision of Chapter 60 (§ 59.1-614 et seq.).