HB-4770
WV · State · USA
WV
USA
● Pending
Proposed Effective Date
2027-01-01
West Virginia House Bill 4770 — Establishing limitations on the use of artificial intelligence and artificial intelligence technology to deliver mental health care, with exceptions for administrative support functions
Summary

Restricts the use of artificial intelligence in the delivery of mental health care in West Virginia by prohibiting AI from independently providing therapy or psychotherapy services, making therapeutic decisions, directly engaging in therapeutic communication with clients, or detecting emotions for diagnostic or manipulative purposes. Operators and licensed professionals may use AI for administrative and supplementary support but must retain full responsibility for all interactions, outputs, and data use. Requires clear disclosure to users that they are not communicating with a human at the start of AI companion interactions and at least every three hours. Recording or transcription of therapeutic sessions using AI requires written informed consent. Enforced by the Offices of the Insurance Commissioner with civil penalties up to $10,000 per violation. Applies to policies, plans, or contracts issued or renewed on or after January 1, 2027.

Enforcement & Penalties
Enforcement Authority
The Offices of the Insurance Commissioner has enforcement authority and determines civil penalties. No private right of action is created. Enforcement is agency-initiated. The Insurance Commissioner may adopt rules to implement the section.
Penalties
Civil penalty not to exceed $10,000 per violation, as determined by the Offices of the Insurance Commissioner. The bill contains no private damages, injunctive relief, or attorney fee provisions.
Who Is Covered
"Operator" means any person, partnership, association, firm, or business entity, or any member, affiliate, subsidiary, or beneficial owner of any partnership, association, firm, or business entity who operates for or provides an AI companion to a user, and any insurer subject to §5-16-15 et seq., §33-15-4 et seq., §33-16-3 et seq., §33-24-7 et seq., §33-25-8 et seq., and §33-25A-8 et seq. of this code.
"Licensed professional" means an individual who holds a valid license issued by this state to provide therapy or psychotherapy services, including: (i) A licensed psychologist, §30-21-1 et seq.; (ii) A licensed social worker, §30-30-1 et seq.; (iii) A licensed professional counselor and a licensed marriage and family therapist, §30-31-1 et seq.; (iv) A drug abuse counselor authorized under §16B-13-2; (v) A licensed advanced practice registered nurse; (vi) A physician assistant, §30-3E-1 et seq.; (vii) A licensed physician, §30-3-1 and §30-14-1 et seq.; and (viii) Any other professional authorized by this state to provide therapy or psychotherapy services.
What Is Covered
"AI companion" means a system using artificial intelligence, generative artificial intelligence, and/or emotional recognition algorithms designed to simulate a sustained human or human-like relationship with a user by: (i) Retaining information on prior interactions or user sessions and user preferences to personalize the interaction and facilitate ongoing engagement with the AI companion; (ii) Asking unprompted or unsolicited emotion-based questions that go beyond a direct response to a user prompt; and (iii) Sustaining an ongoing dialogue concerning matters personal to the user. Human relationships include, but shall not be limited to, intimate, romantic or platonic interactions or companionship. "AI companion" does not include: (i) A system used by a business entity solely for customer service or to strictly provide users with information about available commercial services or products provided by such entity, customer service account information, or other information strictly related to its customer service; (ii) A system that is primarily designed and marketed for providing efficiency improvements, research, or technical assistance; or (iii) A system used by a business entity solely for internal purposes or employee productivity.
Compliance Obligations · 8 obligations
HC-02 AI in Licensed Professional Practice Restrictions · HC-02.1 · HC-02.2 · Deployer · Professional · Healthcare Chatbot
§33-57-2(b)
Plain Language
Operators and licensed professionals may use AI tools for administrative or supplementary support in therapy or psychotherapy, but must maintain full responsibility for all interactions, outputs, and data use associated with the AI system. Patient care decisions, reimbursement decisions, and claims adjudication may not be based exclusively on AI-generated information. This establishes both a scope-of-permitted-use boundary and a professional responsibility requirement — AI is permitted as a tool, not as a substitute for human professional judgment.
Statutory Text
(b) An operator or licensed professional is permitted to use AI tools or systems to assist in providing administrative support or supplementary support in therapy or psychotherapy services with the operator or licensed professional maintaining full responsibility for all interactions, outputs and data use associated with the system and satisfies the requirements of this article. A decision for patient care, reimbursement or claims adjudication may not be based exclusively on AI-generated information.
T-01 AI Identity Disclosure · T-01.1 · T-01.2 · Deployer · Professional · Healthcare Chatbot
§33-57-2(c)
Plain Language
Operators and licensed professionals must provide a clear and conspicuous notification at the beginning of any AI companion interaction stating that the user is not communicating with a human. The initial disclosure need not be given more than once per day. For continuing interactions, a re-disclosure must be provided at least every three hours. The notification may be verbal or written. This is an unconditional disclosure requirement: it applies to all AI companion interactions regardless of whether a reasonable person would be misled.
Statutory Text
(c) An operator or licensed professional shall provide a clear and conspicuous notification to a user at the beginning of any AI companion interaction which need not exceed once per day, and at least every three hours for continuing AI companion interactions, which states either verbally or in writing that the user is not communicating with a human.
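As an illustration only, the disclosure cadence in subsection (c) could be enforced by a deployer with a simple per-user timer. The class, variable names, and notice wording below are hypothetical, not taken from the bill.

```python
from datetime import datetime, timedelta

# Illustrative sketch only -- all names here are hypothetical.
REDISCLOSURE_INTERVAL = timedelta(hours=3)  # §33-57-2(c): at least every three hours
NOTICE_TEXT = "You are not communicating with a human."

class DisclosureTracker:
    """Tracks when the §33-57-2(c) notice was last shown to each user."""

    def __init__(self):
        self.last_notice = {}  # user_id -> datetime of most recent notice

    def notice_due(self, user_id, now=None):
        now = now or datetime.now()
        last = self.last_notice.get(user_id)
        if last is None:
            return True  # start of interaction: disclose immediately
        # Continuing interaction: re-disclose at least every three hours.
        return now - last >= REDISCLOSURE_INTERVAL

    def record_notice(self, user_id, now=None):
        self.last_notice[user_id] = now or datetime.now()
```

This sketch ignores the once-per-day allowance for the initial notice and simply rechecks before every exchange; a production system would also have to decide between verbal and written delivery, both of which the statute permits.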
HC-02 AI in Licensed Professional Practice Restrictions · HC-02.4 · Deployer · Professional · Healthcare Chatbot
§33-57-2(d)
Plain Language
Before using AI to record or transcribe a therapeutic session, the operator or licensed professional must inform the patient (or their legally authorized representative) in writing that AI will be used and disclose the specific purpose of the AI tool. The patient must then provide consent, which must be freely given, specific, informed, written, and revocable. Consent cannot be obtained through general terms of use, passive UI actions, or deceptive practices. This is a hard prerequisite — without written notice and affirmative consent, AI-assisted recording or transcription of therapy sessions is prohibited.
Statutory Text
(d) No operator or licensed professional may be permitted to use artificial intelligence to assist in providing supplementary support in therapy or psychotherapy where the client's therapeutic session is recorded or transcribed unless: (1) The patient or the patient's legally authorized representative is informed in writing of the following: that artificial intelligence will be used; and the specific purpose of the artificial intelligence tool or system that will be used; and (2) The patient or the patient's legally authorized representative provides consent to the use of artificial intelligence.
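The prerequisite in subsection (d) is a conjunctive gate: every notice element and an unrevoked consent must be present before AI-assisted recording or transcription may occur. A minimal sketch, with illustrative field names that are not statutory terms:

```python
from dataclasses import dataclass

# Hypothetical model of the §33-57-2(d) prerequisite; names are illustrative.
@dataclass
class RecordingConsent:
    informed_in_writing: bool   # written notice that AI will be used
    purpose_disclosed: bool     # specific purpose of the AI tool disclosed
    consent_given: bool         # affirmative consent from patient or representative
    revoked: bool = False       # consent must remain revocable

def may_record_with_ai(consent: RecordingConsent) -> bool:
    """All notice elements and unrevoked, affirmative consent are required."""
    return (consent.informed_in_writing
            and consent.purpose_disclosed
            and consent.consent_given
            and not consent.revoked)
```

Note the default-deny shape: if any element is missing, or consent is later revoked, AI-assisted recording or transcription must stop.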
HC-02 AI in Licensed Professional Practice Restrictions · HC-02.3 · HC-02.5 · Deployer · Professional · Healthcare Chatbot
§33-57-2(e)
Plain Language
Therapy or psychotherapy services may only be provided in West Virginia by a licensed professional — AI alone cannot deliver these services, even via internet-based platforms. Additionally, no operator or licensed professional may design, market, or present an AI system in a way that would reasonably cause a person to believe the AI is a licensed professional or a crisis service. This is a dual prohibition: (1) a licensure gatekeeping requirement and (2) an anti-deception rule preventing AI systems from impersonating licensed professionals or crisis services.
Statutory Text
(e) No operator or licensed professional may provide, advertise, or otherwise offer therapy or psychotherapy services, including through the use of Internet-based artificial intelligence, to the public in this state unless the therapy or psychotherapy services are conducted by an individual who is a licensed professional, and may not design, market or present any AI system that reasonably would cause a person to believe the AI system is a licensed professional or crisis service.
HC-02 AI in Licensed Professional Practice Restrictions · HC-02.2 · Deployer · Professional · Healthcare Chatbot
§33-57-2(f)
Plain Language
Peer support services, religious counseling services, and digital mental wellness services may not use AI to perform clinical functions — specifically diagnosing, developing or modifying treatment plans, conducting suicide or self-harm risk assessments, or otherwise providing therapy or psychotherapy services — unless a licensed professional approves. This prevents non-clinical services from using AI to cross into clinical territory without professional oversight, even though these services are generally exempt from the therapy/psychotherapy licensing requirement.
Statutory Text
(f) Peer support services, religious counseling services and digital mental wellness services may not, through the use of artificial intelligence, diagnose, develop or modify treatment plans, conduct suicide or self-harm risk assessments, or otherwise provide therapy or psychotherapy services without the approval of a licensed professional.
HC-02 AI in Licensed Professional Practice Restrictions · HC-02.2 · Professional · Deployer · Healthcare Chatbot
§33-57-2(g)
Plain Language
Licensed professionals are prohibited from allowing AI to: (1) make independent therapeutic decisions; (2) directly interact with clients in therapeutic communication; (3) generate therapeutic recommendations or treatment plans without the professional's review and approval; or (4) detect emotions or mental states for diagnostic, therapeutic, or treatment purposes, or to target or manipulate a person's mental or emotional state. AI use is constrained to administrative and supplementary support as defined in subsection (b). This is a comprehensive enumeration of prohibited AI autonomous functions in the clinical mental health context — the professional must remain in the loop for all clinical activities.
Statutory Text
(g) An operator or licensed professional may use artificial intelligence only to the extent the use meets the requirements of subsection (b). A licensed professional may not allow artificial intelligence to do any of the following: (1) Make independent therapeutic decisions; (2) Directly interact with clients in any form of therapeutic communication; (3) Generate therapeutic recommendations or treatment plans without review and approval by the licensed professional; or (4) Detect emotions or mental states for the purpose of making diagnostic, therapeutic, or treatment decisions, or for targeting or manipulating a person's mental or emotional state.
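Subsections (b), (g), and (h) together partition AI functions into a small permitted set and an enumerated prohibited set. A deployer-side guard could treat this as a default-deny allowlist; the category labels below are illustrative shorthand, not statutory terms.

```python
# Hypothetical partition of AI functions per §33-57-2(b), (g), and (h).
PERMITTED_AI_FUNCTIONS = {
    "administrative_support",    # subsection (b)
    "supplementary_support",     # subsection (b), under professional responsibility
    "safety_flagging",           # subsection (h) safe harbor
}

PROHIBITED_AI_FUNCTIONS = {
    "independent_therapeutic_decision",     # (g)(1)
    "direct_therapeutic_communication",     # (g)(2)
    "unreviewed_treatment_recommendation",  # (g)(3)
    "emotion_detection_for_diagnosis",      # (g)(4)
}

def ai_function_allowed(function: str) -> bool:
    """Default-deny: anything not expressly permitted is refused."""
    return function in PERMITTED_AI_FUNCTIONS
```

The default-deny choice mirrors the statute's structure: subsection (g) opens by limiting AI use to what subsection (b) permits, so unknown functions fall outside the permitted set rather than inside it.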
HC-02 AI in Licensed Professional Practice Restrictions · HC-02.1 · Deployer · Professional · Healthcare Chatbot
§33-57-2(h)
Plain Language
AI may be used to flag or triage communications indicating self-harm, suicide risk, or other acute safety concerns — but this is the only permitted autonomous AI function in this context. A licensed professional must promptly review all such flags and retains sole authority for clinical assessment and decision-making. This provision creates a narrow safe harbor for AI-assisted safety screening within the broader prohibition on AI clinical functions, while reinforcing that the licensed professional must remain the decision-maker for all clinical responses.
Statutory Text
(h) An operator or licensed professional may use artificial intelligence solely to flag or triage communications that may indicate self-harm, suicide risk, or other acute safety concerns, provided that any such flags are promptly reviewed and addressed by a licensed professional who retains sole authority for clinical assessment and decision-making.
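The subsection (h) safe harbor reduces to a two-stage workflow: the AI may flag, and only a licensed professional may assess. A sketch under strong simplifying assumptions; the keyword screen and queue are illustrative placeholders, not a clinically validated method.

```python
# Hypothetical sketch of the §33-57-2(h) flag-and-triage workflow.
# The keyword list is a toy stand-in for whatever screening model is used.
SAFETY_TERMS = ("self-harm", "suicide", "hurt myself")

def flag_for_review(message: str) -> bool:
    """AI may only flag; it makes no clinical decision."""
    text = message.lower()
    return any(term in text for term in SAFETY_TERMS)

def triage(messages):
    """Route flagged messages to a licensed professional's review queue."""
    review_queue = [m for m in messages if flag_for_review(m)]
    # A licensed professional must promptly review this queue and retains
    # sole authority for clinical assessment and any clinical response.
    return review_queue
```

Everything downstream of the queue, including the decision of whether and how to respond clinically, must stay with the licensed professional for the safe harbor to apply.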
Other · Healthcare Chatbot
§33-57-2(i)
Plain Language
Operators and licensed professionals found in violation of any provision of this section face civil penalties up to $10,000 per violation, as determined by the Insurance Commissioner. The Insurance Commissioner may also adopt rules to implement the section. These provisions establish enforcement mechanisms and rulemaking authority but create no independent compliance obligation.
Statutory Text
(i) An operator or a licensed professional found in violation of this section shall pay a civil penalty of an amount not to exceed $10,000 per violation, as determined by the Offices of the Insurance Commissioner. (j) The Offices of the Insurance Commissioner may adopt rules to implement this section.