HB-4770
WV · State · USA
● Pending
Proposed Effective Date
2027-01-01
West Virginia House Bill 4770 — Establishing limitations on the use of artificial intelligence and artificial intelligence technology to deliver mental health care, with exceptions for administrative support functions
Summary

Restricts the use of artificial intelligence in delivering mental health care in West Virginia. Operators of AI companions and licensed professionals may use AI only for administrative or supplementary support functions, with the professional maintaining full responsibility for all interactions, outputs, and data use. AI may not independently make therapeutic decisions, directly interact with clients therapeutically, generate treatment plans without licensed professional review, or detect emotions for diagnostic or manipulative purposes. Requires clear disclosure to users that they are not communicating with a human, and mandates written informed consent before AI is used to record or transcribe therapy sessions. Enforcement is by the Offices of the Insurance Commissioner, with civil penalties up to $10,000 per violation. Applies to insurance policies issued or renewed on or after January 1, 2027.

Enforcement & Penalties
Enforcement Authority
The Offices of the Insurance Commissioner is the designated enforcement authority. Enforcement is agency-initiated; civil penalties are determined by the Insurance Commissioner. No private right of action is created by the statute. No cure period or safe harbor is specified.
Penalties
Civil penalty not to exceed $10,000 per violation, as determined by the Offices of the Insurance Commissioner. No private damages, injunctive relief, or attorney fee provisions are included.
Who Is Covered
"Operator" means any person, partnership, association, firm, or business entity, or any member, affiliate, subsidiary, or beneficial owner of any partnership, association, firm, or business entity who operates for or provides an AI companion to a user, and any insurer subject to §5-16-15 et seq., §33-15-4 et seq., §33-16-3 et seq., §33-24-7 et seq., §33-25-8 et seq., and §33-25A-8 et seq. of this code.
"Licensed professional" means an individual who holds a valid license issued by this state to provide therapy or psychotherapy services, including: (i) A licensed psychologist, §30-21-1 et seq.; (ii) A licensed social worker, §30-30-1 et seq.; (iii) A licensed professional counselor and a licensed marriage and family therapist, §30-31-1 et seq.; (iv) A drug abuse counselor authorized under §16B-13-2; (v) A licensed advanced practice registered nurse; (vi) A physician assistant, §30-3E-1 et seq.; (vii) A licensed physician, §30-3-1 and §30-14-1 et seq.; and (viii) Any other professional authorized by this state to provide therapy or psychotherapy services.
What Is Covered
"AI companion" means a system using artificial intelligence, generative artificial intelligence, and/or emotional recognition algorithms designed to simulate a sustained human or human-like relationship with a user by: (i) Retaining information on prior interactions or user sessions and user preferences to personalize the interaction and facilitate ongoing engagement with the AI companion; (ii) Asking unprompted or unsolicited emotion-based questions that go beyond a direct response to a user prompt; and (iii) Sustaining an ongoing dialogue concerning matters personal to the user. Human relationships include, but shall not be limited to, intimate, romantic, or platonic interactions or companionship. "AI companion" does not include: (i) A system used by a business entity solely for customer service or to strictly provide users with information about available commercial services or products provided by such entity, customer service account information, or other information strictly related to its customer service; (ii) A system that is primarily designed and marketed for providing efficiency improvements, research, or technical assistance; or (iii) A system used by a business entity solely for internal purposes or employee productivity.
Compliance Obligations — 7 obligations
HC-02 AI in Licensed Professional Practice Restrictions · HC-02.1 · HC-02.2 · Deployer · Professional · Healthcare Chatbot
§33-57-2(b)
Plain Language
Operators and licensed professionals may use AI only for administrative or supplementary support in therapy or psychotherapy, and must maintain full responsibility for all interactions, outputs, and data use associated with the AI system. No decision for patient care, reimbursement, or claims adjudication may be based exclusively on AI-generated information. This establishes that AI is a support tool only — the human professional retains ultimate accountability and decisional authority.
Statutory Text
(b) An operator or licensed professional is permitted to use AI tools or systems to assist in providing administrative support or supplementary support in therapy or psychotherapy services with the operator or licensed professional maintaining full responsibility for all interactions, outputs and data use associated with the system and satisfies the requirements of this article. A decision for patient care, reimbursement or claims adjudication may not be based exclusively on AI-generated information.
T-01 AI Identity Disclosure · T-01.1 · T-01.2 · Deployer · Professional · Healthcare Chatbot
§33-57-2(c)
Plain Language
Operators and licensed professionals must provide a clear and conspicuous notification — verbal or written — to users at the beginning of any AI companion interaction stating that the user is not communicating with a human. The initial disclosure need not be given more than once per day. For continuing AI companion interactions, a reminder must be provided at least every three hours. This is an unconditional disclosure requirement — it applies to every AI companion interaction regardless of whether a reasonable person would be misled.
Statutory Text
(c) An operator or licensed professional shall provide a clear and conspicuous notification to a user at the beginning of any AI companion interaction, which need not exceed once per day, and at least every three hours for continuing AI companion interactions, which states either verbally or in writing that the user is not communicating with a human.
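For deployers building this cadence into an AI companion, the timing rule can be sketched as a simple scheduling check. This is an illustrative sketch only — the function and field names below are assumptions, not terms from the bill — showing the two clocks subsection (c) creates: a start-of-interaction notice required no more than once per day, and a reminder at least every three hours during a continuing interaction.

```python
from datetime import datetime, timedelta
from typing import Optional

# Illustrative §33-57-2(c) disclosure cadence (names are assumptions,
# not drawn from the statute).
REMINDER_INTERVAL = timedelta(hours=3)   # "at least every three hours"
INITIAL_INTERVAL = timedelta(days=1)     # "need not exceed once per day"

def disclosure_due(last_initial: Optional[datetime],
                   last_reminder: Optional[datetime],
                   now: datetime,
                   session_active: bool) -> Optional[str]:
    """Return which disclosure, if any, is due at `now`."""
    # Start-of-interaction notice: required if never given, or if a day
    # has passed since the last one.
    if last_initial is None or now - last_initial >= INITIAL_INTERVAL:
        return "initial"
    # Three-hour reminder applies only to continuing interactions.
    if session_active:
        anchor = last_reminder or last_initial
        if now - anchor >= REMINDER_INTERVAL:
            return "reminder"
    return None
```

A conservative deployment would treat both intervals as ceilings and disclose more often; the statute sets minimum frequency, not a cap.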
HC-02 AI in Licensed Professional Practice Restrictions · HC-02.4 · Deployer · Professional · Healthcare Chatbot
§33-57-2(d)
Plain Language
Before using AI to record or transcribe a therapeutic session, the operator or licensed professional must inform the patient (or their legally authorized representative) in writing that AI will be used and explain the specific purpose of the AI tool. The patient must then provide consent. Critically, 'consent' under this bill has a heightened standard — it must be a clear, explicit, freely given, specific written agreement that is revocable at any time. Consent cannot be obtained via general terms of use, passive actions like hovering or closing content, or deceptive actions. This effectively prohibits burying AI recording consent in standard intake paperwork.
Statutory Text
(d) No operator or licensed professional may be permitted to use artificial intelligence to assist in providing supplementary support in therapy or psychotherapy where the client's therapeutic session is recorded or transcribed unless: (1) The patient or the patient's legally authorized representative is informed in writing of the following: that artificial intelligence will be used; and the specific purpose of the artificial intelligence tool or system that will be used; and (2) The patient or the patient's legally authorized representative provides consent to the use of artificial intelligence.
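The two prongs of subsection (d) — written notice of the specific purpose, plus the bill's heightened consent standard — can be modeled as a gate that must pass before any recording or transcription is enabled. This is a hedged sketch under stated assumptions: the `ConsentRecord` fields are hypothetical ways of capturing the requirements that consent be written, explicit, specific, revocable, and not obtained through general terms of use.

```python
from dataclasses import dataclass

# Illustrative model of the §33-57-2(d) gate; field names are
# assumptions for this sketch, not statutory terms.
@dataclass
class ConsentRecord:
    written: bool            # written agreement, not verbal or implied
    explicit: bool           # affirmative act, not hovering/closing content
    specific_purpose: str    # the stated purpose of the AI tool
    via_general_terms: bool  # obtained via general terms of use (invalid)
    revoked: bool            # consent is revocable at any time

def may_record(notice_given_in_writing: bool, consent: ConsentRecord) -> bool:
    """Both prongs of subsection (d): written notice AND valid consent."""
    valid_consent = (consent.written
                     and consent.explicit
                     and bool(consent.specific_purpose)
                     and not consent.via_general_terms
                     and not consent.revoked)
    return notice_given_in_writing and valid_consent
```

Because consent is revocable at any time, a real system would re-check this gate on every session rather than caching the result from intake.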
HC-02 AI in Licensed Professional Practice Restrictions · HC-02.3 · HC-02.5 · Deployer · Professional · Healthcare Chatbot
§33-57-2(e)
Plain Language
Therapy or psychotherapy services may not be provided, advertised, or offered to the public in West Virginia — including via internet-based AI — unless conducted by a licensed professional. Additionally, no operator or licensed professional may design, market, or present any AI system in a way that would reasonably cause a person to believe the AI system is a licensed professional or crisis service. This is a two-part prohibition: (1) AI-only therapy without a licensed professional is banned, and (2) AI systems cannot be designed to impersonate licensed professionals or crisis services.
Statutory Text
(e) No operator or licensed professional may provide, advertise, or otherwise offer therapy or psychotherapy services, including through the use of Internet-based artificial intelligence, to the public in this state unless the therapy or psychotherapy services are conducted by an individual who is a licensed professional, and may not design, market or present any AI system that reasonably would cause a person to believe the AI system is a licensed professional or crisis service.
HC-02 AI in Licensed Professional Practice Restrictions · HC-02.2 · Deployer · Healthcare Chatbot
§33-57-2(f)
Plain Language
Peer support services, religious counseling services, and digital mental wellness services are prohibited from using AI to diagnose conditions, develop or modify treatment plans, conduct suicide or self-harm risk assessments, or otherwise provide therapy or psychotherapy services — unless a licensed professional approves. This prevents non-clinical services from using AI to effectively practice therapy without professional oversight, even if the service itself is not marketed as therapy.
Statutory Text
(f) Peer support services, religious counseling services and digital mental wellness services may not, through the use of artificial intelligence, diagnose, develop or modify treatment plans, conduct suicide or self-harm risk assessments, or otherwise provide therapy or psychotherapy services without the approval of a licensed professional.
HC-02 AI in Licensed Professional Practice Restrictions · HC-02.2 · Professional · Deployer · Healthcare Chatbot
§33-57-2(g)
Plain Language
Licensed professionals face four categorical prohibitions on AI use: AI may not (1) make independent therapeutic decisions, (2) directly interact with clients in any form of therapeutic communication, (3) generate treatment plans or therapeutic recommendations without the licensed professional's review and approval, or (4) detect emotions or mental states for diagnostic, therapeutic, or treatment purposes or to target or manipulate a person's mental or emotional state. The emotion-detection prohibition is notably broad — it covers both clinical use (emotion detection for diagnosis) and manipulative use (targeting emotional states). The overall framing limits AI use to administrative and supplementary support only, with the professional retaining full decisional authority.
Statutory Text
(g) An operator or licensed professional may use artificial intelligence only to the extent the use meets the requirements of subsection (b). A licensed professional may not allow artificial intelligence to do any of the following: (1) Make independent therapeutic decisions; (2) Directly interact with clients in any form of therapeutic communication; (3) Generate therapeutic recommendations or treatment plans without review and approval by the licensed professional; or (4) Detect emotions or mental states for the purpose of making diagnostic, therapeutic, or treatment decisions, or for targeting or manipulating a person's mental or emotional state.
HC-02 AI in Licensed Professional Practice Restrictions · HC-02.1 · Deployer · Professional · Healthcare Chatbot
§33-57-2(h)
Plain Language
AI may be used to flag or triage communications indicating self-harm, suicide risk, or other acute safety concerns — this is a narrow carve-out from the broader prohibition on AI therapeutic interaction. However, any such AI-generated flags must be promptly reviewed and addressed by a licensed professional who retains sole authority for clinical assessment and decision-making. The AI's role is limited to alerting and triaging; it cannot independently assess risk, recommend interventions, or communicate with the patient about the flagged concern.
Statutory Text
(h) An operator or licensed professional may use artificial intelligence solely to flag or triage communications that may indicate self-harm, suicide risk, or other acute safety concerns, provided that any such flags are promptly reviewed and addressed by a licensed professional who retains sole authority for clinical assessment and decision-making.
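The flag-and-review structure of subsection (h) maps naturally onto a triage queue in which the AI can only enqueue flags and a licensed professional alone can resolve them. The sketch below is illustrative only — class, method, and field names are assumptions, not taken from the bill — and shows the division of roles: the AI's permitted action is `raise_flag`; clinical resolution is reserved to the professional.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative §33-57-2(h) triage model; all names are assumptions
# for this sketch, not statutory terms.
@dataclass
class SafetyFlag:
    message_id: str
    concern: str                          # e.g. "self-harm", "suicide-risk"
    reviewed_by: Optional[str] = None     # licensed professional's license no.
    resolution: Optional[str] = None      # set only by the professional

class TriageQueue:
    def __init__(self):
        self.flags = []

    def raise_flag(self, message_id: str, concern: str) -> SafetyFlag:
        """The AI's only permitted role: flag and enqueue for review."""
        flag = SafetyFlag(message_id, concern)
        self.flags.append(flag)
        return flag

    def resolve(self, flag: SafetyFlag, license_no: str, resolution: str):
        """Clinical assessment and decision-making are reserved to the
        licensed professional, per subsection (h)."""
        flag.reviewed_by = license_no
        flag.resolution = resolution

    def pending(self) -> list:
        """Flags awaiting the required prompt professional review."""
        return [f for f in self.flags if f.reviewed_by is None]
```

Note what the model deliberately omits: the AI has no method for assessing risk, recommending interventions, or replying to the patient about a flagged concern, mirroring the narrow carve-out described above.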