HC-02
Healthcare AI
AI in Licensed Professional Practice Restrictions
Licensed professionals (including psychotherapists, counselors, psychologists, social workers, and similar practitioners) must maintain full professional responsibility for all AI interactions, outputs, and data use within their practice. AI systems may not independently make therapeutic decisions, directly interact with clients in therapeutic communication, generate treatment plans without professional review, or detect emotions or mental states in clinical contexts. No person or entity may offer, advertise, or provide therapy or psychotherapy services through AI unless conducted by a licensed professional. Use of AI to record or transcribe therapeutic sessions requires written, informed, revocable consent obtained in advance.
Applies to: Developer, Deployer, Professional, Government
Sectors: Healthcare, Mental Health, Professional Services, Chatbot
Bills — Enacted: 0 unique bills
Bills — Proposed: 22
Last Updated: 2026-03-29
Sub-Obligations (5)
HC-02.1: Professional Responsibility for AI Outputs (0 enacted, 16 proposed)
Licensed professionals must maintain full responsibility for all interactions, outputs, and data use associated with any AI system they use in delivering professional services. AI outputs used in clinical contexts — including therapeutic recommendations, treatment plans, and medical necessity determinations — must be reviewed and approved by the responsible licensed professional before being acted upon. The reviewing professional must hold credentials in the same or similar specialty as the subject matter of the determination.

HC-02.2: Prohibited AI Functions in Licensed Practice (0 enacted, 18 proposed)
AI systems must not independently make therapeutic decisions, directly interact with clients in therapeutic communication, generate treatment plans without licensed professional review, or detect or infer emotions or mental states in clinical or consumer-facing professional contexts.

HC-02.3: Unlicensed AI Therapy Prohibition (0 enacted, 17 proposed)
No person or entity may offer, advertise, or provide therapy, psychotherapy, or other licensed professional services through AI systems unless those services are conducted by a state-licensed, registered, or certified professional.

HC-02.4: AI Session Recording Consent (0 enacted, 15 proposed)
Before using AI to record or transcribe a therapeutic or counseling session, the licensed professional must inform the patient in writing of the AI's use and specific purpose and obtain written, informed consent that is revocable at any time. Consent must be obtained at least 24 hours in advance where required by applicable law. Services may not be denied based on refusal to consent.

HC-02.5: AI Professional Representation Prohibition (0 enacted, 2 proposed)
Operators and providers are prohibited from using any term, letter, phrase, or interface design in advertising, outputs, or system features that indicates or implies AI output is provided by, endorsed by, or equivalent to services from a licensed healthcare, mental health, legal, accounting, or financial professional.
Bills That Map This Requirement (22 bills)
Bill · Status · Sub-Obligations · Section
Pending 2026-10-01
HC-02.1, HC-02.3
Section 3(3)-(6)
Plain Language
A therapeutic chatbot may only be made available to minors if: (1) a licensed mental health professional individually assesses the minor user's suitability, prescribes the tool within a comprehensive treatment plan, and monitors its ongoing use and impact; (2) the covered entity has robust, independent, peer-reviewed clinical trial data demonstrating safety and efficacy for the specific conditions and populations at issue; (3) the system's functions, limitations, and data privacy policies are transparent to both the prescribing professional and the user; and (4) the covered entity establishes clear lines of accountability for harm. These are cumulative conditions — all must be met alongside the Section 3(1)-(2) disclosure requirements. This effectively gates minor access to therapeutic chatbots behind a professional-prescribes-and-monitors model, with clinical evidence requirements more akin to FDA-cleared digital therapeutics than typical consumer chatbot regulation.
(3) A licensed mental health professional assesses a user's suitability and prescribes the tool as part of a comprehensive treatment plan and monitors its use and impact. (4) The covered entity provides robust, independent, and peer-reviewed clinical trial data demonstrating the safety and efficacy of the tool for specific conditions and populations. (5) The system's functions, limitations, and data privacy policies are transparent to both the licensed mental health professional and the user. (6) The covered entity establishes clear lines of accountability for any harm caused by the therapeutic AI chatbot.
Pending 2027-01-01
HC-02.3
C.R.S. § 10-16-112.7(6)(a)-(c)
Plain Language
Health insurance carriers may not provide coverage for psychotherapy services that are provided directly to an individual and conducted by an AI system, for any health benefit plan issued or renewed on or after the effective date. This effectively prohibits insurers from paying for AI-delivered therapy. The prohibition does not apply to billing software, electronic health records, video platforms, or other nontherapeutic tools used incident to services provided by a human provider. Videoconferencing or messaging platforms used to enable supervision or consultation by a licensed professional are also excluded — using Zoom for a licensed therapist's supervision session is not AI-conducted supervision.
(6) Prohibition on payment for AI-delivered psychotherapy services. (a) A CARRIER OFFERING A HEALTH BENEFIT PLAN ISSUED OR RENEWED IN THE STATE ON OR AFTER THE EFFECTIVE DATE OF THIS SECTION SHALL NOT PROVIDE COVERAGE FOR SERVICES THAT CONSTITUTE PSYCHOTHERAPY SERVICES, AS DEFINED IN SECTION 12-245-202 (14), THAT ARE PROVIDED DIRECTLY TO AN INDIVIDUAL AND THAT ARE CONDUCTED BY AN ARTIFICIAL INTELLIGENCE SYSTEM. (b) SUBSECTION (6)(a) OF THIS SECTION DOES NOT PROHIBIT THE USE OF BILLING SOFTWARE, ELECTRONIC HEALTH RECORDS, VIDEO PLATFORMS, OR OTHER NONTHERAPEUTIC SOFTWARE TOOLS INCIDENT TO SERVICES PROVIDED BY A HUMAN PROVIDER. (c) THE USE OF VIDEOCONFERENCING, MESSAGING PLATFORMS, OR OTHER COMMUNICATIONS SOFTWARE TO ENABLE SUPERVISION OR CONSULTATION BY A LICENSED, REGISTERED, OR CERTIFIED INDIVIDUAL DOES NOT CONSTITUTE SUPERVISION OR CONSULTATION THAT IS CONDUCTED BY AN ARTIFICIAL INTELLIGENCE SYSTEM, AS REFERENCED IN SUBSECTION (6)(a) OF THIS SECTION.
Pending 2027-01-01
HC-02.3
C.R.S. § 25.5-1-209
Plain Language
Payers under Colorado Medicaid (the Colorado Medical Assistance Act) and the Children's Basic Health Plan (CHP+) may not pay for psychotherapy services that are provided directly to an individual and conducted by an AI system. This extends the same prohibition that applies to commercial health insurance carriers to public program payers, ensuring that AI-delivered psychotherapy cannot be billed to any payer in the state, public or private.
A PAYER OF MENTAL OR BEHAVIORAL HEALTH-CARE SERVICES PROVIDED UNDER THE "COLORADO MEDICAL ASSISTANCE ACT", AS SPECIFIED IN ARTICLES 4, 5, AND 6 OF THIS TITLE 25.5, OR THE "CHILDREN'S BASIC HEALTH PLAN ACT", AS SPECIFIED IN ARTICLE 8 OF THIS TITLE 25.5, SHALL NOT PAY FOR SERVICES THAT CONSTITUTE PSYCHOTHERAPY SERVICES, AS DEFINED IN SECTION 12-245-202 (14), THAT ARE PROVIDED DIRECTLY TO AN INDIVIDUAL AND THAT ARE CONDUCTED BY AN ARTIFICIAL INTELLIGENCE SYSTEM, AS THAT TERM IS DEFINED IN SECTION 10-16-112.7 (1)(b).
Pre-filed 2026-07-01
HC-02.2
Fla. Stat. § 490.016(2)
Plain Language
Licensed psychologists and school psychologists may not use AI in the practice of psychology, except for administrative and supplementary support functions. Permitted administrative uses include scheduling, non-therapeutic communications, billing, patient records management, and operational data analysis. Any clinical or therapeutic use of AI — including treatment recommendations, diagnostic support, or therapeutic interaction — is prohibited. The enumerated administrative exceptions are non-exhaustive ('include, but are not limited to'), so other genuinely administrative uses may also be permitted.
(2) Except as otherwise provided in this section, a licensee may not use artificial intelligence in the practice of psychology or school psychology. A licensee may use artificial intelligence to: (a) Assist in administrative or supplementary support services. Administrative and supplementary support services include, but are not limited to, all of the following: 1. Managing appointment scheduling and reminders. 2. Drafting general communications related to therapy logistics that do not involve therapeutic advice. 3. Processing billing and insurance claims. 4. Preparing and managing patient records. 5. Analyzing data for operational purposes.
Pre-filed 2026-07-01
HC-02.4
Fla. Stat. § 490.016(2)(b)
Plain Language
Licensed psychologists and school psychologists may use AI to record or transcribe counseling or therapy sessions, but only if they obtain written, informed consent from the patient at least 24 hours before the session in which the AI recording or transcription will be used. This is the sole permissible clinical-adjacent use of AI under the statute. Consent must be both written and informed — verbal consent is insufficient, and the 24-hour advance requirement means same-day consent is not compliant.
(b) Record or transcribe a counseling or therapy session if a licensee obtains written, informed consent at least 24 hours before the provision of services.
Pre-filed 2026-07-01
HC-02.2
Fla. Stat. § 491.019(2)
Plain Language
Licensed clinical social workers, marriage and family therapists, mental health counselors, registered interns, and certificateholders may not use AI in the practice of clinical social work, marriage and family therapy, or mental health counseling, except for administrative and supplementary support functions. The permitted administrative uses mirror those in § 490.016 — scheduling, non-therapeutic logistics communications, billing, records management, and operational data analysis. The covered population is broader than § 490.016, extending to registered interns and certificateholders in addition to licensees. Any clinical or therapeutic AI use is prohibited.
(2) Except as otherwise provided in this section, a licensee, registered intern, or certificateholder may not use artificial intelligence in the practice of clinical social work, marriage and family therapy, or mental health counseling. A licensee, registered intern, or certificateholder may use artificial intelligence to: (a) Assist in administrative or supplementary support services. Administrative and supplementary support services include, but are not limited to, all of the following: 1. Managing appointment scheduling and reminders. 2. Drafting general communications related to therapy logistics that do not involve therapeutic advice. 3. Processing billing and insurance claims. 4. Preparing and managing patient records. 5. Analyzing data for operational purposes.
Pre-filed 2026-07-01
HC-02.4
Fla. Stat. § 491.019(2)(b)
Plain Language
Licensed clinical social workers, marriage and family therapists, mental health counselors, registered interns, and certificateholders may use AI to record or transcribe counseling or therapy sessions, but only with written, informed consent obtained at least 24 hours before the session. This mirrors the consent requirement in § 490.016(2)(b) but applies to the broader set of professionals regulated under Chapter 491. Same-day consent is not compliant, and consent must be in writing.
(b) Record or transcribe a counseling or therapy session if a licensee, registered intern, or certificateholder obtains written, informed consent at least 24 hours before the provision of services.
Pending 2026-07-01
HC-02.2
Fla. Stat. § 490.016(2)
Plain Language
Licensed psychologists and school psychologists are prohibited from using AI in their clinical practice. The only permitted uses are administrative and supplementary support tasks — scheduling, logistics communications that do not involve therapeutic advice, billing, patient records management, and operational data analysis. This is a near-total ban on clinical AI use: AI may not be used for diagnosis, treatment planning, therapeutic interaction, assessment scoring, or any other activity constituting the practice of psychology. The enumerated administrative exceptions are illustrative ('include, but are not limited to'), so other non-clinical administrative uses may also be permissible.
(2) Except as otherwise provided in this section, a licensee may not use artificial intelligence in the practice of psychology or school psychology. A licensee may use artificial intelligence to: (a) Assist in administrative or supplementary support services. Administrative and supplementary support services include, but are not limited to, all of the following: 1. Managing appointment scheduling and reminders. 2. Drafting general communications related to therapy logistics that do not involve therapeutic advice. 3. Processing billing and insurance claims. 4. Preparing and managing patient records. 5. Analyzing data for operational purposes.
Pending 2026-07-01
HC-02.2
Fla. Stat. § 491.019(2)
Plain Language
Licensed clinical social workers, marriage and family therapists, mental health counselors, registered interns, and certificateholders are prohibited from using AI in their clinical practice. The same narrow administrative exceptions apply as under § 490.016: scheduling, non-therapeutic logistics communications, billing, records management, and operational data analysis. All clinical uses of AI — including therapeutic interaction, treatment planning, diagnostic support, and clinical decision-making — are prohibited. This is the chapter 491 parallel to the chapter 490 prohibition, extending coverage to additional mental health professional categories including registered interns and certificateholders.
(2) Except as otherwise provided in this section, a licensee, registered intern, or certificateholder may not use artificial intelligence in the practice of clinical social work, marriage and family therapy, or mental health counseling. A licensee, registered intern, or certificateholder may use artificial intelligence to: (a) Assist in administrative or supplementary support services. Administrative and supplementary support services include, but are not limited to, all of the following: 1. Managing appointment scheduling and reminders. 2. Drafting general communications related to therapy logistics that do not involve therapeutic advice. 3. Processing billing and insurance claims. 4. Preparing and managing patient records. 5. Analyzing data for operational purposes.
Pending 2026-07-01
HC-02.4
Fla. Stat. § 490.016(2)(b)
Plain Language
Licensed psychologists and school psychologists may use AI to record or transcribe counseling or therapy sessions, but only if they obtain written, informed consent from the patient at least 24 hours before the session. This is the only permitted non-administrative clinical-adjacent AI use under the statute. The 24-hour advance requirement means consent cannot be obtained at the time of the session. The bill does not specify whether consent is revocable or whether services may be denied for refusal to consent.
(b) Record or transcribe a counseling or therapy session if a licensee obtains written, informed consent at least 24 hours before the provision of services.
Pending 2026-07-01
HC-02.4
Fla. Stat. § 491.019(2)(b)
Plain Language
Licensed clinical social workers, marriage and family therapists, mental health counselors, registered interns, and certificateholders may use AI to record or transcribe counseling or therapy sessions, but only with written, informed consent obtained at least 24 hours before the session. This is the chapter 491 parallel to § 490.016(2)(b), extending the same consent requirement to the additional professional categories covered under chapter 491.
(b) Record or transcribe a counseling or therapy session if a licensee, registered intern, or certificateholder obtains written, informed consent at least 24 hours before the provision of services.
Failed 2026-07-01
HC-02.2
Fla. Stat. § 490.016(2)
Plain Language
Licensed psychologists and school psychologists are categorically prohibited from using AI in clinical practice — meaning AI may not independently make therapeutic decisions, generate treatment plans, directly interact with patients in therapeutic communication, or otherwise participate in the delivery of psychological services. The only exceptions are narrowly defined administrative tasks: scheduling, non-therapeutic logistics communications, billing, patient record management, and operational data analysis. This is one of the strictest state-level prohibitions on AI in licensed professional practice, going further than states that permit AI use under professional supervision. Enforcement would flow through the Florida Board of Psychology's existing disciplinary authority.
(2) Except as otherwise provided in this section, a licensee may not use artificial intelligence in the practice of psychology or school psychology. A licensee may use artificial intelligence to:
(a) Assist in administrative or supplementary support services. Administrative and supplementary support services include, but are not limited to, all of the following:
1. Managing appointment scheduling and reminders.
2. Drafting general communications related to therapy logistics that do not involve therapeutic advice.
3. Processing billing and insurance claims.
4. Preparing and managing patient records.
5. Analyzing data for operational purposes.
Failed 2026-07-01
HC-02.2
Fla. Stat. § 491.019(2)
Plain Language
Licensed clinical social workers, marriage and family therapists, mental health counselors, registered interns, and certificateholders are categorically prohibited from using AI in clinical practice. This is the Chapter 491 parallel to the Chapter 490 prohibition on psychologists. The same narrow administrative exceptions apply: scheduling, non-therapeutic communications, billing, patient records, and operational data analysis. The covered population is broader than § 490.016 — it extends to registered interns and certificateholders in addition to licensees. Enforcement would flow through the Florida Board of Clinical Social Work, Marriage and Family Therapy, and Mental Health Counseling.
(2) Except as otherwise provided in this section, a licensee, registered intern, or certificateholder may not use artificial intelligence in the practice of clinical social work, marriage and family therapy, or mental health counseling. A licensee, registered intern, or certificateholder may use artificial intelligence to:
(a) Assist in administrative or supplementary support services. Administrative and supplementary support services include, but are not limited to, all of the following:
1. Managing appointment scheduling and reminders.
2. Drafting general communications related to therapy logistics that do not involve therapeutic advice.
3. Processing billing and insurance claims.
4. Preparing and managing patient records.
5. Analyzing data for operational purposes.
Failed 2026-07-01
HC-02.4
Fla. Stat. § 490.016(2)(b)
Plain Language
Licensed psychologists and school psychologists may use AI to record or transcribe therapy sessions, but only if they obtain written, informed consent from the patient at least 24 hours before the session. This is one of the narrow exceptions to the categorical ban on AI in clinical psychology practice. The 24-hour advance requirement is notable — it prevents obtaining consent at the start of a session. The bill does not specify whether consent is revocable, unlike CA SB 243 which explicitly requires revocability.
(b) Record or transcribe a counseling or therapy session if a licensee obtains written, informed consent at least 24 hours before the provision of services.
Failed 2026-07-01
HC-02.4
Fla. Stat. § 491.019(2)(b)
Plain Language
Licensed clinical social workers, marriage and family therapists, mental health counselors, registered interns, and certificateholders may use AI to record or transcribe counseling or therapy sessions, but only with written, informed consent obtained at least 24 hours in advance. This is the Chapter 491 parallel to the Chapter 490 recording exception. The same 24-hour advance consent requirement applies. The covered population again extends to registered interns and certificateholders.
(b) Record or transcribe a counseling or therapy session if a licensee, registered intern, or certificateholder obtains written, informed consent at least 24 hours before the provision of services.
Pending 2026-01-01
HC-02.1
225 ILCS 60/67(c)
Plain Language
If a licensed or certified human health care provider reads and reviews an AI-generated patient communication before it is sent, the disclaimer and human-contact-instructions requirements do not apply. This creates a human-review safe harbor: facilities can avoid the labeling obligations entirely by having a credentialed provider review each AI-generated clinical communication. The bill does not specify what 'read and reviewed' entails beyond the plain meaning — it does not require the provider to affirmatively approve, sign, or modify the communication, only that they have read and reviewed it.
(c) If a communication is generated by generative artificial intelligence and read and reviewed by a human licensed or certified health care provider, the requirements of subdivision (b) do not apply.
Pending 2025-10-08
HC-02.1, HC-02.2
G.L. c. 112, § 298(b), (e)
Plain Language
Licensed mental health professionals may use AI only for administrative support (scheduling, billing, logistics) and supplementary support (records, anonymized data analysis, resource organization) — never for therapeutic communication. The professional must maintain full responsibility for all AI interactions, outputs, and data use. AI is categorically prohibited from: making independent therapeutic decisions, directly interacting with clients therapeutically, generating treatment plans without the professional's review and approval, or detecting emotions or mental states. The definitions of administrative support, supplementary support, and therapeutic communication are quite detailed and create a bright-line boundary: if a task involves any clinical interaction or emotional engagement with a client, AI cannot perform it.
(b) As used in this Section, "permitted use of artificial intelligence" means the use of artificial intelligence tools or systems by a licensed professional to assist in providing administrative support or supplementary support in therapy or psychotherapy services where the licensed professional maintains full responsibility for all interactions, outputs, and data use associated with the system and satisfies the requirements of subsection (c). (e) A licensed professional may use artificial intelligence only to the extent the use meets the requirements of subsections (b) and (c). A licensed professional may not allow artificial intelligence to do any of the following: (1) make independent therapeutic decisions; (2) directly interact with clients in any form of therapeutic communication; (3) generate therapeutic recommendations or treatment plans without review and approval by the licensed professional; or (4) detect emotions or mental states.
Pending 2025-10-08
HC-02.4
G.L. c. 112, § 298(c)
Plain Language
Before using AI to record or transcribe a therapy session, the licensed professional must provide the patient (or their legally authorized representative) written notice that AI will be used and explain the specific purpose of the AI tool. The patient must then provide consent — which is defined strictly as a clear, explicit, affirmative written agreement that is freely given, informed, and revocable. Burying the disclosure in a general terms-of-use agreement, passive UI interactions like hovering or closing content, or deceptive tactics do not constitute valid consent. This obligation applies specifically when AI is used for supplementary support involving session recording or transcription.
(c) No licensed professional shall be permitted to use artificial intelligence to assist in providing supplementary support in therapy or psychotherapy where the client's therapeutic session is recorded or transcribed unless: (1) the patient or the patient's legally authorized representative is informed in writing of the following: (A) that artificial intelligence will be used; and (B) the specific purpose of the artificial intelligence tool or system that will be used; and (2) the patient or the patient's legally authorized representative provides consent to the use of artificial intelligence.
Pending 2025-10-08
HC-02.3
G.L. c. 112, § 298(d)
Plain Language
No individual, corporation, or entity may provide, advertise, or offer therapy or psychotherapy services in Massachusetts — including through internet-based AI — unless the services are conducted by a licensed professional. This effectively prohibits standalone AI therapy products that operate without a licensed human professional delivering the services. The prohibition applies to any entity offering such services to the public in Massachusetts, not just to licensed professionals themselves. Religious counseling and peer support are excluded from the definition of therapy or psychotherapy services.
(d) An individual, corporation, or entity may not provide, advertise, or otherwise offer therapy or psychotherapy services, including through the use of Internet-based artificial intelligence, to the public in this State unless the therapy or psychotherapy services are conducted by an individual who is a licensed professional.
Pending 2026-06-16
HC-02.1
22 MRSA § 1730-B(2)
Plain Language
Licensed mental health professionals may use AI only for administrative support and supplementary support tasks — they must maintain full responsibility for all AI interactions, outputs, and data use. AI use is limited to non-therapeutic tasks: scheduling, billing, record-keeping, anonymized data analysis, and resource identification. Any task that constitutes therapeutic communication is off-limits to AI. This provision establishes the overarching accountability requirement: the licensed professional bears personal responsibility for everything the AI does in connection with their practice.
2. Permitted use of artificial intelligence. A licensed professional may use artificial intelligence to assist in providing administrative support or supplementary support in therapy or psychotherapy services only if the licensed professional maintains full responsibility for all interactions, outputs and data use associated with the use of artificial intelligence and satisfies the requirements of subsection 3.
Pending 2026-06-16
HC-02.4
22 MRSA § 1730-B(3)(A)-(B)
Plain Language
When a licensed professional uses AI to record or transcribe a therapeutic session, the professional must first provide written notice to the client (or their legal representative) that AI will be used, what its specific purpose is, and how the session data will be stored, retained, used for training, and deleted upon termination of services. The client must then provide consent — defined strictly as a clear, explicit, affirmative written agreement that is revocable. Consent buried in general terms of service, obtained through passive interactions like hovering or closing content, or obtained through deceptive actions does not count. This obligation applies specifically to the recording or transcription use case — administrative support tasks that do not involve session data have a lower bar under subsection 2.
3. Requirements of use. A licensed professional may use artificial intelligence to assist in providing supplementary support in therapy or psychotherapy services when the client's therapeutic session is recorded or transcribed only if: A. The client or the client's legally authorized representative is informed in writing of the following: (1) That artificial intelligence will be used; (2) The specific purpose of the artificial intelligence tool or system that will be used; and (3) How session data collected by artificial intelligence will be stored, retained, used for training and deleted upon termination of therapy or psychotherapy services; and B. The client or the client's legally authorized representative provides consent to the use of artificial intelligence.
Pending 2026-06-16
HC-02.2, HC-02.3
22 MRSA § 1730-B(4)
Plain Language
This provision creates three distinct obligations. First, it is a blanket prohibition on offering therapy or psychotherapy services to the public through AI unless a licensed professional actually provides the services — effectively banning autonomous AI therapy products in Maine. Second, licensed professionals may not allow AI to make independent therapeutic decisions or directly interact with clients in any form of therapeutic communication. Third, AI-generated therapeutic recommendations or treatment plans must be reviewed and approved by the licensed professional before being acted upon. This means AI chatbots, AI therapy apps, and similar products that provide therapeutic communication directly to users without a licensed professional are prohibited, regardless of whether they carry disclaimers. The obligation applies to any "person" — not just licensed professionals — so it reaches AI companies offering direct-to-consumer therapy products.
4. Prohibition of use. A person may not provide, advertise or otherwise offer therapy or psychotherapy services, including through the use of Internet-based artificial intelligence, to the public unless the therapy or psychotherapy services are provided by a licensed professional. A licensed professional may use artificial intelligence only to the extent the use meets the requirements of subsection 3. A licensed professional may not allow artificial intelligence to: A. Make independent therapeutic decisions; B. Directly interact with clients in any form of therapeutic communication; or C. Generate therapeutic recommendations or treatment plans without review and approval by the licensed professional.
Pending 2026-01-01
HC-02.2, HC-02.3
Sec. 5(1)(b)
Plain Language
Operators may not make a companion chatbot available to a covered minor unless the chatbot is not foreseeably capable of (1) offering mental health therapy without direct supervision by a licensed or credentialed professional, or (2) discouraging the minor from seeking help from a qualified professional or a parent or guardian. This effectively prohibits unsupervised AI therapy for minors and requires that chatbots not steer minors away from human professional or parental help. Beginning January 1, 2027, this obligation applies regardless of whether the operator has actual knowledge the user is a minor.
An operator shall not make a companion chatbot available to a covered minor unless the companion chatbot is not foreseeably capable of any of the following: (b) Offering mental health therapy to the covered minor without the direct supervision of a licensed or credentialed professional, or discouraging the covered minor from seeking help from a qualified professional or a parent or guardian.
Pending 2027-01-01
HC-02.2, HC-02.3
Sec. 5(1)(b)
Plain Language
Operators must ensure companion chatbots are not foreseeably capable of offering mental health therapy to minors without direct supervision of a licensed or credentialed professional, or of discouraging minors from seeking help from qualified professionals or their parents or guardians. This is a dual prohibition: (1) unsupervised AI therapy to minors is blocked, and (2) the chatbot must not discourage minors from seeking human help. The 'direct supervision' standard is stricter than in many jurisdictions, which merely require licensed professional review — here, supervision must be active and contemporaneous. Beginning January 1, 2027, the actual knowledge requirement for minor status is removed.
An operator shall not make a companion chatbot available to a covered minor unless the companion chatbot is not foreseeably capable of any of the following: (b) Offering mental health therapy to the covered minor without the direct supervision of a licensed or credentialed professional, or discouraging the covered minor from seeking help from a qualified professional or a parent or guardian.
Pre-filed 2026-03-10
HC-02.3
Minn. Stat. § 214.165, subd. 2(a)
Plain Language
No individual, corporation, or entity may provide, advertise, or offer therapy or psychotherapy services to the public in Minnesota unless those services are conducted by a licensed professional. This effectively prohibits standalone AI therapy products — an AI chatbot or platform cannot independently deliver therapy or psychotherapy without a licensed human practitioner conducting the services. The prohibition applies to any entity, not just licensees, capturing both AI developers and deployers who market AI-only therapy products.
(a) An individual, corporation, or entity must not provide, advertise, or otherwise offer therapy or psychotherapy services to the public in Minnesota unless the therapy or psychotherapy services are conducted by an individual who is a licensed professional.
Pre-filed 2026-03-10
HC-02.2
Minn. Stat. § 214.165, subd. 2(b)
Plain Language
Licensed professionals are prohibited from using AI systems in three specific ways: (1) allowing AI to make independent therapeutic decisions without human involvement; (2) allowing AI to directly interact with clients in any form of therapeutic communication (which is broadly defined to include emotional support, guidance, treatment plan collaboration, and behavioral feedback); and (3) allowing AI to generate treatment plans or therapeutic recommendations that the licensed professional has not reviewed and approved before they reach the client. This is stricter than a human-in-the-loop requirement — it categorically bars AI from any direct therapeutic client interaction, even supervised ones.
(b) A licensed professional must not use artificial intelligence systems to: (1) make independent therapeutic decisions; (2) directly interact with clients in any form of therapeutic communication; or (3) generate therapeutic recommendations or treatment plans without review and approval by the licensed professional.
Pre-filed 2026-03-10
HC-02.1
Minn. Stat. § 214.165, subd. 3
Plain Language
Licensed professionals are permitted to use AI for administrative and supplementary support tasks — such as note-taking, scheduling, billing, anonymized data analysis, resource identification, and logistical communications — but only if they maintain full responsibility for all interactions, outputs, and data use associated with the AI system. The key boundary is that permitted AI use must not involve therapeutic communication. This creates a safe harbor for back-office and logistical AI tools while preserving the prohibition on client-facing therapeutic AI use.
A licensed professional may use artificial intelligence systems to assist in providing administrative or supplementary support in therapy or psychotherapy services if the licensed professional maintains full responsibility for all interactions, outputs, and data use associated with the system.
Pre-filed 2025-11-01
HC-02.1
63 O.S. § 5502(B)
Plain Language
AI medical devices may only be used by licensed physicians who (1) are independently qualified to perform the same diagnostic, prognostic, or therapeutic procedure without the AI device, and (2) have specific training in the AI device's use, including the ability to assess output validity. No other staff — nurses, technicians, or unlicensed personnel — may operate the device. Deployers must ensure this exclusivity is maintained in practice.
B. An AI device shall be used exclusively by a qualified end-user.
Pre-filed 2025-11-01
HC-02.1
63 O.S. § 5503(A)
Plain Language
Before any patient care decision is made based on AI device output, a qualified end-user (licensed physician with appropriate training) must review and validate the AI-generated data for accuracy. This review must follow the deployer's documented policies and procedures. No AI output may be acted upon in patient care without this human validation step.
A. All relevant artificial intelligence (AI) device-generated data shall be reviewed for accuracy and validated by a qualified end-user in accordance with deployer-documented policies and procedures before patient care decisions are rendered.
Pending 2026-11-01
HC-02.1, HC-02.2, HC-02.3, HC-02.4, HC-02.5
Section 2(C)(2)-(5)
Plain Language
The therapeutic chatbot exemption for minors requires meeting four additional conditions beyond AI identity disclosure: (1) the chatbot must not be marketed as a substitute for a human professional; (2) a licensed mental health professional must assess the minor user's suitability, prescribe the tool as part of a treatment plan, and monitor its use; (3) developers must provide independent, peer-reviewed clinical trial data demonstrating safety and efficacy for the specific conditions and populations served; and (4) the system's functions, limitations, and data privacy policies must be transparent to both the prescribing professional and the user, with clear accountability lines for harms. All five conditions (including the AI disclosure in C(1)) are cumulative — failure to satisfy any one means the therapeutic chatbot cannot be made available to minors.
C. Therapeutic chatbots that meet all of the following requirements may be made available to minors: ... 2. The chatbot is not marketed or designated as a substitute for a human professional; 3. A licensed mental health professional (such as a clinical psychologist) assesses a user's suitability and prescribes the tool as part of a comprehensive treatment plan, and monitors its use and impact; 4. Developers provide robust, independent, peer-reviewed clinical trial data demonstrating both the safety and efficacy of the tool for specific conditions and populations; and 5. The system's functions, limitations, and data privacy policies are transparent to both the licensed mental health professional and the user. Clear lines of accountability are established for any harms caused by the system.
Pre-filed 2026-01-15
HC-02.4
63 O.S. § 7102(A)
Plain Language
Before using AI to record or transcribe a therapeutic session or to provide supplementary support involving such recordings, the licensed mental health professional must inform the patient (or their legally authorized representative) in writing that AI will be used and specify the AI tool's purpose, and must obtain the patient's affirmative written consent. Consent must be specific, informed, freely given, and revocable — it cannot be buried in a general terms of use agreement or obtained through deceptive means. This applies to supplementary support tasks like preparing therapy notes, analyzing anonymized client data, or organizing referrals where recording or transcription is involved.
A. A licensed mental health professional shall not use artificial intelligence to assist in providing supplementary support in therapy or psychotherapy where the client's therapeutic session is recorded or transcribed unless: 1. The patient or the patient's legally authorized representative is informed in writing of the following: a. that artificial intelligence will be used, and b. the specific purpose of the artificial intelligence tool or system that will be used; and 2. The patient or the patient's legally authorized representative provides consent to the use of artificial intelligence.
Pre-filed 2026-01-15
HC-02.1, HC-02.2
63 O.S. § 7102(B)-(C)
Plain Language
Licensed mental health professionals may use AI for administrative and supplementary support tasks, but only if they maintain full professional responsibility for all AI interactions, outputs, and data use. AI is categorically prohibited from: (1) making independent therapeutic decisions, (2) directly interacting with clients in any therapeutic communication, (3) generating treatment plans or therapeutic recommendations without the professional's review, or (4) detecting emotions or mental states. The licensed professional — not the AI — must make all final decisions in therapy or psychotherapy. These restrictions effectively confine AI to back-office and record-keeping functions and bar it from any client-facing therapeutic role.
B. A licensed mental health professional may use artificial intelligence tools or systems to assist in providing administrative support or supplementary support in therapy or psychotherapy services if the licensed mental health professional maintains full responsibility for all interactions, outputs, and data use associated with the system and satisfies the requirements of subsection A of this section. A licensed mental health professional shall not allow artificial intelligence to do any of the following: 1. Make independent therapeutic decisions; 2. Directly interact with clients in any form of therapeutic communication; 3. Generate therapeutic recommendations or treatment plans without review by the licensed mental health professional; or 4. Detect emotions or mental states. C. A licensed mental health provider, not artificial intelligence or similar systems, shall make final decisions in the provision of therapy or psychotherapy services.
Pre-filed 2026-01-15
HC-02.3
63 O.S. § 7102(E)(1)
Plain Language
No person, company, or entity may provide, advertise, or offer therapy or psychotherapy services through internet-based AI to the Oklahoma public unless those services are actually conducted by a licensed mental health professional. This effectively prohibits standalone AI therapy products that operate without a licensed professional conducting the services. Enforcement is by the Attorney General, who may investigate and levy administrative fines up to $10,000 per violation after an administrative hearing. This obligation applies broadly to any individual, corporation, or entity — not just licensed professionals.
E. 1. An individual, corporation, or entity shall not provide, advertise, or otherwise offer therapy or psychotherapy services through the use of Internet-based artificial intelligence to the public in this state unless the therapy or psychotherapy services are conducted by an individual who is a licensed mental health professional.
Pre-filed 2026-01-15
HC-02.4
63 O.S. § 7103(A)
Plain Language
Licensed health care providers may not use AI in any aspect of patient care unless the patient (or authorized representative) has been informed in writing that AI will be used, told the specific purpose of the AI tool, and has provided affirmative written consent. The consent standard is strict: it must be specific, informed, freely given, and revocable. It cannot be obtained through a general terms of use agreement, passive UI interactions, or deceptive means. This applies to all health care services — medical, dental, optometric care, hospitalization, and related preventive or curative services.
A. A licensed health care provider shall not use artificial intelligence to assist in the provision of a patient's care unless: 1. The patient or the patient's legally authorized representative is informed in writing of the following: a. that artificial intelligence will be used, and b. the specific purpose of the artificial intelligence tool or system that will be used; and 2. The patient or the patient's legally authorized representative provides consent to the use of artificial intelligence.
Pre-filed 2026-01-15
HC-02.1, HC-02.2
63 O.S. § 7103(B)-(C)
Plain Language
Licensed health care providers may use AI to assist in providing health care services, but only if they maintain full responsibility for all AI interactions, outputs, and data use. AI is categorically prohibited from: (1) making independent medical decisions, (2) directly interacting with patients in any medical communication, (3) diagnosing medical conditions, or (4) generating medical advice, recommendations, or treatment plans without the provider's review. The licensed provider — not AI — must make all final decisions regarding patient care. These restrictions permit AI as a behind-the-scenes clinical support tool but bar it from any patient-facing or autonomous clinical decision-making role.
B. A licensed health care provider may use artificial intelligence tools or systems to assist in providing health care services if the licensed health care provider maintains full responsibility for all interactions, outputs, and data use associated with the system and satisfies the requirements of subsection A of this section. A licensed health care provider shall not allow artificial intelligence to do any of the following: 1. Make independent medical decisions; 2. Directly interact with patients in any form of medical communication; 3. Diagnose medical conditions; or 4. Generate medical advice or recommendations or treatment plans without review by the licensed health care provider. C. A licensed health care provider, not artificial intelligence or similar systems, shall make final decisions in the provision of health care services.
Pending 2026-01-28
HC-02.4
R.I. Gen. Laws § 40.1-5.5-3(a)
Plain Language
Before a licensed mental health professional may use AI — specifically AI designed to simulate emotional attachment, bonding, or dependency, or AI companions for mental health or emotional support — to assist with supplementary support tasks where a client's therapeutic session is recorded or transcribed, the professional must provide the patient (or their parent, guardian, or legally authorized representative) with written notice that AI will be used and of the specific purpose of the AI tool, and must obtain affirmative written consent. Consent must be explicit, informed, freely given, specific, and revocable — it cannot be buried in general terms of use or obtained through deceptive actions.
(a) No licensed professional shall be permitted to use artificial intelligence, designed to simulate emotional attachment, bonding, or dependency or artificial intelligence companions for mental health or emotional support, to assist in providing supplementary support in therapy or psychotherapy services where the client's therapeutic session is recorded or transcribed unless the patient or the patient's parent, guardian or other legally authorized representative is informed in writing of the following: (1) That artificial intelligence will be used; (2) The specific purpose of the artificial intelligence tool or system that will be used; and (3) The patient or the patient's parents, or other legally authorized representative, provides consent to the use of artificial intelligence.
Pending 2026-01-28
HC-02.3
R.I. Gen. Laws § 40.1-5.5-3(b)
Plain Language
No individual, corporation, or entity may provide, advertise, or offer therapy or psychotherapy services to the public in Rhode Island through AI — including Internet-based AI — unless those services are actually conducted by a state-licensed professional. This effectively prohibits standalone AI therapy products that operate without a licensed professional conducting the service. The prohibition applies to any entity offering services to the Rhode Island public, not just to licensed professionals themselves. Religious counseling and peer support are excluded from the definition of therapy services and are therefore not covered.
(b) An individual, corporation, or entity may not provide, advertise, or otherwise offer therapy or psychotherapy services, including through the use of Internet-based artificial intelligence, to the public in this state unless the therapy or psychotherapy services are conducted by an individual who is a licensed professional.
Pending 2026-01-28
HC-02.1, HC-02.2
R.I. Gen. Laws § 40.1-5.5-3(c)
Plain Language
Licensed professionals may only use AI for administrative and supplementary support tasks — and even then only if they comply with the consent requirements of subsection (a). AI is categorically prohibited from: (1) making independent therapeutic decisions, (2) directly interacting with clients in any form of therapeutic communication, (3) generating therapeutic recommendations or treatment plans, and (4) detecting emotions or mental states. The definition of therapeutic communication is extremely broad, covering any verbal, non-verbal, or written interaction in a clinical setting intended to diagnose, treat, or address mental, emotional, or behavioral health concerns. This means AI cannot deliver empathetic responses, provide guidance, or collaborate with clients on treatment goals — all of these constitute prohibited therapeutic communication. The licensed professional must maintain full responsibility for all AI interactions, outputs, and data use.
(c) A licensed professional may use artificial intelligence only to the extent that such use meets the requirements of subsection (a) of this section. A licensed professional may not allow or otherwise use artificial intelligence to do any of the following: (1) Make independent therapeutic decisions; (2) Directly interact with clients in any form of therapeutic communication; (3) Generate therapeutic recommendations or treatment plans; or (4) Detect emotions or mental states.
Pending 2026-01-23
HC-02.4
R.I. Gen. Laws § 40.1-5.5-3(a)
Plain Language
Before a licensed mental health professional may use AI to record or transcribe a client's therapeutic session, the professional must provide the patient (or parent/guardian/legal representative) with written notice that AI will be used and the specific purpose of the AI tool, and must obtain affirmative written consent that is revocable at any time. Consent cannot be bundled into general terms of use or obtained through deceptive actions. This obligation applies specifically when AI is used for supplementary support involving session recording or transcription.
(a) No licensed professional shall be permitted to use artificial intelligence, designed to simulate emotional attachment, bonding, or dependency or artificial intelligence companions for mental health or emotional support, to assist in providing supplementary support in therapy or psychotherapy services where the client's therapeutic session is recorded or transcribed unless the patient or the patient's parent, guardian or other legally authorized representative is informed in writing of the following: (1) That artificial intelligence will be used; (2) The specific purpose of the artificial intelligence tool or system that will be used; and (3) The patient or the patient's parents, or other legally authorized representative, provides consent to the use of artificial intelligence.
Pending 2026-01-23
HC-02.3
R.I. Gen. Laws § 40.1-5.5-3(b)
Plain Language
No individual, corporation, or entity may provide, advertise, or offer therapy or psychotherapy services to the public in Rhode Island using AI — including Internet-based AI — unless those services are actually conducted by a licensed professional. This effectively prohibits AI-only therapy products that operate without a licensed professional conducting the service. The prohibition covers advertising as well as actual provision, so marketing an AI system as providing therapy without a licensed professional behind it is independently a violation. Religious counseling and peer support are excluded from the definition of therapy or psychotherapy services.
(b) An individual, corporation, or entity may not provide, advertise, or otherwise offer therapy or psychotherapy services, including through the use of Internet-based artificial intelligence, to the public in this state unless the therapy or psychotherapy services are conducted by an individual who is a licensed professional.
Pending 2026-01-23
HC-02.1, HC-02.2
R.I. Gen. Laws § 40.1-5.5-3(c)
Plain Language
Licensed professionals are categorically prohibited from using AI to: (1) make independent therapeutic decisions, (2) directly interact with clients in any form of therapeutic communication, (3) generate therapeutic recommendations or treatment plans, or (4) detect emotions or mental states. AI use is permitted only for administrative support (scheduling, billing, logistics) and supplementary support (record-keeping, anonymized data analysis, organizing referrals), and only when the licensed professional maintains full responsibility for all interactions, outputs, and data use. The therapeutic communication definition is broad — it covers any interaction intended to diagnose, treat, or address mental, emotional, or behavioral health concerns, including emotional support, therapeutic strategies, and behavioral feedback. This effectively confines AI to back-office functions and bars it from any client-facing clinical role.
(c) A licensed professional may use artificial intelligence only to the extent that such use meets the requirements of subsection (a) of this section. A licensed professional may not allow or otherwise use artificial intelligence to do any of the following: (1) Make independent therapeutic decisions; (2) Directly interact with clients in any form of therapeutic communication; (3) Generate therapeutic recommendations or treatment plans; or (4) Detect emotions or mental states.
Pending 2026-01-23
R.I. Gen. Laws § 40.1-5.5-4
Plain Language
All records maintained by a licensed professional — including any records generated or maintained with the assistance of AI — and all communications between a therapy client and a licensed professional are confidential and may only be disclosed as permitted under existing Rhode Island law (§ 40.1-5-26). This extends the existing confidentiality framework to cover AI-assisted record-keeping in the mental health context. Violations are subject to the penalties in § 5-37.3-9.
All records kept by a licensed professional and all communications between an individual seeking therapy or psychotherapy services and a licensed professional shall be confidential and shall not be disclosed except as provided pursuant to the provisions of § 40.1-5-26.
Pending 2025-01-13
HC-02.4
S.C. Code § 40-1-710
Plain Language
Before using AI to record or transcribe a therapy or psychotherapy session for supplementary support purposes, the licensed professional must inform the patient (or their legally authorized representative) in writing that AI will be used and explain the specific purpose of that use. The patient or representative must provide written consent. This applies only when the AI use involves recording or transcription of sessions — administrative support tasks that do not involve session recording are not subject to this consent requirement.
A licensed professional shall not be permitted to use artificial intelligence to assist in providing supplementary support in therapy or psychotherapy where the client's therapeutic session is recorded or transcribed unless: (1) the patient or the patient's legally authorized representative is informed in writing: (a) that artificial intelligence will be used; and (b) of the specific purpose for which the artificial intelligence tool or system will be used; and (2) the patient or the patient's legally authorized representative provides written consent to the use of artificial intelligence.
Pending 2025-01-13
HC-02.3
S.C. Code § 40-1-720(A)
Plain Language
No person, company, or entity may offer, advertise, or provide therapy or psychotherapy services to the public in South Carolina — including via internet-based AI — unless those services are conducted by a state-licensed professional. This effectively prohibits standalone AI therapy products that operate without a licensed professional conducting the services. Religious counseling and peer support are carved out from the definition of therapy or psychotherapy services and are therefore not subject to this requirement.
(A) An individual, corporation, or entity may not provide, advertise, or otherwise offer therapy or psychotherapy services, including through the use of internet-based artificial intelligence, to the public in this State unless the therapy or psychotherapy services are conducted by a licensed professional.
Pending 2025-01-13
HC-02.1, HC-02.2
S.C. Code § 40-1-720(B)
Plain Language
Licensed professionals face four categorical prohibitions on AI use in therapy: (1) AI may not make independent therapeutic decisions; (2) AI may not directly interact with clients in any therapeutic communication; (3) AI may not generate therapeutic recommendations or treatment plans unless the licensed professional reviews and approves them first; and (4) AI may not detect emotions or mental states. Any AI use must also comply with the informed consent requirements of Section 40-1-710, and the licensed professional must maintain full responsibility for all interactions, outputs, and data use. Permitted AI uses are limited to administrative and supplementary support under these conditions.
(B) A licensed professional may use artificial intelligence only to the extent the use meets the requirements of Section 40-1-710. A licensed professional may not allow artificial intelligence to: (1) make independent therapeutic decisions; (2) directly interact with clients in any form of therapeutic communication; (3) generate therapeutic recommendations or treatment plans without review and approval by the licensed professional; or (4) detect emotions or mental states.
Pending 2025-07-01
HC-02.1
§ 54.1-2400.1:1(B)
Plain Language
Mental health service providers may use AI systems to assist in therapy or counseling, but only if the provider maintains full responsibility for all interactions, outputs, and data use associated with the system. This is a gating condition — AI use in therapy is permissible only under this supervisory framework. The provider cannot delegate professional responsibility to the AI system.
B. A mental health service provider may use an artificial intelligence system to assist in providing therapy or counseling services if such mental health service provider maintains full responsibility for all interactions, outputs, and data use associated with the system.
Pending 2025-07-01
HC-02.4
§ 54.1-2400.1:1(B)(1)-(2)
Plain Language
When a mental health service provider uses AI to record or transcribe a therapy or counseling session, two requirements must be met: (1) the provider must disclose to the patient (or their legally authorized representative) that AI will be used and explain the specific purpose; and (2) at the initial appointment, the provider must disclose its AI use policies and obtain written or digital consent. If AI policies change after consent is obtained, the provider must notify all patients who previously consented. Unlike CA SB 243's 24-hour advance consent requirement, this bill requires disclosure and consent at the initial appointment with no minimum advance notice period. Consent appears to be revocable by implication of the consent framework, though the bill contains no explicit revocability language.
No licensed professional shall be permitted to use an artificial intelligence system in providing therapy or counseling services pursuant to this subsection when the session is recorded or transcribed unless: 1. The mental health service provider discloses to the patient or the patient's legally authorized representative (i) that an artificial intelligence system will be used and (ii) the specific purpose of the artificial intelligence system that will be used; and 2. At the initial appointment, the mental health service provider discloses its artificial intelligence system use and policies related to such use and the patient or the patient's legally authorized representative provides written or digital consent to the use of an artificial intelligence system as permitted by this section. The mental health service provider shall give notice of any change in its policies related to the use of an artificial intelligence system to any patient or such patient's legally authorized representative who has consented to the use of an artificial intelligence system pursuant to this subdivision.
Pending 2025-07-01
HC-02.3
§ 54.1-2400.1:1(C)
Plain Language
No person or business entity may offer, advertise, or provide therapy or counseling services — including through an AI system — to the public in Virginia unless those services are conducted by a licensed mental health service provider. This effectively prohibits standalone AI therapy products that operate without a licensed provider. It also prohibits advertising AI-only therapy or counseling services. The prohibition applies to any person or business entity, not just licensed providers — it catches AI companies and platforms that might attempt to offer unlicensed AI therapy directly to consumers.
C. A person or business entity may not provide, advertise, or otherwise offer therapy or counseling services, including through the use of an artificial intelligence system, to the public in the Commonwealth unless the therapy or counseling services are conducted by a mental health service provider.
Pending 2025-07-01
HC-02.2
§ 54.1-2400.1:1(D)(1)-(3)
Plain Language
Mental health service providers face three categorical prohibitions on AI use: (1) AI may not make independent therapeutic decisions — every therapeutic decision requires professional judgment; (2) AI may not directly interact with clients in therapeutic communication without provider oversight — the definition of therapeutic communication is broad, covering clinical interactions, emotional support, guidance, treatment plan collaboration, and behavioral feedback; and (3) AI may not generate therapeutic recommendations, diagnose, or implement treatment plans without the licensed professional's review, oversight, and approval. AI may still be used for administrative support tasks that do not involve therapeutic communication (e.g., scheduling, billing, logistics communications).
D. A mental health service provider may use an artificial intelligence system only to the extent that such use meets the requirements of subsection B. No mental health service provider shall allow an artificial intelligence system to: 1. Make independent therapeutic decisions; 2. Directly interact with clients in any form of therapeutic communication without provider oversight; or 3. Generate therapeutic recommendations, diagnose, or implement treatment plans without review, oversight, and approval by the licensed professional.
Pending 2025-07-01
§ 54.1-2400.1:1(E)
Plain Language
All records maintained by mental health service providers — including records generated through or involving AI systems — and all communications between patients and providers must remain confidential under Virginia's existing health records privacy law (§ 32.1-127.1:03). Disclosure is permitted only as allowed under that statute. This extends existing health privacy protections to AI-involved records and communications, ensuring that AI-generated transcripts, session recordings, and related data receive the same confidentiality protections as traditional clinical records.
E. All records kept by a mental health service provider and all communications between an individual seeking therapy or counseling services and a mental health service provider shall be confidential pursuant to the requirements of § 32.1-127.1:03 and shall not be disclosed unless such disclosure complies with the requirements of § 32.1-127.1:03.
Pre-filed 2026-01-01
HC-02.3
18 V.S.A. § 7115(b)
Plain Language
No person, corporation, or entity may offer, provide, or advertise mental health services in Vermont that use AI in whole or in part — unless the use falls within the narrow exceptions for licensed mental health professionals under 26 V.S.A. § 7101 (administrative support and transcription with consent). This is a broad prohibition that applies to any entity, not just licensed professionals. Companies offering AI therapy chatbots, AI counseling products, or AI-powered mental health platforms to Vermont users are categorically prohibited from doing so. The only carve-out is for licensed mental health professionals using AI in the limited ways authorized by § 7101.
(b) A person, corporation, or entity shall not offer, provide, or advertise mental health services in the State that use artificial intelligence in whole or in part, except as authorized pursuant to 26 V.S.A. § 7101.
Pre-filed 2026-01-01
HC-02.2
26 V.S.A. § 7101(d)
Plain Language
Licensed mental health professionals are categorically prohibited from using AI to make therapeutic decisions, issue direct therapeutic communications, generate treatment plans or recommendations, or detect or interpret emotions or mental states. They are also prohibited from offering, providing, or advertising any mental health services that use AI in whole or in part — except for the narrow carve-outs in subsection (b) for administrative support (with professional review) and transcription (with written disclosure and consent). The scope of 'therapeutic communication' is very broad and includes any interaction intended to diagnose, treat, provide recovery support, or advise on mental health matters. This effectively confines AI use by mental health professionals to back-office and documentation functions only.
(d) Prohibited uses. A mental health professional shall neither: (1) use artificial intelligence in the State to make therapeutic decisions, issue direct therapeutic communications, generate treatment plans or recommendations, or detect or interpret emotions or mental states; nor (2) offer, provide, or advertise mental health services in the State that use artificial intelligence in whole or in part, except as provided in subsection (b) of this section.
Pre-filed 2026-01-01
HC-02.1
26 V.S.A. § 7101(b)(1)
Plain Language
Mental health professionals may use AI for administrative support tasks — such as scheduling, billing, logistics communications, clinical note preparation, deidentified data analysis, and resource organization — but only if the professional personally reviews and assumes responsibility for all tasks performed, all outputs created, and all data use associated with the AI system. This is the primary permissible use carve-out. The definition of 'administrative support' expressly excludes therapeutic communication, so AI may not be used for any patient-facing clinical interaction even under this exception. The professional's review-and-responsibility obligation is a condition of the permission, not an independent obligation — failure to review and assume responsibility means the use is unauthorized.
(b) Permitted uses. (1) A mental health professional may use artificial intelligence for administrative support to the extent that the professional reviews and assumes responsibility for all tasks performed by, outputs created by, and data use associated with the artificial intelligence system employed.
Pre-filed 2026-01-01
HC-02.4
26 V.S.A. § 7101(b)(2)
Plain Language
Before using AI for transcription or recording in a therapeutic setting, the mental health professional must: (1) provide written notice to the patient or client (or their legal guardian) stating the specific purpose of the AI use and that any resulting transcription or recording is subject to existing mental health confidentiality protections under 18 V.S.A. §§ 1881 and 7103; and (2) obtain consent, which must be an explicit, written, voluntary, informed, and revocable affirmative act. Both steps must be completed before the AI transcription or recording begins — this is a prerequisite, not something that can be obtained retroactively.
(2) If a mental health professional uses artificial intelligence for transcription and recording purposes, the mental health professional shall first: (A) inform the patient or client, or the patient's or client's legal guardian, in writing of the specific purpose for which artificial intelligence is being used and that any transcription or recording performed by artificial intelligence shall be subject to the disclosure prohibitions in subsection (c) of this section; and (B) obtain consent from the patient or client, or the patient's or client's legal guardian.
Pre-filed 2026-01-01
HC-02.1
26 V.S.A. § 7101(c)
Plain Language
All AI-generated outputs from administrative support tasks — including transcription and recording — are subject to the same confidentiality protections that apply to mental health records under existing Vermont law (18 V.S.A. §§ 1881 and 7103). This means AI-processed data receives the same disclosure prohibitions as therapist-created records. Practitioners must ensure that any AI vendor or system used for administrative support complies with these existing confidentiality requirements.
(c) Confidentiality. Any administrative support tasks conducted using artificial intelligence shall be subject to the disclosure prohibitions in 18 V.S.A. §§ 1881 and 7103, including transcription and recording.
Passed 2026-01-01
HC-02.3
18 V.S.A. § 7115(b)
Plain Language
No person, corporation, or other entity may offer, provide, or advertise mental health services in Vermont that represent AI as providing therapeutic judgment, diagnosis, treatment, or therapeutic communication. This is a blanket prohibition covering any entity — not just licensed professionals — and targets the marketing and delivery of AI-as-therapist services. The carve-out preserves the ability to use and disclose AI for administrative, documentation, operational, or quality-improvement purposes, so long as a mental health professional retains clinical responsibility under 26 V.S.A. § 7101. Violations are enforceable under the Vermont Consumer Protection Act.
(b) A person, corporation, or other entity shall not offer, provide, or advertise mental health services in the State that represent artificial intelligence as providing therapeutic judgment, diagnosis, treatment, or therapeutic communication. Nothing in this subsection shall prohibit the use or disclosure of the use of artificial intelligence for administrative, documentation, operational, or quality-improvement purposes when a mental health professional retains clinical responsibility as authorized pursuant to 26 V.S.A. § 7101.
Passed 2026-01-01
HC-02.1HC-02.2
26 V.S.A. § 7101(b), (d)(1)-(2)
Plain Language
Mental health professionals may use AI for administrative support (scheduling, billing, claims processing), supplementary support, and operational/quality-improvement tasks — but must retain sole responsibility for all therapeutic decisions. AI may not independently make therapeutic decisions, independently diagnose, independently determine treatment, or independently generate treatment plans. Importantly, the definition of 'therapeutic decision' carves out algorithmic risk scoring, data analytics, and clinical decision support tools used under professional supervision — these are not prohibited. The professional must review, modify where necessary, and approve the final product of any AI-assisted work. Professionals remain free to disclose their use of AI for permitted purposes to patients.
(b) Permitted uses. A mental health professional may use artificial intelligence systems for administrative support, supplementary support, and operational or quality-improvement functions, provided the professional retains sole responsibility for therapeutic decisions. Permitted uses include scheduling, billing, coding, and claims processing; transcription and documentation support; preparation and maintenance of clinical records; deidentified data analysis for quality improvement; and workforce and capacity planning where the mental health professional reviews, modifies where necessary, and approves the final product. (d) Prohibited uses. (1) A mental health professional shall not use artificial intelligence in a manner that allows the artificial intelligence to independently make therapeutic decisions, independently diagnose, independently determine treatment, or independently generate treatment plans. (2) Nothing in this subsection shall prohibit a mental health professional from disclosing or describing the mental health professional's use of artificial intelligence for administrative support or supplementary support purposes to a prospective, current, or former patient or client.
Passed 2026-01-01
HC-02.4
26 V.S.A. § 7101(c)(1)-(2)
Plain Language
When mental health professionals use AI for administrative or supplementary support tasks — including transcription and recording — all existing confidentiality obligations under Vermont's mental health disclosure statutes (18 V.S.A. §§ 1881 and 7103) continue to apply. Additionally, before using AI to record identifiable therapeutic communications, the professional must obtain written, informed, revocable consent from the patient or client. Consent obtained through broad terms-of-use agreements, passive actions, or deceptive practices does not qualify. Consent is not required for administrative or deidentified operational uses of AI.
(c) Confidentiality and consent. (1) Any administrative support or supplementary support tasks conducted using artificial intelligence, including transcription and recording, shall be subject to the disclosure prohibitions in 18 V.S.A. §§ 1881 and 7103. (2) Consent by a patient or client is required when artificial intelligence is used to record identifiable therapeutic communications.
Pre-filed 2025-01-01
HC-02.3
18 V.S.A. § 7115(b)
Plain Language
No person, corporation, or entity may offer, provide, or advertise mental health services in Vermont that use AI in whole or in part. This is a near-total ban on AI-delivered mental health services. The only exception is where a licensed mental health professional uses AI for the limited administrative support and transcription purposes authorized under 26 V.S.A. § 7101. This prohibition applies broadly — it covers any entity, not just licensed professionals — meaning AI therapy chatbot operators, technology companies, and any other entity offering AI-powered mental health services in Vermont are covered. Violations are deemed Consumer Protection Act violations with a $10,000 per-violation civil penalty.
(b) A person, corporation, or entity shall not offer, provide, or advertise mental health services in the State that use artificial intelligence in whole or in part, except as authorized pursuant to 26 V.S.A. § 7101.
Pre-filed 2025-01-01
HC-02.2
26 V.S.A. § 7101(d)
Plain Language
Licensed mental health professionals are categorically prohibited from using AI to make therapeutic decisions, issue direct therapeutic communications with patients, generate treatment plans or recommendations, or detect or interpret emotions or mental states. They are also prohibited from offering, providing, or advertising any AI-powered mental health services, except for the narrow administrative support and transcription uses permitted under subsection (b). This is a professional conduct prohibition — violations constitute unprofessional conduct under 3 V.S.A. § 129a, exposing the professional to license denial or disciplinary action. The prohibition extends to both licensed and non-licensed/non-certified practitioners who provide mental health services.
(d) Prohibited uses. A mental health professional shall neither: (1) use artificial intelligence in the State to make therapeutic decisions, issue direct therapeutic communications, generate treatment plans or recommendations, or detect or interpret emotions or mental states; nor (2) offer, provide, or advertise mental health services in the State that use artificial intelligence in whole or in part, except as provided in subsection (b) of this section.
Pre-filed 2025-01-01
HC-02.1
26 V.S.A. § 7101(b)(1)
Plain Language
Mental health professionals may use AI for administrative support tasks — such as scheduling, billing, record-keeping, deidentified data analysis, and organizing referrals — but only if the professional personally reviews and assumes responsibility for all tasks performed by, outputs created by, and data use associated with the AI system. This is the sole permitted use of AI in mental health service delivery (alongside transcription under subsection (b)(2)). The professional responsibility requirement is absolute: delegating review to another AI system or failing to review AI outputs before they are used would violate this provision. Administrative support explicitly excludes therapeutic communication.
(b) Permitted uses. (1) A mental health professional may use artificial intelligence for administrative support to the extent that the professional reviews and assumes responsibility for all tasks performed by, outputs created by, and data use associated with the artificial intelligence system employed.
Pre-filed 2025-01-01
HC-02.4
26 V.S.A. § 7101(b)(2)
Plain Language
Before using AI for transcription or recording of therapeutic sessions, the mental health professional must (1) inform the patient (or their legal guardian) in writing of the specific purpose of the AI use and that any transcription or recording will be subject to existing mental health confidentiality protections under 18 V.S.A. §§ 1881 and 7103, and (2) obtain written, informed, revocable consent. Both steps must occur before the AI transcription or recording begins. The consent requirement is strict — it must be explicit, affirmative, in writing, voluntary, informed, and revocable at any time. Services may not proceed with AI transcription absent this consent.
(2) If a mental health professional uses artificial intelligence for transcription and recording purposes, the mental health professional shall first: (A) inform the patient or client, or the patient's or client's legal guardian, in writing of the specific purpose for which artificial intelligence is being used and that any transcription or recording performed by artificial intelligence shall be subject to the disclosure prohibitions in subsection (c) of this section; and (B) obtain consent from the patient or client, or the patient's or client's legal guardian.
Pre-filed 2025-01-01
HC-02.1
26 V.S.A. § 7101(c)
Plain Language
All administrative support tasks performed using AI — including transcription and recording — remain subject to existing Vermont mental health confidentiality protections under 18 V.S.A. §§ 1881 (general health information confidentiality) and 7103 (mental health treatment information confidentiality). This means AI-processed patient data may not be disclosed in violation of these existing confidentiality statutes. The provision does not create a new confidentiality standard but expressly extends existing prohibitions to cover AI-processed data in therapeutic settings.
(c) Confidentiality. Any administrative support tasks conducted using artificial intelligence shall be subject to the disclosure prohibitions in 18 V.S.A. §§ 1881 and 7103, including transcription and recording.
Pending 2027-01-01
HC-02.1HC-02.2
§33-57-2(b)
Plain Language
Operators and licensed professionals may use AI only for administrative or supplementary support in therapy or psychotherapy, and must maintain full responsibility for all interactions, outputs, and data use associated with the AI system. No decision for patient care, reimbursement, or claims adjudication may be based exclusively on AI-generated information. This establishes that AI is a support tool only — the human professional retains ultimate accountability and decisional authority.
(b) An operator or licensed professional is permitted to use AI tools or systems to assist in providing administrative support or supplementary support in therapy or psychotherapy services with the operator or licensed professional maintaining full responsibility for all interactions, outputs and data use associated with the system and satisfies the requirements of this article. A decision for patient care, reimbursement or claims adjudication may not be based exclusively on AI-generated information.
Pending 2027-01-01
HC-02.4
§33-57-2(d)
Plain Language
Before using AI to record or transcribe a therapeutic session, the operator or licensed professional must inform the patient (or their legally authorized representative) in writing that AI will be used and explain the specific purpose of the AI tool. The patient must then provide consent. Critically, 'consent' under this bill has a heightened standard — it must be a clear, explicit, freely given, specific written agreement that is revocable at any time. Consent cannot be obtained via general terms of use, passive actions like hovering or closing content, or deceptive actions. This effectively prohibits burying AI recording consent in standard intake paperwork.
(d) No operator or licensed professional may be permitted to use artificial intelligence to assist in providing supplementary support in therapy or psychotherapy where the client's therapeutic session is recorded or transcribed unless: (1) The patient or the patient's legally authorized representative is informed in writing of the following: that artificial intelligence will be used; and the specific purpose of the artificial intelligence tool or system that will be used; and (2) The patient or the patient's legally authorized representative provides consent to the use of artificial intelligence.
Pending 2027-01-01
HC-02.3HC-02.5
§33-57-2(e)
Plain Language
Therapy or psychotherapy services may not be provided, advertised, or offered to the public in West Virginia — including via internet-based AI — unless conducted by a licensed professional. Additionally, no operator or licensed professional may design, market, or present any AI system in a way that would reasonably cause a person to believe the AI system is a licensed professional or crisis service. This is a two-part prohibition: (1) AI-only therapy without a licensed professional is banned, and (2) AI systems cannot be designed to impersonate licensed professionals or crisis services.
(e) No operator or licensed professional may provide, advertise, or otherwise offer therapy or psychotherapy services, including through the use of Internet-based artificial intelligence, to the public in this state unless the therapy or psychotherapy services are conducted by an individual who is a licensed professional, and may not design, market or present any AI system that reasonably would cause a person to believe the AI system is a licensed professional or crisis service.
Pending 2027-01-01
HC-02.2
§33-57-2(f)
Plain Language
Peer support services, religious counseling services, and digital mental wellness services are prohibited from using AI to diagnose conditions, develop or modify treatment plans, conduct suicide or self-harm risk assessments, or otherwise provide therapy or psychotherapy services — unless a licensed professional approves. This prevents non-clinical services from using AI to effectively practice therapy without professional oversight, even if the service itself is not marketed as therapy.
(f) Peer support services, religious counseling services and digital mental wellness services may not, through the use of artificial intelligence, diagnose, develop or modify treatment plans, conduct suicide or self-harm risk assessments, or otherwise provide therapy or psychotherapy services without the approval of a licensed professional.
Pending 2027-01-01
HC-02.2
§33-57-2(g)
Plain Language
Licensed professionals face four categorical prohibitions on AI use: AI may not (1) make independent therapeutic decisions, (2) directly interact with clients in any form of therapeutic communication, (3) generate treatment plans or therapeutic recommendations without the licensed professional's review and approval, or (4) detect emotions or mental states for diagnostic, therapeutic, or treatment purposes or to target or manipulate a person's mental or emotional state. The emotion-detection prohibition is notably broad — it covers both clinical use (emotion detection for diagnosis) and manipulative use (targeting emotional states). The overall framing limits AI use to administrative and supplementary support only, with the professional retaining full decisional authority.
(g) An operator or licensed professional may use artificial intelligence only to the extent the use meets the requirements of subsection (b). A licensed professional may not allow artificial intelligence to do any of the following: (1) Make independent therapeutic decisions; (2) Directly interact with clients in any form of therapeutic communication; (3) Generate therapeutic recommendations or treatment plans without review and approval by the licensed professional; or (4) Detect emotions or mental states for the purpose of making diagnostic, therapeutic, or treatment decisions, or for targeting or manipulating a person's mental or emotional state.
Pending 2027-01-01
HC-02.1
§33-57-2(h)
Plain Language
AI may be used to flag or triage communications indicating self-harm, suicide risk, or other acute safety concerns — this is a narrow carve-out from the broader prohibition on AI therapeutic interaction. However, any such AI-generated flags must be promptly reviewed and addressed by a licensed professional who retains sole authority for clinical assessment and decision-making. The AI's role is limited to alerting and triaging; it cannot independently assess risk, recommend interventions, or communicate with the patient about the flagged concern.
(h) An operator or licensed professional may use artificial intelligence solely to flag or triage communications that may indicate self-harm, suicide risk, or other acute safety concerns, provided that any such flags are promptly reviewed and addressed by a licensed professional who retains sole authority for clinical assessment and decision-making.