HC-02
Healthcare AI
AI in Licensed Professional Practice Restrictions
Licensed professionals (including psychotherapists, counselors, psychologists, social workers, and similar practitioners) must maintain full professional responsibility for all AI interactions, outputs, and data use within their practice. AI systems may not independently make therapeutic decisions, directly interact with clients in therapeutic communication, generate treatment plans without professional review, or detect emotions or mental states in clinical contexts. No person or entity may offer, advertise, or provide therapy or psychotherapy services through AI unless conducted by a licensed professional. Use of AI to record or transcribe therapeutic sessions requires written, informed, revocable consent obtained in advance.
Applies to: Developer, Deployer, Professional, Government
Sector: Healthcare, Mental Health, Professional Services, Chatbot
Bills — Enacted: 0 unique bills
Bills — Proposed: 23
Last Updated: 2026-03-29
Sub-Obligations (5)
ID
Name & Description
Enacted
Proposed
HC-02.1
Professional Responsibility for AI Outputs: Licensed professionals must maintain full responsibility for all interactions, outputs, and data use associated with any AI system they use in delivering professional services. AI outputs used in clinical contexts — including therapeutic recommendations, treatment plans, and medical necessity determinations — must be reviewed and approved by the responsible licensed professional before being acted upon. The reviewing professional must hold credentials in the same or similar specialty as the subject matter of the determination.
0 enacted
18 proposed
HC-02.2
Prohibited AI Functions in Licensed Practice: AI systems must not independently make therapeutic decisions, directly interact with clients in therapeutic communication, generate treatment plans without licensed professional review, or detect or infer emotions or mental states in clinical or consumer-facing professional contexts.
0 enacted
20 proposed
HC-02.3
Unlicensed AI Therapy Prohibition: No person or entity may offer, advertise, or provide therapy, psychotherapy, or other licensed professional services through AI systems unless those services are conducted by a state-licensed, registered, or certified professional.
0 enacted
17 proposed
HC-02.4
AI Session Recording Consent: Before using AI to record or transcribe a therapeutic or counseling session, the licensed professional must inform the patient in writing of the AI's use and specific purpose and obtain written, informed consent that is revocable at any time. Consent must be obtained at least 24 hours in advance where required by applicable law. Services may not be denied based on refusal to consent.
0 enacted
15 proposed
HC-02.5
AI Professional Representation Prohibition: Operators and providers are prohibited from using any term, letter, phrase, or interface design in advertising, outputs, or system features that indicates or implies AI output is provided by, endorsed by, or equivalent to services from a licensed healthcare, mental health, legal, accounting, or financial professional.
0 enacted
3 proposed
Bills That Map This Requirement (23 bills)
Bill
Status
Sub-Obligations
Section
Failed 2026-07-01
HC-02.2
Fla. Stat. § 490.016(2)
Plain Language
Licensed psychologists and school psychologists are broadly prohibited from using AI in the practice of their profession. The sole exception under this subsection is for administrative and supplementary support tasks — scheduling, non-therapeutic logistics communications, billing, patient records management, and operational data analysis. Any clinical, therapeutic, diagnostic, or treatment-related use of AI is prohibited. The enumerated administrative uses are non-exhaustive ('include, but are not limited to'), so other genuinely administrative uses may also be permissible, but the line between administrative support and clinical practice will require careful judgment.
(2) Except as otherwise provided in this section, a licensee may not use artificial intelligence in the practice of psychology or school psychology. A licensee may use artificial intelligence to: (a) Assist in administrative or supplementary support services. Administrative and supplementary support services include, but are not limited to, all of the following: 1. Managing appointment scheduling and reminders. 2. Drafting general communications related to therapy logistics that do not involve therapeutic advice. 3. Processing billing and insurance claims. 4. Preparing and managing patient records. 5. Analyzing data for operational purposes.
Failed 2026-07-01
HC-02.4
Fla. Stat. § 490.016(2)(b)
Plain Language
Licensed psychologists and school psychologists may use AI to record or transcribe counseling or therapy sessions, but only if they first obtain written, informed consent from the patient at least 24 hours before the session. Consent cannot be obtained at the time of the session — the 24-hour advance requirement is a hard rule. The bill does not specify what 'informed consent' must include, whether consent is revocable, or whether services may be denied for refusal to consent.
(b) Record or transcribe a counseling or therapy session if a licensee obtains written, informed consent at least 24 hours before the provision of services.
Failed 2026-07-01
HC-02.2
Fla. Stat. § 491.019(2)
Plain Language
Licensed clinical social workers, marriage and family therapists, mental health counselors, registered interns, and certificateholders are broadly prohibited from using AI in their professional practice. The sole exception under this subsection is for administrative and supplementary support — scheduling, non-therapeutic logistics communications, billing, patient records management, and operational data analysis. This provision is substantively identical to § 490.016(2) but extends the prohibition to a broader set of mental health practitioners regulated under Chapter 491. The enumerated administrative uses are non-exhaustive.
(2) Except as otherwise provided in this section, a licensee, registered intern, or certificateholder may not use artificial intelligence in the practice of clinical social work, marriage and family therapy, or mental health counseling. A licensee, registered intern, or certificateholder may use artificial intelligence to: (a) Assist in administrative or supplementary support services. Administrative and supplementary support services include, but are not limited to, all of the following: 1. Managing appointment scheduling and reminders. 2. Drafting general communications related to therapy logistics that do not involve therapeutic advice. 3. Processing billing and insurance claims. 4. Preparing and managing patient records. 5. Analyzing data for operational purposes.
Failed 2026-07-01
HC-02.4
Fla. Stat. § 491.019(2)(b)
Plain Language
Licensed clinical social workers, marriage and family therapists, mental health counselors, registered interns, and certificateholders may use AI to record or transcribe counseling or therapy sessions, but only if they obtain written, informed consent from the patient at least 24 hours before the session. This is substantively identical to the parallel consent requirement in § 490.016(2)(b) but applies to the broader set of Chapter 491 practitioners. The same gaps apply — no specification of consent content, revocability, or consequences for refusal.
(b) Record or transcribe a counseling or therapy session if a licensee, registered intern, or certificateholder obtains written, informed consent at least 24 hours before the provision of services.
Failed 2026-07-01
HC-02.2
Fla. Stat. § 490.016(2)
Plain Language
Licensed psychologists and school psychologists are broadly prohibited from using any AI system in professional practice. The only permitted uses are administrative and supplementary support tasks — scheduling, non-therapeutic logistics communications, billing, patient records management, and operational data analysis. Any clinical, therapeutic, or diagnostic use of AI is prohibited. This is a categorical ban on AI in clinical practice rather than a set of guardrails, making it one of the most restrictive approaches to AI in licensed professional services.
(2) Except as otherwise provided in this section, a licensee may not use artificial intelligence in the practice of psychology or school psychology. A licensee may use artificial intelligence to: (a) Assist in administrative or supplementary support services. Administrative and supplementary support services include, but are not limited to, all of the following: 1. Managing appointment scheduling and reminders. 2. Drafting general communications related to therapy logistics that do not involve therapeutic advice. 3. Processing billing and insurance claims. 4. Preparing and managing patient records. 5. Analyzing data for operational purposes.
Failed 2026-07-01
HC-02.4
Fla. Stat. § 490.016(2)(b)
Plain Language
Licensed psychologists and school psychologists may use AI to record or transcribe therapy sessions, but only after obtaining the patient's written, informed consent at least 24 hours before the session in which AI recording or transcription will be used. This is a narrow carve-out from the general prohibition on AI use in practice — the 24-hour advance consent requirement is stricter than typical point-of-service consent obligations.
(b) Record or transcribe a counseling or therapy session if a licensee obtains written, informed consent at least 24 hours before the provision of services.
Failed 2026-07-01
HC-02.2
Fla. Stat. § 491.019(2)
Plain Language
Licensed clinical social workers, marriage and family therapists, mental health counselors, registered interns, and certificateholders are broadly prohibited from using any AI system in professional practice. The only permitted uses are administrative and supplementary support tasks — scheduling, non-therapeutic logistics communications, billing, patient records management, and operational data analysis. This mirrors the § 490.016 prohibition for psychologists and extends the same categorical ban to the Chapter 491 professions, including interns and certificateholders — not just fully licensed practitioners.
(2) Except as otherwise provided in this section, a licensee, registered intern, or certificateholder may not use artificial intelligence in the practice of clinical social work, marriage and family therapy, or mental health counseling. A licensee, registered intern, or certificateholder may use artificial intelligence to: (a) Assist in administrative or supplementary support services. Administrative and supplementary support services include, but are not limited to, all of the following: 1. Managing appointment scheduling and reminders. 2. Drafting general communications related to therapy logistics that do not involve therapeutic advice. 3. Processing billing and insurance claims. 4. Preparing and managing patient records. 5. Analyzing data for operational purposes.
Failed 2026-07-01
HC-02.4
Fla. Stat. § 491.019(2)(b)
Plain Language
Licensed clinical social workers, marriage and family therapists, mental health counselors, registered interns, and certificateholders may use AI to record or transcribe therapy sessions, but only after obtaining the patient's written, informed consent at least 24 hours before the session. This mirrors the § 490.016(2)(b) exception for psychologists and extends it to the Chapter 491 professions.
(b) Record or transcribe a counseling or therapy session if a licensee, registered intern, or certificateholder obtains written, informed consent at least 24 hours before the provision of services.
Failed 2026-07-01
HC-02.2
Fla. Stat. § 490.016(2)
Plain Language
Licensed psychologists and school psychologists are categorically prohibited from using AI in clinical practice — meaning AI may not independently make therapeutic decisions, generate treatment plans, directly interact with patients in therapeutic communication, or otherwise participate in the delivery of psychological services. The only exceptions are narrowly defined administrative tasks: scheduling, non-therapeutic logistics communications, billing, patient record management, and operational data analysis. This is one of the strictest state-level prohibitions on AI in licensed professional practice, going further than states that permit AI use under professional supervision. Enforcement would flow through the Florida Board of Psychology's existing disciplinary authority.
(2) Except as otherwise provided in this section, a licensee may not use artificial intelligence in the practice of psychology or school psychology. A licensee may use artificial intelligence to:
(a) Assist in administrative or supplementary support services. Administrative and supplementary support services include, but are not limited to, all of the following:
1. Managing appointment scheduling and reminders.
2. Drafting general communications related to therapy logistics that do not involve therapeutic advice.
3. Processing billing and insurance claims.
4. Preparing and managing patient records.
5. Analyzing data for operational purposes.
Failed 2026-07-01
HC-02.2
Fla. Stat. § 491.019(2)
Plain Language
Licensed clinical social workers, marriage and family therapists, mental health counselors, registered interns, and certificateholders are categorically prohibited from using AI in clinical practice. This is the Chapter 491 parallel to the Chapter 490 prohibition on psychologists. The same narrow administrative exceptions apply: scheduling, non-therapeutic communications, billing, patient records, and operational data analysis. The covered population is broader than § 490.016 — it extends to registered interns and certificateholders in addition to licensees. Enforcement would flow through the Florida Board of Clinical Social Work, Marriage and Family Therapy, and Mental Health Counseling.
(2) Except as otherwise provided in this section, a licensee, registered intern, or certificateholder may not use artificial intelligence in the practice of clinical social work, marriage and family therapy, or mental health counseling. A licensee, registered intern, or certificateholder may use artificial intelligence to:
(a) Assist in administrative or supplementary support services. Administrative and supplementary support services include, but are not limited to, all of the following:
1. Managing appointment scheduling and reminders.
2. Drafting general communications related to therapy logistics that do not involve therapeutic advice.
3. Processing billing and insurance claims.
4. Preparing and managing patient records.
5. Analyzing data for operational purposes.
Failed 2026-07-01
HC-02.4
Fla. Stat. § 490.016(2)(b)
Plain Language
Licensed psychologists and school psychologists may use AI to record or transcribe therapy sessions, but only if they obtain written, informed consent from the patient at least 24 hours before the session. This is one of the narrow exceptions to the categorical ban on AI in clinical psychology practice. The 24-hour advance requirement is notable — it prevents obtaining consent at the start of a session. The bill does not specify whether consent is revocable, unlike CA SB 243, which explicitly requires revocability.
(b) Record or transcribe a counseling or therapy session if a licensee obtains written, informed consent at least 24 hours before the provision of services.
Failed 2026-07-01
HC-02.4
Fla. Stat. § 491.019(2)(b)
Plain Language
Licensed clinical social workers, marriage and family therapists, mental health counselors, registered interns, and certificateholders may use AI to record or transcribe counseling or therapy sessions, but only with written, informed consent obtained at least 24 hours in advance. This is the Chapter 491 parallel to the Chapter 490 recording exception. The same 24-hour advance consent requirement applies. The covered population again extends to registered interns and certificateholders.
(b) Record or transcribe a counseling or therapy session if a licensee, registered intern, or certificateholder obtains written, informed consent at least 24 hours before the provision of services.
Passed 2025-01-01
HC-02.1
Section 15(a)-(b)
Plain Language
Licensed professionals may use AI in therapy or psychotherapy only for administrative support (scheduling, billing, logistics communications) and supplementary support (record-keeping, anonymized data analysis, resource organization). The licensed professional must maintain full responsibility for all interactions, outputs, and data use associated with the AI system. AI may not be used for any task involving therapeutic communication. This is a continuing accountability obligation — the professional cannot delegate responsibility to the AI system.
(a) As used in this Section, "permitted use of artificial intelligence" means the use of artificial intelligence tools or systems by a licensed professional to assist in providing administrative support or supplementary support in therapy or psychotherapy services where the licensed professional maintains full responsibility for all interactions, outputs, and data use associated with the system and satisfies the requirements of subsection (b).
Passed 2025-01-01
HC-02.4
Section 15(b)(1)-(2)
Plain Language
Before using AI to record or transcribe a therapeutic session, the licensed professional must inform the patient (or their legally authorized representative) in writing that AI will be used and what its specific purpose is, and must obtain written, informed, revocable consent. Consent cannot be obtained through general terms of service, passive UI interactions, or deceptive practices — it must be a clear, explicit, specific affirmative act. The patient may revoke consent at any time. This obligation applies specifically to the recording/transcription use case within supplementary support.
(b) No licensed professional shall be permitted to use artificial intelligence to assist in providing supplementary support in therapy or psychotherapy where the client's therapeutic session is recorded or transcribed unless: (1) the patient or the patient's legally authorized representative is informed in writing of the following: (A) that artificial intelligence will be used; and (B) the specific purpose of the artificial intelligence tool or system that will be used; and (2) the patient or the patient's legally authorized representative provides consent to the use of artificial intelligence.
Passed 2025-01-01
HC-02.3
Section 20(a)
Plain Language
No person or entity — including AI platform operators, corporations, or individuals — may provide, advertise, or offer therapy or psychotherapy services to the public in Illinois unless those services are conducted by a state-licensed professional. This is a categorical prohibition on AI-only therapy: an AI system cannot independently deliver therapy or psychotherapy, even if clearly labeled as AI. The prohibition covers Internet-based AI specifically. Exceptions exist for religious counseling, peer support, and self-help materials that do not purport to offer therapy.
(a) An individual, corporation, or entity may not provide, advertise, or otherwise offer therapy or psychotherapy services, including through the use of Internet-based artificial intelligence, to the public in this State unless the therapy or psychotherapy services are conducted by an individual who is a licensed professional.
Passed 2025-01-01
HC-02.2
Section 20(b)
Plain Language
Licensed professionals face four specific prohibitions on AI use in therapy: AI may not (1) make independent therapeutic decisions, (2) directly interact with clients in any form of therapeutic communication, (3) generate treatment plans or therapeutic recommendations without the licensed professional's review and approval, or (4) detect emotions or mental states. The definition of therapeutic communication is broad — it covers diagnosis, guidance, emotional support, treatment planning collaboration, and behavioral feedback. This effectively bars AI from any client-facing therapeutic role and from emotion recognition in clinical contexts.
(b) A licensed professional may use artificial intelligence only to the extent the use meets the requirements of Section 15. A licensed professional may not allow artificial intelligence to do any of the following: (1) make independent therapeutic decisions; (2) directly interact with clients in any form of therapeutic communication; (3) generate therapeutic recommendations or treatment plans without review and approval by the licensed professional; or (4) detect emotions or mental states.
Pending 2025-10-08
HC-02.1
G.L. c. 112, § 298(b)
Plain Language
Licensed professionals may only use AI in therapy or psychotherapy for administrative support (scheduling, billing, logistics) or supplementary support (record-keeping, anonymized data analysis, resource organization) — never for therapeutic communication. The licensed professional must maintain full responsibility for all interactions, outputs, and data use associated with the AI system. This provision defines the permitted envelope of AI use and ties it to the consent requirements in subsection (c).
(b) As used in this Section, "permitted use of artificial intelligence" means the use of artificial intelligence tools or systems by a licensed professional to assist in providing administrative support or supplementary support in therapy or psychotherapy services where the licensed professional maintains full responsibility for all interactions, outputs, and data use associated with the system and satisfies the requirements of subsection (c).
Pending 2025-10-08
HC-02.4
G.L. c. 112, § 298(c)
Plain Language
When a licensed professional uses AI to record or transcribe a therapy session, the professional must first inform the patient (or their legally authorized representative) in writing that AI will be used and explain the specific purpose of the AI tool. The patient must then provide consent that meets a high bar: it must be a clear, explicit, freely given, informed, written affirmative act and must be revocable. Buried-in-TOS consent, passive interactions like hovering or closing content, and deceptively obtained agreements do not qualify. This consent requirement applies only when sessions are recorded or transcribed — use of AI for other supplementary support tasks (e.g., organizing referrals) does not independently trigger this consent obligation.
(c) No licensed professional shall be permitted to use artificial intelligence to assist in providing supplementary support in therapy or psychotherapy where the client's therapeutic session is recorded or transcribed unless: (1) the patient or the patient's legally authorized representative is informed in writing of the following: (A) that artificial intelligence will be used; and (B) the specific purpose of the artificial intelligence tool or system that will be used; and (2) the patient or the patient's legally authorized representative provides consent to the use of artificial intelligence.
Pending 2025-10-08
HC-02.3
G.L. c. 112, § 298(d)
Plain Language
No person or entity may provide, advertise, or offer therapy or psychotherapy services in Massachusetts — including through internet-based AI — unless those services are conducted by a state-licensed professional. This effectively prohibits AI-only therapy products that lack a licensed professional conducting the services. Religious counseling and peer support are excluded from the definition of therapy or psychotherapy services and are therefore not subject to this restriction.
(d) An individual, corporation, or entity may not provide, advertise, or otherwise offer therapy or psychotherapy services, including through the use of Internet-based artificial intelligence, to the public in this State unless the therapy or psychotherapy services are conducted by an individual who is a licensed professional.
Pending 2025-10-08
HC-02.2
G.L. c. 112, § 298(e)
Plain Language
Licensed professionals must not allow AI to: (1) make independent therapeutic decisions, (2) directly interact with clients in therapeutic communication (which is broadly defined to include emotional support, guidance, therapeutic strategies, and collaborative goal-setting), (3) generate treatment plans or therapeutic recommendations without the professional's review and approval, or (4) detect emotions or mental states. These are categorical prohibitions — there is no safe harbor or compliance pathway that would permit these uses. The emotion detection ban is notably broad, covering any use of AI to detect emotions or mental states in a therapeutic context.
(e) A licensed professional may use artificial intelligence only to the extent the use meets the requirements of subsections (b) and (c). A licensed professional may not allow artificial intelligence to do any of the following: (1) make independent therapeutic decisions; (2) directly interact with clients in any form of therapeutic communication; (3) generate therapeutic recommendations or treatment plans without review and approval by the licensed professional; or (4) detect emotions or mental states.
Pending 2025-10-08
G.L. c. 112, § 298(f)
Plain Language
All records maintained by a licensed professional and all communications between a patient and the professional are confidential and may not be disclosed except as required by law. This broadly covers any AI-processed records, transcriptions, and data generated through AI supplementary support — the confidentiality obligation extends to AI-generated outputs as well as traditional clinical records.
(f) All records kept by a licensed professional and all communications between an individual seeking therapy or psychotherapy services and a licensed professional shall be confidential and shall not be disclosed except as required by law.
Passed 2026-01-01
HC-02.1
22 MRSA § 1730-B(2)
Plain Language
Licensed mental health professionals may use AI only for administrative support or supplementary support — both of which are defined to exclude therapeutic communication. The licensed professional must maintain full responsibility for all AI interactions, outputs, and data use. This is a gatekeeping obligation: any use of AI that goes beyond these permitted categories is unlawful, and the professional remains personally accountable for everything the AI produces. The professional must also satisfy the informed consent requirements of subsection 3 for supplementary support involving session recording or transcription.
2. Permitted use of artificial intelligence. A licensed professional may use artificial intelligence to assist in providing administrative support or supplementary support in therapy or psychotherapy services only if the licensed professional maintains full responsibility for all interactions, outputs and data use associated with the use of artificial intelligence and satisfies the requirements of subsection 3.
Passed 2026-01-01
HC-02.4
22 MRSA § 1730-B(3)(A)-(B)
Plain Language
When a licensed professional uses AI to record or transcribe a therapeutic session for supplementary support purposes, the professional must first provide the client (or their legally authorized representative) with written notice covering three specific items: that AI will be used, the specific purpose of the AI tool, and how session data will be stored, retained, used for training, and deleted upon termination of services. The client must then provide affirmative written consent that meets a high bar — general terms-of-use acceptance, passive UI interactions, and deceptively obtained agreements do not qualify. Consent is revocable at any time.
3. Requirements of use. A licensed professional may use artificial intelligence to assist in providing supplementary support in therapy or psychotherapy services when the client's therapeutic session is recorded or transcribed only if: A. The client or the client's legally authorized representative is informed in writing of the following: (1) That artificial intelligence will be used; (2) The specific purpose of the artificial intelligence tool or system that will be used; and (3) How session data collected by artificial intelligence will be stored, retained, used for training and deleted upon termination of therapy or psychotherapy services; and B. The client or the client's legally authorized representative provides consent to the use of artificial intelligence.
Passed 2026-01-01
HC-02.2, HC-02.3
22 MRSA § 1730-B(4)
Plain Language
This provision imposes three distinct prohibitions. First, no person — licensed or not — may offer therapy or psychotherapy to the public through AI unless a licensed professional is actually providing the services; this effectively bans standalone AI therapy products not supervised by a licensee. Second, licensed professionals may not allow AI to make independent therapeutic decisions or directly interact with clients in therapeutic communication. Third, AI may not generate treatment plans or therapeutic recommendations unless the licensed professional reviews and approves them before they are acted upon. The definition of therapeutic communication is broad, encompassing emotional support, diagnostic interactions, treatment planning, and behavioral feedback.
4. Prohibition of use. A person may not provide, advertise or otherwise offer therapy or psychotherapy services, including through the use of Internet-based artificial intelligence, to the public unless the therapy or psychotherapy services are provided by a licensed professional. A licensed professional may use artificial intelligence only to the extent the use meets the requirements of subsection 3. A licensed professional may not allow artificial intelligence to: A. Make independent therapeutic decisions; B. Directly interact with clients in any form of therapeutic communication; or C. Generate therapeutic recommendations or treatment plans without review and approval by the licensed professional.
Pending 2027-01-01
HC-02.2, HC-02.3
Sec. 5(1)(b)
Plain Language
Operators may not make a companion chatbot available to a covered minor if the chatbot is foreseeably capable of offering mental health therapy without direct supervision by a licensed or credentialed professional, or of discouraging the minor from seeking help from a qualified professional or parent/guardian. This effectively prohibits unsupervised AI therapy for minors and ensures chatbots do not become a barrier to professional care. The actual knowledge requirement is removed beginning January 1, 2027.
An operator shall not make a companion chatbot available to a covered minor unless the companion chatbot is not foreseeably capable of any of the following: (b) Offering mental health therapy to the covered minor without the direct supervision of a licensed or credentialed professional, or discouraging the covered minor from seeking help from a qualified professional or a parent or guardian.
Pending 2027-01-01
HC-02.2, HC-02.3
Sec. 5(1)(b)
Plain Language
Operators must ensure companion chatbots are not foreseeably capable of offering mental health therapy to minors without direct supervision of a licensed or credentialed professional, or of discouraging minors from seeking help from qualified professionals or their parents/guardians. This is a dual prohibition: (1) unsupervised AI therapy to minors is blocked, and (2) the chatbot must not discourage minors from seeking human help. The 'direct supervision' standard is stricter than many jurisdictions that merely require licensed professional review — here, supervision must be active and contemporaneous. Beginning January 1, 2027, the actual knowledge requirement for minor status is removed.
An operator shall not make a companion chatbot available to a covered minor unless the companion chatbot is not foreseeably capable of any of the following: (b) Offering mental health therapy to the covered minor without the direct supervision of a licensed or credentialed professional, or discouraging the covered minor from seeking help from a qualified professional or a parent or guardian.
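The dual capability test above reduces to a set-membership check: availability to a covered minor is permitted only if neither barred capability is foreseeable. A minimal sketch of how an operator might encode that gate; the capability labels and function name are illustrative, not drawn from the bill:

```python
def may_offer_to_minor(foreseeable_capabilities: set[str]) -> bool:
    """Return True only if the chatbot is NOT foreseeably capable of either
    behavior barred for covered minors (sketch of Sec. 5(1)(b))."""
    barred = {
        "unsupervised_mental_health_therapy",
        "discouraging_help_from_professional_or_guardian",
    }
    # Any overlap with the barred set blocks availability to the minor.
    return not (foreseeable_capabilities & barred)
```

Note the rule is phrased negatively in the statute ("unless ... not foreseeably capable"), so a single foreseeable barred capability is enough to block availability.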
Pending 2026-03-10
HC-02.3
Minn. Stat. § 214.165, subd. 2(a)
Plain Language
No individual, corporation, or entity may provide, advertise, or offer therapy or psychotherapy services to the public in Minnesota unless those services are conducted by a licensed professional. This effectively prohibits AI-only therapy products — any therapy or psychotherapy service must have a licensed human professional conducting the services. This obligation applies broadly to any person or entity, not just licensed professionals themselves, capturing AI companies and platforms that might offer autonomous therapy services.
(a) An individual, corporation, or entity must not provide, advertise, or otherwise offer therapy or psychotherapy services to the public in Minnesota unless the therapy or psychotherapy services are conducted by an individual who is a licensed professional.
Pending 2026-03-10
HC-02.2
Minn. Stat. § 214.165, subd. 2(b)
Plain Language
Licensed professionals are prohibited from using AI systems in three specific ways: (1) allowing AI to make independent therapeutic decisions, (2) allowing AI to directly interact with clients in any form of therapeutic communication, and (3) using AI to generate therapeutic recommendations or treatment plans without the licensed professional personally reviewing and approving them before they are acted upon. The definition of therapeutic communication is broad — covering emotional support, empathy, guidance, treatment planning collaboration, and behavioral feedback. This effectively bars AI chatbots from engaging in any client-facing therapeutic dialogue within a licensed professional's practice.
(b) A licensed professional must not use artificial intelligence systems to: (1) make independent therapeutic decisions; (2) directly interact with clients in any form of therapeutic communication; or (3) generate therapeutic recommendations or treatment plans without review and approval by the licensed professional.
Pending 2026-03-10
HC-02.1
Minn. Stat. § 214.165, subd. 3
Plain Language
Licensed professionals may use AI systems for administrative and supplementary support tasks — such as maintaining records, scheduling, billing, analyzing anonymized data, organizing referrals, and drafting logistical communications — but only if the professional maintains full responsibility for all interactions, outputs, and data use associated with the AI system. This is the affirmative permission carve-out to the prohibitions in subdivision 2. The key condition is that the licensed professional retains full accountability; delegation of administrative tasks to AI does not relieve the professional of responsibility for what the system produces or how it handles client data.
A licensed professional may use artificial intelligence systems to assist in providing administrative or supplementary support in therapy or psychotherapy services if the licensed professional maintains full responsibility for all interactions, outputs, and data use associated with the system.
Passed 2025-07-01
HC-02.2
Sec. 2(1), (3), (4)
Plain Language
Public schools (including charter schools and university schools for profoundly gifted pupils) may not use AI to perform the mental-health-related functions and duties of school counselors, school psychologists, or school social workers. However, those personnel may still use AI for administrative tasks such as scheduling, managing records, analyzing data for operational purposes, and organizing files or notes about pupils, and may use AI in accordance with the Department of Education policy once developed. This is a categorical prohibition on AI substituting for human mental health functions in schools, not a restriction on AI-assisted administrative work.
1. A public school, including, without limitation, a charter school or university school for profoundly gifted pupils, shall not use artificial intelligence to perform the functions and duties of a school counselor, school psychologist or school social worker as prescribed in NRS 391.293, 391.294 and 391.296, respectively, which relate to the mental health of pupils. 3. The provisions of subsection 1 do not prohibit a school counselor, school psychologist, school social worker or other educational personnel from using artificial intelligence in accordance with the policy developed pursuant to subsection 2 or to perform tasks for administrative support, which may include, without limitation: (a) Scheduling; (b) Managing records; (c) Analyzing data for operational purposes; and (d) Organizing, tracking and managing files or notes pertaining to a pupil. 4. As used in this section, "artificial intelligence" means a machine-based system that, for any explicit or implicit objective, infers from the inputs the system receives how to generate outputs, including, without limitation, content, decisions, predictions or recommendations, that can influence physical or virtual environments.
Passed 2025-07-01
HC-02.3
Sec. 7(2), (6)(a)-(b)
Plain Language
AI providers may not make available in Nevada any AI system that is specifically programmed to provide a service or experience that would constitute the practice of professional mental or behavioral health care if performed by a human. This is a categorical ban on AI therapy products — not merely a labeling or disclosure requirement. The ban covers psychology, clinical counseling, marriage and family therapy, social work, substance abuse counseling, problem gambling counseling, and psychiatry. Self-help guidance products that do not purport to offer professional care are exempted, as are AI systems designed solely for licensed provider administrative support. Violations carry civil penalties up to $15,000 per violation.
2. An artificial intelligence provider shall not make available for use by a person in this State an artificial intelligence system that is specifically programmed to provide a service or experience to a user that would constitute the practice of professional mental or behavioral health care if provided by a natural person. 6. This section shall not be construed to prohibit: (a) Any advertisement, statement or representation for or relating to materials, literature and other products which are meant to provide advice and guidance for self-help relating to mental or behavioral health, if the material, literature or product does not purport to offer or provide professional mental or behavioral health care. (b) Offering or operating an artificial intelligence system that is designed to be used by a provider of professional mental or behavioral health care to perform tasks for administrative support in conformity with subsection 2 of section 8 of this act.
Passed 2025-07-01
HC-02.5
Sec. 7(3)
Plain Language
An unlicensed individual may not represent themselves as qualified to provide professional mental or behavioral health care — including by using titles such as 'therapist,' 'psychotherapist,' 'counselor,' or similar terms — unless they hold a valid state credential authorizing them to practice. While this provision applies to natural persons generally and not specifically to AI, it is placed within the same section as the AI provider restrictions and is enforced through the same $15,000 per violation penalty. It targets individuals who may use AI tools to offer quasi-therapeutic services while claiming professional credentials they do not hold.
3. A natural person shall not represent himself or herself as being qualified to provide professional mental or behavioral health care, including, without limitation, by using the title of "therapist," "psychotherapist" or "counselor," or any similar title, if the person does not possess a valid credential issued by a governmental entity that authorizes the person to practice professional mental or behavioral health care in this State.
Passed 2025-07-01
HC-02.2
Sec. 8(1)-(2)
Plain Language
Licensed mental and behavioral health care providers may not use AI in connection with providing professional mental or behavioral health care directly to a patient. AI use is permitted only for administrative support tasks: scheduling, records management, billing, data analysis for operational purposes, and organizing session files or notes. This is a near-absolute ban on AI in direct patient-facing therapeutic care, with a narrow carve-out for back-office administrative functions. School-based providers may also follow the Department of Education policy developed under Section 2 if applicable. Violations constitute unprofessional conduct subject to licensing board disciplinary action.
1. Except as otherwise provided by subsection 2 and, where applicable, the policy adopted by the Department of Education pursuant to section 2 of this act, a provider of mental and behavioral health care shall not use an artificial intelligence system in connection with providing professional mental and behavioral health care directly to a patient. 2. A provider of mental and behavioral health care may use an artificial intelligence system to assist the provider with performing tasks for administrative support, which may include, without limitation: (a) Scheduling appointments; (b) Managing records; (c) Billing patients and managing records relating to billing; (d) Analyzing data for operational purposes; and (e) Organizing, tracking and managing files or notes relating to an individual session with a patient.
Passed 2025-07-01
HC-02.1
Sec. 8(4)
Plain Language
When a licensed mental health provider uses AI for billing-related tasks (paragraph (c)) or for organizing/tracking/managing session files or notes (paragraph (e)), the provider must independently review the accuracy of any AI-generated output — including reports, data compilations, summaries, and analyses. This review obligation does not apply to all administrative AI use, only to the two categories most likely to contain patient-specific clinical or financial information. The provider bears personal responsibility for catching AI errors before relying on the output.
4. A provider of mental and behavioral health care shall independently review the accuracy of any report, data or other information compiled, summarized, analyzed or generated by an artificial intelligence system for a purpose described in paragraph (c) or (e) of subsection 2.
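The selective review duty in subsection 4 — review required only for billing (paragraph (c)) and session file/note (paragraph (e)) outputs — can be sketched as a simple release gate. Category names and the function are illustrative assumptions, not statutory terms:

```python
# Only billing (para. (c)) and session file/note (para. (e)) outputs trigger
# the independent-review duty in this sketch; other administrative categories
# (scheduling, operational data analysis) do not.
REVIEW_REQUIRED_CATEGORIES = {"billing", "session_notes"}

def accept_ai_output(category: str, output: str, independently_reviewed: bool) -> str:
    """Release an AI-generated administrative output for the provider's use,
    refusing billing and session-note outputs that have not been reviewed."""
    if category in REVIEW_REQUIRED_CATEGORIES and not independently_reviewed:
        raise ValueError(f"{category!r} output requires independent review before use")
    return output
```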
Pending
HC-02.1
CPLR Rule 2107(c)
Plain Language
Before using generative AI to draft any paper or filing in a civil matter, the attorney must obtain informed consent from the client. The attorney must first warn the client about the dangers of using generative AI for legal research, document review, and document creation. This is a precondition to any AI-assisted drafting — if consent is not obtained, AI may not be used at all. The provision does not specify the form of consent (written vs. oral) or define what constitutes adequate warning.
(c) No paper or file shall be drafted with the use of generative artificial intelligence without the informed consent of the client after being warned of the dangers of using generative artificial intelligence in performing legal research, document review, and document creation.
Pending
HC-02.1
CPL § 10.50(3)
Plain Language
Before using generative AI to draft any paper or filing in a criminal matter, counsel must obtain informed consent from the defendant. The defendant must first be warned about the dangers of using generative AI for legal research, document review, and document creation. This mirrors the civil requirement but substitutes 'defendant' for 'client,' reflecting the criminal procedure context. No AI-assisted drafting is permitted without this consent.
3. No paper or file shall be drafted with the use of generative artificial intelligence without the informed consent of the defendant after being warned of the dangers of using generative artificial intelligence in performing legal research, document review, and document creation.
Pending 2025-11-01
HC-02.1
63 O.S. § 5502(B)
Plain Language
Only qualified end-users may operate AI medical devices. This means the user must be a licensed physician who can independently perform the same clinical procedure without the AI device and who has been specifically trained on the AI device itself, including the ability to assess whether its outputs are valid. No non-physician staff, nurse practitioner, or untrained physician may use the AI device. Deployers must ensure access is restricted accordingly.
B. An AI device shall be used exclusively by a qualified end-user.
Pending 2025-11-01
HC-02.1
63 O.S. § 5503(A)
Plain Language
Before any patient care decision is made based on AI device output, a qualified end-user (licensed physician with appropriate training) must review and validate the AI-generated data for accuracy. The deployer must have documented policies and procedures governing this review process. This is a mandatory human-in-the-loop requirement — no AI output may be acted upon in patient care without prior physician validation.
A. All relevant artificial intelligence (AI) device-generated data shall be reviewed for accuracy and validated by a qualified end-user in accordance with deployer-documented policies and procedures before patient care decisions are rendered.
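Combining § 5502(B) (qualified end-user) with § 5503(A) (validation before patient care), a deployer's gating logic might look like the following minimal sketch. The class, field names, and return shape are illustrative assumptions, not statutory definitions:

```python
from dataclasses import dataclass

@dataclass
class Clinician:
    # Criteria for a "qualified end-user" as summarized above; names are illustrative.
    licensed_physician: bool
    can_perform_procedure_unaided: bool
    trained_on_this_device: bool

    def is_qualified_end_user(self) -> bool:
        # All three criteria must hold.
        return (self.licensed_physician
                and self.can_perform_procedure_unaided
                and self.trained_on_this_device)

def release_for_patient_care(ai_output: dict, reviewer: Clinician, validated: bool) -> dict:
    """Refuse to release AI device output into a patient-care decision unless a
    qualified end-user has reviewed and validated it per documented procedures."""
    if not reviewer.is_qualified_end_user():
        raise PermissionError("reviewer is not a qualified end-user")
    if not validated:
        raise ValueError("output must be validated before patient care decisions")
    return {**ai_output, "validated": True}
```

The point of raising rather than returning a flag is that unreviewed output can never silently flow onward into a care decision.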
Pending 2026-11-01
HC-02.1, HC-02.2, HC-02.3, HC-02.5
75A O.S. § 701(C)(1)-(5)
Plain Language
Therapeutic chatbots may only be made available to minors if all five conditions are satisfied: (1) the chatbot displays a clear, conspicuous AI disclaimer at the start of every interaction stating it is not a licensed professional; (2) the chatbot is not marketed as a substitute for a human professional; (3) a licensed mental health professional assesses the minor user's suitability, prescribes the tool within a comprehensive treatment plan, and monitors use; (4) developers produce robust, independent, peer-reviewed clinical trial data on safety and efficacy for the specific conditions and populations served; and (5) the system's functions, limitations, and data privacy policies are transparent to both the professional and user, with clear accountability lines. This is a conditional exemption from the general prohibition — if any condition is unmet, the therapeutic chatbot may not be made available to minors.
C. Therapeutic chatbots that meet all of the following requirements may be made available to minors: 1. The chatbot provides a clear and conspicuous disclaimer at the beginning of each individual interaction that it is AI and not a licensed professional; 2. The chatbot is not marketed or designated as a substitute for a human professional; 3. A licensed mental health professional (such as a clinical psychologist) assesses a user's suitability and prescribes the tool as part of a comprehensive treatment plan, and monitors its use and impact; 4. Developers provide robust, independent, peer-reviewed clinical trial data demonstrating both the safety and efficacy of the tool for specific conditions and populations; and 5. The system's functions, limitations, and data privacy policies are transparent to both the licensed mental health professional and the user. Clear lines of accountability are established for any harms caused by the system.
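Because the exemption is conjunctive, a compliance check is a five-way AND. A minimal sketch, with illustrative field names standing in for the five statutory conditions (none of these identifiers appear in the statute):

```python
from dataclasses import dataclass

@dataclass
class TherapeuticChatbotDeployment:
    # Illustrative labels for the five conditions of 75A O.S. § 701(C)(1)-(5).
    disclaims_ai_each_interaction: bool   # (1) conspicuous disclaimer at start of each interaction
    marketed_as_substitute: bool          # (2) must be False
    prescribed_and_monitored: bool        # (3) licensed professional prescribes and monitors
    peer_reviewed_trial_data: bool        # (4) independent, peer-reviewed safety/efficacy data
    transparent_and_accountable: bool     # (5) transparency plus clear accountability lines

def may_serve_minors(d: TherapeuticChatbotDeployment) -> bool:
    """All five conditions must be satisfied; any single failure blocks
    making the chatbot available to minors."""
    return (d.disclaims_ai_each_interaction
            and not d.marketed_as_substitute
            and d.prescribed_and_monitored
            and d.peer_reviewed_trial_data
            and d.transparent_and_accountable)
```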
Pending 2026-01-15
HC-02.4
63 O.S. § 7102(A)(1)-(2)
Plain Language
Before using AI to record or transcribe a therapy or psychotherapy session for supplementary support purposes (e.g., preparing therapy notes, analyzing anonymized progress data), the licensed mental health professional must provide written notice to the patient (or their legally authorized representative) disclosing that AI will be used and the specific purpose of the AI tool. The patient must then provide affirmative, informed, written consent that is revocable at any time. Consent buried in general terms of service, obtained through passive UI interactions, or obtained through deceptive means does not qualify.
A. A licensed mental health professional shall not use artificial intelligence to assist in providing supplementary support in therapy or psychotherapy where the client's therapeutic session is recorded or transcribed unless: 1. The patient or the patient's legally authorized representative is informed in writing of the following: a. that artificial intelligence will be used, and b. the specific purpose of the artificial intelligence tool or system that will be used; and 2. The patient or the patient's legally authorized representative provides consent to the use of artificial intelligence.
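The consent prerequisite above — written, informed of the specific purpose, affirmative, not buried in terms of service, and revocable — can be sketched as a gate that must pass before any AI recording or transcription occurs. Field names and the function are illustrative assumptions:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AIConsent:
    in_writing: bool         # notice given and consent recorded in writing
    discloses_ai_use: bool   # patient told that AI will be used
    discloses_purpose: bool  # specific purpose of the tool disclosed
    affirmative: bool        # not inferred from passive UI interactions
    standalone: bool         # not buried in general terms of service
    revoked: bool = False    # consent is revocable at any time

def may_record_session(consent: Optional[AIConsent]) -> bool:
    """Permit AI recording/transcription of a session only with a valid,
    unrevoked consent; absence or revocation means no AI use at all."""
    if consent is None or consent.revoked:
        return False
    return (consent.in_writing and consent.discloses_ai_use
            and consent.discloses_purpose and consent.affirmative
            and consent.standalone)
```

The `revoked` flag matters in practice: a check at the start of each session, not only at intake, is what makes revocability effective.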
Pending 2026-01-15
HC-02.1, HC-02.2
63 O.S. § 7102(B)(1)-(4)
Plain Language
Licensed mental health professionals may use AI for administrative support (scheduling, billing, logistics) and supplementary support (notes, anonymized data analysis, resource organization), but they must maintain full responsibility for all AI interactions, outputs, and data use. AI is categorically prohibited from: (1) making independent therapeutic decisions, (2) directly interacting with clients in any form of therapeutic communication, (3) generating therapeutic recommendations or treatment plans without the professional's review, or (4) detecting emotions or mental states. This means AI outputs like suggested treatment plans must be reviewed and approved by the licensed professional before being acted upon, and AI cannot serve as a direct clinical interlocutor with patients.
B. A licensed mental health professional may use artificial intelligence tools or systems to assist in providing administrative support or supplementary support in therapy or psychotherapy services if the licensed mental health professional maintains full responsibility for all interactions, outputs, and data use associated with the system and satisfies the requirements of subsection A of this section. A licensed mental health professional shall not allow artificial intelligence to do any of the following: 1. Make independent therapeutic decisions; 2. Directly interact with clients in any form of therapeutic communication; 3. Generate therapeutic recommendations or treatment plans without review by the licensed mental health professional; or 4. Detect emotions or mental states.
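Subsection B splits AI uses into three tiers: absolutely prohibited, permitted only after professional review, and freely permitted administrative/supplementary support. A minimal sketch of that classification; the function labels are illustrative, not statutory terms:

```python
# Uses AI may never perform in the professional's practice under this sketch
# (items 1, 2, and 4 of subsection B).
PROHIBITED = {
    "independent_therapeutic_decision",
    "therapeutic_communication_with_client",
    "emotion_or_mental_state_detection",
}

# Uses permitted only after the licensed professional's review (item 3).
REVIEW_REQUIRED = {"therapeutic_recommendation", "treatment_plan"}

def ai_use_permitted(function: str, reviewed_by_professional: bool = False) -> bool:
    """True if the use is allowed: never for prohibited functions, only after
    review for recommendations and plans, freely for administrative support."""
    if function in PROHIBITED:
        return False
    if function in REVIEW_REQUIRED:
        return reviewed_by_professional
    return True
```

Note that emotion detection sits in the absolute tier: unlike treatment plans, no amount of professional review makes it permissible here.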
Pending 2026-01-15
HC-02.1
63 O.S. § 7102(C)
Plain Language
All final decisions in therapy or psychotherapy services must be made by the licensed mental health provider — never by an AI system. This is a bright-line rule: regardless of AI's supporting role in administrative or supplementary tasks, the human professional must retain ultimate decision-making authority over all therapeutic and clinical matters. This reinforces the prohibitions in subsection B and ensures that AI remains a tool under professional supervision, not an autonomous decision-maker.
C. A licensed mental health provider, not artificial intelligence or similar systems, shall make final decisions in the provision of therapy or psychotherapy services.
Pending 2026-01-15
HC-02.3
63 O.S. § 7102(E)(1)
Plain Language
No person, company, or entity may provide, advertise, or offer therapy or psychotherapy services to the Oklahoma public through Internet-based AI unless those services are conducted by a licensed mental health professional. This effectively prohibits standalone AI therapy products that operate without a licensed professional. The obligation applies broadly — not just to licensed professionals but to any individual, corporation, or entity — capturing AI developers and deployers who market therapy chatbots or AI-driven mental health services in Oklahoma without professional involvement.
E. 1. An individual, corporation, or entity shall not provide, advertise, or otherwise offer therapy or psychotherapy services through the use of Internet-based artificial intelligence to the public in this state unless the therapy or psychotherapy services are conducted by an individual who is a licensed mental health professional.
Pending 2026-01-15
HC-02.4
63 O.S. § 7103(A)(1)-(2)
Plain Language
Before any licensed health care provider uses AI to assist in patient care — including medical, dental, optometric, or other health services — the provider must give written notice to the patient (or legally authorized representative) disclosing that AI will be used and the specific purpose of the AI tool. The patient must then provide affirmative, informed, written consent that is revocable at any time. As with the mental health consent requirement, general terms-of-service acceptance, passive UI interactions, and deceptively obtained agreements do not constitute valid consent. This applies broadly to all AI-assisted patient care, not just recording or transcription.
A. A licensed health care provider shall not use artificial intelligence to assist in the provision of a patient's care unless: 1. The patient or the patient's legally authorized representative is informed in writing of the following: a. that artificial intelligence will be used, and b. the specific purpose of the artificial intelligence tool or system that will be used; and 2. The patient or the patient's legally authorized representative provides consent to the use of artificial intelligence.
Pending 2026-01-15
HC-02.1, HC-02.2
63 O.S. § 7103(B)(1)-(4)
Plain Language
Licensed health care providers may use AI to assist in delivering health care services, but they must maintain full responsibility for all AI interactions, outputs, and data use. AI is categorically prohibited from: (1) making independent medical decisions, (2) directly interacting with patients in any form of medical communication, (3) diagnosing medical conditions, or (4) generating medical advice, recommendations, or treatment plans without the provider's review. This means all AI-generated clinical outputs must pass through a licensed provider before reaching or affecting the patient. The scope is broader than the mental health prohibitions — it adds an explicit prohibition on AI diagnosis and covers medical advice and recommendations alongside treatment plans.
B. A licensed health care provider may use artificial intelligence tools or systems to assist in providing health care services if the licensed health care provider maintains full responsibility for all interactions, outputs, and data use associated with the system and satisfies the requirements of subsection A of this section. A licensed health care provider shall not allow artificial intelligence to do any of the following: 1. Make independent medical decisions; 2. Directly interact with patients in any form of medical communication; 3. Diagnose medical conditions; or 4. Generate medical advice or recommendations or treatment plans without review by the licensed health care provider.
Pending 2026-01-15
HC-02.1
63 O.S. § 7103(C)
Plain Language
All final decisions in the provision of health care services must be made by a licensed health care provider — never by AI or similar automated systems. This parallels the mental health provision in § 7102(C) and establishes a categorical bright-line rule for all health care contexts: AI may inform but may never control clinical decision-making.
C. A licensed health care provider, not artificial intelligence or similar systems, shall make final decisions in the provision of health care services.
Pending 2026-01-28
HC-02.4
R.I. Gen. Laws § 40.1-5.5-3(a)
Plain Language
Before using AI to record or transcribe a client's therapeutic session, the licensed professional must inform the patient (or their parent, guardian, or legal representative) in writing that AI will be used, explain the specific purpose of the AI tool, and obtain affirmative written consent that meets the statute's strict consent requirements. Consent cannot be bundled into general terms of use, cannot be inferred from passive actions, and is revocable at any time. This applies specifically when AI is used for supplementary support tasks involving session recording or transcription.
(a) No licensed professional shall be permitted to use artificial intelligence, designed to simulate emotional attachment, bonding, or dependency or artificial intelligence companions for mental health or emotional support, to assist in providing supplementary support in therapy or psychotherapy services where the client's therapeutic session is recorded or transcribed unless the patient or the patient's parent, guardian or other legally authorized representative is informed in writing of the following: (1) That artificial intelligence will be used; (2) The specific purpose of the artificial intelligence tool or system that will be used; and (3) The patient or the patient's parents, or other legally authorized representative, provides consent to the use of artificial intelligence.
Pending 2026-01-28
HC-02.3
R.I. Gen. Laws § 40.1-5.5-3(b)
Plain Language
No individual, corporation, or entity may provide, advertise, or offer therapy or psychotherapy services to the public in Rhode Island — including through Internet-based AI — unless those services are actually conducted by a state-licensed professional. This effectively prohibits standalone AI therapy products that operate without a licensed professional conducting the services. The prohibition extends to advertising as well as actual delivery, and applies to any entity type, not just licensed professionals.
(b) An individual, corporation, or entity may not provide, advertise, or otherwise offer therapy or psychotherapy services, including through the use of Internet-based artificial intelligence, to the public in this state unless the therapy or psychotherapy services are conducted by an individual who is a licensed professional.
Pending 2026-01-28
HC-02.1, HC-02.2
R.I. Gen. Laws § 40.1-5.5-3(c)
Plain Language
Licensed professionals may only use AI for administrative and supplementary support tasks and must maintain full responsibility for all interactions, outputs, and data use. AI is categorically prohibited from: (1) making independent therapeutic decisions, (2) directly interacting with clients in any form of therapeutic communication (broadly defined to include emotional support, guidance, treatment planning collaboration, and behavioral feedback), (3) generating therapeutic recommendations or treatment plans, and (4) detecting emotions or mental states. This is one of the most restrictive AI-in-therapy provisions in the country — it effectively bars any client-facing AI use in a clinical mental health setting and prohibits AI emotion detection entirely within the licensed professional's practice.
(c) A licensed professional may use artificial intelligence only to the extent that such use meets the requirements of subsection (a) of this section. A licensed professional may not allow or otherwise use artificial intelligence to do any of the following: (1) Make independent therapeutic decisions; (2) Directly interact with clients in any form of therapeutic communication; (3) Generate therapeutic recommendations or treatment plans; or (4) Detect emotions or mental states.
Pending 2026-01-23
HC-02.4
R.I. Gen. Laws § 40.1-5.5-3(a)(1)-(3)
Plain Language
Before using AI to record or transcribe a client's therapeutic session — specifically AI designed to simulate emotional attachment, bonding, or dependency, or AI companions for mental health support — the licensed professional must inform the patient (or their parent, guardian, or authorized representative) in writing that AI will be used and what its specific purpose is, and must obtain affirmative written consent that is revocable. Consent cannot be buried in general terms of use or obtained through deceptive actions. This consent requirement is a prerequisite — AI may not be used for session recording or transcription without it.
(a) No licensed professional shall be permitted to use artificial intelligence, designed to simulate emotional attachment, bonding, or dependency or artificial intelligence companions for mental health or emotional support, to assist in providing supplementary support in therapy or psychotherapy services where the client's therapeutic session is recorded or transcribed unless the patient or the patient's parent, guardian or other legally authorized representative is informed in writing of the following: (1) That artificial intelligence will be used; (2) The specific purpose of the artificial intelligence tool or system that will be used; and (3) The patient or the patient's parents, or other legally authorized representative, provides consent to the use of artificial intelligence.
Pending 2026-01-23
HC-02.3
R.I. Gen. Laws § 40.1-5.5-3(b)
Plain Language
No individual, corporation, or entity may provide, advertise, or offer therapy or psychotherapy services to the public in Rhode Island — including through internet-based AI tools — unless those services are actually conducted by a state-licensed mental health professional. This effectively prohibits standalone AI therapy products that operate without a licensed professional conducting the services. The prohibition extends to advertising such services, not just delivering them. Religious counseling, peer support, and self-help materials are excluded from the definition of therapy or psychotherapy services and thus not affected by this prohibition.
(b) An individual, corporation, or entity may not provide, advertise, or otherwise offer therapy or psychotherapy services, including through the use of Internet-based artificial intelligence, to the public in this state unless the therapy or psychotherapy services are conducted by an individual who is a licensed professional.
Pending 2026-01-23
HC-02.1, HC-02.2
R.I. Gen. Laws § 40.1-5.5-3(c)(1)-(4)
Plain Language
Licensed professionals may use AI only for administrative and supplementary support tasks — and even then, only when the professional maintains full responsibility for all interactions, outputs, and data use. AI is categorically prohibited from: (1) making independent therapeutic decisions, (2) directly interacting with clients in any form of therapeutic communication (which is broadly defined to include emotional support, guidance, treatment planning collaboration, and behavioral feedback), (3) generating therapeutic recommendations or treatment plans, and (4) detecting emotions or mental states. This effectively confines AI in mental health practice to back-office functions like scheduling, billing, record preparation, and anonymized data analysis — any client-facing therapeutic function is off-limits for AI.
(c) A licensed professional may use artificial intelligence only to the extent that such use meets the requirements of subsection (a) of this section. A licensed professional may not allow or otherwise use artificial intelligence to do any of the following: (1) Make independent therapeutic decisions; (2) Directly interact with clients in any form of therapeutic communication; (3) Generate therapeutic recommendations or treatment plans; or (4) Detect emotions or mental states.
Pending
HC-02.4
S.C. Code § 40-1-710
Plain Language
Before a licensed therapist or psychotherapist may use AI to record or transcribe a client's therapeutic session for supplementary support purposes, the professional must provide written notice to the patient (or their legally authorized representative) disclosing both that AI will be used and the specific purpose of its use. The patient must then provide written consent. Without both written notice and written consent, AI-assisted recording or transcription of therapeutic sessions is prohibited. This applies only to supplementary support involving recording or transcription — administrative support tasks that do not involve recording sessions are not covered by this provision.
A licensed professional shall not be permitted to use artificial intelligence to assist in providing supplementary support in therapy or psychotherapy where the client's therapeutic session is recorded or transcribed unless: (1) the patient or the patient's legally authorized representative is informed in writing: (a) that artificial intelligence will be used; and (b) of the specific purpose for which the artificial intelligence tool or system will be used; and (2) the patient or the patient's legally authorized representative provides written consent to the use of artificial intelligence.
Pending
HC-02.3
S.C. Code § 40-1-720(A)
Plain Language
No individual, corporation, or entity may offer, advertise, or provide therapy or psychotherapy services in South Carolina — including through internet-based AI — unless those services are conducted by a state-licensed professional. This effectively prohibits standalone AI therapy products that operate without a licensed professional conducting the services. The prohibition extends to advertising such services, not merely delivering them. Religious counseling and peer support are carved out by the definitions section.
(A) An individual, corporation, or entity may not provide, advertise, or otherwise offer therapy or psychotherapy services, including through the use of internet-based artificial intelligence, to the public in this State unless the therapy or psychotherapy services are conducted by a licensed professional.
Pending
HC-02.1, HC-02.2
S.C. Code § 40-1-720(B)
Plain Language
Licensed professionals face four categorical prohibitions on AI use in therapy: (1) AI may not make independent therapeutic decisions; (2) AI may not directly interact with clients in any form of therapeutic communication — a broadly defined term covering virtually any clinical interaction intended to diagnose, treat, or address mental health; (3) AI may not generate treatment plans or therapeutic recommendations without the licensed professional's review and approval; and (4) AI may not detect emotions or mental states. The overall framing limits AI use to administrative and supplementary support roles, with the licensed professional maintaining full responsibility for all AI outputs. The emotion detection prohibition is notably absolute — it applies regardless of whether the professional reviews the output.
(B) A licensed professional may use artificial intelligence only to the extent the use meets the requirements of Section 40-1-710. A licensed professional may not allow artificial intelligence to: (1) make independent therapeutic decisions; (2) directly interact with clients in any form of therapeutic communication; (3) generate therapeutic recommendations or treatment plans without review and approval by the licensed professional; or (4) detect emotions or mental states.
Passed 2025-09-01
HC-02.1
Health & Safety Code § 183.005(a)
Plain Language
Health care practitioners are permitted to use AI for diagnostic purposes — including AI-generated diagnosis recommendations and treatment course suggestions based on patient records — subject to three conditions: (1) the practitioner must be acting within the scope of their professional license; (2) the specific AI use must not be otherwise prohibited by law; and (3) the practitioner must review all AI-generated records in accordance with Texas Medical Board medical records standards. This establishes an affirmative authorization framework with a mandatory human review requirement — practitioners bear full responsibility for reviewing AI outputs before relying on them.
A health care practitioner may use artificial intelligence for diagnostic purposes, including the use of artificial intelligence for recommendations on a diagnosis or course of treatment based on a patient's medical record, if: (1) the practitioner is acting within the scope of the practitioner's license, certification, or other authorization to provide health care services in this state, regardless of the use of artificial intelligence; (2) the particular use of artificial intelligence is not otherwise restricted or prohibited by state or federal law; and (3) the practitioner reviews all records created with artificial intelligence in a manner that is consistent with medical records standards developed by the Texas Medical Board.
Failed 2025-07-01
HC-02.1
§ 54.1-2400.1:1(B) (first paragraph)
Plain Language
Mental health service providers are permitted to use AI systems to assist in therapy or counseling, but only if the provider maintains full responsibility for all interactions, outputs, and data use associated with the AI system. This is a gatekeeping condition — AI-assisted therapy is lawful only if the provider accepts and maintains complete professional accountability. The provider cannot delegate responsibility to the AI system or its developer.
A mental health service provider may use an artificial intelligence system to assist in providing therapy or counseling services if such mental health service provider maintains full responsibility for all interactions, outputs, and data use associated with the system.
Failed 2025-07-01
HC-02.4
§ 54.1-2400.1:1(B) (second paragraph, subdivisions 1–2)
Plain Language
When a mental health service provider uses an AI system to record or transcribe a therapy or counseling session, two prerequisites must be met: (1) the provider must disclose to the patient (or their legally authorized representative) that AI is being used and the specific purpose of that use, and (2) at the initial appointment, the provider must disclose its AI use policies and obtain written or digital consent. If the provider later changes its AI policies, it must notify any patient who previously consented. Unlike some jurisdictions that require a 24-hour advance consent window, Virginia requires consent at the initial appointment but does not specify an advance notice period. The consent appears to be ongoing — the statute does not explicitly provide for revocability, though policy change notification is required.
No licensed professional shall be permitted to use an artificial intelligence system in providing therapy or counseling services pursuant to this subsection when the session is recorded or transcribed unless:
1. The mental health service provider discloses to the patient or the patient's legally authorized representative (i) that an artificial intelligence system will be used and (ii) the specific purpose of the artificial intelligence system that will be used; and
2. At the initial appointment, the mental health service provider discloses its artificial intelligence system use and policies related to such use and the patient or the patient's legally authorized representative provides written or digital consent to the use of an artificial intelligence system as permitted by this section. The mental health service provider shall give notice of any change in its policies related to the use of an artificial intelligence system to any patient or such patient's legally authorized representative who has consented to the use of an artificial intelligence system pursuant to this subdivision.
Failed 2025-07-01
HC-02.3
§ 54.1-2400.1:1(C)
Plain Language
No person or business entity may provide, advertise, or offer therapy or counseling services to the Virginia public through AI unless those services are conducted by a licensed mental health service provider. This effectively prohibits AI-only therapy products — a companion chatbot or AI therapy app cannot offer therapy or counseling in Virginia unless a licensed provider is conducting the services. The prohibition extends to advertising and offering, not just actual delivery, broadening the reach to marketing of AI therapy products.
A person or business entity may not provide, advertise, or otherwise offer therapy or counseling services, including through the use of an artificial intelligence system, to the public in the Commonwealth unless the therapy or counseling services are conducted by a mental health service provider.
Failed 2025-07-01
HC-02.2
§ 54.1-2400.1:1(D)
Plain Language
Mental health service providers face three categorical prohibitions on AI use: (1) AI may not make independent therapeutic decisions; (2) AI may not directly interact with clients in any form of therapeutic communication without provider oversight; and (3) AI may not generate therapeutic recommendations, diagnose, or implement treatment plans without the licensed professional's review, oversight, and approval. The definition of therapeutic communication is broad — it covers not only direct clinical interactions but also emotional support, empathy in response to distress (including suicidal ideation), treatment plan collaboration, and behavioral feedback. This effectively means an AI system cannot autonomously deliver any aspect of therapy, even supportive or empathic responses, without provider oversight.
A mental health service provider may use an artificial intelligence system only to the extent that such use meets the requirements of subsection B. No mental health service provider shall allow an artificial intelligence system to:
1. Make independent therapeutic decisions;
2. Directly interact with clients in any form of therapeutic communication without provider oversight; or
3. Generate therapeutic recommendations, diagnose, or implement treatment plans without review, oversight, and approval by the licensed professional.
Pending 2026-01-01
HC-02.3
18 V.S.A. § 7115(b)
Plain Language
No person, corporation, or entity may offer, provide, or advertise mental health services in Vermont that use AI in any capacity — whether in whole or in part — unless the use falls within the narrow exceptions available to licensed mental health professionals under 26 V.S.A. § 7101 (administrative support with professional review, and transcription with prior written consent). This is a broad prohibition that applies to all entities, not just licensed professionals. An unlicensed entity operating an AI therapy chatbot in Vermont would violate this section with no available exception. Violations are deemed Consumer Protection Act violations carrying penalties of $10,000 per violation.
(b) A person, corporation, or entity shall not offer, provide, or advertise mental health services in the State that use artificial intelligence in whole or in part, except as authorized pursuant to 26 V.S.A. § 7101.
Pending 2026-01-01
HC-02.1
26 V.S.A. § 7101(b)(1)
Plain Language
Mental health professionals may use AI for administrative support tasks — scheduling, billing, drafting logistics communications, maintaining clinical records, analyzing deidentified data, and organizing referrals — but only if the professional personally reviews and assumes full responsibility for all tasks performed, outputs generated, and data use associated with the AI system. Administrative support is explicitly defined to exclude therapeutic communication, so any AI output that crosses into diagnosis, treatment advice, emotional support, or therapeutic interaction falls outside this permission. The professional's review obligation is not passive — they must affirmatively verify and take ownership of every AI output before it is acted upon.
(b) Permitted uses. (1) A mental health professional may use artificial intelligence for administrative support to the extent that the professional reviews and assumes responsibility for all tasks performed by, outputs created by, and data use associated with the artificial intelligence system employed.
Pending 2026-01-01
HC-02.4
26 V.S.A. § 7101(b)(2)
Plain Language
Before using AI for transcription or recording in a therapeutic session, the mental health professional must: (1) provide the patient, client, or their legal guardian a written notice identifying the specific purpose of the AI use and informing them that any AI-generated transcription or recording is subject to existing confidentiality protections under 18 V.S.A. §§ 1881 and 7103; and (2) obtain the patient's or guardian's written, informed, revocable consent. Both steps must occur before the AI transcription begins. Consent must be explicit, affirmative, and in writing — implied consent or oral agreement is insufficient. The consent is revocable at any time.
(2) If a mental health professional uses artificial intelligence for transcription and recording purposes, the mental health professional shall first: (A) inform the patient or client, or the patient's or client's legal guardian, in writing of the specific purpose for which artificial intelligence is being used and that any transcription or recording performed by artificial intelligence shall be subject to the disclosure prohibitions in subsection (c) of this section; and (B) obtain consent from the patient or client, or the patient's or client's legal guardian.
Pending 2026-01-01
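For deployers building tooling around the notice-and-consent prerequisite described in 26 V.S.A. § 7101(b)(2), the gate can be sketched as a simple state check before any AI transcription is enabled. This is a hypothetical illustration only — the class and field names are invented for the sketch, not statutory terms — showing the two prerequisites (written notice of the specific purpose, then written consent) and revocability:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

# Hypothetical consent record for AI-assisted transcription;
# field names are illustrative, not drawn from the statute.
@dataclass
class TranscriptionConsent:
    written_notice_given: bool = False        # notice states that AI will be used
    stated_purpose: str = ""                  # specific purpose disclosed in writing
    written_consent: bool = False             # affirmative written consent obtained
    revoked_at: Optional[datetime] = None     # consent is revocable at any time

    def transcription_permitted(self) -> bool:
        """Both prerequisites must be satisfied, and consent not revoked,
        before AI transcription or recording begins."""
        return (
            self.written_notice_given
            and bool(self.stated_purpose)
            and self.written_consent
            and self.revoked_at is None
        )

consent = TranscriptionConsent()
assert not consent.transcription_permitted()  # no notice or consent yet

consent.written_notice_given = True
consent.stated_purpose = "session transcription for clinical record preparation"
consent.written_consent = True
assert consent.transcription_permitted()

consent.revoked_at = datetime.now()
assert not consent.transcription_permitted()  # revocation disables transcription
```

The check is deliberately conjunctive: under the "shall first" language, a missing notice, a missing purpose disclosure, or a revocation each independently blocks AI use.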
HC-02.2
26 V.S.A. § 7101(d)(1)
Plain Language
Licensed mental health professionals are categorically prohibited from using AI to make therapeutic decisions, issue direct therapeutic communications, generate treatment plans or recommendations, or detect or interpret emotions or mental states. They are also prohibited from offering, providing, or advertising any mental health services that use AI, except for the narrow administrative support and transcription permissions in subsection (b). This is a bright-line prohibition — there is no safe harbor for AI-assisted therapeutic work even with professional oversight. The only AI uses permitted for mental health professionals are administrative support (with full professional review) and transcription (with prior written informed consent). Violations constitute unprofessional conduct under 3 V.S.A. § 129a, subjecting the professional to licensing discipline.
(d) Prohibited uses. A mental health professional shall neither: (1) use artificial intelligence in the State to make therapeutic decisions, issue direct therapeutic communications, generate treatment plans or recommendations, or detect or interpret emotions or mental states; nor (2) offer, provide, or advertise mental health services in the State that use artificial intelligence in whole or in part, except as provided in subsection (b) of this section.
Passed 2026-01-01
HC-02.3
18 V.S.A. § 7115(b)
Plain Language
No person, corporation, or other entity may offer, provide, or advertise mental health services in Vermont that represent AI as providing therapeutic judgment, diagnosis, treatment, or therapeutic communication. This is a broad prohibition that applies to any entity — not just licensed professionals — and effectively bars AI-as-therapist products from the Vermont market. The carve-out permits AI use for administrative, documentation, operational, or quality-improvement purposes, but only where a mental health professional retains clinical responsibility under 26 V.S.A. § 7101.
(b) A person, corporation, or other entity shall not offer, provide, or advertise mental health services in the State that represent artificial intelligence as providing therapeutic judgment, diagnosis, treatment, or therapeutic communication. Nothing in this subsection shall prohibit the use or disclosure of the use of artificial intelligence for administrative, documentation, operational, or quality-improvement purposes when a mental health professional retains clinical responsibility as authorized pursuant to 26 V.S.A. § 7101.
Passed 2026-01-01
HC-02.1, HC-02.2
26 V.S.A. § 7101(b)
Plain Language
Mental health professionals may use AI for administrative support (scheduling, billing, claims processing), supplementary support (clinical record preparation, deidentified data analysis, workforce planning), and operational or quality-improvement functions — but only if the professional retains sole responsibility for all therapeutic decisions. The professional must review, modify where necessary, and approve the final product of any AI-assisted work. Notably, clinical decision support tools like algorithmic risk scoring and data analytics are permitted when used under professional supervision, because the definition of 'therapeutic decision' expressly excludes them. This provision establishes a permissive envelope around non-therapeutic AI uses while reinforcing that the human professional must remain the decision-maker.
(b) Permitted uses. A mental health professional may use artificial intelligence systems for administrative support, supplementary support, and operational or quality-improvement functions, provided the professional retains sole responsibility for therapeutic decisions. Permitted uses include scheduling, billing, coding, and claims processing; transcription and documentation support; preparation and maintenance of clinical records; deidentified data analysis for quality improvement; and workforce and capacity planning where the mental health professional reviews, modifies where necessary, and approves the final product.
Passed 2026-01-01
HC-02.4
26 V.S.A. § 7101(c)(1)-(2)
Plain Language
All AI-assisted administrative and supplementary support tasks — including transcription and recording — are subject to Vermont's existing mental health confidentiality protections (18 V.S.A. §§ 1881 and 7103). When AI is used to record identifiable therapeutic communications, the patient or client must provide written, informed, voluntary, and revocable consent. Consent cannot be obtained through broad terms-of-use agreements, passive acceptance, or deceptive practices. Consent is not required for AI use in administrative tasks or deidentified operational uses. This is a narrower consent requirement than some jurisdictions that require consent for any AI use in therapeutic settings — here it is triggered specifically by AI recording of identifiable therapeutic communications.
(c) Confidentiality and consent. (1) Any administrative support or supplementary support tasks conducted using artificial intelligence, including transcription and recording, shall be subject to the disclosure prohibitions in 18 V.S.A. §§ 1881 and 7103. (2) Consent by a patient or client is required when artificial intelligence is used to record identifiable therapeutic communications.
Passed 2026-01-01
HC-02.2
26 V.S.A. § 7101(d)(1)-(2)
Plain Language
Mental health professionals are prohibited from using AI in any way that allows the AI to independently make therapeutic decisions, independently diagnose, independently determine treatment, or independently generate treatment plans. The word 'independently' is critical — AI may assist with all of these functions so long as the professional retains final decision-making authority. Clinical decision support tools (algorithmic risk scoring, data analytics) are expressly excluded from the definition of 'therapeutic decision' and therefore not prohibited when used under professional supervision. The savings clause confirms that professionals may freely disclose their use of AI for permitted administrative or supplementary support purposes.
(d) Prohibited uses. (1) A mental health professional shall not use artificial intelligence in a manner that allows the artificial intelligence to independently make therapeutic decisions, independently diagnose, independently determine treatment, or independently generate treatment plans. (2) Nothing in this subsection shall prohibit a mental health professional from disclosing or describing the mental health professional's use of artificial intelligence for administrative support or supplementary support purposes to a prospective, current, or former patient or client.
Pre-filed 2026-01-01
HC-02.3
18 V.S.A. § 7115(b)
Plain Language
No person, corporation, or entity may offer, provide, or advertise mental health services in Vermont that use AI in any way — whether in whole or in part — unless the use falls within the narrow exceptions available to licensed mental health professionals under 26 V.S.A. § 7101 (administrative support with professional review, and transcription with written consent). This is a near-categorical ban on AI-delivered mental health services in Vermont. The definition of 'mental health services' is extremely broad, encompassing peer support, counseling, therapy, psychotherapy, therapeutic decisions, treatment plans, emotion detection, and ongoing recovery support. Notably, the § 7115 prohibition applies to any person or entity — not just licensed professionals — meaning AI companies, platforms, and chatbot operators offering AI therapy or mental health chatbots to Vermont users are covered.
(b) A person, corporation, or entity shall not offer, provide, or advertise mental health services in the State that use artificial intelligence in whole or in part, except as authorized pursuant to 26 V.S.A. § 7101.
Pre-filed 2026-01-01
HC-02.2
26 V.S.A. § 7101(d)
Plain Language
Licensed mental health professionals in Vermont are categorically prohibited from using AI to make therapeutic decisions, communicate directly with patients in a therapeutic context, generate treatment plans or recommendations, or detect or interpret emotions or mental states. They are further prohibited from offering, providing, or advertising any mental health services using AI in whole or in part, except for the narrow administrative support and transcription uses permitted under subsection (b). This means a therapist cannot, for example, use an AI tool to draft therapeutic responses, generate a treatment plan for review, or deploy emotion-recognition software during sessions — even if the professional plans to review the output before acting on it. The only permitted AI uses are purely administrative tasks and transcription with consent.
(d) Prohibited uses. A mental health professional shall neither: (1) use artificial intelligence in the State to make therapeutic decisions, issue direct therapeutic communications, generate treatment plans or recommendations, or detect or interpret emotions or mental states; nor (2) offer, provide, or advertise mental health services in the State that use artificial intelligence in whole or in part, except as provided in subsection (b) of this section.
Pre-filed 2026-01-01
HC-02.1
26 V.S.A. § 7101(b)(1)
Plain Language
Mental health professionals may use AI for administrative support tasks — scheduling, billing, logistics communications, clinical records, deidentified data analysis, and resource organization — but only if the professional reviews and assumes full responsibility for all AI tasks, outputs, and data use. This is the primary carve-out from the otherwise categorical prohibition on AI use in mental health services. The professional responsibility obligation is absolute: there is no delegation of oversight to another staff member or automated quality check. Note that 'administrative support' is explicitly defined to exclude therapeutic communication, so any AI output that crosses into therapeutic territory — even indirectly — falls outside this safe harbor.
(b) Permitted uses. (1) A mental health professional may use artificial intelligence for administrative support to the extent that the professional reviews and assumes responsibility for all tasks performed by, outputs created by, and data use associated with the artificial intelligence system employed.
Pre-filed 2026-01-01
HC-02.4
26 V.S.A. § 7101(b)(2)
Plain Language
Before using AI for transcription or recording of therapeutic sessions, the mental health professional must: (1) provide written notice to the patient, client, or their legal guardian specifying the purpose of the AI use and informing them that transcriptions and recordings remain subject to confidentiality protections; and (2) obtain explicit, affirmative, written, informed consent, which is revocable at any time. Both steps must be completed before the AI is used, not contemporaneously — the 'shall first' language requires completion before AI use begins. Unlike some jurisdictions, Vermont does not specify a 24-hour advance notice requirement.
(2) If a mental health professional uses artificial intelligence for transcription and recording purposes, the mental health professional shall first: (A) inform the patient or client, or the patient's or client's legal guardian, in writing of the specific purpose for which artificial intelligence is being used and that any transcription or recording performed by artificial intelligence shall be subject to the disclosure prohibitions in subsection (c) of this section; and (B) obtain consent from the patient or client, or the patient's or client's legal guardian.
Pre-filed 2026-01-01
26 V.S.A. § 7101(c)
Plain Language
All AI-generated outputs from administrative support tasks — including transcriptions and recordings — are subject to the same confidentiality protections as other mental health records under Vermont's mental health disclosure prohibition statutes (18 V.S.A. §§ 1881 and 7103). This means AI-processed clinical notes, deidentified data analyses, and transcriptions cannot be disclosed outside the protections of existing Vermont mental health privacy law. Practitioners must ensure their AI vendors and systems comply with these confidentiality requirements.
(c) Confidentiality. Any administrative support tasks conducted using artificial intelligence shall be subject to the disclosure prohibitions in 18 V.S.A. §§ 1881 and 7103, including transcription and recording.
Pending 2027-01-01
HC-02.1, HC-02.2
§33-57-2(b)
Plain Language
Operators and licensed professionals may use AI tools for administrative or supplementary support in therapy or psychotherapy, but must maintain full responsibility for all interactions, outputs, and data use associated with the AI system. Patient care decisions, reimbursement decisions, and claims adjudication may not be based exclusively on AI-generated information. This establishes both a scope-of-permitted-use boundary and a professional responsibility requirement — AI is permitted as a tool, not as a substitute for human professional judgment.
(b) An operator or licensed professional is permitted to use AI tools or systems to assist in providing administrative support or supplementary support in therapy or psychotherapy services with the operator or licensed professional maintaining full responsibility for all interactions, outputs and data use associated with the system and satisfies the requirements of this article. A decision for patient care, reimbursement or claims adjudication may not be based exclusively on AI-generated information.
Pending 2027-01-01
HC-02.4
§33-57-2(d)
Plain Language
Before using AI to record or transcribe a therapeutic session, the operator or licensed professional must inform the patient (or their legally authorized representative) in writing that AI will be used and disclose the specific purpose of the AI tool. The patient must then provide consent, which must be freely given, specific, informed, written, and revocable. Consent cannot be obtained through general terms of use, passive UI actions, or deceptive practices. This is a hard prerequisite — without written notice and affirmative consent, AI-assisted recording or transcription of therapy sessions is prohibited.
(d) No operator or licensed professional may be permitted to use artificial intelligence to assist in providing supplementary support in therapy or psychotherapy where the client's therapeutic session is recorded or transcribed unless: (1) The patient or the patient's legally authorized representative is informed in writing of the following: that artificial intelligence will be used; and the specific purpose of the artificial intelligence tool or system that will be used; and (2) The patient or the patient's legally authorized representative provides consent to the use of artificial intelligence.
Pending 2027-01-01
HC-02.3, HC-02.5
§33-57-2(e)
Plain Language
Therapy or psychotherapy services may be provided in West Virginia only by a licensed professional — AI alone cannot deliver these services, even via internet-based platforms. Additionally, no operator or licensed professional may design, market, or present an AI system in a way that would reasonably cause a person to believe the AI is a licensed professional or a crisis service. This is a dual prohibition: (1) a licensure gatekeeping requirement and (2) an anti-deception rule preventing AI systems from impersonating licensed professionals or crisis services.
(e) No operator or licensed professional may provide, advertise, or otherwise offer therapy or psychotherapy services, including through the use of Internet-based artificial intelligence, to the public in this state unless the therapy or psychotherapy services are conducted by an individual who is a licensed professional, and may not design, market or present any AI system that reasonably would cause a person to believe the AI system is a licensed professional or crisis service.
Pending 2027-01-01
HC-02.2
§33-57-2(f)
Plain Language
Peer support services, religious counseling services, and digital mental wellness services may not use AI to perform clinical functions — specifically diagnosing, developing or modifying treatment plans, conducting suicide or self-harm risk assessments, or otherwise providing therapy or psychotherapy services — unless a licensed professional approves. This prevents non-clinical services from using AI to cross into clinical territory without professional oversight, even though these services are generally exempt from the therapy/psychotherapy licensing requirement.
(f) Peer support services, religious counseling services and digital mental wellness services may not, through the use of artificial intelligence, diagnose, develop or modify treatment plans, conduct suicide or self-harm risk assessments, or otherwise provide therapy or psychotherapy services without the approval of a licensed professional.
Pending 2027-01-01
HC-02.2
§33-57-2(g)
Plain Language
Licensed professionals are prohibited from allowing AI to: (1) make independent therapeutic decisions; (2) directly interact with clients in therapeutic communication; (3) generate therapeutic recommendations or treatment plans without the professional's review and approval; or (4) detect emotions or mental states for diagnostic, therapeutic, or treatment purposes, or to target or manipulate a person's mental or emotional state. AI use is constrained to administrative and supplementary support as defined in subsection (b). This is a comprehensive enumeration of prohibited AI autonomous functions in the clinical mental health context — the professional must remain in the loop for all clinical activities.
(g) An operator or licensed professional may use artificial intelligence only to the extent the use meets the requirements of subsection (b). A licensed professional may not allow artificial intelligence to do any of the following: (1) Make independent therapeutic decisions; (2) Directly interact with clients in any form of therapeutic communication; (3) Generate therapeutic recommendations or treatment plans without review and approval by the licensed professional; or (4) Detect emotions or mental states for the purpose of making diagnostic, therapeutic, or treatment decisions, or for targeting or manipulating a person's mental or emotional state.
Pending 2027-01-01
HC-02.1
§33-57-2(h)
Plain Language
AI may be used to flag or triage communications indicating self-harm, suicide risk, or other acute safety concerns — but this is the only permitted autonomous AI function in this context. A licensed professional must promptly review all such flags and retains sole authority for clinical assessment and decision-making. This provision creates a narrow safe harbor for AI-assisted safety screening within the broader prohibition on AI clinical functions, while reinforcing that the licensed professional must remain the decision-maker for all clinical responses.
(h) An operator or licensed professional may use artificial intelligence solely to flag or triage communications that may indicate self-harm, suicide risk, or other acute safety concerns, provided that any such flags are promptly reviewed and addressed by a licensed professional who retains sole authority for clinical assessment and decision-making.
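The flag-and-triage pattern in subsection (h) can be sketched as a pipeline in which the AI component only flags possible safety concerns and every flag is routed to a licensed professional, who alone records the clinical assessment. This is a hypothetical illustration of the division of roles — the keyword screen, function names, and fields are invented for the sketch and do not implement any statutory standard:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical keyword screen; a real system would use a reviewed risk model.
RISK_TERMS = ("self-harm", "suicide", "overdose")

@dataclass
class Flag:
    message: str
    reviewed_by_professional: bool = False
    clinical_action: Optional[str] = None  # set only by the licensed professional

def ai_triage(message: str) -> Optional[Flag]:
    """The AI's entire permitted role: flag communications that may
    indicate acute safety concerns. It records no clinical decision."""
    if any(term in message.lower() for term in RISK_TERMS):
        return Flag(message=message)
    return None

def professional_review(flag: Flag, action: str) -> Flag:
    """Only the licensed professional, who retains sole authority for
    clinical assessment, records the clinical response."""
    flag.reviewed_by_professional = True
    flag.clinical_action = action
    return flag

flag = ai_triage("I have been thinking about suicide lately")
assert flag is not None and flag.clinical_action is None  # AI flags, decides nothing
flag = professional_review(flag, "same-day safety assessment scheduled")
assert flag.reviewed_by_professional
assert ai_triage("Can we reschedule to Tuesday?") is None  # no flag, no record
```

The design point mirrors the statute: the AI function terminates at flag creation, and the clinical-action field is writable only through the professional's review step.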