S-2632
MA · State · USA
Status: Pending
Proposed Effective Date
2025-10-08
An Act relative to the use of artificial intelligence and other software tools in healthcare decision-making
Summary

This bill regulates AI use in two healthcare contexts: therapy/psychotherapy services (Section 1) and health insurance utilization review (Section 2). Section 1 prohibits anyone from offering therapy or psychotherapy through AI unless conducted by a licensed professional, bars AI from making independent therapeutic decisions or directly interacting with clients in therapeutic communication, and requires written informed consent before AI is used to record or transcribe sessions. Section 2 requires carriers and utilization review organizations using AI tools for medical necessity determinations to base decisions on individualized patient clinical data, prohibits AI from supplanting provider decision-making or serving as the basis for denying care, mandates periodic review for accuracy, and requires the tool to be open to regulatory inspection. Section 1 is enforced by the Division of Occupational Licensure with civil penalties up to $10,000 per violation. Section 2 creates a private right of action for insureds with statutory damages up to $5,000 per violation, punitive damages, injunctive relief, and attorney's fees.

Enforcement & Penalties
Enforcement Authority
Section 1 (Chapter 112, § 298): The Division of Occupational Licensure has authority to investigate actual, alleged, or suspected violations and to assess civil penalties after a hearing. No private right of action is created under Section 1.
Section 2 (Chapter 176O, § 12(g)): Private right of action. An insured may bring a civil action against the party that commits a violation. The Division of Insurance and the Executive Office of Health and Human Services also have oversight and audit authority. A violation of Section 2 constitutes an injury to the insured as a matter of law — no separate proof of injury in fact is required for standing.
Penalties
Section 1 (Chapter 112, § 298): Civil penalties up to $10,000 per violation, assessed by the Division of Occupational Licensure after a hearing, based on degree of harm and circumstances.
Section 2 (Chapter 176O, § 12(g)(8)): Greater of actual damages or up to $5,000 per insured per violation; punitive damages; injunctive relief; and reasonable attorney's fees and litigation costs. A violation constitutes an injury to the insured as a matter of law, so statutory damages do not require proof of actual monetary harm.
Who Is Covered
"Licensed professional" means an individual who holds a valid license issued by this State to provide therapy or psychotherapy services, including: (1) a licensed clinical psychologist; (2) a licensed clinical social worker; (3) a licensed social worker; (4) a licensed professional counselor; (5) a licensed clinical professional counselor; (6) a licensed marriage and family therapist; (7) a certified alcohol and other drug counselor authorized to provide therapy or psychotherapy services; (8) a licensed professional music therapist; (9) a licensed advanced practice registered nurse; and (10) any other professional authorized by this State to provide therapy or psychotherapy services, except for a physician.
Compliance Obligations · 14 obligations
HC-02 AI in Licensed Professional Practice Restrictions · HC-02.1 · Professional · Healthcare
G.L. c. 112, § 298(b)
Plain Language
Licensed professionals may only use AI in therapy or psychotherapy for administrative support (scheduling, billing, logistics) or supplementary support (record-keeping, anonymized data analysis, resource organization) — never for therapeutic communication. The licensed professional must maintain full responsibility for all interactions, outputs, and data use associated with the AI system. This provision defines the permitted envelope of AI use and ties it to the consent requirements in subsection (c).
Statutory Text
(b) As used in this Section, "permitted use of artificial intelligence" means the use of artificial intelligence tools or systems by a licensed professional to assist in providing administrative support or supplementary support in therapy or psychotherapy services where the licensed professional maintains full responsibility for all interactions, outputs, and data use associated with the system and satisfies the requirements of subsection (c).
HC-02 AI in Licensed Professional Practice Restrictions · HC-02.4 · Professional · Healthcare
G.L. c. 112, § 298(c)
Plain Language
When a licensed professional uses AI to record or transcribe a therapy session, the professional must first inform the patient (or their legally authorized representative) in writing that AI will be used and explain the specific purpose of the AI tool. The patient must then provide consent that meets a high bar: it must be a clear, explicit, freely given, informed, written affirmative act and must be revocable. Buried-in-TOS consent, passive interactions like hovering or closing content, and deceptively obtained agreements do not qualify. This consent requirement applies only when sessions are recorded or transcribed — use of AI for other supplementary support tasks (e.g., organizing referrals) does not independently trigger this consent obligation.
Statutory Text
(c) No licensed professional shall be permitted to use artificial intelligence to assist in providing supplementary support in therapy or psychotherapy where the client's therapeutic session is recorded or transcribed unless: (1) the patient or the patient's legally authorized representative is informed in writing of the following: (A) that artificial intelligence will be used; and (B) the specific purpose of the artificial intelligence tool or system that will be used; and (2) the patient or the patient's legally authorized representative provides consent to the use of artificial intelligence.
HC-02 AI in Licensed Professional Practice Restrictions · HC-02.3 · Deployer · Professional · Healthcare
G.L. c. 112, § 298(d)
Plain Language
No person or entity may provide, advertise, or offer therapy or psychotherapy services in Massachusetts — including through internet-based AI — unless those services are conducted by a state-licensed professional. This effectively prohibits AI-only therapy products that lack a licensed professional conducting the services. Religious counseling and peer support are excluded from the definition of therapy or psychotherapy services and are therefore not subject to this restriction.
Statutory Text
(d) An individual, corporation, or entity may not provide, advertise, or otherwise offer therapy or psychotherapy services, including through the use of Internet-based artificial intelligence, to the public in this State unless the therapy or psychotherapy services are conducted by an individual who is a licensed professional.
HC-02 AI in Licensed Professional Practice Restrictions · HC-02.2 · Professional · Healthcare
G.L. c. 112, § 298(e)
Plain Language
Licensed professionals must not allow AI to: (1) make independent therapeutic decisions, (2) directly interact with clients in therapeutic communication (which is broadly defined to include emotional support, guidance, therapeutic strategies, and collaborative goal-setting), (3) generate treatment plans or therapeutic recommendations without the professional's review and approval, or (4) detect emotions or mental states. These are categorical prohibitions — there is no safe harbor or compliance pathway that would permit these uses. The emotion detection ban is notably broad, covering any use of AI to detect emotions or mental states in a therapeutic context.
Statutory Text
(e) A licensed professional may use artificial intelligence only to the extent the use meets the requirements of subsections (b) and (c). A licensed professional may not allow artificial intelligence to do any of the following: (1) make independent therapeutic decisions; (2) directly interact with clients in any form of therapeutic communication; (3) generate therapeutic recommendations or treatment plans without review and approval by the licensed professional; or (4) detect emotions or mental states.
HC-02 AI in Licensed Professional Practice Restrictions · Professional · Healthcare
G.L. c. 112, § 298(f)
Plain Language
All records maintained by a licensed professional and all communications between a patient and the professional are confidential and may not be disclosed except as required by law. This broadly covers any AI-processed records, transcriptions, and data generated through AI supplementary support — the confidentiality obligation extends to AI-generated outputs as well as traditional clinical records.
Statutory Text
(f) All records kept by a licensed professional and all communications between an individual seeking therapy or psychotherapy services and a licensed professional shall be confidential and shall not be disclosed except as required by law.
HC-01 Healthcare AI Decision Restrictions · HC-01.3 · Deployer · Healthcare
G.L. c. 176O, § 12(g)(1)(A)-(B)
Plain Language
AI tools used in utilization review must base their determinations on the individual insured's medical or clinical history, the individual clinical circumstances presented by the requesting provider, and other relevant information in the insured's clinical record. The tool may not base determinations solely on group-level datasets. This requires carriers and utilization review organizations to ensure their AI tools are configured to ingest and weigh individualized patient data, not merely statistical profiles or population-level models.
Statutory Text
(A) The artificial intelligence, algorithm, or other software tool bases its determination on the following information, as applicable: (i) An insured's medical or other clinical history. (ii) Individual clinical circumstances as presented by the requesting provider. (iii) Other relevant clinical information contained in the insured's medical or other clinical record. (B) The artificial intelligence, algorithm, or other software tool does not base its determination solely on a group dataset.
HC-01 Healthcare AI Decision Restrictions · HC-01.1 · HC-01.2 · Deployer · Healthcare
G.L. c. 176O, § 12(g)(1)(D), (g)(2)
Plain Language
AI tools may not supplant healthcare provider decision-making, and — critically — may not deny, delay, or modify healthcare services based on medical necessity at all. Medical necessity determinations must be made exclusively by a licensed physician or a licensed healthcare professional competent to evaluate the specific clinical issues at hand, who must review the requesting provider's recommendation and the insured's individual medical history and clinical circumstances. This is a stronger prohibition than many comparable state laws: it bars AI from making any medical necessity determination, not merely from being the sole or primary basis.
Statutory Text
(D) The artificial intelligence, algorithm, or other software tool does not supplant health care provider decision-making. (2) Notwithstanding paragraph (1), the artificial intelligence, algorithm, or other software tool shall not deny, delay, or modify health care services based, in whole or in part, on medical necessity. A determination of medical necessity shall be made only by a licensed physician or a licensed health care professional competent to evaluate the specific clinical issues involved in the health care services requested by the provider, as provided in subsection (a), by reviewing and considering the requesting provider's recommendation, the insured's medical or other clinical history, as applicable, and individual clinical circumstances.
H-02 Non-Discrimination & Bias Assessment · Deployer · Healthcare
G.L. c. 176O, § 12(g)(1)(E)-(F)
Plain Language
Carriers and utilization review organizations must ensure that AI tools used in utilization review do not discriminate directly or indirectly against any insured in violation of state or federal law, including Massachusetts anti-discrimination law (Chapter 151B). The tools must also be applied fairly and equitably, consistent with applicable state and federal agency regulations and guidance. This imposes both a non-discrimination obligation and an affirmative fairness standard, though it does not specify testing methodology or audit requirements.
Statutory Text
(E) The use of the artificial intelligence, algorithm, or other software tool does not discriminate, directly or indirectly, against any insured in violation of state or federal law, including but not limited to chapter 151B. (F) The artificial intelligence, algorithm, or other software tool is fairly and equitably applied, including in accordance with any applicable regulations and guidance issued by state and federal agencies.
HC-01 Healthcare AI Decision Restrictions · HC-01.7 · Deployer · Healthcare
G.L. c. 176O, § 12(g)(1)(G)
Plain Language
AI tools used in utilization review must be made available for inspection, audit, and compliance review by the Division of Insurance and the Executive Office of Health and Human Services. This is a regulatory access obligation — carriers must ensure their AI tools (including third-party vendor tools) are subject to regulatory examination on demand under applicable state and federal law.
Statutory Text
(G) The artificial intelligence, algorithm, or other software tool is open to inspection for audit or compliance reviews by the division and by the executive office of health and human services pursuant to applicable state and federal law.
HC-01 Healthcare AI Decision Restrictions · HC-01.6 · Deployer · Healthcare
G.L. c. 176O, § 12(g)(1)(H)
Plain Language
Carriers and utilization review organizations must include disclosures about the use and oversight of AI tools in their written utilization review policies and procedures, as already required under existing subsection (a) of Section 12. This effectively extends the existing policy-documentation requirement to cover AI-specific disclosures, ensuring that enrollees, providers, and regulators can identify when and how AI tools are involved in utilization review.
Statutory Text
(H) Disclosures pertaining to the use and oversight of the artificial intelligence, algorithm, or other software tool are contained in the written policies and procedures, as required by subsection (a).
HC-01 Healthcare AI Decision Restrictions · HC-01.4 · Deployer · Healthcare
G.L. c. 176O, § 12(g)(1)(I)
Plain Language
Carriers and utilization review organizations must periodically review and revise the performance, use, and outcomes of AI tools used in utilization review to maximize accuracy and reliability. This is a continuing obligation — not a one-time pre-deployment check — and requires ongoing operational monitoring and improvement of the AI system.
Statutory Text
(I) The artificial intelligence, algorithm, or other software tool's performance, use, and outcomes are periodically reviewed and revised to maximize accuracy and reliability.
HC-01 Healthcare AI Decision Restrictions · HC-01.5 · Deployer · Healthcare
G.L. c. 176O, § 12(g)(1)(J)
Plain Language
Patient data used by AI tools in utilization review must not be used beyond its intended and stated purpose, and all use must be consistent with state and federal law (including HIPAA and Massachusetts health privacy law). This is a purpose-limitation obligation that restricts secondary uses of patient data processed by AI systems in the utilization review context.
Statutory Text
(J) Patient data is not used beyond its intended and stated purpose, and consistent with state and federal law.
Other · Healthcare
G.L. c. 176O, § 12(g)(1)(K)
Plain Language
AI tools used in utilization review must not directly or indirectly cause harm to the insured. This is a general harm-avoidance mandate without further specification — it likely serves as a catch-all liability provision rather than a prescriptive compliance obligation. In practice, compliance with the specific obligations in (A)–(J) would substantially address this requirement, but (K) preserves a separate basis for enforcement if an AI tool causes harm through a mechanism not specifically addressed elsewhere in the subsection.
Statutory Text
(K) The artificial intelligence, algorithm, or other software tool does not directly or indirectly cause harm to the insured.
Other · Healthcare
G.L. c. 176O, § 12(g)(1)(C), (g)(5)
Plain Language
Carriers must ensure AI tools comply with Chapter 176O and all applicable state and federal law, and health benefit plans must comply with applicable state and federal rules and guidance on AI. The Division and EOHHS may issue implementing guidance within one year of federal or state rulemaking. This creates no new standalone obligation — it is a compliance pass-through and a delegation of future rulemaking authority.
Statutory Text
(C) The artificial intelligence, algorithm, or other software tool's criteria and guidelines comply with this chapter and applicable state and federal law. (5) A health benefit plan subject to this subsection shall comply with applicable state and federal rules and guidance regarding the use of artificial intelligence, algorithm, or other software tools. The division and the executive office of health and human services may issue guidance to implement this paragraph within one year of the adoption of state or federal rules or the issuance of guidance by the federal Department of Health and Human Services regarding the use of artificial intelligence, algorithm, or other software tools. Such guidance shall not be subject to chapter 30A.