S-2632
MA · State · USA
● Pending
Proposed Effective Date
2025-10-08
An Act relative to the use of artificial intelligence and other software tools in healthcare decision-making
Summary

This bill has two main parts. Section 1 adds a new section to Chapter 112 restricting how licensed mental health professionals may use AI in therapy and psychotherapy: AI is limited to administrative and supplementary support roles, may not independently interact with clients therapeutically or detect emotions, and use for recording or transcribing sessions requires written informed consent. No person or entity may offer therapy through AI unless conducted by a licensed professional. Violations carry civil penalties up to $10,000 per violation assessed by the Division of Occupational Licensure.

Section 2 amends Chapter 176O to regulate carriers and utilization review organizations using AI for medical necessity determinations: AI tools must use individualized clinical data, may not rely solely on group datasets, may not supplant physician decision-making, and must be open to regulatory audit. Section 2 creates a private right of action for insureds with damages up to $5,000 per violation, punitive damages, injunctive relief, and attorney's fees.

Enforcement & Penalties
Enforcement Authority
Section 1 (therapy/psychotherapy provisions): The Division of Occupational Licensure has authority to investigate actual, alleged, or suspected violations and to assess civil penalties after a hearing. No private right of action is created under Section 1.

Section 2 (utilization review provisions): Private right of action. An insured who is injured by a violation may bring a civil action against the violating party. The statute declares that a violation constitutes an injury to the insured, effectively eliminating the need for separate proof of standing. The Division of Insurance and the Executive Office of Health and Human Services also have regulatory oversight authority, including audit and compliance review powers.
Penalties
Section 1: Civil penalty not to exceed $10,000 per violation, assessed by the Division of Occupational Licensure after a hearing, based on the degree of harm and the circumstances. The penalty must be paid within 60 days; the order constitutes a judgment enforceable as a court judgment.

Section 2: The greater of actual damages or up to $5,000 per insured per violation; punitive damages; injunctive relief; and reasonable attorney's fees and litigation costs. The statute declares that a violation constitutes an injury to the insured, so statutory damages do not require independent proof of actual monetary harm.
Who Is Covered
"Licensed professional" means an individual who holds a valid license issued by this State to provide therapy or psychotherapy services, including: (1) a licensed clinical psychologist; (2) a licensed clinical social worker; (3) a licensed social worker; (4) a licensed professional counselor; (5) a licensed clinical professional counselor; (6) a licensed marriage and family therapist; (7) a certified alcohol and other drug counselor authorized to provide therapy or psychotherapy services; (8) a licensed professional music therapist; (9) a licensed advanced practice registered nurse; and (10) any other professional authorized by this State to provide therapy or psychotherapy services, except for a physician.
Compliance Obligations · 12 obligations
HC-02 AI in Licensed Professional Practice Restrictions · HC-02.1 · HC-02.2 · Professional · Healthcare
G.L. c. 112, § 298(b), (e)
Plain Language
Licensed mental health professionals may use AI only for administrative support (scheduling, billing, logistics) and supplementary support (records, anonymized data analysis, resource organization) — never for therapeutic communication. The professional must maintain full responsibility for all AI interactions, outputs, and data use. AI is categorically prohibited from: making independent therapeutic decisions, directly interacting with clients therapeutically, generating treatment plans without the professional's review and approval, or detecting emotions or mental states. The definitions of administrative support, supplementary support, and therapeutic communication are detailed and draw a bright-line boundary: if a task involves any clinical interaction or emotional engagement with a client, AI cannot perform it.
Statutory Text
(b) As used in this Section, "permitted use of artificial intelligence" means the use of artificial intelligence tools or systems by a licensed professional to assist in providing administrative support or supplementary support in therapy or psychotherapy services where the licensed professional maintains full responsibility for all interactions, outputs, and data use associated with the system and satisfies the requirements of subsection (c). (e) A licensed professional may use artificial intelligence only to the extent the use meets the requirements of subsections (b) and (c). A licensed professional may not allow artificial intelligence to do any of the following: (1) make independent therapeutic decisions; (2) directly interact with clients in any form of therapeutic communication; (3) generate therapeutic recommendations or treatment plans without review and approval by the licensed professional; or (4) detect emotions or mental states.
HC-02 AI in Licensed Professional Practice Restrictions · HC-02.4 · Professional · Healthcare
G.L. c. 112, § 298(c)
Plain Language
Before using AI to record or transcribe a therapy session, the licensed professional must provide the patient (or their legally authorized representative) written notice that AI will be used and explain the specific purpose of the AI tool. The patient must then provide consent — which is defined strictly as a clear, explicit, affirmative written agreement that is freely given, informed, and revocable. Acceptance buried in a general terms-of-use agreement, passive UI interactions such as hovering over or closing content, and deceptive tactics do not constitute valid consent. This obligation applies specifically when AI is used for supplementary support involving session recording or transcription.
Statutory Text
(c) No licensed professional shall be permitted to use artificial intelligence to assist in providing supplementary support in therapy or psychotherapy where the client's therapeutic session is recorded or transcribed unless: (1) the patient or the patient's legally authorized representative is informed in writing of the following: (A) that artificial intelligence will be used; and (B) the specific purpose of the artificial intelligence tool or system that will be used; and (2) the patient or the patient's legally authorized representative provides consent to the use of artificial intelligence.
HC-02 AI in Licensed Professional Practice Restrictions · HC-02.3 · Deployer · Professional · Healthcare
G.L. c. 112, § 298(d)
Plain Language
No individual, corporation, or entity may provide, advertise, or offer therapy or psychotherapy services in Massachusetts — including through internet-based AI — unless the services are conducted by a licensed professional. This effectively prohibits standalone AI therapy products that operate without a licensed human professional delivering the services. The prohibition applies to any entity offering such services to the public in Massachusetts, not just to licensed professionals themselves. Religious counseling and peer support are excluded from the definition of therapy or psychotherapy services.
Statutory Text
(d) An individual, corporation, or entity may not provide, advertise, or otherwise offer therapy or psychotherapy services, including through the use of Internet-based artificial intelligence, to the public in this State unless the therapy or psychotherapy services are conducted by an individual who is a licensed professional.
HC-01 Healthcare AI Decision Restrictions · HC-01.3 · Deployer · Healthcare
G.L. c. 176O, § 12(g)(1)(A)-(B)
Plain Language
Carriers and utilization review organizations using AI for utilization review must ensure the AI tool bases its determinations on the individual insured's medical history, the clinical circumstances presented by the requesting provider, and other relevant clinical information from the insured's records. The AI tool may not base determinations solely on group-level or aggregate datasets — it must incorporate individualized patient data. This applies to prospective, retrospective, and concurrent utilization review functions.
Statutory Text
(A) The artificial intelligence, algorithm, or other software tool bases its determination on the following information, as applicable: (i) An insured's medical or other clinical history. (ii) Individual clinical circumstances as presented by the requesting provider. (iii) Other relevant clinical information contained in the insured's medical or other clinical record. (B) The artificial intelligence, algorithm, or other software tool does not base its determination solely on a group dataset.
HC-01 Healthcare AI Decision Restrictions · HC-01.1 · HC-01.2 · Deployer · Healthcare
G.L. c. 176O, § 12(g)(1)(D), (g)(2)
Plain Language
AI tools used in utilization review may not supplant healthcare provider decision-making. More specifically, AI may not deny, delay, or modify healthcare services on the basis of medical necessity — that determination must be made exclusively by a licensed physician or licensed healthcare professional who is competent to evaluate the specific clinical issues involved. The human reviewer must consider the requesting provider's recommendation, the insured's clinical history, and individual clinical circumstances. This is a hard prohibition: AI cannot independently make any adverse medical necessity determination, even with human oversight available after the fact.
Statutory Text
(D) The artificial intelligence, algorithm, or other software tool does not supplant health care provider decision-making. (2) Notwithstanding paragraph (1), the artificial intelligence, algorithm, or other software tool shall not deny, delay, or modify health care services based, in whole or in part, on medical necessity. A determination of medical necessity shall be made only by a licensed physician or a licensed health care professional competent to evaluate the specific clinical issues involved in the health care services requested by the provider, as provided in subsection (a), by reviewing and considering the requesting provider's recommendation, the insured's medical or other clinical history, as applicable, and individual clinical circumstances.
H-02 Non-Discrimination & Bias Assessment · H-02.1 · Deployer · Healthcare
G.L. c. 176O, § 12(g)(1)(E)-(F)
Plain Language
Carriers and utilization review organizations must ensure that AI tools used in utilization review do not discriminate — directly or indirectly — against any insured in violation of state or federal antidiscrimination law, including Massachusetts Chapter 151B. The tools must also be applied fairly and equitably, in accordance with applicable state and federal agency regulations and guidance. This is both a non-discrimination obligation and a fairness standard that applies on an ongoing basis.
Statutory Text
(E) The use of the artificial intelligence, algorithm, or other software tool does not discriminate, directly or indirectly, against any insured in violation of state or federal law, including but not limited to chapter 151B. (F) The artificial intelligence, algorithm, or other software tool is fairly and equitably applied, including in accordance with any applicable regulations and guidance issued by state and federal agencies.
HC-01 Healthcare AI Decision Restrictions · HC-01.7 · Deployer · Healthcare
G.L. c. 176O, § 12(g)(1)(G)-(H)
Plain Language
AI tools used in utilization review must be open to inspection for audit or compliance review by both the Division of Insurance and the Executive Office of Health and Human Services. Additionally, carriers must include disclosures about the use and oversight of AI tools in their written utilization review policies and procedures required under existing Section 12(a). This creates both a regulatory access obligation and a documentation disclosure obligation.
Statutory Text
(G) The artificial intelligence, algorithm, or other software tool is open to inspection for audit or compliance reviews by the division and by the executive office of health and human services pursuant to applicable state and federal law. (H) Disclosures pertaining to the use and oversight of the artificial intelligence, algorithm, or other software tool are contained in the written policies and procedures, as required by subsection (a).
HC-01 Healthcare AI Decision Restrictions · HC-01.4 · Deployer · Healthcare
G.L. c. 176O, § 12(g)(1)(I)
Plain Language
Carriers and utilization review organizations must periodically review and revise the performance, use, and outcomes of AI tools used in utilization review to maximize accuracy and reliability. This is an ongoing operational obligation — not a one-time pre-deployment check. The statute does not specify review frequency, methodology, or documentation requirements, leaving those details to the carrier's discretion.
Statutory Text
(I) The artificial intelligence, algorithm, or other software tool's performance, use, and outcomes are periodically reviewed and revised to maximize accuracy and reliability.
HC-01 Healthcare AI Decision Restrictions · HC-01.5 · Deployer · Healthcare
G.L. c. 176O, § 12(g)(1)(J)
Plain Language
Patient data used by AI tools in utilization review functions must not be used beyond its intended and stated purpose. This data use limitation applies in addition to existing HIPAA and state health privacy law requirements. Carriers must ensure that AI tools and their vendors do not repurpose patient data collected during utilization review for other uses such as training models, marketing, or unrelated analytics.
Statutory Text
(J) Patient data is not used beyond its intended and stated purpose, and consistent with state and federal law.
Other · Healthcare
G.L. c. 176O, § 12(g)(1)(K)
Plain Language
AI tools used in utilization review must not directly or indirectly cause harm to the insured. This is a broad, general no-harm standard that goes beyond specific anti-discrimination or accuracy requirements — it creates potential liability for any harm traceable to the AI tool's use in the utilization review process. The statute does not define "harm," leaving it open to interpretation. Combined with the private right of action in paragraph (8), which declares that any violation constitutes an injury to the insured, this provision could provide a broad basis for litigation.
Statutory Text
(K) The artificial intelligence, algorithm, or other software tool does not directly or indirectly cause harm to the insured.
Other · Healthcare
G.L. c. 176O, § 12(g)(1)(C)
Plain Language
The criteria and guidelines used by AI tools in utilization review must comply with Chapter 176O and all applicable state and federal law. This is a compliance pass-through confirming that using AI does not exempt carriers from existing regulatory requirements governing utilization review.
Statutory Text
(C) The artificial intelligence, algorithm, or other software tool's criteria and guidelines comply with this chapter and applicable state and federal law.
Other · Healthcare
G.L. c. 112, § 298(f)
Plain Language
All records and communications between therapy clients and licensed professionals remain confidential and may not be disclosed except as required by law. This preserves existing professional confidentiality obligations in the context of AI use — it confirms that AI-assisted record-keeping and communications are subject to the same confidentiality standards as traditional practice.
Statutory Text
(f) All records kept by a licensed professional and all communications between an individual seeking therapy or psychotherapy services and a licensed professional shall be confidential and shall not be disclosed except as required by law.