LD-1301
ME · State · USA
● Failed
Effective Date
2026-01-01
An Act to Prohibit the Use of Artificial Intelligence in the Denial of Health Insurance Claims (LD 1301, S.P. 531, 132nd Maine Legislature)
Summary

This bill regulates the use of AI by health insurance carriers (or their contracted third parties) in making medical review or utilization review determinations relating to coverage under health plans. Beginning January 1, 2026, AI-derived determinations must be based on individual enrollee clinical data (not solely group-level data), must not discriminate on enumerated protected characteristics, must be fairly applied, and must be open to inspection and disclosed to enrollees. Adverse determinations based on medical necessity must be made by a clinical peer competent in the relevant specialty. The bill also requires AI use to be governed by policies establishing accountability for performance, use, and outcomes, with periodic review for accuracy and reliability, and imposes data use limitations.

Enforcement & Penalties
Enforcement Authority
The bill amends Title 24-A MRSA §4304, which is administered by the Maine Bureau of Insurance (Superintendent of Insurance). Enforcement would be through existing insurance regulatory authority. The bill does not create a private right of action or specify any new enforcement mechanism beyond the existing insurance regulatory framework.
Penalties
The bill does not specify any damages, penalties, or remedies. Enforcement remedies would derive from the existing insurance regulatory framework under Title 24-A.
Who Is Covered
Compliance Obligations · 6 obligations
HC-01 Healthcare AI Decision Restrictions · HC-01.3 · Deployer · Healthcare
24-A MRSA §4304(8)(A)(1)
Plain Language
When carriers or their contracted third parties use AI to make medical review or utilization review determinations, those determinations must be based on the individual enrollee's medical history and clinical circumstances as presented by the requesting provider, plus other relevant clinical information from the enrollee's medical record. AI determinations may not supplant provider decision making — meaning the AI tool cannot override or replace the treating provider's clinical judgment as the basis for the determination. This effectively requires individualized clinical review rather than reliance on aggregate or group-level data alone.
Statutory Text
Determinations derived from the use of artificial intelligence, including algorithms and other software tools, must: (1) Be based upon an enrollee's medical history, as applicable, and individual clinical circumstances as presented by the requesting provider, as well as other relevant clinical information contained in the enrollee's medical record, and not supplant provider decision making;
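A carrier's compliance tooling could gate AI determinations on the presence of individualized clinical inputs before any determination issues. The sketch below is illustrative only; the field names (`enrollee_medical_history`, `provider_presented_circumstances`, `group_level_data_only`) are hypothetical and not drawn from the bill.

```python
# Hypothetical pre-determination gate for §4304(8)(A)(1): flag AI inputs
# that lack the enrollee's own clinical data or rest solely on group-level
# data. All field names are illustrative assumptions.

def validate_determination_inputs(inputs: dict) -> list[str]:
    """Return a list of compliance gaps; an empty list means inputs pass."""
    gaps = []
    # The determination must rest on the enrollee's own medical history
    # and the clinical circumstances presented by the requesting provider.
    if not inputs.get("enrollee_medical_history"):
        gaps.append("missing enrollee medical history")
    if not inputs.get("provider_presented_circumstances"):
        gaps.append("missing clinical circumstances from requesting provider")
    # Group-level data may inform a determination but cannot be its sole basis.
    if inputs.get("group_level_data_only"):
        gaps.append("determination based solely on group-level data")
    return gaps
```

A gate like this would run before the AI output is forwarded to review, so non-individualized inputs never reach a coverage decision.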
HC-01 Healthcare AI Decision Restrictions · HC-01.2 · Deployer · Healthcare
24-A MRSA §4304(8)(B)
Plain Language
Any adverse determination — denial, delay, modification, or adjustment — based on medical necessity must be made by a clinical peer who is competent to evaluate the specific clinical issues at hand. The clinical peer must consider the treating provider's recommendation and the enrollee's individual medical history and clinical circumstances. This effectively prohibits AI from serving as the sole or primary decision-maker for adverse medical necessity determinations; a qualified human clinical professional must make or independently affirm such decisions. The bill does not define 'clinical peer', but the term is used in the existing utilization review framework under Maine law.
Statutory Text
A denial, delay, modification or adjustment of health care services based on medical necessity must be made by a clinical peer competent to evaluate the specific clinical issues involved in the health care services requested by the enrollee's provider. The clinical peer making the medical review or utilization review determination shall consider the enrollee's provider's recommendation and the enrollee's medical history, as applicable, and individual clinical circumstances.
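The human-in-the-loop rule in §4304(8)(B) can be expressed as a simple gate: adverse medical-necessity determinations may not issue on AI output alone. This is a minimal sketch under assumed field names (`outcome`, `based_on_medical_necessity`, `reviewer_is_clinical_peer`); it is not an implementation mandated by the bill.

```python
from dataclasses import dataclass

# The four adverse actions enumerated in the statutory text.
ADVERSE_OUTCOMES = {"denial", "delay", "modification", "adjustment"}


@dataclass
class Determination:
    outcome: str                      # e.g. "approval", "denial"
    based_on_medical_necessity: bool
    reviewer_is_clinical_peer: bool   # competent in the relevant specialty


def may_be_issued(d: Determination) -> bool:
    """Adverse medical-necessity determinations require a clinical peer;
    all other determinations pass through unchanged."""
    if d.outcome in ADVERSE_OUTCOMES and d.based_on_medical_necessity:
        return d.reviewer_is_clinical_peer
    return True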
H-02 Non-Discrimination & Bias Assessment · H-02.1 · Deployer · Healthcare
24-A MRSA §4304(8)(A)(2)-(3)
Plain Language
AI-derived utilization review determinations must not directly or indirectly discriminate against enrollees on an extensive list of protected characteristics, including race, color, religion, national origin, ancestry, age, sex, gender, gender identity, gender expression, sexual orientation, present or predicted disability, expected length of life, degree of medical dependency, quality of life, or other health conditions. Determinations must also be fairly and equitably applied. Notably, the protected categories go beyond typical employment discrimination lists to include health-specific characteristics such as predicted disability, expected length of life, and degree of medical dependency — carriers should ensure their AI tools are tested against these categories.
Statutory Text
Determinations derived from the use of artificial intelligence, including algorithms and other software tools, must: (2) Not directly or indirectly discriminate against an enrollee on the basis of race, color, religion, national origin, ancestry, age, sex, gender, gender identity, gender expression, sexual orientation, present or predicted disability, expected length of life, degree of medical dependency, quality of life or other health conditions; (3) Be fairly and equitably applied;
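Testing an AI tool against the enumerated characteristics typically starts with outcome-rate comparisons across groups. The sketch below computes denial rates per value of one protected characteristic; it is a hypothetical starting point (the record fields `denied` and the characteristic keys are assumptions), and a rigorous bias assessment would add statistical significance testing and indirect-discrimination (proxy) analysis.

```python
from collections import defaultdict


def denial_rates(records: list[dict], characteristic: str) -> dict:
    """Denial rate per value of one protected characteristic.

    records: dicts with a boolean "denied" field and the characteristic key.
    Returns {group_value: denial_rate}.
    """
    totals = defaultdict(int)
    denials = defaultdict(int)
    for r in records:
        group = r[characteristic]
        totals[group] += 1
        if r["denied"]:
            denials[group] += 1
    return {g: denials[g] / totals[g] for g in totals}
```

Large gaps between groups would not by themselves prove discrimination, but they are the kind of signal a carrier's periodic accuracy-and-reliability review should surface and investigate.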
HC-01 Healthcare AI Decision Restrictions · HC-01.7 · Deployer · Healthcare
24-A MRSA §4304(8)(A)(4)
Plain Language
AI tools used in utilization review determinations must be open to inspection — meaning regulators or other authorized parties can examine how the tools work. Additionally, carriers must disclose in their written policies and procedures to enrollees that AI is being used in coverage determinations. This creates two distinct requirements: (1) an inspection/auditability obligation, and (2) a written disclosure obligation to enrollees about AI use.
Statutory Text
Determinations derived from the use of artificial intelligence, including algorithms and other software tools, must: (4) Be open to inspection, and the use of artificial intelligence must be disclosed in the written policies and procedures to an enrollee.
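The inspection obligation implies that each AI-derived determination should leave an examinable record. One possible shape, sketched below with entirely illustrative field names (`model_version`, `inputs_summary`, `ai_use_disclosed_to_enrollee`), is a structured audit entry written alongside every determination.

```python
import json
from datetime import datetime, timezone


def inspection_record(determination_id: str, model_version: str,
                      inputs_summary: str, outcome: str) -> str:
    """Serialize an audit entry supporting the 'open to inspection'
    requirement. Field names are illustrative, not mandated by the bill."""
    return json.dumps({
        "determination_id": determination_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs_summary": inputs_summary,
        "outcome": outcome,
        # The disclosure duty under §4304(8)(A)(4) is satisfied in the
        # written policies and procedures; the flag here only records it.
        "ai_use_disclosed_to_enrollee": True,
    }, sort_keys=True)
```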
HC-01 Healthcare AI Decision Restrictions · HC-01.4 · Deployer · Healthcare
24-A MRSA §4304(8)(A) (final paragraph)
Plain Language
Carriers must adopt and maintain governance policies for AI used in utilization review that establish accountability for the AI's performance, use, and outcomes, and that are periodically reviewed and revised for accuracy and reliability. This is an ongoing governance obligation — not a one-time pre-deployment check. Additionally, data used by the AI may not be repurposed beyond its intended and stated purpose, and must be protected against risks that could harm enrollees. The data use limitation functions as a purpose limitation principle similar to HC-01.5, while the governance and periodic review requirements form the core of this obligation (HC-01.4).
Statutory Text
Use of artificial intelligence pursuant to this paragraph must be governed by policies that establish accountability for performance, use and outcomes that are reviewed and revised for accuracy and reliability. Data under this paragraph may not be used beyond its intended and stated purpose. Data under this paragraph must be protected from risk that may directly or indirectly cause harm to the enrollee.
D-01 Automated Processing Rights & Data Controls · D-01.4 · Deployer · Healthcare
24-A MRSA §4304(8)(A) (final paragraph)
Plain Language
Data used by AI in utilization review determinations is subject to a strict purpose limitation: it may not be used beyond its intended and stated purpose. Additionally, such data must be affirmatively protected from risks that could directly or indirectly harm enrollees. This is a data governance obligation that constrains secondary use of enrollee data and requires affirmative data protection measures. It aligns with HC-01.5 (patient data purpose limitation) as well as D-01.4 (data minimization and purpose limitation for AI systems).
Statutory Text
Data under this paragraph may not be used beyond its intended and stated purpose. Data under this paragraph must be protected from risk that may directly or indirectly cause harm to the enrollee.
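The purpose limitation can be enforced mechanically by binding data to its stated purpose at collection time and refusing access for any other purpose. This is a minimal sketch under assumed names (`PurposeLimitedData`, `access`); real systems would pair it with encryption and access logging to satisfy the protection-from-harm requirement.

```python
class PurposeLimitedData:
    """Wrap enrollee data with its intended and stated purpose; reads for
    any other purpose are refused (illustrative, not from the bill)."""

    def __init__(self, payload, stated_purpose: str):
        self._payload = payload
        self._purpose = stated_purpose

    def access(self, requested_purpose: str):
        if requested_purpose != self._purpose:
            raise PermissionError(
                f"data stated for {self._purpose!r} requested for "
                f"{requested_purpose!r}"
            )
        return self._payload
```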