LD-1301
ME · State · USA
● Pending
Proposed Effective Date
2026-01-01
An Act to Prohibit the Use of Artificial Intelligence in the Denial of Health Insurance Claims (LD 1301, S.P. 531, 132nd Maine Legislature)
Summary

This bill regulates the use of artificial intelligence by health insurance carriers (and their contracted third parties) to make medical review or utilization review determinations relating to the approval, denial, delay, modification, or adjustment of coverage under a health plan. AI-derived determinations must be based on individual enrollee clinical data and medical history, may not discriminate on protected characteristics, and must be fairly and equitably applied. Carriers must disclose AI use in written policies to enrollees and keep AI tools open to inspection. Any adverse determination based on medical necessity must be made by a clinical peer competent in the relevant clinical area. The bill also imposes data use limitations, requiring that data used in AI determinations not be used beyond its intended purpose. Effective January 1, 2026.

Enforcement & Penalties
Enforcement Authority
The bill amends Title 24-A MRSA §4304, which is administered by the Maine Bureau of Insurance within the Department of Professional and Financial Regulation. No private right of action is created by this bill. Enforcement would be through the existing regulatory authority of the Superintendent of Insurance over carriers. The bill requires AI tools to be open to inspection, implying regulatory audit authority.
Penalties
The bill does not specify monetary penalties, damages, or remedies. Enforcement remedies would be governed by existing Title 24-A enforcement provisions available to the Superintendent of Insurance, which may include administrative penalties, corrective action orders, and license-related sanctions.
Who Is Covered
Compliance Obligations (5 obligations)
HC-01 Healthcare AI Decision Restrictions · HC-01.3 · Deployer · Healthcare
24-A MRSA §4304(8)(A)(1)
Plain Language
When a carrier or its contracted third party uses AI to make utilization review or medical review determinations, those determinations must be based on the individual enrollee's medical history and clinical circumstances as presented by the requesting provider and contained in the enrollee's medical record. AI tools may not supplant provider decision-making — the treating provider's clinical judgment must remain central. This effectively prohibits AI systems from making coverage determinations based solely on aggregate or group-level data without considering individualized clinical information.
Statutory Text
Determinations derived from the use of artificial intelligence, including algorithms and other software tools, must: (1) Be based upon an enrollee's medical history, as applicable, and individual clinical circumstances as presented by the requesting provider, as well as other relevant clinical information contained in the enrollee's medical record, and not supplant provider decision making;
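The individualization requirement above could be modeled as a pre-determination validation gate. This is a hypothetical sketch, not anything prescribed by the bill; the class and field names are assumptions chosen for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class DeterminationInput:
    """Inputs an AI tool receives before a utilization review determination.
    Field names are illustrative assumptions, not statutory terms."""
    enrollee_id: str
    medical_history: list[str] = field(default_factory=list)  # from the enrollee's medical record
    clinical_circumstances: str = ""                          # as presented by the requesting provider
    provider_recommendation: str = ""

def is_individualized(inp: DeterminationInput) -> bool:
    """True only when the determination rests on individual enrollee data
    (history plus provider-presented clinical circumstances), never solely
    on aggregate or group-level information."""
    return bool(inp.medical_history and inp.clinical_circumstances
                and inp.provider_recommendation)
```

A carrier's pipeline could refuse to invoke the AI tool at all when `is_individualized` returns `False`, forcing the case to manual review instead.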
HC-01 Healthcare AI Decision Restrictions · HC-01.2 · Deployer · Healthcare
24-A MRSA §4304(8)(B)
Plain Language
Any adverse coverage determination — denial, delay, modification, or adjustment — based on medical necessity must be made by a clinical peer who is competent in the specific clinical area at issue. The clinical peer must consider the treating provider's recommendation and the enrollee's individual medical history and clinical circumstances. This effectively prohibits AI from serving as the sole or final decision-maker for adverse medical necessity determinations — a qualified human clinical professional must make or affirm every such decision.
Statutory Text
A denial, delay, modification or adjustment of health care services based on medical necessity must be made by a clinical peer competent to evaluate the specific clinical issues involved in the health care services requested by the enrollee's provider. The clinical peer making the medical review or utilization review determination shall consider the enrollee's provider's recommendation and the enrollee's medical history, as applicable, and individual clinical circumstances.
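The human-in-the-loop rule in §4304(8)(B) amounts to a gate: an AI tool may flag a case, but only a qualified clinical peer review can finalize an adverse determination. A minimal sketch, with hypothetical field and function names:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ClinicalPeerReview:
    """Record of the human review required before any adverse determination.
    Field names are illustrative, not statutory."""
    reviewer_id: str
    competent_in_specialty: bool              # competent to evaluate the specific clinical issues
    considered_provider_recommendation: bool
    considered_medical_history: bool
    decision: str                             # e.g. "approve" or "deny"

def finalize_medical_necessity_determination(peer: Optional[ClinicalPeerReview]) -> str:
    """Refuse to finalize unless a clinical peer reviewed the case and
    weighed the provider's recommendation and the enrollee's history."""
    if peer is None:
        raise ValueError("adverse determination requires a clinical peer review")
    if not (peer.competent_in_specialty
            and peer.considered_provider_recommendation
            and peer.considered_medical_history):
        raise ValueError("peer review does not satisfy the statutory conditions")
    return peer.decision
```

Structuring the gate as a hard failure (an exception rather than a logged warning) reflects the statute's mandatory phrasing: an AI output alone can never become a final adverse determination.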
H-02 Non-Discrimination & Bias Assessment · Deployer · Healthcare
24-A MRSA §4304(8)(A)(2)-(3)
Plain Language
AI-derived utilization review and medical review determinations must not directly or indirectly discriminate against enrollees on an extensive list of protected characteristics, including race, color, religion, national origin, ancestry, age, sex, gender, gender identity, gender expression, sexual orientation, present or predicted disability, expected length of life, degree of medical dependency, quality of life, or other health conditions. Determinations must also be fairly and equitably applied across all enrollees. The prohibition on 'indirectly' discriminating suggests that proxy discrimination and disparate impact are covered, not only intentional discrimination. The protected class list is notably broader than in typical employment or civil rights statutes: it includes predicted disability, expected length of life, degree of medical dependency, and quality of life.
Statutory Text
Determinations derived from the use of artificial intelligence, including algorithms and other software tools, must: (2) Not directly or indirectly discriminate against an enrollee on the basis of race, color, religion, national origin, ancestry, age, sex, gender, gender identity, gender expression, sexual orientation, present or predicted disability, expected length of life, degree of medical dependency, quality of life or other health conditions; (3) Be fairly and equitably applied;
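The statute does not prescribe how to test for indirect discrimination. One common heuristic a carrier might borrow is the "four-fifths" adverse-impact screen, sketched below; this metric, the threshold, and the function names are assumptions for illustration, not requirements of the bill.

```python
def approval_rates(outcomes):
    """outcomes: dict mapping group label -> (approved_count, total_count)."""
    return {g: a / t for g, (a, t) in outcomes.items() if t > 0}

def disparate_impact_flags(outcomes, threshold=0.8):
    """Flag groups whose approval rate falls below `threshold` times the
    highest group's rate -- the 'four-fifths' screen, one common heuristic
    for detecting indirect (disparate-impact) discrimination."""
    rates = approval_rates(outcomes)
    best = max(rates.values())
    return [g for g, r in rates.items() if r < threshold * best]
```

A screen like this only surfaces candidates for investigation; passing it would not by itself establish that determinations are "fairly and equitably applied."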
HC-01 Healthcare AI Decision Restrictions · HC-01.6 · HC-01.7 · Deployer · Healthcare
24-A MRSA §4304(8)(A)(4)
Plain Language
AI tools used in utilization review or medical review determinations must be open to inspection — implying regulatory audit access to the AI systems and their decision logic. Additionally, carriers must disclose the use of AI in their written policies and procedures provided to enrollees. This creates two distinct obligations: a transparency-to-regulators obligation (open to inspection) and a transparency-to-enrollees obligation (written disclosure in policies and procedures).
Statutory Text
Determinations derived from the use of artificial intelligence, including algorithms and other software tools, must: (4) Be open to inspection, and the use of artificial intelligence must be disclosed in the written policies and procedures to an enrollee.
HC-01 Healthcare AI Decision Restrictions · HC-01.4 · Deployer · Healthcare
24-A MRSA §4304(8)(A) (final paragraph)
Plain Language
Carriers must establish governance policies for AI used in utilization and medical review that create accountability for the AI's performance, use, and outcomes. These policies must be periodically reviewed and revised to ensure accuracy and reliability — this is an ongoing obligation, not a one-time setup. Additionally, data used in AI-derived determinations may not be repurposed beyond its intended and stated purpose, and must be protected from risks that could directly or indirectly harm the enrollee. This creates three distinct requirements: (1) governance policies with accountability, (2) periodic review and revision for accuracy, and (3) data use limitations and data protection.
Statutory Text
Use of artificial intelligence pursuant to this paragraph must be governed by policies that establish accountability for performance, use and outcomes that are reviewed and revised for accuracy and reliability. Data under this paragraph may not be used beyond its intended and stated purpose. Data under this paragraph must be protected from risk that may directly or indirectly cause harm to the enrollee.
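The purpose-limitation rule above can be made concrete by tagging each data element with its stated purpose at collection and rejecting any other use. A hypothetical sketch; the bill does not specify a mechanism, and all names here are assumptions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class GovernedData:
    """A data element tagged with its intended and stated purpose at collection."""
    value: object
    stated_purpose: str   # e.g. "utilization_review"

def use_data(item: GovernedData, purpose: str):
    """Release the value only for its stated purpose; any other use is
    rejected, mirroring the bill's purpose-limitation rule."""
    if purpose != item.stated_purpose:
        raise PermissionError(
            f"data stated for {item.stated_purpose!r} may not be used for {purpose!r}")
    return item.value
```

Making the record `frozen=True` also gestures at the companion requirement that the data be protected from risks of harm: immutable, purpose-tagged records are easier to audit during the periodic governance reviews the bill requires.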