HB-1385
MD · State · USA
● Pending
Proposed Effective Date
2026-10-01
Maryland House Bill 1385 — Health Insurance – Use of Artificial Intelligence – Human Evaluation
Summary

Amends Maryland's existing law governing the use of AI, algorithms, and software tools in health insurance utilization review. Requires that audits and compliance reviews of such tools include a human evaluation of patient medical records by a licensed health care professional who can question, modify, or override the tool's determinations. Also requires that periodic performance reviews of these tools include a human evaluation of real-world health outcomes and that findings be used to improve accuracy and patient responsiveness. Applies to carriers, pharmacy benefits managers, and private review agents using AI for utilization review. Enforced by the Maryland Insurance Commissioner through existing regulatory authority.

Enforcement & Penalties
Enforcement Authority
The Maryland Insurance Commissioner enforces the bill through audit and compliance review authority over carriers, pharmacy benefits managers, and private review agents subject to § 15–10B–05.1. Enforcement is agency-initiated under the Commissioner's existing regulatory authority over health insurance utilization review under Title 15 of the Insurance Article. The bill creates no private right of action.
Penalties
The bill does not specify independent penalty amounts, damages, or remedies. Enforcement and penalties would be governed by the Maryland Insurance Commissioner's existing regulatory authority under the Insurance Article, which may include administrative penalties, corrective action orders, and license sanctions.
Who Is Covered
"Carrier" means: (i) an insurer; (ii) a nonprofit health service plan; (iii) a health maintenance organization; (iv) a dental plan organization; or (v) any other person that provides health benefit plans subject to regulation by the State.
Compliance Obligations · 8 obligations
HC-01 Healthcare AI Decision Restrictions · HC-01.3 · Deployer · Healthcare
Ins. § 15–10B–05.1(c)(1)-(2)
Plain Language
Carriers, pharmacy benefits managers, and private review agents using AI tools for utilization review must ensure those tools base their determinations on the individual enrollee's medical history, the clinical circumstances presented by the requesting provider, or other relevant clinical information from the enrollee's records. The tools may not base determinations solely on group-level datasets. This requires individualized clinical data inputs rather than population-level statistical proxies alone.
Statutory Text
(c) Subject to subsection (d) of this section, an entity subject to this section shall ensure that: (1) an artificial intelligence, algorithm, or other software tool bases its determinations on: (i) an enrollee's medical or other clinical history; (ii) individual clinical circumstances as presented by a requesting provider; or (iii) other relevant clinical information contained in the enrollee's medical or other clinical record; (2) an artificial intelligence, algorithm, or other software tool does not base its determinations solely on a group dataset;
HC-01 Healthcare AI Decision Restrictions · HC-01.1 · Deployer · Healthcare
Ins. § 15–10B–05.1(c)(4), (d)
Plain Language
AI tools used for utilization review may not replace the role of a health care provider in the determination process and may not independently deny, delay, or modify health care services. These provisions together ensure that a human clinical professional retains the final decision-making role — the AI tool can inform or support the process but cannot issue adverse determinations on its own.
Statutory Text
(4) an artificial intelligence, algorithm, or other software tool does not replace the role of a health care provider in the determination process under § 15–10B–07 of this subtitle; (d) An artificial intelligence, algorithm, or other software tool may not deny, delay, or modify health care services.
HC-01 Healthcare AI Decision Restrictions · HC-01.5 · Deployer · Healthcare
Ins. § 15–10B–05.1(c)(10)
Plain Language
Patient data used by AI tools in utilization review must not be used beyond its intended and stated purpose, consistent with HIPAA. This imposes a purpose-limitation obligation on data flowing through AI systems used in coverage determinations, preventing secondary uses of clinical data collected for utilization review purposes.
Statutory Text
(10) patient data is not used beyond its intended and stated purpose, consistent with the federal Health Insurance Portability and Accountability Act of 1996, as applicable;
HC-01 Healthcare AI Decision Restrictions · HC-01.7 · Deployer · Healthcare
Ins. § 15–10B–05.1(c)(7)-(8), (e)
Plain Language
Covered entities must make their AI utilization review tools open to audit and compliance review by the Maryland Insurance Commissioner, must file written policies and procedures describing AI use and oversight in their utilization plans, and — as newly added by this bill — must ensure that every such audit or compliance review includes a licensed health care professional's human evaluation of patient medical records. The evaluating professional must consider the patient's specific circumstances and must have the authority to question, modify, or override the AI tool's determination. This is the core new obligation of HB 1385: it adds a mandatory human clinical evaluation component to the existing audit/compliance review framework.
Statutory Text
(7) an artificial intelligence, algorithm, or other software tool is open to inspection for audit or compliance reviews by the Commissioner IN ACCORDANCE WITH SUBSECTION (E) OF THIS SECTION; (8) written policies and procedures are included in the utilization plan submitted under § 15–10B–05 of this subtitle, including how an artificial intelligence, algorithm, or other software tool will be used and what oversight will be provided; (E) AN AUDIT OR COMPLIANCE REVIEW OF AN ARTIFICIAL INTELLIGENCE, ALGORITHM, OR OTHER SOFTWARE TOOL UNDER SUBSECTION (C)(7) OF THIS SECTION SHALL INCLUDE THE HUMAN EVALUATION OF A PATIENT'S MEDICAL RECORDS BY A LICENSED HEALTH CARE PROFESSIONAL THAT TAKES INTO CONSIDERATION THE PATIENT'S SPECIFIC CIRCUMSTANCES AND ALLOWS THE LICENSED HEALTH CARE PROFESSIONAL TO QUESTION, MODIFY, OR OVERRIDE A DETERMINATION MADE BY THE ARTIFICIAL INTELLIGENCE, ALGORITHM, OR OTHER SOFTWARE TOOL.
HC-01 Healthcare AI Decision Restrictions · HC-01.4 · Deployer · Healthcare
Ins. § 15–10B–05.1(c)(9), (f)
Plain Language
Covered entities must review and revise the performance, use, and outcomes of their AI utilization review tools at least quarterly to maximize accuracy and reliability. As newly added by this bill, those quarterly reviews must include a human evaluation of the real-world health outcomes of decisions made by the AI tool, and the findings from that evaluation must be used to improve the tool — making it safer, more accurate, and more responsive to patient needs. This creates a continuous improvement loop: human evaluators assess actual patient outcomes, and those assessments must feed back into tool refinement.
Statutory Text
(9) the performance, use, and outcomes of an artificial intelligence, algorithm, or other software tool are reviewed and revised, if necessary and at least on a quarterly basis, to maximize accuracy and reliability, IN ACCORDANCE WITH SUBSECTION (F) OF THIS SECTION; (F) A REVIEW OF THE PERFORMANCE, USE, AND OUTCOMES OF ARTIFICIAL INTELLIGENCE, ALGORITHM, OR OTHER SOFTWARE TOOLS UNDER SUBSECTION (C)(9) OF THIS SECTION SHALL INCLUDE: (1) A HUMAN EVALUATION OF THE REAL–WORLD HEALTH OUTCOMES OF DECISIONS MADE BY THE ARTIFICIAL INTELLIGENCE, ALGORITHM, OR OTHER SOFTWARE TOOL; AND (2) USE OF THE FINDINGS MADE BY THE EVALUATION REQUIRED UNDER ITEM (1) OF THIS SUBSECTION TO IMPROVE THE ARTIFICIAL INTELLIGENCE, ALGORITHM, OR OTHER SOFTWARE TOOL AND MAKE THE DECISIONS OF THE ARTIFICIAL INTELLIGENCE, ALGORITHM, OR OTHER SOFTWARE TOOL SAFER, MORE ACCURATE, AND MORE RESPONSIVE TO PATIENT NEEDS.
H-02 Non-Discrimination & Bias Assessment · H-02.1 · Deployer · Healthcare
Ins. § 15–10B–05.1(c)(5)-(6)
Plain Language
Covered entities must ensure that AI tools used in utilization review do not result in unfair discrimination and are applied fairly and equitably, including in compliance with applicable HHS regulations and guidance. While this provision does not specify a detailed bias testing methodology, it creates an affirmative obligation to monitor for and prevent discriminatory outcomes from AI-driven utilization review determinations.
Statutory Text
(5) the use of an artificial intelligence, algorithm, or other software tool does not result in unfair discrimination; (6) an artificial intelligence, algorithm, or other software tool is fairly and equitably applied, including in accordance with any applicable regulations and guidance issued by the federal Department of Health and Human Services;
Other · Deployer · Healthcare
Ins. § 15–10B–05.1(c)(11)
Plain Language
Covered entities must ensure that their AI utilization review tools do not directly or indirectly cause harm to enrollees. This is a broad general-duty standard rather than a specific operational obligation — it functions as a catch-all liability hook that could apply to any adverse outcome from AI-driven utilization review. It creates no discrete compliance action beyond the other specific obligations in the section.
Statutory Text
(11) an artificial intelligence, algorithm, or other software tool does not directly or indirectly cause harm to an enrollee.
Other · Deployer · Healthcare
Ins. § 15–10B–05.1(c)(3)
Plain Language
This provision requires that the criteria and guidelines governing AI tool use in utilization review comply with the existing requirements of Title 15 of the Insurance Article (Health Insurance). It is a cross-reference ensuring that AI tools are held to the same standards as non-AI utilization review methods; it creates no new independent obligation beyond compliance with pre-existing law.
Statutory Text
(3) the criteria and guidelines for using an artificial intelligence, algorithm, or other software tool for making determinations comply with the requirements of this title;