HB-1385
MD · State · USA
● Pending
Proposed Effective Date
2026-10-01
Maryland House Bill 1385 — Health Insurance – Use of Artificial Intelligence – Human Evaluation
Summary

Amends Maryland Insurance Code § 15-10B-05.1 to strengthen human oversight requirements for AI, algorithms, or other software tools used in health insurance utilization review. Requires that audits and compliance reviews of such tools include evaluation by a licensed health care professional who reviews patient medical records, considers individual circumstances, and has authority to question, modify, or override AI determinations. Additionally requires that periodic performance reviews of AI utilization review tools include human evaluation of real-world health outcomes, with findings used to improve accuracy and patient responsiveness. Applies to carriers, pharmacy benefits managers, and private review agents that use AI for utilization review. Enforced by the Maryland Insurance Commissioner.

Enforcement & Penalties
Enforcement Authority
Maryland Insurance Commissioner enforces through audits and compliance reviews of AI tools used for utilization review. Enforcement is agency-initiated through regulatory oversight of carriers, pharmacy benefits managers, and private review agents. No private right of action is created by this bill.
Penalties
The bill does not specify monetary penalties, damages, or remedies. Enforcement relies on the Commissioner's existing regulatory authority over carriers and utilization review entities under Title 15 of the Maryland Insurance Article.
Who Is Covered
"Carrier" means: (i) an insurer; (ii) a nonprofit health service plan; (iii) a health maintenance organization; (iv) a dental plan organization; or (v) any other person that provides health benefit plans subject to regulation by the State.
Also covered: a pharmacy benefits manager or private review agent that (i) contracts with a carrier to provide utilization review on behalf of the carrier and (ii) uses an artificial intelligence, algorithm, or other software tool to conduct utilization review on behalf of the carrier.
Compliance Obligations · 8 obligations
HC-01 Healthcare AI Decision Restrictions · HC-01.3 · Deployer · Healthcare
Ins. § 15-10B-05.1(c)(1)-(2)
Plain Language
Carriers, pharmacy benefits managers, and private review agents must ensure that any AI, algorithm, or software tool used for utilization review bases its determinations on the individual enrollee's medical history, clinical circumstances as presented by the requesting provider, or other relevant clinical information from the enrollee's records. The tool may not base determinations solely on group-level datasets. This requires individualized clinical data inputs for each determination.
Statutory Text
(c) Subject to subsection (d) of this section, an entity subject to this section shall ensure that: (1) an artificial intelligence, algorithm, or other software tool bases its determinations on: (i) an enrollee's medical or other clinical history; (ii) individual clinical circumstances as presented by a requesting provider; or (iii) other relevant clinical information contained in the enrollee's medical or other clinical record; (2) an artificial intelligence, algorithm, or other software tool does not base its determinations solely on a group dataset;
HC-01 Healthcare AI Decision Restrictions · HC-01.1 · Deployer · Healthcare
Ins. § 15-10B-05.1(c)(4), (d)
Plain Language
AI tools used for utilization review may not replace the role of a health care provider in the determination process, and may not independently deny, delay, or modify health care services. This is an absolute prohibition: the AI tool cannot be the final decision-maker on coverage determinations. A licensed health care provider must make or independently affirm every adverse determination.
Statutory Text
(4) an artificial intelligence, algorithm, or other software tool does not replace the role of a health care provider in the determination process under § 15–10B–07 of this subtitle; (d) An artificial intelligence, algorithm, or other software tool may not deny, delay, or modify health care services.
HC-01 Healthcare AI Decision Restrictions · HC-01.5 · Deployer · Healthcare
Ins. § 15-10B-05.1(c)(10)
Plain Language
Patient data used by AI tools in the utilization review process must not be used beyond its intended and stated purpose. This obligation must be applied consistently with HIPAA requirements. Covered entities must ensure their AI vendors and utilization review contractors also comply with this data use limitation.
Statutory Text
(10) patient data is not used beyond its intended and stated purpose, consistent with the federal Health Insurance Portability and Accountability Act of 1996, as applicable;
HC-01 Healthcare AI Decision Restrictions · HC-01.7 · Deployer · Healthcare
Ins. § 15-10B-05.1(c)(7)-(8), (e)
Plain Language
AI tools used for utilization review must be open to inspection by the Maryland Insurance Commissioner for audits and compliance reviews. Written policies and procedures describing how the AI tool will be used and what oversight will be provided must be included in the utilization plan filed with the Commissioner. Critically, audits and compliance reviews must now include a human evaluation component: a licensed health care professional must review patient medical records, consider the patient's specific circumstances, and have the authority to question, modify, or override any determination made by the AI tool. This human evaluation requirement is new language added by the bill; it ensures that regulatory audits are not purely technical assessments of the tool but also include clinical review of actual patient outcomes.
Statutory Text
(7) an artificial intelligence, algorithm, or other software tool is open to inspection for audit or compliance reviews by the Commissioner IN ACCORDANCE WITH SUBSECTION (E) OF THIS SECTION; (8) written policies and procedures are included in the utilization plan submitted under § 15–10B–05 of this subtitle, including how an artificial intelligence, algorithm, or other software tool will be used and what oversight will be provided; (E) AN AUDIT OR COMPLIANCE REVIEW OF AN ARTIFICIAL INTELLIGENCE, ALGORITHM, OR OTHER SOFTWARE TOOL UNDER SUBSECTION (C)(7) OF THIS SECTION SHALL INCLUDE THE HUMAN EVALUATION OF A PATIENT'S MEDICAL RECORDS BY A LICENSED HEALTH CARE PROFESSIONAL THAT TAKES INTO CONSIDERATION THE PATIENT'S SPECIFIC CIRCUMSTANCES AND ALLOWS THE LICENSED HEALTH CARE PROFESSIONAL TO QUESTION, MODIFY, OR OVERRIDE A DETERMINATION MADE BY THE ARTIFICIAL INTELLIGENCE, ALGORITHM, OR OTHER SOFTWARE TOOL.
HC-01 Healthcare AI Decision Restrictions · HC-01.4 · Deployer · Healthcare
Ins. § 15-10B-05.1(c)(9), (f)
Plain Language
Covered entities must review and, if necessary, revise the performance, use, and outcomes of their AI utilization review tools at least quarterly to maximize accuracy and reliability. The bill adds a new requirement that these quarterly reviews must include a human evaluation of real-world health outcomes resulting from AI-driven decisions. The findings from this human evaluation must then be used to improve the AI tool, making its decisions safer, more accurate, and more responsive to patient needs. This creates a continuous feedback loop: human clinicians assess actual patient outcomes, and those assessments must drive concrete improvements to the AI system.
Statutory Text
(9) the performance, use, and outcomes of an artificial intelligence, algorithm, or other software tool are reviewed and revised, if necessary and at least on a quarterly basis, to maximize accuracy and reliability, IN ACCORDANCE WITH SUBSECTION (F) OF THIS SECTION; (F) A REVIEW OF THE PERFORMANCE, USE, AND OUTCOMES OF ARTIFICIAL INTELLIGENCE, ALGORITHM, OR OTHER SOFTWARE TOOLS UNDER SUBSECTION (C)(9) OF THIS SECTION SHALL INCLUDE: (1) A HUMAN EVALUATION OF THE REAL–WORLD HEALTH OUTCOMES OF DECISIONS MADE BY THE ARTIFICIAL INTELLIGENCE, ALGORITHM, OR OTHER SOFTWARE TOOL; AND (2) USE OF THE FINDINGS MADE BY THE EVALUATION REQUIRED UNDER ITEM (1) OF THIS SUBSECTION TO IMPROVE THE ARTIFICIAL INTELLIGENCE, ALGORITHM, OR OTHER SOFTWARE TOOL AND MAKE THE DECISIONS OF THE ARTIFICIAL INTELLIGENCE, ALGORITHM, OR OTHER SOFTWARE TOOL SAFER, MORE ACCURATE, AND MORE RESPONSIVE TO PATIENT NEEDS.
H-02 Non-Discrimination & Bias Assessment · H-02.1 · Deployer · Healthcare
Ins. § 15-10B-05.1(c)(5)-(6)
Plain Language
Covered entities must ensure their AI utilization review tools do not result in unfair discrimination and are applied fairly and equitably. Compliance must align with applicable HHS regulations and guidance. While the bill does not prescribe a specific testing methodology, the obligation to ensure non-discrimination implicitly requires some form of monitoring or testing to verify the tool's outputs are not discriminatory across patient populations.
Statutory Text
(5) the use of an artificial intelligence, algorithm, or other software tool does not result in unfair discrimination; (6) an artificial intelligence, algorithm, or other software tool is fairly and equitably applied, including in accordance with any applicable regulations and guidance issued by the federal Department of Health and Human Services;
Other · Healthcare
Ins. § 15-10B-05.1(c)(3)
Plain Language
The criteria and guidelines governing how AI tools make utilization review determinations must comply with all existing requirements of Title 15 of the Maryland Insurance Article. This preserves the applicability of existing insurance regulatory requirements to AI-driven processes but does not create a new standalone compliance obligation.
Statutory Text
(3) the criteria and guidelines for using an artificial intelligence, algorithm, or other software tool for making determinations comply with the requirements of this title;
Other · Healthcare
Ins. § 15-10B-05.1(c)(11)
Plain Language
Covered entities must ensure that their AI utilization review tools do not directly or indirectly cause harm to enrollees. This is a broad, general-duty provision establishing a baseline standard of care. It provides a regulatory hook for the Commissioner to take enforcement action when AI tools produce harmful outcomes, even if no other specific subsection is violated.
Statutory Text
(11) an artificial intelligence, algorithm, or other software tool does not directly or indirectly cause harm to an enrollee.