HB-795
MD · State · USA
● Pending
Proposed Effective Date
2026-10-01
Maryland House Bill 795 — Health Insurance – Artificial Intelligence – Grievance Process and Reporting (AI Health Insurance Accountability Act of 2026)
Summary

Amends Maryland's existing health insurance grievance and reporting framework to address the use of AI, algorithms, and other software tools in adverse coverage decisions. Requires carriers' internal grievance processes to provide for human review of adverse decisions made using AI tools, including review for compliance with existing § 15–10B–05.1 requirements governing AI use in utilization review. Adds a new quarterly reporting obligation requiring carriers to report AI-related grievance data aggregated by claim type, member demographics, and policy type. Creates a model review trigger: if a Commissioner-specified percentage of adverse decisions from the same AI tool result in grievances within six months, the carrier must conduct a model review and submit findings. Enforced by the Maryland Insurance Commissioner through existing regulatory authority; no private right of action is created.

Enforcement & Penalties
Enforcement Authority
Maryland Insurance Commissioner. Enforcement is agency-initiated through the Commissioner's existing authority over carriers, including the power to conduct examinations under Title 2, Subtitle 2 of the Insurance Article and to report violations or take actions under § 15–10B–11. The Commissioner may use carrier-submitted quarterly reports as the basis for initiating an examination. No private right of action is created by this bill.
Penalties
The bill does not specify new monetary penalties, statutory damages, or remedies. Enforcement actions and violations are addressed under existing § 15–10B–11 of the Insurance Article, which governs the Commissioner's authority to take action against carriers for violations of the subtitle.
Who Is Covered
"Carrier" means: (i) an insurer; (ii) a nonprofit health service plan; (iii) a health maintenance organization; (iv) a dental plan organization; or (v) any other person that provides health benefit plans subject to regulation by the State.
Compliance Obligations · 7 obligations
HC-01 Healthcare AI Decision Restrictions · HC-01.1 · Deployer · Healthcare
Insurance § 15–10A–02(b)(2)(vi)
Plain Language
When a member files a grievance challenging an adverse coverage decision that was made using AI, an algorithm, or other software tools, the carrier's internal grievance process must provide for human review of that adverse decision. The human review must include verification of compliance with § 15–10B–05.1, which requires that AI tools base determinations on individual clinical data, do not replace the role of a healthcare provider, and do not result in unfair discrimination, among other requirements. This is a new procedural requirement layered onto the existing internal grievance framework — carriers must build this AI-specific human review into their existing grievance workflows.
Statutory Text
(VI) FOR A GRIEVANCE RESULTING FROM AN ADVERSE DECISION MADE USING ARTIFICIAL INTELLIGENCE, ALGORITHM, OR OTHER SOFTWARE TOOLS, PROVIDE FOR THE HUMAN REVIEW OF THE ADVERSE DECISION, INCLUDING FOR COMPLIANCE WITH § 15–10B–05.1 OF THIS TITLE.
R-02 Regulatory Disclosure & Submissions · R-02.1 · Deployer · Healthcare
Insurance § 15–10A–06(a)(1)(iii)(9)
Plain Language
Carriers must include in their existing quarterly reports to the Commissioner the total number of grievances that received human review under the new AI-related grievance provision (§ 15–10A–02(b)(2)(vi)), broken down by type of claim, member race/gender/profession, and type of policy (individual, small group, large group, and whether purchased on the Health Benefit Exchange). This demographic disaggregation enables the Commissioner to monitor for potential disparate impact of AI-driven adverse decisions across protected classes.
Statutory Text
9. THE TOTAL NUMBER OF GRIEVANCES REVIEWED UNDER § 15–10A–02(B)(2)(VI) OF THIS SUBTITLE AND AGGREGATED BY: A. TYPE OF CLAIM; B. RACE, GENDER, AND PROFESSION OF MEMBER; AND C. TYPE OF POLICY, INCLUDING INDIVIDUAL, SMALL GROUP, OR LARGE GROUP AND WHETHER THE POLICY WAS PURCHASED ON THE HEALTH BENEFIT EXCHANGE; AND
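The aggregation required by item 9 amounts to a group-by over reviewed-grievance records along three axes. The sketch below is purely illustrative: the record fields and category values are hypothetical placeholders, not drawn from the statute or any Maryland Insurance Administration reporting schema.

```python
from collections import Counter
from dataclasses import dataclass

# Illustrative record for a grievance that received human review under
# § 15-10A-02(b)(2)(vi); all field names here are hypothetical.
@dataclass(frozen=True)
class ReviewedGrievance:
    claim_type: str        # e.g. "pharmacy", "inpatient"
    race: str
    gender: str
    profession: str
    policy_type: str       # "individual", "small group", or "large group"
    exchange_purchased: bool  # purchased on the Health Benefit Exchange

def quarterly_aggregates(grievances):
    """Count reviewed grievances along each axis item 9 requires:
    type of claim; race, gender, and profession of member; and
    type of policy including Exchange purchase status."""
    return {
        "total": len(grievances),
        "by_claim_type": Counter(g.claim_type for g in grievances),
        "by_demographics": Counter(
            (g.race, g.gender, g.profession) for g in grievances
        ),
        "by_policy": Counter(
            (g.policy_type, g.exchange_purchased) for g in grievances
        ),
    }
```

The tuple-keyed counters keep each statutory axis separate, mirroring the "aggregated by" structure of the provision rather than producing one cross-tabulated table.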
S-01 AI System Safety Program · S-01.7 · Deployer · Healthcare
Insurance § 15–10A–06(a)(3)
Plain Language
If more than a Commissioner-determined percentage of a carrier's adverse decisions made using the same AI, algorithm, or software tool result in grievances within any six-month period, the carrier must conduct a model review of that AI tool and submit the findings in its quarterly report. The specific grievance-rate threshold that triggers this obligation will be set by the Commissioner — the statute delegates that threshold determination. This creates a performance-triggered audit requirement: carriers must monitor grievance rates per AI tool and initiate a formal review process when the threshold is exceeded. The review findings must be documented and submitted to the Commissioner alongside the regular quarterly reporting.
Statutory Text
(3) IF, WITHIN A 6–MONTH PERIOD, MORE THAN A SPECIFIED PERCENTAGE, AS DETERMINED BY THE COMMISSIONER, OF A CARRIER'S ADVERSE DECISIONS MADE USING THE SAME ARTIFICIAL INTELLIGENCE, ALGORITHM, OR SOFTWARE TOOL RESULT IN A GRIEVANCE, THE CARRIER SHALL PROVIDE FOR A MODEL REVIEW PROCESS OF THE ARTIFICIAL INTELLIGENCE, ALGORITHM, OR SOFTWARE TOOL AND SUBMIT THE FINDINGS IN THE REPORT REQUIRED UNDER PARAGRAPH (1) OF THIS SUBSECTION.
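The trigger in paragraph (3) reduces to a per-tool grievance-rate check over a rolling six-month window. A minimal sketch, assuming the carrier already has per-tool counts for the window; the threshold parameter is a placeholder, since the bill leaves the actual percentage to the Commissioner:

```python
def tools_requiring_model_review(adverse_by_tool, grievances_by_tool,
                                 threshold_pct):
    """Return the AI/algorithm/software tools whose grievance rate over
    a 6-month window exceeds the Commissioner-specified threshold.

    adverse_by_tool: {tool_id: adverse decisions made using that tool}
    grievances_by_tool: {tool_id: those decisions that drew a grievance}
    threshold_pct: hypothetical percentage; no figure appears in the bill.
    """
    flagged = []
    for tool, adverse_count in adverse_by_tool.items():
        if adverse_count == 0:
            continue  # no adverse decisions, no rate to measure
        rate = 100.0 * grievances_by_tool.get(tool, 0) / adverse_count
        if rate > threshold_pct:
            flagged.append(tool)
    return flagged
```

A flagged tool would then enter the carrier's model review process, with findings attached to the next quarterly report under paragraph (1).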
R-02 Regulatory Disclosure & Submissions · R-02.1 · Deployer · Healthcare
Insurance § 15–10A–06(a)(1)(iii)(6)
Plain Language
Carriers must report in their quarterly submissions to the Commissioner whether an AI, algorithm, or other software tool was used in making each adverse decision, alongside existing reporting on the number of adverse decisions, whether prior authorization or step therapy was involved, and the type of service at issue. While the underlying reporting obligation is pre-existing, this bill amends the existing statutory text to add the AI-usage disclosure requirement — carriers must now track and report AI involvement in adverse decisions as part of their standard quarterly reporting.
Statutory Text
6. the number of adverse decisions issued by the carrier under § 15–10A–02(f) of this subtitle, whether the adverse decision involved a prior authorization or step therapy protocol, the type of service at issue in the adverse decisions, and whether an artificial intelligence, algorithm, or other software tool was used in making the adverse decision;
HC-01 Healthcare AI Decision Restrictions · HC-01.1 · HC-01.3 · Deployer · Healthcare
Insurance § 15–10B–05.1(c)(1)-(4), (d)
Plain Language
Carriers and their contracted pharmacy benefits managers and private review agents must ensure that AI tools used in utilization review base determinations on individual enrollee clinical data — medical history, provider-presented clinical circumstances, and clinical records — and do not base determinations solely on group-level datasets. AI tools may not replace the healthcare provider's role in the determination process and may not independently deny, delay, or modify healthcare services. This is a re-enacted existing provision (§ 15–10B–05.1) that the bill incorporates by cross-reference in the new grievance human review requirement. While this section is not newly added by HB 795, it is the substantive standard against which the new human review obligation measures compliance.
Statutory Text
(c) Subject to subsection (d) of this section, an entity subject to this section shall ensure that: (1) an artificial intelligence, algorithm, or other software tool bases its determinations on: (i) an enrollee's medical or other clinical history; (ii) individual clinical circumstances as presented by a requesting provider; or (iii) other relevant clinical information contained in the enrollee's medical or other clinical record; (2) an artificial intelligence, algorithm, or other software tool does not base its determinations solely on a group dataset; (3) the criteria and guidelines for using an artificial intelligence, algorithm, or other software tool for making determinations comply with the requirements of this title; (4) an artificial intelligence, algorithm, or other software tool does not replace the role of a health care provider in the determination process under § 15–10B–07 of this subtitle; (d) An artificial intelligence, algorithm, or other software tool may not deny, delay, or modify health care services.
HC-01 Healthcare AI Decision Restrictions · HC-01.4 · Deployer · Healthcare
Insurance § 15–10B–05.1(c)(5)-(9)
Plain Language
Carriers must ensure that AI tools used in utilization review do not result in unfair discrimination and are applied fairly and equitably in accordance with federal HHS guidance. AI tools must be open to Commissioner inspection for audit or compliance reviews. Written policies and procedures for AI use must be included in the utilization plan filed under § 15–10B–05. AI tool performance, use, and outcomes must be reviewed and revised at least quarterly to maximize accuracy and reliability. These are existing requirements under § 15–10B–05.1 that are re-enacted without amendment; they form the substantive compliance standard that the new human review grievance provision incorporates by cross-reference.
Statutory Text
(5) the use of an artificial intelligence, algorithm, or other software tool does not result in unfair discrimination; (6) an artificial intelligence, algorithm, or other software tool is fairly and equitably applied, including in accordance with any applicable regulations and guidance issued by the federal Department of Health and Human Services; (7) an artificial intelligence, algorithm, or other software tool is open to inspection for audit or compliance reviews by the Commissioner; (8) written policies and procedures are included in the utilization plan submitted under § 15–10B–05 of this subtitle, including how an artificial intelligence, algorithm, or other software tool will be used and what oversight will be provided; (9) the performance, use, and outcomes of an artificial intelligence, algorithm, or other software tool are reviewed and revised, if necessary and at least on a quarterly basis, to maximize accuracy and reliability;
HC-01 Healthcare AI Decision Restrictions · Deployer · Healthcare
Insurance § 15–10B–05.1(c)(10)-(11)
Plain Language
Carriers must ensure that patient data used by AI tools in utilization review is not used beyond its intended and stated purpose, consistent with HIPAA. Carriers must also ensure that AI tools do not directly or indirectly cause harm to enrollees. These are existing requirements under § 15–10B–05.1 re-enacted without amendment, forming part of the compliance standard referenced by the new grievance review provision.
Statutory Text
(10) patient data is not used beyond its intended and stated purpose, consistent with the federal Health Insurance Portability and Accountability Act of 1996, as applicable; and (11) an artificial intelligence, algorithm, or other software tool does not directly or indirectly cause harm to an enrollee.