HC-01
Healthcare AI
Healthcare AI Decision Restrictions
Applies to: Deployer, Professional, Government · Sector: Healthcare, Insurance
Bills — Enacted: 0 unique bills
Bills — Proposed: 34
Last Updated: 2026-03-29
Core Obligation

Entities using AI, algorithms, or automated tools in healthcare insurance coverage determinations, utilization review, prior authorization, or claims adjudication must ensure that such tools do not serve as the sole or primary basis for adverse determinations. Final decisions on medical necessity, claim denials, and coverage modifications must be made by licensed, clinically competent healthcare professionals who review individualized patient clinical circumstances. AI tools used in these contexts must base determinations on individual enrollee medical history and clinical data, not solely on group-level datasets.

Sub-Obligations (8)
HC-01.1 · Prohibition on AI as Sole Decision-Maker (0 enacted, 30 proposed)
AI, algorithms, or software tools may not serve as the sole or primary basis for denying, delaying, modifying, or downcoding healthcare coverage, claims, or prior authorization requests. A licensed human clinical professional must make or independently affirm every adverse determination.

HC-01.2 · Licensed Clinical Peer Review Requirement (0 enacted, 27 proposed)
Any denial, delay, modification, or downgrade of healthcare services based on medical necessity must be reviewed and decided by a qualified clinical peer — a licensed physician or healthcare professional practicing in the same or similar specialty as the treating provider — who considers the provider's recommendation and the enrollee's individual medical history.

HC-01.3 · Individualized Clinical Data Basis (0 enacted, 20 proposed)
AI tools used in utilization review or coverage determinations must base their outputs on individualized enrollee clinical data (medical history, clinical records, individual circumstances) and must not base determinations solely on aggregate or group-level datasets.

HC-01.4 · Periodic AI Tool Review and Revision (0 enacted, 16 proposed)
Health insurers and utilization review organizations must periodically review and revise AI tools used in coverage and clinical determinations to maximize accuracy, reliability, fairness, and compliance with applicable clinical standards.

HC-01.5 · Patient Data Purpose Limitation (0 enacted, 14 proposed)
Patient data used by AI in utilization review or coverage determination functions must not be used beyond its intended and stated purpose, consistent with HIPAA and applicable state health privacy law.

HC-01.6 · Healthcare AI Disclosure to Enrollees and Providers (0 enacted, 12 proposed)
Insurers must provide written disclosure to enrolled patients, contracted providers, and, where applicable, group plan sponsors that AI or algorithms are used in utilization management or coverage determinations. Each claim denial communication must identify whether AI was involved and the named human professional who made the final determination.

HC-01.7 · Healthcare AI Regulatory Filing and Audit Access (0 enacted, 19 proposed)
Insurers must file AI-related utilization review policies and procedures with the applicable state insurance regulator, make such policies available to enrollees and providers upon request, and ensure that AI tools used in utilization review are open to inspection for regulatory audit or compliance review.

HC-01.8 · AI Denial Attestation in Communications (0 enacted, 4 proposed)
Insurers must include in each claim denial communication a statement affirming whether AI, machine learning, or an automated system served as the basis for the denial decision, and must identify the qualified human professional responsible.
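Taken together, HC-01.1 through HC-01.3 describe a decision gate that a claims-processing pipeline could enforce in code. The sketch below is illustrative only and assumes a hypothetical data model: the names `AIRecommendation`, `Reviewer`, and `adjudicate` are invented for this example and do not appear in any mapped bill.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical data model for an AI-assisted utilization review pipeline.
@dataclass
class AIRecommendation:
    outcome: str                     # "approve" or "deny"
    used_individual_data: bool       # enrollee's own clinical history informed the output
    used_group_data_only: bool       # output rested solely on aggregate datasets

@dataclass
class Reviewer:
    name: str
    licensed: bool
    same_or_similar_specialty: bool  # clinical peer test (HC-01.2)

def adjudicate(rec: AIRecommendation, reviewer: Optional[Reviewer]) -> str:
    """Return a final determination, enforcing the human-in-the-loop gate."""
    # HC-01.3: a determination resting solely on group-level data is invalid input.
    if rec.used_group_data_only or not rec.used_individual_data:
        raise ValueError("determination must rest on individualized clinical data")
    if rec.outcome == "approve":
        # Approvals may be automated under several of the mapped bills.
        return "approved"
    # HC-01.1 / HC-01.2: a denial requires a licensed clinical peer's own decision.
    if reviewer is None or not (reviewer.licensed and reviewer.same_or_similar_specialty):
        return "escalate_to_clinical_peer"
    return f"denied_by:{reviewer.name}"
```

The key design point, under this reading of the sub-obligations, is that the AI's "deny" output is never terminal: it either routes to a qualified human or fails closed.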
Bills That Map This Requirement (34 bills)
Passed 2026-10-01 · HC-01.3 · Section 1(b)(1)
Plain Language
When an insurer uses AI to make prior authorization determinations, those determinations must be based on the individual enrollee's medical history, unique clinical circumstances as presented by the treating provider, and any additional clinical information in the enrollee's medical record. This effectively prohibits insurers from making AI-driven prior auth decisions based solely on population-level data or algorithmic generalizations without considering the specific patient's individual clinical profile.
(b)(1) An insurer that uses artificial intelligence to make determinations on requests for prior authorization under health benefit plans shall base determinations on all of the following: a. The enrollee's medical history. b. Any clinical circumstances unique to the enrollee which are presented by the requesting health care provider. c. Additional clinical information about the enrollee which may be present in the enrollee's medical record.
Passed 2026-10-01 · HC-01.1, HC-01.2 · Section 1(b)(3)
Plain Language
Every adverse prior authorization decision — whether a denial, reduction, or deferral — must be made by a licensed physician or other competent health care professional, not by AI alone. The human reviewer must be competent to evaluate the AI's recommendation in the context of the specific clinical issues unique to the enrollee and the treating provider's recommendation. This is a mandatory human-in-the-loop requirement: AI may inform the decision, but a qualified clinician must always make the final adverse determination.
(3) In addition to the requirements listed in subdivisions (1) and (2), a determination to deny, reduce, or defer a request for prior authorization shall always be made by a licensed physician or other health care professional who is competent to evaluate any recommendation or conclusion of artificial intelligence in the light of the specific clinical issues involved in the health care service requested which are unique to the enrollee's circumstances or as recommended by the treating health care provider.
Passed 2026-10-01 · HC-01.6 · Section 1(c)(1)
Plain Language
Insurers that use AI as a tool in utilization review must provide prominent written disclosure of that fact. For group plans, the disclosure goes to the plan sponsor (typically the employer). For individual plans, the disclosure goes directly to the enrollee. This is a general disclosure obligation about AI use in utilization review — it does not require claim-by-claim notification, but rather a prominent written statement that AI contributes information to the utilization review process.
(c) An insurer shall do all of the following: (1) Make prominent written disclosure if artificial intelligence is used as a tool to contribute information in utilization review to: a. The sponsor in the case of a group plan; or b. The enrollee in the case of an individual plan.
Passed 2026-10-01 · HC-01.5 · Section 1(c)(3)
Plain Language
Patient data that AI systems use in utilization review functions must not be repurposed beyond the intended and stated purpose of that utilization review. This is a use limitation obligation consistent with HIPAA — insurers must ensure that clinical data ingested by AI for prior authorization decisions is not used for secondary purposes such as marketing, product development, or other functions not disclosed to the patient. The obligation references HIPAA as the baseline standard but is independently enforceable under this act.
(3) Ensure that patient data used in utilization review functions by artificial intelligence is not used beyond its intended and stated purpose consistent with the federal Health Insurance Portability and Accountability Act (HIPAA), 42 U.S.C. § 1320d et seq.
Pending 2027-01-01 · HC-01.3 · C.R.S. § 10-16-112.7(3)(a)-(b)
Plain Language
Entities using AI for utilization review must ensure that the AI system bases its determinations on the individual patient's medical or clinical history, clinical circumstances as presented by the requesting provider, and other relevant clinical information in the individual's record. The AI system may not base determinations solely on group-level or aggregate data without reference to the individual's own data. This requires AI tools to incorporate individualized clinical inputs rather than relying on population-level algorithms alone.
(3) A PERSON DESCRIBED IN SUBSECTION (2) OF THIS SECTION THAT USES AN ARTIFICIAL INTELLIGENCE SYSTEM TO CONDUCT UTILIZATION REVIEW SHALL ENSURE THAT: (a) THE ARTIFICIAL INTELLIGENCE SYSTEM BASES ITS DETERMINATION ON THE FOLLOWING INFORMATION, AS APPLICABLE: (I) AN INDIVIDUAL'S MEDICAL OR OTHER CLINICAL HISTORY; (II) INDIVIDUAL CLINICAL CIRCUMSTANCES AS PRESENTED BY THE REQUESTING PROVIDER; AND (III) OTHER RELEVANT CLINICAL INFORMATION CONTAINED IN THE INDIVIDUAL'S MEDICAL OR OTHER CLINICAL RECORD; (b) THE ARTIFICIAL INTELLIGENCE SYSTEM DOES NOT BASE ITS DETERMINATIONS SOLELY ON GROUP DATA, WITHOUT REFERENCE TO THE INDIVIDUAL'S DATA;
Pending 2027-01-01 · HC-01.1, HC-01.2 · C.R.S. § 10-16-112.7(5)(a)-(b)
Plain Language
AI systems may be used to assist with utilization review, including to expedite approvals. However, a carrier may not issue a denial of coverage based in whole or in part on medical necessity solely on an AI system's output. Every such denial must be affirmatively reviewed and approved by a licensed clinician, licensed physician, or other regulated professional who is competent to evaluate the specific clinical issues involved. The human reviewer must also review the health benefit plan's terms of coverage for the service in question. This is a mandatory human-in-the-loop requirement for all adverse medical necessity determinations — the AI may recommend denial, but a qualified human must independently approve it before the denial can issue.
(5) (a) NOTWITHSTANDING SUBSECTION (3) OF THIS SECTION, AN ARTIFICIAL INTELLIGENCE SYSTEM MAY BE USED TO ASSIST WITH UTILIZATION REVIEW, INCLUDING EXPEDITED APPROVALS. (b) A CARRIER'S DENIAL OF COVERAGE BASED IN WHOLE OR IN PART ON MEDICAL NECESSITY SHALL NOT BE ISSUED SOLELY ON THE OUTPUT OF AN ARTIFICIAL INTELLIGENCE SYSTEM WITHOUT HUMAN REVIEW AND APPROVAL OF THE DENIAL BY A LICENSED CLINICIAN, LICENSED PHYSICIAN, OR OTHER REGULATED PROFESSIONAL THAT IS COMPETENT TO EVALUATE THE SPECIFIC CLINICAL ISSUES INVOLVED IN THE HEALTH-CARE SERVICES REQUESTED BY THE PROVIDER AND A REVIEW OF THE HEALTH BENEFIT PLAN'S TERMS OF COVERAGE FOR THE HEALTH-CARE SERVICE.
Pending 2027-01-01 · HC-01.4 · C.R.S. § 10-16-112.7(3)(f)
Plain Language
Entities using AI for utilization review must periodically review the AI system's performance, use, and outcomes to maximize accuracy and reliability. The bill does not specify a minimum review cadence, but the obligation is ongoing and not limited to pre-deployment testing. This is a continuing operational requirement to ensure the AI system remains accurate over time.
(f) THE ARTIFICIAL INTELLIGENCE SYSTEM'S PERFORMANCE, USE, AND OUTCOMES ARE PERIODICALLY REVIEWED TO MAXIMIZE ACCURACY AND RELIABILITY;
Pending 2027-01-01 · HC-01.5 · C.R.S. § 10-16-112.7(3)(g)
Plain Language
Patient health data used by AI systems in utilization review must not be used beyond its intended or stated purpose. This is a purpose-limitation requirement consistent with HIPAA and applicable state health privacy law. Entities must ensure their AI systems do not repurpose patient clinical data collected for utilization review for other uses such as marketing, training unrelated models, or secondary analytics.
(g) AN INDIVIDUAL'S HEALTH DATA IS NOT USED BEYOND ITS INTENDED OR STATED PURPOSE, CONSISTENT WITH APPLICABLE STATE AND FEDERAL LAWS;
Pending 2027-01-01 · C.R.S. § 10-16-112.7(3)(h)
Plain Language
AI systems used for utilization review must have criteria and guidelines that comply with all other applicable state and federal utilization review and coverage laws. This ensures that AI-driven utilization review does not circumvent existing legal requirements governing clinical criteria, coverage standards, and review processes that apply to human-conducted utilization review.
(h) THE ARTIFICIAL INTELLIGENCE SYSTEM'S OR ALGORITHM'S CRITERIA AND GUIDELINES COMPLY WITH OTHER APPLICABLE STATE OR FEDERAL LAWS CONCERNING UTILIZATION REVIEW AND COVERAGE FOR HEALTH-CARE SERVICES.
Passed 2027-01-01 · HC-01.1, HC-01.2 · O.C.G.A. § 33-46-7.1(c)
Plain Language
AI tools may participate in utilization review processes — including automating tasks and reducing administrative burdens — but may not issue an adverse determination to a patient on their own. Before any adverse determination is issued, a natural person qualifying as a private review agent or utilization review entity must conduct the utilization review, and a clinical peer must participate in that review. The clinical peer's judgment is supreme: AI may never override or supersede it. This effectively requires human-in-the-loop review with clinical peer participation for every adverse coverage decision, while permitting AI to support non-adverse and administrative functions.
Artificial intelligence systems, artificial intelligence, and other software tools may be used to automate tasks, reduce administrative burdens, participate in decision-making processes, and perform other lawful functions; provided, however, that such systems shall not issue an adverse determination to a patient until a natural person qualifying as a private review agent or a utilization review entity conducts a utilization review in which a clinical peer participates. In no event shall artificial intelligence systems, artificial intelligence, or other software tools supersede the judgment of such clinical peer.
Passed 2027-01-01 · O.C.G.A. § 33-46-7.1(b)
Plain Language
Private review agents and utilization review entities are permitted to use AI tools, but only if those tools are incorporated into a utilization review plan that complies with the existing standards in Chapter 46 and rules adopted by the Insurance Commissioner. This provision functions as a conditional authorization — it confirms AI use is lawful but imposes a compliance prerequisite that the AI tools must be part of a plan meeting existing regulatory standards. In practice, this means entities must ensure their AI tools are documented within and governed by their utilization review plans, which are subject to Commissioner oversight.
Private review agents and utilization review entities may use artificial intelligence systems, artificial intelligence, or other software tools, provided that such systems or tools are a part of a utilization review plan that is in accordance with the standards set forth in this chapter and the rules and regulations adopted by the Commissioner.
Withdrawn 2027-01-01 · HC-01.1 · Section 2 (new § 514F.8, subsection 2A)
Plain Language
Utilization review organizations may use AI-based algorithms for initial review of prior authorization requests. However, for requests based on medical necessity, AI may not serve as the sole basis for a decision to deny, delay, or downgrade the request. A human decision-maker must independently review and affirm any adverse determination — AI can inform but not solely drive the outcome. This is a permissive-use-with-restriction framework: AI is allowed for initial screening, but a human must make the final call on adverse medical necessity decisions.
2A. A utilization review organization may use an artificial intelligence-based algorithm to provide an initial review of a request for prior authorization, except that, for a prior authorization request for a health care service based on medical necessity, a utilization review organization shall not use an artificial intelligence-based algorithm as the sole basis for the utilization review organization's decision to deny, delay, or downgrade the prior authorization request.
Withdrawn 2027-01-01 · HC-01.1, HC-01.2 · Section 3 (new § 514F.8A, subsections 2-3)
Plain Language
A utilization review organization may not deny or downgrade a prior authorization request unless: (1) the decision is made by a qualified reviewer (if the requesting provider is a physician) or a clinical peer (if not), both of whom must practice in the same or similar specialty and have relevant clinical expertise; (2) the URO provides the requesting provider a signed written statement citing specific reasons for the denial, a written explanation of the appeals process (also provided to the covered person), and a written attestation confirming the reviewer's qualifications including name, NPI, board certifications, specialty, and education; and (3) within seven business days of the denial, the URO conducts a consultation between the requesting provider and the qualified reviewer or clinical peer. This creates a multi-step procedural requirement that must be satisfied for every denial or downgrade.
2. A utilization review organization shall not deny or downgrade a request for prior authorization unless all of the following requirements are met:
a. The decision to deny or downgrade the request is made by either of the following:
(1) A qualified reviewer, if the health care provider requesting prior authorization is a physician.
(2) A clinical peer, if the health care provider requesting prior authorization is not a physician.
b. The utilization review organization provides the health care provider that requested the prior authorization all of the following:
(1) A written statement that cites the specific reasons for the denial or downgrade, including any coverage criteria or limits, or clinical criteria, that the utilization review organization considered or that was the basis for the denial or downgrade. The written statement shall be signed by either of the following:
(a) The qualified reviewer that made the denial or downgrade determination, if the health care provider that requested prior authorization is a physician.
(b) The clinical peer that made the denial or downgrade determination, if the health care provider that requested prior authorization is not a physician.
(2) A written explanation of the utilization review organization's appeals process. The utilization review organization shall also provide the written explanation to the covered person for whom prior authorization was requested.
(3) A written attestation that is either of the following:
(a) If the health care provider that requested prior authorization is a physician, a written attestation that the qualified reviewer who made the denial or downgrade determination practices in the same or a similar specialty as the health care provider, and has the requisite training and expertise to treat the medical condition that is the subject of the request for prior authorization, including sufficient knowledge to determine whether the health care service is medically necessary or clinically appropriate. The attestation shall include the qualified reviewer's name, national provider identifier, board certifications, specialty expertise, and educational background.
(b) If the health care provider that requested prior authorization is not a physician, a written attestation that the clinical peer who made the denial or downgrade determination practices in the same or a similar specialty as the health care provider, and the clinical peer has experience managing the specific medical condition or administering the health care service that is the subject of the request for prior authorization. The attestation shall include the clinical peer's name, national provider identifier, board certifications, specialty expertise, and educational background.
3. A utilization review organization that denies a request for prior authorization shall, no later than seven business days after the date that the utilization review organization notifies the requesting health care provider of the denial, conduct a consultation either in person or remotely, as follows:
a. Between the health care provider and a qualified reviewer, if the health care provider requesting prior authorization is a physician.
b. Between the health care provider and a clinical peer, if the health care provider requesting prior authorization is not a physician.
Withdrawn 2027-01-01 · HC-01.1 · Section 3 (new § 514F.8A, subsection 4)
Plain Language
When a denial or downgrade is appealed by the requesting provider or covered person, the appeal must be conducted by a qualified reviewer or clinical peer (matched to the requesting provider's professional status) who was not involved in the initial adverse determination. The appeal reviewer must consider the known clinical aspects of the services under review, including the covered person's medical records and any medical literature submitted by the provider. This ensures independent review and individualized clinical assessment on appeal.
4. a. If a utilization review organization's decision to deny or downgrade a request for prior authorization is appealed by the requesting health care provider or covered person, the appeal shall be conducted by either of the following:
(1) A qualified reviewer, if the health care provider requesting prior authorization is a physician.
(2) A clinical peer, if the health care provider requesting prior authorization is not a physician.
b. A qualified reviewer or clinical peer involved in the initial denial or downgrade determination of a request for prior authorization that is the subject of an appeal shall not conduct the appeal.
c. When conducting an appeal of a request for prior authorization, the qualified reviewer or clinical peer shall consider the known clinical aspects of the health care services under review, including but not limited to medical records relevant to the covered person's medical condition that is the subject of the health care services for which prior authorization is requested, and any relevant medical literature submitted by the health care provider as part of the appeal.
Withdrawn 2027-01-01 · HC-01.6 · Section 3 (new § 514F.8A, subsection 1, paragraph j)
Plain Language
The qualified reviewer definition establishes the substantive clinical competency standard that the bill's peer review requirements enforce. A qualified reviewer must be a licensed physician practicing in the same or similar specialty as the requesting provider, with sufficient training and expertise to evaluate medical necessity for the specific condition at issue. This definition, combined with the operative provisions requiring qualified reviewer sign-off on denials and downgrades, ensures that adverse prior authorization determinations are made by clinicians with directly relevant expertise — not generalist reviewers or AI systems alone.
"Qualified reviewer" means a physician that meets all of the following requirements: (1) The physician practices in the same or a similar specialty as the health care provider that requested a prior authorization. (2) The physician has the training and expertise to treat the specific medical condition that is the subject of a request for prior authorization, including sufficient knowledge to determine whether the health care service that is the subject of the request is medically necessary or clinically appropriate. (3) The physician is employed by or contracted with the utilization review organization or health carrier to which a health care provider submitted a request for prior authorization.
Withdrawn 2027-01-01 · HC-01.8 · Section 3 (new § 514F.8A, subsection 2, paragraph b, subparagraph 1)
Plain Language
Each denial or downgrade communication must be signed by the named qualified reviewer or clinical peer who made the determination. This serves a disclosure function analogous to HC-01.8: the requesting provider receives a signed statement identifying the human professional responsible for the adverse decision, ensuring accountability and enabling the provider to evaluate whether the reviewer met the bill's qualification standards.
The written statement shall be signed by either of the following: (a) The qualified reviewer that made the denial or downgrade determination, if the health care provider that requested prior authorization is a physician. (b) The clinical peer that made the denial or downgrade determination, if the health care provider that requested prior authorization is not a physician.
Pending 2027-01-01 · HC-01.1 · Iowa Code § 514F.8, subsection 2A (new)
Plain Language
Utilization review organizations may use AI-based algorithms for initial review of prior authorization requests. However, when a prior authorization request is based on medical necessity, the URO may not rely on an AI algorithm as the sole basis for denying, delaying, or downgrading the request. This means a human reviewer must be involved in any adverse determination on medical necessity grounds — the AI can screen and triage, but cannot make the final adverse call alone.
2A. A utilization review organization may use an artificial intelligence-based algorithm to provide an initial review of a request for prior authorization, except that, for a prior authorization request for a health care service based on medical necessity, a utilization review organization shall not use an artificial intelligence-based algorithm as the sole basis for the utilization review organization's decision to deny, delay, or downgrade the prior authorization request.
Pending 2027-01-01 · HC-01.1, HC-01.2 · Iowa Code § 514F.8A(2) (new)
Plain Language
A URO may not deny or downgrade a prior authorization request unless all of the following occur: (1) the decision is made by a qualified reviewer (if the requesting provider is a physician) or a clinical peer (if the requesting provider is not a physician) — both must practice in the same or similar specialty; (2) the URO provides the requesting provider a signed written statement citing the specific reasons for the denial or downgrade, including the coverage or clinical criteria relied upon; (3) the URO provides both the requesting provider and the covered person a written explanation of the appeals process; and (4) the URO provides a written attestation identifying the reviewer by name, NPI, board certifications, specialty expertise, and educational background, and attesting to their qualifications to review the specific medical condition at issue. The reviewer type is keyed to the requesting provider type — physician requests require physician reviewers, non-physician requests require clinical peers.
2. A utilization review organization shall not deny or downgrade a request for prior authorization unless all of the following requirements are met: a. The decision to deny or downgrade the request is made by either of the following: (1) A qualified reviewer, if the health care provider requesting prior authorization is a physician. (2) A clinical peer, if the health care provider requesting prior authorization is not a physician. b. The utilization review organization provides the health care provider that requested the prior authorization all of the following: (1) A written statement that cites the specific reasons for the denial or downgrade, including any coverage criteria or limits, or clinical criteria, that the utilization review organization considered or that was the basis for the denial or downgrade. The written statement shall be signed by either of the following: (a) The qualified reviewer that made the denial or downgrade determination, if the health care provider that requested prior authorization is a physician. (b) The clinical peer that made the denial or downgrade determination, if the health care provider that requested prior authorization is not a physician. (2) A written explanation of the utilization review organization's appeals process. The utilization review organization shall also provide the written explanation to the covered person for whom prior authorization was requested. 
(3) A written attestation that is either of the following: (a) If the health care provider that requested prior authorization is a physician, a written attestation that the qualified reviewer who made the denial or downgrade determination practices in the same or a similar specialty as the health care provider, and has the requisite training and expertise to treat the medical condition that is the subject of the request for prior authorization, including sufficient knowledge to determine whether the health care service is medically necessary or clinically appropriate. The attestation shall include the qualified reviewer's name, national provider identifier, board certifications, specialty expertise, and educational background. (b) If the health care provider that requested prior authorization is not a physician, a written attestation that the clinical peer who made the denial or downgrade determination practices in the same or a similar specialty as the health care provider, and the clinical peer has experience managing the specific medical condition or administering the health care service that is the subject of the request for prior authorization. The attestation shall include the clinical peer's name, national provider identifier, board certifications, specialty expertise, and educational background.
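The attestation provision above enumerates a concrete set of required data fields. As an illustration only, a compliance system might model the record roughly as follows; the class and function names (`ReviewerAttestation`, `required_reviewer_type`, `attestation_complete`) are hypothetical, not statutory terms.

```python
from dataclasses import dataclass
from typing import List

# Illustrative record of the attestation fields enumerated in the quoted
# Iowa text: name, NPI, board certifications, specialty, and education.
@dataclass
class ReviewerAttestation:
    name: str
    npi: str                        # national provider identifier
    board_certifications: List[str]
    specialty_expertise: str
    educational_background: str
    reviewer_type: str              # "qualified_reviewer" or "clinical_peer"

def required_reviewer_type(requesting_provider_is_physician: bool) -> str:
    """Reviewer type is keyed to the requesting provider's professional status."""
    return "qualified_reviewer" if requesting_provider_is_physician else "clinical_peer"

def attestation_complete(a: ReviewerAttestation) -> bool:
    # Every enumerated field must be present for the written attestation.
    return all([a.name, a.npi, a.board_certifications,
                a.specialty_expertise, a.educational_background])
```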
Pending 2027-01-01 · HC-01.2 · Iowa Code § 514F.8A(3) (new)
Plain Language
When a URO denies a prior authorization request, it must arrange a consultation — in person or remote — between the requesting provider and the appropriate reviewer (qualified reviewer for physician requestors, clinical peer for non-physician requestors) within seven business days of notifying the provider of the denial. This is a mandatory post-denial peer-to-peer review opportunity that allows the requesting provider to discuss the clinical basis for the denial directly with the reviewer.
3. A utilization review organization that denies a request for prior authorization shall, no later than seven business days after the date that the utilization review organization notifies the requesting health care provider of the denial, conduct a consultation either in person or remotely, as follows: a. Between the health care provider and a qualified reviewer, if the health care provider requesting prior authorization is a physician. b. Between the health care provider and a clinical peer, if the health care provider requesting prior authorization is not a physician.
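The seven-business-day window is the kind of deadline a compliance tracker would compute automatically. A minimal sketch, assuming business days mean weekdays only: this skips Saturdays and Sundays but not state holidays, and the statute's own day-counting conventions may differ, so treat it as illustrative rather than authoritative.

```python
from datetime import date, timedelta

def consultation_deadline(notice_date: date, business_days: int = 7) -> date:
    """Last permissible day for the post-denial consultation: `business_days`
    business days after the denial notice, counting weekdays only."""
    d = notice_date
    remaining = business_days
    while remaining > 0:
        d += timedelta(days=1)
        if d.weekday() < 5:   # Mon=0 .. Fri=4; weekends don't count
            remaining -= 1
    return d
```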
Pending 2025-01-01 · HC-01.1, HC-01.2 · Section 10(b)
Plain Language
Health insurance issuers may not deny, reduce, or terminate health insurance coverage or benefits based solely on the output of an AI system or predictive model. Every such AI-informed adverse decision must be meaningfully reviewed by a human who has authority to override the AI's determination, following procedures the Department of Insurance will establish by rule. When the adverse outcome constitutes an adverse determination under the Managed Care Reform and Patient Rights Act, the human reviewer must be a clinical peer as defined under that Act — i.e., a licensed physician or health professional in the same or similar specialty as the treating provider.
(b) A health insurance issuer authorized to do business in this State shall not issue an adverse consumer outcome with regard to the denial, reduction, or termination of health insurance coverage or benefits that result solely from the use or application of any AI system or predictive model. Any decision-making process concerning the denial, reduction, or termination of insurance plans or benefits that results from the use of AI systems or predictive models shall be meaningfully reviewed, in accordance with review procedures established by Department rules, by an individual with authority to override the AI systems and the determinations of the AI systems. When an adverse consumer outcome is an adverse determination regulated under the Managed Care Reform and Patient Rights Act, the individual with authority to override the AI systems and the determinations of the AI systems shall be a clinical peer as required and defined under that Act.
Pending 2025-01-01
HC-01.6, HC-01.7
Section 15
Plain Language
The Department of Insurance may adopt rules establishing disclosure standards for health insurance issuers' use of AI systems affecting consumers. Potential rule content includes pre-use notice, post-adverse-decision notice, explanation of how personal information informs decisions, a process for correcting inaccurate information, and appeal instructions. This is currently a rulemaking authorization — the specific disclosure obligations will not be operative until the Department promulgates rules. However, issuers should anticipate that future rules may require all of these disclosure elements and should begin developing compliance infrastructure accordingly.
Section 15. Disclosure of AI system utilization. The Department of Insurance may adopt rules that include standards for the full and fair disclosure of a health insurance issuer's use of AI systems that may impact consumers, that set forth the manner, content, and required disclosures including notice before the use of AI systems, notice after an adverse decision, the way personal information is used to inform decisions, a process for correcting inaccurate information, and instructions for appealing decisions.
Pending 2025-06-01
HC-01.1
Section 10(b)
Plain Language
Insurers may not deny, reduce, or terminate insurance plans or benefits based solely on AI system or predictive model outputs. Every decision-making process involving AI that results in such adverse actions must be meaningfully reviewed by a human individual who has the authority to override the AI system's determination. The review procedures will be further specified by Department rules. This is a dual obligation: (1) an outright prohibition on fully automated adverse decisions, and (2) a requirement for meaningful human review with override authority on any AI-assisted adverse decision. Note that the definition of adverse consumer outcome is broad — it encompasses both decisions that violate regulatory standards and any claim denial determined by an AI system.
(b) An insurer authorized to do business in this State shall not issue an adverse consumer outcome with regard to the denial, reduction, or termination of insurance plans or benefits that result solely from the use or application of any AI system or predictive model. Any decision-making process concerning the denial, reduction, or termination of insurance plans or benefits that results from the use of AI systems or predictive models shall be meaningfully reviewed, in accordance with review procedures established by Department rules, by an individual with authority to override the AI systems and their determinations.
Pending 2025-06-01
HC-01.7
Section 15
Plain Language
The Department of Insurance is authorized to adopt rules establishing standards for how insurers must disclose their use of AI systems, including the manner, content, and specific required disclosures. While this section is permissive rather than mandatory (the Department 'may' adopt such rules), once rules are adopted, insurers will be required to comply with whatever disclosure standards are established. Insurers should anticipate potential disclosure requirements and begin preparing compliance infrastructure for transparency about AI use in insurance decision-making.
The Department of Insurance may adopt rules that include standards for the full and fair disclosure of an insurer's use of AI systems that set forth the manner, content, and required disclosures.
Pending 2026-01-01
HC-01.3
Section 1(c)(1)(A)-(B)
Plain Language
Health insurers and utilization review organizations must ensure that any AI, algorithm, or software tool used for utilization review bases its determinations on the individual enrollee's medical history, clinical circumstances as presented by the requesting provider, and other relevant clinical information in the enrollee's record. The tool may not make determinations based solely on a group dataset — it must incorporate individualized patient data. This requires configuring AI tools to pull and consider individual-level clinical inputs rather than relying exclusively on population-level or actuarial data.
Each health insurer and utilization review organization shall ensure that the artificial intelligence, algorithm or other software tool used to review and approve, modify and delay or deny requests by providers: (A) Makes a determination based on the following information, as applicable: (i) An enrollee's medical or other clinical history; (ii) individual clinical circumstances as presented by the requesting healthcare provider; and (iii) other relevant clinical information contained in the enrollee's medical or other clinical record; (B) does not make a determination based solely on a group dataset;
Pending 2026-01-01
HC-01.1, HC-01.2
Section 1(c)(2), Section 1(d)
Plain Language
AI, algorithms, and software tools are categorically prohibited from denying, delaying, or modifying healthcare services based on medical necessity — even in part. All medical necessity determinations must be made exclusively by a licensed physician or healthcare professional competent in the relevant clinical specialty, who must affirmatively review the treating provider's recommendation, the enrollee's medical history, and individual clinical circumstances. Subsection (d) reinforces this by prohibiting any individual who is not a qualified licensed physician or healthcare professional from denying or modifying authorization requests on medical necessity grounds. In practice, AI tools may inform or support the review process, but the final adverse determination must rest with a qualified human clinician.
(2) Notwithstanding the provisions of paragraph (1), the artificial intelligence, algorithm or other software tool shall not deny, delay or modify healthcare services based in whole or in part on medical necessity. A determination of medical necessity shall be made only by a licensed physician or a licensed healthcare professional who is competent to evaluate the specific clinical issues involved in the healthcare services requested by the healthcare provider by reviewing and considering such healthcare provider's recommendation, the enrollee's medical or other clinical history, as applicable, and individual clinical circumstances. (d) No individual, other than a licensed physician or a licensed healthcare professional who is competent to evaluate the specific clinical issues involved in the healthcare services requested by the provider, shall deny or modify requests for authorization of healthcare services for an enrollee for reasons of medical necessity.
Pending 2026-01-01
HC-01.4
Section 1(c)(1)(F)
Plain Language
Health insurers and utilization review organizations must periodically review and revise their AI, algorithm, or software tools used in utilization review to maximize accuracy and reliability. The statute does not specify a minimum review frequency, but the obligation is ongoing and requires affirmative action — merely deploying the tool and leaving it in place without reassessment would not comply.
Each health insurer and utilization review organization shall ensure that the artificial intelligence, algorithm or other software tool used to review and approve, modify and delay or deny requests by providers: (F) is periodically reviewed and revised to maximize accuracy and reliability;
Pending 2026-01-01
HC-01.5
Section 1(c)(1)(G)
Plain Language
Health insurers and utilization review organizations must ensure that their AI or software tools use patient data in compliance with HIPAA. This codifies a HIPAA compliance requirement specifically in the context of AI-driven utilization review, reinforcing that patient data fed into or processed by AI tools remains subject to existing federal health privacy protections.
Each health insurer and utilization review organization shall ensure that the artificial intelligence, algorithm or other software tool used to review and approve, modify and delay or deny requests by providers: (G) uses patient data in compliance with the health insurance portability and accountability act of 1996, public law 104-191;
Pending 2026-01-01
HC-01.7
Section 1(e), Section 1(f)(1)-(3)
Plain Language
Health insurers must establish written policies and procedures describing their utilization review process for medical necessity decisions, and those policies must require that decisions be consistent with criteria supported by clinical principles. Insurers must file these policies and procedures with the Kansas Department of Insurance and must disclose them to enrollees, healthcare providers, and the public upon request. This creates three distinct obligations: (1) establish the written policies, (2) file them with the regulator, and (3) make them available to stakeholders on request. The filing requirement subjects AI-related utilization review practices to regulatory inspection.
(e) Each health insurer subject to this act shall establish written policies and procedures that: (1) Describe the process by which the health benefit plan prospectively, retrospectively or concurrently reviews and approves, modifies and delays or denies requests, based in whole or in part on medical necessity, by healthcare providers of healthcare services for health benefit plan enrollees; and (2) require that decisions based on the medical necessity of proposed healthcare services are consistent with criteria or guidelines that are supported by clinical principles and processes. (f) (1) Each health insurer subject to this act shall file with the department such health insurer's policies and procedures establishing the process by which such health insurer prospectively, retrospectively or concurrently reviews and approves, modifies and delays or denies requests, based in whole or in part on medical necessity, by providers of healthcare services for health benefit plan enrollees. (2) Pursuant to paragraph (1), such policies and procedures shall ensure that healthcare decisions based on the medical necessity of proposed healthcare services are consistent with criteria or guidelines that are supported by clinical principles and processes. (3) Each health insurer shall disclose such policies and procedures to insureds, healthcare providers and the public upon request.
Pending 2027-01-01
HC-01.1, HC-01.3
R.S. 22:1260.49(C)(1)-(3)
Plain Language
Covered entities may not use AI or automated decision systems in utilization review in a way that discriminates under federal or state law, violates HHS regulations or guidance, or delays, denies, or modifies healthcare services. AI systems must not base determinations solely on group-level data sets — they must consider the individual insured's medical history, clinical circumstances as presented by the requesting provider, and other relevant individual clinical information. This is a categorical prohibition on using AI as the basis for adverse coverage actions and a requirement to individualize AI-assisted determinations.
C.(1) No entity subject to this Section shall utilize an artificial intelligence or an automated decision system that does any of the following: (a) Engages in discrimination that is prohibited by federal or state law. (b) Violates regulations or guidance disseminated by the United States Department of Health and Human Services. (c) Delays, denies, or modifies healthcare services. (2) Artificial intelligence or an automated decision system used in the determination process shall not base its determination or determination recommendation solely on a group data set. (3) Artificial intelligence or an automated decision system shall base its determination or determination recommendation on any of the following: (a) The insured's medical or other clinical history. (b) Individual clinical circumstances as presented by a requesting provider. (c) Other relevant clinical information contained in the insured's medical or other clinical history.
Pending 2027-01-01
HC-01.1, HC-01.2
R.S. 22:1260.49(D)(1)-(2)(a)-(b)
Plain Language
Covered entities may not replace a healthcare provider's role in the utilization review determination process with AI. Every adverse determination must be signed by a licensed physician who personally reviewed the medical record and bears responsibility for the clinical judgment. Before making any adverse determination on a medical necessity claim or a prior authorization claim, the entity must require independent judgment from human utilization review personnel — AI may inform but not drive the adverse decision. Entities must also comply with applicable HHS regulations and guidance on AI use. This effectively requires a human-in-the-loop for all adverse determinations, with a licensed physician sign-off requirement on top.
D.(1)(a) An entity subject to this Section shall not replace the role of a healthcare provider in the determination process with artificial intelligence or an automated decision system. (b) Any adverse determination shall be signed by a licensed physician who personally reviewed the medical record and is responsible for the clinical judgment. (2) An entity subject to this Section shall do all of the following: (a) Require independent judgment from human utilization review personnel in the utilization review process before making an adverse determination for either of the following: (i) Any claim submitted by a provider based on medical necessity. (ii) Any claim submitted by a provider for a procedure requiring prior authorization. (b) Comply with applicable regulations and guidance for artificial intelligence or automated decision system use issued by the United States Department of Health and Human Services.
Pending 2027-01-01
HC-01.4
R.S. 22:1260.49(D)(2)(c)
Plain Language
Covered entities must conduct at least quarterly reviews of the performance, use, and outcomes of any AI or automated decision system used in utilization review. Based on the review findings, the entity must revise its policies and procedures as needed to maintain compliance. This quarterly minimum is notably more frequent than the review cadence required in most other jurisdictions.
(c) Review the performance, use, and outcomes of an artificial intelligence or an automated decision system at a minimum of once per quarter, and revise the policies and procedures as needed to ensure compliance with this Section.
Pending 2027-01-01
HC-01.5
R.S. 22:1260.49(D)(2)(d)
Plain Language
Patient data used by AI or automated decision systems in utilization review must be limited to its intended and stated purpose, consistent with HIPAA. This is a purpose limitation requirement that constrains secondary uses of patient data within AI-driven coverage determination processes.
(d) Use patient data within its intended and stated purpose consistent with the federal Health Insurance Portability and Accountability Act of 1996, as applicable.
Pending 2027-01-01
HC-01.6, HC-01.8
R.S. 22:1260.49(D)(3)(a)-(b)
Plain Language
Health insurance issuers must disclose to both the enrollee and the Louisiana Department of Insurance whenever AI or an automated decision system was used in any part of a coverage determination or utilization review. The issuer must also document the extent to which AI influenced each determination. This is a dual-audience disclosure — the enrollee receives notice that AI was involved, and the department receives the same notification. The documentation requirement creates an internal record that can be inspected by the commissioner under subsection F.
(3)(a) A health insurance issuer shall disclose to the enrollee and the department when artificial intelligence or an automated decision system was used in any part of a coverage determination or utilization review. (b) The health insurance issuer shall document the extent to which any artificial intelligence or automated decision system influenced the determination.
Pending 2027-01-01
HC-01.8
R.S. 22:1260.44(E)(2)
Plain Language
When issuing a written or electronic adverse determination notice, the health insurance issuer must now include — in addition to the existing requirements of stating all reasons, clinical rationale, and appeal instructions — a statement of whether AI or an automated decision system was used in the determination process. This amends existing adverse determination notice requirements to add a mandatory AI disclosure element.
(2) A health insurance issuer shall include in its written or electronic notification of an adverse determination all of the reasons for the determination, including the clinical rationale, and the instructions for initiating an appeal or reconsideration of the determination, and whether artificial intelligence or an automated decision system, as defined in R.S. 22:1260.49, was used in the determination process.
Pending 2027-01-01
HC-01.7
R.S. 22:1260.49(F)(1)-(4)
Plain Language
Covered entities must allow the Commissioner of Insurance to inspect and audit their AI and automated decision systems, including review of policies and procedures governing AI use in determinations. The commissioner may require submission and independent review of any such system. Upon request, health insurance issuers must disclose data sources, training parameters, and validation methods used to develop AI systems for coverage determinations. The issuer bears the cost of any independent review the commissioner orders. This grants broad regulatory audit authority and creates a responsive disclosure obligation for technical AI system details.
F.(1) An entity subject to this Section shall allow the commissioner to inspect and audit the artificial intelligence or automated decision system for compliance with this Section and review policies and procedures for how the artificial intelligence or automated decision system is used in the determination process. (2) The commissioner may require submission and independent review of any artificial intelligence or automated decision system used in utilization review. (3) Upon request of the commissioner, a health insurance issuer shall disclose the data sources, training parameters, and validation methods used to develop any artificial intelligence or automated decision system used in coverage determinations. (4) The health insurance issuer shall pay for any independent review that the commissioner deems necessary.
Pending 2025-10-08
HC-01.3
G.L. c. 176O, § 12(g)(1)(A)-(B)
Plain Language
Carriers and utilization review organizations using AI for utilization review must ensure the AI tool bases its determinations on the individual insured's medical history, the clinical circumstances presented by the requesting provider, and other relevant clinical information from the insured's records. The AI tool may not base determinations solely on group-level or aggregate datasets — it must incorporate individualized patient data. This applies to prospective, retrospective, and concurrent utilization review functions.
(A) The artificial intelligence, algorithm, or other software tool bases its determination on the following information, as applicable: (i) An insured's medical or other clinical history. (ii) Individual clinical circumstances as presented by the requesting provider. (iii) Other relevant clinical information contained in the insured's medical or other clinical record. (B) The artificial intelligence, algorithm, or other software tool does not base its determination solely on a group dataset.
Pending 2025-10-08
HC-01.1, HC-01.2
G.L. c. 176O, § 12(g)(1)(D), (g)(2)
Plain Language
AI tools used in utilization review may not supplant healthcare provider decision-making. More specifically, AI may not deny, delay, or modify healthcare services on the basis of medical necessity — that determination must be made exclusively by a licensed physician or licensed healthcare professional who is competent to evaluate the specific clinical issues involved. The human reviewer must consider the requesting provider's recommendation, the insured's clinical history, and individual clinical circumstances. This is a hard prohibition: AI cannot independently make any adverse medical necessity determination, even with human oversight available after the fact.
(D) The artificial intelligence, algorithm, or other software tool does not supplant health care provider decision-making. (2) Notwithstanding paragraph (1), the artificial intelligence, algorithm, or other software tool shall not deny, delay, or modify health care services based, in whole or in part, on medical necessity. A determination of medical necessity shall be made only by a licensed physician or a licensed health care professional competent to evaluate the specific clinical issues involved in the health care services requested by the provider, as provided in subsection (a), by reviewing and considering the requesting provider's recommendation, the insured's medical or other clinical history, as applicable, and individual clinical circumstances.
Pending 2025-10-08
HC-01.7
G.L. c. 176O, § 12(g)(1)(G)-(H)
Plain Language
AI tools used in utilization review must be open to inspection for audit or compliance review by both the Division of Insurance and the Executive Office of Health and Human Services. Additionally, carriers must include disclosures about the use and oversight of AI tools in their written utilization review policies and procedures required under existing Section 12(a). This creates both a regulatory access obligation and a documentation disclosure obligation.
(G) The artificial intelligence, algorithm, or other software tool is open to inspection for audit or compliance reviews by the division and by the executive office of health and human services pursuant to applicable state and federal law. (H) Disclosures pertaining to the use and oversight of the artificial intelligence, algorithm, or other software tool are contained in the written policies and procedures, as required by subsection (a).
Pending 2025-10-08
HC-01.4
G.L. c. 176O, § 12(g)(1)(I)
Plain Language
Carriers and utilization review organizations must periodically review the performance, use, and outcomes of AI tools used in utilization review and revise the tools as needed to maximize accuracy and reliability. This is an ongoing operational obligation, not a one-time pre-deployment check. The statute does not specify review frequency, methodology, or documentation requirements, leaving those details to the carrier's discretion.
(I) The artificial intelligence, algorithm, or other software tool's performance, use, and outcomes are periodically reviewed and revised to maximize accuracy and reliability.
Pending 2025-10-08
HC-01.5
G.L. c. 176O, § 12(g)(1)(J)
Plain Language
Patient data used by AI tools in utilization review functions must not be used beyond its intended and stated purpose. This data use limitation applies in addition to existing HIPAA and state health privacy law requirements. Carriers must ensure that AI tools and their vendors do not repurpose patient data collected during utilization review for other uses such as training models, marketing, or unrelated analytics.
(J) Patient data is not used beyond its intended and stated purpose, and consistent with state and federal law.
Pre-filed 2025-01-10
HC-01.3
Ch. 176O § 12(g)(1)(A)-(B)
Plain Language
AI tools used for utilization review must base their determinations on the individual insured's medical history, clinical circumstances as presented by the requesting provider, and other relevant clinical information in the insured's record. The tool may not base determinations solely on aggregate or group-level datasets. This means carriers must configure or select AI tools that ingest and weigh individualized patient data — a tool that applies only population-level statistical models without considering the specific patient's clinical record would violate this requirement.
(A) The artificial intelligence, algorithm, or other software tool bases its determination on the following information, as applicable: (i) An insured's medical or other clinical history. (ii) Individual clinical circumstances as presented by the requesting provider. (iii) Other relevant clinical information contained in the insured's medical or other clinical record. (B) The artificial intelligence, algorithm, or other software tool does not base its determination solely on a group dataset.
Pre-filed 2025-01-10
HC-01.1, HC-01.2
Ch. 176O § 12(g)(2)
Plain Language
AI tools are categorically prohibited from denying, delaying, or modifying health care services on the basis of medical necessity. All medical necessity determinations must be made by a licensed physician or a licensed health care professional who is competent to evaluate the specific clinical issues at hand. That professional must review and consider the requesting provider's recommendation, the insured's medical/clinical history, and individual clinical circumstances. This means AI may inform or assist a utilization review process, but the final adverse determination on medical necessity must always be made by a qualified human clinician — the AI cannot serve as the decision-maker.
(2) Notwithstanding paragraph (1), the artificial intelligence, algorithm, or other software tool shall not deny, delay, or modify health care services based, in whole or in part, on medical necessity. A determination of medical necessity shall be made only by a licensed physician or a licensed health care professional competent to evaluate the specific clinical issues involved in the health care services requested by the provider, as provided in subsection (a), by reviewing and considering the requesting provider's recommendation, the insured's medical or other clinical history, as applicable, and individual clinical circumstances.
Pre-filed 2025-01-10
HC-01.4
Ch. 176O § 12(g)(1)(I)
Plain Language
Carriers and utilization review organizations must periodically review the performance, use, and outcomes of AI tools used in utilization review and revise them as needed to maximize accuracy and reliability. This is an ongoing operational obligation — not a one-time pre-deployment check. The statute does not specify a review frequency, leaving carriers to establish a reasonable cadence.
(I) The artificial intelligence, algorithm, or other software tool's performance, use, and outcomes are periodically reviewed and revised to maximize accuracy and reliability.
Pre-filed 2025-01-10
HC-01.5
Ch. 176O § 12(g)(1)(J)
Plain Language
Patient data used by AI tools in utilization review or utilization management may not be repurposed beyond the intended and stated purpose of that review. This aligns with HIPAA minimum necessary principles and state health privacy law, but creates an independent state-law obligation that AI tools processing patient data for UR purposes must not use that data for secondary purposes such as marketing, risk profiling, or model training beyond the stated utilization review function.
(J) Patient data is not used beyond its intended and stated purpose, and consistent with state and federal law.
Pre-filed 2025-01-10
HC-01.7
Ch. 176O § 12(g)(1)(G)-(H)
Plain Language
Carriers must ensure that their AI utilization review tools are available for regulatory inspection by the Division of Insurance and the Executive Office of Health and Human Services. Additionally, the carrier's written utilization review policies and procedures (required under existing subsection (a) of Section 12) must contain disclosures about how the AI tool is used and how it is overseen. This means the use of AI must be documented in the same policies filed with the Division — it cannot be a hidden back-end process undisclosed in the carrier's official UR documentation.
(G) The artificial intelligence, algorithm, or other software tool is open to inspection for audit or compliance reviews by the division and by the executive office of health and human services pursuant to applicable state and federal law. (H) Disclosures pertaining to the use and oversight of the artificial intelligence, algorithm, or other software tool are contained in the written policies and procedures, as required by subsection (a).
Pending 2026-10-01
HC-01.3
Ins. § 15-10B-05.1(c)(1)-(2)
Plain Language
Carriers, pharmacy benefits managers, and private review agents must ensure that any AI, algorithm, or software tool used for utilization review bases its determinations on the individual enrollee's medical history, clinical circumstances as presented by the requesting provider, or other relevant clinical information from the enrollee's records. The tool may not base determinations solely on group-level datasets. This requires individualized clinical data inputs for each determination.
(c) Subject to subsection (d) of this section, an entity subject to this section shall ensure that: (1) an artificial intelligence, algorithm, or other software tool bases its determinations on: (i) an enrollee's medical or other clinical history; (ii) individual clinical circumstances as presented by a requesting provider; or (iii) other relevant clinical information contained in the enrollee's medical or other clinical record; (2) an artificial intelligence, algorithm, or other software tool does not base its determinations solely on a group dataset;
Pending 2026-10-01
HC-01.1
Ins. § 15-10B-05.1(c)(4), (d)
Plain Language
AI tools used for utilization review may not replace the role of a health care provider in the determination process, and may not independently deny, delay, or modify health care services. This is an absolute prohibition — the AI tool cannot be the final decision-maker on coverage determinations. A licensed health care provider must make or independently affirm every adverse determination.
(4) an artificial intelligence, algorithm, or other software tool does not replace the role of a health care provider in the determination process under § 15–10B–07 of this subtitle; (d) An artificial intelligence, algorithm, or other software tool may not deny, delay, or modify health care services.
Pending 2026-10-01
HC-01.5
Ins. § 15-10B-05.1(c)(10)
Plain Language
Patient data used by AI tools in the utilization review process must not be used beyond its intended and stated purpose. This obligation must be applied consistently with HIPAA requirements. Covered entities must ensure their AI vendors and utilization review contractors also comply with this data use limitation.
(10) patient data is not used beyond its intended and stated purpose, consistent with the federal Health Insurance Portability and Accountability Act of 1996, as applicable;
Pending 2026-10-01
HC-01.7
Ins. § 15-10B-05.1(c)(7)-(8), (e)
Plain Language
AI tools used for utilization review must be open to inspection by the Maryland Insurance Commissioner for audits and compliance reviews. Written policies and procedures describing how the AI tool will be used and what oversight will be provided must be included in the utilization plan filed with the Commissioner. Critically, audits and compliance reviews must now include a human evaluation component: a licensed health care professional must review patient medical records, consider the patient's specific circumstances, and have the authority to question, modify, or override any determination made by the AI tool. This is new language added by the bill — it ensures that regulatory audits are not purely technical assessments of the tool but include clinical review of actual patient outcomes.
(7) an artificial intelligence, algorithm, or other software tool is open to inspection for audit or compliance reviews by the Commissioner IN ACCORDANCE WITH SUBSECTION (E) OF THIS SECTION; (8) written policies and procedures are included in the utilization plan submitted under § 15–10B–05 of this subtitle, including how an artificial intelligence, algorithm, or other software tool will be used and what oversight will be provided; (E) AN AUDIT OR COMPLIANCE REVIEW OF AN ARTIFICIAL INTELLIGENCE, ALGORITHM, OR OTHER SOFTWARE TOOL UNDER SUBSECTION (C)(7) OF THIS SECTION SHALL INCLUDE THE HUMAN EVALUATION OF A PATIENT'S MEDICAL RECORDS BY A LICENSED HEALTH CARE PROFESSIONAL THAT TAKES INTO CONSIDERATION THE PATIENT'S SPECIFIC CIRCUMSTANCES AND ALLOWS THE LICENSED HEALTH CARE PROFESSIONAL TO QUESTION, MODIFY, OR OVERRIDE A DETERMINATION MADE BY THE ARTIFICIAL INTELLIGENCE, ALGORITHM, OR OTHER SOFTWARE TOOL.
Pending 2026-10-01
HC-01.4
Insurance § 15–10B–05.1(c)(9), (f)
Plain Language
Covered entities must review and, if necessary, revise the performance, use, and outcomes of their AI utilization review tools at least quarterly to maximize accuracy and reliability. The bill adds a new requirement that these quarterly reviews must include a human evaluation of real-world health outcomes resulting from AI-driven decisions. The findings from this human evaluation must then be used to improve the AI tool, making its decisions safer, more accurate, and more responsive to patient needs. This creates a continuous feedback loop: human clinicians assess actual patient outcomes, and those assessments must drive concrete improvements to the AI system.
(9) the performance, use, and outcomes of an artificial intelligence, algorithm, or other software tool are reviewed and revised, if necessary and at least on a quarterly basis, to maximize accuracy and reliability, IN ACCORDANCE WITH SUBSECTION (F) OF THIS SECTION; (F) A REVIEW OF THE PERFORMANCE, USE, AND OUTCOMES OF ARTIFICIAL INTELLIGENCE, ALGORITHM, OR OTHER SOFTWARE TOOLS UNDER SUBSECTION (C)(9) OF THIS SECTION SHALL INCLUDE: (1) A HUMAN EVALUATION OF THE REAL–WORLD HEALTH OUTCOMES OF DECISIONS MADE BY THE ARTIFICIAL INTELLIGENCE, ALGORITHM, OR OTHER SOFTWARE TOOL; AND (2) USE OF THE FINDINGS MADE BY THE EVALUATION REQUIRED UNDER ITEM (1) OF THIS SUBSECTION TO IMPROVE THE ARTIFICIAL INTELLIGENCE, ALGORITHM, OR OTHER SOFTWARE TOOL AND MAKE THE DECISIONS OF THE ARTIFICIAL INTELLIGENCE, ALGORITHM, OR OTHER SOFTWARE TOOL SAFER, MORE ACCURATE, AND MORE RESPONSIVE TO PATIENT NEEDS.
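The quarterly review-and-revise cycle described above amounts to a compliance workflow an insurer's engineering team could implement. A minimal sketch follows; every class, field, and method name here (OutcomeEvaluation, QuarterlyReview, is_compliant, and so on) is hypothetical and illustrative, not a statutory term.

```python
from dataclasses import dataclass, field
from datetime import date

# Illustrative sketch of the quarterly cycle in subsection (c)(9), (f):
# a licensed clinician evaluates real-world outcomes of AI decisions, and
# the findings must drive revisions to the tool. All names are hypothetical.

@dataclass
class OutcomeEvaluation:
    case_id: str
    ai_decision: str            # e.g. "approve" or "deny"
    real_world_outcome: str     # clinician's assessment of what actually happened
    reviewer_license: str       # the licensed health care professional's license no.
    finding: str                # recommended change to the tool, if any

@dataclass
class QuarterlyReview:
    quarter_end: date
    evaluations: list = field(default_factory=list)
    revisions_applied: list = field(default_factory=list)

    def add_evaluation(self, ev: OutcomeEvaluation) -> None:
        self.evaluations.append(ev)

    def apply_findings(self) -> None:
        # (f)(2): findings from the human evaluation must be used to improve the tool.
        for ev in self.evaluations:
            if ev.finding and ev.finding not in self.revisions_applied:
                self.revisions_applied.append(ev.finding)

    def is_compliant(self) -> bool:
        # No human evaluation at all fails (f)(1); unapplied findings fail (f)(2).
        if not self.evaluations:
            return False
        has_human_eval = all(ev.reviewer_license for ev in self.evaluations)
        findings = {ev.finding for ev in self.evaluations if ev.finding}
        return has_human_eval and findings <= set(self.revisions_applied)
```

Under this sketch a review that records clinician findings but never applies them reports non-compliance, mirroring the statute's requirement that evaluation findings actually feed tool improvements.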
Pending 2026-10-01
HC-01.1
Insurance § 15–10A–02(b)(2)(vi)
Plain Language
When a member files a grievance challenging an adverse coverage decision that was made using AI, an algorithm, or other software tools, the carrier's internal grievance process must provide for human review of that adverse decision. The human review must include verification of compliance with § 15–10B–05.1, which requires that AI tools base determinations on individual clinical data, do not replace the role of a healthcare provider, and do not result in unfair discrimination, among other requirements. This is a new procedural requirement layered onto the existing internal grievance framework — carriers must build this AI-specific human review into their existing grievance workflows.
(VI) FOR A GRIEVANCE RESULTING FROM AN ADVERSE DECISION MADE USING ARTIFICIAL INTELLIGENCE, ALGORITHM, OR OTHER SOFTWARE TOOLS, PROVIDE FOR THE HUMAN REVIEW OF THE ADVERSE DECISION, INCLUDING FOR COMPLIANCE WITH § 15–10B–05.1 OF THIS TITLE.
Pending 2026-10-01
HC-01.1, HC-01.3
Insurance § 15–10B–05.1(c)(1)-(4), (d)
Plain Language
Carriers and their contracted pharmacy benefits managers and private review agents must ensure that AI tools used in utilization review base determinations on individual enrollee clinical data — medical history, provider-presented clinical circumstances, and clinical records — and do not base determinations solely on group-level datasets. AI tools may not replace the healthcare provider's role in the determination process and may not independently deny, delay, or modify healthcare services. This is a re-enacted existing provision (§ 15–10B–05.1) that the bill incorporates by cross-reference in the new grievance human review requirement. While this section is not newly added by HB 795, it is the substantive standard against which the new human review obligation measures compliance.
(c) Subject to subsection (d) of this section, an entity subject to this section shall ensure that: (1) an artificial intelligence, algorithm, or other software tool bases its determinations on: (i) an enrollee's medical or other clinical history; (ii) individual clinical circumstances as presented by a requesting provider; or (iii) other relevant clinical information contained in the enrollee's medical or other clinical record; (2) an artificial intelligence, algorithm, or other software tool does not base its determinations solely on a group dataset; (3) the criteria and guidelines for using an artificial intelligence, algorithm, or other software tool for making determinations comply with the requirements of this title; (4) an artificial intelligence, algorithm, or other software tool does not replace the role of a health care provider in the determination process under § 15–10B–07 of this subtitle; (d) An artificial intelligence, algorithm, or other software tool may not deny, delay, or modify health care services.
Pending 2026-10-01
HC-01.4
Insurance § 15–10B–05.1(c)(5)-(9)
Plain Language
Carriers must ensure that AI tools used in utilization review do not result in unfair discrimination and are applied fairly and equitably in accordance with federal HHS guidance. AI tools must be open to Commissioner inspection for audit or compliance reviews. Written policies and procedures for AI use must be included in the utilization plan filed under § 15–10B–05. AI tool performance, use, and outcomes must be reviewed and revised at least quarterly to maximize accuracy and reliability. These are existing requirements under § 15–10B–05.1 that are re-enacted without amendment; they form the substantive compliance standard that the new human review grievance provision incorporates by cross-reference.
(5) the use of an artificial intelligence, algorithm, or other software tool does not result in unfair discrimination; (6) an artificial intelligence, algorithm, or other software tool is fairly and equitably applied, including in accordance with any applicable regulations and guidance issued by the federal Department of Health and Human Services; (7) an artificial intelligence, algorithm, or other software tool is open to inspection for audit or compliance reviews by the Commissioner; (8) written policies and procedures are included in the utilization plan submitted under § 15–10B–05 of this subtitle, including how an artificial intelligence, algorithm, or other software tool will be used and what oversight will be provided; (9) the performance, use, and outcomes of an artificial intelligence, algorithm, or other software tool are reviewed and revised, if necessary and at least on a quarterly basis, to maximize accuracy and reliability;
Pending 2026-10-01
HC-01.5
Insurance § 15–10B–05.1(c)(10)-(11)
Plain Language
Carriers must ensure that patient data used by AI tools in utilization review is not used beyond its intended and stated purpose, consistent with HIPAA. Carriers must also ensure that AI tools do not directly or indirectly cause harm to enrollees. These are existing requirements under § 15–10B–05.1 re-enacted without amendment, forming part of the compliance standard referenced by the new grievance review provision.
(10) patient data is not used beyond its intended and stated purpose, consistent with the federal Health Insurance Portability and Accountability Act of 1996, as applicable; and (11) an artificial intelligence, algorithm, or other software tool does not directly or indirectly cause harm to an enrollee.
Pending 2026-01-01
HC-01.3
24-A MRSA §4304(8)(A)(1)
Plain Language
When a carrier or its contracted third party uses AI to make utilization review or medical review determinations, those determinations must be based on the individual enrollee's medical history and clinical circumstances as presented by the requesting provider and contained in the enrollee's medical record. AI tools may not supplant provider decision-making — the treating provider's clinical judgment must remain central. This effectively prohibits AI systems from making coverage determinations based solely on aggregate or group-level data without considering individualized clinical information.
Determinations derived from the use of artificial intelligence, including algorithms and other software tools, must: (1) Be based upon an enrollee's medical history, as applicable, and individual clinical circumstances as presented by the requesting provider, as well as other relevant clinical information contained in the enrollee's medical record, and not supplant provider decision making;
Pending 2026-01-01
HC-01.2
24-A MRSA §4304(8)(B)
Plain Language
Any adverse coverage determination — denial, delay, modification, or adjustment — based on medical necessity must be made by a clinical peer who is competent in the specific clinical area at issue. The clinical peer must consider the treating provider's recommendation and the enrollee's individual medical history and clinical circumstances. This effectively prohibits AI from serving as the sole or final decision-maker for adverse medical necessity determinations — a qualified human clinical professional must make or affirm every such decision.
A denial, delay, modification or adjustment of health care services based on medical necessity must be made by a clinical peer competent to evaluate the specific clinical issues involved in the health care services requested by the enrollee's provider. The clinical peer making the medical review or utilization review determination shall consider the enrollee's provider's recommendation and the enrollee's medical history, as applicable, and individual clinical circumstances.
Pending 2026-01-01
HC-01.6, HC-01.7
24-A MRSA §4304(8)(A)(4)
Plain Language
AI tools used in utilization review or medical review determinations must be open to inspection — implying regulatory audit access to the AI systems and their decision logic. Additionally, carriers must disclose the use of AI in their written policies and procedures provided to enrollees. This creates two distinct obligations: a transparency-to-regulators obligation (open to inspection) and a transparency-to-enrollees obligation (written disclosure in policies and procedures).
Determinations derived from the use of artificial intelligence, including algorithms and other software tools, must: (4) Be open to inspection, and the use of artificial intelligence must be disclosed in the written policies and procedures to an enrollee.
Pending 2026-01-01
HC-01.4
24-A MRSA §4304(8)(A) (final paragraph)
Plain Language
Carriers must establish governance policies for AI used in utilization and medical review that create accountability for the AI's performance, use, and outcomes. These policies must be periodically reviewed and revised to ensure accuracy and reliability — this is an ongoing obligation, not a one-time setup. Additionally, data used in AI-derived determinations may not be repurposed beyond its intended and stated purpose, and must be protected from risks that could directly or indirectly harm the enrollee. This creates three distinct requirements: (1) governance policies with accountability, (2) periodic review and revision for accuracy, and (3) data use limitations and data protection.
Use of artificial intelligence pursuant to this paragraph must be governed by policies that establish accountability for performance, use and outcomes that are reviewed and revised for accuracy and reliability. Data under this paragraph may not be used beyond its intended and stated purpose. Data under this paragraph must be protected from risk that may directly or indirectly cause harm to the enrollee.
Pending 2025-06-03
HC-01.1
MCL 500.3406ss
Plain Language
Health insurers operating in Michigan are categorically prohibited from using artificial intelligence to deny, modify, or delay any health insurance claim. This goes further than most healthcare AI statutes, which require human oversight of AI-assisted decisions — Michigan's bill prohibits AI-based adverse claim determinations entirely. The term 'artificial intelligence' is not defined in the bill, creating potential ambiguity about whether the prohibition extends to simple algorithmic tools, rules-based automation, or only machine learning systems. Insurers should treat this as a blanket prohibition on using any AI system in the claims review pipeline if the AI output could result in a denial, modification, or delay of a claim.
Sec. 3406ss. An insurer that delivers, issues for delivery, or renews in this state a health insurance policy shall not deny, modify, or delay a claim based on a review using artificial intelligence.
Pending 2025-06-03
HC-01.1
MCL 400.107b
Plain Language
The Michigan Department of Health and Human Services and any health plan contracted to administer the state Medicaid program are categorically prohibited from using artificial intelligence as the basis for denying, modifying, or delaying any Medicaid claim. Unlike most healthcare AI restrictions that require human oversight of AI-informed decisions, this provision is an outright ban — AI may not serve as the basis for any adverse claim action at all, regardless of whether a human also reviews the decision. The bill does not define 'artificial intelligence,' 'claim,' or 'review,' leaving significant interpretive questions about whether AI tools used for administrative processing, fraud detection, or clinical decision support would fall within scope.
Sec. 107b. The department or a contracted health plan shall not deny, modify, or delay a claim under the medical assistance program based on a review using artificial intelligence.
Pending 2025-08-01
HC-01.1
Minn. Stat. § 62M.20(a)-(b)
Plain Language
Utilization review organizations are categorically prohibited from using artificial intelligence in any part of their operations — including initial review, clinical evaluation, coverage determinations, and appeals. This is broader than the typical HC-01.1 requirement that AI not be the 'sole or primary basis' for adverse determinations; Minnesota bans AI use entirely. Any adverse determination made in violation is automatically null and void, overriding existing limitations on remedies in section 62M.14. The definition of AI incorporates the federal definition at 15 U.S.C. § 9401, which is broad and technology-neutral.
(a) The use of artificial intelligence is prohibited in utilization review. Without limiting the generality of the foregoing, a utilization review organization is prohibited from using artificial intelligence in any part of its review, evaluation, determination, or appeals processes. (b) Notwithstanding section 62M.14, any adverse determination made in violation of this section is null and void.
Pending 2025-08-01
HC-01.2
Minn. Stat. § 62M.09, subd. 3(a)-(b), (f)
Plain Language
Every adverse clinical determination must be reviewed and made by a licensed Minnesota physician in the same or similar specialty as the treating provider. The bill adds a new affirmative attestation requirement: the reviewing physician must attest in writing that AI was not used in the utilization review process. Any adverse determination made in violation of the attestation requirement is automatically null and void. This creates both a documentation obligation (the written attestation) and a substantive compliance obligation (ensuring AI was in fact not used). The physician-reviewer requirements in paragraphs (a) and (b) are pre-existing law; the new obligation is the written AI-non-use attestation in paragraph (f).
(a) A physician must review and make the adverse determination under section 62M.05 in all cases in which the utilization review organization has concluded that an adverse determination for clinical reasons is appropriate. (b) The physician conducting the review and making the adverse determination must: (1) hold a current, unrestricted license to practice medicine in this state; and (2) have the same or similar medical specialty as a provider that typically treats or manages the condition for which the health care service has been requested. (f) The physician must attest in writing that artificial intelligence was not used in the utilization review process. Notwithstanding section 62M.14, any adverse determination made in violation of this paragraph is null and void.
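The gate created by paragraphs (b) and (f) above can be pictured as a simple validity check: an adverse determination stands only if a qualifying physician's written attestation is on file. The sketch below is purely illustrative; the class and function names are hypothetical, not drawn from the bill.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative model of the Minn. Stat. § 62M.09, subd. 3 gate: an adverse
# determination survives only with a qualifying physician's written
# attestation that AI was not used; otherwise it is null and void.
# All names here are hypothetical.

@dataclass
class ReviewerAttestation:
    license_state: str               # paragraph (b)(1): licensed in Minnesota
    license_unrestricted: bool       # paragraph (b)(1)
    same_or_similar_specialty: bool  # paragraph (b)(2)
    attests_no_ai_used: bool         # paragraph (f): written AI-non-use attestation

def adverse_determination_stands(att: Optional[ReviewerAttestation]) -> bool:
    """False means the determination is null and void under the bill."""
    if att is None:
        return False
    return (
        att.license_state == "MN"
        and att.license_unrestricted
        and att.same_or_similar_specialty
        and att.attests_no_ai_used
    )
```

Note how a missing attestation and a false attestation both void the determination; the substantive duty (AI genuinely not used) sits behind the documented one.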
Pending 2025-10-01
HC-01.3
Section 1(1)(a)-(b)
Plain Language
Health insurance issuers using AI, algorithms, or software tools for utilization review or utilization management must ensure these tools base determinations on the individual enrollee's medical history, the requesting provider's presentation of clinical circumstances, and other relevant clinical information from the enrollee's record. The tools may not base determinations solely on aggregate or group-level datasets. This means insurers must configure and validate their AI tools to ingest and consider individualized patient data for each determination, not rely on population-level statistical outputs alone.
(a) the artificial intelligence, algorithm, or other software tool bases its determination on the following information, as applicable: (i) a covered person's medical or other clinical history; (ii) individual clinical circumstances as presented by the requesting provider; and (iii) other relevant clinical information contained in the covered person's medical or other clinical record; (b) the artificial intelligence, algorithm, or other software tool does not base its determination solely on a group dataset;
Pending 2025-10-01
HC-01.1, HC-01.2
Section 1(2)
Plain Language
AI tools are categorically prohibited from denying, delaying, or modifying healthcare services based on medical necessity — even partially. Medical necessity determinations must be made exclusively by a licensed physician or licensed healthcare professional who is clinically competent in the relevant specialty. That professional must review and consider the treating provider's recommendation, the enrollee's medical and clinical history, and the enrollee's individual clinical circumstances. This provision supersedes the general requirements of subsection (1) and establishes an absolute bar: AI cannot make or contribute to medical necessity decisions.
Notwithstanding subsection (1), the artificial intelligence, algorithm, or other software tool may not deny, delay, or modify health care services based, in whole or in part, on medical necessity. A determination of medical necessity must be made only by a licensed physician or a licensed health care professional competent to evaluate the specific clinical issues involved in the health care services requested by the provider, as provided in subsection (1)(a)(ii), by reviewing and considering the requesting provider's recommendation, the enrollee's medical or other clinical history, as applicable, and individual clinical circumstances.
Pending 2025-10-01
HC-01.4
Section 1(1)(i)
Plain Language
Health insurance issuers must periodically review and revise the performance, use, and outcomes of AI tools used in utilization review to maximize accuracy and reliability. The statute does not specify the frequency of review, but it is a continuing obligation — not a one-time pre-deployment check. Issuers should establish a documented review cadence and be prepared to demonstrate compliance to the commissioner.
the artificial intelligence, algorithm, or other software tool's performance, use, and outcomes are periodically reviewed and revised to maximize accuracy and reliability;
Pending 2025-10-01
HC-01.5
Section 1(1)(j)
Plain Language
Patient data used by AI tools in utilization review or utilization management must not be repurposed beyond its intended and stated purpose. This obligation is explicitly aligned with HIPAA and Montana insurance law (Title 33). Issuers must ensure their AI tools do not use patient data collected for coverage determinations for secondary purposes such as marketing, risk profiling for other products, or model training beyond the stated utilization review function.
patient data is not used beyond its intended and stated purpose, consistent with the federal Health Insurance Portability and Accountability Act of 1996, Public Law 104-191, and this title, as applicable;
Pending 2025-10-01
HC-01.7
Section 1(1)(g)-(h)
Plain Language
Health insurance issuers must make their AI tools available for inspection, audit, and compliance review by the Montana Department of Insurance pursuant to applicable law. Additionally, issuers must document disclosures regarding the use and oversight of AI tools in their written policies and procedures. This creates both a regulatory accessibility obligation and a documentation obligation — issuers must maintain written policies that describe how AI is used and overseen, and must permit the department to inspect the AI tools themselves.
(g) the artificial intelligence, algorithm, or other software tool is open to inspection for audit or compliance reviews by the department pursuant to applicable state and federal law; (h) disclosures pertaining to the use and oversight of the artificial intelligence, algorithm, or other software tool are contained in the written policies and procedures, as required by this section;
Pending 2027-01-01
HC-01.1
RSA 420-J:6-f
Plain Language
Health carriers may not use AI to audit provider billing codes or to adjust those codes based on AI recommendations if doing so would override, change, or amend the treating provider's clinical judgment. This is a categorical prohibition — not a disclosure-and-override regime. The definition of artificial intelligence is incorporated by reference to RSA 5-D:1. In practice, this means carriers cannot deploy AI-driven code-editing or downcoding tools that substitute algorithmic determinations for the provider's original clinical coding decisions.
Health carriers are prohibited from using artificial intelligence, as defined in RSA 5-D:1, to conduct audits of provider codes or to adjust such codes based on recommendations from artificial intelligence that would change, alter, or amend the clinical judgment of a provider.
Pending 2027-01-01
HC-01.7
RSA 420-J:6-f
Plain Language
Health carriers must maintain records that specifically identify where and how AI tools are used in claims processing. These records must be made available to the New Hampshire Insurance Department upon audit. This is an ongoing recordkeeping obligation — carriers need systems to track and document AI tool usage across claims processing functions, not just at time of deployment but on a continuing basis. The records must be sufficient for the insurance department to verify compliance with the prohibition on AI-driven code adjustment.
Each carrier shall maintain records identifying the use of artificial intelligence tools in claims processing and make such records available to the insurance department upon audit.
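The recordkeeping duty above implies a running audit log of every AI tool use in the claims pipeline, exportable on department request. A minimal sketch, with entirely hypothetical names and fields, might look like this:

```python
import json
from datetime import datetime, timezone

# Illustrative audit log for the RSA 420-J:6-f recordkeeping duty: each use
# of an AI tool in claims processing is recorded so the records can be
# produced to the insurance department on audit. Names are hypothetical.

class AIUsageLog:
    def __init__(self) -> None:
        self._records = []

    def record(self, claim_id: str, tool_name: str, function: str) -> None:
        """Log one AI tool use; 'function' describes where in the pipeline it ran."""
        self._records.append({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "claim_id": claim_id,
            "tool": tool_name,
            "function": function,  # must never be code auditing/adjustment under the bill
        })

    def export_for_audit(self) -> str:
        # Produce the records to the department as JSON on request.
        return json.dumps(self._records, indent=2)
```

Because the department must be able to verify compliance with the code-adjustment prohibition, a real system would also need the "function" field to be validated against a controlled vocabulary rather than free text.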
Pending 2025-03-10
HC-01.6
Insurance Law § 338(b)
Plain Language
Health insurers, Article 43 corporations, and HMOs must publicly disclose on their websites whether or not they use AI-based algorithms in their utilization review process. This is a blanket transparency obligation: every covered entity must post an affirmative disclosure either way, stating that it uses such algorithms or that it does not. The disclosure must appear on the entity's accessible Internet website, making it available to all insureds and enrollees. The superintendent is directed to require compliance through rulemaking.
(b) The superintendent shall require all insurers authorized to write accident and health insurance in this state, corporations organized pursuant to article forty-three of this chapter, and a health maintenance organization certified pursuant to article forty-four of the public health law to notify insureds and enrollees about the use or lack of use of artificial intelligence-based algorithms in the utilization review process on the accessible Internet website of such insurer authorized to write accident and health insurance in this state, corporation organized pursuant to article forty-three of this chapter, or health maintenance organization certified pursuant to article forty-four of the public health law.
Pending 2025-03-10
HC-01.7
Insurance Law § 338(c)
Plain Language
Health insurers, Article 43 corporations, and HMOs must submit both their AI-based algorithms and their training datasets to the Department of Financial Services for review and certification. The department must establish a certification process verifying that the algorithms and training data minimize the risk of bias across enumerated protected characteristics (race, color, religious creed, ancestry, age, sex, gender, national origin, handicap or disability) and adhere to evidence-based clinical guidelines. This is a dual obligation: the entity must submit materials, and the department must certify them. Submission covers both algorithms currently in use and those planned for future use. Entities should anticipate that ongoing submissions may be required as algorithms and training data evolve.
(c) Every insurer authorized to write accident and health insurance in this state, corporation organized pursuant to article forty-three of this chapter, and health maintenance organization certified pursuant to article forty-four of the public health law shall submit the artificial intelligence-based algorithms and training data sets that are being used or will be used in the utilization review process to the department. The department shall implement a process that allows the department to certify that these artificial intelligence-based algorithms and training data sets have minimized the risk of bias based on the covered person's race, color, religious creed, ancestry, age, sex, gender, national origin, handicap or disability and adhere to evidence-based clinical guidelines.
Pending 2025-03-10
HC-01.2
Insurance Law § 338(d)
Plain Language
When a utilization review process initially uses AI-based algorithms, a clinical peer reviewer must personally open and review the individual's clinical records or data and document that review before issuing any adverse determination. This ensures that AI-generated initial assessments do not result in denials without individualized human clinical review. The obligation falls on the clinical peer reviewer personally — they cannot rely solely on the AI algorithm's output when making an adverse determination. The documentation requirement creates an audit trail confirming individualized review occurred.
(d) A clinical peer reviewer who participates in a utilization review process for an insurer authorized to write accident and health insurance in this state, a corporation organized pursuant to article forty-three of this chapter, and a health maintenance organization certified pursuant to article forty-four of the public health law that initially uses artificial intelligence-based algorithms for a utilization review shall open and document the utilization review of the individual clinical records or data prior to issuing an adverse determination.
Pending 2025-01-30
HC-01.3
Insurance Law § 3224-e(a)(1)
Plain Language
Health care service plans using AI, algorithms, or other software tools in utilization review must ensure those tools base their determinations on the individual enrollee's medical or dental history, the clinical circumstances presented by the requesting provider, and other relevant clinical information in the enrollee's record. This prohibits determinations based solely on aggregate or group-level data and requires individualized clinical inputs.
(1) The artificial intelligence, algorithm, or other software tool bases its determination on the following information, as applicable: (i) An enrollee's medical or dental history; (ii) Individual clinical circumstances as presented by the requesting provider; and (iii) Other relevant clinical information contained in the enrollee's medical or dental record.
Pending 2025-01-30
HC-01.1
Insurance Law § 3224-e(a)(2)
Plain Language
AI, algorithm, or software tools used in utilization review or management must not replace or supplant health care provider decision making. The tool may inform or support decisions, but the final clinical judgment must remain with a human provider. This is a general prohibition against full automation of clinical decisions in the utilization review context.
(2) The artificial intelligence, algorithm, or other software tool does not supplant health care provider decision making.
Pending 2025-01-30
HC-01.2
Insurance Law § 3224-e(b)
Plain Language
Any denial, delay, or modification of health care services based on medical necessity must be made by a licensed physician or a health care provider competent to evaluate the specific clinical issues at hand. The reviewing professional must consider the requesting provider's recommendation and the enrollee's individual medical or dental history and clinical circumstances. This effectively requires a qualified clinical peer to make all adverse medical necessity determinations, regardless of what AI tools were used in the process.
(b) Notwithstanding subsection (a) of this section, a denial, delay, or modification of health care services based on medical necessity shall be made by a licensed physician or other health care provider competent to evaluate the specific clinical issues involved in the health care services requested by the provider by considering the requesting provider's recommendation and based on recommendation, the enrollee's medical or dental history, as applicable, and individual clinical circumstances.
Pending 2025-01-30
HC-01.7
Insurance Law § 3224-e(a)(5)-(6)
Plain Language
Health care service plans must ensure that AI tools used in utilization review are open to inspection — meaning regulators or other authorized parties can examine how the tool operates. In addition, the plan's written policies and procedures must contain disclosures about the use and oversight of the AI tool. Together, these provisions require both regulatory transparency (inspection access) and internal documentation (written policies describing use and oversight).
(5) The artificial intelligence, algorithm, or other software tool is open to inspection. (6) Disclosures pertaining to the use and oversight of the artificial intelligence, algorithm, or other software tool are contained in the written policies and procedures.
Pending 2025-01-30
HC-01.4
Insurance Law § 3224-e(a)(7)
Plain Language
Health care service plans must periodically review and revise the performance, use, and outcomes of AI tools used in utilization review to maximize their accuracy and reliability. This is an ongoing operational obligation — not a one-time pre-deployment check. The bill does not specify a review frequency, leaving plans to determine an appropriate cadence.
(7) The artificial intelligence, algorithm, or other software tool's performance, use, and outcomes are periodically reviewed and revised to maximize accuracy and reliability.
Pending 2025-01-30
HC-01.5
Insurance Law § 3224-e(a)(8)
Plain Language
Patient data used by AI tools in utilization review must not be repurposed beyond its intended and stated use. This data minimization requirement operates alongside — not in place of — existing HIPAA and applicable New York state health privacy requirements. Plans must ensure AI tools do not use patient data for secondary purposes such as marketing, model training, or other uses outside the utilization review function.
(8) Patient data is not used beyond its intended and stated purpose, consistent with applicable state laws and the federal Health Insurance Portability and Accountability Act of 1996 (Public Law 104-191).
Pending 2025-08-18
HC-01.3
Pub. Health Law § 4905-a(1)(a)-(b)
Plain Language
Utilization review agents using AI tools for medical necessity determinations must ensure those tools base their outputs on individualized enrollee clinical data — including medical history, clinical circumstances presented by the requesting provider, and other relevant clinical records. The AI tool may not base its determination solely on aggregate or group-level datasets. This means the tool must evaluate each enrollee's specific situation rather than relying exclusively on population-level patterns or averages.
(a) The artificial intelligence, algorithm, or other software tool bases its determination on the following information, as applicable: (i) an enrollee's medical or other clinical history; (ii) individual clinical circumstances as presented by the requesting provider; and (iii) other relevant clinical information contained in the enrollee's medical or other clinical record. (b) The artificial intelligence, algorithm, or other software tool does not base its determination solely on a group dataset.
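The individualized-basis rule in subdivisions (1)(a) and (1)(b) above reduces to a simple predicate over a determination's inputs: at least one individualized clinical input must be present, and group data alone is never enough. A purely illustrative sketch, with hypothetical names:

```python
from dataclasses import dataclass

# Illustrative input-basis check for Pub. Health Law § 4905-a(1)(a)-(b):
# a determination must rest on at least one individualized clinical input
# and may not rest solely on a group dataset. All names are hypothetical.

@dataclass
class DeterminationInputs:
    medical_history: bool = False                   # (1)(a)(i)
    provider_clinical_circumstances: bool = False   # (1)(a)(ii)
    record_clinical_info: bool = False              # (1)(a)(iii)
    group_dataset: bool = False                     # population-level data

def basis_permissible(inputs: DeterminationInputs) -> bool:
    individualized = (
        inputs.medical_history
        or inputs.provider_clinical_circumstances
        or inputs.record_clinical_info
    )
    # Group data may inform the tool, but may never be the sole basis, per (1)(b).
    return individualized
```

The check passes when group data is used alongside individualized inputs and fails when group data is the only input, matching the "not solely on a group dataset" formulation.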
Pending 2025-08-18
HC-01.1HC-01.2
Pub. Health Law § 4905-a(2)
Plain Language
AI tools used in utilization review may not deny, delay, or modify health care services based, in whole or in part, on medical necessity. Every medical necessity determination must be made by a licensed physician or licensed health care professional who is competent in the relevant clinical specialty. That professional must review and consider the requesting provider's recommendation, the enrollee's medical history, and individual clinical circumstances. This goes beyond barring the AI tool from serving as the sole or primary basis for an adverse coverage action: the tool may not make any adverse medical necessity determination, and a qualified human clinical reviewer is required for every such decision.
Notwithstanding subdivision one of this section, the artificial intelligence, algorithm, or other software tool shall not deny, delay, or modify health care services based, in whole or in part, on medical necessity. A determination of medical necessity shall be made only by a licensed physician or a licensed health care professional competent to evaluate the specific clinical issues involved in the health care services requested by the provider, as provided in this title, by reviewing and considering the requesting provider's recommendation, the enrollee's medical or other clinical history, as applicable, and individual clinical circumstances.
Pending 2025-08-18
HC-01.4
Pub. Health Law § 4905-a(1)(i)
Plain Language
Utilization review agents must periodically review and revise the performance, use, and outcomes of any AI tool used in utilization review to maximize its accuracy and reliability. This is an ongoing operational obligation — not a one-time pre-deployment check. The statute does not specify the review frequency, so agents should establish a reasonable periodic cadence and document the reviews conducted.
The artificial intelligence, algorithm, or other software tool's performance, use, and outcomes are periodically reviewed and revised to maximize accuracy and reliability.
Pending 2025-08-18
HC-01.5
Pub. Health Law § 4905-a(1)(j)
Plain Language
Patient data used by AI tools in utilization review must not be repurposed beyond its intended and stated use. This is a purpose-limitation requirement that reinforces HIPAA's use-and-disclosure restrictions in the specific context of AI-assisted utilization review. Utilization review agents must ensure their AI vendors and internal systems do not use patient clinical data collected for utilization review purposes for secondary uses such as model training, marketing, or analytics beyond the stated review function.
Patient data is not used beyond its intended and stated purpose, consistent with this section and the federal Health Insurance Portability and Accountability Act of 1996 (Public Law 104-191), as applicable.
Pending 2025-08-18
HC-01.7
Pub. Health Law § 4905-a(1)(g)-(h)
Plain Language
Utilization review agents must ensure their AI tools are open to inspection by the Department of Health for audit or compliance review purposes. Additionally, disclosures about the use and oversight of the AI tool must be included in the agent's written utilization review policies and procedures required under existing Public Health Law § 4902. This creates both a regulatory transparency obligation (making the tool available for department inspection) and a documentation obligation (incorporating AI use disclosures into existing written UR policies).
(g) The artificial intelligence, algorithm, or other software tool is open to inspection for audit or compliance reviews by the department. (h) Disclosures pertaining to the use and oversight of the artificial intelligence, algorithm, or other software tool are contained in the written policies and procedures, as required by section forty-nine hundred two of this title.
Pending 2025-08-18
HC-01.3
Ins. Law § 4905-a(1)(a)-(b)
Plain Language
Disability insurers (including specialized health insurers) using AI tools for utilization review or utilization management must ensure those tools base their outputs on individualized insured clinical data — including medical history, clinical circumstances from the requesting provider, and other relevant clinical records. The AI tool may not base its determination solely on aggregate or group-level datasets. This mirrors the parallel obligation on utilization review agents under the Public Health Law but applies to insurers regulated under the Insurance Law.
(a) The artificial intelligence, algorithm, or other software tool bases its determination on the following information, as applicable: (i) An insured's medical or other clinical history; (ii) Individual clinical circumstances as presented by the requesting provider; and (iii) Other relevant clinical information contained in the insured's medical or other clinical record. (b) The artificial intelligence, algorithm, or other software tool does not base its determination solely on a group dataset.
Pending 2025-08-18
HC-01.1HC-01.2
Ins. Law § 4905-a(2)
Plain Language
AI tools used by disability insurers in utilization review or utilization management may not deny, delay, or modify health care services based, in whole or in part, on medical necessity. Every medical necessity determination must be made by a licensed physician or licensed health care professional competent in the relevant clinical specialty, who must review and consider the requesting provider's recommendation, the insured's medical history, and individual clinical circumstances. This mirrors the parallel Public Health Law provision and bars the AI tool from making any adverse medical necessity determination, not merely from serving as its sole or primary basis.
Notwithstanding subsection one of this section, the artificial intelligence, algorithm, or other software tool shall not deny, delay, or modify health care services based, in whole or in part, on medical necessity. A determination of medical necessity shall be made only by a licensed physician or licensed health care professional competent to evaluate the specific clinical issues involved in the health care services requested by the provider, as provided in this title, by reviewing and considering the requesting provider's recommendation, the insured's medical or other clinical history, as applicable, and individual clinical circumstances.
Pending 2025-08-18
HC-01.4
Ins. Law § 4905-a(1)(i)
Plain Language
Disability insurers must periodically review and revise the performance, use, and outcomes of any AI tool used in utilization review or utilization management to maximize accuracy and reliability. This is an ongoing operational obligation requiring a reasonable periodic cadence of review. This mirrors the parallel Public Health Law provision.
The artificial intelligence, algorithm, or other software tool's performance, use, and outcomes are periodically reviewed and revised to maximize accuracy and reliability.
Pending 2025-08-18
HC-01.5
Ins. Law § 4905-a(1)(j)
Plain Language
Patient data used by disability insurers' AI tools in utilization review or utilization management must not be repurposed beyond its intended and stated use, consistent with state law and HIPAA. This is a purpose-limitation requirement that prevents secondary use of patient clinical data collected for coverage determination purposes. This mirrors the parallel Public Health Law provision.
Patient data is not used beyond its intended and stated purpose, consistent with state law and the federal Health Insurance Portability and Accountability Act of 1996 (Public Law 104-191), as applicable.
Pending 2025-08-18
HC-01.7
Ins. Law § 4905-a(1)(g)-(h)
Plain Language
Disability insurers must ensure their AI tools are open to inspection by the Department of Financial Services for audit or compliance review purposes pursuant to applicable law. Additionally, disclosures about AI tool use and oversight must be included in the insurer's written utilization review policies and procedures required under existing Insurance Law § 4902. This creates both a regulatory transparency obligation and a documentation obligation mirroring the parallel Public Health Law provision.
(g) The artificial intelligence, algorithm, or other software tool is open to inspection for audit or compliance reviews by the department pursuant to applicable state and federal law. (h) Disclosures pertaining to the use and oversight of the artificial intelligence, algorithm, or other software tool are contained in the written policies and procedures, as required by section forty-nine hundred two of this title.
Pending 2025-08-11
HC-01.3
Pub. Health Law § 4905-a(1)(a)-(b)
Plain Language
Utilization review agents using AI tools for medical necessity determinations must ensure those tools base their outputs on the individual enrollee's medical history, the clinical circumstances presented by the requesting provider, and other relevant clinical information from the enrollee's record. The AI tool may not base its determination solely on aggregate or group-level datasets. This requires individualized clinical data inputs for each determination.
(a) The artificial intelligence, algorithm, or other software tool bases its determination on the following information, as applicable: (i) an enrollee's medical or other clinical history; (ii) individual clinical circumstances as presented by the requesting provider; and (iii) other relevant clinical information contained in the enrollee's medical or other clinical record. (b) The artificial intelligence, algorithm, or other software tool does not base its determination solely on a group dataset.
Pending 2025-08-11
HC-01.1HC-01.2
Pub. Health Law § 4905-a(2)
Plain Language
AI tools are categorically prohibited from denying, delaying, or modifying healthcare services based in whole or in part on medical necessity. Every medical necessity determination must be made by a licensed physician or a licensed healthcare professional who is competent to evaluate the specific clinical issues at hand. That human reviewer must consider the requesting provider's recommendation, the enrollee's medical history, and the enrollee's individual clinical circumstances. This is an absolute prohibition — the AI tool cannot make the final determination regardless of any safeguards applied under subdivision 1.
Notwithstanding subdivision one of this section, the artificial intelligence, algorithm, or other software tool shall not deny, delay, or modify health care services based, in whole or in part, on medical necessity. A determination of medical necessity shall be made only by a licensed physician or a licensed health care professional competent to evaluate the specific clinical issues involved in the health care services requested by the provider, as provided in this title, by reviewing and considering the requesting provider's recommendation, the enrollee's medical or other clinical history, as applicable, and individual clinical circumstances.
Pending 2025-08-11
HC-01.4
Pub. Health Law § 4905-a(1)(i)
Plain Language
Utilization review agents must periodically review and revise the performance, use, and outcomes of any AI tool used in utilization review to maximize accuracy and reliability. The statute does not specify a review cadence, but the obligation is ongoing and requires affirmative periodic action — not merely a one-time pre-deployment check.
The artificial intelligence, algorithm, or other software tool's performance, use, and outcomes are periodically reviewed and revised to maximize accuracy and reliability.
Pending 2025-08-11
HC-01.5
Pub. Health Law § 4905-a(1)(j)
Plain Language
Patient data used by AI tools in utilization review must not be used beyond the intended and stated purpose of the utilization review determination. This obligation is framed as consistent with HIPAA, meaning it reinforces and extends HIPAA purpose-limitation principles to the AI utilization review context specifically.
Patient data is not used beyond its intended and stated purpose, consistent with this section and the federal Health Insurance Portability and Accountability Act of 1996 (Public Law 104-191), as applicable.
Pending 2025-08-11
HC-01.7
Pub. Health Law § 4905-a(1)(g)-(h)
Plain Language
Utilization review agents must ensure their AI tools are open to inspection for audit or compliance reviews by the Department of Health. Additionally, disclosures about the use and oversight of AI tools must be included in the written policies and procedures required under existing Public Health Law § 4902. This creates both a regulatory inspection obligation and a documentation disclosure requirement.
(g) The artificial intelligence, algorithm, or other software tool is open to inspection for audit or compliance reviews by the department. (h) Disclosures pertaining to the use and oversight of the artificial intelligence, algorithm, or other software tool are contained in the written policies and procedures, as required by section forty-nine hundred two of this title.
Pending 2025-08-11
HC-01.3
Ins. Law § 4905-a(1)(a)-(b)
Plain Language
Disability insurers (including specialized health insurers) using AI tools for utilization review or utilization management must ensure those tools base determinations on the individual insured's medical history, the clinical circumstances presented by the requesting provider, and other relevant clinical information from the insured's record. The AI tool may not base its determination solely on aggregate or group-level datasets. This mirrors the parallel obligation on utilization review agents under the Public Health Law.
(a) The artificial intelligence, algorithm, or other software tool bases its determination on the following information, as applicable: (i) An insured's medical or other clinical history; (ii) Individual clinical circumstances as presented by the requesting provider; and (iii) Other relevant clinical information contained in the insured's medical or other clinical record. (b) The artificial intelligence, algorithm, or other software tool does not base its determination solely on a group dataset.
Pending 2025-08-11
HC-01.1HC-01.2
Ins. Law § 4905-a(2)
Plain Language
AI tools used by disability insurers are categorically prohibited from denying, delaying, or modifying health care services based, in whole or in part, on medical necessity. Every medical necessity determination must be made by a licensed physician or a licensed health care professional competent to evaluate the specific clinical issues, who reviews the requesting provider's recommendation, the insured's medical history, and individual clinical circumstances. This mirrors the parallel prohibition applicable to utilization review agents under the Public Health Law.
Notwithstanding subsection one of this section, the artificial intelligence, algorithm, or other software tool shall not deny, delay, or modify health care services based, in whole or in part, on medical necessity. A determination of medical necessity shall be made only by a licensed physician or licensed health care professional competent to evaluate the specific clinical issues involved in the health care services requested by the provider, as provided in this title, by reviewing and considering the requesting provider's recommendation, the insured's medical or other clinical history, as applicable, and individual clinical circumstances.
Pending 2025-08-11
HC-01.4
Ins. Law § 4905-a(1)(i)
Plain Language
Disability insurers must periodically review and revise the performance, use, and outcomes of any AI tool used in utilization review or utilization management to maximize accuracy and reliability. This is the Insurance Law parallel to the same obligation on utilization review agents under the Public Health Law.
The artificial intelligence, algorithm, or other software tool's performance, use, and outcomes are periodically reviewed and revised to maximize accuracy and reliability.
Pending 2025-08-11
HC-01.5
Ins. Law § 4905-a(1)(j)
Plain Language
Patient data used by AI tools deployed by disability insurers for utilization review must not be used beyond the intended and stated purpose. This obligation is consistent with HIPAA and state law and mirrors the parallel provision applicable to utilization review agents under the Public Health Law.
Patient data is not used beyond its intended and stated purpose, consistent with state law and the federal Health Insurance Portability and Accountability Act of 1996 (Public Law 104-191), as applicable.
Pending 2025-08-11
HC-01.7
Ins. Law § 4905-a(1)(g)-(h)
Plain Language
Disability insurers must ensure their AI tools are open to inspection for audit or compliance reviews by the Department of Financial Services under applicable state and federal law. Disclosures about AI use and oversight must be included in written policies and procedures required under Insurance Law § 4902. This is the Insurance Law parallel to the same obligations on utilization review agents under the Public Health Law.
(g) The artificial intelligence, algorithm, or other software tool is open to inspection for audit or compliance reviews by the department pursuant to applicable state and federal law. (h) Disclosures pertaining to the use and oversight of the artificial intelligence, algorithm, or other software tool are contained in the written policies and procedures, as required by section forty-nine hundred two of this title.
Pending 2025-01-01
HC-01.7
Ohio Rev. Code § 3902.80(B)(3)
Plain Language
Both the Superintendent of Insurance and the health plan issuer must publish the annual AI utilization review report on their respective public websites. This creates a dual public disclosure obligation — the regulator posts it and the insurer posts it — ensuring that enrollees, providers, and the public can review the insurer's AI practices in utilization review. This is distinct from the filing obligation itself; the report must be both filed and published.
(3) The superintendent shall publish a copy of the report on the web site of the department of insurance. The health plan issuer shall publish a copy of the report on the health plan issuer's publicly accessible web site.
Pending 2025-01-01
HC-01.1HC-01.2
Ohio Rev. Code § 3902.80(C)(1)-(3)
Plain Language
Health plan issuers are prohibited from basing care decisions — including denials, delays, or modifications on medical necessity grounds — solely on AI outputs. Medical necessity determinations must be made by a licensed physician or qualified provider who considers the treating provider's recommendation, the enrollee's clinical history, and individual circumstances. Physicians participating in utilization review must open and document their review of the individual's clinical records before making a decision. This creates a human-in-the-loop requirement: AI may inform the process, but the final determination must be an individualized, documented decision by a qualified human clinician based on the patient's specific clinical data.
(C)(1) No health plan issuer shall make a decision regarding the care of a covered person, including the decision to deny, delay, or modify health care services based on medical necessity, based solely on results derived from the use or application of artificial intelligence. (2) A determination of medical necessity under a health benefit plan must meet both of the following requirements: (a) The determination is made by a licensed physician or a provider that is qualified to evaluate the specific clinical issues involved in the requested health care services. (b) The determination takes into consideration the requesting provider's recommendation, the covered person's medical or other clinical history, and individual clinical circumstances. (3) Any physician who participates in a determination of medical necessity or a utilization review process on behalf of a health plan issuer shall open and document the review of the individual clinical records or data prior to making an individualized documented decision.
Pending 2025-01-01
HC-01.8
Ohio Rev. Code § 3902.80(C)(4)
Plain Language
When an AI-based algorithm is involved in a decision to deny, delay, or modify covered health care services, the health plan issuer must provide a plain-language explanation of the rationale behind the decision. This disclosure accompanies the adverse determination itself and must be understandable to the covered person or provider — not a generic statement that AI was used, but an explanation of the reasoning. This applies regardless of whether the AI was the primary factor or merely one input in the decision.
(4) Any decision to deny, delay, or modify health care services covered under a health benefit plan in which an artificial intelligence-based algorithm is used shall be accompanied by a plain language explanation of the rationale used in making the decision.
Pending 2025-01-01
HC-01.1HC-01.2HC-01.3
Ohio Rev. Code § 3902.80(C)(1)-(3)
Plain Language
Health plan issuers may not base care decisions — including denials, delays, or modifications for medical necessity — solely on AI-derived results. Every medical necessity determination must be made by a licensed physician or a provider qualified to evaluate the specific clinical issues, and must consider the requesting provider's recommendation, the covered person's medical history, and individual clinical circumstances. Physicians participating in medical necessity or utilization review determinations must personally open and review the individual clinical records and document their individualized decision. This effectively requires meaningful human clinical review of every adverse determination — AI may inform but cannot replace the human decision-maker.
(C)(1) No health plan issuer shall make a decision regarding the care of a covered person, including the decision to deny, delay, or modify health care services based on medical necessity, based solely on results derived from the use or application of artificial intelligence. (2) A determination of medical necessity under a health benefit plan must meet both of the following requirements: (a) The determination is made by a licensed physician or a provider that is qualified to evaluate the specific clinical issues involved in the requested health care services. (b) The determination takes into consideration the requesting provider's recommendation, the covered person's medical or other clinical history, and individual clinical circumstances. (3) Any physician who participates in a determination of medical necessity or a utilization review process on behalf of a health plan issuer shall open and document the review of the individual clinical records or data prior to making an individualized documented decision.
Pending 2025-01-01
HC-01.6
Ohio Rev. Code § 3902.80(C)(4)
Plain Language
When an AI-based algorithm is used in a decision to deny, delay, or modify covered health care services, the health plan issuer must provide a plain language explanation of the rationale behind the decision. This applies to every adverse determination involving AI — not only denials, but also delays and modifications. The explanation must accompany the decision, meaning it must be provided to the affected person at or near the time of the determination.
(4) Any decision to deny, delay, or modify health care services covered under a health benefit plan in which an artificial intelligence-based algorithm is used shall be accompanied by a plain language explanation of the rationale used in making the decision.
Pending 2025-01-01
HC-01.7
Ohio Rev. Code § 3902.80(D)
Plain Language
The Superintendent of Insurance has standing authority to audit any health plan issuer's use of AI-based algorithms at any time, without requiring a triggering event or specific cause. The superintendent may also engage third-party auditors for this purpose. From a compliance perspective, health plan issuers must maintain their AI systems, documentation, and records in a state of readiness for audit at all times — this is effectively a continuous preparedness obligation.
(D) The superintendent may audit a health plan issuer's use of an artificial intelligence-based algorithm at any time and may contract with a third party for the purposes of conducting such an audit.
Pre-filed 2026-11-01
HC-01.3
36 O.S. § 6567(A)(1)-(2)
Plain Language
Utilization review organizations, disability insurers, and specialized health insurers that use AI tools — whether directly or through contracted entities — must ensure those tools base their determinations on individualized enrollee clinical data, including the enrollee's medical history, individual clinical circumstances as presented by the requesting provider, and other relevant clinical information from the enrollee's records. The AI tool may not base its determination solely on a group dataset. This requirement applies to the entity using the AI tool even if a third party actually operates the tool.
A. A utilization review organization, disability insurer, or specialized health insurer that uses an artificial intelligence tool or contracts with or otherwise works through an entity that uses an artificial intelligence tool shall ensure that the artificial intelligence tool: 1. Bases its determination on the following information, as applicable: a. an enrollee's medical or other clinical history, b. individual clinical circumstances as presented by the requesting provider, and c. other relevant clinical information contained in the enrollee's medical or other clinical record; 2. Does not base its determination solely on a group dataset;
Pre-filed 2026-11-01
HC-01.1HC-01.2
36 O.S. § 6567(B)
Plain Language
AI tools are categorically prohibited from denying, delaying, or modifying health care services based in whole or in part on medical necessity. All medical necessity determinations must be made by a licensed physician or a licensed health care professional who is competent to evaluate the specific clinical issues at hand. That professional must review and consider the requesting provider's recommendation, the enrollee's medical or clinical history, and individual circumstances. This is a complete prohibition on AI-driven adverse medical necessity determinations — not a human-in-the-loop requirement, but an outright bar on the AI making the determination at all.
B. The artificial intelligence tool shall not deny, delay, or modify health care services based, in whole or in part, on medical necessity. A determination of medical necessity shall be made only by a licensed physician or a licensed health care professional competent to evaluate the specific clinical issues involved in the health care services requested by the provider, by reviewing and considering the requesting provider's recommendation, the enrollee's medical or other clinical history, and individual circumstances.
Pre-filed 2026-11-01
HC-01.5
36 O.S. § 6567(A)(5)
Plain Language
Utilization review organizations, disability insurers, and specialized health insurers must ensure that AI tools used in utilization review do not use patient data beyond the tool's intended and stated purpose, consistent with HIPAA. This reinforces existing HIPAA use limitations in the specific context of AI-driven utilization review, ensuring that patient data fed into AI tools is not repurposed for secondary uses.
5. Does not use patient data beyond its intended and stated purpose consistent with the federal Health Insurance Portability and Accountability Act of 1996, P.L. No. 104-191, as applicable;
Pre-filed 2026-11-01
HC-01.4
36 O.S. § 6567(A)(10)
Plain Language
Covered entities must ensure that the performance, use, and outcomes of AI tools used in utilization review are periodically reviewed, and must revise the tools as necessary to maximize accuracy and reliability. This is an ongoing operational obligation, not a one-time pre-deployment check, requiring continued monitoring and improvement of AI tool performance in live deployment.
10. Requires performance use and outcomes to be periodically reviewed and revised to maximize accuracy and reliability.
Pre-filed 2026-11-01
HC-01.6
36 O.S. § 6567(C)
Plain Language
Every health benefit plan in Oklahoma must disclose on its accessible website whether or not it uses AI tools in its utilization review process. This public-facing disclosure obligation applies to all health benefit plans, not only those that use AI, since plans must also disclose the absence of AI tool use.
C. Any health benefit plan in this state shall notify enrollees and insureds about the use or lack of use of artificial intelligence tools in the utilization review process on the accessible Internet website of such health benefit plan.
Pre-filed 2026-11-01
HC-01.7
36 O.S. § 6567(A)(8)-(9)
Plain Language
Covered entities must ensure that their AI tools are open to inspection by the Insurance Commissioner for audit or compliance review purposes. Additionally, the entity's written policies and procedures must contain disclosures about the use and oversight of AI tools. These two requirements together create a regulatory transparency obligation — the AI tool must be auditable by the Commissioner, and internal governance documentation must address how the tool is used and overseen.
8. Is open to inspection for audit or compliance review by the Insurance Commissioner; 9. Contains disclosures pertaining to the use and oversight of the artificial intelligence tool in the written policies and procedures;
Pre-filed 2026-11-01
36 O.S. § 6567(A)(3)-(4), (6)-(7)
Plain Language
Covered entities must ensure that AI tools used in utilization review do not supplant health care provider decision-making, do not discriminate against enrollees in violation of state or federal law, do not cause harm to enrollees, and are applied in accordance with applicable HHS regulations and guidance. The non-supplanting requirement reinforces that the AI tool is a support mechanism — it cannot replace the provider's clinical judgment. The discrimination prohibition incorporates existing state and federal non-discrimination standards. The no-harm and HHS compliance requirements create broad guardrails around AI tool deployment in utilization review.
3. Does not supplant health care provider decision-making; 4. Does not discriminate against enrollees in violation of state and federal law; 6. Does not cause harm to the enrollee; 7. Is applied in accordance with any applicable regulations and guidance issued by the federal Department of Health and Human Services;
Pending 2026-10-06
HC-01.6
35 Pa.C.S. § 3503(b)(1)
Plain Language
When a facility uses AI algorithms for clinical decision making, the AI must not supersede the health care provider's clinical judgment. The provider retains ultimate authority over patient care decisions including gathering information, diagnosing, and planning treatments. This is an ongoing operational requirement — every use of AI in clinical decision making must preserve human clinical authority.
(b) Requirements for artificial intelligence-based algorithms.--For each instance in which a facility uses artificial intelligence-based algorithms for clinical decision making, the facility shall comply with the following: (1) The artificial intelligence-based algorithms must not supersede health care provider clinical decision making.
Pending 2026-10-06
HC-01.6HC-01.2
40 Pa.C.S. § 5205(1)-(3)
Plain Language
Before a health care provider who participates in utilization review on an insurer's behalf issues or upholds a decision to deny, reduce, or terminate health care benefits, including a prior authorization denial, that reviewing provider must individually review the patient's clinical records and other relevant information, document that review, and exercise clinical judgment independent of any AI recommendation. This ensures that a human clinical professional makes or affirms every adverse determination based on individualized review, not on AI output alone.
Prior to issuing or upholding a decision to deny, reduce or terminate benefits for a health care service, including a decision to deny a prior authorization request, a health care provider who participates in utilization review on behalf of an insurer shall: (1) Review individual clinical records and other relevant information. (2) Document the review under paragraph (1). (3) Based on the review under paragraph (1), exercise judgment independent of any recommendations by the artificial intelligence-based algorithms.
Pending 2026-10-06
HC-01.3
40 Pa.C.S. § 5203(b)(1)-(2)
Plain Language
When an insurer uses AI in utilization review, the AI must base its determination on the individual covered person's medical history, individual clinical and nonclinical circumstances presented by the requesting provider, and other relevant information from the patient's clinical record. The AI must not base a determination solely on a group data set. This requires individualized assessment — aggregate population-level data alone is insufficient to support any utilization review determination.
(b) Requirements for artificial intelligence-based algorithms.--For each instance in which an insurer uses artificial intelligence-based algorithms in the utilization review process regarding a covered person, the insurer shall comply with the following: (1) The artificial intelligence-based algorithms must base a determination on all of the following: (i) The medical or other clinical history of the covered person. (ii) Individual clinical or nonclinical circumstances as presented by the requesting health care provider. (iii) Other relevant clinical or nonclinical information contained in the medical or other clinical record of the covered person. (2) The artificial intelligence-based algorithms must not base a determination solely on a group data set.
Pending 2026-10-06
HC-01.1
40 Pa.C.S. § 5203(b)(3)
Plain Language
AI algorithms used in the insurer's utilization review process must not supersede the decision making of the health care provider conducting the utilization review. The reviewing provider retains final authority over coverage determinations — AI recommendations are advisory only and cannot override clinical judgment.
(3) The artificial intelligence-based algorithms must not supersede decision making of the health care provider conducting the utilization review.
Pending 2026-10-06
HC-01.3
40 Pa.C.S. § 5303(b)(1)-(2)
Plain Language
When an MA or CHIP managed care plan uses AI in utilization review, the AI must base its determination on the individual enrollee's medical history, individual clinical and nonclinical circumstances presented by the requesting provider, and other relevant clinical record information. Determinations must not be based solely on group-level data sets. This mirrors the insurer requirement in Chapter 52 but applies to Medicaid and CHIP managed care plans.
(b) Requirements for artificial intelligence-based algorithms.--For each instance in which a MA or CHIP managed care plan uses artificial intelligence-based algorithms in the utilization review process regarding an enrollee, the MA or CHIP managed care plan shall comply with the following: (1) The artificial intelligence-based algorithms must base a determination on all of the following: (i) The medical or other clinical history of the enrollee. (ii) Individual clinical or nonclinical circumstances as presented by the requesting health care provider. (iii) Other relevant clinical or nonclinical information contained in the medical or other clinical record of the enrollee. (2) The artificial intelligence-based algorithms must not base a determination solely on a group data set.
Pending 2026-10-06
HC-01.1
40 Pa.C.S. § 5303(b)(3)
Plain Language
AI algorithms used by an MA or CHIP managed care plan in utilization review must not supersede the reviewing health care provider's decision making. The provider retains final authority — AI output is advisory only.
(3) The artificial intelligence-based algorithms must not supersede decision making of the health care provider conducting the utilization review.
Pending 2026-10-06
HC-01.6, HC-01.2
40 Pa.C.S. § 5305(1)-(3)
Plain Language
Before issuing or upholding adverse benefit determinations on behalf of an MA or CHIP managed care plan — including prior authorization denials — the reviewing provider must individually review the enrollee's clinical records and other relevant information, document that review, and exercise independent clinical judgment separate from AI recommendations. This mirrors the § 5205 requirement for commercial insurers but applies in the Medicaid/CHIP managed care context.
Prior to issuing or upholding a decision to deny, reduce or terminate benefits for a health care service, including a decision to deny a prior authorization request, a health care provider who participates in utilization review on behalf of an MA or CHIP managed care plan shall: (1) Review individual clinical records and other relevant information. (2) Document the review under paragraph (1). (3) Based on the review under paragraph (1), exercise judgment independent of any recommendations by the artificial intelligence-based algorithms.
Pending 2026-10-06
HC-01.6
35 Pa.C.S. § 3502(b)(1)-(2)
Plain Language
When a facility uses AI to generate written or verbal patient communications about clinical information, the communication must include a clear and conspicuous disclaimer identifying it as AI-generated and must provide instructions for contacting a human health care provider. Two exceptions apply: purely administrative communications (scheduling, billing, clerical) are exempt, and communications that have been individually read and reviewed by a human health care provider are also exempt. The human-review exception creates an important safe harbor — if a provider personally reviews the AI-generated communication before it reaches the patient, the disclosure requirements do not apply.
(b) Communications.-- (1) A facility that uses artificial intelligence to generate written or verbal patient communications pertaining to patient clinical information shall include: (i) A clear and conspicuous disclaimer that indicates that the communication was generated by artificial intelligence. (ii) Clear instructions on how the patient may contact a human health care provider or relevant employee of the facility with questions. (2) The requirements under paragraph (1) shall not apply to communications that: (i) only pertain to administrative matters, including appointment scheduling, billing or other clerical or business matters; or (ii) have been individually read and reviewed by a human health care provider.
Pending 2026-10-06
HC-01.6
35 Pa.C.S. § 3502(a)(1)-(2)
Plain Language
Facilities must disclose to patients when AI-based algorithms are or will be used for clinical decision making or similar tasks. This disclosure must appear in all related written communications and must be posted on the facility's public website. The Department of Health will determine the specific nature and frequency of these disclosures. This is a general use-of-AI disclosure — it is triggered by the facility's use of AI for clinical decision making broadly, not by a specific AI-generated communication.
(a) Duty to disclose.--A facility shall disclose to patients of the facility if artificial intelligence-based algorithms are or will be used for clinical decision making or other similar tasks. The disclosure shall be: (1) Provided in all related written communications. (2) Posted on the publicly accessible Internet website of the facility.
Pending 2026-10-06
HC-01.6
40 Pa.C.S. § 5202(a)-(b)
Plain Language
Insurers must disclose to both participating network providers and all covered persons whether AI-based algorithms are or will be used in the insurer's utilization review process. This information must also be posted on the insurer's public website. The Insurance Department will determine the specific nature and frequency of disclosures to covered persons.
(a) Duty to disclose.--An insurer shall disclose to a participating network provider and all covered persons if artificial intelligence-based algorithms are or will be used in the utilization review process of the insurer. (b) Posting.--An insurer shall post the information about the use of artificial intelligence-based algorithms in the utilization review process of the insurer on the publicly accessible Internet website of the insurer.
Pending 2026-10-06
HC-01.6
40 Pa.C.S. § 5302(a)-(b)
Plain Language
MA or CHIP managed care plans must disclose to participating network providers and all enrollees whether AI-based algorithms are or will be used in utilization review. This disclosure must also be posted on the plan's public website. The Department of Human Services will determine the specific nature and frequency of disclosures to enrollees.
(a) Duty to disclose.--An MA or CHIP managed care plan shall disclose to a participating network provider and all enrollees if artificial intelligence-based algorithms are or will be used in the utilization review process of the MA or CHIP managed care plan. (b) Posting.--An MA or CHIP managed care plan shall post the information about the use of artificial intelligence-based algorithms in the utilization review process of the MA or CHIP managed care plan on the publicly accessible Internet website of the MA or CHIP managed care plan.
Pending 2027-01-09
HC-01.6
35 Pa.C.S. § 3503(b)(1)
Plain Language
When a facility uses AI algorithms for clinical decision making, the AI must not supersede the health care provider's clinical judgment. The human provider retains final decision-making authority over patient care, including diagnosis and treatment planning. This is an absolute requirement — there is no exception for cases where the AI may have higher measured accuracy.
(b) Requirements for artificial-intelligence-based algorithms.--For each instance in which a facility uses artificial-intelligence-based algorithms for clinical decision making, the facility shall comply with the following: (1) The artificial-intelligence-based algorithms must not supersede health care provider clinical decision making.
Pending 2027-01-09
HC-01.6
40 Pa.C.S. § 5203(b)(3)
Plain Language
When an insurer uses AI algorithms in utilization review, the AI must not supersede the decision making of the health care provider conducting the review. The human provider retains independent judgment authority over utilization review determinations.
(3) The artificial-intelligence-based algorithms must not supersede decision making of the health care provider conducting the utilization review.
Pending 2027-01-09
HC-01.2
40 Pa.C.S. § 5205(1)-(3)
Plain Language
Before an insurer denies, reduces, or terminates benefits — including prior authorization denials — the health care provider conducting utilization review must individually review clinical records, document that review, and exercise independent judgment separate from any AI recommendations. This means a human clinical reviewer must affirmatively review the patient's individual records and reach an independent conclusion rather than merely ratifying the AI output.
Prior to issuing or upholding a decision to deny, reduce or terminate benefits for a health care service, including a decision to deny a prior authorization request, a health care provider who participates in utilization review on behalf of an insurer shall: (1) Review individual clinical records and other relevant information. (2) Document the review under paragraph (1). (3) Based on the review under paragraph (1), exercise judgment independent of any recommendations by the artificial-intelligence-based algorithms.
Pending 2027-01-09
HC-01.3
40 Pa.C.S. § 5203(b)(1)-(2)
Plain Language
When an insurer uses AI in utilization review, the AI must base its determinations on the individual covered person's medical/clinical history, the circumstances presented by the requesting provider, and other relevant information in the person's clinical record. The AI may not base a determination solely on group-level data. This effectively requires individualized analysis — aggregate data sets can inform the determination but cannot be the sole basis.
(b) Requirements for artificial-intelligence-based algorithms.--For each instance in which an insurer uses artificial-intelligence-based algorithms in the utilization review process regarding a covered person, the insurer shall comply with the following: (1) The artificial-intelligence-based algorithms must base a determination on all of the following: (i) The medical or other clinical history of the covered person. (ii) Individual clinical or nonclinical circumstances as presented by the requesting health care provider. (iii) Other relevant clinical or nonclinical information contained in the medical or other clinical record of the covered person. (2) The artificial-intelligence-based algorithms must not base a determination solely on a group data set.
Pending 2027-01-09
HC-01.6
40 Pa.C.S. § 5303(b)(3)
Plain Language
When an MA or CHIP managed care plan uses AI in utilization review, the AI must not supersede the decision making of the health care provider conducting the review. This mirrors the identical requirement imposed on commercial insurers under Chapter 52.
(3) The artificial-intelligence-based algorithms must not supersede decision making of the health care provider conducting the utilization review.
Pending 2027-01-09
HC-01.3
40 Pa.C.S. § 5303(b)(1)-(2)
Plain Language
When an MA or CHIP managed care plan uses AI in utilization review, the AI must base its determinations on the individual enrollee's medical/clinical history, circumstances presented by the requesting provider, and other relevant information in the enrollee's clinical record. Determinations may not be based solely on group-level data. This mirrors the identical requirement imposed on commercial insurers under Chapter 52.
(b) Requirements for artificial-intelligence-based algorithms.--For each instance in which a MA or CHIP managed care plan uses artificial-intelligence-based algorithms in the utilization review process regarding an enrollee, the MA or CHIP managed care plan shall comply with the following: (1) The artificial-intelligence-based algorithms must base a determination on all of the following: (i) The medical or other clinical history of the enrollee. (ii) Individual clinical or nonclinical circumstances as presented by the requesting health care provider. (iii) Other relevant clinical or nonclinical information contained in the medical or other clinical record of the enrollee. (2) The artificial-intelligence-based algorithms must not base a determination solely on a group data set.
Pending 2027-01-09
HC-01.2
40 Pa.C.S. § 5305(1)-(3)
Plain Language
Before an MA or CHIP managed care plan denies, reduces, or terminates benefits — including prior authorization denials — the health care provider conducting utilization review must individually review clinical records, document that review, and exercise independent judgment separate from any AI recommendations. This mirrors the identical requirement for commercial insurers under § 5205.
Prior to issuing or upholding a decision to deny, reduce or terminate benefits for a health care service, including a decision to deny a prior authorization request, a health care provider who participates in utilization review on behalf of an MA or CHIP managed care plan shall: (1) Review individual clinical records and other relevant information. (2) Document the review under paragraph (1). (3) Based on the review under paragraph (1), exercise judgment independent of any recommendations by the artificial-intelligence-based algorithms.
Pending 2027-01-09
HC-01.6
35 Pa.C.S. § 3502(a)-(b)
Plain Language
Facilities must disclose to patients when AI algorithms are or will be used for clinical decision making, both in related written communications and on the facility's public website. Additionally, when AI generates written or verbal patient communications about clinical information, the communication must include a clear disclaimer that it was AI-generated and instructions for contacting a human provider. Two exceptions apply: purely administrative communications (scheduling, billing) and communications individually reviewed by a human provider are exempt from the AI-generated disclaimer requirement.
(a) Artificial-intelligence-based algorithms.--A facility shall disclose to patients of the facility if artificial-intelligence-based algorithms are or will be used for clinical decision making or other similar tasks. The disclosure shall be: (1) Provided in all related written communications. (2) Posted on the publicly accessible Internet website of the facility. (b) Communications.-- (1) A facility that uses artificial intelligence to generate written or verbal patient communications pertaining to patient clinical information shall include: (i) A clear and conspicuous disclaimer that indicates that the communication was generated by artificial intelligence. (ii) Clear instructions on how the patient may contact a human health care provider or relevant employee of the facility with questions. (2) The requirements under paragraph (1) shall not apply to communications that: (i) only pertain to administrative matters, including appointment scheduling, billing or other clerical or business matters; or (ii) have been individually read and reviewed by a human health care provider.
Pending 2027-01-09
HC-01.6
40 Pa.C.S. § 5202(a)-(b)
Plain Language
Insurers must disclose to both participating network providers and all covered persons when AI algorithms are or will be used in the insurer's utilization review process. This disclosure must also be posted on the insurer's public website. The Insurance Department will determine the specific nature and frequency of disclosures to covered persons.
(a) Artificial-intelligence-based algorithms.--An insurer shall disclose to a participating network provider and all covered persons if artificial-intelligence-based algorithms are or will be used in the utilization review process of the insurer. (b) Posting.--An insurer shall post the information about the use of artificial-intelligence-based algorithms in the utilization review process of the insurer on the publicly accessible Internet website of the insurer.
Pending 2027-01-09
HC-01.6
40 Pa.C.S. § 5302(a)-(b)
Plain Language
MA or CHIP managed care plans must disclose to participating network providers and all enrollees when AI algorithms are or will be used in the plan's utilization review process. This disclosure must also be posted on the plan's public website. The Department of Human Services will determine the specific nature and frequency of disclosures to enrollees.
(a) Artificial-intelligence-based algorithms.--An MA or CHIP managed care plan shall disclose to a participating network provider and all enrollees if artificial-intelligence-based algorithms are or will be used in the utilization review process of the MA or CHIP managed care plan. (b) Posting.--An MA or CHIP managed care plan shall post the information about the use of artificial-intelligence-based algorithms in the utilization review process of the MA or CHIP managed care plan on the publicly accessible Internet website of the MA or CHIP managed care plan.
Pending 2027-01-09
HC-01.4
35 Pa.C.S. § 3503(b)(5)
Plain Language
Facilities must periodically review and revise the performance, use, and outcomes of their AI algorithms to maximize accuracy and reliability. This is an ongoing operational review requirement — not a one-time pre-deployment check. The specific frequency is not defined in statute and will likely be set by Department of Health regulations.
(5) The performance, use and outcomes of the artificial-intelligence-based algorithms must be periodically reviewed and revised to maximize accuracy and reliability.
Pending 2027-01-09
HC-01.4
40 Pa.C.S. § 5203(b)(7)
Plain Language
Insurers must periodically review and revise the performance, use, and outcomes of their AI algorithms used in utilization review to maximize accuracy and reliability. This is an ongoing operational review obligation.
(7) The performance, use and outcomes of the artificial-intelligence-based algorithms must be periodically reviewed and revised to maximize accuracy and reliability.
Pending 2027-01-09
HC-01.4
40 Pa.C.S. § 5303(b)(7)
Plain Language
MA or CHIP managed care plans must periodically review and revise the performance, use, and outcomes of their AI algorithms used in utilization review to maximize accuracy and reliability.
(7) The performance, use and outcomes of the artificial-intelligence-based algorithms must be periodically reviewed and revised to maximize accuracy and reliability.
Pending 2027-01-09
HC-01.5
35 Pa.C.S. § 3503(b)(6)
Plain Language
Patient data used by AI algorithms in facilities must not be used beyond the intended and stated purpose of those algorithms. This purpose limitation is consistent with HIPAA administrative simplification provisions and Pennsylvania law. Facilities must ensure that patient data fed into AI clinical decision-making tools is not repurposed for unrelated uses.
(6) Patient data must not be used beyond the intended and stated purpose of the artificial-intelligence-based algorithms, consistent with the laws of this Commonwealth and 42 U.S.C. Ch. 7 Subch. XI Part C (relating to administrative simplification), as applicable.
Pending 2027-01-09
HC-01.5
40 Pa.C.S. § 5203(b)(8)
Plain Language
Covered person data used by AI algorithms in the insurer's utilization review must not be used beyond the intended and stated purpose of those algorithms. This purpose limitation is consistent with HIPAA and Pennsylvania law.
(8) The data of the covered person must not be used beyond the intended and stated purpose of the artificial-intelligence-based algorithms, consistent with Commonwealth law and 42 U.S.C. Ch. 7, Subch. XI Part C (relating to administrative simplification), as applicable.
Pending 2027-01-09
HC-01.5
40 Pa.C.S. § 5303(b)(8)
Plain Language
Enrollee data used by AI algorithms in the MA or CHIP managed care plan's utilization review must not be used beyond the intended and stated purpose of those algorithms, consistent with HIPAA and Pennsylvania law.
(8) The data of the covered person or enrollees must not be used beyond the intended and stated purpose of the artificial-intelligence-based algorithms, consistent with the laws of this Commonwealth and the Health Insurance Portability and Accountability Act of 1996 (Public Law 104-191, 110 Stat. 1936), as applicable.
Pending 2026-01-21
HC-01.1, HC-01.2
R.I. Gen. Laws § 27-84-4(a)
Plain Language
When AI makes or substantially contributes to a non-administrative adverse benefit determination regarding medically necessary care, a licensed provider with the same license status as the ordering professional must review and approve the determination before it is finalized. The reviewing provider must document their rationale in the enrollee's case record. This is a hard-stop gating requirement — the AI determination cannot go into effect without the human clinical peer sign-off. The penalty for noncompliance is automatic reversal of the adverse determination, giving this provision immediate operational consequences. Note this applies only to non-administrative adverse determinations (those requiring medical judgment), not to administrative denials like eligibility or covered-benefit determinations.
Any non-administrative adverse benefit determination where an artificial intelligence system made, or was a substantial factor in making, that determination regarding medically necessary care shall be reviewed and approved by a provider with the same license status of the ordering professional provider before being finalized, with documentation of their rationale included in the enrollee's case record. Failure to follow the requirements set forth in this subsection shall result in reversal of the non-administrative adverse determination.
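For deployers building utilization-review pipelines, the hard-stop gating and automatic-reversal logic described above can be sketched as a simple compliance check. This is an illustrative sketch only — the class, field, and function names are hypothetical and not drawn from the statute:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AdverseDetermination:
    ai_substantial_factor: bool          # AI made, or substantially contributed to, the determination
    reviewer_license: Optional[str]      # license status of the human reviewer, if any
    reviewer_rationale: Optional[str]    # rationale documented in the enrollee's case record

def finalize(det: AdverseDetermination, ordering_license: str) -> str:
    """Gate modeled on R.I. Gen. Laws § 27-84-4(a): an AI-involved
    non-administrative adverse determination stands only if a provider with
    the same license status as the ordering professional reviewed it and
    documented a rationale; otherwise it is reversed."""
    if det.ai_substantial_factor:
        same_license = det.reviewer_license == ordering_license
        if not same_license or not det.reviewer_rationale:
            return "REVERSED"   # noncompliance triggers automatic reversal
    return "FINALIZED"
```

Under this sketch, an AI-assisted denial reviewed by a same-license-status provider with a documented rationale finalizes; one lacking either element reverses as a matter of course, mirroring the statute's self-executing remedy.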
Pending 2026-01-21
R.I. Gen. Laws § 27-84-4(b)
Plain Language
Even after the required human clinical peer review under § 27-84-4(a), the resulting adverse determination remains subject to the existing appeals process under Rhode Island's utilization review appeals statute (R.I. Gen. Laws ch. 27-18.9). This provision confirms that clinical peer review does not eliminate the enrollee's existing appeal rights — it is an additional safeguard, not a replacement for the appeals process.
Appeals of non-administrative adverse benefit determinations made by an artificial intelligence system regarding medically necessary care that has been reviewed and approved by a provider with the same license status of the ordering professional provider shall comply with the appeals process set forth in chapter 18.9 of title 27.
Pending 2026-01-09
HC-01.1, HC-01.2
R.I. Gen. Laws § 27-84-4(a)
Plain Language
When AI makes or substantially contributes to a non-administrative adverse benefit determination regarding medically necessary care, a licensed provider with the same license status as the ordering provider must review and approve the determination before it is finalized. The reviewing provider must document their rationale in the enrollee's case record. This is a hard gating requirement — not merely a recommendation or audit-trail obligation. The automatic reversal remedy for non-compliance is self-executing: if the insurer fails to obtain the required clinical peer review, the adverse determination is reversed as a matter of law, regardless of its clinical merit. This is one of the strongest enforcement mechanisms in the statute.
Any non-administrative adverse benefit determination where an artificial intelligence system made, or was a substantial factor in making, that determination regarding medically necessary care shall be reviewed and approved by a provider with the same license status of the ordering professional provider before being finalized, with documentation of their rationale included in the enrollee's case record. Failure to follow the requirements set forth in this subsection shall result in reversal of the non-administrative adverse determination.
Pending 2026-01-09
R.I. Gen. Laws § 27-84-4(b)
Plain Language
When a non-administrative adverse benefit determination involving AI has been properly reviewed and approved by a same-license-status provider under § 27-84-4(a), appeals of that determination must follow the existing appeals process in R.I. Gen. Laws Chapter 27-18.9. This provision does not create a new appeals process — it confirms that properly reviewed AI-assisted adverse determinations are subject to the same appeals framework that applies to all adverse benefit determinations in Rhode Island.
Appeals of non-administrative adverse benefit determinations made by an artificial intelligence system regarding medically necessary care that has been reviewed and approved by a provider with the same license status of the ordering professional provider shall comply with the appeals process set forth in chapter 18.9 of title 27.
Pending 2026-07-01
HC-01.3
Section 1(1)-(2)
Plain Language
Health carriers using AI tools for utilization review — whether directly or through contracted entities — must ensure those tools base their determinations on individualized patient clinical data: the patient's medical history, the individual clinical circumstances presented by the requesting provider, and other relevant clinical information in the patient's record. The AI tool may not base its determination solely on group-level or aggregate datasets. This means carriers must configure and validate their AI utilization review tools to ingest and weigh individual enrollee data, not just population-level benchmarks.
Any health carrier that makes determinations or provides advice about third-party payment for any health care services using an artificial intelligence, algorithm, or other software tool, for the purpose of utilization review and any health carrier that contracts with or otherwise works through an entity that uses an artificial intelligence, algorithm, or other software tool, for the purpose of utilization review, shall ensure the following: (1) The artificial intelligence, algorithm, or other software tool bases its determination on the following information, as applicable: (a) A patient's medical or other clinical history; (b) Individual clinical circumstances, as presented by the requesting provider; and (c) Other relevant clinical information contained in the patient's medical or other clinical record; (2) The artificial intelligence, algorithm, or other software tool does not base its determination solely on a group dataset;
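Carriers validating their tools against this individualized-data requirement could use a pre-determination input check along these lines. The key names are hypothetical illustrations of the statutory categories, not fields defined by the bill:

```python
def check_ur_inputs(inputs: dict) -> list[str]:
    """Return a list of compliance problems with the input record supplied
    to a utilization-review tool: individual clinical data must be present,
    and the determination must not rest solely on a group dataset."""
    problems = []
    individual_keys = ("medical_history", "clinical_circumstances", "clinical_record")
    if not any(inputs.get(k) for k in individual_keys):
        problems.append("missing individual clinical data")
    populated = {k for k, v in inputs.items() if v}
    if populated and populated <= {"group_dataset"}:
        problems.append("determination would rest solely on a group dataset")
    return problems
```

A record carrying only population-level benchmarks fails both prongs; adding any of the patient-specific inputs clears the check.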
Pending 2026-07-01
Section 1(3)-(4)
Plain Language
Health carriers must ensure their AI utilization review tools are applied equally to all patients and configured consistently across all subscriber groups and individuals covered by a health benefit plan. The tool must produce the same results for patients with similar clinical presentations and considerations — no subscriber group or individual may receive differential AI-driven review outcomes. Carriers must also ensure the tool complies with applicable HHS regulations and guidance. This is a non-discrimination and consistency obligation specific to AI-driven utilization review, distinct from the individualized-data requirement in Section 1(1)-(2).
(3) The artificial intelligence, algorithm, or other software tool is applied equally for all patients, including in accordance with any applicable regulations and guidance issued by the United States Department of Health and Human Services; and (4) The artificial intelligence, algorithm, or other software tool is configured and applied in a standard consistent manner for all subscriber groups and individuals covered by a health benefit plan, as defined in § 58-17-66, so that the resulting decisions are the same for all patients with similar clinical presentations and considerations.
Pending 2026-07-01
HC-01.1, HC-01.2
Section 2
Plain Language
AI tools used for utilization review are categorically prohibited from denying, delaying, or modifying a determination to provide health care services. Every adverse determination must be made by a licensed physician or a licensed healthcare professional who is competent to evaluate the specific clinical issues at hand. That professional must review and consider the requesting provider's recommendation, the patient's medical history, and the patient's individual clinical circumstances before making the determination. The AI tool may inform the process, but the final adverse decision must come from a qualified human clinician — the tool itself cannot issue the denial, delay, or modification.
An artificial intelligence, algorithm, or other software tool used for the purpose of utilization review pursuant to section 1 of this Act may not deny, delay, or modify a determination to provide health care services. Any adverse determination may be made only by a licensed physician or a licensed healthcare professional competent to evaluate the specific clinical issues involved in the requested services, and only after reviewing and considering the requesting provider's recommendation, the patient's medical or other clinical history as applicable, and individual clinical circumstances.
Pending 2026-07-01
HC-01.6, HC-01.8
Va. Code § 38.2-3407.15(B)(15)(iv)
Plain Language
When a carrier uses AI to issue an adverse determination (e.g., a claim denial, coverage modification, or other unfavorable coverage decision), the carrier must notify both the affected enrollee and the relevant health care provider that AI was used in reaching the adverse determination. The carrier must also provide a clear and timely appeal process for the determination. This creates two distinct obligations: (1) a transparency/disclosure obligation triggered by any adverse AI-assisted determination, and (2) an appeal process obligation ensuring meaningful recourse. The statute does not specify required timeframes for the notice or appeal, which the Commission may address through rulemaking under subsection K.
Each carrier shall (iv) provide notice to enrollees and health care providers when AI has been used to issue an adverse determination and provide a clear and timely process for appealing the determination.
Pre-filed 2026-07-01
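The Virginia notice duty above can be sketched as a small gating function: when AI contributed to an adverse determination, the carrier owes an AI-use disclosure to both the enrollee and the provider, each paired with appeal instructions. This is a minimal illustration, not an implementation of the statute; the `AdverseDetermination` class and all field names are hypothetical.

```python
# Hypothetical sketch of the notice duty in Va. Code § 38.2-3407.15(B)(15)(iv).
# All class and field names are illustrative, not drawn from the statute.
from dataclasses import dataclass


@dataclass
class AdverseDetermination:
    enrollee_id: str
    provider_id: str
    ai_was_used: bool           # AI contributed to the adverse determination
    appeal_instructions: str    # how, where, and by when to appeal


def required_notices(det: AdverseDetermination) -> list[dict]:
    """Return the notices a carrier must issue for this determination."""
    if not det.ai_was_used:
        return []  # the AI-disclosure duty is triggered only by AI use
    # Both the enrollee and the treating provider must be told that AI
    # was used, and each notice must carry a clear, timely appeal path.
    return [
        {"recipient": det.enrollee_id, "role": "enrollee",
         "ai_disclosure": True, "appeal": det.appeal_instructions},
        {"recipient": det.provider_id, "role": "provider",
         "ai_disclosure": True, "appeal": det.appeal_instructions},
    ]
```

Note that the sketch leaves notice and appeal timeframes as free text, mirroring the statute's silence on timing pending Commission rulemaking.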
HC-01.3
18 V.S.A. § 9423(a)(1)-(2)
Plain Language
Health plans must ensure that any AI, algorithm, or software tool used in utilization review bases its determinations on the individual enrollee's medical history, the specific clinical circumstances presented by the treating provider, and other relevant clinical information from the enrollee's records. The tool may not rely solely on group-level or aggregate datasets. This requires health plans to configure and validate that their utilization review tools ingest and process individualized patient data for each determination.
(1) The artificial intelligence, algorithm, or other software tool bases its determination on the following information, as applicable: (A) an insured's medical or other clinical history; (B) the specific clinical circumstances as presented by the requesting health care provider; and (C) other relevant clinical information contained in the insured's medical or other clinical record. (2) The artificial intelligence, algorithm, or other software tool does not base its determination solely on a group dataset.
Pre-filed 2026-07-01
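A health plan validating the individualized-data rule above might implement it as a pre-determination gate: the tool may not proceed unless at least one of the three individualized inputs in (a)(1)(A)-(C) is present, which also guarantees the determination is not based solely on a group dataset. This is a minimal sketch under assumed data shapes; the `DeterminationInputs` class and its field names are hypothetical, not statutory.

```python
# Hypothetical compliance gate for 18 V.S.A. § 9423(a)(1)-(2): before a
# utilization review tool issues a determination, confirm it rests on
# individualized clinical inputs. Field names are illustrative only.
from dataclasses import dataclass, field


@dataclass
class DeterminationInputs:
    clinical_history: list[str] = field(default_factory=list)   # (a)(1)(A)
    provider_presentation: str = ""                             # (a)(1)(B)
    record_extracts: list[str] = field(default_factory=list)    # (a)(1)(C)
    group_datasets: list[str] = field(default_factory=list)     # population-level inputs


def passes_individualization_gate(inputs: DeterminationInputs) -> bool:
    """True if at least one individualized input is present. Group data may
    still inform the tool, but requiring individual data here ensures the
    determination is never based *solely* on a group dataset, per (a)(2)."""
    return bool(
        inputs.clinical_history
        or inputs.provider_presentation
        or inputs.record_extracts
    )
```

In this reading, a determination fed only population-level data fails the gate even if the group data is clinically sound, because subsection (a)(2) bars group data from being the sole basis.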
HC-01.1, HC-01.2
18 V.S.A. § 9423(b)
Plain Language
AI tools used by health plans are flatly prohibited from making adverse coverage determinations — they may not deny, delay, or modify authorization of health care services. Every adverse determination must be made by a licensed human health care provider who is clinically competent to evaluate the specific issues at hand. That human reviewer must consider the treating provider's recommendation, the enrollee's individual medical history, and the specific clinical circumstances. This is stricter than many peer-state provisions: the AI tool cannot serve as even the primary basis for a denial — it is entirely barred from making the adverse determination.
The artificial intelligence, algorithm, or other software tool utilized by a health plan shall not deny, delay, or modify a determination of whether to authorize the coverage of health care services. An adverse coverage determination shall be made only by a licensed human health care provider who is competent to evaluate the specific clinical issues involved in the health care services requested by a treating health care provider by reviewing and considering the requesting provider's recommendation; the insured's medical or other clinical history, as appropriate; and the specific clinical circumstances.
Pre-filed 2026-07-01
HC-01.4
18 V.S.A. § 9423(a)(7)
Plain Language
Health plans must review and revise the performance, use, and outcomes of their AI utilization review tools at least quarterly. This is a more frequent cadence than many peer-state requirements (which typically require annual review). The review must be substantive enough to drive revisions aimed at maximizing accuracy and reliability — a checkbox review would not satisfy the obligation.
(7) The artificial intelligence, algorithm, or other software tool's performance, use, and outcomes are reviewed and revised at least quarterly to maximize accuracy and reliability.
Pre-filed 2026-07-01
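The quarterly cadence above can be monitored with a simple date check. This is an illustrative sketch only: the 92-day window is an assumption (the longest calendar quarter), not a statutory figure, and a plan might reasonably track reviews by calendar quarter instead.

```python
# Hypothetical cadence check for the quarterly review duty in
# 18 V.S.A. § 9423(a)(7). The 92-day window is an assumption, chosen as
# the length of the longest calendar quarter; it is not statutory.
from datetime import date, timedelta

QUARTER = timedelta(days=92)


def review_overdue(last_review: date, today: date) -> bool:
    """True if the tool's performance/outcomes review is past the
    quarterly cadence and a new review-and-revise cycle is due."""
    return today - last_review > QUARTER
```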
HC-01.7
18 V.S.A. § 9423(a)(5)
Plain Language
Health plans must ensure their AI utilization review tools are available for inspection by the Vermont Department of Financial Regulation and other state agencies conducting audits or compliance reviews. This means the health plan cannot claim the tool is proprietary and refuse to allow regulatory examination. The obligation extends to any contracted entity's tools used on the health plan's behalf.
(5) The artificial intelligence, algorithm, or other software tool is open to inspection for audit or compliance reviews by the Department of Financial Regulation and by other State agencies and departments pursuant to applicable State and federal law.
Pre-filed 2026-07-01
HC-01.6
18 V.S.A. § 9423(a)(6)
Plain Language
Health plans must include, in their written policies and procedures, disclosures about their use of AI in utilization review and about the nature and degree of human review and oversight applied. The specific content and format of these disclosures are subject to Department of Financial Regulation requirements. This is a documentation and disclosure obligation — the health plan's policies must transparently describe how AI is used and what human oversight exists.
(6) Disclosures pertaining to the use of the artificial intelligence, algorithm, or other software tool in the utilization review process and the nature and degree of human review and oversight are contained in the health plan's written policies and procedures to the extent required by the Department of Financial Regulation.
Passed 2026-07-01
HC-01.1, HC-01.2, HC-01.3
18 V.S.A. § 9771(a)(1)-(2), (a)(4), (b)
Plain Language
Health plans using AI, algorithms, or other software for utilization review based on medical necessity must ensure the tool bases determinations on individualized clinical data — the enrollee's medical history, the treating provider's clinical presentation, and other relevant records — and does not rely solely on group datasets. The AI tool may not supplant provider decision making. Most critically, subsection (b) provides an absolute prohibition: AI may not deny, delay, or modify health care services based on medical necessity. Only a licensed human provider competent in the relevant clinical specialty may make medical necessity determinations, after reviewing the treating provider's recommendation and the individual's clinical record. This applies to prospective, retrospective, and concurrent utilization review. The obligation extends to contracted utilization review entities.
(a) A health plan, as defined in section 9418 of this title, that uses an artificial intelligence, algorithm, or other software tool for the purpose of utilization review or utilization management functions, based in whole or in part on medical necessity, or that contracts with or otherwise works through an entity that uses artificial intelligence, algorithm, or other software tool for the purpose of utilization review or utilization management functions, based in whole or in part on medical necessity, shall ensure all of the following: (1) The artificial intelligence, algorithm, or other software tool bases its determination on the following information, as applicable: (A) a covered individual's medical or other clinical history; (B) the specific clinical circumstances as presented by the requesting health care provider; and (C) other relevant clinical information contained in the covered individual's medical or other clinical record. (2) The artificial intelligence, algorithm, or other software tool does not base its determination solely on a group dataset. (4) The artificial intelligence, algorithm, or other software tool does not supplant health care provider decision making. (b) Notwithstanding subsection (a) of this section, the artificial intelligence, algorithm, or other software tool shall not deny, delay, or modify health care services based in whole or in part on medical necessity. A determination of medical necessity shall be made only by a licensed human health care provider who is competent to evaluate the specific clinical issues involved in the health care services requested by a treating health care provider by reviewing and considering the requesting provider's recommendation; the covered individual's medical or other clinical history, as appropriate; and the specific clinical circumstances.
Passed 2026-07-01
HC-01.4
18 V.S.A. § 9771(a)(9)
Plain Language
Health plans must periodically review and revise the AI tools used in utilization review to maximize accuracy and reliability. This is an ongoing operational obligation — not a one-time pre-deployment check. The statute does not specify a review cadence, leaving the frequency to the health plan's discretion, but the obligation is continuous.
(9) The artificial intelligence, algorithm, or other software tool's performance, use, and outcomes are periodically reviewed and revised to maximize accuracy and reliability.
Passed 2026-07-01
HC-01.5
18 V.S.A. § 9771(a)(10)
Plain Language
Patient data used by AI tools in utilization review may not be used beyond its intended and stated purpose. This purpose limitation must be consistent with Vermont's existing health privacy law (chapter 42B) and HIPAA privacy and security rules. Health plans must ensure that patient clinical data ingested by AI for utilization review is not repurposed for other uses such as marketing, risk profiling, or model training.
(10) Patient data is not used beyond its intended and stated purpose, consistent with chapter 42B of this title and with the security and privacy protections of 45 C.F.R. Part 160 and 45 C.F.R. Part 164, Subparts A and E, as applicable.
Passed 2026-07-01
HC-01.7
18 V.S.A. § 9771(a)(7)-(8)
Plain Language
Health plans must make their AI utilization review tools available for inspection and audit by the Department of Financial Regulation and other state agencies. Additionally, the health plan's written policies and procedures must contain disclosures about the use and oversight of the AI tool, to the extent the Department of Financial Regulation requires. This creates both a regulatory audit access obligation and a documentation/disclosure requirement, though the scope of the disclosure obligation is partially delegated to DFR rulemaking.
(7) The artificial intelligence, algorithm, or other software tool is open to inspection for audit or compliance reviews by the Department of Financial Regulation and by other State agencies and departments pursuant to applicable State and federal law. (8) Disclosures pertaining to the use and oversight of the artificial intelligence, algorithm, or other software tool are contained in the health plan's written policies and procedures to the extent required by the Department of Financial Regulation.