HC-01
Healthcare AI
Healthcare AI Decision Restrictions
Applies to: Deployer, Professional, Government. Sector: Healthcare, Insurance.
Bills — Enacted: 1 unique bill · Bills — Proposed: 35 · Last Updated: 2026-03-29
Core Obligation

Entities using AI, algorithms, or automated tools in healthcare insurance coverage determinations, utilization review, prior authorization, or claims adjudication must ensure that such tools do not serve as the sole or primary basis for adverse determinations. Final decisions on medical necessity, claim denials, and coverage modifications must be made by licensed, clinically competent healthcare professionals who review individualized patient clinical circumstances. AI tools used in these contexts must base determinations on individual enrollee medical history and clinical data, not solely on group-level datasets.

Sub-Obligations (8)

HC-01.1: Prohibition on AI as Sole Decision-Maker (1 enacted, 32 proposed)
AI, algorithms, or software tools may not serve as the sole or primary basis for denying, delaying, modifying, or downcoding healthcare coverage, claims, or prior authorization requests. A licensed human clinical professional must make or independently affirm every adverse determination.

HC-01.2: Licensed Clinical Peer Review Requirement (1 enacted, 25 proposed)
Any denial, delay, modification, or downgrade of healthcare services based on medical necessity must be reviewed and decided by a qualified clinical peer — a licensed physician or healthcare professional practicing in the same or similar specialty as the treating provider — who considers the provider's recommendation and the enrollee's individual medical history.

HC-01.3: Individualized Clinical Data Basis (0 enacted, 21 proposed)
AI tools used in utilization review or coverage determinations must base their outputs on individualized enrollee clinical data (medical history, clinical records, individual circumstances) and must not base determinations solely on aggregate or group-level datasets.

HC-01.4: Periodic AI Tool Review and Revision (0 enacted, 15 proposed)
Health insurers and utilization review organizations must periodically review and revise AI tools used in coverage and clinical determinations to maximize accuracy, reliability, fairness, and compliance with applicable clinical standards.

HC-01.5: Patient Data Purpose Limitation (0 enacted, 12 proposed)
Patient data used by AI in utilization review or coverage determination functions must not be used beyond its intended and stated purpose, consistent with HIPAA and applicable state health privacy law.

HC-01.6: Healthcare AI Disclosure to Enrollees and Providers (1 enacted, 14 proposed)
Insurers must provide written disclosure to enrolled patients, contracted providers, and, where applicable, group plan sponsors that AI or algorithms are used in utilization management or coverage determinations. Each claim denial communication must identify whether AI was involved and the named human professional who made the final determination.

HC-01.7: Healthcare AI Regulatory Filing and Audit Access (1 enacted, 22 proposed)
Insurers must file AI-related utilization review policies and procedures with the applicable state insurance regulator, make such policies available to enrollees and providers upon request, and ensure that AI tools used in utilization review are open to inspection for regulatory audit or compliance review.

HC-01.8: AI Denial Attestation in Communications (0 enacted, 4 proposed)
Insurers must include in each claim denial communication a statement affirming whether AI, machine learning, or an automated system served as the basis for the denial decision, and must identify the qualified human professional responsible.
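The common thread across HC-01.1 and HC-01.2 is a human-in-the-loop gate: AI may recommend, but only a licensed clinical peer can make or affirm an adverse determination. A minimal sketch of such a gate, assuming hypothetical class and field names (none come from any bill text):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AIRecommendation:
    decision: str    # "approve" or "deny" (hypothetical labels)
    rationale: str

@dataclass
class Reviewer:
    name: str
    licensed: bool
    specialty: str

def finalize_determination(ai: AIRecommendation,
                           reviewer: Optional[Reviewer],
                           provider_specialty: str) -> str:
    # Approvals may flow straight through: several bills expressly
    # permit AI-assisted expedited approvals without the human gate.
    if ai.decision == "approve":
        return "approved"
    # HC-01.1: an adverse determination needs a licensed human reviewer.
    if reviewer is None or not reviewer.licensed:
        raise PermissionError("adverse determination requires a licensed human reviewer")
    # HC-01.2: the reviewer must be a clinical peer of the treating provider.
    if reviewer.specialty != provider_specialty:
        raise PermissionError("reviewer must practice in the same or a similar specialty")
    return f"denied by {reviewer.name}"
```

Note the asymmetry the sketch preserves: only adverse outcomes trigger the gate, which is how most of the mapped bills are drafted.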
Bills That Map This Requirement (36 bills)
For each bill: status, mapped sub-obligations, and statutory section, followed by a plain-language summary and the operative statutory text.
Passed 2026-10-01
HC-01.3
Section 1(b)(1)
Plain Language
Insurers using AI for prior authorization decisions must base those decisions on the individual enrollee's medical history, the unique clinical circumstances presented by the requesting provider, and any additional clinical information in the enrollee's medical record. This effectively prohibits insurers from using AI to make prior authorization determinations based solely on aggregate or population-level data without individualized clinical review.
(b)(1) An insurer that uses artificial intelligence to make determinations on requests for prior authorization under health benefit plans shall base determinations on all of the following: a. The enrollee's medical history. b. Any clinical circumstances unique to the enrollee which are presented by the requesting health care provider. c. Additional clinical information about the enrollee which may be present in the enrollee's medical record.
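The three enumerated bases in (b)(1) translate naturally into an input check: before accepting an AI determination, confirm that individual-level inputs were actually present. A hypothetical sketch (all field names are illustrative, not drawn from the statute):

```python
# The three enumerated bases in (b)(1), as illustrative dict keys.
REQUIRED_INDIVIDUAL_BASES = {
    "medical_history",                   # (b)(1)a.
    "provider_presented_circumstances",  # (b)(1)b.
    "additional_record_information",     # (b)(1)c.
}

def has_individualized_basis(inputs: dict) -> bool:
    """True if at least one individual-level basis is present and nonempty.

    Group-level data may still inform the output, but it can never be
    the sole basis for the determination.
    """
    return any(inputs.get(k) for k in REQUIRED_INDIVIDUAL_BASES)
```

A determination fed only group statistics would fail this check, which is the behavior the provision targets.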
Passed 2026-10-01
HC-01.1, HC-01.2
Section 1(b)(3)
Plain Language
Every adverse prior authorization determination — whether a denial, reduction, or deferral — must be made by a licensed physician or other competent health care professional, not by the AI system alone. The human reviewer must be competent to evaluate the AI's recommendation in light of the enrollee's individual clinical circumstances and the treating provider's recommendation. AI may inform the decision, but the final adverse determination must rest with a qualified human. This is an unconditional human-in-the-loop requirement for all adverse outcomes — there is no exception for low-risk or routine denials.
(3) In addition to the requirements listed in subdivisions (1) and (2), a determination to deny, reduce, or defer a request for prior authorization shall always be made by a licensed physician or other health care professional who is competent to evaluate any recommendation or conclusion of artificial intelligence in the light of the specific clinical issues involved in the health care service requested which are unique to the enrollee's circumstances or as recommended by the treating health care provider.
Passed 2026-10-01
HC-01.6
Section 1(c)(1)
Plain Language
Insurers must provide prominent written disclosure when AI is used as a tool in utilization review. For group plans, the disclosure goes to the plan sponsor (typically the employer). For individual plans, the disclosure goes directly to the enrollee. This is a general disclosure obligation about the insurer's use of AI in utilization review, not a per-claim disclosure requirement — the statute says 'if artificial intelligence is used' rather than requiring disclosure on each individual determination.
(c) An insurer shall do all of the following: (1) Make prominent written disclosure if artificial intelligence is used as a tool to contribute information in utilization review to: a. The sponsor in the case of a group plan; or b. The enrollee in the case of an individual plan.
Passed 2026-10-01
HC-01.5
Section 1(c)(3)
Plain Language
Insurers must ensure that patient data processed by AI in utilization review functions is not repurposed beyond its intended and stated use, consistent with HIPAA. This is a data use limitation specific to AI-processed patient data in the utilization review context — it prevents insurers from, for example, using patient clinical data gathered for prior authorization AI to train models for marketing, underwriting, or other secondary purposes not disclosed to the patient.
(3) Ensure that patient data used in utilization review functions by artificial intelligence is not used beyond its intended and stated purpose consistent with the federal Health Insurance Portability and Accountability Act (HIPAA), 42 U.S.C. § 1320d et seq.
Pending 2027-01-01
HC-01.3
C.R.S. § 10-16-112.7(3)(a)-(b)
Plain Language
Entities using AI for utilization review must ensure the AI system bases its determinations on the individual patient's medical history, clinical circumstances as presented by the requesting provider, and other relevant clinical information from the patient's records. The system may not base determinations solely on group-level or aggregate data without reference to the individual's own data. This effectively requires individualized clinical assessment — AI tools cannot deny or approve coverage based on population-level patterns alone.
(3) A PERSON DESCRIBED IN SUBSECTION (2) OF THIS SECTION THAT USES AN ARTIFICIAL INTELLIGENCE SYSTEM TO CONDUCT UTILIZATION REVIEW SHALL ENSURE THAT: (a) THE ARTIFICIAL INTELLIGENCE SYSTEM BASES ITS DETERMINATION ON THE FOLLOWING INFORMATION, AS APPLICABLE: (I) AN INDIVIDUAL'S MEDICAL OR OTHER CLINICAL HISTORY; (II) INDIVIDUAL CLINICAL CIRCUMSTANCES AS PRESENTED BY THE REQUESTING PROVIDER; AND (III) OTHER RELEVANT CLINICAL INFORMATION CONTAINED IN THE INDIVIDUAL'S MEDICAL OR OTHER CLINICAL RECORD; (b) THE ARTIFICIAL INTELLIGENCE SYSTEM DOES NOT BASE ITS DETERMINATIONS SOLELY ON GROUP DATA, WITHOUT REFERENCE TO THE INDIVIDUAL'S DATA;
Pending 2027-01-01
HC-01.1
C.R.S. § 10-16-112.7(5)(a)-(b)
Plain Language
AI may be used to assist with utilization review, including expedited approvals. However, a carrier may not issue a denial of coverage based on medical necessity solely on AI output. A licensed clinician, licensed physician, or other regulated professional competent to evaluate the specific clinical issues must review and approve every denial. The human reviewer must also review the health benefit plan's terms of coverage for the requested service. This creates a mandatory human-in-the-loop requirement for all adverse medical necessity determinations while permitting AI to drive approvals without the same gatekeeping.
(5) (a) NOTWITHSTANDING SUBSECTION (3) OF THIS SECTION, AN ARTIFICIAL INTELLIGENCE SYSTEM MAY BE USED TO ASSIST WITH UTILIZATION REVIEW, INCLUDING EXPEDITED APPROVALS. (b) A CARRIER'S DENIAL OF COVERAGE BASED IN WHOLE OR IN PART ON MEDICAL NECESSITY SHALL NOT BE ISSUED SOLELY ON THE OUTPUT OF AN ARTIFICIAL INTELLIGENCE SYSTEM WITHOUT HUMAN REVIEW AND APPROVAL OF THE DENIAL BY A LICENSED CLINICIAN, LICENSED PHYSICIAN, OR OTHER REGULATED PROFESSIONAL THAT IS COMPETENT TO EVALUATE THE SPECIFIC CLINICAL ISSUES INVOLVED IN THE HEALTH-CARE SERVICES REQUESTED BY THE PROVIDER AND A REVIEW OF THE HEALTH BENEFIT PLAN'S TERMS OF COVERAGE FOR THE HEALTH-CARE SERVICE.
Pending 2027-01-01
HC-01.4
C.R.S. § 10-16-112.7(3)(f)
Plain Language
Covered entities must periodically review the performance, use, and outcomes of AI systems used in utilization review to maximize accuracy and reliability. This is an ongoing operational review obligation — not a one-time pre-deployment assessment. The bill does not specify a review frequency, leaving it to the entity to determine what 'periodically' means in context.
(f) THE ARTIFICIAL INTELLIGENCE SYSTEM'S PERFORMANCE, USE, AND OUTCOMES ARE PERIODICALLY REVIEWED TO MAXIMIZE ACCURACY AND RELIABILITY;
Pending 2027-01-01
HC-01.5
C.R.S. § 10-16-112.7(3)(g)
Plain Language
Health data used by AI systems in utilization review must not be used beyond its intended or stated purpose. This data purpose limitation obligation is consistent with HIPAA and applicable state health privacy law and creates an independent statutory duty within the utilization review AI context.
(g) AN INDIVIDUAL'S HEALTH DATA IS NOT USED BEYOND ITS INTENDED OR STATED PURPOSE, CONSISTENT WITH APPLICABLE STATE AND FEDERAL LAWS;
Pending 2027-01-01
C.R.S. § 10-16-112.7(6)(a)-(c)
Plain Language
Health insurance carriers may not provide coverage for psychotherapy services that are delivered directly to an individual by an AI system. This effectively prohibits billing for AI-conducted psychotherapy through private insurance. The prohibition does not extend to non-therapeutic software tools (billing software, EHRs, video platforms) used incidentally by a human provider, nor does it treat videoconferencing or messaging platforms used to enable human supervision or consultation as AI-conducted services. The practical effect is that AI systems cannot be the direct provider of psychotherapy for reimbursement purposes — a human must deliver the therapeutic service.
(6) (a) A CARRIER OFFERING A HEALTH BENEFIT PLAN ISSUED OR RENEWED IN THE STATE ON OR AFTER THE EFFECTIVE DATE OF THIS SECTION SHALL NOT PROVIDE COVERAGE FOR SERVICES THAT CONSTITUTE PSYCHOTHERAPY SERVICES, AS DEFINED IN SECTION 12-245-202 (14), THAT ARE PROVIDED DIRECTLY TO AN INDIVIDUAL AND THAT ARE CONDUCTED BY AN ARTIFICIAL INTELLIGENCE SYSTEM. (b) SUBSECTION (6)(a) OF THIS SECTION DOES NOT PROHIBIT THE USE OF BILLING SOFTWARE, ELECTRONIC HEALTH RECORDS, VIDEO PLATFORMS, OR OTHER NONTHERAPEUTIC SOFTWARE TOOLS INCIDENT TO SERVICES PROVIDED BY A HUMAN PROVIDER. (c) THE USE OF VIDEOCONFERENCING, MESSAGING PLATFORMS, OR OTHER COMMUNICATIONS SOFTWARE TO ENABLE SUPERVISION OR CONSULTATION BY A LICENSED, REGISTERED, OR CERTIFIED INDIVIDUAL DOES NOT CONSTITUTE SUPERVISION OR CONSULTATION THAT IS CONDUCTED BY AN ARTIFICIAL INTELLIGENCE SYSTEM, AS REFERENCED IN SUBSECTION (6)(a) OF THIS SECTION.
Pending 2027-01-01
C.R.S. § 25.5-1-209
Plain Language
Payers of mental or behavioral health services under Medicaid (Colorado Medical Assistance Act) and the Children's Basic Health Plan (CHP+) may not pay for psychotherapy services that are provided directly to an individual by an AI system. This mirrors the private insurance prohibition in § 10-16-112.7(6) but applies to the public payer side — Medicaid and CHP+ — ensuring that AI-delivered psychotherapy cannot be reimbursed through any Colorado payment channel, public or private.
A PAYER OF MENTAL OR BEHAVIORAL HEALTH-CARE SERVICES PROVIDED UNDER THE "COLORADO MEDICAL ASSISTANCE ACT", AS SPECIFIED IN ARTICLES 4, 5, AND 6 OF THIS TITLE 25.5, OR THE "CHILDREN'S BASIC HEALTH PLAN ACT", AS SPECIFIED IN ARTICLE 8 OF THIS TITLE 25.5, SHALL NOT PAY FOR SERVICES THAT CONSTITUTE PSYCHOTHERAPY SERVICES, AS DEFINED IN SECTION 12-245-202 (14), THAT ARE PROVIDED DIRECTLY TO AN INDIVIDUAL AND THAT ARE CONDUCTED BY AN ARTIFICIAL INTELLIGENCE SYSTEM, AS THAT TERM IS DEFINED IN SECTION 10-16-112.7 (1)(b).
Passed 2027-01-01
HC-01.1, HC-01.2
O.C.G.A. § 33-46-7.1(c)
Plain Language
Private review agents and utilization review entities may use AI tools to assist with utilization review tasks, but AI may not issue an adverse determination to a patient on its own. Before any adverse determination is issued, a qualified natural person must conduct a utilization review with clinical peer participation. The clinical peer's judgment is supreme — AI may never override it. This effectively requires human-in-the-loop review with clinical peer sign-off for every adverse coverage decision, while permitting AI to support administrative and analytical functions short of the final adverse determination.
Artificial intelligence systems, artificial intelligence, and other software tools may be used to automate tasks, reduce administrative burdens, participate in decision-making processes, and perform other lawful functions; provided, however, that such systems shall not issue an adverse determination to a patient until a natural person qualifying as a private review agent or a utilization review entity conducts a utilization review in which a clinical peer participates. In no event shall artificial intelligence systems, artificial intelligence, or other software tools supersede the judgment of such clinical peer.
Passed 2027-01-01
O.C.G.A. § 33-46-7.1(b)
Plain Language
Any AI system or software tool used by a private review agent or utilization review entity must be incorporated into the entity's utilization review plan, and that plan must comply with the existing standards in Chapter 46 of Title 33 and Commissioner regulations. This is a compliance prerequisite — AI tools cannot be deployed on an ad hoc basis outside the formal utilization review framework. This conditions AI use on conformity with an existing regulatory structure but does not itself specify what those standards require.
Private review agents and utilization review entities may use artificial intelligence systems, artificial intelligence, or other software tools, provided that such systems or tools are a part of a utilization review plan that is in accordance with the standards set forth in this chapter and the rules and regulations adopted by the Commissioner.
Failed 2027-01-01
HC-01.1
Iowa Code § 514F.8, subsection 2A (new)
Plain Language
Utilization review organizations may use AI-based algorithms for initial review of prior authorization requests, but when the request involves medical necessity, the AI tool may not serve as the sole basis for a decision to deny, delay, or downgrade the request. A human decision-maker must independently participate in any adverse determination. This permits AI as a screening or triage tool while prohibiting fully automated adverse decisions on medical necessity grounds.
2A. A utilization review organization may use an artificial intelligence-based algorithm to provide an initial review of a request for prior authorization, except that, for a prior authorization request for a health care service based on medical necessity, a utilization review organization shall not use an artificial intelligence-based algorithm as the sole basis for the utilization review organization's decision to deny, delay, or downgrade the prior authorization request.
Failed 2027-01-01
HC-01.2
Iowa Code § 514F.8A(2)
Plain Language
Prior authorization denials and downgrades must be made by a same-specialty qualified reviewer (if the requesting provider is a physician) or a clinical peer (if the requesting provider is not a physician). The URO must provide the requesting provider: a signed written statement citing the specific reasons for the decision, including the coverage and clinical criteria relied upon; a written explanation of the appeals process (which must also be provided to the covered person); and a written attestation confirming the reviewer's qualifications, including name, NPI, board certifications, specialty expertise, and educational background. This creates both a human oversight requirement and a detailed disclosure obligation tied to each adverse determination.
2. A utilization review organization shall not deny or downgrade a request for prior authorization unless all of the following requirements are met: a. The decision to deny or downgrade the request is made by either of the following: (1) A qualified reviewer, if the health care provider requesting prior authorization is a physician. (2) A clinical peer, if the health care provider requesting prior authorization is not a physician. b. The utilization review organization provides the health care provider that requested the prior authorization all of the following: (1) A written statement that cites the specific reasons for the denial or downgrade, including any coverage criteria or limits, or clinical criteria, that the utilization review organization considered or that was the basis for the denial or downgrade. The written statement shall be signed by either of the following: (a) The qualified reviewer that made the denial or downgrade determination, if the health care provider that requested prior authorization is a physician. (b) The clinical peer that made the denial or downgrade determination, if the health care provider that requested prior authorization is not a physician. (2) A written explanation of the utilization review organization's appeals process. The utilization review organization shall also provide the written explanation to the covered person for whom prior authorization was requested. 
(3) A written attestation that is either of the following: (a) If the health care provider that requested prior authorization is a physician, a written attestation that the qualified reviewer who made the denial or downgrade determination practices in the same or a similar specialty as the health care provider, and has the requisite training and expertise to treat the medical condition that is the subject of the request for prior authorization, including sufficient knowledge to determine whether the health care service is medically necessary or clinically appropriate. The attestation shall include the qualified reviewer's name, national provider identifier, board certifications, specialty expertise, and educational background. (b) If the health care provider that requested prior authorization is not a physician, a written attestation that the clinical peer who made the denial or downgrade determination practices in the same or a similar specialty as the health care provider, and the clinical peer has experience managing the specific medical condition or administering the health care service that is the subject of the request for prior authorization. The attestation shall include the clinical peer's name, national provider identifier, board certifications, specialty expertise, and educational background.
Failed 2027-01-01
HC-01.2
Iowa Code § 514F.8A(3)-(4)
Plain Language
When a prior authorization request is denied, the URO must conduct a consultation between the requesting provider and a same-specialty qualified reviewer or clinical peer within seven business days of the denial notification. If the denial or downgrade is appealed by the provider or covered person, the appeal must be conducted by a different qualified reviewer or clinical peer (not the individual who made the initial determination). The appeal reviewer must consider all known clinical aspects of the services including relevant medical records and medical literature submitted by the provider. This creates a mandatory post-denial consultation requirement and ensures independent review on appeal.
3. A utilization review organization that denies a request for prior authorization shall, no later than seven business days after the date that the utilization review organization notifies the requesting health care provider of the denial, conduct a consultation either in person or remotely, as follows: a. Between the health care provider and a qualified reviewer, if the health care provider requesting prior authorization is a physician. b. Between the health care provider and a clinical peer, if the health care provider requesting prior authorization is not a physician. 4. a. If a utilization review organization's decision to deny or downgrade a request for prior authorization is appealed by the requesting health care provider or covered person, the appeal shall be conducted by either of the following: (1) A qualified reviewer, if the health care provider requesting prior authorization is a physician. (2) A clinical peer, if the health care provider requesting prior authorization is not a physician. b. A qualified reviewer or clinical peer involved in the initial denial or downgrade determination of a request for prior authorization that is the subject of an appeal shall not conduct the appeal. c. When conducting an appeal of a request for prior authorization, the qualified reviewer or clinical peer shall consider the known clinical aspects of the health care services under review, including but not limited to medical records relevant to the covered person's medical condition that is the subject of the health care services for which prior authorization is requested, and any relevant medical literature submitted by the health care provider as part of the appeal.
Pending 2027-01-01
HC-01.1
Iowa Code § 514F.8, subsection 2A (new)
Plain Language
Utilization review organizations may use AI-based algorithms for initial review of prior authorization requests. However, when the request involves a health care service based on medical necessity, the URO may not rely on an AI algorithm as the sole basis for denying, delaying, or downgrading the request. This means a human reviewer must independently evaluate and make or affirm any adverse determination on medical necessity grounds — the AI output alone is insufficient. This effectively permits AI as a triage or screening tool but requires human decision-making for adverse outcomes.
2A. A utilization review organization may use an artificial intelligence-based algorithm to provide an initial review of a request for prior authorization, except that, for a prior authorization request for a health care service based on medical necessity, a utilization review organization shall not use an artificial intelligence-based algorithm as the sole basis for the utilization review organization's decision to deny, delay, or downgrade the prior authorization request.
Pending 2027-01-01
HC-01.1, HC-01.2
Iowa Code § 514F.8A(2) (new)
Plain Language
A URO may not deny or downgrade a prior authorization request unless: (1) the decision is made by a qualified reviewer (if the requesting provider is a physician) or a clinical peer (if the requesting provider is not a physician) — both of whom must practice in the same or similar specialty; (2) the URO provides the requesting provider a signed written statement citing the specific reasons for the denial or downgrade, a written explanation of the appeals process (which must also be provided to the covered person), and a written attestation confirming the reviewer's specialty match, credentials, and qualifications including name, NPI, board certifications, specialty expertise, and educational background. This creates a comprehensive peer review and documentation requirement that ensures all adverse prior authorization decisions are made by appropriately credentialed human professionals and fully explained to providers and patients.
2. A utilization review organization shall not deny or downgrade a request for prior authorization unless all of the following requirements are met:
a. The decision to deny or downgrade the request is made by either of the following:
(1) A qualified reviewer, if the health care provider requesting prior authorization is a physician.
(2) A clinical peer, if the health care provider requesting prior authorization is not a physician.
b. The utilization review organization provides the health care provider that requested the prior authorization all of the following:
(1) A written statement that cites the specific reasons for the denial or downgrade, including any coverage criteria or limits, or clinical criteria, that the utilization review organization considered or that was the basis for the denial or downgrade. The written statement shall be signed by either of the following:
(a) The qualified reviewer that made the denial or downgrade determination, if the health care provider that requested prior authorization is a physician.
(b) The clinical peer that made the denial or downgrade determination, if the health care provider that requested prior authorization is not a physician.
(2) A written explanation of the utilization review organization's appeals process. The utilization review organization shall also provide the written explanation to the covered person for whom prior authorization was requested.
(3) A written attestation that is either of the following:
(a) If the health care provider that requested prior authorization is a physician, a written attestation that the qualified reviewer who made the denial or downgrade determination practices in the same or a similar specialty as the health care provider, and has the requisite training and expertise to treat the medical condition that is the subject of the request for prior authorization, including sufficient knowledge to determine whether the health care service is medically necessary or clinically appropriate. The attestation shall include the qualified reviewer's name, national provider identifier, board certifications, specialty expertise, and educational background.
(b) If the health care provider that requested prior authorization is not a physician, a written attestation that the clinical peer who made the denial or downgrade determination practices in the same or a similar specialty as the health care provider, and the clinical peer has experience managing the specific medical condition or administering the health care service that is the subject of the request for prior authorization. The attestation shall include the clinical peer's name, national provider identifier, board certifications, specialty expertise, and educational background.
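The attestation's required contents read like a small schema, so a deployer could validate completeness before a denial communication is issued. A hypothetical sketch (field names are illustrative):

```python
# Fields the proposed attestation must include, per § 514F.8A(2)b.(3).
ATTESTATION_FIELDS = (
    "name",
    "national_provider_identifier",
    "board_certifications",
    "specialty_expertise",
    "educational_background",
)

def missing_attestation_fields(attestation: dict) -> list:
    """Return the required attestation fields that are absent or empty."""
    return [f for f in ATTESTATION_FIELDS if not attestation.get(f)]
```

An empty return value would mean the attestation is facially complete; anything else flags the denial letter for correction before it goes out.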
Pending 2027-01-01
HC-01.1, HC-01.6
Iowa Code § 514F.8A(3) (new)
Plain Language
When a URO denies a prior authorization request, it must arrange a consultation between the requesting health care provider and the appropriate reviewer (qualified reviewer for physician providers, clinical peer for non-physician providers) within seven business days of notifying the provider of the denial. The consultation may be in person or remote. This creates a mandatory post-denial peer-to-peer review opportunity — unlike the initial decision requirements in subsection 2, this is a follow-up consultation that must occur after every denial, giving the requesting provider a direct opportunity to discuss the case with the reviewer.
3. A utilization review organization that denies a request for prior authorization shall, no later than seven business days after the date that the utilization review organization notifies the requesting health care provider of the denial, conduct a consultation either in person or remotely, as follows:
a. Between the health care provider and a qualified reviewer, if the health care provider requesting prior authorization is a physician.
b. Between the health care provider and a clinical peer, if the health care provider requesting prior authorization is not a physician.
Pending 2027-01-01
HC-01.2
Iowa Code § 514F.8A(4) (new)
Plain Language
When a prior authorization denial or downgrade is appealed by the requesting provider or covered person, the appeal must be conducted by a qualified reviewer (for physician providers) or clinical peer (for non-physician providers) who was not involved in the initial determination. The appellate reviewer must consider the known clinical aspects of the services under review, including medical records relevant to the patient's condition and any medical literature submitted by the provider. This creates a de novo clinical review on appeal with an independent reviewer, ensuring that the appeal is not a rubber stamp of the initial denial.
4. a. If a utilization review organization's decision to deny or downgrade a request for prior authorization is appealed by the requesting health care provider or covered person, the appeal shall be conducted by either of the following:
(1) A qualified reviewer, if the health care provider requesting prior authorization is a physician.
(2) A clinical peer, if the health care provider requesting prior authorization is not a physician.
b. A qualified reviewer or clinical peer involved in the initial denial or downgrade determination of a request for prior authorization that is the subject of an appeal shall not conduct the appeal.
c. When conducting an appeal of a request for prior authorization, the qualified reviewer or clinical peer shall consider the known clinical aspects of the health care services under review, including but not limited to medical records relevant to the covered person's medical condition that is the subject of the health care services for which prior authorization is requested, and any relevant medical literature submitted by the health care provider as part of the appeal.
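Subsection 4's independence rule is easy to encode as a guard when assigning an appeal: the appellate reviewer must match the requesting provider's type and must not have made the initial determination. A hypothetical sketch (role labels and record shapes are illustrative):

```python
def assign_appeal_reviewer(provider_is_physician: bool,
                           initial_reviewer_id: str,
                           candidates: list) -> dict:
    """Pick an appeal reviewer consistent with the proposed § 514F.8A(4)."""
    # (4)(a): reviewer type must match the requesting provider.
    wanted = "qualified_reviewer" if provider_is_physician else "clinical_peer"
    for c in candidates:
        # (4)(b): whoever made the initial determination is excluded.
        if c["role"] == wanted and c["id"] != initial_reviewer_id:
            return c
    raise LookupError("no independent reviewer of the required type available")
```

The exclusion in (4)(b) is what keeps the appeal from being a rubber stamp: the sketch simply refuses to assign the initial decision-maker, whatever their credentials.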
Pending 2025-01-01
HC-01.1HC-01.2
Section 10(b)
Plain Language
Health insurance issuers may not deny, reduce, or terminate coverage or benefits based solely on an AI system or predictive model output. Every adverse consumer outcome involving AI must receive meaningful human review — by an individual with actual authority to override the AI's determination — before it is issued. When the adverse outcome is an adverse determination under the Managed Care Reform and Patient Rights Act, the human reviewer must be a clinical peer as defined under that Act. The Department of Insurance will establish specific review procedures by rule. This creates two distinct obligations: (1) no AI-only adverse decisions, and (2) mandatory human review with override authority for all AI-informed adverse decisions.
(b) A health insurance issuer authorized to do business in this State shall not issue an adverse consumer outcome with regard to the denial, reduction, or termination of health insurance coverage or benefits that result solely from the use or application of any AI system or predictive model. Any decision-making process concerning the denial, reduction, or termination of insurance plans or benefits that results from the use of AI systems or predictive models shall be meaningfully reviewed, in accordance with review procedures established by Department rules, by an individual with authority to override the AI systems and the determinations of the AI systems. When an adverse consumer outcome is an adverse determination regulated under the Managed Care Reform and Patient Rights Act, the individual with authority to override the AI systems and the determinations of the AI systems shall be a clinical peer as required and defined under that Act.
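The two-gate structure of this provision (no AI-only adverse outcomes, plus mandatory human review with override authority) can be sketched as a pre-issuance check. The record types and field names below are illustrative assumptions, not terms defined in the bill:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AdverseOutcome:
    """A proposed denial, reduction, or termination of coverage or benefits."""
    basis_includes_ai: bool       # an AI system or predictive model informed the decision
    basis_solely_ai: bool         # the outcome rests on AI output alone
    requires_clinical_peer: bool  # adverse determination under the Managed Care Reform and Patient Rights Act

@dataclass
class HumanReview:
    reviewer_id: str
    can_override_ai: bool         # reviewer holds authority to override the AI determination
    is_clinical_peer: bool

def may_issue(outcome: AdverseOutcome, review: Optional[HumanReview]) -> bool:
    """Return True only if the proposed adverse outcome clears both statutory gates."""
    # Gate 1: no adverse outcome may result solely from an AI system or predictive model.
    if outcome.basis_solely_ai:
        return False
    # Gate 2: any AI-informed adverse decision needs meaningful human review
    # by an individual with authority to override the AI.
    if outcome.basis_includes_ai:
        if review is None or not review.can_override_ai:
            return False
        # When the outcome is an adverse determination under the Act,
        # the reviewer must be a clinical peer.
        if outcome.requires_clinical_peer and not review.is_clinical_peer:
            return False
    return True
```

Note that Gate 2 is not satisfied by review alone: the reviewer's override authority is an element of the statutory standard, so a sign-off from someone who cannot override the system would still fail the check.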
Pending 2025-01-01
HC-01.6
Section 15
Plain Language
The Department of Insurance is authorized — but not required — to adopt rules establishing consumer disclosure standards for health insurance issuers' use of AI systems. The potential scope of such rules is broad: pre-decision notice of AI use, post-adverse-decision notice, disclosure of how personal information informs decisions, a correction process for inaccurate data, and appeal instructions. This is a rulemaking authorization rather than a self-executing obligation — health insurance issuers face no immediate disclosure requirement from this section until the Department promulgates rules. However, prudent issuers should anticipate that rules will be adopted and begin designing disclosure processes accordingly.
The Department of Insurance may adopt rules that include standards for the full and fair disclosure of a health insurance issuer's use of AI systems that may impact consumers, that set forth the manner, content, and required disclosures including notice before the use of AI systems, notice after an adverse decision, the way personal information is used to inform decisions, a process for correcting inaccurate information, and instructions for appealing decisions.
Pending 2025-06-01
HC-01.1
Section 10(b)
Plain Language
Insurers may not deny, reduce, or terminate insurance plans or benefits based solely on an AI system or predictive model output. Every decision-making process involving AI or predictive models that leads to a denial, reduction, or termination of plans or benefits must be meaningfully reviewed by a human individual who has the authority to override the AI system's determination. The review procedures themselves will be established by Department of Insurance rules. This is a dual obligation: (1) a categorical prohibition on sole-AI adverse outcomes, and (2) an affirmative human review requirement with override authority for all AI-informed adverse decisions.
An insurer authorized to do business in this State shall not issue an adverse consumer outcome with regard to the denial, reduction, or termination of insurance plans or benefits that result solely from the use or application of any AI system or predictive model. Any decision-making process concerning the denial, reduction, or termination of insurance plans or benefits that results from the use of AI systems or predictive models shall be meaningfully reviewed, in accordance with review procedures established by Department rules, by an individual with authority to override the AI systems and their determinations.
Pending 2025-06-01
HC-01.7
Section 15
Plain Language
The Department of Insurance is authorized to adopt rules establishing disclosure standards for insurers' use of AI systems, including the manner, content, and specific disclosures required. This is a permissive rulemaking authorization rather than a self-executing disclosure obligation — insurers will not have specific disclosure duties until the Department promulgates rules. However, once rules are adopted, insurers must comply with whatever disclosure standards the Department establishes. This could encompass disclosures to consumers, providers, or regulators depending on the rules adopted.
The Department of Insurance may adopt rules that include standards for the full and fair disclosure of an insurer's use of AI systems that set forth the manner, content, and required disclosures.
Enacted 2025-07-01
HC-01.1HC-01.2
IC 27-1-37.5-20(a)-(b)
Plain Language
All adverse determinations based on medical necessity and all appeals must be made by a clinical peer — a licensed practitioner certified in the same specialty as the treating provider. The clinical peer must operate under the clinical direction of a medical director who is an Indiana-licensed physician. Appeals may not be reviewed or decided by a clinical peer who has a financial interest in the outcome or who was involved in the original adverse determination. The practical effect is that no medical-necessity denial or appeal decision can issue from an algorithm, AI system, or non-clinical staff alone; clinical peer involvement is mandatory.
Sec. 20. (a) A utilization review entity must ensure that: (1) all: (A) adverse determinations based on medical necessity are made; and (B) appeals are reviewed and decided; by a clinical peer; and (2) when making an adverse determination based on medical necessity or reviewing and deciding an appeal, the clinical peer is under the clinical direction of a medical director of the utilization review entity who is: (A) responsible for the provision of health care services provided to covered individuals; and (B) a physician licensed in Indiana under IC 25-22.5. (b) An appeal may not be reviewed or decided by a clinical peer who: (1) has a financial interest in the outcome of the appeal; or (2) was involved in making the adverse determination that is the subject of the appeal.
Enacted 2025-07-01
HC-01.2
IC 27-1-37.5-17(b)-(d)
Plain Language
When a utilization review entity issues an adverse determination on a prior authorization request, it must offer the treating provider the option of a peer-to-peer review with the entity's clinical peer, requestable in writing or electronically. Once requested, the clinical peer and the treating provider (or designee) must make every effort to hold the review within 48 hours (excluding weekends and state and federal legal holidays) of the request, provided the entity has the necessary information, and the entity must ensure the review is conducted directly between the clinical peer and the treating provider or the provider's designee. This creates a mandatory right of clinical challenge for the treating provider whenever a prior authorization request is denied.
(b) If a health plan utilization review entity makes an adverse determination on a prior authorization request by a covered individual's health care provider, the health plan utilization review entity must offer the covered individual's health care provider the option to request a peer to peer review by a clinical peer concerning the adverse determination. (c) A covered individual's health care provider may request a peer to peer review by a clinical peer either in writing or electronically. (d) If a peer to peer review by a clinical peer is requested under this section: (1) the utilization review entity's clinical peer and the covered individual's health care provider or the health care provider's designee shall make every effort to provide the peer to peer review not later than forty-eight (48) hours (excluding weekends and state and federal legal holidays) after the utilization review entity receives the request by the covered individual's health care provider for a peer to peer review if the utilization review entity has received the necessary information for the peer to peer review; and (2) the utilization review entity must have the peer to peer review conducted between the clinical peer and the covered individual's health care provider or the provider's designee.
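The 48-hour window excluding weekends and legal holidays amounts to a business-hours deadline. One reading of the statute can be sketched as advancing hour by hour and counting only hours that fall on a weekday that is not a holiday; the holiday calendar is an input, not defined here:

```python
from datetime import datetime, timedelta, date

def peer_review_deadline(received: datetime, holidays: set) -> datetime:
    """Advance 48 countable hours from receipt of the request, skipping hours
    that fall on weekends or on listed state/federal legal holidays.
    This is one operational reading of 'forty-eight (48) hours (excluding
    weekends and state and federal legal holidays)', not a statutory formula."""
    remaining = 48
    t = received
    while remaining > 0:
        t += timedelta(hours=1)
        # weekday() < 5 means Monday through Friday
        if t.weekday() < 5 and t.date() not in holidays:
            remaining -= 1
    return t
```

A request received at 9:00 a.m. on a Monday (no intervening holidays) yields a 9:00 a.m. Wednesday deadline; a request received Friday morning rolls over the weekend into the following week.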
Enacted 2025-07-01
HC-01.6
IC 27-1-37.5-21
Plain Language
A clinical peer making an adverse determination or deciding an appeal owes a legal duty of care to the covered individual. This creates tort liability for clinical peers who fail to exercise the applicable standard of care when denying coverage or deciding appeals, reinforcing that adverse determinations require genuine clinical judgment rather than rubber-stamping algorithmic outputs.
Sec. 21. A clinical peer who: (1) makes an adverse determination; or (2) reviews and decides an appeal; owes a duty to the covered individual to exercise the applicable standard of care.
Enacted 2025-07-01
HC-01.6
IC 27-1-37.5-23(c)
Plain Language
When a utilization review entity denies a prior authorization request, it must provide the health care provider with specific reasons for the denial and suggest alternative health care services. This prevents boilerplate or generic denials and requires individualized explanation, which constrains how AI or automated systems may generate denial communications — they must produce specific, case-level reasoning.
(c) If a utilization review entity issues an adverse determination in a response under subsection (b), the response must include the following information: (1) Specific reasons for the adverse determination. (2) Suggested alternatives to the requested health care service.
Enacted 2025-07-01
HC-01.7
IC 27-1-37.5-19(a)-(b)
Plain Language
Utilization review entities must publicly post all current prior authorization requirements, restrictions, and clinical criteria on their website in detailed, easily understandable language accessible to covered individuals, providers, and the public. Before implementing any new or amended prior authorization requirement, the entity must update its website and provide written notice to covered individuals and providers at least 60 days in advance. This transparency obligation ensures that the criteria used for medical necessity determinations — including any algorithmic rules or decision protocols — are publicly accessible.
Sec. 19. (a) A utilization review entity shall make any current prior authorization requirements and restrictions, including written clinical criteria, readily accessible on the utilization review entity's website to covered individuals, health care providers, and the general public. The prior authorization requirements and restrictions must be described in detail and in easily understandable language. (b) A utilization review entity may not implement a new prior authorization requirement or restriction or amend an existing requirement or restriction unless: (1) the utilization review entity's website has been updated to reflect the new or amended requirement or restriction; and (2) the utilization review entity provides written notice to covered individuals and health care providers at least sixty (60) days before the requirement or restriction is implemented.
Pending
HC-01.3HC-01.1
Section 1(c)(1)(A)-(B), (c)(2)
Plain Language
Health insurers and utilization review organizations must ensure that any AI, algorithm, or software tool used in utilization review bases its determinations on the enrollee's individual medical history, the requesting provider's clinical presentation, and other relevant clinical information from the enrollee's record — not solely on group-level datasets. Critically, the AI tool itself is categorically prohibited from denying, delaying, or modifying healthcare services based on medical necessity. All medical necessity determinations must be made by a licensed physician or competent licensed healthcare professional who reviews the provider's recommendation and the enrollee's individualized clinical circumstances.
(1) Each health insurer and utilization review organization shall ensure that the artificial intelligence, algorithm or other software tool used to review and approve, modify and delay or deny requests by providers: (A) Makes a determination based on the following information, as applicable: (i) An enrollee's medical or other clinical history; (ii) individual clinical circumstances as presented by the requesting healthcare provider; and (iii) other relevant clinical information contained in the enrollee's medical or other clinical record; (B) does not make a determination based solely on a group dataset; (2) Notwithstanding the provisions of paragraph (1), the artificial intelligence, algorithm or other software tool shall not deny, delay or modify healthcare services based in whole or in part on medical necessity. A determination of medical necessity shall be made only by a licensed physician or a licensed healthcare professional who is competent to evaluate the specific clinical issues involved in the healthcare services requested by the healthcare provider by reviewing and considering such healthcare provider's recommendation, the enrollee's medical or other clinical history, as applicable, and individual clinical circumstances.
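The interaction of paragraphs (1) and (2) above can be sketched as a routing rule: the tool must have drawn on individualized clinical data, and any adverse action touching medical necessity must be referred to a licensed clinician rather than decided by the tool. The enum values and the routing labels are illustrative assumptions:

```python
from enum import Enum

class Action(Enum):
    APPROVE = "approve"
    DENY = "deny"
    DELAY = "delay"
    MODIFY = "modify"

def route(ai_recommendation: Action, used_individual_data: bool) -> str:
    """Route an AI recommendation under the two rules sketched above.
    'used_individual_data' stands in for the requirement that the tool
    considered the enrollee's history, the provider's clinical presentation,
    and the clinical record, not solely a group dataset."""
    if not used_individual_data:
        raise ValueError("determination may not rest solely on a group dataset")
    if ai_recommendation in (Action.DENY, Action.DELAY, Action.MODIFY):
        # The tool itself may not deny, delay, or modify on medical necessity;
        # only a licensed physician or competent licensed professional may.
        return "refer_to_licensed_clinician"
    return "approve"
```

Under this reading the AI can only ever approve outright; every other recommendation exits the automated path.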
Plain Language
This is a standalone prohibition reinforcing that only a licensed physician or a competent licensed healthcare professional may deny or modify healthcare service authorization requests for medical necessity reasons. No other individual — and by extension no AI system — may make that determination. This operates as a parallel prohibition to Section 1(c)(2), applicable not just to AI tools but to any individual in the utilization review process.
No individual, other than a licensed physician or a licensed healthcare professional who is competent to evaluate the specific clinical issues involved in the healthcare services requested by the provider, shall deny or modify requests for authorization of healthcare services for an enrollee for reasons of medical necessity.
Pending
HC-01.4
Section 1(c)(1)(C)-(H)
Plain Language
Health insurers and utilization review organizations must ensure their AI tools do not supplant provider decision-making, do not discriminate against enrollees, are fairly and equitably applied consistent with HHS guidance, are periodically reviewed and revised for accuracy and reliability, comply with HIPAA for patient data use, and do not directly or indirectly cause harm to enrollees. These are ongoing operational requirements — particularly the periodic review obligation (F) — not one-time checks.
Each health insurer and utilization review organization shall ensure that the artificial intelligence, algorithm or other software tool used to review and approve, modify and delay or deny requests by providers: (C) does not supplant healthcare provider decision-making; (D) does not discriminate, directly or indirectly, against enrollees in violation of state or federal law; (E) is fairly and equitably applied, in accordance with any applicable regulations or guidance issued by the United States department of health and human services; (F) is periodically reviewed and revised to maximize accuracy and reliability; (G) uses patient data in compliance with the health insurance portability and accountability act of 1996, public law 104-191; and (H) does not directly or indirectly cause harm to the enrollee.
Pending
HC-01.7
Section 1(e)(1)-(2)
Plain Language
Each health insurer must create and maintain written policies and procedures describing how it conducts prospective, retrospective, and concurrent utilization review based on medical necessity. These policies must require that medical necessity decisions are consistent with clinically supported criteria or guidelines. This is a documentation and governance obligation — the insurer must formalize its utilization review process in writing and ensure the written process mandates clinical standards compliance.
(e) Each health insurer subject to this act shall establish written policies and procedures that: (1) Describe the process by which the health benefit plan prospectively, retrospectively or concurrently reviews and approves, modifies and delays or denies requests, based in whole or in part on medical necessity, by healthcare providers of healthcare services for health benefit plan enrollees; and (2) require decisions to be based on the medical necessity of proposed healthcare services are consistent with criteria or guidelines that are supported by clinical principles and processes.
Pending
HC-01.7
Section 1(f)(1)-(3)
Plain Language
Each health insurer must file its AI-related utilization review policies and procedures with the Kansas Department of Insurance. The filed policies must ensure that medical necessity decisions are consistent with clinically supported criteria. Insurers must also make these policies available upon request to insureds, healthcare providers, and the general public. This creates both a regulatory filing obligation and a public transparency obligation — insurers cannot keep their AI utilization review processes confidential from affected parties.
(f) (1) Each health insurer subject to this act shall file with the department such health insurer's policies and procedures establishing the process by which such health insurer prospectively, retrospectively or concurrently reviews and approves, modifies and delays or denies requests, based in whole or in part on medical necessity, by providers of healthcare services for health benefit plan enrollees. (2) Pursuant to paragraph (1), such policies and procedures shall ensure that healthcare decisions based on the medical necessity of proposed healthcare services are consistent with criteria or guidelines that are supported by clinical principles and processes. (3) Each health insurer shall disclose such policies and procedures to insureds, healthcare providers and the public upon request.
Pending 2027-01-01
HC-01.1
R.S. 22:1260.49(C)(1)
Plain Language
Covered entities may not use AI or automated decision systems in a manner that discriminates under federal or state law, violates HHS regulations or guidance, or delays, denies, or modifies healthcare services. This is a categorical prohibition — AI tools used in utilization review may not themselves make or effectuate adverse coverage decisions. The prohibition on delaying, denying, or modifying healthcare services effectively bars AI from serving as the basis for adverse determinations without independent human clinical review (addressed in separate provisions).
C.(1) No entity subject to this Section shall utilize an artificial intelligence or an automated decision system that does any of the following: (a) Engages in discrimination that is prohibited by federal or state law. (b) Violates regulations or guidance disseminated by the United States Department of Health and Human Services. (c) Delays, denies, or modifies healthcare services.
Pending 2027-01-01
HC-01.3
R.S. 22:1260.49(C)(2)-(3)
Plain Language
AI and automated decision systems used in utilization review must base their determinations or recommendations on the individual insured's medical history, clinical circumstances presented by the treating provider, and other relevant individual clinical information. They may not base determinations solely on aggregate or group-level data sets. This requires the AI tool to process individualized clinical data, not merely population-level statistics, when informing coverage decisions.
(2) Artificial intelligence or an automated decision system used in the determination process shall not base its determination or determination recommendation solely on a group data set. (3) Artificial intelligence or an automated decision system shall base its determination or determination recommendation on any the following: (a) The insured's medical or other clinical history. (b) Individual clinical circumstances as presented by a requesting provider. (c) Other relevant clinical information contained in the insured's medical or other clinical history.
Pending 2027-01-01
HC-01.1HC-01.2
R.S. 22:1260.49(D)(1)-(2)(a)
Plain Language
Covered entities may not substitute AI for a healthcare provider in the utilization review determination process. Every adverse determination must be signed by a licensed physician who personally reviewed the medical record and bears responsibility for the clinical judgment. Additionally, before any adverse determination on a medical necessity claim or a prior authorization request, independent human judgment from utilization review personnel is required. This effectively imposes a two-layer human oversight requirement: (1) a general prohibition on AI replacing the healthcare provider role, and (2) a specific requirement for independent human review before any adverse determination on medical necessity or prior authorization claims.
D.(1)(a) An entity subject to this Section shall not replace the role of a healthcare provider in the determination process with artificial intelligence or an automated decision system. (b) Any adverse determination shall be signed by a licensed physician who personally reviewed the medical record and is responsible for the clinical judgment. (2) An entity subject to this Section shall do all of the following: (a) Require independent judgment from human utilization review personnel in the utilization review process before making an adverse determination for either of the following: (i) Any claim submitted by a provider based on medical necessity. (ii) Any claim submitted by a provider for a procedure requiring prior authorization.
Pending 2027-01-01
HC-01.4
R.S. 22:1260.49(D)(2)(c)
Plain Language
Covered entities must review the performance, use, and outcomes of their AI and automated decision systems at least quarterly and revise their policies and procedures as needed to ensure ongoing compliance. This is a notably aggressive review cadence — quarterly rather than the annual reviews seen in most comparable statutes. The obligation is continuous and requires documented revision when deficiencies are identified.
(c) Review the performance, use, and outcomes of an artificial intelligence or an automated decision system at a minimum of once per quarter, and revise the policies and procedures as needed to ensure compliance with this Section.
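Operationalizing the quarterly cadence requires a concrete window. The sketch below uses a 92-day window as a proxy for "at a minimum of once per quarter"; the exact window is an implementation assumption, not taken from the statute:

```python
from datetime import date, timedelta

def review_overdue(last_review: date, today: date) -> bool:
    """Quarterly cadence check for the AI/ADS performance, use, and outcomes
    review. A 92-day window (the longest calendar quarter) is assumed here
    as an operational proxy for 'once per quarter'."""
    return (today - last_review) > timedelta(days=92)
```

A compliance calendar built on this check would also need to capture the revision half of the obligation: each review that identifies a deficiency should produce a documented policy-and-procedure change.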
Pending 2027-01-01
HC-01.5
R.S. 22:1260.49(D)(2)(d)
Plain Language
Patient data used by AI or automated decision systems in utilization review must be used only within its intended and stated purpose, consistent with HIPAA. This data purpose limitation is independent of HIPAA compliance itself — it creates an additional state-law requirement that AI tools not repurpose patient data beyond the specific utilization review context for which it was collected.
(d) Use patient data within its intended and stated purpose consistent with the federal Health Insurance Portability and Accountability Act of 1996, as applicable.
Pending 2027-01-01
HC-01.6HC-01.8
R.S. 22:1260.49(D)(3)(a)-(b)
Plain Language
Health insurance issuers must disclose to both the enrollee and the Louisiana Department of Insurance whenever AI or an automated decision system was used in any part of a coverage determination or utilization review. The issuer must also document the extent to which the AI or automated system influenced the determination. This is a dual-audience disclosure: the enrollee must know AI was involved, and the department must receive the same disclosure. The documentation requirement goes beyond a binary yes/no, requiring the insurer to characterize the degree of AI influence.
(3)(a) A health insurance issuer shall disclose to the enrollee and the department when artificial intelligence or an automated decision system was used in any part of a coverage determination or utilization review. (b) The health insurance issuer shall document the extent to which any artificial intelligence or automated decision system influenced the determination.
Pending 2027-01-01
HC-01.8
R.S. 22:1260.44(E)(2)
Plain Language
When issuing a written or electronic adverse determination notice, the health insurance issuer must include — in addition to the existing requirements for reasons, clinical rationale, and appeal instructions — a statement of whether AI or an automated decision system was used in the determination process. This amends existing adverse determination notification requirements to add a mandatory AI disclosure element.
(2) A health insurance issuer shall include in its written or electronic notification of an adverse determination all of the reasons for the determination, including the clinical rationale, and the instructions for initiating an appeal or reconsideration of the determination, and whether artificial intelligence or an automated decision system, as defined in R.S. 22:1260.49, was used in the determination process.
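The amended notice requirement is a completeness rule over four elements, which lends itself to a simple pre-send check. The field names below are illustrative, not statutory:

```python
# Required elements of a written or electronic adverse determination notice
# under the amended provision (field names are illustrative assumptions).
REQUIRED_NOTICE_FIELDS = (
    "reasons",              # all reasons for the determination
    "clinical_rationale",
    "appeal_instructions",  # how to initiate an appeal or reconsideration
    "ai_use_statement",     # whether AI or an automated decision system was used
)

def missing_notice_fields(notice: dict) -> list:
    """Return the required elements absent or empty in a draft notice."""
    return [f for f in REQUIRED_NOTICE_FIELDS if not notice.get(f)]
```

A gate like this would block the notice from issuing until `missing_notice_fields` returns an empty list.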
Pending 2027-01-01
HC-01.7
R.S. 22:1260.49(F)(1)-(4)
Plain Language
Covered entities must allow the Louisiana Commissioner of Insurance to inspect and audit their AI and automated decision systems for compliance, including review of all policies and procedures governing AI use in the determination process. The commissioner may require submission and independent review of any AI system used in utilization review, and upon request, issuers must disclose data sources, training parameters, and validation methods used to develop their AI tools. The insurer bears the cost of any commissioner-ordered independent review. This creates a broad regulatory audit right covering not just the AI system itself but the underlying development methodology.
F.(1) An entity subject to this Section shall allow the commissioner to inspect and audit the artificial intelligence or automated decision system for compliance with this Section and review policies and procedures for how the artificial intelligence or automated decision system is used in the determination process. (2) The commissioner may require submission and independent review of any artificial intelligence or automated decision system used in utilization review. (3) Upon request of the commissioner, a health insurance issuer shall disclose the data sources, training parameters, and validation methods used to develop any artificial intelligence or automated decision system used in coverage determinations. (4) The health insurance issuer shall pay for any independent review that the commissioner deems necessary.
Pending 2025-10-08
HC-01.3
G.L. c. 176O, § 12(g)(1)(A)-(B)
Plain Language
AI tools used in utilization review must base their determinations on the individual insured's medical or clinical history, the individual clinical circumstances presented by the requesting provider, and other relevant information in the insured's clinical record. The tool may not base determinations solely on group-level datasets. This requires carriers and utilization review organizations to ensure their AI tools are configured to ingest and weigh individualized patient data, not merely statistical profiles or population-level models.
(A) The artificial intelligence, algorithm, or other software tool bases its determination on the following information, as applicable: (i) An insured's medical or other clinical history. (ii) Individual clinical circumstances as presented by the requesting provider. (iii) Other relevant clinical information contained in the insured's medical or other clinical record. (B) The artificial intelligence, algorithm, or other software tool does not base its determination solely on a group dataset.
Pending 2025-10-08
HC-01.1HC-01.2
G.L. c. 176O, § 12(g)(1)(D), (g)(2)
Plain Language
AI tools may not supplant healthcare provider decision-making, and — critically — may not deny, delay, or modify healthcare services based on medical necessity at all. Medical necessity determinations must be made exclusively by a licensed physician or a licensed healthcare professional competent to evaluate the specific clinical issues at hand, who must review the requesting provider's recommendation and the insured's individual medical history and clinical circumstances. This is a stronger prohibition than many comparable state laws: it bars AI from making any medical necessity determination, not merely from being the sole or primary basis.
(D) The artificial intelligence, algorithm, or other software tool does not supplant health care provider decision-making. (2) Notwithstanding paragraph (1), the artificial intelligence, algorithm, or other software tool shall not deny, delay, or modify health care services based, in whole or in part, on medical necessity. A determination of medical necessity shall be made only by a licensed physician or a licensed health care professional competent to evaluate the specific clinical issues involved in the health care services requested by the provider, as provided in subsection (a), by reviewing and considering the requesting providers recommendation, the insured's medical or other clinical history, as applicable, and individual clinical circumstances.
Pending 2025-10-08
HC-01.7
G.L. c. 176O, § 12(g)(1)(G)
Plain Language
AI tools used in utilization review must be made available for inspection, audit, and compliance review by the Division of Insurance and the Executive Office of Health and Human Services. This is a regulatory access obligation — carriers must ensure their AI tools (including third-party vendor tools) are subject to regulatory examination on demand under applicable state and federal law.
(G) The artificial intelligence, algorithm, or other software tool is open to inspection for audit or compliance reviews by the division and by the executive office of health and human services pursuant to applicable state and federal law.
Pending 2025-10-08
HC-01.6
G.L. c. 176O, § 12(g)(1)(H)
Plain Language
Carriers and utilization review organizations must include disclosures about the use and oversight of AI tools in their written utilization review policies and procedures, as already required under existing subsection (a) of Section 12. This effectively extends the existing policy-documentation requirement to cover AI-specific disclosures, ensuring that enrollees, providers, and regulators can identify when and how AI tools are involved in utilization review.
(H) Disclosures pertaining to the use and oversight of the artificial intelligence, algorithm, or other software tool are contained in the written policies and procedures, as required by subsection (a).
Pending 2025-10-08
HC-01.4
G.L. c. 176O, § 12(g)(1)(I)
Plain Language
Carriers and utilization review organizations must periodically review and revise the performance, use, and outcomes of AI tools used in utilization review to maximize accuracy and reliability. This is a continuing obligation — not a one-time pre-deployment check — and requires ongoing operational monitoring and improvement of the AI system.
(I) The artificial intelligence, algorithm, or other software tools performance, use, and outcomes are periodically reviewed and revised to maximize accuracy and reliability.
Pending 2025-10-08
HC-01.5
G.L. c. 176O, § 12(g)(1)(J)
Plain Language
Patient data used by AI tools in utilization review must not be used beyond its intended and stated purpose, and all use must be consistent with state and federal law (including HIPAA and Massachusetts health privacy law). This is a purpose-limitation obligation that restricts secondary uses of patient data processed by AI systems in the utilization review context.
(J) Patient data is not used beyond its intended and stated purpose, and consistent with state and federal law.
Pending 2025-01-10
HC-01.3
Ch. 176O § 12(g)(1)(A)-(B)
Plain Language
AI tools used in utilization review must base their determinations on the individual insured's medical history, clinical circumstances presented by the requesting provider, and other relevant clinical information in the insured's clinical record. The tool may not base its determination solely on a group dataset — it must incorporate individualized patient data. This is a data-input requirement ensuring AI outputs reflect the specific patient's clinical situation rather than population-level statistics alone.
(A) The artificial intelligence, algorithm, or other software tool bases its determination on the following information, as applicable: (i) An insured's medical or other clinical history. (ii) Individual clinical circumstances as presented by the requesting provider. (iii) Other relevant clinical information contained in the insured's medical or other clinical record. (B) The artificial intelligence, algorithm, or other software tool does not base its determination solely on a group dataset.
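The data-input requirement above could be enforced in a carrier's intake pipeline with a pre-determination check that blocks the tool from running on group-level data alone. A minimal sketch, assuming hypothetical field names (none of these identifiers come from the bill):

```python
from dataclasses import dataclass, field

@dataclass
class DeterminationInput:
    """Inputs assembled before an AI utilization-review tool may run.

    Field names are illustrative, not statutory terms.
    """
    medical_history: list = field(default_factory=list)       # (i) insured's clinical history
    provider_circumstances: str = ""                          # (ii) circumstances from the requesting provider
    clinical_record_items: list = field(default_factory=list) # (iii) other relevant record information
    group_dataset_features: list = field(default_factory=list)

def has_individualized_basis(inp: DeterminationInput) -> bool:
    """True only if at least one individualized source (i)-(iii) is present,
    so the tool cannot base its determination solely on a group dataset."""
    return bool(inp.medical_history or inp.provider_circumstances
                or inp.clinical_record_items)

# A request carrying only group-level features must be blocked:
group_only = DeterminationInput(group_dataset_features=["regional_denial_rate"])
assert not has_individualized_basis(group_only)
```

A gate like this is a necessary but not sufficient control: it verifies that individualized data reached the tool, not that the tool actually weighed it.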
Pending 2025-01-10
HC-01.1, HC-01.2
Ch. 176O § 12(g)(2)
Plain Language
AI tools are flatly prohibited from denying, delaying, or modifying health care services based on medical necessity — even in part. All medical necessity determinations must be made by a licensed physician or licensed health care professional competent in the specific clinical issues at hand, who must review the requesting provider's recommendation, the insured's medical history, and individual clinical circumstances. This is stronger than a human-in-the-loop requirement: the AI may not make the determination at all, even subject to human review. The human professional must independently make the determination.
(2) Notwithstanding paragraph (1), the artificial intelligence, algorithm, or other software tool shall not deny, delay, or modify health care services based, in whole or in part, on medical necessity. A determination of medical necessity shall be made only by a licensed physician or a licensed health care professional competent to evaluate the specific clinical issues involved in the health care services requested by the provider, as provided in subsection (a), by reviewing and considering the requesting provider's recommendation, the insured's medical or other clinical history, as applicable, and individual clinical circumstances.
Pending 2025-01-10
HC-01.4
Ch. 176O § 12(g)(1)(I)
Plain Language
Carriers and utilization review organizations must periodically review the performance, use, and outcomes of their AI tools used in utilization review and revise them as needed to maximize accuracy and reliability. This is a continuing operational obligation — not a one-time pre-deployment check. The bill does not specify a review frequency, leaving it to the carrier's reasonable judgment and any future regulatory guidance.
(I) The artificial intelligence, algorithm, or other software tool's performance, use, and outcomes are periodically reviewed and revised to maximize accuracy and reliability.
Pending 2025-01-10
HC-01.5
Ch. 176O § 12(g)(1)(J)
Plain Language
Patient data used by AI tools in utilization review must not be repurposed beyond its intended and stated purpose. This is a purpose limitation requirement consistent with HIPAA and state health privacy law, applied specifically to AI tools in the utilization review context. Carriers must ensure their AI vendors and tools do not use patient clinical data for secondary purposes such as marketing, product development, or training unrelated models.
(J) Patient data is not used beyond its intended and stated purpose, and consistent with state and federal law.
Pending 2025-01-10
HC-01.7
Ch. 176O § 12(g)(1)(G)-(H)
Plain Language
AI tools used in utilization review must be open to inspection, audit, and compliance review by the Division of Insurance and the Executive Office of Health and Human Services. Additionally, carriers must include disclosures about the use and oversight of AI tools in their written utilization review policies and procedures as required by existing Section 12(a). This creates both a regulatory transparency obligation (making the tool auditable) and a documentation obligation (including AI disclosures in existing policy documents).
(G) The artificial intelligence, algorithm, or other software tool is open to inspection for audit or compliance reviews by the division and by the executive office of health and human services pursuant to applicable state and federal law. (H) Disclosures pertaining to the use and oversight of the artificial intelligence, algorithm, or other software tool are contained in the written policies and procedures, as required by subsection (a).
Pending 2025-01-10
HC-01.1
Ch. 176O § 12(g)(1)(D)
Plain Language
AI tools used in utilization review must not supplant health care provider decision-making. This is a broader prohibition than the medical necessity restriction in paragraph (2) — it applies to all utilization review functions, not just medical necessity determinations. The AI tool may inform or assist, but the treating provider's clinical judgment must remain the primary basis for care decisions. Carriers must ensure their AI tools are configured as decision-support rather than decision-replacement systems.
(D) The artificial intelligence, algorithm, or other software tool does not supplant health care provider decision-making.
Pending 2026-10-01
HC-01.3
Ins. § 15–10B–05.1(c)(1)-(2)
Plain Language
Carriers, pharmacy benefits managers, and private review agents using AI tools for utilization review must ensure those tools base their determinations on the individual enrollee's medical history, the clinical circumstances presented by the requesting provider, or other relevant clinical information from the enrollee's records. The tools may not base determinations solely on group-level datasets. This requires individualized clinical data inputs rather than population-level statistical proxies alone.
(c) Subject to subsection (d) of this section, an entity subject to this section shall ensure that: (1) an artificial intelligence, algorithm, or other software tool bases its determinations on: (i) an enrollee's medical or other clinical history; (ii) individual clinical circumstances as presented by a requesting provider; or (iii) other relevant clinical information contained in the enrollee's medical or other clinical record; (2) an artificial intelligence, algorithm, or other software tool does not base its determinations solely on a group dataset;
Pending 2026-10-01
HC-01.1
Ins. § 15–10B–05.1(c)(4), (d)
Plain Language
AI tools used for utilization review may not replace the role of a health care provider in the determination process and may not independently deny, delay, or modify health care services. These provisions together ensure that a human clinical professional retains the final decision-making role — the AI tool can inform or support the process but cannot issue adverse determinations on its own.
(4) an artificial intelligence, algorithm, or other software tool does not replace the role of a health care provider in the determination process under § 15–10B–07 of this subtitle; (d) An artificial intelligence, algorithm, or other software tool may not deny, delay, or modify health care services.
Pending 2026-10-01
HC-01.5
Ins. § 15–10B–05.1(c)(10)
Plain Language
Patient data used by AI tools in utilization review must not be used beyond its intended and stated purpose, consistent with HIPAA. This imposes a purpose-limitation obligation on data flowing through AI systems used in coverage determinations, preventing secondary uses of clinical data collected for utilization review purposes.
(10) patient data is not used beyond its intended and stated purpose, consistent with the federal Health Insurance Portability and Accountability Act of 1996, as applicable;
Pending 2026-10-01
HC-01.7
Ins. § 15–10B–05.1(c)(7)-(8), (e)
Plain Language
Covered entities must make their AI utilization review tools open to audit and compliance review by the Maryland Insurance Commissioner, must file written policies and procedures describing AI use and oversight in their utilization plans, and — as newly added by this bill — must ensure that every such audit or compliance review includes a licensed health care professional's human evaluation of patient medical records. The evaluating professional must consider the patient's specific circumstances and must have the authority to question, modify, or override the AI tool's determination. This is the core new obligation of HB 1385: it adds a mandatory human clinical evaluation component to the existing audit/compliance review framework.
(7) an artificial intelligence, algorithm, or other software tool is open to inspection for audit or compliance reviews by the Commissioner IN ACCORDANCE WITH SUBSECTION (E) OF THIS SECTION; (8) written policies and procedures are included in the utilization plan submitted under § 15–10B–05 of this subtitle, including how an artificial intelligence, algorithm, or other software tool will be used and what oversight will be provided; (E) AN AUDIT OR COMPLIANCE REVIEW OF AN ARTIFICIAL INTELLIGENCE, ALGORITHM, OR OTHER SOFTWARE TOOL UNDER SUBSECTION (C)(7) OF THIS SECTION SHALL INCLUDE THE HUMAN EVALUATION OF A PATIENT'S MEDICAL RECORDS BY A LICENSED HEALTH CARE PROFESSIONAL THAT TAKES INTO CONSIDERATION THE PATIENT'S SPECIFIC CIRCUMSTANCES AND ALLOWS THE LICENSED HEALTH CARE PROFESSIONAL TO QUESTION, MODIFY, OR OVERRIDE A DETERMINATION MADE BY THE ARTIFICIAL INTELLIGENCE, ALGORITHM, OR OTHER SOFTWARE TOOL.
Pending 2026-10-01
HC-01.4
Ins. § 15–10B–05.1(c)(9), (f)
Plain Language
Covered entities must review and revise the performance, use, and outcomes of their AI utilization review tools at least quarterly to maximize accuracy and reliability. As newly added by this bill, those quarterly reviews must include a human evaluation of the real-world health outcomes of decisions made by the AI tool, and the findings from that evaluation must be used to improve the tool — making it safer, more accurate, and more responsive to patient needs. This creates a continuous improvement loop: human evaluators assess actual patient outcomes, and those assessments must feed back into tool refinement.
(9) the performance, use, and outcomes of an artificial intelligence, algorithm, or other software tool are reviewed and revised, if necessary and at least on a quarterly basis, to maximize accuracy and reliability, IN ACCORDANCE WITH SUBSECTION (F) OF THIS SECTION; (F) A REVIEW OF THE PERFORMANCE, USE, AND OUTCOMES OF ARTIFICIAL INTELLIGENCE, ALGORITHM, OR OTHER SOFTWARE TOOLS UNDER SUBSECTION (C)(9) OF THIS SECTION SHALL INCLUDE: (1) A HUMAN EVALUATION OF THE REAL–WORLD HEALTH OUTCOMES OF DECISIONS MADE BY THE ARTIFICIAL INTELLIGENCE, ALGORITHM, OR OTHER SOFTWARE TOOL; AND (2) USE OF THE FINDINGS MADE BY THE EVALUATION REQUIRED UNDER ITEM (1) OF THIS SUBSECTION TO IMPROVE THE ARTIFICIAL INTELLIGENCE, ALGORITHM, OR OTHER SOFTWARE TOOL AND MAKE THE DECISIONS OF THE ARTIFICIAL INTELLIGENCE, ALGORITHM, OR OTHER SOFTWARE TOOL SAFER, MORE ACCURATE, AND MORE RESPONSIVE TO PATIENT NEEDS.
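The quarterly review-and-revise cycle in subsection (F) can be modeled as a feedback loop: licensed professionals evaluate real-world outcomes, and the aggregated findings gate a revision step. A hypothetical sketch (all names and the revision threshold are assumptions; the statute requires revision "if necessary," not a fixed cutoff):

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class OutcomeEvaluation:
    """One licensed professional's evaluation of an AI-influenced decision."""
    case_id: str
    evaluated_on: date
    outcome_acceptable: bool  # human judgment of the real-world health outcome
    notes: str = ""

def quarterly_review(evaluations: list,
                     revision_threshold: float = 0.95) -> dict:
    """Summarize human outcome evaluations and flag whether the tool
    needs revision this quarter. The 95% cutoff is illustrative only."""
    total = len(evaluations)
    acceptable = sum(e.outcome_acceptable for e in evaluations)
    rate = acceptable / total if total else 1.0
    return {"evaluated": total,
            "acceptable_rate": rate,
            "revision_required": rate < revision_threshold}

report = quarterly_review([
    OutcomeEvaluation("c1", date(2026, 3, 31), True),
    OutcomeEvaluation("c2", date(2026, 3, 31), False),
])
assert report["revision_required"]  # 50% acceptable is below the sketch's cutoff
```

Whatever the summary logic, subsection (F)(2) requires that the findings actually feed back into the tool, so a compliant implementation would also record what changes each quarterly report produced.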
Pending 2026-10-01
HC-01.1
Insurance Article § 15–10A–02(b)(2)(vi)
Plain Language
When a member files a grievance challenging an adverse decision that was made using AI, an algorithm, or other software tools, the carrier's internal grievance process must provide for human review of that adverse decision. The human review must include an assessment of whether the AI tool complied with the requirements of § 15–10B–05.1 — which mandates individualized clinical data use, non-discrimination, provider role preservation, and other safeguards. This ensures that AI-driven denials receive meaningful human scrutiny through the existing grievance channel rather than being upheld solely on the AI tool's output.
(VI) FOR A GRIEVANCE RESULTING FROM AN ADVERSE DECISION MADE USING ARTIFICIAL INTELLIGENCE, ALGORITHM, OR OTHER SOFTWARE TOOLS, PROVIDE FOR THE HUMAN REVIEW OF THE ADVERSE DECISION, INCLUDING FOR COMPLIANCE WITH § 15–10B–05.1 OF THIS TITLE.
Pending 2026-10-01
HC-01.4
Insurance Article § 15–10A–06(a)(3)
Plain Language
If more than a Commissioner-specified percentage of a carrier's adverse decisions made using the same AI, algorithm, or software tool result in grievances within a six-month period, the carrier must conduct a model review of that AI tool and report the findings in its quarterly report to the Commissioner. The Commissioner sets the threshold percentage — it is not specified in the statute. This creates a grievance-volume trigger for mandatory model review, ensuring that AI tools producing a disproportionate number of challenged decisions receive systematic evaluation. The review findings must be submitted as part of the regular quarterly reporting.
(3) IF, WITHIN A 6–MONTH PERIOD, MORE THAN A SPECIFIED PERCENTAGE, AS DETERMINED BY THE COMMISSIONER, OF A CARRIER'S ADVERSE DECISIONS MADE USING THE SAME ARTIFICIAL INTELLIGENCE, ALGORITHM, OR SOFTWARE TOOL RESULT IN A GRIEVANCE, THE CARRIER SHALL PROVIDE FOR A MODEL REVIEW PROCESS OF THE ARTIFICIAL INTELLIGENCE, ALGORITHM, OR SOFTWARE TOOL AND SUBMIT THE FINDINGS IN THE REPORT REQUIRED UNDER PARAGRAPH (1) OF THIS SUBSECTION.
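The grievance-volume trigger above reduces to a per-tool rate computation over a rolling six-month window. A sketch under stated assumptions (the threshold percentage is a stand-in, since the statute leaves it to the Commissioner, and the function names are hypothetical):

```python
def tools_requiring_model_review(adverse_by_tool: dict,
                                 grievances_by_tool: dict,
                                 threshold_pct: float) -> list:
    """Return the AI tools whose grievance rate over the 6-month window
    exceeds the Commissioner-set threshold percentage.

    adverse_by_tool: adverse decisions made using each tool in the window.
    grievances_by_tool: grievances filed against those decisions.
    """
    flagged = []
    for tool, adverse in adverse_by_tool.items():
        if adverse == 0:
            continue
        rate = 100.0 * grievances_by_tool.get(tool, 0) / adverse
        if rate > threshold_pct:
            flagged.append(tool)
    return flagged

# With a hypothetical 10% threshold, tool_b (3/20 = 15%) is flagged:
adverse = {"tool_a": 100, "tool_b": 20}
grievances = {"tool_a": 5, "tool_b": 3}
assert tools_requiring_model_review(adverse, grievances, 10.0) == ["tool_b"]
```

A flagged tool would then go through the model review process, with findings folded into the quarterly report required under paragraph (1).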
Pending 2026-10-01
HC-01.3
Insurance Article § 15–10B–05.1(c)(1)-(2)
Plain Language
AI tools used in utilization review must base their determinations on the individual enrollee's medical history, individual clinical circumstances as presented by the requesting provider, or other relevant clinical information from the enrollee's records. The AI tool may not base its determinations solely on a group dataset — it must incorporate individualized patient data. This is existing law being reenacted without amendment; it is included here because the new grievance human-review provision (§ 15–10A–02(b)(2)(vi)) expressly requires compliance review against this section.
(c) Subject to subsection (d) of this section, an entity subject to this section shall ensure that: (1) an artificial intelligence, algorithm, or other software tool bases its determinations on: (i) an enrollee's medical or other clinical history; (ii) individual clinical circumstances as presented by a requesting provider; or (iii) other relevant clinical information contained in the enrollee's medical or other clinical record; (2) an artificial intelligence, algorithm, or other software tool does not base its determinations solely on a group dataset;
Pending 2026-10-01
HC-01.1
Insurance Article § 15–10B–05.1(c)(4), (d)
Plain Language
AI tools may not replace the role of a health care provider in the utilization review determination process, and AI tools may not independently deny, delay, or modify health care services. These two provisions together establish that AI is limited to a supportive role — all final coverage determinations must be made by a human health care provider. This is existing law reenacted without amendment but is the foundation for the new human review grievance obligation added by this bill.
(4) an artificial intelligence, algorithm, or other software tool does not replace the role of a health care provider in the determination process under § 15–10B–07 of this subtitle; ... (d) An artificial intelligence, algorithm, or other software tool may not deny, delay, or modify health care services.
Pending 2026-10-01
HC-01.7
Insurance Article § 15–10B–05.1(c)(7)-(8)
Plain Language
Carriers must ensure that their AI tools used in utilization review are open to inspection by the Insurance Commissioner for audit or compliance reviews. Additionally, the carrier's utilization plan filed under § 15–10B–05 must include written policies and procedures describing how AI tools will be used and what oversight will be provided. Together these provisions create a regulatory transparency obligation — the Commissioner has inspection access to the AI tools themselves, and the carrier's utilization plan must document AI governance. This is existing law reenacted without amendment.
(7) an artificial intelligence, algorithm, or other software tool is open to inspection for audit or compliance reviews by the Commissioner; (8) written policies and procedures are included in the utilization plan submitted under § 15–10B–05 of this subtitle, including how an artificial intelligence, algorithm, or other software tool will be used and what oversight will be provided;
Failed 2026-01-01
HC-01.3
24-A MRSA §4304(8)(A)(1)
Plain Language
When carriers or their contracted third parties use AI to make medical review or utilization review determinations, those determinations must be based on the individual enrollee's medical history and clinical circumstances as presented by the requesting provider, plus other relevant clinical information from the enrollee's medical record. AI determinations may not supplant provider decision making — meaning the AI tool cannot override or replace the treating provider's clinical judgment as the basis for the determination. This effectively requires individualized clinical review rather than reliance on aggregate or group-level data alone.
Determinations derived from the use of artificial intelligence, including algorithms and other software tools, must: (1) Be based upon an enrollee's medical history, as applicable, and individual clinical circumstances as presented by the requesting provider, as well as other relevant clinical information contained in the enrollee's medical record, and not supplant provider decision making;
Failed 2026-01-01
HC-01.2
24-A MRSA §4304(8)(B)
Plain Language
Any adverse determination — denial, delay, modification, or adjustment — based on medical necessity must be made by a clinical peer who is competent to evaluate the specific clinical issues at hand. The clinical peer must consider the treating provider's recommendation and the enrollee's individual medical history and clinical circumstances. This effectively prohibits AI from serving as the sole or primary decision-maker for adverse medical necessity determinations; a qualified human clinical professional must make or independently affirm such decisions. The bill does not define 'clinical peer' but the term is used in the existing utilization review framework under Maine law.
A denial, delay, modification or adjustment of health care services based on medical necessity must be made by a clinical peer competent to evaluate the specific clinical issues involved in the health care services requested by the enrollee's provider. The clinical peer making the medical review or utilization review determination shall consider the enrollee's provider's recommendation and the enrollee's medical history, as applicable, and individual clinical circumstances.
Failed 2026-01-01
HC-01.7
24-A MRSA §4304(8)(A)(4)
Plain Language
AI tools used in utilization review determinations must be open to inspection — meaning regulators or other authorized parties can examine how the tools work. Additionally, carriers must disclose in their written policies and procedures to enrollees that AI is being used in coverage determinations. This creates two distinct requirements: (1) an inspection/auditability obligation, and (2) a written disclosure obligation to enrollees about AI use.
Determinations derived from the use of artificial intelligence, including algorithms and other software tools, must: (4) Be open to inspection, and the use of artificial intelligence must be disclosed in the written policies and procedures to an enrollee.
Failed 2026-01-01
HC-01.4
24-A MRSA §4304(8)(A) (final paragraph)
Plain Language
Carriers must adopt and maintain governance policies for AI used in utilization review that establish accountability for the AI's performance, use, and outcomes, and that are periodically reviewed and revised for accuracy and reliability. This is an ongoing governance obligation — not a one-time pre-deployment check. Additionally, data used by the AI may not be repurposed beyond its intended and stated purpose, and must be protected against risks that could harm enrollees. The data use limitation functions as a purpose limitation principle similar to HC-01.5, while the governance and review requirements align with HC-01.4's periodic review obligation.
Use of artificial intelligence pursuant to this paragraph must be governed by policies that establish accountability for performance, use and outcomes that are reviewed and revised for accuracy and reliability. Data under this paragraph may not be used beyond its intended and stated purpose. Data under this paragraph must be protected from risk that may directly or indirectly cause harm to the enrollee.
Pending
HC-01.1
MCL 500.3406ss
Plain Language
Health insurers operating in Michigan are flatly prohibited from using artificial intelligence to deny, modify, or delay any health insurance claim. Unlike most healthcare AI statutes that require human review to accompany AI-informed decisions, this bill imposes an outright ban on AI-based claim review that results in any adverse action. The prohibition is absolute — there is no safe harbor for human-in-the-loop review, no exception for AI used as a decision-support tool, and no definition of 'artificial intelligence' to cabin the scope. Any insurer delivering, issuing, or renewing a health insurance policy in Michigan would need to ensure that no claim denial, modification, or delay is 'based on' an AI review.
Sec. 3406ss. An insurer that delivers, issues for delivery, or renews in this state a health insurance policy shall not deny, modify, or delay a claim based on a review using artificial intelligence.
Pending
HC-01.1
MCL 400.107b
Plain Language
The Michigan Department of Health and Human Services and any health plan contracted to administer Michigan's Medicaid program are categorically prohibited from using artificial intelligence to deny, modify, or delay any Medicaid claim. Unlike most healthcare AI laws that require human oversight of AI-informed decisions, this bill imposes an outright ban — AI may not be used as the basis for any adverse claim action at all. The bill does not define 'artificial intelligence,' does not specify an enforcement mechanism or penalties, and does not create a private right of action. It also does not address whether AI may be used in a supportive or advisory capacity where a human independently makes the final determination, creating potential ambiguity about whether any AI involvement in the claims review process is prohibited.
Sec. 107b. The department or a contracted health plan shall not deny, modify, or delay a claim under the medical assistance program based on a review using artificial intelligence.
Pending 2025-08-01
HC-01.1
Minn. Stat. § 62M.20(a)-(b)
Plain Language
Utilization review organizations are categorically prohibited from using artificial intelligence in any part of their utilization review operations — including initial review, clinical evaluation, adverse determinations, and appeals. This goes beyond HC-01.1's requirement that AI not serve as the sole or primary basis for adverse determinations; it is an outright ban on AI use at any stage. Any adverse determination made using AI is automatically null and void, regardless of whether the determination would otherwise have been correct. This is the broadest possible restriction: not merely a human-in-the-loop requirement, but a complete prohibition on AI involvement.
(a) The use of artificial intelligence is prohibited in utilization review. Without limiting the generality of the foregoing, a utilization review organization is prohibited from using artificial intelligence in any part of its review, evaluation, determination, or appeals processes. (b) Notwithstanding section 62M.14, any adverse determination made in violation of this section is null and void.
Pending 2025-08-01
HC-01.2
Minn. Stat. § 62M.09, subd. 3(a)-(b), (f)
Plain Language
The physician who reviews and makes an adverse clinical determination must hold an unrestricted Minnesota medical license and practice in the same or similar specialty as the treating provider. The bill adds a new requirement that this physician must also attest in writing that artificial intelligence was not used in the utilization review process. This attestation is a compliance artifact — it must be produced for every adverse determination. Any adverse determination made in violation of this attestation requirement is automatically null and void. The existing requirement that a qualified clinical peer make the adverse determination was already in law; the new obligation is the written AI-non-use attestation.
(a) A physician must review and make the adverse determination under section 62M.05 in all cases in which the utilization review organization has concluded that an adverse determination for clinical reasons is appropriate. (b) The physician conducting the review and making the adverse determination must: (1) hold a current, unrestricted license to practice medicine in this state; and (2) have the same or similar medical specialty as a provider that typically treats or manages the condition for which the health care service has been requested. (f) The physician must attest in writing that artificial intelligence was not used in the utilization review process. Notwithstanding section 62M.14, any adverse determination made in violation of this paragraph is null and void.
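The null-and-void rule means each adverse determination record must carry every required element, including the new written attestation, or it has no effect. A minimal sketch of such a compliance check (field names and the validation logic are illustrative, not statutory):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AdverseDetermination:
    """Illustrative record of a clinical adverse determination under the bill."""
    case_id: str
    physician_license_state: str
    license_unrestricted: bool
    specialty_match: bool
    ai_nonuse_attestation_signed: bool  # the new written attestation in paragraph (f)

def is_void(d: AdverseDetermination) -> bool:
    """A determination missing any required element is null and void.
    (This check is a sketch, not statutory language.)"""
    valid = (d.physician_license_state == "MN"
             and d.license_unrestricted
             and d.specialty_match
             and d.ai_nonuse_attestation_signed)
    return not valid

assert is_void(AdverseDetermination("c1", "MN", True, True, False))       # missing attestation
assert not is_void(AdverseDetermination("c2", "MN", True, True, True))
```

Because the attestation is per-determination, a carrier's system would need to block issuance of any adverse determination until the signed attestation is on file, rather than collecting attestations after the fact.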
Failed 2025-10-01
HC-01.3
Section 1(1)(a)-(b)
Plain Language
Health insurance issuers using AI, algorithms, or software tools for utilization review or utilization management based on medical necessity must ensure those tools base their determinations on the individual enrollee's medical history, clinical circumstances as presented by the requesting provider, and other relevant clinical information from the enrollee's record. The tools may not base determinations solely on a group dataset. This requires individualized clinical analysis for every determination rather than reliance on population-level data alone.
(1) A health insurance issuer as defined in 33-22-140 that uses artificial intelligence, an algorithm, or other software tool for the purpose of utilization review or utilization management functions, based in whole or in part on medical necessity, shall comply with this section and shall ensure all of the following: (a) the artificial intelligence, algorithm, or other software tool bases its determination on the following information, as applicable: (i) a covered person's medical or other clinical history; (ii) individual clinical circumstances as presented by the requesting provider; and (iii) other relevant clinical information contained in the covered person's medical or other clinical record; (b) the artificial intelligence, algorithm, or other software tool does not base its determination solely on a group dataset;
Failed 2025-10-01
HC-01.1, HC-01.2
Section 1(2)
Plain Language
AI tools may not deny, delay, or modify health care services based on medical necessity — not even as a partial basis. Every medical necessity determination must be made by a licensed physician or a health care professional competent in the specific clinical area at issue. That professional must review the requesting provider's recommendation, the enrollee's clinical history, and individual clinical circumstances. This is an absolute prohibition on AI making adverse coverage decisions, not merely a requirement for human oversight — the AI tool cannot serve as even a partial basis for adverse determinations on medical necessity grounds.
(2) Notwithstanding subsection (1), the artificial intelligence, algorithm, or other software tool may not deny, delay, or modify health care services based, in whole or in part, on medical necessity. A determination of medical necessity must be made only by a licensed physician or a licensed health care professional competent to evaluate the specific clinical issues involved in the health care services requested by the provider, as provided in subsection (1)(a)(ii), by reviewing and considering the requesting provider's recommendation, the enrollee's medical or other clinical history, as applicable, and individual clinical circumstances.
Failed 2025-10-01
HC-01.1
Section 1(1)(d)
Plain Language
AI tools used in utilization review may not replace or override healthcare provider decision-making. This is a separate obligation from the medical necessity prohibition in subsection (2) — it applies broadly to all utilization review functions, not just adverse medical necessity determinations. The AI tool must remain subordinate to provider clinical judgment across all utilization review contexts.
(d) the artificial intelligence, algorithm, or other software tool does not supplant health care provider decisionmaking;
Failed 2025-10-01
HC-01.7
Section 1(1)(g)
Plain Language
Health insurance issuers must ensure their AI tools are accessible for audit or compliance review by the Montana Department of Insurance. This means the tools, their criteria, and their outputs must be available for regulatory inspection — issuers cannot claim proprietary secrecy to block department review. Access is governed by applicable state and federal law.
(g) the artificial intelligence, algorithm, or other software tool is open to inspection for audit or compliance reviews by the department pursuant to applicable state and federal law;
Failed 2025-10-01
HC-01.6
Section 1(1)(h)
Plain Language
Health insurance issuers must include disclosures about their use and oversight of AI tools in their written utilization review policies and procedures. This is a documentation and disclosure obligation — the issuer's written policies must affirmatively address how AI is used, what oversight mechanisms are in place, and how the requirements of this section are satisfied.
(h) disclosures pertaining to the use and oversight of the artificial intelligence, algorithm, or other software tool are contained in the written policies and procedures, as required by this section;
Failed 2025-10-01
HC-01.4
Section 1(1)(i)
Plain Language
Health insurance issuers must periodically review and revise their AI tools' performance, use, and outcomes to maximize accuracy and reliability. This is a continuing obligation — not a one-time pre-deployment check. The bill does not specify a review frequency, but the obligation is ongoing for as long as the tool is in use.
(i) the artificial intelligence, algorithm, or other software tool's performance, use, and outcomes are periodically reviewed and revised to maximize accuracy and reliability;
Failed 2025-10-01
HC-01.5
Section 1(1)(j)
Plain Language
Patient data used by AI tools in utilization review must not be repurposed beyond its intended and stated use. This data use limitation aligns with HIPAA requirements and applicable Montana insurance law. Health insurance issuers must ensure their AI tools do not use patient clinical data collected for utilization review for secondary purposes such as marketing, underwriting, or model training beyond the stated purpose.
(j) patient data is not used beyond its intended and stated purpose, consistent with the federal Health Insurance Portability and Accountability Act of 1996, Public Law 104-191, and this title, as applicable;
Pending 2027-01-01
HC-01.1
RSA 420-J:6-f
Plain Language
Health carriers may not use AI to audit provider billing codes or to adjust those codes based on AI recommendations if doing so would override, alter, or amend the treating provider's clinical judgment. This is a categorical prohibition — AI may not be used in code auditing that changes a provider's clinical coding decisions, regardless of whether a human reviews the AI output. The definition of artificial intelligence is incorporated by reference to RSA 5-D:1. This provision goes beyond requiring human oversight; it prohibits the use of AI in this specific function entirely to the extent it would change clinical judgment.
Health carriers are prohibited from using artificial intelligence, as defined in RSA 5-D:1, to conduct audits of provider codes or to adjust such codes based on recommendations from artificial intelligence that would change, alter, or amend the clinical judgment of a provider.
Pending 2027-01-01
HC-01.7
RSA 420-J:6-f
Plain Language
Health carriers must maintain records that identify which AI tools are used in claims processing. These records must be available for inspection by the New Hampshire Insurance Department upon audit. This is an ongoing recordkeeping obligation — carriers need a system to track and document AI tool usage in their claims workflows at all times, not just upon request. The obligation covers all AI tools used in claims processing, which is broader than the code auditing prohibition in the same section.
Each carrier shall maintain records identifying the use of artificial intelligence tools in claims processing and make such records available to the insurance department upon audit.
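The recordkeeping obligation above can be pictured as a minimal audit trail. The following Python sketch is purely illustrative — the field names, function names, and CSV export format are assumptions, not anything the bill prescribes; it only shows the shape of "track every AI tool use in claims processing and be able to produce the records on audit":

```python
import csv
import io

# Illustrative audit-trail fields; the bill does not enumerate required fields.
AUDIT_FIELDS = ["claim_id", "tool_name", "tool_version", "used_at"]


def log_ai_tool_use(log, claim_id, tool_name, tool_version, used_at):
    """Append one audit-trail row for a single AI-assisted claims step."""
    log.append(dict(zip(AUDIT_FIELDS, (claim_id, tool_name, tool_version, used_at))))


def export_for_audit(log):
    """Render the running log as CSV, e.g. for production to the regulator."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=AUDIT_FIELDS)
    writer.writeheader()
    writer.writerows(log)
    return buf.getvalue()
```

The design point is that logging happens at the moment of use, not retroactively — the obligation is continuous, so the record must exist before any audit request arrives.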
Pending 2025-03-10
HC-01.6
Insurance Law § 338(b)
Plain Language
Health insurers, Article 43 corporations, and HMOs must publicly disclose on their websites whether or not they use AI-based algorithms in their utilization review processes. This is a blanket disclosure obligation — the entity must affirmatively state either that it uses or does not use such algorithms. The disclosure must be posted on the entity's publicly accessible website and must be directed to insureds and enrollees.
(b) The superintendent shall require all insurers authorized to write accident and health insurance in this state, corporations organized pursuant to article forty-three of this chapter, and a health maintenance organization certified pursuant to article forty-four of the public health law to notify insureds and enrollees about the use or lack of use of artificial intelligence-based algorithms in the utilization review process on the accessible Internet website of such insurer authorized to write accident and health insurance in this state, corporation organized pursuant to article forty-three of this chapter, or health maintenance organization certified pursuant to article forty-four of the public health law.
Pending 2025-03-10
HC-01.7
Insurance Law § 338(c)
Plain Language
Covered insurers, Article 43 corporations, and HMOs must submit their AI-based algorithms and training datasets used in utilization review to the Department of Financial Services. The Department must then implement a certification process to verify that the algorithms and training data have minimized the risk of bias across enumerated protected characteristics (race, color, religious creed, ancestry, age, sex, gender, national origin, handicap or disability) and that they adhere to evidence-based clinical guidelines. This is both a regulatory filing obligation on the covered entity and a mandate on the Department to create a bias certification regime. Entities must submit algorithms currently in use and those planned for future use.
(c) Every insurer authorized to write accident and health insurance in this state, corporation organized pursuant to article forty-three of this chapter, and health maintenance organization certified pursuant to article forty-four of the public health law shall submit the artificial intelligence-based algorithms and training data sets that are being used or will be used in the utilization review process to the department. The department shall implement a process that allows the department to certify that these artificial intelligence-based algorithms and training data sets have minimized the risk of bias based on the covered person's race, color, religious creed, ancestry, age, sex, gender, national origin, handicap or disability and adhere to evidence-based clinical guidelines.
Pending 2025-03-10
HC-01.1HC-01.2
Insurance Law § 338(d)
Plain Language
When a utilization review initially uses AI-based algorithms, a clinical peer reviewer must independently open and review the individual patient's clinical records or data — and document that review — before issuing any adverse determination. This means AI output alone cannot be the final word on a denial; a qualified clinical peer must examine the individual's actual clinical information and create a documented record of having done so. The obligation falls on the clinical peer reviewer personally, not just the insurer. This effectively ensures human review of individualized clinical data is a prerequisite to any AI-informed adverse determination.
(d) A clinical peer reviewer who participates in a utilization review process for an insurer authorized to write accident and health insurance in this state, a corporation organized pursuant to article forty-three of this chapter, and a health maintenance organization certified pursuant to article forty-four of the public health law that initially uses artificial intelligence-based algorithms for a utilization review shall open and document the utilization review of the individual clinical records or data prior to issuing an adverse determination.
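The documented-review prerequisite can be sketched as a simple compliance gate. This is a hypothetical Python illustration — the class, field, and function names are assumptions, not statutory terms; it only demonstrates the rule that no adverse determination may issue until a clinical peer has opened and documented a review of the individual's records:

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional


@dataclass
class PeerReview:
    """Record of a clinical peer reviewer's examination of individual records."""
    reviewer_license_id: str
    records_opened_at: Optional[datetime] = None
    notes: str = ""

    def open_records(self, note: str) -> None:
        # Documents that the reviewer actually opened the individual's
        # clinical records, and when.
        self.records_opened_at = datetime.now(timezone.utc)
        self.notes = note


def may_issue_adverse_determination(review: Optional[PeerReview]) -> bool:
    # AI output alone never suffices: a documented human review of the
    # individual's records is a prerequisite to any adverse determination.
    return (
        review is not None
        and review.records_opened_at is not None
        and bool(review.notes.strip())
    )
```

The gate returns False both when no peer review exists and when a reviewer was assigned but never opened and documented the records, mirroring the statute's two requirements (open and document).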
Pending 2025-01-30
HC-01.3
Insurance Law § 3224-e(a)(1)
Plain Language
Health care service plans using AI, algorithms, or software tools for utilization review must ensure those tools base their determinations on individualized enrollee clinical data — including the enrollee's medical or dental history, the clinical circumstances presented by the requesting provider, and other relevant clinical information in the enrollee's record. The tool may not rely solely on aggregate or group-level datasets. This applies both to plans that directly use such tools and to plans that contract with entities using them.
(1) The artificial intelligence, algorithm, or other software tool bases its determination on the following information, as applicable: (i) An enrollee's medical or dental history; (ii) Individual clinical circumstances as presented by the requesting provider; and (iii) Other relevant clinical information contained in the enrollee's medical or dental record.
Pending 2025-01-30
HC-01.1
Insurance Law § 3224-e(a)(2)
Plain Language
AI, algorithmic, or software tools used in utilization review must not replace or supplant the judgment of health care providers. The tool may inform or assist the decision-making process, but the final clinical decision must remain with a human health care provider. This is a general prohibition that operates alongside the more specific medical necessity review requirement in subsection (b).
(2) The artificial intelligence, algorithm, or other software tool does not supplant health care provider decision making.
Pending 2025-01-30
HC-01.1HC-01.2
Insurance Law § 3224-e(b)
Plain Language
Any denial, delay, or modification of health care services based on medical necessity must be made by a licensed physician or a health care provider who is clinically competent in the relevant specialty. The reviewing professional must consider the requesting provider's recommendation and the enrollee's individualized medical or dental history and clinical circumstances. AI tools cannot make these adverse determinations — a qualified human must do so. This provision operates as an overriding requirement regardless of the other subsection (a) obligations.
(b) Notwithstanding subsection (a) of this section, a denial, delay, or modification of health care services based on medical necessity shall be made by a licensed physician or other health care provider competent to evaluate the specific clinical issues involved in the health care services requested by the provider by considering the requesting provider's recommendation and, based on that recommendation, the enrollee's medical or dental history, as applicable, and individual clinical circumstances.
Pending 2025-01-30
HC-01.7
Insurance Law § 3224-e(a)(5)
Plain Language
AI, algorithmic, or software tools used in utilization review must be open to inspection. While the bill does not specify who may inspect or the procedures for inspection, this requirement means the tools must be accessible for regulatory audit, compliance review, or other examination. Health care service plans must ensure they can produce the tool for inspection when required.
(5) The artificial intelligence, algorithm, or other software tool is open to inspection.
Pending 2025-01-30
HC-01.6
Insurance Law § 3224-e(a)(6)
Plain Language
Health care service plans must include disclosures about the use and oversight of AI tools in their written policies and procedures. This means the plan's utilization review policies and procedures must document that AI tools are used, describe how they are used, and explain oversight mechanisms. These written policies and procedures would be available to regulators, providers, and enrollees as applicable under existing Insurance Law requirements.
(6) Disclosures pertaining to the use and oversight of the artificial intelligence, algorithm, or other software tool are contained in the written policies and procedures.
Pending 2025-01-30
HC-01.4
Insurance Law § 3224-e(a)(7)
Plain Language
Health care service plans must periodically review and revise AI tools used in utilization review to maximize their accuracy and reliability. This is an ongoing operational obligation — not a one-time pre-deployment check. Reviews must cover the tool's performance, how it is being used, and the outcomes it produces. Plans should document these periodic reviews to demonstrate compliance.
(7) The artificial intelligence, algorithm, or other software tool's performance, use, and outcomes are periodically reviewed and revised to maximize accuracy and reliability.
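The periodic-review obligation is open-ended: no interval is specified, only that review be ongoing. A plan might operationalize it as a cadence tracker like the following hypothetical sketch — the 180-day interval and all names are assumptions chosen for illustration, not anything the bill requires:

```python
from datetime import date, timedelta

# Assumed cadence; the bill sets no specific review frequency.
REVIEW_INTERVAL = timedelta(days=180)


def tools_due_for_review(last_reviewed, today):
    """Return the AI tools whose accuracy/reliability review is overdue.

    last_reviewed: mapping of tool name -> date of last documented review.
    """
    return sorted(
        name
        for name, reviewed in last_reviewed.items()
        if today - reviewed >= REVIEW_INTERVAL
    )
```

Whatever interval a plan chooses, the documented output of each review cycle is what demonstrates that the obligation is being met continuously rather than once at deployment.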
Pending 2025-01-30
HC-01.5
Insurance Law § 3224-e(a)(8)
Plain Language
Patient data used by AI tools in utilization review must not be repurposed beyond its intended and stated purpose. This data use limitation must be consistent with both applicable New York state privacy laws and HIPAA. Health care service plans must ensure that enrollee clinical data processed by AI tools for utilization review is not used for secondary purposes such as marketing, research unrelated to the enrollee's care, or other functions beyond the stated utilization review purpose.
(8) Patient data is not used beyond its intended and stated purpose, consistent with applicable state laws and the federal Health Insurance Portability and Accountability Act of 1996 (Public Law 104-191).
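The purpose-limitation rule amounts to tagging data with its stated purpose at collection and refusing any mismatched secondary use. A minimal hypothetical sketch (field names, purpose labels, and the exception type are all illustrative assumptions):

```python
class PurposeViolation(Exception):
    """Raised when patient data is requested for a use beyond its stated purpose."""


def release_patient_data(record, requested_use):
    # record["stated_purpose"] is fixed when the data is collected for
    # utilization review; secondary uses (marketing, model training, ...)
    # do not match and are refused.
    if requested_use != record["stated_purpose"]:
        raise PurposeViolation(
            f"data collected for {record['stated_purpose']!r} "
            f"may not be used for {requested_use!r}"
        )
    return record["payload"]
```

Enforcing the check at the data-access layer, rather than trusting each downstream consumer, is what makes the limitation auditable.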
Pending 2025-08-18
HC-01.3
Pub. Health Law § 4905-a(1)(a)-(b)
Plain Language
Utilization review agents using AI tools in coverage determinations must ensure those tools base their outputs on individualized enrollee clinical data — including the enrollee's medical history, clinical circumstances presented by the requesting provider, and other relevant clinical information in the enrollee's record. The AI tool may not base its determination solely on a group dataset. This requires AI systems to incorporate individualized patient information rather than relying exclusively on aggregate population-level data.
(a) The artificial intelligence, algorithm, or other software tool bases its determination on the following information, as applicable: (i) an enrollee's medical or other clinical history; (ii) individual clinical circumstances as presented by the requesting provider; and (iii) other relevant clinical information contained in the enrollee's medical or other clinical record. (b) The artificial intelligence, algorithm, or other software tool does not base its determination solely on a group dataset.
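The "not solely on a group dataset" requirement can be framed as an input-validation check on what data actually informed the tool's output. The following Python sketch is a hypothetical illustration — the input labels are assumptions paraphrasing the statute's three categories, not defined terms:

```python
# Illustrative labels for the individualized inputs the statute enumerates.
INDIVIDUALIZED_INPUTS = {
    "medical_history",         # enrollee's medical or other clinical history
    "provider_circumstances",  # circumstances presented by the requesting provider
    "clinical_record",         # other relevant info from the enrollee's record
}


def uses_individualized_data(inputs_used):
    """True only if at least one individualized input informed the output.

    A bare {"group_dataset"} does not qualify: group-level data may
    supplement, but never substitute for, the enrollee's own data.
    """
    return bool(set(inputs_used) & INDIVIDUALIZED_INPUTS)
```

Note the asymmetry the statute creates: group data plus individualized data passes, but group data alone fails.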
Pending 2025-08-18
HC-01.1HC-01.2
Pub. Health Law § 4905-a(2)
Plain Language
AI tools are categorically prohibited from denying, delaying, or modifying healthcare services based on medical necessity — even partially. All medical necessity determinations must be made by a licensed physician or licensed healthcare professional who is competent in the specific clinical area at issue. The reviewing professional must consider the treating provider's recommendation, the enrollee's medical history, and individual clinical circumstances. This is an absolute prohibition on AI making adverse coverage decisions, not merely a requirement for human oversight — the AI may not make the determination at all.
Notwithstanding subdivision one of this section, the artificial intelligence, algorithm, or other software tool shall not deny, delay, or modify health care services based, in whole or in part, on medical necessity. A determination of medical necessity shall be made only by a licensed physician or a licensed health care professional competent to evaluate the specific clinical issues involved in the health care services requested by the provider, as provided in this title, by reviewing and considering the requesting provider's recommendation, the enrollee's medical or other clinical history, as applicable, and individual clinical circumstances.
Pending 2025-08-18
HC-01.4
Pub. Health Law § 4905-a(1)(i)
Plain Language
Utilization review agents must periodically review and revise the AI tools they use in utilization review to maximize accuracy and reliability. This is an ongoing operational obligation — not a one-time pre-deployment check. The statute does not specify the review interval, but the obligation is continuous and covers the tool's performance, use patterns, and outcomes.
The artificial intelligence, algorithm, or other software tool's performance, use, and outcomes are periodically reviewed and revised to maximize accuracy and reliability.
Pending 2025-08-18
HC-01.5
Pub. Health Law § 4905-a(1)(j)
Plain Language
Patient data used by AI tools in utilization review must not be used beyond its intended and stated purpose. This obligation is explicitly tied to HIPAA compliance and limits secondary use of patient data collected or processed in connection with AI-driven utilization review. Utilization review agents must ensure that data provided for coverage determination purposes is not repurposed for other functions such as marketing, product development, or unrelated analytics.
Patient data is not used beyond its intended and stated purpose, consistent with this section and the federal Health Insurance Portability and Accountability Act of 1996 (Public Law 104-191), as applicable.
Pending 2025-08-18
HC-01.7
Pub. Health Law § 4905-a(1)(g)-(h)
Plain Language
Utilization review agents must ensure their AI tools are available for regulatory inspection, audit, and compliance review by the Department of Health. Additionally, disclosures about the use and oversight of AI tools must be included in the written utilization review policies and procedures required under existing law (PHL § 4902). This creates both a regulatory transparency obligation (making AI tools open for audit) and a documentation obligation (including AI disclosures in existing UR policy filings).
(g) The artificial intelligence, algorithm, or other software tool is open to inspection for audit or compliance reviews by the department. (h) Disclosures pertaining to the use and oversight of the artificial intelligence, algorithm, or other software tool are contained in the written policies and procedures, as required by section forty-nine hundred two of this title.
Pending 2025-08-18
HC-01.1
Pub. Health Law § 4905-a(1)(d)
Plain Language
AI tools used in utilization review must not replace healthcare provider decision-making. This is broader than the medical necessity prohibition in subdivision 2 — it applies to the entire utilization review process, not just medical necessity determinations. The tool may inform or assist provider decisions but may not displace them. This operates as a general principle that AI remains an adjunct to, not a substitute for, clinical judgment.
The artificial intelligence, algorithm, or other software tool does not supplant health care provider decision-making.
Pending 2025-08-18
HC-01.3
Ins. Law § 4905-a(1)(a)-(b)
Plain Language
Disability insurers (including specialized health insurers) using AI tools in utilization review or utilization management must ensure those tools base their outputs on individualized insured clinical data — including the insured's medical history, clinical circumstances presented by the requesting provider, and other relevant clinical information in the insured's record. The AI tool may not base its determination solely on a group dataset. This mirrors the Public Health Law obligation but applies to the insurance-regulated entity population.
(a) The artificial intelligence, algorithm, or other software tool bases its determination on the following information, as applicable: (i) An insured's medical or other clinical history; (ii) Individual clinical circumstances as presented by the requesting provider; and (iii) Other relevant clinical information contained in the insured's medical or other clinical record. (b) The artificial intelligence, algorithm, or other software tool does not base its determination solely on a group dataset.
Pending 2025-08-18
HC-01.1HC-01.2
Ins. Law § 4905-a(2)
Plain Language
AI tools used by disability insurers are categorically prohibited from denying, delaying, or modifying healthcare services based on medical necessity — even partially. All medical necessity determinations must be made by a licensed physician or licensed healthcare professional competent in the relevant clinical area, who reviews the treating provider's recommendation, the insured's medical history, and individual clinical circumstances. This mirrors the Public Health Law obligation but applies to insurers regulated under the Insurance Law.
Notwithstanding subsection one of this section, the artificial intelligence, algorithm, or other software tool shall not deny, delay, or modify health care services based, in whole or in part, on medical necessity. A determination of medical necessity shall be made only by a licensed physician or licensed health care professional competent to evaluate the specific clinical issues involved in the health care services requested by the provider, as provided in this title, by reviewing and considering the requesting provider's recommendation, the insured's medical or other clinical history, as applicable, and individual clinical circumstances.
Pending 2025-08-18
HC-01.4
Ins. Law § 4905-a(1)(i)
Plain Language
Disability insurers must periodically review and revise AI tools used in utilization review to maximize accuracy and reliability. This is an ongoing operational obligation covering the tool's performance, use patterns, and outcomes. This mirrors the Public Health Law obligation for insurers regulated under the Insurance Law.
The artificial intelligence, algorithm, or other software tool's performance, use, and outcomes are periodically reviewed and revised to maximize accuracy and reliability.
Pending 2025-08-18
HC-01.5
Ins. Law § 4905-a(1)(j)
Plain Language
Disability insurers must ensure that patient data used by AI tools in utilization review or utilization management is not used beyond its intended and stated purpose, consistent with state law and HIPAA. This mirrors the Public Health Law obligation for insurers regulated under the Insurance Law.
Patient data is not used beyond its intended and stated purpose, consistent with state law and the federal Health Insurance Portability and Accountability Act of 1996 (Public Law 104-191), as applicable.
Pending 2025-08-18
HC-01.7
Ins. Law § 4905-a(1)(g)-(h)
Plain Language
Disability insurers must ensure their AI tools are available for regulatory inspection, audit, and compliance review by the Department of Financial Services. Disclosures about the use and oversight of AI tools must be included in the written utilization review policies and procedures already required under existing law (Insurance Law § 4902). This mirrors the Public Health Law obligation for insurers regulated under the Insurance Law.
(g) The artificial intelligence, algorithm, or other software tool is open to inspection for audit or compliance reviews by the department pursuant to applicable state and federal law. (h) Disclosures pertaining to the use and oversight of the artificial intelligence, algorithm, or other software tool are contained in the written policies and procedures, as required by section forty-nine hundred two of this title.
Pending 2025-08-18
HC-01.1
Ins. Law § 4905-a(1)(d)
Plain Language
AI tools used by disability insurers in utilization review must not replace healthcare provider decision-making. This mirrors the Public Health Law obligation and operates as a general principle that AI remains an adjunct to, not a substitute for, clinical judgment in the insurance-regulated context.
The artificial intelligence, algorithm, or other software tool does not supplant health care provider decision-making.
Pending 2025-08-11
HC-01.3
Pub. Health Law § 4905-a(1)(a)-(b)
Plain Language
Utilization review agents using AI tools for utilization review based on medical necessity must ensure that the AI tool bases its determinations on the individual enrollee's medical history, clinical circumstances presented by the requesting provider, and other relevant clinical information from the enrollee's record. The tool may not base its determination solely on aggregate or group-level datasets. This requires individualized clinical data inputs, not population-level statistical models alone.
(a) The artificial intelligence, algorithm, or other software tool bases its determination on the following information, as applicable: (i) an enrollee's medical or other clinical history; (ii) individual clinical circumstances as presented by the requesting provider; and (iii) other relevant clinical information contained in the enrollee's medical or other clinical record. (b) The artificial intelligence, algorithm, or other software tool does not base its determination solely on a group dataset.
Pending 2025-08-11
HC-01.1HC-01.2
Pub. Health Law § 4905-a(2)
Plain Language
AI tools may not deny, delay, or modify health care services based on medical necessity — that determination must be made exclusively by a licensed physician or a licensed health care professional competent in the relevant clinical specialty. The human clinician must review and consider the requesting provider's recommendation, the enrollee's medical history, and individual clinical circumstances. This is an absolute prohibition on AI-driven adverse determinations; the AI tool cannot serve as even a partial basis for a medical necessity denial.
Notwithstanding subdivision one of this section, the artificial intelligence, algorithm, or other software tool shall not deny, delay, or modify health care services based, in whole or in part, on medical necessity. A determination of medical necessity shall be made only by a licensed physician or a licensed health care professional competent to evaluate the specific clinical issues involved in the health care services requested by the provider, as provided in this title, by reviewing and considering the requesting provider's recommendation, the enrollee's medical or other clinical history, as applicable, and individual clinical circumstances.
Pending 2025-08-11
HC-01.4
Pub. Health Law § 4905-a(1)(i)
Plain Language
Utilization review agents must periodically review the AI tool's performance, use, and outcomes, and revise the tool to maximize accuracy and reliability. This is a continuing operational obligation — not a one-time pre-deployment check — requiring ongoing monitoring and refinement of the tool over time.
(i) The artificial intelligence, algorithm, or other software tool's performance, use, and outcomes are periodically reviewed and revised to maximize accuracy and reliability.
Pending 2025-08-11
HC-01.5
Pub. Health Law § 4905-a(1)(j)
Plain Language
Patient data used by the AI tool in utilization review must not be repurposed beyond the intended and stated purpose. This requirement operates in parallel with HIPAA and reinforces data minimization for AI-specific contexts. Utilization review agents must ensure their AI vendors and contracted entities also comply with this limitation.
(j) Patient data is not used beyond its intended and stated purpose, consistent with this section and the federal Health Insurance Portability and Accountability Act of 1996 (Public Law 104-191), as applicable.
Pending 2025-08-11
HC-01.7
Pub. Health Law § 4905-a(1)(g)-(h)
Plain Language
The AI tool must be open to inspection by the Department of Health for audit or compliance review purposes. In addition, the utilization review agent must include disclosures about the use and oversight of the AI tool in the written policies and procedures already required under Public Health Law § 4902. This effectively requires documenting AI use within existing utilization review policy filings and making the AI system itself accessible for regulatory examination.
(g) The artificial intelligence, algorithm, or other software tool is open to inspection for audit or compliance reviews by the department. (h) Disclosures pertaining to the use and oversight of the artificial intelligence, algorithm, or other software tool are contained in the written policies and procedures, as required by section forty-nine hundred two of this title.
Pending 2025-08-11
HC-01.3
Ins. Law § 4905-a(1)(a)-(b)
Plain Language
Disability insurers (including specialized health insurers) using AI tools for utilization review must ensure the tool bases determinations on the individual insured's medical history, clinical circumstances presented by the requesting provider, and other relevant clinical information from the insured's record. The tool may not base its determination solely on aggregate or group-level datasets. This mirrors the Public Health Law requirement but applies to entities regulated under the Insurance Law.
(a) The artificial intelligence, algorithm, or other software tool bases its determination on the following information, as applicable: (i) An insured's medical or other clinical history; (ii) Individual clinical circumstances as presented by the requesting provider; and (iii) Other relevant clinical information contained in the insured's medical or other clinical record. (b) The artificial intelligence, algorithm, or other software tool does not base its determination solely on a group dataset.
Pending 2025-08-11
HC-01.1HC-01.2
Ins. Law § 4905-a(2)
Plain Language
Under the Insurance Law, AI tools used by disability insurers may not deny, delay, or modify health care services based on medical necessity. Only a licensed physician or competent licensed health care professional may make medical necessity determinations, considering the requesting provider's recommendation, the insured's medical history, and individual clinical circumstances. This is an absolute prohibition — the AI cannot serve as even a partial basis for a medical necessity denial, delay, or modification.
Notwithstanding subsection one of this section, the artificial intelligence, algorithm, or other software tool shall not deny, delay, or modify health care services based, in whole or in part, on medical necessity. A determination of medical necessity shall be made only by a licensed physician or licensed health care professional competent to evaluate the specific clinical issues involved in the health care services requested by the provider, as provided in this title, by reviewing and considering the requesting provider's recommendation, the insured's medical or other clinical history, as applicable, and individual clinical circumstances.
Pending 2025-08-11
HC-01.4
Ins. Law § 4905-a(1)(i)
Plain Language
Disability insurers must periodically review the AI tool's performance, use, and outcomes, and revise the tool to maximize accuracy and reliability. This mirrors the Public Health Law obligation and imposes a continuing operational review requirement on insurers regulated under the Insurance Law.
(i) The artificial intelligence, algorithm, or other software tool's performance, use, and outcomes are periodically reviewed and revised to maximize accuracy and reliability.
Pending 2025-08-11
HC-01.5
Ins. Law § 4905-a(1)(j)
Plain Language
Disability insurers must ensure patient data used by the AI tool is not repurposed beyond the intended and stated purpose, consistent with state law and HIPAA. This Insurance Law provision mirrors the Public Health Law data purpose limitation and reinforces data minimization principles in the insurer context.
(j) Patient data is not used beyond its intended and stated purpose, consistent with state law and the federal Health Insurance Portability and Accountability Act of 1996 (Public Law 104-191), as applicable.
Pending 2025-08-11
HC-01.7
Ins. Law § 4905-a(1)(g)-(h)
Plain Language
The AI tool used by disability insurers must be open to inspection by the Department of Financial Services for audit or compliance review purposes, subject to applicable state and federal law. Insurers must also include disclosures about AI use and oversight in the written utilization review policies and procedures required under Insurance Law § 4902. This mirrors the Public Health Law inspection and disclosure provisions for the insurer context.
(g) The artificial intelligence, algorithm, or other software tool is open to inspection for audit or compliance reviews by the department pursuant to applicable state and federal law. (h) Disclosures pertaining to the use and oversight of the artificial intelligence, algorithm, or other software tool are contained in the written policies and procedures, as required by section forty-nine hundred two of this title.
Pending 2025-01-01
HC-01.7
Ohio Rev. Code § 3902.80(B)(3)
Plain Language
Both the superintendent and the health plan issuer must publish the annual AI utilization review report on their respective websites. The superintendent publishes on the department of insurance website; the health plan issuer publishes on its own publicly accessible website. This creates a dual public disclosure obligation, ensuring public access to information about AI use in utilization review processes.
(3) The superintendent shall publish a copy of the report on the web site of the department of insurance. The health plan issuer shall publish a copy of the report on the health plan issuer's publicly accessible web site.
Pending 2025-01-01
HC-01.1
Ohio Rev. Code § 3902.80(C)(1)
Plain Language
Health plan issuers are prohibited from making any care decision about a covered person — including denials, delays, or modifications of health care services based on medical necessity — when that decision is based solely on AI-generated results. AI may inform the decision, but it cannot be the sole basis. A qualified human must independently make or affirm the determination. This is a hard prohibition with no exceptions or safe harbors.
(C)(1) No health plan issuer shall make a decision regarding the care of a covered person, including the decision to deny, delay, or modify health care services based on medical necessity, based solely on results derived from the use or application of artificial intelligence.
Pending 2025-01-01
HC-01.2HC-01.3
Ohio Rev. Code § 3902.80(C)(2)-(3)
Plain Language
Medical necessity determinations must be made by a licensed physician or a clinically qualified provider who evaluates the specific clinical issues at hand. The determination must account for the treating provider's recommendation, the patient's medical and clinical history, and individual clinical circumstances — meaning group-level or algorithmic outputs alone are insufficient. Additionally, any physician involved in medical necessity or utilization review must actually open and review the individual's clinical records and document that review before issuing a decision. This creates both a qualified-reviewer requirement and a documented individualized-review requirement.
(2) A determination of medical necessity under a health benefit plan must meet both of the following requirements: (a) The determination is made by a licensed physician or a provider that is qualified to evaluate the specific clinical issues involved in the requested health care services. (b) The determination takes into consideration the requesting provider's recommendation, the covered person's medical or other clinical history, and individual clinical circumstances. (3) Any physician who participates in a determination of medical necessity or a utilization review process on behalf of a health plan issuer shall open and document the review of the individual clinical records or data prior to making an individualized documented decision.
Pending 2025-01-01
HC-01.8
Ohio Rev. Code § 3902.80(C)(4)
Plain Language
When a health plan issuer denies, delays, or modifies covered health care services and an AI-based algorithm was used in the decision, the decision must include a plain language explanation of the rationale. This is a disclosure-at-the-point-of-adverse-determination obligation — the explanation must accompany the decision itself, not be available only on request. The requirement applies to all AI-assisted adverse decisions, not just those based solely on AI.
(4) Any decision to deny, delay, or modify health care services covered under a health benefit plan in which an artificial intelligence-based algorithm is used shall be accompanied by a plain language explanation of the rationale used in making the decision.
Pending 2025-01-01
HC-01.7
Ohio Rev. Code § 3902.80(B)(3)
Plain Language
The annual AI utilization review report filed with the Superintendent must be published in two places: the superintendent posts a copy on the Department of Insurance's website, and the health plan issuer posts a copy on its own publicly accessible website. This dual-publication requirement provides both regulatory and public transparency into how health insurers use AI in utilization review. For the issuer, it is an affirmative obligation to post the same report it files with the regulator.
(3) The superintendent shall publish a copy of the report on the web site of the department of insurance. The health plan issuer shall publish a copy of the report on the health plan issuer's publicly accessible web site.
Pending 2025-01-01
HC-01.1, HC-01.2, HC-01.3
Ohio Rev. Code § 3902.80(C)(1)-(3)
Plain Language
Health plan issuers are prohibited from making care decisions — including denials, delays, or modifications of services on medical necessity grounds — based solely on AI results. Every medical necessity determination must be made by a licensed physician or qualified provider who considers the treating provider's recommendation, the covered person's medical history, and individual clinical circumstances. Physicians participating in utilization review must affirmatively open and review individual clinical records before making a documented decision. This ensures AI is a support tool, not the decision-maker, and that every adverse determination reflects individualized human clinical judgment based on the patient's own records.
(C)(1) No health plan issuer shall make a decision regarding the care of a covered person, including the decision to deny, delay, or modify health care services based on medical necessity, based solely on results derived from the use or application of artificial intelligence. (2) A determination of medical necessity under a health benefit plan must meet both of the following requirements: (a) The determination is made by a licensed physician or a provider that is qualified to evaluate the specific clinical issues involved in the requested health care services. (b) The determination takes into consideration the requesting provider's recommendation, the covered person's medical or other clinical history, and individual clinical circumstances. (3) Any physician who participates in a determination of medical necessity or a utilization review process on behalf of a health plan issuer shall open and document the review of the individual clinical records or data prior to making an individualized documented decision.
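For a compliance team translating the combined § 3902.80(C)(1)-(3) conditions into an automated pre-decision gate, the provisions reduce to a checklist that must clear before an adverse determination is finalized. The sketch below is purely illustrative, not anything the statute prescribes; the `Determination` dataclass and every field name are assumptions about how an issuer might record these facts internally.

```python
from dataclasses import dataclass


@dataclass
class Determination:
    """Hypothetical record of how a medical necessity decision was made."""
    based_solely_on_ai: bool                       # C(1)
    reviewer_licensed_or_qualified: bool           # C(2)(a)
    considered_provider_recommendation: bool       # C(2)(b)
    considered_clinical_history: bool              # C(2)(b)
    considered_individual_circumstances: bool      # C(2)(b)
    records_opened_and_review_documented: bool     # C(3)


def ohio_3902_80_c_violations(d: Determination) -> list[str]:
    """Return the list of § 3902.80(C) checks that fail; empty means all pass."""
    violations = []
    if d.based_solely_on_ai:
        violations.append("C(1): decision based solely on AI results")
    if not d.reviewer_licensed_or_qualified:
        violations.append("C(2)(a): reviewer not a licensed physician or qualified provider")
    if not (d.considered_provider_recommendation
            and d.considered_clinical_history
            and d.considered_individual_circumstances):
        violations.append("C(2)(b): individualized factors not all considered")
    if not d.records_opened_and_review_documented:
        violations.append("C(3): clinical-record review not opened and documented")
    return violations
```

A workflow engine could block finalization whenever the returned list is non-empty and route the case back to a qualified reviewer.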
Pending 2025-01-01
HC-01.6
Ohio Rev. Code § 3902.80(C)(4)
Plain Language
When a health plan issuer uses an AI-based algorithm in a decision to deny, delay, or modify covered health care services, the issuer must provide a plain language explanation of the rationale behind the decision. This applies to every such adverse determination — not just on request. The explanation must accompany the decision itself, ensuring patients and providers understand the reasoning at the time they receive the adverse action.
(4) Any decision to deny, delay, or modify health care services covered under a health benefit plan in which an artificial intelligence-based algorithm is used shall be accompanied by a plain language explanation of the rationale used in making the decision.
Pending 2026-11-01
HC-01.3
36 O.S. § 6567(A)(1)-(2)
Plain Language
Utilization review organizations, disability insurers, and specialized health insurers using AI tools must ensure those tools base their determinations on individualized enrollee clinical data — including medical history, clinical circumstances presented by the requesting provider, and other relevant clinical record information. The AI tool may not base its determination solely on a group dataset. This applies both to entities that directly use AI tools and those that contract with third parties that use AI tools.
A. A utilization review organization, disability insurer, or specialized health insurer that uses an artificial intelligence tool or contracts with or otherwise works through an entity that uses an artificial intelligence tool shall ensure that the artificial intelligence tool: 1. Bases its determination on the following information, as applicable: a. an enrollee's medical or other clinical history, b. individual clinical circumstances as presented by the requesting provider, and c. other relevant clinical information contained in the enrollee's medical or other clinical record; 2. Does not base its determination solely on a group dataset;
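The two data-sourcing prongs of § 6567(A)(1)-(2) can be modeled as a simple set test over the inputs an AI tool's determination actually rested on: at least one individualized source must be present, and the inputs may not be the group dataset alone. A minimal sketch under those assumptions; the source tags (`medical_history` and so on) are invented labels, not statutory terms.

```python
# Hypothetical tags for the individualized sources named in § 6567(A)(1)(a)-(c).
INDIVIDUAL_SOURCES = {
    "medical_history",            # (a) enrollee's medical or other clinical history
    "presented_circumstances",    # (b) circumstances presented by the requesting provider
    "clinical_record",            # (c) other relevant clinical record information
}


def determination_inputs_ok(inputs_used: set[str]) -> bool:
    """Check both prongs: individualized data was used (prong 1),
    and the determination does not rest solely on a group dataset (prong 2)."""
    uses_individual = bool(inputs_used & INDIVIDUAL_SOURCES)
    solely_group = inputs_used == {"group_dataset"}
    return uses_individual and not solely_group
```

Note that using a group dataset alongside individualized data passes: the statute forbids only a determination based *solely* on group data.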
Pending 2026-11-01
HC-01.1, HC-01.2
36 O.S. § 6567(B)
Plain Language
AI tools may not deny, delay, or modify health care services where medical necessity is, in whole or in part, the basis for the decision. All medical necessity determinations must be made exclusively by a licensed physician or a licensed health care professional who is competent in the specific clinical issues involved. That human reviewer must consider the requesting provider's recommendation, the enrollee's medical and clinical history, and individual circumstances. This is a complete prohibition on AI-driven medical necessity determinations, not merely a human-in-the-loop requirement.
B. The artificial intelligence tool shall not deny, delay, or modify health care services based, in whole or in part, on medical necessity. A determination of medical necessity shall be made only by a licensed physician or a licensed health care professional competent to evaluate the specific clinical issues involved in the health care services requested by the provider, by reviewing and considering the requesting provider's recommendation, the enrollee's medical or other clinical history, and individual circumstances.
Pending 2026-11-01
HC-01.5
36 O.S. § 6567(A)(5)
Plain Language
AI tools used in utilization review must not use patient data beyond the tool's intended and stated purpose. This obligation explicitly ties to HIPAA compliance and reinforces that patient data collected for utilization review purposes may not be repurposed for other uses. Covered entities must ensure their AI tool vendors and contractors also comply with this limitation.
5. Does not use patient data beyond its intended and stated purpose consistent with the federal Health Insurance Portability and Accountability Act of 1996, P.L. No. 104-191, as applicable;
Pending 2026-11-01
HC-01.4
36 O.S. § 6567(A)(10)
Plain Language
Covered entities must ensure that AI tools used in utilization review are subject to periodic performance review and revision to maximize accuracy and reliability. This is an ongoing operational obligation — not a one-time pre-deployment check. The statute does not specify the review interval, leaving this to the entity's discretion or future Commissioner rulemaking.
10. Requires performance use and outcomes to be periodically reviewed and revised to maximize accuracy and reliability.
Pending 2026-11-01
HC-01.6
36 O.S. § 6567(C)
Plain Language
All health benefit plans in Oklahoma must disclose on their publicly accessible website whether they use AI tools in the utilization review process. The obligation applies regardless of actual AI use: a plan that does not use AI tools must affirmatively say so. Because the notification is posted on the plan's website, it operates as a standing disclosure rather than a per-claim communication.
C. Any health benefit plan in this state shall notify enrollees and insureds about the use or lack of use of artificial intelligence tools in the utilization review process on the accessible Internet website of such health benefit plan.
Pending 2026-11-01
HC-01.7
36 O.S. § 6567(A)(8)-(9)
Plain Language
AI tools used in utilization review must be open to inspection by the Oklahoma Insurance Commissioner for audit or compliance review purposes. Additionally, the entity's written policies and procedures must contain disclosures about the use and oversight of the AI tool. Together, these provisions create both a regulatory inspection right and a documentation obligation — the entity must maintain written policies describing AI tool usage and oversight, and must make the AI tool itself available for Commissioner inspection.
8. Is open to inspection for audit or compliance review by the Insurance Commissioner; 9. Contains disclosures pertaining to the use and oversight of the artificial intelligence tool in the written policies and procedures;
Pending 2026-10-06
HC-01.6
35 Pa.C.S. § 3503(b)(1)
Plain Language
When a healthcare facility uses AI-based algorithms for clinical decision making, the AI must not supersede the health care provider's clinical judgment. The provider retains ultimate decision-making authority over patient care, including gathering information, diagnosing, and planning treatments. This is a continuous operating requirement that applies to each instance of AI use in clinical decisions.
(b) Requirements for artificial intelligence-based algorithms.--For each instance in which a facility uses artificial intelligence-based algorithms for clinical decision making, the facility shall comply with the following: (1) The artificial intelligence-based algorithms must not supersede health care provider clinical decision making.
Pending 2026-10-06
HC-01.6
35 Pa.C.S. § 3502(a)
Plain Language
Facilities must disclose to patients when AI-based algorithms are or will be used for clinical decision making or similar tasks. The disclosure must appear in all related written communications and be posted on the facility's public website. The Department of Health will determine the specific nature and frequency of disclosures. This is a general AI-use disclosure obligation — distinct from the per-communication AI-generated content labeling in § 3502(b).
(a) Duty to disclose.--A facility shall disclose to patients of the facility if artificial intelligence-based algorithms are or will be used for clinical decision making or other similar tasks. The disclosure shall be: (1) Provided in all related written communications. (2) Posted on the publicly accessible Internet website of the facility.
Pending 2026-10-06
HC-01.6
35 Pa.C.S. § 3502(b)(1)-(2)
Plain Language
When a facility uses AI to generate patient communications containing clinical information, each such communication must include a clear disclaimer that it was AI-generated and instructions for contacting a human provider. Two exemptions apply: communications limited to administrative matters (scheduling, billing, etc.) and communications that have been individually read and reviewed by a human health care provider. The human-review exemption effectively means that once a provider personally reviews and approves an AI-drafted clinical communication, no AI disclaimer is required.
(b) Communications.-- (1) A facility that uses artificial intelligence to generate written or verbal patient communications pertaining to patient clinical information shall include: (i) A clear and conspicuous disclaimer that indicates that the communication was generated by artificial intelligence. (ii) Clear instructions on how the patient may contact a human health care provider or relevant employee of the facility with questions. (2) The requirements under paragraph (1) shall not apply to communications that: (i) only pertain to administrative matters, including appointment scheduling, billing or other clerical or business matters; or (ii) have been individually read and reviewed by a human health care provider.
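The disclaimer logic of § 3502(b) is a short decision tree: the requirement attaches only to AI-generated clinical communications, and either exemption switches it off. The function below is an illustrative sketch; the parameter names are assumptions about how a facility's messaging system might flag each communication.

```python
def needs_ai_disclaimer(ai_generated: bool,
                        administrative_only: bool,
                        human_reviewed: bool) -> bool:
    """Decide whether a patient communication needs the § 3502(b)(1)
    AI disclaimer and human-contact instructions."""
    if not ai_generated:
        return False          # requirement reaches only AI-generated communications
    if administrative_only:
        return False          # § 3502(b)(2)(i): scheduling, billing, clerical matters
    if human_reviewed:
        return False          # § 3502(b)(2)(ii): individually read and reviewed by a provider
    return True
```

The human-review branch reflects the safe harbor described above: a clinician's individual review of the AI draft removes the disclaimer obligation for that communication.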
Pending 2026-10-06
HC-01.6
40 Pa.C.S. § 5203(b)(3)
Plain Language
When an insurer uses AI-based algorithms in the utilization review process, the AI must not supersede the decision making of the health care provider conducting the utilization review. The reviewing provider retains independent clinical judgment authority. This parallels the facility-level obligation in Chapter 35 but applies specifically to the insurer's utilization review context.
(b) Requirements for artificial intelligence-based algorithms.--For each instance in which an insurer uses artificial intelligence-based algorithms in the utilization review process regarding a covered person, the insurer shall comply with the following: ... (3) The artificial intelligence-based algorithms must not supersede decision making of the health care provider conducting the utilization review.
Pending 2026-10-06
HC-01.3
40 Pa.C.S. § 5203(b)(1)-(2)
Plain Language
Insurers' AI algorithms used in utilization review must base determinations on the individual covered person's medical history, individual clinical and nonclinical circumstances as presented by the requesting provider, and other relevant information in the patient's clinical record. The AI may not base a determination solely on a group data set. This prevents insurers from using AI to decide coverage on aggregate population data alone, without weighing the individual patient's circumstances.
(1) The artificial intelligence-based algorithms must base a determination on all of the following: (i) The medical or other clinical history of the covered person. (ii) Individual clinical or nonclinical circumstances as presented by the requesting health care provider. (iii) Other relevant clinical or nonclinical information contained in the medical or other clinical record of the covered person. (2) The artificial intelligence-based algorithms must not base a determination solely on a group data set.
Pending 2026-10-06
HC-01.6
40 Pa.C.S. § 5205
Plain Language
Before an insurer can deny, reduce, or terminate healthcare benefits — including denying a prior authorization — the health care provider conducting utilization review on behalf of the insurer must: review individual clinical records and relevant information, document that review, and exercise independent judgment separate from any AI recommendations. This is a mandatory pre-action human review requirement: a qualified human must affirmatively review and independently decide before any adverse determination takes effect. The provider may not simply ratify the AI output — they must exercise independent clinical judgment.
§ 5205. Health care provider requirements. Prior to issuing or upholding a decision to deny, reduce or terminate benefits for a health care service, including a decision to deny a prior authorization request, a health care provider who participates in utilization review on behalf of an insurer shall: (1) Review individual clinical records and other relevant information. (2) Document the review under paragraph (1). (3) Based on the review under paragraph (1), exercise judgment independent of any recommendations by the artificial intelligence-based algorithms.
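The three numbered steps of § 5205 map naturally onto a pre-action checklist: an adverse determination should not issue while any step remains incomplete. The sketch below is a hypothetical encoding, not the statute's own mechanism; the `UtilizationReview` fields are invented names for facts an insurer's case system might track.

```python
from dataclasses import dataclass


@dataclass
class UtilizationReview:
    """Hypothetical record of the reviewing provider's pre-action steps."""
    records_reviewed: bool       # (1) individual clinical records and other information
    review_documented: bool      # (2) documentation of that review
    independent_judgment: bool   # (3) judgment independent of the AI recommendation


def missing_5205_steps(review: UtilizationReview) -> list[str]:
    """Return the § 5205 steps still outstanding; empty means the
    adverse determination may proceed to issuance."""
    steps = []
    if not review.records_reviewed:
        steps.append("(1) review individual clinical records")
    if not review.review_documented:
        steps.append("(2) document the review")
    if not review.independent_judgment:
        steps.append("(3) exercise judgment independent of the AI recommendation")
    return steps
```

Because step (3) forbids merely ratifying the AI output, a real system would need evidence of independent judgment (e.g., a reviewer-authored rationale), not just a checkbox.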
Pending 2026-10-06
HC-01.6
40 Pa.C.S. § 5202(a)-(b)
Plain Language
Insurers must disclose to both network providers and all covered persons when AI-based algorithms are or will be used in their utilization review process. This disclosure must also be posted on the insurer's public website. The Insurance Department will determine the specific nature and frequency of disclosure to covered persons. This ensures both providers and patients know when AI is being used to inform coverage decisions.
§ 5202. Disclosure. (a) Duty to disclose.--An insurer shall disclose to a participating network provider and all covered persons if artificial intelligence-based algorithms are or will be used in the utilization review process of the insurer. (b) Posting.--An insurer shall post the information about the use of artificial intelligence-based algorithms in the utilization review process of the insurer on the publicly accessible Internet website of the insurer.
Pending 2026-10-06
HC-01.6
40 Pa.C.S. § 5302(a)-(b)
Plain Language
MA/CHIP managed care plans must disclose to network providers and all enrollees when AI-based algorithms are or will be used in utilization review. The disclosure must also be posted on the plan's public website. This parallels the insurer disclosure obligation in Chapter 52 but applies to Medicaid and CHIP managed care plans supervised by the Department of Human Services.
§ 5302. Disclosure. (a) Duty to disclose.--An MA or CHIP managed care plan shall disclose to a participating network provider and all enrollees if artificial intelligence-based algorithms are or will be used in the utilization review process of the MA or CHIP managed care plan. (b) Posting.--An MA or CHIP managed care plan shall post the information about the use of artificial intelligence-based algorithms in the utilization review process of the MA or CHIP managed care plan on the publicly accessible Internet website of the MA or CHIP managed care plan.
Pending 2026-10-06
HC-01.3
40 Pa.C.S. § 5303(b)(1)-(2)
Plain Language
MA/CHIP managed care plans' AI algorithms used in utilization review must base determinations on the individual enrollee's medical history, individual clinical and nonclinical circumstances presented by the requesting provider, and other relevant information in the patient's record. The AI may not rely solely on group data sets. This ensures individualized consideration of each enrollee's circumstances in AI-assisted coverage decisions.
(b) Requirements for artificial intelligence-based algorithms.--For each instance in which a MA or CHIP managed care plan uses artificial intelligence-based algorithms in the utilization review process regarding an enrollee, the MA or CHIP managed care plan shall comply with the following: (1) The artificial intelligence-based algorithms must base a determination on all of the following: (i) The medical or other clinical history of the enrollee. (ii) Individual clinical or nonclinical circumstances as presented by the requesting health care provider. (iii) Other relevant clinical or nonclinical information contained in the medical or other clinical record of the enrollee. (2) The artificial intelligence-based algorithms must not base a determination solely on a group data set.
Pending 2026-10-06
HC-01.6
40 Pa.C.S. § 5303(b)(3)
Plain Language
When an MA/CHIP managed care plan uses AI in utilization review, the AI must not supersede the clinical decision making of the reviewing health care provider. This parallels the identical requirement for insurers under Chapter 52 and facilities under Chapter 35.
(3) The artificial intelligence-based algorithms must not supersede decision making of the health care provider conducting the utilization review.
Pending 2026-10-06
HC-01.6
40 Pa.C.S. § 5305
Plain Language
Before an MA/CHIP managed care plan can deny, reduce, or terminate benefits — including denying a prior authorization — the reviewing health care provider must review individual clinical records, document that review, and exercise independent judgment separate from AI recommendations. This is a mandatory pre-action human review requirement identical to the insurer obligation under § 5205.
§ 5305. Health care provider requirements. Prior to issuing or upholding a decision to deny, reduce or terminate benefits for a health care service, including a decision to deny a prior authorization request, a health care provider who participates in utilization review on behalf of an MA or CHIP managed care plan shall: (1) Review individual clinical records and other relevant information. (2) Document the review under paragraph (1). (3) Based on the review under paragraph (1), exercise judgment independent of any recommendations by the artificial intelligence-based algorithms.
Pending 2027-01-09
HC-01.6
35 Pa.C.S. § 3503(b)(1)
Plain Language
When a facility uses AI-based algorithms for clinical decision making, the algorithms must not supersede the health care provider's own clinical judgment. The human provider retains ultimate authority over patient care decisions involving gathering information, diagnosing, and planning treatments. This is a human-override requirement ensuring AI remains a support tool rather than the final decision-maker in clinical contexts.
(1) The artificial-intelligence-based algorithms must not supersede health care provider clinical decision making.
Pending 2027-01-09
HC-01.6
40 Pa.C.S. § 5203(b)(3)
Plain Language
When an insurer uses AI-based algorithms in utilization review, those algorithms must not supersede the judgment of the health care provider conducting the review. The human reviewer retains final decision-making authority over utilization review determinations.
(3) The artificial-intelligence-based algorithms must not supersede decision making of the health care provider conducting the utilization review.
Pending 2027-01-09
HC-01.1, HC-01.2
40 Pa.C.S. § 5205
Plain Language
Before an insurer's utilization review provider issues or upholds any adverse benefit determination (denial, reduction, or termination of a health care service, including prior authorization denials), the reviewing provider must independently review the individual patient's clinical records, document that review, and exercise independent clinical judgment separate from any AI recommendations. This goes beyond simply requiring human oversight — it mandates documented, individualized clinical review as a prerequisite to any adverse action.
Prior to issuing or upholding a decision to deny, reduce or terminate benefits for a health care service, including a decision to deny a prior authorization request, a health care provider who participates in utilization review on behalf of an insurer shall: (1) Review individual clinical records and other relevant information. (2) Document the review under paragraph (1). (3) Based on the review under paragraph (1), exercise judgment independent of any recommendations by the artificial-intelligence-based algorithms.
Pending 2027-01-09
HC-01.3
40 Pa.C.S. § 5203(b)(1)-(2)
Plain Language
Insurers' AI-based algorithms used in utilization review must base each determination on the individual covered person's medical history, the clinical circumstances presented by the requesting provider, and other relevant information in the person's clinical record. Determinations may not be based solely on aggregate or group-level datasets — individual patient data must be considered in every case. This ensures individualized rather than population-level decision-making.
(1) The artificial-intelligence-based algorithms must base a determination on all of the following: (i) The medical or other clinical history of the covered person. (ii) Individual clinical or nonclinical circumstances as presented by the requesting health care provider. (iii) Other relevant clinical or nonclinical information contained in the medical or other clinical record of the covered person. (2) The artificial-intelligence-based algorithms must not base a determination solely on a group data set.
Pending 2027-01-09
HC-01.6
40 Pa.C.S. § 5303(b)(3)
Plain Language
When an MA or CHIP managed care plan uses AI-based algorithms in utilization review, those algorithms must not supersede the judgment of the health care provider conducting the review. The human reviewer retains final decision-making authority.
(3) The artificial-intelligence-based algorithms must not supersede decision making of the health care provider conducting the utilization review.
Pending 2027-01-09
HC-01.3
40 Pa.C.S. § 5303(b)(1)-(2)
Plain Language
MA or CHIP managed care plans' AI-based algorithms used in utilization review must base each determination on the individual enrollee's medical history, clinical circumstances from the requesting provider, and other relevant information in the enrollee's record. Determinations may not rest solely on group-level data. This mirrors the insurer requirement in Chapter 52 but applies specifically to Medicaid and CHIP managed care.
(1) The artificial-intelligence-based algorithms must base a determination on all of the following: (i) The medical or other clinical history of the enrollee. (ii) Individual clinical or nonclinical circumstances as presented by the requesting health care provider. (iii) Other relevant clinical or nonclinical information contained in the medical or other clinical record of the enrollee. (2) The artificial-intelligence-based algorithms must not base a determination solely on a group data set.
Pending 2027-01-09
HC-01.1, HC-01.2
40 Pa.C.S. § 5305
Plain Language
Before an MA or CHIP managed care plan's utilization review provider issues or upholds any adverse benefit determination, the reviewing provider must independently review individual clinical records, document the review, and exercise judgment independent of AI recommendations. This mirrors § 5205 for insurers but applies to MA/CHIP managed care plans.
Prior to issuing or upholding a decision to deny, reduce or terminate benefits for a health care service, including a decision to deny a prior authorization request, a health care provider who participates in utilization review on behalf of an MA or CHIP managed care plan shall: (1) Review individual clinical records and other relevant information. (2) Document the review under paragraph (1). (3) Based on the review under paragraph (1), exercise judgment independent of any recommendations by the artificial-intelligence-based algorithms.
Pending 2027-01-09
HC-01.6
35 Pa.C.S. § 3502(b)(1)-(2)
Plain Language
Facilities that use AI to generate written or verbal patient communications about clinical information must include a clear disclaimer that the communication was AI-generated, plus instructions on how to reach a human provider. These requirements do not apply to purely administrative communications (scheduling, billing) or to communications that a human provider has individually read and reviewed before sending. The human-review exemption creates a safe harbor — if a clinician personally reviews the AI output, the disclaimer is not required.
(1) A facility that uses artificial intelligence to generate written or verbal patient communications pertaining to patient clinical information shall include: (i) A clear and conspicuous disclaimer that indicates that the communication was generated by artificial intelligence. (ii) Clear instructions on how the patient may contact a human health care provider or relevant employee of the facility with questions. (2) The requirements under paragraph (1) shall not apply to communications that: (i) only pertain to administrative matters, including appointment scheduling, billing or other clerical or business matters; or (ii) have been individually read and reviewed by a human health care provider.
Pending 2027-01-09
HC-01.6
35 Pa.C.S. § 3502(a)
Plain Language
Facilities must disclose to patients when AI-based algorithms are or will be used for clinical decision making. The disclosure must appear in all related written communications and be posted on the facility's public website. The Department of Health will determine the specific nature and frequency of disclosure requirements. This general disclosure obligation ensures patients know AI is involved in their care, and is distinct from the per-communication disclaimer requirement in § 3502(b).
(a) Artificial-intelligence-based algorithms.--A facility shall disclose to patients of the facility if artificial-intelligence-based algorithms are or will be used for clinical decision making or other similar tasks. The disclosure shall be: (1) Provided in all related written communications. (2) Posted on the publicly accessible Internet website of the facility.
Pending 2027-01-09
HC-01.6
40 Pa.C.S. § 5202(a)-(b)
Plain Language
Insurers must disclose to both participating network providers and all covered persons that AI-based algorithms are or will be used in their utilization review process. This disclosure must also be posted on the insurer's public website. The Insurance Department will determine the specific nature and frequency of disclosure requirements to covered persons.
(a) Artificial-intelligence-based algorithms.--An insurer shall disclose to a participating network provider and all covered persons if artificial-intelligence-based algorithms are or will be used in the utilization review process of the insurer. (b) Posting.--An insurer shall post the information about the use of artificial-intelligence-based algorithms in the utilization review process of the insurer on the publicly accessible Internet website of the insurer.
Pending 2027-01-09
HC-01.6
40 Pa.C.S. § 5302(a)-(b)
Plain Language
MA or CHIP managed care plans must disclose to participating network providers and all enrollees that AI-based algorithms are or will be used in the plan's utilization review process. The information must also be posted on the plan's public website. The Department of Human Services will determine the specific nature and frequency of disclosure requirements.
(a) Artificial-intelligence-based algorithms.--An MA or CHIP managed care plan shall disclose to a participating network provider and all enrollees if artificial-intelligence-based algorithms are or will be used in the utilization review process of the MA or CHIP managed care plan. (b) Posting.--An MA or CHIP managed care plan shall post the information about the use of artificial-intelligence-based algorithms in the utilization review process of the MA or CHIP managed care plan on the publicly accessible Internet website of the MA or CHIP managed care plan.
Pending 2026-01-21
HC-01.1, HC-01.2
R.I. Gen. Laws § 27-84-4(a)
Plain Language
When AI makes or substantially contributes to a non-administrative adverse benefit determination regarding medically necessary care, a licensed provider with the same license status as the ordering provider must review and approve the determination before it is finalized. The reviewing provider must document their rationale in the enrollee's case record. This is a mandatory human-in-the-loop requirement — AI cannot serve as the sole or final decision-maker for clinical denials. The remedy for non-compliance is automatic reversal of the adverse determination, creating a strong structural incentive for compliance. Note that this applies only to non-administrative determinations (those requiring medical judgment), not to administrative determinations like eligibility or covered-benefit decisions.
Any non-administrative adverse benefit determination where an artificial intelligence system made, or was a substantial factor in making, that determination regarding medically necessary care shall be reviewed and approved by a provider with the same license status of the ordering professional provider before being finalized, with documentation of their rationale included in the enrollee's case record. Failure to follow the requirements set forth in this subsection shall result in reversal of the non-administrative adverse determination.
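The Rhode Island provision is unusual in carrying a built-in remedy: a non-compliant determination is reversed rather than merely sanctioned. That conditional structure can be sketched as a single outcome function. All parameter names below are assumptions for illustration; in particular, "same license status" is modeled as a string comparison, which is a simplification of how license equivalence would actually be assessed.

```python
def ri_determination_outcome(ai_substantial_factor: bool,
                             non_administrative: bool,
                             reviewer_license: str,
                             ordering_license: str,
                             rationale_documented: bool) -> str:
    """Return 'stands' if the adverse determination may be finalized,
    or 'reversed' if § 27-84-4(a) requires automatic reversal."""
    if not (ai_substantial_factor and non_administrative):
        return "stands"      # the subsection does not reach this determination
    same_license = reviewer_license == ordering_license
    if same_license and rationale_documented:
        return "stands"      # compliant human review occurred before finalization
    return "reversed"        # statutory remedy for non-compliance
```

The first branch reflects the scope limits noted above: administrative determinations, and decisions where AI played no substantial role, fall outside the subsection entirely.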
Pending 2026-01-09
HC-01.1, HC-01.2
R.I. Gen. Laws § 27-84-4(a)
Plain Language
When AI makes or substantially contributes to a non-administrative adverse benefit determination involving medically necessary care, a licensed provider with the same license status as the ordering provider must review and approve the determination before it is finalized. The reviewing provider must document their rationale in the enrollee's case record. This is a mandatory human-in-the-loop requirement — the AI determination cannot take effect without affirmative human clinical review. The penalty for non-compliance is automatic reversal of the adverse determination, creating a strong structural incentive for compliance. This applies only to non-administrative determinations (those involving medical judgment or clinical criteria), not to administrative determinations like eligibility or covered-benefit questions.
Any non-administrative adverse benefit determination where an artificial intelligence system made, or was a substantial factor in making, that determination regarding medically necessary care shall be reviewed and approved by a provider with the same license status of the ordering professional provider before being finalized, with documentation of their rationale included in the enrollee's case record. Failure to follow the requirements set forth in this subsection shall result in reversal of the non-administrative adverse determination.
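The review-and-reversal logic described above can be sketched as a finalization gate. This is an illustrative sketch, not statutory language: all names (`ClinicianReview`, `finalize_determination`, the field names) are assumptions introduced for the example.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ClinicianReview:
    reviewer_license: str     # license status, e.g. "MD" (illustrative)
    rationale: Optional[str]  # must be documented in the case record
    approved: bool

def finalize_determination(is_adverse: bool, is_administrative: bool,
                           ai_substantial_factor: bool,
                           ordering_license: str,
                           review: Optional[ClinicianReview]) -> str:
    """Gate an AI-assisted determination per the human-review requirement.

    Returns "finalized" when the determination may take effect, or
    "reversed" when the automatic-reversal remedy would apply.
    """
    # The requirement reaches only non-administrative adverse determinations
    # in which AI made, or was a substantial factor in, the decision.
    if not (is_adverse and not is_administrative and ai_substantial_factor):
        return "finalized"
    # A provider with the same license status must approve, with a
    # documented rationale, before the determination is finalized.
    if (review is not None and review.approved
            and review.reviewer_license == ordering_license
            and review.rationale):
        return "finalized"
    return "reversed"
```

The design point is that reversal is the default path for covered determinations: absent a matching-license approval with documented rationale, the adverse determination simply does not survive.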
Pending 2026-01-09
HC-01.7
R.I. Gen. Laws § 27-84-5(a)
Plain Language
OHIC and DBR are jointly empowered to promulgate rules and regulations to implement the chapter. While this is primarily a rulemaking delegation, it has practical compliance significance: insurers should anticipate that OHIC/DBR will issue implementing regulations that may add specificity to the disclosure, documentation, and clinical review requirements. Insurers should monitor OHIC/DBR rulemaking proceedings and be prepared to comply with additional requirements beyond the statutory text.
OHIC, in collaboration with DBR, shall promulgate rules and regulations that may be necessary to effectuate the purposes and implementation of this chapter.
Pending 2026-07-01
HC-01.3
Section 1(1)-(2)
Plain Language
Health carriers using AI, algorithms, or other software tools for utilization review — whether directly or through contracted entities — must ensure that each determination is based on the individual patient's medical history, individual clinical circumstances as presented by the requesting provider, and other relevant clinical information in the patient's record. The tools may not base determinations solely on aggregate or group-level datasets. This applies to both the carrier's own tools and those of any entity with which the carrier contracts for utilization review.
Any health carrier that makes determinations or provides advice about third-party payment for any health care services using an artificial intelligence, algorithm, or other software tool, for the purpose of utilization review and any health carrier that contracts with or otherwise works through an entity that uses an artificial intelligence, algorithm, or other software tool, for the purpose of utilization review, shall ensure the following: (1) The artificial intelligence, algorithm, or other software tool bases its determination on the following information, as applicable: (a) A patient's medical or other clinical history; (b) Individual clinical circumstances, as presented by the requesting provider; and (c) Other relevant clinical information contained in the patient's medical or other clinical record; (2) The artificial intelligence, algorithm, or other software tool does not base its determination solely on a group dataset;
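The individualized-data rule above amounts to an input-validation constraint on the tool: patient-specific clinical data must be present, and a group dataset may inform but never be the sole basis. A minimal sketch, with field names that are illustrative rather than statutory:

```python
def validate_ur_inputs(inputs: dict) -> list[str]:
    """Flag utilization-review tool inputs that would violate the
    individualized-data requirement. Returns a list of problems
    (empty means the inputs pass this check)."""
    problems = []
    # At least one patient-specific clinical source must be supplied:
    # medical history, the requesting provider's clinical circumstances,
    # or other relevant information from the patient's record.
    individual = [f for f in ("medical_history",
                              "clinical_circumstances",
                              "clinical_record") if inputs.get(f)]
    if not individual:
        problems.append("no individual clinical data supplied")
        if inputs.get("group_dataset"):
            problems.append("determination rests solely on a group dataset")
    return problems
```

Note the asymmetry: a group dataset alongside individual clinical data is permissible; a group dataset alone is not.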
Pending 2026-07-01
Section 1(3)-(4)
Plain Language
Health carriers must ensure their AI utilization review tools are applied equally and consistently for all patients and across all subscriber groups and individuals covered by a health benefit plan. The tools must be configured so that patients with similar clinical presentations receive the same decisions, and the tools must comply with applicable HHS regulations and guidance. This is a non-discrimination and consistency requirement specific to healthcare AI utilization review that has no precise sub-obligation match in the taxonomy.
(3) The artificial intelligence, algorithm, or other software tool is applied equally for all patients, including in accordance with any applicable regulations and guidance issued by the United States Department of Health and Human Services; and (4) The artificial intelligence, algorithm, or other software tool is configured and applied in a standard consistent manner for all subscriber groups and individuals covered by a health benefit plan, as defined in § 58-17-66, so that the resulting decisions are the same for all patients with similar clinical presentations and considerations.
Pending 2026-07-01
HC-01.1HC-01.2
Section 2
Plain Language
AI tools used for utilization review are categorically prohibited from independently denying, delaying, or modifying determinations about health care services. Every adverse determination must be made by a licensed physician or a licensed healthcare professional with competence in the specific clinical area at issue. That human reviewer must consider the requesting provider's recommendation, the patient's medical history, and the individual clinical circumstances before making the adverse determination. This is an absolute prohibition on AI-only adverse decisions — not merely a requirement for human review upon appeal.
An artificial intelligence, algorithm, or other software tool used for the purpose of utilization review pursuant to section 1 of this Act may not deny, delay, or modify a determination to provide health care services. Any adverse determination may be made only by a licensed physician or a licensed healthcare professional competent to evaluate the specific clinical issues involved in the requested services, and only after reviewing and considering the requesting provider's recommendation, the patient's medical or other clinical history as applicable, and individual clinical circumstances.
Passed 2025-09-01
HC-01.1
Insurance Code § 4201.156(a), (c)
Plain Language
Utilization review agents are categorically prohibited from using any automated decision system — including AI-based algorithms — to make adverse determinations in whole or in part. This means no algorithm or AI tool may serve as even a partial basis for denying, delaying, modifying, or concluding that health care services are not medically necessary, not appropriate, or experimental/investigational. The prohibition goes beyond HC-01.1's typical requirement that a human must independently affirm adverse determinations; here, automated systems may not participate in the adverse determination at all. Automated systems remain permissible for administrative support and fraud-detection functions. Applies only to utilization review conducted for health benefit plans delivered, issued, or renewed on or after January 1, 2026.
(a) A utilization review agent may not use an automated decision system to make, wholly or partly, an adverse determination. (c) This section does not prohibit the use of an algorithm, artificial intelligence system, or automated decision system for administrative support or fraud-detection functions.
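The Texas rule splits cleanly into a prohibited use (any contribution to an adverse determination) and a safe harbor (administrative support and fraud detection). A hedged sketch of that routing, with purpose labels invented for illustration:

```python
# Illustrative purpose labels; the statute names these functions but does
# not define an enumeration.
PERMITTED_AUTOMATED_USES = {"administrative_support", "fraud_detection"}

def may_use_automated_system(purpose: str,
                             contributes_to_adverse: bool) -> bool:
    """Sketch of the rule in Insurance Code § 4201.156: an automated
    decision system may never contribute, wholly or partly, to an adverse
    determination, but remains usable for administrative support and
    fraud-detection functions."""
    if contributes_to_adverse:
        # Absolute bar: no partial participation in adverse determinations.
        return False
    # Conservatively allow only the uses the statute expressly preserves.
    return purpose in PERMITTED_AUTOMATED_USES
```

Treating the safe harbor as an allowlist is a conservative reading; the statute's subsection (c) is phrased as a non-prohibition rather than an exhaustive list.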
Passed 2025-09-01
HC-01.7
Insurance Code § 4201.156(b)
Plain Language
The Commissioner of Insurance has unrestricted authority to audit and inspect a utilization review agent's use of any automated decision system in the utilization review process at any time — no advance notice, scheduled cadence, or triggering event is required. This means utilization review agents must maintain their automated decision systems and associated documentation in a state of audit readiness at all times. Note that while § 4201.156(a) prohibits automated systems from making adverse determinations, this audit provision covers the use of automated systems in utilization review generally, including permissible uses such as administrative support and fraud detection.
(b) The commissioner may audit and inspect at any time a utilization review agent's use of an automated decision system for utilization review.
Passed 2025-09-01
HC-01.8
Insurance Code § 4201.303(a)(1)-(4)
Plain Language
When issuing an adverse determination, the utilization review agent must provide notice to the enrollee that includes: (1) the principal reasons for the adverse determination, (2) the clinical basis, (3) both a description of and the source of the screening criteria and review procedures used, and (4) the complaint and appeal process, including the right to independent review. The amendment to subdivision (3) changes 'or' to 'and': previously either a description or the source of the screening criteria was sufficient, but both are now required. The addition of 'review procedures' alongside 'screening criteria' means that if any automated tools or processes were used in the review (for permissible purposes), those procedures must be described.
(a) Notice of an adverse determination must include: (1) the principal reasons for the adverse determination; (2) the clinical basis for the adverse determination; (3) a description of and the source of the screening criteria and review procedures used as guidelines in making the adverse determination; and (4) a description of the procedure for the complaint and appeal process, including notice to the enrollee of the enrollee's right to appeal an adverse determination to an independent review organization and of the procedures to obtain that review.
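The four notice elements read as a completeness checklist, which can be sketched as a simple validation. The dictionary keys are assumptions for the example, not statutory terms; note that subdivision (3) contributes two entries because both the description and the source are now required.

```python
REQUIRED_NOTICE_ELEMENTS = (
    "principal_reasons",       # (1) principal reasons for the determination
    "clinical_basis",          # (2) clinical basis
    "criteria_description",    # (3) description of screening criteria and
    "criteria_source",         #     review procedures, AND their source
    "appeal_process",          # (4) complaint/appeal process, incl. IRO rights
)

def missing_notice_elements(notice: dict) -> list[str]:
    """Return the § 4201.303(a) elements absent from an
    adverse-determination notice (empty list means complete)."""
    return [e for e in REQUIRED_NOTICE_ELEMENTS if not notice.get(e)]
```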
Pending 2026-07-01
HC-01.7
§ 38.2-3407.15(B)(15)(i)-(ii)
Plain Language
Health carriers that use AI to manage insurance claims and coverage must publicly disclose to the Bureau of Insurance the details of that AI use, including the underlying algorithms, data used, and resulting determinations. Additionally, carriers must submit to the Bureau upon request all information — including documents and software — necessary for enforcement. This creates both an affirmative proactive disclosure obligation and a responsive production obligation to the regulator.
Each carrier shall (i) publicly disclose, if applicable, to the Bureau the carrier's use of AI to manage insurance claims and coverage, including in underlying algorithms, data used, and resulting determinations; (ii) submit to the Bureau, upon request, all information, including documents and software, necessary for enforcement of this subdivision;
Pending 2026-07-01
HC-01.6HC-01.8
§ 38.2-3407.15(B)(15)(iv)
Plain Language
When a health carrier uses AI to issue an adverse determination (e.g., a claim denial, coverage modification, or prior authorization denial), the carrier must notify both the affected enrollee and the health care provider that AI was used in making that determination. In addition, the carrier must provide a clear and timely appeal process for the adverse determination. This imposes two distinct obligations on the same triggering event: (1) disclosure that AI was involved, and (2) a functional appeal mechanism. The statute does not specify exactly what the notice must contain beyond the AI involvement disclosure, nor does it define the timeline for 'timely,' which the Commission may clarify by regulation.
Each carrier shall ... (iv) provide notice to enrollees and health care providers when AI has been used to issue an adverse determination and provide a clear and timely process for appealing the determination.
Pre-filed 2026-07-01
HC-01.3
18 V.S.A. § 9423(a)(1)-(2)
Plain Language
Health plans must ensure that any AI, algorithm, or software tool used in utilization review bases its determinations on the individual insured's medical history, the specific clinical circumstances presented by the requesting provider, and other relevant clinical information from the insured's record. The tool may not rely solely on group-level datasets — it must incorporate individualized clinical data. This mirrors requirements in other states (e.g., CA, IL) that prohibit AI-driven coverage decisions based exclusively on aggregate data rather than patient-specific circumstances.
(1) The artificial intelligence, algorithm, or other software tool bases its determination on the following information, as applicable: (A) an insured's medical or other clinical history; (B) the specific clinical circumstances as presented by the requesting health care provider; and (C) other relevant clinical information contained in the insured's medical or other clinical record. (2) The artificial intelligence, algorithm, or other software tool does not base its determination solely on a group dataset.
Pre-filed 2026-07-01
HC-01.1HC-01.2
18 V.S.A. § 9423(b)
Plain Language
AI tools may not independently deny, delay, or modify health care coverage determinations. Every adverse coverage decision must be made by a licensed human health care provider who is competent to evaluate the specific clinical issues at hand. That human reviewer must consider the treating provider's recommendation, the insured's medical and clinical history, and the specific clinical circumstances. This goes further than many comparable state laws, which bar AI only as the sole or primary basis for adverse determinations; here, a licensed human must make the adverse decision entirely.
The artificial intelligence, algorithm, or other software tool utilized by a health plan shall not deny, delay, or modify a determination of whether to authorize the coverage of health care services. An adverse coverage determination shall be made only by a licensed human health care provider who is competent to evaluate the specific clinical issues involved in the health care services requested by a treating health care provider by reviewing and considering the requesting provider's recommendation; the insured's medical or other clinical history, as appropriate; and the specific clinical circumstances.
Pre-filed 2026-07-01
HC-01.4
18 V.S.A. § 9423(a)(7)
Plain Language
Health plans must review and revise the performance, use, and outcomes of AI utilization review tools at least quarterly to maximize accuracy and reliability. This is a more frequent cadence than many comparable state laws, which typically require only periodic (often annual) review. The obligation is ongoing and requires affirmative revision — not merely passive monitoring.
(7) The artificial intelligence, algorithm, or other software tool's performance, use, and outcomes are reviewed and revised at least quarterly to maximize accuracy and reliability.
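The quarterly cadence is readily monitored as a date check. The 92-day window below is an illustrative reading of "at least quarterly"; the statute does not fix a day count, so the interval is an assumption of this sketch.

```python
from datetime import date, timedelta

def review_overdue(last_review: date, today: date,
                   max_interval: timedelta = timedelta(days=92)) -> bool:
    """Check the at-least-quarterly review-and-revise cadence for an AI
    utilization review tool. Returns True when a performance/use/outcomes
    review is overdue. The 92-day default approximates one quarter and is
    not drawn from the statute."""
    return today - last_review > max_interval
```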
Pre-filed 2026-07-01
HC-01.7
18 V.S.A. § 9423(a)(5)
Plain Language
Health plans must ensure that AI tools used in utilization review are accessible for inspection by the Department of Financial Regulation and other state agencies for audit or compliance review purposes. This is a regulatory transparency requirement — the AI tool itself, not just documentation about it, must be open to examination. This means health plans must contractually ensure access when they use third-party AI tools.
(5) The artificial intelligence, algorithm, or other software tool is open to inspection for audit or compliance reviews by the Department of Financial Regulation and by other State agencies and departments pursuant to applicable State and federal law.
Pre-filed 2026-07-01
HC-01.6
18 V.S.A. § 9423(a)(6)
Plain Language
Health plans must include in their written policies and procedures disclosures about the use of AI in utilization review and the nature and degree of human review and oversight, to the extent the Department of Financial Regulation requires. This is a conditional disclosure obligation — its specific scope will be determined by DFR rulemaking or guidance. At minimum, it signals that plans should be prepared to document and disclose their AI use and human oversight practices in their UR policies.
(6) Disclosures pertaining to the use of the artificial intelligence, algorithm, or other software tool in the utilization review process and the nature and degree of human review and oversight are contained in the health plan's written policies and procedures to the extent required by the Department of Financial Regulation.
Passed 2026-07-01
HC-01.1HC-01.2HC-01.3
18 V.S.A. § 9771(a)(1)-(2), (b)
Plain Language
Health plans using AI, algorithms, or software tools for utilization review must ensure those tools base determinations on individualized patient data — the individual's medical history, clinical circumstances presented by the provider, and other relevant clinical records — and may not rely solely on group-level datasets. Critically, the AI tool may not deny, delay, or modify health care services based on medical necessity; only a licensed human health care provider competent in the relevant clinical specialty may make medical necessity determinations, considering the requesting provider's recommendation and the patient's individual circumstances. This applies whether the health plan uses AI internally or contracts with a third-party entity. The obligation covers prospective, retrospective, and concurrent utilization review (§ 9771(c)).
(a) A health plan, as defined in section 9418 of this title, that uses an artificial intelligence, algorithm, or other software tool for the purpose of utilization review or utilization management functions, based in whole or in part on medical necessity, or that contracts with or otherwise works through an entity that uses artificial intelligence, algorithm, or other software tool for the purpose of utilization review or utilization management functions, based in whole or in part on medical necessity, shall ensure all of the following: (1) The artificial intelligence, algorithm, or other software tool bases its determination on the following information, as applicable: (A) a covered individual's medical or other clinical history; (B) the specific clinical circumstances as presented by the requesting health care provider; and (C) other relevant clinical information contained in the covered individual's medical or other clinical record. (2) The artificial intelligence, algorithm, or other software tool does not base its determination solely on a group dataset. (b) Notwithstanding subsection (a) of this section, the artificial intelligence, algorithm, or other software tool shall not deny, delay, or modify health care services based in whole or in part on medical necessity. A determination of medical necessity shall be made only by a licensed human health care provider who is competent to evaluate the specific clinical issues involved in the health care services requested by a treating health care provider by reviewing and considering the requesting provider's recommendation; the covered individual's medical or other clinical history, as appropriate; and the specific clinical circumstances.
Passed 2026-07-01
HC-01.4
18 V.S.A. § 9771(a)(9)
Plain Language
Health plans must periodically review and revise the performance, use, and outcomes of any AI, algorithm, or software tool used in utilization review to maximize accuracy and reliability. This is an ongoing operational obligation — not a one-time pre-deployment assessment.
(9) The artificial intelligence, algorithm, or other software tool's performance, use, and outcomes are periodically reviewed and revised to maximize accuracy and reliability.
Passed 2026-07-01
HC-01.5
18 V.S.A. § 9771(a)(10)
Plain Language
Patient data used by AI tools in utilization review must not be used beyond its intended and stated purpose. Compliance must be consistent with Vermont's health information technology chapter (18 V.S.A. ch. 42B) and with HIPAA privacy and security rules. This is a purpose limitation rule specific to healthcare AI data.
(10) Patient data is not used beyond its intended and stated purpose, consistent with chapter 42B of this title and with the security and privacy protections of 45 C.F.R. Part 160 and 45 C.F.R. Part 164, Subparts A and E, as applicable.
Passed 2026-07-01
HC-01.7
18 V.S.A. § 9771(a)(7)-(8)
Plain Language
Health plans must ensure their AI utilization review tools are open to inspection and audit by the Department of Financial Regulation and other state agencies. Plans must also include disclosures about AI use and oversight in their written policies and procedures to the extent the Department of Financial Regulation requires. This creates both a regulatory access obligation and a documentation/disclosure obligation.
(7) The artificial intelligence, algorithm, or other software tool is open to inspection for audit or compliance reviews by the Department of Financial Regulation and by other State agencies and departments pursuant to applicable State and federal law. (8) Disclosures pertaining to the use and oversight of the artificial intelligence, algorithm, or other software tool are contained in the health plan's written policies and procedures to the extent required by the Department of Financial Regulation.