HB-1139
CO · State · USA
● Pending
Proposed Effective Date
2027-01-01
Colorado HB 26-1139 — Concerning the Use of Artificial Intelligence in Health Care
Summary

Regulates the use of AI systems in health-care utilization review and prohibits payment for AI-delivered psychotherapy services. Applies to health insurance carriers, pharmacy benefit managers, private utilization review organizations, behavioral health administrative services organizations, and managed care entities that use AI for utilization review. Requires AI systems used in utilization review to base determinations on individual clinical data rather than solely group data, prohibits coverage denials based solely on AI output without human clinical review, and mandates documentation, audit logs, and periodic performance review. Prohibits carriers and Medicaid/CHP+ payers from covering psychotherapy services provided directly by an AI system. Requires written disclosures to applicable state regulatory agencies identifying AI utilization review functions, oversight processes, and audit procedures.

Enforcement & Penalties
Enforcement Authority
Enforcement authority is vested in the Division of Insurance (for carriers and private utilization review organizations), the Department of Human Services (for behavioral health administrative services organizations), and the Department of Health Care Policy and Financing (for managed care entities and Medicaid/CHP+ payers), as applicable to each covered entity type. The bill does not create a private right of action. Enforcement appears to be agency-initiated under each agency's existing regulatory authority.
Penalties
The bill does not specify monetary penalties, civil damages, or specific remedies. Enforcement remedies would flow from existing regulatory authority of the Division of Insurance and the applicable state departments over their respective regulated entities.
Who Is Covered
Compliance Obligations · 10 obligations
HC-01 Healthcare AI Decision Restrictions · HC-01.3 · Deployer · Healthcare
C.R.S. § 10-16-112.7(3)(a)-(b)
Plain Language
Entities using AI for utilization review must ensure the AI system bases its determinations on the individual patient's medical history, clinical circumstances as presented by the requesting provider, and other relevant clinical information from the patient's records. The system may not base determinations solely on group-level or aggregate data without reference to the individual's own data. This effectively requires individualized clinical assessment — AI tools cannot deny or approve coverage based on population-level patterns alone.
Statutory Text
(3) A PERSON DESCRIBED IN SUBSECTION (2) OF THIS SECTION THAT USES AN ARTIFICIAL INTELLIGENCE SYSTEM TO CONDUCT UTILIZATION REVIEW SHALL ENSURE THAT: (a) THE ARTIFICIAL INTELLIGENCE SYSTEM BASES ITS DETERMINATION ON THE FOLLOWING INFORMATION, AS APPLICABLE: (I) AN INDIVIDUAL'S MEDICAL OR OTHER CLINICAL HISTORY; (II) INDIVIDUAL CLINICAL CIRCUMSTANCES AS PRESENTED BY THE REQUESTING PROVIDER; AND (III) OTHER RELEVANT CLINICAL INFORMATION CONTAINED IN THE INDIVIDUAL'S MEDICAL OR OTHER CLINICAL RECORD; (b) THE ARTIFICIAL INTELLIGENCE SYSTEM DOES NOT BASE ITS DETERMINATIONS SOLELY ON GROUP DATA, WITHOUT REFERENCE TO THE INDIVIDUAL'S DATA;
HC-01 Healthcare AI Decision Restrictions · HC-01.1 · Deployer · Healthcare
C.R.S. § 10-16-112.7(5)(a)-(b)
Plain Language
AI may be used to assist with utilization review, including expedited approvals. However, a carrier may not issue a denial of coverage based in whole or in part on medical necessity solely on AI output. A licensed clinician, licensed physician, or other regulated professional competent to evaluate the specific clinical issues must review and approve any such denial. The human reviewer must also review the health benefit plan's terms of coverage for the requested service. This creates a mandatory human-in-the-loop requirement for adverse medical necessity determinations while permitting AI to drive approvals without the same gatekeeping.
Statutory Text
(5) (a) NOTWITHSTANDING SUBSECTION (3) OF THIS SECTION, AN ARTIFICIAL INTELLIGENCE SYSTEM MAY BE USED TO ASSIST WITH UTILIZATION REVIEW, INCLUDING EXPEDITED APPROVALS. (b) A CARRIER'S DENIAL OF COVERAGE BASED IN WHOLE OR IN PART ON MEDICAL NECESSITY SHALL NOT BE ISSUED SOLELY ON THE OUTPUT OF AN ARTIFICIAL INTELLIGENCE SYSTEM WITHOUT HUMAN REVIEW AND APPROVAL OF THE DENIAL BY A LICENSED CLINICIAN, LICENSED PHYSICIAN, OR OTHER REGULATED PROFESSIONAL THAT IS COMPETENT TO EVALUATE THE SPECIFIC CLINICAL ISSUES INVOLVED IN THE HEALTH-CARE SERVICES REQUESTED BY THE PROVIDER AND A REVIEW OF THE HEALTH BENEFIT PLAN'S TERMS OF COVERAGE FOR THE HEALTH-CARE SERVICE.
HC-01 Healthcare AI Decision Restrictions · HC-01.4 · Deployer · Healthcare
C.R.S. § 10-16-112.7(3)(f)
Plain Language
Covered entities must periodically review the performance, use, and outcomes of AI systems used in utilization review to maximize accuracy and reliability. This is an ongoing operational review obligation — not a one-time pre-deployment assessment. The bill does not specify a review frequency, leaving it to the entity to determine what 'periodically' means in context.
Statutory Text
(f) THE ARTIFICIAL INTELLIGENCE SYSTEM'S PERFORMANCE, USE, AND OUTCOMES ARE PERIODICALLY REVIEWED TO MAXIMIZE ACCURACY AND RELIABILITY;
HC-01 Healthcare AI Decision Restrictions · HC-01.5 · Deployer · Healthcare
C.R.S. § 10-16-112.7(3)(g)
Plain Language
Health data used by AI systems in utilization review must not be used beyond its intended or stated purpose. This purpose-limitation obligation tracks HIPAA and applicable state health privacy law while creating an independent statutory duty specific to the utilization review AI context.
Statutory Text
(g) AN INDIVIDUAL'S HEALTH DATA IS NOT USED BEYOND ITS INTENDED OR STATED PURPOSE, CONSISTENT WITH APPLICABLE STATE AND FEDERAL LAWS;
G-01 AI Governance Program & Documentation · G-01.3 · G-01.4 · Deployer · Healthcare
C.R.S. § 10-16-112.7(3)(e)
Plain Language
Covered entities must ensure their AI utilization review systems produce and retain documentation, audit logs, and model-governance records sufficient to demonstrate compliance with both this section and Section 10-3-1104.9 (which governs insurance company record-keeping). This is a contemporaneous documentation and retention obligation — the records must be generated as the system operates, not reconstructed after the fact, and must be maintained in a form available for regulatory inspection.
Statutory Text
(e) THE ARTIFICIAL INTELLIGENCE SYSTEM PRODUCES AND RETAINS DOCUMENTATION, AUDIT LOGS, AND MODEL-GOVERNANCE RECORDS IN ORDER TO DEMONSTRATE COMPLIANCE WITH THIS SECTION AND SECTION 10-3-1104.9;
H-02 Non-Discrimination & Bias Assessment · Deployer · Healthcare
C.R.S. § 10-16-112.7(3)(c)-(d)
Plain Language
Covered entities must ensure their AI utilization review systems do not discriminate against individuals in violation of any state or federal law and are applied fairly and equitably, including in compliance with HHS regulations and guidance. While this cross-references existing anti-discrimination frameworks rather than creating an independent bias-testing regime, it imposes an affirmative duty specific to the AI utilization review context: the entity must actively verify that the AI system's application does not produce discriminatory results.
Statutory Text
(c) THE ARTIFICIAL INTELLIGENCE SYSTEM IS NOT USED IN ANY WAY THAT DISCRIMINATES AGAINST INDIVIDUALS IN VIOLATION OF OTHER STATE OR FEDERAL LAWS; (d) THE ARTIFICIAL INTELLIGENCE SYSTEM IS FAIRLY AND EQUITABLY APPLIED, INCLUDING IN ACCORDANCE WITH APPLICABLE REGULATIONS AND GUIDANCE ISSUED BY THE FEDERAL DEPARTMENT OF HEALTH AND HUMAN SERVICES;
R-02 Regulatory Disclosure & Submissions · R-02.1 · Deployer · Healthcare
C.R.S. § 10-16-112.7(4)(a)-(d)
Plain Language
Covered entities must submit written disclosures to their applicable state regulator — the Division of Insurance, Department of Human Services, or Department of Health Care Policy and Financing — identifying: which utilization review functions use AI, at what points in the process AI is deployed, the human oversight process including reviewer qualifications and whether a human must approve adverse determinations, and the process for maintaining audit records sufficient to demonstrate compliance. This is a proactive regulatory submission — entities must provide these disclosures without waiting for a request.
Statutory Text
(4) A PERSON DESCRIBED IN SUBSECTION (2) OF THIS SECTION SHALL PROVIDE WRITTEN DISCLOSURES TO THE DIVISION, THE DEPARTMENT OF HUMAN SERVICES, OR THE DEPARTMENT OF HEALTH CARE POLICY AND FINANCING, AS APPLICABLE, THAT IDENTIFY: (a) THE UTILIZATION REVIEW FUNCTIONS FOR WHICH THE ARTIFICIAL INTELLIGENCE SYSTEM WILL BE USED; (b) THE POINTS IN THE UTILIZATION REVIEW PROCESS WHEN THE ARTIFICIAL INTELLIGENCE SYSTEM IS USED; (c) THE HUMAN OVERSIGHT PROCESS, INCLUDING THE QUALIFICATIONS OF THE REVIEWER AND WHETHER THE A HUMAN MUST APPROVE AN ADVERSE DETERMINATION; AND (d) THE PROCESS FOR MAINTAINING AUDIT INFORMATION SUFFICIENT TO DEMONSTRATE COMPLIANCE WITH SUBSECTION (3) OF THIS SECTION.
HC-01 Healthcare AI Decision Restrictions · Deployer · Healthcare
C.R.S. § 10-16-112.7(6)(a)-(c)
Plain Language
Health insurance carriers may not provide coverage for psychotherapy services that are delivered directly to an individual by an AI system. This effectively prohibits billing for AI-conducted psychotherapy through private insurance. The prohibition does not extend to non-therapeutic software tools (billing software, EHRs, video platforms) used incidentally by a human provider, nor does it treat videoconferencing or messaging platforms used to enable human supervision or consultation as AI-conducted services. The practical effect is that AI systems cannot be the direct provider of psychotherapy for reimbursement purposes — a human must deliver the therapeutic service.
Statutory Text
(6) (a) A CARRIER OFFERING A HEALTH BENEFIT PLAN ISSUED OR RENEWED IN THE STATE ON OR AFTER THE EFFECTIVE DATE OF THIS SECTION SHALL NOT PROVIDE COVERAGE FOR SERVICES THAT CONSTITUTE PSYCHOTHERAPY SERVICES, AS DEFINED IN SECTION 12-245-202 (14), THAT ARE PROVIDED DIRECTLY TO AN INDIVIDUAL AND THAT ARE CONDUCTED BY AN ARTIFICIAL INTELLIGENCE SYSTEM. (b) SUBSECTION (6)(a) OF THIS SECTION DOES NOT PROHIBIT THE USE OF BILLING SOFTWARE, ELECTRONIC HEALTH RECORDS, VIDEO PLATFORMS, OR OTHER NONTHERAPEUTIC SOFTWARE TOOLS INCIDENT TO SERVICES PROVIDED BY A HUMAN PROVIDER. (c) THE USE OF VIDEOCONFERENCING, MESSAGING PLATFORMS, OR OTHER COMMUNICATIONS SOFTWARE TO ENABLE SUPERVISION OR CONSULTATION BY A LICENSED, REGISTERED, OR CERTIFIED INDIVIDUAL DOES NOT CONSTITUTE SUPERVISION OR CONSULTATION THAT IS CONDUCTED BY AN ARTIFICIAL INTELLIGENCE SYSTEM, AS REFERENCED IN SUBSECTION (6)(a) OF THIS SECTION.
HC-01 Healthcare AI Decision Restrictions · Government · Healthcare
C.R.S. § 25.5-1-209
Plain Language
Payers of mental or behavioral health services under Medicaid (Colorado Medical Assistance Act) and the Children's Basic Health Plan (CHP+) may not pay for psychotherapy services that are provided directly to an individual by an AI system. This mirrors the private insurance prohibition in § 10-16-112.7(6) but applies to the public payer side — Medicaid and CHP+ — ensuring that AI-delivered psychotherapy cannot be reimbursed through any Colorado payment channel, public or private.
Statutory Text
A PAYER OF MENTAL OR BEHAVIORAL HEALTH-CARE SERVICES PROVIDED UNDER THE "COLORADO MEDICAL ASSISTANCE ACT", AS SPECIFIED IN ARTICLES 4, 5, AND 6 OF THIS TITLE 25.5, OR THE "CHILDREN'S BASIC HEALTH PLAN ACT", AS SPECIFIED IN ARTICLE 8 OF THIS TITLE 25.5, SHALL NOT PAY FOR SERVICES THAT CONSTITUTE PSYCHOTHERAPY SERVICES, AS DEFINED IN SECTION 12-245-202 (14), THAT ARE PROVIDED DIRECTLY TO AN INDIVIDUAL AND THAT ARE CONDUCTED BY AN ARTIFICIAL INTELLIGENCE SYSTEM, AS THAT TERM IS DEFINED IN SECTION 10-16-112.7 (1)(b).
Other · Healthcare
C.R.S. § 10-16-112.7(3)(h)
Plain Language
Covered entities must ensure that the criteria and guidelines used by AI systems in utilization review comply with all other applicable state and federal utilization review and coverage laws. This is a compliance pass-through — it confirms that using AI does not exempt entities from existing regulatory requirements but creates no new independent obligation beyond what those laws already impose.
Statutory Text
(h) THE ARTIFICIAL INTELLIGENCE SYSTEM'S OR ALGORITHM'S CRITERIA AND GUIDELINES COMPLY WITH OTHER APPLICABLE STATE OR FEDERAL LAWS CONCERNING UTILIZATION REVIEW AND COVERAGE FOR HEALTH-CARE SERVICES.