SB-25B-004
CO · State · USA
● Enacted
Effective Date
2026-06-30
Colorado SB 25B-004 — Concerning Measures Effective No Later Than June 30, 2026, to Increase Transparency for Algorithmic Systems
Summary

Amends Colorado's existing AI transparency law (SB 205, codified at C.R.S. §§ 6-1-1702 through 6-1-1707) to push the operative date from February 1, 2026 to June 30, 2026. Imposes obligations on developers and deployers of high-risk AI systems to use reasonable care to prevent algorithmic discrimination, provide documentation to deployers (model cards, dataset cards), publish public use case inventories, complete and maintain impact assessments, implement risk management programs, conduct annual deployment reviews, and report discovered algorithmic discrimination to the attorney general. Deployers must also disclose to consumers when they are interacting with an AI system. Enforcement is exclusively through the Colorado attorney general, with a rebuttable presumption of reasonable care for entities that comply with the statute and any AG-adopted rules.

Enforcement & Penalties
Enforcement Authority
The attorney general has exclusive enforcement authority pursuant to § 6-1-1706. The attorney general may request documentation from developers and deployers and evaluate it for compliance. There is a rebuttable presumption that a developer or deployer used reasonable care if it complied with the statute and any additional rules adopted by the attorney general pursuant to § 6-1-1707. No private right of action is created by this act.
Penalties
The bill itself does not specify monetary penalties, statutory damages, or remedy types. Enforcement is through the attorney general pursuant to § 6-1-1706, which is part of Colorado's broader consumer protection framework. Remedies available under that framework would apply.
Who Is Covered
A developer of a high-risk artificial intelligence system.
A deployer of a high-risk artificial intelligence system.
A third party contracted by the deployer.
A deployer or other developer that deploys, offers, sells, leases, licenses, gives, or otherwise makes available an artificial intelligence system that is intended to interact with consumers.
What Is Covered
A high-risk artificial intelligence system.
An artificial intelligence system that is intended to interact with consumers.
Compliance Obligations · 16 obligations
H-02 Non-Discrimination & Bias Assessment · H-02.1, H-02.3 · General Consumer App
C.R.S. § 6-1-1702(1)
Plain Language
Developers of high-risk AI systems must exercise reasonable care to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination arising from the system's intended and contracted uses. This is a general duty-of-care standard, not a checklist. However, developers receive a rebuttable presumption of compliance if they satisfy the specific obligations in this section plus any AG rules adopted under § 6-1-1707. The presumption is significant: once a developer demonstrates statutory compliance, the burden shifts to the attorney general to rebut the presumption of reasonable care.
Statutory Text
(1) On and after June 30, 2026, a developer of a high-risk artificial intelligence system shall use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination arising from the intended and contracted uses of the high-risk artificial intelligence system. In any enforcement action brought on or after June 30, 2026, by the attorney general pursuant to section 6-1-1706, there is a rebuttable presumption that a developer used reasonable care as required under this section if the developer complied with this section and any additional requirements or obligations as set forth in rules adopted by the attorney general pursuant to section 6-1-1707.
T-03 Training Data Disclosure · T-03.3 · General Consumer App
C.R.S. § 6-1-1702(2)-(3)(a)
Plain Language
Developers must provide deployers and downstream developers with the documentation and information — such as model cards, dataset cards, and other impact assessment materials — necessary for the deployer or its contracted third party to complete a required impact assessment under § 6-1-1703(3). This is a "to the extent feasible" obligation. The documentation must be provided at or before the point when the system is made available. This is developer-to-deployer disclosure, not public-facing.
Statutory Text
(2) On and after June 30, 2026, and except as provided in subsection (6) of this section, a developer of a high-risk artificial intelligence system shall make available to the deployer or other developer of the high-risk artificial intelligence system: (3) (a) Except as provided in subsection (6) of this section, a developer that offers, sells, leases, licenses, gives, or otherwise makes available to a deployer or other developer a high-risk artificial intelligence system on or after June 30, 2026, shall make available to the deployer or other developer, to the extent feasible, the documentation and information, through artifacts such as model cards, dataset cards, or other impact assessments, necessary for a deployer, or for a third party contracted by a deployer, to complete an impact assessment pursuant to section 6-1-1703 (3).
G-02 Public Transparency & Documentation · G-02.4 · General Consumer App
C.R.S. § 6-1-1702(4)(a)
Plain Language
Developers must publish on their website or in a public use case inventory a clear, readily available statement summarizing their high-risk AI systems. The specific content of this summary is defined in the original SB 205 § 6-1-1702(4)(a) (types of high-risk AI systems developed, how the developer manages known or foreseeable risks of algorithmic discrimination, etc.). This is a public-facing transparency obligation distinct from the deployer-facing documentation requirement.
Statutory Text
(4) (a) On and after June 30, 2026, a developer shall make available, in a manner that is clear and readily available on the developer's website or in a public use case inventory, a statement summarizing:
R-02 Regulatory Disclosure & Submissions · R-02.1 · General Consumer App
C.R.S. § 6-1-1702(5)
Plain Language
Developers must proactively disclose to the attorney general (in a prescribed form) and to all known deployers any known or reasonably foreseeable risks of algorithmic discrimination, within 90 days of discovering such risks. This is not a wait-to-be-asked obligation — it triggers on knowledge or reasonable foreseeability of discrimination risks. The 90-day clock runs from the triggering date specified in the original SB 205 provisions.
Statutory Text
(5) On and after June 30, 2026, a developer of a high-risk artificial intelligence system shall disclose to the attorney general, in a form and manner prescribed by the attorney general, and to all known deployers or other developers of the high-risk artificial intelligence system, any known or reasonably foreseeable risks of algorithmic discrimination arising from the intended uses of the high-risk artificial intelligence system without unreasonable delay but no later than ninety days after the date on which:
R-02 Regulatory Disclosure & Submissions · R-02.2 · General Consumer App
C.R.S. § 6-1-1702(7)
Plain Language
The attorney general may require developers to disclose documentation described in subsection (2) — including model cards, dataset cards, and related materials — within 90 days of the AG's request. The AG may evaluate these materials for compliance. Importantly, these disclosures are exempt from CORA (Colorado Open Records Act) and developers may designate materials as proprietary or trade secret. Attorney-client privilege and work-product protections are preserved. This on-demand regulatory disclosure power is separate from the proactive disclosure obligations in subsection (5).
Statutory Text
(7) On and after June 30, 2026, the attorney general may require that a developer disclose to the attorney general, no later than ninety days after the request and in a form and manner prescribed by the attorney general, the statement or documentation described in subsection (2) of this section. The attorney general may evaluate such statement or documentation to ensure compliance with this part 17, and the statement or documentation is not subject to disclosure under the "Colorado Open Records Act", part 2 of article 72 of title 24. In a disclosure made pursuant to this subsection (7), a developer may designate the statement or documentation as including proprietary information or a trade secret. To the extent that any information contained in the statement or documentation includes information subject to attorney-client privilege or work-product protection, the disclosure does not constitute a waiver of the privilege or protection.
H-02 Non-Discrimination & Bias Assessment · H-02.1, H-02.3 · General Consumer App
C.R.S. § 6-1-1703(1)
Plain Language
Deployers of high-risk AI systems must exercise reasonable care to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination. Like the parallel developer duty in § 6-1-1702(1), deployers receive a rebuttable presumption of compliance if they meet the section's specific obligations and any AG rules. This is the overarching deployer duty — the specific sub-obligations are mapped separately below.
Statutory Text
(1) On and after June 30, 2026, a deployer of a high-risk artificial intelligence system shall use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination. In any enforcement action brought on or after June 30, 2026, by the attorney general pursuant to section 6-1-1706, there is a rebuttable presumption that a deployer of a high-risk artificial intelligence system used reasonable care as required under this section if the deployer complied with this section and any additional requirements or obligations as set forth in rules adopted by the attorney general pursuant to section 6-1-1707.
G-01 AI Governance Program & Documentation · G-01.1, G-01.2 · General Consumer App
C.R.S. § 6-1-1703(2)(a)
Plain Language
Deployers must implement and maintain a formal risk management policy and program governing their deployment of high-risk AI systems. The program must cover the principles, processes, and personnel used to identify, document, and mitigate algorithmic discrimination risks. Critically, this is not a one-time exercise — it must be iterative, regularly and systematically reviewed, and updated over the full lifecycle of the AI system. Reasonableness is assessed based on factors specified in the original SB 205 (size/complexity of the deployer, nature/scope of the AI system, sensitivity of data, etc.). This maps closely to the NIST AI RMF approach.
Statutory Text
(2) (a) On and after June 30, 2026, and except as provided in subsection (6) of this section, a deployer of a high-risk artificial intelligence system shall implement a risk management policy and program to govern the deployer's deployment of the high-risk artificial intelligence system. The risk management policy and program must specify and incorporate the principles, processes, and personnel that the deployer uses to identify, document, and mitigate known or reasonably foreseeable risks of algorithmic discrimination. The risk management policy and program must be an iterative process planned, implemented, and regularly and systematically reviewed and updated over the life cycle of a high-risk artificial intelligence system, requiring regular, systematic review and updates. A risk management policy and program implemented and maintained pursuant to this subsection (2) must be reasonable considering:
H-02 Non-Discrimination & Bias Assessment · H-02.3, H-02.10 · General Consumer App
C.R.S. § 6-1-1703(3)(a)
Plain Language
Deployers (or their contracted third parties) must complete an impact assessment for each high-risk AI system at deployment and at least annually thereafter, plus within 90 days of any intentional and substantial modification. This is a continuing obligation — the annual cadence ensures the assessment stays current even absent modifications. Exceptions exist in subsections (3)(d), (3)(e), and (6) of the original statute.
Statutory Text
(3) (a) Except as provided in subsections (3)(d), (3)(e), and (6) of this section: (I) A deployer, or a third party contracted by the deployer, that deploys a high-risk artificial intelligence system on or after June 30, 2026, shall complete an impact assessment for the high-risk artificial intelligence system; and (II) On and after June 30, 2026, a deployer, or a third party contracted by the deployer, shall complete an impact assessment for a deployed high-risk artificial intelligence system at least annually and within ninety days after any intentional and substantial modification to the high-risk artificial intelligence system is made available.
H-02 Non-Discrimination & Bias Assessment · H-02.3 · General Consumer App
C.R.S. § 6-1-1703(3)(c)
Plain Language
When an impact assessment is triggered by an intentional and substantial modification (as opposed to the annual routine assessment), the deployer must include an additional statement disclosing whether the system was used consistently with or differently from the developer's intended uses. This requirement surfaces deployment drift — if the deployer has been using the system outside the developer's stated intended uses, this must be documented and disclosed in the post-modification impact assessment.
Statutory Text
(c) In addition to the information required under subsection (3)(b) of this section, an impact assessment completed pursuant to this subsection (3) following an intentional and substantial modification to a high-risk artificial intelligence system on or after June 30, 2026, must include a statement disclosing the extent to which the high-risk artificial intelligence system was used in a manner that was consistent with, or varied from, the developer's intended uses of the high-risk artificial intelligence system.
H-02 Non-Discrimination & Bias Assessment · H-02.8 · General Consumer App
C.R.S. § 6-1-1703(3)(g)
Plain Language
Deployers must conduct at least annual reviews of each deployed high-risk AI system to affirmatively verify that it is not causing algorithmic discrimination. This is a periodic deployment review obligation — distinct from the pre-deployment impact assessment. The first review must be completed by June 30, 2026, with annual reviews thereafter. This review can be conducted by the deployer itself or a contracted third party.
Statutory Text
(g) On or before June 30, 2026, and at least annually thereafter, a deployer, or a third party contracted by the deployer, must review the deployment of each high-risk artificial intelligence system deployed by the deployer to ensure that the high-risk artificial intelligence system is not causing algorithmic discrimination.
H-01 Human Oversight of Automated Decisions · H-01.1, H-01.3 · General Consumer App
C.R.S. § 6-1-1703(4)(a)
Plain Language
Deployers must, no later than the time the high-risk AI system is deployed to make or substantially factor in a consequential decision about a consumer, provide certain disclosures. The specific disclosures required are enumerated in the original SB 205 § 6-1-1703(4)(a) (e.g., that an AI system is being used, categories of decisions it makes, contact information for the deployer, a description of the purpose). This is a pre-decision or at-decision timing requirement — the deployer cannot make the consequential decision and disclose later.
Statutory Text
(4) (a) On and after June 30, 2026, and no later than the time that a deployer deploys a high-risk artificial intelligence system to make, or be a substantial factor in making, a consequential decision concerning a consumer, the deployer shall:
H-01 Human Oversight of Automated Decisions · H-01.1, H-01.4, H-01.5 · General Consumer App
C.R.S. § 6-1-1703(4)(b)
Plain Language
When a high-risk AI system makes or substantially factors into a consequential decision that is adverse to a consumer, the deployer must provide the consumer with specific information. The original SB 205 § 6-1-1703(4)(b) requires a statement of the principal reasons for the decision, an opportunity to correct any incorrect personal data the system processed in making the decision, and an opportunity to appeal the decision, with human review where technically feasible. This post-adverse-decision disclosure obligation gives affected consumers the information they need to exercise their correction and appeal rights.
Statutory Text
(b) On and after June 30, 2026, a deployer that has deployed a high-risk artificial intelligence system to make, or be a substantial factor in making, a consequential decision concerning a consumer shall, if the consequential decision is adverse to the consumer, provide to the consumer:
G-02 Public Transparency & Documentation · G-02.4 · General Consumer App
C.R.S. § 6-1-1703(5)(a)
Plain Language
Deployers must publish a clear, readily available statement on their website summarizing their deployed high-risk AI systems. The specific summary content is defined in the original SB 205 § 6-1-1703(5)(a) (types of systems deployed, how the deployer manages known or foreseeable discrimination risks, etc.). This is the deployer counterpart to the developer's public use case inventory obligation in § 6-1-1702(4)(a).
Statutory Text
(5) (a) On and after June 30, 2026, and except as provided in subsection (6) of this section, a deployer shall make available, in a manner that is clear and readily available on the deployer's website, a statement summarizing:
R-01 Incident Reporting · R-01.3 · General Consumer App
C.R.S. § 6-1-1703(7)
Plain Language
If a deployer discovers that a deployed high-risk AI system has actually caused algorithmic discrimination, the deployer must notify the attorney general within 90 days of discovery, in the AG's prescribed form. This is a post-discovery incident reporting obligation, not a periodic reporting requirement. The 90-day window runs from actual discovery, and the deployer must not unreasonably delay even within that window.
Statutory Text
(7) If a deployer deploys a high-risk artificial intelligence system on or after June 30, 2026, and subsequently discovers that the high-risk artificial intelligence system has caused algorithmic discrimination, the deployer, without unreasonable delay, but no later than ninety days after the date of the discovery, shall send to the attorney general, in a form and manner prescribed by the attorney general, a notice disclosing the discovery.
R-02 Regulatory Disclosure & Submissions · R-02.2 · General Consumer App
C.R.S. § 6-1-1703(9)
Plain Language
The attorney general may require deployers (or contracted third parties) to produce their risk management policy, impact assessments, or maintained records within 90 days of the AG's request. The AG may evaluate these materials for compliance with the statute. Materials are exempt from CORA, and deployers may designate them as proprietary or trade secrets. Attorney-client privilege and work-product protections are preserved. This mirrors the developer on-demand disclosure obligation in § 6-1-1702(7) but applies to deployer-side documentation.
Statutory Text
(9) On and after June 30, 2026, the attorney general may require that a deployer, or a third party contracted by the deployer, disclose to the attorney general, no later than ninety days after the request and in a form and manner prescribed by the attorney general, the risk management policy implemented pursuant to subsection (2) of this section, the impact assessment completed pursuant to subsection (3) of this section, or the records maintained pursuant to subsection (3)(f) of this section. The attorney general may evaluate such risk management policy, impact assessment, or records to ensure compliance with this part 17, and the risk management policy, impact assessment, and records are not subject to disclosure under the "Colorado Open Records Act", part 2 of article 72 of title 24. In a disclosure made pursuant to this subsection (9), a deployer may designate the statement or documentation as including proprietary information or a trade secret. To the extent that any information contained in the risk management policy, impact assessment, or records includes information subject to attorney-client privilege or work-product protection, the disclosure does not constitute a waiver of the privilege or protection.
T-01 AI Identity Disclosure · T-01.1 · General Consumer App
C.R.S. § 6-1-1704(1)
Plain Language
Deployers or developers who make available an AI system intended to interact with consumers must disclose to each consumer that they are interacting with an AI system. This is an unconditional disclosure obligation — it does not depend on whether a reasonable person would be misled. It applies broadly to any AI system intended for consumer interaction, not just high-risk systems. Exceptions are provided in subsection (2) of the original statute. This is a broader disclosure trigger than states like California SB 243, which conditions disclosure on a reasonable-person misleading standard.
Statutory Text
(1) On and after June 30, 2026, and except as provided in subsection (2) of this section, a deployer or other developer that deploys, offers, sells, leases, licenses, gives, or otherwise makes available an artificial intelligence system that is intended to interact with consumers shall ensure the disclosure to each consumer who interacts with the artificial intelligence system that the consumer is interacting with an artificial intelligence system.