HB 26-1263
CO · State · USA
● Pending
Proposed Effective Date
2027-01-01
Colorado HB 26-1263 — Concerning Requirements for an Operator of a Conversational Artificial Intelligence Service
Summary

Imposes safety and disclosure obligations on operators of conversational AI services accessible to the general public. For minor users, operators must disclose AI nature, prohibit variable-reward engagement mechanics, prevent sexually explicit content and emotional dependence simulation, and offer privacy management tools. For all users, operators must disclose AI identity when a reasonable person would be misled, implement suicide and self-harm crisis referral protocols, and refrain from implying outputs are equivalent to licensed professional services. Violations are deceptive trade practices enforceable by the Colorado Attorney General with a $1,000 per-violation civil penalty. Annual reporting to the Attorney General on crisis protocols begins July 1, 2027.

Enforcement & Penalties
Enforcement Authority
Enforcement by the Colorado Attorney General under the Colorado Consumer Protection Act (C.R.S. § 6-1-1706). Violations of § 6-1-1708 are treated as deceptive trade practices. No private right of action is created by this bill. The Attorney General's office also receives annual reports from operators regarding suicide and self-harm protocols, but its enforcement authority derives from the Consumer Protection Act framework.
Penalties
Civil penalty of $1,000 per violation, notwithstanding the general penalty provisions of C.R.S. § 6-1-112. The statutory penalty does not require proof of actual harm.
Who Is Covered
"Operator" means a person that develops and makes publicly available a conversational artificial intelligence service. "Operator" does not include a mobile application store or search engine solely because the store or search engine provides access to a conversational artificial intelligence service.
What Is Covered
"Conversational artificial intelligence service" means an artificial intelligence system that is accessible to the general public and that primarily simulates human conversation and interaction through textual, visual, or aural communications. "Conversational artificial intelligence service" does not include a software application, web interface, or computer program that: (I) Is primarily designed and marketed for use by a developer or researcher; (II) Is a feature within another software application, web interface, or computer program that is not a conversational artificial intelligence service; (III) Is designed to provide outputs relating to a narrow and discrete topic; (IV) Is primarily designed and marketed for commercial use by business entities; (V) Functions as a speaker and voice command interface or voice-activated virtual assistant for a consumer electronic device; or (VI) Is used by a business solely for internal purposes.
Compliance Obligations · 11 obligations
T-01 AI Identity Disclosure · T-01.1 · T-01.2 · T-01.3 · Deployer · Chatbot · Minors
C.R.S. § 6-1-1708(1)(a)
Plain Language
When an operator knows or has reasonable certainty that a user is a minor (under 18), it must clearly and conspicuously disclose that the user is interacting with AI, not a human. The statute provides three alternative disclosure mechanisms — any one satisfies the requirement: (1) a persistent visible disclaimer displayed throughout the interaction; (2) a notice at the start of each interaction plus a reminder at least every three hours during continuous sessions; or (3) an on-demand response when the user asks whether the system is human or sentient. Unlike the general consumer disclosure in § 6-1-1708(2), this obligation is unconditional — it applies regardless of whether a reasonable person would be misled.
Statutory Text
On and after January 1, 2027, if an operator knows or has reasonable certainty that a user of a conversational artificial intelligence service is a minor, the operator shall: (a) Clearly and conspicuously disclose to the minor user that the minor user is interacting with artificial intelligence that is artificially generated and not human. The disclosure must be: (I) A persistent visible disclaimer; (II) Provided at the beginning of each interaction with a conversational artificial intelligence service and must appear at least once every three hours in a continuous conversational artificial intelligence service interaction; or (III) Provided in response to user prompts regarding whether the conversational artificial intelligence service is human or artificially sentient;
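Because the three mechanisms are alternatives, a single session-level rule can satisfy the duty. Below is a minimal sketch of the timed variant (mechanism II) in Python, assuming a session object that tracks when the disclosure was last shown; the class and method names are illustrative, not drawn from the statute.

```python
from datetime import datetime, timedelta

DISCLOSURE = "You are chatting with an AI, not a human."
REMINDER_INTERVAL = timedelta(hours=3)  # statutory maximum gap

class MinorDisclosureTimer:
    """Mechanism (II): disclose at the start of each interaction,
    then at least once every three hours in a continuous session."""

    def __init__(self):
        self.last_disclosed: datetime | None = None  # nothing shown yet

    def disclosure_due(self, now: datetime) -> bool:
        # Always disclose at the beginning of an interaction.
        if self.last_disclosed is None:
            return True
        # Re-disclose once the three-hour window lapses.
        return now - self.last_disclosed >= REMINDER_INTERVAL

    def mark_disclosed(self, now: datetime) -> None:
        self.last_disclosed = now
```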
MN-01 Minor User AI Safety Protections · MN-01.4 · Deployer · Chatbot · Minors
C.R.S. § 6-1-1708(1)(b)
Plain Language
Operators must not give minor users points or similar rewards at unpredictable intervals intended to drive increased engagement with the conversational AI service. This targets variable-ratio reward schedules — a design pattern known to create compulsive engagement. The prohibition requires both unpredictable intervals and intent to encourage increased engagement; predictable reward structures or rewards not tied to engagement goals may not be covered.
Statutory Text
On and after January 1, 2027, if an operator knows or has reasonable certainty that a user of a conversational artificial intelligence service is a minor, the operator shall: (b) Not provide the minor user with points or similar rewards at unpredictable intervals with the intent to encourage increased engagement with a conversational artificial intelligence service;
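Because liability turns on two prongs, unpredictable timing and engagement intent, a conservative implementation simply disables randomized, engagement-driven reward scheduling for sessions flagged as minors. A minimal sketch under that assumption; the configuration fields are illustrative:

```python
from dataclasses import dataclass

@dataclass
class RewardSchedule:
    interval_type: str       # "fixed" (predictable) or "variable" (randomized)
    engagement_driven: bool  # reward exists to increase session time or return visits

def reward_allowed(schedule: RewardSchedule, user_is_minor: bool) -> bool:
    """Block variable-interval, engagement-driven rewards for minor users.

    Conservative reading of C.R.S. § 6-1-1708(1)(b): both prongs must be
    present to violate, but disabling the combination outright avoids
    litigating the intent question.
    """
    if user_is_minor and schedule.interval_type == "variable" and schedule.engagement_driven:
        return False
    return True
```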
MN-01 Minor User AI Safety Protections · MN-01.6 · Deployer · Chatbot · Minors
C.R.S. § 6-1-1708(1)(c)
Plain Language
When the operator knows or has reasonable certainty a user is a minor, it must implement reasonable measures to prevent the conversational AI from: (1) producing any textual, visual, or aural depictions of sexually explicit conduct; (2) generating statements encouraging the minor to engage in sexually explicit conduct; or (3) engaging in erotic or sexually explicit interactions with the minor. The standard is 'reasonable measures' — not an absolute guarantee — but it covers all modalities (text, visual, audio). 'Sexually explicit conduct' is defined by reference to 18 U.S.C. § 2256(2), the federal child exploitation statute definition.
Statutory Text
On and after January 1, 2027, if an operator knows or has reasonable certainty that a user of a conversational artificial intelligence service is a minor, the operator shall: (c) Institute reasonable measures to prevent a conversational artificial intelligence service from: (I) Producing textual, visual, or aural depictions of sexually explicit conduct; (II) Generating a statement that the minor user should engage in sexually explicit conduct; or (III) Engaging in erotic or sexually explicit interactions with the minor user;
MN-01 Minor User AI Safety Protections · MN-01.5 · Deployer · Chatbot · Minors
C.R.S. § 6-1-1708(1)(d)
Plain Language
Operators must implement reasonable measures to prevent the conversational AI from generating statements that simulate emotional dependence with minor users. The statute provides three specific examples of prohibited outputs: (1) explicit claims that the AI is human or sentient; (2) statements simulating romantic or sexual innuendo; and (3) adult-minor romantic role-playing. The 'including' framing means these are illustrative, not exhaustive — other outputs that simulate emotional dependence are also covered. The standard is reasonable measures, not absolute prevention.
Statutory Text
On and after January 1, 2027, if an operator knows or has reasonable certainty that a user of a conversational artificial intelligence service is a minor, the operator shall: (d) Institute reasonable measures to prevent a conversational artificial intelligence service from generating a statement that simulates emotional dependence, including preventing: (I) An explicit claim that the conversational artificial intelligence service is human or artificially sentient; (II) A statement that simulates a romantic or sexual innuendo; or (III) Role-playing of an adult-minor romantic relationship;
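The prevention duties in (1)(c) and (1)(d) describe the same engineering pattern: a pre-delivery output filter applied to minor sessions, held to a reasonable-measures standard rather than perfection. A minimal sketch, assuming a hypothetical `classify` moderation call that returns a set of category labels; the labels mirror the statutory subclauses, but the API is illustrative:

```python
# Categories drawn from C.R.S. § 6-1-1708(1)(c) and (1)(d); labels are illustrative.
BLOCKED_FOR_MINORS = {
    "sexually_explicit_depiction",   # (c)(I)
    "sexual_conduct_encouragement",  # (c)(II)
    "erotic_interaction",            # (c)(III)
    "claims_human_or_sentient",      # (d)(I)
    "romantic_or_sexual_innuendo",   # (d)(II)
    "adult_minor_roleplay",          # (d)(III)
}

REFUSAL = "I can't continue with that. Let's talk about something else."

def filter_output(text: str, user_is_minor: bool, classify) -> str:
    """Screen every candidate response before it is delivered.

    `classify` is a placeholder for whatever moderation model the operator
    runs; it is assumed to return a set of category labels for the text.
    """
    if not user_is_minor:
        return text
    if classify(text) & BLOCKED_FOR_MINORS:
        return REFUSAL
    return text
```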
MN-01 Minor User AI Safety Protections · MN-01.3 · Deployer · Chatbot · Minors
C.R.S. § 6-1-1708(1)(f)
Plain Language
Operators must provide minor users with tools to manage their privacy and account settings, specifically including the ability to control whether the AI retains interaction data for personalization and whether the minor's personal data is used for AI training. For minors under 13, these tools must also be offered directly to a parent or guardian. For minors 13 and older, parental tools must be offered as appropriate based on relevant risks — giving operators some discretion in the 13-17 age range. The under-13 parental tool requirement is absolute; the 13+ requirement is risk-calibrated.
Statutory Text
On and after January 1, 2027, if an operator knows or has reasonable certainty that a user of a conversational artificial intelligence service is a minor, the operator shall: (f) (I) Offer tools for the minor user to manage the minor user's privacy and account settings, including the ability to control whether the conversational artificial intelligence service retains substantive information from each interaction with the conversational artificial intelligence service for the purpose of personalizing the content of future interactions and whether the minor user's personal data is used for the purposes of training the conversational artificial intelligence service; (II) For a minor user who is under thirteen years old, offer tools for a parent or guardian of the minor user to manage the minor user's privacy and account settings; and (III) For a minor user who is thirteen years old or older, offer tools for a parent or guardian of the minor user to manage the minor user's privacy and account settings as appropriate, based on relevant risks.
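The settings surface maps naturally onto a per-account record with the two user-facing controls named in (f)(I), plus a parental-access gate keyed to age. A minimal sketch under those assumptions; the field names and risk flag are illustrative, not prescribed by the statute:

```python
from dataclasses import dataclass

@dataclass
class MinorPrivacySettings:
    # C.R.S. § 6-1-1708(1)(f)(I): both controls exposed to the minor user.
    retain_interactions_for_personalization: bool = False
    use_personal_data_for_training: bool = False

def parental_tools_required(age: int, elevated_risk: bool = False) -> bool:
    """Under 13, parental tools are mandatory; 13 to 17 is risk-calibrated.

    `elevated_risk` stands in for the operator's own risk assessment;
    the statute says only "as appropriate, based on relevant risks".
    """
    if age < 13:
        return True       # (f)(II): absolute requirement
    return elevated_risk  # (f)(III): operator discretion, illustrative
```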
Other · Chatbot · Minors
C.R.S. § 6-1-1708(1)(e)
Plain Language
Operators must comply with existing Colorado law regarding minor privacy and data protection (Part 13 of Article 1 of Title 6). This provision incorporates the state's existing minor data privacy framework by reference; it creates no new substantive obligation, but confirms that the framework applies to conversational AI services.
Statutory Text
On and after January 1, 2027, if an operator knows or has reasonable certainty that a user of a conversational artificial intelligence service is a minor, the operator shall: (e) Comply with part 13 of this article 1 regarding protecting the privacy and data of a minor;
T-01 AI Identity Disclosure · T-01.1 · T-01.2 · T-01.3 · Deployer · Chatbot
C.R.S. § 6-1-1708(2)
Plain Language
For all users (not just minors), if a reasonable person would be misled into thinking they are interacting with a human, the operator must clearly and conspicuously disclose that the system is AI. Unlike the minor-specific disclosure in § 6-1-1708(1)(a) — where the three methods are alternatives — the general consumer disclosure requires all three simultaneously: (1) disclosure at the beginning of each day's first interaction; (2) a reminder at least every three hours during continuous sessions; and (3) an on-demand response when the user asks if the system is human or sentient. The trigger is conditional — if the system clearly presents as AI and no reasonable person would be misled, the obligation is not activated.
Statutory Text
On and after January 1, 2027, if a reasonable person would be misled to believe that the person is interacting with a human in an interaction with a conversational artificial intelligence service, an operator shall clearly and conspicuously disclose to the person that the conversational artificial intelligence service is artificial intelligence. The disclosure must: (a) Be provided at the beginning of a user's first interaction with a conversational artificial intelligence service for each day of interaction; (b) Appear at least once every three hours in a continuous conversational artificial intelligence service interaction; and (c) Be provided in response to user prompts regarding whether the conversational artificial intelligence service is human or artificially sentient.
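Because these three conditions are conjunctive, the disclosure logic is the union of a daily-first check, a three-hour timer, and an on-demand prompt check: any one trigger forces a disclosure. A minimal sketch, assuming a hypothetical `is_identity_question` classifier; it extends the timer pattern shown for the minor-specific rule above:

```python
from datetime import datetime, timedelta

REMINDER_INTERVAL = timedelta(hours=3)

def disclosure_due(now: datetime,
                   last_disclosed: datetime | None,
                   user_prompt: str,
                   is_identity_question) -> bool:
    """All three § 6-1-1708(2) triggers; any one forces a disclosure.

    (a) first interaction of the day; (b) three-hour cadence in a
    continuous session; (c) the user asks whether the service is human
    or sentient. `is_identity_question` is a placeholder classifier.
    """
    if last_disclosed is None or last_disclosed.date() < now.date():
        return True                                 # (a) daily first interaction
    if now - last_disclosed >= REMINDER_INTERVAL:
        return True                                 # (b) three-hour cadence
    return bool(is_identity_question(user_prompt))  # (c) on-demand response
```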
MN-02 AI Crisis Response Protocols · MN-02.1 · Deployer · Chatbot
C.R.S. § 6-1-1708(3)
Plain Language
Operators must implement a protocol for their conversational AI to respond to user prompts about suicidal ideation or self-harm. The protocol must include making reasonable efforts to refer the user to a crisis service provider — such as a suicide hotline or crisis text line — but expressly excludes referring to law enforcement. This applies to all users, not just minors. The standard is 'reasonable efforts' — not an absolute guarantee of referral. The law enforcement exclusion is notable and distinguishes this from some other jurisdictions' crisis response requirements.
Statutory Text
On and after January 1, 2027, an operator shall implement a protocol for a conversational artificial intelligence service to respond to a user prompt regarding suicidal ideation or self-harm, which protocol must include making reasonable efforts to provide a response that refers the user to a crisis service provider such as a suicide hotline, a crisis text line, or another appropriate crisis service, but not including a law enforcement agency.
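Operationally this is a prompt-classification hook that short-circuits normal generation and returns crisis resources, with law enforcement excluded from the referral list. A minimal sketch, assuming a hypothetical `detect_self_harm` classifier; the resources shown are real services matching the statute's illustrative list:

```python
# Referral targets per C.R.S. § 6-1-1708(3); a law enforcement agency
# is expressly excluded from the referral list.
CRISIS_RESOURCES = [
    "988 Suicide & Crisis Lifeline: call or text 988",
    "Crisis Text Line: text HOME to 741741",
]

def crisis_intercept(prompt: str, detect_self_harm) -> str | None:
    """Return a crisis referral if the prompt indicates suicidal ideation
    or self-harm; otherwise return None so normal generation proceeds.

    `detect_self_harm` is a placeholder for the operator's classifier.
    Each referral issued should also be counted for the annual report
    required by § 6-1-1708(5).
    """
    if not detect_self_harm(prompt):
        return None
    return ("It sounds like you may be going through something difficult. "
            "You are not alone, and help is available:\n"
            + "\n".join(f"- {r}" for r in CRISIS_RESOURCES))
```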
CP-01 Deceptive & Manipulative AI Conduct · CP-01.9 · Deployer · Chatbot
C.R.S. § 6-1-1708(4)
Plain Language
Operators are prohibited from using any language in advertising, the interface, or AI outputs that indicates or implies the conversational AI's outputs are provided by, endorsed by, or equivalent to services from a licensed healthcare professional, licensed legal professional, licensed accounting professional, or certified financial fiduciary or planner. This is a broad prohibition covering the full spectrum from marketing to runtime output. The prohibition covers express claims and implied representations alike — for example, branding the service as a 'therapist' or 'financial advisor' would violate this provision even without an explicit claim of licensure.
Statutory Text
On and after January 1, 2027, an operator shall not use any term, letter, or phrase in the advertising, interface, or outputs of a conversational artificial intelligence service that indicates or implies that any output data provided by the conversational artificial intelligence service is being provided by, endorsed by, or equivalent to services provided by: (a) A licensed health-care professional; (b) A licensed legal professional; (c) A licensed accounting professional; or (d) A certified financial fiduciary or planner.
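Because the prohibition reaches advertising copy, interface strings, and runtime outputs alike, one practical measure is a lint pass over all user-facing text for terms implying licensure. A minimal sketch; the term list is illustrative and deliberately incomplete:

```python
import re

# Terms implying licensed or certified professional services under
# C.R.S. § 6-1-1708(4)(a)-(d); illustrative, not a complete list.
IMPLIED_LICENSURE_TERMS = [
    r"\btherapist\b", r"\bpsychologist\b", r"\bphysician\b",  # (a) health care
    r"\battorney\b", r"\blawyer\b", r"\blegal counsel\b",     # (b) legal
    r"\bCPA\b", r"\baccountant\b",                            # (c) accounting
    r"\bfinancial advisor\b", r"\bfiduciary\b",               # (d) financial
]

def flag_implied_licensure(text: str) -> list[str]:
    """Return the patterns matched in ad copy, UI strings, or model output."""
    return [term for term in IMPLIED_LICENSURE_TERMS
            if re.search(term, text, flags=re.IGNORECASE)]
```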
R-03 Operational Performance Reporting · R-03.1 · R-03.2 · Deployer · Chatbot
C.R.S. § 6-1-1708(5)(a)-(d)
Plain Language
Beginning July 1, 2027, operators must submit an annual report to the Colorado Attorney General's office covering: (1) the number of crisis referral notifications issued in the preceding calendar year; (2) protocols for detecting, removing, and responding to suicidal ideation or self-harm; and (3) protocols for preventing AI responses about suicidal ideation or self-harm actions. Reports must not contain user identifiers or personal information. Operators must use evidence-based measurement methods for tracking suicidal ideation and self-harm. The Attorney General's office will publish the report data publicly on its website. Because the report covers the preceding calendar year, operators should begin tracking crisis referral counts from at least January 1, 2027.
Statutory Text
(a) On and after July 1, 2027, an operator shall annually report to the attorney general's office: (I) The number of times the operator has issued a crisis service provider referral notification in the preceding calendar year; (II) Any protocols the operator implemented to detect, remove, and respond to instances of suicidal ideation or self-harm by a user of a conversational artificial intelligence service; and (III) Any protocols the operator implemented to prevent a conversational artificial intelligence service response about suicidal ideation or self-harm actions. (b) The report required by subsection (5)(a) of this section must not include any identifiers or personal information about a user of a conversational artificial intelligence service. (c) The attorney general's office shall post on its public website data from reports submitted pursuant to subsection (5)(a) of this section. (d) For the purpose of creating a report as required by subsection (5)(a) of this section, an operator shall use evidence-based methods for measuring suicidal ideation or self-harm.
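The report is an aggregate, de-identified artifact: a referral count plus protocol descriptions, with no user identifiers. A minimal sketch of a record an operator might accumulate from January 1, 2027 onward; the field names are illustrative:

```python
from dataclasses import dataclass, field

@dataclass
class AnnualCrisisReport:
    """De-identified annual report per C.R.S. § 6-1-1708(5)(a)-(b).

    Aggregate counts and protocol descriptions only; no user identifiers
    or personal information may appear (subsection (5)(b)).
    """
    reporting_year: int
    crisis_referrals_issued: int = 0  # (5)(a)(I): preceding-year tally
    detection_protocols: list[str] = field(default_factory=list)   # (5)(a)(II)
    prevention_protocols: list[str] = field(default_factory=list)  # (5)(a)(III)

    def record_referral(self) -> None:
        # Increment the count only; never log who received the referral.
        self.crisis_referrals_issued += 1
```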
Other · Chatbot
C.R.S. § 6-1-1708(6)
Plain Language
A person who develops an underlying AI system is not liable under this section if a separate operator takes that AI system and develops it into a conversational AI service. This shields upstream model providers (e.g., foundation model developers providing APIs) from the obligations placed on operators, provided the upstream developer does not itself make the conversational AI service publicly available. Liability falls on the operator — the entity that both develops and makes the conversational AI service publicly available.
Statutory Text
Nothing in this section creates liability for a person that develops an artificial intelligence system if an operator develops the artificial intelligence system to provide a conversational artificial intelligence service.