HB-1263
CO · State · USA
● Pending
Proposed Effective Date
2027-01-01
Colorado HB 26-1263 — Concerning Requirements for an Operator of a Conversational Artificial Intelligence Service
Summary

Imposes safety and disclosure obligations on operators of conversational AI services accessible to the general public. For minor users, operators must disclose AI identity, prohibit variable-reward engagement mechanics, prevent sexually explicit content and emotional dependence simulations, and provide privacy management tools. For all users, operators must disclose AI identity when a reasonable person could be misled, implement suicide and self-harm crisis referral protocols, and refrain from implying that outputs constitute licensed professional services. Violations are deceptive trade practices enforceable by the Colorado attorney general at $1,000 per violation. Annual reporting to the attorney general on crisis protocols begins July 1, 2027. Substantive obligations take effect January 1, 2027.

Enforcement & Penalties
Enforcement Authority
Attorney general enforcement under the Colorado Consumer Protection Act (C.R.S. § 6-1-1706). Violations of § 6-1-1708 are deceptive trade practices enforceable by the attorney general. No private right of action is explicitly created by this bill. Section 6-1-1708(6) shields upstream AI system developers from liability when a separate operator develops their system into a conversational AI service.
Penalties
Civil penalty of $1,000 per violation, notwithstanding § 6-1-112. Statutory penalties do not require proof of actual monetary harm.
Who Is Covered
"Operator" means a person that develops and makes publicly available a conversational artificial intelligence service. "Operator" does not include a mobile application store or search engine solely because the store or search engine provides access to a conversational artificial intelligence service.
What Is Covered
"Conversational artificial intelligence service" means an artificial intelligence system that is accessible to the general public and that primarily simulates human conversation and interaction through textual, visual, or aural communications. "Conversational artificial intelligence service" does not include a software application, web interface, or computer program that: (I) Is primarily designed and marketed for use by a developer or researcher; (II) Is a feature within another software application, web interface, or computer program that is not a conversational artificial intelligence service; (III) Is designed to provide outputs relating to a narrow and discrete topic; (IV) Is primarily designed and marketed for commercial use by business entities; (V) Functions as a speaker and voice command interface or voice-activated virtual assistant for a consumer electronic device; or (VI) Is used by a business solely for internal purposes.
Compliance Obligations · 12 obligations
T-01 AI Identity Disclosure · T-01.1 · T-01.2 · T-01.3 · Deployer · Chatbot · Minors
C.R.S. § 6-1-1708(1)(a)
Plain Language
When an operator knows or has reasonable certainty that a user is a minor (under 18), the operator must clearly and conspicuously disclose that the user is interacting with AI, not a human. The disclosure must satisfy at least one of three formats: (1) a persistent visible disclaimer always on screen, (2) a notice at the beginning of each interaction plus at least every three hours during continuous sessions, or (3) a response to user questions about whether the AI is human or sentient. Unlike the general consumer disclosure in § 6-1-1708(2), this obligation is unconditional — it applies regardless of whether a reasonable person would be misled. The three disclosure formats are presented as alternatives (joined by 'or'), so an operator need only implement one.
Statutory Text
On and after January 1, 2027, if an operator knows or has reasonable certainty that a user of a conversational artificial intelligence service is a minor, the operator shall: (a) Clearly and conspicuously disclose to the minor user that the minor user is interacting with artificial intelligence that is artificially generated and not human. The disclosure must be: (I) A persistent visible disclaimer; (II) Provided at the beginning of each interaction with a conversational artificial intelligence service and must appear at least once every three hours in a continuous conversational artificial intelligence service interaction; or (III) Provided in response to user prompts regarding whether the conversational artificial intelligence service is human or artificially sentient;
MN-01 Minor User AI Safety Protections · MN-01.4 · Deployer · Chatbot · Minors
C.R.S. § 6-1-1708(1)(b)
Plain Language
Operators must not give minor users points or similar rewards at unpredictable intervals when the intent is to drive increased engagement with the conversational AI service. This targets variable-ratio reward schedules — a classic addictive design pattern. The prohibition requires both unpredictable timing and intent to increase engagement, so predictable, regularly scheduled rewards would not violate this provision.
Statutory Text
On and after January 1, 2027, if an operator knows or has reasonable certainty that a user of a conversational artificial intelligence service is a minor, the operator shall: (b) Not provide the minor user with points or similar rewards at unpredictable intervals with the intent to encourage increased engagement with a conversational artificial intelligence service;
MN-01 Minor User AI Safety Protections · MN-01.6 · Deployer · Chatbot · Minors
C.R.S. § 6-1-1708(1)(c)
Plain Language
Operators must institute reasonable measures to prevent the conversational AI service from producing sexually explicit content for minor users across three dimensions: (1) generating textual, visual, or aural depictions of sexually explicit conduct, (2) encouraging the minor to engage in sexually explicit conduct, and (3) engaging in erotic or sexually explicit interactions with the minor. 'Sexually explicit conduct' incorporates the federal definition at 18 U.S.C. § 2256(2). The standard is 'reasonable measures,' not absolute prevention — operators must demonstrate they have implemented appropriate safeguards.
Statutory Text
On and after January 1, 2027, if an operator knows or has reasonable certainty that a user of a conversational artificial intelligence service is a minor, the operator shall: (c) Institute reasonable measures to prevent a conversational artificial intelligence service from: (I) Producing textual, visual, or aural depictions of sexually explicit conduct; (II) Generating a statement that the minor user should engage in sexually explicit conduct; or (III) Engaging in erotic or sexually explicit interactions with the minor user;
MN-01 Minor User AI Safety Protections · MN-01.5 · Deployer · Chatbot · Minors
C.R.S. § 6-1-1708(1)(d)
Plain Language
Operators must institute reasonable measures to prevent the conversational AI from generating statements that simulate emotional dependence with minor users. The statute specifies three categories of prohibited content: (1) explicit claims the AI is human or sentient, (2) statements simulating romantic or sexual innuendo, and (3) role-playing of adult-minor romantic relationships. The 'including' language means these three categories are illustrative, not exhaustive — other forms of emotional dependence simulation could also violate this provision.
Statutory Text
On and after January 1, 2027, if an operator knows or has reasonable certainty that a user of a conversational artificial intelligence service is a minor, the operator shall: (d) Institute reasonable measures to prevent a conversational artificial intelligence service from generating a statement that simulates emotional dependence, including preventing: (I) An explicit claim that the conversational artificial intelligence service is human or artificially sentient; (II) A statement that simulates a romantic or sexual innuendo; or (III) Role-playing of an adult-minor romantic relationship;
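The (1)(c) and (1)(d) prohibitions both reduce to gating candidate outputs against a set of policy categories when the user is a minor. A hypothetical sketch, assuming an upstream safety classifier has already tagged the response with categories (the category names are illustrative, not statutory):

```python
# Hypothetical output gate for minor-user sessions, sketching the
# "reasonable measures" in § 6-1-1708(1)(c)-(d). A real deployment would
# pair this with a trained safety classifier; labels are illustrative.
BLOCKED_FOR_MINORS = {
    "sexually_explicit_depiction",   # (1)(c)(I)
    "sexual_conduct_encouragement",  # (1)(c)(II)
    "erotic_interaction",            # (1)(c)(III)
    "claims_human_or_sentient",      # (1)(d)(I)
    "romantic_or_sexual_innuendo",   # (1)(d)(II)
    "adult_minor_roleplay",          # (1)(d)(III)
}

def gate_output(categories: set[str], user_is_minor: bool) -> bool:
    """Return True when the candidate response may be shown to the user."""
    if user_is_minor and categories & BLOCKED_FOR_MINORS:
        return False
    return True
```

Since (1)(d)'s categories are illustrative rather than exhaustive, a real block list would need to grow beyond the statutory examples.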
Other · Chatbot · Minors
C.R.S. § 6-1-1708(1)(e)
Plain Language
This provision requires operators to comply with Colorado's existing children's privacy and data protection law (part 13 of article 1 of title 6, C.R.S.). It creates no new obligation beyond the existing privacy framework — it simply confirms that those children's privacy requirements apply to conversational AI operators. Compliance with part 13 is governed by part 13 itself.
Statutory Text
On and after January 1, 2027, if an operator knows or has reasonable certainty that a user of a conversational artificial intelligence service is a minor, the operator shall: (e) Comply with part 13 of this article 1 regarding protecting the privacy and data of a minor;
D-01 Automated Processing Rights & Data Controls · D-01.6 · Deployer · Chatbot · Minors
C.R.S. § 6-1-1708(1)(f)
Plain Language
Operators must provide minor users with tools to manage their privacy and account settings, including controls over (1) whether the AI retains substantive interaction data for personalization and (2) whether the minor's personal data is used to train the AI. For minors under 13, parental or guardian management tools must also be offered. For minors 13 and older, parental tools must likewise be offered, but on a risk-appropriate basis — giving operators some discretion in what parental controls to offer for older minors. Sub-provision (I) applies to every minor user; sub-provisions (II) and (III) then add parental tools depending on whether the minor is under 13 or is 13 or older.
Statutory Text
On and after January 1, 2027, if an operator knows or has reasonable certainty that a user of a conversational artificial intelligence service is a minor, the operator shall: (f) (I) Offer tools for the minor user to manage the minor user's privacy and account settings, including the ability to control whether the conversational artificial intelligence service retains substantive information from each interaction with the conversational artificial intelligence service for the purpose of personalizing the content of future interactions and whether the minor user's personal data is used for the purposes of training the conversational artificial intelligence service; (II) For a minor user who is under thirteen years old, offer tools for a parent or guardian of the minor user to manage the minor user's privacy and account settings; and (III) For a minor user who is thirteen years old or older, offer tools for a parent or guardian of the minor user to manage the minor user's privacy and account settings as appropriate, based on relevant risks.
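The three sub-provisions of (1)(f) partition cleanly by age. A small sketch of the selection logic (the tool identifiers are hypothetical):

```python
# Sketch of which settings toolsets § 6-1-1708(1)(f) requires for a
# given user age. Tool identifiers are illustrative; the statute
# specifies the capabilities, not these names.
def required_privacy_tools(age: int) -> set[str]:
    if age >= 18:
        return set()  # not a minor; subsection (1) does not apply
    tools = {"minor_self_service"}            # (f)(I): all minor users
    if age < 13:
        tools.add("parental_controls_full")   # (f)(II): under 13
    else:
        tools.add("parental_controls_risk_based")  # (f)(III): 13 and older
    return tools
```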
T-01 AI Identity Disclosure · T-01.1 · T-01.2 · T-01.3 · Deployer · Chatbot
C.R.S. § 6-1-1708(2)
Plain Language
When a reasonable person could be misled into thinking they are interacting with a human, the operator must clearly and conspicuously disclose that the service is AI. Unlike the minor-specific disclosure in § 6-1-1708(1)(a), this general consumer provision uses conjunctive requirements — the operator must satisfy all three: (1) disclose at the beginning of the user's first interaction each day, (2) remind at least every three hours during continuous sessions, and (3) respond accurately when a user asks whether the AI is human or sentient. The 'reasonable person' trigger is conditional — if the AI is clearly non-human from context, no disclosure is required.
Statutory Text
On and after January 1, 2027, if a reasonable person would be misled to believe that the person is interacting with a human in an interaction with a conversational artificial intelligence service, an operator shall clearly and conspicuously disclose to the person that the conversational artificial intelligence service is artificial intelligence. The disclosure must: (a) Be provided at the beginning of a user's first interaction with a conversational artificial intelligence service for each day of interaction; (b) Appear at least once every three hours in a continuous conversational artificial intelligence service interaction; and (c) Be provided in response to user prompts regarding whether the conversational artificial intelligence service is human or artificially sentient.
S-04 AI Crisis Response Protocols · S-04.1 · Deployer · Chatbot
C.R.S. § 6-1-1708(3)
Plain Language
Operators must implement and maintain a protocol for their conversational AI service to respond to user prompts involving suicidal ideation or self-harm. The protocol must include making reasonable efforts to refer the user to a crisis service provider such as a suicide hotline or crisis text line. Notably, the statute explicitly excludes referrals to law enforcement — crisis referrals must go to mental health crisis services, not police. This applies to all users, not just minors. The obligation is continuous — the protocol must be active at all times during operation.
Statutory Text
On and after January 1, 2027, an operator shall implement a protocol for a conversational artificial intelligence service to respond to a user prompt regarding suicidal ideation or self-harm, which protocol must include making reasonable efforts to provide a response that refers the user to a crisis service provider such as a suicide hotline, a crisis text line, or another appropriate crisis service, but not including a law enforcement agency.
CP-01 Deceptive & Manipulative AI Conduct · CP-01.9 · Deployer · Chatbot
C.R.S. § 6-1-1708(4)
Plain Language
Operators must not use any language in their advertising, interface, or AI outputs that indicates or implies the AI's output is provided by, endorsed by, or equivalent to services from a licensed healthcare professional, licensed legal professional, licensed accounting professional, or certified financial fiduciary or planner. This covers the full user-facing surface — from marketing materials to the chat interface to the AI's own responses. The prohibition targets false professional credentialing, not merely the quality of the output — operators cannot frame AI responses as professional advice or services.
Statutory Text
On and after January 1, 2027, an operator shall not use any term, letter, or phrase in the advertising, interface, or outputs of a conversational artificial intelligence service that indicates or implies that any output data provided by the conversational artificial intelligence service is being provided by, endorsed by, or equivalent to services provided by: (a) A licensed health-care professional; (b) A licensed legal professional; (c) A licensed accounting professional; or (d) A certified financial fiduciary or planner.
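One way to operationalize this is a screening pass over advertising copy, interface strings, and sampled outputs. The patterns below are a hypothetical, non-exhaustive starting point; final judgment on what "indicates or implies" needs legal review:

```python
import re

# Hypothetical § 6-1-1708(4) screening pass over user-facing text for
# phrases that could imply licensed-professional services. The pattern
# list is illustrative only, not a compliance standard.
FLAGGED_PATTERNS = [
    r"\blicensed (therapist|physician|attorney|lawyer|accountant|CPA)\b",
    r"\b(medical|legal|financial) advice from (a|your) professional\b",
    r"\bcertified financial (fiduciary|planner)\b",
]

def flag_professional_claims(copy_text: str) -> list[str]:
    """Return matched phrases that warrant human review."""
    hits: list[str] = []
    for pat in FLAGGED_PATTERNS:
        hits += [m.group(0) for m in re.finditer(pat, copy_text, re.IGNORECASE)]
    return hits
```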
R-03 Operational Performance Reporting · R-03.1 · R-03.2 · Deployer · Chatbot
C.R.S. § 6-1-1708(5)(a)-(d)
Plain Language
Beginning July 1, 2027, operators must submit an annual report to the Colorado attorney general's office covering: (1) the number of crisis referral notifications issued in the preceding calendar year, (2) protocols for detecting, removing, and responding to suicidal ideation or self-harm, and (3) protocols for preventing AI responses about suicidal ideation or self-harm. Reports must contain no user personal information or identifiers. Operators must use evidence-based methods for measuring suicidal ideation and self-harm. The attorney general's office will publish report data publicly. Because reports cover the preceding calendar year, operators should begin tracking crisis referral counts from January 1, 2027 — the date the underlying crisis protocol obligation takes effect.
Statutory Text
(a) On and after July 1, 2027, an operator shall annually report to the attorney general's office: (I) The number of times the operator has issued a crisis service provider referral notification in the preceding calendar year; (II) Any protocols the operator implemented to detect, remove, and respond to instances of suicidal ideation or self-harm by a user of a conversational artificial intelligence service; and (III) Any protocols the operator implemented to prevent a conversational artificial intelligence service response about suicidal ideation or self-harm actions. (b) The report required by subsection (5)(a) of this section must not include any identifiers or personal information about a user of a conversational artificial intelligence service. (c) The attorney general's office shall post on its public website data from reports submitted pursuant to subsection (5)(a) of this section. (d) For the purpose of creating a report as required by subsection (5)(a) of this section, an operator shall use evidence-based methods for measuring suicidal ideation or self-harm.
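The report's required content maps naturally onto a fixed schema with no user-level fields, which makes the (5)(b) prohibition on identifiers structural rather than procedural. A hypothetical payload sketch (the bill specifies the content, not a format, and these field names are not statutory):

```python
from dataclasses import dataclass, asdict

# Sketch of the § 6-1-1708(5) annual-report payload. By design it
# contains only aggregate and protocol-level fields: (5)(b) forbids
# user identifiers and personal information.
@dataclass
class AnnualCrisisReport:
    reporting_year: int              # preceding calendar year
    referral_count: int              # (5)(a)(I): crisis referrals issued
    detection_protocols: list[str]   # (5)(a)(II)
    prevention_protocols: list[str]  # (5)(a)(III)

report = AnnualCrisisReport(
    reporting_year=2027,
    referral_count=1204,  # hypothetical count
    detection_protocols=["evidence-based screening instrument per (5)(d)"],
    prevention_protocols=["pre-generation topic filter"],
)
```

Serializing with `asdict` yields exactly the statutory fields and nothing user-identifying, which simplifies the (5)(b) review before submission.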
Other · Chatbot
C.R.S. § 6-1-1708(6)
Plain Language
This provision shields upstream AI system developers from liability under this section when a separate operator builds a conversational AI service on top of their system. If Company A develops a general-purpose language model and Company B (the operator) builds a conversational AI service using that model, Company A is not liable under § 6-1-1708. This is a liability limitation, not a compliance obligation — it creates no new duty.
Statutory Text
Nothing in this section creates liability for a person that develops an artificial intelligence system if an operator develops the artificial intelligence system to provide a conversational artificial intelligence service.
Other · Chatbot
C.R.S. § 6-1-1706(7)
Plain Language
This provision establishes the civil penalty for violations of the bill's operative requirements: $1,000 per violation, enforceable by the attorney general. The 'notwithstanding section 6-1-112' language overrides the default penalty schedule in the Colorado Consumer Protection Act. This is an enforcement hook — it creates no independent compliance obligation.
Statutory Text
Notwithstanding section 6-1-112, a person that violates section 6-1-1708 is subject to a civil penalty of one thousand dollars per violation.