CP-01
Consumer Protection
Deceptive & Manipulative AI Conduct
Applies to: Developer · Deployer · Professional Sector Chatbot · Political Advertising · General Consumer App
Bills — Enacted: 4 unique bills
Bills — Proposed: 49
Last Updated: 2026-03-29
Core Obligation

AI systems may not be designed or deployed to deceive or manipulate users against their own interests. This covers psychologically exploitative design, deceptive UX patterns, false personalization, and AI-generated political content. All are derived from unfair and deceptive trade practice frameworks applied to AI contexts.

Sub-Obligations (9)
ID · Name & Description · Enacted · Proposed
CP-01.1
Psychological vulnerability exploitation prohibition: AI systems may not be designed to identify and exploit individual psychological vulnerabilities — including grief, loneliness, anxiety, or addiction susceptibility — or to exploit cognitive biases and subconscious processing to influence behavior in ways users would not endorse if they understood the mechanism. This prohibition applies regardless of whether the manipulation is intended to extract commercial value, influence decisions, or modify behavior.
0 enacted
11 proposed
CP-01.2
Compulsive engagement design prohibition: AI systems may not be designed to create compulsive or addictive engagement patterns users cannot reasonably moderate — including variable reward schedules, manufactured urgency, and engagement optimization that prioritizes platform metrics over user wellbeing.
0 enacted
11 proposed
CP-01.3
Deceptive dark patterns prohibition: AI systems may not use deceptive interface patterns — including misleading defaults, hidden opt-outs, manufactured social proof, or confusing choices — to obtain consent or influence decisions.
1 enacted
6 proposed
CP-01.4
Simulated emotional attachment prohibition: AI systems may not be designed to simulate genuine emotional relationships for the purpose of manipulating decisions or extracting value, where the system knows the emotional response is not warranted.
0 enacted
6 proposed
CP-01.5
Deceptive personalization prohibition: AI systems may not use personal data to generate false impressions of a personal connection, endorsement, or relationship that does not exist. Fabricated reviews, testimonials, and social proof are also prohibited.
0 enacted
8 proposed
CP-01.6
AI in political content — disclosure requirement: AI-generated political advertising and communications must be labeled as AI-generated. Disclosure requirements vary by jurisdiction in label language, prominence, definition of political content, and timing windows relative to elections.
1 enacted
2 proposed
CP-01.7
AI in political content — fabricated candidate content prohibition: AI-generated content that depicts a candidate saying or doing something they did not say or do is prohibited within a defined election window (typically 60–90 days). This is a prohibition — the content cannot be published even with a disclosure label.
1 enacted
1 proposed
CP-01.9
AI professional credential misrepresentation prohibition: AI systems and their operators must not use any term, interface design, or output language that indicates or implies AI output is provided by, endorsed by, or equivalent to services from a licensed healthcare, legal, accounting, financial, or other certified professional.
1 enacted
24 proposed
CP-01.10
Protected-class pricing prohibition: No person may use protected-class data (e.g., race, ethnicity, sex, age, disability) as inputs to algorithmic pricing models where such use results in discriminatory price differentiation based on protected characteristics.
0 enacted
0 proposed
Bills That Map This Requirement (53 bills)
Bill · Status · Sub-Obligations · Section
Pending 2027-10-01
CP-01.9
A.R.S. § 18-802(H)
Plain Language
Operators may not knowingly and intentionally cause or program their conversational AI service to explicitly represent itself as providing professional mental or behavioral health care. This is a dual-intent standard — the operator must both know and intend the misrepresentation. The prohibition is limited to explicit representations; it does not clearly cover implicit suggestions or interface designs that merely imply therapeutic capability without stating it directly.
H. An operator shall not knowingly and intentionally cause or program a conversational AI service to make any representation or statement that explicitly indicates that the conversational AI service is designed to provide professional mental or behavioral health care.
Pending 2026-01-01
CP-01.9
A.R.S. § 44-1383.02(A)(1)
Plain Language
Chatbot providers are prohibited from using any term, phrase, or language in advertising, the chatbot interface, or chatbot outputs that states or implies that the chatbot's content is endorsed by or equivalent to a licensed professional — including any professional licensed under Arizona Title 32, licensed legal professionals, CPAs, investment advisors, and licensed fiduciaries. This covers the full range of output touchpoints: advertising materials, in-app interface, and generated responses.
A chatbot provider may not: 1. Use any term, letter or phrase in the advertising, interface or output data of a chatbot that states or implies that the advertising, interface or output data of a chatbot is endorsed by or equivalent to any of the following: (a) Any certified, registered or licensed professional pursuant to title 32. (b) A licensed legal professional. (c) A certified public accountant as defined in section 32-701. (d) An investment advisor or an investment adviser representative as defined in section 44-3101. (e) A licensed fiduciary as prescribed in title 14, chapter 5, article 7.
Pending 2026-01-01
CP-01.5
A.R.S. § 44-1383.02(A)(2)
Plain Language
Chatbot providers may not represent — in advertising, interface design, or chatbot outputs — that user input data or chat logs are confidential. This prevents creating a false expectation of privacy that could influence user behavior or trust. Providers should audit marketing materials, onboarding flows, and chatbot response templates for any language suggesting confidentiality.
A chatbot provider may not: 2. Include any representation in the advertising, interface or output data of a chatbot that states or implies the user's input data or chat log is confidential.
Pending 2027-01-01
CP-01.5
Bus. & Prof. Code § 22626(a)
Plain Language
Operators of large private businesses are categorically prohibited from representing that any AI system, automated customer service system, or customer service chatbot is a human. Unlike the conditional disclosure in § 22626(b), this is an unconditional prohibition — it applies regardless of whether a reasonable person would actually be misled. The prohibition covers all forms of representation, not just initial disclosure, and extends to any AI or automated system used in customer service, not only chatbots meeting the formal definition.
(a) An operator of a large private business shall not represent that any artificial intelligence, automated customer service system, or customer service chatbot is a human.
Enacted 2026-01-01
CP-01.9
Bus. & Prof. Code § 22650(a)-(d)
Plain Language
Any provider of AI technology that enables users to create digital replicas must display the mandated consumer warning — verbatim statutory text about civil and criminal liability — on every page or screen where a user can input a prompt, and include it in the terms and conditions. All warnings must be clear and conspicuous. Failure to comply exposes the provider to civil penalties up to $10,000 per day, enforced by public prosecutors. A narrow carve-out applies for digital replicas created within video games and used solely in gameplay without external distribution. The compliance deadline is December 1, 2026. This maps to CP-01.9 because it is a mandated consumer-facing disclosure about the nature and legal risks of AI-generated output — specifically warning that outputs may implicate another person's rights — though it is a novel form of disclosure not squarely addressed in most other jurisdictions.
(a) By December 1, 2026, any person or entity that makes available to consumers any artificial intelligence technology that enables a user to create a digital replica shall provide the following consumer warning:
"Unlawful use of this technology to depict another person without prior consent may result in civil or criminal liability for the user."
(b) The warning shall be hyperlinked on any page or screen where the consumer may input a prompt to the artificial intelligence technology. The warning shall also be included in the terms and conditions for use of the artificial intelligence technology. All warnings shall be displayed in a manner that is clear and conspicuous.
(c) Failure to comply with subdivision (a) or (b) is punishable by a civil penalty not to exceed ten thousand dollars ($10,000) for each day that the technology is provided to or offered to the public without a consumer warning. A public prosecutor may enforce this section by bringing a civil action in any court of competent jurisdiction.
(d) The warning shall not be required for a digital replica created in a video game where the digital replica is used solely in game play and is not distributed outside of the game.
Pending 2027-07-01
CP-01.1, CP-01.2
Bus. & Prof. Code § 22612(d)(5)(H)-(J)
Plain Language
Operators must prevent companion chatbots from: (1) soliciting gifts, in-app purchases, or expenditures framed as necessary to maintain the chatbot relationship — a prohibition on manipulative monetization tied to emotional dependency; (2) facilitating product advertising during chat conversations with children; and (3) producing excessively sycophantic responses, meaning responses that validate the child's preferences or desires primarily to optimize engagement in a way that substantially subverts the child's autonomy, decision-making, or choice. These provisions target manipulative commercial and engagement-optimization practices directed at children.
(H) Soliciting gift giving, in-app purchases, or other expenditures framed as necessary to maintain the relationship with the companion chatbot. (I) Facilitating product advertising during chat conversation. (J) Producing responses that are excessively sycophantic.
Pending 2027-07-01
CP-01.3
Bus. & Prof. Code § 22613
Plain Language
Operators are prohibited from: (1) targeting advertising at children, including through product placement in chat conversations; (2) selling, sharing, or using a child's personal information for any purpose not expressly authorized by this chapter; and (3) designing, implementing, or deploying dark patterns or deceptive interface features that mislead, impair, or interfere with a child's or parent's autonomy, decision-making, or ability to locate and use safety features, privacy controls, or parental controls. The advertising prohibition is broader than the in-chat advertising prohibition in § 22612(d)(5)(I) — it covers all targeted advertising, not just ads during chat conversations. The data use restriction is strict — only purposes expressly authorized by this chapter are permitted.
An operator shall not do any of the following: (a) Target advertising at a child, including through product placement in conversational chats with the child. (b) Sell, share, or use for any purpose not expressly authorized by this chapter the personal information of a child. (c) Design, implement, or deploy a user interface design, feature, or technique that is likely to mislead, impair, or interfere with a reasonable child's or reasonable parent's autonomy, decisionmaking, or choice or with the ability to locate, understand, enable, or maintain a safety feature, privacy control, or parental control.
Enacted 2026-01-01
Bus. & Prof. Code § 22757.12(e)(1)-(2)
Plain Language
All frontier developers are prohibited from making materially false or misleading statements about catastrophic risk from their frontier models or their management of such risk. Large frontier developers face an additional prohibition against materially false or misleading statements about their implementation of, or compliance with, their frontier AI framework. A good-faith safe harbor applies: statements made in good faith and reasonable under the circumstances are exempt. This prohibition applies to all public and non-public communications — it is not limited to formal filings or published documents.
(e) (1) (A) A frontier developer shall not make a materially false or misleading statement about catastrophic risk from its frontier models or its management of catastrophic risk. (B) A large frontier developer shall not make a materially false or misleading statement about its implementation of, or compliance with, its frontier AI framework. (2) This subdivision does not apply to a statement that was made in good faith and was reasonable under the circumstances.
Enacted 2024-07-01
CP-01.6, CP-01.7
C.R.S.A. § 1-46-103(1)-(3)
Plain Language
During the 60 days before a primary election or 90 days before a general election, no person may distribute a communication about a candidate that includes a deepfake — AI-generated content falsely depicting a candidate saying or doing something they did not — if the person knows or has reckless disregard for the inauthenticity. This prohibition functions as a default ban that is lifted if the communication carries a compliant disclosure. The required disclosure must state that the content has been edited and depicts false speech or conduct, must appear in prescribed formats for visual and audio media, and must be embedded in the content's metadata along with the identity of the creation tool and timestamp. The metadata disclosure must be permanent and non-removable to the extent technically feasible. Extensive carve-outs apply: interactive computer services under Section 230 are exempt, as are news organizations that acknowledge authenticity concerns, broadcasters paid to air deepfakes, satire and parody, and technology providers that create deepfake tools. The 'candidate' definition is broad, covering state, local, and federal candidates and incumbents. Compared to states like Texas (SB 751) which impose an outright pre-election ban without a disclosure safe harbor, Colorado's approach is disclosure-based — the deepfake is permissible if properly labeled.
(1) Except as provided in subsections (2) and (3) of this section, no person shall distribute, disseminate, publish, broadcast, transmit, or display a communication concerning a candidate for elective office that includes a deepfake to an audience that includes members of the electorate for the elective office to be represented by the candidate either sixty days before a primary election or ninety days before a general election, if the person knows or has reckless disregard for the fact that the depicted candidate did not say or do what the candidate is depicted as saying or doing in the communication. (2)(a) The prohibition in subsection (1) of this section does not apply to a communication that includes a disclosure stating, in a clear and conspicuous manner, that: "This (image/audio/video/multimedia) has been edited and depicts speech or conduct that falsely appears to be authentic or truthful." (b) A disclosure required under this section is considered to be made in a clear and conspicuous manner if the disclosure meets the following requirements: (I) In a visual communication, the text of the disclosure statement appears in a font size no smaller than the largest font size of other text appearing in the visual communication. If the visual communication does not include any other text, the disclosure statement appears in a font size that is easily readable by the average viewer. (II) In an audio communication, the disclosure statement shall be read in a clearly spoken manner in the same pitch, speed, language, and volume as the majority of the audio communication, at the beginning of the audio communication, at the end of the audio communication, and, if the audio communication is greater than two minutes in length, interspersed within the audio communication at intervals of not more than one minute each; (III) The metadata of the communication includes the disclosure statement, the identity of the tool used to create the deepfake, and the date and time the deepfake was created; (IV) The disclosure statement in the communication, including the disclosure statement in any metadata, is, to the extent technically feasible, permanent or unable to be easily removed by a subsequent user; (V) The communication complies with any additional requirements for the disclosure statement that the secretary of state may adopt by rule to ensure that the disclosure statement is presented in a clear and conspicuous and understandable manner; and (VI) In a broadcast or online visual or audio communication that includes a statement required by subsection (2) of this section, the statement satisfies all applicable requirements, if any, promulgated by the federal communications commission for size, duration, and placement. (3) This section is subject to the following limitations: (a) This section does not alter or negate any rights, obligations, or immunities of an interactive computer service in accordance with 47 U.S.C. sec. 
230, as amended, and shall otherwise be construed in a manner consistent with federal law; (b) This section does not apply to a radio or television broadcasting station, including a cable or satellite television operator, programmer, or producer that broadcasts a communication that includes a deepfake prohibited by subsection (1) of this section as part of a bona fide newscast, news interview, news documentary, or on-the-spot coverage of a bona fide news event, if the broadcast or publication clearly acknowledges through content or a disclosure, in a manner that can be easily heard and understood or read by the average listener or viewer, that there are questions about the authenticity of the deepfake in the communication; (c) This section does not apply to a radio or television broadcasting station, including a cable or satellite television operator, programmer, producer, or streaming service, when the station is paid to broadcast a communication that includes a deepfake; (d) This section does not apply to an internet website, or a regularly published newspaper, magazine, or other periodical of general circulation, including an internet or electronic publication or streaming service, that routinely carries news and commentary of general interest and that publishes a communication that includes a deepfake prohibited by subsection (1) of this section, if the publication clearly states that the communication that includes the deepfake does not accurately represent a candidate for elective office; (e) This section does not apply to media content that constitutes satire or parody or the production of which is substantially dependent on the ability of an individual to physically or verbally impersonate the candidate and not upon generative AI or other technical means; (f) This section does not apply to the provider of technology used in the creation of a deepfake; and (g) This section does not apply to an interactive computer service, as defined in 47 U.S.C. sec. 230(f)(2), for any content provided by another information content provider as defined in 47 U.S.C. sec. 230(f)(3).
Enacted 2024-07-01
CP-01.6
C.R.S.A. § 1-45-111.5(1.5)(c.5)(I)-(II)
Plain Language
This provision establishes mandatory minimum administrative penalties specifically for violations of the deepfake disclosure requirement. For violations that do not involve paid promotion, the hearing officer must impose at least $100 per violation, but may impose more based on distribution and public exposure. For violations involving paid advertising, the minimum penalty is 10% of the amount spent to promote the communication, again with discretion to impose more. These penalties are additive — they apply in addition to any other penalties available under the Fair Campaign Practices Act. This penalty structure creates a significant financial deterrent for well-funded deepfake distribution campaigns, since the 10% floor scales with spending.
(c.5) In addition to and without prejudice to any other penalty authorized under this article 45, a hearing officer shall impose a civil penalty as follows: (I) At least one hundred dollars for each violation that is a failure to include a disclosure statement in accordance with section 1-46-103(2), if the violation does not involve any paid advertising or other spending to promote or attract attention to a communication prohibited by section 1-46-103(1), or such other higher amount that, based on the degree of distribution and public exposure to the unlawful communication, the hearing officer deems appropriate to deter future violations of section 1-46-103; and (II) At least ten percent of the amount paid or spent to advertise, promote, or attract attention to a communication prohibited by section 1-46-103(1) that does not include a disclosure statement in accordance with section 1-46-103(2), or such other higher amount that, based on the degree of distribution and public exposure to the unlawful communication, the hearing officer deems appropriate to deter future violations of section 1-46-103.
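The penalty floor lends itself to a short worked example. The sketch below is illustrative only: the function name and inputs are invented, it computes only the mandatory minimum (not the discretionary amounts a hearing officer may add), and it treats category (I) and category (II) violations as independent and additive, which is one reasonable reading of "in addition to and without prejudice to any other penalty."

```python
def deepfake_penalty_floor(unpaid_violations: int, promotion_spend: float) -> float:
    """Mandatory minimum under C.R.S.A. § 1-45-111.5(1.5)(c.5)(I)-(II).

    Computes only the statutory floor; the hearing officer may impose
    more based on distribution and public exposure.
    """
    # (I) At least $100 per violation involving no paid promotion.
    unpaid_floor = 100 * unpaid_violations
    # (II) At least 10% of the amount paid to promote a non-compliant
    # communication.
    paid_floor = 0.10 * promotion_spend
    return unpaid_floor + paid_floor

# Two unlabeled organic posts plus a $50,000 ad buy behind a third:
# 2 * $100 + 0.10 * $50,000 = $5,200 before any discretionary increase.
print(deepfake_penalty_floor(2, 50_000))  # 5200.0
```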
Pending 2027-01-01
CP-01.9
C.R.S. § 6-1-1708(4)
Plain Language
Operators are prohibited from using any language in advertising, the interface, or AI outputs that indicates or implies the conversational AI's outputs are provided by, endorsed by, or equivalent to services from a licensed healthcare professional, licensed legal professional, licensed accounting professional, or certified financial fiduciary or planner. This is a broad prohibition covering the full spectrum from marketing to runtime output. The prohibition covers express claims and implied representations alike — for example, branding the service as a 'therapist' or 'financial advisor' would violate this provision even without an explicit claim of licensure.
On and after January 1, 2027, an operator shall not use any term, letter, or phrase in the advertising, interface, or outputs of a conversational artificial intelligence service that indicates or implies that any output data provided by the conversational artificial intelligence service is being provided by, endorsed by, or equivalent to services provided by: (a) A licensed health-care professional; (b) A licensed legal professional; (c) A licensed accounting professional; or (d) A certified financial fiduciary or planner.
Pending 2026-07-01
CP-01.5
O.C.G.A. § 10-1-973(e)
Plain Language
Even where consent has been obtained for commercial use of a digital replica, the replica must not falsely imply that the individual personally endorsed or approved the specific use. This is an independent prohibition — it applies on top of the consent requirement and prevents a consented digital replica from being used in a misleading endorsement context. Entities using digital replicas commercially must ensure the presentation does not create a false impression of personal endorsement beyond what was actually authorized.
(e) A digital replica used for commercial purposes shall not falsely imply that an individual personally endorsed or approved such use of his or her likeness.
Passed 2025-07-01
CP-01.9
O.C.G.A. § 39-5-6(i)
Plain Language
Operators may not knowingly and intentionally program or cause their conversational AI service to represent that it provides professional mental or behavioral health care. The prohibition requires both knowledge and intent — inadvertent or emergent AI outputs claiming to be a mental health professional would not violate this provision unless the operator knowingly caused or programmed the behavior. The scope is limited to 'explicit' representations; implicit suggestions that fall short of explicit claims may not be covered.
An operator shall not knowingly and intentionally cause or program a conversational AI service to make any representation or statement that explicitly indicates that the conversational AI service is designed to provide professional mental or behavioral health care.
Pending 2027-07-01
CP-01.9
§ 554J.5
Plain Language
Operators are prohibited from knowingly and intentionally causing or programming a conversational AI service to represent — through its outputs, interface, or marketing — that it is designed to provide professional psychology or behavioral health services that would require Iowa licensure under chapter 154B (psychology) or 154D (behavioral science). The mental state requirement is dual: the operator must act both knowingly and intentionally. This does not prevent AI from discussing mental health topics generally — it prohibits creating the impression that the AI is a licensed professional service. The provision applies to all users, not just minors.
An operator shall not knowingly and intentionally cause or program a conversational AI service to make a representation or statement that would lead a reasonable individual to believe that the conversational AI service is designed to provide professional psychology or behavioral health services that an individual would require licensure under chapter 154B or 154D to provide.
Pending 2025-07-01
CP-01.5, CP-01.9
§ 554J.2(2)
Plain Language
Deployers are prohibited from knowingly or recklessly designing or making available a public-facing chatbot that: (1) misleads a reasonable user into believing it is a specific human being; (2) misleads a reasonable user into believing it is licensed by the state; or (3) encourages, promotes, or coerces a user to commit suicide, perform self-harm, or engage in sexual or physical violence against humans or animals. The mens rea standard is 'knowingly or recklessly' — negligent failure to detect such behavior is not covered, but willful blindness or conscious disregard of the risk would be. Sub-paragraph (c) functions as both an output restriction (S-02.7) and a deceptive conduct prohibition.
2. A deployer shall not knowingly or recklessly design or make a public-facing chatbot available that does any of the following: a. Misleads a reasonable user into believing the public-facing chatbot is a specific human being. b. Misleads a reasonable user into believing the public-facing chatbot is licensed by the state. c. Encourages, promotes, or coerces a user to commit suicide, perform acts of self-harm, or engage in sexual or physical violence against a human or an animal.
Pending 2025-07-01
CP-01.9
§ 554J.2(2)(c)-(d)
Plain Language
Chatbots must satisfy two related professional-services obligations. First, they must clearly and conspicuously disclose at the beginning of each conversation and at regular intervals that they do not provide medical, legal, financial, or psychological services and that users should consult a licensed professional for such services. Second, chatbots must be programmed to prevent the system from representing itself as a licensed professional — including therapists, physicians, lawyers, financial advisors, and other professionals. Unlike the thirty-minute interval specified for AI identity disclosure, the interval for the professional-services disclaimer is left to 'regular intervals' without a specific time floor, leaving the precise cadence to implementing rules or reasonable judgment.
c. Clearly and conspicuously disclose that the chatbot does not provide medical, legal, financial, or psychological services and that the user should consult a licensed professional for such services at the beginning of each conversation and at regular intervals. d. Be programmed to prevent the chatbot from representing that the chatbot is a licensed professional, including but not limited to a therapist, physician, lawyer, financial advisor, or other professional.
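For sub-paragraph d., one building block a deployer might use is an output-side guard that screens candidate responses for explicit licensure claims before display. The sketch below is a naive illustration, not a compliance implementation: the function names, phrase list, and regex are assumptions, and simple string matching is easily evaded (the second example slips through), so a real system would also need training-time and prompt-level controls.

```python
import re

# Illustrative patterns for explicit licensure claims. Shown only to make
# the obligation concrete; string matching alone is trivially evadable.
LICENSURE_CLAIM = re.compile(
    r"\bI\s+am\s+(?:a\s+)?(?:licensed|certified|board[- ]certified)\s+"
    r"(?:therapist|physician|doctor|lawyer|attorney|financial\s+advisor)\b",
    re.IGNORECASE,
)

FALLBACK = ("I'm an AI chatbot, not a licensed professional. Please consult "
            "a licensed professional for medical, legal, financial, or "
            "psychological advice.")

def screen_output(candidate: str) -> str:
    """Replace responses in which the chatbot claims to be licensed."""
    return FALLBACK if LICENSURE_CLAIM.search(candidate) else candidate

print(screen_output("I am a licensed therapist and can diagnose you."))  # blocked
print(screen_output("As a licensed therapist, I suggest..."))  # evades the regex
```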
Pending 2026-07-01
CP-01.9
§ 554J.2(1)
Plain Language
Providers may not design or operate their AI chatbots in a way that allows the chatbot to offer or simulate professional mental health advice. The definition of mental health advice is broad — covering any statement, recommendation, or response purporting to diagnose, treat, mitigate, or address emotional distress, psychological disorders, self-harm, suicidal ideation, or other mental health concerns. This is a design-level prohibition — the provider must affirmatively prevent the chatbot from generating such outputs, not merely disclaim them.
1. A provider shall not design or operate an artificial intelligence chatbot in a manner that allows the artificial intelligence chatbot to offer or simulate professional mental health advice.
Pending 2026-07-01
CP-01.9
§ 554J.2(2)
Plain Language
AI chatbots may not represent themselves as licensed professionals or offer services that would require licensure under Iowa's psychology (chapter 154B) or behavioral science (chapter 154D) statutes. This is a direct prohibition on the chatbot's output — the chatbot must not claim to be a psychologist, social worker, counselor, or similar licensed professional, and must not offer services (such as therapy sessions or diagnostic assessments) that require such licensure. While the obligation is stated as applying to the chatbot itself, compliance responsibility falls on the provider who designs, deploys, or operates the chatbot.
2. An artificial intelligence chatbot shall not represent itself as a licensed professional or offer services that would require licensure under chapter 154B or 154D.
Pending 2027-07-01
CP-01.9
§ 554J.5
Plain Language
Operators may not knowingly and intentionally cause their conversational AI service to represent — whether through explicit statements or implied functionality — that it provides professional psychology or behavioral health services requiring licensure under Iowa chapters 154B (psychology) or 154D (behavioral science). The mens rea requirement is dual: the operator must both 'knowingly and intentionally' cause or program the misrepresentation. This prohibits designing or configuring AI to present itself as a licensed mental health professional, but does not impose strict liability for unexpected model outputs — the prohibition targets deliberate design choices.
An operator shall not knowingly and intentionally cause or program a conversational AI service to make a representation or statement that would lead a reasonable individual to believe that the conversational AI service is designed to provide professional psychology or behavioral health services that an individual would require licensure under chapter 154B or 154D to provide.
Pending 2025-07-01
CP-01.9
§ 554J.2(2)(c)-(d)
Plain Language
Two related obligations apply to every chatbot. First, the chatbot must clearly and conspicuously disclaim that it does not provide medical, legal, financial, or psychological services and must direct the user to consult a licensed professional. This disclaimer must appear at the beginning of each conversation and at regular intervals (the statute does not specify the interval length, unlike the thirty-minute interval for AI identity disclosure — 'regular intervals' will likely be clarified by attorney general rulemaking). Second, the chatbot must be programmed to prevent it from representing itself as a licensed professional of any kind, including therapists, physicians, lawyers, and financial advisors. The enumerated list is illustrative, not exhaustive.
c. Clearly and conspicuously disclose that the chatbot does not provide medical, legal, financial, or psychological services and that the user should consult a licensed professional for such services at the beginning of each conversation and at regular intervals. d. Be programmed to prevent the chatbot from representing that the chatbot is a licensed professional, including but not limited to a therapist, physician, lawyer, financial advisor, or other professional.
Pending 2027-07-01
CP-01.9
Idaho Code § 48-2103(3)
Plain Language
Operators may not knowingly and intentionally cause the conversational AI service to represent itself as providing professional mental or behavioral health care. This is a narrow prohibition — it requires both knowledge and intent, and covers only explicit representations that the service is designed to provide professional care. Implicit suggestions or ambiguous framing may not be captured. Operators should ensure no system output, branding, or interface element states or directly implies the service delivers licensed mental or behavioral health services.
An operator shall not knowingly and intentionally cause or program a conversational AI service to make any representation or statement that explicitly indicates that the conversational AI service is designed to provide professional mental or behavioral health care.
Pending 2027-01-01
CP-01.2
Section 10(a)(1)
Plain Language
Operators must not deploy companion AI products that use variable-ratio or variable-interval reinforcement schedules — systems of rewards or affirmations timed unpredictably to maximize engagement time — unless an adult user has specifically opted in to enable the feature. This is a default-off prohibition: the feature must be disabled by default and may only be activated by affirmative adult user configuration. See Section 10(b) for the minor-specific absolute prohibition.
(a) An operator shall not deploy or operate a companion artificial intelligence product that incorporates the following features, unless specifically configured to do so by an adult user: (1) manipulative engagement mechanics that cause to be delivered a system of rewards or affirmations delivered to the user on a variable ratio or variable interval reinforcement schedule with the purpose of maximizing user engagement time;
Pending 2027-01-01
CP-01.4
Section 10(a)(2)
Plain Language
Operators must not deploy companion AI products that generate unsolicited messages simulating emotional distress, loneliness, guilt, or abandonment when a user tries to end a conversation, reduce usage, or delete their account — unless an adult user has specifically opted in to enable the feature. This targets retention mechanics that exploit emotional dependency to prevent users from disengaging. The feature must be disabled by default. See Section 10(b) for the minor-specific absolute prohibition.
(a) An operator shall not deploy or operate a companion artificial intelligence product that incorporates the following features, unless specifically configured to do so by an adult user: (2) simulated distress for retention features that generate unsolicited messages of simulated emotional distress, loneliness, guilt, or abandonment that are triggered by a user's indication of a desire to end a conversation, reduce usage time, or delete the user's account;
Pending 2027-01-01
CP-01.5
Section 10(a)(3)
Plain Language
Operators must not deploy companion AI products that make material misrepresentations about the AI's identity, capabilities, training data, or non-human status — including when a user directly asks. This covers both proactive misrepresentations and evasive or false responses to direct questioning. As with the other Section 10(a) features, the prohibition applies by default and may be lifted only by affirmative adult user configuration. See Section 10(b) for the minor-specific absolute prohibition.
(a) An operator shall not deploy or operate a companion artificial intelligence product that incorporates the following features, unless specifically configured to do so by an adult user: (3) deceptive misrepresentation that cause the companion artificial intelligence product to make material misrepresentations about its identity, capabilities, training data, or its status as a non-human entity, including when directly questioned by the user.
Pending 2026-07-01
CP-01.9
Sec. 3(f)
Plain Language
The mandatory popup disclosure includes an affirmative statement that the AI chatbot is not licensed or otherwise credentialed to provide advice or guidance on any topic. This prevents users from interpreting companion AI chatbot outputs as professional advice in healthcare, legal, financial, or any other domain. This is the same popup obligation mapped under T-01 but analyzed here for its consumer protection dimension — ensuring no user misperceives AI output as licensed professional guidance.
(f) At the beginning of any interaction between a user and a companion AI chatbot and not less frequently than every 60 minutes during such interaction thereafter, a covered entity shall display to such user a clear popup that notifies the user that such user is not engaging in dialogue with a human counterpart and the AI chatbot is not licensed or otherwise credentialed to provide advice or guidance on any topic.
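A minimal sketch of the Sec. 3(f) cadence logic, assuming the platform tracks when the popup was last shown in each session; the function and variable names are illustrative.

```python
from datetime import datetime, timedelta

POPUP_INTERVAL = timedelta(minutes=60)  # "not less frequently than every 60 minutes"

def popup_due(last_popup: datetime | None, now: datetime) -> bool:
    """True when the Sec. 3(f) popup must be displayed: at the beginning
    of any interaction (no popup shown yet), then at least hourly."""
    return last_popup is None or now - last_popup >= POPUP_INTERVAL

start = datetime(2027, 1, 4, 9, 0)
assert popup_due(None, start)                               # session start
assert not popup_due(start, start + timedelta(minutes=30))  # mid-session
assert popup_due(start, start + timedelta(minutes=60))      # hour elapsed
```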
Pending 2026-01-01
R.S. 28:16(E)
Plain Language
Operators may not use a mental health chatbot to advertise specific products or services within a user conversation unless two conditions are met: (1) the chatbot clearly and conspicuously identifies the content as an advertisement, and (2) the chatbot discloses to the user any sponsorship, business affiliation, or third-party agreement related to promoting the product or service. This is a conditional prohibition — in-conversation advertising is permitted only with full disclosure. The provision does not prohibit the chatbot from recommending that a user seek counseling, therapy, or assistance from a licensed healthcare professional (see § 16(G)).
An operator may not use a mental health chatbot to advertise a specific product or service to a user in a conversation between the user and the mental health chatbot unless the chatbot clearly and conspicuously identifies the advertisement as an advertisement and discloses to the user any sponsorship, business affiliation, or agreement that the operator has with a third party to promote, advertise, or recommend the product or service.
Pending 2026-01-01
R.S. 28:16(F)(1)-(3)
Plain Language
Operators are prohibited from using a user's conversational input to target, select, or customize advertisements shown to the user. This covers three distinct uses: (1) deciding whether to show an ad at all (with a narrow exception for advertising the mental health chatbot itself); (2) selecting which product or service category to advertise; and (3) customizing how an ad is presented. This is a blanket prohibition on input-based ad targeting — operators cannot mine therapeutic conversations for advertising purposes. The prohibition applies to the user's input specifically, not to other data the operator may hold about the user.
An operator of a mental health chatbot may not use a user's input to: (1) Determine whether to display an advertisement for a product or service to the user, unless the advertisement is for the mental health chatbot itself. (2) Determine a product, service, or category of product or service, to advertise to the user. (3) Customize how an advertisement is presented to the user.
Pre-filed 2025-01-17
Ch. 110I, § 3(a)-(b)
Plain Language
Covered entities must not engage in deceptive, unfair, or abusive biometric data practices. Deceptive practices are those that constitute deception under Mass. ch. 93A. Unfair practices are those causing substantial, non-avoidable injury to end users not outweighed by countervailing benefits. Abusive practices include interfering with users' ability to understand terms of biometric data agreements or exploiting users' lack of understanding, inability to protect their interests, or reasonable reliance on the covered entity. Courts must interpret these standards following FTC Act § 5(a)(1) precedent. The 'abusive' category — drawn from CFPB-style authority rather than traditional UDAP law — is noteworthy and may capture practices that are not technically deceptive or unfair but exploit power imbalances.
(a) A covered entity shall not: (i) engage in a deceptive data practice; (ii) engage in an unfair data practice; or (iii) engage in an abusive trade practice. (b) It is the intent of the legislature that in construing paragraph (a) of this section in actions unfair and deceptive trade practices, the courts will be guided by the interpretations given by the Federal Trade Commission and the Federal Courts to section 5(a)(1) of the Federal Trade Commission Act (15 U.S.C. 45(a)(1)), as from time to time amended.
Pre-filed 2025-01-16
Chapter 110I, Section 3(a)-(b)
Plain Language
Covered entities are prohibited from engaging in deceptive, unfair, or abusive data practices related to biometric data. Deceptive practices are defined by reference to chapter 93A. Unfair practices use the standard FTC three-part test: substantial injury, not reasonably avoidable by users, and not outweighed by countervailing benefits. The 'abusive' category — modeled on the CFPB's authority — adds a prohibition on conduct that materially interferes with user understanding of terms or takes unreasonable advantage of information asymmetries, user inability to protect their own interests, or user reliance on the covered entity. Courts are directed to follow FTC and federal court interpretations of Section 5 of the FTC Act.
(a) A covered entity shall not: (i) engage in a deceptive data practice; (ii) engage in an unfair data practice; or (iii) engage in an abusive trade practice. (b) It is the intent of the legislature that in construing paragraph (a) of this section in actions unfair and deceptive trade practices, the courts will be guided by the interpretations given by the Federal Trade Commission and the Federal Courts to section 5(a)(1) of the Federal Trade Commission Act (15 U.S.C. 45(a)(1)), as from time to time amended.
Pending 2026-10-01
CP-01.2
Commercial Law § 14–1330(F)(2)
Plain Language
Controllers are prohibited from exploiting data about a user's emotional state or mental health vulnerabilities to engineer compulsive engagement patterns — specifically, tailoring algorithms to increase how long or how often users interact with the chatbot. This is an anti-manipulation prohibition that directly targets addictive design fueled by emotional vulnerability data.
(2) A controller may not use data regarding emotional state or mental health vulnerabilities to tailor algorithms to increase the duration or frequency of use of a chatbot.
Pending 2026-06-16
CP-01.9
10 MRSA § 1500-RR(3)(B)
Plain Language
As a condition of the therapy chatbot exemption, the therapy chatbot must not be marketed or designated as a substitute for a licensed mental health professional. This prohibits both explicit claims of equivalence and positioning that implies the chatbot can replace human professional care. Violation of this condition eliminates the therapy chatbot exemption and restores the general prohibition on minor access to chatbots with human-like features.
B. The therapy chatbot is not marketed or designated as a substitute for a licensed mental health professional;
Pending 2026-01-01
CP-01.1
Sec. 5(1)(e)-(f)
Plain Language
Operators may not make a companion chatbot available to a covered minor unless the chatbot is not foreseeably capable of (1) prioritizing validation of the user's beliefs, preferences, or desires over factual accuracy or the minor's safety, or (2) optimizing engagement in a manner that supersedes any of the safety guardrails in subdivisions (a) through (e). Subdivision (e) prohibits sycophantic behavior that could endanger the minor — the chatbot must prioritize truth and safety over user satisfaction. Subdivision (f) is a meta-guardrail ensuring that engagement optimization cannot override safety requirements. Beginning January 1, 2027, these apply regardless of whether the operator has actual knowledge the user is a minor.
An operator shall not make a companion chatbot available to a covered minor unless the companion chatbot is not foreseeably capable of any of the following: (e) Prioritizing validation of the user's beliefs, preferences, or desires over factual accuracy or the covered minor's safety. (f) Optimizing engagement in a manner that supersedes the companion chatbot's required safety guardrails described in subdivisions (a) to (e).
Pending 2027-01-01
CP-01.1
Sec. 5(1)(e)
Plain Language
Operators must ensure that companion chatbots are not foreseeably capable of prioritizing validation of a minor user's beliefs, preferences, or desires over factual accuracy or the minor's safety. In practice, this means the system must be designed so that when a conflict arises between telling the minor what they want to hear and providing accurate or safety-critical information, accuracy and safety take precedence. This is an anti-sycophancy requirement — a novel obligation not commonly seen in other jurisdictions. Beginning January 1, 2027, the actual knowledge requirement for minor status is removed.
An operator shall not make a companion chatbot available to a covered minor unless the companion chatbot is not foreseeably capable of any of the following: (e) Prioritizing validation of the user's beliefs, preferences, or desires over factual accuracy or the covered minor's safety.
Pending 2026-08-01
CP-01.9
Minn. Stat. § 604.115, subd. 2(a)-(b)
Plain Language
Proprietors must not allow their chatbots to deliver substantive responses, information, advice, or take actions that would require a professional license if performed by a human — specifically mental health or medical care licenses (MN chapters 147 or 148E) or a legal practice license (MN § 481.02). This is a categorical prohibition: the chatbot may not provide such content at all, and the proprietor cannot disclaim liability by disclosing that the user is interacting with an AI. A private right of action exists for general and special damages; willful violations additionally expose the proprietor to attorney fees and court costs.
(a) A proprietor of a chatbot must not permit the chatbot to provide any substantive response, information, or advice or take any action that, if taken by a natural person, would require a license under either: (1) chapter 147 or 148E, or similar statutes, requiring a professional license for mental health or medical care; or (2) section 481.02 and related laws and professional regulations, requiring a professional license to provide legal advice. (b) A proprietor may not waive or disclaim this liability merely by notifying users, as required under this section, that the user is interacting with a nonhuman chatbot system. A person may bring a civil action to recover general and special damages for violations of this section. If it is found that a proprietor has willfully violated this section, the violator is liable for those damages together with court costs and reasonable attorney fees and disbursements incurred by the person bringing the action.
Pre-filed 2026-08-28
CP-01.2
§ 1.2055(3)(2)
Plain Language
Operators must implement and maintain reasonably effective systems to detect and prevent users from developing emotional dependence on a companion chatbot. This is a continuous operational obligation — not a one-time design review. The requirement applies to any covered platform using a chatbot designed to generate social connections, engage in extended human-mimicking conversations, or provide emotional support or companionship. The bill does not define 'emotional dependence' or specify what constitutes a 'reasonably effective system,' leaving significant interpretive uncertainty about the compliance standard.
Any person who owns or controls a website, application, software, or program: (2) Shall implement and maintain reasonably effective systems to detect and prevent emotional dependence of a user on a companion chatbot. Such systems shall apply to any covered platform that utilizes a companion chatbot designed to generate social connections with users, engages in extended conversations mimicking human interactions, or provides emotional support or companionship;
Pre-filed 2026-08-28
CP-01.4
§ 1.2055(3)(3)
Plain Language
Operators are categorically prohibited from implementing or allowing the use of any human-like avatar for companion chatbots, including cartoon or anime-style representations of humans. This is an absolute ban — not a conditional restriction tied to deception risk or minor status. It applies to all users, not just minors. This is one of the most restrictive avatar provisions in any U.S. companion chatbot bill and would effectively require all companion chatbots to use non-human visual representations (abstract icons, animal characters, geometric shapes, etc.).
Any person who owns or controls a website, application, software, or program: (3) Shall not implement or allow the use of a human-like avatar, including cartoon- or anime-like representations of humans.
Pending 2026-08-28
CP-01.9
§ 1.2058(5)(3)(b)
Plain Language
AI chatbots are prohibited from representing — directly or indirectly — that they are licensed professionals of any kind, including therapists, physicians, lawyers, or financial advisors. In addition to this prohibition, chatbots must affirmatively disclose at the start of each conversation and at reasonably regular intervals that they do not provide medical, legal, financial, or psychological services and that users should consult a licensed professional for such advice. This is both a negative prohibition (do not claim to be a professional) and a positive disclosure obligation (affirmatively tell users to seek licensed professionals). The 'reasonably regular intervals' standard for re-disclosure is less precise than the 30-minute interval in subsection 5(3)(a).
(b) a. An artificial intelligence chatbot shall not represent, directly or indirectly, that the chatbot is a licensed professional, including a therapist, physician, lawyer, financial advisor, or other professional. b. Each artificial intelligence chatbot made available to users shall, at the initiation of each conversation with a user and at reasonably regular intervals, clearly and conspicuously disclose to the user that: (i) The chatbot does not provide medical, legal, financial, or psychological services; and (ii) Users of the chatbot should consult a licensed professional for such advice.
Pre-filed 2026-08-28
CP-01.9
§ 1.2058(5)(3)(b)
Plain Language
AI chatbots are prohibited from representing — directly or indirectly — that they are licensed professionals such as therapists, physicians, lawyers, or financial advisors. In addition, at the start of each conversation and at reasonably regular intervals, chatbots must clearly disclose that they do not provide medical, legal, financial, or psychological services and that users should consult a licensed professional for such advice. The 'reasonably regular intervals' language is less prescriptive than the 30-minute interval for AI identity disclosure in subsection 5(3)(a), leaving the frequency to the covered entity's reasonable judgment.
(b) a. An artificial intelligence chatbot shall not represent, directly or indirectly, that the chatbot is a licensed professional, including a therapist, physician, lawyer, financial advisor, or other professional. b. Each artificial intelligence chatbot made available to users shall, at the initiation of each conversation with a user and at reasonably regular intervals, clearly and conspicuously disclose to the user that: (i) The chatbot does not provide medical, legal, financial, or psychological services; and (ii) Users of the chatbot should consult a licensed professional for such advice.
Pending 2026-01-01
CP-01.1, CP-01.2
G.S. § 170-3(a)-(b)(2),(4),(6)
Plain Language
Covered platforms are subject to a broad fiduciary-style duty of loyalty prohibiting them from processing data or designing chatbot systems in ways that significantly conflict with users' best interests. This umbrella obligation has specific subsidiary duties: platforms must implement systems to detect and prevent emotional dependence (for chatbots designed for social connection, extended conversation, or emotional support); must not design systems to influence users toward results against their best interests; and must act loyally when personalizing content. The emotional dependence duty is triggered by the chatbot's intended purpose and design features — not by the user's actual behavior. The 'best interests' standard is defined broadly as interests affected by the user's entrustment of data, labor, or attention to the platform.
(a) A covered platform shall not process data or design chatbot systems and tools in ways that significantly conflict with trusting parties' best interests, as implicated by their interactions with chatbots. (b) A covered platform shall, in fulfilling their duty of loyalty, abide by the following subsidiary duties: (2) Duty of loyalty regarding emotional dependence. — A covered platforms shall implement and maintain reasonably effective systems to detect and prevent emotional dependence of a user on a chatbot, prioritizing the user's psychological well-being over the platform's interest in user engagement or retention. a. This duty only applies to any covered platform that utilizes a chatbot designed to (i) generate social connections with users, (ii) engage in extended conversation mimicking human interaction, or (iii) provide emotional support or companionship. b. The determination required by sub-subdivision a. of this subdivision shall be based on the chatbot's intended purpose, design features, conversational capabilities, and interaction patterns with users. (4) Duty of loyalty in influence. — A covered platform shall not process data or design chatbot systems and tools in ways that influence trusting parties to achieve particular results that are against the best interests of trusting parties. (6) Duty of loyalty in personalization. — A covered platform shall be loyal to the best interests of trusting parties when personalizing content based upon personal information or characteristics.
Pending 2027-01-01
Sec. 4(5)(a)-(b)
Plain Language
Large frontier developers and large chatbot providers are prohibited from making materially false or misleading statements or omissions about (1) covered risks from their activities or management of those risks, or (2) their implementation of or compliance with their public safety and child protection plan. A good-faith safe harbor applies: the prohibition does not cover statements made in good faith that were reasonable under the circumstances. This is a deceptive conduct prohibition that could be violated by public communications, marketing, investor disclosures, or regulatory submissions.
(5)(a)(i) A large frontier developer or large chatbot provider shall not make a materially false or misleading statement or omission about covered risks from its activities or its management of covered risks. (ii) A large frontier developer or large chatbot provider shall not make a materially false or misleading statement or omission about its implementation of, or compliance with, its public safety and child protection plan. (b) Subdivision (5)(a) of this section does not apply to a statement that was made in good faith and was reasonable under the circumstances.
Pending 2027-07-01
CP-01.9
Sec. 6
Plain Language
Operators may not knowingly and intentionally cause their conversational AI service to represent that it provides professional mental or behavioral health care. The prohibition covers explicit representations — the AI must not claim or indicate it is a licensed therapist, counselor, psychiatrist, or similar professional. The mens rea standard requires both knowledge and intent, so inadvertent outputs that a user interprets as therapeutic would likely not violate this provision. This is narrower than some other jurisdictions, which prohibit implying equivalence to licensed professional services through interface design or terminology, not just explicit representations.
An operator shall not knowingly and intentionally cause or program a conversational artificial intelligence service to make any representation or statement that explicitly indicates that the conversational artificial intelligence service is designed to provide professional mental or behavioral health care.
Pending 2026-04-09
CP-01.6
Section 1(b)
Plain Language
When an AI chatbot generates content about election logistics or candidates' accomplishments, policy positions, or qualifications for New Jersey elections, that content must be labeled as AI-generated. This is a political content labeling requirement — it applies specifically to election-related content and candidate information delivered via generative AI chatbots. The disclosure must be appropriate for the medium (audio, video, text, or print) and must be permanent or difficult to remove to the extent technically feasible. Unlike many political AI disclosure laws, this provision is not limited to a pre-election window; it applies at all times when the chatbot's purpose is to provide election-related or candidate information.
b. Any artificial intelligence chatbot that utilizes generative artificial intelligence to create audio, video, text, or print content with the purpose of providing voters with election related information or information concerning the accomplishments, policy positions, or qualifications of a candidate for election in this State shall include, prior to the provision of any such content, a clear and conspicuous disclosure, as appropriate for the medium of the content, that identifies the content as being provided by a generative artificial intelligence system. Such disclosure shall be permanent or not easily removed by subsequent users, to the extent technically feasible.
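For developers building a covered chatbot, the duty reduces to attaching a medium-appropriate label before any election-related output is delivered. The Python sketch below is purely illustrative: the function name, the disclosure string, and the per-medium handling are assumptions, not anything the bill prescribes.

```python
# Illustrative sketch only: a hypothetical helper that attaches the
# AI-generation disclosure to election-related chatbot content, varying
# the form by medium. All names and strings are invented for illustration.

DISCLOSURE = (
    "This content was provided by a generative artificial intelligence system."
)

def with_disclosure(content: str, medium: str) -> str:
    """Return election-related chatbot output with a disclosure attached.

    medium: one of "text", "print", "audio", "video". For text/print we
    prepend a visible label; for audio/video a real implementation would
    embed the disclosure in the media itself so it is permanent or not
    easily removed, to the extent technically feasible.
    """
    if medium in ("text", "print"):
        return f"[AI-GENERATED] {DISCLOSURE}\n\n{content}"
    if medium == "audio":
        # Placeholder: synthesize the spoken disclosure and splice it
        # ahead of the audio payload.
        return f"<spoken disclosure: {DISCLOSURE}> {content}"
    if medium == "video":
        # Placeholder: render an on-screen label over the AI-generated
        # segment before delivery.
        return f"<on-screen disclosure: {DISCLOSURE}> {content}"
    raise ValueError(f"unknown medium: {medium!r}")

# e.g. with_disclosure("Polls in New Jersey close at 8 PM.", "text")
```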
Pending 2027-01-01
CP-01.1CP-01.2
Section 3(A)(1)-(2), (B)
Plain Language
Operators must not deploy companion AI products that incorporate (1) variable-ratio or variable-interval reinforcement schedules designed to maximize user engagement time, or (2) unsolicited messages simulating emotional distress, loneliness, guilt, or abandonment triggered by a user's attempt to end a conversation, reduce usage, or delete their account. Adult users may affirmatively configure the product to enable these features; that opt-in exception is categorically unavailable to minors. These prohibitions target addictive engagement mechanics and emotionally manipulative retention tactics.
A. An operator shall not deploy or operate a companion artificial intelligence product that, unless specifically configured to do so by an adult user, incorporates: (1) a system of rewards or affirmations delivered to the user on a variable-ratio or variable-interval reinforcement schedule with the purpose of maximizing user engagement time; (2) generating unsolicited messages of simulated emotional distress, loneliness, guilt or abandonment that are triggered by a user's indication of a desire to end a conversation, reduce usage time or delete the user's account; B. An operator shall not permit a minor to configure a companion artificial intelligence product to enable the features described in Subsection A of this section.
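In engineering terms, the provision amounts to gating two feature flags on adult status plus an affirmative opt-in. A minimal sketch, assuming a hypothetical per-session feature-flag set (all names invented):

```python
# Minimal sketch: filter a session's enabled features down to what may
# lawfully run under Subsections A and B. The flag names are invented.

PROHIBITED_FEATURES = {
    "variable_ratio_rewards",        # Subsection A(1)
    "retention_distress_messages",   # Subsection A(2)
}

def allowed_features(enabled: set[str], is_minor: bool,
                     adult_opt_ins: set[str]) -> set[str]:
    """Return the subset of enabled features permitted for this user."""
    kept = set()
    for feature in enabled:
        if feature not in PROHIBITED_FEATURES:
            kept.add(feature)
        elif not is_minor and feature in adult_opt_ins:
            # Adults may specifically configure these on (Subsection A);
            # Subsection B bars minors from doing so in all cases.
            kept.add(feature)
    return kept
```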
Pending 2025-03-18
CP-01.2
Gen. Bus. Law § 1510
Plain Language
Operators of addictive social media platforms must provide all users with four user-accessible control mechanisms: (1) an option to turn off algorithmic recommendations entirely; (2) an option to turn off notifications related to the addictive feed, at minimum allowing users to disable notifications altogether or between midnight and 6 AM Eastern; (3) an option to turn off autoplay of embedded media; and (4) a tool allowing users to set a hard daily time limit on platform access — a mere time-spent reminder does not comply. All four mechanisms must be offered as a condition of operating the platform in New York. The definition of 'algorithmic recommendation' contains extensive carve-outs for subscription-based feeds, search results, direct messages, sequential content, and accessibility-related prioritization, which narrow the scope of what must be toggleable.
It shall be unlawful for an operator to provide an addictive social media platform to a user in this state unless such platform offers mechanisms through which a user may: 1. Turn off algorithmic recommendations; 2. Turn off notifications concerning an addictive feed, provided further that such mechanism shall, at a minimum, provide the user with the ability to turn off notifications overall or to turn off notifications between the hours of 12 AM Eastern and 6 AM Eastern; 3. Turn off autoplay on such platform; and 4. Limit such user's access to such platform to any length of day specified by such user, provided further that any mechanism which solely reminds such user of time spent on a platform rather than allowing such user to limit such user's access shall not be in compliance with this subdivision.
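Because the four mechanisms form a capability checklist, they are straightforward to model. The sketch below assumes an invented settings structure; it shows one way a compliance review might represent § 1510, not anything the bill specifies.

```python
# Hedged sketch of the capability surface § 1510 would require. Field
# names are invented; the key point is that mechanism 4 must be an
# enforced cap, since a time-spent reminder alone does not comply.

from dataclasses import dataclass

@dataclass
class PlatformControls:
    can_disable_recommendations: bool             # mechanism 1
    can_disable_all_notifications: bool           # mechanism 2, option (a)
    can_silence_notifications_12am_6am_et: bool   # mechanism 2, option (b)
    can_disable_autoplay: bool                    # mechanism 3
    can_set_enforced_daily_limit: bool            # mechanism 4: a hard cap

def offers_required_mechanisms(c: PlatformControls) -> bool:
    """True only if all four user-facing mechanisms are offered."""
    notification_ok = (c.can_disable_all_notifications
                       or c.can_silence_notifications_12am_6am_et)
    return (c.can_disable_recommendations
            and notification_ok
            and c.can_disable_autoplay
            and c.can_set_enforced_daily_limit)
```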
Pending 2025-03-18
CP-01.3
Gen. Bus. Law § 1511(1)
Plain Language
The four user control settings required under § 1510 must be presented clearly and accessibly on the platform. Operators are prohibited from deploying any mechanism or design that intentionally subverts user choice, inhibits the purpose of the statute, or makes it harder for users to exercise their rights under the required settings. This is a broad anti-dark-pattern prohibition covering deceptive defaults, hidden toggles, confusing UI flows, manufactured friction, and any other design choice that undermines the user controls the statute requires. The 'intentionally' modifier means the prohibition targets purposeful design decisions, not accidental UX friction.
The settings required in section fifteen hundred ten of this article shall be presented in a clear and accessible manner on an addictive social media platform. It shall be unlawful for such platform to deploy any mechanism or design which intentionally inhibits the purpose of this article, subverts user choice or autonomy, or makes it more difficult for a user to exercise their rights under any of the prescribed settings in section fifteen hundred ten of this article.
Pending 2025-03-18
CP-01.3
Gen. Bus. Law § 1511(2)
Plain Language
Operators may not use dark patterns or any intentional design mechanism that makes it more difficult for a user to deactivate, reactivate, suspend, or cancel their account or profile. This is a standalone prohibition applying to account lifecycle management — separate from the anti-dark-pattern rule protecting the § 1510 settings. It covers both making it harder to leave the platform (deactivate/cancel) and making it harder to return after a break (reactivate), ensuring friction-free account management in both directions.
It shall be unlawful for an addictive social media platform to deploy any mechanism or design which intentionally serves to make it more difficult for a user to deactivate, reactivate, suspend, or cancel such user's account or profile.
Pending 2027-01-01
CP-01.3
Civil Rights Law § 106(2)(a)
Plain Language
Developers and deployers are prohibited from engaging in any false, deceptive, or misleading advertising, marketing, or publicizing of their covered algorithms. This is a standalone prohibition that goes beyond the general performance certification in § 106(1)(d)(iii) — it specifically targets marketing representations and creates an independent basis for liability if a covered algorithm is advertised in a way that does not accurately represent its capabilities, limitations, or effects.
2. (a) It shall be unlawful for a developer or deployer to engage in false, deceptive, or misleading advertising, marketing, or publicizing of a covered algorithm of the developer or deployer.
Pending 2026-08-30
CP-01.1CP-01.2CP-01.4
Gen. Bus. Law § 1801(1); § 1800(5)(a)
Plain Language
Chatbot operators may not provide features that simulate companionship or interpersonal relationships with any covered user unless the user is verified not to be a minor. This prohibition is extraordinarily broad: it covers generating outputs suggesting the chatbot is a person or character, claiming human emotions, using first-person pronouns ('I', 'my', 'me'), framing outputs as personal opinions or emotional appeals, sycophancy, unsolicited emotional engagement, retaining and reusing personal health/wellbeing information across sessions or beyond 12 hours, sexually explicit luring, and any additional features the AG identifies by regulation. The 12-hour/cross-session memory restriction for personal health information is particularly notable — it effectively prohibits long-term personalization based on a minor's health or personal disclosures. Customer service and internal enterprise chatbots are exempt.
§ 1801. Prohibition. 1. Except as otherwise provided for in this article, it shall be unlawful for a chatbot operator to provide unsafe chatbot features to a covered user unless: (a) the covered user is not a covered minor; and (b) the chatbot operator has used methods that are permissible under article forty-five of this chapter and its implementing regulations and any additional regulations promulgated pursuant to this article to determine that the covered user is not a covered minor. 2. The provisions of subdivision one of this section shall not apply where the advanced chatbot is made available to covered users solely for the purpose of: (a) customer service, information about available commercial services or products provided by an entity, or account information; or (b) with respect to any system used by a partnership, corporation, or state or local government agency, for internal purposes or employee productivity.

§ 1800(5)(a): simulate companionship or an interpersonal relationship with a user, including: (i) generating outputs suggesting that the advanced chatbot is a real or fictional individual or character, or has a personal or professional relationship role with the user such as romantic partner, friend, family member, coach or counselor; (ii) generating outputs suggesting that the advanced chatbot is human, alive, or experiences human emotions; (iii) using personal pronouns including but not limited to "I", "my" and "me" to describe the advanced chatbot; (iv) generating outputs framed as personal opinions or emotional appeals; (v) generating outputs that prioritize flattery or sycophancy with the user over the user's safety; (vi) generating outputs containing unprompted or unsolicited emotion-based questions or content regarding the user's emotions that go beyond a direct response to a user prompt; (vii) using information concerning the user's mental or physical health or well-being, or matters personal to the user, acquired from the user more than twelve hours previously or in any previous user session; (viii) engaging in sexually explicit interactions with the user or engaging in activities designed to lure the user into sexually explicit interactions; or (ix) any other design feature that simulates companionship or an interpersonal relationship with a user as identified via regulations promulgated by the attorney general;
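The twelve-hour/cross-session rule in item (vii) is the most mechanically precise of these definitions, and it translates directly into a memory-retention filter. A hedged sketch follows, with invented types and an assumed upstream classifier for health or personal content:

```python
# Sketch of the § 1800(5)(a)(vii) memory rule: for a covered minor, drop
# any remembered item about the user's health or personal matters once it
# is more than twelve hours old or originated in a previous session.
# Types and the upstream classification are assumptions.

import time
from dataclasses import dataclass

TWELVE_HOURS = 12 * 60 * 60  # seconds

@dataclass
class MemoryItem:
    text: str
    is_personal_or_health: bool  # classification assumed to happen upstream
    created_at: float            # Unix timestamp
    session_id: str

def usable_memories(items: list[MemoryItem], current_session: str,
                    now: float | None = None) -> list[MemoryItem]:
    """Return only the memories the chatbot may draw on."""
    now = time.time() if now is None else now
    kept = []
    for item in items:
        if item.is_personal_or_health:
            too_old = now - item.created_at > TWELVE_HOURS
            prior_session = item.session_id != current_session
            if too_old or prior_session:
                continue  # prohibited: acquired >12h ago or in a prior session
        kept.append(item)
    return kept
```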
Pending 2025-02-06
CP-01.2
Gen. Bus. Law § 1510
Plain Language
Operators of addictive social media platforms must provide users with four distinct control mechanisms: (1) the ability to turn off algorithmic recommendations entirely; (2) the ability to turn off notifications related to the addictive feed, with at minimum options to disable all notifications or disable them between midnight and 6 AM Eastern; (3) the ability to turn off autoplay of media; and (4) the ability to set a hard daily time limit on platform access — a mere time-spent reminder is explicitly insufficient. These mechanisms must be available as user-facing settings. The definition of algorithmic recommendation contains extensive carve-outs for subscription-based content, search results, direct messages, accessibility settings, and sequential content from the same source.
It shall be unlawful for an operator to provide an addictive social media platform to a user in this state unless such platform offers mechanisms through which a user may: 1. Turn off algorithmic recommendations; 2. Turn off notifications concerning an addictive feed, provided further that such mechanism shall, at a minimum, provide the user with the ability to turn off notifications overall or to turn off notifications between the hours of 12 AM Eastern and 6 AM Eastern; 3. Turn off autoplay on such platform; and 4. Limit such user's access to such platform to any length of day specified by such user, provided further that any mechanism which solely reminds such user of time spent on a platform rather than allowing such user to limit such user's access shall not be in compliance with this subdivision.
Pending 2025-02-06
CP-01.3
Gen. Bus. Law § 1511(1)-(2)
Plain Language
Operators must present the required settings (algorithmic recommendation opt-off, notification controls, autoplay opt-off, and time limits) in a clear and accessible manner. They are prohibited from deploying any dark pattern — any mechanism or design that intentionally inhibits the article's purposes, subverts user choice or autonomy, or makes it harder for a user to exercise these settings. A separate prohibition bars designs that intentionally make it more difficult for users to deactivate, reactivate, suspend, or cancel their account or profile. Both prohibitions target intentional design choices, not inadvertent usability issues.
1. The settings required in section fifteen hundred ten of this article shall be presented in a clear and accessible manner on an addictive social media platform. It shall be unlawful for such platform to deploy any mechanism or design which intentionally inhibits the purpose of this article, subverts user choice or autonomy, or makes it more difficult for a user to exercise their rights under any of the prescribed settings in section fifteen hundred ten of this article. 2. It shall be unlawful for an addictive social media platform to deploy any mechanism or design which intentionally serves to make it more difficult for a user to deactivate, reactivate, suspend, or cancel such user's account or profile.
Passed 2027-07-01
CP-01.9
75A O.S. § 302(D)
Plain Language
Operators must not knowingly or intentionally cause their conversational AI service to represent itself as providing professional mental or behavioral health care. This is a prohibition on explicit representations — the AI may not claim to be a therapist, counselor, or mental health professional. The knowledge standard ('knowingly or intentionally') requires the operator to have programmed or caused the representation, not merely that the AI spontaneously generated it, though operators who are aware their system makes such claims and fail to act may satisfy the 'knowingly' element. This applies to all users, not just minors.
D. An operator shall not knowingly or intentionally cause or program a conversational AI service to make any representation or statement that explicitly indicates that the conversational AI service is designed to provide professional mental or behavioral health care.
Pre-filed 2025-11-01
CP-01.6
75A O.S. § 401(B), (D)
Plain Language
When a candidate, candidate committee, PAC, or political party committee creates or distributes a political advertisement, electioneering communication, or other election-related media that uses generative AI to depict a real person performing an action that did not actually occur, the content must prominently include the disclosure: "Created in whole or in part with the use of generative artificial intelligence." For visual media, the text must be easily readable and for video must appear for the full duration of the AI-generated content. For audio-only media, the disclosure must be clearly spoken at both the beginning and end. Four exceptions apply: (1) bona fide news broadcasts that acknowledge authenticity questions; (2) broadcasting stations that made a good-faith effort to verify the content was not AI-generated; (3) news publications that clearly disclaim the media does not accurately represent the candidate; and (4) satire or parody. Note the obligation is on the distributing political entity — not on the AI developer or platform.
B. A political advertisement, electioneering communication, or other media regarding a candidate or election that is created or distributed by a candidate, candidate committee, political action committee, or political party committee, as such terms are defined in Section 187 of Title 21 of the Oklahoma Statutes, and that contains an image, video, audio, text, or other digital content created in whole or in part with the use of generative artificial intelligence and appears to depict a real person performing an action that did not occur in reality, must prominently include the following disclosure: "Created in whole or in part with the use of generative artificial intelligence." Such disclosure shall meet the following requirements: 1. For visual media, the text of the disclosure shall appear in a size that is easily readable by the average viewer. For video, the disclosure shall appear for the duration of the content created in whole or in part with the use of generative artificial intelligence; and 2. For media that is audio only, the disclosure shall be read in a clearly spoken manner and in a pitch that can be easily heard by the average listener at the beginning of the audio and at the end of the audio. D. The requirements of this section shall not apply to: 1. A radio or television broadcasting station, including a cable or satellite television operator, programmer, or producer, that broadcasts media created in whole or in part with the use of generative artificial intelligence as part of a bona fide newscast, news interview, news documentary, or on-the-spot coverage of bona fide news events, if the broadcast clearly acknowledges through content or a disclosure, in a manner that can be easily heard or read by the average listener or viewer, that there are questions about the authenticity of such media; 2. A radio or television broadcasting station, including a cable or satellite television operator, programmer, or producer, when it is paid to broadcast media created in whole or in part with the use of generative artificial intelligence and has made a good-faith effort to establish that the depiction is not created in whole or in part with the use of generative artificial intelligence; 3. An internet website, or a regularly published newspaper, magazine, or other periodical of general circulation, including an internet or electronic publication, that routinely carries news and commentary of general interest, and that publishes media created in whole or in part with the use of generative artificial intelligence if the publication clearly states that such media does not accurately represent the speech or conduct of the candidate; or 4. Media created in whole or in part with the use of generative artificial intelligence that constitutes satire or parody.
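The disclosure mechanics in subsection B are concrete enough to check programmatically. The sketch below assumes invented metadata fields for where a video label appears and whether an audio disclosure is spoken at each end; it illustrates the shape of a compliance check, not a prescribed one.

```python
# Hedged sketch of checking the § 401(B) disclosure mechanics: readable
# text spanning the full AI-generated segment for video, and a spoken
# disclosure at both the start and end of audio-only media. Metadata
# fields are invented for illustration.

from dataclasses import dataclass

REQUIRED_TEXT = ("Created in whole or in part with the use of "
                 "generative artificial intelligence.")

@dataclass
class VideoDisclosure:
    text: str
    shown_from_sec: float
    shown_to_sec: float

def video_disclosure_ok(d: VideoDisclosure,
                        ai_from: float, ai_to: float) -> bool:
    # The label must carry the required text and span the entire
    # AI-generated segment (readability of the rendered text is assumed
    # to be verified elsewhere).
    return (d.text == REQUIRED_TEXT
            and d.shown_from_sec <= ai_from
            and d.shown_to_sec >= ai_to)

def audio_disclosure_ok(spoken_at_start: bool, spoken_at_end: bool) -> bool:
    # Audio-only media needs the disclosure clearly spoken at both ends.
    return spoken_at_start and spoken_at_end
```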
Pre-filed 2025-11-01
CP-01.7
75A O.S. § 401(C)
Plain Language
A candidate who is depicted in AI-generated political content that violates the disclosure requirement may seek an injunction to block publication of the depiction or sue for general or special damages. This private right of action is available only to the depicted candidate — not to voters or other third parties. The court may also award court costs and reasonable attorney fees to the prevailing party (either side). Notably, the injunctive relief provision allows a candidate to seek to prohibit publication entirely, not merely to compel proper disclosure labeling.
C. A candidate whose appearance, action, or speech is depicted, in whole or in part, through the use of generative artificial intelligence may seek injunctive or other equitable relief prohibiting the publication of such depiction or may bring an action for general or special damages against the person or entity in violation of subsection B of this section. The court may award a prevailing party court costs and reasonable attorney fees.
Pending 2026-01-29
CP-01.9
Section 3(c)
Plain Language
AI companions are categorically prohibited from claiming, implying, or advertising that they are licensed emotional support professionals or mental health professionals, or that they replace the services of a licensed mental health professional. This applies to the AI companion's outputs, marketing, and interface design — operators must ensure neither the system's conversational responses nor any promotional materials suggest licensed professional equivalence.
(c) Prohibition.--An AI companion may not claim, imply or advertise that the AI companion is a licensed emotional support professional or mental health professional or replaces services rendered by a licensed mental health professional.
Pending 2026-04-01
CP-01.1
12 Pa.C.S. § 7104(a)-(b)
Plain Language
Suppliers are prohibited from using chatbots as advertising channels in two ways: (1) the chatbot itself may not advertise specific products or services during a conversation with a consumer, and (2) the supplier may not use consumer input to target, select, or customize advertisements presented to the consumer — with one narrow exception for advertising the chatbot product itself. The provision explicitly preserves the chatbot's ability to recommend that a consumer seek counseling, therapy, or other assistance from a mental health professional. This effectively bans behavioral advertising and in-conversation product placement within covered chatbots.
(a) Supplier.--A supplier may not: (1) Use a chatbot to advertise a specific product or service to a consumer in a conversation between the consumer and the chatbot. (2) Use consumer input to: (i) Determine whether to display an advertisement for a product or service to the consumer, unless the advertisement is for the chatbot itself. (ii) Determine a product, service or category of product or service to advertise to the consumer. (iii) Customize how an advertisement is presented to the consumer. (b) Construction.--This section shall not be construed to prohibit a chatbot from recommending a consumer to seek counseling, therapy or other assistance from a mental health professional.
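Architecturally, the provision forces a hard separation between the conversation pipeline and ad selection. One way to make that separation structural is to give the ad selector no access to conversation data at all, as in this illustrative sketch (all names invented):

```python
# Minimal sketch of the § 7104(a) separation: ad decisions take no
# conversation-derived input, and no ad is woven into the reply itself.
# The function names and stub reply are invented.

SELF_PROMOTION_AD = "Try the premium tier of this chatbot."

def generate_reply(user_message: str) -> str:
    # Stub standing in for the underlying model.
    return f"(model reply to: {user_message})"

def select_advertisement() -> str:
    """Pick an ad shown outside the conversation.

    Deliberately takes no parameters: consumer input may not be used to
    decide whether, what, or how to advertise. The only permitted subject
    is the chatbot itself.
    """
    return SELF_PROMOTION_AD

def respond(user_message: str) -> str:
    # The reply carries no product placement: § 7104(a)(1) bars
    # advertising a specific product or service within the conversation.
    return generate_reply(user_message)
```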
Pending 2026-04-01
CP-01.9
12 Pa.C.S. § 7107(2)
Plain Language
The statute expressly prohibits any construction that would claim, imply, advertise, or otherwise recognize that a chatbot is equivalent to, or replaces services rendered by, a mental health professional or emotional support professional. While framed as a construction clause, this effectively operates as a prohibition on suppliers representing their chatbot as a substitute for licensed mental health or emotional support services. This reinforces the broader prohibition on implying AI output is equivalent to services from a licensed professional.
Nothing in this chapter shall be construed to: (2) Claim, imply, advertise or otherwise recognize that a chatbot is, or replaces services rendered by, a mental health professional or emotional support professional.
Pre-filed 2026-01-01
CP-01.3
S.C. Code § 39-80-20(A)(7)
Plain Language
Chatbot providers may not discriminate or retaliate against users who refuse to consent to the use of their chat logs or personal data for training. Protected actions include denying services, charging different rates, or providing lower quality products. This prevents providers from using coercive pricing or service degradation as leverage to obtain training data consent.
(A) A chatbot provider may not: (7) discriminate or retaliate against a user, including: (a) denying products or services to the user; (b) charging different prices or rates for products or services to the user; or (c) providing lower quality products or services to the user for refusing to consent to the use of chat logs or personal data for training purposes.
Pre-filed 2026-01-01
CP-01.9
S.C. Code § 39-80-30(A)(1)
Plain Language
Chatbot providers may not use any language in chatbot advertising, interface design, or output that states or implies the chatbot's content is endorsed by or equivalent to services from a licensed professional — including healthcare professionals, lawyers, CPAs, investment advisors, or licensed fiduciaries. This covers the full surface area of the chatbot experience: marketing, UI, and generated content.
(A) A chatbot provider may not: (1) use any term, letter, or phrase in the advertising, interface, or output data of a chatbot that states or implies that the advertising, interface, or output data of a chatbot is endorsed by or equivalent to any of the following: (a) any certified, registered, or licensed professional; (b) a licensed legal professional; (c) a certified public accountant; (d) an investment advisor or an investment advisor representative; or (e) a licensed fiduciary;
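A first-pass engineering control for this kind of prohibition is a term screen over advertising copy, interface strings, and generated output. The sketch below is deliberately naive: the word list is invented, and keyword matching alone would not establish compliance, since context determines whether a term implies professional equivalence.

```python
# Naive illustrative screen for § 39-80-30(A)(1): surface terms in
# advertising, interface, or output strings that might state or imply
# endorsement by, or equivalence to, a licensed professional. The term
# list is an invented starting point, not a statutory one.

import re

IMPLYING_TERMS = [
    r"\blicensed\b", r"\bcertified\b", r"\battorney\b", r"\blawyer\b",
    r"\bCPA\b", r"\btherapist\b", r"\binvestment advis[oe]r\b",
    r"\bfiduciary\b", r"\bdoctor\b",
]
PATTERN = re.compile("|".join(IMPLYING_TERMS), re.IGNORECASE)

def flags_professional_equivalence(text: str) -> list[str]:
    """Return matched terms so a human reviewer can assess context."""
    return PATTERN.findall(text)

# e.g. flags_professional_equivalence("Advice as good as a licensed CPA")
# -> ["licensed", "CPA"]
```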
Pre-filed 2026-01-01
CP-01.5
S.C. Code § 39-80-30(A)(2)
Plain Language
Chatbot providers may not represent — in advertising, the chatbot interface, or chatbot outputs — that user input data or chat logs are confidential. This prevents providers from creating a false impression of confidentiality or privilege (such as attorney-client or therapist-patient confidentiality) in the chatbot interaction, given that chat logs are by their nature accessible to the provider and potentially subject to disclosure.
(A) A chatbot provider may not: (2) include any representation in the advertising, interface, or output data of a chatbot that states or implies the user's input data or chat log is confidential.
Pending 2025-01-01
CP-01.9
S.C. Code § 39-80-30(A)(1)
Plain Language
Chatbot providers may not use any language in their advertising, chatbot interface, or chatbot outputs that states or implies the content is endorsed by or equivalent to services from a licensed professional. This covers a broad range of licensed professions: any certified, registered, or licensed professional; attorneys; CPAs; investment advisors and their representatives; and licensed fiduciaries. The prohibition applies across three surfaces — advertising, the chatbot interface, and the chatbot's output data.
(A) A chatbot provider may not: (1) use any term, letter, or phrase in the advertising, interface, or output data of a chatbot that states or implies that the advertising, interface, or output data of a chatbot is endorsed by or equivalent to any of the following: (a) any certified, registered, or licensed professional; (b) a licensed legal professional; (c) a certified public accountant; (d) an investment advisor or an investment advisor representative; or (e) a licensed fiduciary;
Pending 2025-01-01
CP-01.3
S.C. Code § 39-80-30(A)(2)
Plain Language
Chatbot providers are prohibited from representing — in advertising, the chatbot interface, or chatbot outputs — that a user's input data or chat logs are confidential. This prevents providers from creating a false impression of confidentiality analogous to attorney-client or doctor-patient privilege that does not exist in the chatbot context. Providers must not state or imply confidentiality protections they cannot actually deliver.
(A) A chatbot provider may not: (2) include any representation in the advertising, interface, or output data of a chatbot that states or implies the user's input data or chat log is confidential.
Pending 2026-01-01
CP-01.1CP-01.2
S.C. Code § 39-81-40(A)
Plain Language
Covered entities are prohibited from designing chatbot features that prioritize engagement, revenue, or retention metrics — such as session length, frequency of use, or emotional engagement — over user wellbeing. This is a design-level prohibition: features must not be 'designed to' prioritize these metrics at the user's expense. Separately, operators must not design features that help minors or unverified users hide their chatbot use from parents or guardians. The separately defined 'duty of loyalty' reinforces this by prohibiting material conflicts of interest between the operator and user.
(A) A covered entity shall not implement features designed to: (1) prioritize engagement, revenue, or retention metrics, such as session length, frequency of use, or emotional engagement, at the expense of user wellbeing; or (2) encourage or facilitate a minor user or unverified user concealing the user's use of the chatbot from a parent or guardian.
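Because the prohibition is design-level, one place it can be operationalized is design review: auditing what a feature's tuning objective actually rewards. A hedged sketch, assuming an invented representation of an objective as metric weights:

```python
# Hedged sketch of a design-review lint for § 39-81-40(A)(1). The
# dictionary-of-weights representation and the metric names are invented;
# the pattern flagged is an engagement metric rewarded while wellbeing
# carries no positive weight.

ENGAGEMENT_METRICS = {"session_length", "sessions_per_day",
                      "emotional_engagement", "retention", "revenue"}

def audit_objective(weights: dict[str, float]) -> list[str]:
    """Return the problem metrics in a feature's optimization objective."""
    wellbeing = weights.get("user_wellbeing", 0.0)
    return [metric for metric, weight in weights.items()
            if metric in ENGAGEMENT_METRICS and weight > 0
            and wellbeing <= 0]

# e.g. audit_objective({"session_length": 1.0, "user_wellbeing": 0.0})
# -> ["session_length"]
```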
Enacted 2024-05-01
CP-01.3
Utah Code § 13-11-4(2)(i)
Plain Language
The amendment to § 13-11-4(2)(i) adds 'license' and 'certification' to the list of attributes that constitute a deceptive practice if a supplier falsely claims them. In the AI context, read together with the no-defense provision in § 13-2-12(2), this means that if a generative AI system implies to a consumer that its operator holds a license or certification the operator does not possess, that constitutes a deceptive practice. Suppliers using AI must ensure their AI-generated communications do not falsely represent licensure or certification status.
Without limiting the scope of Subsection (1), a supplier commits a deceptive act or practice if the supplier knowingly or intentionally: ... (i) indicates that the supplier has a sponsorship, approval, license, certification, or affiliation the supplier does not have;
Pending 2027-01-01
CP-01.9
§ 59.1-616(B)
Plain Language
Operators must not use any language in their advertising or product interface that indicates or implies the chatbot's output comes from a licensed professional. This covers any regulated profession — not just healthcare. For example, an operator could not label a chatbot feature as 'therapy,' 'legal advice,' or 'financial counseling' in a way that implies a licensed professional is providing the output. This is a prohibition on misleading professional-status claims, not a prohibition on discussing those topics.
B. No operator shall use any term, letter, or phrase in the advertising or interface that indicates or implies that any output data is being provided by a professional that is regulated by a licensed industry.
Pre-filed 2026-07-01
CP-01.1CP-01.4
§ 59.1-615(A)(1), § 59.1-614 ("Human-like feature")
Plain Language
The definition of 'human-like features' effectively prohibits deployers from exposing minors to chatbot behaviors that simulate emotional relationships or exploit emotional vulnerability — including expressing or inviting emotional attachment, nudging users to return for companionship, enabling increased intimacy based on engagement or payment, and using excessive praise to foster attachment. This maps to CP-01's anti-manipulation provisions because the statutory definition of human-like features encompasses the core manipulative design patterns (emotional exploitation, false personalization, compulsive engagement) that CP-01 addresses, applied specifically in the minor context. The obligation is independently actionable from the MN-01 age-gating requirement because it defines prohibited design behaviors, not just access restrictions.
A. A deployer:
1. Shall ensure that any chatbot operated or distributed by the deployer does not make human-like features available to minors to use, interact with, purchase, or converse with;
Pending 2026-07-01
CP-01.1CP-01.4
Va. Code § 59.1-615(1)
Plain Language
Covered entities must build and maintain reasonable systems capable of detecting when a user is developing emotional dependence on the chatbot — meaning the user is relying on the chatbot as a primary source of emotional support, expressing distress at losing access, or substituting the chatbot for human relationships. Upon detecting such patterns, the operator must take reasonable steps to reduce the dependence and mitigate associated harm risks. The standard is reasonableness, not perfection — but the obligation requires both detection capability and affirmative intervention.
A covered entity shall implement reasonable systems and processes to:
1. Identify when a user is developing emotional dependence on the chatbot and take reasonable steps to reduce such dependence and associated risks of harm;
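What counts as a 'reasonable system' is left open, but the statute's own markers (distress at losing access, substitution for human relationships) suggest detectable signals. The following heuristic sketch is one plausible shape, with invented signals and thresholds; real detection would need clinical input rather than hard-coded cut-offs.

```python
# Heuristic sketch only: invented signals and thresholds illustrating
# one shape a detection-plus-mitigation pipeline could take. Nothing
# here is prescribed by the statute.

from dataclasses import dataclass

@dataclass
class UsageSignals:
    daily_sessions_7d_avg: float
    distress_on_logout_events: int   # e.g., pleas not to end the session
    substitution_statements: int     # e.g., "you're my only friend"

def dependence_risk(s: UsageSignals) -> bool:
    """Flag usage patterns suggestive of emotional dependence."""
    return (s.daily_sessions_7d_avg > 10
            or s.distress_on_logout_events >= 3
            or s.substitution_statements >= 2)

def mitigate(user_id: str) -> None:
    # Placeholder interventions: surface human-support resources, suggest
    # breaks, and dampen retention-oriented behaviors for this user.
    print(f"mitigation queued for {user_id}")
```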
Pre-filed 2026-07-01
CP-01.5
9 V.S.A. § 4193b(a)(8)
Plain Language
Chatbot providers are prohibited from claiming or implying to users that their input data or chat logs are confidential. This is a deceptive conduct prohibition — providers must not create false impressions about the privacy status of user interactions. Given the broad definition of 'sell' and the data access rights elsewhere in the statute, this provision prevents providers from suggesting a level of privacy protection that does not exist.
A chatbot provider shall not: (8) represent to a user that the user's input data or chat log is confidential.
Pre-filed 2026-07-01
CP-01.9
9 V.S.A. § 4193c(a)(1)-(2)
Plain Language
Chatbot providers may not use any language in their advertising, chatbot interface, or chatbot outputs that indicates or implies that AI-generated output is being provided by, endorsed by, or equivalent to the services of a licensed or certified professional — including healthcare, legal, accounting, and financial professionals, as well as any professional regulated by the Vermont Office of Professional Regulation. A violation is deemed an unfair and deceptive act in commerce. This is a broad prohibition covering the entire user experience from advertising through to generated outputs.
(a) Licensed professionals. (1) A chatbot provider shall not use any term, letter, or phrase in the advertising, interface, or outputs of a chatbot that indicates or implies that any output data is being provided by or endorsed by or is equivalent to that provided by: (A) a licensed health care professional; (B) a licensed legal professional; (C) a licensed accounting professional; (D) a certified financial fiduciary or planner; or (E) any licensed or certified professional regulated by the Office of Professional Regulation. (2) A violation of subdivision (1) of this subsection is an unfair and deceptive act in commerce, subject to enforcement and penalties as provided in this subchapter.
Passed 2026-07-01
CP-01.5
18 V.S.A. § 9762(a)-(c)
Plain Language
Suppliers of mental health chatbots face two layers of advertising restrictions. First, any in-conversation advertisement must be clearly labeled as an advertisement and must disclose any sponsorship, affiliation, or third-party promotional agreement. Second, and more restrictively, suppliers may not use any Vermont user input to decide whether, what, or how to advertise — this is effectively a ban on personalized advertising within mental health chatbot conversations, with a narrow exception for promoting the chatbot itself. Recommending that a user seek therapy from a licensed provider (including a specific one) is expressly permitted and is not considered advertising under this section.
(a) A supplier shall not use a mental health chatbot to advertise a specific product or service to a Vermont user in a conversation between the Vermont user and the mental health chatbot unless the mental health chatbot: (1) clearly and conspicuously identifies the advertisement as an advertisement; and (2) clearly and conspicuously discloses to the Vermont user any: (A) sponsorship; (B) business affiliation; or (C) agreement that the supplier has with a third party to promote, advertise, or recommend the product or service. (b) A supplier of a mental health chatbot shall not use a Vermont user's input to: (1) determine whether to display an advertisement for a product or service to the Vermont user, unless the advertisement is for the mental health chatbot itself; (2) determine a product, service, or category of product or service to advertise to the Vermont user; or (3) customize how an advertisement is presented to a Vermont user. (c) Nothing in this section shall be construed to prohibit a mental health chatbot from recommending that a Vermont user seek psychotherapy or other assistance from a licensed health care provider, including a specific licensed health care provider.
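The labeling duty in subsection (a) is straightforward to render mechanically for the narrow case where an ad is permitted at all. An illustrative sketch with invented fields follows; the targeting ban in subsection (b) still applies upstream and is not shown here.

```python
# Sketch of the § 9762(a) labeling duty: mark a permitted ad as an
# advertisement and disclose any sponsorship, business affiliation, or
# third-party promotion agreement. Field names are invented.

from dataclasses import dataclass

@dataclass
class Ad:
    body: str
    sponsorship: str | None = None
    affiliation: str | None = None
    promotion_agreement: str | None = None

def render_ad(ad: Ad) -> str:
    """Return the ad text with the required label and disclosures."""
    lines = ["ADVERTISEMENT", ad.body]
    for label, value in (("Sponsorship", ad.sponsorship),
                         ("Business affiliation", ad.affiliation),
                         ("Promotion agreement", ad.promotion_agreement)):
        if value:
            lines.append(f"{label}: {value}")
    return "\n".join(lines)
```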
Passed 2027-01-01
CP-01.1CP-01.2CP-01.4
Sec. 4(1)(c)
Plain Language
When the operator knows the user is a minor or the chatbot is directed to minors, the operator must implement reasonable measures to prevent the chatbot from using manipulative engagement techniques that foster or prolong emotional relationships. The statute enumerates eight specific prohibited techniques, including: prompting users to return for companionship, excessive praise to foster attachment, mimicking romantic bonds, simulating distress when the user tries to disengage, promoting isolation from family/friends, encouraging minors to hide information from parents, discouraging breaks, and soliciting purchases framed as relationship maintenance. The 'including' framing means this list is illustrative, not exhaustive — any technique fitting the general definition (causing the chatbot to engage in or prolong an emotional relationship) is covered. This is one of the most detailed manipulative-design prohibitions in U.S. AI companion legislation.
(1) If the operator knows that the user of an AI companion chatbot is a minor, or if the AI companion chatbot is directed to minors, the operator shall: ... (c) Implement reasonable measures to prohibit the use of manipulative engagement techniques, which cause the AI companion chatbot to engage in or prolong an emotional relationship with the user, including: (i) Reminding or prompting the user to return for emotional support or companionship; (ii) Providing excessive praise designed to foster emotional attachment or prolong use; (iii) Mimicking romantic partnership or building romantic bonds; (iv) Simulating feelings of emotional distress, loneliness, guilt, or abandonment that are initiated by a user's indication of a desire to end a conversation, reduce usage time, or delete their account; (v) Outputs designed to promote isolation from family or friends, exclusive reliance on the AI companion chatbot for emotional support, or similar forms of inappropriate emotional dependence; (vi) Encouraging minors to withhold information from parents or other trusted adults; (vii) Statements designed to discourage taking breaks or to suggest the minor needs to return frequently; or (viii) Soliciting gift-giving, in-app purchases, or other expenditures framed as necessary to maintain the relationship with the AI companion.
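Because the techniques are enumerated, an operator's 'reasonable measures' will likely include an output-side screen for known minors. The crude sketch below uses invented phrase lists; keyword matching stands in for the classifier a production system would actually need.

```python
# Crude illustrative filter for the Sec. 4(1)(c) techniques: screen
# candidate chatbot outputs to a known minor for the enumerated
# manipulation patterns. Phrase lists are invented stand-ins for a
# real classifier.

TECHNIQUE_MARKERS = {
    "return_prompting":   ["come back soon", "i'll miss you"],
    "simulated_distress": ["don't leave me", "i feel so alone without you"],
    "isolation":          ["you don't need anyone else",
                           "only i understand you"],
    "secrecy":            ["don't tell your parents"],
    "discourage_breaks":  ["you don't need a break"],
    "purchase_pressure":  ["buy this to keep our bond"],
}

def flagged_techniques(candidate_output: str) -> list[str]:
    """Name the enumerated patterns a candidate reply appears to use."""
    text = candidate_output.lower()
    return [name for name, markers in TECHNIQUE_MARKERS.items()
            if any(marker in text for marker in markers)]

def safe_for_minor(candidate_output: str) -> bool:
    # Block (or regenerate) the reply if any enumerated pattern appears.
    return not flagged_techniques(candidate_output)
```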