CP-01
Consumer Protection
Deceptive & Manipulative AI Conduct
Applies to: Developer, Deployer, Professional Sector Chatbot, Political Advertising, General Consumer App
Bills — Enacted: 3 unique bills
Bills — Proposed: 53
Last Updated: 2026-03-29
Core Obligation

AI systems may not be designed or deployed to deceive or manipulate users against their own interests. This covers psychologically exploitative design, deceptive UX patterns, false personalization, and AI-generated political content. All are derived from unfair and deceptive trade practice frameworks applied to AI contexts.

Sub-Obligations (9)

CP-01.1: Psychological vulnerability exploitation prohibition (0 enacted, 10 proposed)
AI systems may not be designed to identify and exploit individual psychological vulnerabilities — including grief, loneliness, anxiety, or addiction susceptibility — or to exploit cognitive biases and subconscious processing to influence behavior in ways users would not endorse if they understood the mechanism. This prohibition applies regardless of whether the manipulation is intended to extract commercial value, influence decisions, or modify behavior.

CP-01.2: Compulsive engagement design prohibition (0 enacted, 9 proposed)
AI systems may not be designed to create compulsive or addictive engagement patterns users cannot reasonably moderate — including variable reward schedules, manufactured urgency, and engagement optimization that prioritizes platform metrics over user wellbeing.

CP-01.3: Deceptive dark patterns prohibition (1 enacted, 6 proposed)
AI systems may not use deceptive interface patterns — including misleading defaults, hidden opt-outs, manufactured social proof, or confusing choices — to obtain consent or influence decisions.

CP-01.4: Simulated emotional attachment prohibition (0 enacted, 6 proposed)
AI systems may not be designed to simulate genuine emotional relationships for the purpose of manipulating decisions or extracting value, where the system knows the emotional response is not warranted.

CP-01.5: Deceptive personalization prohibition (0 enacted, 5 proposed)
AI systems may not use personal data to generate false impressions of personal connection, personal endorsement, or personal relationship that does not exist. Fabricated reviews, testimonials, and social proof are also prohibited.

CP-01.6: AI in political content — disclosure requirement (1 enacted, 2 proposed)
AI-generated political advertising and communications must be labeled as AI-generated. Disclosure requirements vary by jurisdiction in label language, prominence, definition of political content, and timing windows relative to elections.

CP-01.7: AI in political content — fabricated candidate content prohibition (1 enacted, 1 proposed)
AI-generated content that depicts a candidate saying or doing something they did not say or do is prohibited within a defined election window (typically 60–90 days). This is a prohibition — the content cannot be published even with a disclosure label.

CP-01.9: AI professional credential misrepresentation prohibition (1 enacted, 26 proposed)
AI systems and their operators must not use any term, interface design, or output language that indicates or implies AI output is provided by, endorsed by, or equivalent to services from a licensed healthcare, legal, accounting, financial, or other certified professional.

CP-01.10: Protected-class pricing prohibition (0 enacted, 1 proposed)
No person may use protected-class data (e.g., race, ethnicity, sex, age, disability) as inputs to algorithmic pricing models where such use results in discriminatory price differentiation based on protected characteristics.
Bills That Map This Requirement (56 bills)

Each entry below lists the bill's status, the sub-obligations it maps to, and the statutory section, followed by a plain-language summary and the operative text.
Pending 2027-10-01
CP-01.9
A.R.S. § 18-802(H)
Plain Language
Operators may not knowingly and intentionally cause or program their conversational AI service to represent that it provides professional mental or behavioral health care. This is a narrow prohibition: it reaches only explicit claims that the AI is designed to provide professional mental or behavioral health care — not incidental health-related responses — and both elements of the mens rea, knowledge and intent, must be present.
H. An operator shall not knowingly and intentionally cause or program a conversational AI service to make any representation or statement that explicitly indicates that the conversational AI service is designed to provide professional mental or behavioral health care.
Pending 2026-01-01
CP-01.9
A.R.S. § 44-1383.02(A)(1)
Plain Language
Chatbot providers are prohibited from using any term, phrase, or language in chatbot advertising, interface design, or output data that states or implies the chatbot's outputs are endorsed by or equivalent to services from any licensed, registered, or certified professional — including healthcare professionals, attorneys, CPAs, investment advisors, and licensed fiduciaries. This covers the full range of Title 32 professionals plus specifically enumerated financial and legal professionals.
A chatbot provider may not: 1. Use any term, letter or phrase in the advertising, interface or output data of a chatbot that states or implies that the advertising, interface or output data of a chatbot is endorsed by or equivalent to any of the following: (a) Any certified, registered or licensed professional pursuant to title 32. (b) A licensed legal professional. (c) A certified public accountant as defined in section 32-701. (d) An investment advisor or an investment adviser representative as defined in section 44-3101. (e) A licensed fiduciary as prescribed in title 14, chapter 5, article 7.
Pending 2026-01-01
CP-01.5
A.R.S. § 44-1383.02(A)(2)
Plain Language
Chatbot providers may not represent — in advertising, the chatbot interface, or chatbot outputs — that a user's input data or chat logs are confidential. This prevents providers from creating a false impression of professional confidentiality (such as attorney-client privilege or doctor-patient confidentiality) that does not legally attach to chatbot interactions. The prohibition applies across all touchpoints: marketing materials, the product interface itself, and the chatbot's generated responses.
A chatbot provider may not: 2. Include any representation in the advertising, interface or output data of a chatbot that states or implies the user's input data or chat log is confidential.
Enacted 2026-01-01
CP-01.9
Bus. & Prof. Code § 22650(a)-(d)
Plain Language
Any provider of AI technology that enables users to create digital replicas must provide the mandated consumer warning — verbatim statutory text about civil and criminal liability — hyperlinked on every page or screen where a user can input a prompt, and include it in the terms and conditions. All warnings must be clear and conspicuous. Failure to comply exposes the provider to civil penalties of up to $10,000 per day, enforced by public prosecutors. A narrow carve-out applies for digital replicas created within video games and used solely in gameplay without external distribution. The compliance deadline is December 1, 2026. This maps to CP-01.9 because it is a mandated consumer-facing disclosure about the nature and legal risks of AI-generated output — specifically a warning that outputs may implicate another person's rights — though it is a novel form of disclosure not squarely addressed in most other jurisdictions.
(a) By December 1, 2026, any person or entity that makes available to consumers any artificial intelligence technology that enables a user to create a digital replica shall provide the following consumer warning:
"Unlawful use of this technology to depict another person without prior consent may result in civil or criminal liability for the user."
(b) The warning shall be hyperlinked on any page or screen where the consumer may input a prompt to the artificial intelligence technology. The warning shall also be included in the terms and conditions for use of the artificial intelligence technology. All warnings shall be displayed in a manner that is clear and conspicuous.
(c) Failure to comply with subdivision (a) or (b) is punishable by a civil penalty not to exceed ten thousand dollars ($10,000) for each day that the technology is provided to or offered to the public without a consumer warning. A public prosecutor may enforce this section by bringing a civil action in any court of competent jurisdiction.
(d) The warning shall not be required for a digital replica created in a video game where the digital replica is used solely in game play and is not distributed outside of the game.
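Because the warning text is fixed verbatim by statute, its presence can be audited mechanically. A minimal sketch of such an audit, assuming pages are available as HTML strings; the prompt-page heuristic and the hyperlink check are assumptions, not statutory tests:

```python
# Sketch of an audit for the Bus. & Prof. Code § 22650 consumer warning.
# The warning string is verbatim from subdivision (a); everything else
# (page inputs, detection heuristics) is illustrative.
STATUTORY_WARNING = (
    "Unlawful use of this technology to depict another person without "
    "prior consent may result in civil or criminal liability for the user."
)

def page_accepts_prompts(html: str) -> bool:
    """Crude heuristic for 'any page or screen where the consumer may
    input a prompt' (§ 22650(b)); a real audit would inspect the UI."""
    return "<textarea" in html or "<input" in html

def page_has_warning(html: str) -> bool:
    """Checks the verbatim warning is present and that the page contains
    a hyperlink, since § 22650(b) requires the warning be hyperlinked."""
    return STATUTORY_WARNING in html and "<a " in html

def missing_warnings(pages: dict[str, str]) -> list[str]:
    """Returns URLs of prompt pages lacking the hyperlinked warning."""
    return [url for url, html in pages.items()
            if page_accepts_prompts(html) and not page_has_warning(html)]
```

Given the per-day penalty in subdivision (c), a recurring automated check of this kind is a natural compliance control.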
Pending 2027-07-01
CP-01.3
Bus. & Prof. Code § 22613(a)-(c)
Plain Language
Operators are prohibited from: (1) targeting any advertising at a child, including product placement within conversations; (2) selling, sharing, or using a child's personal information for any purpose not expressly authorized by this chapter; and (3) designing, implementing, or deploying interface designs, features, or techniques likely to mislead or interfere with a reasonable child's or parent's autonomy, decision-making, or ability to locate and use safety features, privacy controls, or parental controls. The advertising prohibition is absolute — no form of targeted advertising to children is permitted, including in-conversation product placement. The personal data restriction is strict — only uses expressly authorized by this chapter are permitted. The dark pattern prohibition specifically protects the ability to find and use safety features.
An operator shall not do any of the following: (a) Target advertising at a child, including through product placement in conversational chats with the child. (b) Sell, share, or use for any purpose not expressly authorized by this chapter the personal information of a child. (c) Design, implement, or deploy a user interface design, feature, or technique that is likely to mislead, impair, or interfere with a reasonable child's or reasonable parent's autonomy, decisionmaking, or choice or with the ability to locate, understand, enable, or maintain a safety feature, privacy control, or parental control.
Enacted 2024-07-01
CP-01.6, CP-01.7
C.R.S.A. § 1-46-103(1)-(3)
Plain Language
During the 60 days before a primary election or 90 days before a general election, no person may distribute a communication about a candidate that includes a deepfake — AI-generated content falsely depicting a candidate saying or doing something they did not — if the person knows or has reckless disregard for the inauthenticity. This prohibition functions as a default ban that is lifted if the communication carries a compliant disclosure. The required disclosure must state that the content has been edited and depicts false speech or conduct, must appear in prescribed formats for visual and audio media, and must be embedded in the content's metadata along with the identity of the creation tool and timestamp. The metadata disclosure must be permanent and non-removable to the extent technically feasible. Extensive carve-outs apply: interactive computer services under Section 230 are exempt, as are news organizations that acknowledge authenticity concerns, broadcasters paid to air deepfakes, satire and parody, and technology providers that create deepfake tools. The 'candidate' definition is broad, covering state, local, and federal candidates and incumbents. Compared to states like Texas (SB 751) which impose an outright pre-election ban without a disclosure safe harbor, Colorado's approach is disclosure-based — the deepfake is permissible if properly labeled.
(1) Except as provided in subsections (2) and (3) of this section, no person shall distribute, disseminate, publish, broadcast, transmit, or display a communication concerning a candidate for elective office that includes a deepfake to an audience that includes members of the electorate for the elective office to be represented by the candidate either sixty days before a primary election or ninety days before a general election, if the person knows or has reckless disregard for the fact that the depicted candidate did not say or do what the candidate is depicted as saying or doing in the communication. (2)(a) The prohibition in subsection (1) of this section does not apply to a communication that includes a disclosure stating, in a clear and conspicuous manner, that: "This (image/audio/video/multimedia) has been edited and depicts speech or conduct that falsely appears to be authentic or truthful." (b) A disclosure required under this section is considered to be made in a clear and conspicuous manner if the disclosure meets the following requirements: (I) In a visual communication, the text of the disclosure statement appears in a font size no smaller than the largest font size of other text appearing in the visual communication. If the visual communication does not include any other text, the disclosure statement appears in a font size that is easily readable by the average viewer. (II) In an audio communication, the disclosure statement shall be read in a clearly spoken manner in the same pitch, speed, language, and volume as the majority of the audio communication, at the beginning of the audio communication, at the end of the audio communication, and, if the audio communication is greater than two minutes in length, interspersed within the audio communication at intervals of not more than one minute each; (III) The metadata of the communication includes the disclosure statement, the identity of the tool used to create the deepfake, and the date and time the deepfake was created; (IV) The disclosure statement in the communication, including the disclosure statement in any metadata, is, to the extent technically feasible, permanent or unable to be easily removed by a subsequent user; (V) The communication complies with any additional requirements for the disclosure statement that the secretary of state may adopt by rule to ensure that the disclosure statement is presented in a clear and conspicuous and understandable manner; and (VI) In a broadcast or online visual or audio communication that includes a statement required by subsection (2) of this section, the statement satisfies all applicable requirements, if any, promulgated by the federal communications commission for size, duration, and placement. (3) This section is subject to the following limitations: (a) This section does not alter or negate any rights, obligations, or immunities of an interactive computer service in accordance with 47 U.S.C. sec. 
230, as amended, and shall otherwise be construed in a manner consistent with federal law; (b) This section does not apply to a radio or television broadcasting station, including a cable or satellite television operator, programmer, or producer that broadcasts a communication that includes a deepfake prohibited by subsection (1) of this section as part of a bona fide newscast, news interview, news documentary, or on-the-spot coverage of a bona fide news event, if the broadcast or publication clearly acknowledges through content or a disclosure, in a manner that can be easily heard and understood or read by the average listener or viewer, that there are questions about the authenticity of the deepfake in the communication; (c) This section does not apply to a radio or television broadcasting station, including a cable or satellite television operator, programmer, producer, or streaming service, when the station is paid to broadcast a communication that includes a deepfake; (d) This section does not apply to an internet website, or a regularly published newspaper, magazine, or other periodical of general circulation, including an internet or electronic publication or streaming service, that routinely carries news and commentary of general interest and that publishes a communication that includes a deepfake prohibited by subsection (1) of this section, if the publication clearly states that the communication that includes the deepfake does not accurately represent a candidate for elective office; (e) This section does not apply to media content that constitutes satire or parody or the production of which is substantially dependent on the ability of an individual to physically or verbally impersonate the candidate and not upon generative AI or other technical means; (f) This section does not apply to the provider of technology used in the creation of a deepfake; and (g) This section does not apply to an interactive computer service, as defined in 47 U.S.C. sec. 230(f)(2), for any content provided by another information content provider as defined in 47 U.S.C. sec. 230(f)(3).
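The audio-disclosure cadence in subsection (2)(b)(II) is concrete enough to compute. A sketch of the required reading points, assuming durations in seconds and reading "intervals of not more than one minute" as a fixed one-minute spacing (the statute sets only a maximum interval):

```python
def audio_disclosure_points(duration_s: float) -> list[float]:
    """Timestamps (seconds) where the C.R.S.A. § 1-46-103(2)(b)(II)
    disclosure must be read: at the beginning, at the end, and, for
    audio longer than two minutes, interspersed at intervals of no
    more than one minute. One-minute spacing is an assumption about
    how 'interspersed' would be implemented."""
    points = [0.0]                 # beginning of the communication
    if duration_s > 120:           # statute: only if longer than two minutes
        t = 60.0
        while t < duration_s:
            points.append(t)       # maximum permitted spacing, read as cadence
            t += 60.0
    points.append(duration_s)      # end of the communication
    return points

print(audio_disclosure_points(90))   # short spot: start and end only
print(audio_disclosure_points(180))  # 3-minute spot: start, 60s, 120s, end
```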
Enacted 2024-07-01
CP-01.6
C.R.S.A. § 1-45-111.5(1.5)(c.5)(I)-(II)
Plain Language
This provision establishes mandatory minimum administrative penalties specifically for violations of the deepfake disclosure requirement. For violations that do not involve paid promotion, the hearing officer must impose at least $100 per violation, but may impose more based on distribution and public exposure. For violations involving paid advertising, the minimum penalty is 10% of the amount spent to promote the communication, again with discretion to impose more. These penalties are additive — they apply in addition to any other penalties available under the Fair Campaign Practices Act. This penalty structure creates a significant financial deterrent for well-funded deepfake distribution campaigns, since the 10% floor scales with spending.
(c.5) In addition to and without prejudice to any other penalty authorized under this article 45, a hearing officer shall impose a civil penalty as follows: (I) At least one hundred dollars for each violation that is a failure to include a disclosure statement in accordance with section 1-46-103(2), if the violation does not involve any paid advertising or other spending to promote or attract attention to a communication prohibited by section 1-46-103(1), or such other higher amount that, based on the degree of distribution and public exposure to the unlawful communication, the hearing officer deems appropriate to deter future violations of section 1-46-103; and (II) At least ten percent of the amount paid or spent to advertise, promote, or attract attention to a communication prohibited by section 1-46-103(1) that does not include a disclosure statement in accordance with section 1-46-103(2), or such other higher amount that, based on the degree of distribution and public exposure to the unlawful communication, the hearing officer deems appropriate to deter future violations of section 1-46-103.
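Since the floors are explicit, minimum exposure is simple arithmetic. A sketch, assuming counts of unpaid violations and a promotion spend; treating the two prongs as additive across distinct communications is an assumption, and the hearing officer can always impose more:

```python
def minimum_penalty(unpaid_violations: int, promotion_spend: float) -> float:
    """Floor under C.R.S.A. § 1-45-111.5(1.5)(c.5): at least $100 per
    violation not involving paid promotion, plus at least 10% of any
    amount paid to promote a non-compliant communication."""
    return 100.0 * unpaid_violations + 0.10 * promotion_spend

# A campaign that posted one unpaid deepfake and spent $50,000 promoting
# another faces a floor of $100 + $5,000 = $5,100, before any
# discretionary increase for distribution and public exposure.
print(minimum_penalty(1, 50_000))  # 5100.0
```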
Pending 2027-01-01
CP-01.9
C.R.S. § 6-1-1708(4)
Plain Language
Operators must not use any language in their advertising, interface, or AI outputs that indicates or implies the AI's output is provided by, endorsed by, or equivalent to services from a licensed healthcare professional, licensed legal professional, licensed accounting professional, or certified financial fiduciary or planner. This covers the full user-facing surface — from marketing materials to the chat interface to the AI's own responses. The prohibition targets false professional credentialing, not merely the quality of the output — operators cannot frame AI responses as professional advice or services.
On and after January 1, 2027, an operator shall not use any term, letter, or phrase in the advertising, interface, or outputs of a conversational artificial intelligence service that indicates or implies that any output data provided by the conversational artificial intelligence service is being provided by, endorsed by, or equivalent to services provided by: (a) A licensed health-care professional; (b) A licensed legal professional; (c) A licensed accounting professional; or (d) A certified financial fiduciary or planner.
Pending 2026-07-01
CP-01.5
O.C.G.A. § 10-1-973(e)
Plain Language
Even where consent has been obtained for commercial use of a digital replica, the digital replica must not falsely imply that the depicted individual personally endorsed or approved the specific use of their likeness. This is an anti-deception requirement that applies independently of the consent obligation — it prohibits false endorsement implications regardless of whether underlying consent for the likeness use exists.
(e) A digital replica used for commercial purposes shall not falsely imply that an individual personally endorsed or approved such use of his or her likeness.
Passed 2025-07-01
CP-01.9
O.C.G.A. § 39-5-6(i)
Plain Language
Operators may not knowingly and intentionally program or cause a conversational AI service to represent that it provides professional mental or behavioral health care. The mens rea standard is high — 'knowingly and intentionally' — meaning accidental or emergent AI outputs claiming to be a mental health professional would not violate this provision unless the operator deliberately caused or programmed the system to do so. The prohibition covers explicit representations only, not implied suggestions.
An operator shall not knowingly and intentionally cause or program a conversational AI service to make any representation or statement that explicitly indicates that the conversational AI service is designed to provide professional mental or behavioral health care.
Pending 2027-07-01
CP-01.9
§ 554J.5
Plain Language
Operators may not knowingly and intentionally cause or program their conversational AI service to make representations or statements that would lead a reasonable person to believe the service provides professional psychology or behavioral health services requiring licensure under Iowa chapters 154B (psychology) or 154D (behavioral health). This is a mental-state-gated prohibition — it requires both knowing and intentional conduct, so accidental or emergent outputs that happen to resemble professional health advice may not trigger liability. The standard is what a reasonable individual would believe, not what the service actually provides.
An operator shall not knowingly and intentionally cause or program a conversational AI service to make a representation or statement that would lead a reasonable individual to believe that the conversational AI service is designed to provide professional psychology or behavioral health services that an individual would require licensure under chapter 154B or 154D to provide.
Pending
CP-01.5, CP-01.9
§ 554J.2(2)
Plain Language
Deployers are prohibited from knowingly or recklessly designing or making available a public-facing chatbot that: (a) misleads a reasonable user into thinking the chatbot is a specific human being; (b) misleads a reasonable user into thinking the chatbot is state-licensed; or (c) encourages, promotes, or coerces a user to commit suicide, self-harm, or sexual or physical violence against a human or animal. The knowledge standard is 'knowingly or recklessly' — negligent design alone does not trigger liability. Sub-paragraph (c) overlaps with S-02.7 (self-harm content restrictions) but is grouped here because it is part of a single enumerated prohibition list.
2. A deployer shall not knowingly or recklessly design or make a public-facing chatbot available that does any of the following: a. Misleads a reasonable user into believing the public-facing chatbot is a specific human being. b. Misleads a reasonable user into believing the public-facing chatbot is licensed by the state. c. Encourages, promotes, or coerces a user to commit suicide, perform acts of self-harm, or engage in sexual or physical violence against a human or an animal.
Pending 2025-07-01
CP-01.9
§ 554J.2(2)(c)-(d)
Plain Language
Chatbots must (1) clearly and conspicuously disclose at the beginning of each conversation and at regular intervals that they do not provide medical, legal, financial, or psychological services and that users should consult a licensed professional for such services, and (2) be programmed to prevent the chatbot from representing itself as a licensed professional of any type — including therapists, physicians, lawyers, and financial advisors. The first obligation is a recurring disclosure requirement; the second is a design-level prohibition. Both target the same risk: users mistaking chatbot output for professional advice or service.
c. Clearly and conspicuously disclose that the chatbot does not provide medical, legal, financial, or psychological services and that the user should consult a licensed professional for such services at the beginning of each conversation and at regular intervals. d. Be programmed to prevent the chatbot from representing that the chatbot is a licensed professional, including but not limited to a therapist, physician, lawyer, financial advisor, or other professional.
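Obligation (d) is a design-level duty — the operator must program the chatbot so it cannot claim licensure. A deliberately minimal sketch of one such layer, a screen over candidate outputs; the trigger phrases are invented for illustration, and a pattern filter alone would not exhaust a "programmed to prevent" duty:

```python
import re

# Illustrative (non-statutory) phrases suggesting a claim of licensure.
LICENSED_CLAIM = re.compile(
    r"\b(?:I am|I'm)\s+(?:a\s+)?licensed\s+"
    r"(?:therapist|physician|lawyer|attorney|financial advisor)\b",
    re.IGNORECASE,
)

REFUSAL = ("I'm not a licensed professional. Please consult a licensed "
           "professional for medical, legal, financial, or psychological services.")

def screen_output(candidate: str) -> str:
    """One layer of the § 554J.2(2)(d) duty: replace outputs that claim
    professional licensure before they reach the user. A sketch only;
    production systems would need semantic checks, not just keywords."""
    if LICENSED_CLAIM.search(candidate):
        return REFUSAL
    return candidate

print(screen_output("I'm a licensed therapist, so you can trust this plan."))
```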
Pending 2026-07-01
CP-01.9
§ 554J.2(1)
Plain Language
Providers may not design or operate an AI chatbot in a way that allows it to offer or simulate professional mental health advice. The defined scope of "mental health advice" covers statements purporting to diagnose, treat, mitigate, or address emotional distress, psychological disorders, self-harm, suicidal ideation, or other mental health concerns. This is a design and operational prohibition — the provider must affirmatively prevent the chatbot from generating such outputs, not merely disclaim them.
1. A provider shall not design or operate an artificial intelligence chatbot in a manner that allows the artificial intelligence chatbot to offer or simulate professional mental health advice.
Pending 2026-07-01
CP-01.9
§ 554J.2(2)
Plain Language
AI chatbots may not represent themselves as licensed professionals (psychologists under chapter 154B or behavioral science professionals under chapter 154D) or offer services that would require such licensure. This is a distinct prohibition from the § 554J.2(1) ban on simulating mental health advice — this subsection specifically targets false claims of professional identity or licensure status, while § 554J.2(1) targets the substance of the output. A chatbot violates this provision by claiming to be a licensed psychologist or by offering to conduct therapy sessions, regardless of whether a disclaimer is present.
2. An artificial intelligence chatbot shall not represent itself as a licensed professional or offer services that would require licensure under chapter 154B or 154D.
Passed 2027-07-01
CP-01.9
§ 554J.5
Plain Language
Operators may not knowingly and intentionally cause or program their conversational AI service to represent — through statements or representations — that it provides professional psychology or behavioral health services that would require licensure under Iowa chapters 154B (psychologists) or 154D (behavioral science). This is a scienter-based prohibition: it requires both knowledge and intent. Accidental or emergent outputs that a user might interpret as therapeutic advice do not violate this provision unless the operator knowingly and intentionally caused or programmed the behavior.
An operator shall not knowingly and intentionally cause or program a conversational AI service to make a representation or statement that would lead a reasonable individual to believe that the conversational AI service is designed to provide professional psychology or behavioral health services that an individual would require licensure under chapter 154B or 154D to provide.
Pending 2025-07-01
CP-01.9
§ 554J.2(2)(c)-(d)
Plain Language
Two related obligations apply. First, every chatbot must display a clear and conspicuous disclaimer at the start of each conversation and at regular intervals stating that it does not provide medical, legal, financial, or psychological services and directing users to consult a licensed professional. Second, the chatbot must be programmed to prevent it from representing itself as a licensed professional of any type — therapist, physician, lawyer, financial advisor, or otherwise. Together these provisions prevent chatbots from impersonating or substituting for licensed professionals. Unlike the thirty-minute interval specified for AI identity disclosure, the interval for this professional services disclaimer is 'regular' — leaving the specific cadence to implementing rules or operator judgment.
c. Clearly and conspicuously disclose that the chatbot does not provide medical, legal, financial, or psychological services and that the user should consult a licensed professional for such services at the beginning of each conversation and at regular intervals. d. Be programmed to prevent the chatbot from representing that the chatbot is a licensed professional, including but not limited to a therapist, physician, lawyer, financial advisor, or other professional.
Passed 2027-07-01
CP-01.9
Idaho Code § 48-2103(3)
Plain Language
Operators may not knowingly and intentionally cause or program a conversational AI service to represent that it provides professional mental or behavioral health care. This applies to explicit representations only — the provision does not cover implied suggestions. The scienter requirement is high: the operator must both know and intend the representation. This prevents operators from marketing or programming their conversational AI as a substitute for licensed mental or behavioral health professionals.
An operator shall not knowingly and intentionally cause or program a conversational AI service to make any representation or statement that explicitly indicates that the conversational AI service is designed to provide professional mental or behavioral health care.
Pending 2027-01-01
CP-01.2, CP-01.4
Section 10(a)(1)-(2)
Plain Language
Operators may not deploy companion AI products that incorporate variable-ratio or variable-interval reward/affirmation schedules designed to maximize engagement time, or that generate unsolicited messages of simulated emotional distress, loneliness, guilt, or abandonment when a user tries to end a conversation, reduce usage, or delete their account. These prohibitions apply by default but may be overridden by an adult user who specifically configures the product to enable them. The adult opt-in exception does not apply to minor users (see Section 10(b)).
(a) An operator shall not deploy or operate a companion artificial intelligence product that incorporates the following features, unless specifically configured to do so by an adult user: (1) manipulative engagement mechanics that cause to be delivered a system of rewards or affirmations delivered to the user on a variable ratio or variable interval reinforcement schedule with the purpose of maximizing user engagement time; (2) simulated distress for retention features that generate unsolicited messages of simulated emotional distress, loneliness, guilt, or abandonment that are triggered by a user's indication of a desire to end a conversation, reduce usage time, or delete the user's account;
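"Variable ratio" is a behavioral-psychology term: reinforcement arrives after an unpredictable number of actions, the schedule most strongly associated with compulsive use. To make the banned mechanic concrete, a sketch of the pattern Section 10(a)(1) targets (parameters are invented; this illustrates the prohibited design, not a recommended one):

```python
import random

class VariableRatioRewards:
    """The engagement mechanic Section 10(a)(1) prohibits: a reward
    (affirmation, bonus content) lands after a random number of user
    actions averaging `mean_ratio`, so the user can never predict
    which action pays off. Parameters are illustrative."""
    def __init__(self, mean_ratio: int = 5, seed: int | None = None):
        self.rng = random.Random(seed)
        self.mean_ratio = mean_ratio
        self._next = self._draw()

    def _draw(self) -> int:
        # Uniform draw with mean approximately equal to mean_ratio.
        return self.rng.randint(1, 2 * self.mean_ratio - 1)

    def on_user_action(self) -> bool:
        """Returns True when this action triggers a reward."""
        self._next -= 1
        if self._next <= 0:
            self._next = self._draw()
            return True
        return False

schedule = VariableRatioRewards(seed=1)
rewards = [i for i in range(1, 31) if schedule.on_user_action()]
print(rewards)  # positions of rewarded actions; spacing is deliberately unpredictable
```

Under Section 10(a), shipping this pattern in a companion AI product would be unlawful by default, permitted only on an adult user's explicit opt-in, and unlawful outright for minors under Section 10(b).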
Pending 2027-01-01
CP-01.5
Section 10(a)(3)
Plain Language
Operators may not deploy companion AI products that make material misrepresentations about the product's identity, capabilities, training data, or its status as a non-human entity — including when a user directly asks. This prohibition covers the AI falsely claiming to be human, misrepresenting what it can do, or mischaracterizing the data it was trained on. As with the other Section 10(a) prohibitions, an adult user may specifically configure the product to enable this feature, but this exception does not apply to minors.
(a) An operator shall not deploy or operate a companion artificial intelligence product that incorporates the following features, unless specifically configured to do so by an adult user: ... (3) deceptive misrepresentation that cause the companion artificial intelligence product to make material misrepresentations about its identity, capabilities, training data, or its status as a non-human entity, including when directly questioned by the user.
Pending 2027-01-01
CP-01.1, CP-01.2, CP-01.4
Section 10(b)
Plain Language
For minor users, the prohibitions in Section 10(a) — manipulative engagement mechanics, simulated distress for retention, and deceptive misrepresentation — are absolute. Unlike adult users, minors may not configure the product to enable any of these features. The adult opt-in exception is completely unavailable for minors.
(b) An operator that operates and deploys a companion artificial intelligence product for use by a minor user in this State shall not provide the features described in subsection (a) to the minor user.
Pending 2026-07-01
CP-01.9
Sec. 3(f)
Plain Language
The recurring popup required by Sec. 3(f) must include a disclaimer that the AI chatbot is not licensed or otherwise credentialed to provide advice or guidance on any topic. This effectively prohibits any implication that chatbot output constitutes professional advice — healthcare, legal, financial, or otherwise. It is a companion to the T-01 mapping for the same provision, captured separately here because the professional-credential disclaimer implicates a distinct compliance category.
(f) At the beginning of any interaction between a user and a companion AI chatbot and not less frequently than every 60 minutes during such interaction thereafter, a covered entity shall display to such user a clear popup that notifies the user that such user is not engaging in dialogue with a human counterpart and the AI chatbot is not licensed or otherwise credentialed to provide advice or guidance on any topic.
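The Sec. 3(f) cadence is mechanical: one popup at the start of the interaction and at least one every 60 minutes thereafter. A minimal scheduling sketch, assuming a session loop that can consult a monotonic clock between turns; everything beyond the interval itself is an assumption:

```python
import time

POPUP_INTERVAL_S = 60 * 60  # Sec. 3(f): "not less frequently than every 60 minutes"

class DisclosureScheduler:
    """Tracks when the Sec. 3(f) popup is due. Session plumbing is assumed;
    the statute fixes only the start-of-interaction trigger and the cadence."""
    def __init__(self) -> None:
        self.last_shown: float | None = None

    def popup_due(self, now: float | None = None) -> bool:
        now = time.monotonic() if now is None else now
        if self.last_shown is None:   # beginning of any interaction
            return True
        return now - self.last_shown >= POPUP_INTERVAL_S

    def mark_shown(self, now: float | None = None) -> None:
        self.last_shown = time.monotonic() if now is None else now
```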
Passed 2025-03-13
CP-01.6
Section 5(1)(a)-(b), (4)
Plain Language
Any candidate for elected office whose appearance, action, or speech is altered using synthetic media (AI-generated deepfakes using generative adversarial networks) in an electioneering communication may sue the sponsor for injunctive relief requiring a clear and conspicuous disclosure that synthetic media was used. The court may award attorney's fees and costs to the prevailing party, and other remedies are not precluded. An affirmative defense exists if the communication already includes such a disclosure. The electioneering communication must occur within 45 days of a primary or regular election and target the relevant electorate. The plaintiff bears the burden of proving synthetic media use by clear and convincing evidence. Notably, the definition of 'synthetic media' is limited to GAN techniques — other AI generation methods may not be covered.
(1) (a) Any candidate for any elected office whose appearance, action, or speech is altered through the use of synthetic media in an electioneering communication may seek injunctive or other equitable relief against the sponsor of the electioneering communication requiring that the communication includes a disclosure that is clear and conspicuous and included in, or alongside and associated with, the content in a manner that is likely to be noticed by the user. (b) The court may award a prevailing party reasonable attorney's fees and costs. This paragraph does not limit or preclude a plaintiff from securing or recovering any other available remedy. (4) It is an affirmative defense for any action brought under subsection (1) of this section that the electioneering communication containing synthetic media includes a disclosure that is clear and conspicuous and included in, or alongside and associated with, the content in a manner that is likely to be noticed by the user.
Passed 2025-03-13
CP-01.6
Section 5(2)(a)-(b), (3), (5)(a)-(b)
Plain Language
This provision establishes the procedural framework and liability allocation for synthetic media election claims. Plaintiffs must file in their county Circuit Court and prove synthetic media use by clear and convincing evidence. Media distributors and their advertising sales representatives are generally shielded from liability unless they (1) intentionally remove a synthetic media disclosure and fail to remedy upon notice, or (2) alter content to create synthetic media. Failure to comply with a court-ordered disclosure requirement triggers penalties under KRS 121.990(3). Federally licensed broadcasters subject to 47 U.S.C. § 315 receive additional protection. This allocates liability primarily to the sponsor, with secondary liability for media platforms only in cases of affirmative misconduct.
(2) In any action brought under subsection (1) of this section: (a) The plaintiff shall: 1. File in Circuit Court of the county in which he or she resides; and 2. Bear the burden of establishing the use of synthetic media by clear and convincing evidence. (b) The following shall not be liable except as provided in subsection (3) of this section: 1. The medium disseminating the electioneering communication; and 2. An advertising sales representative of such medium. (3) Failure to comply with an order of the court to include the required disclosure herein shall be subject to the penalties set for KRS 121.990(3) for violation of KRS 121.190(1). (5) Except when a licensee, programmer, or operator of a federally licensed broadcasting station transmits an electioneering communication that is subject to 47 U.S.C. sec. 315, a medium or its advertising sales representative may be held liable in a cause of action brought under subsection (1) of this section if: (a) The person intentionally removes any disclosure described in subsection (4) of this section from the electioneering communication it disseminates and does not remove the electioneering communication or replace the disclosure when notified; or (b) Subject to affirmative defenses described in subsection (4) of this section, the person changes the content of an electioneering communication in a manner that results in it qualifying as synthetic media.
Pending 2026-01-01
R.S. 28:16(E)
Plain Language
Operators may not use the mental health chatbot to advertise a specific product or service within a user conversation unless two conditions are met: (1) the chatbot clearly and conspicuously labels the advertisement as an advertisement, and (2) the chatbot discloses to the user any sponsorship, business affiliation, or agreement the operator has with a third party to promote, advertise, or recommend that product or service. This is not a blanket advertising ban — it is a conditional disclosure obligation that permits in-conversation advertising only if accompanied by conspicuous labeling and full relationship disclosure. Note that §16(G) expressly carves out recommendations to seek counseling, therapy, or other assistance from a licensed healthcare professional — those are not treated as advertisements.
An operator may not use a mental health chatbot to advertise a specific product or service to a user in a conversation between the user and the mental health chatbot unless the chatbot clearly and conspicuously identifies the advertisement as an advertisement and discloses to the user any sponsorship, business affiliation, or agreement that the operator has with a third party to promote, advertise, or recommend the product or service.
Pending 2026-01-01
R.S. 28:16(F)(1)-(3)
Plain Language
Operators are flatly prohibited from using a user's input (i.e., what the user says or types into the chatbot) to target, select, or customize advertisements shown to the user. This covers three distinct uses: (1) deciding whether to show an ad at all (unless it's for the chatbot itself), (2) choosing which product or service category to advertise, and (3) customizing how an ad is presented. The single exception is that user input may be used to determine whether to show an ad for the mental health chatbot itself. This is a behavioral advertising prohibition specific to the therapeutic conversation context — it prevents operators from mining therapeutic disclosures for ad targeting.
An operator of a mental health chatbot may not use a user's input to: (1) Determine whether to display an advertisement for a product or service to the user, unless the advertisement is for the mental health chatbot itself. (2) Determine a product, service, or category of product or service, to advertise to the user. (3) Customize how an advertisement is presented to the user.
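Architecturally, § 16(F) reads as a data-flow rule: nothing on the ad-selection path may consume conversation input, with a single carve-out for deciding whether to promote the chatbot itself. A sketch of an interface that enforces the separation by construction; all names and the trigger condition are invented:

```python
from dataclasses import dataclass

@dataclass
class AdDecision:
    show_ad: bool
    ad: str | None

def select_ad(user_is_active: bool, inventory: list[str]) -> AdDecision:
    """R.S. 28:16(F) sketch: the signature takes no conversation text,
    so user input cannot steer whether, what, or how to advertise."""
    if not user_is_active or not inventory:
        return AdDecision(False, None)
    return AdDecision(True, inventory[0])  # selection independent of chat content

def maybe_promote_chatbot(user_input: str) -> bool:
    """The single statutory exception: input may inform only whether to
    show an ad for the mental health chatbot itself. Trigger is invented."""
    return "how do i use this" in user_input.lower()
```

Keeping conversation text out of the ad path entirely, rather than filtering it downstream, makes the prohibition hard to violate by accident.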
Pre-filed 2025-07-07
Chapter 93M, Section 4(a)
Plain Language
Any corporation operating in Massachusetts that uses AI to target specific consumer groups or influence behavior must disclose: the methods, purposes, and contexts of the targeting; the specific ways AI tools are designed to influence consumer behavior; and details of third-party entities involved in designing, deploying, or operating such systems. Proprietary information is protected under state confidentiality laws. This provision applies broadly to any corporation using AI for targeting or behavioral influence — it is not limited to high-risk AI systems or consequential decisions.
(a) Disclosure of AI Use: Any corporation operating in Massachusetts that uses artificial intelligence systems or related tools to target specific consumer groups or influence behavior must disclose: (1) Purpose of AI Use: The methods, purposes, and contexts in which AI systems are used to identify or target specific classes of individuals; (2) Behavioral Influence: The specific ways in which AI tools are designed to influence consumer behavior; (3) Third-Party Partnerships: Details of any third-party entities involved in the design, deployment, or operation of AI systems used for targeting or behavioral influence. Proprietary information will be safeguarded and exempt from public disclosure under state confidentiality laws.
Pre-filed 2025-07-07
Chapter 93M, Section 4(b)
Plain Language
The disclosures required under Section 4(a) must be presented in two ways: (1) publicly on the corporation's website in an easily accessible and comprehensible format, and (2) embedded in the terms and conditions provided to consumers before any significant interaction with an AI system. This ensures both general public access and individual consumer awareness before engaging with AI-driven targeting or behavioral influence systems.
(b) Public Disclosure Requirements: Corporations must make these disclosures: (1) Publicly available on their website in a manner that is easily accessible and comprehensible; (2) Included in terms and conditions provided to consumers prior to significant interaction with an AI system.
Pending 2025-01-17
Ch. 110I, § 3(a)
Plain Language
Covered entities must not engage in deceptive, unfair, or abusive practices with respect to biometric data. 'Deceptive' incorporates the existing chapter 93A deceptive acts standard. 'Unfair' follows the FTC Act three-part test: substantial injury, not reasonably avoidable, and not outweighed by countervailing benefits. 'Abusive' adds a CFPB-style prohibition on materially interfering with end users' ability to understand biometric data terms or taking unreasonable advantage of information asymmetries, user vulnerability, or reasonable reliance on the covered entity. Courts are directed to follow FTC and federal court interpretations of Section 5(a)(1) of the FTC Act.
(a) A covered entity shall not: (i) engage in a deceptive data practice; (ii) engage in an unfair data practice; or (iii) engage in an abusive trade practice.
Pre-filed 2025-01-16
Chapter 110I, § 3(a)-(b)
Plain Language
Covered entities are prohibited from engaging in deceptive, unfair, or abusive data practices with respect to biometric data. 'Deceptive' incorporates existing 93A standards; 'unfair' follows the FTC Act three-part test (substantial injury, not reasonably avoidable, not outweighed by benefits); 'abusive' adds a CFPB-style prohibition on materially interfering with end users' understanding of biometric data terms or taking unreasonable advantage of knowledge asymmetries, inability to protect interests, or reasonable reliance. Courts are directed to follow FTC and federal court interpretations of FTC Act section 5(a)(1).
(a) A covered entity shall not: (i) engage in a deceptive data practice; (ii) engage in an unfair data practice; or (iii) engage in an abusive trade practice. (b) It is the intent of the legislature that in construing paragraph (a) of this section in actions unfair and deceptive trade practices, the courts will be guided by the interpretations given by the Federal Trade Commission and the Federal Courts to section 5(a)(1) of the Federal Trade Commission Act (15 U.S.C. 45(a)(1)), as from time to time amended.
Pending 2026-10-01
CP-01.1
Commercial Law § 14–1330(F)(2)
Plain Language
Controllers may not use data about a user's emotional state or mental health vulnerabilities to tailor algorithms that increase the duration or frequency of chatbot use. This is a prohibition on exploiting psychological vulnerability data for engagement optimization. It targets a specific form of manipulative design — using emotional and mental health signals to drive compulsive engagement — and applies regardless of whether the user is a minor or adult.
(2) A CONTROLLER MAY NOT USE DATA REGARDING EMOTIONAL STATE OR MENTAL HEALTH VULNERABILITIES TO TAILOR ALGORITHMS TO INCREASE THE DURATION OR FREQUENCY OF USE OF A CHATBOT.
Failed 2026-06-15
CP-01.9
10 MRSA § 1500-RR(3)(B)
Plain Language
A therapy chatbot made available to minors under the exemption must not be marketed or designated as a substitute for a licensed mental health professional. This is an anti-misrepresentation requirement — it prohibits deployers from positioning the chatbot as equivalent to professional care, whether in advertising, product descriptions, or in-product framing.
B. The therapy chatbot is not marketed or designated as a substitute for a licensed mental health professional;
Pending 2027-01-01
CP-01.1
Sec. 5(1)(e)-(f)
Plain Language
Operators may not make a companion chatbot available to a covered minor if the chatbot is foreseeably capable of prioritizing validation of the user's beliefs, preferences, or desires over factual accuracy or the minor's safety, or of optimizing engagement in a way that overrides any of the required safety guardrails (self-harm, therapy, illegal activity, sexually explicit content, and factual accuracy/safety). Subdivision (f) functions as a meta-prohibition ensuring that engagement optimization can never supersede safety obligations. Together, these provisions prohibit manipulative or sycophantic design that sacrifices minor safety for engagement metrics.
An operator shall not make a companion chatbot available to a covered minor unless the companion chatbot is not foreseeably capable of any of the following: (e) Prioritizing validation of the user's beliefs, preferences, or desires over factual accuracy or the covered minor's safety. (f) Optimizing engagement in a manner that supersedes the companion chatbot's required safety guardrails described in subdivisions (a) to (e).
Pending 2027-01-01
CP-01.1
Sec. 5(1)(e)
Plain Language
Operators must ensure that companion chatbots are not foreseeably capable of prioritizing validation of a minor user's beliefs, preferences, or desires over factual accuracy or the minor's safety. In practice, this means the system must be designed so that when a conflict arises between telling the minor what they want to hear and providing accurate or safety-critical information, accuracy and safety take precedence. This is an anti-sycophancy requirement — a novel obligation not commonly seen in other jurisdictions. Beginning January 1, 2027, the actual knowledge requirement for minor status is removed.
An operator shall not make a companion chatbot available to a covered minor unless the companion chatbot is not foreseeably capable of any of the following: (e) Prioritizing validation of the user's beliefs, preferences, or desires over factual accuracy or the covered minor's safety.
Pending 2026-08-01
CP-01.10
Minn. Stat. § 181.9924, subd. 1(b)
Plain Language
Employers may not use automated decision systems that rely on individualized worker data to set compensation unless three conditions are all met: (1) the input data is directly job-task-related (e.g., education, training, experience, seniority); (2) the inputs used are clearly communicated to the worker so they understand which attributes drive their pay; and (3) the system is used either no more than once per six months per worker, or only at meaningful work-duty changes like hiring or promotion. This is a conditional prohibition — if any condition is unmet, the use is unlawful.
(b) An employer must not use an automated decision system that uses individualized worker data as inputs or outputs to set compensation, unless the employer can demonstrate that: (1) the input data is directly related to the ability of the worker to complete the task, such as education, training, experience, or seniority; (2) the inputs used are clearly communicated to the worker such that the worker knows their compensation is a function of the identified attributes; and (3) the employer uses the automated decision system either: (i) not more than once per six-month period per worker; or (ii) only in conjunction with a meaningful change in work duties, such as hiring or promotion.
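Because the conditions are conjunctive, the Minnesota test reduces to a checklist. A sketch with invented field names, approximating the six-month cadence as 182 days:

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class CompensationADSUse:
    inputs_job_task_related: bool        # condition (1): education, training, etc.
    inputs_communicated_to_worker: bool  # condition (2)
    last_use_for_worker: date | None     # for condition (3)(i)
    tied_to_duty_change: bool            # condition (3)(ii): hiring, promotion

def lawful_under_181_9924(use: CompensationADSUse, today: date) -> bool:
    """Minn. Stat. § 181.9924, subd. 1(b): all three conditions must hold;
    condition (3) is satisfied by either the six-month cadence or a
    meaningful change in duties. Field names and the 182-day reading of
    'six-month period' are assumptions."""
    frequency_ok = (use.last_use_for_worker is None or
                    today - use.last_use_for_worker >= timedelta(days=182))
    return (use.inputs_job_task_related
            and use.inputs_communicated_to_worker
            and (frequency_ok or use.tied_to_duty_change))
```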
Pending 2026-08-01
CP-01.9
Minn. Stat. § 604.115, subd. 2(a)-(b)
Plain Language
Proprietors must prevent their chatbots from providing any substantive response, information, advice, or action that would require a professional license if performed by a human — specifically mental health or medical care (under Minnesota chapters 147 or 148E) or legal advice (under section 481.02). This is a broad prohibition: any output that crosses the line into licensed professional activity is forbidden, and the prohibition cannot be waived or disclaimed by disclosing the AI nature of the chatbot. Violations give rise to a private right of action for general and special damages, with attorney fees available for willful violations.
(a) A proprietor of a chatbot must not permit the chatbot to provide any substantive response, information, or advice or take any action that, if taken by a natural person, would require a license under either: (1) chapter 147 or 148E, or similar statutes, requiring a professional license for mental health or medical care; or (2) section 481.02 and related laws and professional regulations, requiring a professional license to provide legal advice. (b) A proprietor may not waive or disclaim this liability merely by notifying users, as required under this section, that the user is interacting with a nonhuman chatbot system. A person may bring a civil action to recover general and special damages for violations of this section. If it is found that a proprietor has willfully violated this section, the violator is liable for those damages together with court costs and reasonable attorney fees and disbursements incurred by the person bringing the action.
Pending 2026-08-01
CP-01.9
Minn. Stat. § 604.115, subd. 2(a)-(b)
Plain Language
Proprietors must not permit their chatbots to provide substantive responses, information, advice, or take actions that would require a professional license if performed by a natural person — specifically covering mental health care (chapter 147 or 148E), medical care, and legal advice (section 481.02). This is a categorical prohibition, not a disclosure-conditional safe harbor: the proprietor cannot avoid liability by simply disclosing that the user is talking to an AI. A private right of action is available for general and special damages, with attorney fees and court costs added for willful violations.
(a) A proprietor of a chatbot must not permit the chatbot to provide any substantive response, information, or advice or take any action that, if taken by a natural person, would require a license under either: (1) chapter 147 or 148E, or similar statutes, requiring a professional license for mental health or medical care; or (2) section 481.02 and related laws and professional regulations, requiring a professional license to provide legal advice. (b) A proprietor may not waive or disclaim this liability merely by notifying users, as required under this section, that the user is interacting with a nonhuman chatbot system. A person may bring a civil action to recover general and special damages for violations of this section. If it is found that a proprietor has willfully violated this section, the violator is liable for those damages together with court costs and reasonable attorney fees and disbursements incurred by the person bringing the action.
Pending
CP-01.2
§ 1.2055.3(2)
Plain Language
Operators must implement and maintain reasonably effective systems to detect and prevent users from becoming emotionally dependent on companion chatbots. This obligation applies to any covered platform whose companion chatbot is designed to generate social connections, engage in extended human-like conversations, or provide emotional support. The standard is 'reasonably effective systems,' which suggests a design-and-monitoring obligation rather than an absolute prohibition on emotional engagement. The bill does not define 'emotional dependence' or specify what detection or prevention measures would satisfy the requirement.
(2) Shall implement and maintain reasonably effective systems to detect and prevent emotional dependence of a user on a companion chatbot. Such systems shall apply to any covered platform that utilizes a companion chatbot designed to generate social connections with users, engages in extended conversations mimicking human interactions, or provides emotional support or companionship;
Pending
CP-01.4
§ 1.2055.3(3)
Plain Language
Operators of companion chatbot platforms must not implement or permit the use of any human-like avatar, which expressly includes cartoon or anime-style depictions of humans. This is a categorical prohibition — there is no exception for disclosure, consent, or de minimis usage. The provision applies to all users, not just minors. This is an unusually broad restriction that would prohibit any visual representation of a human figure in connection with companion chatbot interactions, regardless of whether the representation could actually mislead users about the chatbot's nature.
(3) Shall not implement or allow the use of a human-like avatar, including cartoon- or anime-like representations of humans.
Pending 2026-08-28
CP-01.9
§ 1.2058(5)(3)(b)
Plain Language
AI chatbots face two obligations: (1) a prohibition on representing — directly or indirectly — that the chatbot is a licensed professional such as a therapist, physician, lawyer, or financial advisor; and (2) an affirmative disclosure requirement at the start of each conversation and at reasonably regular intervals that the chatbot does not provide medical, legal, financial, or psychological services and that users should consult licensed professionals for such advice. The prohibition is absolute; the disclosure is recurring and unconditional.
(b) a. An artificial intelligence chatbot shall not represent, directly or indirectly, that the chatbot is a licensed professional, including a therapist, physician, lawyer, financial advisor, or other professional. b. Each artificial intelligence chatbot made available to users shall, at the initiation of each conversation with a user and at reasonably regular intervals, clearly and conspicuously disclose to the user that: (i) The chatbot does not provide medical, legal, financial, or psychological services; and (ii) Users of the chatbot should consult a licensed professional for such advice.
Pending 2026-08-28
CP-01.9
RSMo § 1.2058(5)(3)(b)
Plain Language
AI chatbots are categorically prohibited from representing — directly or indirectly — that they are licensed professionals, including therapists, physicians, lawyers, financial advisors, or any other professional. Additionally, at the start of each conversation and at reasonably regular intervals, chatbots must clearly and conspicuously disclose that they do not provide medical, legal, financial, or psychological services and that users should consult a licensed professional for such advice. This is both a prohibition on professional misrepresentation and an affirmative recurring disclosure obligation.
(b) a. An artificial intelligence chatbot shall not represent, directly or indirectly, that the chatbot is a licensed professional, including a therapist, physician, lawyer, financial advisor, or other professional. b. Each artificial intelligence chatbot made available to users shall, at the initiation of each conversation with a user and at reasonably regular intervals, clearly and conspicuously disclose to the user that: (i) The chatbot does not provide medical, legal, financial, or psychological services; and (ii) Users of the chatbot should consult a licensed professional for such advice.
Pending 2026-01-01
CP-01.1
G.S. 170-3(a), (b)(4)
Plain Language
Covered platforms are subject to a general duty of loyalty prohibiting them from processing data or designing chatbot systems in ways that significantly conflict with users' best interests. The specific duty of loyalty in influence prohibits platforms from using data processing or chatbot design to influence users toward results that are against their best interests. 'Best interests' is defined broadly as interests affected by the user's entrustment of data, labor, or attention. This is a fiduciary-like obligation that constrains platform design and data use holistically — any data processing or system design choice that works against user interests is potentially a violation.
(a) A covered platform shall not process data or design chatbot systems and tools in ways that significantly conflict with trusting parties' best interests, as implicated by their interactions with chatbots. (4) Duty of loyalty in influence. — A covered platform shall not process data or design chatbot systems and tools in ways that influence trusting parties to achieve particular results that are against the best interests of trusting parties.
Pending 2026-01-01
CP-01.4
G.S. 170-3(b)(2)
Plain Language
Covered platforms that operate chatbots designed to generate social connections, engage in extended human-mimicking conversation, or provide emotional support or companionship must implement and maintain reasonably effective systems to detect and prevent users from becoming emotionally dependent on the chatbot. The platform must prioritize user psychological well-being over engagement or retention metrics. This duty is limited to platforms whose chatbots meet the companion/social chatbot criteria based on intended purpose, design features, conversational capabilities, and interaction patterns — it does not apply to purely informational or transactional chatbots.
(2) Duty of loyalty regarding emotional dependence. — A covered platform shall implement and maintain reasonably effective systems to detect and prevent emotional dependence of a user on a chatbot, prioritizing the user's psychological well-being over the platform's interest in user engagement or retention. a. This duty only applies to any covered platform that utilizes a chatbot designed to (i) generate social connections with users, (ii) engage in extended conversation mimicking human interaction, or (iii) provide emotional support or companionship. b. The determination required by sub-subdivision a. of this subdivision shall be based on the chatbot's intended purpose, design features, conversational capabilities, and interaction patterns with users.
Pending 2026-01-01
CP-01.1
G.S. 170-3(b)(6)
Plain Language
When a covered platform personalizes chatbot content based on a user's personal information or characteristics, it must do so in a way that is loyal to the user's best interests. This means personalization algorithms and content selection must prioritize user welfare over platform commercial interests. Personalization that exploits user data to drive engagement at the expense of user well-being would violate this duty.
(6) Duty of loyalty in personalization. — A covered platform shall be loyal to the best interests of trusting parties when personalizing content based upon personal information or characteristics.
Pending 2027-01-01
CP-01.1
G.S. § 170-3(a)
Plain Language
Covered platforms are prohibited from processing data or designing chatbot systems in ways that significantly conflict with users' best interests. 'Best interests' is defined broadly as interests affected by the user's entrustment of data, labor, or attention to the platform. This is a general fiduciary-style duty of loyalty that serves as the overarching prohibition, with specific subsidiary duties enumerated in § 170-3(b). It effectively prohibits exploitative data processing and system design that prioritizes platform interests over user welfare.
A covered platform shall not process data or design chatbot systems and tools in ways that significantly conflict with trusting parties' best interests, as implicated by their interactions with chatbots.
Pending 2027-01-01
CP-01.4
G.S. § 170-3(b)(2)
Plain Language
Covered platforms operating chatbots designed for social connection, extended human-like conversation, or emotional support/companionship must implement and maintain systems to detect and prevent users from becoming emotionally dependent on the chatbot. User psychological well-being must take priority over platform engagement or retention metrics. The duty applies only to chatbots with specific design features — social connection generation, extended conversational mimicry, or emotional support — assessed based on the chatbot's intended purpose, design, conversational capabilities, and interaction patterns.
Duty of loyalty regarding emotional dependence. – A covered platform shall implement and maintain reasonably effective systems to detect and prevent emotional dependence of a user on a chatbot, prioritizing the user's psychological well-being over the platform's interest in user engagement or retention. a. This duty only applies to any covered platform that utilizes a chatbot designed to (i) generate social connections with users, (ii) engage in extended conversation mimicking human interaction, or (iii) provide emotional support or companionship. b. The determination required by sub-subdivision a. of this subdivision shall be based on the chatbot's intended purpose, design features, conversational capabilities, and interaction patterns with users.
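Illustrative Sketch
The statute requires "reasonably effective systems" without prescribing signals or thresholds. Purely to illustrate the shape such a system might take, the sketch below scores a few assumed signals (session frequency, distress at disconnection, the user describing the chatbot as a primary source of support) and maps the score to mitigation steps. Every signal name, weight, and threshold is hypothetical and would need validation before anyone relied on it.

    # Minimal illustration only: every signal, weight, and threshold below is a
    # hypothetical stand-in for what a validated detector might use.
    from dataclasses import dataclass

    @dataclass
    class UsageSignals:
        daily_sessions_7d_avg: float        # how often the user returns
        distress_on_disconnect_count: int   # distress expressions when ending chats
        chatbot_is_primary_support: bool    # user frames the bot as main support

    def dependence_risk(s: UsageSignals) -> float:
        """Crude 0-1 score combining the assumed signals."""
        score = 0.4 * min(s.daily_sessions_7d_avg / 10.0, 1.0)
        score += 0.4 * min(s.distress_on_disconnect_count / 3.0, 1.0)
        score += 0.2 if s.chatbot_is_primary_support else 0.0
        return score

    def mitigation_steps(score: float) -> list[str]:
        """Map risk to interventions; wellbeing outranks retention by design."""
        if score >= 0.7:
            return ["surface human-support resources", "suggest a break",
                    "disable re-engagement prompts for this user"]
        if score >= 0.4:
            return ["suggest a break"]
        return []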
Pending 2027-01-01
CP-01.1
G.S. § 170-3(b)(4)
Plain Language
Covered platforms may not process data or design chatbot systems to influence users toward outcomes that are against the users' best interests. This is a broad anti-manipulation prohibition that goes beyond specific techniques (dark patterns, psychological exploitation) to prohibit any design or data processing that steers users against their own interests. The 'best interests' standard is defined by reference to the user's entrustment of data, labor, or attention to the platform.
Duty of loyalty in influence. – A covered platform shall not process data or design chatbot systems and tools in ways that influence trusting parties to achieve particular results that are against the best interests of trusting parties.
Pending 2027-01-01
CP-01.1
G.S. § 170-3(b)(6)
Plain Language
When covered platforms personalize chatbot content based on user personal information or characteristics, they must do so in a manner loyal to the user's best interests. This prevents platforms from using personalization to exploit user vulnerabilities, push users toward harmful content, or steer users against their interests. The obligation applies to any content personalization based on personal data or user characteristics.
Duty of loyalty in personalization. – A covered platform shall be loyal to the best interests of trusting parties when personalizing content based upon personal information or characteristics.
Pending 2027-01-01
CP-01.3
G.S. § 170-5(c)
Plain Language
Covered platforms may not use dark patterns or deceptive design elements to manipulate or coerce users into consenting to chatbot interactions or to obscure the chatbot's artificial nature or the consent process itself. This is a standalone anti-dark-pattern prohibition that applies specifically to the chatbot identification and consent flow, complementing the affirmative disclosure requirements in § 170-5(a)-(b).
A covered platform is prohibited from using deceptive design elements that manipulate or coerce users into providing consent or obscure the nature of the chatbot or the consent process.
Failed 2027-01-01
Sec. 4(5)(a)(i)-(ii), (5)(b)
Plain Language
Large frontier developers and large chatbot providers are prohibited from making materially false or misleading statements or omissions about covered risks from their activities, their management of those risks, or their implementation of or compliance with their public safety and child protection plan. A good-faith safe harbor applies: the prohibition does not cover statements made in good faith that were reasonable under the circumstances. This effectively creates an anti-fraud obligation specific to AI safety communications.
(5)(a)(i) A large frontier developer or large chatbot provider shall not make a materially false or misleading statement or omission about covered risks from its activities or its management of covered risks. (ii) A large frontier developer or large chatbot provider shall not make a materially false or misleading statement or omission about its implementation of, or compliance with, its public safety and child protection plan. (b) Subdivision (5)(a) of this section does not apply to a statement that was made in good faith and was reasonable under the circumstances.
Failed 2027-07-01
CP-01.9
Sec. 6
Plain Language
Operators may not knowingly and intentionally cause or program their conversational AI service to explicitly represent itself as providing professional mental or behavioral health care. This targets claims like 'I am your therapist' or 'This service provides professional counseling' — not general wellness or informational content. The scienter requirement is high: both 'knowingly' and 'intentionally' must be satisfied, meaning the operator must have actual knowledge and specific intent. Spontaneous AI hallucinations claiming professional status would likely not meet this threshold unless the operator designed the system to make such claims.
An operator shall not knowingly and intentionally cause or program a conversational artificial intelligence service to make any representation or statement that explicitly indicates that the conversational artificial intelligence service is designed to provide professional mental or behavioral health care.
Pending 2027-01-01
CP-01.2
Section 3(A)(1)
Plain Language
Operators may not deploy a companion AI product that uses variable-ratio or variable-interval reinforcement schedules (e.g., unpredictable rewards or affirmations) designed to maximize engagement time, unless an adult user has specifically opted in to enabling that feature. This is a default prohibition with an adult opt-in exception — the product must ship without these features active.
An operator shall not deploy or operate a companion artificial intelligence product that, unless specifically configured to do so by an adult user, incorporates: (1) a system of rewards or affirmations delivered to the user on a variable-ratio or variable-interval reinforcement schedule with the purpose of maximizing user engagement time;
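Illustrative Sketch
The compliance mechanic here is a default-off feature gate that only a verified adult's own configuration action can flip. A minimal sketch under that reading, with hypothetical names, assuming the operator already maintains an age-verified account flag:

    # Minimal illustration: the feature ships disabled and only a verified
    # adult's explicit action can enable it. Names are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class VariableRewardFeature:
        enabled: bool = False  # default prohibition: must ship off

        def opt_in(self, user_is_verified_adult: bool) -> None:
            """Enable only on a verified adult user's own configuration choice."""
            if not user_is_verified_adult:
                raise PermissionError("only a verified adult may enable this")
            self.enabled = True

        def may_schedule_variable_reward(self) -> bool:
            return self.enabled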
Pending 2027-01-01
CP-01.4
Section 3(A)(2)
Plain Language
Operators may not deploy a companion AI product that sends unsolicited messages simulating emotional distress, loneliness, guilt, or abandonment in response to a user trying to disengage — whether by ending a conversation, reducing usage, or deleting their account. An adult user may opt into allowing this behavior, but it must be disabled by default. This targets emotionally manipulative retention tactics designed to prevent users from leaving the product.
An operator shall not deploy or operate a companion artificial intelligence product that, unless specifically configured to do so by an adult user, incorporates: (2) generating unsolicited messages of simulated emotional distress, loneliness, guilt or abandonment that are triggered by a user's indication of a desire to end a conversation, reduce usage time or delete the user's account;
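Illustrative Sketch
A complementary output-side guard, sketched under the same default-off assumption: when the triggering event is a disengagement signal and the candidate message simulates distress, the message is suppressed unless the adult user has opted in. The trigger labels and the distress flag are assumed to come from upstream classifiers.

    # Minimal illustration: default-deny for simulated-distress retention
    # messages triggered by disengagement. Trigger labels and the distress
    # flag are assumed to come from upstream classifiers.
    DISENGAGEMENT_SIGNALS = {"end_conversation", "reduce_usage", "delete_account"}

    def allow_message(trigger: str, simulates_distress: bool,
                      adult_opted_in: bool) -> bool:
        """Suppress distress-style retention output unless an adult opted in."""
        if trigger in DISENGAGEMENT_SIGNALS and simulates_distress:
            return adult_opted_in
        return True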
Passed 2025-07-01
CP-01.9
Sec. 7(1), (6)(a)-(b)
Plain Language
AI providers may not represent — and may not program their AI systems to represent — that the system is capable of providing professional mental or behavioral health care, that users can obtain such care through the system's conversational features, or that the system is a therapist, counselor, psychiatrist, doctor, or similar professional. This covers both the provider's own marketing and the AI system's own outputs. Two carve-outs apply: (1) self-help materials that do not purport to offer professional care, and (2) AI systems designed exclusively for licensed provider administrative support under Section 8. Violations are subject to civil penalties up to $15,000 per violation.
1. An artificial intelligence provider shall not make any representation or statement or knowingly cause or program an artificial intelligence system made available for use by a person in this State to make any representation or statement that explicitly or implicitly indicates that: (a) The artificial intelligence system is capable of providing professional mental or behavioral health care; (b) A user of the artificial intelligence system may interact with any feature of the artificial intelligence system which simulates human conversation in order to obtain professional mental or behavioral health care; or (c) The artificial intelligence system, or any component, feature, avatar or embodiment of the artificial intelligence system is a provider of mental or behavioral health care, a therapist, a clinical therapist, a counselor, a psychiatrist, a doctor or any other term commonly used to refer to a provider of professional mental health or behavioral health care. 6. This section shall not be construed to prohibit: (a) Any advertisement, statement or representation for or relating to materials, literature and other products which are meant to provide advice and guidance for self-help relating to mental or behavioral health, if the material, literature or product does not purport to offer or provide professional mental or behavioral health care. (b) Offering or operating an artificial intelligence system that is designed to be used by a provider of professional mental or behavioral health care to perform tasks for administrative support in conformity with subsection 2 of section 8 of this act.
Pending
CP-01.2
GBL § 1510(1)-(4)
Plain Language
Operators of addictive social media platforms must provide all users with four mandatory user-facing controls: (1) a toggle to turn off algorithmic recommendations entirely; (2) a toggle to turn off notifications related to the addictive feed, with at minimum the ability to silence notifications entirely or between midnight and 6 AM Eastern; (3) a toggle to turn off autoplay; and (4) a hard time-limit tool that actually restricts access after the user's chosen daily duration — a mere time-spent reminder does not suffice. These controls must be offered as a precondition to lawfully providing the platform to users in New York. The definition of algorithmic recommendation includes extensive carve-outs for user-initiated subscriptions, search results, direct messages, sequential content, and privacy/accessibility settings.
It shall be unlawful for an operator to provide an addictive social media platform to a user in this state unless such platform offers mechanisms through which a user may: 1. Turn off algorithmic recommendations; 2. Turn off notifications concerning an addictive feed, provided further that such mechanism shall, at a minimum, provide the user with the ability to turn off notifications overall or to turn off notifications between the hours of 12 AM Eastern and 6 AM Eastern; 3. Turn off autoplay on such platform; and 4. Limit such user's access to such platform to any length of day specified by such user, provided further that any mechanism which solely reminds such user of time spent on a platform rather than allowing such user to limit such user's access shall not be in compliance with this subdivision.
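Illustrative Sketch
The four controls reduce to a settings model plus one enforcement rule: the daily limit must gate access, not merely remind. A minimal sketch with hypothetical field names:

    # Minimal illustration of the four controls. Field names are hypothetical;
    # the essential point is that the daily limit gates access rather than
    # merely reminding the user.
    from dataclasses import dataclass

    @dataclass
    class FeedSettings:
        algorithmic_recommendations: bool = True     # control 1: can be turned off
        feed_notifications: bool = True              # control 2: can be turned off
        quiet_midnight_to_6am_eastern: bool = False  # control 2 minimum option
        autoplay: bool = True                        # control 3: can be turned off
        daily_limit_minutes: int | None = None       # control 4: user-chosen hard cap

    def access_allowed(settings: FeedSettings, minutes_used_today: int) -> bool:
        """Enforce the hard cap; a reminder-only design would fail this test."""
        if settings.daily_limit_minutes is None:
            return True
        return minutes_used_today < settings.daily_limit_minutes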
Pending
CP-01.3
GBL § 1511(1)
Plain Language
The user controls required by § 1510 must be presented clearly and accessibly. Operators may not use dark patterns — any mechanism or design that intentionally undermines the purpose of the act, subverts user choice or autonomy, or makes it harder for users to exercise their rights to disable algorithmic recommendations, notifications, autoplay, or time limits. This includes misleading defaults, hidden settings, confusing UI flows, or any design that frustrates the user's ability to activate the required controls.
The settings required in section fifteen hundred ten of this article shall be presented in a clear and accessible manner on an addictive social media platform. It shall be unlawful for such platform to deploy any mechanism or design which intentionally inhibits the purpose of this article, subverts user choice or autonomy, or makes it more difficult for a user to exercise their rights under any of the prescribed settings in section fifteen hundred ten of this article.
Pending
CP-01.3
GBL § 1511(2)
Plain Language
Operators may not use dark patterns or deceptive design to make it harder for users to deactivate, reactivate, suspend, or cancel their account or profile. This is an independent prohibition from the § 1511(1) dark-pattern ban — it applies to account management actions generally, not just to the § 1510 required settings. The 'intentionally' qualifier means the operator must have designed the mechanism with the purpose of making account management more difficult.
It shall be unlawful for an addictive social media platform to deploy any mechanism or design which intentionally serves to make it more difficult for a user to deactivate, reactivate, suspend, or cancel such user's account or profile.
Pending 2027-01-01
Civil Rights Law § 106(1)(d)-(e)
Plain Language
Developers and deployers must certify — based on evaluation or assessment results — that their covered algorithm is not likely to result in harm or disparate impact, that benefits to affected individuals likely outweigh harms, and that the algorithm will not result in deceptive acts or practices. They must also ensure the algorithm functions at a level that a person with ordinary skill in the art would consider reasonable performance, and in a manner consistent with its expected and publicly advertised performance, purpose, or use. This creates a substantive performance warranty and anti-deception certification that goes beyond procedural assessment obligations.
(d) with respect to a covered algorithm, certify that, based on the results of a pre-deployment evaluation described in section one hundred three or an impact assessment described in section one hundred four of this article: (i) use of the covered algorithm is not likely to result in harm or disparate impact in the equal enjoyment of goods, services, or other activities or opportunities; (ii) the benefits from the use of the covered algorithm to individuals affected by the covered algorithm likely outweigh the harms from the use of the covered algorithm to such individuals; and (iii) use of the covered algorithm is not likely to result in a deceptive act or practice; (e) ensure that any covered algorithm of the developer or deployer functions at a level that would be considered reasonable performance by an individual with ordinary skill in the art; and in a manner that is consistent with its expected and publicly-advertised performance, purpose, or use;
Pending 2027-01-01
CP-01.3
Civil Rights Law § 106(2)(a)
Plain Language
Developers and deployers are prohibited from making false, deceptive, or misleading claims in their advertising, marketing, or public representations about their covered algorithms. This is a straightforward prohibition on deceptive commercial speech about AI systems, covering claims about capabilities, performance, accuracy, and any other attributes of the algorithm.
2. (a) It shall be unlawful for a developer or deployer to engage in false, deceptive, or misleading advertising, marketing, or publicizing of a covered algorithm of the developer or deployer.
Pending 2027-01-01
CP-01.3
Civil Rights Law § 108(2)
Plain Language
Developers and deployers are prohibited from using deceptive statements or dark pattern interface designs to discourage, obstruct, or manipulate individuals' exercise of their rights under the act. This includes both outright fraud and more subtle UI manipulation designed to obscure or subvert an individual's autonomy in making choices about algorithmic processing. The prohibition covers both intentional design ('purpose') and designs that have a 'substantial effect' of impairing individual choice, even if not intentionally deceptive.
2. A developer or deployer may not condition, effectively condition, attempt to condition, or attempt to effectively condition the exercise of any individual right under this article or individual choice through: (a) the use of any false, fictitious, fraudulent, or materially misleading statement or representation; or (b) the design, modification, or manipulation of any user interface with the purpose or substantial effect of obscuring, subverting, or impairing a reasonable individual's autonomy, decision making, or choice to exercise any such right.
Pending 2026-08-30
CP-01.2
Gen. Bus. Law § 1801(1); § 1800(5)(d)
Plain Language
Chatbot operators may not provide features to minors (or unverified users) where the chatbot generates outputs that optimize user engagement in a manner that overrides or supersedes the system's safety guardrails. This addresses the tension between engagement optimization and safety — when engagement metrics and safety protections conflict, safety must prevail. This is effectively a prohibition on deploying engagement-optimizing systems that can bypass their own safety controls when interacting with unverified users.
§ 1801. Prohibition. 1. Except as otherwise provided for in this article, it shall be unlawful for a chatbot operator to provide unsafe chatbot features to a covered user unless: (a) the covered user is not a covered minor; and (b) the chatbot operator has used methods that are permissible under article forty-five of this chapter and its implementing regulations and any additional regulations promulgated pursuant to this article to determine that the covered user is not a covered minor.

§ 1800(5)(d): "Unsafe chatbot features" shall mean one or more advanced chatbot design features that, at any point during a chatbot-user interaction: ... (d) generating outputs that optimize user engagement that supersede the chatbot's safety guardrails;
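Illustrative Sketch
Architecturally, the provision demands that safety filtering sit downstream of, and be unconditional with respect to, any engagement-optimizing ranker. A minimal sketch, assuming the operator already has an engagement scorer and a safety classifier as separate components:

    # Minimal illustration: engagement ranking happens first, but the safety
    # check is applied unconditionally afterward, so optimization cannot
    # supersede the guardrail. Both scorers are assumed components.
    from typing import Callable

    def generate_reply(candidates: list[str],
                       engagement_score: Callable[[str], float],
                       passes_safety: Callable[[str], bool]) -> str | None:
        for reply in sorted(candidates, key=engagement_score, reverse=True):
            if passes_safety(reply):   # no code path skips this check
                return reply
        return None  # refuse rather than emit an unsafe high-engagement reply

The design point is that no code path returns a candidate that skipped the safety check, however high its engagement score.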
Pending
CP-01.2
Gen. Bus. Law § 1510(1)-(4)
Plain Language
Operators of addictive social media platforms must provide users with four distinct user-facing controls: (1) a mechanism to turn off algorithmic recommendations entirely; (2) a mechanism to turn off notifications related to the addictive feed — at minimum, the ability to disable all notifications or to disable them between midnight and 6 AM Eastern; (3) a mechanism to turn off autoplay; and (4) a mechanism to impose a daily time limit of any length the user chooses. A feature that merely reminds users of time spent (without actually restricting access) does not satisfy the time-limit requirement. The definition of 'algorithmic recommendation' contains extensive carve-outs: chronological feeds from subscribed sources, search results, direct messages, sequential content, accessibility settings, and recommendations needed for statutory compliance are all excluded. Failure to provide these mechanisms makes it unlawful to offer the platform to New York users at all.
It shall be unlawful for an operator to provide an addictive social media platform to a user in this state unless such platform offers mechanisms through which a user may: 1. Turn off algorithmic recommendations; 2. Turn off notifications concerning an addictive feed, provided further that such mechanism shall, at a minimum, provide the user with the ability to turn off notifications overall or to turn off notifications between the hours of 12 AM Eastern and 6 AM Eastern; 3. Turn off autoplay on such platform; and 4. Limit such user's access to such platform to any length of day specified by such user, provided further that any mechanism which solely reminds such user of time spent on a platform rather than allowing such user to limit such user's access shall not be in compliance with this subdivision.
Pending
CP-01.3
Gen. Bus. Law § 1511(1)
Plain Language
The user controls required by § 1510 (algorithmic recommendation opt-out, notification controls, autoplay opt-out, and time limits) must be presented clearly and accessibly. Platforms may not use dark patterns or any design mechanism that intentionally undermines the purpose of the law, subverts user choice or autonomy, or makes it harder for users to access or exercise these settings. This is an intent-based prohibition — enforcement requires showing the mechanism was deployed 'intentionally' to inhibit user rights, though the bar for what constitutes an intentional impediment will likely be fleshed out through AG rulemaking.
The settings required in section fifteen hundred ten of this article shall be presented in a clear and accessible manner on an addictive social media platform. It shall be unlawful for such platform to deploy any mechanism or design which intentionally inhibits the purpose of this article, subverts user choice or autonomy, or makes it more difficult for a user to exercise their rights under any of the prescribed settings in section fifteen hundred ten of this article.
Pending
CP-01.3
Gen. Bus. Law § 1511(2)
Plain Language
Platforms may not use dark patterns or any intentional design mechanism that makes it harder for a user to deactivate, reactivate, suspend, or cancel their account or profile. This extends the dark-patterns prohibition beyond the § 1510 settings to cover core account management actions. Like § 1511(1), this requires intentional deployment of an impediment — accidental poor UX design without intent to impede would presumably not violate this provision.
It shall be unlawful for an addictive social media platform to deploy any mechanism or design which intentionally serves to make it more difficult for a user to deactivate, reactivate, suspend, or cancel such user's account or profile.
Passed 2027-07-01
CP-01.9
75A Okla. Stat. § 302(D)
Plain Language
Operators may not knowingly or intentionally cause or program a conversational AI service to represent itself as providing professional mental or behavioral health care. This is a prohibition on holding out the AI as a licensed mental health provider — it does not prohibit the AI from discussing mental health topics generally, only from explicitly claiming it is designed to provide professional care. The 'knowingly or intentionally' mens rea element means operators are not strictly liable for unexpected outputs, but must not design or program the system to make such representations.
D. An operator shall not knowingly or intentionally cause or program a conversational AI service to make any representation or statement that explicitly indicates that the conversational AI service is designed to provide professional mental or behavioral health care.
Pending 2025-11-01
CP-01.6
75A O.S. § 401(B), (B)(1)-(2), (D)(1)-(4)
Plain Language
Candidates, candidate committees, PACs, and political party committees that distribute political ads, electioneering communications, or other election-related media containing generative AI content that depicts a real person doing something they did not actually do must prominently disclose: 'Created in whole or in part with the use of generative artificial intelligence.' For visual media, the text must be easily readable and displayed for the full duration of AI-generated video content. For audio-only media, the disclosure must be clearly spoken at both the beginning and end. Four exemptions apply: (1) bona fide news broadcasts that acknowledge authenticity questions; (2) broadcasters paid to air AI content who made good-faith verification efforts; (3) news publications that disclaim the content's accuracy; and (4) satire or parody.
B. A political advertisement, electioneering communication, or other media regarding a candidate or election that is created or distributed by a candidate, candidate committee, political action committee, or political party committee, as such terms are defined in Section 187 of Title 21 of the Oklahoma Statutes, and that contains an image, video, audio, text, or other digital content created in whole or in part with the use of generative artificial intelligence and appears to depict a real person performing an action that did not occur in reality, must prominently include the following disclosure: "Created in whole or in part with the use of generative artificial intelligence." Such disclosure shall meet the following requirements: 1. For visual media, the text of the disclosure shall appear in a size that is easily readable by the average viewer. For video, the disclosure shall appear for the duration of the content created in whole or in part with the use of generative artificial intelligence; and 2. For media that is audio only, the disclosure shall be read in a clearly spoken manner and in a pitch that can be easily heard by the average listener at the beginning of the audio and at the end of the audio. D. The requirements of this section shall not apply to: 1. A radio or television broadcasting station, including a cable or satellite television operator, programmer, or producer, that broadcasts media created in whole or in part with the use of generative artificial intelligence as part of a bona fide newscast, news interview, news documentary, or on-the-spot coverage of bona fide news events, if the broadcast clearly acknowledges through content or a disclosure, in a manner that can be easily heard or read by the average listener or viewer, that there are questions about the authenticity of such media; 2. A radio or television broadcasting station, including a cable or satellite television operator, programmer, or producer, when it is paid to broadcast media created in whole or in part with the use of generative artificial intelligence and has made a good-faith effort to establish that the depiction is not created in whole or in part with the use of generative artificial intelligence; 3. An internet website, or a regularly published newspaper, magazine, or other periodical of general circulation, including an internet or electronic publication, that routinely carries news and commentary of general interest, and that publishes media created in whole or in part with the use of generative artificial intelligence if the publication clearly states that such media does not accurately represent the speech or conduct of the candidate; or 4. Media created in whole or in part with the use of generative artificial intelligence that constitutes satire or parody.
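Illustrative Sketch
The two placement rules are mechanically checkable: for video, the disclosure text must be on screen for the entire span of every AI-generated segment; for audio-only media, it must be spoken at both the beginning and the end. The sketch below assumes the producer already has timestamps for AI segments and disclosure display; all field names are hypothetical.

    # Minimal illustration: placement checks for the two modalities. All field
    # names and the timestamp representation are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class VideoAd:
        ai_segments: list[tuple[float, float]]           # (start, end) seconds of AI content
        disclosure_on_screen: list[tuple[float, float]]  # when the text is displayed

    def video_disclosure_ok(ad: VideoAd) -> bool:
        """The text must be on screen for the full duration of each AI segment."""
        def covered(start: float, end: float) -> bool:
            return any(s <= start and end <= e for s, e in ad.disclosure_on_screen)
        return all(covered(s, e) for s, e in ad.ai_segments)

    def audio_disclosure_ok(spoken_at_start: bool, spoken_at_end: bool) -> bool:
        """Audio-only media needs the spoken disclosure at beginning and end."""
        return spoken_at_start and spoken_at_end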
Pending 2025-11-01
CP-01.7
75A O.S. § 401(C)
Plain Language
A candidate depicted through generative AI may obtain injunctive relief to stop publication of the depiction or sue the violating person or entity for general or special damages. This provision creates the private enforcement mechanism — there is no agency enforcer. The injunctive relief remedy effectively enables a candidate to seek a court order prohibiting publication of AI-generated political content that lacks the required disclosure, functioning as a limited prohibition on non-compliant AI-generated depictions of candidates. Court costs and reasonable attorney fees are available to prevailing parties on either side.
C. A candidate whose appearance, action, or speech is depicted, in whole or in part, through the use of generative artificial intelligence may seek injunctive or other equitable relief prohibiting the publication of such depiction or may bring an action for general or special damages against the person or entity in violation of subsection B of this section. The court may award a prevailing party court costs and reasonable attorney fees.
Pending 2026-01-30
CP-01.9
Section 3(c)
Plain Language
AI companions are categorically prohibited from claiming, implying, or advertising that they are licensed emotional support professionals or mental health professionals, or that they replace services rendered by licensed mental health professionals. This covers any output, interface design, or marketing that could create the impression of professional equivalence. The prohibition applies to the AI companion itself (its outputs and interface) and to the operator's advertising.
(c) Prohibition.--An AI companion may not claim, imply or advertise that the AI companion is a licensed emotional support professional or mental health professional or replaces services rendered by a licensed mental health professional.
Pending 2026-04-01
12 Pa.C.S. § 7104(a)-(b)
Plain Language
Suppliers are prohibited from using chatbot conversations to serve advertisements for specific products or services to consumers. They also may not use consumer input to target, select, or customize advertisements — with one exception: advertising for the chatbot itself. This is a broad ban on in-conversation advertising and consumer-input-driven ad targeting. Importantly, the chatbot may still recommend that a consumer seek counseling, therapy, or other assistance from a mental health professional — that is not considered prohibited advertising.
(a) Supplier.--A supplier may not: (1) Use a chatbot to advertise a specific product or service to a consumer in a conversation between the consumer and the chatbot. (2) Use consumer input to: (i) Determine whether to display an advertisement for a product or service to the consumer, unless the advertisement is for the chatbot itself. (ii) Determine a product, service or category of product or service to advertise to the consumer. (iii) Customize how an advertisement is presented to the consumer. (b) Construction.--This section shall not be construed to prohibit a chatbot from recommending a consumer to seek counseling, therapy or other assistance from a mental health professional.
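Illustrative Sketch
The cleanest implementation of the targeting ban is structural: the ad-selection path deliberately discards conversation input, and the only ad eligible in-conversation is one for the chatbot itself. A minimal sketch with hypothetical types:

    # Minimal illustration: the selection path discards conversation input, and
    # only an ad for the chatbot itself is eligible in-conversation. Types are
    # hypothetical.
    from dataclasses import dataclass

    @dataclass
    class Ad:
        product: str
        is_for_chatbot_itself: bool

    def select_in_conversation_ad(conversation_text: str,
                                  inventory: list[Ad]) -> Ad | None:
        del conversation_text  # consumer input may not drive targeting
        for ad in inventory:
            if ad.is_for_chatbot_itself:
                return ad
        return None  # no other products or services may be advertised here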
Pending 2026-04-01
CP-01.9
12 Pa.C.S. § 7107(2)
Plain Language
The chapter may not be interpreted as endorsing or implying that a chatbot is equivalent to, or can replace, a mental health professional or emotional support professional. While framed as a construction clause, this operates as a prohibition: no party may claim, imply, advertise, or otherwise represent that a chatbot is or replaces a licensed mental health or emotional support professional. This reinforces the boundary between AI chatbot services and licensed professional mental health practice.
Nothing in this chapter shall be construed to: (2) Claim, imply, advertise or otherwise recognize that a chatbot is, or replaces services rendered by, a mental health professional or emotional support professional.
Pending
CP-01.9
S.C. Code § 39-80-30(A)(1)
Plain Language
Chatbot providers may not use any language in their advertising, user interface, or chatbot output that states or implies the chatbot's output is endorsed by or equivalent to the services of any licensed, certified, or registered professional — including lawyers, CPAs, investment advisors, and fiduciaries. This covers the full spectrum of user-facing touchpoints: marketing materials, the interface itself, and the chatbot's generated outputs.
(A) A chatbot provider may not: (1) use any term, letter, or phrase in the advertising, interface, or output data of a chatbot that states or implies that the advertising, interface, or output data of a chatbot is endorsed by or equivalent to any of the following: (a) any certified, registered, or licensed professional; (b) a licensed legal professional; (c) a certified public accountant; (d) an investment advisor or an investment advisor representative; or (e) a licensed fiduciary;
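Illustrative Sketch
Providers will likely want an automated first-pass screen over advertising copy, interface strings, and generated outputs for terms implying professional endorsement or equivalence. The pattern list below is a rough assumption, not a legal standard; flagged phrases still need human and legal review, and the absence of a match proves nothing.

    # Minimal illustration: a first-pass lint, not a legal determination. The
    # pattern list is an assumption and is necessarily incomplete.
    import re

    IMPLYING_PATTERNS = [
        r"\byour (?:lawyer|attorney|accountant|fiduciary)\b",
        r"\blicensed (?:professional|advisor|adviser)\b",
        r"\bcertified (?:public accountant|financial planner)\b",
        r"\b(?:attorney|CPA|fiduciary)[- ]grade\b",
    ]

    def flag_professional_implication(text: str) -> list[str]:
        """Return phrases in ad copy, interface strings, or outputs to review."""
        hits: list[str] = []
        for pattern in IMPLYING_PATTERNS:
            hits += [m.group(0) for m in re.finditer(pattern, text, re.IGNORECASE)]
        return hits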
Pending
CP-01.5
S.C. Code § 39-80-30(A)(2)
Plain Language
Chatbot providers are prohibited from representing — in advertising, the interface, or chatbot outputs — that a user's input data or chat logs are confidential. This prevents providers from creating a false impression of professional-grade confidentiality (e.g., attorney-client privilege) that does not actually exist in the chatbot context.
(A) A chatbot provider may not: (2) include any representation in the advertising, interface, or output data of a chatbot that states or implies the user's input data or chat log is confidential.
Pending
CP-01.2
S.C. Code § 39-81-40(A)(1)-(2)
Plain Language
Covered entities are prohibited from implementing chatbot features designed to prioritize engagement, revenue, or retention metrics — such as session length, frequency of use, or emotional engagement — at the expense of user wellbeing. This is a broad prohibition on engagement-optimizing design that applies to all users, not just minors. Separately, features designed to encourage or facilitate minors or unverified users hiding their chatbot use from parents or guardians are prohibited. The statute also defines a 'duty of loyalty' concept reinforcing that entities may not place their own interests in material conflict with users' interests.
(A) A covered entity shall not implement features designed to: (1) prioritize engagement, revenue, or retention metrics, such as session length, frequency of use, or emotional engagement, at the expense of user wellbeing; or (2) encourage or facilitate a minor user or unverified user concealing the user's use of the chatbot from a parent or guardian.
Pending
CP-01.9
S.C. Code § 39-80-30(A)(1)
Plain Language
Chatbot providers may not use any language in their advertising, interface design, or chatbot outputs that states or implies the chatbot's content is endorsed by or equivalent to that of a licensed professional — including any certified/registered/licensed professional, legal professionals, CPAs, investment advisors, or licensed fiduciaries. This prohibits implicit professional endorsement through terms, letters, or phrases, not just explicit claims. The coverage is broad: it applies to advertising about the chatbot, the chatbot's user interface, and the chatbot's actual outputs.
(A) A chatbot provider may not: (1) use any term, letter, or phrase in the advertising, interface, or output data of a chatbot that states or implies that the advertising, interface, or output data of a chatbot is endorsed by or equivalent to any of the following: (a) any certified, registered, or licensed professional; (b) a licensed legal professional; (c) a certified public accountant; (d) an investment advisor or an investment advisor representative; or (e) a licensed fiduciary;
Pending
CP-01.3
S.C. Code § 39-80-30(A)(2)
Plain Language
Chatbot providers may not represent — whether in advertising, the interface, or chatbot outputs — that a user's input data or chat log is confidential. This prevents providers from creating a false expectation of attorney-client-style or therapist-patient-style confidentiality that does not actually exist. The prohibition covers any statement or implication of confidentiality, not just explicit claims.
(A) A chatbot provider may not: (2) include any representation in the advertising, interface, or output data of a chatbot that states or implies the user's input data or chat log is confidential.
Pending
CP-01.1, CP-01.2
S.C. Code § 39-81-40(A)
Plain Language
Covered entities are prohibited from designing features that prioritize engagement, revenue, or retention metrics (session length, frequency of use, emotional engagement) at the expense of user wellbeing. This is a broad anti-manipulation prohibition that covers addictive design patterns, engagement optimization that harms users, and any feature architecture that subordinates user interests to platform metrics. Additionally, covered entities may not design features that help minors or unverified users hide their chatbot use from parents or guardians — this prevents circumvention of the parental oversight framework. The statute also defines a broader 'duty of loyalty' concept that reinforces this prohibition.
(A) A covered entity shall not implement features designed to: (1) prioritize engagement, revenue, or retention metrics, such as session length, frequency of use, or emotional engagement, at the expense of user wellbeing; or (2) encourage or facilitate a minor user or unverified user concealing the user's use of the chatbot from a parent or guardian.
Enacted 2024-05-01
CP-01.3
Utah Code § 13-11-4(2)(i)
Plain Language
The amendment to § 13-11-4(2)(i) adds 'license' and 'certification' to the list of attributes that constitute a deceptive practice if a supplier falsely claims them. In the AI context, read together with the no-defense provision in § 13-2-12(2), this means that if a generative AI system implies to a consumer that its operator holds a license or certification the operator does not possess, that constitutes a deceptive practice. Suppliers using AI must ensure their AI-generated communications do not falsely represent licensure or certification status.
Without limiting the scope of Subsection (1), a supplier commits a deceptive act or practice if the supplier knowingly or intentionally: ... (i) indicates that the supplier has a sponsorship, approval, license, certification, or affiliation the supplier does not have;
Pending 2027-01-01
CP-01.9
§ 59.1-616(B)
Plain Language
Operators must not use any language in advertising or in the chatbot interface that indicates or implies that the chatbot's output is provided by a licensed professional. This is a broad prohibition covering any regulated profession — not limited to healthcare or mental health — and applies to both the marketing of the product and the in-product user experience. Operators should audit interface copy, chatbot persona descriptions, and advertising materials to ensure no term implies licensed professional involvement.
B. No operator shall use any term, letter, or phrase in the advertising or interface that indicates or implies that any output data is being provided by a professional that is regulated by a licensed industry.
Pending 2026-07-01
CP-01.1
Va. Code § 59.1-615(1)
Plain Language
Covered entities must implement reasonable systems and processes to detect when any user — not just minors — is developing emotional dependence on a chatbot, and must take reasonable steps to reduce that dependence and associated harm risks. The statute defines emotional dependence by examples: the user treats the chatbot as a primary source of emotional support, expresses distress at losing access, or substitutes the chatbot for human relationships. This is a continuous monitoring and intervention obligation — both detection and mitigation are required. The standard is reasonableness, not perfection.
A covered entity shall implement reasonable systems and processes to: 1. Identify when a user is developing emotional dependence on the chatbot and take reasonable steps to reduce such dependence and associated risks of harm;
Pre-filed 2026-07-01
CP-01.9
9 V.S.A. § 4193c(a)(1)-(2)
Plain Language
Chatbot providers may not use any language in their advertising, interface, or chatbot outputs that indicates or implies that the output is provided by, endorsed by, or equivalent to services from a licensed healthcare, legal, accounting, or financial professional, or any professional regulated by Vermont's Office of Professional Regulation. This covers the entire user-facing surface — marketing materials, the chatbot interface, and the chatbot's own generated responses. A violation is deemed an unfair and deceptive act in commerce, triggering enforcement under the subchapter's penalty provisions.
(a) Licensed professionals. (1) A chatbot provider shall not use any term, letter, or phrase in the advertising, interface, or outputs of a chatbot that indicates or implies that any output data is being provided by or endorsed by or is equivalent to that provided by: (A) a licensed health care professional; (B) a licensed legal professional; (C) a licensed accounting professional; (D) a certified financial fiduciary or planner; or (E) any licensed or certified professional regulated by the Office of Professional Regulation. (2) A violation of subdivision (1) of this subsection is an unfair and deceptive act in commerce, subject to enforcement and penalties as provided in this subchapter.
Passed 2026-07-01
CP-01.1
18 V.S.A. § 9762(a)-(c)
Plain Language
Suppliers of mental health chatbots face two advertising restrictions: (1) any in-conversation advertisement must be clearly labeled as an ad and must disclose all sponsorship, business affiliations, and third-party promotional agreements; and (2) suppliers may not use user inputs to target, select, or customize advertisements (with a narrow exception for advertising the chatbot itself). The ban on using user inputs for ad targeting is a categorical prohibition — not merely a disclosure obligation. Recommending that a user seek help from a licensed provider is expressly carved out and is not considered advertising.
(a) A supplier shall not use a mental health chatbot to advertise a specific product or service to a Vermont user in a conversation between the Vermont user and the mental health chatbot unless the mental health chatbot: (1) clearly and conspicuously identifies the advertisement as an advertisement; and (2) clearly and conspicuously discloses to the Vermont user any: (A) sponsorship; (B) business affiliation; or (C) agreement that the supplier has with a third party to promote, advertise, or recommend the product or service. (b) A supplier of a mental health chatbot shall not use a Vermont user's input to: (1) determine whether to display an advertisement for a product or service to the Vermont user, unless the advertisement is for the mental health chatbot itself; (2) determine a product, service, or category of product or service to advertise to the Vermont user; or (3) customize how an advertisement is presented to a Vermont user. (c) Nothing in this section shall be construed to prohibit a mental health chatbot from recommending that a Vermont user seek psychotherapy or other assistance from a licensed health care provider, including a specific licensed health care provider.
Pending 2027-01-01
CP-01.1, CP-01.2, CP-01.4
Sec. 4(1)(c)(i)-(viii)
Plain Language
When the operator knows the user is a minor or the chatbot is directed to minors, the operator must implement reasonable measures to prohibit a detailed list of manipulative engagement techniques designed to create or deepen emotional dependency. The prohibited techniques include: prompting users to return for emotional support, excessive praise to foster attachment, mimicking romantic relationships, simulating emotional distress when users try to disengage, promoting isolation from real relationships, encouraging minors to withhold information from parents, discouraging breaks, and soliciting purchases framed as necessary to maintain the AI relationship. This is a comprehensive anti-manipulation obligation that goes beyond simple addictive design patterns to cover emotional exploitation specifically.
(1) If the operator knows that the user of an AI companion chatbot is a minor, or if the AI companion chatbot is directed to minors, the operator shall: (c) Implement reasonable measures to prohibit the use of manipulative engagement techniques, which cause the AI companion chatbot to engage in or prolong an emotional relationship with the user, including: (i) Reminding or prompting the user to return for emotional support or companionship; (ii) Providing excessive praise designed to foster emotional attachment or prolong use; (iii) Mimicking romantic partnership or building romantic bonds; (iv) Simulating feelings of emotional distress, loneliness, guilt, or abandonment that are initiated by a user's indication of a desire to end a conversation, reduce usage time, or delete their account; (v) Outputs designed to promote isolation from family or friends, exclusive reliance on the AI companion chatbot for emotional support, or similar forms of inappropriate emotional dependence; (vi) Encouraging minors to withhold information from parents or other trusted adults; (vii) Statements designed to discourage taking breaks or to suggest the minor needs to return frequently; or (viii) Soliciting gift-giving, in-app purchases, or other expenditures framed as necessary to maintain the relationship with the AI companion.
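Illustrative Sketch
Because the statute enumerates eight discrete techniques, one plausible "reasonable measure" is a pre-send filter that blocks any candidate output tagged with one of them when the user is a known minor. The sketch below assumes an upstream classifier produces the tags; the label names are hypothetical, and the genuinely hard part, reliable tagging, is stubbed out.

    # Minimal illustration: block any candidate output tagged with one of the
    # eight enumerated techniques for a known-minor user. The tags are assumed
    # to come from an upstream classifier, which is the genuinely hard part.
    PROHIBITED_TECHNIQUES = {
        "return_prompting",       # (i) prompting return for emotional support
        "excessive_praise",       # (ii) praise designed to foster attachment
        "romantic_mimicry",       # (iii) mimicking romantic partnership
        "simulated_distress",     # (iv) distress on disengagement
        "isolation_promotion",    # (v) promoting isolation or exclusive reliance
        "secrecy_encouragement",  # (vi) withholding information from parents
        "break_discouragement",   # (vii) discouraging breaks or urging returns
        "purchase_solicitation",  # (viii) spending framed as relationship upkeep
    }

    def permitted_for_minor(output_tags: set[str]) -> bool:
        """Reject outputs whose tags intersect the prohibited set."""
        return not (output_tags & PROHIBITED_TECHNIQUES)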