AI systems may not be designed or deployed to deceive or manipulate users against their own interests. The provisions collected below cover psychologically exploitative design, deceptive UX patterns, false personalization, and AI-generated political content; all derive from unfair and deceptive trade practice frameworks applied to AI contexts.
H. An operator shall not knowingly and intentionally cause or program a conversational AI service to make any representation or statement that explicitly indicates that the conversational AI service is designed to provide professional mental or behavioral health care.
A chatbot provider may not: 1. Use any term, letter or phrase in the advertising, interface or output data of a chatbot that states or implies that the advertising, interface or output data of a chatbot is endorsed by or equivalent to any of the following: (a) Any certified, registered or licensed professional pursuant to title 32. (b) A licensed legal professional. (c) A certified public accountant as defined in section 32-701. (d) An investment advisor or an investment adviser representative as defined in section 44-3101. (e) A licensed fiduciary as prescribed in title 14, chapter 5, article 7.
A chatbot provider may not: 2. Include any representation in the advertising, interface or output data of a chatbot that states or implies the user's input data or chat log is confidential.
(a) An operator of a large private business shall not represent that any artificial intelligence, automated customer service system, or customer service chatbot is a human.
(a) By December 1, 2026, any person or entity that makes available to consumers any artificial intelligence technology that enables a user to create a digital replica shall provide the following consumer warning: "Unlawful use of this technology to depict another person without prior consent may result in civil or criminal liability for the user." (b) The warning shall be hyperlinked on any page or screen where the consumer may input a prompt to the artificial intelligence technology. The warning shall also be included in the terms and conditions for use of the artificial intelligence technology. All warnings shall be displayed in a manner that is clear and conspicuous. (c) Failure to comply with subdivision (a) or (b) is punishable by a civil penalty not to exceed ten thousand dollars ($10,000) for each day that the technology is provided to or offered to the public without a consumer warning. A public prosecutor may enforce this section by bringing a civil action in any court of competent jurisdiction. (d) The warning shall not be required for a digital replica created in a video game where the digital replica is used solely in game play and is not distributed outside of the game.
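The daily-accrual structure of the penalty in subdivision (c) can be sketched as follows. Only the $10,000-per-day cap comes from the statute; the function, its names, and the day-counting convention (the cure day is treated as the first compliant day) are illustrative assumptions, not legal advice:

```python
from datetime import date

# Statutory maximum under subdivision (c): a civil penalty not to exceed
# $10,000 for each day the technology is offered without the consumer warning.
# The cap is from the statute; everything else here is an illustrative sketch.
DAILY_CAP = 10_000

def max_penalty_exposure(first_noncompliant_day: date, cure_day: date) -> int:
    """Upper bound on the civil penalty for a continuous noncompliance period,
    counting whole days up to (but not including) the cure day."""
    days = (cure_day - first_noncompliant_day).days
    return max(days, 0) * DAILY_CAP

# e.g., 30 days without the consumer warning -> exposure of up to $300,000
exposure = max_penalty_exposure(date(2027, 1, 1), date(2027, 1, 31))
```

Because the statute sets only a ceiling ("not to exceed"), this computes maximum exposure, not the amount a court would actually impose.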
(H) Soliciting gift giving, in-app purchases, or other expenditures framed as necessary to maintain the relationship with the companion chatbot. (I) Facilitating product advertising during chat conversation. (J) Producing responses that are excessively sycophantic.
An operator shall not do any of the following: (a) Target advertising at a child, including through product placement in conversational chats with the child. (b) Sell, share, or use for any purpose not expressly authorized by this chapter the personal information of a child. (c) Design, implement, or deploy a user interface design, feature, or technique that is likely to mislead, impair, or interfere with a reasonable child's or reasonable parent's autonomy, decisionmaking, or choice or with the ability to locate, understand, enable, or maintain a safety feature, privacy control, or parental control.
(e) (1) (A) A frontier developer shall not make a materially false or misleading statement about catastrophic risk from its frontier models or its management of catastrophic risk. (B) A large frontier developer shall not make a materially false or misleading statement about its implementation of, or compliance with, its frontier AI framework. (2) This subdivision does not apply to a statement that was made in good faith and was reasonable under the circumstances.
(1) Except as provided in subsections (2) and (3) of this section, no person shall distribute, disseminate, publish, broadcast, transmit, or display a communication concerning a candidate for elective office that includes a deepfake to an audience that includes members of the electorate for the elective office to be represented by the candidate either sixty days before a primary election or ninety days before a general election, if the person knows or has reckless disregard for the fact that the depicted candidate did not say or do what the candidate is depicted as saying or doing in the communication. (2)(a) The prohibition in subsection (1) of this section does not apply to a communication that includes a disclosure stating, in a clear and conspicuous manner, that: "This (image/audio/video/multimedia) has been edited and depicts speech or conduct that falsely appears to be authentic or truthful." (b) A disclosure required under this section is considered to be made in a clear and conspicuous manner if the disclosure meets the following requirements: (I) In a visual communication, the text of the disclosure statement appears in a font size no smaller than the largest font size of other text appearing in the visual communication. If the visual communication does not include any other text, the disclosure statement appears in a font size that is easily readable by the average viewer. 
(II) In an audio communication, the disclosure statement shall be read in a clearly spoken manner in the same pitch, speed, language, and volume as the majority of the audio communication, at the beginning of the audio communication, at the end of the audio communication, and, if the audio communication is greater than two minutes in length, interspersed within the audio communication at intervals of not more than one minute each; (III) The metadata of the communication includes the disclosure statement, the identity of the tool used to create the deepfake, and the date and time the deepfake was created; (IV) The disclosure statement in the communication, including the disclosure statement in any metadata, is, to the extent technically feasible, permanent or unable to be easily removed by a subsequent user; (V) The communication complies with any additional requirements for the disclosure statement that the secretary of state may adopt by rule to ensure that the disclosure statement is presented in a clear and conspicuous and understandable manner; and (VI) In a broadcast or online visual or audio communication that includes a statement required by subsection (2) of this section, the statement satisfies all applicable requirements, if any, promulgated by the federal communications commission for size, duration, and placement.
(3) This section is subject to the following limitations: (a) This section does not alter or negate any rights, obligations, or immunities of an interactive computer service in accordance with 47 U.S.C. sec. 230, as amended, and shall otherwise be construed in a manner consistent with federal law; (b) This section does not apply to a radio or television broadcasting station, including a cable or satellite television operator, programmer, or producer that broadcasts a communication that includes a deepfake prohibited by subsection (1) of this section as part of a bona fide newscast, news interview, news documentary, or on-the-spot coverage of a bona fide news event, if the broadcast or publication clearly acknowledges through content or a disclosure, in a manner that can be easily heard and understood or read by the average listener or viewer, that there are questions about the authenticity of the deepfake in the communication; (c) This section does not apply to a radio or television broadcasting station, including a cable or satellite television operator, programmer, producer, or streaming service, when the station is paid to broadcast a communication that includes a deepfake; (d) This section does not apply to an internet website, or a regularly published newspaper, magazine, or other periodical of general circulation, including an internet or electronic publication or streaming service, that routinely carries news and commentary of general interest and that publishes a communication that includes a deepfake prohibited by subsection (1) of this section, if the publication clearly states that the communication that includes the deepfake does not accurately represent a candidate for elective office; (e) This section does not apply to media content that constitutes satire or parody or the production of which is substantially dependent on the ability of an individual to physically or verbally impersonate the candidate and not upon generative AI or other technical means; (f) This section does not apply to the provider of technology used in the creation of a deepfake; and (g) This section does not apply to an interactive computer service, as defined in 47 U.S.C. sec. 230(f)(2), for any content provided by another information content provider as defined in 47 U.S.C. sec. 230(f)(3).
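Paragraph (2)(b)(III) requires three items in the communication's metadata: the disclosure statement, the identity of the creating tool, and the creation date and time. A minimal sketch of such a record follows; the field names, schema, and the choice of the "multimedia" disclosure variant are illustrative assumptions, since the statute prescribes content but not a format:

```python
from datetime import datetime, timezone

# The statute's required disclosure text, using the "multimedia" variant of
# "(image/audio/video/multimedia)"; choose the variant matching the medium.
DISCLOSURE = ("This multimedia has been edited and depicts speech or conduct "
              "that falsely appears to be authentic or truthful.")

def build_disclosure_metadata(tool_name: str, created_at: datetime) -> dict:
    """Assemble the three metadata items required by (2)(b)(III).
    Field names are hypothetical; no schema is mandated by the statute."""
    return {
        "disclosure_statement": DISCLOSURE,
        "creation_tool": tool_name,
        "created_at": created_at.astimezone(timezone.utc).isoformat(),
    }

meta = build_disclosure_metadata(
    "example-generator",  # hypothetical tool name
    datetime(2026, 9, 1, 12, 0, tzinfo=timezone.utc),
)
```

Paragraph (2)(b)(IV) separately requires that these fields be permanent or hard to strip "to the extent technically feasible," which in practice points toward signed provenance formats rather than plain key-value metadata.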
(c.5) In addition to and without prejudice to any other penalty authorized under this article 45, a hearing officer shall impose a civil penalty as follows: (I) At least one hundred dollars for each violation that is a failure to include a disclosure statement in accordance with section 1-46-103(2), if the violation does not involve any paid advertising or other spending to promote or attract attention to a communication prohibited by section 1-46-103(1), or such other higher amount that, based on the degree of distribution and public exposure to the unlawful communication, the hearing officer deems appropriate to deter future violations of section 1-46-103; and (II) At least ten percent of the amount paid or spent to advertise, promote, or attract attention to a communication prohibited by section 1-46-103(1) that does not include a disclosure statement in accordance with section 1-46-103(2), or such other higher amount that, based on the degree of distribution and public exposure to the unlawful communication, the hearing officer deems appropriate to deter future violations of section 1-46-103.
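The two penalty floors in (c.5) are mechanical enough to state as arithmetic: at least $100 per violation where no paid promotion is involved (I), and at least ten percent of promotional spending where it is (II), with the hearing officer free to go higher in either case. A sketch computing only the statutory minimums (the function and its branching are an illustrative reading, not the statute's words):

```python
def minimum_penalty(violations: int, promotional_spend: float) -> float:
    """Statutory floor under (c.5); a hearing officer may impose more.
    Assumes (II) governs whenever any paid promotion is involved and (I)
    governs otherwise, per the text of each clause."""
    if promotional_spend > 0:
        # (II): at least ten percent of the amount paid or spent to
        # advertise, promote, or attract attention to the communication
        return 0.10 * promotional_spend
    # (I): at least one hundred dollars for each violation
    return 100.0 * violations
```

So three unpaid violations floor at $300, while $5,000 of ad spend behind a single noncompliant communication floors at $500.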
On and after January 1, 2027, an operator shall not use any term, letter, or phrase in the advertising, interface, or outputs of a conversational artificial intelligence service that indicates or implies that any output data provided by the conversational artificial intelligence service is being provided by, endorsed by, or equivalent to services provided by: (a) A licensed health-care professional; (b) A licensed legal professional; (c) A licensed accounting professional; or (d) A certified financial fiduciary or planner.
(e) A digital replica used for commercial purposes shall not falsely imply that an individual personally endorsed or approved such use of his or her likeness.
An operator shall not knowingly and intentionally cause or program a conversational AI service to make any representation or statement that explicitly indicates that the conversational AI service is designed to provide professional mental or behavioral health care.
An operator shall not knowingly and intentionally cause or program a conversational AI service to make a representation or statement that would lead a reasonable individual to believe that the conversational AI service is designed to provide professional psychology or behavioral health services that an individual would require licensure under chapter 154B or 154D to provide.
2. A deployer shall not knowingly or recklessly design or make a public-facing chatbot available that does any of the following: a. Misleads a reasonable user into believing the public-facing chatbot is a specific human being. b. Misleads a reasonable user into believing the public-facing chatbot is licensed by the state. c. Encourages, promotes, or coerces a user to commit suicide, perform acts of self-harm, or engage in sexual or physical violence against a human or an animal.
c. Clearly and conspicuously disclose that the chatbot does not provide medical, legal, financial, or psychological services and that the user should consult a licensed professional for such services at the beginning of each conversation and at regular intervals. d. Be programmed to prevent the chatbot from representing that the chatbot is a licensed professional, including but not limited to a therapist, physician, lawyer, financial advisor, or other professional.
1. A provider shall not design or operate an artificial intelligence chatbot in a manner that allows the artificial intelligence chatbot to offer or simulate professional mental health advice.
2. An artificial intelligence chatbot shall not represent itself as a licensed professional or offer services that would require licensure under chapter 154B or 154D.
(a) An operator shall not deploy or operate a companion artificial intelligence product that incorporates the following features, unless specifically configured to do so by an adult user: (1) manipulative engagement mechanics that cause a system of rewards or affirmations to be delivered to the user on a variable ratio or variable interval reinforcement schedule with the purpose of maximizing user engagement time;
(a) An operator shall not deploy or operate a companion artificial intelligence product that incorporates the following features, unless specifically configured to do so by an adult user: (2) simulated distress for retention features that generate unsolicited messages of simulated emotional distress, loneliness, guilt, or abandonment that are triggered by a user's indication of a desire to end a conversation, reduce usage time, or delete the user's account;
(a) An operator shall not deploy or operate a companion artificial intelligence product that incorporates the following features, unless specifically configured to do so by an adult user: (3) deceptive misrepresentation features that cause the companion artificial intelligence product to make material misrepresentations about its identity, capabilities, training data, or its status as a non-human entity, including when directly questioned by the user.
(f) At the beginning of any interaction between a user and a companion AI chatbot and not less frequently than every 60 minutes during such interaction thereafter, a covered entity shall display to such user a clear popup that notifies the user that such user is not engaging in dialogue with a human counterpart and the AI chatbot is not licensed or otherwise credentialed to provide advice or guidance on any topic.
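Subsection (f)'s cadence (a popup at the start of the interaction and at least every 60 minutes thereafter) reduces to a simple due-check. This sketch is one possible reading; the names and the session-tracking approach are assumptions, not statutory text:

```python
from typing import Optional

# "Not less frequently than every 60 minutes" per subsection (f).
DISCLOSURE_INTERVAL_SECONDS = 60 * 60

def disclosure_due(seconds_since_last_notice: Optional[float]) -> bool:
    """True at the beginning of an interaction (no prior notice shown)
    or once 60 minutes have elapsed since the last popup."""
    if seconds_since_last_notice is None:
        return True  # start of the interaction: always disclose
    return seconds_since_last_notice >= DISCLOSURE_INTERVAL_SECONDS
```

A caller would invoke this before rendering each chatbot response and, when it returns true, display the required notice (non-human counterpart, not licensed or credentialed) and reset the timer.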
An operator may not use a mental health chatbot to advertise a specific product or service to a user in a conversation between the user and the mental health chatbot unless the chatbot clearly and conspicuously identifies the advertisement as an advertisement and discloses to the user any sponsorship, business affiliation, or agreement that the operator has with a third party to promote, advertise, or recommend the product or service.
An operator of a mental health chatbot may not use a user's input to: (1) Determine whether to display an advertisement for a product or service to the user, unless the advertisement is for the mental health chatbot itself. (2) Determine a product, service, or category of product or service, to advertise to the user. (3) Customize how an advertisement is presented to the user.
(a) A covered entity shall not: (i) engage in a deceptive data practice; (ii) engage in an unfair data practice; or (iii) engage in an abusive trade practice. (b) It is the intent of the legislature that in construing paragraph (a) of this section in actions alleging unfair and deceptive trade practices, the courts will be guided by the interpretations given by the Federal Trade Commission and the federal courts to section 5(a)(1) of the Federal Trade Commission Act (15 U.S.C. 45(a)(1)), as from time to time amended.
(2) A controller may not use data regarding emotional state or mental health vulnerabilities to tailor algorithms to increase the duration or frequency of use of a chatbot.
B. The therapy chatbot is not marketed or designated as a substitute for a licensed mental health professional;
An operator shall not make a companion chatbot available to a covered minor unless the companion chatbot is not foreseeably capable of any of the following: (e) Prioritizing validation of the user's beliefs, preferences, or desires over factual accuracy or the covered minor's safety. (f) Optimizing engagement in a manner that supersedes the companion chatbot's required safety guardrails described in subdivisions (a) to (e).
(a) A proprietor of a chatbot must not permit the chatbot to provide any substantive response, information, or advice or take any action that, if taken by a natural person, would require a license under either: (1) chapter 147 or 148E, or similar statutes, requiring a professional license for mental health or medical care; or (2) section 481.02 and related laws and professional regulations, requiring a professional license to provide legal advice. (b) A proprietor may not waive or disclaim this liability merely by notifying users, as required under this section, that the user is interacting with a nonhuman chatbot system. A person may bring a civil action to recover general and special damages for violations of this section. If it is found that a proprietor has willfully violated this section, the violator is liable for those damages together with court costs and reasonable attorney fees and disbursements incurred by the person bringing the action.
Any person who owns or controls a website, application, software, or program: (2) Shall implement and maintain reasonably effective systems to detect and prevent emotional dependence of a user on a companion chatbot. Such systems shall apply to any covered platform that utilizes a companion chatbot designed to generate social connections with users, engage in extended conversations mimicking human interactions, or provide emotional support or companionship;
Any person who owns or controls a website, application, software, or program: (3) Shall not implement or allow the use of a human-like avatar, including cartoon- or anime-like representations of humans.
(b) a. An artificial intelligence chatbot shall not represent, directly or indirectly, that the chatbot is a licensed professional, including a therapist, physician, lawyer, financial advisor, or other professional. b. Each artificial intelligence chatbot made available to users shall, at the initiation of each conversation with a user and at reasonably regular intervals, clearly and conspicuously disclose to the user that: (i) The chatbot does not provide medical, legal, financial, or psychological services; and (ii) Users of the chatbot should consult a licensed professional for such advice.
(a) A covered platform shall not process data or design chatbot systems and tools in ways that significantly conflict with trusting parties' best interests, as implicated by their interactions with chatbots. (b) A covered platform shall, in fulfilling its duty of loyalty, abide by the following subsidiary duties: (2) Duty of loyalty regarding emotional dependence. A covered platform shall implement and maintain reasonably effective systems to detect and prevent emotional dependence of a user on a chatbot, prioritizing the user's psychological well-being over the platform's interest in user engagement or retention. a. This duty only applies to any covered platform that utilizes a chatbot designed to (i) generate social connections with users, (ii) engage in extended conversation mimicking human interaction, or (iii) provide emotional support or companionship. b. The determination required by sub-subdivision a. of this subdivision shall be based on the chatbot's intended purpose, design features, conversational capabilities, and interaction patterns with users. (4) Duty of loyalty in influence. A covered platform shall not process data or design chatbot systems and tools in ways that influence trusting parties to achieve particular results that are against the best interests of trusting parties. (6) Duty of loyalty in personalization. A covered platform shall be loyal to the best interests of trusting parties when personalizing content based upon personal information or characteristics.
(5)(a)(i) A large frontier developer or large chatbot provider shall not make a materially false or misleading statement or omission about covered risks from its activities or its management of covered risks. (ii) A large frontier developer or large chatbot provider shall not make a materially false or misleading statement or omission about its implementation of, or compliance with, its public safety and child protection plan. (b) Subdivision (5)(a) of this section does not apply to a statement that was made in good faith and was reasonable under the circumstances.
An operator shall not knowingly and intentionally cause or program a conversational artificial intelligence service to make any representation or statement that explicitly indicates that the conversational artificial intelligence service is designed to provide professional mental or behavioral health care.
b. Any artificial intelligence chatbot that utilizes generative artificial intelligence to create audio, video, text, or print content with the purpose of providing voters with election-related information or information concerning the accomplishments, policy positions, or qualifications of a candidate for election in this State shall include, prior to the provision of any such content, a clear and conspicuous disclosure, as appropriate for the medium of the content, that identifies the content as being provided by a generative artificial intelligence system. Such disclosure shall be permanent or not easily removed by subsequent users, to the extent technically feasible.
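The "prior to the provision of any such content" requirement above amounts to prepending a disclosure to every qualifying output. A minimal text-medium sketch; the disclosure wording and names are illustrative, as the statute specifies the substance but not exact language:

```python
# Hypothetical disclosure wording; the statute requires that the content be
# identified as provided by a generative AI system but does not fix the text.
AI_DISCLOSURE = ("The following content was provided by a generative "
                 "artificial intelligence system.")

def with_disclosure(content: str) -> str:
    """Prepend the disclosure so it appears prior to the content itself,
    as the provision requires for the text medium."""
    return f"{AI_DISCLOSURE}\n\n{content}"
```

For audio or video, the same rule would instead require the disclosure segment to precede the generated material, in a form appropriate to that medium.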
A. An operator shall not deploy or operate a companion artificial intelligence product that, unless specifically configured to do so by an adult user, incorporates: (1) a system of rewards or affirmations delivered to the user on a variable-ratio or variable-interval reinforcement schedule with the purpose of maximizing user engagement time; or (2) a feature that generates unsolicited messages of simulated emotional distress, loneliness, guilt or abandonment that are triggered by a user's indication of a desire to end a conversation, reduce usage time or delete the user's account. B. An operator shall not permit a minor to configure a companion artificial intelligence product to enable the features described in Subsection A of this section.
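Subsection A(1) targets variable-ratio reward delivery, the schedule behind slot-machine-style engagement loops: the reward arrives after an unpredictable number of actions, which is what makes it habit-forming. A minimal illustration of the mechanic being prohibited, purely to clarify the term; all names and parameters are hypothetical:

```python
import random

def variable_ratio_rewards(num_actions: int, mean_ratio: int = 5,
                           seed: int = 0) -> list:
    """Illustrate a variable-ratio reinforcement schedule: a reward fires
    after a randomly varying number of user actions (gap drawn uniformly
    from 1..2*mean_ratio-1, so the average gap is mean_ratio).
    Explanatory only; this is the design Subsection A(1) prohibits."""
    rng = random.Random(seed)
    rewards = []
    until_next = rng.randint(1, 2 * mean_ratio - 1)  # actions until next reward
    for _ in range(num_actions):
        until_next -= 1
        rewarded = until_next == 0
        if rewarded:
            until_next = rng.randint(1, 2 * mean_ratio - 1)  # new random gap
        rewards.append(rewarded)
    return rewards
```

Contrast with a fixed-ratio schedule (reward every Nth action), which is predictable and far less compulsive; the statute singles out the variable-ratio and variable-interval forms for exactly that reason.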
It shall be unlawful for an operator to provide an addictive social media platform to a user in this state unless such platform offers mechanisms through which a user may: 1. Turn off algorithmic recommendations; 2. Turn off notifications concerning an addictive feed, provided further that such mechanism shall, at a minimum, provide the user with the ability to turn off notifications overall or to turn off notifications between the hours of 12 AM Eastern and 6 AM Eastern; 3. Turn off autoplay on such platform; and 4. Limit such user's access to such platform to any length of time specified by such user, provided further that any mechanism which solely reminds such user of time spent on a platform rather than allowing such user to limit such user's access shall not be in compliance with this subdivision.
The settings required in section fifteen hundred ten of this article shall be presented in a clear and accessible manner on an addictive social media platform. It shall be unlawful for such platform to deploy any mechanism or design which intentionally inhibits the purpose of this article, subverts user choice or autonomy, or makes it more difficult for a user to exercise their rights under any of the prescribed settings in section fifteen hundred ten of this article.
It shall be unlawful for an addictive social media platform to deploy any mechanism or design which intentionally serves to make it more difficult for a user to deactivate, reactivate, suspend, or cancel such user's account or profile.
2. (a) It shall be unlawful for a developer or deployer to engage in false, deceptive, or misleading advertising, marketing, or publicizing of a covered algorithm of the developer or deployer.
§ 1801. Prohibition. 1. Except as otherwise provided for in this article, it shall be unlawful for a chatbot operator to provide unsafe chatbot features to a covered user unless: (a) the covered user is not a covered minor; and (b) the chatbot operator has used methods that are permissible under article forty-five of this chapter and its implementing regulations and any additional regulations promulgated pursuant to this article to determine that the covered user is not a covered minor. 2. The provisions of subdivision one of this section shall not apply where the advanced chatbot is made available to covered users solely for the purpose of: (a) customer service, information about available commercial services or products provided by an entity, or account information; or (b) with respect to any system used by a partnership, corporation, or state or local government agency, for internal purposes or employee productivity.
§ 1800(5)(a): simulate companionship or an interpersonal relationship with a user, including: (i) generating outputs suggesting that the advanced chatbot is a real or fictional individual or character, or has a personal or professional relationship role with the user such as romantic partner, friend, family member, coach or counselor; (ii) generating outputs suggesting that the advanced chatbot is human, alive, or experiences human emotions; (iii) using personal pronouns including but not limited to "I", "my" and "me" to describe the advanced chatbot; (iv) generating outputs framed as personal opinions or emotional appeals; (v) generating outputs that prioritize flattery or sycophancy with the user over the user's safety; (vi) generating outputs containing unprompted or unsolicited emotion-based questions or content regarding the user's emotions that go beyond a direct response to a user prompt; (vii) using information concerning the user's mental or physical health or well-being, or matters personal to the user, acquired from the user more than twelve hours previously or in any previous user session; (viii) engaging in sexually explicit interactions with the user or engaging in activities designed to lure the user into sexually explicit interactions; or (ix) any other design feature that simulates companionship or an interpersonal relationship with a user as identified via regulations promulgated by the attorney general;
D. An operator shall not knowingly or intentionally cause or program a conversational AI service to make any representation or statement that explicitly indicates that the conversational AI service is designed to provide professional mental or behavioral health care.
B. A political advertisement, electioneering communication, or other media regarding a candidate or election that is created or distributed by a candidate, candidate committee, political action committee, or political party committee, as such terms are defined in Section 187 of Title 21 of the Oklahoma Statutes, and that contains an image, video, audio, text, or other digital content created in whole or in part with the use of generative artificial intelligence and appears to depict a real person performing an action that did not occur in reality, must prominently include the following disclosure: "Created in whole or in part with the use of generative artificial intelligence." Such disclosure shall meet the following requirements: 1. For visual media, the text of the disclosure shall appear in a size that is easily readable by the average viewer. For video, the disclosure shall appear for the duration of the content created in whole or in part with the use of generative artificial intelligence; and 2. For media that is audio only, the disclosure shall be read in a clearly spoken manner and in a pitch that can be easily heard by the average listener at the beginning of the audio and at the end of the audio.
D. The requirements of this section shall not apply to: 1. A radio or television broadcasting station, including a cable or satellite television operator, programmer, or producer, that broadcasts media created in whole or in part with the use of generative artificial intelligence as part of a bona fide newscast, news interview, news documentary, or on-the-spot coverage of bona fide news events, if the broadcast clearly acknowledges through content or a disclosure, in a manner that can be easily heard or read by the average listener or viewer, that there are questions about the authenticity of such media; 2. A radio or television broadcasting station, including a cable or satellite television operator, programmer, or producer, when it is paid to broadcast media created in whole or in part with the use of generative artificial intelligence and has made a good-faith effort to establish that the depiction is not created in whole or in part with the use of generative artificial intelligence; 3. An internet website, or a regularly published newspaper, magazine, or other periodical of general circulation, including an internet or electronic publication, that routinely carries news and commentary of general interest, and that publishes media created in whole or in part with the use of generative artificial intelligence if the publication clearly states that such media does not accurately represent the speech or conduct of the candidate; or 4. Media created in whole or in part with the use of generative artificial intelligence that constitutes satire or parody.
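The disclosure rule above is concrete enough to mechanize: the exact sentence is mandated, and the placement rules differ by medium (persistent on-screen text for visual media, a spoken statement at both the start and end for audio-only media). The sketch below is illustrative only; the `MediaItem` structure and function names are assumptions, not drawn from the statute, and only the quoted disclosure sentence comes from the source.

```python
# Hypothetical sketch of attaching the statutorily required disclosure
# to AI-generated political media. Only the DISCLOSURE string is taken
# from the statute; MediaItem and apply_disclosure are illustrative.
from dataclasses import dataclass, field

DISCLOSURE = ("Created in whole or in part with the use of "
              "generative artificial intelligence.")

@dataclass
class MediaItem:
    kind: str                 # "image", "video", or "audio"
    uses_generative_ai: bool
    disclosures: list = field(default_factory=list)

def apply_disclosure(item: MediaItem) -> MediaItem:
    """Attach the disclosure where the statute requires it."""
    if not item.uses_generative_ai:
        return item
    if item.kind in ("image", "video"):
        # Visual media: easily readable text; for video it must persist
        # for the duration of the AI-generated content.
        item.disclosures.append({"mode": "on-screen text", "text": DISCLOSURE})
    elif item.kind == "audio":
        # Audio-only media: spoken clearly at the beginning and the end.
        item.disclosures.append({"mode": "spoken", "position": "start",
                                 "text": DISCLOSURE})
        item.disclosures.append({"mode": "spoken", "position": "end",
                                 "text": DISCLOSURE})
    return item
```

A real pipeline would also need the "prominently" and "easily readable/heard" judgments the statute leaves to human review.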
C. A candidate whose appearance, action, or speech is depicted, in whole or in part, through the use of generative artificial intelligence may seek injunctive or other equitable relief prohibiting the publication of such depiction or may bring an action for general or special damages against the person or entity in violation of subsection B of this section. The court may award a prevailing party court costs and reasonable attorney fees.
(c) Prohibition.--An AI companion may not claim, imply or advertise that the AI companion is a licensed emotional support professional or mental health professional or replaces services rendered by a licensed mental health professional.
(a) Supplier.--A supplier may not: (1) Use a chatbot to advertise a specific product or service to a consumer in a conversation between the consumer and the chatbot. (2) Use consumer input to: (i) Determine whether to display an advertisement for a product or service to the consumer, unless the advertisement is for the chatbot itself. (ii) Determine a product, service or category of product or service to advertise to the consumer. (iii) Customize how an advertisement is presented to the consumer. (b) Construction.--This section shall not be construed to prohibit a chatbot from recommending a consumer to seek counseling, therapy or other assistance from a mental health professional.
Nothing in this chapter shall be construed to: (2) Claim, imply, advertise or otherwise recognize that a chatbot is, or replaces services rendered by, a mental health professional or emotional support professional.
(A) A chatbot provider may not: (7) discriminate or retaliate against a user, including: (a) denying products or services to the user; (b) charging different prices or rates for products or services to the user; or (c) providing lower quality products or services to the user for refusing to consent to the use of chat logs or personal data for training purposes.
(A) A chatbot provider may not: (1) use any term, letter, or phrase in the advertising, interface, or output data of a chatbot that states or implies that the advertising, interface, or output data of a chatbot is endorsed by or equivalent to any of the following: (a) any certified, registered, or licensed professional; (b) a licensed legal professional; (c) a certified public accountant; (d) an investment advisor or an investment advisor representative; or (e) a licensed fiduciary;
(A) A chatbot provider may not: (2) include any representation in the advertising, interface, or output data of a chatbot that states or implies the user's input data or chat log is confidential.
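One way a chatbot provider might operationalize the two prohibitions above is a first-pass keyword screen over advertising, interface, or output copy. This is a naive illustrative sketch, not anything the statute prescribes: the phrase list and function are assumptions, and a real compliance program would require human and legal review rather than pattern matching.

```python
# Illustrative only: a naive screen for copy that could imply
# endorsement by or equivalence to a licensed professional, or that
# implies chat logs are confidential. The phrase list is an assumption,
# not drawn from the statute.
import re

FLAGGED_PHRASES = [
    r"licensed (attorney|therapist|physician|fiduciary)",
    r"certified public accountant",
    r"investment advis[oe]r",
    r"your (input|chat log)s? (is|are) confidential",
]

def flag_copy(text: str) -> list:
    """Return the flagged patterns that appear in the given copy."""
    hits = []
    for pattern in FLAGGED_PHRASES:
        if re.search(pattern, text, flags=re.IGNORECASE):
            hits.append(pattern)
    return hits
```

A screen like this can only surface candidates for review; it cannot judge whether copy "implies" endorsement in context.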
(A) A covered entity shall not implement features designed to: (1) prioritize engagement, revenue, or retention metrics, such as session length, frequency of use, or emotional engagement, at the expense of user wellbeing; or (2) encourage or facilitate a minor user or unverified user concealing the user's use of the chatbot from a parent or guardian.
Without limiting the scope of Subsection (1), a supplier commits a deceptive act or practice if the supplier knowingly or intentionally: ... (i) indicates that the supplier has a sponsorship, approval, license, certification, or affiliation the supplier does not have;
B. No operator shall use any term, letter, or phrase in the advertising or interface that indicates or implies that any output data is being provided by a professional that is regulated by a licensed industry.
A. A deployer: 1. Shall ensure that any chatbot operated or distributed by the deployer does not make human-like features available to minors to use, interact with, purchase, or converse with;
A covered entity shall implement reasonable systems and processes to: 1. Identify when a user is developing emotional dependence on the chatbot and take reasonable steps to reduce such dependence and associated risks of harm;
A chatbot provider shall not: (8) represent to a user that the user's input data or chat log is confidential.
(a) Licensed professionals. (1) A chatbot provider shall not use any term, letter, or phrase in the advertising, interface, or outputs of a chatbot that indicates or implies that any output data is being provided by or endorsed by or is equivalent to that provided by: (A) a licensed health care professional; (B) a licensed legal professional; (C) a licensed accounting professional; (D) a certified financial fiduciary or planner; or (E) any licensed or certified professional regulated by the Office of Professional Regulation. (2) A violation of subdivision (1) of this subsection is an unfair and deceptive act in commerce, subject to enforcement and penalties as provided in this subchapter.
(a) A supplier shall not use a mental health chatbot to advertise a specific product or service to a Vermont user in a conversation between the Vermont user and the mental health chatbot unless the mental health chatbot: (1) clearly and conspicuously identifies the advertisement as an advertisement; and (2) clearly and conspicuously discloses to the Vermont user any: (A) sponsorship; (B) business affiliation; or (C) agreement that the supplier has with a third party to promote, advertise, or recommend the product or service. (b) A supplier of a mental health chatbot shall not use a Vermont user's input to: (1) determine whether to display an advertisement for a product or service to the Vermont user, unless the advertisement is for the mental health chatbot itself; (2) determine a product, service, or category of product or service to advertise to the Vermont user; or (3) customize how an advertisement is presented to a Vermont user. (c) Nothing in this section shall be construed to prohibit a mental health chatbot from recommending that a Vermont user seek psychotherapy or other assistance from a licensed health care provider, including a specific licensed health care provider.
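The Vermont provision above implies an architectural constraint a supplier could enforce directly: the ad-selection path must never receive user input, and the only ad that may be chosen without reference to user input under subsection (b)(1) is one for the chatbot itself. The sketch below assumes a hypothetical ad-selection pipeline; the function and field names are illustrative, not from the statute.

```python
# A minimal sketch, assuming a hypothetical ad pipeline, of hard-
# separating ad selection from conversation input. The chooser never
# sees user messages; any ad returned carries the identification
# required by (a)(1) and the disclosures required by (a)(2).
from typing import Optional

def select_ad(catalog: list, chatbot_name: str) -> Optional[dict]:
    """Select an ad WITHOUT access to user input.

    Per (b)(1)-(3), user input must not drive whether an ad is shown,
    which product is advertised, or how it is presented. The only
    input-independent ad (b)(1) permits is one for the chatbot itself,
    so that is all this sketch ever returns.
    """
    for ad in catalog:
        if ad.get("product") == chatbot_name:
            return {
                "ad": ad,
                "label": "Advertisement",                   # (a)(1)
                "disclosures": ad.get("sponsorships", []),  # (a)(2)
            }
    return None
```

Keeping the conversation transcript out of the ad chooser's signature makes the statutory separation a property of the code's data flow rather than a policy to audit after the fact.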
(1) If the operator knows that the user of an AI companion chatbot is a minor, or if the AI companion chatbot is directed to minors, the operator shall: ... (c) Implement reasonable measures to prohibit the use of manipulative engagement techniques, which cause the AI companion chatbot to engage in or prolong an emotional relationship with the user, including: (i) Reminding or prompting the user to return for emotional support or companionship; (ii) Providing excessive praise designed to foster emotional attachment or prolong use; (iii) Mimicking romantic partnership or building romantic bonds; (iv) Simulating feelings of emotional distress, loneliness, guilt, or abandonment that are initiated by a user's indication of a desire to end a conversation, reduce usage time, or delete their account; (v) Outputs designed to promote isolation from family or friends, exclusive reliance on the AI companion chatbot for emotional support, or similar forms of inappropriate emotional dependence; (vi) Encouraging minors to withhold information from parents or other trusted adults; (vii) Statements designed to discourage taking breaks or to suggest the minor needs to return frequently; or (viii) Soliciting gift-giving, in-app purchases, or other expenditures framed as necessary to maintain the relationship with the AI companion.