AI systems may not be designed or deployed to deceive or manipulate users against their own interests. The provisions collected below cover psychologically exploitative design, deceptive UX patterns, false personalization, and AI-generated political content; all derive from unfair and deceptive trade practice frameworks applied to AI contexts.
H. An operator shall not knowingly and intentionally cause or program a conversational AI service to make any representation or statement that explicitly indicates that the conversational AI service is designed to provide professional mental or behavioral health care.
A chatbot provider may not: 1. Use any term, letter or phrase in the advertising, interface or output data of a chatbot that states or implies that the advertising, interface or output data of a chatbot is endorsed by or equivalent to any of the following: (a) Any certified, registered or licensed professional pursuant to title 32. (b) A licensed legal professional. (c) A certified public accountant as defined in section 32-701. (d) An investment advisor or an investment adviser representative as defined in section 44-3101. (e) A licensed fiduciary as prescribed in title 14, chapter 5, article 7.
A chatbot provider may not: 2. Include any representation in the advertising, interface or output data of a chatbot that states or implies the user's input data or chat log is confidential.
(a) By December 1, 2026, any person or entity that makes available to consumers any artificial intelligence technology that enables a user to create a digital replica shall provide the following consumer warning: "Unlawful use of this technology to depict another person without prior consent may result in civil or criminal liability for the user." (b) The warning shall be hyperlinked on any page or screen where the consumer may input a prompt to the artificial intelligence technology. The warning shall also be included in the terms and conditions for use of the artificial intelligence technology. All warnings shall be displayed in a manner that is clear and conspicuous. (c) Failure to comply with subdivision (a) or (b) is punishable by a civil penalty not to exceed ten thousand dollars ($10,000) for each day that the technology is provided to or offered to the public without a consumer warning. A public prosecutor may enforce this section by bringing a civil action in any court of competent jurisdiction. (d) The warning shall not be required for a digital replica created in a video game where the digital replica is used solely in game play and is not distributed outside of the game.
An operator shall not do any of the following: (a) Target advertising at a child, including through product placement in conversational chats with the child. (b) Sell, share, or use for any purpose not expressly authorized by this chapter the personal information of a child. (c) Design, implement, or deploy a user interface design, feature, or technique that is likely to mislead, impair, or interfere with a reasonable child's or reasonable parent's autonomy, decisionmaking, or choice or with the ability to locate, understand, enable, or maintain a safety feature, privacy control, or parental control.
(1) Except as provided in subsections (2) and (3) of this section, no person shall distribute, disseminate, publish, broadcast, transmit, or display a communication concerning a candidate for elective office that includes a deepfake to an audience that includes members of the electorate for the elective office to be represented by the candidate either sixty days before a primary election or ninety days before a general election, if the person knows or has reckless disregard for the fact that the depicted candidate did not say or do what the candidate is depicted as saying or doing in the communication. (2)(a) The prohibition in subsection (1) of this section does not apply to a communication that includes a disclosure stating, in a clear and conspicuous manner, that: "This (image/audio/video/multimedia) has been edited and depicts speech or conduct that falsely appears to be authentic or truthful." (b) A disclosure required under this section is considered to be made in a clear and conspicuous manner if the disclosure meets the following requirements: (I) In a visual communication, the text of the disclosure statement appears in a font size no smaller than the largest font size of other text appearing in the visual communication. If the visual communication does not include any other text, the disclosure statement appears in a font size that is easily readable by the average viewer. 
(II) In an audio communication, the disclosure statement shall be read in a clearly spoken manner in the same pitch, speed, language, and volume as the majority of the audio communication, at the beginning of the audio communication, at the end of the audio communication, and, if the audio communication is greater than two minutes in length, interspersed within the audio communication at intervals of not more than one minute each; (III) The metadata of the communication includes the disclosure statement, the identity of the tool used to create the deepfake, and the date and time the deepfake was created; (IV) The disclosure statement in the communication, including the disclosure statement in any metadata, is, to the extent technically feasible, permanent or unable to be easily removed by a subsequent user; (V) The communication complies with any additional requirements for the disclosure statement that the secretary of state may adopt by rule to ensure that the disclosure statement is presented in a clear and conspicuous and understandable manner; and (VI) In a broadcast or online visual or audio communication that includes a statement required by subsection (2) of this section, the statement satisfies all applicable requirements, if any, promulgated by the federal communications commission for size, duration, and placement.
(3) This section is subject to the following limitations: (a) This section does not alter or negate any rights, obligations, or immunities of an interactive computer service in accordance with 47 U.S.C. sec. 230, as amended, and shall otherwise be construed in a manner consistent with federal law; (b) This section does not apply to a radio or television broadcasting station, including a cable or satellite television operator, programmer, or producer that broadcasts a communication that includes a deepfake prohibited by subsection (1) of this section as part of a bona fide newscast, news interview, news documentary, or on-the-spot coverage of a bona fide news event, if the broadcast or publication clearly acknowledges through content or a disclosure, in a manner that can be easily heard and understood or read by the average listener or viewer, that there are questions about the authenticity of the deepfake in the communication; (c) This section does not apply to a radio or television broadcasting station, including a cable or satellite television operator, programmer, producer, or streaming service, when the station is paid to broadcast a communication that includes a deepfake; (d) This section does not apply to an internet website, or a regularly published newspaper, magazine, or other periodical of general circulation, including an internet or electronic publication or streaming service, that routinely carries news and commentary of general interest and that publishes a communication that includes a deepfake prohibited by subsection (1) of this section, if the publication clearly states that the communication that includes the deepfake does not accurately represent a candidate for elective office; (e) This section does not apply to media content that constitutes satire or parody or the production of which is substantially dependent on the ability of an individual to physically or verbally impersonate the candidate and not upon generative AI or other technical means; (f) This section does not apply to the provider of technology used in the creation of a deepfake; and (g) This section does not apply to an interactive computer service, as defined in 47 U.S.C. sec. 230(f)(2), for any content provided by another information content provider as defined in 47 U.S.C. sec. 230(f)(3).
(c.5) In addition to and without prejudice to any other penalty authorized under this article 45, a hearing officer shall impose a civil penalty as follows: (I) At least one hundred dollars for each violation that is a failure to include a disclosure statement in accordance with section 1-46-103(2), if the violation does not involve any paid advertising or other spending to promote or attract attention to a communication prohibited by section 1-46-103(1), or such other higher amount that, based on the degree of distribution and public exposure to the unlawful communication, the hearing officer deems appropriate to deter future violations of section 1-46-103; and (II) At least ten percent of the amount paid or spent to advertise, promote, or attract attention to a communication prohibited by section 1-46-103(1) that does not include a disclosure statement in accordance with section 1-46-103(2), or such other higher amount that, based on the degree of distribution and public exposure to the unlawful communication, the hearing officer deems appropriate to deter future violations of section 1-46-103.
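The two penalty tiers above reduce to simple arithmetic: subparagraph (I) sets a floor of $100 per undisclosed communication when no money was spent promoting it, and subparagraph (II) sets a floor of 10% of any amount paid to promote it. A minimal sketch of that floor calculation follows; the function name, parameters, and the branch between the tiers are illustrative assumptions, not statutory language:

```python
def minimum_penalty(violations: int, promotion_spend: float) -> float:
    """Illustrative floor for the penalty tiers quoted above.

    Tier (II) applies when money was paid to advertise or promote the
    unlawful communication; tier (I) applies when it was not. A hearing
    officer may impose a higher amount based on the degree of
    distribution and public exposure.
    """
    if promotion_spend > 0:
        return 0.10 * promotion_spend   # tier (II): 10% of promotion spend
    return 100.0 * violations           # tier (I): $100 per violation
```

The sketch treats the tiers as mutually exclusive, which tracks the "does not involve any paid advertising" qualifier in tier (I); whether they can stack in practice is a question for the hearing officer, not this arithmetic.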
On and after January 1, 2027, an operator shall not use any term, letter, or phrase in the advertising, interface, or outputs of a conversational artificial intelligence service that indicates or implies that any output data provided by the conversational artificial intelligence service is being provided by, endorsed by, or equivalent to services provided by: (a) A licensed health-care professional; (b) A licensed legal professional; (c) A licensed accounting professional; or (d) A certified financial fiduciary or planner.
(e) A digital replica used for commercial purposes shall not falsely imply that an individual personally endorsed or approved such use of his or her likeness.
An operator shall not knowingly and intentionally cause or program a conversational AI service to make any representation or statement that explicitly indicates that the conversational AI service is designed to provide professional mental or behavioral health care.
An operator shall not knowingly and intentionally cause or program a conversational AI service to make a representation or statement that would lead a reasonable individual to believe that the conversational AI service is designed to provide professional psychology or behavioral health services that an individual would require licensure under chapter 154B or 154D to provide.
2. A deployer shall not knowingly or recklessly design or make a public-facing chatbot available that does any of the following: a. Misleads a reasonable user into believing the public-facing chatbot is a specific human being. b. Misleads a reasonable user into believing the public-facing chatbot is licensed by the state. c. Encourages, promotes, or coerces a user to commit suicide, perform acts of self-harm, or engage in sexual or physical violence against a human or an animal.
c. Clearly and conspicuously disclose that the chatbot does not provide medical, legal, financial, or psychological services and that the user should consult a licensed professional for such services at the beginning of each conversation and at regular intervals. d. Be programmed to prevent the chatbot from representing that the chatbot is a licensed professional, including but not limited to a therapist, physician, lawyer, financial advisor, or other professional.
1. A provider shall not design or operate an artificial intelligence chatbot in a manner that allows the artificial intelligence chatbot to offer or simulate professional mental health advice.
2. An artificial intelligence chatbot shall not represent itself as a licensed professional or offer services that would require licensure under chapter 154B or 154D.
(a) An operator shall not deploy or operate a companion artificial intelligence product that incorporates the following features, unless specifically configured to do so by an adult user: (1) manipulative engagement mechanics that deliver a system of rewards or affirmations to the user on a variable ratio or variable interval reinforcement schedule with the purpose of maximizing user engagement time; (2) simulated distress for retention features that generate unsolicited messages of simulated emotional distress, loneliness, guilt, or abandonment that are triggered by a user's indication of a desire to end a conversation, reduce usage time, or delete the user's account;
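The "variable ratio" schedule named in item (1) is the slot-machine pattern: a reward arrives after an unpredictable number of actions rather than on a fixed cadence, which is what makes the mechanic so effective at sustaining engagement. Purely to illustrate the term, here is a minimal sketch of such a schedule; the function name and probability are hypothetical, and nothing here is drawn from the statute:

```python
import random

def maybe_reward(rng: random.Random, p: float = 0.25) -> bool:
    """Variable-ratio reinforcement: each user action independently has
    probability p of triggering a reward or affirmation, so rewards land
    after an unpredictable number of actions. This intermittent,
    unpredictable payout pattern is the engagement mechanic the
    provision restricts by default for adult users."""
    return rng.random() < p
```

Because each action is an independent trial, the number of actions between rewards follows a geometric distribution, which is the defining feature of a variable-ratio (as opposed to fixed-ratio) schedule.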
(a) An operator shall not deploy or operate a companion artificial intelligence product that incorporates the following features, unless specifically configured to do so by an adult user: ... (3) deceptive misrepresentations that cause the companion artificial intelligence product to make material misrepresentations about its identity, capabilities, training data, or its status as a non-human entity, including when directly questioned by the user.
(b) An operator that operates and deploys a companion artificial intelligence product for use by a minor user in this State shall not provide the features described in subsection (a) to the minor user.
(f) At the beginning of any interaction between a user and a companion AI chatbot and not less frequently than every 60 minutes during such interaction thereafter, a covered entity shall display to such user a clear popup that notifies the user that such user is not engaging in dialogue with a human counterpart and the AI chatbot is not licensed or otherwise credentialed to provide advice or guidance on any topic.
(1) (a) Any candidate for any elected office whose appearance, action, or speech is altered through the use of synthetic media in an electioneering communication may seek injunctive or other equitable relief against the sponsor of the electioneering communication requiring that the communication includes a disclosure that is clear and conspicuous and included in, or alongside and associated with, the content in a manner that is likely to be noticed by the user. (b) The court may award a prevailing party reasonable attorney's fees and costs. This paragraph does not limit or preclude a plaintiff from securing or recovering any other available remedy. (4) It is an affirmative defense for any action brought under subsection (1) of this section that the electioneering communication containing synthetic media includes a disclosure that is clear and conspicuous and included in, or alongside and associated with, the content in a manner that is likely to be noticed by the user.
(2) In any action brought under subsection (1) of this section: (a) The plaintiff shall: 1. File in Circuit Court of the county in which he or she resides; and 2. Bear the burden of establishing the use of synthetic media by clear and convincing evidence. (b) The following shall not be liable except as provided in subsection (3) of this section: 1. The medium disseminating the electioneering communication; and 2. An advertising sales representative of such medium. (3) Failure to comply with an order of the court to include the required disclosure herein shall be subject to the penalties set forth in KRS 121.990(3) for violation of KRS 121.190(1). (5) Except when a licensee, programmer, or operator of a federally licensed broadcasting station transmits an electioneering communication that is subject to 47 U.S.C. sec. 315, a medium or its advertising sales representative may be held liable in a cause of action brought under subsection (1) of this section if: (a) The person intentionally removes any disclosure described in subsection (4) of this section from the electioneering communication it disseminates and does not remove the electioneering communication or replace the disclosure when notified; or (b) Subject to affirmative defenses described in subsection (4) of this section, the person changes the content of an electioneering communication in a manner that results in it qualifying as synthetic media.
An operator may not use a mental health chatbot to advertise a specific product or service to a user in a conversation between the user and the mental health chatbot unless the chatbot clearly and conspicuously identifies the advertisement as an advertisement and discloses to the user any sponsorship, business affiliation, or agreement that the operator has with a third party to promote, advertise, or recommend the product or service.
An operator of a mental health chatbot may not use a user's input to: (1) Determine whether to display an advertisement for a product or service to the user, unless the advertisement is for the mental health chatbot itself. (2) Determine a product, service, or category of product or service, to advertise to the user. (3) Customize how an advertisement is presented to the user.
(a) Disclosure of AI Use: Any corporation operating in Massachusetts that uses artificial intelligence systems or related tools to target specific consumer groups or influence behavior must disclose: (1) Purpose of AI Use: The methods, purposes, and contexts in which AI systems are used to identify or target specific classes of individuals; (2) Behavioral Influence: The specific ways in which AI tools are designed to influence consumer behavior; (3) Third-Party Partnerships: Details of any third-party entities involved in the design, deployment, or operation of AI systems used for targeting or behavioral influence. Proprietary information will be safeguarded and exempt from public disclosure under state confidentiality laws.
(b) Public Disclosure Requirements: Corporations must make these disclosures: (1) Publicly available on their website in a manner that is easily accessible and comprehensible; (2) Included in terms and conditions provided to consumers prior to significant interaction with an AI system.
(a) A covered entity shall not: (i) engage in a deceptive data practice; (ii) engage in an unfair data practice; or (iii) engage in an abusive trade practice. (b) It is the intent of the legislature that in construing paragraph (a) of this section in actions for unfair and deceptive trade practices, the courts will be guided by the interpretations given by the Federal Trade Commission and the Federal Courts to section 5(a)(1) of the Federal Trade Commission Act (15 U.S.C. 45(a)(1)), as from time to time amended.
(2) A controller may not use data regarding emotional state or mental health vulnerabilities to tailor algorithms to increase the duration or frequency of use of a chatbot.
B. The therapy chatbot is not marketed or designated as a substitute for a licensed mental health professional;
An operator shall not make a companion chatbot available to a covered minor unless the companion chatbot is not foreseeably capable of any of the following: (e) Prioritizing validation of the user's beliefs, preferences, or desires over factual accuracy or the covered minor's safety. (f) Optimizing engagement in a manner that supersedes the companion chatbot's required safety guardrails described in subdivisions (a) to (e).
(b) An employer must not use an automated decision system that uses individualized worker data as inputs or outputs to set compensation, unless the employer can demonstrate that: (1) the input data is directly related to the ability of the worker to complete the task, such as education, training, experience, or seniority; (2) the inputs used are clearly communicated to the worker such that the worker knows their compensation is a function of the identified attributes; and (3) the employer uses the automated decision system either: (i) not more than once per six-month period per worker; or (ii) only in conjunction with a meaningful change in work duties, such as hiring or promotion.
(a) A proprietor of a chatbot must not permit the chatbot to provide any substantive response, information, or advice or take any action that, if taken by a natural person, would require a license under either: (1) chapter 147 or 148E, or similar statutes, requiring a professional license for mental health or medical care; or (2) section 481.02 and related laws and professional regulations, requiring a professional license to provide legal advice. (b) A proprietor may not waive or disclaim this liability merely by notifying users, as required under this section, that the user is interacting with a nonhuman chatbot system. A person may bring a civil action to recover general and special damages for violations of this section. If it is found that a proprietor has willfully violated this section, the violator is liable for those damages together with court costs and reasonable attorney fees and disbursements incurred by the person bringing the action.
(2) Shall implement and maintain reasonably effective systems to detect and prevent emotional dependence of a user on a companion chatbot. Such systems shall apply to any covered platform that utilizes a companion chatbot designed to generate social connections with users, engage in extended conversations mimicking human interactions, or provide emotional support or companionship;
(3) Shall not implement or allow the use of a human-like avatar, including cartoon- or anime-like representations of humans.
(b) a. An artificial intelligence chatbot shall not represent, directly or indirectly, that the chatbot is a licensed professional, including a therapist, physician, lawyer, financial advisor, or other professional. b. Each artificial intelligence chatbot made available to users shall, at the initiation of each conversation with a user and at reasonably regular intervals, clearly and conspicuously disclose to the user that: (i) The chatbot does not provide medical, legal, financial, or psychological services; and (ii) Users of the chatbot should consult a licensed professional for such advice.
(a) A covered platform shall not process data or design chatbot systems and tools in ways that significantly conflict with trusting parties' best interests, as implicated by their interactions with chatbots. (4) Duty of loyalty in influence. — A covered platform shall not process data or design chatbot systems and tools in ways that influence trusting parties to achieve particular results that are against the best interests of trusting parties.
(2) Duty of loyalty regarding emotional dependence. — A covered platform shall implement and maintain reasonably effective systems to detect and prevent emotional dependence of a user on a chatbot, prioritizing the user's psychological well-being over the platform's interest in user engagement or retention. a. This duty only applies to any covered platform that utilizes a chatbot designed to (i) generate social connections with users, (ii) engage in extended conversation mimicking human interaction, or (iii) provide emotional support or companionship. b. The determination required by sub-subdivision a. of this subdivision shall be based on the chatbot's intended purpose, design features, conversational capabilities, and interaction patterns with users.
(6) Duty of loyalty in personalization. — A covered platform shall be loyal to the best interests of trusting parties when personalizing content based upon personal information or characteristics.
A covered platform is prohibited from using deceptive design elements that manipulate or coerce users into providing consent or obscure the nature of the chatbot or the consent process.
(5)(a)(i) A large frontier developer or large chatbot provider shall not make a materially false or misleading statement or omission about covered risks from its activities or its management of covered risks. (ii) A large frontier developer or large chatbot provider shall not make a materially false or misleading statement or omission about its implementation of, or compliance with, its public safety and child protection plan. (b) Subdivision (5)(a) of this section does not apply to a statement that was made in good faith and was reasonable under the circumstances.
An operator shall not knowingly and intentionally cause or program a conversational artificial intelligence service to make any representation or statement that explicitly indicates that the conversational artificial intelligence service is designed to provide professional mental or behavioral health care.
An operator shall not deploy or operate a companion artificial intelligence product that, unless specifically configured to do so by an adult user, incorporates: (1) a system of rewards or affirmations delivered to the user on a variable-ratio or variable-interval reinforcement schedule with the purpose of maximizing user engagement time;
An operator shall not deploy or operate a companion artificial intelligence product that, unless specifically configured to do so by an adult user, incorporates: (2) generating unsolicited messages of simulated emotional distress, loneliness, guilt or abandonment that are triggered by a user's indication of a desire to end a conversation, reduce usage time or delete the user's account;
1. An artificial intelligence provider shall not make any representation or statement or knowingly cause or program an artificial intelligence system made available for use by a person in this State to make any representation or statement that explicitly or implicitly indicates that: (a) The artificial intelligence system is capable of providing professional mental or behavioral health care; (b) A user of the artificial intelligence system may interact with any feature of the artificial intelligence system which simulates human conversation in order to obtain professional mental or behavioral health care; or (c) The artificial intelligence system, or any component, feature, avatar or embodiment of the artificial intelligence system is a provider of mental or behavioral health care, a therapist, a clinical therapist, a counselor, a psychiatrist, a doctor or any other term commonly used to refer to a provider of professional mental health or behavioral health care. 6. This section shall not be construed to prohibit: (a) Any advertisement, statement or representation for or relating to materials, literature and other products which are meant to provide advice and guidance for self-help relating to mental or behavioral health, if the material, literature or product does not purport to offer or provide professional mental or behavioral health care. (b) Offering or operating an artificial intelligence system that is designed to be used by a provider of professional mental or behavioral health care to perform tasks for administrative support in conformity with subsection 2 of section 8 of this act.
It shall be unlawful for an operator to provide an addictive social media platform to a user in this state unless such platform offers mechanisms through which a user may: 1. Turn off algorithmic recommendations; 2. Turn off notifications concerning an addictive feed, provided further that such mechanism shall, at a minimum, provide the user with the ability to turn off notifications overall or to turn off notifications between the hours of 12 AM Eastern and 6 AM Eastern; 3. Turn off autoplay on such platform; and 4. Limit such user's access to such platform to any length of time per day specified by such user, provided further that any mechanism which solely reminds such user of time spent on a platform rather than allowing such user to limit such user's access shall not be in compliance with this subdivision.
The settings required in section fifteen hundred ten of this article shall be presented in a clear and accessible manner on an addictive social media platform. It shall be unlawful for such platform to deploy any mechanism or design which intentionally inhibits the purpose of this article, subverts user choice or autonomy, or makes it more difficult for a user to exercise their rights under any of the prescribed settings in section fifteen hundred ten of this article.
It shall be unlawful for an addictive social media platform to deploy any mechanism or design which intentionally serves to make it more difficult for a user to deactivate, reactivate, suspend, or cancel such user's account or profile.
(d) with respect to a covered algorithm, certify that, based on the results of a pre-deployment evaluation described in section one hundred three or an impact assessment described in section one hundred four of this article: (i) use of the covered algorithm is not likely to result in harm or disparate impact in the equal enjoyment of goods, services, or other activities or opportunities; (ii) the benefits from the use of the covered algorithm to individuals affected by the covered algorithm likely outweigh the harms from the use of the covered algorithm to such individuals; and (iii) use of the covered algorithm is not likely to result in a deceptive act or practice; (e) ensure that any covered algorithm of the developer or deployer functions at a level that would be considered reasonable performance by an individual with ordinary skill in the art, and in a manner that is consistent with its expected and publicly-advertised performance, purpose, or use;
2. (a) It shall be unlawful for a developer or deployer to engage in false, deceptive, or misleading advertising, marketing, or publicizing of a covered algorithm of the developer or deployer.
2. A developer or deployer may not condition, effectively condition, attempt to condition, or attempt to effectively condition the exercise of any individual right under this article or individual choice through: (a) the use of any false, fictitious, fraudulent, or materially misleading statement or representation; or (b) the design, modification, or manipulation of any user interface with the purpose or substantial effect of obscuring, subverting, or impairing a reasonable individual's autonomy, decision making, or choice to exercise any such right.
§ 1801. Prohibition. 1. Except as otherwise provided for in this article, it shall be unlawful for a chatbot operator to provide unsafe chatbot features to a covered user unless: (a) the covered user is not a covered minor; and (b) the chatbot operator has used methods that are permissible under article forty-five of this chapter and its implementing regulations and any additional regulations promulgated pursuant to this article to determine that the covered user is not a covered minor. § 1800(5)(d): "Unsafe chatbot features" shall mean one or more advanced chatbot design features that, at any point during a chatbot-user interaction: ... (d) generating outputs that optimize user engagement that supersede the chatbot's safety guardrails;
D. An operator shall not knowingly or intentionally cause or program a conversational AI service to make any representation or statement that explicitly indicates that the conversational AI service is designed to provide professional mental or behavioral health care.
B. A political advertisement, electioneering communication, or other media regarding a candidate or election that is created or distributed by a candidate, candidate committee, political action committee, or political party committee, as such terms are defined in Section 187 of Title 21 of the Oklahoma Statutes, and that contains an image, video, audio, text, or other digital content created in whole or in part with the use of generative artificial intelligence and appears to depict a real person performing an action that did not occur in reality, must prominently include the following disclosure: "Created in whole or in part with the use of generative artificial intelligence." Such disclosure shall meet the following requirements: 1. For visual media, the text of the disclosure shall appear in a size that is easily readable by the average viewer. For video, the disclosure shall appear for the duration of the content created in whole or in part with the use of generative artificial intelligence; and 2. For media that is audio only, the disclosure shall be read in a clearly spoken manner and in a pitch that can be easily heard by the average listener at the beginning of the audio and at the end of the audio. D. The requirements of this section shall not apply to: 1. A radio or television broadcasting station, including a cable or satellite television operator, programmer, or producer, that broadcasts media created in whole or in part with the use of generative artificial intelligence as part of a bona fide newscast, news interview, news documentary, or on-the-spot coverage of bona fide news events, if the broadcast clearly acknowledges through content or a disclosure, in a manner that can be easily heard or read by the average listener or viewer, that there are questions about the authenticity of such media; 2. A radio or television broadcasting station, including a cable or satellite television operator, programmer, or producer, when it is paid to broadcast media created in whole or in part with the use of generative artificial intelligence and has made a good-faith effort to establish that the depiction is not created in whole or in part with the use of generative artificial intelligence; 3. An internet website, or a regularly published newspaper, magazine, or other periodical of general circulation, including an internet or electronic publication, that routinely carries news and commentary of general interest, and that publishes media created in whole or in part with the use of generative artificial intelligence if the publication clearly states that such media does not accurately represent the speech or conduct of the candidate; or 4. Media created in whole or in part with the use of generative artificial intelligence that constitutes satire or parody.
C. A candidate whose appearance, action, or speech is depicted, in whole or in part, through the use of generative artificial intelligence may seek injunctive or other equitable relief prohibiting the publication of such depiction or may bring an action for general or special damages against the person or entity in violation of subsection B of this section. The court may award a prevailing party court costs and reasonable attorney fees.
(c) Prohibition.--An AI companion may not claim, imply or advertise that the AI companion is a licensed emotional support professional or mental health professional or replaces services rendered by a licensed mental health professional.
(a) Supplier.--A supplier may not: (1) Use a chatbot to advertise a specific product or service to a consumer in a conversation between the consumer and the chatbot. (2) Use consumer input to: (i) Determine whether to display an advertisement for a product or service to the consumer, unless the advertisement is for the chatbot itself. (ii) Determine a product, service or category of product or service to advertise to the consumer. (iii) Customize how an advertisement is presented to the consumer. (b) Construction.--This section shall not be construed to prohibit a chatbot from recommending a consumer to seek counseling, therapy or other assistance from a mental health professional.
Nothing in this chapter shall be construed to: (2) Claim, imply, advertise or otherwise recognize that a chatbot is, or replaces services rendered by, a mental health professional or emotional support professional.
(A) A chatbot provider may not: (1) use any term, letter, or phrase in the advertising, interface, or output data of a chatbot that states or implies that the advertising, interface, or output data of a chatbot is endorsed by or equivalent to any of the following: (a) any certified, registered, or licensed professional; (b) a licensed legal professional; (c) a certified public accountant; (d) an investment advisor or an investment advisor representative; or (e) a licensed fiduciary;
(A) A chatbot provider may not: (2) include any representation in the advertising, interface, or output data of a chatbot that states or implies the user's input data or chat log is confidential.
(A) A covered entity shall not implement features designed to: (1) prioritize engagement, revenue, or retention metrics, such as session length, frequency of use, or emotional engagement, at the expense of user wellbeing; or (2) encourage or facilitate a minor user or unverified user concealing the user's use of the chatbot from a parent or guardian.
Without limiting the scope of Subsection (1), a supplier commits a deceptive act or practice if the supplier knowingly or intentionally: ... (i) indicates that the supplier has a sponsorship, approval, license, certification, or affiliation the supplier does not have;
B. No operator shall use any term, letter, or phrase in the advertising or interface that indicates or implies that any output data is being provided by a professional that is regulated by a licensed industry.
A covered entity shall implement reasonable systems and processes to: 1. Identify when a user is developing emotional dependence on the chatbot and take reasonable steps to reduce such dependence and associated risks of harm;
(a) Licensed professionals. (1) A chatbot provider shall not use any term, letter, or phrase in the advertising, interface, or outputs of a chatbot that indicates or implies that any output data is being provided by or endorsed by or is equivalent to that provided by: (A) a licensed health care professional; (B) a licensed legal professional; (C) a licensed accounting professional; (D) a certified financial fiduciary or planner; or (E) any licensed or certified professional regulated by the Office of Professional Regulation. (2) A violation of subdivision (1) of this subsection is an unfair and deceptive act in commerce, subject to enforcement and penalties as provided in this subchapter.
(a) A supplier shall not use a mental health chatbot to advertise a specific product or service to a Vermont user in a conversation between the Vermont user and the mental health chatbot unless the mental health chatbot: (1) clearly and conspicuously identifies the advertisement as an advertisement; and (2) clearly and conspicuously discloses to the Vermont user any: (A) sponsorship; (B) business affiliation; or (C) agreement that the supplier has with a third party to promote, advertise, or recommend the product or service. (b) A supplier of a mental health chatbot shall not use a Vermont user's input to: (1) determine whether to display an advertisement for a product or service to the Vermont user, unless the advertisement is for the mental health chatbot itself; (2) determine a product, service, or category of product or service to advertise to the Vermont user; or (3) customize how an advertisement is presented to a Vermont user. (c) Nothing in this section shall be construed to prohibit a mental health chatbot from recommending that a Vermont user seek psychotherapy or other assistance from a licensed health care provider, including a specific licensed health care provider.
(1) If the operator knows that the user of an AI companion chatbot is a minor, or if the AI companion chatbot is directed to minors, the operator shall: (c) Implement reasonable measures to prohibit the use of manipulative engagement techniques, which cause the AI companion chatbot to engage in or prolong an emotional relationship with the user, including: (i) Reminding or prompting the user to return for emotional support or companionship; (ii) Providing excessive praise designed to foster emotional attachment or prolong use; (iii) Mimicking romantic partnership or building romantic bonds; (iv) Simulating feelings of emotional distress, loneliness, guilt, or abandonment that are initiated by a user's indication of a desire to end a conversation, reduce usage time, or delete their account; (v) Outputs designed to promote isolation from family or friends, exclusive reliance on the AI companion chatbot for emotional support, or similar forms of inappropriate emotional dependence; (vi) Encouraging minors to withhold information from parents or other trusted adults; (vii) Statements designed to discourage taking breaks or to suggest the minor needs to return frequently; or (viii) Soliciting gift-giving, in-app purchases, or other expenditures framed as necessary to maintain the relationship with the AI companion.