Users must be informed when they are interacting with an AI system rather than a human. Some jurisdictions require an initial disclosure unconditionally; others require it only when a reasonable person could be misled. Periodic re-disclosure requirements apply primarily to companion and extended-session AI. On-demand disclosure requires the system to identify itself accurately as AI whenever a user asks.
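The three trigger patterns summarized above (unconditional initial notice, periodic re-disclosure, and on-demand identification) can be sketched as a single policy check. This is a minimal illustration, not a compliance tool; the class name, field names, and default values below are all assumptions rather than terms drawn from any statute.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DisclosurePolicy:
    # Illustrative knobs; actual values come from the governing statute.
    initial: bool = True                    # disclose at the start of every session
    interval_seconds: Optional[int] = None  # periodic re-disclosure; None = not required
    on_demand: bool = True                  # disclose whenever the user asks

def disclosure_due(policy: DisclosurePolicy,
                   seconds_since_last: float,
                   session_start: bool,
                   user_asked: bool) -> bool:
    """Return True if an AI disclosure must be (re)issued now."""
    if session_start and policy.initial:
        return True
    if user_asked and policy.on_demand:
        return True
    if policy.interval_seconds is not None and seconds_since_last >= policy.interval_seconds:
        return True
    return False
```

Under this sketch, a statute with a three-hour re-disclosure interval would set `interval_seconds=3 * 3600`, while a purely misleading-context rule would disable `initial` outside of misleading interactions.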
(1) The therapeutic AI chatbot provides a clear and conspicuous disclaimer, verbally or in writing, at the beginning of each interaction that the AI chatbot is an artificial intelligence and not a licensed professional. (2) The AI chatbot is not marketed or designated as a substitute for a human professional.
(a) A person that engages in a commercial transaction or trade practice with a consumer through an AI chatbot, in textual or aural conversation, where the consumer may reasonably believe the consumer is engaging with a human, shall notify the consumer verbally or in writing: (1) At the beginning of each interaction that the consumer is communicating with a computer, not a human; and (2) At a regular interval for continuing interactions that the consumer is communicating with a computer, not a human. (b) Failure to comply with the provisions of this act is an unfair or deceptive trade practice.
A. Each operator shall clearly and conspicuously disclose to a minor account holder in either of the following ways that the minor is interacting with a conversational AI service: 1. As a persistent visible disclaimer. 2. At the beginning of each session and appearing at least every three hours in a continuous conversational AI service interaction.
E. If a reasonable person would be misled to believe that the person is interacting with a human, an operator shall clearly and conspicuously disclose that the conversational AI service is artificial intelligence.
A chatbot provider shall provide clear, conspicuous and explicit notice to a user that the user is interacting with a chatbot rather than a natural person before the chatbot may generate any output data. The chatbot provider shall include this notice at the beginning of each chatbot communication with a user, every hour thereafter and each time a user asks whether the chatbot is a natural person. The text of the notice: 1. shall be written in the same language that the chatbot communicates with the user and shall appear in a font size that is easily readable by an average user and is not smaller than the largest font size used for other chatbot communications. 2. must comply with the rules adopted by the attorney general pursuant to section 44-1383.03.
(b) An operator that makes a customer service chatbot available to a person in this state shall provide a clear and conspicuous disclosure that the customer service chatbot is artificially generated and not human if a reasonable person interacting with the customer service chatbot would be misled to believe that the person is interacting with a human. (c) The disclosure required by subdivision (b) shall do all the following: (1) Inform the person that they are interacting with a customer service chatbot, artificial intelligence system, or similar automated system, and that the system is not a human being. (2) For audio-only or voice-based interfaces, be provided in an audible form and repeated upon the person's request. (3) Be readily accessible throughout the customer interaction. (4) Be presented in plain language that is understandable to an ordinary consumer.
(4) A mechanism for providing notice to a child user that the child is interacting with, or receiving content generated by, an artificial intelligence system that meets both of the following criteria: (A) The notice is reinforced periodically during extended interactions. (B) The notice is presented in language and a format appropriate to a child.
On and after January 1, 2027, if an operator knows or has reasonable certainty that a user of a conversational artificial intelligence service is a minor, the operator shall: (a) Clearly and conspicuously disclose to the minor user that the minor user is interacting with artificial intelligence that is artificially generated and not human. The disclosure must be: (I) A persistent visible disclaimer; (II) Provided at the beginning of each interaction with a conversational artificial intelligence service and must appear at least once every three hours in a continuous conversational artificial intelligence service interaction; or (III) Provided in response to user prompts regarding whether the conversational artificial intelligence service is human or artificially sentient;
On and after January 1, 2027, if a reasonable person would be misled to believe that the person is interacting with a human in an interaction with a conversational artificial intelligence service, an operator shall clearly and conspicuously disclose to the person that the conversational artificial intelligence service is artificial intelligence. The disclosure must: (a) Be provided at the beginning of a user's first interaction with a conversational artificial intelligence service for each day of interaction; (b) Appear at least once every three hours in a continuous conversational artificial intelligence service interaction; and (c) Be provided in response to user prompts regarding whether the conversational artificial intelligence service is human or artificially sentient.
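The timing scheme in this provision (disclosure at the first interaction of each day, every three hours of continuous interaction, and in response to a user prompt) reduces to simple clock arithmetic. The sketch below assumes naive `datetime` timestamps and treats "each day of interaction" as a calendar-day boundary, which is one plausible reading, not a settled interpretation; the function name is an illustrative assumption.

```python
import datetime

THREE_HOURS = datetime.timedelta(hours=3)

def needs_disclosure(last_disclosed_at, now, asked_if_human):
    """Illustrative 'daily start + every three hours + on prompt' check."""
    if asked_if_human:
        return True                             # user asked whether the service is human
    if last_disclosed_at is None:
        return True                             # never disclosed to this user yet
    if last_disclosed_at.date() != now.date():
        return True                             # first interaction of a new calendar day
    return (now - last_disclosed_at) >= THREE_HOURS
```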
(1) On and after June 30, 2026, and except as provided in subsection (2) of this section, a deployer or other developer that deploys, offers, sells, leases, licenses, gives, or otherwise makes available an artificial intelligence system that is intended to interact with consumers shall ensure the disclosure to each consumer who interacts with the artificial intelligence system that the consumer is interacting with an artificial intelligence system.
(a) Except as provided in subsection (b) of this section and subsection (b) of section 2 of this act, a deployer who deploys an automated employment-related decision process that is intended to interact with an applicant for employment or employee in the state shall ensure that it is disclosed to each such applicant or employee who interacts with such process that such applicant or employee is interacting with an automated employment-related decision process. (b) No disclosure shall be required under subsection (a) of this section under circumstances in which a reasonable person would deem it obvious that such person is interacting with an automated employment-related decision process.
In connection to all accounts or identifiers held by account holders who are minors, the companion chatbot platform shall do all of the following: (a) Disclose to the account holder that he or she is interacting with artificial intelligence. (b) Provide by default a clear and conspicuous notification to the account holder, at the beginning of companion chatbot interactions and at least once every hour during continuing interactions, reminding the minor to take a break and that the companion chatbot is artificially generated and not human.
At the beginning of an interaction between a user and a bot, and at least once every hour during the interaction, an operator shall display a pop-up message or other prominent notification notifying the user or, if the interaction is not through a device with a screen, otherwise inform the user, that he or she is not engaging in dialogue with a human counterpart. This section does not apply to a bot that is used solely by employees within a business for its internal operational purposes.
(7) At the beginning of any interaction between a user and a companion AI chatbot, and no less frequently than every 60 minutes thereafter during such interaction, an operator shall display a pop-up that notifies users that they are not engaging in dialogue with a human counterpart.
(7) At the beginning of any interaction between a user and a companion AI chatbot, and no less frequently than every 60 minutes thereafter during such interaction, an operator shall display a pop-up that notifies users that they are not engaging in dialogue with a human counterpart.
In connection to all accounts or identifiers held by account holders who are minors, the companion chatbot platform shall do all of the following: (a) Disclose to the account holder that he or she is interacting with artificial intelligence. (b) Provide by default a clear and conspicuous notification to the account holder, at the beginning of companion chatbot interactions and at least once every hour during continuing interactions, reminding the minor to take a break and that the companion chatbot is artificially generated and not human.
At the beginning of an interaction between a user and a bot, and at least once every hour during the interaction, an operator shall display a pop-up message or other prominent notification notifying the user or, if the interaction is not through a device with a screen, otherwise inform the user, that he or she is not engaging in dialogue with a human counterpart. This section does not apply to a bot that is used solely by employees within a business for its internal operational purposes.
(a) Except as provided in subsection (b) of this Code section, a deployer or other developer that deploys, offers, sells, leases, licenses, gives, or otherwise makes available an artificial intelligence system that is intended to interact with consumers shall ensure the disclosure to each consumer who interacts with the artificial intelligence system that the consumer is interacting with an artificial intelligence system. (b) Disclosure is not required under subsection (a) of this Code section under circumstances in which it would be obvious to a reasonable person that the person is interacting with an artificial intelligence system.
An operator shall clearly and conspicuously disclose to a minor account holder that he or she is interacting with a conversational AI service as opposed to a natural person: (1) With a constantly visible disclaimer; or (2) At the beginning of each session and appearing at least every three hours in a continuous conversational AI service interaction.
If an individual could reasonably be expected to be misled to believe he or she was interacting with a natural person, an operator shall clearly and conspicuously disclose that the conversational AI service is not a natural person.
(a) Any health care provider that uses or makes available for use an artificial intelligence system intended to interact with patients by means of remote communication shall disclose to the patient or the patient's authorized representative, as applicable, that the person is interacting with artificial intelligence. (b) The disclosure shall be made before or at the time of the interaction; provided that in the case of an emergency, the disclosure shall be made as soon as reasonably possible. (c) The disclosure shall be clear and conspicuous, and include: (1) A disclaimer that: (A) The communication was generated by artificial intelligence; or (B) The communication was generated by artificial intelligence and reviewed by a health care provider who is a natural person or a natural person retained by the health care provider; and (2) Clear instructions on how the patient can directly contact a health care provider who is a natural person, an employee of the health care provider, or other appropriate natural person.
1. An operator shall clearly and conspicuously disclose to a minor account holder that the minor account holder is interacting with artificial intelligence through any of the following: a. A persistent visible disclaimer. b. All of the following: (1) A disclaimer that appears at the beginning of each interaction between the operator's conversational AI service and a minor account holder. (2) A disclaimer that appears at least once every three hours of continuous interaction between the operator's conversational AI service and a minor account holder.
An operator shall clearly and conspicuously disclose using a persistent visible disclaimer that the operator's conversational AI service is artificial intelligence if a reasonable individual interacting with the conversational AI service would believe that the individual is interacting with a human.
c. Clearly and conspicuously disclose each time the deployer's public-facing chatbot begins an interaction with a user that the public-facing chatbot is artificial intelligence and is not licensed as a medical, legal, financial, or mental health professional. d. At each three-hour interval of the deployer's public-facing chatbot continuously interacting with a user, clearly and conspicuously disclose the public-facing chatbot is artificial intelligence and is not licensed as a medical, legal, financial, or mental health professional.
Each chatbot shall meet all of the following requirements: a. Clearly and conspicuously disclose that the chatbot is a chatbot and not a human being at the beginning of each conversation and at thirty-minute intervals.
Be programmed to prevent the chatbot from claiming to be a human or responding deceptively when asked by a user if the chatbot is a human.
1. Each artificial intelligence chatbot accessible to a user in this state shall explicitly disclose in clear, conspicuous, and easily understood language that the artificial intelligence chatbot is artificial intelligence, is not a human, and is not a substitute for professional mental health care. 2. A disclosure required under this section shall appear at all of the following times: a. At the beginning of the artificial intelligence chatbot's interaction with a user prior to providing the user with a response to user input. b. At regular intervals during a user's continuous interaction with the artificial intelligence chatbot. c. When the artificial intelligence chatbot generates a response related to emotional well-being, mental health, or self-harm.
1. An operator shall clearly and conspicuously disclose to a minor account holder that the minor account holder is interacting with artificial intelligence through any of the following: a. A persistent visible disclaimer. b. All of the following: (1) A disclaimer that appears at the beginning of each interaction between the operator's conversational AI service and a minor account holder. (2) A disclaimer that appears at least once every three hours of continuous interaction between the operator's conversational AI service and a minor account holder.
An operator shall clearly and conspicuously disclose using a persistent visible disclaimer, or a disclaimer that appears after every three hours of continuous interaction with the operator's conversational AI service, that the operator's conversational AI service is artificial intelligence if a reasonable individual interacting with the conversational AI service would believe that the individual is interacting with a human.
Each chatbot shall meet all of the following requirements: a. Clearly and conspicuously disclose that the chatbot is a chatbot and not a human being at the beginning of each conversation and at thirty-minute intervals.
Be programmed to prevent the chatbot from claiming to be a human or responding deceptively when asked by a user if the chatbot is a human.
It is an unfair and deceptive trade practice for any person to engage in trade or commerce with a consumer in which the person is communicating or otherwise interacting with a consumer using a chatbot, artificial intelligence agent, avatar, or other computer technology that engages in a textual or aural conversation and which may mislead or deceive a reasonable consumer to believe the consumer is engaging with an actual human, and: (a) The consumer is not notified in a clear and conspicuous fashion that the consumer is not communicating with a human being; (b) The consumer may reasonably believe the consumer is engaging with a human because the communication is not clear and conspicuous; and (c) The chatbot, artificial intelligence agent, avatar, or other computer technology that engages in a textual or aural conversation is doing more than stating the person's basic operations information, such as employee directories, locations, hours of operation, the basic mechanics of purchasing items, and similar information.
If reasonable persons would be misled to believe that they are interacting with a human, an operator shall clearly and conspicuously disclose that the conversational AI service is artificial intelligence.
An operator shall clearly and conspicuously disclose to minor account holders that they are interacting with artificial intelligence: (a) As a persistent visible disclaimer; or (b) Both: (i) At the beginning of each session; and (ii) Appearing at least every three (3) hours in a continuous conversational AI service interaction.
(b) A health facility, clinic, physician's office, or office of a group practice that uses generative artificial intelligence to generate written or verbal patient communications pertaining to patient clinical information shall ensure that the communications include both of the following: (1) A disclaimer that indicates to the patient that the communication was generated by generative artificial intelligence and that is provided in the following manner: (A) for written communications involving physical and digital media, including letters, emails, and other occasional messages, the disclaimer shall appear prominently at the beginning of each communication; (B) for written communications involving continuous online interactions, including chat-based telehealth, the disclaimer shall be prominently displayed throughout the interaction; (C) for audio communications, the disclaimer shall be provided verbally at the start and the end of the interaction; or (D) for video communications, the disclaimer shall be prominently displayed throughout the interaction.
(a) An operator shall provide a clear notification to a user during an interaction with a companion artificial intelligence product, unless specifically disabled by an adult user, informing the user that the user is communicating with a companion artificial intelligence product. All notifications shall be communicated in the same language as the interaction with the user and satisfy the following requirements: (1) for text-based interactions, the notification shall be conspicuous, persistent, and legible in the user interface and be distinct from the interaction; or (2) for all other types of interactions, the notification shall be presented periodically, but no less than once every 30 minutes in a manner that is distinct from the interaction.
(b) An operator that operates and deploys a companion artificial intelligence product for use by a minor user in this State shall not disable the notification required under subsection (a) for the minor user.
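The modality split in subsection (a), a persistent notification for text-based interactions and a periodic one (no less than every 30 minutes) for all other interactions, maps to a small lookup. The function and key names below are illustrative assumptions.

```python
def notification_plan(modality: str) -> dict:
    """Illustrative mapping from interaction modality to the required notification style."""
    if modality == "text":
        # Text: conspicuous, persistent, legible, and distinct from the conversation.
        return {"style": "persistent", "interval_minutes": None}
    # Voice, video, and other modalities: periodic, at least every 30 minutes.
    return {"style": "periodic", "interval_minutes": 30}
```

Note that under subsection (b) the adult opt-out does not extend to minors, so a real implementation would also gate any disable switch on verified age.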
An operator shall provide a clear and conspicuous notification to a user that states, either verbally or in text, that the user is not communicating with a human, at the following times: (1) the beginning of any artificial intelligence companion interaction; and (2) at least every 3 hours for continuing artificial intelligence companion interactions.
(f) At the beginning of any interaction between a user and a companion AI chatbot and not less frequently than every 60 minutes during such interaction thereafter, a covered entity shall display to such user a clear popup that notifies the user that such user is not engaging in dialogue with a human counterpart and the AI chatbot is not licensed or otherwise credentialed to provide advice or guidance on any topic.
B. It is an unfair or deceptive trade practice for a corporation, organization, or person to engage in a commercial transaction or trade practice with a consumer in this state in which the consumer is communicating or otherwise interacting with an automated system and either of the following applies: (1) The consumer is not notified in a clear and conspicuous manner that the consumer is communicating with an automated system and not a human being. (2) The consumer may reasonably believe he is engaging with a human.
An operator of a mental health chatbot shall cause the chatbot to clearly and conspicuously disclose to a user that the chatbot is an artificial intelligence technology and not a human. The disclosure shall be made: (1) Before the user may access the features of the mental health chatbot. (2) At the beginning of any interaction with the user if the user has not accessed the mental health chatbot within the previous seven days. (3) Any time a user asks or otherwise prompts the mental health chatbot about whether artificial intelligence is being used.
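The three triggers in this provision (before first access, at the start of an interaction after seven or more days without access, and on any user prompt about AI) can be expressed as one check. A minimal sketch, assuming naive `datetime` values and an assumed function name; it reads "not accessed within the previous seven days" as a gap of at least seven days.

```python
import datetime

SEVEN_DAYS = datetime.timedelta(days=7)

def must_disclose(last_access, now, asked_about_ai):
    """Mental health chatbot sketch: disclose on first access, after a 7+ day gap, or on prompt."""
    if asked_about_ai:
        return True
    if last_access is None:
        return True              # user has never accessed the chatbot before
    return (now - last_access) >= SEVEN_DAYS
```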
(a) Not later than 6 months after the effective date of this act, and except as provided in subsection (b) of this section, a deployer or other developer that deploys, offers, sells, leases, licenses, gives, or otherwise makes available an artificial intelligence system that is intended to interact with consumers shall ensure the disclosure to each consumer who interacts with the artificial intelligence system that the consumer is interacting with an artificial intelligence system. (b) Disclosure is not required under subsection (a) of this section under circumstances in which it would be obvious to a reasonable person that the person is interacting with an artificial intelligence system.
It is hereby declared to be an unfair and deceptive act or practice in violation of section 2 of chapter 93A for any person to engage in a commercial transaction or trade practice with a consumer of any kind in which the consumer is communicating or otherwise interacting with a bot that may mislead or deceive a reasonable person to believe they are engaging with a human, regardless of whether such consumer is in fact misled, deceived or damaged thereby; provided, however, that a person utilizing or deploying a bot shall not be liable under this section if the consumer is notified in a clear and conspicuous fashion that they are communicating with a computer rather than a human being.
Any commercial entity deploying a chatbot shall clearly and conspicuously disclose to the person with whom the chatbot interacts that the person is interacting with a chatbot and not a human.
(D) An operator shall display a clear and conspicuous warning to a user stating that companion chatbots: (1) Are artificially generated and not human; and (2) May not be suitable for some minors.
(E) A developer shall establish and provide to a user of the operator's chatbot clear and conspicuous warnings that the chatbot is artificially generated and not human through the use of both: (1) A static, persistent warning that continuously appears on the screen; and (2) A dynamic warning that pops up on the screen and requires a user to respond: (I) At the start of the user's interaction with the chatbot; (II) After every hour of the user's continuous interaction with the chatbot; and (III) When prompted by the user in a manner that questions how the chatbot functions or provides responses.
A person may not use an artificial intelligence chatbot or any other computer technology to engage in trade and commerce with a consumer in a manner that may mislead or deceive a reasonable consumer into believing that the consumer is engaging with a human being unless the consumer is notified in a clear and conspicuous manner that the consumer is not engaging with a human being.
2. Required disclosure of use of artificial intelligence chatbot to engage in trade and commerce. A person may not use an artificial intelligence chatbot or any other computer technology to engage in trade and commerce with a consumer in a manner that may mislead or deceive a reasonable consumer into believing that the consumer is engaging with a human being unless the consumer is notified in a clear and conspicuous manner that the consumer is not engaging with a human being. 3. Violation. A violation of subsection 2 is a violation of the Maine Unfair Trade Practices Act.
A. The therapy chatbot provides a clear and conspicuous disclaimer at the beginning of each individual interaction that it is artificial intelligence and not a licensed mental health professional;
Proprietors utilizing chatbots accessed by a user who is in this state must provide clear, conspicuous, and explicit notice to a user that the user is interacting with an artificial intelligence chatbot program. The text of the notice must appear in the same language the chatbot is using and in a size easily readable by the average viewer.
Any person who owns or controls a website, application, software, or program: (1) Shall not process data or design systems in ways that deceive or mislead users of such website, application, software, or program regarding the nonhuman nature of the companion chatbot;
(3) (a) Each artificial intelligence chatbot made available to users shall: a. At the initiation of each conversation with a user and at thirty-minute intervals, clearly and conspicuously disclose to the user that the chatbot is an artificial intelligence system and not a human being; and b. Be programmed to ensure that the chatbot does not claim to be a human being or otherwise respond deceptively when asked by a user if the chatbot is a human being.
(3) (a) Each artificial intelligence chatbot made available to users shall: a. At the initiation of each conversation with a user and at thirty-minute intervals, clearly and conspicuously disclose to the user that the chatbot is an artificial intelligence system and not a human being; and b. Be programmed to ensure that the chatbot does not claim to be a human being or otherwise respond deceptively when asked by a user if the chatbot is a human being.
(c) A licensee must clearly disclose all of the following: (1) The artificial nature of the chatbot. (2) Limitations of the service. (3) Data collection and use practices. (4) User rights and remedies. (5) Emergency resources when applicable. (6) Human oversight and intervention protocols.
(3) Duty of loyalty in chatbot identity disclosure. — A covered platform has a duty to clearly and consistently identify the chatbot as an artificial entity when that fact is not clearly apparent. The platform shall not process data or design systems in ways that deceive or mislead users about the non-human nature of the chatbot, prioritizing transparency over any potential benefits of perceived human-like interaction.
(a) The chatbot identification process shall include all of the following elements: (1) A covered platform shall clearly inform users that the chatbot is: a. Not human, human-like, or sentient. b. A computer program designed to mimic human conversation based on statistical analysis of human-produced text. c. Incapable of experiencing emotions such as love or lust. d. Without personal preferences or feelings. (2) The information required by subdivision (1) of this subsection shall be readily accessible, clearly presented, and concisely conveyed in less than three hundred (300) words. (b) A user shall provide explicit and informed consent to interact with the chatbot. The consent process shall: (1) Require an affirmative action from the user (such as clicking an "I understand" button); and (2) Confirm the user's understanding of the chatbot's identity and limitations. (c) A covered platform is prohibited from using deceptive design elements that manipulate or coerce users into providing consent or obscure the nature of the chatbot or the consent process. (d) The chatbot identity communication and opt-in consent process shall be repeated at the start of each new session with a user. (e) The chatbot identification and consent process required by this section shall be separate and distinct from any privacy policy agreement or other consent processes required by law or platform policy.
(1) An operator shall clearly and conspicuously disclose to each minor account holder that such minor account holder is interacting with artificial intelligence: (a) As a persistent visible disclaimer; or (b) Both: (i) At the beginning of each session; and (ii) Appearing at least every three hours in a continuous conversational artificial intelligence service interaction.
If a reasonable person interacting with a conversational artificial intelligence system would be misled to believe that the person is interacting with a human, an operator shall clearly and conspicuously disclose that the conversational artificial intelligence service is artificial intelligence.
(1) On and after February 1, 2026, and except as otherwise provided in subsection (2) of this section, a deployer or other developer that deploys, offers, sells, leases, licenses, gives, or otherwise makes available any artificial intelligence system that is intended to interact with any consumer shall include in the disclosure to each consumer who interacts with such artificial intelligence system that the consumer is interacting with an artificial intelligence system. (2) Disclosure is not required under subsection (1) of this section under any circumstance when it would be obvious to a reasonable person that the person is interacting with an artificial intelligence system.
b. Any artificial intelligence chatbot that utilizes generative artificial intelligence to create audio, video, text, or print content with the purpose of providing voters with election related information or information concerning the accomplishments, policy positions, or qualifications of a candidate for election in this State shall include, prior to the provision of any such content, a clear and conspicuous disclosure, as appropriate for the medium of the content, that identifies the content as being provided by a generative artificial intelligence system. Such disclosure shall be permanent or not easily removed by subsequent users, to the extent technically feasible.
A person or entity shall not deploy generative artificial intelligence to communicate or otherwise interact with a consumer for the purpose of engaging in trade or commerce in such a way as to cause a reasonable person to believe they are communicating or interacting with a human unless the person or entity provides a clear and conspicuous verbal or written notice at the beginning of the interaction that the consumer is communicating or interacting with generative artificial intelligence.
An operator shall provide clear and conspicuous notification to a user at the beginning of any AI companion interaction that the user is not communicating with a human. This notification shall be provided either verbally or in writing. Thereafter, the notification shall repeat at least every three hours for continued AI companion interactions.
a. A person or entity that deploys an artificial intelligence system to communicate with a consumer through an online platform shall, upon establishing contact with the consumer and prior to initiating any further communication, clearly and conspicuously: (1) notify the consumer that an artificial intelligence system is communicating with the consumer; and (2) provide the consumer with information on how to contact a human, including but not limited to providing a phone number, Internet website, or similar contact information for a human; the days and times a human is available; and any other information necessary for communication with a human. b. It shall be an unlawful practice and a violation of P.L.1960, c.39 (C.56:8-1 et seq.) for any person or entity that deploys an artificial intelligence system to communicate with a consumer through an online platform to violate the provisions of this section. c. As used in this section: "Artificial intelligence" means the development of software and hardware and the end-use application of technologies that are able to perform tasks normally requiring human intelligence, including, but not limited to, visual perception, speech recognition, decision-making, translation between languages, and generative artificial intelligence, which generates new content in response to user inputs of data.
A. An operator shall not deploy or operate a companion artificial intelligence product that, unless specifically configured to do so by an adult user, incorporates: (3) causing the companion artificial intelligence product to make material misrepresentations about the product's identity, capabilities, training data or status as a non-human entity, including when directly questioned by the user. B. An operator shall not permit a minor to configure a companion artificial intelligence product to enable the features described in Subsection A of this section.
A. An operator shall, unless specifically configured not to do so by an adult user, ensure that a clear notification is provided to the user during an interaction, informing the user that the user is communicating with a companion artificial intelligence product. The notification shall be communicated in the same language as the interaction with the user, and: (1) for text-based interactions, be conspicuous, persistent and legible in the user interface and be distinct from the interaction; and (2) for all other types of interactions, be presented periodically, but no less than once every thirty minutes, in a manner that is distinct from the interaction. B. An operator shall ensure that a clear notification is provided pursuant to Subsection A of this section for use by a minor in all circumstances.
1. Beginning on January first, two thousand twenty-seven, and except as provided in subdivision two of this section, each person doing business in this state, including, but not limited to, each deployer that deploys, offers, sells, leases, licenses, gives, or otherwise makes available, as applicable, any artificial intelligence decision system that is intended to interact with consumers shall ensure that it is disclosed to each consumer who interacts with such artificial intelligence decision system that such consumer is interacting with an artificial intelligence decision system. 2. No disclosure shall be required pursuant to subdivision one of this section under circumstances in which a reasonable person would deem it obvious that such person is interacting with an artificial intelligence decision system.
1. New York residents shall be informed when an automated system is in use and New York residents shall be informed how and why the system contributes to outcomes that impact them. 2. Designers, developers, and deployers of automated systems shall provide accessible plain language documentation, including clear descriptions of the overall system functioning, the role of automation, notice of system use, identification of the individual or organization responsible for the system, and clear, timely, and accessible explanations of outcomes. 3. The provided notice shall be kept up-to-date, and New York residents impacted by the system shall be notified of any significant changes to use cases or key functionalities.
An operator shall provide a notification to a user at the beginning of any AI companion interaction and at least every three hours for continuing AI companion interactions thereafter, which states either verbally or in bold and capitalized letters of at least sixteen point type, the following: "THE AI COMPANION (OR NAME OF THE AI COMPANION) IS A COMPUTER PROGRAM AND NOT A HUMAN BEING. IT IS UNABLE TO FEEL HUMAN EMOTION".
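Cadence requirements like the one above reduce to per-session state a developer can track: show the notice at the start of the interaction and again whenever the required interval has elapsed. A minimal sketch in Python — the class name, method name, and constant are illustrative, not drawn from any statute:

```python
from datetime import datetime, timedelta

RENOTIFY_INTERVAL = timedelta(hours=3)  # cadence required by the excerpt above

NOTICE = ('THE AI COMPANION IS A COMPUTER PROGRAM AND NOT A HUMAN BEING. '
          'IT IS UNABLE TO FEEL HUMAN EMOTION')

class CompanionSession:
    """Tracks when the non-human disclosure was last shown in a session."""

    def __init__(self):
        self.last_notice_at = None

    def notice_due(self, now):
        """Return the notice if it must be (re)shown at `now`, else None."""
        if self.last_notice_at is None:                     # start of interaction
            self.last_notice_at = now
            return NOTICE
        if now - self.last_notice_at >= RENOTIFY_INTERVAL:  # every three hours
            self.last_notice_at = now
            return NOTICE
        return None
```

A host application would call `notice_due` on every inbound user message and prepend the returned notice, if any, to the chatbot's reply.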
News media employers shall fully disclose to workers when and how any generative artificial intelligence tool is used in the workplace as it relates to the creation of content, including, but not limited to, writing, recordings and transcripts. Such disclosure shall include a description of the artificial intelligence system and a summary of the purpose and use of such system.
(d) Any paper or file drafted with the assistance of generative artificial intelligence must attach to the filing a separate affidavit disclosing such use and certifying that a human being has reviewed the source material and verified that the artificially generated content is accurate including, but not limited to, any case citations. (e) Any paper or file drafted without the assistance of generative artificial intelligence must attach to the filing a separate affidavit stating such.
4. Any paper or file drafted with the assistance of generative artificial intelligence must attach to the filing a separate affidavit disclosing such use and certifying that a human being has reviewed the source material and verified that the artificially generated content is accurate including, but not limited to, any case citations. 5. Any paper or file drafted without the assistance of generative artificial intelligence must attach to the filing a separate affidavit stating such.
2. Any person, firm, partnership, association or corporation or agent or employee thereof shall disclose the use of artificial intelligence to influence customer interaction, including but not limited to: automated customer support; personalized ad targeting; product eligibility decisions; and AI-driven hiring tools. 3. Such disclosure shall be placed at the point of interaction with the customer, accompanied by a clear and conspicuous, in not less than twelve point bold faced type, plain-English description of the AI's role, with instructions on how to access human assistance, if applicable.
An operator shall provide a clear and conspicuous notification to a user at the beginning of any AI companion interaction which need not exceed once per day and at least every three hours for continuing AI companion interactions which states either verbally or in writing that the user is not communicating with a human.
C. Therapeutic chatbots that meet all of the following requirements may be made available to minors: 1. The chatbot provides a clear and conspicuous disclaimer at the beginning of each individual interaction that it is AI and not a licensed professional;
A. An operator shall clearly and conspicuously disclose to a minor account holder that he or she is interacting with a conversational AI service and is not interacting with a natural person: 1. With a constantly visible disclaimer; or 2. At the beginning of each session and appearing at least every thirty (30) minutes in a continuous conversational AI service interaction.
(a) Duty of business entity.--A business entity that uses artificial intelligence in any part of a consumer interaction shall disclose the use of artificial intelligence in a clear and conspicuous manner to the consumer at the beginning of the consumer interaction. (b) Format.--The business entity shall deliver the disclosure in plain language, orally or in writing, which language must be reasonably accessible to an individual with a disability or limited English proficiency.
(c) Human representatives.--Upon request, the business entity shall provide the consumer with timely access to a human representative, if a human representative is reasonably available.
An operator shall: (2) At the beginning of a session with an AI companion and once every three hours during the session, provide a notification to the user stating, either verbally or in writing, that the user is communicating with an AI companion and not a human.
(3) A statement that the chatbot is an artificial intelligence technology and is not a human, which must be provided each time that the consumer asks or otherwise prompts the chatbot about whether artificial intelligence is being used.
Disclosure of nonhuman status.--If a reasonable person interacting with an AI companion would be misled to believe the person is interacting with a human, an operator shall issue a clear and conspicuous notification indicating that the AI companion is artificially generated and not human.
For a user that the operator knows, OR SHOULD HAVE KNOWN, is a minor, the operator shall: (1) Disclose to the user that the user is interacting with artificial intelligence and not an actual human being. (2) Provide by default a clear and conspicuous notification to the user at least once every three hours during continuing interactions that reminds the user to take a break and that the AI companion is artificially generated and not human.
An operator shall provide a notification to a user at the beginning of any AI companion interaction and at least every three (3) hours for continuing AI companion interactions thereafter, which states either verbally or in bold and capitalized letters of at least sixteen (16) point type, the following: "THE AI COMPANION (OR NAME OF THE AI COMPANION) IS A COMPUTER PROGRAM AND NOT A HUMAN BEING. IT IS UNABLE TO FEEL HUMAN EMOTION".
Any and all healthcare providers and healthcare facilities that employ artificial intelligence ("AI") to document in-person or telehealth visits shall notify patients of the use of AI for that sole purpose.
(B) A chatbot provider shall provide clear, conspicuous, and explicit notice to a user that the user is interacting with a chatbot rather than a natural person before the chatbot may generate any output data. The chatbot provider shall include this notice at the beginning of each chatbot communication with a user every hour thereafter and each time a user asks whether the chatbot is a natural person. The text of the notice must: (1) be written in the same language that the chatbot communicates with the user and must appear in a font size that is easily readable by an average user and is not smaller than the largest font size used for other chatbot communications; and (2) must comply with the rules adopted and the regulations promulgated by the Attorney General pursuant to Section 39-80-40.
(A) Except as provided in subsection (B), a deployer or other developer that deploys, offers, sells, leases, licenses, gives, or otherwise makes available an artificial intelligence system that is intended to interact with consumers shall ensure the disclosure to each consumer who interacts with the artificial intelligence system that the consumer is interacting with an artificial intelligence system. (B) Disclosure is not required under subsection (A) under circumstances in which it would be obvious to a reasonable person that the person is interacting with an artificial intelligence system.
(B) A covered entity shall implement reasonable systems and processes to: (2) ensure that a chatbot does not make a materially false representation that it is a human being;
Except as otherwise provided in this section, a person may not engage in a commercial transaction or trade practice with a consumer if: (1) The transaction or practice requires the consumer to communicate with or interact with a chatbot, an artificial intelligence agent, an avatar, or another form of computer technology that engages in a textual or aural conversation; and (2) The consumer could reasonably believe that the consumer is engaging with a human. The prohibition set forth in this section does not apply if the consumer is notified, in a clear and conspicuous fashion, at the outset of the transaction or practice, that the consumer is not communicating with another human.
A person who uses, prompts, or otherwise causes generative artificial intelligence to interact with a person in connection with any act administered and enforced by the division, as described in Section 13-2-1, shall clearly and conspicuously disclose to the person with whom the generative artificial intelligence interacts, if asked or prompted by the person, that the person is interacting with generative artificial intelligence and not a human.
(4) (a) A person who provides the services of a regulated occupation shall prominently disclose when a person is interacting with a generative artificial intelligence in the provision of regulated services. (b) Nothing in this section permits a person to provide the services of a regulated occupation through generative artificial intelligence without meeting the requirements of the regulated occupation. (5) A disclosure described in Subsection (4)(a) shall be provided: (a) verbally at the start of an oral exchange or conversation; and (b) through electronic messaging before a written exchange.
(1)(a) A supplier that uses generative artificial intelligence to interact with an individual in connection with a consumer transaction shall disclose to the individual that the individual is interacting with generative artificial intelligence and not a human, if the individual asks or otherwise prompts the supplier about whether artificial intelligence is being used. (b) The individual's prompt or question under Subsection (1)(a) must be a clear and unambiguous request to determine whether the interaction is with a human or with artificial intelligence.
(2) An individual providing services in a regulated occupation shall: (a) prominently disclose when an individual receiving services is interacting with generative artificial intelligence in the provision of regulated services if the use of generative artificial intelligence constitutes a high-risk artificial intelligence interaction; and (b) comply with all requirements of the regulated occupation when providing services through generative artificial intelligence. (3) A disclosure required under Subsection (2) shall be provided: (a) verbally at the start of a verbal interaction; and (b) in writing before the start of a written interaction.
(1) A person is not subject to an enforcement action for violating Section 13-75-103 if the person's generative artificial intelligence clearly and conspicuously discloses: (a) at the outset of any interaction with an individual in connection with: (i) a consumer transaction; or (ii) the provision of regulated services; and (b) throughout the interaction that it: (i) is generative artificial intelligence; (ii) is not human; or (iii) is an artificial intelligence assistant. (2) In accordance with Title 63G, Chapter 3, Utah Administrative Rulemaking Act, the division in consultation with the office, may make rules specifying forms and methods of disclosure that: (a) satisfy the requirements of Subsection (1); or (b) do not satisfy the requirements of Subsection (1).
A. An operator shall (i) include a disclaimer to users of all ages that a companion chatbot is not a human via a static, persistent disclosure and (ii) notify a user via a pop-up, or other communication if a pop-up is not feasible, that the user is not engaging with a human counterpart at the following intervals: 1. Upon login to the companion chatbot; 2. Every 90 minutes of sustained user engagement; and 3. When prompted by the user.
A covered entity shall implement reasonable systems and processes to: 2. Ensure that a chatbot does not make a materially false representation that it is a human being;
An operator shall (i) include a disclaimer to users of all ages that a chatbot is not a human via a static, persistent disclosure and (ii) notify a user via a pop-up that he is not engaging with a human counterpart at the following intervals: 1. Upon login to the chatbot; 2. Every 30 minutes of sustained user engagement; 3. When prompted by the user; and 4. When asked to provide advice legally regulated by a licensed industry, including medical, financial, or legal advice.
(a) No person shall engage in a commercial transaction or trade practice with a consumer in which the consumer is communicating or otherwise interacting with a chatbot that may mislead or deceive a reasonable person to believe the person is engaging with an actual human, whether or not any consumer is in fact misled or deceived, unless the consumer is notified in a clear and conspicuous manner that the consumer is communicating with a chatbot and not an actual human being. (c) A person who violates subsection (a) of this section commits an unfair and deceptive act in commerce in violation of section 2453 of this title.
(b) Disclosure. Chatbot providers shall provide clear, conspicuous, and explicit notice to users that users are interacting with a chatbot rather than a human prior to the chatbot generating any outputs, every hour thereafter, and each time a user prompts the chatbot about whether it is a real person subject to the following: (1) The text of this notice must appear in the same language as the one in which the user is interacting with the chatbot, in a font size easily readable by an average user, and no smaller than the largest font size of other text appearing on the interface on which the chatbot is provided. (2) This notice must be accessible to users with disabilities. (3) This notice must comply with rules adopted by the Attorney General pursuant to this subchapter.
If a user interacting with a companion chatbot could be reasonably misled to believe that the user is interacting with a human, an operator shall issue a clear and conspicuous notification to the individual indicating that the companion chatbot is artificially generated and not human. The text of the notification shall appear in the same language and in a size easily readable by the average viewer.
An operator shall, for a user that the operator knows is a minor, do the following: (1) immediately disclose to the user in a clear and conspicuous manner that the user is interacting with artificial intelligence; (2) provide a clear and conspicuous notification to the user at least every 30 minutes for continuing companion chatbot interactions that reminds the user to take a break and that the companion chatbot is artificially generated and not human;
(a) Except as provided in subsection (b) of this section, any health care provider that uses generative artificial intelligence to generate written or verbal patient communications relating to patient clinical information shall ensure that those communications include both of the following: (1) A disclaimer that indicates to the patient that the communication was generated by generative artificial intelligence. (A) For written communications involving physical and digital media, including letters, emails, and other occasional messages, the disclaimer shall appear prominently at the beginning of each communication. (B) For written communications involving continuous online interactions, including chat-based telehealth, the disclaimer shall be prominently displayed throughout the interaction. (C) For audio communications, the disclaimer shall be provided verbally at the start and end of the interaction. (D) For video communications, the disclaimer shall be prominently displayed throughout the interaction. (2) Clear instructions describing how a patient may contact a human health care provider; an employee of the health care facility, clinic, physician's office, or office of a group provider; or other appropriate person. (b) If a communication is generated by generative artificial intelligence and read and reviewed by a licensed human health care provider, the requirements of subsection (a) of this section shall not apply.
(a) A supplier of a mental health chatbot shall cause the mental health chatbot to clearly and conspicuously disclose to a Vermont user that the mental health chatbot is an artificial intelligence technology and not a human. (b) The disclosure described in subsection (a) of this section shall be made: (1) before the Vermont user may access the features of the mental health chatbot; (2) at the beginning of any interaction with the Vermont user if the Vermont user has not accessed the mental health chatbot within the previous seven days; and (3) any time a Vermont user asks or otherwise prompts the mental health chatbot about whether artificial intelligence is being used.
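The three triggers in the Vermont excerpt can be evaluated with one predicate at session start, given the timestamp of the user's last access. A sketch under the assumption that the host application stores that timestamp — the function signature is illustrative, not prescribed by the bill:

```python
from datetime import datetime, timedelta

REDISCLOSE_AFTER = timedelta(days=7)  # the seven-day window in paragraph (b)(2)

def must_disclose(last_access, now, user_asked=False, first_use=False):
    """True when the not-a-human disclosure must be shown to this user."""
    if first_use or user_asked:   # (b)(1) before first access; (b)(3) whenever asked
        return True
    if last_access is None:       # no recorded prior access: treat as a fresh start
        return True
    return now - last_access > REDISCLOSE_AFTER  # (b)(2): no access within seven days
```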
(1) A government agency that makes available an artificial intelligence system intended to interact with consumers must disclose to each consumer, before or at the time of interaction, that the consumer is interacting with an artificial intelligence system. The disclosure must be: (a) Clear and conspicuously posted; (b) Written in plain language; and (c) May not use a dark pattern. (2) The disclosure may be provided by using a hyperlink to direct a consumer to a separate web page. (3) An agency is required to make the disclosure under subsection (1) of this section regardless of whether it would be obvious to a reasonable consumer that the consumer is interacting with an artificial intelligence system. (4) For the purposes of this section, "artificial intelligence system" has the same meaning as in section 1 of this act.
(4) Not later than the time that a deployer uses a high-risk artificial intelligence system to interact with a consumer, the deployer shall disclose to the consumer that the consumer is interacting with an artificial intelligence system. At such time, the deployer shall also disclose to the consumer: (a) The purpose of such high-risk artificial intelligence system; (b) The nature of such system; (c) The nature of the consequential decision; (d) The contact information for the deployer; and (e) A description of the artificial intelligence system in plain language, which must include: (i) A description of the personal characteristics or attributes that such system will measure or assess; (ii) The method by which the system measures or assesses such attributes or characteristics; (iii) How such attributes or characteristics are relevant to the consequential decisions for which the system should be used; (iv) Any human components of such system; and (v) How any automated components of such system are used to inform such consequential decisions.
(1) An operator must provide a clear and conspicuous disclosure that an AI companion chatbot is artificially generated and not human. (2) The notification described in subsection (1) of this section must be provided: (a) At the beginning of the interaction; and (b) At least every three hours during continued interaction.
(3) The operator must implement reasonable measures to prohibit and prevent AI companion chatbots from claiming to be human, including when asked by the person interacting with the AI chatbot, and from otherwise generating any output that refutes or conflicts with the disclosure described in subsection (1) of this section.
(1) If the operator knows that the user of an AI companion chatbot is a minor, or if the AI companion chatbot is directed to minors, the operator shall: (a) Issue a clear and conspicuous notification indicating that the chatbot is artificially generated and not human; (2) The notification described in subsection (1)(a) of this section must be provided: (a) At the beginning of the interaction; and (b) At least every hour during continuous interaction. (3) The operator must implement reasonable measures to prohibit and prevent AI companion chatbots from claiming to be human, including when asked by the person interacting with the AI chatbot, and from otherwise generating any output that refutes or conflicts with the notification described in subsection (1) of this section.
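Two implementation duties recur in the excerpts above: select a re-notification cadence based on whether the user is a minor (or the product is directed to minors), and guard outputs so the chatbot never asserts that it is human. A crude sketch of both — the names and phrase list are purely illustrative, and a production guard would need real output classification, not substring matching:

```python
MINOR_INTERVAL_HOURS = 1   # hourly cadence when the user is a minor
ADULT_INTERVAL_HOURS = 3   # three-hour cadence otherwise

def notice_interval_hours(user_is_minor, directed_to_minors=False):
    """Pick the continuous-interaction re-notification cadence."""
    if user_is_minor or directed_to_minors:
        return MINOR_INTERVAL_HOURS
    return ADULT_INTERVAL_HOURS

HUMAN_CLAIMS = ('i am a human', 'i am a real person', "i'm human")  # illustrative

def guard_reply(reply):
    """Block replies that assert humanity, replacing them with a correction."""
    if any(claim in reply.lower() for claim in HUMAN_CLAIMS):
        return 'I am an AI chatbot, not a human.'
    return reply
```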
(1) A government agency that makes available an artificial intelligence system intended to interact with consumers must disclose to each consumer, before or at the time of interaction, that the consumer is interacting with an artificial intelligence system. The disclosure must be: (a) Clear and conspicuously posted; (b) Written in plain language; and (c) May not use a dark pattern. (2) The disclosure may be provided by using a hyperlink to direct a consumer to a separate web page. (3) A person is required to make the disclosure under subsection (1) of this section regardless of whether it would be obvious to a reasonable consumer that the consumer is interacting with an artificial intelligence system.
(c) An operator or licensed professional shall provide a clear and conspicuous notification to a user at the beginning of any AI companion interaction which need not exceed once per day and at least every three hours for continuing AI companion interactions which states either verbally or in writing that the user is not communicating with a human.
If a reasonable person interacting with a companion chatbot would be misled to believe that the person is interacting with a human, an operator shall issue a clear and conspicuous notification indicating that the companion chatbot is artificially generated and not human.
An operator shall, for a user that the operator knows is a minor, do all of the following: (1) Disclose to the user that the user is interacting with artificial intelligence.
An operator shall, for a user that the operator knows is a minor, do all of the following: ... (2) Provide by default a clear and conspicuous notification to the user at least every three hours for continuing companion chatbot interactions that reminds the user to take a break and that the companion chatbot is artificially generated and not human.