T-01
Transparency & Disclosure
AI Identity Disclosure
Applies to: Developer, Deployer, Professional, Government Sector, Chatbot
Bills — Enacted: 8 unique bills
Bills — Proposed: 81
Last Updated: 2026-03-29
Core Obligation

Users must be informed when they are interacting with an AI system rather than a human. Some jurisdictions impose initial disclosure unconditionally; others only when a reasonable person could be misled. Periodic re-disclosure requirements apply primarily to companion and extended-session AI. On-demand disclosure requires the system to accurately identify itself as AI whenever a user asks.
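The core obligation reduces to three recurring mechanisms: an initial notice, a periodic reminder, and a truthful answer when the user asks. A minimal TypeScript sketch of that session logic follows. The interval constant, notice wording, and keyword-based question detector are hypothetical placeholders (the intervals in the bills below range from 30 minutes to three hours), and the sub-obligation labels in the comments follow the apparent mapping of T-01.1 to initial, T-01.2 to periodic, and T-01.3 to on-demand disclosure.

```typescript
// Illustrative only: the three disclosure patterns named in the core obligation.
// PERIODIC_INTERVAL_MS, AI_IDENTITY_NOTICE, and the regex are assumptions,
// not language taken from any particular bill.

const PERIODIC_INTERVAL_MS = 3 * 60 * 60 * 1000; // placeholder three-hour interval

const AI_IDENTITY_NOTICE =
  "You are interacting with an AI system, not a human.";

interface SessionState {
  startedAt: number;        // epoch ms when the session began
  lastDisclosureAt: number; // epoch ms of the most recent AI-identity notice
}

// T-01.1 Initial disclosure: emit the notice before the first model output.
function beginSession(now: number): { state: SessionState; notice: string } {
  return {
    state: { startedAt: now, lastDisclosureAt: now },
    notice: AI_IDENTITY_NOTICE,
  };
}

// T-01.2 Periodic re-disclosure: repeat the notice once the interval elapses.
function periodicNotice(state: SessionState, now: number): string | null {
  if (now - state.lastDisclosureAt >= PERIODIC_INTERVAL_MS) {
    state.lastDisclosureAt = now;
    return AI_IDENTITY_NOTICE;
  }
  return null;
}

// T-01.3 On-demand disclosure: answer truthfully when the user asks whether
// the system is human. The keyword match is a stand-in for a real classifier.
function onDemandNotice(userMessage: string): string | null {
  const asksIfHuman = /are you (a )?(human|real person|bot)|am i talking to a (human|person)/i;
  return asksIfHuman.test(userMessage) ? AI_IDENTITY_NOTICE : null;
}
```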

Sub-Obligations (3): T-01.1 initial disclosure · T-01.2 periodic re-disclosure · T-01.3 on-demand disclosure
Bills That Map This Requirement (89 bills)
Bill | Status | Sub-Obligations | Section
Pending 2026-10-01
T-01.1
Section 3(1)-(2)
Plain Language
Therapeutic chatbots that are made available to minors under the Section 3 exception must provide a clear and conspicuous disclaimer — verbally or in writing — at the beginning of each interaction stating that the chatbot is an AI and not a licensed professional. Additionally, the chatbot must not be marketed or designated as a substitute for a human professional. These are conditions precedent to the therapeutic chatbot exception; failure to meet them means the chatbot cannot be made available to minors under Section 3.
(1) The therapeutic AI chatbot provides a clear and conspicuous disclaimer, verbally or in writing, at the beginning of each interaction that the AI chatbot is an artificial intelligence and not a licensed professional. (2) The AI chatbot is not marketed or designated as a substitute for a human professional.
Pending 2026-10-01
T-01.1, T-01.2
Section 2(a)-(b)
Plain Language
Any person using an AI chatbot to engage consumers in commercial transactions must notify the consumer — verbally or in writing — that they are communicating with a computer, not a human, when the consumer could reasonably believe they are talking to a person. The notice must be provided both at the beginning of each interaction and at regular intervals during continuing interactions. The bill does not define what constitutes a 'regular interval,' leaving that to future interpretation. Failure to comply is classified as an unfair or deceptive trade practice under Alabama law. The disclosure obligation is conditional — it applies only where a consumer 'may reasonably believe' they are engaging with a human.
(a) A person that engages in a commercial transaction or trade practice with a consumer through an AI chatbot, in textual or aural conversation, where the consumer may reasonably believe the consumer is engaging with a human, shall notify the consumer verbally or in writing: (1) At the beginning of each interaction that the consumer is communicating with a computer, not a human; and (2) At a regular interval for continuing interactions that the consumer is communicating with computer, not a human. (b) Failure to comply with the provisions of this act is an unfair or deceptive trade practice.
Pending 2027-10-01
T-01.1, T-01.2
A.R.S. § 18-802(A)
Plain Language
Operators must clearly and conspicuously disclose to minor account holders that they are interacting with an AI. The operator may choose between two disclosure methods: (1) a persistent visible disclaimer that remains on-screen throughout the interaction, or (2) a disclosure at the beginning of each session plus at least every three hours in a continuous session. This obligation is unconditional for minors — it applies regardless of whether the AI might be mistaken for a human. The minor definition is knowledge-based: it applies only when the operator has actual knowledge or reasonable certainty the user is under 18.
A. Each operator shall clearly and conspicuously disclose to a minor account holder in either of the following ways that the minor is interacting with a conversational AI service: 1. As a persistent visible disclaimer. 2. At the beginning of each session and appearing at least every three hours in a continuous conversational AI service interaction.
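A minimal sketch of the two compliance paths summarized above, assuming a hypothetical `knownMinor` flag standing in for the actual-knowledge or reasonable-certainty determination; the field and function names are illustrative, not statutory terms.

```typescript
// Sketch of the two alternative disclosure methods for known minors.
type MinorDisclosureMode = "persistent-disclaimer" | "session-plus-interval";

const THREE_HOURS_MS = 3 * 60 * 60 * 1000;

interface MinorDisclosureConfig {
  persistentBanner: boolean;   // path (1): always-visible disclaimer
  sessionStartNotice: boolean; // path (2): notice at the start of each session
  repeatEveryMs?: number;      // path (2): repeated during continuous sessions
}

function configureMinorDisclosure(
  knownMinor: boolean,
  mode: MinorDisclosureMode
): MinorDisclosureConfig {
  if (!knownMinor) {
    // Outside the minor-specific rule; the general reasonable-person rule
    // in subsection (E) may still apply.
    return { persistentBanner: false, sessionStartNotice: false };
  }
  return mode === "persistent-disclaimer"
    ? { persistentBanner: true, sessionStartNotice: false }
    : { persistentBanner: false, sessionStartNotice: true, repeatEveryMs: THREE_HOURS_MS };
}
```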
Pending 2027-10-01
T-01.1
A.R.S. § 18-802(E)
Plain Language
For all users (not just minors), if a reasonable person could be misled into believing they are interacting with a human, the operator must clearly and conspicuously disclose that the service is AI. This is a conditional trigger — if the conversational AI service clearly presents itself as AI from the outset, no disclosure is required.
E. If a reasonable person would be misled to believe that the person is interacting with a human, an operator shall clearly and conspicuously disclose that the conversational AI service is artificial intelligence.
Pending 2026-01-01
T-01.1, T-01.2, T-01.3
A.R.S. § 44-1383.02(B)
Plain Language
Before any chatbot output, the provider must give a clear, conspicuous, and explicit notice that the user is interacting with a chatbot, not a human. This notice is unconditional — it must appear at the beginning of every communication, be repeated every hour during continuing interactions, and be provided each time a user asks whether they are talking to a natural person. The notice must be in the same language as the chatbot and in a font size at least as large as the largest font used in the chatbot's other communications. The Attorney General will adopt rules specifying the form and content of the notice, including a template.
A chatbot provider shall provide clear, conspicuous and explicit notice to a user that the user is interacting with a chatbot rather than a natural person before the chatbot may generate any output data. The chatbot provider shall include this notice at the beginning of each chatbot communication with a user, every hour thereafter and each time a user asks whether the chatbot is a natural person. The text of the notice: 1. shall be written in the same language that the chatbot communicates with the user and shall appear in a font size that is easily readable by an average user and is not smaller than the largest font size used for other chatbot communications. 2. must comply with the rules adopted by the attorney general pursuant to section 44-1383.03.
Pending 2027-01-01
T-01.1, T-01.3
Bus. & Prof. Code § 22626(b)-(c)
Plain Language
When a reasonable person could be misled into thinking they are speaking with a human, operators must provide a clear, conspicuous disclosure that the customer service chatbot is AI-generated and not human. The disclosure must plainly state the system is not a human being, be presented in ordinary consumer-understandable language, remain accessible throughout the entire interaction, and — for voice-based interfaces — be audible and repeatable on request. This is a conditional trigger: if the chatbot is obviously non-human, no disclosure is required. However, the 'readily accessible throughout the interaction' and 'repeated upon request' requirements effectively function as ongoing and on-demand disclosure obligations once triggered.
(b) An operator that makes a customer service chatbot available to a person in this state shall provide a clear and conspicuous disclosure that the customer service chatbot is artificially generated and not human if a reasonable person interacting with the customer service chatbot would be misled to believe that the person is interacting with a human. (c) The disclosure required by subdivision (b) shall do all the following: (1) Inform the person that they are interacting with a customer service chatbot, artificial intelligence system, or similar automated system, and that the system is not a human being. (2) For audio-only or voice-based interfaces, be provided in an audible form and repeated upon the person's request. (3) Be readily accessible throughout the customer interaction. (4) Be presented in plain language that is understandable to an ordinary consumer.
Pending 2027-07-01
T-01.1, T-01.2
Bus. & Prof. Code § 22612(d)(4)
Plain Language
Operators must implement an AI identity disclosure mechanism specifically for child users that (1) notifies the child they are interacting with or receiving content from an AI system, (2) periodically reinforces this notice during extended interactions, and (3) presents the notice in child-appropriate language and format. Unlike the general companion chatbot disclosure under SB 243 (§ 22602(a)), which is conditional on whether a reasonable person would be misled, this obligation appears unconditional for child users. The bill does not specify a minimum interval for periodic reinforcement.
(4) A mechanism for providing notice to a child user that the child is interacting with, or receiving content generated by, an artificial intelligence system that meets both of the following criteria: (A) The notice is reinforced periodically during extended interactions. (B) The notice is presented in language and a format appropriate to a child.
Pending 2027-01-01
T-01.1, T-01.2, T-01.3
C.R.S. § 6-1-1708(1)(a)
Plain Language
When an operator knows or has reasonable certainty that a user is a minor (under 18), it must clearly and conspicuously disclose that the user is interacting with AI, not a human. The statute provides three alternative disclosure mechanisms — any one satisfies the requirement: (1) a persistent visible disclaimer displayed throughout the interaction; (2) a notice at the start of each interaction plus a reminder at least every three hours during continuous sessions; or (3) an on-demand response when the user asks whether the system is human or sentient. Unlike the general consumer disclosure in § 6-1-1708(2), this obligation is unconditional — it applies regardless of whether a reasonable person would be misled.
On and after January 1, 2027, if an operator knows or has reasonable certainty that a user of a conversational artificial intelligence service is a minor, the operator shall: (a) Clearly and conspicuously disclose to the minor user that the minor user is interacting with artificial intelligence that is artificially generated and not human. The disclosure must be: (I) A persistent visible disclaimer; (II) Provided at the beginning of each interaction with a conversational artificial intelligence service and must appear at least once every three hours in a continuous conversational artificial intelligence service interaction; or (III) Provided in response to user prompts regarding whether the conversational artificial intelligence service is human or artificially sentient;
Pending 2027-01-01
T-01.1, T-01.2, T-01.3
C.R.S. § 6-1-1708(2)
Plain Language
For all users (not just minors), if a reasonable person could be misled into thinking they are interacting with a human, the operator must clearly and conspicuously disclose that the system is AI. Unlike the minor-specific disclosure in § 6-1-1708(1)(a) — where the three methods are alternatives — the general consumer disclosure requires all three simultaneously: (1) disclosure at the beginning of each day's first interaction; (2) a reminder at least every three hours during continuous sessions; and (3) an on-demand response when the user asks if the system is human or sentient. The trigger is conditional — if the system clearly presents as AI and no reasonable person would be misled, the obligation is not activated.
On and after January 1, 2027, if a reasonable person would be misled to believe that the person is interacting with a human in an interaction with a conversational artificial intelligence service, an operator shall clearly and conspicuously disclose to the person that the conversational artificial intelligence service is artificial intelligence. The disclosure must: (a) Be provided at the beginning of a user's first interaction with a conversational artificial intelligence service for each day of interaction; (b) Appear at least once every three hours in a continuous conversational artificial intelligence service interaction; and (c) Be provided in response to user prompts regarding whether the conversational artificial intelligence service is human or artificially sentient.
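A small sketch contrasting the two Colorado provisions described above: for known minors any one listed method suffices, while for general users, once the reasonable-person trigger fires, the daily-start notice, three-hour reminder, and on-demand response are all required. The field names are illustrative.

```typescript
// Which disclosure mechanisms an operator actually has in place.
interface DisclosureMethods {
  persistentDisclaimer: boolean; // always-visible disclaimer
  startNotice: boolean;          // notice at the start of each interaction/day
  threeHourReminder: boolean;    // repeated at least every three hours of continuous use
  onDemandResponse: boolean;     // truthful reply when asked if the system is human/sentient
}

// Minors (§ 6-1-1708(1)(a)): the listed mechanisms are alternatives.
function satisfiesMinorRule(m: DisclosureMethods): boolean {
  return (
    m.persistentDisclaimer ||
    (m.startNotice && m.threeHourReminder) ||
    m.onDemandResponse
  );
}

// General users (§ 6-1-1708(2)): only triggered when a reasonable person
// would be misled, but then all three requirements apply cumulatively.
function satisfiesGeneralRule(
  m: DisclosureMethods,
  reasonablePersonMisled: boolean
): boolean {
  if (!reasonablePersonMisled) return true;
  return m.startNotice && m.threeHourReminder && m.onDemandResponse;
}
```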
Enacted 2026-06-30
T-01.1
C.R.S. § 6-1-1704(1)
Plain Language
Deployers or developers who make available an AI system intended to interact with consumers must disclose to each consumer that they are interacting with an AI system. This is an unconditional disclosure obligation — it does not depend on whether a reasonable person would be misled. It applies broadly to any AI system intended for consumer interaction, not just high-risk systems. Exceptions are provided in subsection (2) of the original statute. This trigger is broader than in states such as California, whose SB 243 conditions disclosure on a reasonable-person misleading standard.
(1) On and after June 30, 2026, and except as provided in subsection (2) of this section, a deployer or other developer that deploys, offers, sells, leases, licenses, gives, or otherwise makes available an artificial intelligence system that is intended to interact with consumers shall ensure the disclosure to each consumer who interacts with the artificial intelligence system that the consumer is interacting with an artificial intelligence system.
Pending 2026-10-01
T-01.1
Sec. 3(a)-(b)
Plain Language
Deployers must disclose to every applicant or employee who interacts with an automated employment-related decision process that they are interacting with an automated system. This obligation is conditional — no disclosure is required if a reasonable person would find it obvious they are interacting with an automated system. The developer may contractually assume this obligation under Section 2(b).
(a) Except as provided in subsection (b) of this section and subsection (b) of section 2 of this act, a deployer who deploys an automated employment-related decision process that is intended to interact with an applicant for employment or employee in the state shall ensure that it is disclosed to each such applicant or employee who interacts with such process that such applicant or employee is interacting with an automated employment-related decision process. (b) No disclosure shall be required under subsection (a) of this section under circumstances in which a reasonable person would deem it obvious that such person is interacting with an automated employment-related decision process.
Pending 2026-07-01
T-01.1, T-01.2
Fla. Stat. § 501.9984(2)(a)-(b)
Plain Language
For all minor account holders, companion chatbot platforms must: (1) unconditionally disclose that the user is interacting with AI, and (2) provide a clear and conspicuous notification at the beginning of each interaction and at least every hour during continuing interactions reminding the minor to take a break and that the chatbot is AI-generated, not human. The hourly notification is a default setting — it applies automatically without requiring the minor or parent to enable it. Compare to CA SB 243's three-hour interval; Florida's one-hour interval is more frequent.
In connection to all accounts or identifiers held by account holders who are minors, the companion chatbot platform shall do all of the following: (a) Disclose to the account holder that he or she is interacting with artificial intelligence. (b) Provide by default a clear and conspicuous notification to the account holder, at the beginning of companion chatbot interactions and at least once every hour during continuing interactions, reminding the minor to take a break and that the companion chatbot is artificially generated and not human.
Pending 2026-07-01
T-01.1, T-01.2
Fla. Stat. § 501.9985(1)
Plain Language
All bot operators must display a pop-up or other prominent notification at the start of every user interaction, and at least hourly during continuing interactions, informing the user they are not speaking with a human. For non-screen interactions (e.g., voice), the operator must otherwise inform the user. This applies to all bots — not just companion chatbots — and to all users regardless of age. The only exemption is bots used solely by employees for internal business operations. Operators may demonstrate compliance during a cure period by showing persistent and conspicuous identity indicators conforming to NIST AI RMF and ISO 42001.
At the beginning of an interaction between a user and a bot, and at least once every hour during the interaction, an operator shall display a pop-up message or other prominent notification notifying the user or, if the interaction is not through a device with a screen, otherwise inform the user, that he or she is not engaging in dialogue with a human counterpart. This section does not apply to a bot that is used solely by employees within a business for its internal operational purposes.
Pending 2026-07-01
T-01.1, T-01.2
Fla. Stat. § 501.1739(7)
Plain Language
Operators must display an on-screen pop-up at the start of every interaction telling the user they are not speaking with a human. The pop-up must repeat at least every 60 minutes during a continuing interaction. Unlike some jurisdictions that only require disclosure when a reasonable person could be misled, this obligation is unconditional — it applies to every user in every interaction, regardless of how obviously AI-like the chatbot may be. The pop-up is dismissible (the user can resolve it by interacting with it), but it must appear at the prescribed intervals.
(7) At the beginning of any interaction between a user and a companion AI chatbot, and no less frequently than every 60 minutes thereafter during such interaction, an operator shall display a pop-up that notifies users that they are not engaging in dialogue with a human counterpart.
Failed 2026-07-01
T-01.1, T-01.2
Fla. Stat. § 501.1739(7)
Plain Language
Operators must display a pop-up notification at the start of every companion AI chatbot interaction and at least every 60 minutes during continuing interactions, informing the user they are not communicating with a human. This is unconditional — it applies to all users regardless of whether a reasonable person would be misled. The pop-up must be a visible on-screen notification that the user can dismiss by interacting with it. Compare to CA SB 243, which requires three-hour periodic reminders for minors; FL SB 1344 imposes a stricter 60-minute interval for all users regardless of age. Unlike CA SB 243's conditional trigger for adults (only when a reasonable person could be misled), this disclosure is mandatory for every interaction.
(7) At the beginning of any interaction between a user and a companion AI chatbot, and no less frequently than every 60 minutes thereafter during such interaction, an operator shall display a pop-up that notifies users that they are not engaging in dialogue with a human counterpart.
Failed 2026-07-01
T-01.1, T-01.2
Fla. Stat. § 501.9984(2)(a)-(b)
Plain Language
For all minor account holders, the platform must unconditionally disclose that the user is interacting with AI, and must display a clear, conspicuous reminder at the start and at least every hour during ongoing interactions that the chatbot is AI-generated and that the minor should take a break. The hourly reminder interval is more frequent than California SB 243's every-three-hours floor, making this a stricter periodic disclosure requirement. Both obligations are unconditional — they apply regardless of whether the minor could be misled.
In connection to all accounts or identifiers held by account holders who are minors, the companion chatbot platform shall do all of the following: (a) Disclose to the account holder that he or she is interacting with artificial intelligence. (b) Provide by default a clear and conspicuous notification to the account holder, at the beginning of companion chatbot interactions and at least once every hour during continuing interactions, reminding the minor to take a break and that the companion chatbot is artificially generated and not human.
Failed 2026-07-01
T-01.1, T-01.2
Fla. Stat. § 501.9985(1)
Plain Language
All bot operators must display a pop-up or other prominent notification at the start of every user interaction — and at least once every hour during continuing interactions — informing the user they are not communicating with a human. For non-screen interactions (e.g., voice), the operator must otherwise inform the user. This applies to all bots, not just companion chatbots, making it a broad AI identity disclosure obligation. Internal-use-only bots used solely by employees for business operational purposes are exempt. The hourly reminder requirement applies to all users regardless of age, which is more expansive than California SB 243 (which imposes periodic reminders only for known minors). During enforcement, operators may present evidence of NIST AI RMF/ISO 42001-aligned identity indicators and disclosures as mitigating factors.
At the beginning of an interaction between a user and a bot, and at least once every hour during the interaction, an operator shall display a pop-up message or other prominent notification notifying the user or, if the interaction is not through a device with a screen, otherwise inform the user, that he or she is not engaging in dialogue with a human counterpart. This section does not apply to a bot that is used solely by employees within a business for its internal operational purposes.
Pending 2025-07-01
T-01.1
O.C.G.A. § 10-16-11(a)-(b)
Plain Language
Any deployer or developer that makes available an AI system intended to interact with consumers must disclose to each interacting consumer that they are interacting with an AI system. This is a per-interaction disclosure obligation — not a one-time or pre-engagement notice. The disclosure is excused only when it would be obvious to a reasonable person that they are interacting with AI. This provision applies to all AI systems intended for consumer interaction, not just automated decision systems — it uses the broader 'artificial intelligence system' definition.
(a) Except as provided in subsection (b) of this Code section, a deployer or other developer that deploys, offers, sells, leases, licenses, gives, or otherwise makes available an artificial intelligence system that is intended to interact with consumers shall ensure the disclosure to each consumer who interacts with the artificial intelligence system that the consumer is interacting with an artificial intelligence system. (b) Disclosure is not required under subsection (a) of this Code section under circumstances in which it would be obvious to a reasonable person that the person is interacting with an artificial intelligence system.
Passed 2025-07-01
T-01.1, T-01.2
O.C.G.A. § 39-5-6(b)
Plain Language
Operators must proactively and unconditionally disclose to all minor account holders that they are interacting with AI, not a human. Compliance may be achieved in one of two ways: (1) a constantly visible on-screen disclaimer, or (2) a disclosure at the beginning of each session plus a reminder at least every three hours during continuous interactions. This is not conditional on whether the minor could be misled — disclosure is mandatory for all minor accounts. Compare to subsection (e), which imposes a separate conditional disclosure for all users.
An operator shall clearly and conspicuously disclose to a minor account holder that he or she is interacting with a conversational AI service as opposed to a natural person: (1) With a constantly visible disclaimer; or (2) At the beginning of each session and appearing at least every three hours in a continuous conversational AI service interaction.
Passed 2025-07-01
T-01.1
O.C.G.A. § 39-5-6(e)
Plain Language
For all users (not just minors), if the conversational AI service could reasonably mislead someone into thinking they are talking to a human, the operator must display a clear and conspicuous disclosure that the service is not a natural person. Unlike subsection (b)'s unconditional obligation for minors, this general disclosure is triggered only when a reasonable person could be misled. If the AI clearly presents as non-human, no disclosure is required under this provision.
If an individual could reasonably be expected to be misled to believe he or she was interacting with a natural person, an operator shall clearly and conspicuously disclose that the conversational AI service is not a natural person.
Pending 2028-07-01
T-01.1
HRS § 321-__ (Patient interaction; disclosure)(a)-(c)
Plain Language
Health care providers that deploy AI systems to interact with patients via remote communication (telehealth, videoconference, electronic messaging, etc.) must disclose to the patient or authorized representative that they are interacting with AI. The disclosure must be clear and conspicuous, provided before or at the time of interaction (or as soon as reasonably possible in emergencies), and must include either a disclaimer that the communication was AI-generated or that it was AI-generated and reviewed by a natural person. It must also include clear instructions for how the patient can contact a human health care provider or appropriate natural person directly.
(a) Any health care provider that uses or makes available for use an artificial intelligence system intended to interact with patients by means of remote communication shall disclose to the patient or the patient's authorized representative, as applicable, that the person is interacting with artificial intelligence.
(b) The disclosure shall be made before or at the time of the interaction; provided that in the case of an emergency, the disclosure shall be made as soon as reasonably possible.
(c) The disclosure shall be clear and conspicuous, and include:
(1) A disclaimer that:
(A) The communication was generated by artificial intelligence; or
(B) The communication was generated by artificial intelligence and reviewed by a health care provider who is a natural person or a natural person retained by the health care provider; and
(2) Clear instructions on how the patient can directly contact a health care provider who is a natural person, an employee of the health care provider, or other appropriate natural person.
Pending 2027-07-01
T-01.1, T-01.2
§ 554J.2(1)
Plain Language
When the operator knows or is reasonably certain a user is under 18, it must clearly and conspicuously disclose that the user is interacting with AI. The operator may satisfy this through either (a) a persistent visible disclaimer always on screen, or (b) a disclaimer at the beginning of each interaction plus a recurring disclaimer at least every three hours of continuous use. This is an unconditional obligation for minor account holders — no reasonable-person trigger is required. The operator has flexibility to choose between the two disclosure methods.
1. An operator shall clearly and conspicuously disclose to a minor account holder that the minor account holder is interacting with artificial intelligence through any of the following: a. A persistent visible disclaimer. b. All of the following: (1) A disclaimer that appears at the beginning of each interaction between the operator's conversational AI service and a minor account holder. (2) A disclaimer that appears at least once every three hours of continuous interaction between the operator's conversational AI service and a minor account holder.
Pending 2027-07-01
T-01.1
§ 554J.3
Plain Language
For all users (not just minors), operators must display a persistent visible disclaimer identifying the conversational AI service as AI, but only when a reasonable person would otherwise believe they are interacting with a human. Unlike the minor-specific obligation in § 554J.2(1), this is a conditional trigger — if no reasonable person would be misled, no disclosure is required. The disclosure must be persistent and visible, meaning it must remain on screen during the interaction rather than appearing only once.
An operator shall clearly and conspicuously disclose using a persistent visible disclaimer that the operator's conversational AI service is artificial intelligence if a reasonable individual interacting with the conversational AI service would believe that the individual is interacting with a human.
Pending 2025-07-01
T-01.1, T-01.2
§ 554J.2(1)(c)-(d)
Plain Language
Deployers must provide a clear, conspicuous disclosure at the start of every interaction that the chatbot is AI and is not a licensed medical, legal, financial, or mental health professional. This disclosure must be repeated every three hours during continuous interactions. Unlike some jurisdictions, this is unconditional — it applies regardless of whether a reasonable person would be misled. The disclosure combines AI identity disclosure with an anti-professional-impersonation notice in a single mandatory statement.
c. Clearly and conspicuously disclose each time the deployer's public-facing chatbot begins an interaction with a user that the public-facing chatbot is artificial intelligence and is not licensed as a medical, legal, financial, or mental health professional. d. At each three-hour interval of the deployer's public-facing chatbot continuously interacting with a user, clearly and conspicuously disclose the public-facing chatbot is artificial intelligence and is not licensed as a medical, legal, financial, or mental health professional.
Pending 2025-07-01
T-01.1, T-01.2
§ 554J.2(2)(a)
Plain Language
Every chatbot must clearly and conspicuously disclose to the user that it is a chatbot and not a human. This disclosure must occur at two points: (1) at the beginning of each conversation, and (2) at recurring thirty-minute intervals during ongoing interactions. This is an unconditional obligation — the disclosure is required regardless of whether a reasonable person would be misled. The thirty-minute interval is more frequent than some comparable state laws (e.g., California SB 243's three-hour interval).
Each chatbot shall meet all of the following requirements: a. Clearly and conspicuously disclose that the chatbot is a chatbot and not a human being at the beginning of each conversation and at thirty-minute intervals.
Pending 2025-07-01
T-01.3
§ 554J.2(2)(b)
Plain Language
Chatbots must be programmed so that they cannot claim to be human and cannot respond deceptively when a user asks whether the chatbot is a human. This is both a proactive design requirement (the chatbot must be prevented from spontaneously claiming human identity) and an on-demand disclosure obligation (the chatbot must truthfully identify itself as non-human when asked). The 'respond deceptively' standard is broader than merely requiring a truthful answer — it prohibits evasive or misleading responses as well.
Be programmed to prevent the chatbot from claiming to be a human or respond deceptively when asked by a user if the chatbot is a human.
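A minimal sketch of how the design-level requirement described above might be wired into a response pipeline: detect identity questions and answer them truthfully, and strip spontaneous claims of humanity from drafted replies. The regular expressions and canned wording are crude placeholders for whatever detection and phrasing a real system would use.

```typescript
// Illustrative identity guard; patterns and replacement text are assumptions.
const ASKS_IF_HUMAN = /are you (a )?(human|person|real)|is this a (bot|human)/i;
const CLAIMS_TO_BE_HUMAN = /i('| a)?m (a )?(human|real person)|i am not (a )?(bot|an ai)/i;

function guardIdentityResponse(userMessage: string, draftReply: string): string {
  // Reactive obligation: if the user asks, answer directly and truthfully.
  if (ASKS_IF_HUMAN.test(userMessage)) {
    return "I am an AI chatbot, not a human.";
  }
  // Proactive prohibition: replace any spontaneous claim of humanity.
  if (CLAIMS_TO_BE_HUMAN.test(draftReply)) {
    return (
      "To be clear: I am an AI chatbot, not a human. " +
      draftReply.replace(CLAIMS_TO_BE_HUMAN, "I am an AI chatbot")
    );
  }
  return draftReply;
}
```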
Pending 2026-07-01
T-01.1, T-01.2
§ 554J.3(1)–(2)
Plain Language
Every AI chatbot accessible to Iowa users must provide a clear, conspicuous, and easily understood disclosure stating three things: (1) it is artificial intelligence, (2) it is not a human, and (3) it is not a substitute for professional mental health care. This disclosure must appear at three mandatory times: before the chatbot's first response to the user, at regular intervals during continuous interaction, and whenever the chatbot generates a response related to emotional well-being, mental health, or self-harm. The bill does not specify a numeric interval (e.g., every three hours) — the Department of HHS is directed to adopt rules on acceptable disclosure formats. The mental-health-triggered disclosure in subsection 2(c) creates an additional, context-specific disclosure obligation beyond the periodic reminder.
1. Each artificial intelligence chatbot accessible to a user in this state shall explicitly disclose in clear, conspicuous, and easily understood language that the artificial intelligence chatbot is artificial intelligence, is not a human, and is not a substitute for professional mental health care. 2. A disclosure required under this section shall appear at all of the following times: a. At the beginning of the artificial intelligence chatbot's interaction with a user prior to providing the user with a response to user input. b. At regular intervals during a user's continuous interaction with the artificial intelligence chatbot. c. When the artificial intelligence chatbot generates a response related to emotional well-being, mental health, or self-harm.
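A short sketch of the three trigger points listed above. The recurring interval is a placeholder because the bill leaves the acceptable formats to rulemaking, and the keyword test stands in for a real topic classifier.

```typescript
// Placeholder interval: the bill specifies "regular intervals" without a number.
const PLACEHOLDER_INTERVAL_MS = 60 * 60 * 1000;
// Crude substring check standing in for a mental-health topic classifier.
const MENTAL_HEALTH_TOPIC = /self[- ]harm|suicid|depress|anxiet|mental health|emotional/i;

function disclosureDue(
  isFirstResponse: boolean,       // 2(a): before the chatbot's first response
  msSinceLastDisclosure: number,  // 2(b): at regular intervals during continuous use
  draftReply: string              // 2(c): whenever the reply concerns well-being or self-harm
): boolean {
  return (
    isFirstResponse ||
    msSinceLastDisclosure >= PLACEHOLDER_INTERVAL_MS ||
    MENTAL_HEALTH_TOPIC.test(draftReply)
  );
}
```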
Pending 2027-07-01
T-01.1, T-01.2
§ 554J.2(1)
Plain Language
Operators must clearly and conspicuously disclose to minor account holders that they are interacting with AI. The operator may satisfy this through either (a) a persistent visible disclaimer that remains on screen, or (b) a disclaimer at the beginning of each interaction plus a recurring disclaimer at least every three hours during continuous sessions. Unlike the general consumer disclosure in § 554J.3, this minor-specific obligation is unconditional — it applies regardless of whether a reasonable person would be misled.
1. An operator shall clearly and conspicuously disclose to a minor account holder that the minor account holder is interacting with artificial intelligence through any of the following: a. A persistent visible disclaimer. b. All of the following: (1) A disclaimer that appears at the beginning of each interaction between the operator's conversational AI service and a minor account holder. (2) A disclaimer that appears at least once every three hours of continuous interaction between the operator's conversational AI service and a minor account holder.
Pending 2027-07-01
T-01.1, T-01.2
§ 554J.3
Plain Language
If a reasonable person interacting with the conversational AI service would believe they are talking to a human, the operator must disclose that the service is AI. The disclosure must be made via either a persistent visible disclaimer or a disclaimer appearing at least every three hours of continuous interaction. This is a conditional obligation — it triggers only when a reasonable person could be misled. Compare to the unconditional minor-specific disclosure in § 554J.2(1), which applies regardless of whether a reasonable person would be misled.
An operator shall clearly and conspicuously disclose using a persistent visible disclaimer, or a disclaimer that appears after every three hours of continuous interaction with the operator's conversational AI service, that the operator's conversational AI service is artificial intelligence if a reasonable individual interacting with the conversational AI service would believe that the individual is interacting with a human.
Pending 2025-07-01
T-01.1, T-01.2
§ 554J.2(2)(a)
Plain Language
Every chatbot must provide a clear and conspicuous disclosure that it is a chatbot and not a human being. This disclosure must appear at two points: (1) at the beginning of each conversation, and (2) at thirty-minute intervals during ongoing conversations. This is an unconditional requirement — it applies regardless of whether a reasonable person would be misled. The thirty-minute re-disclosure interval is more frequent than some comparable statutes (e.g., CA SB 243's three-hour interval).
Each chatbot shall meet all of the following requirements: a. Clearly and conspicuously disclose that the chatbot is a chatbot and not a human being at the beginning of each conversation and at thirty-minute intervals.
Pending 2025-07-01
T-01.3
§ 554J.2(2)(b)
Plain Language
Chatbots must be programmed so they cannot claim to be human and cannot respond deceptively when a user directly asks whether the chatbot is a human. This is both a proactive prohibition (no affirmative claims of humanity) and a reactive on-demand obligation (truthful response when asked). The 'programmed to prevent' language suggests a design-level requirement, not merely a policy-level instruction.
Be programmed to prevent the chatbot from claiming to be a human or respond deceptively when asked by a user if the chatbot is a human.
Enacted 2025-07-01
T-01.1
Idaho Code § 48-603H(1)(a)-(c)
Plain Language
Any person using a chatbot, AI agent, avatar, or similar conversational AI technology in trade or commerce must clearly and conspicuously notify consumers that they are not communicating with a human being, when two conditions are met: (1) the interaction could mislead a reasonable consumer into thinking they are speaking with a human, and (2) the AI is doing more than conveying basic operational information such as hours, locations, employee directories, or simple purchase mechanics. The disclosure must be sufficiently clear and conspicuous that a reasonable consumer would not be misled. This is a conditional trigger — simple informational bots providing only basic operational details are carved out. Note that this obligation is structured as a prohibition (unfair trade practice) rather than an affirmative mandate, meaning all three elements (a), (b), and (c) must be present simultaneously for a violation.
It is an unfair and deceptive trade practice for any person to engage in trade or commerce with a consumer in which the person is communicating or otherwise interacting with a consumer using a chatbot, artificial intelligence agent, avatar, or other computer technology that engages in a textual or aural conversation and which may mislead or deceive a reasonable consumer to believe the consumer is engaging with an actual human, and: (a) The consumer is not notified in a clear and conspicuous fashion that the consumer is not communicating with a human being; (b) The consumer may reasonably believe the consumer is engaging with a human because the communication is not clear and conspicuous; and (c) The chatbot, artificial intelligence agent, avatar, or other computer technology that engages in a textual or aural conversation is doing more than stating the person's basic operations information, such as employee directories, locations, hours of operation, the basic mechanics of purchasing items, and similar information.
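Because the Idaho provision is a conjunctive three-element test, it can be expressed as a simple predicate. The boolean inputs below are judgment calls a compliance reviewer would make, not machine-detectable facts, and the names are illustrative.

```typescript
// Illustrative reviewer inputs for Idaho Code § 48-603H(1).
interface IdahoChatbotFacts {
  couldMisleadReasonableConsumer: boolean; // interaction could be mistaken for a human
  clearConspicuousNotice: boolean;         // consumer was told they are not talking to a human
  onlyBasicOperationsInfo: boolean;        // hours, locations, directories, purchase mechanics
}

function isUnfairPractice(f: IdahoChatbotFacts): boolean {
  const elementA = !f.clearConspicuousNotice;        // (a) no clear and conspicuous notice
  const elementB = f.couldMisleadReasonableConsumer; // (b) consumer may reasonably believe it is human
  const elementC = !f.onlyBasicOperationsInfo;       // (c) more than basic operations information
  // All three elements must be present simultaneously for a violation.
  return elementA && elementB && elementC;
}
```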
Pending 2027-07-01
T-01.1
Idaho Code § 48-2103(1)
Plain Language
If a reasonable person could be misled into thinking they are interacting with a human, the operator must provide a clear and conspicuous disclosure that the service is AI. This is a conditional trigger — it applies only when the AI's presentation could mislead, not unconditionally. Compare to the stricter unconditional disclosure required for minor account holders under § 48-2104(1).
If reasonable persons would be misled to believe that they are interacting with a human, an operator shall clearly and conspicuously disclose that the conversational AI service is artificial intelligence.
Pending 2027-07-01
T-01.1, T-01.2
Idaho Code § 48-2104(1)
Plain Language
When the operator knows or has reasonable certainty that an account holder is a minor, it must unconditionally disclose that the user is interacting with AI — regardless of whether a reasonable person would be misled. The operator has two compliance paths: either display a persistent visible disclaimer throughout the interaction, or disclose at the beginning of each session and then at least every three hours during continuous use. This is stricter than the general disclosure under § 48-2103(1), which is conditional on a reasonable person being misled.
An operator shall clearly and conspicuously disclose to minor account holders that they are interacting with artificial intelligence: (a) As a persistent visible disclaimer; or (b) Both: (i) At the beginning of each session; and (ii) Appearing at least every three (3) hours in a continuous conversational AI service interaction.
Pending 2026-01-01
T-01.1
225 ILCS 60/67(b)(1)(A)-(D)
Plain Language
Health facilities, clinics, physician's offices, and group practice offices that use generative AI to create patient communications about clinical information must include a prominent disclaimer that the communication was AI-generated. The required format varies by medium: for letters, emails, and similar written messages, the disclaimer must appear prominently at the beginning; for chat-based telehealth and continuous online interactions, it must be displayed throughout; for audio, it must be stated verbally at the start and end; and for video, it must be displayed throughout. This obligation applies only to communications pertaining to patient clinical information — administrative messages about scheduling, billing, or clerical matters are excluded.
(b) A health facility, clinic, physician's office, or office of a group practice that uses generative artificial intelligence to generate written or verbal patient communications pertaining to patient clinical information shall ensure that the communications include both of the following: (1) A disclaimer that indicates to the patient that the communication was generated by generative artificial intelligence and that is provided in the following manner: (A) for written communications involving physical and digital media, including letters, emails, and other occasional messages, the disclaimer shall appear prominently at the beginning of each communication; (B) for written communications involving continuous online interactions, including chat-based telehealth, the disclaimer shall be prominently displayed throughout the interaction; (C) for audio communications, the disclaimer shall be provided verbally at the start and the end of the interaction; or (D) for video communications, the disclaimer shall be prominently displayed throughout the interaction.
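The per-medium placement rules in (A) through (D) amount to a small configuration table. A sketch follows; the enum values and return shape are illustrative only.

```typescript
// Illustrative mapping of communication medium to disclaimer placement.
type Medium = "written" | "chat" | "audio" | "video";

interface DisclaimerPlacement {
  atStart: boolean;    // shown or stated at the beginning
  atEnd: boolean;      // stated again at the end (audio only)
  throughout: boolean; // displayed persistently during the interaction
}

function placementFor(medium: Medium): DisclaimerPlacement {
  switch (medium) {
    case "written": return { atStart: true, atEnd: false, throughout: false };  // (A) letters, emails
    case "chat":    return { atStart: false, atEnd: false, throughout: true };  // (B) chat-based telehealth
    case "audio":   return { atStart: true, atEnd: true, throughout: false };   // (C) verbal start and end
    case "video":   return { atStart: false, atEnd: false, throughout: true };  // (D) displayed throughout
  }
}
```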
Pending 2027-01-01
T-01.1, T-01.2
Section 15(a)
Plain Language
Operators must notify users during interactions that they are communicating with a companion AI product. The notification must be in the same language as the interaction. For text-based interactions, the notification must be conspicuous, persistent, and legible — always visible in the interface and visually distinct from the conversation. For voice or other non-text interactions, the notification must be presented periodically, at least every 30 minutes, in a manner distinct from the interaction. Adult users may disable this notification, but see Section 15(b) for the minor-specific prohibition on disabling.
(a) An operator shall provide a clear notification to a user during an interaction with a companion artificial intelligence product, unless specifically disabled by an adult user, informing the user that the user is communicating with a companion artificial intelligence product. All notifications shall be communicated in the same language as the interaction with the user and satisfy the following requirements: (1) for text-based interactions, the notification shall be conspicuous, persistent, and legible in the user interface and be distinct from the interaction; or (2) for all other types of interactions, the notification shall be presented periodically, but no less than once every 30 minutes in a manner that is distinct from the interaction.
Pending 2027-01-01
T-01.1, T-01.2
Section 15(b)
Plain Language
For minor users, the AI identity notification required by Section 15(a) may not be disabled under any circumstances. While adult users may opt out of the notification, minor users must always receive it — the conspicuous persistent text notification or the periodic 30-minute non-text notification. This creates an unconditional, non-waivable disclosure obligation for minors.
(b) An operator that operates and deploys a companion artificial intelligence product for use by a minor user in this State shall not disable the notification required under subsection (a) for the minor user.
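A compact sketch of the Section 15(a) and 15(b) logic summarized above: a persistent in-interface notice for text interactions, a 30-minute cadence otherwise, an adult opt-out that is honored, and a minor opt-out that is not. Names are illustrative.

```typescript
const THIRTY_MINUTES_MS = 30 * 60 * 1000;

interface CompanionNoticeConfig {
  enabled: boolean;
  persistentInUi: boolean; // text-based: conspicuous, persistent, legible
  repeatEveryMs?: number;  // non-text: presented at least every 30 minutes
}

function companionNotice(
  interactionIsText: boolean,
  userIsMinor: boolean,
  adultDisabledNotice: boolean
): CompanionNoticeConfig {
  // 15(b): the notification cannot be disabled for a minor user.
  const enabled = userIsMinor || !adultDisabledNotice;
  if (!enabled) return { enabled: false, persistentInUi: false };
  return interactionIsText
    ? { enabled: true, persistentInUi: true }
    : { enabled: true, persistentInUi: false, repeatEveryMs: THIRTY_MINUTES_MS };
}
```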
Pending 2027-01-01
T-01.1, T-01.2
Section 15
Plain Language
Operators must provide a clear and conspicuous notification — either verbal or in text — telling the user they are not communicating with a human. This disclosure is unconditional and must occur at two trigger points: (1) at the beginning of every AI companion interaction, and (2) at least every three hours during continuing interactions. Unlike CA SB 243's general disclosure, which is triggered only when a reasonable person could be misled, this Illinois bill imposes the disclosure unconditionally at the start of every interaction for all users. The three-hour periodic reminder applies to all users, not just minors.
An operator shall provide a clear and conspicuous notification to a user that states, either verbally or in text, that the user is not communicating with a human, at the following times: (1) the beginning of any artificial intelligence companion interaction; and (2) at least every 3 hours for continuing artificial intelligence companion interactions.
Pending 2026-07-01
T-01.1, T-01.2
Sec. 3(f)
Plain Language
At the start of every interaction and at least every 60 minutes during ongoing interactions, the covered entity must display a clear popup to the user with two disclosures: (1) the user is not talking to a human, and (2) the chatbot is not licensed or credentialed to provide advice or guidance on any topic. This is unconditional — it applies to all users (minors and adults alike) and does not depend on whether a reasonable person would be misled. The popup must be a visible on-screen notification that requires user interaction to dismiss. The 60-minute interval is a minimum floor; more frequent reminders are permitted.
(f) At the beginning of any interaction between a user and a companion AI chatbot and not less frequently than every 60 minutes during such interaction thereafter, a covered entity shall display to such user a clear popup that notifies the user that such user is not engaging in dialogue with a human counterpart and the AI chatbot is not licensed or otherwise credentialed to provide advice or guidance on any topic.
Pending 2026-08-01
T-01.1
R.S. 51:1430(B)(1)-(2)
Plain Language
Any corporation, organization, or person that uses an automated system in a commercial transaction with a Louisiana consumer must clearly and conspicuously notify the consumer that they are communicating with an automated system and not a human being. The violation is a two-pronged disjunctive test: it is triggered if the consumer is not notified OR if the consumer may reasonably believe they are engaging with a human — meaning that even if some form of notification is given, it is insufficient if a consumer could still reasonably believe they are speaking with a human. Note that this obligation applies only in the context of a commercial transaction or trade practice, not to all AI interactions generally. The definition of 'automated system' itself contains a limiting element — it must be technology that 'may mislead or deceive a reasonable person' — so systems that are obviously non-human in character may fall outside the definition entirely.
B. It is an unfair or deceptive trade practice for a corporation, organization, or person to engage in a commercial transaction or trade practice with a consumer in this state in which the consumer is communicating or otherwise interacting with an automated system and either of the following applies: (1) The consumer is not notified in a clear and conspicuous manner that the consumer is communicating with an automated system and not a human being. (2) The consumer may reasonably believe he is engaging with a human.
Pending 2026-01-01
T-01.1, T-01.2, T-01.3
R.S. 28:16(B)(1)-(3)
Plain Language
Operators must cause the mental health chatbot to clearly and conspicuously disclose that it is AI and not a human in three situations: (1) before the user can access any features — this is an unconditional, pre-access gate; (2) at the start of any new interaction if the user has been inactive for more than seven days; and (3) whenever a user asks or prompts the chatbot about whether AI is being used. The seven-day re-disclosure requirement functions as a periodic reminder for returning users, though it is session-triggered rather than time-interval-based within a session. The on-demand disclosure in subsection (3) requires the chatbot to accurately identify itself as AI whenever asked.
An operator of a mental health chatbot shall cause the chatbot to clearly and conspicuously disclose to a user that the chatbot is an artificial intelligence technology and not a human. The disclosure shall be made: (1) Before the user may access the features of the mental health chatbot. (2) At the beginning of any interaction with the user if the user has not accessed the mental health chatbot within the previous seven days. (3) Any time a user asks or otherwise prompts the mental health chatbot about whether artificial intelligence is being used.
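A minimal sketch of the three disclosure triggers described above: the pre-access gate, the seven-day-inactivity re-disclosure, and the on-demand prompt. Detecting that the user asked about AI use is assumed to happen upstream and is passed in as a flag.

```typescript
const SEVEN_DAYS_MS = 7 * 24 * 60 * 60 * 1000;

function mustDisclose(
  hasSeenPreAccessDisclosure: boolean, // (1) shown before any features are accessible
  lastAccessAt: number | null,         // epoch ms of the user's previous access, or null
  now: number,
  userAsksAboutAi: boolean             // (3) user asked or prompted about AI use
): boolean {
  const preAccessGate = !hasSeenPreAccessDisclosure;
  const returningAfterGap =
    lastAccessAt !== null && now - lastAccessAt > SEVEN_DAYS_MS; // (2) inactive more than seven days
  return preAccessGate || returningAfterGap || userAsksAboutAi;
}
```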
Pre-filed 2025-07-17
T-01.1
Ch. 93M § 4(a)-(b)
Plain Language
Any deployer or developer that makes available a consumer-facing AI system must disclose to each interacting consumer that they are interacting with an AI system. This applies to all AI systems intended to interact with consumers — not just high-risk systems. The disclosure is not required where it would be obvious to a reasonable person that the interaction is with AI. Note the broader scope: this provision covers any 'artificial intelligence system,' not just 'high-risk artificial intelligence systems' as in Sections 2 and 3.
(a) Not later than 6 months after the effective date of this act, and except as provided in subsection (b) of this section, a deployer or other developer that deploys, offers, sells, leases, licenses, gives, or otherwise makes available an artificial intelligence system that is intended to interact with consumers shall ensure the disclosure to each consumer who interacts with the artificial intelligence system that the consumer is interacting with an artificial intelligence system. (b) disclosure is not required under subsection (a) of this section under circumstances in which it would be obvious to a reasonable person that the person is interacting with an artificial intelligence system.
Pre-filed 2025-01-17
T-01.1
M.G.L. c. 93, § 115(b)
Plain Language
Any person who deploys a bot in a commercial transaction or trade practice with a consumer commits a per se Chapter 93A violation if the bot could mislead a reasonable person into thinking they are interacting with a human — actual consumer deception or harm is not required. The sole safe harbor is providing clear and conspicuous notice to the consumer that they are communicating with a computer rather than a human being. This notice must be provided before or during the interaction; failure to disclose creates the violation regardless of outcome. The obligation applies only in the context of commercial transactions or trade practices — non-commercial uses of bots are not covered. Because the statute uses a 'reasonable person' standard for the misleading threshold but provides an absolute safe harbor for clear disclosure, the practical compliance path is to always disclose AI identity clearly and conspicuously.
It is hereby declared to be an unfair and deceptive act or practice in violation of section 2 of chapter 93A for any person to engage in a commercial transaction or trade practice with a consumer of any kind in which the consumer is communicating or otherwise interacting with a bot that may mislead or deceive a reasonable person to believe they are engaging with a human, regardless of whether such consumer is in fact misled, deceived or damaged thereby; provided, however, that a person utilizing or deploying a bot shall not be liable under this section if the consumer is notified in a clear and conspicuous fashion that they are communicating with a computer rather than a human being.
Pre-filed 2025-01-17
T-01.1
Chapter 93M, § 2
Plain Language
Any commercial entity that deploys a chatbot must provide a clear and conspicuous disclosure to every user that they are interacting with a chatbot, not a human. This is an unconditional obligation — it applies regardless of whether a reasonable person would be misled. The disclosure must be made to every person the chatbot interacts with. The statute does not specify timing or format, but the 'clearly and conspicuously' standard requires prominence sufficient to ensure users actually notice the disclosure.
Any commercial entity deploying a chatbot shall clearly and conspicuously disclose to the person with whom the chatbot interacts that the person is interacting with a chatbot and not a human.
Pending 2026-10-01
T-01.1
Commercial Law § 14–1330(D)
Plain Language
Operators must display a clear and conspicuous warning to all users stating that companion chatbots are artificially generated and not human, and that they may not be suitable for some minors. This is an unconditional disclosure — it applies regardless of whether a reasonable person would be misled. Note that this is a general-user obligation separate from the enhanced developer disclosure requirements under subsection (E).
(D) An operator shall display a clear and conspicuous warning to a user stating that companion chatbots: (1) Are artificially generated and not human; and (2) May not be suitable for some minors.
Pending 2026-10-01
T-01.1, T-01.2, T-01.3
Commercial Law § 14–1330(E)(1)–(2)
Plain Language
Developers must provide two forms of AI identity disclosure to users: (1) a static, persistent on-screen warning that the chatbot is artificially generated and not human, which must remain visible at all times; and (2) a dynamic pop-up warning requiring user acknowledgment at the start of the interaction, after every hour of continuous interaction, and whenever the user asks about how the chatbot functions or provides responses. The hourly pop-up serves as a periodic re-disclosure, and the user-prompt trigger functions as an on-demand disclosure. Note that this obligation falls on the 'developer' — distinct from the operator obligations elsewhere in the statute — though the statute does not provide a separate definition of 'developer.'
(E) A developer shall establish and provide to a user of the operator's chatbot clear and conspicuous warnings that the chatbot is artificially generated and not human through the use of both: (1) A static, persistent warning that continuously appears on the screen; and (2) A dynamic warning that pops up on the screen and requires a user to respond: (I) At the start of the user's interaction with the chatbot; (II) After every hour of the user's continuous interaction with the chatbot; and (III) When prompted by the user in a manner that questions how the chatbot functions or provides responses.
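A sketch of the two-layer warning described above. The static banner needs no logic (it is simply always rendered), so only the dynamic pop-up's three acknowledgment triggers are modeled; the hourly constant and the prompt detector are placeholders.

```typescript
const ONE_HOUR_MS = 60 * 60 * 1000;
// Crude stand-in for detecting questions about how the chatbot functions or responds.
const ASKS_HOW_IT_WORKS = /how do you (work|respond|answer)|are your (answers|responses) generated/i;

interface WarningState {
  lastAcknowledgedAt: number | null; // epoch ms of the last acknowledged pop-up, or null
}

function popUpRequired(state: WarningState, now: number, userMessage: string): boolean {
  const atStart = state.lastAcknowledgedAt === null;                                      // (I)
  const hourlyDue =
    state.lastAcknowledgedAt !== null && now - state.lastAcknowledgedAt >= ONE_HOUR_MS;   // (II)
  const promptedByUser = ASKS_HOW_IT_WORKS.test(userMessage);                             // (III)
  return atStart || hourlyDue || promptedByUser;
}
```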
Enacted 2025-09-13
T-01.1
10 MRSA § 1500-Y(2)
Plain Language
Any person using an AI chatbot or other computer technology to interact with consumers in trade and commerce must provide clear and conspicuous notice that the consumer is not interacting with a human — but only when the interaction could mislead or deceive a reasonable consumer into believing they are dealing with a human. This is a conditional trigger: if the AI system clearly presents itself as non-human from the outset or no reasonable person would be confused, no disclosure is required. The scope is limited to trade and commerce contexts. A violation constitutes a violation of the Maine Unfair Trade Practices Act, enforceable by the Attorney General with civil penalties up to $10,000 per violation.
A person may not use an artificial intelligence chatbot or any other computer technology to engage in trade and commerce with a consumer in a manner that may mislead or deceive a reasonable consumer into believing that the consumer is engaging with a human being unless the consumer is notified in a clear and conspicuous manner that the consumer is not engaging with a human being.
Enacted 2025-09-10
T-01.1
10 MRSA § 1500-Y(2)-(3)
Plain Language
Any person using an AI chatbot or other computer technology in trade and commerce must provide clear and conspicuous notice to the consumer that they are not interacting with a human being, whenever the interaction could mislead or deceive a reasonable consumer into thinking otherwise. This is a conditional trigger — if the AI clearly cannot be mistaken for a human, no disclosure is required. The obligation applies broadly to any person engaged in trade and commerce, not just specific entity types. The scope extends beyond chatbots to 'any other computer technology' used in consumer interactions. Violations are enforceable under the Maine Unfair Trade Practices Act, carrying civil penalties of up to $10,000 per violation.
2. Required disclosure of use of artificial intelligence chatbot to engage in trade and commerce. A person may not use an artificial intelligence chatbot or any other computer technology to engage in trade and commerce with a consumer in a manner that may mislead or deceive a reasonable consumer into believing that the consumer is engaging with a human being unless the consumer is notified in a clear and conspicuous manner that the consumer is not engaging with a human being. 3. Violation. A violation of subsection 2 is a violation of the Maine Unfair Trade Practices Act.
Pending 2026-06-16
T-01.1
10 MRSA § 1500-RR(3)(A)
Plain Language
When a therapy chatbot is made available to a minor under the therapy chatbot exemption, it must provide a clear and conspicuous disclaimer at the beginning of each interaction that it is AI and not a licensed mental health professional. This is an unconditional per-interaction disclosure requirement — not triggered by user confusion, but required every time. This obligation is a condition of the therapy chatbot exemption; failure to comply eliminates the exemption and subjects the deployer to the general prohibition on minor access.
A. The therapy chatbot provides a clear and conspicuous disclaimer at the beginning of each individual interaction that it is artificial intelligence and not a licensed mental health professional;
Pending 2026-08-01
T-01.1
Minn. Stat. § 604.115, subd. 3
Plain Language
Every proprietor of a chatbot accessed by a user located in Minnesota must provide clear, conspicuous, and explicit notice that the user is interacting with an AI chatbot. This is an unconditional disclosure requirement — it applies regardless of whether a reasonable person would be misled. The notice must be in the same language the chatbot is using and in a font size easily readable by the average viewer. Unlike CA SB 243's conditional trigger (only when a reasonable person could be misled), this obligation applies to every chatbot interaction with a Minnesota user.
Proprietors utilizing chatbots accessed by a user who is in this state must provide clear, conspicuous, and explicit notice to a user that the user is interacting with an artificial intelligence chatbot program. The text of the notice must appear in the same language the chatbot is using and in a size easily readable by the average viewer.
Pre-filed 2026-08-28
T-01.1
§ 1.2055(3)(1)
Plain Language
Any person who owns or controls a platform offering a companion chatbot must not process data or design systems in ways that deceive or mislead users about the fact that the chatbot is not human. This is framed as a prohibition on deception rather than an affirmative disclosure requirement — the operator need not proactively disclose AI identity in every interaction, but may not design the system in a way that would lead users to believe they are interacting with a human. This is a design-level obligation covering both data processing choices and system design decisions.
Any person who owns or controls a website, application, software, or program: (1) Shall not process data or design systems in ways that deceive or mislead users of such website, application, software, or program regarding the nonhuman nature of the companion chatbot;
Pending 2026-08-28
T-01.1T-01.2T-01.3
§ 1.2058(5)(3)(a)
Plain Language
Every AI chatbot must clearly and conspicuously disclose to the user at the start of each conversation — and again every 30 minutes during the conversation — that it is an AI system and not a human being. This is an unconditional requirement that applies regardless of whether the user would otherwise be misled. Additionally, the chatbot must be programmed so that it does not claim to be human or respond deceptively when a user asks whether it is a human. The on-demand honesty requirement is ongoing — the chatbot must accurately self-identify whenever asked, not just during the initial or periodic disclosures.
(3) (a) Each artificial intelligence chatbot made available to users shall: a. At the initiation of each conversation with a user and at thirty-minute intervals, clearly and conspicuously disclose to the user that the chatbot is an artificial intelligence system and not a human being; and b. Be programmed to ensure that the chatbot does not claim to be a human being or otherwise respond deceptively when asked by a user if the chatbot is a human being.
Pre-filed 2026-08-28
T-01.1T-01.2T-01.3
§ 1.2058(5)(3)(a)
Plain Language
Every AI chatbot must disclose to the user at the start of each conversation and every 30 minutes that it is an AI system, not a human. This disclosure is unconditional — it applies to all users regardless of whether a reasonable person would be misled. Additionally, the chatbot must be programmed to never claim to be human and must respond truthfully when asked by a user whether it is human. The 30-minute interval is a fixed requirement — not a minimum that operators can extend.
(3) (a) Each artificial intelligence chatbot made available to users shall: a. At the initiation of each conversation with a user and at thirty-minute intervals, clearly and conspicuously disclose to the user that the chatbot is an artificial intelligence system and not a human being; and b. Be programmed to ensure that the chatbot does not claim to be a human being or otherwise respond deceptively when asked by a user if the chatbot is a human being.
Pending 2026-01-01
T-01.1
G.S. § 114B-4(c)
Plain Language
Licensed health information chatbot operators must clearly disclose six categories of information to users: that the chatbot is AI, the service's limitations, data collection and use practices, user rights and remedies, emergency resources (when applicable), and human oversight and intervention protocols. This is a general disclosure obligation under the licensing regime — it is distinct from the more detailed chatbot identification process required under Part II (Chapter 170) for covered platforms.
(c) A licensee must clearly disclose all of the following: (1) The artificial nature of the chatbot. (2) Limitations of the service. (3) Data collection and use practices. (4) User rights and remedies. (5) Emergency resources when applicable. (6) Human oversight and intervention protocols.
Pending 2026-01-01
T-01.1
G.S. § 170-3(b)(3)
Plain Language
Covered platforms must clearly and consistently identify their chatbots as AI when the chatbot's artificial nature is not already apparent. Platforms may not design systems or process data in ways that deceive or mislead users about the chatbot's non-human nature. Transparency must be prioritized over any engagement benefits from perceived human-like interaction. This is a conditional trigger — disclosure is required when the AI nature is 'not clearly apparent,' similar to the 'reasonable person' standard in CA SB 243. This is the overarching duty-of-loyalty framing; the detailed procedural requirements for the disclosure are specified in § 170-5.
(3) Duty of loyalty in chatbot identity disclosure. — A covered platform has a duty to clearly and consistently identify the chatbot as an artificial entity when that fact is not clearly apparent. The platform shall not process data or design systems in ways that deceive or mislead users about the non-human nature of the chatbot, prioritizing transparency over any potential benefits of perceived human-like interaction.
Pending 2026-01-01
T-01.1
G.S. § 170-5(a)-(e)
Plain Language
Covered platforms must implement a detailed chatbot identification process with four specific mandatory disclosures: (1) the chatbot is not human, human-like, or sentient; (2) it is a computer program based on statistical analysis of human text; (3) it cannot experience emotions; and (4) it has no personal preferences or feelings. This disclosure must be under 300 words, clearly presented, and readily accessible. Users must provide affirmative informed consent (e.g., clicking 'I understand') confirming they understand the chatbot's nature and limitations. Deceptive design elements in the consent flow are prohibited. Critically, the identification and consent process must be repeated at the start of each new session — not just at initial onboarding — and must be separate from any privacy policy or other consent process. This is among the most prescriptive AI identity disclosure requirements in any U.S. jurisdiction.
(a) The chatbot identification process shall include all of the following elements: (1) A covered platform shall clearly inform users that the chatbot is: a. Not human, human-like, or sentient. b. A computer program designed to mimic human conversation based on statistical analysis of human-produced text. c. Incapable of experiencing emotions such as love or lust. d. Without personal preferences or feelings. (2) The information required by subdivision (1) of this subsection shall be readily accessible, clearly presented, and concisely conveyed in less than three hundred (300) words. (b) A users shall provide explicit and informed consent to interact with the chatbot. The consent process shall: (1) Require an affirmative action from the user (such as clicking an "I understand" button); and (2) Confirm the user's understanding of the chatbot's identity and limitations. (c) A covered platform is prohibited from using deceptive design elements that manipulate or coerce users into providing consent or obscure the nature of the chatbot or the consent process. (d) The chatbot identity communication and opt-in consent process shall be repeated at the start of each new session with a user. (e) The chatbot identification and consent process required by this section shall be separate and distinct from any privacy policy agreement or other consent processes required by law or platform policy.
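A rough sketch of how the per-session identification and consent gate described above could be structured, assuming hypothetical class and field names; the disclosure wording below only paraphrases the four required statements and is not the statutory text:

# Illustrative paraphrase of the four required statements in subsection (a)(1);
# an operator would substitute its own compliant wording.
DISCLOSURE = (
    "You are about to chat with a computer program, not a human. It mimics "
    "conversation based on statistical analysis of human-produced text. It is "
    "not sentient, cannot experience emotions such as love or lust, and has no "
    "personal preferences or feelings."
)

assert len(DISCLOSURE.split()) < 300  # subsection (a)(2): conveyed in under 300 words

class ChatSession:
    def __init__(self, session_id):
        self.session_id = session_id
        # Reset for every new session: the identification and consent process
        # repeats at the start of each session under subsection (d).
        self.identity_consented = False

    def identification_screen(self):
        # Presented on its own, separate from any privacy policy, per subsection (e).
        return DISCLOSURE

    def record_consent(self, clicked_i_understand):
        # Subsection (b)(1): consent requires an affirmative action by the user.
        if not clicked_i_understand:
            raise PermissionError("chatbot unavailable until the user affirmatively consents")
        self.identity_consented = True

    def can_chat(self):
        return self.identity_consented

Note that the consent flow itself must avoid deceptive design elements under subsection (c); that constraint is a UI-design matter the sketch does not capture.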
Pending 2027-07-01
T-01.1T-01.2
Sec. 3(1)
Plain Language
Operators must unconditionally disclose to every known minor account holder that they are interacting with AI. The operator may satisfy this either with a persistent on-screen disclaimer visible at all times, or by disclosing at the beginning of each session and at least every three hours in a continuous interaction. Unlike the general disclosure in Sec. 4, this obligation is not conditional on whether a reasonable person would be misled — it applies whenever the operator knows or has reasonable certainty the user is under 18.
(1) An operator shall clearly and conspicuously disclose to each minor account holder that such minor account holder is interacting with artificial intelligence: (a) As a persistent visible disclaimer; or (b) Both: (i) At the beginning of each session; and (ii) Appearing at least every three hours in a continuous conversational artificial intelligence service interaction.
Pending 2027-07-01
T-01.1
Sec. 4
Plain Language
If a reasonable person could be misled into thinking they are talking to a human, the operator must provide a clear and conspicuous disclosure that the service is AI. This is a conditional trigger — it applies only when the interaction could mislead a reasonable person. Unlike the minor-specific disclosure in Sec. 3(1), this provision applies to all users but only when the deception threshold is met. Compare to CA SB 243, which uses the same conditional reasonable-person standard for general users.
If a reasonable person interacting with a conversational artificial intelligence system would be misled to believe that the person is interacting with a human, an operator shall clearly and conspicuously disclose that the conversational artificial intelligence service is artificial intelligence.
Pending 2026-02-01
T-01.1
Sec. 5(1)-(2)
Plain Language
Any deployer or developer that makes an AI system intended to interact with consumers available must disclose to each interacting consumer that they are interacting with an AI system. This obligation applies broadly to all AI systems intended for consumer interaction — not just high-risk systems. Disclosure is not required where it would be obvious to a reasonable person that they are interacting with AI. Note this is the inverse of CA SB 243's trigger: here, disclosure is required by default unless it would be obviously unnecessary, rather than triggered only when a reasonable person could be misled.
(1) On and after February 1, 2026, and except as otherwise provided in subsection (2) of this section, a deployer or other developer that deploys, offers, sells, leases, licenses, gives, or otherwise makes available any artificial intelligence system that is intended to interact with any consumer shall include in the disclosure to each consumer who interacts with such artificial intelligence system that the consumer is interacting with an artificial intelligence system. (2) Disclosure is not required under subsection (1) of this section under any circumstance when it would be obvious to a reasonable person that the person is interacting with an artificial intelligence system.
Pending 2026-04-09
T-01.1
Section 1(b)
Plain Language
Before an AI chatbot powered by generative AI provides any election-related content or information about a candidate's accomplishments, policy positions, or qualifications, it must display a clear and conspicuous disclosure — appropriate for the medium (text, audio, video, or print) — stating that the content is being provided by a generative AI system. The disclosure must be permanent or difficult to remove by downstream users, to the extent technically feasible. The obligation is triggered by the chatbot's purpose: it applies when the chatbot is designed to provide voters with election-related information or candidate information for New Jersey elections. The scope of 'election related information' is broad, covering everything from voter registration and polling logistics to ballot canvassing and certification of results.
b. Any artificial intelligence chatbot that utilizes generative artificial intelligence to create audio, video, text, or print content with the purpose of providing voters with election related information or information concerning the accomplishments, policy positions, or qualifications of a candidate for election in this State shall include, prior to the provision of any such content, a clear and conspicuous disclosure, as appropriate for the medium of the content, that identifies the content as being provided by a generative artificial intelligence system. Such disclosure shall be permanent or uneasily removed by subsequent users, to the extent technically feasible.
Pending 2026-03-10
T-01.1
Section 1(a)
Plain Language
Any person or entity deploying generative AI to communicate with a consumer for trade or commerce purposes must provide a clear and conspicuous verbal or written disclosure at the start of the interaction that the consumer is interacting with AI. This obligation is conditionally triggered — it applies only when the deployment is such that a reasonable person could believe they are communicating with a human. The disclosure must occur at the beginning of the interaction, not mid-stream. The scope is limited to commercial contexts (trade or commerce), so non-commercial AI interactions are not covered.
A person or entity shall not deploy generative artificial intelligence to communicate or otherwise interact with a consumer for the purpose of engaging in trade or commerce in such a way as to cause a reasonable person to believe they are communicating or interacting with a human unless the person or entity provides a clear and conspicuous verbal or written notice at the beginning of the interaction that the consumer is communicating or interacting with generative artificial intelligence.
Pending 2026-03-10
T-01.1T-01.2
Section 2
Plain Language
Operators of AI companion systems must provide a clear and conspicuous notification — verbally or in writing — at the very beginning of every interaction informing the user that they are not communicating with a human. This disclosure is unconditional; it is not triggered by whether the user could be misled, but applies to every interaction. For continued sessions, the notification must repeat at least every three hours. Unlike California SB 243, which conditions initial disclosure on whether a reasonable person would be misled (except for minors), NJ A 4732 requires unconditional disclosure for all users at the start of every interaction. The three-hour re-notification interval mirrors the CA SB 243 minor-specific requirement but applies here to all users regardless of age.
An operator shall provide clear and conspicuous notification to a user at the beginning of any AI companion interaction that the user is not communicating with a human. This notification shall be provided either verbally or in writing. Thereafter, the notification shall repeat at least every three hours for continued AI companion interactions.
Pre-filed 2026-02-24
T-01.1
Section 1(a)(1)-(2), (b), (c)
Plain Language
Any person or entity that uses an AI system to communicate with a consumer on an online platform must, at the moment of first contact and before any further communication, clearly and conspicuously do two things: (1) notify the consumer that they are communicating with an AI system, and (2) provide information on how to reach a human — including contact details such as a phone number or website, the days and hours a human is available, and any other information the consumer needs to connect with a human. This is an unconditional disclosure obligation — it applies whenever AI communicates with a consumer on an online platform, regardless of whether the consumer could be misled. Failure to comply is an unlawful practice under the New Jersey Consumer Fraud Act, exposing the violator to CFA enforcement and penalties.
a. A person or entity that deploys an artificial intelligence system to communicate with a consumer through an online platform shall, upon establishing contact with the consumer and prior to initiating any further communication, clearly and conspicuously: (1) notify the consumer that an artificial intelligence system is communicating with the consumer; and (2) provide the consumer with information on how to contact a human, including but not limited to providing a phone number, Internet website, or similar contact information for a human; the days and times a human is available; and any other information necessary for communication with a human. b. It shall be an unlawful practice and a violation of P.L.1960, c.39 (C.56:8-1 et seq.) for any person or entity that deploys an artificial intelligence system to communicate with a consumer through an online platform to violate the provisions of this section. c. As used in this section: "Artificial intelligence" means the development of software and hardware and the end-use application of technologies that are able to perform tasks normally requiring human intelligence, including, but not limited to, visual perception, speech recognition, decision-making, translation between languages, and generative artificial intelligence, which generates new content in response to user inputs of data.
Pending 2027-01-01
T-01.3
Section 3(A)(3), (B)
Plain Language
Operators must not deploy companion AI products that make material misrepresentations about the product's identity, capabilities, training data, or status as a non-human entity — including when a user directly asks. Adult users may configure the product to enable this feature, but minors may never be permitted to do so. This effectively requires truthful self-identification as AI when questioned, and prohibits false claims about capabilities or training data. The adult opt-in carve-out is unusual — most jurisdictions impose this obligation unconditionally.
A. An operator shall not deploy or operate a companion artificial intelligence product that, unless specifically configured to do so by an adult user, incorporates: (3) causing the companion artificial intelligence product to make material misrepresentations about the product's identity, capabilities, training data or status as a non-human entity, including when directly questioned by the user. B. An operator shall not permit a minor to configure a companion artificial intelligence product to enable the features described in Subsection A of this section.
Pending 2027-01-01
T-01.1T-01.2
Section 4(A)-(B)
Plain Language
Operators must provide a clear notification during interactions informing users they are communicating with a companion AI product. The notification must be in the same language as the interaction. For text-based interactions, it must be conspicuous, persistent, legible, and distinct from the conversation itself. For non-text interactions (voice, video, etc.), it must be presented periodically — at least every thirty minutes — in a manner distinct from the interaction. Adult users may configure the product to disable this notification, but for minors, the notification must be provided in all circumstances with no opt-out. The thirty-minute periodic reminder for non-text interactions is more frequent than CA SB 243's three-hour interval.
A. An operator shall, unless specifically configured not to do so by an adult user, ensure that a clear notification is provided to the user during an interaction, informing the user that the user is communicating with a companion artificial intelligence product. The notification shall be communicated in the same language as the interaction with the user, and: (1) for text-based interactions, be conspicuous, persistent and legible in the user interface and be distinct from the interaction; and (2) for all other types of interactions, be presented periodically, but no less than once every thirty minutes, in a manner that is distinct from the interaction. B. An operator shall ensure that a clear notification is provided pursuant to Subsection A of this section for use by a minor in all circumstances.
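The scheduling logic described above (persistent notice for text, a reminder at least every thirty minutes for other modalities, an adult opt-out but none for minors) reduces to a small amount of branching. A sketch with assumed field names:

from dataclasses import dataclass

THIRTY_MINUTES = 30 * 60  # minimum reminder frequency for non-text interactions

@dataclass
class CompanionUser:
    is_minor: bool
    notice_disabled_by_adult: bool = False  # adult-configured opt-out under Subsection A

def notice_required(user):
    # Subsection B: the notification must be provided to minors in all circumstances.
    if user.is_minor:
        return True
    return not user.notice_disabled_by_adult

def reminder_interval_seconds(modality):
    # Text-based interactions: conspicuous, persistent, distinct notice, so no timer.
    # Voice, video, and other modalities: periodic, no less than every thirty minutes.
    return None if modality == "text" else THIRTY_MINUTES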
Pending 2027-01-01
T-01.1
GBL § 1554(1)-(2)
Plain Language
Any person doing business in New York that deploys or makes available a consumer-facing AI decision system must disclose to each interacting consumer that they are interacting with an AI system. This is a broad obligation applying to all AI decision systems intended to interact with consumers — not limited to high-risk systems. No disclosure is required where a reasonable person would obviously recognize they are interacting with AI. The obligation covers deployers and any other person making an AI system available to consumers.
1. Beginning on January first, two thousand twenty-seven, and except as provided in subdivision two of this section, each person doing business in this state, including, but not limited to, each deployer that deploys, offers, sells, leases, licenses, gives, or otherwise makes available, as applicable, any artificial intelligence decision system that is intended to interact with consumers shall ensure that it is disclosed to each consumer who interacts with such artificial intelligence decision system that such consumer is interacting with an artificial intelligence decision system. 2. No disclosure shall be required pursuant to subdivision one of this section under circumstances in which a reasonable person would deem it obvious that such person is interacting with an artificial intelligence decision system.
Pending 2025-04-27
T-01.1
State Tech. Law § 507(1)-(3)
Plain Language
New York residents must be informed whenever an automated system is in use that impacts them. Designers, developers, and deployers must provide accessible, plain-language documentation covering: how the system works overall, the role of automation in decisions, notice that the system is in use, identification of the responsible organization or individual, and clear explanations of outcomes. This documentation must be kept current, and residents must be notified of significant changes to use cases or key functionalities. This is a broad notice and documentation obligation that applies to all covered automated systems, not just high-risk ones.
1. New York residents shall be informed when an automated system is in use and New York residents shall be informed how and why the system contributes to outcomes that impact them.
2. Designers, developers, and deployers of automated systems shall provide accessible plain language documentation, including clear descriptions of the overall system functioning, the role of automation, notice of system use, identification of the individual or organization responsible for the system, and clear, timely, and accessible explanations of outcomes.
3. The provided notice shall be kept up-to-date, and New York residents impacted by the system shall be notified of any significant changes to use cases or key functionalities.
Pending 2025-09-09
T-01.1T-01.2
Gen. Bus. Law § 1702
Plain Language
Operators must notify every user — unconditionally, not just when a reasonable person might be misled — at the start of every AI companion interaction that the system is a computer program and not a human being, and that it is unable to feel human emotion. For continuing sessions, the same notification must be repeated at least every three hours. The notification must be delivered either verbally or in bold, capitalized text of at least 16-point font. The statute prescribes the exact language to be used, including substitution of the AI companion's name. Unlike CA SB 243, this disclosure obligation applies to all users regardless of age, uses mandatory prescribed language, and includes the affirmative statement that the AI cannot feel emotion.
An operator shall provide a notification to a user at the beginning of any AI companion interaction and at least every three hours for continuing AI companion interactions thereafter, which states either verbally or in bold and capitalized letters of at least sixteen point type, the following: "THE AI COMPANION (OR NAME OF THE AI COMPANION) IS A COMPUTER PROGRAM AND NOT A HUMAN BEING. IT IS UNABLE TO FEEL HUMAN EMOTION".
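Because the provision prescribes the notice language verbatim, implementation largely reduces to name substitution, formatting, and interval tracking. A minimal sketch with assumed function names; the font constant records the formatting floor for the written option:

THREE_HOURS = 3 * 60 * 60   # periodic re-disclosure interval for continuing interactions
MIN_FONT_POINTS = 16        # written notices: bold, capitalized, at least 16-point type

def prescribed_notice(companion_name=None):
    # The statute allows either the generic phrase or the AI companion's name.
    subject = companion_name.upper() if companion_name else "THE AI COMPANION"
    return (subject + " IS A COMPUTER PROGRAM AND NOT A HUMAN BEING. "
            "IT IS UNABLE TO FEEL HUMAN EMOTION.")

def notice_due(seconds_since_last_notice):
    # Due at the beginning of the interaction and at least every three hours thereafter.
    return seconds_since_last_notice is None or seconds_since_last_notice >= THREE_HOURS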
Pending 2025-10-12
T-01.1
GBL § 1152
Plain Language
News media employers must fully disclose to their workers whenever and however any generative AI tool is being used in the workplace for content creation — including writing, recordings, and transcripts. The disclosure must include a description of the AI system and a summary of its purpose and use. This is an internal, worker-facing disclosure obligation, distinct from the consumer-facing labeling requirement in § 1153. The bill does not specify timing, format, or frequency of the disclosure beyond requiring it be 'full.'
News media employers shall fully disclose to workers when and how any generative artificial intelligence tool is used in the workplace as it relates to the creation of content, including, but not limited to, writing, recordings and transcripts. Such disclosure shall include a description of the artificial intelligence system and a summary of the purpose and use of such system.
Pending 2026-03-12
T-01.1
CPLR Rule 2107(d)-(e)
Plain Language
Every civil filing must include a separate affidavit addressing generative AI use. If AI was used in drafting — including for research, document review, or content creation — the affidavit must disclose that use and certify that a human reviewed all source material and verified accuracy of all AI-generated content, including case citations. If AI was not used, the filing must still include an affidavit affirmatively stating that. This is a universal affidavit requirement: there is no filing that can be submitted without one or the other affidavit attached.
(d) Any paper or file drafted with the assistance of generative artificial intelligence must attach to the filing a separate affidavit disclosing such use and certifying that a human being has reviewed the source material and verified that the artificially generated content is accurate including, but not limited to, any case citations. (e) Any paper or file drafted without the assistance of generative artificial intelligence must attach to the filing a separate affidavit stating such.
Pending 2026-03-12
T-01.1
CPL § 10.50(4)-(5)
Plain Language
Every filing in a criminal proceeding must include a separate affidavit addressing AI use. If generative AI assisted in drafting — including research or document review — the affidavit must disclose the AI use and certify that a human reviewed all source material and verified the accuracy of AI-generated content, including case citations. If AI was not used, a separate affidavit must affirmatively state that fact. This mirrors the civil filing requirement in CPLR Rule 2107(d)-(e) and applies to all parties in criminal proceedings.
4. Any paper or file drafted with the assistance of generative artificial intelligence must attach to the filing a separate affidavit disclosing such use and certifying that a human being has reviewed the source material and verified that the artificially generated content is accurate including, but not limited to, any case citations. 5. Any paper or file drafted without the assistance of generative artificial intelligence must attach to the filing a separate affidavit stating such.
Pending 2025-10-11
T-01.1
GBL § 1554(1)-(2)
Plain Language
Any person doing business in New York that makes available an AI decision system intended to interact with consumers must disclose to each consumer that they are interacting with an AI system. This obligation applies broadly — not just to deployers of high-risk systems but to any person making a consumer-facing AI system available. The disclosure is not required where a reasonable person would find it obvious they are interacting with AI. Note that unlike the high-risk obligations in §§ 1551–1552, this applies to all AI decision systems intended to interact with consumers, not just high-risk systems.
1. Beginning on January first, two thousand twenty-seven, and except as provided in subdivision two of this section, each person doing business in this state, including, but not limited to, each deployer that deploys, offers, sells, leases, licenses, gives, or otherwise makes available, as applicable, any artificial intelligence decision system that is intended to interact with consumers shall ensure that it is disclosed to each consumer who interacts with such artificial intelligence decision system that such consumer is interacting with an artificial intelligence decision system. 2. No disclosure shall be required pursuant to subdivision one of this section under circumstances in which a reasonable person would deem it obvious that such person is interacting with an artificial intelligence decision system.
Pending 2025-09-05
T-01.1
Gen. Bus. Law § 1152
Plain Language
News media employers must fully disclose to their workers when and how any generative AI tool is being used in the workplace for content creation — covering writing, recordings, transcripts, and similar outputs. The disclosure must include a description of the AI system and a summary of its purpose and use. This is a worker-facing transparency obligation, not a consumer-facing one. The bill does not specify timing, format, or frequency of the disclosure beyond requiring it to be 'full.'
Disclosure to news media workers. News media employers shall fully disclose to workers when and how any generative artificial intelligence tool is used in the workplace as it relates to the creation of content, including, but not limited to, writing, recordings and transcripts. Such disclosure shall include a description of the artificial intelligence system and a summary of the purpose and use of such system.
Pending 2026-05-13
T-01.1
Gen. Bus. Law § 399-m-1(2)-(3)
Plain Language
Any business entity or its agents must disclose to customers that AI is being used to influence the interaction, at the point where the customer first encounters the AI. The disclosure must be in at least 12-point boldface type, written in plain English, describe the AI's role in the interaction, and include instructions on how to reach a human if that option is available. The obligation applies broadly across use cases including automated customer support, personalized ad targeting, product eligibility decisions, and AI-driven hiring tools. The bill does not specify penalties for non-compliance or create a private right of action, so enforcement would rely on existing General Business Law mechanisms.
2. Any person, firm, partnership, association or corporation or agent or employee thereof shall disclose the use of artificial intelligence to influence customer interaction, including but not limited to: automated customer support; personalized ad targeting; product eligibility decisions; and AI-driven hiring tools. 3. Such disclosure shall be placed at the point of interaction with the customer, accompanied by a clear and conspicuous, in not less than twelve point bold faced type, plain-English description of the AI's role, with instructions on how to access human assistance, if applicable.
Enacted 2025-11-05
T-01.1T-01.2
General Business Law § 1702
Plain Language
Operators must deliver a clear and conspicuous disclosure, either verbally or in writing, that the user is not communicating with a human. This disclosure is required: (1) at the start of every AI companion interaction (initial disclosure), though the initial notice need not be given more than once per day, and (2) at least every three hours during any continuing interaction (periodic re-disclosure). The obligation is unconditional — it applies to all users regardless of whether a reasonable person would be misled.
An operator shall provide a clear and conspicuous notification to a user at the beginning of any AI companion interaction which need not exceed once per day and at least every three hours for continuing AI companion interactions which states either verbally or in writing that the user is not communicating with a human.
Pending 2026-11-01
T-01.1
Section 2(C)(1)
Plain Language
A therapeutic chatbot may only be made available to minors if it provides a clear and conspicuous disclaimer at the start of each interaction that it is AI and not a licensed professional. This is an unconditional, per-interaction disclosure requirement — not triggered by whether a reasonable person would be misled. It is one of five cumulative conditions that must all be satisfied for the therapeutic exemption to apply.
C. Therapeutic chatbots that meet all of the following requirements may be made available to minors: 1. The chatbot provides a clear and conspicuous disclaimer at the beginning of each individual interaction that it is AI and not a licensed professional;
Passed 2027-07-01
T-01.1T-01.2
75A O.S. § 302(A)
Plain Language
Operators must provide clear and conspicuous AI identity disclosure to minor account holders. The operator may satisfy this obligation in one of two ways: (1) a constantly visible on-screen disclaimer, or (2) a disclosure at the beginning of each session plus a reminder at least every 30 minutes during continuous interaction. This is an unconditional disclosure requirement for all known minor accounts — there is no 'reasonable person would be misled' trigger. The 30-minute interval is significantly more frequent than comparable obligations in other jurisdictions (e.g., California SB 243 requires every 3 hours).
A. An operator shall clearly and conspicuously disclose to a minor account holder that he or she is interacting with a conversational AI service and is not interacting with a natural person: 1. With a constantly visible disclaimer; or 2. At the beginning of each session and appearing at least every thirty (30) minutes in a continuous conversational AI service interaction.
Pending 2026-03-10
T-01.1
Section 3(a)-(b)
Plain Language
Any business entity that uses AI in any part of a consumer interaction must proactively disclose the use of AI in a clear and conspicuous manner at the beginning of the interaction. The disclosure must be in plain language, delivered orally or in writing, and must be reasonably accessible to individuals with disabilities or limited English proficiency. This is an unconditional disclosure — it applies whenever AI is used in any consumer interaction, regardless of whether the consumer could be misled. The trigger is extremely broad: any communication, transaction, or service directed at a Pennsylvania resident that involves AI in any part.
(a) Duty of business entity.--A business entity that uses artificial intelligence in any part of a consumer interaction shall disclose the use of artificial intelligence in a clear and conspicuous manner to the consumer at the beginning of the consumer interaction. (b) Format.--The business entity shall deliver the disclosure in plain language, orally or in writing, which language must be reasonably accessible to an individual with a disability or limited English proficiency.
Pending 2026-03-10
T-01.3
Section 3(c)
Plain Language
When a consumer requests to speak with a human during an AI-assisted consumer interaction, the business entity must provide timely access to a human representative. This obligation is conditioned on a human representative being 'reasonably available,' which provides a practical escape valve for businesses that may not have human staff available at all times. The statute does not define what constitutes 'timely' or 'reasonably available,' leaving significant interpretive discretion.
(c) Human representatives.--Upon request, the business entity shall provide the consumer with timely access to a human representative, if a human representative is reasonably available.
Pending 2026-01-29
T-01.1T-01.2
Section 4(2)
Plain Language
Operators must unconditionally disclose to every user that they are communicating with an AI companion and not a human. This disclosure must be provided at the start of every session and repeated every three hours during continuing sessions. The disclosure may be delivered verbally or in writing. Unlike some jurisdictions that trigger disclosure only when a reasonable person could be misled, this obligation is unconditional — it applies at every session regardless of context.
An operator shall: (2) At the beginning of a session with an AI companion and once every three hours during the session, provide a notification to the user stating, either verbally or in writing, that the user is communicating with an AI companion and not a human.
Pending 2026-04-01
T-01.1T-01.3
12 Pa.C.S. § 7105(c)(3)
Plain Language
The supplier's disclosure policy must include a clear and conspicuous statement that the chatbot is AI and not a human. Additionally, this statement must be provided each time a consumer asks or prompts the chatbot about whether AI is being used — creating an on-demand disclosure obligation. The initial statement is part of the pre-access policy the consumer must acknowledge under § 7105(b), meaning every consumer sees the AI identity disclosure before any interaction begins. The on-demand component ensures the chatbot accurately identifies itself whenever questioned during a session.
(3) A statement that the chatbot is an artificial intelligence technology and is not a human, which must be provided each time that the consumer asks or otherwise prompts the chatbot about whether artificial intelligence is being used.
Pending 2026-06-03
T-01.1
Section 3(a)
Plain Language
If a user could reasonably mistake the AI companion for a real person, the operator must display a clear, prominent notice that the AI companion is artificially generated and not human. This is a conditional trigger — if the AI companion clearly presents itself as AI from the outset in a way that no reasonable person would be misled, no disclosure is required under this subsection. Compare to subsection (c)(1), which imposes an unconditional disclosure requirement for known or suspected minors.
Disclosure of nonhuman status.--If a reasonable person interacting with an AI companion would be misled to believe the person is interacting with a human, an operator shall issue a clear and conspicuous notification indicating that the AI companion is artificially generated and not human.
Pending 2026-06-03
T-01.1T-01.2
Section 3(c)(1)-(2)
Plain Language
When the operator knows or should have known a user is a minor, two unconditional obligations apply: (1) always disclose that the user is interacting with AI rather than a human — this is not subject to the 'reasonable person' condition in subsection (a); and (2) provide a prominent default reminder at least every three hours during ongoing conversations that the AI companion is AI-generated and the user should take a break. The 'should have known' standard (as amended from the original 'should reasonably suspect') creates a constructive knowledge obligation — operators cannot avoid these duties by failing to implement reasonable age-detection measures.
For a user that the operator knows, OR SHOULD HAVE KNOWN, is a minor, the operator shall: (1) Disclose to the user that the user is interacting with artificial intelligence and not an actual human being. (2) Provide by default a clear and conspicuous notification to the user at least once every three hours during continuing interactions that reminds the user to take a break and that the AI companion is artificially generated and not human.
Pending 2027-01-01
T-01.1T-01.2
R.I. Gen. Laws § 6-63-3
Plain Language
Operators must unconditionally disclose to every user at the start of every AI companion interaction — and again at least every three hours during continuing interactions — that the AI companion is a computer program and not a human being, and that it is unable to feel human emotion. This is an unconditional, mandatory disclosure — there is no 'reasonable person would be misled' threshold. The notification must be delivered verbally or in bold, capitalized text of at least 16-point font. The prescribed language must include the specific AI companion's name. Unlike CA SB 243, which only requires periodic re-disclosure for minors, Rhode Island requires the three-hour interval for all users regardless of age.
An operator shall provide a notification to a user at the beginning of any AI companion interaction and at least every three (3) hours for continuing AI companion interactions hereafter, which states either verbally or in bold and capitalized letters of at least sixteen (16) point type, the following: "THE AI COMPANION (OR NAME OF THE AI COMPANION) IS A COMPUTER PROGRAM AND NOT A HUMAN BEING. IT IS UNABLE TO FEEL HUMAN EMOTION".
Pending 2026-02-06
T-01.1
R.I. Gen. Laws § 23-106-3
Plain Language
Healthcare providers and healthcare facilities that use AI to document patient visits — whether in-person or via telehealth — must notify patients that AI is being used for that documentation purpose. The obligation is narrowly limited to AI used for visit documentation (e.g., AI scribes, ambient listening tools that generate clinical notes); it does not extend to AI used for diagnosis, treatment planning, or other clinical functions. The bill does not specify the form, timing, or content of the notification, leaving significant implementation discretion to providers. No enforcement mechanism, penalties, or remedies are specified.
Any and all healthcare providers and healthcare facilities that employ artificial intelligence ("AI") to document in-person or telehealth visits shall notify patients of the use of AI for that sole purpose.
Pending 2027-01-01
T-01.1T-01.2
R.I. Gen. Laws § 6-63-3
Plain Language
Operators must provide a mandatory notification to every user at the start of each AI companion interaction and then at least every three hours during continuing interactions. The notification must be delivered either verbally or in bold, capitalized text of at least 16-point font size. The required language is prescribed verbatim: the system must identify itself (by name or generically) as a computer program, not a human, that is unable to feel human emotion. This is an unconditional obligation — it applies to all users regardless of whether they could be misled, and the precise wording is mandated by the statute. Operators should substitute the AI companion's actual name where the template says 'NAME OF THE AI COMPANION.'
An operator shall provide a notification to a user at the beginning of any AI companion interaction and at least every three (3) hours for continuing AI companion interactions hereafter, which states either verbally or in bold and capitalized letters of at least sixteen (16) point type, the following: "THE AI COMPANION (OR NAME OF THE AI COMPANION) IS A COMPUTER PROGRAM AND NOT A HUMAN BEING. IT IS UNABLE TO FEEL HUMAN EMOTION".
Pending 2026-02-13
T-01.1
R.I. Gen. Laws § 23-106-3
Plain Language
Healthcare providers and healthcare facilities that use AI to document patient visits — whether in-person or via telehealth — must notify the patient that AI is being used for that documentation purpose. The obligation is narrowly scoped: it applies only when AI is used to document visits (e.g., AI-powered ambient scribes, transcription tools, or clinical note generators), not when AI is used for diagnosis, treatment, or other clinical functions. The statute does not specify the form, timing, or content of the notification beyond that patients must be informed. No enforcement mechanism or penalties are provided.
Any and all healthcare providers and healthcare facilities that employ artificial intelligence ("AI") to document in-person or telehealth visits shall notify patients of the use of AI for that sole purpose.
Pre-filed 2026-01-01
T-01.1T-01.2T-01.3
S.C. Code § 39-80-30(B)
Plain Language
Before a chatbot generates any output, the provider must give the user clear, conspicuous, and explicit notice that they are interacting with a chatbot, not a human. This notice must be repeated at the beginning of each communication, every hour during continuing interactions, and each time a user asks whether the chatbot is a natural person. The notice must be in the chatbot's communication language, in a font at least as large as the largest font used elsewhere in chatbot communications, and must comply with Attorney General regulations. This is an unconditional disclosure — it applies regardless of whether a reasonable person would be misled.
(B) A chatbot provider shall provide clear, conspicuous, and explicit notice to a user that the user is interacting with a chatbot rather than a natural person before the chatbot may generate any output data. The chatbot provider shall include this notice at the beginning of each chatbot communication with a user every hour thereafter and each time a user asks whether the chatbot is a natural person. The text of the notice must: (1) be written in the same language that the chatbot communicates with the user and must appear in a font size that is easily readable by an average user and is not smaller than the largest font size used for other chatbot communications; and (2) must comply with the rules adopted and the regulations promulgated by the Attorney General pursuant to Section 39-80-40.
Pending 2025-01-01
T-01.1T-01.2T-01.3
S.C. Code § 39-80-30(B)
Plain Language
Chatbot providers must display a clear, conspicuous, and explicit notice that the user is interacting with a chatbot — not a human — before the chatbot generates any output. This is an unconditional obligation; it does not depend on whether a reasonable person could be misled. The notice must be repeated at the beginning of each communication session, every hour during continuing interactions, and each time a user asks whether the chatbot is a natural person. The notice must appear in the chatbot's operating language, in a font size at least as large as the largest font used for other chatbot communications, and must comply with Attorney General regulations. This is among the most demanding AI identity disclosure requirements in the U.S., with hourly re-disclosure and format specifications.
(B) A chatbot provider shall provide clear, conspicuous, and explicit notice to a user that the user is interacting with a chatbot rather than a natural person before the chatbot may generate any output data. The chatbot provider shall include this notice at the beginning of each chatbot communication with a user every hour thereafter and each time a user asks whether the chatbot is a natural person. The text of the notice must: (1) be written in the same language that the chatbot communicates with the user and must appear in a font size that is easily readable by an average user and is not smaller than the largest font size used for other chatbot communications; and (2) must comply with the rules adopted and the regulations promulgated by the Attorney General pursuant to Section 39-80-40.
Pending
T-01.1
S.C. Code § 37-31-40(A)-(B)
Plain Language
Deployers and developers that make available an AI system intended to interact with consumers must disclose to each consumer that they are interacting with an AI system. This obligation applies broadly to any consumer-facing AI system — not just high-risk systems. The disclosure is not required where it would be obvious to a reasonable person that they are interacting with AI. Note this is the inverse of the CA SB 243 pattern: here the default is disclosure required unless obviously AI, rather than disclosure required only when a reasonable person might be misled.
(A) Except as provided in subsection (B), a deployer or other developer that deploys, offers, sells, leases, licenses, gives, or otherwise makes available an artificial intelligence system that is intended to interact with consumers shall ensure the disclosure to each consumer who interacts with the artificial intelligence system that the consumer is interacting with an artificial intelligence system. (B) Disclosure is not required under subsection (A) under circumstances in which it would be obvious to a reasonable person that the person is interacting with an artificial intelligence system.
Pending 2026-01-01
T-01.1
S.C. Code § 39-81-40(B)(2)
Plain Language
Covered entities must implement systems to prevent their chatbot from making materially false claims that it is human. This is not a proactive disclosure requirement (the chatbot need not affirmatively state it is AI at the start of every interaction), but rather a prohibition on the chatbot making false representations of humanity. The 'materially false' qualifier suggests minor or incidental anthropomorphic language may not trigger liability — only affirmative misrepresentation of human status.
(B) A covered entity shall implement reasonable systems and processes to: (2) ensure that a chatbot does not make a materially false representation that it is a human being;
Pending 2026-07-01
T-01.1
Section 1 (new section added to ch. 37-24)
Plain Language
A person conducting a commercial transaction or trade practice may not require a consumer to interact with a chatbot, AI agent, avatar, or other conversational technology if the consumer could reasonably believe they are communicating with a human — unless the person provides clear and conspicuous notice at the outset of the interaction that the consumer is not speaking with a human. The obligation is conditional: it applies only when two elements are met — (1) the interaction uses conversational AI technology, and (2) a reasonable consumer could be misled into thinking they are dealing with a human. Providing the notice at the outset is a safe harbor that satisfies the requirement. The scope is limited to commercial transactions and trade practices — non-commercial uses of conversational AI are not covered.
Except as otherwise provided in this section, a person may not engage in a commercial transaction or trade practice with a consumer if: (1) The transaction or practice requires the consumer to communicate with or interact with a chatbot, an artificial intelligence agent, an avatar, or another form of computer technology that engages in a textual or aural conversation; and (2) The consumer could reasonably believe that the consumer is engaging with human. The prohibition set forth in this section does not apply if the consumer is notified, in a clear and conspicuous fashion, at the outset of the transaction or practice, that the consumer is not communicating with another human.
Enacted 2024-05-01
T-01.3
Utah Code § 13-2-12(3)
Plain Language
Any person deploying generative AI in connection with activities overseen by the Utah Division of Consumer Protection must, when asked by the person interacting with the AI, clearly and conspicuously disclose that the person is interacting with generative AI and not a human. This is an on-demand disclosure — it is triggered only when the individual asks or prompts, not proactively. Compare to the proactive disclosure required under subsection (4)(a) for regulated occupations, which does not require a user inquiry.
A person who uses, prompts, or otherwise causes generative artificial intelligence to interact with a person in connection with any act administered and enforced by the division, as described in Section 13-2-1, shall clearly and conspicuously disclose to the person with whom the generative artificial intelligence interacts, if asked or prompted by the person, that the person is interacting with generative artificial intelligence and not a human.
Enacted 2024-05-01
T-01.1
Utah Code § 13-2-12(4)(a)-(b), (5)
Plain Language
Providers of services in a regulated occupation (i.e., any occupation requiring a license or state certification from the Utah Department of Commerce) must proactively and prominently disclose whenever a consumer is interacting with generative AI in the delivery of those services. The disclosure must be given verbally at the start of any oral conversation and via electronic message before any written exchange. This is an unconditional proactive disclosure — unlike subsection (3), it does not require the consumer to ask. Subsection (4)(b) clarifies that this provision does not create a new authorization to provide regulated services via AI; all existing licensure and certification requirements remain in full effect.
(4) (a) A person who provides the services of a regulated occupation shall prominently disclose when a person is interacting with a generative artificial intelligence in the provision of regulated services. (b) Nothing in this section permits a person to provide the services of a regulated occupation through generative artificial intelligence without meeting the requirements of the regulated occupation. (5) A disclosure described Subsection (4)(a) shall be provided: (a) verbally at the start of an oral exchange or conversation; and (b) through electronic messaging before a written exchange.
Enacted 2025-05-07
T-01.3
Utah Code § 13-75-103(1)(a)-(b)
Plain Language
When a supplier uses generative AI to interact with a consumer in a consumer transaction, the supplier must disclose that the consumer is interacting with AI (not a human) if the consumer asks or prompts whether AI is being used. The consumer's question must be a clear and unambiguous request — vague or ambiguous inquiries do not trigger the obligation. This is an on-demand disclosure duty, not a proactive one: no disclosure is required unless the consumer affirmatively asks. Compare to the heightened obligation in § 13-75-103(2) for regulated occupations, which requires proactive disclosure without a consumer prompt.
(1)(a) A supplier that uses generative artificial intelligence to interact with an individual in connection with a consumer transaction shall disclose to the individual that the individual is interacting with generative artificial intelligence and not a human, if the individual asks or otherwise prompts the supplier about whether artificial intelligence is being used. (b) The individual's prompt or question under Subsection (1)(a) must be a clear and unambiguous request to determine whether the interaction is with a human or with artificial intelligence.
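Because only a clear and unambiguous request triggers the duty, an implementation needs some way to separate qualifying questions from vague ones. The pattern list below is purely an assumption for illustration; the statute does not enumerate trigger phrases:

import re

# Assumed examples of clear and unambiguous human-or-AI questions.
UNAMBIGUOUS_PATTERNS = [
    re.compile(r"are you (a )?(real )?(human|person)\b", re.IGNORECASE),
    re.compile(r"am i (talking|speaking|chatting) (to|with) a (human|person|bot)\b", re.IGNORECASE),
    re.compile(r"is this (an? )?(ai|bot|artificial intelligence)\b", re.IGNORECASE),
]

def on_demand_disclosure(message):
    # Subsection (1)(a): disclose when the individual asks whether AI is being used.
    if any(p.search(message) for p in UNAMBIGUOUS_PATTERNS):
        return ("You are interacting with generative artificial intelligence, "
                "not a human.")
    # Subsection (1)(b): vague or ambiguous inquiries do not trigger the duty.
    return None

In practice, erring toward disclosure on borderline questions is the safer design, and the safe harbor in § 13-75-104 (discussed below) removes the classification problem entirely.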
Enacted 2025-05-07
T-01.1
Utah Code § 13-75-103(2)-(3)
Plain Language
Individuals in regulated occupations (those regulated by the Utah Department of Commerce and requiring a license or state certification) must proactively and prominently disclose when a client is interacting with generative AI, if the use constitutes a high-risk AI interaction. This is an unconditional, proactive disclosure — unlike the consumer transaction rule in § 103(1), it does not wait for the consumer to ask. The disclosure must be provided verbally at the start of a verbal interaction and in writing before a written interaction begins. The high-risk trigger covers collection of sensitive personal data and personalized financial, legal, medical, or mental health advice, plus any additional categories the Division defines by rule. The provision also requires continued compliance with all existing requirements of the regulated occupation when delivering services through generative AI.
(2) An individual providing services in a regulated occupation shall: (a) prominently disclose when an individual receiving services is interacting with generative artificial intelligence in the provision of regulated services if the use of generative artificial intelligence constitutes a high-risk artificial intelligence interaction; and (b) comply with all requirements of the regulated occupation when providing services through generative artificial intelligence. (3) A disclosure required under Subsection (2) shall be provided: (a) verbally at the start of a verbal interaction; and (b) in writing before the start of a written interaction.
Enacted 2025-05-07
T-01.1T-01.2
Utah Code § 13-75-104(1)-(2)
Plain Language
A safe harbor protects any person from enforcement under the disclosure requirements of § 13-75-103 if their generative AI system clearly and conspicuously discloses — both at the outset and throughout the interaction — that it is generative AI, is not human, or is an AI assistant. This applies to both consumer transactions and regulated services. The practical takeaway: if you embed a persistent, prominent AI disclosure from the first message and maintain it throughout the session, you are shielded from enforcement even if you otherwise would have failed to comply with the on-demand or proactive disclosure requirements. The Division may issue rules specifying what forms and methods of disclosure satisfy or fail to satisfy this safe harbor.
(1) A person is not subject to an enforcement action for violating Section 13-75-103 if the person's generative artificial intelligence clearly and conspicuously discloses: (a) at the outset of any interaction with an individual in connection with: (i) a consumer transaction; or (ii) the provision of regulated services; and (b) throughout the interaction that it: (i) is generative artificial intelligence; (ii) is not human; or (iii) is an artificial intelligence assistant. (2) In accordance with Title 63G, Chapter 3, Utah Administrative Rulemaking Act, the division in consultation with the office, may make rules specifying forms and methods of disclosure that: (a) satisfy the requirements of Subsection (1); or (b) do not satisfy the requirements of Subsection (1).
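Read as a design pattern, the safe harbor rewards a persistent disclosure attached to the first message and to every subsequent turn. The sketch below is an assumed implementation of that pattern; the banner wording and the session wrapper are invented, and the Division's eventual rules would control what presentation actually qualifies.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Tuple

AI_BANNER = "[AI notice] This assistant is generative artificial intelligence, not a human."

@dataclass
class SafeHarborSession:
    """Wraps a reply generator so every turn carries the AI disclosure (illustrative)."""
    generate_reply: Callable[[str], str]
    turns: List[Tuple[str, str]] = field(default_factory=list)

    def send(self, user_message: str) -> str:
        reply = self.generate_reply(user_message)
        # Disclose at the outset and throughout the interaction, tracking § 13-75-104(1).
        framed = f"{AI_BANNER}\n{reply}"
        self.turns.append((user_message, framed))
        return framed

if __name__ == "__main__":
    session = SafeHarborSession(generate_reply=lambda m: f"Echo: {m}")
    print(session.send("Hi"))
    print(session.send("Can you help me renew a license?"))
```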
Pending 2027-01-01
T-01.1T-01.2T-01.3
§ 59.1-616(A)
Plain Language
Operators must provide AI identity disclosure to all users (not just minors) through two mechanisms: (1) a static, persistent disclaimer visible at all times indicating the companion chatbot is not a human, and (2) active pop-up notifications (or equivalent if pop-ups are not feasible) at three intervals — upon login, every 90 minutes of sustained engagement, and whenever the user asks. The persistent disclosure is always-on; the pop-up notifications are triggered at defined intervals. Unlike CA SB 243, which conditions disclosure on whether a reasonable person could be misled, Virginia requires unconditional disclosure to all users. The 90-minute re-disclosure interval is more frequent than some jurisdictions (e.g., CA SB 243's 3-hour interval).
A. An operator shall (i) include a disclaimer to users of all ages that a companion chatbot is not a human via a static, persistent disclosure and (ii) notify a user via a pop-up, or other communication if a pop-up is not feasible, that the user is not engaging with a human counterpart at the following intervals: 1. Upon login to the companion chatbot; 2. Every 90 minutes of sustained user engagement; and 3. When prompted by the user.
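A rough sketch of the pop-up cadence follows, assuming a server-side session clock. The 90-minute constant and the three triggers come from the bill; the session class, method names, and delivery mechanism are assumptions, and the static persistent disclaimer under clause (i) is assumed to be rendered separately in the interface.

```python
import time
from typing import Optional

REDISCLOSURE_SECONDS = 90 * 60  # every 90 minutes of sustained user engagement

class CompanionChatSession:
    """Tracks when the 'not a human' pop-up was last shown (illustrative)."""

    def __init__(self, now=time.time):
        self._now = now
        self.last_popup: Optional[float] = None

    def on_login(self) -> bool:
        # Trigger 1: upon login to the companion chatbot.
        self.last_popup = self._now()
        return True  # show the pop-up

    def should_popup(self, user_asked: bool) -> bool:
        # Trigger 3: when prompted by the user.
        if user_asked:
            self.last_popup = self._now()
            return True
        # Trigger 2: every 90 minutes of sustained engagement.
        if self.last_popup is None or self._now() - self.last_popup >= REDISCLOSURE_SECONDS:
            self.last_popup = self._now()
            return True
        return False
```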
Pending 2026-07-01
T-01.1
Va. Code § 59.1-615(2)
Plain Language
Covered entities must implement reasonable systems and processes to prevent their chatbots from making materially false representations that they are human beings. This goes beyond a disclosure obligation — it requires affirmative technical measures to ensure the chatbot itself does not claim to be human in its outputs. The standard is 'materially false representation,' which implies that incidental anthropomorphic language may not trigger a violation, but affirmative claims of being human would.
A covered entity shall implement reasonable systems and processes to:
2. Ensure that a chatbot does not make a materially false representation that it is a human being;
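"Reasonable systems and processes" is undefined, but one plausible technical layer is a pre-send output check that intercepts affirmative "I am human" claims. The sketch below is only an assumed, simplistic guard (pattern matching rather than a classifier) and should not be read as what the bill requires.

```python
import re

# Patterns for affirmative claims of being human (illustrative, not exhaustive).
HUMAN_CLAIM_PATTERNS = [
    r"\bi('m| am) (a )?(real )?(human|person)\b",
    r"\bi('m| am) not (an? )?(ai|bot|robot)\b",
]

CORRECTION = "To be clear: I am an AI chatbot, not a human being."

def guard_output(candidate_reply: str) -> str:
    """Replace materially false 'I am human' claims with a corrective disclosure."""
    lowered = candidate_reply.lower()
    if any(re.search(p, lowered) for p in HUMAN_CLAIM_PATTERNS):
        return CORRECTION
    return candidate_reply

if __name__ == "__main__":
    print(guard_output("I'm a real person, I promise."))
    print(guard_output("Here are your account options."))
```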
Pending 2026-07-01
T-01.1T-01.2T-01.3
Va. Code § 59.1-617
Plain Language
All operators (not just covered entities meeting the 500,000-user threshold) must provide two layers of AI identity disclosure: (1) a static, persistent disclaimer visible to all users at all times indicating the chatbot is not human, and (2) pop-up notifications at four specific trigger points — upon login, every 30 minutes of sustained engagement, whenever a user asks, and whenever the chatbot is about to provide advice in a licensed field such as medical, financial, or legal advice. The 30-minute interval is notably more frequent than some jurisdictions (e.g., California SB 243's 3-hour interval). The obligation applies to users of all ages and is unconditional — no 'reasonable person would be misled' threshold applies.
An operator shall (i) include a disclaimer to users of all ages that a chatbot is not a human via a static, persistent disclosure and (ii) notify a user via a pop-up that he is not engaging with a human counterpart at the following intervals:
1. Upon login to the chatbot;
2. Every 30 minutes of sustained user engagement;
3. When prompted by the user; and
4. When asked to provide advice legally regulated by a licensed industry, including medical, financial, or legal advice.
Pre-filed 2026-07-01
T-01.1
9 V.S.A. § 2466e(a)-(c)
Plain Language
Any person using a chatbot in a commercial transaction or trade practice with a consumer must provide a clear and conspicuous disclosure that the consumer is communicating with a chatbot, not a human — but only if the chatbot could mislead a reasonable person into thinking they are interacting with a human. The trigger is objective: it does not matter whether any particular consumer was actually misled. If the chatbot clearly presents as non-human and could not deceive a reasonable person, no disclosure is required. Failure to disclose constitutes an unfair and deceptive act under Vermont's Consumer Protection Act, exposing the violator to the full remedial framework of that statute.
(a) No person shall engage in a commercial transaction or trade practice with a consumer in which the consumer is communicating or otherwise interacting with a chatbot that may mislead or deceive a reasonable person to believe the person is engaging with an actual human, whether or not any consumer is in fact misled or deceived, unless the consumer is notified in a clear and conspicuous manner that the consumer is communicating with a chatbot and not an actual human being. (c) A person who violates subsection (a) of this section commits an unfair and deceptive act in commerce in violation of section 2453 of this title.
Pre-filed 2026-07-01
T-01.1T-01.2T-01.3
9 V.S.A. § 4193c(b)-(b)(3)
Plain Language
Chatbot providers must unconditionally disclose to users that they are interacting with an AI, not a human, at three trigger points: (1) before the chatbot generates any output; (2) every hour during continuing interactions; and (3) whenever a user asks whether the chatbot is a real person. The notice must be in the user's interaction language, in a font at least as large as the largest text on the interface, accessible to users with disabilities, and compliant with AG rules. This is an unconditional disclosure — it applies regardless of whether a reasonable person would be misled. The hourly re-disclosure frequency is stricter than CA SB 243's three-hour interval.
(b) Disclosure. Chatbot providers shall provide clear, conspicuous, and explicit notice to users that users are interacting with a chatbot rather than a human prior to the chatbot generating any outputs, every hour thereafter, and each time a user prompts the chatbot about whether it is a real person subject to the following: (1) The text of this notice must appear in the same language as the one in which the user is interacting with the chatbot, in a font size easily readable by an average user, and no smaller than the largest font size of other text appearing on the interface on which the chatbot is provided. (2) This notice must be accessible to users with disabilities. (3) This notice must comply with rules adopted by the Attorney General pursuant to this subchapter.
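The presentation constraints (language match, font size no smaller than the largest on-screen text, accessibility) lend themselves to a self-audit step before the notice is rendered. The checks below are assumptions about how a provider might validate a notice configuration; the Attorney General's rules would ultimately control.

```python
from typing import Dict, List

def notice_problems(
    notice: Dict[str, object],
    interface_font_sizes_px: List[int],
    interaction_language: str,
) -> List[str]:
    """List apparent problems with a chatbot notice under § 4193c(b)(1)-(2) (illustrative checks only)."""
    problems = []
    if notice.get("language") != interaction_language:
        problems.append("Notice language must match the language the user is interacting in.")
    if int(notice.get("font_size_px", 0)) < max(interface_font_sizes_px):
        problems.append("Notice font may be no smaller than the largest text on the interface.")
    if not notice.get("accessible", False):  # e.g., exposed to assistive technologies (assumption)
        problems.append("Notice must be accessible to users with disabilities.")
    return problems

if __name__ == "__main__":
    print(notice_problems(
        {"language": "en", "font_size_px": 14, "accessible": True},
        interface_font_sizes_px=[12, 14, 18],
        interaction_language="en",
    ))
```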
Pre-filed 2026-07-01
T-01.1
9 V.S.A. § 4193b(a)
Plain Language
When a user could reasonably mistake the companion chatbot for a human, the operator must display a clear, conspicuous notification that the chatbot is AI-generated and not human. The notification must match the language of the interaction and be sized for easy readability. This is a conditional trigger — if the chatbot is clearly identifiable as AI from the outset, no disclosure is required. Compare to the minor-specific provision in § 4193b(c)(1), which imposes an unconditional immediate disclosure.
If a user interacting with a companion chatbot could be reasonably misled to believe that the user is interacting with a human, an operator shall issue a clear and conspicuous notification to the individual indicating that the companion chatbot is artificially generated and not human. The text of the notification shall appear in the same language and in a size easily readable by the average viewer.
Pre-filed 2026-07-01
T-01.1T-01.2
9 V.S.A. § 4193b(c)(1)-(2)
Plain Language
When the operator knows a user is a minor (17 or younger), two unconditional obligations apply: (1) immediately disclose, in a clear and conspicuous manner, that the user is interacting with AI, with no reasonable-person trigger; and (2) send a prominent reminder at least every 30 minutes during continuing interactions that the chatbot is AI and the user should take a break. The 30-minute interval is a floor; operators may remind more frequently. These obligations are triggered only by actual knowledge that the user is a minor.
An operator shall, for a user that the operator knows is a minor, do the following: (1) immediately disclose to the user in a clear and conspicuous manner that the user is interacting with artificial intelligence; (2) provide a clear and conspicuous notification to the user at least every 30 minutes for continuing companion chatbot interactions that reminds the user to take a break and that the companion chatbot is artificially generated and not human;
Passed 2026-07-01
T-01.1
18 V.S.A. § 9752(a)-(b)
Plain Language
Health care providers using generative AI to create patient communications about clinical information must include a disclaimer that the communication was AI-generated and provide clear instructions for contacting a human provider. The disclaimer placement varies by medium: at the beginning for letters/emails, throughout for chat and video, and verbally at start and end for audio. A critical safe harbor applies: if a licensed human provider reads and reviews the AI-generated communication before it is sent, none of these requirements apply. Additionally, violations by licensed providers are subject to jurisdiction of the Office of Professional Regulation and Board of Medical Practice.
(a) Except as provided in subsection (b) of this section, any health care provider that uses generative artificial intelligence to generate written or verbal patient communications relating to patient clinical information shall ensure that those communications include both of the following: (1) A disclaimer that indicates to the patient that the communication was generated by generative artificial intelligence. (A) For written communications involving physical and digital media, including letters, emails, and other occasional messages, the disclaimer shall appear prominently at the beginning of each communication. (B) For written communications involving continuous online interactions, including chat-based telehealth, the disclaimer shall be prominently displayed throughout the interaction. (C) For audio communications, the disclaimer shall be provided verbally at the start and end of the interaction. (D) For video communications, the disclaimer shall be prominently displayed throughout the interaction. (2) Clear instructions describing how a patient may contact a human health care provider; an employee of the health care facility, clinic, physician's office, or office of a group provider; or other appropriate person. (b) If a communication is generated by generative artificial intelligence and read and reviewed by a licensed human health care provider, the requirements of subsection (a) of this section shall not apply.
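The placement rules branch cleanly on the communication medium, and the human-review safe harbor switches the duty off entirely. A compact sketch of that branching logic follows; the enum, helper function, and contact text are invented for illustration.

```python
from enum import Enum
from typing import List

class Medium(Enum):
    LETTER_OR_EMAIL = "occasional written"   # § 9752(a)(1)(A)
    CHAT = "continuous online"               # § 9752(a)(1)(B)
    AUDIO = "audio"                          # § 9752(a)(1)(C)
    VIDEO = "video"                          # § 9752(a)(1)(D)

AI_DISCLAIMER = "This communication was generated by generative artificial intelligence."
HUMAN_CONTACT = "To reach a human provider, use the contact instructions for your clinic."  # placeholder

def required_disclosures(medium: Medium, reviewed_by_licensed_provider: bool) -> List[str]:
    """Return the disclaimer, its placement rule, and contact instructions, or nothing if the safe harbor applies."""
    if reviewed_by_licensed_provider:
        return []  # § 9752(b): read-and-review by a licensed human provider lifts the requirements
    placement = {
        Medium.LETTER_OR_EMAIL: "Place the disclaimer prominently at the beginning of the communication.",
        Medium.CHAT: "Display the disclaimer prominently throughout the interaction.",
        Medium.AUDIO: "State the disclaimer verbally at the start and end of the interaction.",
        Medium.VIDEO: "Display the disclaimer prominently throughout the interaction.",
    }[medium]
    return [AI_DISCLAIMER, placement, HUMAN_CONTACT]
```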
Passed 2026-07-01
T-01.1T-01.3
18 V.S.A. § 9763(a)-(b)
Plain Language
Mental health chatbot suppliers must ensure the chatbot clearly and conspicuously discloses to Vermont users that it is AI and not a human. This disclosure is unconditional — it is not triggered by whether a reasonable person would be misled. Timing requirements are: (1) before the user can access chatbot features (initial gating), (2) at the start of any interaction if the user has not used the chatbot within the previous 7 days (re-disclosure after inactivity), and (3) whenever the user asks whether AI is being used (on-demand). The 7-day re-disclosure threshold is less aggressive than CA SB 243's 3-hour rule, but it operates on a different trigger: re-disclosure is keyed to a gap of more than seven days between uses rather than to elapsed time within a continuous session.
(a) A supplier of a mental health chatbot shall cause the mental health chatbot to clearly and conspicuously disclose to a Vermont user that the mental health chatbot is an artificial intelligence technology and not a human. (b) The disclosure described in subsection (a) of this section shall be made: (1) before the Vermont user may access the features of the mental health chatbot; (2) at the beginning of any interaction with the Vermont user if the Vermont user has not accessed the mental health chatbot within the previous seven days; and (3) any time a Vermont user asks or otherwise prompts the mental health chatbot about whether artificial intelligence is being used.
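The timing rule reduces to three checks: gate access behind the initial disclosure, re-disclose after more than seven days of inactivity, and disclose whenever the user asks. A small sketch follows; the seven-day window is the only statutory constant, and the function signature and boundary handling are assumptions.

```python
from datetime import datetime, timedelta
from typing import Optional

INACTIVITY_WINDOW = timedelta(days=7)  # § 9763(b)(2)

def needs_disclosure(
    last_use: Optional[datetime],
    now: datetime,
    user_asked_about_ai: bool,
    disclosed_before_access: bool,
) -> bool:
    """Decide whether the 'this is AI, not a human' disclosure must be shown now (illustrative)."""
    if not disclosed_before_access:
        return True  # (b)(1): before the user may access chatbot features
    if user_asked_about_ai:
        return True  # (b)(3): on-demand, whenever the user asks
    if last_use is None or now - last_use > INACTIVITY_WINDOW:
        return True  # (b)(2): start of an interaction after more than seven days away
    return False
```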
Passed 2027-02-01
T-01.1
Sec. 6(1)-(4)
Plain Language
Any Washington government agency that deploys an AI system intended to interact with consumers must unconditionally disclose to each consumer — before or at the time of interaction — that they are interacting with an AI system. The disclosure must be clear, conspicuously posted, in plain language, and may not employ dark patterns. It may be delivered via hyperlink to a separate web page. This is an unconditional requirement: the agency must disclose even if it would be obvious to a reasonable consumer that they are interacting with AI. The provision applies broadly to any AI system intended for consumer interaction, not just generative AI. No enforcement mechanism or penalty is specified for this provision.
(1) A government agency that makes available an artificial intelligence system intended to interact with consumers must disclose to each consumer, before or at the time of interaction, that the consumer is interacting with an artificial intelligence system. The disclosure must be: (a) Clear and conspicuously posted; (b) Written in plain language; and (c) May not use a dark pattern. (2) The disclosure may be provided by using a hyperlink to direct a consumer to a separate web page. (3) An agency is required to make the disclosure under subsection (1) of this section regardless of whether it would be obvious to a reasonable consumer that the consumer is interacting with an artificial intelligence system. (4) For the purposes of this section, "artificial intelligence system" has the same meaning as in section 1 of this act.
Pending 2027-01-01
T-01.1
Sec. 3(4)(a)-(e)
Plain Language
Before or at the time a deployer uses a high-risk AI system to interact with a consumer, the deployer must disclose that the consumer is interacting with an AI system. Simultaneously, the deployer must provide detailed information including the system's purpose, its nature, the type of consequential decision being made, deployer contact information, and a plain-language description covering what personal attributes the system measures, how it measures them, their relevance to the decision, what human components exist, and how automated components inform decisions. This is an unconditional disclosure obligation — it is triggered whenever the system interacts with a consumer, regardless of whether the consumer could be misled.
(4) Not later than the time that a deployer uses a high-risk artificial intelligence system to interact with a consumer, the deployer shall disclose to the consumer that the consumer is interacting with an artificial intelligence system. At such time, the deployer shall also disclose to the consumer: (a) The purpose of such high-risk artificial intelligence system; (b) The nature of such system; (c) The nature of the consequential decision; (d) The contact information for the deployer; and (e) A description of the artificial intelligence system in plain language, which must include: (i) A description of the personal characteristics or attributes that such system will measure or assess; (ii) The method by which the system measures or assesses such attributes or characteristics; (iii) How such attributes or characteristics are relevant to the consequential decisions for which the system should be used; (iv) Any human components of such system; and (v) How any automated components of such system are used to inform such consequential decisions.
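Because the required contents are enumerated, they map naturally onto a structured record that a deployer could render into the consumer-facing notice. The field names below track Sec. 3(4)(a) through (e); the dataclass and rendering method are implementation assumptions.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class HighRiskAIDisclosure:
    """Consumer-facing disclosure contents under Sec. 3(4)(a)-(e) (illustrative)."""
    system_purpose: str                 # (a)
    system_nature: str                  # (b)
    consequential_decision: str         # (c)
    deployer_contact: str               # (d)
    attributes_measured: List[str]      # (e)(i)
    measurement_method: str             # (e)(ii)
    relevance_to_decision: str          # (e)(iii)
    human_components: str               # (e)(iv)
    automated_component_role: str       # (e)(v)

    def render_plain_language(self) -> str:
        # Assumption: a single plain-language notice assembled from the enumerated fields.
        return (
            "You are interacting with an artificial intelligence system.\n"
            f"Purpose: {self.system_purpose}\n"
            f"Nature of the system: {self.system_nature}\n"
            f"Decision it informs: {self.consequential_decision}\n"
            f"What it measures: {', '.join(self.attributes_measured)}\n"
            f"How it measures them: {self.measurement_method}\n"
            f"Why that matters: {self.relevance_to_decision}\n"
            f"Human involvement: {self.human_components}\n"
            f"Role of automation: {self.automated_component_role}\n"
            f"Contact: {self.deployer_contact}"
        )
```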
Passed 2027-01-01
T-01.1T-01.2
Sec. 3(1)-(2)
Plain Language
Operators must display a clear, conspicuous notice to all users that the AI companion chatbot is artificially generated and not human. This notice must appear at the start of every interaction and be repeated at least every three hours during a continuous session. This obligation is unconditional — it applies to every interaction regardless of context. Note that Sec. 4 imposes a shorter interval (every hour) for minors.
(1) An operator must provide a clear and conspicuous disclosure that an AI companion chatbot is artificially generated and not human. (2) The notification described in subsection (1) of this section must be provided: (a) At the beginning of the interaction; and (b) At least every three hours during continued interaction.
Passed 2027-01-01
T-01.3
Sec. 3(3)
Plain Language
Operators must take reasonable measures to ensure that AI companion chatbots never claim to be human — whether proactively or in response to a direct question — and never generate outputs that contradict or undermine the mandatory AI identity disclosure. This is an affirmative design obligation requiring technical safeguards (e.g., system-level instructions, output filtering) to prevent the chatbot from asserting humanity. This provision applies to all users; Sec. 4(3) imposes the identical obligation specifically in the minor context.
(3) The operator must implement reasonable measures to prohibit and prevent AI companion chatbots from claiming to be human, including when asked by the person interacting with the AI chatbot, and from otherwise generating any output that refutes or conflicts with the disclosure described in subsection (1) of this section.
Passed 2027-01-01
T-01.1T-01.2
Sec. 4(1)(a), (2), (3)
Plain Language
When the operator knows the user is a minor, or the chatbot is directed to minors, heightened disclosure obligations apply: the AI identity notification must appear at the start of each interaction and be repeated at least every hour (compared to every three hours for general users under Sec. 3). The operator must also take reasonable measures to prevent the chatbot from claiming to be human or generating outputs contradicting the disclosure. The 'directed to minors' trigger is broader than CA SB 243, which requires actual knowledge — here, if the product is designed for or marketed to minors, the heightened obligations apply automatically.
(1) If the operator knows that the user of an AI companion chatbot is a minor, or if the AI companion chatbot is directed to minors, the operator shall: (a) Issue a clear and conspicuous notification indicating that the chatbot is artificially generated and not human; (2) The notification described in subsection (1)(a) of this section must be provided: (a) At the beginning of the interaction; and (b) At least every hour during continuous interaction. (3) The operator must implement reasonable measures to prohibit and prevent AI companion chatbots from claiming to be human, including when asked by the person interacting with the AI chatbot, and from otherwise generating any output that refutes or conflicts with the notification described in subsection (1) of this section.
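The only moving part between Sec. 3 and Sec. 4 is the re-notification cadence, which turns on known minor status or minor-directed design. A sketch of that selection logic, assuming the operator tracks both signals:

```python
from datetime import timedelta

GENERAL_INTERVAL = timedelta(hours=3)  # Sec. 3: general users
MINOR_INTERVAL = timedelta(hours=1)    # Sec. 4: known minors or minor-directed chatbots

def redisclosure_interval(known_minor: bool, directed_to_minors: bool) -> timedelta:
    """Pick the notification cadence: hourly in the minor context, every three hours otherwise."""
    if known_minor or directed_to_minors:
        return MINOR_INTERVAL
    return GENERAL_INTERVAL

def due_for_notification(
    elapsed_since_last: timedelta, known_minor: bool, directed_to_minors: bool
) -> bool:
    return elapsed_since_last >= redisclosure_interval(known_minor, directed_to_minors)
```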
Pending 2026-07-01
T-01.1
Sec. 10(1)-(3)
Plain Language
Government agencies that deploy an AI system intended to interact with consumers must disclose — before or at the time of interaction — that the consumer is interacting with an AI system. The disclosure must be clear, conspicuously posted, written in plain language, and free of dark patterns. A hyperlink to a separate web page is an acceptable format. The disclosure is unconditional — it must be provided even if a reasonable consumer would already realize they are interacting with AI. Note this provision applies to any AI system (not just high-risk), and the obligated party is a government agency rather than a private deployer. This section is codified in Title 42 RCW, separate from the private-sector obligations in Title 19 RCW.
(1) A government agency that makes available an artificial intelligence system intended to interact with consumers must disclose to each consumer, before or at the time of interaction, that the consumer is interacting with an artificial intelligence system. The disclosure must be: (a) Clear and conspicuously posted; (b) Written in plain language; and (c) May not use a dark pattern. (2) The disclosure may be provided by using a hyperlink to direct a consumer to a separate web page. (3) A person is required to make the disclosure under subsection (1) of this section regardless of whether it would be obvious to a reasonable consumer that the consumer is interacting with an artificial intelligence system.
Pending 2027-01-01
T-01.1T-01.2T-01.3
Sec. 3(1)-(3)
Plain Language
Operators must unconditionally disclose that an AI companion chatbot is AI-generated and not human — this is not conditioned on whether a reasonable person would be misled. The disclosure must appear at the beginning of the interaction and be repeated at least every three hours during continued use. In addition, operators must take reasonable measures to prevent the chatbot from claiming to be human at any time, including when directly asked by a user, and from generating any output that contradicts the AI disclosure. This combines an affirmative disclosure obligation with a prohibition on deceptive outputs that would undermine it.
(1) An operator must provide a clear and conspicuous disclosure that an AI companion chatbot is artificially generated and not human. (2) The notification described in subsection (1) of this section must be provided: (a) At the beginning of the interaction; and (b) At least every three hours during continued interaction. (3) The operator must implement reasonable measures to prohibit and prevent AI companion chatbots from claiming to be human, including when asked by the person interacting with the AI chatbot, and from otherwise generating any output that refutes or conflicts with the disclosure described in subsection (1) of this section.
Pending 2027-01-01
T-01.1T-01.2T-01.3
Sec. 4(1)(a), (2), (3)
Plain Language
When the operator knows a user is a minor, or when the AI companion chatbot is directed to minors, the operator must disclose that the chatbot is AI-generated and not human at the beginning of the interaction and repeat that disclosure at least every hour during continuous use — a significantly more frequent reminder cadence than the three-hour interval for general users under Section 3. The operator must also prevent the chatbot from claiming to be human, including when directly asked. The trigger is either actual knowledge of the user's minor status or the chatbot being directed to minors as a product category.
(1) If the operator knows that the user of an AI companion chatbot is a minor, or if the AI companion chatbot is directed to minors, the operator shall: (a) Issue a clear and conspicuous notification indicating that the chatbot is artificially generated and not human; (2) The notification described in subsection (1) of this section must be provided: (a) At the beginning of the interaction; and (b) At least every hour during continuous interaction. (3) The operator must implement reasonable measures to prohibit and prevent AI companion chatbots from claiming to be human, including when asked by the person interacting with the AI chatbot, and from otherwise generating any output that refutes or conflicts with the notification described in subsection (1) of this section.
Pending 2026-07-01
T-01.1
Sec. 11(1)-(3)
Plain Language
Government agencies that deploy AI systems intended to interact with consumers must disclose — before or at the time of interaction — that the consumer is interacting with AI. The disclosure must be clear, conspicuously posted, written in plain language, and may not use dark patterns. A hyperlink to a separate web page is acceptable. Critically, the disclosure is unconditional — it must be made even if it would be obvious to a reasonable consumer that they are interacting with AI. This applies to any AI system (not just high-risk systems) and covers government agencies (which are excluded from the 'person' definition and therefore from the Title 19 chapter's deployer/developer obligations). This section is codified in Title 42 RCW and does not include a specified enforcement mechanism.
(1) A government agency that makes available an artificial intelligence system intended to interact with consumers must disclose to each consumer, before or at the time of interaction, that the consumer is interacting with an artificial intelligence system. The disclosure must be: (a) Clear and conspicuously posted; (b) Written in plain language; and (c) May not use a dark pattern. (2) The disclosure may be provided by using a hyperlink to direct a consumer to a separate web page. (3) A person is required to make the disclosure under subsection (1) of this section regardless of whether it would be obvious to a reasonable consumer that the consumer is interacting with an artificial intelligence system.
Pending 2027-01-01
T-01.1T-01.2
§33-57-2(c)
Plain Language
Operators and licensed professionals must provide a clear and conspicuous notification — verbal or written — to users at the beginning of any AI companion interaction stating that the user is not communicating with a human. The initial disclosure need not be given more than once per day. For continuing AI companion interactions, a reminder must be provided at least every three hours. This is an unconditional disclosure requirement — it applies to every AI companion interaction regardless of whether a reasonable person would be misled.
(c) An operator or licensed professional shall provide a clear and conspicuous notification to a user at the beginning of any AI companion interaction which need not exceed once per day and at least every three hours for continuing AI companion interactions which states either verbally or in writing that the user is not communicating with a human.
Enacted 2026-01-01
T-01.1
Bus. & Prof. Code § 22602(a)
Plain Language
If a user could reasonably mistake the chatbot for a real person, the operator must display a clear, prominent notice that the companion chatbot is AI-generated and not human. This is a conditional trigger — if the chatbot's presentation already makes its artificial nature apparent such that no reasonable person would be misled, no disclosure is required. Compare to jurisdictions that impose an unconditional disclosure at the start of every interaction regardless of whether a reasonable person would be misled.
If a reasonable person interacting with a companion chatbot would be misled to believe that the person is interacting with a human, an operator shall issue a clear and conspicuous notification indicating that the companion chatbot is artificially generated and not human.
Enacted 2026-01-01
T-01.1
Bus. & Prof. Code § 22602(c)(1)-(2)
Plain Language
For users the operator knows are minors, the operator must disclose that the user is interacting with AI — unconditionally, with no reasonable-person standard. Actual knowledge of minor status is required to trigger this obligation.
An operator shall, for a user that the operator knows is a minor, do all of the following: (1) Disclose to the user that the user is interacting with artificial intelligence.
Enacted 2026-01-01
T-01.2
Bus. & Prof. Code § 22602(c)(1)-(2)
Plain Language
For users the operator knows are minors, a clear and prominent reminder must be sent at least every three hours during ongoing interactions that the chatbot is AI and the user should take a break. The three-hour interval is a floor — operators may remind more frequently. Actual knowledge of minor status is required to trigger this obligation.
An operator shall, for a user that the operator knows is a minor, do all of the following: ... (2) Provide by default a clear and conspicuous notification to the user at least every three hours for continuing companion chatbot interactions that reminds the user to take a break and that the companion chatbot is artificially generated and not human.