T-01
Transparency & Disclosure
AI Identity Disclosure
Users must be informed when they are interacting with an AI system rather than a human. Some jurisdictions impose initial disclosure unconditionally; others only when a reasonable person could be misled. Periodic re-disclosure requirements apply primarily to companion and extended-session AI. On-demand disclosure requires the system to accurately identify itself as AI whenever a user asks.
Applies to: Developer, Deployer, Professional, Government Sector, Chatbot
Bills — Enacted: 10 unique bills
Bills — Proposed: 92
Last Updated: 2026-03-29
Sub-Obligations: 3
Bills That Map This Requirement: 102 bills

Bill | Status | Sub-Obligations | Section
Pending 2026-10-01
T-01.1, T-01.2
Section 2(a)(1)-(2)
Plain Language
Any person conducting a commercial transaction or trade practice with a consumer through an AI chatbot must disclose — verbally or in writing — that the consumer is communicating with a computer, not a human, if the consumer could reasonably believe they are engaging with a human. This disclosure must occur at the beginning of each interaction and must be repeated at regular intervals during continuing interactions. The trigger is a reasonable-person standard: if the chatbot's interface is clearly non-human, the disclosure obligation may not apply. The statute does not specify the interval frequency, leaving 'regular interval' to judicial interpretation.
(a) A person that engages in a commercial transaction or trade practice with a consumer through an AI chatbot, in textual or aural conversation, where the consumer may reasonably believe the consumer is engaging with a human, shall notify the consumer verbally or in writing: (1) At the beginning of each interaction that the consumer is communicating with a computer, not a human; and (2) At a regular interval for continuing interactions that the consumer is communicating with a computer, not a human.
Pending 2027-10-01
T-01.1, T-01.2
A.R.S. § 18-802(A)
Plain Language
Operators must clearly and conspicuously disclose to minor account holders that they are interacting with a conversational AI service. The operator may satisfy this obligation in one of two ways: (1) a persistent visible disclaimer displayed throughout the interaction, or (2) a disclosure at the beginning of each session plus a reminder at least every three hours during continuous interactions. This obligation is unconditional — it applies whenever the operator has actual knowledge or reasonable certainty that the user is under 18, regardless of whether the user could be misled.
A. Each operator shall clearly and conspicuously disclose to a minor account holder in either of the following ways that the minor is interacting with a conversational AI service: 1. As a persistent visible disclaimer. 2. At the beginning of each session and appearing at least every three hours in a continuous conversational AI service interaction.
Pending 2027-10-01
T-01.1
A.R.S. § 18-802(E)
Plain Language
When a reasonable person could be misled into thinking they are interacting with a human, the operator must clearly and conspicuously disclose that the conversational AI service is artificial intelligence. This is a conditional trigger — it applies to all users (not just minors) but only when the AI's presentation could reasonably mislead someone into thinking they are talking to a human. If the system clearly presents as AI, no disclosure is required under this provision.
E. If a reasonable person would be misled to believe that the person is interacting with a human, an operator shall clearly and conspicuously disclose that the conversational AI service is artificial intelligence.
Pending 2026-01-01
T-01.1, T-01.2, T-01.3
A.R.S. § 44-1383.02(B)
Plain Language
Chatbot providers must display a clear, conspicuous, and explicit notice that the user is interacting with a chatbot — not a human — before the chatbot generates any output. This disclosure is unconditional (not triggered by a 'reasonable person' test). The notice must repeat at the beginning of each communication, every hour during ongoing interactions, and whenever a user asks if the chatbot is a natural person. The notice must be in the same language as the chatbot's communications and in a font size at least as large as the largest font used in other chatbot communications. Notice form and content must also comply with Attorney General rules.
A chatbot provider shall provide clear, conspicuous and explicit notice to a user that the user is interacting with a chatbot rather than a natural person before the chatbot may generate any output data. The chatbot provider shall include this notice at the beginning of each chatbot communication with a user, every hour thereafter and each time a user asks whether the chatbot is a natural person. The text of the notice: 1. shall be written in the same language that the chatbot communicates with the user and shall appear in a font size that is easily readable by an average user and is not smaller than the largest font size used for other chatbot communications. 2. must comply with the rules adopted by the attorney general pursuant to section 44-1383.03.
Pending 2027-01-01
T-01.1, T-01.3
Bus. & Prof. Code § 22626(a)-(c)
Plain Language
Operators must never represent that a customer service chatbot is human. Additionally, if a reasonable person could be misled into thinking they are interacting with a human, the operator must provide a clear, conspicuous disclosure that the system is AI-generated and not human. The disclosure must inform the person they are interacting with an automated system, be accessible throughout the interaction, and be in plain language. For voice-based interfaces, the disclosure must be audible and repeated on request. The prohibition on misrepresentation in subdivision (a) is unconditional; the affirmative disclosure obligation in subdivision (b) is triggered only when a reasonable person could be misled.
(a) An operator of a large private business shall not represent that any artificial intelligence, automated customer service system, or customer service chatbot is a human.
(b) An operator that makes a customer service chatbot available to a person in this state shall provide a clear and conspicuous disclosure that the customer service chatbot is artificially generated and not human if a reasonable person interacting with the customer service chatbot would be misled to believe that the person is interacting with a human.
(c) The disclosure required by subdivision (b) shall do all the following:
(1) Inform the person that they are interacting with a customer service chatbot, artificial intelligence system, or similar automated system, and that the system is not a human being.
(2) For audio-only or voice-based interfaces, be provided in an audible form and repeated upon the person's request.
(3) Be readily accessible throughout the customer interaction.
(4) Be presented in plain language that is understandable to an ordinary consumer.
Enacted 2019-07-01
T-01.1
Bus. & Prof. Code § 17941(a)-(b)
Plain Language
Any person who uses a bot to communicate with another person in California online is prohibited from doing so with intent to mislead about the bot's artificial identity for the purpose of knowingly deceiving the person about the communication's content to drive a commercial transaction or influence an election vote. The safe harbor is straightforward: if you disclose that the account is a bot, you are not liable. The disclosure must be clear, conspicuous, and reasonably designed to inform the person they are interacting with a bot. Note the high intent threshold — liability requires both (1) intent to mislead about artificial identity and (2) a purpose of knowingly deceiving about communication content for a commercial or electoral objective. Mere failure to disclose bot status, without the specific deceptive intent and purpose, does not trigger liability.
(a) It shall be unlawful for any person to use a bot to communicate or interact with another person in California online, with the intent to mislead the other person about its artificial identity for the purpose of knowingly deceiving the person about the content of the communication in order to incentivize a purchase or sale of goods or services in a commercial transaction or to influence a vote in an election. A person using a bot shall not be liable under this section if the person discloses that it is a bot. (b) The disclosure required by this section shall be clear, conspicuous, and reasonably designed to inform persons with whom the bot communicates or interacts that it is a bot.
Pending 2027-07-01
T-01.1, T-01.2
Bus. & Prof. Code § 22612(d)(4)(A)-(B)
Plain Language
Operators must implement a mechanism to notify child users that they are interacting with or receiving content from an AI system. The notice must be periodically reinforced during extended interactions — not just shown once at the start — and must be presented in child-appropriate language and format. This is an unconditional disclosure obligation for all child users, unlike CA SB 243's conditional trigger based on whether a reasonable person could be misled.
(4) A mechanism for providing notice to a child user that the child is interacting with, or receiving content generated by, an artificial intelligence system that meets both of the following criteria: (A) The notice is reinforced periodically during extended interactions. (B) The notice is presented in language and a format appropriate to a child.
Enacted 2025-01-01
T-01.1
Gov. Code § 11549.66(a)(1)-(4)
Plain Language
Any California state agency or department that uses generative AI to communicate directly with individuals about government services and benefits must include a disclaimer indicating the communication was generated by GenAI. The disclaimer format varies by medium: for written letters and emails, it must appear prominently at the start; for continuous online interactions like chatbots, it must be displayed throughout; for audio, it must be stated verbally at the start and end; and for video, it must be displayed throughout. This is an unconditional disclosure — there is no 'reasonable person' trigger. It applies only to state government entities, not private-sector operators.
A state agency or department that utilizes GenAI to directly communicate with a person regarding government services and benefits shall ensure that those communications include both of the following: (a) A disclaimer that indicates to the person that the communication was generated by GenAI. (1) For written communications involving physical and digital media, including letters, email, and other occasional messages, the disclaimer shall appear prominently at the start of each communication. (2) For written communications involving continuous online interactions, including interactions with chatbots, the disclaimer shall be prominently displayed throughout the interaction. (3) For audio communications, the disclaimer shall be provided verbally at the start and end of the interaction. (4) For video communications, the disclaimer shall be prominently displayed throughout the interaction.
Pending 2027-01-01
T-01.1, T-01.2, T-01.3
C.R.S. § 6-1-1708(1)(a)
Plain Language
When an operator knows or has reasonable certainty that a user is a minor (under 18), the operator must clearly and conspicuously disclose that the user is interacting with AI, not a human. The disclosure must satisfy at least one of three formats: (1) a persistent visible disclaimer always on screen, (2) a notice at the beginning of each interaction plus at least every three hours during continuous sessions, or (3) a response to user questions about whether the AI is human or sentient. Unlike the general consumer disclosure in § 6-1-1708(2), this obligation is unconditional — it applies regardless of whether a reasonable person would be misled. The three disclosure formats are presented as alternatives (joined by 'or'), so an operator need only implement one.
On and after January 1, 2027, if an operator knows or has reasonable certainty that a user of a conversational artificial intelligence service is a minor, the operator shall: (a) Clearly and conspicuously disclose to the minor user that the minor user is interacting with artificial intelligence that is artificially generated and not human. The disclosure must be: (I) A persistent visible disclaimer; (II) Provided at the beginning of each interaction with a conversational artificial intelligence service and must appear at least once every three hours in a continuous conversational artificial intelligence service interaction; or (III) Provided in response to user prompts regarding whether the conversational artificial intelligence service is human or artificially sentient;
Pending 2027-01-01
T-01.1, T-01.2, T-01.3
C.R.S. § 6-1-1708(2)
Plain Language
When a reasonable person could be misled into thinking they are interacting with a human, the operator must clearly and conspicuously disclose that the service is AI. Unlike the minor-specific disclosure in § 6-1-1708(1)(a), this general consumer provision uses conjunctive requirements — the operator must satisfy all three: (1) disclose at the beginning of the user's first interaction each day, (2) remind at least every three hours during continuous sessions, and (3) respond accurately when a user asks whether the AI is human or sentient. The 'reasonable person' trigger is conditional — if the AI is clearly non-human from context, no disclosure is required.
On and after January 1, 2027, if a reasonable person would be misled to believe that the person is interacting with a human in an interaction with a conversational artificial intelligence service, an operator shall clearly and conspicuously disclose to the person that the conversational artificial intelligence service is artificial intelligence. The disclosure must: (a) Be provided at the beginning of a user's first interaction with a conversational artificial intelligence service for each day of interaction; (b) Appear at least once every three hours in a continuous conversational artificial intelligence service interaction; and (c) Be provided in response to user prompts regarding whether the conversational artificial intelligence service is human or artificially sentient.
Enacted 2026-06-30
T-01.1
C.R.S. § 6-1-1704(1)
Plain Language
Deployers or developers who make available an AI system intended to interact with consumers must disclose to each consumer that they are interacting with an AI system. This is an unconditional disclosure obligation — it does not depend on whether a reasonable person would be misled. It applies broadly to any AI system intended for consumer interaction, not just high-risk systems. Exceptions are provided in subsection (2) of the original statute. This disclosure trigger is broader than that of statutes like California SB 243, which condition disclosure on a reasonable-person misleading standard.
(1) On and after June 30, 2026, and except as provided in subsection (2) of this section, a deployer or other developer that deploys, offers, sells, leases, licenses, gives, or otherwise makes available an artificial intelligence system that is intended to interact with consumers shall ensure the disclosure to each consumer who interacts with the artificial intelligence system that the consumer is interacting with an artificial intelligence system.
Pending 2026-10-01
T-01.1
Sec. 3(a)-(b)
Plain Language
Deployers must disclose to each applicant or employee who interacts with an automated employment-related decision process that the person is interacting with an automated system. This disclosure is not required where a reasonable person would deem it obvious they are interacting with an automated process. The obligation may be contractually shifted to the developer under Section 2(b).
(a) Except as provided in subsection (b) of this section and subsection (b) of section 2 of this act, a deployer who deploys an automated employment-related decision process that is intended to interact with an applicant for employment or employee in the state shall ensure that it is disclosed to each such applicant or employee who interacts with such process that such applicant or employee is interacting with an automated employment-related decision process. (b) No disclosure shall be required under subsection (a) of this section under circumstances in which a reasonable person would deem it obvious that such person is interacting with an automated employment-related decision process.
Failed 2026-07-01
T-01.1, T-01.2
Fla. Stat. § 501.9984(2)(a)-(b)
Plain Language
For all minor account holders, companion chatbot platforms must: (1) unconditionally disclose that the user is interacting with AI, and (2) provide a clear and conspicuous notification at the beginning of each interaction and at least every hour during continuing interactions reminding the minor to take a break and that the chatbot is AI, not human. The hourly interval is notably more frequent than some other jurisdictions (e.g., California SB 243 requires every three hours). These are default-on obligations — not configurable by the minor.
In connection to all accounts or identifiers held by account holders who are minors, the companion chatbot platform shall do all of the following: (a) Disclose to the account holder that he or she is interacting with artificial intelligence. (b) Provide by default a clear and conspicuous notification to the account holder, at the beginning of companion chatbot interactions and at least once every hour during continuing interactions, reminding the minor to take a break and that the companion chatbot is artificially generated and not human.
Failed 2026-07-01
T-01.1, T-01.2
Fla. Stat. § 501.9985(1)
Plain Language
All bot operators must display a pop-up or other prominent notification at the start of every user interaction, and at least every hour during continuing interactions, informing the user they are not speaking with a human. For non-screen interactions, the operator must otherwise inform the user. This is an unconditional obligation — it applies regardless of whether a reasonable person would be misled. The only carve-out is for bots used solely by employees for internal business operations. Operators may demonstrate compliance during a cure period by showing persistent and conspicuous identity indicators aligned with the NIST AI RMF and ISO 42001.
At the beginning of an interaction between a user and a bot, and at least once every hour during the interaction, an operator shall display a pop-up message or other prominent notification notifying the user or, if the interaction is not through a device with a screen, otherwise inform the user, that he or she is not engaging in dialogue with a human counterpart. This section does not apply to a bot that is used solely by employees within a business for its internal operational purposes.
Failed 2026-07-01
T-01.1
Fla. Stat. § 1006.1495(3)
Plain Language
Before any minor student receives access credentials for an AI instructional tool, the educational entity must give the parent written notice identifying the tool and its educational purpose, describing how it will be used, explaining the opt-out process, and explaining how the parent can access the student's account or request access to account information and activity. This is a pre-access notice requirement — credentials may not be issued until notice has been provided.
Before a student is provided access credentials for an artificial intelligence instructional tool, the educational entity must provide the parent of a minor student with notice that: (a) Identifies the tool and its educational purpose; (b) Describes, in general terms, the manner in which the tool will be used by students; (c) Explains how the parent may exercise the opt-out process under subsection (4); and (d) Explains how the parent may access the student's account or request access to information and account activity under subsection (5), including the method for submitting a written request.
Failed 2026-07-01
T-01.1, T-01.2
Fla. Stat. § 501.9984(2)(a)-(b)
Plain Language
For all minor account holders, companion chatbot platforms must (1) unconditionally disclose that the user is interacting with AI, and (2) provide a clear, conspicuous notification at the start of every interaction and at least once every hour during continuing interactions reminding the minor to take a break and that the chatbot is AI-generated, not human. The hourly notification is a minimum — platforms may notify more frequently. Both obligations are unconditional for minor accounts; there is no 'reasonable person' trigger.
In connection to all accounts or identifiers held by account holders who are minors, the companion chatbot platform shall do all of the following: (a) Disclose to the account holder that he or she is interacting with artificial intelligence. (b) Provide by default a clear and conspicuous notification to the account holder, at the beginning of companion chatbot interactions and at least once every hour during continuing interactions, reminding the minor to take a break and that the companion chatbot is artificially generated and not human.
Failed 2026-07-01
T-01.1, T-01.2
Fla. Stat. § 501.9985(1)
Plain Language
All bot operators must display a pop-up or other prominent notification at the start of every interaction and at least hourly during ongoing interactions informing the user that they are not communicating with a human. For non-screen interactions, the operator must otherwise inform the user. This is an unconditional disclosure obligation — it applies regardless of whether a reasonable person would be misled. Internal employee-only bots are exempt. The safe harbor allows operators to demonstrate compliance by showing persistent and conspicuous identity indicators conforming with NIST AI RMF or ISO 42001. This section expressly excludes private suits under ss. 501.211 and 501.212; enforcement is solely by the Department of Legal Affairs.
At the beginning of an interaction between a user and a bot, and at least once every hour during the interaction, an operator shall display a pop-up message or other prominent notification notifying the user or, if the interaction is not through a device with a screen, otherwise inform the user, that he or she is not engaging in dialogue with a human counterpart. This section does not apply to a bot that is used solely by employees within a business for its internal operational purposes.
Failed 2026-07-01
T-01.1, T-01.2
Fla. Stat. § 501.1739(7)
Plain Language
Operators must display a pop-up notification at the start of every user interaction with a companion AI chatbot, and at least every 60 minutes thereafter during a continuing interaction, informing users that they are not engaging in dialogue with a human. This is an unconditional disclosure obligation — it applies to all users (adults and minors alike) regardless of whether a reasonable person would be misled. The pop-up is a dismissible on-screen notification that the user can resolve by interacting with it. Compare to CA SB 243, which requires reminders every three hours for minors only; this Florida bill imposes the hourly reminder frequency on all users.
(7) At the beginning of any interaction between a user and a companion AI chatbot, and no less frequently than every 60 minutes thereafter during such interaction, an operator shall display a pop-up that notifies users that they are not engaging in dialogue with a human counterpart.
Failed 2026-07-01
T-01.1, T-01.2
Fla. Stat. § 501.1739(7)
Plain Language
Operators must display a pop-up notification at the start of every companion AI chatbot interaction and at least every 60 minutes during continuing interactions, informing the user they are not communicating with a human. This is unconditional — it applies to all users regardless of whether a reasonable person would be misled. The pop-up must be a visible on-screen notification that the user can dismiss by interacting with it. Compare to CA SB 243, which requires three-hour periodic reminders for minors and conditions adult disclosure on a reasonable-person trigger; FL SB 1344 imposes a stricter 60-minute interval on every interaction, for all users regardless of age.
(7) At the beginning of any interaction between a user and a companion AI chatbot, and no less frequently than every 60 minutes thereafter during such interaction, an operator shall display a pop-up that notifies users that they are not engaging in dialogue with a human counterpart.
Failed 2026-07-01
T-01.1, T-01.2
Fla. Stat. § 501.9984(2)(a)-(b)
Plain Language
For all minor account holders, the platform must unconditionally disclose that the user is interacting with AI, and must display a clear, conspicuous reminder at the start and at least every hour during ongoing interactions that the chatbot is AI-generated and that the minor should take a break. The hourly reminder interval is more frequent than California SB 243's every-three-hours floor, making this a stricter periodic disclosure requirement. Both obligations are unconditional — they apply regardless of whether the minor could be misled.
In connection to all accounts or identifiers held by account holders who are minors, the companion chatbot platform shall do all of the following: (a) Disclose to the account holder that he or she is interacting with artificial intelligence. (b) Provide by default a clear and conspicuous notification to the account holder, at the beginning of companion chatbot interactions and at least once every hour during continuing interactions, reminding the minor to take a break and that the companion chatbot is artificially generated and not human.
Failed 2026-07-01
T-01.1, T-01.2
Fla. Stat. § 501.9985(1)
Plain Language
All bot operators must display a pop-up or other prominent notification at the start of every user interaction — and at least once every hour during continuing interactions — informing the user they are not communicating with a human. For non-screen interactions (e.g., voice), the operator must otherwise inform the user. This applies to all bots, not just companion chatbots, making it a broad AI identity disclosure obligation. Internal-use-only bots used solely by employees for business operational purposes are exempt. The hourly reminder requirement applies to all users regardless of age, which is more expansive than California SB 243 (which imposes periodic reminders only for known minors). During enforcement, operators may present evidence of NIST AI RMF/ISO 42001-aligned identity indicators and disclosures as mitigating factors.
At the beginning of an interaction between a user and a bot, and at least once every hour during the interaction, an operator shall display a pop-up message or other prominent notification notifying the user or, if the interaction is not through a device with a screen, otherwise inform the user, that he or she is not engaging in dialogue with a human counterpart. This section does not apply to a bot that is used solely by employees within a business for its internal operational purposes.
Pending 2025-07-01
T-01.1
O.C.G.A. § 10-16-11(a)-(b)
Plain Language
Deployers and developers that make available any AI system intended to interact with consumers must disclose to each interacting consumer that they are interacting with an AI system. This is a conditional obligation — no disclosure is required if it would be obvious to a reasonable person that the interaction is with AI. This applies to all AI systems intended for consumer interaction, not just automated decision systems, making it broader in scope than most other provisions in this chapter.
(a) Except as provided in subsection (b) of this Code section, a deployer or other developer that deploys, offers, sells, leases, licenses, gives, or otherwise makes available an artificial intelligence system that is intended to interact with consumers shall ensure the disclosure to each consumer who interacts with the artificial intelligence system that the consumer is interacting with an artificial intelligence system. (b) Disclosure is not required under subsection (a) of this Code section under circumstances in which it would be obvious to a reasonable person that the person is interacting with an artificial intelligence system.
Passed 2025-07-01
T-01.1, T-01.2
O.C.G.A. § 39-5-6(b)
Plain Language
Operators must unconditionally disclose to minor account holders that they are interacting with an AI, not a human. The operator may satisfy this through either a constantly visible disclaimer or a notice at the beginning of each session plus reminders at least every three hours during continuous interactions. This obligation is unconditional for minors — unlike subsection (e), which applies to all users only when a reasonable person could be misled.
An operator shall clearly and conspicuously disclose to a minor account holder that he or she is interacting with a conversational AI service as opposed to a natural person: (1) With a constantly visible disclaimer; or (2) At the beginning of each session and appearing at least every three hours in a continuous conversational AI service interaction.
Passed 2025-07-01
T-01.1
O.C.G.A. § 39-5-6(e)
Plain Language
For all users (not just minors), operators must provide a clear and conspicuous disclosure that the user is interacting with AI when a reasonable person could be expected to be misled into thinking they are talking to a human. This is a conditional trigger — if the conversational AI service does not plausibly appear to be a human, the disclosure is not required. Compare to subsection (b), which imposes an unconditional disclosure obligation for minor account holders.
If an individual could reasonably be expected to be misled to believe he or she was interacting with a natural person, an operator shall clearly and conspicuously disclose that the conversational AI service is not a natural person.
Pending 2028-07-01
T-01.1
HRS § 321-__ (Patient interaction; disclosure)(a)-(c)
Plain Language
Health care providers that deploy AI systems to interact with patients via remote communication (telehealth, videoconference, electronic messaging, etc.) must disclose to the patient or their authorized representative before or at the start of the interaction that they are communicating with AI — not a human. The disclosure must be clear and conspicuous and must include either a disclaimer that the communication was generated by AI, or that it was generated by AI and reviewed by a natural person. It must also include clear instructions on how the patient can reach a human health care provider or appropriate natural person. In an emergency, the disclosure may be made as soon as reasonably possible after the interaction begins.
(a) Any health care provider that uses or makes available for use an artificial intelligence system intended to interact with patients by means of remote communication shall disclose to the patient or the patient's authorized representative, as applicable, that the person is interacting with artificial intelligence. (b) The disclosure shall be made before or at the time of the interaction; provided that in the case of an emergency, the disclosure shall be made as soon as reasonably possible. (c) The disclosure shall be clear and conspicuous, and include: (1) A disclaimer that: (A) The communication was generated by artificial intelligence; or (B) The communication was generated by artificial intelligence and reviewed by a health care provider who is a natural person or a natural person retained by the health care provider; and (2) Clear instructions on how the patient can directly contact a health care provider who is a natural person, an employee of the health care provider, or other appropriate natural person.
Pending 2027-07-01
T-01.1T-01.2
§ 554J.2(1)
Plain Language
Operators must clearly and conspicuously disclose to minor account holders that they are interacting with AI. The operator may satisfy this obligation through either (a) a persistent visible disclaimer always displayed during the interaction, or (b) a disclaimer at the beginning of each interaction plus a recurring disclaimer at least every three hours of continuous interaction. This is an unconditional disclosure requirement — it applies whenever the operator knows or is reasonably certain the user is under 18, regardless of whether the chatbot could be mistaken for a human.
1. An operator shall clearly and conspicuously disclose to a minor account holder that the minor account holder is interacting with artificial intelligence through any of the following: a. A persistent visible disclaimer. b. All of the following: (1) A disclaimer that appears at the beginning of each interaction between the operator's conversational AI service and a minor account holder. (2) A disclaimer that appears at least once every three hours of continuous interaction between the operator's conversational AI service and a minor account holder.
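The second compliance path in paragraph (b) reduces to a simple timing rule. The sketch below models it as a minimal check, assuming the operator tracks elapsed session time in seconds; the function name and the clock representation are illustrative, not statutory.

```python
# Illustrative sketch of the § 554J.2(1)(b) path: a disclaimer at the
# beginning of each interaction plus one at least every three hours of
# continuous interaction. Names and units are assumptions.

THREE_HOURS = 3 * 60 * 60  # statutory floor, in seconds

def disclaimer_due(elapsed_seconds, last_disclaimer_at):
    """Return True if a disclaimer must be shown now.

    elapsed_seconds: seconds since the interaction began.
    last_disclaimer_at: seconds-since-start of the most recent
    disclaimer, or None if none has been shown yet.
    """
    if last_disclaimer_at is None:
        return True  # (b)(1): disclose at the beginning of each interaction
    # (b)(2): disclose at least once every three hours of continuous use
    return elapsed_seconds - last_disclaimer_at >= THREE_HOURS
```

An operator choosing the paragraph (a) path, a persistent visible disclaimer, would not need this timer at all.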
Pending 2027-07-01
T-01.1
§ 554J.3
Plain Language
If a reasonable person using the conversational AI service would believe they are interacting with a human, the operator must display a persistent visible disclaimer that the service is AI. This is a conditional obligation — it triggers only when a reasonable person would be misled. The disclosure mechanism must be a persistent visible disclaimer, not a one-time notice. This applies to all users, not just minors. Compare to the minor-specific unconditional disclosure in § 554J.2(1).
An operator shall clearly and conspicuously disclose using a persistent visible disclaimer that the operator's conversational AI service is artificial intelligence if a reasonable individual interacting with the conversational AI service would believe that the individual is interacting with a human.
Pending
T-01.1T-01.2
§ 554J.2(1)(c)-(d)
Plain Language
Deployers must provide a clear and conspicuous disclosure at the start of every interaction that the chatbot is AI and is not a licensed medical, legal, financial, or mental health professional. This disclosure must be repeated every three hours during continuous interactions. Unlike the conditional regimes in some jurisdictions, this obligation is unconditional — it applies regardless of whether a reasonable person would be misled. The disclosure combines AI identity with a non-licensure disclaimer, serving both transparency and anti-deception functions.
c. Clearly and conspicuously disclose each time the deployer's public-facing chatbot begins an interaction with a user that the public-facing chatbot is artificial intelligence and is not licensed as a medical, legal, financial, or mental health professional. d. At each three-hour interval of the deployer's public-facing chatbot continuously interacting with a user, clearly and conspicuously disclose the public-facing chatbot is artificial intelligence and is not licensed as a medical, legal, financial, or mental health professional.
Pending 2025-07-01
T-01.1T-01.2
§ 554J.2(2)(a)
Plain Language
Every chatbot must provide a clear and conspicuous disclosure that the user is interacting with a chatbot — not a human — at two points: (1) at the beginning of each conversation, and (2) at thirty-minute intervals during the conversation. This is an unconditional requirement — it applies regardless of whether a reasonable person would be misled. The thirty-minute interval is notably more frequent than the three-hour interval in comparable legislation such as CA SB 243.
Each chatbot shall meet all of the following requirements: a. Clearly and conspicuously disclose that the chatbot is a chatbot and not a human being at the beginning of each conversation and at thirty-minute intervals.
Pending 2025-07-01
T-01.3
§ 554J.2(2)(b)
Plain Language
Chatbots must be programmed so they cannot claim to be human and must respond truthfully when a user asks whether the chatbot is a human. This is both a proactive design requirement (prevent claiming to be human) and an on-demand disclosure obligation (respond accurately when asked). The obligation is framed as a programming requirement, meaning it must be built into the chatbot's behavior, not merely addressed through a terms-of-service disclosure.
Be programmed to prevent the chatbot from claiming to be a human or respond deceptively when asked by a user if the chatbot is a human.
Pending 2026-07-01
T-01.1T-01.2
§ 554J.3(1)–(2)
Plain Language
Every AI chatbot accessible to Iowa users must disclose — in clear, conspicuous, and easily understood language — three facts: (1) it is artificial intelligence, (2) it is not a human, and (3) it is not a substitute for professional mental health care. This disclosure must appear at three distinct points: before the chatbot provides its first response, at regular intervals during continuous interaction, and whenever the chatbot generates a response related to emotional well-being, mental health, or self-harm. The bill does not specify a minimum interval for periodic re-disclosure (contrast CA SB 243's every-three-hours floor), so the 'regular intervals' standard will likely be defined by HHS rulemaking under § 554J.6. The third trigger — mental health topic responses — is context-activated and functionally adds a heightened disclosure requirement beyond standard AI identity disclosure.
1. Each artificial intelligence chatbot accessible to a user in this state shall explicitly disclose in clear, conspicuous, and easily understood language that the artificial intelligence chatbot is artificial intelligence, is not a human, and is not a substitute for professional mental health care. 2. A disclosure required under this section shall appear at all of the following times: a. At the beginning of the artificial intelligence chatbot's interaction with a user prior to providing the user with a response to user input. b. At regular intervals during a user's continuous interaction with the artificial intelligence chatbot. c. When the artificial intelligence chatbot generates a response related to emotional well-being, mental health, or self-harm.
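The three trigger points in subsection 2 can be modeled as independent checks. In this sketch the sensitive-term list and the one-hour interval are placeholders (the bill leaves "regular intervals" to rulemaking), and a production system would need far more robust topic detection than substring matching.

```python
# Illustrative model of the three disclosure triggers in § 554J.3(2).
# SENSITIVE_TERMS and ASSUMED_INTERVAL are assumptions, not statutory text.

SENSITIVE_TERMS = ("emotional well-being", "mental health", "self-harm")
ASSUMED_INTERVAL = 60 * 60  # placeholder pending HHS rulemaking

def disclosure_triggers(is_first_response, seconds_since_last_disclosure,
                        response_text):
    """Return the list of subsection 2 triggers that fire for a response."""
    triggers = []
    if is_first_response:
        triggers.append("2a_beginning")        # before the first response
    if seconds_since_last_disclosure >= ASSUMED_INTERVAL:
        triggers.append("2b_regular_interval") # during continuous interaction
    if any(term in response_text.lower() for term in SENSITIVE_TERMS):
        triggers.append("2c_topic")            # mental-health-related response
    return triggers
```

Because trigger (c) is content-activated, a single response can fire multiple triggers at once, and the disclosure only needs to be rendered once per response.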
Passed 2027-07-01
T-01.1T-01.2
§ 554J.2(1)
Plain Language
Operators must clearly and conspicuously disclose to minor account holders that they are interacting with AI. This disclosure must be delivered through either (a) a persistent visible disclaimer always on screen, or (b) a disclaimer at the start of each interaction plus a recurring reminder at least every three hours of continuous use. This is an unconditional obligation — it applies whenever the operator knows or is reasonably certain the user is a minor, regardless of whether a reasonable person would be misled.
1. An operator shall clearly and conspicuously disclose to a minor account holder that the minor account holder is interacting with artificial intelligence through any of the following: a. A persistent visible disclaimer. b. All of the following: (1) A disclaimer that appears at the beginning of each interaction between the operator's conversational AI service and a minor account holder. (2) A disclaimer that appears at least once every three hours of continuous interaction between the operator's conversational AI service and a minor account holder.
Passed 2027-07-01
T-01.1T-01.2
§ 554J.3
Plain Language
For all users (not just minors), operators must disclose that their conversational AI service is artificial intelligence when a reasonable person would believe they are interacting with a human. The disclosure must be made either via a persistent visible disclaimer or via a disclaimer that appears after every three hours of continuous interaction. This is a conditional obligation — it is triggered only when the AI is realistic enough that a reasonable person could be misled. Compare to § 554J.2(1), which imposes an unconditional disclosure obligation for minors regardless of whether a reasonable person would be misled.
An operator shall clearly and conspicuously disclose using a persistent visible disclaimer, or a disclaimer that appears after every three hours of continuous interaction with the operator's conversational AI service, that the operator's conversational AI service is artificial intelligence if a reasonable individual interacting with the conversational AI service would believe that the individual is interacting with a human.
Pending 2025-07-01
T-01.1T-01.2
§ 554J.2(2)(a)
Plain Language
Every chatbot must provide a clear and conspicuous disclosure that it is a chatbot, not a human, at two points: (1) at the beginning of each conversation, and (2) at recurring thirty-minute intervals during ongoing interactions. This is an unconditional obligation — it applies regardless of whether a reasonable person would be misled. The thirty-minute interval is notably more frequent than comparable statutes (e.g., California SB 243 requires three-hour intervals for minors only).
Each chatbot shall meet all of the following requirements: a. Clearly and conspicuously disclose that the chatbot is a chatbot and not a human being at the beginning of each conversation and at thirty-minute intervals.
Pending 2025-07-01
T-01.3
§ 554J.2(2)(b)
Plain Language
Chatbots must be programmed so they cannot claim to be human and must respond truthfully when a user asks whether the chatbot is a human. This is both a proactive prohibition (no affirmative claims of humanity) and a reactive obligation (honest response on demand). The term 'respond deceptively' goes beyond simple non-disclosure to prohibit any misleading answer to a direct identity question.
Be programmed to prevent the chatbot from claiming to be a human or respond deceptively when asked by a user if the chatbot is a human.
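One way to implement the on-demand prong as a programming requirement is to intercept direct identity questions before they reach the model and return a fixed truthful answer. The patterns and canned response below are illustrative assumptions; pattern matching alone would not satisfy the broader prohibition on claiming to be human in free-form output.

```python
import re

# Sketch of an on-demand identity guard: detect direct "are you human?"
# questions and answer truthfully. Patterns and wording are illustrative.

IDENTITY_PATTERNS = [
    re.compile(r"\bare\s+you\s+(a\s+)?(human|real\s+person|robot|bot)\b",
               re.IGNORECASE),
    re.compile(r"\bam\s+i\s+(talking|speaking|chatting)\s+(to|with)\s+a\s+"
               r"(human|person|bot)\b", re.IGNORECASE),
]

TRUTHFUL_ANSWER = "I am a chatbot, not a human being."

def answer_or_none(user_message):
    """Return the mandated truthful answer for identity questions, else None."""
    if any(p.search(user_message) for p in IDENTITY_PATTERNS):
        return TRUTHFUL_ANSWER
    return None
```

A guard like this handles the reactive obligation; the proactive one (never claiming to be human unprompted) must be enforced on the model's output side as well.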
Enacted 2025-07-01
T-01.1
Idaho Code § 48-603H(1)(a)-(c)
Plain Language
Any person using a chatbot, AI agent, avatar, or similar conversational AI technology in trade or commerce must clearly and conspicuously notify consumers that they are not communicating with a human being, when two conditions are met: (1) the interaction could mislead a reasonable consumer into thinking they are speaking with a human, and (2) the AI is doing more than conveying basic operational information such as hours, locations, employee directories, or simple purchase mechanics. The disclosure must be sufficiently clear and conspicuous that a reasonable consumer would not be misled. This is a conditional trigger — simple informational bots providing only basic operational details are carved out. Note that this obligation is structured as a prohibition (unfair trade practice) rather than an affirmative mandate, meaning all three elements (a), (b), and (c) must be present simultaneously for a violation.
It is an unfair and deceptive trade practice for any person to engage in trade or commerce with a consumer in which the person is communicating or otherwise interacting with a consumer using a chatbot, artificial intelligence agent, avatar, or other computer technology that engages in a textual or aural conversation and which may mislead or deceive a reasonable consumer to believe the consumer is engaging with an actual human, and: (a) The consumer is not notified in a clear and conspicuous fashion that the consumer is not communicating with a human being; (b) The consumer may reasonably believe the consumer is engaging with a human because the communication is not clear and conspicuous; and (c) The chatbot, artificial intelligence agent, avatar, or other computer technology that engages in a textual or aural conversation is doing more than stating the person's basic operations information, such as employee directories, locations, hours of operation, the basic mechanics of purchasing items, and similar information.
Passed 2027-07-01
T-01.1
Idaho Code § 48-2103(1)
Plain Language
When a reasonable person could be misled into believing they are speaking with a human, the operator must provide a clear and conspicuous disclosure that the service is AI. This is a conditional trigger — if the conversational AI service obviously presents itself as AI, no affirmative disclosure is required. The standard is objective (reasonable person), not subjective.
If reasonable persons would be misled to believe that they are interacting with a human, an operator shall clearly and conspicuously disclose that the conversational AI service is artificial intelligence.
Passed 2027-07-01
T-01.1T-01.2
Idaho Code § 48-2104(1)
Plain Language
When a user is a minor account holder, operators must unconditionally disclose that the user is interacting with AI — no 'reasonable person' test applies. Operators may satisfy this obligation in one of two ways: (1) a persistent visible disclaimer always on screen, or (2) a disclosure at the beginning of each session plus a reminder at least every three hours during continuous interactions. The obligation is triggered when the operator has actual knowledge or reasonable certainty the user is under 18. Unlike the general disclosure in § 48-2103(1), this is unconditional — it applies regardless of whether the AI could be mistaken for a human.
An operator shall clearly and conspicuously disclose to minor account holders that they are interacting with artificial intelligence: (a) As a persistent visible disclaimer; or (b) Both: (i) At the beginning of each session; and (ii) Appearing at least every three (3) hours in a continuous conversational AI service interaction.
Pending 2026-01-01
T-01.1
225 ILCS 60/67(b)(1)-(2), (c)
Plain Language
Health facilities, clinics, physician's offices, and group practices that use generative AI to create patient communications about clinical information must include two things: (1) a prominent disclaimer that the communication was AI-generated, with format-specific requirements — at the beginning for letters/emails, displayed throughout for chat-based telehealth and video, and verbally at the start and end for audio; and (2) clear instructions on how to reach a human provider. Critically, these requirements do not apply if a licensed or certified health care provider has read and reviewed the AI-generated communication before it reaches the patient — this human-in-the-loop exemption is the key safe harbor. Administrative communications (scheduling, billing) are excluded because they fall outside the definition of patient clinical information.
(b) A health facility, clinic, physician's office, or office of a group practice that uses generative artificial intelligence to generate written or verbal patient communications pertaining to patient clinical information shall ensure that the communications include both of the following: (1) A disclaimer that indicates to the patient that the communication was generated by generative artificial intelligence and that is provided in the following manner: (A) for written communications involving physical and digital media, including letters, emails, and other occasional messages, the disclaimer shall appear prominently at the beginning of each communication; (B) for written communications involving continuous online interactions, including chat-based telehealth, the disclaimer shall be prominently displayed throughout the interaction; (C) for audio communications, the disclaimer shall be provided verbally at the start and the end of the interaction; or (D) for video communications, the disclaimer shall be prominently displayed throughout the interaction. (2) Clear instructions describing how a patient may contact a human health care provider, employee of the health facility, clinic, physician's office, or office of a group provider, or other appropriate person. (c) If a communication is generated by generative artificial intelligence and read and reviewed by a human licensed or certified health care provider, the requirements of subdivision (b) do not apply.
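The format-specific placement rules in subdivision (b)(1), together with the subsection (c) safe harbor, can be read as a simple lookup. The channel keys and the review flag below are illustrative names, not statutory terms.

```python
# Sketch of the 225 ILCS 60/67(b)(1)(A)-(D) placement rules, keyed by
# communication channel. Channel names are assumptions for illustration.

PLACEMENT_RULES = {
    "letter_or_email": "prominent disclaimer at the beginning of the message",
    "chat_telehealth": "disclaimer displayed prominently throughout",
    "audio":           "verbal disclaimer at the start and end",
    "video":           "disclaimer displayed prominently throughout",
}

def required_disclaimer(channel, reviewed_by_licensed_provider):
    """Return the placement rule, or None under the § 67(c) safe harbor."""
    if reviewed_by_licensed_provider:
        return None  # human-in-the-loop review exempts the communication
    return PLACEMENT_RULES[channel]
```

The safe-harbor check comes first by design: once a licensed provider has read and reviewed the communication, the channel-specific rules never apply.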
Pending 2027-01-01
T-01.1T-01.2
Section 15(a)
Plain Language
Operators must provide users with a clear notification that they are communicating with an AI product. The notification must be in the same language as the interaction. For text-based interactions, the notification must be conspicuous, persistent, legible, and visually distinct from the conversation itself. For non-text interactions (e.g., voice), the notification must be presented periodically, at least every 30 minutes, in a manner distinct from the interaction. Adult users may disable this notification, but minors may not (see Section 15(b)). This is an unconditional disclosure — it does not depend on whether a reasonable person would be misled.
(a) An operator shall provide a clear notification to a user during an interaction with a companion artificial intelligence product, unless specifically disabled by an adult user, informing the user that the user is communicating with a companion artificial intelligence product. All notifications shall be communicated in the same language as the interaction with the user and satisfy the following requirements: (1) for text-based interactions, the notification shall be conspicuous, persistent, and legible in the user interface and be distinct from the interaction; or (2) for all other types of interactions, the notification shall be presented periodically, but no less than once every 30 minutes in a manner that is distinct from the interaction.
Pending 2027-01-01
T-01.1T-01.2
Section 15(b)
Plain Language
For minor users, the AI identity notification required under Section 15(a) may not be disabled under any circumstances. Unlike adult users, who may opt out of the notification, minor users must always receive the persistent text-based notification or the periodic (at least every 30 minutes) non-text notification. This creates an unconditional, non-waivable disclosure obligation for all minor interactions.
(b) An operator that operates and deploys a companion artificial intelligence product for use by a minor user in this State shall not disable the notification required under subsection (a) for the minor user.
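The interaction between subsections (a) and (b) reduces to a two-line policy. This sketch assumes hypothetical flags for minor status and an adult opt-out; how the operator determines minor status is outside this provision.

```python
# Sketch of the Section 15(a)-(b) opt-out rule. Field names are assumptions.

def notification_enabled(is_minor, adult_opted_out):
    """Section 15(b): the AI notification is non-waivable for minor users."""
    if is_minor:
        return True          # cannot be disabled under any circumstances
    return not adult_opted_out  # adults may disable it under Section 15(a)
```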
Pending 2027-01-01
T-01.1T-01.2
Section 15
Plain Language
Operators must unconditionally disclose to every user — verbally or in text — that they are not communicating with a human. This disclosure is required at two points: (1) at the beginning of every AI companion interaction, and (2) at least every three hours during continuing interactions. Unlike CA SB 243's general disclosure, which is conditional on whether a reasonable person could be misled, this Illinois provision is unconditional — the disclosure must be given to all users at all times, regardless of whether deception is likely. The three-hour periodic reminder also applies to all users, not just minors.
An operator shall provide a clear and conspicuous notification to a user that states, either verbally or in text, that the user is not communicating with a human, at the following times: (1) the beginning of any artificial intelligence companion interaction; and (2) at least every 3 hours for continuing artificial intelligence companion interactions.
Pending 2026-07-01
T-01.1T-01.2
Sec. 3(f)
Plain Language
At the start of every interaction and at least every 60 minutes during a continuing interaction, the covered entity must show a clear popup notification telling the user two things: (1) they are not talking to a human, and (2) the AI chatbot is not licensed or credentialed to provide advice or guidance on any topic. This is an unconditional obligation — it applies to all users (not just minors) and does not depend on whether a reasonable person would be misled. The statute specifies a popup but does not prescribe its format or whether it may be dismissed. The 60-minute interval is a floor; operators may remind more frequently. The professional credential disclaimer goes beyond standard AI identity disclosure and effectively warns users not to rely on chatbot output as professional advice.
(f) At the beginning of any interaction between a user and a companion AI chatbot and not less frequently than every 60 minutes during such interaction thereafter, a covered entity shall display to such user a clear popup that notifies the user that such user is not engaging in dialogue with a human counterpart and the AI chatbot is not licensed or otherwise credentialed to provide advice or guidance on any topic.
Passed 2025-03-13
T-01.1
Section 3(6)(a)
Plain Language
State agencies must provide a clear and conspicuous public disclaimer whenever AI is used to make decisions about citizens or businesses, to inform a decision or produce an output, or to produce publicly accessible information. This is a broad AI use disclosure obligation — it applies not only to consequential decisions but to any AI-generated output accessible to the public. The trigger is very wide: any AI involvement in producing citizen-facing information or decisions requires disclosure.
(6) (a) A department, agency, or administrative body shall disclose to the public, through a clear and conspicuous disclaimer, when generative artificial intelligence, artificial intelligence systems, or other artificial intelligence-related capabilities are used: 1. To render any decision regarding individual citizens or businesses within the state; 2. In any process, or to produce materials used by the system or humans, to inform a decision or create an output; or 3. To produce information or outputs accessible by citizens and businesses.
Pending 2026-08-01
T-01.1T-01.2
R.S. 51:2162(B)(1)-(2)
Plain Language
For all minor account holders, the platform must (1) unconditionally disclose that the user is interacting with AI — there is no 'reasonable person' trigger here; and (2) provide a clear, conspicuous notification by default at the start of each interaction and at least every hour during ongoing sessions reminding the minor to take a break and that the chatbot is AI-generated and not human. The every-hour cadence is more frequent than CA SB 243's every-three-hours requirement. These obligations apply to all minor accounts regardless of whether a reasonable person would be misled.
B. In connection with all accounts held by account holders who are minors, a companion chatbot platform shall do all of the following: (1) Disclose to the account holder that he is interacting with artificial intelligence. (2) Provide by default a clear and conspicuous notification to the account holder, at the beginning of companion chatbot interactions and at least once every hour during continuing interactions, reminding the minor to take a break and that the companion chatbot is artificially-generated and not human.
Pending 2026-08-01
T-01.1
R.S. 51:1430(B)
Plain Language
Any corporation, organization, or person conducting a commercial transaction with a Louisiana consumer using an automated system must provide clear and conspicuous notice that the consumer is interacting with an automated system rather than a human. A violation occurs under either of two independent prongs: (1) the consumer is not given clear and conspicuous notice, or (2) the consumer may reasonably believe they are engaging with a human. This means disclosure alone may be insufficient — if a reasonable consumer could still believe they are talking to a human despite the disclosure, the second prong may still be triggered. The obligation applies only in commercial transaction or trade practice contexts; non-commercial uses of AI are not covered.
B. It is an unfair or deceptive trade practice for a corporation, organization, or person to engage in a commercial transaction or trade practice with a consumer in this state in which the consumer is communicating or otherwise interacting with an automated system and either of the following applies: (1) The consumer is not notified in a clear and conspicuous manner that the consumer is communicating with an automated system and not a human being. (2) The consumer may reasonably believe he is engaging with a human.
Pending 2026-01-01
T-01.1T-01.2T-01.3
R.S. 28:16(B)(1)-(3)
Plain Language
Operators must ensure the mental health chatbot clearly and conspicuously tells users it is AI and not a human in three situations: (1) before the user can access any features (unconditional initial disclosure), (2) at the start of any interaction following a seven-day gap in use (a re-disclosure obligation triggered by inactivity), and (3) whenever a user asks or prompts whether AI is being used (on-demand disclosure). This is an unconditional disclosure obligation — it does not depend on whether a reasonable person would be misled. The seven-day re-disclosure trigger is notably longer than the three-hour periodic reminder in CA SB 243, and there is no shorter interval for minor users.
An operator of a mental health chatbot shall cause the chatbot to clearly and conspicuously disclose to a user that the chatbot is an artificial intelligence technology and not a human. The disclosure shall be made: (1) Before the user may access the features of the mental health chatbot. (2) At the beginning of any interaction with the user if the user has not accessed the mental health chatbot within the previous seven days. (3) Any time a user asks or otherwise prompts the mental health chatbot about whether artificial intelligence is being used.
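The three triggers in subsection (B) can be sketched as sequential checks. The field names are hypothetical, and detecting when a user "asks or otherwise prompts" about AI is stubbed as a boolean flag rather than implemented.

```python
# Illustrative check for the three R.S. 28:16(B) disclosure triggers:
# first access, a seven-day gap in use, and an on-demand user prompt.

SEVEN_DAYS = 7 * 24 * 60 * 60  # seconds

def must_disclose(first_access, seconds_since_last_use, user_asked_about_ai):
    if first_access:
        return True   # (1) before the user may access any features
    if seconds_since_last_use >= SEVEN_DAYS:
        return True   # (2) first interaction after a seven-day gap in use
    return user_asked_about_ai  # (3) on-demand disclosure when prompted
```

Note the contrast with session-interval regimes: the clock here measures the gap between sessions, not time elapsed within one.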
Pre-filed 2025-07-07
T-01.1
Chapter 93M, Section 4(c)
Plain Language
Consumers must receive notification in two situations: (1) when AI systems are targeting or influencing them in ways that materially impact their decisions, and (2) when algorithms are used to determine pricing, eligibility, or access to services. This is broader than the consequential-decision notification in Section 3(c) — it covers any material impact on consumer decisions, including pricing and service eligibility determinations that may not rise to the level of a 'consequential decision' as defined in the bill.
(c) Consumer Notification: Consumers must be notified when: (1) They are being targeted or influenced by AI systems in a way that materially impacts their decisions; (2) Algorithms are used to determine pricing, eligibility, or access to services.
Pre-filed
T-01.1
Chapter 93M § 4(a)-(b)
Plain Language
Any deployer or developer that makes available a consumer-facing AI system must disclose to each interacting consumer that they are interacting with an AI system. This obligation applies to all AI systems intended to interact with consumers — not just high-risk systems — making it one of the few provisions in the bill that reaches beyond high-risk AI. No disclosure is required where it would be obvious to a reasonable person that the interaction is with an AI system.
(a) Not later than 6 months after the effective date of this act, and except as provided in subsection (b) of this section, a deployer or other developer that deploys, offers, sells, leases, licenses, gives, or otherwise makes available an artificial intelligence system that is intended to interact with consumers shall ensure the disclosure to each consumer who interacts with the artificial intelligence system that the consumer is interacting with an artificial intelligence system. (b) disclosure is not required under subsection (a) of this section under circumstances in which it would be obvious to a reasonable person that the person is interacting with an artificial intelligence system.
Pre-filed 2025-01-17
T-01.1
Mass. Gen. Laws ch. 93, § 115(b)
Plain Language
Any person who deploys a bot in a commercial context where a reasonable person could be misled into thinking they are talking to a human commits a per se Chapter 93A violation — regardless of whether the consumer is actually misled or damaged. The safe harbor is clear and conspicuous disclosure: if you notify the consumer that they are communicating with a computer rather than a human, no liability attaches. Importantly, the trigger is objective (could a reasonable person be misled?) and does not require proof of actual deception or harm. The scope is limited to commercial transactions or trade practices — purely non-commercial bot interactions are not covered.
(b) It is hereby declared to be an unfair and deceptive act or practice in violation of section 2 of chapter 93A for any person to engage in a commercial transaction or trade practice with a consumer of any kind in which the consumer is communicating or otherwise interacting with a bot that may mislead or deceive a reasonable person to believe they are engaging with a human, regardless of whether such consumer is in fact misled, deceived or damaged thereby; provided, however, that a person utilizing or deploying a bot shall not be liable under this section if the consumer is notified in a clear and conspicuous fashion that they are communicating with a computer rather than a human being.
Pre-filed 2025-01-17
T-01.1
G.L. c. 93M, § 2
Plain Language
Any commercial entity that deploys a chatbot must clearly and conspicuously tell every user that the user is interacting with a chatbot, not a human. This is an unconditional disclosure obligation — it is not triggered by whether a reasonable person would be misled, but applies in all cases. The statute does not specify the timing or format of the disclosure, only that it must be clear and conspicuous.
Any commercial entity deploying a chatbot shall clearly and conspicuously disclose to the person with whom the chatbot interacts that the person is interacting with a chatbot and not a human.
Pending 2026-10-01
T-01.1
Commercial Law § 14–1330(D)
Plain Language
Operators must display a clear and conspicuous warning to all users stating that companion chatbots are artificially generated and not human, and that they may not be suitable for some minors. This is an unconditional disclosure — it applies to every user regardless of whether a reasonable person would be misled. Note this provision was amended to replace the original subsection (D) and is distinct from the more detailed developer warning obligations in subsection (E).
(D) AN OPERATOR SHALL DISPLAY A CLEAR AND CONSPICUOUS WARNING TO A USER STATING THAT COMPANION CHATBOTS: (1) ARE ARTIFICIALLY GENERATED AND NOT HUMAN; AND (2) MAY NOT BE SUITABLE FOR SOME MINORS.
Pending 2026-10-01
T-01.1T-01.2T-01.3
Commercial Law § 14–1330(E)(1)–(2)
Plain Language
Developers must implement two forms of AI identity disclosure for users of the operator's chatbot. First, a static, persistent warning must continuously appear on screen indicating the chatbot is AI-generated and not human. Second, a dynamic pop-up warning requiring user acknowledgment must appear: (1) at the start of each interaction, (2) after every hour of continuous use, and (3) whenever a user asks how the chatbot works or generates responses. The on-demand disclosure (responding when a user questions chatbot functionality) maps to T-01.3. The hourly pop-up maps to T-01.2 (periodic re-disclosure). This obligation is placed on the 'developer' — a term not defined in this section — rather than the 'operator.'
(E) A DEVELOPER SHALL ESTABLISH AND PROVIDE TO A USER OF THE OPERATOR'S CHATBOT CLEAR AND CONSPICUOUS WARNINGS THAT THE CHATBOT IS ARTIFICIALLY GENERATED AND NOT HUMAN THROUGH THE USE OF BOTH: (1) A STATIC, PERSISTENT WARNING THAT CONTINUOUSLY APPEARS ON THE SCREEN; AND (2) A DYNAMIC WARNING THAT POPS UP ON THE SCREEN AND REQUIRES A USER TO RESPOND: (I) AT THE START OF THE USER'S INTERACTION WITH THE CHATBOT; (II) AFTER EVERY HOUR OF THE USER'S CONTINUOUS INTERACTION WITH THE CHATBOT; AND (III) WHEN PROMPTED BY THE USER IN A MANNER THAT QUESTIONS HOW THE CHATBOT FUNCTIONS OR PROVIDES RESPONSES.
Enacted 2025-09-13
T-01.1
10 MRSA § 1500-Y(2)
Plain Language
Any person using an AI chatbot or other computer technology to interact with consumers in trade and commerce must provide clear and conspicuous notice that the consumer is not interacting with a human — but only when the interaction could mislead or deceive a reasonable consumer into believing they are dealing with a human. This is a conditional trigger: if the AI system clearly presents itself as non-human from the outset or no reasonable person would be confused, no disclosure is required. The scope is limited to trade and commerce contexts. A violation constitutes a violation of the Maine Unfair Trade Practices Act, enforceable by the Attorney General with civil penalties up to $10,000 per violation.
A person may not use an artificial intelligence chatbot or any other computer technology to engage in trade and commerce with a consumer in a manner that may mislead or deceive a reasonable consumer into believing that the consumer is engaging with a human being unless the consumer is notified in a clear and conspicuous manner that the consumer is not engaging with a human being.
Enacted 2025-09-10
T-01.1
10 MRSA § 1500-Y(2)
Plain Language
Any person using an AI chatbot or other computer technology to interact with consumers in a trade or commerce context must provide clear and conspicuous notice that the consumer is not engaging with a human being, whenever the interaction could mislead a reasonable consumer. The trigger is conditional — the disclosure is required only when the AI communication is realistic enough that a reasonable consumer could be misled into thinking they are dealing with a human. If the chatbot clearly presents as non-human, no disclosure is required. The statute applies broadly to any 'person,' which under Maine law encompasses individuals, corporations, and other entities.
A person may not use an artificial intelligence chatbot or any other computer technology to engage in trade and commerce with a consumer in a manner that may mislead or deceive a reasonable consumer into believing that the consumer is engaging with a human being unless the consumer is notified in a clear and conspicuous manner that the consumer is not engaging with a human being.
Failed 2026-06-15
T-01.1
10 MRSA § 1500-RR(3)(A)
Plain Language
When a therapy chatbot is made available to a minor under the exemption, it must provide a clear and conspicuous disclaimer at the beginning of each interaction that it is artificial intelligence and not a licensed mental health professional. This is an unconditional per-session AI identity disclosure — it must appear at the start of every individual interaction, not just the first one. This obligation applies only to therapy chatbots serving minors under the §1500-RR(3) exemption.
A. The therapy chatbot provides a clear and conspicuous disclaimer at the beginning of each individual interaction that it is artificial intelligence and not a licensed mental health professional;
Pending 2026-08-01
T-01.1
Minn. Stat. § 604.115, subd. 3
Plain Language
All chatbot proprietors must provide clear, conspicuous, and explicit notice to every user that they are interacting with an AI chatbot — not a human. This is an unconditional obligation: unlike some jurisdictions that trigger disclosure only when a reasonable person could be misled, Minnesota requires it for all chatbot interactions. The notice must be in the same language the chatbot uses and must be large enough to be easily readable. This applies to any chatbot accessed by a user located in Minnesota.
Proprietors utilizing chatbots accessed by a user who is in this state must provide clear, conspicuous, and explicit notice to a user that the user is interacting with an artificial intelligence chatbot program. The text of the notice must appear in the same language the chatbot is using and in a size easily readable by the average viewer.
Pending 2026-08-01
T-01.1
Minn. Stat. § 604.115, subd. 3
Plain Language
All proprietors operating chatbots accessible to Minnesota users must provide clear, conspicuous, and explicit notice that the user is interacting with an AI chatbot. This is an unconditional disclosure — it applies regardless of whether a reasonable person would be misled. The notice must be in the same language the chatbot uses and in a legible size. Unlike CA SB 243, which conditions disclosure on whether a reasonable person could be misled, this provision requires disclosure in all cases.
Proprietors utilizing chatbots accessed by a user who is in this state must provide clear, conspicuous, and explicit notice to a user that the user is interacting with an artificial intelligence chatbot program. The text of the notice must appear in the same language the chatbot is using and in a size easily readable by the average viewer.
Pending
T-01.1
§ 1.2055.3(1)
Plain Language
Persons who own or control websites, applications, software, or programs offering companion chatbots must not process data or design their systems in ways that deceive or mislead users into thinking the companion chatbot is human. This is framed as a prohibition on deceptive design rather than as an affirmative disclosure requirement — the operator need not proactively disclose AI identity, but must not affirmatively mislead users about the chatbot's nonhuman nature. This is narrower than jurisdictions that require unconditional upfront AI disclosure.
Any person who owns or controls a website, application, software, or program: (1) Shall not process data or design systems in ways that deceive or mislead users of such website, application, software, or program regarding the nonhuman nature of the companion chatbot;
Pending 2026-08-28
T-01.1T-01.2T-01.3
§ 1.2058(5)(3)(a)
Plain Language
Every AI chatbot made available to users must provide two types of AI identity disclosure: (1) an unconditional, clear and conspicuous disclosure at the start of each conversation and repeated every 30 minutes that the chatbot is AI and not human; and (2) accurate identification as AI when asked — the chatbot must never claim to be human or respond deceptively when a user asks. The disclosure obligation is unconditional — it does not depend on whether a reasonable person would be misled. The 30-minute interval applies to all users, not just minors.
(3) (a) Each artificial intelligence chatbot made available to users shall: a. At the initiation of each conversation with a user and at thirty-minute intervals, clearly and conspicuously disclose to the user that the chatbot is an artificial intelligence system and not a human being; and b. Be programmed to ensure that the chatbot does not claim to be a human being or otherwise respond deceptively when asked by a user if the chatbot is a human being.
Pending 2026-08-28
T-01.1T-01.2T-01.3
RSMo § 1.2058(5)(3)(a)
Plain Language
Every AI chatbot made available to users must clearly and conspicuously disclose at the start of each conversation — and again every 30 minutes — that it is an AI system and not a human being. This is an unconditional requirement applying to all users, not just minors. Additionally, the chatbot must be programmed so it does not claim to be human or respond deceptively when a user asks whether it is human. The 30-minute interval applies to all users; compare to CA SB 243, which imposes periodic re-disclosure only for minors.
(3) (a) Each artificial intelligence chatbot made available to users shall: a. At the initiation of each conversation with a user and at thirty-minute intervals, clearly and conspicuously disclose to the user that the chatbot is an artificial intelligence system and not a human being; and b. Be programmed to ensure that the chatbot does not claim to be a human being or otherwise respond deceptively when asked by a user if the chatbot is a human being.
Pending 2026-01-01
T-01.1
G.S. 114B-4(c)
Plain Language
Licensed health information chatbot operators must clearly disclose six categories of information to users: that the chatbot is AI (not human), what the service's limitations are, how user data is collected and used, what rights and remedies users have, emergency resources where applicable, and human oversight and intervention protocols. This is an unconditional disclosure obligation — it applies regardless of whether users could be misled. The disclosure of emergency resources and human oversight protocols goes beyond standard AI identity disclosure.
(c) A licensee must clearly disclose all of the following: (1) The artificial nature of the chatbot. (2) Limitations of the service. (3) Data collection and use practices. (4) User rights and remedies. (5) Emergency resources when applicable. (6) Human oversight and intervention protocols.
Pending 2026-01-01
T-01.1
G.S. 170-3(b)(3)
Plain Language
When the chatbot's artificial nature is not clearly apparent, the covered platform must clearly and consistently identify it as non-human. The platform is also affirmatively prohibited from processing data or designing systems that deceive or mislead users about the chatbot's non-human nature. This is a conditional trigger — the identity disclosure obligation activates when the AI nature is not already obvious — combined with an unconditional prohibition on deceptive design. The platform must prioritize transparency over any benefits of perceived human-like interaction.
(3) Duty of loyalty in chatbot identity disclosure. — A covered platform has a duty to clearly and consistently identify the chatbot as an artificial entity when that fact is not clearly apparent. The platform shall not process data or design systems in ways that deceive or mislead users about the non-human nature of the chatbot, prioritizing transparency over any potential benefits of perceived human-like interaction.
Pending 2026-01-01
T-01.1T-01.3
G.S. 170-5(a)-(e)
Plain Language
Covered platforms must implement a detailed chatbot identification disclosure process with four specific elements: the chatbot must be identified as (1) not human, human-like, or sentient, (2) a computer program mimicking conversation based on statistical analysis, (3) incapable of emotions like love or lust, and (4) without personal preferences or feelings. This disclosure must be under 300 words, readily accessible, and clearly presented. Users must provide affirmative, informed consent (e.g., clicking 'I understand') confirming they understand the chatbot's identity and limitations. Platforms may not use deceptive design elements to manipulate the consent process. Critically, this identification and consent process must be repeated at the start of each new session and must be separate from privacy policies or other consent processes. This is among the most prescriptive AI identity disclosure requirements in any U.S. chatbot statute.
(a) The chatbot identification process shall include all of the following elements: (1) A covered platform shall clearly inform users that the chatbot is: a. Not human, human-like, or sentient. b. A computer program designed to mimic human conversation based on statistical analysis of human-produced text. c. Incapable of experiencing emotions such as love or lust. d. Without personal preferences or feelings. (2) The information required by subdivision (1) of this subsection shall be readily accessible, clearly presented, and concisely conveyed in less than three hundred (300) words. (b) A user shall provide explicit and informed consent to interact with the chatbot. The consent process shall: (1) Require an affirmative action from the user (such as clicking an "I understand" button); and (2) Confirm the user's understanding of the chatbot's identity and limitations. (c) A covered platform is prohibited from using deceptive design elements that manipulate or coerce users into providing consent or obscure the nature of the chatbot or the consent process. (d) The chatbot identity communication and opt-in consent process shall be repeated at the start of each new session with a user. (e) The chatbot identification and consent process required by this section shall be separate and distinct from any privacy policy agreement or other consent processes required by law or platform policy.
Pending 2027-01-01
T-01.1
G.S. § 114B-4(c)(1)-(6)
Plain Language
Licensees must clearly disclose to users six categories of information: that the chatbot is artificial, the service's limitations, how data is collected and used, what user rights and remedies exist, emergency resources when applicable, and how human oversight and intervention work. This is a multi-element disclosure obligation that goes beyond mere AI identity disclosure to include service limitations, data practices, rights, and safety information. The AI identity disclosure component (item 1) maps directly to T-01.1; the remaining items are broader operational transparency requirements.
A licensee must clearly disclose all of the following: (1) The artificial nature of the chatbot. (2) Limitations of the service. (3) Data collection and use practices. (4) User rights and remedies. (5) Emergency resources when applicable. (6) Human oversight and intervention protocols.
Pending 2027-01-01
T-01.1
G.S. § 114B-5
Plain Language
Licensees under the Chatbot Licensing Act must ensure their chatbots comply with the chatbot identification process requirements in Chapter 170, § 170-5. This is a cross-reference provision that extends the Chapter 170 identification and consent obligations to all licensed health-information chatbots. The substantive obligations are mapped separately under § 170-5. This provision creates no independent obligation beyond ensuring compliance with the referenced section.
Licensees shall ensure that all interactions between chatbots and users comply with the provisions of G.S. 170-5.
Pending 2027-01-01
T-01.1
G.S. § 170-3(b)(3)
Plain Language
Covered platforms must clearly and consistently disclose the chatbot's artificial nature whenever it is not already apparent to the user. Platforms may not process data or design systems in ways that deceive or mislead users about the chatbot being non-human. This is a conditional disclosure trigger (only when the AI nature is 'not clearly apparent') combined with an anti-deception prohibition. Transparency must be prioritized over any commercial benefit of human-like perceived interaction.
Duty of loyalty in chatbot identity disclosure. – A covered platform has a duty to clearly and consistently identify the chatbot as an artificial entity when that fact is not clearly apparent. The platform shall not process data or design systems in ways that deceive or mislead users about the non-human nature of the chatbot, prioritizing transparency over any potential benefits of perceived human-like interaction.
Pending 2027-01-01
T-01.1
G.S. § 170-5(a)-(e)
Plain Language
Covered platforms must implement a detailed chatbot identification process with four specific disclosures: the chatbot is not human, human-like, or sentient; it is a computer program that mimics conversation via statistical analysis; it cannot experience emotions; and it has no personal preferences or feelings. This disclosure must be under 300 words, clearly presented, and readily accessible. Users must provide explicit, informed, affirmative consent (e.g., clicking 'I understand') confirming they understand the chatbot's identity and limitations. Deceptive design elements that manipulate consent or obscure the chatbot's nature are prohibited. The identification and consent process must be repeated at the start of each new interaction and must be separate from privacy policies or other consent processes. This is one of the most prescriptive chatbot disclosure requirements in U.S. legislation — it mandates specific factual statements rather than just requiring 'clear and conspicuous' notice.
(a) The chatbot identification process shall include all of the following elements: (1) A covered platform shall clearly inform users that the chatbot is: a. Not human, human-like, or sentient. b. A computer program designed to mimic human conversation based on statistical analysis of human-produced text. c. Incapable of experiencing emotions such as love or lust. d. Without personal preferences or feelings. (2) The information required by subdivision (1) of this subsection shall be readily accessible, clearly presented, and concisely conveyed in less than 300 words. (b) A user shall provide explicit and informed consent to interact with the chatbot. The consent process shall: (1) Require an affirmative action from the user (such as clicking an "I understand" button); and (2) Confirm the user's understanding of the chatbot's identity and limitations. (c) A covered platform is prohibited from using deceptive design elements that manipulate or coerce users into providing consent or obscure the nature of the chatbot or the consent process. (d) The chatbot identity communication and opt-in consent process shall be repeated at the start of each new interaction with a user. (e) The chatbot identification and consent process required by this section shall be separate and distinct from any privacy policy agreement or other consent processes required by law or platform policy.
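The § 170-5 flow — the four mandated identity statements, a ceiling of fewer than 300 words on the notice, an affirmative "I understand" gate, and repetition at the start of each new interaction — can be sketched as a simple consent gate. The function names and the sample notice wording are assumptions for illustration; only the four factual elements and the word limit come from the statute.

```python
# Sample notice covering the four statements in § 170-5(a)(1); the exact
# wording below is illustrative, not statutory text.
IDENTITY_NOTICE = (
    "This chatbot is not human, human-like, or sentient. "
    "It is a computer program designed to mimic human conversation "
    "based on statistical analysis of human-produced text. "
    "It is incapable of experiencing emotions such as love or lust, "
    "and it has no personal preferences or feelings."
)

def notice_is_concise(notice: str) -> bool:
    """§ 170-5(a)(2): the notice must be conveyed in fewer than 300 words."""
    return len(notice.split()) < 300

def start_session(get_user_action) -> bool:
    """
    § 170-5(b) and (d): display the notice and require an affirmative action
    at the start of each new interaction. Returns True only on consent;
    the chatbot must not proceed otherwise.
    """
    assert notice_is_concise(IDENTITY_NOTICE)
    print(IDENTITY_NOTICE)
    return get_user_action() == "I understand"
```

Per § 170-5(e), this gate would have to run as its own step, separate from any privacy-policy acceptance screen.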
Failed 2027-07-01
T-01.1T-01.2
Sec. 3(1)
Plain Language
When an operator knows or has reasonable certainty that an account holder is under 18, the operator must clearly and conspicuously disclose that the user is interacting with AI. The operator may satisfy this obligation either through a persistent visible disclaimer that remains on screen at all times, or by disclosing at the start of each session and then at least every three hours during continuous interactions. This is unconditional for minors — it does not depend on whether a reasonable person would be misled.
(1) An operator shall clearly and conspicuously disclose to each minor account holder that such minor account holder is interacting with artificial intelligence: (a) As a persistent visible disclaimer; or (b) Both: (i) At the beginning of each session; and (ii) Appearing at least every three hours in a continuous conversational artificial intelligence service interaction.
Failed 2027-07-01
T-01.1
Sec. 4
Plain Language
For all users (not just minors), if a reasonable person could be misled into believing they are talking to a human, the operator must clearly and conspicuously disclose that the service is AI. This is a conditional trigger — it only applies when the interaction could mislead a reasonable person. Unlike the minor-specific disclosure in Section 3(1), this provision does not specify the form of disclosure (persistent disclaimer vs. session-start) or require periodic reminders, giving operators more flexibility in implementation.
If a reasonable person interacting with a conversational artificial intelligence system would be misled to believe that the person is interacting with a human, an operator shall clearly and conspicuously disclose that the conversational artificial intelligence service is artificial intelligence.
Failed 2026-02-01
T-01.1
Sec. 5(1)-(2)
Plain Language
Any deployer or developer that makes available an AI system intended to interact with consumers must disclose to each interacting consumer that they are interacting with an AI system. This applies to all AI systems (not just high-risk ones) that are designed to interact with consumers. Disclosure is not required where it would be obvious to a reasonable person that they are interacting with AI. Note this is broader than the high-risk AI system framework — it covers any consumer-facing AI system.
(1) On and after February 1, 2026, and except as otherwise provided in subsection (2) of this section, a deployer or other developer that deploys, offers, sells, leases, licenses, gives, or otherwise makes available any artificial intelligence system that is intended to interact with any consumer shall include in the disclosure to each consumer who interacts with such artificial intelligence system that the consumer is interacting with an artificial intelligence system. (2) Disclosure is not required under subsection (1) of this section under any circumstance when it would be obvious to a reasonable person that the person is interacting with an artificial intelligence system.
Pending
T-01.1
Section 1(b)
Plain Language
Any AI chatbot powered by generative AI that provides voters with election-related information — including election dates, voter eligibility, registration procedures, polling locations, ballot procedures, and election results — or information about a candidate's accomplishments, policy positions, or qualifications must display a clear and conspicuous disclosure before providing any such content. The disclosure must identify the content as coming from a generative AI system, be appropriate for the medium (audio, video, text, or print), and be permanent or difficult for subsequent users to remove, to the extent technically feasible. The trigger is purpose-based: the chatbot must have the purpose of providing election-related or candidate information. The scope covers all New Jersey state, county, municipal, and school district elections but excludes party office elections.
b. Any artificial intelligence chatbot that utilizes generative artificial intelligence to create audio, video, text, or print content with the purpose of providing voters with election related information or information concerning the accomplishments, policy positions, or qualifications of a candidate for election in this State shall include, prior to the provision of any such content, a clear and conspicuous disclosure, as appropriate for the medium of the content, that identifies the content as being provided by a generative artificial intelligence system. Such disclosure shall be permanent or not easily removed by subsequent users, to the extent technically feasible.
Pending 2026-03-10
T-01.1
Section 1(a)
Plain Language
Any person or entity deploying generative AI to interact with a consumer for trade or commerce purposes must provide a clear and conspicuous verbal or written notice at the beginning of the interaction that the consumer is communicating with AI. This obligation is triggered only when the deployment would cause a reasonable person to believe they are interacting with a human — if the AI interaction is obviously non-human, no disclosure is required. The notice must be given before or at the start of the interaction, not mid-conversation.
A person or entity shall not deploy generative artificial intelligence to communicate or otherwise interact with a consumer for the purpose of engaging in trade or commerce in such a way as to cause a reasonable person to believe they are communicating or interacting with a human unless the person or entity provides a clear and conspicuous verbal or written notice at the beginning of the interaction that the consumer is communicating or interacting with generative artificial intelligence.
Pending 2026-03-10
T-01.1T-01.2
Section 2
Plain Language
Operators of AI companion systems must notify users at the start of every interaction that they are not communicating with a human. This is an unconditional obligation — it applies regardless of whether a reasonable person would be misled. The notification must be either verbal or written and must be clear and conspicuous. For ongoing sessions, the operator must repeat the notification at least every three hours. Unlike California SB 243, which triggers initial disclosure only when a reasonable person could be misled (with unconditional disclosure reserved for known minors), NJ A 4732 requires unconditional disclosure for all users from the start.
An operator shall provide clear and conspicuous notification to a user at the beginning of any AI companion interaction that the user is not communicating with a human. This notification shall be provided either verbally or in writing. Thereafter, the notification shall repeat at least every three hours for continued AI companion interactions.
Pending 2026-02-02
T-01.1
Section 1.a.(1)-(3)
Plain Language
Before requesting an AI-analyzed video interview, employers must: (1) notify the applicant that AI may be used to analyze their video and assess their fitness, (2) explain how the AI works and what types of characteristics it evaluates, and (3) obtain written consent (which may be electronic) to be evaluated by the AI. If an applicant does not consent, the employer may not use AI to evaluate that applicant. All three steps — notice, explanation, and consent — must be completed before the interview takes place. This combines AI identity disclosure with an informed consent requirement.
a. An employer in the State that requests applicants to record video interviews and uses an artificial intelligence analysis of the applicant-submitted video shall, prior to making the request for a video interview: (1) notify an applicant before the interview that artificial intelligence may be used to analyze the applicant's video interview and consider the applicant's fitness for the position; (2) provide an applicant with information before the interview explaining how the artificial intelligence works and what general types of characteristics it uses to evaluate applicants; and (3) obtain, before the interview, written consent, which may be electronic, from the applicant to be evaluated by the artificial intelligence program as described in the information provided. An employer shall not use artificial intelligence to evaluate an applicant who has not consented to the use of artificial intelligence analysis.
Pending 2026-02-24
T-01.1
Section 1(a)(1)
Plain Language
Any person or entity that uses an AI system to communicate with a consumer on an online platform must clearly and conspicuously notify the consumer that they are communicating with an AI system. This notification must occur upon establishing contact and before any further communication takes place. This is an unconditional disclosure requirement — it applies regardless of whether a reasonable person would be misled. The trigger is deployment of an AI system for consumer communication on an online platform.
a. A person or entity that deploys an artificial intelligence system to communicate with a consumer through an online platform shall, upon establishing contact with the consumer and prior to initiating any further communication, clearly and conspicuously: (1) notify the consumer that an artificial intelligence system is communicating with the consumer;
Pending 2027-01-01
T-01.1T-01.3
Section 3(A)(3)
Plain Language
Operators may not deploy a companion AI product that makes material misrepresentations about its identity, capabilities, training data, or non-human status — including when a user directly asks whether it is AI. An adult user may opt into allowing this, but it must be prohibited by default. This goes beyond simple AI identity disclosure by also covering misrepresentations about capabilities and training data. When a user asks directly, the system must not lie about being AI.
An operator shall not deploy or operate a companion artificial intelligence product that, unless specifically configured to do so by an adult user, incorporates: (3) causing the companion artificial intelligence product to make material misrepresentations about the product's identity, capabilities, training data or status as a non-human entity, including when directly questioned by the user.
Pending 2027-01-01
T-01.1T-01.2
Section 4(A)(1)-(2)
Plain Language
Operators must provide a clear notification during interactions informing the user they are communicating with a companion AI product, in the same language as the interaction. For text-based interactions, the notification must be conspicuous, persistent, legible, and visually distinct from the conversation. For non-text interactions (voice, etc.), the notification must be presented at least every 30 minutes in a manner distinct from the interaction itself. An adult user may configure this notification off, but it is on by default. The persistent requirement for text and the 30-minute periodic requirement for other modalities are minimum floors.
An operator shall, unless specifically configured not to do so by an adult user, ensure that a clear notification is provided to the user during an interaction, informing the user that the user is communicating with a companion artificial intelligence product. The notification shall be communicated in the same language as the interaction with the user, and: (1) for text-based interactions, be conspicuous, persistent and legible in the user interface and be distinct from the interaction; and (2) for all other types of interactions, be presented periodically, but no less than once every thirty minutes, in a manner that is distinct from the interaction.
Pending 2027-01-01
T-01.1T-01.2
Section 4(B)
Plain Language
When the user is a minor, the AI identity notification required by Section 4(A) must be provided unconditionally — a minor may not configure it off. The adult opt-out exception does not apply. This means the persistent text notification and the periodic 30-minute non-text notification are mandatory and non-configurable for all minor users.
An operator shall ensure that a clear notification is provided pursuant to Subsection A of this section for use by a minor in all circumstances.
Pending 2027-01-01
T-01.1
GBL § 1554(1)-(2)
Plain Language
Any person doing business in New York that makes available an AI decision system intended to interact with consumers must disclose to each interacting consumer that they are interacting with an AI system. This applies broadly to all AI decision systems — not just high-risk ones — and to any person, not just developers and deployers. The disclosure is excused only where a reasonable person would obviously recognize they are interacting with an AI system. This is a conditional disclosure: it applies unless the AI nature is already obvious.
1. Beginning on January first, two thousand twenty-seven, and except as provided in subdivision two of this section, each person doing business in this state, including, but not limited to, each deployer that deploys, offers, sells, leases, licenses, gives, or otherwise makes available, as applicable, any artificial intelligence decision system that is intended to interact with consumers shall ensure that it is disclosed to each consumer who interacts with such artificial intelligence decision system that such consumer is interacting with an artificial intelligence decision system. 2. No disclosure shall be required pursuant to subdivision one of this section under circumstances in which a reasonable person would deem it obvious that such person is interacting with an artificial intelligence decision system.
Pending 2025-04-27
T-01.1
State Tech. Law § 507(1)-(3)
Plain Language
Residents must be informed whenever an automated system is in use and must be told how and why it contributes to outcomes affecting them. Designers, developers, and deployers must provide accessible plain-language documentation covering system functioning, the role of automation, notice of use, identification of the responsible party, and explanations of outcomes. This documentation must be kept current, and residents must be notified of significant changes to use cases or functionality. This combines AI identity disclosure with an ongoing documentation and change-notification obligation.
1. New York residents shall be informed when an automated system is in use and New York residents shall be informed how and why the system contributes to outcomes that impact them.
2. Designers, developers, and deployers of automated systems shall provide accessible plain language documentation, including clear descriptions of the overall system functioning, the role of automation, notice of system use, identification of the individual or organization responsible for the system, and clear, timely, and accessible explanations of outcomes.
3. The provided notice shall be kept up-to-date, and New York residents impacted by the system shall be notified of any significant changes to use cases or key functionalities.
Pending 2025-09-09
T-01.1T-01.2
Gen. Bus. Law § 1702
Plain Language
Operators must unconditionally disclose to every user at the start of every AI companion interaction that the system is a computer program and not a human being, and that it is unable to feel human emotion. This disclosure must be repeated at least every three hours during continuing interactions. The disclosure must be provided either verbally or in bold, capitalized text of at least 16-point type. Unlike CA SB 243's conditional trigger (disclosure only when a reasonable person could be misled), this is unconditional — every interaction, every user, regardless of whether the user could reasonably be misled. The statute prescribes the exact mandatory language to be used.
An operator shall provide a notification to a user at the beginning of any AI companion interaction and at least every three hours for continuing AI companion interactions thereafter, which states either verbally or in bold and capitalized letters of at least sixteen point type, the following: "THE AI COMPANION (OR NAME OF THE AI COMPANION) IS A COMPUTER PROGRAM AND NOT A HUMAN BEING. IT IS UNABLE TO FEEL HUMAN EMOTION".
Pending
T-01.1
Gen. Bus. Law § 1152
Plain Language
News media employers must fully disclose to their workers when and how any generative AI tool is being used in the workplace for content creation — including writing, recordings, and transcripts. The disclosure must include a description of the AI system and a summary of its purpose and use. This is a worker-facing transparency obligation, not a consumer-facing one. The bill does not specify timing, format, or frequency of the disclosure beyond requiring it be 'full.'
News media employers shall fully disclose to workers when and how any generative artificial intelligence tool is used in the workplace as it relates to the creation of content, including, but not limited to, writing, recordings and transcripts. Such disclosure shall include a description of the artificial intelligence system and a summary of the purpose and use of such system.
Pending
T-01.1
CPLR Rule 2107(d)-(e)
Plain Language
Every civil filing submitted to a New York court must include a separate affidavit addressing generative AI use. If AI was used in any aspect of drafting — including research, document review, or document creation — the affidavit must disclose that use and certify that a human reviewed the source material and verified accuracy, including case citations. If AI was not used, the affidavit must affirmatively state that. This is a universal filing requirement: every paper or file requires one affidavit or the other. The definition of 'drafting' is broad enough to encompass using AI for legal research even if a human wrote the final text.
(d) Any paper or file drafted with the assistance of generative artificial intelligence must attach to the filing a separate affidavit disclosing such use and certifying that a human being has reviewed the source material and verified that the artificially generated content is accurate including, but not limited to, any case citations. (e) Any paper or file drafted without the assistance of generative artificial intelligence must attach to the filing a separate affidavit stating such.
Pending
T-01.1
CPL § 10.50(4)-(5)
Plain Language
Every criminal filing submitted to a New York court must include a separate affidavit addressing generative AI use. If AI was used in drafting — including research, document review, or document creation — the affidavit must disclose that use and certify that a human reviewed the source material and verified accuracy, including case citations. If AI was not used, the affidavit must affirmatively state that. This parallels the civil filing requirement in CPLR Rule 2107(d)-(e) but applies in criminal proceedings.
4. Any paper or file drafted with the assistance of generative artificial intelligence must attach to the filing a separate affidavit disclosing such use and certifying that a human being has reviewed the source material and verified that the artificially generated content is accurate including, but not limited to, any case citations. 5. Any paper or file drafted without the assistance of generative artificial intelligence must attach to the filing a separate affidavit stating such.
Pending
T-01.1
CPLR Rule 5528(a)(6)
Plain Language
Appellate briefs filed under CPLR Rule 5528 must include, as a required component of the brief itself, a disclosure of any generative AI use in drafting and a certification that the content was reviewed and verified by a human. This extends the Rule 2107 filing-affidavit requirement specifically to appellate briefs by making it a formal element of the brief structure alongside the appendix and argument sections.
6. if required by rule twenty-one hundred seven, a disclosure of the use of generative artificial intelligence in the drafting of the brief and certification that the content therein was reviewed and verified by a human being.
Pending 2027-01-01
T-01.1
Civil Rights Law § 110(6)-(8)
Plain Language
Deployers must provide a short-form notice (500 words maximum) to individuals about their covered algorithms. The notice must be concise, plain-language, disability-accessible, and highlight any practices that may be unexpected or that involve consequential actions, including an overview of individual rights. For individuals with whom the deployer has a relationship, the notice must be delivered electronically at the individual's first interaction with the algorithm. For individuals without a direct relationship, the notice must be posted conspicuously on the deployer's website. The Division will promulgate regulations specifying minimum content requirements and a template. This is a point-of-interaction disclosure, distinct from the comprehensive public disclosure in § 110(1).
6. A deployer shall provide a short-form notice regarding a covered algorithm it develops, offers, licenses, or uses in a manner that: (a) is concise, clear, conspicuous, in plain language, and not misleading; (b) is readily accessible to individuals with disabilities; (c) is based on what is reasonably anticipated within the context of the relationship between the individual and the deployer; (d) includes an overview of each applicable individual right and disclosure in a manner that draws attention to any practice that may be unexpected to a reasonable individual or that involves a consequential action; (e) is not more than five hundred words in length; and (f) is available to the public at no cost. 7. (a) If a deployer has a relationship with an individual, the deployer shall provide an electronic version of the short-form notice directly to the individual upon the individual's first interaction with the covered algorithm. (b) If a deployer does not have a relationship with an individual, the deployer shall provide the short-form notice in a clear, conspicuous, accessible, and not misleading manner on their website. 8. The division shall promulgate regulations specifying the minimum content required to be included in the short-form notice described in subdivision six of this section, which shall not exceed the content requirements described in subdivision six of this section and shall include a template or model for the short-form notice described in subdivision seven of this section.
Pending
T-01.1T-01.2
Gen. Bus. Law § 1702
Plain Language
Operators must provide a mandatory disclosure to every user at the start of every AI companion interaction — unconditionally, not only when a reasonable person could be misled. For continuing interactions, the same disclosure must be repeated at least every three hours. The disclosure must state that the AI companion is a computer program, not a human being, and that it cannot feel human emotion. The disclosure must be delivered either verbally or in bold, capitalized text of at least 16-point type. The statute prescribes exact mandatory language, which is notably more prescriptive than comparable statutes like CA SB 243 that leave the specific wording to the operator. This obligation applies to all users regardless of age.
An operator shall provide a notification to a user at the beginning of any AI companion interaction and at least every three hours for continuing AI companion interactions thereafter, which states either verbally or in bold and capitalized letters of at least sixteen point type, the following: "THE AI COMPANION (OR NAME OF THE AI COMPANION) IS A COMPUTER PROGRAM AND NOT A HUMAN BEING. IT IS UNABLE TO FEEL HUMAN EMOTION".
Pending 2025-10-11
T-01.1
GBL § 1554(1)-(2)
Plain Language
Any person doing business in New York — including deployers — that offers an AI decision system intended to interact with consumers must disclose to each consumer that they are interacting with an AI system. The disclosure obligation is conditional: it does not apply where a reasonable person would find it obvious they are interacting with AI. This applies to all AI decision systems intended for consumer interaction, not just high-risk systems. The broader 'person doing business in this state' scope means this obligation reaches beyond the defined developer/deployer roles.
1. Beginning on January first, two thousand twenty-seven, and except as provided in subdivision two of this section, each person doing business in this state, including, but not limited to, each deployer that deploys, offers, sells, leases, licenses, gives, or otherwise makes available, as applicable, any artificial intelligence decision system that is intended to interact with consumers shall ensure that it is disclosed to each consumer who interacts with such artificial intelligence decision system that such consumer is interacting with an artificial intelligence decision system. 2. No disclosure shall be required pursuant to subdivision one of this section under circumstances in which a reasonable person would deem it obvious that such person is interacting with an artificial intelligence decision system.
Pending 2025-09-05
T-01.1
Gen. Bus. Law § 1152
Plain Language
News media employers must fully disclose to their workers whenever and however generative AI tools are used in the workplace for content creation — including writing, recordings, and transcripts. The disclosure must include a description of the AI system and a summary of its purpose and use. This is an internal workforce disclosure obligation (employer to employee), not a consumer-facing requirement. The statute does not specify timing, format, or frequency of the disclosure beyond requiring it to be 'full.'
News media employers shall fully disclose to workers when and how any generative artificial intelligence tool is used in the workplace as it relates to the creation of content, including, but not limited to, writing, recordings and transcripts. Such disclosure shall include a description of the artificial intelligence system and a summary of the purpose and use of such system.
Pending
T-01.1
Gen. Bus. Law § 399-m-1(2)-(3)
Plain Language
Any business entity or individual using AI to influence customer interactions must disclose that fact at the point of interaction — meaning where the customer first encounters the AI, such as a chat window, chatbot, website footer, or email. The disclosure must be clear and conspicuous, in bold-faced type of at least 12 points, written in plain English; it must describe the AI's role and include instructions on how to reach a human if human assistance is available. The obligation covers a broad range of AI-influenced interactions including automated customer support, personalized ad targeting, product eligibility decisions, and AI-driven hiring tools. The bill does not limit covered entities to any particular industry or size threshold — it applies to any person, firm, partnership, association, corporation, or agent thereof.
2. Any person, firm, partnership, association or corporation or agent or employee thereof shall disclose the use of artificial intelligence to influence customer interaction, including but not limited to: automated customer support; personalized ad targeting; product eligibility decisions; and AI-driven hiring tools. 3. Such disclosure shall be placed at the point of interaction with the customer, accompanied by a clear and conspicuous, in not less than twelve point bold faced type, plain-English description of the AI's role, with instructions on how to access human assistance, if applicable.
Enacted 2025-11-05
T-01.1T-01.2
General Business Law § 1702
Plain Language
Operators must deliver a clear and prominent disclosure, either verbal or written, that the user is not communicating with a human. This disclosure is required: (1) at the start of every AI companion interaction (initial disclosure), though the statute provides that this initial notice need not be given more than once per day, and (2) at least every three hours during any continuing interaction (periodic re-disclosure). The obligation is unconditional — it applies to all users regardless of whether a reasonable person would be misled.
An operator shall provide a clear and conspicuous notification to a user at the beginning of any AI companion interaction which need not exceed once per day and at least every three hours for continuing AI companion interactions which states either verbally or in writing that the user is not communicating with a human.
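The timing rule in this enacted provision — an initial notice at the start of each interaction, relieved to once per day, plus a reminder at least every three hours during continuing interactions — can be sketched as scheduling logic. This is a minimal illustration, not a compliance implementation: the function name and parameters are invented for the example, and the reading that the once-per-day relief applies only to the session-start notice (while the three-hour clock governs mid-interaction reminders) is an interpretive assumption.

```python
from datetime import datetime, timedelta

INITIAL_CAP = timedelta(days=1)    # initial notice "need not exceed once per day"
PERIODIC_GAP = timedelta(hours=3)  # reminder at least every three hours

def disclosure_due(at_session_start, last_disclosure, now):
    """Return True when the AI-identity notice must be (re)issued.

    at_session_start -- True at the beginning of an AI companion interaction
    last_disclosure  -- datetime of the last notice shown to this user, or None
    now              -- current datetime
    """
    if last_disclosure is None:
        return True                    # never disclosed to this user
    elapsed = now - last_disclosure
    if at_session_start:
        return elapsed >= INITIAL_CAP  # initial notice, relieved to once per day
    return elapsed >= PERIODIC_GAP     # three-hour reminder mid-interaction
```

Under this reading, a user who starts a second session five hours after being notified gets no fresh initial notice (within the daily relief), but a user three hours into a continuing session is due a reminder.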
Passed 2027-07-01
T-01.1T-01.2
75A Okla. Stat. § 302(A)
Plain Language
Operators must clearly and conspicuously disclose to minor account holders that they are interacting with AI, not a human. This obligation is unconditional — no reasonable-person trigger applies for minors. Operators may satisfy this through either (1) a constantly visible disclaimer, or (2) a disclosure at the beginning of each session plus at least every 30 minutes during a continuous interaction. The 30-minute interval is notably more frequent than comparable requirements in other jurisdictions (e.g., California SB 243 requires every 3 hours).
A. An operator shall clearly and conspicuously disclose to a minor account holder that he or she is interacting with a conversational AI service and is not interacting with a natural person: 1. With a constantly visible disclaimer; or 2. At the beginning of each session and appearing at least every thirty (30) minutes in a continuous conversational AI service interaction.
Pending 2026-03-10
T-01.1
Section 3(a)-(b)
Plain Language
Any business entity that uses AI in any part of a consumer interaction with a Pennsylvania resident must disclose that fact clearly and conspicuously at the beginning of the interaction. The disclosure must be in plain language, delivered orally or in writing, and must be reasonably accessible to individuals with disabilities or limited English proficiency. This is an unconditional disclosure — it is triggered whenever AI is used in any part of the interaction, regardless of whether the consumer would otherwise be misled.
(a) Duty of business entity.--A business entity that uses artificial intelligence in any part of a consumer interaction shall disclose the use of artificial intelligence in a clear and conspicuous manner to the consumer at the beginning of the consumer interaction. (b) Format.--The business entity shall deliver the disclosure in plain language, orally or in writing, which language must be reasonably accessible to an individual with a disability or limited English proficiency.
Pending 2026-01-30
T-01.1T-01.2
Section 4(2)
Plain Language
Operators must notify all users — both at the start of every AI companion session and at least every three hours during ongoing sessions — that they are communicating with an AI companion and not a human. The notification may be verbal or written. Unlike CA SB 243, which conditions initial disclosure on whether a reasonable person would be misled (except for minors), this obligation is unconditional and applies to all users regardless of whether deception is plausible. The three-hour periodic reminder matches CA SB 243's interval but applies to all users, not just known minors.
An operator shall: (2) At the beginning of a session with an AI companion and once every three hours during the session, provide a notification to the user stating, either verbally or in writing, that the user is communicating with an AI companion and not a human.
Pending 2026-04-01
T-01.1T-01.3
12 Pa.C.S. § 7105(a)-(c)(1)-(3)
Plain Language
Suppliers must develop, implement, and maintain a written disclosure policy that clearly and conspicuously states the chatbot's intended purposes, its abilities and limitations, and that it is an AI and not a human. Consumers must acknowledge they have read, understood, and consent to this policy before accessing the chatbot. The AI identity statement must be restated each time a consumer asks or prompts the chatbot about whether AI is being used — creating an on-demand disclosure obligation. Written consent may be provided via signature, checkbox, electronic signature, or button click. Trade secrets and proprietary information must be protected in the policy.
(a) Policy required.-- (1) Subject to paragraph (2), a supplier of a chatbot shall develop, implement and maintain a written policy containing disclosures regarding the chatbot in accordance with subsection (c). (2) In complying with paragraph (1), a supplier shall protect any trade secret or other proprietary information regarding the chatbot. (b) Consent required.-- (1) Before accessing the features of a chatbot or entering the chat page of a chatbot, a consumer must acknowledge that the consumer has read, understands and consents to the policy described under subsection (a) and the purpose, capabilities and limitations of the chatbot. (2) The consent under this subsection must be in writing and may involve the consumer initialing or signing the acknowledgment described in paragraph (1), checking a box, providing an electronic signature or hitting a button. (c) Specific disclosures.--The policy described under subsection (a) must clearly and conspicuously provide the following: (1) The intended purposes of the chatbot. (2) The abilities and limitations of the chatbot. (3) A statement that the chatbot is an artificial intelligence technology and is not a human, which must be provided each time that the consumer asks or otherwise prompts the chatbot about whether artificial intelligence is being used.
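The structure of § 7105 — features gated behind written consent to the policy, plus a restatement of the AI-identity line whenever the consumer asks about AI use — can be sketched as simple state logic. This is a hedged illustration only: the class, method names, and placeholder policy text are all invented, the consent-method set merely paraphrases subsection (b)(2), and the regex is a crude stand-in for real detection of "is this AI?" prompts.

```python
import re

AI_IDENTITY = ("This chatbot is an artificial intelligence technology "
               "and is not a human.")

class ChatbotGate:
    """Withholds chatbot features until the consumer consents to the policy."""

    # Placeholder policy text; subsection (c) requires intended purposes,
    # abilities and limitations, and the AI-identity statement.
    POLICY = ("Intended purposes: ...\n"
              "Abilities and limitations: ...\n" + AI_IDENTITY)

    # Subsection (b)(2): initialing/signing, checkbox, e-signature, or button.
    CONSENT_METHODS = {"signature", "initials", "checkbox", "e-signature", "button"}

    def __init__(self):
        self.consented = False

    def record_consent(self, method):
        if method in self.CONSENT_METHODS:
            self.consented = True
        return self.consented

    def respond(self, message):
        if not self.consented:
            return self.POLICY          # chat features stay inaccessible
        if re.search(r"\bai\b|artificial intelligence", message.lower()):
            return AI_IDENTITY          # subsection (c)(3): on-demand restatement
        return "..."                    # a normal chatbot reply would go here
```

The design point is that consent gating and on-demand restatement are separate checks: the first runs once per consumer, the second on every message.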
Pending 2026-06-03
T-01.1
Section 3(a)
Plain Language
If a user could reasonably mistake the AI companion for a real person, the operator must display a clear and prominent notice that the companion is AI-generated and not human. This is a conditional trigger — disclosure is only required when a reasonable person would be misled. If the AI companion clearly presents itself as artificial from the outset, no additional disclosure under this subsection is needed.
If a reasonable person interacting with an AI companion would be misled to believe the person is interacting with a human, an operator shall issue a clear and conspicuous notification indicating that the AI companion is artificially generated and not human.
Pending 2026-06-03
T-01.1T-01.2
Section 3(c)(1)-(2)
Plain Language
When the operator knows or should have known that a user is a minor, two disclosure obligations apply unconditionally: (1) the operator must always disclose that the user is interacting with AI and not a human — regardless of whether a reasonable person would be misled; and (2) the operator must provide a default, clear and conspicuous reminder at least every three hours during continuing interactions that the AI companion is artificially generated and that the user should take a break. The 'should have known' standard is broader than actual knowledge and may require operators to make reasonable efforts to identify minor users.
For a user that the operator knows, OR SHOULD HAVE KNOWN, is a minor, the operator shall: (1) Disclose to the user that the user is interacting with artificial intelligence and not an actual human being. (2) Provide by default a clear and conspicuous notification to the user at least once every three hours during continuing interactions that reminds the user to take a break and that the AI companion is artificially generated and not human.
Pending 2027-01-01
T-01.1T-01.2
R.I. Gen. Laws § 6-63-3
Plain Language
Operators must provide a mandatory notification at the start of every AI companion interaction and at least every three hours during ongoing interactions. The notification must be delivered either verbally or in bold, capitalized text of at least 16-point type, using the prescribed language: the AI companion is a computer program, not a human being, and is unable to feel human emotion. This is an unconditional disclosure obligation — it applies regardless of whether a reasonable person would be misled. Unlike CA SB 243, where the three-hour periodic reminder applies only to known minors, this provision applies to all users. The statute also prescribes specific formatting requirements (bold, capitalized, 16-point minimum) and mandated verbatim language, which is stricter than most comparable statutes.
An operator shall provide a notification to a user at the beginning of any AI companion interaction and at least every three (3) hours for continuing AI companion interactions hereafter, which states either verbally or in bold and capitalized letters of at least sixteen (16) point type, the following: "THE AI COMPANION (OR NAME OF THE AI COMPANION) IS A COMPUTER PROGRAM AND NOT A HUMAN BEING. IT IS UNABLE TO FEEL HUMAN EMOTION".
Pending
T-01.1
R.I. Gen. Laws § 23-106-3
Plain Language
Healthcare providers and healthcare facilities that use AI to document patient visits — whether in-person or via telehealth — must notify patients that AI is being used for that documentation purpose. The obligation is limited to AI used for visit documentation (e.g., AI scribes, ambient listening tools that generate clinical notes); it does not extend to AI used for diagnosis, treatment recommendations, or other clinical functions. The bill does not specify the timing, format, or content of the notification, leaving significant implementation discretion to covered entities. No enforcement mechanism or penalties are provided.
Any and all healthcare providers and healthcare facilities that employ artificial intelligence ("AI") to document in-person or telehealth visits shall notify patients of the use of AI for that sole purpose.
Pending 2027-01-01
T-01.1T-01.2
R.I. Gen. Laws § 6-63-3
Plain Language
Operators must provide an unconditional AI identity disclosure to every user at the start of every AI companion interaction — no reasonable-person trigger is required. For continuing interactions, the notification must be repeated at least every three hours. The notification must be delivered either verbally or in bold, capitalized text of at least 16-point type, using prescribed statutory language identifying the companion as a computer program that cannot feel human emotion. The mandated text includes a substantive claim about emotional capacity — not merely an AI identity disclosure. Unlike some comparable state laws (e.g., CA SB 243), this applies to all users unconditionally, not only minors.
An operator shall provide a notification to a user at the beginning of any AI companion interaction and at least every three (3) hours for continuing AI companion interactions hereafter, which states either verbally or in bold and capitalized letters of at least sixteen (16) point type, the following: "THE AI COMPANION (OR NAME OF THE AI COMPANION) IS A COMPUTER PROGRAM AND NOT A HUMAN BEING. IT IS UNABLE TO FEEL HUMAN EMOTION".
Pending 2026-02-13
T-01.1
R.I. Gen. Laws § 23-106-3
Plain Language
Healthcare providers and healthcare facilities that use AI to document patient visits — whether in-person or via telehealth — must notify patients that AI is being used for that documentation purpose. This is narrowly scoped: the notification obligation applies only when AI is used for visit documentation (e.g., ambient clinical scribes or AI-assisted note-taking), not for diagnosis, treatment recommendations, or other clinical functions. The bill does not specify the timing, format, or content of the notification, nor does it provide any enforcement mechanism or penalty for non-compliance.
Any and all healthcare providers and healthcare facilities that employ artificial intelligence ("AI") to document in-person or telehealth visits shall notify patients of the use of AI for that sole purpose.
Pending
T-01.1T-01.2T-01.3
S.C. Code § 39-80-30(B)
Plain Language
Chatbot providers must disclose — clearly, conspicuously, and explicitly — that the user is interacting with a chatbot rather than a human before the chatbot generates any output. The disclosure must be repeated at the beginning of each communication, every hour during ongoing interactions, and any time a user asks whether the chatbot is a human. The notice must be in the same language as the chatbot's communications and in a font at least as large as the largest font used elsewhere in the chatbot interface. The Attorney General will prescribe the specific form and content of the notice by rule.
(B) A chatbot provider shall provide clear, conspicuous, and explicit notice to a user that the user is interacting with a chatbot rather than a natural person before the chatbot may generate any output data. The chatbot provider shall include this notice at the beginning of each chatbot communication with a user every hour thereafter and each time a user asks whether the chatbot is a natural person. The text of the notice must: (1) be written in the same language that the chatbot communicates with the user and must appear in a font size that is easily readable by an average user and is not smaller than the largest font size used for other chatbot communications; and (2) must comply with the rules adopted and the regulations promulgated by the Attorney General pursuant to Section 39-80-40.
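The three triggers this provision maps (T-01.1 notice before any output, T-01.2 hourly repetition, T-01.3 on-demand disclosure when the user asks whether the chatbot is human) can be combined in one decision function. A minimal Python sketch under stated assumptions: the function and parameter names are invented, and the regex is only a toy stand-in for the far more robust intent detection a real system would need to recognize "are you a human?"-type questions.

```python
from datetime import timedelta
import re

HOURLY_GAP = timedelta(hours=1)

# Toy pattern for "are you a human/bot?"-style questions (illustrative only).
ASKS_IF_HUMAN = re.compile(
    r"\b(are|r)\s+(you|u)\s+(a\s+)?(human|person|real|bot|ai)\b", re.I)

def notice_required(is_session_start, since_last_notice, user_message):
    """Decide whether the chatbot must (re)issue its AI-identity notice.

    is_session_start  -- True at the beginning of each chatbot communication
    since_last_notice -- timedelta since the notice was last shown, or None
    user_message      -- latest user message, checked for the on-demand trigger
    """
    if is_session_start or since_last_notice is None:
        return True                    # notice before any output data
    if since_last_notice >= HOURLY_GAP:
        return True                    # hourly repetition
    if ASKS_IF_HUMAN.search(user_message):
        return True                    # on-demand disclosure
    return False
```

The formatting constraints (same language as the chatbot, font no smaller than the largest font otherwise used) sit outside this timing logic and would be enforced in the rendering layer.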
Pending
T-01.1
S.C. Code § 39-81-40(B)(2)
Plain Language
Covered entities must implement reasonable systems to ensure their chatbot does not make a materially false representation that it is a human being. Unlike some jurisdictions that require affirmative disclosure at the start of every interaction, this provision is narrower: it prohibits the chatbot from affirmatively and materially misrepresenting itself as human, but does not mandate unprompted AI identity disclosure. The obligation applies to all users.
(B) A covered entity shall implement reasonable systems and processes to: (2) ensure that a chatbot does not make a materially false representation that it is a human being;
Pending
T-01.1T-01.2T-01.3
S.C. Code § 39-80-30(B)
Plain Language
Before generating any output, chatbot providers must give users clear, conspicuous, and explicit notice that they are interacting with a chatbot, not a human. This notice is unconditional — it applies regardless of whether a reasonable person would be misled. The notice must be repeated at the beginning of each communication, every hour during ongoing sessions, and whenever a user asks if the chatbot is human. The notice must be in the chatbot's operating language, in a font at least as large as the largest font used in chatbot communications. The notice must also comply with AG-promulgated regulations specifying form and content requirements.
(B) A chatbot provider shall provide clear, conspicuous, and explicit notice to a user that the user is interacting with a chatbot rather than a natural person before the chatbot may generate any output data. The chatbot provider shall include this notice at the beginning of each chatbot communication with a user every hour thereafter and each time a user asks whether the chatbot is a natural person. The text of the notice must: (1) be written in the same language that the chatbot communicates with the user and must appear in a font size that is easily readable by an average user and is not smaller than the largest font size used for other chatbot communications; and (2) must comply with the rules adopted and the regulations promulgated by the Attorney General pursuant to Section 39-80-40.
Pending 2025-01-01
T-01.1
Section 37-31-40(A)-(B)
Plain Language
Any deployer or developer that makes an AI system available for consumer interaction must disclose to each consumer that they are interacting with an AI system. This disclosure is not required where it would be obvious to a reasonable person that they are interacting with AI. Note that this obligation applies to all AI systems intended to interact with consumers — not just high-risk systems — making it broader in scope than the rest of the chapter's high-risk-focused requirements.
(A) Except as provided in subsection (B), a deployer or other developer that deploys, offers, sells, leases, licenses, gives, or otherwise makes available an artificial intelligence system that is intended to interact with consumers shall ensure the disclosure to each consumer who interacts with the artificial intelligence system that the consumer is interacting with an artificial intelligence system. (B) Disclosure is not required under subsection (A) under circumstances in which it would be obvious to a reasonable person that the person is interacting with an artificial intelligence system.
Pending
T-01.1
S.C. Code § 39-81-40(B)(2)
Plain Language
Covered entities must implement reasonable systems and processes to prevent their chatbot from making materially false representations that it is a human being. This is framed as an anti-deception obligation rather than a proactive disclosure requirement — the entity need not affirmatively disclose AI identity at every interaction, but must ensure the chatbot does not falsely claim to be human. The 'materially false' qualifier implies that incidental or clearly playful statements may not trigger liability, but any representation that could genuinely mislead a user into believing they are speaking with a human is prohibited.
(B) A covered entity shall implement reasonable systems and processes to: (2) ensure that a chatbot does not make a materially false representation that it is a human being;
Pending 2026-07-01
T-01.1
Section 1 (new section added to ch. 37-24)
Plain Language
Any person conducting a commercial transaction or trade practice with a consumer must provide a clear and conspicuous disclosure at the outset of the interaction that the consumer is not communicating with a human, whenever the interaction involves a chatbot, AI agent, avatar, or other conversational computer technology and a consumer could reasonably believe they are engaging with a human. The disclosure is a safe harbor — if it is made clearly and conspicuously at the outset, the prohibition does not apply. The trigger is a reasonable-person standard: if the AI system is obviously non-human (e.g., a basic menu-driven phone tree), the obligation may not apply. The scope is limited to commercial transactions and trade practices — non-commercial AI interactions are not covered.
Except as otherwise provided in this section, a person may not engage in a commercial transaction or trade practice with a consumer if: (1) The transaction or practice requires the consumer to communicate with or interact with a chatbot, an artificial intelligence agent, an avatar, or another form of computer technology that engages in a textual or aural conversation; and (2) The consumer could reasonably believe that the consumer is engaging with a human. The prohibition set forth in this section does not apply if the consumer is notified, in a clear and conspicuous fashion, at the outset of the transaction or practice, that the consumer is not communicating with another human.
Passed 2025-09-01
T-01.1
Health & Safety Code § 183.005(b)
Plain Language
When a health care practitioner uses AI for diagnostic purposes (including diagnosis recommendations or treatment suggestions based on patient records), the practitioner must disclose that AI use to the patient. The statute does not specify the timing, format, or content of the disclosure — only that it must occur. This creates a patient-facing transparency obligation on the individual practitioner, not the entity or AI vendor.
A health care practitioner who uses artificial intelligence for diagnostic purposes as described by Subsection (a) must disclose the practitioner's use of that technology to the practitioner's patients.
Passed 2025-09-01
T-01.1
Gov't Code § 2054.707
Plain Language
State agencies using public-facing AI systems must clearly disclose to the public that they are interacting with an AI system, following the format prescribed by the AI code of ethics. This is a conditional obligation — no disclosure is required if a reasonable person would already know they are interacting with AI. The specific form and manner of disclosure will depend on the code of ethics developed by DIR under § 2054.702. This is analogous to AI identity disclosure laws in other jurisdictions but applies only to government-deployed systems.
Sec. 2054.707. DISCLOSURE REQUIREMENTS. A state agency that procures, develops, deploys, or uses a public-facing artificial intelligence system shall provide clear disclosure of interaction with the system to the public as provided by the artificial intelligence system code of ethics established under Section 2054.702. The disclosure is not required if a reasonable person would know the person is interacting with an artificial intelligence system.
Passed 2025-09-01
T-01.1
Gov't Code § 2054.711(a)-(c)
Plain Language
State agencies and local governments must post a standardized notice on all applications, websites, and public computer systems associated with any AI system that is either public-facing or a controlling factor in a consequential decision. DIR will develop the required form, which must describe the system, its data sources, and privacy/ethics compliance measures. This is broader than § 2054.707's disclosure requirement because it covers both public-facing AI and AI that is a controlling factor in consequential decisions (even if not public-facing). Healthcare facilities have a lighter compliance path — they may satisfy this requirement by including a generalized AI disclosure in patient consent forms rather than posting the full standardized notice.
Sec. 2054.711. STANDARDIZED NOTICE. (a) Each state agency and local government deploying or using an artificial intelligence system that is public-facing or that is a controlling factor in a consequential decision shall include a standardized notice on all related applications, Internet websites, and public computer systems. (b) The department shall develop a form that agencies must use for the notice required under Subsection (a). The form must include: (1) general information about the system and data sources the system uses; and (2) measures taken to maintain compliance with information privacy laws and ethics standards. (c) For the purposes of this section, any health care service by an academic medical center, state owned hospital, public hospital or hospital district organized under Article IX of the Texas Constitution or under Texas Health and Safety Code may satisfy their disclosure requirements by including a generalized statement in the patient consent forms that an artificial intelligence system may be used in the course of their treatment.
Enacted 2024-05-01
T-01.3
Utah Code § 13-2-12(3)
Plain Language
Any person deploying generative AI in connection with activities overseen by the Utah Division of Consumer Protection must, when asked by the person interacting with the AI, clearly and conspicuously disclose that the person is interacting with generative AI and not a human. This is an on-demand disclosure — it is triggered only when the individual asks or prompts, not proactively. Compare to the proactive disclosure required under subsection (4)(a) for regulated occupations, which does not require a user inquiry.
A person who uses, prompts, or otherwise causes generative artificial intelligence to interact with a person in connection with any act administered and enforced by the division, as described in Section 13-2-1, shall clearly and conspicuously disclose to the person with whom the generative artificial intelligence interacts, if asked or prompted by the person, that the person is interacting with generative artificial intelligence and not a human.
Enacted 2024-05-01
T-01.1
Utah Code § 13-2-12(4)(a)-(b), (5)
Plain Language
Providers of services in a regulated occupation (i.e., any occupation requiring a license or state certification from the Utah Department of Commerce) must proactively and prominently disclose whenever a consumer is interacting with generative AI in the delivery of those services. The disclosure must be given verbally at the start of any oral conversation and via electronic message before any written exchange. This is an unconditional proactive disclosure — unlike subsection (3), it does not require the consumer to ask. Subsection (4)(b) clarifies that this provision does not create a new authorization to provide regulated services via AI; all existing licensure and certification requirements remain in full effect.
(4) (a) A person who provides the services of a regulated occupation shall prominently disclose when a person is interacting with a generative artificial intelligence in the provision of regulated services. (b) Nothing in this section permits a person to provide the services of a regulated occupation through generative artificial intelligence without meeting the requirements of the regulated occupation. (5) A disclosure described in Subsection (4)(a) shall be provided: (a) verbally at the start of an oral exchange or conversation; and (b) through electronic messaging before a written exchange.
Enacted 2025-05-07
T-01.3
Utah Code § 13-75-103(1)(a)-(b)
Plain Language
When a supplier uses generative AI to interact with a consumer in a consumer transaction, the supplier must disclose that the consumer is interacting with AI (not a human) if the consumer asks or prompts whether AI is being used. The consumer's question must be a clear and unambiguous request — vague or ambiguous inquiries do not trigger the obligation. This is an on-demand disclosure duty, not a proactive one: no disclosure is required unless the consumer affirmatively asks. Compare to the heightened obligation in § 13-75-103(2) for regulated occupations, which requires proactive disclosure without a consumer prompt.
(1)(a) A supplier that uses generative artificial intelligence to interact with an individual in connection with a consumer transaction shall disclose to the individual that the individual is interacting with generative artificial intelligence and not a human, if the individual asks or otherwise prompts the supplier about whether artificial intelligence is being used. (b) The individual's prompt or question under Subsection (1)(a) must be a clear and unambiguous request to determine whether the interaction is with a human or with artificial intelligence.
Enacted 2025-05-07
T-01.1
Utah Code § 13-75-103(2)-(3)
Plain Language
Individuals in regulated occupations (those regulated by the Utah Department of Commerce and requiring a license or state certification) must proactively and prominently disclose when a client is interacting with generative AI, if the use constitutes a high-risk AI interaction. This is an unconditional, proactive disclosure — unlike the consumer transaction rule in § 103(1), it does not wait for the consumer to ask. The disclosure must be provided verbally at the start of a verbal interaction and in writing before a written interaction begins. The high-risk trigger covers collection of sensitive personal data and personalized financial, legal, medical, or mental health advice, plus any additional categories the Division defines by rule. The provision also requires continued compliance with all existing requirements of the regulated occupation when delivering services through generative AI.
(2) An individual providing services in a regulated occupation shall: (a) prominently disclose when an individual receiving services is interacting with generative artificial intelligence in the provision of regulated services if the use of generative artificial intelligence constitutes a high-risk artificial intelligence interaction; and (b) comply with all requirements of the regulated occupation when providing services through generative artificial intelligence. (3) A disclosure required under Subsection (2) shall be provided: (a) verbally at the start of a verbal interaction; and (b) in writing before the start of a written interaction.
Enacted 2025-05-07
T-01.1T-01.2
Utah Code § 13-75-104(1)-(2)
Plain Language
A safe harbor protects any person from enforcement under the disclosure requirements of § 13-75-103 if their generative AI system clearly and conspicuously discloses — both at the outset and throughout the interaction — that it is generative AI, is not human, or is an AI assistant. This applies to both consumer transactions and regulated services. The practical takeaway: if you embed a persistent, prominent AI disclosure from the first message and maintain it throughout the session, you are shielded from enforcement even if you otherwise would have failed to comply with the on-demand or proactive disclosure requirements. The Division may issue rules specifying what forms and methods of disclosure satisfy or fail to satisfy this safe harbor.
(1) A person is not subject to an enforcement action for violating Section 13-75-103 if the person's generative artificial intelligence clearly and conspicuously discloses: (a) at the outset of any interaction with an individual in connection with: (i) a consumer transaction; or (ii) the provision of regulated services; and (b) throughout the interaction that it: (i) is generative artificial intelligence; (ii) is not human; or (iii) is an artificial intelligence assistant. (2) In accordance with Title 63G, Chapter 3, Utah Administrative Rulemaking Act, the division in consultation with the office, may make rules specifying forms and methods of disclosure that: (a) satisfy the requirements of Subsection (1); or (b) do not satisfy the requirements of Subsection (1).
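For an implementer, the safe harbor reduces to a checkable property of the conversation transcript. The sketch below is illustrative only, not a statement of what the Division's rules will require: the `disclosed` flag is a hypothetical per-message marker recording that an assistant output carried a clear statement that the speaker is generative AI, is not human, or is an AI assistant, and "throughout the interaction" is read conservatively as every assistant output.

```python
def qualifies_for_safe_harbor(messages: list[dict]) -> bool:
    """Sketch of the Utah Code 13-75-104(1) safe harbor: the generative AI
    must clearly and conspicuously disclose its identity at the outset of
    the interaction and throughout it.  'disclosed' is a hypothetical flag
    (an assumption, not statutory language) meaning the message included a
    clear AI-identity statement."""
    ai_messages = [m for m in messages if m.get("role") == "assistant"]
    if not ai_messages:
        return False  # no AI output yet, so no disclosure "at the outset"
    # Conservative reading: the first AI output and every later one disclose.
    return all(m.get("disclosed", False) for m in ai_messages)
```

An operator satisfying this property would be shielded even if it would otherwise have missed an on-demand request under § 13-75-103; note, though, that the Division's rules under subsection (2) may narrow what counts as a clear and conspicuous disclosure.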
Pending 2027-01-01
T-01.1T-01.2T-01.3
§ 59.1-616(A)
Plain Language
Operators must provide AI identity disclosure in two forms: (1) a static, persistent disclaimer visible at all times indicating the companion chatbot is not human, and (2) active pop-up notifications (or equivalent if pop-ups are not feasible) at three specific triggers — upon login, every 90 minutes of sustained engagement, and whenever the user asks. Unlike some jurisdictions that condition disclosure on a reasonable-person deception standard, this obligation is unconditional and applies to all users regardless of age. The 90-minute re-disclosure interval is more frequent than the three-hour interval in California SB 243.
A. An operator shall (i) include a disclaimer to users of all ages that a companion chatbot is not a human via a static, persistent disclosure and (ii) notify a user via a pop-up, or other communication if a pop-up is not feasible, that the user is not engaging with a human counterpart at the following intervals: 1. Upon login to the companion chatbot; 2. Every 90 minutes of sustained user engagement; and 3. When prompted by the user.
Pending 2026-07-01
T-01.1
Va. Code § 59.1-615(2)
Plain Language
Covered entities must implement reasonable systems and processes to prevent their chatbots from making materially false representations that the chatbot is a human being. This is framed as a prohibition on affirmative misrepresentation rather than a proactive disclosure duty — the chatbot may not claim to be human, but this provision does not independently require the chatbot to affirmatively disclose that it is AI. The affirmative disclosure obligation is in § 59.1-617. The reasonableness standard acknowledges that edge-case outputs may occur but requires systemic safeguards.
A covered entity shall implement reasonable systems and processes to: 2. Ensure that a chatbot does not make a materially false representation that it is a human being;
Pending 2026-07-01
T-01.1T-01.2T-01.3
Va. Code § 59.1-617
Plain Language
All operators — not just covered entities meeting the 500,000-user threshold — must provide two layers of AI identity disclosure: (1) a static, persistent disclaimer visible at all times indicating the chatbot is not human, and (2) pop-up notifications at four specific triggers: login, every 30 minutes of sustained engagement, whenever the user asks, and whenever the chatbot is asked to provide advice in a licensed field such as medicine, finance, or law. The 30-minute interval is more frequent than comparable statutes (e.g., CA SB 243's 3-hour interval). The licensed-advice trigger is unique to this bill and functions as a context-sensitive disclosure requirement. This section applies to all operators of chatbots in Virginia, regardless of user count — it is not limited to covered entities.
An operator shall (i) include a disclaimer to users of all ages that a chatbot is not a human via a static, persistent disclosure and (ii) notify a user via a pop-up that he is not engaging with a human counterpart at the following intervals: 1. Upon login to the chatbot; 2. Every 30 minutes of sustained user engagement; 3. When prompted by the user; and 4. When asked to provide advice legally regulated by a licensed industry, including medical, financial, or legal advice.
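The four pop-up triggers can be folded into a single gating check. The sketch below is an illustration under assumptions, not statutory language: the function name, the per-event flags, and the minute-based clock are all hypothetical, and the static persistent disclaimer under clause (i) is a separate duty that no timer satisfies.

```python
POPUP_INTERVAL_MINUTES = 30  # "every 30 minutes of sustained user engagement"

def popup_due(minutes_since_last_popup: float,
              is_login: bool = False,
              user_asked: bool = False,
              licensed_advice_requested: bool = False) -> bool:
    """Return True if a Va. Code 59.1-617 pop-up notification is due now.
    The static, persistent disclaimer under clause (i) is separate and
    must remain visible regardless of this check."""
    return (
        is_login                                               # 1. upon login
        or minutes_since_last_popup >= POPUP_INTERVAL_MINUTES  # 2. every 30 minutes
        or user_asked                                          # 3. when prompted by the user
        or licensed_advice_requested                           # 4. licensed-industry advice
    )
```

The licensed-advice trigger means the gate must be evaluated per user turn, not just on a timer, since a request for medical, financial, or legal advice can arrive at any point in the session.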
Pre-filed 2026-07-01
T-01.1
9 V.S.A. § 2466e(a)
Plain Language
Any person operating a chatbot in a commercial transaction must provide clear and conspicuous notice to the consumer that they are interacting with a chatbot and not a human, whenever the chatbot could mislead a reasonable person into believing they are engaging with a human. The trigger is the reasonable-person standard — it does not matter whether any particular consumer was actually misled. The disclosure must be given before or at the point of interaction. This obligation is scoped to commercial transactions and trade practices only; non-commercial uses of chatbots are not covered.
No person shall engage in a commercial transaction or trade practice with a consumer in which the consumer is communicating or otherwise interacting with a chatbot that may mislead or deceive a reasonable person to believe the person is engaging with an actual human, whether or not any consumer is in fact misled or deceived, unless the consumer is notified in a clear and conspicuous manner that the consumer is communicating with a chatbot and not an actual human being.
Pre-filed 2026-07-01
T-01.1T-01.2T-01.3
9 V.S.A. § 4193c(b)(1)-(3)
Plain Language
Chatbot providers must proactively inform every user that they are interacting with a chatbot, not a human, at three touchpoints: (1) before the chatbot generates any outputs, (2) every hour during continuing interactions, and (3) any time the user asks whether the chatbot is a real person. This is an unconditional obligation — it applies regardless of whether a reasonable person would be misled. The notice must appear in the user's interaction language, in a font at least as large as the largest other text on the interface, be accessible to users with disabilities, and comply with Attorney General rules. The every-hour periodic reminder and on-demand disclosure make this one of the more comprehensive AI identity disclosure requirements among state bills.
(b) Disclosure. Chatbot providers shall provide clear, conspicuous, and explicit notice to users that users are interacting with a chatbot rather than a human prior to the chatbot generating any outputs, every hour thereafter, and each time a user prompts the chatbot about whether it is a real person subject to the following: (1) The text of this notice must appear in the same language as the one in which the user is interacting with the chatbot, in a font size easily readable by an average user, and no smaller than the largest font size of other text appearing on the interface on which the chatbot is provided. (2) This notice must be accessible to users with disabilities. (3) This notice must comply with rules adopted by the Attorney General pursuant to this subchapter.
Pre-filed 2026-07-01
T-01.1
9 V.S.A. § 4193b(a)
Plain Language
When a user could reasonably mistake the companion chatbot for a human, the operator must provide a clear and conspicuous notification stating the chatbot is AI-generated and not human. The notification must be in the same language as the interaction and in a font size easily readable by the average viewer. This is a conditional trigger — if the chatbot already presents itself clearly as AI, no additional disclosure is required. Compare to the minor-specific unconditional disclosure in § 4193b(c)(1).
If a user interacting with a companion chatbot could be reasonably misled to believe that the user is interacting with a human, an operator shall issue a clear and conspicuous notification to the individual indicating that the companion chatbot is artificially generated and not human. The text of the notification shall appear in the same language and in a size easily readable by the average viewer.
Pre-filed 2026-07-01
T-01.1T-01.2
9 V.S.A. § 4193b(c)(1)-(2)
Plain Language
When the operator knows a user is a minor (17 or younger), two unconditional disclosure obligations apply: (1) the operator must immediately disclose in a clear and conspicuous manner that the user is interacting with AI — this is unconditional, unlike the general disclosure in § 4193b(a) which requires a 'reasonable misleading' trigger; and (2) the operator must send a prominent notification at least every 30 minutes during continuing interactions reminding the minor to take a break and that the chatbot is AI-generated and not human. The 30-minute interval is notably more frequent than CA SB 243's 3-hour floor. These obligations are triggered only by actual knowledge that the user is a minor.
An operator shall, for a user that the operator knows is a minor, do the following: (1) immediately disclose to the user in a clear and conspicuous manner that the user is interacting with artificial intelligence; (2) provide a clear and conspicuous notification to the user at least every 30 minutes for continuing companion chatbot interactions that reminds the user to take a break and that the companion chatbot is artificially generated and not human;
Passed 2026-07-01
T-01.1
18 V.S.A. § 9752(a)-(b)
Plain Language
Health care providers using generative AI to produce patient communications about clinical information must include two things: (1) a disclaimer that the communication was AI-generated, with specific placement rules depending on the medium (prominently at the beginning for letters/emails, displayed throughout for chat and video, verbally at start and end for audio); and (2) clear instructions for contacting a human provider. There is a safe harbor: if a licensed human health care provider reads and reviews the AI-generated communication before it reaches the patient, no disclaimer is required. This creates a practical choice for providers — either have a human review every AI-generated communication, or label it.
(a) Except as provided in subsection (b) of this section, any health care provider that uses generative artificial intelligence to generate written or verbal patient communications relating to patient clinical information shall ensure that those communications include both of the following: (1) A disclaimer that indicates to the patient that the communication was generated by generative artificial intelligence. (A) For written communications involving physical and digital media, including letters, emails, and other occasional messages, the disclaimer shall appear prominently at the beginning of each communication. (B) For written communications involving continuous online interactions, including chat-based telehealth, the disclaimer shall be prominently displayed throughout the interaction. (C) For audio communications, the disclaimer shall be provided verbally at the start and end of the interaction. (D) For video communications, the disclaimer shall be prominently displayed throughout the interaction. (2) Clear instructions describing how a patient may contact a human health care provider; an employee of the health care facility, clinic, physician's office, or office of a group provider; or other appropriate person. (b) If a communication is generated by generative artificial intelligence and read and reviewed by a licensed human health care provider, the requirements of subsection (a) of this section shall not apply.
Passed 2026-07-01
T-01.1T-01.3
18 V.S.A. § 9763(a)-(b)
Plain Language
Suppliers must ensure the mental health chatbot clearly and conspicuously discloses that it is AI and not a human at three trigger points: (1) before the user can access chatbot features (unconditional initial disclosure); (2) at the beginning of any interaction after a 7-day gap in access (re-disclosure after absence); and (3) whenever a user asks or prompts about whether AI is being used (on-demand disclosure). This is an unconditional disclosure — it applies regardless of whether a reasonable person would be misled. The 7-day re-disclosure trigger is a distinctive feature compared to other states' periodic re-disclosure requirements.
(a) A supplier of a mental health chatbot shall cause the mental health chatbot to clearly and conspicuously disclose to a Vermont user that the mental health chatbot is an artificial intelligence technology and not a human. (b) The disclosure described in subsection (a) of this section shall be made: (1) before the Vermont user may access the features of the mental health chatbot; (2) at the beginning of any interaction with the Vermont user if the Vermont user has not accessed the mental health chatbot within the previous seven days; and (3) any time a Vermont user asks or otherwise prompts the mental health chatbot about whether artificial intelligence is being used.
Passed 2027-02-01
T-01.1
Sec. 6(1)-(4)
Plain Language
Any government agency that deploys an AI system intended to interact with consumers must unconditionally disclose to each consumer — before or at the start of interaction — that the consumer is interacting with AI. This obligation applies regardless of whether a reasonable consumer would already realize they are dealing with an AI system. The disclosure must be clear, conspicuously posted, written in plain language, and may not use a dark pattern. The disclosure may be delivered via a hyperlink to a separate page. Note that this provision is codified in Title 42 RCW (public agencies) and has no express enforcement mechanism or penalty specified in the bill, unlike the covered provider provisions enforced by the AG under the CPA.
(1) A government agency that makes available an artificial intelligence system intended to interact with consumers must disclose to each consumer, before or at the time of interaction, that the consumer is interacting with an artificial intelligence system. The disclosure must be: (a) Clear and conspicuously posted; (b) Written in plain language; and (c) May not use a dark pattern. (2) The disclosure may be provided by using a hyperlink to direct a consumer to a separate web page. (3) An agency is required to make the disclosure under subsection (1) of this section regardless of whether it would be obvious to a reasonable consumer that the consumer is interacting with an artificial intelligence system. (4) For the purposes of this section, "artificial intelligence system" has the same meaning as in section 1 of this act.
Pending 2027-01-01
T-01.1
Sec. 3(4)(a)-(e)
Plain Language
Before or at the time a deployer uses a high-risk AI system to interact with a consumer, the deployer must disclose that the consumer is interacting with an AI system. This is an unconditional disclosure — not triggered by whether the consumer could be misled. In addition, the deployer must simultaneously provide substantial contextual information: the system's purpose, nature, the consequential decision type, deployer contact information, and a plain-language description covering what personal characteristics the system measures, how it measures them, their relevance to the decision, the human components, and how automated components inform the decision. This is a comprehensive pre-interaction transparency obligation that goes beyond simple AI identity disclosure.
(4) Not later than the time that a deployer uses a high-risk artificial intelligence system to interact with a consumer, the deployer shall disclose to the consumer that the consumer is interacting with an artificial intelligence system. At such time, the deployer shall also disclose to the consumer: (a) The purpose of such high-risk artificial intelligence system; (b) The nature of such system; (c) The nature of the consequential decision; (d) The contact information for the deployer; and (e) A description of the artificial intelligence system in plain language, which must include: (i) A description of the personal characteristics or attributes that such system will measure or assess; (ii) The method by which the system measures or assesses such attributes or characteristics; (iii) How such attributes or characteristics are relevant to the consequential decisions for which the system should be used; (iv) Any human components of such system; and (v) How any automated components of such system are used to inform such consequential decisions.
Passed 2027-01-01
T-01.1T-01.2T-01.3
Sec. 3(1)-(3)
Plain Language
Operators must provide a clear, conspicuous disclosure that the AI companion chatbot is artificially generated and not human. This disclosure is unconditional — it must be given at the start of every interaction and repeated at least every three hours during continued use. Additionally, operators must take reasonable measures to prevent the chatbot from ever claiming to be human (including when directly asked) or generating any output that contradicts the AI disclosure. Unlike CA SB 243, which triggers disclosure only when a reasonable person could be misled, this provision applies to all interactions regardless of user perception.
(1) An operator must provide a clear and conspicuous disclosure that an AI companion chatbot is artificially generated and not human. (2) The notification described in subsection (1) of this section must be provided: (a) At the beginning of the interaction; and (b) At least every three hours during continued interaction. (3) The operator must implement reasonable measures to prohibit and prevent AI companion chatbots from claiming to be human, including when asked by the person interacting with the AI chatbot, and from otherwise generating any output that refutes or conflicts with the disclosure described in subsection (1) of this section.
Passed 2027-01-01
T-01.1T-01.2T-01.3
Sec. 4(1)(a), (2), (3)
Plain Language
When the operator knows a user is a minor, or when the AI companion chatbot is directed to minors, three heightened disclosure obligations apply: (1) the operator must unconditionally disclose that the chatbot is AI-generated and not human; (2) the reminder must repeat at least every hour during continuous interaction — three times more frequently than the general every-three-hours requirement under Sec. 3; and (3) the chatbot must be prevented from claiming to be human or contradicting the disclosure. The trigger is either actual knowledge that the user is a minor or the chatbot being directed to minors generally.
(1) If the operator knows that the user of an AI companion chatbot is a minor, or if the AI companion chatbot is directed to minors, the operator shall: (a) Issue a clear and conspicuous notification indicating that the chatbot is artificially generated and not human; (2) The notification described in subsection (1)(a) of this section must be provided: (a) At the beginning of the interaction; and (b) At least every hour during continuous interaction. (3) The operator must implement reasonable measures to prohibit and prevent AI companion chatbots from claiming to be human, including when asked by the person interacting with the AI chatbot, and from otherwise generating any output that refutes or conflicts with the notification described in subsection (1) of this section.
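Taken together, Sections 3 and 4 define a single reminder schedule whose period depends on the audience. A minimal sketch of that schedule follows; the function and flag names are assumptions for illustration, and subsection (3)'s separate duty to prevent the chatbot from claiming to be human is a model-behavior safeguard that no timer captures.

```python
def notification_due(hours_since_last: float,
                     interaction_start: bool = False,
                     user_is_minor: bool = False,
                     directed_to_minors: bool = False) -> bool:
    """True if the AI-identity notification must be issued now.
    Sec. 3: at the beginning and at least every 3 hours for all users.
    Sec. 4: at least every hour for known minors or minor-directed chatbots."""
    interval_hours = 1 if (user_is_minor or directed_to_minors) else 3
    return interaction_start or hours_since_last >= interval_hours
```

Because Sec. 4 is triggered by either actual knowledge of minor status or the chatbot being directed to minors as a product, a minor-directed product would simply run on the one-hour interval for every user.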
Pending 2026-07-01
T-01.1
Sec. 10(1)-(3)
Plain Language
Government agencies that deploy any AI system intended to interact with consumers must disclose — before or at the time of interaction — that the consumer is interacting with AI. The disclosure must be clear, conspicuous, in plain language, and may not use dark patterns. A hyperlink to a separate page is acceptable. Critically, this disclosure is unconditional — it must be made regardless of whether a reasonable consumer would already know they are interacting with AI. This provision applies to all AI systems, not just high-risk systems, and is codified separately in Title 42 RCW (government agencies) rather than the Title 19 RCW chapter covering private-sector deployers.
(1) A government agency that makes available an artificial intelligence system intended to interact with consumers must disclose to each consumer, before or at the time of interaction, that the consumer is interacting with an artificial intelligence system. The disclosure must be: (a) Clear and conspicuously posted; (b) Written in plain language; and (c) May not use a dark pattern. (2) The disclosure may be provided by using a hyperlink to direct a consumer to a separate web page. (3) A person is required to make the disclosure under subsection (1) of this section regardless of whether it would be obvious to a reasonable consumer that the consumer is interacting with an artificial intelligence system.
Pending 2027-01-01
T-01.1T-01.2T-01.3
Sec. 3(1)-(3)
Plain Language
Operators must unconditionally disclose to all users — before or at the start of every interaction — that the AI companion chatbot is artificially generated and not human. This disclosure must be repeated at least every three hours during continued interaction. Additionally, operators must implement reasonable measures to prevent the chatbot from ever claiming to be human, including when directly asked, and from generating any output that contradicts the AI identity disclosure. Unlike CA SB 243's general provision, this is not conditional on a 'reasonable person' test — the disclosure is required in every interaction regardless.
(1) An operator must provide a clear and conspicuous disclosure that an AI companion chatbot is artificially generated and not human. (2) The notification described in subsection (1) of this section must be provided: (a) At the beginning of the interaction; and (b) At least every three hours during continued interaction. (3) The operator must implement reasonable measures to prohibit and prevent AI companion chatbots from claiming to be human, including when asked by the person interacting with the AI chatbot, and from otherwise generating any output that refutes or conflicts with the disclosure described in subsection (1) of this section.
Pending 2027-01-01
T-01.1T-01.2T-01.3
Sec. 4(1)(a), 4(2), 4(3)
Plain Language
When the operator knows a user is a minor or the chatbot is directed to minors, the AI identity disclosure must be provided at the beginning of the interaction and repeated at least every hour — three times more frequently than the general three-hour requirement for all users under Sec. 3. The operator must also prevent the chatbot from claiming to be human or generating output that contradicts the disclosure. This provision is triggered by actual knowledge of minor status or by the chatbot being directed to minors as a product category.
(1) If the operator knows that the user of an AI companion chatbot is a minor, or if the AI companion chatbot is directed to minors, the operator shall: (a) Issue a clear and conspicuous notification indicating that the chatbot is artificially generated and not human; (2) The notification described in subsection (1) of this section must be provided: (a) At the beginning of the interaction; and (b) At least every hour during continuous interaction. (3) The operator must implement reasonable measures to prohibit and prevent AI companion chatbots from claiming to be human, including when asked by the person interacting with the AI chatbot, and from otherwise generating any output that refutes or conflicts with the notification described in subsection (1) of this section.
Pending 2027-01-01
T-01.1
Sec. 3(4)(a)-(e)
Plain Language
Before or at the time a high-risk AI system interacts with a consumer, the deployer must disclose that the consumer is interacting with AI and provide a comprehensive set of additional disclosures: the system's purpose, nature, the type of consequential decision at stake, deployer contact information, and a plain-language description covering what personal attributes the system measures, how it measures them, their relevance to the decision, human components, and how automated components inform decisions. This is an unconditional disclosure — it applies regardless of whether a reasonable person would be misled.
(4) Not later than the time that a deployer uses a high-risk artificial intelligence system to interact with a consumer, the deployer shall disclose to the consumer that the consumer is interacting with an artificial intelligence system. At such time, the deployer shall also disclose to the consumer: (a) The purpose of such high-risk artificial intelligence system; (b) The nature of such system; (c) The nature of the consequential decision; (d) The contact information for the deployer; and (e) A description of the artificial intelligence system in plain language, which must include: (i) A description of the personal characteristics or attributes that such system will measure or assess; (ii) The method by which the system measures or assesses such attributes or characteristics; (iii) How such attributes or characteristics are relevant to the consequential decisions for which the system should be used; (iv) Any human components of such system; and (v) How any automated components of such system are used to inform such consequential decisions.
Pending 2026-07-01
T-01.1
Sec. 11(1)-(3)
Plain Language
Government agencies that deploy AI systems intended to interact with consumers must disclose — before or at the time of interaction — that the consumer is interacting with an AI system. The disclosure must be clear and conspicuous, written in plain language, and must not use a dark pattern. A hyperlink to a separate web page is an acceptable disclosure method. Critically, the disclosure is unconditional — it must be provided even when it would be obvious to a reasonable consumer that they are interacting with AI. This provision is codified separately in Title 42 RCW (government operations), distinct from the private-sector obligations in Title 19 RCW.
(1) A government agency that makes available an artificial intelligence system intended to interact with consumers must disclose to each consumer, before or at the time of interaction, that the consumer is interacting with an artificial intelligence system. The disclosure must be: (a) Clear and conspicuously posted; (b) Written in plain language; and (c) May not use a dark pattern. (2) The disclosure may be provided by using a hyperlink to direct a consumer to a separate web page. (3) A person is required to make the disclosure under subsection (1) of this section regardless of whether it would be obvious to a reasonable consumer that the consumer is interacting with an artificial intelligence system.
Pending 2027-01-01
T-01.1T-01.2
§33-57-2(c)
Plain Language
Operators and licensed professionals must provide a clear and conspicuous notification at the beginning of any AI companion interaction stating that the user is not communicating with a human. The initial disclosure need not be provided more than once per day. For continuing interactions, a re-disclosure must be provided at least every three hours. The notification may be verbal or written. This is an unconditional disclosure requirement — it applies to all AI companion interactions regardless of whether a reasonable person would be misled.
(c) An operator or licensed professional shall provide a clear and conspicuous notification to a user at the beginning of any AI companion interaction, which need not exceed once per day, and at least every three hours for continuing AI companion interactions, which states either verbally or in writing that the user is not communicating with a human.
Enacted 2026-01-01
T-01.1
Bus. & Prof. Code § 22602(a)
Plain Language
If a user could reasonably mistake the chatbot for a real person, the operator must display a clear, prominent notice that the companion chatbot is AI-generated and not human. This is a conditional trigger — if the chatbot clearly presents itself as AI from the outset such that no reasonable person would be misled, no disclosure is required under this provision. Compare to the stricter unconditional disclosure required for known minors under § 22602(c).
If a reasonable person interacting with a companion chatbot would be misled to believe that the person is interacting with a human, an operator shall issue a clear and conspicuous notification indicating that the companion chatbot is artificially generated and not human.
Enacted 2026-01-01
T-01.1T-01.2
Bus. & Prof. Code § 22602(c)(1)-(2)
Plain Language
When the operator knows a user is a minor, two obligations apply unconditionally: (1) always disclose that the user is talking to AI, regardless of whether a reasonable person would otherwise be misled; and (2) send a prominent reminder at least every three hours in ongoing conversations that the chatbot is AI and that the user should take a break. Three hours is a floor, not a ceiling — operators may remind more frequently. These obligations apply only when the operator has actual knowledge that the user is a minor.
An operator shall, for a user that the operator knows is a minor, do all of the following: (1) Disclose to the user that the user is interacting with artificial intelligence. (2) Provide by default a clear and conspicuous notification to the user at least every three hours for continuing companion chatbot interactions that reminds the user to take a break and that the companion chatbot is artificially generated and not human.