HB-2737
AZ · State · USA
● Pending
Proposed Effective Date
2026-01-01
Arizona HB 2737 — An Act amending title 44, chapter 9, Arizona Revised Statutes, by adding article 27; relating to commerce (ChatBot Protection Act)
Summary

Imposes data privacy, advertising, and safety obligations on chatbot providers operating in Arizona. Prohibits processing personal data without affirmative consent, bans use of chat logs for advertising, restricts profiling, prohibits the sale of chat logs, and requires parental consent for minors. Requires AI identity disclosure before any output, with hourly re-disclosure and on-demand disclosure. Mandates monthly risk-of-harm evaluations, public data security programs, and product liability for injuries caused by chatbots. Enforced by the Attorney General and county attorneys, with a private right of action for users; violations constitute injury in fact, with civil penalties up to $5,000 per violation and punitive damages for reckless or knowing conduct.

Enforcement & Penalties
Enforcement Authority
The Attorney General or a county attorney may bring a civil action against a chatbot provider that violates the article. A private right of action is available to any user injured by a violation of § 44-1383.01 or § 44-1383.02; a violation of those sections constitutes an injury in fact to a user, so a user need not prove separate standing. The Attorney General is also granted rulemaking authority to implement the article.
Penalties
Civil penalty of not more than $5,000 per violation. Punitive damages are available for reckless and knowing conduct. Injunctive relief, declaratory relief, and reasonable attorney fees and litigation costs are also available. A violation of § 44-1383.01 or § 44-1383.02 constitutes an injury in fact to a user, so actual monetary harm need not be proven. The Attorney General or county attorney may also obtain damages, civil penalties, restitution, or other remedies, plus reasonable attorney fees and litigation costs.
Who Is Covered
"Chatbot provider" means any person that creates, distributes or otherwise makes a chatbot available to a user.
What Is Covered
"Chatbot": (a) Means an algorithmic or automated system that generates information through text, audio, image or video in a manner that simulates interpersonal interactions or conversation. (b) Includes artificial intelligence.
Compliance Obligations · 18 obligations
D-01 Automated Processing Rights & Data Controls · D-01.4 · Deployer · Developer · Chatbot
A.R.S. § 44-1383.01(A)(1)
Plain Language
Chatbot providers may not process personal data to influence chatbot outputs unless the processing is necessary to fulfill a user's express request and the user has given affirmative consent. The affirmative consent standard is demanding: it requires a standalone, accessible disclosure in the user's language, with equal or easier decline mechanics, and cannot be obtained through terms of service, dark patterns, or user inaction. This effectively restricts personal data processing to a narrow necessity-plus-consent basis.
Statutory Text
A chatbot provider may not: 1. Process personal data to inform a chatbot output unless processing personal data is necessary to fulfill an express request that is made by a user and the user provides affirmative consent.
D-01 Automated Processing Rights & Data Controls · D-01.4 · Deployer · Developer · Chatbot
A.R.S. § 44-1383.01(A)(2)
Plain Language
Chatbot providers are categorically prohibited from using a user's chat logs for any advertising purpose — including deciding whether to show an ad, selecting which product or service to advertise, or customizing ad content. This is an absolute ban with no consent exception, covering the full lifecycle of ad targeting and personalization based on chat interactions.
Statutory Text
A chatbot provider may not: 2. Process a user's chat log: (a) To determine whether to display an advertisement for a product or service to a user. (b) To determine a product or service or category of a product or service to advertise to a user. (c) To customize an advertisement for presentation to a user.
D-01 Automated Processing Rights & Data Controls · D-01.4 · D-01.5 · Deployer · Developer · Chatbot
A.R.S. § 44-1383.01(A)(3)(d), (A)(4)
Plain Language
Chatbot providers may not use chat logs and personal data for profiling — classifying or designating personality traits or behavioral characteristics — beyond what is strictly necessary to fulfill a user's express request. This restriction applies regardless of consent. Processing chat logs for user safety or statutory compliance is excluded from the definition of profiling and is therefore not restricted by this provision.
Statutory Text
A chatbot provider may not: 3. Process a user's chat log and personal data: (d) To engage in profiling beyond what is necessary to fulfill an express request. 4. Profile a user based on any classification or designation of the user's personality or behavioral characteristic beyond what is necessary to fulfill an express request made by the user.
MN-01 Minor User AI Safety Protections · MN-01.2 · Deployer · Developer · Chatbot · Minors
A.R.S. § 44-1383.01(A)(3)(a)-(b)
Plain Language
When a chatbot provider knows or reasonably should know (based on objective circumstances) that a user is a minor, the provider may not process the minor's chat logs and personal data — either generally or for training purposes — unless the minor's parent or legal guardian has provided affirmative consent. The knowledge standard is constructive (should have known based on objective circumstances), not actual knowledge only. Training excludes safety testing and harm-mitigation modifications.
Statutory Text
A chatbot provider may not: 3. Process a user's chat log and personal data: (a) If the chatbot provider knows or reasonably should have known that based on knowledge of objective circumstances the user is a minor and the user's parent or legal guardian did not provide affirmative consent. (b) For training purposes if the chatbot provider knows or reasonably should have known that based on knowledge of objective circumstances the user is a minor and the user's parent or legal guardian did not provide affirmative consent.
D-01 Automated Processing Rights & Data Controls · D-01.4 · Deployer · Developer · Chatbot
A.R.S. § 44-1383.01(A)(3)(c)
Plain Language
For adult users, chatbot providers may not use chat logs and personal data for training purposes unless the provider first obtains affirmative consent. Unlike the minor provision, which requires parental consent, here the adult user themselves must consent. The affirmative consent standard is the same demanding standard defined elsewhere in the statute.
Statutory Text
A chatbot provider may not: 3. Process a user's chat log and personal data: (c) for training purposes if the user is an adult, unless the chatbot provider first obtains affirmative consent.
Other · Chatbot
A.R.S. § 44-1383.01(A)(5)
Plain Language
Chatbot providers are categorically prohibited from selling users' chat logs. 'Sell' is defined broadly to include making data available to third parties for monetary or other valuable consideration, but excludes processor disclosures, user-directed disclosures with affirmative consent, and data the user intentionally made publicly available. This is an absolute prohibition — no consent mechanism can override it.
Statutory Text
A chatbot provider may not: 5. Sell a user's chat logs.
Other · Chatbot
A.R.S. § 44-1383.01(A)(6)
Plain Language
Chatbot providers must delete chat logs after ten years unless retention is necessary for compliance with this article or other law. This is a hard retention cap — it applies regardless of consent or purpose. The exception for legal compliance obligations means providers should identify any overlapping retention requirements before deleting.
Statutory Text
A chatbot provider may not: 6. Retain a user's chat log for more than ten years, unless retention is necessary to comply with this article or otherwise required by law.
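For implementers, the ten-year retention cap can be reduced to a simple date check. The sketch below is a minimal illustration, not legal guidance: the `legal_hold` flag and the 365-day-year approximation are assumptions of this example, and a production system would use calendar-aware date math and track the specific law requiring continued retention.

```python
from datetime import datetime, timedelta

# Illustrative sketch of the retention cap in A.R.S. § 44-1383.01(A)(6):
# a chat log may not be retained for more than ten years unless retention
# is required by this article or other law (modeled here as `legal_hold`).
TEN_YEARS = timedelta(days=365 * 10)  # rough approximation of ten years

def must_delete(created_at: datetime, now: datetime, legal_hold: bool = False) -> bool:
    """True when a chat log has exceeded the cap and no legal exception applies."""
    if legal_hold:  # retention necessary to comply with this article or other law
        return False
    return now - created_at > TEN_YEARS

# A log created in 2010 is past the cap by 2026; a legal hold overrides deletion.
print(must_delete(datetime(2010, 1, 1), datetime(2026, 1, 1)))                    # True
print(must_delete(datetime(2010, 1, 1), datetime(2026, 1, 1), legal_hold=True))   # False
```

The hold check runs first so that an overlapping retention requirement always wins, mirroring the statute's "unless ... otherwise required by law" exception.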
Other · Chatbot
A.R.S. § 44-1383.01(A)(7)
Plain Language
Chatbot providers may not punish users for exercising their rights under this article — specifically, for refusing to consent to the use of their chat logs or personal data for training. Prohibited retaliation includes denying service, charging higher prices, or degrading service quality. This protects user autonomy in the consent process.
Statutory Text
A chatbot provider may not: 7. Discriminate or retaliate against a user, including: (a) Denying products or services to the user. (b) Charging different prices or rates for products or services to the user. (c) Providing lower quality products or service to the user for refusing to consent to the use of chat logs or personal data for training purposes.
D-01 Automated Processing Rights & Data Controls · D-01.1 · Deployer · Developer · Chatbot
A.R.S. § 44-1383.01(B)
Plain Language
Users have an unconditional right to access their own chat logs at any time. Upon request, the chatbot provider must deliver the chat logs in a downloadable, easily readable format. Providers may not retaliate against users who exercise this access right. This is a standing access right — no triggering conditions or limitations on frequency.
Statutory Text
A user has a right to access the user's own chat logs at any time. A chatbot provider shall provide a user's own chat log on request by the user and shall provide the chat log in a downloadable and easy to read format. A chatbot provider may not discriminate or retaliate against a user pursuant to subsection A paragraph 7 of this section that requests the user's chat log.
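The "downloadable and easy to read format" requirement leaves the format choice to the provider. As one hypothetical reading of that requirement, pretty-printed JSON satisfies both properties; the record layout (`role`, `text`, `at`) below is invented for illustration and is not specified by the bill.

```python
import json

# Hypothetical sketch of the access right in A.R.S. § 44-1383.01(B): on request,
# return the user's own chat log in a downloadable, easy-to-read format.
def export_chat_log(messages: list[dict]) -> str:
    """Serialize a user's chat log as pretty-printed JSON suitable for download."""
    return json.dumps(
        [{"role": m["role"], "text": m["text"], "at": m["at"]} for m in messages],
        indent=2,
        ensure_ascii=False,  # keep the user's own language readable, per the statute's spirit
    )

log = [{"role": "user", "text": "Hello", "at": "2026-01-01T09:00:00Z"}]
print(export_chat_log(log))
```

Because the access right has no frequency limit or trigger condition, an implementation would expose this export unconditionally to any authenticated user for their own logs.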
Other · Chatbot
A.R.S. § 44-1383.01(C)
Plain Language
Government entities may not compel chatbot providers to produce or grant access to input data or chat logs unless the government obtains a wiretap warrant. This creates a heightened Fourth Amendment-style protection for chatbot interaction data, treating it more like wiretap communications than ordinary business records. Chatbot providers should not comply with subpoenas or other compulsory process short of a wiretap warrant.
Statutory Text
A government entity may not compel the production of or access to input data or chat logs from a chatbot provider, except as pursuant to a wiretap warrant.
G-01 AI Governance Program & Documentation · G-01.1 · Deployer · Developer · Chatbot
A.R.S. § 44-1383.01(D)
Plain Language
Chatbot providers must develop, implement, and maintain a written comprehensive data security program with administrative, technical, and physical safeguards proportionate to the volume and nature of personal data and chat logs they hold. The written program must be publicly posted on the provider's website. This is both a governance obligation (establish and maintain a program) and a transparency obligation (publish it publicly).
Statutory Text
A chatbot provider shall develop, implement and maintain a comprehensive data security program that contains administrative, technical and physical safeguards that are proportionate to the volume and nature of personal data and chat logs that are maintained by the chatbot provider. The program shall be written and made publicly available on the chatbot provider's website.
Other · Chatbot
A.R.S. § 44-1383.01(E)
Plain Language
Chatbot providers must implement physical, administrative, and technical safeguards to ensure that de-identified data cannot be re-identified at any stage — during processing, retention, or transfer. This is a continuing obligation that applies throughout the data lifecycle, not just at the point of de-identification.
Statutory Text
A chatbot provider shall take the necessary physical, administrative and technical measures to prevent de-identified data from being re-identified and to process, retain and transfer de-identified data without any reasonable means of re-identification.
CP-01 Deceptive & Manipulative AI Conduct · CP-01.9 · Deployer · Developer · Chatbot
A.R.S. § 44-1383.02(A)(1)
Plain Language
Chatbot providers are prohibited from using any term, phrase, or language in advertising, the chatbot interface, or chatbot outputs that states or implies that the chatbot's content is endorsed by or equivalent to a licensed professional — including any professional licensed under Arizona Title 32, licensed legal professionals, CPAs, investment advisors, and licensed fiduciaries. This covers the full range of output touchpoints: advertising materials, in-app interface, and generated responses.
Statutory Text
A chatbot provider may not: 1. Use any term, letter or phrase in the advertising, interface or output data of a chatbot that states or implies that the advertising, interface or output data of a chatbot is endorsed by or equivalent to any of the following: (a) Any certified, registered or licensed professional pursuant to title 32. (b) A licensed legal professional. (c) A certified public accountant as defined in section 32-701. (d) An investment advisor or an investment adviser representative as defined in section 44-3101. (e) A licensed fiduciary as prescribed in title 14, chapter 5, article 7.
CP-01 Deceptive & Manipulative AI Conduct · CP-01.5 · Deployer · Developer · Chatbot
A.R.S. § 44-1383.02(A)(2)
Plain Language
Chatbot providers may not represent — in advertising, interface design, or chatbot outputs — that user input data or chat logs are confidential. This prevents creating a false expectation of privacy that could influence user behavior or trust. Providers should audit marketing materials, onboarding flows, and chatbot response templates for any language suggesting confidentiality.
Statutory Text
A chatbot provider may not: 2. Include any representation in the advertising, interface or output data of a chatbot that states or implies the user's input data or chat log is confidential.
T-01 AI Identity Disclosure · T-01.1 · T-01.2 · T-01.3 · Deployer · Developer · Chatbot
A.R.S. § 44-1383.02(B)
Plain Language
Before any chatbot output, the provider must give a clear, conspicuous, and explicit notice that the user is interacting with a chatbot, not a human. This notice is unconditional — it must appear at the beginning of every communication, be repeated every hour during continuing interactions, and be provided each time a user asks whether they are talking to a natural person. The notice must be in the same language as the chatbot and in a font size at least as large as the largest font used in the chatbot's other communications. The Attorney General will adopt rules specifying the form and content of the notice, including a template.
Statutory Text
A chatbot provider shall provide clear, conspicuous and explicit notice to a user that the user is interacting with a chatbot rather than a natural person before the chatbot may generate any output data. The chatbot provider shall include this notice at the beginning of each chatbot communication with a user, every hour thereafter and each time a user asks whether the chatbot is a natural person. The text of the notice: 1. shall be written in the same language that the chatbot communicates with the user and shall appear in a font size that is easily readable by an average user and is not smaller than the largest font size used for other chatbot communications. 2. must comply with the rules adopted by the attorney general pursuant to section 44-1383.03.
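The notice-timing rules in subsection (B) reduce to three triggers: start of the communication, every hour thereafter, and each time the user asks whether they are talking to a natural person. The sketch below is a minimal illustration of that timing logic only; the function name and parameters are invented, and the notice's actual form and content must follow the Attorney General's rules under § 44-1383.03.

```python
from datetime import datetime, timedelta

# Hypothetical sketch of the notice triggers in A.R.S. § 44-1383.02(B).
DISCLOSURE_INTERVAL = timedelta(hours=1)

def needs_ai_disclosure(last_notice_at, now, user_asked_if_human: bool) -> bool:
    """Return True when the AI-identity notice must be issued or reissued."""
    if last_notice_at is None:       # beginning of each chatbot communication
        return True
    if user_asked_if_human:          # on-demand disclosure
        return True
    return now - last_notice_at >= DISCLOSURE_INTERVAL  # hourly re-disclosure

start = datetime(2026, 1, 1, 9, 0)
print(needs_ai_disclosure(None, start, False))                           # True (first output)
print(needs_ai_disclosure(start, start + timedelta(minutes=30), False))  # False
print(needs_ai_disclosure(start, start + timedelta(minutes=61), False))  # True (hourly)
print(needs_ai_disclosure(start, start + timedelta(minutes=5), True))    # True (user asked)
```

A caller would check this gate before generating any output and reset `last_notice_at` each time the notice is shown.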
S-01 AI System Safety Program · S-01.4 · S-01.7 · Deployer · Developer · Chatbot
A.R.S. § 44-1383.02(C)
Plain Language
Chatbot providers must, on a monthly basis: (1) evaluate their chatbot for potential risk of harm to users, and (2) publish information about the chatbot on their website. Providers must also mitigate any identified risks of harm. The specifics of what constitutes 'risk of harm' and what risk-reduction requirements apply will be defined by Attorney General rulemaking. This is a continuing, monthly operational obligation — significantly more frequent than the typical annual review cadence in other jurisdictions.
Statutory Text
In compliance with the rules adopted by the attorney general pursuant to section 44-1383.03, a chatbot provider shall: 1. On a monthly basis: (a) Evaluate its chatbot for potential risk of harm to users. (b) Make information about its chatbot publicly available on its website. 2. Mitigate any risk of harm to users.
Other · Chatbot
A.R.S. § 44-1383.03
Plain Language
The Attorney General must adopt implementing rules that will define: the form and content of the AI identity disclosure notice, an example template, the definition of 'potential risk of harm,' and provider requirements to reduce harm. The AG may also adopt any other rules necessary to implement the article. Until these rules are adopted, several statutory obligations (notice form, risk-of-harm evaluations, mitigation requirements) lack specific compliance parameters.
Statutory Text
A. The attorney general shall adopt rules to implement this article. The rules shall: 1. Describe the form and content of the notice that is required pursuant to section 44-1383.02. 2. Provide an example template for the notice that is required pursuant to section 44-1383.02. 3. Describe any potential risk of harm to users. 4. Provide requirements for a chatbot provider to implement to reduce the risk of harm to users. B. The attorney general may adopt any other rules necessary to implement this article.
Other · Chatbot
A.R.S. § 44-1383.04
Plain Language
Chatbots are classified as 'products' under Arizona product liability law. Chatbot providers have a duty to ensure their chatbots do not cause injury. Notably, the provider is liable even if they exercised all reasonable care (strict liability) or even if they had no direct distribution or contractual relationship with the user. This goes beyond negligence — a chatbot provider cannot escape liability by showing reasonable care in design and distribution, and upstream developers remain liable even when a downstream distributor delivers the chatbot to the end user.
Statutory Text
A. A chatbot is considered a product for the purposes of a product liability action as prescribed in title 12, chapter 6, article 9. B. A chatbot provider has a duty to ensure that the use of the chatbot provider's chatbot does not cause injury to a user. C. A chatbot provider is liable for any injury that the chatbot causes to a user if either of the following occurs: 1. The chatbot provider exercised all reasonable care in the design and distribution of the chatbot. 2. The chatbot provider did not directly distribute the chatbot to the user or otherwise enter into a contractual relationship with the user.