HB-2737
AZ · State · USA
● Pending
Proposed Effective Date
2026-01-01
Arizona HB 2737 — Chatbot Regulations; Personal Data; Requirements (ChatBot Protection Act)
Summary

Imposes data governance, transparency, advertising, and safety obligations on chatbot providers operating in Arizona. Prohibits processing personal data without affirmative consent, bans use of chat logs for advertising, prohibits sale of chat logs, restricts profiling, and imposes heightened consent requirements for processing minor users' data. Requires clear and conspicuous AI identity disclosure before output generation, repeated hourly, and on-demand. Mandates monthly safety evaluations and public disclosure of chatbot information. Classifies chatbots as products for product liability purposes and imposes strict liability on providers. Creates a private right of action with up to $5,000 per violation and punitive damages for reckless or knowing conduct, enforceable by the Attorney General, county attorneys, or injured users.

Enforcement & Penalties
Enforcement Authority
The Attorney General or a county attorney may bring a civil action against a chatbot provider that violates the article. A private right of action is available to any user injured by a violation of § 44-1383.01 or § 44-1383.02; a violation of those sections constitutes an injury in fact to a user, eliminating the need for independent proof of standing. The Attorney General is also granted rulemaking authority to implement the article.
Penalties
Civil penalty of up to $5,000 per violation. Punitive damages available for reckless and knowing conduct. Injunctive relief, declaratory relief, reasonable attorney fees and litigation costs also available. A violation of § 44-1383.01 or § 44-1383.02 constitutes an injury in fact, so statutory damages do not require proof of actual monetary harm. The Attorney General or county attorney may also obtain damages, civil penalties, restitution, or other remedies.
Who Is Covered
"Chatbot provider" means any person that creates, distributes or otherwise makes a chatbot available to a user.
What Is Covered
"Chatbot": (a) Means an algorithmic or automated system that generates information through text, audio, image or video in a manner that simulates interpersonal interactions or conversation. (b) Includes artificial intelligence.
Compliance Obligations · 19 obligations
D-01 Automated Processing Rights & Data Controls · D-01.4 · Deployer · Developer · Chatbot
A.R.S. § 44-1383.01(A)(1)
Plain Language
Chatbot providers may not use personal data to generate chatbot outputs unless doing so is necessary to fulfill a specific user request and the user has given affirmative consent through a robust consent mechanism. Consent cannot be obtained through broad terms of use, dark patterns, or user inaction. The consent request must be a standalone disclosure, written in plain language, accessible to users with disabilities, and available in the chatbot's operating language. The option to decline must be at least as prominent as the option to accept.
Statutory Text
A chatbot provider may not: 1. Process personal data to inform a chatbot output unless processing personal data is necessary to fulfill an express request that is made by a user and the user provides affirmative consent.
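The two-part gate in § 44-1383.01(A)(1) — personal data may inform an output only when an express user request and valid affirmative consent are both present — can be sketched as a simple guard. This is an illustrative sketch, not statutory language; the `ConsentRecord` fields and function names are hypothetical:

```python
from dataclasses import dataclass


@dataclass
class ConsentRecord:
    """Illustrative consent state for one user (field names are hypothetical)."""
    affirmative_consent: bool  # opt-in via a standalone, plain-language disclosure
    via_terms_of_use: bool     # consent buried in broad terms of use does not count
    via_inaction: bool         # silence or inaction does not count


def may_process_personal_data(express_request: bool, consent: ConsentRecord) -> bool:
    """Return True only when both statutory conditions are satisfied.

    Consent obtained through broad terms of use or user inaction is
    treated as no consent at all.
    """
    valid_consent = (
        consent.affirmative_consent
        and not consent.via_terms_of_use
        and not consent.via_inaction
    )
    return express_request and valid_consent
```

Note that the guard fails closed: absent an express request, even fully valid consent does not permit processing.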
D-01 Automated Processing Rights & Data Controls · D-01.4 · Deployer · Developer · Chatbot
A.R.S. § 44-1383.01(A)(2)
Plain Language
Chatbot providers are categorically prohibited from using chat logs — meaning both user inputs and chatbot outputs — for any advertising purpose. This includes determining whether to show ads, selecting which product or service to advertise, and customizing ad content for individual users. There is no consent carve-out for this prohibition — even with user consent, chat logs may not be used for advertising targeting or customization.
Statutory Text
A chatbot provider may not: 2. Process a user's chat log: (a) To determine whether to display an advertisement for a product or service to a user. (b) To determine a product or service or category of a product or service to advertise to a user. (c) To customize an advertisement for presentation to a user.
D-01 Automated Processing Rights & Data Controls · D-01.4 · Deployer · Developer · Chatbot · Minors
A.R.S. § 44-1383.01(A)(3)(a)-(b)
Plain Language
When a chatbot provider knows or reasonably should know that a user is a minor — based on objective circumstances — the provider may not process the minor's chat logs or personal data at all without affirmative parental or guardian consent. This includes a separate prohibition on using minor users' data for model training without parental consent. The knowledge standard is constructive — 'reasonably should have known based on knowledge of objective circumstances' — meaning providers cannot ignore obvious indicators of minor status. Safety testing and legally required compliance actions are carved out of the definition of 'training.'
Statutory Text
A chatbot provider may not: 3. Process a user's chat log and personal data: (a) If the chatbot provider knows or reasonably should have known that based on knowledge of objective circumstances the user is a minor and the user's parent or legal guardian did not provide affirmative consent. (b) For training purposes if the chatbot provider knows or reasonably should have known that based on knowledge of objective circumstances the user is a minor and the user's parent or legal guardian did not provide affirmative consent.
D-01 Automated Processing Rights & Data Controls · D-01.4 · Deployer · Developer · Chatbot
A.R.S. § 44-1383.01(A)(3)(c)
Plain Language
Chatbot providers may not use adult users' chat logs or personal data for model training without first obtaining affirmative consent. This requirement applies regardless of context — any use of user interaction data to adjust or modify the underlying model requires opt-in consent. Safety testing and legally required actions are carved out of the definition of 'training' and do not require separate consent.
Statutory Text
A chatbot provider may not: 3. Process a user's chat log and personal data: (c) for training purposes if the user is an adult, unless the chatbot provider first obtains affirmative consent.
D-01 Automated Processing Rights & Data Controls · D-01.4 · Deployer · Developer · Chatbot
A.R.S. § 44-1383.01(A)(3)(d), (A)(4)
Plain Language
Chatbot providers may not use chat logs, personal data, or input data to profile users — meaning to classify or infer personality traits and behavioral characteristics — beyond what is strictly necessary to fulfill a specific user request. This is a double prohibition: one on using chat logs and personal data for profiling beyond necessity, and a second standalone prohibition on profiling based on personality or behavioral classifications beyond necessity. Processing chat logs for user safety or legal compliance is excluded from the definition of profiling.
Statutory Text
A chatbot provider may not: 3. Process a user's chat log and personal data: (d) To engage in profiling beyond what is necessary to fulfill an express request. 4. Profile a user based on any classification or designation of the user's personality or behavioral characteristic beyond what is necessary to fulfill an express request made by the user.
Other · Chatbot
A.R.S. § 44-1383.01(A)(5)
Plain Language
Chatbot providers are categorically prohibited from selling users' chat logs — meaning exchanging them for monetary or other valuable consideration or making them available to third parties for consideration. Narrow exceptions exist for service providers processing data on behalf of the chatbot provider, user-directed disclosures with affirmative consent, and data the user intentionally made publicly available. Outside those narrow exceptions, the prohibition is absolute: there is no general consent override for selling chat logs.
Statutory Text
A chatbot provider may not: 5. Sell a user's chat logs.
Other · Chatbot
A.R.S. § 44-1383.01(A)(6)
Plain Language
Chatbot providers must delete users' chat logs after ten years unless retention is legally required or necessary for compliance with this article. This is a maximum retention period — providers are free to adopt shorter retention windows. There is no minimum retention requirement except insofar as other provisions (such as user access rights) may require maintaining chat logs while the user relationship is active.
Statutory Text
A chatbot provider may not: 6. Retain a user's chat log for more than ten years, unless retention is necessary to comply with this article or otherwise required by law.
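The ten-year cap in (A)(6) translates into a straightforward purge rule. A minimal sketch, assuming a per-log `retained_at` timestamp and a `legal_hold` flag for the statute's retention exceptions (both names are hypothetical):

```python
from datetime import datetime, timedelta

# Ten years expressed as a fixed day count; a production system would
# use calendar-aware date arithmetic instead.
TEN_YEARS = timedelta(days=365 * 10)


def must_delete(retained_at: datetime, now: datetime, legal_hold: bool = False) -> bool:
    """True when a chat log has exceeded the ten-year maximum and no
    legal-compliance exception (modeled here as legal_hold) applies."""
    return (now - retained_at) > TEN_YEARS and not legal_hold
```

Because the statute sets only a maximum, a provider can run the same check against any shorter retention window it chooses.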
Other · Chatbot
A.R.S. § 44-1383.01(A)(7)
Plain Language
Chatbot providers may not discriminate or retaliate against users who refuse to consent to the use of their chat logs or personal data for training. Prohibited retaliation includes denying products or services, charging higher prices, or providing lower-quality service. This is a non-discrimination-for-exercising-data-rights provision, protecting users who opt out of data use for model training.
Statutory Text
A chatbot provider may not: 7. Discriminate or retaliate against a user, including: (a) Denying products or services to the user. (b) Charging different prices or rates for products or services to the user. (c) Providing lower quality products or service to the user for refusing to consent to the use of chat logs or personal data for training purposes.
D-01 Automated Processing Rights & Data Controls · D-01.1 · D-01.2 · Deployer · Developer · Chatbot
A.R.S. § 44-1383.01(B)
Plain Language
Users have a right to access their own chat logs at any time, and chatbot providers must produce them upon request in a downloadable, easy-to-read format. Providers may not retaliate against users who exercise this access right. This is an on-demand data access right — there is no waiting period or limitation on frequency of requests.
Statutory Text
A user has a right to access the user's own chat logs at any time. A chatbot provider shall provide a user's own chat log on request by the user and shall provide the chat log in a downloadable and easy to read format. A chatbot provider may not discriminate or retaliate against a user pursuant to subsection A paragraph 7 of this section that requests the user's chat log.
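The statute requires a "downloadable and easy to read format" but does not name one. One common implementation choice — shown here as a non-authoritative sketch with an illustrative message schema — is pretty-printed JSON:

```python
import json
from datetime import datetime, timezone


def export_chat_log(messages: list[dict]) -> str:
    """Serialize a user's chat log into a human-readable, downloadable
    JSON document. The message schema (role/text dicts) is illustrative."""
    return json.dumps(
        {
            "exported_at": datetime.now(timezone.utc).isoformat(),
            "messages": messages,
        },
        indent=2,            # pretty-printed so the export is easy to read
        ensure_ascii=False,  # keep non-ASCII chat text legible
    )
```

Since the access right is on-demand with no frequency limit, the export path needs to be cheap enough to serve repeated requests.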
Other · Chatbot
A.R.S. § 44-1383.01(C)
Plain Language
Government entities cannot compel chatbot providers to produce user input data or chat logs except through a wiretap warrant. This provision protects user privacy by imposing a high procedural bar on government access to chatbot interaction data. It creates no affirmative compliance obligation for chatbot providers — it restricts government actors.
Statutory Text
A government entity may not compel the production of or access to input data or chat logs from a chatbot provider, except as pursuant to a wiretap warrant.
G-01 AI Governance Program & Documentation · G-01.1 · Deployer · Developer · Chatbot
A.R.S. § 44-1383.01(D)
Plain Language
Chatbot providers must develop, implement, and maintain a written comprehensive data security program with administrative, technical, and physical safeguards proportionate to the volume and nature of the personal data and chat logs they hold. The program must be publicly available on the provider's website. This is both a governance obligation (establishing and maintaining the program) and a transparency obligation (public posting).
Statutory Text
A chatbot provider shall develop, implement and maintain a comprehensive data security program that contains administrative, technical and physical safeguards that are proportionate to the volume and nature of personal data and chat logs that are maintained by the chatbot provider. The program shall be written and made publicly available on the chatbot provider's website.
Other · Chatbot
A.R.S. § 44-1383.01(E)
Plain Language
Chatbot providers must implement physical, administrative, and technical measures to prevent de-identified data from being re-identified. All processing, retention, and transfer of de-identified data must be conducted without any reasonable means of re-identification. This is an ongoing operational obligation — not a one-time assessment — requiring continuous safeguards against re-identification.
Statutory Text
A chatbot provider shall take the necessary physical, administrative and technical measures to prevent de-identified data from being re-identified and to process, retain and transfer de-identified data without any reasonable means of re-identification.
CP-01 Deceptive & Manipulative AI Conduct · CP-01.9 · Deployer · Developer · Chatbot
A.R.S. § 44-1383.02(A)(1)
Plain Language
Chatbot providers are prohibited from using any term, phrase, or language in chatbot advertising, interface design, or output data that states or implies the chatbot's outputs are endorsed by or equivalent to services from any licensed, registered, or certified professional — including healthcare professionals, attorneys, CPAs, investment advisors, and licensed fiduciaries. This covers the full range of Title 32 professionals plus specifically enumerated financial and legal professionals.
Statutory Text
A chatbot provider may not: 1. Use any term, letter or phrase in the advertising, interface or output data of a chatbot that states or implies that the advertising, interface or output data of a chatbot is endorsed by or equivalent to any of the following: (a) Any certified, registered or licensed professional pursuant to title 32. (b) A licensed legal professional. (c) A certified public accountant as defined in section 32-701. (d) An investment advisor or an investment adviser representative as defined in section 44-3101. (e) A licensed fiduciary as prescribed in title 14, chapter 5, article 7.
CP-01 Deceptive & Manipulative AI Conduct · CP-01.5 · Deployer · Developer · Chatbot
A.R.S. § 44-1383.02(A)(2)
Plain Language
Chatbot providers may not represent — in advertising, the chatbot interface, or chatbot outputs — that a user's input data or chat logs are confidential. This prevents providers from creating a false impression of professional confidentiality (such as attorney-client privilege or doctor-patient confidentiality) that does not legally attach to chatbot interactions. The prohibition applies across all touchpoints: marketing materials, the product interface itself, and the chatbot's generated responses.
Statutory Text
A chatbot provider may not: 2. Include any representation in the advertising, interface or output data of a chatbot that states or implies the user's input data or chat log is confidential.
T-01 AI Identity Disclosure · T-01.1 · T-01.2 · T-01.3 · Deployer · Developer · Chatbot
A.R.S. § 44-1383.02(B)
Plain Language
Chatbot providers must display a clear, conspicuous, and explicit notice that the user is interacting with a chatbot — not a human — before the chatbot generates any output. This disclosure is unconditional (not triggered by a 'reasonable person' test). The notice must repeat at the beginning of each communication, every hour during ongoing interactions, and whenever a user asks if the chatbot is a natural person. The notice must be in the same language as the chatbot's communications and in a font size at least as large as the largest font used in other chatbot communications. Notice form and content must also comply with Attorney General rules.
Statutory Text
A chatbot provider shall provide clear, conspicuous and explicit notice to a user that the user is interacting with a chatbot rather than a natural person before the chatbot may generate any output data. The chatbot provider shall include this notice at the beginning of each chatbot communication with a user, every hour thereafter and each time a user asks whether the chatbot is a natural person. The text of the notice: 1. shall be written in the same language that the chatbot communicates with the user and shall appear in a font size that is easily readable by an average user and is not smaller than the largest font size used for other chatbot communications. 2. must comply with the rules adopted by the attorney general pursuant to section 44-1383.03.
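The notice timing in § 44-1383.02(B) is effectively a small state machine: show the notice at the start of each communication (before any output), again every hour, and immediately whenever the user asks if the chatbot is a natural person. A minimal sketch — the class and method names are illustrative, and detecting the "are you human?" question is left to the caller:

```python
from datetime import datetime, timedelta


class DisclosureScheduler:
    """Tracks when the AI-identity notice must be (re)shown.

    Mirrors the statute's three triggers: start of each communication,
    hourly repetition, and on-demand when the user asks.
    """

    INTERVAL = timedelta(hours=1)

    def __init__(self) -> None:
        self._last_notice: datetime | None = None

    def notice_required(self, now: datetime, user_asked_if_human: bool = False) -> bool:
        if user_asked_if_human:
            return True            # on-demand trigger, regardless of timing
        if self._last_notice is None:
            return True            # start of the communication, before any output
        return now - self._last_notice >= self.INTERVAL

    def record_notice(self, now: datetime) -> None:
        self._last_notice = now
```

A deployment would call `notice_required` before every generated output and `record_notice` each time the notice is actually displayed, keeping the hourly clock anchored to the last disclosure.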
S-01 AI System Safety Program · S-01.4 · S-01.7 · Deployer · Developer · Chatbot
A.R.S. § 44-1383.02(C)
Plain Language
Chatbot providers must, on a monthly basis, evaluate their chatbots for potential risk of harm to users and publish information about their chatbots on their website. Providers must also mitigate any identified risks of harm. The specific form and content of evaluations and the definition of risk of harm will be established by Attorney General rulemaking. This is a continuous, monthly operational obligation — significantly more frequent than the annual reviews required by most other AI safety statutes.
Statutory Text
In compliance with the rules adopted by the attorney general pursuant to section 44-1383.03, a chatbot provider shall: 1. On a monthly basis: (a) Evaluate its chatbot for potential risk of harm to users. (b) Make information about its chatbot publicly available on its website. 2. Mitigate any risk of harm to users.
Other · Chatbot
A.R.S. § 44-1383.03
Plain Language
The Attorney General is directed to adopt rules implementing this article, including specifying the form and content of AI identity disclosures, providing a template notice, defining potential risks of harm to users, and establishing requirements for harm mitigation. The AG also has discretionary authority to adopt any other rules necessary for implementation. This provision creates no direct compliance obligation on chatbot providers — it delegates rulemaking authority to the AG.
Statutory Text
A. The attorney general shall adopt rules to implement this article. The rules shall: 1. Describe the form and content of the notice that is required pursuant to section 44-1383.02. 2. Provide an example template for the notice that is required pursuant to section 44-1383.02. 3. Describe any potential risk of harm to users. 4. Provide requirements for a chatbot provider to implement to reduce the risk of harm to users. B. The attorney general may adopt any other rules necessary to implement this article.
Other · Chatbot
A.R.S. § 44-1383.04
Plain Language
Chatbots are classified as products under Arizona product liability law. Chatbot providers have an absolute duty to ensure their chatbots do not cause injury to users. Notably, providers are liable for injuries even if they exercised all reasonable care (eliminating a negligence defense) or did not directly distribute the chatbot to the user (eliminating a privity defense). This effectively imposes strict liability on chatbot providers for any user injury caused by their chatbot, regardless of the care taken or the distribution chain. This is a significant departure from typical software liability frameworks.
Statutory Text
A. A chatbot is considered a product for the purposes of a product liability action as prescribed in title 12, chapter 6, article 9. B. A chatbot provider has a duty to ensure that the use of the chatbot provider's chatbot does not cause injury to a user. C. A chatbot provider is liable for any injury that the chatbot causes to a user if either of the following occurs: 1. The chatbot provider exercised all reasonable care in the design and distribution of the chatbot. 2. The chatbot provider did not directly distribute the chatbot to the user or otherwise enter into a contractual relationship with the user.
Other · Chatbot
A.R.S. § 44-1383.05
Plain Language
This provision creates the enforcement framework. The Attorney General or county attorneys may bring civil actions for injunctions, compliance orders, damages, civil penalties, restitution, and attorney fees. Any violation of §§ 44-1383.01 or 44-1383.02 automatically constitutes an injury in fact, eliminating standing barriers for private plaintiffs. Individual users may sue for up to $5,000 per violation, punitive damages for reckless or knowing conduct, injunctive and declaratory relief, and attorney fees. This provision creates no independent compliance obligation — it activates enforcement of the obligations in §§ 44-1383.01 and 44-1383.02.
Statutory Text
A. The attorney general or a county attorney may bring a civil action against a chatbot provider that violates this article and that includes any of the following: 1. Enjoining an act or practice in violation of this article. 2. Enforcing compliance with this article or a rule adopted pursuant to this article. 3. Obtaining damages, civil penalties, restitution or other remedies. 4. Obtaining reasonable attorney fees and other litigation costs. B. A violation of section 44-1383.01 or 44-1383.02 constitutes an injury in fact to a user. C. A user who is injured by a violation of section 44-1383.01 or 44-1383.02 may bring a civil action against the chatbot provider, and a court of competent jurisdiction may award a prevailing plaintiff any of the following: 1. A civil penalty of not more than $5,000 per violation of this article. 2. Punitive damages for reckless and knowing conduct. 3. Injunctive relief. 4. Declaratory relief. 5. Reasonable attorney fees and litigation costs.