HB-2311
AZ · State · USA
● Pending
Proposed Effective Date
2027-10-01
Arizona HB 2311 — Artificial Intelligence Service; Disclosures; Requirements (Amending Title 18, Arizona Revised Statutes, by adding Chapter 8)
Summary

Arizona HB 2311 imposes safety, disclosure, and content-restriction obligations on operators of conversational AI services — defined as general-public-facing AI applications that primarily simulate human conversation, excluding developer tools, narrow-topic bots, enterprise software, customer service bots, and voice assistants. Key obligations include: AI identity disclosure for minor account holders (with periodic reminders every three hours) and for all users when a reasonable person would be misled; prohibitions on variable-reward engagement tactics for minors; restrictions on sexually explicit and emotionally manipulative content directed at minors; mandatory crisis response protocols for suicidal ideation; privacy management tools for minors and parents; and a prohibition on representing that the service provides professional mental or behavioral health care. Enforcement is exclusively through the Attorney General, with civil penalties up to $1,000 per violation capped at $500,000 per operator. The law explicitly shields AI model developers from liability for violations by third-party operators. Effective October 1, 2027.

Enforcement & Penalties
Enforcement Authority
Attorney General enforcement only. The Attorney General may seek civil penalties and injunctive relief. The statute creates no private right of action, and expressly provides that it neither creates one to enforce the chapter nor supports a private right of action under any other law.
Penalties
Greater of actual damages or civil penalties of $1,000 per violation, not to exceed $500,000 per operator. Injunctive relief is also available. Statutory civil penalties do not require proof of actual harm.
Who Is Covered
"Operator" (a) Means a person that makes available a conversational AI service to the public. (b) Does not include a mobile application store or search engine solely because the application or engine provides access to a conversational AI service.
What Is Covered
"Conversational AI service" (a) Means an artificial intelligence software application, web interface or computer program that is accessible to the general public and that primarily simulates human conversation and interaction through textual, visual or aural communications. (b) Does not include an application, web interface or computer program that meets any of the following: (i) Is primarily designed and marketed for use by developers or researchers. (ii) Is a feature within another software application, web interface or computer program that is not a conversational AI service. (iii) Is designed to provide outputs relating to a narrow and discrete topic. (iv) Is primarily designed and marketed for commercial use by business entities. (v) Functions as a speaker and voice command interface or voice-activated virtual assistant for a consumer electronic device. (vi) Is used by a business entity solely for internal purposes. (vii) Is used by a business entity solely for customer service or to strictly provide users with information about available commercial services or products provided by the business entity, customer service account information or other information strictly related to the business entity's customer service. (viii) Is used solely to provide commerce-related or transactional assistance, including product or service recommendations, shopping, ordering, payments, delivery, returns or customer support.
Compliance Obligations · 10 obligations
T-01 AI Identity Disclosure · T-01.1 · T-01.2 · Deployer · Chatbot · Minors
A.R.S. § 18-802(A)
Plain Language
Operators must clearly and conspicuously disclose to minor account holders that they are interacting with a conversational AI service. The operator may satisfy this obligation in one of two ways: (1) a persistent visible disclaimer displayed throughout the interaction, or (2) a disclosure at the beginning of each session plus a reminder at least every three hours during continuous interactions. Unlike the disclosure in subsection (E), this obligation does not depend on whether the user could be misled — it applies to every minor account holder.
Statutory Text
A. Each operator shall clearly and conspicuously disclose to a minor account holder in either of the following ways that the minor is interacting with a conversational AI service: 1. As a persistent visible disclaimer. 2. At the beginning of each session and appearing at least every three hours in a continuous conversational AI service interaction.
T-01 AI Identity Disclosure · T-01.1 · Deployer · Chatbot
A.R.S. § 18-802(E)
Plain Language
When a reasonable person would be misled into thinking they are interacting with a human, the operator must clearly and conspicuously disclose that the conversational AI service is artificial intelligence. This is a conditional trigger — it applies to all users (not just minors), but only when the AI's presentation would mislead a reasonable person into thinking they are talking to a human. If the system clearly presents as AI, no disclosure is required under this provision.
Statutory Text
E. If a reasonable person would be misled to believe that the person is interacting with a human, an operator shall clearly and conspicuously disclose that the conversational AI service is artificial intelligence.
MN-01 Minor User AI Safety Protections · MN-01.4 · Deployer · Chatbot · Minors
A.R.S. § 18-802(B)
Plain Language
Operators may not use variable-ratio reward mechanics — such as points or similar rewards given at unpredictable intervals — to encourage increased engagement by minor account holders. The prohibition requires both knowledge of minor status and intent to encourage increased engagement. Random reward schedules designed to create compulsive engagement patterns are the primary target.
Statutory Text
B. If an operator knows that an account holder is a minor, the operator may not provide the user with points or similar rewards at unpredictable intervals with the intent to encourage increased engagement with the conversational AI service.
S-02 Prohibited Conduct & Output Restrictions · S-02.6 · Deployer · Chatbot · Minors
A.R.S. § 18-802(C)
Plain Language
Operators must institute reasonable measures to prevent their conversational AI service from: (1) producing visual material of sexual conduct for minor account holders, (2) generating direct statements that a minor should engage in sexual conduct, and (3) generating statements that sexually objectify a minor account holder. The standard is 'reasonable measures' — not absolute prevention — but the obligation covers three distinct categories of harmful sexually explicit content directed at minors.
Statutory Text
C. Each operator shall institute reasonable measures to prevent the conversational AI service from doing any of the following for minor account holders: 1. Producing visual material of sexual conduct. 2. Generating direct statements that the account holder should engage in sexual conduct. 3. Generating statements that sexually objectify the account holder.
MN-01 Minor User AI Safety Protections · MN-01.5 · Deployer · Chatbot · Minors
A.R.S. § 18-802(D)
Plain Language
Operators must implement reasonable measures to prevent the conversational AI service from generating statements that would mislead a minor into believing they are interacting with a human. The enumerated categories — claims of sentience, emotional dependence simulation, romantic or sexual innuendos, and adult-minor romantic role-playing — are illustrative, not exhaustive ('including any of the following'). The standard is a reasonable-person test: whether the statement would lead a reasonable person to believe they are interacting with a human. This is an output-restriction obligation focused on preventing emotional manipulation and simulated human intimacy with minors.
Statutory Text
D. For minor account holders, the operator shall institute reasonable measures to prevent the conversational AI service from generating statements that would lead a reasonable person to believe that the person is interacting with a human, including any of the following: 1. Explicit claims that the conversational AI service is sentient or human. 2. Statements that simulate emotional dependence. 3. Statements that simulate romantic or sexual innuendos. 4. Role-playing of adult-minor romantic relationships.
MN-01 Minor User AI Safety Protections · MN-01.3 · Deployer · Chatbot · Minors
A.R.S. § 18-802(F)
Plain Language
Operators must provide privacy and account management tools to minor account holders. For minors under 13, these tools must also be provided directly to the parent or guardian. For minors 13 and older, operators must also offer related tools to parents or guardians 'as appropriate based on relevant risks,' giving operators some discretion in calibrating parental access for older teens. The requirement ensures both direct minor control and parental oversight capability at age-appropriate levels.
Statutory Text
F. Each operator shall offer tools for minor account holders and, if the account holder is under thirteen years of age, the account holder's parent or guardian, to manage the account holder's privacy and account settings. An operator shall also offer related tools to the parent or guardian of a minor account holder who is thirteen years of age or above, as appropriate based on relevant risks.
S-04 AI Crisis Response Protocols · S-04.1 · Deployer · Chatbot
A.R.S. § 18-802(G)
Plain Language
Every operator must adopt and maintain a protocol for responding to user prompts involving suicidal ideation or self-harm. The protocol must include making reasonable efforts to refer users to crisis service providers — such as a suicide hotline, crisis text line, or other appropriate crisis service. This applies to all users, not just minors. The standard is 'reasonable efforts,' not absolute guarantee of referral. Note that unlike California SB 243, this provision does not require the protocol details to be published on the operator's website, nor does it require annual reporting of crisis referral metrics.
Statutory Text
G. Each operator shall adopt a protocol for the conversational AI service to respond to a user prompt regarding suicidal ideation or self-harm, including making reasonable efforts to provide a response to the user that refers the user to crisis service providers such as a suicide hotline, crisis text line or other appropriate crisis service.
CP-01 Deceptive & Manipulative AI Conduct · CP-01.9 · Deployer · Chatbot · Healthcare
A.R.S. § 18-802(H)
Plain Language
Operators may not knowingly and intentionally cause or program their conversational AI service to represent that it provides professional mental or behavioral health care. The prohibition is narrow: both elements — knowledge and intent — must be present, and it reaches only explicit representations that the service is designed to provide professional mental or behavioral health care, not incidental health-related responses.
Statutory Text
H. An operator shall not knowingly and intentionally cause or program a conversational AI service to make any representation or statement that explicitly indicates that the conversational AI service is designed to provide professional mental or behavioral health care.
Other · Chatbot
A.R.S. § 18-802(I)-(J)
Plain Language
This provision establishes the enforcement framework and damages structure for the chapter. Violations are punishable by the greater of actual damages or $1,000 per violation (capped at $500,000 per operator), plus injunctive relief. Only the Attorney General may seek civil penalties — no private right of action is created, and the statute expressly bars using a violation to support a private right of action under any other law. This creates no new compliance obligation of its own.
Statutory Text
I. An operator that violates this chapter is subject to an injunction and is liable for the greater of either: 1. Actual damages. 2. Civil penalties of $1,000 per violation, not to exceed $500,000 per operator. J. A violation of this section is punishable by a civil penalty, to be sought by the attorney general only. This section does not create a private right of action to enforce this section or to support a private right of action under any other law.
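The damages formula in subsection (I) — the greater of actual damages or $1,000 per violation, with the civil penalty capped at $500,000 per operator — can be worked through as follows. This is a sketch only: the statute does not specify how violations are counted, and the cap is read here as applying to the civil penalty, not to actual damages.

```python
# Sketch of the § 18-802(I) liability formula. Reading: the $500,000
# cap limits the per-violation civil penalty; actual damages are not
# capped. How violations are counted is an open question.
PER_VIOLATION_PENALTY = 1_000
PER_OPERATOR_CAP = 500_000

def liability(actual_damages: int, violations: int) -> int:
    """Greater of actual damages or capped per-violation penalties."""
    civil_penalty = min(violations * PER_VIOLATION_PENALTY, PER_OPERATOR_CAP)
    return max(actual_damages, civil_penalty)
```

For example, a million violations with no proven harm still yields only the $500,000 cap, while $750,000 in actual damages exceeds any capped penalty.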
Other · Chatbot
A.R.S. § 18-802(K)
Plain Language
AI model developers are not liable under this chapter for violations committed by a third-party operator who makes the conversational AI service available to the public. This is a developer liability shield — obligations under the chapter fall on the operator, not the upstream model developer. This creates no new compliance obligation.
Statutory Text
K. This section does not create liability for the developer of an artificial intelligence model for any violation of this section by a conversational AI service that is made available to the public by a third party operator.