HB-2311
AZ · State · USA
● Pending
Proposed Effective Date
2027-10-01
Arizona HB 2311 — Artificial Intelligence Service; Disclosures; Requirements (amending Title 18, Arizona Revised Statutes, by adding Chapter 8)
Summary

Arizona HB 2311 imposes disclosure, safety, and content restriction obligations on operators of conversational AI services accessible to the general public. For minor account holders, operators must display persistent or periodic AI identity disclosures, prohibit variable-ratio reward engagement features, prevent sexually explicit and emotionally manipulative content, and offer privacy management tools. For all users, operators must disclose AI identity when a reasonable person would be misled and must adopt crisis response protocols for suicidal ideation and self-harm. Operators may not represent their AI as providing professional mental or behavioral health care. Enforcement is exclusively through the Arizona attorney general, with civil penalties of up to $1,000 per violation capped at $500,000 per operator. The statute expressly shields AI model developers from liability for violations by third-party operators.

Enforcement & Penalties
Enforcement Authority
Attorney general enforcement only. Civil penalties may be sought exclusively by the attorney general. The statute does not create a private right of action and expressly bars the section from being used to support a private right of action under any other law. No cure period or safe harbor is specified.
Penalties
Civil penalties of the greater of actual damages or $1,000 per violation, not to exceed $500,000 per operator. Injunctive relief is also available. Statutory civil penalties do not require proof of actual harm.
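For illustration only, the penalty formula can be sketched as a small function. This is our own model of the text, not anything in the bill: the function name is invented, and the reading that the $1,000 figure aggregates across violations before the per-operator cap applies is an assumption.

```python
def penalty_exposure(violations: int, actual_damages: float = 0.0) -> float:
    """Hypothetical model of HB 2311 civil penalty exposure.

    Assumption: the greater of actual damages or $1,000 per violation,
    aggregated, then capped at $500,000 per operator.
    """
    PER_VIOLATION = 1_000
    CAP = 500_000
    base = max(actual_damages, PER_VIOLATION * violations)
    return min(base, CAP)
```

For example, 600 violations with no proven damages would hit the $500,000 cap under this reading, while 2 violations with $5,000 in actual damages would yield $5,000.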
Who Is Covered
"Operator" (a) Means a person that makes available a conversational AI service to the public. (b) Does not include a mobile application store or search engine solely because the application or engine provides access to a Conversational AI Service.
What Is Covered
"Conversational AI Service" (a) Means an artificial intelligence software application, web interface or computer program that is accessible to the general public and that primarily simulates human conversation and interaction through textual, visual or aural communications. (b) Does not include an application, web interface or computer program that meets any of the following: (i) Is primarily designed and marketed for use by developers or researchers. (ii) Is a feature within another software application, web interface or computer program that is not a conversational AI service. (iii) Is designed to provide outputs relating to a narrow and discrete topic. (iv) Is primarily designed and marketed for commercial use by business entities. (v) Functions as a speaker and voice command interface or voice-activated virtual assistant for a consumer electronic device. (vi) Is used by a business entity solely for internal purposes. (vii) Is used by a business entity solely for customer service or to strictly provide users with information about available commercial services or products provided by the business entity, customer service account information or other information strictly related to the business entity's customer service. (viii) Is used solely to provide commerce-related or transactional assistance, including product or service recommendations, shopping, ordering, payments, delivery, returns or customer support.
Compliance Obligations · 9 obligations
T-01 AI Identity Disclosure · T-01.1 · T-01.2 · Deployer · Chatbot · Minors
A.R.S. § 18-802(A)
Plain Language
Operators must clearly and conspicuously disclose to minor account holders that they are interacting with an AI. The operator may choose between two disclosure methods: (1) a persistent visible disclaimer that remains on-screen throughout the interaction, or (2) a disclosure at the beginning of each session plus at least every three hours in a continuous session. This obligation is unconditional for minors — it applies regardless of whether the AI might be mistaken for a human. The minor definition is knowledge-based: it applies only when the operator has actual knowledge or reasonable certainty the user is under 18.
Statutory Text
A. Each operator shall clearly and conspicuously disclose to a minor account holder in either of the following ways that the minor is interacting with a conversational AI service: 1. As a persistent visible disclaimer. 2. At the beginning of each session and appearing at least every three hours in a continuous conversational AI service interaction.
T-01 AI Identity Disclosure · T-01.1 · Deployer · Chatbot
A.R.S. § 18-802(E)
Plain Language
For all users (not just minors), if a reasonable person could be misled into believing they are interacting with a human, the operator must clearly and conspicuously disclose that the service is AI. This is a conditional trigger — if the conversational AI service clearly presents itself as AI from the outset, no disclosure is required.
Statutory Text
E. If a reasonable person would be misled to believe that the person is interacting with a human, an operator shall clearly and conspicuously disclose that the conversational AI service is artificial intelligence.
MN-01 Minor User AI Safety Protections · MN-01.4 · Deployer · Chatbot · Minors
A.R.S. § 18-802(B)
Plain Language
Operators may not use variable-ratio reward mechanics — such as points, badges, or similar rewards delivered at unpredictable intervals — to encourage increased engagement by minor account holders. The prohibition includes an intent element: the rewards must be provided 'with the intent to encourage increased engagement.' A gamified reward system therefore violates this provision only if the operator deploys it with the intent of driving increased engagement by minors; an operator that offers such a system without that intent may fall outside the prohibition.
Statutory Text
B. If an Operator knows that an account holder is a minor, the operator may not provide the user with points or similar rewards at unpredictable intervals with the intent to encourage increased engagement with the conversational AI service.
S-02 Prohibited Conduct & Output Restrictions · S-02.6 · Deployer · Chatbot · Minors
A.R.S. § 18-802(C)
Plain Language
Operators must implement reasonable measures to prevent the conversational AI service from generating three categories of content for minor account holders: (1) visual material depicting sexual conduct, (2) direct statements encouraging the minor to engage in sexual conduct, and (3) statements that sexually objectify the minor. The standard is 'reasonable measures' — not absolute prevention — giving operators some flexibility in implementation. 'Sexual conduct' is defined by cross-reference to A.R.S. § 13-3551, which covers a broad range of sexual acts.
Statutory Text
C. Each Operator shall institute reasonable measures to prevent the conversational AI service from doing any of the following for minor account holders: 1. Producing visual material of sexual conduct. 2. Generating direct statements that the account holder should engage in sexual conduct. 3. Generating statements that sexually objectify the account holder.
MN-01 Minor User AI Safety Protections · MN-01.5 · Deployer · Chatbot · Minors
A.R.S. § 18-802(D)
Plain Language
For minor account holders, operators must implement reasonable measures to prevent the AI from generating statements that would lead a reasonable person to believe they are interacting with a human. The statute provides a non-exhaustive list of prohibited statement types: claims of sentience or humanity, emotional dependence simulation, romantic or sexual innuendos, and adult-minor romantic role-playing. The 'including' framing means this list is illustrative — any statement that would mislead a reasonable person into thinking they are talking to a human is covered.
Statutory Text
D. For minor account holders, the operator shall institute reasonable measures to prevent the conversational AI service from generating statements that would lead a reasonable person to believe that the person is interacting with a human, including any of the following: 1. Explicit claims that the conversational AI service is sentient or human. 2. Statements that simulate emotional dependence. 3. Statements that simulate romantic or sexual innuendos. 4. Role-playing of adult-minor romantic relationships.
MN-01 Minor User AI Safety Protections · MN-01.3 · Deployer · Chatbot · Minors
A.R.S. § 18-802(F)
Plain Language
Operators must provide privacy and account management tools to minor account holders. For minors under 13, these tools must also be provided directly to the parent or guardian. For minors 13 and older, the operator must also offer related tools to parents or guardians 'as appropriate based on relevant risks' — a flexible standard that gives operators discretion to calibrate parental access based on the specific risks their platform presents.
Statutory Text
F. Each operator shall offer tools for minor account holders and, if the account holder is under thirteen years of age, the account holder's parent or guardian, to manage the account holder's privacy and account settings. An operator shall also offer related tools to the parent or guardian of a minor account holder who is thirteen years of age or above, as appropriate based on relevant risks.
MN-02 AI Crisis Response Protocols · MN-02.1 · Deployer · Chatbot
A.R.S. § 18-802(G)
Plain Language
Operators must adopt a protocol for responding to user prompts involving suicidal ideation or self-harm. The protocol must include making reasonable efforts to refer users to crisis services such as suicide hotlines or crisis text lines. This obligation applies to all users (not just minors), and the standard is 'reasonable efforts' rather than an absolute prevention mandate. This statute does NOT separately require publication of the protocol on the operator's website or annual reporting of crisis referral metrics to a state authority.
Statutory Text
G. Each operator shall adopt a protocol for the conversational AI service to respond to a user prompt regarding suicidal ideation or self-harm, including making reasonable efforts to provide a response to the user that refers the user to crisis service providers such as a suicide hotline, crisis text line or other appropriate crisis service.
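As a purely illustrative sketch of the referral step in § 18-802(G): once an upstream safety classifier (not shown, and well beyond this snippet) flags suicidal ideation or self-harm, the protocol must make reasonable efforts to surface crisis resources. The function name and flow are ours; the statute does not prescribe any particular implementation. The 988 Suicide & Crisis Lifeline referenced in the comment is the real US crisis line.

```python
def crisis_response(flagged_self_harm: bool, default_reply: str) -> str:
    """Sketch of a § 18-802(G)-style referral step: when a prompt has been
    flagged for suicidal ideation or self-harm by a separate classifier,
    prepend a referral to crisis services (here, the US 988 Lifeline)."""
    if not flagged_self_harm:
        return default_reply
    referral = ("If you are thinking about harming yourself, help is "
                "available: call or text 988 (Suicide & Crisis Lifeline).")
    return referral + "\n\n" + default_reply
```

A production protocol would involve far more than string handling — classifier quality, escalation paths, and locale-appropriate crisis services — but the statute's 'reasonable efforts' referral requirement reduces, at minimum, to routing of this kind.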
CP-01 Deceptive & Manipulative AI Conduct · CP-01.9 · Deployer · Chatbot · Healthcare
A.R.S. § 18-802(H)
Plain Language
Operators may not knowingly and intentionally cause or program their conversational AI service to explicitly represent itself as providing professional mental or behavioral health care. This is a dual-intent standard — the operator must both know and intend the misrepresentation. The prohibition is limited to explicit representations; it does not clearly cover implicit suggestions or interface designs that merely imply therapeutic capability without stating it directly.
Statutory Text
H. An operator shall not knowingly and intentionally cause or program a conversational AI service to make any representation or statement that explicitly indicates that the conversational AI service is designed to provide professional mental or behavioral health care.
Other · Developer · Deployer · Chatbot
A.R.S. § 18-802(K)
Plain Language
Developers of underlying AI models are not liable under this statute for violations committed by third-party operators who deploy those models as conversational AI services. This shields upstream model providers (e.g., a company that trains and licenses a large language model) from enforcement actions targeting downstream operators. The carve-out only applies when a third party makes the service available — if the developer also operates the conversational AI service directly, this shield does not apply.
Statutory Text
K. This section does not create liability for the developer of an artificial intelligence model for any violation of this section by a conversational AI service that is made available to the public by a third party operator.