HB-324
AL · State · USA
● Pending
Proposed Effective Date
2026-10-01
Alabama HB 324 — Relating to artificial intelligence (AI) chatbots; to require AI chatbot deployers to implement a reasonable age verification process and verify the age of all AI chatbot users; to provide prohibitions on the provision of certain AI chatbots to minors; to require AI chatbot deployers to provide alternative versions of the platform without human-like features to minors; to require AI chatbot deployers to adopt protocols for AI chatbots to detect and mitigate emergency situations; to limit the amount and type of information AI chatbot deployers are allowed to collect and store; to allow therapeutic AI chatbots meeting certain requirements to be made available to minors; to provide a private right of action for certain users; and to authorize the Attorney General to bring suit to enforce this act.
Summary

Imposes safety obligations on covered entities that make AI chatbots available in the United States, with a focus on protecting minors (defined as under 19 in Alabama). Requires all users to create accounts and undergo a reasonable age verification process; prohibits making AI chatbots with 'human-like features' — such as expressions of sentience, emotional relationship-seeking, or impersonation — available to minors. Requires covered entities to implement emergency situation detection and response protocols and limits data collection to the minimum necessary for a legitimate purpose. Creates a narrow exception for therapeutic chatbots prescribed by licensed mental health professionals under enumerated conditions. Enforceable via private right of action by minors or their parents (up to $750 per violation in statutory damages) and by the Attorney General (up to $7,500 per intentional violation).

Enforcement & Penalties
Enforcement Authority
Dual enforcement. The Attorney General may bring an action against an operator, upon complaint or otherwise, whenever it appears that a person has engaged in or is about to engage in prohibited acts or practices. A private right of action is available to a minor who uses a noncompliant AI chatbot, or to a parent or guardian acting on the minor's behalf, individually or as a class action. No cure period or safe harbor is specified.
Penalties
Private action: greater of actual damages or statutory damages not to exceed $750 per violation; injunctive relief also available. Attorney General action: injunctive relief; civil penalties up to $2,500 per violation or up to $7,500 per intentional violation; and any other remedies the court deems appropriate. Statutory damages do not require proof of actual monetary harm.
Who Is Covered
COVERED ENTITY. Any person who owns, operates, or otherwise makes available an AI chatbot to individuals in the United States.
What Is Covered
AI CHATBOT. a. Any generative artificial intelligence interactive computer service or software application that: 1. Produces new expressive content or responses not fully predetermined by the developer or operator of the service or application; and 2. Accepts open-ended, natural language or multimodal user input and produces adaptive or context-responsive output. b. The term does not include an interactive computer service or software application that: 1. Limits the responses to contextualized replies; or 2. Is unable to respond on a range of topics outside of a narrow specified purpose.
Compliance Obligations · 6 obligations
MN-01 Minor User AI Safety Protections · MN-01.1 · Deployer · Chatbot · Minors
Section 2(a)-(b)(1)-(3), (d)
Plain Language
Every covered entity must require all users to create an account before using an AI chatbot. All existing accounts must be frozen and cannot be restored until the user completes a reasonable age verification process; new accounts require age verification at creation. Users must be classified as minors or adults. Periodic re-verification of previously verified accounts is also required. A covered entity may outsource age verification to a third party, but this does not relieve the covered entity of liability. Notably, simply entering a birth date or inferring age from IP address or device identifiers does not qualify as reasonable age verification — government ID or a commercial age verification system is required.
Statutory Text
(a) Each covered entity shall require each individual accessing an AI chatbot to make a user account in order to use or otherwise interact with the AI chatbot. (b)(1) With respect to each existing user account of an AI chatbot, a covered entity shall: a. Freeze existing user accounts; b. Require that the user is age verified through a reasonable age verification process to restore the functionality of the account; and c. Classify each age-verified user as a minor or an adult based on the reasonable age verification process. (2) At the time an individual creates a new user account to use an AI chatbot, a covered entity shall: a. Require that each individual is age verified through a reasonable age verification process; and b. Classify each individual as a minor or an adult based on the reasonable age verification process. (3) A covered entity shall periodically review previously age-verified user accounts using a reasonable age verification process, subject to subsection (d). (d) For purposes of subsection (b), a covered entity may contract with a third party to implement the covered entity's reasonable age verification process. However, the use of a third party for a reasonable age verification process shall not relieve the covered entity of its obligations or from liability under this act.
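The account-gating flow in Section 2(b) (freeze, verify, classify, restore) can be sketched as simple state transitions. This is an illustrative Python sketch only: the `AgeStatus` labels and function names are not from the statute, and the under-19 threshold is taken from the bill summary's definition of a minor in Alabama.

```python
from dataclasses import dataclass
from enum import Enum, auto

class AgeStatus(Enum):
    UNVERIFIED = auto()
    MINOR = auto()   # under 19 in Alabama, per the bill summary
    ADULT = auto()

@dataclass
class UserAccount:
    user_id: str
    status: AgeStatus = AgeStatus.UNVERIFIED
    frozen: bool = False

def freeze_existing_accounts(accounts):
    # Section 2(b)(1)a: freeze every pre-existing account until the
    # user completes a reasonable age verification process
    for acct in accounts:
        if acct.status is AgeStatus.UNVERIFIED:
            acct.frozen = True

def restore_after_verification(acct: UserAccount, verified_age: int):
    # Section 2(b)(1)b-c: restore functionality only after verification,
    # then classify the user as a minor or an adult
    acct.status = AgeStatus.MINOR if verified_age < 19 else AgeStatus.ADULT
    acct.frozen = False
```

Periodic re-verification under Section 2(b)(3) would simply reset an account to `UNVERIFIED` and re-run the same flow.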
MN-01 Minor User AI Safety Protections · MN-01.5 · Deployer · Chatbot · Minors
Section 2(c)
Plain Language
Covered entities must either (1) block minors from accessing any AI chatbot with human-like features — including expressions of sentience, emotional relationship-building, impersonation of real persons, excessive praise fostering emotional attachment, nudging for return engagement, or pay-gated intimacy — or (2) provide minors with an alternative version of the chatbot stripped of all human-like features, where doing so is reasonable given the chatbot's purpose. Generic social formalities, functional evaluations, and neutral offers of further help are carved out from the human-like feature definition. This is a disjunctive obligation — covered entities may choose either approach.
Statutory Text
(c) Each covered entity shall: (1) Ensure that any AI chatbot operated or distributed by the platform does not make human-like features available to minors to use, interact with, purchase, or converse with; or (2) Provide an alternative version of the AI chatbot to minors without human-like features, if reasonable given the purpose of the AI chatbot.
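The disjunctive Section 2(c) obligation can be expressed as a small gating function. The feature labels below paraphrase the examples listed in the summary; they are illustrative shorthand, not statutory terms.

```python
# Illustrative labels paraphrasing the bill's examples of human-like features
HUMAN_LIKE_FEATURES = {
    "sentience_claims", "emotional_relationship_seeking", "impersonation",
    "excessive_praise", "return_engagement_nudges", "paywalled_intimacy",
}

def chatbot_for_user(features: set, user_is_minor: bool,
                     purpose_allows_stripping: bool) -> set:
    """Return the feature set a user may access under Section 2(c)."""
    if not user_is_minor:
        return features
    if purpose_allows_stripping:
        # Section 2(c)(2): serve an alternative version without
        # human-like features, if reasonable given the chatbot's purpose
        return features - HUMAN_LIKE_FEATURES
    # Section 2(c)(1): otherwise a chatbot with human-like features
    # must not be made available to minors at all
    if features & HUMAN_LIKE_FEATURES:
        raise PermissionError("chatbot with human-like features blocked for minor")
    return features
```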
MN-02 AI Crisis Response Protocols · MN-02.1 · Deployer · Chatbot · Minors
Section 2(e)
Plain Language
Covered entities must implement and continuously maintain systems that can detect, promptly respond to, report, and mitigate emergency situations — defined as any situation where a user indicates intent to harm themselves or others. The statute requires that user safety and well-being be prioritized over the covered entity's other interests (e.g., engagement, revenue). Unlike some companion chatbot statutes, this obligation applies to all users, not only minors. The statute does not specify particular crisis referral services or protocols, leaving the 'reasonably effective' standard as the measure of compliance.
Statutory Text
(e) Each covered entity shall implement and maintain reasonably effective systems to detect, promptly respond to, report, and mitigate emergency situations in a manner that prioritizes the safety and well-being of users over the covered entity's other interests.
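A Section 2(e) pipeline might look like the sketch below: detect, report, respond, and mitigate, in that order. The detection patterns and callback names are placeholders; the statute prescribes no particular method, only a "reasonably effective" standard.

```python
import re

# Placeholder patterns; real systems would use far more robust detection
SELF_HARM_PATTERNS = [
    r"\bhurt (myself|someone)\b",
    r"\bkill (myself|someone)\b",
]

def handle_message(text: str, escalate, log_report) -> str:
    """Sketch of a Section 2(e) flow: detect, report, respond, mitigate."""
    if any(re.search(p, text, re.IGNORECASE) for p in SELF_HARM_PATTERNS):
        log_report(text)          # report the emergency situation
        escalate(text)            # respond promptly, e.g. crisis referral
        return "crisis_response"  # mitigate: override the normal engagement flow
    return "normal_response"
```

Prioritizing "safety and well-being of users over the covered entity's other interests" is the key design constraint: the crisis path must take precedence over engagement or monetization logic.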
D-01 Automated Processing Rights & Data Controls · D-01.4 · Deployer · Chatbot · Minors
Section 2(f)
Plain Language
Covered entities are subject to a data minimization requirement: they may collect and store only information that (1) does not conflict with a 'trusted party's' best interests, (2) is sufficient for a legitimate purpose, (3) is relevant to that purpose, and (4) is the minimum amount needed. This is a three-prong necessity test layered on top of a best-interests constraint. The term 'trusted party' is not defined in the statute, creating significant ambiguity — it likely refers to the user or the minor's parent/guardian, but this is not explicit.
Statutory Text
(f) Each covered entity shall collect and store only information that does not conflict with a trusted party's best interests, which must be: (1) Sufficient to fulfill a legitimate purpose of the covered entity; (2) Relevant to the legitimate purpose of the covered entity; and (3) The minimum amount of information needed for the legitimate purpose of the covered entity.
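The layered Section 2(f) test can be modeled as set operations, assuming the deployer maintains a mapping from each legitimate purpose to the minimum fields it requires. All names here are illustrative; the bill does not define "trusted party" or enumerate legitimate purposes.

```python
def minimized_collection(requested: set, needed_for_purpose: set,
                         conflicts_with_best_interest: set) -> set:
    """Sketch of the Section 2(f) gate.

    Collect only fields that are relevant to and the minimum needed for
    a legitimate purpose (the intersection), and never fields that
    conflict with the trusted party's best interests (the subtraction).
    """
    return (requested & needed_for_purpose) - conflicts_with_best_interest
```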
T-01 AI Identity Disclosure · T-01.1 · Deployer · Chatbot · Minors · Healthcare
Section 3(1)-(2)
Plain Language
Therapeutic chatbots that are made available to minors under the Section 3 exception must provide a clear and conspicuous disclaimer — verbally or in writing — at the beginning of each interaction stating that the chatbot is an AI and not a licensed professional. Additionally, the chatbot must not be marketed or designated as a substitute for a human professional. These are conditions precedent to the therapeutic chatbot exception; failure to meet them means the chatbot cannot be made available to minors under Section 3.
Statutory Text
(1) The therapeutic AI chatbot provides a clear and conspicuous disclaimer, verbally or in writing, at the beginning of each interaction that the AI chatbot is an artificial intelligence and not a licensed professional. (2) The AI chatbot is not marketed or designated as a substitute for a human professional.
HC-02 AI in Licensed Professional Practice Restrictions · HC-02.1 · HC-02.3 · Deployer · Professional · Chatbot · Minors · Healthcare
Section 3(3)-(6)
Plain Language
A therapeutic chatbot may only be made available to minors if: (1) a licensed mental health professional individually assesses the minor user's suitability, prescribes the tool within a comprehensive treatment plan, and monitors its ongoing use and impact; (2) the covered entity has robust, independent, peer-reviewed clinical trial data demonstrating safety and efficacy for the specific conditions and populations at issue; (3) the system's functions, limitations, and data privacy policies are transparent to both the prescribing professional and the user; and (4) the covered entity establishes clear lines of accountability for harm. These are cumulative conditions — all must be met alongside the Section 3(1)-(2) disclosure requirements. This effectively gates minor access to therapeutic chatbots behind a physician-prescribes-and-monitors model with clinical evidence requirements more akin to FDA-cleared digital therapeutics than typical consumer chatbot regulation.
Statutory Text
(3) A licensed mental health professional assesses a user's suitability and prescribes the tool as part of a comprehensive treatment plan and monitors its use and impact. (4) The covered entity provides robust, independent, and peer-reviewed clinical trial data demonstrating the safety and efficacy of the tool for specific conditions and populations. (5) The system's functions, limitations, and data privacy policies are transparent to both the licensed mental health professional and the user. (6) The covered entity establishes clear lines of accountability for any harm caused by the therapeutic AI chatbot.
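Because the Section 3 conditions are cumulative, including the Section 3(1)-(2) disclosure conditions above, the exception can be sketched as a single conjunctive gate. The condition keys are illustrative shorthand for the six statutory requirements.

```python
# Illustrative shorthand for the six Section 3 conditions
SECTION_3_CONDITIONS = (
    "disclaimer_each_interaction",  # 3(1)
    "not_marketed_as_substitute",   # 3(2)
    "professional_prescribed",      # 3(3)
    "clinical_trial_evidence",      # 3(4)
    "transparent_to_both_parties",  # 3(5)
    "accountability_established",   # 3(6)
)

def therapeutic_chatbot_available_to_minor(checks: dict) -> bool:
    # Cumulative gate: every condition must be satisfied; any missing
    # or failed condition closes the exception entirely
    return all(checks.get(c, False) for c in SECTION_3_CONDITIONS)
```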