LD-2162
ME · State · USA
● Failed
Effective Date
2026-06-15
Maine LD 2162 — An Act to Regulate and Prevent Children's Access to Artificial Intelligence Chatbots with Human-like Features and Social Artificial Intelligence Companions
Summary

Prohibits deployers of AI chatbots and social AI companions from making chatbots with 'human-like features' or social AI companions accessible to minors in Maine. Human-like features include behavior conveying sentience or emotions, attempts to build emotional relationships, and impersonation of real individuals. Deployers must implement reasonable age verification to enforce these restrictions. A narrow exemption allows therapy chatbots to be made available to minors under strict conditions including licensed professional oversight and peer-reviewed clinical trial data. The bill also imposes all-user protections requiring deployers to detect and respond to emergency situations (user intent to self-harm or harm others) and to minimize user data collection. Enforcement is by the Attorney General (up to $7,500 per intentional violation) and through a private right of action for minors ($100–$750 per user per incident or actual damages).

Enforcement & Penalties
Enforcement Authority
Dual enforcement. The Attorney General may bring a civil action against any person that violates the chapter. A minor who uses a noncompliant chatbot, or a parent or guardian acting on the minor's behalf, may bring a civil action independently or as part of a class action. The Department of the Attorney General may adopt rules necessary to implement the chapter.
Penalties
AG enforcement: injunctive relief, disgorgement of profits or revenues, and civil penalties of up to $2,500 per violation or up to $7,500 per intentional violation. Private right of action for minors: statutory damages between $100 and $750 per user per incident, or actual damages, whichever is greater; injunctive or declaratory relief available in lieu of or in addition to damages. Statutory damages do not require proof of actual monetary harm.
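The private right of action resolves to a greater-of computation: aggregate statutory damages or actual damages, whichever is larger. A minimal Python sketch of that arithmetic; the aggregation of per-user, per-incident amounts is an interpretation of the summary above, not statutory text, and the function name is hypothetical:

```python
def recoverable_damages(num_users: int, incidents_per_user: int,
                        per_incident_award: int, actual_damages: float) -> float:
    """Statutory damages of $100-$750 per user per incident, or actual
    damages, whichever is greater; no proof of monetary harm is required
    for the statutory amount (per the Penalties summary above)."""
    if not 100 <= per_incident_award <= 750:
        raise ValueError("per-incident award must fall within $100-$750")
    statutory = num_users * incidents_per_user * per_incident_award
    return max(statutory, actual_damages)
```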
Who Is Covered
"Deployer" means a person that operates or distributes an artificial intelligence chatbot, therapy chatbot or social artificial intelligence companion.
What Is Covered
"Artificial intelligence chatbot" or "chatbot" means an artificial intelligence system with which users can interact by or through a software application, web interface, computer program or other interface that approximates or simulates human conversation and interaction through a text, audio or visual medium.
"Social artificial intelligence companion" or "social AI companion" means an artificial intelligence system that is specifically designed, marketed or optimized to form an ongoing social or emotional attachment with a user, whether or not such a system also provides information, completes tasks or assists with specific functions.
"Therapy chatbot" means a chatbot modified or designed with the primary purpose of providing mental health support, counseling or therapeutic intervention through the diagnosis, treatment, mitigation or prevention of mental health conditions.
Compliance Obligations · 8 obligations
MN-01 Minor User AI Safety Protections · MN-01.1 · Deployer · Chatbot · Minors
10 MRSA § 1500-RR(1)
Plain Language
Deployers must ensure that chatbots with human-like features — meaning chatbots that convey sentience or emotions, attempt to build emotional relationships, or impersonate real individuals — are not accessible to minors. Deployers must implement reasonable age verification to enforce this restriction. Deployers may, if reasonable given the chatbot's purpose, offer an alternative version without human-like features to minors and unverified users. The carve-outs for 'functional evaluations' and 'generic social formalities' mean that routine conversational politeness and factual assessments do not trigger the restriction.
Statutory Text
1. Chatbots with human-like features; no minor access; age verification; alternative versions. A deployer shall ensure that any chatbot operated or distributed by the deployer does not make human-like features available to minors to use, interact with, purchase or converse with. The deployer shall implement reasonable age verification systems to ensure that chatbots with human-like features are not accessible to minors. A deployer may, if reasonable given the purpose of the chatbot, provide an alternative version of the chatbot without human-like features available to minors and any user who has not verified that user's age.
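To illustrate the access-control logic this subsection implies, here is a minimal Python sketch. The AgeStatus categories, function name, and return labels are assumptions for illustration only; the statute does not prescribe any particular age verification mechanism, and the alternative version is optional ("may, if reasonable given the purpose of the chatbot").

```python
from enum import Enum, auto

class AgeStatus(Enum):
    VERIFIED_ADULT = auto()
    VERIFIED_MINOR = auto()
    UNVERIFIED = auto()   # statute: the alternative version may be served to
                          # minors and any user who has not verified their age

def select_chatbot_version(age_status: AgeStatus, has_alternative: bool) -> str:
    """Sketch of 10 MRSA § 1500-RR(1): human-like features must not be
    available to minors; an alternative version without human-like
    features may optionally be offered to minors and unverified users."""
    if age_status is AgeStatus.VERIFIED_ADULT:
        return "full"          # human-like features permitted
    if has_alternative:
        return "alternative"   # version without human-like features
    return "deny"              # no compliant version for this user
```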
MN-01 Minor User AI Safety Protections · MN-01.1 · Deployer · Chatbot · Minors
10 MRSA § 1500-RR(2)
Plain Language
Deployers must ensure that any AI system that primarily functions as a social AI companion — meaning a system designed, marketed, or optimized to form ongoing social or emotional attachment with users — is entirely unavailable to minors. Unlike §1500-RR(1), there is no alternative-version option here: the product category itself is categorically prohibited for minor access. Deployers must implement reasonable age verification to enforce this prohibition.
Statutory Text
2. Social artificial intelligence companions; no minor access; age verification. A deployer shall ensure that any artificial intelligence system, including a chatbot, operated or distributed by the deployer that primarily functions as a social artificial intelligence companion is not available to minors to use, interact with, purchase or converse with. The deployer shall implement reasonable age verification systems to ensure that such chatbots are not accessible to minors.
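The gate for subsection 2 is stricter than the sketch above: there is no alternative-version branch for social AI companions. A standalone sketch; treating unverified users the same as minors is a conservative design assumption, not statutory text:

```python
def companion_accessible(is_verified_adult: bool) -> bool:
    """Sketch of 10 MRSA § 1500-RR(2): a system that primarily functions
    as a social AI companion is categorically unavailable to minors,
    with no fallback analogous to § 1500-RR(1)'s alternative version."""
    return is_verified_adult
```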
MN-01 Minor User AI Safety Protections · MN-01.5 · Deployer · Developer · Chatbot · Minors · Healthcare
10 MRSA § 1500-RR(3)
Plain Language
A therapy chatbot may be made available to minors notwithstanding the general prohibitions on human-like features and social AI companions, but only if six cumulative conditions are met: (1) the chatbot disclaims at the start of each interaction that it is AI, not a licensed professional; (2) it is not marketed as a substitute for a licensed professional; (3) a licensed mental health professional prescribes and monitors the minor's use as part of a treatment plan; (4) the developer provides peer-reviewed clinical trial data on safety and efficacy; (5) the chatbot's functions, limitations, and data privacy policies are transparent to the supervising professional and the user; and (6) the deployer has established clear accountability lines for harm. All six conditions must be satisfied — failure on any one means the exemption does not apply and the minor access prohibition stands.
Statutory Text
3. Exemption for therapy chatbots. Notwithstanding subsections 1 and 2, a deployer may make available to a minor a therapy chatbot as long as all of the following requirements are met:
A. The therapy chatbot provides a clear and conspicuous disclaimer at the beginning of each individual interaction that it is artificial intelligence and not a licensed mental health professional;
B. The therapy chatbot is not marketed or designated as a substitute for a licensed mental health professional;
C. A licensed mental health professional, such as a licensed clinical psychologist, assesses a minor's suitability, prescribes use of the therapy chatbot as part of a comprehensive treatment plan and monitors its use and impact on the minor;
D. Developers of the therapy chatbot provide robust, independent, peer-reviewed clinical trial data demonstrating the safety and efficacy of the therapy chatbot for specific conditions and populations;
E. The therapy chatbot's functions, limitations and data privacy policies are transparent to the licensed mental health professional under paragraph C and the user; and
F. The deployer has established clear lines of accountability to address any harm caused by the therapy chatbot.
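Because the exemption is conjunctive, a compliance check reduces to a logical AND over the six paragraphs. A minimal Python sketch; the field names are hypothetical labels for paragraphs A through F, and whether each condition is actually satisfied is a legal judgment that no boolean flag can settle on its own:

```python
from dataclasses import dataclass, fields

@dataclass
class TherapyExemptionChecklist:
    """One flag per paragraph of 10 MRSA § 1500-RR(3)(A)-(F)."""
    per_session_ai_disclaimer: bool          # A
    not_marketed_as_substitute: bool         # B
    licensed_professional_oversight: bool    # C
    peer_reviewed_trial_data: bool           # D
    transparent_functions_and_privacy: bool  # E
    clear_accountability_lines: bool         # F

def exemption_applies(c: TherapyExemptionChecklist) -> bool:
    # Failure on any single condition means the exemption does not
    # apply and the minor-access prohibition stands.
    return all(getattr(c, f.name) for f in fields(c))
```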
S-04 AI Crisis Response Protocols · S-04.1 · Deployer · Chatbot · Minors
10 MRSA § 1500-SS(1)
Plain Language
Deployers must implement and maintain systems that can detect when a user indicates intent to harm themselves or another person, and must promptly respond to, report, and mitigate such situations. The deployer's response must prioritize the user's safety and well-being over the deployer's commercial or other interests. This is a continuous operating requirement — the systems must be maintained and reasonably effective at all times, not merely documented. Note that unlike CA SB 243, this provision applies to all users, not just minors, and covers both self-harm and intent to harm others.
Statutory Text
1. Emergency situations; detection and response. A deployer shall implement and maintain reasonably effective systems to detect, promptly respond to, report and mitigate emergency situations in a manner that prioritizes a user's safety and well-being over the deployer's other interests.
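Operationally, this subsection implies a detect, respond, report, and mitigate pipeline running on every user message. The sketch below is one possible shape under assumptions the statute does not make: it specifies no detection technique, response script, or reporting channel, and the keyword detector here is a placeholder that would almost certainly fall short of "reasonably effective" on its own.

```python
# Placeholder cue lists; a real deployment would need a far more capable
# classifier to meet a "reasonably effective" standard.
SELF_HARM_CUES = ("kill myself", "end my life", "hurt myself")
HARM_OTHERS_CUES = ("hurt them", "kill them")

def detect_emergency(message: str) -> bool:
    text = message.lower()
    return any(cue in text for cue in SELF_HARM_CUES + HARM_OTHERS_CUES)

def handle_message(message: str, session_id: str, audit_log: list) -> str | None:
    """Sketch of 10 MRSA § 1500-SS(1): detect, promptly respond to, report
    and mitigate emergency situations, prioritizing user safety over the
    deployer's other interests."""
    if detect_emergency(message):
        audit_log.append((session_id, "emergency_detected"))  # report
        # Mitigate by overriding normal generation with a safety-first reply.
        return ("It sounds like you may be going through a crisis. You can "
                "reach the 988 Suicide & Crisis Lifeline by calling or "
                "texting 988.")
    return None  # no emergency detected; normal handling continues
```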
D-01 Automated Processing Rights & Data Controls · D-01.4 · Deployer · Chatbot · Minors
10 MRSA § 1500-SS(2)
Plain Language
Deployers face a two-part data minimization obligation. First, they may not collect or store any information that conflicts with a user's safety and well-being — this is an absolute prohibition regardless of business purpose. Second, even for legitimate purposes, data collection and storage must be limited to information that is both relevant to the purpose and the minimum amount necessary. This is a purpose-limitation and data-minimization standard similar to GDPR's data minimization principle, applied specifically to chatbot and social AI companion deployers for all users, not just minors.
Statutory Text
2. User information collection and storage. A deployer shall collect and store only information that does not conflict with a user's safety and well-being. A deployer may not collect and store information except to fulfill a legitimate purpose of the deployer. A deployer may collect and store information that is adequate to fulfill a legitimate purpose of the deployer, but only to the extent that the information:
A. Is relevant to that legitimate purpose; and
B. Is the minimum amount of information necessary to fulfill that legitimate purpose.
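The two-part test maps naturally onto a field-level collection filter: drop anything that conflicts with user safety, then keep only fields that are relevant to a declared legitimate purpose and within the minimum necessary set. A minimal sketch; the purpose registry, field names, and safety denylist are all hypothetical, and deciding what conflicts with safety or is "minimum necessary" remains a legal and design judgment:

```python
# Hypothetical registry: for each declared legitimate purpose, the minimum
# set of fields deemed both relevant and necessary to fulfill it.
MINIMUM_FIELDS = {
    "account_management": {"user_id", "email"},
    "billing": {"user_id", "payment_token"},
}

# Hypothetical denylist of fields judged to conflict with user safety.
SAFETY_CONFLICTING = {"precise_location", "indefinite_chat_transcripts"}

def fields_to_collect(purpose: str, requested: set[str]) -> set[str]:
    """Sketch of 10 MRSA § 1500-SS(2): collect only what a legitimate
    purpose requires, and never what conflicts with user safety."""
    allowed = MINIMUM_FIELDS.get(purpose, set())  # unregistered purpose: nothing
    return (requested & allowed) - SAFETY_CONFLICTING
```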
T-01 AI Identity Disclosure · T-01.1 · Deployer · Chatbot · Minors · Healthcare
10 MRSA § 1500-RR(3)(A)
Plain Language
When a therapy chatbot is made available to a minor under the exemption, it must provide a clear and conspicuous disclaimer at the beginning of each interaction that it is artificial intelligence and not a licensed mental health professional. This is an unconditional per-session AI identity disclosure — it must appear at the start of every individual interaction, not just the first one. This obligation applies only to therapy chatbots serving minors under the §1500-RR(3) exemption.
Statutory Text
A. The therapy chatbot provides a clear and conspicuous disclaimer at the beginning of each individual interaction that it is artificial intelligence and not a licensed mental health professional;
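Because the disclaimer must open every individual interaction rather than only the first, the natural implementation ties the disclosure to session start rather than to account onboarding. A minimal sketch; the wording, class shape, and session model are assumptions:

```python
DISCLAIMER = ("I am an artificial intelligence program, not a licensed "
              "mental health professional.")

class TherapySession:
    """Sketch of 10 MRSA § 1500-RR(3)(A): the disclaimer is emitted at the
    start of each individual interaction (here, each new session)."""

    def __init__(self) -> None:
        self._disclosed = False

    def next_reply(self, generated: str) -> str:
        if not self._disclosed:
            self._disclosed = True
            return f"{DISCLAIMER}\n\n{generated}"
        return generated
```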
CP-01 Deceptive & Manipulative AI Conduct · CP-01.9 · Deployer · Chatbot · Minors · Healthcare
10 MRSA § 1500-RR(3)(B)
Plain Language
A therapy chatbot made available to minors under the exemption must not be marketed or designated as a substitute for a licensed mental health professional. This is an anti-misrepresentation requirement — it prohibits deployers from positioning the chatbot as equivalent to professional care, whether in advertising, product descriptions, or in-product framing.
Statutory Text
B. The therapy chatbot is not marketed or designated as a substitute for a licensed mental health professional;
Other · Chatbot · Minors
10 MRSA § 1500-UU
Plain Language
The Attorney General's Department may adopt rules to implement the chapter, including defining the term 'functional evaluation' — which is currently used as a carve-out in the definition of 'human-like feature' but is left undefined by the statute. This is an enabling provision that grants rulemaking authority; it creates no new compliance obligation for deployers or developers. Practitioners should monitor for future rulemaking that could narrow or expand the scope of what constitutes a 'human-like feature.'
Statutory Text
The Department of the Attorney General may adopt rules necessary to implement this chapter, which may include, but are not limited to, defining the term "functional evaluation," as used in section 1500-PP, subsection 4, paragraph A, subparagraph (2), division (a).