LD-2162
ME · State · USA
● Pending
Proposed Effective Date
2026-06-16
Maine LD 2162 — An Act to Regulate and Prevent Children's Access to Artificial Intelligence Chatbots with Human-like Features and Social Artificial Intelligence Companions
Summary

Prohibits deployers of AI chatbots and social AI companions from making chatbots with 'human-like features' or social AI companions accessible to minors in Maine. Deployers must implement reasonable age verification systems and may offer alternative chatbot versions without human-like features to minors. A narrow exemption permits therapy chatbots for minors when prescribed and monitored by a licensed mental health professional, subject to six conditions including clinical trial data and clear AI disclaimers. Separately, all deployers must maintain emergency detection and response systems for all users and comply with data minimization requirements. Enforcement is through both AG civil actions (up to $7,500 per intentional violation) and a private right of action for minors or their guardians ($100–$750 per user per incident or actual damages).

Enforcement & Penalties
Enforcement Authority
Dual enforcement. The Attorney General may bring a civil action against any person that violates the chapter. A minor who uses a non-compliant chatbot, or a parent or guardian acting on the minor's behalf, may bring a civil action independently or as part of a class action. The Department of the Attorney General may adopt rules necessary to implement the chapter.
Penalties
AG enforcement: injunctive relief, disgorgement of profits or revenues, and civil penalties up to $2,500 per violation or up to $7,500 per intentional violation. Private right of action for minors: statutory damages between $100 and $750 per user per incident, or actual damages, whichever is greater; injunctive or declaratory relief in lieu of or in addition to damages. Statutory damages do not require proof of actual monetary harm.
Who Is Covered
"Deployer" means a person that operates or distributes an artificial intelligence chatbot, therapy chatbot or social artificial intelligence companion.
What Is Covered
"Artificial intelligence chatbot" or "chatbot" means an artificial intelligence system with which users can interact by or through a software application, web interface, computer program or other interface that approximates or simulates human conversation and interaction through a text, audio or visual medium.
"Social artificial intelligence companion" or "social AI companion" means an artificial intelligence system that is specifically designed, marketed or optimized to form an ongoing social or emotional attachment with a user, whether or not such a system also provides information, completes tasks or assists with specific functions.
"Therapy chatbot" means a chatbot modified or designed with the primary purpose of providing mental health support, counseling or therapeutic intervention through the diagnosis, treatment, mitigation or prevention of mental health conditions.
Compliance Obligations · 7 obligations
MN-01 Minor User AI Safety Protections · MN-01.1 · Deployer · Chatbot · Minors
10 MRSA § 1500-RR(1)
Plain Language
Deployers must ensure that chatbots with human-like features — meaning those that convey sentience, build emotional relationships, or impersonate real individuals — are not accessible to minors. Deployers must implement reasonable age verification to enforce this restriction. As a practical option, deployers may offer a stripped-down version of the chatbot without human-like features to minors and unverified users. Generic social formalities and neutral support inquiries are carved out of the human-like feature definition, so standard customer service language does not trigger the restriction.
Statutory Text
1. Chatbots with human-like features; no minor access; age verification; alternative versions. A deployer shall ensure that any chatbot operated or distributed by the deployer does not make human-like features available to minors to use, interact with, purchase or converse with. The deployer shall implement reasonable age verification systems to ensure that chatbots with human-like features are not accessible to minors. A deployer may, if reasonable given the purpose of the chatbot, provide an alternative version of the chatbot without human-like features available to minors and any user who has not verified that user's age.
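The gating logic in subsection 1 reduces to a simple decision rule: only users verified as adults get the full chatbot, while minors and unverified users get, at most, an alternative version without human-like features. The sketch below is illustrative only, not legal advice; the `User` fields, version names, and fallback behavior are hypothetical assumptions, and "reasonable age verification" itself is left abstract.

```python
from dataclasses import dataclass
from enum import Enum


class ChatbotVersion(Enum):
    FULL = "full"              # human-like features enabled
    RESTRICTED = "restricted"  # alternative version, human-like features stripped
    BLOCKED = "blocked"        # no access offered


@dataclass
class User:
    age_verified: bool  # passed a reasonable age verification check
    is_minor: bool      # only meaningful when age_verified is True


def select_version(user: User, offers_alternative: bool) -> ChatbotVersion:
    """Route a user per 10 MRSA § 1500-RR(1) (illustrative sketch only).

    Human-like features may be served only to users verified as adults.
    Minors and unverified users may receive an alternative version
    without human-like features, if the deployer chooses to offer one.
    """
    if user.age_verified and not user.is_minor:
        return ChatbotVersion.FULL
    return ChatbotVersion.RESTRICTED if offers_alternative else ChatbotVersion.BLOCKED
```

Note that an unverified user is routed the same way as a known minor: the statute conditions human-like features on verification, so "unknown age" cannot default to full access.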
MN-01 Minor User AI Safety Protections · MN-01.1 · Deployer · Chatbot · Minors
10 MRSA § 1500-RR(2)
Plain Language
Deployers must ensure that any AI system primarily functioning as a social AI companion — one designed, marketed, or optimized to form ongoing social or emotional attachments — is completely unavailable to minors. Unlike the chatbot-with-human-like-features provision, there is no option to offer a stripped-down alternative version; social AI companions are categorically blocked for minors. Deployers must implement reasonable age verification to enforce this prohibition.
Statutory Text
2. Social artificial intelligence companions; no minor access; age verification. A deployer shall ensure that any artificial intelligence system, including a chatbot, operated or distributed by the deployer that primarily functions as a social artificial intelligence companion is not available to minors to use, interact with, purchase or converse with. The deployer shall implement reasonable age verification systems to ensure that such chatbots are not accessible to minors.
MN-01 Minor User AI Safety Protections · MN-01.5 · MN-01.6 · Deployer · Developer · Chatbot · Minors · Healthcare
10 MRSA § 1500-RR(3)
Plain Language
Therapy chatbots may be made available to minors — notwithstanding the general prohibition on chatbots with human-like features and social AI companions — but only if six conditions are all met: (1) the chatbot discloses at the start of each interaction that it is AI, not a licensed professional; (2) it is not marketed as a substitute for a licensed professional; (3) a licensed mental health professional prescribes, monitors, and assesses the minor's suitability for the therapy chatbot; (4) the developer provides peer-reviewed clinical trial data on safety and efficacy; (5) the chatbot's functions, limitations, and data privacy policies are transparent to both the supervising professional and the user; and (6) the deployer has established clear accountability lines for any harm. This is a narrow, conditional exemption — failure to meet any single requirement eliminates the exemption and restores the general minor-access prohibition.
Statutory Text
3. Exemption for therapy chatbots. Notwithstanding subsections 1 and 2, a deployer may make available to a minor a therapy chatbot as long as all of the following requirements are met:
A. The therapy chatbot provides a clear and conspicuous disclaimer at the beginning of each individual interaction that it is artificial intelligence and not a licensed mental health professional;
B. The therapy chatbot is not marketed or designated as a substitute for a licensed mental health professional;
C. A licensed mental health professional, such as a licensed clinical psychologist, assesses a minor's suitability, prescribes use of the therapy chatbot as part of a comprehensive treatment plan and monitors its use and impact on the minor;
D. Developers of the therapy chatbot provide robust, independent, peer-reviewed clinical trial data demonstrating the safety and efficacy of the therapy chatbot for specific conditions and populations;
E. The therapy chatbot's functions, limitations and data privacy policies are transparent to the licensed mental health professional under paragraph C and the user; and
F. The deployer has established clear lines of accountability to address any harm caused by the therapy chatbot.
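Because the exemption is conjunctive, a compliance check reduces to a single all-of test: if any one of conditions (A) through (F) fails, the exemption falls away and the general prohibitions of subsections 1 and 2 apply. A hypothetical sketch (the flag names are illustrative, and each flag would in practice be the output of its own substantive assessment):

```python
from dataclasses import dataclass


@dataclass
class TherapyChatbotStatus:
    """Condition flags mirroring 10 MRSA § 1500-RR(3)(A)-(F) (illustrative)."""
    discloses_ai_each_interaction: bool       # (A) per-interaction AI disclaimer
    not_marketed_as_substitute: bool          # (B) not positioned as a substitute
    professional_prescribes_and_monitors: bool  # (C) licensed-professional oversight
    clinical_trial_data_provided: bool        # (D) peer-reviewed safety/efficacy data
    transparent_functions_and_privacy: bool   # (E) transparency to professional and user
    accountability_lines_established: bool    # (F) clear accountability for harm


def exemption_applies(s: TherapyChatbotStatus) -> bool:
    """The exemption is conjunctive: every condition must hold simultaneously."""
    return all((
        s.discloses_ai_each_interaction,
        s.not_marketed_as_substitute,
        s.professional_prescribes_and_monitors,
        s.clinical_trial_data_provided,
        s.transparent_functions_and_privacy,
        s.accountability_lines_established,
    ))
```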
MN-02 AI Crisis Response Protocols · MN-02.1 · MN-02.2 · Deployer · Chatbot · Minors
10 MRSA § 1500-SS(1)
Plain Language
Deployers must implement and continuously maintain systems capable of detecting when a user expresses intent to harm themselves or others, and must promptly respond to, report, and mitigate such situations. The system must prioritize user safety and well-being over the deployer's commercial or other interests. This obligation applies to all users — not only minors. The 'emergency situation' definition covers both self-harm and harm to others, making this broader than typical crisis-response provisions focused solely on suicidal ideation.
Statutory Text
1. Emergency situations; detection and response. A deployer shall implement and maintain reasonably effective systems to detect, promptly respond to, report and mitigate emergency situations in a manner that prioritizes a user's safety and well-being over the deployer's other interests.
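The priority ordering in subsection 1 can be sketched as a pipeline in which safety screening runs before any other handling of a message. This is only a structural illustration under stated assumptions: the keyword check below is a placeholder, not a "reasonably effective system" in the statute's sense, and a real deployment would use a vetted safety classifier plus jurisdiction-appropriate crisis resources, reporting, and human review.

```python
# Hypothetical emergency-response flow per 10 MRSA § 1500-SS(1).
# The signal lists and the two outcome labels are illustrative assumptions.

SELF_HARM_SIGNALS = ("hurt myself", "kill myself", "end my life")
HARM_TO_OTHERS_SIGNALS = ("hurt them", "kill them")


def detect_emergency(message: str) -> bool:
    """Placeholder detector: flags expressed intent of self-harm or harm to others.

    The statute covers both categories, so the check is broader than
    suicidal-ideation-only screening.
    """
    text = message.lower()
    return any(s in text for s in SELF_HARM_SIGNALS + HARM_TO_OTHERS_SIGNALS)


def handle_message(message: str) -> str:
    """Safety screening runs first, before any commercial or other processing."""
    if detect_emergency(message):
        # Respond, report, and mitigate before anything else happens.
        return "crisis_protocol"   # e.g., surface crisis resources, escalate for review
    return "normal_processing"
```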
D-01 Automated Processing Rights & Data Controls · D-01.4 · Deployer · Chatbot · Minors
10 MRSA § 1500-SS(2)
Plain Language
Deployers face a two-part data collection restriction. First, they may not collect or store any information that conflicts with a user's safety and well-being — this is an absolute prohibition regardless of purpose. Second, all other data collection must satisfy a data minimization standard: information may only be collected for a legitimate purpose, must be relevant to that purpose, and must be limited to the minimum amount necessary. This applies to all users, not just minors. The 'safety and well-being' restriction is notably vague and could be interpreted broadly — deployers should assess whether any collected data categories could foreseeably be used to a user's detriment.
Statutory Text
2. User information collection and storage. A deployer shall collect and store only information that does not conflict with a user's safety and well-being. A deployer may not collect and store information except to fulfill a legitimate purpose of the deployer. A deployer may collect and store information that is adequate to fulfill a legitimate purpose of the deployer, but only to the extent that the information:
A. Is relevant to that legitimate purpose; and
B. Is the minimum amount of information necessary to fulfill that legitimate purpose.
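The two-part restriction in subsection 2 can be modeled as a filter: any field flagged as conflicting with user safety is dropped unconditionally, and the remainder is kept only if it falls within the minimum set needed for a declared legitimate purpose. A minimal sketch, assuming a hypothetical purpose registry and field names; which fields actually conflict with "safety and well-being" is a legal judgment the code cannot make:

```python
# Hypothetical data-minimization filter per 10 MRSA § 1500-SS(2).
# The purpose registry, field names, and safety-conflict list are
# illustrative assumptions, not derived from the statute.

MINIMUM_FIELDS_BY_PURPOSE = {
    "account_management": {"email", "age_bracket"},
    "billing": {"email", "payment_token"},
}

# Fields the deployer has determined conflict with user safety and
# well-being; these may never be collected, regardless of purpose.
SAFETY_CONFLICTING_FIELDS = {"precise_location", "contact_list"}


def allowed_fields(purpose: str, requested: set[str]) -> set[str]:
    """Return only the fields collectible for this declared purpose."""
    minimum = MINIMUM_FIELDS_BY_PURPOSE.get(purpose)
    if minimum is None:
        return set()  # no legitimate purpose declared: collect nothing
    # Keep the relevant minimum, then strip safety-conflicting fields.
    return (requested & minimum) - SAFETY_CONFLICTING_FIELDS
```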
T-01 AI Identity Disclosure · T-01.1 · Deployer · Chatbot · Minors · Healthcare
10 MRSA § 1500-RR(3)(A)
Plain Language
When a therapy chatbot is made available to a minor under the therapy chatbot exemption, it must provide a clear and conspicuous disclaimer at the beginning of each interaction that it is AI and not a licensed mental health professional. This is an unconditional per-interaction disclosure requirement — not triggered by user confusion, but required every time. This obligation is a condition of the therapy chatbot exemption; failure to comply eliminates the exemption and subjects the deployer to the general prohibition on minor access.
Statutory Text
A. The therapy chatbot provides a clear and conspicuous disclaimer at the beginning of each individual interaction that it is artificial intelligence and not a licensed mental health professional;
CP-01 Deceptive & Manipulative AI Conduct · CP-01.9 · Deployer · Chatbot · Minors · Healthcare
10 MRSA § 1500-RR(3)(B)
Plain Language
As a condition of the therapy chatbot exemption, the therapy chatbot must not be marketed or designated as a substitute for a licensed mental health professional. This prohibits both explicit claims of equivalence and positioning that implies the chatbot can replace human professional care. Violation of this condition eliminates the therapy chatbot exemption and restores the general prohibition on minor access to chatbots with human-like features.
Statutory Text
B. The therapy chatbot is not marketed or designated as a substitute for a licensed mental health professional;