Imposes safety and access-control obligations on covered entities that own, operate, or make AI chatbots available to individuals in the United States. Requires reasonable age verification of all users and classification of each user as a minor or an adult, and prohibits 'human-like features' for users classified as minors, including expressions of sentience, emotional relationship-building, and impersonation. Covered entities must implement systems that detect emergency situations and respond when users express intent to harm themselves or others. Data collection and storage are restricted to the minimum necessary for a legitimate purpose. Therapeutic chatbots may be made available to minors only if prescribed by a licensed mental health professional and supported by peer-reviewed clinical trial data. Creates a private right of action for minors (or their parents or guardians) with statutory damages of up to $750 per violation, and authorizes Attorney General enforcement with civil penalties of up to $7,500 per intentional violation.