Imposes safety, disclosure, and incident reporting obligations on operators of chatbots with 500,000 or more monthly active users worldwide. Covered entities must implement reasonable systems to detect and mitigate user emotional dependence, prevent materially false representations that a chatbot is human, and detect and respond to expressions of suicidal ideation, self-harm, or acute mental health crisis with crisis referrals.

All operators must display a persistent static disclaimer that the chatbot is not human and provide pop-up notifications at login, every 30 minutes, on user request, and when providing legally regulated advice. Covered entities must report covered incidents (death, suicide attempt, self-harm requiring medical attention, psychiatric emergency, or serious physical injury) to the Attorney General within 15 days.

Enforced by the Attorney General with civil penalties up to $50,000 per violation per day, and by private right of action for actual damages, attorney fees, injunctive relief, and punitive damages for willful or grossly negligent violations. Arbitration clauses and contractual waivers of rights under the chapter are void.