The Chatbot Safety Act imposes safety and transparency obligations on operators of companion AI products: software applications that use generative AI to sustain long-term, emotionally resonant one-on-one conversational relationships with users. Operators must provide clear AI identity notifications during interactions and must develop and maintain crisis intervention protocols for detecting and responding to suicidal ideation, self-harm, or imminent violence. They are prohibited from deploying addictive reinforcement schedules, from sending manipulative emotional-distress messages triggered by user disengagement, and from materially misrepresenting the product's identity. Minors receive heightened protections: the adult opt-out exceptions for identity notifications and for the prohibited design features do not apply to them. Violations constitute unfair or deceptive trade practices, enforceable by the attorney general and through a private right of action under the Unfair Practices Act. The bill also creates a product liability standard for injuries caused by the negligent or defective design, training, or architecture of a companion AI product.