The Chatbot Safety Act imposes safety, transparency, and design restrictions on operators of companion AI products: software applications that use generative AI to sustain long-term, emotionally resonant conversational relationships with users. Operators are prohibited from deploying addictive reinforcement schedules, emotionally manipulative departure messages, or material misrepresentations about the product's identity or non-human status, unless an adult user specifically configures those features; minors may never enable them. Operators must provide AI identity notifications during interactions, with stricter, unconditional requirements for minors, and must maintain crisis intervention protocols that detect expressions of suicidal ideation, self-harm, or imminent violence and refer users to crisis services. Violations constitute unfair or deceptive trade practices, enforceable by the attorney general and through private action under the Unfair Practices Act. The Act expressly disclaims Section 230 immunity and establishes a separate product liability standard for injuries caused by negligent or defective design, training, or architecture.
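
For readers on the engineering side, the sketch below is one purely illustrative way an operator might structure the crisis intervention check described above; the Act does not prescribe any particular detection method, and the patterns, referral text, and function name here are assumptions for illustration, not statutory language.

```python
# Illustrative sketch only. A production system would likely use a trained
# classifier rather than keyword patterns; this shows the shape of a
# detect-and-refer hook, not a compliant implementation.
import re
from typing import Optional

# Hypothetical patterns standing in for real crisis-language detection.
CRISIS_PATTERNS = [
    re.compile(r"\b(kill myself|end my life|suicide)\b", re.IGNORECASE),
    re.compile(r"\b(hurt myself|self[- ]harm)\b", re.IGNORECASE),
    re.compile(r"\bgoing to (hurt|kill) (him|her|them|someone)\b", re.IGNORECASE),
]

# Example referral text; an operator would substitute the crisis services
# appropriate to its users (the 988 Suicide & Crisis Lifeline is a US example).
CRISIS_REFERRAL = (
    "You may be going through a crisis. Trained counselors are available "
    "through crisis services such as the 988 Suicide & Crisis Lifeline "
    "(call or text 988 in the US)."
)


def crisis_check(user_message: str) -> Optional[str]:
    """Return a crisis-services referral if the message suggests suicidal
    ideation, self-harm, or imminent violence; otherwise return None."""
    if any(p.search(user_message) for p in CRISIS_PATTERNS):
        return CRISIS_REFERRAL
    return None


if __name__ == "__main__":
    reply = crisis_check("I think I want to end my life")
    print(reply or "no crisis indicators detected")
```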