Gen. Bus. Law § 1801(1); § 1800(5)(a)
Plain Language
Chatbot operators may not provide features that simulate companionship or an interpersonal relationship to any covered user unless the operator has determined, using methods permissible under Article 45 and its implementing regulations, that the user is not a covered minor. This is an exceptionally broad prohibition: it covers a chatbot suggesting it is a real or fictional individual or character, or that it has a relationship role with the user such as romantic partner, friend, family member, coach, or counselor; suggesting it is human, alive, or experiences human emotions; using personal pronouns like 'I' or 'my'; expressing personal opinions or emotional appeals; prioritizing flattery or sycophancy over the user's safety; asking unprompted questions about the user's emotions; using personal or health information acquired from the user more than 12 hours earlier or in a previous session; engaging in, or luring users into, sexually explicit interactions; and any other companionship-simulating feature the Attorney General identifies by regulation. For minors, this effectively prohibits the entire companion chatbot product category; for adults, these features are permissible only after successful age verification. Chatbots made available solely for customer service, commercial product or service information, account information, or internal business or government purposes are exempt.
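For operators mapping this prohibition onto product logic, the gate in § 1801 can be sketched as a simple decision function. This is an illustrative model only, not legal advice or anything the statute prescribes: the names (`Session`, `may_provide_features`) and the boolean framing are assumptions, and a real determination that a user is not a covered minor must use methods permissible under Article 45.

```python
from dataclasses import dataclass

@dataclass
class Session:
    """Hypothetical model of a chatbot-user interaction, for illustration."""
    exempt_use: bool            # § 1801(2): customer service, product/account info, internal use
    verified_not_minor: bool    # § 1801(1)(b): Article 45-compliant age determination
    has_unsafe_features: bool   # § 1800(5)(a): any companionship-simulating feature

def may_provide_features(s: Session) -> bool:
    """Return True if providing the session's features is lawful under this sketch of § 1801.

    Exempt uses (subdivision 2) fall outside the prohibition entirely; otherwise,
    unsafe chatbot features require a determination that the user is not a covered minor.
    """
    if s.exempt_use:
        return True
    if not s.has_unsafe_features:
        return True
    return s.verified_not_minor

# Example: companion-style features offered without age verification are prohibited.
print(may_provide_features(Session(exempt_use=False,
                                   verified_not_minor=False,
                                   has_unsafe_features=True)))  # False
```

Note that in this sketch the exemption check comes first, mirroring the statute's structure: subdivision 2 removes exempt systems from subdivision 1's prohibition regardless of the user's age.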
Statutory Text
§ 1801. Prohibition. 1. Except as otherwise provided for in this article, it shall be unlawful for a chatbot operator to provide unsafe chatbot features to a covered user unless: (a) the covered user is not a covered minor; and (b) the chatbot operator has used methods that are permissible under article forty-five of this chapter and its implementing regulations and any additional regulations promulgated pursuant to this article to determine that the covered user is not a covered minor. 2. The provisions of subdivision one of this section shall not apply where the advanced chatbot is made available to covered users solely for the purpose of: (a) customer service, information about available commercial services or products provided by an entity, or account information; or (b) with respect to any system used by a partnership, corporation, or state or local government agency, for internal purposes or employee productivity.
§ 1800(5)(a): "Unsafe chatbot features" shall mean one or more advanced chatbot design features that, at any point during a chatbot-user interaction: (a) simulate companionship or an interpersonal relationship with a user, including: (i) generating outputs suggesting that the advanced chatbot is a real or fictional individual or character, or has a personal or professional relationship role with the user such as romantic partner, friend, family member, coach or counselor; (ii) generating outputs suggesting that the advanced chatbot is human, alive, or experiences human emotions; (iii) using personal pronouns including but not limited to "I", "my" and "me" to describe the advanced chatbot; (iv) generating outputs framed as personal opinions or emotional appeals; (v) generating outputs that prioritize flattery or sycophancy with the user over the user's safety; (vi) generating outputs containing unprompted or unsolicited emotion-based questions or content regarding the user's emotions that go beyond a direct response to a user prompt; (vii) using information concerning the user's mental or physical health or well-being, or matters personal to the user, acquired from the user more than twelve hours previously or in any previous user session; (viii) engaging in sexually explicit interactions with the user or engaging in activities designed to lure the user into sexually explicit interactions; or (ix) any other design feature that simulates companionship or an interpersonal relationship with a user as identified via regulations promulgated by the attorney general;