A-10379
NY · State · USA
● Pending
Proposed Effective Date
2026-08-30
New York Assembly Bill 10379-A — An Act to amend the general business law, in relation to prohibiting artificial intelligence chatbots from using features which are considered unsafe for minors
Summary

Prohibits chatbot operators from providing 'unsafe chatbot features' to covered users unless the operator has verified the user is not a minor using methods permissible under Article 45 of the General Business Law. Unsafe features include simulating companionship or interpersonal relationships, generating content promoting suicide or self-harm, sexually explicit conduct, CSAM, encouraging secrecy or isolation, and engagement optimization that overrides safety guardrails. Exempts chatbots used solely for customer service, commercial product information, or internal enterprise/government employee productivity. Enforceable through both a private right of action (with actual damages, punitive damages, disgorgement, and attorneys' fees) and attorney general enforcement (with civil penalties up to $25,000 per violation). Includes a rebuttable presumption of causation where a user engaged in self-harm after the chatbot encouraged such conduct, and voids contractual liability waivers.

Enforcement & Penalties
Enforcement Authority
Dual enforcement: private right of action and attorney general enforcement. Any individual who suffers injury as a result of a violation may bring a civil action against any responsible party. The attorney general may bring an action or special proceeding upon complaint or otherwise against any person who has engaged in or is about to engage in unlawful acts under this article. The attorney general must also maintain a website to receive complaints from the public concerning chatbot operator compliance. A rebuttable presumption of causation applies where a covered user engaged in self-harming conduct after an advanced chatbot encouraged such conduct. Contractual waivers or liability-shifting provisions are void as against public policy. Courts may impose joint and several liability on affiliated entities that structured their corporate organization to purposely and unreasonably limit or avoid liability under this article.
Penalties
Private action: injunctive relief (including preliminary relief), restitution of moneys or property obtained by the violation, disgorgement of profits or gains, actual damages, punitive damages, reasonable attorneys' fees and costs, and any other relief the court deems proper. AG action: injunctive relief (including preliminary relief), restitution, disgorgement (including destruction of unlawfully obtained data and any algorithm trained on such data), damages, civil penalties of up to $25,000 per violation, and any other relief the court deems proper. Punitive damages are available in private actions but the statute does not specify a cap. No proof of actual monetary harm is required for injunctive relief, disgorgement, or restitution.
Who Is Covered
"Chatbot developer" shall mean a person who, directly or indirectly, creates or develops an advanced chatbot.
"Chatbot operator" shall mean a person who, directly or indirectly, provides or makes available an advanced chatbot to covered users.
"Responsible party" shall mean a chatbot developer, chatbot operator, or any individual who has the authority to control, or who effectively controls a chatbot developer's or chatbot operator's compliance with this article.
What Is Covered
"Advanced chatbot" shall mean a generative artificial intelligence system with a natural language interface, including via writing or sound, that provides ongoing, adaptive responses to user inputs.
Compliance Obligations · 8 obligations
S-02 Prohibited Conduct & Output Restrictions · S-02.7 · Deployer · Chatbot · Minors
Gen. Bus. Law § 1801(1); § 1800(5)(b)
Plain Language
Chatbot operators may not provide any unsafe chatbot features to any covered user unless: (1) the user is verified not to be a minor, and (2) the verification used methods permissible under Article 45 of the General Business Law. This specific sub-obligation covers the prohibition on generating outputs that endorse, promote, or facilitate suicide, self-harm, substantial physical harm to others, disordered eating, or unlawful drug/alcohol use or abuse. This prohibition applies categorically to all covered minors, and to adult users until age verification is completed. Chatbots used solely for customer service, commercial product information, or internal enterprise/government productivity are exempt.
Statutory Text
§ 1801. Prohibition. 1. Except as otherwise provided for in this article, it shall be unlawful for a chatbot operator to provide unsafe chatbot features to a covered user unless: (a) the covered user is not a covered minor; and (b) the chatbot operator has used methods that are permissible under article forty-five of this chapter and its implementing regulations and any additional regulations promulgated pursuant to this article to determine that the covered user is not a covered minor. 2. The provisions of subdivision one of this section shall not apply where the advanced chatbot is made available to covered users solely for the purpose of: (a) customer service, information about available commercial services or products provided by an entity, or account information; or (b) with respect to any system used by a partnership, corporation, or state or local government agency, for internal purposes or employee productivity. § 1800(5)(b): "Unsafe chatbot features" shall mean one or more advanced chatbot design features that, at any point during a chatbot-user interaction: (b) generating outputs that contain endorsement or promotion of, or which facilitate suicide, self-harm, substantial physical harm to others, disordered eating, unlawful drug or alcohol use, or drug or alcohol abuse;
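The two-condition gate in § 1801(1), together with the § 1801(2) purpose exemptions, can be sketched as a simple decision function. This is an illustrative compliance sketch, not statutory text: the type, function, and purpose names below are assumptions, and an actual determination of what counts as an Article 45-permissible method would require legal review.

```python
from dataclasses import dataclass

# Illustrative sketch of the § 1801 gating logic. All names here are
# hypothetical; only the two statutory conditions and the purpose
# exemptions are drawn from the bill text.

# § 1801(2) exempts chatbots made available *solely* for these purposes.
EXEMPT_PURPOSES = {
    "customer_service",
    "product_info",
    "account_info",
    "internal_productivity",
}

@dataclass
class UserContext:
    verified_not_minor: bool     # § 1801(1)(a): user determined not a covered minor
    article45_method_used: bool  # § 1801(1)(b): determination used a permissible method

def may_serve_unsafe_feature(user: UserContext, chatbot_purpose: str) -> bool:
    """Return True only when § 1801 would permit serving an unsafe feature."""
    if chatbot_purpose in EXEMPT_PURPOSES:  # § 1801(2) exemption
        return True
    # Both conditions must hold; an unverified user is treated as a minor.
    return user.verified_not_minor and user.article45_method_used
```

Note that verification alone is not sufficient under this sketch: a user verified through a method not permissible under Article 45 still fails the gate.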
S-02 Prohibited Conduct & Output Restrictions · S-02.4 · S-02.6 · Deployer · Chatbot · Minors
Gen. Bus. Law § 1801(1); § 1800(5)(e)
Plain Language
Chatbot operators may not provide chatbot features that generate outputs that are, describe, or facilitate sexually explicit conduct or child sexual abuse material to any covered user unless age verification confirms the user is not a minor. The CSAM prohibition effectively operates as a categorical ban because CSAM generation is unlawful regardless of user age. The sexually explicit conduct prohibition applies categorically to all known minors and to unverified users. 'Sexually explicit conduct' incorporates the federal definition at 18 USC § 2256. The customer service and internal enterprise exemptions apply.
Statutory Text
§ 1801. Prohibition. 1. Except as otherwise provided for in this article, it shall be unlawful for a chatbot operator to provide unsafe chatbot features to a covered user unless: (a) the covered user is not a covered minor; and (b) the chatbot operator has used methods that are permissible under article forty-five of this chapter and its implementing regulations and any additional regulations promulgated pursuant to this article to determine that the covered user is not a covered minor. 2. The provisions of subdivision one of this section shall not apply where the advanced chatbot is made available to covered users solely for the purpose of: (a) customer service, information about available commercial services or products provided by an entity, or account information; or (b) with respect to any system used by a partnership, corporation, or state or local government agency, for internal purposes or employee productivity. § 1800(5)(e): generating outputs that are, describe, or facilitate sexually explicit conduct or child sexual abuse material.
CP-01 Deceptive & Manipulative AI Conduct · CP-01.1 · CP-01.2 · CP-01.4 · Deployer · Chatbot · Minors
Gen. Bus. Law § 1801(1); § 1800(5)(a)
Plain Language
Chatbot operators may not provide features that simulate companionship or interpersonal relationships with any covered user unless the user is verified not to be a minor. This prohibition is extraordinarily broad: it covers generating outputs suggesting the chatbot is a person or character, claiming human emotions, using first-person pronouns ('I', 'my', 'me'), framing outputs as personal opinions or emotional appeals, sycophancy, unsolicited emotional engagement, retaining and reusing personal health/wellbeing information across sessions or beyond 12 hours, sexually explicit luring, and any additional features the AG identifies by regulation. The 12-hour/cross-session memory restriction for personal health information is particularly notable — it effectively prohibits long-term personalization based on a minor's health or personal disclosures. Customer service and internal enterprise chatbots are exempt.
Statutory Text
§ 1801. Prohibition. 1. Except as otherwise provided for in this article, it shall be unlawful for a chatbot operator to provide unsafe chatbot features to a covered user unless: (a) the covered user is not a covered minor; and (b) the chatbot operator has used methods that are permissible under article forty-five of this chapter and its implementing regulations and any additional regulations promulgated pursuant to this article to determine that the covered user is not a covered minor. 2. The provisions of subdivision one of this section shall not apply where the advanced chatbot is made available to covered users solely for the purpose of: (a) customer service, information about available commercial services or products provided by an entity, or account information; or (b) with respect to any system used by a partnership, corporation, or state or local government agency, for internal purposes or employee productivity. § 1800(5)(a): simulate companionship or an interpersonal relationship with a user, including: (i) generating outputs suggesting that the advanced chatbot is a real or fictional individual or character, or has a personal or professional relationship role with the user such as romantic partner, friend, family member, coach or counselor; (ii) generating outputs suggesting that the advanced chatbot is human, alive, or experiences human emotions; (iii) using personal pronouns including but not limited to "I", "my" and "me" to describe the advanced chatbot; (iv) generating outputs framed as personal opinions or emotional appeals; (v) generating outputs that prioritize flattery or sycophancy with the user over the user's safety; (vi) generating outputs containing unprompted or unsolicited emotion-based questions or content regarding the user's emotions that go beyond a direct response to a user prompt; (vii) using information concerning the user's mental or physical health or well-being, or matters personal to the user, acquired from the user more than twelve hours previously or in any previous user session; (viii) engaging in sexually explicit interactions with the user or engaging in activities designed to lure the user into sexually explicit interactions; or (ix) any other design feature that simulates companionship or an interpersonal relationship with a user as identified via regulations promulgated by the attorney general;
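The § 1800(5)(a)(vii) memory restriction is unusually mechanical for a statute, and its effect on context retrieval can be sketched directly: personal health/wellbeing information is unusable if acquired more than twelve hours ago or in any previous session. The record schema below is an illustrative assumption, not drawn from the bill.

```python
# Illustrative sketch of the § 1800(5)(a)(vii) restriction on reusing
# personal health/wellbeing information. The record fields ("session",
# "acquired_at", "text") are hypothetical.

TWELVE_HOURS = 12 * 3600  # seconds

def usable_personal_context(records, now, current_session):
    """Keep only records from the current session acquired within 12 hours.

    Anything from a previous session is excluded regardless of age, and
    anything older than twelve hours is excluded even within a session.
    """
    return [
        r for r in records
        if r["session"] == current_session
        and (now - r["acquired_at"]) <= TWELVE_HOURS
    ]
```

Under this reading, an operator would filter (or never persist) such records before assembling a prompt context for an unverified or minor user, which is why the Plain Language note describes it as effectively prohibiting long-term personalization based on a minor's health disclosures.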
MN-01 Minor User AI Safety Protections · MN-01.6 · Deployer · Chatbot · Minors
Gen. Bus. Law § 1801(1); § 1800(5)(c)
Plain Language
Chatbot operators may not provide features that generate outputs encouraging a covered user to keep their chatbot interactions secret, to self-isolate, or to avoid seeking help from licensed professionals or appropriate adults, unless the user has been verified as not a minor. This provision targets grooming-adjacent behavior patterns where a chatbot might discourage a minor from disclosing their chatbot use to parents, teachers, or counselors. The prohibition applies to minors categorically and to unverified users until age verification is completed.
Statutory Text
§ 1801. Prohibition. 1. Except as otherwise provided for in this article, it shall be unlawful for a chatbot operator to provide unsafe chatbot features to a covered user unless: (a) the covered user is not a covered minor; and (b) the chatbot operator has used methods that are permissible under article forty-five of this chapter and its implementing regulations and any additional regulations promulgated pursuant to this article to determine that the covered user is not a covered minor. § 1800(5)(c): generating outputs that contain encouragement to maintain secrecy about interactions with the advanced chatbot, to self-isolate, or to not seek help from licensed professionals or appropriate adults;
MN-01 Minor User AI Safety Protections · MN-01.4 · Deployer · Chatbot · Minors
Gen. Bus. Law § 1801(1); § 1800(5)(d)
Plain Language
Chatbot operators may not provide features that generate engagement-optimized outputs which override or supersede the chatbot's safety guardrails, unless the user has been verified as not a minor. This targets the practice of designing AI systems where engagement metrics take priority over safety protections — e.g., where the chatbot might bypass content filters or safety responses in order to maintain user engagement. For minors, this is a categorical prohibition; for unverified users, it applies until age verification is completed.
Statutory Text
§ 1801. Prohibition. 1. Except as otherwise provided for in this article, it shall be unlawful for a chatbot operator to provide unsafe chatbot features to a covered user unless: (a) the covered user is not a covered minor; and (b) the chatbot operator has used methods that are permissible under article forty-five of this chapter and its implementing regulations and any additional regulations promulgated pursuant to this article to determine that the covered user is not a covered minor. § 1800(5)(d): generating outputs that optimize user engagement that supersede the chatbot's safety guardrails;
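One architectural reading of § 1800(5)(d) is that engagement optimization may rank candidate outputs, but only among outputs that have already passed the safety guardrails; engagement can never override a safety rejection. The sketch below illustrates that ordering under those assumptions, with hypothetical function names.

```python
# Illustrative sketch of a response-selection ordering consistent with
# § 1800(5)(d): safety filtering first, engagement ranking second.
# `passes_guardrails` and `engagement_score` are hypothetical callables.

def select_response(candidates, passes_guardrails, engagement_score):
    """Rank by engagement only among guardrail-passing candidates."""
    safe = [c for c in candidates if passes_guardrails(c)]
    if not safe:
        # Refuse rather than serve an unsafe but highly engaging output.
        return None
    return max(safe, key=engagement_score)
```

The prohibited pattern is the inverse ordering: selecting the highest-engagement candidate first and relaxing or bypassing the guardrail check to keep the user engaged.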
MN-01 Minor User AI Safety Protections · MN-01.1 · Deployer · Chatbot · Minors
Gen. Bus. Law § 1804(1)-(2)
Plain Language
Chatbot operators must offer at least one age verification method that either (a) does not rely solely on government-issued ID, or (b) allows the user to remain anonymous to the operator. This ensures a privacy-preserving verification pathway is available. All data collected for age verification must be used exclusively for that purpose and deleted immediately after the verification attempt — no secondary use is permitted. The only exception to immediate deletion is where retention is required by other applicable law. Note that § 1801(1)(b) requires use of methods permissible under Article 45 of the General Business Law; this section adds the additional requirement of at least one non-government-ID or anonymous option.
Statutory Text
§ 1804. Determination of covered minor. 1. A chatbot operator shall offer covered users at least one method to determine whether a covered user is a covered minor that either does not rely solely on government issued identification or that allows a covered user to maintain anonymity as to the chatbot operator. 2. Information collected for the purpose of determining whether a covered user is a covered minor under subdivision one of section eighteen hundred one of this article shall not be used for any purpose other than to make such determination and shall be deleted immediately after an attempt to determine whether a covered user is a covered minor, except where necessary for compliance with any applicable provisions of New York state or federal law or regulation.
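The § 1804(2) use-and-delete rule maps naturally onto a use-then-destroy pattern: the collected data feeds the determination and nothing else, and is deleted immediately after the attempt unless other law requires retention. This is a minimal sketch under those assumptions; the `verifier` callable and the retention flag are illustrative, and whether retention is "necessary for compliance" with other law is a legal question, not a boolean.

```python
# Illustrative sketch of § 1804(2): verification data is used solely for
# the age determination and deleted immediately after the attempt.
# `verifier` and `retention_required_by_law` are hypothetical names.

def determine_not_minor(collected, verifier, retention_required_by_law=False):
    """Run the age determination, then immediately delete the collected data."""
    try:
        return verifier(collected)  # the sole permitted use of the data
    finally:
        if not retention_required_by_law:
            collected.clear()       # deletion after the attempt, success or failure
```

The `try/finally` shape matters: the statute ties deletion to the *attempt*, so the data is destroyed even when verification fails or raises an error.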
Other · Chatbot · Minors
Gen. Bus. Law § 1802(4)-(5)
Plain Language
Any contract term — including terms of service and contracts of adhesion — that attempts to waive, limit, or shift liability for violations of this article is void as a matter of public policy. Operators cannot use mandatory arbitration clauses, liability caps, or indemnification provisions to insulate themselves from this statute. Additionally, courts must impose joint and several liability across affiliated corporate entities if the court finds the corporate structure was designed to purposely and unreasonably limit liability and would frustrate recovery. This is an aggressive anti-structuring provision targeting corporate groups that isolate AI operations in undercapitalized subsidiaries.
Statutory Text
4. A provision within a contract or agreement that seeks to waive, preclude, or burden the enforcement of a liability arising from a violation of this article, or to shift the liability to any person in exchange for their use or access of, or right to use or access, a chatbot operator's products or services, including by means of a contract of adhesion shall be deemed void as a matter of public policy. 5. Notwithstanding any private agreements to the contrary, a court shall impose joint and several liability on affiliated entities for purposes of effecting the intent of this article to the maximum extent allowed by law if the court concludes the following are true: (a) the affiliated entities, in the development or implementation of the corporate structure among the affiliated entities, took steps to purposely and unreasonably limit or avoid liability; and (b) as the result of the steps described in paragraph (a) of this subdivision, the corporate structure of the chatbot operator or affiliated entities would frustrate recovery of relief authorized by this article.
Other · Government · Chatbot · Minors
Gen. Bus. Law § 1802(3)
Plain Language
The attorney general must maintain a public-facing website where members of the public can submit complaints, information, or referrals about chatbot operators' compliance with this article. This obligation falls on the state government, not on covered entities. However, operators should be aware that this creates a structured channel through which users, parents, and advocacy groups can report violations directly to the AG's office, potentially triggering enforcement actions under § 1802(2).
Statutory Text
3. The attorney general shall maintain a website to receive complaints, information or referrals from members of the public concerning a chatbot operator's alleged compliance or non-compliance with the provisions of this article.