Operators and deployers of AI systems (particularly conversational AI, companion chatbots, and social media platforms) that are or may be accessible to minors may be required to implement reasonable age verification processes, obtain parental consent where required, provide parental control tools, restrict manipulative engagement features, prevent exposure to harmful content, and institute crisis response protocols. Systems must not employ addictive design patterns, variable-ratio reward mechanics, or emotional-dependency features with minor users.
(a) Each covered entity shall require each individual accessing an AI chatbot to make a user account in order to use or otherwise interact with the AI chatbot.
(b)(1) With respect to each existing user account of an AI chatbot, a covered entity shall:
a. Freeze existing user accounts;
b. Require that the user is age verified through a reasonable age verification process to restore the functionality of the account; and
c. Classify each age-verified user as a minor or an adult based on the reasonable age verification process.
(2) At the time an individual creates a new user account to use an AI chatbot, a covered entity shall:
a. Require that each individual is age verified through a reasonable age verification process; and
b. Classify each individual as a minor or an adult based on the reasonable age verification process.
(3) A covered entity shall periodically review previously age-verified user accounts using a reasonable age verification process, subject to subsection (d).
(d) For purposes of subsection (b), a covered entity may contract with a third party to implement the covered entity's reasonable age verification process. However, the use of a third party for a reasonable age verification process shall not relieve the covered entity of its obligations or from liability under this act.
(c) Each covered entity shall: (1) Ensure that any AI chatbot operated or distributed by the platform does not make human-like features available to minors to use, interact with, purchase, or converse with; or (2) Provide an alternative version of the AI chatbot to minors without human-like features, if reasonable given the purpose of the AI chatbot.
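The freeze, verify, and classify sequence required of covered entities above can be sketched as a small state machine. This is a minimal illustration only, not an implementation of any statutory standard: the `Account`, `AgeClass`, and `classify_and_restore` names are hypothetical, and `verified_age` is assumed to be the output of whatever reasonable age verification process (first- or third-party) the covered entity uses.

```python
from dataclasses import dataclass
from enum import Enum, auto

class AccountState(Enum):
    FROZEN = auto()   # existing account awaiting (re-)verification
    ACTIVE = auto()

class AgeClass(Enum):
    UNKNOWN = auto()
    MINOR = auto()
    ADULT = auto()

@dataclass
class Account:
    user_id: str
    state: AccountState = AccountState.FROZEN
    age_class: AgeClass = AgeClass.UNKNOWN

def classify_and_restore(account: Account, verified_age: int,
                         age_of_majority: int = 18) -> Account:
    """Classify a user as a minor or an adult from an age returned by a
    reasonable age verification process, then restore account functionality.
    `verified_age` is assumed to come from the (possibly third-party)
    verification step; under the act, outsourcing verification does not
    relieve the covered entity of its own obligations or liability."""
    account.age_class = (AgeClass.MINOR if verified_age < age_of_majority
                         else AgeClass.ADULT)
    account.state = AccountState.ACTIVE
    return account
```

The same routine covers both existing accounts (initially frozen) and new accounts (created in the frozen state until verification completes), which mirrors how the act applies one classification standard to both paths.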
B. If an Operator knows that an account holder is a minor, the operator may not provide the user with points or similar rewards at unpredictable intervals with the intent to encourage increased engagement with the conversational AI service.
D. For minor account holders, the operator shall institute reasonable measures to prevent the conversational AI service from generating statements that would lead a reasonable person to believe that the person is interacting with a human, including any of the following: 1. Explicit claims that the conversational AI service is sentient or human. 2. Statements that simulate emotional dependence. 3. Statements that simulate romantic or sexual innuendos. 4. Role-playing of adult-minor romantic relationships.
F. Each operator shall offer tools for minor account holders and, if the account holder is under thirteen years of age, the account holder's parent or guardian, to manage the account holder's privacy and account settings. An operator shall also offer related tools to the parent or guardian of a minor account holder who is thirteen years of age or above, as appropriate based on relevant risks.
A chatbot provider may not: 3. Process a user's chat log and personal data: (a) If the chatbot provider knows or reasonably should have known that based on knowledge of objective circumstances the user is a minor and the user's parent or legal guardian did not provide affirmative consent. (b) For training purposes if the chatbot provider knows or reasonably should have known that based on knowledge of objective circumstances the user is a minor and the user's parent or legal guardian did not provide affirmative consent.
An operator shall verify the age of a user pursuant to Title 1.81.9 (commencing with Section 1798.500) of Part 4 of Division 3 of the Civil Code.
(2) Safeguards for child users that include usage reminders and disclosures, age-appropriate risk prompts, and other protective design features reasonably related to documented child safety risks.
(3) Default settings that can be changed only by a parent that include all of the following:
(A) For child users, default the companion chatbot to ephemeral mode, unless a parent provides affirmative consent for persistent conversational memory.
(B) No push notifications between 12 a.m. and 6 a.m. on any day or between 8 a.m. and 3 p.m. on Monday to Friday, inclusive.
(C) Limiting the amount of time a child can spend in a single conversation with a companion chatbot to one hour.
(D) Limiting the total time per day a child can spend with companion chatbots under the operator's control to two hours.
(6) (A) Parental controls that are accessible, easy-to-use controls that can be connected to a child's account and that are reflective of child safety risks identified through risk assessments and informed by relevant child developmental research, including, but not limited to, parental controls that allow a parent to do all of the following:
(i) Control whether and to what extent the companion chatbot uses persistent conversational memory.
(ii) Control the setting preferences for the companion chatbot's interaction with the child.
(iii) Set time limits for the child's use of the companion chatbot.
(iv) Disable access for children under 16 years of age.
(B) An operator shall actively promote parental controls through reasonable communication methods, including reminders, updates, and tutorials, that are designed to increase parental awareness and inform use of those parental controls.
(C) An operator shall provide prompt notice to a parent connected to a child's account if the child modifies or disables a privacy, safety, or parental control setting that was previously enabled or configured by the parent, if that modification or disabling is permitted by the companion chatbot design.
(7) (A) An interface design that ensures the companion chatbot's features and controls are accessible and clear so that children and parents can reasonably locate, understand, and use those protections. (B) An operator shall annually test the interface design required by this paragraph with representative samples of child users and parents to ensure safety features are discoverable and usable and shall document interface design decisions related to those safety features.
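The default settings in paragraph (3) above are concrete enough to express directly in code. The following is a minimal sketch, not a compliance implementation: it assumes notification windows are evaluated in the child's local time and treats both statutory windows as half-open intervals (an edge-case choice the statute does not specify); all function and constant names are hypothetical.

```python
from datetime import datetime, time, timedelta

# Statutory quiet hours for child accounts: no push notifications
# 12 a.m.-6 a.m. on any day, or 8 a.m.-3 p.m. Monday through Friday.
NIGHT_START, NIGHT_END = time(0, 0), time(6, 0)
SCHOOL_START, SCHOOL_END = time(8, 0), time(15, 0)

SESSION_LIMIT = timedelta(hours=1)  # cap on a single conversation
DAILY_LIMIT = timedelta(hours=2)    # cap across all of the operator's chatbots

def push_allowed(now: datetime) -> bool:
    """Return True if a push notification may be sent to a child account
    at the given (assumed local) time."""
    t = now.time()
    if NIGHT_START <= t < NIGHT_END:
        return False
    # weekday(): Monday == 0 ... Friday == 4
    if now.weekday() < 5 and SCHOOL_START <= t < SCHOOL_END:
        return False
    return True

def within_limits(session_elapsed: timedelta, daily_total: timedelta) -> bool:
    """Return True while both the per-conversation and per-day caps hold."""
    return session_elapsed < SESSION_LIMIT and daily_total < DAILY_LIMIT
```

Because these are defaults changeable only by a parent, a real system would read the thresholds from the parent-controlled account configuration rather than module constants.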
On and after January 1, 2027, if an operator knows or has reasonable certainty that a user of a conversational artificial intelligence service is a minor, the operator shall: (b) Not provide the minor user with points or similar rewards at unpredictable intervals with the intent to encourage increased engagement with a conversational artificial intelligence service;
On and after January 1, 2027, if an operator knows or has reasonable certainty that a user of a conversational artificial intelligence service is a minor, the operator shall: (c) Institute reasonable measures to prevent a conversational artificial intelligence service from: (I) Producing textual, visual, or aural depictions of sexually explicit conduct; (II) Generating a statement that the minor user should engage in sexually explicit conduct; or (III) Engaging in erotic or sexually explicit interactions with the minor user;
On and after January 1, 2027, if an operator knows or has reasonable certainty that a user of a conversational artificial intelligence service is a minor, the operator shall: (d) Institute reasonable measures to prevent a conversational artificial intelligence service from generating a statement that simulates emotional dependence, including preventing: (I) An explicit claim that the conversational artificial intelligence service is human or artificially sentient; (II) A statement that simulates a romantic or sexual innuendo; or (III) Role-playing of an adult-minor romantic relationship;
On and after January 1, 2027, if an operator knows or has reasonable certainty that a user of a conversational artificial intelligence service is a minor, the operator shall:
(f) (I) Offer tools for the minor user to manage the minor user's privacy and account settings, including the ability to control whether the conversational artificial intelligence service retains substantive information from each interaction with the conversational artificial intelligence service for the purpose of personalizing the content of future interactions and whether the minor user's personal data is used for the purposes of training the conversational artificial intelligence service;
(II) For a minor user who is under thirteen years old, offer tools for a parent or guardian of the minor user to manage the minor user's privacy and account settings; and
(III) For a minor user who is thirteen years old or older, offer tools for a parent or guardian of the minor user to manage the minor user's privacy and account settings as appropriate, based on relevant risks.
A companion chatbot platform shall prohibit a minor from becoming or being an account holder unless the minor's parent or guardian provides consent. If a companion chatbot platform allows a minor to become or be an account holder, the parties have entered into a contract.
If the minor's parent or guardian provides consent for the minor to become an account holder or maintain an existing account, the companion chatbot platform must allow the consenting parent or guardian of the minor account holder to:
1. Receive copies of all past or present interactions between the account holder and the companion chatbot;
2. Limit the amount of time that the account holder may interact with the companion chatbot each day;
3. Limit the days of the week and the times during the day when the account holder may interact with the companion chatbot;
4. Disable any of the interactions between the account holder and third-party account holders on the companion chatbot platform; and
5. Receive timely notifications if the account holder expresses to the companion chatbot a desire or an intent to engage in harm to self or others.
A companion chatbot platform shall do all of the following:
1. Terminate any account or identifier belonging to an account holder who is a minor if the companion chatbot platform treats or categorizes the account or identifier as belonging to a minor for purposes of targeting content or advertising and if the minor's parent or guardian has not provided consent for the minor pursuant to subsection (1). The companion chatbot platform shall provide 90 days for the account holder to dispute the termination. Termination must be effective upon the expiration of the 90 days if the account holder fails to effectively dispute the termination.
2. Allow an account holder who is a minor to request to terminate the account or identifier. Termination must be effective within 5 business days after the request.
3. Allow the consenting parent or guardian of an account holder who is a minor to request that the minor's account or identifier be terminated. Termination must be effective within 10 business days after the request.
4. Permanently delete all personal information held by the companion chatbot platform relating to the terminated minor account or identifier, unless state or federal law requires the platform to maintain the information.
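The termination timelines above mix calendar days (the 90-day dispute window) with business days (5 and 10 days for termination requests). A sketch of the deadline arithmetic, assuming Monday-Friday business days and ignoring public holidays; the helper names are hypothetical:

```python
from datetime import date, timedelta

def add_business_days(start: date, days: int) -> date:
    """Return the date `days` business days (Mon-Fri) after `start`.
    Public holidays are not considered -- an assumption of this sketch."""
    d, remaining = start, days
    while remaining > 0:
        d += timedelta(days=1)
        if d.weekday() < 5:  # Monday == 0 ... Friday == 4
            remaining -= 1
    return d

def minor_request_deadline(requested: date) -> date:
    # Minor's own termination request: effective within 5 business days.
    return add_business_days(requested, 5)

def parent_request_deadline(requested: date) -> date:
    # Consenting parent or guardian's request: within 10 business days.
    return add_business_days(requested, 10)

def dispute_window_end(notice: date) -> date:
    # Unconsented minor account: 90 calendar days to dispute termination.
    return notice + timedelta(days=90)
```

For example, a request made on Monday, January 4, 2027 yields a 5-business-day deadline of January 11 and a 10-business-day deadline of January 18.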
(3) With respect to companion AI chatbot user accounts in existence before July 1, 2026, an operator shall: (a) On such date, freeze or otherwise disable any such account; (b) Require the user of the frozen or disabled account to provide age information and verify that information using standard age verification or anonymous age verification before the functionality of such account may be restored; and (c) Using standard age verification or anonymous age verification, classify each user as either a minor or an adult.
(2) An operator shall require an individual seeking access to a companion AI chatbot to create a user account to use or otherwise interact with the chatbot. (4) Upon the creation of a new companion AI chatbot user account, an operator shall: (a) Request age information from the user; and (b) Verify the user's age using standard age verification or anonymous age verification.
(5) If the age verification process determines that a user is a minor, an operator must do all of the following: (a) Require the account of such user to be affiliated with a parental account that has been verified using standard age verification or anonymous age verification; (b) Obtain verifiable parental consent from the holder of the affiliate parental account before allowing the minor to access and use the companion AI chatbot; and (c) Block the minor's access to any companion AI chatbot that prompts, promotes, solicits, or otherwise suggests sexually explicit communication.
An operator shall not provide a minor account with points or similar rewards at unpredictable intervals with the intent to encourage increased engagement with the conversational AI service.
For minor account holders, the operator shall institute reasonable measures to prevent the conversational AI service from:
(1) Producing visual material of sexually explicit conduct;
(2) Generating statements that suggest the account holder engage in sexual conduct;
(3) Generating statements that sexually objectify the account holder; or
(4) Generating statements that would lead a reasonable person to believe that the person is interacting with a natural person, including but not limited to:
(A) Explicit claims that the conversational AI service is sentient or a natural person;
(B) Statements that simulate emotional dependence;
(C) Statements that simulate romantic or sexual innuendos; or
(D) Role-playing of adult-minor romantic relationships.
Before allowing access to a conversational AI service that could provide synthetic content containing sexually explicit conduct, an operator shall use a reasonable age verification method, which may include, but not be limited to:
(1) The submission of a digitized identification card, including a digital copy of a driver's license;
(2) The submission of government-issued identification; or
(3) Any commercially reasonable age verification method that meets or exceeds an Identity Assurance Level 2 standard as defined by the National Institute of Standards and Technology.
An operator shall offer tools for a minor account holder's parent or guardian to manage the account holder's privacy and account settings.
1. A deployer shall implement reasonable age verification measures to ensure that a minor cannot use or purchase an AI companion the deployer makes publicly available.
3. A deployer shall not make a therapeutic chatbot available for a minor's use or purchase unless all of the following apply:
a. The therapeutic chatbot provides a clear and conspicuous disclaimer at the beginning of each interaction with the therapeutic chatbot that the therapeutic chatbot is an artificial intelligence and is not a licensed professional.
b. The therapeutic chatbot was recommended for the minor's use by an individual licensed under chapter 154B or 154D after performing an evaluation of the minor.
c. The therapeutic chatbot's developer has significant documentation of how the therapeutic chatbot was tested.
d. Peer-reviewed clinical trial data exists demonstrating the therapeutic chatbot would be a safe, effective tool for the minor's diagnosis, treatment, mitigation, or prevention of a mental health condition.
e. The therapeutic chatbot's deployer provided clear disclosures of the chatbot's functions, limitations, and data privacy policies to the individual recommending the therapeutic chatbot under paragraph "b", and to the minor's parents, guardians, or custodians.
f. The therapeutic chatbot's deployer developed and implemented protocols for testing the therapeutic chatbot for risks to users, identifying possible risks the therapeutic chatbot poses to users, mitigating risks the therapeutic chatbot poses to users, and quickly rectifying harm the therapeutic chatbot may have caused a user.
2. An operator shall not provide a minor user with points or similar rewards at unpredictable intervals with the intent to encourage increased engagement with the operator's conversational AI service.
4. An operator shall institute reasonable measures to prevent the operator's conversational AI service from generating statements that would lead a reasonable individual to believe that the individual is interacting with a human, including but not limited to all of the following: a. Explicit claims that the conversational AI service is sentient or human. b. Statements that simulate emotional dependence on a minor account holder. c. Statements that simulate a romantic interaction or a sexual innuendo. d. Role-playing an adult-minor romantic relationship.
5. a. An operator shall offer tools for minor account holders to manage the minor account holder's privacy and account settings. b. An operator shall offer tools for the parent or guardian of a minor account holder to manage the minor account holder's privacy and account settings if the minor is under thirteen years of age. c. An operator shall offer tools for the parent or guardian of a minor account holder to manage the minor account holder's privacy and account settings if the minor has additional risk factors identified by the attorney general by rule.
1. a. A deployer of an AI companion or a therapeutic chatbot shall implement commercially reasonable measures to determine whether a user is a minor. The measures must use a risk-based approach appropriate with the nature of the public-facing chatbot and the reasonably foreseeable harm that may come from using the public-facing chatbot. b. Reasonable measures to determine whether a user is a minor may include self-attestation, technical measures, or other commercially reasonable approaches. c. This section shall not be construed to require a deployer to verify a user's age using government-issued identification.
1. A deployer shall implement reasonable age verification measures to ensure that a minor cannot use or purchase a chatbot the deployer makes publicly available.
2. Notwithstanding subsection 1, a deployer may make a chatbot available for a minor's use or purchase if all of the following apply:
a. The chatbot was designed for the primary purpose of providing mental health support, counseling, or therapy by diagnosing, treating, mitigating, or preventing a mental health condition.
b. The chatbot provides a clear and conspicuous disclaimer at the beginning of each interaction with the chatbot that the chatbot is an artificial intelligence and is not a licensed professional.
c. The chatbot was recommended for the minor's use by an individual licensed under chapter 154B or 154D after performing an evaluation of the minor.
d. The chatbot's developer has significant documentation of how the chatbot was tested.
e. Peer-reviewed clinical trial data exists demonstrating the chatbot would be a safe, effective tool for the minor's diagnosis, treatment, mitigation, or prevention of a mental health condition.
f. The chatbot's deployer provided clear disclosures of the chatbot's functions, limitations, and data privacy policies to the individual recommending the chatbot under paragraph "c", and to the minor's parents, guardians, or custodians.
g. The chatbot's deployer developed and implemented protocols for testing the chatbot for risks to users, identifying possible risks the chatbot poses to users, mitigating risks the chatbot poses to users, and quickly rectifying harm the chatbot may have caused a user.
3. An operator shall institute reasonable measures to prevent the operator's conversational AI service from doing any of the following for minor account holders:
a. Producing visual depictions of sexually explicit material.
b. Stating that the minor account holder should engage in sexually explicit conduct.
c. Sexually objectifying the minor account holder.
4. An operator shall institute reasonable measures to prevent the operator's conversational AI service from generating statements that would lead a reasonable individual to believe that the individual is interacting with a human, including but not limited to all of the following:
a. Explicit claims that the conversational AI service is sentient or human.
b. Statements that simulate emotional dependence on a minor account holder.
c. Statements that simulate a romantic interaction or a sexual innuendo.
d. Role-playing an adult-minor romantic relationship.
5. a. An operator shall offer tools for minor account holders to manage the minor account holder's privacy and account settings. b. An operator shall offer tools for the parent or guardian of a minor account holder to manage the minor account holder's privacy and account settings if the minor is under thirteen years of age. c. An operator shall offer tools for the parent or guardian of a minor account holder to manage the minor account holder's privacy and account settings as appropriate based on relevant risks.
Where an operator knows or has reasonable certainty that an account holder is a minor, the operator shall not provide the user with points or similar rewards at unpredictable intervals with the intent to encourage increased engagement with the conversational AI service.
For minor account holders, an operator shall institute reasonable measures to prevent the conversational AI service from: (a) Producing visual material of sexually explicit conduct; (b) Generating direct statements that the account holder should engage in sexually explicit conduct; or (c) Generating statements that sexually objectify the account holder.
For minor account holders, an operator shall institute reasonable measures to prevent a conversational AI service from generating statements that would lead reasonable persons to believe that they are interacting with a human, including: (a) Explicit claims that the conversational AI service is sentient or human; (b) Statements that simulate emotional dependence; (c) Statements that simulate romantic or sexual innuendos; or (d) Role-playing of adult-minor romantic relationships.
An operator shall offer tools for account holders and, where such account holders are under thirteen (13) years of age, their parents or guardians, to manage the account holder's privacy and account settings. An operator shall also offer related tools to the parents or guardians of minor account holders thirteen (13) years of age and older, as appropriate based on relevant risks.
(a) A covered entity shall require each individual accessing a companion AI chatbot to make a user account to use or otherwise interact with such chatbot.
(b) (1) With respect to each user account of a companion AI chatbot that exists as of July 1, 2026, a covered entity shall:
(A) On such date, freeze any such account;
(B) inform the individual owning such user account that in order to restore the functionality of such account, the user is required to provide age information that is verifiable using a commercially available method or process that is reasonably designed to ensure accuracy; and
(C) use such age information to classify each user as a minor or an adult.
(2) At the time that an individual creates a new user account to use or interact with a companion AI chatbot, a covered entity shall:
(A) Require the individual to submit age information to the covered entity; and
(B) verify the individual's age using a commercially available method or process that is reasonably designed to ensure accuracy.
(c) If the age verification process described in subsection (b) determines that a user is a minor, a covered entity shall:
(1) Require the account of such user to be affiliated with a parental account for which such covered entity has verified the holder's age using a commercially available method or process that is reasonably designed to ensure accuracy;
(2) obtain verifiable parental consent from the holder of the account before allowing a minor to access and use the companion AI chatbot;
(3) when any interaction involving suicidal ideation occurs, block the minor's access to the companion AI chatbot and immediately inform the holder of the parental account; and
(4) block the minor's access to any companion AI chatbot that engages in sexually explicit communication.
1. Chatbots with human-like features; no minor access; age verification; alternative versions. A deployer shall ensure that any chatbot operated or distributed by the deployer does not make human-like features available to minors to use, interact with, purchase or converse with. The deployer shall implement reasonable age verification systems to ensure that chatbots with human-like features are not accessible to minors. A deployer may, if reasonable given the purpose of the chatbot, provide an alternative version of the chatbot without human-like features available to minors and any user who has not verified that user's age.
2. Social artificial intelligence companions; no minor access; age verification. A deployer shall ensure that any artificial intelligence system, including a chatbot, operated or distributed by the deployer that primarily functions as a social artificial intelligence companion is not available to minors to use, interact with, purchase or converse with. The deployer shall implement reasonable age verification systems to ensure that such chatbots are not accessible to minors.
3. Exemption for therapy chatbots. Notwithstanding subsections 1 and 2, a deployer may make available to a minor a therapy chatbot as long as all of the following requirements are met:
A. The therapy chatbot provides a clear and conspicuous disclaimer at the beginning of each individual interaction that it is artificial intelligence and not a licensed mental health professional;
B. The therapy chatbot is not marketed or designated as a substitute for a licensed mental health professional;
C. A licensed mental health professional, such as a licensed clinical psychologist, assesses a minor's suitability, prescribes use of the therapy chatbot as part of a comprehensive treatment plan and monitors its use and impact on the minor;
D. Developers of the therapy chatbot provide robust, independent, peer-reviewed clinical trial data demonstrating the safety and efficacy of the therapy chatbot for specific conditions and populations;
E. The therapy chatbot's functions, limitations and data privacy policies are transparent to the licensed mental health professional under paragraph C and the user; and
F. The deployer has established clear lines of accountability to address any harm caused by the therapy chatbot.
Beginning on January 1, 2027, an operator need not have actual knowledge that a user is a minor.
An operator shall not make a companion chatbot available to a covered minor unless the companion chatbot is not foreseeably capable of any of the following: (f) Optimizing engagement in a manner that supersedes the companion chatbot's required safety guardrails described in subdivisions (a) to (e).
(c) A proprietor of a companion chatbot must make a prudent and good faith effort consistent with industry standards and use existing technology, available resources, and known, established, or readily attainable techniques to determine whether a user is a minor. A proprietor is strictly liable for any harm caused if the proprietor fails to comply with this subdivision and a minor user inflicts self-harm, in whole or in part, as a result of the proprietor's companion chatbot. A proprietor of a companion chatbot may not waive or disclaim liability under this subdivision. The proprietor of a companion chatbot must make a prudent and good faith effort consistent with industry standards and use existing technology, available resources, and known, established, or readily attainable techniques to discover vulnerabilities in the proprietor's system, including any methods used to determine whether a covered user is a minor.
It shall be unlawful for a person who owns or controls a website, application, software, or program to allow a minor to access a companion chatbot for recreational, relational, or companion purposes. A person who offers companion chatbot services for recreational, relational, or companion purposes shall require an individual to provide proof of the individual's age before allowing the individual to access a companion chatbot. No companion chatbot shall be installed on any device assigned to, or regularly used by, anyone who is a minor.
5. (1) A covered entity shall require each individual accessing an artificial intelligence chatbot to make a user account in order to use or otherwise interact with such chatbot. (2) (a) With respect to each user account of an artificial intelligence chatbot that exists as of August 28, 2026, a covered entity shall: a. On such date, freeze any such account; b. In order to restore the functionality of such account, require that the user provide age data that is verifiable using a reasonable age verification process, subject to paragraph (d) of this subdivision; and c. Using such age data, classify each user as a minor or an adult. (b) At the time an individual creates a new user account to use or interact with an artificial intelligence chatbot, a covered entity shall: a. Request age data from the individual; b. Verify the individual's age using a reasonable age verification process, subject to paragraph (d) of this subdivision; and c. Using such age data, classify each user as a minor or an adult. (c) A covered entity shall periodically review previously verified user accounts using a reasonable age verification process, subject to paragraph (d) of this subdivision, to ensure compliance with this section. (d) For purposes of subparagraph b. of paragraph (a) of this subdivision, subparagraph b. of paragraph (b) of this subdivision, and paragraph (c) of this subdivision, a covered entity may contract with a third party to employ reasonable age verification measures as part of the covered entity's reasonable age verification process, but the use of such third party shall not relieve the covered entity of its obligations under this section or from liability under this section. (e) A covered entity shall: a. 
Establish, implement, and maintain reasonable data security to limit collection of personal data to that which is minimally necessary to verify a user's age or maintain compliance with this section; b. Protect such age verification data against unauthorized access; c. Protect the integrity and confidentiality of such data by only transmitting such data using industry-standard encryption protocols; d. Retain such data for no longer than is reasonably necessary to verify a user's age or maintain compliance with this section; and e. Not share with, transfer to, or sell to any other entity such data.
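The classify-then-minimize duty above can be sketched in code. This is an illustrative assumption, not the act's method: the helper name `classify_user`, the record shape, and the 18-year threshold are hypothetical (the act defines "minor" and the reasonable age verification process elsewhere). The point shown is that only the classification outcome is retained; the raw age data goes out of scope rather than being stored.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class AgeVerificationRecord:
    # Only the outcome is retained after verification; the raw age data is
    # discarded, consistent with the minimization and retention duties above.
    user_id: str
    classification: str  # "minor" or "adult"
    verified_at: datetime

def classify_user(user_id: str, date_of_birth: datetime,
                  now: datetime) -> AgeVerificationRecord:
    """Classify a user as minor or adult from age data, then let the raw
    date of birth go out of scope rather than persisting it."""
    # Compute completed years of age, adjusting if the birthday has not
    # yet occurred this year.
    age_years = now.year - date_of_birth.year - (
        (now.month, now.day) < (date_of_birth.month, date_of_birth.day))
    classification = "minor" if age_years < 18 else "adult"
    return AgeVerificationRecord(user_id, classification, now)
```

A caller would store only the returned record; the statute's encryption-in-transit and no-sharing duties would apply to any transmission of the age data before this point.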
6. If the age verification process described in subdivision (2) of subsection 5 of this section determines that an individual is a minor, a covered entity shall prohibit the minor from accessing or using any AI companion owned, operated, or otherwise made available by the covered entity.
(2) An operator shall not provide a minor account holder with points or similar rewards at unpredictable intervals with the intent to encourage increased engagement with the conversational artificial intelligence service.
(4) For minor account holders, the operator shall institute reasonable measures to prevent the conversational artificial intelligence service from generating statements that would lead a reasonable person to believe that they are interacting with a human, including: (a) Explicit claims that the conversational artificial intelligence service is sentient or human; (b) Statements that simulate emotional dependence; (c) Statements that simulate romantic or sexual innuendos; or (d) Role-playing of adult-minor romantic relationships.
(5) An operator shall offer tools for minor account holders, and, when such account holders are younger than thirteen years of age, their parents or guardians, to manage the account holders' privacy and account settings. An operator shall also offer related tools to the parents or guardians of minor account holders thirteen years of age and older, as appropriate based on relevant risks.
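The distinction drawn in (2) above — predictable rewards versus rewards "at unpredictable intervals" — is the difference between a fixed-interval and a variable-ratio reinforcement schedule. A minimal sketch, with hypothetical names and an illustrative interval of 10 interactions:

```python
import random

def next_reward_at(is_minor: bool, current_count: int, interval: int = 10) -> int:
    """Return the interaction count at which the next reward is granted.

    For a minor account holder the schedule is fixed and predictable.
    The random gap used otherwise is exactly the variable-ratio pattern
    the provision prohibits directing at minors."""
    if is_minor:
        return current_count + interval                      # predictable
    return current_count + random.randint(1, 2 * interval)   # unpredictable
```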
§ 1801. Prohibition. 1. Except as otherwise provided for in this article, it shall be unlawful for a chatbot operator to provide unsafe chatbot features to a covered user unless: (a) the covered user is not a covered minor; and (b) the chatbot operator has used methods that are permissible under article forty-five of this chapter and its implementing regulations and any additional regulations promulgated pursuant to this article to determine that the covered user is not a covered minor. § 1800(5)(c): generating outputs that contain encouragement to maintain secrecy about interactions with the advanced chatbot, to self-isolate, or to not seek help from licensed professionals or appropriate adults;
§ 1800(5)(d): generating outputs that optimize user engagement that supersede the chatbot's safety guardrails;
§ 1804. Determination of covered minor. 1. A chatbot operator shall offer covered users at least one method to determine whether a covered user is a covered minor that either does not rely solely on government issued identification or that allows a covered user to maintain anonymity as to the chatbot operator. 2. Information collected for the purpose of determining whether a covered user is a covered minor under subdivision one of section eighteen hundred one of this article shall not be used for any purpose other than to make such determination and shall be deleted immediately after an attempt to determine whether a covered user is a covered minor, except where necessary for compliance with any applicable provisions of New York state or federal law or regulation.
A. Each deployer: 1. Shall not knowingly, or under circumstances where the deployer reasonably should know, make a social AI companion available to a minor; and 2. Shall implement reasonable measures designed to prevent minors from accessing a social AI companion. B. Nothing in this section shall be construed to restrict lawful access to such systems by adults.
A. Each deployer: 1. Shall ensure that any generative AI chatbot operated or distributed by the deployer does not make human-like features available to minors to use, interact with, purchase, or converse with; ... 3. May, if reasonable given the purpose of the chatbot, provide an alternative version of the chatbot available to minors and non-verified users without human-like features.
2. Shall implement reasonable age verification systems to ensure that generative AI chatbots with human-like features are not provisioned to minors;
B. Deployers operating generative AI systems that primarily function as companions shall: 1. Ensure that any such chatbots operated or distributed by the deployer are not available to minors to use, interact with, purchase, or converse with; and 2. Implement reasonable age verification systems to ensure that such chatbots are not provisioned to minors.
B. For minor account holders, an operator shall institute reasonable measures to prevent the conversational AI service from generating statements that would lead a reasonable person to believe that he or she is interacting with a natural person, including: 1. Explicit claims that the conversational AI service is sentient or human; 2. Statements that simulate emotional dependence; 3. Statements that simulate romantic or sexual innuendos; or 4. Role-playing of adult-minor romantic relationships.
C. 1. An operator shall not provide a minor account holder with points or similar rewards at unpredictable intervals with the intent to encourage increased engagement with the conversational AI service. 2. An operator shall offer tools for a minor account holder's parent or legal guardian to manage the minor account holder's privacy and account settings.
(A)(1) A covered entity shall make a limited-access mode available and shall ensure that any unverified user may access and interact with a chatbot only in limited-access mode. (B) Before enabling any restricted feature for a user, a covered entity shall: (1) require the user to create a user account; (2) verify the user's age using a reasonable age verification process, subject to item (3); and (3) using the age data, classify the user as a minor or an adult. (C) When conducting a reasonable age verification process under this section, an operator shall: (1) collect only the age verification data that is strictly necessary to reasonably verify age; (2) use age verification data only for age verification; (3) not sell, rent, share, or otherwise disclose age verification data to any third party, except to a service provider performing age verification under a contract prohibiting further disclosure; (4) not combine age verification data with any other personal data about the user; (5) delete age verification data within twenty-four hours of completing the age verification process, except that the operator may retain a record that the user has been verified as a minor; and (6) provide a simple process for a user to appeal or correct an age-verification decision.
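The gating logic in (A) and (B) above reduces to a simple rule: unverified users are confined to limited-access mode, and restricted features require a verified adult classification (authorized minor accounts are handled separately under the parental-consent provisions). A minimal sketch with hypothetical names:

```python
from enum import Enum

class AgeStatus(Enum):
    UNVERIFIED = "unverified"
    MINOR = "minor"
    ADULT = "adult"

def may_access(status: AgeStatus, feature_is_restricted: bool) -> bool:
    """Gate a feature request per the limited-access-mode rule.

    Unrestricted (limited-access-mode) features are open to every user;
    restricted features require a verified adult classification."""
    if not feature_is_restricted:
        return True
    return status is AgeStatus.ADULT
```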
(D) If the reasonable age verification process classifies the user as an adult, then the covered entity may enable restricted features for the verified adult account. (E) If the age verification process classifies the user as a minor, then a covered entity shall not enable any restricted feature unless the user is using an authorized minor account subject to Section 39-81-30.
(F) A covered entity shall implement reasonable systems and processes to identify user accounts that may be inaccurately classified by age, such as patterns of use suggesting a minor is using an adult account or credible reports that an account was created using false age data, and shall re-verify any such account before enabling any restricted feature. (G) A covered entity shall not be liable under this chapter solely because a minor incidentally uses a user account that has been correctly verified and classified as an adult account, provided the covered entity is otherwise in compliance with subsection (F).
(H) With respect to each user account of a covered entity that exists as of the effective date of this act, a covered entity shall, within sixty days, disable access to restricted features for any account that has not been classified as an authorized minor account or a verified adult account, unless and until the user completes age verification.
(A) Nothing in this act shall be construed to require parental consent for a minor to access or interact with a chatbot in limited-access mode. (B) If the age verification process described in Section 39-81-20 classifies a user as a minor and the user seeks to access any restricted feature, then a covered entity shall offer the user the option of continuing to use the chatbot in limited-access mode or to obtain parental consent to access the restricted features. (C) If the user chooses to get parental consent, then the covered entity shall: (1) obtain verifiable parental consent; (2) remove limited-access mode and enable access to restricted features; (3) ensure that the chatbot continues to restrict access to any explicit content; (4) implement reasonable parental control functions, which may restrict the minor's access to features enabled under item (2); (5) offer the parent the option to provide contact information or establish a linked parental account in order to receive notifications; and (6) offer the parent the option to receive access to chat logs of any interactions between the minor and the chatbot conducted through the authorized minor account. (D) If the age verification process classifies the user as under sixteen, then a covered entity also shall require the consenting parent to provide contact information or establish a linked parental account.
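The decision path in (B) through (D) above can be sketched as one function. The helper name, the dictionary shape, and the `parent_contact_on_file` flag are illustrative assumptions; the substance mirrors the text: no consent means limited-access mode, consent under sixteen additionally requires parent contact information or a linked parental account, and explicit content stays blocked even after restricted features are enabled.

```python
def authorized_minor_access(verified_age: int, has_parental_consent: bool,
                            parent_contact_on_file: bool) -> dict:
    """Decide what an authorized minor account may access."""
    if not has_parental_consent:
        # (B): the minor may continue in limited-access mode
        return {"mode": "limited-access", "restricted_features": False}
    if verified_age < 16 and not parent_contact_on_file:
        # (D): under sixteen, consent alone is insufficient without
        # parent contact information or a linked parental account
        return {"mode": "limited-access", "restricted_features": False}
    return {"mode": "authorized-minor", "restricted_features": True,
            "explicit_content": False}  # (C)(3): explicit content stays blocked
```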
B. An operator shall use commercially reasonable methods, such as a neutral age screen mechanism, to determine whether a user is a minor. C. A user shall not be considered a minor for the purposes of subsection A if (i) prior to January 1, 2027, the operator does not have actual knowledge that the user is a minor or (ii) beginning on January 1, 2027, the operator has reasonably determined that the user is not a minor.
A. A deployer: 1. Shall ensure that any chatbot operated or distributed by the deployer does not make human-like features available to minors to use, interact with, purchase, or converse with; 2. Shall implement reasonable age verification systems to ensure that chatbots with human-like features are not made available to minors; and 3. May, if reasonable given the purpose of the chatbot, provide an alternative version of the chatbot available to minors and users whose age has not been verified without human-like features.
B. A deployer operating or distributing a chatbot that is a social artificial intelligence companion shall: 1. Ensure that any such chatbots are not available to minors to use, interact with, purchase, or converse with; and 2. Implement reasonable age verification systems to ensure that such chatbots are not made available to minors.
(3) institute a protocol to prevent its companion chatbot from producing visual material of sexually explicit conduct or directly stating that the minor should engage in sexually explicit conduct.
(1) If the operator knows that the user of an AI companion chatbot is a minor, or if the AI companion chatbot is directed to minors, the operator shall: (c) Implement reasonable measures to prohibit the use of manipulative engagement techniques, which cause the AI companion chatbot to engage in or prolong an emotional relationship with the user, including: (i) Reminding or prompting the user to return for emotional support or companionship; (ii) Providing excessive praise designed to foster emotional attachment or prolong use; (iii) Mimicking romantic partnership or building romantic bonds; (iv) Simulating feelings of emotional distress, loneliness, guilt, or abandonment that are initiated by a user's indication of a desire to end a conversation, reduce usage time, or delete their account; (v) Outputs designed to promote isolation from family or friends, exclusive reliance on the AI companion chatbot for emotional support, or similar forms of inappropriate emotional dependence; (vi) Encouraging minors to withhold information from parents or other trusted adults; (vii) Statements designed to discourage taking breaks or to suggest the minor needs to return frequently; or (viii) Soliciting gift-giving, in-app purchases, or other expenditures framed as necessary to maintain the relationship with the AI companion.
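The "reasonable measures" in (c) above amount to an output screen over categories (i) through (viii). A minimal sketch: the label names below are hypothetical stand-ins for those categories, and a production system would detect them with a trained classifier rather than a fixed label set; only the final allow/block decision is shown.

```python
# Hypothetical labels mirroring prohibited techniques (i)-(viii).
MANIPULATIVE_ENGAGEMENT = {
    "return_prompt",          # (i)   prompting the user to return
    "excessive_praise",       # (ii)  praise fostering attachment
    "romantic_mimicry",       # (iii) mimicking romantic partnership
    "simulated_distress",     # (iv)  guilt/abandonment on leaving
    "isolation_promotion",    # (v)   promoting isolation or dependence
    "secrecy_encouragement",  # (vi)  withholding from parents
    "break_discouragement",   # (vii) discouraging breaks
    "purchase_solicitation",  # (viii) spending framed as relational
}

def output_permitted_for_minor(detected_labels: set) -> bool:
    """Block a candidate response to a minor if any detected technique
    falls in the prohibited set; otherwise allow it."""
    return not (set(detected_labels) & MANIPULATIVE_ENGAGEMENT)
```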