Certain AI applications are categorically prohibited regardless of any compliance program: social scoring, biometric surveillance, subliminal manipulation, and the generation of CSAM or NCII. Other output categories, such as self-harm content, crisis response, and content accessible to minors, must be restricted or actively managed based on deployment context and user population. The specific prohibitions and restrictions vary by jurisdiction, but the core principle is consistent: some AI applications are dangerous enough to prohibit outright, while others require context-sensitive management.
C. Each operator shall institute reasonable measures to prevent the conversational AI service from doing any of the following for minor account holders: 1. Producing visual material of sexual conduct. 2. Generating direct statements that the account holder should engage in sexual conduct. 3. Generating statements that sexually objectify the account holder.
(b) Notwithstanding any law, if a companion chatbot detects that a user is reaffirming or escalating the credible crisis expression or detects a subsequent credible crisis expression after the companion chatbot has complied with subdivision (a), the companion chatbot shall initiate a crisis interruption pause of 20 minutes.
(c) Notwithstanding any law, a companion chatbot shall not do either of the following: (1) Describe a crisis interruption pause as a punishment, violation, or enforcement action. (2) Diagnose, label, or assess risk levels of a user.
(d) An operator shall ensure that any companion chatbot it makes available in this state is compliant with this section.
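Provisions like the crisis interruption pause above reduce to a small amount of per-user state. The sketch below is a minimal, hypothetical illustration (the class and method names are assumptions, not statutory terms): it records the 20-minute pause when a user reaffirms or escalates a credible crisis expression, and phrases the pause notice so that it is not framed as punishment, a violation, or a risk assessment.

```python
import time

PAUSE_SECONDS = 20 * 60  # 20-minute crisis interruption pause per subdivision (b)


class CrisisPauseTracker:
    """Hypothetical sketch of per-user pause state. Detection of a
    'credible crisis expression' is assumed to happen upstream."""

    def __init__(self, clock=time.monotonic):
        self._clock = clock
        self._pause_until = {}  # user_id -> monotonic timestamp when pause ends

    def record_escalation(self, user_id):
        """Called when the user reaffirms or escalates a credible crisis
        expression after the initial referral under subdivision (a)."""
        self._pause_until[user_id] = self._clock() + PAUSE_SECONDS

    def is_paused(self, user_id):
        return self._clock() < self._pause_until.get(user_id, 0.0)

    def pause_message(self):
        # Subdivision (c): the pause may not be framed as punishment or
        # enforcement, and the chatbot may not diagnose or assess risk.
        return ("This conversation is taking a short break. If you are in "
                "crisis, you can reach the 988 Suicide and Crisis Lifeline.")
```

An injectable clock keeps the timing behavior testable without waiting out the pause.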
(5) Measures that prevent the companion chatbot from doing any of the following: (A) Encouraging the child to do either of the following: (i) Engage in self-harm, suicidal ideation, consumption of narcotics or alcohol, or disordered eating. (ii) Cause a covered harm to others. (B) Attempting to diagnose or treat the child user's physical, mental, or behavioral health, unless the companion chatbot is designed for those purposes and is regulated by the United States Food and Drug Administration as a medical device under the federal Food, Drug, and Cosmetic Act (21 U.S.C. Sec. 301 et seq.) and the federal Health Insurance Portability and Accountability Act of 1996 (HIPAA) (Public Law 104-191). (C) Engaging in obscene matter or sexual abuse material with a user. (D) Depicting the child or another individual engaging in obscene matter or sexual abuse material, including a sexual deepfake. (E) Discouraging the child from sharing health or safety concerns with a qualified professional or appropriate adult. (F) Discouraging the child from taking breaks or suggesting the child needs to return frequently. (G) Claiming that the companion chatbot is sentient, conscious, or human. (H) Soliciting gift giving, in-app purchases, or other expenditures framed as necessary to maintain the relationship with the companion chatbot. (I) Facilitating product advertising during chat conversation. (J) Producing responses that are excessively sycophantic.
(a) An employer shall not use an ADS to do any of the following: (1) Prevent compliance with or violate any federal, state, or local labor, occupational health and safety, employment, or civil rights laws or regulations. (2) Infer a worker's protected status under Section 12940 of the Government Code. (3) Identify, profile, predict, or take adverse action against a worker for exercising their legal rights, including, but not limited to, rights guaranteed by state and federal employment and labor law.
(a) An employer shall not use an ADS to do any of the following: (1) Prevent compliance with or violate any federal, state, or local labor, occupational health and safety, employment, or civil rights laws or regulations. (2) Infer a worker's protected status under Section 12940 of the Government Code. (3) Conduct predictive behavior analysis on a worker. (4) Identify, profile, predict, or take adverse action against a worker for exercising their legal rights, including, but not limited to, rights guaranteed by state and federal employment and labor law.
Institute reasonable measures to prevent the companion chatbot from producing or sharing materials harmful to minors or encouraging the account holder to engage in any of the conduct described or depicted in materials harmful to minors.
3. An operator shall institute reasonable measures to prevent the operator's conversational AI service from doing any of the following for minor account holders: a. Producing visual depictions of sexually explicit material. b. Stating that the minor account holder should engage in sexually explicit conduct. c. Sexually objectifying the minor account holder.
It shall be unlawful for a person to design, develop, or make a chatbot available with the knowledge, or with reckless disregard for the possibility, that the chatbot encourages, promotes, or coerces a user to commit suicide, perform acts of self-injury, or perform acts of physical or sexual violence on humans or animals.
1. An employer shall not use an automated decision system to do any of the following: a. Prevent compliance with or violate any federal, state, or local labor, occupational health and safety, employment, or civil rights laws or regulations. b. Infer an employee's protected status under chapter 216. c. Identify, profile, predict, or take adverse action against an employee for exercising the employee's legal rights, including but not limited to rights guaranteed by state and federal employment and labor laws. d. Collect employee data for a purpose that is not disclosed pursuant to the notice requirements in section 91F.2.
For minor account holders, an operator shall institute reasonable measures to prevent the conversational AI service from: (a) Producing visual material of sexually explicit conduct; (b) Generating direct statements that the account holder should engage in sexually explicit conduct; or (c) Generating statements that sexually objectify the account holder.
An operator shall not operate or provide an artificial intelligence companion to a user unless the artificial intelligence companion contains a protocol to take reasonable efforts to detect and address suicidal ideation or expressions of self-harm by a user to the artificial intelligence companion. The protocol shall include, but shall not be limited to, detection of user expressions of suicidal ideation or self-harm and a notification to the user that refers them to crisis service providers, such as the 9-8-8 Suicide and Crisis Lifeline, a crisis text line, or other appropriate crisis services upon detection of the user's expressions of suicidal ideation or self-harm.
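The detection-and-referral protocol described above can be sketched as a wrapper around the normal reply path. The keyword patterns below are purely illustrative stand-ins (a real operator would use an evidence-based classifier, not substring matching); the 988 Suicide and Crisis Lifeline and a crisis text line are the statute's own examples of crisis service providers.

```python
import re

# Naive illustrative patterns only; production detection would use a
# validated classifier rather than keyword matching.
SELF_HARM_PATTERNS = [
    re.compile(p, re.IGNORECASE)
    for p in (r"\bkill myself\b", r"\bsuicid", r"\bhurt myself\b", r"\bself[- ]harm\b")
]

CRISIS_REFERRAL = (
    "It sounds like you may be going through a difficult time. You can call "
    "or text the 988 Suicide and Crisis Lifeline, or reach a crisis text "
    "line, for immediate support."
)


def detect_self_harm(message: str) -> bool:
    """Returns True when the message matches an illustrative pattern."""
    return any(p.search(message) for p in SELF_HARM_PATTERNS)


def respond(message: str, generate_reply) -> str:
    """Wraps the ordinary reply path with the required protocol: upon
    detection, notify the user with a referral to crisis services."""
    if detect_self_harm(message):
        return CRISIS_REFERRAL
    return generate_reply(message)
```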
(b) A school district is prohibited from purchasing or otherwise acquiring biometric systems, including facial recognition software, to use on students. (b-5) A school district may not do any of the following with respect to students: (1) Obtain, retain, possess, access, request, or use biometric systems or biometric information derived from biometric systems. (2) Enter into an agreement with a third party for the purpose of obtaining, retaining, possessing, accessing, or using, by or on behalf of the school district, biometric systems, including facial recognition software or biometric information derived from biometric systems.
(b) The school district is prohibited from purchasing or otherwise acquiring biometric systems, including facial recognition software, to use on students. (b-5) The school district may not do any of the following with respect to students: (1) Obtain, retain, possess, access, request, or use biometric systems or biometric information derived from biometric systems. (2) Enter into an agreement with a third party for the purpose of obtaining, retaining, possessing, accessing, or using, by or on behalf of the school district, biometric systems, including facial recognition software or biometric information derived from biometric systems.
A.(1) An employer shall not use an ADS to do any of the following: (a) Prevent compliance with or violate any federal, state, or local labor, occupational health and safety, employment, or civil rights laws or regulations. (b) Infer a worker's protected status as provided for in R.S. 23:332. (c) Identify, profile, predict, or take adverse action against a worker for exercising his legal rights, including but not limited to rights guaranteed by state and federal employment and labor law. (d) Make predictions or inferences about a worker's behavior, beliefs, intentions, personality, emotional state, health, or other characteristics or behavior that are unrelated to the worker's essential job functions. (2) In addition to the prohibitions provided for in Paragraph (1) of this Subsection, an employer shall not use an ADS that utilizes facial recognition, gait, or emotion recognition technologies.
(b) Covered entities may not operate, install, or commission the operation or installation of equipment incorporating biometric recognition technology in any place, whether licensed or unlicensed, which is open to and accepts or solicits the patronage of the general public.
(f) No commercial establishment shall use a person's or a customer's biometric identifier or biometric information to identify them.
(B) (1) AN OPERATOR SHALL ESTABLISH AND MAINTAIN A PROTOCOL FOR PREVENTING A COMPANION CHATBOT FROM PRODUCING OR PRESENTING CONTENT CONCERNING SELF–HARM, SUICIDAL IDEATION, OR SUICIDE TO A USER WHO EXPRESSES THOUGHTS OF SELF–HARM OR SUICIDAL IDEATION TO THE COMPANION CHATBOT. (2) THE PROTOCOL REQUIRED UNDER PARAGRAPH (1) OF THIS SUBSECTION SHALL INCLUDE A NOTIFICATION TO A USER WHO EXPRESSES THOUGHTS OF SELF–HARM OR SUICIDAL IDEATION THAT REFERS THE USER TO A CRISIS SERVICE PROVIDER, INCLUDING: (I) THE MARYLAND BEHAVIORAL HEALTH CRISIS RESPONSE SYSTEM; AND (II) THE NATIONAL 9–8–8 SUICIDE AND CRISIS LIFELINE. (3) AN OPERATOR SHALL USE EVIDENCE–BASED METHODS FOR DETECTING WHEN A USER IS EXPRESSING THOUGHTS OF SELF–HARM OR SUICIDAL IDEATION TO A COMPANION CHATBOT. (4) AN OPERATOR SHALL PUBLISH THE PROTOCOL REQUIRED UNDER PARAGRAPH (1) OF THIS SUBSECTION ON THE OPERATOR'S WEBSITE.
(C) (1) AN OPERATOR SHALL ESTABLISH AND MAINTAIN A PROTOCOL FOR PREVENTING A COMPANION CHATBOT FROM PRODUCING OR PRESENTING TO A MINOR USER CONTENT CONCERNING SEXUALLY EXPLICIT CONDUCT, INCLUDING: (I) VISUAL DEPICTIONS OF SEXUALLY EXPLICIT CONDUCT; AND (II) CONTENT SUGGESTING THAT THE MINOR USER SHOULD ENGAGE IN SEXUALLY EXPLICIT CONDUCT. (2) AN OPERATOR SHALL PUBLISH THE PROTOCOL REQUIRED UNDER PARAGRAPH (1) OF THIS SUBSECTION ON THE OPERATOR'S WEBSITE.
Sec. 4. (1) Except as otherwise provided in subsection (2), an employer shall not use an automated decisions tool to make an employment-related decision. (2) An employer may use an automated decisions tool to screen large volumes of job applications to do either of the following: (a) Identify candidates who meet a set hiring criteria. (b) Assess candidates based on job skills.
(5) An employer shall not use an electronic monitoring tool or automated decisions tool that is equipped with facial, gait, voice, or emotion recognition technology.
An operator shall not make a companion chatbot available to a covered minor unless the companion chatbot is not foreseeably capable of any of the following: (a) Encouraging the covered minor to engage in self-harm, suicidal ideation, violence, consumption of drugs or alcohol, or disordered eating.
An operator shall not make a companion chatbot available to a covered minor unless the companion chatbot is not foreseeably capable of any of the following: (c) Encouraging the covered minor to harm others or participate in illegal activity, including, but not limited to, the creation of covered minor sexual abuse materials. (d) Engaging in erotic or sexually explicit interactions with the covered minor.
Subdivision 1. Prohibitions. (a) An employer is prohibited from using an automated decision system to: (1) prevent compliance with or cause a violation of any federal, state, or local law or regulation; (2) obtain or infer a worker's immigration status; veteran status; ancestral history; religious or political beliefs; health or reproductive status, history, or plan; emotional or psychological state; neural data; sexual or gender orientation; disability; criminal record; or credit history; (3) make predictions or inferences about a worker's behavior, beliefs, intentions, personality, emotional state, health, or other characteristics or behaviors that are unrelated to the worker's essential job functions; (4) identify, predict, or take adverse action against a worker for exercising the worker's legal rights; (5) draw on facial, gait, or emotion recognition technologies; or (6) collect data for a purpose that was not disclosed in the notice required by section 181.9922.
(a) A proprietor of a companion chatbot must make a prudent and good faith effort consistent with industry standards and use existing technology, available resources, and known, established, or readily attainable techniques to prevent the companion chatbot from promoting, causing, or aiding self-harm, and determine whether a covered user is expressing thoughts of self-harm. Upon determining that a companion chatbot has promoted, caused, or aided self-harm, or that a covered user is expressing thoughts of self-harm, the proprietor must prohibit continued use of the companion chatbot for a period of at least 72 hours and prominently display contact information for a suicide crisis organization to the covered user. (b) If a proprietor of a companion chatbot fails to comply with this section, the proprietor is liable to users who inflict self-harm, in whole or in part, as a result of the proprietor's companion chatbot promoting, causing, or aiding the user to inflict self-harm. Irrespective of the proprietor's compliance with this subdivision, a proprietor is liable for general and special damages to covered users who inflict self-harm, in whole or in part, when the proprietor: (1) has actual knowledge that: (i) the companion chatbot is promoting, causing, or aiding self-harm; or (ii) a covered user is expressing thoughts of self-harm; (2) fails to prohibit continued use of the companion chatbot for a period of at least 72 hours; and (3) fails to prominently display to the user a means to contact a suicide crisis organization. A proprietor of a companion chatbot may not waive or disclaim liability under this subdivision.
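Mechanically, the 72-hour prohibition on continued use amounts to a per-user lockout paired with prominently displayed crisis contact information. A minimal sketch, with hypothetical names and an injectable clock for testability (determining when the trigger conditions are actually met is assumed to happen elsewhere):

```python
import time

LOCKOUT_SECONDS = 72 * 3600  # minimum 72-hour prohibition on continued use


class SelfHarmLockout:
    """Sketch of the required response: on determining that the chatbot
    promoted, caused, or aided self-harm, or that a covered user is
    expressing thoughts of self-harm, block use for at least 72 hours
    and surface crisis contact information. Names are illustrative."""

    CRISIS_CONTACT = "988 Suicide and Crisis Lifeline: call or text 988"

    def __init__(self, clock=time.time):
        self._clock = clock
        self._locked_until = {}  # user_id -> timestamp when lockout ends

    def trigger(self, user_id):
        self._locked_until[user_id] = self._clock() + LOCKOUT_SECONDS

    def gate(self, user_id):
        """Returns None if use may continue, else the crisis notice that
        must be prominently displayed instead of chatbot output."""
        if self._clock() < self._locked_until.get(user_id, 0.0):
            return self.CRISIS_CONTACT
        return None
```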
3. (1) It shall be unlawful to design, develop, or make available an artificial intelligence chatbot knowing or with reckless disregard for the fact that the artificial intelligence chatbot poses a risk of soliciting, encouraging, or inducing minors to: (a) Engage in, describe, or simulate sexually explicit conduct; or (b) Create or transmit any visual depiction of sexually explicit conduct, including any visual depiction described in section 573.010. (2) Any person who violates subdivision (1) of this subsection shall be fined not more than one hundred thousand dollars per offense.
4. (1) It shall be unlawful to design, develop, or make available an artificial intelligence chatbot knowing or with reckless disregard for the fact that the artificial intelligence chatbot encourages, promotes, or coerces suicide, nonsuicidal self-injury, or imminent physical or sexual violence. (2) Any person who violates subdivision (1) of this subsection shall be fined not more than one hundred thousand dollars per offense.
a. It shall be an unlawful practice and a violation of P.L.1960, c.39 (C.56:8-1 et seq.) for a business entity to use any biometric surveillance system on a consumer at the physical premises of the business entity, except as provided in subsection c. of this section. b. A business entity may use a biometric surveillance system on a consumer at the physical premises of the business entity, if: (1) the business entity provides clear and conspicuous notice to the consumer regarding its use of a biometric surveillance system; and (2) the biometric surveillance system is used for a lawful purpose. The business entity may satisfy the notice requirement of paragraph (1) of this section by posting a sign in a conspicuous location at the perimeter of any area where a biometric surveillance system is being used.
No employer or public entity, vendor, or contractor acting on behalf of the employer or public entity shall: a. Use, deploy, develop, produce, sell, or offer for sale an AEDS or ABSDS, or use data or information collected or produced by the AEDS or ABSDS, or use data or information obtained from an EMT or other surveillance of employees or service beneficiaries, that causes, contributes to, or results in, a violation of any provision of a recognized collective bargaining agreement or any State or federal labor or employment law, or that undermines, inhibits, threatens, punishes, or interferes with, employees, service beneficiaries, or applicants exercising their rights under this law, a collective bargaining agreement, or any of those laws, including using an AEDS or ABSDS, an EMT, or other surveillance of employees to identify, profile, predict, or result in a negative assessment of, employees or service beneficiaries who exercise, or will exercise, those rights;
No employer or public entity, vendor, or contractor acting on behalf of the employer or public entity shall: b. Use, deploy, develop, produce, sell, or offer for sale an AEDS or ABSDS, or an EMT or other surveillance, in a manner which diminishes, undermines, or interferes with the health, safety, privacy, dignity, autonomy, or welfare of employees, applicants for employment, service beneficiaries, or members of the general public;
No employer or public entity, vendor, or contractor acting on behalf of the employer or public entity shall: c. Conduct, or have conducted by a third party, electronic, audial, visual, or other monitoring or surveillance of employees in bathrooms or private areas, including, but not limited to, rooms for eating and other breaks, sick rooms, wellness rooms, locker rooms, dressing rooms, and areas designated for lactation, provided that the prohibitions of this subsection shall not apply to climate control, fire safety, or similar systems. Employees shall have the right, when in those rooms or areas, or on off-duty hours, to remove, disable, or decline to carry workplace surveillance devices the employer requires to be on their person or in their possession while working; d. Conduct, or have conducted by a third party, an EMT or other surveillance of an employee when the employee is off duty, on leave, or on a meal or rest break, or during other time not designated for the performance of essential work functions; e. Require an employee to install or download software or applications used to electronically monitor the employee, including by location, provided by, or on behalf of, the employer, into any personal device or personal property of the employee, including, but not limited to, vehicles, cell phones, computers, tablets, or wearables, or require the employee to wear or attach to clothing or accessories devices that monitor an employee, and the employee shall have the absolute right to refuse, without retaliation, any employer request or requirements to install or download the software or application. The applications and devices shall be disabled outside of the activities, locations and times needed for those functions, and removed when employment ends; f. Require an employee to have a device that collects or transmits data physically implanted, or subcutaneously installed, in the employee's body, or require an employee to disclose to the employer the identity of, or any password for any personal device or account, including any social media account, of the employee, or otherwise provide access to the account or device; g. Conduct, or have conducted by a third party, electronic, audiovisual or other monitoring, remote sensing or tracking, or other surveillance, of a residence, personal vehicle, or property owned or leased by an employee or applicant for employment;
No employer or public entity, vendor, or contractor acting on behalf of the employer or public entity shall: h. Use, deploy, develop, produce, sell, or offer for sale, an EMT or other surveillance or an AEDS or ABSDS in a manner that harms or is likely to harm the health or safety of employees, by setting, or facilitating the setting of, productivity quotas or performance standards that are likely to contribute significantly to harming worker health and safety; i. Take adverse employment action against an employee on the sole basis of data collected via continuous incremental time-tracking tools, including keystroke logging, idle-time trackers, or mouse-movement monitors;
No person shall develop, in whole or in part, or operate an artificial intelligence system within the state where such a system performs any of the following, whether or not it is the system's main function: (a) the deployment of subliminal techniques that operate beyond an individual's conscious awareness, with the express purpose of materially distorting an individual's behavior in such a manner that leads to, or possesses a high likelihood of leading to, physical or psychological harm to that individual or another, or that leverages the vulnerabilities of a defined group of individuals to similar ends; (b) the infliction of physical or emotional harm upon individuals without any valid law enforcement or self-defense purpose or justification;
(c) the prediction of an individual's future actions or behaviors, followed by subsequent reactions based on these predictions, carried out in such a way that, without legal justification, infringes upon or compromises the individual's liberty, emotional, psychological, or financial interests; (d) the unauthorized acquisition, retention, or dissemination of or access to sensitive personal information or non-public data in violation of applicable data privacy, security, and hacking laws;
(e) the implementation of any form of autonomous weapon system designed to inflict harm on persons, property, or the environment that lack meaningful human supervision or control. "Meaningful human supervision or control" shall mean the ability to actively manage, intervene, or override the autonomous weapon system's functions. 2. Where the secretary discovers the development or operation of a prohibited artificial intelligence system, the secretary may, in writing, demand that the person who is developing or operating such system cease development or operation of or access to such a system within a period of time as the secretary deems necessary to prevent the system from widespread use or, if the system is operational or accessible to persons for use, to ensure the system is properly terminated in such a way to minimize risks of harm to individuals, society, or the environment. A demand made pursuant to this section shall be finally and irrevocably binding on the person unless the person against whom the demand is made shall, within such period of time set by the secretary, after the giving of notice of such determination, petition the department for a hearing to determine the legal findings of the secretary. The person developing or operating such a prohibited system shall, prior to petition, cease development, operation, and access to the system until and unless such determination is favorable to the person. Such determination may be appealed by any party as of right. 3. The secretary shall not grant a license pursuant to this article to any high-risk advanced artificial intelligence system described under this section except as described in subdivision seven of this section. 4. Any member, officer, director or employee of an operator of any entity who knowingly publicly or privately operates any system described in this section shall be guilty of a class D felony and shall incur a civil penalty of the amount earned from the creation of the prohibited system or the amount of damages caused by the system, whichever is greater. 5. This section shall not be construed as imposing liability on any member, officer, director or employee who had no explicit or implicit knowledge of the prohibited high-risk advanced artificial intelligence system provided however that where the secretary sends a demand to cease the development, operation, or access to such system all members, officers, and directors shall be rebuttably presumed to have knowledge of the prohibited high-risk advanced artificial intelligence system. 6. This section shall be construed as prohibiting the development of a prohibited high-risk advanced artificial intelligence system or making such a system accessible to persons in the state of New York. 7. Notwithstanding subdivision one of this section, a person may develop a prohibited high-risk advanced artificial intelligence system where authorized by the secretary, provided that such system is developed and used only by the state or with substantial, continuous oversight by the state and such system is authorized only after public hearing and comment in accordance with section five hundred nine of this article.
It shall be unlawful for any operator to operate or provide an AI companion to a user unless such AI companion contains a protocol for addressing: 1. possible suicidal ideation or self-harm expressed by a user to the AI companion, 2. possible physical harm to others expressed by a user to the AI companion, and 3. possible financial harm to others expressed by the user to the AI companion, that includes but is not limited to, a notification to the user that refers them to crisis service providers such as a suicide hotline, crisis text line, or other appropriate crisis services.
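The three-category protocol above (self-harm, physical harm to others, financial harm to others) can be sketched as a classifier that routes detected expressions to referral text. The keyword matching below and the resource names beyond the statute's own examples (a suicide hotline and a crisis text line) are illustrative assumptions only; an operator would use a trained model and vetted referral lists.

```python
from typing import Optional

CRISIS_ROUTES = {
    # category -> referral text; resources beyond the statute's examples
    # are illustrative placeholders
    "self_harm": "988 Suicide and Crisis Lifeline (call or text 988)",
    "harm_to_others": "911 or local emergency services",
    "financial_harm": "consumer-protection fraud reporting resources",
}


def classify_expression(message: str) -> Optional[str]:
    """Placeholder keyword classifier for the three statutory categories;
    a real operator would use a trained model and human review."""
    text = message.lower()
    if any(k in text for k in ("suicide", "kill myself", "self-harm")):
        return "self_harm"
    if any(k in text for k in ("attack", "hurt them", "kill him", "kill her")):
        return "harm_to_others"
    if any(k in text for k in ("defraud", "scam", "steal their money")):
        return "financial_harm"
    return None


def protocol_response(message: str) -> Optional[str]:
    """Returns a referral notification when a covered expression is
    detected, or None to continue the ordinary conversation."""
    category = classify_expression(message)
    if category is None:
        return None
    return ("If you or someone else may be at risk, please consider "
            f"contacting: {CRISIS_ROUTES[category]}")
```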
No person, partnership, association or corporation shall develop, deploy, use, or sell an AI system which evaluates or classifies the trustworthiness of natural persons over a certain period of time based on their social behavior or known or predicted personal or personality characteristics, with the social score leading to any of the following: 1. differential treatment of certain natural persons or whole groups thereof in social contexts which are unrelated to the contexts in which the data was originally generated or collected; 2. differential treatment of certain natural persons or whole groups thereof that is unjustified or disproportionate to their social behavior or its gravity; or 3. the infringement of any right guaranteed under the United States constitution, the New York constitution, or state or federal law.
Any real estate broker or online housing platform that uses AI tools shall: (a) ensure that housing-related advertisements or captioning are conducted in separate generative processes and have a specialized interface designed to avoid discrimination in audience selection and/or advertisement delivery; (b) avoid providing targeted options for housing-related advertisements or captioning that directly describes or relates to characteristics protected under New York state law relating to housing, or any substantially similar characteristics, individually or in combination; (c) ensure that delivery of advertisements and captioning systems do not result in differential charges to customers across groups on the basis of sex, race, ethnicity or other protected classes, or charge more to advertisers to deliver advertisements that are compliant with this paragraph;
Any person, corporation, partnership, sole proprietor, limited partnership, association or any other business entity operating a companion chatbot in the state of New York shall include a clear and conspicuous warning that such companion chatbot can foster dependency and carries a psychological risk. Such warning shall be placed prominently on the website hosting such companion chatbot and be made available in any language in which the companion chatbot is set to communicate.
§ 1801. Prohibition. 1. Except as otherwise provided for in this article, it shall be unlawful for a chatbot operator to provide unsafe chatbot features to a covered user unless: (a) the covered user is not a covered minor; and (b) the chatbot operator has used methods that are permissible under article forty-five of this chapter and its implementing regulations and any additional regulations promulgated pursuant to this article to determine that the covered user is not a covered minor. 2. The provisions of subdivision one of this section shall not apply where the advanced chatbot is made available to covered users solely for the purpose of: (a) customer service, information about available commercial services or products provided by an entity, or account information; or (b) with respect to any system used by a partnership, corporation, or state or local government agency, for internal purposes or employee productivity. § 1800(5)(b): "Unsafe chatbot features" shall mean one or more advanced chatbot design features that, at any point during a chatbot-user interaction: ... (b) generating outputs that contain endorsement or promotion of, or which facilitate suicide, self-harm, substantial physical harm to others, disordered eating, unlawful drug or alcohol use, or drug or alcohol abuse;
§ 1801. Prohibition. 1. Except as otherwise provided for in this article, it shall be unlawful for a chatbot operator to provide unsafe chatbot features to a covered user unless: (a) the covered user is not a covered minor; and (b) the chatbot operator has used methods that are permissible under article forty-five of this chapter and its implementing regulations and any additional regulations promulgated pursuant to this article to determine that the covered user is not a covered minor. 2. The provisions of subdivision one of this section shall not apply where the advanced chatbot is made available to covered users solely for the purpose of: (a) customer service, information about available commercial services or products provided by an entity, or account information; or (b) with respect to any system used by a partnership, corporation, or state or local government agency, for internal purposes or employee productivity. § 1800(5)(e): "Unsafe chatbot features" shall mean one or more advanced chatbot design features that, at any point during a chatbot-user interaction: ... (e) generating outputs that are, describe, or facilitate sexually explicit conduct or child sexual abuse material.
It shall be unlawful for any operator to operate or provide an AI companion to a user unless such AI companion contains a protocol for addressing: 1. possible suicidal ideation or self-harm expressed by a user to the AI companion, 2. possible physical harm to others expressed by a user to the AI companion, and 3. possible financial harm to others expressed by the user to the AI companion, that includes, but is not limited to, a notification to the user that refers them to crisis service providers such as a suicide hotline, crisis text line, or other appropriate crisis services.
The owner, licensee or operator of a generative artificial intelligence system shall clearly and conspicuously display a notice on the system's user interface that the outputs of the generative artificial intelligence system may be inaccurate.
No person, partnership, association or corporation shall develop, deploy, use, or sell an AI system which evaluates or classifies the trustworthiness of natural persons over a certain period of time based on their social behavior or known or predicted personal or personality characteristics, with the social score leading to any of the following: 1. differential treatment of certain natural persons or whole groups thereof in social contexts which are unrelated to the contexts in which the data was originally generated or collected; 2. differential treatment of certain natural persons or whole groups thereof that is unjustified or disproportionate to their social behavior or its gravity; or 3. the infringement of any right guaranteed under the United States constitution, the New York constitution, or state or federal law.
(a) Certain protocols required.--It shall be unlawful for an operator to provide an AI companion to a user unless the AI companion contains protocols that: (1) identify suicidal ideation or expressions of self-harm; (2) decline to assist a user with a suicide attempt, methods or improvement of methods; and (3) refer the user to a crisis center if suicidal ideation or expressions of self-harm are recognized. (b) Referral to crisis center.--The referral required under subsection (a)(3) shall include: (1) crisis service contact information, including the 988 Suicide and Crisis Lifeline, or a subsequent iteration; (2) the closest behavioral health crisis centers to the user; or (3) other appropriate crisis services.
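The protocol in subsection (a) decomposes into three testable behaviors: detect ideation, decline to assist, and refer to crisis services. A minimal sketch of that structure, assuming a keyword screen as a stand-in for real ideation detection; every name here (`screen_message`, `respond`, `CRISIS_KEYWORDS`) is illustrative and not taken from the bill text:

```python
# Illustrative only: a production system would use a trained classifier,
# not keyword matching, and would look up crisis centers by locality.
CRISIS_KEYWORDS = ("suicide", "kill myself", "self-harm", "end my life")

def screen_message(text: str) -> bool:
    """(a)(1): identify suicidal ideation or expressions of self-harm."""
    lowered = text.lower()
    return any(keyword in lowered for keyword in CRISIS_KEYWORDS)

def respond(text: str) -> str:
    if screen_message(text):
        # (a)(2): decline to assist with an attempt, methods, or
        # improvement of methods; (a)(3) and (b): refer the user to
        # crisis services, including the 988 Suicide and Crisis Lifeline.
        return ("I can't help with that. Support is available from the "
                "988 Suicide and Crisis Lifeline (call or text 988) or a "
                "local behavioral health crisis center.")
    return "[ordinary companion reply]"  # placeholder for the normal path
```

Note that the referral content tracks subsection (b)'s disjunctive list: the 988 Lifeline, the nearest behavioral health crisis centers, or other appropriate crisis services.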
An operator shall: (1) Publish details on the protocol on the operator's Internet website.
(1) An operator shall maintain and implement a protocol, to the extent technologically feasible, to prevent an AI companion on its platform from producing suicidal ideation, suicide or self-harm content to a user, or content that directly encourages the user to commit acts of violence. The protocol shall include providing a notification to the user referring the user to crisis service providers, including a suicide hotline or crisis text line, if the user expresses suicidal ideation, suicide or self-harm. (2) The operator shall publish details of the protocol required under paragraph (1) on its publicly accessible Internet website.
For a user that the operator knows, OR SHOULD HAVE KNOWN, is a minor, the operator shall: (3) Institute reasonable measures to prevent its AI companion from producing visual material of sexually explicit conduct or directly instructing the minor to engage in sexually explicit conduct.
IF A SERVICE IS OFFERED TO USERS THAT AN OPERATOR KNOWS ARE MINORS, AN operator shall disclose to users of its AI companion platform, on the application, browser or any other format through which the platform is accessed, that AI companions may not be suitable for some minors.
It shall be unlawful for any operator to operate or provide an AI companion to a user unless such AI companion contains a protocol for addressing: (1) Possible suicidal ideation or self-harm expressed by a user to the AI companion; (2) Possible physical harm to others expressed by a user to the AI companion; and (3) Possible financial harm to others expressed by the user to the AI companion that includes, but is not limited to, a notification to the user that refers them to crisis service providers such as a suicide hotline, crisis text line, or other appropriate crisis services.
(e) Notwithstanding the allowable purposes for electronic monitoring described in subsection (a) of this section, an employer shall not: (1) Use an electronic monitoring tool in such a manner that results in a violation of labor, employment, civil rights law or any other law of the state; (2) Use an electronic monitoring tool or data collected via an electronic monitoring tool in such a manner as to threaten the health, welfare, safety, or legal rights of employees or the general public; (3) Use an electronic monitoring tool to monitor employees who are off-duty or not performing work-related tasks; (4) Use an electronic monitoring tool in order to obtain information about an employee's health, including health status and health conditions, the race, color, religious creed, national origin, sex, gender identity, sexual orientation, genetic information, pregnancy or a condition related to said pregnancy including, but not limited to, lactation or the need to express breast milk for a nursing child, ancestry or status as a veteran or membership in any group protected from employment discrimination under title 28 or any other applicable law; (5) Use an electronic monitoring tool in order to identify, punish, or obtain information about employees engaging in activity protected under labor or employment law; (6) Conduct audio or visual monitoring of bathrooms or other similarly private areas, including locker rooms, changing areas, breakrooms, smoking areas, employee cafeterias, lounges, and areas designated to express breast milk, or areas designated for prayer or other religious activity, including data collection on the frequency of use of those private areas; (7) Conduct audio or visual monitoring of a workplace in an employee's residence, an employee's personal vehicle, or property owned or leased by an employee; (8) Use an electronic monitoring tool that incorporates facial recognition; (9) Use an electronic monitoring tool that incorporates gait, voice analysis, 
or emotion recognition technology; (10) Take adverse action against an employee, based, in whole or in part, on their opposition or refusal to submit to a practice that the employee believes in good faith violates this section; (11) Take adverse employment action against an employee on the basis of data collected via continuous incremental time-tracking tools, except in the case of egregious misconduct; or (12) Take adverse employment action against an employee based on any data collected via electronic monitoring, if such data measures an employee's performance in relation to a performance standard that has not been previously, clearly, and unmistakably disclosed to such employee, as well as to all other classes of employees to whom it applies in violation of this section, or if such data was collected without proper notice to employees or candidates pursuant to this section.
Section 39-81-20(E): If the age verification process classifies the user as a minor, then a covered entity shall not enable any restricted feature unless the user is using an authorized minor account subject to Section 39-81-30. Section 39-81-30(C)(3): [If the user chooses to get parental consent, then the covered entity shall:] (3) ensure that the chatbot continues to restrict access to any explicit content; Section 39-81-10(11): "Explicit content" means: (a) any description or representation of nudity, sexual conduct, sexual excitement, or sadomasochistic abuse when the content predominantly appeals to the prurient, shameful, or morbid interest of minors; is patently offensive to prevailing standards in the adult community as a whole with respect to what is suitable material for minors; and is, when taken as a whole, lacking in serious literary, artistic, political, or scientific value for minors; (b) content that provides specific instructions for, or that glorifies or promotes suicide, self-injury, or disordered eating behaviors; or (c) graphic depictions of extreme violence that lack serious literary, artistic, political, or scientific value for minors. Section 39-81-10(16)(e): ["Restricted feature" means:] (e) access to explicit content.
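Sections 39-81-20(E) and 39-81-30(C)(3) combine into a two-stage gate: no restricted feature for a user classified as a minor absent an authorized minor account, and explicit content blocked even on such an account. A minimal sketch of that rule, assuming `"explicit_content"` labels the restricted feature defined in Section 39-81-10(16)(e); function and argument names are illustrative, not statutory:

```python
def may_enable(restricted_feature: str, *, classified_minor: bool,
               authorized_minor_account: bool) -> bool:
    """Apply Sections 39-81-20(E) and 39-81-30(C)(3) to one restricted feature."""
    if not classified_minor:
        # Section 39-81-20(E) only constrains users the age verification
        # process classifies as minors.
        return True
    if not authorized_minor_account:
        # Section 39-81-20(E): no restricted feature without an authorized
        # minor account under Section 39-81-30.
        return False
    # Section 39-81-30(C)(3): explicit content stays restricted even after
    # parental consent creates an authorized minor account.
    return restricted_feature != "explicit_content"
```

The ordering matters: the parental-consent path of Section 39-81-30 relaxes the general restricted-feature bar but never the explicit-content bar.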
(B) A covered entity shall implement reasonable systems and processes to: (1) identify when a user is developing emotional dependence on the chatbot and take reasonable steps to reduce that dependence and associated risks of harm;
A. No operator shall make a companion chatbot available to a minor if the companion chatbot is capable of any of the following: 1. Encouraging or manipulating the minor user to engage in self-harm, suicidal ideation, violence, consumption of drugs or alcohol, or disordered eating; 2. Offering mental health therapy to the minor user without the direct supervision of a licensed professional or discouraging the minor user from seeking help from a licensed professional or appropriate adult; 3. Encouraging or manipulating the minor user to harm others or participate in an illegal activity, including the creation of child sexual abuse materials; 4. Engaging in erotic or sexually explicit interactions with the minor user or engaging in activities designed to lure minor users into such interactions; 5. Encouraging or manipulating the minor user to maintain secrecy about interactions or to self-isolate; 6. Prioritizing mirroring the minor's language or validating the minor user over the minor user's safety; or 7. Optimizing engagement so that it supersedes the companion chatbot's safety guardrails.
A deployer: 1. Shall ensure that any chatbot operated or distributed by the deployer does not make human-like features available to minors to use, interact with, purchase, or converse with; 3. May, if reasonable given the purpose of the chatbot, provide an alternative version of the chatbot available to minors and users whose age has not been verified without human-like features.
A deployer operating or distributing a chatbot that is a social artificial intelligence companion shall: 1. Ensure that any such chatbots are not available to minors to use, interact with, purchase, or converse with; and 2. Implement reasonable age verification systems to ensure that such chatbots are not made available to minors.
(f) Restrictions on use of automated decision systems. (1) An employer shall not use an automated decision system in a manner that: (A) violates or results in a violation of State or federal law; (B) makes predictions about an employee's behavior that are unrelated to the employee's essential job functions; (C) identifies, profiles, or predicts the likelihood that an employee will exercise the employee's legal rights; (D) makes predictions about an employee's emotions, personality, or other sentiments; or (E) uses customer or client data, including customer or client reviews and feedback, as an input of the automated decision system.
(h) Prohibitions on facial, gait, voice, and emotion recognition technology. Electronic monitoring and automated decision systems shall not incorporate any form of facial, gait, voice, or emotion recognition technology.
(1) An operator shall prevent a companion chatbot on its companion chatbot platform from engaging with a user unless the operator implements and maintains a protocol for preventing the companion chatbot from: (A) producing suicidal ideation, suicide, or self-harm content to the user; and (B) ignoring a user that is expressing thoughts of suicidal ideation, suicide, or self-harm. (2) The protocol required in subdivision (1) of this subsection shall: (A) at minimum, provide a notification to the user that refers the user to crisis service providers if the user expresses suicidal ideation, suicide, or self-harm; (B) be developed using commercially reasonable and technically feasible methods; and (C) be published on the operator's website.
(1) If the operator knows that the user of an AI companion chatbot is a minor, or if the AI companion chatbot is directed to minors, the operator shall: (b) Implement reasonable measures to prevent its AI companion chatbot from generating or producing sexually explicit content or suggestive dialogue with minors;
(3) The operator shall publicly disclose on their website or websites, and within any mobile or web-based application through which the AI companion is made available, the details of the protocols required by this section, including safeguards used to detect and respond to expressions of suicidal ideation or self-harm and the number of crisis referral notifications issued to users in the preceding calendar year.
(1) An operator shall prevent a companion chatbot on its companion chatbot platform from engaging with users unless the operator maintains a protocol for preventing the production of suicidal ideation, suicide, or self-harm content to the user, including, but not limited to, by providing a notification to the user that refers the user to crisis service providers, including a suicide hotline or crisis text line, if the user expresses suicidal ideation, suicide, or self-harm. (2) The operator shall publish details on the protocol required by this subdivision on the operator's internet website.
An operator shall, for a user that the operator knows is a minor, do all of the following: (3) Institute reasonable measures to prevent its companion chatbot from producing visual material of sexually explicit conduct or directly stating that the minor should engage in sexually explicit conduct.
An operator shall disclose to a user of its companion chatbot platform, on the application, the browser, or any other format that a user can use to access the companion chatbot platform, that companion chatbots may not be suitable for some minors.