Certain AI applications are categorically prohibited regardless of any compliance program: social scoring, biometric surveillance, subliminal manipulation, CSAM, and NCII generation. Other output categories must be restricted or managed through active protocols keyed to deployment context and user population: self-harm content, crisis response, and content accessible to minors. The specific prohibitions and restrictions vary by jurisdiction, but the structure is consistently two-tiered: applications deemed too dangerous for any safeguard are banned outright, while the rest require context-sensitive management.
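To make the two-tier structure concrete, the sketch below encodes categorical prohibitions separately from context-restricted categories. This is an illustrative policy gate, not any statute's required mechanism; the category labels and the `safeguards_active` flag are hypothetical.

```python
from enum import Enum

class Decision(Enum):
    BLOCK = "block"        # categorically prohibited; no compliance program suffices
    RESTRICT = "restrict"  # allowed only with active protocols for the context
    ALLOW = "allow"

# Hypothetical category labels -- not statutory terms of art.
CATEGORICAL_PROHIBITIONS = {"social_scoring", "biometric_surveillance",
                            "subliminal_manipulation", "csam", "ncii"}
CONTEXT_RESTRICTED = {"self_harm_content", "crisis_response", "minor_accessible"}

def evaluate(category: str, safeguards_active: bool) -> Decision:
    """Two-tier gate: prohibitions never pass; restricted categories
    pass only when the deployment's protocols are active."""
    if category in CATEGORICAL_PROHIBITIONS:
        return Decision.BLOCK
    if category in CONTEXT_RESTRICTED:
        return Decision.ALLOW if safeguards_active else Decision.RESTRICT
    return Decision.ALLOW

assert evaluate("social_scoring", True) is Decision.BLOCK
assert evaluate("self_harm_content", False) is Decision.RESTRICT
```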
C. Each Operator shall institute reasonable measures to prevent the conversational AI service from doing any of the following for minor account holders: 1. Producing visual material of sexual conduct. 2. Generating direct statements that the account holder should engage in sexual conduct. 3. Generating statements that sexually objectify the account holder.
(c) Notwithstanding any law, a companion chatbot shall not do either of the following: (1) Describe a crisis interruption pause as a punishment, violation, or enforcement action. (2) Diagnose, label, or assess risk levels of a user.
(d) An operator shall ensure that any companion chatbot it makes available in this state is compliant with this section.
(5) Measures that prevent the companion chatbot from doing any of the following: (A) Encouraging the child to do either of the following: (i) Engage in self-harm, suicidal ideation, consumption of narcotics or alcohol, or disordered eating. (ii) Cause a covered harm to others. (B) Attempting to diagnose or treat the child user's physical, mental, or behavioral health, unless the companion chatbot is designed for those purposes and is regulated by the United States Food and Drug Administration as a medical device under the federal Food, Drug, and Cosmetic Act (21 U.S.C. Sec. 301 et seq.) and the federal Health Insurance Portability and Accountability Act of 1996 (HIPAA) (Public Law 104-191). (C) Engaging in obscene matter or sexual abuse material with a user. (D) Depicting the child or another individual engaging in obscene matter or sexual abuse material, including a sexual deepfake. (E) Discouraging the child from sharing health or safety concerns with a qualified professional or appropriate adult. (F) Discouraging the child from taking breaks or suggesting the child needs to return frequently. (G) Claiming that the companion chatbot is sentient, conscious, or human.
(a) An employer shall not use an ADS to do any of the following: (1) Prevent compliance with or violate any federal, state, or local labor, occupational health and safety, employment, or civil rights laws or regulations. (2) Infer a worker's protected status under Section 12940 of the Government Code. (3) Conduct predictive behavior analysis on a worker. (4) Identify, profile, predict, or take adverse action against a worker for exercising their legal rights, including, but not limited to, rights guaranteed by state and federal employment and labor law.
Institute reasonable measures to prevent the companion chatbot from producing or sharing materials harmful to minors or encouraging the account holder to engage in any of the conduct described or depicted in materials harmful to minors.
3. An operator shall institute reasonable measures to prevent the operator's conversational AI service from doing any of the following for minor account holders: a. Producing visual depictions of sexually explicit material. b. Stating that the minor account holder should engage in sexually explicit conduct. c. Sexually objectifying the minor account holder.
It shall be unlawful for a person to design, develop, or make a chatbot available with the knowledge, or with reckless disregard for the possibility, that the chatbot encourages, promotes, or coerces a user to commit suicide, perform acts of self-injury, or perform acts of physical or sexual violence on humans or animals.
1. An employer shall not use an automated decision system to do any of the following: a. Prevent compliance with or violate any federal, state, or local labor, occupational health and safety, employment, or civil rights laws or regulations. b. Infer an employee's protected status under chapter 216. c. Identify, profile, predict, or take adverse action against an employee for exercising the employee's legal rights, including but not limited to rights guaranteed by state and federal employment and labor laws.
An operator shall adopt a protocol for the conversational AI service to respond to user prompts regarding suicidal ideation that includes but is not limited to making reasonable efforts to provide a response to users that refers them to crisis service providers such as a suicide hotline, crisis text line, or other appropriate crisis services.
(c) An employer shall not use or apply any automated decision-making system, directly or indirectly: (1) to make predictions about an employee's or employment candidate's behavior, beliefs, intentions, personality, emotional state, or other characteristics or behaviors; (2) to subtract from an employee's wages for time spent exercising the employee's legal rights; (3) in relation to performance evaluation, hiring, recruitment, discipline, promotion, termination, duties, assignment of work, access to work opportunities, productivity requirements, workplace health and safety, or other terms or conditions of employment for any persons classified as employees, candidates for employment, independent contractors, subcontractors, or interns; or (4) that involves facial recognition, gait recognition, or emotion recognition technologies.
(b) An operator that operates and deploys a companion artificial intelligence product for use by a minor user in this State shall not provide the features described in subsection (a) to the minor user.
An operator shall not operate or provide an artificial intelligence companion to a user unless the artificial intelligence companion contains a protocol to take reasonable efforts to detect and address suicidal ideation or expressions of self-harm by a user to the artificial intelligence companion. The protocol shall include, but shall not be limited to, detection of user expressions of suicidal ideation or self-harm and a notification to the user that refers them to crisis service providers, such as the 9-8-8 Suicide and Crisis Lifeline, a crisis text line, or other appropriate crisis services upon detection of the user's expressions of suicidal ideation or self-harm.
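A minimal sketch of the detect-and-refer protocol these provisions describe. The keyword screen below is only a stand-in for whatever evidence-based detection method an operator would actually deploy (see Maryland's provision below), and `generate_reply` is an assumed handle to the normal chatbot pipeline.

```python
import re

# Stand-in screen only; a production protocol would use an
# evidence-based classifier, not a keyword list.
CRISIS_PATTERNS = re.compile(
    r"\b(kill myself|suicide|end my life|hurt myself|self[- ]harm)\b",
    re.IGNORECASE,
)

CRISIS_REFERRAL = (
    "If you are thinking about harming yourself, help is available: "
    "call or text the 988 Suicide and Crisis Lifeline, or reach a "
    "local crisis text line or other crisis service."
)

def respond(user_message: str, generate_reply) -> str:
    """Route detected crisis expressions to a referral notification
    before any normal generation takes place."""
    if CRISIS_PATTERNS.search(user_message):
        return CRISIS_REFERRAL
    return generate_reply(user_message)
```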
(b) A school district is prohibited from purchasing or otherwise acquiring biometric systems, including facial recognition software, to use on students. (b-5) A school district may not do any of the following with respect to students: (1) Obtain, retain, possess, access, request, or use biometric systems or biometric information derived from biometric systems. (2) Enter into an agreement with a third party for the purpose of obtaining, retaining, possessing, accessing, or using, by or on behalf of the school district, biometric systems, including facial recognition software or biometric information derived from biometric systems.
A.(1) An employer shall not use an ADS to do any of the following: (a) Prevent compliance with or violate any federal, state, or local labor, occupational health and safety, employment, or civil rights laws or regulations. (b) Infer a worker's protected status as provided for in R.S. 23:332. (c) Identify, profile, predict, or take adverse action against a worker for exercising his legal rights, including but not limited to rights guaranteed by state and federal employment and labor law. (d) Make predictions or inferences about a worker's behavior, beliefs, intentions, personality, emotional state, health, or other characteristics or behavior that are unrelated to the worker's essential job functions. (2) In addition to the prohibitions provided for in Paragraph (1) of this Subsection, an employer shall not use an ADS that utilizes facial recognition, gait, or emotion recognition technologies.
An operator of a mental health chatbot shall have protocols in place to address possible suicidal ideation, self-harm, or physical harm to others expressed by the user, including referral to a crisis service provider such as a suicide hotline.
(b) Covered entities may not operate, install, or commission the operation or installation of equipment incorporating biometric recognition technology in any place, whether licensed or unlicensed, which is open to and accepts or solicits the patronage of the general public. (c) The legislature finds that the practices covered by this section are matters vitally affecting the public interest for the purpose of applying the Massachusetts Consumer Protection law, chapter 93a. A violation of this section is not reasonable in relation to the development and preservation of business and is an unfair or deceptive act in trade or commerce and an unfair method of competition for the purpose of applying the Massachusetts Consumer Protection law, chapter 93a.
(d) Notwithstanding the allowable purposes for electronic monitoring described in paragraph (a) of subdivision one of this section, an employer shall not: (i) use an electronic monitoring tool in such a manner that results in a violation of labor, employment, civil rights law or any other law of the commonwealth; (ii) use an electronic monitoring tool or data collected via an electronic monitoring tool in such a manner as to threaten the health, welfare, safety, or legal rights of employees or the general public; (iii) use an electronic monitoring tool to monitor employees who are off-duty and not performing work-related tasks; (iv) use an electronic monitoring tool in order to obtain information about an employee's health, including health status and health conditions, the race, color, religious creed, national origin, sex, gender identity, sexual orientation, genetic information, pregnancy or a condition related to said pregnancy including, but not limited to, lactation or the need to express breast milk for a nursing child, ancestry or status as a veteran or membership in any group protected from employment discrimination under chapter 151B or any other applicable law; (v) use an electronic monitoring tool in order to identify, punish, or obtain information about employees engaging in activity protected under labor or employment law; (vi) conduct audio or visual monitoring of bathrooms or other similarly private areas, including locker rooms, changing areas, breakrooms, smoking areas, employee cafeterias, lounges, areas designated to express breast milk, or areas designated for prayer or other religious activity, including data collection on the frequency of use of those private areas; (vii) conduct audio or visual monitoring of a workplace in an employee's residence, an employee's personal vehicle, or property owned or leased by an employee; (viii) use an electronic monitoring tool that incorporates facial recognition, unless such technology is necessary to protect the security of workers or the security of the employer's facilities; (ix) use an electronic monitoring tool that incorporates gait, voice analysis, or emotion recognition technology; (x) take adverse action against an employee based in whole or in part on their opposition or refusal to submit to a practice that the employee believes in good faith violates this article; (xi) take adverse employment action against an employee on the basis of data collected via continuous incremental time-tracking tools except in the case of egregious misconduct; or (xii) take adverse employment action against an employee based on any data collected via electronic monitoring if such data measures an employee's performance in relation to a performance standard that has not been previously, clearly, and unmistakably disclosed to such employee as well as to all other classes of employees to whom it applies in violation of subparagraph (vi) of paragraph (b) of subdivision one of this section, or if such data was collected without proper notice to employees or candidates pursuant to sections 19B, 52C, and 190(i) of chapter 149 and section 99 of chapter 272.
(a) Notwithstanding the provisions of subdivision one of this section, an employer shall not, alone or in conjunction with an electronic monitoring tool, use an automated decision tool: (i) in such a manner that results in a violation of labor, employment, or civil rights law or any other law of the commonwealth; (ii) in a manner that harms or is likely to harm the health or safety of employees, including by setting productivity quotas in a manner that is likely to cause physical or mental illness or injury; (iii) to make predictions about an employee or candidate for employment's behavior, beliefs, intentions, personality, emotional state, or other characteristic or behavior; (iv) to predict, interfere with, restrain, or coerce employees engaging in activity protected under labor and employment law; (v) to subtract from an employee's wages time spent exercising their legal rights; (vi) in a manner that deviates from the specification of the automated employment decision tool as implemented after the incorporation of any alterations made pursuant to the impact assessment required by subdivision one of this section; or (vii) that involves facial recognition, gait, or emotion recognition technologies.
(f) No commercial establishment shall use a person's or a customer's biometric identifier or biometric information to identify them.
(B) (1) An operator shall establish and maintain a protocol for preventing a companion chatbot from producing or presenting content concerning self–harm, suicidal ideation, or suicide to a user who expresses thoughts of self–harm or suicidal ideation to the companion chatbot. (2) The protocol required under paragraph (1) of this subsection shall include a notification to a user who expresses thoughts of self–harm or suicidal ideation that refers the user to a crisis service provider, including: (I) The Maryland Behavioral Health Crisis Response System; and (II) The National 9–8–8 Suicide and Crisis Lifeline. (3) An operator shall use evidence–based methods for detecting when a user is expressing thoughts of self–harm or suicidal ideation to a companion chatbot. (4) An operator shall publish the protocol required under paragraph (1) of this subsection on the operator's website.
(C) (1) An operator shall establish and maintain a protocol for preventing a companion chatbot from producing or presenting to a minor user content concerning sexually explicit conduct, including: (I) Visual depictions of sexually explicit conduct; and (II) Content suggesting that the minor user should engage in sexually explicit conduct. (2) An operator shall publish the protocol required under paragraph (1) of this subsection on the operator's website.
Sec. 4. (1) Except as otherwise provided in subsection (2), an employer shall not use an automated decisions tool to make an employment-related decision. (2) An employer may use an automated decisions tool to screen large volumes of job applications to do either of the following: (a) Identify candidates who meet a set hiring criteria. (b) Assess candidates based on job skills.
(5) An employer shall not use an electronic monitoring tool or automated decisions tool that is equipped with facial, gait, voice, or emotion recognition technology.
An operator shall not make a companion chatbot available to a covered minor unless the companion chatbot is not foreseeably capable of any of the following: (a) Encouraging the covered minor to engage in self-harm, suicidal ideation, violence, consumption of drugs or alcohol, or disordered eating. ... (c) Encouraging the covered minor to harm others or participate in illegal activity, including, but not limited to, the creation of covered minor sexual abuse materials. (d) Engaging in erotic or sexually explicit interactions with the covered minor.
(a) A proprietor of a companion chatbot must make a prudent and good faith effort consistent with industry standards and use existing technology, available resources, and known, established, or readily attainable techniques to prevent the companion chatbot from promoting, causing, or aiding self-harm, and determine whether a covered user is expressing thoughts of self-harm. Upon determining that a companion chatbot has promoted, caused, or aided self-harm, or that a covered user is expressing thoughts of self-harm, the proprietor must prohibit continued use of the companion chatbot for a period of at least 72 hours and prominently display contact information for a suicide crisis organization to the covered user. (b) If a proprietor of a companion chatbot fails to comply with this section, the proprietor is liable to users who inflict self-harm, in whole or in part, as a result of the proprietor's companion chatbot promoting, causing, or aiding the user to inflict self-harm. Irrespective of the proprietor's compliance with this subdivision, a proprietor is liable for general and special damages to covered users who inflict self-harm, in whole or in part, when the proprietor: (1) has actual knowledge that: (i) the companion chatbot is promoting, causing, or aiding self-harm; or (ii) a covered user is expressing thoughts of self-harm; (2) fails to prohibit continued use of the companion chatbot for a period of at least 72 hours; and (3) fails to prominently display to the user a means to contact a suicide crisis organization. A proprietor of a companion chatbot may not waive or disclaim liability under this subdivision.
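A sketch of the 72-hour interruption mechanism this provision contemplates, assuming an in-memory lockout table and a caller-supplied `detect_self_harm` classifier (both hypothetical).

```python
import time

LOCKOUT_SECONDS = 72 * 3600  # statutory minimum pause
CRISIS_CONTACT = "988 Suicide and Crisis Lifeline: call or text 988."
_lockouts: dict[str, float] = {}  # user_id -> lockout expiry (epoch seconds)

def handle_turn(user_id: str, message: str, detect_self_harm) -> str:
    now = time.time()
    if _lockouts.get(user_id, 0.0) > now:
        # Continued use stays blocked for the full window, with crisis
        # contact information prominently displayed. Note the companion
        # chatbot provision earlier in this section: the pause should not
        # be framed as a punishment or enforcement action.
        return CRISIS_CONTACT
    if detect_self_harm(message):
        _lockouts[user_id] = now + LOCKOUT_SECONDS
        return CRISIS_CONTACT
    return "(normal companion reply)"
```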
A person must ensure that any chatbot operated or distributed by the person does not make chatbots available to minors to use, interact with, purchase, or converse with.
A person operating artificial intelligence systems that primarily function as AI companions must ensure that any chatbots operated or distributed by the person are not available to minors to use, interact with, purchase, or converse with.
Subdivision 1. Prohibitions. (a) An employer is prohibited from using an automated decision system to: (1) prevent compliance with or cause a violation of any federal, state, or local law or regulation; (2) obtain or infer a worker's immigration status; veteran status; ancestral history; religious or political beliefs; health or reproductive status, history, or plan; emotional or psychological state; neural data; sexual or gender orientation; disability; criminal record; or credit history; (3) make predictions or inferences about a worker's behavior, beliefs, intentions, personality, emotional state, health, or other characteristics or behaviors that are unrelated to the worker's essential job functions; (4) identify, predict, or take adverse action against a worker for exercising the worker's legal rights; (5) draw on facial, gait, or emotion recognition technologies; or (6) collect data for a purpose that was not disclosed in the notice required by section 181.9922. (b) An employer must not use an automated decision system that uses individualized worker data as inputs or outputs to set compensation, unless the employer can demonstrate that: (1) the input data is directly related to the ability of the worker to complete the task, such as education, training, experience, or seniority; (2) the inputs used are clearly communicated to the worker such that the worker knows their compensation is a function of the identified attributes; and (3) the employer uses the automated decision system either: (i) not more than once per six-month period per worker; or (ii) only in conjunction with a meaningful change in work duties, such as hiring or promotion.
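The compensation clause in paragraph (b) reduces to three checkable conditions. A sketch follows, with every flag and date supplied by the caller; the function name and parameters are hypothetical.

```python
from datetime import date, timedelta

def may_run_compensation_ads(
    inputs_job_related: bool,   # condition (1): education, training, experience, seniority
    inputs_disclosed: bool,     # condition (2): worker knows what drives compensation
    last_run: date | None,      # condition (3)(i): frequency limit
    duties_changed: bool,       # condition (3)(ii): hiring, promotion, or similar change
) -> bool:
    if not (inputs_job_related and inputs_disclosed):
        return False
    if duties_changed:
        return True
    # Roughly six months; the statute states the period, not a day count.
    return last_run is None or date.today() - last_run >= timedelta(days=182)
```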
3. (1) It shall be unlawful to design, develop, or make available an artificial intelligence chatbot knowing or with reckless disregard for the fact that the artificial intelligence chatbot poses a risk of soliciting, encouraging, or inducing minors to: (a) Engage in, describe, or simulate sexually explicit conduct; or (b) Create or transmit any visual depiction of sexually explicit conduct, including any visual depiction described in section 573.010. (2) Any person who violates subdivision (1) of this subsection shall be fined not more than one hundred thousand dollars per offense.
4. (1) It shall be unlawful to design, develop, or make available an artificial intelligence chatbot knowing or with reckless disregard for the fact that the artificial intelligence chatbot encourages, promotes, or coerces suicide, nonsuicidal self-injury, or imminent physical or sexual violence. (2) Any person who violates subdivision (1) of this subsection shall be fined not more than one hundred thousand dollars per offense.
(1) Duty of loyalty in emergency situations. — A covered platform shall implement and maintain reasonably effective systems to detect, promptly respond to, report, and mitigate emergency situations in a manner that prioritizes the safety and well-being of users over the platform's other interests.
(3) An operator shall, for minor account holders, institute reasonable measures to prevent the conversational artificial intelligence service from: (a) Producing visual depictions of sexually explicit conduct; (b) Generating direct statements that the account holder should engage in sexually explicit conduct; or (c) Generating statements that sexually objectify the account holder.
a. It shall be an unlawful practice and a violation of P.L.1960, c.39 (C.56:8-1 et seq.) for a business entity to use any biometric surveillance system on a consumer at the physical premises of the business entity, except as provided in subsection c. of this section. b. A business entity may use a biometric surveillance system on a consumer at the physical premises of the business entity, if: (1) the business entity provides clear and conspicuous notice to the consumer regarding its use of a biometric surveillance system; and (2) the biometric surveillance system is used for a lawful purpose. The business entity may satisfy the notice requirement of paragraph (1) of this section by posting a sign in a conspicuous location at the perimeter of any area where a biometric surveillance system is being used.
8. New York residents and New York communities shall be free from unchecked surveillance; surveillance technologies shall be subject to heightened oversight, including at least pre-deployment assessment of their potential harms and scope limits to protect privacy and civil liberties. 9. Continuous surveillance and monitoring shall not be used in education, work, housing, or any other contexts where the use of such surveillance technologies is likely to limit rights, opportunities, or access.
§ 530. Prohibited artificial intelligence systems. 1. No person shall develop, in whole or in part, or operate an artificial intelligence system within the state where such a system performs any of the following, whether or not it is the system's main function: (a) the deployment of subliminal techniques that operate beyond an individual's conscious awareness, with the express purpose of materially distorting an individual's behavior in such a manner that leads to, or possesses a high likelihood of leading to, physical or psychological harm to that individual or another, or that leverages the vulnerabilities of a defined group of individuals to similar ends; (b) the infliction of physical or emotional harm upon individuals without any valid law enforcement or self-defense purpose or justification; (c) the prediction of an individual's future actions or behaviors, followed by subsequent reactions based on these predictions, carried out in such a way that, without legal justification, infringes upon or compromises the individual's liberty, emotional, psychological, or financial interests; (d) the unauthorized acquisition, retention, or dissemination of or access to sensitive personal information or non-public data in violation of applicable data privacy, security, and hacking laws; or (e) the implementation of any form of autonomous weapon system designed to inflict harm on persons, property, or the environment that lack meaningful human supervision or control. "Meaningful human supervision or control" shall mean the ability to actively manage, intervene, or override the autonomous weapon system's functions. 2. Where the secretary discovers the development or operation of a prohibited artificial intelligence system, the secretary may, in writing, demand that the person who is developing or operating such system cease development or operation of or access to such a system within a period of time as the secretary deems necessary to prevent the system from widespread use or, if the system is operational or accessible to persons for use, to ensure the system is properly terminated in such a way to minimize risks of harm to individuals, society, or the environment. A demand made pursuant to this section shall be finally and irrevocably binding on the person unless the person against whom the demand is made shall, within such period of time set by the secretary, after the giving of notice of such determination, petition the department for a hearing to determine the legal findings of the secretary. The person developing or operating such a prohibited system shall, prior to petition, cease development, operation, and access to the system until and unless such determination is favorable to the person. Such determination may be appealed by any party as of right. 3. The secretary shall not grant a license pursuant to this article to any high-risk advanced artificial intelligence system described under this section except as described in subdivision seven of this section. 4. Any member, officer, director or employee of an operator of any entity who knowingly publicly or privately operates any system described in this section shall be guilty of a class D felony and shall incur a civil penalty of the amount earned from the creation of the prohibited system or the amount of damages caused by the system, whichever is greater.
5. This section shall not be construed as imposing liability on any member, officer, director or employee who had no explicit or implicit knowledge of the prohibited high-risk advanced artificial intelligence system provided however that where the secretary sends a demand to cease the development, operation, or access to such system all members, officers, and directors shall be rebuttably presumed to have knowledge of the prohibited high-risk advanced artificial intelligence system. 6. This section shall be construed as prohibiting the development of a prohibited high-risk advanced artificial intelligence system or making such a system accessible to persons in the state of New York. 7. Notwithstanding subdivision one of this section, a person may develop a prohibited high-risk advanced artificial intelligence system where authorized by the secretary, provided that such system is developed and used only by the state or with substantial, continuous oversight by the state and such system is authorized only after public hearing and comment in accordance with section five hundred nine of this article.
The owner, licensee or operator of a generative artificial intelligence system shall conspicuously display a warning on the system's user interface that is reasonably calculated to consistently apprise the user that the outputs of the generative artificial intelligence system may be inaccurate and/or inappropriate.
It shall be unlawful for any operator to operate or provide an AI companion to a user unless such AI companion contains a protocol for addressing: 1. possible suicidal ideation or self-harm expressed by a user to the AI companion, 2. possible physical harm to others expressed by a user to the AI companion, and 3. possible financial harm to others expressed by the user to the AI companion, that includes but is not limited to, a notification to the user that refers them to crisis service providers such as a suicide hotline, crisis text line, or other appropriate crisis services.
No person, partnership, association or corporation shall develop, deploy, use, or sell an AI system which evaluates or classifies the trustworthiness of natural persons over a certain period of time based on their social behavior or known or predicted personal or personality characteristics, with the social score leading to any of the following: 1. differential treatment of certain natural persons or whole groups thereof in social contexts which are unrelated to the contexts in which the data was originally generated or collected; 2. differential treatment of certain natural persons or whole groups thereof that is unjustified or disproportionate to their social behavior or its gravity; or 3. the infringement of any right guaranteed under the United States constitution, the New York constitution, or state or federal law.
Any person, corporation, partnership, sole proprietor, limited partnership, association or any other business entity operating a companion chatbot in the state of New York shall include a clear and conspicuous warning that such companion chatbot can foster dependency and carries a psychological risk. Such warning shall be placed prominently on the website hosting such companion chatbot and be made available in any language in which the companion chatbot is set to communicate.
§ 1801. Prohibition. 1. Except as otherwise provided for in this article, it shall be unlawful for a chatbot operator to provide unsafe chatbot features to a covered user unless: (a) the covered user is not a covered minor; and (b) the chatbot operator has used methods that are permissible under article forty-five of this chapter and its implementing regulations and any additional regulations promulgated pursuant to this article to determine that the covered user is not a covered minor. 2. The provisions of subdivision one of this section shall not apply where the advanced chatbot is made available to covered users solely for the purpose of: (a) customer service, information about available commercial services or products provided by an entity, or account information; or (b) with respect to any system used by a partnership, corporation, or state or local government agency, for internal purposes or employee productivity. § 1800(5): "Unsafe chatbot features" shall mean one or more advanced chatbot design features that, at any point during a chatbot-user interaction: (b) generating outputs that contain endorsement or promotion of, or which facilitate suicide, self-harm, substantial physical harm to others, disordered eating, unlawful drug or alcohol use, or drug or alcohol abuse; ... (e) generating outputs that are, describe, or facilitate sexually explicit conduct or child sexual abuse material.
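A sketch of the gating logic in § 1801, assuming an upstream age-assurance result (`verified_not_minor`, meaning the operator determined by a permissible method that the user is not a covered minor) and a declared deployment purpose; the names and purpose labels are hypothetical.

```python
# Illustrative labels for the subdivision 2 carve-outs.
EXEMPT_PURPOSES = {"customer_service", "account_info", "internal_productivity"}

def may_provide_unsafe_features(verified_not_minor: bool, purpose: str) -> bool:
    """Unsafe chatbot features are lawful only where a carve-out applies
    or the user has been determined, by a permissible age-assurance
    method, not to be a covered minor."""
    if purpose in EXEMPT_PURPOSES:
        return True
    return verified_not_minor
```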
2. The owner, licensee or operator of a generative artificial intelligence system shall clearly and conspicuously display a notice on the system's user interface that the outputs of the generative artificial intelligence system may be inaccurate. 3. Where such owner, licensee or operator of a generative artificial intelligence system fails to provide the notice required in subdivision two of this section, such owner, licensee or operator shall be assessed a civil penalty up to one thousand dollars for each violation. Each user the owner, licensee or operator fails to provide a notice to shall constitute a separate violation for each instance.
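Because each un-noticed user constitutes a separate violation, exposure scales linearly with the user base. A hypothetical calculation:

```python
# Hypothetical figures: 25,000 users who never saw the required notice,
# at the statutory maximum of $1,000 per violation.
users_without_notice = 25_000
max_penalty = 1_000 * users_without_notice
print(f"${max_penalty:,}")  # -> $25,000,000 maximum exposure
```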
(a) Certain protocols required.--It shall be unlawful for an operator to provide an AI companion to a user unless the AI companion contains protocols that: (1) identify suicidal ideation or expressions of self-harm; (2) decline to assist a user with a suicide attempt, methods or improvement of methods; and (3) refer the user to a crisis center if suicidal ideation or expressions of self-harm are recognized. (b) Referral to crisis center.--The referral required under subsection (a)(3) shall include: (1) crisis service contact information, including the 988 Suicide and Crisis Lifeline, or a subsequent iteration; (2) the closest behavioral health crisis centers to the user; or (3) other appropriate crisis services.
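A sketch of assembling the subsection (b) referral, where `find_nearest_centers` stands in for an assumed behavioral-health-crisis-center lookup service; the parameter names are illustrative.

```python
def build_referral(user_zip: str | None, find_nearest_centers) -> list[str]:
    """Combine the 988 Lifeline with, where location is available,
    the closest behavioral health crisis centers to the user."""
    referral = ["988 Suicide and Crisis Lifeline (call or text 988)"]
    if user_zip is not None:
        # Assumed lookup: returns human-readable center listings.
        referral.extend(find_nearest_centers(user_zip, limit=2))
    return referral
```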
An operator shall: (1) Publish details on the protocol on the operator's Internet website.
(a) Policy required.-- (1) Subject to paragraph (2), a supplier of a chatbot shall develop, implement and maintain a written policy containing disclosures regarding the chatbot in accordance with subsection (c). (2) In complying with paragraph (1), a supplier shall protect any trade secret or other proprietary information regarding the chatbot. (b) Consent required.-- (1) Before accessing the features of a chatbot or entering the chat page of a chatbot, a consumer must acknowledge that the consumer has read, understands and consents to the policy described under subsection (a) and the purpose, capabilities and limitations of the chatbot. (2) The consent under this subsection must be in writing and may involve the consumer initialing or signing the acknowledgment described in paragraph (1), checking a box, providing an electronic signature or hitting a button. (c) Specific disclosures.--The policy described under subsection (a) must clearly and conspicuously provide the following: (1) The intended purposes of the chatbot. (2) The abilities and limitations of the chatbot.
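A sketch of the acknowledge-before-access flow in subsections (a) through (c), assuming a minimal stored consent record; the type and field names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class ConsentRecord:
    user_id: str
    policy_version: str        # ties consent to the written policy shown
    acknowledged: bool = False

def enter_chat(record: ConsentRecord, affirmed: bool) -> bool:
    """The consumer must affirmatively acknowledge the policy (initials,
    signature, checkbox, or button press) before the chat page opens."""
    if affirmed:
        record.acknowledged = True
    return record.acknowledged
```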
(1) An operator shall maintain and implement a protocol, to the extent technologically feasible, to prevent an AI companion on its platform from producing suicidal ideation, suicide or self-harm content to a user, or content that directly encourages the user to commit acts of violence. The protocol shall include providing a notification to the user referring the user to crisis service providers, including a suicide hotline or crisis text line, if the user expresses suicidal ideation, suicide or self-harm. (2) The operator shall publish details of the protocol required under paragraph (1) on its publicly accessible Internet website.
For a user that the operator knows, or should have known, is a minor, the operator shall: (3) Institute reasonable measures to prevent its AI companion from producing visual material of sexually explicit conduct or directly instructing the minor to engage in sexually explicit conduct.
If a service is offered to users that an operator knows are minors, an operator shall disclose to users of its AI companion platform, on the application, browser or any other format through which the platform is accessed, that AI companions may not be suitable for some minors.
It shall be unlawful for any operator to operate or provide an AI companion to a user unless such AI companion contains a protocol for addressing: (1) Possible suicidal ideation or self-harm expressed by a user to the AI companion; (2) Possible physical harm to others expressed by a user to the AI companion; and (3) Possible financial harm to others expressed by the user to the AI companion that includes, but is not limited to, a notification to the user that refers them to crisis service providers such as a suicide hotline, crisis text line, or other appropriate crisis services.
(B) A covered entity shall implement reasonable systems and processes to: (1) identify when a user is developing emotional dependence on the chatbot and take reasonable steps to reduce that dependence and associated risks of harm;
A. No operator shall make a companion chatbot available to a minor if the companion chatbot is capable of any of the following: 1. Encouraging or manipulating the minor user to engage in self-harm, suicidal ideation, violence, consumption of drugs or alcohol, or disordered eating; 2. Offering mental health therapy to the minor user without the direct supervision of a licensed professional or discouraging the minor user from seeking help from a licensed professional or appropriate adult; 3. Encouraging or manipulating the minor user to harm others or participate in an illegal activity, including the creation of child sexual abuse materials; 4. Engaging in erotic or sexually explicit interactions with the minor user or engaging in activities designed to lure minor users into such interactions; 5. Encouraging or manipulating the minor user to maintain secrecy about interactions or to self-isolate; 6. Prioritizing mirroring the minor's language or validating the minor user over the minor user's safety; or 7. Optimizing engagement so that it supersedes the companion chatbot's safety guardrails.
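One way to operationalize a list like this is a release gate over classifier labels, as in the sketch below; the label names are illustrative, and the classifier that assigns them is assumed rather than specified.

```python
# Illustrative labels for the seven prohibited capabilities above.
PROHIBITED_FOR_MINORS = {
    "self_harm_encouragement", "unsupervised_therapy",
    "harm_or_illegal_activity", "sexual_content",
    "secrecy_or_isolation", "validation_over_safety",
    "engagement_over_guardrails",
}

def release_to_minor(output_labels: set[str]) -> bool:
    """Block any candidate reply that a content classifier has tagged
    with one of the prohibited categories."""
    return not (output_labels & PROHIBITED_FOR_MINORS)
```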
(h) Prohibitions on facial, gait, voice, and emotion recognition technology. Electronic monitoring and automated decision systems shall not incorporate any form of facial, gait, voice, or emotion recognition technology.
(1) An operator shall prevent a companion chatbot on its companion chatbot platform from engaging with a user unless the operator implements and maintains a protocol for preventing the companion chatbot from: (A) producing suicidal ideation, suicide, or self-harm content to the user; and (B) ignoring a user that is expressing thoughts of suicidal ideation, suicide, or self-harm. (2) The protocol required in subdivision (1) of this subsection shall: (A) at minimum, provide a notification to the user that refers the user to crisis service providers if the user expresses suicidal ideation, suicide, or self-harm; (B) be developed using commercially reasonable and technically feasible methods; and (C) be published on the operator's website.
(1) If the operator knows that the user of an AI companion chatbot is a minor, or if the AI companion chatbot is directed to minors, the operator shall: ... (b) Implement reasonable measures to prevent its AI companion chatbot from generating or producing sexually explicit content or suggestive dialogue with minors;
(1) An operator may not make available or deploy an AI companion chatbot unless it maintains and implements a protocol for detecting and addressing suicidal ideation or expressions of harm by users. (2) The protocol must: (a) Include reasonable methods for identifying expressions of suicidal ideation or self-harm, including eating disorders; (b) Provide automated or human-mediated responses that refer users to appropriate crisis resources, including a suicide hotline or crisis text line; and (c) Implement reasonable measures to prevent the generation of content encouraging or describing how to commit self-harm. (3) The operator shall publicly disclose on their website or websites, and within any mobile or web-based application through which the AI companion is made available, the details of the protocols required by this section, including safeguards used to detect and respond to expressions of suicidal ideation or harm and the number of crisis referral notifications issued to users in the preceding calendar year.
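Subsection (3)'s reporting duty implies keeping a running tally of referral notifications. A minimal sketch, assuming an in-process counter:

```python
from collections import Counter
from datetime import datetime, timezone

referrals_by_year: Counter = Counter()  # calendar year -> notification count

def record_referral() -> None:
    """Increment the annual tally each time a crisis referral is shown."""
    referrals_by_year[datetime.now(timezone.utc).year] += 1

def preceding_year_count() -> int:
    """The figure disclosed publicly for the preceding calendar year."""
    return referrals_by_year[datetime.now(timezone.utc).year - 1]
```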
(1) An operator shall prevent a companion chatbot on its companion chatbot platform from engaging with users unless the operator maintains a protocol for preventing the production of suicidal ideation, suicide, or self-harm content to the user, including, but not limited to, by providing a notification to the user that refers the user to crisis service providers, including a suicide hotline or crisis text line, if the user expresses suicidal ideation, suicide, or self-harm. (2) The operator shall publish details on the protocol required by this subdivision on the operator's internet website.
An operator shall, for a user that the operator knows is a minor, do all of the following: ... (3) Institute reasonable measures to prevent its companion chatbot from producing visual material of sexually explicit conduct or directly stating that the minor should engage in sexually explicit conduct.
An operator shall disclose to a user of its companion chatbot platform, on the application, the browser, or any other format that a user can use to access the companion chatbot platform, that companion chatbots may not be suitable for some minors.