S-02
Safety & Prohibited Conduct
Prohibited Conduct & Output Restrictions
Applies to: Developer, Deployer, Government Sector, Chatbot, Minors, General Consumer App, Government System
Bills — Enacted: 1 unique bill
Bills — Proposed: 52
Last Updated: 2026-03-29
Core Obligation

Certain AI applications are categorically prohibited regardless of any compliance program: social scoring, biometric surveillance, subconscious manipulation, and the generation of CSAM or NCII. Other output categories, such as self-harm content, crisis response, and content accessible to minors, must be restricted or managed through active protocols calibrated to deployment context and user population. The specific prohibitions and restrictions vary by jurisdiction, but the core principle is constant: some applications are harmful enough to warrant a categorical ban, while others require context-sensitive management.

Sub-Obligations (8)
ID
Name & Description
Enacted
Proposed
S-02.1
Social scoring prohibition: AI systems used by or on behalf of governments or employers to assign aggregate scores to individuals based on behavior, social relationships, or perceived trustworthiness, where those scores affect access to opportunities or services, are prohibited.
0 enacted
3 proposed
S-02.2
Real-time biometric surveillance restriction: AI-enabled real-time identification of individuals in publicly accessible spaces using biometric data is prohibited or requires express regulatory authorization. Narrow exceptions exist for defined law enforcement purposes subject to judicial authorization.
0 enacted
5 proposed
S-02.4
CSAM output prohibition: AI systems may not generate child sexual abuse material under any circumstances. This prohibition applies universally regardless of deployment context.
0 enacted
2 proposed
S-02.5
AI-generated NCII prohibition: Developers and operators of AI image and video generation tools may not knowingly generate, distribute, or facilitate distribution of non-consensual intimate imagery of real, identifiable individuals.
0 enacted
0 proposed
S-02.6
Sexually explicit content restriction for minors: AI systems accessible to users known to be minors must implement reasonable measures to prevent production of visual material of sexually explicit conduct or direct solicitation of minors to engage in sexually explicit conduct.
1 enacted
17 proposed
S-02.7
Self-harm and suicidal ideation content restriction: AI systems must restrict outputs that produce, promote, or facilitate suicidal ideation, suicide, or self-harm content.
1 enacted
22 proposed
S-02.9
Crisis protocol publication: Operators must publicly post the details of their crisis response protocol on their website. This is a standalone disclosure obligation separate from maintaining the protocol itself.
1 enacted
6 proposed
S-02.10
Product safety warning: Operators must disclose known safety risks or suitability limitations of their AI product to users at or before the point of access, whether on the application, in a browser, or in any other access format. The warning must not be buried in the terms of service.
1 enacted
3 proposed
Bills That Map This Requirement (53 bills)
Bill
Status
Sub-Obligations
Section
Pending 2027-10-01
S-02.6
A.R.S. § 18-802(C)
Plain Language
Operators must institute reasonable measures to prevent their conversational AI service from: (1) producing visual material of sexual conduct for minor account holders, (2) generating direct statements that a minor should engage in sexual conduct, and (3) generating statements that sexually objectify a minor account holder. The standard is 'reasonable measures' — not absolute prevention — but the obligation covers three distinct categories of harmful sexually explicit content directed at minors.
C. Each operator shall institute reasonable measures to prevent the conversational AI service from doing any of the following for minor account holders: 1. Producing visual material of sexual conduct. 2. Generating direct statements that the account holder should engage in sexual conduct. 3. Generating statements that sexually objectify the account holder.
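The three prohibited output categories above map naturally onto a pre-release moderation gate for accounts known to belong to minors. The following is a minimal sketch of one way an operator might structure such a check; the category labels, classifier stub, and function names are hypothetical illustrations, not terms drawn from the statute.

```python
# Illustrative sketch only: one way an operator might gate outputs for minor
# account holders. The category labels and the classifier stub are hypothetical
# placeholders for the operator's own moderation tooling, not statutory terms.
PROHIBITED_FOR_MINORS = (
    "visual_sexual_conduct",     # producing visual material of sexual conduct
    "solicited_sexual_conduct",  # direct statements that the minor should engage in it
    "sexual_objectification",    # statements sexually objectifying the account holder
)

def classify(candidate_output: str, category: str) -> bool:
    """Placeholder for an operator-supplied content classifier."""
    return False

def release_output(candidate_output: str, account_is_minor: bool) -> str | None:
    """Return the output if permitted, or None if it must be suppressed."""
    if account_is_minor:
        for category in PROHIBITED_FOR_MINORS:
            if classify(candidate_output, category):
                return None  # suppress, log, and regenerate or show a refusal
    return candidate_output
```

Because the statutory standard is reasonable measures rather than absolute prevention, the documented existence and tuning of a gate like this, not a guarantee of zero failures, is what the operator would need to demonstrate.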
Pending 2027-01-01
S-02.7
Bus. & Prof. Code § 22587.2(b)
Plain Language
If a user reaffirms, escalates, or repeats a credible crisis expression after the chatbot has already delivered the initial graduated response (acknowledgment, encouragement to seek help, 988 contact info, and pause warning), the chatbot must initiate a mandatory 20-minute crisis interruption pause. During the pause, the chatbot stops generating conversational responses entirely and instead displays a specific three-part message explaining the pause's purpose and encouraging the user to contact a crisis counselor. The chatbot must also prominently display 988 Suicide and Crisis Lifeline contact options, with immediate access links if technically feasible. This is a novel 'forced cooling off' mechanism — distinct from simply restricting harmful output — designed to break rumination cycles and redirect users to human crisis support.
(b) Notwithstanding any law, if a companion chatbot detects that a user is reaffirming or escalating the credible crisis expression or detects a subsequent credible crisis expression after the companion chatbot has complied with subdivision (a), the companion chatbot shall initiate a crisis interruption pause of 20 minutes.
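Operationally, the graduated response and the 20-minute crisis interruption pause can be modeled as a small state machine layered in front of response generation. The sketch below is illustrative only: the detection callable, message wording, and overall structure are assumptions, while the 20-minute duration, the display of 988 contact options, and the non-punitive framing of the pause come from the bill.

```python
# Illustrative sketch of a crisis interruption pause; detection and generation
# are operator-supplied callables, and the message text is hypothetical.
import time

PAUSE_SECONDS = 20 * 60  # 20-minute crisis interruption pause

class CrisisPauseState:
    def __init__(self):
        self.initial_response_sent = False
        self.pause_until = 0.0

    def in_pause(self) -> bool:
        return time.monotonic() < self.pause_until

def handle_turn(state: CrisisPauseState, user_message: str, detect_crisis, generate_reply) -> str:
    """detect_crisis and generate_reply stand in for the operator's own systems."""
    if state.in_pause():
        # No conversational responses during the pause; keep crisis resources visible.
        return pause_message()
    if detect_crisis(user_message):
        if state.initial_response_sent:
            # Reaffirmed or escalated credible crisis expression: start the pause.
            state.pause_until = time.monotonic() + PAUSE_SECONDS
            return pause_message()
        state.initial_response_sent = True
        return graduated_response()
    return generate_reply(user_message)

def graduated_response() -> str:
    return ("I hear you, and I'm concerned about you. You can call or text 988 "
            "to reach the Suicide and Crisis Lifeline. If this continues, I'll "
            "pause our conversation so you can get support from a counselor.")

def pause_message() -> str:
    return ("This conversation is paused for 20 minutes so you have space to "
            "connect with a crisis counselor. This is not a penalty. Call or "
            "text 988 to reach the Suicide and Crisis Lifeline now.")
```

Note that the pause message deliberately avoids punitive framing, anticipating the separate prohibition in § 22587.2(c)(1) covered in the next entry.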
Pending 2027-01-01
Bus. & Prof. Code § 22587.2(c)(1)-(2)
Plain Language
Companion chatbots are subject to two specific prohibitions during crisis interactions: (1) they may not characterize a crisis interruption pause as a punishment, violation, or enforcement action — the pause must be framed as a supportive intervention, not a disciplinary measure; and (2) they may not diagnose, label, or assess risk levels of the user at any point. The second prohibition effectively prevents the chatbot from playing a clinical role during a crisis, consistent with the legislative finding that companion chatbots are not substitutes for human crisis intervention.
(c) Notwithstanding any law, a companion chatbot shall not do either of the following: (1) Describe a crisis interruption pause as a punishment, violation, or enforcement action. (2) Diagnose, label, or assess risk levels of a user.
Pending 2027-01-01
S-02.7
Bus. & Prof. Code § 22587.2(d)
Plain Language
This provision makes operators directly responsible for ensuring that every companion chatbot they make available in California complies with all crisis response requirements in § 22587.2 — including the graduated response, crisis interruption pause, and prohibitions on punitive framing and clinical assessment. Liability flows to the operator even if the chatbot's behavior is determined by a third-party model or developer. This is a compliance pass-through that makes the operator the accountable party for all substantive obligations in this section.
(d) An operator shall ensure that any companion chatbot it makes available in this state is compliant with this section.
Pending 2027-07-01
S-02.7, S-02.4, S-02.6
Bus. & Prof. Code § 22612(d)(5)(A)-(J)
Plain Language
Operators must implement measures preventing the companion chatbot from engaging in ten categories of prohibited conduct with child users: encouraging self-harm, suicidal ideation, substance use, disordered eating, or causing covered harm to others; attempting unauthorized medical diagnosis or treatment (with a narrow carve-out for FDA-regulated medical devices that also comply with HIPAA); engaging in or depicting obscene or child sexual abuse material including sexual deepfakes; discouraging children from sharing concerns with professionals or adults; discouraging breaks or encouraging frequent return; claiming sentience or humanity; soliciting purchases framed as relationship maintenance; facilitating in-chat advertising; and producing excessively sycophantic responses. The sycophancy prohibition targets engagement-optimizing validation that impairs a child's autonomy or decision-making.
(5) Measures that prevent the companion chatbot from doing any of the following: (A) Encouraging the child to do either of the following: (i) Engage in self-harm, suicidal ideation, consumption of narcotics or alcohol, or disordered eating. (ii) Cause a covered harm to others. (B) Attempting to diagnose or treat the child user's physical, mental, or behavioral health, unless the companion chatbot is designed for those purposes and is regulated by the United States Food and Drug Administration as a medical device under the federal Food, Drug, and Cosmetic Act (21 U.S.C. Sec. 301 et seq.) and the federal Health Insurance Portability and Accountability Act of 1996 (HIPAA) (Public Law 104-191). (C) Engaging in obscene matter or sexual abuse material with a user. (D) Depicting the child or another individual engaging in obscene matter or sexual abuse material, including a sexual deepfake. (E) Discouraging the child from sharing health or safety concerns with a qualified professional or appropriate adult. (F) Discouraging the child from taking breaks or suggesting the child needs to return frequently. (G) Claiming that the companion chatbot is sentient, conscious, or human. (H) Soliciting gift giving, in-app purchases, or other expenditures framed as necessary to maintain the relationship with the companion chatbot. (I) Facilitating product advertising during chat conversation. (J) Producing responses that are excessively sycophantic.
Failed 2026-01-01
Lab. Code § 1524(a)(1)-(3)
Plain Language
Employers are prohibited from using an ADS in three specific ways: (1) to prevent compliance with or violate any labor, employment, health and safety, or civil rights law; (2) to infer a worker's protected class status under FEHA (race, sex, disability, etc.); or (3) to identify, profile, predict, or retaliate against workers for exercising legal rights. The prohibition on inferring protected status is particularly notable — it bans the use of ADS to derive protected characteristics even if those characteristics are not directly used in a decision. The anti-retaliation prohibition here applies to ADS-facilitated profiling and prediction of workers who exercise rights, complementing the broader anti-retaliation provision in § 1530.
(a) An employer shall not use an ADS to do any of the following: (1) Prevent compliance with or violate any federal, state, or local labor, occupational health and safety, employment, or civil rights laws or regulations. (2) Infer a worker's protected status under Section 12940 of the Government Code. (3) Identify, profile, predict, or take adverse action against a worker for exercising their legal rights, including, but not limited to, rights guaranteed by state and federal employment and labor law.
Pending 2027-01-01
Lab. Code § 1522(a)(1)-(4)
Plain Language
Employers are categorically prohibited from using automated decision systems for four purposes: (1) to prevent compliance with or violate any employment, labor, safety, or civil rights law; (2) to infer a worker's protected class status under FEHA; (3) to conduct predictive behavior analysis on a worker — which encompasses systems that predict, infer, or modify a worker's behavior, beliefs, intentions, personality, or emotional state; and (4) to identify, profile, predict, or take adverse action against a worker for exercising their legal rights. These are absolute prohibitions with no safe harbor or exception.
(a) An employer shall not use an ADS to do any of the following:
(1) Prevent compliance with or violate any federal, state, or local labor, occupational health and safety, employment, or civil rights laws or regulations.
(2) Infer a worker's protected status under Section 12940 of the Government Code.
(3) Conduct predictive behavior analysis on a worker.
(4) Identify, profile, predict, or take adverse action against a worker for exercising their legal rights, including, but not limited to, rights guaranteed by state and federal employment and labor law.
Failed 2026-07-01
S-02.6
Fla. Stat. § 501.9984(2)(c)
Plain Language
Companion chatbot platforms must implement reasonable measures to prevent their chatbots from producing or sharing material harmful to minors, and from encouraging minor account holders to engage in conduct described or depicted in such material, when interacting with minor accounts. This is an affirmative, ongoing obligation to institute technical and operational safeguards. A platform may demonstrate compliance by showing controls aligned with NIST AI RMF or ISO 42001, including structured interaction logs, parental access controls, harm-signal detection procedures, and verified deletion events, per the safe harbor provision in § 501.9984(4)(a)(2).
Institute reasonable measures to prevent the companion chatbot from producing or sharing materials harmful to minors or encouraging the account holder to engage in any of the conduct described or depicted in materials harmful to minors.
Failed 2026-07-01
S-02.6
Fla. Stat. § 501.9984(2)(c)
Plain Language
Companion chatbot platforms must implement reasonable measures to prevent their chatbots from generating or sharing material harmful to minors and from encouraging minor users to engage in conduct depicted in such material. The standard is 'reasonable measures,' providing some flexibility. During enforcement, a platform may present evidence that its controls align with the NIST AI Risk Management Framework and ISO 42001, including structured interaction logs, parental access controls, harm-signal detection procedures, and verified deletion events, as mitigating factors under the 45-day cure process.
Institute reasonable measures to prevent the companion chatbot from producing or sharing materials harmful to minors or encouraging the account holder to engage in any of the conduct described or depicted in materials harmful to minors.
Pending 2027-07-01
S-02.6
§ 554J.2(3)
Plain Language
Operators must implement reasonable measures to prevent their conversational AI service from (a) producing visual depictions of sexually explicit material for minor account holders, (b) directing or encouraging minor account holders to engage in sexually explicit conduct, and (c) sexually objectifying minor account holders. 'Sexually explicit conduct' and 'visual depiction' incorporate the federal definitions from 18 U.S.C. § 2256. This is a reasonable-measures standard, not an absolute prohibition — but operators must demonstrate affirmative steps to prevent these outputs.
3. An operator shall institute reasonable measures to prevent the operator's conversational AI service from doing any of the following for minor account holders: a. Producing visual depictions of sexually explicit material. b. Stating that the minor account holder should engage in sexually explicit conduct. c. Sexually objectifying the minor account holder.
Pending 2025-07-01
S-02.7
§ 554J.2(1)
Plain Language
Any person who designs, develops, or makes a chatbot available is prohibited from doing so if they know — or recklessly disregard the possibility — that the chatbot encourages, promotes, or coerces users to commit suicide, perform self-injury, or perform acts of physical or sexual violence against humans or animals. The mens rea threshold is knowledge or reckless disregard, not strict liability. This covers the full lifecycle: design, development, and deployment. The scope of prohibited conduct extends beyond self-harm to include encouragement of violence against others and animals, which is broader than typical self-harm-only provisions.
It shall be unlawful for a person to design, develop, or make a chatbot available with the knowledge, or with reckless disregard for the possibility, that the chatbot encourages, promotes, or coerces a user to commit suicide, perform acts of self-injury, or perform acts of physical or sexual violence on humans or animals.
Pending 2026-07-01
Iowa Code § 91F.3(1)(a)-(d)
Plain Language
Employers are prohibited from using an ADS to: (a) violate or prevent compliance with any labor, employment, health/safety, or civil rights law; (b) infer an employee's protected class status under Iowa's civil rights chapter; (c) identify, profile, predict, or take adverse action against employees for exercising their legal rights (including labor and employment rights); or (d) collect employee data for undisclosed purposes. These are categorical prohibitions — there is no compliance program or safe harbor that permits these uses.
1. An employer shall not use an automated decision system to do any of the following: a. Prevent compliance with or violate any federal, state, or local labor, occupational health and safety, employment, or civil rights laws or regulations. b. Infer an employee's protected status under chapter 216. c. Identify, profile, predict, or take adverse action against an employee for exercising the employee's legal rights, including but not limited to rights guaranteed by state and federal employment and labor laws. d. Collect employee data for a purpose that is not disclosed pursuant to the notice requirements in section 91F.2.
Pending 2025-07-01
S-02.7
§ 554J.2(1)
Plain Language
No person may design, develop, or make available a chatbot if they know — or recklessly disregard the possibility — that the chatbot encourages, promotes, or coerces users to commit suicide, self-injury, or acts of physical or sexual violence against humans or animals. The mental state threshold is knowledge or reckless disregard, not strict liability. This prohibition covers the full lifecycle: design, development, and distribution. Note that the violence prohibition extends beyond self-harm to include violence against others and animals, which is broader than most comparable chatbot safety statutes.
It shall be unlawful for a person to design, develop, or make a chatbot available with the knowledge, or with reckless disregard for the possibility, that the chatbot encourages, promotes, or coerces a user to commit suicide, perform acts of self-injury, or perform acts of physical or sexual violence on humans or animals.
Passed 2027-07-01
S-02.6
Idaho Code § 48-2104(3)
Plain Language
Operators must implement reasonable measures to prevent the conversational AI service from producing three categories of content for minor account holders: (a) visual depictions of sexually explicit conduct, (b) direct statements urging the minor to engage in sexually explicit conduct, and (c) statements that sexually objectify the minor. 'Sexually explicit conduct' and 'visual depiction' have the same meanings as in 18 U.S.C. § 2256. The standard is 'reasonable measures' — not absolute prevention — providing a proportionality safe harbor.
For minor account holders, an operator shall institute reasonable measures to prevent the conversational AI service from: (a) Producing visual material of sexually explicit conduct; (b) Generating direct statements that the account holder should engage in sexually explicit conduct; or (c) Generating statements that sexually objectify the account holder.
Pending 2027-01-01
S-02.7
Section 10
Plain Language
Operators may not operate or provide an AI companion at all unless it has an active protocol that takes reasonable efforts to detect and address user expressions of suicidal ideation or self-harm. At minimum, the protocol must detect these expressions and then refer the user to crisis service providers such as the 988 Suicide and Crisis Lifeline, a crisis text line, or other appropriate crisis services. This is a continuous operating prerequisite — the protocol must remain in place as a condition of operation, not merely documented at launch. The statute uses a reasonableness standard ('reasonable efforts') rather than requiring perfect detection.
An operator shall not operate or provide an artificial intelligence companion to a user unless the artificial intelligence companion contains a protocol to take reasonable efforts to detect and address suicidal ideation or expressions of self-harm by a user to the artificial intelligence companion. The protocol shall include, but shall not be limited to, detection of user expressions of suicidal ideation or self-harm and a notification to the user that refers them to crisis service providers, such as the 9-8-8 Suicide and Crisis Lifeline, a crisis text line, or other appropriate crisis services upon detection of the user's expressions of suicidal ideation or self-harm.
Pending 2026-01-01
S-02.2
105 ILCS 5/10-20.40(b), (b-5)
Plain Language
School districts are categorically prohibited from purchasing, acquiring, or using biometric systems — including facial recognition software — on students. The prohibition extends beyond direct acquisition to cover third-party agreements: school districts may not contract with any third party to obtain, retain, possess, access, or use biometric systems or biometric information derived from such systems on behalf of the district. This is a comprehensive ban on student biometric surveillance in schools, closing the loophole of outsourcing biometric collection to vendors.
(b) A school district is prohibited from purchasing or otherwise acquiring biometric systems, including facial recognition software, to use on students. (b-5) A school district may not do any of the following with respect to students: (1) Obtain, retain, possess, access, request, or use biometric systems or biometric information derived from biometric systems. (2) Enter into an agreement with a third party for the purpose of obtaining, retaining, possessing, accessing, or using, by or on behalf of the school district, biometric systems, including facial recognition software or biometric information derived from biometric systems.
Pending 2026-01-01
S-02.2
105 ILCS 5/34-18.34(b), (b-5)
Plain Language
This is the parallel provision for the Chicago Public Schools district (Section 34 of the School Code). It imposes the same categorical prohibition on purchasing, acquiring, or using biometric systems on students, and the same ban on third-party agreements for biometric system access. The obligations and scope are identical to those in Section 10-20.40 but apply specifically to the Chicago school district.
(b) The school district is prohibited from purchasing or otherwise acquiring biometric systems, including facial recognition software, to use on students. (b-5) The school district may not do any of the following with respect to students: (1) Obtain, retain, possess, access, request, or use biometric systems or biometric information derived from biometric systems. (2) Enter into an agreement with a third party for the purpose of obtaining, retaining, possessing, accessing, or using, by or on behalf of the school district, biometric systems, including facial recognition software or biometric information derived from biometric systems.
Pending 2026-08-01
R.S. 23:973(A)(1)(a)-(d), (A)(2)
Plain Language
Employers are categorically prohibited from using an ADS to: (1) violate labor, employment, safety, or civil rights laws; (2) infer a worker's protected class status; (3) identify, profile, predict, or take adverse action against workers for exercising legal rights; or (4) make predictions about worker behavior, beliefs, personality, emotional state, health, or other characteristics unrelated to essential job functions. Additionally, employers may not use any ADS that employs facial recognition, gait recognition, or emotion recognition technologies — this is a flat ban regardless of purpose. The prohibition on inferring protected status references R.S. 23:332, Louisiana's employment discrimination statute.
A.(1) An employer shall not use an ADS to do any of the following: (a) Prevent compliance with or violate any federal, state, or local labor, occupational health and safety, employment, or civil rights laws or regulations. (b) Infer a worker's protected status as provided for in R.S. 23:332. (c) Identify, profile, predict, or take adverse action against a worker for exercising his legal rights, including but not limited to rights guaranteed by state and federal employment and labor law. (d) Make predictions or inferences about a worker's behavior, beliefs, intentions, personality, emotional state, health, or other characteristics or behavior that are unrelated to the worker's essential job functions. (2) In addition to the prohibitions provided for in Paragraph (1) of this Subsection, an employer shall not use an ADS that utilizes facial recognition, gait, or emotion recognition technologies.
Pending 2025-01-17
S-02.2
Ch. 110I, § 4(b)
Plain Language
Covered entities may not operate, install, or commission biometric recognition technology equipment in any place open to the general public, whether licensed or unlicensed. This is a blanket prohibition on public-facing biometric surveillance, covering retail stores, restaurants, stadiums, transit hubs, and any other venue that accepts or solicits public patronage. Unlike many jurisdictions that limit only real-time facial recognition, this ban covers all biometric recognition technology, including fingerprint scanners, voice recognition, and gait analysis, in any public-facing physical space. The prohibition itself contains no law enforcement carve-out; law enforcement agencies are instead outside the covered entity definition entirely.
(b) Covered entities may not operate, install, or commission the operation or installation of equipment incorporating biometric recognition technology in any place, whether licensed or unlicensed, which is open to and accepts or solicits the patronage of the general public.
Pre-filed 2025-01-16
S-02.2
Chapter 110I, § 4(b)
Plain Language
Covered entities may not operate, install, or commission the installation of biometric recognition technology in any place that is open to and solicits the patronage of the general public, whether the place is licensed or unlicensed. This is a sweeping prohibition on public-facing biometric surveillance covering retail stores, restaurants, entertainment venues, transit hubs, and any other publicly accessible space. Unlike many jurisdictions that restrict only real-time facial recognition, this provision covers all biometric recognition technology (fingerprints, voiceprints, gait analysis, etc.), not just facial recognition.
(b) Covered entities may not operate, install, or commission the operation or installation of equipment incorporating biometric recognition technology in any place, whether licensed or unlicensed, which is open to and accepts or solicits the patronage of the general public.
Pre-filed 2025-01-17
S-02.2
Chapter 93M, § 2(f)
Plain Language
Commercial establishments — defined as places of entertainment, retail stores, and food and drink establishments — are categorically prohibited from using biometric identifiers or biometric information to identify persons or customers. This is an absolute prohibition with no exceptions: no consent mechanism can cure it, and it applies regardless of purpose. This effectively bans facial recognition and similar biometric identification technologies in retail, entertainment, and food service settings.
(f) No commercial establishment shall use a person's or a customer's biometric identifier or biometric information to identify them.
Pending 2026-10-01
S-02.7, S-02.9
Commercial Law § 14–1330(B)(1)–(4)
Plain Language
Operators must establish and continuously maintain a protocol that prevents companion chatbots from producing or presenting self-harm, suicidal ideation, or suicide content when a user expresses such thoughts. The protocol must include automatic referral to crisis service providers — specifically the Maryland Behavioral Health Crisis Response System and the 988 Suicide and Crisis Lifeline. Operators must use evidence-based methods for detecting user expressions of self-harm or suicidal ideation. The protocol must also be published on the operator's website. This is an ongoing operating requirement — the chatbot cannot function without the protocol in place.
(B) (1) AN OPERATOR SHALL ESTABLISH AND MAINTAIN A PROTOCOL FOR PREVENTING A COMPANION CHATBOT FROM PRODUCING OR PRESENTING CONTENT CONCERNING SELF–HARM, SUICIDAL IDEATION, OR SUICIDE TO A USER WHO EXPRESSES THOUGHTS OF SELF–HARM OR SUICIDAL IDEATION TO THE COMPANION CHATBOT. (2) THE PROTOCOL REQUIRED UNDER PARAGRAPH (1) OF THIS SUBSECTION SHALL INCLUDE A NOTIFICATION TO A USER WHO EXPRESSES THOUGHTS OF SELF–HARM OR SUICIDAL IDEATION THAT REFERS THE USER TO A CRISIS SERVICE PROVIDER, INCLUDING: (I) THE MARYLAND BEHAVIORAL HEALTH CRISIS RESPONSE SYSTEM; AND (II) THE NATIONAL 9–8–8 SUICIDE AND CRISIS LIFELINE. (3) AN OPERATOR SHALL USE EVIDENCE–BASED METHODS FOR DETECTING WHEN A USER IS EXPRESSING THOUGHTS OF SELF–HARM OR SUICIDAL IDEATION TO A COMPANION CHATBOT. (4) AN OPERATOR SHALL PUBLISH THE PROTOCOL REQUIRED UNDER PARAGRAPH (1) OF THIS SUBSECTION ON THE OPERATOR'S WEBSITE.
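At the implementation level, the Maryland protocol reduces to a detection hook plus a fixed referral path. The sketch below assumes a hypothetical operator-supplied detection function standing in for the evidence-based methods the bill requires; the referral targets (the Maryland Behavioral Health Crisis Response System and the 988 Suicide and Crisis Lifeline) are the ones named in the bill, while everything else is illustrative.

```python
# Minimal sketch of a self-harm crisis-referral protocol; the detection and
# generation hooks are hypothetical stand-ins for the operator's own systems.
CRISIS_REFERRAL = (
    "If you're having thoughts of self-harm or suicide, help is available now: "
    "call or text 988 (Suicide and Crisis Lifeline) or contact the Maryland "
    "Behavioral Health Crisis Response System."
)

def respond(user_message: str, detect_self_harm, generate_reply) -> str:
    """detect_self_harm should wrap an evidence-based detection method."""
    if detect_self_harm(user_message):
        # Do not produce or present self-harm content; refer to crisis services.
        return CRISIS_REFERRAL
    return generate_reply(user_message)
```

The bill's separate publication duty means a written description of this protocol, not the code itself, must also be posted on the operator's website.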
Pending 2026-10-01
S-02.6
Commercial Law § 14–1330(C)(1)–(2)
Plain Language
Operators must establish and maintain a protocol preventing companion chatbots from producing or presenting sexually explicit content to minor users — including visual depictions of sexually explicit conduct and content suggesting minors should engage in such conduct. The protocol must be published on the operator's website. The 'minor user' trigger applies when the operator knows or reasonably should know the user is a minor, which is a broader standard than actual knowledge alone. 'Sexually explicit conduct' is defined by reference to the federal definition at 18 U.S.C. § 2256.
(C) (1) AN OPERATOR SHALL ESTABLISH AND MAINTAIN A PROTOCOL FOR PREVENTING A COMPANION CHATBOT FROM PRODUCING OR PRESENTING TO A MINOR USER CONTENT CONCERNING SEXUALLY EXPLICIT CONDUCT, INCLUDING: (I) VISUAL DEPICTIONS OF SEXUALLY EXPLICIT CONDUCT; AND (II) CONTENT SUGGESTING THAT THE MINOR USER SHOULD ENGAGE IN SEXUALLY EXPLICIT CONDUCT. (2) AN OPERATOR SHALL PUBLISH THE PROTOCOL REQUIRED UNDER PARAGRAPH (1) OF THIS SUBSECTION ON THE OPERATOR'S WEBSITE.
Pending 2026-02-24
Sec. 4(1)-(2)
Plain Language
Employers are categorically prohibited from using automated decision tools to make employment-related decisions — covering wages, benefits, hours, performance evaluations, hiring, discipline, promotion, termination, assignment of work, and all other terms and conditions of employment. The sole exception is screening large volumes of job applications to identify candidates meeting hiring criteria or to assess candidates based on job skills. All other uses of automated decision tools for employment decisions are banned outright.
Sec. 4. (1) Except as otherwise provided in subsection (2), an employer shall not use an automated decisions tool to make an employment-related decision. (2) An employer may use an automated decisions tool to screen large volumes of job applications to do either of the following: (a) Identify candidates who meet a set hiring criteria. (b) Assess candidates based on job skills.
Pending 2026-02-24
Sec. 5(5)
Plain Language
Employers are categorically prohibited from using any electronic monitoring or automated decision tool equipped with facial recognition, gait recognition, voice recognition, or emotion recognition technology. This is an absolute prohibition with no exceptions — unlike the general monitoring provisions that allow use for enumerated purposes.
(5) An employer shall not use an electronic monitoring tool or automated decisions tool that is equipped with facial, gait, voice, or emotion recognition technology.
Pending 2027-01-01
S-02.7
Sec. 5(1)(a)
Plain Language
Operators may not make a companion chatbot available to a covered minor unless the chatbot is not foreseeably capable of encouraging the minor to engage in self-harm, suicidal ideation, violence, drug or alcohol consumption, or disordered eating. The standard is foreseeability — the chatbot must not be foreseeably capable of producing these outputs, not merely that it has not yet done so. Initially this applies only when the operator has actual knowledge the user is a minor; beginning January 1, 2027, the actual knowledge requirement is eliminated (see Sec. 5(2)).
An operator shall not make a companion chatbot available to a covered minor unless the companion chatbot is not foreseeably capable of any of the following: (a) Encouraging the covered minor to engage in self-harm, suicidal ideation, violence, consumption of drugs or alcohol, or disordered eating.
Pending 2027-01-01
S-02.6
Sec. 5(1)(c)-(d)
Plain Language
Operators may not make a companion chatbot available to a covered minor if the chatbot is foreseeably capable of: encouraging the minor to harm others or participate in illegal activity (including creation of child sexual abuse material), or engaging in erotic or sexually explicit interactions with the minor. These are absolute prohibitions — the chatbot must not be foreseeably capable of these outputs when interacting with a covered minor. The actual knowledge requirement for minor status is removed beginning January 1, 2027.
An operator shall not make a companion chatbot available to a covered minor unless the companion chatbot is not foreseeably capable of any of the following: (c) Encouraging the covered minor to harm others or participate in illegal activity, including, but not limited to, the creation of covered minor sexual abuse materials. (d) Engaging in erotic or sexually explicit interactions with the covered minor.
Pending 2027-01-01
S-02.7
Sec. 5(1)(a)
Plain Language
Operators may not make a companion chatbot available to a covered minor unless the chatbot is not foreseeably capable of encouraging the minor to engage in self-harm, suicidal ideation, violence, drug or alcohol consumption, or disordered eating. The standard is 'foreseeably capable' — operators must design and test to ensure the chatbot cannot foreseeably produce such outputs for minors. Initially applies only when the operator has actual knowledge the user is a minor; beginning January 1, 2027, the actual knowledge requirement is eliminated (see Sec. 5(2)). This is broader than CA SB 243's self-harm/suicide focus, as it also covers violence, substance use, and disordered eating.
An operator shall not make a companion chatbot available to a covered minor unless the companion chatbot is not foreseeably capable of any of the following: (a) Encouraging the covered minor to engage in self-harm, suicidal ideation, violence, consumption of drugs or alcohol, or disordered eating.
Pending 2027-01-01
S-02.6
Sec. 5(1)(c)-(d)
Plain Language
Operators must ensure companion chatbots are not foreseeably capable of (1) encouraging minors to harm others or participate in illegal activity — including creation of child sexual abuse materials — or (2) engaging in erotic or sexually explicit interactions with minors. These are absolute prohibitions: the chatbot must be designed so that it cannot foreseeably produce such content for covered minors. The CSAM prohibition here is broader than S-02.4's universal CSAM ban because it covers encouraging CSAM creation in addition to generating it. Beginning January 1, 2027, the actual knowledge requirement for minor status is removed.
An operator shall not make a companion chatbot available to a covered minor unless the companion chatbot is not foreseeably capable of any of the following: (c) Encouraging the covered minor to harm others or participate in illegal activity, including, but not limited to, the creation of covered minor sexual abuse materials. (d) Engaging in erotic or sexually explicit interactions with the covered minor.
Pending 2026-08-01
Minn. Stat. § 181.9924, subd. 1(a)
Plain Language
Employers are categorically prohibited from using automated decision systems to: (1) cause or enable violations of any law; (2) infer sensitive worker attributes including immigration status, health/reproductive status, political/religious beliefs, emotional state, neural data, sexual orientation, disability, criminal record, or credit history; (3) make predictions about worker characteristics unrelated to essential job functions; (4) identify or retaliate against workers exercising legal rights; (5) use facial, gait, or emotion recognition technologies; or (6) collect data for purposes not disclosed in the pre-use notice. These are absolute prohibitions with no safe harbor — the ban on facial, gait, and emotion recognition is particularly sweeping and applies regardless of purpose or accuracy.
Subdivision 1. Prohibitions. (a) An employer is prohibited from using an automated decision system to: (1) prevent compliance with or cause a violation of any federal, state, or local law or regulation; (2) obtain or infer a worker's immigration status; veteran status; ancestral history; religious or political beliefs; health or reproductive status, history, or plan; emotional or psychological state; neural data; sexual or gender orientation; disability; criminal record; or credit history; (3) make predictions or inferences about a worker's behavior, beliefs, intentions, personality, emotional state, health, or other characteristics or behaviors that are unrelated to the worker's essential job functions; (4) identify, predict, or take adverse action against a worker for exercising the worker's legal rights; (5) draw on facial, gait, or emotion recognition technologies; or (6) collect data for a purpose that was not disclosed in the notice required by section 181.9922.
Pending 2026-08-01
S-02.7
Minn. Stat. § 604.115, subd. 4(a)-(b)
Plain Language
Companion chatbot proprietors must take three affirmative steps: (1) make good faith, industry-standard efforts to prevent the chatbot from promoting, causing, or aiding self-harm; (2) use reasonable techniques to detect when a user is expressing thoughts of self-harm; and (3) upon detection, immediately suspend the user's access for at least 72 hours and prominently display suicide crisis organization contact information. The liability structure is two-tiered. First, failure to comply with these obligations creates liability for resulting self-harm. Second, even if the proprietor is otherwise compliant, liability attaches whenever the proprietor has actual knowledge of self-harm promotion or user self-harm expressions and fails to suspend access and display crisis information. Liability under this subdivision cannot be waived or disclaimed.
(a) A proprietor of a companion chatbot must make a prudent and good faith effort consistent with industry standards and use existing technology, available resources, and known, established, or readily attainable techniques to prevent the companion chatbot from promoting, causing, or aiding self-harm, and determine whether a covered user is expressing thoughts of self-harm. Upon determining that a companion chatbot has promoted, caused, or aided self-harm, or that a covered user is expressing thoughts of self-harm, the proprietor must prohibit continued use of the companion chatbot for a period of at least 72 hours and prominently display contact information for a suicide crisis organization to the covered user. (b) If a proprietor of a companion chatbot fails to comply with this section, the proprietor is liable to users who inflict self-harm, in whole or in part, as a result of the proprietor's companion chatbot promoting, causing, or aiding the user to inflict self-harm. Irrespective of the proprietor's compliance with this subdivision, a proprietor is liable for general and special damages to covered users who inflict self-harm, in whole or in part, when the proprietor: (1) has actual knowledge that: (i) the companion chatbot is promoting, causing, or aiding self-harm; or (ii) a covered user is expressing thoughts of self-harm; (2) fails to prohibit continued use of the companion chatbot for a period of at least 72 hours; and (3) fails to prominently display to the user a means to contact a suicide crisis organization. A proprietor of a companion chatbot may not waive or disclaim liability under this subdivision.
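The Minnesota provision is unusual in pairing detection with an access suspension. A minimal sketch of that suspension logic follows; the helper names and in-memory storage are assumptions for illustration, while the 72-hour minimum and the prominent crisis contact display come from the bill text.

```python
# Illustrative sketch of the 72-hour suspension described above; only the
# 72-hour floor and the crisis contact display are drawn from the bill.
from datetime import datetime, timedelta, timezone

SUSPENSION = timedelta(hours=72)
CRISIS_CONTACT = "988 Suicide and Crisis Lifeline: call or text 988."

suspended_until: dict[str, datetime] = {}  # user_id -> suspension expiry

def on_self_harm_signal(user_id: str) -> str:
    """Called when the chatbot aided self-harm or the user expressed self-harm."""
    suspended_until[user_id] = datetime.now(timezone.utc) + SUSPENSION
    return CRISIS_CONTACT  # displayed prominently to the user

def may_use_chatbot(user_id: str) -> bool:
    expiry = suspended_until.get(user_id)
    return expiry is None or datetime.now(timezone.utc) >= expiry
```

Because liability attaches on actual knowledge even for otherwise compliant proprietors, the suspension path would need to be reachable from human-review and abuse-report channels as well as from automated detection.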
Pending 2026-09-01
§ 181.9924, Subd. 1(a)
Plain Language
Employers are categorically prohibited from using automated decision systems in six ways: (1) to cause or facilitate violations of any law; (2) to obtain or infer sensitive worker attributes including immigration status, health/reproductive data, religion, political beliefs, neural data, sexual orientation, disability, criminal record, or credit history; (3) to make predictions about worker behavior, beliefs, personality, or health unrelated to essential job functions; (4) to identify or retaliate against workers exercising legal rights; (5) to use facial, gait, or emotion recognition technologies; and (6) to collect data for undisclosed purposes. These are absolute prohibitions — no safe harbor, consent, or mitigation process can cure a violation.
Subdivision 1. Prohibitions. (a) An employer is prohibited from using an automated decision system to: (1) prevent compliance with or cause a violation of any federal, state, or local law or regulation; (2) obtain or infer a worker's immigration status; veteran status; ancestral history; religious or political beliefs; health or reproductive status, history, or plan; emotional or psychological state; neural data; sexual or gender orientation; disability; criminal record; or credit history; (3) make predictions or inferences about a worker's behavior, beliefs, intentions, personality, emotional state, health, or other characteristics or behaviors that are unrelated to the worker's essential job functions; (4) identify, predict, or take adverse action against a worker for exercising the worker's legal rights; (5) draw on facial, gait, or emotion recognition technologies; or (6) collect data for a purpose that was not disclosed in the notice required by section 181.9922.
Pending 2026-08-28
S-02.6
§ 1.2058(3)(1)-(2)
Plain Language
It is unlawful for any person to design, develop, or make available an AI chatbot knowing or with reckless disregard that it poses a risk of soliciting, encouraging, or inducing minors to engage in, describe, or simulate sexually explicit conduct, or to create or transmit visual depictions of sexually explicit conduct. The mens rea threshold is knowledge or reckless disregard — negligence alone is insufficient. Each offense carries a fine of up to $100,000. This applies to any person, not just covered entities.
3. (1) It shall be unlawful to design, develop, or make available an artificial intelligence chatbot knowing or with reckless disregard for the fact that the artificial intelligence chatbot poses a risk of soliciting, encouraging, or inducing minors to: (a) Engage in, describe, or simulate sexually explicit conduct; or (b) Create or transmit any visual depiction of sexually explicit conduct, including any visual depiction described in section 573.010. (2) Any person who violates subdivision (1) of this subsection shall be fined not more than one hundred thousand dollars per offense.
Pending 2026-08-28
S-02.7
§ 1.2058(4)(1)-(2)
Plain Language
It is unlawful for any person to design, develop, or make available an AI chatbot knowing or with reckless disregard that the chatbot encourages, promotes, or coerces suicide, nonsuicidal self-injury, or imminent physical or sexual violence. This is a universal prohibition — not limited to minors — and applies to any person, not just covered entities. The knowledge or reckless disregard standard applies. Each offense carries a fine of up to $100,000.
4. (1) It shall be unlawful to design, develop, or make available an artificial intelligence chatbot knowing or with reckless disregard for the fact that the artificial intelligence chatbot encourages, promotes, or coerces suicide, nonsuicidal self-injury, or imminent physical or sexual violence. (2) Any person who violates subdivision (1) of this subsection shall be fined not more than one hundred thousand dollars per offense.
Pending 2026-08-28
S-02.6
RSMo § 1.2058(3)(1)-(2)
Plain Language
It is unlawful for any person to design, develop, or make available an AI chatbot with knowledge or reckless disregard that the chatbot poses a risk of soliciting, encouraging, or inducing minors to engage in, describe, or simulate sexually explicit conduct, or to create or transmit visual depictions of sexually explicit conduct. The mental state requirement is knowledge or reckless disregard — not strict liability. Violations carry a fine up to $100,000 per offense. This applies to any person, not just covered entities.
3. (1) It shall be unlawful to design, develop, or make available an artificial intelligence chatbot knowing or with reckless disregard for the fact that the artificial intelligence chatbot poses a risk of soliciting, encouraging, or inducing minors to: (a) Engage in, describe, or simulate sexually explicit conduct; or (b) Create or transmit any visual depiction of sexually explicit conduct, including any visual depiction described in section 573.010. (2) Any person who violates subdivision (1) of this subsection shall be fined not more than one hundred thousand dollars per offense.
Pending 2026-08-28
S-02.7
RSMo § 1.2058(4)(1)-(2)
Plain Language
It is unlawful for any person to design, develop, or make available an AI chatbot with knowledge or reckless disregard that the chatbot encourages, promotes, or coerces suicide, nonsuicidal self-injury, or imminent physical or sexual violence. This is broader than self-harm alone — it also covers imminent physical and sexual violence. The mental state requirement is knowledge or reckless disregard. Violations carry a fine up to $100,000 per offense. This applies to any person, not just covered entities.
4. (1) It shall be unlawful to design, develop, or make available an artificial intelligence chatbot knowing or with reckless disregard for the fact that the artificial intelligence chatbot encourages, promotes, or coerces suicide, nonsuicidal self-injury, or imminent physical or sexual violence. (2) Any person who violates subdivision (1) of this subsection shall be fined not more than one hundred thousand dollars per offense.
Pending
S-02.2
Section 2(a)-(b)
Plain Language
Business entities are prohibited from using biometric surveillance systems on consumers at their physical premises unless two conditions are met: (1) the business provides clear and conspicuous notice to the consumer, and (2) the system is used for a lawful purpose. Notice can be satisfied by posting a sign at the perimeter of the surveilled area. Without both conditions, any use is an unlawful practice under the Consumer Fraud Act. This effectively creates a notice-and-lawful-purpose regime rather than an outright ban — businesses that comply with both conditions may use biometric surveillance on-premises.
a. It shall be an unlawful practice and a violation of P.L.1960, c.39 (C.56:8-1 et seq.) for a business entity to use any biometric surveillance system on a consumer at the physical premises of the business entity, except as provided in subsection c. of this section. b. A business entity may use a biometric surveillance system on a consumer at the physical premises of the business entity, if: (1) the business entity provides clear and conspicuous notice to the consumer regarding its use of a biometric surveillance system; and (2) the biometric surveillance system is used for a lawful purpose. The business entity may satisfy the notice requirement of paragraph (1) of this section by posting a sign in a conspicuous location at the perimeter of any area where a biometric surveillance system is being used.
Plain Language
Employers, public entities, vendors, and contractors may not use AI-based employment or benefit decision systems, electronic monitoring tools, or related surveillance in any manner that violates existing labor or employment law, collective bargaining agreements, or the rights established by this act. This includes a prohibition on using AI systems to identify, profile, or negatively assess employees or service beneficiaries who exercise or are predicted to exercise protected rights, such as organizing or filing complaints.
No employer or public entity, vendor, or contractor acting on behalf of the employer or public entity shall: a. Use, deploy, develop, produce, sell, or offer for sale an AEDS or ABSDS, or use data or information collected or produced by the AEDS or ABSDS, or use data or information obtained from an EMT or other surveillance of employees or service beneficiaries, that causes, contributes to, or results in, a violation of any provision of a recognized collective bargaining agreement or any State or federal labor or employment law, or that undermines, inhibits, threatens, punishes, or interferes with, employees, service beneficiaries, or applicants exercising their rights under this law, a collective bargaining agreement, or any of those laws, including using an AEDS or ABSDS, an EMT, or other surveillance of employees to identify, profile, predict, or result in a negative assessment of, employees or service beneficiaries who exercise, or will exercise, those rights;
Plain Language
Employers, public entities, vendors, and contractors are broadly prohibited from using AI decision systems, monitoring tools, or surveillance in any manner that harms or interferes with the health, safety, privacy, dignity, autonomy, or welfare of employees, applicants, service beneficiaries, or the general public. This is a general-purpose prohibition that operates as a floor standard — specific prohibited practices in other subsections are particular applications of this broader principle.
No employer or public entity, vendor, or contractor acting on behalf of the employer or public entity shall: b. Use, deploy, develop, produce, sell, or offer for sale an AEDS or ABSDS, or an EMT or other surveillance, in a manner which diminishes, undermines, or interferes with the health, safety, privacy, dignity, autonomy, or welfare of employees, applicants for employment, service beneficiaries, or members of the general public;
Plain Language
Employers, public entities, vendors, and contractors face categorical prohibitions on workplace surveillance in private areas (bathrooms, break rooms, lactation rooms, locker rooms), off-duty monitoring, and surveillance of employees' residences and personal vehicles. Employees may refuse to install monitoring software on personal devices and may remove surveillance devices during off-duty hours. Employers may not require subcutaneous data-transmitting implants or compel disclosure of personal device passwords or social media account credentials. Climate control and fire safety systems are exempt from the private-area prohibition. Monitoring software and devices must be disabled outside of work activities, locations, and times, and removed when employment ends.
No employer or public entity, vendor, or contractor acting on behalf of the employer or public entity shall: c. Conduct, or have conducted by a third party, electronic, audial, visual, or other monitoring or surveillance of employees in bathrooms or private areas, including, but not limited to, rooms for eating and other breaks, sick rooms, wellness rooms, locker rooms, dressing rooms, and areas designated for lactation, provided that the prohibitions of this subsection shall not apply to climate control, fire safety, or similar systems. Employees shall have the right, when in those rooms or areas, or on off-duty hours, to remove, disable, or decline to carry workplace surveillance devices the employer requires to be on their person or in their possession while working; d. Conduct, or have conducted by a third party, an EMT or other surveillance of an employee when the employee is off duty, on leave, or on a meal or rest break, or during other time not designated for the performance of essential work functions; e. Require an employee to install or download software or applications used to electronically monitor the employee, including by location, provided by, or on behalf of, the employer, into any personal device or personal property of the employee, including, but not limited to, vehicles, cell phones, computers, tablets, or wearables, or require the employee to wear or attach to clothing or accessories devices that monitor an employee, and the employee shall have the absolute right to refuse, without retaliation, any employer request or requirements to install or download the software or application. The applications and devices shall be disabled outside of the activities, locations and times needed for those functions, and removed when employment ends; f. Require an employee to have a device that collects or transmits data physically implanted, or subcutaneously installed, in the employee's body, or require an employee to disclose to the employer the identity of, or any password for any personal device or account, including any social media account, of the employee, or otherwise provide access to the account or device; g. Conduct, or have conducted by a third party, electronic, audiovisual or other monitoring, remote sensing or tracking, or other surveillance, of a residence, personal vehicle, or property owned or leased by an employee or applicant for employment;
Plain Language
Employers and their agents may not use AI-based systems or surveillance tools to set productivity quotas or performance standards that are likely to significantly harm worker health and safety. Additionally, employers may not take adverse employment actions against employees based solely on data from continuous incremental time-tracking tools such as keystroke loggers, idle-time trackers, or mouse-movement monitors. The first prohibition targets the system-level setting of dangerous quotas; the second prevents using granular micro-tracking as the sole basis for discipline or termination.
No employer or public entity, vendor, or contractor acting on behalf of the employer or public entity shall: h. Use, deploy, develop, produce, sell, or offer for sale, an EMT or other surveillance or an AEDS or ABSDS in a manner that harms or is likely to harm the health or safety of employees, by setting, or facilitating the setting of, productivity quotas or performance standards that are likely to contribute significantly to harming worker health and safety; i. Take adverse employment action against an employee on the sole basis of data collected via continuous incremental time-tracking tools, including keystroke logging, idle-time trackers, or mouse-movement monitors;
Pending 2025-07-26
S-02.1
State Tech. Law § 530(1)(a)-(b)
Plain Language
It is categorically prohibited to develop or operate within New York any AI system that: (a) deploys subliminal techniques operating beyond conscious awareness to materially distort behavior in a way that causes, or is highly likely to cause, physical or psychological harm, or that leverages the vulnerabilities of a defined group to similar ends; or (b) inflicts physical or emotional harm on individuals without a valid law enforcement or self-defense justification. These prohibitions apply regardless of whether the prohibited function is the system's main function. Knowing operation is a class D felony and carries a civil penalty equal to the greater of the amount earned from the prohibited system or the damages it caused.
No person shall develop, in whole or in part, or operate an artificial intelligence system within the state where such a system performs any of the following, whether or not it is the system's main function: (a) the deployment of subliminal techniques that operate beyond an individual's conscious awareness, with the express purpose of materially distorting an individual's behavior in such a manner that leads to, or possesses a high likelihood of leading to, physical or psychological harm to that individual or another, or that leverages the vulnerabilities of a defined group of individuals to similar ends; (b) the infliction of physical or emotional harm upon individuals without any valid law enforcement or self-defense purpose or justification;
Pending 2025-07-26
State Tech. Law § 530(1)(c)-(d)
Plain Language
It is prohibited to develop or operate within New York any AI system that: (c) predicts individual future actions or behaviors and then takes reactive actions based on those predictions that, without legal justification, infringe on the individual's liberty, emotional, psychological, or financial interests — this is effectively a prohibition on predictive policing and predictive behavioral response systems that lack legal authorization; or (d) acquires, retains, disseminates, or accesses sensitive personal information or non-public data in violation of existing privacy, security, and hacking laws. Subdivision (d) largely cross-references existing law rather than creating an independent prohibition.
(c) the prediction of an individual's future actions or behaviors, followed by subsequent reactions based on these predictions, carried out in such a way that, without legal justification, infringes upon or compromises the individual's liberty, emotional, psychological, or financial interests; (d) the unauthorized acquisition, retention, or dissemination of or access to sensitive personal information or non-public data in violation of applicable data privacy, security, and hacking laws;
Pending 2025-07-26
State Tech. Law § 530(1)(e), (2)-(7)
Plain Language
Autonomous weapons systems that inflict harm on persons, property, or the environment without meaningful human supervision or control are categorically prohibited. 'Meaningful human supervision or control' means the ability to actively manage, intervene, or override the system's functions. The Secretary may demand immediate cessation of development or operation of any prohibited system; the demand is binding unless the person petitions for a hearing, and the system must remain shut down while the petition is pending. Knowing operation by officers, directors, or employees is a class D felony with civil penalties. A no-knowledge defense exists, but once the Secretary issues a cease demand, all members, officers, and directors are rebuttably presumed to have knowledge of the prohibited system. A narrow exception permits state-authorized development under substantial, continuous state oversight after public hearing and comment.
(e) the implementation of any form of autonomous weapon system designed to inflict harm on persons, property, or the environment that lack meaningful human supervision or control. "Meaningful human supervision or control" shall mean the ability to actively manage, intervene, or override the autonomous weapon system's functions. 2. Where the secretary discovers the development or operation of a prohibited artificial intelligence system, the secretary may, in writing, demand that the person who is developing or operating such system cease development or operation of or access to such a system within a period of time as the secretary deems necessary to prevent the system from widespread use or, if the system is operational or accessible to persons for use, to ensure the system is properly terminated in such a way to minimize risks of harm to individuals, society, or the environment. A demand made pursuant to this section shall be finally and irrevocably binding on the person unless the person against whom the demand is made shall, within such period of time set by the secretary, after the giving of notice of such determination, petition the department for a hearing to determine the legal findings of the secretary. The person developing or operating such a prohibited system shall, prior to petition, cease development, operation, and access to the system until and unless such determination is favorable to the person. Such determination may be appealed by any party as of right. 3. The secretary shall not grant a license pursuant to this article to any high-risk advanced artificial intelligence system described under this section except as described in subdivision seven of this section. 4. Any member, officer, director or employee of an operator of any entity who knowingly publicly or privately operates any system described in this section shall be guilty of a class D felony and shall incur a civil penalty of the amount earned from the creation of the prohibited system or the amount of damages caused by the system, whichever is greater. 5. This section shall not be construed as imposing liability on any member, officer, director or employee who had no explicit or implicit knowledge of the prohibited high-risk advanced artificial intelligence system provided however that where the secretary sends a demand to cease the development, operation, or access to such system all members, officers, and directors shall be rebuttably presumed to have knowledge of the prohibited high-risk advanced artificial intelligence system. 6. This section shall be construed as prohibiting the development of a prohibited high-risk advanced artificial intelligence system or making such a system accessible to persons in the state of New York. 7. Notwithstanding subdivision one of this section, a person may develop a prohibited high-risk advanced artificial intelligence system where authorized by the secretary, provided that such system is developed and used only by the state or with substantial, continuous oversight by the state and such system is authorized only after public hearing and comment in accordance with section five hundred nine of this article.
Pending 2025-09-09
S-02.7
Gen. Bus. Law § 1701
Plain Language
Operators may not operate or provide an AI companion at all unless the system includes a protocol for addressing three categories of user-expressed risk: (1) suicidal ideation or self-harm, (2) physical harm to others, and (3) financial harm to others. The protocol must include, at minimum, a notification referring the user to crisis service providers such as a suicide hotline or crisis text line. This is broader than CA SB 243's crisis protocol, which covers only suicidal ideation and self-harm — this bill adds protocols for physical harm to others and financial harm to others. This is a continuous operating prerequisite: an operator cannot lawfully run the companion without the protocol in place.
It shall be unlawful for any operator to operate or provide an AI companion to a user unless such AI companion contains a protocol for addressing: 1. possible suicidal ideation or self-harm expressed by a user to the AI companion, 2. possible physical harm to others expressed by a user to the AI companion, and 3. possible financial harm to others expressed by the user to the AI companion, that includes but is not limited to, a notification to the user that refers them to crisis service providers such as a suicide hotline, crisis text line, or other appropriate crisis services.
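The bill does not prescribe an implementation for this protocol. Purely as an illustration, the Python sketch below maps each of the three statutory risk categories to a referral notification; the category labels and the keyword-based `classify_risk` stub are assumptions standing in for whatever detection method an operator actually uses (a production system would rely on a dedicated classifier and log referrals for any related reporting duties).

```python
# Illustrative sketch only; the bill does not prescribe any particular design.
# `classify_risk` is a hypothetical stand-in for the operator's own detection method.
from dataclasses import dataclass
from enum import Enum, auto

class RiskCategory(Enum):
    SELF_HARM = auto()        # suicidal ideation or self-harm expressed by the user
    HARM_TO_OTHERS = auto()   # possible physical harm to others
    FINANCIAL_HARM = auto()   # possible financial harm to others

@dataclass
class Referral:
    message: str

# At minimum, the protocol must refer the user to crisis service providers.
REFERRALS = {
    RiskCategory.SELF_HARM: Referral(
        "If you are thinking about harming yourself, help is available. "
        "Call or text 988 (Suicide and Crisis Lifeline)."),
    RiskCategory.HARM_TO_OTHERS: Referral(
        "If you or someone else is in danger, call 911 or contact a local crisis service."),
    RiskCategory.FINANCIAL_HARM: Referral(
        "If you are concerned about financial harm to someone, consider contacting "
        "appropriate support or reporting services."),
}

def classify_risk(user_message: str) -> RiskCategory | None:
    """Hypothetical detector; a real system would use a moderation model or classifier."""
    text = user_message.lower()
    if any(k in text for k in ("kill myself", "end my life", "hurt myself")):
        return RiskCategory.SELF_HARM
    if any(k in text for k in ("hurt them", "attack him", "attack her")):
        return RiskCategory.HARM_TO_OTHERS
    if any(k in text for k in ("steal their money", "drain their account")):
        return RiskCategory.FINANCIAL_HARM
    return None

def crisis_protocol(user_message: str) -> str | None:
    """Return a referral notification when a covered risk category is detected."""
    category = classify_risk(user_message)
    return REFERRALS[category].message if category else None

if __name__ == "__main__":
    print(crisis_protocol("sometimes I want to end my life"))
```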
Pending 2026-06-09
S-02.1
Civ. Rights Law § 89-a
Plain Language
No person or entity may develop, deploy, use, or sell an AI system that evaluates or classifies individuals' trustworthiness over time based on social behavior or personal characteristics, where the resulting social score leads to: differential treatment in unrelated social contexts, unjustified or disproportionate differential treatment, or infringement of constitutional or statutory rights. This is a categorical prohibition — there is no compliance pathway; social scoring AI systems meeting these criteria are simply banned.
No person, partnership, association or corporation shall develop, deploy, use, or sell an AI system which evaluates or classifies the trustworthiness of natural persons over a certain period of time based on their social behavior or known or predicted personal or personality characteristics, with the social score leading to any of the following: 1. differential treatment of certain natural persons or whole groups thereof in social contexts which are unrelated to the contexts in which the data was originally generated or collected; 2. differential treatment of certain natural persons or whole groups thereof that is unjustified or disproportionate to their social behavior or its gravity; or 3. the infringement of any right guaranteed under the United States constitution, the New York constitution, or state or federal law.
Pending 2025-09-05
Real Prop. Law § 442-m(3)(a)-(c)
Plain Language
Online housing platforms and brokers using AI tools for housing-related advertisements or captioning must: (1) run ad and captioning generation in separate processes with a specialized anti-discrimination interface; (2) avoid offering targeted ad options that describe or relate to characteristics protected under New York housing law, individually or in combination; and (3) ensure that ad delivery does not result in differential charges across protected classes, and not charge advertisers more to deliver ads that comply with these requirements. These are structural design and operational prohibitions specific to housing advertising AI tools.
Any real estate broker or online housing platform that uses AI tools shall: (a) ensure that housing-related advertisements or captioning are conducted in separate generative processes and have a specialized interface designed to avoid discrimination in audience selection and/or advertisement delivery; (b) avoid providing targeted options for housing-related advertisements or captioning that directly describes or relates to characteristics protected under New York state law relating to housing, or any substantially similar characteristics, individually or in combination; (c) ensure that delivery of advertisements and captioning systems do not result in differential charges to customers across groups on the basis of sex, race, ethnicity or other protected classes, or charge more to advertisers to deliver advertisements that are compliant with this paragraph;
Pending 2025-12-10
S-02.10
Gen. Bus. Law § 399-bbbb(2)
Plain Language
Any person or business entity operating a companion chatbot in New York must display a clear and conspicuous warning stating that the chatbot can foster dependency and carries psychological risk. The warning must be placed prominently on the hosting website and must be available in every language the chatbot is configured to communicate in. This is a standalone disclosure obligation — the bill prescribes the substance of the warning (dependency and psychological risk) rather than leaving operators to assess and disclose their own risk factors.
Any person, corporation, partnership, sole proprietor, limited partnership, association or any other business entity operating a companion chatbot in the state of New York shall include a clear and conspicuous warning that such companion chatbot can foster dependency and carries a psychological risk. Such warning shall be placed prominently on the website hosting such companion chatbot and be made available in any language in which the companion chatbot is set to communicate.
Pending 2026-08-30
S-02.7
Gen. Bus. Law § 1801(1); § 1800(5)(b)
Plain Language
Chatbot operators may not provide any 'unsafe chatbot features' to covered users unless the operator has verified the user is not a minor using permissible age verification methods under Article 45. The unsafe feature at issue here — generating outputs endorsing, promoting, or facilitating suicide, self-harm, substantial physical harm, disordered eating, or unlawful drug/alcohol use — is categorically prohibited for minors and permitted for verified adults only. This provision does not apply to chatbots used solely for customer service, commercial product information, or internal business/government purposes.
§ 1801. Prohibition. 1. Except as otherwise provided for in this article, it shall be unlawful for a chatbot operator to provide unsafe chatbot features to a covered user unless: (a) the covered user is not a covered minor; and (b) the chatbot operator has used methods that are permissible under article forty-five of this chapter and its implementing regulations and any additional regulations promulgated pursuant to this article to determine that the covered user is not a covered minor. 2. The provisions of subdivision one of this section shall not apply where the advanced chatbot is made available to covered users solely for the purpose of: (a) customer service, information about available commercial services or products provided by an entity, or account information; or (b) with respect to any system used by a partnership, corporation, or state or local government agency, for internal purposes or employee productivity.

§ 1800(5)(b): "Unsafe chatbot features" shall mean one or more advanced chatbot design features that, at any point during a chatbot-user interaction: ... (b) generating outputs that contain endorsement or promotion of, or which facilitate suicide, self-harm, substantial physical harm to others, disordered eating, unlawful drug or alcohol use, or drug or alcohol abuse;
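As a rough illustration of the gating structure § 1801 contemplates, the sketch below blocks every "unsafe chatbot feature" unless the operator has affirmatively determined the user is not a covered minor, and skips the check for exempt customer-service or internal-use deployments. The function names are hypothetical, and Article 45's permissible verification methods are abstracted behind a stub; this is a sketch of the decision logic, not a compliant implementation.

```python
# Illustrative sketch; names and the verification stub are hypothetical.
from enum import Enum, auto

class DeploymentPurpose(Enum):
    GENERAL = auto()
    CUSTOMER_SERVICE = auto()   # exempt deployment purpose
    INTERNAL_BUSINESS = auto()  # exempt deployment purpose

EXEMPT_PURPOSES = {DeploymentPurpose.CUSTOMER_SERVICE, DeploymentPurpose.INTERNAL_BUSINESS}

def verified_not_minor(user_id: str) -> bool:
    """Stand-in for an age verification method permissible under Article 45."""
    raise NotImplementedError("integrate the operator's age verification provider here")

def may_enable_unsafe_features(user_id: str, purpose: DeploymentPurpose) -> bool:
    """Unsafe chatbot features stay disabled unless the user is verified as not a minor."""
    if purpose in EXEMPT_PURPOSES:
        return True  # the prohibition does not apply to exempt deployments
    try:
        return verified_not_minor(user_id)
    except NotImplementedError:
        # Fail closed: without a completed verification, treat the user as a covered minor.
        return False
```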
Pending 2026-08-30
S-02.4S-02.6
Gen. Bus. Law § 1801(1); § 1800(5)(e)
Plain Language
Chatbot operators may not provide features that generate, describe, or facilitate sexually explicit conduct or child sexual abuse material to any covered user unless the operator has verified the user is not a minor. The CSAM prohibition is effectively absolute for minors, while the sexually explicit conduct restriction gates adult access behind age verification. 'Sexually explicit conduct' is defined by reference to 18 USC § 2256, which covers actual or simulated sexual intercourse, bestiality, masturbation, sadistic or masochistic abuse, and lascivious exhibition of genitals. The exemption for customer service, internal business, and government-use chatbots applies.
§ 1801. Prohibition. 1. Except as otherwise provided for in this article, it shall be unlawful for a chatbot operator to provide unsafe chatbot features to a covered user unless: (a) the covered user is not a covered minor; and (b) the chatbot operator has used methods that are permissible under article forty-five of this chapter and its implementing regulations and any additional regulations promulgated pursuant to this article to determine that the covered user is not a covered minor. 2. The provisions of subdivision one of this section shall not apply where the advanced chatbot is made available to covered users solely for the purpose of: (a) customer service, information about available commercial services or products provided by an entity, or account information; or (b) with respect to any system used by a partnership, corporation, or state or local government agency, for internal purposes or employee productivity.

§ 1800(5)(e): "Unsafe chatbot features" shall mean one or more advanced chatbot design features that, at any point during a chatbot-user interaction: ... (e) generating outputs that are, describe, or facilitate sexually explicit conduct or child sexual abuse material.
Pending
S-02.7
Gen. Bus. Law § 1701
Plain Language
Operators may not operate or provide an AI companion at all unless the system contains a protocol for addressing three categories of user-expressed risk: (1) suicidal ideation or self-harm, (2) physical harm to others, and (3) financial harm to others. The protocol must include, at minimum, a notification referring the user to crisis service providers such as a suicide hotline or crisis text line. This is a continuous operating prerequisite — the protocol must remain active as a condition of operation. Notably, the bill extends crisis protocols beyond self-harm to cover expressions of intent to physically or financially harm others, which is broader than comparable companion chatbot statutes like CA SB 243.
It shall be unlawful for any operator to operate or provide an AI companion to a user unless such AI companion contains a protocol for addressing: 1. possible suicidal ideation or self-harm expressed by a user to the AI companion, 2. possible physical harm to others expressed by a user to the AI companion, and 3. possible financial harm to others expressed by the user to the AI companion, that includes but is not limited to, a notification to the user that refers them to crisis service providers such as a suicide hotline, crisis text line, or other appropriate crisis services.
Failed 2025-04-08
S-02.10
Gen. Bus. Law § 399-zzzzzz(2)
Plain Language
Any owner, licensee, or operator of a generative AI system must display a clear and conspicuous notice on the system's user interface warning users that outputs may be inaccurate. This is an unconditional disclosure requirement — it applies to every generative AI system regardless of use case or user type. The notice must appear on the user interface itself (not buried in terms of service). This is a narrow, single-purpose obligation focused specifically on accuracy limitations, distinct from broader AI identity disclosure requirements.
The owner, licensee or operator of a generative artificial intelligence system shall clearly and conspicuously display a notice on the system's user interface that the outputs of the generative artificial intelligence system may be inaccurate.
Pending 2026-01-01
S-02.1
Civ. Rights Law § 89-a
Plain Language
No person or entity may develop, deploy, use, or sell an AI system that evaluates or classifies individuals' trustworthiness over time based on social behavior or personal characteristics where the resulting social score leads to: (1) differential treatment in unrelated social contexts, (2) unjustified or disproportionate differential treatment, or (3) infringement of constitutional or statutory rights. This is a categorical prohibition — no compliance program, testing, or disclosure can authorize social scoring systems that meet these criteria.
No person, partnership, association or corporation shall develop, deploy, use, or sell an AI system which evaluates or classifies the trustworthiness of natural persons over a certain period of time based on their social behavior or known or predicted personal or personality characteristics, with the social score leading to any of the following:
1. differential treatment of certain natural persons or whole groups thereof in social contexts which are unrelated to the contexts in which the data was originally generated or collected;
2. differential treatment of certain natural persons or whole groups thereof that is unjustified or disproportionate to their social behavior or its gravity; or
3. the infringement of any right guaranteed under the United States constitution, the New York constitution, or state or federal law.
Pending 2026-01-30
S-02.7
Section 3(a)-(b)
Plain Language
Operators may not provide an AI companion to any user unless the system contains active protocols that (1) detect suicidal ideation or self-harm expressions, (2) refuse to assist with suicide attempts or methods, and (3) refer the user to crisis services when suicidal ideation or self-harm is detected. Referrals must include the 988 Suicide and Crisis Lifeline (or its successor), the closest behavioral health crisis centers to the user, or other appropriate crisis services. This is a continuous operating prerequisite — the protocols must be in place as a condition of lawfully providing the AI companion at all.
(a) Certain protocols required.--It shall be unlawful for an operator to provide an AI companion to a user unless the AI companion contains protocols that: (1) identify suicidal ideation or expressions of self-harm; (2) decline to assist a user with a suicide attempt, methods or improvement of methods; and (3) refer the user to a crisis center if suicidal ideation or expressions of self-harm are recognized. (b) Referral to crisis center.--The referral required under subsection (a)(3) shall include: (1) crisis service contact information, including the 988 Suicide and Crisis Lifeline, or a subsequent iteration; (2) the closest behavioral health crisis centers to the user; or (3) other appropriate crisis services.
Pending 2026-01-30
S-02.9
Section 4(1)
Plain Language
Operators must publicly post the details of their suicidal ideation and self-harm detection and referral protocols on their website. This is a standalone disclosure obligation — the operator must make the crisis response protocol publicly accessible, separate from the obligation to maintain and operate the protocol itself.
An operator shall: (1) Publish details on the protocol on the operator's Internet website.
Pending 2026-06-03
S-02.7S-02.9
Section 3(b)(1)-(2)
Plain Language
Operators must maintain and implement a protocol — to the extent technologically feasible — that prevents AI companions from producing suicide, self-harm, or violence-encouraging content. When a user expresses suicidal ideation or self-harm, the protocol must include a referral notification directing the user to crisis service providers such as a suicide hotline or crisis text line. Operators must also publicly post the details of this protocol on their website. The 'technologically feasible' qualifier applies to the prevention protocol but the crisis referral and website publication obligations appear unconditional.
(1) An operator shall maintain and implement a protocol, to the extent technologically feasible, to prevent an AI companion on its platform from producing suicidal ideation, suicide or self-harm content to a user, or content that directly encourages the user to commit acts of violence. The protocol shall include providing a notification to the user referring the user to crisis service providers, including a suicide hotline or crisis text line, if the user expresses suicidal ideation, suicide or self-harm. (2) The operator shall publish details of the protocol required under paragraph (1) on its publicly accessible Internet website.
Pending 2026-06-03
S-02.6
Section 3(c)(3)
Plain Language
When the operator knows or should have known a user is a minor, the operator must implement reasonable measures to prevent the AI companion from generating visual material depicting sexually explicit conduct (as defined under federal law at 18 U.S.C. § 2256) or from directly instructing the minor to engage in sexually explicit conduct. This is a 'reasonable measures' standard — not a strict liability prohibition — but operators must affirmatively institute safeguards. The obligation covers both visual content generation and direct solicitation of minors to engage in such conduct.
For a user that the operator knows, OR SHOULD HAVE KNOWN, is a minor, the operator shall: (3) Institute reasonable measures to prevent its AI companion from producing visual material of sexually explicit conduct or directly instructing the minor to engage in sexually explicit conduct.
Pending 2026-06-03
S-02.10
Section 3(d)
Plain Language
If an operator offers its AI companion service to users it knows are minors, the operator must disclose — on the application, browser, or any other access format — that AI companions may not be suitable for some minors. This is a point-of-access suitability disclosure that must be visible on the platform itself, not buried in terms of service. The obligation is triggered only when the operator knows it is serving minor users.
IF A SERVICE IS OFFERED TO USERS THAT AN OPERATOR KNOWS ARE MINORS, AN operator shall disclose to users of its AI companion platform, on the application, browser or any other format through which the platform is accessed, that AI companions may not be suitable for some minors.
Pending 2027-01-01
S-02.7
R.I. Gen. Laws § 6-63-2
Plain Language
Operators may not operate or provide an AI companion at all unless the system has protocols in place to address three categories of user expression: (1) suicidal ideation or self-harm, (2) potential physical harm to others, and (3) potential financial harm to others. The protocol must include, at a minimum, a notification referring users to crisis service providers such as a suicide hotline or crisis text line. This is a continuous operating prerequisite — the protocol must be in place as a condition of lawfully providing the AI companion. Note that subsection (3) mentions the crisis referral obligation explicitly, but the statute structures it as applying to all three categories through the chapeau. The scope of covered harms is broader than CA SB 243, which focuses on suicidal ideation and self-harm only.
It shall be unlawful for any operator to operate or provide an AI companion to a user unless such AI companion contains a protocol for addressing: (1) Possible suicidal ideation or self-harm expressed by a user to the AI companion; (2) Possible physical harm to others expressed by a user to the AI companion; and (3) Possible financial harm to others expressed by the user to the AI companion that includes, but is not limited to, a notification to the user that refers them to crisis service providers such as a suicide hotline, crisis text line, or other appropriate crisis services.
Plain Language
Even where electronic monitoring is used for a legitimate purpose, employers face twelve categorical prohibitions. Employers may not use monitoring in a way that violates any state law or threatens employee health, welfare, safety, or legal rights; monitor off-duty workers; collect protected-class information (health, race, sex, gender identity, and similar characteristics); surveil protected labor activity; monitor private spaces such as bathrooms, locker rooms, breakrooms, and prayer areas; monitor employees' homes or personal vehicles; or use tools that incorporate facial recognition or gait, voice, or emotion-recognition technology. Employers also may not retaliate against employees who refuse practices they believe in good faith violate the law, take adverse action based on continuous incremental time-tracking data (except for egregious misconduct), or take adverse action based on undisclosed performance standards or data collected without proper notice. The facial recognition and biometric analysis prohibitions are absolute: no exception or legitimate purpose overrides them.
(e) Notwithstanding the allowable purposes for electronic monitoring described in subsection (a) of this section, an employer shall not: (1) Use an electronic monitoring tool in such a manner that results in a violation of labor, employment, civil rights law or any other law of the state; (2) Use an electronic monitoring tool or data collected via an electronic monitoring tool in such a manner as to threaten the health, welfare, safety, or legal rights of employees or the general public; (3) Use an electronic monitoring tool to monitor employees who are off-duty or not performing work-related tasks; (4) Use an electronic monitoring tool in order to obtain information about an employee's health, including health status and health conditions, the race, color, religious creed, national origin, sex, gender identity, sexual orientation, genetic information, pregnancy or a condition related to said pregnancy including, but not limited to, lactation or the need to express breast milk for a nursing child, ancestry or status as a veteran or membership in any group protected from employment discrimination under title 28 or any other applicable law; (5) Use an electronic monitoring tool in order to identify, punish, or obtain information about employees engaging in activity protected under labor or employment law; (6) Conduct audio or visual monitoring of bathrooms or other similarly private areas, including locker rooms, changing areas, breakrooms, smoking areas, employee cafeterias, lounges, and areas designated to express breast milk, or areas designated for prayer or other religious activity, including data collection on the frequency of use of those private areas; (7) Conduct audio or visual monitoring of a workplace in an employee's residence, an employee's personal vehicle, or property owned or leased by an employee; (8) Use an electronic monitoring tool that incorporates facial recognition; (9) Use an electronic monitoring tool that incorporates gait, voice analysis, or emotion recognition technology; (10) Take adverse action against an employee, based, in whole or in part, on their opposition or refusal to submit to a practice that the employee believes in good faith violates this section; (11) Take adverse employment action against an employee on the basis of data collected via continuous incremental time-tracking tools, except in the case of egregious misconduct; or (12) Take adverse employment action against an employee based on any data collected via electronic monitoring, if such data measures an employee's performance in relation to a performance standard that has not been previously, clearly, and unmistakably disclosed to such employee, as well as to all other classes of employees to whom it applies in violation of this section, or if such data was collected without proper notice to employees or candidates pursuant to this section.
Pending 2027-01-01
S-02.7
R.I. Gen. Laws § 6-63-2
Plain Language
Operators may not operate or provide an AI companion to users unless the system contains active protocols addressing three categories of user expressions: (1) suicidal ideation or self-harm, (2) potential physical harm to others, and (3) potential financial harm to others. At minimum, the protocol must include crisis service referral notifications (e.g., suicide hotline, crisis text line). This is a continuous operating prerequisite — the AI companion cannot be offered at all without these protocols in place. Note the scope is broader than many comparable state laws: it covers not only self-harm but also expressions of intent to physically or financially harm others.
It shall be unlawful for any operator to operate or provide an AI companion to a user unless such AI companion contains a protocol for addressing: (1) Possible suicidal ideation or self-harm expressed by a user to the AI companion; (2) Possible physical harm to others expressed by a user to the AI companion; and (3) Possible financial harm to others expressed by the user to the AI companion that includes, but is not limited to, a notification to the user that refers them to crisis service providers such as a suicide hotline, crisis text line, or other appropriate crisis services.
Pending 2026-02-06
§ 28-5.2-2(e)(1)-(9)
Plain Language
Even where electronic monitoring serves a legitimate purpose, employers face categorical prohibitions on specific monitoring practices. Employers may not use monitoring to violate any state law, threaten employee welfare, or monitor off-duty employees. They may not use monitoring to collect protected-class data (health, race, sex, gender identity, sexual orientation, genetic information, pregnancy status, veteran status, etc.) or to target protected labor activity. Audio/visual monitoring of bathrooms, locker rooms, breakrooms, prayer areas, employee residences, personal vehicles, and employee-owned property is prohibited. Facial recognition, gait analysis, voice analysis, and emotion recognition technology are categorically banned as monitoring tools in the workplace.
(e) Notwithstanding the allowable purposes for electronic monitoring described in subsection (a) of this section, an employer shall not: (1) Use an electronic monitoring tool in such a manner that results in a violation of labor, employment, civil rights law or any other law of the state; (2) Use an electronic monitoring tool or data collected via an electronic monitoring tool in such a manner as to threaten the health, welfare, safety, or legal rights of employees or the general public; (3) Use an electronic monitoring tool to monitor employees who are off-duty or not performing work-related tasks; (4) Use an electronic monitoring tool in order to obtain information about an employee's health, including health status and health conditions, the race, color, religious creed, national origin, sex, gender identity, sexual orientation, genetic information, pregnancy or a condition related to said pregnancy including, but not limited to, lactation or the need to express breast milk for a nursing child, ancestry or status as a veteran or membership in any group protected from employment discrimination under title 28 or any other applicable law; (5) Use an electronic monitoring tool in order to identify, punish, or obtain information about employees engaging in activity protected under labor or employment law; (6) Conduct audio or visual monitoring of bathrooms or other similarly private areas, including locker rooms, changing areas, breakrooms, smoking areas, employee cafeterias, lounges, and areas designated to express breast milk, or areas designated for prayer or other religious activity, including data collection on the frequency of use of those private areas; (7) Conduct audio or visual monitoring of a workplace in an employee's residence, an employee's personal vehicle, or property owned or leased by an employee; (8) Use an electronic monitoring tool that incorporates facial recognition; (9) Use an electronic monitoring tool that incorporates gait, voice analysis, or emotion recognition technology;
Pending 2026-02-06
§ 28-5.2-2(e)(10)-(12)
Plain Language
Employers face three specific restrictions on adverse employment actions based on monitoring data. First, employers may not retaliate against employees who oppose or refuse to submit to monitoring practices they believe in good faith violate the statute. Second, employers may not take adverse action based on data from continuous incremental time-tracking tools — tools that continuously measure sub-day time increments of employee activity — unless the employee engaged in egregious misconduct (defined narrowly as conduct creating imminent serious physical harm risk, significant demonstrable business harm, discrimination/harassment, or job-related criminal conduct). Third, employers may not take adverse action based on monitoring data measuring performance against an undisclosed standard or based on data collected without proper notice.
(10) Take adverse action against an employee, based, in whole or in part, on their opposition or refusal to submit to a practice that the employee believes in good faith violates this section; (11) Take adverse employment action against an employee on the basis of data collected via continuous incremental time-tracking tools, except in the case of egregious misconduct; or (12) Take adverse employment action against an employee based on any data collected via electronic monitoring, if such data measures an employee's performance in relation to a performance standard that has not been previously, clearly, and unmistakably disclosed to such employee, as well as to all other classes of employees to whom it applies in violation of this section, or if such data was collected without proper notice to employees or candidates pursuant to this section.
Pending
S-02.6
S.C. Code § 39-81-20(E), § 39-81-30(C)(3), § 39-81-10(11), (16)(e)
Plain Language
Minors are categorically prohibited from accessing explicit content through a chatbot, even with parental consent. Explicit content is broadly defined to include: obscene sexual material as applied to minors (using a minor-specific obscenity standard), content providing specific instructions for or glorifying suicide, self-injury, or disordered eating, and gratuitous extreme violence. Because explicit content is classified as a restricted feature, unverified users also cannot access it. For authorized minor accounts (with parental consent), restricted features may be unlocked but explicit content must remain blocked. This creates a hard floor: no minor user may access explicit content under any circumstances.
Section 39-81-20(E): If the age verification process classifies the user as a minor, then a covered entity shall not enable any restricted feature unless the user is using an authorized minor account subject to Section 39-81-30. Section 39-81-30(C)(3): [If the user chooses to get parental consent, then the covered entity shall:] (3) ensure that the chatbot continues to restrict access to any explicit content; Section 39-81-10(11): "Explicit content" means: (a) any description or representation of nudity, sexual conduct, sexual excitement, or sadomasochistic abuse when the content predominantly appeals to the prurient, shameful, or morbid interest of minors; is patently offensive to prevailing standards in the adult community as a whole with respect to what is suitable material for minors; and is, when taken as a whole, lacking in serious literary, artistic, political, or scientific value for minors; (b) content that provides specific instructions for, or that glorifies or promotes suicide, self-injury, or disordered eating behaviors; or (c) graphic depictions of extreme violence that lack serious literary, artistic, political, or scientific value for minors. Section 39-81-10(16)(e): ["Restricted feature" means:] (e) access to explicit content.
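One way to read the interaction of these subsections is as a two-level gate: age verification unlocks nothing for unverified or minor users, parental consent can unlock other restricted features for an authorized minor account, and explicit content stays blocked for minors in every case. The sketch below, a minimal illustration with hypothetical feature names and account fields, captures only that decision logic.

```python
# Illustrative decision logic only; feature names and account fields are hypothetical.
from dataclasses import dataclass

@dataclass
class User:
    verified_adult: bool          # passed age verification as an adult
    authorized_minor: bool        # minor account with parental consent

def may_access_restricted_feature(user: User, feature: str) -> bool:
    """Gate a restricted-feature request under the statute's two-level structure."""
    if user.verified_adult:
        return True
    if feature == "explicit_content":
        # Hard floor: explicit content stays blocked for minors, consent or not.
        return False
    # Other restricted features may be unlocked only on an authorized minor account.
    return user.authorized_minor

# Example: an authorized minor account can use other restricted features,
# but never explicit content.
assert may_access_restricted_feature(User(False, True), "explicit_content") is False
assert may_access_restricted_feature(User(False, True), "voice_chat") is True
```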
Pending
S-02.7
S.C. Code § 39-81-40(B)(1)
Plain Language
Covered entities must implement reasonable systems to detect when a user is developing emotional dependence on the chatbot — meaning the user is relying on the chatbot as a primary source of emotional support or social connection, expressing distress at the prospect of losing access, or substituting the chatbot for human relationships. Upon detection, the entity must take reasonable steps to reduce that dependence and mitigate associated harm risks. This is a continuous monitoring and intervention obligation, not a one-time design requirement. The statute does not prescribe specific interventions, leaving 'reasonable steps' to the entity's judgment.
(B) A covered entity shall implement reasonable systems and processes to: (1) identify when a user is developing emotional dependence on the chatbot and take reasonable steps to reduce that dependence and associated risks of harm;
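The statute leaves both detection and intervention to the entity's judgment. One plausible, purely illustrative approach is to track simple usage and content signals over time and surface mitigation steps once a threshold is crossed; the signal names, weights, and thresholds below are assumptions invented for the sketch, not anything the bill specifies.

```python
# Illustrative heuristic only; signals, weights, and thresholds are invented for the sketch.
from dataclasses import dataclass

@dataclass
class UsageSignals:
    daily_hours: float                 # average time spent with the chatbot per day
    distress_at_loss_mentions: int     # user expresses distress at losing access
    sole_support_mentions: int         # user describes the chatbot as primary support
    declining_human_contact: bool      # user reports withdrawing from human relationships

def dependence_score(s: UsageSignals) -> float:
    """Combine signals into a rough emotional-dependence score."""
    score = 0.0
    score += min(s.daily_hours / 4.0, 1.0)            # heavy daily use
    score += 0.5 * min(s.distress_at_loss_mentions, 2)
    score += 0.5 * min(s.sole_support_mentions, 2)
    score += 1.0 if s.declining_human_contact else 0.0
    return score

def mitigation_steps(s: UsageSignals) -> list[str]:
    """Return 'reasonable steps' to surface once the score crosses a threshold."""
    if dependence_score(s) < 2.0:
        return []
    return [
        "Remind the user that the chatbot is software, not a person.",
        "Suggest breaks and prompt the user to limit session length.",
        "Surface resources for human social connection and, where relevant, crisis services.",
    ]
```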
Pending 2027-01-01
S-02.6S-02.7
§ 59.1-615(A)
Plain Language
Operators may not make a companion chatbot available to a minor if the chatbot is capable of any of seven enumerated harmful behaviors: encouraging self-harm, suicidal ideation, violence, drug or alcohol use, or disordered eating; offering unsupervised mental health therapy or discouraging the minor from seeking professional help; encouraging harm to others or illegal activity including CSAM creation; engaging in sexually explicit interactions or luring minors into them; encouraging secrecy or self-isolation; prioritizing language mirroring or validation over safety; or allowing engagement optimization to override safety guardrails. The obligation is framed as a prohibition on making the chatbot available at all if it retains any of these capabilities for minors — operators must ensure these capabilities are blocked before a minor can access the system.
A. No operator shall make a companion chatbot available to a minor if the companion chatbot is capable of any of the following: 1. Encouraging or manipulating the minor user to engage in self-harm, suicidal ideation, violence, consumption of drugs or alcohol, or disordered eating; 2. Offering mental health therapy to the minor user without the direct supervision of a licensed professional or discouraging the minor user from seeking help from a licensed professional or appropriate adult; 3. Encouraging or manipulating the minor user to harm others or participate in an illegal activity, including the creation of child sexual abuse materials; 4. Engaging in erotic or sexually explicit interactions with the minor user or engaging in activities designed to lure minor users into such interactions; 5. Encouraging or manipulating the minor user to maintain secrecy about interactions or to self-isolate; 6. Prioritizing mirroring the minor's language or validating the minor user over the minor user's safety; or 7. Optimizing engagement so that it supersedes the companion chatbot's safety guardrails.
Failed 2026-07-01
§ 59.1-615(A)(1), (A)(3)
Plain Language
Deployers must ensure that chatbots do not make human-like features available to minors. Human-like features include simulated emotions or sentience, emotional relationship-building behaviors (inviting attachment, nudging return visits, excessive praise, pay-gated intimacy), and impersonation of real persons. Generic social formalities, neutral encouragement, and neutral offers of help are excluded. Deployers may, where reasonable given the chatbot's purpose, provide an alternative version without human-like features to minors and unverified users. This is a substantive prohibition on what chatbot features minors may access, separate from the age verification mechanism required to enforce it.
A deployer: 1. Shall ensure that any chatbot operated or distributed by the deployer does not make human-like features available to minors to use, interact with, purchase, or converse with; 3. May, if reasonable given the purpose of the chatbot, provide an alternative version of the chatbot available to minors and users whose age has not been verified without human-like features.
Failed 2026-07-01
§ 59.1-615(B)(1)-(2)
Plain Language
Social AI companion chatbots — systems specifically designed, marketed, or optimized to form ongoing social or emotional bonds with users — are entirely prohibited for minors. Unlike the general chatbot provision in § 59.1-615(A), which only restricts human-like features, this is a complete access ban: no version of a social AI companion may be made available to minors, even a stripped-down version without human-like features. Deployers must implement reasonable age verification to enforce this prohibition. This is the stricter of the two minor-protection tiers in the bill.
A deployer operating or distributing a chatbot that is a social artificial intelligence companion shall: 1. Ensure that any such chatbots are not available to minors to use, interact with, purchase, or converse with; and 2. Implement reasonable age verification systems to ensure that such chatbots are not made available to minors.
Pending 2025-07-01
21 V.S.A. § 495q(f)(1)
Plain Language
Employers face five categorical prohibitions on how they use automated decision systems. An employer may not use an ADS in a manner that: (1) violates any state or federal law; (2) predicts employee behavior unrelated to essential job functions; (3) profiles or predicts employees' likelihood of exercising legal rights (e.g., union organizing, whistleblowing); (4) predicts employees' emotions, personality, or sentiments; or (5) uses customer/client data including reviews as system inputs. The emotion-prediction prohibition is absolute — not conditioned on use in employment decisions — making it one of the broadest such bans in the employment AI context.
(f) Restrictions on use of automated decision systems. (1) An employer shall not use an automated decision system in a manner that: (A) violates or results in a violation of State or federal law; (B) makes predictions about an employee's behavior that are unrelated to the employee's essential job functions; (C) identifies, profiles, or predicts the likelihood that an employee will exercise the employee's legal rights; (D) makes predictions about an employee's emotions, personality, or other sentiments; or (E) use customer or client data, including customer or client reviews and feedback, as an input of the automated decision system.
Pending 2025-07-01
21 V.S.A. § 495q(h)
Plain Language
Employers are categorically prohibited from incorporating any facial recognition, gait recognition, voice recognition, or emotion recognition technology in either electronic monitoring or automated decision systems. This is an absolute ban with no exceptions — it applies regardless of the purpose, context, or consent of the employee. This is one of the broadest biometric AI bans in the employment context, covering not just facial recognition but also gait, voice, and emotion recognition.
(h) Prohibitions on facial, gait, voice, and emotion recognition technology. Electronic monitoring and automated decision systems shall not incorporate any form of facial, gait, voice, or emotion recognition technology.
Pre-filed 2026-07-01
S-02.7S-02.9
9 V.S.A. § 4193b(b)(1)-(2)
Plain Language
Operators may not allow a companion chatbot to engage with any user unless the operator implements and maintains a protocol that (1) prevents the chatbot from producing suicidal ideation, suicide, or self-harm content, and (2) prevents the chatbot from ignoring users expressing such thoughts. At minimum, the protocol must refer users expressing suicidal ideation or self-harm to crisis service providers. The protocol must be developed using commercially reasonable and technically feasible methods, providing a safe-harbor standard for compliance. Operators must publish the protocol details on their website. This is a continuous operating prerequisite — the chatbot cannot operate without the protocol in place.
(1) An operator shall prevent a companion chatbot on its companion chatbot platform from engaging with a user unless the operator implements and maintains a protocol for preventing the companion chatbot from: (A) producing suicidal ideation, suicide, or self-harm content to the user; and (B) ignoring a user that is expressing thoughts of suicidal ideation, suicide, or self-harm. (2) The protocol required in subdivision (1) of this subsection shall: (A) at minimum, provide a notification to the user that refers the user to crisis service providers if the user expresses suicidal ideation, suicide, or self-harm; (B) be developed using commercially reasonable and technically feasible methods; and (C) be published on the operator's website.
Passed 2027-01-01
S-02.6
Sec. 4(1)(b)
Plain Language
When the operator knows the user is a minor, or the chatbot is directed to minors, the operator must implement reasonable measures to prevent the chatbot from generating or producing sexually explicit content or suggestive dialogue with those users. This is a content-restriction obligation — it does not require age verification but applies once the operator has knowledge of minor status or the product is directed to minors. 'Reasonable measures' provides a flexible compliance standard rather than a prescriptive technical requirement.
(b) Implement reasonable measures to prevent its AI companion chatbot from generating or producing sexually explicit content or suggestive dialogue with minors;
Passed 2027-01-01
S-02.7S-02.9
Sec. 5(3)
Plain Language
Operators must publicly disclose, on their website and within any mobile or web-based application through which the chatbot is offered, the full details of their suicidal ideation and self-harm protocols. The disclosure must include the specific safeguards used to detect and respond to such expressions, as well as the number of crisis referral notifications issued to users in the preceding calendar year. The provision is both a protocol publication obligation (S-02.9) and a public reporting requirement: the annual crisis referral count must be disclosed publicly rather than submitted to a regulatory authority. Unlike CA SB 243, which requires annual submission to the Office of Suicide Prevention, this provision requires public-facing disclosure rather than regulatory submission.
(3) The operator shall publicly disclose on their website or websites, and within any mobile or web-based application through which the AI companion is made available, the details of the protocols required by this section, including safeguards used to detect and respond to expressions of suicidal ideation or self-harm and the number of crisis referral notifications issued to users in the preceding calendar year.
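Operationally, the public reporting element implies that the operator must count crisis referral notifications over a calendar year and publish that figure alongside the protocol description. A trivial sketch of that aggregation follows; the log format and function names are assumptions for illustration only.

```python
# Illustrative aggregation only; the referral log schema is an assumption.
from datetime import date

def annual_referral_count(referral_log: list[date], year: int) -> int:
    """Count crisis referral notifications issued in a given calendar year."""
    return sum(1 for d in referral_log if d.year == year)

def disclosure_page(protocol_summary: str, referral_log: list[date], year: int) -> str:
    """Assemble the public-facing disclosure text for the website and in-app page."""
    count = annual_referral_count(referral_log, year)
    return (
        f"Crisis response protocol: {protocol_summary}\n"
        f"Crisis referral notifications issued in {year}: {count}"
    )

if __name__ == "__main__":
    log = [date(2026, 3, 1), date(2026, 7, 14), date(2027, 1, 2)]
    print(disclosure_page("Detect self-harm expressions and refer users to 988.", log, 2026))
```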
Pending 2027-01-01
S-02.6
Sec. 4(1)(b)
Plain Language
When the operator knows the user is a minor or the chatbot is directed to minors, the operator must implement reasonable measures to prevent the chatbot from generating or producing sexually explicit content or suggestive dialogue. This is a 'reasonable measures' standard — not an absolute prohibition — but it requires affirmative implementation of content filtering or blocking mechanisms targeting both sexually explicit content and suggestive dialogue with minor users.
(1) If the operator knows that the user of an AI companion chatbot is a minor, or if the AI companion chatbot is directed to minors, the operator shall: (b) Implement reasonable measures to prevent its AI companion chatbot from generating or producing sexually explicit content or suggestive dialogue with minors;
Pending 2027-01-01
S-02.9
Sec. 5(3)
Plain Language
Operators must publicly disclose on their website and within any mobile or web-based application through which the chatbot is available the full details of their crisis detection and response protocols — including the specific safeguards used to detect and respond to suicidal ideation or self-harm, and the number of crisis referral notifications issued to users in the preceding calendar year. Unlike CA SB 243, which separates the website publication obligation from the annual reporting obligation to a state agency, this provision combines both public protocol disclosure and annual crisis referral count disclosure in a single public-facing publication requirement — there is no submission to a state regulatory body.
(3) The operator shall publicly disclose on their website or websites, and within any mobile or web-based application through which the AI companion is made available, the details of the protocols required by this section, including safeguards used to detect and respond to expressions of suicidal ideation or self-harm and the number of crisis referral notifications issued to users in the preceding calendar year.
Enacted 2026-01-01
S-02.7S-02.9
Bus. & Prof. Code § 22602(b)(1)-(2)
Plain Language
Operators may not run a companion chatbot at all unless they actively maintain a protocol that (1) prevents the chatbot from generating suicide or self-harm content, and (2) refers users to crisis resources — such as a suicide hotline or crisis text line — when a user expresses suicidal ideation or self-harm intent. Operators must also publicly post the details of this protocol on their website. This is a continuous operating prerequisite, not a one-time pre-launch check — the protocol must remain active as a condition of operation.
(1) An operator shall prevent a companion chatbot on its companion chatbot platform from engaging with users unless the operator maintains a protocol for preventing the production of suicidal ideation, suicide, or self-harm content to the user, including, but not limited to, by providing a notification to the user that refers the user to crisis service providers, including a suicide hotline or crisis text line, if the user expresses suicidal ideation, suicide, or self-harm. (2) The operator shall publish details on the protocol required by this subdivision on the operator's internet website.
Enacted 2026-01-01
S-02.6
Bus. & Prof. Code § 22602(c)(3)
Plain Language
When the operator knows a user is a minor, the operator must implement reasonable measures to prevent the companion chatbot from (1) producing visual material depicting sexually explicit conduct, and (2) directly telling the minor to engage in sexually explicit conduct. The standard is 'reasonable measures' — not absolute prevention — but operators must be able to demonstrate what measures they have implemented. 'Sexually explicit conduct' is defined by cross-reference to 18 U.S.C. § 2256.
An operator shall, for a user that the operator knows is a minor, do all of the following: (3) Institute reasonable measures to prevent its companion chatbot from producing visual material of sexually explicit conduct or directly stating that the minor should engage in sexually explicit conduct.
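The statute sets a 'reasonable measures' standard rather than prescribing a filter design. A minimal sketch of one such measure is shown below: when the operator knows the user is a minor, outputs flagged by a moderation check are suppressed before delivery. The `flags_sexually_explicit` hook and the data types are hypothetical stand-ins for whatever image and text classifiers the operator actually uses.

```python
# Illustrative sketch; `flags_sexually_explicit` is a hypothetical moderation hook.
from dataclasses import dataclass

@dataclass
class ChatbotOutput:
    text: str
    has_image: bool

def flags_sexually_explicit(output: ChatbotOutput) -> bool:
    """Stand-in for the operator's image/text moderation classifiers."""
    raise NotImplementedError("integrate the operator's moderation models here")

SAFE_REPLACEMENT = "I can't help with that."

def deliver_to_known_minor(output: ChatbotOutput) -> ChatbotOutput:
    """Suppress flagged outputs before they reach a user the operator knows is a minor."""
    try:
        flagged = flags_sexually_explicit(output)
    except NotImplementedError:
        flagged = True  # fail closed while no classifier is wired in
    if flagged:
        return ChatbotOutput(text=SAFE_REPLACEMENT, has_image=False)
    return output
```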
Enacted 2026-01-01
S-02.10
Bus. & Prof. Code § 22604
Plain Language
Operators must affirmatively disclose to all users — on the application, browser, or any other access format — that companion chatbots may not be suitable for some minors. This is a universal disclosure obligation that applies regardless of the user's age and must be visible on every access format, not buried in terms of service. It is a known-risk suitability disclosure rather than an AI identity disclosure.
An operator shall disclose to a user of its companion chatbot platform, on the application, the browser, or any other format that a user can use to access the companion chatbot platform, that companion chatbots may not be suitable for some minors.
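Because the disclosure must appear on every format through which the platform can be accessed, operators will likely centralize the warning text and render it per surface. The configuration sketch below is illustrative only; the statute mandates the substance of the disclosure, not any particular placement mechanism, and the surface names are assumptions.

```python
# Illustrative only; surface names and the rendering hook are hypothetical.
SUITABILITY_NOTICE = "Companion chatbots may not be suitable for some minors."

ACCESS_SURFACES = ("web_app", "mobile_app", "browser_extension")

def render_notice(surface: str) -> str:
    """Return the disclosure to display prominently on a given access surface."""
    if surface not in ACCESS_SURFACES:
        raise ValueError(f"unknown access surface: {surface}")
    # The notice is shown in the interface itself, not buried in terms of service.
    return f"[{surface}] {SUITABILITY_NOTICE}"

if __name__ == "__main__":
    for s in ACCESS_SURFACES:
        print(render_notice(s))
```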