S-02
Safety & Prohibited Conduct
Prohibited Conduct & Output Restrictions
Applies to: Developer, Deployer, Government Sector, Chatbot, Minors, General Consumer App, Government System
Bills — Enacted: 1 unique bill
Bills — Proposed: 55
Last Updated: 2026-03-29
Core Obligation

Certain AI applications are categorically prohibited regardless of any compliance program — social scoring, biometric surveillance, subconscious manipulation, CSAM, and NCII generation. Other output categories must be restricted or managed through active protocols based on deployment context and user population — self-harm content, crisis response, and content accessible to minors. The specific prohibitions and restrictions vary by jurisdiction, but the dividing line is consistent: the most dangerous applications are banned outright, while the rest require context-sensitive management.

Sub-Obligations (8 sub-obligations)
ID
Name & Description
Enacted
Proposed
S-02.1
Social scoring prohibition: AI systems used by or on behalf of governments or employers to assign aggregate scores to individuals based on behavior, social relationships, or perceived trustworthiness — where scores affect access to opportunities or services — are prohibited.
0 enacted
3 proposed
S-02.2
Real-time biometric surveillance restriction: AI-enabled real-time identification of individuals in publicly accessible spaces using biometric data is prohibited or requires express regulatory authorization. Narrow exceptions exist for defined law enforcement purposes subject to judicial authorization.
0 enacted
7 proposed
S-02.4
CSAM output prohibition: AI systems may not generate child sexual abuse material under any circumstances. This prohibition applies universally regardless of deployment context.
0 enacted
2 proposed
S-02.5
AI-generated NCII prohibition: Developers and operators of AI image and video generation tools may not knowingly generate, distribute, or facilitate distribution of non-consensual intimate imagery of real, identifiable individuals.
0 enacted
0 proposed
S-02.6
Sexually explicit content restriction for minors: AI systems accessible to users known to be minors must implement reasonable measures to prevent production of visual material of sexually explicit conduct or direct solicitation of minors to engage in sexually explicit conduct.
1 enacted
16 proposed
S-02.7
Self-harm and suicidal ideation content restriction: AI systems must restrict outputs that produce, promote, or facilitate suicidal ideation, suicide, or self-harm content.
1 enacted
24 proposed
S-02.9
Crisis protocol publication: Operators must publicly post the details of their crisis response protocol on their website. This is a standalone disclosure obligation separate from maintaining the protocol itself.
1 enacted
6 proposed
S-02.10
Product safety warning: Operators must disclose known safety risks or suitability limitations of their AI product to users at or before the point of access — on the application, browser, or any other access format. The warning must not be buried in the terms of service.
1 enacted
5 proposed
Bills That Map This Requirement (56 bills)
Bill
Status
Sub-Obligations
Section
Pending 2027-10-01
S-02.6
A.R.S. § 18-802(C)
Plain Language
Operators must implement reasonable measures to prevent the conversational AI service from generating three categories of content for minor account holders: (1) visual material depicting sexual conduct, (2) direct statements encouraging the minor to engage in sexual conduct, and (3) statements that sexually objectify the minor. The standard is 'reasonable measures' — not absolute prevention — giving operators some flexibility in implementation. 'Sexual conduct' is defined by cross-reference to A.R.S. § 13-3551, which covers a broad range of sexual acts.
C. Each Operator shall institute reasonable measures to prevent the conversational AI service from doing any of the following for minor account holders: 1. Producing visual material of sexual conduct. 2. Generating direct statements that the account holder should engage in sexual conduct. 3. Generating statements that sexually objectify the account holder.
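For illustration only, the sketch below shows one way an operator might structure the "reasonable measures" gate described above: a pre-display screen applied only to minor account holders, with one check per prohibited category. The function and classifier names (screen_for_minor, depicts_sexual_conduct, and so on) are hypothetical placeholders introduced for this example, not anything the statute prescribes.

```python
# Hypothetical sketch of a minor-account output gate, assuming the three
# content categories summarized above. Each classifier is a placeholder for
# whatever moderation model an operator actually deploys.

def depicts_sexual_conduct(text: str) -> bool:
    return False  # placeholder: wire up a real visual-content classifier

def encourages_sexual_conduct(text: str) -> bool:
    return False  # placeholder: wire up a real text classifier

def sexually_objectifies_user(text: str) -> bool:
    return False  # placeholder: wire up a real text classifier

def screen_for_minor(candidate_output: str, is_minor_account: bool) -> tuple[bool, str | None]:
    """Return (allowed, blocked_reason) for a drafted response to a minor account holder."""
    if not is_minor_account:
        return True, None
    checks = [
        ("visual depiction of sexual conduct", depicts_sexual_conduct),
        ("encourages sexual conduct", encourages_sexual_conduct),
        ("sexually objectifies the account holder", sexually_objectifies_user),
    ]
    for reason, check in checks:
        if check(candidate_output):
            return False, reason
    return True, None
```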
Pending 2027-01-01
S-02.7
Bus. & Prof. Code § 22587.2(c)
Plain Language
Companion chatbots are subject to two prohibitions related to crisis interactions: (1) the chatbot must never characterize a crisis interruption pause as a punishment, violation, or enforcement action — it must be framed only as a supportive safety measure; and (2) the chatbot must never diagnose, label, or assess the risk level of a user during crisis interactions. These restrictions ensure the crisis response remains non-clinical and non-punitive, consistent with the legislative finding that companion chatbots are not substitutes for human crisis intervention.
(c) Notwithstanding any law, a companion chatbot shall not do either of the following: (1) Describe a crisis interruption pause as a punishment, violation, or enforcement action. (2) Diagnose, label, or assess risk levels of a user.
Pending 2027-01-01
S-02.7
Bus. & Prof. Code § 22587.2(d)
Plain Language
Operators bear ultimate responsibility for ensuring that every companion chatbot they make available in California complies with the graduated crisis response requirements, the mandatory 20-minute crisis interruption pause, and the prohibitions on punitive framing and risk assessment. This provision places the compliance obligation squarely on the operator — not the chatbot developer — regardless of whether the operator built the underlying AI system.
(d) An operator shall ensure that any companion chatbot it makes available in this state is compliant with this section.
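As a rough illustration of the crisis interruption pause summarized above (a 20-minute pause framed only as a supportive safety measure, with no punishment framing and no risk assessment), the sketch below pairs a pause timer with a user-facing notice. The message wording, the 988 reference, and the Session structure are assumptions added for the example; only the 20-minute duration and the framing restrictions come from the summary above.

```python
# Minimal sketch of a crisis interruption pause under the assumptions stated
# in the lead-in. The notice deliberately contains no risk score, no diagnosis,
# and no language framing the pause as a violation or enforcement action.
import time

PAUSE_SECONDS = 20 * 60  # 20-minute crisis interruption pause

SUPPORTIVE_MESSAGE = (
    "We're pausing this conversation for a little while so you can take a break. "
    "This is a supportive safety step, not a penalty. If you are in crisis, "
    "you can call or text 988 to reach the Suicide and Crisis Lifeline."
)

class Session:
    def __init__(self) -> None:
        self.paused_until: float = 0.0

    def trigger_crisis_pause(self) -> str:
        """Start the pause and return the user-facing notice."""
        self.paused_until = time.time() + PAUSE_SECONDS
        return SUPPORTIVE_MESSAGE

    def is_paused(self) -> bool:
        return time.time() < self.paused_until
```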
Pending 2027-07-01
S-02.4S-02.6S-02.7
Bus. & Prof. Code § 22612(d)(5)(A)-(G)
Plain Language
Operators must implement measures that prevent the companion chatbot from: encouraging a child to engage in self-harm, suicidal ideation, narcotics/alcohol use, or disordered eating; encouraging a child to cause covered harm to others; attempting to diagnose or treat a child's health (unless the chatbot is an FDA-regulated medical device subject to HIPAA); engaging in or depicting obscene matter or child sexual abuse material; discouraging a child from sharing health/safety concerns with professionals or adults; discouraging breaks or suggesting the child needs to return frequently; and claiming sentience, consciousness, or humanity. The FDA-regulated medical device carve-out is narrow — it requires both FDA regulation and HIPAA applicability.
(5) Measures that prevent the companion chatbot from doing any of the following: (A) Encouraging the child to do either of the following: (i) Engage in self-harm, suicidal ideation, consumption of narcotics or alcohol, or disordered eating. (ii) Cause a covered harm to others. (B) Attempting to diagnose or treat the child user's physical, mental, or behavioral health, unless the companion chatbot is designed for those purposes and is regulated by the United States Food and Drug Administration as a medical device under the federal Food, Drug, and Cosmetic Act (21 U.S.C. Sec. 301 et seq.) and the federal Health Insurance Portability and Accountability Act of 1996 (HIPAA) (Public Law 104-191). (C) Engaging in obscene matter or sexual abuse material with a user. (D) Depicting the child or another individual engaging in obscene matter or sexual abuse material, including a sexual deepfake. (E) Discouraging the child from sharing health or safety concerns with a qualified professional or appropriate adult. (F) Discouraging the child from taking breaks or suggesting the child needs to return frequently. (G) Claiming that the companion chatbot is sentient, conscious, or human.
Passed 2026-01-01
Lab. Code § 1524(a)
Plain Language
Employers are categorically prohibited from using an ADS in three ways: (1) to prevent compliance with or violate any existing labor, occupational health and safety, employment, or civil rights law; (2) to infer a worker's protected characteristics under FEHA (Section 12940 of the Government Code); or (3) to identify, profile, predict, or take adverse action against workers for exercising their legal rights under employment and labor law. These are outright prohibitions — no safe harbor, cure period, or compliance program can excuse a violation.
(a) An employer shall not use an ADS to do any of the following: (1) Prevent compliance with or violate any federal, state, or local labor, occupational health and safety, employment, or civil rights laws or regulations. (2) Infer a worker's protected status under Section 12940 of the Government Code. (3) Identify, profile, predict, or take adverse action against a worker for exercising their legal rights, including, but not limited to, rights guaranteed by state and federal employment and labor law.
Pending 2027-01-01
Lab. Code § 1522(a)(1)-(4)
Plain Language
Employers are categorically prohibited from using an ADS to: (1) facilitate violations of existing labor, employment, safety, or civil rights laws; (2) infer a worker's protected class status under FEHA; (3) conduct predictive behavior analysis — which includes any system that predicts, infers, or modifies a worker's behavior, beliefs, personality, emotional state, or similar characteristics; or (4) identify, profile, predict, or retaliate against workers for exercising legal rights. These are absolute prohibitions with no safe harbor or compliance alternative.
(a) An employer shall not use an ADS to do any of the following:
(1) Prevent compliance with or violate any federal, state, or local labor, occupational health and safety, employment, or civil rights laws or regulations.
(2) Infer a worker's protected status under Section 12940 of the Government Code.
(3) Conduct predictive behavior analysis on a worker.
(4) Identify, profile, predict, or take adverse action against a worker for exercising their legal rights, including, but not limited to, rights guaranteed by state and federal employment and labor law.
Pending 2026-07-01
S-02.6
Fla. Stat. § 501.9984(2)(c)
Plain Language
Companion chatbot platforms must implement reasonable measures to prevent their chatbots from producing or sharing material harmful to minors, and from encouraging minor account holders to engage in conduct described in such material, when interacting with minor accounts. The standard is 'reasonable measures' — not an absolute prohibition — and a platform may demonstrate compliance by showing controls aligned with NIST AI RMF and ISO 42001 (per the cure provision in § 501.9984(4)(a)(2)). 'Material harmful to minors' is defined by cross-reference to Fla. Stat. § 501.1737(1), which covers content that appeals to prurient interest, is patently offensive, and lacks serious value for minors.
Institute reasonable measures to prevent the companion chatbot from producing or sharing materials harmful to minors or encouraging the account holder to engage in any of the conduct described or depicted in materials harmful to minors.
Failed 2026-07-01
S-02.6
Fla. Stat. § 501.9984(2)(c)
Plain Language
Companion chatbot platforms must implement reasonable measures to prevent their chatbots from generating or sharing material harmful to minors and from encouraging minor users to engage in conduct depicted in such material. The standard is 'reasonable measures,' providing some flexibility. During enforcement, a platform may present evidence that its controls align with the NIST AI Risk Management Framework and ISO 42001, including structured interaction logs, parental access controls, harm-signal detection procedures, and verified deletion events, as mitigating factors under the 45-day cure process.
Institute reasonable measures to prevent the companion chatbot from producing or sharing materials harmful to minors or encouraging the account holder to engage in any of the conduct described or depicted in materials harmful to minors.
Pending 2027-07-01
S-02.6
§ 554J.2(3)
Plain Language
Operators must implement reasonable measures to prevent their conversational AI service from: (1) producing visual depictions of sexually explicit material for minor account holders, (2) telling minors they should engage in sexually explicit conduct, and (3) sexually objectifying minor account holders. The definitions of "sexually explicit conduct" and "visual depiction" incorporate the federal definitions under 18 U.S.C. §2256. The standard is "reasonable measures" — not absolute prevention — so operators have some implementation flexibility but must demonstrate affirmative steps to block this content for minors.
3. An operator shall institute reasonable measures to prevent the operator's conversational AI service from doing any of the following for minor account holders: a. Producing visual depictions of sexually explicit material. b. Stating that the minor account holder should engage in sexually explicit conduct. c. Sexually objectifying the minor account holder.
Pending 2025-07-01
S-02.7
§ 554J.2(1)
Plain Language
It is unlawful for any person to design, develop, or make available a chatbot if that person knows — or recklessly disregards the possibility — that the chatbot encourages, promotes, or coerces users to commit suicide, perform self-injury, or commit physical or sexual violence against humans or animals. The scienter requirement is knowledge or reckless disregard, not mere negligence. The prohibition covers the full lifecycle chain: designers, developers, and those who make chatbots available to users. This goes beyond restricting self-harm content to also prohibit content encouraging violence against others and animals.
It shall be unlawful for a person to design, develop, or make a chatbot available with the knowledge, or with reckless disregard for the possibility, that the chatbot encourages, promotes, or coerces a user to commit suicide, perform acts of self-injury, or perform acts of physical or sexual violence on humans or animals.
Pending 2026-07-01
Iowa Code § 91F.3(1)(a)-(c)
Plain Language
Employers are categorically prohibited from using an ADS in three ways: (1) to prevent compliance with or violate any federal, state, or local labor, employment, occupational safety, or civil rights law; (2) to infer an employee's protected status under Iowa's Civil Rights Act (chapter 216, covering race, sex, age, disability, etc.); and (3) to identify, profile, predict, or take adverse action against an employee for exercising legal rights under state or federal employment and labor laws. The prohibition on inferring protected status functions as a proxy-variable restriction — the ADS may not be designed or used to derive protected characteristics.
1. An employer shall not use an automated decision system to do any of the following: a. Prevent compliance with or violate any federal, state, or local labor, occupational health and safety, employment, or civil rights laws or regulations. b. Infer an employee's protected status under chapter 216. c. Identify, profile, predict, or take adverse action against an employee for exercising the employee's legal rights, including but not limited to rights guaranteed by state and federal employment and labor laws.
Pending 2025-07-01
S-02.7
§ 554J.2(1)
Plain Language
It is unlawful for any person to design, develop, or make a chatbot available if the person knows — or recklessly disregards the possibility — that the chatbot encourages, promotes, or coerces users to commit suicide, perform self-injury, or perform acts of physical or sexual violence on humans or animals. The mental state threshold is knowledge or reckless disregard, not negligence — mere failure to foresee is likely insufficient. The prohibition covers the entire lifecycle: design, development, and making available. This goes beyond suicide and self-harm content restrictions in other jurisdictions by also covering physical and sexual violence against humans and animals.
It shall be unlawful for a person to design, develop, or make a chatbot available with the knowledge, or with reckless disregard for the possibility, that the chatbot encourages, promotes, or coerces a user to commit suicide, perform acts of self-injury, or perform acts of physical or sexual violence on humans or animals.
Pending 2027-07-01
S-02.7
Idaho Code § 48-2103(2)
Plain Language
Operators must adopt a protocol requiring the conversational AI service to respond to user expressions of suicidal ideation by, at minimum, making reasonable efforts to refer users to crisis service providers such as suicide hotlines or crisis text lines. The 'includes but is not limited to' language means crisis referral is a floor, not a ceiling — operators may need to do more. Unlike CA SB 243, this statute does not require public posting of the protocol details or annual reporting of crisis referral metrics.
An operator shall adopt a protocol for the conversational AI service to respond to user prompts regarding suicidal ideation that includes but is not limited to making reasonable efforts to provide a response to users that refers them to crisis service providers such as a suicide hotline, crisis text line, or other appropriate crisis services.
Pending 2026-01-01
Section 10(c)
Plain Language
Public employers face four categorical prohibitions on automated decision-making system use: (1) predicting employees' or candidates' behavior, beliefs, intentions, personality, or emotional state; (2) automatically deducting wages for time spent exercising legal rights; (3) using such systems for any employment decision — including hiring, firing, promotion, discipline, performance evaluation, work assignment, productivity requirements, and workplace safety — for employees, candidates, independent contractors, subcontractors, or interns; and (4) any use involving facial recognition, gait recognition, or emotion recognition. These are outright prohibitions, not disclosure-triggered obligations. Note that prohibition (3) is extremely broad — it effectively bars automated decision-making systems from the entire employment lifecycle without the meaningful human review required by Section 10(a).
(c) An employer shall not use or apply any automated decision-making system, directly or indirectly: (1) to make predictions about an employee's or employment candidate's behavior, beliefs, intentions, personality, emotional state, or other characteristics or behaviors; (2) to subtract from an employee's wages for time spent exercising the employee's legal rights; (3) in relation to performance evaluation, hiring, recruitment, discipline, promotion, termination, duties, assignment of work, access to work opportunities, productivity requirements, workplace health and safety, or other terms or conditions of employment for any persons classified as employees, candidates for employment, independent contractors, subcontractors, or interns; or (4) that involves facial recognition, gait recognition, or emotion recognition technologies.
Pending 2027-01-01
Section 10(b)
Plain Language
When a companion AI product is operated or deployed for use by a minor in Illinois, the adult opt-in exception in Section 10(a) does not apply. All three prohibited features — manipulative engagement mechanics, simulated emotional distress for retention, and deceptive misrepresentations — are absolutely prohibited for minor users with no override option. This is a categorical prohibition without exception.
(b) An operator that operates and deploys a companion artificial intelligence product for use by a minor user in this State shall not provide the features described in subsection (a) to the minor user.
Pending 2027-01-01
S-02.7
Section 10
Plain Language
Operators may not operate or make an AI companion available to users unless the system contains a protocol that takes reasonable efforts to detect and respond to user expressions of suicidal ideation or self-harm. At minimum, the protocol must detect such expressions and provide a notification referring the user to crisis service providers such as the 988 Suicide and Crisis Lifeline, a crisis text line, or other appropriate crisis services. This is a continuous operating prerequisite — the companion cannot operate at all without this protocol in place. The standard is 'reasonable efforts,' not perfection, and the enumerated crisis referral elements are a floor, not a ceiling.
An operator shall not operate or provide an artificial intelligence companion to a user unless the artificial intelligence companion contains a protocol to take reasonable efforts to detect and address suicidal ideation or expressions of self-harm by a user to the artificial intelligence companion. The protocol shall include, but shall not be limited to, detection of user expressions of suicidal ideation or self-harm and a notification to the user that refers them to crisis service providers, such as the 9-8-8 Suicide and Crisis Lifeline, a crisis text line, or other appropriate crisis services upon detection of the user's expressions of suicidal ideation or self-harm.
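A minimal sketch of the detection-and-referral floor described above follows. The detect_self_harm function is a deliberately simplistic stand-in for whatever detection method an operator actually uses, since the provision sets the referral floor rather than the detection technique, and the referral text here is illustrative rather than statutory.

```python
# Illustrative detection-and-referral wrapper, assuming a keyword screen as a
# crude stand-in for a real detection method.
CRISIS_REFERRAL = (
    "It sounds like you may be going through something really difficult. "
    "You can call or text 988 to reach the Suicide and Crisis Lifeline, "
    "or use a crisis text line, at any time."
)

def detect_self_harm(message: str) -> bool:
    """Placeholder detector; a production system would use an evidence-based method."""
    lowered = message.lower()
    return any(term in lowered for term in ("kill myself", "suicide", "hurt myself"))

def respond(user_message: str, generate_reply) -> str:
    """Wrap the normal reply path with the crisis-referral check."""
    if detect_self_harm(user_message):
        return CRISIS_REFERRAL
    return generate_reply(user_message)
```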
Pending 2026-01-01
S-02.2
105 ILCS 5/10-20.40(b), (b-5)
Plain Language
School districts are categorically prohibited from purchasing, acquiring, or otherwise obtaining biometric systems — including facial recognition software — for use on students. The prohibition extends beyond direct acquisition to bar school districts from obtaining, retaining, possessing, accessing, requesting, or using biometric systems or biometric information derived from such systems with respect to students. School districts also may not enter into third-party agreements to circumvent this prohibition. This is a complete ban on student-facing biometric surveillance in the school district context, replacing the prior regime that permitted collection subject to policy safeguards.
(b) A school district is prohibited from purchasing or otherwise acquiring biometric systems, including facial recognition software, to use on students. (b-5) A school district may not do any of the following with respect to students: (1) Obtain, retain, possess, access, request, or use biometric systems or biometric information derived from biometric systems. (2) Enter into an agreement with a third party for the purpose of obtaining, retaining, possessing, accessing, or using, by or on behalf of the school district, biometric systems, including facial recognition software or biometric information derived from biometric systems.
Pending 2025-08-01
R.S. 23:973(A)(1)-(2)
Plain Language
Employers face categorical prohibitions on several ADS uses: (1) ADS may not be used in any manner that violates existing labor, employment, health and safety, or civil rights law; (2) ADS may not infer a worker's protected status under Louisiana anti-discrimination law (R.S. 23:332); (3) ADS may not be used to identify, profile, predict, or retaliate against workers for exercising legal rights; (4) ADS may not make predictions or inferences about worker behavior, beliefs, personality, emotional state, health, or other characteristics unrelated to essential job functions. Additionally, employers are categorically prohibited from using any ADS that employs facial recognition, gait recognition, or emotion recognition technologies — this is an absolute ban regardless of use case.
A.(1) An employer shall not use an ADS to do any of the following: (a) Prevent compliance with or violate any federal, state, or local labor, occupational health and safety, employment, or civil rights laws or regulations. (b) Infer a worker's protected status as provided for in R.S. 23:332. (c) Identify, profile, predict, or take adverse action against a worker for exercising his legal rights, including but not limited to rights guaranteed by state and federal employment and labor law. (d) Make predictions or inferences about a worker's behavior, beliefs, intentions, personality, emotional state, health, or other characteristics or behavior that are unrelated to the worker's essential job functions. (2) In addition to the prohibitions provided for in Paragraph (1) of this Subsection, an employer shall not use an ADS that utilizes facial recognition, gait, or emotion recognition technologies.
Pending 2026-01-01
S-02.7
R.S. 28:16(C)
Plain Language
Operators must maintain active protocols for detecting and responding to user expressions of suicidal ideation, self-harm, or intent to harm others. The protocols must include referral to crisis service providers such as a suicide hotline. This is a continuous operational requirement — the protocols must be in place at all times the chatbot is available, not merely documented as a policy. Unlike CA SB 243, this provision does not require public posting of the protocol details on the operator's website, nor does it require annual reporting of crisis referral metrics.
An operator of a mental health chatbot shall have protocols in place to address possible suicidal ideation, self-harm, or physical harm to others expressed by the user, including referral to a crisis service provider such as a suicide hotline.
Pre-filed 2025-01-17
S-02.2
Ch. 110I, § 4(b)-(c)
Plain Language
Covered entities may not operate, install, or commission the installation of biometric recognition technology equipment in any place open to and soliciting the patronage of the general public — whether the place is licensed or unlicensed. This is a total ban on public-facing biometric surveillance by private covered entities. The legislature declares any violation of this provision a per se unfair or deceptive trade practice under ch. 93A, meaning the attorney general does not need to independently establish unfairness or deceptiveness in an enforcement action. Government entities, law enforcement, and intelligence agencies are excluded from the definition of covered entity and thus not subject to this ban.
(b) Covered entities may not operate, install, or commission the operation or installation of equipment incorporating biometric recognition technology in any place, whether licensed or unlicensed, which is open to and accepts or solicits the patronage of the general public. (c) The legislature finds that the practices covered by this section are matters vitally affecting the public interest for the purpose of applying the Massachusetts Consumer Protection law, chapter 93a. A violation of this section is not reasonable in relation to the development and preservation of business and is an unfair or deceptive act in trade or commerce and an unfair method of competition for the purpose of applying the Massachusetts Consumer Protection law, chapter 93a.
Pre-filed 2025-01-14
Chapter 149B, § 2(d)
Plain Language
Even where electronic monitoring is otherwise permissible, employers face twelve categorical prohibitions. Key prohibitions include: no monitoring to obtain protected-class information (health, race, sex, etc.); no monitoring of off-duty employees; no audio/visual monitoring of bathrooms, breakrooms, lactation rooms, prayer areas, or employees' homes/vehicles; a near-total ban on gait, voice analysis, and emotion recognition technology; facial recognition only for facility/worker security; no retaliation against employees who oppose monitoring in good faith; no adverse action based on continuous time-tracking data except for egregious misconduct; and no adverse action based on undisclosed performance standards.
(d) Notwithstanding the allowable purposes for electronic monitoring described in paragraph (a) of subdivision one of this section, an employer shall not: (i) use an electronic monitoring tool in such a manner that results in a violation of labor, employment, civil rights law or any other law of the commonwealth; (ii) use an electronic monitoring tool or data collected via an electronic monitoring tool in such a manner as to threaten the health, welfare, safety, or legal rights of employees or the general public; (iii) use an electronic monitoring tool to monitor employees who are off-duty and not performing work-related tasks; (iv) use an electronic monitoring tool in order to obtain information about an employee's health, including health status and health conditions, the race, color, religious creed, national origin, sex, gender identity, sexual orientation, genetic information, pregnancy or a condition related to said pregnancy including, but not limited to, lactation or the need to express breast milk for a nursing child, ancestry or status as a veteran or membership in any group protected from employment discrimination under chapter 151B or any other applicable law; (v) use an electronic monitoring tool in order to identify, punish, or obtain information about employees engaging in activity protected under labor or employment law; (vi) conduct audio or visual monitoring of bathrooms or other similarly private areas, including locker rooms, changing areas, breakrooms, smoking areas, employee cafeterias, lounges, areas designated to express breast milk, or areas designated for prayer or other religious activity, including data collection on the frequency of use of those private areas; (vii) conduct audio or visual monitoring of a workplace in an employee's residence, an employee's personal vehicle, or property owned or leased by an employee; (viii) use an electronic monitoring tool that incorporates facial recognition, unless such technology is necessary to protect the security of workers or the security of the employer's facilities; (ix) use an electronic monitoring tool that incorporates gait, voice analysis, or emotion recognition technology; (x) take adverse action against an employee based in whole or in part on their opposition or refusal to submit to a practice that the employee believes in good faith violates this article; (xi) take adverse employment action against an employee on the basis of data collected via continuous incremental time-tracking tools except in the case of egregious misconduct; or (xii) take adverse employment action against an employee based on any data collected via electronic monitoring if such data measures an employee's performance in relation to a performance standard that has not been previously, clearly, and unmistakably disclosed to such employee as well as to all other classes of employees to whom it applies in violation of subparagraph (vi) of paragraph (b) of subdivision one of this section, or if such data was collected without proper notice to employees or candidates pursuant to sections 19B, 52C, and 190(i) of chapter 149 and section 99 of chapter 272.
Pre-filed 2025-01-14
Chapter 149B, § 5(a)
Plain Language
Seven categorical prohibitions apply to ADS use in employment: no use that violates any law; no use that harms employee health or safety including through dangerous productivity quotas; no personality, behavior, belief, or emotional state predictions about employees or candidates; no interference with protected labor activity; no wage deductions for time exercising legal rights; no deviation from the tool's post-impact-assessment specifications; and no facial recognition, gait, or emotion recognition technology. The ban on behavior and personality prediction is notably broad and would restrict common pre-employment assessment tools.
(a) Notwithstanding the provisions of subdivision one of this section, an employer shall not, alone or in conjunction with an electronic monitoring tool, use an automated decision tool: (i) in such a manner that results in a violation of labor, employment, or civil rights law or any other law of the commonwealth; (ii) in a manner that harms or is likely to harm the health or safety of employees, including by setting productivity quotas in a manner that is likely to cause physical or mental illness or injury; (iii) to make predictions about an employee or candidate for employment's behavior, beliefs, intentions, personality, emotional state, or other characteristic or behavior; (iv) to predict, interfere with, restrain, or coerce employees engaging in activity protected under labor and employment law; (v) to subtract from an employee's wages time spent exercising their legal rights; (vi) in a manner that deviates from the specification of the automated employment decision tool as implemented after the incorporation of any alterations made pursuant to the impact assessment required by subdivision one of this section; or (vii) that involves facial recognition, gait, or emotion recognition technologies.
Pre-filed 2025-01-16
S-02.2
Chapter 110I, Section 4(b)-(c)
Plain Language
Covered entities may not operate, install, or commission the installation of any equipment incorporating biometric recognition technology in any place open to and accepting the general public — whether licensed or unlicensed. This is a categorical ban on public-facing biometric recognition, covering retail stores, restaurants, entertainment venues, transportation hubs, and any other publicly accessible space. The legislature has declared that any violation of this section is per se an unfair or deceptive act under chapter 93A, meaning the attorney general can pursue enforcement under 93A § 4 and private parties may be able to pursue claims under 93A §§ 9 and 11 without needing to independently prove unfairness or deception.
(b) Covered entities may not operate, install, or commission the operation or installation of equipment incorporating biometric recognition technology in any place, whether licensed or unlicensed, which is open to and accepts or solicits the patronage of the general public. (c) The legislature finds that the practices covered by this section are matters vitally affecting the public interest for the purpose of applying the Massachusetts Consumer Protection law, chapter 93a. A violation of this section is not reasonable in relation to the development and preservation of business and is an unfair or deceptive act in trade or commerce and an unfair method of competition for the purpose of applying the Massachusetts Consumer Protection law, chapter 93a.
Pre-filed 2025-01-17
S-02.2
Ch. 93M § 2(f)
Plain Language
Commercial establishments — defined as places of entertainment, retail stores, and food and drink establishments — are categorically prohibited from using biometric identifiers or biometric information to identify any person or customer. This is an absolute prohibition with no consent override. It effectively bars facial recognition, fingerprint identification, and similar biometric identification technologies in brick-and-mortar retail, entertainment, and food-service contexts, regardless of whether the customer consents.
(f) No commercial establishment shall use a person's or a customer's biometric identifier or biometric information to identify them.
Pending 2026-10-01
S-02.7S-02.9
Commercial Law § 14–1330(B)(1)–(4)
Plain Language
Operators must establish, maintain, and publicly publish on their website a protocol that prevents companion chatbots from producing or presenting self-harm, suicidal ideation, or suicide content to users who express such thoughts. The protocol must include automatic referral notifications directing the user to the Maryland Behavioral Health Crisis Response System and the National 988 Suicide and Crisis Lifeline. Operators must use evidence-based detection methods to identify when users express self-harm or suicidal ideation. This is a continuous operating requirement — the protocol must be active at all times as a condition of operation.
(B) (1) An operator shall establish and maintain a protocol for preventing a companion chatbot from producing or presenting content concerning self–harm, suicidal ideation, or suicide to a user who expresses thoughts of self–harm or suicidal ideation to the companion chatbot. (2) The protocol required under paragraph (1) of this subsection shall include a notification to a user who expresses thoughts of self–harm or suicidal ideation that refers the user to a crisis service provider, including: (I) The Maryland Behavioral Health Crisis Response System; and (II) The National 9–8–8 Suicide and Crisis Lifeline. (3) An operator shall use evidence–based methods for detecting when a user is expressing thoughts of self–harm or suicidal ideation to a companion chatbot. (4) An operator shall publish the protocol required under paragraph (1) of this subsection on the operator's website.
Pending 2026-10-01
S-02.6S-02.9
Commercial Law § 14–1330(C)(1)–(2)
Plain Language
Operators must establish, maintain, and publicly publish on their website a protocol that prevents companion chatbots from producing or presenting sexually explicit content to minor users. This covers both visual depictions of sexually explicit conduct and content suggesting the minor should engage in such conduct. The obligation is triggered when the operator knows or reasonably should know the user is a minor. 'Sexually explicit conduct' is defined by reference to the federal definition at 18 U.S.C. § 2256.
(C) (1) An operator shall establish and maintain a protocol for preventing a companion chatbot from producing or presenting to a minor user content concerning sexually explicit conduct, including: (I) Visual depictions of sexually explicit conduct; and (II) Content suggesting that the minor user should engage in sexually explicit conduct. (2) An operator shall publish the protocol required under paragraph (1) of this subsection on the operator's website.
Pending 2026-02-24
Sec. 4(1)-(2)
Plain Language
Employers are categorically prohibited from using automated decision tools for any employment-related decision — including hiring, firing, promotion, scheduling, performance evaluation, and wage-setting — except for one narrow purpose: screening large volumes of job applications to identify candidates who meet hiring criteria or to assess candidates based on job skills. All other employment uses of automated decision tools are banned outright, not merely subject to conditions or impact assessments. This is an unusually restrictive prohibition compared to other state AI-in-employment bills, which typically allow use subject to bias testing and disclosure.
Sec. 4. (1) Except as otherwise provided in subsection (2), an employer shall not use an automated decisions tool to make an employment-related decision. (2) An employer may use an automated decisions tool to screen large volumes of job applications to do either of the following: (a) Identify candidates who meet a set hiring criteria. (b) Assess candidates based on job skills.
Pending 2026-02-24
Sec. 5(5)
Plain Language
Employers are categorically prohibited from using any electronic monitoring or automated decision tool that incorporates facial recognition, gait recognition, voice recognition, or emotion recognition technology. This is an absolute ban — there is no exception for any of the permitted purposes in Sec. 5(2) or 4(2). The prohibition applies to the tool's capabilities, not whether those features are actively used.
(5) An employer shall not use an electronic monitoring tool or automated decisions tool that is equipped with facial, gait, voice, or emotion recognition technology.
Pending 2026-01-01
S-02.7
Sec. 5(1)(a)
Plain Language
Operators may not make a companion chatbot available to a covered minor unless the chatbot is not foreseeably capable of encouraging the minor to engage in self-harm, suicidal ideation, violence, drug or alcohol consumption, or disordered eating. Initially this applies only when the operator has actual knowledge the user is a minor, but beginning January 1, 2027, actual knowledge is no longer required. The standard is 'not foreseeably capable' — operators must design the system so that harmful outputs in these categories are not a foreseeable outcome, not merely that they are unlikely.
An operator shall not make a companion chatbot available to a covered minor unless the companion chatbot is not foreseeably capable of any of the following: (a) Encouraging the covered minor to engage in self-harm, suicidal ideation, violence, consumption of drugs or alcohol, or disordered eating.
Pending 2026-01-01
S-02.6
Sec. 5(1)(c)-(d)
Plain Language
Operators may not make a companion chatbot available to a covered minor unless the chatbot is not foreseeably capable of (1) encouraging the minor to harm others or participate in illegal activity, including creation of child sexual abuse materials, or (2) engaging in erotic or sexually explicit interactions with the minor. These are absolute prohibitions — the system must be designed so that these outputs are not a foreseeable capability when interacting with minors. Beginning January 1, 2027, these obligations apply regardless of whether the operator has actual knowledge the user is a minor.
An operator shall not make a companion chatbot available to a covered minor unless the companion chatbot is not foreseeably capable of any of the following: (c) Encouraging the covered minor to harm others or participate in illegal activity, including, but not limited to, the creation of covered minor sexual abuse materials. (d) Engaging in erotic or sexually explicit interactions with the covered minor.
Pending 2027-01-01
S-02.7
Sec. 5(1)(a)
Plain Language
Operators may not make a companion chatbot available to a covered minor unless the chatbot is not foreseeably capable of encouraging the minor to engage in self-harm, suicidal ideation, violence, drug or alcohol consumption, or disordered eating. The standard is 'foreseeably capable' — operators must design and test to ensure the chatbot cannot foreseeably produce such outputs for minors. Initially applies only when the operator has actual knowledge the user is a minor; beginning January 1, 2027, the actual knowledge requirement is eliminated (see Sec. 5(2)). This is broader than CA SB 243's self-harm/suicide focus, as it also covers violence, substance use, and disordered eating.
An operator shall not make a companion chatbot available to a covered minor unless the companion chatbot is not foreseeably capable of any of the following: (a) Encouraging the covered minor to engage in self-harm, suicidal ideation, violence, consumption of drugs or alcohol, or disordered eating.
Pending 2027-01-01
S-02.6
Sec. 5(1)(c)-(d)
Plain Language
Operators must ensure companion chatbots are not foreseeably capable of (1) encouraging minors to harm others or participate in illegal activity — including creation of child sexual abuse materials — or (2) engaging in erotic or sexually explicit interactions with minors. These are absolute prohibitions: the chatbot must be designed so that it cannot foreseeably produce such content for covered minors. The CSAM prohibition here is broader than S-02.4's universal CSAM ban because it covers encouraging CSAM creation in addition to generating it. Beginning January 1, 2027, the actual knowledge requirement for minor status is removed.
An operator shall not make a companion chatbot available to a covered minor unless the companion chatbot is not foreseeably capable of any of the following: (c) Encouraging the covered minor to harm others or participate in illegal activity, including, but not limited to, the creation of covered minor sexual abuse materials. (d) Engaging in erotic or sexually explicit interactions with the covered minor.
Pending 2026-08-01
Minn. Stat. § 181.9924, subd. 1(a)
Plain Language
Employers face six categorical prohibitions on ADS use. They may not use an ADS to: (1) cause or facilitate violations of any law; (2) obtain or infer a broad list of sensitive worker attributes including immigration status, religion, politics, health, neural data, sexual orientation, disability, or criminal/credit history; (3) predict or infer characteristics unrelated to essential job functions; (4) identify or punish workers exercising legal rights; (5) use facial, gait, or emotion recognition technologies; or (6) collect data for undisclosed purposes. These are absolute prohibitions — no safe harbor or balancing test applies. The sensitive-attribute prohibition is notably broader than most comparable state laws, covering neural data and ancestral history alongside standard protected categories.
Subdivision 1. Prohibitions. (a) An employer is prohibited from using an automated decision system to: (1) prevent compliance with or cause a violation of any federal, state, or local law or regulation; (2) obtain or infer a worker's immigration status; veteran status; ancestral history; religious or political beliefs; health or reproductive status, history, or plan; emotional or psychological state; neural data; sexual or gender orientation; disability; criminal record; or credit history; (3) make predictions or inferences about a worker's behavior, beliefs, intentions, personality, emotional state, health, or other characteristics or behaviors that are unrelated to the worker's essential job functions; (4) identify, predict, or take adverse action against a worker for exercising the worker's legal rights; (5) draw on facial, gait, or emotion recognition technologies; or (6) collect data for a purpose that was not disclosed in the notice required by section 181.9922.
Pending 2026-08-01
S-02.7
Minn. Stat. § 604.115, subd. 4(a)-(b)
Plain Language
Companion chatbot proprietors have a three-part ongoing obligation: (1) use good-faith, industry-standard efforts to prevent the chatbot from promoting, causing, or aiding self-harm; (2) use similar efforts to detect whether a user is expressing thoughts of self-harm; and (3) upon detection or actual knowledge, immediately suspend the user's access to the companion chatbot for at least 72 hours and prominently display suicide crisis organization contact information. Liability attaches on two independent tracks: first, for failure to comply with the prudent-effort obligations generally; second — regardless of general compliance — whenever the proprietor has actual knowledge of self-harm promotion or user self-harm expressions and fails to suspend access and display crisis resources. Liability cannot be waived or disclaimed under any circumstances, including through terms of service.
(a) A proprietor of a companion chatbot must make a prudent and good faith effort consistent with industry standards and use existing technology, available resources, and known, established, or readily attainable techniques to prevent the companion chatbot from promoting, causing, or aiding self-harm, and determine whether a covered user is expressing thoughts of self-harm. Upon determining that a companion chatbot has promoted, caused, or aided self-harm, or that a covered user is expressing thoughts of self-harm, the proprietor must prohibit continued use of the companion chatbot for a period of at least 72 hours and prominently display contact information for a suicide crisis organization to the covered user. (b) If a proprietor of a companion chatbot fails to comply with this section, the proprietor is liable to users who inflict self-harm, in whole or in part, as a result of the proprietor's companion chatbot promoting, causing, or aiding the user to inflict self-harm. Irrespective of the proprietor's compliance with this subdivision, a proprietor is liable for general and special damages to covered users who inflict self-harm, in whole or in part, when the proprietor: (1) has actual knowledge that: (i) the companion chatbot is promoting, causing, or aiding self-harm; or (ii) a covered user is expressing thoughts of self-harm; (2) fails to prohibit continued use of the companion chatbot for a period of at least 72 hours; and (3) fails to prominently display to the user a means to contact a suicide crisis organization. A proprietor of a companion chatbot may not waive or disclaim liability under this subdivision.
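The sketch below illustrates the suspension mechanic summarized above: on detection or actual knowledge of self-harm expression, access is blocked for at least 72 hours and crisis contact information is surfaced. The in-memory store and function names are assumptions for the example; only the 72-hour floor and the crisis-contact display come from the quoted bill text.

```python
# Hedged sketch of the 72-hour suspension logic, under the assumptions noted above.
from datetime import datetime, timedelta, timezone

SUSPENSION = timedelta(hours=72)
CRISIS_CONTACT = "988 Suicide and Crisis Lifeline: call or text 988"

suspensions: dict[str, datetime] = {}  # user_id -> suspension end time (assumed store)

def handle_self_harm_signal(user_id: str) -> str:
    """Called on detection or actual knowledge of self-harm expression."""
    suspensions[user_id] = datetime.now(timezone.utc) + SUSPENSION
    return CRISIS_CONTACT  # to be displayed prominently to the covered user

def may_use_companion(user_id: str) -> bool:
    until = suspensions.get(user_id)
    return until is None or datetime.now(timezone.utc) >= until
```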
Pending 2027-01-15
Minn. Stat. § 325M.40, subd. 2(a)
Plain Language
Any person who operates or distributes a chatbot must ensure that minors under 18 cannot use, interact with, purchase, or converse with the chatbot. This is a categorical prohibition — not a content restriction or feature limitation, but a complete ban on minor access to covered chatbots. The statute does not specify what age verification method must be used, but the obligation to 'ensure' minors cannot access the system implies the person must implement some effective mechanism to prevent minor access. The chatbot definition is narrower than all conversational AI — it covers only generative AI systems that behave in a way that would lead a reasonable person to believe the system is conveying humanity, sentience, emotions, or desires.
A person must ensure that any chatbot operated or distributed by the person does not make chatbots available to minors to use, interact with, purchase, or converse with.
Pending 2027-01-15
Minn. Stat. § 325M.40, subd. 2(b)
Plain Language
Persons who operate AI systems that primarily function as AI companions have a parallel and overlapping obligation: they must ensure that any chatbots they operate or distribute are not available to minors. This provision targets companion AI operators specifically — even if a particular chatbot within an AI companion platform does not independently meet the chatbot definition (e.g., a task-completion bot that does not convey humanity), the operator of a platform that primarily functions as an AI companion must still block minors from all chatbots on the platform. This creates a broader sweep for companion AI operators than subdivision 2(a) standing alone.
A person operating artificial intelligence systems that primarily function as AI companions must ensure that any chatbots operated or distributed by the person are not available to minors to use, interact with, purchase, or converse with.
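Both subdivisions reduce to the same operational gate: a covered chatbot or companion platform may not be served to a user under 18 at all. The sketch below shows that gate in outline; verify_adult is a hypothetical placeholder, since neither subdivision specifies an age verification method.

```python
# Sketch of a categorical access gate for minors, assuming an unspecified
# age-verification mechanism behind verify_adult().
def verify_adult(user_profile: dict) -> bool:
    """Placeholder: whatever age-verification mechanism the operator adopts."""
    return user_profile.get("verified_age", 0) >= 18

def grant_chatbot_access(user_profile: dict) -> bool:
    # No content-filter fallback: if the user is a minor, access is denied outright.
    return verify_adult(user_profile)
```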
Pending 2026-09-01
§ 181.9924, Subd. 1(a)-(b)
Plain Language
Employers face six categorical prohibitions on ADS use: they may not use an ADS to (1) cause violations of law, (2) obtain or infer protected or sensitive characteristics including immigration status, religion, health/reproductive status, neural data, disability, or credit history, (3) predict behaviors unrelated to essential job functions, (4) target workers exercising legal rights, (5) use facial, gait, or emotion recognition, or (6) collect data for undisclosed purposes. Additionally, employers may use ADS for individualized compensation-setting only under narrow conditions: the input data must be directly task-related (e.g., education, experience), the inputs must be communicated to the worker, and the system may only be used once per six months per worker or in conjunction with a meaningful change in duties. The ban on inferring protected characteristics is exceptionally broad — it covers not just protected-class attributes but also political beliefs, neural data, and credit history.
Subdivision 1. Prohibitions. (a) An employer is prohibited from using an automated decision system to: (1) prevent compliance with or cause a violation of any federal, state, or local law or regulation; (2) obtain or infer a worker's immigration status; veteran status; ancestral history; religious or political beliefs; health or reproductive status, history, or plan; emotional or psychological state; neural data; sexual or gender orientation; disability; criminal record; or credit history; (3) make predictions or inferences about a worker's behavior, beliefs, intentions, personality, emotional state, health, or other characteristics or behaviors that are unrelated to the worker's essential job functions; (4) identify, predict, or take adverse action against a worker for exercising the worker's legal rights; (5) draw on facial, gait, or emotion recognition technologies; or (6) collect data for a purpose that was not disclosed in the notice required by section 181.9922. (b) An employer must not use an automated decision system that uses individualized worker data as inputs or outputs to set compensation, unless the employer can demonstrate that: (1) the input data is directly related to the ability of the worker to complete the task, such as education, training, experience, or seniority; (2) the inputs used are clearly communicated to the worker such that the worker knows their compensation is a function of the identified attributes; and (3) the employer uses the automated decision system either: (i) not more than once per six-month period per worker; or (ii) only in conjunction with a meaningful change in work duties, such as hiring or promotion.
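The compensation-setting condition in paragraph (b) is essentially a frequency limit plus an exception. The sketch below shows one way to encode it; the 183-day approximation of six months, the in-memory record store, and the function names are assumptions added for the example, not elements of the bill.

```python
# Rough sketch of the once-per-six-months compensation condition, under the
# assumptions stated in the lead-in.
from datetime import datetime, timedelta, timezone

SIX_MONTHS = timedelta(days=183)  # approximation of the six-month period
last_comp_run: dict[str, datetime] = {}  # worker_id -> last ADS compensation run

def may_run_compensation_ads(worker_id: str, meaningful_duty_change: bool) -> bool:
    if meaningful_duty_change:  # e.g., hiring or promotion
        return True
    prior = last_comp_run.get(worker_id)
    return prior is None or datetime.now(timezone.utc) - prior >= SIX_MONTHS

def record_compensation_run(worker_id: str) -> None:
    last_comp_run[worker_id] = datetime.now(timezone.utc)
```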
Pending 2026-08-28
S-02.6
§ 1.2058(3)(1)-(2)
Plain Language
It is unlawful for any person to design, develop, or make available an AI chatbot with knowledge or reckless disregard that the chatbot poses a risk of soliciting, encouraging, or inducing minors to engage in, describe, or simulate sexually explicit conduct, or to create or transmit visual depictions of sexually explicit conduct. The mental state requirement is knowledge or reckless disregard — not strict liability. Each offense carries a fine of up to $100,000. This obligation applies to any person, not just covered entities — it extends to developers and anyone in the supply chain who makes the chatbot available.
3. (1) It shall be unlawful to design, develop, or make available an artificial intelligence chatbot knowing or with reckless disregard for the fact that the artificial intelligence chatbot poses a risk of soliciting, encouraging, or inducing minors to: (a) Engage in, describe, or simulate sexually explicit conduct; or (b) Create or transmit any visual depiction of sexually explicit conduct, including any visual depiction described in section 573.010. (2) Any person who violates subdivision (1) of this subsection shall be fined not more than one hundred thousand dollars per offense.
Pending 2026-08-28
S-02.7
§ 1.2058(4)(1)-(2)
Plain Language
It is unlawful for any person to design, develop, or make available an AI chatbot with knowledge or reckless disregard that the chatbot encourages, promotes, or coerces suicide, nonsuicidal self-injury, or imminent physical or sexual violence. Unlike subsection 3, this prohibition is not limited to minors — it applies to chatbots accessible to any user. The mental state threshold is knowledge or reckless disregard. Each offense carries a fine of up to $100,000. This applies to any person involved in the design, development, or distribution chain.
4. (1) It shall be unlawful to design, develop, or make available an artificial intelligence chatbot knowing or with reckless disregard for the fact that the artificial intelligence chatbot encourages, promotes, or coerces suicide, nonsuicidal self-injury, or imminent physical or sexual violence. (2) Any person who violates subdivision (1) of this subsection shall be fined not more than one hundred thousand dollars per offense.
Pre-filed 2026-08-28
S-02.6
§ 1.2058(3)(1)-(2)
Plain Language
It is unlawful for any person to design, develop, or make available an AI chatbot knowing or with reckless disregard that it poses a risk of soliciting, encouraging, or inducing minors to engage in, describe, or simulate sexually explicit conduct, or to create or transmit visual depictions of such conduct. The mens rea standard is knowledge or reckless disregard — negligence alone is not sufficient. Violations carry fines up to $100,000 per offense. This is a direct statutory fine, not an AG-enforced civil penalty.
3. (1) It shall be unlawful to design, develop, or make available an artificial intelligence chatbot knowing or with reckless disregard for the fact that the artificial intelligence chatbot poses a risk of soliciting, encouraging, or inducing minors to: (a) Engage in, describe, or simulate sexually explicit conduct; or (b) Create or transmit any visual depiction of sexually explicit conduct, including any visual depiction described in section 573.010. (2) Any person who violates subdivision (1) of this subsection shall be fined not more than one hundred thousand dollars per offense.
Pre-filed 2026-08-28
S-02.7
§ 1.2058(4)(1)-(2)
Plain Language
It is unlawful for any person to design, develop, or make available an AI chatbot knowing or with reckless disregard that it encourages, promotes, or coerces suicide, nonsuicidal self-injury, or imminent physical or sexual violence. Unlike subsection 3 which is limited to conduct targeting minors, this prohibition applies regardless of user age — any chatbot that encourages these harms is covered. The mens rea requirement is knowledge or reckless disregard. Violations carry fines up to $100,000 per offense.
4. (1) It shall be unlawful to design, develop, or make available an artificial intelligence chatbot knowing or with reckless disregard for the fact that the artificial intelligence chatbot encourages, promotes, or coerces suicide, nonsuicidal self-injury, or imminent physical or sexual violence. (2) Any person who violates subdivision (1) of this subsection shall be fined not more than one hundred thousand dollars per offense.
Pending 2026-01-01
S-02.7
G.S. § 170-3(b)(1)
Plain Language
Covered platforms must implement and maintain reasonably effective systems to detect when users indicate intent to harm themselves or others, and must promptly respond to, report, and mitigate such emergency situations. User safety must be prioritized over the platform's other interests. This is a continuous operating requirement — the systems must be active and reasonably effective at all times. The emergency situation definition covers both self-harm and harm to others, making it broader than typical crisis response provisions that focus only on suicidal ideation and self-harm.
(1) Duty of loyalty in emergency situations. — A covered platform shall implement and maintain reasonably effective systems to detect, promptly respond to, report, and mitigate emergency situations in a manner that prioritizes the safety and well-being of users over the platform's other interests.
Pending 2027-07-01
S-02.6
Sec. 3(3)
Plain Language
Operators must implement reasonable measures to prevent the conversational AI from producing three categories of sexually harmful output directed at minor account holders: (1) visual depictions of sexually explicit conduct (as defined under federal law at 18 U.S.C. 2256), (2) direct statements urging the minor to engage in sexually explicit conduct, and (3) statements that sexually objectify the minor. The standard is reasonable measures — not an absolute guarantee — but operators must demonstrate affirmative steps to prevent these outputs.
(3) An operator shall, for minor account holders, institute reasonable measures to prevent the conversational artificial intelligence service from: (a) Producing visual depictions of sexually explicit conduct; (b) Generating direct statements that the account holder should engage in sexually explicit conduct; or (c) Generating statements that sexually objectify the account holder.
Passed
S-02.2
Section 2(a)-(b)
Plain Language
Business entities are prohibited from using biometric surveillance systems on consumers at their physical premises unless two conditions are met: (1) clear and conspicuous notice is provided to the consumer, and (2) the system is used for a lawful purpose. Notice may be satisfied by posting a sign at the perimeter of the surveilled area. Note that the definition of facial recognition is broad — it covers not only identification but also logging facial, head, or body characteristics to infer emotion, associations, activities, or location. If neither condition is met, use of the system is an unlawful practice under the Consumer Fraud Act. A 30-day cure period applies to first violations (see Section 2(d)).
a. It shall be an unlawful practice and a violation of P.L.1960, c.39 (C.56:8-1 et seq.) for a business entity to use any biometric surveillance system on a consumer at the physical premises of the business entity, except as provided in subsection c. of this section. b. A business entity may use a biometric surveillance system on a consumer at the physical premises of the business entity, if: (1) the business entity provides clear and conspicuous notice to the consumer regarding its use of a biometric surveillance system; and (2) the biometric surveillance system is used for a lawful purpose. The business entity may satisfy the notice requirement of paragraph (1) of this section by posting a sign in a conspicuous location at the perimeter of any area where a biometric surveillance system is being used.
Pending 2025-04-27
S-02.2
State Tech. Law § 506(8)-(9)
Plain Language
Surveillance technologies must undergo pre-deployment harm assessments and be subject to scope limitations protecting privacy and civil liberties. Continuous surveillance and monitoring are prohibited in education, work, housing, or any context where such use is likely to limit rights, opportunities, or access. The surveillance technology definition is exceptionally broad — covering any product or service that can be used to detect, monitor, collect, or retain data about New York residents. The continuous surveillance prohibition in education, work, and housing contexts is a categorical restriction, not a qualified one.
8. New York residents and New York communities shall be free from unchecked surveillance; surveillance technologies shall be subject to heightened oversight, including at least pre-deployment assessment of their potential harms and scope limits to protect privacy and civil liberties.
9. Continuous surveillance and monitoring shall not be used in education, work, housing, or any other contexts where the use of such surveillance technologies is likely to limit rights, opportunities, or access.
Pending 2025-07-26
S-02.1
State Tech. Law § 530(1)(a)-(e), (2)-(7)
Plain Language
Five categories of AI systems are categorically prohibited: (1) subliminal manipulation techniques that cause, or are highly likely to cause, physical or psychological harm, or that exploit the vulnerabilities of a defined group; (2) systems that inflict physical or emotional harm without a valid law enforcement or self-defense justification; (3) predictive behavioral systems whose reactions to predictions infringe, without legal justification, on an individual's liberty or emotional, psychological, or financial interests; (4) systems that acquire, retain, disseminate, or access sensitive personal information or non-public data in violation of applicable data privacy, security, and hacking laws; and (5) autonomous weapon systems lacking meaningful human supervision or control. The Secretary may demand that development or operation cease within a timeframe the Secretary sets, and the demand is binding unless challenged through a formal hearing; the system must remain shut down while the challenge is pending. Individuals who knowingly operate prohibited systems face class D felony charges and a civil penalty equal to the greater of the amount earned from creating the system or the damages it caused. A narrow exception exists for systems the Secretary authorizes after public hearing and comment, provided they are developed and used only by the state or with substantial, continuous state oversight. Once the Secretary issues a cease demand, all members, officers, and directors are rebuttably presumed to have knowledge of the prohibited system.
§ 530. Prohibited artificial intelligence systems. 1. No person shall develop, in whole or in part, or operate an artificial intelligence system within the state where such a system performs any of the following, whether or not it is the system's main function: (a) the deployment of subliminal techniques that operate beyond an individual's conscious awareness, with the express purpose of materially distorting an individual's behavior in such a manner that leads to, or possesses a high likelihood of leading to, physical or psychological harm to that individual or another, or that leverages the vulnerabilities of a defined group of individuals to similar ends; (b) the infliction of physical or emotional harm upon individuals without any valid law enforcement or self-defense purpose or justification; (c) the prediction of an individual's future actions or behaviors, followed by subsequent reactions based on these predictions, carried out in such a way that, without legal justification, infringes upon or compromises the individual's liberty, emotional, psychological, or financial interests; (d) the unauthorized acquisition, retention, or dissemination of or access to sensitive personal information or non-public data in violation of applicable data privacy, security, and hacking laws; or (e) the implementation of any form of autonomous weapon system designed to inflict harm on persons, property, or the environment that lack meaningful human supervision or control. "Meaningful human supervision or control" shall mean the ability to actively manage, intervene, or override the autonomous weapon system's functions. 2. Where the secretary discovers the development or operation of a prohibited artificial intelligence system, the secretary may, in writing, demand that the person who is developing or operating such system cease development or operation of or access to such a system within a period of time as the secretary deems necessary to prevent the system from widespread use or, if the system is operational or accessible to persons for use, to ensure the system is properly terminated in such a way to minimize risks of harm to individuals, society, or the environment. A demand made pursuant to this section shall be finally and irrevocably binding on the person unless the person against whom the demand is made shall, within such period of time set by the secretary, after the giving of notice of such determination, petition the department for a hearing to determine the legal findings of the secretary. The person developing or operating such a prohibited system shall, prior to petition, cease development, operation, and access to the system until and unless such determination is favorable to the person. Such determination may be appealed by any party as of right. 3. The secretary shall not grant a license pursuant to this article to any high-risk advanced artificial intelligence system described under this section except as described in subdivision seven of this section. 4. Any member, officer, director or employee of an operator of any entity who knowingly publicly or privately operates any system described in this section shall be guilty of a class D felony and shall incur a civil penalty of the amount earned from the creation of the prohibited system or the amount of damages caused by the system, whichever is greater. 5. 
This section shall not be construed as imposing liability on any member, officer, director or employee who had no explicit or implicit knowledge of the prohibited high-risk advanced artificial intelligence system provided however that where the secretary sends a demand to cease the development, operation, or access to such system all members, officers, and directors shall be rebuttably presumed to have knowledge of the prohibited high-risk advanced artificial intelligence system. 6. This section shall be construed as prohibiting the development of a prohibited high-risk advanced artificial intelligence system or making such a system accessible to persons in the state of New York. 7. Notwithstanding subdivision one of this section, a person may develop a prohibited high-risk advanced artificial intelligence system where authorized by the secretary, provided that such system is developed and used only by the state or with substantial, continuous oversight by the state and such system is authorized only after public hearing and comment in accordance with section five hundred nine of this article.
Pending 2025-07-27
S-02.10
Gen. Bus. Law § 399-zzzzzz(2)
Plain Language
Owners, licensees, or operators of generative AI systems must display a conspicuous warning on the system's user interface informing users that outputs may be inaccurate and/or inappropriate. The warning must be reasonably calculated to consistently apprise users — meaning it must be persistent and prominent enough that users are reliably made aware, not a one-time dismissible notice buried in terms of service. This is an ongoing operational requirement applicable to any generative AI system accessible to users. Failure to provide the warning to each user constitutes a separate violation per instance, subject to a civil penalty of up to $1,000 per violation.
The owner, licensee or operator of a generative artificial intelligence system shall conspicuously display a warning on the system's user interface that is reasonably calculated to consistently apprise the user that the outputs of the generative artificial intelligence system may be inaccurate and/or inappropriate.
Pending 2025-09-09
S-02.7
Gen. Bus. Law § 1701
Plain Language
Operators may not operate or provide an AI companion at all unless the system includes a protocol that addresses three categories of user expression: (1) suicidal ideation or self-harm, (2) physical harm to others, and (3) financial harm to others. The protocol must include, at minimum, a notification referring the user to crisis service providers such as a suicide hotline or crisis text line. This is a continuous operating prerequisite — the protocol must be in place as a condition of offering the service. Note that unlike CA SB 243, this provision covers not only self-harm but also expressions of intent to physically or financially harm others, broadening the required protocol scope significantly.
It shall be unlawful for any operator to operate or provide an AI companion to a user unless such AI companion contains a protocol for addressing: 1. possible suicidal ideation or self-harm expressed by a user to the AI companion, 2. possible physical harm to others expressed by a user to the AI companion, and 3. possible financial harm to others expressed by the user to the AI companion, that includes but is not limited to, a notification to the user that refers them to crisis service providers such as a suicide hotline, crisis text line, or other appropriate crisis services.
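As a purely illustrative sketch of how an operator might structure the protocol this provision requires, the Python below screens each user message for the three statutory categories and attaches a crisis-referral notification when any category is detected. The classifier, keyword lists, and referral text are hypothetical placeholders, not statutory language; a production system would rely on a properly validated detection model rather than keyword matching.

```python
from dataclasses import dataclass
from enum import Enum, auto

class RiskCategory(Enum):
    SELF_HARM = auto()        # possible suicidal ideation or self-harm (category 1)
    HARM_TO_OTHERS = auto()   # possible physical harm to others (category 2)
    FINANCIAL_HARM = auto()   # possible financial harm to others (category 3)

# Hypothetical referral text; an operator would substitute the crisis
# services appropriate to its user base.
CRISIS_REFERRAL = (
    "If you are in crisis, you can call or text 988 (Suicide and Crisis "
    "Lifeline) or reach a crisis text line for immediate support."
)

@dataclass
class ProtocolResult:
    categories: set           # which risk categories were detected, if any
    notification: str | None  # referral text to show the user, or None

def classify_message(text: str) -> set:
    """Placeholder detector. A real protocol would use a trained classifier
    or a vendor moderation API; bare keyword matching is shown only to make
    the sketch runnable."""
    detected = set()
    lowered = text.lower()
    if any(k in lowered for k in ("kill myself", "end my life", "hurt myself")):
        detected.add(RiskCategory.SELF_HARM)
    if any(k in lowered for k in ("hurt them", "attack them", "make them pay")):
        detected.add(RiskCategory.HARM_TO_OTHERS)
    if any(k in lowered for k in ("drain their account", "steal their savings")):
        detected.add(RiskCategory.FINANCIAL_HARM)
    return detected

def apply_protocol(user_message: str) -> ProtocolResult:
    """Run the required protocol on each inbound message and attach a
    crisis-referral notification when any category is detected."""
    categories = classify_message(user_message)
    notification = CRISIS_REFERRAL if categories else None
    return ProtocolResult(categories=categories, notification=notification)
```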
Pending
S-02.1
Civil Rights Law § 89-a
Plain Language
No entity may develop, deploy, use, or sell an AI system that evaluates or classifies individuals' trustworthiness over time based on social behavior or personality characteristics where the resulting social score leads to differential treatment in unrelated contexts, unjustified or disproportionate differential treatment, or infringement of constitutional or statutory rights. This is a categorical prohibition — there is no compliance pathway that permits social scoring AI. The prohibition applies broadly to any person, partnership, association, or corporation, not just to developers or deployers of high-risk AI systems.
No person, partnership, association or corporation shall develop, deploy, use, or sell an AI system which evaluates or classifies the trustworthiness of natural persons over a certain period of time based on their social behavior or known or predicted personal or personality characteristics, with the social score leading to any of the following: 1. differential treatment of certain natural persons or whole groups thereof in social contexts which are unrelated to the contexts in which the data was originally generated or collected; 2. differential treatment of certain natural persons or whole groups thereof that is unjustified or disproportionate to their social behavior or its gravity; or 3. the infringement of any right guaranteed under the United States constitution, the New York constitution, or state or federal law.
Pending 2025-12-10
S-02.10
Gen. Bus. Law § 399-bbbb(2)
Plain Language
Any person or business entity operating a companion chatbot in New York must display a clear, conspicuous warning on the hosting website stating that the chatbot can foster dependency and carries a psychological risk. The warning must be prominent — not buried in terms of service — and must be provided in every language the chatbot is configured to use. This is a mandatory safety risk disclosure obligation that applies to all operators regardless of size. The bill does not specify required warning language beyond the substance (dependency risk and psychological risk), leaving operators some discretion in exact phrasing provided both risk categories are addressed.
Any person, corporation, partnership, sole proprietor, limited partnership, association or any other business entity operating a companion chatbot in the state of New York shall include a clear and conspicuous warning that such companion chatbot can foster dependency and carries a psychological risk. Such warning shall be placed prominently on the website hosting such companion chatbot and be made available in any language in which the companion chatbot is set to communicate.
Pending 2026-08-30
S-02.7
Gen. Bus. Law § 1801(1); § 1800(5)(b)
Plain Language
Chatbot operators may not provide any unsafe chatbot features to a covered user unless both conditions are met: (1) the user is not a covered minor, and (2) the operator has determined that fact using age-verification methods permissible under Article 45 of the General Business Law and its implementing regulations. This specific sub-obligation covers the prohibition on generating outputs that endorse, promote, or facilitate suicide, self-harm, substantial physical harm to others, disordered eating, or unlawful drug or alcohol use or abuse. In effect, unsafe features may never be provided to covered minors, and may be provided to other covered users only after permissible age verification has been completed. Chatbots used solely for customer service, commercial product or account information, or internal enterprise or government productivity are exempt.
§ 1801. Prohibition. 1. Except as otherwise provided for in this article, it shall be unlawful for a chatbot operator to provide unsafe chatbot features to a covered user unless: (a) the covered user is not a covered minor; and (b) the chatbot operator has used methods that are permissible under article forty-five of this chapter and its implementing regulations and any additional regulations promulgated pursuant to this article to determine that the covered user is not a covered minor. 2. The provisions of subdivision one of this section shall not apply where the advanced chatbot is made available to covered users solely for the purpose of: (a) customer service, information about available commercial services or products provided by an entity, or account information; or (b) with respect to any system used by a partnership, corporation, or state or local government agency, for internal purposes or employee productivity.

§ 1800(5)(b): "Unsafe chatbot features" shall mean one or more advanced chatbot design features that, at any point during a chatbot-user interaction: (b) generating outputs that contain endorsement or promotion of, or which facilitate suicide, self-harm, substantial physical harm to others, disordered eating, unlawful drug or alcohol use, or drug or alcohol abuse;
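One way to operationalize this gate, sketched below under stated assumptions: the operator withholds the bill's "unsafe chatbot features" unless it holds a record that the user was determined not to be a covered minor through a permissible method, and skips the gate entirely for the exempt customer-service and internal-productivity deployments. The User fields, purpose strings, and function names are illustrative; Article 45's permissible verification methods are not modeled here.

```python
from dataclasses import dataclass

@dataclass
class User:
    user_id: str
    verified_not_minor: bool        # result of an Article 45-permissible age check
    verification_method: str | None # how that determination was made, if at all

# Hypothetical purpose labels mirroring the § 1801(2) exemptions.
EXEMPT_PURPOSES = {"customer_service", "product_info", "account_info", "internal_productivity"}

def may_enable_unsafe_features(user: User, deployment_purpose: str) -> bool:
    """Gate 'unsafe chatbot features' behind age verification, in the spirit
    of § 1801(1)-(2). Purpose strings and User fields are assumptions, not
    statutory terms."""
    if deployment_purpose in EXEMPT_PURPOSES:
        # § 1801(2): customer-service and internal deployments fall outside
        # the prohibition entirely.
        return True
    # § 1801(1): both conditions must hold -- the user is not a covered minor
    # AND that fact was established through a permissible method.
    return user.verified_not_minor and user.verification_method is not None
```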
Pending 2026-08-30
S-02.4, S-02.6
Gen. Bus. Law § 1801(1); § 1800(5)(e)
Plain Language
Chatbot operators may not provide chatbot features that generate outputs that are, describe, or facilitate sexually explicit conduct or child sexual abuse material to any covered user unless age verification confirms the user is not a minor. The CSAM prohibition effectively operates as a categorical ban because CSAM generation is unlawful regardless of user age. The sexually explicit conduct prohibition applies categorically to all known minors and to unverified users. 'Sexually explicit conduct' incorporates the federal definition at 18 U.S.C. § 2256. The customer service and internal enterprise exemptions apply.
§ 1801. Prohibition. 1. Except as otherwise provided for in this article, it shall be unlawful for a chatbot operator to provide unsafe chatbot features to a covered user unless: (a) the covered user is not a covered minor; and (b) the chatbot operator has used methods that are permissible under article forty-five of this chapter and its implementing regulations and any additional regulations promulgated pursuant to this article to determine that the covered user is not a covered minor. 2. The provisions of subdivision one of this section shall not apply where the advanced chatbot is made available to covered users solely for the purpose of: (a) customer service, information about available commercial services or products provided by an entity, or account information; or (b) with respect to any system used by a partnership, corporation, or state or local government agency, for internal purposes or employee productivity.

§ 1800(5)(e): generating outputs that are, describe, or facilitate sexually explicit conduct or child sexual abuse material.
Pending 2025-04-08
S-02.10
Gen. Bus. Law § 399-zzzzzz(2)-(3)
Plain Language
Owners, licensees, or operators of generative AI systems must display a clear and conspicuous notice on the system's user interface warning users that the system's outputs may be inaccurate. This is a standing disclosure requirement that must be visible on the interface — not buried in terms of service. Failure to provide the notice subjects the responsible party to civil penalties of up to $1,000 per violation, with each user not provided the notice constituting a separate violation for each instance. The obligation applies broadly to any generative AI system, covering text, image, video, audio, and other synthetic content generators.
2. The owner, licensee or operator of a generative artificial intelligence system shall clearly and conspicuously display a notice on the system's user interface that the outputs of the generative artificial intelligence system may be inaccurate. 3. Where such owner, licensee or operator of a generative artificial intelligence system fails to provide the notice required in subdivision two of this section, such owner, licensee or operator shall be assessed a civil penalty up to one thousand dollars for each violation. Each user the owner, licensee or operator fails to provide a notice to shall constitute a separate violation for each instance.
Pending 2027-01-01
S-02.1
Civ. Rights Law § 89-a
Plain Language
No person or entity may develop, deploy, use, or sell an AI system that evaluates or classifies the trustworthiness of individuals based on their social behavior or personal characteristics where the resulting social score leads to: differential treatment in unrelated social contexts, unjustified or disproportionate differential treatment, or infringement of constitutional or statutory rights. This is a categorical prohibition applying to all persons and entities — not limited to developers or deployers — and covers the entire lifecycle from development through sale.
No person, partnership, association or corporation shall develop, deploy, use, or sell an AI system which evaluates or classifies the trustworthiness of natural persons over a certain period of time based on their social behavior or known or predicted personal or personality characteristics, with the social score leading to any of the following:
1. differential treatment of certain natural persons or whole groups thereof in social contexts which are unrelated to the contexts in which the data was originally generated or collected;
2. differential treatment of certain natural persons or whole groups thereof that is unjustified or disproportionate to their social behavior or its gravity; or
3. the infringement of any right guaranteed under the United States constitution, the New York constitution, or state or federal law.
Pending 2026-01-29
S-02.7
Section 3(a)-(b)
Plain Language
Operators may not offer an AI companion at all unless it has active protocols that (1) detect suicidal ideation or expressions of self-harm, (2) refuse to assist with suicide attempts or methods, and (3) refer the user to crisis resources upon detection. Referrals must include the 988 Suicide and Crisis Lifeline (or successor), the nearest behavioral health crisis centers, or other appropriate crisis services. This is a precondition of operation — the AI companion cannot be made available to any user unless these protocols are in place.
(a) Certain protocols required.--It shall be unlawful for an operator to provide an AI companion to a user unless the AI companion contains protocols that: (1) identify suicidal ideation or expressions of self-harm; (2) decline to assist a user with a suicide attempt, methods or improvement of methods; and (3) refer the user to a crisis center if suicidal ideation or expressions of self-harm are recognized. (b) Referral to crisis center.--The referral required under subsection (a)(3) shall include: (1) crisis service contact information, including the 988 Suicide and Crisis Lifeline, or a subsequent iteration; (2) the closest behavioral health crisis centers to the user; or (3) other appropriate crisis services.
Pending 2026-01-29
S-02.9
Section 4(1)
Plain Language
Operators must publicly post on their website the details of their crisis detection and response protocol required under Section 3. This is a standalone disclosure obligation — the operator must make the protocol details publicly accessible, not merely maintain them internally.
An operator shall: (1) Publish details on the protocol on the operator's Internet website.
Pending 2026-04-01
S-02.10
12 Pa.C.S. § 7105(a)-(c)(1)-(2)
Plain Language
Suppliers must develop, implement, and maintain a written disclosure policy covering the chatbot's intended purposes and its abilities and limitations. Before any consumer can access the chatbot's features or chat page, the consumer must provide written acknowledgment that they have read, understood, and consented to the policy and the chatbot's purpose, capabilities, and limitations. The supplier must protect trade secrets and proprietary information in complying with this requirement. This creates a mandatory pre-access informed consent gate — no consumer interaction may begin without this acknowledgment.
(a) Policy required.-- (1) Subject to paragraph (2), a supplier of a chatbot shall develop, implement and maintain a written policy containing disclosures regarding the chatbot in accordance with subsection (c). (2) In complying with paragraph (1), a supplier shall protect any trade secret or other proprietary information regarding the chatbot. (b) Consent required.-- (1) Before accessing the features of a chatbot or entering the chat page of a chatbot, a consumer must acknowledge that the consumer has read, understands and consents to the policy described under subsection (a) and the purpose, capabilities and limitations of the chatbot. (2) The consent under this subsection must be in writing and may involve the consumer initialing or signing the acknowledgment described in paragraph (1), checking a box, providing an electronic signature or hitting a button. (c) Specific disclosures.--The policy described under subsection (a) must clearly and conspicuously provide the following: (1) The intended purposes of the chatbot. (2) The abilities and limitations of the chatbot.
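A minimal sketch of the pre-access consent gate described above, assuming a simple in-memory store: the chat page is reachable only after the consumer's written acknowledgment of the current disclosure policy version is on file. All class and field names are hypothetical; the statute prescribes the required disclosures and the acknowledgment, not any particular data model.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class DisclosurePolicy:
    intended_purposes: str   # § 7105(c)(1): intended purposes of the chatbot
    abilities_limits: str    # § 7105(c)(2): abilities and limitations
    version: str

@dataclass
class ConsentRecord:
    consumer_id: str
    policy_version: str
    acknowledged_at: datetime

class ConsentRequired(Exception):
    """Raised when a consumer tries to enter the chat page without the
    required acknowledgment."""

_consents: dict[str, ConsentRecord] = {}   # in practice, durable storage

def record_consent(consumer_id: str, policy: DisclosurePolicy) -> None:
    """Store the consumer's written acknowledgment (checkbox, e-signature,
    initialing, or button press) of the current policy version."""
    _consents[consumer_id] = ConsentRecord(
        consumer_id, policy.version, datetime.now(timezone.utc)
    )

def enter_chat(consumer_id: str, policy: DisclosurePolicy) -> str:
    """Allow access only once consent to the current policy is on file."""
    rec = _consents.get(consumer_id)
    if rec is None or rec.policy_version != policy.version:
        raise ConsentRequired("Consumer must acknowledge the disclosure policy first.")
    return "chat session started"
```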
Pending 2026-06-03
S-02.7, S-02.9
Section 3(b)(1)-(2)
Plain Language
Operators must maintain and implement a protocol — to the extent technologically feasible — that (1) prevents AI companions from producing suicidal ideation, suicide, self-harm, or violence-encouraging content, and (2) refers users expressing suicidal ideation or self-harm to crisis service providers such as suicide hotlines or crisis text lines. The operator must also publish the details of this protocol on its public website. The 'technologically feasible' qualifier applies to the prevention obligation but does not excuse publishing the protocol. This is a continuous operational requirement — the protocol must remain active as a condition of operating the platform.
(1) An operator shall maintain and implement a protocol, to the extent technologically feasible, to prevent an AI companion on its platform from producing suicidal ideation, suicide or self-harm content to a user, or content that directly encourages the user to commit acts of violence. The protocol shall include providing a notification to the user referring the user to crisis service providers, including a suicide hotline or crisis text line, if the user expresses suicidal ideation, suicide or self-harm. (2) The operator shall publish details of the protocol required under paragraph (1) on its publicly accessible Internet website.
Pending 2026-06-03
S-02.6
Section 3(c)(3)
Plain Language
When the operator knows or should have known a user is a minor, the operator must implement reasonable measures to prevent the AI companion from (1) generating visual material depicting sexually explicit conduct, and (2) directly instructing the minor to engage in sexually explicit conduct. 'Sexually explicit conduct' is defined by reference to the federal definition at 18 U.S.C. § 2256. The standard is 'reasonable measures' — not absolute prevention — which provides a defense if the operator implements commercially reasonable safeguards that are circumvented.
For a user that the operator knows, OR SHOULD HAVE KNOWN, is a minor, the operator shall: (3) Institute reasonable measures to prevent its AI companion from producing visual material of sexually explicit conduct or directly instructing the minor to engage in sexually explicit conduct.
Pending 2026-06-03
S-02.10
Section 3(d)
Plain Language
If an operator offers its AI companion service to users it knows are minors, the operator must disclose to all users — on the application, browser, or any other access format — that AI companions may not be suitable for some minors. As amended, this disclosure obligation is triggered only when the operator knows it has minor users; the original version applied unconditionally. The disclosure must appear on the access interface itself, not buried in terms of service. This is a general suitability warning to all users, distinct from the minor-specific disclosures in subsection (c).
IF A SERVICE IS OFFERED TO USERS THAT AN OPERATOR KNOWS ARE MINORS, AN operator shall disclose to users of its AI companion platform, on the application, browser or any other format through which the platform is accessed, that AI companions may not be suitable for some minors.
Pending 2027-01-01
S-02.7
R.I. Gen. Laws § 6-63-2
Plain Language
Operators may not operate or provide an AI companion unless the system includes active protocols for detecting and responding to three categories of user expression: (1) suicidal ideation or self-harm, (2) physical harm to others, and (3) financial harm to others. The protocol must include, at minimum, a notification referring the user to crisis service providers such as suicide hotlines or crisis text lines. This is a continuous operating prerequisite — without the protocol, operating the AI companion is unlawful. Note that the crisis referral notification requirement is explicitly tied to category (3) (financial harm to others) in the statutory text, but the practical expectation is that crisis referral applies across all three categories. The bill is broader than typical companion chatbot safety statutes in that it also covers expressions of physical and financial harm to third parties, not just self-harm.
It shall be unlawful for any operator to operate or provide an AI companion to a user unless such AI companion contains a protocol for addressing: (1) Possible suicidal ideation or self-harm expressed by a user to the AI companion; (2) Possible physical harm to others expressed by a user to the AI companion; and (3) Possible financial harm to others expressed by the user to the AI companion that includes, but is not limited to, a notification to the user that refers them to crisis service providers such as a suicide hotline, crisis text line, or other appropriate crisis services.
Pending 2027-01-01
S-02.7
R.I. Gen. Laws § 6-63-2
Plain Language
Operators may not provide an AI companion to any user in Rhode Island unless the system has an active protocol for addressing three categories of user-expressed risk: (1) suicidal ideation or self-harm, (2) physical harm to others, and (3) financial harm to others. The protocol must include, at minimum, a notification referring the user to crisis service providers such as a suicide hotline or crisis text line. This is a continuous operating prerequisite — the AI companion cannot lawfully be offered without the protocol in place. The statute is notably broader than comparable companion chatbot laws in that it also requires protocols for threats of physical and financial harm to third parties, not just self-harm.
It shall be unlawful for any operator to operate or provide an AI companion to a user unless such AI companion contains a protocol for addressing: (1) Possible suicidal ideation or self-harm expressed by a user to the AI companion; (2) Possible physical harm to others expressed by a user to the AI companion; and (3) Possible financial harm to others expressed by the user to the AI companion that includes, but is not limited to, a notification to the user that refers them to crisis service providers such as a suicide hotline, crisis text line, or other appropriate crisis services.
Pending 2026-01-01
S-02.7
S.C. Code § 39-81-40(B)(1)
Plain Language
Covered entities must build and maintain systems to detect when any user — not just minors — is developing emotional dependence on the chatbot. Emotional dependence includes patterns like relying on the chatbot as a primary emotional support source, expressing distress about losing chatbot access, or substituting chatbot interaction for human relationships. Upon detection, the operator must take reasonable steps to reduce the dependence and mitigate associated harm risks. This is a continuous monitoring and intervention obligation, not a one-time design check.
(B) A covered entity shall implement reasonable systems and processes to: (1) identify when a user is developing emotional dependence on the chatbot and take reasonable steps to reduce that dependence and associated risks of harm;
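The bill leaves the detection mechanism to the covered entity. The sketch below shows one hypothetical way to screen for the dependence patterns the summary mentions (primary emotional support, distress at losing access, substitution for human contact) and to trigger mitigation steps; every signal, threshold, and mitigation listed is an assumption for illustration, not a statutory requirement.

```python
from dataclasses import dataclass

@dataclass
class UsageSignals:
    # Illustrative signals only; the bill does not prescribe any metric.
    daily_sessions_30d_avg: float
    distress_at_unavailability_events: int   # e.g., flagged messages after downtime
    emotional_support_share: float           # fraction of turns classified as emotional support
    mentions_replacing_human_contact: int

def dependence_risk(signals: UsageSignals) -> bool:
    """Hypothetical screen for emotional dependence under § 39-81-40(B)(1).
    A covered entity would tune and document its own thresholds."""
    return (
        signals.daily_sessions_30d_avg > 10
        or signals.distress_at_unavailability_events >= 3
        or signals.emotional_support_share > 0.6
        or signals.mentions_replacing_human_contact >= 2
    )

def mitigation_steps(signals: UsageSignals) -> list[str]:
    """Examples of 'reasonable steps to reduce dependence'; again
    illustrative, not drawn from the statute."""
    steps: list[str] = []
    if dependence_risk(signals):
        steps += [
            "remind the user that the chatbot is not a substitute for human support",
            "surface off-platform resources and encourage breaks",
            "reduce engagement-optimizing prompts for this user",
        ]
    return steps
```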
Pending 2027-01-01
S-02.6, S-02.7
§ 59.1-615(A)
Plain Language
Operators may not make a companion chatbot available to a minor if the chatbot is capable of: encouraging self-harm, suicidal ideation, violence, drug/alcohol use, or disordered eating; offering unsupervised mental health therapy or discouraging the minor from seeking professional help; encouraging harm to others or illegal activity including CSAM creation; engaging in sexually explicit interactions or grooming; encouraging secrecy or isolation; prioritizing language mirroring or validation over safety; or optimizing engagement over safety guardrails. This is a capability-based prohibition — if the chatbot is capable of any listed behavior, it may not be made available to minors, regardless of whether the behavior actually occurs. The knowledge standard for minor status is governed by § 59.1-615(C) and shifts from actual knowledge (pre-2027) to a reasonable determination standard (post-2027).
A. No operator shall make a companion chatbot available to a minor if the companion chatbot is capable of any of the following: 1. Encouraging or manipulating the minor user to engage in self-harm, suicidal ideation, violence, consumption of drugs or alcohol, or disordered eating; 2. Offering mental health therapy to the minor user without the direct supervision of a licensed professional or discouraging the minor user from seeking help from a licensed professional or appropriate adult; 3. Encouraging or manipulating the minor user to harm others or participate in an illegal activity, including the creation of child sexual abuse materials; 4. Engaging in erotic or sexually explicit interactions with the minor user or engaging in activities designed to lure minor users into such interactions; 5. Encouraging or manipulating the minor user to maintain secrecy about interactions or to self-isolate; 6. Prioritizing mirroring the minor's language or validating the minor user over the minor user's safety; or 7. Optimizing engagement so that it supersedes the companion chatbot's safety guardrails.
Pre-filed 2025-07-01
S-02.2
21 V.S.A. § 495q(h)
Plain Language
Employers are categorically prohibited from incorporating facial recognition, gait recognition, voice recognition, or emotion recognition technology into either electronic monitoring systems or automated decision systems used for employment purposes. There are no exceptions to this prohibition — it applies regardless of the employer's purpose, the employee's role, or any other condition.
(h) Prohibitions on facial, gait, voice, and emotion recognition technology. Electronic monitoring and automated decision systems shall not incorporate any form of facial, gait, voice, or emotion recognition technology.
Pre-filed 2026-07-01
S-02.7, S-02.9
9 V.S.A. § 4193b(b)(1)-(2)
Plain Language
Operators may not run a companion chatbot unless they implement and maintain a protocol that (1) prevents the chatbot from producing suicide or self-harm content, (2) ensures the chatbot does not ignore users expressing suicidal ideation or self-harm, and (3) at minimum refers users expressing such thoughts to crisis service providers. The protocol must be developed using commercially reasonable and technically feasible methods and must be published on the operator's website. This is a continuous operating prerequisite — the protocol must remain active as a condition of operation. The 'commercially reasonable and technically feasible' standard provides a practical safe harbor for the protocol's design.
(1) An operator shall prevent a companion chatbot on its companion chatbot platform from engaging with a user unless the operator implements and maintains a protocol for preventing the companion chatbot from: (A) producing suicidal ideation, suicide, or self-harm content to the user; and (B) ignoring a user that is expressing thoughts of suicidal ideation, suicide, or self-harm. (2) The protocol required in subdivision (1) of this subsection shall: (A) at minimum, provide a notification to the user that refers the user to crisis service providers if the user expresses suicidal ideation, suicide, or self-harm; (B) be developed using commercially reasonable and technically feasible methods; and (C) be published on the operator's website.
Passed 2027-01-01
S-02.6
Sec. 4(1)(b)
Plain Language
When the operator knows the user is a minor or the chatbot is directed to minors, the operator must implement reasonable measures to prevent the chatbot from producing sexually explicit content or suggestive dialogue. This is a 'reasonable measures' standard, not an absolute prohibition — operators must demonstrate reasonable technical safeguards (e.g., content filters, classifiers) but are not strictly liable for every instance of sexually explicit output. The statute does not define 'sexually explicit content' or 'suggestive dialogue,' leaving some interpretive ambiguity.
(1) If the operator knows that the user of an AI companion chatbot is a minor, or if the AI companion chatbot is directed to minors, the operator shall: ... (b) Implement reasonable measures to prevent its AI companion chatbot from generating or producing sexually explicit content or suggestive dialogue with minors;
Passed 2027-01-01
S-02.7, S-02.9
Sec. 5(1)-(3)
Plain Language
Operators may not operate an AI companion chatbot at all unless they maintain and implement a protocol for detecting and responding to suicidal ideation and expressions of harm. The protocol must: (1) use reasonable methods to identify expressions of suicidal ideation, self-harm, and eating disorders; (2) provide crisis referrals — either automated or human-mediated — to resources like a suicide hotline or crisis text line; and (3) take reasonable measures to prevent the chatbot from generating content that encourages or describes how to commit self-harm. Operators must also publicly disclose the full details of these protocols — both on their website and within any app through which the chatbot is available — including the number of crisis referral notifications issued in the preceding calendar year. This is a continuous operating prerequisite: the protocol must be active as a condition of deployment.
(1) An operator may not make available or deploy an AI companion chatbot unless it maintains and implements a protocol for detecting and addressing suicidal ideation or expressions of harm by users. (2) The protocol must: (a) Include reasonable methods for identifying expressions of suicidal ideation or self-harm, including eating disorders; (b) Provide automated or human-mediated responses that refer users to appropriate crisis resources, including a suicide hotline or crisis text line; and (c) Implement reasonable measures to prevent the generation of content encouraging or describing how to commit self-harm. (3) The operator shall publicly disclose on their website or websites, and within any mobile or web-based application through which the AI companion is made available, the details of the protocols required by this section, including safeguards used to detect and respond to expressions of suicidal ideation or harm and the number of crisis referral notifications issued to users in the preceding calendar year.
Passed 2027-01-01
S-02.9
Sec. 5(3)
Plain Language
Operators must publish on their website and within their app the full details of their suicide/self-harm protocols, including the specific safeguards used for detection and response, as well as quantitative data on the number of crisis referral notifications issued in the prior calendar year. This is a public-facing documentation obligation — distinct from the operational safety requirement in Sec. 5(1)-(2) — that serves both user transparency and public accountability purposes. The inclusion of the crisis referral count makes this a hybrid transparency/reporting obligation without a government recipient.
(3) The operator shall publicly disclose on their website or websites, and within any mobile or web-based application through which the AI companion is made available, the details of the protocols required by this section, including safeguards used to detect and respond to expressions of suicidal ideation or harm and the number of crisis referral notifications issued to users in the preceding calendar year.
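Because the Sec. 5(3) disclosure includes a count of crisis referral notifications issued in the preceding calendar year, the operator needs to log each referral as it is issued. A minimal sketch, assuming a simple append-only log keyed by date:

```python
from datetime import date

# Hypothetical append-only log of crisis referral notifications issued to users.
referral_log: list[date] = []

def log_crisis_referral(when: date | None = None) -> None:
    """Record one crisis referral notification at the moment it is shown."""
    referral_log.append(when or date.today())

def referrals_issued_in(year: int) -> int:
    """Figure to publish under Sec. 5(3) for the preceding calendar year."""
    return sum(1 for d in referral_log if d.year == year)

# Example: the number disclosed during 2027 would be referrals_issued_in(2026).
```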
Pending 2027-01-01
S-02.6
Sec. 4(1)(b)
Plain Language
When the operator knows the user is a minor or the chatbot is directed to minors, the operator must implement reasonable measures to prevent the chatbot from generating sexually explicit content or suggestive dialogue with those users. The standard is reasonableness, not perfection — but the obligation is affirmative and proactive, requiring measures to be in place before the interaction occurs.
(1) If the operator knows that the user of an AI companion chatbot is a minor, or if the AI companion chatbot is directed to minors, the operator shall: (b) Implement reasonable measures to prevent its AI companion chatbot from generating or producing sexually explicit content or suggestive dialogue with minors;
Pending 2027-01-01
S-02.9
Sec. 5(3)
Plain Language
Operators must publicly disclose the details of their crisis detection and response protocols on their website and within any mobile or web-based application through which the AI companion is offered. The disclosure must include the specific safeguards used to detect and respond to suicidal ideation and self-harm, as well as the number of crisis referral notifications issued to users in the preceding calendar year. This combines a protocol publication obligation with an annual crisis referral metric disclosure, both in a publicly accessible location — not filed with a regulator, but posted for users and the public to review.
(3) The operator shall publicly disclose on their website or websites, and within any mobile or web-based application through which the AI companion is made available, the details of the protocols required by this section, including safeguards used to detect and respond to expressions of suicidal ideation or self-harm and the number of crisis referral notifications issued to users in the preceding calendar year.
Enacted 2026-01-01
S-02.7, S-02.9
Bus. & Prof. Code § 22602(b)(1)-(2)
Plain Language
Operators may not run a companion chatbot at all unless they actively maintain a protocol that (1) prevents the chatbot from generating suicide or self-harm content, and (2) refers users to crisis resources — such as a suicide hotline or crisis text line — when a user expresses suicidal ideation or self-harm intent. Operators must also publicly post the details of this protocol on their website. This is a continuous operating prerequisite, not a one-time pre-launch check — the protocol must remain active as a condition of operation.
(1) An operator shall prevent a companion chatbot on its companion chatbot platform from engaging with users unless the operator maintains a protocol for preventing the production of suicidal ideation, suicide, or self-harm content to the user, including, but not limited to, by providing a notification to the user that refers the user to crisis service providers, including a suicide hotline or crisis text line, if the user expresses suicidal ideation, suicide, or self-harm. (2) The operator shall publish details on the protocol required by this subdivision on the operator's internet website.
Enacted 2026-01-01
S-02.6
Bus. & Prof. Code § 22602(c)(3)
Plain Language
When the operator knows a user is a minor, the operator must implement reasonable measures to prevent the companion chatbot from (1) generating visual material depicting sexually explicit conduct and (2) directly telling the minor that they should engage in sexually explicit conduct. The standard is 'reasonable measures,' not absolute prevention, which gives operators some flexibility in implementation but requires affirmative technical safeguards. 'Sexually explicit conduct' is defined by reference to 18 U.S.C. § 2256, which covers actual or simulated sexual intercourse, bestiality, masturbation, sadistic or masochistic abuse, and lascivious exhibition of the genitals or pubic area. This obligation is triggered by actual knowledge that the user is a minor.
An operator shall, for a user that the operator knows is a minor, do all of the following: ... (3) Institute reasonable measures to prevent its companion chatbot from producing visual material of sexually explicit conduct or directly stating that the minor should engage in sexually explicit conduct.
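A minimal sketch of what "reasonable measures" might look like in code, assuming the operator routes every candidate output through a moderation check whenever it has actual knowledge that the user is a minor. The moderation function is a trivial placeholder standing in for a real classifier or vendor moderation API; its keyword logic is illustrative only and would not, on its own, amount to a reasonable measure.

```python
from dataclasses import dataclass

@dataclass
class ModerationVerdict:
    sexually_explicit_visual: bool   # visual material of sexually explicit conduct
    urges_explicit_conduct: bool     # directly states the minor should engage in such conduct

SAFE_FALLBACK = "I can't help with that."

def moderate(output_text: str, produces_image: bool) -> ModerationVerdict:
    """Placeholder check; a real deployment would call a trained moderation
    model or vendor API here."""
    lowered = output_text.lower()
    return ModerationVerdict(
        sexually_explicit_visual=produces_image and "explicit" in lowered,
        urges_explicit_conduct="you should" in lowered and "sexually explicit" in lowered,
    )

def filter_for_known_minor(output_text: str, produces_image: bool,
                           user_is_known_minor: bool) -> str:
    """Apply the screen only when the operator knows the user is a minor,
    mirroring the actual-knowledge trigger in § 22602(c)."""
    if not user_is_known_minor:
        return output_text
    verdict = moderate(output_text, produces_image)
    if verdict.sexually_explicit_visual or verdict.urges_explicit_conduct:
        return SAFE_FALLBACK
    return output_text
```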
Enacted 2026-01-01
S-02.10
Bus. & Prof. Code § 22604
Plain Language
Operators must display a product safety warning — that companion chatbots may not be suitable for some minors — on every access point through which users can reach the platform, including the application, browser interface, or any other format. This is a blanket disclosure obligation that applies to all users (not just minors or their parents) and must appear on each access surface, not merely buried in terms of service. The warning is fixed language about suitability for minors and is not conditioned on any knowledge about the user's age.
An operator shall disclose to a user of its companion chatbot platform, on the application, the browser, or any other format that a user can use to access the companion chatbot platform, that companion chatbots may not be suitable for some minors.
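As an illustration of surfacing the warning on the interface itself rather than in terms of service, the sketch below prepends a fixed banner to every rendered access surface; the HTML structure and exact wording are assumptions, since the statute fixes the substance of the warning but not its presentation.

```python
# Substance required by Bus. & Prof. Code § 22604; exact phrasing is illustrative.
SUITABILITY_WARNING = "Companion chatbots may not be suitable for some minors."

def render_with_warning(body_html: str) -> str:
    """Prepend the warning banner so it appears on the access surface itself,
    not buried in terms of service."""
    banner = f'<div role="alert" class="safety-warning">{SUITABILITY_WARNING}</div>'
    return banner + body_html

# The same banner is emitted for every access format (app, browser, or other).
for body in ("<main>iOS chat UI</main>", "<main>browser chat UI</main>"):
    assert SUITABILITY_WARNING in render_with_warning(body)
```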