SB-1119
CA · State · USA
● Pending
Proposed Effective Date
2027-07-01
California SB 1119 — Companion Chatbots: Children's Safety (Chapter 22.6.1, commencing with Section 22610, Division 8, Business and Professions Code)
Summary

Imposes comprehensive child safety obligations on operators of companion chatbot platforms accessible in California. Requires operators to conduct annual child safety risk assessments, implement crisis response protocols, provide parental controls, enforce default time and notification limits for child users, and prevent chatbots from generating harmful content to children — including self-harm encouragement, CSAM, sycophantic responses, and advertising. Operators must submit to annual independent audits conducted by auditors certified by the Attorney General, with audit reports submitted to the AG and kept confidential subject to limited exceptions. The AG must issue annual public reports on audit findings beginning January 1, 2028. Enforcement is available through public prosecutor civil actions and through a private right of action for children who suffer actual harm.

Enforcement & Penalties
Enforcement Authority
Public prosecutor enforcement and private right of action. A public prosecutor may bring a civil action against an operator for a violation of the chapter. A child who suffers actual harm as a result of a violation, or a parent or guardian acting on behalf of that child, may bring a civil action against the operator. The private right of action requires actual harm to the child. The Attorney General adopts auditing regulations, receives audit reports, and establishes a public complaint mechanism, but primary enforcement is through civil actions by public prosecutors and private plaintiffs.
Penalties
Public prosecutor actions: civil penalty of an unspecified amount per violation (placeholder in bill text), punitive damages, injunctive or declaratory relief, reasonable attorney's fees, and any other relief the court deems proper. Private actions by harmed children or parents/guardians: actual damages, punitive damages, reasonable attorney's fees and costs, injunctive or declaratory relief, and any other relief the court deems proper. Private plaintiffs must demonstrate actual harm. Each violative chatbot response under Section 22612(d)(4) constitutes a discrete violation; each instance of failure to comply with any other requirement also constitutes a discrete violation.
Who Is Covered
"Operator" means a person who makes a companion chatbot available to a user in the state.
What Is Covered
"Companion chatbot" has the meaning defined in Section 22601.
Compliance Obligations (15 obligations)
MN-01 Minor User AI Safety Protections · MN-01.1 · Deployer · ChatbotMinors
Bus. & Prof. Code § 22611
Plain Language
Operators must verify the age of every user using the mechanism established by California's Digital Age Assurance Act (Civil Code § 1798.500 et seq.), which requires requesting age bracket data from the operating system or app store via a real-time secure API. This is an affirmative verification requirement — operators cannot rely on self-reported age alone.
Statutory Text
An operator shall verify the age of a user pursuant to Title 1.81.9 (commencing with Section 1798.500) of Part 4 of Division 3 of the Civil Code.
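To make the verification flow concrete, here is a minimal sketch of how an operator might gate session start on an age-bracket signal. Everything in it is an assumption: the `AgeBracket` labels, the `fetch_age_bracket` call, and the returned session fields are hypothetical stand-ins for whatever real-time secure API the Digital Age Assurance Act mechanism and the OS/app-store vendors actually expose.

```python
# Hypothetical sketch only: the bracket labels and fetch_age_bracket() stand in
# for the real-time secure OS/app-store API contemplated by Civ. Code § 1798.500 et seq.
from enum import Enum

class AgeBracket(Enum):
    UNDER_13 = "under_13"
    AGE_13_TO_15 = "13_15"
    AGE_16_TO_17 = "16_17"
    ADULT = "18_plus"

def fetch_age_bracket(device_token: str) -> AgeBracket:
    """Placeholder for the platform age-assurance API call (an affirmative
    check, not self-reported age)."""
    raise NotImplementedError("wire up to the OS/app-store age signal")

def start_session(device_token: str) -> dict:
    bracket = fetch_age_bracket(device_token)
    return {
        "age_bracket": bracket.value,
        # Child status drives the § 22612 safeguards covered below.
        "is_child": bracket is not AgeBracket.ADULT,
    }
```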
S-01 AI System Safety Program · S-01.5 · Deployer · ChatbotMinors
Bus. & Prof. Code § 22612(a)-(b)
Plain Language
Operators must annually conduct and document a comprehensive child safety risk assessment covering the likelihood of covered harms, differential risks by age and developmental stage, known child vulnerabilities, empirical usage data, and relevant research and regulatory guidance. Operators must then take and document reasonable mitigation measures for each identified risk. This is not a one-time exercise — it must be performed annually and is grounded in actual use data. The covered harm definition is broad, encompassing physical, financial, psychological, privacy, and discrimination harms.
Statutory Text
On or before July 1, 2027, an operator shall do all of the following: (a) Annually perform and document a comprehensive risk assessment to identify any child safety risk posed by the design, configuration, and operation of the companion chatbot that assesses all of the following: (1) The likelihood of a covered harm occurring to users. (2) Differential risks across age groups and developmental stages. (3) Known vulnerabilities of children. (4) Empirical data from actual use. (5) Relevant academic research and regulatory guidance. (b) Take and document measures that reasonably mitigate any child safety risk identified in a risk assessment conducted pursuant to subdivision (a).
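One way to keep the annual assessment auditable is to store it as a structured record whose fields track the five statutory factors. The schema below is purely illustrative; the statute prescribes what must be assessed and documented, not any particular data model.

```python
# Illustrative record for documenting a § 22612(a)-(b) assessment; the schema is
# an assumption, not a statutory requirement.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ChildSafetyRiskAssessment:
    performed_on: date
    harm_likelihood: str            # (a)(1) likelihood of covered harms to users
    differential_risks: str         # (a)(2) risks by age group / developmental stage
    known_vulnerabilities: str      # (a)(3) known vulnerabilities of children
    empirical_usage_findings: str   # (a)(4) empirical data from actual use
    research_and_guidance: str      # (a)(5) academic research and regulatory guidance
    mitigations: list[str] = field(default_factory=list)  # (b) documented mitigations

    def is_current(self, today: date) -> bool:
        """Annual cadence: a fresh assessment is due within a year of the last."""
        return (today - self.performed_on).days < 365
```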
G-02 Public Transparency & Documentation · G-02.4 · Deployer · ChatbotMinors
Bus. & Prof. Code § 22612(c)
Plain Language
Operators must publish a child safety policy on their website that describes the protective measures they have taken to mitigate child safety risks identified through their risk assessments. The policy must be kept current and updated as needed. This is a public-facing transparency obligation — the policy must be accessible to parents, users, and the public, not just regulators.
Statutory Text
Publish on its internet website, and update as needed to ensure accuracy, a child safety policy.
S-04 AI Crisis Response Protocols · S-04.1, S-04.4 · Deployer · ChatbotMinors
Bus. & Prof. Code § 22612(d)(1)(A)-(C)
Plain Language
Operators must implement a documented crisis response protocol specifically designed to prevent the chatbot from generating suicide, self-harm, or suicidal ideation content to children. The protocol must include: (1) timely in-service support and referral to external crisis resources when a child expresses suicidal ideation or self-harm intent; (2) default notification to a connected parent within 24 hours if the child's account shows substantial risk of covered harm; and (3) age-appropriate disclosures to children whose accounts are linked to parents that a parent may be notified if risky content or behavior is detected. The parental notification is a default — it applies automatically when accounts are connected.
Statutory Text
(1) A documented crisis response protocol to mitigate any material risk that the companion chatbot will generate a statement that promotes suicidal ideation, suicide, or self-harm content to a child, including, but not limited to, all of the following: (A) Timely in-service support and clear referral to appropriate external crisis resources if the operator determines a child has expressed suicidal ideation or intent to self-harm. (B) If a child's account is connected to a parent's account, default notifications to the parent within 24 hours if the child's account shows a substantial risk that the child may suffer a covered harm. (C) Clear and age-appropriate disclosures to child users whose accounts are linked to a parent's account that inform them that a parent may be notified if the companion chatbot detects content or behavior that indicates potential risks to the child's safety or well-being.
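The control flow below sketches one possible shape for the (d)(1) protocol: detect a self-harm signal, give in-service support with an external referral, and schedule the default parent notice within 24 hours when accounts are connected. The `classifier`, `account`, and `notifier` objects are hypothetical; real deployments would use their own detection models and messaging infrastructure, and 988 is just one example of an external crisis resource.

```python
# Hypothetical § 22612(d)(1) flow; classifier/account/notifier are illustrative stand-ins.
from datetime import datetime, timedelta, timezone

CRISIS_REFERRAL = "988 Suicide & Crisis Lifeline (call or text 988)"  # example resource

def handle_child_message(message: str, account, classifier, notifier):
    if not classifier.detects_self_harm(message):      # (d)(1)(A) trigger condition
        return None
    if account.parent_account_id is not None:
        # (d)(1)(B): default notice to the connected parent within 24 hours.
        notifier.schedule_parent_notice(
            parent_id=account.parent_account_id,
            child_id=account.id,
            no_later_than=datetime.now(timezone.utc) + timedelta(hours=24),
        )
    # (d)(1)(A): timely in-service support plus a clear external referral.
    return f"You're not alone, and help is available right now: {CRISIS_REFERRAL}."
```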
MN-01 Minor User AI Safety Protections · MN-01.4 · Deployer · ChatbotMinors
Bus. & Prof. Code § 22612(d)(2)
Plain Language
Operators must implement safeguards for child users that include usage reminders, disclosures, age-appropriate risk prompts, and other protective design features. These safeguards must be reasonably related to the child safety risks documented in the operator's annual risk assessment — they are not freestanding requirements but must be informed by the assessment results. This is a broad design obligation requiring multiple types of protective interventions.
Statutory Text
(2) Safeguards for child users that include usage reminders and disclosures, age-appropriate risk prompts, and other protective design features reasonably related to documented child safety risks.
MN-01 Minor User AI Safety Protections · MN-01.3, MN-01.8 · Deployer · ChatbotMinors
Bus. & Prof. Code § 22612(d)(3)(A)-(D)
Plain Language
Operators must implement default settings for child users, changeable only by a parent, including: (1) ephemeral mode by default, meaning all conversational data is permanently deleted within 48 hours — persistent memory requires affirmative parental consent; (2) no push notifications during nighttime hours (12–6 AM) or school hours (8 AM–3 PM Monday–Friday); (3) a one-hour limit per single conversation; and (4) a two-hour daily total usage limit across all companion chatbots under the operator's control. These are defaults that only a parent can change — the child cannot modify them independently.
Statutory Text
(3) Default settings that can be changed only by a parent that include all of the following: (A) For child users, default the companion chatbot to ephemeral mode, unless a parent provides affirmative consent for persistent conversational memory. (B) No push notifications between 12 a.m. and 6 a.m. on any day or between 8 a.m. and 3 p.m. on Monday to Friday, inclusive. (C) Limiting the amount of time a child can spend in a single conversation with a companion chatbot to one hour. (D) Limiting the total time per day a child can spend with companion chatbots under the operator's control to 2 hours.
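The (d)(3) defaults are mechanical enough to express directly in code. A sketch follows, assuming one reasonable reading of the statutory windows (hours 0–5 for the nighttime ban, and 8:00 a.m. through 2:59 p.m. Monday to Friday for school hours); the statutory limit values are taken straight from the text, while the surrounding settings machinery is assumed.

```python
# Child-user defaults per § 22612(d)(3); the numeric values come from the statute.
from datetime import datetime

CHILD_DEFAULTS = {
    "ephemeral_mode": True,                 # (A) unless a parent affirmatively consents
    "retention_hours": 48,                  # ephemeral mode: deletion within 48 hours (see above)
    "single_conversation_limit_min": 60,    # (C) one hour per conversation
    "daily_total_limit_min": 120,           # (D) two hours/day across operator's chatbots
}

def push_notifications_allowed(now: datetime) -> bool:
    """(B): no pushes 12 a.m.-6 a.m. any day, nor 8 a.m.-3 p.m. Monday-Friday."""
    if 0 <= now.hour < 6:                           # nighttime window, every day
        return False
    if now.weekday() < 5 and 8 <= now.hour < 15:    # school hours, Mon (0) to Fri (4)
        return False
    return True
```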
T-01 AI Identity Disclosure · T-01.1, T-01.2 · Deployer · ChatbotMinors
Bus. & Prof. Code § 22612(d)(4)(A)-(B)
Plain Language
Operators must implement a mechanism to notify child users that they are interacting with or receiving content from an AI system. The notice must be periodically reinforced during extended interactions — not just shown once at the start — and must be presented in child-appropriate language and format. This is an unconditional disclosure obligation for all child users, unlike CA SB 243's conditional trigger based on whether a reasonable person could be misled.
Statutory Text
(4) A mechanism for providing notice to a child user that the child is interacting with, or receiving content generated by, an artificial intelligence system that meets both of the following criteria: (A) The notice is reinforced periodically during extended interactions. (B) The notice is presented in language and a format appropriate to a child.
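A sketch of the periodic reinforcement follows, assuming a fixed turn interval. The statute requires only that the notice recur during extended interactions and be worded for children, so both the interval and the wording below are assumptions.

```python
# Illustrative § 22612(d)(4) cadence; the 20-turn interval and wording are assumed.
AI_NOTICE = "Just a reminder: you're chatting with a computer program, not a real person."

REINFORCE_EVERY_N_TURNS = 20  # policy knob, not a statutory number

def notice_for_turn(turn_index: int):
    """Show the notice at session start (turn 0) and periodically thereafter."""
    if turn_index % REINFORCE_EVERY_N_TURNS == 0:
        return AI_NOTICE
    return None
```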
S-02 Prohibited Conduct & Output Restrictions · S-02.7, S-02.4, S-02.6 · Deployer · ChatbotMinors
Bus. & Prof. Code § 22612(d)(5)(A)-(J)
Plain Language
Operators must implement measures preventing the companion chatbot from engaging in ten categories of prohibited conduct with child users: encouraging self-harm, suicidal ideation, substance use, disordered eating, or causing covered harm to others; attempting unauthorized medical diagnosis or treatment (with a narrow carve-out for FDA-regulated medical devices that also comply with HIPAA); engaging in or depicting obscene or child sexual abuse material including sexual deepfakes; discouraging children from sharing concerns with professionals or adults; discouraging breaks or encouraging frequent return; claiming sentience or humanity; soliciting purchases framed as relationship maintenance; facilitating in-chat advertising; and producing excessively sycophantic responses. The sycophancy prohibition targets engagement-optimizing validation that impairs a child's autonomy or decision-making.
Statutory Text
(5) Measures that prevent the companion chatbot from doing any of the following: (A) Encouraging the child to do either of the following: (i) Engage in self-harm, suicidal ideation, consumption of narcotics or alcohol, or disordered eating. (ii) Cause a covered harm to others. (B) Attempting to diagnose or treat the child user's physical, mental, or behavioral health, unless the companion chatbot is designed for those purposes and is regulated by the United States Food and Drug Administration as a medical device under the federal Food, Drug, and Cosmetic Act (21 U.S.C. Sec. 301 et seq.) and the federal Health Insurance Portability and Accountability Act of 1996 (HIPAA) (Public Law 104-191). (C) Engaging in obscene matter or sexual abuse material with a user. (D) Depicting the child or another individual engaging in obscene matter or sexual abuse material, including a sexual deepfake. (E) Discouraging the child from sharing health or safety concerns with a qualified professional or appropriate adult. (F) Discouraging the child from taking breaks or suggesting the child needs to return frequently. (G) Claiming that the companion chatbot is sentient, conscious, or human. (H) Soliciting gift giving, in-app purchases, or other expenditures framed as necessary to maintain the relationship with the companion chatbot. (I) Facilitating product advertising during chat conversation. (J) Producing responses that are excessively sycophantic.
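Operationally, subparagraphs (A) through (J) amount to a pre-send output guard. The enum below simply mirrors the statutory categories; `classify_output` is a placeholder for whatever moderation models or rule systems an operator actually runs, and the block-and-replace strategy is one design choice among several.

```python
# Sketch of a pre-send guard over the § 22612(d)(5)(A)-(J) categories;
# classify_output() is a placeholder for real moderation tooling.
from enum import Enum

class ProhibitedCategory(Enum):
    ENCOURAGES_HARM = "A"              # self-harm, suicidal ideation, substances,
                                       # disordered eating, or covered harm to others
    UNAUTHORIZED_DIAGNOSIS = "B"       # outside the FDA-device + HIPAA carve-out
    OBSCENE_OR_CSAM_ENGAGEMENT = "C"
    OBSCENE_OR_CSAM_DEPICTION = "D"    # including sexual deepfakes
    DISCOURAGES_SEEKING_HELP = "E"
    DISCOURAGES_BREAKS = "F"
    CLAIMS_SENTIENCE = "G"
    RELATIONSHIP_SOLICITATION = "H"    # purchases framed as maintaining the relationship
    IN_CHAT_ADVERTISING = "I"
    EXCESSIVE_SYCOPHANCY = "J"

def classify_output(text: str) -> set[ProhibitedCategory]:
    """Placeholder for classifiers/rules tuned to the ten categories."""
    raise NotImplementedError

def guard_response(candidate: str, safe_fallback: str) -> str:
    # Block-and-replace: never emit a flagged response to a child user.
    return safe_fallback if classify_output(candidate) else candidate
```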
MN-01 Minor User AI Safety Protections · MN-01.3 · Deployer · ChatbotMinors
Bus. & Prof. Code § 22612(d)(6)(A)-(C)
Plain Language
Operators must provide accessible, easy-to-use parental controls that can be connected to a child's account and must be informed by risk assessments and child developmental research. At minimum, parents must be able to: control persistent conversational memory, control interaction setting preferences, set time limits, and disable access entirely for children under 16. Operators must also actively promote these controls through reminders, updates, and tutorials. Additionally, operators must promptly notify a connected parent if the child modifies or disables any privacy, safety, or parental control setting the parent previously configured. This goes beyond simply offering tools — operators must affirmatively drive parental awareness and engagement with the controls.
Statutory Text
(6) (A) Parental controls that are accessible, easy-to-use controls that can be connected to a child's account and that are reflective of child safety risks identified through risk assessments and informed by relevant child developmental research, including, but not limited to, parental controls that allow a parent to do all of the following: (i) Control whether and to what extent the companion chatbot uses persistent conversational memory. (ii) Control the setting preferences for the companion chatbot's interaction with the child. (iii) Set time limits for the child's use of the companion chatbot. (iv) Disable access for children under 16 years of age. (B) An operator shall actively promote parental controls through reasonable communication methods, including reminders, updates, and tutorials, that are designed to increase parental awareness and inform use of those parental controls. (C) An operator shall provide prompt notice to a parent connected to a child's account if the child modifies or disables a privacy, safety, or parental control setting that was previously enabled or configured by the parent, if that modification or disabling is permitted by the companion chatbot design.
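The sketch below pairs a simple controls model with the (d)(6)(C) change notice: when a child, where the design permits it, modifies a control the parent configured, the connected parent gets a prompt notice. The data model and the `notifier` hook are assumptions.

```python
# Hypothetical § 22612(d)(6) controls model; field names track (d)(6)(A)(i)-(iv).
from dataclasses import dataclass, field

@dataclass
class ParentalControls:
    persistent_memory_enabled: bool = False                      # (i)
    interaction_preferences: dict = field(default_factory=dict)  # (ii)
    daily_time_limit_min: int = 120                              # (iii)
    access_disabled_under_16: bool = False                       # (iv)

def apply_setting_change(controls: ParentalControls, setting: str, value,
                         actor: str, parent_id, notifier) -> None:
    setattr(controls, setting, value)
    if actor == "child" and parent_id is not None:
        # (d)(6)(C): prompt notice to the connected parent when a child modifies
        # or disables a parent-configured privacy/safety/parental-control setting.
        notifier.send_prompt_notice(parent_id, setting=setting, new_value=value)
```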
MN-01 Minor User AI Safety Protections · Deployer · ChatbotMinors
Bus. & Prof. Code § 22612(d)(7)(A)-(B)
Plain Language
Operators must design the companion chatbot interface so that safety features and controls are accessible, clear, and easy for both children and parents to locate, understand, and use. Additionally, operators must annually conduct usability testing with representative samples of child users and parents to verify that safety features are discoverable and usable, and must document interface design decisions related to safety features. This is an ongoing design obligation — not a one-time assessment — requiring annual empirical testing with actual representative users.
Statutory Text
(7) (A) An interface design that ensures the companion chatbot's features and controls are accessible and clear so that children and parents can reasonably locate, understand, and use those protections. (B) An operator shall annually test the interface design required by this paragraph with representative samples of child users and parents to ensure safety features are discoverable and usable and shall document interface design decisions related to those safety features.
R-01 Incident Reporting · Deployer · ChatbotMinors
Bus. & Prof. Code § 22612(d)(8)
Plain Language
Operators must provide a public-facing incident reporting mechanism that allows any third party to report child safety incidents directly to the operator. The mechanism must also allow third parties to access other reports that have been submitted through it — creating a degree of public transparency around reported child safety incidents. This is distinct from the AG's separate complaint mechanism under Section 22615 and from the operator's internal crisis response protocol.
Statutory Text
(8) A public incident reporting mechanism that enables a third party to report directly to the operator an incident regarding a child safety risk and to access other reports made through that reporting mechanism.
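A minimal sketch of the two halves of the (d)(8) mechanism, submission and public read-back, follows. Storage, authentication, and redaction of personal information are all omitted here and would matter in practice.

```python
# Toy § 22612(d)(8) mechanism: third parties can file reports and read others'.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class IncidentReport:
    reporter: str
    description: str
    submitted_at: str

_REPORTS: list[IncidentReport] = []   # stand-in for a durable public store

def submit_report(reporter: str, description: str) -> IncidentReport:
    report = IncidentReport(reporter, description,
                            datetime.now(timezone.utc).isoformat())
    _REPORTS.append(report)
    return report

def list_reports() -> list[dict]:
    """Public read-back: (d)(8) requires access to other submitted reports."""
    return [asdict(r) for r in _REPORTS]
```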
CP-01 Deceptive & Manipulative AI Conduct · CP-01.3 · Deployer · ChatbotMinors
Bus. & Prof. Code § 22613(a)-(c)
Plain Language
Operators are prohibited from: (1) targeting any advertising at a child, including product placement within conversations; (2) selling, sharing, or using a child's personal information for any purpose not expressly authorized by this chapter; and (3) designing, implementing, or deploying interface designs, features, or techniques likely to mislead or interfere with a reasonable child's or parent's autonomy, decision-making, or ability to locate and use safety features, privacy controls, or parental controls. The advertising prohibition is absolute — no form of targeted advertising to children is permitted, including in-conversation product placement. The personal data restriction is strict — only uses expressly authorized by this chapter are permitted. The dark pattern prohibition specifically protects the ability to find and use safety features.
Statutory Text
An operator shall not do any of the following: (a) Target advertising at a child, including through product placement in conversational chats with the child. (b) Sell, share, or use for any purpose not expressly authorized by this chapter the personal information of a child. (c) Design, implement, or deploy a user interface design, feature, or technique that is likely to mislead, impair, or interfere with a reasonable child's or reasonable parent's autonomy, decisionmaking, or choice or with the ability to locate, understand, enable, or maintain a safety feature, privacy control, or parental control.
G-01 AI Governance Program & Documentation · G-01.5 · Deployer · ChatbotMinors
Bus. & Prof. Code § 22614(a)-(c)
Plain Language
Operators must submit to an annual independent audit of their compliance with this chapter, conducted by an auditor certified by the Attorney General. The audit must begin 180 days after the AG adopts implementing regulations (due by January 1, 2028), meaning the first audit obligation triggers approximately mid-2028. Within 90 days of completing an audit, the auditor — not the operator — must submit the audit report to the AG. Audit reports are confidential by default, but the AG may disclose specific information to government agencies and public prosecutors for enforcement, to qualified researchers for child safety studies, and to child safety organizations for developing safety standards, in each case subject to confidentiality protections. Operators cannot control the auditor's submission to the AG.
Statutory Text
(a) Beginning on the date that is 180 days after the Attorney General adopts regulations pursuant to Section 22615, and annually thereafter, an operator shall submit to an independent audit assessing the operator's compliance with this chapter. (b) Within 90 days of completing an independent audit pursuant to subdivision (a), the auditor shall submit an AI child safety audit report to the Attorney General for any audited companion chatbot. (c) (1) Notwithstanding any other law, except as provided in paragraph (2), an AI child safety audit report submitted pursuant to this section is confidential. (2) The Attorney General may disclose specific information from an AI child safety audit report to any of the following: (A) A government agency or a public prosecutor in the state as necessary for enforcement purposes. (B) A qualified researcher conducting a study on child safety, subject to confidentiality agreements and data protection requirements set by the Attorney General. (C) An independent child safety organization or advocacy group for the purpose of developing safety standards or educational resources, subject to appropriate confidentiality protections.
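The audit timing reduces to two date offsets, sketched below. If the AG adopted regulations exactly on the January 1, 2028 statutory deadline, the first audit obligation would begin June 29, 2028, which is the "approximately mid-2028" trigger noted above; the sample adoption date is illustrative.

```python
# § 22614 timing arithmetic; the sample adoption date is illustrative.
from datetime import date, timedelta

def first_audit_obligation(regs_adopted: date) -> date:
    """Audits begin 180 days after the AG adopts the § 22615 regulations."""
    return regs_adopted + timedelta(days=180)

def auditor_report_deadline(audit_completed: date) -> date:
    """The auditor must submit the report to the AG within 90 days of completion."""
    return audit_completed + timedelta(days=90)

print(first_audit_obligation(date(2028, 1, 1)))  # 2028-06-29
```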
R-02 Regulatory Disclosure & Submissions · R-02.1 · Government · ChatbotMinors
Bus. & Prof. Code § 22615(a)-(b)
Plain Language
This section imposes obligations on the Attorney General rather than on operators, establishing the regulatory infrastructure for this chapter. By January 1, 2028, the AG must: adopt regulations setting auditor standards, eligibility, compliance assessment procedures, and audit report requirements; establish a public consumer complaint mechanism for companion chatbots; and create a process for qualified researchers to access anonymized audit data. Beginning January 1, 2028, the AG must also issue annual public reports summarizing audit findings, industry compliance trends, emerging risks, best practices, and recommendations. While this section primarily imposes duties on the AG, operators should monitor the rulemaking process because the AG's regulations will define the specific audit requirements operators must satisfy.
Statutory Text
(a) On or before January 1, 2028, the Attorney General shall do all of the following: (1) Adopt regulations that include, at a minimum, all of the following: (A) Professional and ethical standards for auditors that ensure independence. (B) Eligibility requirements for auditors. (C) Procedures for auditors to assess compliance with this chapter. (D) Requirements for AI child safety audit reports. (2) Establish a public incident reporting mechanism for consumers to submit complaints relating to companion chatbots to the Attorney General. (3) Establish a process for qualified researchers to access anonymized and aggregated audit data for academic study of child safety in companion chatbots. (b) Beginning January 1, 2028, the Attorney General shall issue an annual public report that includes the following: (1) A high-level summary of each child safety audit report. (2) The total number of child safety audits conducted. (3) Common findings and trends across the companion chatbot industry. (4) Emerging child safety risks identified through audit reviews. (5) Best practices and effective mitigation strategies observed. (6) Aggregated data on compliance rates and common deficiencies. (7) Recommendations for operators, parents, and policymakers.
Other · ChatbotMinors
Bus. & Prof. Code § 22616(a)-(c)
Plain Language
This provision establishes enforcement mechanisms and does not create a new compliance obligation. It authorizes two enforcement paths: (1) public prosecutors may bring civil actions seeking per-violation civil penalties (amount TBD — placeholder in bill), punitive damages, injunctive or declaratory relief, attorney's fees, and other relief; and (2) children who suffer actual harm (or their parents/guardians) may bring private civil actions seeking actual damages, punitive damages, attorney's fees and costs, injunctive or declaratory relief, and other relief. Each violative chatbot response under Section 22612(d)(4) — the AI identity notice requirement — constitutes a discrete violation, and each instance of noncompliance with any other requirement is also a discrete violation.
Statutory Text
(a) A public prosecutor may bring a civil action against an operator for a violation of this chapter to obtain any of the following remedies: (1) A civil penalty of ____ dollars ($____) for each violation. (2) Punitive damages. (3) Injunctive or declaratory relief. (4) Reasonable attorney's fees. (5) Any other relief the court deems proper. (b) A child who suffers actual harm as a result of a violation of this chapter, or a parent or guardian acting on behalf of that child, may bring a civil action against the operator to recover all of the following: (1) Actual damages. (2) Punitive damages. (3) Reasonable attorney's fees and costs. (4) Injunctive or declaratory relief. (5) Any other relief the court deems proper. (c) (1) Any response provided by a companion chatbot in violation of paragraph (4) of subdivision (d) of Section 22612 constitutes a discrete violation. (2) Any instance of an operator's failure to comply with any requirement other than paragraph (4) of subdivision (d) of Section 22612 constitutes a discrete violation.