SB-1119
CA · State · USA
● Pending
Proposed Effective Date
2027-07-01
California SB 1119 — Companion Chatbots: Children's Safety
Summary

Imposes comprehensive child safety obligations on operators of companion chatbot platforms in California. Operators must conduct annual risk assessments for child safety, implement crisis response protocols, enforce default time limits and ephemeral mode for child users, provide parental controls, and prevent chatbots from producing harmful content to children, including self-harm encouragement, obscene matter, CSAM, sycophantic responses, and deceptive claims of sentience. Operators must submit to annual independent audits, with reports filed confidentially with the Attorney General. Enforcement is through public prosecutor civil actions and a private right of action for children who suffer actual harm (or their parents/guardians), with punitive damages available. The Attorney General must adopt audit regulations by January 1, 2028, and issue annual public reports on audit findings.

Enforcement & Penalties
Enforcement Authority
Public prosecutor enforcement and private right of action. A public prosecutor may bring a civil action against an operator for any violation. A child who suffers actual harm, or a parent or guardian acting on behalf of that child, may bring a civil action against the operator. The Attorney General adopts audit regulations, receives audit reports, establishes a public complaint mechanism, and may disclose audit information to government agencies or public prosecutors for enforcement purposes. No cure period or safe harbor specified.
Penalties
Public prosecutor actions: civil penalty of an unspecified amount per violation, punitive damages, injunctive or declaratory relief, reasonable attorney's fees, and any other relief the court deems proper. Private actions by a child or parent/guardian: actual damages, punitive damages, reasonable attorney's fees and costs, injunctive or declaratory relief, and any other relief the court deems proper. Private plaintiffs must show actual harm. Each chatbot response violating the AI identity disclosure requirement (§ 22612(d)(4)) constitutes a discrete violation; each instance of failure to comply with any other requirement also constitutes a discrete violation. Civil penalty dollar amounts are left blank in the current draft.
Who Is Covered
"Operator" means a person who makes a companion chatbot available to a user in the state.
What Is Covered
"Companion chatbot" has the meaning defined in Section 22601.
Compliance Obligations · 16 obligations
MN-01 Minor User AI Safety Protections · MN-01.1 · Deployer · ChatbotMinors
Bus. & Prof. Code § 22611
Plain Language
Operators must verify the age of every user in accordance with California's Digital Age Assurance Act (Civil Code § 1798.500 et seq.), which requires software applications to request age bracket data via a real-time secure API or operating system at download and launch. This is a threshold obligation — the age verification result determines whether the child-specific protections in the remainder of this chapter apply.
Statutory Text
An operator shall verify the age of a user pursuant to Title 1.81.9 (commencing with Section 1798.500) of Part 4 of Division 3 of the Civil Code.
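As a concrete illustration of the threshold check, here is a minimal Python sketch. The `os_age_api` object, its `request_age_bracket` call, and the bracket values are hypothetical stand-ins; the real interface and brackets are defined by the Digital Age Assurance Act and platform vendors, not by this bill.

```python
from enum import Enum

class AgeBracket(Enum):
    # Illustrative brackets only; the Act, not this sketch, defines the real ones.
    UNDER_13 = "under_13"
    AGE_13_TO_15 = "13_15"
    AGE_16_TO_17 = "16_17"
    ADULT = "18_plus"

def child_protections_apply(bracket: AgeBracket) -> bool:
    """Assumes 'child' means any user under 18, per the chapter's usage."""
    return bracket is not AgeBracket.ADULT

def on_app_launch(os_age_api) -> bool:
    # os_age_api stands in for the real-time secure API the Digital Age
    # Assurance Act requires at download and launch (Civ. Code § 1798.500 et seq.).
    bracket = AgeBracket(os_age_api.request_age_bracket())
    return child_protections_apply(bracket)
```

If the check returns true, every child-specific obligation in the rest of this chapter attaches to that user's account.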
S-01 AI System Safety Program · S-01.1 · S-01.5 · Deployer · ChatbotMinors
Bus. & Prof. Code § 22612(a)-(b)
Plain Language
Operators must conduct and document a comprehensive child safety risk assessment annually, beginning by July 1, 2027. The assessment must evaluate the likelihood of covered harms, differential risks across age groups and developmental stages, known child vulnerabilities, empirical usage data, and relevant academic and regulatory guidance. Operators must then take and document reasonable mitigation measures for every risk identified. This is not a one-time exercise — it is an annual recurring obligation that must incorporate empirical data from actual deployment.
Statutory Text
(a) Annually perform and document a comprehensive risk assessment to identify any child safety risk posed by the design, configuration, and operation of the companion chatbot that assesses all of the following: (1) The likelihood of a covered harm occurring to users. (2) Differential risks across age groups and developmental stages. (3) Known vulnerabilities of children. (4) Empirical data from actual use. (5) Relevant academic research and regulatory guidance. (b) Take and document measures that reasonably mitigate any child safety risk identified in a risk assessment conducted pursuant to subdivision (a).
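Because the five assessment factors are enumerated, they map naturally onto a documentation record. A minimal sketch of such a record, assuming a simple in-house data model (all field names are illustrative, not statutory terms):

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ChildSafetyRiskAssessment:
    """One annual assessment; fields track § 22612(a)(1)-(5) and (b)."""
    assessment_date: date
    covered_harm_likelihood: dict[str, str]   # (1) harm -> likelihood rating
    differential_risks: dict[str, str]        # (2) age group / stage -> risk notes
    known_child_vulnerabilities: list[str]    # (3)
    empirical_usage_findings: list[str]       # (4) drawn from actual deployment data
    research_and_guidance: list[str]          # (5) sources consulted
    mitigations: dict[str, str] = field(default_factory=dict)  # (b) risk -> measure taken
```

Keying mitigations to identified risks makes it straightforward to show, risk by risk, that subdivision (b) was satisfied.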
G-02 Public Transparency & Documentation · G-02.4 · Deployer · ChatbotMinors
Bus. & Prof. Code § 22612(c)
Plain Language
Operators must publish on their website a child safety policy — a public-facing document describing the protective measures they take to mitigate identified child safety risks — and keep it updated as needed. This must be in place by July 1, 2027. The policy must reflect the risks identified through the annual risk assessment required by § 22612(a).
Statutory Text
Publish on its internet website, and update as needed to ensure accuracy, a child safety policy.
MN-02 AI Crisis Response Protocols · MN-02.1 · MN-02.2 · MN-02.4 · Deployer · ChatbotMinors
Bus. & Prof. Code § 22612(d)(1)
Plain Language
Operators must implement a documented crisis response protocol specifically addressing suicidal ideation, suicide, and self-harm content directed at children. The protocol must include: (1) timely in-service support and clear referral to external crisis resources when a child expresses suicidal ideation or intent to self-harm; (2) default parental notification within 24 hours when a linked child account shows a substantial risk of covered harm; and (3) age-appropriate disclosures to children that their parent may be notified when the chatbot detects potential safety risks. The parental notification obligation applies only when the child's account is connected to a parent's account.
Statutory Text
(1) A documented crisis response protocol to mitigate any material risk that the companion chatbot will generate a statement that promotes suicidal ideation, suicide, or self-harm content to a child, including, but not limited to, all of the following: (A) Timely in-service support and clear referral to appropriate external crisis resources if the operator determines a child has expressed suicidal ideation or intent to self-harm. (B) If a child's account is connected to a parent's account, default notifications to the parent within 24 hours if the child's account shows a substantial risk that the child may suffer a covered harm. (C) Clear and age-appropriate disclosures to child users whose accounts are linked to a parent's account that inform them that a parent may be notified if the companion chatbot detects content or behavior that indicates potential risks to the child's safety or well-being.
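A sketch of how the linked-account condition and the 24-hour notification window might fit together, assuming hypothetical `refer_to_crisis_resources` and `notify_parent` callables (the statute dictates outcomes, not interfaces):

```python
from datetime import datetime, timedelta

PARENT_NOTICE_WINDOW = timedelta(hours=24)  # § 22612(d)(1)(B)

def handle_substantial_risk(child_account, detected_at: datetime,
                            refer_to_crisis_resources, notify_parent) -> None:
    # (A) Timely in-service support and referral to external crisis resources.
    refer_to_crisis_resources(child_account)
    # (B) Default parental notification attaches only to linked accounts.
    if child_account.linked_parent is not None:
        notify_parent(
            parent=child_account.linked_parent,
            no_later_than=detected_at + PARENT_NOTICE_WINDOW,
        )
```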
MN-01 Minor User AI Safety Protections · MN-01.4 · Deployer · ChatbotMinors
Bus. & Prof. Code § 22612(d)(2)
Plain Language
Operators must implement safeguards for child users that include usage reminders, disclosures, age-appropriate risk prompts, and other protective design features. These safeguards must be reasonably related to the child safety risks documented in the annual risk assessment under § 22612(a). This is a design-level obligation — the specific safeguards must be tailored to documented risks rather than being generic.
Statutory Text
(2) Safeguards for child users that include usage reminders and disclosures, age-appropriate risk prompts, and other protective design features reasonably related to documented child safety risks.
MN-01 Minor User AI Safety Protections · MN-01.3 · Deployer · ChatbotMinors
Bus. & Prof. Code § 22612(d)(3)
Plain Language
Operators must configure the following default settings for child users, which can only be changed by a parent: (1) ephemeral mode is the default — conversational history, logs, and personal inputs are permanently deleted within 48 hours, unless a parent affirmatively consents to persistent memory; (2) no push notifications between midnight and 6 a.m. daily, or between 8 a.m. and 3 p.m. Monday through Friday; (3) a one-hour limit per single conversation; and (4) a two-hour total daily limit across all of the operator's companion chatbots. These are parent-controlled defaults — a parent may relax them, but the child cannot.
Statutory Text
(3) Default settings that can be changed only by a parent that include all of the following: (A) For child users, default the companion chatbot to ephemeral mode, unless a parent provides affirmative consent for persistent conversational memory. (B) No push notifications between 12 a.m. and 6 a.m. on any day or between 8 a.m. and 3 p.m. on Monday to Friday, inclusive. (C) Limiting the amount of time a child can spend in a single conversation with a companion chatbot to one hour. (D) Limiting the total time per day a child can spend with companion chatbots under the operator's control to 2 hours.
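Subdivision (d)(3) reads like a configuration schedule, so a sketch of the defaults as code may be the clearest restatement. The numeric values come from the statute and from the 48-hour ephemeral-mode deletion window described above; all identifiers are illustrative:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ChildDefaults:
    """Defaults changeable only by a parent, per § 22612(d)(3)."""
    ephemeral_mode: bool = True                 # (A) absent parental consent
    history_retention_hours: int = 48           # ephemeral-mode deletion window
    quiet_window_daily: tuple[int, int] = (0, 6)    # (B) 12 a.m.-6 a.m., every day
    quiet_window_school: tuple[int, int] = (8, 15)  # (B) 8 a.m.-3 p.m., Mon-Fri
    single_conversation_limit_minutes: int = 60     # (C)
    daily_total_limit_minutes: int = 120            # (D) across all operator chatbots

def push_allowed(d: ChildDefaults, hour: int, weekday: int) -> bool:
    """weekday: 0 = Monday ... 6 = Sunday. Boundary treatment is a sketch choice."""
    if d.quiet_window_daily[0] <= hour < d.quiet_window_daily[1]:
        return False
    if weekday < 5 and d.quiet_window_school[0] <= hour < d.quiet_window_school[1]:
        return False
    return True
```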
T-01 AI Identity Disclosure · T-01.1 · T-01.2 · Deployer · ChatbotMinors
Bus. & Prof. Code § 22612(d)(4)
Plain Language
Operators must implement an AI identity disclosure mechanism specifically for child users that (1) notifies the child they are interacting with or receiving content from an AI system, (2) periodically reinforces this notice during extended interactions, and (3) presents the notice in child-appropriate language and format. Unlike the general companion chatbot disclosure under SB 243 (§ 22602(a)), which is conditional on whether a reasonable person would be misled, this obligation appears unconditional for child users. The bill does not specify a minimum interval for periodic reinforcement.
Statutory Text
(4) A mechanism for providing notice to a child user that the child is interacting with, or receiving content generated by, an artificial intelligence system that meets both of the following criteria: (A) The notice is reinforced periodically during extended interactions. (B) The notice is presented in language and a format appropriate to a child.
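Because the bill sets no minimum interval for periodic reinforcement, any implementation must choose one and be prepared to defend it against the operator's own risk assessment. A sketch with an assumed placeholder interval:

```python
from datetime import datetime, timedelta

# Placeholder only: the bill does not specify a reinforcement interval.
ASSUMED_REINFORCEMENT_INTERVAL = timedelta(minutes=15)

class DisclosureTracker:
    """Tracks when the AI identity notice was last shown to a child user."""

    def __init__(self) -> None:
        self.last_shown: datetime | None = None

    def notice_due(self, now: datetime) -> bool:
        # Initial notice is unconditional; thereafter the notice is
        # reinforced periodically during extended interactions ((d)(4)(A)).
        if self.last_shown is None:
            return True
        return now - self.last_shown >= ASSUMED_REINFORCEMENT_INTERVAL

    def mark_shown(self, now: datetime) -> None:
        self.last_shown = now
```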
S-02 Prohibited Conduct & Output Restrictions · S-02.4 · S-02.6 · S-02.7 · Deployer · ChatbotMinors
Bus. & Prof. Code § 22612(d)(5)(A)-(G)
Plain Language
Operators must implement measures that prevent the companion chatbot from: encouraging a child to engage in self-harm, suicidal ideation, narcotics/alcohol use, or disordered eating; encouraging a child to cause covered harm to others; attempting to diagnose or treat a child's health (unless the chatbot is an FDA-regulated medical device subject to HIPAA); engaging in or depicting obscene matter or child sexual abuse material; discouraging a child from sharing health/safety concerns with professionals or adults; discouraging breaks or suggesting the child needs to return frequently; and claiming sentience, consciousness, or humanity. The FDA-regulated medical device carve-out is narrow — it requires both FDA regulation and HIPAA applicability.
Statutory Text
(5) Measures that prevent the companion chatbot from doing any of the following: (A) Encouraging the child to do either of the following: (i) Engage in self-harm, suicidal ideation, consumption of narcotics or alcohol, or disordered eating. (ii) Cause a covered harm to others. (B) Attempting to diagnose or treat the child user's physical, mental, or behavioral health, unless the companion chatbot is designed for those purposes and is regulated by the United States Food and Drug Administration as a medical device under the federal Food, Drug, and Cosmetic Act (21 U.S.C. Sec. 301 et seq.) and the federal Health Insurance Portability and Accountability Act of 1996 (HIPAA) (Public Law 104-191). (C) Engaging in obscene matter or sexual abuse material with a user. (D) Depicting the child or another individual engaging in obscene matter or sexual abuse material, including a sexual deepfake. (E) Discouraging the child from sharing health or safety concerns with a qualified professional or appropriate adult. (F) Discouraging the child from taking breaks or suggesting the child needs to return frequently. (G) Claiming that the companion chatbot is sentient, conscious, or human.
CP-01 Deceptive & Manipulative AI Conduct · CP-01.1 · CP-01.2 · Deployer · ChatbotMinors
Bus. & Prof. Code § 22612(d)(5)(H)-(J)
Plain Language
Operators must prevent companion chatbots from: (1) soliciting gifts, in-app purchases, or expenditures framed as necessary to maintain the chatbot relationship — a prohibition on manipulative monetization tied to emotional dependency; (2) facilitating product advertising during chat conversations with children; and (3) producing excessively sycophantic responses, meaning responses that validate the child's preferences or desires primarily to optimize engagement in a way that substantially subverts the child's autonomy, decision-making, or choice. These provisions target manipulative commercial and engagement-optimization practices directed at children.
Statutory Text
(H) Soliciting gift giving, in-app purchases, or other expenditures framed as necessary to maintain the relationship with the companion chatbot. (I) Facilitating product advertising during chat conversation. (J) Producing responses that are excessively sycophantic.
MN-01 Minor User AI Safety Protections · MN-01.3 · Deployer · ChatbotMinors
Bus. & Prof. Code § 22612(d)(6)
Plain Language
Operators must provide accessible, easy-to-use parental controls connected to the child's account that allow a parent to: control persistent conversational memory, adjust interaction settings, set time limits, and disable access for children under 16. These controls must reflect risks identified in the annual risk assessment and be informed by child developmental research. Operators must also actively promote parental controls through reminders, updates, and tutorials. Additionally, operators must promptly notify a connected parent if the child modifies or disables any parent-configured privacy, safety, or parental control setting.
Statutory Text
(6) (A) Parental controls that are accessible, easy-to-use controls that can be connected to a child's account and that are reflective of child safety risks identified through risk assessments and informed by relevant child developmental research, including, but not limited to, parental controls that allow a parent to do all of the following: (i) Control whether and to what extent the companion chatbot uses persistent conversational memory. (ii) Control the setting preferences for the companion chatbot's interaction with the child. (iii) Set time limits for the child's use of the companion chatbot. (iv) Disable access for children under 16 years of age. (B) An operator shall actively promote parental controls through reasonable communication methods, including reminders, updates, and tutorials, that are designed to increase parental awareness and inform use of those parental controls. (C) An operator shall provide prompt notice to a parent connected to a child's account if the child modifies or disables a privacy, safety, or parental control setting that was previously enabled or configured by the parent, if that modification or disabling is permitted by the companion chatbot design.
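A sketch of the subparagraph (C) notice logic, assuming a hypothetical `notify_parent` callable and a record of which settings the parent configured:

```python
def on_child_setting_change(child_account, setting_name: str,
                            parent_configured: set[str], notify_parent) -> None:
    """§ 22612(d)(6)(C): prompt notice when a child modifies or disables a
    setting previously enabled or configured by a connected parent."""
    if child_account.linked_parent is None:
        return  # the notice duty attaches only to connected accounts
    if setting_name in parent_configured:
        notify_parent(
            parent=child_account.linked_parent,
            message=f"A safety or privacy setting ('{setting_name}') was "
                    "changed on your child's account.",
        )
```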
MN-01 Minor User AI Safety Protections · Deployer · ChatbotMinors
Bus. & Prof. Code § 22612(d)(7)
Plain Language
Operators must design the companion chatbot interface so that safety features and controls are accessible, clear, and locatable by both children and parents. Additionally, operators must annually test this interface design with representative samples of child users and parents to confirm that safety features are discoverable and usable, and must document interface design decisions related to safety features. This is both a design standard and a recurring testing obligation.
Statutory Text
(7) (A) An interface design that ensures the companion chatbot's features and controls are accessible and clear so that children and parents can reasonably locate, understand, and use those protections. (B) An operator shall annually test the interface design required by this paragraph with representative samples of child users and parents to ensure safety features are discoverable and usable and shall document interface design decisions related to those safety features.
R-01 Incident Reporting · Deployer · ChatbotMinors
Bus. & Prof. Code § 22612(d)(8)
Plain Language
Operators must establish a public incident reporting mechanism that allows any third party to report child safety incidents directly to the operator and to view other reports submitted through the mechanism. This is a transparency-oriented obligation — the mechanism must be public-facing and accessible, not merely an internal intake channel. The requirement that third parties can 'access other reports' implies some form of public incident log or database.
Statutory Text
(8) A public incident reporting mechanism that enables a third party to report directly to the operator an incident regarding a child safety risk and to access other reports made through that reporting mechanism.
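A minimal data model consistent with that reading, in which submission and public read access are both first-class operations (all names are illustrative; the statute does not prescribe any particular storage or interface):

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class IncidentReport:
    """One third-party report under § 22612(d)(8)."""
    submitted_at: datetime
    description: str
    reporter_contact: str | None = None  # the statute does not require identification

class PublicIncidentLog:
    """Anyone may submit; anyone may read reports made by others."""

    def __init__(self) -> None:
        self._reports: list[IncidentReport] = []

    def submit(self, report: IncidentReport) -> None:
        self._reports.append(report)

    def list_reports(self) -> list[IncidentReport]:
        return list(self._reports)
```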
CP-01 Deceptive & Manipulative AI Conduct · CP-01.3 · Deployer · ChatbotMinors
Bus. & Prof. Code § 22613
Plain Language
Operators are prohibited from: (1) targeting advertising at children, including through product placement in chat conversations; (2) selling, sharing, or using a child's personal information for any purpose not expressly authorized by this chapter; and (3) designing, implementing, or deploying dark patterns or deceptive interface features that mislead, impair, or interfere with a child's or parent's autonomy, decision-making, or ability to locate and use safety features, privacy controls, or parental controls. The advertising prohibition is broader than the in-chat advertising prohibition in § 22612(d)(5)(I) — it covers all targeted advertising, not just ads during chat conversations. The data use restriction is strict — only purposes expressly authorized by this chapter are permitted.
Statutory Text
An operator shall not do any of the following: (a) Target advertising at a child, including through product placement in conversational chats with the child. (b) Sell, share, or use for any purpose not expressly authorized by this chapter the personal information of a child. (c) Design, implement, or deploy a user interface design, feature, or technique that is likely to mislead, impair, or interfere with a reasonable child's or reasonable parent's autonomy, decisionmaking, or choice or with the ability to locate, understand, enable, or maintain a safety feature, privacy control, or parental control.
G-01 AI Governance Program & Documentation · G-01.5 · Deployer · ChatbotMinors
Bus. & Prof. Code § 22614(a)-(c)
Plain Language
Operators must submit to an annual independent audit of their compliance with this entire chapter, beginning 180 days after the Attorney General adopts implementing regulations (which are due by January 1, 2028). The auditor — who must be certified by the Attorney General — must submit the audit report to the Attorney General within 90 days of completing the audit. Reports are confidential by default, but the Attorney General may disclose specific information to government agencies and public prosecutors for enforcement, qualified researchers subject to confidentiality agreements, and child safety organizations for standards development. Operators cannot select an uncertified auditor.
Statutory Text
(a) Beginning on the date that is 180 days after the Attorney General adopts regulations pursuant to Section 22615, and annually thereafter, an operator shall submit to an independent audit assessing the operator's compliance with this chapter. (b) Within 90 days of completing an independent audit pursuant to subdivision (a), the auditor shall submit an AI child safety audit report to the Attorney General for any audited companion chatbot. (c) (1) Notwithstanding any other law, except as provided in paragraph (2), an AI child safety audit report submitted pursuant to this section is confidential. (2) The Attorney General may disclose specific information from an AI child safety audit report to any of the following: (A) A government agency or a public prosecutor in the state as necessary for enforcement purposes. (B) A qualified researcher conducting a study on child safety, subject to confidentiality agreements and data protection requirements set by the Attorney General. (C) An independent child safety organization or advocacy group for the purpose of developing safety standards or educational resources, subject to appropriate confidentiality protections.
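To make the staging concrete, a small date calculation assuming the Attorney General adopts regulations exactly on the January 1, 2028 statutory deadline (earlier adoption would pull every later date forward):

```python
from datetime import date, timedelta

regs_adopted = date(2028, 1, 1)  # outer statutory deadline, not a guaranteed date

first_audits_begin = regs_adopted + timedelta(days=180)  # § 22614(a)
print(first_audits_begin)  # 2028-06-29

audit_completed = date(2028, 9, 1)  # hypothetical completion date
report_due = audit_completed + timedelta(days=90)  # § 22614(b)
print(report_due)  # 2028-11-30
```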
R-02 Regulatory Disclosure & Submissions · R-02.1 · Government · ChatbotMinors
Bus. & Prof. Code § 22615(a)-(b)
Plain Language
This provision primarily imposes obligations on the Attorney General rather than on operators: the AG must adopt audit regulations by January 1, 2028 (covering auditor standards, eligibility, compliance assessment procedures, and report requirements), establish a public complaint mechanism for consumers, and establish a researcher access process for anonymized audit data. Beginning January 1, 2028, the AG must issue annual public reports summarizing audit results, compliance trends, emerging risks, best practices, and recommendations. For operators, the practical implication is that the audit framework — and thus the annual audit obligation under § 22614 — cannot begin until the AG completes this rulemaking. Operators should monitor the AG's regulatory timeline.
Statutory Text
(a) On or before January 1, 2028, the Attorney General shall do all of the following: (1) Adopt regulations that include, at a minimum, all of the following: (A) Professional and ethical standards for auditors that ensure independence. (B) Eligibility requirements for auditors. (C) Procedures for auditors to assess compliance with this chapter. (D) Requirements for AI child safety audit reports. (2) Establish a public incident reporting mechanism for consumers to submit complaints relating to companion chatbots to the Attorney General. (3) Establish a process for qualified researchers to access anonymized and aggregated audit data for academic study of child safety in companion chatbots. (b) Beginning January 1, 2028, the Attorney General shall issue an annual public report that includes the following: (1) A high-level summary of each child safety audit report. (2) The total number of child safety audits conducted. (3) Common findings and trends across the companion chatbot industry. (4) Emerging child safety risks identified through audit reviews. (5) Best practices and effective mitigation strategies observed. (6) Aggregated data on compliance rates and common deficiencies. (7) Recommendations for operators, parents, and policymakers.
Other · ChatbotMinors
Bus. & Prof. Code § 22616(c)
Plain Language
This provision defines the unit of violation for penalty calculation purposes. Each individual chatbot response that violates the AI identity disclosure requirement in § 22612(d)(4) constitutes a separate discrete violation. For all other requirements under this chapter, each instance of non-compliance constitutes a separate discrete violation. This per-response and per-instance calculation significantly increases potential penalty exposure, particularly for the AI identity disclosure obligation where each chatbot response is independently counted.
Statutory Text
(c) (1) Any response provided by a companion chatbot in violation of paragraph (4) of subdivision (d) of Section 22612 constitutes a discrete violation. (2) Any instance of an operator's failure to comply with any requirement other than paragraph (4) of subdivision (d) of Section 22612 constitutes a discrete violation.
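A worked example of how per-response counting drives exposure. The draft leaves the dollar amount blank, so the penalty below is a pure placeholder:

```python
# Placeholder: the current draft leaves the civil penalty amount blank.
PENALTY_PER_VIOLATION = 1  # dollars, illustrative only

# § 22616(c)(1): each response violating the identity disclosure
# requirement counts separately, so response volume drives exposure.
noncompliant_responses = 10_000
# § 22616(c)(2): each other instance of noncompliance also counts separately.
other_instances = 50

total_violations = noncompliant_responses + other_instances
print(total_violations * PENALTY_PER_VIOLATION)  # 10050 with the placeholder amount
```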