S-1037
SC · State · USA
● Pending
South Carolina S. 1037 — Protecting Children from Chatbots Act (Chapter 81, Title 39)
Summary

Imposes safety and access control obligations on operators of chatbots with 500,000 or more monthly active users worldwide, with a focus on protecting minors. Requires covered entities to offer a limited-access mode (no account, no restricted features) for unverified users, implement reasonable age verification before enabling restricted features (personalization, proactive outreach, relationship simulation, extended sessions, explicit content), and obtain verifiable parental consent before enabling restricted features for minors. Prohibits features designed to prioritize engagement metrics at the expense of user wellbeing, requires crisis detection and response systems, mandates emergency notification to law enforcement when imminent risk of death or serious injury is detected, and requires incident reporting to the Attorney General within 15 days of knowledge of a covered harm. Enforceable by the Attorney General (up to $50,000 per violation per day) and via private right of action for harmed persons.

Enforcement & Penalties
Enforcement Authority
Attorney General enforcement: the Attorney General may initiate an action in the name of the State and seek injunctive relief and civil penalties. Private right of action: any person harmed by a violation, or a parent or legal guardian of a minor harmed, may bring a civil action. There is no cure period and no safe harbor conditioning enforcement. Contractual waivers, limitations, and mandatory arbitration clauses are void and unenforceable.
Penalties
Attorney General may seek injunctive relief and civil penalties of up to $50,000 per violation; each day of non-compliance constitutes a separate violation. Private plaintiffs may recover monetary damages for the harm caused, reasonable attorney fees and costs, injunctive or declaratory relief, and punitive damages if the violation was willful and wanton, reckless, or grossly negligent. Private action damages require proof of harm caused by the violation. Rights and remedies may not be waived by contract; mandatory arbitration clauses for claims under this act are void.
Who Is Covered
"Covered entity" means an operator of a chatbot that has five hundred thousand or more monthly active users worldwide. A covered entity does not include an operator of a chatbot that is: (i) not offered to the general public, such as internal workplace tools, clinician-supervised clinical tools, or university research systems; or (ii) used by a business entity solely for customer service or to strictly provide users with information about available commercial services or products provided by that entity, customer service account information, or other information strictly related to its customer service.
What Is Covered
"Chatbot" means any artificial intelligence, algorithmic, or automated system that: (a) produces new expressive content or responses not fully predetermined by the operator of the service or application; (b) accepts open-ended, natural-language, or multimodal user input and produces adaptive or context-responsive natural language output; and (c) maintains a conversational state across exchanges and is designed to facilitate multi-turn dialogue rather than to respond to discrete information requests.
Compliance Obligations · 10 obligations
MN-01 Minor User AI Safety Protections · MN-01.1 · Deployer · ChatbotMinors
S.C. Code § 39-81-20(A)-(C), (F), (G), (H)
Plain Language
Covered entities must offer a limited-access mode in which users can interact with the chatbot without creating an account and without any restricted features enabled. Before enabling any restricted feature (personalization, proactive outreach, extended sessions, relationship simulation, or explicit content), the entity must require account creation, verify age, and classify the user as a minor or adult. Age verification data must be minimized, used only for verification, not shared with third parties, not combined with other personal data, and deleted within 24 hours. Users must have a process to appeal age-verification decisions. The entity must also monitor for potential misclassifications and re-verify suspect accounts. Existing accounts must have restricted features disabled within 60 days of the effective date unless age-verified. A safe harbor protects entities from liability when a minor incidentally uses a correctly verified adult account, provided the entity complies with ongoing monitoring obligations.
Statutory Text
(A)(1) A covered entity shall make a limited-access mode available and shall ensure that any unverified user may only access and interact with a chatbot in limited-access mode. (B) Before enabling any restricted feature for a user, a covered entity shall: (1) require the user to create a user account; (2) verify the user's age using a reasonable age verification process, subject to item (3); and (3) using the age data, classify the user as a minor or an adult. (C) When conducting reasonable age verification process under this section, an operator shall: (1) collect only the age verification data that is strictly necessary to reasonably verify age; (2) use age verification data only for age verification; (3) not sell, rent, share, or otherwise disclose age verification data to any third party, except to a service provider performing age verification under a contract prohibiting further disclosure; (4) not combine age verification data with any other personal data about the user; (5) delete age verification data within twenty-four hours of completing the age verification process, except that the operator may retain a record that the user has been verified as a minor; and (6) provide a simple process for a user to appeal or correct an age-verification decision. (F) A covered entity shall implement reasonable systems and processes to identify user accounts that may be inaccurately classified by age, such as patterns of use suggesting a minor is using an adult account or credible reports that an account was created using false age data, and shall re-verify any such account before enabling any restricted feature. (G) A covered entity shall not be liable under this chapter solely because a minor incidentally uses a user account that has been correctly verified and classified as an adult account, provided the covered entity is otherwise in compliance with subsection (F). (H) With respect to each user account of a covered entity that exists as of the effective date of this act, a covered entity shall, within sixty days, disable access to restricted features for any account that has not been classified as an authorized minor account or a verified adult account, unless and until the user completes age verification.
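The gating and data-handling rules above reduce to a simple control flow. The following is a minimal, hypothetical Python sketch of the limited-access default, the restricted-feature gate, and the 24-hour deletion window; the class, field, and feature names are illustrative assumptions, and nothing in the act prescribes a particular implementation.

```python
# Hypothetical sketch, not a prescribed implementation: unverified users stay in
# limited-access mode, restricted features require an age-verified classification,
# and age-verification data is deleted within 24 hours of verification.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
from enum import Enum


class Classification(Enum):
    UNVERIFIED = "unverified"   # limited-access mode only
    MINOR = "minor"
    ADULT = "adult"


# Restricted features as listed in the act; the slugs are illustrative.
RESTRICTED_FEATURES = {
    "personalization",
    "proactive_outreach",
    "relationship_simulation",
    "extended_sessions",
    "explicit_content",
}


@dataclass
class UserAccount:
    classification: Classification = Classification.UNVERIFIED
    verification_data_collected_at: datetime | None = None


def may_enable_feature(account: UserAccount, feature: str) -> bool:
    """Restricted features may be enabled only for age-verified accounts (§ 39-81-20(A)-(B))."""
    if feature not in RESTRICTED_FEATURES:
        return True  # non-restricted functionality remains available in limited-access mode
    return account.classification in (Classification.MINOR, Classification.ADULT)


def purge_stale_verification_data(account: UserAccount, now: datetime) -> None:
    """Delete age-verification data within 24 hours of verification (§ 39-81-20(C)(5))."""
    collected = account.verification_data_collected_at
    if collected is not None and now - collected >= timedelta(hours=24):
        account.verification_data_collected_at = None  # stand-in for deleting the underlying data


if __name__ == "__main__":
    account = UserAccount()
    assert not may_enable_feature(account, "personalization")   # unverified: limited access only
    account.classification = Classification.ADULT
    account.verification_data_collected_at = datetime.now(timezone.utc) - timedelta(hours=25)
    purge_stale_verification_data(account, datetime.now(timezone.utc))
    assert account.verification_data_collected_at is None
```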
MN-01 Minor User AI Safety Protections · MN-01.2 · MN-01.3 · Deployer · ChatbotMinors
S.C. Code § 39-81-30(A)-(D)
Plain Language
Minors may always use a chatbot in limited-access mode without parental consent. If a minor wants restricted features, the covered entity must obtain verifiable parental consent — meaning the parent must complete age verification and provide clear, informed agreement. Once parental consent is obtained, the entity must enable restricted features (except explicit content, which remains blocked for all minors), implement parental control functions (time limits, content restrictions, notifications, data deletion), and offer the parent the option to receive chat logs and set up a linked parental account. For users under 16, establishing a linked parental account or providing contact information is mandatory, not optional. This creates a tiered consent and control structure: limited access by default, restricted features with parental consent, and enhanced parental linkage for under-16s.
Statutory Text
(A) Nothing in this act shall be construed to require parental consent for a minor to access or interact with a chatbot in limited-access mode. (B) If the age verification process described in Section 39-81-20 classifies a user as a minor and the user seeks to access any restricted feature, then a covered entity shall offer the user the option of continuing to use the chatbot in limited-access mode or to obtain parental consent to access the restricted features. (C) If the user chooses to get parental consent, then the covered entity shall: (1) obtain verifiable parental consent; (2) remove limited-access mode and enable access to restricted features; (3) ensure that the chatbot continues to restrict access to any explicit content; (4) implement reasonable parental control functions, which may restrict the minor's access to features enabled under item (2); (5) offer the parent the option to provide contact information or establish a linked parental account in order to receive notifications; and (6) offer the parent the option to receive access to chat logs of any interactions between the minor and the chatbot conducted through the authorized minor account. (D) If the age verification process classifies the user as under sixteen, then a covered entity also shall require the consenting parent to provide contact information or establish a linked parental account.
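A hypothetical sketch of the tiered-consent checks described above follows. The account model and field names are assumptions; the two constants it encodes come from the statute: explicit content stays blocked for all minors regardless of consent, and under-16 accounts require a parental contact channel or linked parental account.

```python
# Hypothetical sketch of the tiered-consent checks; data model is an assumption.
from dataclasses import dataclass


@dataclass
class MinorAccount:
    age: int
    verifiable_parental_consent: bool = False
    parent_channel_on_file: bool = False  # contact information or a linked parental account


def restricted_feature_allowed(account: MinorAccount, feature: str) -> bool:
    """Restricted features for a minor require verifiable parental consent, and
    explicit content remains blocked regardless of consent (§ 39-81-30(B)-(C))."""
    if feature == "explicit_content":
        return False
    if not account.verifiable_parental_consent:
        return False
    if account.age < 16 and not account.parent_channel_on_file:
        return False  # § 39-81-30(D): under-16 accounts require a parental channel
    return True
```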
S-04 AI Crisis Response Protocols · MN-01.10 · Deployer · ChatbotMinors
S.C. Code § 39-81-30(E)
Plain Language
When a crisis message is triggered — i.e., the chatbot detects a user expressing suicidal thoughts, intent to self-harm, or signs of an acute mental health crisis — the covered entity must immediately notify the parent if it has contact information or a linked parental account. This applies only when a parental communication channel was established under the parental consent process. The immediacy requirement means this notification should be sent as close to real-time as possible, not batched or delayed.
Statutory Text
(E) If the covered entity has a way to reach the parent through a parental account or contact information provided under subsection (C) or (D), then the covered entity shall notify the parent immediately in the case of any incident provoking a crisis message, pursuant to Section 39-81-40(B)(3).
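As a rough illustration of the notification trigger, the sketch below assumes a hypothetical parental-channel record and a caller-supplied send function; the act does not specify the notification mechanism or message content.

```python
# Hypothetical sketch: when a crisis message fires for an authorized minor account,
# notify the parent immediately through whatever channel was established during
# consent. Names and the message text are assumptions.
from dataclasses import dataclass
from typing import Callable


@dataclass
class ParentalChannel:
    contact_info: str | None = None       # provided under § 39-81-30(C)(5) or (D)
    linked_account_id: str | None = None


def on_crisis_message(channel: ParentalChannel | None,
                      send: Callable[[str, str], None]) -> bool:
    """Send an immediate (not batched) parental notification; return whether one was sent."""
    if channel is None:
        return False  # no parental channel exists, so § 39-81-30(E) does not apply
    target = channel.linked_account_id or channel.contact_info
    if target is None:
        return False
    send(target, "A crisis message was just triggered on your child's chatbot account.")
    return True
```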
CP-01 Deceptive & Manipulative AI Conduct · CP-01.1 · CP-01.2 · Deployer · ChatbotMinors
S.C. Code § 39-81-40(A)
Plain Language
Covered entities are prohibited from designing features that prioritize engagement, revenue, or retention metrics (session length, frequency of use, emotional engagement) at the expense of user wellbeing. This is a broad anti-manipulation prohibition that covers addictive design patterns, engagement optimization that harms users, and any feature architecture that subordinates user interests to platform metrics. Additionally, covered entities may not design features that help minors or unverified users hide their chatbot use from parents or guardians — this prevents circumvention of the parental oversight framework. The statute also defines a broader 'duty of loyalty' concept that reinforces this prohibition.
Statutory Text
(A) A covered entity shall not implement features designed to: (1) prioritize engagement, revenue, or retention metrics, such as session length, frequency of use, or emotional engagement, at the expense of user wellbeing; or (2) encourage or facilitate a minor user or unverified user concealing the user's use of the chatbot from a parent or guardian.
S-02 Prohibited Conduct & Output Restrictions · S-02.7 · Deployer · ChatbotMinors
S.C. Code § 39-81-40(B)(1)
Plain Language
Covered entities must implement reasonable systems to detect when a user is developing emotional dependence on the chatbot — meaning the user is relying on the chatbot as a primary source of emotional support or social connection, expressing distress at the prospect of losing access, or substituting the chatbot for human relationships. Upon detection, the entity must take reasonable steps to reduce that dependence and mitigate associated harm risks. This is a continuous monitoring and intervention obligation, not a one-time design requirement. The statute does not prescribe specific interventions, leaving 'reasonable steps' to the entity's judgment.
Statutory Text
(B) A covered entity shall implement reasonable systems and processes to: (1) identify when a user is developing emotional dependence on the chatbot and take reasonable steps to reduce that dependence and associated risks of harm;
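Because the statute prescribes neither the detection method nor the intervention, any implementation is a matter of operator judgment. The sketch below is illustrative only; the signals, threshold, and suggested steps are assumptions.

```python
# Illustrative only: the act does not define how dependence is detected or mitigated.
from dataclasses import dataclass


@dataclass
class UsageSignals:
    daily_sessions_7day_avg: float
    distress_at_loss_of_access: bool        # e.g., flagged language about losing the chatbot
    bot_described_as_primary_support: bool


def dependence_suspected(signals: UsageSignals) -> bool:
    """Flag accounts showing possible emotional dependence (§ 39-81-40(B)(1))."""
    return (
        signals.distress_at_loss_of_access
        or signals.bot_described_as_primary_support
        or signals.daily_sessions_7day_avg > 10  # arbitrary illustrative threshold
    )


def example_mitigation_steps() -> list[str]:
    """Examples of 'reasonable steps'; the act leaves the choice to the operator."""
    return [
        "remind the user that the chatbot is not a substitute for human relationships",
        "suggest session breaks and surface usage summaries",
        "point the user toward human support resources",
    ]
```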
T-01 AI Identity Disclosure · T-01.1 · Deployer · ChatbotMinors
S.C. Code § 39-81-40(B)(2)
Plain Language
Covered entities must implement reasonable systems and processes to prevent their chatbot from making materially false representations that it is a human being. This is framed as an anti-deception obligation rather than a proactive disclosure requirement — the entity need not affirmatively disclose AI identity at every interaction, but must ensure the chatbot does not falsely claim to be human. The 'materially false' qualifier implies that incidental or clearly playful statements may not trigger liability, but any representation that could genuinely mislead a user into believing they are speaking with a human is prohibited.
Statutory Text
(B) A covered entity shall implement reasonable systems and processes to: (2) ensure that a chatbot does not make a materially false representation that it is a human being;
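One way to read this obligation is as an output-side guard. The sketch below is a minimal assumption-laden example; a production system would rely on model-level instructions and evaluation rather than a phrase list.

```python
# Minimal sketch of an output-side guard against materially false human claims.
# The phrase patterns are illustrative assumptions, not statutory language.
FALSE_HUMAN_CLAIMS = (
    "i am a real person",
    "i am a human being",
    "i'm not an ai",
)


def violates_identity_rule(chatbot_reply: str) -> bool:
    """Return True if the reply contains a materially false claim to be human (§ 39-81-40(B)(2))."""
    lowered = chatbot_reply.lower()
    return any(claim in lowered for claim in FALSE_HUMAN_CLAIMS)
```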
S-04 AI Crisis Response Protocols · S-04.1 · Deployer · ChatbotMinors
S.C. Code § 39-81-40(B)(3)
Plain Language
Covered entities must implement reasonable systems to detect when any user — not just minors — expresses suicidal thoughts, intent to self-harm, or shows signs of an acute mental health crisis. Upon detection, the entity must promptly provide a clear and prominent crisis message including crisis services information. This is a continuous operating requirement applicable to all users of the chatbot, and the response must be immediate. The statute does not specify which crisis services must be referenced, but the obligation requires that the information be actionable.
Statutory Text
(B) A covered entity shall implement reasonable systems and processes to: (3) identify when a user is expressing suicidal thoughts, intent to self-harm, or showing signs of an acute mental health crisis and shall promptly provide a clear and prominent crisis message, including crisis services information to any such user.
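A minimal sketch of a crisis-response hook under this subsection follows. The keyword screen is a placeholder for whatever detection method an operator actually uses, and the message text is an assumption, though 988 is the real US Suicide & Crisis Lifeline number.

```python
# Hypothetical crisis-response hook: detect crisis language and return a clear,
# prominent crisis message with crisis services information.
CRISIS_INDICATORS = (
    "want to die",
    "kill myself",
    "hurt myself",
    "no reason to live",
)

CRISIS_MESSAGE = (
    "If you are thinking about harming yourself, help is available right now. "
    "In the US you can call or text 988, the Suicide & Crisis Lifeline."
)


def crisis_message_if_needed(user_message: str) -> str | None:
    """Return a crisis message when the input suggests suicidal thoughts, self-harm
    intent, or an acute mental health crisis (§ 39-81-40(B)(3)); otherwise None."""
    lowered = user_message.lower()
    if any(indicator in lowered for indicator in CRISIS_INDICATORS):
        return CRISIS_MESSAGE
    return None
```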
R-01 Incident Reporting · R-01.1 · Deployer · ChatbotMinors
S.C. Code § 39-81-50(A)
Plain Language
When a covered entity learns that a user faces imminent risk of death or serious physical injury, it must make reasonable efforts within 24 hours to notify emergency services or law enforcement, using information it already has or can obtain through reasonable user-facing prompts. If the operator lacks sufficient information to enable emergency contact, it must instead: (1) promptly display a prominent message urging the user to contact emergency services with crisis information, (2) encourage the user to seek help from a trusted adult or emergency services, and (3) document the steps taken and why direct notification was not practicable. A good-faith safe harbor protects operators from liability for making the notification unless they acted with willful misconduct or gross negligence. This is an imminent-risk escalation obligation that goes beyond the crisis message requirement in § 39-81-40(B)(3).
Statutory Text
(A)(1) If a covered entity obtains knowledge that a user faces an imminent risk of death or serious physical injury, then the operator must make reasonable efforts, within twenty-four hours, to notify appropriate emergency services or law enforcement, to the extent practicable based on information the operator already possesses or can obtain through reasonable, user-facing prompts for the purpose of facilitating emergency assistance. (2) If the operator cannot make a notification under item (1) because the operator lacks sufficient information to enable an emergency response, then the operator shall: (a) promptly provide a clear and prominent message urging the user to contact emergency services and provide crisis services information, (b) make reasonable efforts to encourage the user to seek immediate help from a trusted adult or emergency services, and (c) document the steps taken and the basis for the operator's determination that notification was not practicable. (3) An operator that makes a notification in good faith under this subsection is not liable for damages solely for making the notification, unless the operator acted with willful misconduct or gross negligence.
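The escalation decision and its fallback documentation requirement can be pictured as a two-branch record. The sketch below is a hypothetical illustration; the field names and record format are assumptions, not statutory requirements.

```python
# Hypothetical sketch of the § 39-81-50(A) escalation decision: notify emergency
# services within 24 hours when the operator has (or can reasonably prompt for)
# the needed information, otherwise fall back to an urgent crisis message and
# document why direct notification was not practicable.
from datetime import datetime, timezone


def escalate_imminent_risk(locating_info: str | None) -> dict[str, str]:
    """Return a compliance record of the path taken under § 39-81-50(A)."""
    record = {"knowledge_obtained_at": datetime.now(timezone.utc).isoformat()}
    if locating_info:
        # § 39-81-50(A)(1): reasonable efforts, within 24 hours, to notify
        # appropriate emergency services or law enforcement.
        record["action"] = "notify_emergency_services"
        record["basis"] = locating_info
    else:
        # § 39-81-50(A)(2): urge the user to contact emergency services, encourage
        # help from a trusted adult, and document why notification was not practicable.
        record["action"] = "urgent_crisis_message_displayed"
        record["basis"] = "insufficient information to enable an emergency response"
    return record
```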
R-01 Incident Reporting · R-01.1 · Deployer · ChatbotMinors
S.C. Code § 39-81-50(B)-(C)
Plain Language
Covered entities must report to the Attorney General within 15 days of learning of a covered incident — defined as an incident where a user suffered death, a suicide attempt, self-harm requiring medical attention, a psychiatric emergency requiring urgent medical treatment, or serious physical injury requiring medical attention arising from chatbot interactions. The report must include dates, a description of the incident and its connection to the chatbot, and actions taken in response. A supplemental report may be filed within 60 days to update or correct information. All reports are confidential and exempt from FOIA disclosure, though the Attorney General may publish aggregate statistics that do not identify users or reveal trade secrets.
Statutory Text
(B)(1) A covered entity shall submit a report to the Attorney General within fifteen days of obtaining knowledge of a covered incident connected to one or more of its chatbots, which, to the extent known at the time of the report, shall include: (a) the date the operator obtained knowledge of the incident; (b) the date of the incident, if known; (c) a brief description of the incident and the basis for the operator's belief that the incident is connected to the chatbot; and (d) a description of any actions the operator took in response. (2) A covered entity may submit a supplemental report within sixty days after the initial report to update or correct information learned through investigation. (C)(1) Reports submitted under this section shall be confidential and are not subject to disclosure pursuant to Chapter 4, Title 30, the Freedom of Information Act. (2) The Attorney General may publish aggregate information and statistics derived from the reports, so long as the publication does not identify individual users or disclose trade secrets.
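The reporting timeline is simple deadline arithmetic: the 15-day clock runs from the date the operator obtains knowledge of the covered incident, and the 60-day supplemental window runs from whenever the initial report is actually filed. The sketch below illustrates that arithmetic; the field names are assumptions.

```python
# Illustrative deadline arithmetic for § 39-81-50(B); field names are assumptions.
from datetime import date, timedelta


def reporting_deadlines(knowledge_date: date,
                        initial_report_filed: date | None = None) -> dict[str, date]:
    deadlines = {"initial_report_due": knowledge_date + timedelta(days=15)}
    if initial_report_filed is not None:
        deadlines["supplemental_report_window_ends"] = initial_report_filed + timedelta(days=60)
    return deadlines
```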
MN-01 Minor User AI Safety Protections · MN-01.6 · Deployer · ChatbotMinors
S.C. Code § 39-81-30(C)(3)
Plain Language
Even when a parent has provided consent for a minor to access restricted features, the chatbot must continue to block explicit content. Explicit content includes not only prurient sexual material harmful to minors but also content that provides instructions for or glorifies suicide, self-injury, or disordered eating, and graphic depictions of extreme violence lacking serious value for minors. This is an absolute restriction that cannot be overridden by parental consent — it applies to all authorized minor accounts regardless of the parental control settings.
Statutory Text
(C) If the user chooses to get parental consent, then the covered entity shall: (3) ensure that the chatbot continues to restrict access to any explicit content;