S-1037
SC · State · USA
● Pending
Proposed Effective Date
2026-01-01
South Carolina S. 1037 — Protecting Children from Chatbots Act (Chapter 81, Title 39)
Summary

Imposes safety, access control, and design obligations on operators of chatbots with 500,000+ monthly active users worldwide, with a focus on protecting minors. Requires a tiered access model: unverified users may only use a feature-limited mode; restricted features (personalization, proactive outreach, relationship simulation, explicit content) require age verification and, for minors, verifiable parental consent. Prohibits engagement-maximizing design at the expense of user wellbeing, requires emotional dependence detection and crisis response protocols, and mandates incident reporting to the Attorney General within 15 days of learning of covered harms. Enforced by the Attorney General (up to $50,000 per violation per day) and through a private right of action for harmed persons with attorney fees and punitive damages available.

Enforcement & Penalties
Enforcement Authority
Attorney General enforcement: the Attorney General may initiate an action in the name of the State seeking injunctive relief and civil penalties. Private right of action: any person harmed by a violation, or a parent or legal guardian of a harmed minor, may bring a civil action. There is no cure period. Rights and remedies may not be waived by contract, and mandatory arbitration clauses for claims under the act are void and unenforceable.
Penalties
Attorney General may seek civil penalties of up to $50,000 per violation; each day of noncompliance constitutes a separate violation. Private plaintiffs may recover monetary damages for harm caused, reasonable attorney fees and costs, injunctive or declaratory relief, and punitive damages if the violation was willful and wanton, reckless, or grossly negligent. Private action requires proof of actual harm — no statutory minimum damages.
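To make the exposure arithmetic concrete, here is a minimal sketch of the maximum-penalty calculation under the per-violation, per-day structure. The $50,000 cap and daily accrual are from the bill; the function and its inputs are illustrative.

```python
def max_civil_penalty(violations: int, days_noncompliant: int,
                      per_violation_cap: int = 50_000) -> int:
    """Upper bound on Attorney General civil penalties: each day of
    noncompliance is a separate violation, each capped at $50,000."""
    return violations * days_noncompliant * per_violation_cap

# Example: two distinct violations left uncured for 30 days
# -> up to 2 * 30 * $50,000 = $3,000,000 in exposure.
print(max_civil_penalty(violations=2, days_noncompliant=30))  # 3000000
```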
Who Is Covered
"Covered entity" means an operator of a chatbot that has five hundred thousand or more monthly active users worldwide. A covered entity does not include an operator of a chatbot that is: (i) not offered to the general public, such as internal workplace tools, clinician-supervised clinical tools, or university research systems; or (ii) used by a business entity solely for customer service or to strictly provide users with information about available commercial services or products provided by that entity, customer service account information, or other information strictly related to its customer service.
What Is Covered
"Chatbot" means any artificial intelligence, algorithmic, or automated system that: (a) produces new expressive content or responses not fully predetermined by the operator of the service or application; (b) accepts open-ended, natural-language, or multimodal user input and produces adaptive or context-responsive natural language output; and (c) maintains a conversational state across exchanges and is designed to facilitate multi-turn dialogue rather than to respond to discrete information requests.
Compliance Obligations · 13 obligations
MN-01 Minor User AI Safety Protections · MN-01.1 · Deployer · ChatbotMinors
S.C. Code § 39-81-20(A)-(C)
Plain Language
Covered entities must implement a two-tier access model. By default, all users who have not completed age verification may only interact with the chatbot in limited-access mode — a stripped-down mode with no personalization, proactive outreach, relationship simulation, extended sessions, or explicit content. Before enabling any restricted feature, the operator must require account creation, verify the user's age, and classify the user as a minor or adult. Age verification data must be minimized, used only for verification, not shared with third parties (except contracted verification providers), not combined with other personal data, and deleted within 24 hours. Operators must also provide a simple appeal process for incorrect age classifications.
Statutory Text
(A)(1) A covered entity shall make a limited-access mode available and shall ensure that any unverified user may only access and interact with a chatbot in limited-access mode. (B) Before enabling any restricted feature for a user, a covered entity shall: (1) require the user to create a user account; (2) verify the user's age using a reasonable age verification process, subject to item (3); and (3) using the age data, classify the user as a minor or an adult. (C) When conducting reasonable age verification process under this section, an operator shall: (1) collect only the age verification data that is strictly necessary to reasonably verify age; (2) use age verification data only for age verification; (3) not sell, rent, share, or otherwise disclose age verification data to any third party, except to a service provider performing age verification under a contract prohibiting further disclosure; (4) not combine age verification data with any other personal data about the user; (5) delete age verification data within twenty-four hours of completing the age verification process, except that the operator may retain a record that the user has been verified as a minor; and (6) provide a simple process for a user to appeal or correct an age-verification decision.
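Implementation Sketch
A minimal sketch of the subsection (C) data-handling rules. The 24-hour deletion ceiling and the retained minor/adult classification come from the statute; the record shape, field names, and delete-at-completion policy are illustrative assumptions.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class AgeVerificationRecord:
    user_id: str
    evidence: bytes | None        # only what is strictly necessary, (C)(1)
    verified_at: datetime | None
    is_minor: bool | None         # the one fact that may be retained, (C)(5)

def complete_verification(rec: AgeVerificationRecord, is_minor: bool,
                          now: datetime) -> None:
    """Classify the user, then destroy the evidence immediately."""
    rec.is_minor = is_minor
    rec.verified_at = now
    rec.evidence = None           # never sold, shared, or combined, (C)(2)-(4)
```

Deleting the evidence at the moment verification completes treats the statutory 24-hour window as an outer bound rather than a retention grant, which is the conservative reading.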
MN-01 Minor User AI Safety Protections · MN-01.1 · Deployer · ChatbotMinors
S.C. Code § 39-81-20(D)-(E)
Plain Language
After age verification, the access tier diverges: verified adults may access restricted features. Minors may not access any restricted feature unless the minor's account has been authorized through verifiable parental consent under § 39-81-30. This effectively creates a complete block on restricted features for minors absent parental authorization.
Statutory Text
(D) If the reasonable age verification process classifies the user as an adult, then the covered entity may enable restricted features for the verified adult account. (E) If the age verification process classifies the user as a minor, then a covered entity shall not enable any restricted feature unless the user is using an authorized minor account subject to Section 39-81-30.
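Implementation Sketch
The (D)-(E) divergence reduces to a single gate. A sketch, assuming a three-way classification enum and a hypothetical authorized-minor-account flag:

```python
from enum import Enum, auto

class AgeClass(Enum):
    UNVERIFIED = auto()
    ADULT = auto()
    MINOR = auto()

def may_enable_restricted(age_class: AgeClass,
                          authorized_minor_account: bool) -> bool:
    """(D): verified adults may receive restricted features.
    (E): minors only through an authorized minor account (Section 39-81-30).
    Everyone else stays in limited-access mode."""
    if age_class is AgeClass.ADULT:
        return True
    if age_class is AgeClass.MINOR:
        return authorized_minor_account
    return False
```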
MN-01 Minor User AI Safety Protections · MN-01.1 · Deployer · ChatbotMinors
S.C. Code § 39-81-20(F)-(G)
Plain Language
Covered entities must maintain ongoing monitoring systems to detect age misclassification — for example, flagging accounts where usage patterns suggest a minor is using an adult-verified account or where credible reports indicate false age data. Flagged accounts must be re-verified before any restricted feature can be enabled or remain enabled. A safe harbor protects covered entities from liability when a minor incidentally uses a correctly verified adult account, but only if the entity is actually operating the required monitoring systems under subsection (F).
Statutory Text
(F) A covered entity shall implement reasonable systems and processes to identify user accounts that may be inaccurately classified by age, such as patterns of use suggesting a minor is using an adult account or credible reports that an account was created using false age data, and shall re-verify any such account before enabling any restricted feature. (G) A covered entity shall not be liable under this chapter solely because a minor incidentally uses a user account that has been correctly verified and classified as an adult account, provided the covered entity is otherwise in compliance with subsection (F).
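Implementation Sketch
A sketch of the subsection (F) re-verification hook. The two flag conditions mirror the statute's own examples; the detector functions are placeholders an operator would replace with real behavioral and report-ingestion systems.

```python
from dataclasses import dataclass, field

@dataclass
class Account:
    user_id: str
    restricted_enabled: bool = False
    flags: list[str] = field(default_factory=list)

def usage_suggests_minor(acct: Account) -> bool:
    # Placeholder for behavioral detectors (session patterns, content).
    return "minor-like-usage" in acct.flags

def credible_false_age_report(acct: Account) -> bool:
    # Placeholder for ingesting credible third-party reports.
    return "false-age-report" in acct.flags

def gate_restricted(acct: Account) -> bool:
    """Subsection (F): a flagged account loses restricted features until
    it is re-verified; subsection (G)'s safe harbor depends on this
    monitoring actually running."""
    if usage_suggests_minor(acct) or credible_false_age_report(acct):
        acct.restricted_enabled = False   # suspend pending re-verification
        return False
    return acct.restricted_enabled
```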
MN-01 Minor User AI Safety Protections · MN-01.1 · Deployer · ChatbotMinors
S.C. Code § 39-81-20(H)
Plain Language
Existing accounts must be brought into compliance within 60 days of the act's effective date. Any pre-existing account that has not been age-verified must have restricted features disabled until the user completes age verification. This prevents grandfathering of unverified accounts.
Statutory Text
(H) With respect to each user account of a covered entity that exists as of the effective date of this act, a covered entity shall, within sixty days, disable access to restricted features for any account that has not been classified as an authorized minor account or a verified adult account, unless and until the user completes age verification.
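Implementation Sketch
The subsection (H) migration is a one-time sweep over pre-existing accounts. A sketch, using the bill's proposed effective date and illustrative account fields:

```python
from datetime import date, timedelta

EFFECTIVE_DATE = date(2026, 1, 1)                  # proposed effective date
SWEEP_DEADLINE = EFFECTIVE_DATE + timedelta(days=60)

def legacy_sweep(accounts: list[dict]) -> int:
    """Subsection (H): disable restricted features on every pre-existing
    account that is neither a verified adult account nor an authorized
    minor account, until the user completes age verification."""
    disabled = 0
    for acct in accounts:
        verified = acct.get("verified_adult") or acct.get("authorized_minor")
        if not verified and acct.get("restricted_enabled"):
            acct["restricted_enabled"] = False
            disabled += 1
    return disabled
```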
MN-01 Minor User AI Safety Protections · MN-01.2 · MN-01.3 · Deployer · ChatbotMinors
S.C. Code § 39-81-30(A)-(D)
Plain Language
Minors may always use a chatbot in limited-access mode without parental consent. If a minor wants restricted features, the operator must offer a choice: stay in limited-access mode or pursue parental consent. If parental consent is sought, the operator must obtain verifiable parental consent (parent must also complete age verification), then unlock restricted features except explicit content — which must remain blocked for minors even with parental consent. Operators must implement parental controls (time limits, content restrictions, notification receipt, data deletion) and offer parents the option of a linked parental account and access to chat logs. For minors under 16, the linked parental account or contact information is mandatory, not optional.
Statutory Text
(A) Nothing in this act shall be construed to require parental consent for a minor to access or interact with a chatbot in limited-access mode. (B) If the age verification process described in Section 39-81-20 classifies a user as a minor and the user seeks to access any restricted feature, then a covered entity shall offer the user the option of continuing to use the chatbot in limited-access mode or to obtain parental consent to access the restricted features. (C) If the user chooses to get parental consent, then the covered entity shall: (1) obtain verifiable parental consent; (2) remove limited-access mode and enable access to restricted features; (3) ensure that the chatbot continues to restrict access to any explicit content; (4) implement reasonable parental control functions, which may restrict the minor's access to features enabled under item (2); (5) offer the parent the option to provide contact information or establish a linked parental account in order to receive notifications; and (6) offer the parent the option to receive access to chat logs of any interactions between the minor and the chatbot conducted through the authorized minor account. (D) If the age verification process classifies the user as under sixteen, then a covered entity also shall require the consenting parent to provide contact information or establish a linked parental account.
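Implementation Sketch
The subsections (B)-(D) consent flow as a sketch. The branch structure tracks the statute's items; the configuration keys and the error-raising convention are illustrative.

```python
def authorize_minor_account(minor_age: int, consent_verified: bool,
                            parent_contact: str | None) -> dict:
    """Feature configuration for a minor seeking restricted features.
    Consent is never required for limited-access mode ((A))."""
    if not consent_verified:
        return {"mode": "limited-access"}               # (B): a valid choice
    if minor_age < 16 and parent_contact is None:
        raise ValueError("(D): under-16 accounts require parental contact "
                         "or a linked parental account")
    return {
        "mode": "restricted",                           # (C)(2)
        "explicit_content": False,                      # (C)(3) hard floor
        "parental_controls": True,                      # (C)(4)
        "parent_contact": parent_contact,               # (C)(5): optional 16+
        "chat_log_access_offered": True,                # (C)(6)
    }
```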
MN-02 AI Crisis Response Protocols · MN-02.4 · Deployer · ChatbotMinors
S.C. Code § 39-81-30(E)
Plain Language
When a minor account holder triggers a crisis response (suicidal thoughts, self-harm, acute mental health crisis under § 39-81-40(B)(3)), the operator must immediately notify the affiliated parent or guardian if the operator has parental contact information or a linked parental account. This is mandatory and immediate — not discretionary or delayed.
Statutory Text
(E) If the covered entity has a way to reach the parent through a parental account or contact information provided under subsection (C) or (D), then the covered entity shall notify the parent immediately in the case of any incident provoking a crisis message, pursuant to Section 39-81-40(B)(3).
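Implementation Sketch
Subsection (E) is an unconditional hook off the crisis detector. A sketch, assuming a hypothetical `send` transport (SMS, email, or push) and illustrative account fields:

```python
from typing import Callable

def on_crisis(account: dict, send: Callable[[str, str], None]) -> None:
    """Subsection (E): when a crisis message fires under § 39-81-40(B)(3)
    on an authorized minor account, notify the parent immediately if a
    parental channel exists."""
    contact = account.get("parent_contact") or account.get("linked_parent")
    if contact:
        send(contact, "A crisis response was triggered on your child's "
                      "chatbot account.")
```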
CP-01 Deceptive & Manipulative AI Conduct · CP-01.1 · CP-01.2 · Deployer · ChatbotMinors
S.C. Code § 39-81-40(A)
Plain Language
Covered entities are prohibited from designing chatbot features that prioritize engagement, revenue, or retention metrics — such as session length, frequency of use, or emotional engagement — over user wellbeing. This is a design-level prohibition: features must not be *designed to* prioritize these metrics at the user's expense. Separately, operators must not design features that help minors or unverified users hide their chatbot use from parents or guardians. The separately defined 'duty of loyalty' reinforces this by prohibiting material conflicts of interest between the operator and user.
Statutory Text
(A) A covered entity shall not implement features designed to: (1) prioritize engagement, revenue, or retention metrics, such as session length, frequency of use, or emotional engagement, at the expense of user wellbeing; or (2) encourage or facilitate a minor user or unverified user concealing the user's use of the chatbot from a parent or guardian.
S-02 Prohibited Conduct & Output Restrictions · S-02.7 · Deployer · ChatbotMinors
S.C. Code § 39-81-40(B)(1)
Plain Language
Covered entities must build and maintain systems to detect when any user — not just minors — is developing emotional dependence on the chatbot. Emotional dependence includes patterns like relying on the chatbot as a primary emotional support source, expressing distress about losing chatbot access, or substituting chatbot interaction for human relationships. Upon detection, the operator must take reasonable steps to reduce the dependence and mitigate associated harm risks. This is a continuous monitoring and intervention obligation, not a one-time design check.
Statutory Text
(B) A covered entity shall implement reasonable systems and processes to: (1) identify when a user is developing emotional dependence on the chatbot and take reasonable steps to reduce that dependence and associated risks of harm;
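Implementation Sketch
One illustrative way to operationalize (B)(1): score the dependence patterns listed above and intervene past a threshold. The signals, the threshold, and the intervention names are assumptions, not statutory requirements.

```python
def dependence_signals(stats: dict) -> int:
    """Count the dependence patterns from the gloss above. Real
    detectors would use behavioral models, not booleans."""
    return sum([
        stats.get("primary_emotional_support", False),
        stats.get("distress_over_access_loss", False),
        stats.get("substituting_human_contact", False),
    ])

def intervention_steps(stats: dict) -> list[str]:
    """(B)(1): on detection, take reasonable steps to reduce dependence
    and the associated risks of harm."""
    if dependence_signals(stats) >= 2:        # assumed threshold
        return ["suggest_session_breaks",
                "surface_human_support_resources",
                "reduce_proactive_outreach"]
    return []
```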
T-01 AI Identity Disclosure · T-01.1 · Deployer · ChatbotMinors
S.C. Code § 39-81-40(B)(2)
Plain Language
Covered entities must implement systems to prevent their chatbot from making materially false claims that it is human. This is not a proactive disclosure requirement (the chatbot need not affirmatively state it is AI at the start of every interaction), but rather a prohibition on the chatbot making false representations of humanity. The 'materially false' qualifier suggests minor or incidental anthropomorphic language may not trigger liability — only affirmative misrepresentation of human status.
Statutory Text
(B) A covered entity shall implement reasonable systems and processes to: (2) ensure that a chatbot does not make a materially false representation that it is a human being;
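Implementation Sketch
A deliberately naive post-filter sketch of (B)(2). A regex is far weaker than what a production system would deploy; the patterns illustrate the "affirmative misrepresentation only" boundary rather than define it.

```python
import re

# Illustrative patterns for affirmative claims of human status only;
# incidental anthropomorphic language is deliberately not matched.
HUMAN_CLAIM = re.compile(
    r"\b(i am|i'm)\s+(a\s+)?(real\s+)?(human|person)\b", re.IGNORECASE)

def screen_response(text: str) -> str:
    """(B)(2): rewrite materially false representations of humanity."""
    if HUMAN_CLAIM.search(text):
        return "I'm an AI chatbot, not a human."
    return text
```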
MN-02 AI Crisis Response Protocols · MN-02.1 · Deployer · ChatbotMinors
S.C. Code § 39-81-40(B)(3)
Plain Language
Covered entities must maintain systems to detect when any user — not just minors — expresses suicidal thoughts, intent to self-harm, or signs of an acute mental health crisis. Upon detection, the operator must promptly deliver a clear, prominent crisis message including crisis services information (e.g., 988 Lifeline, crisis text lines). This is a continuous operating requirement that applies to all users across all access modes, including limited-access mode.
Statutory Text
(B) A covered entity shall implement reasonable systems and processes to: (3) identify when a user is expressing suicidal thoughts, intent to self-harm, or showing signs of an acute mental health crisis and shall promptly provide a clear and prominent crisis message, including crisis services information to any such user.
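Implementation Sketch
A sketch of the (B)(3) detection-and-message path. Keyword matching stands in for a real classifier; the 988 reference follows the plain-language gloss above.

```python
CRISIS_TERMS = ("suicide", "kill myself", "self-harm", "hurt myself")

CRISIS_MESSAGE = (
    "If you are in crisis, help is available right now. Call or text 988 "
    "(Suicide & Crisis Lifeline) or contact local emergency services."
)

def crisis_check(user_message: str) -> str | None:
    """(B)(3): on detection, promptly return a clear and prominent crisis
    message with crisis services information. A production system would
    use a trained classifier rather than keyword matching."""
    lowered = user_message.lower()
    if any(term in lowered for term in CRISIS_TERMS):
        return CRISIS_MESSAGE
    return None
```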
R-01 Incident Reporting · R-01.1 · Deployer · ChatbotMinors
S.C. Code § 39-81-50(A)(1)-(3)
Plain Language
When an operator learns a user faces imminent risk of death or serious physical injury, it must make reasonable efforts within 24 hours to notify emergency services or law enforcement, using information it already has or can obtain through reasonable user-facing prompts. If the operator lacks sufficient information to enable emergency response, it must instead promptly urge the user to contact emergency services, provide crisis information, encourage the user to seek help from a trusted adult, and document its steps and reasoning. A good-faith safe harbor protects operators from damages for making emergency notifications, unless they acted with willful misconduct or gross negligence.
Statutory Text
(A)(1) If a covered entity obtains knowledge that a user faces an imminent risk of death or serious physical injury, then the operator must make reasonable efforts, within twenty-four hours, to notify appropriate emergency services or law enforcement, to the extent practicable based on information the operator already possesses or can obtain through reasonable, user-facing prompts for the purpose of facilitating emergency assistance. (2) If the operator cannot make a notification under item (1) because the operator lacks sufficient information to enable an emergency response, then the operator shall: (a) promptly provide a clear and prominent message urging the user to contact emergency services and provide crisis services information, (b) make reasonable efforts to encourage the user to seek immediate help from a trusted adult or emergency services, and (c) document the steps taken and the basis for the operator's determination that notification was not practicable. (3) An operator that makes a notification in good faith under this subsection is not liable for damages solely for making the notification, unless the operator acted with willful misconduct or gross negligence.
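Implementation Sketch
The subsection (A) branch as a sketch. The 24-hour clock and the fallback steps are statutory; the function signature and return shape are illustrative.

```python
from datetime import datetime, timedelta

def handle_imminent_risk(knowledge_time: datetime,
                         can_locate_user: bool) -> dict:
    """(A)(1): with sufficient information, make reasonable efforts to
    notify emergency services within 24 hours. (A)(2): otherwise fall
    back to urging the user toward help and documenting why."""
    if can_locate_user:
        return {"action": "notify_emergency_services",
                "deadline": knowledge_time + timedelta(hours=24)}
    return {"action": "fallback",
            "steps": ["urge_user_to_contact_emergency_services",
                      "provide_crisis_services_info",
                      "encourage_trusted_adult",
                      "document_determination"]}
```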
R-01 Incident Reporting · R-01.1 · Deployer · ChatbotMinors
S.C. Code § 39-81-50(B)-(C)
Plain Language
Covered entities must report covered incidents to the South Carolina Attorney General within 15 days of obtaining knowledge of the incident. Covered incidents include death, suicide attempts, self-harm requiring medical attention, psychiatric emergencies requiring urgent treatment, or serious physical injury arising from chatbot interactions. Reports must include (to the extent known): dates, incident description, basis for believing the chatbot is connected, and responsive actions taken. Supplemental reports may be filed within 60 days. Reports are confidential and exempt from FOIA, though the AG may publish aggregate anonymized statistics. This is a 15-day standard reporting timeline — contrast with the 24-hour emergency notification to law enforcement under § 39-81-50(A) for imminent risk.
Statutory Text
(B)(1) A covered entity shall submit a report to the Attorney General within fifteen days of obtaining knowledge of a covered incident connected to one or more of its chatbots, which, to the extent known at the time of the report, shall include: (a) the date the operator obtained knowledge of the incident; (b) the date of the incident, if known; (c) a brief description of the incident and the basis for the operator's belief that the incident is connected to the chatbot; and (d) a description of any actions the operator took in response. (2) A covered entity may submit a supplemental report within sixty days after the initial report to update or correct information learned through investigation. (C)(1) Reports submitted under this section shall be confidential and are not subject to disclosure pursuant to Chapter 4, Title 30, the Freedom of Information Act. (2) The Attorney General may publish aggregate information and statistics derived from the reports, so long as the publication does not identify individual users or disclose trade secrets.
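Implementation Sketch
A sketch of the (B)(1) report payload and its two clocks: 15 days from knowledge to file, and 60 days from filing for a supplement. Field names track items (a) through (d); the dataclass itself is illustrative.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class CoveredIncidentReport:
    knowledge_date: date          # (a) when the operator learned of it
    incident_date: date | None    # (b) if known
    description: str              # (c) incident and basis for chatbot link
    responsive_actions: str       # (d) actions taken in response

    @property
    def filing_deadline(self) -> date:
        return self.knowledge_date + timedelta(days=15)    # (B)(1)

    def supplement_deadline(self, filed_on: date) -> date:
        return filed_on + timedelta(days=60)               # (B)(2)
```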
MN-01 Minor User AI Safety Protections · MN-01.6 · Deployer · ChatbotMinors
S.C. Code § 39-81-30(C)(3)
Plain Language
Even when a parent provides consent for a minor to access restricted features, the chatbot must continue to block all explicit content for the minor. Explicit content includes obscene material as to minors (tracking the Ginsberg standard), suicide/self-harm instructions or glorification, and gratuitous extreme violence. Parental consent cannot override this block — it is a hard floor. This means explicit content blocking is the one restricted feature that can never be unlocked for minors regardless of parental authorization.
Statutory Text
(3) ensure that the chatbot continues to restrict access to any explicit content;
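Implementation Sketch
The hard floor is easiest to express structurally: explicit content is simply absent from the set that parental consent can unlock. Feature names follow the summary's list of restricted features and are otherwise illustrative.

```python
RESTRICTED_FEATURES = {"personalization", "proactive_outreach",
                       "relationship_simulation", "explicit_content"}

def unlockable_for_minor(parental_consent: bool) -> set[str]:
    """(C)(3): consent can unlock restricted features for a minor, but
    explicit content is structurally excluded from the unlockable set."""
    if not parental_consent:
        return set()
    return RESTRICTED_FEATURES - {"explicit_content"}
```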