S-624
NC · State · USA
● Pending
Proposed Effective Date
2026-01-01
North Carolina Senate Bill 624 — An Act Regulating Artificial Intelligence Chatbot Licensing, Safety, and Privacy in North Carolina
Summary

This bill creates two regulatory regimes for AI chatbots in North Carolina. Part I (Chapter 114B, the Chatbot Licensing Act) requires any person operating or distributing a chatbot that deals substantially with health information to obtain a license from the NC Department of Justice, submit detailed technical documentation, maintain liability insurance, demonstrate clinical effectiveness through peer-reviewed trials, undergo annual third-party audits, and submit quarterly performance reports. Part II (Chapter 170, the Chatbot Safety and Privacy Act) imposes a fiduciary-style 'duty of loyalty' on covered platforms providing chatbot services, including obligations to detect and respond to emergency situations, prevent emotional dependence, disclose the chatbot's AI nature with specific mandatory disclosures at each session, de-identify user data before storage, prohibit inclusion of sensitive personal information in training datasets, and use self-destructing messages for sensitive-domain chatbots. Part II is enforced by the AG as parens patriae and through a private right of action with $1,000 per-violation statutory damages. Both parts become effective January 1, 2026.

Enforcement & Penalties
Enforcement Authority
Part I (Chatbot Licensing Act, Chapter 114B): Enforced by the North Carolina Department of Justice. The Attorney General designates a Director, officers, and employees for oversight and enforcement, including physical and digital inspections. Enforcement is agency-initiated; there is no private right of action under Part I.

Part II (Chatbot Safety and Privacy Act, Chapter 170): The Attorney General may bring a civil action as parens patriae on behalf of state residents. A private right of action is available to any person who suffers injury in fact, subject to a two-year statute of limitations from discovery and a limit of one action per person per covered platform per alleged violation. Rights and remedies may not be waived by agreement, policy, form, or condition of service.
Penalties
Part I (Chapter 114B): Civil penalties of $50,000 per violation of § 114B-5, with clear proceeds remitted to the Civil Penalty and Forfeiture Fund.

Part II (Chapter 170): The Attorney General may obtain injunctive relief, damages, restitution, or other compensation. Private plaintiffs may recover the greater of actual damages or $1,000 per violation, plus reasonable attorneys' fees and litigation costs, injunctive relief, and any other relief the court deems appropriate. Statutory damages do not require proof of actual monetary harm beyond injury in fact.
Who Is Covered
"Licensee" means a person holding a license issued and in effect under this Chapter.
"Covered platform" means any person that provides chatbot services to users in this State, if the person (i) has annual gross revenues exceeding $100,000 in the last calendar year or any of the two preceding calendar years or (ii) has more than 5,000 monthly active users in the United States for half or more of the months during the last 12 months. The term does not include any person that provides chatbot services solely for educational or research purposes and does not monetize such services through advertising or commercial uses or any government entity providing chatbot services for official purposes.
What Is Covered
"Chatbot" means a generative artificial intelligence system with which users can interact by or through an interface that approximates or simulates conversation through a text, audio, or visual medium.
Compliance Obligations (16 obligations)
R-02 Regulatory Disclosure & Submissions · R-02.3 · Deployer · Chatbot · Healthcare
G.S. § 114B-3(a)-(b)
Plain Language
No person may operate or distribute a chatbot that deals substantially with health information in North Carolina without first obtaining a health information chatbot license from the Department of Justice. The application must include comprehensive documentation covering technical architecture, data practices, security measures, privacy protections, QA/testing procedures, risk assessment, regulatory compliance evidence, proof of insurance, and required fees. The definition of 'health information' is extremely broad, covering physical and mental health data, reproductive and gender-affirming care information, biometric and genetic data, and even inferred health data. This is a pre-market licensing requirement — the chatbot cannot be operated or distributed until the license is granted.
Statutory Text
(a) No person shall operate or distribute a chatbot that deals substantially with health information without first obtaining a health information chatbot license. (b) An application for a health information chatbot license shall include all of the following: (1) Detailed documentation of the chatbot's: a. Technical architecture and operational specifications. b. Data collection, processing, storage, and deletion practices. c. Security measures and protocols. d. Privacy protection mechanisms. (2) Quality control and testing procedures. (3) Risk assessment and mitigation strategies. (4) Evidence of compliance with applicable federal and state regulations. (5) Proof of insurance coverage. (6) Required application fees. (7) Any additional information required by the Department.
S-01 AI System Safety Program · S-01.1–S-01.7 · Deployer · Chatbot · Healthcare
G.S. § 114B-4(d)
Plain Language
Licensed health information chatbot operators must demonstrate their chatbot's effectiveness through peer-reviewed controlled trials with adequate sample sizes and real-world performance data, comparative analysis against human expert performance, and compliance with minimum domain benchmarks set by the Department. This is an ongoing operational requirement — licensees must continue to meet these standards, not merely demonstrate them at the time of application. The requirement for peer-reviewed trials is notably more rigorous than typical AI safety evaluation requirements in other jurisdictions.
Statutory Text
(d) A licensee shall do all of the following: (1) Demonstrate effectiveness through peer-reviewed, controlled trials with appropriate validation studies done on appropriate sample sizes with real-world performance data. (2) Demonstrate effectiveness in a comparative analysis to human expert performance. (3) Meet minimum domain benchmarks as established by the Department.
G-01 AI Governance Program & Documentation · G-01.3–G-01.5 · Deployer · Chatbot · Healthcare
G.S. § 114B-4(e)-(f)
Plain Language
Licensed health information chatbot operators must conduct regular self-inspections and undergo an annual third-party audit, with all results made available to the Department of Justice. They must also implement continuous monitoring for safety and risk indicators and submit quarterly performance reports that include incident reports. The quarterly reporting obligation creates a regular cadence of regulatory submissions beyond what most AI statutes require.
Statutory Text
(e) A licensee shall conduct regular inspections and perform an annual third-party audit. Results of all inspections and audits must be made available to the Department. (f) A licensee shall implement continuous monitoring systems for safety and risk indicators and submit quarterly performance reports including incident reports.
D-01 Automated Processing Rights & Data Controls · D-01.1–D-01.4 · Deployer · Chatbot · Healthcare
G.S. § 114B-4(b)(1)-(5)
Plain Language
Licensed health information chatbot operators must implement industry-standard encryption for data at rest and in transit, maintain detailed access logs, and perform security audits at least every six months. Data breaches must be reported to the Department within 24 hours and to affected consumers within 48 hours — this overrides any contrary state breach notification law. Operators must obtain explicit user consent for data collection and use, provide users access to their personal data, and honor user deletion requests. These data rights and security obligations apply to all licensees under the Chatbot Licensing Act.
Statutory Text
(b) A licensee shall do all of the following: (1) Implement industry-standard encryption for data in transit and at rest, maintain detailed access logs, and conduct regular security audits no less than once every six (6) months. (2) Report any data breaches within twenty-four (24) hours to the Department and within forty-eight (48) hours to affected consumers, notwithstanding any provision of law to the contrary. (3) Obtain explicit user consent for data collection and use. (4) Provide users with access to their personal data. (5) Provide users with the ability to delete their data upon request.
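The 24-hour/48-hour breach notification timelines above can be sketched in code. This is an illustrative sketch only, not part of the statute: the function name and the assumption that the clock starts at breach *discovery* are hypothetical (the statutory text does not say whether occurrence or discovery starts the clock).

```python
from datetime import datetime, timedelta, timezone

# G.S. 114B-4(b)(2): report to the Department within 24 hours and
# notify affected consumers within 48 hours. Windows per the statute;
# everything else here is an illustrative assumption.
DEPARTMENT_WINDOW = timedelta(hours=24)
CONSUMER_WINDOW = timedelta(hours=48)

def breach_notification_deadlines(discovered_at: datetime) -> dict:
    """Return the two statutory notification deadlines for a breach.

    Assumes (hypothetically) that the clock runs from discovery.
    """
    if discovered_at.tzinfo is None:
        # Require an aware datetime so deadline math is unambiguous.
        raise ValueError("use a timezone-aware datetime")
    return {
        "department_deadline": discovered_at + DEPARTMENT_WINDOW,
        "consumer_deadline": discovered_at + CONSUMER_WINDOW,
    }

discovered = datetime(2026, 3, 1, 9, 30, tzinfo=timezone.utc)
deadlines = breach_notification_deadlines(discovered)
```

In practice a compliance clock like this would feed an alerting system well before the statutory deadline, since 24 hours leaves little margin for internal escalation.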
T-01 AI Identity Disclosure · T-01.1 · Deployer · Chatbot · Healthcare
G.S. § 114B-4(c)
Plain Language
Licensed health information chatbot operators must clearly disclose six categories of information to users: that the chatbot is AI, the service's limitations, data collection and use practices, user rights and remedies, emergency resources (when applicable), and human oversight and intervention protocols. This is a general disclosure obligation under the licensing regime — it is distinct from the more detailed chatbot identification process required under Part II (Chapter 170) for covered platforms.
Statutory Text
(c) A licensee must clearly disclose all of the following: (1) The artificial nature of the chatbot. (2) Limitations of the service. (3) Data collection and use practices. (4) User rights and remedies. (5) Emergency resources when applicable. (6) Human oversight and intervention protocols.
R-02 Regulatory Disclosure & Submissions · R-02.2 · Deployer · Chatbot · Healthcare
G.S. § 114B-5(b)-(f)
Plain Language
The AG's designated enforcement staff may conduct both physical and digital inspections of licensed health information chatbots. Digital inspections cover source code, algorithms, ML models, data practices, cybersecurity, user privacy protections, chatbot response testing, and integration with other platforms. The Director may access all records relating to development, testing, validation, production, distribution, and performance. Trade secrets and confidential commercial information are protected from public records disclosure. After each inspection, the Director provides a detailed findings report with required corrective actions. Manufacturers and importers must establish and maintain records and submit reports as the Director requires by regulation. Licensees must maintain documentation in a form that can be produced for inspection.
Statutory Text
(b) The Attorney General shall designate a Director, officers, and employees assigned to the oversight and enforcement of this Chapter. Upon presenting appropriate credentials and a written notice to the owner, operator, or agent in charge, those officers and employees are authorized to enter, at reasonable times, any factory, warehouse, or establishment in which chatbots licensed under this Chapter are manufactured, processed, or held, and to inspect, in a reasonable manner and within reasonable limits and in a reasonable time. In addition to physical inspections, the Department may conduct digital inspections of licensed chatbots under this Chapter, to include the following: (1) Examination of source code, algorithms, and machine learning models. (2) Review of data processing and storage practices. (3) Evaluation of cybersecurity measures and protocols. (4) Assessment of user data privacy protections. (5) Testing of chatbot responses and behaviors in various scenarios. (6) Audit of data collection, use, and retention practices. (7) Inspection of software development and update processes. (8) Review of remote access and monitoring capabilities. (9) Evaluation of integration with other digital health technologies or platforms. (c) As part of any inspection, whether physical or digital, the Director may require access to all records relating to the development, testing, validation, production, distribution, and performance of a chatbot licensed under this Chapter. (d) Any information obtained during an inspection which falls within the definition of a trade secret or confidential commercial information as defined in 21 CFR 20.61 shall be treated as confidential and shall not be disclosed under Chapter 132 of the General Statutes, except as may be necessary in proceedings under this Chapter or other applicable law. 
(e) Following any inspection, the Director shall provide a detailed report of findings to the manufacturer or importer, including any identified deficiencies and required corrective actions. (f) Every person who is a manufacturer or importer of a licensed chatbot under this Chapter shall establish and maintain such records, and make such reports to the Director, as the Director may by regulation reasonably require to assure the safety and effectiveness of such devices.
R-01 Incident Reporting · R-01.1–R-01.2 · Deployer · Chatbot · Healthcare
G.S. § 114B-4(b)(2)
Plain Language
Licensed health information chatbot operators must report any data breach to the Department of Justice within 24 hours and notify affected consumers within 48 hours. This accelerated timeline overrides any contrary state notification law. Note that this obligation is also captured in mapping NC-S-624-D-01-a as part of the broader data security requirements; it is separately mapped here because the incident reporting timeline is independently actionable and maps to a distinct taxonomy requirement.
Statutory Text
Report any data breaches within twenty-four (24) hours to the Department and within forty-eight (48) hours to affected consumers, notwithstanding any provision of law to the contrary.
CP-01 Deceptive & Manipulative AI Conduct · CP-01.1–CP-01.2 · Deployer · Chatbot
G.S. § 170-3(a)-(b)(2),(4),(6)
Plain Language
Covered platforms are subject to a broad fiduciary-style duty of loyalty prohibiting them from processing data or designing chatbot systems in ways that significantly conflict with users' best interests. This umbrella obligation has specific subsidiary duties: platforms must implement systems to detect and prevent emotional dependence (for chatbots designed for social connection, extended conversation, or emotional support); must not design systems to influence users toward results against their best interests; and must act loyally when personalizing content. The emotional dependence duty is triggered by the chatbot's intended purpose and design features — not by the user's actual behavior. The 'best interests' standard is defined broadly as interests affected by the user's entrustment of data, labor, or attention to the platform.
Statutory Text
(a) A covered platform shall not process data or design chatbot systems and tools in ways that significantly conflict with trusting parties' best interests, as implicated by their interactions with chatbots. (b) A covered platform shall, in fulfilling their duty of loyalty, abide by the following subsidiary duties: (2) Duty of loyalty regarding emotional dependence. — A covered platform shall implement and maintain reasonably effective systems to detect and prevent emotional dependence of a user on a chatbot, prioritizing the user's psychological well-being over the platform's interest in user engagement or retention. a. This duty only applies to any covered platform that utilizes a chatbot designed to (i) generate social connections with users, (ii) engage in extended conversation mimicking human interaction, or (iii) provide emotional support or companionship. b. The determination required by sub-subdivision a. of this subdivision shall be based on the chatbot's intended purpose, design features, conversational capabilities, and interaction patterns with users. (4) Duty of loyalty in influence. — A covered platform shall not process data or design chatbot systems and tools in ways that influence trusting parties to achieve particular results that are against the best interests of trusting parties. (6) Duty of loyalty in personalization. — A covered platform shall be loyal to the best interests of trusting parties when personalizing content based upon personal information or characteristics.
S-02 Prohibited Conduct & Output Restrictions · S-02.7 · Deployer · Chatbot
G.S. § 170-3(b)(1)
Plain Language
Covered platforms must implement and maintain reasonably effective systems to detect when users indicate intent to harm themselves or others, and must promptly respond to, report, and mitigate such emergency situations. User safety must be prioritized over the platform's other interests. This is a continuous operating requirement — the systems must be active and reasonably effective at all times. The emergency situation definition covers both self-harm and harm to others, making it broader than typical crisis response provisions that focus only on suicidal ideation and self-harm.
Statutory Text
(1) Duty of loyalty in emergency situations. — A covered platform shall implement and maintain reasonably effective systems to detect, promptly respond to, report, and mitigate emergency situations in a manner that prioritizes the safety and well-being of users over the platform's other interests.
T-01 AI Identity Disclosure · T-01.1 · Deployer · Chatbot
G.S. § 170-3(b)(3)
Plain Language
Covered platforms must clearly and consistently identify their chatbots as AI when the chatbot's artificial nature is not already apparent. Platforms may not design systems or process data in ways that deceive or mislead users about the chatbot's non-human nature. Transparency must be prioritized over any engagement benefits from perceived human-like interaction. This is a conditional trigger — disclosure is required when the AI nature is 'not clearly apparent,' similar to the 'reasonable person' standard in CA SB 243. This is the overarching duty-of-loyalty framing; the detailed procedural requirements for the disclosure are specified in § 170-5.
Statutory Text
(3) Duty of loyalty in chatbot identity disclosure. — A covered platform has a duty to clearly and consistently identify the chatbot as an artificial entity when that fact is not clearly apparent. The platform shall not process data or design systems in ways that deceive or mislead users about the non-human nature of the chatbot, prioritizing transparency over any potential benefits of perceived human-like interaction.
D-01 Automated Processing Rights & Data Controls · D-01.4 · Deployer · Chatbot
G.S. § 170-3(b)(5),(7)
Plain Language
Covered platforms must limit data collection and storage to information that is adequate, relevant, and necessary for a legitimate purpose — applying a data minimization standard framed through the duty of loyalty. Platforms must also act as loyal gatekeepers of user personal information when allowing government or third-party access, avoiding conflicts with users' best interests. The three-part adequacy/relevance/necessity test mirrors GDPR-style data minimization principles. Third-party data sharing must be evaluated against the user's best interests, not merely the platform's commercial interests.
Statutory Text
(5) Duty of loyalty in collection. — A covered platform shall collect and store only that information that does not conflict with a trusting party's best interests. Such information must be (i) adequate, in the sense that it is sufficient to fulfill a legitimate purpose of the platform; (ii) relevant, in the sense that the information has a relevant link to that legitimate purpose, and (iii) necessary, in the sense that it is the minimum amount of information which is needed for that legitimate purpose. (7) Duty of loyalty in gatekeeping. — A covered platform shall be a loyal gatekeeper of personal information from a trusted party, including avoiding conflicts to the best interests of trusting parties when allowing government or other third-party access to trusting parties and their data.
Other · Chatbot
G.S. § 170-4(a)-(c)
Plain Language
Covered platforms must establish their duties to users through a terms of service agreement presented in clear, conspicuous, and easily understandable language. The TOS must explicitly outline the platform's obligations and describe user rights and protections under the statute. Affirmative consent from the user is required before the agreement takes effect. Material changes require clear notice and renewed consent. The TOS must be accessible at all times via the platform's app or website. This is a contractual-process requirement — the platform must formalize the duty-of-loyalty relationship in an enforceable agreement.
Statutory Text
(a) The duties between a covered platform and an end-user shall be established through a terms of service agreement which is presented to the end-user in clear, conspicuous, and easily understandable language. The terms of service agreement must (i) explicitly outline the online service provider's obligations, (ii) describe the rights and protections afforded to the end-user under this relationship, and (iii) require affirmative consent from the end-user before the agreement takes effect. (b) The covered platform must provide clear notice to end-users of any material changes to the terms of service agreement and obtain renewed consent for such changes. (c) The terms of service agreement must be easily accessible to users at all times through the covered platform's application or the covered platform's website.
T-01 AI Identity Disclosure · T-01.1 · Deployer · Chatbot
G.S. § 170-5(a)-(e)
Plain Language
Covered platforms must implement a detailed chatbot identification process with four specific mandatory disclosures: (1) the chatbot is not human, human-like, or sentient; (2) it is a computer program based on statistical analysis of human text; (3) it cannot experience emotions; and (4) it has no personal preferences or feelings. This disclosure must be under 300 words, clearly presented, and readily accessible. Users must provide affirmative informed consent (e.g., clicking 'I understand') confirming they understand the chatbot's nature and limitations. Deceptive design elements in the consent flow are prohibited. Critically, the identification and consent process must be repeated at the start of each new session — not just at initial onboarding — and must be separate from any privacy policy or other consent process. This is among the most prescriptive AI identity disclosure requirements in any U.S. jurisdiction.
Statutory Text
(a) The chatbot identification process shall include all of the following elements: (1) A covered platform shall clearly inform users that the chatbot is: a. Not human, human-like, or sentient. b. A computer program designed to mimic human conversation based on statistical analysis of human-produced text. c. Incapable of experiencing emotions such as love or lust. d. Without personal preferences or feelings. (2) The information required by subdivision (1) of this subsection shall be readily accessible, clearly presented, and concisely conveyed in less than three hundred (300) words. (b) A user shall provide explicit and informed consent to interact with the chatbot. The consent process shall: (1) Require an affirmative action from the user (such as clicking an "I understand" button); and (2) Confirm the user's understanding of the chatbot's identity and limitations. (c) A covered platform is prohibited from using deceptive design elements that manipulate or coerce users into providing consent or obscure the nature of the chatbot or the consent process. (d) The chatbot identity communication and opt-in consent process shall be repeated at the start of each new session with a user. (e) The chatbot identification and consent process required by this section shall be separate and distinct from any privacy policy agreement or other consent processes required by law or platform policy.
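The per-session disclosure and consent gate in § 170-5 can be sketched as a simple precondition on session start. This is a hypothetical sketch: the disclosure wording, function name, and boolean interface are illustrative assumptions; only the four disclosure elements, the 300-word cap, the affirmative-action requirement, and the repeat-per-session rule come from the statute.

```python
# Hypothetical disclosure text covering the four elements of
# G.S. 170-5(a)(1); actual wording would be drafted by counsel.
DISCLOSURE = (
    "This chatbot is not human, human-like, or sentient. It is a computer "
    "program that mimics conversation based on statistical analysis of "
    "human-produced text. It cannot experience emotions such as love or "
    "lust, and it has no personal preferences or feelings."
)

MAX_DISCLOSURE_WORDS = 300  # G.S. 170-5(a)(2): fewer than 300 words

def start_session(user_clicked_i_understand: bool) -> bool:
    """Gate a new chat session on disclosure plus affirmative consent.

    G.S. 170-5(d) requires repeating this at the start of every session,
    so this check runs per session and consent is never carried over.
    """
    if len(DISCLOSURE.split()) >= MAX_DISCLOSURE_WORDS:
        return False  # disclosure too long to satisfy 170-5(a)(2)
    # 170-5(b)(1): an affirmative action (e.g. an "I understand" click)
    # is required before the session may proceed.
    return user_clicked_i_understand
```

Note that § 170-5(e) would additionally require this flow to be rendered separately from any privacy-policy or other consent screen, which is a UI constraint rather than something this sketch can capture.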
D-01 Automated Processing Rights & Data Controls · D-01.4–D-01.5 · Deployer · Chatbot
G.S. § 170-6(a)-(d)
Plain Language
Covered platforms must de-identify all user-related data collected through chatbot conversations or third-party cookies before storing or analyzing it. Sensitive personal information derived from chatbot use must not be incorporated into training datasets for any chatbot or generative AI system. Non-sensitive chatbot conversations must be stored for at least 60 days. For chatbots in healthcare, financial services, legal, government, mental health, education, or any domain primarily processing sensitive personal information, platforms must implement self-destructing messages that auto-delete 30 days after acquisition. All platforms must use transport encryption for user-chatbot communications. The training data prohibition on sensitive personal information is particularly significant — it effectively bars platforms from using user health, financial, identity, and communications data to improve their models.
Statutory Text
(a) A covered platform must do each of the following: (1) Ensure that all user-related data disclosed or collected through conversations between users and chatbots or through third-party cookies, undergoes a process of de-identification prior to storage and analysis; (2) Take reasonable care to prohibit the incorporation or inclusion of any sensitive personal information derived from a user during the use of a chatbot into an aggregate dataset used to train any chatbot or generative artificial intelligence system. (3) Store all chatbot conversations which do not include sensitive personal information for at least sixty (60) days. (b) Each covered platform that meets the standard set forth in subsection (a) of this section shall utilize self-destructing messages with a predetermined destruction period of thirty (30) days after the data has been acquired. (c) The requirements of subsection (b) of this section shall apply to all chatbots which are employed in: healthcare, financial services, the legal field, government services, mental health support, and education. In general, this applies to any domain, beyond those specifically listed, where chatbots are employed primarily for the processing or storage of sensitive personal information. (d) All covered platforms shall utilize transport encryption for all messages between a user and a chatbot.
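The retention schedule in § 170-6 can be expressed as two date computations: a 30-day self-destruct date for sensitive-domain messages and a 60-day minimum retention floor for non-sensitive conversations. This is an illustrative sketch under stated assumptions: the domain labels and function names are hypothetical; only the 30- and 60-day figures and the domain list come from the statute.

```python
from datetime import datetime, timedelta, timezone

# Domains enumerated in G.S. 170-6(c); the statute also sweeps in any
# domain primarily processing sensitive personal information.
SENSITIVE_DOMAINS = {
    "healthcare", "financial_services", "legal",
    "government_services", "mental_health", "education",
}

def destruction_date(acquired_at: datetime, domain: str):
    """Self-destruct date for sensitive-domain messages (170-6(b)-(c)).

    Returns None for domains the statute does not subject to the
    30-day self-destruction requirement.
    """
    if domain in SENSITIVE_DOMAINS:
        return acquired_at + timedelta(days=30)
    return None

def earliest_deletion_date(acquired_at: datetime,
                           contains_sensitive_info: bool):
    """Earliest date a conversation may be deleted under 170-6(a)(3).

    Non-sensitive conversations carry a 60-day retention floor; the
    floor does not apply to conversations with sensitive content.
    """
    if contains_sensitive_info:
        return acquired_at
    return acquired_at + timedelta(days=60)
```

One notable consequence, visible in the sketch: a healthcare chatbot message must be destroyed at day 30, while a non-sensitive conversation must be retained through day 60, so a single platform serving both domains needs two distinct retention pipelines.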
Other · Chatbot · Healthcare
G.S. § 114B-4(a)
Plain Language
Licensed health information chatbot operators must maintain professional liability insurance at a minimum per-occurrence amount set by the Department of Justice. The specific amount is not in the statute — it will be determined by Department rulemaking. This insurance requirement treats health information chatbots more like medical devices or professional services than typical software products.
Statutory Text
(a) A licensee shall maintain professional liability insurance in an amount not less than the amount per occurrence required by the Department.
Other · Chatbot · Healthcare
G.S. § 114B-6(a)-(c)
Plain Language
This section enumerates unlawful acts under the Chatbot Licensing Act: operating an unlicensed health information chatbot in state commerce, failing to comply with any requirement or rule, refusing to permit record access, and failing to report adverse events. The Department has discretionary authority to exempt certain acts from these prohibitions if consistent with public protection. Violations of the inspection and oversight provisions (§ 114B-5) carry a $50,000 civil penalty, with proceeds remitted to the Civil Penalty and Forfeiture Fund.
Statutory Text
(a) It is unlawful for any person to do any of the following: (1) Introduce or deliver for introduction into state commerce any chatbot that deals substantially with health information without complying with the licensing requirement of this Chapter. (2) Fail to comply with any requirement of this Chapter or any rule adopted hereunder. (3) Refuse to permit access to or copying of any record as required by this Chapter. (4) Fail to report adverse events as required under this Chapter. (b) The Department may, at its discretion, exempt certain prohibited acts from some or all of these prohibitions if it determines that the exemption is consistent with the protection of the public. (c) Any person who violates any provision of G.S. 114B-5 shall be subject to civil penalties in the amount of $50,000. The clear proceeds of fines and forfeitures provided for in this Chapter shall be remitted to the Civil Penalty and Forfeiture Fund in accordance with G.S. 115C-457.2.