S-963
NC · State · USA
● Pending
Proposed Effective Date
2027-01-01
North Carolina Senate Bill 963 — An Act Regulating Artificial Intelligence Chatbot Licensing, Safety, and Privacy in North Carolina
Summary

Creates two new chapters of North Carolina law regulating AI chatbots. Part I (Chapter 114B, the Chatbot Licensing Act) requires any person operating or distributing a chatbot that deals substantially with health information to obtain a license from the Department of Justice, submit detailed technical and safety documentation, maintain professional liability insurance, conduct annual third-party audits, and submit quarterly performance reports. Part II (Chapter 170, the Chatbot Safety and Privacy Act) imposes fiduciary-style duties of loyalty on covered platforms providing chatbot services — including duties regarding emergency situations, emotional dependence prevention, identity disclosure, anti-manipulation, data minimization, personalization, and gatekeeping of personal information. It also mandates detailed chatbot identification disclosures with per-interaction opt-in consent, data de-identification and self-destructing message requirements for sensitive-information chatbots, and transport encryption. Enforcement is split: the Department of Justice enforces the licensing chapter through inspections and civil penalties ($50,000), while Chapter 170 is enforced by AG parens patriae actions and a private right of action with $1,000 statutory minimum per violation. Both chapters become effective January 1, 2027.
Enforcement & Penalties
Enforcement Authority
Dual enforcement. Part I (Chatbot Licensing, Chapter 114B): The North Carolina Department of Justice enforces the chapter and rules adopted thereunder. The Attorney General designates a Director, officers, and employees for oversight, inspection, and enforcement, including physical and digital inspections. Part II (Chatbot Safety and Privacy, Chapter 170): The Attorney General may bring a civil action as parens patriae on behalf of state residents to enjoin violations, obtain damages, restitution, or other compensation. Private right of action is available to any person who suffers injury in fact. Two-year statute of limitations from discovery. One action per plaintiff per covered platform for the same alleged violation. Rights and remedies may not be waived by agreement, policy, form, or condition of service.
Penalties
Chapter 114B: Civil penalties of $50,000 per violation of §§ 114B-5 or 114B-6. Fines remitted to the Civil Penalty and Forfeiture Fund. Chapter 170: Greater of actual damages or $1,000 per violation. Plaintiff may also recover injunctive relief, reasonable attorneys' fees and litigation costs, and any other relief the court deems appropriate. The Attorney General may obtain injunctive relief, damages, restitution, or other compensation. Statutory damages under Chapter 170 do not require proof of actual monetary harm.
Who Is Covered
"Covered platform" – Any person that provides chatbot services to users in this State, if the person (i) has annual gross revenues exceeding one hundred thousand dollars ($100,000) in the last calendar year or any of the two preceding calendar years or (ii) has more than 5,000 monthly active users in the United States for half or more of the months during the last 12 months. The term does not include any person that provides chatbot services solely for educational or research purposes and does not monetize such services through advertising or commercial uses or any government entity providing chatbot services for official purposes.
"Licensee" – A person holding a license issued and in effect under this Chapter.
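The two coverage prongs and the carve-outs can be expressed as a simple eligibility check. The sketch below is illustrative only; the function and parameter names are our own, not statutory terms, and it assumes 12 months of U.S. monthly-active-user data.

```python
# Hypothetical sketch of the "covered platform" threshold test.
# Names and data shapes are illustrative assumptions, not statutory terms.

def is_covered_platform(
    annual_revenues: list[float],     # gross revenue, most recent 3 calendar years first
    monthly_active_users: list[int],  # U.S. MAU for each of the last 12 months
    noncommercial_edu_or_research: bool = False,
    government_entity: bool = False,
) -> bool:
    """Return True if the operator appears to meet either coverage prong."""
    # Carve-outs: non-monetized educational/research services and government use.
    if noncommercial_edu_or_research or government_entity:
        return False
    # Prong (i): revenues exceeding $100,000 in the last calendar year
    # or either of the two preceding calendar years.
    revenue_prong = any(r > 100_000 for r in annual_revenues[:3])
    # Prong (ii): more than 5,000 U.S. MAU in half or more (>= 6) of the last 12 months.
    months_over = sum(1 for mau in monthly_active_users[-12:] if mau > 5_000)
    user_prong = months_over >= 6
    return revenue_prong or user_prong
```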
What Is Covered
"Chatbot" – A generative artificial intelligence system with which users can interact by or through an interface that approximates or simulates conversation through a text, audio, or visual medium.
Compliance Obligations · 28 obligations
Other · Deployer · Chatbot · Healthcare
G.S. § 114B-3(a)-(d)
Plain Language
No person may operate or distribute a chatbot that deals substantially with health information in North Carolina without first obtaining a health information chatbot license from the Department of Justice. The application requires comprehensive documentation of the chatbot's technical architecture, data practices, security, privacy protections, quality control procedures, risk assessment strategies, regulatory compliance evidence, insurance, and fees. The Department reviews applications against industry standards for technical competence, data protection, risk management, evidence-based efficacy for the health use case, expert endorsement, and public safety. The Department is directed to adopt implementing rules. This is a pre-market licensing regime without a direct analogue in the canonical taxonomy.
Statutory Text
(a) No person shall operate or distribute a chatbot that deals substantially with health information without first obtaining a health information chatbot license. (b) An application for a health information chatbot license shall include all of the following: (1) Detailed documentation of the chatbot's: a. Technical architecture and operational specifications. b. Data collection, processing, storage, and deletion practices. c. Security measures and protocols. d. Privacy protection mechanisms. (2) Quality control and testing procedures. (3) Risk assessment and mitigation strategies. (4) Evidence of compliance with applicable federal and State regulations. (5) Proof of insurance coverage. (6) Required application fees. (7) Any additional information required by the Department. (c) The Department shall review applications for health information chatbot licenses based upon all of the following: (1) Technical competence and reliability as compliant with industry standards. (2) Data protection and security measures as compliant with industry standards. (3) Compliance with applicable regulations. (4) Risk management procedures. (5) Professional qualification requirements, including: a. Evidence-based standards demonstrating substantial efficacy for the supported use case of health information; and b. Endorsement by qualified experts within the field of the supported use case. (6) Public safety considerations. (d) The Department shall adopt rules to carry out the purposes of this Chapter.
S-01 AI System Safety Program · S-01.4, S-01.7 · Deployer · Chatbot · Healthcare
G.S. § 114B-4(b)(1)
Plain Language
Licensees operating health-information chatbots must implement industry-standard encryption for data both in transit and at rest, maintain detailed access logs of system activity, and conduct security audits at least every six months. This is an ongoing operational security requirement — not a one-time pre-launch check.
Statutory Text
A licensee shall do all of the following: (1) Implement industry-standard encryption for data in transit and at rest, maintain detailed access logs, and conduct regular security audits no less than once every six months.
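The six-month audit cadence lends itself to a simple deadline calculation. This sketch is an assumption-laden illustration: the statute says only "no less than once every six months," and the 183-day approximation of six months is ours.

```python
# Illustrative audit-cadence helper for the six-month security-audit requirement.
# The 183-day reading of "once every six months" is an assumption.
from datetime import date, timedelta

SIX_MONTHS = timedelta(days=183)  # assumed approximation of six months

def next_audit_due(last_audit: date) -> date:
    """Latest date by which the next security audit must be completed."""
    return last_audit + SIX_MONTHS

def audit_overdue(last_audit: date, today: date) -> bool:
    """True if the licensee has exceeded the assumed six-month window."""
    return today > next_audit_due(last_audit)
```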
R-01 Incident Reporting · R-01.1, R-01.2 · Deployer · Chatbot · Healthcare
G.S. § 114B-4(b)(2)
Plain Language
Licensees must report data breaches to the Department of Justice within 24 hours and notify affected consumers within 48 hours. This obligation overrides any conflicting state breach notification timelines. The 24-hour regulator reporting and 48-hour consumer notification windows are among the most aggressive in any U.S. AI statute.
Statutory Text
A licensee shall do all of the following: (2) Report any data breaches within 24 hours to the Department and within 48 hours to affected consumers, notwithstanding any provision of law to the contrary.
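The two notification windows run from breach discovery and can be computed mechanically. A minimal sketch, with the function name and dictionary keys as our own labels:

```python
# Hypothetical deadline helper for the breach-notice windows:
# 24 hours to the Department, 48 hours to affected consumers.
from datetime import datetime, timedelta

def breach_notice_deadlines(discovered_at: datetime) -> dict[str, datetime]:
    """Return the latest permissible notification times after a breach is discovered."""
    return {
        "department": discovered_at + timedelta(hours=24),  # regulator notice
        "consumers": discovered_at + timedelta(hours=48),   # consumer notice
    }
```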
D-01 Automated Processing Rights & Data Controls · Deployer · Chatbot · Healthcare
G.S. § 114B-4(b)(3)-(5)
Plain Language
Licensees must obtain explicit user consent before collecting or using data, provide users with access to their personal data held by the platform, and honor user requests to delete their data. These are standard data subject rights — consent, access, and deletion — applied specifically to licensed health-information chatbot operators.
Statutory Text
A licensee shall do all of the following: (3) Obtain explicit user consent for data collection and use. (4) Provide users with access to their personal data. (5) Provide users with the ability to delete their data upon request.
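The consent-access-deletion triad maps naturally onto a gated data store. The sketch below is a minimal in-memory illustration under our own class and method names; a real implementation would need durable storage and authentication.

```python
# Minimal in-memory sketch of the user-data rights triad:
# explicit consent before collection, access on request, deletion on request.
# Class and method names are illustrative only.

class UserDataStore:
    def __init__(self) -> None:
        self._consented: set[str] = set()
        self._records: dict[str, list[str]] = {}

    def record_consent(self, user_id: str) -> None:
        self._consented.add(user_id)

    def collect(self, user_id: str, item: str) -> None:
        # (3) No collection or use without explicit prior consent.
        if user_id not in self._consented:
            raise PermissionError("explicit user consent required before collection")
        self._records.setdefault(user_id, []).append(item)

    def access(self, user_id: str) -> list[str]:
        # (4) Users may access their personal data.
        return list(self._records.get(user_id, []))

    def delete(self, user_id: str) -> None:
        # (5) Users may delete their data upon request.
        self._records.pop(user_id, None)
```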
T-01 AI Identity Disclosure · T-01.1 · Deployer · Chatbot · Healthcare
G.S. § 114B-4(c)(1)-(6)
Plain Language
Licensees must clearly disclose to users six categories of information: that the chatbot is artificial, the service's limitations, how data is collected and used, what user rights and remedies exist, emergency resources when applicable, and how human oversight and intervention work. This is a multi-element disclosure obligation that goes beyond mere AI identity disclosure to include service limitations, data practices, rights, and safety information. The AI identity disclosure component (item 1) maps directly to T-01.1; the remaining items are broader operational transparency requirements.
Statutory Text
A licensee must clearly disclose all of the following: (1) The artificial nature of the chatbot. (2) Limitations of the service. (3) Data collection and use practices. (4) User rights and remedies. (5) Emergency resources when applicable. (6) Human oversight and intervention protocols.
S-01 AI System Safety Program · S-01.1 · Deployer · Chatbot · Healthcare
G.S. § 114B-4(d)(1)-(3)
Plain Language
Licensees must demonstrate their health-information chatbot's effectiveness through three mechanisms: peer-reviewed controlled trials with real-world performance data, a comparative analysis against human expert performance, and meeting minimum domain benchmarks set by the Department. This is an unusually rigorous validation requirement — resembling FDA clinical trial standards more than typical AI regulation — and requires ongoing demonstration, not just pre-deployment testing.
Statutory Text
A licensee shall do all of the following: (1) Demonstrate effectiveness through peer-reviewed, controlled trials with appropriate validation studies done on appropriate sample sizes with real-world performance data. (2) Demonstrate effectiveness in a comparative analysis to human expert performance. (3) Meet minimum domain benchmarks as established by the Department.
G-01 AI Governance Program & Documentation · G-01.5 · Deployer · Chatbot · Healthcare
G.S. § 114B-4(e)
Plain Language
Licensees must conduct regular internal inspections and an annual independent third-party audit of their health-information chatbot. All inspection and audit results must be made available to the Department of Justice. This creates both an ongoing self-inspection obligation and a mandatory annual external audit with regulatory disclosure.
Statutory Text
A licensee shall conduct regular inspections and perform an annual third-party audit. Results of all inspections and audits must be made available to the Department.
R-03 Operational Performance Reporting · R-03.1 · Deployer · Chatbot · Healthcare
G.S. § 114B-4(f)
Plain Language
Licensees must maintain continuous monitoring systems for safety and risk indicators and submit quarterly performance reports — including incident reports — to the Department. This is a routine periodic reporting obligation distinct from the breach notification requirement in § 114B-4(b)(2). The continuous monitoring component is an ongoing operational obligation, while the quarterly reports are scheduled submissions.
Statutory Text
A licensee shall implement continuous monitoring systems for safety and risk indicators and submit quarterly performance reports, including incident reports.
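The statute fixes a quarterly cadence but no specific due dates. The helper below assumes calendar quarters for scheduling; that mapping is our assumption, not a statutory rule.

```python
# Illustrative quarterly-report scheduler. Calendar quarters are an assumption;
# the statute requires only quarterly submissions.
from datetime import date

def report_quarter(d: date) -> str:
    """Label the calendar quarter a submission date falls in, e.g. '2027-Q2'."""
    return f"{d.year}-Q{(d.month - 1) // 3 + 1}"

def quarter_starts(year: int) -> list[date]:
    """First day of each calendar quarter in a year."""
    return [date(year, m, 1) for m in (1, 4, 7, 10)]
```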
T-01 AI Identity Disclosure · T-01.1 · Deployer · Chatbot · Healthcare
G.S. § 114B-5
Plain Language
Licensees under the Chatbot Licensing Act must ensure their chatbots comply with the chatbot identification process requirements in Chapter 170, § 170-5. This cross-reference extends the Chapter 170 identification and consent obligations to all licensed health-information chatbots; the substantive obligations are mapped separately under § 170-5, and this provision creates no independent obligation beyond ensuring compliance with that section.
Statutory Text
Licensees shall ensure that all interactions between chatbots and users comply with the provisions of G.S. 170-5.
R-02 Regulatory Disclosure & Submissions · R-02.2 · Deployer · Chatbot · Healthcare
G.S. § 114B-6(a)-(f)
Plain Language
The Department of Justice has broad inspection authority over licensed health-information chatbots, including both physical and digital inspections. Digital inspections may cover source code, algorithms, ML models, data practices, cybersecurity, user privacy protections, chatbot behavior testing, and integration with other platforms. The Director may require access to all development, testing, validation, production, distribution, and performance records. Trade secrets and confidential commercial information obtained during inspections are protected from public disclosure. Following inspections, the Director issues a detailed findings report with required corrective actions. Manufacturers and importers must establish and maintain records and submit reports as required by regulation. Licensees must maintain documentation in a form that can be produced to the Department upon request.
Statutory Text
(a) The Department shall enforce the provisions of, and the rules adopted under, this Chapter. (b) The Attorney General shall designate a Director, officers, and employees assigned to the oversight and enforcement of this Chapter. Upon presenting appropriate credentials and a written notice to the owner, operator, or agent in charge, those officers and employees are authorized to enter, at reasonable times, any factory, warehouse, or establishment in which chatbots licensed under this Chapter are manufactured, processed, or held, and to inspect, in a reasonable manner and within reasonable limits and in a reasonable time. In addition to physical inspections, the Department may conduct digital inspections of licensed chatbots under this Chapter, to include the following: (1) Examination of source code, algorithms, and machine learning models. (2) Review of data processing and storage practices. (3) Evaluation of cybersecurity measures and protocols. (4) Assessment of user data privacy protections. (5) Testing of chatbot responses and behaviors in various scenarios. (6) Audit of data collection, use, and retention practices. (7) Inspection of software development and update processes. (8) Review of remote access and monitoring capabilities. (9) Evaluation of integration with other digital health technologies or platforms. (c) As part of any inspection, whether physical or digital, the Director may require access to all records relating to the development, testing, validation, production, distribution, and performance of a chatbot licensed under this Chapter. (d) Any information obtained during an inspection which falls within the definition of a trade secret or confidential commercial information, as defined in 21 C.F.R. § 20.61, shall be treated as confidential and shall not be disclosed under Chapter 132 of the General Statutes, except as may be necessary in proceedings under this Chapter or other applicable law. 
(e) Following any inspection, the Director shall provide a detailed report of findings to the manufacturer or importer, including any identified deficiencies and required corrective actions. (f) Every person who is a manufacturer or importer of a licensed chatbot under this Chapter shall establish and maintain such records, and make such reports to the Director, as the Director may by regulation reasonably require to assure the safety and effectiveness of such devices.
CP-01 Deceptive & Manipulative AI Conduct · CP-01.1 · Deployer · Chatbot
G.S. § 170-3(a)
Plain Language
Covered platforms are prohibited from processing data or designing chatbot systems in ways that significantly conflict with users' best interests. 'Best interests' is defined broadly as interests affected by the user's entrustment of data, labor, or attention to the platform. This is a general fiduciary-style duty of loyalty that serves as the overarching prohibition, with specific subsidiary duties enumerated in § 170-3(b). It effectively prohibits exploitative data processing and system design that prioritizes platform interests over user welfare.
Statutory Text
A covered platform shall not process data or design chatbot systems and tools in ways that significantly conflict with trusting parties' best interests, as implicated by their interactions with chatbots.
S-04 AI Crisis Response Protocols · S-04.1 · Deployer · Chatbot
G.S. § 170-3(b)(1)
Plain Language
Covered platforms must implement and maintain reasonably effective systems to detect when a user indicates intent to harm themselves or others, and must promptly respond to, report, and mitigate such emergency situations. User safety and well-being must be prioritized over the platform's commercial or engagement interests. The trigger is user-indicated intent to self-harm or harm others, a broader framing than suicidal ideation alone.
Statutory Text
Duty of loyalty in emergency situations. – A covered platform shall implement and maintain reasonably effective systems to detect, promptly respond to, report, and mitigate emergency situations in a manner that prioritizes the safety and well-being of users over the platform's other interests.
CP-01 Deceptive & Manipulative AI Conduct · CP-01.4 · Deployer · Chatbot
G.S. § 170-3(b)(2)
Plain Language
Covered platforms operating chatbots designed for social connection, extended human-like conversation, or emotional support/companionship must implement and maintain systems to detect and prevent users from becoming emotionally dependent on the chatbot. User psychological well-being must take priority over platform engagement or retention metrics. The duty applies only to chatbots with specific design features — social connection generation, extended conversational mimicry, or emotional support — assessed based on the chatbot's intended purpose, design, conversational capabilities, and interaction patterns.
Statutory Text
Duty of loyalty regarding emotional dependence. – A covered platform shall implement and maintain reasonably effective systems to detect and prevent emotional dependence of a user on a chatbot, prioritizing the user's psychological well-being over the platform's interest in user engagement or retention. a. This duty only applies to any covered platform that utilizes a chatbot designed to (i) generate social connections with users, (ii) engage in extended conversation mimicking human interaction, or (iii) provide emotional support or companionship. b. The determination required by sub-subdivision a. of this subdivision shall be based on the chatbot's intended purpose, design features, conversational capabilities, and interaction patterns with users.
T-01 AI Identity Disclosure · T-01.1 · Deployer · Chatbot
G.S. § 170-3(b)(3)
Plain Language
Covered platforms must clearly and consistently disclose the chatbot's artificial nature whenever it is not already apparent to the user. Platforms may not process data or design systems in ways that deceive or mislead users about the chatbot being non-human. This is a conditional disclosure trigger (only when the AI nature is 'not clearly apparent') combined with an anti-deception prohibition. Transparency must be prioritized over any commercial benefit of human-like perceived interaction.
Statutory Text
Duty of loyalty in chatbot identity disclosure. – A covered platform has a duty to clearly and consistently identify the chatbot as an artificial entity when that fact is not clearly apparent. The platform shall not process data or design systems in ways that deceive or mislead users about the non-human nature of the chatbot, prioritizing transparency over any potential benefits of perceived human-like interaction.
CP-01 Deceptive & Manipulative AI Conduct · CP-01.1 · Deployer · Chatbot
G.S. § 170-3(b)(4)
Plain Language
Covered platforms may not process data or design chatbot systems to influence users toward outcomes that are against the users' best interests. This is a broad anti-manipulation prohibition that goes beyond specific techniques (dark patterns, psychological exploitation) to prohibit any design or data processing that steers users against their own interests. The 'best interests' standard is defined by reference to the user's entrustment of data, labor, or attention to the platform.
Statutory Text
Duty of loyalty in influence. – A covered platform shall not process data or design chatbot systems and tools in ways that influence trusting parties to achieve particular results that are against the best interests of trusting parties.
D-01 Automated Processing Rights & Data Controls · D-01.4 · Deployer · Chatbot
G.S. § 170-3(b)(5)
Plain Language
Covered platforms must limit data collection and storage to information that does not conflict with users' best interests and that meets a three-part test: the data must be adequate (sufficient for a legitimate platform purpose), relevant (linked to that purpose), and necessary (the minimum amount needed). This is a data minimization obligation framed through a fiduciary lens — the 'best interests' overlay means that even data meeting the adequacy/relevance/necessity test may be prohibited if collection itself conflicts with user interests.
Statutory Text
Duty of loyalty in collection. – A covered platform shall collect and store only that information that does not conflict with a trusting party's best interests. Such information must be (i) adequate, in the sense that it is sufficient to fulfill a legitimate purpose of the platform, (ii) relevant, in the sense that the information has a relevant link to that legitimate purpose, and (iii) necessary, in the sense that it is the minimum amount of information which is needed for that legitimate purpose.
CP-01 Deceptive & Manipulative AI Conduct · CP-01.1 · Deployer · Chatbot
G.S. § 170-3(b)(6)
Plain Language
When covered platforms personalize chatbot content based on user personal information or characteristics, they must do so in a manner loyal to the user's best interests. This prevents platforms from using personalization to exploit user vulnerabilities, push users toward harmful content, or steer users against their interests. The obligation applies to any content personalization based on personal data or user characteristics.
Statutory Text
Duty of loyalty in personalization. – A covered platform shall be loyal to the best interests of trusting parties when personalizing content based upon personal information or characteristics.
D-01 Automated Processing Rights & Data Controls · Deployer · Chatbot
G.S. § 170-3(b)(7)
Plain Language
Covered platforms must act as loyal gatekeepers of user personal information, particularly when granting government or third-party access to user data. The platform must avoid conflicts with users' best interests when sharing data with external parties. This creates a fiduciary-style data stewardship obligation that restricts how platforms share user data with third parties, with heightened scrutiny for government data requests.
Statutory Text
Duty of loyalty in gatekeeping. – A covered platform shall be a loyal gatekeeper of personal information from a trusted party, including avoiding conflicts to the best interests of trusting parties when allowing government or other third-party access to trusting parties and their data.
Other · Deployer · Chatbot
G.S. § 170-4(a)-(c)
Plain Language
Covered platforms must present their terms of service in clear, conspicuous, and easily understandable language. The ToS must explicitly outline platform obligations, describe user rights and protections, and require affirmative user consent. Material changes require clear notice and renewed consent. The ToS must remain accessible at all times through the platform's app or website. This is a contractual transparency and consent framework that governs how the platform's legal obligations are communicated to users, but it creates no independent AI-specific compliance obligation beyond standard consumer contract requirements.
Statutory Text
(a) The duties between a covered platform and an end-user shall be established through a terms of service agreement which is presented to the end-user in clear, conspicuous, and easily understandable language. The terms of service agreement must (i) explicitly outline the online service provider's obligations, (ii) describe the rights and protections afforded to the end-user under this relationship, and (iii) require affirmative consent from the end-user before the agreement takes effect. (b) The covered platform must provide clear notice to end-users of any material changes to the terms of service agreement and obtain renewed consent for such changes. (c) The terms of service agreement must be easily accessible to users at all times through the covered platform's application or the covered platform's website.
T-01 AI Identity Disclosure · T-01.1 · Deployer · Chatbot
G.S. § 170-5(a)-(e)
Plain Language
Covered platforms must implement a detailed chatbot identification process with four specific disclosures: the chatbot is not human, human-like, or sentient; it is a computer program that mimics conversation via statistical analysis; it cannot experience emotions; and it has no personal preferences or feelings. This disclosure must be under 300 words, clearly presented, and readily accessible. Users must provide explicit, informed, affirmative consent (e.g., clicking 'I understand') confirming they understand the chatbot's identity and limitations. Deceptive design elements that manipulate consent or obscure the chatbot's nature are prohibited. The identification and consent process must be repeated at the start of each new interaction and must be separate from privacy policies or other consent processes. This is one of the most prescriptive chatbot disclosure requirements in U.S. legislation — it mandates specific factual statements rather than just requiring 'clear and conspicuous' notice.
Statutory Text
(a) The chatbot identification process shall include all of the following elements: (1) A covered platform shall clearly inform users that the chatbot is: a. Not human, human-like, or sentient. b. A computer program designed to mimic human conversation based on statistical analysis of human-produced text. c. Incapable of experiencing emotions such as love or lust. d. Without personal preferences or feelings. (2) The information required by subdivision (1) of this subsection shall be readily accessible, clearly presented, and concisely conveyed in less than 300 words. (b) A user shall provide explicit and informed consent to interact with the chatbot. The consent process shall: (1) Require an affirmative action from the user (such as clicking an "I understand" button); and (2) Confirm the user's understanding of the chatbot's identity and limitations. (c) A covered platform is prohibited from using deceptive design elements that manipulate or coerce users into providing consent or obscure the nature of the chatbot or the consent process. (d) The chatbot identity communication and opt-in consent process shall be repeated at the start of each new interaction with a user. (e) The chatbot identification and consent process required by this section shall be separate and distinct from any privacy policy agreement or other consent processes required by law or platform policy.
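The four mandated statements, the sub-300-word limit, and the per-interaction affirmative consent can be wired together as a gate in front of each session. In this sketch, the disclosure wording beyond the statutory elements and all names are illustrative assumptions:

```python
# Sketch of the identification-and-consent gate: the four required statements,
# a sub-300-word check, and per-interaction affirmative consent.
# Wording beyond the statutory elements is illustrative.

DISCLOSURE = (
    "This chatbot is not human, human-like, or sentient. It is a computer "
    "program designed to mimic human conversation based on statistical "
    "analysis of human-produced text. It cannot experience emotions such as "
    "love or lust, and it has no personal preferences or feelings."
)

def disclosure_is_compliant(text: str) -> bool:
    # (a)(2): conveyed concisely in fewer than 300 words.
    return len(text.split()) < 300

def start_interaction(user_clicked_i_understand: bool) -> bool:
    """Gate each new interaction on affirmative, informed consent (subsections (b), (d))."""
    if not disclosure_is_compliant(DISCLOSURE):
        return False
    # (b)(1): an affirmative action, such as clicking an "I understand" button.
    return user_clicked_i_understand
```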
D-01 Automated Processing Rights & Data Controls · D-01.4 · Deployer · Chatbot
G.S. § 170-6(a)(1)-(3)
Plain Language
Covered platforms must de-identify all user-related data collected through chatbot conversations or third-party cookies before storing or analyzing it. De-identification requires replacing identifiable information with pseudonyms, aggregating data to make re-identification statistically improbable, and removing traceable context and metadata. Platforms must also take reasonable care to prevent sensitive personal information derived from chatbot use from being incorporated into training datasets for any chatbot or generative AI system. Non-sensitive chatbot conversations must be stored for at least 60 days. This creates a tension: data must be de-identified before storage but non-sensitive conversations must be retained for 60 days — in practice, the 60-day retention applies to de-identified conversation data.
Statutory Text
A covered platform must do all of the following: (1) Ensure that all user-related data disclosed collected through conversations between users and chatbots or through third-party cookies undergoes a process of de-identification prior to storage and analysis. (2) Take reasonable care to prohibit the incorporation or inclusion of any sensitive personal information derived from a user during the use of a chatbot into an aggregate dataset used to train any chatbot or generative artificial intelligence system. (3) Store all chatbot conversations which does not include sensitive personal information for at least 60 days.
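The pseudonymization-plus-metadata-removal step can be sketched as a pre-storage pass. The salted SHA-256 pseudonym scheme and the metadata field names below are our assumptions, not statutory requirements:

```python
# Hypothetical de-identification pass: replace identifiers with pseudonyms and
# strip traceable metadata before storage. Field names and the salted SHA-256
# pseudonym scheme are illustrative assumptions.
import hashlib

METADATA_FIELDS = {"ip_address", "device_id", "session_id", "timestamp"}

def pseudonym(user_id: str, salt: str) -> str:
    """Stable pseudonym: a salted hash lets records be linked without the raw ID."""
    return hashlib.sha256((salt + user_id).encode()).hexdigest()[:16]

def deidentify(record: dict, salt: str) -> dict:
    """Return a copy with the user ID pseudonymized and traceable metadata removed."""
    cleaned = {k: v for k, v in record.items() if k not in METADATA_FIELDS}
    if "user_id" in cleaned:
        cleaned["user_id"] = pseudonym(cleaned["user_id"], salt)
    return cleaned
```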
D-01 Automated Processing Rights & Data Controls · Deployer · Chatbot · Healthcare · Financial Services
G.S. § 170-6(b)-(c)
Plain Language
Covered platforms must implement self-destructing messages that automatically and irreversibly delete chatbot conversation data 30 days after acquisition. This requirement applies to chatbots employed in healthcare, financial services, legal services, government services, mental health support, education, and any other domain where chatbots primarily process or store sensitive personal information. The 30-day auto-deletion creates a hard ceiling on data retention for these sensitive-domain chatbots — data must be programmed to become permanently inaccessible to both parties after 30 days.
Statutory Text
(b) Each covered platform that meets the standard set forth in subsection (a) of this section shall utilize self-destructing messages with a predetermined destruction period of 30 days after the data has been acquired. (c) The requirements of subsection (b) of this section shall apply to all chatbots which are employed in healthcare, financial services, the legal field, government services, mental health support, and education. In general, this applies to any domain, beyond those specifically listed, where chatbots are employed primarily for the processing or storage of sensitive personal information.
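The 30-day destruction period amounts to a hard time-to-live on sensitive-domain conversation data. A minimal purge sketch, with the storage layout as our own assumption:

```python
# Sketch of the self-destructing-message rule: conversation data in the listed
# sensitive domains becomes permanently inaccessible 30 days after acquisition.
# The message record layout is illustrative.
from datetime import datetime, timedelta

DESTRUCTION_PERIOD = timedelta(days=30)

def purge_expired(messages: list[dict], now: datetime) -> list[dict]:
    """Keep only messages acquired within the last 30 days; older messages are
    irreversibly dropped."""
    return [m for m in messages if now - m["acquired_at"] < DESTRUCTION_PERIOD]
```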
S-01 AI System Safety Program · Deployer · Chatbot
G.S. § 170-6(d)
Plain Language
All covered platforms must encrypt all messages transmitted between users and chatbots during transit. The statute defines transport encryption as data encrypted during transmission but potentially accessible in unencrypted form at endpoints or by intermediary service providers. This is a baseline security requirement — notably, it mandates only transport-layer encryption (e.g., TLS), not end-to-end encryption, meaning the platform itself may access message content at rest.
Statutory Text
All covered platforms shall utilize transport encryption for all messages between a user and a chatbot.
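Because the statute requires only transport-layer encryption, the baseline obligation is essentially "speak TLS with sane settings." A minimal sketch using Python's standard `ssl` module is below; the choice of TLS 1.2 as a floor is an assumption on our part, not something the bill specifies.

```python
import ssl

def transport_context() -> ssl.SSLContext:
    """Build a client-side TLS context enforcing modern transport encryption.

    create_default_context() enables certificate verification and hostname
    checking by default; we additionally refuse legacy protocol versions.
    """
    ctx = ssl.create_default_context()
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    return ctx

ctx = transport_context()
```

This satisfies encryption in transit only: as the Plain Language note observes, message content remains readable to the platform itself at the endpoints, which end-to-end encryption (not required here) would prevent.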
Other · Chatbot · Healthcare
G.S. § 114B-7(a)-(c)
Plain Language
This provision establishes the enforcement hooks and penalties for the Chatbot Licensing Act. It declares it unlawful to operate an unlicensed health-information chatbot, fail to comply with any chapter requirement, refuse records access, or fail to report adverse events. The Department may exempt certain acts from these prohibitions at its discretion. Violations of §§ 114B-5 (identification requirements) and 114B-6 (enforcement/inspections) carry civil penalties of $50,000 per violation. This is an enforcement and penalty provision that creates no independent compliance obligation beyond those already mapped.
Statutory Text
(a) It is unlawful for any person to do any of the following: (1) Introduce or deliver for introduction into State commerce any chatbot that deals substantially with health information without complying with the licensing requirement of this Chapter. (2) Fail to comply with any requirement of this Chapter or any rule adopted hereunder. (3) Refuse to permit access to or copying of any record as required by this Chapter. (4) Fail to report adverse events as required under this Chapter. (b) The Department may, at its discretion, exempt certain prohibited acts from some or all of these prohibitions if it determines that the exemption is consistent with the protection of the public. (c) Any person who violates any provision of G.S. 114B-5 or G.S. 114B-6 shall be subject to civil penalties in the amount of fifty thousand dollars ($50,000). The clear proceeds of fines and forfeitures provided for in this Chapter shall be remitted to the Civil Penalty and Forfeiture Fund in accordance with G.S. 115C-457.2.
Other · Chatbot
G.S. § 170-7(a)-(d)
Plain Language
This provision establishes the enforcement mechanisms for the Chatbot Safety and Privacy Act. The Attorney General may bring parens patriae civil actions to enjoin violations and obtain damages, restitution, or other relief. Any person with injury in fact may bring a private action for the greater of actual damages or $1,000 per violation, plus attorneys' fees, costs, and injunctive relief. Private actions have a two-year statute of limitations from discovery, are limited to one action per plaintiff per platform per violation, and rights and remedies cannot be waived by contract. This is an enforcement provision that creates no independent compliance obligation.
Statutory Text
(a) In any case in which the Attorney General has reason to believe that a covered platform has violated or is violating any provision of this Chapter, the State, as parens patriae, may bring a civil action on behalf of the residents of the State to (i) enjoin any practice violating this Chapter and enforce compliance with the pertinent section or sections on behalf of residents of the State, (ii) obtain damages, restitution, or other compensation, each of which shall be distributed in accordance with State law, or (iii) obtain such other relief as the court may consider to be appropriate. (b) Any person who suffers injury in fact as a result of a violation of this Chapter may bring a civil action against the covered platform to enjoin further the violation, recover damages in an amount equal to the greater of actual damages or one thousand dollars ($1,000) per violation, obtain reasonable attorneys' fees and litigation costs, and obtain any other relief that the court deems appropriate. (c) An action under subsection (b) of this section may not be brought more than two years after the date on which the person first discovered or reasonably should have discovered the violation. No person shall be permitted to bring more than one action under this subsection against the same covered platform for the same alleged violation. (d) The rights and remedies provided for in this section may not be waived by any agreement, policy, form, or condition of service.
G-01 AI Governance Program & Documentation · G-01.3 · G-01.4 · Deployer · Chatbot · Healthcare
G.S. § 114B-6(f)
Plain Language
Manufacturers and importers of licensed health-information chatbots must establish and maintain records and submit reports as required by the Director through regulation. The specific records and reports will be defined by regulation, but the obligation to establish and maintain a recordkeeping system capable of producing documentation on regulatory demand is immediate. This is a standing recordkeeping obligation tied to the Director's regulatory authority.
Statutory Text
Every person who is a manufacturer or importer of a licensed chatbot under this Chapter shall establish and maintain such records, and make such reports to the Director, as the Director may by regulation reasonably require to assure the safety and effectiveness of such devices.
G-01 AI Governance Program & Documentation · G-01.1 · Deployer · Chatbot · Healthcare
G.S. § 114B-4(a)
Plain Language
Licensees operating health-information chatbots must maintain professional liability insurance at a minimum coverage level set by the Department of Justice. This is a financial responsibility requirement that ensures chatbot operators have insurance to cover potential harms. The specific minimum coverage amount will be determined by Department rulemaking.
Statutory Text
A licensee shall maintain professional liability insurance in an amount not less than the amount per occurrence required by the Department.
CP-01 Deceptive & Manipulative AI Conduct · CP-01.3 · Deployer · Chatbot
G.S. § 170-5(c)
Plain Language
Covered platforms may not use dark patterns or deceptive design elements to manipulate or coerce users into consenting to chatbot interactions or to obscure the chatbot's artificial nature or the consent process itself. This is a standalone anti-dark-pattern prohibition that applies specifically to the chatbot identification and consent flow, complementing the affirmative disclosure requirements in § 170-5(a)-(b).
Statutory Text
A covered platform is prohibited from using deceptive design elements that manipulate or coerce users into providing consent or obscure the nature of the chatbot or the consent process.