S-624
NC · State · USA
● Pending
Proposed Effective Date
2026-01-01
North Carolina Senate Bill 624 — An Act Regulating Artificial Intelligence Chatbot Licensing, Safety, and Privacy in North Carolina
Summary

NC SB 624 creates two new regulatory frameworks for AI chatbots. Part I (Chapter 114B, the Chatbot Licensing Act) requires any person operating or distributing a chatbot that deals substantially with health information to obtain a license from the NC Department of Justice, submit detailed technical and safety documentation, maintain professional liability insurance, demonstrate effectiveness through peer-reviewed trials, undergo annual third-party audits, and submit quarterly performance reports. Part II (Chapter 170, the Chatbot Safety and Privacy Act) imposes a broad duty of loyalty on covered platforms providing chatbot services, including duties regarding emergency situations, preventing emotional dependence, AI identity disclosure, anti-manipulation, data minimization, personalization, and gatekeeping. Part II also requires de-identification of user data, transport encryption, self-destructing messages for sensitive-domain chatbots, and a detailed chatbot identity disclosure and consent process at each session. Part II is enforced by the Attorney General as parens patriae and through a private right of action with a $1,000 per-violation statutory minimum. Both parts become effective January 1, 2026.

Enforcement & Penalties
Enforcement Authority
Part I (Chatbot Licensing Act, Chapter 114B): The North Carolina Department of Justice enforces the chapter and rules adopted thereunder. The Attorney General designates a Director, officers, and employees for oversight and enforcement, including authority to conduct physical and digital inspections. No private right of action is created under Part I. Part II (Chatbot Safety and Privacy Act, Chapter 170): The Attorney General may bring a civil action as parens patriae on behalf of state residents to enjoin violations, obtain damages, restitution, or other compensation. A private right of action is available to any person who suffers injury in fact. Actions must be brought within two years of discovery. No person may bring more than one action against the same covered platform for the same alleged violation. Rights and remedies may not be waived by agreement, policy, form, or condition of service.
Penalties
Part I (Chapter 114B): Civil penalties of $50,000 per violation of G.S. 114B-5. Clear proceeds remitted to the Civil Penalty and Forfeiture Fund. Part II (Chapter 170): Greater of actual damages or $1,000 per violation. Plaintiff may also obtain injunctive relief, reasonable attorneys' fees and litigation costs, and any other relief the court deems appropriate. Statutory damages do not require proof of actual monetary harm — only injury in fact for standing.
Who Is Covered
"Licensee" means a person holding a license issued and in effect under this Chapter.
"Covered platform" means any person that provides chatbot services to users in this State, if the person (i) has annual gross revenues exceeding $100,000 in the last calendar year or any of the two preceding calendar years or (ii) has more than 5,000 monthly active users in the United States for half or more of the months during the last 12 months. The term does not include any person that provides chatbot services solely for educational or research purposes and does not monetize such services through advertising or commercial uses or any government entity providing chatbot services for official purposes.
What Is Covered
"Chatbot" means a generative artificial intelligence system with which users can interact by or through an interface that approximates or simulates conversation through a text, audio, or visual medium.
Compliance Obligations · 21 obligations
Other · Deployer · Chatbot · Healthcare
G.S. 114B-3(a)-(d)
Plain Language
No person may operate or distribute a chatbot that substantially deals with health information in North Carolina without first obtaining a license from the Department of Justice. The application requires extensive documentation covering technical architecture, data practices, security, privacy, quality control, risk assessment, regulatory compliance, insurance, and fees. The Department evaluates applications against industry standards for technical reliability, data protection, risk management, evidence-based efficacy (peer-reviewed), expert endorsement, and public safety. This is a true licensing prerequisite — operating without a license is a prohibited act under § 114B-6.
Statutory Text
(a) No person shall operate or distribute a chatbot that deals substantially with health information without first obtaining a health information chatbot license. (b) An application for a health information chatbot license shall include all of the following: (1) Detailed documentation of the chatbot's: a. Technical architecture and operational specifications. b. Data collection, processing, storage, and deletion practices. c. Security measures and protocols. d. Privacy protection mechanisms. (2) Quality control and testing procedures. (3) Risk assessment and mitigation strategies. (4) Evidence of compliance with applicable federal and state regulations. (5) Proof of insurance coverage. (6) Required application fees. (7) Any additional information required by the Department. (c) The Department shall review applications for health information chatbot licenses based upon all of the following: (1) Technical competence and reliability as compliant with industry standards. (2) Data protection and security measures as compliant with industry standards. (3) Compliance with applicable regulations. (4) Risk management procedures. (5) Professional qualification requirements, including: a. Evidence-based standards demonstrating substantial efficacy for the supported use case of health information; and b. Endorsement by qualified experts within the field of the supported use case. (6) Public safety considerations. (d) The Department shall adopt rules to carry out the purposes of this Chapter.
G-01 AI Governance Program & Documentation · G-01.3–G-01.4 · Deployer · Chatbot · Healthcare
G.S. 114B-4(a)-(b)(1)
Plain Language
Licensed health information chatbot operators must maintain professional liability insurance at a Department-specified amount, implement industry-standard encryption for data in transit and at rest, maintain detailed access logs, and conduct security audits at least every six months. These are ongoing operational and governance obligations that require contemporaneous documentation of security measures and audit results.
Statutory Text
(a) A licensee shall maintain professional liability insurance in an amount not less than the amount per occurrence required by the Department. (b) A licensee shall do all of the following: (1) Implement industry-standard encryption for data in transit and at rest, maintain detailed access logs, and conduct regular security audits no less than once every six (6) months.
R-01 Incident Reporting · R-01.1–R-01.2 · Deployer · Chatbot · Healthcare
G.S. 114B-4(b)(2)
Plain Language
Licensed health information chatbot operators must report any data breach to the Department of Justice within 24 hours and notify affected consumers within 48 hours. This supersedes any contrary provision of law, meaning it applies even if other North Carolina data breach notification statutes would otherwise provide longer timelines. The 24-hour regulator notification and 48-hour consumer notification timelines are among the most aggressive in any U.S. AI or data breach statute.
Statutory Text
(2) Report any data breaches within twenty-four (24) hours to the Department and within forty-eight (48) hours to affected consumers, notwithstanding any provision of law to the contrary.
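The statute does not specify when the notification clock starts; a conservative reading runs both windows from the breach itself. A minimal, hypothetical deadline calculator under that assumption (function and field names are illustrative, not statutory):

```python
from datetime import datetime, timedelta

def breach_deadlines(breach_at: datetime) -> dict[str, datetime]:
    """Deadlines under G.S. 114B-4(b)(2): Department within 24 hours,
    affected consumers within 48 hours. Assumes both windows run from
    the same breach timestamp (the statute does not specify a trigger)."""
    return {
        "department_notice_by": breach_at + timedelta(hours=24),
        "consumer_notice_by": breach_at + timedelta(hours=48),
    }

d = breach_deadlines(datetime(2026, 2, 1, 9, 0))
print(d["department_notice_by"])  # 2026-02-02 09:00:00
print(d["consumer_notice_by"])    # 2026-02-03 09:00:00
```

Because the 24-hour window supersedes contrary law, a platform subject to both Part I and North Carolina's general breach statute would be held to the shorter deadline.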
D-01 Automated Processing Rights & Data Controls · D-01.1–D-01.8 · Deployer · Chatbot · Healthcare
G.S. 114B-4(b)(3)-(5)
Plain Language
Licensed health information chatbot operators must obtain explicit user consent before collecting and using data, provide users with access to their personal data, and allow users to delete their data upon request. These are individual data rights that must be operationalized — consent cannot be implied, and access and deletion requests must be honored on demand.
Statutory Text
(3) Obtain explicit user consent for data collection and use. (4) Provide users with access to their personal data. (5) Provide users with the ability to delete their data upon request.
T-01 AI Identity Disclosure · T-01.1 · Deployer · Chatbot · Healthcare
G.S. 114B-4(c)
Plain Language
Licensed health information chatbot operators must clearly disclose six categories of information to users: that the chatbot is AI (not human), what the service's limitations are, how user data is collected and used, what rights and remedies users have, emergency resources where applicable, and human oversight and intervention protocols. This is an unconditional disclosure obligation — it applies regardless of whether users could be misled. The disclosure of emergency resources and human oversight protocols goes beyond standard AI identity disclosure.
Statutory Text
(c) A licensee must clearly disclose all of the following: (1) The artificial nature of the chatbot. (2) Limitations of the service. (3) Data collection and use practices. (4) User rights and remedies. (5) Emergency resources when applicable. (6) Human oversight and intervention protocols.
S-01 AI System Safety Program · S-01.1–S-01.7 · Deployer · Chatbot · Healthcare
G.S. 114B-4(d)
Plain Language
Licensed health information chatbot operators must demonstrate the chatbot's effectiveness through three separate requirements: (1) peer-reviewed, controlled trials with adequate sample sizes using real-world performance data, (2) comparative analysis against human expert performance, and (3) meeting minimum domain benchmarks set by the Department. These are substantive efficacy validation requirements — not just safety testing — and are unusual in requiring peer-reviewed trials and human-expert benchmarking for a chatbot product.
Statutory Text
(d) A licensee shall do all of the following: (1) Demonstrate effectiveness through peer-reviewed, controlled trials with appropriate validation studies done on appropriate sample sizes with real-world performance data. (2) Demonstrate effectiveness in a comparative analysis to human expert performance. (3) Meet minimum domain benchmarks as established by the Department.
G-01 AI Governance Program & Documentation · G-01.5 · Deployer · Chatbot · Healthcare
G.S. 114B-4(e)
Plain Language
Licensed health information chatbot operators must conduct regular inspections and undergo an annual third-party audit. All inspection and audit results must be provided to the Department of Justice. The statute does not specify what the inspections or audits must cover, leaving scope to be determined by Department rules, but the annual third-party audit is mandatory, not optional.
Statutory Text
(e) A licensee shall conduct regular inspections and perform an annual third-party audit. Results of all inspections and audits must be made available to the Department.
R-03 Operational Performance Reporting · R-03.1 · Deployer · Chatbot · Healthcare
G.S. 114B-4(f)
Plain Language
Licensed health information chatbot operators must implement continuous monitoring systems for safety and risk indicators and submit quarterly performance reports to the Department that include incident reports. This combines a continuous operational monitoring obligation with a periodic reporting obligation on a quarterly cadence.
Statutory Text
(f) A licensee shall implement continuous monitoring systems for safety and risk indicators and submit quarterly performance reports including incident reports.
R-02 Regulatory Disclosure & Submissions · R-02.2 · Deployer · Chatbot · Healthcare
G.S. 114B-5(a)-(f)
Plain Language
The Department of Justice has broad inspection authority — both physical and digital — over licensed health information chatbots. Digital inspections can cover source code, algorithms, ML models, data practices, cybersecurity, user privacy protections, chatbot behavior testing, development processes, and integration with other platforms. The Director may require access to all records relating to development, testing, validation, production, distribution, and performance. Trade secrets and confidential commercial information receive protection under 21 CFR 20.61. Following inspections, the Director provides a detailed findings report with required corrective actions. Manufacturers and importers must also establish records and submit reports as the Director requires by regulation. This creates a comprehensive regulatory inspection and recordkeeping framework that licensees must be prepared to comply with at any time.
Statutory Text
(a) The Department shall enforce the provisions of, and the rules adopted under, this Chapter. (b) The Attorney General shall designate a Director, officers, and employees assigned to the oversight and enforcement of this Chapter. Upon presenting appropriate credentials and a written notice to the owner, operator, or agent in charge, those officers and employees are authorized to enter, at reasonable times, any factory, warehouse, or establishment in which chatbots licensed under this Chapter are manufactured, processed, or held, and to inspect, in a reasonable manner and within reasonable limits and in a reasonable time. In addition to physical inspections, the Department may conduct digital inspections of licensed chatbots under this Chapter, to include the following: (1) Examination of source code, algorithms, and machine learning models. (2) Review of data processing and storage practices. (3) Evaluation of cybersecurity measures and protocols. (4) Assessment of user data privacy protections. (5) Testing of chatbot responses and behaviors in various scenarios. (6) Audit of data collection, use, and retention practices. (7) Inspection of software development and update processes. (8) Review of remote access and monitoring capabilities. (9) Evaluation of integration with other digital health technologies or platforms. (c) As part of any inspection, whether physical or digital, the Director may require access to all records relating to the development, testing, validation, production, distribution, and performance of a chatbot licensed under this Chapter. (d) Any information obtained during an inspection which falls within the definition of a trade secret or confidential commercial information as defined in 21 CFR 20.61 shall be treated as confidential and shall not be disclosed under Chapter 132 of the General Statutes, except as may be necessary in proceedings under this Chapter or other applicable law. 
(e) Following any inspection, the Director shall provide a detailed report of findings to the manufacturer or importer, including any identified deficiencies and required corrective actions. (f) Every person who is a manufacturer or importer of a licensed chatbot under this Chapter shall establish and maintain such records, and make such reports to the Director, as the Director may by regulation reasonably require to assure the safety and effectiveness of such devices.
Other · Chatbot · Healthcare
G.S. 114B-6(a)-(c)
Plain Language
This provision lists prohibited acts under the Chatbot Licensing Act — operating without a license, failing to comply with any chapter requirement, refusing record access, and failing to report adverse events — and establishes a $50,000 civil penalty for violations of the enforcement/inspection provisions (§ 114B-5). The Department may exempt certain acts from prohibition if consistent with public protection. This is an enforcement provision that activates consequences for other obligations but does not create an independent compliance duty.
Statutory Text
(a) It is unlawful for any person to do any of the following: (1) Introduce or deliver for introduction into state commerce any chatbot that deals substantially with health information without complying with the licensing requirement of this Chapter. (2) Fail to comply with any requirement of this Chapter or any rule adopted hereunder. (3) Refuse to permit access to or copying of any record as required by this Chapter. (4) Fail to report adverse events as required under this Chapter. (b) The Department may, at its discretion, exempt certain prohibited acts from some or all of these prohibitions if it determines that the exemption is consistent with the protection of the public. (c) Any person who violates any provision of G.S. 114B-5 shall be subject to civil penalties in the amount of $50,000. The clear proceeds of fines and forfeitures provided for in this Chapter shall be remitted to the Civil Penalty and Forfeiture Fund in accordance with G.S. 115C-457.2.
CP-01 Deceptive & Manipulative AI Conduct · CP-01.1 · Deployer · Chatbot
G.S. 170-3(a), (b)(4)
Plain Language
Covered platforms are subject to a general duty of loyalty prohibiting them from processing data or designing chatbot systems in ways that significantly conflict with users' best interests. The specific duty of loyalty in influence prohibits platforms from using data processing or chatbot design to influence users toward results that are against their best interests. 'Best interests' is defined broadly as interests affected by the user's entrustment of data, labor, or attention. This is a fiduciary-like obligation that constrains platform design and data use holistically — any data processing or system design choice that works against user interests is potentially a violation.
Statutory Text
(a) A covered platform shall not process data or design chatbot systems and tools in ways that significantly conflict with trusting parties' best interests, as implicated by their interactions with chatbots. (4) Duty of loyalty in influence. — A covered platform shall not process data or design chatbot systems and tools in ways that influence trusting parties to achieve particular results that are against the best interests of trusting parties.
S-04 AI Crisis Response Protocols · S-04.1 · Deployer · Chatbot
G.S. 170-3(b)(1)
Plain Language
Covered platforms must implement and maintain reasonably effective systems to detect, promptly respond to, report, and mitigate emergency situations — defined as when a user indicates intent to harm themselves or others. The platform must prioritize user safety and well-being over the platform's other interests (e.g., engagement, retention, revenue). This is a continuous operational requirement covering detection, response, reporting, and mitigation. Unlike some other state chatbot statutes that specify crisis referral services (e.g., 988 Lifeline), this statute leaves the specific response mechanism to the platform's discretion as long as it is 'reasonably effective.'
Statutory Text
(1) Duty of loyalty in emergency situations. — A covered platform shall implement and maintain reasonably effective systems to detect, promptly respond to, report, and mitigate emergency situations in a manner that prioritizes the safety and well-being of users over the platform's other interests.
CP-01 Deceptive & Manipulative AI Conduct · CP-01.4 · Deployer · Chatbot
G.S. 170-3(b)(2)
Plain Language
Covered platforms that operate chatbots designed to generate social connections, engage in extended human-mimicking conversation, or provide emotional support or companionship must implement and maintain reasonably effective systems to detect and prevent users from becoming emotionally dependent on the chatbot. The platform must prioritize user psychological well-being over engagement or retention metrics. This duty is limited to platforms whose chatbots meet the companion/social chatbot criteria based on intended purpose, design features, conversational capabilities, and interaction patterns — it does not apply to purely informational or transactional chatbots.
Statutory Text
(2) Duty of loyalty regarding emotional dependence. — A covered platform shall implement and maintain reasonably effective systems to detect and prevent emotional dependence of a user on a chatbot, prioritizing the user's psychological well-being over the platform's interest in user engagement or retention. a. This duty only applies to any covered platform that utilizes a chatbot designed to (i) generate social connections with users, (ii) engage in extended conversation mimicking human interaction, or (iii) provide emotional support or companionship. b. The determination required by sub-subdivision a. of this subdivision shall be based on the chatbot's intended purpose, design features, conversational capabilities, and interaction patterns with users.
T-01 AI Identity Disclosure · T-01.1 · Deployer · Chatbot
G.S. 170-3(b)(3)
Plain Language
When the chatbot's artificial nature is not clearly apparent, the covered platform must clearly and consistently identify it as non-human. The platform is also affirmatively prohibited from processing data or designing systems that deceive or mislead users about the chatbot's non-human nature. This is a conditional trigger — the identity disclosure obligation activates when the AI nature is not already obvious — combined with an unconditional prohibition on deceptive design. The platform must prioritize transparency over any benefits of perceived human-like interaction.
Statutory Text
(3) Duty of loyalty in chatbot identity disclosure. — A covered platform has a duty to clearly and consistently identify the chatbot as an artificial entity when that fact is not clearly apparent. The platform shall not process data or design systems in ways that deceive or mislead users about the non-human nature of the chatbot, prioritizing transparency over any potential benefits of perceived human-like interaction.
D-01 Automated Processing Rights & Data Controls · D-01.4 · Deployer · Chatbot
G.S. 170-3(b)(5)
Plain Language
Covered platforms may only collect and store user data that does not conflict with users' best interests. All data collected must satisfy a three-part test: it must be (1) adequate — sufficient to fulfill a legitimate platform purpose, (2) relevant — linked to that legitimate purpose, and (3) necessary — the minimum amount needed for that purpose. This is a strict data minimization obligation framed through a loyalty lens — the platform must not only minimize data but ensure collection doesn't conflict with user interests.
Statutory Text
(5) Duty of loyalty in collection. — A covered platform shall collect and store only that information that does not conflict with a trusting party's best interests. Such information must be (i) adequate, in the sense that it is sufficient to fulfill a legitimate purpose of the platform; (ii) relevant, in the sense that the information has a relevant link to that legitimate purpose, and (iii) necessary, in the sense that it is the minimum amount of information which is needed for that legitimate purpose.
CP-01 Deceptive & Manipulative AI Conduct · CP-01.1 · Deployer · Chatbot
G.S. 170-3(b)(6)
Plain Language
When a covered platform personalizes chatbot content based on a user's personal information or characteristics, it must do so in a way that is loyal to the user's best interests. This means personalization algorithms and content selection must prioritize user welfare over platform commercial interests. Personalization that exploits user data to drive engagement at the expense of user well-being would violate this duty.
Statutory Text
(6) Duty of loyalty in personalization. — A covered platform shall be loyal to the best interests of trusting parties when personalizing content based upon personal information or characteristics.
D-01 Automated Processing Rights & Data Controls · D-01.5 · Deployer · Chatbot
G.S. 170-3(b)(7)
Plain Language
Covered platforms must act as loyal gatekeepers of user personal information, meaning they must avoid conflicts with user interests when sharing data with governments or other third parties. This effectively constrains third-party data sharing to situations that do not conflict with user interests. Platforms cannot share user data with governments or third parties in ways that undermine the user's interests, even if the sharing is otherwise lawful.
Statutory Text
(7) Duty of loyalty in gatekeeping. — A covered platform shall be a loyal gatekeeper of personal information from a trusting party, including avoiding conflicts with the best interests of trusting parties when allowing government or other third-party access to trusting parties and their data.
Other · Deployer · Chatbot
G.S. 170-4(a)-(c)
Plain Language
Covered platforms must establish their duties to end-users through a terms of service agreement that is clear, conspicuous, and easily understandable. The agreement must explicitly outline the platform's obligations, describe user rights and protections, and require affirmative user consent before taking effect. Material changes require clear notice and renewed consent. The agreement must be accessible at all times through the platform's app or website. This is a contractual framework requirement — it mandates how the platform's duties are communicated and agreed to, not what those duties are (those are specified elsewhere in the Chapter).
Statutory Text
(a) The duties between a covered platform and an end-user shall be established through a terms of service agreement which is presented to the end-user in clear, conspicuous, and easily understandable language. The terms of service agreement must (i) explicitly outline the online service provider's obligations, (ii) describe the rights and protections afforded to the end-user under this relationship, and (iii) require affirmative consent from the end-user before the agreement takes effect. (b) The covered platform must provide clear notice to end-users of any material changes to the terms of service agreement and obtain renewed consent for such changes. (c) The terms of service agreement must be easily accessible to users at all times through the covered platform's application or the covered platform's website.
T-01 AI Identity Disclosure · T-01.1–T-01.3 · Deployer · Chatbot
G.S. 170-5(a)-(e)
Plain Language
Covered platforms must implement a detailed chatbot identification disclosure process with four specific elements: the chatbot must be identified as (1) not human, human-like, or sentient, (2) a computer program mimicking conversation based on statistical analysis, (3) incapable of emotions like love or lust, and (4) without personal preferences or feelings. This disclosure must be under 300 words, readily accessible, and clearly presented. Users must provide affirmative, informed consent (e.g., clicking 'I understand') confirming they understand the chatbot's identity and limitations. Platforms may not use deceptive design elements to manipulate the consent process. Critically, this identification and consent process must be repeated at the start of each new session and must be separate from privacy policies or other consent processes. This is among the most prescriptive AI identity disclosure requirements in any U.S. chatbot statute.
Statutory Text
(a) The chatbot identification process shall include all of the following elements: (1) A covered platform shall clearly inform users that the chatbot is: a. Not human, human-like, or sentient. b. A computer program designed to mimic human conversation based on statistical analysis of human-produced text. c. Incapable of experiencing emotions such as love or lust. d. Without personal preferences or feelings. (2) The information required by subdivision (1) of this subsection shall be readily accessible, clearly presented, and concisely conveyed in less than three hundred (300) words. (b) A user shall provide explicit and informed consent to interact with the chatbot. The consent process shall: (1) Require an affirmative action from the user (such as clicking an "I understand" button); and (2) Confirm the user's understanding of the chatbot's identity and limitations. (c) A covered platform is prohibited from using deceptive design elements that manipulate or coerce users into providing consent or obscure the nature of the chatbot or the consent process. (d) The chatbot identity communication and opt-in consent process shall be repeated at the start of each new session with a user. (e) The chatbot identification and consent process required by this section shall be separate and distinct from any privacy policy agreement or other consent processes required by law or platform policy.
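The 300-word ceiling and the four required identity statements are mechanically checkable before deployment. The sketch below is a hypothetical pre-release validator, assuming the disclosure is a plain-text string; the keyword checks are illustrative proxies for the four statutory statements, not statutory tests themselves.

```python
# Hypothetical validator for the G.S. 170-5(a) disclosure text.
# Assumptions: disclosure is plain text; the phrases below are
# illustrative stand-ins for the four required statements.

REQUIRED_TOPICS = {
    "not_human": ["not human"],                              # 170-5(a)(1)a.
    "computer_program": ["computer program"],                # 170-5(a)(1)b.
    "no_emotions": ["incapable of experiencing emotions"],   # 170-5(a)(1)c.
    "no_preferences": ["personal preferences"],              # 170-5(a)(1)d.
}

def validate_disclosure(text: str) -> list[str]:
    """Return a list of problems; an empty list means the checks pass."""
    problems = []
    if len(text.split()) >= 300:  # 170-5(a)(2): fewer than 300 words
        problems.append("disclosure is 300 words or more")
    lowered = text.lower()
    for topic, phrases in REQUIRED_TOPICS.items():
        if not any(p in lowered for p in phrases):
            problems.append(f"missing required statement: {topic}")
    return problems

sample = (
    "This chatbot is not human, human-like, or sentient. It is a computer "
    "program designed to mimic human conversation based on statistical "
    "analysis of human-produced text. It is incapable of experiencing "
    "emotions such as love or lust, and it has no personal preferences "
    "or feelings."
)
print(validate_disclosure(sample))  # [] — all checks pass
```

A check like this covers only the content and length elements; the affirmative-consent action, per-session repetition, and separation from other consent flows under subsections (b)–(e) are interaction-design requirements that must be verified in the product itself.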
D-01 Automated Processing Rights & Data Controls · D-01.4–D-01.5 · Deployer · Chatbot
G.S. 170-6(a)-(d)
Plain Language
Covered platforms must implement four data privacy requirements: (1) all user-related data from chatbot conversations or third-party cookies must be de-identified before storage and analysis; (2) sensitive personal information from chatbot use must not be incorporated into aggregate training datasets for any chatbot or generative AI system; (3) non-sensitive chatbot conversations must be stored for at least 60 days; and (4) all messages between users and chatbots must use transport encryption. Additionally, chatbots deployed in healthcare, financial services, legal, government, mental health, education, or any other domain primarily processing sensitive personal information must use self-destructing messages that automatically and irreversibly delete data 30 days after acquisition. The training data prohibition is notable — it categorically blocks use of sensitive personal information derived from user interactions in model training.
Statutory Text
(a) A covered platform must do each of the following: (1) Ensure that all user-related data disclosed or collected through conversations between users and chatbots or through third-party cookies, undergoes a process of de-identification prior to storage and analysis; (2) Take reasonable care to prohibit the incorporation or inclusion of any sensitive personal information derived from a user during the use of a chatbot into an aggregate dataset used to train any chatbot or generative artificial intelligence system. (3) Store all chatbot conversations which do not include sensitive personal information for at least sixty (60) days. (b) Each covered platform that meets the standard set forth in subsection (a) of this section shall utilize self-destructing messages with a predetermined destruction period of thirty (30) days after the data has been acquired. (c) The requirements of subsection (b) of this section shall apply to all chatbots which are employed in: healthcare, financial services, the legal field, government services, mental health support, and education. In general, this applies to any domain, beyond those specifically listed, where chatbots are employed primarily for the processing or storage of sensitive personal information. (d) All covered platforms shall utilize transport encryption for all messages between a user and a chatbot.
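One way to read subsections (a)(3), (b), and (c) together is as a per-domain retention schedule: sensitive-domain messages self-destruct 30 days after acquisition, while non-sensitive conversations must be retained for at least 60 days. A minimal sketch under that reading, assuming each message record carries a domain label and acquisition timestamp (the domain list mirrors 170-6(c); the function and field names are hypothetical):

```python
from datetime import datetime, timedelta

# Domains enumerated in G.S. 170-6(c); the statute also reaches any
# other domain primarily processing sensitive personal information.
SENSITIVE_DOMAINS = {
    "healthcare", "financial_services", "legal",
    "government_services", "mental_health", "education",
}

SELF_DESTRUCT_AFTER = timedelta(days=30)  # 170-6(b)
MIN_RETENTION = timedelta(days=60)        # 170-6(a)(3), non-sensitive only

def must_delete(domain: str, acquired_at: datetime, now: datetime) -> bool:
    """Sensitive-domain data must be irreversibly deleted 30 days after acquisition."""
    return domain in SENSITIVE_DOMAINS and now - acquired_at >= SELF_DESTRUCT_AFTER

def may_delete(domain: str, acquired_at: datetime, now: datetime) -> bool:
    """Non-sensitive conversations may be deleted only once the 60-day floor passes."""
    if domain in SENSITIVE_DOMAINS:
        return must_delete(domain, acquired_at, now)
    return now - acquired_at >= MIN_RETENTION

now = datetime(2026, 3, 1)
print(must_delete("mental_health", datetime(2026, 1, 15), now))  # True: 45 days old
print(may_delete("retail", datetime(2026, 1, 15), now))          # False: 60-day floor
```

Note the tension this reading resolves: the 60-day minimum in (a)(3) applies only to non-sensitive conversations, so it does not conflict with the 30-day self-destruct rule for sensitive-domain chatbots.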
Other · Chatbot
G.S. 170-7(a)-(d)
Plain Language
This provision establishes two enforcement mechanisms for Chapter 170. The Attorney General may bring parens patriae actions on behalf of state residents to enjoin violations, obtain damages, restitution, or other compensation. Separately, any person who suffers injury in fact may bring a private civil action to enjoin violations, recover the greater of actual damages or $1,000 per violation, obtain attorneys' fees and costs, and obtain any other appropriate relief. Private actions are subject to a two-year discovery-based statute of limitations and a one-action-per-violation-per-platform limit. The rights and remedies cannot be waived by agreement, policy, or condition of service. This creates no new compliance obligation — it establishes how existing Chapter 170 obligations are enforced.
Statutory Text
(a) In any case in which the Attorney General has reason to believe that a covered platform has violated or is violating any provision of this Chapter, the State, as parens patriae, may bring a civil action on behalf of the residents of the State to (i) enjoin any practice violating this Chapter and enforce compliance with the pertinent section or sections on behalf of residents of the State; (ii) obtain damages, restitution, or other compensation, each of which shall be distributed in accordance with State law; or (iii) obtain such other relief as the court may consider to be appropriate. (b) Any person who suffers injury in fact as a result of a violation of this Chapter may bring a civil action against the covered platform to enjoin further the violation; recover damages in an amount equal to the greater of actual damages or one thousand dollars ($1,000) per violation; obtain reasonable attorneys' fees and litigation costs; and obtain any other relief that the court deems appropriate. (c) An action under subsection (b) of this section may not be brought more than two (2) years after the date on which the person first discovered or reasonably should have discovered the violation. No person shall be permitted to bring more than one action under this subsection against the same covered platform for the same alleged violation. (d) The rights and remedies provided for in this subsection may not be waived by any agreement, policy, form, or condition of service.