S-0963
SC · State · USA
● Pending
Proposed Effective Date
2025-01-01
South Carolina S. 963 — Consumer Protections in Interactions with Artificial Intelligence Systems Act
Summary

Establishes obligations for developers and deployers of high-risk AI systems used to make or substantially factor into consequential decisions affecting South Carolina consumers in areas such as employment, housing, credit, healthcare, insurance, education, and legal services. Requires developers to provide deployers with documentation on intended uses, training data, bias testing, and discrimination risks, and to publish a public summary of their high-risk AI systems. Deployers must implement a risk management program, complete annual impact assessments, notify consumers before and after adverse AI-driven decisions, and report discovered algorithmic discrimination to the Attorney General within 90 days. Also requires disclosure to consumers when they interact with any AI system. Enforcement is exclusively by the Attorney General; violations are treated as unfair trade practices. No private right of action is created. Small deployers (those with fewer than 50 employees that do not train the AI with their own data) are exempt from certain obligations if they pass through developer impact assessments to consumers.

Enforcement & Penalties
Enforcement Authority
The Attorney General has exclusive enforcement authority under Section 37-31-60. The chapter expressly provides that it does not create, and is not subject to, a private right of action. Violations constitute unfair trade practices under Chapter 6 of Title 37. An affirmative defense is available to a developer, deployer, or other person that discovers and cures a violation through encouraged feedback, adversarial testing or red teaming, or an internal review process, and that is otherwise in compliance with both the NIST AI RMF and ISO/IEC 42001, with another substantially equivalent or more stringent recognized framework, or with a framework the Attorney General designates.
Penalties
Violations constitute unfair trade practices under Chapter 6 of Title 37 (South Carolina Unfair Trade Practices Act), which provides for civil penalties, injunctive relief, and other remedies available under that chapter. The bill itself does not specify separate penalty amounts or damages provisions beyond incorporating the existing UTPA remedies.
Who Is Covered
"Deployer" means a person doing business in this State that deploys a high-risk artificial intelligence system.
"Developer" means a person doing business in this State that develops or intentionally and substantially modifies an artificial intelligence system.
What Is Covered
"Artificial intelligence system" means any machine-based system that, for any explicit or implicit objective, infers from the inputs the system receives how to generate outputs, including content, decisions, predictions, or recommendations, that can influence physical or virtual environments.
"High-risk artificial intelligence system" means any artificial intelligence system that, when deployed, makes or is a substantial factor in making, a consequential decision. "High-risk artificial intelligence system" does not include: (i) an artificial intelligence system if the artificial intelligence system is intended to: (A) perform a narrow procedural task; (B) detect decision-making patterns or deviations from prior decision-marking patterns and is not intended to replace or influence a previously completed human assessment without sufficient human review; or (C) the following technologies, unless the technologies, when deployed, make or are a substantial factor in making, a consequential decision: (1) antimalware; (2) antivirus; (3) artificial intelligence-enabled video games; (4) calculators; (5) cybersecurity; (6) databases; (7) data storage; (8) firewall; (9) internet domain registration; (10) internet website loading; (11) networking; (12) spam and robocall filtering; (13) spell-checking; (14) spreadsheets; (15) web caching; (16) web hosting or any similar technology; or (17) technology that communicates with consumers in natural language for the purpose of providing users with information, making referrals or recommendations, and answering questions and is subject to an accepted use policy that prohibits generating content that is discriminatory or harmful.
Compliance Obligations · 18 obligations
H-02 Non-Discrimination & Bias Assessment · H-02.1, H-02.3 · Developer · Automated Decisionmaking
Section 37-31-20(A)
Plain Language
Developers of high-risk AI systems must exercise reasonable care to protect South Carolina consumers from known or reasonably foreseeable algorithmic discrimination risks arising from intended and contracted uses. A rebuttable presumption of reasonable care applies if the developer complied with all requirements of Section 37-31-20 and any AG-adopted rules. Self-testing for bias mitigation and diversity expansion uses are carved out from the definition of algorithmic discrimination.
Statutory Text
(A) A developer of a high-risk artificial intelligence system shall use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination arising from the intended and contracted uses of the high-risk artificial intelligence system. In any enforcement action brought by the Attorney General pursuant to Section 37-31-60, there is a rebuttable presumption that a developer used reasonable care as required under this section if the developer complied with this section and any additional requirements or obligations as set forth in rules adopted by the Attorney General pursuant to Section 37-31-70.
G-02 Public Transparency & Documentation · G-02.1 · Developer · Automated Decisionmaking
Section 37-31-20(B)(1)-(4), (F)
Plain Language
Developers must provide deployers with comprehensive documentation covering: foreseeable and harmful uses, training data summaries, system limitations, algorithmic discrimination risks, performance evaluation methodology, data governance measures, intended outputs, mitigation measures, and usage/monitoring guidance. This is essentially a model card obligation directed at downstream deployers. The obligation does not require disclosure of trade secrets, legally protected information, or information creating security risks for the developer.
Statutory Text
(B) Except as provided in subsection (F), a developer of a high-risk artificial intelligence system shall make available to the deployer or other developer of the high-risk artificial intelligence system: (1) a general statement describing the reasonably foreseeable uses and known harmful or inappropriate uses of the high-risk artificial intelligence system; (2) documentation disclosing: (a) high-level summaries of the type of data used to train the high-risk artificial intelligence system; (b) known or reasonably foreseeable limitations of the high-risk artificial intelligence system, including known or reasonably foreseeable risks of algorithmic discrimination arising from the intended uses of the high-risk artificial intelligence system; (c) the purpose of the high-risk artificial intelligence system; (d) the intended benefits and uses of the high-risk artificial intelligence system; and (e) all other information necessary to allow the deployer to comply with the requirements of Section 37-31-30; (3) documentation describing: (a) how the high-risk artificial intelligence system was evaluated for performance and mitigation of algorithmic discrimination before the high-risk artificial intelligence system was offered, sold, leased, licensed, given, or otherwise made available to the deployer; (b) the data governance measures used to cover the training datasets and the measures used to examine the suitability of data sources, possible biases, and appropriate mitigation; (c) the intended outputs of the high-risk artificial intelligence system; (d) the measures the developer has taken to mitigate known or reasonably foreseeable risks of algorithmic discrimination that may arise from the reasonably foreseeable deployment of the high-risk artificial intelligence system; and (e) how the high-risk artificial intelligence system should be used, not be used, and be monitored by an individual when the high-risk artificial intelligence system is used to make, or is a substantial factor in making, a consequential decision; and (4) any additional documentation that is reasonably necessary to assist the deployer in understanding the outputs and monitor the performance of the high-risk artificial intelligence system for risks of algorithmic discrimination. (F) Nothing in subsections (B) through (E) requires a developer to disclose a trade secret, information protected from disclosure by state or federal law, or information that would create a security risk to the developer.
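To see the shape of the subsection (B) package at a glance, the disclosures can be pictured as one structured record handed from developer to deployer. The following is a hypothetical sketch; the field names paraphrase (B)(1) through (B)(4) and are not statutory terms.

```python
from dataclasses import dataclass, field

@dataclass
class DeveloperDisclosurePackage:
    """Paraphrase of Section 37-31-20(B)(1)-(4); field names are illustrative."""
    # (B)(1) general statement of foreseeable and known harmful/inappropriate uses
    foreseeable_and_harmful_uses: str
    # (B)(2) documentation disclosing:
    training_data_summary: str            # (2)(a) high-level summary of training data types
    known_limitations_and_risks: str      # (2)(b) limitations, incl. discrimination risks
    system_purpose: str                   # (2)(c)
    intended_benefits_and_uses: str       # (2)(d)
    deployer_compliance_information: str  # (2)(e) info needed for Section 37-31-30 duties
    # (B)(3) documentation describing:
    performance_evaluation: str           # (3)(a) pre-release evaluation and mitigation
    data_governance_measures: str         # (3)(b) dataset suitability, bias, mitigation
    intended_outputs: str                 # (3)(c)
    discrimination_mitigations: str       # (3)(d)
    use_and_monitoring_guidance: str      # (3)(e)
    # (B)(4) anything else reasonably necessary for deployer monitoring
    additional_documentation: list = field(default_factory=list)
```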
G-02 Public Transparency & Documentation · G-02.1 · Developer · Automated Decisionmaking
Section 37-31-20(C)(1)-(2)
Plain Language
Developers must provide deployers with the documentation and artifacts — such as model cards, dataset cards, or impact assessments — needed for deployers to complete their own impact assessments. This obligation applies to the extent feasible and does not require a developer that also serves as its own deployer to generate this documentation unless the system is provided to an unaffiliated deployer.
Statutory Text
(C)(1) Except as provided in subsection (F), a developer that offers, sells, leases, licenses, gives, or otherwise makes available to a deployer or other developer a high-risk artificial intelligence system shall make available to the deployer or other developer, to the extent feasible, the documentation and information, through artifacts such as model cards, dataset cards, or other impact assessments, necessary for a deployer, or for a third party contracted by a deployer, to complete an impact assessment pursuant to Section 37-31-30(C). (2) A developer that also serves as a deployer for a high-risk artificial intelligence system is not required to generate the documentation required by this section unless the high-risk artificial intelligence system is provided to an unaffiliated entity acting as a deployer.
G-02 Public Transparency & Documentation · G-02.4 · Developer · Automated Decisionmaking
Section 37-31-20(D)(1)-(2)
Plain Language
Developers must publish on their website or in a public use case inventory a clear summary of the types of high-risk AI systems they offer and how they manage algorithmic discrimination risks. This statement must be kept current and updated within 90 days of any intentional and substantial modification to a covered system. Routine post-deployment learning that was anticipated in the initial impact assessment and documented in technical documentation does not trigger the update obligation.
Statutory Text
(D)(1) A developer shall make available, in a manner that is clear and readily available on the developer's website or in a public-use case inventory, a statement summarizing: (a) the types of high-risk artificial intelligence systems that the developer has developed or intentionally and substantially modified and currently makes available to a deployer or other developer; and (b) how the developer manages known or reasonably foreseeable risks of algorithmic discrimination that may arise from the development or intentional and substantial modification of the types of high-risk artificial intelligence systems described in accordance with item (1)(a). (2) A developer shall update the statement described in item (1): (a) as necessary to ensure that the statement remains accurate; and (b) no later than ninety days after the developer intentionally and substantially modifies any high-risk artificial intelligence system described in item (1)(a).
R-01 Incident Reporting · R-01.3 · Developer · Automated Decisionmaking
Section 37-31-20(E)(1)-(2)
Plain Language
When a developer discovers — through its own testing or through a credible deployer report — that its high-risk AI system has caused or is reasonably likely to have caused algorithmic discrimination, the developer must notify both the Attorney General and all known deployers within 90 days. This dual-notification obligation is triggered by either the developer's own discovery or receipt of a credible external report.
Statutory Text
(E) A developer of a high-risk artificial intelligence system shall disclose to the Attorney General, in a form and manner prescribed by the Attorney General, and to all known deployers or other developers of the high-risk artificial intelligence system, any known or reasonably foreseeable risks of algorithmic discrimination arising from the intended uses of the high-risk artificial intelligence system without unreasonable delay but no later than ninety days after the date on which: (1) the developer discovers through the developer's ongoing testing and analysis that the developer's high-risk artificial intelligence system has been deployed and has caused or is reasonably likely to have caused algorithmic discrimination; or (2) the developer receives from a deployer a credible report that the high-risk artificial intelligence system has been deployed and has caused algorithmic discrimination.
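The formulation "without unreasonable delay but no later than ninety days" recurs throughout the bill, so the outer bound of each clock is simple date arithmetic. A minimal sketch, assuming calendar-day counting from the trigger event (the bill does not specify business versus calendar days):

```python
from datetime import date, timedelta

NOTIFICATION_WINDOW_DAYS = 90  # outer bound; the duty runs "without unreasonable delay"

def notification_deadline(trigger: date) -> date:
    """Latest permissible notice date under a 90-day clock.

    Trigger events in the bill include a developer's discovery that a deployed
    system caused or likely caused algorithmic discrimination, receipt of a
    credible deployer report (Section 37-31-20(E)), a deployer's own discovery
    (Section 37-31-30(G)), and an Attorney General document request
    (Sections 37-31-20(G) and 37-31-30(I)).
    """
    return trigger + timedelta(days=NOTIFICATION_WINDOW_DAYS)

# Example: a discovery on 2025-03-15 puts the outer notice deadline at 2025-06-13.
assert notification_deadline(date(2025, 3, 15)) == date(2025, 6, 13)
```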
R-02 Regulatory Disclosure & Submissions · R-02.2 · Developer · Automated Decisionmaking
Section 37-31-20(G)
Plain Language
The Attorney General may request that a developer produce the deployer-facing documentation described in Section 37-31-20(B) within 90 days. Developers may designate materials as proprietary or trade secret, and attorney-client privilege and work-product protections are preserved. The disclosed documentation is exempt from FOIA. This requires developers to maintain their documentation in a form producible to the AG on demand.
Statutory Text
(G) The Attorney General may require that a developer disclose to the Attorney General, no later than ninety days after the request and in a form and manner prescribed by the Attorney General, the statement or documentation described in subsection (B). The Attorney General may evaluate such statement or documentation to ensure compliance with this chapter, and the statement or documentation is not subject to disclosure under the South Carolina Freedom of Information Act. In a disclosure made pursuant to this subsection, a developer may designate the statement or documentation as including proprietary information or a trade secret. To the extent that any information contained in the statement or documentation includes information subject to attorney-client privilege or work-product protection, the disclosure does not constitute a waiver of the privilege or protection.
H-02 Non-Discrimination & Bias Assessment · H-02.3 · Deployer · Automated Decisionmaking
Section 37-31-30(A)
Plain Language
Deployers must exercise reasonable care to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination when deploying high-risk AI systems. A rebuttable presumption of reasonable care applies if the deployer complied with all requirements of Section 37-31-30 and any AG-adopted rules. This is the deployer-side analog of the developer duty in Section 37-31-20(A).
Statutory Text
(A) A deployer of a high-risk artificial intelligence system shall use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination. In any enforcement action brought by the Attorney General pursuant to Section 37-31-60, there is a rebuttable presumption that a deployer of a high-risk artificial intelligence system used reasonable care as required under this section if the deployer complied with this section and any additional requirements or obligations as set forth in rules adopted by the Attorney General pursuant to Section 37-31-70.
G-01 AI Governance Program & Documentation · G-01.1, G-01.2 · Deployer · Automated Decisionmaking
Section 37-31-30(B)(1)-(2)
Plain Language
Deployers must establish and maintain a risk management policy and program covering their deployment of high-risk AI systems. The program must identify, document, and mitigate algorithmic discrimination risks, specify responsible personnel, and be iteratively reviewed and updated throughout each system's lifecycle. Reasonableness is calibrated to the NIST AI RMF, ISO/IEC 42001, or another recognized or AG-designated framework, as well as the deployer's size, system scope, and data sensitivity. A single program may cover multiple high-risk AI systems. Small deployers (fewer than 50 employees, not training with own data) using the system for intended purposes and passing through the developer's impact assessment are exempt per subsection (F).
Statutory Text
(B)(1) Except as provided in subsection (F), a deployer of a high-risk artificial intelligence system shall implement a risk management policy and program to govern the deployer's deployment of the high-risk artificial intelligence system. The risk management policy and program must specify and incorporate the principles, processes, and personnel that the deployer uses to identify, document, and mitigate known or reasonably foreseeable risks of algorithmic discrimination. The risk management policy and program must be an iterative process planned, implemented, and regularly and systematically reviewed and updated over the life cycle of a high-risk artificial intelligence system, requiring regular, systematic review and updates. A risk management policy and program implemented and maintained pursuant to this subsection must be reasonable considering: (a)(i) The guidance and standards set forth in the latest version of the "Artificial Intelligence Risk Management Framework" published by the National Institute of Standards and Technology in the United States Department of Commerce, standard ISO/IEC 42001 of the International Organization for Standardization, or another nationally or internationally recognized risk management framework for artificial intelligence systems, if the standards are substantially equivalent to or more stringent than the requirements of this chapter; or (ii) any risk management framework for artificial intelligence systems that the Attorney General, in his discretion, may designate; (b) the size and complexity of the deployer; (c) the nature and scope of the high-risk artificial intelligence systems deployed by the deployer, including the intended uses of the high-risk artificial intelligence systems; and (d) the sensitivity and volume of data processed in connection with the high-risk artificial intelligence systems deployed by the deployer. (2) A risk management policy and program implemented pursuant to item (1) may cover multiple high-risk artificial intelligence systems deployed by the deployer.
H-02 Non-Discrimination & Bias Assessment · H-02.3, H-02.8, H-02.10 · Deployer · Automated Decisionmaking
Section 37-31-30(C)(1)-(7)
Plain Language
Deployers must complete an impact assessment before deploying each high-risk AI system, and repeat it at least annually and within 90 days of any intentional and substantial modification. The assessment must cover: system purpose and deployment context, algorithmic discrimination risk analysis and mitigation, data inputs and outputs, customization data, performance metrics, transparency measures, and post-deployment monitoring safeguards. A single assessment may cover a comparable set of systems, and an assessment completed under another law satisfies this requirement if it is reasonably similar in scope and effect. All impact assessments and related records must be retained for at least three years after final deployment. Additionally, deployers must conduct at least annual reviews to verify that deployed systems are not causing algorithmic discrimination. Small deployers meeting the subsection (F) criteria are exempt.
Statutory Text
(C)(1) Except as provided in items (4), (5), and subsection (F) of this section: (a) a deployer, or a third party contracted by the deployer, that deploys a high-risk artificial intelligence system shall complete an impact assessment for the high-risk artificial intelligence system; and (b) a deployer, or a third party contracted by the deployer, shall complete an impact assessment for a deployed high-risk artificial intelligence system at least annually and within ninety days after any intentional and substantial modification to the high-risk artificial intelligence system is made available. (2) An impact assessment completed pursuant to this subsection must include, at a minimum, and to the extent reasonably known by or available to the deployer: (a) a statement by the deployer disclosing the purpose, intended-use cases, and deployment context of, and benefits afforded by, the high-risk artificial intelligence system; (b) an analysis of whether the deployment of the high-risk artificial intelligence system poses any known or reasonably foreseeable risks of algorithmic discrimination and, if so, the nature of the algorithmic discrimination and the steps that have been taken to mitigate the risks; (c) a description of the categories of data the high-risk artificial intelligence system processes as inputs and the outputs the high-risk artificial intelligence system produces; (d) if the deployer used data to customize the high-risk artificial intelligence system, an overview of the categories of data the deployer used to customize the high-risk artificial intelligence system; (e) any metrics used to evaluate the performance and known limitations of the high-risk artificial intelligence system; (f) a description of any transparency measures taken concerning the high-risk artificial intelligence system, including any measures taken to disclose to a consumer that the high-risk artificial intelligence system is in use when the high-risk artificial intelligence system is in use; and (g) a description of the postdeployment monitoring and user safeguards provided concerning the high-risk artificial intelligence system, including the oversight, use, and learning process established by the deployer to address issues arising from the deployment of the high-risk artificial intelligence system. (3) In addition to the information required under item (2), an impact assessment completed pursuant to this item following an intentional and substantial modification to a high-risk artificial intelligence system must include a statement disclosing the extent to which the high-risk artificial intelligence system was used in a manner that was consistent with, or varied from, the developer's intended uses of the high-risk artificial intelligence system. (4) A single impact assessment may address a comparable set of high-risk artificial intelligence systems deployed by a deployer. (5) If a deployer, or a third party contracted by the deployer, completes an impact assessment for the purpose of complying with another applicable law or regulation, the impact assessment satisfies the requirements established in this subsection if the impact assessment is reasonably similar in scope and effect to the impact assessment that would otherwise be completed pursuant to this subsection. 
(6) A deployer shall maintain the most recently completed impact assessment for a high-risk artificial intelligence system as required under this subsection, all records concerning each impact assessment, and all prior impact assessments, if any, for at least three years following the final deployment of the high-risk artificial intelligence system. (7) At least annually, a deployer, or a third party contracted by the deployer, must review the deployment of each high-risk artificial intelligence system deployed by the deployer to ensure that the high-risk artificial intelligence system is not causing algorithmic discrimination.
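Items (1)(b), (6), and (7) combine three clocks: an annual reassessment, a reassessment within 90 days of an intentional and substantial modification, and a three-year retention period after final deployment. A small scheduling sketch under the assumption of calendar-day counting, with a 365-day year standing in for "annually":

```python
from datetime import date, timedelta
from typing import Optional

REASSESS_ANNUAL = timedelta(days=365)             # "at least annually", approximated
REASSESS_AFTER_MODIFICATION = timedelta(days=90)  # item (1)(b) modification clock
RETENTION_PERIOD = timedelta(days=3 * 365)        # item (6), three-year approximation

def next_assessment_due(last_assessment: date,
                        modified_on: Optional[date] = None) -> date:
    """Earlier of the annual clock and the post-modification clock (item (1)(b))."""
    due = last_assessment + REASSESS_ANNUAL
    if modified_on is not None:
        due = min(due, modified_on + REASSESS_AFTER_MODIFICATION)
    return due

def retention_expires(final_deployment: date) -> date:
    """Assessments and related records must be kept at least this long (item (6))."""
    return final_deployment + RETENTION_PERIOD

# A system assessed 2025-01-10 and substantially modified 2025-06-01 must be
# reassessed by 2025-08-30, ahead of the annual date of 2026-01-10.
assert next_assessment_due(date(2025, 1, 10), date(2025, 6, 1)) == date(2025, 8, 30)
```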
H-01 Human Oversight of Automated Decisions · H-01.3 · Deployer · Automated Decisionmaking
Section 37-31-30(D)(1)(a)-(c)
Plain Language
Before a high-risk AI system makes or substantially factors into a consequential decision about a consumer, the deployer must: (1) notify the consumer that AI is being used for this purpose, (2) provide a plain-language description of the AI system, its purpose, the nature of the consequential decision, and deployer contact information, and (3) inform the consumer of any applicable opt-out rights regarding profiling. This is a pre-decision notice requirement — the consumer must know before the decision is made.
Statutory Text
(D)(1) No later than the time that a deployer deploys a high-risk artificial intelligence system to make, or be a substantial factor in making, a consequential decision concerning a consumer, the deployer shall: (a) notify the consumer that the deployer has deployed a high-risk artificial intelligence system to make, or be a substantial factor in making, a consequential decision before the decision is made; (b) provide to the consumer a statement disclosing the purpose of the high-risk artificial intelligence system and the nature of the consequential decision; the contact information for the deployer; a description, in plain language, of the high-risk artificial intelligence system; and instructions on how to access the statement required by this item; and (c) provide to the consumer information, if applicable, regarding the consumer's right to opt out of the processing of personal data concerning the consumer for purposes of profiling in furtherance of decisions that produce legal or similarly significant effects concerning the consumer pursuant to Section 30-31-60(A)(1)(a)(iii).
H-01 Human Oversight of Automated Decisions · H-01.1, H-01.2, H-01.4, H-01.5 · Deployer · Automated Decisionmaking
Section 37-31-30(D)(2)(a)-(c)
Plain Language
When a high-risk AI system makes or substantially factors into a consequential decision that is adverse to a consumer, the deployer must provide: (1) a statement of the principal reasons for the decision, including the AI system's degree and manner of contribution, the type of data processed, and the data sources; (2) an opportunity to correct inaccurate personal data used in the decision; and (3) an opportunity to appeal, which must allow for human review if technically feasible. The appeal need not be offered where it is not in the consumer's best interest, such as where delay might pose a risk to the consumer's life or safety. These post-decision rights apply only to adverse decisions.
Statutory Text
(2) A deployer that has deployed a high-risk artificial intelligence system to make, or be a substantial factor in making, a consequential decision concerning a consumer shall, if the consequential decision is adverse to the consumer, provide to the consumer: (a) a statement disclosing the principal reason or reasons for the consequential decision, including: (i) the degree to which, and manner in which, the high-risk artificial intelligence system contributed to the consequential decision; (ii) the type of data that was processed by the high-risk artificial intelligence system in making the consequential decision; and (iii) the source or sources of the data described in item (2)(a)(ii); (b) an opportunity to correct any incorrect personal data that the high-risk artificial intelligence system processed in making, or as a substantial factor in making, the consequential decision; and (c) an opportunity to appeal an adverse consequential decision concerning the consumer arising from the deployment of a high-risk artificial intelligence system, which appeal must, if technically feasible, allow for human review unless providing the opportunity for appeal is not in the best interest of the consumer, including in instances in which any delay might pose a risk to the life or safety of such consumer.
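The item (2) duties amount to a structured notice plus two process rights. The following hypothetical record captures the minimum contents; the field names are illustrative, not statutory:

```python
from dataclasses import dataclass

@dataclass
class AdverseDecisionNotice:
    """Paraphrase of Section 37-31-30(D)(2); field names are illustrative."""
    principal_reasons: list[str]      # (2)(a) principal reason(s) for the decision
    ai_contribution: str              # (2)(a)(i) degree and manner of AI contribution
    data_types_processed: list[str]   # (2)(a)(ii) types of data processed
    data_sources: list[str]           # (2)(a)(iii) sources of that data
    correction_instructions: str      # (2)(b) correcting inaccurate personal data
    appeal_instructions: str          # (2)(c) appeal opportunity
    human_review_available: bool      # required if technically feasible, unless the
                                      # appeal itself is not in the consumer's best
                                      # interest (e.g., delay risks life or safety)

def missing_contents(notice: AdverseDecisionNotice) -> list[str]:
    """Flag obviously absent minimum contents; a checklist sketch, not legal advice."""
    problems = []
    if not notice.principal_reasons:
        problems.append("no principal reason(s) stated")
    if not notice.data_sources:
        problems.append("no data source(s) identified")
    if not notice.appeal_instructions:
        problems.append("no appeal opportunity provided")
    return problems
```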
H-01 Human Oversight of Automated Decisions · H-01.3 · Deployer · Automated Decisionmaking
Section 37-31-30(D)(3)(a)-(b)
Plain Language
All notices and disclosures required under Section 37-31-30(D)(1) and (2) must be provided directly to the consumer, in plain language, in all languages the deployer uses in its ordinary business communications, and in accessible formats for consumers with disabilities. If direct provision is not possible, the deployer must use an alternative method reasonably calculated to reach the consumer. This is a formatting and delivery requirement that conditions the obligations in the preceding subsections.
Statutory Text
(3)(a) Except as provided in subitem (b), a deployer shall provide the notice, statement, contact information, and description required by items (1) and (2): (i) directly to the consumer; (ii) in plain language; (iii) in all languages in which the deployer, in the ordinary course of the deployer's business, provides contracts, disclaimers, sale announcements, and other information to consumers; and (iv) in a format that is accessible to consumers with disabilities. (b) If the deployer is unable to provide the notice, statement, contact information, and description required by items (1) and (2) directly to the consumer, the deployer shall make the notice, statement, contact information, and description available in a manner that is reasonably calculated to ensure that the consumer receives the notice, statement, contact information, and description.
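Item (3) is effectively a delivery checklist layered over every notice required by items (1) and (2): direct delivery (or a reasonably calculated fallback), plain language, all ordinary-course business languages, and an accessible format. A small sketch of that checklist; the names are illustrative, and the set comparison assumes the deployer tracks the languages it uses in ordinary business communications:

```python
from dataclasses import dataclass, field

@dataclass
class NoticeDelivery:
    """Delivery attributes for a Section 37-31-30(D)(3)-style notice; illustrative."""
    direct_to_consumer: bool          # (3)(a)(i), or see the subitem (b) fallback
    plain_language: bool              # (3)(a)(ii)
    languages: set = field(default_factory=set)   # (3)(a)(iii) languages provided
    accessible_format: bool = False   # (3)(a)(iv) accessible to consumers with disabilities
    fallback_reasonably_calculated: bool = False  # (3)(b) alternative delivery method

def delivery_compliant(d: NoticeDelivery, business_languages: set) -> bool:
    """Checklist paraphrase of item (3); business_languages is assumed to hold
    every language the deployer uses in ordinary-course consumer communications."""
    reaches_consumer = d.direct_to_consumer or d.fallback_reasonably_calculated
    return (reaches_consumer
            and d.plain_language
            and business_languages <= d.languages
            and d.accessible_format)
```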
G-02 Public Transparency & Documentation · G-02.4 · Deployer · Automated Decisionmaking
Section 37-31-30(E)(1)-(2)
Plain Language
Deployers must publish on their website a clear summary of: the types of high-risk AI systems they deploy, how they manage algorithmic discrimination risks for each system, and the nature, source, and extent of data they collect and use. The statement must be periodically updated. Small deployers meeting the subsection (F) criteria are exempt.
Statutory Text
(E)(1) Except as provided in subsection (F), a deployer shall make available, in a manner that is clear and readily available on the deployer's website, a statement summarizing: (a) the types of high-risk artificial intelligence systems that are currently deployed by the deployer; (b) how the deployer manages known or reasonably foreseeable risks of algorithmic discrimination that may arise from the deployment of each high-risk artificial intelligence system described pursuant to subitem (a); and (c) in detail, the nature, source, and extent of the information collected and used by the deployer. (2) A deployer shall periodically update the statement described in item (1) of this section.
R-01 Incident Reporting · R-01.3 · Deployer · Automated Decisionmaking
Section 37-31-30(G)
Plain Language
If a deployer discovers that a deployed high-risk AI system has caused algorithmic discrimination, the deployer must notify the Attorney General within 90 days of discovery, in the form and manner prescribed by the AG. This is an incident-reporting obligation triggered by actual discovery of discrimination, not by a suspicion or risk assessment.
Statutory Text
(G) If a deployer deploys a high-risk artificial intelligence system and subsequently discovers that the high-risk artificial intelligence system has caused algorithmic discrimination, the deployer, without unreasonable delay, but no later than ninety days after the date of the discovery, shall send to the Attorney General, in a form and manner prescribed by him, a notice disclosing the discovery.
R-02 Regulatory Disclosure & Submissions · R-02.2 · Deployer · Automated Decisionmaking
Section 37-31-30(I)
Plain Language
The Attorney General may request that a deployer produce its risk management policy, impact assessments, and associated records within 90 days. Deployers may designate materials as proprietary or trade secret, and attorney-client privilege and work-product protections are preserved. All disclosed materials are exempt from FOIA. This requires deployers to maintain documentation in a form producible to the AG on demand.
Statutory Text
(I) The Attorney General may require that a deployer, or a third party contracted by the deployer, disclose to him, no later than ninety days after the request and in a form and manner prescribed by him, the risk management policy implemented pursuant to subsection (B), the impact assessment completed pursuant to subsection (C), or the records maintained pursuant to subsection (C)(6). The Attorney General may evaluate the risk management policy, impact assessment, or records to ensure compliance with this chapter, and the risk management policy, impact assessment, and records are not subject to disclosure under the South Carolina Freedom of Information Act. In a disclosure made pursuant to this subsection, a deployer may designate the statement or documentation as including proprietary information or a trade secret. To the extent that any information contained in the risk management policy, impact assessment, or records includes information subject to attorney-client privilege or work-product protection, the disclosure does not constitute a waiver of the privilege or protection.
T-01 AI Identity Disclosure · T-01.1 · Developer · Deployer · Automated Decisionmaking
Section 37-31-40(A)-(B)
Plain Language
Any deployer or developer that makes an AI system available for consumer interaction must disclose to each consumer that they are interacting with an AI system. This disclosure is not required where it would be obvious to a reasonable person that they are interacting with AI. Note that this obligation applies to all AI systems intended to interact with consumers — not just high-risk systems — making it broader in scope than the rest of the chapter's high-risk-focused requirements.
Statutory Text
(A) Except as provided in subsection (B), a deployer or other developer that deploys, offers, sells, leases, licenses, gives, or otherwise makes available an artificial intelligence system that is intended to interact with consumers shall ensure the disclosure to each consumer who interacts with the artificial intelligence system that the consumer is interacting with an artificial intelligence system. (B) Disclosure is not required under subsection (A) under circumstances in which it would be obvious to a reasonable person that the person is interacting with an artificial intelligence system.
Other · Automated Decisionmaking
Section 37-31-60(1)-(6)
Plain Language
This provision establishes the enforcement framework for the chapter. The Attorney General has exclusive enforcement authority. Violations are treated as unfair trade practices under South Carolina's existing UTPA (Chapter 6, Title 37). An affirmative defense is available if the entity discovered and cured the violation through encouraged feedback, adversarial testing or red teaming, or an internal review process, and is otherwise in compliance with both the NIST AI RMF and ISO/IEC 42001, with another substantially equivalent or more stringent recognized framework, or with an AG-designated framework. No private right of action is created, and the chapter's presumptions and defenses apply only to AG enforcement actions, not to common law or other statutory claims.
Statutory Text
(1) Notwithstanding Section 37-31-30, the Attorney General has exclusive authority to enforce this chapter. (2) Except as provided in item (3), a violation of the requirements established in this chapter constitutes an unfair trade practice pursuant to the provisions of Chapter 6 of this title. (3) In any action commenced by the Attorney General to enforce this chapter, it is an affirmative defense that the developer, deployer, or other person: (a) discovers and cures a violation of this chapter as a result of: (i) feedback that the developer, deployer, or other person encourages deployers or users to provide to the developer, deployer, or other person; (ii) adversarial testing or red teaming, as those terms are defined or used by the National Institute of Standards and Technology; or (iii) an internal review process; and (b) is otherwise in compliance with: (i) the latest version of the "Artificial Intelligence Risk Management Framework" published by the National Institute of Standards and Technology in the United States Department of Commerce and standard ISO/IEC 42001 of the International Organization for Standardization; (ii) another nationally or internationally recognized risk management framework for artificial intelligence systems, if the standards are substantially equivalent to or more stringent than the requirements of this chapter; or (iii) any risk management framework for artificial intelligence systems that the Attorney General, in his discretion, may designate and, if designated, shall publicly disseminate. (4) A developer, a deployer, or other person bears the burden of demonstrating to the Attorney General that the requirements established in item (3) have been satisfied. (5) Nothing in this chapter, including the enforcement authority granted to the Attorney General under this section, preempts or otherwise affects any right, claim, remedy, presumption, or defense available at law or in equity. A rebuttable presumption or affirmative defense established under this chapter applies only to an enforcement action brought by the Attorney General pursuant to this section and does not apply to any right, claim, remedy, presumption, or defense available at law or in equity. (6) This chapter does not provide the basis for, and is not subject to, a private right of action for violations of this chapter or any other law.
Other · Automated Decisionmaking
Section 37-31-70(1)-(6)
Plain Language
This provision authorizes the Attorney General to adopt rules implementing and enforcing the chapter, covering developer documentation requirements, notice and disclosure formats, risk management program content, impact assessment requirements, rebuttable presumption criteria, and the affirmative defense framework. It creates no independent compliance obligation on developers or deployers — it is a delegation of rulemaking authority.
Statutory Text
The Attorney General may promulgate rules as necessary for the purpose of implementing and enforcing this chapter, including: (1) the documentation and requirements for developers pursuant to Section 37-31-20(B); (2) the contents of and requirements for the notices and disclosures required by Sections 37-31-20(E) and (G); 37-31-30(D), (E), (F), and (H); and 37-31-40; (3) the content and requirements of the risk management policy and program required by Section 37-31-30(B); (4) the content and requirements of the impact assessments required by Section 37-31-30(C); (5) the requirements for the rebuttable presumptions set forth in Sections 37-31-20 and 37-31-30; and (6) the requirements for the affirmative defense set forth in Section 37-31-60(C), including the process by which the Attorney General will recognize any other nationally or internationally recognized risk management framework for artificial intelligence systems.