SB-167
GA · State · USA
● Pending
Proposed Effective Date
2025-07-01
Georgia Senate Bill 167 — Automated Decision Systems and Algorithmic Discrimination
Summary

Georgia SB 167 regulates developers and deployers of automated decision systems used to make or assist in consequential decisions — covering education, employment, government services, financial/lending, healthcare, housing, insurance, and legal services. Developers must disclose system documentation to the Attorney General and deployers, take steps to mitigate algorithmic discrimination, and publish public summaries of their systems. Deployers must implement risk management programs, complete annual impact assessments, provide pre-decision and post-decision notices to consumers including explanation of principal factors and appeal rights, and publish impact assessments on their websites. AI systems intended to interact with consumers must disclose their AI nature. Enforcement is through the Attorney General under the Fair Business Practices Act, with an affirmative defense for entities that self-discover and promptly cure violations while maintaining compliance with recognized risk management frameworks such as the NIST AI RMF.

Enforcement & Penalties
Enforcement Authority
Enforcement is by the Attorney General through the Fair Business Practices Act of 1975 (O.C.G.A. § 10-1-390 et seq., Part 2 of Article 15 of Chapter 1 of Title 10). The Attorney General may commence enforcement actions, require disclosure of documentation within seven days, and promulgate rules necessary for implementation. An affirmative defense is available if the developer or deployer discovers the violation through adversarial testing or internal review; cures it within seven days and reports it to the Attorney General and affected consumers; is otherwise in compliance with the chapter and a recognized AI risk management framework; and the violation was inadvertent, affected fewer than 100 consumers, and could not have been discovered through reasonable diligence. The statute expressly preserves all existing rights, claims, remedies, presumptions, and defenses available at law or in equity; the affirmative defense applies only to Attorney General enforcement actions.
Penalties
Enforcement is through the Fair Business Practices Act of 1975 (O.C.G.A. § 10-1-390 et seq.), which provides for injunctive relief, civil penalties, and other remedies available under that Act. The statute expressly preserves all rights, claims, remedies, presumptions, and defenses available at law or in equity, meaning existing causes of action (e.g., discrimination claims) remain fully available. The new chapter itself specifies no minimum or per-violation penalty amount; penalties flow from the FBPA framework.
Who Is Covered
'Deployer' means a person doing business in this state that deploys an automated decision system.
'Developer' means a person doing business in this state that develops or intentionally and substantially modifies an artificial intelligence system.
What Is Covered
'Automated decision system' means a computational process derived from machine learning, statistical modeling, data analytics, or an artificial intelligence system that, when deployed, issues a simplified output, including, but not limited to, a score, classification, or recommendation, that is used to assist or replace human discretionary decision making and materially impacts natural persons. Such term shall not include a tool that does not assist or replace processes for making consequential decisions and that does not materially impact natural persons, including, but not limited to, a junk email filter, firewall, antivirus software, calculator, spreadsheet, or other tool that does no more than organize data already in possession of the deployer of the automated decision system.
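Teams triaging a tool inventory against this definition sometimes encode the two-part test (a simplified output used to assist or replace discretionary decision making, plus material impact on natural persons) as a screening predicate. A minimal sketch with hypothetical field names; it is a triage aid, not a legal determination:

# Hypothetical screening predicate for the SB 167 definition of an
# automated decision system. Field names are illustrative, not statutory.
from dataclasses import dataclass

@dataclass
class ToolProfile:
    derived_from_ml_or_ai: bool         # ML, statistical modeling, analytics, or AI
    issues_simplified_output: bool      # score, classification, or recommendation
    assists_or_replaces_decision: bool  # assists/replaces human discretionary decisions
    materially_impacts_persons: bool    # materially impacts natural persons

def is_automated_decision_system(tool: ToolProfile) -> bool:
    # The statutory exclusion (junk email filters, firewalls, calculators,
    # spreadsheets, and similar organizing tools) negates the last two
    # conditions, so a single conjunction covers both halves of the test.
    return (tool.derived_from_ml_or_ai
            and tool.issues_simplified_output
            and tool.assists_or_replaces_decision
            and tool.materially_impacts_persons)

# A spreadsheet that only organizes data already held by the deployer:
assert not is_automated_decision_system(ToolProfile(False, False, False, False))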
Compliance Obligations (18 obligations)
H-02 Non-Discrimination & Bias Assessment · H-02.1 · Developer · Automated Decisionmaking
O.C.G.A. § 10-16-2(a)
Plain Language
Developers are categorically prohibited from selling, distributing, or otherwise making available to deployers any automated decision system that results in algorithmic discrimination. The prohibition covers discrimination and disparate impact across a broad set of protected characteristics in the context of consequential decisions. Self-testing to identify or mitigate discrimination and diversity-expanding uses are carved out from the definition of algorithmic discrimination.
Statutory Text
No developer shall sell, distribute, or otherwise make available to deployers an automated decision system that results in algorithmic discrimination.
R-02 Regulatory Disclosure & Submissions · R-02.1 · Developer · Automated Decisionmaking
O.C.G.A. § 10-16-2(b)
Plain Language
Developers must submit comprehensive documentation about each automated decision system to the Attorney General, in a form the AG prescribes. The required information covers foreseeable uses and misuses, system purpose and benefits, training data summaries, known limitations and discrimination risks, mitigation measures taken, pre-deployment evaluation methods, data governance measures, usage and monitoring guidance, and any additional documentation deployers need for compliance. Developers may make reasonable trade-secret redactions under § 10-16-2(f) but must notify the AG and provide a basis for the redaction, and may not redact information deployers need for their own compliance obligations.
Statutory Text
Except as provided in subsection (f) of this Code section, a developer of an automated decision system shall provide certain information regarding such automated decision system to the Attorney General, in a form and manner prescribed by the Attorney General. Such information shall include, at a minimum: (1) A general statement describing the reasonably foreseeable uses and known harmful or inappropriate uses of the automated decision system; (2) Documentation disclosing: (A) The purpose of the automated decision system; (B) The intended benefits and uses of the automated decision system; (C) High-level summaries of the types of data used to train the automated decision system; (D) Known or reasonably foreseeable limitations of the automated decision system, including known or reasonably foreseeable risks of algorithmic discrimination arising from the intended uses of the automated decision system; (E) The measures the developer has taken to mitigate known or reasonably foreseeable risks of algorithmic discrimination; (F) How the automated decision system was evaluated for performance and mitigation of algorithmic discrimination before the automated decision system was offered, sold, leased, licensed, given, or otherwise made available to the deployer; (G) The data governance measures used to cover the training data sets and the measures used to examine the suitability of data sources, possible biases, and appropriate mitigation; (H) How the automated decision system should be used, not be used, and be monitored by an individual when the automated decision system is used to make, or assist in making, a consequential decision; and (I) All other information necessary to allow the deployer to comply with the requirements of Code Section 10-16-3; and (3) Any additional documentation that is reasonably necessary to assist the deployer in understanding the outputs and monitoring the performance of the automated decision system for risks of algorithmic discrimination.
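Compliance tooling can track the § 10-16-2(b) minimum as a completeness checklist before a submission is assembled. A minimal sketch; the keys paraphrase the statutory items and are assumptions, not the AG's prescribed form:

# Hypothetical completeness check for the developer's Attorney General
# submission under O.C.G.A. § 10-16-2(b). Key names paraphrase the statute.
REQUIRED_ITEMS = [
    "foreseeable_and_harmful_uses",     # (1)
    "purpose",                          # (2)(A)
    "intended_benefits_and_uses",       # (2)(B)
    "training_data_summaries",          # (2)(C)
    "known_limitations_and_risks",      # (2)(D)
    "discrimination_mitigations",       # (2)(E)
    "pre_deployment_evaluation",        # (2)(F)
    "data_governance_measures",         # (2)(G)
    "use_and_monitoring_guidance",      # (2)(H)
    "deployer_compliance_information",  # (2)(I)
    "additional_monitoring_docs",       # (3)
]

def missing_items(submission: dict) -> list[str]:
    """Return statutory items absent or empty in a draft submission."""
    return [k for k in REQUIRED_ITEMS if not submission.get(k)]

draft = {"purpose": "Tenant screening score", "training_data_summaries": "..."}
print(missing_items(draft))  # the remaining nine items still need assembly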
T-03 Training Data Disclosure · T-03.3 · Developer · Automated Decisionmaking
O.C.G.A. § 10-16-2(c)
Plain Language
When a developer provides an automated decision system to a deployer or other developer, the developer must share — to the extent feasible — all the documentation required for the AG submission (including data governance measures, training data summaries, and bias mitigation steps), plus whatever additional information the deployer needs to complete its impact assessment (e.g., model cards, dataset cards). A developer that is also the deployer of its own system is exempt from generating this documentation unless the system is provided to an unaffiliated deployer. Trade secret redactions are permitted under § 10-16-2(f) but may not cover information the deployer needs for compliance.
Statutory Text
(1) Except as provided in subsection (f) of this Code section, a developer that offers, sells, leases, licenses, gives, or otherwise makes available to a deployer or other developer an automated decision system shall make available to the deployer or other developer, to the extent feasible, all of the information required to be provided to the Attorney General by subsection (b) of this Code section, as well as the documentation and information, through artifacts such as model cards, data set cards, or other impact assessments, necessary for a deployer or third party contracted by a deployer to complete an impact assessment pursuant to subsection (e) of Code Section 10-16-3. (2) A developer that also serves as a deployer for an automated decision system is not required to generate the documentation required by this subsection unless the automated decision system is provided to an unaffiliated entity acting as a deployer.
G-02 Public Transparency & Documentation · G-02.4 · Developer · Automated Decisionmaking
O.C.G.A. § 10-16-2(d)
Plain Language
Developers must publish on their website or in a public use case inventory a clear summary of the types of automated decision systems they currently make available and how they manage algorithmic discrimination risks. This statement must be kept accurate on an ongoing basis and updated within 90 days of any intentional and substantial modification to a described system. Continuous learning changes that were predetermined and documented in the initial impact assessment are excluded from the modification trigger.
Statutory Text
(1) A developer shall make available to the public, in a manner that is clear and readily available on the developer's public website or in a public use case inventory, a statement summarizing: (A) The types of automated decision systems that the developer has developed or intentionally and substantially modified and currently makes available to a deployer or other developer; and (B) How the developer manages known or reasonably foreseeable risks of algorithmic discrimination. (2) A developer shall update the statement described in paragraph (1) of this subsection: (A) As necessary to ensure that the statement remains accurate; and (B) No later than 90 days after the developer intentionally and substantially modifies any automated decision system described in such statement.
H-02 Non-Discrimination & Bias Assessment · H-02.1 · H-02.2 · Developer · Automated Decisionmaking
O.C.G.A. § 10-16-2(e)
Plain Language
Developers have a continuous obligation to test for and mitigate algorithmic discrimination, invalidity, and errors — including ensuring representative data sources, implementing data governance, testing for disparate impact, and searching for less discriminatory alternatives. This obligation persists for as long as any deployer uses the system. Additionally, when a developer discovers (through its own testing or a deployer's credible report) that a deployed system has caused or is reasonably likely to have caused algorithmic discrimination, it must notify the Attorney General and all known deployers or other developers within 90 days.
Statutory Text
(1) A developer of an automated decision system shall take steps to address risks of algorithmic discrimination, invalidity, and errors, including, but not limited to, ensuring suitability and representativeness of data sources, implementing data governance measures, testing the automated decision system for disparate impact, and searching for less discriminatory alternative decision methods. Developers shall continue assessing and mitigating the risk of algorithmic discrimination in their automated decision systems so long as such automated decision systems are in use by any deployer. (2) A developer of an automated decision system shall disclose to the Attorney General, in a form and manner prescribed by the Attorney General, and to all known deployers or other developers of the automated decision system, any known or reasonably foreseeable risks of algorithmic discrimination arising from the intended uses of the automated decision system without unreasonable delay but no later than 90 days after the date on which: (A) The developer discovers through the developer's ongoing testing and analysis that the developer's automated decision system has been deployed and has caused or is reasonably likely to have caused algorithmic discrimination; or (B) The developer receives from a deployer a credible report that the automated decision system has been deployed and has caused algorithmic discrimination.
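The 90-day windows that recur in this chapter (this developer disclosure, the deployer incident notice under § 10-16-7, and the post-modification triggers) reduce to date arithmetic. A minimal sketch assuming calendar-day counting, which the statute does not specify:

# Minimal deadline helper for the chapter's 90-day notice windows, e.g. the
# developer disclosure under § 10-16-2(e)(2). Calendar-day counting is an
# assumption; the statute also requires action "without unreasonable delay",
# which can make the effective deadline earlier than the 90-day outer bound.
from datetime import date, timedelta

def notice_deadline(trigger: date, window_days: int = 90) -> date:
    return trigger + timedelta(days=window_days)

# e.g., ongoing testing reveals that a deployed system caused discrimination
discovery = date(2025, 8, 1)
print(notice_deadline(discovery))  # 2025-10-30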
R-02 Regulatory Disclosure & Submissions · R-02.2 · Developer · Automated Decisionmaking
O.C.G.A. § 10-16-2(g)
Plain Language
The Attorney General may demand that developers produce any documentation or records required under § 10-16-2 within seven days. Records disclosed are exempt from Georgia open records requirements. Developers may designate materials as proprietary or trade secret, and disclosure does not waive attorney-client privilege or work-product protection. This is a regulatory-request power — the AG can compel production at any time, and developers must maintain records in a form ready for rapid assembly.
Statutory Text
The Attorney General may require that a developer disclose to the Attorney General, within seven days and in a form and manner prescribed by the Attorney General, any documentation or records required by this Code section, including, but not limited to, the statement or documentation described in subsection (b) of this Code section. The Attorney General may evaluate such statement or documentation to ensure compliance with this chapter, and, notwithstanding the provisions of Article 4 of Chapter 18 of Title 50, relating to open records, such records shall not be open to inspection by or made available to the public. In a disclosure pursuant to this subsection, a developer may designate the statement or documentation as including proprietary information or a trade secret. To the extent that any information contained in the statement or documentation includes information subject to attorney-client privilege or work-product protection, the disclosure does not constitute a waiver of the privilege or protection.
H-02 Non-Discrimination & Bias Assessment · H-02.1 · Deployer · Automated Decisionmaking
O.C.G.A. § 10-16-3(a)
Plain Language
Deployers are categorically prohibited from using automated decision systems in a manner that results in algorithmic discrimination. This is a strict liability prohibition — deployers are liable for discriminatory outcomes regardless of intent. It complements the parallel prohibition on developers in § 10-16-2(a).
Statutory Text
No deployer of an automated decision system shall use an automated decision system in a manner that results in algorithmic discrimination.
G-01 AI Governance Program & Documentation · G-01.1 · G-01.2 · Deployer · Automated Decisionmaking
O.C.G.A. § 10-16-3(b)-(c)
Plain Language
Deployers must implement and maintain a risk management policy and program governing their use of automated decision systems. The program must specify the principles, processes, and personnel used to identify, document, and mitigate algorithmic discrimination risks. It must be iterative, regularly and systematically reviewed and updated over the system's lifecycle, and must take into account the NIST AI RMF, ISO/IEC 42001, or an equivalent framework, as well as the deployer's size and complexity, the nature and scope of its deployed systems, and the sensitivity and volume of data processed. A single program may cover multiple deployed systems. Small deployers meeting all conditions in § 10-16-6 (fewer than 15 employees, fewer than 1,000 affected consumers, no training on the deployer's own data, etc.) are exempt.
Statutory Text
(b) Except as provided in Code Section 10-16-6, a deployer of an automated decision system shall implement a risk management policy and program to govern the deployer's deployment of the automated decision system. The risk management policy and program shall specify and incorporate the principles, processes, and personnel that the deployer uses to identify, document, and mitigate known or reasonably foreseeable risks of algorithmic discrimination. The risk management policy and program shall be an iterative process planned, implemented, and regularly and systematically reviewed and updated over the life cycle of an automated decision system, requiring regular, systematic review and updates. A risk management policy and program implemented and maintained pursuant to this subsection shall take into consideration: (1) Either: (A) The guidance and standards set forth in the latest version of the Artificial Intelligence Risk Management Framework published by the National Institute of Standards and Technology of the United States Department of Commerce, standard ISO/IEC 42001 of the International Organization for Standardization, or another nationally or internationally recognized risk management framework for artificial intelligence systems, if the standards are substantially equivalent to or more stringent than the requirements of this chapter; or (B) Any risk management framework for artificial intelligence systems that the Attorney General, in the Attorney General's discretion, may designate; (2) The size and complexity of the deployer; (3) The nature and scope of the automated decision systems deployed by the deployer, including the intended uses of the automated decision systems; and (4) The sensitivity and volume of data processed in connection with the automated decision systems deployed by the deployer. (c) A risk management policy and program implemented pursuant to this Code section may cover multiple automated decision systems deployed by the deployer.
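Deployers modeling the subsection (b) factors sometimes record the framework choice and sizing considerations as program metadata. A minimal sketch; every field name and example value here is an illustrative assumption:

# Hypothetical program-metadata record for the § 10-16-3(b) risk management
# policy and program. Enum members name the statutory framework options.
from dataclasses import dataclass, field
from enum import Enum

class Framework(Enum):
    NIST_AI_RMF = "NIST AI RMF (latest version)"
    ISO_IEC_42001 = "ISO/IEC 42001"
    OTHER_RECOGNIZED = "other substantially equivalent recognized framework"
    AG_DESIGNATED = "framework designated by the Attorney General"

@dataclass
class RiskManagementProgram:
    framework: Framework
    deployer_size_and_complexity: str   # § 10-16-3(b)(2)
    systems_nature_and_scope: str       # § 10-16-3(b)(3)
    data_sensitivity_and_volume: str    # § 10-16-3(b)(4)
    covered_system_ids: list[str] = field(default_factory=list)  # (c): one program, many systems
    last_review: str = ""               # iterative: reviewed over the life cycle

program = RiskManagementProgram(
    framework=Framework.NIST_AI_RMF,
    deployer_size_and_complexity="mid-size lender, three ML teams",
    systems_nature_and_scope="credit underwriting recommendation models",
    data_sensitivity_and_volume="high: financial and demographic data",
    covered_system_ids=["underwriting-v4", "pricing-v2"],
)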
G-01 AI Governance Program & Documentation · G-01.3 · Deployer · Automated Decisionmaking
O.C.G.A. § 10-16-3(d)
Plain Language
Deployers must establish and follow written standards, policies, and procedures governing their acquisition and use of third-party automated decision systems. This includes contractual controls ensuring developers provide all information needed for deployer compliance, procedures for reporting errors or evidence of algorithmic discrimination back to developers, and procedures for remediating and eliminating incorrect information from deployed systems. These are standing governance obligations — not one-time documentation exercises.
Statutory Text
Each deployer shall establish and adhere to: (1) Written standards, policies, procedures, and protocols for the acquisition, use of, or reliance on automated decision systems developed by third-party developers, including reasonable contractual controls ensuring that the developer statements and summaries described in subsection (b) of Code Section 10-16-2 include all information necessary for the deployer to fulfill its obligations under this Code section; (2) Procedures for reporting any incorrect information or evidence of algorithmic discrimination to a developer for further investigation and mitigation, as necessary; and (3) Procedures to remediate and eliminate incorrect information from its automated decision systems that the deployer has identified or has been reported to a developer.
H-02 Non-Discrimination & Bias Assessment · H-02.3 · H-02.8 · H-02.10 · Deployer · Automated Decisionmaking
O.C.G.A. § 10-16-3(e)-(j)
Plain Language
Deployers (or their contracted third parties) must complete a comprehensive impact assessment for each automated decision system before deployment, at least annually thereafter, and within 90 days of any intentional and substantial modification. The assessment must cover system purpose and use cases, algorithmic discrimination risks and mitigation, accessibility impacts, labor law compliance risks, privacy intrusion risks, data categories, validity and reliability analysis, transparency measures, and post-deployment monitoring. If the assessment reveals a discrimination risk, the deployer may not deploy until less discriminatory alternatives are searched for and implemented. A single assessment may cover a comparable set of systems. Impact assessments completed for other regulatory requirements satisfy this obligation if reasonably similar in scope. All impact assessments and records must be retained throughout deployment and for at least three years after final deployment. Small deployers meeting all § 10-16-6 conditions are exempt from this requirement.
Statutory Text
(e) Except as otherwise provided for in this chapter: (1) A deployer, or a third party contracted by the deployer, that deploys an automated decision system shall complete an impact assessment for the automated decision system; and (2) A deployer, or a third party contracted by the deployer, shall complete an impact assessment for a deployed automated decision system at least annually and within 90 days after any intentional and substantial modification to the automated decision system is made available. (f) An impact assessment completed pursuant to subsection (e) of this Code section shall include, at a minimum, and to the extent reasonably known by or available to the deployer: (1) A statement by the deployer disclosing the purpose, intended use cases, and deployment context of, and benefits afforded by, the automated decision system; (2) An analysis of whether the deployment of the automated decision system poses any known or reasonably foreseeable risks of: (A) Algorithmic discrimination and, if so, the nature of the algorithmic discrimination and the steps that have been taken to mitigate the risks; (B) Limits on accessibility for individuals who are pregnant, breastfeeding, or disabled, and, if so, what reasonable accommodations the deployer may provide that would mitigate any such limitations on accessibility; (C) Any violation of state or federal labor laws, including laws pertaining to wages, occupational health and safety, and the right to organize; or (D) Any physical or other intrusion upon the solitude or seclusion, or the private affairs or concerns, of consumers if such intrusion: (i) Would be offensive to a reasonable person; and (ii) May be redressed under the laws of this state; (3) A description of the categories of data the automated decision system processes as inputs and the outputs the automated decision system produces; (4) If the deployer used data to customize the automated decision system, an overview of the categories of data the deployer used to customize the automated decision system; (5) An analysis of the automated decision system's validity and reliability in accordance with contemporary social science standards, and a description of any metrics used to evaluate the performance and known limitations of the automated decision system; (6) A description of any transparency measures taken concerning the automated decision system, including any measures taken to disclose to a consumer that the automated decision system is in use when the automated decision system is in use; (7) A description of the post-deployment monitoring and user safeguards provided concerning the automated decision system, including the oversight, use, and learning process established by the deployer to address issues arising from the deployment of the automated decision system; and (8) When such impact assessment is completed following an intentional and substantial modification to an automated decision system, a statement disclosing the extent to which the automated decision system was used in a manner that was consistent with, or varied from, the developer's intended uses of the automated decision system. (g) If the analysis required by paragraph (2) of subsection (f) of this Code section reveals a risk of algorithmic discrimination, the deployer shall not deploy the automated decision system until the developer or deployer takes reasonable steps to search for and implement less discriminatory alternative decision methods. 
(h) A single impact assessment may address a comparable set of automated decision systems deployed by a deployer. (i) If a deployer, or a third party contracted by the deployer, completes an impact assessment for the purpose of complying with another applicable law or regulation, the impact assessment shall satisfy the requirements established in this Code section if the impact assessment is reasonably similar in scope and effect to the impact assessment that would otherwise be completed pursuant to this Code section. (j) A deployer shall maintain the most recently completed impact assessment for an automated decision system, all records concerning each impact assessment, and all prior impact assessments, if any, throughout the period of time that the automated decision system is deployed and for at least three years following the final deployment of the automated decision system.
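The cadence in subsections (e) and (j) (an assessment before deployment, at least annually thereafter, within 90 days of an intentional and substantial modification, and record retention for at least three years after final deployment) reduces to simple scheduling. A minimal sketch assuming calendar-day counting:

# Minimal cadence sketch for § 10-16-3(e) and (j). Calendar-day counting
# and the 365-day year are assumptions; dates are illustrative.
from datetime import date, timedelta

def next_assessment_due(last_assessment: date,
                        last_substantial_mod: date | None = None) -> date:
    annual = last_assessment + timedelta(days=365)  # (e)(2): at least annually
    if last_substantial_mod and last_substantial_mod > last_assessment:
        # 90-day post-modification trigger can come due before the annual date
        return min(annual, last_substantial_mod + timedelta(days=90))
    return annual

def retention_ends(final_deployment: date) -> date:
    return final_deployment + timedelta(days=3 * 365)  # (j): at least three years

print(next_assessment_due(date(2025, 7, 1), date(2026, 1, 15)))  # 2026-04-15
print(retention_ends(date(2027, 3, 1)))                          # 2030-02-28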
H-02 Non-Discrimination & Bias Assessment · H-02.8 · Deployer · Automated Decisionmaking
O.C.G.A. § 10-16-3(k)
Plain Language
Deployers must conduct at least annual reviews of each deployed automated decision system to affirmatively verify it is not causing algorithmic discrimination. This is a separate, ongoing operational review obligation distinct from the annual impact assessment update requirement in § 10-16-3(e)(2). Reviews may be conducted by the deployer itself or by a contracted third party.
Statutory Text
At least annually a deployer, or a third party contracted by the deployer, shall review the deployment of each automated decision system deployed by the deployer to ensure that the automated decision system is not causing algorithmic discrimination.
H-02 Non-Discrimination & Bias Assessment · H-02.5 · Deployer · Automated Decisionmaking
O.C.G.A. § 10-16-3(l)
Plain Language
Deployers must publish all impact assessments completed within the preceding three years on their public websites, in a format prescribed by the Attorney General. This is a public disclosure requirement — not a confidential regulatory filing. It ensures consumers, researchers, and the public can review how deployers have assessed algorithmic discrimination risks.
Statutory Text
Deployers shall publish on their public websites all impact assessments completed within the preceding three years in a form and manner prescribed by the Attorney General.
H-01 Human Oversight of Automated Decisions · H-01.3 · H-01.1 · Deployer · Automated Decisionmaking
O.C.G.A. § 10-16-4(a)
Plain Language
Before or at the time a deployer uses an automated decision system to make or assist in a consequential decision about a consumer, the deployer must notify the consumer and provide: the system's purpose and the nature of the consequential decision, deployer contact information, a plain-language description of what personal characteristics the system assesses and how, identification of human and automated components, a link to a public webpage with the system's logic, parameters, outputs, data sources, and latest impact assessment results, and instructions for accessing the deployer's public statement under § 10-16-5. This is a pre-decision disclosure obligation — not a post-hoc notice.
Statutory Text
(a) No later than the time that a deployer deploys an automated decision system to make, or assist in making, a consequential decision concerning a consumer, the deployer shall: (1) Notify the consumer that the deployer has deployed an automated decision system to make, or assist in making, a consequential decision; and (2) Provide to the consumer: (A) A statement disclosing the purpose of the automated decision system and the nature of the consequential decision; (B) The contact information for the deployer; (C) A description, in plain language, of the automated decision system, which description shall, at a minimum, include: (i) A description of the personal characteristics or attributes that the system will measure or assess; (ii) The method by which the system measures or assesses those attributes or characteristics; (iii) How those attributes or characteristics are relevant to the consequential decisions for which the system should be used; (iv) Any human components of such system; (v) How any automated components of such system are used to inform such consequential decision; and (vi) A direct link to a publicly accessible page on the deployer's public website that contains a plain-language description of the logic used in the system, including the key parameters that affect the output of the system; the system's outputs; the types and sources of data collected from natural persons and processed by the system when it is used to make, or assists in making, a consequential decision; and the results of the most recent impact assessment, or an active link to a web page where a consumer can review those results; and (D) Instructions on how to access the statement required by Code Section 10-16-5.
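Notice tooling can validate the subsection (a) elements before a decision is run, which also supports the § 10-16-4(d) bar on deploying when compliant notices cannot be provided. A minimal sketch with hypothetical field names:

# Hypothetical pre-decision notice record for § 10-16-4(a). Field names
# paraphrase the statutory elements; they are not statutory terms.
from dataclasses import dataclass

@dataclass
class PreDecisionNotice:
    system_purpose_and_decision: str    # (2)(A)
    deployer_contact: str               # (2)(B)
    plain_language_description: str     # (2)(C)(i)-(v)
    public_logic_page_url: str          # (2)(C)(vi): logic, key parameters, outputs,
                                        # data sources, latest impact assessment results
    public_statement_instructions: str  # (2)(D): how to access the § 10-16-5 statement

    def is_complete(self) -> bool:
        return all(vars(self).values())  # no element may be missing or empty

notice = PreDecisionNotice(
    system_purpose_and_decision="Rental application scoring for lease approval",
    deployer_contact="compliance@example-landlord.test",
    plain_language_description="The system assesses payment history and ...",
    public_logic_page_url="https://example-landlord.test/ads/rental-scoring",
    public_statement_instructions="See https://example-landlord.test/ai-statement",
)
assert notice.is_complete()  # gate deployment on complete notices, per (d)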
H-01 Human Oversight of Automated Decisions · H-01.1 · H-01.2 · H-01.4 · H-01.5 · Deployer · Automated Decisionmaking
O.C.G.A. § 10-16-4(b)-(d)
Plain Language
Within one business day after a consequential decision is made, deployers must send the affected consumer a detailed post-decision notice including: the principal factors and variables that drove the decision (with explanation of the AI's contribution, data sources, and how the consumer's personal data informed the factors), information on the right to correct data and submit supplementary information, guidance on actions the consumer could take to secure a different outcome, instructions for correcting any incorrect personal data used, and information about appeal opportunities (which must allow human review if technically feasible). All notices must be provided directly, in plain language, in all languages the deployer ordinarily uses with consumers, and in disability-accessible formats. If direct delivery is impossible, the deployer must use a method reasonably calculated to reach the consumer. A deployer may not use an automated decision system at all if it cannot provide these notices and explanations.
Statutory Text
(b) A deployer that has used an automated decision system to make, or assist in making, a consequential decision concerning a consumer shall transmit to such consumer within one business day after such decision a notice that includes: (1) A specific and accurate explanation that identifies the principal factors and variables that led to the consequential decision, including: (A) The degree to which, and manner in which, the automated decision system contributed to the consequential decision; (B) The source or sources of the data processed by the automated decision system; and (C) A plain-language explanation of how the consumer's personal data informed these principal factors and variables when the automated decision system made, or assisted in making, the consequential decision; (2) Information about consumers' right to correct, and how the consumer can submit corrections and provide supplementary information relevant to, the consequential decision; (3) What actions, if any, the consumer might have taken to secure a different decision and the actions that the consumer might take to secure a different decision in the future; (4) Information on opportunities to correct any incorrect personal data that the automated decision system processed in making, or assisting in making, the consequential decision; and (5) Information on opportunities to appeal an adverse consequential decision concerning the consumer arising from the deployment of an automated decision system, which appeal shall, if technically feasible, allow for human review. (c)(1) A deployer shall provide the notice, statement, contact information, and description required by subsections (a) and (b) of this Code section: (A) Directly to the consumer; (B) In plain language; (C) In all languages in which the deployer, in the ordinary course of the deployer's business, provides contracts, disclaimers, sale announcements, and other information to consumers; and (D) In a format that is accessible to consumers with disabilities. (2) If the deployer is unable to provide the notice, statement, contact information, and description directly to the consumer, the deployer shall make such information available in a manner that is reasonably calculated to ensure that the consumer receives it. (d) No deployer shall use an automated decision system to make, or assist in making, a consequential decision if it cannot provide notices and explanations that satisfy the requirements of this Code section.
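The one-business-day clock in subsection (b) needs a business-day calendar. A weekends-only sketch; holiday handling is omitted, and treating "business day" as excluding weekends is an assumption:

# Minimal next-business-day helper for the § 10-16-4(b) post-decision notice.
from datetime import date, timedelta

def post_decision_notice_due(decision_date: date) -> date:
    due = decision_date + timedelta(days=1)
    while due.weekday() >= 5:  # 5 = Saturday, 6 = Sunday
        due += timedelta(days=1)
    return due

print(post_decision_notice_due(date(2025, 7, 10)))  # Thursday -> Friday 2025-07-11
print(post_decision_notice_due(date(2025, 7, 11)))  # Friday -> Monday 2025-07-14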
G-02 Public Transparency & Documentation · G-02.4 · Deployer · Automated Decisionmaking
O.C.G.A. § 10-16-5(a)-(b)
Plain Language
Deployers must publish a clear, readily accessible statement on their public website summarizing the types of automated decision systems they deploy, how they manage algorithmic discrimination risks for each system, and detailed information about the nature, source, and extent of data they collect and use. The statement must be periodically updated. Small deployers meeting all § 10-16-6 conditions are exempt.
Statutory Text
(a) Except as provided in Code Section 10-16-6, a deployer shall make available, in a manner that is clear and readily available on the deployer's public website, a statement summarizing: (1) The types of automated decision systems that are currently deployed by the deployer; (2) How the deployer manages known or reasonably foreseeable risks of algorithmic discrimination that may arise from the deployment of each such automated decision system; and (3) In detail, the nature, source, and extent of the information collected and used by the deployer. (b) A deployer shall periodically update the statement described in subsection (a) of this Code section.
R-01 Incident Reporting · R-01.3 · Deployer · Automated Decisionmaking
O.C.G.A. § 10-16-7
Plain Language
When a deployer discovers that a deployed automated decision system has caused algorithmic discrimination, the deployer must notify the Attorney General within 90 days of discovery, in a form and manner prescribed by the AG. This is a mandatory incident-reporting obligation triggered by the deployer's discovery of actual algorithmic discrimination — not merely a risk of discrimination.
Statutory Text
If a deployer deploys an automated decision system and subsequently discovers that the automated decision system has caused algorithmic discrimination, the deployer, without unreasonable delay, but no later than 90 days after the date of the discovery, shall send to the Attorney General, in a form and manner prescribed by the Attorney General, a notice disclosing the discovery.
R-02 Regulatory Disclosure & Submissions · R-02.2 · Deployer · Automated Decisionmaking
O.C.G.A. § 10-16-9
Plain Language
The Attorney General may demand that deployers (or their contracted third parties) produce any documentation or records required under this chapter within seven days. This includes risk management policies, impact assessments, and related records. Produced materials are exempt from Georgia open records requirements. Deployers may designate materials as proprietary or trade secret, and disclosure does not waive attorney-client privilege or work-product protection.
Statutory Text
The Attorney General may require that a deployer, or a third party contracted by the deployer, disclose to the Attorney General, no later than seven days after and in a form and manner prescribed by the Attorney General, any documentation or records required by this chapter. The Attorney General may evaluate the risk management policy, impact assessment, or records to ensure compliance with this chapter, and the risk management policy, impact assessment, and such records, notwithstanding the provisions of Article 4 of Chapter 18 of Title 50, relating to open records, shall not be open to inspection by or made available to the public. In a disclosure pursuant to this Code section, a deployer may designate the statement or documentation as including proprietary information or a trade secret. To the extent that any information contained in the risk management policy, impact assessment, or records is subject to attorney-client privilege or work-product protection, the disclosure does not constitute a waiver of the privilege or protection.
T-01 AI Identity Disclosure · T-01.1 · Developer · Deployer · Automated Decisionmaking
O.C.G.A. § 10-16-11(a)-(b)
Plain Language
Any deployer or developer that makes available an AI system intended to interact with consumers must disclose to each interacting consumer that they are interacting with an AI system. This is a per-interaction disclosure obligation — not a one-time or pre-engagement notice. The disclosure is excused only when it would be obvious to a reasonable person that they are interacting with AI. This provision applies to all AI systems intended for consumer interaction, not just automated decision systems — it uses the broader 'artificial intelligence system' definition.
Statutory Text
(a) Except as provided in subsection (b) of this Code section, a deployer or other developer that deploys, offers, sells, leases, licenses, gives, or otherwise makes available an artificial intelligence system that is intended to interact with consumers shall ensure the disclosure to each consumer who interacts with the artificial intelligence system that the consumer is interacting with an artificial intelligence system. (b) Disclosure is not required under subsection (a) of this Code section under circumstances in which it would be obvious to a reasonable person that the person is interacting with an artificial intelligence system.
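In a consumer-facing chat or voice interface, this duty maps onto a disclosure at the start of each session. A minimal sketch; the subsection (b) obviousness carve-out is reduced to a boolean that the caller must justify outside the code:

# Minimal sketch of the § 10-16-11 AI identity disclosure for a consumer
# session. Whether the obviousness exception applies is a judgment call
# the code cannot make; it is modeled here as an explicit flag.
def open_consumer_session(obviously_ai_to_reasonable_person: bool = False) -> list[str]:
    transcript: list[str] = []
    if not obviously_ai_to_reasonable_person:
        transcript.append(
            "Notice: you are interacting with an artificial intelligence system."
        )
    return transcript

session = open_consumer_session()
print(session[0])  # the disclosure leads the interaction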