SB-540
GA · State · USA
● Passed
Proposed Effective Date
2025-07-01
Georgia SB 540 — An Act to amend Chapter 5 of Title 39 of the Official Code of Georgia Annotated, relating to online internet safety, so as to require certain disclosures related to conversational AI services
Summary

Imposes safety, disclosure, and minor protection obligations on operators of conversational AI services accessible to the general public in Georgia. Requires operators to disclose AI identity to minor account holders either via a constantly visible disclaimer or at session start with reminders every three hours. Requires reasonable age verification before allowing access to services that could produce sexually explicit synthetic content. Prohibits operators from knowingly programming conversational AI to represent itself as providing professional mental or behavioral health care. Requires adoption of a crisis response protocol for suicidal ideation and self-harm. Enforcement is exclusively through the Attorney General, who may seek injunctive relief or civil penalties up to $10,000 per violation. Developers are expressly shielded from liability when a separate operator makes the service available to the public.

Enforcement & Penalties
Enforcement Authority
The Attorney General may bring civil enforcement actions for violations; enforcement is exclusively agency-initiated. The statute expressly provides that nothing in it shall be construed to limit or preclude any other available remedy at law or equity, but it does not create a private right of action.
Penalties
Civil penalties up to $10,000.00 per violation. Injunctive relief is also available. The statute preserves any other available remedy at law or equity.
Who Is Covered
'Operator' means a person that owns, controls, and makes available a conversational AI service to the public. Such term shall not include an app store provider or search engine solely because the app store provider or search engine provides access to a conversational AI service.
What Is Covered
'Conversational AI service' means a generative artificial intelligence system offered as a software application, web interface, or computer program that is accessible to the general public and that primarily simulates human conversation and interaction through textual, visual, or aural communication. Such term shall not include an application, web interface, or computer program that: (A) Is primarily designed and marketed for use by developers or researchers; (B) Is designed to provide outputs relating to a narrow and discrete topic; (C) Is primarily designed and marketed for commercial use by business entities; (D) Functions as a speaker and voice command interface, or voice activated virtual assistant for a consumer electronic device; or (E) Is used by a business solely for internal purposes.
Compliance Obligations (8 obligations)
T-01 AI Identity Disclosure · T-01.1 · T-01.2 · Deployer · Chatbot · Minors
O.C.G.A. § 39-5-6(b)
Plain Language
Operators must proactively and unconditionally disclose to all minor account holders that they are interacting with AI, not a human. Compliance may be achieved in one of two ways: (1) a constantly visible on-screen disclaimer, or (2) a disclosure at the beginning of each session plus a reminder at least every three hours during continuous interactions. This is not conditional on whether the minor could be misled — disclosure is mandatory for all minor accounts. Compare to subsection (e), which imposes a separate conditional disclosure for all users.
Statutory Text
An operator shall clearly and conspicuously disclose to a minor account holder that he or she is interacting with a conversational AI service as opposed to a natural person: (1) With a constantly visible disclaimer; or (2) At the beginning of each session and appearing at least every three hours in a continuous conversational AI service interaction.
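The session-based compliance option in clause (2) reduces to a simple timing rule an operator's service could enforce. A minimal sketch, assuming the operator tracks a per-session `last_disclosure` timestamp (the function name is illustrative; the three-hour interval comes from the statute):

```python
from datetime import datetime, timedelta

# Statutory maximum gap between in-session reminders, per § 39-5-6(b)(2).
REMINDER_INTERVAL = timedelta(hours=3)

def disclosure_due(last_disclosure: datetime, now: datetime) -> bool:
    """True when a fresh AI-identity disclosure must be shown.

    Option (2) compliance: disclose at session start, then again at
    least every three hours of continuous interaction.
    """
    return now - last_disclosure >= REMINDER_INTERVAL
```

An operator choosing option (1), a constantly visible disclaimer, would not need timing logic at all.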
T-01 AI Identity Disclosure · T-01.1 · Deployer · Chatbot
O.C.G.A. § 39-5-6(e)
Plain Language
For all users (not just minors), if the conversational AI service could reasonably mislead someone into thinking they are talking to a human, the operator must display a clear and conspicuous disclosure that the service is not a natural person. Unlike subsection (b)'s unconditional obligation for minors, this general disclosure is triggered only when a reasonable person could be misled. If the AI clearly presents as non-human, no disclosure is required under this provision.
Statutory Text
If an individual could reasonably be expected to be misled to believe he or she was interacting with a natural person, an operator shall clearly and conspicuously disclose that the conversational AI service is not a natural person.
MN-01 Minor User AI Safety Protections · MN-01.4 · Deployer · Chatbot · Minors
O.C.G.A. § 39-5-6(c)
Plain Language
Operators may not use variable-ratio reward mechanics — such as points, badges, or similar rewards given at unpredictable intervals — with the intent of encouraging increased engagement by minor account holders. This targets addictive engagement design patterns. The prohibition requires both unpredictable timing and intent to encourage increased engagement; predictable, non-engagement-driven rewards may still be permissible.
Statutory Text
An operator shall not provide a minor account with points or similar rewards at unpredictable intervals with the intent to encourage increased engagement with the conversational AI service.
MN-01 Minor User AI Safety Protections · MN-01.5 · MN-01.6 · Deployer · Chatbot · Minors
O.C.G.A. § 39-5-6(d)
Plain Language
Operators must implement reasonable measures to prevent four categories of harmful output directed at minor account holders: (1) visual material depicting sexually explicit conduct; (2) statements suggesting the minor engage in sexual conduct; (3) statements that sexually objectify the minor; and (4) statements that would lead a reasonable person to believe they are interacting with a human — including claims of sentience, simulated emotional dependence, romantic or sexual innuendo, and adult-minor romantic role-playing. The standard is 'reasonable measures,' not absolute prevention, giving operators some latitude in implementation. The sexually explicit conduct definition is incorporated by reference from Georgia's existing criminal code at O.C.G.A. § 16-12-100.
Statutory Text
For minor account holders, the operator shall institute reasonable measures to prevent the conversational AI service from: (1) Producing visual material of sexually explicit conduct; (2) Generating statements that suggest the account holder engage in sexual conduct; (3) Generating statements that sexually objectify the account holder; or (4) Generating statements that would lead a reasonable person to believe that the person is interacting with a natural person, including but not limited to: (A) Explicit claims that the conversational AI service is sentient or a natural person; (B) Statements that simulate emotional dependence; (C) Statements that simulate romantic or sexual innuendos; or (D) Role-playing of adult-minor romantic relationships.
MN-01 Minor User AI Safety Protections · MN-01.1 · Deployer · Chatbot · Minors
O.C.G.A. § 39-5-6(f)
Plain Language
Before allowing access to any conversational AI service capable of generating sexually explicit synthetic content, operators must implement a reasonable age verification process. The statute provides a non-exhaustive list of acceptable methods: digitized ID card (including driver's license), government-issued identification, or any commercially reasonable method meeting or exceeding NIST Identity Assurance Level 2. This applies to the service as a whole if it 'could provide' such content — operators cannot satisfy the requirement by merely blocking explicit output while skipping verification. The obligation is triggered by the service's capability, not actual generation of explicit content.
Statutory Text
Before allowing access to a conversational AI service that could provide synthetic content containing sexually explicit conduct, an operator shall use a reasonable age verification method, which may include, but not be limited to: (1) The submission of a digitized identification card, including a digital copy of a driver's license; (2) The submission of government issued identification; or (3) Any commercially reasonable age verification method that meets or exceeds an Identity Assurance Level 2 standard as defined by the National Institute of Standards and Technology.
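The gating logic turns on the service's capability rather than its actual output. A minimal sketch, assuming hypothetical flags for capability and verification status (the enum and function names are illustrative; the verification methods mirror the statute's non-exhaustive list):

```python
from enum import Enum
from typing import Optional

class VerificationMethod(Enum):
    # Methods enumerated (non-exhaustively) in § 39-5-6(f).
    DIGITIZED_ID = "digitized identification card, e.g. digital driver's license"
    GOVERNMENT_ID = "government issued identification"
    NIST_IAL2 = "commercially reasonable method meeting NIST IAL2 or above"

def may_grant_access(service_could_produce_explicit: bool,
                     verified_via: Optional[VerificationMethod]) -> bool:
    """Gate on the service's *capability*, not on actual output:
    if the service could produce sexually explicit synthetic content,
    access requires completed reasonable age verification first."""
    if not service_could_produce_explicit:
        return True  # subsection (f) is not triggered
    return verified_via is not None
```

Note that blocking explicit output at generation time would not remove the gate: the trigger is what the service could provide.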
MN-01 Minor User AI Safety Protections · MN-01.3 · Deployer · Chatbot · Minors
O.C.G.A. § 39-5-6(g)
Plain Language
Operators must provide parents or guardians of minor account holders with tools to manage the minor's privacy and account settings. The statute does not specify the particular controls required, giving operators discretion in implementation, but the tools must meaningfully enable management of both privacy settings and account settings.
Statutory Text
An operator shall offer tools for a minor account holder's parent or guardian to manage the account holder's privacy and account settings.
MN-02 AI Crisis Response Protocols · MN-02.1 · Deployer · Chatbot
O.C.G.A. § 39-5-6(h)
Plain Language
Operators must adopt and maintain a protocol governing how the conversational AI service responds when a user expresses suicidal ideation or self-harm. The protocol must include making reasonable efforts to refer the user to crisis service providers (e.g., suicide hotlines, crisis text lines). This applies to all users, not just minors. The standard is 'reasonable efforts' to provide a referral, not absolute assurance of delivery. Unlike CA SB 243, there is no explicit requirement to publish the protocol on the operator's website or to report crisis referral metrics.
Statutory Text
An operator shall adopt a protocol for the conversational AI service to respond to a user prompt regarding suicidal ideation or self-harm, which shall include making reasonable efforts to provide a response which refers the user to crisis service providers.
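The statute mandates only the referral step once a crisis prompt is identified; detecting suicidal ideation or self-harm in free text is a separate, hard classification problem outside this sketch. A minimal sketch of the referral portion, with illustrative crisis service providers (the statute names none specifically):

```python
# Illustrative crisis service providers; § 39-5-6(h) requires referral
# to "crisis service providers" but does not name specific ones.
CRISIS_RESOURCES = [
    "988 Suicide & Crisis Lifeline (call or text 988)",
    "Crisis Text Line (text HOME to 741741)",
]

def crisis_referral_response() -> str:
    """Compose the referral portion of a crisis response.

    Invoked after a user prompt has been identified as expressing
    suicidal ideation or self-harm; satisfies the 'reasonable efforts
    to provide a response which refers the user' language.
    """
    lines = ["If you are in crisis, help is available:"]
    lines += [f"  - {resource}" for resource in CRISIS_RESOURCES]
    return "\n".join(lines)
```

The "reasonable efforts" standard suggests the referral must be attempted in the response, not that delivery or user follow-through must be guaranteed.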
CP-01 Deceptive & Manipulative AI Conduct · CP-01.9 · Deployer · Chatbot
O.C.G.A. § 39-5-6(i)
Plain Language
Operators may not knowingly and intentionally program or cause their conversational AI service to represent that it provides professional mental or behavioral health care. The prohibition requires both knowledge and intent — inadvertent or emergent AI outputs claiming to be a mental health professional would not violate this provision unless the operator knowingly caused or programmed the behavior. The scope is limited to 'explicit' representations; implicit suggestions that fall short of explicit claims may not be covered.
Statutory Text
An operator shall not knowingly and intentionally cause or program a conversational AI service to make any representation or statement that explicitly indicates that the conversational AI service is designed to provide professional mental or behavioral health care.