SB-540
GA · State · USA
● Passed
Proposed Effective Date
2025-07-01
Georgia SB 540 — An Act to amend Chapter 5 of Title 39 of the Official Code of Georgia Annotated, relating to online internet safety, so as to require certain disclosures related to conversational AI services
Summary

Georgia SB 540 imposes safety, disclosure, and minor-protection obligations on operators of conversational AI services accessible to the general public. Operators must disclose AI identity to minor account holders via a constantly visible disclaimer or session-start plus periodic reminders, and to all users when a reasonable person could be misled. Operators must implement age verification before providing access to sexually explicit content, prevent harmful content generation directed at minors, offer parental privacy tools, adopt crisis response protocols for suicidal ideation or self-harm, and refrain from representing the service as providing professional mental or behavioral health care. The Attorney General may bring civil enforcement actions with penalties up to $10,000 per violation. Developers are expressly shielded from liability when a separate operator makes their service available to the public.

Enforcement & Penalties
Enforcement Authority
The Attorney General may bring civil enforcement actions for violations; enforcement is agency-initiated. The statute does not explicitly create a private right of action, and while it preserves other remedies available at law or in equity, it does not itself grant private standing. It also expressly shields developers of a conversational AI service that is made available to the public by a separate operator from liability under this Code section.
Penalties
Civil penalties of up to $10,000.00 per violation; injunctive relief is also available. The statute preserves any other available remedy at law or in equity.
Who Is Covered
'Operator' means a person that owns, controls, and makes available a conversational AI service to the public. Such term shall not include an app store provider or search engine solely because the app store provider or search engine provides access to a conversational AI service.
What Is Covered
'Conversational AI service' means a generative artificial intelligence system offered as a software application, web interface, or computer program that is accessible to the general public and that primarily simulates human conversation and interaction through textual, visual, or aural communication. Such term shall not include an application, web interface, or computer program that: (A) Is primarily designed and marketed for use by developers or researchers; (B) Is designed to provide outputs relating to a narrow and discrete topic; (C) Is primarily designed and marketed for commercial use by business entities; (D) Functions as a speaker and voice command interface, or voice activated virtual assistant for a consumer electronic device; or (E) Is used by a business solely for internal purposes.
Compliance Obligations · 9 obligations
T-01 AI Identity Disclosure · T-01.1 · T-01.2 · Deployer · Chatbot · Minors
O.C.G.A. § 39-5-6(b)
Plain Language
Operators must unconditionally disclose to minor account holders that they are interacting with an AI, not a human. The operator may satisfy this through either a constantly visible disclaimer or a notice at the beginning of each session plus reminders at least every three hours during continuous interactions. This obligation is unconditional for minors, unlike subsection (e), which applies to all users only when a reasonable person could be misled. An illustrative timing sketch follows the statutory text below.
Statutory Text
An operator shall clearly and conspicuously disclose to a minor account holder that he or she is interacting with a conversational AI service as opposed to a natural person: (1) With a constantly visible disclaimer; or (2) At the beginning of each session and appearing at least every three hours in a continuous conversational AI service interaction.
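Illustrative Sketch
A minimal timing sketch, assuming a TypeScript deployment and the session-plus-reminders compliance path. The Session shape and function names are hypothetical, not drawn from the statute.

```typescript
// Hypothetical helper for the section 39-5-6(b) disclosure cadence:
// notice at session start, then at least every three hours of
// continuous interaction. (The alternative compliance path is a
// constantly visible disclaimer, which needs no timer at all.)

const THREE_HOURS_MS = 3 * 60 * 60 * 1000;

interface Session {
  isMinorAccount: boolean;
  lastDisclosureAt: number | null; // epoch ms; null until the first notice
}

function needsAiDisclosure(session: Session, now: number): boolean {
  if (!session.isMinorAccount) return false; // (b) is minor-specific; (e) covers other users
  if (session.lastDisclosureAt === null) return true; // beginning of each session
  return now - session.lastDisclosureAt >= THREE_HOURS_MS;
}

function recordDisclosure(session: Session, now: number): void {
  session.lastDisclosureAt = now;
}
```

Checking needsAiDisclosure before rendering each response keeps the reminder interval at or under the statutory three-hour ceiling.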
T-01 AI Identity Disclosure · T-01.1 · Deployer · Chatbot
O.C.G.A. § 39-5-6(e)
Plain Language
For all users (not just minors), operators must provide a clear and conspicuous disclosure that the user is interacting with AI when a reasonable person could be expected to be misled into thinking they are talking to a human. This is a conditional trigger — if the conversational AI service does not plausibly appear to be a human, the disclosure is not required. Compare to subsection (b), which imposes an unconditional disclosure obligation for minor account holders.
Statutory Text
If an individual could reasonably be expected to be misled to believe he or she was interacting with a natural person, an operator shall clearly and conspicuously disclose that the conversational AI service is not a natural person.
MN-01 Minor User AI Safety Protections · MN-01.4 · Deployer · Chatbot · Minors
O.C.G.A. § 39-5-6(c)
Plain Language
Operators may not use variable-ratio reward mechanics, such as points or similar rewards given at unpredictable intervals, to encourage minors to engage more with the conversational AI service. The prohibition requires both unpredictable intervals and the intent to increase engagement; predictable reward schedules, or rewards without engagement-increasing intent, would not be covered. An illustrative sketch follows the statutory text below.
Statutory Text
An operator shall not provide a minor account with points or similar rewards at unpredictable intervals with the intent to encourage increased engagement with the conversational AI service.
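Illustrative Sketch
A minimal sketch, assuming reward scheduling is expressed as a typed configuration; RewardSchedule and its variants are hypothetical labels for the statutory distinction between predictable and unpredictable intervals. The intent element is a legal question the code cannot capture.

```typescript
// Hypothetical guard for section 39-5-6(c): no points or similar
// rewards at unpredictable intervals for minor accounts where the
// intent is to increase engagement.

type RewardSchedule =
  | { kind: 'fixed-interval'; everyNMessages: number } // predictable: outside the prohibition
  | { kind: 'variable-ratio' };                        // unpredictable intervals

function mayGrantReward(isMinorAccount: boolean, schedule: RewardSchedule): boolean {
  if (isMinorAccount && schedule.kind === 'variable-ratio') {
    return false; // blocked for minor accounts regardless of reward size
  }
  return true;
}
```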
MN-01 Minor User AI Safety Protections · MN-01.5 · MN-01.6 · Deployer · Chatbot · Minors
O.C.G.A. § 39-5-6(d)
Plain Language
Operators must implement reasonable measures to prevent their conversational AI service from generating four categories of harmful content when interacting with minor account holders: (1) visual material depicting sexually explicit conduct; (2) statements suggesting the minor engage in sexual conduct; (3) statements sexually objectifying the minor; and (4) statements that would mislead a reasonable person into believing they are talking to a human, including claims of sentience, simulated emotional dependence, romantic or sexual innuendo, and adult-minor romantic role-playing. The standard is 'reasonable measures,' not absolute prevention, but the obligation covers both sexually explicit output and anthropomorphic deception. An illustrative screening sketch follows the statutory text below.
Statutory Text
For minor account holders, the operator shall institute reasonable measures to prevent the conversational AI service from: (1) Producing visual material of sexually explicit conduct; (2) Generating statements that suggest the account holder engage in sexual conduct; (3) Generating statements that sexually objectify the account holder; or (4) Generating statements that would lead a reasonable person to believe that the person is interacting with a natural person, including but not limited to: (A) Explicit claims that the conversational AI service is sentient or a natural person; (B) Statements that simulate emotional dependence; (C) Statements that simulate romantic or sexual innuendos; or (D) Role-playing of adult-minor romantic relationships.
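Illustrative Sketch
A minimal pre-delivery screen, assuming the operator already runs a moderation model; classify() is a hypothetical stand-in for that model, and the category labels simply mirror the four statutory paragraphs. The statute requires 'reasonable measures,' not this or any particular architecture.

```typescript
// Hypothetical mapping of the four section 39-5-6(d) categories onto a
// pre-delivery content screen for minor accounts.

const MINOR_BLOCKED_CATEGORIES = [
  'sexually-explicit-visual',  // (d)(1)
  'solicits-sexual-conduct',   // (d)(2)
  'sexually-objectifying',     // (d)(3)
  'claims-to-be-human',        // (d)(4): sentience claims, simulated emotional
                               // dependence, romantic/sexual innuendo,
                               // adult-minor romantic role-play
] as const;

type BlockedCategory = (typeof MINOR_BLOCKED_CATEGORIES)[number];

// Placeholder for the operator's actual moderation model or rules.
function classify(output: string): BlockedCategory[] {
  return [];
}

// Block or regenerate any candidate output that hits a category.
function safeForMinor(output: string): boolean {
  return classify(output).length === 0;
}
```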
MN-01 Minor User AI Safety Protections · MN-01.1 · Deployer · Chatbot · Minors
O.C.G.A. § 39-5-6(f)
Plain Language
Before providing access to any conversational AI service capable of generating sexually explicit content, operators must verify the user's age using a reasonable method. Acceptable methods include submission of a digitized identification card (e.g., a digital copy of a driver's license), government-issued identification, or any commercially reasonable method that meets or exceeds NIST's Identity Assurance Level 2 standard. The non-exhaustive list gives operators flexibility, but the floor is a commercially reasonable method. Note that this applies to any service that 'could provide' such content, not only services designed to do so. An illustrative age-gate sketch follows the statutory text below.
Statutory Text
Before allowing access to a conversational AI service that could provide synthetic content containing sexually explicit conduct, an operator shall use a reasonable age verification method, which may include, but not be limited to: (1) The submission of a digitized identification card, including a digital copy of a driver's license; (2) The submission of government issued identification; or (3) Any commercially reasonable age verification method that meets or exceeds an Identity Assurance Level 2 standard as defined by the National Institute of Standards and Technology.
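Illustrative Sketch
A minimal age-gate sketch mirroring the three statutory verification paths; the AgeVerification type and method labels are hypothetical, and IAL2 appears only because the statute names it as the floor for the commercially reasonable path.

```typescript
// Hypothetical gate for section 39-5-6(f): verify age before allowing
// access to any service that could generate sexually explicit synthetic
// content. The trigger is capability, not design intent.

type AgeVerification =
  | { method: 'digitized-id' }             // (f)(1), e.g. digital driver's license
  | { method: 'government-id' }            // (f)(2)
  | { method: 'commercial'; ial: number }; // (f)(3)

function verificationAcceptable(v: AgeVerification): boolean {
  switch (v.method) {
    case 'digitized-id':
    case 'government-id':
      return true;
    case 'commercial':
      return v.ial >= 2; // must meet or exceed NIST Identity Assurance Level 2
  }
}

function mayAccess(canGenerateExplicit: boolean, v: AgeVerification | null): boolean {
  if (!canGenerateExplicit) return true; // no gate required
  return v !== null && verificationAcceptable(v);
}
```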
MN-01 Minor User AI Safety Protections · MN-01.3 · Deployer · Chatbot · Minors
O.C.G.A. § 39-5-6(g)
Plain Language
Operators must provide parents or guardians of minor account holders with tools to manage the minor's privacy and account settings. The statute does not specify which settings must be controllable; the obligation is to offer management tools, giving operators some discretion in implementation. The tools must, however, cover both privacy settings and account settings. An illustrative sketch follows the statutory text below.
Statutory Text
An operator shall offer tools for a minor account holder's parent or guardian to manage the account holder's privacy and account settings.
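Illustrative Sketch
A minimal sketch of the tool surface, assuming a settings object per minor account. Every concrete field here is an assumption, since the statute requires only that both privacy and account settings be manageable by a parent or guardian.

```typescript
// Hypothetical shape for the section 39-5-6(g) parent/guardian tools.

interface MinorAccountControls {
  privacy: {
    chatHistoryRetention: 'off' | '30d' | '90d'; // hypothetical options
    shareDataForModelTraining: boolean;
  };
  account: {
    dailyTimeLimitMinutes: number | null; // null = no limit
    accountEnabled: boolean;
  };
}

// A guardian update replaces whole privacy/account sections at a time.
function applyGuardianUpdate(
  current: MinorAccountControls,
  update: Partial<MinorAccountControls>
): MinorAccountControls {
  return { ...current, ...update };
}
```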
S-04 AI Crisis Response Protocols · S-04.1 · Deployer · Chatbot
O.C.G.A. § 39-5-6(h)
Plain Language
Operators must adopt and maintain a protocol governing how the conversational AI service responds when a user raises suicidal ideation or self-harm. The protocol must include making reasonable efforts to refer the user to crisis service providers. Unlike California SB 243, this provision does not require public posting of the protocol details, annual reporting of crisis referral metrics, or use of evidence-based methods. The obligation is to adopt the protocol and make reasonable referral efforts; the standard is 'reasonable efforts,' not guaranteed delivery. An illustrative sketch follows the statutory text below.
Statutory Text
An operator shall adopt a protocol for the conversational AI service to respond to a user prompt regarding suicidal ideation or self-harm, which shall include making reasonable efforts to provide a response which refers the user to crisis service providers.
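Illustrative Sketch
A minimal protocol hook, assuming a detection step ahead of normal generation. The regex detector is a deliberately crude placeholder for whatever model or rules the operator actually uses, and the 988 Lifeline referral is one example of a crisis service provider, not a statutory mandate.

```typescript
// Hypothetical section 39-5-6(h) hook: detect suicidal ideation or
// self-harm in a prompt and make reasonable efforts to refer the user
// to crisis service providers.

function detectsCrisisContent(prompt: string): boolean {
  // Placeholder; a real deployment would use a trained classifier.
  return /suicid|self[- ]?harm/i.test(prompt);
}

function crisisReferralOrNull(prompt: string): string | null {
  if (!detectsCrisisContent(prompt)) return null;
  return (
    'It sounds like you may be going through something very difficult. ' +
    'You can reach the 988 Suicide & Crisis Lifeline any time by calling ' +
    'or texting 988.'
  );
}
```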
CP-01 Deceptive & Manipulative AI Conduct · CP-01.9 · Deployer · Chatbot
O.C.G.A. § 39-5-6(i)
Plain Language
Operators may not knowingly and intentionally program or cause a conversational AI service to represent that it provides professional mental or behavioral health care. The mens rea standard is high — 'knowingly and intentionally' — meaning accidental or emergent AI outputs claiming to be a mental health professional would not violate this provision unless the operator deliberately caused or programmed the system to do so. The prohibition covers explicit representations only, not implied suggestions.
Statutory Text
An operator shall not knowingly and intentionally cause or program a conversational AI service to make any representation or statement that explicitly indicates that the conversational AI service is designed to provide professional mental or behavioral health care.
Other · Chatbot
O.C.G.A. § 39-5-6(k)
Plain Language
Developers of conversational AI services are expressly shielded from liability under this statute when a separate operator makes the service available to the public. All obligations under this Code section fall on the operator, not the upstream developer. This creates no new compliance obligation, but it is significant: product counsel for a developer whose service is deployed by a third-party operator need not treat this statute as imposing direct obligations on the developer.
Statutory Text
Nothing in this Code section shall be construed to create liability for the developer of a conversational AI service which is made available to the public by a separate operator.