SB-5
LA · State · USA
Status: Pending
Proposed Effective Date: 2026-01-01
Louisiana SB 5 — Artificial Intelligence Applications Relating to Mental Health (R.S. 28:16 and 17)
Summary

Regulates operators of mental health chatbot platforms accessible to Louisiana users. Requires operators to clearly and conspicuously disclose to users that the chatbot is AI and not human — before the user accesses features, upon return after seven days of inactivity, and any time a user asks. Operators must maintain protocols addressing suicidal ideation, self-harm, and harm to others, including crisis service referrals. Prohibits sale or sharing of individually identifiable health information or user input with third parties (with narrow exceptions for healthcare providers, health plans, and functionality contractors subject to HIPAA-equivalent standards). Prohibits in-conversation advertising unless clearly labeled, and prohibits using user input to target or customize advertisements. Enforced exclusively by the attorney general with a 45-day cure period and civil fines up to $10,000 per violation.

Enforcement & Penalties
Enforcement Authority
Attorney general enforcement only. The attorney general may bring a civil action to enforce any violation. Before initiating an enforcement action, the attorney general must provide the operator with 45 days' written notice identifying each alleged violation. The attorney general may not initiate an action if the operator cures the violation within 45 days and provides a written statement that the violation is cured and no further violations will occur. The attorney general may proceed if the operator fails to cure or commits another violation of the same provision after curing and providing a written statement. No private right of action is created.
Penalties
Civil fine of up to $10,000 per violation. Violation of an administrative or court order issued under this Part subjects the person to a civil penalty of up to $5,000 per violation. If the court grants judgment or injunctive relief to the attorney general, the court shall award reasonable attorney fees, court costs, and investigative costs. All fines and civil penalties collected are used by the attorney general for consumer protection enforcement or education.
Who Is Covered
"Operator" means a person who makes a mental health chatbot platform available to a user.
What Is Covered
(a) "Mental health chatbot" means an artificial intelligence technology that: (i) Uses a system that is trained on data and is designed to simulate human conversation with a consumer through text, audio, or visual communication. (ii) Generates unscripted outputs similar to outputs created by a human, with limited or no human oversight. (iii) Engages in interactive conversations with a user of the mental health chatbot similar to the confidential communications that an individual would have with a licensed mental health provider. (iv) Represents to a user or causes a reasonable person to believe that it can or will provide mental health therapy or help a user manage or treat mental health conditions. (b) "Mental health chatbot" does not include artificial intelligence technology that only provides scripted output, such as guided meditations or mindfulness exercises, or analyzes an individual's input for the purpose of connecting the individual with a human mental health provider.
Compliance Obligations · 5 obligations
T-01 AI Identity Disclosure · T-01.1 · T-01.2 · T-01.3 · Deployer · Chatbot · Healthcare
R.S. 28:16(B)(1)-(3)
Plain Language
Operators must cause the mental health chatbot to clearly and conspicuously disclose that it is AI and not a human in three situations: (1) before the user can access any features, an unconditional pre-access gate; (2) at the start of any new interaction if the user has been inactive for more than seven days; and (3) whenever a user asks or prompts the chatbot about whether AI is being used. The seven-day requirement works as a periodic reminder for returning users: it is triggered at the start of a new interaction following the inactivity period, not on a fixed schedule within a session. The on-demand disclosure in subsection (3) requires the chatbot to accurately identify itself as AI whenever asked. A sketch of this trigger logic follows the statutory text below.
Statutory Text
An operator of a mental health chatbot shall cause the chatbot to clearly and conspicuously disclose to a user that the chatbot is an artificial intelligence technology and not a human. The disclosure shall be made: (1) Before the user may access the features of the mental health chatbot. (2) At the beginning of any interaction with the user if the user has not accessed the mental health chatbot within the previous seven days. (3) Any time a user asks or otherwise prompts the mental health chatbot about whether artificial intelligence is being used.
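A minimal sketch of how an operator might wire the three triggers, assuming a per-user session store and an upstream intent detector for "is this AI?" questions. The type names, session fields, and intent flag are illustrative; the statute does not prescribe any implementation.

```typescript
// Illustrative sketch of the three disclosure triggers in R.S. 28:16(B)(1)-(3).
// All names are hypothetical.

const INACTIVITY_THRESHOLD_MS = 7 * 24 * 60 * 60 * 1000; // seven days

interface UserSession {
  hasSeenPreAccessDisclosure: boolean; // B(1): shown before any feature access
  lastAccessedAt: Date | null;         // used for the B(2) seven-day check
}

type DisclosureTrigger = "pre-access" | "inactivity" | "on-demand" | null;

function requiredDisclosure(
  session: UserSession,
  now: Date,
  userAskedIfAI: boolean, // assumed upstream intent detection for B(3)
): DisclosureTrigger {
  // B(1): unconditional gate before the user may access any features.
  if (!session.hasSeenPreAccessDisclosure) return "pre-access";

  // B(2): re-disclose at the start of an interaction after more than
  // seven days of inactivity.
  if (
    session.lastAccessedAt !== null &&
    now.getTime() - session.lastAccessedAt.getTime() > INACTIVITY_THRESHOLD_MS
  ) {
    return "inactivity";
  }

  // B(3): disclose whenever the user asks whether AI is being used.
  if (userAskedIfAI) return "on-demand";

  return null; // no disclosure required at this point in the session
}
```

In practice the pre-access gate would block all chatbot features until the disclosure is shown, and the on-demand branch depends on intent detection that should be tuned to over-trigger rather than miss a direct question.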
S-02 Prohibited Conduct & Output Restrictions · S-02.7 · Deployer · Chatbot · Healthcare
R.S. 28:16(C)
Plain Language
Operators must maintain active protocols for detecting and responding to user expressions of suicidal ideation, self-harm, or intent to harm others. The protocols must include referral to crisis service providers such as a suicide hotline. This is a continuous operational requirement — the protocols must be in place at all times the chatbot is available, not merely documented as a policy. Unlike CA SB 243, this provision does not require public posting of the protocol details on the operator's website, nor does it require annual reporting of crisis referral metrics.
Statutory Text
An operator of a mental health chatbot shall have protocols in place to address possible suicidal ideation, self-harm, or physical harm to others expressed by the user, including referral to a crisis service provider such as a suicide hotline.
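The statute mandates that protocols exist but does not specify their mechanics, so the following is only a hypothetical shape: a naive keyword screen stands in for whatever validated risk-detection an operator actually deploys, and the 988 Lifeline is used as one example of a crisis service provider (the statute says only "such as a suicide hotline").

```typescript
// Hypothetical shape of a crisis-response protocol under R.S. 28:16(C).
// The placeholder keyword screen below is NOT a substitute for clinically
// validated detection, human escalation paths, or clinical review.

type CrisisCategory = "suicidal-ideation" | "self-harm" | "harm-to-others";

interface CrisisResponse {
  category: CrisisCategory;
  referral: string; // crisis service referral surfaced to the user
}

// Placeholder patterns only, for illustration.
const SCREENS: Array<{ category: CrisisCategory; pattern: RegExp }> = [
  { category: "suicidal-ideation", pattern: /\b(kill myself|end my life|suicide)\b/i },
  { category: "self-harm",         pattern: /\b(hurt myself|cutting myself)\b/i },
  { category: "harm-to-others",    pattern: /\b(hurt (him|her|them)|kill (him|her|them))\b/i },
];

function screenForCrisis(userInput: string): CrisisResponse | null {
  for (const { category, pattern } of SCREENS) {
    if (pattern.test(userInput)) {
      return {
        category,
        // The 988 Suicide & Crisis Lifeline is one example of a crisis
        // service provider; the statute names "a suicide hotline" generically.
        referral:
          "If you are in crisis, you can call or text 988 to reach the " +
          "Suicide & Crisis Lifeline, available 24/7.",
      };
    }
  }
  return null;
}
```

A production protocol would pair detection with escalation and logging, none of which the statute spells out.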
D-01 Automated Processing Rights & Data Controls · D-01.4 · Deployer · Chatbot · Healthcare
R.S. 28:16(D)(1)-(2)
Plain Language
Operators may not sell or share any individually identifiable health information or user input with third parties. Three narrow exceptions apply: (1) a healthcare provider requests the information with the user's consent; (2) the user's health plan requests the information at the user's request; or (3) the operator shares data with a contracted party solely to ensure the chatbot's effective functionality. When sharing under any exception, the operator and the receiving entity must comply with HIPAA privacy and security rules (45 CFR Parts 160 and 164, Subparts A and E) as if the operator were a HIPAA covered entity and the receiving party were a business associate. This effectively extends HIPAA-equivalent protections to mental health chatbot operators who would not otherwise be covered entities.
Statutory Text
(1) An operator of a mental health chatbot may not sell to or share with any third party any individually identifiable health information of a user or the user's input. This Subsection shall not apply to individually identifiable health information that is requested by a healthcare provider with the consent of the user, provided to a health plan of a user upon request of the user, or shared to ensure the effective functionality of the mental health chatbot with another party with which the operator has a contract related to such functionality. (2) When sharing information pursuant to this Subsection, the operator and the other entity shall comply with all applicable privacy and security provisions of 45 CFR Part 160 and 45 CFR Part 164, Subparts A and E, as if the operator were a covered entity and the other entity were a business associate, as such terms are defined in 45 CFR 160.103.
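As a compliance aid, the three exceptions in (D)(1) can be expressed as a gate in front of any outbound data flow. This is a sketch under our own naming; note that passing the gate only reaches (D)(2), which separately requires HIPAA-grade handling by both parties and is not captured in code here.

```typescript
// Illustrative gate over the three sharing exceptions in R.S. 28:16(D)(1).
// Qualifying for an exception is not the end of the analysis: D(2) layers
// HIPAA privacy and security rules (45 CFR Parts 160 and 164, Subparts A
// and E) onto any permitted share.

type ShareRecipient =
  | { kind: "healthcare-provider"; userConsented: boolean }
  | { kind: "health-plan"; requestedByUser: boolean }
  | { kind: "functionality-contractor"; contractCoversFunctionality: boolean }
  | { kind: "other" };

function shareIsPermitted(recipient: ShareRecipient): boolean {
  switch (recipient.kind) {
    case "healthcare-provider":
      return recipient.userConsented;               // provider request + user consent
    case "health-plan":
      return recipient.requestedByUser;             // plan request made at the user's request
    case "functionality-contractor":
      return recipient.contractCoversFunctionality; // contracted, functionality-only sharing
    case "other":
      return false;                                 // D(1): no sale or sharing otherwise
  }
}
```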
CP-01 Deceptive & Manipulative AI Conduct · Deployer · Chatbot · Healthcare
R.S. 28:16(E)
Plain Language
Operators may not use a mental health chatbot to advertise specific products or services within a user conversation unless two conditions are met: (1) the chatbot clearly and conspicuously identifies the content as an advertisement, and (2) the chatbot discloses to the user any sponsorship, business affiliation, or third-party agreement related to promoting the product or service. This is a conditional prohibition — in-conversation advertising is permitted only with full disclosure. The provision does not prohibit the chatbot from recommending that a user seek counseling, therapy, or assistance from a licensed healthcare professional (see § 16(G)).
Statutory Text
An operator may not use a mental health chatbot to advertise a specific product or service to a user in a conversation between the user and the mental health chatbot unless the chatbot clearly and conspicuously identifies the advertisement as an advertisement and discloses to the user any sponsorship, business affiliation, or agreement that the operator has with a third party to promote, advertise, or recommend the product or service.
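One way to operationalize the two conditions is a pre-send check on any ad surfaced in conversation. The shape below is hypothetical; "clearly and conspicuously" is ultimately a presentation question that the boolean flag here merely assumes away.

```typescript
// Hypothetical pre-send check for in-conversation ads under R.S. 28:16(E):
// an ad may surface only if it is labeled as an advertisement AND any
// sponsorship, affiliation, or third-party agreement is disclosed.

interface ConversationAd {
  body: string;
  labeledAsAdvertisement: boolean;      // condition (1): clear, conspicuous label
  affiliationDisclosure: string | null; // condition (2): sponsorship/affiliation text
  hasThirdPartyAgreement: boolean;
}

function adMaySurface(ad: ConversationAd): boolean {
  if (!ad.labeledAsAdvertisement) return false;
  // If a third-party promotion agreement exists, it must be disclosed.
  if (ad.hasThirdPartyAgreement && !ad.affiliationDisclosure) return false;
  return true;
}
```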
CP-01 Deceptive & Manipulative AI Conduct · Deployer · Chatbot · Healthcare
R.S. 28:16(F)(1)-(3)
Plain Language
Operators are prohibited from using a user's conversational input to target, select, or customize advertisements shown to the user. This covers three distinct uses: (1) deciding whether to show an ad at all (with a narrow exception for advertising the mental health chatbot itself); (2) selecting which product or service category to advertise; and (3) customizing how an ad is presented. This is a blanket prohibition on input-based ad targeting — operators cannot mine therapeutic conversations for advertising purposes. The prohibition applies to the user's input specifically, not to other data the operator may hold about the user.
Statutory Text
An operator of a mental health chatbot may not use a user's input to: (1) Determine whether to display an advertisement for a product or service to the user, unless the advertisement is for the mental health chatbot itself. (2) Determine a product, service, or category of product or service, to advertise to the user. (3) Customize how an advertisement is presented to the user.
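Because the prohibition attaches to the user's input rather than to advertising generally, the cleanest engineering response is structural: keep conversation content out of the ad-decision layer entirely. The sketch below assumes that architecture; all names are hypothetical.

```typescript
// Sketch of the input firewall R.S. 28:16(F) suggests: the ad-decision
// layer never receives the user's conversational input. F(1) contains one
// carve-out (input may inform whether to show an ad for the chatbot
// itself), but (2) category selection and (3) presentation customization
// have no carve-out at all.

interface AdRequest {
  // Intentionally no field carries conversation content; the firewall is
  // structural rather than a runtime check.
  sessionId: string;
  isSelfPromotion: boolean; // ad for the mental health chatbot itself
}

function selectAd(req: AdRequest): { show: boolean; creativeId?: string } {
  if (req.isSelfPromotion) {
    // Permitted under the F(1) carve-out; still served without any
    // input-derived category selection or presentation customization.
    return { show: true, creativeId: "self-promo-default" };
  }
  // Any other ad decision must be made without reference to user input;
  // the conservative default here is to show nothing.
  return { show: false };
}
```

Making the firewall structural (no input-bearing field exists on the request type) is easier to audit than a runtime filter over input-derived features.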