SB-5
LA · State · USA
● Pending
Proposed Effective Date: 2026-01-01
Louisiana SB 5 — Artificial Intelligence Applications Relating to Mental Health (R.S. 28:16 and 17)
Summary

Regulates operators of mental health chatbot platforms accessible to users in Louisiana. Requires operators to clearly and conspicuously disclose that the chatbot is AI and not a human — before the user accesses features, at the start of any interaction after a seven-day gap, and whenever the user asks. Operators must maintain protocols addressing suicidal ideation, self-harm, and intent to harm others, including crisis service referrals. The bill prohibits selling or sharing individually identifiable health information or user inputs except in narrow circumstances, and restricts in-conversation advertising and input-based ad targeting. Enforcement is exclusively through the attorney general, with a 45-day cure period for first violations and civil fines of up to $10,000 per violation.

Enforcement & Penalties
Enforcement Authority
Attorney general enforcement only. The attorney general may bring a civil action to enforce any violation. Before initiating an enforcement action, the attorney general must provide 45 days' written notice identifying each alleged violation. The attorney general may not initiate an action if the operator cures the violation within 45 days and provides a written statement that the violation is cured and no further violations will occur. The cure period does not apply if the operator fails to cure after notice or commits another violation of the same provision after previously curing and certifying compliance.
Penalties
Civil fine of up to $10,000 per violation. Violation of an administrative or court order issued under this Part subjects the violator to a civil penalty of up to $5,000 per violation. If the court grants judgment or injunctive relief, the court shall award the attorney general reasonable attorney fees, court costs, and investigative costs. All monies collected are used by the attorney general for consumer protection enforcement or education.
Who Is Covered
"Operator" means a person who makes a mental health chatbot platform available to a user.
What Is Covered
(a) "Mental health chatbot" means an artificial intelligence technology that: (i) Uses a system that is trained on data and is designed to simulate human conversation with a consumer through text, audio, or visual communication. (ii) Generates unscripted outputs similar to outputs created by a human, with limited or no human oversight. (iii) Engages in interactive conversations with a user of the mental health chatbot similar to the confidential communications that an individual would have with a licensed mental health provider. (iv) Represents to a user or causes a reasonable person to believe that it can or will provide mental health therapy or help a user manage or treat mental health conditions. (b) "Mental health chatbot" does not include artificial intelligence technology that only provides scripted output, such as guided meditations or mindfulness exercises, or analyzes an individual's input for the purpose of connecting the individual with a human mental health provider.
Compliance Obligations (6 obligations)
T-01 AI Identity Disclosure · T-01.1 · T-01.2 · T-01.3 · Deployer · Chatbot · Healthcare
R.S. 28:16(B)(1)-(3)
Plain Language
Operators must ensure the mental health chatbot clearly and conspicuously tells users it is AI and not a human in three situations: (1) before the user can access any features (unconditional initial disclosure), (2) at the start of any interaction following a seven-day gap in use (a re-disclosure obligation triggered by inactivity), and (3) whenever a user asks or prompts whether AI is being used (on-demand disclosure). This is an unconditional disclosure obligation — it does not depend on whether a reasonable person would be misled. The seven-day re-disclosure trigger is notably longer than the three-hour periodic reminder in CA SB 243, and there is no shorter interval for minor users.
Statutory Text
An operator of a mental health chatbot shall cause the chatbot to clearly and conspicuously disclose to a user that the chatbot is an artificial intelligence technology and not a human. The disclosure shall be made: (1) Before the user may access the features of the mental health chatbot. (2) At the beginning of any interaction with the user if the user has not accessed the mental health chatbot within the previous seven days. (3) Any time a user asks or otherwise prompts the mental health chatbot about whether artificial intelligence is being used.
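Illustrative Sketch
The bill does not prescribe any technical implementation. Purely as a hypothetical sketch, a deployer might encode the three §16(B) triggers as a pre-response check like the one below. The function, the session fields, and the keyword list are all invented for illustration; in particular, detecting an on-demand "are you AI?" prompt by substring matching is a simplification of what real intent detection would require.

```python
from datetime import datetime, timedelta

SEVEN_DAYS = timedelta(days=7)  # R.S. 28:16(B)(2) re-disclosure gap

# Hypothetical keyword hints for (B)(3); a real system would need
# more robust intent detection than substring matching.
AI_QUERY_HINTS = ("are you ai", "are you a bot", "are you human", "is this ai")

def disclosure_required(
    has_seen_initial_disclosure: bool,
    last_access: datetime | None,
    now: datetime,
    user_message: str | None = None,
) -> bool:
    """Return True if the chatbot must disclose that it is AI, not a human."""
    # (B)(1): before the user may access any features.
    if not has_seen_initial_disclosure:
        return True
    # (B)(2): at the start of any interaction after a seven-day gap.
    if last_access is None or now - last_access >= SEVEN_DAYS:
        return True
    # (B)(3): any time the user asks or prompts about whether AI is used.
    if user_message and any(h in user_message.lower() for h in AI_QUERY_HINTS):
        return True
    return False
```

Note that (B)(1) and (B)(2) fire regardless of what the user says, and (B)(3) fires regardless of timing: none of the triggers depends on whether the user appears confused.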
S-04 AI Crisis Response Protocols · S-04.1 · Deployer · Chatbot · Healthcare
R.S. 28:16(C)
Plain Language
Operators must maintain active protocols for detecting and responding to user expressions of suicidal ideation, self-harm, or intent to harm others. The protocols must include referral to crisis service providers such as a suicide hotline. This is a continuous operating requirement — the protocols must be in place at all times the chatbot is available to users. Unlike CA SB 243, this bill does not require operators to publicly post the protocol details on their website, nor does it require annual reporting of crisis referral counts.
Statutory Text
An operator of a mental health chatbot shall have protocols in place to address possible suicidal ideation, self-harm, or physical harm to others expressed by the user, including referral to a crisis service provider such as a suicide hotline.
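Illustrative Sketch
The statute mandates that protocols exist and include a crisis referral, but it names no detection method and no specific provider. The sketch below is one hypothetical shape for such a protocol: detection is delegated to an unspecified `assess` callable, and the referral text (which mentions the 988 Suicide & Crisis Lifeline as one example of a crisis service) is a placeholder.

```python
from dataclasses import dataclass
from typing import Callable, Optional

# Placeholder referral text; the statute requires referral to "a crisis
# service provider such as a suicide hotline" but names no specific one.
CRISIS_REFERRAL = (
    "If you are thinking about harming yourself or someone else, please "
    "reach out to a crisis service such as the 988 Suicide & Crisis Lifeline."
)

@dataclass
class CrisisAssessment:
    suicidal_ideation: bool   # §16(C): possible suicidal ideation
    self_harm: bool           # §16(C): self-harm
    harm_to_others: bool      # §16(C): physical harm to others

    def triggered(self) -> bool:
        return self.suicidal_ideation or self.self_harm or self.harm_to_others

def apply_crisis_protocol(
    user_message: str,
    assess: Callable[[str], CrisisAssessment],
) -> Optional[str]:
    """Return a mandatory referral message when crisis detection triggers.

    `assess` stands in for whatever detection an operator actually uses
    (keyword rules, a classifier, or both).
    """
    if assess(user_message).triggered():
        return CRISIS_REFERRAL
    return None
```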
D-01 Automated Processing Rights & Data Controls · D-01.4 · Deployer · Chatbot · Healthcare
R.S. 28:16(D)(1)-(2)
Plain Language
Operators may not sell or share with third parties any individually identifiable health information of a user or the user's input. Three narrow exceptions apply: (1) health information requested by a healthcare provider with the user's consent, (2) information provided to the user's health plan at the user's request, and (3) information shared with a contracted party to ensure the chatbot functions effectively. When sharing under any exception, both the operator and the receiving entity must comply with HIPAA privacy and security rules (45 CFR Parts 160 and 164, Subparts A and E) as if the operator were a HIPAA covered entity and the receiving party were a business associate. This effectively extends HIPAA-like obligations to mental health chatbot operators who would not otherwise be covered entities.
Statutory Text
D.(1) An operator of a mental health chatbot may not sell to or share with any third party any individually identifiable health information of a user or the user's input. This Subsection shall not apply to individually identifiable health information that is requested by a healthcare provider with the consent of the user, provided to a health plan of a user upon request of the user, or shared to ensure the effective functionality of the mental health chatbot with another party with which the operator has a contract related to such functionality. (2) When sharing information pursuant to this Subsection, the operator and the other entity shall comply with all applicable privacy and security provisions of 45 CFR Part 160 and 45 CFR Part 164, Subparts A and E, as if the operator were a covered entity and the other entity were a business associate, as such terms are defined in 45 CFR 160.103.
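Illustrative Sketch
Purely as an illustration of the rule structure, §16(D) can be read as a default-deny gate with three enumerated exceptions, plus a HIPAA condition on any permitted share. The enum values and the `recipient_is_hipaa_bound` flag below are hypothetical labels, not statutory terms.

```python
from enum import Enum, auto
from typing import Optional

class ShareBasis(Enum):
    """The three narrow exceptions in R.S. 28:16(D)(1)."""
    PROVIDER_REQUEST_WITH_CONSENT = auto()  # healthcare provider, with user consent
    HEALTH_PLAN_AT_USER_REQUEST = auto()    # user's health plan, at the user's request
    CONTRACTED_FUNCTIONALITY = auto()       # contracted party ensuring chatbot function

def may_share(basis: Optional[ShareBasis], recipient_is_hipaa_bound: bool) -> bool:
    """Gate for sharing identifiable health information or user input.

    Default rule: no sale or sharing with third parties. A share is
    permitted only under an enumerated exception, and (per §16(D)(2))
    only if both parties operate under the HIPAA privacy and security
    rules (45 CFR Parts 160 and 164, Subparts A and E) as if they were
    a covered entity and a business associate.
    """
    if basis is None:
        return False
    return recipient_is_hipaa_bound
```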
CP-01 Deceptive & Manipulative AI Conduct · Deployer · Chatbot · Healthcare
R.S. 28:16(E)
Plain Language
Operators may not use the mental health chatbot to advertise a specific product or service within a user conversation unless two conditions are met: (1) the chatbot clearly and conspicuously labels the advertisement as an advertisement, and (2) the chatbot discloses to the user any sponsorship, business affiliation, or agreement the operator has with a third party to promote, advertise, or recommend that product or service. This is not a blanket advertising ban — it is a conditional disclosure obligation that permits in-conversation advertising only if accompanied by conspicuous labeling and full relationship disclosure. Note that §16(G) expressly carves out recommendations to seek counseling, therapy, or other assistance from a licensed healthcare professional — those are not treated as advertisements.
Statutory Text
An operator may not use a mental health chatbot to advertise a specific product or service to a user in a conversation between the user and the mental health chatbot unless the chatbot clearly and conspicuously identifies the advertisement as an advertisement and discloses to the user any sponsorship, business affiliation, or agreement that the operator has with a third party to promote, advertise, or recommend the product or service.
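Illustrative Sketch
The conditional structure of §16(E) reduces to a conjunction: an in-conversation ad is permitted only if both the labeling condition and the affiliation-disclosure condition hold. The field names below are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class AdCandidate:
    """A hypothetical in-conversation advertisement."""
    labeled_as_ad: bool           # clearly and conspicuously identified as an ad
    affiliations_disclosed: bool  # sponsorship/affiliation/agreement disclosed

def ad_permitted(ad: AdCandidate) -> bool:
    """§16(E): both conditions must hold; failing either bars the ad."""
    return ad.labeled_as_ad and ad.affiliations_disclosed
```

Recall that, under §16(G), a recommendation to seek care from a licensed professional is not an advertisement at all, so it never reaches this check.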
CP-01 Deceptive & Manipulative AI Conduct · Deployer · Chatbot · Healthcare
R.S. 28:16(F)(1)-(3)
Plain Language
Operators are flatly prohibited from using a user's input (i.e., what the user says or types into the chatbot) to target, select, or customize advertisements shown to the user. This covers three distinct uses: (1) deciding whether to show an ad at all (unless it's for the chatbot itself), (2) choosing which product or service category to advertise, and (3) customizing how an ad is presented. The single exception is that user input may be used to determine whether to show an ad for the mental health chatbot itself. This is a behavioral advertising prohibition specific to the therapeutic conversation context — it prevents operators from mining therapeutic disclosures for ad targeting.
Statutory Text
An operator of a mental health chatbot may not use a user's input to: (1) Determine whether to display an advertisement for a product or service to the user, unless the advertisement is for the mental health chatbot itself. (2) Determine a product, service, or category of product or service, to advertise to the user. (3) Customize how an advertisement is presented to the user.
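Illustrative Sketch
Section 16(F) is a flat prohibition with a single carve-out, which the hypothetical check below makes explicit. The `purpose` labels map to paragraphs (F)(1) through (F)(3) and are invented names.

```python
from typing import Literal

AdPurpose = Literal[
    "display_decision",  # (F)(1): whether to display an ad at all
    "ad_selection",      # (F)(2): which product, service, or category
    "presentation",      # (F)(3): how the ad is presented
]

def may_use_input_for_ads(purpose: AdPurpose, ad_is_for_chatbot_itself: bool) -> bool:
    """User input may never drive ad targeting, with one exception.

    The sole carve-out: input may inform whether to display an ad for
    the mental health chatbot itself ((F)(1)). Selection ((F)(2)) and
    presentation customization ((F)(3)) are prohibited without exception.
    """
    return purpose == "display_decision" and ad_is_for_chatbot_itself
```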
Other · Chatbot · Healthcare
R.S. 28:16(G)
Plain Language
This savings clause clarifies that the advertising and disclosure restrictions in §16 do not prevent a mental health chatbot from recommending that a user seek counseling, therapy, or other help from a licensed healthcare professional. This creates no new compliance obligation — it ensures that crisis referrals and professional care recommendations are not inadvertently treated as prohibited commercial advertising.
Statutory Text
The provisions of this Section shall not prohibit a mental health chatbot from recommending that a user seek counseling, therapy, or other assistance from a licensed healthcare professional.