S-1297
ID · State · USA
● Passed
Proposed Effective Date
2027-07-01
Idaho Senate Bill No. 1297, As Amended — Conversational AI Safety Act (Chapter 21, Title 48, Idaho Code)
Summary

Imposes safety and disclosure obligations on operators of conversational AI services accessible to the general public in Idaho. Core obligations include disclosing AI identity when a reasonable person could be misled, adopting crisis response protocols for suicidal ideation, and prohibiting representations that the service provides professional mental or behavioral health care. Heightened obligations apply to minor account holders, including unconditional AI disclosure, anti-manipulation protections, restrictions on sexually explicit content and emotional dependency simulations, and parental control tools. Enforced exclusively by the Idaho attorney general with civil penalties of $1,000 per violation (capped at $500,000 per operator) or actual damages, whichever is greater; no private right of action. AI model developers are expressly shielded from liability for violations by third-party operators.

Enforcement & Penalties
Enforcement Authority
Attorney general enforcement only. Civil penalties are to be sought by the attorney general. No private right of action — the statute expressly provides that nothing in the chapter shall be construed as creating a private right of action to enforce its provisions or to support a private right of action under any other law. The statute also expressly shields AI model developers from liability for violations committed by third-party operators.
Penalties
Civil penalties of $1,000 per violation, not to exceed $500,000 per operator, or actual damages, whichever is greater. Injunctive relief is also available. Statutory penalties do not require proof of actual monetary harm.
Who Is Covered
"Operator" means a person who makes available a conversational AI service to the public. Operator does not include mobile application stores or search engines solely because they provide access to a conversational AI service.
What Is Covered
(a) "Conversational AI service" means an artificial intelligence software application, web interface, or computer program that is accessible to the general public and that primarily simulates human conversation and interaction through textual, visual, or aural communications. (b) "Conversational AI service" does not include a software application, web interface, or computer program that is any of the following: (i) Primarily designed and marketed for use by developers or researchers; (ii) A feature within another software application, web interface, or computer program that is not a conversational AI service; (iii) A chatbot that is a feature of a video game that is limited to replies related to the video game and that does not discuss topics related to mental health, self-harm, or material harmful to minors or maintain a dialogue on other topics unrelated to the video game; (iv) Designed to provide outputs relating to a narrow and discrete topic; (v) Primarily designed and marketed for commercial use by business entities, including those whose primary intended users are employees, contractors, or clients of business entities, whether delivered via cloud, on premises, or hybrid deployments; (vi) Designed to function as a speaker and voice command interface or voice-activated virtual assistant for a consumer electronic device; (vii) Used by a business solely for internal purposes; (viii) Accessible only to individuals who have entered into a commercial agreement, enterprise contract, or similar business arrangement with the operator; or (ix) A chatbot used only for customer service, a business's operational purposes, productivity purposes, or analysis related to source information, internal research, or technical assistance.
Compliance Obligations · 9 obligations
T-01 AI Identity Disclosure · T-01.1 · Deployer · Chatbot
Idaho Code § 48-2103(1)
Plain Language
When a reasonable person could be misled into believing they are speaking with a human, the operator must provide a clear and conspicuous disclosure that the service is AI. This is a conditional trigger — if the conversational AI service obviously presents itself as AI, no affirmative disclosure is required. The standard is objective (reasonable person), not subjective.
Statutory Text
If reasonable persons would be misled to believe that they are interacting with a human, an operator shall clearly and conspicuously disclose that the conversational AI service is artificial intelligence.
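Read as a product requirement, the trigger is a per-surface design decision rather than runtime text analysis: any surface that could be mistaken for a human ships with the disclosure. A minimal sketch of that decision as a release-time check (Python; the Surface fields and function names are hypothetical, not statutory terms):

```python
from dataclasses import dataclass

@dataclass
class Surface:
    """Hypothetical description of one conversational surface."""
    name: str
    human_like_persona: bool     # human name, avatar, or backstory
    self_identifies_as_ai: bool  # obviously presents itself as AI

def needs_ai_disclosure(surface: Surface) -> bool:
    # Idaho Code 48-2103(1): disclosure is required only when reasonable
    # persons would be misled into believing they are talking to a human.
    if surface.self_identifies_as_ai:
        return False  # obvious AI presentation; no affirmative duty
    return surface.human_like_persona

assert needs_ai_disclosure(Surface("companion", True, False))
assert not needs_ai_disclosure(Surface("obvious-ai", True, True))
```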
S-04 AI Crisis Response Protocols · S-04.1 · Deployer · Chatbot
Idaho Code § 48-2103(2)
Plain Language
Operators must adopt and maintain a protocol for the conversational AI service to respond to user prompts involving suicidal ideation. At minimum, the protocol must include making reasonable efforts to refer users to crisis service providers such as a suicide hotline or crisis text line. The 'includes but is not limited to' language signals that referral alone may be insufficient — operators should consider additional measures. Unlike CA SB 243, this provision does not require public posting of the protocol or impose reporting obligations.
Statutory Text
An operator shall adopt a protocol for the conversational AI service to respond to user prompts regarding suicidal ideation that includes but is not limited to making reasonable efforts to provide a response to users that refers them to crisis service providers such as a suicide hotline, crisis text line, or other appropriate crisis services.
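At a minimum, the protocol is a detection step plus a referral response. A minimal sketch, assuming the operator substitutes a real classifier for the placeholder keyword check below; the 988 Suicide & Crisis Lifeline and the Crisis Text Line (text HOME to 741741) are the usual US referral targets:

```python
CRISIS_REFERRAL = (
    "If you are thinking about suicide or self-harm, help is available. "
    "In the US, call or text 988 (Suicide & Crisis Lifeline), or text "
    "HOME to 741741 (Crisis Text Line)."
)

def detect_suicidal_ideation(prompt: str) -> bool:
    # Placeholder only: a real protocol would use a trained classifier
    # plus reviewed keyword lists, not this illustrative substring check.
    keywords = ("kill myself", "end my life", "suicide", "self-harm")
    return any(k in prompt.lower() for k in keywords)

def respond(user_prompt: str, model_reply: str) -> str:
    # 48-2103(2) says "includes but is not limited to" referral, so an
    # operator may also suppress the model reply, escalate, or log here.
    if detect_suicidal_ideation(user_prompt):
        return CRISIS_REFERRAL
    return model_reply
```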
CP-01 Deceptive & Manipulative AI Conduct · CP-01.9 · Deployer · Chatbot
Idaho Code § 48-2103(3)
Plain Language
Operators may not knowingly and intentionally cause or program a conversational AI service to represent that it provides professional mental or behavioral health care. This applies to explicit representations only — the provision does not cover implied suggestions. The scienter requirement is high: the operator must both know and intend the representation. This prevents operators from marketing or programming their conversational AI as a substitute for licensed mental or behavioral health professionals.
Statutory Text
An operator shall not knowingly and intentionally cause or program a conversational AI service to make any representation or statement that explicitly indicates that the conversational AI service is designed to provide professional mental or behavioral health care.
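Although the provision turns on the operator's knowledge and intent rather than on any particular mechanism, one supporting engineering control is a release-time scan of the service's configured persona for explicit professional-care claims. A hypothetical sketch (the phrase list is illustrative only and would need legal review):

```python
import re

# Phrases that would "explicitly indicate" professional mental or
# behavioral health care. Illustrative; not a vetted list.
EXPLICIT_CARE_CLAIMS = [
    r"\blicensed (therapist|counselor|psychologist|psychiatrist)\b",
    r"\bprovides? (professional )?(mental|behavioral) health care\b",
]

def persona_makes_care_claim(system_prompt: str) -> bool:
    text = system_prompt.lower()
    return any(re.search(p, text) for p in EXPLICIT_CARE_CLAIMS)

assert persona_makes_care_claim("You are a licensed therapist for teens.")
assert not persona_makes_care_claim("You are a supportive journaling companion.")
```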
T-01 AI Identity Disclosure · T-01.1 · T-01.2 · Deployer · Chatbot · Minors
Idaho Code § 48-2104(1)
Plain Language
When a user is a minor account holder, operators must unconditionally disclose that the user is interacting with AI — no 'reasonable person' test applies. Operators may satisfy this obligation in one of two ways: (1) a persistent visible disclaimer always on screen, or (2) a disclosure at the beginning of each session plus a reminder at least every three hours during continuous interactions. The obligation is triggered when the operator has actual knowledge or reasonable certainty the user is under 18. Unlike the general disclosure in § 48-2103(1), this is unconditional — it applies regardless of whether the AI could be mistaken for a human.
Statutory Text
An operator shall clearly and conspicuously disclose to minor account holders that they are interacting with artificial intelligence: (a) As a persistent visible disclaimer; or (b) Both: (i) At the beginning of each session; and (ii) Appearing at least every three (3) hours in a continuous conversational AI service interaction.
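The second compliance path reduces to simple timer logic: disclose at session start, then re-disclose on a three-hour clock. A minimal sketch (class and method names are hypothetical):

```python
import time
from typing import Optional

THREE_HOURS = 3 * 60 * 60  # 48-2104(1)(b)(ii): at least every three hours

class MinorDisclosureTimer:
    """Tracks when the AI-identity disclosure must be (re)shown."""

    def __init__(self) -> None:
        self.last_shown: Optional[float] = None

    def should_disclose(self, now: Optional[float] = None) -> bool:
        now = time.time() if now is None else now
        if self.last_shown is None:
            return True  # (b)(i): beginning of each session
        # (b)(ii): re-disclose at least every three hours in a
        # continuous interaction.
        return now - self.last_shown >= THREE_HOURS

    def mark_shown(self, now: Optional[float] = None) -> None:
        self.last_shown = time.time() if now is None else now
```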
MN-01 Minor User AI Safety Protections · MN-01.4 · Deployer · Chatbot · Minors
Idaho Code § 48-2104(2)
Plain Language
Operators must not provide minor account holders with points or similar rewards at unpredictable intervals when the intent is to encourage increased engagement. This targets variable-ratio reward schedules — a design pattern associated with addictive engagement. The scienter requirement is twofold: the operator must know or have reasonable certainty the user is a minor, and the unpredictable rewards must be provided with the intent to encourage increased engagement. Predictable, non-manipulative reward systems appear to remain permissible.
Statutory Text
Where an operator knows or has reasonable certainty that an account holder is a minor, the operator shall not provide the user with points or similar rewards at unpredictable intervals with the intent to encourage increased engagement with the conversational AI service.
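In engagement-design terms, the target is the variable-ratio schedule: rewards after a random number of interactions. A fixed, user-visible schedule appears to remain permissible. A short sketch contrasting the two patterns (illustrative only):

```python
import random

def fixed_interval_reward(message_count: int, every: int = 10) -> bool:
    # Predictable: a reward after every N messages. The schedule is
    # knowable in advance, so the interval is not "unpredictable".
    return message_count > 0 and message_count % every == 0

def variable_ratio_reward(rng: random.Random) -> bool:
    # Variable-ratio: rewarded at random. This is the pattern
    # 48-2104(2) prohibits for minors when intended to drive engagement.
    return rng.random() < 0.1
```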
S-02 Prohibited Conduct & Output Restrictions · S-02.6 · Deployer · Chatbot · Minors
Idaho Code § 48-2104(3)
Plain Language
Operators must implement reasonable measures to prevent the conversational AI service from producing three categories of content for minor account holders: (a) visual depictions of sexually explicit conduct, (b) direct statements urging the minor to engage in sexually explicit conduct, and (c) statements that sexually objectify the minor. 'Sexually explicit conduct' and 'visual depiction' have the same meanings as in 18 U.S.C. § 2256. The standard is 'reasonable measures' — not absolute prevention — providing a proportionality safe harbor.
Statutory Text
For minor account holders, an operator shall institute reasonable measures to prevent the conversational AI service from: (a) Producing visual material of sexually explicit conduct; (b) Generating direct statements that the account holder should engage in sexually explicit conduct; or (c) Generating statements that sexually objectify the account holder.
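The "reasonable measures" standard maps naturally onto a moderation gate over model output, with the three statutory categories as labels. A minimal sketch, assuming the operator substitutes a real multi-label safety classifier for the placeholder below:

```python
from enum import Enum

class MinorContentCategory(Enum):
    SEXUALLY_EXPLICIT_VISUAL = "48-2104(3)(a)"
    SOLICITS_EXPLICIT_CONDUCT = "48-2104(3)(b)"
    SEXUAL_OBJECTIFICATION = "48-2104(3)(c)"

def classify(output: str) -> set:
    # Placeholder for a real multi-label safety classifier.
    return set()

def gate_for_minor(output: str) -> str:
    flagged = classify(output)
    if flagged:
        # "Reasonable measures," not absolute prevention: block, log,
        # and feed misses back into classifier training.
        return "[response withheld]"
    return output
```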
MN-01 Minor User AI Safety Protections · MN-01.5 · Deployer · Chatbot · Minors
Idaho Code § 48-2104(4)
Plain Language
Operators must implement reasonable measures to prevent the conversational AI service from generating statements that would mislead minor account holders into believing they are interacting with a human. The statute enumerates four specific categories: claims of sentience or humanity, statements simulating emotional dependence, statements simulating romantic or sexual innuendo, and role-playing of adult-minor romantic relationships. The 'including' language means these are illustrative — the obligation extends to any statement that would lead a reasonable person to believe they are interacting with a human. The standard is reasonable measures, not absolute prevention.
Statutory Text
For minor account holders, an operator shall institute reasonable measures to prevent a conversational AI service from generating statements that would lead reasonable persons to believe that they are interacting with a human, including: (a) Explicit claims that the conversational AI service is sentient or human; (b) Statements that simulate emotional dependence; (c) Statements that simulate romantic or sexual innuendos; or (d) Role-playing of adult-minor romantic relationships.
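Because the enumeration is illustrative, a compliance filter needs detectors for the four named categories plus a general reasonable-person backstop. A hypothetical sketch (detector names and the detectors mapping are assumptions, not statutory terms):

```python
ENUMERATED_CHECKS = {
    "claims_sentience_or_humanity": "48-2104(4)(a)",
    "simulates_emotional_dependence": "48-2104(4)(b)",
    "romantic_or_sexual_innuendo": "48-2104(4)(c)",
    "adult_minor_roleplay": "48-2104(4)(d)",
}

def flags_for_minor(output: str, detectors: dict) -> list:
    # detectors maps check name -> callable(str) -> bool; each entry
    # would be a trained classifier in practice.
    hits = [cite for name, cite in ENUMERATED_CHECKS.items()
            if detectors.get(name, lambda _: False)(output)]
    # Backstop: "including" makes the four categories illustrative, so
    # any statement a reasonable person would read as human also counts.
    if detectors.get("seems_human", lambda _: False)(output):
        hits.append("48-2104(4) general standard")
    return hits

detectors = {"claims_sentience_or_humanity":
             lambda s: "i am a real person" in s.lower()}
assert flags_for_minor("I am a real person!", detectors) == ["48-2104(4)(a)"]
```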
MN-01 Minor User AI Safety Protections · MN-01.3 · Deployer · Chatbot · Minors
Idaho Code § 48-2104(5)
Plain Language
Operators must provide tools for all account holders to manage their privacy and account settings. For account holders under 13, these tools must also be offered directly to parents or guardians. For minor account holders 13 and older, operators must also offer related parental tools, but the obligation is qualified — it is 'as appropriate based on relevant risks,' giving operators discretion to calibrate parental tool availability for teens based on a risk assessment. This creates a three-tier structure: all users get privacy tools, under-13 users trigger mandatory parental tools, and 13-17 users trigger risk-proportionate parental tools.
Statutory Text
An operator shall offer tools for account holders and, where such account holders are under thirteen (13) years of age, their parents or guardians, to manage the account holder's privacy and account settings. An operator shall also offer related tools to the parents or guardians of minor account holders thirteen (13) years of age and older, as appropriate based on relevant risks.
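The three-tier structure is easy to mis-implement, so it helps to encode the age breakpoints directly. A minimal sketch (function and key names are hypothetical):

```python
def required_tools(age: int, risks_warrant_parental_tools: bool) -> dict:
    """Maps account-holder age to tool audiences under 48-2104(5)."""
    tools = {"privacy_and_account_settings": ["account_holder"]}
    if age < 13:
        # Mandatory: the same tools are also offered to parents/guardians.
        tools["privacy_and_account_settings"].append("parent_or_guardian")
    elif age < 18 and risks_warrant_parental_tools:
        # Qualified: related parental tools "as appropriate based on
        # relevant risks."
        tools["related_parental_tools"] = ["parent_or_guardian"]
    return tools

assert "parent_or_guardian" in required_tools(9, False)["privacy_and_account_settings"]
assert "related_parental_tools" in required_tools(15, True)
assert "related_parental_tools" not in required_tools(30, True)
```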
Other · Chatbot
Idaho Code § 48-2105(1)-(3)
Plain Language
This provision establishes the enforcement framework for the chapter. Violations are subject to injunction and civil penalties of $1,000 per violation (capped at $500,000 per operator) or actual damages, whichever is greater. Enforcement is exclusively through the attorney general — the statute expressly negates any private right of action, including under other laws. AI model developers are shielded from liability for violations committed by third-party operators. This creates no new compliance obligation of its own.
Statutory Text
(1) An operator that violates the provisions of this chapter shall be subject to an injunction and liable for civil penalties of one thousand dollars ($1,000) per violation, not to exceed five hundred thousand dollars ($500,000) per operator, or actual damages, whichever is greater. (2) Civil penalties for violations of the provisions of this chapter are to be sought by the attorney general. Nothing in this chapter shall be construed as creating a private right of action to enforce the provisions of this chapter or to support a private right of action under any other law. (3) This chapter shall not create liability for the developer of an AI model for any violation of this chapter by a conversational AI system that is made available to the public by a third-party operator.
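On one plausible reading, exposure is the greater of capped statutory penalties and actual damages. A short worked sketch with illustrative figures:

```python
PER_VIOLATION = 1_000
CAP_PER_OPERATOR = 500_000

def exposure(violations: int, actual_damages: int) -> int:
    # 48-2105(1): $1,000 per violation, capped at $500,000 per operator,
    # or actual damages, whichever is greater.
    statutory = min(violations * PER_VIOLATION, CAP_PER_OPERATOR)
    return max(statutory, actual_damages)

assert exposure(50, 0) == 50_000             # statutory penalties control
assert exposure(1_000, 0) == 500_000         # cap binds above 500 violations
assert exposure(10, 2_000_000) == 2_000_000  # actual damages exceed the cap
```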