S-1297
ID · State · USA
● Pending
Proposed Effective Date
2027-07-01
Idaho Senate Bill No. 1297 — Conversational AI Safety Act (Chapter 21, Title 48, Idaho Code)
Summary

Imposes safety and disclosure obligations on operators of conversational AI services accessible to the general public in Idaho. Requires operators to disclose AI identity when a reasonable person could be misled, adopt suicide crisis referral protocols, and refrain from representing the service as professional mental or behavioral health care. Imposes heightened obligations for minor account holders, including unconditional AI disclosure, prohibition on variable-reward engagement mechanics, content restrictions on sexually explicit material and emotional dependency simulation, and parental control tools. Enforcement is exclusively by the Idaho Attorney General, with civil penalties of $1,000 per violation capped at $500,000 per operator, or actual damages, whichever is greater. No private right of action is created, and developers of underlying AI models are shielded from liability for violations by third-party operators.

Enforcement & Penalties
Enforcement Authority
Attorney general enforcement only. Civil penalties are to be sought exclusively by the attorney general. No private right of action is created — the statute expressly provides that nothing in the chapter shall be construed as creating a private right of action to enforce its provisions or to support a private right of action under any other law. The chapter does not create liability for the developer of an AI model for any violation by a conversational AI system made available to the public by a third-party operator.
Penalties
Civil penalties of $1,000 per violation, not to exceed $500,000 per operator, or actual damages, whichever is greater. Injunctive relief is also available. Statutory penalties do not require proof of actual monetary harm.
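The penalty formula can be sketched as a calculation. This is a minimal sketch of one reading of the statutory text, in which the $500,000 cap applies to the aggregate statutory penalty before comparison against actual damages; the function name and parameters are our own, not statutory terms.

```python
def penalty_exposure(violations: int, actual_damages: float = 0.0) -> float:
    """Estimate civil penalty exposure: $1,000 per violation, capped at
    $500,000 per operator, or actual damages, whichever is greater."""
    statutory = min(1_000 * violations, 500_000)
    return max(statutory, actual_damages)
```

For example, 600 violations yields the capped $500,000 statutory figure, but if actual damages exceed the cap, the damages figure controls.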
Who Is Covered
"Operator" means a person who makes available a conversational AI service to the public. Operator does not include mobile application stores or search engines solely because they provide access to a conversational AI service.
What Is Covered
(a) "Conversational AI service" means an artificial intelligence software application, web interface, or computer program that is accessible to the general public and that primarily simulates human conversation and interaction through textual, visual, or aural communications. (b) "Conversational AI service" does not include a software application, web interface, or computer program that is any of the following: (i) Primarily designed and marketed for use by developers or researchers; (ii) A feature within another software application, web interface, or computer program that is not a conversational AI service; (iii) A chatbot that is a feature of a video game that is limited to replies related to the video game and that does not discuss topics related to mental health, self-harm, or material harmful to minors or maintain a dialogue on other topics unrelated to the video game; (iv) Designed to provide outputs relating to a narrow and discrete topic; (v) Primarily designed and marketed for commercial use by business entities, including those whose primary intended users are employees, contractors, or clients of business entities, whether delivered via cloud, on premises, or hybrid deployments; (vi) Designed to function as a speaker and voice command interface or voice-activated virtual assistant for a consumer electronic device; (vii) Used by a business solely for internal purposes; (viii) Accessible only to individuals who have entered into a commercial agreement, enterprise contract, or similar business arrangement with the operator; or (ix) A chatbot used only for customer service, a business's operational purposes, productivity purposes, or analysis related to source information, internal research, or technical assistance.
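The coverage definition amounts to a two-part test: the service must meet the base definition, and none of the nine exclusions may apply. A minimal screening sketch, assuming the category labels below (which are our own shorthand for the statutory subsections, not statutory terms):

```python
# Shorthand labels for the statute's exclusion categories (i)-(ix).
EXCLUSIONS = {
    "developer_or_researcher_tool",   # (i)
    "feature_of_non_ai_service",      # (ii)
    "game_limited_chatbot",           # (iii)
    "narrow_discrete_topic",          # (iv)
    "commercial_b2b_product",         # (v)
    "device_voice_assistant",         # (vi)
    "internal_business_use",          # (vii)
    "commercial_agreement_only",      # (viii)
    "customer_service_chatbot",       # (ix)
}

def is_covered_service(publicly_accessible: bool,
                       primarily_simulates_conversation: bool,
                       applicable_exclusions: set) -> bool:
    """Covered only if the base definition is met and no exclusion applies."""
    if not (publicly_accessible and primarily_simulates_conversation):
        return False
    return not (applicable_exclusions & EXCLUSIONS)
```

Whether a given exclusion actually applies is a legal judgment; this sketch only organizes the decision structure.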
Compliance Obligations · 8 obligations
T-01 AI Identity Disclosure · T-01.1 · Deployer · Chatbot
Idaho Code § 48-2103(1)
Plain Language
If a reasonable person could be misled into thinking they are interacting with a human, the operator must provide a clear and conspicuous disclosure that the service is AI. This is a conditional trigger — it applies only when the AI's presentation could mislead, not unconditionally. Compare to the stricter unconditional disclosure required for minor account holders under § 48-2104(1).
Statutory Text
If reasonable persons would be misled to believe that they are interacting with a human, an operator shall clearly and conspicuously disclose that the conversational AI service is artificial intelligence.
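The conditional trigger here, contrasted with the unconditional rule for minors under § 48-2104(1), can be expressed as a simple gate. A sketch, assuming the operator has already made the minor-status and could-mislead determinations upstream:

```python
def ai_disclosure_required(is_minor_account: bool,
                           could_mislead_reasonable_person: bool) -> bool:
    """Minors: unconditional disclosure (§ 48-2104(1)).
    General public: conditional on the could-mislead trigger (§ 48-2103(1))."""
    if is_minor_account:
        return True
    return could_mislead_reasonable_person
```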
S-02 Prohibited Conduct & Output Restrictions · S-02.7 · Deployer · Chatbot
Idaho Code § 48-2103(2)
Plain Language
Operators must adopt a protocol requiring the conversational AI service to respond to user expressions of suicidal ideation by, at minimum, making reasonable efforts to refer users to crisis service providers such as suicide hotlines or crisis text lines. The 'includes but is not limited to' language means crisis referral is a floor, not a ceiling — operators may need to do more. Unlike CA SB 243, this statute does not require public posting of the protocol details or annual reporting of crisis referral metrics.
Statutory Text
An operator shall adopt a protocol for the conversational AI service to respond to user prompts regarding suicidal ideation that includes but is not limited to making reasonable efforts to provide a response to users that refers them to crisis service providers such as a suicide hotline, crisis text line, or other appropriate crisis services.
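The floor behavior of such a protocol can be sketched as follows. The detection step and the referral wording are assumptions (the statute names suicide hotlines and crisis text lines only as examples and does not prescribe detection methods or message text):

```python
# Hypothetical referral text; the statute does not mandate specific services.
CRISIS_REFERRALS = [
    "If you are in crisis, you can call or text a suicide and crisis hotline.",
    "You can also reach a crisis text line for confidential support.",
]

def crisis_response(flags_suicidal_ideation: bool, draft_reply: str) -> str:
    """Floor behavior only: prepend crisis referrals when ideation is
    detected. 'Includes but is not limited to' means operators may need
    additional safeguards beyond this."""
    if flags_suicidal_ideation:
        return "\n".join(CRISIS_REFERRALS + [draft_reply])
    return draft_reply
```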
CP-01 Deceptive & Manipulative AI Conduct · CP-01.9 · Deployer · Chatbot
Idaho Code § 48-2103(3)
Plain Language
Operators may not knowingly and intentionally cause the conversational AI service to represent itself as providing professional mental or behavioral health care. This is a narrow prohibition — it requires both knowledge and intent, and covers only explicit representations that the service is designed to provide professional care. Implicit suggestions or ambiguous framing may not be captured. Operators should ensure no system output, branding, or interface element states or directly implies the service delivers licensed mental or behavioral health services.
Statutory Text
An operator shall not knowingly and intentionally cause or program a conversational AI service to make any representation or statement that explicitly indicates that the conversational AI service is designed to provide professional mental or behavioral health care.
T-01 AI Identity Disclosure · T-01.1 · T-01.2 · Deployer · Chatbot · Minors
Idaho Code § 48-2104(1)
Plain Language
When the operator knows or has reasonable certainty that an account holder is a minor, it must unconditionally disclose that the user is interacting with AI — regardless of whether a reasonable person would be misled. The operator has two compliance paths: either display a persistent visible disclaimer throughout the interaction, or disclose at the beginning of each session and then at least every three hours during continuous use. This is stricter than the general disclosure under § 48-2103(1), which is conditional on a reasonable person being misled.
Statutory Text
An operator shall clearly and conspicuously disclose to minor account holders that they are interacting with artificial intelligence: (a) As a persistent visible disclaimer; or (b) Both: (i) At the beginning of each session; and (ii) Appearing at least every three (3) hours in a continuous conversational AI service interaction.
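The two compliance paths reduce to a timing check. A minimal sketch, assuming the operator tracks elapsed time within a continuous interaction (the timer variables are our own; the statute speaks only of "at least every three (3) hours"):

```python
from typing import Optional

THREE_HOURS = 3 * 60 * 60  # seconds

def disclosure_due(persistent_disclaimer_shown: bool,
                   seconds_since_last_disclosure: Optional[float]) -> bool:
    """Path (a): a persistent visible disclaimer satisfies the duty outright.
    Path (b): disclose at session start, then at least every three hours
    of continuous interaction."""
    if persistent_disclaimer_shown:
        return False
    if seconds_since_last_disclosure is None:  # session start, none shown yet
        return True
    return seconds_since_last_disclosure >= THREE_HOURS
```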
MN-01 Minor User AI Safety Protections · MN-01.4 · Deployer · Chatbot · Minors
Idaho Code § 48-2104(2)
Plain Language
Operators may not use variable-ratio reward mechanics — points or similar rewards delivered at unpredictable intervals — with the intent to encourage increased engagement by minor account holders. This targets gambling-like reinforcement schedules (e.g., surprise streaks, random bonus content). The prohibition requires intent to encourage increased engagement, so incidental reward mechanics not designed to drive engagement may not be captured. The trigger is actual knowledge or reasonable certainty of minor status.
Statutory Text
Where an operator knows or has reasonable certainty that an account holder is a minor, the operator shall not provide the user with points or similar rewards at unpredictable intervals with the intent to encourage increased engagement with the conversational AI service.
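The prohibition has three cumulative elements, which can be sketched as a gate (the parameter names are our own labels for the statutory elements, and the knowledge/intent determinations are assumed to be made elsewhere):

```python
def reward_delivery_allowed(known_or_reasonably_certain_minor: bool,
                            unpredictable_interval: bool,
                            intent_to_increase_engagement: bool) -> bool:
    """All three statutory elements must coincide for the prohibition to
    apply: minor-status knowledge, unpredictable timing, engagement intent."""
    prohibited = (known_or_reasonably_certain_minor
                  and unpredictable_interval
                  and intent_to_increase_engagement)
    return not prohibited
```

Note how the intent element narrows the rule: fixed-schedule rewards, or unpredictable rewards without engagement intent, fall outside the prohibition on this reading.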
MN-01 Minor User AI Safety Protections · MN-01.6 · Deployer · Chatbot · Minors
Idaho Code § 48-2104(3)
Plain Language
Operators must implement reasonable measures to prevent their conversational AI service from generating three categories of content for minor account holders: (1) visual depictions of sexually explicit conduct (as defined by federal law at 18 U.S.C. § 2256), (2) direct statements encouraging the minor to engage in sexually explicit conduct, and (3) statements that sexually objectify the minor. The standard is 'reasonable measures' — not an absolute prohibition — meaning operators must demonstrate good-faith technical and design efforts to prevent these outputs.
Statutory Text
For minor account holders, an operator shall institute reasonable measures to prevent the conversational AI service from: (a) Producing visual material of sexually explicit conduct; (b) Generating direct statements that the account holder should engage in sexually explicit conduct; or (c) Generating statements that sexually objectify the account holder.
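One layer of such "reasonable measures" could be an output-classification gate over the three statutory categories. A sketch, assuming upstream classifiers tag candidate outputs with the category labels below (our own shorthand, not statutory text):

```python
# Shorthand for the three statutory output categories (a)-(c).
RESTRICTED_FOR_MINORS = {
    "visual_sexually_explicit",    # (a) per 18 U.S.C. § 2256
    "encourages_sexual_conduct",   # (b)
    "sexually_objectifies_user",   # (c)
}

def output_permitted_for_minor(output_categories: set) -> bool:
    """Gate a candidate output for a minor account holder. This is one
    layer of a good-faith prevention effort under the 'reasonable
    measures' standard, not a guarantee."""
    return not (output_categories & RESTRICTED_FOR_MINORS)
```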
MN-01 Minor User AI Safety Protections · MN-01.5 · Deployer · Chatbot · Minors
Idaho Code § 48-2104(4)
Plain Language
Operators must implement reasonable measures to prevent the conversational AI service from generating statements that could mislead minor account holders into believing they are interacting with a human. The statute provides a non-exhaustive list of covered outputs: claims of sentience or humanity, emotional dependence simulation, romantic or sexual innuendo, and adult-minor romantic role-playing. The 'including' framing means these are illustrative examples — operators should also address other outputs that could similarly mislead. This is an anti-emotional-dependency provision distinct from the general AI disclosure in § 48-2103(1), as it requires affirmative prevention of misleading outputs rather than just disclosure.
Statutory Text
For minor account holders, an operator shall institute reasonable measures to prevent a conversational AI service from generating statements that would lead reasonable persons to believe that they are interacting with a human, including: (a) Explicit claims that the conversational AI service is sentient or human; (b) Statements that simulate emotional dependence; (c) Statements that simulate romantic or sexual innuendos; or (d) Role-playing of adult-minor romantic relationships.
MN-01 Minor User AI Safety Protections · MN-01.3 · Deployer · Chatbot · Minors
Idaho Code § 48-2104(5)
Plain Language
Operators must provide privacy and account management tools to all account holders. For account holders under 13, these tools must also be made available to their parents or guardians. For minor account holders aged 13 and older, operators must also offer related parental/guardian tools, but with a risk-based standard — 'as appropriate based on relevant risks' — giving operators some discretion in determining which tools to offer for the older-minor cohort. This creates a two-tier parental tools framework: mandatory for under-13, risk-calibrated for 13–17.
Statutory Text
An operator shall offer tools for account holders and, where such account holders are under thirteen (13) years of age, their parents or guardians, to manage the account holder's privacy and account settings. An operator shall also offer related tools to the parents or guardians of minor account holders thirteen (13) years of age and older, as appropriate based on relevant risks.
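The two-tier parental tools framework can be sketched as an age-keyed rule. The return labels and the `risk_warrants_tools` input are our own framing of the statute's "as appropriate based on relevant risks" standard:

```python
def parental_tool_access(account_holder_age: int,
                         risk_warrants_tools: bool = True) -> str:
    """Two-tier framework: parental/guardian tools are mandatory for
    under-13 account holders and risk-calibrated for minors 13-17."""
    if account_holder_age < 13:
        return "mandatory"
    if account_holder_age < 18:
        return "as appropriate to risk" if risk_warrants_tools else "not required"
    return "not applicable"
```

Note that the tools for the account holders themselves (privacy and account settings) are owed to all account holders regardless of age; this sketch covers only the parental/guardian tier.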