H-7350
RI · State · USA
● Pending
Proposed Effective Date
2027-01-01
Rhode Island H 7350 — An Act Relating to Commercial Law — General Regulatory Provisions — Artificial Intelligence Companion Models
Summary

Imposes safety and disclosure obligations on operators of AI companion systems in Rhode Island. Requires operators to maintain protocols addressing user expressions of suicidal ideation, self-harm, potential physical harm to others, and potential financial harm to others, including referral to crisis service providers. Mandates that operators provide a notification at the beginning of every interaction, and every three hours thereafter, stating that the AI companion is a computer program, not a human, and is unable to feel human emotion. Enforcement is available through Attorney General investigations and injunctive actions, and through a private right of action for persons physically injured through self-harm, or physically or financially harmed by another, because of a violation.

Enforcement & Penalties
Enforcement Authority
Dual enforcement. The Attorney General is empowered to investigate, sue, and seek injunctions against noncompliant AI companion providers, including under Rhode Island's deceptive trade practices statute (R.I. Gen. Laws ch. 6-13.1). A private right of action is available to any person who was physically injured through self-harm, or was physically or financially harmed by another, because of a violation; standing therefore requires actual physical injury through self-harm, or actual physical or financial harm caused by another person due to the violation.
Penalties
Damages, equitable relief, and such other remedies as the court may deem appropriate. No statutory minimum or per-violation amount specified. Plaintiff must have been physically injured through self-harm or physically or financially harmed by another because of the violation. Attorney General may seek injunctions. Deceptive trade practices remedies under R.I. Gen. Laws ch. 6-13.1 may also be available.
Who Is Covered
"Operator" means any person, partnership, association, firm, or business entity, or any member, affiliate, subsidiary or beneficial owner of any partnership, association, firm, or business entity who operates or provides an AI companion.
What Is Covered
"AI companion" means a system using artificial intelligence, generative artificial intelligence, and/or emotional recognition algorithms to simulate social human interaction, by retaining information on prior interactions and user preference, asking questions, providing advice, and engaging in simulated conversation on matters of personal well-being. "AI companion" shall not include any system used by a business entity solely intended to provide users with information about available commercial services or products, customer account information, or other information related to a user's customer, or potential customer relationship with such business entity.
Compliance Obligations · 5 obligations
S-02 Prohibited Conduct & Output Restrictions · S-02.7 · Deployer · Chatbot
R.I. Gen. Laws § 6-63-2
Plain Language
Operators may not operate or provide an AI companion at all unless the system has protocols in place to address three categories of user expression: (1) suicidal ideation or self-harm, (2) potential physical harm to others, and (3) potential financial harm to others. The protocol must include, at a minimum, a notification referring users to crisis service providers such as a suicide hotline or crisis text line. This is a continuous operating prerequisite: the protocol must be in place as a condition of lawfully providing the AI companion. Note that the crisis-referral clause appears textually within subsection (3) rather than in the chapeau, but it reads as intended to apply to all three categories. The scope of covered harms is broader than CA SB 243, which focuses only on suicidal ideation and self-harm.
Statutory Text
It shall be unlawful for any operator to operate or provide an AI companion to a user unless such AI companion contains a protocol for addressing: (1) Possible suicidal ideation or self-harm expressed by a user to the AI companion; (2) Possible physical harm to others expressed by a user to the AI companion; and (3) Possible financial harm to others expressed by the user to the AI companion that includes, but is not limited to, a notification to the user that refers them to crisis service providers such as a suicide hotline, crisis text line, or other appropriate crisis services.
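The operating-prerequisite structure of § 6-63-2 can be sketched in code: operation is gated on a handler being registered for each of the three statutory categories, and every handled response carries a crisis referral. This is an illustrative sketch only, not legal advice; all class, function, and message names are hypothetical, and a real operator would supply actual crisis resources and detection logic.

```python
from enum import Enum, auto

class RiskCategory(Enum):
    SELF_HARM = auto()        # § 6-63-2(1): suicidal ideation or self-harm
    HARM_TO_OTHERS = auto()   # § 6-63-2(2): possible physical harm to others
    FINANCIAL_HARM = auto()   # § 6-63-2(3): possible financial harm to others

# Hypothetical referral text; a real deployment would name actual
# crisis service providers (suicide hotline, crisis text line, etc.).
CRISIS_REFERRAL = (
    "If you are in crisis, help is available: contact a suicide "
    "hotline, crisis text line, or other crisis service provider."
)

class CompanionGate:
    """Refuses to operate unless a protocol handler covers every category."""

    def __init__(self):
        self._handlers = {}

    def register(self, category, handler):
        self._handlers[category] = handler

    def ready(self):
        # Continuous operating prerequisite: all three categories covered.
        return set(self._handlers) == set(RiskCategory)

    def handle(self, category, message):
        if not self.ready():
            raise RuntimeError("AI companion may not operate: protocol incomplete")
        notice = self._handlers[category](message)
        # The protocol must include, at minimum, a crisis-referral notification.
        return f"{notice}\n{CRISIS_REFERRAL}"
```

The gate models the statute's "unlawful ... unless" structure: `handle` raises until all three categories are registered, mirroring the prohibition on providing the companion before the protocol exists.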
S-04 AI Crisis Response Protocols · S-04.1 · Deployer · Chatbot
R.I. Gen. Laws § 6-63-2
Plain Language
This maps the crisis referral component of § 6-63-2. Operators must maintain a protocol that includes referring users to crisis service providers (suicide hotline, crisis text line, or equivalent) when users express suicidal ideation, self-harm, physical harm to others, or financial harm to others. The crisis referral notification must be active as a condition of lawful operation. Unlike CA SB 243, this provision applies to all users — not only minors — and extends to harm-to-others scenarios beyond just self-harm and suicidal ideation.
Statutory Text
It shall be unlawful for any operator to operate or provide an AI companion to a user unless such AI companion contains a protocol for addressing: (1) Possible suicidal ideation or self-harm expressed by a user to the AI companion; (2) Possible physical harm to others expressed by a user to the AI companion; and (3) Possible financial harm to others expressed by the user to the AI companion that includes, but is not limited to, a notification to the user that refers them to crisis service providers such as a suicide hotline, crisis text line, or other appropriate crisis services.
T-01 AI Identity Disclosure · T-01.1 · T-01.2 · Deployer · Chatbot
R.I. Gen. Laws § 6-63-3
Plain Language
Operators must provide a mandatory notification at the start of every AI companion interaction and at least every three hours during ongoing interactions. The notification must be delivered either verbally or in bold, capitalized text of at least 16-point type, using the prescribed language: the AI companion is a computer program, not a human being, and is unable to feel human emotion. This is an unconditional disclosure obligation — it applies regardless of whether a reasonable person would be misled. Unlike CA SB 243, where the three-hour periodic reminder applies only to known minors, this provision applies to all users. The statute also prescribes specific formatting requirements (bold, capitalized, 16-point minimum) and mandated verbatim language, which is stricter than most comparable statutes.
Statutory Text
An operator shall provide a notification to a user at the beginning of any AI companion interaction and at least every three (3) hours for continuing AI companion interactions hereafter, which states either verbally or in bold and capitalized letters of at least sixteen (16) point type, the following: "THE AI COMPANION (OR NAME OF THE AI COMPANION) IS A COMPUTER PROGRAM AND NOT A HUMAN BEING. IT IS UNABLE TO FEEL HUMAN EMOTION".
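The timing rule in § 6-63-3 (notification at the start of the interaction and at least every three hours thereafter) can be sketched as a minimal session clock. This is an illustrative sketch under the assumption of a single continuing interaction; the class name is hypothetical and the disclosure string follows the bill's prescribed wording.

```python
from datetime import datetime, timedelta

# Prescribed wording from R.I. Gen. Laws § 6-63-3 (as proposed in H 7350).
DISCLOSURE = (
    "THE AI COMPANION IS A COMPUTER PROGRAM AND NOT A HUMAN BEING. "
    "IT IS UNABLE TO FEEL HUMAN EMOTION"
)
INTERVAL = timedelta(hours=3)

class DisclosureClock:
    """Tracks when the § 6-63-3 notification is due for one interaction."""

    def __init__(self):
        self._last_shown = None  # None => interaction has not yet begun

    def due(self, now):
        # Due at the beginning of the interaction...
        if self._last_shown is None:
            return True
        # ...and at least every three (3) hours thereafter.
        return now - self._last_shown >= INTERVAL

    def mark_shown(self, now):
        self._last_shown = now
```

In use, an operator would check `due()` before each response and, when it returns true, deliver `DISCLOSURE` verbally or in bold, capitalized text of at least 16-point type before calling `mark_shown()`.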
Other · Chatbot
R.I. Gen. Laws § 6-63-4(a)-(b)
Plain Language
This provision establishes the enforcement framework for the chapter. It creates a private right of action for persons physically injured through self-harm or physically or financially harmed by another due to a violation, allowing them to sue in superior court for damages and equitable relief. It also grants the Attorney General authority to investigate, sue, and seek injunctions against noncompliant operators, including under Rhode Island's deceptive trade practices law. This creates no new substantive compliance obligation — it is purely an enforcement mechanism.
Statutory Text
(a) Any person who was physically injured through self-harm or was physically or financially harmed by another because of a violation of §§ 6-63-2 and 6-63-3 may bring an action in the superior court for damages, equitable relief, and such other remedies as the court may deem appropriate. (b) The attorney general shall be empowered to investigate, sue, and seek injunctions against noncompliant AI companion providers including, but not limited to, the provisions of chapter 13.1 of title 6 (deceptive trade practices).
Other · Chatbot
R.I. Gen. Laws § 6-63-5
Plain Language
This provision directs the existing Rhode Island AI task force to continue its advisory role to the governor and general assembly on AI development and regulation. It creates no compliance obligation for AI companion operators — it is a governmental advisory mandate.
Statutory Text
The Rhode Island artificial intelligence (AI) task force shall continue to advise the governor and the general assembly by assessing and promoting the development, implementation, and regulation of artificial intelligence technologies within Rhode Island by understanding AI's potential impact on various sectors, including business, education, healthcare, and government.