S-264
MA · State · USA
● Pre-filed
Proposed Effective Date
2025-01-17
Massachusetts Senate No. 264 — An Act establishing protections for consumers interacting with artificial intelligence chatbots
Summary

Requires any commercial entity deploying a chatbot to clearly and conspicuously disclose to users that they are interacting with a chatbot, not a human. Chatbot interactions and representations are given the same legal force and effect as interactions with a human employee or agent of the commercial entity, and disclaimers cannot serve as a defense. Violations are treated as unfair or deceptive acts under Massachusetts Chapter 93A, enabling both Attorney General enforcement and private suits by consumers. The bill is notably broad — it covers all commercial chatbots regardless of modality (audio, visual, or text) and does not limit its scope to companion or high-risk chatbots.

Enforcement & Penalties
Enforcement Authority
Violations are deemed unfair or deceptive acts or practices under Chapter 93A, § 2. Enforcement is available through the Attorney General under Chapter 93A, § 4, and through a private right of action under Chapter 93A, § 9 (consumers) and § 11 (businesses). Private plaintiffs must demonstrate injury, and Chapter 93A, § 9 requires a 30-day demand letter before suit. The bill designates no separate agency enforcer.
Penalties
Enforcement is through Chapter 93A. Under § 9, consumers may recover actual damages or $25, whichever is greater, with double to treble damages available for willful or knowing violations. Under § 4, the Attorney General may seek injunctive relief and civil penalties of up to $5,000 per violation. Reasonable attorney's fees and costs are recoverable under § 9. The statute also preserves 'any other remedies that may be available.' Statutory damages under 93A do not require proof of actual monetary harm.
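The § 9 recovery rule described above is mechanical enough to sketch: the base award is the greater of actual damages or $25, and a willful or knowing violation multiplies that base by two to three at the court's discretion. The function below is an illustrative sketch of that arithmetic only, not legal advice; the function name and parameters are hypothetical.

```python
def ch93a_s9_recovery(actual_damages: float, willful: bool, multiplier: int = 2) -> float:
    """Illustrative sketch (not legal advice) of the Ch. 93A, § 9 damages
    arithmetic: base recovery is the greater of actual damages or $25;
    willful or knowing violations carry double to treble damages."""
    if multiplier not in (2, 3):
        raise ValueError("willful/knowing multiplier ranges from 2x to 3x")
    base = max(actual_damages, 25.0)  # statutory floor of $25
    return base * multiplier if willful else base
```

For example, a consumer with no provable monetary loss still recovers the $25 statutory minimum (`ch93a_s9_recovery(0, False)` returns `25.0`), while a willful violation causing $100 in actual damages can yield up to `300.0` with a treble multiplier.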
Who Is Covered
Any commercial entity deploying a chatbot.
What Is Covered
"Chatbot", an automated program designed to simulate conversation with human users whether through the use of generative artificial intelligence or other similar technology. Such a program may use audio, visual, or textual methods, or a combination thereof, to communicate.
Compliance Obligations · 2 obligations
T-01 AI Identity Disclosure · T-01.1 · Deployer · Chatbot
Chapter 93M, § 2
Plain Language
Any commercial entity that deploys a chatbot must clearly and conspicuously disclose to every user that they are interacting with a chatbot, not a human. The obligation is unconditional: it applies regardless of whether a reasonable person would otherwise be misled, and it extends to every person the chatbot interacts with. The statute specifies neither timing nor format, but the 'clearly and conspicuously' standard requires prominence sufficient for users to actually notice the disclosure.
Statutory Text
Any commercial entity deploying a chatbot shall clearly and conspicuously disclose to the person with whom the chatbot interacts that the person is interacting with a chatbot and not a human.
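Because the statute leaves timing and format open, one conservative implementation pattern is to front-load the disclosure: make it the first thing every session emits, before any substantive output. The sketch below illustrates that pattern only; it is not legal advice, and the class name, wording, and `reply_fn` hook are hypothetical, not drawn from the bill.

```python
class DisclosingChatbot:
    """Illustrative sketch of an up-front identity disclosure: the first
    response of every session states that the user is talking to a chatbot,
    before any other output. Names and wording here are hypothetical."""

    DISCLOSURE = "You are interacting with an automated chatbot, not a human."

    def __init__(self, reply_fn):
        self.reply_fn = reply_fn  # underlying model or service (assumed interface)
        self.disclosed = False

    def respond(self, user_message: str) -> str:
        # Prepend the disclosure to the very first response of the session,
        # so it appears regardless of what the user asks.
        reply = self.reply_fn(user_message)
        if not self.disclosed:
            self.disclosed = True
            return self.DISCLOSURE + "\n" + reply
        return reply
```

Attaching the disclosure to the session rather than to a particular screen element keeps it modality-neutral, which matters because the bill's chatbot definition spans audio, visual, and textual interfaces.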
Other · Chatbot
Chapter 93M, § 3
Plain Language
Any statement, representation, or interaction made by a chatbot deployed by a commercial entity is legally treated as if it were made by a human employee or agent of that entity. This means the entity is fully liable for chatbot outputs — including false claims, misleading representations, or warranties — under the same legal theories that would apply to human employees. Critically, adding a disclaimer (e.g., 'this is AI-generated and may be inaccurate') does not provide any defense under this chapter, under Chapter 93A, or under any other Massachusetts cause of action. This provision does not create an affirmative compliance obligation but rather establishes a liability attribution rule with significant exposure implications.
Statutory Text
Interactions with, including but not limited to any information or representations provided by, a chatbot deployed by a commercial entity shall have the same legal force and effect as interactions with a person employed by, or acting as an agent of, the commercial entity. Use of a disclaimer shall not constitute a defense under this chapter, chapter 93A, or any other cause of action under the laws of the commonwealth.