S-243
MA · State · USA
● Pre-filed
Proposed Effective Date
2025-01-17
Massachusetts Senate No. 243 — An Act requiring consumer notification for chatbot systems
Summary

Declares it an unfair and deceptive act under Massachusetts Chapter 93A for any person to engage in a commercial transaction or trade practice with a consumer where the consumer interacts with a bot that may mislead a reasonable person into believing they are engaging with a human — regardless of whether the consumer is actually misled or harmed. Provides a safe harbor: no liability attaches if the consumer is notified clearly and conspicuously that they are communicating with a computer. Covers broadly defined 'bots' including chatbots, AI agents, and avatars on any public-facing website or application. Enforcement leverages the existing 93A framework, including Attorney General enforcement and the pre-existing private right of action for consumers under § 9.

Enforcement & Penalties
Enforcement Authority
Enforced as an unfair and deceptive act or practice under Massachusetts General Laws Chapter 93A, § 2. The Attorney General has enforcement authority under Chapter 93A, § 4. Private enforcement is available under Chapter 93A, § 9 (consumers) and § 11 (businesses), which are pre-existing enforcement mechanisms of the 93A framework — the bill itself does not independently create a private right of action but incorporates one by declaring the conduct a 93A violation.
Penalties
Remedies are those available under Chapter 93A. Under § 9, consumers may recover actual damages or $25 (whichever is greater), with treble damages for willful or knowing violations, plus attorney's fees and costs. Under § 4, the Attorney General may seek injunctive relief and civil penalties of up to $5,000 per violation. The bill itself specifies no independent damages schedule; it incorporates the existing 93A remedial framework by declaring the conduct an unfair and deceptive act or practice. Notably, the bill states liability arises 'regardless of whether such consumer is in fact misled, deceived or damaged thereby,' eliminating actual harm as a prerequisite for the violation itself.
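The § 9 remedy math described above can be sketched in a few lines. This is an illustrative sketch only, not legal advice: the helper name is hypothetical, and it assumes the simple case stated in this summary (recovery is the greater of actual damages or $25, trebled for a willful or knowing violation), ignoring attorney's fees, costs, and the court's discretion.

```python
def section_9_recovery(actual_damages: float, willful: bool) -> float:
    """Illustrative sketch of the Chapter 93A, s. 9 remedy math summarized
    above (hypothetical helper, not legal advice).

    Consumers recover the greater of actual damages or $25; per this
    summary, willful or knowing violations are trebled.
    """
    base = max(actual_damages, 25.0)
    return base * 3 if willful else base

# A $10 loss still yields the $25 statutory minimum; a willful
# violation trebles that floor to $75.
print(section_9_recovery(10.0, willful=False))
print(section_9_recovery(10.0, willful=True))
```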
Who Is Covered
What Is Covered
"Bot", an automated online account wherein all or substantially all of the actions or posts of that account are not the result of a person including, but not limited to, a chatbot, artificial intelligence agent, avatar or other computer technology that engages in a textual or aural conversation.
Compliance Obligations (1 obligation)
T-01 AI Identity Disclosure · T-01.1 · Deployer · Chatbot
M.G.L. c. 93, § 115(b)
Plain Language
Any person who deploys a bot in a commercial transaction or trade practice with a consumer commits a per se Chapter 93A violation if the bot could mislead a reasonable person into thinking they are interacting with a human — actual consumer deception or harm is not required. The sole safe harbor is providing clear and conspicuous notice to the consumer that they are communicating with a computer rather than a human being. This notice must be provided before or during the interaction; failure to disclose creates the violation regardless of outcome. The obligation applies only in the context of commercial transactions or trade practices — non-commercial uses of bots are not covered. Because the statute uses a 'reasonable person' standard for the misleading threshold but provides an absolute safe harbor for clear disclosure, the practical compliance path is to always disclose AI identity clearly and conspicuously.
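In implementation terms, the "always disclose" compliance path above amounts to emitting the notice before any bot content reaches the consumer. The sketch below is a hypothetical illustration of that ordering, not a definitive compliance implementation; the function name and notice wording are assumptions, and whether any given notice is "clear and conspicuous" is a legal question outside the code.

```python
def open_session(first_bot_message: str) -> list[str]:
    """Sketch of a session opener that satisfies the ordering described
    above: the AI-identity notice is delivered before any bot content,
    so the consumer is informed at the start of the interaction.
    (Hypothetical helper and wording; not legal advice.)
    """
    notice = ("Notice: you are communicating with an automated chatbot "
              "(a computer), not a human being.")
    return [notice, first_bot_message]

# The notice always precedes the bot's first substantive message.
for line in open_session("Hi! How can I help you today?"):
    print(line)
```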
Statutory Text
It is hereby declared to be an unfair and deceptive act or practice in violation of section 2 of chapter 93A for any person to engage in a commercial transaction or trade practice with a consumer of any kind in which the consumer is communicating or otherwise interacting with a bot that may mislead or deceive a reasonable person to believe they are engaging with a human, regardless of whether such consumer is in fact misled, deceived or damaged thereby; provided, however, that a person utilizing or deploying a bot shall not be liable under this section if the consumer is notified in a clear and conspicuous fashion that they are communicating with a computer rather than a human being.