H-97
MA · State · USA
● Pre-filed
An Act protecting consumers in interactions with artificial intelligence systems (House No. 97, 194th General Court)
Summary

Massachusetts H 97 imposes obligations on developers and deployers of high-risk AI systems — defined as AI systems that make or substantially factor into consequential decisions in areas such as employment, housing, credit, education, healthcare, insurance, and legal services. Developers must exercise reasonable care to prevent algorithmic discrimination, provide deployers with detailed documentation on system capabilities, limitations, training data, and bias risks, and publicly post summaries of their high-risk AI systems. Deployers must implement risk management programs, complete annual impact assessments, notify consumers before AI-driven consequential decisions, provide adverse-action explanations and appeal opportunities, and publicly disclose their AI system use. The attorney general has exclusive enforcement authority; violations are treated as unfair trade practices under Chapter 93A. No private right of action is created. The bill includes a safe harbor for small deployers (fewer than 50 full-time equivalent employees that do not use their own data to train their systems), exemptions for federally approved systems and regulated financial institutions, and an affirmative defense for entities that discover and cure violations while following recognized risk management frameworks such as the NIST AI RMF.

Enforcement & Penalties
Enforcement Authority
The attorney general has exclusive authority to enforce this chapter. Enforcement is agency-initiated. Violations constitute unfair trade practices under Chapter 93A. No private right of action is created — the statute expressly provides that it does not provide the basis for, and is not subject to, a private right of action for violations of this chapter or any other law. An affirmative defense is available if the developer, deployer, or other person discovers and cures a violation through feedback, adversarial testing, red teaming, or internal review, and is otherwise in compliance with both the NIST AI RMF and ISO/IEC 42001, with another substantially equivalent recognized risk management framework, or with a framework designated by the attorney general. The burden of demonstrating the affirmative defense rests on the developer, deployer, or other person.
Penalties
Violations constitute unfair trade practices under Chapter 93A, which provides the attorney general with authority to seek injunctive relief, civil penalties, and other remedies available under that chapter. The bill does not specify independent statutory damages or penalty amounts beyond those available under Chapter 93A.
Who Is Covered
"Deployer" means a person doing business in this state that deploys a high-risk artificial intelligence system.
"Developer" means a person doing business in this state that develops or intentionally and substantially modifies an artificial intelligence system.
What Is Covered
"High-risk artificial intelligence system" means any artificial intelligence system that, when deployed, makes, or is a substantial factor in making, a consequential decision. "High-risk artificial intelligence system" does not include: (1) an artificial intelligence system if the artificial intelligence system is intended to: (i) perform a narrow procedural task; or (ii) detect decision-making patterns or deviations from prior decision-making patterns and is not intended to replace or influence a previously completed human assessment without sufficient human review; or (2) the following technologies, unless the technologies, when deployed, make, or are a substantial factor in making, a consequential decision: (i) anti-fraud technology that does not use facial recognition technology; (ii) anti-malware; (iii) anti-virus; (iv) artificial intelligence-enabled video games; (v) calculators; (vi) cybersecurity; (vii) databases; (viii) data storage; (ix) firewall; (x) internet domain registration; (xi) internet website loading; (xii) networking; (xiii) spam- and robocall-filtering; (xiv) spell-checking; (xv) spreadsheets; (xvi) web caching; (xvii) web hosting or any similar technology; or (xviii) technology that communicates with consumers in natural language for the purpose of providing users with information, making referrals or recommendations, and answering questions and is subject to an accepted use policy that prohibits generating content that is discriminatory or harmful.
Compliance Obligations · 17 obligations
H-02 Non-Discrimination & Bias Assessment · H-02.1 · H-02.3 · Developer · Automated Decisionmaking
Chapter 93M § 2(a)
Plain Language
Developers of high-risk AI systems must exercise reasonable care to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination arising from the intended and contracted uses of the system. Compliance with this section and any AG rules creates a rebuttable presumption that reasonable care was used — but only in AG enforcement actions, not in any other legal proceeding. The self-testing and diversity-expansion carve-outs in the algorithmic discrimination definition mean that developers using their systems solely for bias testing or pool expansion are not subject to this duty.
Statutory Text
(a) Not later than 6 months after the effective date of this act, a developer of a high-risk artificial intelligence system shall use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination arising from the intended and contracted uses of the high-risk artificial intelligence system. In any enforcement action brought not later than 6 months after the effective date of this act, by the attorney general pursuant to section 6, there is a rebuttable presumption that a developer used reasonable care as required under this section if the developer complied with this section and any additional requirements or obligations as set forth in rules promulgated by the attorney general pursuant to section 7.
G-02 Public Transparency & Documentation · G-02.1 · Developer · Automated Decisionmaking
Chapter 93M § 2(b)(1)-(4), (c), (f)
Plain Language
Developers must provide deployers and downstream developers with comprehensive documentation about each high-risk AI system, including: a statement of foreseeable and harmful uses; summaries of training data types; known limitations and discrimination risks; purpose and intended uses; pre-deployment evaluation methodology; data governance measures; mitigation steps taken; human monitoring guidance; and any additional documentation needed for the deployer to complete impact assessments. This documentation may be delivered through model cards, dataset cards, or similar artifacts. Trade secrets, legally protected information, and security-sensitive information are exempt. A developer that is also the sole deployer of its own system need not generate this documentation unless the system is provided to an unaffiliated deployer.
Statutory Text
(b) Not later than 6 months after the effective date of this act, and except as provided in subsection (f) of this section, a developer of a high-risk artificial intelligence system shall make available to the deployer or other developer of the high-risk artificial intelligence system: (1) a general statement describing the reasonably foreseeable uses and known harmful or inappropriate uses of the high-risk artificial intelligence system; (2) documentation disclosing: (i) high-level summaries of the type of data used to train the high-risk artificial intelligence system; (ii) known or reasonably foreseeable limitations of the high-risk artificial intelligence system, including known or reasonably foreseeable risks of algorithmic discrimination arising from the intended uses of the high-risk artificial intelligence system; (iii) the purpose of the high-risk artificial intelligence system; (iv) the intended benefits and uses of the high-risk artificial intelligence system; and (v) all other information necessary to allow the deployer to comply with the requirements of section 3; (3) documentation describing: (i) how the high-risk artificial intelligence system was evaluated for performance and mitigation of algorithmic discrimination before the high-risk artificial intelligence system was offered, sold, leased, licensed, given, or otherwise made available to the deployer; (ii) the data governance measures used to cover the training datasets and the measures used to examine the suitability of data sources, possible biases, and appropriate mitigation; (iii) the intended outputs of the high-risk artificial intelligence system; (iv) the measures the developer has taken to mitigate known or reasonably foreseeable risks of algorithmic discrimination that may arise from the reasonably foreseeable deployment of the high-risk artificial intelligence system; and (v) how the high-risk artificial intelligence system should be used, not be used, and be monitored by an individual when the high-risk artificial intelligence system is used to make, or is a substantial factor in making, a consequential decision; and (4) any additional documentation that is reasonably necessary to assist the deployer in understanding the outputs and monitor the performance of the high-risk artificial intelligence system for risks of algorithmic discrimination. (c) (1) except as provided in subsection (f) of this section, a developer that offers, sells, leases, licenses, gives, or otherwise makes available to a deployer or other developer a high-risk artificial intelligence system not later than 6 months after the effective date of this act, shall make available to the deployer or other developer, to the extent feasible, the documentation and information, through artifacts such as model cards, dataset cards, or other impact assessments, necessary for a deployer, or for a third party contracted by a deployer, to complete an impact assessment pursuant to section 3 (c). (2) a developer that also serves as a deployer for a high-risk artificial intelligence system is not required to generate the documentation required by this section unless the high-risk artificial intelligence system is provided to an unaffiliated entity acting as a deployer. (f) nothing in subsections (b) to (e) of this section requires a developer to disclose a trade secret, information protected from disclosure by state or federal law, or information that would create a security risk to the developer.
G-02 Public Transparency & Documentation · G-02.4 · Developer · Automated Decisionmaking
Chapter 93M § 2(d)
Plain Language
Developers must publish on their website or in a public use case inventory a clear statement summarizing: (1) the types of high-risk AI systems they currently make available, and (2) how they manage known or foreseeable algorithmic discrimination risks from those systems. This statement must be kept current and updated within 90 days of any intentional and substantial modification to a listed system.
Statutory Text
(d) (1) Not later than 6 months after the effective date of this act, a developer shall make available, in a manner that is clear and readily available on the developer's website or in a public use case inventory, a statement summarizing: (i) the types of high-risk artificial intelligence systems that the developer has developed or intentionally and substantially modified and currently makes available to a deployer or other developer; and (ii) how the developer manages known or reasonably foreseeable risks of algorithmic discrimination that may arise from the development or intentional and substantial modification of the types of high-risk artificial intelligence systems described in accordance with subsection (d)(1)(i) of this section. (2) a developer shall update the statement described in subsection (d)(1) of this section: (i) as necessary to ensure that the statement remains accurate; and (ii) no later than ninety days after the developer intentionally and substantially modifies any high-risk artificial intelligence system described in subsection (d)(1)(i) of this section.
R-01 Incident Reporting · R-01.3 · Developer · Automated Decisionmaking
Chapter 93M § 2(e)
Plain Language
When a developer discovers — through its own testing or a credible deployer report — that its high-risk AI system has caused or is reasonably likely to have caused algorithmic discrimination, the developer must notify both the attorney general and all known deployers/developers within 90 days. This is an event-triggered disclosure, not a routine reporting obligation. The notice must describe the known or foreseeable discrimination risks.
Statutory Text
(e) Not later than 6 months after the effective date of this act, a developer of a high-risk artificial intelligence system shall disclose to the attorney general, in a form and manner prescribed by the attorney general, and to all known deployers or other developers of the high-risk artificial intelligence system, any known or reasonably foreseeable risks of algorithmic discrimination arising from the intended uses of the high-risk artificial intelligence system without unreasonable delay but no later than ninety days after the date on which: (1) the developer discovers through the developer's ongoing testing and analysis that the developer's high-risk artificial intelligence system has been deployed and has caused or is reasonably likely to have caused algorithmic discrimination; or (2) the developer receives from a deployer a credible report that the high-risk artificial intelligence system has been deployed and has caused algorithmic discrimination.
R-02 Regulatory Disclosure & Submissions · R-02.2 · Developer · Automated Decisionmaking
Chapter 93M § 2(g)
Plain Language
The attorney general may request at any time that a developer produce the documentation described in Section 2(b) within 90 days. The AG may evaluate it for compliance, but it is exempt from public records disclosure. Developers may designate materials as containing trade secrets or proprietary information, and submitting privileged materials does not waive privilege.
Statutory Text
(g) Not later than 6 months after the effective date of this act, the attorney general may require that a developer disclose to the attorney general, no later than ninety days after the request and in a form and manner prescribed by the attorney general, the statement or documentation described in subsection (b) of this section. The attorney general may evaluate such statement or documentation to ensure compliance with this chapter, and the statement or documentation is not subject to disclosure under the "Massachusetts Public Records Law", chapter 66, section 10 of the General Laws. In a disclosure pursuant to this subsection (g), a developer may designate the statement or documentation as including proprietary information or a trade secret. To the extent that any information contained in the statement or documentation includes information subject to attorney-client privilege or work-product protection, the disclosure does not constitute a waiver of the privilege or protection.
H-02 Non-Discrimination & Bias Assessment · H-02.1 · H-02.3 · Deployer · Automated Decisionmaking
Chapter 93M § 3(a)
Plain Language
Deployers of high-risk AI systems must exercise reasonable care to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination. Compliance with Section 3 and any AG rules creates a rebuttable presumption that reasonable care was used — but this presumption applies only in AG enforcement actions, not in any other proceeding.
Statutory Text
(a) Not later than 6 months after the effective date of this act, a deployer of a high-risk artificial intelligence system shall use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination. In any enforcement action brought not later than 6 months after the effective date of this act, by the attorney general pursuant to section 6, there is a rebuttable presumption that a deployer of a high-risk artificial intelligence system used reasonable care as required under this section if the deployer complied with this section and any additional requirements or obligations as set forth in rules promulgated by the attorney general pursuant to section 7.
G-01 AI Governance Program & Documentation · G-01.1 · G-01.2 · Deployer · Automated Decisionmaking
Chapter 93M § 3(b)
Plain Language
Deployers must implement a risk management policy and program governing their deployment of high-risk AI systems. The program must identify, document, and mitigate algorithmic discrimination risks; specify the principles, processes, and personnel involved; and be iteratively reviewed and updated over the system's life cycle. The program's reasonableness is assessed relative to the NIST AI RMF, ISO/IEC 42001, or another recognized framework, as well as the deployer's size, system scope, and data sensitivity. A single program may cover multiple high-risk systems. Small deployers (fewer than 50 full-time equivalent employees) that do not use their own data to train their systems are exempt per subsection (f).
Statutory Text
(b) (1) Not later than 6 months after the effective date of this act, and except as provided in subsection (f) of this section, a deployer of a high-risk artificial intelligence system shall implement a risk management policy and program to govern the deployer's deployment of the high-risk artificial intelligence system. The risk management policy and program must specify and incorporate the principles, processes, and personnel that the deployer uses to identify, document, and mitigate known or reasonably foreseeable risks of algorithmic discrimination. The risk management policy and program must be an iterative process planned, implemented, and regularly and systematically reviewed and updated over the life cycle of a high-risk artificial intelligence system, requiring regular, systematic review and updates. A risk management policy and program implemented and maintained pursuant to this subsection (b) must be reasonable considering: (i) (A) the guidance and standards set forth in the latest version of the "Artificial Intelligence Risk Management Framework" published by the National Institute of Standards and Technology in the United States Department of Commerce, standard ISO/IEC 42001 of the International Organization for Standardization, or another nationally or internationally recognized risk management framework for artificial intelligence systems, if the standards are substantially equivalent to or more stringent than the requirements of this chapter; or (B) any risk management framework for artificial intelligence systems that the attorney general, in the attorney general's discretion, may designate; (ii) the size and complexity of the deployer; (iii) the nature and scope of the high-risk artificial intelligence systems deployed by the deployer, including the intended uses of the high-risk artificial intelligence systems; and (iv) the sensitivity and volume of data processed in connection with the high-risk artificial intelligence systems deployed by the deployer. (2) a risk management policy and program implemented pursuant to subsection (b)(1) of this section may cover multiple high-risk artificial intelligence systems deployed by the deployer.
H-02 Non-Discrimination & Bias Assessment · H-02.3 · H-02.8 · H-02.10 · Deployer · Automated Decisionmaking
Chapter 93M § 3(c)(1)-(7)
Plain Language
Deployers must complete an impact assessment for each high-risk AI system before deployment, repeat it at least annually, and complete a new one within 90 days of any intentional and substantial modification. The impact assessment must cover: system purpose and benefits, algorithmic discrimination risk analysis and mitigation, data inputs and outputs, customization data, performance metrics and limitations, transparency measures, and post-deployment monitoring. A single assessment may cover comparable systems, and an assessment completed under another law satisfies this requirement if reasonably similar in scope. All impact assessments and records must be retained for at least three years after final deployment. Additionally, deployers must conduct at least annual reviews to verify each system is not causing algorithmic discrimination. Small deployers meeting the subsection (f) criteria are exempt.
Statutory Text
(c) (1) except as provided in subsections (c)(4), (c)(5), and (f) of this section: (i) a deployer, or a third party contracted by the deployer, that deploys a high-risk artificial intelligence system not later than 6 months after the effective date of this act, shall complete an impact assessment for the high-risk artificial intelligence system; and (ii) Not later than 6 months after the effective date of this act, a deployer, or a third party contracted by the deployer, shall complete an impact assessment for a deployed high-risk artificial intelligence system at least annually and within ninety days after any intentional and substantial modification to the high-risk artificial intelligence system is made available. (2) an impact assessment completed pursuant to this subsection (c) must include, at a minimum, and to the extent reasonably known by or available to the deployer: (i) a statement by the deployer disclosing the purpose, intended use cases, and deployment context of, and benefits afforded by, the high-risk artificial intelligence system; (ii) an analysis of whether the deployment of the high-risk artificial intelligence system poses any known or reasonably foreseeable risks of algorithmic discrimination and, if so, the nature of the algorithmic discrimination and the steps that have been taken to mitigate the risks; (iii) a description of the categories of data the high-risk artificial intelligence system processes as inputs and the outputs the high-risk artificial intelligence system produces; (iv) if the deployer used data to customize the high-risk artificial intelligence system, an overview of the categories of data the deployer used to customize the high-risk artificial intelligence system; (v) any metrics used to evaluate the performance and known limitations of the high-risk artificial intelligence system; (vi) a description of any transparency measures taken concerning the high-risk artificial intelligence system, including any measures taken to disclose to a consumer that the high-risk artificial intelligence system is in use when the high-risk artificial intelligence system is in use; and (vii) a description of the post-deployment monitoring and user safeguards provided concerning the high-risk artificial intelligence system, including the oversight, use, and learning process established by the deployer to address issues arising from the deployment of the high-risk artificial intelligence system. (3) in addition to the information required under subsection (3)(b) of this section, an impact assessment completed pursuant to this subsection (c) following an intentional and substantial modification to a high-risk artificial intelligence system not later than 6 months after the effective date of this act, must include a statement disclosing the extent to which the high-risk artificial intelligence system was used in a manner that was consistent with, or varied from, the developer's intended uses of the high-risk artificial intelligence system. (4) a single impact assessment may address a comparable set of high-risk artificial intelligence systems deployed by a deployer. (5) if a deployer, or a third party contracted by the deployer, completes an impact assessment for the purpose of complying with another applicable law or regulation, the impact assessment satisfies the requirements established in this subsection (c) if the impact assessment is reasonably similar in scope and effect to the impact assessment that would otherwise be completed pursuant to this subsection (c). 
(6) a deployer shall maintain the most recently completed impact assessment for a high-risk artificial intelligence system as required under this subsection (c), all records concerning each impact assessment, and all prior impact assessments, if any, for at least three years following the final deployment of the high-risk artificial intelligence system. (7) Not later than 6 months after the effective date of this act, and at least annually thereafter, a deployer, or a third party contracted by the deployer, must review the deployment of each high-risk artificial intelligence system deployed by the deployer to ensure that the high-risk artificial intelligence system is not causing algorithmic discrimination.
H-01 Human Oversight of Automated Decisions · H-01.3 · Deployer · Automated Decisionmaking
Chapter 93M § 3(d)(1)
Plain Language
Before making or substantially contributing to a consequential decision about a consumer, deployers must: (1) notify the consumer that a high-risk AI system will be used, (2) provide a statement disclosing the system's purpose, the nature of the decision, the deployer's contact information, a plain-language description of the system, and how to access the deployer's public website statement, and (3) inform the consumer of any applicable right to opt out of profiling for decisions with legal or similarly significant effects. This notification must be provided directly to the consumer in plain language, in all languages the deployer ordinarily uses, and in formats accessible to consumers with disabilities — or, if direct delivery is not possible, in a manner reasonably calculated to reach the consumer.
Statutory Text
(d) (1) Not later than 6 months after the effective date of this act, and no later than the time that a deployer deploys a high-risk artificial intelligence system to make, or be a substantial factor in making, a consequential decision concerning a consumer, the deployer shall: (i) notify the consumer that the deployer has deployed a high-risk artificial intelligence system to make, or be a substantial factor in making, a consequential decision before the decision is made; (ii) provide to the consumer a statement disclosing the purpose of the high-risk artificial intelligence system and the nature of the consequential decision; the contact information for the deployer; a description, in plain language, of the high-risk artificial intelligence system; and instructions on how to access the statement required by subsection (5)(a) of this section; and (iii) provide to the consumer information, if applicable, regarding the consumer's right to opt out of the processing of personal data concerning the consumer for purposes of profiling in furtherance of decisions that produce legal or similarly significant effects concerning the consumer.
H-01 Human Oversight of Automated Decisions · H-01.1 · H-01.2 · H-01.5 · Deployer · Automated Decisionmaking
Chapter 93M § 3(d)(2)
Plain Language
When a high-risk AI system contributes to an adverse consequential decision about a consumer, the deployer must provide: (1) a statement explaining the principal reasons for the decision — including the degree of AI involvement, the types of data processed, and data sources; (2) an opportunity to correct any incorrect personal data used in the decision; and (3) an opportunity to appeal, which must include human review if technically feasible, unless delay would endanger the consumer's life or safety. This is a post-decision adverse-action package — all three elements must be provided together.
Statutory Text
(2) Not later than 6 months after the effective date of this act, a deployer that has deployed a high-risk artificial intelligence system to make, or be a substantial factor in making, a consequential decision concerning a consumer shall, if the consequential decision is adverse to the consumer, provide to the consumer: (i) a statement disclosing the principal reason or reasons for the consequential decision, including: (A) the degree to which, and manner in which, the high-risk artificial intelligence system contributed to the consequential decision; (B) the type of data that was processed by the high-risk artificial intelligence system in making the consequential decision; and (C) the source or sources of the data described in subsection (d)(2)(i)(B) of this section; (ii) an opportunity to correct any incorrect personal data that the high-risk artificial intelligence system processed in making, or as a substantial factor in making, the consequential decision; and (iii) an opportunity to appeal an adverse consequential decision concerning the consumer arising from the deployment of a high-risk artificial intelligence system, which appeal must, if technically feasible, allow for human review unless providing the opportunity for appeal is not in the best interest of the consumer, including in instances in which any delay might pose a risk to the life or safety of such consumer.
G-02 Public Transparency & Documentation · G-02.4 · Deployer · Automated Decisionmaking
Chapter 93M § 3(e)
Plain Language
Deployers must publish on their website a clear summary describing: the types of high-risk AI systems they currently deploy, how they manage algorithmic discrimination risks for each, and detailed information about the data they collect and use. This statement must be periodically updated. Small deployers meeting the subsection (f) criteria are exempt.
Statutory Text
(e) (1) Not later than 6 months after the effective date of this act, and except as provided in subsection (f) of this section, a deployer shall make available, in a manner that is clear and readily available on the deployer's website, a statement summarizing: (i) the types of high-risk artificial intelligence systems that are currently deployed by the deployer; (ii) how the deployer manages known or reasonably foreseeable risks of algorithmic discrimination that may arise from the deployment of each high-risk artificial intelligence system described pursuant to subsection (e)(1)(i) of this section; and (iii) in detail, the nature, source, and extent of the information collected and used by the deployer. (2) a deployer shall periodically update the statement described in subsection (e)(1) of this section.
R-01 Incident Reporting · R-01.3 · Deployer · Automated Decisionmaking
Chapter 93M § 3(g)
Plain Language
When a deployer discovers that a deployed high-risk AI system has caused algorithmic discrimination, the deployer must notify the attorney general within 90 days of discovery, including the impact assessment information required under Section 3(c)(2). This is a deployer-side counterpart to the developer's discrimination notification obligation in Section 2(e).
Statutory Text
(g) if a deployer deploys a high-risk artificial intelligence system not later than 6 months after the effective date of this act, and subsequently discovers that the high-risk artificial intelligence system has caused algorithmic discrimination, the deployer, without unreasonable delay, but no later than ninety days after the date of the discovery, shall send to the attorney general, in a form and manner prescribed by the attorney general, a notice disclosing the discovery, including the information required under subsection (c)(2) of this section.
R-02 Regulatory Disclosure & Submissions · R-02.2 · Deployer · Automated Decisionmaking
Chapter 93M § 3(i)
Plain Language
The attorney general may request that a deployer produce its risk management policy, impact assessments, or records within 90 days. These documents are exempt from public records disclosure and may be designated as containing trade secrets or proprietary information. Submission of privileged materials does not waive attorney-client privilege or work-product protection.
Statutory Text
(i) Not later than 6 months after the effective date of this act, the attorney general may require that a deployer, or a third party contracted by the deployer, disclose to the attorney general, no later than ninety days after the request and in a form and manner prescribed by the attorney general, the risk management policy implemented pursuant to subsection (b) of this section, the impact assessment completed pursuant to subsection (c) of this section, or the records maintained pursuant to subsection (c)(6) of this section. The attorney general may evaluate the risk management policy, impact assessment, or records to ensure compliance with this chapter, and the risk management policy, impact assessment, and records are not subject to disclosure under the "Massachusetts Public Records Law", chapter 66, section 10 of the General Laws. In a disclosure pursuant to this subsection (i), a deployer may designate the statement or documentation as including proprietary information or a trade secret. To the extent that any information contained in the risk management policy, impact assessment, or records includes information subject to attorney-client privilege or work-product protection, the disclosure does not constitute a waiver of the privilege or protection.
T-01 AI Identity Disclosure · T-01.1 · Developer · Deployer · Automated Decisionmaking
Chapter 93M § 4(a)-(b)
Plain Language
Any deployer or developer that makes available a consumer-facing AI system must disclose to each interacting consumer that they are interacting with an AI system. This obligation applies to all AI systems intended to interact with consumers — not just high-risk systems — making it one of the few provisions in the bill that reaches beyond high-risk AI. No disclosure is required where it would be obvious to a reasonable person that the interaction is with an AI system.
Statutory Text
(a) Not later than 6 months after the effective date of this act, and except as provided in subsection (b) of this section, a deployer or other developer that deploys, offers, sells, leases, licenses, gives, or otherwise makes available an artificial intelligence system that is intended to interact with consumers shall ensure the disclosure to each consumer who interacts with the artificial intelligence system that the consumer is interacting with an artificial intelligence system. (b) disclosure is not required under subsection (a) of this section under circumstances in which it would be obvious to a reasonable person that the person is interacting with an artificial intelligence system.
Other · Automated Decisionmaking
Chapter 93M § 6(a)-(b)
Plain Language
The attorney general has exclusive enforcement authority. Violations of this chapter are deemed unfair trade practices under Massachusetts Chapter 93A, which provides the enforcement framework (injunctive relief, civil penalties, etc.). This provision creates no new compliance obligation — it activates an existing enforcement mechanism.
Statutory Text
(a) the attorney general has exclusive authority to enforce this chapter. (b) except as provided in subsection (c) of this section, a violation of the requirements established in this chapter constitutes an unfair trade practice pursuant to chapter 93A.
Other · Automated Decisionmaking
Chapter 93M § 6(c)-(d)
Plain Language
In AG enforcement actions, it is an affirmative defense if the entity: (1) discovered and cured the violation through user feedback, adversarial testing/red teaming, or internal review; AND (2) is otherwise in compliance with the NIST AI RMF and ISO/IEC 42001, an equivalent framework, or a framework designated by the AG. The entity bears the burden of demonstrating the defense. This is a safe harbor — not an independent compliance obligation.
Statutory Text
(c) in any action commenced by the attorney general to enforce this chapter, it is an affirmative defense that the developer, deployer, or other person: (1) discovers and cures a violation of this chapter as a result of: (i) feedback that the developer, deployer, or other person encourages deployers or users to provide to the developer, deployer, or other person; (ii) adversarial testing or red teaming, as those terms are defined or used by the national institute of standards and technology; or (iii) an internal review process; and (2) is otherwise in compliance with: (i) the latest version of the "Artificial intelligence risk management framework" published by the national institute of standards and technology in the United States Department of Commerce and Standard ISO/IEC 42001 of the International Organization for Standardization; (ii) another nationally or internationally recognized risk management framework for artificial intelligence systems, if the standards are substantially equivalent to or more stringent than the requirements of this chapter; or (iii) any risk management framework for artificial intelligence systems that the attorney general, in the attorney general's discretion, may designate and, if designated, shall publicly disseminate. (d) a developer, a deployer, or other person bears the burden of demonstrating to the attorney general that the requirements established in subsection (3) of this section have been satisfied.
Other · Automated Decisionmaking
Chapter 93M § 6(e)-(f)
Plain Language
The chapter expressly does not create a private right of action and does not preempt other existing rights or remedies. The rebuttable presumptions and affirmative defenses in this chapter apply only to AG enforcement actions, not to any other claim or proceeding. This is a savings/limitation clause, not an independent obligation.
Statutory Text
(e) nothing in this chapter, including the enforcement authority granted to the attorney general under this section, preempts or otherwise affects any right, claim, remedy, presumption, or defense available at law or in equity. A rebuttable presumption or affirmative defense established under this chapter applies only to an enforcement action brought by the attorney general pursuant to this section and does not apply to any right, claim, remedy, presumption, or defense available at law or in equity. (f) this chapter does not provide the basis for, and is not subject to, a private right of action for violations of this chapter or any other law.