H-94
MA · State · USA
● Pre-filed
Proposed Effective Date: 2025-07-07
An Act to ensure accountability and transparency in artificial intelligence systems (House No. 94, 194th General Court)
Summary

Establishes Chapter 93M of Massachusetts General Laws, imposing accountability and transparency obligations on developers and deployers of AI systems, with heightened requirements for high-risk AI systems used in consequential decisions (employment, housing, credit, healthcare, insurance, education, and government services). Developers must exercise reasonable care to mitigate algorithmic discrimination, provide documentation to deployers, notify the Attorney General and deployers of discrimination risks within 90 days of discovery, and publish a plain-language public summary. Deployers of high-risk systems must maintain a NIST-aligned risk management program, conduct annual impact assessments, notify consumers of AI-driven consequential decisions, and provide appeal mechanisms. Corporations using AI for consumer targeting or behavioral influence face additional disclosure requirements. Enforcement is exclusively through the Attorney General under Chapter 93A; no private right of action is created. Small businesses with fewer than 50 employees that do not use proprietary training data are exempt, as are low-risk AI systems.

Enforcement & Penalties
Enforcement Authority
The Attorney General has exclusive authority to enforce this chapter. Violations are deemed unfair or deceptive trade practices under Chapter 93A. Enforcement is agency-initiated. An affirmative defense is available if the developer or deployer identifies and remedies violations through testing, internal review, or consumer feedback, and demonstrates compliance with recognized AI risk management standards.
Penalties
Violations are treated as unfair or deceptive trade practices under Chapter 93A, which provides the Attorney General with authority to seek civil penalties, injunctive relief, and restitution. The bill does not specify its own penalty schedule; remedies are those available under Chapter 93A enforcement actions. No private right of action is created.
Who Is Covered
"Developer" means an entity or individual developing, modifying, or making AI systems available in Massachusetts.
"Deployer" means an entity using AI systems to make decisions impacting consumers in Massachusetts.
What Is Covered
"High-Risk Artificial Intelligence System" means AI systems that materially influence consequential decisions, including but not limited to: (a) Education opportunities; (b) Employment decisions; (c) Financial or lending services; (d) Housing access; (e) Healthcare services; (f) Insurance decisions; (g) Legal or government services.
"Artificial Intelligence System" means any machine-based system that processes inputs to generate outputs, including content, decisions, predictions, or recommendations, that influence physical or virtual environments.
Compliance Obligations · 15 obligations
H-02 Non-Discrimination & Bias Assessment · H-02.1 · Developer · Automated Decisionmaking
Chapter 93M, Section 2(a)
Plain Language
Developers must exercise reasonable care to identify, mitigate, and disclose risks of algorithmic discrimination in their AI systems. This is a general duty of care obligation — it does not prescribe specific testing methodologies but requires affirmative steps to find and address discriminatory risks across all protected classifications under Massachusetts and federal law. The duty encompasses both pre-deployment identification and ongoing mitigation.
Statutory Text
(a) Duty of Care: Developers must use reasonable care to identify, mitigate, and disclose risks of algorithmic discrimination.
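Chapter 93M sets a standard of care but names no testing methodology. As one illustration only, the sketch below applies a common disparity screen, the four-fifths-rule adverse impact ratio (an EEOC convention, not a requirement of this bill), to a hypothetical decision log; the group labels, data, and 0.8 threshold are all assumptions.

```python
# Sketch only: the bill prescribes no methodology. This applies the
# four-fifths rule (EEOC convention, not statutory) to hypothetical data.
from collections import Counter

def selection_rates(outcomes: list[tuple[str, bool]]) -> dict[str, float]:
    """Map each group to its rate of favorable outcomes."""
    totals, favorable = Counter(), Counter()
    for group, selected in outcomes:
        totals[group] += 1
        if selected:
            favorable[group] += 1
    return {g: favorable[g] / totals[g] for g in totals}

def adverse_impact_ratios(rates: dict[str, float]) -> dict[str, float]:
    """Each group's rate relative to the highest-rate group.
    Assumes at least one group has a nonzero selection rate."""
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

# Hypothetical decision log: (protected-class group, favorable outcome?)
log = [("A", True), ("A", True), ("A", False),
       ("B", True), ("B", False), ("B", False)]
ratios = adverse_impact_ratios(selection_rates(log))
flagged = {g: r for g, r in ratios.items() if r < 0.8}  # four-fifths threshold
print(ratios, flagged)  # B's ratio of 0.5 would warrant investigation
```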
G-02 Public Transparency & Documentation · G-02.1 · Developer · Automated Decisionmaking
Chapter 93M, Section 2(b)
Plain Language
Developers must furnish downstream deployers with documentation covering three areas: (1) the AI system's intended and foreseeable uses, (2) known limitations and risks including algorithmic discrimination potential, and (3) training dataset information and bias mitigation measures applied. This is a deployer-facing documentation obligation — the bill does not require this specific documentation to be made publicly available (that obligation is in Section 2(d)). The training data disclosure includes both the data used and the bias mitigation steps taken.
Statutory Text
(b) Documentation Requirements: Developers must provide deployers with: (1) A summary of intended and foreseeable uses of the AI system; (2) Known limitations and risks, including algorithmic discrimination; (3) Information on the datasets used for training, including measures taken to mitigate biases.
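For illustration, one minimal shape for the Section 2(b) hand-off, sketched under the assumption that a structured record suffices; the bill mandates the three content areas but no particular format, and every field name and value here is hypothetical.

```python
# Sketch only: Section 2(b) fixes content, not format. All names and
# values below are hypothetical examples.
from dataclasses import dataclass

@dataclass
class DeployerDocumentation:
    intended_uses: list[str]      # 2(b)(1): intended and foreseeable uses
    known_limitations: list[str]  # 2(b)(2): limitations and risks, incl. discrimination
    training_data: list[str]      # 2(b)(3): datasets used for training
    bias_mitigations: list[str]   # 2(b)(3): measures taken to mitigate biases

doc = DeployerDocumentation(
    intended_uses=["resume screening for hourly roles"],
    known_limitations=["reduced accuracy on non-US resume formats"],
    training_data=["2019-2023 internal hiring outcomes (hypothetical)"],
    bias_mitigations=["demographic reweighting applied before training"],
)
```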
R-01 Incident Reporting · R-01.3 · Developer · Automated Decisionmaking
Chapter 93M, Section 2(c)
Plain Language
When a developer discovers or identifies a known or foreseeable risk of algorithmic discrimination in an AI system, they must notify both the Attorney General and all deployers of that system within 90 days. This is a discovery-triggered notification — it is not a routine periodic report but an event-driven disclosure obligation. The 90-day window runs from the point of discovery, not from a calendar date.
Statutory Text
(c) Disclosure of Risks: Developers must notify the Attorney General and deployers of any known or foreseeable risks of discrimination within 90 days of discovery.
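Because the window is discovery-triggered, tracking it mostly reduces to recording the discovery date. A minimal sketch, assuming date-level granularity is acceptable; the bill does not specify how the 90 days are counted.

```python
# Sketch only: the 90-day clock runs from discovery (Section 2(c)),
# not from any calendar date.
from datetime import date, timedelta

NOTIFICATION_WINDOW = timedelta(days=90)

def notification_deadline(discovered_on: date) -> date:
    """Latest date to notify the Attorney General and all deployers."""
    return discovered_on + NOTIFICATION_WINDOW

print(notification_deadline(date(2025, 8, 1)))  # 2025-10-30
```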
G-02 Public Transparency & Documentation · G-02.4 · Developer · Automated Decisionmaking
Chapter 93M, Section 2(d)
Plain Language
Developers must publish a plain-language summary on their public website describing the types of AI systems they develop, the measures they take to mitigate algorithmic discrimination, and contact information for inquiries. This is an ongoing public transparency obligation — the summary must be accessible to anyone, not just deployers or regulators.
Statutory Text
(d) Public Statement: Developers must publish a plain-language summary on their website, detailing: (1) Types of AI systems they develop; (2) Measures to mitigate algorithmic discrimination; (3) Contact information for inquiries.
G-01 AI Governance Program & Documentation · G-01.1 · Deployer · Automated Decisionmaking
Chapter 93M, Section 3(a)
Plain Language
Deployers of high-risk AI systems must establish and maintain a formal risk management program that identifies and mitigates known or foreseeable risks of algorithmic discrimination. The program must align with recognized industry standards, with the NIST AI Risk Management Framework cited as an example benchmark. This is a continuing obligation — the program must be maintained, not just created. Small businesses with fewer than 50 employees that do not use proprietary data to train AI systems are exempt per Section 5(1).
Statutory Text
(a) Risk Management Policy: Deployers of high-risk AI systems must implement and maintain a risk management program that: (1) Identifies and mitigates known or foreseeable risks of algorithmic discrimination; (2) Aligns with industry standards, such as the National Institute of Standards and Technology (NIST) AI Risk Management Framework.
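The statute cites the NIST AI RMF only as an example benchmark. One hedged sketch of what tracking alignment could look like, keyed to that framework's four core functions (Govern, Map, Measure, Manage); the activity entries are hypothetical, and nothing in the bill requires this structure.

```python
# Sketch only: map program activities to the NIST AI RMF's four core
# functions to spot coverage gaps. Activities are hypothetical examples.
NIST_AI_RMF_FUNCTIONS = ("GOVERN", "MAP", "MEASURE", "MANAGE")

program = {
    "GOVERN": ["board-approved AI risk policy"],
    "MAP": ["inventory of high-risk systems and the decisions they touch"],
    "MEASURE": ["quarterly disparity metrics per system"],
    "MANAGE": [],  # gap: no documented remediation playbook yet
}

gaps = [fn for fn in NIST_AI_RMF_FUNCTIONS if not program.get(fn)]
print("uncovered RMF functions:", gaps)  # ['MANAGE']
```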
H-02 Non-Discrimination & Bias Assessment · H-02.3 · H-02.8 · H-02.10 · Deployer · Automated Decisionmaking
Chapter 93M, Section 3(b)
Plain Language
Deployers of high-risk AI systems must complete a formal impact assessment annually for each system, covering the system's purpose and intended use, data categories and outputs, and discrimination risks with corresponding mitigation measures. Assessments must also be updated whenever a substantial modification is made to the system, regardless of the annual cycle. The state will provide templates to standardize and reduce the compliance burden. This creates both a periodic (annual) obligation and an event-driven (substantial modification) update requirement.
Statutory Text
(b) Impact Assessments: (1) Deployers must complete an annual impact assessment for each high-risk AI system, including: (i) The purpose and intended use of the system; (ii) Data categories used and outputs generated; (iii) Potential risks of discrimination and mitigation measures. (2) Impact assessments must be updated after any substantial modification to the system. State-provided templates for these assessments will be made available to reduce compliance burdens.
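The dual trigger (annual cycle plus substantial modification) is easy to mis-schedule. A minimal predicate, under the assumption that "annual" means 365 days from the last assessment; the bill does not define the cycle mechanics.

```python
# Sketch only: Section 3(b) makes an assessment due annually AND after
# any substantial modification. "365 days" is an assumed reading of "annual".
from datetime import date, timedelta
from typing import Optional

def assessment_due(last_assessed: date,
                   last_substantial_mod: Optional[date],
                   today: date) -> bool:
    """Due if a year has lapsed or the system changed since the last review."""
    annual_lapse = today - last_assessed >= timedelta(days=365)
    modified_since = (last_substantial_mod is not None
                      and last_substantial_mod > last_assessed)
    return annual_lapse or modified_since

# Modified in June, last assessed in January -> due despite the annual cycle.
print(assessment_due(date(2025, 1, 15), date(2025, 6, 1), date(2025, 7, 1)))  # True
```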
H-01 Human Oversight of Automated Decisions · H-01.1 · H-01.3 · H-01.5 · Deployer · Automated Decisionmaking
Chapter 93M, Section 3(c)
Plain Language
When an AI system materially influences a consequential decision about a consumer, deployers must: (1) notify the consumer that AI was involved, (2) explain the system's purpose and how it influenced the specific decision, and (3) provide a process for the consumer to appeal or correct adverse decisions. This creates three distinct consumer-facing obligations triggered by any consequential decision — covering employment, housing, healthcare, lending, insurance, education, and government services. The explanation must address how the system influenced the particular decision, not just a generic statement that AI was used.
Statutory Text
(c) Consumer Protections: Deployers must: (1) Notify consumers when an AI system materially influences a consequential decision; (2) Provide consumers with: (i) The purpose of the system; (ii) An explanation of how the system influenced the decision; (iii) A process to appeal or correct adverse decisions.
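For illustration, the three consumer-facing elements as one structured notice; the field names and contents are hypothetical, and note that 3(c)(2)(ii) calls for a decision-specific explanation rather than boilerplate.

```python
# Sketch only: Section 3(c) fixes the content of the notice, not its form.
from dataclasses import dataclass, fields

@dataclass
class ConsequentialDecisionNotice:
    system_purpose: str        # 3(c)(2)(i)
    decision_explanation: str  # 3(c)(2)(ii): how the system influenced THIS decision
    appeal_process: str        # 3(c)(2)(iii)

    def is_complete(self) -> bool:
        """Every required element must be non-empty before sending."""
        return all(getattr(self, f.name).strip() for f in fields(self))

notice = ConsequentialDecisionNotice(
    system_purpose="automated tenant-screening score (hypothetical)",
    decision_explanation=("declined: screening score below threshold, "
                          "driven mainly by income-to-rent ratio"),
    appeal_process="contact the address in this notice to request human re-review",
)
assert notice.is_complete()
```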
G-02 Public Transparency & Documentation · G-02.4 · Deployer · Automated Decisionmaking
Chapter 93M, Section 3(d)
Plain Language
Deployers must publicly disclose which types of high-risk AI systems they use and the strategies they employ to mitigate risks. This is a public-facing transparency obligation distinct from the deployer-to-consumer notifications in Section 3(c) — it requires general public disclosure rather than individual notice at the point of decision.
Statutory Text
(d) Transparency: Deployers must publicly disclose the types of high-risk AI systems in use and their risk mitigation strategies.
CP-01 Deceptive & Manipulative AI Conduct · Deployer · Automated Decisionmaking · General Consumer App
Chapter 93M, Section 4(a)
Plain Language
Any corporation operating in Massachusetts that uses AI to target specific consumer groups or influence behavior must disclose: the methods, purposes, and contexts of the targeting; the specific ways AI tools are designed to influence consumer behavior; and details of third-party entities involved in designing, deploying, or operating such systems. Proprietary information is protected under state confidentiality laws. This provision applies broadly to any corporation using AI for targeting or behavioral influence — it is not limited to high-risk AI systems or consequential decisions.
Statutory Text
(a) Disclosure of AI Use: Any corporation operating in Massachusetts that uses artificial intelligence systems or related tools to target specific consumer groups or influence behavior must disclose: (1) Purpose of AI Use: The methods, purposes, and contexts in which AI systems are used to identify or target specific classes of individuals; (2) Behavioral Influence: The specific ways in which AI tools are designed to influence consumer behavior; (3) Third-Party Partnerships: Details of any third-party entities involved in the design, deployment, or operation of AI systems used for targeting or behavioral influence. Proprietary information will be safeguarded and exempt from public disclosure under state confidentiality laws.
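A hedged sketch of the three Section 4(a) disclosure elements as a record with a completeness check; the keys and values are hypothetical, and proprietary detail would be withheld under the bill's confidentiality carve-out.

```python
# Sketch only: the three Section 4(a) disclosure elements, with a
# completeness check before publication. All values are hypothetical.
REQUIRED_KEYS = {"purpose_of_ai_use", "behavioral_influence",
                 "third_party_partnerships"}

disclosure = {
    "purpose_of_ai_use": "lookalike-audience targeting for promotional offers",
    "behavioral_influence": "feed ranking tuned toward longer sessions",
    "third_party_partnerships": "example ad-tech vendor (hypothetical)",
}

missing = REQUIRED_KEYS - {k for k, v in disclosure.items() if v}
assert not missing, f"incomplete Section 4(a) disclosure: {missing}"
```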
CP-01 Deceptive & Manipulative AI Conduct · Deployer · Automated Decisionmaking · General Consumer App
Chapter 93M, Section 4(b)
Plain Language
The disclosures required under Section 4(a) must be presented in two ways: (1) publicly on the corporation's website in an easily accessible and comprehensible format, and (2) embedded in the terms and conditions provided to consumers before any significant interaction with an AI system. This ensures both general public access and individual consumer awareness before engaging with AI-driven targeting or behavioral influence systems.
Statutory Text
(b) Public Disclosure Requirements: Corporations must make these disclosures: (1) Publicly available on their website in a manner that is easily accessible and comprehensible; (2) Included in terms and conditions provided to consumers prior to significant interaction with an AI system.
T-01 AI Identity Disclosure · T-01.1 · Deployer · Automated Decisionmaking · General Consumer App
Chapter 93M, Section 4(c)
Plain Language
Consumers must receive notification in two situations: (1) when AI systems are targeting or influencing them in ways that materially impact their decisions, and (2) when algorithms are used to determine pricing, eligibility, or access to services. This is broader than the consequential-decision notification in Section 3(c) — it covers any material impact on consumer decisions, including pricing and service eligibility determinations that may not rise to the level of a 'consequential decision' as defined in the bill.
Statutory Text
(c) Consumer Notification: Consumers must be notified when: (1) They are being targeted or influenced by AI systems in a way that materially impacts their decisions; (2) Algorithms are used to determine pricing, eligibility, or access to services.
Other · Automated Decisionmaking
Chapter 93M, Section 6(a)
Plain Language
The Attorney General has exclusive enforcement authority over this chapter, and violations are treated as unfair or deceptive trade practices under the existing Chapter 93A framework. This creates no new compliance obligation — it specifies who enforces the law and through what legal mechanism. Notably, it does not create a private right of action.
Statutory Text
(a) Attorney General Authority: The Attorney General has exclusive authority to enforce this Chapter. Violations are deemed unfair or deceptive trade practices under Chapter 93A.
Other · Automated Decisionmaking
Chapter 93M, Section 6(b)
Plain Language
Developers and deployers have an affirmative defense against enforcement if they can show they identified and remedied violations through testing, internal review, or consumer feedback, and that they comply with recognized AI risk management standards. This is a safe harbor that incentivizes self-auditing — it modifies the enforcement of all substantive obligations in the chapter but creates no independent compliance duty.
Statutory Text
(b) Affirmative Defense: A developer or deployer may defend against enforcement if: (1) They identify and remedy violations through testing, internal review, or consumer feedback; (2) They demonstrate compliance with recognized AI risk management standards.
Other · Automated Decisionmaking
Chapter 93M, Section 7
Plain Language
The Attorney General is authorized to issue rules further defining documentation and impact assessment requirements, setting standards for risk management programs and consumer notifications, and designating recognized AI risk management frameworks. This is a delegation of rulemaking authority — it creates no compliance obligation on covered entities until rules are actually promulgated.
Statutory Text
The Attorney General may issue rules to: (1) Define documentation and impact assessment requirements; (2) Set standards for risk management programs and consumer notifications; (3) Designate recognized AI risk management frameworks.
Other · Government · Automated Decisionmaking
Chapter 93M, Section 8
Plain Language
The Attorney General must establish a public education campaign to inform Massachusetts residents about their rights under this chapter and the role of AI in decision-making. This is a government obligation, not a compliance requirement for developers or deployers.
Statutory Text
The Attorney General, in collaboration with relevant state agencies, shall establish a public education campaign to inform residents of their rights under this Chapter and to increase awareness of the role of AI in decision-making processes.